Sample records for large deviation rate

  1. Large deviation theory for the kinetics and energetics of turnover of enzyme catalysis in a chemiostatic flow.

    PubMed

    Das, Biswajit; Gangopadhyay, Gautam

    2018-05-07

    In the framework of large deviation theory, we have characterized the nonequilibrium turnover statistics of enzyme catalysis in a chemiostatic flow with externally controllable parameters, such as the substrate injection rate and mechanical force. For the kinetics of the process, we have shown the fluctuation theorems in terms of the symmetry of the scaled cumulant generating function (SCGF) in the transient and steady-state regimes, and a similar symmetry rule is reflected in the large deviation rate function (LDRF) as a property of the dissipation rate through the boundaries. Large deviation theory also gives the thermodynamic force of a nonequilibrium steady state, as is usually recorded experimentally by single-molecule techniques, which plays a key role in the dynamical symmetry of the SCGF and LDRF. Using some special properties of the Legendre transformation, we have provided a relation between the fluctuations of the fluxes and dissipation rates; among these, the fluctuation of the turnover rate is routinely estimated, but the fluctuation of the dissipation rate is yet to be characterized for small systems. Such an enzymatic reaction flow system can be a very good testing ground for systematically understanding rare events through large deviation theory, which goes beyond the fluctuation theorem and the central limit theorem.
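    The SCGF/LDRF symmetry described in this abstract can be illustrated on a far simpler system than the enzyme model. The sketch below is an assumption-laden toy (a biased continuous-time random walk with jump rates p and q, not the authors' kinetic scheme): it checks the Gallavotti-Cohen symmetry of the SCGF and recovers the mirrored symmetry of the rate function via a numerical Legendre transform.

```python
import numpy as np

# Assumed model (not the enzyme scheme): continuous-time biased random walk,
# jumping right at rate p and left at rate q.  Its SCGF for the net current is
#   lambda(s) = p*(e^s - 1) + q*(e^{-s} - 1),
# and the affinity E = ln(p/q) plays the role of the thermodynamic force.
p, q = 2.0, 0.5
E = np.log(p / q)

def scgf(s):
    return p * (np.exp(s) - 1.0) + q * (np.exp(-s) - 1.0)

# Fluctuation-theorem (Gallavotti-Cohen) symmetry of the SCGF:
s = np.linspace(-3.0, 3.0, 61)
assert np.allclose(scgf(s), scgf(-s - E))

# Numerical Legendre transform gives the large deviation rate function,
#   I(j) = sup_s [ s*j - lambda(s) ],
# and the mirrored symmetry I(-j) - I(j) = j*E follows.
s_grid = np.linspace(-10.0, 10.0, 20001)

def rate(j):
    return np.max(s_grid * j - scgf(s_grid))

for j in (0.5, 1.0, 1.5):
    assert abs((rate(-j) - rate(j)) - j * E) < 1e-3
print("SCGF and rate-function symmetries hold")
```

    The same Legendre-transform bookkeeping is what connects flux fluctuations to dissipation-rate fluctuations in the abstract, though the enzyme model's SCGF is of course more involved than this two-rate example.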

  2. Large deviation theory for the kinetics and energetics of turnover of enzyme catalysis in a chemiostatic flow

    NASA Astrophysics Data System (ADS)

    Das, Biswajit; Gangopadhyay, Gautam

    2018-05-01

    In the framework of large deviation theory, we have characterized the nonequilibrium turnover statistics of enzyme catalysis in a chemiostatic flow with externally controllable parameters, such as the substrate injection rate and mechanical force. For the kinetics of the process, we have shown the fluctuation theorems in terms of the symmetry of the scaled cumulant generating function (SCGF) in the transient and steady-state regimes, and a similar symmetry rule is reflected in the large deviation rate function (LDRF) as a property of the dissipation rate through the boundaries. Large deviation theory also gives the thermodynamic force of a nonequilibrium steady state, as is usually recorded experimentally by single-molecule techniques, which plays a key role in the dynamical symmetry of the SCGF and LDRF. Using some special properties of the Legendre transformation, we have provided a relation between the fluctuations of the fluxes and dissipation rates; among these, the fluctuation of the turnover rate is routinely estimated, but the fluctuation of the dissipation rate is yet to be characterized for small systems. Such an enzymatic reaction flow system can be a very good testing ground for systematically understanding rare events through large deviation theory, which goes beyond the fluctuation theorem and the central limit theorem.

  3. The large deviation function for entropy production: the optimal trajectory and the role of fluctuations

    NASA Astrophysics Data System (ADS)

    Speck, Thomas; Engel, Andreas; Seifert, Udo

    2012-12-01

    We study the large deviation function for the entropy production rate in two driven one-dimensional systems: the asymmetric random walk on a discrete lattice and Brownian motion in a continuous periodic potential. We compare two approaches: using the Donsker-Varadhan theory and using the Freidlin-Wentzell theory. We show that the wings of the large deviation function are dominated by a single optimal trajectory: either in the forward direction (positive rate) or in the backward direction (negative rate). The joining of the two branches at zero entropy production implies a non-differentiability and thus the appearance of a ‘kink’. However, around zero entropy production, many trajectories contribute and thus the ‘kink’ is smeared out.

  4. A General Conditional Large Deviation Principle

    DOE PAGES

    La Cour, Brian R.; Schieve, William C.

    2015-07-18

    Given a sequence of Borel probability measures on a Hausdorff space which satisfy a large deviation principle (LDP), we consider the corresponding sequence of measures formed by conditioning on a set B. If the large deviation rate function I is good and effectively continuous, and the conditioning set has the properties that (1) the closure of the interior of B coincides with the closure of B and (2) I(x) < ∞ for all x in the closure of B, then the sequence of conditional measures satisfies an LDP with the good, effectively continuous rate function I_B, where I_B(x) = I(x) − inf_B I for x in the closure of B and I_B(x) = ∞ otherwise.
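    A toy numerical reading of the conditional rate function I_B(x) = I(x) − inf_B I, using a hypothetical Gaussian-type rate I(x) = x²/2 and conditioning set B = [1, 3] (neither taken from the paper):

```python
import numpy as np

# Hypothetical example: rate function I(x) = x^2/2, conditioning set B = [1, 3].
def I(x):
    return 0.5 * x ** 2

grid_B = np.linspace(1.0, 3.0, 401)
inf_I_on_B = I(grid_B).min()      # here inf_B I = I(1) = 0.5

def I_B(x):
    # Conditional rate: I(x) - inf_B I on the closure of B, +infinity outside.
    return I(x) - inf_I_on_B if 1.0 <= x <= 3.0 else np.inf

assert I_B(1.0) == 0.0            # the conditioned measures concentrate here
assert I_B(2.0) == I(2.0) - 0.5
assert I_B(0.0) == np.inf
```

    Under the conditioned LDP, mass concentrates on the zeros of I_B, here the boundary point x = 1, which is the closest point of B to the unconditioned minimizer.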

  5. A Large Deviations Analysis of Certain Qualitative Properties of Parallel Tempering and Infinite Swapping Algorithms

    DOE PAGES

    Doll, J.; Dupuis, P.; Nyquist, P.

    2017-02-08

    Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures, and at a given swap rate exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity, and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous time jump Markov processes. Using the large deviations rate function and associated stochastic control problems, we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.

  6. Lower Current Large Deviations for Zero-Range Processes on a Ring

    NASA Astrophysics Data System (ADS)

    Chleboun, Paul; Grosskinsky, Stefan; Pizzoferrato, Andrea

    2017-04-01

    We study lower large deviations for the current of totally asymmetric zero-range processes on a ring with concave current-density relation. We use an approach by Jensen and Varadhan, which has previously been applied to exclusion processes, to realize current fluctuations by travelling wave density profiles corresponding to non-entropic weak solutions of the hyperbolic scaling limit of the process. We further establish a dynamic transition, where large deviations of the current below a certain value are no longer typically attained by non-entropic weak solutions, but by condensed profiles, where a non-zero fraction of all the particles accumulates on a single fixed lattice site. This leads to a general characterization of the rate function, which is illustrated by providing detailed results for four generic examples of jump rates, including constant rates, decreasing rates, unbounded sublinear rates and asymptotically linear rates. Our results on the dynamic transition are supported by numerical simulations using a cloning algorithm.

  7. Large deviations in the presence of cooperativity and slow dynamics

    NASA Astrophysics Data System (ADS)

    Whitelam, Stephen

    2018-06-01

    We study simple models of intermittency, involving switching between two states, within the dynamical large-deviation formalism. Singularities appear in the formalism when switching is cooperative or when its basic time scale diverges. In the first case the unbiased trajectory distribution undergoes a symmetry breaking, leading to a change in shape of the large-deviation rate function for a particular dynamical observable. In the second case the symmetry of the unbiased trajectory distribution remains unbroken. Comparison of these models suggests that singularities of the dynamical large-deviation formalism can signal the dynamical equivalent of an equilibrium phase transition but do not necessarily do so.

  8. Entanglement transitions induced by large deviations

    NASA Astrophysics Data System (ADS)

    Bhosale, Udaysinh T.

    2017-12-01

    The probability of large deviations of the smallest Schmidt eigenvalue for random pure states of bipartite systems, denoted as A and B, is computed analytically using a Coulomb gas method. It is shown that this probability, for large N, goes as exp[-βN^{2}Φ(ζ)], where the parameter β is the Dyson index of the ensemble, ζ is the large deviation parameter, while the rate function Φ(ζ) is calculated exactly. Corresponding equilibrium Coulomb charge density is derived for its large deviations. Effects of the large deviations of the extreme (largest and smallest) Schmidt eigenvalues on the bipartite entanglement are studied using the von Neumann entropy. Effect of these deviations is also studied on the entanglement between subsystems 1 and 2, obtained by further partitioning the subsystem A, using the properties of the density matrix's partial transpose ρ_{12}^{Γ}. The density of states of ρ_{12}^{Γ} is found to be close to Wigner's semicircle law with these large deviations. The entanglement properties are captured very well by a simple random matrix model for the partial transpose. The model predicts the entanglement transition across a critical large deviation parameter ζ. Log negativity is used to quantify the entanglement between subsystems 1 and 2. Analytical formulas for it are derived using the simple model. Numerical simulations are in excellent agreement with the analytical results.

  9. Entanglement transitions induced by large deviations.

    PubMed

    Bhosale, Udaysinh T

    2017-12-01

    The probability of large deviations of the smallest Schmidt eigenvalue for random pure states of bipartite systems, denoted as A and B, is computed analytically using a Coulomb gas method. It is shown that this probability, for large N, goes as exp[-βN^{2}Φ(ζ)], where the parameter β is the Dyson index of the ensemble, ζ is the large deviation parameter, while the rate function Φ(ζ) is calculated exactly. Corresponding equilibrium Coulomb charge density is derived for its large deviations. Effects of the large deviations of the extreme (largest and smallest) Schmidt eigenvalues on the bipartite entanglement are studied using the von Neumann entropy. Effect of these deviations is also studied on the entanglement between subsystems 1 and 2, obtained by further partitioning the subsystem A, using the properties of the density matrix's partial transpose ρ_{12}^{Γ}. The density of states of ρ_{12}^{Γ} is found to be close to the Wigner's semicircle law with these large deviations. The entanglement properties are captured very well by a simple random matrix model for the partial transpose. The model predicts the entanglement transition across a critical large deviation parameter ζ. Log negativity is used to quantify the entanglement between subsystems 1 and 2. Analytical formulas for it are derived using the simple model. Numerical simulations are in excellent agreement with the analytical results.

  10. Large Deviations: Advanced Probability for Undergrads

    ERIC Educational Resources Information Center

    Rolls, David A.

    2007-01-01

    In the branch of probability called "large deviations," rates of convergence (e.g. of the sample mean) are considered. The theory makes use of the moment generating function. So, particularly for sums of independent and identically distributed random variables, the theory can be made accessible to senior undergraduates after a first course in…
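    The undergraduate-level flavour of the subject can be seen in a standard textbook coin-flip calculation (not taken from the article): Cramér's theorem says P(S_n/n ≥ a) decays like exp(-n·I(a)), and the exact binomial tail converges to the rate I(a) from above as n grows.

```python
import math

# Cramér's theorem for fair coin flips (p = 1/2): the sample mean's upper
# tail P(S_n/n >= a) decays like exp(-n*I(a)) with
#   I(a) = a*ln(a/p) + (1-a)*ln((1-a)/(1-p)).
p, a = 0.5, 0.7

def rate(a):
    return a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))

def upper_tail(n, k):
    """Exact P(Binomial(n, p) >= k)."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(k, n + 1))

# -ln P / n approaches I(a) from above; polynomial prefactors vanish on the
# exponential scale, which is why convergence is slow but monotone here.
estimates = [-math.log(upper_tail(n, math.ceil(a * n))) / n
             for n in (50, 100, 200)]
print(estimates, "->", rate(a))
```

    Exactly this kind of moment-generating-function computation is what the article argues is within reach after a first probability course.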

  11. Efficiency and large deviations in time-asymmetric stochastic heat engines

    DOE PAGES

    Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...

    2014-10-24

    In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of efficiency probability distributions is similar to those described by Verley et al (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, P(η), based on large deviation statistics of work and heat, that remains very accurate even when P(η) deviates significantly from its large deviation form.

  12. Dispersion in Rectangular Networks: Effective Diffusivity and Large-Deviation Rate Function

    NASA Astrophysics Data System (ADS)

    Tzella, Alexandra; Vanneste, Jacques

    2016-09-01

    The dispersion of a diffusive scalar in a fluid flowing through a network has many applications, including biological flows, porous media, water supply, and urban pollution. Motivated by this, we develop a large-deviation theory that predicts the evolution of the concentration of a scalar released in a rectangular network in the limit of large time t ≫ 1. This theory provides an approximation for the concentration that remains valid for large distances from the center of mass, specifically for distances up to O(t) and thus much beyond the O(t^{1/2}) range where a standard Gaussian approximation holds. A byproduct of the approach is a closed-form expression for the effective diffusivity tensor that governs this Gaussian approximation. Monte Carlo simulations of Brownian particles confirm the large-deviation results and demonstrate their effectiveness in describing the scalar distribution when t is only moderately large.

  13. Current fluctuations in periodically driven systems

    NASA Astrophysics Data System (ADS)

    Barato, Andre C.; Chetrite, Raphael

    2018-05-01

    Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.

  14. Heterogeneity-induced large deviations in activity and (in some cases) entropy production

    NASA Astrophysics Data System (ADS)

    Gingrich, Todd R.; Vaikuntanathan, Suriyanarayanan; Geissler, Phillip L.

    2014-10-01

    We solve a simple model that supports a dynamic phase transition and show conditions for the existence of the transition. Using methods of large deviation theory we analytically compute the probability distribution for activity and entropy production rates of the trajectories on a large ring with a single heterogeneous link. The corresponding joint rate function demonstrates two dynamical phases—one localized and the other delocalized, but the marginal rate functions do not always exhibit the underlying transition. Symmetries in dynamic order parameters influence the observation of a transition, such that distributions for certain dynamic order parameters need not reveal an underlying dynamical bistability. Solution of our model system furthermore yields the form of the effective Markov transition matrices that generate dynamics in which the two dynamical phases are at coexistence. We discuss the implications of the transition for the response of bacterial cells to antibiotic treatment, arguing that even simple models of a cell cycle lacking an explicit bistability in configuration space will exhibit a bistability of dynamical phases.

  15. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    NASA Astrophysics Data System (ADS)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuations of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
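    For readers unfamiliar with the irreversibility measure involved, the information entropy production rate of a stationary Markov chain can be computed directly. The sketch below uses a hypothetical 3-state chain of our own choosing, not the spike-train models of the paper.

```python
import numpy as np

# Information entropy production rate of a stationary Markov chain,
#   sigma = sum_ij pi_i P_ij ln(P_ij / P_ji),
# which is non-negative and vanishes exactly under detailed balance.
# Hypothetical 3-state transition matrix (rows sum to 1):
P = np.array([[0.1, 0.6, 0.3],
              [0.2, 0.3, 0.5],
              [0.5, 0.2, 0.3]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

def entropy_production(P, pi):
    n = len(pi)
    return sum(pi[i] * P[i, j] * np.log(P[i, j] / P[j, i])
               for i in range(n) for j in range(n))

sigma = entropy_production(P, pi)       # > 0: the chain is irreversible

# A symmetric transition matrix satisfies detailed balance with uniform pi,
# so its entropy production is exactly zero.
Q = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
sigma_rev = entropy_production(Q, np.ones(3) / 3)
print(sigma, sigma_rev)
```

    The large deviations techniques reviewed in the paper then quantify how fast empirical estimates of such quantities converge with sampling size.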

  16. The impact of physical and mental tasks on pilot mental workload

    NASA Technical Reports Server (NTRS)

    Berg, S. L.; Sheridan, T. B.

    1986-01-01

    Seven instrument-rated pilots with a wide range of backgrounds and experience levels flew four different scenarios on a fixed-base simulator. The Baseline scenario was the simplest of the four and had few mental and physical tasks. The Activity scenario had many physical but few mental tasks. The Planning scenario had few physical and many mental tasks. A Combined scenario had high mental and physical task loads. The magnitude of each pilot's altitude and airspeed deviations was measured, subjective workload ratings were recorded, and the degree of pilot compliance with assigned memory/planning tasks was noted. Mental and physical performance was a strong function of the manual activity level, but not influenced by the mental task load. High manual task loads resulted in a large percentage of mental errors even under low mental task loads. Although all the pilots gave similar subjective ratings when the manual task load was high, subjective ratings showed greater individual differences with high mental task loads. Altitude or airspeed deviations and subjective ratings were most correlated when the total task load was very high. Although airspeed deviations, altitude deviations, and subjective workload ratings were similar for both low experience and high experience pilots, at very high total task loads, mental performance was much lower for the low experience pilots.

  17. Sub-Scale Analysis of New Large Aircraft Pool Fire-Suppression

    DTIC Science & Technology

    2016-01-01

    Discrete ordinates radiation and a single-step Khan and Greeves soot model provided the radiation and soot interaction. Agent spray dynamics were... Notable differences observed showed a modeled increase in the mockup surface heat-up rate as well as a modeled decreased rate of soot production... [488 K; suppression started] Large deviation between sensors due to sensor alignment challenges and asymmetric fuel surface ignition; unremarkable...

  18. Rare behavior of growth processes via umbrella sampling of trajectories

    NASA Astrophysics Data System (ADS)

    Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen

    2018-03-01

    We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s-ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.

  19. Excitation laser energy dependence of surface-enhanced fluorescence showing plasmon-induced ultrafast electronic dynamics in dye molecules

    NASA Astrophysics Data System (ADS)

    Itoh, Tamitake; Yamamoto, Yuko S.; Tamaru, Hiroharu; Biju, Vasudevanpillai; Murase, Norio; Ozaki, Yukihiro

    2013-06-01

    We find unique properties accompanying surface-enhanced fluorescence (SEF) from dye molecules adsorbed on Ag nanoparticle aggregates, which generate surface-enhanced Raman scattering. The properties are observed in the excitation laser energy dependence of SEF after excluding plasmonic spectral modulation in SEF. The unique properties are large blue shifts of fluorescence spectra, deviation of the ratios between anti-Stokes and Stokes SEF intensities from those of normal fluorescence, super-broadening of Stokes spectra, and a return to the original fluorescence under lower-energy excitation. We elucidate that these properties are induced by electromagnetic enhancement of radiative decay rates exceeding the vibrational relaxation rates within an electronic excited state, which suggests that molecular electronic dynamics in strong plasmonic fields can deviate largely from that in free space.

  20. Objective Motion Cueing Criteria Investigation Based on Three Flight Tasks

    NASA Technical Reports Server (NTRS)

    Zaal, Petrus M. T.; Schroeder, Jeffery A.; Chung, William W.

    2015-01-01

    This paper intends to help establish fidelity criteria to accompany the simulator motion system diagnostic test specified by the International Civil Aviation Organization. Twelve airline transport pilots flew three tasks in the NASA Vertical Motion Simulator under four different motion conditions. The experiment used three different hexapod motion configurations, each with a different tradeoff between motion filter gain and break frequency, and one large motion configuration that utilized as much of the simulator's motion space as possible. The motion condition significantly affected: 1) pilot motion fidelity ratings, and sink rate and lateral deviation at touchdown for the approach and landing task, 2) pilot motion fidelity ratings, roll deviations, maximum pitch rate, and number of stick shaker activations in the stall task, and 3) heading deviation after an engine failure in the takeoff task. Significant differences in pilot-vehicle performance were used to define initial objective motion cueing criteria boundaries. These initial fidelity boundaries show promise but need refinement.

  1. A global probabilistic tsunami hazard assessment from earthquake sources

    USGS Publications Warehouse

    Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana

    2017-01-01

    Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate.

  2. Experimental measurement of the orbital paths of particles sedimenting within a rotating viscous fluid as influenced by gravity

    NASA Technical Reports Server (NTRS)

    Wolf, David A.; Schwarz, Ray P.

    1992-01-01

    Measurements were taken of the path of a simulated typical tissue segment or 'particle' within a rotating fluid as a function of gravitational strength, fluid rotation rate, particle sedimentation rate, and particle initial position. Parameters were examined within the useful range for tissue culture in the NASA rotating wall culture vessels. The particle moves along a nearly circular path through the fluid (as observed from the rotating reference frame of the fluid) at the same speed as its linear terminal sedimentation speed for the external gravitational field. This gravitationally induced motion causes an increasing deviation of the particle from its original position within the fluid for a decreased rotational rate, for a more rapidly sedimenting particle, and for an increased gravitational strength. Under low gravity conditions (less than 0.1 G), the particle's motion through the fluid and its deviation from its original position become negligible. Under unit gravity conditions, large distortions (greater than 0.25 inch) occur even for particles of slow sedimentation rate (less than 1.0 cm/sec). The particle's motion is nearly independent of the particle's initial position. Comparison with mathematically predicted particle paths shows that a significant error in the mathematically predicted path occurs for large particle deviations. This results from a geometric approximation and numerically accumulating error in the mathematical technique.

  3. Does standard deviation matter? Using "standard deviation" to quantify security of multistage testing.

    PubMed

    Wang, Chun; Zheng, Yi; Chang, Hua-Hua

    2014-01-01

    With the advent of web-based technology, online testing is becoming a mainstream mode in large-scale educational assessments. Most online tests are administered continuously in a testing window, which may pose test security problems because examinees who take the test earlier may share information with those who take the test later. Researchers have proposed various statistical indices to assess test security, and the most often used index is the average test-overlap rate, which was further generalized to the item pooling index (Chang & Zhang, 2002, 2003). These indices, however, are all defined as means (that is, the expected proportion of common items among examinees), and they were originally proposed for computerized adaptive testing (CAT). Recently, multistage testing (MST) has become a popular alternative to CAT. The unique features of MST make it important to report not only the mean, but also the standard deviation (SD) of the test overlap rate, as we advocate in this paper. The standard deviation of the test overlap rate adds important information to the test security profile, because for the same mean, a large SD reflects that certain groups of examinees share more common items than other groups. In this study, we analytically derived the lower bounds of the SD under MST, with the results under CAT as a benchmark. It is shown that when the mean overlap rate is the same between MST and CAT, the SD of test overlap tends to be larger in MST. A simulation study was conducted to provide empirical evidence. We also compared the security of MST under the single-pool versus the multiple-pool designs; both analytical and simulation studies show that the non-overlapping multiple-pool design will slightly increase the security risk.

  4. Large-deviation theory for diluted Wishart random matrices

    NASA Astrophysics Data System (ADS)

    Castillo, Isaac Pérez; Metz, Fernando L.

    2018-03-01

    Wishart random matrices with a sparse or diluted structure are ubiquitous in the processing of large datasets, with applications in physics, biology, and economics. In this work, we develop a theory for the eigenvalue fluctuations of diluted Wishart random matrices based on the replica approach of disordered systems. We derive an analytical expression for the cumulant generating function of the number of eigenvalues I_N(x) smaller than x ∈ R_+, from which all cumulants of I_N(x) and the rate function Ψ_x(k) controlling its large-deviation probability Prob[I_N(x) = kN] ≍ e^{-NΨ_x(k)} follow. Explicit results for the mean value and the variance of I_N(x), its rate function, and its third cumulant are discussed and thoroughly compared to numerical diagonalization, showing very good agreement. The present work establishes the theoretical framework put forward in a recent letter [Phys. Rev. Lett. 117, 104101 (2016), 10.1103/PhysRevLett.117.104101] as an exact and compelling approach to deal with eigenvalue fluctuations of sparse random matrices.
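    A brute-force Monte Carlo sketch of the observable studied here (with illustrative parameters of our own choosing, not the paper's replica computation): sample diluted Wishart matrices W = XXᵀ/M with sparse Gaussian X, and record the fluctuations of the number of eigenvalues I_N(x) below a threshold x.

```python
import numpy as np

# Illustrative ensemble (assumed parameters): N x M matrix X whose entries
# are Gaussian with probability c/M and zero otherwise, W = X X^T / M.
rng = np.random.default_rng(1)
N, M, c = 60, 120, 4.0          # c = mean number of nonzeros per column

def sample_IN(x, trials=200):
    """Draw many diluted Wishart matrices; return I_N(x) for each draw."""
    counts = []
    for _ in range(trials):
        mask = rng.random((N, M)) < c / M
        X = mask * rng.standard_normal((N, M))
        W = X @ X.T / M
        counts.append(int(np.sum(np.linalg.eigvalsh(W) < x)))
    return np.array(counts)

counts = sample_IN(x=0.1)
print("mean fraction:", counts.mean() / N, "std of fraction:", counts.std() / N)
```

    Direct sampling like this only sees the typical fluctuations of I_N(x)/N; the rate function Ψ_x(k) of the paper quantifies the exponentially rare ones, which are inaccessible to naive Monte Carlo.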

  5. Sanov and central limit theorems for output statistics of quantum Markov chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horssen, Merlijn van, E-mail: merlijn.vanhorssen@nottingham.ac.uk; Guţă, Mădălin, E-mail: madalin.guta@nottingham.ac.uk

    2015-02-15

    In this paper, we consider the statistics of repeated measurements on the output of a quantum Markov chain. We establish a large deviations result analogous to Sanov’s theorem for the multi-site empirical measure associated to finite sequences of consecutive outcomes of a classical stochastic process. Our result relies on the construction of an extended quantum transition operator (which keeps track of previous outcomes) in terms of which we compute moment generating functions, and whose spectral radius is related to the large deviations rate function. As a corollary to this, we obtain a central limit theorem for the empirical measure. Such higher level statistics may be used to uncover critical behaviour such as dynamical phase transitions, which are not captured by lower level statistics such as the sample mean. As a step in this direction, we give an example of a finite system whose level-1 (empirical mean) rate function is independent of a model parameter while the level-2 (empirical measure) rate is not.

  6. Derivation of an analytic expression for the error associated with the noise reduction rating

    NASA Astrophysics Data System (ADS)

    Murphy, William J.

    2005-04-01

    Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects has a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.

  7. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    NASA Astrophysics Data System (ADS)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models of interest include classical Bernoulli bond and site percolation as well as models that exhibit long-range correlations, like the random cluster model, random interlacements and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk, and both rate functions admit explicit variational formulas. The main difficulty in our setup lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  9. Generic dynamical phase transition in one-dimensional bulk-driven lattice gases with exclusion

    NASA Astrophysics Data System (ADS)

    Lazarescu, Alexandre

    2017-06-01

    Dynamical phase transitions are crucial features of the fluctuations of statistical systems, corresponding to boundaries between qualitatively different mechanisms of maintaining unlikely values of dynamical observables over long periods of time. They manifest themselves in the form of non-analyticities in the large deviation function of those observables. In this paper, we look at bulk-driven exclusion processes with open boundaries. It is known that the standard asymmetric simple exclusion process exhibits a dynamical phase transition in the large deviations of the current of particles flowing through it. That phase transition has been described thanks to specific calculation methods relying on the model being exactly solvable, but more general methods have also been used to describe the extreme large deviations of that current, far from the phase transition. We extend those methods to a large class of models based on the ASEP, where we add arbitrary spatial inhomogeneities in the rates and short-range potentials between the particles. We show that, as for the regular ASEP, the large deviation function of the current scales differently with the size of the system if one considers very high or very low currents, pointing to the existence of a dynamical phase transition between those two regimes: high-current large deviations are extensive in the system size, and the typical states associated with them are Coulomb gases, which are highly correlated; low-current large deviations do not depend on the system size, and the typical states associated with them are anti-shocks, consistent with hydrodynamic behaviour. Finally, we illustrate our results numerically on a simple example, and we interpret the transition in terms of the current pushing beyond its maximal hydrodynamic value, as well as relate it to the appearance of Tracy-Widom distributions in the relaxation statistics of such models.
    This article is part of the Emerging Talents collection, which features invited work from the best early-career researchers working within the scope of J. Phys. A. This project is part of the Journal of Physics series’ 50th anniversary celebrations in 2017. Alexandre Lazarescu was selected by the Editorial Board of J. Phys. A as an Emerging Talent.

  10. Effects of sales promotion on smoking among U.S. ninth graders.

    PubMed

    Redmond, W H

    1999-03-01

    The purpose of this study was to examine the association between tobacco marketing efforts and daily cigarette smoking by adolescents. This was a longitudinal study of uptake of smoking on a daily basis with smoking data from the Monitoring the Future project. Diffusion modeling was used to generate expected rates of daily smoking initiation, which were compared with actual rates. Study data were from a national survey, administered annually from 1978 through 1995. Between 4,416 and 6,099 high school seniors participated per year, for a total of 94,652. The main outcome measure was a deviation score based on expected rates from diffusion modeling vs actual rates of initiation of daily use of cigarettes by ninth graders. Annual data on cigarette marketing expenditures were reported by the Federal Trade Commission. The deviation scores of expected vs actual rates of smoking initiation for ninth graders were correlated with annual changes in marketing expenditures. The correlation between sales promotion expenditures and the deviation score in daily smoking initiation was large (r = 0.769) and statistically significant (P = 0.009) in the 1983-1992 period. Correlations between sales promotion and smoking initiation were not statistically significant in 1978-1982. Correlations between advertising expenditures and smoking initiation were not significant in either period. In years of high promotional expenditures, the rate of daily smoking initiation among ninth graders was higher than expected from diffusion model predictions. Large promotional pushes by cigarette marketers in the 1980s and 1990s appear to be linked with increased levels of daily smoking initiation among ninth graders. Copyright 1999 American Health Foundation and Academic Press.

  11. Net Reaction Rate and Neutrino Cooling Rate for the Urca Process in Departure from Chemical Equilibrium in the Crust of Fast-accreting Neutron Stars

    NASA Astrophysics Data System (ADS)

    Wang, Wei-Hua; Huang, Xi; Zheng, Xiao-Ping

    We discuss the effect of compression on Urca shells in the ocean and crust of accreting neutron stars, especially in superbursting sources. We find that Urca shells may deviate from chemical equilibrium in neutron stars that accrete at several tenths of the local Eddington accretion rate. The deviation depends on the energy threshold of the parent and daughter nuclei, the transition strength, the temperature, and the local accretion rate. In a typical crust model of accreting neutron stars, the chemical departures range from a few tenths of kBT to tens of kBT for various Urca pairs. If the Urca shell can exist in crusts of accreting neutron stars, compression may enhance the net neutrino cooling rate by a factor of about 1-2 relative to the neutrino emissivity in chemical equilibrium. For some cases, such as Urca pairs with small energy thresholds and/or weak transition strength, the large chemical departure may result in net heating rather than cooling, although the released heat can be small. Strong Urca pairs in the deep crust are difficult to drive out of chemical equilibrium even in neutron stars accreting at the local Eddington accretion rate.

  12. A model of curved saccade trajectories: spike rate adaptation in the brainstem as the cause of deviation away.

    PubMed

    Kruijne, Wouter; Van der Stigchel, Stefan; Meeter, Martijn

    2014-03-01

    The trajectory of saccades to a target is often affected whenever there is a distractor in the visual field. Distractors can cause a saccade to deviate towards their location or away from it. The oculomotor mechanisms that produce deviation towards distractors have been thoroughly explored in behavioral, neurophysiological and computational studies. The mechanisms underlying deviation away, on the other hand, remain unclear. Behavioral findings suggest a mechanism of spatially focused, top-down inhibition in a saccade map, and deviation away has become a tool to investigate such inhibition. However, this inhibition hypothesis has little neuroanatomical or neurophysiological support, and recent findings go against it. Here, we propose that deviation away results from an unbalanced saccade drive from the brainstem, caused by spike rate adaptation in brainstem long-lead burst neurons. Adaptation to stimulation in the direction of the distractor results in an unbalanced drive away from it. An existing model of the saccade system was extended with this theory. The resulting model simulates a wide range of findings on saccade trajectories, including findings that have classically been interpreted to support inhibition views. Furthermore, the model replicated the effect of saccade latency on deviation away, but predicted this effect would be absent with large (400 ms) distractor-target onset asynchrony. This prediction was confirmed in an experiment, which demonstrates that the theory both explains classical findings on saccade trajectories and predicts new findings. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Variations in rotation rate and polar motion of a non-hydrostatic Titan

    NASA Astrophysics Data System (ADS)

    Coyette, Alexis; Baland, Rose-Marie; Van Hoolst, Tim

    2018-06-01

    Observation of the rotation of synchronously rotating satellites can help to probe their interior. Previous studies mostly assume that these large icy satellites are in hydrostatic equilibrium, although several measurements indicate that they deviate from such a state. Here we investigate the effect of non-hydrostatic equilibrium and of flow in the subsurface ocean on the rotation of Titan. We consider the variations in rotation rate and the polar motion due to (1) the gravitational force exerted by Saturn at orbital period and (2) exchanges of angular momentum between the seasonally varying atmosphere and the solid surface. The deviation of the mass distribution from hydrostaticity can significantly increase the diurnal libration and decrease the amplitude of the seasonal libration. The effect of the non-hydrostatic mass distribution is less important for polar motion, which is more sensitive to flow in the subsurface ocean. By including a large spectrum of atmospheric perturbations, the smaller than synchronous rotation rate measured by Cassini in the 2004-2009 period (Meriggiola et al., 2016) could be explained by the atmospheric forcing. If our interpretation is correct, we predict a larger than synchronous rotation rate in the 2009-2014 period.

  14. Large deviation function for a driven underdamped particle in a periodic potential

    NASA Astrophysics Data System (ADS)

    Fischer, Lukas P.; Pietzonka, Patrick; Seifert, Udo

    2018-02-01

    Employing large deviation theory, we explore current fluctuations of underdamped Brownian motion for the paradigmatic example of a single particle in a one-dimensional periodic potential. Two different approaches to the large deviation function of the particle current are presented. First, we derive an explicit expression for the large deviation functional of the empirical phase space density, which replaces the level 2.5 functional used for overdamped dynamics. Using this approach, we obtain several bounds on the large deviation function of the particle current. We compare these to bounds for overdamped dynamics that have recently been derived, motivated by the thermodynamic uncertainty relation. Second, we provide a method to calculate the large deviation function via the cumulant generating function. We use this method to assess the tightness of the bounds in a numerical case study for a cosine potential.

  15. A large deviations principle for stochastic flows of viscous fluids

    NASA Astrophysics Data System (ADS)

    Cipriano, Fernanda; Costa, Tiago

    2018-04-01

    We study the well-posedness of a stochastic differential equation on the two-dimensional torus T2, driven by an infinite dimensional Wiener process with drift in the Sobolev space L2(0, T; H1(T2)). The solution corresponds to a stochastic Lagrangian flow in the sense of DiPerna-Lions. By taking into account that the motion of a viscous incompressible fluid on the torus can be described through a suitable stochastic differential equation of the previous type, we study the inviscid limit. By establishing a large deviations principle, we show that, as the viscosity goes to zero, the Lagrangian stochastic Navier-Stokes flow approaches the deterministic Euler Lagrangian flow with an exponential rate function.

  16. Exact Large-Deviation Statistics for a Nonequilibrium Quantum Spin Chain

    NASA Astrophysics Data System (ADS)

    Žnidarič, Marko

    2014-01-01

    We consider a one-dimensional XX spin chain in a nonequilibrium setting with a Lindblad-type boundary driving. By calculating the large-deviation rate function in the thermodynamic limit, a generalization of the free energy to a nonequilibrium setting, we obtain the complete distribution of the current, including closed expressions for lower-order cumulants. We also identify two phase-transition-like behaviors: in the thermodynamic limit, at which the current probability distribution becomes discontinuous, and at maximal driving, when the range of possible current values changes discontinuously. In the thermodynamic limit the current has finite upper and lower bounds. We also explicitly confirm the nonequilibrium fluctuation relation and show that the current distribution is the same under the mapping of the coupling strength Γ→1/Γ.

  17. The explicit form of the rate function for semi-Markov processes and its contractions

    NASA Astrophysics Data System (ADS)

    Sughiyama, Yuki; Kobayashi, Tetsuya J.

    2018-03-01

    We derive the explicit form of the rate function for semi-Markov processes. Here, the ‘random time change trick’ plays an essential role. Also, by applying the contraction principle of large deviation theory to this explicit form, we show that the fluctuation theorem (Gallavotti-Cohen symmetry) holds for semi-Markov cases. Furthermore, we show that our rate function is an extension of the level 2.5 rate function for Markov processes to semi-Markov cases.

  18. Effects of expected-value information and display format on recognition of aircraft subsystem abnormalities

    NASA Technical Reports Server (NTRS)

    Palmer, Michael T.; Abbott, Kathy H.

    1994-01-01

    This study identifies improved methods to present system parameter information for detecting abnormal conditions and identifying system status. Two workstation experiments were conducted. The first experiment determined whether including expected-value-range information in traditional parameter display formats affected subject performance. The second experiment determined whether a nontraditional parameter display format, which presented relative deviation from the expected value, was better than traditional formats with expected-value ranges included. The inclusion of expected-value-range information in traditional parameter formats was found to have essentially no effect. However, subjective results indicated support for including this information. The nontraditional column deviation parameter display format resulted in significantly fewer errors compared with traditional formats with expected-value ranges included. In addition, error rates for the column deviation parameter display format remained stable as the scenario complexity increased, whereas error rates for the traditional parameter display formats with expected-value ranges increased. Subjective results also indicated that the subjects preferred this new format and thought that their performance was better with it. The column deviation parameter display format is recommended for display applications that require rapid recognition of out-of-tolerance conditions, especially for a large number of parameters.

  19. Large deviations of a long-time average in the Ehrenfest urn model

    NASA Astrophysics Data System (ADS)

    Meerson, Baruch; Zilber, Pini

    2018-05-01

    Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the average number of balls in one urn over a time T takes any specified value aN, where 0 ≤ a ≤ 1. For a long observation time, a Donsker–Varadhan large deviation principle holds: the probability decays exponentially in T, with a rate function that depends on a and on additional parameters of the model. We calculate the rate function exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for the rate function is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for large N. The WKB method also uncovers the (very simple) time history of the system which dominates the contributions of different time histories to the probability.
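
    The tilted-generator route to such rate functions (distinct from the Donsker-Varadhan and WKB calculations of this record) can be sketched for the non-interacting case with K = 2 urns and unit jump rates, where independence of the balls gives the exact rate function N(1 - 2*sqrt(a(1-a))) for the time-averaged occupation fraction a. The system size below is arbitrary.

```python
import numpy as np

N = 20                                  # balls in K = 2 urns, unit jump rate per ball
n = np.arange(N + 1)                    # balls in the observed urn

# Generator of the birth-death chain: n -> n+1 at rate N - n, n -> n-1 at rate n.
L = np.zeros((N + 1, N + 1))
L[n[:-1] + 1, n[:-1]] = N - n[:-1]
L[n[1:] - 1, n[1:]] = n[1:]
L[n, n] = -L.sum(axis=0)                # columns sum to zero (probability conservation)

def scgf(s):
    # Scaled cumulant generating function: largest eigenvalue of the
    # generator tilted by the time-averaged occupation fraction n/N.
    return np.linalg.eigvals(L + np.diag(s * n / N)).real.max()

# Rate function by numerical Legendre transform: I(a) = sup_s [s*a - scgf(s)].
s_grid = np.linspace(-200.0, 200.0, 4001)
mu_grid = np.array([scgf(s) for s in s_grid])
for a in (0.5, 0.7, 0.9):
    I_num = (s_grid * a - mu_grid).max()
    I_exact = N * (1.0 - 2.0 * np.sqrt(a * (1.0 - a)))   # independent-ball result
    print(f"a = {a}: I_numeric = {I_num:.4f}, I_exact = {I_exact:.4f}")
```

    The numerical Legendre transform reproduces the independent-ball formula, consistent with the record's remark that the non-interacting rate function is exact for any N.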

  20. Explicit Computations of Instantons and Large Deviations in Beta-Plane Turbulence

    NASA Astrophysics Data System (ADS)

    Laurie, J.; Bouchet, F.; Zaboronski, O.

    2012-12-01

    We use a path integral formalism and instanton theory in order to make explicit analytical predictions about large deviations and rare events in beta-plane turbulence. The path integral formalism is a concise way to get large deviation results in dynamical systems forced by random noise. In the simplest cases, it leads to the same results as the Freidlin-Wentzell theory, but it has a wider range of applicability. This approach is, however, usually extremely limited, due to the complexity of the theoretical problems. As a consequence it provides explicit results in a fairly limited number of models, often extremely simple ones with only a few degrees of freedom. Few exceptions exist outside the realm of equilibrium statistical physics. We will show that the barotropic model of beta-plane turbulence is one of these non-equilibrium exceptions. We describe sets of explicit solutions to the instanton equation, and precise derivations of the action functional (or large deviation rate function). The reason why such exact computations are possible is related to the existence of hidden symmetries and conservation laws for the instanton dynamics. We outline several applications of this approach. For instance, we compute explicitly the very low probability to observe flows with an energy much larger or smaller than the typical one. Moreover, we consider regimes for which the system has multiple attractors (corresponding to different numbers of alternating jets), and discuss the computation of transition probabilities between two such attractors. These extremely rare events are of the utmost importance as the dynamics undergo qualitative macroscopic changes during such transitions.

  1. Density Large Deviations for Multidimensional Stochastic Hyperbolic Conservation Laws

    NASA Astrophysics Data System (ADS)

    Barré, J.; Bernardin, C.; Chetrite, R.

    2018-02-01

    We investigate the density large deviation function for a multidimensional conservation law in the vanishing viscosity limit, when the probability concentrates on weak solutions of a hyperbolic conservation law. When the mobility and diffusivity matrices are proportional, i.e. an Einstein-like relation is satisfied, the problem has been solved in Bellettini and Mariani (Bull Greek Math Soc 57:31-45, 2010). When this proportionality does not hold, we compute explicitly the large deviation function for a step-like density profile, and we show that the associated optimal current has a nontrivial structure. We also derive a lower bound for the large deviation function, valid for a more general weak solution, and leave the general large deviation function upper bound as a conjecture.

  2. Large deviation analysis of a simple information engine

    NASA Astrophysics Data System (ADS)

    Maitland, Michael; Grosskinsky, Stefan; Harris, Rosemary J.

    2015-11-01

    Information thermodynamics provides a framework for studying the effect of feedback loops on entropy production. It has enabled the understanding of novel thermodynamic systems such as the information engine, which can be seen as a modern version of "Maxwell's Dæmon," whereby a feedback controller processes information gained by measurements in order to extract work. Here, we analyze a simple model of such an engine that uses feedback control based on measurements to obtain negative entropy production. We focus on the distribution and fluctuations of the information obtained by the feedback controller. Significantly, our model allows an analytic treatment for a two-state system with exact calculation of the large deviation rate function. These results suggest an approximate technique for larger systems, which is corroborated by simulation data.

  3. Fluctuation theorems for discrete kinetic models of molecular motors

    NASA Astrophysics Data System (ADS)

    Faggionato, Alessandra; Silvestri, Vittoria

    2017-04-01

    Motivated by discrete kinetic models for non-cooperative molecular motors on periodic tracks, we consider random walks (not necessarily Markovian) on quasi-one-dimensional (1d) lattices, obtained by gluing several copies of a fundamental graph in a linear fashion. We show that, for a suitable class of quasi-1d lattices, the large deviation rate function associated with the position of the walker satisfies a Gallavotti-Cohen symmetry for any choice of the dynamical parameters defining the stochastic walk. This class includes the linear model considered in Lacoste et al (2008 Phys. Rev. E 78 011915). We also derive fluctuation theorems for the time-integrated cycle currents and discuss how the matrix approach of Lacoste et al (2008 Phys. Rev. E 78 011915) can be extended to derive the above Gallavotti-Cohen symmetry for any Markov random walk on Z with periodic jump rates. Finally, we review in the present context some large deviation results of Faggionato and Silvestri (2017 Ann. Inst. Henri Poincaré 53 46-78) and give some specific examples with explicit computations.
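
    For the simplest Markov case covered by the matrix approach mentioned above, a biased continuous-time walk on Z with uniform jump rates, the Gallavotti-Cohen symmetry of the current's scaled cumulant generating function can be verified in closed form; the rates below are arbitrary illustrative values, not parameters from the paper.

```python
import numpy as np

# Continuous-time random walk on Z: jump right at rate p, left at rate q.
p, q = 2.0, 0.5
E = np.log(p / q)        # affinity driving the walk out of equilibrium

def scgf(s):
    # Scaled cumulant generating function of the integrated current.
    return p * (np.exp(s) - 1.0) + q * (np.exp(-s) - 1.0)

# Gallavotti-Cohen symmetry: scgf(s) == scgf(-s - E) for all s,
# equivalent to I(-j) - I(j) = E * j for the rate function.
s_vals = np.linspace(-3.0, 3.0, 13)
assert np.allclose(scgf(s_vals), scgf(-s_vals - E))
print("Gallavotti-Cohen symmetry verified: scgf(s) == scgf(-s - E)")
```

    The symmetry follows because p*exp(-s-E) = q*exp(-s) and q*exp(s+E) = p*exp(s), so the two exponential terms simply trade places.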

  4. From the Law of Large Numbers to Large Deviation Theory in Statistical Physics: An Introduction

    NASA Astrophysics Data System (ADS)

    Cecconi, Fabio; Cencini, Massimo; Puglisi, Andrea; Vergni, Davide; Vulpiani, Angelo

    This contribution aims at introducing the topics of this book. We start with a brief historical excursion on the developments from the law of large numbers to the central limit theorem and large deviations theory. The same topics are then presented using the language of probability theory. Finally, some applications of large deviations theory in physics are briefly discussed through examples taken from statistical mechanics, dynamical and disordered systems.

  5. Southern San Andreas Fault seismicity is consistent with the Gutenberg-Richter magnitude-frequency distribution

    USGS Publications Warehouse

    Page, Morgan T.; Felzer, Karen

    2015-01-01

    The magnitudes of any collection of earthquakes nucleating in a region are generally observed to follow the Gutenberg-Richter (G-R) distribution. On some major faults, however, paleoseismic rates are higher than a G-R extrapolation from the modern rate of small earthquakes would predict. This, along with other observations, led to the formulation of the characteristic earthquake hypothesis, which holds that the rate of small to moderate earthquakes is permanently low on large faults relative to the large-earthquake rate (Wesnousky et al., 1983; Schwartz and Coppersmith, 1984). We examine the rate difference between recent small to moderate earthquakes on the southern San Andreas fault (SSAF) and the paleoseismic record, hypothesizing that the discrepancy can be explained as a rate change in time rather than a deviation from G-R statistics. We find that with reasonable assumptions, the rate changes necessary to bring the small and large earthquake rates into alignment agree with the size of rate changes seen in epidemic-type aftershock sequence (ETAS) modeling, where aftershock triggering of large earthquakes drives strong fluctuations in the seismicity rates for earthquakes of all magnitudes. The necessary rate changes are also comparable to rate changes observed for other faults worldwide. These results are consistent with paleoseismic observations of temporally clustered bursts of large earthquakes on the SSAF and the absence of M ≥ 7 earthquakes on the SSAF since 1857.
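
    The size of the discrepancy at issue can be made concrete with the G-R relation log10 N(≥M) = a - bM; the rates below are illustrative placeholders, not measured SSAF values.

```python
# Gutenberg-Richter extrapolation: each unit of magnitude reduces the
# event rate by a factor of 10^b.
b = 1.0
rate_m3 = 10.0                             # hypothetical rate of M>=3 events per year
rate_m7 = rate_m3 * 10.0 ** (-b * (7 - 3))
print(f"Extrapolated M>=7 rate: {rate_m7} per year (one every {1 / rate_m7:.0f} years)")

# If paleoseismic data instead suggested one M>=7 event every 150 years,
# the ratio is the temporal rate change needed to reconcile the records.
paleo_rate = 1.0 / 150.0
print(f"Required rate-change factor: {paleo_rate / rate_m7:.1f}x")
```

    With these hypothetical numbers the extrapolated rate is one M ≥ 7 event per thousand years, so reconciling the two records requires a several-fold rate change rather than a permanent departure from G-R statistics.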

  6. Rates of speciation in the fossil record

    NASA Technical Reports Server (NTRS)

    Sepkoski, J. J. Jr; Sepkoski JJ, J. r. (Principal Investigator)

    1998-01-01

    Data from palaeontology and biodiversity suggest that the global biota should produce an average of three new species per year. However, the fossil record shows large variation around this mean. Rates of origination have declined through the Phanerozoic. This appears to have been largely a function of sorting among higher taxa (especially classes), which exhibit characteristic rates of speciation (and extinction) that differ among them by nearly an order of magnitude. Secular decline of origination rates is hardly constant, however; many positive deviations reflect accelerated speciation during rebounds from mass extinctions. There has also been general decline in rates of speciation within major taxa through their histories, although rates have tended to remain higher among members in tropical regions. Finally, pulses of speciation appear sometimes to be associated with climate change, although moderate oscillations of climate do not necessarily promote speciation despite forcing changes in species' geographical ranges.

  7. Deformation induced dynamic recrystallization and precipitation strengthening in an Mg−Zn−Mn alloy processed by high strain rate rolling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Jimiao; Song, Min

    2016-11-15

    The microstructure of a high strain-rate rolled Mg−Zn−Mn alloy was investigated by transmission electron microscopy to understand the relationship between the microstructure and mechanical properties. The results indicate that: (1) a bimodal microstructure consisting of fine dynamically recrystallized grains and largely deformed grains was formed; (2) a large number of dynamic precipitates, including plate-like MgZn{sub 2} phase, spherical MgZn{sub 2} phase and spherical Mn particles, are distributed uniformly in the grains; (3) the major facets of many plate-like MgZn{sub 2} precipitates deviated several to tens of degrees (3°–30°) from the matrix basal plane. It has been shown that the high strength of the alloy is attributed to the formation of the bimodal microstructure, dynamic precipitation, and the interaction between the dislocations and the dynamic precipitates. - Highlights: •A bimodal microstructure was formed in a high strain-rate rolled Mg−Zn−Mn alloy. •Plate-like MgZn{sub 2}, spherical MgZn{sub 2} and spherical Mn phases were observed. •The major facet of the plate-like MgZn{sub 2} deviated from the matrix basal plane.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavignet, A.A.; Sobey, I.J.

    At present, drilling of highly deviated wells is complicated by the possibility of the formation of a thick bed of cuttings at low flow rates. The bed of cuttings can cause large torque loads on the drill pipe and can fall back around the bit, resulting in a stuck bit. Previous investigators have made experimental observations which show that bed formation is characterized by a relatively rapid increase in bed thickness as either the flow rate is lowered past some critical value, or as the deviation from the vertical increases. The authors present a simple model which explains these observations. The model shows that the bed thickness is controlled by the interfacial stress caused by the different velocities of the mud and the cuttings layer. The results confirm previous observations that bed formation is relatively insensitive to mud rheology. Eccentricity of the drill pipe in the hole is an important factor. The model is used to determine the critical flow rate needed to prevent the formation of a thick bed of cuttings as the inclination, hole size and rate of penetration are varied.

  9. Transport Coefficients from Large Deviation Functions

    NASA Astrophysics Data System (ADS)

    Gao, Chloe; Limmer, David

    2017-10-01

    We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying only on equilibrium fluctuations, and is statistically efficient, employing trajectory-based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energy. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.
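
    For comparison, a minimal Green-Kubo calculation of the kind used as the baseline here can be sketched for an Ornstein-Uhlenbeck velocity process, whose diffusion coefficient kT/gamma is known exactly; all parameters below are arbitrary and the example is a sketch, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ornstein-Uhlenbeck velocity: dv = -gamma*v dt + sqrt(2*gamma*kT) dW.
# Green-Kubo: D = integral_0^inf <v(0) v(t)> dt = kT / gamma.
gamma, kT, dt, steps = 1.0, 1.0, 0.01, 2_000_000

# Exact AR(1) discretization of the OU process.
rho = np.exp(-gamma * dt)
eps = np.sqrt(kT * (1.0 - rho ** 2)) * rng.standard_normal(steps)
v = np.empty(steps)
v[0] = rng.normal(0.0, np.sqrt(kT))      # start in the stationary state
for i in range(steps - 1):
    v[i + 1] = rho * v[i] + eps[i]

# Velocity autocorrelation via FFT (zero-padded to avoid wrap-around),
# integrated out to ten correlation times with the trapezoid rule.
lags = 1000
f = np.fft.rfft(v, n=2 * steps)
acf = np.fft.irfft(f * np.conj(f))[:lags] / steps

D_gk = dt * (acf.sum() - 0.5 * (acf[0] + acf[-1]))
print(f"Green-Kubo D = {D_gk:.3f}  (exact kT/gamma = {kT / gamma:.3f})")
```

    Even with a long trajectory the estimate carries a few percent of statistical noise, which is the slow convergence the trajectory-based importance sampling above is designed to improve on.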

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yong, E-mail: 83229994@qq.com; Ge, Hao, E-mail: haoge@pku.edu.cn; Xiong, Jie, E-mail: jiexiong@umac.mo

The fluctuation theorem is one of the major achievements in the field of nonequilibrium statistical mechanics during the past two decades. Very few results exist for the steady-state fluctuation theorem of the sample entropy production rate, in terms of a large deviation principle, for diffusion processes, owing to technical difficulties. Here we give a proof of the steady-state fluctuation theorem for a diffusion process in magnetic fields, with explicit expressions for the free energy function and the rate function. The proof is based on the Karhunen-Loève expansion of a complex-valued Ornstein-Uhlenbeck process.

  11. Importance sampling large deviations in nonequilibrium steady states. I.

    PubMed

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
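A minimal sketch of the diffusion Monte Carlo (cloning) idea discussed above, applied to the simplest of the models mentioned, a biased Brownian walker. The drift, diffusivity, and population size are illustrative assumptions; for this walker the exact SCGF of the displacement is psi(lam) = v*lam + D*lam**2, which the population growth rate should approximate.

```python
import numpy as np

def scgf_cloning(lam, v=1.0, D=0.5, dt=0.01, nsteps=2000, nwalkers=2000, seed=1):
    """Estimate psi(lam) = lim (1/T) log E[exp(lam * x(T))] for dx = v dt + sqrt(2D) dW."""
    rng = np.random.default_rng(seed)
    log_growth = 0.0
    for _ in range(nsteps):
        # propagate every walker one step of the drift-diffusion dynamics
        dx = v * dt + np.sqrt(2 * D * dt) * rng.normal(size=nwalkers)
        w = np.exp(lam * dx)              # exponential tilt by the observable
        log_growth += np.log(w.mean())    # log of the population growth factor
        # (a full cloning scheme would now resample walker states with
        # probability proportional to w; for these i.i.d. increments the
        # growth factor alone already determines the SCGF)
    return log_growth / (nsteps * dt)

lam = 0.3
psi = scgf_cloning(lam)
print(psi)   # compare with the exact v*lam + D*lam**2 = 0.345
```

The exponentially diverging correlations the abstract describes show up here as the growing spread of the weights w at larger lam, which is what guiding functions are introduced to tame.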

  12. Importance sampling large deviations in nonequilibrium steady states. I

    NASA Astrophysics Data System (ADS)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  13. The Effectiveness of a Rater Training Booklet in Increasing Accuracy of Performance Ratings

    DTIC Science & Technology

    1988-04-01

Subjects’ ratings were compared for accuracy. The dependent measure was the absolute deviation score of each individual’s rating from the "true score". Findings: The absolute deviation scores of each individual’s ratings from the "true score" provided by subject matter experts were analyzed.

  14. Air-flow regulation system for a coal gasifier

    DOEpatents

    Fasching, George E.

    1984-01-01

    An improved air-flow regulator for a fixed-bed coal gasifier is provided which allows close air-flow regulation from a compressor source even though the pressure variations are too rapid for a single primary control loop to respond. The improved system includes a primary controller to control a valve in the main (large) air supply line to regulate large slow changes in flow. A secondary controller is used to control a smaller, faster acting valve in a secondary (small) air supply line parallel to the main line valve to regulate rapid cyclic deviations in air flow. A low-pass filter with a time constant of from 20 to 50 seconds couples the output of the secondary controller to the input of the primary controller so that the primary controller only responds to slow changes in the air-flow rate, the faster, cyclic deviations in flow rate sensed and corrected by the secondary controller loop do not reach the primary controller due to the high frequency rejection provided by the filter. This control arrangement provides at least a factor of 5 improvement in air-flow regulation for a coal gasifier in which air is supplied by a reciprocating compressor through a surge tank.
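The frequency-splitting behaviour described above hinges on the low-pass filter between the two controller loops. The sketch below is a hedged, generic illustration of that one element, a discrete first-order low-pass filter with a 30 s time constant; the 2 s ripple, the amplitudes, and the sample rate are made-up stand-ins for compressor flow variations, not values from the patent.

```python
# A 30 s low-pass filter strongly attenuates a fast 2 s ripple while
# passing a constant offset unchanged, so a primary controller fed
# through it sees only slow changes in the flow signal.
import math

def lowpass_step(y_prev, u, dt, tau):
    """One step of a discrete first-order low-pass filter with time constant tau."""
    a = dt / (tau + dt)
    return y_prev + a * (u - y_prev)

dt, tau = 0.1, 30.0
y = 0.0
ys = []
for k in range(10000):
    t = k * dt
    u = 1.0 + 0.5 * math.sin(2 * math.pi * t / 2.0)   # offset + fast ripple
    y = lowpass_step(y, u, dt, tau)
    ys.append(y)

steady = ys[-600:]                        # last minute, after settling
ripple = (max(steady) - min(steady)) / 2  # residual ripple amplitude
print(sum(steady) / len(steady), ripple)
```

The filtered output hugs the 1.0 offset while the 0.5-amplitude ripple is attenuated by roughly two orders of magnitude, which is the "high frequency rejection" the patent relies on.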

  15. A New Control Paradigm for Stochastic Differential Equations

    NASA Astrophysics Data System (ADS)

    Schmid, Matthias J. A.

    This study presents a novel comprehensive approach to the control of dynamic systems under uncertainty governed by stochastic differential equations (SDEs). Large Deviations (LD) techniques are employed to arrive at a control law for a large class of nonlinear systems minimizing sample path deviations. Thereby, a paradigm shift is suggested from point-in-time to sample path statistics on function spaces. A suitable formal control framework which leverages embedded Freidlin-Wentzell theory is proposed and described in detail. This includes the precise definition of the control objective and comprises an accurate discussion of the adaptation of the Freidlin-Wentzell theorem to the particular situation. The new control design is enabled by the transformation of an ill-posed control objective into a well-conditioned sequential optimization problem. A direct numerical solution process is presented using quadratic programming, but the emphasis is on the development of a closed-form expression reflecting the asymptotic deviation probability of a particular nominal path. This is identified as the key factor in the success of the new paradigm. An approach employing the second variation and the differential curvature of the effective action is suggested for small deviation channels leading to the Jacobi field of the rate function and the subsequently introduced Jacobi field performance measure. This closed-form solution is utilized in combination with the supplied parametrization of the objective space. For the first time, this allows for an LD based control design applicable to a large class of nonlinear systems. Thus, Minimum Large Deviations (MLD) control is effectively established in a comprehensive structured framework. The construction of the new paradigm is completed by an optimality proof for the Jacobi field performance measure, an interpretive discussion, and a suggestion for efficient implementation. 
The potential of the new approach is exhibited by its extension to scalar systems subject to state-dependent noise and to systems of higher order. The suggested control paradigm is further advanced when a sequential application of MLD control is considered. This technique yields a nominal path corresponding to the minimum total deviation probability on the entire time domain. It is demonstrated that this sequential optimization concept can be unified in a single objective function which is revealed to be the Jacobi field performance index on the entire domain subject to an endpoint deviation. The emerging closed-form term replaces the previously required nested optimization and, thus, results in a highly efficient application-ready control design. This effectively substantiates Minimum Path Deviation (MPD) control. The proposed control paradigm allows the specific problem of stochastic cost control to be addressed as a special case. This new technique is employed within this study for the stochastic cost problem giving rise to Cost Constrained MPD (CCMPD) as well as to Minimum Quadratic Cost Deviation (MQCD) control. An exemplary treatment of a generic scalar nonlinear system subject to quadratic costs is performed for MQCD control to demonstrate the elementary expandability of the new control paradigm. This work concludes with a numerical evaluation of both MPD and CCMPD control for three exemplary benchmark problems. Numerical issues associated with the simulation of SDEs are briefly discussed and illustrated. The numerical examples furnish proof of the successful design. This study is complemented by a thorough review of statistical control methods, stochastic processes, Large Deviations techniques and the Freidlin-Wentzell theory, providing a comprehensive, self-contained account. The presentation of the mathematical tools and concepts is of a unique character, specifically addressing an engineering audience.

  16. Trade-off between linewidth and slip rate in a mode-locked laser model.

    PubMed

    Moore, Richard O

    2014-05-15

    We demonstrate a trade-off between linewidth and loss-of-lock rate in a mode-locked laser employing active feedback to control the carrier-envelope offset phase difference. In frequency metrology applications, the linewidth translates directly to uncertainty in the measured frequency, whereas the impact of lock loss and recovery on the measured frequency is less well understood. We reduce the dynamics to stochastic differential equations, specifically diffusion processes, and compare the linearized linewidth to the rate of lock loss determined by the mean time to exit, as calculated from large deviation theory.
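A hedged toy version of the "mean time to exit" calculation referenced above: simulate an Ornstein-Uhlenbeck phase error and measure the average first time it leaves a locking interval. In Freidlin-Wentzell theory the exit time grows exponentially as the noise strength drops (roughly exp(dV/eps)); the parameters below are illustrative, not the laser model's.

```python
import numpy as np

def mean_exit_time(eps, b=1.0, theta=1.0, dt=0.01, ntraj=300, seed=2):
    """Average first time |x| >= b for dx = -theta*x dt + sqrt(2*eps) dW, x(0) = 0."""
    rng = np.random.default_rng(seed)
    times = np.empty(ntraj)
    for i in range(ntraj):
        x, t = 0.0, 0.0
        while abs(x) < b:
            # Euler-Maruyama step of the Ornstein-Uhlenbeck process
            x += -theta * x * dt + np.sqrt(2 * eps * dt) * rng.normal()
            t += dt
        times[i] = t
    return times.mean()

# Halving the noise strength sharply lengthens the mean loss-of-lock time,
# the quantity traded off against linewidth in the abstract.
t_noisy = mean_exit_time(eps=0.5)
t_quiet = mean_exit_time(eps=0.25)
print(t_noisy, t_quiet)
```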

  17. The influence of base rates on correlations: An evaluation of proposed alternative effect sizes with real-world data.

    PubMed

    Babchishin, Kelly M; Helmus, Leslie-Maaike

    2016-09-01

Correlations are the simplest and most commonly understood effect size statistic in psychology. The purpose of the current paper was to use a large sample of real-world data (109 correlations with 60,415 participants) to illustrate the base rate dependence of correlations when applied to dichotomous or ordinal data. Specifically, we examined the influence of the base rate on different effect size metrics. Correlations decreased when the dichotomous variable did not have a 50% base rate. The higher the deviation from a 50% base rate, the smaller the observed Pearson's point-biserial and Kendall's tau correlation coefficients. In contrast, the relationships between base rate deviations and the more commonly proposed alternatives (i.e., polychoric correlation coefficients, AUCs, Pearson/Thorndike adjusted correlations, and Cohen's d) were less remarkable, with AUCs being most robust to attenuation due to base rates. In other words, the base rate makes a marked difference in the magnitude of the correlation. As such, when using dichotomous data, the correlation may be more sensitive to base rates than is optimal for the researcher's goals. Given the magnitude of the association between the base rate and point-biserial correlations (r = -.81) and Kendall's tau (r = -.80), we recommend that AUCs, Pearson/Thorndike adjusted correlations, Cohen's d, or polychoric correlations be considered as alternative effect size statistics in many contexts.
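The abstract's central point can be reproduced on simulated data: with a fixed latent association, the point-biserial correlation shrinks as the dichotomous variable's base rate departs from 50%. The latent model and sample size below are illustrative assumptions, not the authors' real-world sample.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
latent = rng.normal(size=n)                   # latent risk score
y = latent + rng.normal(size=n)               # continuous outcome, same association

def point_biserial(base_rate):
    """Pearson correlation of y with the latent score dichotomized at base_rate."""
    cut = np.quantile(latent, 1 - base_rate)  # cutoff giving the desired base rate
    d = (latent > cut).astype(float)
    return np.corrcoef(d, y)[0, 1]

r50 = point_biserial(0.50)                    # 50% base rate
r10 = point_biserial(0.10)                    # 10% base rate, same latent association
print(round(r50, 3), round(r10, 3))
```

The underlying association is identical in both cases; only the dichotomization cutoff changes, yet the observed correlation is visibly attenuated at the 10% base rate.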

  18. Annealed Scaling for a Charged Polymer

    NASA Astrophysics Data System (ADS)

    Caravenna, F.; den Hollander, F.; Pétrélis, N.; Poisat, J.

    2016-03-01

    This paper studies an undirected polymer chain living on the one-dimensional integer lattice and carrying i.i.d. random charges. Each self-intersection of the polymer chain contributes to the interaction Hamiltonian an energy that is equal to the product of the charges of the two monomers that meet. The joint probability distribution for the polymer chain and the charges is given by the Gibbs distribution associated with the interaction Hamiltonian. The focus is on the annealed free energy per monomer in the limit as the length of the polymer chain tends to infinity. We derive a spectral representation for the free energy and use this to prove that there is a critical curve in the parameter plane of charge bias versus inverse temperature separating a ballistic phase from a subballistic phase. We show that the phase transition is first order. We prove large deviation principles for the laws of the empirical speed and the empirical charge, and derive a spectral representation for the associated rate functions. Interestingly, in both phases both rate functions exhibit flat pieces, which correspond to an inhomogeneous strategy for the polymer to realise a large deviation. The large deviation principles in turn lead to laws of large numbers and central limit theorems. We identify the scaling behaviour of the critical curve for small and for large charge bias. In addition, we identify the scaling behaviour of the free energy for small charge bias and small inverse temperature. Both are linked to an associated Sturm-Liouville eigenvalue problem. A key tool in our analysis is the Ray-Knight formula for the local times of the one-dimensional simple random walk. This formula is exploited to derive a closed form expression for the generating function of the annealed partition function, and for several related quantities. This expression in turn serves as the starting point for the derivation of the spectral representation for the free energy, and for the scaling theorems. 
What happens for the quenched free energy per monomer remains open. We state two modest results and raise a few questions.

  19. An activity index for geomagnetic paleosecular variation, excursions, and reversals

    NASA Astrophysics Data System (ADS)

    Panovska, S.; Constable, C. G.

    2017-04-01

    Magnetic indices provide quantitative measures of space weather phenomena that are widely used by researchers in geomagnetism. We introduce an index focused on the internally generated field that can be used to evaluate long term variations or climatology of modern and paleomagnetic secular variation, including geomagnetic excursions, polarity reversals, and changes in reversal rate. The paleosecular variation index, Pi, represents instantaneous or average deviation from a geocentric axial dipole field using normalized ratios of virtual geomagnetic pole colatitude and virtual dipole moment. The activity level of the index, σPi, provides a measure of field stability through the temporal standard deviation of Pi. Pi can be calculated on a global grid from geomagnetic field models to reveal large scale geographic variations in field structure. It can be determined for individual time series, or averaged at local, regional, and global scales to detect long term changes in geomagnetic activity, identify excursions, and transitional field behavior. For recent field models, Pi ranges from less than 0.05 to 0.30. Conventional definitions for geomagnetic excursions are characterized by Pi exceeding 0.5. Strong field intensities are associated with low Pi unless they are accompanied by large deviations from axial dipole field directions. σPi provides a measure of geomagnetic stability that is modulated by the level of PSV or frequency of excursional activity and reversal rate. We demonstrate uses of Pi for paleomagnetic observations and field models and show how it could be used to assess whether numerical simulations of the geodynamo exhibit Earth-like properties.

  20. Cumulants and large deviations of the current through non-equilibrium steady states

    NASA Astrophysics Data System (ADS)

    Bodineau, Thierry; Derrida, Bernard

    2007-06-01

Using a generalisation of detailed balance for systems maintained out of equilibrium by contact with two reservoirs at unequal temperatures or at unequal densities, one can recover the fluctuation theorem for the large deviation function of the current. For large diffusive systems, we show how the large deviation function of the current can be computed using a simple additivity principle. The validity of this additivity principle and the occurrence of phase transitions are discussed in the framework of the macroscopic fluctuation theory. To cite this article: T. Bodineau, B. Derrida, C. R. Physique 8 (2007).
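For reference, the additivity principle invoked above can be written out. This is the standard macroscopic-fluctuation-theory form from the general literature (a sketch; the diffusivity D(ρ) and mobility σ(ρ) notation is assumed, not taken from this note): for a one-dimensional diffusive system with boundary densities ρ_a and ρ_b, the large deviation function of the time-averaged current j is obtained from an optimal density profile,

```latex
\mathcal{F}(j) = \min_{\rho(x)} \int_0^1
  \frac{\left[\, j + D(\rho(x))\,\rho'(x) \,\right]^2}{2\,\sigma(\rho(x))}\,\mathrm{d}x,
\qquad \rho(0) = \rho_a, \quad \rho(1) = \rho_b .
```

Roughly speaking, the phase transitions mentioned in the abstract correspond to regimes where the optimal profile ceases to be time-independent, which is where the additivity principle can break down.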

  1. Error detection capability of a novel transmission detector: a validation study for online VMAT monitoring.

    PubMed

    Pasler, Marlies; Michel, Kilian; Marrazzo, Livia; Obenland, Michael; Pallotta, Stefania; Björnsgard, Mari; Lutterbach, Johannes

    2017-09-01

The purpose of this study was to characterize a new single large-area ionization chamber, the integral quality monitor system (iRT, Germany), for online and real-time beam monitoring. Signal stability, monitor unit (MU) linearity and dose rate dependence were investigated for static and arc deliveries and compared to independent ionization chamber measurements. The dose verification capability of the transmission detector system was evaluated by comparing calculated and measured detector signals for 15 volumetric modulated arc therapy plans. The error detection sensitivity was tested by introducing MLC position and linac output errors. Deviations in dose distributions between the original and error-induced plans were compared in terms of detector signal deviation, dose-volume histogram (DVH) metrics and 2D γ-evaluation (2%/2 mm and 3%/3 mm). The detector signal is linearly dependent on linac output and shows negligible (<0.4%) dose rate dependence up to 460 MU min⁻¹. Signal stability is within 1% for cumulative detector output; substantial variations were observed for the segment-by-segment signal. Calculated versus measured cumulative signal deviations ranged from -0.16% to 2.25%. DVH, mean 2D γ-value and detector signal evaluations showed increasing deviations with regard to the respective reference with growing MLC and dose output errors; good correlation between DVH metrics and detector signal deviation was found (e.g. PTV Dmean: R² = 0.97). Positional MLC errors of 1 mm and errors in linac output of 2% were identified with the transmission detector system. The extensive tests performed in this investigation show that the new transmission detector provides a stable and sensitive cumulative signal output and is suitable for beam monitoring during patient treatment.

  2. Error detection capability of a novel transmission detector: a validation study for online VMAT monitoring

    NASA Astrophysics Data System (ADS)

    Pasler, Marlies; Michel, Kilian; Marrazzo, Livia; Obenland, Michael; Pallotta, Stefania; Björnsgard, Mari; Lutterbach, Johannes

    2017-09-01

The purpose of this study was to characterize a new single large-area ionization chamber, the integral quality monitor system (iRT, Germany), for online and real-time beam monitoring. Signal stability, monitor unit (MU) linearity and dose rate dependence were investigated for static and arc deliveries and compared to independent ionization chamber measurements. The dose verification capability of the transmission detector system was evaluated by comparing calculated and measured detector signals for 15 volumetric modulated arc therapy plans. The error detection sensitivity was tested by introducing MLC position and linac output errors. Deviations in dose distributions between the original and error-induced plans were compared in terms of detector signal deviation, dose-volume histogram (DVH) metrics and 2D γ-evaluation (2%/2 mm and 3%/3 mm). The detector signal is linearly dependent on linac output and shows negligible (<0.4%) dose rate dependence up to 460 MU min⁻¹. Signal stability is within 1% for cumulative detector output; substantial variations were observed for the segment-by-segment signal. Calculated versus measured cumulative signal deviations ranged from -0.16% to 2.25%. DVH, mean 2D γ-value and detector signal evaluations showed increasing deviations with regard to the respective reference with growing MLC and dose output errors; good correlation between DVH metrics and detector signal deviation was found (e.g. PTV Dmean: R² = 0.97). Positional MLC errors of 1 mm and errors in linac output of 2% were identified with the transmission detector system. The extensive tests performed in this investigation show that the new transmission detector provides a stable and sensitive cumulative signal output and is suitable for beam monitoring during patient treatment.

  3. Analysis of the stress field and strain rate in Zagros-Makran transition zone

    NASA Astrophysics Data System (ADS)

    Ghorbani Rostam, Ghasem; Pakzad, Mehrdad; Mirzaei, Noorbakhsh; Sakhaei, Seyed Reza

    2018-01-01

The transition boundary between the Zagros continental collision and the Makran oceanic-continental subduction can be specified by two broad boundaries: (a) the Oman Line is the seismicity boundary, with a sizeable reduction in seismicity rate from Zagros in the west to Makran in the east; and (b) the Zendan-Minab-Palami (ZMP) fault system is believed to be a prominent tectonic boundary. The purpose of this paper is to analyze the stress field in the Zagros-Makran transition zone using the iterative joint inversion method developed by Vavrycuk (Geophysical Journal International 199:69-77, 2014). The results suggest a rather uniform pattern of the stress field around these two boundaries. We compare the results with the strain rates obtained from the Global Positioning System (GPS) network stations. In most cases, the velocity vectors show relatively good agreement with the stress field, except for the Bandar Abbas (BABS) station, which displays a relatively large deviation between the stress field and the strain vector. This deviation probably reflects the specific location of the BABS station in the transition zone between the Zagros continental collision and Makran subduction zones.

  4. Examining the Relationship Between Passenger Airline Aircraft Maintenance Outsourcing and Aircraft Safety

    NASA Astrophysics Data System (ADS)

    Monaghan, Kari L.

The problem addressed was the concern for aircraft safety rates as they relate to the rate of maintenance outsourcing. Data gathered from 14 passenger airlines: AirTran, Alaska, America West, American, Continental, Delta, Frontier, Hawaiian, JetBlue, Midwest, Northwest, Southwest, United, and USAir covered the years 1996 through 2008. A quantitative correlational design, utilizing Pearson's correlation coefficient and the coefficient of determination, was used in the present study to measure the correlation between variables. Elements of passenger airline aircraft maintenance outsourcing and aircraft accidents, incidents, and pilot deviations within domestic passenger airline operations were analyzed, examined, and evaluated. Rates of maintenance outsourcing were analyzed to determine the association with accident, incident, and pilot deviation rates. Maintenance outsourcing rates used in the evaluation were the yearly dollar expenditure of passenger airlines for aircraft maintenance outsourcing as they relate to the total airline aircraft maintenance expenditures. Aircraft accident, incident, and pilot deviation rates used in the evaluation were the yearly number of accidents, incidents, and pilot deviations per miles flown. The Pearson r-values were calculated to measure the linear relationship strength between the variables. There were no statistically significant correlation findings for accidents, r(174)=0.065, p=0.393, or incidents, r(174)=0.020, p=0.793. However, there was a statistically significant correlation between maintenance outsourcing rates and pilot deviation rates, r(174)=0.204, p=0.007. The calculated R square value of 0.042 represents the variance that can be accounted for in aircraft pilot deviation rates by examining the variance in aircraft maintenance outsourcing rates; accordingly, 95.8% of the variance is unexplained.
Suggestions for future research include replication of the present study with the inclusion of maintenance outsourcing rate data for all airlines differentiated between domestic and foreign repair station utilization. Replication of the present study every five years is also encouraged to continue evaluating the impact of maintenance outsourcing practices on passenger airline safety.
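The effect-size arithmetic in this record can be checked directly: the coefficient of determination is the square of Pearson's r, and the unexplained share is its complement. A minimal check:

```python
# Variance in pilot-deviation rates accounted for by maintenance
# outsourcing rates, from the reported r(174) = 0.204.
r = 0.204
r_squared = r ** 2
unexplained_pct = (1 - r_squared) * 100
print(round(r_squared, 3), round(unexplained_pct, 1))  # 0.042 95.8
```

Both values match the abstract's reported R square of 0.042 and 95.8% unexplained variance.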

  5. Studies of dark energy with X-ray observatories.

    PubMed

    Vikhlinin, Alexey

    2010-04-20

I review the contribution of the Chandra X-ray Observatory to studies of dark energy. There are two broad classes of observable effects of dark energy: evolution of the expansion rate of the Universe, and a slowdown in the rate of growth of cosmic structures. Chandra has detected and measured both of these effects through observations of galaxy clusters. A combination of the Chandra results with other cosmological datasets leads to 5% constraints on the dark energy equation-of-state parameter, and limits possible deviations of gravity from general relativity on large scales.

  6. Large-deviation joint statistics of the finite-time Lyapunov spectrum in isotropic turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Perry L., E-mail: pjohns86@jhu.edu; Meneveau, Charles

    2015-08-15

One of the hallmarks of turbulent flows is the chaotic behavior of fluid particle paths, with exponentially growing separation among them while their distance does not exceed the viscous range. The maximal (positive) Lyapunov exponent represents the average strength of the exponential growth rate, while fluctuations in the rate of growth are characterized by the finite-time Lyapunov exponents (FTLEs). In the last decade or so, the notion of Lagrangian coherent structures (which are often computed using FTLEs) has gained attention as a tool for visualizing coherent trajectory patterns in a flow and distinguishing regions of the flow with different mixing properties. A quantitative statistical characterization of FTLEs can be accomplished using the statistical theory of large deviations, based on the so-called Cramér function. To obtain the Cramér function from data, we use both the method based on measuring moments and measuring histograms, and introduce a finite-size correction to the histogram-based method. We generalize the existing univariate formalism to the joint distributions of the two FTLEs needed to fully specify the Lyapunov spectrum in 3D flows. The joint Cramér function of turbulence is measured from two direct numerical simulation datasets of isotropic turbulence. Results are compared with joint statistics of FTLEs computed using only the symmetric part of the velocity gradient tensor, as well as with joint statistics of instantaneous strain-rate eigenvalues. When using only the strain contribution of the velocity gradient, the maximal FTLE nearly doubles in magnitude, highlighting the role of rotation in de-correlating the fluid deformations along particle paths. We also extend the large-deviation theory to study the statistics of the ratio of FTLEs.
The most likely ratio of the FTLEs λ₁ : λ₂ : λ₃ is shown to be about 4:1:-5, compared to about 8:3:-11 when using only the strain-rate tensor for calculating fluid volume deformations. The results serve to characterize the fundamental statistical and geometric structure of turbulence at small scales, including cumulative, time-integrated effects. These are important for deformable particles such as droplets and polymers advected by turbulence.
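The histogram route to a Cramér function mentioned in this abstract can be sketched on a toy surrogate: finite-time averages of i.i.d. Gaussians stand in for FTLEs, for which the exact Cramér function is s(a) = a²/2. The sample sizes, bin count, and well-sampled-bin cutoff are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)
T, nsamples = 50, 200_000
# finite-time averages: the toy stand-in for finite-time Lyapunov exponents
a = rng.normal(size=(nsamples, T)).mean(axis=1)

counts, edges = np.histogram(a, bins=60)
centers = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]
mask = counts >= 50                          # keep only well-sampled bins
density = counts[mask] / (counts.sum() * width)

# Cramer function estimate: s(a) = -(1/T) log p_T(a), up to an additive
# constant, fixed here by requiring min(s) = 0
s_est = -np.log(density) / T
s_est -= s_est.min()
s_exact = centers[mask] ** 2 / 2

err = float(np.max(np.abs(s_est - s_exact)))
print(err)
```

The histogram estimate is accurate near the most probable value but starves of samples in the tails, which is the regime where the paper's finite-size correction and moment-based method matter.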

  7. Large Deviations for Nonlocal Stochastic Neural Fields

    PubMed Central

    2014-01-01

    We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers’ law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations. Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20. PMID:24742297

  8. Toilet Grab-Bar Preference and Center of Pressure Deviation During Toilet Transfers in Healthy Seniors, Seniors With Hip Replacements, and Seniors Having Suffered a Stroke.

    PubMed

    Kennedy, Matthew Joel; Arcelus, Amaya; Guitard, Paulette; Goubran, R A; Sveistrup, Heidi

    2015-01-01

Multiple toilet grab-bar configurations are required by people with a diverse spectrum of disability. The study purpose was to determine the toilet grab-bar preferences of healthy seniors, seniors with a hip replacement, and seniors post-stroke, and to determine the effect of each configuration on centre of pressure (COP) displacement during toilet transfers. Fourteen healthy seniors, 7 ambulatory seniors with a hip replacement, and 8 ambulatory seniors post-stroke participated in the study. Toilet transfers were performed with no bars (NB), commode (C), two vertical bars (2VB), one vertical bar (1VB), a horizontal bar (H), two swing-away bars (S) and a diagonal bar (D). COP was measured using pressure-sensitive floor mats. Participants rated the safety, ease of use, helpfulness, comfort, and preference for installation of each configuration. 2VB was most preferred and had the smallest COP deviation. Least preferred were H and NB. C caused the largest COP displacement but had favourable ratings. The preference and safety of the 2VB should be considered in the design of accessible toilets and in accessibility construction guidelines. However, these results need to be verified in non-ambulatory populations. C is frequently prescribed, but generates large COP deviation, suggesting it may present an increased risk of falls.

  9. Fluid-driven fracture propagation in heterogeneous media: Probability distributions of fracture trajectories

    NASA Astrophysics Data System (ADS)

    Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis

    2017-11-01

Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuousness of the fracture paths. We characterize the deviation of fracture paths from the homogeneous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.

  10. Fluid-driven fracture propagation in heterogeneous media: Probability distributions of fracture trajectories.

    PubMed

    Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis

    2017-11-01

    Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuosity of the fracture paths. We characterize the deviation of fracture paths from the homogeneous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.
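    The closing random-walk analogy can be illustrated with a minimal Monte Carlo sketch (a toy model, not the authors' phase-field simulation): if heterogeneity deflects the fracture tip by independent Gaussian kicks, cross-sections of the path ensemble are approximately normally distributed and their spread grows with distance from the injection point.

```python
import numpy as np

# Toy random-walk ensemble: each path advances one step at a time while its
# lateral position receives an independent Gaussian kick (a stand-in for
# heterogeneity-induced deflections, not the authors' phase-field model).
rng = np.random.default_rng(0)
n_paths, n_steps, sigma = 2000, 400, 0.05
lateral = np.cumsum(rng.normal(0.0, sigma, size=(n_paths, n_steps)), axis=1)

# Cross-sections of the ensemble: the spread grows with distance from the
# injection point, and deviations are symmetric about the straight path.
spread_mid = float(np.std(lateral[:, n_steps // 2 - 1]))
spread_end = float(np.std(lateral[:, -1]))
```

    For a plain random walk the spread grows like the square root of distance; the linear growth of the maximum deviation reported in the abstract is a statement about the ensemble extremes, not the cross-sectional standard deviation.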

  11. Evaluation of Glaucoma Progression in Large-Scale Clinical Data: The Japanese Archive of Multicentral Databases in Glaucoma (JAMDIG).

    PubMed

    Fujino, Yuri; Asaoka, Ryo; Murata, Hiroshi; Miki, Atsuya; Tanito, Masaki; Mizoue, Shiro; Mori, Kazuhiko; Suzuki, Katsuyoshi; Yamashita, Takehiro; Kashiwagi, Kenji; Shoji, Nobuyuki

    2016-04-01

    To develop a large-scale real clinical database of glaucoma (Japanese Archive of Multicentral Databases in Glaucoma: JAMDIG) and to investigate the effect of treatment. The study included a total of 1348 eyes of 805 primary open-angle glaucoma patients with 10 visual fields (VFs) measured with the 24-2 or 30-2 Humphrey Field Analyzer (HFA) and intraocular pressure (IOP) records in 10 institutes in Japan. Those with 10 reliable VFs were further identified (638 eyes of 417 patients). The mean total deviation (mTD) of the 52 test points in the 24-2 HFA VF was calculated, and the relationship between the mTD progression rate and seven variables (age, mTD of the baseline VF, average IOP, standard deviation (SD) of IOP, previous argon/selective laser trabeculoplasties (ALT/SLT), previous trabeculectomy, and previous trabeculotomy) was analyzed. The mTD in the initial VF was -6.9 ± 6.2 dB and the mTD progression rate was -0.26 ± 0.46 dB/year. Mean IOP during the follow-up period was 13.5 ± 2.2 mm Hg. Age and the SD of IOP were related to the mTD progression rate. However, in eyes with average IOP below 15 mm Hg (and likewise below 13 mm Hg), only age and baseline VF mTD were related to the mTD progression rate. Age and the degree of VF damage were related to future progression. Average IOP was not related to the progression rate; however, fluctuation of IOP was associated with faster progression, although this was not the case when average IOP was below 15 mm Hg.
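    The mTD progression rate reported here (dB/year) is, in essence, the slope of mean total deviation regressed on follow-up time. A minimal sketch on hypothetical values (not JAMDIG data):

```python
import numpy as np

# Hypothetical eye with 10 yearly visual fields (illustrative values only):
# baseline mTD near -6.9 dB, true progression near -0.26 dB/year, plus noise.
rng = np.random.default_rng(1)
years = np.arange(10, dtype=float)                         # follow-up time (years)
mtd = -6.9 - 0.26 * years + rng.normal(0.0, 0.1, size=10)  # mTD in dB

# The mTD progression rate is the slope of mTD regressed on time (dB/year).
slope, intercept = np.polyfit(years, mtd, 1)
```

    In practice the regression would be fit per eye, and the fitted slopes then related to candidate predictors such as age and IOP fluctuation.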

  12. Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Le Doussal, Pierre

    2014-01-01

    Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic, problems in optimization theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random, this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. a reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of order N and the cost function (energy) generically has two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime the number of critical points is of order unity, with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.
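    The "trust region subproblem" named in the abstract has a standard numerical solution that makes the geometry concrete: stationary points on the sphere satisfy (J + lam*I) x = b, and the global minimum corresponds to the lam >= -lambda_min(J) solving the secular equation ||x(lam)|| = r. A minimal sketch on a random instance (illustrative only; unrelated to the paper's replica calculation):

```python
import numpy as np

def min_on_sphere(J, b, radius):
    """Minimize 0.5 x^T J x - b^T x subject to ||x|| = radius.
    Stationarity gives (J + lam*I) x = b with lam >= -lambda_min(J);
    we solve the secular equation ||x(lam)|| = radius by bisection
    (generic "easy" case, where b has weight on the lowest eigenvector)."""
    evals, evecs = np.linalg.eigh(J)
    c = evecs.T @ b                       # b in the eigenbasis of J

    def norm_x(lam):
        return float(np.linalg.norm(c / (evals + lam)))

    lo = -evals[0] + 1e-9                 # just above -lambda_min: ||x|| is huge here
    hi = lo + 1.0
    while norm_x(hi) > radius:            # ||x(lam)|| decreases as lam grows
        hi += hi - lo
    for _ in range(200):                  # bisection on the secular equation
        mid = 0.5 * (lo + hi)
        if norm_x(mid) > radius:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    x = evecs @ (c / (evals + lam))
    return x, lam

rng = np.random.default_rng(0)
N = 6
A = rng.normal(size=(N, N))
J = 0.5 * (A + A.T)                       # random symmetric coupling matrix
b = rng.normal(size=N)                    # random "magnetic field"
x, lam = min_on_sphere(J, b, radius=np.sqrt(N))
```

    The returned x satisfies both the norm constraint and the stationarity condition, with the positive-semidefiniteness of J + lam*I guaranteeing a global minimum rather than a saddle.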

  13. Effect of stress on energy flux deviation of ultrasonic waves in GR/EP composites

    NASA Technical Reports Server (NTRS)

    Prosser, William H.; Kriz, R. D.; Fitting, Dale W.

    1990-01-01

    Ultrasonic waves suffer energy flux deviation in graphite/epoxy because of the large anisotropy. The angle of deviation is a function of the elastic coefficients. For nonlinear solids, these coefficients, and thus the angle of deviation, are functions of stress. Acoustoelastic theory was used to model the effect of stress on flux deviation for unidirectional T300/5208 using previously measured elastic coefficients. Computations were made for uniaxial stress along the x3 axis (fiber axis) and the x1 axis for waves propagating in the x1x3 plane. These results predict a shift as large as three degrees for the quasi-transverse wave. The shift in energy flux offers a new nondestructive technique for evaluating stress in composites.

  14. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn; Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  15. Comparison of therapy augmentation and deviation rates from the recommended once-daily dosing regimen between LDX and commonly prescribed long-acting stimulants for the treatment of ADHD in youth and adults.

    PubMed

    Setyawan, Juliana; Hodgkins, Paul; Guérin, Annie; Gauthier, Geneviève; Cloutier, Martin; Wu, Eric; Erder, M Haim

    2013-10-01

    To compare therapy augmentation and deviation rates from the recommended once-daily dosing regimen in Attention Deficit Hyperactivity Disorder (ADHD) patients initiated on lisdexamfetamine (LDX) vs other once-daily Food and Drug Administration (FDA) approved stimulants. ADHD patients initiated on a long-acting ADHD stimulant medication (index medication) in/after 2007 were selected from a large U.S. administrative claims database. Patients were required to be persistent for ≥90 days and continuously enrolled in their healthcare plan for ≥12 months following treatment initiation date. Based on age and previous treatment status, patients were classified into treatment-naïve children and adolescents (6-17 years old), previously treated children and adolescents, treatment-naïve adults (≥18 years old), and previously treated adults. Furthermore, patients were classified into four mutually exclusive treatment groups, based on index medication: lisdexamfetamine (LDX), osmotic release methylphenidate hydrochloride long-acting (OROS MPH), other methylphenidate/dexmethylphenidate long-acting (MPH LA), and amphetamine/dextroamphetamine long-acting (AMPH LA). The average daily consumption was measured as the quantity of index medication supplied in the 12-month study period divided by the total number of days of supply. Therapy augmentation was defined as the use of another ADHD medication concomitantly with the index medication for ≥28 consecutive days. Therapy augmentation and deviation rates from the recommended once-daily dosing regimen were compared between treatment groups using multivariate logistic regression models. 
Compared to the other treatment groups, LDX patients were less likely to augment with another ADHD medication (odds ratio [OR] range: 1.28-3.30) and to deviate from the recommended once-daily dosing regimen (OR range: 1.73-4.55), except for previously treated adult patients, where therapy augmentation differences were not statistically significant when compared to OROS MPH and MPH LA patients. This study did not control for ADHD severity. Overall, compared to LDX-treated patients, patients initiated on other ADHD medications were equally or more likely to augment therapy and more likely to deviate from the recommended once-daily dosing regimen.

  16. Scaling laws in the dynamics of crime growth rate

    NASA Astrophysics Data System (ADS)

    Alves, Luiz G. A.; Ribeiro, Haroldo V.; Mendes, Renio S.

    2013-06-01

    The increasing number of crimes in areas with large concentrations of people has made cities one of the main sources of violence. Understanding how the crime rate grows and how it relates to city size goes beyond an academic question, being a central issue for contemporary society. Here, we characterize and analyze quantitative aspects of murders in the period from 1980 to 2009 in Brazilian cities. We find that the distributions of the annual, biannual, and triannual logarithmic homicide growth rates exhibit the same functional form at distinct scales, that is, scale-invariant behavior. We also identify asymptotic power-law decay relations between the standard deviations of these three growth rates and the initial city size. Further, we discuss similarities with complex organizations.
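    The reported power-law relation between growth-rate dispersion and initial size can be recovered from synthetic data with a binned log-log fit; a sketch assuming sigma(S) proportional to S^(-beta), with an illustrative beta (not the paper's fitted value):

```python
import numpy as np

rng = np.random.default_rng(42)
beta = 0.2                                   # assumed scaling exponent (illustrative)
S = 10 ** rng.uniform(3, 6, size=20000)      # initial sizes (e.g. city populations)
r = rng.normal(0.0, S ** (-beta))            # log growth rates, size-dependent spread

# Bin by log-size, estimate the spread in each bin, then fit sigma(S) ~ S^-beta
# as a straight line in log-log coordinates.
edges = np.logspace(3, 6, 13)
log_s_mid, log_sigma = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    in_bin = (S >= lo) & (S < hi)
    log_s_mid.append(np.log10(np.sqrt(lo * hi)))     # geometric bin midpoint
    log_sigma.append(np.log10(np.std(r[in_bin])))
slope, _ = np.polyfit(log_s_mid, log_sigma, 1)       # slope is approximately -beta
```

    The same binning-and-fitting procedure applied to real homicide growth rates would yield the empirical decay exponent.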

  17. In Terms of the Logarithmic Mean Annual Seismicity Rate and Its Standard Deviation to Present the Gutenberg-Richter Relation

    NASA Astrophysics Data System (ADS)

    Chen, K. P.; Chang, W. Y.; Tsai, Y. B.

    2016-12-01

    The main purpose of this study is to apply an innovative approach to assess the median annual seismicity rates and their dispersions for Taiwan earthquakes in different depth ranges. This approach explicitly represents the Gutenberg-Richter (G-R) relation in terms of both the logarithmic mean annual seismicity rate and its standard deviation, instead of just the arithmetic mean. We use the high-quality seismicity data obtained by the Institute of Earth Sciences (IES) and the Central Weather Bureau (CWB) in an earthquake catalog with homogenized moment magnitudes from 1975 to 2014. The selected data set is shown to be complete for Mw > 3.0. We first use it to illustrate the merits of our new approach for dampening the influence of spuriously large or small event numbers in individual years on the determination of the median annual seismicity rate and its standard deviation. We further show that the logarithmic annual seismicity rates indeed follow a well-behaved lognormal distribution. The final results are summarized as follows: log10 N = 5.75 - 0.90 Mw ± (0.245 - 0.01 Mw) for focal depth 0-300 km; log10 N = 5.78 - 0.94 Mw ± (0.195 + 0.01 Mw) for focal depth 0-35 km; log10 N = 4.72 - 0.89 Mw ± (-0.075 + 0.075 Mw) for focal depth 35-70 km; and log10 N = 4.69 - 0.88 Mw ± (-0.47 + 0.16 Mw) for focal depth 70-300 km. The above results show distinctly different values of the parameters a and b in the G-R relations for Taiwan earthquakes in different depth ranges. These analytical equations can be readily used for comprehensive probabilistic seismic hazard assessment. Furthermore, a numerical table of the corresponding median annual seismicity rates and their upper and lower bounds at the median ± one standard deviation levels, as calculated from the above analytical equations, is presented at the end. This table offers an overall glance at the estimated median annual seismicity rates and their dispersions for Taiwan earthquakes of various magnitudes and focal depths. It is interesting to point out that the seismicity rate of crustal earthquakes, which tend to contribute most of the hazard, accounts for only about 74% of the overall seismicity rate in Taiwan. Accordingly, direct use of the entire earthquake catalog without differentiating by focal depth may result in substantial overestimates of potential seismic hazards.
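    Each fitted relation above has the form log10 N = a - b*Mw ± (c + d*Mw), so median annual rates and their one-standard-deviation bounds follow by direct evaluation. A small sketch using the crustal (0-35 km) coefficients quoted above:

```python
def annual_rate(mw, a=5.78, b=0.94, sig_c=0.195, sig_m=0.01):
    """Median annual seismicity rate and +/- one-sigma bounds from a fit of
    the form log10 N = a - b*Mw +/- (sig_c + sig_m*Mw). Defaults are the
    0-35 km (crustal) coefficients quoted in the abstract."""
    log_n = a - b * mw
    sigma = sig_c + sig_m * mw            # magnitude-dependent dispersion
    return 10 ** log_n, 10 ** (log_n - sigma), 10 ** (log_n + sigma)

# Example: the crustal fit at Mw 6.0 gives a median of about 1.4 events/year,
# with the bounds reflecting the lognormal spread of the annual counts.
median, lower, upper = annual_rate(6.0)
```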

  18. Probability distributions of linear statistics in chaotic cavities and associated phase transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivo, Pierpaolo; Majumdar, Satya N.; Bohigas, Oriol

    2010-03-01

    We establish large deviation formulas for linear statistics on the N transmission eigenvalues (T_i) of a chaotic cavity, in the framework of random matrix theory. Given any linear statistic of interest A = Σ_{i=1}^{N} a(T_i), the probability distribution P_A(A, N) of A generically satisfies the large deviation formula lim_{N→∞} [-2 log P_A(Nx, N) / (β N²)] = Ψ_A(x), where Ψ_A(x) is a rate function that we compute explicitly in many cases (conductance, shot noise, and moments) and β corresponds to different symmetry classes. Using these large deviation expressions, it is possible to recover easily known results and to produce new formulas, such as a closed-form expression for v(n) = lim_{N→∞} var(T_n) (where T_n = Σ_i T_i^n) for arbitrary integer n. The universal limit v* = lim_{n→∞} v(n) = 1/(2πβ) is also computed exactly. The distributions display a central Gaussian region flanked on both sides by non-Gaussian tails. At the junction of the two regimes, weakly nonanalytical points appear, a direct consequence of phase transitions in an associated Coulomb gas problem. Numerical checks are also provided, which are in full agreement with our asymptotic results in both real and Laplace space even for moderately small N. Part of the results have been announced by Vivo et al. [Phys. Rev. Lett. 101, 216809 (2008)].

  19. The Effects of Rate of Deviation and Musical Context on Intonation Perception in Homophonic Four-Part Chorales.

    NASA Astrophysics Data System (ADS)

    Bell, Michael Stephen

    Sixty-four trained musicians listened to four-bar excerpts of selected chorales by J. S. Bach, which were presented both in four-part texture (harmonic context) and as a single voice part (melodic context). These digitally synthesized examples were created by combining the first twelve partials, and all voice parts had the same generic timbre. A within-subjects design was used, so subjects heard each example in both contexts. Included in the thirty-two excerpts for each subject were four soprano, four alto, four tenor, and four bass parts as the target voices. The intonation of the target voice was varied such that the voice stayed in tune or changed by a half cent, two cents, or eight cents per second (a cent is 1/100 of a half step). Although the direction of the deviation (sharp or flat) was not a significant factor in intonation perception, main effects for context (melodic vs. harmonic) and rate of deviation were highly significant, as was the interaction between rate of deviation and context. Specifically, selections that stayed in tune or changed only by half cents were not perceived differently; for larger deviations, the error was detected earlier and the intonation was judged to be worse in the harmonic contexts compared to the melodic contexts. Additionally, the direction of the error was correctly identified in the melodic context more often than in the harmonic context only for the examples that mistuned at a rate of eight cents per second. Correct identification of the voice part that went out of tune in the four-part textures depended only on rate of deviation: the in-tune excerpts (no voice going out of tune) and the eight-cent deviations were correctly identified most often, the two-cent deviations were next, and the half-cent deviation excerpts were the least accurately identified.

  20. Variation in the standard deviation of the lure rating distribution: Implications for estimates of recollection probability.

    PubMed

    Dopkins, Stephen; Varner, Kaitlin; Hoyer, Darin

    2017-10-01

    In word recognition, semantic priming of test words increased the false-alarm rate and the mean of confidence ratings to lures. Such priming also increased the standard deviation of confidence ratings to lures and the slope of the z-ROC function, suggesting that the priming increased the standard deviation of the lure evidence distribution. The Unequal Variance Signal Detection (UVSD) model interpreted the priming as increasing the standard deviation of the lure evidence distribution. Without additional parameters, the Dual Process Signal Detection (DPSD) model could only accommodate the results by fitting the data for related and unrelated primes separately, interpreting the priming, implausibly, as decreasing the probability of target recollection. With an additional parameter for the probability of false (lure) recollection, the model could fit the data for related and unrelated primes together, interpreting the priming as increasing the probability of false recollection. These results suggest that DPSD estimates of target recollection probability will decrease with increases in the lure confidence/evidence standard deviation unless a parameter is included for false recollection. Unfortunately, the size of a given lure confidence/evidence standard deviation relative to other possible lure confidence/evidence standard deviations is often unspecified by context. Hence the model often has no way of estimating false recollection probability and thereby correcting its estimates of target recollection probability.
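    The claim that a larger lure standard deviation raises the z-ROC slope follows directly from the UVSD model: with target evidence distributed N(d', sigma_T) and lure evidence N(0, sigma_L), the z-ROC is a straight line of slope sigma_L/sigma_T. A short numerical sketch (illustrative parameters only):

```python
import numpy as np
from statistics import NormalDist

def zroc_slope(d_prime, sigma_target, sigma_lure):
    """UVSD sketch: sweep a decision criterion, convert hit and false-alarm
    rates to z-scores, and fit the z-ROC line. Under the model its slope
    equals sigma_lure / sigma_target."""
    nd = NormalDist()
    criteria = np.linspace(-1.5, 2.5, 25)
    z_hit = [nd.inv_cdf(1 - nd.cdf((c - d_prime) / sigma_target)) for c in criteria]
    z_fa = [nd.inv_cdf(1 - nd.cdf(c / sigma_lure)) for c in criteria]
    slope, _ = np.polyfit(z_fa, z_hit, 1)
    return float(slope)

slope_narrow = zroc_slope(1.0, 1.0, 0.8)   # smaller lure SD: shallower z-ROC
slope_wide = zroc_slope(1.0, 1.0, 1.1)     # priming-like widening: steeper z-ROC
```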

  1. Evaporation of sessile droplets affected by graphite nanoparticles and binary base fluids.

    PubMed

    Zhong, Xin; Duan, Fei

    2014-11-26

    The effects of ethanol content and nanoparticle concentration on the evaporation dynamics of graphite-water nanofluid droplets have been studied experimentally. The results show that the formed deposition patterns vary greatly with an increase in ethanol concentration from 0 to 50 vol %. Nanoparticles have been observed to be carried to the droplet surface, where they form a large piece of aggregate. The volume evaporation rate on average increases as the ethanol concentration increases from 0 to 50 vol % in the binary-mixture nanofluid droplets. The evaporation rate at the initial stage is more rapid than that at the late stage of drying, revealing a deviation from the linear fit that would correspond to a constant evaporation rate. The deviation is more pronounced at higher ethanol concentrations. The ethanol-induced reduction in liquid-vapor surface tension leads to higher wettability of the nanofluid droplets. The graphite nanoparticles in ethanol-water droplets reinforce the pinning effect in the drying process, and the droplets with more ethanol demonstrate depinning behavior only at the late stage. The addition of graphite nanoparticles in water enhances the droplet baseline spreading at the beginning of evaporation, the pinning effect during evaporation, and the evaporation rate. However, at relatively high nanoparticle concentrations, the enhancement is attenuated.

  2. Comparison of Objective Measures for Predicting Perceptual Balance and Visual Aesthetic Preference

    PubMed Central

    Hübner, Ronald; Fillinger, Martin G.

    2016-01-01

    The aesthetic appreciation of a picture largely depends on the perceptual balance of its elements. The underlying mental mechanisms of this relation, however, are still poorly understood. For investigating these mechanisms, objective measures of balance have been constructed, such as the Assessment of Preference for Balance (APB) score of Wilson and Chatterjee (2005). In the present study we examined the APB measure and compared it to an alternative measure (DCM; Deviation of the Center of “Mass”) that represents the center of perceptual “mass” in a picture and its deviation from the geometric center. Additionally, we applied measures of homogeneity and of mirror symmetry. In a first experiment participants had to rate the balance and symmetry of simple pictures, whereas in a second experiment different participants rated their preference (liking) for these pictures. In a third experiment participants rated the balance as well as the preference of new pictures. Altogether, the results show that DCM scores accounted better for balance ratings than APB scores, whereas the opposite held with respect to preference. Detailed analyses revealed that these results were due to the fact that aesthetic preference does not only depend on balance but also on homogeneity, and that the APB measure takes this feature into account. PMID:27014143
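    The DCM idea described here is straightforward to operationalize: treat pixel intensities as "mass", find their centroid, and measure its distance from the geometric center. A sketch of the general idea (the exact APB/DCM scoring details in the paper may differ):

```python
import numpy as np

def dcm_score(img):
    """Deviation of the Center of 'Mass' sketch: distance between the
    intensity centroid of a picture and its geometric center, normalized
    by the half-diagonal so the score lies in [0, 1]."""
    h, w = img.shape
    total = img.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (ys * img).sum() / total, (xs * img).sum() / total
    gy, gx = (h - 1) / 2.0, (w - 1) / 2.0       # geometric center
    return float(np.hypot(cy - gy, cx - gx) / np.hypot(gy, gx))

balanced = np.ones((64, 64))                     # uniform picture: centroid at center
skewed = np.zeros((64, 64))
skewed[:, :16] = 1.0                             # "mass" pushed to the left edge
```

    A perfectly balanced picture scores near zero, while a one-sided composition scores substantially higher, matching the intuition that DCM tracks perceptual balance.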

  3. Blood pressure variability in man: its relation to high blood pressure, age and baroreflex sensitivity.

    PubMed

    Mancia, G; Ferrari, A; Gregorini, L; Parati, G; Pomidossi, G; Bertinieri, G; Grassi, G; Zanchetti, A

    1980-12-01

    1. Intra-arterial blood pressure and heart rate were recorded for 24 h in ambulant hospitalized patients of variable age who had normal blood pressure or essential hypertension. Mean 24 h values, standard deviations and variation coefficients were obtained as the averages of values separately analysed for 48 consecutive half-hour periods. 2. In older subjects standard deviation and variation coefficient for mean arterial pressure were greater than in younger subjects with similar pressure values, whereas standard deviation and variation coefficient for heart rate were smaller. 3. In hypertensive subjects standard deviation for mean arterial pressure was greater than in normotensive subjects of similar ages, but this was not the case for variation coefficient, which was slightly smaller in the former than in the latter group. Normotensive and hypertensive subjects showed no difference in standard deviation and variation coefficient for heart rate. 4. In both normotensive and hypertensive subjects standard deviation and even more so variation coefficient were slightly or not related to arterial baroreflex sensitivity as measured by various methods (phenylephrine, neck suction etc.). 5. It is concluded that blood pressure variability increases and heart rate variability decreases with age, but that changes in variability are not so obvious in hypertension. Also, differences in variability among subjects are only marginally explained by differences in baroreflex function.
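    The two variability indices used throughout this study are the standard deviation of the 48 half-hour means and the variation coefficient, i.e. that standard deviation expressed as a percentage of the 24 h mean. A minimal sketch with made-up numbers (not patient data) shows why the CV can stay flat even when the SD rises with pressure level:

```python
import numpy as np

# Hypothetical half-hour means of mean arterial pressure (mm Hg), 48 per 24 h.
rng = np.random.default_rng(7)
half_hour_means = rng.normal(100.0, 8.0, size=48)

mean_24h = float(half_hour_means.mean())
sd = float(half_hour_means.std(ddof=1))      # absolute variability
cv = 100.0 * sd / mean_24h                   # variation coefficient, %
# The CV normalizes the SD by the pressure level, which is why hypertensive
# subjects can show a larger SD without showing a larger variation coefficient.
```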

  4. Effect of Stress on Energy Flux Deviation of Ultrasonic Waves in GR/EP Composites

    NASA Technical Reports Server (NTRS)

    Prosser, William H.; Kriz, R. D.; Fitting, Dale W.

    1990-01-01

    Ultrasonic waves suffer energy flux deviation in graphite/epoxy because of the large anisotropy. The angle of deviation is a function of the elastic coefficients. For nonlinear solids, these coefficients, and thus the angle of deviation, are functions of stress. Acoustoelastic theory was used to model the effect of stress on flux deviation for unidirectional T300/5208 using previously measured elastic coefficients. Computations were made for uniaxial stress along the x3 axis (fiber axis) and the x1 axis for waves propagating in the x1x3 plane. These results predict a shift as large as three degrees for the quasi-transverse wave. The shift in energy flux offers a new nondestructive technique for evaluating stress in composites.

  5. Evaluation of the dosimetric properties of a synthetic single crystal diamond detector in high energy clinical proton beams.

    PubMed

    Mandapaka, A K; Ghebremedhin, A; Patyal, B; Marinelli, Marco; Prestopino, G; Verona, C; Verona-Rinati, G

    2013-12-01

    To investigate the dosimetric properties of a synthetic single crystal diamond Schottky diode for accurate relative dose measurements in large and small field high-energy clinical proton beams. The dosimetric properties of a synthetic single crystal diamond detector were assessed by comparison with a reference Markus parallel plate ionization chamber, an Exradin A16 microionization chamber, and an Exradin T1a ion chamber. The diamond detector was operated at zero bias voltage at all times. Comparative dose distribution measurements were performed by means of fractional depth dose curves and lateral beam profiles in clinical proton beams of energies 155 and 250 MeV for a 14 cm square cerrobend aperture and 126 MeV for 3, 2, and 1 cm diameter circular brass collimators. ICRU Report No. 78 recommended beam parameters were used to compare fractional depth dose curves and beam profiles obtained using the diamond detector and the reference ionization chamber. Warm-up/stability of the detector response and linearity with dose were evaluated in a 250 MeV proton beam, and dose rate dependence was evaluated in a 126 MeV proton beam. The stem effect and the azimuthal angle dependence of the diode response were also evaluated. A maximum deviation in the diamond detector signal from the average reading of less than 0.5% was found during the warm-up irradiation procedure. The detector response showed good linear behavior as a function of dose, with observed deviations below 0.5% over a dose range from 50 to 500 cGy. The detector response was dose rate independent, with deviations below 0.5% over the investigated dose rates ranging from 85 to 300 cGy/min. The stem effect and azimuthal angle dependence of the diode signal were within 0.5%. Fractional depth dose curves and lateral beam profiles obtained with the diamond detector were in good agreement with those measured using the reference dosimeters.
The observed dosimetric properties of the synthetic single crystal diamond detector indicate that its behavior is proton energy independent and dose rate independent in the investigated energy and dose rate range and it is suitable for accurate relative dosimetric measurements in large as well as in small field high energy clinical proton beams.

  6. 78 FR 6232 - Energy Conservation Program: Test Procedures for Conventional Cooking Products With Induction...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-30

    [Flattened table excerpt from the Federal Register notice: measured cooking-top surface efficiencies (%), standard deviations, and confidence intervals by size category and technology type; the recoverable entries list electric coil units with efficiencies of roughly 61.8-79.8%. The table layout is not recoverable from this extract.]

  7. Modified subaperture tool influence functions of a flat-pitch polisher with reverse-calculated material removal rate.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2014-04-10

    Numerical simulation of subaperture tool influence functions (TIFs) is widely known as a critical procedure in computer-controlled optical surfacing. However, it may lack practicability in engineering because the emulation TIF (e-TIF) shows some discrepancy with the practical TIF (p-TIF), and the removal rate cannot be predicted by simulation. Prior to polishing a formal workpiece, opticians have to conduct TIF spot experiments on another sample to confirm the p-TIF with a quantitative removal rate, which is difficult and time-consuming for sequential polishing runs with different tools. This work is dedicated to applying e-TIFs in practical engineering by making improvements in two respects: (1) it modifies the pressure distribution model of a flat-pitch polisher by finite element analysis and least squares fitting to bring the removal shape of e-TIFs closer to p-TIFs (less than 5% relative deviation, validated by experiments); (2) it predicts the removal rate of e-TIFs by reverse-calculating the material removal volume of a pre-polishing run on the formal workpiece (relative deviations of peak and volume removal rates validated to be less than 5%). This omits TIF spot experiments for the particular flat-pitch tool employed and promotes the direct use of e-TIFs in the optimization of a dwell time map, which can largely save cost and increase fabrication efficiency.
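    Point (2), reverse-calculating the removal rate, reduces to dividing the removed material by the dwell time. A toy sketch with a synthetic Gaussian removal footprint (illustrative names and units; not the paper's full TIF model):

```python
import numpy as np

def removal_rates(z_before, z_after, dwell_s, pixel_mm=0.1):
    """Reverse-calculate removal rates from a pre-polishing run.
    z maps are surface heights in mm on a uniform grid; dwell_s is the
    total dwell time in seconds (toy model: uniform dwell)."""
    removed = z_before - z_after                  # material removed per pixel (mm)
    volume = removed.sum() * pixel_mm ** 2        # removed volume (mm^3)
    volume_rate = volume / dwell_s                # volumetric removal rate (mm^3/s)
    peak_rate = removed.max() / dwell_s           # peak removal rate (mm/s)
    return volume_rate, peak_rate

# Synthetic example: a Gaussian removal footprint, 1 um deep at the center.
y, x = np.mgrid[-20:21, -20:21] * 0.1             # grid in mm
removal = 1e-3 * np.exp(-(x**2 + y**2) / 2.0)
v_rate, p_rate = removal_rates(removal, np.zeros_like(removal), dwell_s=60.0)
```

    In the paper's scheme these reverse-calculated rates are what scale the emulated TIF so that dwell-time optimization can proceed without a separate spot experiment.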

  8. A Modified Differential Coherent Bit Synchronization Algorithm for BeiDou Weak Signals with Large Frequency Deviation.

    PubMed

    Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi

    2017-07-04

    BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, whose frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherence and maximum likelihood is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, with the differential delay time set to a multiple of the bit cycle to remove the influence of the NH code. Secondly, maximum likelihood function detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm at C/N0 values of 20-40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method at a frequency deviation of 50 Hz. This algorithm can remove the effect of the BeiDou NH code effectively and weaken the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak signal bit synchronization with large frequency deviation.
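    The differential-coherent step can be sketched in a few lines: delaying by exactly one bit cycle turns an unknown frequency deviation into a fixed phase rotation and cancels a bit-synchronous code, after which the bit edge is the alignment that maximizes a block-sum metric. This is a simplified stand-in for the paper's maximum-likelihood detector, and all signal parameters below are illustrative:

```python
import numpy as np

def estimate_bit_edge(corr, bit_len=20):
    """Sketch of the differential-coherent idea: multiplying each 1 ms
    correlator output by the conjugate of the output one bit cycle earlier
    turns a carrier frequency deviation into a constant phase rotation,
    to which the block sums are insensitive; the block-sum metric then
    peaks at the true bit alignment."""
    d = corr[bit_len:] * np.conj(corr[:-bit_len])     # delay = one bit cycle
    best_k, best_metric = 0, -1.0
    for k in range(bit_len):                          # candidate edge offsets (ms)
        usable = (len(d) - k) // bit_len * bit_len
        blocks = d[k:k + usable].reshape(-1, bit_len)
        metric = float(np.abs(blocks.sum(axis=1)).sum())
        if metric > best_metric:
            best_k, best_metric = k, metric
    return best_k

# Synthetic stream: 20 ms data bits observed as 1 ms correlator outputs,
# with a deliberately large residual frequency deviation (illustrative).
rng = np.random.default_rng(3)
bits = rng.choice([-1.0, 1.0], size=80)
k0 = 7                                                # true bit edge offset (ms)
n = np.arange(80 * 20 - k0)
corr = bits[(n + k0) // 20] * np.exp(2j * np.pi * 37.0 * n * 1e-3)
edge = estimate_bit_edge(corr)                        # recovers (20 - k0) % 20
```

    Direct coherent summation of `corr` over 20 ms would nearly cancel under this frequency deviation; the differential product removes that rotation before the edge search.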

  9. Large deviation probabilities for correlated Gaussian stochastic processes and daily temperature anomalies

    NASA Astrophysics Data System (ADS)

    Massah, Mozhdeh; Kantz, Holger

    2016-04-01

    As we have one and only one Earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDP) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables, for which there exists an analytical expression for the rate function, correlated variables such as auto-regressive (short-memory) and auto-regressive fractionally integrated moving average (long-memory) processes do not admit an analytical LDP. We study the LDP for these processes in order to see how correlation affects this probability in comparison to iid data. Although short-range correlations lead to a simple correction of the effective sample size, long-range correlations lead to a sub-exponential decay of the LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120-year-long time series of daily temperature anomalies measured in Potsdam (Germany).
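    The effect of short-range correlation on large deviation probabilities is easy to reproduce by Monte Carlo: for the same window length, an AR(1) process makes a given deviation of the time average far more likely than iid noise, consistent with the sample-size correction mentioned above (illustrative parameters; not the Potsdam data):

```python
import numpy as np

def tail_prob(series_fn, n=100, a=0.3, trials=4000, seed=0):
    """Monte Carlo estimate of P(|time average of n samples| > a)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        hits += abs(series_fn(rng, n).mean()) > a
    return hits / trials

def iid(rng, n):
    return rng.normal(size=n)

def ar1(rng, n, phi=0.7):
    # AR(1) with unit marginal variance: short-range (exponentially
    # decaying) correlation.
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + np.sqrt(1 - phi ** 2) * rng.normal()
    return x

p_iid = tail_prob(iid)
p_ar = tail_prob(ar1)
# Correlation inflates the variance of the time average by roughly
# (1 + phi) / (1 - phi), so large deviations of the mean become far
# more likely at the same window length.
```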

  10. Open inflation in the landscape

    NASA Astrophysics Data System (ADS)

    Yamauchi, Daisuke; Linde, Andrei; Naruko, Atsushi; Sasaki, Misao; Tanaka, Takahiro

    2011-08-01

    The open inflation scenario is attracting renewed interest in the context of the string landscape. Since there are a large number of metastable de Sitter vacua in the string landscape, tunneling transitions to lower metastable vacua through bubble nucleation occur quite naturally, which leads to a natural realization of open inflation. Although the deviation of Ω0 from unity is constrained to be small by observational bounds, we argue that the effect of this small deviation on the large-angle CMB anisotropies can be significant for tensor-type perturbations in the open inflation scenario. We consider the situation in which there is a large hierarchy between the energy scale of the quantum tunneling and that of the slow-roll inflation in the nucleated bubble. If the potential just after tunneling is steep enough, a rapid-roll phase appears before the slow-roll inflation. In this case the power spectrum is basically determined by the Hubble rate during the slow-roll inflation. On the other hand, if such a rapid-roll phase is absent, the power spectrum keeps the memory of the high energy density there in the large angular components. Furthermore, the amplitude of the large angular components can be enhanced by the effects of the wall fluctuation mode if the bubble wall tension is small. Therefore, although even the dominant quadrupole component is suppressed by the factor (1-Ω0)^2, one can construct models in which the deviation of Ω0 from unity is large enough to produce measurable effects. We also consider a more general class of models, where the false vacuum decay may occur due to Hawking-Moss tunneling, as well as models involving more than one scalar field. We discuss scalar perturbations in these models and point out that a large set of such models is already ruled out by observational data, unless there was a very long stage of slow-roll inflation after the tunneling. 
These results show that observational data allow us to test various assumptions concerning the structure of the string theory potentials and the duration of the last stage of inflation.

  11. River gradient anomalies reveal recent tectonic movements when assuming an exponential gradient decrease along a river course

    NASA Astrophysics Data System (ADS)

    Žibret, Gorazd; Žibret, Lea

    2017-03-01

    High-resolution digital elevation models, combined with GIS or other terrain-modelling software, open many new possibilities in geoscience. In this paper we develop, describe, and test a novel method, the GLA method, to detect active tectonic uplift or subsidence along river courses. It is a modification of Hack's SL-index method designed to overcome the latter's disadvantages. The core assumption of the GLA method is that over geological time river profiles quickly adjust to follow an exponential decrease in elevation along the river course. Any large deviation can be attributed to active tectonic movement, or to disturbances in erosion/sedimentation processes caused by an anthropogenic structure (e.g., an artificial dam). During the testing phase, the locations of identified deviations were compared to the locations of faults identified on a 1:100,000 geological map. Results show that higher-magnitude deviations are found within a maximum radius of 200 m from the fault, and the majority of detected deviations lie within a maximum radius of 600 m from faults or thrusts. However, these results are not the best that could be obtained, because the geological map that was used (the only one available for the area) is not of the appropriate scale and was therefore not precise enough. Comparison of deviation magnitudes against PSInSAR measurements of vertical displacements in the vicinity revealed that, in spite of the very few suitable points available, a good correlation between the two independent methods was obtained (R2 = 0.68 for the E research area and R2 = 0.69 for the W research area). The GLA method was applied to three test sites where previous studies have shown active tectonic movements. It shows that deviations occur at the intersections between active faults and river courses; it also correctly detected the apparent uplift attributed to the increased sedimentation rate above an artificial hydropower dam, and the increased erosion rate below it. 
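
    The core computation can be sketched on synthetic data: fit the assumed exponential elevation decrease in log space and flag residuals exceeding two standard deviations. All profile parameters and the injected anomaly below are hypothetical, not values from the paper.

```python
import math

# synthetic river profile: elevation follows h(x) = h0 * exp(-k*x), with a
# local offset injected between 40 and 50 km to mimic a tectonic disturbance
H0, K = 800.0, 0.03                    # hypothetical parameters (m, 1/km)
xs = [float(x) for x in range(0, 100)] # distance downstream, km
hs = [H0 * math.exp(-K * x) for x in xs]
for i, x in enumerate(xs):
    if 40 <= x < 50:
        hs[i] += 15.0                  # injected anomaly

# fit log(h) = log(h0) - k*x by least squares (exponential-decrease assumption)
ys = [math.log(h) for h in hs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# deviations from the fitted exponential profile; flag the largest ones
resid = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
sd = math.sqrt(sum(r * r for r in resid) / n)
flagged = [x for x, r in zip(xs, resid) if abs(r) > 2 * sd]
print(flagged)
```

    On this synthetic profile the flagged distances coincide with the injected anomaly, which is the behavior the GLA method exploits when comparing deviation locations with mapped faults.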
The method gives promising results, and it is acknowledged that the GLA method needs to be tested in other locations around the world.

  12. Modeling of adsorption dynamics at air-liquid interfaces using statistical rate theory (SRT).

    PubMed

    Biswas, M E; Chatzis, I; Ioannidis, M A; Chen, P

    2005-06-01

    A large number of natural and technological processes involve mass transfer at interfaces. Interfacial properties, e.g., adsorption, play a key role in such applications as wetting, foaming, coating, and stabilizing of liquid films. The mechanistic understanding of surface adsorption often assumes molecular diffusion in the bulk liquid and subsequent adsorption at the interface. Diffusion is well described by Fick's law, while adsorption kinetics is less understood and is commonly described using Langmuir-type empirical equations. In this study, a general theoretical model for adsorption kinetics/dynamics at the air-liquid interface is developed; in particular, a new kinetic equation based on the statistical rate theory (SRT) is derived. Similar to many reported kinetic equations, the new kinetic equation also involves a number of parameters, but all these parameters are theoretically obtainable. In the present model, the adsorption dynamics is governed by three dimensionless numbers: psi (ratio of adsorption thickness to diffusion length), lambda (ratio of the square of the adsorption thickness to the ratio of the adsorption to desorption rate constants), and Nk (ratio of the adsorption rate constant to the product of diffusion coefficient and bulk concentration). Numerical simulations for surface adsorption using the proposed model are carried out and verified. The difference in surface adsorption between the general and the diffusion-controlled model is estimated and presented graphically as contours of deviation. Three different regions of adsorption dynamics are identified: diffusion controlled (deviation less than 10%), mixed diffusion and transfer controlled (deviation in the range of 10-90%), and transfer controlled (deviation more than 90%). These three different modes predominantly depend on the value of Nk. The corresponding ranges of Nk for the studied values of psi are also identified.

  13. Moderate deviations-based importance sampling for stochastic recursive equations

    DOE PAGES

    Dupuis, Paul; Johnson, Dane

    2017-11-17

    Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.
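
    The subsolution-based schemes themselves are beyond an abstract-level sketch, but the underlying idea of an importance sampling change of measure for a rare average can be illustrated with a basic exponential tilt for iid Gaussians. This is a textbook simplification, not the authors' scheme; all parameters are assumed.

```python
import math
import random
import statistics

random.seed(2)

N = 100    # number of summands
A = 0.5    # rare threshold for the sample mean of N(0,1) variables
M = 20000  # importance-sampling draws

# exact value for comparison: P(mean > A) = P(Z > A*sqrt(N))
exact = statistics.NormalDist().cdf(-A * math.sqrt(N))

# exponentially tilted sampling: draw each summand from N(A, 1) and reweight
# the indicator by the likelihood ratio exp(-A*S + N*A^2/2)
est = 0.0
for _ in range(M):
    s = sum(random.gauss(A, 1) for _ in range(N))
    if s / N > A:
        est += math.exp(-A * s + N * A * A / 2)
est /= M
print(exact, est)
```

    Naive Monte Carlo with the same budget would almost never see the event (its probability is of order 1e-7), while the tilted estimator recovers it with a few percent relative error; subsolution-based schemes generalize the choice of tilt to dynamic problems.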

  14. Moderate deviations-based importance sampling for stochastic recursive equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupuis, Paul; Johnson, Dane

    Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.

  15. Implicit Incompressible SPH.

    PubMed

    Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias

    2013-07-25

    We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system does consider the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01% can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.
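
    A one-dimensional toy calculation, not the full IISPH solver, illustrates the velocity-based density deviation mentioned above: the SPH continuity equation predicts the density change from relative particle velocities, here for a uniformly compressing particle line. The kernel, spacing, and time step are assumptions chosen for the demonstration.

```python
# 1D toy: summation density and the velocity-based (continuity) density change
RHO0, DX = 1000.0, 0.1
H = 1.2 * DX
M = RHO0 * DX                       # particle mass for target density RHO0

def w(q, h):
    """Cubic spline kernel, 1D normalization 2/(3h)."""
    if q < 1:
        return (2 / (3 * h)) * (1 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2:
        return (2 / (3 * h)) * 0.25 * (2 - q) ** 3
    return 0.0

def dw(dx, h):
    """Kernel gradient dW/dx."""
    q = abs(dx) / h
    if q < 1:
        g = (2 / (3 * h * h)) * (-3 * q + 2.25 * q * q)
    elif q < 2:
        g = (2 / (3 * h * h)) * (-0.75 * (2 - q) ** 2)
    else:
        return 0.0
    return g if dx > 0 else -g

xs = [i * DX for i in range(-20, 21)]
vs = [-x for x in xs]               # uniform compression: div v = -1

i = xs.index(0.0)                   # center particle
rho_i = sum(M * w(abs(xs[i] - xj) / H, H) for xj in xs)

# SPH continuity equation: drho_i/dt = sum_j m_j (v_i - v_j) dW/dx
drho_dt = sum(M * (vs[i] - vj) * dw(xs[i] - xj, H) for xj, vj in zip(xs, vs))

dt = 0.001
rho_pred = rho_i + dt * drho_dt     # predicted density for the pressure solve
print(rho_i, drho_dt, rho_pred)
```

    For this compressing flow the predicted density rises above the rest density, which is exactly the deviation a PPE-based pressure solve would act to cancel; predicting it from velocities rather than positions is the robustness point made in the abstract.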

  16. Implicit incompressible SPH.

    PubMed

    Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias

    2014-03-01

    We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system does consider the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01 percent can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.

  17. Atomic rate coefficients in a degenerate plasma

    NASA Astrophysics Data System (ADS)

    Aslanyan, Valentin; Tallents, Greg

    2015-11-01

    The electrons in a dense, degenerate plasma follow Fermi-Dirac statistics, which in this regime deviate significantly from the usual Maxwell-Boltzmann approach used by many models. We present methods to calculate atomic rate coefficients for the Fermi-Dirac distribution and compare the ionization fraction of carbon calculated using both models. We have found that for densities close to solid, although the discrepancy is small under LTE conditions, using classical rate coefficients produces a large divergence in the ionization fraction in the presence of strong photoionizing radiation. We have also found that using these modified rates and the degenerate heat capacity may affect the time evolution of a plasma subject to extreme ultraviolet and x-ray radiation, such as that produced in free electron laser irradiation of solid targets.

  18. Complexities of follicle deviation during selection of a dominant follicle in Bos taurus heifers.

    PubMed

    Ginther, O J; Baldrighi, J M; Siddiqui, M A R; Araujo, E R

    2016-11-01

    Follicle deviation during a follicular wave is a continuation in growth rate of the dominant follicle (F1) and decreased growth rate of the largest subordinate follicle (F2). The reliability of using an F1 of 8.5 mm to represent the beginning of expected deviation for experimental purposes during waves 1 and 2 (n = 26 per wave) was studied daily in heifers. Each wave was subgrouped as follows: standard subgroup (F1 larger than F2 for 2 days preceding deviation and F2 > 7.0 mm on the day of deviation), undersized subgroup (F2 did not attain 7.0 mm by the day of deviation), and switched subgroup (F2 larger than F1 at least once on the 2 days before or on the day of deviation). For each wave, mean differences in diameter between F1 and F2 changed abruptly at expected deviation in the standard subgroup but began 1 day before expected deviation in the undersized and switched subgroups. Concentrations of FSH in the wave-stimulating FSH surge and an increase in LH centered on expected deviation did not differ among subgroups. Results for each wave indicated that (1) expected deviation (F1, 8.5 mm) was a reliable representation of actual deviation in the standard subgroup but not in the undersized and switched subgroups; (2) concentrations of the gonadotropins normalized to expected deviation were similar among the three subgroups, indicating that the day of deviation was related to diameter of F1 and not F2; and (3) defining an expected day of deviation for experimental use should consider both diameter of F1 and the characteristics of deviation. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. A Study of Two Dwarf Irregular Galaxies with Asymmetrical Star Formation Distributions

    NASA Astrophysics Data System (ADS)

    Hunter, Deidre A.; Gallardo, Samavarti; Zhang, Hong-Xin; Adamo, Angela; Cook, David O.; Oh, Se-Heon; Elmegreen, Bruce G.; Kim, Hwihyun; Kahre, Lauren; Ubeda, Leonardo; Bright, Stacey N.; Ryon, Jenna E.; Fumagalli, Michele; Sacchi, Elena; Kennicutt, R. C.; Tosi, Monica; Dale, Daniel A.; Cignoni, Michele; Messa, Matteo; Grebel, Eva K.; Gouliermis, Dimitrios A.; Sabbi, Elena; Grasha, Kathryn; Gallagher, John S., III; Calzetti, Daniela; Lee, Janice C.

    2018-03-01

    Two dwarf irregular galaxies, DDO 187 and NGC 3738, exhibit a striking pattern of star formation: intense star formation is taking place in a large region occupying roughly half of the inner part of the optical galaxy. We use data on the H I distribution and kinematics and stellar images and colors to examine the properties of the environment in the high star formation rate (HSF) halves of the galaxies in comparison with the low star formation rate halves. We find that the pressure and gas density are higher on the HSF sides by 30%–70%. In addition we find in both galaxies that the H I velocity fields exhibit significant deviations from ordered rotation and there are large regions of high-velocity dispersion and multiple velocity components in the gas beyond the inner regions of the galaxies. The conditions in the HSF regions are likely the result of large-scale external processes affecting the internal environment of the galaxies and enabling the current star formation there.

  20. LD-SPatt: large deviations statistics for patterns on Markov chains.

    PubMed

    Nuel, G

    2004-01-01

    Statistics on Markov chains are widely used for the study of patterns in biological sequences. Statistics on these models can be computed through several approaches, of which central limit theorem (CLT) Gaussian approximations are among the most popular. Unfortunately, in order to assess a pattern of interest, these methods have to deal with tail distribution events for which the CLT approximation is especially bad. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then we present applications of these results, focusing on numerical issues. LD-SPatt is the name of the GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability, and show that the large deviation estimates are more reliable than the Gaussian approximations in absolute values as well as in terms of ranking, and are at least as reliable as compound Poisson approximations. We finally discuss some further possible improvements and applications of this new method.
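
    The point that Gaussian approximations fail in the tail while large-deviation estimates capture the correct exponential order can be illustrated in the simplest iid (Bernoulli) setting rather than on Markov chains; the counts below are assumed for the demonstration.

```python
import math
from statistics import NormalDist

n, p, k = 1000, 0.1, 160           # observe 160 occurrences where 100 expected

# exact tail P(X >= k) for X ~ Binomial(n, p)
exact = sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Gaussian (CLT) approximation of the same tail
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
gauss = 1 - NormalDist(mu, sigma).cdf(k)

# large-deviation estimate exp(-n*I(a)), a = k/n, with the Bernoulli rate
# function I(a) = a*ln(a/p) + (1-a)*ln((1-a)/(1-p))
a = k / n
rate = a * math.log(a / p) + (1 - a) * math.log((1 - a) / (1 - p))
ld = math.exp(-n * rate)
print(exact, gauss, ld)
```

    The Gaussian approximation underestimates this tail by more than an order of magnitude, while the large-deviation estimate brackets it from above with the correct exponential rate, which is why rankings based on it remain reliable.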

  1. Photospheric Magnetic Field Properties of Flaring versus Flare-quiet Active Regions. II. Discriminant Analysis

    NASA Astrophysics Data System (ADS)

    Leka, K. D.; Barnes, G.

    2003-10-01

    We apply statistical tests based on discriminant analysis to the wide range of photospheric magnetic parameters described in a companion paper by Leka & Barnes, with the goal of identifying those properties that are important for the production of energetic events such as solar flares. The photospheric vector magnetic field data from the University of Hawai'i Imaging Vector Magnetograph are well sampled both temporally and spatially, and we include here data covering 24 flare-event and flare-quiet epochs taken from seven active regions. The mean value and rate of change of each magnetic parameter are treated as separate variables, thus evaluating both the parameter's state and its evolution, to determine which properties are associated with flaring. Considering single variables first, Hotelling's T2-tests show small statistical differences between flare-producing and flare-quiet epochs. Even pairs of variables considered simultaneously, which do show a statistical difference for a number of properties, have high error rates, implying a large degree of overlap of the samples. To better distinguish between flare-producing and flare-quiet populations, larger numbers of variables are simultaneously considered; lower error rates result, but no unique combination of variables is clearly the best discriminator. The sample size is too small to directly compare the predictive power of large numbers of variables simultaneously. Instead, we rank all possible four-variable permutations based on Hotelling's T2-test and look for the most frequently appearing variables in the best permutations, with the interpretation that they are most likely to be associated with flaring. These variables include an increasing kurtosis of the twist parameter and a larger standard deviation of the twist parameter, but a smaller standard deviation of the distribution of the horizontal shear angle and a horizontal field that has a smaller standard deviation but a larger kurtosis. 
To support the "sorting all permutations" method of selecting the most frequently occurring variables, we show that the results of a single 10-variable discriminant analysis are consistent with the ranking. We demonstrate that individually, the variables considered here have little ability to differentiate between flaring and flare-quiet populations, but with multivariable combinations, the populations may be distinguished.

  2. Numerical Large Deviation Analysis of the Eigenstate Thermalization Hypothesis

    NASA Astrophysics Data System (ADS)

    Yoshizawa, Toru; Iyoda, Eiki; Sagawa, Takahiro

    2018-05-01

    A plausible mechanism of thermalization in isolated quantum systems is based on the strong version of the eigenstate thermalization hypothesis (ETH), which states that all the energy eigenstates in the microcanonical energy shell have thermal properties. We numerically investigate the ETH by focusing on the large deviation property, which directly evaluates the ratio of athermal energy eigenstates in the energy shell. As a consequence, we have systematically confirmed that the strong ETH is indeed true even for near-integrable systems. Furthermore, we found that the finite-size scaling of the ratio of athermal eigenstates is a double exponential for nonintegrable systems. Our result illuminates the universal behavior of quantum chaos, and suggests that a large deviation analysis would serve as a powerful method to investigate thermalization in the presence of the large finite-size effect.

  3. Application of Statistical Methods of Rain Rate Estimation to Data From The TRMM Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Jones, J. A.; Iguchi, T.; Okamoto, K.; Liao, L.; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    The TRMM Precipitation Radar is well suited to statistical methods in that the measurements over any given region are sparsely sampled in time. Moreover, the instantaneous rain rate estimates are often of limited accuracy at high rain rates because of attenuation effects and at light rain rates because of receiver sensitivity. For the estimation of the time-averaged rain characteristics over an area, both errors are relevant. By enlarging the space-time region over which the data are collected, the sampling error can be reduced. However, the bias and distortion of the estimated rain distribution generally will remain if estimates at the high and low rain rates are not corrected. In this paper we use the TRMM PR data to investigate the behavior of two statistical methods whose purpose is to estimate the rain rate over large space-time domains. Examination of large-scale rain characteristics provides a useful starting point. The high correlation between the mean and standard deviation of rain rate implies that the conditional distribution of this quantity can be approximated by a one-parameter distribution. This property is used to explore the behavior of area-time-integral (ATI) methods, where the fractional area above a threshold is related to the mean rain rate. In the usual application of the ATI method, a correlation is established between these quantities. However, if a particular form of the rain rate distribution is assumed and if the ratio of the mean to the standard deviation is known, then not only the mean but the full distribution can be extracted from a measurement of the fractional area above a threshold. The second method is an extension of this idea, where the distribution is estimated from data over a range of rain rates chosen in an intermediate range where the effects of attenuation and poor sensitivity can be neglected. 
The advantage of estimating the distribution itself rather than the mean value is that it yields the fraction of rain contributed by the light and heavy rain rates. This is useful in estimating the fraction of rainfall contributed by the rain rates that go undetected by the radar. The results at high rain rates provide a cross-check on the usual attenuation correction methods that are applied at the highest resolution of the instrument.
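
    A sketch of the one-parameter idea: if the conditional rain-rate distribution is taken to be lognormal with a known mean-to-standard-deviation ratio (an assumption for illustration; the paper does not commit to this form), the fractional area above a threshold determines the full distribution. All numbers below are synthetic.

```python
import math
import random
from statistics import NormalDist

random.seed(3)

# assumed conditional model: lognormal with known coefficient of variation
# (std/mean fixed), so a single parameter (the mean) remains
CV = 1.5
SIGMA = math.sqrt(math.log(1 + CV * CV))   # lognormal shape from the CV

# synthetic "truth": raining pixels drawn from the model with mean 5 mm/h
true_mean = 5.0
mu = math.log(true_mean) - SIGMA * SIGMA / 2
rates = [random.lognormvariate(mu, SIGMA) for _ in range(200000)]

# ATI-style measurement: fractional area above a threshold tau
tau = 10.0
frac = sum(r > tau for r in rates) / len(rates)

# invert P(R > tau) = 1 - Phi((ln tau - mu)/sigma) for mu, then the mean
mu_hat = math.log(tau) - SIGMA * NormalDist().inv_cdf(1 - frac)
mean_hat = math.exp(mu_hat + SIGMA * SIGMA / 2)
print(frac, mean_hat)
```

    Recovering the whole distribution this way, rather than only the mean, is what allows the fractions of rainfall below the sensitivity limit and above the attenuation limit to be estimated.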

  4. Locality and nonlocality of classical restrictions of quantum spin systems with applications to quantum large deviations and entanglement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Roeck, W.; Maes, C.; Schütz, M.

    2015-02-15

    We study the projection on classical spins starting from quantum equilibria. We show Gibbsianness or quasi-locality of the resulting classical spin system for a class of gapped quantum systems at low temperatures, including quantum ground states. A consequence of Gibbsianness is the validity of a large deviation principle in the quantum system, which is known and here recovered in regimes of high temperature or for thermal states in one dimension. On the other hand, we give an example of a quantum ground state with strong nonlocality in the classical restriction, giving rise to what we call measurement-induced entanglement, while still satisfying a large deviation principle.

  5. Hoeffding Type Inequalities and their Applications in Statistics and Operations Research

    NASA Astrophysics Data System (ADS)

    Daras, Tryfon

    2007-09-01

    Large deviation theory is the branch of probability theory that deals with rare events. Sometimes these events can be described by a sum of random variables that deviates from its mean by more than a "normal" amount. A precise calculation of the probabilities of such events turns out to be crucial in a variety of different contexts (e.g., in probability theory, statistics, operations research, statistical physics, financial mathematics, etc.). Recent applications of the theory deal with random walks in random environments, interacting diffusions, heat conduction, and polymer chains [1]. In this paper we prove an inequality of exponential type, namely Theorem 2.1, which gives a large deviation upper bound for a specific sequence of random variables. Inequalities of this type have many applications in combinatorics [2]. The inequality generalizes already proven results of this type in the case of symmetric probability measures. As consequences of the inequality we obtain: (a) large deviation upper bounds for exchangeable Bernoulli sequences of random variables, generalizing results proven for independent and identically distributed Bernoulli sequences, and (b) a general form of Bernstein's inequality. We compare the inequality with large deviation results already proven by the author and discuss its advantages. Finally, using the inequality, we solve one of the basic problems of operations research (the bin packing problem) in the case of exchangeable random variables.
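
    For reference, the classical iid Bernoulli form of Hoeffding's inequality that such results generalize can be checked numerically; the parameters below are arbitrary.

```python
import math
import random

random.seed(4)

n, p, t = 200, 0.5, 0.1    # n Bernoulli(p) variables, deviation t above the mean

# Hoeffding's inequality: P(S_n/n - p >= t) <= exp(-2*n*t^2)
bound = math.exp(-2 * n * t * t)

# empirical check of the tail probability by simulation
trials = 20000
hits = 0
for _ in range(trials):
    s = sum(random.random() < p for _ in range(n))
    hits += (s / n - p) >= t
emp = hits / trials
print(bound, emp)
```

    The simulated tail probability sits well below the exponential bound, as it must; the paper's contribution is an analogous exponential bound when independence is weakened to exchangeability.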

  6. Quality control in interstitial brachytherapy of the breast using pulsed dose rate: treatment planning and dose delivery with an Ir-192 afterloading system.

    PubMed

    Mangold, C A; Rijnders, A; Georg, D; Van Limbergen, E; Pötter, R; Huyskens, D

    2001-01-01

    In the Radiotherapy Department of Leuven, about 20% of all breast cancer patients treated with breast conserving surgery and external radiotherapy receive an additional boost with pulsed dose rate (PDR) Ir-192 brachytherapy. An investigation was performed to assess the accuracy of the delivered PDR brachytherapy treatment. Secondly, the feasibility of in vivo measurements during PDR dose delivery was investigated. Two phantoms are manufactured to mimic a breast, one for thermoluminescent dosimetry (TLD) measurements, and one for dosimetry using radiochromic films. The TLD phantom allows measurements at 34 dose points in three planes including the basal dose points. The film phantom is designed in such a way that films can be positioned in a plane parallel and orthogonal to the needles. The dose distributions calculated with the TPS are in good agreement with both TLD and radiochromic film measurements (average deviations of point doses <+/-5%). However, close to the interface tissue-air the dose is overestimated by the TPS since it neglects the finite size of a breast and the associated lack of backscatter (average deviations of point doses -14%). Most deviations between measured and calculated doses, are in the order of magnitude of the uncertainty associated with the source strength specification, except for the point doses measured close to the skin. In vivo dosimetry during PDR brachytherapy treatment was found to be a valuable procedure to detect large errors, e.g. errors caused by an incorrect data transfer.

  7. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    NASA Astrophysics Data System (ADS)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach an investment goal, one has to select a combination of securities from portfolios containing a large number of securities. The past records of each security alone do not guarantee its future return. As many uncertain factors directly or indirectly influence the stock market, and some newer stock markets do not have enough historical data, experts' expectations and experience must be combined with past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors about the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred because it uses the absolute deviation of the rate of return of a portfolio, instead of the variance, as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from the BSE are used for illustration.
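
    A heavily simplified two-asset sketch illustrates how a pessimistic-optimistic parameter can enter such an objective. Everything here is an assumption for illustration: the returns and expert bounds are hypothetical, λ is a scalar rather than a vector, returns are crisp rather than fuzzy random, and a grid search stands in for the LP/ACO machinery of the actual model.

```python
# hypothetical scenario returns for two securities over 6 periods
r1 = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04]
r2 = [0.01, 0.02, -0.01, 0.03, 0.00, 0.01]
# hypothetical expert (pessimistic, optimistic) expected-return estimates
bounds = {1: (0.005, 0.020), 2: (0.008, 0.012)}

def sad(port):
    """Semi-absolute deviation: mean downside deviation from the mean return."""
    m = sum(port) / len(port)
    return sum(max(0.0, m - x) for x in port) / len(port)

def objective(w, lam, rho=1.0):
    # lam blends pessimistic (lam=1) and optimistic (lam=0) expert estimates
    e1 = lam * bounds[1][0] + (1 - lam) * bounds[1][1]
    e2 = lam * bounds[2][0] + (1 - lam) * bounds[2][1]
    port = [w * a + (1 - w) * b for a, b in zip(r1, r2)]
    expected = w * e1 + (1 - w) * e2
    return sad(port) - rho * expected      # downside risk minus weighted return

# grid search over weights for a pessimistic investor (lambda close to 1)
best_w = min((w / 100 for w in range(101)), key=lambda w: objective(w, lam=0.9))
print(best_w)
```

    Even in this toy version the optimizer prefers a diversified weight rather than either single security, because mixing lowers the semi-absolute deviation while the λ-weighted expected return penalizes the optimistic asset.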

  8. A comparison of portfolio selection models via application on ISE 100 index data

    NASA Astrophysics Data System (ADS)

    Altun, Emrah; Tatlidil, Hüseyin

    2013-10-01

    The Markowitz model, a classical approach to the portfolio optimization problem, relies on two important assumptions: the expected returns are multivariate normally distributed, and the investor is risk-averse. However, this model has not been extensively used in finance. Empirical results show that it is very hard to solve large-scale portfolio optimization problems with the Mean-Variance (M-V) model. An alternative, the Mean Absolute Deviation (MAD) model proposed by Konno and Yamazaki [7], has been used to remove most of the difficulties of the Markowitz Mean-Variance model. The MAD model does not need to assume that the rates of return are normally distributed, and it is based on linear programming. Another alternative portfolio model is the Mean-Lower Semi Absolute Deviation (M-LSAD) model proposed by Speranza [3]. We compare these models to determine which gives the more appropriate solution to investors.

  9. BIG BANG NUCLEOSYNTHESIS WITH A NON-MAXWELLIAN DISTRIBUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertulani, C. A.; Fuqua, J.; Hussein, M. S.

    The abundances of light elements based on the big bang nucleosynthesis model are calculated using the Tsallis non-extensive statistics. The impact of varying the non-extensive parameter q away from unity is compared to observations and to the abundance yields from the standard big bang model. We find large differences between the reaction rates and the abundances of light elements calculated with the extensive and the non-extensive statistics. We find that the observations are consistent with a non-extensive parameter q = 1 (+0.05/-0.12), indicating that a large deviation from the Boltzmann-Gibbs statistics (q = 1) is highly unlikely.

  10. Finite-time and finite-size scalings in the evaluation of large-deviation functions: Numerical approach in continuous time.

    PubMed

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

    2017-06-01

    Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool for their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can make their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
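
    The extrapolation step can be sketched as a simple fit. Assuming a 1/N finite-size bias (a hypothetical scaling form chosen for illustration; the scalings used in the paper may differ), the infinite-size limit is the intercept of a least-squares fit in 1/N on synthetic estimates.

```python
# synthetic estimates of a large-deviation function at increasing population
# sizes, with an assumed 1/N finite-size bias (hypothetical values)
Ns = [100, 200, 400, 800, 1600]
TRUE_PSI, B = -0.25, 3.0
psi = [TRUE_PSI + B / n for n in Ns]

# least-squares fit of psi(N) = psi_inf + b*(1/N); the intercept psi_inf is
# the infinite-size extrapolation
xs = [1.0 / n for n in Ns]
n = len(xs)
mx, my = sum(xs) / n, sum(psi) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, psi)) / \
    sum((x - mx) ** 2 for x in xs)
psi_inf = my - b * mx
print(psi_inf)
```

    On real population-dynamics output the estimates are noisy and the fit would be weighted accordingly, but the principle of reading off the limit from the intercept is the same.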

  11. Categorizing moving objects into film genres: the effect of animacy attribution, emotional response, and the deviation from non-fiction.

    PubMed

    Visch, Valentijn T; Tan, Ed S

    2009-02-01

    The reported study follows in the footsteps of Heider and Simmel (1944) [Heider, F., & Simmel, M. (1944). An experimental study of apparent behavior. American Journal of Psychology, 57, 243-249] and Michotte (1946/1963) [Michotte, A. (1963). The perception of causality (T.R. Miles & E. Miles, Trans.). London: Methuen (Original work published 1946)], who demonstrated the role of object movement in attributions of life-likeness to figures. It goes one step further by studying the categorization of film scenes into genres as a function of object movements. In an animated film scene portraying a chase, movements of the chasing object were systematically varied in five parameters: velocity, efficiency, fluency, detail, and deformation. The object movements were categorized by viewers into genres: non-fiction, comedy, drama, and action. Besides this categorization, viewers rated their animacy attribution and emotional response. Results showed that non-expert viewers were consistent in categorizing the genres according to object movement parameters. The size of the deviation from the unmanipulated movement scene determined the assignment of a target scene to one of the fiction genres: small and moderate deviations resulted in categorization as drama and action, and large deviations as comedy. The results suggest that genre classification is achieved by at least three distinct cognitive processes: (a) animacy attribution, which influences the fiction versus non-fiction classification; (b) emotional responses, which influence the classification of a specific fiction genre; and (c) the amount of deviation from reality, at least with regard to movements.

  12. The Laplace method for probability measures in Banach spaces

    NASA Astrophysics Data System (ADS)

    Piterbarg, V. I.; Fatalov, V. R.

    1995-12-01

    Contents §1. Introduction Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter §2. The large deviation principle and logarithmic asymptotics of continual integrals §3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method 3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I) 3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II) 3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176]) 3.4. Exact asymptotics of large deviations of Gaussian norms §4. The Laplace method for distributions of sums of independent random elements with values in Banach space 4.1. The case of a non-degenerate minimum point ([137], I) 4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II) §5. Further examples 5.1. The Laplace method for the local time functional of a Markov symmetric process ([217]) 5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116]) 5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm 5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41]) Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions §6. Pickands' method of double sums 6.1. General situations 6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process 6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process §7. Probabilities of large deviations of trajectories of Gaussian fields 7.1. Homogeneous fields and fields with constant dispersion 7.2. Finitely many maximum points of dispersion 7.3. Manifold of maximum points of dispersion 7.4. Asymptotics of distributions of maxima of Wiener fields §8. 
Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space 8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1 8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type \chi^2 8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74] 8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes 8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process Bibliography

  13. Large-deviation probabilities for correlated Gaussian processes and intermittent dynamical systems

    NASA Astrophysics Data System (ADS)

    Massah, Mozhdeh; Nicol, Matthew; Kantz, Holger

    2018-05-01

    In its classical version, the theory of large deviations makes quantitative statements about the probability of outliers when estimating time averages, provided the time series data are independent and identically distributed. We study large-deviation probabilities (LDPs) for time averages in short- and long-range correlated Gaussian processes and show that long-range correlations lead to subexponential decay of LDPs. A particular deterministic intermittent map can, depending on a control parameter, also generate long-range correlated time series. We illustrate numerically, in agreement with the mathematical literature, that this type of intermittency leads to a power-law decay of LDPs. The power-law decay holds irrespective of whether the correlation time is finite or infinite, and hence irrespective of whether the central limit theorem applies.
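    For i.i.d. data, the exponential decay that this work contrasts with the correlated case follows Cramér's theorem. A small numerical check, assuming standard normal data so the tail probability of the sample mean is available in closed form:

```python
import math

def tail_prob(n, a):
    """Exact P(sample mean of n i.i.d. N(0,1) variables > a): the mean is N(0, 1/n)."""
    return 0.5 * math.erfc(a * math.sqrt(n) / math.sqrt(2.0))

# The empirical rate -ln(P)/n should decrease toward Cramer's rate I(a) = a^2/2,
# which is 0.125 for a = 0.5.
rates = [-math.log(tail_prob(n, 0.5)) / n for n in (10, 100, 1000)]
```

    Long-range correlations break precisely this exponential-in-n scaling, producing the subexponential decay reported above.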

  14. Do health care workforce, population, and service provision significantly contribute to the total health expenditure? An econometric analysis of Serbia.

    PubMed

    Santric-Milicevic, M; Vasic, V; Terzic-Supic, Z

    2016-08-15

    In times of austerity, the availability of econometric health knowledge assists policy-makers in understanding and balancing health expenditure with health care plans within fiscal constraints. The objective of this study is to explore whether the health workforce supply of the public health care sector, population number, and utilization of inpatient care significantly contribute to total health expenditure. The dependent variable is the total health expenditure (THE) in Serbia from the years 2003 to 2011. The independent variables are the number of health workers employed in the public health care sector, population number, and inpatient care discharges per 100 population. The statistical analyses include the quadratic interpolation method, natural logarithm and differentiation, and multiple linear regression analyses. The level of significance is set at P < 0.05. The regression model captures 90 % of all variations of the observed dependent variable (adjusted R square), and the model is significant (P < 0.001). The THE growth rate increased by 1.21 standard deviations with an increase in the health workforce growth rate by 1 standard deviation. Furthermore, it decreased by 1.12 standard deviations with an increase in the (negative) population growth rate by 1 standard deviation. Finally, it increased by 0.38 standard deviations with an increase in the growth rate of inpatient care discharges per 100 population by 1 standard deviation (P < 0.001). The study results demonstrate that the government has been making an effort to strongly control health budget growth. Exploring causality relationships between health expenditure and the health workforce is important for countries that are trying to consolidate their public health finances and achieve universal health coverage at the same time.
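    The effect sizes quoted here are standardized regression coefficients (effects measured in standard deviations of each variable). A minimal sketch of how such coefficients are obtained, using hypothetical data in place of the Serbian series:

```python
import numpy as np

def standardized_betas(X, y):
    """Ordinary least squares on z-scored variables: each coefficient is the
    change in y (in SDs of y) per 1-SD change in the corresponding predictor."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta
```

    With a single predictor the standardized coefficient equals the Pearson correlation, which is a quick sanity check on any implementation.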

  15. The most likely voltage path and large deviations approximations for integrate-and-fire neurons.

    PubMed

    Paninski, Liam

    2006-08-01

    We develop theory and numerical methods for computing the most likely subthreshold voltage path of a noisy integrate-and-fire (IF) neuron, given observations of the neuron's superthreshold spiking activity. This optimal voltage path satisfies a second-order ordinary differential (Euler-Lagrange) equation which may be solved analytically in a number of special cases, and which may be solved numerically in general via a simple "shooting" algorithm. Our results are applicable for both linear and nonlinear subthreshold dynamics, and in certain cases may be extended to correlated subthreshold noise sources. We also show how this optimal voltage may be used to obtain approximations to (1) the likelihood that an IF cell with a given set of parameters was responsible for the observed spike train; and (2) the instantaneous firing rate and interspike interval distribution of a given noisy IF cell. The latter probability approximations are based on the classical Freidlin-Wentzell theory of large deviations principles for stochastic differential equations. We close by comparing this most likely voltage path to the true observed subthreshold voltage trace in a case when intracellular voltage recordings are available in vitro.
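    The "shooting" idea used here is generic: integrate the second-order ODE forward from a guessed initial slope and adjust the guess until the far boundary condition is met. A sketch on the toy equation v'' = v with v(0) = 0, v(1) = 1 (a stand-in chosen for its known solution, not the paper's Euler-Lagrange equation):

```python
import math

def integrate(slope, n=1000):
    """RK4 integration of v'' = v on [0, 1] with v(0) = 0, v'(0) = slope; returns v(1)."""
    h = 1.0 / n
    v, w = 0.0, slope            # w = v'
    f = lambda v, w: (w, v)      # first-order system: v' = w, w' = v
    for _ in range(n):
        k1 = f(v, w)
        k2 = f(v + h / 2 * k1[0], w + h / 2 * k1[1])
        k3 = f(v + h / 2 * k2[0], w + h / 2 * k2[1])
        k4 = f(v + h * k3[0], w + h * k3[1])
        v += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        w += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return v

def shoot(target=1.0, lo=0.0, hi=5.0):
    """Bisect on the unknown initial slope until v(1) matches the boundary value."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if integrate(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    For this toy problem the exact answer is v'(0) = 1/sinh(1) ≈ 0.8509, so the result is easy to verify.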

  16. Distribution of diameters for Erdős-Rényi random graphs.

    PubMed

    Hartmann, A K; Mézard, M

    2018-03-01

    We study the distribution of diameters d of Erdős-Rényi random graphs with average connectivity c. The diameter d is the maximum among all the shortest distances between pairs of nodes in a graph and an important quantity for all dynamic processes taking place on graphs. Here we study the distribution P(d) numerically for various values of c, in the nonpercolating and percolating regimes. Using large-deviation techniques, we are able to reach small probabilities like 10^{-100} which allow us to obtain the distribution over basically the full range of the support, for graphs up to N=1000 nodes. For values c<1, our results are in good agreement with analytical results, proving the reliability of our numerical approach. For c>1 the distribution is more complex and no complete analytical results are available. For this parameter range, P(d) exhibits an inflection point, which we found to be related to a structural change of the graphs. For all values of c, we determined the finite-size rate function Φ(d/N) and were able to extrapolate numerically to N→∞, indicating that the large-deviation principle holds.
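    The quantity being histogrammed can be reproduced directly for small graphs by simple sampling (the large-deviation technique is what makes probabilities like 10^{-100} reachable and is not sketched here). A minimal sketch, assuming edge probability p = c/(N-1) so that c is the mean degree:

```python
import random
from collections import deque

def er_graph(n, c, seed=0):
    """Erdős–Rényi G(n, p) with mean degree c, i.e. p = c / (n - 1)."""
    rng = random.Random(seed)
    p = c / (n - 1)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def diameter(adj):
    """Maximum over all node pairs of the shortest-path distance (taken within
    connected components), computed by a BFS from every node."""
    best = 0
    for s in range(len(adj)):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best
```

    Repeating this over many sampled graphs gives P(d) down to roughly 1/(number of samples); beyond that, the biased large-deviation sampling of the paper is needed.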

  17. Distribution of diameters for Erdős-Rényi random graphs

    NASA Astrophysics Data System (ADS)

    Hartmann, A. K.; Mézard, M.

    2018-03-01

    We study the distribution of diameters d of Erdős-Rényi random graphs with average connectivity c. The diameter d is the maximum among all the shortest distances between pairs of nodes in a graph and an important quantity for all dynamic processes taking place on graphs. Here we study the distribution P(d) numerically for various values of c, in the nonpercolating and percolating regimes. Using large-deviation techniques, we are able to reach small probabilities like 10^{-100} which allow us to obtain the distribution over basically the full range of the support, for graphs up to N=1000 nodes. For values c<1, our results are in good agreement with analytical results, proving the reliability of our numerical approach. For c>1 the distribution is more complex and no complete analytical results are available. For this parameter range, P(d) exhibits an inflection point, which we found to be related to a structural change of the graphs. For all values of c, we determined the finite-size rate function Φ(d/N) and were able to extrapolate numerically to N→∞, indicating that the large-deviation principle holds.

  18. Endometrioid adenocarcinoma of the uterus with a minimal deviation invasive pattern.

    PubMed

    Landry, D; Mai, K T; Senterman, M K; Perkins, D G; Yazdi, H M; Veinot, J P; Thomas, J

    2003-01-01

    Minimal deviation adenocarcinoma of endometrioid type is a rare pathological entity. We describe a variant of typical endometrioid adenocarcinoma associated with minimal deviation adenocarcinoma of endometrioid type. One 'pilot' case of minimal deviation adenocarcinoma of endometrioid type associated with typical endometrioid adenocarcinoma was encountered at our institution in 2001. A second case of the same type was received in consultation. We reviewed 168 consecutive hysterectomy specimens diagnosed with 'endometrioid adenocarcinoma' specifically to identify areas of minimal deviation adenocarcinoma of endometrioid type. Immunohistochemistry was done with the following antibodies: MIB1, p53, oestrogen receptor (ER), progesterone receptor (PR), cytokeratin 7 (CK7), cytokeratin 20 (CK20), carcinoembryonic antigen (CEA), and vimentin (VIM). Four additional cases of minimal deviation adenocarcinoma of endometrioid type were identified. All six cases of minimal deviation adenocarcinoma of endometrioid type were associated with superficial endometrioid adenocarcinoma. In two cases with a large amount of minimal deviation adenocarcinoma of endometrioid type, the cervix was involved. The immunoprofile of two representative cases was ER+, PR+, CK7+, CK20-, CEA-, VIM+. MIB1 immunostaining of four cases revealed little proliferative activity of the minimal deviation adenocarcinoma of endometrioid type glandular cells (0-1%) compared with the associated 'typical' endometrioid adenocarcinoma (20-30%). The same four cases showed no p53 immunostaining in minimal deviation adenocarcinoma of endometrioid type compared with a range of positive staining in the associated endometrioid adenocarcinoma. Minimal deviation adenocarcinoma of endometrioid type more often develops as a result of differentiation from typical endometrioid adenocarcinoma than de novo. 
Due to its deceptively benign microscopic appearance, minimal deviation adenocarcinoma of endometrioid type may be overlooked and may lead to incorrect assessment of tumour depth and pathological stage. There was a tendency for tumour with a large amount of minimal deviation adenocarcinoma of endometrioid type to invade the cervix.

  19. Adaptive Gain-based Stable Power Smoothing of a DFIG

    DOE PAGES

    Muljadi, Eduard; Lee, Hyewon; Hwang, Min; ...

    2017-11-01

    In a power system that has a high wind penetration, the output power fluctuation of a large-scale wind turbine generator (WTG) caused by the varying wind speed increases the maximum frequency deviation, which is an important metric to assess the quality of electricity, because of the reduced system inertia. This paper proposes a stable power-smoothing scheme of a doubly-fed induction generator (DFIG) that can suppress the maximum frequency deviation, particularly for a power system with a high wind penetration. To do this, the proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while guaranteeing the stable operation of a DFIG, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. Here, the simulation results based on the IEEE 14-bus system demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WTG under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.
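    The gain schedule described (high gain when the rotor speed and/or frequency deviation is large, low gain near the minimum rotor speed to preserve stability) can be sketched as a simple function. The functional form and all constants below are illustrative assumptions, not the paper's controller:

```python
def adaptive_gain(rotor_speed_pu, freq_dev_hz, k0=20.0, w_min=0.7, w_max=1.25):
    """Illustrative adaptive gain for the supplementary smoothing loop: zero at
    the minimum rotor speed (so the turbine is never over-decelerated) and
    growing with both the rotor-speed headroom and |frequency deviation|.
    k0, w_min, w_max are hypothetical per-unit constants."""
    headroom = max(0.0, min(1.0, (rotor_speed_pu - w_min) / (w_max - w_min)))
    return k0 * headroom * abs(freq_dev_hz)
```

    A supplementary power command of the form ΔP = -gain × Δf, added to the MPPT reference, would then counteract frequency excursions more strongly exactly when kinetic-energy headroom is available.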

  20. Adaptive Gain-based Stable Power Smoothing of a DFIG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muljadi, Eduard; Lee, Hyewon; Hwang, Min

    In a power system that has a high wind penetration, the output power fluctuation of a large-scale wind turbine generator (WTG) caused by the varying wind speed increases the maximum frequency deviation, which is an important metric to assess the quality of electricity, because of the reduced system inertia. This paper proposes a stable power-smoothing scheme of a doubly-fed induction generator (DFIG) that can suppress the maximum frequency deviation, particularly for a power system with a high wind penetration. To do this, the proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while guaranteeing the stable operation of a DFIG, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. Here, the simulation results based on the IEEE 14-bus system demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WTG under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.

  1. Association of auricular pressing and heart rate variability in pre-exam anxiety students.

    PubMed

    Wu, Wocao; Chen, Junqi; Zhen, Erchuan; Huang, Huanlin; Zhang, Pei; Wang, Jiao; Ou, Yingyi; Huang, Yong

    2013-03-25

    A total of 30 students scoring between 12 and 20 on the Test Anxiety Scale, who had been in an anxious state for > 24 hours, and 30 normal control students were recruited. Indices of heart rate variability were recorded using an Actiheart electrocardiogram recorder at 10 minutes before auricular pressing, in the first half of stimulation, and in the second half of stimulation. The results revealed that the standard deviation of all normal-to-normal intervals and the root mean square of successive differences of normal-to-normal intervals were significantly increased after stimulation. The heart rate variability triangular index, very-low-frequency power, low-frequency power, and the ratio of low-frequency to high-frequency power were increased to different degrees after stimulation. Compared with normal controls, the root mean square of successive differences of normal-to-normal intervals was significantly increased in anxious students following auricular pressing. These results indicated that auricular pressing can elevate heart rate variability, especially the root mean square of successive differences of normal-to-normal intervals, in students with pre-exam anxiety.
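    The two time-domain indices central to this result, SDNN and RMSSD, are computed directly from the beat-to-beat (normal-to-normal, NN) interval series. A minimal sketch:

```python
import math

def sdnn(nn_ms):
    """SDNN: sample standard deviation of all NN intervals, in ms."""
    m = sum(nn_ms) / len(nn_ms)
    return math.sqrt(sum((x - m) ** 2 for x in nn_ms) / (len(nn_ms) - 1))

def rmssd(nn_ms):
    """RMSSD: root mean square of successive NN-interval differences, in ms."""
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

    SDNN reflects overall variability, while RMSSD is dominated by beat-to-beat (largely vagally mediated) changes, which is why the two indices can move differently under an intervention.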

  2. Association of auricular pressing and heart rate variability in pre-exam anxiety students

    PubMed Central

    Wu, Wocao; Chen, Junqi; Zhen, Erchuan; Huang, Huanlin; Zhang, Pei; Wang, Jiao; Ou, Yingyi; Huang, Yong

    2013-01-01

    A total of 30 students scoring between 12 and 20 on the Test Anxiety Scale, who had been in an anxious state for > 24 hours, and 30 normal control students were recruited. Indices of heart rate variability were recorded using an Actiheart electrocardiogram recorder at 10 minutes before auricular pressing, in the first half of stimulation, and in the second half of stimulation. The results revealed that the standard deviation of all normal-to-normal intervals and the root mean square of successive differences of normal-to-normal intervals were significantly increased after stimulation. The heart rate variability triangular index, very-low-frequency power, low-frequency power, and the ratio of low-frequency to high-frequency power were increased to different degrees after stimulation. Compared with normal controls, the root mean square of successive differences of normal-to-normal intervals was significantly increased in anxious students following auricular pressing. These results indicated that auricular pressing can elevate heart rate variability, especially the root mean square of successive differences of normal-to-normal intervals, in students with pre-exam anxiety. PMID:25206734

  3. Size-dependent standard deviation for growth rates: Empirical results and theoretical modeling

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H. Eugene; Grosse, I.

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β≈0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β≈-0.08 . Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.
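    The scaling exponent β in σ(R) ~ S^(-β) is typically read off as the slope of a log-log regression of the growth-rate standard deviation against the average size. A sketch on synthetic data with a known exponent (0.14, the value reported for wages):

```python
import numpy as np

def scaling_exponent(sizes, sigmas):
    """Slope of log(sigma) versus log(size); for sigma ~ size^(-beta)
    the fitted slope is -beta."""
    slope, _intercept = np.polyfit(np.log(sizes), np.log(sigmas), 1)
    return -slope

# Synthetic check: sigma generated with beta = 0.14 exactly.
sizes = np.array([1e2, 1e4, 1e6, 1e8])
sigmas = sizes ** -0.14
```

    On real data the points scatter around the power law, so the fitted slope carries an error bar; binning by size before fitting is a common refinement.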

  4. Size-dependent standard deviation for growth rates: empirical results and theoretical modeling.

    PubMed

    Podobnik, Boris; Horvatic, Davor; Pammolli, Fabio; Wang, Fengzhong; Stanley, H Eugene; Grosse, I

    2008-05-01

    We study annual logarithmic growth rates R of various economic variables such as exports, imports, and foreign debt. For each of these variables we find that the distributions of R can be approximated by double exponential (Laplace) distributions in the central parts and power-law distributions in the tails. For each of these variables we further find a power-law dependence of the standard deviation σ(R) on the average size of the economic variable with a scaling exponent surprisingly close to that found for the gross domestic product (GDP) [Phys. Rev. Lett. 81, 3275 (1998)]. By analyzing annual logarithmic growth rates R of wages of 161 different occupations, we find a power-law dependence of the standard deviation σ(R) on the average value of the wages with a scaling exponent β ≈ 0.14 close to those found for the growth of exports, imports, debt, and the growth of the GDP. In contrast to these findings, we observe for payroll data collected from 50 states of the USA that the standard deviation σ(R) of the annual logarithmic growth rate R increases monotonically with the average value of payroll. However, also in this case we observe a power-law dependence of σ(R) on the average payroll with a scaling exponent β ≈ -0.08. Based on these observations we propose a stochastic process for multiple cross-correlated variables where for each variable (i) the distribution of logarithmic growth rates decays exponentially in the central part, (ii) the distribution of the logarithmic growth rate decays algebraically in the far tails, and (iii) the standard deviation of the logarithmic growth rate depends algebraically on the average size of the stochastic variable.

  5. Snaring large serrated polyps.

    PubMed

    Liang, Jennifer; Kalady, Matthew F; Church, James

    2013-05-01

    Serrated polyps of the large bowel are potentially premalignant, difficult to see, but important to remove. Few studies describe the technique or outcomes of serrated polypectomy. We sought to present outcomes of a series of polypectomies of large serrated polyps in comparison to a series of endoscopic resections of large adenomas. This retrospective, comparative, single-endoscopist study was performed in an outpatient colonoscopy department of a tertiary referral medical center. Patients had outpatient colonoscopy where a large (≥2 cm) serrated polyp or adenoma was removed. Outcomes were completeness of excision and complications of polypectomy. A database of endoscopic polypectomies was reviewed. Polypectomy of large serrated polyps was compared with polypectomy of large adenomas. There were 132 large serrated polyps in 112 patients and 563 adenomas in 428 patients. More serrated polyps were right sided (120 of 130, 92.3 %, vs. 379 of 563, 67 %) (p < 0.0001). The serrated polyps were smaller than the adenomas (mean ± standard deviation: 25.5 ± 7.9 mm vs. 36.8 ± 16.9 mm; p < 0.001). There were four complications of serrated polypectomy in four patients (4 % of polyps, 5 % of patients): three postpolypectomy bleeds and one postpolypectomy syndrome. There were 33 complications of adenoma removal (31 postpolypectomy bleeds and two postpolypectomy syndromes) (6.9 % of polyps, p = 0.376; 8.4 % of patients, p = 0.371). On follow-up, 36 of 51 patients (71 %) with serrated polyps had metachronous lesions compared to 133 of 298 patients (45 %) with adenomas (p < 0.0001). There were fewer residual polyps in the serrated group (4 of 47 vs. 64 of 298, p = 0.001). Removal of large serrated colorectal polyps is no more complicated than polypectomy of similarly sized adenomas. However, large serrated polyps have a higher rate of metachronous polyps than similarly sized adenomas, and surveillance should be adapted to reflect these findings.

  6. Power-Smoothing Scheme of a DFIG Using the Adaptive Gain Depending on the Rotor Speed and Frequency Deviation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyewon; Hwang, Min; Muljadi, Eduard

    In an electric power grid that has a high penetration level of wind, the power fluctuation of a large-scale wind power plant (WPP) caused by varying wind speeds deteriorates the system frequency regulation. This paper proposes a power-smoothing scheme of a doubly-fed induction generator (DFIG) that significantly mitigates the system frequency fluctuation while preventing over-deceleration of the rotor speed. The proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while preventing over-deceleration of the rotor speed, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. In conclusion, the simulation results based on the IEEE 14-bus system clearly demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WPP under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.

  7. Power-Smoothing Scheme of a DFIG Using the Adaptive Gain Depending on the Rotor Speed and Frequency Deviation

    DOE PAGES

    Lee, Hyewon; Hwang, Min; Muljadi, Eduard; ...

    2017-04-18

    In an electric power grid that has a high penetration level of wind, the power fluctuation of a large-scale wind power plant (WPP) caused by varying wind speeds deteriorates the system frequency regulation. This paper proposes a power-smoothing scheme of a doubly-fed induction generator (DFIG) that significantly mitigates the system frequency fluctuation while preventing over-deceleration of the rotor speed. The proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while preventing over-deceleration of the rotor speed, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. In conclusion, the simulation results based on the IEEE 14-bus system clearly demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WPP under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.

  8. Sampling rare fluctuations of discrete-time Markov chains

    NASA Astrophysics Data System (ADS)

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.
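    For a finite-state chain with transition matrix P and an observable that counts visits to a set of states, the SCGF is the log of the largest eigenvalue of a "tilted" transition matrix, and the large-deviation rate function follows by Legendre transform. A sketch for an illustrative two-state chain (an assumption for illustration, not the active-matter model of the paper):

```python
import numpy as np

# Illustrative two-state chain; the observable A counts time spent in state 1.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def scgf(s):
    """SCGF psi(s): log of the largest eigenvalue (in modulus) of the tilted
    matrix P_tilt[x, y] = P[x, y] * exp(s * 1{y == 1})."""
    tilt = P * np.exp(s * np.array([0.0, 1.0]))  # scales the y = 1 column by e^s
    return np.log(np.max(np.abs(np.linalg.eigvals(tilt))))

def rate_function(a, s_grid=np.linspace(-5, 5, 2001)):
    """Numerical Legendre transform I(a) = sup_s [ s*a - psi(s) ]."""
    return np.max(s_grid * a - np.array([scgf(s) for s in s_grid]))
```

    The rate function vanishes at the typical value of the time average (here the stationary occupation of state 1, which is 1/3 for this chain) and is positive elsewhere, quantifying how rare other values are.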

  9. Sampling rare fluctuations of discrete-time Markov chains.

    PubMed

    Whitelam, Stephen

    2018-03-01

    We describe a simple method that can be used to sample the rare fluctuations of discrete-time Markov chains. We focus on the case of Markov chains with well-defined steady-state measures, and derive expressions for the large-deviation rate functions (and upper bounds on such functions) for dynamical quantities extensive in the length of the Markov chain. We illustrate the method using a series of simple examples, and use it to study the fluctuations of a lattice-based model of active matter that can undergo motility-induced phase separation.

  10. Combining Load and Motor Encoders to Compensate Nonlinear Disturbances for High Precision Tracking Control of Gear-Driven Gimbal

    PubMed Central

    Tang, Tao; Chen, Sisi; Huang, Xuanlin; Yang, Tao; Qi, Bo

    2018-01-01

    High-performance position control can be improved by the compensation of disturbances in a gear-driven control system. This paper presents a model-free disturbance observer (DOB) based on sensor fusion to reduce error-related disturbances in a gear-driven gimbal. The DOB uses the rate deviation to detect disturbances for the implementation of a high-gain compensator. Compared with the angular position signal, the rate deviation between the load and the motor reveals the disturbances existing in the gear-driven gimbal more quickly. Owing to the high bandwidth of the motor rate closed loop, an inverse model of the plant is not necessary to implement the DOB. Moreover, the DOB requires neither complex modeling of the plant nor additional sensors: without rate sensors providing the angular rate, the rate deviation is easily detected by encoders mounted on the motor side and the load side, respectively. Extensive experiments are provided to demonstrate the benefits of the proposed algorithm. PMID:29498643

  11. Combining Load and Motor Encoders to Compensate Nonlinear Disturbances for High Precision Tracking Control of Gear-Driven Gimbal.

    PubMed

    Tang, Tao; Chen, Sisi; Huang, Xuanlin; Yang, Tao; Qi, Bo

    2018-03-02

    High-performance position control can be improved by the compensation of disturbances in a gear-driven control system. This paper presents a model-free disturbance observer (DOB) based on sensor fusion to reduce error-related disturbances in a gear-driven gimbal. The DOB uses the rate deviation to detect disturbances for the implementation of a high-gain compensator. Compared with the angular position signal, the rate deviation between the load and the motor reveals the disturbances existing in the gear-driven gimbal more quickly. Owing to the high bandwidth of the motor rate closed loop, an inverse model of the plant is not necessary to implement the DOB. Moreover, the DOB requires neither complex modeling of the plant nor additional sensors: without rate sensors providing the angular rate, the rate deviation is easily detected by encoders mounted on the motor side and the load side, respectively. Extensive experiments are provided to demonstrate the benefits of the proposed algorithm.

  12. The infection rate of Daphnia magna by Pasteuria ramosa conforms with the mass-action principle.

    PubMed

    Regoes, R R; Hottinger, J W; Sygnarski, L; Ebert, D

    2003-10-01

    In simple epidemiological models that describe the interaction of hosts with their parasites, the infection process is commonly assumed to be governed by the law of mass action, i.e. it is assumed that the infection rate depends linearly on the densities of the host and the parasite. The mass-action assumption, however, can be problematic if certain aspects of the host-parasite interaction are very pronounced, such as spatial compartmentalization, host immunity which may protect from infection with low doses, or host heterogeneity with regard to susceptibility to infection. As deviations from a mass-action infection rate have consequences for the dynamics of the host-parasite system, it is important to test for the appropriateness of the mass-action assumption in a given host-parasite system. In this paper, we examine the relationship between the infection rate and the parasite inoculum for the water flea Daphnia magna and its bacterial parasite Pasteuria ramosa. We measured the fraction of infected hosts after exposure to 14 different doses of the parasite. We find that the observed relationship between the fraction of infected hosts and the parasite dose is largely consistent with an infection process governed by the mass-action principle. However, we have evidence for a subtle but significant deviation from a simple mass-action infection model, which can be explained either by some antagonistic effects of the parasite spores during the infection process, or by heterogeneity in the hosts' susceptibility to infection.
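
The mass-action assumption has a simple quantitative form: if each of d inoculated spores independently causes infection with small probability β, the fraction of infected hosts is p(d) = 1 − e^(−βd). A minimal sketch (with invented parameter values, not the paper's data) contrasting this curve with one of the deviations discussed above, gamma-distributed heterogeneity in host susceptibility, which flattens the dose response:

```python
import math

def mass_action(dose, beta):
    """Fraction infected if every spore acts independently (mass action)."""
    return 1.0 - math.exp(-beta * dose)

def heterogeneous(dose, beta, k):
    """Gamma-distributed host susceptibility (shape k) flattens the dose
    response; as k -> infinity this recovers the mass-action curve."""
    return 1.0 - (1.0 + beta * dose / k) ** (-k)

# hypothetical per-spore infectivity and a strongly heterogeneous host pool
for d in (10, 100, 1000, 10000):
    print(d, round(mass_action(d, 1e-3), 3), round(heterogeneous(d, 1e-3, 0.5), 3))
```

For any finite shape parameter k the heterogeneous curve lies below the mass-action curve at every positive dose, which is the qualitative signature one would test for in dose-response data.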

  13. Are EUR and GBP different words for the same currency?

    NASA Astrophysics Data System (ADS)

    Ivanova, K.; Ausloos, M.

    2002-05-01

    The British Pound (GBP) is not part of the Euro (EUR) monetary system. In order to find arguments on whether the GBP should join the EUR or not, correlations are calculated between GBP exchange rates with respect to various currencies: USD, JPY, CHF, DKK, the currencies forming the EUR, and a reconstructed EUR, for the time interval from 1993 until June 30, 2000. The distribution of fluctuations of the exchange rates is Gaussian in the central part of the distribution but has fat tails for the large fluctuations. Within the Detrended Fluctuation Analysis (DFA) statistical method, the power-law behavior describing the root-mean-square deviation from a linear trend of the exchange-rate fluctuations is obtained as a function of time for the time interval of interest. The evolution of the time-dependent exponent of the exchange-rate fluctuations is given. Statistical considerations imply that the GBP is already behaving as a true EUR.
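
The DFA procedure used above can be sketched generically: integrate the mean-subtracted series, linearly detrend it in windows of increasing size, and read the scaling exponent off a log-log regression of RMS fluctuation against window size. This is an illustrative implementation under common conventions, not the authors' code; the window sizes and series length are arbitrary choices:

```python
import numpy as np

def dfa_exponent(x, windows=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis (first order).  Slope ~0.5 indicates
    uncorrelated fluctuations, larger slopes persistent correlations."""
    y = np.cumsum(x - np.mean(x))            # profile (integrated series)
    log_w, log_f = [], []
    for w in windows:
        n = len(y) // w
        seg = y[:n * w].reshape(n, w)        # non-overlapping windows
        t = np.arange(w)
        coef = np.polyfit(t, seg.T, 1)       # linear trend in every window
        trend = coef[0][:, None] * t + coef[1][:, None]
        f = np.sqrt(np.mean((seg - trend) ** 2))   # RMS deviation from trend
        log_w.append(np.log(w))
        log_f.append(np.log(f))
    return np.polyfit(log_w, log_f, 1)[0]    # scaling exponent

rng = np.random.default_rng(0)
alpha = dfa_exponent(rng.standard_normal(4096))
print(round(alpha, 2))                       # close to 0.5 for white noise
```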

  14. Advancing Underwater Acoustic Communication for Autonomous Distributed Networks via Sparse Channel Sensing, Coding, and Navigation Support

    DTIC Science & Technology

    2012-09-30

    Estimation Methods for Underwater OFDM 5) Two Iterative Receivers for Distributed MIMO - OFDM with Large Doppler Deviations. 6) Asynchronous Multiuser...multi-input multi-output ( MIMO ) OFDM is also pursued, where it is shown that the proposed hybrid initialization enables drastically improved receiver...are investigated. 5) Two Iterative Receivers for Distributed MIMO - OFDM with Large Doppler Deviations. This work studies a distributed system with

  15. Large deviations and mixing for dissipative PDEs with unbounded random kicks

    NASA Astrophysics Data System (ADS)

    Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.

    2018-02-01

    We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer’s criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroup, and a coupling argument. These tools combined together constitute a new approach to LDP for infinite-dimensional processes without strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.

  16. A unique approach to demonstrating that apical bud temperature specifically determines leaf initiation rate in the dicot Cucumis sativus.

    PubMed

    Savvides, Andreas; Dieleman, Janneke A; van Ieperen, Wim; Marcelis, Leo F M

    2016-04-01

    Leaf initiation rate is largely determined by the apical bud temperature, even when the apical bud temperature deviates substantially from the temperature of other plant organs. We have long known that the rate of leaf initiation (LIR) is highly sensitive to temperature, but previous studies in dicots have not rigorously demonstrated that apical bud temperature controls LIR independently of the temperature of other plant organs. Many models assume that apical bud and leaf temperatures are the same. In some environments, the temperature of the apical bud, where leaf initiation occurs, may differ by several degrees Celsius from the temperature of other plant organs. In a 28-day study, we maintained temperature differences between the apical bud and the rest of individual Cucumis sativus plants from -7 to +8 °C by enclosing the apical buds in transparent, temperature-controlled, flow-through spheres. Our results demonstrate that LIR was completely determined by apical bud temperature, independent of the temperature of other plant organs. These results emphasize the need to measure or model apical bud temperatures in dicots to improve the prediction of crop development rates in simulation models.

  17. Perceptions of midline deviations among different facial types.

    PubMed

    Williams, Ryan P; Rinchuse, Daniel J; Zullo, Thomas G

    2014-02-01

    The correction of a deviated midline can involve complicated mechanics and a protracted treatment. The threshold below which midline deviations are considered acceptable might depend on multiple factors. The objective of this study was to evaluate the effect of facial type on laypersons' perceptions of various degrees of midline deviation. Smiling photographs of male and female subjects were altered to create 3 facial type variations (euryprosopic, mesoprosopic, and leptoprosopic) and deviations in the midline ranging from 0.0 to 4.0 mm. Evaluators rated the overall attractiveness and acceptability of each photograph. Data were collected from 160 raters. The overall threshold for the acceptability of a midline deviation was 2.92 ± 1.10 mm, with the threshold for the male subject significantly lower than that for the female subject. The euryprosopic facial type showed no decrease in mean attractiveness until the deviations were 2 mm or more. All other facial types were rated as decreasingly attractive from 1 mm onward. Among all facial types, the attractiveness of the male subject was only affected at deviations of 2 mm or greater; for the female subject, the attractiveness scores were significantly decreased at 1 mm. The mesoprosopic facial type was most attractive for the male subject but was the least attractive for the female subject. Facial type and sex may affect the thresholds at which a midline deviation is detected and above which a midline deviation is considered unacceptable. Both the euryprosopic facial type and male sex were associated with higher levels of attractiveness at relatively small levels of deviations. Copyright © 2014 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  18. WE-H-BRC-05: Catastrophic Error Metrics for Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murphy, S; Molloy, J

    Purpose: Intuitive evaluation of complex radiotherapy treatments is impractical, while data transfer anomalies create the potential for catastrophic treatment delivery errors. Contrary to prevailing wisdom, logical scrutiny can be applied to patient-specific machine settings. Such tests can be automated, applied at the point of treatment delivery and can be dissociated from prior states of the treatment plan, potentially revealing errors introduced early in the process. Methods: Analytical metrics were formulated for conventional and intensity modulated RT (IMRT) treatments. These were designed to assess consistency between monitor unit settings, wedge values, prescription dose and leaf positioning (IMRT). Institutional metric averages for 218 clinical plans were stratified over multiple anatomical sites. Treatment delivery errors were simulated using a commercial treatment planning system and metric behavior assessed via receiver-operator-characteristic (ROC) analysis. A positive result was returned if the erred plan metric value exceeded a given number of standard deviations, e.g. 2. The finding was declared true positive if the dosimetric impact exceeded 25%. ROC curves were generated over a range of metric standard deviations. Results: Data for the conventional treatment metric indicated standard deviations of 3%, 12%, 11%, 8%, and 5% for brain, pelvis, abdomen, lung and breast sites, respectively. Optimum error declaration thresholds yielded true positive rates (TPR) between 0.7 and 1, and false positive rates (FPR) between 0 and 0.2. Two proposed IMRT metrics possessed standard deviations of 23% and 37%. The superior metric returned TPR and FPR of 0.7 and 0.2, respectively, when both leaf position and MUs were modelled. Isolation to only leaf position errors yielded TPR and FPR values of 0.9 and 0.1. Conclusion: Logical tests can reveal treatment delivery errors and prevent large, catastrophic errors. Analytical metrics are able to identify errors in monitor units, wedging and leaf positions with favorable sensitivity and specificity. Supported in part by Varian.
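
The ROC construction described above amounts to sweeping the declaration threshold (in metric standard deviations) and counting true and false positives at each setting. A toy sketch with entirely hypothetical numbers (500 normal plans, 50 erred plans whose metric z-score shifts by about 4 standard deviations), not the institutional data:

```python
import random

random.seed(1)

# Hypothetical cohort: (metric z-score, has_large_dosimetric_error).
# Erred plans are modeled as shifted by ~4 standard deviations.
plans = [(random.gauss(0, 1), False) for _ in range(500)] + \
        [(random.gauss(4, 1), True) for _ in range(50)]

def roc_point(threshold):
    """Flag a plan when |z| exceeds `threshold` standard deviations."""
    tp = sum(1 for z, err in plans if abs(z) > threshold and err)
    fp = sum(1 for z, err in plans if abs(z) > threshold and not err)
    positives = sum(1 for _, err in plans if err)
    negatives = len(plans) - positives
    return tp / positives, fp / negatives    # (TPR, FPR)

for k in (1.0, 2.0, 3.0):
    print(k, roc_point(k))
```

Sweeping the threshold trades sensitivity against specificity exactly as in the reported optimum thresholds; the "optimum" is whatever point on this curve best balances missed catastrophic errors against nuisance alarms.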

  19. The effects of auditory stimulation with music on heart rate variability in healthy women.

    PubMed

    Roque, Adriano L; Valenti, Vitor E; Guida, Heraldo L; Campos, Mônica F; Knap, André; Vanderlei, Luiz Carlos M; Ferreira, Lucas L; Ferreira, Celso; Abreu, Luiz Carlos de

    2013-07-01

    There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque music and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes.
The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level.
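
The time-domain indices listed above (SDNN, RMSSD, and the percentage of adjacent intervals differing by more than 50 ms, commonly called pNN50) are simple functions of the RR-interval series. A minimal sketch with an invented seven-beat tachogram; this is generic HRV arithmetic, not the study's analysis pipeline:

```python
import statistics

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV indices from normal RR intervals (in ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    sdnn = statistics.pstdev(rr_ms)                 # SD of all NN intervals
    rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return sdnn, rmssd, pnn50

rr = [812, 790, 845, 770, 860, 795, 825]            # invented tachogram (ms)
sdnn, rmssd, pnn50 = hrv_time_domain(rr)
print(round(sdnn, 1), round(rmssd, 1), round(pnn50, 1))
```

Note that this sketch uses the population standard deviation; published pipelines differ on population versus sample SD, so exact values may vary slightly between tools.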

  20. The effects of auditory stimulation with music on heart rate variability in healthy women

    PubMed Central

    Roque, Adriano L.; Valenti, Vitor E.; Guida, Heraldo L.; Campos, Mônica F.; Knap, André; Vanderlei, Luiz Carlos M.; Ferreira, Lucas L.; Ferreira, Celso; de Abreu, Luiz Carlos

    2013-01-01

    OBJECTIVES: There are no data in the literature with regard to the acute effects of different styles of music on the geometric indices of heart rate variability. In this study, we evaluated the acute effects of relaxant baroque and excitatory heavy metal music on the geometric indices of heart rate variability in women. METHODS: We conducted this study in 21 healthy women ranging in age from 18 to 35 years. We excluded persons with previous experience with musical instruments and persons who had an affinity for the song styles. We evaluated two groups: Group 1 (n = 21), who were exposed to relaxant classical baroque music and excitatory heavy metal auditory stimulation; and Group 2 (n = 19), who were exposed to both styles of music and white noise auditory stimulation. Using earphones, the volunteers were exposed to baroque or heavy metal music for five minutes. After the first music exposure to baroque or heavy metal music, they remained at rest for five minutes; subsequently, they were re-exposed to the opposite music (70-80 dB). A different group of women were exposed to the same music styles plus white noise auditory stimulation (90 dB). The sequence of the songs was randomized for each individual. We analyzed the following indices: triangular index, triangular interpolation of RR intervals and Poincaré plot (standard deviation of instantaneous beat-by-beat variability, standard deviation of the long-term RR interval, standard deviation of instantaneous beat-by-beat variability and standard deviation of the long-term RR interval ratio), low frequency, high frequency, low frequency/high frequency ratio, standard deviation of all the normal RR intervals, root-mean square of differences between the adjacent normal RR intervals and the percentage of adjacent RR intervals with a difference of duration greater than 50 ms. Heart rate variability was recorded at rest for 10 minutes.
RESULTS: The triangular index and the standard deviation of the long-term RR interval indices were reduced during exposure to both music styles in the first group and tended to decrease in the second group whereas the white noise exposure decreased the high frequency index. We observed no changes regarding the triangular interpolation of RR intervals, standard deviation of instantaneous beat-by-beat variability and standard deviation of instantaneous beat-by-beat variability/standard deviation in the long-term RR interval ratio. CONCLUSION: We suggest that relaxant baroque and excitatory heavy metal music slightly decrease global heart rate variability because of the equivalent sound level. PMID:23917660

  1. Convex hulls of random walks in higher dimensions: A large-deviation study

    NASA Astrophysics Data System (ADS)

    Schawe, Hendrik; Hartmann, Alexander K.; Majumdar, Satya N.

    2017-12-01

    The distributions of the hypervolume V and surface ∂V of convex hulls of (multiple) random walks in higher dimensions are determined numerically, in particular including probabilities far smaller than P = 10^{-1000}, to estimate large-deviation properties. For arbitrary dimensions and large walk lengths T, we suggest a scaling behavior of the distribution with the length of the walk T similar to the two-dimensional case, as well as the behavior of the distributions in the tails. We underpin both with numerical data in d = 3 and d = 4 dimensions. Further, we confirm the analytically known means of those distributions and calculate their variances for large T.
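
For intuition, the quantities studied above can be computed directly in the two-dimensional case: take the convex hull of a random walk (here via Andrew's monotone chain) and its area (shoelace formula). This plain-sampling sketch only reaches typical events; probing the tail at P ≈ 10^{-1000} requires the biased large-deviation sampling of the paper, which is not shown here:

```python
import random

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(hull):
    """Shoelace formula for a simple polygon."""
    n = len(hull)
    return 0.5 * abs(sum(hull[i][0]*hull[(i+1) % n][1] -
                         hull[(i+1) % n][0]*hull[i][1] for i in range(n)))

random.seed(2)
walk = [(0, 0)]
for _ in range(1000):                      # T = 1000 lattice steps
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    walk.append((walk[-1][0] + dx, walk[-1][1] + dy))
print(hull_area(convex_hull(walk)))
```

Repeating this over many walks yields the central part of the area distribution; the scaling claim above is that suitably rescaled versions of such distributions collapse for different T.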

  2. Fetal heart rate and fetal heart rate variability in Lipizzaner broodmares.

    PubMed

    Baska-Vincze, Boglárka; Baska, Ferenc; Szenci, Ottó

    2015-03-01

    Monitoring fetal heart rate (FHR) and fetal heart rate variability (FHRV) helps to understand and evaluate normal and pathological conditions in the foal. The aim of this study was to establish normal heart rate reference values for the ongoing equine pregnancy and to perform a heart rate variability (HRV) time-domain analysis in Lipizzaner mares. Seventeen middle- and late-term (days 121-333) pregnant Lipizzaner mares were examined using fetomaternal electrocardiography (ECG). The mean FHR (P = 0.004) and the standard deviation of FHR (P = 0.012) significantly decreased during the pregnancy. FHR ± SD values decreased from 115 ± 35 to 79 ± 9 bpm between months 5 and 11. Our data showed that HRV in the foal decreased as the pregnancy progressed, which is in contrast with the findings of earlier equine studies. The standard deviation of normal-normal intervals (SDNN) was higher (70 ± 25 to 166 ± 108 msec) than described previously. The root mean square of successive differences (RMSSD) decreased from 105 ± 69 to 77 ± 37 msec between the 5th and 11th month of gestation. Using telemetric ECG equipment, we could detect equine fetal heartbeat on day 121 for the first time. In addition, the large differences observed in the HR values of four mare-fetus pairs in four consecutive months support the assumption that there might be 'high-HR' and 'low-HR' fetuses in horses. It can be concluded that the analysis of FHR and FHRV is a promising tool for the assessment of fetal well-being but the applicability of these parameters in the clinical setting and in studs requires further investigation.

  3. Deviation-based spam-filtering method via stochastic approach

    NASA Astrophysics Data System (ADS)

    Lee, Daekyung; Lee, Mi Jin; Kim, Beom Jun

    2018-03-01

    In the presence of a huge number of possible purchase choices, ranks or ratings of items by others often play very important roles for a buyer making a final purchase decision. A perfectly objective rating is an impossible task to achieve, and we often use an average rating built on how previous buyers estimated the quality of the product. The problem with using a simple average rating is that it can easily be polluted by careless users whose evaluation of products cannot be trusted, and by malicious spammers who try to bias the rating result on purpose. In this letter we suggest how the trustworthiness of individual users can be systematically and quantitatively reflected to build a more reliable rating system. We compute a suitably defined reliability of each user based on the user's rating pattern for all products she evaluated. We call our proposed method the deviation-based ranking, since the statistical significance of each user's rating pattern with respect to the average rating pattern is the key ingredient. We find that our deviation-based ranking method outperforms existing methods in filtering out careless random evaluators as well as malicious spammers.
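
One simple way to realize the idea above can be sketched as follows, with an invented weighting rule (reliability decreasing with a user's mean squared deviation from the plain item averages); the paper's actual statistic is its deviation-based significance measure, which this sketch only approximates in spirit:

```python
def deviation_weighted_ratings(ratings):
    """ratings: dict user -> {item: score}.  A user's reliability falls with
    the mean squared deviation of her scores from the plain item averages;
    item ratings are then recomputed as reliability-weighted means."""
    items = {i for r in ratings.values() for i in r}
    plain = {i: sum(r[i] for r in ratings.values() if i in r) /
                sum(1 for r in ratings.values() if i in r) for i in items}
    weight = {}
    for user, r in ratings.items():
        msd = sum((s - plain[i]) ** 2 for i, s in r.items()) / len(r)
        weight[user] = 1.0 / (1.0 + msd)       # hypothetical reliability rule
    return {i: sum(weight[u] * r[i] for u, r in ratings.items() if i in r) /
               sum(weight[u] for u, r in ratings.items() if i in r)
            for i in items}

honest = {"a": {"x": 4, "y": 5}, "b": {"x": 4, "y": 4}}
spam = {"c": {"x": 1, "y": 1}}                 # rates against the consensus
scores = deviation_weighted_ratings({**honest, **spam})
print(scores)
```

The spammer's large deviation from the consensus lowers his weight, so the weighted item scores move back toward the honest users' ratings.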

  4. How accurate are lexile text measures?

    PubMed

    Stenner, A Jackson; Burdick, Hal; Sanford, Eleanor E; Burdick, Donald S

    2006-01-01

    The Lexile Framework for Reading models comprehension as the difference between a reader measure and a text measure. Uncertainty in comprehension rates results from unreliability in reader measures and inaccuracy in text readability measures. Whole-text processing eliminates sampling error in text measures. However, Lexile text measures are imperfect due to misspecification of the Lexile theory. The standard deviation component associated with theory misspecification is estimated at 64L for a standard-length passage (approximately 125 words). A consequence is that standard errors for longer texts (2,500 to 150,000 words) are measured on the Lexile scale with uncertainties in the single digits. Uncertainties in expected comprehension rates are largely due to imprecision in reader ability and not inaccuracies in text readabilities.

  5. Business cycles and mortality: results from Swedish microdata.

    PubMed

    Gerdtham, Ulf-G; Johannesson, Magnus

    2005-01-01

    We assess the relationship between business cycles and mortality risk using a large individual level data set on over 40,000 individuals in Sweden who were followed for 10-16 years (leading to over 500,000 person-year observations). We test the effect of six alternative business cycle indicators on the mortality risk: the unemployment rate, the notification rate, the deviation from the GDP trend, the GDP change, the industry capacity utilization, and the industry confidence indicator. For men we find a significant countercyclical relationship between the business cycle and the mortality risk for four of the indicators and a non-significant effect for the other two indicators. For women we cannot reject the null hypothesis of no effect for any of the business cycle indicators.

  6. Frenetic Bounds on the Entropy Production

    NASA Astrophysics Data System (ADS)

    Maes, Christian

    2017-10-01

    We give a systematic derivation of positive lower bounds for the expected entropy production (EP) rate in classical statistical mechanical systems obeying a dynamical large deviation principle. The logic is the same for the return to thermodynamic equilibrium as it is for steady nonequilibria working under the condition of local detailed balance. We recover there recently studied "uncertainty" relations for the EP, appearing in studies about the effectiveness of mesoscopic machines. In general our refinement of the positivity of the expected EP rate is obtained in terms of a positive and even function of the expected current(s) which measures the dynamical activity in the system, a time-symmetric estimate of the changes in the system's configuration. Also underdamped diffusions can be included in the analysis.

  7. Work fluctuations for a Brownian particle between two thermostats

    NASA Astrophysics Data System (ADS)

    Visco, Paolo

    2006-06-01

    We explicitly determine the large deviation function of the energy flow of a Brownian particle coupled to two heat baths at different temperatures. This toy model, initially introduced by Derrida and Brunet (2005, Einstein aujourd'hui (Les Ulis: EDP Sciences)), not only allows us to sort out the influence of initial conditions on large deviation functions but also allows us to pinpoint various restrictions bearing upon the range of validity of the Fluctuation Relation.

  8. Large Deviations and Transitions Between Equilibria for Stochastic Landau-Lifshitz-Gilbert Equation

    NASA Astrophysics Data System (ADS)

    Brzeźniak, Zdzisław; Goldys, Ben; Jegaraj, Terence

    2017-11-01

    We study a stochastic Landau-Lifshitz equation on a bounded interval and with finite dimensional noise. We first show that there exists a pathwise unique solution to this equation and that this solution enjoys the maximal regularity property. Next, we prove the large deviations principle for the small noise asymptotic of solutions using the weak convergence method. An essential ingredient of the proof is the compactness, or weak to strong continuity, of the solution map for a deterministic Landau-Lifshitz equation when considered as a transformation of external fields. We then apply this large deviations principle to show that small noise can cause magnetisation reversal. We also show the importance of the shape anisotropy parameter for reducing the disturbance of the solution caused by small noise. The problem is motivated by applications from ferromagnetic nanowires to the fabrication of magnetic memories.

  9. Evaluation of bacterial motility from non-Gaussianity of finite-sample trajectories using the large deviation principle

    NASA Astrophysics Data System (ADS)

    Hanasaki, Itsuo; Kawano, Satoyuki

    2013-11-01

    Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility.

  10. IceCube sensitivity for low-energy neutrinos from nearby supernovae

    NASA Astrophysics Data System (ADS)

    Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Allen, M. M.; Altmann, D.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Baum, V.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K. H.; Benabderrahmane, M. L.; Benzvi, S.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Brown, A. M.; Buitink, S.; Caballero-Mora, K. S.; Carson, M.; Chirkin, D.; Christy, B.; Clevermann, F.; Cohen, S.; Colnard, C.; Cowen, D. F.; Cruz Silva, A. H.; D'Agostino, M. V.; Danninger, M.; Daughhetee, J.; Davis, J. C.; de Clercq, C.; Degner, T.; Demirörs, L.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; Deyoung, T.; Díaz-Vélez, J. C.; Dierckxsens, M.; Dreyer, J.; Dumm, J. P.; Dunkman, M.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feintzeig, J.; Feusels, T.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Goodman, J. A.; Góra, D.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gurtner, M.; Ha, C.; Haj Ismail, A.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Heinen, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoffmann, B.; Homeier, A.; Hoshina, K.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jakobi, E.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Köhne, H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kroll, G.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Laihem, K.; Landsman, H.; Larson, M. 
J.; Lauer, R.; Lünemann, J.; Madsen, J.; Marotta, A.; Maruyama, R.; Mase, K.; Matis, H. S.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Miarecki, S.; Middell, E.; Milke, N.; Miller, J.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Naumann, U.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; O'Murchadha, A.; Panknin, S.; Paul, L.; Pérez de Los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Porrata, R.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Richard, A. S.; Richman, M.; Rodrigues, J. P.; Rothmaier, F.; Rott, C.; Ruhe, T.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Santander, M.; Sarkar, S.; Schatto, K.; Schmidt, T.; Schönwald, A.; Schukraft, A.; Schulte, L.; Schultes, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Singh, K.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stezelberger, T.; Stokstad, R. G.; Stößl, A.; Strahler, E. A.; Ström, R.; Stüer, M.; Sullivan, G. W.; Swillens, Q.; Taavola, H.; Taboada, I.; Tamburro, A.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Toscano, S.; Tosi, D.; van Eijndhoven, N.; Vandenbroucke, J.; van Overloop, A.; van Santen, J.; Vehring, M.; Voge, M.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, C.; Xu, D. L.; Xu, X. W.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; Zoll, M.; IceCube Collaboration

    2011-11-01

    This paper describes the response of the IceCube neutrino telescope located at the geographic south pole to outbursts of MeV neutrinos from the core collapse of nearby massive stars. IceCube was completed in December 2010 forming a lattice of 5160 photomultiplier tubes that monitor a volume of ~1 km3 in the deep Antarctic ice for particle induced photons. The telescope was designed to detect neutrinos with energies greater than 100 GeV. Owing to subfreezing ice temperatures, the photomultiplier dark noise rates are particularly low. Hence IceCube can also detect large numbers of MeV neutrinos by observing a collective rise in all photomultiplier rates on top of the dark noise. With 2 ms timing resolution, IceCube can detect subtle features in the temporal development of the supernova neutrino burst. For a supernova at the galactic center, its sensitivity matches that of a background-free megaton-scale supernova search experiment. The sensitivity decreases to 20 standard deviations at the galactic edge (30 kpc) and 6 standard deviations at the Large Magellanic Cloud (50 kpc). IceCube is sending triggers from potential supernovae to the Supernova Early Warning System. The sensitivity to neutrino properties such as the neutrino hierarchy is discussed, as well as the possibility to detect the neutronization burst, a short outbreak of electron antineutrinos (ν̄e) released by electron capture on protons soon after collapse. Tantalizing signatures, such as the formation of a quark star or a black hole as well as the characteristics of shock waves, are investigated to illustrate IceCube's capability for supernova detection.
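
As a rough consistency check on the quoted sensitivities: with an essentially fixed dark-noise background, the significance of the collective rate excess scales with the neutrino flux, i.e. as the inverse square of the distance. The sketch below assumes this pure 1/d² scaling (an idealization that ignores detector thresholds and oscillation effects):

```python
def significance(d_kpc, ref_sigma=20.0, ref_d_kpc=30.0):
    """Significance of a rate excess over fixed background, assuming the
    signal (neutrino flux) falls off as 1/d^2 from a reference point."""
    return ref_sigma * (ref_d_kpc / d_kpc) ** 2

# 20 sigma at the galactic edge (30 kpc) extrapolates to ~7 sigma at the
# Large Magellanic Cloud (50 kpc), the same order as the quoted 6 sigma.
print(significance(50.0))
```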

  11. Scaling laws for perturbations in the ocean-atmosphere system following large CO2 emissions

    NASA Astrophysics Data System (ADS)

    Towles, N.; Olson, P.; Gnanadesikan, A.

    2015-01-01

    Scaling relationships are derived for the perturbations to atmosphere and ocean variables from large transient CO2 emissions. Using the carbon cycle model LOSCAR (Zeebe et al., 2009; Zeebe, 2012b) we calculate perturbations to atmosphere temperature and total carbon, ocean temperature, total ocean carbon, pH, and alkalinity, marine sediment carbon, plus carbon-13 isotope anomalies in the ocean and atmosphere resulting from idealized CO2 emission events. The peak perturbations in the atmosphere and ocean variables are then fit to power-law functions of the form γ D^α E^β, where D is the event duration, E is its total carbon emission, and γ is a coefficient. Good power-law fits are obtained for most system variables for E up to 50 000 PgC and D up to 100 kyr. However, these power laws deviate substantially from predictions based on simplified equilibrium considerations. For example, although all of the peak perturbations increase with emission rate E/D, we find no evidence of emission-rate-only scaling (α + β = 0), a prediction of the long-term equilibrium between CO2 input by volcanism and CO2 removal by silicate weathering. Instead, our scaling yields α + β ≃ 1 for total ocean and atmosphere carbon and 0 < α + β < 1 for most of the other system variables. The deviations in these scaling laws from equilibrium predictions are mainly due to the multitude and diversity of time scales that govern the exchange of carbon between marine sediments, the ocean, and the atmosphere.
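
Fitting the form γ D^α E^β reduces to linear least squares after taking logarithms: ln P = ln γ + α ln D + β ln E. A generic sketch with synthetic values (not the LOSCAR outputs), which recovers known exponents exactly on noise-free data:

```python
import numpy as np

def fit_power_law(D, E, P):
    """Fit P = gamma * D**alpha * E**beta by linear least squares on
    ln P = ln gamma + alpha * ln D + beta * ln E."""
    A = np.column_stack([np.ones(len(D)), np.log(D), np.log(E)])
    coef, *_ = np.linalg.lstsq(A, np.log(P), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]      # gamma, alpha, beta

# synthetic recovery check: gamma = 0.5, alpha = 0.3, beta = 0.7
D = np.array([1e3, 1e4, 1e5, 1e3, 1e5])          # event durations
E = np.array([1e2, 1e2, 1e3, 1e4, 1e4])          # total emissions
P = 0.5 * D**0.3 * E**0.7
gamma, alpha, beta = fit_power_law(D, E, P)
print(round(gamma, 3), round(alpha, 3), round(beta, 3))
```

On real model output the residuals of this regression indicate how well a single power law captures the response; the emission-rate-only hypothesis corresponds to testing α + β = 0.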

  12. IceCube Sensitivity for Low-Energy Neutrinos from Nearby Supernovae

    NASA Technical Reports Server (NTRS)

    Stamatikos, M.; Abbasi, R.; Berghaus, P.; Chirkin, D.; Desiati, P.; Diaz-Velez, J.; Dumm, J. P.; Eisch, J.; Feintzeig, J.; Hanson, K.; et al.

    2012-01-01

    This paper describes the response of the IceCube neutrino telescope located at the geographic South Pole to outbursts of MeV neutrinos from the core collapse of nearby massive stars. IceCube was completed in December 2010 forming a lattice of 5160 photomultiplier tubes that monitor a volume of approx. 1 cu km in the deep Antarctic ice for particle-induced photons. The telescope was designed to detect neutrinos with energies greater than 100 GeV. Owing to subfreezing ice temperatures, the photomultiplier dark noise rates are particularly low. Hence IceCube can also detect large numbers of MeV neutrinos by observing a collective rise in all photomultiplier rates on top of the dark noise. With 2 ms timing resolution, IceCube can detect subtle features in the temporal development of the supernova neutrino burst. For a supernova at the galactic center, its sensitivity matches that of a background-free megaton-scale supernova search experiment. The sensitivity decreases to 20 standard deviations at the galactic edge (30 kpc) and 6 standard deviations at the Large Magellanic Cloud (50 kpc). IceCube is sending triggers from potential supernovae to the Supernova Early Warning System. The sensitivity to neutrino properties such as the neutrino hierarchy is discussed, as well as the possibility to detect the neutronization burst, a short outbreak of ν̄e's released by electron capture on protons soon after collapse. Tantalizing signatures, such as the formation of a quark star or a black hole as well as the characteristics of shock waves, are investigated to illustrate IceCube's capability for supernova detection.

  13. Viscosity Dependence of Some Protein and Enzyme Reaction Rates: Seventy-Five Years after Kramers.

    PubMed

    Sashi, Pulikallu; Bhuyan, Abani K

    2015-07-28

    Kramers rate theory is a milestone in chemical reaction research, but concerns regarding the basic understanding of condensed phase reaction rates of large molecules in viscous milieu persist. Experimental studies of Kramers theory rely on scaling reaction rates with inverse solvent viscosity, which is often equated with the bulk friction coefficient based on simple hydrodynamic relations. Apart from the difficulty of abstracting the prefactor details from experimental data, it is not clear why the linearity of rate versus inverse viscosity, k ∝ η^(-1), deviates widely for many reactions studied. In most cases, the deviation follows a power law k ∝ η^(-n), where the exponent n assumes fractional values. In rate-viscosity studies presented here, results for two reactions, unfolding of cytochrome c and cysteine protease activity of human ribosomal protein S4, show an exceedingly overdamped rate over a wide viscosity range, registering n values up to 2.4. Although the origin of this extraordinary reaction friction is not known at present, the results indicate that the viscosity exponent need not be bound by the 0-1 limit as generally suggested. For the third reaction studied here, thermal dissociation of CO from nativelike cytochrome c, the rate-viscosity behavior can be explained using Grote-Hynes theory of time-dependent friction in conjunction with correlated motions intrinsic to the protein. Analysis of the glycerol viscosity-dependent rate for the CO dissociation reaction in the presence of urea as the second variable shows that the protein stabilizing effect of subdenaturing amounts of urea is not affected by the bulk viscosity. It appears that a myriad of factors as diverse as parameter uncertainty due to the difficulty of knowing the exact reaction friction and both mode and consequences of protein-solvent interaction work in a complex manner to convey as though the Kramers rate equation is not absolute.
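
    The viscosity exponent n in k ∝ η^(-n) is conventionally extracted as the (negated) slope of log k against log η. A minimal sketch with synthetic data (the rate prefactor and viscosity range are illustrative, not values from the study; n = 2.4 echoes the largest exponent reported above):

```python
import numpy as np

def viscosity_exponent(eta, k):
    """Estimate n in k ∝ eta**(-n) from the slope of log k vs log eta."""
    slope, _intercept = np.polyfit(np.log(eta), np.log(k), 1)
    return -slope

# Illustrative rate data obeying k = A * eta**(-2.4).
eta = np.linspace(1.0, 50.0, 30)   # solvent viscosity (arbitrary units)
k = 3.0 * eta**-2.4                # reaction rate (arbitrary units)
n = viscosity_exponent(eta, k)
```

With real measurements, curvature in the log-log plot (rather than a clean straight line) would itself signal a breakdown of the simple power-law description.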

  14. Deviations from Newton's law in supersymmetric large extra dimensions

    NASA Astrophysics Data System (ADS)

    Callin, P.; Burgess, C. P.

    2006-09-01

    Deviations from Newton's inverse-squared law at the micron length scale are smoking-gun signals for models containing supersymmetric large extra dimensions (SLEDs), which have been proposed as approaches for resolving the cosmological constant problem. Just like their non-supersymmetric counterparts, SLED models predict gravity to deviate from the inverse-square law because of the advent of new dimensions at sub-millimeter scales. However SLED models differ from their non-supersymmetric counterparts in three important ways: (i) the size of the extra dimensions is fixed by the observed value of the dark energy density, making it impossible to shorten the range over which new deviations from Newton's law must be seen; (ii) supersymmetry predicts there to be more fields in the extra dimensions than just gravity, implying different types of couplings to matter and the possibility of repulsive as well as attractive interactions; and (iii) the same mechanism which is purported to keep the cosmological constant naturally small also keeps the extra-dimensional moduli effectively massless, leading to deviations from general relativity in the far infrared of the scalar-tensor form. We here explore the deviations from Newton's law which are predicted over micron distances, and show the ways in which they differ and resemble those in the non-supersymmetric case.

  15. Modelling the dispersion and transport of reactive pollutants in a deep urban street canyon: using large-eddy simulation.

    PubMed

    Zhong, Jian; Cai, Xiao-Ming; Bloss, William James

    2015-05-01

    This study investigates the dispersion and transport of reactive pollutants in a deep urban street canyon with an aspect ratio of 2 under neutral meteorological conditions using large-eddy simulation. The spatial variation of pollutants is significant due to the existence of two unsteady vortices. The deviation of species abundance from chemical equilibrium for the upper vortex is greater than that for the lower vortex. The interplay of dynamics and chemistry is investigated using two metrics: the photostationary state defect, and the inferred ozone production rate. The latter is found to be negative at all locations within the canyon, pointing to a systematic negative offset to ozone production rates inferred by analogous approaches in environments with incomplete mixing of emissions. This study demonstrates an approach to quantify parameters for a simplified two-box model, which could support traffic management and urban planning strategies and personal exposure assessment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY

    EPA Science Inventory

    Large-scale laboratory-and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT=[experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
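
    The HORRAT ratio described above divides the observed among-laboratory relative standard deviation by the value predicted by the Horwitz equation, commonly written as PRSD(%) = 2^(1 − 0.5·log10 C), with C the analyte concentration expressed as a mass fraction. A minimal sketch (the numeric inputs are illustrative):

```python
import math

def horrat(observed_rsd_percent, concentration_mass_fraction):
    """HORRAT = observed among-laboratory RSD (%) divided by the RSD
    predicted by the Horwitz equation, PRSD(%) = 2**(1 - 0.5*log10(C)),
    with C the analyte concentration as a mass fraction."""
    predicted = 2 ** (1 - 0.5 * math.log10(concentration_mass_fraction))
    return observed_rsd_percent / predicted

# At C = 1e-6 (1 ppm) the Horwitz prediction is 16%, so an observed
# RSD of 8% gives HORRAT = 0.5.
ratio = horrat(8.0, 1e-6)
```

HORRAT values between roughly 0.5 and 2 are conventionally read as acceptable method performance.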

  17. Forty-five degree cutting septoplasty.

    PubMed

    Hsiao, Yen-Chang; Chang, Chun-Shin; Chuang, Shiow-Shuh; Kolios, Georgios; Abdelrahman, Mohamed

    2016-01-01

    The crooked nose represents a challenge for rhinoplasty surgeons, and although many methods have been proposed for its management, there is no ideal treatment. Accordingly, a 45° cutting septoplasty technique, which involves a 45° cut at the junction of the L-shaped strut and repositioning of the strut to achieve a straight septum, is proposed. From October 2010 to September 2014, 43 patients underwent the 45° cutting septoplasty technique. There were 28 men and 15 women, with ages ranging from 20 to 58 years (mean, 33 years). Standardized photographs were obtained at every visit. Established photogrammetric parameters were used to describe the degree of correction: correction rate = (preoperative total deviation − postoperative residual deviation)/preoperative total deviation × 100%. The mean follow-up period for all patients was 12.3 months. The mean preoperative deviation was 64.3° and the mean postoperative deviation was 2.7°; the overall correction rate was 95.8%. One patient experienced composite implant deviation two weeks postoperatively and underwent revision rhinoplasty. There were no infections, hematomas, or postoperative bleeding. Based on the clinical observation of all patients during the follow-up period, the 45° cutting septoplasty technique was shown to be effective for the treatment of crooked nose.

  18. Phytoplankton Growth and Microzooplankton Grazing in the Subtropical Northeast Atlantic

    PubMed Central

    Cáceres, Carlos; Taboada, Fernando González; Höfer, Juan; Anadón, Ricardo

    2013-01-01

    Dilution experiments were performed to estimate phytoplankton growth and microzooplankton grazing rates during two Lagrangian surveys in inner and eastern locations of the Eastern North Atlantic Subtropical Gyre province (NAST-E). Our design included two phytoplankton size fractions (0.2–5 µm and >5 µm) and five depths, allowing us to characterize differences in growth and grazing rates between size fractions and depths, as well as to estimate vertically integrated measurements. Phytoplankton growth rates were high (0.11–1.60 d−1), especially in the case of the large fraction. Grazing rates were also high (0.15–1.29 d−1), suggesting high turnover rates within the phytoplankton community. The integrated balances between phytoplankton growth and grazing losses were close to zero, although deviations were detected at several depths. Also, O2 supersaturation was observed up to 110 m depth during both Lagrangian surveys. These results add up to increased evidence indicating an autotrophic metabolic balance in oceanic subtropical gyres. PMID:23935946
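
    Dilution experiments of this kind are conventionally analyzed with a linear regression of apparent growth rate against the fraction of unfiltered seawater (the Landry-Hassett method): the intercept estimates the intrinsic phytoplankton growth rate μ and the negated slope the microzooplankton grazing rate g. A minimal sketch with illustrative numbers (not data from this study):

```python
import numpy as np

def dilution_rates(fraction_wsw, apparent_growth):
    """Landry-Hassett dilution regression: apparent growth k = mu - g*x,
    where x is the fraction of whole (unfiltered) seawater.
    Returns (mu, g) in the same units as the input rates."""
    slope, intercept = np.polyfit(fraction_wsw, apparent_growth, 1)
    return intercept, -slope

x = np.array([0.2, 0.4, 0.6, 0.8, 1.0])  # dilution levels
k = 1.2 - 0.9 * x                        # mu = 1.2 d^-1, g = 0.9 d^-1 (illustrative)
mu, g = dilution_rates(x, k)
```

A near-zero balance μ − g, as reported for the integrated estimates above, corresponds to grazing losses roughly offsetting growth.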

  19. A Nonequilibrium Rate Formula for Collective Motions of Complex Molecular Systems

    NASA Astrophysics Data System (ADS)

    Yanao, Tomohiro; Koon, Wang Sang; Marsden, Jerrold E.

    2010-09-01

    We propose a compact reaction rate formula that accounts for a non-equilibrium distribution of residence times of complex molecules, based on a detailed study of the coarse-grained phase space of a reaction coordinate. We take the structural transition dynamics of a six-atom Morse cluster between two isomers as a prototype of multi-dimensional molecular reactions. Residence time distribution of one of the isomers shows an exponential decay, while that of the other isomer deviates largely from the exponential form and has multiple peaks. Our rate formula explains such equilibrium and non-equilibrium distributions of residence times in terms of the rates of diffusions of energy and the phase of the oscillations of the reaction coordinate. Rapid diffusions of energy and the phase generally give rise to the exponential decay of residence time distribution, while slow diffusions give rise to a non-exponential decay with multiple peaks. We finally make a conjecture about a general relationship between the rates of the diffusions and the symmetry of molecular mass distributions.

  20. The scaling of contact rates with population density for the infectious disease models.

    PubMed

    Hu, Hao; Nigmatulina, Karima; Eckhoff, Philip

    2013-08-01

    Contact rates and patterns among individuals in a geographic area drive transmission of directly-transmitted pathogens, making it essential to understand and estimate contacts for simulation of disease dynamics. Under the uniform mixing assumption, one of two mechanisms is typically used to describe the relation between contact rate and population density: density-dependent or frequency-dependent. Based on existing evidence of population thresholds and human mobility patterns, we formulated a spatial contact model to describe the appropriate form of transmission, with initial growth at low density and saturation at higher density. We show that the two mechanisms are extreme cases that do not capture real population movement across all scales. Empirical data of human and wildlife diseases indicate that a nonlinear function may work better when looking at the full spectrum of densities. This estimation can be applied to large areas with population mixing in general activities. For crowds with unusually large densities (e.g., transportation terminals, stadiums, or mass gatherings), the lack of organized social contact structure shifts physical contacts towards a special case of the spatial contact model: the dynamics of kinetic gas molecule collisions. In this case, an ideal gas model with a van der Waals correction fits existing movement observation data well, and the contact rate between individuals can be estimated using kinetic theory. A complete picture of contact rate scaling with population density may help clarify the definition of transmission rates in heterogeneous, large-scale spatial systems. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
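
    In the kinetic-theory limit invoked above (without the van der Waals correction), the per-individual contact rate is the hard-sphere collision frequency z = √2·π·d²·v̄·ρ, which is linear in density ρ, i.e. the density-dependent extreme. A minimal sketch; the symbols d (effective contact diameter), v̄ (mean speed), and ρ (number density) are generic kinetic-theory quantities, not parameters from the paper:

```python
import math

def collision_contact_rate(density, diameter, mean_speed):
    """Mean collision frequency per individual for an ideal hard-sphere
    gas: z = sqrt(2) * pi * d**2 * v_bar * rho (density-dependent limit)."""
    return math.sqrt(2) * math.pi * diameter**2 * mean_speed * density
```

Because z grows without bound in ρ, this limit is only appropriate for unstructured, very dense crowds; at ordinary densities the saturating spatial contact model described above takes over.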

  1. Ozone trends and their relationship to characteristic weather patterns.

    PubMed

    Austin, Elena; Zanobetti, Antonella; Coull, Brent; Schwartz, Joel; Gold, Diane R; Koutrakis, Petros

    2015-01-01

    Local trends in ozone concentration may differ by meteorological conditions. Furthermore, the trends occurring at the extremes of the ozone distribution are often not reported, even though these may differ markedly from the trend observed at the mean or median and may be more relevant to health outcomes. Our aims were to classify days of observation over a 16-year period into broad categories that capture salient daily local weather characteristics, to determine the rate of change in mean and median O3 concentrations within these categories to assess how concentration trends are impacted by daily weather, and to further examine whether trends vary for observations in the extremes of the O3 distribution. We used k-means clustering to categorize days of observation based on the maximum daily temperature, standard deviation of daily temperature, mean daily ground level wind speed, mean daily water vapor pressure and mean daily sea-level barometric pressure. The five cluster solution was determined to be the appropriate one based on cluster diagnostics and cluster interpretability. Trends in cluster frequency and pollution trends within clusters were modeled using Poisson regression with penalized splines as well as quantile regression. There were five characteristic groupings identified. The frequency of days with large standard deviations in hourly temperature decreased over the observation period, whereas the frequency of warmer days with smaller deviations in temperature increased. O3 trends were significantly different within the different weather groupings. Furthermore, the rate of O3 change for the 95th percentile and 5th percentile was significantly different than the rate of change of the median for several of the weather categories. We found that O3 trends vary between different characteristic local weather patterns, and the significant differences between weather groupings suggest an important interaction between changes in prevailing weather conditions and O3 concentration.
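
    The day-classification step described above is plain k-means on daily weather features. A minimal Lloyd's-algorithm sketch on synthetic two-dimensional data (the two well-separated "regimes" are artificial; the study used five clusters over five meteorological variables, which would normally be standardized first):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal Lloyd's k-means: alternate nearest-center assignment and
    center recomputation. Returns (labels, centers)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

# Two artificial "weather regimes" (e.g. hot/calm vs cold/windy days).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
labels, centers = kmeans(X, 2)
```

In practice one would run several random initializations and pick the solution with the lowest within-cluster sum of squares, then judge k (here, five) by cluster diagnostics as the authors did.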

  2. Long-Term Results for Trigeminal Schwannomas Treated With Gamma Knife Surgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasegawa, Toshinori, E-mail: h-toshi@komakihp.gr.jp; Kato, Takenori; Iizuka, Hiroshi

    Purpose: Surgical resection is considered the desirable curative treatment for trigeminal schwannomas. However, complete resection without any complications remains challenging. During the last several decades, stereotactic radiosurgery (SRS) has emerged as a minimally invasive treatment modality. Information regarding long-term outcomes of SRS for patients harboring trigeminal schwannomas is limited because of the rarity of this tumor. The aim of this study was to evaluate long-term tumor control and functional outcomes in patients harboring trigeminal schwannomas treated with SRS, specifically with gamma knife surgery (GKS). Methods and Materials: Fifty-three patients harboring trigeminal schwannomas treated with GKS were evaluated. Of these, 2 patients (4%) had partial irradiation of the tumor, and 34 patients (64%) underwent GKS as the initial treatment. The median tumor volume was 6.0 cm³. The median maximum and marginal doses were 28 Gy and 14 Gy, respectively. Results: The median follow-up period was 98 months. On the last follow-up image, 7 patients (13%) had tumor enlargement, including the 2 patients who had partial treatment. Excluding the 2 patients who had partial treatment, the actuarial 5- and 10-year progression-free survival (PFS) rates were 90% and 82%, respectively. Patients with tumors compressing the brainstem with deviation of the fourth ventricle had significantly lower PFS rates. If those patients with tumors compressing the brainstem with deviation of the fourth ventricle are excluded, the actuarial 5- and 10-year PFS rates increased to 95% and 90%, respectively. Ten percent of patients had worsened facial numbness or pain in spite of no tumor progression, indicating an adverse radiation effect. Conclusions: GKS can be an acceptable alternative to surgical resection in patients with trigeminal schwannomas. However, large tumors that compress the brainstem with deviation of the fourth ventricle should be surgically removed first and then treated with GKS when necessary.

  3. Pressure fluctuation generated by the interaction of blade and tongue

    NASA Astrophysics Data System (ADS)

    Zheng, Lulu; Dou, Hua-Shu; Chen, Xiaoping; Zhu, Zuchao; Cui, Baoling

    2018-02-01

    Pressure fluctuation around the tongue has a large effect on the stable operation of a centrifugal pump. In this paper, the Reynolds-averaged Navier-Stokes (RANS) equations and the RNG k-epsilon turbulence model are employed to simulate the flow in a pump. The flow field in the centrifugal pump is computed for a range of flow rates. The simulation results have been compared with the experimental data and good agreement has been achieved. In order to study the interaction of the tongue with the impeller, fifteen monitor probes are evenly distributed circumferentially at three radii around the tongue. Pressure distribution is investigated at various blade positions as the blade approaches and leaves the tongue region. Results show that the pressure signal fluctuates strongly around the tongue, and more intensely near the tongue surface. At the design condition, the standard deviation of pressure fluctuation is at its minimum. At large flow rates, the enlarged low-pressure region at the blade trailing edge increases the pressure fluctuation amplitude and pressure spectra at the monitor probes. Minimum pressure is obtained when the blade faces the tongue. It is found that the amplitude of pressure fluctuation strongly depends on the blade position at large flow rates, and pressure fluctuation is caused by the relative movement between blades and tongue. At small flow rates, the pattern of pressure fluctuation depends mainly on the structure of the vortex flow at the blade passage exit, besides the influence of the relative position between the blade and the tongue.

  4. Enhancement of large fluctuations to extinction in adaptive networks

    NASA Astrophysics Data System (ADS)

    Hindes, Jason; Schwartz, Ira B.; Shaw, Leah B.

    2018-01-01

    During an epidemic, individual nodes in a network may adapt their connections to reduce the chance of infection. A common form of adaption is avoidance rewiring, where a noninfected node breaks a connection to an infected neighbor and forms a new connection to another noninfected node. Here we explore the effects of such adaptivity on stochastic fluctuations in the susceptible-infected-susceptible model, focusing on the largest fluctuations that result in extinction of infection. Using techniques from large-deviation theory, combined with a measurement of heterogeneity in the susceptible degree distribution at the endemic state, we are able to predict and analyze large fluctuations and extinction in adaptive networks. We find that in the limit of small rewiring there is a sharp exponential reduction in mean extinction times compared to the case of zero adaption. Furthermore, we find an exponential enhancement in the probability of large fluctuations with increased rewiring rate, even when holding the average number of infected nodes constant.

  5. Self-optimizing Pitch Control for Large Scale Wind Turbine Based on ADRC

    NASA Astrophysics Data System (ADS)

    Xia, Anjun; Hu, Guoqing; Li, Zheng; Huang, Dongxiao; Wang, Fengxiang

    2018-01-01

    Since a wind turbine is a complex nonlinear and strongly coupled system, traditional PI control methods can hardly achieve good control performance. A self-optimizing pitch control method based on active-disturbance-rejection control theory is proposed in this paper. A linear model of the wind turbine is derived by linearizing the aerodynamic torque equation, and the dynamic response of the wind turbine is transformed into a first-order linear system. An expert system is designed to optimize the amplification coefficient according to the pitch rate and the speed deviation. The purpose of the proposed control method is to regulate the amplification coefficient automatically and keep the variations of pitch rate and rotor speed within proper ranges. Simulation results show that the proposed pitch control method can modify the amplification coefficient effectively when it is unsuitable, keeping the variations of pitch rate and rotor speed within proper ranges.

  6. Cross-section and rate formulas for electron-impact ionization, excitation, deexcitation, and total depopulation of excited atoms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vriens, L.; Smeets, A.H.M.

    1980-09-01

    For electron-induced ionization, excitation, and de-excitation, mainly from excited atomic states, a detailed analysis is presented of the dependence of the cross sections and rate coefficients on electron energy and temperature, and on atomic parameters. A wide energy range is covered, including sudden as well as adiabatic collisions. By combining the available experimental and theoretical information, a set of simple analytical formulas is constructed for the cross sections and rate coefficients of the processes mentioned, for the total depopulation, and for three-body recombination. The formulas account for large deviations from classical and semiclassical scaling, as found for excitation. They agree with experimental data and with the theories in their respective ranges of validity, but have a wider range of validity than the separate theories. The simple analytical form further facilitates the application in plasma modeling.

  7. What Determines Star Formation Rates?

    NASA Astrophysics Data System (ADS)

    Evans, Neal John

    2017-06-01

    The relations between star formation and gas have received renewed attention. We combine studies on scales ranging from local (within 0.5 kpc) to distant galaxies to assess what factors contribute to star formation. These include studies of star forming regions in the Milky Way, the LMC, nearby galaxies with spatially resolved star formation, and integrated galaxy studies. We test whether total molecular gas or dense gas provides the best predictor of star formation rate. The star formation "efficiency", defined as star formation rate divided by mass, spreads over a large range when the mass refers to molecular gas; the standard deviation of the log of the efficiency decreases by a factor of three when the mass of relatively dense molecular gas is used rather than the mass of all the molecular gas. We suggest ways to further develop the concept of "dense gas" to incorporate other factors, such as turbulence.

  8. A framework for the direct evaluation of large deviations in non-Markovian processes

    NASA Astrophysics Data System (ADS)

    Cavallaro, Massimo; Harris, Rosemary J.

    2016-11-01

    We propose a general framework to simulate stochastic trajectories with arbitrarily long memory dependence and efficiently evaluate large deviation functions associated to time-extensive observables. This extends the ‘cloning’ procedure of Giardinà et al (2006 Phys. Rev. Lett. 96 120603) to non-Markovian systems. We demonstrate the validity of this method by testing non-Markovian variants of an ion-channel model and the totally asymmetric exclusion process, recovering results obtainable by other means.
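
    The cloning procedure referenced here is easiest to see on a Markovian toy model before any non-Markovian extension: each walker carries a bias weight per event, the population is resampled in proportion to those weights, and the log of the average weight accumulates the scaled cumulant generating function (SCGF). A minimal sketch for a two-state chain whose observable is the number of jumps; the model and all parameter values are illustrative, not taken from the paper:

```python
import math
import random

def scgf_cloning(p, s, n_clones=2000, T=200, seed=42):
    """Estimate the SCGF psi(s) for the jump count of a two-state chain
    (switch probability p per step) by the cloning/population method:
    each jump contributes a weight exp(-s), and the population is
    resampled each step in proportion to the weights."""
    rng = random.Random(seed)
    states = [0] * n_clones
    log_Z = 0.0
    for _ in range(T):
        weights, new_states = [], []
        for x in states:
            if rng.random() < p:              # jump: reweight by exp(-s)
                new_states.append(1 - x)
                weights.append(math.exp(-s))
            else:                             # stay: weight 1
                new_states.append(x)
                weights.append(1.0)
        log_Z += math.log(sum(weights) / n_clones)
        # clone/prune: resample the population proportionally to the weights
        states = rng.choices(new_states, weights=weights, k=n_clones)
    return log_Z / T

p, s = 0.3, 0.5
est = scgf_cloning(p, s)
# For this chain the SCGF is known exactly: psi(s) = ln((1-p) + p*exp(-s)).
exact = math.log(1 - p + p * math.exp(-s))
```

For genuinely non-Markovian dynamics the jump probability would depend on the trajectory history, which is what the framework above generalizes.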

  9. Efficient characterisation of large deviations using population dynamics

    NASA Astrophysics Data System (ADS)

    Brewer, Tobias; Clark, Stephen R.; Bradford, Russell; Jack, Robert L.

    2018-05-01

    We consider population dynamics as implemented by the cloning algorithm for analysis of large deviations of time-averaged quantities. We use the simple symmetric exclusion process with periodic boundary conditions as a prototypical example and investigate the convergence of the results with respect to the algorithmic parameters, focussing on the dynamical phase transition between homogeneous and inhomogeneous states, where convergence is relatively difficult to achieve. We discuss how the performance of the algorithm can be optimised, and how it can be efficiently exploited on parallel computing platforms.

  10. Evaluation of True Power Luminous Efficiency from Experimental Luminance Values

    NASA Astrophysics Data System (ADS)

    Tsutsui, Tetsuo; Yamamato, Kounosuke

    1999-05-01

    A method for obtaining the true external power luminous efficiency from experimentally obtained luminance in organic light-emitting diodes (LEDs) was demonstrated. Conventional two-layer organic LEDs with different electron-transport layer thicknesses were prepared. Spatial distributions of emission intensities were observed. Large deviations in both emission spectra and spatial emission patterns were observed when the electron-transport layer thickness was varied. The deviation of emission patterns from the standard Lambertian pattern was found to cause overestimations of power luminous efficiencies as large as 30%. A method for evaluating correction factors was proposed.

  11. Precision analysis for standard deviation measurements of immobile single fluorescent molecule images.

    PubMed

    DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M

    2010-03-29

    Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
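
    The photon-number dependence of this precision can be checked with a quick Monte Carlo. For pure shot noise (ignoring the background-noise and pixelation terms the paper treats), the sample standard deviation of N photon positions drawn from a Gaussian profile of width σ scatters by roughly σ/√(2(N−1)). A sketch under those simplifying assumptions; the numeric values are illustrative:

```python
import math
import numpy as np

def std_measurement_error(sigma, n_photons, n_trials=4000, seed=0):
    """Monte Carlo scatter of the sample standard deviation of a Gaussian
    intensity profile estimated from n_photons photon positions.
    For pure shot noise this approaches sigma / sqrt(2*(n_photons - 1))."""
    rng = np.random.default_rng(seed)
    positions = rng.normal(0.0, sigma, size=(n_trials, n_photons))
    estimates = positions.std(axis=1, ddof=1)
    return estimates.std()

err = std_measurement_error(sigma=130.0, n_photons=500)   # sigma in nm, say
theory = 130.0 / math.sqrt(2 * (500 - 1))
```

With σ of order 100 nm and a few hundred detected photons, the scatter is a few nanometers, consistent with the nanometer precision claimed above.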

  12. Management of spontaneous pneumothorax compared to British Thoracic Society (BTS) 2003 guidelines: a district general hospital audit.

    PubMed

    Medford, Andrew Rl; Pepperell, Justin Ct

    2007-10-01

    In 1993, the British Thoracic Society (BTS) issued guidelines for the management of spontaneous pneumothorax (SP). These were refined in 2003. To determine adherence to the 2003 BTS SP guidelines in a district general hospital. An initial retrospective audit of 52 episodes of acute SP was performed. Subsequent intervention involved a junior doctor educational update on both the 2003 BTS guidelines and the initial audit results, and the setting up of an online guideline hyperlink. After the educational intervention a further prospective re-audit of 28 SP episodes was performed. Management of SP deviated considerably from the 2003 BTS guidelines in the initial audit - deviation rate 26.9%. After the intervention, a number of clinical management deviations persisted (32.1% deviation rate); these included failure to insert a chest drain despite unsuccessful aspiration, and attempting aspiration of symptomatic secondary SPs. Specific tools to improve standards might include a pneumothorax proforma to improve record keeping and a pneumothorax care pathway to reduce management deviations compared to BTS guidelines. Successful change also requires identification of the total target audience for any educational intervention.

  13. Forecasting of magnitude and duration of currency crises based on the analysis of distortions of fractal scaling in exchange rate fluctuations

    NASA Astrophysics Data System (ADS)

    Uritskaya, Olga Y.

    2005-05-01

    Results of fractal stability analysis of daily exchange rate fluctuations of more than 30 floating currencies for a 10-year period are presented. It is shown for the first time that small- and large-scale dynamical instabilities of national monetary systems correlate with deviations of the detrended fluctuation analysis (DFA) exponent from the value 1.5 predicted by the efficient market hypothesis. The observed dependence is used for classification of long-term stability of floating exchange rates as well as for revealing various forms of distortion of stable currency dynamics prior to large-scale crises. A normal range of DFA exponents consistent with crisis-free long-term exchange rate fluctuations is determined, and several typical scenarios of unstable currency dynamics with DFA exponents fluctuating beyond the normal range are identified. It is shown that monetary crashes are usually preceded by prolonged periods of abnormal (decreased or increased) DFA exponent, with the after-crash exponent tending to the value 1.5 indicating a more reliable exchange rate dynamics. Statistically significant regression relations (R=0.99, p<0.01) between duration and magnitude of currency crises and the degree of distortion of monofractal patterns of exchange rate dynamics are found. It is demonstrated that the parameters of these relations characterizing small- and large-scale crises are nearly equal, which implies a common instability mechanism underlying these events. The obtained dependences have been used as a basic ingredient of a forecasting technique which provided correct in-sample predictions of monetary crisis magnitude and duration over various time scales. The developed technique can be recommended for real-time monitoring of dynamical stability of floating exchange rate systems and creating advanced early-warning-system models for currency crisis prevention.
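
    The DFA exponent used throughout this analysis is the slope of log fluctuation versus log scale after windowed detrending of the integrated series. A compact first-order DFA sketch (window scales are illustrative); applied to a random walk it recovers an exponent near the value 1.5 that the paper associates with crisis-free, efficient-market dynamics:

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128)):
    """Detrended fluctuation analysis (order 1): integrate the series,
    remove a linear fit in non-overlapping windows at each scale n,
    and return the slope of log F(n) vs log n."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for n in scales:
        rms = []
        for i in range(len(y) // n):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            rms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(0)
walk_alpha = dfa_exponent(np.cumsum(rng.standard_normal(8000)))  # ≈ 1.5
```

Persistent exchange-rate fluctuations would push the exponent above 1.5 and antipersistent ones below it, which is the kind of deviation the forecasting technique above monitors.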

  14. A randomized controlled trial investigating the effects of craniosacral therapy on pain and heart rate variability in fibromyalgia patients.

    PubMed

    Castro-Sánchez, Adelaida María; Matarán-Peñarrocha, Guillermo A; Sánchez-Labraca, Nuria; Quesada-Rubio, José Manuel; Granero-Molina, José; Moreno-Lorenzo, Carmen

    2011-01-01

    Fibromyalgia is a prevalent musculoskeletal disorder associated with widespread mechanical tenderness, fatigue, non-refreshing sleep, depressed mood and pervasive dysfunction of the autonomic nervous system: tachycardia, postural intolerance, Raynaud's phenomenon and diarrhoea. To determine the effects of craniosacral therapy on sensitive tender points and heart rate variability in patients with fibromyalgia. A randomized controlled trial. Ninety-two patients with fibromyalgia were randomly assigned to an intervention group or placebo group. Patients received treatments for 20 weeks. The intervention group underwent a craniosacral therapy protocol and the placebo group received sham treatment with disconnected magnetotherapy equipment. Pain intensity levels were determined by evaluating tender points, and heart rate variability was recorded by 24-hour Holter monitoring. After 20 weeks of treatment, the intervention group showed significant reduction in pain at 13 of the 18 tender points (P < 0.05). Significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement versus baseline values were observed in the intervention group but not in the placebo group. At two months and one year post therapy, the intervention group showed significant differences versus baseline in tender points at left occiput, left-side lower cervical, left epicondyle and left greater trochanter and significant differences in temporal standard deviation of RR segments, root mean square deviation of temporal standard deviation of RR segments and clinical global impression of improvement. Craniosacral therapy improved medium-term pain symptoms in patients with fibromyalgia.
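
    The RR-interval statistics reported here correspond to standard time-domain heart rate variability indices, assuming "temporal standard deviation of RR segments" denotes SDNN and "root mean square deviation" denotes RMSSD. A minimal sketch with made-up RR intervals in milliseconds (not patient data):

```python
import math

def sdnn(rr_ms):
    """Temporal standard deviation of RR intervals (SDNN), in ms.
    Population formula (divide by N) for simplicity."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive RR differences (RMSSD), in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [812, 790, 804, 830, 818]  # illustrative RR intervals (ms)
```

SDNN reflects overall variability over the recording, while RMSSD emphasizes beat-to-beat (parasympathetically mediated) variability, which is why both are reported from 24-hour Holter data.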

  15. Diagnostic classification of macular ganglion cell and retinal nerve fiber layer analysis: differentiation of false-positives from glaucoma.

    PubMed

    Kim, Ko Eun; Jeoung, Jin Wook; Park, Ki Ho; Kim, Dong Myung; Kim, Seok Hwan

    2015-03-01

To investigate the rate and associated factors of false-positive diagnostic classification of ganglion cell analysis (GCA) and retinal nerve fiber layer (RNFL) maps, and characteristic false-positive patterns on optical coherence tomography (OCT) deviation maps. Prospective, cross-sectional study. A total of 104 healthy eyes of 104 normal participants. All participants underwent peripapillary and macular spectral-domain (Cirrus-HD, Carl Zeiss Meditec Inc, Dublin, CA) OCT scans. False-positive diagnostic classification was defined as yellow or red color-coded areas for GCA and RNFL maps. Univariate and multivariate logistic regression analyses were used to determine associated factors. Eyes with abnormal OCT deviation maps were categorized on the basis of the shape and location of the abnormal color-coded area. Differences in clinical characteristics among the subgroups were compared. (1) The rate and associated factors of false-positive OCT maps; (2) patterns of false-positive, color-coded areas on the GCA deviation map and associated clinical characteristics. Of the 104 healthy eyes, 42 (40.4%) and 32 (30.8%) showed abnormal diagnostic classifications on any of the GCA and RNFL maps, respectively. Multivariate analysis revealed that false-positive GCA diagnostic classification was associated with longer axial length and larger fovea-disc angle, whereas longer axial length and smaller disc area were associated with abnormal RNFL maps. Eyes with an abnormal GCA deviation map were categorized as group A (donut-shaped round area around the inner annulus), group B (island-like isolated area), and group C (diffuse, circular area with an irregular inner margin). The axial length showed a significant increasing trend from group A to C (P=0.001), and likewise, the refractive error was more myopic in group C than in groups A (P=0.015) and B (P=0.014).
Group C had thinner average ganglion cell-inner plexiform layer thickness compared with other groups (group A=B>C, P=0.004). Abnormal OCT diagnostic classification should be interpreted with caution, especially in eyes with long axial lengths, large fovea-disc angles, and small optic discs. Our findings suggest that the characteristic patterns of OCT deviation map can provide useful clues to distinguish glaucomatous changes from false-positive findings. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  16. Spatiotemporal Parameters are not Substantially Influenced by Load Carriage or Inclination During Treadmill and Overground Walking

    PubMed Central

    Seay, Joseph F.; Gregorczyk, Karen N.; Hasselquist, Leif

    2016-01-01

Influences of load carriage and inclination on spatiotemporal parameters were examined during treadmill and overground walking. Ten soldiers walked on a treadmill and overground under three load conditions (0 kg, 20 kg, 40 kg) at level, uphill (6% grade) and downhill (-6% grade) inclinations at a self-selected speed, which was held constant across conditions. Mean values and standard deviations for double support percentage, stride length and step rate were compared across conditions. Double support percentage increased with load and with the inclination change from uphill to level walking, with a 0.4%-of-stance greater increase in the 20 kg condition than in the 0 kg condition. As inclination changed from uphill to downhill, step rate increased more overground (4.3 ± 3.5 steps/min) than during treadmill walking (1.7 ± 2.3 steps/min). For the 40 kg condition, the standard deviations were larger than for the 0 kg condition for both step rate and double support percentage. There was no change between modes in step rate standard deviation. For overground compared to treadmill walking, the standard deviations for stride length and double support percentage increased and decreased, respectively. Changes in load of up to 40 kg, inclination of 6% grade away from level (i.e., uphill or downhill) and mode (treadmill vs. overground) produced small yet statistically significant changes in spatiotemporal parameters. Variability, as assessed by standard deviation, was not systematically lower during treadmill walking than during overground walking. Given the small magnitude of the changes, treadmill walking appears to replicate the spatiotemporal parameters of overground walking. PMID:28149338

  17. High-Data-Rate Quadrax Cable Microwave Characterization at the NASA Glenn Structural Dynamics Laboratory

    NASA Technical Reports Server (NTRS)

    Theofylaktos, Onoufrios; Warner, Joseph D.; Sheehe, Charles J.

    2012-01-01

An experiment was performed to determine the degradation in the bit-error rate (BER) in the high-data-rate cables chosen for the Orion Service Module due to extreme launch conditions of vibrations with a magnitude of 60g. The cable type chosen for the Orion Service Module was no. 8 quadrax cable. The increase in electrical noise induced on these no. 8 quadrax cables was measured at the NASA Glenn vibration facility in the Structural Dynamics Laboratory. The intensity of the vibrations was set at 32g, which was the maximum available level at the facility. The cable lengths used during measurements were 1, 4, and 8 m. The noise measurements were done in an analog fashion using a performance network analyzer (PNA) by recording the standard deviation of the transmission scattering parameter S(sub 21) over the frequency range of 100 to 900 MHz. The standard deviation of S(sub 21) was measured before, during, and after the vibration of the cables at the vibration facility. We observed an increase in noise by a factor of 2 to 6. From these measurements we estimated the increase expected in the BER for a cable length of 25 m and concluded that the noise increase due to vibration is large enough that it must be taken into account in the design of the communication system for a BER of 10(exp -8).
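
The scale of the BER degradation described above can be illustrated with a simple link model. This is a hedged sketch, not the authors' extrapolation method (which the abstract does not give): it assumes binary signaling in additive Gaussian noise, where BER = Q(SNR) with Q the Gaussian tail function, so multiplying the noise standard deviation by a factor k divides the effective SNR by k.

```python
import math

def ber(snr):
    """Bit-error rate for binary signaling in additive Gaussian noise:
    BER = Q(snr) = 0.5 * erfc(snr / sqrt(2))."""
    return 0.5 * math.erfc(snr / math.sqrt(2.0))

# Amplitude SNR required for BER = 1e-8, found by bisection
# (ber is strictly decreasing in snr).
lo, hi = 0.0, 10.0
while hi - lo > 1e-9:
    mid = 0.5 * (lo + hi)
    if ber(mid) > 1e-8:
        lo = mid
    else:
        hi = mid
snr_req = 0.5 * (lo + hi)

# If vibration multiplies the noise standard deviation by k (factors of
# 2 to 6 were observed), the same link runs at an effective SNR of snr_req/k.
degraded = {k: ber(snr_req / k) for k in (2.0, 6.0)}
```

Under these assumptions, even a 2-fold noise increase degrades a 10(exp -8) link by several orders of magnitude, consistent with the conclusion that vibration-induced noise must be budgeted for in the communication-system design.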

  18. Performance evaluation of an importance sampling technique in a Jackson network

    NASA Astrophysics Data System (ADS)

Mahdipour, Ebrahim; Rahmani, Amir Masoud; Setayeshi, Saeed

    2014-03-01

Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network settings. Estimating probabilities associated with rare events has been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare event probability in a Jackson network. This article applies strict deadlines to a two-node Jackson network with feedback, whose arrival and service rates are modulated by an exogenous finite-state Markov process. We have estimated the probability of network blocking for various sets of parameters, as well as the probability of customers missing their deadlines for different loads and deadline values. Finally, we have shown that the probability of total population overflow may be affected by various deadline values, service rates and arrival rates.
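
The "a priori fixed change of measure suggested by large deviation analysis" mentioned above can be demonstrated on the simplest case, a single M/M/1 queue, where it amounts to swapping the arrival and service rates so that the rare overflow event becomes typical. A minimal sketch with illustrative parameters (not the article's two-node modulated network):

```python
import random

def overflow_prob_is(lam, mu, N, runs=5000, seed=1):
    """Estimate P(an M/M/1 queue starting with 1 customer reaches level N
    before emptying) by importance sampling with the large-deviations
    change of measure that swaps arrival and service rates."""
    random.seed(seed)
    p = lam / (lam + mu)        # original probability of an up-step (arrival)
    pp = mu / (lam + mu)        # tilted probability of an up-step
    total = 0.0
    for _ in range(runs):
        x, w = 1, 1.0           # queue length and likelihood ratio
        while 0 < x < N:
            if random.random() < pp:
                x += 1
                w *= p / pp             # arrival: weight by original/tilted
            else:
                x -= 1
                w *= (1 - p) / (1 - pp) # departure
        if x == N:
            total += w          # only overflow paths contribute
    return total / runs

lam, mu, N = 0.3, 0.7, 20
est = overflow_prob_is(lam, mu, N)

# Gambler's-ruin exact value for comparison: with r = mu/lam,
# P(hit N before 0 | start at 1) = (r - 1) / (r**N - 1).
r = mu / lam
exact = (r - 1.0) / (r**N - 1.0)
```

With these parameters the target probability is of order 1e-7, far too small for naive simulation with a few thousand runs, yet the tilted estimator recovers it accurately because under the swapped rates the overflow path is the typical one.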

  19. Linked versus unlinked estimates of mortality and length of life by education and marital status: evidence from the first record linkage study in Lithuania.

    PubMed

    Shkolnikov, Vladimir M; Jasilionis, Domantas; Andreev, Evgeny M; Jdanov, Dmitri A; Stankuniene, Vladislava; Ambrozaitiene, Dalia

    2007-04-01

Earlier studies have found large and widening differences in mortality by education and marital status in post-Soviet countries. Their results are based on independent tabulations of population and death counts (unlinked data). The present study provides the first census-linked estimates of group-specific mortality and the first comparison between census-linked and unlinked mortality estimates for a post-Soviet country. The study is based on a data set linking 140,000 deaths occurring in 2001-2004 in Lithuania with the population census of 2001. The same socio-demographic information about the deceased is available from both the census and death records. Cross-tabulations and Poisson regressions are used to compare linked and unlinked data. Linked and unlinked estimates of life expectancies and mortality rate ratios are calculated with standard life table techniques and Poisson regressions. For the two socio-demographic variables under study, the values from the death records partly differ from those from the census records. The deviations are especially significant for education, with 72-73%, 66-67%, and 82-84% matching for higher education, secondary education, and lower education, respectively. For marital status, deviations are less frequent. For education and marital status, unlinked estimates tend to overstate mortality in disadvantaged groups and understate mortality in advantaged groups. The differences in inter-group life expectancy and the mortality rate ratios are thus significantly overestimated in the unlinked data. Socio-demographic differences in mortality previously observed in Lithuania, and possibly in other post-Soviet countries, are overestimated. The growth in inequalities over the 1990s is real but might be overstated. The results of this study confirm the existence of large and widening health inequalities but call for better data.

  20. Validation of Cross Sections with Criticality Experiment and Reaction Rates: the Neptunium Case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Berthier, B.; Le Naour, C.; Stéphan, C.; Paradela, C.; Tarrío, D.; Duran, I.

    2014-04-01

    The 237Np neutron-induced fission cross section has been recently measured in a large energy range (from eV to GeV) at the n_TOF facility at CERN. When compared to previous measurements the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we considered a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by uranium highly enriched in 235U so as to approach criticality with fast neutrons. The multiplication factor keff of the calculation is in better agreement with the experiment when we replace the ENDF/B-VII.0 evaluation of the 237Np fission cross section by the n_TOF data. We also explored the hypothesis of deficiencies of the inelastic cross section in 235U which has been invoked by some authors to explain the deviation of 750 pcm. The large modification needed to reduce the deviation seems to be incompatible with existing inelastic cross section measurements. Also we show that the νbar of 237Np can hardly be incriminated because of the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section but at least one of the benchmark experiments, where the active deposits have been well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.

  1. SU-E-T-546: Use of Implant Volume for Quality Assurance of Low Dose Rate Brachytherapy Treatment Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilkinson, D; Kolar, M

Purpose: To analyze the application of implant volume (V100) data as a method for a global check of low dose rate (LDR) brachytherapy plans. Methods: Treatment plans for 335 consecutive patients undergoing permanent seed implants for prostate cancer and for 113 patients treated with plaque therapy for ocular melanoma were analyzed. Plaques used were 54 COMS (10 to 20 mm, notched and regular) and 59 Eye Physics EP917s with variable loading. Plots of treatment time x implanted activity per unit dose versus V100{sup 0.667} were made. V100 values were obtained using dose volume histograms calculated by the treatment planning systems (Variseed 8.02 and Plaque Simulator 5.4). Four different physicists were involved in planning the prostate seed cases; two physicists planned the eye plaques. Results: Since the time and dose for the prostate cases did not vary, a plot of implanted activity vs V100{sup 0.667} was made. A linear fit with no intercept had an r{sup 2} = 0.978; more than 94% of the actual activities fell within 5% of the activities calculated from the linear fit. The greatest deviations were in cases where the implant volumes were large (> 100 cc). Both COMS and EP917 plaque linear fits were good (r{sup 2} = 0.967 and 0.957); the largest deviations were seen for large volumes. Conclusions: The method outlined here is effective for checking planning consistency and quality assurance of two types of LDR brachytherapy treatment plans (temporary and permanent). A spreadsheet for the calculations enables a quick check of the plan in situations where time is short (e.g., OR-based prostate planning).
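
The consistency check described above (a no-intercept linear fit of implanted activity against V100 raised to the 2/3 power, with plans deviating more than 5% flagged) can be sketched in a few lines. All numbers below are synthetic and purely illustrative; the slope, volumes and scatter are invented, not the paper's data:

```python
import random

# Synthetic demonstration: implanted activity is fit as
# activity = c * V100**(2/3) with no intercept, and any plan deviating
# more than 5% from the fit is flagged for review.
random.seed(0)
c_true = 12.0                                    # hypothetical slope
v100 = [random.uniform(20.0, 120.0) for _ in range(50)]   # implant volumes, cc
acts = [c_true * v**(2.0 / 3.0) * random.uniform(0.98, 1.02)  # +/-2% scatter
        for v in v100]

x = [v**(2.0 / 3.0) for v in v100]
# No-intercept least-squares slope: c = sum(x*y) / sum(x*x)
c_fit = sum(xi * yi for xi, yi in zip(x, acts)) / sum(xi * xi for xi in x)

flagged = [i for i, (xi, yi) in enumerate(zip(x, acts))
           if abs(yi - c_fit * xi) / (c_fit * xi) > 0.05]

# Checking a hypothetical new plan whose activity is 15% above the fit:
v_new = 80.0
act_new = 1.15 * c_fit * v_new**(2.0 / 3.0)
dev = abs(act_new - c_fit * v_new**(2.0 / 3.0)) / (c_fit * v_new**(2.0 / 3.0))
needs_review = dev > 0.05
```

In this toy version the consistent plans all fall within the 5% band while the deliberately mis-planned case is flagged, mirroring how the spreadsheet check catches gross planning errors quickly.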

  2. Strain-rate dependence of ramp-wave evolution and strength in tantalum

    DOE PAGES

    Lane, J. Matthew D.; Foiles, Stephen M.; Lim, Hojun; ...

    2016-08-25

We have conducted molecular dynamics (MD) simulations of quasi-isentropic ramp-wave compression to very high pressures over a range of strain rates from 10{sup 11} down to 10{sup 8} 1/s. Using scaling methods, we collapse wave profiles from various strain rates to a master profile curve, which shows deviations when material response is strain-rate dependent. Thus, we can show with precision where, and how, strain-rate dependence affects the ramp wave. We find that strain rate affects the stress-strain material response most dramatically at strains below 20%, and that above 30% strain the material response is largely independent of strain rate. We show good overall agreement with experimental stress-strain curves up to approximately 30% strain, above which the simulated response is somewhat too stiff. We postulate that this could be due to our interatomic potential or to differences in grain structure and/or size between simulation and experiment. Strength is directly measured from the per-atom stress tensor and shows significantly enhanced elastic response at the highest strain rates. This enhanced elastic response is less pronounced at higher pressures and at lower strain rates.

  3. Misery Loves Company? A Meta-Regression Examining Aggregate Unemployment Rates and the Unemployment-Mortality Association

    PubMed Central

    Roelfs, David J.; Shor, Eran; Blank, Aharon; Schwartz, Joseph E.

    2015-01-01

    PURPOSE Individual-level unemployment has been consistently linked to poor health and higher mortality, but some scholars have suggested that the negative effect of job loss may be lower during times and in places where aggregate unemployment rates are high. We review three logics associated with this moderation hypothesis: health selection, social isolation, and unemployment stigma. We then test whether aggregate unemployment rates moderate the individual-level association between unemployment and all-cause mortality. METHODS We use 6 meta-regression models (each utilizing a different measure of the aggregate unemployment rate) based on 62 relative all-cause mortality risk estimates from 36 studies (from 15 nations). RESULTS We find that the magnitude of the individual-level unemployment-mortality association is approximately the same during periods of high and low aggregate-level unemployment. Model coefficients (exponentiated) were 1.01 for the crude unemployment rate (p = 0.27), 0.94 for the change in unemployment rate from the previous year (p = 0.46), 1.01 for the deviation of the unemployment rate from the 5-year running average (p = 0.87), 1.01 for the deviation of the unemployment rate from the 10-year running average (p = 0.73), 1.01 for the deviation of the unemployment rate from the overall average (measured as a continuous variable; p = 0.61), and showed no variation across unemployment levels when the deviation of the unemployment rate from the overall average was measured categorically. Heterogeneity between studies was significant (p < .001), supporting the use of the random effects model. CONCLUSIONS We found no strong evidence to suggest that unemployment experiences change when macro-economic conditions change. Efforts to ameliorate the negative social and economic consequences of unemployment should continue to focus on the individual and should be maintained regardless of periodic changes in macro-economic conditions. PMID:25795225

  4. Misery loves company? A meta-regression examining aggregate unemployment rates and the unemployment-mortality association.

    PubMed

    Roelfs, David J; Shor, Eran; Blank, Aharon; Schwartz, Joseph E

    2015-05-01

    Individual-level unemployment has been consistently linked to poor health and higher mortality, but some scholars have suggested that the negative effect of job loss may be lower during times and in places where aggregate unemployment rates are high. We review three logics associated with this moderation hypothesis: health selection, social isolation, and unemployment stigma. We then test whether aggregate unemployment rates moderate the individual-level association between unemployment and all-cause mortality. We use six meta-regression models (each using a different measure of the aggregate unemployment rate) based on 62 relative all-cause mortality risk estimates from 36 studies (from 15 nations). We find that the magnitude of the individual-level unemployment-mortality association is approximately the same during periods of high and low aggregate-level unemployment. Model coefficients (exponentiated) were 1.01 for the crude unemployment rate (P = .27), 0.94 for the change in unemployment rate from the previous year (P = .46), 1.01 for the deviation of the unemployment rate from the 5-year running average (P = .87), 1.01 for the deviation of the unemployment rate from the 10-year running average (P = .73), 1.01 for the deviation of the unemployment rate from the overall average (measured as a continuous variable; P = .61), and showed no variation across unemployment levels when the deviation of the unemployment rate from the overall average was measured categorically. Heterogeneity between studies was significant (P < .001), supporting the use of the random effects model. We found no strong evidence to suggest that unemployment experiences change when macroeconomic conditions change. Efforts to ameliorate the negative social and economic consequences of unemployment should continue to focus on the individual and should be maintained regardless of periodic changes in macroeconomic conditions. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. SU-F-T-285: Evaluation of a Patient DVH-Based IMRT QA System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen, H; Redler, G; Chu, J

    2016-06-15

Purpose: To evaluate the clinical performance of a patient DVH-based QA system for prostate VMAT QA. Methods: Mobius3D (M3D) is QA software with an independent beam model and dose engine. The MobiusFX (MFX) add-on predicts patient dose using treatment machine log files. We commissioned the Mobius beam model in two steps. First, the stock beam model was customized using machine commissioning data, then verified against the TPS with 12 simple phantom plans and 7 clinical 3D plans. Second, the dosimetric leaf gap (DLG) in the Mobius model was fine-tuned for VMAT treatment based on ion chamber measurements for 6 clinical VMAT plans. Upon successful commissioning, we retrospectively performed IMRT QA for 12 VMAT plans with the Mobius system as well as the ArcCHECK-3DVH system. Selected patient DVH values (PTV D95, D50; bladder D2cc, Dmean; rectum D2cc) were compared between TPS, M3D, MFX, and 3DVH. Results: During the first commissioning step, TPS- and M3D-calculated target Dmean values for 3D plans agreed within 0.7%±0.7%, with 3D gamma passing rates of 98%±2%. In the second commissioning step, the Mobius DLG was adjusted by 1.2 mm from the stock value, reducing the average difference between MFX calculation and ion chamber measurement from 3.2% to 0.1%. In retrospective prostate VMAT QA, 5 of 60 MFX-calculated DVH values deviated by more than 5% from the TPS. One large deviation at a high dose level was identified as a potential QA failure. This echoes the 3DVH QA result, which identified 2 instances of large DVH deviation on the same structure. For all DVHs evaluated, M3D and MFX show a high level of agreement (0.1%±0.2%), indicating that the observed deviations likely stem from beam modelling differences rather than delivery errors. Conclusion: The Mobius system provides a viable solution for DVH-based VMAT QA, with the capability of separating TPS and delivery errors.

  6. Development of a good-quality speech coder for transmission over noisy channels at 2.4 kb/s

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. R.; Berouti, M.; Higgins, A.; Russell, W.

    1982-03-01

This report describes the development, study, and experimental results of a 2.4 kb/s speech coder called the harmonic deviations (HDV) vocoder, which transmits good-quality speech over noisy channels with bit-error rates of up to 1%. The HDV coder is based on the linear predictive coding (LPC) vocoder, and it transmits additional information over and above the data transmitted by the LPC vocoder, in the form of deviations between the speech spectrum and the LPC all-pole model spectrum at a selected set of frequencies. At the receiver, the spectral deviations are used to generate the excitation signal for the all-pole synthesis filter. The report describes and compares several methods for extracting the spectral deviations from the speech signal and for encoding them. To limit the bit rate of the HDV coder to 2.4 kb/s, the report discusses several methods, including orthogonal transformation and minimum-mean-square-error scalar quantization of log area ratios, two-stage vector-scalar quantization, and variable frame rate transmission. The report also presents the results of speech-quality optimization of the HDV coder at 2.4 kb/s.

  7. Quality indicators for eye bank.

    PubMed

    Acharya, Manisha; Biswas, Saurabh; Das, Animesh; Mathur, Umang; Dave, Abhishek; Singh, Ashok; Dubey, Suneeta

    2018-03-01

The aim of this study is to identify quality indicators of the eye bank and validate their effectiveness. Adverse reaction rate, discard rate, protocol deviation rate, and compliance rate were defined as quality indicators of the eye bank. These were identified based on a definition of quality that captures two dimensions - "result quality" and "process quality." The indicators were measured and tracked as part of the quality assurance (QA) program of the eye bank. Regular audits were performed to validate alignment of standard operating procedures (SOP) with regulatory and surgeon acceptance standards, and alignment of activities performed in the eye bank with the SOP. A prospective study of the indicators was performed by comparing their observed values over the period 2011-2016. Adverse reaction rate decreased more than 8-fold (from 0.61% to 0.07%), discard rate decreased and stabilized at 30%, protocol deviation rate decreased from 1.05% to 0.08%, and compliance rate reported by annual quality audits improved from 59% to 96% over the same period. In effect, adverse reaction rate, discard rate, and protocol deviation rate were leading indicators, and compliance rate was the trailing indicator. These indicators fill an important gap in the available literature on QA in eye banking. There are two ways in which these findings can be meaningful. First, eye banks which are new to quality measurement can adopt these indicators. Second, eye banks which are already deeply engaged in quality improvement can test these indicators in their eye bank, thereby incorporating them widely and improving them over time.

  8. Significant viscosity dependent deviations from classical van Deemter theory in liquid chromatography with porous silica monolithic columns.

    PubMed

    Nesterenko, Pavel N; Rybalko, Marina A; Paull, Brett

    2005-06-01

Significant deviations from classical van Deemter behaviour, indicative of turbulent-flow liquid chromatography, have been recorded for mobile phases of varying viscosity on porous silica monolithic columns at elevated mobile-phase flow rates.
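
For context, the classical van Deemter model against which these deviations are measured expresses plate height H as a function of linear velocity u. A minimal sketch with illustrative coefficients (not values from the paper):

```python
import math

# Classical van Deemter equation: H(u) = A + B/u + C*u, where
# A = eddy diffusion, B = longitudinal diffusion, C = mass-transfer term.
A, B, C = 1.0, 2.0, 0.05   # illustrative coefficients only

def plate_height(u):
    """Plate height H at linear velocity u (classical van Deemter)."""
    return A + B / u + C * u

# Setting dH/du = -B/u**2 + C = 0 gives the optimum velocity and minimum H:
u_opt = math.sqrt(B / C)             # velocity of maximum efficiency
h_min = A + 2.0 * math.sqrt(B * C)   # plate height at that velocity

# Turbulent-flow behaviour appears as measured H falling below this curve
# at velocities well above u_opt, i.e. H rising more slowly than the
# linear C*u term predicts.
```

The check that H(u_opt) equals A + 2*sqrt(B*C) follows directly from substituting u_opt back into the equation.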

  9. Testing general relativity using gravitational wave signals from the inspiral, merger and ringdown of binary black holes

    NASA Astrophysics Data System (ADS)

    Ghosh, Abhirup; Johnson-McDaniel, Nathan K.; Ghosh, Archisman; Kant Mishra, Chandra; Ajith, Parameswaran; Del Pozzo, Walter; Berry, Christopher P. L.; Nielsen, Alex B.; London, Lionel

    2018-01-01

    Advanced LIGO’s recent observations of gravitational waves (GWs) from merging binary black holes have opened up a unique laboratory to test general relativity (GR) in the highly relativistic regime. One of the tests used to establish the consistency of the first LIGO event with a binary black hole merger predicted by GR was the inspiral-merger-ringdown consistency test. This involves inferring the mass and spin of the remnant black hole from the inspiral (low-frequency) part of the observed signal and checking for the consistency of the inferred parameters with the same estimated from the post-inspiral (high-frequency) part of the signal. Based on the observed rate of binary black hole mergers, we expect the advanced GW observatories to observe hundreds of binary black hole mergers every year when operating at their design sensitivities, most of them with modest signal to noise ratios (SNRs). Anticipating such observations, this paper shows how constraints from a large number of events with modest SNRs can be combined to produce strong constraints on deviations from GR. Using kludge modified GR waveforms, we demonstrate how this test could identify certain types of deviations from GR if such deviations are present in the signal waveforms. We also study the robustness of this test against reasonable variations of a variety of different analysis parameters.

  10. Loss aversion, large deviation preferences and optimal portfolio weights for some classes of return processes

    NASA Astrophysics Data System (ADS)

    Duffy, Ken; Lobunets, Olena; Suhov, Yuri

    2007-05-01

We propose a model of a loss averse investor who aims to maximize his expected wealth under certain constraints. The constraints are that he avoids, with high probability, incurring a (suitably defined) unacceptable loss. The methodology employed comes from the theory of large deviations. We explore a number of fundamental properties of the model and illustrate its desirable features. We demonstrate its utility by analyzing assets that follow some commonly used financial return processes: Fractional Brownian Motion, Jump Diffusion, Variance Gamma and Truncated Lévy.

  11. Gait analysis in children with cerebral palsy.

    PubMed

    Armand, Stéphane; Decoulon, Geraldo; Bonnefoy-Mazure, Alice

    2016-12-01

Cerebral palsy (CP) children present complex and heterogeneous motor disorders that cause gait deviations. Clinical gait analysis (CGA) is needed to identify, understand and support the management of gait deviations in CP. CGA assesses a large amount of quantitative data concerning patients' gait characteristics, such as video, kinematics, kinetics, electromyography and plantar pressure data. Common gait deviations in CP can be grouped into the gait patterns of spastic hemiplegia (drop foot, equinus with different knee positions) and spastic diplegia (true equinus, jump, apparent equinus and crouch) to facilitate communication. However, gait deviations in CP tend to be a continuum of deviations rather than well delineated groups. To interpret CGA, it is necessary to link gait deviations to clinical impairments and to distinguish primary gait deviations from compensatory strategies. CGA does not tell us how to treat a CP patient, but can provide objective identification of gait deviations and further the understanding of gait deviations. Numerous treatment options are available to manage gait deviations in CP. Generally, treatments strive to limit secondary deformations, re-establish the lever arm function and preserve muscle strength. Additional roles of CGA are to better understand the effects of treatments on gait deviations. Cite this article: Armand S, Decoulon G, Bonnefoy-Mazure A. Gait analysis in children with cerebral palsy. EFORT Open Rev 2016;1:448-460. DOI: 10.1302/2058-5241.1.000052.

  12. A Public Database of Immersive VR Videos with Corresponding Ratings of Arousal, Valence, and Correlations between Head Movements and Self Report Measures.

    PubMed

    Li, Benjamin J; Bailenson, Jeremy N; Pines, Adam; Greenleaf, Walter J; Williams, Leanne M

    2017-01-01

    Virtual reality (VR) has been proposed as a methodological tool to study the basic science of psychology and other fields. One key advantage of VR is that sharing of virtual content can lead to more robust replication and representative sampling. A database of standardized content will help fulfill this vision. There are two objectives to this study. First, we seek to establish and allow public access to a database of immersive VR video clips that can act as a potential resource for studies on emotion induction using virtual reality. Second, given the large sample size of participants needed to get reliable valence and arousal ratings for our video, we were able to explore the possible links between the head movements of the observer and the emotions he or she feels while viewing immersive VR. To accomplish our goals, we sourced for and tested 73 immersive VR clips which participants rated on valence and arousal dimensions using self-assessment manikins. We also tracked participants' rotational head movements as they watched the clips, allowing us to correlate head movements and affect. Based on past research, we predicted relationships between the standard deviation of head yaw and valence and arousal ratings. Results showed that the stimuli varied reasonably well along the dimensions of valence and arousal, with a slight underrepresentation of clips that are of negative valence and highly arousing. The standard deviation of yaw positively correlated with valence, while a significant positive relationship was found between head pitch and arousal. The immersive VR clips tested are available online as supplemental material.
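
The head-movement measure used in this study, the per-clip standard deviation of yaw, and its correlation with affect ratings can be computed directly. A minimal sketch with synthetic toy data (real yaw traces and manikin ratings would come from the published database; the coupling between valence and yaw spread below is invented purely to exercise the code):

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def yaw_sd(samples):
    """Per-clip standard deviation of head yaw (degrees)."""
    n = len(samples)
    m = sum(samples) / n
    return math.sqrt(sum((s - m) ** 2 for s in samples) / n)

# Toy data: 30 clips with invented valence ratings (1-9 manikin scale) and
# yaw traces whose spread is tied to valence for illustration only.
random.seed(3)
valence = [random.uniform(1.0, 9.0) for _ in range(30)]
sds = [yaw_sd([random.gauss(0.0, 5.0 + 2.0 * v) for _ in range(200)])
       for v in valence]
r = pearson(sds, valence)   # positive, mirroring the reported yaw-valence link
```

The same two helpers applied to head pitch and arousal ratings would reproduce the study's second reported correlation.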

  13. An Assessment of Some Design Constraints on Heat Production of a 3D Conceptual EGS Model Using an Open-Source Geothermal Reservoir Simulation Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yidong Xia; Mitch Plummer; Robert Podgorney

    2016-02-01

Performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that 1) the fracture horizontal spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle for the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercial, this new open-source code demonstrates a code development strategy that aims to provide unparalleled ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.

  14. A Public Database of Immersive VR Videos with Corresponding Ratings of Arousal, Valence, and Correlations between Head Movements and Self Report Measures

    PubMed Central

    Li, Benjamin J.; Bailenson, Jeremy N.; Pines, Adam; Greenleaf, Walter J.; Williams, Leanne M.

    2017-01-01

    Virtual reality (VR) has been proposed as a methodological tool to study the basic science of psychology and other fields. One key advantage of VR is that sharing of virtual content can lead to more robust replication and representative sampling. A database of standardized content will help fulfill this vision. There are two objectives to this study. First, we seek to establish and allow public access to a database of immersive VR video clips that can act as a potential resource for studies on emotion induction using virtual reality. Second, given the large sample size of participants needed to get reliable valence and arousal ratings for our videos, we were able to explore the possible links between the head movements of the observer and the emotions he or she feels while viewing immersive VR. To accomplish our goals, we sourced and tested 73 immersive VR clips which participants rated on valence and arousal dimensions using self-assessment manikins. We also tracked participants' rotational head movements as they watched the clips, allowing us to correlate head movements and affect. Based on past research, we predicted relationships between the standard deviation of head yaw and valence and arousal ratings. Results showed that the stimuli varied reasonably well along the dimensions of valence and arousal, with a slight underrepresentation of clips that are of negative valence and highly arousing. The standard deviation of yaw positively correlated with valence, while a significant positive relationship was found between head pitch and arousal. The immersive VR clips tested are available online as supplemental material. PMID:29259571
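    As a minimal illustration of the head-movement analysis described above, the sketch below computes the standard deviation of a clip's yaw trace and the Pearson correlation between per-clip yaw SDs and mean valence ratings. All numbers are hypothetical, not the study's data.

```python
import math

def yaw_sd(yaw_samples):
    """Standard deviation of a clip's head-yaw trace (degrees)."""
    n = len(yaw_samples)
    m = sum(yaw_samples) / n
    return math.sqrt(sum((y - m) ** 2 for y in yaw_samples) / n)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical yaw trace (degrees) for one clip.
trace = [0.0, 10.0, -10.0, 5.0, -5.0]
sd = yaw_sd(trace)

# Hypothetical per-clip yaw SDs and mean valence ratings (1-9 SAM scale);
# a positive r mirrors the yaw-valence relationship reported above.
sds = [5.0, 12.0, 20.0, 33.0]
valence = [3.1, 4.8, 6.0, 7.5]
r = pearson_r(sds, valence)
```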

  15. Extreme fluctuations of active Brownian motion

    NASA Astrophysics Data System (ADS)

    Pietzonka, Patrick; Kleinbeck, Kevin; Seifert, Udo

    2016-05-01

    In active Brownian motion, an internal propulsion mechanism interacts with translational and rotational thermal noise and other internal fluctuations to produce directed motion. We derive the distribution of its extreme fluctuations and identify its universal properties using large deviation theory. The limits of slow and fast internal dynamics give rise to a kink-like and parabolic behavior of the corresponding rate functions, respectively. For dipolar Janus particles in two and three dimensions interacting with a field, we predict a novel symmetry akin to, but different from, the one related to entropy production. Measurements of these extreme fluctuations could thus be used to infer properties of the underlying, often hidden, network of states.
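    The rate functions discussed above follow from the scaled cumulant generating function (SCGF) by a Legendre-Fenchel transform. The sketch below performs this transform numerically for a generic Gaussian SCGF, lambda(k) = k^2/2, recovering the parabolic rate function I(x) = x^2/2; this is a textbook illustration of the technique, not the paper's active-particle model.

```python
def rate_function(scgf, x, ks):
    """Legendre-Fenchel transform I(x) = sup_k [k*x - lambda(k)] over a k grid."""
    return max(k * x - scgf(k) for k in ks)

# Gaussian SCGF lambda(k) = k**2 / 2, whose exact transform is I(x) = x**2 / 2.
scgf = lambda k: 0.5 * k * k
ks = [i * 0.001 for i in range(-4000, 4001)]   # grid over which to maximize
I1 = rate_function(scgf, 1.0, ks)              # should be close to 0.5
```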

  16. Adjusted hospital death rates: a potential screen for quality of medical care.

    PubMed

    Dubois, R W; Brook, R H; Rogers, W H

    1987-09-01

    Increased economic pressure on hospitals has accelerated the need to develop a screening tool for identifying hospitals that potentially provide poor quality care. Based upon data from 93 hospitals and 205,000 admissions, we used a multiple regression model to adjust each hospital's crude death rate. The adjustment process used age, origin of the patient from the emergency department or a nursing home, and a hospital case-mix index based on DRGs (diagnosis related groups). Before adjustment, hospital death rates ranged from 0.3 to 5.8 per 100 admissions. After adjustment, hospital death ratios (actual death rate divided by predicted death rate) ranged from 0.36 to 1.36. Eleven hospitals (12 per cent) were identified where the actual death rate exceeded the predicted death rate by more than two standard deviations. In nine hospitals (10 per cent), the predicted death rate exceeded the actual death rate by a similar statistical margin. The 11 hospitals with higher than predicted death rates may provide inadequate quality of care or have uniquely ill patient populations. The adjusted death rate model needs to be validated and generalized before it can be used routinely to screen hospitals. However, the remaining large differences in observed versus predicted death rates lead us to believe that important differences in hospital performance may exist.
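    The screening rule described above (flag hospitals whose observed-to-predicted death ratio deviates by more than two standard deviations) can be sketched as follows. The figures are hypothetical, not the study's data, and the predicted rates stand in for the output of the regression model.

```python
def death_ratio(actual_deaths, admissions, predicted_rate):
    """Observed-to-expected ratio: actual death rate / regression-predicted rate."""
    return (actual_deaths / admissions) / predicted_rate

def flag_outliers(hospitals, n_sd=2.0):
    """Indices of hospitals whose ratio deviates > n_sd SDs from the mean ratio."""
    ratios = [death_ratio(*h) for h in hospitals]
    n = len(ratios)
    mean = sum(ratios) / n
    sd = (sum((r - mean) ** 2 for r in ratios) / n) ** 0.5
    return [i for i, r in enumerate(ratios) if abs(r - mean) > n_sd * sd]

# Hypothetical (deaths, admissions, predicted death rate) triples:
# nine ordinary hospitals plus one with triple the predicted death rate.
hospitals = [(30 + i, 1000, 0.030) for i in range(9)] + [(90, 1000, 0.030)]
high = flag_outliers(hospitals)   # flags only the last hospital
```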

  17. Correlation Between Analog Noise Measurements and the Expected Bit Error Rate of a Digital Signal Propagating Through Passive Components

    NASA Technical Reports Server (NTRS)

    Warner, Joseph D.; Theofylaktos, Onoufrios

    2012-01-01

    A method of determining the bit error rate (BER) of a digital circuit from the measurement of the analog S-parameters of the circuit has been developed. The method is based on the measurement of the noise and the standard deviation of the noise in the S-parameters. Once the standard deviation and the mean of the S-parameters are known, the BER of the circuit can be calculated using the normal Gaussian function.
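    Assuming Gaussian noise and binary signaling with a threshold midway between levels (the setup the abstract describes), the BER follows from the measured mean and standard deviation via the complementary error function. A minimal sketch:

```python
import math

def ber_from_stats(mean, std_dev):
    """Estimate bit error rate from the mean and standard deviation of the
    received signal level, assuming Gaussian noise and a midpoint threshold:
    BER = Q(mean/std) = 0.5 * erfc(mean / (std * sqrt(2)))."""
    return 0.5 * math.erfc(mean / (std_dev * math.sqrt(2.0)))

# A mean-to-noise ratio of 3 sigma gives a BER of about 1.35e-3.
ber = ber_from_stats(3.0, 1.0)
```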

  18. Comparative analysis of the processing accuracy of high strength metal sheets by AWJ, laser and plasma

    NASA Astrophysics Data System (ADS)

    Radu, M. C.; Schnakovszky, C.; Herghelegiu, E.; Tampu, N. C.; Zichil, V.

    2016-08-01

    Experimental tests were carried out on two high-strength steel materials (Ramor 400 and Ramor 550). Dimensional accuracy was quantified by measuring the deviations of some geometric parameters of the part (two lengths and two radii). For Ramor 400 steel, at the jet inlet the deviations of the part radii are quite small for all three analysed processes, whereas for the linear dimensions the deviations are small only for laser cutting. At the jet outlet, the deviations increased slightly compared with those at the jet inlet, for both materials and for all three processes. For Ramor 550 steel, at the jet inlet the deviations of the part radii are very small for AWJ and laser cutting but larger for plasma cutting. At the jet outlet, the deviations of the part radii are very small for all processes; for the linear dimensions, very small deviations were obtained only with laser processing, the other two processes leading to very large deviations.

  19. Robust optimization of the billet for isothermal local loading transitional region of a Ti-alloy rib-web component based on dual-response surface method

    NASA Astrophysics Data System (ADS)

    Wei, Ke; Fan, Xiaoguang; Zhan, Mei; Meng, Miao

    2018-03-01

    Billet optimization can greatly improve the forming quality of the transitional region in the isothermal local loading forming (ILLF) of large-scale Ti-alloy rib-web components. However, the final quality of the transitional region may be deteriorated by uncontrollable factors, such as the manufacturing tolerance of the preforming billet and fluctuations of the stroke length and friction factor. Thus, a dual-response surface method (RSM)-based robust optimization of the billet was proposed to address the uncontrollable factors in the transitional region of the ILLF. Given that die underfilling and folding defects are the two key factors influencing the forming quality of the transitional region, minimizing the mean and standard deviation of the die underfilling rate while avoiding folding defects was defined as the objective function and constraint condition of the robust optimization. A crossed array design was then constructed, and a dual-RSM model was established for the mean and standard deviation of the die underfilling rate, considering the size parameters of the billet and the uncontrollable factors. Subsequently, an optimum solution was derived to achieve the robust optimization of the billet. A case study on the robust optimization was conducted, with good results for improving die filling and avoiding folding defects, suggesting that the robust optimization of the billet in the transitional region of the ILLF is efficient and reliable.

  20. A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grana, Justin; Wolpert, David; Neil, Joshua

    The rapid detection of attackers within firewalls of enterprise computer networks is of paramount importance. Anomaly detectors address this problem by quantifying deviations from baseline statistical models of normal network behavior and signaling an intrusion when the observed data deviates significantly from the baseline model. But, many anomaly detectors do not take into account plausible attacker behavior. As a result, anomaly detectors are prone to a large number of false positives due to unusual but benign activity. Our paper first introduces a stochastic model of attacker behavior which is motivated by real world attacker traversal. Then, we develop a likelihood ratio detector that compares the probability of observed network behavior under normal conditions against the case when an attacker has possibly compromised a subset of hosts within the network. Since the likelihood ratio detector requires integrating over the time each host becomes compromised, we illustrate how to use Monte Carlo methods to compute the requisite integral. We then present Receiver Operating Characteristic (ROC) curves for various network parameterizations that show for any rate of true positives, the rate of false positives for the likelihood ratio detector is no higher than that of a simple anomaly detector and is often lower. Finally, we demonstrate the superiority of the proposed likelihood ratio detector when the network topologies and parameterizations are extracted from real-world networks.
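    A toy version of the ingredients above can be sketched as follows: a likelihood ratio that marginalizes an unknown compromise time by Monte Carlo sampling. The Poisson event-count traffic model, the rate values, and the uniform prior over the compromise time are illustrative assumptions, not the paper's model.

```python
import math, random

def log_poisson(k, lam):
    """Log-pmf of a Poisson(lam) count k (events per time bin)."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def likelihood_ratio(counts, lam_normal, lam_attack, n_samples=5000, seed=1):
    """Monte Carlo estimate of P(counts | attacker) / P(counts | normal).
    The toy attacker model assumes the event rate jumps from lam_normal to
    lam_attack at an unknown compromise time t0, marginalized by sampling
    t0 uniformly over the observation window."""
    rng = random.Random(seed)
    T = len(counts)
    log_p_normal = sum(log_poisson(c, lam_normal) for c in counts)
    total = 0.0
    for _ in range(n_samples):
        t0 = rng.randrange(T)  # sampled compromise time
        log_p = sum(log_poisson(c, lam_normal) for c in counts[:t0])
        log_p += sum(log_poisson(c, lam_attack) for c in counts[t0:])
        total += math.exp(log_p - log_p_normal)
    return total / n_samples

# Traffic that jumps mid-stream scores high; flat baseline traffic scores low.
lr_attack = likelihood_ratio([5, 4, 6, 5, 14, 16, 15, 17], 5.0, 15.0)
lr_benign = likelihood_ratio([5, 4, 6, 5, 5, 6, 4, 5], 5.0, 15.0)
```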

  1. A likelihood ratio anomaly detector for identifying within-perimeter computer network attacks

    DOE PAGES

    Grana, Justin; Wolpert, David; Neil, Joshua; ...

    2016-03-11

    The rapid detection of attackers within firewalls of enterprise computer networks is of paramount importance. Anomaly detectors address this problem by quantifying deviations from baseline statistical models of normal network behavior and signaling an intrusion when the observed data deviates significantly from the baseline model. But, many anomaly detectors do not take into account plausible attacker behavior. As a result, anomaly detectors are prone to a large number of false positives due to unusual but benign activity. Our paper first introduces a stochastic model of attacker behavior which is motivated by real world attacker traversal. Then, we develop a likelihood ratio detector that compares the probability of observed network behavior under normal conditions against the case when an attacker has possibly compromised a subset of hosts within the network. Since the likelihood ratio detector requires integrating over the time each host becomes compromised, we illustrate how to use Monte Carlo methods to compute the requisite integral. We then present Receiver Operating Characteristic (ROC) curves for various network parameterizations that show for any rate of true positives, the rate of false positives for the likelihood ratio detector is no higher than that of a simple anomaly detector and is often lower. Finally, we demonstrate the superiority of the proposed likelihood ratio detector when the network topologies and parameterizations are extracted from real-world networks.

  2. Enhanced detection and visualization of anomalies in spectral imagery

    NASA Astrophysics Data System (ADS)

    Basener, William F.; Messinger, David W.

    2009-05-01

    Anomaly detection algorithms applied to hyperspectral imagery are able to reliably identify man-made objects from a natural environment based on statistical/geometric likelihood. The process is more robust than target identification, which requires precise prior knowledge of the object of interest, but has an inherently higher false alarm rate. Standard anomaly detection algorithms measure deviation of pixel spectra from a parametric model (either statistical or linear mixing) estimating the image background. The topological anomaly detector (TAD) creates a fully non-parametric, graph theory-based, topological model of the image background and measures deviation from this background using codensity. In this paper we present a large-scale comparative test of TAD against 80+ targets in four full HYDICE images using the entire canonical target set for generation of ROC curves. TAD will be compared against several statistics-based detectors including local RX and subspace RX. Even a perfect anomaly detection algorithm would have a high practical false alarm rate in most scenes simply because the user/analyst is not interested in every anomalous object. To assist the analyst in identifying and sorting objects of interest, we investigate coloring of the anomalies with principal-components projections using statistics computed from the anomalies. This gives a very useful colorization of anomalies in which objects of similar material tend to have the same color, enabling an analyst to quickly sort and identify anomalies of highest interest.
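    The RX family of detectors mentioned above scores each pixel by its squared Mahalanobis distance from background statistics. The sketch below does this for two-band spectra with statistics taken from a designated background sample (as in local RX, where they come from an annulus around the test pixel); the scene values are hypothetical.

```python
def rx_scores(background, pixels):
    """RX-style detector for 2-band spectra: squared Mahalanobis distance of
    each test pixel from the mean/covariance of a background sample."""
    n = len(background)
    m0 = sum(p[0] for p in background) / n
    m1 = sum(p[1] for p in background) / n
    c00 = sum((p[0] - m0) ** 2 for p in background) / n
    c11 = sum((p[1] - m1) ** 2 for p in background) / n
    c01 = sum((p[0] - m0) * (p[1] - m1) for p in background) / n
    det = c00 * c11 - c01 * c01
    out = []
    for p in pixels:
        d0, d1 = p[0] - m0, p[1] - m1
        # d^T C^{-1} d using the closed-form 2x2 inverse
        out.append((c11 * d0 * d0 - 2 * c01 * d0 * d1 + c00 * d1 * d1) / det)
    return out

# Hypothetical background cluster and two test pixels (one man-made outlier).
background = [(10.0, 10.5), (10.2, 10.0), (9.8, 10.2), (10.1, 9.9),
              (9.9, 10.1), (10.3, 10.4), (9.7, 9.8), (10.0, 10.0)]
scores = rx_scores(background, [(10.0, 10.1), (25.0, 3.0)])
```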

  3. Large Fluctuations for Spatial Diffusion of Cold Atoms

    NASA Astrophysics Data System (ADS)

    Aghion, Erez; Kessler, David A.; Barkai, Eli

    2017-06-01

    We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density P_t(x) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.

  4. Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets

    ERIC Educational Resources Information Center

    Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad

    2017-01-01

    Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…

  5. Simulation of reflecting surface deviations of centimeter-band parabolic space radiotelescope (SRT) with the large-size mirror

    NASA Astrophysics Data System (ADS)

    Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.

    2017-11-01

    The realization of astrophysical research requires the development of high-sensitivity centimeter-band parabolic space radiotelescopes (SRT) with large mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure. Mesh structures of this size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope with a 10 m diameter mirror is now being developed in Russia within the framework of the "SPECTR-R" program. The external dimensions of the telescope exceed those of existing thermal-vacuum chambers used to verify the accuracy of the SRT reflecting surface under the action of space environment factors. Numerical simulation therefore becomes the basis on which the adopted designs must be accepted, and such modeling should rest on experimental characterization of the basic structural materials and elements of the future reflector. This article considers computational modeling of the reflecting-surface deviations of a large deployable centimeter-band space reflector during its orbital operation. The factors that determine the deviations are analyzed, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation faults; deformations caused by the behavior of composite materials in space). A finite-element model and a set of methods are developed that make it possible to model the reflecting-surface deviations caused by all of these factors and to account for deviation correction by the spacecraft orientation system. Results of the modeling for two operating modes (orientation toward the Sun) of the SRT are presented.

  6. Impact of typical steady-state conditions and transient conditions on flow ripple and its test accuracy for axial piston pump

    NASA Astrophysics Data System (ADS)

    Xu, Bing; Hu, Min; Zhang, Junhui

    2015-09-01

    Current research on the flow ripple of axial piston pumps focuses mainly on the effect of the structure of parts on the flow ripple, and these parts are usually designed and optimized for rated working conditions. However, the pump usually has to work under large-scale, time-variant working conditions. This paper therefore focuses on the flow ripple characteristics of the pump and on its test accuracy with respect to varying steady-state and transient conditions over a wide range of operating parameters. First, a simulation model is constructed that accounts for the kinematics of the oil film within the friction pairs for higher accuracy. A test bed adopting the secondary source method is then built to verify the model. The simulation and test results show that the angular position of the piston at which the peak flow ripple is produced varies with pressure. The pulsating amplitude and pulsation rate of the flow ripple increase with rising pressure and with the rate of pressure variation. For a pump working at constant speed, the flow pulsation rate decreases dramatically with increasing speed when the speed is less than 27.78% of the maximum, and then shows only a slight decreasing tendency as the speed increases further. The pulsating amplitude and pulsation rate of the flow ripple also increase with the rate of speed variation. As the swash-plate angle increases, the pulsating amplitude of the flow ripple increases while the flow pulsation rate decreases. Compared with pressure variation, the test accuracy of the flow ripple is more sensitive to speed variation: a test accuracy above 96.20% is attainable when the pulsating amplitude of pressure deviates within ±6% of the mean pressure, whereas with speed deviating within ±2% of the mean speed, the attainable test accuracy is above 93.07%. The model constructed in this research provides a method to determine the flow ripple characteristics of the pump and its attainable test accuracy under large-scale, time-variant working conditions, and the variation of flow ripple and its obtainable test accuracy over wide operating ranges is discussed as well.
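    The two ripple metrics used above can be computed directly from a sampled flow trace; the sketch below uses the common definitions (amplitude = peak-to-peak flow, pulsation rate = amplitude over mean flow) on a hypothetical one-cycle trace, not the paper's measured data.

```python
def flow_ripple_metrics(flow_samples):
    """Pulsating amplitude and pulsation rate of a flow-ripple trace:
    amplitude = Qmax - Qmin, pulsation rate = amplitude / mean flow."""
    q_max, q_min = max(flow_samples), min(flow_samples)
    q_mean = sum(flow_samples) / len(flow_samples)
    amplitude = q_max - q_min
    return amplitude, amplitude / q_mean

# Hypothetical one-cycle outlet flow trace (L/min) for a piston pump.
trace = [100.0, 97.5, 102.0, 98.5, 101.0, 99.0, 100.5, 98.0, 101.5]
amp, rate = flow_ripple_metrics(trace)
```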

  7. Reference values for respiratory rate in the first 3 years of life.

    PubMed

    Rusconi, F; Castagneto, M; Gagliardi, L; Leo, G; Pellegatta, A; Porta, N; Razon, S; Braga, M

    1994-09-01

    Raised respiratory rate is a useful sign for diagnosing lower respiratory infections in childhood. However, the normal range for respiratory rate has not been defined in a suitably large sample. Our objectives were to assess the respiratory rate in a large number of infants and young children in order to construct percentile curves by age, and to determine the repeatability of the assessment using a stethoscope compared with observation. Respiratory rate was recorded for 1 minute with a stethoscope in 618 infants and children, aged 15 days to 3 years, without respiratory infections or any other severe disease, both while awake and calm and while asleep. In 50 subjects we compared respiratory rates taken 30 to 60 minutes apart to assess repeatability, and in 50 others we compared simultaneous counts obtained by stethoscope versus observation. Repeatability was good: the standard deviation of differences was 2.5 breaths/minute in awake and 1.7 breaths/minute in asleep children. Respiratory rate obtained with a stethoscope was systematically higher than that obtained by observation (mean difference 2.6 breaths/minute in awake and 1.8 breaths/minute in asleep children; P = .015 and P < .001, respectively). A decrease in respiratory rate with age was seen for both states; it was fastest in the first few months of life, when a greater dispersion of values was also observed. A second-degree polynomial curve accurately fitted the data, and reference percentile values were developed from it. The repeatability of respiratory rate measured with a stethoscope was good. Percentile curves are particularly helpful in the first months of life, when the rapid decline in respiratory rate precludes the use of cutoff values for defining "normality."
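    A second-degree polynomial fit of rate against age, as used above for the reference curves, can be sketched with ordinary least squares; the age/rate points below are hypothetical medians, not the study's data.

```python
def fit_quadratic(ages, rates):
    """Least-squares fit of rate = a + b*age + c*age**2 via the 3x3 normal
    equations, solved by Gaussian elimination with partial pivoting."""
    s = [sum(x ** k for x in ages) for k in range(5)]
    t = [sum(r * x ** k for x, r in zip(ages, rates)) for k in range(3)]
    m = [[s[0], s[1], s[2], t[0]],
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    for i in range(3):                       # forward elimination
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, 3):
            f = m[r][i] / m[i][i]
            m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                      # back substitution
        coef[i] = (m[i][3] - sum(m[i][j] * coef[j] for j in range(i + 1, 3))) / m[i][i]
    return coef

# Hypothetical median respiratory rates (breaths/min) by age in months,
# falling steeply over the first months of life as the study describes.
ages = [0.5, 3, 6, 12, 24, 36]
rates = [45.0, 38.0, 33.0, 28.0, 24.0, 22.0]
a, b, c = fit_quadratic(ages, rates)
pred_1mo = a + b * 1 + c * 1 ** 2
pred_36mo = a + b * 36 + c * 36 ** 2
```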

  8. Large-deviation properties of Brownian motion with dry friction.

    PubMed

    Chen, Yaming; Just, Wolfram

    2014-10-01

    We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for the Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long time limit.
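    The occupation-time functional analysed above can also be sampled directly by simulating the dry-friction Langevin dynamics; the Euler-Maruyama sketch below estimates the mean occupation time of the half-line v > 0, which by symmetry should scatter around T/2. Parameter values are illustrative.

```python
import math, random

def occupation_time_positive(mu, D, T, dt=1e-3, seed=7):
    """Euler-Maruyama simulation of Brownian motion with dry friction,
    dv = -mu*sign(v)*dt + sqrt(2*D)*dW, returning the occupation time of
    the half-line v > 0 over [0, T] for one trajectory started at v = 0."""
    rng = random.Random(seed)
    v, occ = 0.0, 0.0
    for _ in range(int(T / dt)):
        drift = -mu * (1.0 if v > 0 else -1.0 if v < 0 else 0.0)
        v += drift * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        if v > 0:
            occ += dt
    return occ

# Sample average over independent trajectories; by symmetry it should be
# close to T/2 = 5, though single trajectories fluctuate widely.
T = 10.0
mean_occ = sum(occupation_time_positive(1.0, 1.0, T, seed=s) for s in range(40)) / 40
```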

  9. Effects of Aftershock Declustering in Risk Modeling: Case Study of a Subduction Sequence in Mexico

    NASA Astrophysics Data System (ADS)

    Kane, D. L.; Nyst, M.

    2014-12-01

    Earthquake hazard and risk models often assume that earthquake rates can be represented by a stationary Poisson process, and that aftershocks observed in historical seismicity catalogs represent a deviation from stationarity that must be corrected before earthquake rates are estimated. Algorithms for classifying individual earthquakes as independent mainshocks or as aftershocks vary widely, and analysis of a single catalog can produce considerably different earthquake rates depending on the declustering method implemented. As these rates are propagated through hazard and risk models, the modeled results will vary due to the assumptions implied by these choices. In particular, the removal of large aftershocks following a mainshock may lead to an underestimation of the rate of damaging earthquakes and potential damage due to a large aftershock may be excluded from the model. We present a case study based on the 1907 - 1911 sequence of nine 6.9 <= Mw <= 7.9 earthquakes along the Cocos - North American plate subduction boundary in Mexico in order to illustrate the variability in risk under various declustering approaches. Previous studies have suggested that subduction zone earthquakes in Mexico tend to occur in clusters, and this particular sequence includes events that would be labeled as aftershocks in some declustering approaches yet are large enough to produce significant damage. We model the ground motion for each event, determine damage ratios using modern exposure data, and then compare the variability in the modeled damage from using the full catalog or one of several declustered catalogs containing only "independent" events. We also consider the effects of progressive damage caused by each subsequent event and how this might increase or decrease the total losses expected from this sequence.
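    The declustering step at issue above can be sketched with a minimal window-based algorithm in the spirit of Gardner-Knopoff: an event is discarded as an aftershock if it falls inside magnitude-dependent time and distance windows of a larger earlier event. The window formulas and the three-event catalog are illustrative, not calibrated values or real data; note how a damaging Mw 7.0 event is removed.

```python
def decluster(catalog, time_window, dist_window):
    """Window-based declustering: an event is an aftershock if it falls
    within the time and distance windows of a larger earlier event.
    catalog: list of (time_days, x_km, y_km, magnitude), time-sorted."""
    mainshock = [True] * len(catalog)
    for i, (ti, xi, yi, mi) in enumerate(catalog):
        for tj, xj, yj, mj in catalog[:i]:
            if (mj >= mi and ti - tj <= time_window(mj)
                    and ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5 <= dist_window(mj)):
                mainshock[i] = False
                break
    return mainshock

# Hypothetical windows growing with magnitude (illustrative, not calibrated).
tw = lambda m: 10 ** (0.032 * m + 2.0)    # days
dw = lambda m: 10 ** (0.12 * m + 0.2)     # km

# Mw 7.9 mainshock followed 30 days later by a nearby Mw 7.0 event: the
# second event is declustered away even though it could cause damage.
catalog = [(0.0, 0.0, 0.0, 7.9), (30.0, 10.0, 5.0, 7.0), (800.0, 300.0, 200.0, 7.2)]
flags = decluster(catalog, tw, dw)
```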

  10. Large deviation approach to the generalized random energy model

    NASA Astrophysics Data System (ADS)

    Dorlas, T. C.; Dukes, W. M. B.

    2002-05-01

    The generalized random energy model is a generalization of the random energy model introduced by Derrida to mimic the ultrametric structure of the Parisi solution of the Sherrington-Kirkpatrick model of a spin glass. It was solved exactly in two special cases by Derrida and Gardner. A complete solution for the thermodynamics in the general case was given by Capocaccia et al. Here we use large deviation theory to analyse the model in a very straightforward way. We also show that the variational expression for the free energy can be evaluated easily using the Cauchy-Schwarz inequality.

  11. Large Deviations in Weakly Interacting Boundary Driven Lattice Gases

    NASA Astrophysics Data System (ADS)

    van Wijland, Frédéric; Rácz, Zoltán

    2005-01-01

    One-dimensional, boundary-driven lattice gases with local interactions are studied in the weakly interacting limit. The density profiles and the correlation functions are calculated to first order in the interaction strength for zero-range and short-range processes differing only in the specifics of the detailed-balance dynamics. Furthermore, the effective free-energy (large-deviation function) and the integrated current distribution are also found to this order. From the former, we find that the boundary drive generates long-range correlations only for the short-range dynamics while the latter provides support to an additivity principle recently proposed by Bodineau and Derrida.

  12. Centrality-Dependent Modification of Jet-Production Rates in Deuteron-Gold Collisions at √{sN N }=200 GeV

    NASA Astrophysics Data System (ADS)

    Adare, A.; Aidala, C.; Ajitanand, N. N.; Akiba, Y.; Al-Bataineh, H.; Alexander, J.; Alfred, M.; Angerami, A.; Aoki, K.; Apadula, N.; Aramaki, Y.; Asano, H.; Atomssa, E. T.; Averbeck, R.; Awes, T. C.; Azmoun, B.; Babintsev, V.; Bai, M.; Baksay, G.; Baksay, L.; Bandara, N. S.; Bannier, B.; Barish, K. N.; Bassalleck, B.; Basye, A. T.; Bathe, S.; Baublis, V.; Baumann, C.; Bazilevsky, A.; Beaumier, M.; Beckman, S.; Belikov, S.; Belmont, R.; Bennett, R.; Berdnikov, A.; Berdnikov, Y.; Bhom, J. H.; Blau, D. S.; Bok, J. S.; Boyle, K.; Brooks, M. L.; Bryslawskyj, J.; Buesching, H.; Bumazhnov, V.; Bunce, G.; Butsyk, S.; Campbell, S.; Caringi, A.; Chen, C.-H.; Chi, C. Y.; Chiu, M.; Choi, I. J.; Choi, J. B.; Choudhury, R. K.; Christiansen, P.; Chujo, T.; Chung, P.; Chvala, O.; Cianciolo, V.; Citron, Z.; Cole, B. A.; Conesa Del Valle, Z.; Connors, M.; Csanád, M.; Csörgő, T.; Dahms, T.; Dairaku, S.; Danchev, I.; Danley, T. W.; Das, K.; Datta, A.; Daugherity, M. S.; David, G.; Dayananda, M. K.; Deblasio, K.; Dehmelt, K.; Denisov, A.; Deshpande, A.; Desmond, E. J.; Dharmawardane, K. V.; Dietzsch, O.; Dion, A.; Diss, P. B.; Do, J. H.; Donadelli, M.; D'Orazio, L.; Drapier, O.; Drees, A.; Drees, K. A.; Durham, J. M.; Durum, A.; Dutta, D.; Edwards, S.; Efremenko, Y. V.; Ellinghaus, F.; Engelmore, T.; Enokizono, A.; En'yo, H.; Esumi, S.; Fadem, B.; Feege, N.; Fields, D. E.; Finger, M.; Finger, M.; Fleuret, F.; Fokin, S. L.; Fraenkel, Z.; Frantz, J. E.; Franz, A.; Frawley, A. D.; Fujiwara, K.; Fukao, Y.; Fusayasu, T.; Gal, C.; Gallus, P.; Garg, P.; Garishvili, I.; Ge, H.; Giordano, F.; Glenn, A.; Gong, H.; Gonin, M.; Goto, Y.; Granier de Cassagnac, R.; Grau, N.; Greene, S. V.; Grim, G.; Grosse Perdekamp, M.; Gunji, T.; Gustafsson, H.-Å.; Hachiya, T.; Haggerty, J. S.; Hahn, K. I.; Hamagaki, H.; Hamblen, J.; Hamilton, H. F.; Han, R.; Han, S. Y.; Hanks, J.; Hasegawa, S.; Haseler, T. O. S.; Hashimoto, K.; Haslum, E.; Hayano, R.; He, X.; Heffner, M.; Hemmick, T. K.; Hester, T.; Hill, J. 
C.; Hohlmann, M.; Hollis, R. S.; Holzmann, W.; Homma, K.; Hong, B.; Horaguchi, T.; Hornback, D.; Hoshino, T.; Hotvedt, N.; Huang, J.; Huang, S.; Ichihara, T.; Ichimiya, R.; Ikeda, Y.; Imai, K.; Inaba, M.; Iordanova, A.; Isenhower, D.; Ishihara, M.; Issah, M.; Ivanishchev, D.; Iwanaga, Y.; Jacak, B. V.; Jezghani, M.; Jia, J.; Jiang, X.; Jin, J.; Johnson, B. M.; Jones, T.; Joo, K. S.; Jouan, D.; Jumper, D. S.; Kajihara, F.; Kamin, J.; Kanda, S.; Kang, J. H.; Kapustinsky, J.; Karatsu, K.; Kasai, M.; Kawall, D.; Kawashima, M.; Kazantsev, A. V.; Kempel, T.; Key, J. A.; Khachatryan, V.; Khanzadeev, A.; Kijima, K. M.; Kikuchi, J.; Kim, A.; Kim, B. I.; Kim, C.; Kim, D. J.; Kim, E.-J.; Kim, G. W.; Kim, M.; Kim, Y.-J.; Kimelman, B.; Kinney, E.; Kiss, Á.; Kistenev, E.; Kitamura, R.; Klatsky, J.; Kleinjan, D.; Kline, P.; Koblesky, T.; Kochenda, L.; Komkov, B.; Konno, M.; Koster, J.; Kotov, D.; Král, A.; Kravitz, A.; Kunde, G. J.; Kurita, K.; Kurosawa, M.; Kwon, Y.; Kyle, G. S.; Lacey, R.; Lai, Y. S.; Lajoie, J. G.; Lebedev, A.; Lee, D. M.; Lee, J.; Lee, K. B.; Lee, K. S.; Lee, S.; Lee, S. H.; Leitch, M. J.; Leite, M. A. L.; Li, X.; Lichtenwalner, P.; Liebing, P.; Lim, S. H.; Linden Levy, L. A.; Liška, T.; Liu, H.; Liu, M. X.; Love, B.; Lynch, D.; Maguire, C. F.; Makdisi, Y. I.; Makek, M.; Malik, M. D.; Manion, A.; Manko, V. I.; Mannel, E.; Mao, Y.; Masui, H.; Matathias, F.; McCumber, M.; McGaughey, P. L.; McGlinchey, D.; McKinney, C.; Means, N.; Meles, A.; Mendoza, M.; Meredith, B.; Miake, Y.; Mibe, T.; Mignerey, A. C.; Miki, K.; Milov, A.; Mishra, D. K.; Mitchell, J. T.; Miyasaka, S.; Mizuno, S.; Mohanty, A. K.; Montuenga, P.; Moon, H. J.; Moon, T.; Morino, Y.; Morreale, A.; Morrison, D. P.; Moukhanova, T. V.; Murakami, T.; Murata, J.; Mwai, A.; Nagamiya, S.; Nagashima, K.; Nagle, J. L.; Naglis, M.; Nagy, M. I.; Nakagawa, I.; Nakagomi, H.; Nakamiya, Y.; Nakamura, K. R.; Nakamura, T.; Nakano, K.; Nam, S.; Nattrass, C.; Netrakanti, P. 
K.; Newby, J.; Nguyen, M.; Nihashi, M.; Niida, T.; Nishimura, S.; Nouicer, R.; Novák, T.; Novitzky, N.; Nyanin, A. S.; Oakley, C.; O'Brien, E.; Oda, S. X.; Ogilvie, C. A.; Oka, M.; Okada, K.; Onuki, Y.; Orjuela Koop, J. D.; Osborn, J. D.; Oskarsson, A.; Ouchida, M.; Ozawa, K.; Pak, R.; Pantuev, V.; Papavassiliou, V.; Park, I. H.; Park, J. S.; Park, S.; Park, S. K.; Park, W. J.; Pate, S. F.; Patel, M.; Pei, H.; Peng, J.-C.; Pereira, H.; Perepelitsa, D. V.; Perera, G. D. N.; Peressounko, D. Yu.; Perry, J.; Petti, R.; Pinkenburg, C.; Pinson, R.; Pisani, R. P.; Proissl, M.; Purschke, M. L.; Qu, H.; Rak, J.; Ramson, B. J.; Ravinovich, I.; Read, K. F.; Rembeczki, S.; Reygers, K.; Reynolds, D.; Riabov, V.; Riabov, Y.; Richardson, E.; Rinn, T.; Roach, D.; Roche, G.; Rolnick, S. D.; Rosati, M.; Rosen, C. A.; Rosendahl, S. S. E.; Rowan, Z.; Rubin, J. G.; Ružička, P.; Sahlmueller, B.; Saito, N.; Sakaguchi, T.; Sakashita, K.; Sako, H.; Samsonov, V.; Sano, S.; Sarsour, M.; Sato, S.; Sato, T.; Sawada, S.; Schaefer, B.; Schmoll, B. K.; Sedgwick, K.; Seele, J.; Seidl, R.; Sen, A.; Seto, R.; Sett, P.; Sexton, A.; Sharma, D.; Shein, I.; Shibata, T.-A.; Shigaki, K.; Shimomura, M.; Shoji, K.; Shukla, P.; Sickles, A.; Silva, C. L.; Silvermyr, D.; Silvestre, C.; Sim, K. S.; Singh, B. K.; Singh, C. P.; Singh, V.; Slunečka, M.; Snowball, M.; Soltz, R. A.; Sondheim, W. E.; Sorensen, S. P.; Sourikova, I. V.; Stankus, P. W.; Stenlund, E.; Stepanov, M.; Stoll, S. P.; Sugitate, T.; Sukhanov, A.; Sumita, T.; Sun, J.; Sziklai, J.; Takagui, E. M.; Taketani, A.; Tanabe, R.; Tanaka, Y.; Taneja, S.; Tanida, K.; Tannenbaum, M. J.; Tarafdar, S.; Taranenko, A.; Themann, H.; Thomas, D.; Thomas, T. L.; Tieulent, R.; Timilsina, A.; Todoroki, T.; Togawa, M.; Toia, A.; Tomášek, L.; Tomášek, M.; Torii, H.; Towell, C. L.; Towell, R.; Towell, R. S.; Tserruya, I.; Tsuchimoto, Y.; Vale, C.; Valle, H.; van Hecke, H. 
W.; Vazquez-Zambrano, E.; Veicht, A.; Velkovska, J.; Vértesi, R.; Virius, M.; Vrba, V.; Vznuzdaev, E.; Wang, X. R.; Watanabe, D.; Watanabe, K.; Watanabe, Y.; Watanabe, Y. S.; Wei, F.; Wei, R.; Wessels, J.; White, A. S.; White, S. N.; Winter, D.; Woody, C. L.; Wright, R. M.; Wysocki, M.; Xia, B.; Xue, L.; Yalcin, S.; Yamaguchi, Y. L.; Yamaura, K.; Yang, R.; Yanovich, A.; Ying, J.; Yokkaichi, S.; Yoo, J. H.; Yoon, I.; You, Z.; Young, G. R.; Younus, I.; Yu, H.; Yushmanov, I. E.; Zajc, W. A.; Zelenski, A.; Zhou, S.; Zou, L.; Phenix Collaboration

    2016-03-01

    Jet production rates are measured in p +p and d +Au collisions at √{sN N}=200 GeV recorded in 2008 with the PHENIX detector at the Relativistic Heavy Ion Collider. Jets are reconstructed using the R =0.3 anti-kt algorithm from energy deposits in the electromagnetic calorimeter and charged tracks in multiwire proportional chambers, and the jet transverse momentum (pT) spectra are corrected for the detector response. Spectra are reported for jets with 12

  13. Hydrodynamic chromatography of macromolecules using polymer monolithic columns.

    PubMed

    Edam, Rob; Eeltink, Sebastiaan; Vanhoutte, Dominique J D; Kok, Wim Th; Schoenmakers, Peter J

    2011-12-02

The selectivity window of size-based separations of macromolecules was tailored by tuning the macropore size of polymer monolithic columns. Monolithic materials with pore sizes ranging between 75 nm and 1.2 μm were prepared in situ in large-I.D. columns. The dominant separation mechanism was hydrodynamic chromatography (HDC) in the flow-through pores. The calibration curves for synthetic polymers matched the elution behavior of HDC separations in packed columns for 'analyte-to-pore' aspect ratios (λ) up to 0.2. For large-macropore monoliths, a deviation in retention behavior was observed for small polystyrene polymers (M(r)<20 kDa), which may be explained by a combined HDC-SEC mechanism for λ<0.02. The availability of monoliths with very narrow pore sizes allowed investigation of separations at high λ values. For high-molecular-weight polymers (M(r)>300,000 Da) confined in narrow channels, the separation strongly depended on flow rate. Flow-rate-dependent elution behavior was evaluated by calculation of Deborah numbers and confirmed to be outside the scope of classic shear deformation or slalom chromatography. Shear-induced forces acting on the periphery of coiled polymers in solution may be responsible for the flow-rate-dependent elution. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Effects of Epoetin Alfa Titration Practices, Implemented After Changes to Product Labeling, on Hemoglobin Levels, Transfusion Use, and Hospitalization Rates.

    PubMed

    Molony, Julia T; Monda, Keri L; Li, Suying; Beaubrun, Anne C; Gilbertson, David T; Bradbury, Brian D; Collins, Allan J

    2016-08-01

    Little is known about epoetin alfa (EPO) dosing at dialysis centers after implementation of the US Medicare prospective payment system and revision of the EPO label in 2011. Retrospective cohort study. Approximately 412,000 adult hemodialysis patients with Medicare Parts A and B as primary payer in 2009 to 2012 to describe EPO dosing and hemoglobin patterns; of these, about 70,000 patients clustered in about 1,300 dialysis facilities to evaluate facility-level EPO titration practices and patient-level outcomes in 2012. Facility EPO titration practices when hemoglobin levels were <10 and >11 g/dL (grouped treatment variable) determined from monthly EPO dosing and hemoglobin level patterns. Patient mean hemoglobin levels, red blood cell transfusion rates, and all-cause and cause-specific hospitalization rates using a facility-based analysis. Monthly EPO dose and hemoglobin level, red blood cell transfusion rates, and all-cause and cause-specific hospitalization rates. Monthly EPO doses declined across all hemoglobin levels, with the greatest decline in patients with hemoglobin levels < 10 g/dL (July-October 2011). In 2012, nine distinct facility titration practices were identified. Across groups, mean hemoglobin levels differed slightly (10.5-10.8 g/dL) but within-patient hemoglobin standard deviations were similar (∼0.68 g/dL). Patients at facilities implementing greater dose reductions and smaller dose escalations had lower hemoglobin levels and higher transfusion rates. In contrast, patients at facilities that implemented greater dose escalations (and large or small dose reductions) had higher hemoglobin levels and lower transfusion rates. There were no clinically meaningful differences in all-cause or cause-specific hospitalization events across groups. Possibly incomplete claims data; excluded small facilities and those without consistent titration patterns; hemoglobin levels reported monthly; inferred facility practice from observed dosing. 
Following prospective payment system implementation and labeling revisions, EPO doses declined significantly. Under the new label, facility EPO titration practices were associated with mean hemoglobin levels (but not standard deviations) and transfusion use, but not hospitalization rates. Copyright © 2016 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  15. Cosmological implications of a large complete quasar sample.

    PubMed

    Segal, I E; Nicoll, J F

    1998-04-28

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423-1460]. The Expanding Universe model as represented by the Friedmann-Lemaître cosmology with parameters q₀ = 0, Λ = 0, denoted as C1, and chronometric cosmology (no relevant adjustable parameters), denoted as C2, are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude-redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar "evolution," which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact.

  16. Covered interest parity arbitrage and temporal long-term dependence between the US dollar and the Yen

    NASA Astrophysics Data System (ADS)

    Batten, Jonathan A.; Szilagyi, Peter G.

    2007-03-01

    Using a daily time series from 1983 to 2005 of currency prices in spot and forward USD/Yen markets and matching equivalent maturity short-term US and Japanese interest rates, we investigate the sensitivity of the difference between actual prices in forward markets to those calculated from differentials in short-term interest rates. According to a fundamental theorem in financial economics termed covered interest parity (CIP), the actual and estimated prices should be identical once transaction and other costs are accommodated. The paper presents three important findings: first, we find evidence of considerable variation in CIP deviations from equilibrium; second, these deviations have diminished significantly and by 2000 have been almost eliminated; third, an analysis of the CIP deviations using the local Hurst exponent finds episodes of time-varying dependence over the various sample periods, which appear to be linked to episodes of dollar decline/Yen appreciation, or vice versa. The finding of temporal long-term dependence in CIP deviations is consistent with recent evidence of temporal long-term dependence in the returns of currency, stock and commodity markets.
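
The covered interest parity relation underlying this study can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's methodology: the quotes, tenor, and interest rates are hypothetical (not from the 1983-2005 USD/Yen dataset), and transaction costs are ignored.

```python
def cip_forward(spot, r_domestic, r_foreign, tenor_years):
    """Theoretical forward rate (here JPY per USD) implied by covered
    interest parity: F = S * (1 + r_jpy*t) / (1 + r_usd*t)."""
    return spot * (1 + r_domestic * tenor_years) / (1 + r_foreign * tenor_years)

def cip_deviation(spot, forward_actual, r_domestic, r_foreign, tenor_years):
    """Deviation of the quoted forward from its CIP-implied value;
    nonzero values signal an (apparent) arbitrage opportunity."""
    return forward_actual - cip_forward(spot, r_domestic, r_foreign, tenor_years)

# Hypothetical quotes: spot 110 JPY/USD, 3-month tenor,
# JPY rate 0.1%, USD rate 2.1%.
spot, tenor = 110.0, 0.25
f_fair = cip_forward(spot, 0.001, 0.021, tenor)        # ~109.45 JPY/USD
dev = cip_deviation(spot, 109.60, 0.001, 0.021, tenor)  # quoted forward too rich
```

In practice the study measures such deviations daily and examines their persistence with the local Hurst exponent.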

  17. Analysis of the irradiation data for A302B and A533B correlation monitor materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J.A.

    1996-04-01

    The results of Charpy V-notch impact tests for A302B and A533B-1 Correlation Monitor Materials (CMM) listed in the surveillance power reactor data base (PR-EDB) and material test reactor data base (TR-EDB) are analyzed. The shift of the transition temperature at 30 ft-lb (T{sub 30}) is considered as the primary measure of radiation embrittlement in this report. The hyperbolic tangent fitting model and uncertainty of the fitting parameters for Charpy impact tests are presented in this report. For the surveillance CMM data, the transition temperature shifts at 30 ft-lb ({Delta}T{sub 30}) generally follow the predictions provided by Revision 2 of Regulatory Guide 1.99 (R.G. 1.99). Difference in capsule temperatures is a likely explanation for large deviations from R.G. 1.99 predictions. Deviations from the R.G. 1.99 predictions are correlated to similar deviations for the accompanying materials in the same capsules, but large random fluctuations prevent precise quantitative determination. Significant scatter is noted in the surveillance data, some of which may be attributed to variations from one specimen set to another, or inherent in Charpy V-notch testing. The major contributions to the uncertainty of the R.G. 1.99 prediction model, and the overall data scatter are from mechanical test results, chemical analysis, irradiation environments, fluence evaluation, and inhomogeneous material properties. Thus in order to improve the prediction model, control of the above-mentioned error sources needs to be improved. In general the embrittlement behavior of both the A302B and A533B-1 plate materials is similar. There is evidence for a fluence-rate effect in the CMM data irradiated in test reactors; thus its implication on power reactor surveillance programs deserves special attention.
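
The hyperbolic tangent fitting model mentioned above can be sketched as follows. This is a hedged illustration with hypothetical parameter values (not the report's actual fits): the common form is E(T) = A + B·tanh((T − T0)/C), inverted here for the 30 ft-lb transition temperature whose irradiation-induced shift ΔT30 is the embrittlement measure.

```python
import math

def charpy_energy(T, A, B, T0, C):
    """Hyperbolic-tangent model of Charpy absorbed energy vs. temperature:
    E(T) = A + B*tanh((T - T0)/C)."""
    return A + B * math.tanh((T - T0) / C)

def transition_temp(E_index, A, B, T0, C):
    """Invert the tanh model for the temperature at a given energy
    index, e.g. 30 ft-lb for T30."""
    return T0 + C * math.atanh((E_index - A) / B)

# Hypothetical fit parameters (ft-lb, deg F) before and after irradiation.
T30_unirr = transition_temp(30, A=45, B=40, T0=10, C=60)
T30_irr   = transition_temp(30, A=45, B=40, T0=150, C=60)
shift = T30_irr - T30_unirr   # Delta T30, the primary embrittlement measure
```

Fitting A, B, T0, C to measured impact energies (e.g. by least squares) and propagating their uncertainties into ΔT30 is the step the report analyzes in detail.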

  18. Extended-range high-resolution dynamical downscaling over a continental-scale spatial domain with atmospheric and surface nudging

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    Extended-range high-resolution mesoscale simulations with limited-area atmospheric models when applied to downscale regional analysis fields over large spatial domains can provide valuable information for many applications including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations in the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values leading to significant inaccuracies in the predicted surface layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. 
Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.

  19. Orientation illusions and heart-rate changes during short-radius centrifugation

    NASA Technical Reports Server (NTRS)

    Hecht, H.; Kavelaars, J.; Cheung, C. C.; Young, L. R.

    2001-01-01

    Intermittent short-radius centrifugation is a promising countermeasure against the adverse effects of prolonged weightlessness. To assess the feasibility of this countermeasure, we need to understand the disturbing sensory effects that accompany some movements carried out during rotation. We tested 20 subjects who executed yaw and pitch head movements while rotating at constant angular velocity. They were supine with their main body axis perpendicular to earth gravity. The head was placed at the centrifuge's axis of rotation. Head movements produced a transient elevation of heart-rate. All observers reported head-contingent sensations of body tilt although their bodies remained supine. Mostly, the subjective sensations conform to a model based on semicircular canal responses to angular acceleration. However, some surprising deviations from the model were found. Also, large inter-individual differences in direction, magnitude, and quality of the illusory body tilt were observed. The results have implications for subject screening and prediction of subjective tolerance for centrifugation.

  20. Canonical Structure and Orthogonality of Forces and Currents in Irreversible Markov Chains

    NASA Astrophysics Data System (ADS)

    Kaiser, Marcus; Jack, Robert L.; Zimmer, Johannes

    2018-03-01

    We discuss a canonical structure that provides a unifying description of dynamical large deviations for irreversible finite state Markov chains (continuous time), Onsager theory, and Macroscopic Fluctuation Theory (MFT). For Markov chains, this theory involves a non-linear relation between probability currents and their conjugate forces. Within this framework, we show how the forces can be split into two components, which are orthogonal to each other, in a generalised sense. This splitting allows a decomposition of the pathwise rate function into three terms, which have physical interpretations in terms of dissipation and convergence to equilibrium. Similar decompositions hold for rate functions at level 2 and level 2.5. These results clarify how bounds on entropy production and fluctuation theorems emerge from the underlying dynamical rules. We discuss how these results for Markov chains are related to similar structures within MFT, which describes hydrodynamic limits of such microscopic models.
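
A minimal, self-contained illustration of dynamical large deviations for an irreversible Markov chain (a toy example, not the paper's general construction): for a three-state ring with forward rate p and backward rate q, the scaled cumulant generating function of the winding current is the Perron eigenvalue of the tilted generator, and it obeys the Gallavotti-Cohen fluctuation-theorem symmetry θ(λ) = θ(−λ − ln(p/q)).

```python
import math

def scgf_ring(lam, p, q, n=3, dt=0.01, iters=4000):
    """SCGF theta(lam) of the net jump current on an n-state ring
    (forward rate p, backward rate q): the Perron eigenvalue of the
    tilted generator L(lam), found by power iteration on I + dt*L."""
    # Tilted generator: forward jumps weighted e^{+lam}, backward e^{-lam}.
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][(i + 1) % n] += p * math.exp(lam)
        L[i][(i - 1) % n] += q * math.exp(-lam)
        L[i][i] = -(p + q)
    # Power iteration on the nonnegative matrix M = I + dt*L,
    # whose Perron eigenvalue is exactly 1 + dt*theta(lam).
    v = [1.0] * n
    growth = 1.0
    for _ in range(iters):
        w = [v[i] + dt * sum(L[i][j] * v[j] for j in range(n)) for i in range(n)]
        growth = max(w)
        v = [x / growth for x in w]
    return (growth - 1.0) / dt
```

For this circulant ring the answer is known in closed form, θ(λ) = p·e^λ + q·e^(−λ) − (p + q), which makes the symmetry θ(λ) = θ(−λ − ln(p/q)) easy to verify numerically.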

  1. Two-dimensional signal processing using a morphological filter for holographic memory

    NASA Astrophysics Data System (ADS)

    Kondo, Yo; Shigaki, Yusuke; Yamamoto, Manabu

    2012-03-01

    Today, along with the wider use of high-speed information networks and multimedia, it is increasingly necessary to have higher-density and higher-transfer-rate storage devices. Therefore, research and development into holographic memories with three-dimensional storage areas is being carried out to realize next-generation large-capacity memories. However, in holographic memories, interference between bits, which affects the detection characteristics, occurs as a result of aberrations such as the deviation of a wavefront in an optical system. In this study, we pay particular attention to the nonlinear factors that cause bit errors; a Volterra equalizer and morphological filters are investigated as means of signal processing.
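
As a toy illustration of the morphological-filtering direction mentioned above (a sketch under simplified assumptions, not the authors' actual two-dimensional filter), a one-dimensional binary opening removes bright specks narrower than the structuring element while preserving wider bit runs:

```python
def erode(bits, k=3):
    """Binary erosion with a flat structuring element of width k."""
    r, n = k // 2, len(bits)
    return [int(all(bits[max(0, i - r):min(n, i + r + 1)])) for i in range(n)]

def dilate(bits, k=3):
    """Binary dilation with a flat structuring element of width k."""
    r, n = k // 2, len(bits)
    return [int(any(bits[max(0, i - r):min(n, i + r + 1)])) for i in range(n)]

def opening(bits, k=3):
    """Morphological opening: erosion followed by dilation.
    Removes bright runs narrower than the structuring element."""
    return dilate(erode(bits, k), k)

noisy = [0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0]  # lone spike at index 2 is noise
clean = opening(noisy)                      # spike removed, 4-bit run kept
```

The same erode/dilate pair extends directly to 2-D bit pages by sliding a rectangular window over the detected image.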

  2. Bethe Ansatz for the Weakly Asymmetric Simple Exclusion Process and Phase Transition in the Current Distribution

    NASA Astrophysics Data System (ADS)

    Simon, Damien

    2011-03-01

    The probability distribution of the current in the asymmetric simple exclusion process is expected to undergo a phase transition in the regime of weak asymmetry of the jumping rates. This transition was first predicted by Bodineau and Derrida using a linear stability analysis of the hydrodynamical limit of the process and further arguments have been given by Mallick and Prolhac. However it has been impossible so far to study what happens after the transition. The present paper presents an analysis of the large deviation function of the current on both sides of the transition from a Bethe Ansatz approach of the weak asymmetry regime of the exclusion process.

  3. A grid of MHD models for stellar mass loss and spin-down rates of solar analogs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, O.; Drake, J. J.

    2014-03-01

    Stellar winds are believed to be the dominant factor in the spin-down of stars over time. However, stellar winds of solar analogs are poorly constrained due to observational challenges. In this paper, we present a grid of magnetohydrodynamic models to study and quantify the values of stellar mass loss and angular momentum loss rates as a function of the stellar rotation period, magnetic dipole component, and coronal base density. We derive simple scaling laws for the loss rates as a function of these parameters, and constrain the possible mass loss rate of stars with thermally driven winds. Despite the success of our scaling law in matching the results of the model, we find a deviation between the 'solar dipole' case and a real case based on solar observations that overestimates the actual solar mass loss rate by a factor of three. This implies that the model for stellar fields might require a further investigation with additional complexity. Mass loss rates in general are largely controlled by the magnetic field strength, with the wind density varying in proportion to the confining magnetic pressure B^2. We also find that the mass loss rates obtained using our grid models drop much faster with the increase in rotation period than scaling laws derived using observed stellar activity. For main-sequence solar-like stars, our scaling law for angular momentum loss versus poloidal magnetic field strength retrieves the well-known Skumanich decline of angular velocity with time, Ω⋆ ∝ t^(−1/2), if the large-scale poloidal magnetic field scales with rotation rate as B_p ∝ Ω⋆^2.
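
The Skumanich limit quoted above can be checked with a few lines. This is a schematic reduction, not the paper's MHD grid: assuming B_p ∝ Ω² makes the spin-down torque give dΩ/dt = −k·Ω³ (k a lumped constant), whose solution tends to the Skumanich law Ω ∝ t^(−1/2).

```python
import math

def spin_down(omega0, k, t_end, dt=1e-3):
    """Forward-Euler integration of dOmega/dt = -k*Omega**3, the
    schematic spin-down law implied by B_p proportional to Omega^2."""
    omega, t = omega0, 0.0
    while t < t_end:
        omega -= k * omega ** 3 * dt
        t += dt
    return omega

omega0, k, t_end = 1.0, 0.5, 10.0   # arbitrary illustrative units
w_num = spin_down(omega0, k, t_end)
# Closed form: Omega(t) = Omega0 / sqrt(1 + 2*k*Omega0^2*t),
# which tends to (2*k*t)**-0.5, i.e. the Skumanich t^(-1/2) decline.
w_exact = omega0 / math.sqrt(1.0 + 2.0 * k * omega0 ** 2 * t_end)
```

The numerical and closed-form values agree closely, confirming that the t^(−1/2) behavior follows directly from the assumed B_p ∝ Ω² scaling.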

  4. Temperature and current coefficients of lasing wavelength in tunable diode laser spectroscopy.

    PubMed

    Fukuda, M; Mishima, T; Nakayama, N; Masuda, T

    2010-08-01

    The factors determining the temperature and current coefficients of the lasing wavelength are investigated and discussed while monitoring CO₂-gas absorption spectra. The diffusion rate of Joule heating from the active layer to the surrounding region is observed by monitoring the change in the junction voltage, which is a function of temperature, and the wavelength (frequency) deviation under sinusoidal current modulation. Based on the experimental results, the time interval for monitoring the wavelength after changing the ambient temperature or injected current (the scanning rate) has to be kept constant to eliminate the monitoring error induced by the deviation of the lasing wavelength, because the temperature and current coefficients of the lasing wavelength differ with this rate.

  5. Scaling Deviations for Neutrino Reactions in Asymptotically Free Field Theories

    DOE R&D Accomplishments Database

    Wilczek, F. A.; Zee, A.; Treiman, S. B.

    1974-11-01

    Several aspects of deep inelastic neutrino scattering are discussed in the framework of asymptotically free field theories. We first consider the growth behavior of the total cross sections at large energies. Because of the deviations from strict scaling which are characteristic of such theories the growth need not be linear. However, upper and lower bounds are established which rather closely bracket a linear growth. We next consider in more detail the expected pattern of scaling deviation for the structure functions and, correspondingly, for the differential cross sections. The analysis here is based on certain speculative assumptions. The focus is on qualitative effects of scaling breakdown as they may show up in the X and y distributions. The last section of the paper deals with deviations from the Callan-Gross relation.

  6. Social deviance activates the brain's error-monitoring system.

    PubMed

    Kim, Bo-Rin; Liss, Alison; Rao, Monica; Singer, Zachary; Compton, Rebecca J

    2012-03-01

    Social psychologists have long noted the tendency for human behavior to conform to social group norms. This study examined whether feedback indicating that participants had deviated from group norms would elicit a neural signal previously shown to be elicited by errors and monetary losses. While electroencephalograms were recorded, participants (N = 30) rated the attractiveness of 120 faces and received feedback giving the purported average rating made by a group of peers. The feedback was manipulated so that group ratings either were the same as a participant's rating or deviated by 1, 2, or 3 points. Feedback indicating deviance from the group norm elicited a feedback-related negativity, a brainwave signal known to be elicited by objective performance errors and losses. The results imply that the brain treats deviance from social norms as an error.

  7. Measuring the rate of change of voice fundamental frequency in fluent speech during mental depression.

    PubMed

    Nilsonne, A; Sundberg, J; Ternström, S; Askenfelt, A

    1988-02-01

    A method of measuring the rate of change of fundamental frequency has been developed in an effort to find acoustic voice parameters that could be useful in psychiatric research. A minicomputer program was used to extract seven parameters from the fundamental frequency contour of tape-recorded speech samples: (1) the average rate of change of the fundamental frequency and (2) its standard deviation, (3) the absolute rate of fundamental frequency change, (4) the total reading time, (5) the percent pause time of the total reading time, (6) the mean, and (7) the standard deviation of the fundamental frequency distribution. The method is demonstrated on (a) a material consisting of synthetic speech and (b) voice recordings of depressed patients who were examined during depression and after improvement.
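
A minimal sketch of parameters (1)-(3) above — the mean, standard deviation, and mean absolute value of the F0 rate of change — computed from a uniformly sampled voiced-frame contour. The frame step, units, and sample contour are illustrative assumptions, not the original minicomputer program.

```python
def f0_rate_stats(f0, frame_s=0.01):
    """Average rate of change of F0 (Hz/s), its standard deviation,
    and the mean absolute rate, from a voiced-frame F0 contour
    sampled every frame_s seconds (assumed uniform and gap-free)."""
    rates = [(b - a) / frame_s for a, b in zip(f0, f0[1:])]
    n = len(rates)
    mean = sum(rates) / n
    sd = (sum((r - mean) ** 2 for r in rates) / n) ** 0.5
    abs_mean = sum(abs(r) for r in rates) / n
    return mean, sd, abs_mean

contour = [120.0, 121.0, 123.0, 122.0, 120.0]  # Hz, hypothetical 10-ms frames
mean_rate, sd_rate, abs_rate = f0_rate_stats(contour)
```

A flat-sounding (depressed) voice would show a small mean absolute rate; the remaining parameters in the abstract (reading time, pause percentage, F0 mean and SD) follow the same pattern of simple summary statistics.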

  8. Office and 24-hour heart rate and target organ damage in hypertensive patients

    PubMed Central

    2012-01-01

    Background We investigated the association between heart rate and its variability with the parameters that assess vascular, renal and cardiac target organ damage. Methods A cross-sectional study was performed including a consecutive sample of 360 hypertensive patients without heart rate lowering drugs (aged 56 ± 11 years, 64.2% male). Heart rate (HR) and its standard deviation (HRV) in clinical and 24-hour ambulatory monitoring were evaluated. Renal damage was assessed by glomerular filtration rate and albumin/creatinine ratio; vascular damage by carotid intima-media thickness and ankle/brachial index; and cardiac damage by the Cornell voltage-duration product and left ventricular mass index. Results There was a positive correlation between ambulatory, but not clinical, heart rate and its standard deviation with glomerular filtration rate, and a negative correlation with carotid intima-media thickness, and night/day ratio of systolic and diastolic blood pressure. There was no correlation with albumin/creatinine ratio, ankle/brachial index, Cornell voltage-duration product or left ventricular mass index. In the multiple linear regression analysis, after adjusting for age, the association of glomerular filtration rate and intima-media thickness with ambulatory heart rate and its standard deviation was lost. According to the logistic regression analysis, the predictors of any target organ damage were age (OR = 1.034 and 1.033) and night/day systolic blood pressure ratio (OR = 1.425 and 1.512). Neither 24-hour HR nor 24-hour HRV reached statistical significance. Conclusions High ambulatory heart rate and its variability, but not clinical HR, are associated with decreased carotid intima-media thickness and a higher glomerular filtration rate, although this is lost after adjusting for age. Trial Registration ClinicalTrials.gov: NCT01325064 PMID:22439900

  9. Constraints on Cosmology and Gravity from the Dynamics of Voids.

    PubMed

    Hamaus, Nico; Pisani, Alice; Sutter, P M; Lavaux, Guilhem; Escoffier, Stéphanie; Wandelt, Benjamin D; Weller, Jochen

    2016-08-26

    The Universe is mostly composed of large and relatively empty domains known as cosmic voids, whereas its matter content is predominantly distributed along their boundaries. The remaining material inside them, either dark or luminous matter, is attracted to these boundaries and causes voids to expand faster and to grow emptier over time. Using the distribution of galaxies centered on voids identified in the Sloan Digital Sky Survey and adopting minimal assumptions on the statistical motion of these galaxies, we constrain the average matter content Ω_{m}=0.281±0.031 in the Universe today, as well as the linear growth rate of structure f/b=0.417±0.089 at median redshift z̄=0.57, where b is the galaxy bias (68% C.L.). These values originate from a percent-level measurement of the anisotropic distortion in the void-galaxy cross-correlation function, ϵ=1.003±0.012, and are robust to consistency tests with bootstraps of the data and simulated mock catalogs within an additional systematic uncertainty of half that size. They surpass (and are complementary to) existing constraints by unlocking cosmological information on smaller scales through an accurate model of nonlinear clustering and dynamics in void environments. As such, our analysis furnishes a powerful probe of deviations from Einstein's general relativity in the low-density regime which has largely remained untested so far. We find no evidence for such deviations in the data at hand.

  10. Effect of surface nano/micro-structuring on the early formation of microbial anodes with Geobacter sulfurreducens: Experimental and theoretical approaches.

    PubMed

    Champigneux, Pierre; Renault-Sentenac, Cyril; Bourrier, David; Rossi, Carole; Delia, Marie-Line; Bergel, Alain

    2018-06-01

    Smooth and nano-rough flat gold electrodes were manufactured with controlled Ra of 0.8 and 4.5 nm, respectively. Further nano-rough surfaces (Ra 4.5 nm) were patterned with arrays of micro-pillars 500 μm high. All these electrodes were implemented in pure cultures of Geobacter sulfurreducens, under a constant potential of 0.1 V/SCE and with a single addition of acetate (10 mM) to check the early formation of microbial anodes. The flat smooth electrodes produced an average current density of 0.9 A·m⁻². The flat nano-rough electrodes reached 2.5 A·m⁻² on average, but with a large experimental deviation of ±2.0 A·m⁻². This large deviation was due to the erratic colonization of the surface but, when settled on the surface, the cells displayed current density that was directly correlated to the biofilm coverage ratio. The micro-pillars considerably improved the experimental reproducibility by offering the cells a quieter environment, facilitating biofilm development. Current densities of up to 8.5 A·m⁻² (per projected surface area) were thus reached, in spite of rate limitation due to the mass transport of the buffering species, as demonstrated by numerical modelling. Nano-roughness combined with micro-structuring increased current density by a factor close to 10 with respect to the smooth flat surface. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. A bench-top megavoltage fan-beam CT using CdWO4-photodiode detectors. I. System description and detector characterization.

    PubMed

    Rathee, S; Tu, D; Monajemi, T T; Rickey, D W; Fallone, B G

    2006-04-01

    We describe the components of a bench-top megavoltage computed tomography (MVCT) scanner that uses an 80-element detector array consisting of CdWO4 scintillators coupled to photodiodes. Each CdWO4 crystal is 2.75 × 8 × 10 mm³. The detailed design of the detector array, timing control, and multiplexer are presented. The detectors show a linear response to dose (dose rate was varied by changing the source to detector distance) with a correlation coefficient (R²) of nearly unity, with the standard deviation of the signal at each dose being less than 0.25%. The attenuation of a 6 MV beam by solid water measured by this detector array indicates a small, yet significant spectral hardening that needs to be corrected before image reconstruction. The presampled modulation transfer function is strongly affected by the detector's large pitch and a large improvement can be obtained by reducing the detector pitch. The measured detective quantum efficiency at zero spatial frequency is 18.8% for 6 MV photons, which will reduce the dose to the patient in MVCT applications. The detector shows less than a 2% reduction in response for a dose of 24.5 Gy accumulated in 2 h; however, the lost response is recovered on the following day. A complete recovery can be assumed within the experimental uncertainty (standard deviation <0.5%); however, any smaller permanent damage could not be assessed.

  12. Exploring the Lyapunov instability properties of high-dimensional atmospheric and climate models

    NASA Astrophysics Data System (ADS)

    De Cruz, Lesley; Schubert, Sebastian; Demaeyer, Jonathan; Lucarini, Valerio; Vannitsem, Stéphane

    2018-05-01

    The stability properties of intermediate-order climate models are investigated by computing their Lyapunov exponents (LEs). The two models considered are PUMA (Portable University Model of the Atmosphere), a primitive-equation simple general circulation model, and MAOOAM (Modular Arbitrary-Order Ocean-Atmosphere Model), a quasi-geostrophic coupled ocean-atmosphere model on a β-plane. We wish to investigate the effect of the different levels of filtering on the instabilities and dynamics of the atmospheric flows. Moreover, we assess the impact of the oceanic coupling, the dissipation scheme, and the resolution on the spectra of LEs. The PUMA Lyapunov spectrum is computed for two different values of the meridional temperature gradient defining the Newtonian forcing to the temperature field. The increase in the gradient gives rise to a higher baroclinicity and stronger instabilities, corresponding to a larger dimension of the unstable manifold and a larger first LE. The Kaplan-Yorke dimension of the attractor increases as well. The convergence rate of the rate function for the large deviation law of the finite-time Lyapunov exponents (FTLEs) is fast for all exponents, which can be interpreted as resulting from the absence of a clear-cut atmospheric timescale separation in such a model. The MAOOAM spectra show that the dominant atmospheric instability is correctly represented even at low resolutions. However, the dynamics of the central manifold, which is mostly associated with the ocean dynamics, is not fully resolved because of its associated long timescales, even at intermediate orders. As expected, increasing the mechanical atmosphere-ocean coupling coefficient or introducing a turbulent diffusion parametrisation reduces the Kaplan-Yorke dimension and Kolmogorov-Sinai entropy. In all considered configurations, we are not yet in the regime in which one can robustly define large deviation laws describing the statistics of the FTLEs. 
This paper highlights the need to investigate the natural variability of the atmosphere-ocean coupled dynamics by associating rate of growth and decay of perturbations with the physical modes described using the formalism of the covariant Lyapunov vectors and considering long integrations in order to disentangle the dynamical processes occurring at all timescales.
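
As a toy stand-in for the Lyapunov-exponent computations described above (the actual systems are the PUMA and MAOOAM models; this is a one-dimensional illustration), the largest LE of a chaotic map can be estimated as the time average of ln|f′(x)| along an orbit. For the logistic map at r = 4 the exact value is ln 2.

```python
import math

def largest_le_logistic(r=4.0, x0=0.2, n=200000, burn=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of ln|f'(x)| = ln|r*(1-2x)|.
    For r = 4 the analytic value is ln 2 (conjugacy to the tent map)."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return acc / n

le = largest_le_logistic()   # positive LE: sensitive dependence on initial data
```

For the high-dimensional flows in the abstract the same idea is applied to a whole spectrum of exponents via repeated orthonormalization of perturbation vectors, and the finite-time averages of each exponent are what the large deviation law describes.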

  13. Design and Development of Lateral Flight Director

    NASA Technical Reports Server (NTRS)

    Kudlinski, Kim E.; Ragsdale, William A.

    1999-01-01

The current control law used for the flight director in the Boeing 737 simulator is inadequate for large localizer deviations near the middle marker. Eight different control laws are investigated. A heuristic method is used to design control laws that meet specific performance criteria. The design of each is described in detail. Several tests were performed and the results compared with those of the current flight director control law. The goal was to design a control law for the flight director that can be used with large localizer deviations near the middle marker, which could be caused by winds or wake turbulence, without increasing its level of complexity.

  14. On the Geometry of Chemical Reaction Networks: Lyapunov Function and Large Deviations

    NASA Astrophysics Data System (ADS)

    Agazzi, A.; Dembo, A.; Eckmann, J.-P.

    2018-04-01

In an earlier paper, we proved the validity of large deviations theory for the particle approximation of quite general chemical reaction networks. In this paper, we extend its scope and present a more geometric insight into the mechanism of that proof, exploiting the notion of the spherical image of the reaction polytope. This allows us to view the asymptotic behavior of the vector field describing the mass-action dynamics of chemical reactions as the result of an interaction between the faces of this polytope in different dimensions. We also illustrate some local aspects of the problem in a discussion of Wentzell-Freidlin theory, together with some examples.
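The particle approximation underlying this large deviations analysis can be illustrated with a stochastic simulation. The sketch below assumes nothing from the paper beyond mass-action kinetics: it runs a Gillespie simulation of the toy network A &lt;-&gt; B and checks that, for a large particle number, the empirical fraction concentrates near the mass-action fixed point k2/(k1+k2); large deviations theory controls the exponentially small probability of excursions away from it:

```python
import numpy as np

def gillespie_fraction_a(n_total=5000, k1=1.0, k2=1.0, t_end=10.0, seed=0):
    """Gillespie SSA for the reversible reaction A <-> B with mass-action
    propensities; returns the final fraction of particles in state A."""
    rng = np.random.default_rng(seed)
    a, t = n_total, 0.0              # all particles start in state A
    while t < t_end:
        r_fwd = k1 * a               # propensity of A -> B
        r_bwd = k2 * (n_total - a)   # propensity of B -> A
        total = r_fwd + r_bwd
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)   # waiting time to next reaction
        if rng.random() < r_fwd / total:
            a -= 1
        else:
            a += 1
    return a / n_total

# mass-action fixed point: fraction of A = k2 / (k1 + k2) = 0.5 here
frac = gillespie_fraction_a()
```

Fluctuations of the final fraction around the fixed point shrink like 1/sqrt(n_total), which is the scaling regime in which the large deviation principle applies.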

  15. Spatial variation in deposition rate coefficients of an adhesion-deficient bacterial strain in quartz sand.

    PubMed

    Tong, Meiping; Camesano, Terri A; Johnson, William P

    2005-05-15

The transport of bacterial strain DA001 was examined in packed quartz sand under a variety of environmentally relevant ionic strength and flow conditions. Under all conditions, the retained bacterial concentrations decreased with distance from the column inlet at a rate that was faster than log-linear, indicating that the deposition rate coefficient decreased with increasing transport distance. The hyperexponential retained profile contrasted against the nonmonotonic retained profiles that had been previously observed for this same bacterial strain in glass bead porous media, demonstrating that the form of deviation from log-linear behavior is highly sensitive to system conditions. The deposition rate constants in quartz sand were orders of magnitude below those expected from filtration theory, even in the absence of electrostatic energy barriers. The degree of hyperexponential deviation of the retained profiles from log-linear behavior did not decrease with increasing ionic strength in quartz sand. These observations demonstrate that the observed low adhesion and deviation from log-linear behavior were not driven by electrostatic repulsion. Measurements of the interaction forces between DA001 cells and the silicon nitride tip of an atomic force microscope (AFM) showed that the bacterium possesses surface polymers with an average equilibrium length of 59.8 nm. AFM adhesion force measurements revealed low adhesion affinities between silicon nitride and DA001 polymers, with approximately 95% of adhesion forces having magnitudes < 0.8 nN. Steric repulsion due to surface polymers was apparently responsible for the low adhesion to silicon nitride, indicating that steric interactions from extracellular polymers controlled DA001 adhesion deficiency and deviation from log-linear behavior on quartz sand.

  16. Accuracy of computer-aided design models of the jaws produced using ultra-low MDCT doses and ASIR and MBIR.

    PubMed

    Al-Ekrish, Asma'a A; Alfadda, Sara A; Ameen, Wadea; Hörmann, Romed; Puelacher, Wolfgang; Widmann, Gerlig

    2018-06-16

To compare the surface of computer-aided design (CAD) models of the maxilla produced using ultra-low MDCT doses combined with filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) reconstruction techniques with that produced from a standard dose/FBP protocol. A cadaveric completely edentulous maxilla was imaged using a standard dose protocol (CTDIvol: 29.4 mGy) and FBP, in addition to 5 low dose test protocols (LD1-5) (CTDIvol: 4.19, 2.64, 0.99, 0.53, and 0.29 mGy) reconstructed with FBP, ASIR 50, ASIR 100, and MBIR. A CAD model from each test protocol was superimposed onto the reference model using the 'Best Fit Alignment' function. Differences between the test and reference models were analyzed as maximum and mean deviations, and root-mean-square of the deviations; color-coded models were also obtained that demonstrated the location, magnitude, and direction of the deviations. Based upon the magnitude, size, and distribution of areas of deviations, CAD models from the following protocols were comparable to the reference model: FBP/LD1; ASIR 50/LD1 and LD2; ASIR 100/LD1, LD2, and LD3; MBIR/LD1. The following protocols demonstrated deviations mostly between 1 and 2 mm, or under 1 mm but over large areas, so their effect on surgical guide accuracy is questionable: FBP/LD2; MBIR/LD2, LD3, LD4, and LD5. The following protocols demonstrated large deviations over large areas and therefore were not comparable to the reference model: FBP/LD3, LD4, and LD5; ASIR 50/LD3, LD4, and LD5; ASIR 100/LD4 and LD5. When MDCT is used for CAD models of the jaws, dose reductions of 86% may be possible with FBP, 91% with ASIR 50, and 97% with ASIR 100. Analysis of the stability and accuracy of CAD/CAM surgical guides as directly related to the jaws is needed to confirm the results.

  17. 40 CFR 63.4720 - What reports must I submit?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... this section. This includes periods of SSM during which deviations occurred. (i) The beginning and... each deviation occurred during a period of SSM or during another period. (ix) A summary of the total... completing the tests as specified in § 63.10(d)(2). (c) SSM reports. If you used the emission rate with add...

  18. 40 CFR 63.4720 - What reports must I submit?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... this section. This includes periods of SSM during which deviations occurred. (i) The beginning and... each deviation occurred during a period of SSM or during another period. (ix) A summary of the total... completing the tests as specified in § 63.10(d)(2). (c) SSM reports. If you used the emission rate with add...

  19. Large deviations and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Sornette, Didier

Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends with a general functional integral formulation. A central point is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return, the role of large deviations in multiplicative processes, and the different optimal strategies for investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
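Cramér's theorem expresses the large deviation rate function as the Legendre-Fenchel transform of the cumulant generating function, I(x) = sup_theta [theta x - Lambda(theta)]. A minimal numerical sketch follows; the Gaussian return model and its parameters are chosen purely for illustration, because the transform is then known in closed form and can be checked:

```python
import numpy as np

def legendre_rate(x, cgf, thetas):
    """Legendre-Fenchel transform I(x) = sup_theta [theta*x - Lambda(theta)],
    evaluated on a finite grid of theta values."""
    return np.max(thetas * x - cgf(thetas))

# illustrative Gaussian log-return model: Lambda(theta) = mu*theta + (sigma*theta)^2 / 2
mu, sigma = 0.05, 0.20
cgf = lambda th: mu * th + 0.5 * (sigma * th) ** 2
thetas = np.linspace(-50.0, 50.0, 200001)

x = 0.30                                        # an atypically large return
I_num = legendre_rate(x, cgf, thetas)
I_exact = (x - mu) ** 2 / (2.0 * sigma ** 2)    # known Gaussian rate function
```

For n i.i.d. periods, the probability that the empirical mean return exceeds x then decays like exp(-n I(x)), which is the quantitative content of "accounting for large fluctuations" above.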

  20. Effects of aortic tortuosity on left ventricular diastolic parameters derived from gated myocardial perfusion single photon emission computed tomography in patients with normal myocardial perfusion.

    PubMed

    Kurisu, Satoshi; Nitta, Kazuhiro; Sumimoto, Yoji; Ikenaga, Hiroki; Ishibashi, Ken; Fukuda, Yukihiro; Kihara, Yasuki

    2018-06-01

    Aortic tortuosity is often found on chest radiograph, especially in aged patients. We tested the hypothesis that aortic tortuosity was associated with LV diastolic parameters derived from gated SPECT in patients with normal myocardial perfusion. One-hundred and twenty-two patients with preserved LV ejection fraction and normal myocardial perfusion were enrolled. Descending aortic deviation was defined as the horizontal distance from the left line of the aortic knob to the most prominent left line of the descending aorta. This parameter was measured for the quantitative assessment of aortic tortuosity. Peak filling rate (PFR) and one-third mean filling rate (1/3 MFR) were obtained from redistribution images as LV diastolic parameters. Descending aortic deviation ranged from 0 to 22 mm with a mean distance of 4.5 ± 6.3 mm. Descending aortic deviation was significantly correlated with age (r = 0.38, p < 0.001) and estimated glomerular filtration rate (eGFR) (r = - 0.21, p = 0.02). Multivariate linear regression analysis revealed that eGFR (β = 0.23, p = 0.02) and descending aortic deviation (β = - 0.23, p = 0.01) were significantly associated with PFR, and that only descending aortic deviation (β = - 0.21, p = 0.03) was significantly associated with 1/3 MFR. Our data suggest that aortic tortuosity is associated with LV diastolic parameters derived from gated SPECT in patients with normal myocardial perfusion.

  1. Offshore fatigue design turbulence

    NASA Astrophysics Data System (ADS)

    Larsen, Gunner C.

    2001-07-01

    Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
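One common way to collapse a conditional distribution of turbulence standard deviations into a single fatigue-design value is a damage-equivalent (m-th moment) average with a Wöhler exponent m; this construction and the log-normal parameters below are assumptions for illustration, not the paper's fitted values. For a log-normal distribution the m-th moment is known in closed form, so the Monte Carlo estimate can be checked:

```python
import numpy as np

def effective_turbulence(sigmas, m=4.0):
    """Damage-equivalent turbulence standard deviation:
    m-th moment average, with m a Woehler (S-N curve) exponent."""
    return (np.mean(sigmas ** m)) ** (1.0 / m)

rng = np.random.default_rng(1)
log_mu, log_s = np.log(1.2), 0.25   # hypothetical log-normal parameters at one wind speed
samples = rng.lognormal(log_mu, log_s, size=200000)

sig_eff = effective_turbulence(samples, m=4.0)
# closed form for a log-normal: (E[sigma^4])^(1/4) = exp(log_mu + 2 * log_s^2)
sig_exact = np.exp(log_mu + 2.0 * log_s ** 2)
```

Because m > 1, the damage-equivalent value exceeds the median, reflecting the disproportionate fatigue contribution of high-turbulence episodes.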

  2. Ranking and validation of spallation models for isotopic production cross sections of heavy residua

    NASA Astrophysics Data System (ADS)

    Sharma, Sushil K.; Kamys, Bogusław; Goldenbaum, Frank; Filges, Detlef

    2017-07-01

The production cross sections of isotopically identified residual nuclei of spallation reactions induced by 136Xe projectiles at 500AMeV on hydrogen target were analyzed in a two-step model. The first stage of the reaction was described by the INCL4.6 model of an intranuclear cascade of nucleon-nucleon and pion-nucleon collisions, whereas the second stage was analyzed by means of four different models: ABLA07, GEM2, GEMINI++ and SMM. The quality of the data description was judged quantitatively using two statistical deviation factors: the H-factor and the M-factor. It was found that the present analysis leads to a different ranking of models as compared to that obtained from the qualitative inspection of the data reproduction. The disagreement was caused by sensitivity of the deviation factors to large statistical errors present in some of the data. A new deviation factor, the A-factor, which is not sensitive to the statistical errors of the cross sections, was proposed. The quantitative ranking of models performed using the A-factor agreed well with the qualitative analysis of the data. It was concluded that using deviation factors weighted by statistical errors may lead to erroneous conclusions in the case when the data cover a large range of values. The quality of data reproduction by the theoretical models is discussed. Some systematic deviations of the theoretical predictions from the experimental results are observed.
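The contrast between error-weighted and error-independent deviation factors can be sketched as follows. The definitions below are illustrative stand-ins, not the paper's exact H-, M- and A-factor formulas: an H-like factor weights residuals by the experimental errors, while an A-like factor is built from log-ratios only. Inflating the quoted error bars changes the first but leaves the second untouched:

```python
import numpy as np

def h_factor(calc, exp, err):
    """Illustrative error-weighted deviation factor (H-like):
    RMS of residuals expressed in units of the experimental error."""
    return np.sqrt(np.mean(((calc - exp) / err) ** 2))

def a_factor(calc, exp):
    """Illustrative error-independent deviation factor (A-like):
    RMS of log10 ratios, insensitive to the quoted error bars."""
    return np.sqrt(np.mean(np.log10(calc / exp) ** 2))

exp = np.array([100.0, 10.0, 1.0, 0.1])   # cross sections spanning decades (mb)
calc = 1.2 * exp                          # model with a uniform +20% deviation
err_small = 0.05 * exp                    # 5% experimental errors
err_large = 0.50 * exp                    # 50% experimental errors
```

A log-ratio factor also treats each decade of cross section on an equal footing, which matters when, as here, the data span several orders of magnitude.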

  3. Fatigue, pilot deviations and time of day

    NASA Technical Reports Server (NTRS)

    Baker, Susan P.

    1989-01-01

    The relationships between pilot fatigue, pilot deviations, reported incidents, and time of day are examined. A sample of 200 Aviation Safety Reporting System (ASRS) reports were analyzed from 1985 and 200 reports from 1987, plus 100 reports from late 1987 and early 1988 that were selected because of possible association with fatigue. The FAA pilot deviation data and incident data were analyzed in relation to denominator data that summarized the hourly operations (landings and takeoffs of scheduled flights) at major U.S. airports. Using as numerators FAA data on pilot deviations and incidents reported to the FAA, the rates by time of day were calculated. Pilot age was also analyzed in relation to the time of day, phase of flight, and type of incident.

  4. The Effect of Expert Performance Microtiming on Listeners' Experience of Groove in Swing or Funk Music

    PubMed Central

    Senn, Olivier; Kilchenmann, Lorenz; von Georgi, Richard; Bullerjahn, Claudia

    2016-01-01

    This study tested the influence of expert performance microtiming on listeners' experience of groove. Two professional rhythm section performances (bass/drums) in swing and funk style were recorded, and the performances' original microtemporal deviations from a regular metronomic grid were scaled to several levels of magnitude. Music expert (n = 79) and non-expert (n = 81) listeners rated the groove qualities of stimuli using a newly developed questionnaire that measures three dimensions of the groove experience (Entrainment, Enjoyment, and the absence of Irritation). Findings show that music expert listeners were more sensitive to microtiming manipulations than non-experts. Across both expertise groups and for both styles, groove ratings were high for microtiming magnitudes equal or smaller than those originally performed and decreased for exaggerated microtiming magnitudes. In particular, both the fully quantized music and the music with the originally performed microtiming pattern were rated equally high on groove. This means that neither the claims of PD theory (that microtiming deviations are necessary for groove) nor the opposing exactitude hypothesis (that microtiming deviations are detrimental to groove) were supported by the data. PMID:27761117

  5. The Effect of Expert Performance Microtiming on Listeners' Experience of Groove in Swing or Funk Music.

    PubMed

    Senn, Olivier; Kilchenmann, Lorenz; von Georgi, Richard; Bullerjahn, Claudia

    2016-01-01

This study tested the influence of expert performance microtiming on listeners' experience of groove. Two professional rhythm section performances (bass/drums) in swing and funk style were recorded, and the performances' original microtemporal deviations from a regular metronomic grid were scaled to several levels of magnitude. Music expert (n = 79) and non-expert (n = 81) listeners rated the groove qualities of stimuli using a newly developed questionnaire that measures three dimensions of the groove experience (Entrainment, Enjoyment, and the absence of Irritation). Findings show that music expert listeners were more sensitive to microtiming manipulations than non-experts. Across both expertise groups and for both styles, groove ratings were high for microtiming magnitudes equal or smaller than those originally performed and decreased for exaggerated microtiming magnitudes. In particular, both the fully quantized music and the music with the originally performed microtiming pattern were rated equally high on groove. This means that neither the claims of PD theory (that microtiming deviations are necessary for groove) nor the opposing exactitude hypothesis (that microtiming deviations are detrimental to groove) were supported by the data.

  6. Incidence rates, correlates, and prognosis of electrocardiographic P-wave abnormalities - a nationwide population-based study.

    PubMed

    Lehtonen, Arttu O; Langén, Ville L; Puukka, Pauli J; Kähönen, Mika; Nieminen, Markku S; Jula, Antti M; Niiranen, Teemu J

Scant data exist on incidence rates, correlates, and prognosis of electrocardiographic P-wave abnormalities in the general population. We recorded ECG and measured conventional cardiovascular risk factors in 5667 Finns who were followed up for incident atrial fibrillation (AF). We obtained repeat ECGs from 3089 individuals 11 years later. The incidence rates of prolonged P-wave duration, abnormal P terminal force (PTF), left P-wave axis deviation, and right P-wave axis deviation were 16.0%, 7.4%, 3.4%, and 2.2%, respectively. Older age and higher BMI were associated with incident prolonged P-wave duration and abnormal PTF (P≤0.01). Higher blood pressure was associated with incident prolonged P-wave duration and right P-wave axis deviation (P≤0.01). During follow-up, only prolonged P-wave duration predicted AF (multivariable-adjusted hazard ratio, 1.38; P=0.001). Modifiable risk factors associate with P-wave abnormalities that are common and may represent intermediate steps of atrial cardiomyopathy on a pathway leading to AF. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Reduction in children's symptoms of attention deficit hyperactivity disorder and oppositional defiant disorder during individual tutoring as compared with classroom instruction.

    PubMed

    Strayhorn, Joseph M; Bickel, Donna D

    2002-08-01

    Children who display symptoms of Attention Deficit Hyperactivity Disorder (ADHD) in classrooms are reputed to display fewer symptoms in one-on-one interaction. We tested this hypothesis with children who received tutoring for reading and behavior problems. We selected 30 children whose teacher-rated ADHD symptoms fit a pattern consistent with DSM criteria for the diagnosis. Teachers rated the frequency of symptoms in classrooms before and after tutoring. Tutors rated the frequency of the same behaviors during individual tutoring sessions. Children's ADHD symptoms, as well as oppositional symptoms, were significantly lower in the tutoring sessions than in the classrooms. The effect sizes for the difference between behavior in classrooms and in individual tutoring ranged from 0.7 to 2.5 standard deviations. These effect sizes appear as large as those reported for the effect of stimulant medication on ADHD symptoms. All 30 children at preintervention fit the pattern for ADHD using teachers' ratings of classroom behavior; 87% of them did not meet those DSM criteria using tutors' ratings of behavior in individual sessions. The confound of different raters for the two different settings must be resolved by another study with a new design.

  8. Growth, chamber building rate and reproduction time of Palaeonummulites venosus under natural conditions.

    NASA Astrophysics Data System (ADS)

    Kinoshita, Shunichi; Eder, Wolfgang; Wöger, Julia; Hohenegger, Johann; Briguglio, Antonino

    2017-04-01

Investigations on Palaeonummulites venosus using the natural laboratory approach for determining chamber building rate, test diameter increase rate, reproduction time and longevity are based on the decomposition of monthly obtained frequency distributions based on chamber number and test diameter into normally distributed components. The shift of the component parameters 'mean' and 'standard deviation' during the investigation period of 15 months was used to calculate Michaelis-Menten functions applied to estimate the averaged chamber building rate and diameter increase rate under natural conditions. The individual dates of birth were estimated using the inverse averaged chamber building rate and the inverse diameter increase rate fitted by the individual chamber number or the individual test diameter at the sampling date. Distributions of frequencies and densities (i.e. frequency divided by sediment weight) based on chamber building rate and diameter increase rate both resulted in continuous reproduction throughout the year with two peaks: the stronger in May/June, determined as the beginning of the summer generation (generation 1), and the weaker in November, determined as the beginning of the winter generation (generation 2). This reproduction scheme explains the existence of small and large specimens in the same sample. Longevity, calculated as the maximum difference in days between the individual's birth date and the sampling date, appears to be approximately one year, as obtained by both estimations based on the chamber building rate and the diameter increase rate.
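A Michaelis-Menten growth function of the kind used above for the averaged chamber building rate can be fitted with ordinary linear algebra after the standard double-reciprocal (Lineweaver-Burk style) linearisation. The sketch below uses synthetic, noiseless data with hypothetical parameters, not the paper's measurements:

```python
import numpy as np

def fit_michaelis_menten(t, y):
    """Fit y = y_max * t / (K + t) via the linearised form
    1/y = 1/y_max + (K / y_max) * (1/t), solved by linear least squares."""
    A = np.vstack([np.ones_like(t), 1.0 / t]).T
    intercept, slope = np.linalg.lstsq(A, 1.0 / y, rcond=None)[0]
    y_max = 1.0 / intercept
    K = slope * y_max
    return y_max, K

# hypothetical ages (days) and mean chamber numbers generated from the model
t = np.array([30.0, 60.0, 120.0, 240.0, 360.0])
y_true_max, K_true = 60.0, 90.0
y = y_true_max * t / (K_true + t)

y_max, K = fit_michaelis_menten(t, y)
```

Inverting the fitted function, t = K * y / (y_max - y), is how a chamber count observed at sampling can be mapped back to an estimated age and hence a date of birth.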

  9. ECG-Based Detection of Early Myocardial Ischemia in a Computational Model: Impact of Additional Electrodes, Optimal Placement, and a New Feature for ST Deviation

    PubMed Central

    Schulze, Walther H. W.; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar

    2015-01-01

    In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed if an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject specifically optimized electrode were similar to those of the BSPM: 2–11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased 12-lead detection rate by 7% for a reasonable threshold. PMID:26587538

  10. ECG-Based Detection of Early Myocardial Ischemia in a Computational Model: Impact of Additional Electrodes, Optimal Placement, and a New Feature for ST Deviation.

    PubMed

    Loewe, Axel; Schulze, Walther H W; Jiang, Yuan; Wilhelms, Mathias; Luik, Armin; Dössel, Olaf; Seemann, Gunnar

    2015-01-01

    In case of chest pain, immediate diagnosis of myocardial ischemia is required to respond with an appropriate treatment. The diagnostic capability of the electrocardiogram (ECG), however, is strongly limited for ischemic events that do not lead to ST elevation. This computational study investigates the potential of different electrode setups in detecting early ischemia at 10 minutes after onset: standard 3-channel and 12-lead ECG as well as body surface potential maps (BSPMs). Further, it was assessed if an additional ECG electrode with optimized position or the right-sided Wilson leads can improve sensitivity of the standard 12-lead ECG. To this end, a simulation study was performed for 765 different locations and sizes of ischemia in the left ventricle. Improvements by adding a single, subject specifically optimized electrode were similar to those of the BSPM: 2-11% increased detection rate depending on the desired specificity. Adding right-sided Wilson leads had negligible effect. Absence of ST deviation could not be related to specific locations of the ischemic region or its transmurality. As alternative to the ST time integral as a feature of ST deviation, the K point deviation was introduced: the baseline deviation at the minimum of the ST-segment envelope signal, which increased 12-lead detection rate by 7% for a reasonable threshold.

  11. Quality assurance of HDR prostate plans: program implementation at a community hospital.

    PubMed

    Rush, Jennifer B; Thomas, Michael D

    2005-01-01

    Adenocarcinoma of the prostate is currently the most commonly diagnosed cancer in men in the United States, and the second leading cause of cancer mortality. The utilization of radiation therapy is regarded as the definitive local therapy of choice for intermediate- and high-risk disease, in which there is increased risk for extracapsular extension, seminal vesicle invasion, or regional node involvement. High-dose-rate (HDR) brachytherapy is a logical treatment modality to deliver the boost dose to an external beam radiation therapy (EBRT) treatment to increase local control rates. From a treatment perspective, the utilization of a complicated treatment delivery system, the compressed time frame in which the procedure is performed, and the small number of large dose fractions make the implementation of a comprehensive quality assurance (QA) program imperative. One aspect of this program is the QA of the HDR treatment plan. Review of regulatory and medical physics professional publications shows that substantial general guidance is available. We provide some insight to the implementation of an HDR prostate plan program at a community hospital. One aspect addressed is the utilization of the low-dose-rate (LDR) planning system and the use of existing ultrasound image sets to familiarize the radiation therapy team with respect to acceptable HDR implant geometries. Additionally, the use of the LDR treatment planning system provided a means to prospectively determine the relationship between the treated isodose volume and the product of activity and time for the department's planning protocol prior to the first HDR implant. For the first 12 HDR prostate implants, the root-mean-square (RMS) deviation was 3.05% between the predicted product of activity and time vs. the actual plan values. Retrospective re-evaluation of the actual implant data reduced the RMS deviation to 2.36%.
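The root-mean-square (RMS) deviation between predicted and actual products of activity and time quoted above can be computed as follows; the numbers in the sketch are hypothetical, not the implant data from this program:

```python
import numpy as np

def rms_percent_deviation(predicted, actual):
    """Root-mean-square of the percentage deviations of predictions from actuals."""
    pct = 100.0 * (predicted - actual) / actual
    return np.sqrt(np.mean(pct ** 2))

# hypothetical predicted vs. actual activity-time products for three implants
predicted = np.array([100.0, 102.0, 98.0])
actual = np.array([100.0, 100.0, 100.0])
rms = rms_percent_deviation(predicted, actual)
```

Because the deviations are squared before averaging, a single large outlier dominates the RMS figure, which makes it a conservative QA summary statistic.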

  12. Multifield stochastic particle production: beyond a maximum entropy ansatz

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amin, Mustafa A.; Garcia, Marcos A.G.; Xie, Hong-Yi

    2017-09-01

We explore non-adiabatic particle production for N_f coupled scalar fields in a time-dependent background with stochastically varying effective masses, cross-couplings and intervals between interactions. Under the assumption of weak scattering per interaction, we provide a framework for calculating the typical particle production rates after a large number of interactions. After setting up the framework, for analytic tractability, we consider interactions (effective masses and cross couplings) characterized by series of Dirac-delta functions in time with amplitudes and locations drawn from different distributions. Without assuming that the fields are statistically equivalent, we present closed form results (up to quadratures) for the asymptotic particle production rates for the N_f = 1 and N_f = 2 cases. We also present results for the general N_f > 2 case, but with more restrictive assumptions. We find agreement between our analytic results and direct numerical calculations of the total occupation number of the produced particles, with departures that can be explained in terms of violation of our assumptions. We elucidate the precise connection between the maximum entropy ansatz (MEA) used in Amin and Baumann (2015) and the underlying statistical distribution of the self and cross couplings. We provide and justify a simple to use (MEA-inspired) expression for the particle production rate, which agrees with our more detailed treatment when the parameters characterizing the effective mass and cross-couplings between fields are all comparable to each other. However, deviations are seen when some parameters differ significantly from others. We show that such deviations become negligible for a broad range of parameters when N_f >> 1.

  13. Inflight evaluation of pilot workload measures for rotorcraft research

    NASA Technical Reports Server (NTRS)

    Shively, Robert J.; Bortolussi, Michael R.; Battiste, Vernol; Hart, Sandra G.; Pepitone, David D.; Matsumoto, Joy Hamerman

    1987-01-01

    The effectiveness of heart-rate monitoring and the NASA TLX workload rating scale (Hart et al., 1985) in measuring helicopter-pilot workloads is investigated experimentally. Four NASA test pilots flew two 2-h missions each in an SH-3G helicopter, following scenarios with takeoff, hover, cross-country, and landing tasks; pilot performance on the tasks undertaken near the landing area was measured by laser tracking. The results are presented in graphs and discussed in detail, and it is found that the TLX ratings clearly distinguish the flight segments and are well correlated with the performance data. The mean heart rate (measured as interbeat interval) is correlated (r = -0.69) with the TLX workload, but only the standard deviation of the interbeat interval is able to distinguish between flight segments; the correlation between standard deviation and TLX ratings is negative but not significant.

  14. SU-E-J-32: Dosimetric Evaluation Based On Pre-Treatment Cone Beam CT for Spine Stereotactic Body Radiotherapy: Does Region of Interest Focus Matter?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magnelli, A; Xia, P

    2015-06-15

Purpose: Spine stereotactic body radiotherapy requires very conformal dose distributions and precise delivery. Prior to treatment, a KV cone-beam CT (KV-CBCT) is registered to the planning CT to provide image-guided positional corrections, which depend on selection of the region of interest (ROI) because of imperfect patient positioning and anatomical deformation. Our objective is to determine the dosimetric impact of ROI selections. Methods: Twelve patients were selected for this study with the treatment regions varied from C-spine to T-spine. For each patient, the KV-CBCT was registered to the planning CT three times using distinct ROIs: one encompassing the entire patient, a large ROI containing large bony anatomy, and a small target-focused ROI. Each registered CBCT volume, saved as an aligned dataset, was then sent to the planning system. The treated plan was applied to each dataset and dose was recalculated. The tumor dose coverage (percentage of target volume receiving prescription dose), maximum point dose to 0.03 cc of the spinal cord, and dose to 10% of the spinal cord volume (V10) for each alignment were compared to the original plan. Results: The average magnitude of tumor coverage deviation was 3.9%±5.8% with external contour, 1.5%±1.1% with large ROI, and 1.3%±1.1% with small ROI. Spinal cord V10 deviation from plan was 6.6%±6.6% with external contour, 3.5%±3.1% with large ROI, and 1.2%±1.0% with small ROI. Spinal cord max point dose deviation from plan was 12.2%±13.3% with external contour, 8.5%±8.4% with large ROI, and 3.7%±2.8% with small ROI. Conclusion: A small ROI focused on the target results in the smallest deviation from planned dose to target and cord, although rotations at large distances from the targets were observed. It is recommended that image fusion during CBCT focus narrowly on the target volume to minimize dosimetric error. Improvement in patient setups may further reduce residual errors.

  15. Thin Disk Accretion in the Magnetically-Arrested State

    NASA Astrophysics Data System (ADS)

    Avara, Mark J.; McKinney, Jonathan; Reynolds, Christopher S.

    2016-01-01

Shakura-Sunyaev thin disk theory is fundamental to black hole astrophysics. Though the theory is widely applied and provides powerful tools for explaining observations, such as Soltan's argument using quasar power, broadened iron line measurements, continuum fitting, and recently reverberation mapping, a significant large-scale magnetic field causes substantial deviations from standard thin disk behavior. We have used fully 3D general relativistic MHD simulations with cooling to explore the thin (H/R~0.1) magnetically arrested disk (MAD) state and quantify these deviations. This work demonstrates that accumulation of large-scale magnetic flux into the MAD state is possible, and then extends prior numerical studies of thicker disks, allowing us to measure how jet power scales with the disk state and providing a natural explanation of phenomena like jet quenching in the high-soft state of X-ray binaries. We have also simulated thin MAD disks with a misaligned black hole spin axis in order to understand further deviations from thin disk theory that may significantly affect observations.

  16. Sound-direction identification, interaural time delay discrimination, and speech intelligibility advantages in noise for a bilateral cochlear implant user.

    PubMed

    Van Hoesel, Richard; Ramsden, Richard; Odriscoll, Martin

    2002-04-01

    To characterize some of the benefits available from using two cochlear implants compared with just one, sound-direction identification (ID) abilities, sensitivity to interaural time delays (ITDs) and speech intelligibility in noise were measured for a bilateral multi-channel cochlear implant user. Sound-direction ID in the horizontal plane was tested with a bilateral cochlear implant user. The subject was tested both unilaterally and bilaterally using two independent behind-the-ear ESPRIT (Cochlear Ltd.) processors, as well as bilaterally using custom research processors. Pink noise bursts were presented using an 11-loudspeaker array spanning the subject's frontal 180 degrees arc in an anechoic room. After each burst, the subject was asked to identify which loudspeaker had produced the sound. No explicit training, and no feedback were given. Presentation levels were nominally at 70 dB SPL, except for a repeat experiment using the clinical devices where the presentation levels were reduced to 60 dB SPL to avoid activation of the devices' automatic gain control (AGC) circuits. Overall presentation levels were randomly varied by +/- 3 dB. For the research processor, a "low-update-rate" and a "high-update-rate" strategy were tested. Direct measurements of ITD just noticeable differences (JNDs) were made using a 3 AFC paradigm targeting 70% correct performance on the psychometric function. Stimuli included simple, low-rate electrical pulse trains as well as high-rate pulse trains modulated at 100 Hz. Speech data comparing monaural and binaural performance in noise were also collected with both low, and high update-rate strategies on the research processors. Open-set sentences were presented from directly in front of the subject and competing multi-talker babble noise was presented from the same loudspeaker, or from a loudspeaker placed 90 degrees to the left or right of the subject. 
For the sound-direction ID task, monaural performance using the clinical devices showed large mean absolute errors of 81 degrees and 73 degrees, with standard deviations (averaged across all 11 loudspeakers) of 10 degrees and 17 degrees, for left and right ears, respectively. For bilateral device use at a presentation level of 70 dB SPL, the mean error improved to about 16 degrees with an average standard deviation of 18 degrees. When the presentation level was decreased to 60 dB SPL to avoid activation of the automatic gain control (AGC) circuits in the clinical processors, the mean response error improved further to 8 degrees with a standard deviation of 13 degrees. Further tests with the custom research processors, which had a higher stimulation rate and did not include AGCs, showed comparable response errors: around 8 or 9 degrees and a standard deviation of about 11 degrees for both update rates. The best ITD JNDs measured for this subject were between 350 and 400 microsec for simple low-rate pulse trains. Speech results showed a substantial headshadow advantage for bilateral device use when speech and noise were spatially separated, but little evidence of binaural unmasking. For spatially coincident speech and noise, listening with both ears showed similar results to listening with either side alone when loudness summation was compensated for. No significant differences were observed between binaural results for high and low update-rates in any test configuration. Only for monaural listening in one test configuration did the high rate show a small significant improvement over the low rate. Results show that even if interaural time delay cues are not well coded or perceived, bilateral implants can offer important advantages, both for speech in noise as well as for sound-direction identification.

  17. Functional and evolutionary correlates of gene constellations in the Drosophila melanogaster genome that deviate from the stereotypical gene architecture

    PubMed Central

    2010-01-01

Background The biological dimensions of genes are manifold. These include genomic properties (e.g., X/autosomal linkage, recombination) and functional properties (e.g., expression level, tissue specificity). Multiple properties, each generally of subtle influence individually, may affect the evolution of genes or merely be (auto-)correlates. Results of multidimensional analyses may reveal the relative importance of these properties on the evolution of genes, and therefore help evaluate whether these properties should be considered during analyses. While numerous properties are now considered during studies, most work still assumes the stereotypical solitary gene as commonly depicted in textbooks. Here, we investigate the Drosophila melanogaster genome to determine whether deviations from the stereotypical gene architecture correlate with other properties of genes. Results Deviations from the stereotypical gene architecture were classified as the following gene constellations: Overlapping genes were defined as those that overlap in the 5-prime, exonic, or intronic regions. Chromatin co-clustering genes were defined as genes that co-clustered within 20 kb of transcriptional territories. Under this scheme, the stereotypical gene emerges as a rare occurrence (7.5%); slightly varied schemes yielded between ~1% and 50%. Moreover, when following our scheme, paired-overlapping genes and chromatin co-clustering genes accounted for 50.1% and 42.4% of the genes analyzed, respectively. Gene constellation was a correlate of a number of functional and evolutionary properties of genes, but its statistical effect was ~1-2 orders of magnitude lower than the effects of recombination, chromosome linkage and protein function. 
Analysis of datasets on male reproductive proteins showed these were biased in their representation of gene constellations and evolutionary rate (Ka/Ks) estimates, but these biases did not overwhelm the biologically meaningful observation of high evolutionary rates of male reproductive genes. Conclusion Given the rarity of the solitary stereotypical gene, and the abundance of gene constellations that deviate from it, gene constellations, once thought to be exceptional in large eukaryotic genomes, might have broader relevance to the understanding and study of the genome. However, according to our definition, while gene constellations can be significant correlates of functional properties of genes, they are generally weak correlates of the evolution of genes. Thus, the need for their consideration depends on the context of the study. PMID:20497561

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daniel, Scott F.; Linder, Eric V.; Lawrence Berkeley National Laboratory, Berkeley, California

Deviations from general relativity, such as could be responsible for the cosmic acceleration, would influence the growth of large-scale structure and the deflection of light by that structure. We clarify the relations between several different model-independent approaches to deviations from general relativity appearing in the literature, devising a translation table. We examine current constraints on such deviations, using weak gravitational lensing data of the CFHTLS and COSMOS surveys, cosmic microwave background radiation data of WMAP5, and supernova distance data of Union2. A Markov chain Monte Carlo likelihood analysis of the parameters over various redshift ranges yields consistency with general relativity at the 95% confidence level.

  19. Truck driver informational overload, fiscal year 1992. Final report, 1 July 1991-30 September 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacAdam, C.C.

    1992-09-01

The document represents the final project report for a study entitled 'Truck Driver Informational Overload' sponsored by the Motor Vehicle Manufacturers Association through its Motor Truck Research Committee and associated Operations/Performance Panels. As stated in an initial project statement, the objective of the work was to provide guidance for developing methods for measuring driving characteristics during information processing tasks. The contents of the report contain results from two basic project activities: (1) a literature review on multiple-task performance and driver informational overload, and (2) a description of driving simulator side-task experiments and a discussion of findings from tests conducted with eight subjects. Two of the key findings from a set of disturbance-input tests conducted with the simulator and the eight test subjects were that: (1) standard deviations of vehicle lateral position and heading (yaw) angle measurements showed the greatest sensitivity to the presence of side-task activities during basic information processing tasks, and (2) corresponding standard deviations of driver steering activity, vehicle yaw rate, and lateral acceleration measurements were seen to be largely insensitive indicators of side-task activity.

  20. Impact of turbulence anisotropy near walls in room airflow.

    PubMed

    Schälin, A; Nielsen, P V

    2004-06-01

The influence of different turbulence models used in computational fluid dynamics predictions is studied in connection with room air movement. The turbulence models used are the high Re-number kappa-epsilon model and the high Re-number Reynolds stress model (RSM). The three-dimensional wall jet is selected for the work. The growth rate parallel to the wall in a three-dimensional wall jet is large compared with the growth rate perpendicular to the wall, and it is large compared with the growth rate in a free circular jet. It is shown that it is not possible to predict the high growth rate parallel to a surface in a three-dimensional wall jet by the kappa-epsilon turbulence model. Furthermore, it is shown that the growth rate can be predicted to a certain extent by the RSM with wall reflection terms. The flow in a deep room can be strongly influenced by details such as the growth rate of a three-dimensional wall jet. Predictions by a kappa-epsilon model and the RSM show large deviations in the occupied zone. Measurements and observations of streamline patterns in model experiments indicate that a reasonable solution is obtained by the RSM compared with the solution obtained by the kappa-epsilon model. Computational fluid dynamics (CFD) is often used for the prediction of air distribution in rooms and for the evaluation of thermal comfort and indoor air quality. The most used turbulence model in CFD is the kappa-epsilon model. This model often produces good results; however, some cases require more sophisticated models. The prediction of a three-dimensional wall jet is improved if it is made by a Reynolds stress model (RSM). This model improves the prediction of the velocity level in the jet and in some special cases it may influence the entire flow in the occupied zone.

  1. Prediction of Rare Transitions in Planetary Atmosphere Dynamics Between Attractors with Different Number of Zonal Jets

    NASA Astrophysics Data System (ADS)

    Bouchet, F.; Laurie, J.; Zaboronski, O.

    2012-12-01

We describe transitions between attractors with either one, two or more zonal jets in models of turbulent atmosphere dynamics. Those transitions are extremely rare, and occur over time scales of centuries or millennia. They are extremely hard to observe in direct numerical simulations, because they require, on the one hand, very good resolution in order to simulate the turbulence accurately and, on the other, simulations performed over an extremely long time. Those conditions are usually not met together in realistic models. However, many examples of transitions between turbulent attractors in geophysical flows are known to exist (paths of the Kuroshio, Earth's magnetic field reversal, atmospheric flows, and so on). Their study through numerical computations is inaccessible using conventional means. We present an alternative approach, based on instanton theory and large deviations. Instanton theory provides a way to compute (both numerically and theoretically) extremely rare transitions between turbulent attractors. This tool, developed in field theory and justified in some cases through large deviation theory in mathematics, can be applied to models of turbulent atmosphere dynamics. It provides both new theoretical insights and new types of numerical algorithms. Those algorithms can predict transition histories and transition rates using numerical simulations run over only hundreds of typical model dynamical times, which is several orders of magnitude shorter than the typical transition time. We illustrate the power of those tools in the framework of quasi-geostrophic models. We show regimes where two or more attractors coexist. Those attractors correspond to turbulent flows dominated by one or more zonal jets similar to midlatitude atmosphere jets. Among the trajectories connecting two non-equilibrium attractors, we determine the most probable ones. 
Moreover, we also determine the transition rates, which correspond to time scales several orders of magnitude larger than the typical time set by the jet structure. We discuss the medium-term generalization of those results to models with more complexity, like primitive equations or GCMs.

  2. Cosmological implications of a large complete quasar sample

    PubMed Central

    Segal, I. E.; Nicoll, J. F.

    1998-01-01

    Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423–1460]. The Expanding Universe model as represented by the Friedman–Lemaitre cosmology with parameters qo = 0, Λ = 0 denoted as C1 and chronometric cosmology (no relevant adjustable parameters) denoted as C2 are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude–redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar “evolution,” which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact. PMID:9560182

  3. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

We present a high-precision ranging and range-rate measurement system via an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built. It included a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration with a 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10^-15 with 10-second averaging time. Ranging and range-rate performance as a function of the bit error rate of the communication link is reported; they are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10^-15 with 10-second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new improved system will be constructed to further improve the system performance in both operating modes.
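
The modified Allan deviation quoted above is a standard stability statistic computed from time-error (phase) samples. Below is a minimal NumPy sketch of the usual estimator, assuming evenly spaced phase data x_i with sample interval tau0 and averaging factor m (so tau = m*tau0); the function name is ours, not from the paper.

```python
import numpy as np

def mod_allan_dev(phase, m, tau0):
    """Modified Allan deviation from evenly spaced time-error samples.

    phase : array of phase (time-error) samples x_i, in seconds
    m     : averaging factor; the averaging time is tau = m * tau0
    tau0  : sample interval, in seconds
    """
    x = np.asarray(phase, dtype=float)
    N = len(x)
    if N < 3 * m + 1:
        raise ValueError("need at least 3*m + 1 phase samples")
    # second differences of the phase at lag m
    d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]         # length N - 2m
    # moving sum of m consecutive second differences (via cumulative sum)
    c = np.cumsum(np.concatenate(([0.0], d)))
    inner = c[m:] - c[:-m]                              # length N - 3m + 1
    var = np.sum(inner ** 2) / (2.0 * m ** 2 * (m * tau0) ** 2 * len(inner))
    return np.sqrt(var)
```

A pure frequency offset (a linear phase ramp) produces zero modified Allan deviation, which makes a convenient sanity check.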

  4. Psychophysiological responses to auditory change.

    PubMed

    Chuen, Lorraine; Sears, David; McAdams, Stephen

    2016-06-01

    A comprehensive characterization of autonomic and somatic responding within the auditory domain is currently lacking. We studied whether simple types of auditory change that occur frequently during music listening could elicit measurable changes in heart rate, skin conductance, respiration rate, and facial motor activity. Participants heard a rhythmically isochronous sequence consisting of a repeated standard tone, followed by a repeated target tone that changed in pitch, timbre, duration, intensity, or tempo, or that deviated momentarily from rhythmic isochrony. Changes in all parameters produced increases in heart rate. Skin conductance response magnitude was affected by changes in timbre, intensity, and tempo. Respiratory rate was sensitive to deviations from isochrony. Our findings suggest that music researchers interpreting physiological responses as emotional indices should consider acoustic factors that may influence physiology in the absence of induced emotions. © 2016 Society for Psychophysiological Research.

  5. Deformed transition-state theory: Deviation from Arrhenius behavior and application to bimolecular hydrogen transfer reaction rates in the tunneling regime.

    PubMed

    Carvalho-Silva, Valter H; Aquilanti, Vincenzo; de Oliveira, Heibbe C B; Mundim, Kleber C

    2017-01-30

A formulation is presented for the application of tools from quantum chemistry and transition-state theory to phenomenologically cover cases where reaction rates deviate from the Arrhenius law at low temperatures. A parameter d is introduced to describe the deviation of systems from the thermodynamic limit and is identified as the linearizing coefficient in the dependence of the inverse activation energy on inverse temperature. Its physical meaning is given, and when the deviation can be ascribed to quantum mechanical tunneling its value is calculated explicitly. Here, a new derivation is given of the previously established relationship of the parameter d with features of the barrier in the potential energy surface. The proposed variant of transition-state theory permits comparison with experiments and tests against alternative formulations. Prescriptions are provided and applied to three hydrogen transfer reactions: CH4 + OH → CH3 + H2O, CH3Cl + OH → CH2Cl + H2O, and H2 + CN → H + HCN, all widely investigated both experimentally and theoretically. © 2016 Wiley Periodicals, Inc.
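
As a rough illustration of the deformed-Arrhenius idea behind this approach, the sketch below evaluates a rate constant of the form k(T) = A·[1 − d·Ea/(R·T)]^(1/d), which recovers the classical Arrhenius law as d → 0, while negative d (the sub-Arrhenius, tunneling-dominated regime) yields rates above the Arrhenius value at low temperature. Function and parameter values here are illustrative assumptions, not taken from the paper.

```python
import math

R = 8.314462618  # gas constant, J/(mol K)

def k_deformed(T, A, Ea, d):
    """Deformed-Arrhenius rate constant k(T) = A * [1 - d*Ea/(R*T)]**(1/d).

    d -> 0 recovers the classical Arrhenius law; d < 0 corresponds to the
    sub-Arrhenius (tunneling) regime. Illustrative sketch only.
    """
    if abs(d) < 1e-12:
        return A * math.exp(-Ea / (R * T))
    base = 1.0 - d * Ea / (R * T)
    if base <= 0.0:  # for d > 0 the rate vanishes beyond this temperature
        return 0.0
    return A * base ** (1.0 / d)
```

For small |d| the deformed expression is numerically indistinguishable from the Arrhenius rate, and for d < 0 it exceeds it, consistent with tunneling-enhanced low-temperature rates.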

  6. High Pressure ZZ-Exchange NMR Reveals Key Features of Protein Folding Transition States.

    PubMed

    Zhang, Yi; Kitazawa, Soichiro; Peran, Ivan; Stenzoski, Natalie; McCallum, Scott A; Raleigh, Daniel P; Royer, Catherine A

    2016-11-23

Understanding protein folding mechanisms and their sequence dependence requires the determination of residue-specific apparent kinetic rate constants for the folding and unfolding reactions. Conventional two-dimensional NMR, such as HSQC experiments, can provide residue-specific information for proteins. However, folding is generally too fast for such experiments. ZZ-exchange NMR spectroscopy allows determination of folding and unfolding rates on much faster time scales, yet even this regime is not fast enough for many protein folding reactions. The application of high hydrostatic pressure slows folding by orders of magnitude due to positive activation volumes for the folding reaction. We combined high pressure perturbation with ZZ-exchange spectroscopy on two autonomously folding protein domains derived from the ribosomal protein L9. We obtained residue-specific apparent rates at 2500 bar for the N-terminal domain of L9 (NTL9), and rates at atmospheric pressure for a mutant of the C-terminal domain (CTL9), from pressure-dependent ZZ-exchange measurements. Our results revealed that NTL9 folding is almost perfectly two-state, while small deviations from two-state behavior were observed for CTL9. Both domains exhibited large positive activation volumes for folding. The volumetric properties of these domains reveal that their transition states contain most of the internal solvent excluded voids that are found in the hydrophobic cores of the respective native states. These results demonstrate that by coupling it with high pressure, ZZ-exchange can be extended to investigate a large number of protein conformational transitions.

  7. Finite-Time and -Size Scalings in the Evaluation of Large Deviation Functions. Numerical Analysis in Continuous Time

    NASA Astrophysics Data System (ADS)

    Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien

Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study by simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows one to extract the infinite-time and infinite-size limit of these estimators.
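
The population-dynamics (cloning) idea can be illustrated on a toy system. The sketch below estimates the scaled cumulant generating function ψ(s) for the time spent in one state of a two-state Markov chain, and compares the cloning estimate against the exact value given by the largest eigenvalue of the tilted transition matrix. This is a discrete-time caricature of the continuous-time algorithm analyzed in the paper, with an arbitrarily chosen chain and bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state Markov chain; the observable A_t counts visits to state 0.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
s = 0.5  # biasing (tilting) parameter

# Exact SCGF: log of the largest eigenvalue of the tilted matrix
# P_tilde[i, j] = P[i, j] * exp(s * 1[j == 0]).
tilt = np.exp(s * np.array([1.0, 0.0]))
psi_exact = np.log(np.max(np.linalg.eigvals(P * tilt[None, :]).real))

# Cloning estimator: N walkers, resampled each step by their weights.
N, T = 2000, 2000
states = rng.integers(0, 2, size=N)
log_growth = 0.0
for _ in range(T):
    # every walker jumps according to the unbiased dynamics P
    states = np.where(rng.random(N) < P[states, 0], 0, 1)
    w = tilt[states]                # weight exp(s) when a walker sits in state 0
    log_growth += np.log(w.mean())  # population growth factor at this step
    # clone/prune: resample the population proportionally to the weights
    states = states[rng.choice(N, size=N, p=w / w.sum())]
psi_cloning = log_growth / T
```

The finite-N and finite-T discrepancy between `psi_cloning` and `psi_exact` is precisely the kind of systematic effect whose scaling the paper analyzes.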

  8. Short-term heart rate variability in dogs with sick sinus syndrome or chronic mitral valve disease as compared to healthy controls.

    PubMed

    Bogucki, Sz; Noszczyk-Nowak, A

    2017-03-28

Heart rate variability is an established risk factor for mortality in both healthy dogs and animals with heart failure. The aim of this study was to compare short-term heart rate variability (ST-HRV) parameters from 60-min electrocardiograms in dogs with sick sinus syndrome (SSS, n=20) or chronic mitral valve disease (CMVD, n=20) and healthy controls (n=50), and to verify the clinical application of ST-HRV analysis. The study groups differed significantly in terms of both time- and frequency-domain ST-HRV parameters. In the case of dogs with SSS and healthy controls, particularly evident differences pertained to HRV parameters linked directly to the variability of R-R intervals. Lower values of the standard deviation of all R-R intervals (SDNN), standard deviation of the averaged R-R intervals for all 5-min segments (SDANN), mean of the standard deviations of all R-R intervals for all 5-min segments (SDNNI), and percentage of successive R-R intervals differing by >50 ms (pNN50) corresponded to a decrease in parasympathetic regulation of heart rate in dogs with CMVD. These findings imply that ST-HRV may be useful for the identification of dogs with SSS and for detection of dysautonomia in animals with CMVD.
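
The time-domain indices named above follow directly from a sequence of R-R intervals. A minimal sketch, assuming input intervals in milliseconds and segmentation into 5-min blocks by cumulative time (the function name is ours):

```python
import numpy as np

def hrv_time_domain(rr_ms, segment_ms=5 * 60 * 1000):
    """Time-domain HRV indices from a sequence of R-R intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)  # SD of all R-R intervals
    # split the recording into 5-min segments by cumulative time
    edges = np.searchsorted(np.cumsum(rr),
                            np.arange(segment_ms, rr.sum(), segment_ms))
    segs = [seg for seg in np.split(rr, edges) if len(seg) > 1]
    sdann = np.std([seg.mean() for seg in segs], ddof=1)  # SD of segment means
    sdnni = np.mean([seg.std(ddof=1) for seg in segs])    # mean of segment SDs
    # percentage of successive intervals differing by more than 50 ms
    pnn50 = 100.0 * np.mean(np.abs(np.diff(rr)) > 50.0)
    return {"SDNN": sdnn, "SDANN": sdann, "SDNNI": sdnni, "pNN50": pnn50}
```

A perfectly regular rhythm yields zeros across the board, while a beat-to-beat alternation of 100 ms drives pNN50 to 100%.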

  9. Performance of informative priors skeptical of large treatment effects in clinical trials: A simulation study.

    PubMed

    Pedroza, Claudia; Han, Weilu; Thanh Truong, Van Thi; Green, Charles; Tyson, Jon E

    2018-01-01

One of the main advantages of Bayesian analyses of clinical trials is their ability to formally incorporate skepticism about large treatment effects through the use of informative priors. We conducted a simulation study to assess the performance of informative normal, Student-t, and beta distributions in estimating relative risk (RR) or odds ratio (OR) for binary outcomes. Simulation scenarios varied the prior standard deviation (SD; the level of skepticism of large treatment effects), the outcome rate in the control group, the true treatment effect, and the sample size. We compared the priors with regard to bias, mean squared error (MSE), and coverage of 95% credible intervals. Simulation results show that the prior SD influenced the posterior to a greater degree than the particular distributional form of the prior. For RR, priors with a 95% interval of 0.50-2.0 performed well in terms of bias, MSE, and coverage under most scenarios. For OR, priors with a wider 95% interval of 0.23-4.35 had good performance. We recommend the use of informative priors that exclude implausibly large treatment effects in analyses of clinical trials, particularly for major outcomes such as mortality.
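
A skeptical normal prior on the log scale is fully determined by the 95% interval it assigns on the ratio scale. A small sketch of that conversion, assuming a prior centered at no effect (mean 0 on the log scale); the helper name is ours, and the exact priors used in the study may differ:

```python
import math

Z975 = 1.959963985  # two-sided 95% standard-normal quantile

def skeptical_prior_sd(lower, upper):
    """SD of a mean-zero normal prior on the log(RR) or log(OR) scale
    whose central 95% interval is [lower, upper] on the ratio scale."""
    # the interval must be (approximately) symmetric about 1 on the log scale
    assert abs(math.log(lower) + math.log(upper)) < 0.01
    return math.log(upper) / Z975

sd_rr = skeptical_prior_sd(0.50, 2.0)   # interval quoted for relative risks
sd_or = skeptical_prior_sd(0.23, 4.35)  # wider interval quoted for odds ratios
```

The RR interval 0.50-2.0 corresponds to a log-scale SD of about 0.35, and the OR interval 0.23-4.35 to about 0.75.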

  10. Rare events in networks with internal and external noise

    NASA Astrophysics Data System (ADS)

    Hindes, J.; Schwartz, I. B.

    2017-12-01

    We study rare events in networks with both internal and external noise, and develop a general formalism for analyzing rare events that combines pair-quenched techniques and large-deviation theory. The probability distribution, shape, and time scale of rare events are considered in detail for extinction in the Susceptible-Infected-Susceptible model as an illustration. We find that when both types of noise are present, there is a crossover region as the network size is increased, where the probability exponent for large deviations no longer increases linearly with the network size. We demonstrate that the form of the crossover depends on whether the endemic state is localized near the epidemic threshold or not.

  11. Fearsome Flashes: A Study Of The Evolution Of Flaring Rates In Cool Stars Using Kepler Cluster Data

    NASA Astrophysics Data System (ADS)

    Saar, Steven

Strong solar flares can damage power grids and satellites, interrupt communications and GPS information, and threaten astronauts and high-latitude air travelers. Despite the potential cost, their frequency is poorly determined. Beyond current terrestrial concerns, how the rate of large flares (and associated coronal mass ejections [CMEs], high-energy particle fluxes, and far-UV emission) varies over the stellar lifetime holds considerable astrophysical interest: the contributions of flares to coronal energy budgets; the importance of flares and CMEs to terrestrial and exoplanet atmospheric and biological evolution; and the importance of CME mass loss for angular momentum evolution. We will explore the rate of strong flares and its variation with stellar age, mass, and rotation by studying Kepler data of cool stars in two open clusters, NGC 6811 (age ~1 Gyr) and NGC 6819 (~2.5 Gyr). We will use two flare analysis methods to build white-light flare distributions for cluster stars. One subtracts a low-pass filtered version of the data and analyzes the residue for positive flux deviations; the other does a statistical analysis of the flux deviations vs. time lags compared with a model. For near-solar stars, a known solar relation can then be used to estimate X-ray production by the white-light flares. For stars much hotter or cooler or with significantly different chromospheric density, we will use particle code flare models including bombardment effects to estimate how the X-ray to white-light scaling changes. With the X-ray values, we can estimate far-UV fluxes and CME rates, building a picture of the flare effects; with the two cluster ages, we can make a first estimate of the solar rate (by projecting to the Sun's age) and begin to build up an understanding of flare rate evolution with mass and age. 
Our proposal falls squarely within the "Stellar Astrophysics and Exoplanets" research area, and is relevant to NASA astrophysics goals in promoting a better understanding of the evolution of stars and their exoplanets, and of the environment in which life evolved, and the threats to it, both on Earth and in the wider cosmos.
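
The first flare-analysis method described, subtracting a low-pass filtered light curve and flagging positive flux deviations, can be sketched generically as follows. The running-median baseline, window length, and sigma threshold are our illustrative choices, not the proposal's actual pipeline.

```python
import numpy as np

def detect_flares(flux, window=25, nsigma=3.0):
    """Flag candidate flares as positive deviations of a light curve
    above a low-pass (running-median) baseline."""
    flux = np.asarray(flux, dtype=float)
    pad = window // 2
    padded = np.pad(flux, pad, mode="edge")
    baseline = np.array([np.median(padded[i:i + window])
                         for i in range(len(flux))])
    resid = flux - baseline
    # robust scatter estimate from the median absolute deviation
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    return np.flatnonzero(resid > nsigma * sigma)
```

A running median is a deliberate choice here: unlike a running mean, it barely responds to a short flare spike, so the spike survives in the residual rather than being absorbed into the baseline.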

  12. Approaching sub-50 nanoradian measurements by reducing the saw-tooth deviation of the autocollimator in the Nano-Optic-Measuring Machine

    NASA Astrophysics Data System (ADS)

    Qian, Shinan; Geckeler, Ralf D.; Just, Andreas; Idir, Mourad; Wu, Xuehui

    2015-06-01

Since the development of the Nano-Optic-Measuring Machine (NOM), the accuracy of measuring the profile of an optical surface has been enhanced to the 100-nrad rms level or better. However, to improve the accuracy of the NOM system to sub-50 nrad rms, the large saw-tooth deviation (269 nrad rms) of an existing electronic autocollimator, the Elcomat 3000/8, must be resolved. We carried out simulations to assess the saw-tooth-like deviation. We developed a method for setting readings that reduces the deviation to sub-50 nrad rms, suitable for testing plane mirrors. With this method, we found that all tests conducted in a slowly rising section of the saw-tooth show a small deviation of 28.8 to <40 nrad rms. We also developed a dense-measurement method and an integer-period method to lower the saw-tooth deviation during tests of spherical mirrors. Further research is necessary for formulating a precise test for a spherical mirror. We present a series of test results from our experiments that verify the value of the improvements we made.

  13. Diagnostic accuracy of referral criteria for head circumference to detect hydrocephalus in the first year of life.

    PubMed

    van Dommelen, Paula; Deurloo, Jacqueline A; Gooskens, Rob H; Verkerk, Paul H

    2015-04-01

Increased head circumference is often the first and main sign leading to the diagnosis of hydrocephalus. Our aim is to investigate the diagnostic accuracy of referral criteria for head circumference to detect hydrocephalus in the first year of life. A reference group with longitudinal head circumference data (n = 1938) was obtained from the Social Medical Survey of Children Attending Child Health Clinics study. The case group comprised infants with hydrocephalus treated in a tertiary pediatric hospital who had not already been detected during pregnancy (n = 125). Head circumference data were available for 43 patients. Head circumference data were standardized according to gestational age-specific references. Sensitivity and specificity of a very large head circumference (>2.5 standard deviations on the growth chart) were, respectively, 72.1% (95% confidence interval [CI]: 56.3-84.7) and 97.1% (95% CI: 96.2-97.8). These figures were, respectively, 74.4% (95% CI: 58.8-86.5) and 93.0% (95% CI: 91.8-94.1) for a large head circumference (>2.0 standard deviations), and 76.7% (95% CI: 61.4-88.2) and 96.5% (95% CI: 95.6-97.3) for a very large head circumference and/or very large (>2.5 standard deviations) progressive growth of head circumference. A very large head circumference and/or very large progressive growth of head circumference shows the best diagnostic accuracy for detecting hydrocephalus at an early stage. Gestational age-specific growth charts are recommended. Further improvements may be possible by taking into account parental head circumference. Copyright © 2015 Elsevier Inc. All rights reserved.
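
Sensitivity and specificity such as those above are simple proportions with binomial confidence intervals. The sketch below uses the Wilson score interval as one common choice (the study's exact interval method is not stated here, and the counts are hypothetical values consistent with the reported percentages):

```python
import math

def wilson(k, n, z=1.959964):
    """Point estimate and Wilson-score 95% CI for a binomial proportion k/n."""
    p = k / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, centre - half, centre + half

# hypothetical counts consistent with the reported percentages:
# 31/43 cases flagged (sensitivity ~72.1%), 1882/1938 controls not flagged
# (specificity ~97.1%)
sens = wilson(31, 43)
spec = wilson(1882, 1938)
```

With these counts the Wilson interval for sensitivity comes out close to the reported 56.3-84.7% range, illustrating how wide the uncertainty is with only 43 cases.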

  14. Visual field progression in glaucoma: what is the specificity of the Guided Progression Analysis?

    PubMed

    Artes, Paul H; O'Leary, Neil; Nicolela, Marcelo T; Chauhan, Balwantray C; Crabb, David P

    2014-10-01

    To estimate the specificity of the Guided Progression Analysis (GPA) (Carl Zeiss Meditec, Dublin, CA) in individual patients with glaucoma. Observational cohort study. Thirty patients with open-angle glaucoma. In 30 patients with open-angle glaucoma, 1 eye (median mean deviation [MD], -2.5 decibels [dB]; interquartile range, -4.4 to -1.3 dB) was tested 12 times over 3 months (Humphrey Field Analyzer, Carl Zeiss Meditec; SITA Standard, 24-2). "Possible progression" and "likely progression" were determined with the GPA. These analyses were repeated after the order of the tests had been randomly rearranged (1000 unique permutations). Rate of false-positive alerts of "possible progression" and "likely progression" with the GPA. On average, the specificity of the GPA "likely progression" alert was high: for the entire sample, the mean rate of false-positive alerts after 10 follow-up tests was 2.6%. With "possible progression," the specificity was considerably lower (false-positive rate, 18.5%). Most importantly, the cumulative rate of false-positive alerts varied substantially among patients, from <1% to 80% with "possible progression" and from <0.1% to 20% with "likely progression." Factors associated with false-positive alerts were visual field variability (standard deviation of MD, Spearman's rho = 0.41, P < 0.001) and the reliability indices (proportion of false-positive and false-negative responses, fixation losses, rho > 0.31, P ≤ 0.10). On average, progression criteria currently used in the GPA have high specificity, but some patients are more likely to show false-positive alerts than others. This is a natural consequence of population-based change criteria and may not matter in clinical trials and studies in which large groups of patients are compared. However, it must be considered when the GPA is used in clinical practice, where specificity needs to be controlled for individual patients. Copyright © 2014 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
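
    The permutation approach above can be sketched in a few lines: repeatedly shuffle the order of one eye's tests (destroying any real time trend) and count how often a progression criterion fires in the reordered, effectively stable data. A minimal sketch with simulated MD values and a toy two-consecutive-declines criterion standing in for the proprietary GPA logic:

```python
# Estimate a progression criterion's false-positive rate by permutation:
# the series below is noise around a flat MD, so every alert is false.
import random

def flags_progression(series, threshold=-1.5):
    """Toy criterion: two consecutive tests both below baseline by threshold dB."""
    base = series[0]
    hits = [x - base < threshold for x in series[1:]]
    return any(a and b for a, b in zip(hits, hits[1:]))

def false_positive_rate(series, n_perm=1000, seed=1):
    rng = random.Random(seed)
    count = 0
    for _ in range(n_perm):
        perm = series[:]
        rng.shuffle(perm)          # destroy temporal order
        if flags_progression(perm):
            count += 1
    return count / n_perm

rng = random.Random(0)
stable_md = [-2.5 + rng.gauss(0, 1.0) for _ in range(12)]  # 12 tests, no true change
fpr = false_positive_rate(stable_md)
print(f"false-positive rate: {fpr:.1%}")
```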

  15. Perception of midline deviations in smile esthetics by laypersons.

    PubMed

    Ferreira, Jamille Barros; Silva, Licínio Esmeraldo da; Caetano, Márcia Tereza de Oliveira; Motta, Andrea Fonseca Jardim da; Cury-Saramago, Adriana de Alcantara; Mucha, José Nelson

    2016-01-01

    To evaluate the esthetic perception of upper dental midline deviation by laypersons and whether adjacent structures influence their judgment. An album with 12 randomly distributed frontal-view photographs of the smile of a woman with the midline digitally deviated was evaluated by 95 laypersons. The frontal-view smiling photograph was modified to create deviations of 1 mm to 5 mm in the upper midline to the left side. The photographs were cropped in two different manners and divided into two groups of six photographs each: group LCN included the lips, chin, and two-thirds of the nose, and group L included the lips only. The laypersons rated each smile using a visual analog scale (VAS). The Wilcoxon test, Student's t-test and Mann-Whitney test were applied, adopting a 5% level of significance. Laypersons were able to perceive midline deviations starting at 1 mm. Statistically significant results (p < 0.05) were found for all multiple comparisons of the values in photographs of group LCN and for almost all comparisons in photographs of group L. Comparisons between the photographs of groups LCN and L showed statistically significant values (p < 0.05) when the deviation was 1 mm. Laypersons were able to perceive upper dental midline deviations of 1 mm and above when the adjacent structures of the smile were included, and of 2 mm and above when only the lips were included. The visualization of structures adjacent to the smile influenced the perception of midline deviation.

  16. Effects of structural offset, axial shortening, and gravitational torque on the slewing of a flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Li, Feiyue; Bainum, Peter M.

    1990-01-01

    The large-angle maneuvering of a Shuttle-beam-reflector spacecraft in the plane of a circular earth orbit is examined by considering the effects of the structural offset connection, the axial shortening, and the gravitational torque on the slewing motion. The offset effect is analyzed by changing the attachment point of the reflector to the beam. As the attachment point is moved away from the mass center of the reflector, the responses of the nonlinear system deviate from those of the linearized system. The axial geometric shortening effect induced by the deformation of the beam contributes to the system equations through second order terms in the modal amplitudes and rates. The gravitational torque effect is relatively small.

  17. Particle Orbit Analysis in the Finite Beta Plasma of the Large Helical Device using Real Coordinates

    NASA Astrophysics Data System (ADS)

    Seki, Ryousuke; Matsumoto, Yutaka; Suzuki, Yasuhiro; Watanabe, Kiyomasa; Itagaki, Masafumi

    High-energy particles in a finite beta plasma of the Large Helical Device (LHD) are numerically traced in a real coordinate system. We investigate particle orbits by changing the beta value and/or the magnetic field strength. No significant difference is found in the particle orbit classifications between the vacuum magnetic field and the finite beta plasma cases. The deviation of a banana orbit from the flux surfaces strongly depends on the beta value, although the deviation of the orbit of a passing particle is independent of the beta value. In addition, the deviation of the orbit of the passing particle, rather than that of the banana-orbit particles, depends on the magnetic field strength. We also examine the effect of re-entering particles, which repeatedly pass in and out of the last closed flux surface, in the finite beta plasma of the LHD. It is found that the number of re-entering particles in the finite beta plasma is larger than that in the vacuum magnetic field. As a result, the role of re-entering particles in the finite beta plasma of the LHD is more important than that in the vacuum magnetic field, and the effect of the charge-exchange reaction on particle confinement in the finite beta plasma is large.

  18. Not a Copernican observer: biased peculiar velocity statistics in the local Universe

    NASA Astrophysics Data System (ADS)

    Hellwing, Wojciech A.; Nusser, Adi; Feix, Martin; Bilicki, Maciej

    2017-05-01

    We assess the effect of the local large-scale structure on the estimation of two-point statistics of the observed radial peculiar velocities of galaxies. A large N-body simulation is used to examine these statistics from the perspective of random observers as well as 'Local Group-like' observers conditioned to reside in an environment resembling the observed Universe within 20 Mpc. The local environment systematically distorts the shape and amplitude of velocity statistics with respect to ensemble-averaged measurements made by a Copernican (random) observer. The Virgo cluster has the most significant impact, introducing large systematic deviations in all the statistics. For a simple 'top-hat' selection function, an idealized survey extending to ˜160 h-1 Mpc or deeper is needed to completely mitigate the effects of the local environment. Using shallower catalogues leads to systematic deviations of the order of 50-200 per cent depending on the scale considered. For a flat redshift distribution similar to the one of the CosmicFlows-3 survey, the deviations are even more prominent in both the shape and amplitude at all separations considered (≲100 h-1 Mpc). Conclusions based on statistics calculated without taking into account the impact of the local environment should be revisited.

  19. Large incidence angle and defocus influence cat's eye retro-reflector

    NASA Astrophysics Data System (ADS)

    Zhang, Lai-xian; Sun, Hua-yan; Zhao, Yan-zhong; Yang, Ji-guang; Zheng, Yong-hui

    2014-11-01

    A cat's eye lens retro-reflects an incident laser beam exactly back along the direction of incidence (the cat's eye effect), which makes rapid acquisition, tracking, and pointing for free-space optical communication possible. Studying how the cat's eye effect behaves in a cat's eye retro-reflector at large incidence angles is therefore useful. This paper analyzes, using the geometrical optics method, how the incidence angle and the focal shift affect the effective receiving area, the retro-reflected beam divergence angle, and the central deviation of a cat's eye retro-reflector at large incidence angles, as well as the cat's eye effect factor, and presents the corresponding analytic expressions. Finally, numerical simulation was performed to verify the correctness of the analysis. The results show that the effective receiving area of the cat's eye retro-reflector is mainly affected by the incidence angle when the focal shift is positive, and it decreases rapidly as the incidence angle increases; the retro-reflected beam divergence and the central deviation are mainly affected by the focal shift, and within the effective receiving area the central deviation is smaller than the beam divergence most of the time, which means the incident beam can be received and retro-reflected to the other terminal most of the time. The cat's eye effect factor gain is affected by both the incidence angle and the focal shift.

  20. Life histories and conservation of long-lived reptiles, an illustration with the American crocodile (Crocodylus acutus)

    USGS Publications Warehouse

    Briggs-Gonzalez, Venetia; Bonefant, Christophe; Basille, Mathieu; Cherkiss, Michael S.; Beauchamp, Jeff; Mazzotti, Frank J.

    2017-01-01

    Successful species conservation is dependent on adequate estimates of population dynamics, but age-specific demographics are generally lacking for many long-lived iteroparous species such as large reptiles. Accurate demographic information allows estimation of population growth rate, as well as projection of future population sizes and quantitative analyses of fitness trade-offs involved in the evolution of life-history strategies. Here, a long-term capture-recapture study was conducted from 1978 to 2014 on the American crocodile (Crocodylus acutus) in southern Florida. Over the study period, 7,427 hatchlings were marked and 380 individuals were recaptured for as many as 25 years. Based on mark-recapture models, we estimated survival to be strongly age dependent, with hatchlings having the lowest survival rate (16%) and survival increasing to nearly 90% at adulthood. More than 5% of the female population were predicted to be reproductive by age 8 years; the age-specific proportion of reproductive females steadily increased until age 18, when more than 95% of females were predicted to be reproductive. Population growth rate, estimated from a Leslie-Lefkovitch stage-class model, showed a positive annual growth rate of 4% over the study period. Using a prospective sensitivity analysis, we revealed that the adult stage, as expected, was the most critical stage for population growth rate; however, the survival of younger crocodiles before they became reproductive also had a surprisingly high elasticity. We found that variation in age-specific fecundity has very limited impact on population growth rate in American crocodiles. We used a comparative approach to show that the life-history strategy of American crocodiles is actually shared by other large, long-lived reptiles: while adult survival rates always have a large impact on population growth, this impact decreases with increasing growth rates, in favour of a higher elasticity of the juvenile stage. Crocodiles, as a long-lived and highly fecund species, deviate from the usual association of life histories of "slow" species. Current management practices are focused on nests and hatchling survival; however, protection efforts that extend to juvenile crocodiles would be most effective for conservation of the species, especially in an ever-developing landscape.
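
    The stage-class projection described above can be sketched with a small Lefkovitch matrix whose dominant eigenvalue gives the asymptotic annual growth rate. The three stages and all vital rates below are illustrative, chosen only to land near the reported ~4% annual growth; they are not the paper's fitted values:

```python
# Asymptotic growth rate of a stage-structured population: the dominant
# eigenvalue (Perron root) of a Lefkovitch projection matrix.
import numpy as np

# Stages: hatchling, juvenile, adult. Entry A[i, j] is the per-capita
# contribution of stage j this year to stage i next year.
A = np.array([
    [0.00, 0.00, 2.70],   # effective fecundity of adults (illustrative)
    [0.16, 0.60, 0.00],   # hatchling survival; juveniles remaining juvenile
    [0.00, 0.15, 0.90],   # juveniles maturing; adult survival
])

lam = float(max(np.linalg.eigvals(A).real))  # dominant eigenvalue
print(f"asymptotic annual growth rate: {lam - 1:+.1%}")  # about +4%
```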

  1. Analysis of change orders in geotechnical engineering work at INDOT.

    DOT National Transportation Integrated Search

    2011-01-01

    Change orders represent a cost to the State and to tax payers that is real and often extremely large because contractors tend to charge very large amounts to any additional work that deviates from the work that was originally planned. Therefore, ef...

  2. Molecular dynamics studies of electron-ion temperature equilibration in hydrogen plasmas within the coupled-mode regime

    DOE PAGES

    Benedict, Lorin X.; Surh, Michael P.; Stanton, Liam G.; ...

    2017-04-10

    Here, we use classical molecular dynamics (MD) to study electron-ion temperature equilibration in two-component plasmas in regimes for which the presence of coupled collective modes has been predicted to substantively reduce the equilibration rate. Guided by previous kinetic theory work, we examine hydrogen plasmas at a density of n = 10^26 cm^-3, T_i = 10^5 K, and 10^7 K < T_e < 10^9 K. The nonequilibrium classical MD simulations are performed with interparticle interactions modeled by quantum statistical potentials (QSPs). Our MD results indicate (i) a large effect from time-varying potential energy, which we quantify by appealing to an adiabatic two-temperature equation of state, and (ii) a notable deviation in the energy equilibration rate when compared to calculations from classical Lenard-Balescu theory including the QSPs. In particular, it is shown that the energy equilibration rates from MD are more similar to those of the theory when coupled modes are neglected. We suggest possible reasons for this surprising result and propose directions of further research along these lines.
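
    For context, the simple rate picture against which such MD results are compared can be sketched as a two-temperature relaxation model with a fixed coupling rate. This is a minimal sketch with an illustrative rate nu and equal heat capacities assumed; the paper's point is precisely that the rate extracted from MD deviates from such rate-model predictions:

```python
# Two-temperature relaxation: the electron-ion temperature gap decays as
# dTe/dt = -nu*(Te - Ti), dTi/dt = +nu*(Te - Ti), conserving Te + Ti
# (equal heat capacities assumed). All values are illustrative.
def relax(Te, Ti, nu, dt, steps):
    for _ in range(steps):
        dT = Te - Ti
        Te -= nu * dT * dt
        Ti += nu * dT * dt
    return Te, Ti

Te, Ti = relax(Te=1e8, Ti=1e5, nu=1.0, dt=1e-3, steps=5000)
print(f"Te = {Te:.3e} K, Ti = {Ti:.3e} K")  # both approach the common mean
```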

  3. Centrality-Dependent Modification of Jet-Production Rates in Deuteron-Gold Collisions at √[s(NN)]=200 GeV.

    PubMed

    Adare, A; Aidala, C; Ajitanand, N N; Akiba, Y; Al-Bataineh, H; Alexander, J; Alfred, M; Angerami, A; Aoki, K; Apadula, N; Aramaki, Y; Asano, H; Atomssa, E T; Averbeck, R; Awes, T C; Azmoun, B; Babintsev, V; Bai, M; Baksay, G; Baksay, L; Bandara, N S; Bannier, B; Barish, K N; Bassalleck, B; Basye, A T; Bathe, S; Baublis, V; Baumann, C; Bazilevsky, A; Beaumier, M; Beckman, S; Belikov, S; Belmont, R; Bennett, R; Berdnikov, A; Berdnikov, Y; Bhom, J H; Blau, D S; Bok, J S; Boyle, K; Brooks, M L; Bryslawskyj, J; Buesching, H; Bumazhnov, V; Bunce, G; Butsyk, S; Campbell, S; Caringi, A; Chen, C-H; Chi, C Y; Chiu, M; Choi, I J; Choi, J B; Choudhury, R K; Christiansen, P; Chujo, T; Chung, P; Chvala, O; Cianciolo, V; Citron, Z; Cole, B A; Conesa Del Valle, Z; Connors, M; Csanád, M; Csörgő, T; Dahms, T; Dairaku, S; Danchev, I; Danley, T W; Das, K; Datta, A; Daugherity, M S; David, G; Dayananda, M K; DeBlasio, K; Dehmelt, K; Denisov, A; Deshpande, A; Desmond, E J; Dharmawardane, K V; Dietzsch, O; Dion, A; Diss, P B; Do, J H; Donadelli, M; D'Orazio, L; Drapier, O; Drees, A; Drees, K A; Durham, J M; Durum, A; Dutta, D; Edwards, S; Efremenko, Y V; Ellinghaus, F; Engelmore, T; Enokizono, A; En'yo, H; Esumi, S; Fadem, B; Feege, N; Fields, D E; Finger, M; Finger, M; Fleuret, F; Fokin, S L; Fraenkel, Z; Frantz, J E; Franz, A; Frawley, A D; Fujiwara, K; Fukao, Y; Fusayasu, T; Gal, C; Gallus, P; Garg, P; Garishvili, I; Ge, H; Giordano, F; Glenn, A; Gong, H; Gonin, M; Goto, Y; Granier de Cassagnac, R; Grau, N; Greene, S V; Grim, G; Grosse Perdekamp, M; Gunji, T; Gustafsson, H-Å; Hachiya, T; Haggerty, J S; Hahn, K I; Hamagaki, H; Hamblen, J; Hamilton, H F; Han, R; Han, S Y; Hanks, J; Hasegawa, S; Haseler, T O S; Hashimoto, K; Haslum, E; Hayano, R; He, X; Heffner, M; Hemmick, T K; Hester, T; Hill, J C; Hohlmann, M; Hollis, R S; Holzmann, W; Homma, K; Hong, B; Horaguchi, T; Hornback, D; Hoshino, T; Hotvedt, N; Huang, J; Huang, S; Ichihara, T; Ichimiya, R; Ikeda, Y; Imai, K; Inaba, M; 
Iordanova, A; Isenhower, D; Ishihara, M; Issah, M; Ivanishchev, D; Iwanaga, Y; Jacak, B V; Jezghani, M; Jia, J; Jiang, X; Jin, J; Johnson, B M; Jones, T; Joo, K S; Jouan, D; Jumper, D S; Kajihara, F; Kamin, J; Kanda, S; Kang, J H; Kapustinsky, J; Karatsu, K; Kasai, M; Kawall, D; Kawashima, M; Kazantsev, A V; Kempel, T; Key, J A; Khachatryan, V; Khanzadeev, A; Kijima, K M; Kikuchi, J; Kim, A; Kim, B I; Kim, C; Kim, D J; Kim, E-J; Kim, G W; Kim, M; Kim, Y-J; Kimelman, B; Kinney, E; Kiss, Á; Kistenev, E; Kitamura, R; Klatsky, J; Kleinjan, D; Kline, P; Koblesky, T; Kochenda, L; Komkov, B; Konno, M; Koster, J; Kotov, D; Král, A; Kravitz, A; Kunde, G J; Kurita, K; Kurosawa, M; Kwon, Y; Kyle, G S; Lacey, R; Lai, Y S; Lajoie, J G; Lebedev, A; Lee, D M; Lee, J; Lee, K B; Lee, K S; Lee, S; Lee, S H; Leitch, M J; Leite, M A L; Li, X; Lichtenwalner, P; Liebing, P; Lim, S H; Linden Levy, L A; Liška, T; Liu, H; Liu, M X; Love, B; Lynch, D; Maguire, C F; Makdisi, Y I; Makek, M; Malik, M D; Manion, A; Manko, V I; Mannel, E; Mao, Y; Masui, H; Matathias, F; McCumber, M; McGaughey, P L; McGlinchey, D; McKinney, C; Means, N; Meles, A; Mendoza, M; Meredith, B; Miake, Y; Mibe, T; Mignerey, A C; Miki, K; Milov, A; Mishra, D K; Mitchell, J T; Miyasaka, S; Mizuno, S; Mohanty, A K; Montuenga, P; Moon, H J; Moon, T; Morino, Y; Morreale, A; Morrison, D P; Moukhanova, T V; Murakami, T; Murata, J; Mwai, A; Nagamiya, S; Nagashima, K; Nagle, J L; Naglis, M; Nagy, M I; Nakagawa, I; Nakagomi, H; Nakamiya, Y; Nakamura, K R; Nakamura, T; Nakano, K; Nam, S; Nattrass, C; Netrakanti, P K; Newby, J; Nguyen, M; Nihashi, M; Niida, T; Nishimura, S; Nouicer, R; Novák, T; Novitzky, N; Nyanin, A S; Oakley, C; O'Brien, E; Oda, S X; Ogilvie, C A; Oka, M; Okada, K; Onuki, Y; Orjuela Koop, J D; Osborn, J D; Oskarsson, A; Ouchida, M; Ozawa, K; Pak, R; Pantuev, V; Papavassiliou, V; Park, I H; Park, J S; Park, S; Park, S K; Park, W J; Pate, S F; Patel, M; Pei, H; Peng, J-C; Pereira, H; Perepelitsa, D V; Perera, G D 
N; Peressounko, D Yu; Perry, J; Petti, R; Pinkenburg, C; Pinson, R; Pisani, R P; Proissl, M; Purschke, M L; Qu, H; Rak, J; Ramson, B J; Ravinovich, I; Read, K F; Rembeczki, S; Reygers, K; Reynolds, D; Riabov, V; Riabov, Y; Richardson, E; Rinn, T; Roach, D; Roche, G; Rolnick, S D; Rosati, M; Rosen, C A; Rosendahl, S S E; Rowan, Z; Rubin, J G; Ružička, P; Sahlmueller, B; Saito, N; Sakaguchi, T; Sakashita, K; Sako, H; Samsonov, V; Sano, S; Sarsour, M; Sato, S; Sato, T; Sawada, S; Schaefer, B; Schmoll, B K; Sedgwick, K; Seele, J; Seidl, R; Sen, A; Seto, R; Sett, P; Sexton, A; Sharma, D; Shein, I; Shibata, T-A; Shigaki, K; Shimomura, M; Shoji, K; Shukla, P; Sickles, A; Silva, C L; Silvermyr, D; Silvestre, C; Sim, K S; Singh, B K; Singh, C P; Singh, V; Slunečka, M; Snowball, M; Soltz, R A; Sondheim, W E; Sorensen, S P; Sourikova, I V; Stankus, P W; Stenlund, E; Stepanov, M; Stoll, S P; Sugitate, T; Sukhanov, A; Sumita, T; Sun, J; Sziklai, J; Takagui, E M; Taketani, A; Tanabe, R; Tanaka, Y; Taneja, S; Tanida, K; Tannenbaum, M J; Tarafdar, S; Taranenko, A; Themann, H; Thomas, D; Thomas, T L; Tieulent, R; Timilsina, A; Todoroki, T; Togawa, M; Toia, A; Tomášek, L; Tomášek, M; Torii, H; Towell, C L; Towell, R; Towell, R S; Tserruya, I; Tsuchimoto, Y; Vale, C; Valle, H; van Hecke, H W; Vazquez-Zambrano, E; Veicht, A; Velkovska, J; Vértesi, R; Virius, M; Vrba, V; Vznuzdaev, E; Wang, X R; Watanabe, D; Watanabe, K; Watanabe, Y; Watanabe, Y S; Wei, F; Wei, R; Wessels, J; White, A S; White, S N; Winter, D; Woody, C L; Wright, R M; Wysocki, M; Xia, B; Xue, L; Yalcin, S; Yamaguchi, Y L; Yamaura, K; Yang, R; Yanovich, A; Ying, J; Yokkaichi, S; Yoo, J H; Yoon, I; You, Z; Young, G R; Younus, I; Yu, H; Yushmanov, I E; Zajc, W A; Zelenski, A; Zhou, S; Zou, L

    2016-03-25

    Jet production rates are measured in p+p and d+Au collisions at sqrt[s_{NN}]=200 GeV recorded in 2008 with the PHENIX detector at the Relativistic Heavy Ion Collider. Jets are reconstructed using the R=0.3 anti-k_{t} algorithm from energy deposits in the electromagnetic calorimeter and charged tracks in multiwire proportional chambers, and the jet transverse momentum (p_{T}) spectra are corrected for the detector response. Spectra are reported for jets with 12<p_{T}<50 GeV/c, within a pseudorapidity acceptance of |η|<0.3. The nuclear-modification factor (R_{dAu}) values for 0%-100% d+Au events are found to be consistent with unity, constraining the role of initial-state effects on jet production. Nonetheless, the centrality-selected R_{dAu} values and central-to-peripheral ratios (R_{CP}) show large, p_{T}-dependent deviations from unity, challenging the conventional models that relate hard-process rates and soft-particle production in collisions involving nuclei.

  4. Centrality-Dependent Modification of Jet-Production Rates in Deuteron-Gold Collisions at √s_NN = 200 GeV

    DOE PAGES

    Adare, A.

    2016-03-24

    We measured jet production rates in p+p and d+Au collisions at √s_NN = 200 GeV recorded in 2008 with the PHENIX detector at the Relativistic Heavy Ion Collider. Jets are reconstructed using the R=0.3 anti-k_t algorithm from energy deposits in the electromagnetic calorimeter and charged tracks in multiwire proportional chambers, and the jet transverse momentum (p_T) spectra are corrected for the detector response. Spectra are reported for jets with 12 < p_T < 50 GeV/c, within a pseudorapidity acceptance of |η| < 0.3. The nuclear-modification factor (R_dAu) values for 0%-100% d+Au events are found to be consistent with unity, constraining the role of initial-state effects on jet production. Nonetheless, the centrality-selected R_dAu values and central-to-peripheral ratios (R_CP) show large, p_T-dependent deviations from unity, challenging the conventional models that relate hard-process rates and soft-particle production in collisions involving nuclei.

  5. Quantitative relations between risk, return and firm size

    NASA Astrophysics Data System (ADS)

    Podobnik, B.; Horvatic, D.; Petersen, A. M.; Stanley, H. E.

    2009-03-01

    We analyze, for a large set of stocks comprising four financial indices, the annual logarithmic growth rate R and the firm size, quantified by the market capitalization MC. For the Nasdaq Composite and the New York Stock Exchange Composite we find that the probability density functions of growth rates are Laplace ones in the broad central region, where the standard deviation σ(R), as a measure of risk, decreases with the MC as a power law σ(R) ~ (MC)^-β. For both the Nasdaq Composite and the S&P 500, we find that the average growth rate ⟨R⟩ decreases faster than σ(R) with MC, implying that the return-to-risk ratio ⟨R⟩/σ(R) also decreases with MC. For the S&P 500, ⟨R⟩ and ⟨R⟩/σ(R) also follow power laws. For a 20-year time horizon, for the Nasdaq Composite we find that σ(R) vs. MC exhibits a functional form called a volatility smile, while for the NYSE Composite we find power-law stability between σ(R) and MC.
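
    The σ(R) ~ (MC)^-β scaling above can be illustrated by binning firms by market capitalization, measuring the standard deviation of growth rates in each bin, and fitting a line in log-log space. A minimal sketch on synthetic Laplace-distributed growth rates with β = 0.25 built in; this is not real index data:

```python
# Recover a power-law exponent sigma(R) ~ MC^(-beta) from synthetic data:
# bin by log(MC), compute sigma(R) per bin, fit log(sigma) vs log(MC).
import numpy as np

rng = np.random.default_rng(0)
beta_true = 0.25
mc = 10 ** rng.uniform(6, 11, size=20000)            # market caps, $1M-$100B
sigma = 100.0 * mc ** (-beta_true)                   # risk shrinks with size
r = rng.laplace(loc=0.0, scale=sigma / np.sqrt(2))   # Laplace growth rates

bins = np.logspace(6, 11, 11)
idx = np.digitize(mc, bins)
centers, sds = [], []
for b in range(1, len(bins)):
    sel = idx == b
    if sel.sum() > 50:
        centers.append(np.sqrt(bins[b - 1] * bins[b]))  # geometric midpoint
        sds.append(r[sel].std())

slope, _ = np.polyfit(np.log(centers), np.log(sds), 1)
beta_est = -slope
print(f"estimated beta = {beta_est:.2f}")  # close to 0.25
```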

  6. Method of surface error visualization using laser 3D projection technology

    NASA Astrophysics Data System (ADS)

    Guo, Lili; Li, Lijuan; Lin, Xuezhu

    2017-10-01

    In the manufacturing of large components for the aerospace, automobile, and shipping industries, important molds or stamped metal plates require precise surface forming, which usually needs to be verified and, if necessary, the surface corrected and reprocessed. To make correction of the machined surface more convenient, this paper proposes a method based on a laser 3D projection system that, in the manner of terrain contour lines, displays the deviation between the actually measured data and the theoretical mathematical model (CAD) directly on the measured surface. First, the machined surface is measured to obtain point cloud data and form a triangular mesh. Second, through coordinate transformation, the point cloud data are registered to the theoretical model and the three-dimensional deviation is calculated; according to the sign (positive or negative) and magnitude of the deviation, color-coded deviation bands represent the three-dimensional deviation. Then, three-dimensional contour lines are drawn for each deviation band, creating the projection files. Finally, the projection files are imported into the laser projector, and the contour lines are projected onto the processed part at 1:1 scale in the form of a laser beam. By comparing the full-color 3D deviation map with the projected graph, deviations can be located and quantitatively corrected to meet the processing precision requirements. The method displays the trend of the machined-surface deviation clearly.
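
    The sign-and-magnitude banding step above can be sketched as follows. As a simplifying assumption, the nominal CAD surface is taken to be the plane z = 0, so the signed deviation of each measured point is just its z coordinate, and the band width is arbitrary:

```python
# Assign each measured point a signed deviation from a nominal surface
# and an integer band index (band 0 straddles zero deviation); in the
# projection workflow each band would get its own color/contour.
import numpy as np

def signed_deviation_bands(points, band_width=0.1):
    """points: (N, 3) array; nominal surface assumed to be the plane z = 0.

    Returns signed deviations and band indices: band 0 covers
    [-band_width/2, +band_width/2), band +1 the next range above, etc.
    """
    dev = points[:, 2]                               # signed distance to z=0
    bands = np.floor(dev / band_width + 0.5).astype(int)
    return dev, bands

pts = np.array([[0, 0, 0.02], [1, 0, 0.17], [0, 1, -0.31], [1, 1, -0.04]])
dev, bands = signed_deviation_bands(pts)
print(bands)  # [ 0  2 -3  0]
```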

  7. In vivo dosimetry for external photon treatments of head and neck cancers by diodes and TLDS.

    PubMed

    Tung, C J; Wang, H C; Lo, S H; Wu, J M; Wang, C J

    2004-01-01

    In vivo dosimetry was implemented for treatments of head and neck cancers in large fields. Diode and thermoluminescence dosemeter (TLD) measurements were carried out on linear accelerators with 6 MV photon beams. ESTRO in vivo dosimetry protocols were followed in the determination of midline doses from measurements of entrance and exit doses. Of the fields monitored by diodes, the maximum absolute deviation of measured midline doses from planned target doses was 8%, with a mean value and standard deviation of -1.0% and 2.7%. If planned target doses were calculated using radiological water-equivalent thicknesses rather than patient geometric thicknesses, the maximum absolute deviation dropped to 4%, with a mean and standard deviation of 0.7% and 1.8%. For in vivo dosimetry monitored by TLDs, the shift in mean dose remained small but the statistical precision became poor.
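
    The summary statistics reported above (mean, standard deviation, and maximum absolute percentage deviation of measured midline dose from planned dose) can be computed as below. The dose values are illustrative numbers, not the study's measurements:

```python
# Percentage deviations of measured midline doses from planned target
# doses, summarized as in the abstract. All doses are illustrative (cGy).
import statistics

measured = [196.1, 203.4, 198.0, 188.9, 201.5]
planned  = [200.0, 200.0, 200.0, 196.0, 200.0]

dev_pct = [100 * (m - p) / p for m, p in zip(measured, planned)]
mean = statistics.mean(dev_pct)
sd = statistics.stdev(dev_pct)
worst = max(abs(d) for d in dev_pct)
print(f"mean {mean:+.1f}%, SD {sd:.1f}%, max |deviation| {worst:.1f}%")
```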

  8. Visual space under free viewing conditions.

    PubMed

    Doumen, Michelle J A; Kappers, Astrid M L; Koenderink, Jan J

    2005-10-01

    Most research on visual space has been done under restricted viewing conditions and in reduced environments. In our experiments, observers performed an exocentric pointing task, a collinearity task, and a parallelity task in an entirely visible room. We varied the relative distances between the objects and the observer and the separation angle between the two objects. We were able to compare our data directly with data from experiments in an environment with less monocular depth information present. We expected that in a richer environment and under less restrictive viewing conditions, the settings would deviate less from the veridical settings. However, large systematic deviations from veridical settings were found for all three tasks. The structure of these deviations was task dependent, and the structure and the deviations themselves were comparable to those obtained under more restricted circumstances. Thus, the additional information was not used effectively by the observers.

  9. Effect of drivers' age and push button locations on visual time off road, steering wheel deviation and safety perception.

    PubMed

    Dukic, T; Hanson, L; Falkmer, T

    2006-01-15

    The study examined the effects of manual control locations on two groups of randomly selected young and old drivers in relation to visual time off road, steering wheel deviation and safety perception. Measures of visual time off road, steering wheel deviations and safety perception were performed with young and old drivers during real traffic. The results showed an effect of both driver's age and button location on the dependent variables. Older drivers spent longer visual time off road when pushing the buttons and had larger steering wheel deviations. Moreover, the greater the eccentricity between the normal line of sight and the button locations, the longer the visual time off road and the larger the steering wheel deviations. No interaction effect between button location and age was found with regard to visual time off road. Button location had an effect on perceived safety: the further away from the normal line of sight the lower the rating.

  10. Truncated Linear Statistics Associated with the Eigenvalues of Random Matrices II. Partial Sums over Proper Time Delays for Chaotic Quantum Dots

    NASA Astrophysics Data System (ADS)

    Grabsch, Aurélien; Majumdar, Satya N.; Texier, Christophe

    2017-06-01

    Invariant ensembles of random matrices are characterized by the distribution of their eigenvalues {λ_1, …, λ_N}. We study the distribution of truncated linear statistics of the form L̃ = Σ_{i=1}^p f(λ_i) with p < N.

  11. Internal fixators: a safe option for managing distal femur fractures?

    PubMed Central

    Batista, Bruno Bellaguarda; Salim, Rodrigo; Paccola, Cleber Antonio Jansen; Kfuri, Mauricio

    2014-01-01

    OBJECTIVE: To evaluate the safety and reliability of internal fixators for the treatment of intra-articular and periarticular distal femur fractures. METHODS: Retrospective data evaluation of 28 patients with 29 fractures fixed with internal fixators was performed. There was a predominance of male patients (53.5%), with 52% open wound fractures, 76% AO33C type fractures, and a mean follow-up of 21.3 months. Time of fracture healing, mechanical axis deviation, rate of infection and postoperative complications were registered. RESULTS: The healing rate was 93% in this sample, with an average time of 5.5 months. Twenty-seven percent of patients ended up with mechanical axis deviation, mostly resulting from poor primary intra-operative reduction. There were two cases of implant loosening and two of implant breakage, and three patients presented with a stiff knee. No case of infection was observed. The healing rate in this study was comparable with the current literature; there was a high degree of angular deviation, especially in the coronal plane. CONCLUSION: Internal fixators are a breakthrough in the treatment of knee fractures, but their use does not preclude application of the principles of anatomical articular reduction and mechanical axis restoration. Level of Evidence II, Retrospective Study. PMID:25061424

  12. Estimation of Blood Flow Rates in Large Microvascular Networks

    PubMed Central

    Fry, Brendan C.; Lee, Jack; Smith, Nicolas P.; Secomb, Timothy W.

    2012-01-01

    Objective Recent methods for imaging microvascular structures provide geometrical data on networks containing thousands of segments. Prediction of functional properties, such as solute transport, requires information on blood flow rates also, but experimental measurement of many individual flows is difficult. Here, a method is presented for estimating flow rates in a microvascular network based on incomplete information on the flows in the boundary segments that feed and drain the network. Methods With incomplete boundary data, the equations governing blood flow form an underdetermined linear system. An algorithm was developed that uses independent information about the distribution of wall shear stresses and pressures in microvessels to resolve this indeterminacy, by minimizing the deviation of pressures and wall shear stresses from target values. Results The algorithm was tested using previously obtained experimental flow data from four microvascular networks in the rat mesentery. With two or three prescribed boundary conditions, predicted flows showed relatively small errors in most segments and fewer than 10% incorrect flow directions on average. Conclusions The proposed method can be used to estimate flow rates in microvascular networks, based on incomplete boundary data and provides a basis for deducing functional properties of microvessel networks. PMID:22506980
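
    The reconstruction idea above, hard mass-conservation constraints combined with soft terms that pull the solution toward target values (standing in for the wall-shear-stress and pressure terms of the actual algorithm), can be sketched as a weighted least-squares problem on a toy 4-segment network. All numbers are illustrative, and this is a simplification of the authors' method, not a reimplementation:

```python
# Estimate segment flows in an underdetermined network: conservation and
# the one known boundary flow are enforced via a large weight, while a
# soft identity block pulls each flow toward a target value.
import numpy as np

# Segments: 0 feeds node A; node A splits into segments 1 and 2; both
# rejoin at node B, which is drained by segment 3. Only the inflow of
# segment 0 is measured; segment 3's outflow is unknown boundary data.
n_seg = 4
conservation = np.array([
    [1, -1, -1, 0],   # node A: inflow 0 = outflows 1 + 2
    [0, 1, 1, -1],    # node B: inflows 1 + 2 = outflow 3
], dtype=float)
known_boundary = np.zeros((1, n_seg))
known_boundary[0, 0] = 1.0
b_known = np.array([10.0])             # measured inflow in segment 0

targets = np.full(n_seg, 4.0)          # target flow in every segment

w = 1e6                                # enforce hard constraints strongly
A = np.vstack([w * conservation, w * known_boundary, np.eye(n_seg)])
b = np.concatenate([w * np.zeros(2), w * b_known, targets])
flows, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(flows, 3))  # segments 1 and 2 split the known inflow evenly
```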

  13. Longitudinal and cross-sectional analyses of visual field progression in participants of the Ocular Hypertension Treatment Study.

    PubMed

    Artes, Paul H; Chauhan, Balwantray C; Keltner, John L; Cello, Kim E; Johnson, Chris A; Anderson, Douglas R; Gordon, Mae O; Kass, Michael A

    2010-12-01

    To assess agreement between longitudinal and cross-sectional analyses for determining visual field progression in data from the Ocular Hypertension Treatment Study. Visual field data from 3088 eyes of 1570 participants (median follow-up, 7 years) were analyzed. Longitudinal analyses were performed using change probability with total and pattern deviation, and cross-sectional analyses were performed using the glaucoma hemifield test, corrected pattern standard deviation, and mean deviation. The rates of mean deviation and general height change were compared to estimate the degree of diffuse loss in emerging glaucoma. Agreement on progression in longitudinal and cross-sectional analyses ranged from 50% to 61% and remained nearly constant across a wide range of criteria. In contrast, agreement on absence of progression ranged from 97.0% to 99.7%, being highest for the stricter criteria. Analyses of pattern deviation were more conservative than analyses of total deviation, with a 3 to 5 times lesser incidence of progression. Most participants developing field loss had both diffuse and focal changes. Despite considerable overall agreement, 40% to 50% of eyes identified as having progressed with either longitudinal or cross-sectional analyses were identified with only one of the analyses. Because diffuse change is part of early glaucomatous damage, pattern deviation analyses may underestimate progression in patients with ocular hypertension.

  14. Rapidly rotating neutron stars with a massive scalar field—structure and universal relations

    NASA Astrophysics Data System (ADS)

    Doneva, Daniela D.; Yazadjiev, Stoytcho S.

    2016-11-01

    We construct rapidly rotating neutron star models in scalar-tensor theories with a massive scalar field. The fact that the scalar field has nonzero mass leads to very interesting results, since the allowed range of values of the coupling parameters is significantly broadened. Deviations from pure general relativity can be very large for values of the parameters that are in agreement with the observations. We find that rapid rotation can magnify the differences several times compared to the static case. The universal relations between the normalized moment of inertia and quadrupole moment are also investigated both for the slowly and rapidly rotating cases. The results show that these relations are still EOS-independent to a large extent and that the deviations from pure general relativity can be large. This places the massive scalar-tensor theories amongst the few alternative theories of gravity that can be tested via the universal I-Love-Q relations.

  15. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...
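
    The numeric criterion above (a donation rate no more than 1.5 standard deviations below the national mean) can be sketched as a simple check. The function name and example figures below are illustrative only; the actual rule involves multi-year averaging and adjustments for donation-after-cardiac-death donors, which this sketch omits:

```python
def meets_outcome_measure(opo_rate, national_mean, national_sd, z_threshold=1.5):
    """Illustrative check of the 42 CFR 486.318 criterion: a donation rate
    must be no more than z_threshold standard deviations below the national
    mean. (The real rule also averages over years and adjusts the rate ratio
    for donation-after-cardiac-death donors.)"""
    return opo_rate >= national_mean - z_threshold * national_sd

# Hypothetical figures: national mean 70%, SD 8%, so the floor is 58%.
print(meets_outcome_measure(0.62, 0.70, 0.08))  # 62% is above the floor
print(meets_outcome_measure(0.55, 0.70, 0.08))  # 55% falls below it
```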

  16. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...

  17. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...

  18. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...

  19. Chained Kullback-Leibler Divergences

    PubMed Central

    Pavlichin, Dmitri S.; Weissman, Tsachy

    2017-01-01

    We define and characterize the “chained” Kullback-Leibler divergence min_w [D(p‖w) + D(w‖q)], minimized over all intermediate distributions w, and the analogous k-fold chained K-L divergence min_{w_1,…,w_{k−1}} [D(p‖w_{k−1}) + … + D(w_2‖w_1) + D(w_1‖q)], minimized over the entire path (w_1,…,w_{k−1}). This quantity arises in a large deviations analysis of a Markov chain on the set of types – the Wright-Fisher model of neutral genetic drift: a population with allele distribution q produces offspring with allele distribution w, which then produce offspring with allele distribution p, and so on. The chained divergences enjoy some of the same properties as the K-L divergence (like joint convexity in the arguments) and appear in k-step versions of some of the same settings as the K-L divergence (like information projections and a conditional limit theorem). We further characterize the optimal k-step “path” of distributions appearing in the definition and apply our findings in a large deviations analysis of the Wright-Fisher process. We make a connection to information geometry via the previously studied continuum limit, where the number of steps tends to infinity, and the limiting path is a geodesic in the Fisher information metric. Finally, we offer a thermodynamic interpretation of the chained divergence (as the rate of operation of an appropriately defined Maxwell’s demon) and we state some natural extensions and applications (a k-step mutual information and k-step maximum likelihood inference). We release code for computing the objects we study. PMID:29130024
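
    For two-point (Bernoulli) distributions the chained divergence can be evaluated by brute force, which makes one of its basic properties easy to check: since w = p is a feasible intermediate, min_w [D(p‖w) + D(w‖q)] never exceeds D(p‖q). A minimal grid-search sketch (the paper treats general distributions and releases its own code):

```python
import math

def kl(p, q):
    """D(p‖q) between Bernoulli(p) and Bernoulli(q), in nats."""
    def term(a, b):
        return 0.0 if a == 0.0 else a * math.log(a / b)
    return term(p, q) + term(1.0 - p, 1.0 - q)

def chained_kl(p, q, grid=10000):
    """min_w [D(p‖w) + D(w‖q)] over Bernoulli w, by grid search."""
    return min(kl(p, i / grid) + kl(i / grid, q) for i in range(1, grid))

print(chained_kl(0.8, 0.3))  # strictly between 0 and D(p‖q)
print(kl(0.8, 0.3))
```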

  20. Temperature dependence of blood viscosity in frogs and turtles: effect on heat exchange with environment.

    PubMed

    Langille, B L; Crisp, B

    1980-09-01

    The temperature dependence of the viscosity of blood from frogs and turtles has been assessed for temperatures between 5 and 40 degrees C. Viscosity of turtles' blood was, on average, reduced from 3.50 +/- 0.16 to 2.13 +/- 0.10 cP between 10 and 30 degrees C, a decline of 39%. Even larger changes in viscosity were observed for frogs' blood with viscosity falling from 4.55 +/- 0.32 to 2.55 +/- 0.25 cP over the same temperature range, a change of 44%. Blood viscosity was highly correlated with hematocrit in both species at all temperatures. Viscosity of blood from both frogs and turtles showed a large standard deviation at all temperatures and this was attributed to large individual-to-individual variations in hematocrit. Turtles heat faster than they cool, regardless of whether tests are performed at temperatures above or below the range of thermal preference. The effect of temperature dependence of blood viscosity on heating and cooling rates is demonstrated.
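
    The quoted percentage declines follow directly from the reported mean viscosities; a one-line check:

```python
def percent_decline(v_cold, v_warm):
    """Percentage drop in blood viscosity (cP) between 10 and 30 degrees C."""
    return 100.0 * (v_cold - v_warm) / v_cold

print(round(percent_decline(3.50, 2.13)))  # turtle blood: 39
print(round(percent_decline(4.55, 2.55)))  # frog blood: 44
```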

  1. Access to enhanced differences in Marcus-Hush and Butler-Volmer electron transfer theories by systematic analysis of higher order AC harmonics.

    PubMed

    Stevenson, Gareth P; Baker, Ruth E; Kennedy, Gareth F; Bond, Alan M; Gavaghan, David J; Gillow, Kathryn

    2013-02-14

    The potential-dependences of the rate constants associated with heterogeneous electron transfer predicted by the empirically based Butler-Volmer and fundamentally based Marcus-Hush formalisms are well documented for dc cyclic voltammetry. However, the differences are often subtle, so, presumably on the basis of simplicity, the Butler-Volmer method is generally employed in theoretical-experimental comparisons. In this study, the ability of Large Amplitude Fourier Transform AC Cyclic Voltammetry to distinguish the difference in behaviour predicted by the two formalisms has been investigated. The focus of this investigation is on the difference in the profiles of the first to sixth harmonics, which are readily accessible when a large amplitude of the applied ac potential is employed. In particular, it is demonstrated that systematic analysis of the higher order harmonic responses in suitable kinetic regimes allows predicted deviations of Marcus-Hush from Butler-Volmer behaviour to be established from a single experiment under conditions where the background charging current is minimal.

  2. Methodenvergleich zur Bestimmung der hydraulischen Durchlässigkeit [Method comparison for determining hydraulic conductivity]

    NASA Astrophysics Data System (ADS)

    Storz, Katharina; Steger, Hagen; Wagner, Valentin; Bayer, Peter; Blum, Philipp

    2017-06-01

    Knowing the hydraulic conductivity (K) is a precondition for understanding groundwater flow processes in the subsurface. Numerous laboratory and field methods for the determination of hydraulic conductivity exist, which can lead to significantly different results. In order to quantify the variability of these various methods, the hydraulic conductivity was examined for an industrial silica sand (Dorsilit) using four different methods: (1) grain-size analysis, (2) Kozeny-Carman approach, (3) permeameter tests and (4) flow rate experiments in large-scale tank experiments. Due to the large volume of the artificially built aquifer, the tank experiment results are assumed to be the most representative. Hydraulic conductivity values derived from permeameter tests show only minor deviation, while results of the empirically evaluated grain-size analysis are about one order of magnitude higher and show great variance. The latter was confirmed by analysis of several methods for determining K-values found in the literature; we therefore generally question the suitability of grain-size analyses and strongly recommend the use of permeameter tests.
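
    The Kozeny-Carman approach mentioned in (2) ties hydraulic conductivity to grain diameter and porosity. A common form is K = (ρg/μ) · d²φ³ / (180(1−φ)²); note that the empirical constant 180 and the water properties used below are assumptions of this sketch, as several variants exist in the literature:

```python
def kozeny_carman_K(d, phi, rho=998.0, g=9.81, mu=1.0e-3):
    """Hydraulic conductivity K (m/s) from grain diameter d (m) and porosity
    phi, via a common Kozeny-Carman form; rho (kg/m^3), g (m/s^2) and
    mu (Pa*s) are water properties at roughly 20 degrees C."""
    return (rho * g / mu) * d**2 * phi**3 / (180.0 * (1.0 - phi)**2)

# Medium sand, d = 0.5 mm, porosity 0.35: K on the order of 1e-3 m/s
print(kozeny_carman_K(0.5e-3, 0.35))
```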

  3. Does a web-based feedback training program result in improved reliability in clinicians' ratings of the Global Assessment of Functioning (GAF) Scale?

    PubMed

    Støre-Valen, Jakob; Ryum, Truls; Pedersen, Geir A F; Pripp, Are H; Jose, Paul E; Karterud, Sigmund

    2015-09-01

    The Global Assessment of Functioning (GAF) Scale is used in routine clinical practice and research to estimate symptom and functional severity and longitudinal change. Concerns about poor interrater reliability have been raised, and the present study evaluated the effect of a Web-based GAF training program designed to improve interrater reliability in routine clinical practice. Clinicians rated up to 20 vignettes online, and received deviation scores as immediate feedback (i.e., own scores compared with expert raters) after each rating. Growth curves of absolute SD scores across the vignettes were modeled. A linear mixed effects model, using the clinician's deviation scores from expert raters as the dependent variable, indicated an improvement in reliability during training. Moderation by content of scale (symptoms; functioning), scale range (average; extreme), previous experience with GAF rating, profession, and postgraduate training were assessed. Training reduced deviation scores for inexperienced GAF raters, for individuals in clinical professions other than nursing and medicine, and for individuals with no postgraduate specialization. In addition, training was most beneficial for cases with average severity of symptoms compared with cases with extreme severity. The results support the use of Web-based training with feedback routines as a means to improve the reliability of GAF ratings performed by clinicians in mental health practice. These results especially pertain to clinicians in mental health practice who do not have a masters or doctoral degree. (c) 2015 APA, all rights reserved.

  4. Moderate Deviation Analysis for Classical Communication over Quantum Channels

    NASA Astrophysics Data System (ADS)

    Chubb, Christopher T.; Tan, Vincent Y. F.; Tomamichel, Marco

    2017-11-01

    We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes we introduce a quantum generalisation of the moderate deviation analysis proposed by Altŭg and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji

    Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This is an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with the Maxwell-Boltzmann statistics.

  6. Finite-Size Scaling of a First-Order Dynamical Phase Transition: Adaptive Population Dynamics and an Effective Model

    NASA Astrophysics Data System (ADS)

    Nemoto, Takahiro; Jack, Robert L.; Lecomte, Vivien

    2017-03-01

    We analyze large deviations of the time-averaged activity in the one-dimensional Fredrickson-Andersen model, both numerically and analytically. The model exhibits a dynamical phase transition, which appears as a singularity in the large deviation function. We analyze the finite-size scaling of this phase transition numerically, by generalizing an existing cloning algorithm to include a multicanonical feedback control: this significantly improves the computational efficiency. Motivated by these numerical results, we formulate an effective theory for the model in the vicinity of the phase transition, which accounts quantitatively for the observed behavior. We discuss potential applications of the numerical method and the effective theory in a range of more general contexts.

  7. On Deviations between Observed and Theoretically Estimated Values on Additivity-Law Failures

    NASA Astrophysics Data System (ADS)

    Nayatani, Yoshinobu; Sobagaki, Hiroaki

    The authors have reported in previous studies that the average observed results are about half of the corresponding predictions in experiments with large additivity-law failures. One reason for these deviations is studied and clarified here using the original observed data on additivity-law failures in the Nakano experiment. The observations and their analyses show that it is essentially difficult to obtain good agreement between the average observed results and the corresponding theoretical predictions in experiments with large additivity-law failures. This is caused by a kind of unavoidable psychological pressure on the subjects who participated in the experiments. We should be satisfied with agreement in trend between them.

  8. Next Generation Quality: Assessing the Physician in Clinical History Completeness and Diagnostic Interpretations Using Funnel Plots and Normalized Deviations Plots in 3,854 Prostate Biopsies.

    PubMed

    Bonert, Michael; El-Shinnawy, Ihab; Carvalho, Michael; Williams, Phillip; Salama, Samih; Tang, Damu; Kapoor, Anil

    2017-01-01

    Observational data and funnel plots are routinely used outside of pathology to understand trends and improve performance. We aimed to extract diagnostic rate (DR) information from free-text surgical pathology reports with synoptic elements and to assess whether inter-rater variation and clinical history completeness information useful for continuous quality improvement (CQI) can be obtained. All in-house prostate biopsies in a 6-year period at two large teaching hospitals were extracted and then diagnostically categorized using string matching, fuzzy string matching, and hierarchical pruning. DRs were then stratified by the submitting physicians and pathologists. Funnel plots were created to assess for diagnostic bias. 3,854 prostate biopsies were found and all could be diagnostically classified. Two audits involving the review of 700 reports and a comparison of the synoptic elements with the free-text interpretations suggest a categorization error rate of <1%. Twenty-seven pathologists each read >40 cases and together assessed 3,690 biopsies. There was considerable inter-rater variability and a trend toward more World Health Organization/International Society of Urological Pathology Grade 1 cancers among older pathologists. Normalized deviations plots, constructed using the median DR and standard error, can elucidate associated over- and under-calls for an individual pathologist in relation to their practice group. Clinical history completeness by submitting medical doctor varied significantly (100% to 22%). Free-text data analyses have some limitations; however, they could be used for data-driven CQI in anatomical pathology and could lead to the next generation in quality of care.
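
    Funnel plots of the kind used here typically place control limits around the group diagnostic rate that narrow as a pathologist's case volume grows. A minimal sketch using the binomial normal approximation (an assumption; the paper's exact construction may differ):

```python
import math

def funnel_limits(p_bar, n, z=1.96):
    """Approximate 95% funnel-plot limits for a diagnostic rate around the
    group proportion p_bar at case volume n (normal approximation to the
    binomial)."""
    hw = z * math.sqrt(p_bar * (1.0 - p_bar) / n)
    return max(0.0, p_bar - hw), min(1.0, p_bar + hw)

print(funnel_limits(0.25, 40))   # wide limits for a low-volume rater
print(funnel_limits(0.25, 400))  # narrower limits at higher volume
```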

  9. Autonomic regulation in fetuses with Congenital Heart Disease

    PubMed Central

    Siddiqui, Saira; Wilpers, Abigail; Myers, Michael; Nugent, J. David; Fifer, William P.; Williams, Ismée A.

    2015-01-01

    Background Exposure to antenatal stressors affects autonomic regulation in fetuses. Whether the presence of congenital heart disease (CHD) alters the developmental trajectory of autonomic regulation is not known. Aims/Study Design This prospective observational cohort study aimed to further characterize autonomic regulation in fetuses with CHD; specifically hypoplastic left heart syndrome (HLHS), transposition of the great arteries (TGA), and tetralogy of Fallot (TOF). Subjects From 11/2010 – 11/2012, 92 fetuses were enrolled: 41 controls and 51 with CHD consisting of 19 with HLHS, 12 with TGA, and 20 with TOF. Maternal abdominal fetal electrocardiogram (ECG) recordings were obtained at 3 gestational ages: 19-27 weeks (F1), 28-33 weeks (F2), and 34-38 weeks (F3). Outcome measures Fetal ECG was analyzed for mean heart rate along with 3 measures of autonomic variability of the fetal heart rate: interquartile range, standard deviation, and root mean square of the standard deviation of the heart rate (RMSSD), a measure of parasympathetic activity. Results During F1 and F2 periods, HLHS fetuses demonstrated significantly lower mean HR than controls (p<0.05). Heart rate variability at F3, as measured by standard deviation, interquartile range, and RMSSD was lower in HLHS than controls (p<0.05). Other CHD subgroups showed a similar, though non-significant trend towards lower variability. Conclusions Autonomic regulation in CHD fetuses differs from controls with HLHS fetuses most markedly affected. PMID:25662702

  10. An approximate fluvial equilibrium topography for the Alps

    NASA Astrophysics Data System (ADS)

    Stüwe, K.; Hergarten, S.

    2012-04-01

    This contribution addresses the question whether the present topography of the Alps can be approximated by a fluvial equilibrium topography and whether this can be used to determine uplift rates. Based on a statistical analysis of the present topography we use a stream-power approach for erosion where the erosion rate is proportional to the square root of the catchment size for catchment sizes larger than 12 square kilometers and a logarithmic dependence to mimic slope processes at smaller catchment sizes. If we assume a homogeneous uplift rate over the entire region (block uplift), the best-fit fluvial equilibrium topography differs from the real topography by about 500 m RMS (root mean square) with a strong systematic deviation. Regions of low elevation are too high in the equilibrium topography, while high-mountain regions are too low. The RMS difference significantly decreases if a spatially variable uplift function is allowed. If a strong variation of the uplift rate on a scale of 5 km is allowed, the systematic deviation becomes rather small, and the RMS difference decreases to about 150 m. A significant part of the remaining deviation apparently arises from glacially-shaped valleys, while another part may result from prematurity of the relief (Hergarten, Wagner & Stüwe, EPSL 297:453, 2010). The best-fit uplift function can probably be used for forward or backward simulation of the landform evolution.

  11. Autonomic regulation in fetuses with congenital heart disease.

    PubMed

    Siddiqui, Saira; Wilpers, Abigail; Myers, Michael; Nugent, J David; Fifer, William P; Williams, Ismée A

    2015-03-01

    Exposure to antenatal stressors affects autonomic regulation in fetuses. Whether the presence of congenital heart disease (CHD) alters the developmental trajectory of autonomic regulation is not known. This prospective observational cohort study aimed to further characterize autonomic regulation in fetuses with CHD; specifically hypoplastic left heart syndrome (HLHS), transposition of the great arteries (TGA), and tetralogy of Fallot (TOF). From 11/2010 to 11/2012, 92 fetuses were enrolled: 41 controls and 51 with CHD consisting of 19 with HLHS, 12 with TGA, and 20 with TOF. Maternal abdominal fetal electrocardiogram (ECG) recordings were obtained at 3 gestational ages: 19-27 weeks (F1), 28-33 weeks (F2), and 34-38 weeks (F3). Fetal ECG was analyzed for mean heart rate along with 3 measures of autonomic variability of the fetal heart rate: interquartile range, standard deviation, and root mean square of the standard deviation of the heart rate (RMSSD), a measure of parasympathetic activity. During F1 and F2 periods, HLHS fetuses demonstrated significantly lower mean HR than controls (p<0.05). Heart rate variability at F3, as measured by standard deviation, interquartile range, and RMSSD was lower in HLHS than controls (p<0.05). Other CHD subgroups showed a similar, though non-significant trend towards lower variability. Autonomic regulation in CHD fetuses differs from controls, with HLHS fetuses most markedly affected. Copyright © 2015 Elsevier Ltd. All rights reserved.
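
    RMSSD is conventionally computed as the root mean square of successive differences of interbeat (RR) intervals. A minimal sketch with made-up intervals:

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms),
    the standard parasympathetic (vagal) index of heart rate variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

print(round(rmssd([800, 810, 790, 805]), 2))  # 15.55
```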

  12. Perception of midline deviations in smile esthetics by laypersons

    PubMed Central

    Ferreira, Jamille Barros; da Silva, Licínio Esmeraldo; Caetano, Márcia Tereza de Oliveira; da Motta, Andrea Fonseca Jardim; Cury-Saramago, Adriana de Alcantara; Mucha, José Nelson

    2016-01-01

    ABSTRACT Objective: To evaluate the esthetic perception of upper dental midline deviation by laypersons and whether adjacent structures influence their judgment. Methods: An album with 12 randomly distributed frontal view photographs of the smile of a woman with the midline digitally deviated was evaluated by 95 laypersons. The frontal view smiling photograph was modified to create deviations of 1 mm to 5 mm in the upper midline to the left side. The photographs were cropped in two different manners and divided into two groups of six photographs each: group LCN included the lips, chin, and two-thirds of the nose, and group L included the lips only. The laypersons rated each smile using a visual analog scale (VAS). Wilcoxon test, Student’s t-test and Mann-Whitney test were applied, adopting a 5% level of significance. Results: Laypersons were able to perceive midline deviations starting at 1 mm. Statistically significant results (p< 0.05) were found for all multiple comparisons of the values in photographs of group LCN and for almost all comparisons in photographs of group L. Comparisons between the photographs of groups LCN and L showed statistically significant values (p< 0.05) when the deviation was 1 mm. Conclusions: Laypersons were able to perceive upper dental midline deviations of 1 mm and above when the adjacent structures of the smile were included, and of 2 mm and above when only the lips were included. The visualization of structures adjacent to the smile influenced the perception of midline deviation. PMID:28125140

  13. Some limit theorems for ratios of order statistics from uniform random variables.

    PubMed

    Xu, Shou-Fang; Miao, Yu

    2017-01-01

    In this paper, we study the ratios of order statistics based on samples drawn from uniform distribution and establish some limit properties such as the almost sure central limit theorem, the large deviation principle, the Marcinkiewicz-Zygmund law of large numbers and complete convergence.
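
    A quick Monte Carlo illustrates the flavor of these results: the ratio of the two largest order statistics of n uniforms concentrates near 1 as n grows. The specific limit theorems are in the paper; this sketch only shows the law-of-large-numbers behavior:

```python
import random

random.seed(0)

def mean_ratio(n, trials=2000):
    """Average of U_(n-1)/U_(n) for n i.i.d. Uniform(0,1) draws; the ratio
    is distributed Beta(n-1, 1), so its expectation is (n-1)/n."""
    total = 0.0
    for _ in range(trials):
        u = sorted(random.random() for _ in range(n))
        total += u[-2] / u[-1]
    return total / trials

print(mean_ratio(10))   # near 0.9
print(mean_ratio(100))  # near 0.99
```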

  14. Exclusion Process with Slow Boundary

    NASA Astrophysics Data System (ADS)

    Baldasso, Rangel; Menezes, Otávio; Neumann, Adriana; Souza, Rafael R.

    2017-06-01

    We study the hydrodynamic and the hydrostatic behavior of the simple symmetric exclusion process with slow boundary. The term slow boundary means that particles can be born or die at the boundary sites, at a rate proportional to N^(−θ), where θ > 0 and N is the scaling parameter. In the bulk, the particle exchange rate is equal to 1. In the hydrostatic scenario, we obtain three different linear profiles, depending on the value of the parameter θ; in the hydrodynamic scenario, we obtain that the time evolution of the spatial density of particles, in the diffusive scaling, is given by the weak solution of the heat equation, with boundary conditions that depend on θ. If θ ∈ (0,1), we get Dirichlet boundary conditions (which is the same behavior as θ = 0; see Farfán in Hydrostatics, statical and dynamical large deviations of boundary driven gradient symmetric exclusion processes, 2008); if θ = 1, we get Robin boundary conditions; and, if θ ∈ (1,∞), we get Neumann boundary conditions.

  15. A building-block approach to 3D printing a multichannel, organ-regenerative scaffold.

    PubMed

    Wang, Xiaohong; Rijff, Boaz Lloyd; Khang, Gilson

    2017-05-01

    Multichannel scaffolds, formed by rapid prototyping technologies, hold high potential for regenerative medicine and the manufacture of complex organs. This study aims to optimize several parameters for producing poly(lactic-co-glycolic acid) (PLGA) scaffolds by a low-temperature, deposition manufacturing, three-dimensional printing (3DP, or rapid prototyping) system. Concentration of the synthetic polymer solution, nozzle speed and extrusion rate were analysed and discussed. A polymer solution concentration of 12% w/v was determined to be optimal for formation; large deviations from this value failed to maintain the desired structure. The extrusion rate was also modified for better construct quality. Finally, several solid organ scaffolds, such as the liver, with proper wall thickness and intact contour were printed. This study provides basic instruction for designing and fabricating scaffolds with de novo material systems, particularly by showing the approximation of variables for manufacturing multichannel PLGA scaffolds. Copyright © 2015 John Wiley & Sons, Ltd.

  16. Radiotherapy quality assurance report from children's oncology group AHOD0031

    PubMed Central

    Dharmarajan, Kavita V.; Friedman, Debra L.; FitzGerald, T.J.; McCarten, Kathleen M.; Constine, Louis S.; Chen, Lu; Kessel, Sandy K.; Iandoli, Matt; Laurie, Fran; Schwartz, Cindy L.; Wolden, Suzanne L.

    2016-01-01

    Purpose A phase III trial assessing response-based therapy in intermediate-risk Hodgkin lymphoma mandated real-time central review of involved-field radiotherapy (IFRT) and imaging records by a centralized review center to maximize protocol compliance. We report the impact of centralized radiotherapy review upon protocol compliance. Methods Review of simulation films, port films, and dosimetry records was required pre-treatment and after treatment completion. Records were reviewed by study-affiliated or review center-affiliated radiation oncologists. A 6–10% deviation from the protocol-specified dose was scored as “minor”; >10% was “major”. A volume deviation was scored as “minor” if margins were less than specified, or “major” if fields transected disease-bearing areas. Interventional review and final compliance review scores were assigned to each radiotherapy case and compared. Results Of 1712 patients enrolled, 1173 underwent IFRT at 256 institutions in 7 countries. An interventional review was performed in 88% and a final review in 98%. Overall, minor and major deviations were found in 12% and 6%, respectively. Among the cases for which ≥ 1 pre-IFRT modification was requested by the Quality Assurance Review Center (QARC) and subsequently made by the treating institution, 100% were made compliant on final review. In contrast, among the cases for which ≥ 1 modification was requested but not made by the treating institution, 10% were deemed compliant on final review. Conclusion In a large trial with complex treatment pathways and heterogeneous radiotherapy fields, central review was performed in a large percentage of cases pre-IFRT and identified frequent potential deviations in a timely manner. When suggested modifications were performed by the institutions, deviations were almost eliminated. PMID:25670539
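
    The dose-deviation rubric ("minor" for 6-10% off the protocol-specified dose, "major" for >10%) is a straightforward classification; a sketch with illustrative doses:

```python
def score_dose_deviation(delivered_gy, prescribed_gy):
    """Score a dose deviation per the trial's QA rubric: 6-10% from the
    protocol-specified dose is 'minor', more than 10% is 'major'."""
    pct = abs(delivered_gy - prescribed_gy) / prescribed_gy * 100.0
    if pct > 10.0:
        return "major"
    if pct >= 6.0:
        return "minor"
    return "none"

print(score_dose_deviation(22.5, 21.0))  # ~7.1% off -> minor
print(score_dose_deviation(24.0, 21.0))  # ~14.3% off -> major
```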

  17. Ultrasonographic characterization of follicle deviation in follicular waves with single dominant and codominant follicles in dromedary camels (Camelus dromedarius).

    PubMed

    Manjunatha, B M; Al-Bulushi, S; Pratap, N

    2014-04-01

    Follicular wave emergence was synchronized by treating camels with GnRH when a dominant follicle (DF) was present in the ovaries. Animals were scanned twice a day from day 0 (day of GnRH treatment) to day 10, to characterize emergence and deviation of follicles during the development of the follicular wave. Follicle deviation in individual animals was determined by graphical method. Single DFs were found in 16, double DFs in 9 and triple DFs in two camels. The incidence of codominant (double and triple DFs) follicles was 41%. The interval from GnRH treatment to wave emergence, wave emergence to deviation, diameter and growth rate of F1 follicle before or after deviation did not differ between the animals with single and double DFs. The size difference between future DF(s) and the largest subordinate follicle (SF) was apparent from the day of wave emergence in single and double DFs. Overall, interval from GnRH treatment to wave emergence and wave emergence to the beginning of follicle deviation was 70.6 ± 1.4 and 58.6 ± 2.7 h, respectively. Mean size of the DF and largest SF at the beginning of deviation was 7.4 ± 0.2 and 6.3 ± 0.1 mm, respectively. In conclusion, the characteristics of follicle deviation are similar between the animals that developed single or double DFs. © 2013 Blackwell Verlag GmbH.

  18. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... donation rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard deviations below the mean national donation rate of eligible donors as a percentage of eligible deaths, averaged...'s donation rate ratio are adjusted by adding a 1 for each donation after cardiac death donor and...

  19. A Robust Interpretation of Teaching Evaluation Ratings

    ERIC Educational Resources Information Center

    Bi, Henry H.

    2018-01-01

    There are no absolute standards regarding what teaching evaluation ratings are satisfactory. It is also problematic to compare teaching evaluation ratings with the average or with a cutoff number to determine whether they are adequate. In this paper, we use average and standard deviation charts (X̄-S charts), which are based on the theory…
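
    An X̄-S chart computes center lines and control limits from subgroup means and standard deviations. A minimal sketch; the constants A3, B3, B4 below are the standard SPC table values for subgroups of size 5 and should be treated as assumptions of this example:

```python
import statistics

A3, B3, B4 = 1.427, 0.0, 2.089  # SPC chart constants for subgroup size n = 5

def xbar_s_limits(subgroups):
    """Center lines and (lower, center, upper) control limits for an
    X-bar and S chart, given equal-size subgroups of ratings
    (e.g., one subgroup of ratings per course section)."""
    means = [statistics.mean(g) for g in subgroups]
    sds = [statistics.stdev(g) for g in subgroups]
    x_c, s_c = statistics.mean(means), statistics.mean(sds)
    return {"xbar": (x_c - A3 * s_c, x_c, x_c + A3 * s_c),
            "s": (B3 * s_c, s_c, B4 * s_c)}

ratings = [[4.2, 4.5, 3.9, 4.1, 4.4],
           [3.8, 4.0, 4.3, 4.1, 3.9],
           [4.6, 4.4, 4.5, 4.2, 4.3]]
print(xbar_s_limits(ratings))
```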

  20. Hurricane track forecast cones from fluctuations

    PubMed Central

    Meuel, T.; Prado, G.; Seychelles, F.; Bessafi, M.; Kellay, H.

    2012-01-01

    Trajectories of tropical cyclones may show large deviations from predicted tracks leading to uncertainty as to their landfall location for example. Prediction schemes usually render this uncertainty by showing track forecast cones representing the most probable region for the location of a cyclone during a period of time. By using the statistical properties of these deviations, we propose a simple method to predict possible corridors for the future trajectory of a cyclone. Examples of this scheme are implemented for hurricane Ike and hurricane Jimena. The corridors include the future trajectory up to at least 50 h before landfall. The cones proposed here shed new light on known track forecast cones as they link them directly to the statistics of these deviations. PMID:22701776
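
    The idea of building a corridor from fluctuation statistics can be sketched as padding each predicted position by a multiple of the standard deviation of historical track deviations at that lead time. All numbers below are illustrative, and the paper's actual construction is more elaborate:

```python
import statistics

def corridor(track, deviations_by_lead, k=2.0):
    """For each lead time, widen the predicted cross-track position by
    k standard deviations of the historical deviations at that lead time,
    giving a (lower, upper) corridor bound."""
    out = []
    for pos, devs in zip(track, deviations_by_lead):
        sd = statistics.pstdev(devs)
        out.append((pos - k * sd, pos + k * sd))
    return out

track = [0.0, 1.0, 2.0]  # predicted positions at successive lead times
devs = [[-5, 3, 1, -2], [-12, 9, 4, -7], [-30, 22, 11, -18]]
print(corridor(track, devs))  # the corridor widens with lead time
```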

  1. Overdispersion of the Molecular Clock: Temporal Variation of Gene-Specific Substitution Rates in Drosophila

    PubMed Central

    Hartl, Daniel L.

    2008-01-01

    Simple models of molecular evolution assume that sequences evolve by a Poisson process in which nucleotide or amino acid substitutions occur as rare independent events. In these models, the expected ratio of the variance to the mean of substitution counts equals 1, and substitution processes with a ratio greater than 1 are called overdispersed. Comparing the genomes of 10 closely related species of Drosophila, we extend earlier evidence for overdispersion in amino acid replacements as well as in four-fold synonymous substitutions. The observed deviation from the Poisson expectation can be described as a linear function of the rate at which substitutions occur on a phylogeny, which implies that deviations from the Poisson expectation arise from gene-specific temporal variation in substitution rates. Amino acid sequences show greater temporal variation in substitution rates than do four-fold synonymous sequences. Our findings provide a general phenomenological framework for understanding overdispersion in the molecular clock. Also, the presence of substantial variation in gene-specific substitution rates has broad implications for work in phylogeny reconstruction and evolutionary rate estimation. PMID:18480070
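The overdispersion statistic described above, the variance-to-mean ratio of substitution counts, is easy to sketch. The simulation below is an illustrative assumption, not the paper's data: a constant-rate process gives a ratio near 1, while lineage-to-lineage rate variation pushes it above 1.

```python
import random

def index_of_dispersion(counts):
    """Variance-to-mean ratio of substitution counts: ~1 for a Poisson-like
    process, >1 for an overdispersed (clock-violating) process."""
    n = len(counts)
    m = sum(counts) / n
    var = sum((c - m) ** 2 for c in counts) / (n - 1)
    return var / m

rng = random.Random(42)
# Constant-rate counts: events accumulate at a fixed rate (mean ~5 per gene).
steady = [sum(rng.random() < 0.025 for _ in range(200)) for _ in range(1000)]
# Overdispersed counts: the rate itself varies between lineages (2 vs. 8).
varying = [sum(rng.random() < r / 200 for _ in range(200))
           for r in (rng.choice((2, 8)) for _ in range(1000))]
```

For the `varying` sample, the extra between-lineage variance in the rate inflates the count variance well beyond the mean, which is exactly the signature the abstract attributes to gene-specific temporal rate variation.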

  2. Potential Energy Surface for Large Barrierless Reaction Systems: Application to the Kinetic Calculations of the Dissociation of Alkanes and the Reverse Recombination Reactions.

    PubMed

    Yao, Qian; Cao, Xiao-Mei; Zong, Wen-Gang; Sun, Xiao-Hui; Li, Ze-Rong; Li, Xiang-Yuan

    2018-05-31

    The isodesmic reaction method is applied to calculate the potential energy surface (PES) along the reaction coordinates and the rate constants of barrierless reactions for the unimolecular dissociation of alkanes into two alkyl radicals and the reverse recombination reactions. The reaction class is divided into 10 subclasses depending upon the type of carbon atoms in the reaction centers. A correction scheme based on isodesmic reaction theory is proposed to correct the PESs at the UB3LYP/6-31+G(d,p) level. To validate the accuracy of this scheme, the PESs at the B3LYP level and the corrected PESs are compared with the PESs at the CASPT2/aug-cc-pVTZ level for 13 representative reactions; the deviations of the B3LYP PESs are up to 35.18 kcal/mol and are reduced to within 2 kcal/mol after correction, indicating that the PESs for barrierless reactions in a subclass can be calculated with meaningful accuracy at a low level of ab initio theory using our correction scheme. High-pressure-limit and pressure-dependent rate constants of these reactions are calculated from the corrected PESs, and the results show that the pressure dependence of the rate constants cannot be ignored, especially at high temperatures. Furthermore, the impact of molecular size on the pressure-dependent rate constants of the decomposition reactions of alkanes and their reverse reactions has been studied. The present work provides an effective method to generate meaningfully accurate PESs for large molecular systems.

  3. Intermittent changing axis deviation with intermittent left anterior hemiblock during atrial flutter with subclinical hyperthyroidism.

    PubMed

    Patanè, Salvatore; Marte, Filippo

    2009-06-26

    Subclinical hyperthyroidism is an increasingly recognized entity defined as normal serum free thyroxine and free triiodothyronine levels with a thyroid-stimulating hormone level suppressed below the normal range and usually undetectable. It has been reported that subclinical hyperthyroidism is not associated with CHD or mortality from cardiovascular causes, but it is usually associated with a higher heart rate and a higher risk of supraventricular arrhythmias, including atrial fibrillation and atrial flutter. Intermittent changing axis deviation during atrial fibrillation has also rarely been reported. We present a case of intermittent changing axis deviation with intermittent left anterior hemiblock in a 59-year-old Italian man with atrial flutter and subclinical hyperthyroidism. To our knowledge, this is the first report of intermittent changing axis deviation with intermittent left anterior hemiblock in a patient with atrial flutter.

  4. Evaluation of an attenuation correction method for PET/MR imaging of the head based on substitute CT images.

    PubMed

    Larsson, Anne; Johansson, Adam; Axelsson, Jan; Nyholm, Tufve; Asklund, Thomas; Riklund, Katrine; Karlsson, Mikael

    2013-02-01

    The aim of this study was to evaluate MR-based attenuation correction of PET emission data of the head, based on a previously described technique that calculates substitute CT (sCT) images from a set of MR images. Images from eight patients, examined with (18)F-FLT PET/CT and MRI, were included. sCT images were calculated and co-registered to the corresponding CT images, and transferred to the PET/CT scanner for reconstruction. The new reconstructions were then compared with the originals. The effect of replacing bone with soft tissue in the sCT-images was also evaluated. The average relative difference between the sCT-corrected PET images and the CT-corrected PET images was 1.6% for the head and 1.9% for the brain. The average standard deviations of the relative differences within the head were relatively high, at 13.2%, primarily because of large differences in the nasal septa region. For the brain, the average standard deviation was lower, 4.1%. The global average difference in the head when replacing bone with soft tissue was 11%. The method presented here has a high rate of accuracy, but high-precision quantitative imaging of the nasal septa region is not possible at the moment.

  5. Implementation of a Risk Stratification and Management Pathway for Acute Chest Pain in the Emergency Department.

    PubMed

    Baugh, Christopher W; Greenberg, Jeffrey O; Mahler, Simon A; Kosowsky, Joshua M; Schuur, Jeremiah D; Parmar, Siddharth; Ciociolo, George R; Carr, Christina W; Ghazinouri, Roya; Scirica, Benjamin M

    2016-12-01

    Chest pain is a common complaint in the emergency department, and a small but important minority represents an acute coronary syndrome (ACS). Variation in diagnostic workup, risk stratification, and management may result in underuse, misuse, and/or overuse of resources. From July to October 2014, we conducted a prospective cohort study in an academic medical center by implementing a Standardized Clinical Assessment and Management Plan (SCAMP) for chest pain based on the HEART score. In addition to capturing adherence to the SCAMP algorithm and reasons for any deviations, we measured troponin sample timing; rates of stress test utilization; length of stay (LOS); and 30-day rates of revascularization, ACS, and death. We identified 239 patients during the enrollment period who were eligible to enter the SCAMP, of whom 97 patients were entered into the pathway. Patients were risk stratified into one of 3 risk tiers: high (n = 3), intermediate (n = 40), and low (n = 54). Among low-risk patients, recommendations for troponin testing were not followed in 56%, and 11% received stress tests contrary to the SCAMP recommendation. None of the low-risk patients had elevated troponin measurements, and none had an abnormal stress test. Mean LOS in low-risk patients managed with discordant plans was 22:26 h/min, compared with 9:13 h/min in concordant patients (P < 0.001). Mean LOS in intermediate-risk patients with stress testing was 25:53 h/min, compared with 7:55 h/min for those without (P < 0.001). At 30 days, 10% of intermediate-risk patients and 0% of low-risk patients experienced an ACS event (risk difference 10% [0.7%-19%]); none experienced revascularization or death. The most frequently cited reason for deviation from the SCAMP was lack of confidence in the tool. Compliance with SCAMP recommendations for low- and intermediate-risk patients was poor, largely due to lack of confidence in the tool. 
However, in our study population, outcomes suggest that deviation from the SCAMP yielded no additional clinical benefit while significantly prolonging emergency department LOS.

  6. A Priori Subgrid Analysis of Temporal Mixing Layers with Evaporating Droplets

    NASA Technical Reports Server (NTRS)

    Okongo, Nora; Bellan, Josette

    1999-01-01

    Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using three sets of results from a Direct Numerical Simulation (DNS) database, with Reynolds numbers (based on initial vorticity thickness) as large as 600 and with droplet mass loadings as large as 0.5. In the DNS, the gas phase is computed using an Eulerian formulation, with Lagrangian droplet tracking. The Large Eddy Simulation (LES) equations corresponding to the DNS are first derived, and the key assumptions made in deriving them are confirmed by computing the relevant terms from the DNS database. Since LES of this flow requires computing unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model the unfiltered variables as the sum of the filtered variables and a correction based on the filtered standard deviation; this correction is then computed from the Subgrid Scale (SGS) standard deviation. This model predicts the unfiltered variables at droplet locations considerably better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: the Smagorinsky approach, the Gradient model and the Scale-Similarity formulation. When the proportionality constant inherent in the SGS models is properly calculated, the Gradient and Scale-Similarity methods give results in excellent agreement with the DNS.

  7. [Strabismus surgery in Grave's disease--dose-effect relationships and functional results].

    PubMed

    Schittkowski, M; Fichter, N; Guthoff, R

    2004-11-01

    Strabismus in thyroid ophthalmopathy is based on a loss of contractility and distensibility of the external ocular muscles. Different therapeutic approaches are available, such as recession after pre- or intraoperative measurement, adjustable sutures, antagonist resection, or contralateral synergist faden operation. 26 patients with strabismus in thyroid ophthalmopathy were operated on between 2000 and 2003. All patients were examined preoperatively, then 1 day and 3-6 months (maximum 36 months) postoperatively. Before proceeding with surgery, we waited at least 6 months after stabilization of ocular alignment and normalization of thyroid chemistry. Preoperative vertical deviation was 10-44 PD (mean 22); 3 months postoperatively it was 2-10 PD (mean 1.5). Recession of the fibrotic muscle leads to reproducible results: 3.98 +/- 0.52 PD of vertical deviation per mm for the inferior rectus. In the case of a large preoperative deviation, the achieved correction may not be sufficient in the first few days or weeks; a second operation should not be carried out before 3 months. 7 patients were operated on twice, and 1 patient needed three operations. 4 patients (preop. 0) achieved no double vision at all; 15 patients (preop. 1) had no double vision in the primary and reading positions; 3 patients (preop. 0) had no double vision with a maximum of 5 PD; 1 patient (preop. 7) had double vision in the primary or reading position even with prisms; and 2 patients (preop. 17) had double vision in every position. We advocate that recession of the restricted inferior or internal rectus muscle is precise, safe and effective in patients with thyroid ophthalmopathy. The recessed muscle should be fixed directly to the sclera to avoid late overcorrection through a slipped muscle. The success rate in terms of binocular single vision was 76%, and 88% with prisms added.

  8. Some observations aimed at improving the success rate of paleointensity experiments for lava flows (Invited)

    NASA Astrophysics Data System (ADS)

    Valet, J. M.; Herrero-Bervera, E.

    2009-12-01

    Emile Thellier did not believe in the possibility of obtaining reliable determinations of absolute paleointensity from lava flows, maintaining that only archeomagnetic material was suitable. Many protocols have been proposed over the past fifty years to show that this assertion was not really justified. We have performed paleointensity studies on contemporaneous flows in Hawaii and in the Canaries, to which we have added determinations obtained from relatively recent flows at Santorini. The Hawaiian flows, which are dominated by pure magnetite with a narrow distribution of grain sizes, provide by far the most accurate determinations of paleointensity. Such characteristics can be read directly from the spectrum of unblocking temperatures. Thus the evolution of the TRM upon thermal demagnetization appears to be a very important indicator of successful paleointensity experiments. A sharp decrease of the magnetization before reaching the unique Curie temperature of the rock is a very favorable condition for obtaining suitable field determinations. Of course, these characteristics are only valid if the pTRM checks do not deviate from the original TRM. In this respect, we have noticed that deviations larger than 5% are frequently associated with significant deviations from the expected field intensity. The results from the Canary Islands are also consistent with this observation despite the presence of a larger amount of titanium. Overall, these conclusions make sense when set against Thellier's statement regarding the success of archeomagnetic material. Indeed, the features outlined above are typical of archeological materials, which have been largely oxidized during cooling and are dominated by a single magnetic mineral with a narrow distribution of grain sizes.

  9. Cognitive loading affects motor awareness and movement kinematics but not locomotor trajectories during goal-directed walking in a virtual reality environment.

    PubMed

    Kannape, Oliver Alan; Barré, Arnaud; Aminian, Kamiar; Blanke, Olaf

    2014-01-01

    The primary purpose of this study was to investigate the effects of cognitive loading on movement kinematics and trajectory formation during goal-directed walking in a virtual reality (VR) environment. The secondary objective was to measure how participants corrected their trajectories for perturbed feedback and how participants' awareness of such perturbations changed under cognitive loading. We asked 14 healthy young adults to walk towards four different target locations in a VR environment while their movements were tracked and played back in real-time on a large projection screen. In 75% of all trials we introduced angular deviations of ±5° to ±30° between the veridical walking trajectory and the visual feedback. Participants performed a second experimental block under cognitive load (serial-7 subtraction, counter-balanced across participants). We measured walking kinematics (joint angles, velocity profiles) and motor performance (end-point compensation, trajectory deviations). Motor awareness was determined by asking participants to rate the veracity of the feedback after every trial. In line with previous findings in natural settings, participants displayed stereotypical walking trajectories in a VR environment. Our results extend these findings, as they demonstrate that taxing cognitive resources did not affect trajectory formation and deviations, although it interfered with the participants' movement kinematics, in particular walking velocity. Additionally, we report that motor awareness was selectively impaired by the secondary task in trials with high perceptual uncertainty. Compared with data on eye and arm movements, our findings lend support to the hypothesis that the central nervous system (CNS) uses common mechanisms to govern goal-directed movements, including locomotion. We discuss our results with respect to the use of VR methods in gait control and rehabilitation.

  10. The large-scale correlations of multicell densities and profiles: implications for cosmic variance estimates

    NASA Astrophysics Data System (ADS)

    Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe

    2016-08-01

    In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of the two-point correlation function of cell densities to the two-point correlation of the underlying dark matter distribution; they describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be reached rapidly, allowing sub-percent precision for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact `one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.

  11. Probability evolution method for exit location distribution

    NASA Astrophysics Data System (ADS)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

    The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise of finite strength may induce nontrivial phenomena, such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes exponentially long times as the noise approaches zero, with the majority of the time wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. The method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise but may show certain deviations for large noise. Finally, some possible ways to improve our method are discussed.
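The cost that motivates the paper can be seen in a naive baseline. The sketch below is not the authors' interface/reinjection method; it is a plain Euler-Maruyama escape simulation from the left well of the double-well potential V(x) = x⁴/4 - x²/2 (barrier height 1/4), whose mean exit time grows rapidly as the noise strength shrinks, which is exactly the regime the proposed scheme targets.

```python
import math
import random

def mean_exit_time(eps, trials=50, dt=0.01, seed=1):
    """Monte Carlo estimate of the mean time to escape the left well of
    V(x) = x^4/4 - x^2/2 under noise strength eps, via Euler-Maruyama
    integration of dx = -V'(x) dt + sqrt(2 eps) dW starting at x = -1."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x, t = -1.0, 0.0
        while x < 0.0:                      # exit = crossing the saddle at x = 0
            drift = -(x ** 3 - x)           # -V'(x)
            x += drift * dt + math.sqrt(2.0 * eps * dt) * rng.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / trials
```

In the Kramers/Freidlin-Wentzell picture the mean exit time scales like exp(ΔV/ε), so halving the noise strength already multiplies the simulation cost severalfold, and the weak-noise limit becomes unreachable by this brute-force approach.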

  12. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gopich, Irina V.

    2015-01-21

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated.

  13. Accuracy of maximum likelihood estimates of a two-state model in single-molecule FRET

    PubMed Central

    Gopich, Irina V.

    2015-01-01

    Photon sequences from single-molecule Förster resonance energy transfer (FRET) experiments can be analyzed using a maximum likelihood method. Parameters of the underlying kinetic model (FRET efficiencies of the states and transition rates between conformational states) are obtained by maximizing the appropriate likelihood function. In addition, the errors (uncertainties) of the extracted parameters can be obtained from the curvature of the likelihood function at the maximum. We study the standard deviations of the parameters of a two-state model obtained from photon sequences with recorded colors and arrival times. The standard deviations can be obtained analytically in a special case when the FRET efficiencies of the states are 0 and 1 and in the limiting cases of fast and slow conformational dynamics. These results are compared with the results of numerical simulations. The accuracy and, therefore, the ability to predict model parameters depend on how fast the transition rates are compared to the photon count rate. In the limit of slow transitions, the key parameters that determine the accuracy are the number of transitions between the states and the number of independent photon sequences. In the fast transition limit, the accuracy is determined by the small fraction of photons that are correlated with their neighbors. The relative standard deviation of the relaxation rate has a “chevron” shape as a function of the transition rate in the log-log scale. The location of the minimum of this function dramatically depends on how well the FRET efficiencies of the states are separated. PMID:25612692
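The curvature-of-the-likelihood idea for error bars can be illustrated in the simplest static limit: a single state with a binomial photon-color model. This is a toy sketch, not the paper's two-state likelihood, and the function names (`loglik`, `ml_estimate_with_error`) are hypothetical.

```python
import math

def loglik(E, n_acc, n_total):
    """Log-likelihood of observing n_acc acceptor photons out of n_total
    for a single state with apparent FRET efficiency E (binomial model)."""
    return n_acc * math.log(E) + (n_total - n_acc) * math.log(1.0 - E)

def ml_estimate_with_error(n_acc, n_total, h=1e-4):
    """ML estimate of E and its standard deviation from the curvature
    (negative inverse second derivative) of the log-likelihood at the maximum."""
    E_hat = n_acc / n_total                 # analytic maximizer for this model
    curv = (loglik(E_hat + h, n_acc, n_total)
            - 2.0 * loglik(E_hat, n_acc, n_total)
            + loglik(E_hat - h, n_acc, n_total)) / h ** 2
    return E_hat, math.sqrt(-1.0 / curv)
```

For this binomial case the curvature-based error reproduces the familiar sqrt(E(1-E)/N); the paper's contribution is the analogous analysis when transition rates between two states enter the likelihood as well.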

  14. Radar sea reflection for low-e targets

    NASA Astrophysics Data System (ADS)

    Chow, Winston C.; Groves, Gordon W.

    1998-09-01

    A model of radar signal reflection from a wavy sea surface is presented that uses a realistic characterization of the large surface features and parameterizes the effect of the small roughness elements. Representing the reflection coefficient at each point of the sea surface as a function of the specular deviation angle is, to our knowledge, a novel approach. The objective is to achieve enough simplification, while retaining enough fidelity, to obtain a practical multipath model. The specular deviation angle as used in this investigation is defined and explained. Being a function of the sea elevations, which are stochastic in nature, this quantity is also random and has a probability density function. This density function depends on the relative geometry of the antenna and target positions and, together with the beam-broadening effect of the small surface ripples, determines the reflectivity of the sea surface at each point. The probability density function of the specular deviation angle is derived, and its distribution as a function of position on the mean sea surface is described.

  15. Recursive utility in a Markov environment with stochastic growth

    PubMed Central

    Hansen, Lars Peter; Scheinkman, José A.

    2012-01-01

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron–Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility. PMID:22778428

  16. Shapes of strong shock fronts in an inhomogeneous solar wind

    NASA Technical Reports Server (NTRS)

    Heinemann, M. A.; Siscoe, G. L.

    1974-01-01

    The shapes expected for solar-flare-produced strong shock fronts in the solar wind have been calculated, large-scale variations in the ambient medium being taken into account. It has been shown that for reasonable ambient solar wind conditions the mean and the standard deviation of the east-west shock normal angle are in agreement with experimental observations including shocks of all strengths. The results further suggest that near a high-speed stream it is difficult to distinguish between corotating shocks and flare-associated shocks on the basis of the shock normal alone. Although the calculated shapes are outside the range of validity of the linear approximation, these results indicate that the variations in the ambient solar wind may account for large deviations of shock normals from the radial direction.

  17. Recursive utility in a Markov environment with stochastic growth.

    PubMed

    Hansen, Lars Peter; Scheinkman, José A

    2012-07-24

    Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and to an arguably simpler Perron-Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility.
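The Perron-Frobenius connection mentioned here is the same one used in the large deviation analysis of Markov additive functionals: tilt the transition matrix by the observable and take the logarithm of its largest eigenvalue to obtain the scaled cumulant generating function. A minimal sketch on a finite chain (illustrative only, not the authors' economic model; `scgf` is a hypothetical name):

```python
import math

def scgf(P, f, theta, iters=2000):
    """Scaled cumulant generating function of an additive observable f on a
    finite Markov chain with transition matrix P: the log of the
    Perron-Frobenius eigenvalue of the tilted matrix P~_ij = P_ij * exp(theta * f_j),
    found by power iteration (valid since the tilted matrix is positive)."""
    n = len(P)
    T = [[P[i][j] * math.exp(theta * f[j]) for j in range(n)] for i in range(n)]
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(T[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)        # dominant eigenvalue estimate
        v = [x / lam for x in w]            # renormalized eigenvector estimate
    return math.log(lam)
```

Since rows of P sum to 1, the untilted eigenvalue is 1 and scgf(P, f, 0) = 0, as any cumulant generating function must satisfy; Chernoff-style large deviation bounds for time averages of f then follow by Legendre transform of this function.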

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margalit, Danielle N.; Mamon, Harvey J.; Ancukiewicz, Marek

    Purpose: To determine the rate of treatment deviations during combined modality therapy for rectal cancer in elderly patients aged 75 years and older. Methods and Materials: We reviewed the records of consecutively treated patients with rectal cancer aged 75 years and older treated with combined modality therapy at Massachusetts General Hospital and Brigham and Women's Hospital from 2002 to 2007. The primary endpoint was the rate of treatment deviation, defined as a treatment break, dose reduction, early discontinuation of therapy, or hospitalization during combined modality therapy. Patient comorbidity was rated using the validated Adult Comorbidity Evaluation 27 Test (ACE-27) comorbidity index. Fisher's exact test and the Mantel-Haenszel trend test were used to identify predictors of treatment tolerability. Results: Thirty-six eligible patients had a median age of 79.0 years (range, 75-87 years); 53% (19/36) had no or mild comorbidity and 47% (17/36) had moderate or severe comorbidity. In all, 58% of patients (21/36) were treated with preoperative chemoradiotherapy (CRT) and 33% (12/36) with postoperative CRT. Although 92% of patients (33/36) completed the planned radiotherapy (RT) dose, 25% (9/36) required an RT treatment break, 11% (4/36) were hospitalized, and 33% (12/36) had a dose reduction, break, or discontinuation of concurrent chemotherapy. In all, 39% of patients (14/36) completed ≥4 months of adjuvant chemotherapy, and 17% (6/36) completed therapy without a treatment deviation. More patients with no to mild comorbidity completed treatment than did patients with moderate to severe comorbidity (21% vs. 12%, p = 0.66). The rate of deviation did not differ between patients who had preoperative or postoperative CRT (19% vs. 17%, p = 1.0). Conclusions: The majority of elderly patients with rectal cancer in this series required early termination of treatment, treatment interruptions, or dose reductions. These data suggest that further intensification of combined modality therapy for rectal cancer should be performed with caution in elderly patients, who require aggressive supportive care to complete treatment.

  19. Observer Evaluation of a Metal Artifact Reduction Algorithm Applied to Head and Neck Cone Beam Computed Tomographic Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korpics, Mark; Surucu, Murat; Mescioglu, Ibrahim

    Purpose and Objectives: To quantify, through an observer study, the reduction in metal artifacts on cone beam computed tomographic (CBCT) images using a projection-interpolation algorithm, on images containing metal artifacts from dental fillings and implants in patients treated for head and neck (H&N) cancer. Methods and Materials: An interpolation-substitution algorithm was applied to H&N CBCT images containing metal artifacts from dental fillings and implants. Image quality with respect to metal artifacts was evaluated subjectively and objectively. First, 6 independent radiation oncologists were asked to rank randomly sorted blinded images (before and after metal artifact reduction) using a 5-point rating scale (1 = severe artifacts; 5 = no artifacts). Second, the standard deviation of different regions of interest (ROI) within each image was calculated and compared with the mean rating scores. Results: The interpolation-substitution technique successfully reduced metal artifacts in 70% of the cases. From a total of 60 images from 15 H&N cancer patients undergoing image guided radiation therapy, the mean rating score on the uncorrected images was 2.3 ± 1.1, versus 3.3 ± 1.0 for the corrected images. The mean difference in ranking score between uncorrected and corrected images was 1.0 (95% confidence interval: 0.9-1.2, P<.05). The standard deviation of each ROI significantly decreased after artifact reduction (P<.01). Moreover, a negative correlation between the mean rating score for each image and the standard deviation of the oral cavity and bilateral cheeks was observed. Conclusion: The interpolation-substitution algorithm is efficient and effective for reducing metal artifacts caused by dental fillings and implants on CBCT images, as demonstrated by the statistically significant increase in observer image quality ranking and by the decrease in ROI standard deviation between uncorrected and corrected images.

  20. On the influence of airfoil deviations on the aerodynamic performance of wind turbine rotors

    NASA Astrophysics Data System (ADS)

    Winstroth, J.; Seume, J. R.

    2016-09-01

    The manufacture of large wind turbine rotor blades is a difficult task that still involves a certain degree of manual labor. Due to this complexity, airfoil deviations between the design airfoils and the manufactured blade are certain to arise. At present, the understanding of the impact of manufacturing uncertainties on aerodynamic performance is still incomplete. The present work analyzes the influence of a series of airfoil deviations likely to occur during manufacturing by means of Computational Fluid Dynamics and the aeroelastic code FAST. The average power production of the NREL 5MW wind turbine is used to evaluate the different airfoil deviations. Analyzed deviations include: mold tilt towards the leading and trailing edges, thick bond lines, thick bond lines with cantilever correction, backward-facing steps, and airfoil waviness. The most severe influences are observed for mold tilt towards the leading edge and for thick bond lines. By applying the cantilever correction, the influence of thick bond lines is almost compensated. The effect of airfoil waviness depends strongly on the amplitude and on the location along the surface of the airfoil. An increased influence is observed for backward-facing steps once they are high enough to trigger boundary layer transition close to the leading edge.

  1. Models of Lift and Drag Coefficients of Stalled and Unstalled Airfoils in Wind Turbines and Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    2008-01-01

    Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally-stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. Mean deviation was found to be -0.4 percent and standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, mean deviation from 54 data points was -1.3 percent and standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.
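
    The abstract summarizes model fit with a mean and standard deviation of percent deviations between calculated and measured parameters. A minimal sketch of that summary statistic (the data values below are hypothetical, for illustration only; the function name is ours):

```python
# Mean and (sample) standard deviation of percent deviations between
# calculated and measured coefficients. Values are hypothetical.

def deviation_stats(calculated, measured):
    """Return (mean, standard deviation) of percent deviations."""
    devs = [100.0 * (c - m) / m for c, m in zip(calculated, measured)]
    n = len(devs)
    mean = sum(devs) / n
    var = sum((d - mean) ** 2 for d in devs) / (n - 1)  # sample variance
    return mean, var ** 0.5

calculated = [1.02, 0.98, 1.10, 0.95]   # hypothetical model values
measured   = [1.00, 1.00, 1.05, 1.00]   # hypothetical test values
mean_dev, std_dev = deviation_stats(calculated, measured)
```

    The study's quoted figures (e.g. mean -0.4 percent, standard deviation 4.8 percent over 585 test points) are summaries of exactly this kind.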

  2. Associations between heterozygosity and growth rate variables in three western forest trees

    Treesearch

    Jeffry B. Milton; Peggy Knowles; Kareen B. Sturgeon; Yan B. Linhart; Martha Davis

    1981-01-01

    For each of three species, quaking aspen, ponderosa pine, and lodgepole pine, we determined the relationships between a ranking of heterozygosity of individuals and measures of growth rate. Genetic variation was assayed by starch gel electrophoresis of enzymes. Growth rates were characterized by the mean, standard deviation, logarithm of the variance, and coefficient...

  3. Computerized Silent Reading Rate and Strategy Instruction for Fourth Graders at Risk in Silent Reading Rate

    ERIC Educational Resources Information Center

    Niedo, Jasmin; Lee, Yen-Ling; Breznitz, Zvia; Berninger, Virginia W.

    2014-01-01

    Fourth graders whose silent word reading and/or sentence reading rate was, on average, two-thirds standard deviation below their oral reading of real and pseudowords and reading comprehension accuracy were randomly assigned to treatment ("n" = 7) or wait-listed ("n" = 7) control groups. Following nine sessions combining…

  4. Excellent reliability of the Hamilton Depression Rating Scale (HDRS-21) in Indonesia after training.

    PubMed

    Istriana, Erita; Kurnia, Ade; Weijers, Annelies; Hidayat, Teddy; Pinxten, Lucas; de Jong, Cor; Schellekens, Arnt

    2013-09-01

    The Hamilton Depression Rating Scale (HDRS) is the most widely used depression rating scale worldwide. Reliability of HDRS has been reported mainly from Western countries. The current study tested the reliability of HDRS ratings among psychiatric residents in Indonesia, before and after HDRS training. The hypotheses were that: (i) prior to the training reliability of HDRS ratings is poor; and (ii) HDRS training can improve reliability of HDRS ratings to excellent levels. Furthermore, we explored cultural validity at item level. Videotaped HDRS interviews were rated by 30 psychiatric residents before and after 1 day of HDRS training. Based on a gold standard rating, percentage correct ratings and deviation from the standard were calculated. Correct ratings increased from 83% to 99% at item level and from 70% to 100% for the total rating. The average deviation from the gold standard rating improved from 0.07 to 0.02 at item level and from 2.97 to 0.46 for the total rating. HDRS assessment by psychiatric trainees in Indonesia without prior training is unreliable. A short, evidence-based HDRS training improves reliability to near perfect levels. The outlined training program could serve as a template for HDRS trainings. HDRS items that may be less valid for assessment of depression severity in Indonesia are discussed. Copyright © 2013 Wiley Publishing Asia Pty Ltd.

  5. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  6. Efficiency of thin magnetically arrested discs around black holes

    NASA Astrophysics Data System (ADS)

    Avara, Mark J.; McKinney, Jonathan C.; Reynolds, Christopher S.

    2016-10-01

    The radiative and jet efficiencies of thin magnetized accretion discs around black holes (BHs) are affected by BH spin and the presence of a magnetic field that, when strong, could lead to large deviations from Novikov-Thorne (NT) thin disc theory. To seek the maximum deviations, we perform general relativistic magnetohydrodynamic simulations of radiatively efficient thin (half-height H to radius R of H/R ≈ 0.10) discs around moderately rotating BHs with a/M = 0.5. First, our simulations, each evolved for more than 70 000 rg/c (gravitational radius rg and speed of light c), show that large-scale magnetic field readily accretes inward even through our thin disc and builds up to the magnetically arrested disc (MAD) state. Secondly, our simulations of thin MADs show the disc achieves a radiative efficiency of ηr ≈ 15 per cent (after estimating photon capture), which is about twice the NT value of ηr ~ 8 per cent for a/M = 0.5 and gives the same luminosity as an NT disc with a/M ≈ 0.9. Compared to prior simulations with ≲10 per cent deviations, our result of an ≈80 per cent deviation sets a new benchmark. Building on prior work, we are now able to complete an important scaling law which suggests that observed jet quenching in the high-soft state in BH X-ray binaries is consistent with an ever-present MAD state with a weak yet sustained jet.

  7. Evaluating the accuracy and large inaccuracy of two continuous glucose monitoring systems.

    PubMed

    Leelarathna, Lalantha; Nodale, Marianna; Allen, Janet M; Elleri, Daniela; Kumareswaran, Kavita; Haidar, Ahmad; Caldwell, Karen; Wilinska, Malgorzata E; Acerini, Carlo L; Evans, Mark L; Murphy, Helen R; Dunger, David B; Hovorka, Roman

    2013-02-01

    This study evaluated the accuracy and large inaccuracy of the Freestyle Navigator (FSN) (Abbott Diabetes Care, Alameda, CA) and Dexcom SEVEN PLUS (DSP) (Dexcom, Inc., San Diego, CA) continuous glucose monitoring (CGM) systems during closed-loop studies. Paired CGM and plasma glucose values (7,182 data pairs) were collected, every 15-60 min, from 32 adults (36.2±9.3 years) and 20 adolescents (15.3±1.5 years) with type 1 diabetes who participated in closed-loop studies. Levels 1, 2, and 3 of large sensor error with increasing severity were defined according to absolute relative deviation greater than or equal to ±40%, ±50%, and ±60% at a reference glucose level of ≥6 mmol/L or absolute deviation greater than or equal to ±2.4 mmol/L, ±3.0 mmol/L, and ±3.6 mmol/L at a reference glucose level of <6 mmol/L. Median absolute relative deviation was 9.9% for FSN and 12.6% for DSP. Proportions of data points in Zones A and B of Clarke error grid analysis were similar (96.4% for FSN vs. 97.8% for DSP). Large sensor over-reading, which increases risk of insulin over-delivery and hypoglycemia, occurred two- to threefold more frequently with DSP than FSN (once every 2.5, 4.6, and 10.7 days of FSN use vs. 1.2, 2.0, and 3.7 days of DSP use for Level 1-3 errors, respectively). At levels 2 and 3, large sensor errors lasting 1 h or longer were absent with FSN but persisted with DSP. FSN and DSP differ substantially in the frequency and duration of large inaccuracy despite only modest differences in conventional measures of numerical and clinical accuracy. Further evaluations are required to confirm that FSN is more suitable for integration into closed-loop delivery systems.
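
    The error-grading scheme above is easy to state in code. A minimal sketch (the function name is ours; thresholds are taken directly from the abstract: relative thresholds of 40/50/60% when reference glucose is ≥6 mmol/L, absolute thresholds of 2.4/3.0/3.6 mmol/L below that):

```python
# Grade a single paired reading by the Level 1-3 large-sensor-error
# definition described in the abstract. Returns 0 when no threshold is met.

def sensor_error_level(cgm, reference):
    """cgm and reference in mmol/L; return 0, 1, 2, or 3."""
    if reference >= 6.0:
        deviation = abs(cgm - reference) / reference * 100.0  # percent
        thresholds = [40.0, 50.0, 60.0]
    else:
        deviation = abs(cgm - reference)  # mmol/L
        thresholds = [2.4, 3.0, 3.6]
    level = 0
    for lvl, t in enumerate(thresholds, start=1):
        if deviation >= t:
            level = lvl
    return level
```

    For example, a CGM reading of 10.0 mmol/L against a reference of 6.0 mmol/L is a Level 3 error (67% relative deviation), while 8.0 against 5.0 mmol/L is Level 2 (3.0 mmol/L absolute deviation).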

  8. OSMOSIS: A CAUSE OF APPARENT DEVIATIONS FROM DARCY'S LAW.

    USGS Publications Warehouse

    Olsen, Harold W.

    1985-01-01

    This review of the existing evidence shows that osmosis causes intercepts in flow rate versus hydraulic gradient relationships that are consistent with the observed deviations from Darcy's law at very low gradients. Moreover, it is suggested that a natural cause of osmosis in laboratory samples could be chemical reactions such as those involved in aging effects. This hypothesis is analogous to the previously proposed occurrence of electroosmosis in nature generated by geochemical weathering reactions.

  9. Coldest Temperature Extreme Monotonically Increased and Hottest Extreme Oscillated over Northern Hemisphere Land during Last 114 Years.

    PubMed

    Zhou, Chunlüe; Wang, Kaicun

    2016-05-13

    Most studies on global warming rely on global mean surface temperature, whose change is jointly determined by anthropogenic greenhouse gases (GHGs) and natural variability. This introduces a heated debate on whether there is a recent warming hiatus and what caused the hiatus. Here, we presented a novel method and applied it to a 5° × 5° grid of Northern Hemisphere land for the period 1900 to 2013. Our results show that the coldest 5% of minimum temperature anomalies (the coldest deviation) have increased monotonically by 0.22 °C/decade, which reflects well the elevated anthropogenic GHG effect. The warmest 5% of maximum temperature anomalies (the warmest deviation), however, display a significant oscillation following the Atlantic Multidecadal Oscillation (AMO), with a warming rate of 0.07 °C/decade from 1900 to 2013. The warmest (0.34 °C/decade) and coldest deviations (0.25 °C/decade) increased at much higher rates over the most recent decade than last century mean values, indicating the hiatus should not be interpreted as a general slowing of climate change. The significant oscillation of the warmest deviation provides an extension of previous study reporting no pause in the hottest temperature extremes since 1979, and first uncovers its increase from 1900 to 1939 and decrease from 1940 to 1969.
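
    The quoted warming rates (e.g. 0.22 °C/decade for the coldest deviation) are linear trends fitted to yearly anomaly series. A minimal sketch of such a trend estimate, with a synthetic series in place of the study's data (function name and series are ours, for illustration only):

```python
# Ordinary least-squares trend of a yearly anomaly series, in degrees C
# per decade. The synthetic anomalies warm at exactly 0.022 degrees C/year.

def trend_per_decade(years, anomalies):
    n = len(years)
    my = sum(years) / n
    ma = sum(anomalies) / n
    slope = (sum((y - my) * (a - ma) for y, a in zip(years, anomalies))
             / sum((y - my) ** 2 for y in years))  # degrees C per year
    return slope * 10.0  # degrees C per decade

years = list(range(1900, 2014))  # the study's 1900-2013 window
anomalies = [0.022 * (y - 1900) for y in years]  # synthetic, not real data
decadal_trend = trend_per_decade(years, anomalies)
```

    In the study, the same kind of fit is applied separately to the coldest 5% and warmest 5% of temperature anomalies, which is what makes the monotonic increase of one and the oscillation of the other comparable.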

  10. Impact of the Food and Drug Administration approval of flecainide and encainide on coronary artery disease mortality: putting "Deadly Medicine" to the test.

    PubMed

    Anderson, J L; Pratt, C M; Waldo, A L; Karagounis, L A

    1997-01-01

    In his book Deadly Medicine and on television, Thomas Moore impugns the process of antiarrhythmic drug approval in the 1980s, alleging that the new generation of drugs had flooded the marketplace and had caused deaths in numbers comparable to lives lost during war. To assess these important public health allegations, we evaluated annual coronary artery disease death rates in relation to antiarrhythmic drug sales (2 independent marketing surveys). Predicted mortality rates were modeled using linear regression analysis for 1982 through 1991. Deviations from predicted linearity were sought in relation to rising and falling class IC and overall class I antiarrhythmic drug use. Flecainide came to market in 1986 and encainide in 1987. Combined class IC sales peaked in 1987 and 1988 (maximum market penetration, 20%, first quarter 1989). Results of the Cardiac Arrhythmia Suppression Trial (CAST) were disclosed in April 1989. Overall annual class I antiarrhythmic prescription sales actually fell slightly (-3% to -4%/yr) in the 2 years before CAST and then more abruptly (-12%) in the year after CAST (1990). Sales of class IC drugs fell dramatically after CAST (by 75%). Coronary death rates (age adjusted) fell in a linear fashion during the decade of 1982 through 1991. No deviation from predicted rates was observed during the introduction, rise, and fall in class IC (and other class I) sales: rates were 126/100,000 in 1985 (before flecainide), 114 and 110 in 1987 and 1988 (maximum sales), and 103 in 1990 (after CAST). Deviations in death rates in the postulated range of 6,000 to 25,000 per year were shown to be excluded easily by the 95% confidence intervals about the predicted rates. Entry of new antiarrhythmic drugs in the 1980s did not lead to overall market expansion and had no adverse impact on coronary artery disease death rates, which fell progressively. Thus, the allegations in Deadly Medicine could not be confirmed.

  11. How does the past of a soccer match influence its future? Concepts and statistical analysis.

    PubMed

    Heuer, Andreas; Rubner, Oliver

    2012-01-01

    Scoring goals in a soccer match can be interpreted as a stochastic process. In the simplest description of a soccer match one assumes that scoring goals follows from independent rate processes of both teams. This would imply simple Poissonian and Markovian behavior. Deviations from this behavior would imply that the previous course of the match has an impact on the present match behavior. Here a general framework for the identification of deviations from this behavior is presented. For this endeavor it is essential to formulate an a priori estimate of the expected number of goals per team in a specific match. This can be done based on our previous work on the estimation of team strengths. Furthermore, the well-known general increase of the number of the goals in the course of a soccer match has to be removed by appropriate normalization. In general, three different types of deviations from a simple rate process can exist. First, the goal rate may depend on the exact time of the previous goals. Second, it may be influenced by the time passed since the previous goal and, third, it may reflect the present score. We show that the Poissonian scenario is fulfilled quite well for the German Bundesliga. However, a detailed analysis reveals significant deviations for the second and third aspect. Dramatic effects are observed if the away team leads by one or two goals in the final part of the match. This analysis allows one to identify generic features about soccer matches and to learn about the hidden complexities behind scoring goals. Among other findings, the reason why the number of draws is larger than statistically expected can be identified.
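
    Under the independent-Poisson baseline described above, the expected draw probability has a closed form: it is the sum over k of the probability that both teams score exactly k goals. A minimal sketch (the scoring rates below are illustrative, not estimated team strengths from the paper):

```python
# Draw probability under independent Poisson scoring:
# P(draw) = sum_k P(X = k) * P(Y = k), truncated at kmax.
from math import exp, factorial

def poisson_pmf(k, lam):
    return exp(-lam) * lam ** k / factorial(k)

def draw_probability(lam_home, lam_away, kmax=30):
    return sum(poisson_pmf(k, lam_home) * poisson_pmf(k, lam_away)
               for k in range(kmax + 1))

p_draw = draw_probability(1.5, 1.1)  # illustrative goal rates per match
```

    Comparing the observed fraction of draws in a league to this baseline is one way to quantify the abstract's point that draws occur more often than statistically expected.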

  12. How Does the Past of a Soccer Match Influence Its Future? Concepts and Statistical Analysis

    PubMed Central

    Heuer, Andreas; Rubner, Oliver

    2012-01-01

    Scoring goals in a soccer match can be interpreted as a stochastic process. In the simplest description of a soccer match one assumes that scoring goals follows from independent rate processes of both teams. This would imply simple Poissonian and Markovian behavior. Deviations from this behavior would imply that the previous course of the match has an impact on the present match behavior. Here a general framework for the identification of deviations from this behavior is presented. For this endeavor it is essential to formulate an a priori estimate of the expected number of goals per team in a specific match. This can be done based on our previous work on the estimation of team strengths. Furthermore, the well-known general increase of the number of the goals in the course of a soccer match has to be removed by appropriate normalization. In general, three different types of deviations from a simple rate process can exist. First, the goal rate may depend on the exact time of the previous goals. Second, it may be influenced by the time passed since the previous goal and, third, it may reflect the present score. We show that the Poissonian scenario is fulfilled quite well for the German Bundesliga. However, a detailed analysis reveals significant deviations for the second and third aspect. Dramatic effects are observed if the away team leads by one or two goals in the final part of the match. This analysis allows one to identify generic features about soccer matches and to learn about the hidden complexities behind scoring goals. Among other findings, the reason why the number of draws is larger than statistically expected can be identified. PMID:23226200

  13. Visual field progression in glaucoma: total versus pattern deviation analyses.

    PubMed

    Artes, Paul H; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C

    2005-12-01

    To compare visual field progression with total and pattern deviation analyses in a prospective longitudinal study of patients with glaucoma and healthy control subjects. A group of 101 patients with glaucoma (168 eyes) with early to moderately advanced visual field loss at baseline (average mean deviation [MD], -3.9 dB) and no clinical evidence of media opacity were selected from a prospective longitudinal study on visual field progression in glaucoma. Patients were examined with static automated perimetry at 6-month intervals for a median follow-up of 9 years. At each test location, change was established with event and trend analyses of total and pattern deviation. The event analyses compared each follow-up test to a baseline obtained from averaging the first two tests, and visual field progression was defined as deterioration beyond the 5th percentile of test-retest variability at three test locations, observed on three consecutive tests. The trend analyses were based on point-wise linear regression, and visual field progression was defined as statistically significant deterioration (P < 5%) worse than -1 dB/year at three locations, confirmed by independently omitting the last and the penultimate observation. The incidence and the time-to-progression were compared between total and pattern deviation analyses. To estimate the specificity of the progression analyses, identical criteria were applied to visual fields obtained in 102 healthy control subjects, and the rate of visual field improvement was established in the patients with glaucoma and the healthy control subjects. With both event and trend methods, pattern deviation analyses classified approximately 15% fewer eyes as having progressed than did the total deviation analyses. In eyes classified as progressing by both the total and pattern deviation methods, total deviation analyses tended to detect progression earlier than the pattern deviation analyses. 
A comparison of the changes observed in MD and the visual fields' general height (estimated by the 85th percentile of the total deviation values) confirmed that change in the glaucomatous eyes almost always comprised a diffuse component. Pattern deviation analyses of progression may therefore underestimate the true amount of glaucomatous visual field progression. Pattern deviation analyses of visual field progression may underestimate visual field progression in glaucoma, particularly when there is no clinical evidence of increasing media opacity. Clinicians should have access to both total and pattern deviation analyses to make informed decisions on visual field progression in glaucoma.
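
    The trend analysis described above rests on a point-wise linear regression of sensitivity against time. A minimal sketch of the core criterion, flagging progression when at least 3 locations deteriorate faster than -1 dB/year (the significance test and the confirmation step that omits the last observations are left out for brevity; function names and series are ours):

```python
# Point-wise linear-regression progression criterion (simplified):
# fit a slope in dB/year at each visual-field location, then flag
# progression when >= 3 locations worsen faster than -1 dB/year.

def ols_slope(times, values):
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    return (sum((t - mt) * (v - mv) for t, v in zip(times, values))
            / sum((t - mt) ** 2 for t in times))

def progressed(times, locations, slope_cutoff=-1.0, min_locations=3):
    """locations: list of per-location sensitivity series (dB)."""
    worsening = sum(1 for series in locations
                    if ols_slope(times, series) < slope_cutoff)
    return worsening >= min_locations

times = [0, 0.5, 1.0, 1.5, 2.0]  # years; illustrative 6-month schedule
stable = [30.0, 29.9, 30.1, 30.0, 29.9]        # slope near 0 dB/year
declining = [30.0, 29.0, 27.8, 27.1, 26.0]     # roughly -2 dB/year
```

    Whether the input series are total deviation or pattern deviation values is exactly the comparison the study makes: the same criterion applied to the two representations yields different progression calls.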

  14. A Spatio-Temporal Approach for Global Validation and Analysis of MODIS Aerosol Products

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles; Chu, D. Allen; Mattoo, Shana; Kaufman, Yoram J.; Remer, Lorraine A.; Tanre, Didier; Slutsker, Ilya; Holben, Brent N.; Lau, William K. M. (Technical Monitor)

    2001-01-01

    With the launch of the MODIS sensor on the Terra spacecraft, new data sets of the global distribution and properties of aerosol are being retrieved, and need to be validated and analyzed. A system has been put in place to generate spatial statistics (mean, standard deviation, direction and rate of spatial variation, and spatial correlation coefficient) of the MODIS aerosol parameters over more than 100 validation sites spread around the globe. Corresponding statistics are also computed from temporal subsets of AERONET-derived aerosol data. The means and standard deviations of identical parameters from MODIS and AERONET are compared. Although their means compare favorably, their standard deviations reveal some influence of surface effects on the MODIS aerosol retrievals over land, especially at low aerosol loading. The direction and rate of spatial variation from MODIS are used to study the spatial distribution of aerosols at various locations either individually or comparatively. This paper introduces the methodology for generating and analyzing the data sets used by the two MODIS aerosol validation papers in this issue.

  15. Ku-band radar threshold analysis

    NASA Technical Reports Server (NTRS)

    Weber, C. L.; Polydoros, A.

    1979-01-01

    The statistics of the CFAR threshold for the Ku-band radar was determined. Exact analytical results were developed for both the mean and standard deviations in the designated search mode. The mean value is compared to the results of a previously reported simulation. The analytical results are more optimistic than the simulation results, for which no explanation is offered. The normalized standard deviation is shown to be very sensitive to signal-to-noise ratio and very insensitive to the noise correlation present in the range gates of the designated search mode. The substantial variation in the CFAR threshold is dominant at large values of SNR where the normalized standard deviation is greater than 0.3. Whether or not this significantly affects the resulting probability of detection is a matter which deserves additional attention.

  16. Numerical Investigations of Slip Phenomena in Centrifugal Compressor Impellers

    NASA Astrophysics Data System (ADS)

    Huang, Jeng-Min; Luo, Kai-Wei; Chen, Ching-Fu; Chiang, Chung-Ping; Wu, Teng-Yuan; Chen, Chun-Han

    2013-03-01

    This study systematically investigates the slip phenomena in centrifugal air compressor impellers by CFD. Eight impeller blades for different specific speeds, wrap angles and exit blade angles are designed by compressor design software to analyze their flow fields. In addition to these three design variables, flow rate and number of blades are also varied. Results show that the deviation angle decreases as the flow rate increases. The specific speed is not an important parameter regarding deviation angle or slip factor for general centrifugal compressor impellers. The slip onset position is closely related to the position of the peak value in the blade loading factor distribution. When no recirculation flow is present at the shroud, the variations of slip factor under various flow rates are mainly determined by the difference between maximum blade angle and exit blade angle, Δβmax-2. The solidity should be of little importance to slip factor correlations in centrifugal compressor impellers.

  17. Favorable mortality profile of naltrexone implants for opiate addiction.

    PubMed

    Reece, Albert Stuart

    2010-01-01

    Several reports express concern at the mortality associated with the use of oral naltrexone for opiate dependency. Registry controlled follow-up of patients treated with naltrexone implant and buprenorphine was performed. In the study, 255 naltrexone implant patients were followed for a mean (± standard deviation) of 5.22 ± 1.87 years and 2,518 buprenorphine patients were followed for a mean (± standard deviation) of 3.19 ± 1.61 years, accruing 1,332.22 and 8,030.02 patient-years of follow-up, respectively. The crude mortality rates were 3.00 and 5.35 per 1,000 patient-years, respectively, and the age standardized mortality rate ratio for naltrexone compared to buprenorphine was 0.676 (95% confidence interval = 0.014 to 1.338). Most sex, treatment group, and age comparisons significantly favored the naltrexone implant group. Mortality rates were shown to be comparable to, and intermediate between, published mortality rates of an age-standardized methadone treated cohort and the Australian population. These data suggest that the mortality rate from naltrexone implant is comparable to that of buprenorphine, methadone, and the Australian population.
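
    A crude mortality rate per 1,000 patient-years is simply deaths divided by accrued patient-years, scaled by 1,000. A minimal sketch using the abstract's patient-year denominators; the death counts below are hypothetical values chosen only to be consistent with the reported rates (3.00 and 5.35 per 1,000 patient-years), since the abstract does not state them:

```python
# Crude mortality rate per 1,000 patient-years.
# Death counts are hypothetical, back-calculated from the reported rates.

def crude_mortality_rate(deaths, patient_years):
    return deaths / patient_years * 1000.0

naltrexone_rate = crude_mortality_rate(4, 1332.22)      # close to 3.00
buprenorphine_rate = crude_mortality_rate(43, 8030.02)  # close to 5.35
```
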

  18. Rapidly rotating neutron stars with a massive scalar field—structure and universal relations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doneva, Daniela D.; Yazadjiev, Stoytcho S., E-mail: daniela.doneva@uni-tuebingen.de, E-mail: yazad@phys.uni-sofia.bg

    We construct rapidly rotating neutron star models in scalar-tensor theories with a massive scalar field. The fact that the scalar field has nonzero mass leads to very interesting results since the allowed range of values of the coupling parameters is significantly broadened. Deviations from pure general relativity can be very large for values of the parameters that are in agreement with the observations. We found that the rapid rotation can magnify the differences several times compared to the static case. The universal relations between the normalized moment of inertia and quadrupole moment are also investigated both for the slowly and rapidly rotating cases. The results show that these relations are still EOS independent to a large extent and the deviations from pure general relativity can be large. This places the massive scalar-tensor theories amongst the few alternative theories of gravity that can be tested via the universal I-Love-Q relations.

  19. WKB theory of large deviations in stochastic populations

    NASA Astrophysics Data System (ADS)

    Assaf, Michael; Meerson, Baruch

    2017-06-01

    Stochasticity can play an important role in the dynamics of biologically relevant populations. These span a broad range of scales: from intra-cellular populations of molecules to populations of cells and then to groups of plants, animals and people. Large deviations in stochastic population dynamics—such as those determining population extinction, fixation or switching between different states—are presently a focus of attention of statistical physicists. We review recent progress in applying different variants of dissipative WKB approximation (after Wentzel, Kramers and Brillouin) to this class of problems. The WKB approximation allows one to evaluate the mean time and/or probability of population extinction, fixation and switches resulting from either intrinsic (demographic) noise, or a combination of the demographic noise and environmental variations, deterministic or random. We mostly cover well-mixed populations, single and multiple, but also briefly consider populations on heterogeneous networks and spatial populations. The spatial setting also allows one to study large fluctuations of the speed of biological invasions. Finally, we briefly discuss possible directions of future work.
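
    A minimal sketch of the WKB (eikonal) ansatz this literature applies; the symbols below are generic placeholders rather than a specific model from the review:

```latex
% WKB (eikonal) ansatz for the quasi-stationary distribution of a
% population of typical size N, with rescaled coordinate x = n/N:
P_n \simeq \exp\left[-N\,S(x)\right], \qquad x = n/N .
% To leading order in 1/N the master equation reduces to a
% Hamilton-Jacobi problem for the action S(x):
H\left(x,\, \frac{dS}{dx}\right) = 0 ,
% and the mean time to extinction scales exponentially with N:
\tau \sim \exp\left[N\,\Delta S\right],
% where \Delta S is the action accumulated along the optimal
% (most probable) path to extinction.
```

    This exponential scaling is what makes extinction and switching genuinely rare events, beyond the reach of central-limit-theorem estimates.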

  20. Statistics of velocity fluctuations of Geldart A particles in a circulating fluidized bed riser

    DOE PAGES

    Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji

    2017-11-21

    Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This work provides an observation of particle velocity fluctuations in a large-scale system and a quantitative comparison with Maxwell-Boltzmann statistics.

  1. Criticality experiments and benchmarks for cross section evaluation: the neptunium case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Paradela, C.; Wilson, J. N.; Tarrio, D.; Berthier, B.; Duran, I.; Le Naour, C.; Stéphan, C.

    2013-03-01

    The 237Np neutron-induced fission cross section has been recently measured in a large energy range (from eV to GeV) at the n_TOF facility at CERN. When compared to previous measurements the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we apply a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by enriched uranium 235U so as to approach criticality with fast neutrons. The multiplication factor keff of the calculation is in better agreement with the experiment (the deviation of 750 pcm is reduced to 250 pcm) when we replace the ENDF/B-VII.0 evaluation of the 237Np fission cross section by the n_TOF data. We also explore the hypothesis of deficiencies of the inelastic cross section in 235U, which has been invoked by some authors to explain the deviation of 750 pcm; however, the large distortion of the inelastic cross section that this would require is incompatible with existing measurements. We also show that the ν̄ of 237Np can hardly be incriminated because of the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section, but at least one of the benchmark experiments, where the active deposits have been well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.

  2. SU-F-J-64: Comparison of Dosimetric Robustness Between Proton Therapy and IMRT Plans Following Tumor Regression for Locally Advanced Non-Small Cell Lung Cancer (NSCLC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teng, C; Ainsley, C; Teo, B

    Purpose: In light of tumor regression and normal tissue changes, dose distributions can deviate undesirably from what was planned. As a consequence, replanning is sometimes necessary during treatment to ensure continued tumor coverage or to avoid overdosing organs at risk (OARs). Proton plans are generally thought to be less robust than photon plans because of the proton beam’s higher sensitivity to changes in tissue composition, suggesting also a higher likely replanning rate due to tumor regression. The purpose of this study is to compare dosimetric deviations of forward-calculated double scattering (DS) proton plans and IMRT plans upon tumor regression, and to assess their impact on clinical replanning decisions. Methods: Ten consecutive locally advanced NSCLC patients whose tumors shrank > 50% in volume and who received four or more CT scans during radiotherapy were analyzed. All the patients received proton radiotherapy (6660 cGy, 180 cGy/fx). Dosimetric robustness during therapy was characterized by changes in the planning objective metrics as well as by point-by-point root-mean-squared differences for the entire PTV, ITV, and OARs (heart, cord, esophagus, brachial plexus and lungs) DVHs. Results: Sixty-four pairs of DVHs were reviewed by three clinicians, who requested replanning rates of 16.7% and 18.6% for DS and IMRT plans, respectively, with a high agreement between providers. Robustness of clinical indicators was found to depend on the beam orientation and dose level on the DVH curve. Proton dose increased most in OARs distal to the PTV along the beam path, but these changes were primarily in the mid to low dose levels. In contrast, the variation in IMRT plans occurred primarily in the high dose region. Conclusion: Robustness of clinical indicators depends where on the DVH curves comparisons are made. Similar replanning rates were observed for DS and IMRT plans upon large tumor regression.

  3. Measurements of {Gamma}(Z{sup O} {yields} b{bar b})/{Gamma}(Z{sup O} {yields} hadrons) using the SLD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neal, H.A. Jr. II

    1995-07-01

The quantity R{sub b} = {Gamma}(Z{sup o} {yields}b{bar b})/{Gamma}(Z{sup o} {yields} hadrons) is a sensitive measure of corrections to the Zbb vertex. The precision necessary to observe the top quark mass dependent corrections is close to being achieved. LEP is already observing a 1.8{sigma} deviation from the Standard Model prediction. Knowledge of the top quark mass combined with the observation of deviations from the Standard Model prediction would indicate new physics. Models which include charged Higgs or light SUSY particles yield predictions for R{sub b} appreciably different from the Standard Model. In this thesis two independent methods are used to measure R{sub b}. One uses a general event tag which determines R{sub b} from the rate at which events are tagged as Z{sup o} {yields} b{bar b} in data and the estimated rates at which various flavors of events are tagged from the Monte Carlo. The second method reduces the reliance on the Monte Carlo by separately tagging each hemisphere as containing a b-decay. The rates of single hemisphere tagged events and both hemisphere tagged events are used to determine the tagging efficiency for b-quarks directly from the data thus eliminating the main sources of systematic error present in the event tag. Both measurements take advantage of the unique environment provided by the SLAC Linear Collider (SLC) and the SLAC Large Detector (SLD). From the event tag a result of R{sub b} = 0.230{plus_minus}0.004{sub statistical}{plus_minus}0.013{sub systematic} is obtained. The higher precision hemisphere tag result obtained is R{sub b} = 0.218{plus_minus}0.004{sub statistical}{plus_minus}0.004{sub systematic}{plus_minus}0.003{sub Rc}.

  4. Effect of extreme sea surface temperature events on the demography of an age-structured albatross population.

    PubMed

    Pardo, Deborah; Jenouvrier, Stéphanie; Weimerskirch, Henri; Barbraud, Christophe

    2017-06-19

Climate changes include concurrent changes in environmental mean, variance and extremes, and it is challenging to understand their respective impact on wild populations, especially when contrasting age-dependent responses to climate occur. We assessed how changes in the mean and standard deviation of sea surface temperature (SST), and in the frequency and magnitude of warm SST extreme climatic events (ECE), influenced the stochastic population growth rate log( λ s ) and age structure of a black-browed albatross population. For changes in SST around historical levels observed since 1982, changes in standard deviation had a larger (threefold) and negative impact on log( λ s ) compared to changes in the mean. By contrast, the mean had a positive impact on log( λ s ). The historical SST mean was lower than the optimal SST value for which log( λ s ) was maximized. Thus, a larger environmental mean increased the occurrence of SST close to this optimum that buffered the negative effect of ECE. This 'climate safety margin' (i.e. difference between optimal and historical climatic conditions) and the specific shape of the population growth rate response to climate for a species determine how ECE affect the population. For a wider range in SST, both the mean and standard deviation had a negative impact on log( λ s ), with changes in the mean having a greater effect than the standard deviation. Furthermore, around SST historical levels increases in either mean or standard deviation of the SST distribution led to a younger population, with potentially important conservation implications for black-browed albatrosses. This article is part of the themed issue 'Behavioural, ecological and evolutionary responses to extreme climatic events'. © 2017 The Author(s).

  5. Model for macroevolutionary dynamics.

    PubMed

    Maruvka, Yosef E; Shnerb, Nadav M; Kessler, David A; Ricklefs, Robert E

    2013-07-02

    The highly skewed distribution of species among genera, although challenging to macroevolutionists, provides an opportunity to understand the dynamics of diversification, including species formation, extinction, and morphological evolution. Early models were based on either the work by Yule [Yule GU (1925) Philos Trans R Soc Lond B Biol Sci 213:21-87], which neglects extinction, or a simple birth-death (speciation-extinction) process. Here, we extend the more recent development of a generic, neutral speciation-extinction (of species)-origination (of genera; SEO) model for macroevolutionary dynamics of taxon diversification. Simulations show that deviations from the homogeneity assumptions in the model can be detected in species-per-genus distributions. The SEO model fits observed species-per-genus distributions well for class-to-kingdom-sized taxonomic groups. The model's predictions for the appearance times (the time of the first existing species) of the taxonomic groups also approximately match estimates based on molecular inference and fossil records. Unlike estimates based on analyses of phylogenetic reconstruction, fitted extinction rates for large clades are close to speciation rates, consistent with high rates of species turnover and the relatively slow change in diversity observed in the fossil record. Finally, the SEO model generally supports the consistency of generic boundaries based on morphological differences between species and provides a comparator for rates of lineage splitting and morphological evolution.
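
The speciation-extinction dynamics underlying such models can be sketched with a minimal Gillespie-style simulation of a single genus (an illustrative toy, not the SEO model itself, which additionally tracks genus origination; function and parameter names are assumptions):

```python
import random

def simulate_genus(spec_rate, ext_rate, t_max, seed=None):
    """Neutral birth-death sketch of one genus: start from one species;
    each living species speciates at rate spec_rate and goes extinct at
    rate ext_rate. Returns the number of species at time t_max."""
    rng = random.Random(seed)
    n, t = 1, 0.0
    while n > 0 and t < t_max:
        total = n * (spec_rate + ext_rate)   # combined event rate
        t += rng.expovariate(total)          # waiting time to next event
        if t >= t_max:
            break
        if rng.random() < spec_rate / (spec_rate + ext_rate):
            n += 1                           # speciation
        else:
            n -= 1                           # extinction
    return n
```

Repeating this over many genera and histogramming the nonzero sizes yields the kind of skewed species-per-genus distribution the abstract discusses, with near-equal speciation and extinction rates producing high turnover at roughly constant diversity.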

  6. Surface diffusion effects on growth of nanowires by chemical beam epitaxy

    NASA Astrophysics Data System (ADS)

    Persson, A. I.; Fröberg, L. E.; Jeppesen, S.; Björk, M. T.; Samuelson, L.

    2007-02-01

Surface processes play a large role in the growth of semiconductor nanowires by chemical beam epitaxy. In particular, for III-V nanowires the surface diffusion of group-III species is important to understand in order to control the nanowire growth. In this paper, we have grown InAs-based nanowires positioned by electron beam lithography and have investigated the dependence of the diffusion of In species on temperature, group-III and -V source pressure and group-V source combinations by measuring nanowire growth rate for different nanowire spacings. We present a model which relates the nanowire growth rate to the migration length of In species. The model is fitted to the experimental data for different growth conditions, using the migration length as fitting parameter. The results show that the migration length increases with decreasing temperature and increasing group-V/group-III source pressure ratio. This will most often lead to an increase in growth rate, but deviations will occur due to incomplete decomposition and changes in the sticking coefficient for group-III species. The results also show that the introduction of a phosphorus precursor for growth of InAs1-xPx nanowires decreases the migration length of the In species, with a consequent decrease in nanowire growth rate.

  7. Simple improvements to classical bubble nucleation models.

    PubMed

    Tanaka, Kyoko K; Tanaka, Hidekazu; Angélil, Raymond; Diemand, Jürg

    2015-08-01

    We revisit classical nucleation theory (CNT) for the homogeneous bubble nucleation rate and improve the classical formula using a correct prefactor in the nucleation rate. Most of the previous theoretical studies have used the constant prefactor determined by the bubble growth due to the evaporation process from the bubble surface. However, the growth of bubbles is also regulated by the thermal conduction, the viscosity, and the inertia of liquid motion. These effects can decrease the prefactor significantly, especially when the liquid pressure is much smaller than the equilibrium one. The deviation in the nucleation rate between the improved formula and the CNT can be as large as several orders of magnitude. Our improved, accurate prefactor and recent advances in molecular dynamics simulations and laboratory experiments for argon bubble nucleation enable us to precisely constrain the free energy barrier for bubble nucleation. Assuming the correction to the CNT free energy is of the functional form suggested by Tolman, the precise evaluations of the free energy barriers suggest the Tolman length is ≃0.3σ independently of the temperature for argon bubble nucleation, where σ is the unit length of the Lennard-Jones potential. With this Tolman correction and our prefactor one gets accurate bubble nucleation rate predictions in the parameter range probed by current experiments and molecular dynamics simulations.
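
The classical rate that the paper improves has the generic Arrhenius form; a minimal numerical sketch (the prefactor J0 is exactly the quantity the paper corrects, so here it is simply a parameter, and the function name is illustrative):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def cnt_bubble_rate(prefactor, sigma, delta_p, temperature):
    """Classical nucleation rate J = J0 * exp(-dG / (kB * T)), with the
    CNT free-energy barrier dG = 16*pi*sigma^3 / (3*delta_p^2), where
    sigma is the surface tension (N/m) and delta_p (Pa) the pressure
    difference driving bubble formation."""
    barrier = 16.0 * math.pi * sigma ** 3 / (3.0 * delta_p ** 2)
    return prefactor * math.exp(-barrier / (K_B * temperature))
```

A Tolman-corrected calculation of the kind described above would replace the constant sigma by a curvature-dependent value, sigma(r) ≈ sigma_inf * (1 - 2*delta/r), with the Tolman length delta ≈ 0.3σ quoted in the abstract.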

  8. Contributions from associative and explicit sequence knowledge to the execution of discrete keying sequences.

    PubMed

    Verwey, Willem B

    2015-05-01

Research has provided many indications that highly practiced 6-key sequences are carried out in a chunking mode in which key-specific stimuli past the first are largely ignored. When in such sequences a deviating stimulus occasionally occurs at an unpredictable location, participants fall back to responding to individual stimuli (Verwey & Abrahamse, 2012). The observation that in such a situation execution still benefits from prior practice has been attributed to the ability to operate in an associative mode. To better understand the contributions of motor chunks, associative sequence knowledge, and explicit sequence knowledge to the execution of keying sequences, the present study tested three alternative accounts for the earlier finding of an execution rate increase at the end of 6-key sequences performed in the associative mode. The results provide evidence that the earlier observed execution rate increase can be attributed to the use of explicit sequence knowledge. In the present experiment this benefit was limited to sequences that are executed at the moderately fast rates of the associative mode, and occurred at both the earlier and final elements of the sequences. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Rain attenuation measurements: Variability and data quality assessment

    NASA Technical Reports Server (NTRS)

    Crane, Robert K.

    1989-01-01

Year to year variations in the cumulative distributions of rain rate or rain attenuation are evident in any of the published measurements for a single propagation path that span a period of several years of observation. These variations must be described by models for the prediction of rain attenuation statistics. Now that a large measurement data base has been assembled by the International Radio Consultative Committee, the information needed to assess variability is available. On the basis of 252 sample cumulative distribution functions for the occurrence of attenuation by rain, the expected year to year variation in attenuation at a fixed probability level in the 0.1 to 0.001 percent of a year range is estimated to be 27 percent. The expected deviation from an attenuation model prediction for a single year of observations is estimated to exceed 33 percent when any of the available global rain climate models are employed to estimate the rain rate statistics. The probability distribution for the variation in attenuation or rain rate at a fixed fraction of a year is lognormal. The lognormal behavior of the variate was used to compile the statistics for variability.

  10. Modeling of Sheath Ion-Molecule Reactions in Plasma Enhanced Chemical Vapor Deposition of Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Hash, David B.; Govindan, T. R.; Meyyappan, M.

    2004-01-01

In many plasma simulations, ion-molecule reactions are modeled using ion energy independent reaction rate coefficients that are taken from low temperature selected-ion flow tube experiments. Only exothermic or nearly thermoneutral reactions are considered. This is appropriate for plasma applications such as high-density plasma sources in which sheaths are collisionless and ion temperatures in the bulk plasma do not deviate significantly from the gas temperature. However, for applications at high pressure and large sheath voltages, this assumption does not hold, as the sheaths are collisional and ions gain significant energy in the sheaths from Joule heating. Ion temperatures and thus reaction rates vary significantly across the discharge, and endothermic reactions become important in the sheaths. One such application is plasma enhanced chemical vapor deposition of carbon nanotubes, in which dc discharges are struck at pressures between 1 and 20 Torr with applied voltages in the range of 500-700 V. The present work investigates the importance of the inclusion of ion energy dependent ion-molecule reaction rates and the role of collision induced dissociation in generating radicals from the feedstock used in carbon nanotube growth.

  11. Ionospheric Anomalies on the day of the Devastating Earthquakes during 2000-2012

    NASA Astrophysics Data System (ADS)

    Su, Fanfan; Zhou, Yiyan; Zhu, Fuying

    2013-04-01

The study of abnormal ionospheric changes during large earthquakes has attracted much attention for many years. Many papers have reported deviations of the Total Electron Content (TEC) around the epicenter. Statistical analysis concludes that the anomalous behavior of TEC is related to earthquakes with high probability [1], but individual cases show different features [2][3]. In this study, we carry out a new statistical analysis to investigate the nature of the ionospheric anomalies during devastating earthquakes. To examine the abnormal changes of the ionospheric TEC, we use the TEC database from the Global Ionosphere Map (GIM). The GIM ( ftp://cddisa.gsfc.nasa.gov/pub/gps/products/ionex) draws on about 200 worldwide ground-based GPS receivers. The TEC data, with a resolution of 5° in longitude and 2.5° in latitude, are routinely published at a 2-h time interval. The information on earthquakes is obtained from the USGS ( http://earthquake.usgs.gov/earthquakes/eqarchives/epic/). To avoid interference from magnetic storms, days with Dst≤-20 nT are excluded. Finally, a total of 13 M≥8.0 earthquakes worldwide during 2000-2012 are selected. The 27 days before the main shock are treated as the background days. The 27-day TEC median (Me) and standard deviation (σ) are used to detect the variation of TEC. We set the upper bound BU = Me + 3*σ and the lower bound BL = Me - 3*σ, so that the probability of a new TEC value falling in the interval (BL, BU) is approximately 99.7%. If TEC varies between BU and BL, the deviation (DTEC) equals zero. Otherwise, the deviation between TEC and the violated bound is calculated as DTEC = BU - TEC or BL - TEC. From these deviations, the positive and negative abnormal changes of TEC can be evaluated. We investigate temporal and spatial signatures of the ionospheric anomalies on the day of the devastating earthquakes (M≥8.0).
The results show that the occurrence rates of positive and negative anomalies are almost equal. The most significant anomaly of the day may occur very close in time to the main shock, but not always. The positions of the maximal deviations are always offset from the epicenter; the direction may be southeast, southwest, northeast or northwest with almost equal probability. The anomalies may move toward the epicenter, drift in any direction, or stay at the same position and gradually fade out. No significant feature, such as occurrence time, position, or motion, reliably indicates the source of the anomalies. References: [1] Le, H., J. Y. Liu, et al. (2011). "A statistical analysis of ionospheric anomalies before 736 M6.0+ earthquakes during 2002-2010." J. Geophys. Res. 116. [2] Liu, J. Y., Y. I. Chen, et al. (2009). "Seismoionospheric GPS total electron content anomalies observed before the 12 May 2008 Mw7.9 Wenchuan earthquake." J. Geophys. Res. 114. [3] Rolland, L. M., P. Lognonne, et al. (2011). "Detection and modeling of Rayleigh wave induced patterns in the ionosphere." J. Geophys. Res. 116.
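
The bound-and-deviation test described above is straightforward to sketch. This is an illustrative Python version (the function name and the sign convention, positive for excursions above BU, are assumptions, not taken from the paper):

```python
import statistics

def tec_deviation(background, tec):
    """Test a TEC reading against 3-sigma bounds built from the 27
    background days. Returns 0 inside the bounds; otherwise the excursion
    beyond the violated bound (positive above BU, negative below BL)."""
    me = statistics.median(background)    # 27-day median, Me
    sigma = statistics.stdev(background)  # 27-day standard deviation
    bu = me + 3 * sigma                   # upper bound BU
    bl = me - 3 * sigma                   # lower bound BL
    if tec > bu:
        return tec - bu                   # positive anomaly
    if tec < bl:
        return tec - bl                   # negative anomaly
    return 0.0
```

Applied per grid cell and per 2-h epoch of the GIM maps, this yields the DTEC fields from which the positive and negative anomalies are counted.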

  12. A Quantitative Evaluation of the Flipped Classroom in a Large Lecture Principles of Economics Course

    ERIC Educational Resources Information Center

    Balaban, Rita A.; Gilleskie, Donna B.; Tran, Uyen

    2016-01-01

This research provides evidence that the flipped classroom instructional format increases student final exam performance, relative to the traditional instructional format, in a large lecture principles of economics course. The authors find that the flipped classroom directly improves performance by 0.2 to 0.7 standard deviations, depending on…

  13. One-side forward-backward asymmetry in top quark pair production at the CERN Large Hadron Collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Youkai; Xiao Bo; Zhu Shouhua

    2010-11-01

Both D0 and CDF at the Tevatron reported measurements of the forward-backward asymmetry in top pair production, which showed a possible deviation from the standard model QCD prediction. In this paper, we explore how to examine the same higher-order QCD effects at the more powerful Large Hadron Collider.

  14. Nonlinear Elastic Effects on the Energy Flux Deviation of Ultrasonic Waves in GR/EP Composites

    NASA Technical Reports Server (NTRS)

    Prosser, William H.; Kriz, R. D.; Fitting, Dale W.

    1992-01-01

    In isotropic materials, the direction of the energy flux (energy per unit time per unit area) of an ultrasonic plane wave is always along the same direction as the normal to the wave front. In anisotropic materials, however, this is true only along symmetry directions. Along other directions, the energy flux of the wave deviates from the intended direction of propagation. This phenomenon is known as energy flux deviation and is illustrated. The direction of the energy flux is dependent on the elastic coefficients of the material. This effect has been demonstrated in many anisotropic crystalline materials. In transparent quartz crystals, Schlieren photographs have been obtained which allow visualization of the ultrasonic waves and the energy flux deviation. The energy flux deviation in graphite/epoxy (gr/ep) composite materials can be quite large because of their high anisotropy. The flux deviation angle has been calculated for unidirectional gr/ep composites as a function of both fiber orientation and fiber volume content. Experimental measurements have also been made in unidirectional composites. It has been further demonstrated that changes in composite materials which alter the elastic properties such as moisture absorption by the matrix or fiber degradation, can be detected nondestructively by measurements of the energy flux shift. In this research, the effects of nonlinear elasticity on energy flux deviation in unidirectional gr/ep composites were studied. Because of elastic nonlinearity, the angle of the energy flux deviation was shown to be a function of applied stress. This shift in flux deviation was modeled using acoustoelastic theory and the previously measured second and third order elastic stiffness coefficients for T300/5208 gr/ep. Two conditions of applied uniaxial stress were considered. 
In the first case, the direction of applied uniaxial stress was along the fiber axis (x3) while in the second case it was perpendicular to the fiber axis along the laminate stacking direction (x1).

  15. File Carving and Malware Identification Algorithms Applied to Firmware Reverse Engineering

    DTIC Science & Technology

    2013-03-21

Karresand and Shahmehri consider a byte value rate-of-change frequency metric [32]. Their system calculates the absolute value of the distance between all consecutive bytes, then the rate-of-change means and standard deviations, and uses the same distance metric for both byte value frequency and rate-of-change.

  16. Finding SDSS Galaxy Clusters in 4-dimensional Color Space Using the False Discovery Rate

    NASA Astrophysics Data System (ADS)

    Nichol, R. C.; Miller, C. J.; Reichart, D.; Wasserman, L.; Genovese, C.; SDSS Collaboration

    2000-12-01

We describe a recently developed statistical technique that provides a meaningful cut-off in probability-based decision making. We are concerned with multiple testing, where each test produces a well-defined probability (or p-value). By well-defined, we mean that the null hypothesis used to determine the p-value is fully understood and appropriate. The method is called the False Discovery Rate (FDR), and its largest advantage over other measures is that it allows one to specify a maximal amount of acceptable error. As an example of this tool, we apply FDR to a four-dimensional clustering algorithm using SDSS data. For each galaxy (or test galaxy), we count the number of neighbors that fit within one standard deviation of a four dimensional Gaussian centered on that test galaxy. The mean and standard deviation of that Gaussian are determined from the colors and errors of the test galaxy. We then take that same Gaussian and place it on a random selection of n galaxies and make a similar count. In the limit of large n, we expect the median count around these random galaxies to represent a typical field galaxy. For every test galaxy we determine the probability (or p-value) that it is a field galaxy based on these counts. A low p-value implies that the test galaxy is in a cluster environment. Once we have a p-value for every galaxy, we use FDR to determine at what level we should make our probability cut-off. Once this cut-off is made, we have a final sample of cluster-like galaxies. Using FDR, we also know the maximum amount of field contamination in our cluster galaxy sample. We present our preliminary galaxy clustering results using these methods.
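
The cut-off selection can be illustrated with the standard Benjamini-Hochberg step-up procedure, the usual implementation of FDR control (a sketch; the details of the SDSS analysis may differ):

```python
def fdr_threshold(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up: return the p-value cut-off that keeps
    the expected fraction of false discoveries at or below alpha."""
    m = len(pvalues)
    threshold = 0.0
    # Compare the i-th smallest p-value against (i/m)*alpha; the largest
    # p-value passing this test sets the rejection threshold.
    for i, p in enumerate(sorted(pvalues), start=1):
        if p <= alpha * i / m:
            threshold = p
    return threshold
```

Galaxies whose field-galaxy p-value falls at or below the returned threshold would be retained as cluster-like, with at most a fraction alpha of them expected to be field contamination.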

  17. Effect of Variable Spatial Scales on USLE-GIS Computations

    NASA Astrophysics Data System (ADS)

    Patil, R. J.; Sharma, S. K.

    2017-12-01

Use of appropriate spatial scale is very important in Universal Soil Loss Equation (USLE) based spatially distributed soil erosion modelling. This study aimed at assessment of annual rates of soil erosion at different spatial scales/grid sizes and analysing how changes in spatial scales affect USLE-GIS computations using simulation and statistical variabilities. Efforts have been made in this study to recommend an optimum spatial scale for further USLE-GIS computations for management and planning in the study area. The present research study was conducted in Shakkar River watershed, situated in Narsinghpur and Chhindwara districts of Madhya Pradesh, India. Remote Sensing and GIS techniques were integrated with the Universal Soil Loss Equation (USLE) to predict spatial distribution of soil erosion in the study area at four different spatial scales, viz., 30 m, 50 m, 100 m, and 200 m. Rainfall data, a soil map, a digital elevation model (DEM), an executable C++ program, and a satellite image of the area were used for preparation of the thematic maps for the various USLE factors. Annual rates of soil erosion were estimated for 15 years (1992 to 2006) at the four different grid sizes. The statistical analysis of the four estimated datasets showed that the sediment loss dataset at 30 m spatial scale has the minimum standard deviation (2.16), variance (4.68), and percent deviation from observed values (2.68 - 18.91 %), and the highest coefficient of determination (R2 = 0.874) among all four datasets. Thus, it is recommended to adopt this spatial scale for USLE-GIS computations in the study area due to its minimum statistical variability and better agreement with the observed sediment loss data. This study also indicates a large scope for use of finer spatial scales in spatially distributed soil erosion modelling.
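
The scale comparison rests on a handful of summary statistics; a sketch of how such a comparison might be computed for one grid size (function and variable names are illustrative, not from the study):

```python
def scale_stats(observed, predicted):
    """Statistics used to rank a spatial scale: standard deviation and
    variance of the predicted sediment losses, the range of absolute
    percent deviation from observations, and the coefficient of
    determination R^2."""
    n = len(predicted)
    mean_p = sum(predicted) / n
    variance = sum((p - mean_p) ** 2 for p in predicted) / (n - 1)
    pct_dev = [abs(p - o) / o * 100.0 for o, p in zip(observed, predicted)]
    mean_o = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_o) ** 2 for o in observed)
    return {
        "std": variance ** 0.5,
        "variance": variance,
        "pct_dev_range": (min(pct_dev), max(pct_dev)),
        "r2": 1.0 - ss_res / ss_tot,
    }
```

The grid size with the smallest spread and an R^2 closest to one (here, 30 m) would be the recommended scale.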

  18. Constitutive Modeling of the High-Temperature Flow Behavior of α-Ti Alloy Tube

    NASA Astrophysics Data System (ADS)

    Lin, Yanli; Zhang, Kun; He, Zhubin; Fan, Xiaobo; Yan, Yongda; Yuan, Shijian

    2018-04-01

In the hot metal gas forming process, the deformation conditions, such as temperature, strain rate and deformation degree, often change markedly. Understanding the flow behavior of α-Ti seamless tubes over a relatively wide range of temperatures and strain rates is therefore important. In this study, the stress-strain curves in the temperature range of 973-1123 K and the initial strain rate range of 0.0004-0.4 s-1 were measured by isothermal tensile tests to conduct a constitutive analysis and a deformation behavior analysis. The results show that the flow stress decreases with the decrease in the strain rate and the increase of the deformation temperature. The Fields-Backofen model and Fields-Backofen-Zhang model were used to describe the stress-strain curves. The Fields-Backofen-Zhang model predicts the flow stress better than the Fields-Backofen model, but a large deviation remains at the strain rate of 0.4 s-1. A modified Fields-Backofen-Zhang model is proposed, in which a strain rate term is introduced. This modified Fields-Backofen-Zhang model gives a more accurate description of the flow stress variation under hot forming conditions with a higher strain rate up to 0.4 s-1. Accordingly, it is reasonable to adopt the modified Fields-Backofen-Zhang model for the hot forming process which is likely to reach a higher strain rate, such as 0.4 s-1.
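
The baseline law referred to above is the Fields-Backofen power law; a minimal sketch (the parameter values are placeholders to be fitted per temperature, and the Zhang and modified variants add further softening and strain-rate terms not reproduced here):

```python
def fields_backofen(strain, strain_rate, K, n, m):
    """Fields-Backofen flow stress: sigma = K * eps^n * eps_dot^m, with
    K the strength coefficient, n the strain-hardening exponent and
    m the strain-rate sensitivity."""
    return K * strain ** n * strain_rate ** m
```

With m > 0 the predicted flow stress rises with strain rate and falls as K (fitted per temperature) decreases, matching the qualitative trends reported in the abstract.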

  20. What Predicts Method Effects in Child Behavior Ratings

    ERIC Educational Resources Information Center

    Low, Justin A.; Keith, Timothy Z.; Jensen, Megan

    2015-01-01

    The purpose of this research was to determine whether child, parent, and teacher characteristics such as sex, socioeconomic status (SES), parental depressive symptoms, the number of years of teaching experience, number of children in the classroom, and teachers' disciplinary self-efficacy predict deviations from maternal ratings in a…

  1. Beyond δ: Tailoring marked statistics to reveal modified gravity

    NASA Astrophysics Data System (ADS)

    Valogiannis, Georgios; Bean, Rachel

    2018-01-01

Models which attempt to explain the accelerated expansion of the universe through large-scale modifications to General Relativity (GR) must satisfy the stringent experimental constraints of GR in the solar system. Viable candidates invoke a “screening” mechanism that dynamically suppresses deviations in high density environments, making their overall detection challenging even for ambitious future large-scale structure surveys. We present methods to efficiently simulate the non-linear properties of such theories, and consider how a series of statistics that reweight the density field to accentuate deviations from GR can be applied to enhance the overall signal-to-noise ratio in differentiating the models from GR. Our results demonstrate that the cosmic density field can yield additional, invaluable cosmological information, beyond the simple density power spectrum, that will enable surveys to more confidently discriminate between modified gravity models and ΛCDM.
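
The reweighting idea can be made concrete with a mark function of the kind used in marked-statistics studies (a sketch; the parameter values delta_s and p are illustrative, and the paper's own choice of marks may differ):

```python
def mark(delta, delta_s=0.25, p=1.0):
    """One mark function proposed in the literature for this purpose:
    m(delta) = ((1 + delta_s) / (1 + delta_s + delta))**p.
    Underdense regions (delta < 0), where screening is weak, receive
    marks > 1, so the marked field up-weights exactly the environments
    where deviations from GR survive."""
    return ((1.0 + delta_s) / (1.0 + delta_s + delta)) ** p
```

Multiplying the density field by such marks before computing a power spectrum produces the "marked statistics" of the title, boosting the signal-to-noise for screened modified-gravity models relative to the plain density power spectrum.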

  2. Do Lessons in Nature Boost Subsequent Classroom Engagement? Refueling Students in Flight

    PubMed Central

    Kuo, Ming; Browning, Matthew H. E. M.; Penner, Milbert L.

    2018-01-01

    Teachers wishing to offer lessons in nature may hold back for fear of leaving students keyed up and unable to concentrate in subsequent, indoor lessons. This study tested the hypothesis that lessons in nature have positive—not negative—aftereffects on subsequent classroom engagement. Using carefully matched pairs of lessons (one in a relatively natural outdoor setting and one indoors), we observed subsequent classroom engagement during an indoor instructional period, replicating these comparisons over 10 different topics and weeks in the school year, in each of two third grade classrooms. Pairs were roughly balanced in how often the outdoor lesson preceded or followed the classroom lesson. Classroom engagement was significantly better after lessons in nature than after their matched counterparts for four of the five measures developed for this study: teacher ratings; third-party tallies of “redirects” (the number of times the teacher stopped instruction to direct student attention back onto the task at hand); independent, photo-based ratings made blind to condition; and a composite index each showed a nature advantage; student ratings did not. This nature advantage held across different teachers and held equally over the initial and final 5 weeks of lessons. And the magnitude of the advantage was large. In 48 out of 100 paired comparisons, the nature lesson was a full standard deviation better than its classroom counterpart; in 20 of the 48, the nature lesson was over two standard deviations better. The rate of “redirects” was cut almost in half after a lesson in nature, allowing teachers to teach for longer periods uninterrupted. Because the pairs of lessons were matched on teacher, class (students and classroom), topic, teaching style, week of the semester, and time of day, the advantage of the nature-based lessons could not be attributed to any of these factors. 
It appears that, far from leaving students too keyed up to concentrate afterward, lessons in nature may actually leave students more able to engage in the next lesson, even as students are also learning the material at hand. Such “refueling in flight” argues for including more lessons in nature in formal education. PMID:29354083

  3. Bottleneck Effect on Evolutionary Rate in the Nearly Neutral Mutation Model

    PubMed Central

    Araki, H.; Tachida, H.

    1997-01-01

Variances of evolutionary rates among lineages in some proteins are larger than those expected from simple Poisson processes. This phenomenon is called overdispersion of the molecular clock. If population size N is constant, the overdispersion is observed only in a limited range of 2Nσ under the nearly neutral mutation model, where σ represents the standard deviation of selection coefficients of new mutants. In this paper, we investigated effects of changing population size on the evolutionary rate by computer simulations assuming the nearly neutral mutation model. The size was changed cyclically between two numbers, N(1) and N(2) (N(1) > N(2)), in the simulations. The overdispersion is observed if 2N(2)σ is less than two and the state of reduced size (bottleneck state) continues for more than ~0.1/u generations, where u is the mutation rate. The overdispersion arises mainly because the average fitnesses of only a portion of populations decline when the population size is reduced, and subsequent advantageous substitutions occur only in these populations after the population size becomes large again. Since the fitness reduction after the bottleneck is stochastic, acceleration of the evolutionary rate does not necessarily occur uniformly among loci. From these results, we argue that the nearly neutral mutation model is a candidate mechanism to explain the overdispersed molecular clock. PMID:9335622
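
Overdispersion is conventionally quantified by the index of dispersion of substitution counts across lineages; a minimal sketch (the function name is illustrative):

```python
def dispersion_index(counts):
    """Index of dispersion R = variance / mean of substitution counts
    across replicate lineages over the same time span; R ~ 1 for a
    simple Poisson clock, R > 1 signals an overdispersed clock."""
    n = len(counts)
    mean = sum(counts) / n
    variance = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return variance / mean
```

In simulations like those described above, R computed over many replicate lineages distinguishes the Poisson regime (R near 1) from the bottleneck-driven overdispersed regime (R well above 1).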

  4. Generation of standard gas mixtures of halogenated, aliphatic, and aromatic compounds and prediction of the individual output rates based on molecular formula and boiling point.

    PubMed

    Thorenz, Ute R; Kundel, Michael; Müller, Lars; Hoffmann, Thorsten

    2012-11-01

    In this work, we describe a simple diffusion capillary device for the generation of various organic test gases. Using a set of basic equations, the output rate of the test gas devices can easily be predicted based only on the molecular formula and the boiling point of the compounds of interest. Since these parameters are easily accessible for a large number of potential analytes, even for compounds that are typically not listed in physico-chemical handbooks or internet databases, the adjustment of the test gas source to the concentration range required for the individual analytical application is straightforward. The agreement between predicted and measured values is shown to be valid for different groups of chemicals, such as halocarbons, alkanes, alkenes, and aromatic compounds, and for different dimensions of the diffusion capillaries. The limits of predictability are also explored: the output rates are underpredicted when very thin capillaries are used, and pressure variations are demonstrated to be responsible for this deviation. To overcome the influence of pressure variations, and at the same time to establish a suitable test gas source for highly volatile compounds, we also explore the usability of permeation sources, for example for the generation of molecular bromine test gases.

  5. Pulse rate variability compared with Heart Rate Variability in children with and without sleep disordered breathing.

    PubMed

    Dehkordi, Parastoo; Garde, Ainara; Karlen, Walter; Wensley, David; Ansermino, J Mark; Dumont, Guy A

    2013-01-01

    Heart Rate Variability (HRV), the variation of time intervals between heartbeats, is one of the most promising and widely used quantitative markers of autonomic activity. Traditionally, HRV is measured as the series of instantaneous cycle intervals obtained from the electrocardiogram (ECG). In this study, we investigated the estimation of variation in heart rate from a photoplethysmography (PPG) signal, called pulse rate variability (PRV), and assessed its accuracy as an estimate of HRV in children with and without sleep disordered breathing (SDB). We recorded raw PPGs from 72 children using the Phone Oximeter, an oximeter connected to a mobile phone. Full polysomnography including ECG was simultaneously recorded for each subject. We used correlation and Bland-Altman analysis to compare the HRV and PRV parameters between the two groups of children. Significant correlation (r > 0.90, p < 0.05) and close agreement were found between HRV and PRV for mean intervals, the standard deviation of intervals (SDNN), and the root-mean square of the difference of successive intervals (RMSSD). However, Bland-Altman analysis showed a large divergence for the LF/HF ratio. In addition, children with SDB had depressed SDNN and RMSSD and elevated LF/HF in comparison to children without SDB. In conclusion, PRV provides an accurate estimate of HRV for time-domain analysis but not for frequency-domain parameters.
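    The two time-domain measures compared in this record have simple closed forms. A minimal sketch (the beat-interval values are hypothetical, chosen only for illustration):

    ```python
    import math

    def sdnn(intervals_ms):
        """Standard deviation of the beat-to-beat (NN) intervals (sample SD)."""
        mean = sum(intervals_ms) / len(intervals_ms)
        return math.sqrt(sum((x - mean) ** 2 for x in intervals_ms)
                         / (len(intervals_ms) - 1))

    def rmssd(intervals_ms):
        """Root mean square of successive interval differences."""
        diffs = [b - a for a, b in zip(intervals_ms, intervals_ms[1:])]
        return math.sqrt(sum(d * d for d in diffs) / len(diffs))

    # Illustrative beat-to-beat intervals in milliseconds (hypothetical data).
    nn = [812, 830, 799, 821, 808, 835, 790, 817]
    print(round(sdnn(nn), 1))
    print(round(rmssd(nn), 1))
    ```

    SDNN captures overall variability over the recording, while RMSSD weights short-term beat-to-beat changes, which is why the two can agree between ECG-derived and PPG-derived interval series yet still diverge for frequency-domain quantities such as LF/HF.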

  6. Joint release rate estimation and measurement-by-measurement model correction for atmospheric radionuclide emission in nuclear accidents: An application to wind tunnel experiments.

    PubMed

    Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng

    2018-03-05

    The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements, and the extendibility of the method, are also investigated. The results demonstrate that this method effectively corrects the model biases and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Sympathoadrenal balance and physiological stress response in cattle at spontaneous and PGF2α-induced calving.

    PubMed

    Nagel, Christina; Trenk, Lisa; Aurich, Christine; Ille, Natascha; Pichler, Martina; Drillich, Marc; Pohl, Werner; Aurich, Jörg

    2016-03-15

    Increased cortisol release in parturient cows may either represent a stress response or be part of the endocrine changes that initiate calving. Acute stress elicits an increase in heart rate and a decrease in heart rate variability (HRV). Therefore, we analyzed cortisol concentration, heart rate, and the HRV variables standard deviation of the beat-to-beat interval (SDRR) and root mean square of successive beat-to-beat intervals (RMSSD) in dairy cows allowed to calve spontaneously (SPON, n = 6) or with PGF2α-induced preterm parturition (PG, n = 6). We hypothesized that calving is a stressor, but that induced parturition is less stressful than term calving. Saliva collection for cortisol analysis and electrocardiogram recordings for heart rate and HRV analysis were performed from 32 hours before to 18.3 ± 0.7 hours after delivery. Cortisol concentration increased in SPON and PG cows and peaked 15 minutes after delivery (P < 0.001), but was higher in SPON versus PG cows (P < 0.001) during and within 2 hours after calving. Heart rate peaked during the expulsive phase of labor and was higher in SPON than in PG cows (time × group P < 0.01). SDRR and RMSSD peaked at the end of the expulsive phase of labor (P < 0.001), indicating high vagal activity. SDRR (P < 0.01) and RMSSD (P < 0.05) were higher in SPON versus PG cows. Based on physiological stress parameters, calving is perceived as stressful, but expulsion of the calf is associated with a transiently increased vagal tone, which may enhance uterine contractility. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Evolving geometrical heterogeneities of fault trace data

    NASA Astrophysics Data System (ADS)

    Wechsler, Neta; Ben-Zion, Yehuda; Christofferson, Shari

    2010-08-01

    We perform a systematic comparative analysis of geometrical fault zone heterogeneities using derived measures from digitized fault maps that are not very sensitive to mapping resolution. We employ the digital GIS map of California faults (version 2.0) and analyse the surface traces of active strike-slip fault zones with evidence of Quaternary and historic movements. Each fault zone is broken into segments that are defined as a continuous length of fault bounded by changes of angle larger than 1°. Measurements of the orientations and lengths of fault zone segments are used to calculate the mean direction and misalignment of each fault zone from the local plate motion direction, and to define several quantities that represent the fault zone disorder. These include circular standard deviation and circular standard error of segments, orientation of long and short segments with respect to the mean direction, and normal separation distances of fault segments. We examine the correlations between various calculated parameters of fault zone disorder and the following three potential controlling variables: cumulative slip, slip rate and fault zone misalignment from the plate motion direction. The analysis indicates that the circular standard deviation and circular standard error of segments decrease overall with increasing cumulative slip and increasing slip rate of the fault zones. The results imply that the circular standard deviation and error, quantifying the range or dispersion in the data, provide effective measures of the fault zone disorder, and that the cumulative slip and slip rate (or more generally slip rate normalized by healing rate) represent the fault zone maturity. The fault zone misalignment from plate motion direction does not seem to play a major role in controlling the fault trace heterogeneities. The frequency-size statistics of fault segment lengths can be fitted well by an exponential function over the entire range of observations.
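    The circular statistics used above have compact formulas. A minimal sketch with hypothetical segment strikes (the doubling of angles for axial data is a standard convention for undirected lines, assumed here rather than taken from the paper):

    ```python
    import math

    def circular_stats(angles_deg):
        """Mean direction and circular standard deviation of orientations.

        Fault-trace segments are axial (a strike of 10 deg is the same line
        as 190 deg), so angles are doubled before averaging and halved
        afterwards, the usual trick for axial data.
        """
        c = sum(math.cos(2 * math.radians(a)) for a in angles_deg) / len(angles_deg)
        s = sum(math.sin(2 * math.radians(a)) for a in angles_deg) / len(angles_deg)
        r = math.hypot(c, s)                            # mean resultant length, 0..1
        mean_dir = (math.degrees(math.atan2(s, c)) / 2.0) % 180.0
        circ_std = math.degrees(math.sqrt(-2.0 * math.log(r))) / 2.0
        return mean_dir, circ_std

    # Hypothetical segment strikes clustered near 40 deg: small circular
    # standard deviation, the signature of a mature, high-slip fault zone.
    mature = [38.0, 41.0, 40.0, 39.5, 42.0, 37.5]
    print(circular_stats(mature))
    ```

    A scattered set of strikes would give a mean resultant length r well below 1 and hence a much larger circular standard deviation, which is the dispersion measure the study correlates with cumulative slip and slip rate.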

  9. Comparison of slope instability screening tools following a large storm event and application to forest management and policy

    NASA Astrophysics Data System (ADS)

    Whittaker, Kara A.; McShane, Dan

    2012-04-01

    The objective of this study was to assess and compare the ability of two slope instability screening tools developed by the Washington State Department of Natural Resources (WDNR) to assess landslide risks associated with forestry activities. HAZONE is based on a semi-quantitative method that incorporates the landslide frequency rate and landslide area rate for delivery of mapped landforms. SLPSTAB is a GIS-based model of inherent landform characteristics that utilizes slope geometry derived from DEMs and climatic data. Utilization of slope instability screening tools by geologists, land managers, and regulatory agencies can reduce the frequency and magnitude of landslides. Aquatic habitats are negatively impacted by elevated rates and magnitudes of landslides associated with forest management practices due to high sediment loads and alteration of stream channels and morphology. In 2007 a large storm with heavy rainfall impacted southwestern Washington State, triggering over 2500 landslides. This storm event and the accompanying landslides provide an opportunity to assess the slope stability screening tools developed by WDNR. Landslide density (up to 6.5 landslides per km2) from the storm was highest in the areas designated by the screening tools as high hazard areas, and the two screening tools were equal in their ability to predict landslide locations. Landslides that initiated in low hazard areas may have resulted from a variety of site-specific factors that deviated from assumed model values, from the inadequate identification of potentially unstable landforms due to low resolution DEMs, or from the inadequate implementation of the state Forest Practices Rules. We suggest that slope instability screening tools can be better utilized by forest management planners and regulators to meet policy goals of minimizing landslide rates and impacts to sensitive aquatic species.

  10. Quantifying gait deviations in individuals with rheumatoid arthritis using the Gait Deviation Index.

    PubMed

    Esbjörnsson, A-C; Rozumalski, A; Iversen, M D; Schwartz, M H; Wretenberg, P; Broström, E W

    2014-01-01

    In this study we evaluated the usability of the Gait Deviation Index (GDI), an index that summarizes the amount of deviation in movement from a standard norm, in adults with rheumatoid arthritis (RA). The aims of the study were to evaluate the ability of the GDI to identify gait deviations, assess inter-trial repeatability, and examine the relationship between the GDI and walking speed, physical disability, and pain. Sixty-three adults with RA and 59 adults with typical gait patterns were included in this retrospective case-control study. Following a three-dimensional gait analysis (3DGA), representative gait cycles were selected and GDI scores calculated. To evaluate the effect of walking speed, GDI scores were calculated using both a free-speed and a speed-matched reference set. Physical disability was assessed using the Health Assessment Questionnaire (HAQ) and subjects rated their pain during walking. Adults with RA had significantly increased gait deviations compared to healthy individuals, as shown by lower GDI scores [87.9 (SD = 8.7) vs. 99.4 (SD = 8.3), p < 0.001]. This difference was also seen when adjusting for walking speed [91.7 (SD = 9.0) vs. 99.9 (SD = 8.6), p < 0.001]. It was estimated that a change of ≥ 5 GDI units was required to account for natural variation in gait. There was no evident relationship between the GDI and low/high RA-related physical disability or pain. The GDI seems to be useful for identifying and summarizing gait deviations in individuals with RA. Thus, we consider that the GDI provides an overall measure of gait deviation that may reflect lower extremity pathology and may help clinicians to understand the impact of RA on gait dynamics.

  11. Application of air hammer drilling technology in igneous rocks of Junggar basin

    NASA Astrophysics Data System (ADS)

    Zhao, Hongshan; Feng, Guangtong; Yu, Haiye

    2018-03-01

    Igneous rocks in the Junggar basin posed many technical problems, such as serious well deviation, low penetration rates, and long drilling cycles, because of their hardness, strong abrasiveness, and poor drillability, which severely hindered the exploration and development of the basin. By analyzing the difficulties of gas drilling with roller bits in Well HS 2, conducting mechanics experiments on igneous rock, and examining the rock-breaking mechanism of air hammer drilling and its suitability for igneous rocks, we show that air hammer drilling can achieve deviation control and fast drilling in the igneous rocks of the piedmont zone, and can avoid the wear and fatigue fracture of drill strings, owing to its characteristics of low WOB, low RPM, and high-frequency impact. When first applied in the igneous rocks of Well HS 201, air hammer drilling increased the average penetration rate and one-trip footage by more than 2.45 times and 6.42 times, respectively, compared with gas drilling with a cone bit, while the well deviation was always kept below 2 degrees. Two records for Block HS were set: the fastest penetration rate of 14.29 m/h in the Φ444.5 mm well hole and the highest one-trip footage of 470.62 m in the Φ311.2 mm well hole. Air hammer drilling is therefore an effective way to achieve optimal, fast drilling in the igneous rock formations of the Junggar basin.

  12. Electron transfer statistics and thermal fluctuations in molecular junctions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goswami, Himangshu Prabal; Harbola, Upendra

    2015-02-28

    We derive analytical expressions for the probability distribution function (PDF) for electron transport in a simple model of a quantum junction in the presence of thermal fluctuations. Our approach is based on large deviation theory combined with the generating function method. For a large number of electrons transferred, the PDF is found to decay exponentially in the tails, with different rates due to the applied bias. This asymmetry in the PDF is related to the fluctuation theorem. Statistics of fluctuations are analyzed in terms of the Fano factor. Thermal fluctuations play a quantitative role in determining the statistics of electron transfer; they tend to suppress the average current while enhancing the fluctuations in particle transfer. This gives rise to both bunching and antibunching phenomena, as determined by the Fano factor. The thermal fluctuations and shot noise compete with each other and determine the net (effective) statistics of particle transfer. An exact analytical expression is obtained for the delay time distribution. The optimal values of the delay time between successive electron transfers can be lowered below the corresponding shot noise values by tuning the thermal effects.
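    The Fano factor diagnostic mentioned here is easy to state numerically. A toy classical sketch (a binomial attempt model with a randomly fluctuating rate, standing in for the paper's quantum junction; all numbers are hypothetical):

    ```python
    import random
    import statistics

    def fano(counts):
        """Fano factor F = Var(n) / <n> of transferred-charge counts.
        F = 1 is Poissonian shot noise; F < 1 signals antibunching,
        F > 1 signals bunching."""
        return statistics.variance(counts) / statistics.mean(counts)

    random.seed(7)

    # Fixed-rate transfer: independent tunneling attempts at one rate.
    steady = [sum(1 for _ in range(5000) if random.random() < 0.02)
              for _ in range(300)]

    # Thermally fluctuating rate: each measurement run samples a different
    # effective rate, mimicking thermal enhancement of number fluctuations.
    fluctuating = [sum(1 for _ in range(5000)
                       if random.random() < random.choice([0.01, 0.03]))
                   for _ in range(300)]

    print(fano(steady))       # near 1 (slightly below, F = 1 - p for a binomial)
    print(fano(fluctuating))  # well above 1: bunching from rate fluctuations
    ```

    The competition the abstract describes, shot noise pulling F toward 1 while thermal rate fluctuations push it upward, is exactly what the two cases of this toy model separate.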

  13. Diameter Effect Curve and Detonation Front Curvature Measurements for ANFO

    NASA Astrophysics Data System (ADS)

    Catanach, R. A.; Hill, L. G.

    2002-07-01

    Diameter effect and front curvature measurements are reported for rate stick experiments on commercially available prilled ANFO (ammonium-nitrate/fuel-oil) at ambient temperature. The shots were fired in paper tubes so as to provide minimal confinement. Diameters ranged from 77 mm (approximately failure diameter) to 205 mm, with the tube length being ten diameters in all cases. Each detonation wave shape was fit with an analytic form, from which the local normal velocity Dn and local total curvature kappa were generated as a function of radius R, then plotted parametrically to generate a Dn(kappa) function. The observed behavior deviates substantially from that of previous explosives, for which curves for different diameters overlay well for small kappa but diverge for large kappa, and for which kappa increases monotonically with R. For ANFO, we find that Dn(kappa) curves for individual sticks (1) show little or no overlap, with smaller sticks lying to the right of larger ones, (2) exhibit a large velocity deficit with little kappa variation, and (3) reach a peak kappa at an intermediate R.

  14. A falsely fat curvaton with an observable running of the spectral tilt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peloso, Marco; Sorbo, Lorenzo; Tasinato, Gianmassimo, E-mail: peloso@physics.umn.edu, E-mail: sorbo@physics.umass.edu, E-mail: gianmassimo.tasinato@port.ac.uk

    2014-06-01

    In slow roll inflation, the running of the spectral tilt is generically proportional to the square of the deviation from scale invariance, α_s ∝ (n_s − 1)^2, and is therefore currently undetectable. We present a mechanism able to generate a much larger running within slow roll. The mechanism is based on a curvaton field with a large mass term and a time-evolving normalization. This may happen, for instance, to the angular direction of a complex field in the presence of an evolving radial direction. At the price of a single tuning between the mass term and the rate of change of the normalization, the curvaton can be made effectively light at the CMB scales, giving a spectral tilt in agreement with observations. The lightness is not preserved at later times, resulting in a detectable running of the spectral tilt. This mechanism shows that fields with a large mass term do not necessarily decouple from the inflationary physics, and provides a new tool for model building in inflation.
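    The scaling α_s ∝ (n_s − 1)^2 follows from the standard single-field slow-roll expressions, quoted here from the general slow-roll literature rather than derived in this abstract:

    ```latex
    % Standard single-field slow-roll results, with slow-roll parameters
    % \epsilon, \eta and the second-order parameter \xi^2:
    n_s - 1 \simeq 2\eta - 6\epsilon, \qquad
    \alpha_s \equiv \frac{\mathrm{d} n_s}{\mathrm{d}\ln k}
             \simeq 16\,\epsilon\eta - 24\,\epsilon^2 - 2\,\xi^2 .
    ```

    Since n_s − 1 is first order in the slow-roll parameters while every term in α_s is second order, α_s is generically of order (n_s − 1)^2; evading this estimate is what requires a construction such as the tuned curvaton described above.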

  15. Semi-empirical long-term cycle life model coupled with an electrolyte depletion function for large-format graphite/LiFePO4 lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min

    2017-10-01

    To overcome the limitation of simple empirical cycle life models based on only equivalent circuits, we attempt to couple a conventional empirical capacity loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate a sudden capacity drop at the end of cycling, which is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile having a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle life color maps under different c-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function will be a promising platform to predict long-term cycle lives of large-format LIB cells under various operating conditions.

  16. Radioactivity in the galactic plane

    NASA Technical Reports Server (NTRS)

    Walraven, G. D.; Haymes, R. C.

    1976-01-01

    The paper reports the detection of a large concentration of interstellar radioactivity during balloon-altitude measurements of gamma-ray energy spectra in the band between 0.02 and 12.27 MeV from galactic and extragalactic sources. Enhanced counting rates were observed in three directions towards the plane of the Galaxy; a power-law energy spectrum is computed for one of these directions (designated B 10). A large statistical deviation from the power law in a 1.0-FWHM interval centered near 1.16 MeV is discussed, and the existence of a nuclear gamma-ray line at 1.15 MeV in B 10 is postulated. It is suggested that Ca-44, which emits gamma radiation at 1.156 MeV following the decay of radioactive Sc-44, is a likely candidate for this line, noting that Sc-44 arises from Ti-44 according to explosive models of supernova nucleosynthesis. The 1.16-MeV line flux inferred from the present data is shown to equal the predicted flux for a supernova at a distance of approximately 3 kpc and an age not exceeding about 100 years.

  17. Fluctuating observation time ensembles in the thermodynamics of trajectories

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.; Turner, Robert M.; Garrahan, Juan P.

    2014-03-01

    The dynamics of stochastic systems, both classical and quantum, can be studied by analysing the statistical properties of dynamical trajectories. The properties of ensembles of such trajectories for long, but fixed, times are described by large-deviation (LD) rate functions. These LD functions play the role of dynamical free energies: they are cumulant generating functions for time-integrated observables, and their analytic structure encodes dynamical phase behaviour. This ‘thermodynamics of trajectories’ approach is to trajectories and dynamics what the equilibrium ensemble method of statistical mechanics is to configurations and statics. Here we show that, just like in the static case, there are a variety of alternative ensembles of trajectories, each defined by their global constraints, with that of trajectories of fixed total time being just one of these. We show how the LD functions that describe an ensemble of trajectories where some time-extensive quantity is constant (and large) but where total observation time fluctuates can be mapped to those of the fixed-time ensemble. We discuss how the correspondence between generalized ensembles can be exploited in path sampling schemes for generating rare dynamical trajectories.
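    The Legendre-transform link between a cumulant generating function and its LD rate function can be illustrated with the simplest example, a Poisson counting process (a generic textbook case, not the ensembles studied in the paper; the rate value is arbitrary):

    ```python
    import math

    LAM = 2.0  # event rate of a hypothetical Poisson counting process

    def scgf(s):
        """Scaled cumulant generating function theta(s) = lambda * (e^s - 1)."""
        return LAM * (math.exp(s) - 1.0)

    def rate_function_numeric(k, lo=-8.0, hi=8.0, steps=40001):
        """Legendre transform I(k) = sup_s [s*k - theta(s)], on a grid."""
        ds = (hi - lo) / (steps - 1)
        return max(s * k - scgf(s) for s in (lo + i * ds for i in range(steps)))

    def rate_function_exact(k):
        """Closed form for the Poisson process: I(k) = k ln(k/lambda) - k + lambda."""
        return k * math.log(k / LAM) - k + LAM

    # The numeric transform matches the closed form; I vanishes at k = LAM,
    # the typical rate, and grows for atypical (rare) rates.
    for k in (0.5, 2.0, 5.0):
        print(round(rate_function_numeric(k), 4), round(rate_function_exact(k), 4))
    ```

    The same duality is what makes LD rate functions behave like dynamical free energies: the SCGF generates cumulants of the time-integrated observable, and its Legendre transform gives the exponential cost of observing an atypical time-averaged value.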

  18. The impact of sterile neutrinos on CP measurements at long baselines

    DOE PAGES

    Gandhi, Raj; Kayser, Boris; Masud, Mehedi; ...

    2015-09-01

    With the Deep Underground Neutrino Experiment (DUNE) as an example, we show that the presence of even one sterile neutrino of mass ~1 eV can significantly impact the measurements of CP violation in long baseline experiments. Using a probability level analysis and neutrino-antineutrino asymmetry calculations, we discuss the large magnitude of these effects, and show how they translate into significant event rate deviations at DUNE. These results demonstrate that measurements which, when interpreted in the context of the standard three family paradigm, indicate CP conservation at long baselines may, in fact, hide large CP violation if there is a sterile state. Similarly, any data indicating the violation of CP cannot be properly interpreted within the standard paradigm unless the presence of sterile states of mass O(1 eV) can be conclusively ruled out. Our work underscores the need for a parallel and linked short baseline oscillation program and a highly capable near detector for DUNE, in order that its highly anticipated results on CP violation in the lepton sector may be correctly interpreted.

  19. The impact of sterile neutrinos on CP measurements at long baselines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gandhi, Raj; Kayser, Boris; Masud, Mehedi

    With the Deep Underground Neutrino Experiment (DUNE) as an example, we show that the presence of even one sterile neutrino of mass ~1 eV can significantly impact the measurements of CP violation in long baseline experiments. Using a probability level analysis and neutrino-antineutrino asymmetry calculations, we discuss the large magnitude of these effects, and show how they translate into significant event rate deviations at DUNE. These results demonstrate that measurements which, when interpreted in the context of the standard three family paradigm, indicate CP conservation at long baselines may, in fact, hide large CP violation if there is a sterile state. Similarly, any data indicating the violation of CP cannot be properly interpreted within the standard paradigm unless the presence of sterile states of mass O(1 eV) can be conclusively ruled out. Our work underscores the need for a parallel and linked short baseline oscillation program and a highly capable near detector for DUNE, in order that its highly anticipated results on CP violation in the lepton sector may be correctly interpreted.

  20. Influence of provider and urgent care density across different socioeconomic strata on outpatient antibiotic prescribing in the USA

    PubMed Central

    Klein, Eili Y.; Makowsky, Michael; Orlando, Megan; Hatna, Erez; Braykov, Nikolay P.; Laxminarayan, Ramanan

    2015-01-01

    Objectives Despite a strong link between antibiotic use and resistance, and highly variable antibiotic consumption rates across the USA, drivers of differences in consumption rates are not fully understood. The objective of this study was to examine how provider density affects antibiotic prescribing rates across socioeconomic groups in the USA. Methods We aggregated data on all outpatient antibiotic prescriptions filled in retail pharmacies in the USA in 2000 and 2010 from IMS Health into 3436 geographically distinct hospital service areas and combined this with socioeconomic and structural factors that affect antibiotic prescribing from the US Census. We then used fixed-effect models to estimate the interaction between poverty and the number of physician offices per capita (i.e. physician density) and the presence of urgent care and retail clinics on antibiotic prescribing rates. Results We found large geographical variation in prescribing, driven in part by the number of physician offices per capita. For an increase of one standard deviation in the number of physician offices per capita there was a 25.9% increase in prescriptions per capita. However, the determinants of the prescription rate were dependent on socioeconomic conditions. In poorer areas, clinics substitute for traditional physician offices, reducing the impact of physician density. In wealthier areas, clinics increase the effect of physician density on the prescribing rate. Conclusions In areas with higher poverty rates, access to providers drives the prescribing rate. However, in wealthier areas, where access is less of a problem, a higher density of providers and clinics increases the prescribing rate, potentially due to competition. PMID:25604743

  1. Isomerization reaction dynamics and equilibrium at the liquid-vapor interface of water. A molecular-dynamics study

    NASA Technical Reports Server (NTRS)

    Benjamin, Ilan; Pohorille, Andrew

    1993-01-01

    The gauche-trans isomerization reaction of 1,2-dichloroethane at the liquid-vapor interface of water is studied using molecular-dynamics computer simulations. The solvent bulk and surface effects on the torsional potential of mean force and on barrier recrossing dynamics are computed. The isomerization reaction involves a large change in the electric dipole moment, and as a result the trans/gauche ratio is considerably affected by the transition from the bulk solvent to the surface. Reactive flux correlation function calculations of the reaction rate reveal that deviation from the transition-state theory due to barrier recrossing is greater at the surface than in the bulk water. This suggests that the system exhibits non-Rice-Ramsperger-Kassel-Marcus behavior due to the weak solvent-solute coupling at the water liquid-vapor interface.

  2. Market impact and trading profile of hidden orders in stock markets.

    PubMed

    Moro, Esteban; Vicente, Javier; Moyano, Luis G; Gerig, Austin; Farmer, J Doyne; Vaglica, Gabriella; Lillo, Fabrizio; Mantegna, Rosario N

    2009-12-01

    We empirically study the market impact of trading orders. We are specifically interested in large trading orders that are executed incrementally, which we call hidden orders. These are statistically reconstructed based on information about market member codes using data from the Spanish Stock Market and the London Stock Exchange. We find that market impact is strongly concave, approximately increasing as the square root of order size. Furthermore, as a given order is executed, the impact grows in time according to a power law; after the order is finished, it reverts to a level of about 0.5-0.7 of its value at its peak. We observe that hidden orders are executed at a rate that more or less matches trading in the overall market, except for small deviations at the beginning and end of the order.
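    The square-root impact claim corresponds to a slope of about 0.5 in log-log coordinates. A minimal sketch of how such an exponent is read off (the order sizes and the impact constant below are hypothetical, chosen to follow an exact square-root law):

    ```python
    import math

    # Hypothetical (order size, impact) pairs following the concave
    # square-root law I(Q) = c * sqrt(Q), here with c = 0.1.
    data = [(q, 0.1 * math.sqrt(q)) for q in (10, 100, 1000, 10000)]

    # Ordinary least squares on log(impact) vs log(size) recovers the exponent.
    xs = [math.log(q) for q, _ in data]
    ys = [math.log(i) for _, i in data]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    print(round(slope, 3))  # 0.5 for exact square-root data
    ```

    On real order-flow data the fitted exponent scatters around 0.5 rather than matching it exactly, which is why the abstract says impact grows "approximately" as the square root of order size.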

  3. Market impact and trading profile of hidden orders in stock markets

    NASA Astrophysics Data System (ADS)

    Moro, Esteban; Vicente, Javier; Moyano, Luis G.; Gerig, Austin; Farmer, J. Doyne; Vaglica, Gabriella; Lillo, Fabrizio; Mantegna, Rosario N.

    2009-12-01

    We empirically study the market impact of trading orders. We are specifically interested in large trading orders that are executed incrementally, which we call hidden orders. These are statistically reconstructed based on information about market member codes using data from the Spanish Stock Market and the London Stock Exchange. We find that market impact is strongly concave, approximately increasing as the square root of order size. Furthermore, as a given order is executed, the impact grows in time according to a power law; after the order is finished, it reverts to a level of about 0.5-0.7 of its value at its peak. We observe that hidden orders are executed at a rate that more or less matches trading in the overall market, except for small deviations at the beginning and end of the order.

  4. Sample Selection for Training Cascade Detectors.

    PubMed

    Vállez, Noelia; Deniz, Oscar; Bueno, Gloria

    2015-01-01

    Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. Typically, the positive set has few samples while the negative set must represent anything except the object of interest, so the negative set contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on the selection of the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains, on average, a better partial AUC and smaller standard deviation than the other compared cascade detectors.
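    The stage-to-stage selection of informative negatives can be sketched as follows (a toy sketch under stated assumptions, not the authors' exact procedure: "stages" here are simple threshold tests, and the informative negatives are taken to be those the cascade so far still accepts, i.e. its false positives):

    ```python
    def select_hard_negatives(cascade_stages, negatives):
        """Keep only negatives that pass every stage trained so far.

        These are the cascade's current false positives, the most
        informative samples for training the next stage.
        """
        return [x for x in negatives
                if all(stage(x) for stage in cascade_stages)]

    # Toy 1-D example: each "stage" accepts samples whose score exceeds
    # its threshold, so successive stages see ever-harder negatives.
    stage1 = lambda x: x > 0.3
    stage2 = lambda x: x > 0.6
    negatives = [0.1, 0.4, 0.5, 0.7, 0.9]

    print(select_hard_negatives([stage1], negatives))          # [0.4, 0.5, 0.7, 0.9]
    print(select_hard_negatives([stage1, stage2], negatives))  # [0.7, 0.9]
    ```

    Because easy negatives are discarded at each step, the negative set shrinks toward the decision boundary, which is how such bootstrapping counteracts the extreme imbalance between positive and negative pools.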

  5. Quantifying inhomogeneity in fractal sets

    NASA Astrophysics Data System (ADS)

    Fraser, Jonathan M.; Todd, Mike

    2018-04-01

    An inhomogeneous fractal set is one which exhibits different scaling behaviour at different points. The Assouad dimension of a set is a quantity which finds the ‘most difficult location and scale’ at which to cover the set and its difference from box dimension can be thought of as a first-level overall measure of how inhomogeneous the set is. For the next level of analysis, we develop a quantitative theory of inhomogeneity by considering the measure of the set of points around which the set exhibits a given level of inhomogeneity at a certain scale. For a set of examples, a family of -invariant subsets of the 2-torus, we show that this quantity satisfies a large deviations principle. We compare members of this family, demonstrating how the rate function gives us a deeper understanding of their inhomogeneity.

  6. Radiotherapy in pediatric medulloblastoma: Quality assessment of Pediatric Oncology Group Trial 9031

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miralbell, Raymond; Fitzgerald, T.J.; Laurie, Fran

    2006-04-01

    Purpose: To evaluate the potential influence of radiotherapy quality on survival in high-risk pediatric medulloblastoma patients. Methods and Materials: Trial 9031 of the Pediatric Oncology Group (POG) aimed to study the relative benefit of cisplatin and etoposide randomization of high-risk patients with medulloblastoma to preradiotherapy vs. postradiotherapy treatment. Two-hundred and ten patients were treated according to protocol guidelines and were eligible for the present analysis. Treatment volume (whole brain, spine, posterior fossa, and primary tumor bed) and dose prescription deviations were assessed for each patient. An analysis of first site of failure was undertaken. Event-free and overall survival rates were calculated. A log-rank test was used to determine the significance of potential survival differences between patients with and without major deviations in the radiotherapy procedure. Results: Of 160 patients who were fully evaluable for all treatment quality parameters, 91 (57%) had 1 or more major deviations in their treatment schedule. Major deviations by treatment site were brain (26%), spinal (7%), posterior fossa (40%), and primary tumor bed (17%). Major treatment volume or total dose deviations did not significantly influence overall and event-free survival. Conclusions: Despite major treatment deviations in more than half of fully evaluable patients, underdosage or treatment volume misses were not associated with a worse event-free or overall survival.

  7. Characterizing Accuracy and Precision of Glucose Sensors and Meters

    PubMed Central

    2014-01-01

    There is need for a method to describe precision and accuracy of glucose measurement as a smooth continuous function of glucose level rather than as a step function for a few discrete ranges of glucose. We propose and illustrate a method to generate a "Glucose Precision Profile" showing absolute relative deviation (ARD) and/or %CV versus glucose level to better characterize measurement errors at any glucose level. We examine the relationship between glucose measured by test and comparator methods using linear regression. We examine bias by plotting deviation = (test method - comparator method) versus glucose level. We compute the deviation, absolute deviation (AD), ARD, and standard deviation (SD) for each data pair. We utilize curve smoothing procedures to minimize the effects of random sampling variability to facilitate identification and display of the underlying relationships between ARD or %CV and glucose level. AD, ARD, SD, and %CV display smooth continuous relationships versus glucose level. Estimates of MARD (mean ARD) and %CV are subject to relatively large errors in the hypoglycemic range, due in part to a markedly nonlinear relationship with glucose level and in part to the limited number of observations in that range. The curvilinear relationships of ARD and %CV versus glucose level are helpful when characterizing and comparing the precision and accuracy of glucose sensors and meters. PMID:25037194
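
The per-pair quantities named above (deviation, AD, ARD) and their summaries can be sketched as follows; the function name, summary dictionary, and example readings are illustrative, not the authors' code:

```python
import numpy as np

def precision_metrics(test, comparator):
    """Per-pair error metrics for paired glucose readings (mg/dL):
    deviation, absolute deviation (AD), absolute relative deviation
    (ARD, in %), plus simple summaries (bias, MAD, MARD, SD)."""
    test = np.asarray(test, dtype=float)
    comparator = np.asarray(comparator, dtype=float)
    deviation = test - comparator                  # test minus comparator
    ad = np.abs(deviation)
    ard = 100.0 * ad / comparator                  # relative to comparator
    return {"bias": deviation.mean(), "MAD": ad.mean(),
            "MARD": ard.mean(), "SD": deviation.std(ddof=1)}

m = precision_metrics([95, 160, 52], [100, 150, 50])
print(round(m["MARD"], 2))                         # → 5.22
```

A precision profile in the article's sense would compute these quantities in a sliding window over glucose level and smooth the result, rather than reporting single pooled summaries.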

  8. Role of the standard deviation in the estimation of benchmark doses with continuous data.

    PubMed

    Gaylor, David W; Slikker, William

    2004-12-01

    For continuous data, risk is defined here as the proportion of animals with values above a large percentile (e.g., the 99th) or below a small percentile (e.g., the 1st) of the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
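
The variance-components identity behind this point can be sketched numerically: the observed standard deviation of per-animal averages also carries a measurement-error term, which must be subtracted to recover the between-animal component (values below are invented for illustration, not from the article):

```python
import math

def between_animal_sd(sd_of_means, s_m, n_reps):
    """Recover the between-animal SD s_a from the observed SD of
    per-animal averages, which also carries measurement error:

        sd_of_means**2 = s_a**2 + s_m**2 / n_reps

    Using sd_of_means directly in place of s_a overstates the spread
    and hence overestimates the benchmark dose."""
    var_a = sd_of_means ** 2 - s_m ** 2 / n_reps
    return math.sqrt(max(var_a, 0.0))

# When s_m is about one-third of s_a, the correction is small, as the
# abstract notes (illustrative numbers):
print(round(between_animal_sd(sd_of_means=3.02, s_m=1.0, n_reps=2), 2))  # → 2.94
```

Here the observed SD of 3.02 shrinks only to 2.94 after removing measurement error, consistent with the "relatively small bias" condition stated above.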

  9. GAMA/H-ATLAS: common star formation rate indicators and their dependence on galaxy physical parameters

    NASA Astrophysics Data System (ADS)

    Wang, L.; Norberg, P.; Gunawardhana, M. L. P.; Heinis, S.; Baldry, I. K.; Bland-Hawthorn, J.; Bourne, N.; Brough, S.; Brown, M. J. I.; Cluver, M. E.; Cooray, A.; da Cunha, E.; Driver, S. P.; Dunne, L.; Dye, S.; Eales, S.; Grootes, M. W.; Holwerda, B. W.; Hopkins, A. M.; Ibar, E.; Ivison, R.; Lacey, C.; Lara-Lopez, M. A.; Loveday, J.; Maddox, S. J.; Michałowski, M. J.; Oteo, I.; Owers, M. S.; Popescu, C. C.; Smith, D. J. B.; Taylor, E. N.; Tuffs, R. J.; van der Werf, P.

    2016-09-01

    We compare common star formation rate (SFR) indicators in the local Universe in the Galaxy and Mass Assembly (GAMA) equatorial fields (˜160 deg2), using ultraviolet (UV) photometry from GALEX, far-infrared and sub-millimetre (sub-mm) photometry from the Herschel Astrophysical Terahertz Large Area Survey, and Hα spectroscopy from the GAMA survey. With a high-quality sample of 745 galaxies (median redshift = 0.08), we consider three SFR tracers: UV luminosity corrected for dust attenuation using the UV spectral slope β (SFRUV, corr), Hα line luminosity corrected for dust using the Balmer decrement (BD) (SFRH α, corr), and the combination of UV and infrared (IR) emission (SFRUV + IR). We demonstrate that SFRUV, corr can be reconciled with the other two tracers after applying attenuation corrections by calibrating the infrared excess (IRX, i.e., the IR to UV luminosity ratio) and the attenuation in Hα (derived from the BD) against β. However, β, on its own, is very unlikely to be a reliable attenuation indicator. We find that attenuation correction factors depend on parameters such as stellar mass (M*), z and dust temperature (Tdust), but not on Hα equivalent width or Sérsic index. Due to the large scatter in the IRX versus β correlation, when compared to SFRUV + IR, the β-corrected SFRUV, corr exhibits systematic deviations as a function of IRX, BD and Tdust.

  10. Towards Behavioral Reflexion Models

    NASA Technical Reports Server (NTRS)

    Ackermann, Christopher; Lindvall, Mikael; Cleaveland, Rance

    2009-01-01

    Software architecture has become essential in the struggle to manage today's increasingly large and complex systems. Software architecture views are created to capture important system characteristics on an abstract and, thus, comprehensible level. As the system is implemented and later maintained, it often deviates from the original design specification. Such deviations can have implications for the quality of the system, such as reliability, security, and maintainability. Software architecture compliance checking approaches, such as the reflexion model technique, have been proposed to address this issue by comparing the implementation to a model of the system's architecture design. However, architecture compliance checking approaches focus solely on structural characteristics and ignore behavioral conformance. This is especially an issue in Systems-of-Systems (SoS), which are decompositions of large systems into smaller systems for the sake of flexibility. Deviations of the implementation from its behavioral design often reduce the reliability of the entire SoS. An approach is needed that supports reasoning about behavioral conformance at the architecture level. In order to address this issue, we have developed an approach for comparing the implementation of an SoS to an architecture model of its behavioral design. The approach follows the idea of reflexion models and adapts it to support the compliance checking of behaviors. In this paper, we focus on sequencing properties as they play an important role in many SoS. Sequencing deviations potentially have a severe impact on SoS correctness and qualities. The desired behavioral specification is defined in UML sequence diagram notation and behaviors are extracted from the SoS implementation. The behaviors are then mapped to the model of the desired behavior and the two are compared. Finally, a reflexion model is constructed that shows the deviations between behavioral design and implementation. This paper discusses the approach and shows how it can be applied to investigate reliability issues in SoS.

  11. Long-term Ozone Changes and Associated Climate Impacts in CMIP5 Simulations

    NASA Technical Reports Server (NTRS)

    Eyring, V.; Arblaster, J. M.; Cionni, I.; Sedlacek, J.; Perlwitz, J.; Young, P. J.; Bekki, S.; Bergmann, D.; Cameron-Smith, P.; Collins, W. J.; hide

    2013-01-01

    Ozone changes and associated climate impacts in the Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations are analyzed over the historical (1960-2005) and future (2006-2100) periods under four Representative Concentration Pathways (RCPs). In contrast to CMIP3, where half of the models prescribed constant stratospheric ozone, CMIP5 models all consider past ozone depletion and future ozone recovery. Multimodel mean climatologies and long-term changes in total and tropospheric column ozone calculated from CMIP5 models with either interactive or prescribed ozone are in reasonable agreement with observations. However, some large deviations from observations exist for individual models with interactive chemistry, and these models are excluded in the projections. Stratospheric ozone projections forced with a single halogen scenario but four greenhouse gas (GHG) scenarios show the largest differences in the northern midlatitudes and in the Arctic in spring (approximately 20 and 40 Dobson units (DU) by 2100, respectively). By 2050, these differences are much smaller and negligible over Antarctica in austral spring. Differences in future tropospheric column ozone are mainly caused by differences in methane concentrations and stratospheric input, leading to increases of approximately 10 DU compared to 2000 in RCP 8.5. Large variations in stratospheric ozone, particularly in CMIP5 models with interactive chemistry, drive correspondingly large variations in lower stratospheric temperature trends. The results also illustrate that future Southern Hemisphere summertime circulation changes are controlled by both the ozone recovery rate and the rate of GHG increases, emphasizing the importance of simulating and taking into account ozone forcings when examining future climate projections.

  12. The effects of partial and full correction of refractive errors on sensorial and motor outcomes in children with refractive accommodative esotropia.

    PubMed

    Sefi-Yurdakul, Nazife; Kaykısız, Hüseyin; Koç, Feray

    2018-03-17

    To investigate the effects of partial and full correction of refractive errors on sensorial and motor outcomes in children with refractive accommodative esotropia (RAE). The records of pediatric cases with full RAE were reviewed; their first and last sensorial and motor findings were evaluated in two groups, classified as partial (Group 1) and full correction (Group 2) of refractive errors. The mean age at first admission was 5.84 ± 3.62 years in Group 1 (n = 35) and 6.35 ± 3.26 years in Group 2 (n = 46) (p = 0.335). Mean change in best corrected visual acuity (BCVA) was 0.24 ± 0.17 logarithm of the minimum angle of resolution (logMAR) in Group 1 and 0.13 ± 0.16 logMAR in Group 2 (p = 0.001). Duration of deviation, baseline refraction and amount of reduced refraction showed significant effects on change in BCVA (p < 0.05). Significant correlation was determined between binocular vision (BOV), duration of deviation and uncorrected baseline amount of deviation (p < 0.05). The baseline BOV rates were significantly high in fully corrected Group 2, and also were found to have increased in Group 1 (p < 0.05). Change in refraction was -0.09 ± 1.08 and +0.35 ± 0.76 diopters in Groups 1 and 2, respectively (p = 0.005). Duration of deviation, baseline refraction and the amount of reduced refraction had significant effects on change in refraction (p < 0.05). Change in deviation without refractive correction was -0.74 ± 7.22 prism diopters in Group 1 and -3.24 ± 10.41 prism diopters in Group 2 (p = 0.472). Duration of follow-up and uncorrected baseline deviation showed significant effects on change in deviation (p < 0.05). Although the BOV rates and BCVA were initially high in fully corrected patients, they finally improved significantly in both the fully and partially corrected patients. Full hypermetropic correction may also cause an increase in the refractive error with a possible negative effect on emmetropization. The negative effect of the duration of deviation on BOV and BCVA demonstrates the significance of early treatment in RAE cases.

  13. [Variation pattern and its affecting factors of three-dimensional landscape in urban residential community of Shenyang].

    PubMed

    Zhang, Pei-Feng; Hu, Yuan-Man; Xiong, Zai-Ping; Liu, Miao

    2011-02-01

    Based on the 1:10000 aerial photo in 1997 and the three QuickBird images in 2002, 2005, and 2008, and by using Barista software and GIS and RS techniques, the three-dimensional information of the residential community in Tiexi District of Shenyang was extracted, and the variation pattern of the three-dimensional landscape in the district during its reconstruction in 1997-2008 and related affecting factors were analyzed using the following indices: road density, greening rate, average building height, building height standard deviation, building coverage rate, floor area rate, building shape coefficient, population density, and per capita GDP. The results showed that in 1997-2008, the building area for industry decreased, that for commerce and other public affairs increased, and the area for residents, education, and medical cares basically remained stable. The building number, building coverage rate, and building shape coefficient decreased, while the floor area rate, average building height, height standard deviation, road density, and greening rate increased. Within the limited space of the residential community, the containing capacity of population and economic activity increased, and the environment quality also improved to some extent. The variation degree of average building height increased, but the building energy consumption decreased. Population growth and economic development had positive correlations with floor area rate, road density, and greening rate, but negative correlation with building coverage rate.

  14. Rates of glaucomatous visual field change in a large clinical population.

    PubMed

    Chauhan, Balwantray C; Malik, Rizwan; Shuba, Lesya M; Rafuse, Paul E; Nicolela, Marcelo T; Artes, Paul H

    2014-06-10

    To determine the rate of glaucomatous visual field change in routine clinical care. Mean deviation (MD) rate was computed in one randomly selected eye of all glaucoma patients and suspects with ≥5 examinations in a tertiary eye-care center. Proportions of "fast" (MD rate, <-1 to -2 dB/y) and "catastrophic" (<-2 dB/y) progressors were determined. The MD rates were computed in tertile groups by the number of examinations, baseline age, and MD. The MD rates were compared to the Canadian Glaucoma Study (CGS), a prospective study with IOP interventions mandated by visual field progression, by pairwise matching of patients by baseline MD. There were 2324 patients with median (interquartile range) baseline age and MD of 65 (56, 74) years and -2.44 (-5.44, -0.86) dB, and follow-up of 7.1 (4.8, 10.2) years with 8 (6, 11) examinations. The median MD rate was -0.05 (0.13, -0.30) dB/y, while the mean follow-up IOP was 17.1 (15.0, 19.7) mm Hg. The MD rate was progressively worse, with a doubling of fast and catastrophic progressors, with each tertile of increasing age. Worse MD rate was associated with lower follow-up IOP. Neither MD rate nor the number of fast and catastrophic progressors was significantly different in clinical care patients matched to CGS patients. Most patients under routine glaucoma care demonstrate slow rates of visual field progression. The MD rate in the current study was similar to an interventional prospective study, but considerably less negative compared to published studies with similar design. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
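
A sketch of how an MD rate and the progression categories above might be computed from a series of visual-field examinations; the data series and helper names are hypothetical:

```python
import numpy as np

def md_rate(years, md_values):
    """Slope (dB/y) of a least-squares line through mean deviation (MD)
    versus follow-up time -- one common summary of visual-field change."""
    slope, _intercept = np.polyfit(years, md_values, 1)
    return slope

def classify(rate):
    # Categories as in the abstract: "fast" is -1 to -2 dB/y,
    # "catastrophic" is worse than -2 dB/y.
    if rate < -2.0:
        return "catastrophic"
    if rate < -1.0:
        return "fast"
    return "slow"

years = [0, 1, 2, 3, 4, 5]                   # hypothetical exam times (y)
md = [-2.4, -3.6, -5.1, -6.3, -7.4, -8.7]    # hypothetical MD series (dB)
rate = md_rate(years, md)
print(f"{rate:.2f} dB/y -> {classify(rate)}")   # → -1.26 dB/y -> fast
```

In a clinical dataset this regression would be run per eye over its full examination history, and the slopes pooled into the proportions reported above.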

  15. CATION EXCHANGE BETWEEN CELLS AND PLASMA OF MAMMALIAN BLOOD

    PubMed Central

    Sheppard, C. W.; Martin, W. R.; Beyl, Gertrude

    1951-01-01

    Sodium and potassium exchange has been studied in the blood of the sheep, dog, cow, and man. The potassium exchange rate in human cells is practically unaltered by increasing the plasma potassium concentration approximately threefold. Comparing the results in different species, the exchange rate for potassium shows a rough correlation with the intracellular amount of the element. Expressed as a percentage of the cellular content, sodium tends to exchange more rapidly than potassium. In three instances the specific activity curves deviate from the simple exponential behavior of a two-compartment system. In the exchange of potassium in canine blood the deviation is caused by the presence of a rapidly exchanging fraction in the buffy coat cells. Such an effect does not account for the inhomogeneity of sodium exchange in human blood. PMID:14824508

  16. Acoustic characteristics of voice after severe traumatic brain injury.

    PubMed

    McHenry, M

    2000-07-01

    To describe the acoustic characteristics of voice in individuals with motor speech disorders after traumatic brain injury (TBI). Prospective study of 100 individuals with TBI based on consecutive referrals for motor speech evaluations. Subjects were audio tape-recorded while producing sustained vowels and single word and sentence intelligibility tests. Laryngeal airway resistance was estimated, and voice quality was rated perceptually. None of the subjects evidenced vocal parameters within normal limits. The most frequently occurring abnormal parameter across subjects was amplitude perturbation, followed by voice turbulence index. Twenty-three percent of subjects evidenced deviation in all five parameters measured. The perceptual ratings of breathiness were significantly correlated with both the amplitude perturbation quotient and the noise-to-harmonics ratio. Vocal quality deviation is common in motor speech disorders after TBI and may impact intelligibility.

  17. Advanced symbology for general aviation approach to landing displays

    NASA Technical Reports Server (NTRS)

    Bryant, W. H.

    1983-01-01

    A set of flight tests designed to evaluate the relative utility of candidate displays with advanced symbology for general aviation terminal area instrument flight rules operations is discussed. The symbology was previously evaluated as part of the NASA Langley Research Center's Terminal Configured Vehicle Program for use in commercial airlines. The advanced symbology included vehicle track angle, flight path angle and a perspective representation of the runway. These symbols were selectively drawn on a cathode ray tube (CRT) display along with the roll attitude, pitch attitude, localizer deviation and glideslope deviation. In addition to the CRT display, the instrument panel contained standard turn and bank, altimeter, rate of climb, airspeed, heading, and engine instruments. The symbology was evaluated using tracking performance and pilot subjective ratings for an instrument landing system capture and tracking task.

  18. Estimation of absorbed radiation dose rates in wild rodents inhabiting a site severely contaminated by the Fukushima Dai-ichi nuclear power plant accident.

    PubMed

    Kubota, Yoshihisa; Takahashi, Hiroyuki; Watanabe, Yoshito; Fuma, Shoichi; Kawaguchi, Isao; Aoki, Masanari; Kubota, Masahide; Furuhata, Yoshiaki; Shigemura, Yusaku; Yamada, Fumio; Ishikawa, Takahiro; Obara, Satoshi; Yoshida, Satoshi

    2015-04-01

    The dose rates of radiation absorbed by wild rodents inhabiting a site severely contaminated by the Fukushima Dai-ichi Nuclear Power Plant accident were estimated. The large Japanese field mouse (Apodemus speciosus), also called the wood mouse, was the major rodent species captured in the sampling area, although other species of rodents, such as small field mice (Apodemus argenteus) and Japanese grass voles (Microtus montebelli), were also collected. The external exposure of rodents, calculated from the activity concentrations of radiocesium ((134)Cs and (137)Cs) in litter and soil samples using the ERICA (Environmental Risk from Ionizing Contaminants: Assessment and Management) tool under the assumption that the radionuclides existed as an infinite plane isotropic source, was almost the same as that measured directly with glass dosimeters embedded in rodent abdomens. Our findings suggest that the ERICA tool is useful for estimating external dose rates to small animals inhabiting forest floors; however, the estimated dose rates showed large standard deviations. This could be an indication of the inhomogeneous distribution of radionuclides in the sampled litter and soil. There was a 50-fold difference between minimum and maximum whole-body activity concentrations measured in rodents at the time of capture. The radionuclides retained in rodents after capture decreased exponentially over time. Regression equations indicated that the biological half-life of radiocesium after capture was 3.31 d. At the time of capture, the lowest activity concentration was measured in the lung and was approximately half of the highest concentration, measured in the mixture of muscle and bone. The average internal absorbed dose rate was markedly smaller than the average external dose rate (<10% of the total absorbed dose rate). The average total absorbed dose rate to wild rodents inhabiting the sampling area was estimated to be approximately 52 μGy h(-1) (1.2 mGy d(-1)), even 3 years after the accident. This dose rate exceeds the derived consideration reference level of 0.1-1 mGy d(-1) proposed for the Reference Rat by the International Commission on Radiological Protection (ICRP). Copyright © 2015 Elsevier Ltd. All rights reserved.
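
The exponential-retention fit mentioned above reduces to a log-linear regression; a sketch with synthetic data (the decay constant is chosen to give a half-life near the reported 3.31 d, but these are not the study's measurements):

```python
import math

def biological_half_life(days, activity):
    """Half-life from a log-linear least-squares fit of whole-body
    activity A(t) = A0 * exp(-lam * t); T_half = ln(2) / lam."""
    n = len(days)
    x, y = days, [math.log(a) for a in activity]
    xbar, ybar = sum(x) / n, sum(y) / n
    lam = -sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
          / sum((xi - xbar) ** 2 for xi in x)
    return math.log(2) / lam

# Synthetic activity decaying with a true half-life of ~3.3 d:
days = [0, 1, 2, 3, 4, 5]
act = [1000 * math.exp(-0.21 * t) for t in days]
print(round(biological_half_life(days, act), 2))   # → 3.3
```

With noisy counting data one would fit the same model by weighted regression, but the half-life formula T½ = ln 2 / λ is unchanged.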

  19. A robust control strategy for mitigating renewable energy fluctuations in a real hybrid power system combined with SMES

    NASA Astrophysics Data System (ADS)

    Magdy, G.; Shabib, G.; Elbaset, Adel A.; Qudaih, Yaser; Mitani, Yasunori

    2018-05-01

    Utilizing Renewable Energy Sources (RESs) is attracting great attention as a solution to future energy shortages. However, the irregular nature of RESs and random load deviations cause large frequency and voltage fluctuations. Therefore, in order to benefit from the maximum capacity of the RESs, a robust strategy for mitigating power fluctuations from RESs must be applied. Hence, this paper proposes a design of Load Frequency Control (LFC) coordinated with Superconducting Magnetic Energy Storage (SMES) technology (i.e., an auxiliary LFC), using an optimal PID controller based on Particle Swarm Optimization (PSO), for the Egyptian Power System (EPS) considering high penetration of photovoltaic (PV) power generation. Thus, from the perspective of LFC, the robust control strategy is proposed to maintain the nominal system frequency and mitigate the power fluctuations from RESs against all disturbance sources in the EPS with its multi-source environment. The EPS is decomposed into three dynamic subsystems, namely non-reheat, reheat and hydro power plants, taking the system nonlinearity into consideration. Nonlinear Matlab/Simulink simulations of the EPS combined with the SMES system and PV solar power confirm that the proposed control strategy achieves robust stability by reducing the transient time, minimizing frequency deviations, maintaining the system frequency, preventing conventional generators from exceeding their power ratings during load disturbances, and mitigating the power fluctuations from the RESs.
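
A drastically simplified single-area sketch of PID-based load frequency control under a step load disturbance; the inertia, damping, and PID gains below are invented for illustration (the paper tunes its gains with PSO on a far more detailed multi-source EPS model with SMES):

```python
# Toy single-area load-frequency control with a PID controller.
M, D = 0.2, 0.015            # inertia constant and load damping (p.u., hypothetical)
Kp, Ki, Kd = 2.0, 4.0, 0.1   # hypothetical PID gains
dt, steps = 0.01, 3000       # 30 s of simulation, explicit Euler
dP_load = 0.05               # step load disturbance (p.u.)

f_dev, integ, prev_err = 0.0, 0.0, 0.0
for _ in range(steps):
    err = -f_dev                            # regulate frequency deviation to zero
    integ += err * dt
    deriv = (err - prev_err) / dt
    prev_err = err
    dP_ctrl = Kp * err + Ki * integ + Kd * deriv
    # Swing equation: M * d(f_dev)/dt = control power - load change - damping
    f_dev += dt * (dP_ctrl - dP_load - D * f_dev) / M
print(abs(f_dev) < 1e-3)    # integral action drives the deviation back near zero
```

The integral term is what removes the steady-state frequency offset left by the load step; in the paper this role is shared with the SMES loop, which supplies fast active power during transients.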

  20. Experimental investigations on airborne gravimetry based on compressed sensing.

    PubMed

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-03-18

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high accuracy large scale gravity anomaly data reconstruction. Based on the airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry by the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements.
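
The OMP step described above (greedy atom selection followed by a least-squares re-fit) can be sketched on a toy sparse-recovery problem; the Gaussian sensing matrix, problem sizes, and seed are arbitrary, and this is not the flight-test pipeline:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: k times, pick the column of A most
    correlated with the current residual, then re-fit the selected
    columns to y by least squares."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Toy noiseless demo: recover a 4-sparse signal from 64 of 128 coefficients.
rng = np.random.default_rng(1)
n, m, k = 128, 64, 4
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
x_hat = omp(A, A @ x_true, k)
print(float(np.max(np.abs(x_hat - x_true))))   # near machine precision
```

Because the residual is re-orthogonalized against all selected columns at every iteration, OMP never picks the same atom twice and terminates after exactly k steps.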

  2. A method for investigating relative timing information on phylogenetic trees.

    PubMed

    Ford, Daniel; Matsen, Frederick A; Stadler, Tanja

    2009-04-01

    In this paper, we present a new way to describe the timing of branching events in phylogenetic trees. Our description is in terms of the relative timing of diversification events between sister clades; as such it is complementary to existing methods using lineages-through-time plots which consider diversification in aggregate. The method can be applied to look for evidence of diversification happening in lineage-specific "bursts", or the opposite, where diversification between 2 clades happens in an unusually regular fashion. In order to be able to distinguish interesting events from stochasticity, we discuss 2 classes of neutral models on trees with relative timing information and develop a statistical framework for testing these models. These model classes include both the coalescent with ancestral population size variation and global rate speciation-extinction models. We end the paper with 2 example applications: first, we show that the evolution of the hepatitis C virus deviates from the coalescent with arbitrary population size. Second, we analyze a large tree of ants, demonstrating that a period of elevated diversification rates does not appear to have occurred in a bursting manner.

  3. Influence of lateral discomfort on the stability of traffic flow based on visual angle car-following model

    NASA Astrophysics Data System (ADS)

    Zheng, Liang; Zhong, Shiquan; Jin, Peter J.; Ma, Shoufeng

    2012-12-01

    Due to poor road markings and irregular driving behaviors, not every vehicle is positioned in the center of its lane. The deviation from the center can cause discomfort to drivers in the neighboring lane, which is referred to as lateral discomfort (or lateral friction). Such lateral discomfort can be incorporated into the driver stimulus-response framework by considering the visual angle and its changing rate from the psychological viewpoint. In this study, a two-lane visual angle based car-following model is proposed and its stability condition is obtained through linear stability theory. Further derivations indicate that the neutral stability line of the model is asymmetric and that four factors, namely the vehicle width and length, the lateral separation, and the sensitivity to the changing rate of the visual angle, have large impacts on the stability of traffic flow. Numerical simulations further verify these theoretical results, and demonstrate that diverging, merging and lane-changing behaviors can break the original steady state and cause traffic fluctuations. However, these fluctuations may be alleviated to some extent by reducing the lateral discomfort.
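
The visual-angle stimulus underlying such models can be sketched directly; the geometry below (angle subtended by the lead vehicle's width at the current gap, and its time derivative) is standard, though the paper's full model also includes lateral-separation terms and the example numbers are hypothetical:

```python
import math

def visual_angle(width, gap):
    """Angle (rad) subtended by a lead vehicle of the given width at the
    given longitudinal gap: theta = 2 * atan(width / (2 * gap))."""
    return 2.0 * math.atan(width / (2.0 * gap))

def visual_angle_rate(width, gap, closing_speed):
    """Time derivative of theta when the gap shrinks at closing_speed,
    obtained by differentiating theta(gap(t)):
        d(theta)/dt = width * closing_speed / (gap**2 + width**2 / 4)."""
    return width * closing_speed / (gap ** 2 + width ** 2 / 4.0)

# Hypothetical example: 1.8 m wide lead car, 30 m ahead, closing at 2 m/s.
theta = visual_angle(1.8, 30.0)
theta_dot = visual_angle_rate(1.8, 30.0, 2.0)
print(f"theta = {theta:.4f} rad, d(theta)/dt = {theta_dot:.5f} rad/s")
```

In a visual-angle car-following model the driver's acceleration responds to theta and theta_dot rather than to the raw gap and speed difference, which is what lets lateral geometry enter the stability analysis.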

  4. Seizures as a Complication of Congenital Zika Syndrome in Early Infancy.

    PubMed

    Oliveira-Filho, Jamary; Felzemburgh, Ridalva; Costa, Federico; Nery, Nivison; Mattos, Adriana; Henriques, Daniele F; Ko, Albert I; For The Salvador Zika Response Team

    2018-06-01

    Zika virus transmission in Brazil was linked to a large outbreak of microcephaly, but less is known about longer term anthropometric and neurological outcomes. We studied a cohort of infants born between October 31, 2015, and January 9, 2016, in a state maternity hospital, followed up for 101 ± 28 days by home visits. Microcephaly (head circumference more than 2 standard deviations below the mean, Intergrowth standard) occurred in 62 of 412 (15%) births. Congenital Zika syndrome (CZS) was diagnosed in 29 patients. Among CZS patients, we observed a significant gain in anthropometric measures (P < 0.001) but no significant gain in percentile for these measures. The main neurological outcome was epilepsy, occurring in 48% of infants at a rate of 15.6 cases per 100 patient-months, frequently requiring multiple anti-seizure medications. The cumulative fatality rate was 7.4% (95% confidence interval: 2.1-23.4%). Health-care professionals should be alerted to the high risk of epilepsy and death associated with CZS in early infancy and the need to actively screen for seizures and initiate timely treatment.

  5. Effects of Nonlinear Inhomogeneity on the Cosmic Expansion with Numerical Relativity.

    PubMed

    Bentivegna, Eloisa; Bruni, Marco

    2016-06-24

    We construct a three-dimensional, fully relativistic numerical model of a universe filled with an inhomogeneous pressureless fluid, starting from initial data that represent a perturbation of the Einstein-de Sitter model. We then measure the departure of the average expansion rate with respect to this homogeneous and isotropic reference model, comparing local quantities to the predictions of linear perturbation theory. We find that collapsing perturbations reach the turnaround point much earlier than expected from the reference spherical top-hat collapse model and that the local deviation of the expansion rate from the homogeneous one can be as high as 28% at an underdensity, for an initial density contrast of 10^{-2}. We then study, for the first time, the exact behavior of the backreaction term Q_{D}. We find that, for small values of the initial perturbations, this term exhibits a 1/a scaling, and that it is negative with a linearly growing absolute value for larger perturbation amplitudes, thereby contributing to an overall deceleration of the expansion. Its magnitude, on the other hand, remains very small even for relatively large perturbations.

  6. Effects of Noise on Ecological Invasion Processes: Bacteriophage-mediated Competition in Bacteria

    NASA Astrophysics Data System (ADS)

    Joo, Jaewook; Eric, Harvill; Albert, Reka

    2007-03-01

    Pathogen-mediated competition, through which an invasive species carrying and transmitting a pathogen can be a superior competitor to a more vulnerable resident species, is one of the principal driving forces influencing biodiversity in nature. Using an experimental system of bacteriophage-mediated competition in bacterial populations and a deterministic model, we have shown in [Joo et al 2005] that the competitive advantage conferred by the phage depends only on the relative phage pathology and is independent of the initial phage concentration and other phage and host parameters such as the infection-causing contact rate, the spontaneous and infection-induced lysis rates, and the phage burst size. Here we investigate the effects of stochastic fluctuations on bacterial invasion facilitated by bacteriophage, and examine the validity of the deterministic approach. We use both numerical and analytical methods of stochastic processes to identify the source of noise and assess its magnitude. We show that the conclusions obtained from the deterministic model are robust against stochastic fluctuations, yet deviations become pronounced when the phage are more pathological to the invading bacterial strain.

  7. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    PubMed

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.

  8. Nonideal Rayleigh–Taylor mixing

    PubMed Central

    Lim, Hyunkyung; Iwerks, Justin; Glimm, James; Sharp, David H.

    2010-01-01

    Rayleigh–Taylor mixing is a classical hydrodynamic instability that occurs when a light fluid pushes against a heavy fluid. The two main sources of nonideal behavior in Rayleigh–Taylor (RT) mixing are regularizations (physical and numerical), which produce deviations from a pure Euler equation, scale invariant formulation, and nonideal (i.e., experimental) initial conditions. The Kolmogorov theory of turbulence predicts stirring at all length scales for the Euler fluid equations without regularization. We interpret mathematical theories of existence and nonuniqueness in this context, and we provide numerical evidence for dependence of the RT mixing rate on nonideal regularizations; in other words, indeterminacy when modeled by Euler equations. Operationally, indeterminacy shows up as nonunique solutions for RT mixing, parametrized by Schmidt and Prandtl numbers, in the large Reynolds number (Euler equation) limit. Verification and validation evidence is presented for the large eddy simulation algorithm used here. Mesh convergence depends on breaking the nonuniqueness with explicit use of the laminar Schmidt and Prandtl numbers and their turbulent counterparts, defined in terms of subgrid scale models. The dependence of the mixing rate on the Schmidt and Prandtl numbers and other physical parameters will be illustrated. We demonstrate numerically the influence of initial conditions on the mixing rate. Both the dominant short wavelength initial conditions and long wavelength perturbations are observed to play a role. By examination of two classes of experiments, we observe the absence of a single universal explanation, with long and short wavelength initial conditions, and the various physical and numerical regularizations contributing in different proportions in these two different contexts. PMID:20615983

  9. Significant calendar period deviations in testicular germ cell tumors indicate that postnatal exposures are etiologically relevant.

    PubMed

    Speaks, Crystal; McGlynn, Katherine A; Cook, Michael B

    2012-10-01

    The current working model of type II testicular germ cell tumor (TGCT) pathogenesis states that carcinoma in situ arises during embryogenesis, is a necessary precursor, and always progresses to cancer. An implicit condition of this model is that only in utero exposures affect the development of TGCT in later life. In an age-period-cohort analysis, this working model predicts an absence of calendar period deviations. We tested this prediction using data from the SEER registries of the United States. We assessed age-period-cohort models of TGCTs, seminomas, and nonseminomas for the period 1973-2008. Analyses were restricted to whites diagnosed at ages 15-74 years. We tested whether calendar period deviations were significant in TGCT incidence trends adjusted for age deviations and cohort effects. This analysis included 32,250 TGCTs (18,475 seminomas and 13,775 nonseminomas). Seminoma incidence trends have increased with an average annual percentage change in log-linear rates (net drift) of 1.25%, relative to just 0.14% for nonseminoma. In more recent time periods, TGCT incidence trends have plateaued and then undergone a slight decrease. Calendar period deviations were highly statistically significant in models of TGCT (p = 1.24 × 10^{-9}) and seminoma (p = 3.99 × 10^{-14}), after adjustment for age deviations and cohort effects; results for nonseminoma (p = 0.02) indicated that the effects of calendar period were much more muted. Calendar period deviations play a significant role in incidence trends of TGCT, which indicates that postnatal exposures are etiologically relevant.
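    The "net drift" quoted above is the average annual percentage change implied by a log-linear trend in rates. A minimal sketch of that conversion, on synthetic rates (not SEER data), using the standard slope-to-percent formula:

    ```python
    import numpy as np

    # Hypothetical incidence rates per 100,000 over successive calendar years,
    # generated with ~1.25%/yr exponential growth for illustration.
    years = np.arange(1973, 2009)
    rates = 3.0 * np.exp(0.0124 * (years - 1973))

    # Fit a log-linear trend: log(rate) = intercept + slope * year.
    slope, _ = np.polyfit(years, np.log(rates), 1)

    # Net drift: average annual percentage change implied by the slope.
    net_drift = (np.exp(slope) - 1.0) * 100.0
    print(round(net_drift, 2))  # 1.25
    ```

    Full age-period-cohort models additionally partition age, period, and cohort effects; this sketch only shows how a log-linear slope maps to an annual percentage change.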

  10. Longitudinal and Cross-Sectional Analyses of Visual Field Progression in Participants of the Ocular Hypertension Treatment Study (OHTS)

    PubMed Central

    Chauhan, Balwantray C; Keltner, John L; Cello, Kim E; Johnson, Chris A; Anderson, Douglas R; Gordon, Mae O; Kass, Michael A

    2014-01-01

    Purpose Visual field progression can be determined by evaluating the visual field in serial examinations (longitudinal analysis), or by a change in classification derived from comparison to age-matched normal data in single examinations (cross-sectional analysis). We determined the agreement between these two approaches in data from the Ocular Hypertension Treatment Study (OHTS). Methods Visual field data from 3088 eyes of 1570 OHTS participants (median follow-up 7 years, 15 tests with static automated perimetry) were analysed. Longitudinal analyses were performed with change probability with total and pattern deviation, and cross-sectional analysis with the Glaucoma Hemifield Test, Corrected Pattern Standard Deviation, and Mean Deviation. The rates of Mean Deviation and General Height change were compared to estimate the degree of diffuse loss in emerging glaucoma. Results The agreement on progression in longitudinal and cross-sectional analyses ranged from 50% to 61% and remained nearly constant across a wide range of criteria. In contrast, the agreement on absence of progression ranged from 97% to 99.7%, being highest for the stricter criteria. Analyses of pattern deviation were more conservative than total deviation, with a 3 to 5 times lower incidence of progression. Most participants developing field loss had both diffuse and focal change. Conclusions Despite considerable overall agreement, between 40% and 50% of eyes identified as having progressed with either longitudinal or cross-sectional analyses were identified with only one of the analyses. Because diffuse change is part of early glaucomatous damage, pattern deviation analyses may underestimate progression in patients with ocular hypertension. PMID:21149774

  11. Happy orang-utans live longer lives.

    PubMed

    Weiss, Alexander; Adams, Mark J; King, James E

    2011-12-23

    Nonhuman primate ageing resembles its human counterpart. Moreover, ratings of subjective well-being traits in chimpanzees, orang-utans and rhesus macaques are similar to those of humans: they are intercorrelated, heritable, and phenotypically and genetically related to personality. We examined whether, as in humans, orang-utan subjective well-being was related to longer life. The sample included 184 zoo-housed orang-utans followed up for approximately 7 years. Age, sex, species and number of transfers were available for all subjects, and 172 subjects were rated on at least one item of a subjective well-being scale. Of the 31 orang-utans that died, 25 died a mean of 3.4 years after being rated. Even in a model that included, and therefore statistically adjusted for, sex, age, species and transfers, orang-utans rated as being "happier" lived longer. The risk differential between orang-utans that were one standard deviation above and one standard deviation below baseline in subjective well-being was comparable to an age difference of approximately 11 years. This finding suggests that impressions of the subjective well-being of captive great apes are valid indicators of their welfare and longevity.

  12. Technology research for strapdown inertial experiment and digital flight control and guidance

    NASA Technical Reports Server (NTRS)

    Carestia, R. A.; Cottrell, D. E.

    1985-01-01

    A helicopter flight-test program to evaluate the performance of Honeywell's Tetrad, a strapdown laser-gyro inertial navigation system, is discussed. The results of 34 flights showed a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n.mi., with a standard deviation of 1.48 n.mi.; and a modeled mean-position-error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. Tetrad's four ring laser gyros provided reliable and accurate angular rate sensing during the test program, and no sensor failures were detected during the evaluation. Criteria suitable for investigating cockpit systems in rotorcraft were developed. These criteria led to the development of two basic simulators. The first was a standard simulator that could be used to obtain baseline information for studying pilot workload and interactions. The second was an advanced simulator that integrated the RODAAS developed by Honeywell. The second area also included surveying the aerospace industry to determine the level of use and impact of microcomputers and related components on avionics systems.

  13. Flight test results of the strapdown ring laser gyro tetrad inertial navigation system

    NASA Technical Reports Server (NTRS)

    Carestia, R. A.; Hruby, R. J.; Bjorkman, W. S.

    1983-01-01

    A helicopter flight test program undertaken to evaluate the performance of Tetrad (a strapdown, laser gyro, inertial navigation system) is described. The results of 34 flights show a mean final navigational velocity error of 5.06 knots, with a standard deviation of 3.84 knots; a corresponding mean final position error of 2.66 n.mi., with a standard deviation of 1.48 n.mi.; and a modeled mean position error growth rate for the 34 tests of 1.96 knots, with a standard deviation of 1.09 knots. No laser gyro or accelerometer failures were detected during the flight tests. Offline parity-residual studies used simulated failures with the prerecorded flight test and laboratory test data. The airborne Tetrad system's failure-detection logic, exercised during the tests, successfully demonstrated the detection of simulated "hard" failures and the system's ability to continue navigating by removing the simulated faulted sensor from the computations. Tetrad's four ring laser gyros provided reliable and accurate angular rate sensing during the four years of the test program, and no sensor failures were detected during the evaluation of free inertial navigation performance.

  14. Transient in-plane thermal transport in nanofilms with internal heating

    PubMed Central

    Cao, Bing-Yang

    2016-01-01

    Wide applications of nanofilms in electronics necessitate an in-depth understanding of nanoscale thermal transport, which significantly deviates from Fourier's law. Great efforts have focused on the effective thermal conductivity under temperature difference, while it is still ambiguous whether the diffusion equation with an effective thermal conductivity can accurately characterize the nanoscale thermal transport with internal heating. In this work, transient in-plane thermal transport in nanofilms with internal heating is studied via Monte Carlo (MC) simulations in comparison to the heat diffusion model and mechanism analyses using Fourier transform. Phonon-boundary scattering leads to larger temperature rise and slower thermal response rate when compared with the heat diffusion model based on Fourier's law. The MC simulations are also compared with the diffusion model with effective thermal conductivity. In the first case of continuous internal heating, the diffusion model with effective thermal conductivity under-predicts the temperature rise by the MC simulations at the initial heating stage, while the deviation between them gradually decreases and vanishes with time. By contrast, for the one-pulse internal heating case, the diffusion model with effective thermal conductivity under-predicts both the peak temperature rise and the cooling rate, so the deviation can always exist. PMID:27118903

  15. Transient in-plane thermal transport in nanofilms with internal heating.

    PubMed

    Hua, Yu-Chao; Cao, Bing-Yang

    2016-02-01

    Wide applications of nanofilms in electronics necessitate an in-depth understanding of nanoscale thermal transport, which significantly deviates from Fourier's law. Great efforts have focused on the effective thermal conductivity under temperature difference, while it is still ambiguous whether the diffusion equation with an effective thermal conductivity can accurately characterize the nanoscale thermal transport with internal heating. In this work, transient in-plane thermal transport in nanofilms with internal heating is studied via Monte Carlo (MC) simulations in comparison to the heat diffusion model and mechanism analyses using Fourier transform. Phonon-boundary scattering leads to larger temperature rise and slower thermal response rate when compared with the heat diffusion model based on Fourier's law. The MC simulations are also compared with the diffusion model with effective thermal conductivity. In the first case of continuous internal heating, the diffusion model with effective thermal conductivity under-predicts the temperature rise by the MC simulations at the initial heating stage, while the deviation between them gradually decreases and vanishes with time. By contrast, for the one-pulse internal heating case, the diffusion model with effective thermal conductivity under-predicts both the peak temperature rise and the cooling rate, so the deviation can always exist.

  16. A determination of the absolute radiant energy of a Robertson-Berger meter sunburn unit

    NASA Astrophysics Data System (ADS)

    DeLuisi, John J.; Harris, Joyce M.

    Data from a Robertson-Berger (RB) sunburn meter were compared with concurrent measurements obtained with an ultraviolet double monochromator (DM), and the absolute energy of one sunburn unit measured by the RB-meter was determined. It was found that at a solar zenith angle of 30° one sunburn unit (SU) is equivalent to 35 ± 4 mJ cm^{-2}, and at a solar zenith angle of 69°, one SU is equivalent to 20 ± 2 mJ cm^{-2} (relative to a wavelength of 297 nm), where the rate of change is non-linear. The deviation is due to the different response functions of the RB-meter and the DM system used to simulate the response of human skin to the incident UV solar spectrum. The average growth rate of the deviation with increasing solar zenith angle was found to be 1.2% per degree between solar zenith angles of 30 and 50° and 2.3% per degree between solar zenith angles of 50 and 70°. The deviations of response with solar zenith angle were found to be consistent with reported RB-meter characteristics.

  17. Large short-term deviations from dipolar field during the Levantine Iron Age Geomagnetic Anomaly ca. 1050-700 BCE

    NASA Astrophysics Data System (ADS)

    Shaar, R.; Tauxe, L.; Ebert, Y.

    2017-12-01

    Continuous decadal-resolution paleomagnetic data from archaeological and sedimentary sources in the Levant revealed the existence of a local high-field anomaly, which spanned the first 350 years of the first millennium BCE. This so-called "Levantine Iron Age geomagnetic Anomaly" (LIAA) was characterized by a high average geomagnetic field (virtual axial dipole moments, VADM > 140 ZAm², nearly twice today's field), short decadal-scale geomagnetic spikes (VADM of 160-185 ZAm²), fast directional and intensity variations, and substantial deviation (20°-25°) from the dipole field direction. Similar high field values in the time frame of the LIAA have been observed north and northeast of the Levant: Eastern Anatolia, Turkmenistan, and Georgia. West of the Levant, in the Balkans, field values at the same time are moderate to low. The overall data suggest that the LIAA is a manifestation of a local positive geomagnetic field anomaly similar in magnitude and scale to the presently active negative South Atlantic Anomaly. In this presentation we review the overall archaeomagnetic and sedimentary evidence supporting the local-anomaly hypothesis, and compare these observations with today's IGRF field. We analyze the global data during the first two millennia BCE, which suggest some unexpectedly large deviations from a simple dipolar geomagnetic structure.

  18. Vocal singing by prelingually-deafened children with cochlear implants.

    PubMed

    Xu, Li; Zhou, Ning; Chen, Xiuwu; Li, Yongxin; Schultz, Heather M; Zhao, Xiaoyan; Han, Demin

    2009-09-01

    The coarse pitch information provided by cochlear implants might hinder the development of singing in prelingually-deafened pediatric users. In the present study, seven prelingually-deafened children with cochlear implants (5.4-12.3 years old) each sang the song most familiar to him or her. The control group consisted of 14 normal-hearing children (4.1-8.0 years old). The fundamental frequencies (F0) of each note in the recorded songs were extracted. The following five metrics were computed based on the reference music scores: (1) F0 contour direction of the adjacent notes, (2) F0 compression ratio of the entire song, (3) mean deviation of the normalized F0 across the notes, (4) mean deviation of the pitch intervals, and (5) standard deviation of the note duration differences. Children with cochlear implants showed significantly poorer performance in the pitch-based assessments than the normal-hearing children. No significant differences were seen between the two groups in the rhythm-based measure. Prelingually-deafened children with cochlear implants have significant deficits in singing due to their inability to manipulate pitch in the correct directions and to produce accurate pitch height. Future studies with a larger sample size are warranted in order to account for the large variability in singing performance.
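    Two of the pitch-based metrics above can be sketched as follows; the exact scoring conventions (sign agreement for contour direction, semitone range ratio for compression) are assumptions for illustration, not the authors' published definitions:

    ```python
    import numpy as np

    def contour_direction_score(sung_f0, ref_f0):
        """Fraction of adjacent-note pairs whose pitch moves in the same
        direction (up/down/flat) as in the reference score.
        Assumed scoring rule, for illustration only."""
        s = np.sign(np.diff(sung_f0))
        r = np.sign(np.diff(ref_f0))
        return np.mean(s == r)

    def compression_ratio(sung_f0, ref_f0):
        """Ratio of the sung F0 range to the reference F0 range, in
        semitones; values < 1 indicate a compressed pitch range."""
        to_semitones = lambda f: 12.0 * np.log2(np.asarray(f, dtype=float))
        s, r = to_semitones(sung_f0), to_semitones(ref_f0)
        return (s.max() - s.min()) / (r.max() - r.min())

    ref = [262, 294, 330, 294, 262]            # C4 D4 E4 D4 C4
    sung = [262, 280, 300, 285, 266]           # same contour, compressed range
    print(contour_direction_score(sung, ref))  # 1.0: all directions match
    print(compression_ratio(sung, ref))        # < 1: compressed pitch range
    ```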

  19. Spectral Relative Standard Deviation: A Practical Benchmark in Metabolomics

    EPA Science Inventory

    Metabolomics datasets, by definition, comprise of measurements of large numbers of metabolites. Both technical (analytical) and biological factors will induce variation within these measurements that is not consistent across all metabolites. Consequently, criteria are required to...
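    The relative standard deviation the title refers to is a simple per-metabolite benchmark: the standard deviation of replicate measurements divided by their mean. A minimal sketch on hypothetical feature intensities (the paper's acceptance thresholds are not reproduced here):

    ```python
    import numpy as np

    def relative_std(intensities):
        """Relative standard deviation (RSD, %) of replicate measurements
        of one metabolite: 100 * sample std / mean."""
        x = np.asarray(intensities, dtype=float)
        return 100.0 * x.std(ddof=1) / x.mean()

    # Replicate feature intensities for two hypothetical metabolites.
    print(relative_std([100, 102, 98, 101, 99]))   # tight replicates: low RSD
    print(relative_std([100, 140, 70, 120, 85]))   # variable feature: high RSD
    ```

    Features whose RSD across quality-control replicates exceeds a chosen threshold are typically flagged as unreliable before downstream analysis.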

  20. Vacuum stability and naturalness in type-II seesaw

    DOE PAGES

    Haba, Naoyuki; Ishida, Hiroyuki; Okada, Nobuchika; ...

    2016-06-16

    Here, we study the vacuum stability and perturbativity conditions in the minimal type-II seesaw model. These conditions give characteristic constraints on the model parameters. In the model, there is an SU(2)_L triplet scalar field, which could cause a large Higgs mass correction. From the naturalness point of view, heavy Higgs masses should be lower than 350 GeV, which may be testable by the LHC Run-II results. Due to the effects of the triplet scalar field, the branching ratios of the Higgs decay (h → γγ, Zγ) deviate from the standard model, and a large parameter region is excluded by the recent ATLAS and CMS combined analysis of h → γγ. Our result for the signal strength for h → γγ is R_γγ ≲ 1.1, but its deviation is too small to observe at the LHC experiment.

  1. Large deviations in the random sieve

    NASA Astrophysics Data System (ADS)

    Grimmett, Geoffrey

    1997-05-01

    The proportion ρ_k of gaps with length k between square-free numbers is shown to satisfy log ρ_k = −(1 + o(1))(6/π²) k log k as k → ∞. Such asymptotics are consistent with Erdős's challenge to prove that the gap following the square-free number t is smaller than c log t/log log t, for all t and some constant c satisfying c > π²/12. The results of this paper are achieved by studying the probabilities of large deviations in a certain 'random sieve', for which the proportions ρ_k have representations as probabilities. The asymptotic form of ρ_k may be obtained in situations of greater generality, when the squared primes are replaced by an arbitrary sequence (s_r) of relatively prime integers satisfying Σ_r 1/s_r < ∞, subject to two further conditions of regularity on this sequence.

  2. Simple programmable voltage reference for low frequency noise measurements

    NASA Astrophysics Data System (ADS)

    Ivanov, V. E.; Chye, En Un

    2018-05-01

    The paper presents a circuit design of a low-noise voltage reference based on an electric double-layer capacitor, a microcontroller and a general-purpose DAC. A large capacitance value (1 F or more) makes it possible to create a low-pass filter with a large time constant, effectively reducing low-frequency noise beyond its bandwidth. By choosing the optimum value of the resistor in the RC filter, one can achieve the best trade-off between the transient time, the deviation of the output voltage from the set point, and the minimum noise cut-off frequency. As experiments have shown, the spectral density of the voltage at a frequency of 1 kHz does not exceed 1.2 nV/√Hz, and the maximum deviation of the output voltage from the predetermined value does not exceed 1.4%, depending on the holding time of the previous value. Subsequently, this error decreases to a constant value and can be compensated.
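    The trade-off described in this record is set by the RC time constant: a larger τ pushes the noise cut-off lower but lengthens the settling transient. A minimal sketch of the first-order relations, with an illustrative resistor value (the paper does not specify R):

    ```python
    import math

    def rc_lowpass(R_ohm, C_farad):
        """Time constant, -3 dB cutoff, and ~1% settling time of a
        first-order RC low-pass filter."""
        tau = R_ohm * C_farad              # seconds
        f_c = 1.0 / (2.0 * math.pi * tau)  # Hz, -3 dB point
        t_settle = 5.0 * tau               # ~0.7% residual after 5*tau
        return tau, f_c, t_settle

    # A 1 F double-layer capacitor with a hypothetical 100 ohm resistor.
    tau, f_c, t_settle = rc_lowpass(100.0, 1.0)
    print(tau, f_c, t_settle)  # 100 s time constant, ~1.6 mHz cutoff, 500 s settling
    ```

    This makes the trade-off concrete: pushing the cutoff below a millihertz with a 1 F capacitor forces settling times of minutes, which is why the paper weighs transient time against noise bandwidth when choosing R.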

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.; Abbott, B.; Abdallah, J.

    The results of a search for gluinos in final states with an isolated electron or muon, multiple jets and large missing transverse momentum using proton–proton collision data at a centre-of-mass energy of √s = 13 TeV are presented. The dataset used was recorded in 2015 by the ATLAS experiment at the Large Hadron Collider and corresponds to an integrated luminosity of 3.2 fb^{-1}. Six signal selections are defined that best exploit the signal characteristics. The data agree with the Standard Model background expectation in all six signal selections, and the largest deviation is a 2.1 standard deviation excess. The results are interpreted in a simplified model where pair-produced gluinos decay via the lightest chargino to the lightest neutralino. In this model, gluinos are excluded up to masses of approximately 1.6 TeV depending on the mass spectrum of the simplified model, thus surpassing the limits of previous searches.

  4. Geometric phase for a two-level system in a photonic band gap crystal

    NASA Astrophysics Data System (ADS)

    Berrada, K.

    2018-05-01

    In this work, we investigate the geometric phase (GP) for a qubit system coupled to its own anisotropic or isotropic photonic band gap (PBG) crystal environment, without the Born or Markovian approximation. The qubit frequency affects the GP of the qubit directly through the effect of the PBG environment. The results show that the deviation of the GP depends on the detuning parameter, and this deviation will be large for relatively large detuning of the atom frequency inside the gap with respect to the photonic band edge. For detunings outside the gap, the GP of the qubit changes abruptly to zero, exhibiting a collapse phenomenon of the GP. Moreover, we find that the GP in the isotropic PBG photonic crystal is more robust than that in the anisotropic PBG under the same conditions. Finally, we explore the relationship between the variation of the GP and the population in terms of the physical parameters.

  5. Finite-key analysis for measurement-device-independent quantum key distribution.

    PubMed

    Curty, Marcos; Xu, Feihu; Cui, Wei; Lim, Charles Ci Wen; Tamaki, Kiyoshi; Lo, Hoi-Kwong

    2014-04-29

    Quantum key distribution promises unconditionally secure communications. However, as practical devices tend to deviate from their specifications, the security of some practical systems is no longer valid. In particular, an adversary can exploit imperfect detectors to learn a large part of the secret key, even though the security proof claims otherwise. Recently, a practical approach--measurement-device-independent quantum key distribution--has been proposed to solve this problem. However, so far its security has only been fully proven under the assumption that the legitimate users of the system have unlimited resources. Here we fill this gap and provide a rigorous security proof against general attacks in the finite-key regime. This is obtained by applying large deviation theory, specifically the Chernoff bound, to perform parameter estimation. For the first time we demonstrate the feasibility of long-distance implementations of measurement-device-independent quantum key distribution within a reasonable time frame of signal transmission.
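    The Chernoff-bound parameter estimation mentioned in this record bounds how far an empirical rate can deviate from its true value. A minimal sketch of the bound itself (the KL-divergence form for Bernoulli trials; the numbers are illustrative, not the paper's protocol parameters):

    ```python
    import math

    def kl_bernoulli(q, p):
        """KL divergence D(q || p) between Bernoulli(q) and Bernoulli(p)."""
        return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

    def chernoff_tail(n, p, delta):
        """Chernoff bound on P(empirical mean >= p + delta) for n i.i.d.
        Bernoulli(p) trials: exp(-n * D(p + delta || p))."""
        return math.exp(-n * kl_bernoulli(p + delta, p))

    # Probability that the observed error rate over 10^4 signals
    # overestimates a true 2% rate by more than one percentage point.
    print(chernoff_tail(10**4, 0.02, 0.01))
    ```

    Because the bound decays exponentially in n, even modest block sizes pin the estimated error rate down tightly, which is what makes finite-key security statements feasible within a reasonable signal-transmission time.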

  6. Investigation of compositional segregation during unidirectional solidification of solid solution semiconducting alloys

    NASA Technical Reports Server (NTRS)

    Wang, J. C.

    1982-01-01

    Compositional segregation of solid solution semiconducting alloys in the radial direction during unidirectional solidification was investigated by calculating the effect of a curved solid liquid interface on solute concentration at the interface on the solid. The formulation is similar to that given by Coriell, Boisvert, Rehm, and Sekerka except that a more realistic cylindrical coordinate system which is moving with the interface is used. Analytical results were obtained for very small and very large values of beta with beta = VR/D, where V is the velocity of solidification, R the radius of the specimen, and D the diffusivity of solute in the liquid. For both very small and very large beta, the solute concentration at the interface in the solid C(si) approaches C(o) (original solute concentration) i.e., the deviation is minimal. The maximum deviation of C(si) from C(o) occurs for some intermediate value of beta.

  7. Large-visual-angle microstructure inspired from quantitative design of Morpho butterflies' lamellae deviation using the FDTD/PSO method.

    PubMed

    Wang, Wanlin; Zhang, Wang; Chen, Weixin; Gu, Jiajun; Liu, Qinglei; Deng, Tao; Zhang, Di

    2013-01-15

    The wide angular range of the treelike structure in Morpho butterfly scales was investigated by finite-difference time-domain (FDTD)/particle-swarm-optimization (PSO) analysis. Using the FDTD method, different parameters of the Morpho butterflies' treelike structure were studied and their contributions to the angular dependence were analyzed. A wide angular range was then realized with the PSO method by quantitatively designing the lamellae deviation (Δy), a crucial parameter for the angular range. The field map of the wide-range reflection over a large area is given to confirm the wide angular range. The tristimulus values and corresponding color coordinates for various viewing directions were calculated to confirm the blue color at different observation angles. The wide angular range realized by the FDTD/PSO method will assist us in understanding the scientific principles involved and in designing artificial optical materials.

  8. Lunar brightness temperature from Microwave Radiometers data of Chang'E-1 and Chang'E-2

    NASA Astrophysics Data System (ADS)

    Feng, J.-Q.; Su, Y.; Zheng, L.; Liu, J.-J.

    2011-10-01

    Both Chinese lunar orbiters, Chang'E-1 and Chang'E-2, carried Microwave Radiometers (MRM) to measure the brightness temperature of the Moon. Based on the different characteristics of the two MRMs, modified brightness-temperature algorithms and instrument-specific ground calibration parameters were proposed, and the corresponding lunar global brightness temperature maps were produced. In order to analyze the data distributions of these maps, a normalization method was applied to the data series. Second-channel data with large deviations were rectified, and the causes of the deviations were analyzed.

  9. Extraction of Coastlines with Fuzzy Approach Using SENTINEL-1 SAR Image

    NASA Astrophysics Data System (ADS)

    Demir, N.; Kaynarca, M.; Oy, S.

    2016-06-01

    Coastlines are important features for water resources, sea products, energy resources etc. Coastlines are changed dynamically, thus automated methods are necessary for analysing and detecting the changes along the coastlines. In this study, Sentinel-1 C band SAR image has been used to extract the coastline with fuzzy logic approach. The used SAR image has VH polarisation and 10x10m. spatial resolution, covers 57 sqkm area from the south-east of Puerto-Rico. Additionally, radiometric calibration is applied to reduce atmospheric and orbit error, and speckle filter is used to reduce the noise. Then the image is terrain-corrected using SRTM digital surface model. Classification of SAR image is a challenging task since SAR and optical sensors have very different properties. Even between different bands of the SAR sensors, the images look very different. So, the classification of SAR image is difficult with the traditional unsupervised methods. In this study, a fuzzy approach has been applied to distinguish the coastal pixels than the land surface pixels. The standard deviation and the mean, median values are calculated to use as parameters in fuzzy approach. The Mean-standard-deviation (MS) Large membership function is used because the large amounts of land and ocean pixels dominate the SAR image with large mean and standard deviation values. The pixel values are multiplied with 1000 to easify the calculations. The mean is calculated as 23 and the standard deviation is calculated as 12 for the whole image. The multiplier parameters are selected as a: 0.58, b: 0.05 to maximize the land surface membership. The result is evaluated using airborne LIDAR data, only for the areas where LIDAR dataset is available and secondly manually digitized coastline. The laser points which are below 0,5 m are classified as the ocean points. The 3D alpha-shapes algorithm is used to detect the coastline points from LIDAR data. 
Minimum distances are calculated between the LIDAR coastline points and the extracted coastline. The statistics of the distances are as follows: the mean is 5.82 m, the standard deviation is 5.83 m, and the median is 4.08 m. Secondly, the extracted coastline is evaluated against lines manually created on the SAR image. Both lines are converted to dense points at a 1 m interval, and the closest distances are calculated between the points of the extracted coastline and the manually created coastline. The mean is 5.23 m, the standard deviation is 4.52 m, and the median is 4.13 m for the calculated distances. For both quality assessment approaches, the evaluation values are within the accuracy of the SAR data used.
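The membership step above can be sketched in a few lines. The formula below is the common MS Large definition (as in the ArcGIS fuzzy MSLarge operator); assuming the abstract's reported parameters (mean 23, standard deviation 12, a = 0.58, b = 0.05), the exact functional form the authors used is an assumption here.

```python
# Sketch of a mean-standard-deviation (MS) Large fuzzy membership function,
# using the common MSLarge definition (an assumption about the exact form):
#   mu(x) = 1 - (b*s) / (x - a*m + b*s)   for x > a*m, else 0
# m = image mean, s = image standard deviation, a and b = multipliers.

def ms_large(x, m, s, a, b):
    """MS Large membership: rises from 0 toward 1 for large pixel values."""
    if x <= a * m:
        return 0.0
    return 1.0 - (b * s) / (x - a * m + b * s)

m, s = 23.0, 12.0   # image statistics reported in the abstract (x1000 scaling)
a, b = 0.58, 0.05   # multipliers reported in the abstract

mu_mean = ms_large(m, m, s, a, b)     # membership at the image mean
mu_dark = ms_large(5.0, m, s, a, b)   # a dark (ocean-like) pixel
```

With these parameters the crossover a*m sits near 13.3, so dark ocean pixels get zero membership while pixels at or above the image mean are already close to full land-surface membership.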

  10. Computation of rare transitions in the barotropic quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Laurie, Jason; Bouchet, Freddy

    2015-01-01

    We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
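The idea of a minimum action method can be illustrated on a toy system far simpler than the quasi-geostrophic equations of the abstract: a 1-D overdamped Langevin dynamics in a double-well potential. The discretization, drift, and plain gradient descent below are illustrative choices, not the authors' scheme.

```python
import numpy as np

# Minimal minimum-action sketch for dx = b(x) dt + sqrt(eps) dW with
# b(x) = x - x**3 (double well, attractors at -1 and +1).  The discretized
# Freidlin-Wentzell action of a path x_0..x_N over time T is
#   S = 0.5 * sum_i (dx_i/dt - b(x_mid))**2 * dt,
# and the most probable transition path from -1 to +1 minimizes S with its
# endpoints held fixed.  Numerical-gradient descent is enough to illustrate.

def action(path, dt):
    mid = 0.5 * (path[1:] + path[:-1])
    vel = np.diff(path) / dt
    return 0.5 * np.sum((vel - (mid - mid**3)) ** 2) * dt

N, T = 40, 8.0
dt = T / N
path = np.linspace(-1.0, 1.0, N + 1)   # initial guess: straight line
S0 = action(path, dt)

h, step = 1e-6, 0.02
for _ in range(600):                   # descend on interior points only
    grad = np.zeros(N + 1)
    base = action(path, dt)
    for i in range(1, N):
        p = path.copy()
        p[i] += h
        grad[i] = (action(p, dt) - base) / h
    path -= step * grad

S1 = action(path, dt)
```

Real minimum action methods use adapted time meshes and quasi-Newton optimizers; the point here is only that minimizing the action over paths with fixed endpoints yields a cheaper action, i.e. a more probable transition path, than the initial guess.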

  11. Impact of buildings on surface solar radiation over urban Beijing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Bin; Liou, Kuo-Nan; Gu, Yu

    The rugged surface of an urban area due to varying buildings can interact with solar beams and affect both the magnitude and spatiotemporal distribution of surface solar fluxes. Here we systematically examine the impact of buildings on downward surface solar fluxes over urban Beijing by using a 3-D radiation parameterization that accounts for 3-D building structures vs. the conventional plane-parallel scheme. We find that the resulting downward surface solar flux deviations between the 3-D and the plane-parallel schemes are generally ±1–10 W m⁻² at 800 m grid resolution and within ±1 W m⁻² at 4 km resolution. Pairs of positive–negative flux deviations on different sides of buildings are resolved at 800 m resolution, while they offset each other at 4 km resolution. Flux deviations from the unobstructed horizontal surface at 4 km resolution are positive around noon but negative in the early morning and late afternoon. The corresponding deviations at 800 m resolution, in contrast, show diurnal variations that are strongly dependent on the location of the grids relative to the buildings. Both the magnitude and spatiotemporal variations of flux deviations are largely dominated by the direct flux. Furthermore, we find that flux deviations can potentially be an order of magnitude larger by using a finer grid resolution. Atmospheric aerosols can reduce the magnitude of downward surface solar flux deviations by 10–65%, while the surface albedo generally has a rather moderate impact on flux deviations. The results imply that the effect of buildings on downward surface solar fluxes may not be critically significant in mesoscale atmospheric models with a grid resolution of 4 km or coarser. However, the effect can play a crucial role in meso-urban atmospheric models as well as microscale urban dispersion models with resolutions of 1 m to 1 km.

  12. Severity of Illness Scores May Misclassify Critically Ill Obese Patients.

    PubMed

    Deliberato, Rodrigo Octávio; Ko, Stephanie; Komorowski, Matthieu; Armengol de La Hoz, M A; Frushicheva, Maria P; Raffa, Jesse D; Johnson, Alistair E W; Celi, Leo Anthony; Stone, David J

    2018-03-01

    Severity of illness scores rest on the assumption that patients have normal physiologic values at baseline and that patients with similar severity of illness scores have the same degree of deviation from their usual state. Prior studies have reported differences in baseline physiology, including laboratory markers, between obese and normal weight individuals, but these differences have not been analyzed in the ICU. We compared deviation from baseline of pertinent ICU laboratory test results between obese and normal weight patients, adjusted for the severity of illness. Retrospective cohort study in a large ICU database. Tertiary teaching hospital. Obese and normal weight patients who had laboratory results documented between 3 days and 1 year prior to hospital admission. None. Seven hundred sixty-nine normal weight patients were compared with 1,258 obese patients. After adjusting for the severity of illness score, age, comorbidity index, baseline laboratory result, and ICU type, the following deviations were found to be statistically significant: WBC 0.80 (95% CI, 0.27-1.33) × 10/L; p = 0.003; log (blood urea nitrogen) 0.01 (95% CI, 0.00-0.02); p = 0.014; log (creatinine) 0.03 (95% CI, 0.02-0.05), p < 0.001; with all deviations higher in obese patients. A logistic regression analysis suggested that after adjusting for age and severity of illness at least one of these deviations had a statistically significant effect on hospital mortality (p = 0.009). Among patients with the same severity of illness score, we detected clinically small but significant deviations in WBC, creatinine, and blood urea nitrogen from baseline in obese compared with normal weight patients. These small deviations are likely to be increasingly important as bigger data are analyzed in increasingly precise ways. 
Recognition of the extent to which all critically ill patients may deviate from their own baseline may improve the objectivity, precision, and generalizability of ICU mortality prediction and severity adjustment models.

  13. Multivessel supercritical fluid extraction of food items in Total Diet Study.

    PubMed

    Hopper, M L; King, J W; Johnson, J H; Serino, A A; Butler, R J

    1995-01-01

    An off-line, large capacity, multivessel supercritical fluid extractor (SFE) was designed and constructed for extraction of large samples. The extractor can simultaneously process 1-6 samples (15-25 g) by using supercritical carbon dioxide (SC-CO2), which is relatively nontoxic and nonflammable, as the solvent extraction medium. Lipid recoveries for the SFE system were comparable to those obtained by blending or Soxhlet extraction procedures. Extractions at 10,000 psi, 80 degrees C, expanded gaseous CO2 flow rates of 4-5 L/min (35 degrees C), and 1-3 h extraction times gave reproducible lipid recoveries for pork sausage (relative standard deviation [RSD], 1.32%), corn chips (RSD, 0.46%), cheddar cheese (RSD, 1.14%), and peanut butter (RSD, 0.44%). In addition, this SFE system gave reproducible recoveries (> 93%) for butter fortified with cis-chlordane and malathion at the 100 ppm and 0.1 ppm levels. Six portions each of cheddar cheese, saltine crackers, sandwich cookies, and ground hamburger also were simultaneously extracted with SC-CO2 and analyzed for incurred pesticide residues. Results obtained with this SFE system were reproducible and comparable with results from organic-solvent extraction procedures currently used in the Total Diet Study; therefore, use and disposal of large quantities of organic solvents can be eliminated.
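The reproducibility figures in the abstract are relative standard deviations, RSD (%) = 100 × sample standard deviation / mean, computed over replicate extractions. The recovery values below are illustrative, not the study's data.

```python
import statistics

# Relative standard deviation (RSD) as used to report lipid-recovery
# reproducibility.  The replicate recoveries below are illustrative only.

def rsd_percent(values):
    """RSD (%) = 100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

recoveries = [24.1, 24.3, 23.9, 24.2, 24.0, 24.1]  # e.g. grams of lipid
rsd = rsd_percent(recoveries)
```

An RSD near 0.5–1.5%, like the values reported for the four food matrices, indicates that the replicate extractions cluster tightly around their mean.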

  14. The decay of isotropic magnetohydrodynamics turbulence and the effects of cross-helicity

    NASA Astrophysics Data System (ADS)

    Briard, Antoine; Gomez, Thomas

    2018-02-01

    Decaying homogeneous and isotropic magnetohydrodynamics (MHD) turbulence is investigated numerically at large Reynolds numbers thanks to the eddy-damped quasi-normal Markovian (EDQNM) approximation. Without any background mean magnetic field, the total energy spectrum scales as $k^{-3/2}$ in the inertial range as a consequence of the modelling. Moreover, the total energy is shown, both analytically and numerically, to decay at the same rate as kinetic energy in hydrodynamic isotropic turbulence: this differs from a previous prediction, and thus physical arguments are proposed to reconcile both results. Afterwards, the MHD turbulence is made imbalanced by an initial non-zero cross-helicity. A spectral modelling is developed for the velocity-magnetic correlation in a general homogeneous framework, which reveals that cross-helicity can contain subtle anisotropic effects. In the inertial range, as the Reynolds number increases, the slope of the cross-helical spectrum becomes closer to $k^{-5/3}$ than $k^{-2}$. Furthermore, the Elsässer spectra deviate from $k^{-3/2}$ with cross-helicity at large Reynolds numbers. Regarding the pressure spectrum, its kinetic and magnetic parts are found to scale with $k^{-2}$ in the inertial range, whereas the part due to cross-helicity rather scales in $k^{-7/3}$. Finally, the two 4/3rd laws for the total energy and cross-helicity are assessed numerically at large Reynolds numbers.
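Inertial-range slopes such as -3/2 or -5/3 are typically read off a computed spectrum by a least-squares fit in log-log coordinates. The spectrum below is synthetic, purely to show the mechanics of the fit.

```python
import numpy as np

# Estimating an inertial-range spectral slope by least squares in log-log
# coordinates.  The synthetic spectrum has a known -5/3 slope so the fit
# can be checked; a real computation would fit an EDQNM or DNS spectrum
# over its inertial range only.

k = np.logspace(1, 3, 50)            # inertial-range wavenumbers
E = 2.7 * k ** (-5.0 / 3.0)          # synthetic energy spectrum

slope, intercept = np.polyfit(np.log(k), np.log(E), 1)
```

Because the synthetic spectrum is an exact power law, the fitted slope recovers -5/3 to machine precision; on real spectra the fitted value drifts with the chosen fitting range, which is why slope statements in the abstract are tied to the large-Reynolds-number limit.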

  15. [Conservative and surgical treatment of convergence excess].

    PubMed

    Ehrt, O

    2016-07-01

    Convergence excess is a common finding especially in pediatric strabismus. A detailed diagnostic approach has to start after full correction of any hyperopia measured in cycloplegia. It includes measurements of manifest and latent deviation at near and distance fixation, near deviation after relaxation of accommodation with addition of +3 dpt, assessment of binocular function with and without +3 dpt, as well as the accommodation range. This diagnostic approach is important for the classification into three types of convergence excess, which require different therapeutic approaches: 1) hypo-accommodative convergence excess is treated with permanent bifocal glasses, 2) norm-accommodative patients should be treated with bifocals which can be weaned over years, especially in patients with good stereopsis, and 3) non-accommodative convergence excess and patients with large distance deviations need a surgical approach. The most effective operations include those which reduce the muscle torque, e.g. bimedial Faden operations or Y‑splitting of the medial rectus muscles.

  16. Diode‐based transmission detector for IMRT delivery monitoring: a validation study

    PubMed Central

    Li, Taoran; Wu, Q. Jackie; Matzen, Thomas; Yin, Fang‐Fang

    2016-01-01

    The purpose of this work was to evaluate the potential of a new transmission detector for real‐time quality assurance of dynamic‐MLC‐based radiotherapy. The accuracy of detecting dose variation and static/dynamic MLC position deviations was measured, as well as the impact of the device on the radiation field (surface dose, transmission). Measured dose variations agreed with the known variations within 0.3%. The measurement of static and dynamic MLC position deviations matched the known deviations with high accuracy (0.7–1.2 mm). The absorption of the device was minimal (∼ 1%). The increased surface dose was small (1%–9%) but, when added to existing collimator scatter effects, could become significant at large field sizes (≥30×30 cm2). Overall the accuracy and speed of the device show good potential for real‐time quality assurance. PACS number(s): 87.55.Qr PMID:27685115

  17. Determination of the optimal level for combining area and yield estimates

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Jobusch, C. D.

    1981-01-01

    Several levels of obtaining both area and yield estimates of corn and soybeans in Iowa were considered: county, refined strata, refined/split strata, crop reporting district, and state. Using the CCEA model form and smoothed weather data, regression coefficients at each level were derived to compute yield and its variance. Variances were also computed at the stratum level. The variance of the yield estimates was largest at the state and smallest at the county level for both crops. The refined strata had somewhat larger variances than those associated with the refined/split strata and CRD. For production estimates, the difference in standard deviations among levels was not large for corn, but for soybeans the standard deviation at the state level was more than 50% greater than for the other levels. The refined strata had the smallest standard deviations. The county level was not considered in evaluation of production estimates due to lack of county area variances.

  18. Effects of vegetation canopy structure on remotely sensed canopy temperatures. [inferring plant water stress and yield

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.

    1979-01-01

    The effects of vegetation canopy structure on thermal infrared sensor response must be understood before vegetation surface temperatures of canopies with low percent ground cover can be accurately inferred. The response of a sensor is a function of vegetation geometric structure, the vertical surface temperature distribution of the canopy components, and sensor view angle. Large deviations between the nadir sensor effective radiant temperature (ERT) and vegetation ERT for a soybean canopy were observed throughout the growing season. The nadir sensor ERT of a soybean canopy with 35 percent ground cover deviated from the vegetation ERT by as much as 11 C during the mid-day. These deviations were quantitatively explained as a function of canopy structure and soil temperature. Remote sensing techniques which determine the vegetation canopy temperature(s) from the sensor response need to be studied.
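The nadir-sensor deviation described above can be sketched as a radiance mixture: with unit emissivities, blackbody radiance scales as T⁴, so a sensor viewing a fraction f of vegetation and (1 − f) of soil reports roughly the fourth-root-weighted composite below. The temperatures and the unit-emissivity assumption are illustrative, not values from the study.

```python
# Sketch of a composite effective radiant temperature (ERT) for a partially
# vegetated canopy.  Assumes unit emissivities so radiance ~ T**4; the
# vegetation/soil temperatures below are illustrative only.

def effective_radiant_temperature(f_veg, t_veg_k, t_soil_k):
    """Fourth-root radiance-weighted mix of vegetation and soil ERT (K)."""
    return (f_veg * t_veg_k**4 + (1.0 - f_veg) * t_soil_k**4) ** 0.25

t_veg, t_soil = 300.0, 320.0   # K: mid-day soil hotter than foliage
t_eff = effective_radiant_temperature(0.35, t_veg, t_soil)  # 35% cover
deviation = t_eff - t_veg      # sensor ERT minus vegetation ERT
```

Even this crude mixture shows why a nadir sensor over 35% ground cover can read many kelvins above the vegetation ERT when the exposed soil is hot, in the spirit of the 11 C mid-day deviation reported above.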

  19. Uncertainty of large-area estimates of indicators of forest structural gamma diversity: A study based on national forest inventory data

    Treesearch

    Susanne Winter; Andreas Böck; Ronald E. McRoberts

    2012-01-01

    Tree diameter and height are commonly measured forest structural variables, and indicators based on them are candidates for assessing forest diversity. We conducted our study on the uncertainty of estimates for mostly large geographic scales for four indicators of forest structural gamma diversity: mean tree diameter, mean tree height, and standard deviations of tree...

  20. Global Behavior in Large Scale Systems

    DTIC Science & Technology

    2013-12-05

    AIR FORCE RESEARCH LABORATORY, AF OFFICE OF SCIENTIFIC RESEARCH (AFOSR)/RSL, Arlington, Virginia. Abstract: This research attained two main achievements: 1) ... microscopic random interactions among the agents. In this research we considered two main problems: 1) large deviation error performance in ...

  1. Measuring Diameters Of Large Vessels

    NASA Technical Reports Server (NTRS)

    Currie, James R.; Kissel, Ralph R.; Oliver, Charles E.; Smith, Earnest C.; Redmon, John W., Sr.; Wallace, Charles C.; Swanson, Charles P.

    1990-01-01

    Computerized apparatus produces accurate results quickly. Apparatus measures diameter of tank or other large cylindrical vessel, without prior knowledge of exact location of cylindrical axis. Produces plot of inner circumference, estimate of true center of vessel, data on radius, diameter of best-fit circle, and negative and positive deviations of radius from circle at closely spaced points on circumference. Eliminates need for time-consuming and error-prone manual measurements.
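A best-fit circle of the kind the apparatus reports (center estimate, radius, and signed radial deviations) can be computed by a linear least-squares fit: writing x² + y² = 2·cx·x + 2·cy·y + (r² − cx² − cy²) makes the problem linear in the unknowns (the Kasa method; the specific algorithm the apparatus uses is not stated in the abstract). The sample points are synthetic.

```python
import math
import numpy as np

# Least-squares circle fit (Kasa method): linear in (cx, cy, c) where
# c = r**2 - cx**2 - cy**2.  Synthetic circumference points with a known
# center and radius let the fit be verified.

theta = np.linspace(0.0, 2.0 * math.pi, 60, endpoint=False)
true_cx, true_cy, true_r = 1.5, -0.7, 100.0
x = true_cx + true_r * np.cos(theta)
y = true_cy + true_r * np.sin(theta)

A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
b = x**2 + y**2
(cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
r = math.sqrt(c + cx**2 + cy**2)
residuals = np.hypot(x - cx, y - cy) - r   # signed radial deviations
```

On measured data the residuals are exactly the "negative and positive deviations of radius from circle" the apparatus plots around the circumference.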

  2. Data assimilation in the low noise regime

    NASA Astrophysics Data System (ADS)

    Weare, J.; Vanden-Eijnden, E.

    2012-12-01

    On-line data assimilation techniques such as ensemble Kalman filters and particle filters tend to lose accuracy dramatically when presented with an unlikely observation. Such an observation may be caused by an unusually large measurement error or reflect a rare fluctuation in the dynamics of the system. Over a long enough span of time it becomes likely that one or several of these events will occur. In some cases they are signatures of the most interesting features of the underlying system and their prediction becomes the primary focus of the data assimilation procedure. The Kuroshio or Black Current that runs along the eastern coast of Japan is an example of just such a system. It undergoes infrequent but dramatic changes of state between a small meander, during which the current remains close to the coast of Japan, and a large meander, during which the current bulges away from the coast. Because of the important role that the Kuroshio plays in distributing heat and salinity in the surrounding region, prediction of these transitions is of acute interest. Here we focus on a regime in which both the stochastic forcing on the system and the observational noise are small. In this setting, large deviation theory can be used to understand why standard filtering methods fail and to guide the design of more effective data assimilation techniques. Motivated by our large deviations analysis we propose several data assimilation strategies capable of efficiently handling rare events such as the transitions of the Kuroshio. These techniques are tested on a model of the Kuroshio and shown to perform much better than standard filtering methods. Here the sequence of observations (circles) is taken directly from one of our Kuroshio model's transition events from the small meander to the large meander. We tested two new algorithms (Algorithms 3 and 4 in the legend) motivated by our large deviations analysis as well as a standard particle filter and an ensemble Kalman filter. 
The parameters of each algorithm are chosen so that their costs are comparable. The particle filter and an ensemble Kalman filter fail to accurately track the transition. Algorithms 3 and 4 maintain accuracy (and smaller scale resolution) throughout the transition.
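The failure mode of the standard particle filter on such rare events can be seen in a toy weight update: after Bayes reweighting, the effective sample size ESS = 1/Σw² collapses when the observation lies far in the tail of the predicted ensemble. The Gaussian ensemble and noise level below are illustrative, not the Kuroshio model.

```python
import numpy as np

# Toy illustration of particle-filter weight degeneracy.  The predicted
# ensemble and the observational noise are illustrative; with a rare
# (tail) observation, a handful of particles absorb nearly all the weight.

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, size=1000)   # predicted ensemble
obs_noise = 0.1                                # small observational noise

def ess(particles, y):
    """Effective sample size after a Gaussian-likelihood weight update."""
    logw = -0.5 * ((y - particles) / obs_noise) ** 2
    w = np.exp(logw - logw.max())              # shift for stability
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

ess_typical = ess(particles, 0.0)   # observation near the ensemble bulk
ess_rare = ess(particles, 4.0)      # observation far in the tail
```

When the ESS drops to a few particles the filter has effectively lost its ensemble, which is the regime where the large-deviations-motivated algorithms of this record are designed to help.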

  3. Thermonuclear 19F(p, α0)16O reaction rate

    NASA Astrophysics Data System (ADS)

    He, Jian-Jun; Lombardo, Ivano; Dell'Aquila, Daniele; Xu, Yi; Zhang, Li-Yong; Liu, Wei-Ping

    2018-01-01

    The thermonuclear 19F(p, α0)16O reaction rate in the temperature region 0.007-10 GK has been derived by re-evaluating the available experimental data, together with the low-energy theoretical R-matrix extrapolations. Our new rate deviates by up to about 30% compared to the previous results, although all rates are consistent within the uncertainties. At very low temperature (e.g. 0.01 GK) our reaction rate is about 20% lower than the most recently published rate, because of a difference in the low energy extrapolated S-factor and a more accurate estimate of the reduced mass used in the calculation of the reaction rate. At temperatures above ~1 GK, our rate is lower, for instance, by about 20% around 1.75 GK, because we have re-evaluated the previous data (Isoya et al., Nucl. Phys. 7, 116 (1958)) in a meticulous way. The present interpretation is supported by the direct experimental data. The uncertainties of the present evaluated rate are estimated to be about 20% in the temperature region below 0.2 GK, and are mainly caused by the lack of low-energy experimental data and the large uncertainties in the existing data. Asymptotic giant branch (AGB) stars evolve at temperatures below 0.2 GK, where the 19F(p, α)16O reaction may play a very important role. However, the current accuracy of the reaction rate is insufficient to help to describe, in a careful way, the fluorine over-abundances observed in AGB stars. Precise cross section (or S factor) data in the low energy region are therefore needed for astrophysical nucleosynthesis studies. Supported by National Natural Science Foundation of China (11490562, 11490560, 11675229) and National Key Research and Development Program of China (2016YFA0400503)
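The link between the S-factor and the rate can be sketched schematically: for a charged-particle reaction, the thermal rate is proportional to an integral of S(E)·exp(−E/kT − b/√E), where the b/√E term is the Gamow (Coulomb-barrier) factor. The constant S-factor, the value of b, and the units below are all illustrative; a real evaluation integrates tabulated, energy-dependent S-factor data.

```python
import math

# Schematic charged-particle reaction-rate integrand:
#   rate ~ integral of S(E) * exp(-E/kT - b/sqrt(E)) dE
# with b an illustrative Sommerfeld-like constant and a constant S-factor.
# Trapezoid-style summation on a uniform grid is enough to show the trend.

def rate_integral(kT, b, s_factor=1.0, n=20000, e_max=5.0):
    total = 0.0
    dE = e_max / n
    for i in range(1, n + 1):
        E = i * dE
        total += s_factor * math.exp(-E / kT - b / math.sqrt(E)) * dE
    return total

b = 10.0                            # illustrative barrier constant
r_cool = rate_integral(0.01, b)     # lower temperature (arbitrary units)
r_hot = rate_integral(0.1, b)       # higher temperature
```

The steep growth of the integral with temperature is why low-energy S-factor data dominate the rate uncertainty at the AGB-relevant temperatures below 0.2 GK discussed above.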

  4. Scaling in situ cosmogenic nuclide production rates using analytical approximations to atmospheric cosmic-ray fluxes

    NASA Astrophysics Data System (ADS)

    Lifton, Nathaniel; Sato, Tatsuhiko; Dunai, Tibor J.

    2014-01-01

    Several models have been proposed for scaling in situ cosmogenic nuclide production rates from the relatively few sites where they have been measured to other sites of interest. Two main types of models are recognized: (1) those based on data from nuclear disintegrations in photographic emulsions combined with various neutron detectors, and (2) those based largely on neutron monitor data. However, stubborn discrepancies between these model types have led to frequent confusion when calculating surface exposure ages from production rates derived from the models. To help resolve these discrepancies and identify the sources of potential biases in each model, we have developed a new scaling model based on analytical approximations to modeled fluxes of the main atmospheric cosmic-ray particles responsible for in situ cosmogenic nuclide production. Both the analytical formulations and the Monte Carlo model fluxes on which they are based agree well with measured atmospheric fluxes of neutrons, protons, and muons, indicating they can serve as a robust estimate of the atmospheric cosmic-ray flux based on first principles. We are also using updated records for quantifying temporal and spatial variability in geomagnetic and solar modulation effects on the fluxes. A key advantage of this new model (herein termed LSD) over previous Monte Carlo models of cosmogenic nuclide production is that it allows for faster estimation of scaling factors based on time-varying geomagnetic and solar inputs. Comparing scaling predictions derived from the LSD model with those of previously published models suggests that potential sources of bias in the latter can be largely attributed to two factors: different energy responses of the secondary neutron detectors used in developing the models, and different geomagnetic parameterizations. 
Given that the LSD model generates flux spectra for each cosmic-ray particle of interest, it is also relatively straightforward to generate nuclide-specific scaling factors based on recently updated neutron and proton excitation functions (probability of nuclide production in a given nuclear reaction as a function of energy) for commonly measured in situ cosmogenic nuclides. Such scaling factors reflect the influence of the energy distribution of the flux folded with the relevant excitation functions. Resulting scaling factors indicate 3He shows the strongest positive deviation from the flux-based scaling, while 14C exhibits a negative deviation. These results are consistent with a recent Monte Carlo-based study using a different cosmic-ray physics code package but the same excitation functions.
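The "flux folded with excitation functions" idea amounts to a production integral P ∝ ∫ φ(E)·σ(E) dE. The falling power-law flux and the two toy excitation functions below are made up to show why two nuclides with different energy responses scale differently under the same flux; real work uses tabulated spectra and measured excitation functions.

```python
import numpy as np

# Folding a particle flux spectrum with nuclide excitation functions.
# Flux and cross sections below are illustrative only.

E = np.linspace(1.0, 1000.0, 2000)           # energy grid (MeV)
dE = E[1] - E[0]
phi = E ** -1.5                              # toy falling flux spectrum
sigma_a = 50.0 * (1 - np.exp(-E / 20.0))     # toy low-threshold nuclide
sigma_b = 50.0 * (1 - np.exp(-E / 200.0))    # toy high-threshold nuclide

p_a = np.sum(phi * sigma_a) * dE             # folded production integrals
p_b = np.sum(phi * sigma_b) * dE
ratio = p_a / p_b
```

Because the toy flux is steeply falling, the nuclide whose cross section turns on at lower energy captures more of the flux, so its folded production and hence its scaling factor deviates from a purely flux-based one, mirroring the 3He-positive / 14C-negative deviations reported above.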

  5. Characterization of difference of Gaussian filters in the detection of mammographic regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Catarious, David M. Jr.; Baydush, Alan H.; Floyd, Carey E. Jr.

    2006-11-15

    In this article, we present a characterization of the effect of difference of Gaussians (DoG) filters in the detection of mammographic regions. DoG filters have been used previously in mammographic mass computer-aided detection (CAD) systems. As DoG filters are constructed from the subtraction of two bivariate Gaussian distributions, they require the specification of three parameters: the size of the filter template and the standard deviations of the constituent Gaussians. The influence of these three parameters in the detection of mammographic masses has not been characterized. In this work, we aim to determine how the parameters affect (1) the physical descriptors of the detected regions, (2) the true and false positive rates, and (3) the classification performance of the individual descriptors. To this end, 30 DoG filters are created from the combination of three template sizes and four values for each of the Gaussians' standard deviations. The filters are used to detect regions in a study database of 181 craniocaudal-view mammograms extracted from the Digital Database for Screening Mammography. To describe the physical characteristics of the identified regions, morphological and textural features are extracted from each of the detected regions. Differences in the mean values of the features caused by altering the DoG parameters are examined through statistical and empirical comparisons. The parameters' effects on the true and false positive rate are determined by examining the mean malignant sensitivities and false positives per image (FPpI). Finally, the effect on the classification performance is described by examining the variation in FPpI at the point where 81% of the malignant masses in the study database are detected. 
Overall, the findings of the study indicate that increasing the standard deviations of the Gaussians used to construct a DoG filter results in a dramatic decrease in the number of regions identified at the expense of missing a small number of malignancies. The sharp reduction in the number of identified regions allowed the identification of textural differences between large and small mammographic regions. We find that the classification performances of the features that achieve the lowest average FPpI are influenced by all three of the parameters.
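A DoG template built from its three parameters (template size plus the two Gaussian standard deviations) can be sketched directly; normalizing each Gaussian to unit sum, as below, is one common convention and an assumption here, not necessarily the study's exact construction.

```python
import numpy as np

# Difference-of-Gaussians (DoG) template from its three parameters:
# template size and the standard deviations of the two constituent
# bivariate Gaussians.  Parameter values are illustrative.

def dog_template(size, sigma1, sigma2):
    """size x size DoG: narrow Gaussian minus wide Gaussian, each unit-sum."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    g1 = np.exp(-(xx**2 + yy**2) / (2 * sigma1**2))
    g2 = np.exp(-(xx**2 + yy**2) / (2 * sigma2**2))
    return g1 / g1.sum() - g2 / g2.sum()

dog = dog_template(size=31, sigma1=3.0, sigma2=6.0)
center = dog[15, 15]   # positive center, negative surround
```

With unit-sum Gaussians the template sums to zero, so convolving it with a flat background responds zero while blob-like masses near the narrow Gaussian's scale respond strongly; widening the standard deviations tunes the filter to larger regions, consistent with the parameter effects described above.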

  6. Modeling failure in brittle porous ceramics

    NASA Astrophysics Data System (ADS)

    Keles, Ozgur

    Brittle porous materials (BPMs) are used for battery, fuel cell, catalyst, membrane, filter, bone graft, and pharmacy applications due to the multi-functionality of their underlying porosity. However, in spite of these technological benefits, the effects of porosity on BPM fracture strength and Weibull statistics are not fully understood, limiting wider use. In this context, classical fracture mechanics was combined with two-dimensional finite element simulations not only to account for pore-pore stress interactions, but also to numerically quantify the relationship between the local pore volume fraction and fracture statistics. Simulations show that even microstructures with the same porosity level and pore size differ substantially in fracture strength. The maximum reliability of BPMs was shown to be limited by the underlying pore-pore interactions. Fracture strength of BPMs decreases at a faster rate under biaxial loading than under uniaxial loading. Three different types of deviation from classic Weibull behavior are identified: P-type, corresponding to a positive lower tail deviation; N-type, corresponding to a negative lower tail deviation; and S-type, corresponding to both positive upper and lower tail deviations. Pore-pore interactions result in either P-type or N-type deviation in the limit of low porosity, whereas S-type behavior occurs when clusters of low and high fracture strengths coexist in the fracture data.
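Deviations from classic Weibull behavior are judged against a straight line on the Weibull plot, ln(ln(1/(1−F))) vs. ln(σ), whose slope is the Weibull modulus m. The synthetic strengths and median-rank plotting positions below are illustrative of the fitting procedure, not the study's simulation data.

```python
import numpy as np

# Weibull-modulus estimation by linearizing the Weibull CDF:
#   ln(ln(1/(1-F))) = m*ln(sigma) - m*ln(sigma0).
# Synthetic strengths drawn from a known Weibull distribution let the
# recovered modulus be checked; real fracture data would replace them.

rng = np.random.default_rng(1)
m_true, sigma0 = 10.0, 300.0
strengths = np.sort(sigma0 * rng.weibull(m_true, size=2000))

n = strengths.size
F = (np.arange(1, n + 1) - 0.5) / n        # median-rank plotting positions
y = np.log(-np.log(1.0 - F))
m_est, intercept = np.polyfit(np.log(strengths), y, 1)
```

The P-, N-, and S-type deviations described above show up on this plot as systematic curvature of the data away from the fitted line in the lower and/or upper tails.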

  7. Incidence of Artifacts and Deviating Values in Research Data Obtained from an Anesthesia Information Management System in Children.

    PubMed

    Hoorweg, Anne-Lee J; Pasma, Wietze; van Wolfswinkel, Leo; de Graaff, Jurgen C

    2018-02-01

    Vital parameter data collected in anesthesia information management systems are often used for clinical research. The validity of this type of research is dependent on the number of artifacts. In this prospective observational cohort study, the incidence of artifacts in anesthesia information management system data was investigated in children undergoing anesthesia for noncardiac procedures. Secondary outcomes included the incidence of artifacts among deviating and nondeviating values, among the anesthesia phases, and among different anesthetic techniques. We included 136 anesthetics representing 10,236 min of anesthesia time. The incidence of artifacts was 0.5% for heart rate (95% CI: 0.4 to 0.7%), 1.3% for oxygen saturation (1.1 to 1.5%), 7.5% for end-tidal carbon dioxide (6.9 to 8.0%), 5.0% for noninvasive blood pressure (4.0 to 6.0%), and 7.3% for invasive blood pressure (5.9 to 8.8%). The incidence of artifacts among deviating values was 3.1% for heart rate (2.1 to 4.4%), 10.8% for oxygen saturation (7.6 to 14.8%), 14.1% for end-tidal carbon dioxide (13.0 to 15.2%), 14.4% for noninvasive blood pressure (10.3 to 19.4%), and 38.4% for invasive blood pressure (30.3 to 47.1%). Not all values in anesthesia information management systems are valid. The incidence of artifacts stored in the present pediatric anesthesia practice was low for heart rate and oxygen saturation, whereas noninvasive and invasive blood pressure and end-tidal carbon dioxide had higher artifact incidences. Deviating values are more often artifacts than values in a normal range, and artifacts are associated with the phase of anesthesia and anesthetic technique. Development of (automatic) data validation systems or solutions to deal with artifacts in data is warranted.
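Incidences of the form "0.5% (95% CI: 0.4 to 0.7%)" pair a proportion with a binomial confidence interval. The Wilson score interval below is one standard choice (the abstract does not state which method the authors used), and the counts are illustrative.

```python
import math

# 95% Wilson score interval for a binomial proportion, of the kind used to
# report artifact incidences.  Counts below are illustrative, not the
# study's raw data.

def wilson_interval(k, n, z=1.96):
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

lo, hi = wilson_interval(k=50, n=10000)   # 0.5% observed incidence
```

Unlike the naive normal approximation, the Wilson interval stays sensible for the small proportions typical of artifact rates, never dropping below zero.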

  8. Perceived Prevalence of Teasing and Bullying Predicts High School Dropout Rates

    ERIC Educational Resources Information Center

    Cornell, Dewey; Gregory, Anne; Huang, Francis; Fan, Xitao

    2013-01-01

    This prospective study of 276 Virginia public high schools found that the prevalence of teasing and bullying (PTB) as perceived by both 9th-grade students and teachers was predictive of dropout rates for this cohort 4 years later. Negative binomial regression indicated that one standard deviation increases in student- and teacher-reported PTB were…

  9. Child and Informant Influences on Behavioral Ratings of Preschool Children

    ERIC Educational Resources Information Center

    Phillips, Beth M.; Lonigan, Christopher J.

    2010-01-01

    This study investigated relationships among teacher, parent, and observer behavioral ratings of 3- and 4-year-old children using intra-class correlations and analysis of variance. Comparisons within and across children from middle-income (MI; N = 166; mean age = 54.25 months, standard deviation [SD] = 8.74) and low-income (LI; N = 199; mean age =…

  10. Combustion characteristics of paper and sewage sludge in a pilot-scale fluidized bed.

    PubMed

    Yu, Yong-Ho; Chung, Jinwook

    2015-01-01

    This study characterizes the combustion of paper and sewage sludge in a pilot-scale fluidized bed. The highest temperature during combustion within the system was found at the surface of the fluidized bed. Paper sludge containing roughly 59.8% water was burned without auxiliary fuel, but auxiliary fuel was required to incinerate the sewage sludge, which contained about 79.3% water. The stability of operation was monitored based on the average pressure and the standard deviation of the pressure fluctuations. The average pressure at the surface of the fluidized bed decreased as the sludge feed rate increased. However, the standard deviation of the pressure fluctuations increased as the sludge feed rate increased. Finally, carbon monoxide (CO) emissions decreased as the oxygen content of the flue gas increased, and nitrogen oxide (NOx) emissions were likewise correlated with the oxygen content.
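The two stability indicators tracked above reduce to simple statistics of the pressure signal: its mean, and the standard deviation of the fluctuations about that mean. The pressure samples below are illustrative.

```python
import statistics

# Bed-stability indicators from a pressure signal: average pressure and
# the standard deviation of the fluctuations.  Samples are illustrative.

pressures = [101.2, 101.5, 100.9, 101.3, 101.1, 101.6, 100.8, 101.4]  # kPa

avg = statistics.fmean(pressures)
fluctuation_sd = statistics.stdev(pressures)
```

In the study's terms, a falling `avg` with a rising `fluctuation_sd` as the feed rate increases signals a less stable bed.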

  11. An analytical and experimental study of the behavior of semi-infinite metal targets under hypervelocity impact

    NASA Technical Reports Server (NTRS)

    Chakrapani, B.; Rand, J. L.

    1971-01-01

    The material strength and strain rate effects associated with the hypervelocity impact problem were considered. A yield criterion involving the second and third invariants of the stress deviator and a strain-rate-sensitive constitutive equation were developed. The part of the total deformation that represents a change in shape is attributable to the stress deviator; a constitutive equation provides a means of analytically describing the mechanical response of the continuum under study. The accuracy of the yield criterion was verified using published two- and three-dimensional experimental data. The constants associated with the constitutive equation were determined from one-dimensional quasistatic and dynamic experiments. Hypervelocity impact experiments were conducted on semi-infinite targets of 1100 aluminum, 6061 aluminum alloy, mild steel, and commercially pure lead using spherically shaped, normally incident pyrex projectiles.

  12. Continuous-flow electrophoresis: Membrane-associated deviations of buffer pH and conductivity

    NASA Technical Reports Server (NTRS)

    Smolka, A. J. K.; Mcguire, J. K.

    1978-01-01

    The deviations in buffer pH and conductivity which occur near the electrode membranes in continuous-flow electrophoresis were studied in the Beckman charged particle electrophoresis system and the Hanning FF-5 preparative electrophoresis instrument. The nature of the membranes separating the electrode compartments from the electrophoresis chamber, the electric field strength, and the flow rate of electrophoresis buffer were all found to influence the formation of the pH and conductivity gradients. Variations in electrode buffer flow rate and the time of electrophoresis were less important. The results obtained supported the hypothesis that a combination of Donnan membrane effects and the differing ionic mobilities in the electrophoresis buffer was responsible for the formation of the gradients. The significance of the results for the design and stable operation of continuous-flow electrophoresis apparatus was discussed.

  13. Acoustic analysis of speech variables during depression and after improvement.

    PubMed

    Nilsonne, A

    1987-09-01

    Speech recordings were made of 16 depressed patients during depression and after clinical improvement. The recordings were analyzed using a computer program that extracts acoustic parameters from the fundamental frequency contour of the voice. The percent pause time, the standard deviation of the voice fundamental frequency distribution, the standard deviation of the rate of change of the voice fundamental frequency, and the average speed of voice change were found to correlate with the clinical state of the patient. The mean fundamental frequency, the total reading time, and the average rate of change of the voice fundamental frequency did not differ between the depressed and the improved group. The acoustic measures were more strongly correlated with the clinical state of the patient as measured by global depression scores than with single depressive symptoms such as retardation or agitation.

  14. Stratified turbulent Bunsen flames: flame surface analysis and flame surface density modelling

    NASA Astrophysics Data System (ADS)

    Ramaekers, W. J. S.; van Oijen, J. A.; de Goey, L. P. H.

    2012-12-01

    In this paper it is investigated whether the Flame Surface Density (FSD) model, developed for turbulent premixed combustion, is also applicable to stratified flames. Direct Numerical Simulations (DNS) of turbulent stratified Bunsen flames have been carried out, using the Flamelet Generated Manifold (FGM) reduction method for reaction kinetics. Before examining the suitability of the FSD model, flame surfaces are characterized in terms of thickness, curvature and stratification. All flames are in the Thin Reaction Zones regime, and the maximum equivalence ratio range covers 0.1⩽φ⩽1.3. For all flames, local flame thicknesses correspond very well to those observed in stretchless, steady premixed flamelets. Extracted curvature radii and mixing length scales are significantly larger than the flame thickness, implying that the stratified flames all burn in a premixed mode. The remaining challenge is accounting for the large variation in (subfilter) mass burning rate. In this contribution, the FSD model is proven to be applicable for Large Eddy Simulations (LES) of stratified flames for the equivalence ratio range 0.1⩽φ⩽1.3. Subfilter mass burning rate variations are taken into account by a subfilter Probability Density Function (PDF) for the mixture fraction, on which the mass burning rate directly depends. A priori analysis points out that for small stratifications (0.4⩽φ⩽1.0), the replacement of the subfilter PDF (obtained from DNS data) by the corresponding Dirac function is appropriate. Integration of the Dirac function with the mass burning rate m = m(φ) can then adequately model the filtered mass burning rate obtained from filtered DNS data. For a larger stratification (0.1⩽φ⩽1.3), and filter widths up to ten flame thicknesses, a β-function for the subfilter PDF yields substantially better predictions than a Dirac function. Finally, inclusion of a simple algebraic model for the FSD resulted only in small additional deviations from DNS data, thereby rendering this approach promising for application in LES.

  15. Age-standardisation when target setting and auditing performance of Down syndrome screening programmes.

    PubMed

    Cuckle, Howard; Aitken, David; Goodburn, Sandra; Senior, Brian; Spencer, Kevin; Standing, Sue

    2004-11-01

    To describe and illustrate a method of setting Down syndrome screening targets and auditing performance that allows for differences in the maternal age distribution. A reference population was determined from a Gaussian model of maternal age. Target detection and false-positive rates were determined by standard statistical modelling techniques, except that the reference population rather than an observed population was used. Second-trimester marker parameters were obtained for Down syndrome from a large meta-analysis, and for unaffected pregnancies from the combined results of more than 600,000 screens in five centres. Audited detection and false-positive rates were the weighted average of the rates in five broad age groups corrected for viability bias. Weights were based on the age distributions in the reference population. Maternal age was found to approximate reasonably well to a Gaussian distribution with mean 27 years and standard deviation 5.5 years. Depending on marker combination, the target detection rates were 59 to 64% and false-positive rate 4.2 to 5.4% for a 1 in 250 term cut-off; 65 to 68% and 6.1 to 7.3% for 1 in 270 at mid-trimester. Among the five centres, the audited detection rate ranged from 7% below target to 10% above target, with audited false-positive rates better than the target by 0.3 to 1.5%. Age-standardisation should help to improve screening quality by allowing for intrinsic differences between programmes, so that valid comparisons can be made. Copyright 2004 John Wiley & Sons, Ltd.
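The age-standardisation described above, with reference weights drawn from a Gaussian maternal-age model (mean 27 years, SD 5.5 years), can be sketched as a weighted average over age bands; the band edges and per-band detection rates below are hypothetical illustrations, not the paper's values.

```python
from math import erf, sqrt

def gaussian_cdf(x, mu=27.0, sigma=5.5):
    """CDF of the Gaussian reference maternal-age model."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def age_standardised_rate(band_edges, band_rates, mu=27.0, sigma=5.5):
    """Weighted average of per-age-band rates, weighted by the share of
    the Gaussian reference population falling in each band."""
    edges = [float("-inf")] + list(band_edges) + [float("inf")]
    weights = [gaussian_cdf(edges[i + 1], mu, sigma) - gaussian_cdf(edges[i], mu, sigma)
               for i in range(len(edges) - 1)]
    return sum(w * r for w, r in zip(weights, band_rates))

# Hypothetical detection rates for five broad maternal-age bands.
rate = age_standardised_rate([20, 25, 30, 35], [0.50, 0.55, 0.60, 0.70, 0.80])
```

Because the weights come from the fixed reference population rather than each centre's observed age mix, two centres with different maternal-age distributions can be compared on equal footing.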

  16. Variations in Rotation Rate and Polar Motion of a Non-hydrostatic Titan

    NASA Astrophysics Data System (ADS)

    Van Hoolst, T.; Coyette, A.; Baland, R. M.

    2017-12-01

    Observations of the rotation of large synchronously rotating satellites such as Titan can help to probe their interior. Previous studies (Van Hoolst et al. 2013, Richard et al. 2014, Coyette et al. 2016) mostly assume that Titan is in hydrostatic equilibrium, although several measurements indicate that it deviates from such a state. Here we investigate the effect of non-hydrostatic equilibrium and of flow in the subsurface ocean on the rotation of Titan. We consider (1) the periodic changes in Titan's rotation rate with a period equal to Titan's orbital period (diurnal librations) as a result of the gravitational torque exerted by Saturn, (2) the periodic changes in Titan's rotation rate with a main period equal to half the orbital period of Saturn (seasonal librations) and due to the dynamic variations in the atmosphere of Titan and (3) the periodic changes of the axis of rotation with respect to the figure axis of Titan (polar motion) with a main period equal to the orbital period of Saturn and due to the dynamic variations in the atmosphere of Titan. The non-hydrostatic mass distribution significantly influences the amplitude of the diurnal and seasonal librations. It is less important for polar motion, which is sensitive to flow in the subsurface ocean. The smaller than synchronous rotation rate measured by Cassini (Meriggiola 2016) can be explained by the atmospheric forcing.

  17. A method for age-matched OCT angiography deviation mapping in the assessment of disease- related changes to the radial peripapillary capillaries.

    PubMed

    Pinhas, Alexander; Linderman, Rachel; Mo, Shelley; Krawitz, Brian D; Geyman, Lawrence S; Carroll, Joseph; Rosen, Richard B; Chui, Toco Y

    2018-01-01

    To present a method for age-matched deviation mapping in the assessment of disease-related changes to the radial peripapillary capillaries (RPCs). We reviewed 4.5x4.5mm en face peripapillary OCT-A scans of 133 healthy control eyes (133 subjects, mean 41.5 yrs, range 11-82 yrs) and 4 eyes with distinct retinal pathologies, obtained using spectral-domain optical coherence tomography angiography. Statistical analysis was performed to evaluate the impact of age on RPC perfusion densities. RPC density group mean and standard deviation maps were generated for each decade of life. Deviation maps were created for the diseased eyes based on these maps. Large peripapillary vessel (LPV; noncapillary vessel) perfusion density was also studied for impact of age. Average healthy RPC density was 42.5±1.47%. ANOVA and pairwise Tukey-Kramer tests showed that RPC density in the ≥60yr group was significantly lower compared to RPC density in all younger decades of life (p<0.01). Average healthy LPV density was 21.5±3.07%. Linear regression models indicated that LPV density decreased with age; however, ANOVA and pairwise Tukey-Kramer tests did not reach statistical significance. Deviation mapping enabled us to quantitatively and visually elucidate the significance of RPC density changes in disease. It is important to consider changes that occur with aging when analyzing RPC and LPV density changes in disease. RPC density, coupled with age-matched deviation mapping techniques, represents a potentially clinically useful method in detecting changes to peripapillary perfusion in disease.

  18. Motor equivalence during multi-finger accurate force production

    PubMed Central

    Mattos, Daniela; Schöner, Gregor; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2014-01-01

    We explored stability of multi-finger cyclical accurate force production action by analysis of responses to small perturbations applied to one of the fingers and inter-cycle analysis of variance. Healthy subjects performed two versions of the cyclical task, with and without an explicit target. The “inverse piano” apparatus was used to lift/lower a finger by 1 cm over 0.5 s; the subjects were always instructed to perform the task as accurately as they could at all times. Deviations in the spaces of finger forces and modes (hypothetical commands to individual fingers) were quantified in directions that did not change total force (motor equivalent) and in directions that changed the total force (non-motor equivalent). Motor equivalent deviations started immediately with the perturbation and increased progressively with time. After a sequence of lifting-lowering perturbations leading to the initial conditions, motor equivalent deviations were dominating. These phenomena were less pronounced for analysis performed with respect to the total moment of force with respect to an axis parallel to the forearm/hand. Analysis of inter-cycle variance showed consistently higher variance in a subspace that did not change the total force as compared to the variance that affected total force. We interpret the results as reflections of task-specific stability of the redundant multi-finger system. Large motor equivalent deviations suggest that reactions of the neuromotor system to a perturbation involve large changes of neural commands that do not affect salient performance variables, even during actions whose purpose is to correct those salient variables. Consistency of the analyses of motor equivalence and variance analysis provides additional support for the idea of task-specific stability ensured at a neural level. PMID:25344311
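The motor-equivalent decomposition used in this record amounts to splitting a deviation vector into its component along the task Jacobian for total force (J = [1, 1, ..., 1]) and its null-space remainder; a minimal sketch, with invented per-finger force deviations:

```python
# Split per-finger force deviations into a motor-equivalent (ME) part,
# which leaves total force unchanged, and a non-motor-equivalent (nME)
# part along the task Jacobian J = [1, 1, ..., 1] for total force.
# The deviation values are hypothetical, chosen for illustration.

def decompose(deviation):
    n = len(deviation)
    mean = sum(deviation) / n
    nme = [mean] * n                      # component along J: changes the sum
    me = [d - mean for d in deviation]    # null-space component: sum is zero
    return me, nme

dev = [0.4, -0.1, -0.2, 0.3]   # hypothetical per-finger force changes (N)
me, nme = decompose(dev)
```

The ME part sums to zero by construction, so any deviation lying entirely in that subspace leaves the salient variable (total force) untouched, which is why large ME variance is read as task-specific stability.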

  19. Random matrix approach to cross correlations in financial data

    NASA Astrophysics Data System (ADS)

    Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene

    2002-06-01

    We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices C of returns constructed from (i) 30-min returns of 1000 US stocks for the 2-yr period 1994-1995, (ii) 30-min returns of 881 US stocks for the 2-yr period 1996-1997, and (iii) 1-day returns of 422 US stocks for the 35-yr period 1962-1996. We test the statistics of the eigenvalues λi of C against a "null hypothesis": a random correlation matrix constructed from mutually uncorrelated time series. We find that a majority of the eigenvalues of C fall within the RMT bounds [λ-,λ+] for the eigenvalues of random correlation matrices. We test the eigenvalues of C within the RMT bound for universal properties of random matrices and find good agreement with the results for the Gaussian orthogonal ensemble of random matrices, implying a large degree of randomness in the measured cross-correlation coefficients. Further, we find that the distribution of eigenvector components for the eigenvectors corresponding to the eigenvalues outside the RMT bound display systematic deviations from the RMT prediction. In addition, we find that these "deviating eigenvectors" are stable in time. We analyze the components of the deviating eigenvectors and find that the largest eigenvalue corresponds to an influence common to all stocks. Our analysis of the remaining deviating eigenvectors shows distinct groups, whose identities correspond to conventionally identified business sectors. Finally, we discuss applications to the construction of portfolios of stocks that have a stable ratio of risk to return.
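The null-hypothesis comparison described here can be sketched using the Marchenko-Pastur bounds λ± = (1 ± √(N/T))² for the eigenvalues of the correlation matrix of T observations of N mutually uncorrelated series; the matrix sizes below are illustrative, far smaller than the stock databases used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500                       # series and observations (illustrative)
lam_min = (1 - np.sqrt(N / T)) ** 2   # RMT lower eigenvalue bound
lam_max = (1 + np.sqrt(N / T)) ** 2   # RMT upper eigenvalue bound

# Correlation matrix of mutually uncorrelated "returns" (the null
# hypothesis): its eigenvalues should fall inside [lam_min, lam_max]
# up to finite-size effects, so eigenvalues of real data lying outside
# the band flag genuine cross correlations.
returns = rng.standard_normal((T, N))
C = np.corrcoef(returns, rowvar=False)
eigvals = np.linalg.eigvalsh(C)
frac_inside = float(np.mean((eigvals >= lam_min) & (eigvals <= lam_max)))
```

For real market data, the largest eigenvalue typically sits far above lam_max (the "market mode"), with a handful of further deviating eigenvalues corresponding to business sectors.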

  20. Two large earthquakes in western Switzerland in the sixteenth century: 1524 in Ardon (VS) and 1584 in Aigle (VD)

    NASA Astrophysics Data System (ADS)

    Schwarz-Zanetti, Gabriela; Fäh, Donat; Gache, Sylvain; Kästli, Philipp; Loizeau, Jeanluc; Masciadri, Virgilio; Zenhäusern, Gregor

    2018-03-01

    The Valais is the most seismically active region of Switzerland. Strong damaging events occurred in 1755, 1855, and 1946. Based on historical documents, we discuss two known damaging events in the sixteenth century: the 1524 Ardon and the 1584 Aigle earthquakes. For the 1524 event, a document describes damage in Ardon, Plan-Conthey, and Savièse, and a stone tablet at the new bell tower of the Ardon church confirms the reconstruction of the bell tower after the earthquake. Additionally, significant construction activity in Upper Valais churches during the second quarter of the sixteenth century is discussed, although it cannot be clearly related to this event. The assessed moment magnitude Mw of the 1524 event is 5.8, with an error of about 0.5 units corresponding to one standard deviation. The epicenter is at 46.27 N, 7.27 E with a high uncertainty of about 50 km corresponding to one standard deviation. The assessed moment magnitude Mw of the 1584 main shock is 5.9, with an error of about 0.25 units corresponding to one standard deviation. The epicenter is at 46.33 N and 6.97 E with an uncertainty of about 25 km corresponding to one standard deviation. Exceptional movements in Lake Geneva wreaked havoc along the shore of the Rhone delta. The large extent of the induced damage can be explained by an expanded subaquatic slide with resultant tsunami and seiche in Lake Geneva. The strongest of the aftershocks occurred on March 14 with magnitude 5.4 and triggered a destructive landslide covering the villages Corbeyrier and Yvorne, VD.

  1. SU-F-J-29: Dosimetric Effect of Image Registration ROI Size and Focus in Automated CBCT Registration for Spine SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magnelli, A; Smith, A; Chao, S

    2016-06-15

    Purpose: Spinal stereotactic body radiotherapy (SBRT) involves highly conformal dose distributions and steep dose gradients due to the proximity of the spinal cord to the treatment volume. To achieve the planning goals while limiting the spinal cord dose, patients are setup using kV cone-beam CT (kV-CBCT) with 6 degree corrections. The kV-CBCT registration with the reference CT is dependent on a user selected region of interest (ROI). The objective of this work is to determine the dosimetric impact of ROI selection. Methods: Twenty patients were selected for this study. For each patient, the kV-CBCT was registered to the reference CT using three ROIs including: 1) the external body, 2) a large anatomic region, and 3) a small region focused in the target volume. Following each registration, the aligned CBCTs and contours were input to the treatment planning system for dose evaluation. The minimum dose, dose to 99% and 90% of the tumor volume (D99%, D90%), dose to 0.03cc and the dose to 10% of the spinal cord subvolume (V10Gy) were compared to the planned values. Results: The average deviations in the tumor minimum dose were 2.68%±1.7%, 4.6%±4.0%, 14.82%±9.9% for small, large and the external ROIs, respectively. The average deviations in tumor D99% were 1.15%±0.7%, 3.18%±1.7%, 10.0%±6.6%, respectively. The average deviations in tumor D90% were 1.00%±0.96%, 1.14%±1.05%, 3.19%±4.77%, respectively. The average deviations in the maximum dose to the spinal cord were 2.80%±2.56%, 7.58%±8.28%, 13.35%±13.14%, respectively. The average deviations in spinal cord V10Gy were 1.69%±0.88%, 1.98%±2.79%, 2.71%±5.63%. Conclusion: When using automated registration algorithms for CBCT-reference alignment, a small target-focused ROI results in the least dosimetric deviation from the plan. It is recommended to focus narrowly on the target volume to keep the spinal cord dose below tolerance.

  2. Motion-robust intensity-modulated proton therapy for distal esophageal cancer.

    PubMed

    Yu, Jen; Zhang, Xiaodong; Liao, Li; Li, Heng; Zhu, Ronald; Park, Peter C; Sahoo, Narayan; Gillin, Michael; Li, Yupeng; Chang, Joe Y; Komaki, Ritsuko; Lin, Steven H

    2016-03-01

    To develop methods for evaluation and mitigation of dosimetric impact due to respiratory and diaphragmatic motion during free breathing in treatment of distal esophageal cancers using intensity-modulated proton therapy (IMPT). This was a retrospective study on 11 patients with distal esophageal cancer. For each patient, four-dimensional computed tomography (4D CT) data were acquired, and a nominal dose was calculated on the average phase of the 4D CT. The changes of water equivalent thickness (ΔWET) to cover the treatment volume from the peak of inspiration to the valley of expiration were calculated for a full range of beam angle rotation. Two IMPT plans were calculated: one at beam angles corresponding to small ΔWET and one at beam angles corresponding to large ΔWET. Four patients were selected for the calculation of 4D-robustness-optimized IMPT plans due to large motion-induced dose errors generated in conventional IMPT. To quantitatively evaluate motion-induced dose deviation, the authors calculated the lowest dose received by 95% (D95) of the internal clinical target volume for the nominal dose, the D95 calculated on the maximum inhale and exhale phases of the 4D CT (DCT0 and DCT50), the 4D composite dose, and the 4D dynamic dose for a single fraction. The dose deviation increased with the average ΔWET of the implemented beams, ΔWETave. When ΔWETave was less than 5 mm, the dose error was less than 1 cobalt gray equivalent based on DCT0 and DCT50. The dose deviation determined on the basis of DCT0 and DCT50 was proportionally larger than that determined on the basis of the 4D composite dose. The 4D-robustness-optimized IMPT plans notably reduced the overall dose deviation of multiple fractions and the dose deviation caused by the interplay effect in a single fraction. In IMPT for distal esophageal cancer, ΔWET analysis can be used to select the beam angles that are least affected by respiratory and diaphragmatic motion.
To further reduce dose deviation, the 4D-robustness optimization can be implemented for IMPT planning. Calculation of DCT0 and DCT50 is a conservative method to estimate the motion-induced dose errors.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wulff, J; Huggins, A

    Purpose: The shape of a single beam in proton PBS influences the resulting dose distribution. Spot profiles are modelled as two-dimensional Gaussian (single/double) distributions in treatment planning systems (TPS). Impact of slight deviations from an ideal Gaussian on resulting dose distributions is typically assumed to be small due to alleviation by multiple Coulomb scattering (MCS) in tissue and superposition of many spots. Quantitative limits are however not clear per se. Methods: A set of 1250 deliberately deformed profiles with sigma=4 mm for a Gaussian fit was constructed. Profiles and fit were normalized to the same area, resembling output calibration in the TPS. Depth-dependent MCS was considered. The deviation between deformed and ideal profiles was characterized by root-mean-squared deviation (RMSD), skewness/kurtosis (SK) and full-width at different percentage of maximum (FWxM). The profiles were convolved with different fluence patterns (regular/random) resulting in hypothetical dose distributions. The resulting deviations were analyzed by applying a gamma-test. Results were compared to measured spot profiles. Results: A clear correlation between pass-rate and profile metrics could be determined. The largest impact occurred for a regular fluence pattern with increasing distance between single spots, followed by a random distribution of spot weights. The results are strongly dependent on gamma-analysis dose and distance levels. Pass-rates of >95% at 2%/2 mm and 40 mm depth (=70 MeV) could only be achieved for RMSD<10%, deviation in FWxM at 20% and root of quadratic sum of SK <0.8. As expected the results improve for larger depths. The trends were well resembled for measured spot profiles. Conclusion: All measured profiles from ProBeam sites passed the criteria. Given the fact that beam-line tuning can result in shape distortions, the derived criteria represent a useful QA tool for commissioning and design of future beam-line optics.
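One of the profile metrics named above, the RMSD between a measured spot profile and its Gaussian fit, can be sketched as follows; the sampling grid, the sigma of 4 mm (matching the abstract), and the sinusoidal deformation are assumptions for illustration only, not the study's actual profiles.

```python
import math

SIGMA = 4.0  # mm, the sigma quoted in the abstract

def gaussian(x, sigma=SIGMA):
    """Normalized one-dimensional Gaussian profile."""
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def rmsd_percent(profile, xs, sigma=SIGMA):
    """RMSD between a measured profile and the Gaussian model,
    expressed as a percentage of the Gaussian peak value."""
    peak = gaussian(0.0, sigma)
    sq = [(p - gaussian(x, sigma)) ** 2 for p, x in zip(profile, xs)]
    return 100.0 * math.sqrt(sum(sq) / len(sq)) / peak

xs = [0.5 * i for i in range(-40, 41)]   # sampling grid, -20..20 mm
ideal = [gaussian(x) for x in xs]
# Deliberately deformed profile: a small sinusoidal ripple on the Gaussian.
deformed = [g * (1 + 0.05 * math.sin(x)) for g, x in zip(ideal, xs)]
ripple_rmsd = rmsd_percent(deformed, xs)
```

Under the study's criteria, a profile like this would be acceptable only if the metric stayed below the derived limit (RMSD < 10% for the shallowest depth quoted).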

  4. Precision theoretical analysis of neutron radiative beta decay to order O(α²/π²)

    NASA Astrophysics Data System (ADS)

    Ivanov, A. N.; Höllwieser, R.; Troitskaya, N. I.; Wellenzohn, M.; Berdnikov, Ya. A.

    2017-06-01

    In the Standard Model (SM) we calculate the decay rate of the neutron radiative β⁻ decay to order O(α²/π²) ∼ 10⁻⁵, where α is the fine-structure constant, and radiative corrections to order O(α/π) ∼ 10⁻³. The obtained results, together with the recent analysis of the neutron radiative β⁻ decay to next-to-leading order in the large proton-mass expansion performed by Ivanov et al. [Phys. Rev. D 95, 033007 (2017), 10.1103/PhysRevD.95.033007], describe recent experimental data by the RDK II Collaboration [Bales et al., Phys. Rev. Lett. 116, 242501 (2016), 10.1103/PhysRevLett.116.242501] within 1.5 standard deviations. We argue for a substantial influence of strong low-energy interactions of hadrons coupled to photons on the properties of the amplitude of the neutron radiative β⁻ decay under gauge transformations of real and virtual photons.

  5. Time irreversibility and multifractality of power along single particle trajectories in turbulence

    NASA Astrophysics Data System (ADS)

    Cencini, Massimo; Biferale, Luca; Boffetta, Guido; De Pietro, Massimo

    2017-10-01

    The irreversible turbulent energy cascade epitomizes strongly nonequilibrium systems. At the level of single fluid particles, time irreversibility is revealed by the asymmetry of the rate of kinetic energy change, the Lagrangian power, whose moments display a power-law dependence on the Reynolds number, as recently shown by Xu et al. [H. Xu et al., Proc. Natl. Acad. Sci. USA 111, 7558 (2014), 10.1073/pnas.1321682111]. Here Lagrangian power statistics are rationalized within the multifractal model of turbulence, whose predictions are shown to agree with numerical and empirical data. Multifractal predictions are also tested, for very large Reynolds numbers, in dynamical models of the turbulent cascade, obtaining remarkably good agreement for statistical quantities insensitive to the asymmetry and, remarkably, deviations for those probing the asymmetry. These findings raise fundamental questions concerning time irreversibility in the infinite-Reynolds-number limit of the Navier-Stokes equations.

  6. A microphysical parameterization of aqSOA and sulfate formation in clouds

    NASA Astrophysics Data System (ADS)

    McVay, Renee; Ervens, Barbara

    2017-07-01

    Sulfate and secondary organic aerosol (cloud aqSOA) can be chemically formed in cloud water. Model implementation of these processes represents a computational burden due to the large number of microphysical and chemical parameters. Chemical mechanisms have been condensed by reducing the number of chemical parameters. Here an alternative is presented to reduce the number of microphysical parameters (number of cloud droplet size classes). In-cloud mass formation is surface and volume dependent due to surface-limited oxidant uptake and/or size-dependent pH. Box and parcel model simulations show that using the effective cloud droplet diameter (proportional to total volume-to-surface ratio) reproduces sulfate and aqSOA formation rates within ≤30% as compared to full droplet distributions; other single diameters lead to much greater deviations. This single-class approach reduces computing time significantly and can be included in models when total liquid water content and effective diameter are available.
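The effective diameter used in this record is proportional to the total volume-to-surface ratio of the droplet population, i.e. d_eff = Σd³/Σd² over the droplet diameters; a minimal sketch with an invented size distribution:

```python
def effective_diameter(diameters):
    """Effective cloud-droplet diameter, proportional to the population's
    total volume-to-surface ratio: d_eff = sum(d**3) / sum(d**2)."""
    return sum(d ** 3 for d in diameters) / sum(d ** 2 for d in diameters)

# Invented droplet size distribution in micrometres (not from the paper).
drops = [8.0, 10.0, 12.0, 14.0, 20.0]
d_eff = effective_diameter(drops)
mean_d = sum(drops) / len(drops)
```

Note that d_eff exceeds the arithmetic mean diameter because larger droplets carry a disproportionate share of the volume, which is precisely why this single diameter better represents surface- and volume-dependent in-cloud mass formation than other single-diameter choices.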

  7. Thermal Texture Selection and Correction for Building Facade Inspection Based on Thermal Radiant Characteristics

    NASA Astrophysics Data System (ADS)

    Lin, D.; Jarzabek-Rychard, M.; Schneider, D.; Maas, H.-G.

    2018-05-01

    An automatic building façade thermal texture mapping approach, using uncooled thermal camera data, is proposed in this paper. First, a shutter-less radiometric thermal camera calibration method is implemented to remove the large offset deviations caused by the changing ambient environment. Then, a 3D façade model is generated from an RGB image sequence using structure-from-motion (SfM) techniques. Subsequently, for each triangle in the 3D model, the optimal texture is selected by taking into consideration local image scale, object incident angle, image viewing angle as well as occlusions. Afterwards, the selected textures can be further corrected using thermal radiant characteristics. Finally, a Gaussian filter outperforms the voted-texture strategy at smoothing texture seams, thereby helping to reduce the false-alarm rate in façade thermal leakage detection. Our approach is evaluated on a building row façade located in Dresden, Germany.

  8. Collective strong coupling with homogeneous Rabi frequencies using a 3D lumped element microwave resonator

    NASA Astrophysics Data System (ADS)

    Angerer, Andreas; Astner, Thomas; Wirtitsch, Daniel; Sumiya, Hitoshi; Onoda, Shinobu; Isoya, Junichi; Putz, Stefan; Majer, Johannes

    2016-07-01

    We design and implement 3D-lumped element microwave cavities that spatially focus magnetic fields to a small mode volume. They allow coherent and uniform coupling to electron spins hosted by nitrogen vacancy centers in diamond. We achieve large homogeneous single spin coupling rates, with an enhancement of more than one order of magnitude compared to standard 3D cavities with a fundamental resonance at 3 GHz. Finite element simulations confirm that the magnetic field distribution is homogeneous throughout the entire sample volume, with a root mean square deviation of 1.54%. With a sample containing 10¹⁷ nitrogen vacancy electron spins, we achieve a collective coupling strength of Ω = 12 MHz, a cooperativity factor C = 27, and clearly enter the strong coupling regime. This makes it possible to interface a macroscopic spin ensemble with microwave circuits, and the homogeneous Rabi frequency paves the way to manipulating the full ensemble population in a coherent way.

  9. Multimedia telehomecare system using standard TV set.

    PubMed

    Guillén, S; Arredondo, M T; Traver, V; García, J M; Fernández, C

    2002-12-01

    Nowadays, there is a very large number of patients who need specific health support at home. The deployment of broadband communication networks is making feasible the provision of home care services with a proper quality of service. This paper presents a telehomecare multimedia platform that runs over integrated services digital network and internet protocol using videoconferencing standards H.320 and H.323, and a standard TV set for patient interaction. This platform allows online remote monitoring: ECG, heart sound, blood pressure. Usability, affordability, and interoperability were considered for the design and development of its hardware and software components. A first evaluation of technical and usability aspects was carried out with 52 patients of a private clinic and 10 university students. Results show a high rating (mean = 4.33, standard deviation [SD] = 1.63 on a five-point Likert scale) in the global perception of users on the quality of images, voice, and feeling of virtual presence.

  10. Nonlinear Autoregressive Exogenous modeling of a large anaerobic digester producing biogas from cattle waste.

    PubMed

    Dhussa, Anil K; Sambi, Surinder S; Kumar, Shashi; Kumar, Sandeep; Kumar, Surendra

    2014-10-01

    In waste-to-energy plants, there is every likelihood of variations in the quantity and characteristics of the feed. Although intermediate storage tanks are used, these are often of inadequate capacity to dampen the variations. In such situations an anaerobic digester treating waste slurry operates under dynamic conditions. In this work a special type of dynamic Artificial Neural Network model, called the Nonlinear Autoregressive Exogenous model, is used to model the dynamics of anaerobic digesters by using about one year of data collected on the operating digesters. The developed model consists of two hidden layers, each having 10 neurons, and uses an 18-day delay. There are five neurons in the input layer and one neuron in the output layer per day. Model predictions of biogas production rate are close to plant performance, within ±8% deviation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. A test of inflated zeros for Poisson regression models.

    PubMed

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
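A simple moment-based check for excess zeros, comparing the observed zero count with the count expected under a Poisson model fitted by the sample mean, illustrates the problem this record addresses; this is not the authors' test, and the simulated data below are purely illustrative.

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one Poisson variate (Knuth's multiplication method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def zero_inflation_z(counts):
    """z-score comparing observed zeros to the number expected under a
    Poisson model with the sample mean as rate. Large positive values
    suggest zero inflation. A rough moment check, not the paper's test."""
    n = len(counts)
    lam_hat = sum(counts) / n
    p0 = math.exp(-lam_hat)                  # Poisson P(X = 0) at fitted rate
    observed = sum(1 for c in counts if c == 0)
    expected = n * p0
    return (observed - expected) / math.sqrt(n * p0 * (1 - p0))

rng = random.Random(1)
pure = [poisson_sample(2.0, rng) for _ in range(2000)]
inflated = [0 if rng.random() < 0.3 else c for c in pure]  # 30% extra zeros
z_pure, z_inflated = zero_inflation_z(pure), zero_inflation_z(inflated)
```

The pure Poisson sample yields a modest z-score while the zero-inflated sample yields a very large one; a calibrated test (like the one developed in the paper) additionally controls the type I error of this comparison.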

  12. Quantifying the Influence of Climate on Human Conflict

    NASA Astrophysics Data System (ADS)

    Hsiang, S. M.; Burke, M.; Miguel, E.

    2014-12-01

    A rapidly growing body of research examines whether human conflict can be affected by climatic changes. Drawing from archaeology, criminology, economics, geography, history, political science, and psychology, we assemble and analyze the most rigorous quantitative studies and document, for the first time, a striking convergence of results. We find strong causal evidence linking climatic events to human conflict across a range of spatial and temporal scales and across all major regions of the world. The magnitude of climate's influence is substantial: for each one standard deviation (1sd) change in climate toward warmer temperatures or more extreme rainfall, median estimates indicate that the frequency of interpersonal violence rises 4% and the frequency of intergroup conflict rises 14%. Because locations throughout the inhabited world are expected to warm 2sd to 4sd by 2050, amplified rates of human conflict could represent a large and critical impact of anthropogenic climate change.

  13. Vacuum deposition of iridium on large astronomical mirrors for use in the far UV

    NASA Technical Reports Server (NTRS)

    Herzig, H.; Spencer, R. S.

    1982-01-01

    An iridium coating has been deposited by electron-beam evaporation on a 0.91-m mirror which serves as the telescope primary of a sounding rocket instrument for far-UV spectrometry. The evaporation was carried out by applying 8 kV at 400 mA to the electron gun. Zone refined Ir of 99.99% purity was used, and the electron beam was electromagnetically swept over the surface of the evaporant. Under these conditions, deposition rates of 0.55 A/sec were achieved. The reflectance distribution achieved at a wavelength of 584 A was extremely uniform; the mean reflectance was 21.2% with a standard deviation of only 0.3%. This represents a substantial improvement over Al + MgF2 and Al + LiF coatings for applications involving multiple reflections and weak signals, as might be expected in a high-resolution spectrograph studying distant celestial objects.

  14. A numerical model for water and heat transport in freezing soils with nonequilibrium ice-water interfaces

    NASA Astrophysics Data System (ADS)

    Peng, Zhenyang; Tian, Fuqiang; Wu, Jingwei; Huang, Jiesheng; Hu, Hongchang; Darnault, Christophe J. G.

    2016-09-01

A one-dimensional numerical model of heat and water transport in freezing soils is developed by assuming that ice-water interfaces are not necessarily in equilibrium. The Clapeyron equation, which is derived for a static ice-water interface using thermal equilibrium theory, cannot be readily applied to a dynamic system such as freezing soils. Therefore, we handled the redistribution of liquid water with the Richards equation, in which the sink term is replaced by the freezing rate of pore water, taken to be proportional, through a coefficient β, to the extent of supercooling and to the water content available for freezing. Three short-term laboratory column simulations show reasonable agreement with observations, with the standard error of the simulated water content ranging between 0.007 and 0.011 cm3 cm-3, an improvement in accuracy over models that assume equilibrium ice-water interfaces. Simulation results suggest that when the freezing front is fixed at a specific depth, the deviation of the ice-water interface from equilibrium at this location will increase with time. However, this deviation tends to weaken when the freezing front slowly penetrates to greater depth, accompanied by a thinner soil layer showing significant deviation. The coefficient β plays an important role in the simulation of heat and water transport. A smaller β results in a larger deviation of the ice-water interface from equilibrium and a lagging estimate of the freezing front. It also leads to an underestimation of water content in soils that were previously frozen at a rapid freezing rate, and an overestimation of water content in the rest of the soil.
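The sink term described above — a freezing rate proportional to both the supercooling and the water still available to freeze — can be sketched with a zero-dimensional explicit time march. All numerical values (β, temperatures, water contents) are illustrative assumptions, not the paper's calibrated parameters.

```python
# Minimal sketch of the nonequilibrium freezing-rate sink term:
# rate = beta * (T_f - T) * (theta_l - theta_r), applied only when the
# soil is supercooled and unfrozen water remains. Values are hypothetical.
T_f = 0.0        # freezing point, deg C
beta = 1e-5      # rate coefficient, 1/(K s)  (assumed)
theta_r = 0.05   # residual (unfreezable) water content
dt = 60.0        # time step, s

theta_l = 0.30   # liquid water content
theta_i = 0.00   # ice content
T = -2.0         # soil temperature, deg C (held fixed for this sketch)

for _ in range(10000):                      # march forward in time
    supercool = max(T_f - T, 0.0)           # freezing only below T_f
    avail = max(theta_l - theta_r, 0.0)     # only freezable liquid water
    rate = beta * supercool * avail         # the sink term of the Richards equation
    dtheta = min(rate * dt, avail)          # never freeze more than is available
    theta_l -= dtheta
    theta_i += dtheta * 1.09                # water-to-ice volumetric expansion
print(round(theta_l, 3), round(theta_i, 3))
```

With temperature held fixed, the liquid content decays exponentially toward the residual value θ_r; in the full model this sink is coupled to the heat equation and to water redistribution over depth.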

  15. Distribution Development for STORM Ingestion Input Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fulton, John

The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences of a release of radioactive material in a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report describes the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. Average Crop Yield changed from a constant value of 3.783 kg edible/m2 to a normal distribution with a mean of 3.23 kg edible/m2 and a standard deviation of 0.442 kg edible/m2. The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37e-4 (Bq crop/kg)/(Bq soil/kg) to a lognormal distribution with a geometric mean of 3.38e-4 (Bq crop/kg)/(Bq soil/kg) and a standard deviation of 3.33.
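Replacing the constants with sampled values amounts to drawing from the reported distributions. The sketch below does this with numpy, reading the lognormal's reported 3.33 as a geometric (multiplicative) standard deviation — an assumption on our part; numpy's lognormal is parameterized by the mean and sigma of the underlying normal, i.e. the logs of the geometric mean and geometric standard deviation.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

consumption = rng.normal(102.96, 2.65, n)       # Consumption Rate, kg/yr
crop_yield  = rng.normal(3.23, 0.442, n)        # Average Crop Yield, kg edible/m2
land_ratio  = rng.normal(0.0312, 0.00292, n)    # Cropland to Landuse Database Ratio
# Lognormal from geometric mean and geometric standard deviation:
uptake = rng.lognormal(np.log(3.38e-4), np.log(3.33), n)  # Crop Uptake Factor

print(round(consumption.mean(), 1), float(np.median(uptake)))
```

The sample mean of the normal draws recovers 102.96 kg/yr, and the sample median of the lognormal draws recovers the geometric mean of 3.38e-4, which is a quick sanity check on the parameterization.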

  16. Factors leading to different viability predictions for a grizzly bear data set

    USGS Publications Warehouse

    Mills, L.S.; Hayes, S.G.; Wisdom, M.J.; Citta, J.; Mattson, D.J.; Murphy, K.

    1996-01-01

Population viability analysis programs are being used increasingly in research and management applications, but there has not been a systematic study of the congruence of different program predictions based on a single data set. We performed such an analysis using four population viability analysis computer programs: GAPPS, INMAT, RAMAS/AGE, and VORTEX. The standardized demographic rates used in all programs were generalized from hypothetical increasing and decreasing grizzly bear (Ursus arctos horribilis) populations. Idiosyncrasies of input format for each program led to minor differences in intrinsic growth rates that translated into striking differences in estimates of extinction rates and expected population size. In contrast, the addition of demographic stochasticity, environmental stochasticity, and inbreeding costs caused only a small divergence in viability predictions. However, the addition of density dependence caused large deviations between the programs despite our best attempts to use the same density-dependent functions. Population viability programs differ in how density dependence is incorporated, and the necessary functions are difficult to parameterize accurately. Thus, we recommend that unless data clearly suggest a particular density-dependent model, predictions based on population viability analysis should include at least one scenario without density dependence. Further, we describe output metrics that may differ between programs; development of future software could benefit from standardized input and output formats across different programs.
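The sensitivity described above — minor differences in intrinsic growth rate producing striking differences in extinction estimates — can be illustrated with a toy single-class projection under demographic stochasticity. The survival and fecundity values are illustrative, not grizzly bear data, and this sketch is far simpler than the age-structured programs compared in the study.

```python
import numpy as np

def extinction_prob(survival, fecundity, n0=20, years=50, runs=2000, seed=0):
    """Monte Carlo 50-year extinction probability for a single-class
    population with binomial survival and Poisson births (toy model)."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(runs):
        n = n0
        for _ in range(years):
            survivors = rng.binomial(n, survival)         # demographic stochasticity
            births = rng.poisson(fecundity * survivors)   # in deaths and births
            n = survivors + births
            if n == 0:
                extinct += 1
                break
    return extinct / runs

# A small change in per-capita fecundity shifts the deterministic growth
# rate lambda = s * (1 + f) from 0.945 to 1.026 ...
p_low  = extinction_prob(0.90, 0.05)   # lambda ≈ 0.945, declining
p_high = extinction_prob(0.90, 0.14)   # lambda ≈ 1.026, increasing
# ... and produces a large gap in 50-year extinction probability.
print(p_low, p_high)
```

This is why the study found that input idiosyncrasies affecting the realized growth rate mattered far more than the choice of stochastic machinery.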

  17. Estimating bacterial production in marine waters from the simultaneous incorporation of thymidine and leucine.

    PubMed

    Chin-Leo, G; Kirchman, D L

    1988-08-01

We examined the simultaneous incorporation of [3H]thymidine and [14C]leucine to obtain two independent indices of bacterial production (DNA and protein syntheses) in a single incubation. Incorporation rates of leucine estimated by the dual-label method were generally higher than those obtained by the single-label method, but the differences were small (dual/single = 1.1 +/- 0.2 [mean +/- standard deviation]) and were probably due to the presence of labeled leucyl-tRNA in the cold trichloroacetic acid-insoluble fraction. There were no significant differences in thymidine incorporation between dual- and single-label incubations (dual/single = 1.03 +/- 0.13). Addition of the two substrates in relatively large amounts (25 nM) did not apparently increase bacterial activity during short incubations (<5 h). With the dual-label method we found that thymidine and leucine incorporation rates covaried over depth profiles of the Chesapeake Bay. Estimates of bacterial production based on thymidine and leucine differed by less than 25%. Although the need for appropriate conversion factors has not been eliminated, the dual-label approach can be used to examine the variation in bacterial production while ensuring that the observed variation in incorporation rates is due to real changes in bacterial production rather than changes in conversion factors or introduction of other artifacts.

  18. Longitudinal changes in speech recognition in older persons.

    PubMed

    Dubno, Judy R; Lee, Fu-Shing; Matthews, Lois J; Ahlstrom, Jayne B; Horwitz, Amy R; Mills, John H

    2008-01-01

    Recognition of isolated monosyllabic words in quiet and recognition of key words in low- and high-context sentences in babble were measured in a large sample of older persons enrolled in a longitudinal study of age-related hearing loss. Repeated measures were obtained yearly or every 2 to 3 years. To control for concurrent changes in pure-tone thresholds and speech levels, speech-recognition scores were adjusted using an importance-weighted speech-audibility metric (AI). Linear-regression slope estimated the rate of change in adjusted speech-recognition scores. Recognition of words in quiet declined significantly faster with age than predicted by declines in speech audibility. As subjects aged, observed scores deviated increasingly from AI-predicted scores, but this effect did not accelerate with age. Rate of decline in word recognition was significantly faster for females than males and for females with high serum progesterone levels, whereas noise history had no effect. Rate of decline did not accelerate with age but increased with degree of hearing loss, suggesting that with more severe injury to the auditory system, impairments to auditory function other than reduced audibility resulted in faster declines in word recognition as subjects aged. Recognition of key words in low- and high-context sentences in babble did not decline significantly with age.

  19. Minimizing Isolate Catalyst Motion in Metal-Assisted Chemical Etching for Deep Trenching of Silicon Nanohole Array.

    PubMed

    Kong, Lingyu; Zhao, Yunshan; Dasgupta, Binayak; Ren, Yi; Hippalgaonkar, Kedar; Li, Xiuling; Chim, Wai Kin; Chiam, Sing Yang

    2017-06-21

The instability of isolate catalysts during metal-assisted chemical etching is a major hindrance to achieving high aspect ratio structures in the vertical and directional etching of silicon (Si). In this work, we discussed and showed how isolate catalyst motion can be influenced and controlled by the semiconductor doping type and the oxidant concentration ratio. We propose that the triggering event in deviating isolate catalyst motion is brought about by unequal etch rates across the isolate catalyst. This triggering event is indirectly affected by the oxidant concentration ratio through the etching rates. While the triggering events are stochastic, the doping concentration of silicon offers good control in minimizing isolate catalyst motion. The doping concentration affects the porosity at the etching front, and this directly affects the van der Waals (vdWs) forces between the metal catalyst and Si during etching. A reduction in the vdWs forces results in a lower bending torque, which can prevent the straying of the isolate catalyst from its directional etching in the event of unequal etch rates. The key understandings of isolate catalyst motion derived from this work allowed us to demonstrate the fabrication of a large-area, uniformly ordered sub-500 nm nanohole array with an unprecedented high aspect ratio of ∼12.

  20. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms, using an imaging genetics study with 392 subjects as an example. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally, we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
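The core idea of robust regression — downweighting observations with large residuals so that artifacts do not drive the fit — can be sketched with a self-contained Huber M-estimator solved by iteratively reweighted least squares. This is a minimal illustration of the class of estimator the study advocates, not the authors' analysis pipeline.

```python
import numpy as np

def huber_irls(X, y, delta=1.345, iters=50):
    """Huber robust regression via iteratively reweighted least squares.
    delta=1.345 is the conventional tuning constant for ~95% Gaussian
    efficiency; the residual scale is re-estimated by the MAD each pass."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS starting point
    for _ in range(iters):
        r = y - X @ beta
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        u = np.abs(r / scale)
        w = np.where(u <= delta, 1.0, delta / u)       # downweight large residuals
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)     # weighted normal equations
    return beta

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 2.0 + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n)
y[:10] += 15.0                                         # gross outliers (e.g. artifacts)
ols = np.linalg.lstsq(X, y, rcond=None)[0]
rob = huber_irls(X, y)
print(np.round(ols, 2), np.round(rob, 2))
```

With 5% gross outliers, the ordinary least-squares intercept is pulled far from the true value of 2.0, while the Huber estimate stays close to it, which is the behavior the study exploits at the scale of whole-brain analyses.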
