Cumulants and large deviations of the current through non-equilibrium steady states
NASA Astrophysics Data System (ADS)
Bodineau, Thierry; Derrida, Bernard
2007-06-01
Using a generalisation of detailed balance for systems maintained out of equilibrium by contact with 2 reservoirs at unequal temperatures or at unequal densities, one can recover the fluctuation theorem for the large deviation function of the current. For large diffusive systems, we show how the large deviation function of the current can be computed using a simple additivity principle. The validity of this additivity principle and the occurrence of phase transitions are discussed in the framework of the macroscopic fluctuation theory. To cite this article: T. Bodineau, B. Derrida, C. R. Physique 8 (2007).
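For orientation, the additivity principle can be stated schematically in the language of macroscopic fluctuation theory (diffusive rescaling of space to [0,1]; the diffusivity D(ρ), mobility σ(ρ) and reservoir densities ρ_a, ρ_b are notation assumed here, not taken from the abstract): it amounts to evaluating the dynamical cost on time-independent density profiles, so that the large deviation function of the time-averaged current j is

$$
I(j) \;=\; \inf_{\rho(\cdot)\,:\,\rho(0)=\rho_a,\ \rho(1)=\rho_b}\ \int_0^1 \frac{\big[\,j + D(\rho(x))\,\rho'(x)\,\big]^2}{2\,\sigma(\rho(x))}\,dx ,
$$

with P(Q_T/T ≈ j) decaying as exp[−T I(j)] up to the diffusive rescaling of time. Phase transitions, roughly speaking, signal that a time-dependent optimal profile takes over from this time-independent ansatz.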
A General Conditional Large Deviation Principle
La Cour, Brian R.; Schieve, William C.
2015-07-18
Given a sequence of Borel probability measures on a Hausdorff space which satisfy a large deviation principle (LDP), we consider the corresponding sequence of measures formed by conditioning on a set B. If the large deviation rate function I is good and effectively continuous, and the conditioning set has the property that (1) $\overline{B^{\circ}} = \overline{B}$ and (2) I(x) < ∞ for all x ∈ $\overline{B}$, then the sequence of conditional measures satisfies an LDP with the good, effectively continuous rate function I_B, where I_B(x) = I(x) − inf_B I if x ∈ $\overline{B}$ and I_B(x) = ∞ otherwise.
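In display form (with the large deviation speed taken as n, an assumption not stated in the abstract), the conditional rate function and the resulting LDP bounds read

$$
I_B(x) \;=\;
\begin{cases}
I(x) - \inf_{y \in \overline{B}} I(y), & x \in \overline{B},\\
+\infty, & \text{otherwise,}
\end{cases}
$$

$$
-\inf_{x \in A^{\circ}} I_B(x) \;\le\; \liminf_{n\to\infty} \tfrac{1}{n} \log \mu_n(A \mid B)
\;\le\; \limsup_{n\to\infty} \tfrac{1}{n} \log \mu_n(A \mid B)
\;\le\; -\inf_{x \in \overline{A}} I_B(x)
$$

for measurable sets A, i.e. the usual LDP sandwich with I replaced by I_B.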
Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn; Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk
2017-06-15
In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Roeck, W., E-mail: wojciech.deroeck@fys.kuleuven.be; Maes, C., E-mail: christian.maes@fys.kuleuven.be; Schütz, M., E-mail: marius.schutz@fys.kuleuven.be
2015-02-15
We study the projection on classical spins starting from quantum equilibria. We show Gibbsianness or quasi-locality of the resulting classical spin system for a class of gapped quantum systems at low temperatures, including quantum ground states. A consequence of Gibbsianness is the validity of a large deviation principle in the quantum system, which is known and here recovered in regimes of high temperature or for thermal states in one dimension. On the other hand, we give an example of a quantum ground state with strong nonlocality in the classical restriction, giving rise to what we call measurement-induced entanglement and still satisfying a large deviation principle.
NASA Astrophysics Data System (ADS)
Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki
2018-03-01
We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on Z^d (d ≥ 2). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for d ≥ 3) and the level sets of the Gaussian free field (d ≥ 3). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk, and both rate functions admit explicit variational formulas. The main difficulty in our setup lies in the inherent non-ellipticity as well as the lack of translation invariance stemming from conditioning on the event that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.
Large Deviations and Transitions Between Equilibria for Stochastic Landau-Lifshitz-Gilbert Equation
NASA Astrophysics Data System (ADS)
Brzeźniak, Zdzisław; Goldys, Ben; Jegaraj, Terence
2017-11-01
We study a stochastic Landau-Lifshitz equation on a bounded interval and with finite dimensional noise. We first show that there exists a pathwise unique solution to this equation and that this solution enjoys the maximal regularity property. Next, we prove the large deviations principle for the small noise asymptotics of solutions using the weak convergence method. An essential ingredient of the proof is the compactness, or weak-to-strong continuity, of the solution map for a deterministic Landau-Lifshitz equation when considered as a transformation of external fields. We then apply this large deviations principle to show that small noise can cause magnetisation reversal. We also show the importance of the shape anisotropy parameter for reducing the disturbance of the solution caused by small noise. The problem is motivated by applications from ferromagnetic nanowires to the fabrication of magnetic memories.
NASA Astrophysics Data System (ADS)
Hanasaki, Itsuo; Kawano, Satoyuki
2013-11-01
Motility of bacteria is usually recognized in the trajectory data and compared with Brownian motion, but the diffusion coefficient is insufficient to evaluate it. In this paper, we propose a method based on the large deviation principle. We show that it can be used to evaluate the non-Gaussian characteristics of model Escherichia coli motions and to distinguish combinations of the mean running duration and running speed that lead to the same diffusion coefficient. Our proposed method does not require chemical stimuli to induce the chemotaxis in a specific direction, and it is applicable to various types of self-propelling motions for which no a priori information of, for example, threshold parameters for run and tumble or head/tail direction is available. We also address the issue of the finite-sample effect on the large deviation quantities, but we propose to make use of it to characterize the nature of motility.
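As a rough illustration of the kind of analysis described here (not the authors' estimator), one can estimate a scaled cumulant generating function from displacement data of a trajectory and obtain a rate function by a numerical Legendre transform; all parameter names and the synthetic trajectory below are placeholders.

```python
import numpy as np

def rate_function_from_trajectory(x, dt, window, k_grid):
    """Estimate the rate function of the time-averaged velocity from a 1D
    trajectory x sampled every dt, using (overlapping) displacements over a
    coarse-graining window. Returns (velocity grid, estimated rate function)."""
    steps = int(round(window / dt))
    dx = x[steps:] - x[:-steps]                    # displacements over the window
    # empirical scaled cumulant generating function lambda(k)
    scgf = np.array([np.log(np.mean(np.exp(k * dx))) / window for k in k_grid])
    # parametric Legendre transform: I(v) = sup_k [k v - lambda(k)]
    v_grid = np.gradient(scgf, k_grid)             # velocity where the sup is attained
    rate = k_grid * v_grid - scgf
    return v_grid, rate

# synthetic example: a biased random walk standing in for a model swimmer
rng = np.random.default_rng(0)
dt = 0.01
x = np.cumsum(0.5 * dt + np.sqrt(dt) * rng.normal(size=200_000))
v, I = rate_function_from_trajectory(x, dt, window=1.0, k_grid=np.linspace(-2, 2, 81))
print("rate function near the mean velocity:", I[np.argmin(np.abs(v - 0.5))])
```

Finite-sample effects of the kind discussed in the abstract show up directly in such an estimator, since the empirical average of exp(k·Δx) is dominated by rare displacements for large |k|.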
A Quantitative Evaluation of the Flipped Classroom in a Large Lecture Principles of Economics Course
ERIC Educational Resources Information Center
Balaban, Rita A.; Gilleskie, Donna B.; Tran, Uyen
2016-01-01
This research provides evidence that the flipped classroom instructional format increases student final exam performance, relative to the traditional instructional format, in a large lecture principles of economics course. The authors find that the flipped classroom directly improves performance by 0.2 to 0.7 standardized deviations, depending on…
A large deviations principle for stochastic flows of viscous fluids
NASA Astrophysics Data System (ADS)
Cipriano, Fernanda; Costa, Tiago
2018-04-01
We study the well-posedness of a stochastic differential equation on the two-dimensional torus T^2, driven by an infinite dimensional Wiener process with drift in the Sobolev space L^2(0,T; H^1(T^2)). The solution corresponds to a stochastic Lagrangian flow in the sense of DiPerna-Lions. By taking into account that the motion of a viscous incompressible fluid on the torus can be described through a suitable stochastic differential equation of the previous type, we study the inviscid limit. By establishing a large deviations principle, we show that, as the viscosity goes to zero, the Lagrangian stochastic Navier-Stokes flow approaches the Euler deterministic Lagrangian flow with an exponential rate function.
Some limit theorems for ratios of order statistics from uniform random variables.
Xu, Shou-Fang; Miao, Yu
2017-01-01
In this paper, we study the ratios of order statistics based on samples drawn from uniform distribution and establish some limit properties such as the almost sure central limit theorem, the large deviation principle, the Marcinkiewicz-Zygmund law of large numbers and complete convergence.
Large Deviations in Weakly Interacting Boundary Driven Lattice Gases
NASA Astrophysics Data System (ADS)
van Wijland, Frédéric; Rácz, Zoltán
2005-01-01
One-dimensional, boundary-driven lattice gases with local interactions are studied in the weakly interacting limit. The density profiles and the correlation functions are calculated to first order in the interaction strength for zero-range and short-range processes differing only in the specifics of the detailed-balance dynamics. Furthermore, the effective free-energy (large-deviation function) and the integrated current distribution are also found to this order. From the former, we find that the boundary drive generates long-range correlations only for the short-range dynamics while the latter provides support to an additivity principle recently proposed by Bodineau and Derrida.
Large deviations and mixing for dissipative PDEs with unbounded random kicks
NASA Astrophysics Data System (ADS)
Jakšić, V.; Nersesyan, V.; Pillet, C.-A.; Shirikyan, A.
2018-02-01
We study the problem of exponential mixing and large deviations for discrete-time Markov processes associated with a class of random dynamical systems. Under some dissipativity and regularisation hypotheses for the underlying deterministic dynamics and a non-degeneracy condition for the driving random force, we discuss the existence and uniqueness of a stationary measure and its exponential stability in the Kantorovich-Wasserstein metric. We next turn to the large deviations principle (LDP) and establish its validity for the occupation measures of the Markov processes in question. The proof is based on Kifer's criterion for non-compact spaces, a result on large-time asymptotics for generalised Markov semigroups, and a coupling argument. These tools combined constitute a new approach to the LDP for infinite-dimensional processes without the strong Feller property in a non-compact space. The results obtained can be applied to the two-dimensional Navier-Stokes system in a bounded domain and to the complex Ginzburg-Landau equation.
NASA Astrophysics Data System (ADS)
Reimberg, Paulo; Bernardeau, Francis
2018-01-01
We present a formalism based on the large deviation principle (LDP) applied to cosmological density fields, and more specifically to arbitrary functionals of density profiles, and we apply it to the derivation of the cumulant generating function and one-point probability distribution function (PDF) of the aperture mass (Map), a common observable for cosmic shear observations. We show that the LDP can indeed be used in practice for a much larger family of observables than previously envisioned, such as those built from continuous and nonlinear functionals of density profiles. Taking advantage of this formalism, we extend previous results, which were based on crude definitions of the aperture mass, with top-hat windows and the use of the reduced shear approximation (replacing the reduced shear with the shear itself). We precisely quantify how this latter approximation affects the Map statistical properties. In particular, we derive the corrective term for the skewness of the Map and reconstruct its one-point PDF.
Testing the equivalence principle on cosmological scales
NASA Astrophysics Data System (ADS)
Bonvin, Camille; Fleury, Pierre
2018-05-01
The equivalence principle, that is one of the main pillars of general relativity, is very well tested in the Solar system; however, its validity is more uncertain on cosmological scales, or when dark matter is concerned. This article shows that relativistic effects in the large-scale structure can be used to directly test whether dark matter satisfies Euler's equation, i.e. whether its free fall is characterised by geodesic motion, just like baryons and light. After having proposed a general parametrisation for deviations from Euler's equation, we perform Fisher-matrix forecasts for future surveys like DESI and the SKA, and show that such deviations can be constrained with a precision of order 10%. Deviations from Euler's equation cannot be tested directly with standard methods like redshift-space distortions and gravitational lensing, since these observables are not sensitive to the time component of the metric. Our analysis shows therefore that relativistic effects bring new and complementary constraints to alternative theories of gravity.
In search of multipath interference using large molecules
Cotter, Joseph P.; Brand, Christian; Knobloch, Christian; Lilach, Yigal; Cheshnovsky, Ori; Arndt, Markus
2017-01-01
The superposition principle is fundamental to the quantum description of both light and matter. Recently, a number of experiments have sought to directly test this principle using coherent light, single photons, and nuclear spin states. We extend these experiments to massive particles for the first time. We compare the interference patterns arising from a beam of large dye molecules diffracting at single, double, and triple slit material masks to place limits on any high-order, or multipath, contributions. We observe an upper bound of less than one particle in a hundred deviating from the expectations of quantum mechanics over a broad range of transverse momenta and de Broglie wavelength. PMID:28819641
Large Fluctuations for Spatial Diffusion of Cold Atoms
NASA Astrophysics Data System (ADS)
Aghion, Erez; Kessler, David A.; Barkai, Eli
2017-06-01
We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density P_t(x) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.
NASA Astrophysics Data System (ADS)
Kakehashi, Yoshiro; Chandra, Sumal
2017-03-01
The momentum distribution function (MDF) bands of iron-group transition metals from Sc to Cu have been investigated on the basis of the first-principles momentum dependent local ansatz wavefunction method. It is found that the MDF for d electrons show a strong momentum dependence and a large deviation from the Fermi-Dirac distribution function along high-symmetry lines of the first Brillouin zone, while the sp electrons behave as independent electrons. In particular, the deviation in bcc Fe (fcc Ni) is shown to be enhanced by the narrow eg (t2g) bands with flat dispersion in the vicinity of the Fermi level. Mass enhancement factors (MEF) calculated from the jump on the Fermi surface are also shown to be momentum dependent. Large mass enhancements of Mn and Fe are found to be caused by spin fluctuations due to d electrons, while that for Ni is mainly caused by charge fluctuations. Calculated MEF are consistent with electronic specific heat data as well as recent angle resolved photoemission spectroscopy data.
Back in the saddle: large-deviation statistics of the cosmic log-density field
NASA Astrophysics Data System (ADS)
Uhlemann, C.; Codis, S.; Pichon, C.; Bernardeau, F.; Reimberg, P.
2016-08-01
We present a first principle approach to obtain analytical predictions for spherically averaged cosmic densities in the mildly non-linear regime that go well beyond what is usually achieved by standard perturbation theory. A large deviation principle allows us to compute the leading order cumulants of average densities in concentric cells. In this symmetry, the spherical collapse model leads to cumulant generating functions that are robust for finite variances and free of critical points when logarithmic density transformations are implemented. They yield in turn accurate density probability distribution functions (PDFs) from a straightforward saddle-point approximation valid for all density values. Based on this easy-to-implement modification, explicit analytic formulas for the evaluation of the one- and two-cell PDF are provided. The theoretical predictions obtained for the PDFs are accurate to a few per cent compared to the numerical integration, regardless of the density under consideration and in excellent agreement with N-body simulations for a wide range of densities. This formalism should prove valuable for accurately probing the quasi-linear scales of low-redshift surveys for arbitrary primordial power spectra.
NASA Astrophysics Data System (ADS)
Kakehashi, Yoshiro; Chandra, Sumal
2016-04-01
We have developed a first-principles local ansatz wavefunction approach with momentum-dependent variational parameters on the basis of the tight-binding LDA+U Hamiltonian. The theory goes beyond the first-principles Gutzwiller approach and quantitatively describes correlated electron systems. Using the theory, we find that the momentum distribution function (MDF) bands of paramagnetic bcc Fe along high-symmetry lines show a large deviation from the Fermi-Dirac function for the d electrons with eg symmetry and yield the momentum-dependent mass enhancement factors. The calculated average mass enhancement m*/m = 1.65 is consistent with low-temperature specific heat data as well as recent angle-resolved photoemission spectroscopy (ARPES) data.
Large deviations of a long-time average in the Ehrenfest urn model
NASA Astrophysics Data System (ADS)
Meerson, Baruch; Zilber, Pini
2018-05-01
Since its inception in 1907, the Ehrenfest urn model (EUM) has served as a test bed of key concepts of statistical mechanics. Here we employ this model to study large deviations of a time-additive quantity. We consider two continuous-time versions of the EUM with K urns and N balls: with and without interactions between the balls in the same urn. We evaluate the probability distribution that the average number of balls in one urn over time T takes any specified value aN, where 0 ≤ a ≤ 1. For a long observation time T, a Donsker–Varadhan large deviation principle holds: the probability decays as exp[−T I(a, …)], where … denote additional parameters of the model. We calculate the rate function exactly by two different methods due to Donsker and Varadhan and compare the exact results with those obtained with a variant of the WKB approximation (after Wentzel, Kramers and Brillouin). In the absence of interactions the WKB prediction for the rate function is exact for any N. In the presence of interactions the WKB method gives asymptotically exact results for large N. The WKB method also uncovers the (very simple) time history of the system which dominates the contributions of different time histories to the probability distribution.
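As a quick illustration of the quantity being studied (not the paper's calculation), the sketch below simulates the non-interacting two-urn version of the model and estimates how rarely the time-averaged occupation of one urn strays from N/2; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 10.0

def time_averaged_occupation(N=N, T=T):
    """Simulate a continuous-time Ehrenfest model with K=2 urns and N
    non-interacting balls (each ball jumps to the other urn at unit rate)
    and return the time-averaged number of balls in urn 1."""
    n = N // 2                          # start from the typical, half-filled state
    t, integral = 0.0, 0.0
    while t < T:
        dt = rng.exponential(1.0 / N)   # total jump rate is N
        dt = min(dt, T - t)
        integral += n * dt
        t += dt
        if t >= T:
            break
        # the jumping ball sits in urn 1 with probability n/N
        n += -1 if rng.random() < n / N else 1
    return integral / T

samples = np.array([time_averaged_occupation() for _ in range(2000)])
a = samples / N
print("typical time average:", a.mean())
print("crude estimate of P(a > 0.6):", np.mean(a > 0.6))   # exponentially small in T
```

For growing T the estimated tail probability should shrink roughly like exp(−T I(a)), which is the Donsker–Varadhan behaviour described above.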
The cosmological principle is not in the sky
NASA Astrophysics Data System (ADS)
Park, Chan-Gyung; Hyun, Hwasu; Noh, Hyerim; Hwang, Jai-chan
2017-08-01
The homogeneity of the matter distribution at large scales, known as the cosmological principle, is a central assumption in the standard cosmological model. The assumption is testable, however, and thus no longer needs to be a principle. Here we perform a test for spatial homogeneity using the Sloan Digital Sky Survey Luminous Red Galaxies (LRG) sample by counting galaxies within a specified volume, with the radius scale varying up to 300 h^-1 Mpc. We directly confront the large-scale structure data with the definition of spatial homogeneity by comparing the averages and dispersions of galaxy number counts with the ranges allowed for a homogeneous random distribution. The LRG sample shows significantly larger dispersions of number counts than the random catalogues up to the 300 h^-1 Mpc scale, and even the average lies far outside the range allowed in the random distribution; the deviations are statistically impossible to realize in the random distribution. This implies that the cosmological principle does not hold even at such large scales. The same analysis of mock galaxies derived from the N-body simulation, however, suggests that the LRG sample is consistent with the current paradigm of cosmology; thus the simulation is also not homogeneous at that scale. We conclude that the cosmological principle is neither in the observed sky nor demanded to be there by the standard cosmological world model. This reveals the nature of the cosmological principle adopted in the modern cosmology paradigm, and opens a new field of research in theoretical cosmology.
NASA Astrophysics Data System (ADS)
Gaunaa, Mac; Heinz, Joachim; Skrzypiński, Witold
2016-09-01
The crossflow principle is one of the key elements used in engineering models for predicting the aerodynamic loads on wind turbine blades in standstill or blade-installation situations, where the flow direction relative to the blade has a component along the blade span. In the present work, the performance of the crossflow principle is assessed on the DTU 10MW reference blade using extensive 3D CFD calculations. Analysis of the computational results shows that there is only a relatively narrow region in which the crossflow principle describes the aerodynamic loading well. In some conditions the deviation of the predicted loading can be quite significant, with a large influence on, for instance, the integral aerodynamic moments around the blade centre of mass, which are very important for single-blade installation applications. The main features of these deviations, however, behave systematically across all force components, which in this paper is exploited to formulate the first version of an engineering correction to the crossflow principle applicable to wind turbine blades. The new correction model improves the agreement with CFD results for the key aerodynamic loads in crossflow situations. The general validity of this model for other blade shapes should be investigated in subsequent work.
Large-aperture space optical system testing based on the scanning Hartmann.
Wei, Haisong; Yan, Feng; Chen, Xindong; Zhang, Hao; Cheng, Qiang; Xue, Donglin; Zeng, Xuefeng; Zhang, Xuejun
2017-03-10
Based on the Hartmann testing principle, this paper proposes a novel image quality testing technology which applies to a large-aperture space optical system. Compared with the traditional testing method through a large-aperture collimator, the scanning Hartmann testing technology has great advantages due to its simple structure, low cost, and ability to perform wavefront measurement of an optical system. The basic testing principle of the scanning Hartmann testing technology, data processing method, and simulation process are presented in this paper. Certain simulation results are also given to verify the feasibility of this technology. Furthermore, a measuring system is developed to conduct a wavefront measurement experiment for a 200 mm aperture optical system. The small deviation (6.3%) of root mean square (RMS) between experimental results and interferometric results indicates that the testing system can measure low-order aberration correctly, which means that the scanning Hartmann testing technology has the ability to test the imaging quality of a large-aperture space optical system.
Robustness and cognition in stabilization problem of dynamical systems based on asymptotic methods
NASA Astrophysics Data System (ADS)
Dubovik, S. A.; Kabanov, A. A.
2017-01-01
The problem of synthesis of stabilizing systems based on principles of cognitive (logical-dynamic) control for mobile objects used under uncertain conditions is considered. This direction in control theory is based on the principles of guaranteed robust synthesis focused on worst-case scenarios of the controlled process. The guaranteeing approach can provide functioning of the system with the required quality and reliability only under sufficiently small disturbances and in the absence of large deviations from some regular features of the controlled process. The main tool here for the analysis of large deviations and the prediction of critical states is the action functional. Once the forecast is built, the choice of anti-crisis control becomes a supervisory control problem that optimizes the control system in normal mode and prevents escape of the controlled process into critical states. An essential aspect of the approach presented here is the presence of two-level (logical-dynamic) control: the input data are used not only for generating the synthesized feedback (local robust synthesis) in advance (off-line), but also to make decisions about the current (on-line) quality of stabilization in the global sense. An example of using the presented approach for the development of a ship tilting prediction system is considered.
Wang, Wanlin; Zhang, Wang; Chen, Weixin; Gu, Jiajun; Liu, Qinglei; Deng, Tao; Zhang, Di
2013-01-15
The wide angular range of the treelike structure in Morpho butterfly scales was investigated by finite-difference time-domain (FDTD)/particle-swarm-optimization (PSO) analysis. Using the FDTD method, different parameters of the Morpho butterflies' treelike structure were studied and their contributions to the angular dependence were analyzed. A wide angular range was then realized by the PSO method through quantitative design of the lamellae deviation (Δy), a crucial parameter for the angular range. The field map of the wide-range reflection over a large area is given to confirm the wide angular range. The tristimulus values and corresponding color coordinates for various viewing directions were calculated to confirm the blue color at different observation angles. The wide angular range realized by the FDTD/PSO method will assist in understanding the scientific principles involved and in designing artificial optical materials.
NASA Astrophysics Data System (ADS)
Lopes, Artur O.; Neumann, Adriana
2015-05-01
In the present paper, we consider a family of continuous time symmetric random walks indexed by , . For each the matching random walk takes values in the finite set of states ; notice that is a subset of , where is the unitary circle. The infinitesimal generator of such chain is denoted by . The stationary probability for such process converges to the uniform distribution on the circle, when . Here we want to study other natural measures, obtained via a limit on , that are concentrated on some points of . We will disturb this process by a potential and study for each the perturbed stationary measures of this new process when . We disturb the system considering a fixed potential and we will denote by the restriction of to . Then, we define a non-stochastic semigroup generated by the matrix , where is the infinitesimal generator of . From the continuous time Perron's Theorem one can normalize such semigroup, and then we get another stochastic semigroup which generates a continuous time Markov Chain taking values on . This new chain is called the continuous time Gibbs state associated to the potential , see (Lopes et al. in J Stat Phys 152:894-933, 2013). The stationary probability vector for such Markov Chain is denoted by . We assume that the maximum of is attained in a unique point of , and from this will follow that . Thus, here, our main goal is to analyze the large deviation principle for the family , when . The deviation function , which is defined on , will be obtained from a procedure based on fixed points of the Lax-Oleinik operator and Aubry-Mather theory. In order to obtain the associated Lax-Oleinik operator we use Varadhan's Lemma for the process . For a careful analysis of the problem we present full details of the proof of the Large Deviation Principle, in the Skorohod space, for such family of Markov Chains, when . Finally, we compute the entropy of the invariant probabilities on the Skorohod space associated to the Markov Chains we analyze.
Annealed Scaling for a Charged Polymer
NASA Astrophysics Data System (ADS)
Caravenna, F.; den Hollander, F.; Pétrélis, N.; Poisat, J.
2016-03-01
This paper studies an undirected polymer chain living on the one-dimensional integer lattice and carrying i.i.d. random charges. Each self-intersection of the polymer chain contributes to the interaction Hamiltonian an energy that is equal to the product of the charges of the two monomers that meet. The joint probability distribution for the polymer chain and the charges is given by the Gibbs distribution associated with the interaction Hamiltonian. The focus is on the annealed free energy per monomer in the limit as the length of the polymer chain tends to infinity. We derive a spectral representation for the free energy and use this to prove that there is a critical curve in the parameter plane of charge bias versus inverse temperature separating a ballistic phase from a subballistic phase. We show that the phase transition is first order. We prove large deviation principles for the laws of the empirical speed and the empirical charge, and derive a spectral representation for the associated rate functions. Interestingly, in both phases both rate functions exhibit flat pieces, which correspond to an inhomogeneous strategy for the polymer to realise a large deviation. The large deviation principles in turn lead to laws of large numbers and central limit theorems. We identify the scaling behaviour of the critical curve for small and for large charge bias. In addition, we identify the scaling behaviour of the free energy for small charge bias and small inverse temperature. Both are linked to an associated Sturm-Liouville eigenvalue problem. A key tool in our analysis is the Ray-Knight formula for the local times of the one-dimensional simple random walk. This formula is exploited to derive a closed form expression for the generating function of the annealed partition function, and for several related quantities. This expression in turn serves as the starting point for the derivation of the spectral representation for the free energy, and for the scaling theorems. What happens for the quenched free energy per monomer remains open. We state two modest results and raise a few questions.
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents
§1. Introduction
Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
§2. The large deviation principle and logarithmic asymptotics of continual integrals
§3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
  3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
  3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
  3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
  3.4. Exact asymptotics of large deviations of Gaussian norms
§4. The Laplace method for distributions of sums of independent random elements with values in Banach space
  4.1. The case of a non-degenerate minimum point ([137], I)
  4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
§5. Further examples
  5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
  5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
  5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
  5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
Chapter II. The double sum method - a version of the Laplace method in the space of continuous functions
§6. Pickands' method of double sums
  6.1. General situations
  6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
  6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
§7. Probabilities of large deviations of trajectories of Gaussian fields
  7.1. Homogeneous fields and fields with constant dispersion
  7.2. Finitely many maximum points of dispersion
  7.3. Manifold of maximum points of dispersion
  7.4. Asymptotics of distributions of maxima of Wiener fields
§8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
  8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
  8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type χ²
  8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
  8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
  8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
Bibliography
Theory and Applications of Weakly Interacting Markov Processes
2018-02-03
Large Deviations for Nonlocal Stochastic Neural Fields
2014-01-01
We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers’ law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations. Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20. PMID:24742297
The explicit form of the rate function for semi-Markov processes and its contractions
NASA Astrophysics Data System (ADS)
Sughiyama, Yuki; Kobayashi, Tetsuya J.
2018-03-01
We derive the explicit form of the rate function for semi-Markov processes. Here, the ‘random time change trick’ plays an essential role. Also, by exploiting the contraction principle of large deviation theory to the explicit form, we show that the fluctuation theorem (Gallavotti-Cohen symmetry) holds for semi-Markov cases. Furthermore, we elucidate that our rate function is an extension of the level 2.5 rate function for Markov processes to semi-Markov cases.
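For orientation, in the usual convention (with k_B = 1 and σ the long-time average entropy production rate; notation assumed here), the Gallavotti-Cohen symmetry of the rate function I reads

$$
I(-\sigma) \;=\; I(\sigma) + \sigma ,
\qquad\text{equivalently}\qquad
\lim_{T\to\infty}\frac{1}{T}\,\ln\frac{P(\sigma_T \approx \sigma)}{P(\sigma_T \approx -\sigma)} \;=\; \sigma .
$$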
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yong, E-mail: 83229994@qq.com; Ge, Hao, E-mail: haoge@pku.edu.cn; Xiong, Jie, E-mail: jiexiong@umac.mo
The fluctuation theorem is one of the major achievements in the field of nonequilibrium statistical mechanics during the past two decades. Owing to technical difficulties, very few results exist on the steady-state fluctuation theorem for the sample entropy production rate, in the sense of a large deviation principle, for diffusion processes. Here we give a proof of the steady-state fluctuation theorem for a diffusion process in magnetic fields, with explicit expressions for the free energy function and the rate function. The proof is based on the Karhunen-Loève expansion of a complex-valued Ornstein-Uhlenbeck process.
Path integrals and large deviations in stochastic hybrid systems.
Bressloff, Paul C; Newby, Jay M
2014-04-01
We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.
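Schematically, in Freidlin-Wentzell form (with ε the small parameter playing the role of the noise amplitude; the symbols below are assumed here rather than taken from the abstract), the quasipotential and the escape-time estimate referred to above are

$$
\Phi(x) \;=\; \inf_{T>0}\ \inf_{x(\cdot)\,:\,x(0)=x_{\mathrm{s}},\ x(T)=x} S_T[x(\cdot)] ,
\qquad
\mathbb{E}[\tau_{\mathrm{esc}}] \;\asymp\; \exp\!\big(\Phi(x^{*})/\varepsilon\big),
$$

where x_s is the metastable state, S_T is the large deviation action functional, and x* is the exit point minimizing Φ on the boundary of the basin of attraction.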
Special ergodic theorems and dynamical large deviations
NASA Astrophysics Data System (ADS)
Kleptsyn, Victor; Ryzhov, Dmitry; Minkov, Stanislav
2012-11-01
Let f : M → M be a self-map of a compact Riemannian manifold M, admitting a global SRB measure μ. For a continuous test function φ: M → ℝ and a constant α > 0, consider the set K_{φ,α} of the initial points for which the Birkhoff time averages of the function φ differ from its μ-space average by at least α. As the measure μ is a global SRB one, the set K_{φ,α} should have zero Lebesgue measure. The special ergodic theorem, whenever it holds, claims that, moreover, this set has a Hausdorff dimension less than the dimension of M. We prove that for Lipschitz maps, the special ergodic theorem follows from the dynamical large deviations principle. We also define and prove an analogous result for flows. Applying the theorems of Young and of Araújo and Pacifico, we conclude that the special ergodic theorem holds for transitive hyperbolic attractors of C2-diffeomorphisms, as well as for some other known classes of maps (including the one of partially hyperbolic non-uniformly expanding maps) and flows.
Ergonomic design and evaluation of a diagnostic ultrasound transducer holder.
Ghasemi, Mohamad Sadegh; Hosseinzadeh, Payam; Zamani, Farhad; Ahmadpoor, Hossein; Dehghan, Naser
2017-12-01
Work-related musculoskeletal disorders (WMSDs) are injuries and disorders that affect the body's movement and musculoskeletal system. Awkward postures represent one of the major ergonomic risk factors that cause WMSDs among sonographers while working with an ultrasound transducer. This study aimed to design and evaluate a new holder for the ultrasound transducer. In the first phase a new holder was designed for the transducer, considering design principles. Evaluation of the new holder was then carried out by electrogoniometry and a locally perceived discomfort (LPD) scale. The application of design principles to the new holder resulted in an improvement of wrist posture and comfort. Wrist angles in extension, flexion, radial deviation and ulnar deviation were lower with utilization of the new holder. The severity of discomfort based on the LPD method in the two modes of work with and without the new holder was reported with values of 1.3 and 1.8, respectively (p < 0.05). Overall, this study indicated that applying ergonomics design principles was effective in minimizing wrist deviation and increasing comfort while working with the new holder.
What is the uncertainty principle of non-relativistic quantum mechanics?
NASA Astrophysics Data System (ADS)
Riggs, Peter J.
2018-05-01
After more than ninety years of discussions over the uncertainty principle, there is still no universal agreement on what the principle states. The Robertson uncertainty relation (incorporating standard deviations) is given as the mathematical expression of the principle in most quantum mechanics textbooks. However, the uncertainty principle is not merely a statement of what any of the several uncertainty relations affirm. It is suggested that a better approach would be to present the uncertainty principle as a statement about the probability distributions of incompatible variables and the resulting restrictions on quantum states.
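For reference, the Robertson relation mentioned above, written in terms of standard deviations, is

$$
\sigma_A\,\sigma_B \;\ge\; \tfrac{1}{2}\,\big|\langle[\hat A,\hat B]\rangle\big| ,
\qquad\text{for example}\qquad
\sigma_x\,\sigma_p \;\ge\; \tfrac{\hbar}{2}.
$$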
The most likely voltage path and large deviations approximations for integrate-and-fire neurons.
Paninski, Liam
2006-08-01
We develop theory and numerical methods for computing the most likely subthreshold voltage path of a noisy integrate-and-fire (IF) neuron, given observations of the neuron's superthreshold spiking activity. This optimal voltage path satisfies a second-order ordinary differential (Euler-Lagrange) equation which may be solved analytically in a number of special cases, and which may be solved numerically in general via a simple "shooting" algorithm. Our results are applicable for both linear and nonlinear subthreshold dynamics, and in certain cases may be extended to correlated subthreshold noise sources. We also show how this optimal voltage may be used to obtain approximations to (1) the likelihood that an IF cell with a given set of parameters was responsible for the observed spike train; and (2) the instantaneous firing rate and interspike interval distribution of a given noisy IF cell. The latter probability approximations are based on the classical Freidlin-Wentzell theory of large deviations principles for stochastic differential equations. We close by comparing this most likely voltage path to the true observed subthreshold voltage trace in a case when intracellular voltage recordings are available in vitro.
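A minimal sketch of the "shooting" idea described above, applied to a generic second-order boundary value problem rather than the paper's specific integrate-and-fire Euler-Lagrange equation; the right-hand side f, the boundary values, and the bracketing interval are placeholders.

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Hypothetical Euler-Lagrange-type equation V'' = f(t, V, V') with boundary
# conditions V(0) = V0 and V(T) = VT; the unknown initial slope is found by
# root-finding on the endpoint mismatch ("shooting").
def f(t, V, dV):
    return -0.5 * V + 0.1 * dV          # placeholder right-hand side

def rhs(t, y):
    V, dV = y
    return [dV, f(t, V, dV)]

T, V0, VT = 1.0, 0.0, 1.0

def endpoint_mismatch(slope):
    sol = solve_ivp(rhs, (0.0, T), [V0, slope], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1] - VT

slope_star = brentq(endpoint_mismatch, -10.0, 10.0)   # bracket chosen by hand
path = solve_ivp(rhs, (0.0, T), [V0, slope_star], dense_output=True)
print("optimal initial slope:", slope_star)
print("V at t = 0, 0.5, 1:", path.sol([0.0, 0.5, 1.0])[0])
```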
Distribution of diameters for Erdős-Rényi random graphs.
Hartmann, A K; Mézard, M
2018-03-01
We study the distribution of diameters d of Erdős-Rényi random graphs with average connectivity c. The diameter d is the maximum among all the shortest distances between pairs of nodes in a graph and an important quantity for all dynamic processes taking place on graphs. Here we study the distribution P(d) numerically for various values of c, in the nonpercolating and percolating regimes. Using large-deviation techniques, we are able to reach small probabilities like 10^{-100} which allow us to obtain the distribution over basically the full range of the support, for graphs up to N=1000 nodes. For values c<1, our results are in good agreement with analytical results, proving the reliability of our numerical approach. For c>1 the distribution is more complex and no complete analytical results are available. For this parameter range, P(d) exhibits an inflection point, which we found to be related to a structural change of the graphs. For all values of c, we determined the finite-size rate function Φ(d/N) and were able to extrapolate numerically to N→∞, indicating that the large-deviation principle holds.
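Reaching probabilities like 10^{-100} requires the large-deviation sampling techniques of the paper; the naive direct-sampling baseline below (using networkx, with the diameter taken over the largest connected component, an interpretation assumed here) only probes the typical part of P(d), but shows the quantity being histogrammed.

```python
import collections
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)

def sampled_diameter(N=100, c=2.0):
    """Diameter of the largest connected component of an Erdos-Renyi
    graph G(N, p=c/N); a simple stand-in for the paper's observable."""
    G = nx.gnp_random_graph(N, c / N, seed=int(rng.integers(1 << 31)))
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    return nx.diameter(giant)

counts = collections.Counter(sampled_diameter() for _ in range(500))
total = sum(counts.values())
for d in sorted(counts):
    print(f"d = {d:3d}   P(d) ≈ {counts[d] / total:.3f}")
```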
Roy, Tapta Kanchan; Carrington, Tucker; Gerber, R Benny
2014-08-21
Anharmonic vibrational spectroscopy calculations using MP2 and B3LYP computed potential surfaces are carried out for a series of molecules, and frequencies and intensities are compared with those from experiment. The vibrational self-consistent field with second-order perturbation correction (VSCF-PT2) is used in computing the spectra. The test calculations have been performed for the molecules HNO3, C2H4, C2H4O, H2SO4, CH3COOH, glycine, and alanine. Both MP2 and B3LYP give results in good accord with experimental frequencies, though, on the whole, MP2 gives very slightly better agreement. A statistical analysis of deviations in frequencies from experiment is carried out that gives interesting insights. The most probable percentage deviation from experimental frequencies is about -2% (to the red of the experiment) for B3LYP and +2% (to the blue of the experiment) for MP2. There is a higher probability for relatively large percentage deviations when B3LYP is used. The calculated intensities are also found to be in good accord with experiment, but the percentage deviations are much larger than those for frequencies. The results show that both MP2 and B3LYP potentials, used in VSCF-PT2 calculations, account well for anharmonic effects in the spectroscopy of molecules of the types considered.
NASA Technical Reports Server (NTRS)
Herrman, B. D.; Uman, M. A.; Brantley, R. D.; Krider, E. P.
1976-01-01
The principle of operation of a wideband crossed-loop magnetic-field direction finder is studied by comparing the bearing determined from the NS and EW magnetic fields at various times up to 155 microsec after return stroke initiation with the TV-determined lightning channel base direction. For 40 lightning strokes in the 3 to 12 km range, the difference between the bearings found from magnetic fields sampled at times between 1 and 10 microsec and the TV channel-base data has a standard deviation of 3-4 deg. Included in this standard deviation is a 2-3 deg measurement error. For fields sampled at progressively later times, both the mean and the standard deviation of the difference between the direction-finder bearing and the TV bearing increase. Near 150 microsec, means are about 35 deg and standard deviations about 60 deg. The physical reasons for the late-time inaccuracies in the wideband direction finder and the occurrence of these effects in narrow-band VLF direction finders are considered.
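A minimal sketch of the crossed-loop bearing computation implied above: the bearing follows from the ratio of the two orthogonal loop signals sampled at the same instant. The sign convention and the resolution of the inherent 180-degree loop ambiguity (normally done with the electric-field polarity) are assumptions here, not details from the abstract.

```python
import numpy as np

def bearing_deg(b_ns, b_ew):
    """Bearing of the lightning channel, in degrees east of north, from
    simultaneous samples of the NS- and EW-loop magnetic fields.
    Assumes the 180-degree loop ambiguity has already been resolved."""
    return np.degrees(np.arctan2(b_ew, b_ns)) % 360.0

# Example: fields sampled a few microseconds after return-stroke onset (made-up numbers)
print(bearing_deg(b_ns=0.8, b_ew=0.6))   # ~36.9 degrees east of north
```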
Crashworthiness of Restraints for Physically Disabled Children in Buses.
ERIC Educational Resources Information Center
Seeger, Barry R.; Caudrey, David J.
1983-01-01
Seven design principles identified from research as crashworthy for transporting disabled persons in buses are listed, and survey results of transportation of 161 disabled children in Australia are discussed relative to the design principles. Findings are discussed and recommendations made to correct deviations, such as absence of lapbelts. (MC)
[Study on the characteristics of radiance calibration using nonuniformity extended source].
Wang, Jian-Wei; Huang, Min; Xiangli, Bin; Tu, Xiao-Long
2013-07-01
Integrating spheres and diffusers are commonly used as extended sources, and their different parameters have different effects on the radiance calibration of an imaging spectrometer. In the present paper, a mathematical model based on the theory of radiative transfer and the calibration principle is developed to calculate the irradiance and calibration coefficients on the CCD, taking a light-board calibration system with relatively poor uniformity as an example. The effects of the nonuniformity on the calibration were analyzed, which establishes the relation between the calibration coefficient matrices in the ideal and non-ideal cases. The results show that the nonuniformity makes the viewing angle and the position of the intersection point of the optical axis with the diffuse reflection plate have relatively large effects on the calibration, while the effect of the observing distance is small; under different viewing angles, a deviation value can be found that brings the calibration results closest to the desired results. The calibration error can therefore be reduced by choosing an appropriate deviation value.
Rehder, Bob; Waldmann, Michael R
2017-02-01
Causal Bayes nets capture many aspects of causal thinking that set them apart from purely associative reasoning. However, some central properties of this normative theory are routinely violated. In tasks requiring an understanding of explaining away and screening off, subjects often deviate from these principles and manifest the operation of an associative bias that we refer to as the rich-get-richer principle. This research focuses on these two failures, comparing tasks in which causal scenarios are merely described (via verbal statements of the causal relations) versus experienced (via samples of data that manifest the intervariable correlations implied by the causal relations). Our key finding is that we obtained stronger deviations from normative predictions in the described conditions, which highlight the instructed causal model, compared to those that presented data. This counterintuitive finding indicates that a theory of causal reasoning and learning needs to integrate normative principles with biases people hold about causal relations.
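To make "explaining away" concrete, here is a small worked collider example (the numbers and the noisy-OR parameterization are illustrative, not taken from the study): observing the effect raises belief in cause 1, but additionally learning that cause 2 is present lowers it again, whereas the rich-get-richer bias would predict the opposite.

```python
from itertools import product

# Collider C1 -> E <- C2 with independent causes and a noisy-OR effect.
p_c1, p_c2 = 0.3, 0.3          # prior probabilities of the two causes
w1, w2, leak = 0.8, 0.8, 0.05  # noisy-OR strengths and leak rate

def p_e_given(c1, c2):
    return 1.0 - (1.0 - leak) * (1.0 - w1) ** c1 * (1.0 - w2) ** c2

def posterior_c1(evidence):
    """P(C1=1 | evidence), with evidence a dict over {'E', 'C2'}."""
    num = den = 0.0
    for c1, c2 in product((0, 1), repeat=2):
        if 'C2' in evidence and c2 != evidence['C2']:
            continue
        prior = (p_c1 if c1 else 1 - p_c1) * (p_c2 if c2 else 1 - p_c2)
        like = p_e_given(c1, c2) if evidence.get('E') == 1 else 1.0
        den += prior * like
        num += prior * like * c1
    return num / den

print("P(C1=1)              =", p_c1)
print("P(C1=1 | E=1)        =", round(posterior_c1({'E': 1}), 3))
print("P(C1=1 | E=1, C2=1)  =", round(posterior_c1({'E': 1, 'C2': 1}), 3))  # explaining away
```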
Large deviation function for a driven underdamped particle in a periodic potential
NASA Astrophysics Data System (ADS)
Fischer, Lukas P.; Pietzonka, Patrick; Seifert, Udo
2018-02-01
Employing large deviation theory, we explore current fluctuations of underdamped Brownian motion for the paradigmatic example of a single particle in a one-dimensional periodic potential. Two different approaches to the large deviation function of the particle current are presented. First, we derive an explicit expression for the large deviation functional of the empirical phase space density, which replaces the level 2.5 functional used for overdamped dynamics. Using this approach, we obtain several bounds on the large deviation function of the particle current. We compare these to bounds for overdamped dynamics that have recently been derived, motivated by the thermodynamic uncertainty relation. Second, we provide a method to calculate the large deviation function via the cumulant generating function. We use this method to assess the tightness of the bounds in a numerical case study for a cosine potential.
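For orientation, the second route mentioned above rests on the standard Legendre-Fenchel (Gärtner-Ellis) link between the scaled cumulant generating function of the time-integrated current J_T and the large deviation function of the empirical current j = J_T/T (notation assumed here):

$$
\psi(\lambda) \;=\; \lim_{T\to\infty}\frac{1}{T}\,\ln\big\langle e^{\lambda J_T}\big\rangle ,
\qquad
I(j) \;=\; \sup_{\lambda}\,\big[\lambda j - \psi(\lambda)\big].
$$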
Entanglement transitions induced by large deviations
NASA Astrophysics Data System (ADS)
Bhosale, Udaysinh T.
2017-12-01
The probability of large deviations of the smallest Schmidt eigenvalue for random pure states of bipartite systems, denoted as A and B, is computed analytically using a Coulomb gas method. It is shown that this probability, for large N, goes as exp[-βN^2 Φ(ζ)], where the parameter β is the Dyson index of the ensemble, ζ is the large deviation parameter, while the rate function Φ(ζ) is calculated exactly. The corresponding equilibrium Coulomb charge density is derived for its large deviations. Effects of the large deviations of the extreme (largest and smallest) Schmidt eigenvalues on the bipartite entanglement are studied using the von Neumann entropy. The effect of these deviations is also studied on the entanglement between subsystems 1 and 2, obtained by further partitioning the subsystem A, using the properties of the density matrix's partial transpose ρ_{12}^Γ. The density of states of ρ_{12}^Γ is found to be close to Wigner's semicircle law with these large deviations. The entanglement properties are captured very well by a simple random matrix model for the partial transpose. The model predicts the entanglement transition across a critical large deviation parameter ζ. Log negativity is used to quantify the entanglement between subsystems 1 and 2. Analytical formulas for it are derived using the simple model. Numerical simulations are in excellent agreement with the analytical results.
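A naive Monte Carlo complement to the Coulomb-gas analytics described above (the dimensions and the 1-2 split of subsystem A are illustrative choices): draw a random bipartite pure state, read off the smallest Schmidt eigenvalue from the singular values, and compute the log negativity of the partial transpose over subsystem 2.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_pure_state(dA, dB):
    """Random bipartite pure state, stored as its dA x dB coefficient matrix."""
    M = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
    return M / np.linalg.norm(M)

def smallest_schmidt_eigenvalue(M):
    s = np.linalg.svd(M, compute_uv=False)
    return (s ** 2).min()              # eigenvalues of the reduced density matrix of A

def log_negativity(M, d1, d2):
    """Log negativity between subsystems 1 and 2, where A = 1 (x) 2 with dims d1, d2."""
    dA, dB = M.shape
    assert d1 * d2 == dA
    rho_A = M @ M.conj().T                                   # trace out B
    rho = rho_A.reshape(d1, d2, d1, d2)                      # indices (i1, i2, j1, j2)
    rho_pt = rho.transpose(0, 3, 2, 1).reshape(dA, dA)       # partial transpose on 2
    return np.log2(np.abs(np.linalg.eigvalsh(rho_pt)).sum())

M = random_pure_state(dA=4, dB=6)
print("smallest Schmidt eigenvalue:", smallest_schmidt_eigenvalue(M))
print("log negativity E_N(1:2):   ", log_negativity(M, d1=2, d2=2))
```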
Attacks exploiting deviation of mean photon number in quantum key distribution and coin tossing
NASA Astrophysics Data System (ADS)
Sajeed, Shihan; Radchenko, Igor; Kaiser, Sarah; Bourgoin, Jean-Philippe; Pappa, Anna; Monat, Laurent; Legré, Matthieu; Makarov, Vadim
2015-03-01
The security of quantum communication using a weak coherent source requires an accurate knowledge of the source's mean photon number. Finite calibration precision or an active manipulation by an attacker may cause the actual emitted photon number to deviate from the known value. We model effects of this deviation on the security of three quantum communication protocols: the Bennett-Brassard 1984 (BB84) quantum key distribution (QKD) protocol without decoy states, Scarani-Acín-Ribordy-Gisin 2004 (SARG04) QKD protocol, and a coin-tossing protocol. For QKD we model both a strong attack using technology possible in principle and a realistic attack bounded by today's technology. To maintain the mean photon number in two-way systems, such as plug-and-play and relativistic quantum cryptography schemes, bright pulse energy incoming from the communication channel must be monitored. Implementation of a monitoring detector has largely been ignored so far, except for ID Quantique's commercial QKD system Clavis2. We scrutinize this implementation for security problems and show that designing a hack-proof pulse-energy-measuring detector is far from trivial. Indeed, the first implementation has three serious flaws confirmed experimentally, each of which may be exploited in a cleverly constructed Trojan-horse attack. We discuss requirements for a loophole-free implementation of the monitoring detector.
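A back-of-the-envelope calculation shows why an unnoticed shift of the mean photon number matters: assuming ideal Poissonian statistics of a weak coherent source (our simplification, not the paper's attack model), the multi-photon emission probability grows noticeably with μ:

```python
import numpy as np

def photon_probabilities(mu):
    """Poisson photon-number statistics of an ideal weak coherent pulse."""
    p0 = np.exp(-mu)
    p1 = mu * np.exp(-mu)
    p_multi = 1.0 - p0 - p1            # probability of emitting >= 2 photons
    return p0, p1, p_multi

mu_calibrated = 0.10                    # value assumed by the legitimate parties (hypothetical)
mu_actual = 0.13                        # value after drift or tampering (hypothetical)

for label, mu in [("calibrated", mu_calibrated), ("actual", mu_actual)]:
    p0, p1, pm = photon_probabilities(mu)
    print(f"{label}: mu={mu:.2f}  P(n>=2)={pm:.4e}  P(n>=2)/P(1)={pm/p1:.3f}")
```

The relative growth of multi-photon pulses is what photon-number-splitting style attacks exploit, which is why the actual μ must be monitored rather than assumed.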
Density Large Deviations for Multidimensional Stochastic Hyperbolic Conservation Laws
NASA Astrophysics Data System (ADS)
Barré, J.; Bernardin, C.; Chetrite, R.
2018-02-01
We investigate the density large deviation function for a multidimensional conservation law in the vanishing viscosity limit, when the probability concentrates on weak solutions of a hyperbolic conservation law. When the mobility and diffusivity matrices are proportional, i.e., an Einstein-like relation is satisfied, the problem has been solved in Bellettini and Mariani (Bull Greek Math Soc 57:31-45, 2010). When this proportionality does not hold, we compute explicitly the large deviation function for a step-like density profile, and we show that the associated optimal current has a non-trivial structure. We also derive a lower bound for the large deviation function, valid for a more general weak solution, and leave the general large deviation function upper bound as a conjecture.
Anti-site disorder and improved functionality of Mn₂NiX (X = Al, Ga, In, Sn) inverse Heusler alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, Souvik; Kundu, Ashis; Ghosh, Subhradip, E-mail: subhra@iitg.ernet.in
2014-10-07
Recent first-principles calculations have predicted Mn₂NiX (X = Al, Ga, In, Sn) alloys to be magnetic shape memory alloys. Moreover, experiments on Mn₂NiGa and Mn₂NiSn suggest that the alloys deviate from the perfect inverse Heusler arrangement and that there is chemical disorder at the sublattices with tetrahedral symmetry. In this work, we investigate the effects of such chemical disorder on phase stabilities and magnetic properties using first-principles electronic structure methods. We find that, except for Mn₂NiAl, all other alloys show signatures of martensitic transformations in the presence of anti-site disorder at the sublattices with tetrahedral symmetry. This improves the possibilities of realizing martensitic transformations at relatively low fields and of obtaining significantly large inverse magneto-caloric effects, in comparison to the perfect inverse Heusler arrangement of atoms. We analyze the origin of such improvements in functional properties by investigating electronic structures and magnetic exchange interactions.
The infection rate of Daphnia magna by Pasteuria ramosa conforms with the mass-action principle.
Regoes, R R; Hottinger, J W; Sygnarski, L; Ebert, D
2003-10-01
In simple epidemiological models that describe the interaction of hosts with their parasites, the infection process is commonly assumed to be governed by the law of mass action, i.e. it is assumed that the infection rate depends linearly on the densities of the host and the parasite. The mass-action assumption, however, can be problematic if certain aspects of the host-parasite interaction are very pronounced, such as spatial compartmentalization, host immunity which may protect from infection with low doses, or host heterogeneity with regard to susceptibility to infection. As deviations from a mass-action infection rate have consequences for the dynamics of the host-parasite system, it is important to test for the appropriateness of the mass-action assumption in a given host-parasite system. In this paper, we examine the relationship between the infection rate and the parasite inoculum for the water flea Daphnia magna and its bacterial parasite Pasteuria ramosa. We measured the fraction of infected hosts after exposure to 14 different doses of the parasite. We find that the observed relationship between the fraction of infected hosts and the parasite dose is largely consistent with an infection process governed by the mass-action principle. However, we have evidence for a subtle but significant deviation from a simple mass-action infection model, which can be explained either by some antagonistic effects of the parasite spores during the infection process, or by heterogeneity in the hosts' susceptibility with regard to infection.
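Under the mass-action assumption, each spore acts independently, so the fraction of infected hosts follows p(d) = 1 − exp(−βd); antagonism between spores or host heterogeneity bends this curve. A minimal fitting sketch with synthetic dose-response data (not the study's measurements) might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def mass_action(dose, beta):
    """Fraction infected if each spore acts independently (mass action)."""
    return 1.0 - np.exp(-beta * dose)

# Synthetic dose-response data standing in for the 14 experimental doses
rng = np.random.default_rng(1)
doses = np.logspace(2, 6, 14)                 # spores per host (hypothetical range)
true_beta = 2e-5
observed = mass_action(doses, true_beta) + rng.normal(0, 0.03, doses.size)
observed = np.clip(observed, 0.0, 1.0)

beta_hat, _ = curve_fit(mass_action, doses, observed, p0=[1e-5], bounds=(0, np.inf))
print(f"estimated per-spore infection parameter beta = {beta_hat[0]:.2e}")
```

Systematic residuals of such a fit (e.g. a shallower-than-exponential rise) are what would signal a departure from mass action.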
Lightweight 3.66-meter-diameter conical mesh antenna reflector
NASA Technical Reports Server (NTRS)
Moore, D. M.
1974-01-01
A description is given of a 3.66 m diameter nonfurlable conical mesh antenna incorporating the recently developed line-source feed principle. The weight of the mesh reflector and its support structure is 162 N. An area-weighted RMS surface deviation of 0.28 mm was obtained. The RF performance measurements show a gain of 48.3 dB at 8.448 GHz, corresponding to an efficiency of 66%. During the design and development of this antenna, the technology for fabricating the large conical membranes of knitted mesh was developed. As part of this technology a FORTRAN computer program, COMESH, was developed which permits the user to predict the surface accuracy of a stretched conical membrane.
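The quoted figures are mutually consistent with the standard aperture relation G = η(πD/λ)²; a quick check using that textbook formula (not taken from the report) reproduces the reported gain:

```python
import numpy as np

c = 299_792_458.0      # speed of light, m/s
D = 3.66               # reflector diameter, m
f = 8.448e9            # test frequency, Hz
eta = 0.66             # reported aperture efficiency

lam = c / f
G = eta * (np.pi * D / lam) ** 2
print(f"gain = {10 * np.log10(G):.1f} dB")   # ~48.4 dB, close to the reported 48.3 dB
```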
Babinet's principle for optical frequency metamaterials and nanoantennas
NASA Astrophysics Data System (ADS)
Zentgraf, T.; Meyrath, T. P.; Seidel, A.; Kaiser, S.; Giessen, H.; Rockstuhl, C.; Lederer, F.
2007-07-01
We consider Babinet’s principle for metamaterials at optical frequencies and include realistic conditions which deviate from the theoretical assumptions of the classic principle such as an infinitely thin and perfectly conducting metal layer. It is shown that Babinet’s principle associates not only transmission and reflection between a structure and its complement but also the field modal profiles of the electromagnetic resonances as well as effective material parameters—a critical concept for metamaterials. Also playing an important role in antenna design, Babinet’s principle is particularly interesting to consider in this case where the metasurfaces and their complements can be regarded as variations on a folded dipole antenna array and patch antenna array, respectively.
Predicting the Velocity Dispersions of the Dwarf Satellite Galaxies of Andromeda
NASA Astrophysics Data System (ADS)
McGaugh, Stacy S.
2016-05-01
Dwarf Spheroidal galaxies in the Local Group are the faintest and most diffuse stellar systems known. They exhibit large mass discrepancies, making them popular laboratories for studying the missing mass problem. The PANDAS survey of M31 revealed dozens of new examples of such dwarfs. As these systems were discovered, it was possible to use the observed photometric properties to predict their stellar velocity dispersions with the modified gravity theory MOND. These predictions, made in advance of the observations, have since been largely confirmed. A unique feature of MOND is that a structurally identical dwarf will behave differently when it is or is not subject to the external field of a massive host like Andromeda. The role of this "external field effect" is critical in correctly predicting the velocity dispersions of dwarfs that deviate from empirical scaling relations. With continued improvement in the observational data, these systems could provide a test of the strong equivalence principle.
From the Law of Large Numbers to Large Deviation Theory in Statistical Physics: An Introduction
NASA Astrophysics Data System (ADS)
Cecconi, Fabio; Cencini, Massimo; Puglisi, Andrea; Vergni, Davide; Vulpiani, Angelo
This contribution aims at introducing the topics of this book. We start with a brief historical excursion on the developments from the law of large numbers to the central limit theorem and large deviations theory. The same topics are then presented using the language of probability theory. Finally, some applications of large deviations theory in physics are briefly discussed through examples taken from statistical mechanics, dynamical and disordered systems.
Large deviations in the presence of cooperativity and slow dynamics
NASA Astrophysics Data System (ADS)
Whitelam, Stephen
2018-06-01
We study simple models of intermittency, involving switching between two states, within the dynamical large-deviation formalism. Singularities appear in the formalism when switching is cooperative or when its basic time scale diverges. In the first case the unbiased trajectory distribution undergoes a symmetry breaking, leading to a change in shape of the large-deviation rate function for a particular dynamical observable. In the second case the symmetry of the unbiased trajectory distribution remains unbroken. Comparison of these models suggests that singularities of the dynamical large-deviation formalism can signal the dynamical equivalent of an equilibrium phase transition but do not necessarily do so.
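For the non-cooperative baseline of such two-state models, the dynamical large-deviation formalism reduces to an eigenvalue problem: the SCGF of the number of switches is the largest eigenvalue of a tilted rate matrix. A sketch with illustrative rates (our choice of parameters):

```python
import numpy as np

def scgf_switches(s, k01=1.0, k10=0.5):
    """SCGF of the time-averaged switch count for a two-state Markov process,
    psi(s) = lim (1/t) ln <exp(-s K)>: jump (off-diagonal) elements of the
    generator are tilted by exp(-s)."""
    W = np.array([[-k01,              k10 * np.exp(-s)],
                  [ k01 * np.exp(-s), -k10            ]])
    return np.max(np.linalg.eigvals(W).real)

for s in np.linspace(-1.0, 2.0, 7):
    print(f"s = {s:+.2f}  psi(s) = {scgf_switches(s):+.4f}")   # psi(0) = 0 as a sanity check
```

Cooperative switching or diverging time scales, as discussed in the abstract, are precisely the situations where this single-matrix picture develops singularities.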
Efficiency and large deviations in time-asymmetric stochastic heat engines
Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...
2014-10-24
In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of efficiency probability distributions is similar to those described by Verley et al (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, P(η), based on large deviation statistics of work and heat, that remains very accurate even when P(η) deviates significantly from its large deviation form.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohapi, N.; Hees, A.; Larena, J., E-mail: n.mohapi@gmail.com, E-mail: a.hees@ru.ac.za, E-mail: j.larena@ru.ac.za
The Einstein Equivalence Principle is a fundamental principle of the theory of General Relativity. While this principle has been thoroughly tested with standard matter, the question of its validity in the Dark sector remains open. In this paper, we consider a general tensor-scalar theory that allows us to test the equivalence principle in the Dark sector by introducing two different conformal couplings to standard matter and to Dark matter. We constrain these couplings by considering galactic observations of strong lensing and of velocity dispersion. Our analysis shows that, in the case of a violation of the Einstein Equivalence Principle, data favour violations through coupling strengths that are of opposite signs for ordinary and Dark matter. At the same time, our analysis does not show any significant deviations from General Relativity.
The precautionary principle: is it safe.
Gignon, Maxime; Ganry, Olivier; Jardé, Olivier; Manaouil, Cécile
2013-06-01
The precautionary principle is generally acknowledged to be a powerful tool for protecting health, but it was originally invoked by policy makers for dealing with environmental issues. In the 1990s, the principle was incorporated into many legislative and regulatory texts in international law. One can consider that the precautionary principle has turned into a "precautionism" deemed necessary to prove to the public that risk has been taken into account in decisions. There is now a risk that these abuses will deprive the principle of its meaning and value. When pushed to its limits, the precautionary principle can even be dangerous when applied to the healthcare field. This is why a critical analysis of the principle is necessary. In the literature, the way the principle is commonly used in relation to health sometimes seems to deviate from the essence of the precautionary principle. We believe that educational work is necessary to familiarize professionals, policy makers and the public with the precautionary principle and avoid confusion. We propose a critical analysis of the use and misuse of the precautionary principle.
Hurtado, Pablo I; Garrido, Pedro L
2010-04-01
Most systems, when pushed out of equilibrium, respond by building up currents of locally conserved observables. Understanding how microscopic dynamics determines the averages and fluctuations of these currents is one of the main open problems in nonequilibrium statistical physics. The additivity principle is a theoretical proposal that allows one to compute the current distribution in many one-dimensional nonequilibrium systems. Using simulations, we validate this conjecture in a simple and general model of energy transport, both in the presence of a temperature gradient and in canonical equilibrium. In particular, we show that the current distribution displays a Gaussian regime for small current fluctuations, as prescribed by the central limit theorem, and non-Gaussian (exponential) tails for large current deviations, obeying in all cases the Gallavotti-Cohen fluctuation theorem. In order to facilitate a given current fluctuation, the system adopts a well-defined temperature profile different from that of the steady state and in accordance with the additivity hypothesis predictions. The system statistics during a large current fluctuation are independent of the sign of the current, which implies that the optimal profile (as well as higher-order profiles and spatial correlations) is invariant upon current inversion. We also demonstrate that finite-time joint fluctuations of the current and the profile are well described by the additivity functional. These results suggest the additivity hypothesis as a general and powerful tool to compute current distributions in many nonequilibrium systems.
Frenetic Bounds on the Entropy Production
NASA Astrophysics Data System (ADS)
Maes, Christian
2017-10-01
We give a systematic derivation of positive lower bounds for the expected entropy production (EP) rate in classical statistical mechanical systems obeying a dynamical large deviation principle. The logic is the same for the return to thermodynamic equilibrium as it is for steady nonequilibria working under the condition of local detailed balance. There we recover recently studied "uncertainty" relations for the EP, appearing in studies about the effectiveness of mesoscopic machines. In general, our refinement of the positivity of the expected EP rate is obtained in terms of a positive and even function of the expected current(s) which measures the dynamical activity in the system, a time-symmetric estimate of the changes in the system's configuration. Underdamped diffusions can also be included in the analysis.
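One of the "uncertainty" relations referred to here is the thermodynamic uncertainty relation, which bounds the entropy production from below by twice the squared mean of a current divided by its variance. A toy check on a biased continuous-time random walk, whose long-time current statistics are known exactly (rates are our assumptions):

```python
import numpy as np

# Biased continuous-time random walk: forward rate kp, backward rate km.
# Exact long-time statistics of the integrated current J_t:
#   <J_t> = (kp - km) t,   Var(J_t) = (kp + km) t,
#   entropy production rate = (kp - km) * ln(kp / km)   (in units of k_B).
kp, km, t = 2.0, 0.5, 100.0

mean_J = (kp - km) * t
var_J = (kp + km) * t
sigma_t = (kp - km) * np.log(kp / km) * t

tur_bound = 2.0 * mean_J**2 / var_J
print(f"entropy production = {sigma_t:.1f} k_B,  TUR lower bound = {tur_bound:.1f} k_B")
assert sigma_t >= tur_bound     # thermodynamic uncertainty relation holds for this model
```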
Grain-Boundary Resistance in Copper Interconnects: From an Atomistic Model to a Neural Network
NASA Astrophysics Data System (ADS)
Valencia, Daniel; Wilson, Evan; Jiang, Zhengping; Valencia-Zapata, Gustavo A.; Wang, Kuang-Chung; Klimeck, Gerhard; Povolotskyi, Michael
2018-04-01
Orientation effects on the specific resistance of copper grain boundaries are studied systematically with two different atomistic tight-binding methods. A methodology is developed to model the specific resistance of grain boundaries in the ballistic limit using the embedded atom model, tight-binding methods, and nonequilibrium Green's functions. The methodology is validated against first-principles calculations for thin films with a single coincident grain boundary, with 6.4% deviation in the specific resistance. A statistical ensemble of 600 large, random structures with grains is studied. For structures with three grains, it is found that the distribution of specific resistances is close to normal. Finally, a compact model for the grain-boundary specific resistance is constructed based on a neural network.
Setup and evaluation of a sensor tilting system for dimensional micro- and nanometrology
NASA Astrophysics Data System (ADS)
Schuler, Alexander; Weckenmann, Albert; Hausotte, Tino
2014-06-01
Sensors in micro- and nanometrology reach their limits if the measurement objects and surfaces feature high aspect ratios, high curvature and steep surface angles. Their measurable surface angle is limited, and exceeding it leads to measurement deviations and undetectable surface points. We demonstrate a principle for adapting the sensor's working angle during the measurement, keeping the sensor at its optimal working angle. After simulation of the principle, a hardware prototype was realized. It is based on a rotary kinematic chain with two rotary degrees of freedom, which extends the measurable surface angle to ±90° and is combined with a nanopositioning and nanomeasuring machine. By applying a calibration procedure with a quasi-tactile 3D sensor based on electrical near-field interaction, the systematic position deviation of the kinematic chain is reduced. The paper presents for the first time the completed setup and integration of the prototype, the performance results of the calibration, the measurements with the prototype and the tilting principle, and finishes with the interpretation and feedback of the practical results.
Transport Coefficients from Large Deviation Functions
NASA Astrophysics Data System (ADS)
Gao, Chloe; Limmer, David
2017-10-01
We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying only on equilibrium fluctuations, and is statistically efficient, employing trajectory-based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energy. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.
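The link exploited here is that a Green-Kubo transport coefficient equals half the long-time growth rate of the variance of the time-integrated current, i.e. it is encoded in the curvature of the cumulant generating function at the origin. A schematic comparison on a synthetic current signal (an AR(1) process standing in for molecular data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stationary "molecular current": an AR(1) process with known statistics
n, dt, phi = 200_000, 0.01, 0.95
eps = rng.normal(size=n)
j = np.zeros(n)
for i in range(1, n):
    j[i] = phi * j[i - 1] + eps[i]
j0 = j - j.mean()

# Green-Kubo estimate: time integral of the current autocorrelation function
max_lag = 2000
acf = np.array([np.dot(j0[: n - k], j0[k:]) / (n - k) for k in range(max_lag)])
L_gk = (acf[0] / 2.0 + acf[1:].sum()) * dt        # trapezoidal rule; ACF has decayed by max_lag

# Equivalent estimate: half the variance growth rate of the time-integrated current
block = 2000
Q = j0[: (n // block) * block].reshape(-1, block).sum(axis=1) * dt
L_var = Q.var(ddof=1) / (2.0 * block * dt)

print(f"Green-Kubo integral: {L_gk:.2f}   variance-growth estimate: {L_var:.2f}")
```

The two numbers should roughly agree; the paper's point is that trajectory importance sampling estimates the second route far more efficiently for real molecular systems.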
NASA Astrophysics Data System (ADS)
Goswami, S.; Paul, K. S.; Paul, A.
2017-09-01
Multifrequency GPS transmissions have provided the opportunity for testing the applicability of the principle of frequency diversity for scintillation mitigation. Published results addressing this issue with quantified estimates are not available in the literature, at least for the anomaly crest location of the Indian longitude sector. Multifrequency scattering within the same L band is often the attributed cause behind simultaneous decorrelated signal fluctuations. The present paper aims to provide the proportion of time during scintillation patches for which decorrelations are found across the GPS L1, L2, and L5 frequencies associated with high
Importance sampling large deviations in nonequilibrium steady states. I.
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T
2018-03-28
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
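As a deliberately minimal example of the diffusion Monte Carlo (cloning) idea discussed here, the sketch below estimates the SCGF of the displacement of a biased random walker by propagating a population of trajectories and reweighting with exp(s Δx); all parameters are illustrative, and because the walk has no internal state the exact answer is available for comparison:

```python
import numpy as np

def scgf_cloning(s, p=0.6, walkers=2000, steps=2000, seed=3):
    """Estimate psi(s) = lim (1/t) ln <exp(s X_t)> for a discrete-time biased random
    walk (step +1 w.p. p, -1 w.p. 1-p) by population dynamics."""
    rng = np.random.default_rng(seed)
    log_growth = 0.0
    for _ in range(steps):
        dx = np.where(rng.random(walkers) < p, 1, -1)
        w = np.exp(s * dx)                         # trajectory weights for this step
        log_growth += np.log(w.mean())             # running estimate of the normalization
        # cloning step: resample walkers proportionally to their weights; for a walk
        # with internal state one would carry state = state[survivors] forward
        survivors = rng.choice(walkers, size=walkers, p=w / w.sum())
    return log_growth / steps

s, p = 0.3, 0.6
exact = np.log(p * np.exp(s) + (1 - p) * np.exp(-s))   # i.i.d. steps: psi(s) known exactly
print(f"cloning estimate: {scgf_cloning(s):.4f}   exact: {exact:.4f}")
```

In genuinely correlated models the resampling step is essential, and, as the abstract notes, its efficiency degrades exponentially with the bias unless guiding functions are introduced.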
NASA Technical Reports Server (NTRS)
Litvin, Faydor L.; Kuan, Chihping; Zhang, YI
1991-01-01
A numerical method is developed for the minimization of deviations of real tooth surfaces from the theoretical ones. The deviations are caused by errors of manufacturing, errors in the installation of machine-tool settings, and distortion of surfaces by heat treatment. The deviations are determined by coordinate measurements of gear tooth surfaces. The minimization of deviations is based on the proper correction of the initially applied machine-tool settings. The contents of the accomplished research project cover the following topics: (1) Descriptions of the principle of coordinate measurements of gear tooth surfaces; (2) Deviation of theoretical tooth surfaces (with examples of surfaces of hypoid gears and references for spiral bevel gears); (3) Determination of the reference point and the grid; (4) Determination of the deviations of real tooth surfaces at the points of the grid; and (5) Determination of the required corrections of machine-tool settings for minimization of deviations. The procedure for minimization of deviations is based on the numerical solution of an overdetermined system of n linear equations in m unknowns (m ≪ n), where n is the number of points of measurements and m is the number of parameters of the applied machine-tool settings to be corrected. The developed approach is illustrated with numerical examples.
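The final step is a linear least-squares problem: given a sensitivity matrix A (n measurement points by m settings) relating small setting corrections to surface changes, the corrections minimizing the residual deviations solve min‖Aδ + d‖². A generic sketch with a synthetic sensitivity matrix (not gear-specific data):

```python
import numpy as np

rng = np.random.default_rng(4)

n_points, m_settings = 45, 6                    # n >> m: grid points vs. correctable settings
A = rng.normal(size=(n_points, m_settings))     # synthetic sensitivities d(deviation)/d(setting)
true_correction = np.array([0.02, -0.01, 0.005, 0.0, 0.015, -0.03])

# Measured deviations produced by the (uncorrected) settings, plus measurement noise
d = -A @ true_correction + rng.normal(0, 1e-3, n_points)

# Least-squares correction of the settings: minimize || A @ delta + d ||
delta, *_ = np.linalg.lstsq(A, -d, rcond=None)
print("estimated corrections:", np.round(delta, 4))
print("residual RMS deviation:", np.sqrt(np.mean((A @ delta + d) ** 2)))
```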
Al-Air Batteries: Fundamental Thermodynamic Limitations from First-Principles Theory.
Chen, Leanne D; Nørskov, Jens K; Luntz, Alan C
2015-01-02
The Al-air battery possesses high theoretical specific energy (4140 W h/kg) and is therefore an attractive candidate for vehicle propulsion. However, the experimentally observed open-circuit potential is much lower than what bulk thermodynamics predicts, and this potential loss is typically attributed to corrosion. Similarly, large Tafel slopes associated with the battery are assumed to be due to film formation. We present a detailed thermodynamic study of the Al-air battery using density functional theory. The results suggest that the maximum open-circuit potential of the Al anode is only -1.87 V versus the standard hydrogen electrode at pH 14.6 instead of the traditionally assumed -2.34 V and that large Tafel slopes are inherent in the electrochemistry. These deviations from the bulk thermodynamics are intrinsic to the electrochemical surface processes that define Al anodic dissolution. This has contributions from both asymmetry in multielectron transfers and, more importantly, a large chemical stabilization inherent to the formation of bulk Al(OH)3 from surface intermediates. These are fundamental limitations that cannot be improved even if corrosion and film effects are completely suppressed.
First-principles study of thermal transport in nitrogenated holey graphene.
Ouyang, Tao; Xiao, Huaping; Tang, Chao; Zhang, Xiaoliang; Hu, Ming; Zhong, Jianxin
2017-01-27
Nitrogenated holey graphene (NHG), a new two-dimensional graphene variant with a large fundamental direct band gap, has recently been successfully synthesized via a simple wet-chemical reaction. Motivated by its unique geometry and novel properties, we investigated the phonon transport properties of the material by combining first-principles calculations and the phonon Boltzmann transport equation. The lattice thermal conductivity of NHG at room temperature is predicted to be about 82.22 W m⁻¹ K⁻¹, which is almost two orders of magnitude lower than that of graphene (about 3500 W m⁻¹ K⁻¹). Deviating from the traditional understanding that thermal transport in most suspended 2D materials is largely carried by the acoustic phonon modes, both the out-of-plane flexural acoustic (ZA) and the optical phonon modes make a more or less equal contribution, and their combination abnormally dominates the overall thermal transport in NHG. The major three-phonon process in NHG is further analyzed, and the scattering between the acoustic and optical phonon modes, ZA/TA/LA + O ↔ O, is the main phonon process channel. Meanwhile, the mean free path distribution of different phonon modes is calculated for the purpose of the thermal management of NHG-based devices. Our results elucidate the unusual thermal transport properties of NHG as compared with the representative case of graphene, and underpin its potential application for use by the thermal management community.
[Quantification of prostate movements during radiotherapy].
Artignan, X; Rastkhah, M; Balosso, J; Fourneret, P; Gilliot, O; Bolla, M
2006-11-01
Decreasing treatment uncertainties is one of the most important challenges in radiation oncology. Numerous techniques are available to quantify prostate motion and to visualise the prostate location day after day before each irradiation: CT scan, cone-beam CT scan, ultrasound, prostatic markers... Knowledge of prostate motion is necessary to define the minimal margin around the target volume needed to avoid mispositioning during the treatment session. Different kinds of prostate movement have been studied and are reported in the present work: namely, those with a large amplitude extending throughout the whole treatment period on the one hand, and those with a shorter amplitude occurring during a treatment session on the other hand. The long-lasting movements are mostly in the anterior-posterior direction (3 mm standard deviation), and secondarily in the cranial-caudal (1-2 mm standard deviation) and lateral directions (0.5-1 mm standard deviation). They are mostly due to the state of rectal filling and mildly due to bladder filling or the position of the lower limbs. On the other hand, the shorter movement that occurs during the treatment session is mostly a variation of position around a steady point represented by the apex. Once again, the state of rectal filling is the principal cause. Thus, during the 20 minutes of a treatment session, including the positioning of the patient, a movement of less than 3 mm can be expected when the rectum is empty. Ideally, real-time imaging tools should allow an accurate localisation of the prostate and the adaptation of the dosimetry before each treatment session in a time envelope not exceeding 20 minutes.
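One widely used recipe for converting such systematic (Σ) and random (σ) motion components into a planning margin is the van Herk formula M ≈ 2.5Σ + 0.7σ; the illustration below applies it to values loosely inspired by the anterior-posterior figures above (the formula and the split into Σ and σ are not from this paper):

```python
# van Herk-type CTV-to-PTV margin: M = 2.5 * Sigma + 0.7 * sigma  (all in mm),
# with Sigma the systematic (preparation) SD and sigma the random (execution) SD.
Sigma_ap = 3.0   # mm, inter-fraction systematic component (assumed)
sigma_ap = 2.0   # mm, intra-fraction random component (assumed)

margin_ap = 2.5 * Sigma_ap + 0.7 * sigma_ap
print(f"suggested anterior-posterior margin: {margin_ap:.1f} mm")
```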
Babinet's principle in double-refraction systems
NASA Astrophysics Data System (ADS)
Ropars, Guy; Le Floch, Albert
2014-06-01
Babinet's principle applied to systems with double refraction is shown to involve spatial interchanges between the ordinary and extraordinary patterns observed through two complementary screens. As in the case of metamaterials, the extraordinary beam does not follow the Snell-Descartes refraction law, and the superposition principle has to be applied simultaneously at two points. Surprisingly, and in contrast to the intuitive impression, in the presence of the screen with an opaque region we observe that the emerging extraordinary photon pattern, which has nevertheless undergone a deviation, remains fixed when a natural birefringent crystal is rotated, while the ordinary one rotates with the crystal. The twofold application of Babinet's principle implies intensity and polarization interchanges, but also spatial and dynamic interchanges which should occur in birefringent metamaterials.
Nonequilibrium thermodynamic potentials for continuous-time Markov chains.
Verley, Gatien
2016-01-01
We connect the rare fluctuations of an equilibrium (EQ) process and the typical fluctuations of a nonequilibrium (NE) stationary process. In the framework of large deviation theory, this observation allows us to introduce NE thermodynamic potentials. For continuous-time Markov chains, we identify the relevant pairs of conjugated variables and propose two NE ensembles: one with fixed dynamics and fluctuating time-averaged variables, and another with fixed time-averaged variables but a fluctuating dynamics. Accordingly, we show that NE processes are equivalent to conditioned EQ processes, ensuring that the NE potentials are Legendre duals of each other. We find a variational principle satisfied by the NE potentials, which reach their maximum in the NE stationary state and whose first derivatives produce the NE equations of state and second derivatives produce the NE Maxwell relations generalizing the Onsager reciprocity relations.
A visual tristimulus projection colorimeter.
Valberg, A
1971-01-01
Based on the optical principle of a slide projector, a visual tristimulus projection colorimeter has been developed. The colorimeter operates with easily interchangeable sets of primary color filters placed in a frame at the objective. The apparatus has proved to be fairly accurate. The reproducibility of the color matches, as measured by the standard deviation, is equal to the visual sensitivity to color differences for each observer. Examples of deviations in the matches among individuals, as well as deviations compared with the CIE 1931 Standard Observer, are given. These deviations are demonstrated to be solely due to individual differences in the perception of metameric colors. Thus, taking advantage of objective observation (allowing all adjustments to be judged by a group of impartial observers), the colorimeter provides an excellent aid in the study of discrimination, metamerism, and related effects which are of considerable interest in current research in colorimetry and in the study of color vision tests.
Hessian matrix approach for determining error field sensitivity to coil deviations
NASA Astrophysics Data System (ADS)
Zhu, Caoxiang; Hudson, Stuart R.; Lazerson, Samuel A.; Song, Yuntao; Wan, Yuanxi
2018-05-01
The presence of error fields has been shown to degrade plasma confinement and drive instabilities. Error fields can arise from many sources, but are predominantly attributed to deviations in the coil geometry. In this paper, we introduce a Hessian matrix approach for determining error field sensitivity to coil deviations. A primary cost function used for designing stellarator coils, the surface integral of normalized normal field errors, was adopted to evaluate the deviation of the generated magnetic field from the desired magnetic field. The FOCUS code (Zhu et al 2018 Nucl. Fusion 58 016008) is utilized to provide fast and accurate calculations of the Hessian. The sensitivities of error fields to coil displacements are then determined by the eigenvalues of the Hessian matrix. A proof-of-principle example is given on a CNT-like configuration. We anticipate that this new method could provide information to avoid dominant coil misalignments and simplify coil designs for stellarators.
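In spirit, the analysis builds the Hessian of the error-field cost function with respect to the coil degrees of freedom and reads off its leading eigenvalues and eigenvectors; a generic finite-difference sketch with a toy quadratic cost (not the FOCUS objective):

```python
import numpy as np

def cost(x):
    """Toy stand-in for the normalized-normal-field cost as a function of coil parameters."""
    Q = np.array([[4.0, 1.0, 0.0],
                  [1.0, 0.5, 0.2],
                  [0.0, 0.2, 0.05]])
    return 0.5 * x @ Q @ x

def hessian_fd(f, x0, h=1e-3):
    """Central finite-difference Hessian of a scalar function."""
    n = x0.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i], np.eye(n)[j]
            H[i, j] = (f(x0 + h*e_i + h*e_j) - f(x0 + h*e_i - h*e_j)
                       - f(x0 - h*e_i + h*e_j) + f(x0 - h*e_i - h*e_j)) / (4.0 * h * h)
    return H

x0 = np.zeros(3)                          # nominal (optimized) coil parameters
eigvals, eigvecs = np.linalg.eigh(hessian_fd(cost, x0))
print("sensitivity eigenvalues:", np.round(eigvals, 3))
print("most damaging perturbation direction:", np.round(eigvecs[:, -1], 3))
```

The eigenvectors with the largest eigenvalues identify the coil misalignments to guard against most carefully, which is the practical output of the paper's method.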
A tomographic test of cosmological principle using the JLA compilation of type Ia supernovae
NASA Astrophysics Data System (ADS)
Chang, Zhe; Lin, Hai-Nan; Sang, Yu; Wang, Sai
2018-05-01
We test the cosmological principle by fitting a dipolar modulation of distance modulus and searching for an evolution of this modulation with respect to cosmological redshift. Based on a redshift tomographic method, we divide the Joint Light-curve Analysis compilation of supernovae of type Ia into different redshift bins, and employ a Markov-Chain Monte-Carlo method to infer the anisotropic amplitude and direction in each redshift bin. However, we do not find any significant deviations from the cosmological principle, and the anisotropic amplitude is stringently constrained to be less than a few thousandths at 95% confidence level.
System of tolerances for a solar-tower power station
NASA Astrophysics Data System (ADS)
Aparisi, R. R.; Tepliakov, D. I.
The principles underlying the establishment of a system of tolerances for a solar-tower station are presented. Attention is given to static and dynamic tolerances and deviations for a single heliostat, and geometrical tolerances for a field of heliostats.
Humane Management in Times of Restraint.
ERIC Educational Resources Information Center
Auster, Ethel
1987-01-01
Briefly reviews the theoretical principles of decision making and communication for effective management, and describes management practices in Canadian academic libraries facing retrenchment that deviate from this theoretical model. Suggestions for achieving greater congruency between scholarly theory and management practice, thereby facilitating…
Optical Issues in Measuring Strabismus
Irsch, Kristina
2015-01-01
Potential errors and complications during the examination and treatment of strabismic patients can be reduced by recognition of certain optical issues. This article reviews basic as well as guiding principles of prism optics and the optics of the eye to equip the reader with the necessary know-how to avoid pitfalls that are commonly encountered when using prisms to measure ocular deviations (e.g., during cover testing), and when observing the corneal light reflex to estimate ocular deviations (e.g., during Hirschberg or Krimsky testing in patients who do not allow for cover testing using prisms). PMID:26180462
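Two rules of thumb behind these measurements can be made explicit (standard textbook relations quoted as an illustration, not taken from the article): Prentice's rule converts an angular deviation into prism diopters, and the Hirschberg method maps corneal-reflex decentration to an approximate deviation using roughly 22 prism diopters per millimetre:

```python
import numpy as np

def prism_diopters_from_angle(theta_deg):
    """Prentice convention: 1 prism diopter = 1 cm displacement at 1 m, i.e. 100*tan(theta)."""
    return 100.0 * np.tan(np.radians(theta_deg))

def hirschberg_estimate(decentration_mm, ratio_pd_per_mm=22.0):
    """Approximate deviation from corneal light-reflex decentration; the ratio is the
    commonly quoted ~22 prism diopters per mm and varies between individuals."""
    return ratio_pd_per_mm * decentration_mm

print(f"15 degree deviation  ~ {prism_diopters_from_angle(15):.0f} prism diopters")
print(f"1.0 mm reflex offset ~ {hirschberg_estimate(1.0):.0f} prism diopters")
```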
Accuracy of Cycling Power Meters against a Mathematical Model of Treadmill Cycling.
Maier, Thomas; Schmid, Lucas; Müller, Beat; Steiner, Thomas; Wehrlin, Jon Peter
2017-06-01
The aim of this study was to compare the accuracy of a large number of current mobile cycling power meters used by elite and recreational cyclists against a first-principles-based mathematical model of treadmill cycling. 54 power meters from 9 manufacturers used by 32 cyclists were calibrated. While the cyclist coasted downhill on a motorised treadmill, a back-pulling system was adjusted to counter the downhill force. The system was then loaded 3 times with 4 different masses while the cyclist pedalled to keep his position. The mean deviation from the model (trueness) and the coefficient of variation (precision) were analysed. The mean deviations of the power meters were -0.9±3.2% (mean±SD), with 6 power meters deviating by more than ±5%. The coefficients of variation of the power meters were 1.2±0.9% (mean±SD), with Stages varying more than SRM (p<0.001) and PowerTap (p<0.001). In conclusion, current power meters used by elite and recreational cyclists vary considerably in their trueness; precision is generally high but differs between manufacturers. Calibrating and adjusting the trueness of every power meter against a first-principles-based reference is advised for accurate measurements. © Georg Thieme Verlag KG Stuttgart · New York.
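The two figures of merit are straightforward to compute from repeated comparisons against the reference power; a small sketch with made-up numbers (not the study's data):

```python
import numpy as np

# Hypothetical repeated measurements of one power meter against the model reference (watts)
reference = np.array([150.0, 200.0, 250.0, 300.0] * 3)          # 4 loads x 3 repetitions
measured = reference * (1 + np.random.default_rng(5).normal(-0.005, 0.012, reference.size))

relative_dev = (measured - reference) / reference * 100.0        # percent

trueness = relative_dev.mean()                 # mean deviation from the reference model
precision = relative_dev.std(ddof=1)           # spread of the deviations (~ coefficient of variation)

print(f"trueness (mean deviation): {trueness:+.2f} %")
print(f"precision (SD of deviations): {precision:.2f} %")
```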
Beyond the Isogloss: Trends in Hispanic Dialectology.
ERIC Educational Resources Information Center
Lipski, John M.
1989-01-01
An overview of contemporary Hispanic dialectology, focusing on phonological phenomena, syntax, classification schemes, and bilingual communities, demonstrates that dialectology has long ceased to be the collection of innumerable surface deviations. It is suggested that dialectology is a theoretical discipline searching for universal principles to…
2 CFR Appendix A to Part 230 - General Principles
Code of Federal Regulations, 2013 CFR
2013-01-01
... subgrant or subcontract). Equipment, capital expenditures, charges for patient care, rental costs and the... care in connection with organizations or separate divisions thereof which receive the preponderance of... deviations from the established practices of the organization which may unjustifiably increase the award...
2 CFR Appendix A to Part 230 - General Principles
Code of Federal Regulations, 2011 CFR
2011-01-01
... subgrant or subcontract). Equipment, capital expenditures, charges for patient care, rental costs and the... care in connection with organizations or separate divisions thereof which receive the preponderance of... deviations from the established practices of the organization which may unjustifiably increase the award...
2 CFR Appendix A to Part 230 - General Principles
Code of Federal Regulations, 2012 CFR
2012-01-01
... subgrant or subcontract). Equipment, capital expenditures, charges for patient care, rental costs and the... care in connection with organizations or separate divisions thereof which receive the preponderance of... deviations from the established practices of the organization which may unjustifiably increase the award...
Squeezed States, Uncertainty Relations and the Pauli Principle in Composite and Cosmological Models
NASA Technical Reports Server (NTRS)
Terazawa, Hidezumi
1996-01-01
The importance of not only uncertainty relations but also the Pauli exclusion principle is emphasized in discussing various 'squeezed states' existing in the universe. The contents of this paper include: (1) Introduction; (2) Nuclear Physics in the Quark-Shell Model; (3) Hadron Physics in the Standard Quark-Gluon Model; (4) Quark-Lepton-Gauge-Boson Physics in Composite Models; (5) Astrophysics and Space-Time Physics in Cosmological Models; and (6) Conclusion. Also, not only the possible breakdown of (or deviation from) uncertainty relations but also the superficial violation of the Pauli principle at short distances (or high energies) in composite (and string) models is discussed in some detail.
On the superposition principle in interference experiments.
Sinha, Aninda; H Vijay, Aravind; Sinha, Urbasi
2015-05-14
The superposition principle is usually incorrectly applied in interference experiments. This has recently been investigated through numerics based on Finite Difference Time Domain (FDTD) methods as well as the Feynman path integral formalism. In the current work, we have derived an analytic formula for the Sorkin parameter which can be used to determine the deviation from the application of the principle. We have found excellent agreement between the analytic distribution and those that have been earlier estimated by numerical integration as well as resource intensive FDTD simulations. The analytic handle would be useful for comparing theory with future experiments. It is applicable both to physics based on classical wave equations as well as the non-relativistic Schrödinger equation.
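The Sorkin parameter combines the intensities measured with the different slit combinations so that any genuine three-path contribution shows up as a nonzero value; the usual definition, with one common normalization choice, is sketched below with placeholder intensities:

```python
# Sorkin combination for a triple-slit experiment: if only pairwise interference
# contributed (naive superposition), epsilon would vanish. I_X are measured
# intensities with the indicated slit combinations open; values are placeholders.
I_ABC, I_AB, I_AC, I_BC = 2.31, 1.30, 1.10, 1.20
I_A, I_B, I_C, I_0 = 0.50, 0.45, 0.40, 0.05          # single slits and all-blocked background

epsilon = I_ABC - I_AB - I_AC - I_BC + I_A + I_B + I_C - I_0
# one common normalization: sum of the magnitudes of the two-path interference terms
delta = (abs(I_AB - I_A - I_B + I_0)
         + abs(I_AC - I_A - I_C + I_0)
         + abs(I_BC - I_B - I_C + I_0))
kappa = epsilon / delta if delta else float("nan")

print(f"epsilon = {epsilon:+.3f}   normalized kappa = {kappa:+.3f}")
```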
Quantifying inhomogeneity in fractal sets
NASA Astrophysics Data System (ADS)
Fraser, Jonathan M.; Todd, Mike
2018-04-01
An inhomogeneous fractal set is one which exhibits different scaling behaviour at different points. The Assouad dimension of a set is a quantity which finds the ‘most difficult location and scale’ at which to cover the set and its difference from box dimension can be thought of as a first-level overall measure of how inhomogeneous the set is. For the next level of analysis, we develop a quantitative theory of inhomogeneity by considering the measure of the set of points around which the set exhibits a given level of inhomogeneity at a certain scale. For a set of examples, a family of invariant subsets of the 2-torus, we show that this quantity satisfies a large deviations principle. We compare members of this family, demonstrating how the rate function gives us a deeper understanding of their inhomogeneity.
Cotter, C J; Gottwald, G A; Holm, D D
2017-09-01
In Holm (Holm 2015 Proc. R. Soc. A 471, 20140963. (doi:10.1098/rspa.2014.0963)), stochastic fluid equations were derived by employing a variational principle with an assumed stochastic Lagrangian particle dynamics. Here we show that the same stochastic Lagrangian dynamics naturally arises in a multi-scale decomposition of the deterministic Lagrangian flow map into a slow large-scale mean and a rapidly fluctuating small-scale map. We employ homogenization theory to derive effective slow stochastic particle dynamics for the resolved mean part, thereby obtaining stochastic fluid partial differential equations in the Eulerian formulation. To justify the application of rigorous homogenization theory, we assume mildly chaotic fast small-scale dynamics, as well as a centring condition. The latter requires that the mean of the fluctuating deviations is small when pulled back to the mean flow.
A Near-Infrared Spectrometer Based on Novel Grating Light Modulators
Wei, Wei; Huang, Shanglian; Wang, Ning; Jin, Zhu; Zhang, Jie; Chen, Weimin
2009-01-01
A near-infrared spectrometer based on novel MOEMS grating light modulators is proposed. The spectrum detection method that combines a grating light modulator array with a single near-infrared detector has been applied. Firstly, optics theory has been used to analyze the essential principles of the proposed spectroscopic sensor. Secondly, the grating light modulators have been designed and fabricated by micro-machining technology. Finally, the principles of this spectroscopic sensor have been validated and its key parameters have been tested by experiments. The result shows that the spectral resolution is better than 10 nm, the wavelength deviation is less than 1 nm, the deviation of the intensity of peak wavelength is no more than 0.5%, the driving voltage of grating light modulators array device is below 25 V and the response frequency of it is about 5 kHz. With low cost, satisfactory precision, portability and other advantages, the spectrometer should find potential applications in food safety and quality monitoring, pharmaceutical identification and agriculture product quality classification. PMID:22574065
Ultimate explanations and suboptimal choice.
Vasconcelos, Marco; Machado, Armando; Pandeirada, Josefa N S
2018-07-01
Researchers have unraveled multiple cases in which behavior deviates from rationality principles. We propose that such deviations are valuable tools to understand the adaptive significance of the underpinning mechanisms. To illustrate, we discuss in detail an experimental protocol in which animals systematically incur substantial foraging losses by preferring a lean but informative option over a rich but non-informative one. To understand how adaptive mechanisms may fail to maximize food intake, we review a model inspired by optimal foraging principles that reconciles sub-optimal choice with the view that current behavioral mechanisms were pruned by the optimizing action of natural selection. To move beyond retrospective speculation, we then review critical tests of the model, regarding both its assumptions and its (sometimes counterintuitive) predictions, all of which have been upheld. The overall contention is that (a) known mechanisms can be used to develop better ultimate accounts and that (b) to understand why mechanisms that generate suboptimal behavior evolved, we need to consider their adaptive value in the animal's characteristic ecology. Copyright © 2018 Elsevier B.V. All rights reserved.
Effect of stress on energy flux deviation of ultrasonic waves in GR/EP composites
NASA Technical Reports Server (NTRS)
Prosser, William H.; Kriz, R. D.; Fitting, Dale W.
1990-01-01
Ultrasonic waves suffer energy flux deviation in graphite/epoxy because of the large anisotropy. The angle of deviation is a function of the elastic coefficients. For nonlinear solids, these coefficients, and thus the angle of deviation, are functions of stress. Acoustoelastic theory was used to model the effect of stress on flux deviation for unidirectional T300/5208 using previously measured elastic coefficients. Computations were made for uniaxial stress along the x3 axis (fiber axis) and along the x1 axis, for waves propagating in the x1-x3 plane. These results predict a shift as large as three degrees for the quasi-transverse wave. The shift in energy flux offers a new nondestructive technique for evaluating stress in composites.
Das, Biswajit; Gangopadhyay, Gautam
2018-05-07
In the framework of large deviation theory, we have characterized the nonequilibrium turnover statistics of enzyme catalysis in a chemiostatic flow with externally controllable parameters, such as the substrate injection rate and mechanical force. In the kinetics of the process, we have shown the fluctuation theorems in terms of the symmetry of the scaled cumulant generating function (SCGF) in the transient and steady state regimes, and a similar symmetry rule is reflected in the large deviation rate function (LDRF) as a property of the dissipation rate through the boundaries. Large deviation theory also gives the thermodynamic force of a nonequilibrium steady state, as is usually recorded experimentally by single molecule techniques, which plays a key role responsible for the dynamical symmetry of the SCGF and LDRF. Using some special properties of the Legendre transformation, we have provided a relation between the fluctuations of fluxes and dissipation rates; among them, the fluctuation of the turnover rate is routinely estimated, but the fluctuation in the dissipation rate is yet to be characterized for small systems. Such an enzymatic reaction flow system can be a very good testing ground to systematically understand rare events from large deviation theory, beyond the fluctuation theorem and the central limit theorem.
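The symmetry referred to here is of the Gallavotti-Cohen type: for the distribution of the time-averaged flux j it reads ln[P(j)/P(−j)] ≈ A j t, with A the thermodynamic force. A quick consistency check on a Gaussian stand-in distribution (parameters assumed):

```python
import numpy as np

# Gaussian stand-in for the distribution of the time-averaged flux j over time t,
# P_t(j) ~ exp(-t * (j - j_mean)**2 / (2 * c)), with c the current "diffusivity".
j_mean, c, t = 0.8, 1.5, 50.0
A = 2.0 * j_mean / c                     # affinity for which the symmetry holds exactly here

def log_P(j):
    return -t * (j - j_mean) ** 2 / (2.0 * c)

j = np.linspace(0.1, 2.0, 5)
lhs = log_P(j) - log_P(-j)               # ln[P(j)/P(-j)]
rhs = A * j * t                          # Gallavotti-Cohen prediction
print(np.allclose(lhs, rhs))             # True: the rate function obeys I(-j) - I(j) = A j
```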
Moderate Deviation Principles for Stochastic Differential Equations with Jumps
2014-01-15
…N^{φ^ε}(dt, dy), and the controls φ^ε : X × [0, T] → [0, ∞) are predictable processes satisfying L_T(φ^ε) ≤ M a²(ε) for some constant M. Here L_T denotes … space. Although in the moderate deviations problem one has the stronger bound L_T(φ^ε) ≤ M a²(ε) on the cost of controls, the mere tightness of φ^ε does not … suitable quadratic form. For ε > 0 and M < ∞, consider the spaces S^M_{+,ε} := {φ : X × [0, T] → R₊ | L_T(φ) ≤ M a²(ε)}  (2.5)  and S^M_ε := {ψ : X × [0, T] → R | …}
Implementing New Public Management in Educational Policy
ERIC Educational Resources Information Center
van der Sluis, Margriet E.; Reezigt, Gerry J.; Borghans, Lex
2017-01-01
This article describes how the Dutch Department of Education incorporates New Public Management (NPM) principles in educational policy, and whether conflicts of interest between the Department and schools cause deviations from NPM. We reviewed policy documents and performed secondary analyses on school data. Educational policy focuses on output…
Progress in understanding heavy-ion stopping
NASA Astrophysics Data System (ADS)
Sigmund, P.; Schinner, A.
2016-09-01
We report some highlights of our work with heavy-ion stopping in the energy range where Bethe stopping theory breaks down. Main tools are our binary stopping theory (PASS code), the reciprocity principle, and Paul's data base. Comparisons are made between PASS and three alternative theoretical schemes (CasP, HISTOP and SLPA). In addition to equilibrium stopping we discuss frozen-charge stopping, deviations from linear velocity dependence below the Bragg peak, application of the reciprocity principle in low-velocity stopping, modeling of equilibrium charges, and the significance of the so-called effective charge.
NASA Technical Reports Server (NTRS)
Chen, Wei; Tsui, Kwok-Leung; Allen, Janet K.; Mistree, Farrokh
1994-01-01
In this paper we introduce a comprehensive and rigorous robust design procedure to overcome some limitations of the current approaches. A comprehensive approach is general enough to model the two major types of robust design applications, namely, robust design associated with the minimization of the deviation of performance caused by the deviation of noise factors (uncontrollable parameters), and robust design due to the minimization of the deviation of performance caused by the deviation of control factors (design variables). We achieve mathematical rigor by using, as a foundation, principles from the design of experiments and optimization. Specifically, we integrate the Response Surface Method (RSM) with the compromise Decision Support Problem (DSP). Our approach is especially useful for design problems where there are no closed-form solutions and system performance is computationally expensive to evaluate. The design of a solar powered irrigation system is used as an example. Our focus in this paper is on illustrating our approach rather than on the results per se.
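A stripped-down version of the response-surface step is to fit quadratic models for both the mean and the standard deviation of the performance over a designed experiment and then trade them off; the sketch below uses a synthetic "expensive" evaluation in place of the real simulation:

```python
import numpy as np

rng = np.random.default_rng(6)

def expensive_simulation(x, noise):
    """Synthetic stand-in for a costly performance evaluation with a noise factor."""
    return (x - 0.3) ** 2 + 0.5 * noise * (1.0 + x)

# Designed experiment: control-factor levels x, replicated over sampled noise values
x_levels = np.linspace(-1, 1, 9)
mean_y, std_y = [], []
for x in x_levels:
    y = np.array([expensive_simulation(x, n) for n in rng.normal(size=50)])
    mean_y.append(y.mean())
    std_y.append(y.std(ddof=1))

# Quadratic response surfaces for the mean and standard deviation of the performance
coef_mean = np.polyfit(x_levels, mean_y, 2)
coef_std = np.polyfit(x_levels, std_y, 2)

# Compromise: weighted sum of "bring mean to target" and "minimize deviation"
target, w = 0.0, 0.5
x_grid = np.linspace(-1, 1, 401)
objective = (w * (np.polyval(coef_mean, x_grid) - target) ** 2
             + (1 - w) * np.polyval(coef_std, x_grid) ** 2)
print(f"robust setting of the control factor: x = {x_grid[np.argmin(objective)]:+.2f}")
```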
NASA Astrophysics Data System (ADS)
Codis, Sandrine; Bernardeau, Francis; Pichon, Christophe
2016-08-01
In order to quantify the error budget in the measured probability distribution functions of cell densities, the two-point statistics of cosmic densities in concentric spheres is investigated. Bias functions are introduced as the ratio of their two-point correlation function to the two-point correlation of the underlying dark matter distribution. They describe how cell densities are spatially correlated. They are computed here via the so-called large deviation principle in the quasi-linear regime. Their large-separation limit is presented and successfully compared to simulations for density and density slopes: this regime is shown to be reached rapidly, allowing sub-percent precision to be obtained for a wide range of densities and variances. The corresponding asymptotic limit provides an estimate of the cosmic variance of standard concentric cell statistics applied to finite surveys. More generally, no assumption on the separation is required for some specific moments of the two-point statistics, for instance when predicting the generating function of cumulants containing any powers of concentric densities in one location and one power of density at some arbitrary distance from the rest. This exact 'one external leg' cumulant generating function is used in particular to probe the rate of convergence of the large-separation approximation.
NASA Astrophysics Data System (ADS)
Ananna, Tonima Tasnin; Salvato, Mara; LaMassa, Stephanie; Urry, C. Megan; Cappelluti, Nico; Cardamone, Carolin; Civano, Francesca; Farrah, Duncan; Gilfanov, Marat; Glikman, Eilat; Hamilton, Mark; Kirkpatrick, Allison; Lanzuisi, Giorgio; Marchesi, Stefano; Merloni, Andrea; Nandra, Kirpal; Natarajan, Priyamvada; Richards, Gordon T.; Timlin, John
2017-11-01
Multiwavelength surveys covering large sky volumes are necessary to obtain an accurate census of rare objects such as high-luminosity and/or high-redshift active galactic nuclei (AGNs). Stripe 82X is a 31.3 deg² X-ray survey with Chandra and XMM-Newton observations overlapping the legacy Sloan Digital Sky Survey Stripe 82 field, which has a rich investment of multiwavelength coverage from the ultraviolet to the radio. The wide-area nature of this survey presents new challenges for photometric redshifts for AGNs compared to previous work on narrow-deep fields because it probes different populations of objects that need to be identified and represented in the library of templates. Here we present an updated X-ray plus multiwavelength matched catalog, including Spitzer counterparts, and estimated photometric redshifts for 5961 (96% of a total of 6181) X-ray sources, which have a normalized median absolute deviation σ_NMAD = 0.06 and an outlier fraction η = 13.7%. The populations found in this survey and the template libraries used for photometric redshifts provide important guiding principles for upcoming large-area surveys such as eROSITA and 3XMM (in X-ray) and the Large Synoptic Survey Telescope (optical).
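The two quality metrics quoted here are conventionally defined as sigma_NMAD = 1.48 x median(|x - median(x)|) with x = (z_phot - z_spec)/(1 + z_spec), and the fraction of sources with |x| above a cut (0.15 is a common choice; the paper's exact definition may differ). A minimal sketch on synthetic redshifts:

```python
import numpy as np

def photo_z_metrics(z_phot, z_spec, outlier_cut=0.15):
    """Conventional photo-z quality metrics: sigma_NMAD and outlier fraction eta."""
    scaled = (z_phot - z_spec) / (1.0 + z_spec)
    sigma_nmad = 1.48 * np.median(np.abs(scaled - np.median(scaled)))
    eta = np.mean(np.abs(scaled) > outlier_cut)
    return sigma_nmad, eta

# Toy data standing in for an X-ray selected sample with spectroscopic redshifts.
rng = np.random.default_rng(1)
z_spec = rng.uniform(0.1, 3.0, 2000)
z_phot = z_spec + rng.normal(0.0, 0.05, 2000) * (1 + z_spec)
catastrophic = rng.random(2000) < 0.1                  # ~10% catastrophic failures
z_phot[catastrophic] += rng.uniform(0.5, 1.5, catastrophic.sum())

sigma_nmad, eta = photo_z_metrics(z_phot, z_spec)
print(f"sigma_NMAD = {sigma_nmad:.3f}, outlier fraction = {eta:.1%}")
```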
NASA Technical Reports Server (NTRS)
Prosser, William H.; Kriz, R. D.; Fitting, Dale W.
1990-01-01
Ultrasonic waves suffer energy flux deviation in graphite/epoxy because of the large anisotropy. The angle of deviation is a function of the elastic coefficients. For nonlinear solids, these coefficients, and thus the angle of deviation, are a function of stress. Acoustoelastic theory was used to model the effect of stress on flux deviation for unidirectional T300/5208 using previously measured elastic coefficients. Computations were made for uniaxial stress along the x3 axis (fiber axis) and the x1 axis for waves propagating in the x1-x3 plane. These results predict a shift as large as three degrees for the quasi-transverse wave. The shift in energy flux offers a new nondestructive technique for evaluating stress in composites.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-30
[Flattened table from the Federal Register notice: reported surface-unit efficiencies (%), deviations (%), and intervals for large electric coil units, with entries such as 69.79 / 1.59 / 1.97, 64.52 / 0.87 / 1.08, 79.81 / 1.66 / 2.06, and 61.81 / 2.83 / 3.52.]
Dispersion in Rectangular Networks: Effective Diffusivity and Large-Deviation Rate Function
NASA Astrophysics Data System (ADS)
Tzella, Alexandra; Vanneste, Jacques
2016-09-01
The dispersion of a diffusive scalar in a fluid flowing through a network has many applications including to biological flows, porous media, water supply, and urban pollution. Motivated by this, we develop a large-deviation theory that predicts the evolution of the concentration of a scalar released in a rectangular network in the limit of large time t ≫ 1. This theory provides an approximation for the concentration that remains valid for large distances from the center of mass, specifically for distances up to O(t) and thus much beyond the O(t^{1/2}) range where a standard Gaussian approximation holds. A byproduct of the approach is a closed-form expression for the effective diffusivity tensor that governs this Gaussian approximation. Monte Carlo simulations of Brownian particles confirm the large-deviation results and demonstrate their effectiveness in describing the scalar distribution when t is only moderately large.
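As a generic illustration of the two descriptions compared here (a Gaussian core characterized by an effective diffusivity versus a large-deviation rate function valid further out), the following sketch estimates both for a toy drifting random walk rather than the rectangular-network model of the paper; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy transport model standing in for dispersion through a network: many independent
# walkers taking drifting Gaussian steps for a long time t.
n_walkers, t = 100_000, 200
X = np.zeros(n_walkers)
for _ in range(t):
    X += rng.normal(0.1, 1.0, n_walkers)

# Gaussian (central-limit) description: mean drift and effective diffusivity.
u_eff = X.mean() / t
D_eff = X.var() / (2 * t)
print(f"effective drift {u_eff:.3f}, effective diffusivity {D_eff:.3f}")

# Large-deviation description: empirical scaled cumulant generating function and its
# Legendre transform, which stays meaningful beyond the O(sqrt(t)) Gaussian core.
ks = np.linspace(-0.15, 0.15, 61)
lam = np.array([np.log(np.mean(np.exp(k * X))) / t for k in ks])
for v in np.linspace(-0.05, 0.25, 7):
    rate = np.max(ks * v - lam)               # I(v) = sup_k [k v - lambda(k)]
    print(f"v = {v:5.2f}   I(v) ~ {rate:7.5f}")
```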
NASA Astrophysics Data System (ADS)
Zhang, Huifang; Yang, Minghong; Xu, Xueke; Wu, Lunzhe; Yang, Weiguang; Shao, Jianda
2017-10-01
The surface figure control of the conventional annular polishing system is ordinarily realized by the interaction between the conditioner and the lap. The surface profile of the pitch lap corrected by the marble conditioner has been measured and analyzed as a function of kinematics, loading conditions, and polishing time. Surface profile measuring equipment for the large lap, based on laser alignment, was developed with an accuracy of about 1 μm. The conditioning mechanism of the conditioner is determined simply by the kinematics and the full-fitting principle, but unexpected surface profile deviations of the lap emerge frequently due to numerous influencing factors, including the geometrical relationship and the pressure distribution at the conditioner/lap interface. Both factors are quantitatively evaluated and described, and have been combined to develop a spatial and temporal model to simulate the surface profile evolution of the pitch lap. The simulations are consistent with the experiments. This study is an important step toward deterministic full-aperture annular polishing, providing beneficial guidance for the surface profile correction of the pitch lap.
Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi
2017-07-04
BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, where frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherence and maximum likelihood detection is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, and the differential delay time is set to be a multiple of the bit cycle to remove the influence of the NH code. Secondly, maximum likelihood detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm at C/N0 values of 20-40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method with a frequency deviation of 50 Hz. This algorithm can effectively remove the effect of the BeiDou NH code and weaken the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak signal bit synchronization with large frequency deviation.
Large Deviations: Advanced Probability for Undergrads
ERIC Educational Resources Information Center
Rolls, David A.
2007-01-01
In the branch of probability called "large deviations," rates of convergence (e.g. of the sample mean) are considered. The theory makes use of the moment generating function. So, particularly for sums of independent and identically distributed random variables, the theory can be made accessible to senior undergraduates after a first course in…
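A minimal sketch of the moment-generating-function machinery mentioned here, for the sample mean of Bernoulli trials: the Cramér/Chernoff bound exp(-n I(a)) is computed from the relative-entropy rate function and compared with a Monte Carlo estimate (parameters are illustrative).

```python
import numpy as np

p, n, a = 0.5, 100, 0.65          # Bernoulli(p), sample size n, threshold a > p

def rate_function(a, p):
    """Cramer rate function for a Bernoulli(p) sample mean (relative-entropy form)."""
    return a * np.log(a / p) + (1 - a) * np.log((1 - a) / (1 - p))

# Chernoff/Cramer upper bound on P(sample mean >= a).
bound = np.exp(-n * rate_function(a, p))

# Monte Carlo estimate of the same probability.
rng = np.random.default_rng(3)
means = rng.binomial(n, p, size=1_000_000) / n
empirical = np.mean(means >= a)

print(f"large-deviation bound exp(-n I(a)) = {bound:.3e}")
print(f"Monte Carlo estimate               = {empirical:.3e}")
```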
Fermat's principle of least time predicts refraction of ant trails at substrate borders.
Oettler, Jan; Schmid, Volker S; Zankl, Niko; Rey, Olivier; Dress, Andreas; Heinze, Jürgen
2013-01-01
Fermat's principle of least time states that light rays passing through different media follow the fastest (and not the most direct) path between two points, leading to refraction at medium borders. Humans intuitively employ this rule, e.g., when a lifeguard has to infer the fastest way to traverse both beach and water to reach a swimmer in need. Here, we tested whether foraging ants also follow Fermat's principle when forced to travel on two surfaces that differentially affected the ants' walking speed. Workers of the little fire ant, Wasmannia auropunctata, established "refracted" pheromone trails to a food source. These trails deviated from the most direct path, but were not different to paths predicted by Fermat's principle. Our results demonstrate a new aspect of decentralized optimization and underline the versatility of the simple yet robust rules governing the self-organization of group-living animals.
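A small sketch of the least-time prediction tested in this study, under an assumed geometry and assumed walking speeds: the crossing point that minimizes travel time is found numerically and checked against the Snell-type relation sin(theta1)/v1 = sin(theta2)/v2.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Nest at (0, -d1) on substrate 1, food at (L, d2) on substrate 2; the border is the x-axis.
d1, d2, L = 10.0, 6.0, 20.0        # cm, illustrative geometry
v1, v2 = 1.2, 0.6                  # assumed walking speeds on the two substrates (cm/s)

def travel_time(x):
    """Total time if the trail crosses the border at (x, 0)."""
    return np.hypot(x, d1) / v1 + np.hypot(L - x, d2) / v2

res = minimize_scalar(travel_time, bounds=(0.0, L), method="bounded")
x_star = res.x

# Fermat/Snell check: sin(theta1)/v1 == sin(theta2)/v2 at the optimal crossing point.
sin1 = x_star / np.hypot(x_star, d1)
sin2 = (L - x_star) / np.hypot(L - x_star, d2)
print(f"optimal crossing x* = {x_star:.2f} cm")
print(f"sin(theta1)/v1 = {sin1 / v1:.4f}, sin(theta2)/v2 = {sin2 / v2:.4f}")
print(f"least time = {travel_time(x_star):.2f} s vs straight-line time = "
      f"{travel_time(L * d1 / (d1 + d2)):.2f} s")
```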
Moderate deviations-based importance sampling for stochastic recursive equations
Dupuis, Paul; Johnson, Dane
2017-11-17
Abstract Subsolutions to the Hamilton–Jacobi–Bellman equation associated with a moderate deviations approximation are used to design importance sampling changes of measure for stochastic recursive equations. Analogous to what has been done for large deviations subsolution-based importance sampling, these schemes are shown to be asymptotically optimal under the moderate deviations scaling. We present various implementations and numerical results to contrast their performance, and also discuss the circumstances under which a moderate deviation scaling might be appropriate.
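The subsolution-based schemes themselves require the Hamilton-Jacobi-Bellman machinery, but the basic idea of an exponentially tilted change of measure can be sketched on a toy rare-event problem; the Gaussian increments, tilt choice and sample sizes below are illustrative assumptions, not the authors' construction.

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n, a = 50, 0.6                      # estimate P(sample mean of n N(0,1) increments >= a)

# Plain Monte Carlo: the event is rare, so most runs see few or no hits.
plain_means = rng.normal(0.0, 1.0, size=(100_000, n)).mean(axis=1)
p_plain = np.mean(plain_means >= a)

# Importance sampling with an exponential tilt: sample increments from N(theta, 1) with
# theta = a (which centres the mean on the threshold) and reweight each sample by the
# likelihood ratio exp(-theta * S_n + n * theta**2 / 2).
theta = a
S = rng.normal(theta, 1.0, size=(100_000, n)).sum(axis=1)
weights = np.exp(-theta * S + n * theta**2 / 2)
p_is = np.mean(weights * (S / n >= a))

exact = 0.5 * math.erfc(a * math.sqrt(n) / math.sqrt(2.0))   # since the mean ~ N(0, 1/n)
print(f"plain MC: {p_plain:.2e}   tilted IS: {p_is:.2e}   exact: {exact:.2e}")
```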
Study on efficiency of different topologies of magnetic coupled resonant wireless charging system
NASA Astrophysics Data System (ADS)
Cui, S.; Liu, Z. Z.; Hou, Y. J.; Zeng, H.; Yue, Z. K.; Liang, L. H.
2017-11-01
This paper analyses the relationship between the output power and transmission efficiency and the frequency, load and coupling coefficient for four topologies of magnetically coupled resonant wireless charging systems. Based on the mutual inductance principle, four circuit models are established, and the expressions for the output power and transmission efficiency of the different structures are derived. The differences between their power and efficiency characteristics are compared by simulating the SS (series-series) and SP (series-parallel) type wireless charging systems. With the same circuit component parameters, the SS structure is usually suitable for small load resistances. The SP structure can be applied to large load resistances when the transmission efficiency of the system is required to remain high. If the operating frequency deviates from the system resonance frequency, the SS type system has higher transmission efficiency than the SP type system.
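A hedged numerical sketch of the mutual-inductance circuit model described here: the two coupled loop equations are solved for SS and SP compensation at the design frequency, and the transfer efficiency is compared across load resistances. All component values are assumed for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative component values for a magnetically coupled resonant link.
L1 = L2 = 100e-6           # coil inductances (H)
R1 = R2 = 0.5              # coil resistances (ohm)
k = 0.2                    # coupling coefficient
M = k * np.sqrt(L1 * L2)
f0 = 100e3                 # design (resonant) frequency (Hz)
w = 2 * np.pi * f0
C1 = C2 = 1 / (w**2 * L1)  # compensation capacitors tuned to f0
Vs = 10.0                  # source voltage (V)

def efficiency(RL, topology):
    """Solve the two coupled loop equations and return the transfer efficiency."""
    if topology == "SS":
        Zsec_extra = 1 / (1j * w * C2) + RL          # series C2 then the load
    else:  # "SP": C2 in parallel with the load
        Zsec_extra = RL / (1 + 1j * w * C2 * RL)
    Z = np.array([[R1 + 1j * w * L1 + 1 / (1j * w * C1), 1j * w * M],
                  [1j * w * M, R2 + 1j * w * L2 + Zsec_extra]])
    I = np.linalg.solve(Z, np.array([Vs, 0.0]))
    if topology == "SS":
        Pout = abs(I[1])**2 * RL
    else:
        Vload = I[1] * Zsec_extra                    # voltage across C2 || RL
        Pout = abs(Vload)**2 / RL
    Pin = (Vs * np.conj(I[0])).real
    return Pout / Pin

for RL in [1, 5, 20, 100, 500]:
    print(f"RL = {RL:5.0f} ohm   eta_SS = {efficiency(RL, 'SS'):.3f}"
          f"   eta_SP = {efficiency(RL, 'SP'):.3f}")
```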
ATR applications of minimax entropy models of texture and shape
NASA Astrophysics Data System (ADS)
Zhu, Song-Chun; Yuille, Alan L.; Lanterman, Aaron D.
2001-10-01
Concepts from information theory have recently found favor in both the mainstream computer vision community and the military automatic target recognition community. In the computer vision literature, the principles of minimax entropy learning theory have been used to generate rich probabilistic models of texture and shape. In addition, the method of types and large deviation theory have permitted the difficulty of various texture and shape recognition tasks to be characterized by 'order parameters' that determine how fundamentally vexing a task is, independent of the particular algorithm used. These information-theoretic techniques have been demonstrated using traditional visual imagery in applications such as simulating cheetah skin textures and finding roads in aerial imagery. We discuss their application to problems in the specific application domain of automatic target recognition using infrared imagery. We also review recent theoretical and algorithmic developments which permit learning minimax entropy texture models for infrared textures in reasonable timeframes.
The correlation between relatives on the supposition of genomic imprinting.
Spencer, Hamish G
2002-01-01
Standard genetic analyses assume that reciprocal heterozygotes are, on average, phenotypically identical. If a locus is subject to genomic imprinting, however, this assumption does not hold. We incorporate imprinting into the standard quantitative-genetic model for two alleles at a single locus, deriving expressions for the additive and dominance components of genetic variance, as well as measures of resemblance among relatives. We show that, in contrast to the case with Mendelian expression, the additive and dominance deviations are correlated. In principle, this correlation allows imprinting to be detected solely on the basis of different measures of familial resemblances, but in practice, the standard error of the estimate is likely to be too large for a test to have much statistical power. The effects of genomic imprinting will need to be incorporated into quantitative-genetic models of many traits, for example, those concerned with mammalian birthweight. PMID:12019254
Stability of a horizontal well and hydraulic fracture initiation in rocks of the bazhenov formation
NASA Astrophysics Data System (ADS)
Stefanov, Yu. P.; Bakeev, R. A.; Myasnikov, A. V.; Akhtyamova, A. I.; Romanov, A. S.
2017-12-01
Three-dimensional numerical modeling of the formation of the stress-strain state in the vicinity of a horizontal well in weakened rocks of the Bazhenov formation is carried out. The influence of the well orientation and plastic deformation on the stress-strain state and the possibility of hydraulic fracturing are considered. It is shown that the deviation of the well from the direction of maximum compression leads to an increase in plastic deformation and a discrepancy between the tangential stresses around the well bore and the principal stresses in the surrounding medium. In an elastoplastic medium, an increase in the pressure in the well can lead to a large-scale development of plastic deformation, at which no tensile stresses necessary for hydraulic fracturing according to the classical scheme arise. In this case, plastic expansion and fracture of the well occur instead.
Particle creation by naked singularities in higher dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyamoto, Umpei; Nemoto, Hiroya; Shimano, Masahiro
Recently, the possibility was pointed out by one of the present authors and his collaborators that an effective naked singularity, referred to as "a visible border of spacetime", is generated by high-energy particle collision in the context of large extra dimensions or TeV-scale gravity. In this paper, we investigate particle creation by a naked singularity in general dimensions, adopting a model in which a marginally naked singularity forms in the collapse of a homothetic lightlike pressureless fluid. We find that the spectrum deviates from that of Hawking radiation due to scattering near the singularity but can be recast in quasithermal form. The temperature is always higher than that of the Hawking radiation of a black hole of the same mass, and can be arbitrarily high depending on a parameter in the model. This implies that, in principle, the naked singularity may be distinguished from a black hole in collider experiments.
Cotter, C. J.
2017-01-01
In Holm (Holm 2015 Proc. R. Soc. A 471, 20140963. (doi:10.1098/rspa.2014.0963)), stochastic fluid equations were derived by employing a variational principle with an assumed stochastic Lagrangian particle dynamics. Here we show that the same stochastic Lagrangian dynamics naturally arises in a multi-scale decomposition of the deterministic Lagrangian flow map into a slow large-scale mean and a rapidly fluctuating small-scale map. We employ homogenization theory to derive effective slow stochastic particle dynamics for the resolved mean part, thereby obtaining stochastic fluid partial differential equations in the Eulerian formulation. To justify the application of rigorous homogenization theory, we assume mildly chaotic fast small-scale dynamics, as well as a centring condition. The latter requires that the mean of the fluctuating deviations is small, when pulled back to the mean flow. PMID:28989316
Teaching Standard Deviation by Building from Student Invention
ERIC Educational Resources Information Center
Day, James; Nakahara, Hiroko; Bonn, Doug
2010-01-01
First-year physics laboratories are often driven by a mix of goals that includes the illustration or discovery of basic physics principles and a myriad of technical skills involving specific equipment, data analysis, and report writing. The sheer number of such goals seems guaranteed to produce cognitive overload, even when highly detailed…
LD-SPatt: large deviations statistics for patterns on Markov chains.
Nuel, G
2004-01-01
Statistics on Markov chains are widely used for the study of patterns in biological sequences, and can be computed through several approaches. Approaches based on the central limit theorem (CLT), which produce Gaussian approximations, are among the most popular. Unfortunately, in order to find a pattern of interest, these methods have to deal with tail distribution events where the CLT approximation is especially poor. In this paper, we propose a new approach based on large deviations theory to assess pattern statistics. We first recall theoretical results for empirical mean (level 1) as well as empirical distribution (level 2) large deviations on Markov chains. Then, we present applications of these results, focusing on numerical issues. LD-SPatt is the name of the GPL software implementing these algorithms. We compare this approach to several existing ones in terms of complexity and reliability and show that the large deviations are more reliable than the Gaussian approximations in absolute values as well as in terms of ranking, and are at least as reliable as compound Poisson approximations. We finally discuss some further possible improvements and applications of this new method.
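For level-1 statistics on a Markov chain, the large-deviation rate function can be obtained from the Perron eigenvalue of a tilted transition matrix; the sketch below illustrates this and contrasts the resulting tail estimate with a naive Gaussian one. It is a generic illustration, not the LD-SPatt implementation, and the two-state chain and observable are assumptions.

```python
import numpy as np

# Two-state Markov chain (e.g. 'in pattern' / 'out of pattern'); f counts visits to state 1.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
f = np.array([0.0, 1.0])

def scgf(k):
    """Scaled cumulant generating function: log of the Perron eigenvalue of the tilted matrix."""
    tilted = P * np.exp(k * f)[None, :]        # multiply column j by exp(k * f_j)
    return np.log(np.max(np.abs(np.linalg.eigvals(tilted))))

ks = np.linspace(-5, 5, 2001)
lams = np.array([scgf(k) for k in ks])

def rate(v):
    """Legendre transform I(v) = sup_k [k v - lambda(k)]."""
    return np.max(ks * v - lams)

# Stationary mean and asymptotic variance for the Gaussian (CLT) comparison.
pi = np.array([0.75, 0.25])                    # stationary distribution of P
mean = pi @ f
var = np.gradient(np.gradient(lams, ks), ks)[np.argmin(np.abs(ks))]   # lambda''(0)

n = 500
for v in [0.30, 0.40, 0.50]:
    ld_tail = np.exp(-n * rate(v))                        # large-deviation estimate
    gauss_tail = np.exp(-n * (v - mean)**2 / (2 * var))   # naive Gaussian tail (no prefactors)
    print(f"P(fraction of time in state 1 >= {v:.2f}):"
          f"  LD ~ {ld_tail:.2e}   Gaussian ~ {gauss_tail:.2e}")
```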
A new digitized reverse correction method for hypoid gears based on a one-dimensional probe
NASA Astrophysics Data System (ADS)
Li, Tianxing; Li, Jubo; Deng, Xiaozhong; Yang, Jianjun; Li, Genggeng; Ma, Wensuo
2017-12-01
In order to improve the tooth surface geometric accuracy and transmission quality of hypoid gears, a new digitized reverse correction method is proposed based on the measurement data from a one-dimensional probe. The minimization of tooth surface geometrical deviations is realized from the perspective of mathematical analysis and reverse engineering. Combining the analysis of complex tooth surface generation principles and the measurement mechanism of one-dimensional probes, the mathematical relationship between the theoretical designed tooth surface, the actual machined tooth surface and the deviation tooth surface is established, the mapping relation between machine-tool settings and tooth surface deviations is derived, and the essential connection between the accurate calculation of tooth surface deviations and the reverse correction of machine-tool settings is revealed. Furthermore, a reverse correction model of machine-tool settings is built, a reverse correction strategy is planned, and the minimization of tooth surface deviations is achieved by means of a numerical iterative reverse solution. On this basis, a digitized reverse correction system for hypoid gears is developed by combining numerical control generation, accurate measurement, computer numerical processing, and digitized correction. Finally, the correctness and practicability of the digitized reverse correction method are proved through a reverse correction experiment. The experimental results show that the tooth surface geometric deviations meet the engineering requirements after two trial cuts and one correction.
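The reverse-correction idea (linearize the mapping from machine-tool settings to measured tooth-surface deviations and iteratively solve for setting corrections) can be sketched with a synthetic sensitivity matrix; the matrix, noise level and nonlinearity below are assumptions, not the paper's gear model.

```python
import numpy as np

rng = np.random.default_rng(5)

n_points, n_settings = 45, 6      # measured grid points on the tooth flank, machine-tool settings

# Synthetic sensitivity matrix: d(deviation at point i)/d(setting j), e.g. from a kinematic model.
J = rng.normal(0.0, 1.0, (n_points, n_settings))

# "True" setting errors to recover, and the surface deviations they would produce
# (plus a mildly nonlinear term and probe noise to mimic reality).
x_true = np.array([0.8, -0.3, 0.5, -0.1, 0.2, -0.6])

def measured_deviation(setting_error):
    d = J @ setting_error
    return d + 0.05 * d**2 + rng.normal(0.0, 0.01, n_points)

# Iterative reverse correction: at each pass, solve the linear least-squares problem
# J dx = e for the remaining setting error and update the accumulated correction.
x = np.zeros(n_settings)
for it in range(3):
    e = measured_deviation(x_true - x)             # residual deviations after current correction
    dx, *_ = np.linalg.lstsq(J, e, rcond=None)
    x += dx
    print(f"pass {it + 1}: RMS surface deviation before update = {np.sqrt(np.mean(e**2)):.4f}")

print("recovered setting corrections:", np.round(x, 3))
print("true setting errors:          ", x_true)
```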
Numerical Large Deviation Analysis of the Eigenstate Thermalization Hypothesis
NASA Astrophysics Data System (ADS)
Yoshizawa, Toru; Iyoda, Eiki; Sagawa, Takahiro
2018-05-01
A plausible mechanism of thermalization in isolated quantum systems is based on the strong version of the eigenstate thermalization hypothesis (ETH), which states that all the energy eigenstates in the microcanonical energy shell have thermal properties. We numerically investigate the ETH by focusing on the large deviation property, which directly evaluates the ratio of athermal energy eigenstates in the energy shell. As a consequence, we have systematically confirmed that the strong ETH is indeed true even for near-integrable systems. Furthermore, we found that the finite-size scaling of the ratio of athermal eigenstates is a double exponential for nonintegrable systems. Our result illuminates the universal behavior of quantum chaos, and suggests that a large deviation analysis would serve as a powerful method to investigate thermalization in the presence of the large finite-size effect.
NASA Astrophysics Data System (ADS)
Anderson, David; Yunes, Nicolás
2017-09-01
Scalar-tensor theories of gravity modify general relativity by introducing a scalar field that couples nonminimally to the metric tensor, while satisfying the weak-equivalence principle. These theories are interesting because they have the potential to simultaneously suppress modifications to Einstein's theory on Solar System scales, while introducing large deviations in the strong field of neutron stars. Scalar-tensor theories can be classified through the choice of conformal factor, a scalar that regulates the coupling between matter and the metric in the Einstein frame. The class defined by a Gaussian conformal factor with a negative exponent has been studied the most because it leads to spontaneous scalarization (i.e. the sudden activation of the scalar field in neutron stars), which consequently leads to large deviations from general relativity in the strong field. This class, however, has recently been shown to be in conflict with Solar System observations when accounting for the cosmological evolution of the scalar field. We here study whether this remains the case when the exponent of the conformal factor is positive, as well as in another class of theories defined by a hyperbolic conformal factor. We find that in both of these scalar-tensor theories, Solar System tests are passed only in a very small subset of coupling parameter space, for a large set of initial conditions compatible with big bang nucleosynthesis. However, while we find that it is possible for neutron stars to scalarize, one must carefully select the coupling parameter to do so, and even then, the scalar charge is typically 2 orders of magnitude smaller than in the negative-exponent case. Our study suggests that future work on scalar-tensor gravity, for example in the context of tests of general relativity with gravitational waves from neutron star binaries, should be carried out within the positive coupling parameter class.
Matijević, Valentina; Secić, Ana; Zivković, Tamara Kauzlarić; Borosak, Jesenka; Kolak, Zeljka; Dimić, Zdenka
2013-09-01
Early child development, from birth until the age of one year, is characterized, among other changes, by intense motor learning. During that period, movement patterns evolve from reflexive patterns to coordinated voluntary patterns. All of the child's voluntary movements are active forms through which the child communicates with the environment. In this communication, the hand plays an important role. Its brain representation covers one-third of the entire motor region, situated in close proximity to the speech region. For this reason, some authors refer to the hand as a "speech organ". According to numerous studies, each separate finger also has a relatively large representation in the cerebral cortex, which points to the importance of the development of fine motor skills, that is, precise, highly differentiated movements of hand muscles following the principles of differentiation and hierarchical integration. Development of fine motor skills in the hand is important for overall child development, and it can also serve as an indicator of immaturity of the central nervous system. The aim of this paper is to present the development of hand motoricity from birth until the age of one year, as well as the most frequent deviations observed in children hospitalized at the Children's Department of Rehabilitation, Clinical Department of Rheumatology, Physical Medicine and Rehabilitation, Sestre milosrdnice University Hospital Center.
Enhanced detection and visualization of anomalies in spectral imagery
NASA Astrophysics Data System (ADS)
Basener, William F.; Messinger, David W.
2009-05-01
Anomaly detection algorithms applied to hyperspectral imagery are able to reliably identify man-made objects from a natural environment based on statistical/geometric likelihood. The process is more robust than target identification, which requires precise prior knowledge of the object of interest, but has an inherently higher false alarm rate. Standard anomaly detection algorithms measure deviation of pixel spectra from a parametric model (either statistical or linear mixing) estimating the image background. The topological anomaly detector (TAD) creates a fully non-parametric, graph theory-based, topological model of the image background and measures deviation from this background using codensity. In this paper we present a large-scale comparative test of TAD against 80+ targets in four full HYDICE images using the entire canonical target set for generation of ROC curves. TAD will be compared against several statistics-based detectors including local RX and subspace RX. Even a perfect anomaly detection algorithm would have a high practical false alarm rate in most scenes simply because the user/analyst is not interested in every anomalous object. To assist the analyst in identifying and sorting objects of interest, we investigate coloring of the anomalies with principal components projections using statistics computed from the anomalies. This gives a very useful colorization of anomalies in which objects of similar material tend to have the same color, enabling an analyst to quickly sort and identify anomalies of highest interest.
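The RX baseline named in this abstract reduces to a Mahalanobis distance from background statistics; the following is a minimal global-RX sketch on a synthetic cube with implanted anomalies (not the TAD algorithm, and all scene parameters are made up).

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic hyperspectral cube: 100x100 pixels, 50 bands of correlated background clutter.
rows, cols, bands = 100, 100, 50
mixing = rng.normal(0.0, 1.0, (bands, bands))
background = rng.normal(0.0, 1.0, (rows * cols, bands)) @ mixing

# Implant a few "man-made" anomalies with a distinct spectral signature.
target = 3.0 * rng.normal(0.0, 1.0, bands)
anomaly_idx = rng.choice(rows * cols, size=20, replace=False)
cube = background.copy()
cube[anomaly_idx] += target

# Global RX: Mahalanobis distance of each pixel from the scene mean/covariance.
mu = cube.mean(axis=0)
cov = np.cov(cube, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(bands))   # regularize for numerical stability
centered = cube - mu
rx_score = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)

# Detection check: how many implanted anomalies fall in the top 1% of RX scores?
threshold = np.quantile(rx_score, 0.99)
hits = np.intersect1d(np.where(rx_score > threshold)[0], anomaly_idx)
print(f"{len(hits)} of {len(anomaly_idx)} implanted anomalies above the 99th-percentile score")
```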
NASA Astrophysics Data System (ADS)
Speck, Thomas; Engel, Andreas; Seifert, Udo
2012-12-01
We study the large deviation function for the entropy production rate in two driven one-dimensional systems: the asymmetric random walk on a discrete lattice and Brownian motion in a continuous periodic potential. We compare two approaches: using the Donsker-Varadhan theory and using the Freidlin-Wentzell theory. We show that the wings of the large deviation function are dominated by a single optimal trajectory: either in the forward direction (positive rate) or in the backward direction (negative rate). The joining of the two branches at zero entropy production implies a non-differentiability and thus the appearance of a ‘kink’. However, around zero entropy production, many trajectories contribute and thus the ‘kink’ is smeared out.
Hoeffding Type Inequalities and their Applications in Statistics and Operations Research
NASA Astrophysics Data System (ADS)
Daras, Tryfon
2007-09-01
Large Deviation theory is the branch of Probability theory that deals with rare events. Sometimes, these events can be described by the sum of random variables deviating from its mean by more than a "normal" amount. A precise calculation of the probabilities of such events turns out to be crucial in a variety of different contexts (e.g. in Probability Theory, Statistics, Operations Research, Statistical Physics, Financial Mathematics, etc.). Recent applications of the theory deal with random walks in random environments, interacting diffusions, heat conduction, and polymer chains [1]. In this paper we prove an inequality of exponential type, namely theorem 2.1, which gives a large deviation upper bound for a specific sequence of r.v.s. Inequalities of this type have many applications in Combinatorics [2]. The inequality generalizes results of this type already proven in the case of symmetric probability measures. As consequences of the inequality we obtain: (a) large deviation upper bounds for exchangeable Bernoulli sequences of random variables, generalizing results proven for independent and identically distributed Bernoulli sequences of r.v.s., and (b) a general form of Bernstein's inequality. We compare the inequality with large deviation results already proven by the author and discuss its advantages. Finally, using the inequality, we solve one of the basic problems of Operations Research (the bin packing problem) in the case of exchangeable r.v.s.
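For comparison with the exchangeable-sequence bounds discussed here, a minimal sketch of the classical Hoeffding inequality for i.i.d. variables bounded in [0, 1], checked against an empirical tail probability; the sample size and threshold are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)

n, t, trials = 200, 0.08, 200_000
p = 0.5                                   # i.i.d. Bernoulli(0.5), values bounded in [0, 1]

# Empirical probability that the sample mean exceeds its expectation by at least t.
means = rng.binomial(n, p, size=trials) / n
empirical = np.mean(means - p >= t)

# Hoeffding's inequality for variables in [0, 1]: P(mean - mu >= t) <= exp(-2 n t^2).
hoeffding = np.exp(-2 * n * t**2)

print(f"empirical tail probability : {empirical:.4f}")
print(f"Hoeffding upper bound      : {hoeffding:.4f}")
```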
Generic dynamical phase transition in one-dimensional bulk-driven lattice gases with exclusion
NASA Astrophysics Data System (ADS)
Lazarescu, Alexandre
2017-06-01
Dynamical phase transitions are crucial features of the fluctuations of statistical systems, corresponding to boundaries between qualitatively different mechanisms of maintaining unlikely values of dynamical observables over long periods of time. They manifest themselves in the form of non-analyticities in the large deviation function of those observables. In this paper, we look at bulk-driven exclusion processes with open boundaries. It is known that the standard asymmetric simple exclusion process exhibits a dynamical phase transition in the large deviations of the current of particles flowing through it. That phase transition has been described thanks to specific calculation methods relying on the model being exactly solvable, but more general methods have also been used to describe the extreme large deviations of that current, far from the phase transition. We extend those methods to a large class of models based on the ASEP, where we add arbitrary spatial inhomogeneities in the rates and short-range potentials between the particles. We show that, as for the regular ASEP, the large deviation function of the current scales differently with the size of the system if one considers very high or very low currents, pointing to the existence of a dynamical phase transition between those two regimes: high current large deviations are extensive in the system size, and the typical states associated to them are Coulomb gases, which are highly correlated; low current large deviations do not depend on the system size, and the typical states associated to them are anti-shocks, consistently with a hydrodynamic behaviour. Finally, we illustrate our results numerically on a simple example, and we interpret the transition in terms of the current pushing beyond its maximal hydrodynamic value, as well as relate it to the appearance of Tracy-Widom distributions in the relaxation statistics of such models.
Conservative relativity principle: Logical ground and analysis of relevant experiments
NASA Astrophysics Data System (ADS)
Kholmetskii, Alexander; Yarman, Tolga; Missevitch, Oleg
2014-05-01
We suggest a new relativity principle, which asserts the impossibility of distinguishing the state of rest from the state of motion at constant velocity of a system, if no work is done on the system in question during its motion. We suggest calling this new rule the "conservative relativity principle" (CRP). In the case of empty space, CRP reduces to the Einstein special relativity principle. We also show that CRP is compatible with the general relativity principle. One important implication of CRP is the dependence of the proper time of a charged particle on the electric potential at its location. In the present paper we consider the relevant experimental facts gathered up to now in which the latter effect can be revealed. We show that in atomic physics the introduction of this effect furnishes a better convergence between theory and experiment than that provided by the standard approach. Finally, we reanalyze the Mössbauer experiments in rotating systems and show that the puzzling deviation of the relative energy shift between emission and absorption lines from the relativistic prediction, obtained recently, can be explained by the CRP.
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated to the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
Doll, J.; Dupuis, P.; Nyquist, P.
2017-02-08
Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures, and at a given swap rate exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous time jump Markov processes. Using the large deviations rate function and associated stochastic control problems we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.
NASA Astrophysics Data System (ADS)
Gao, Dongyang; Zheng, Xiaobing; Li, Jianjun; Hu, Youbo; Xia, Maopeng; Salam, Abdul; Zhang, Peng
2018-03-01
Based on the spontaneous parametric downconversion process, we propose a novel self-calibration radiometer scheme which can self-calibrate the degradation of its own response and ultimately monitor the fluctuation of a target radiation. The monitoring results are independent of the radiometer's degradation and are not linked to the primary standard detector scale. The principle and feasibility of the proposed scheme were verified by observing a bromine-tungsten lamp. A relative standard deviation of 0.39% was obtained for a stable bromine-tungsten lamp. The results show that the proposed scheme is sound in principle. The proposed scheme could make a significant breakthrough in the self-calibration issue on the space platform.
Grover Search and the No-Signaling Principle
NASA Astrophysics Data System (ADS)
Bao, Ning; Bouland, Adam; Jordan, Stephen P.
2016-09-01
Two of the key properties of quantum physics are the no-signaling principle and the Grover search lower bound. That is, despite admitting stronger-than-classical correlations, quantum mechanics does not imply superluminal signaling, and despite a form of exponential parallelism, quantum mechanics does not imply polynomial-time brute force solution of NP-complete problems. Here, we investigate the degree to which these two properties are connected. We examine four classes of deviations from quantum mechanics, for which we draw inspiration from the literature on the black hole information paradox. We show that in these models, the physical resources required to send a superluminal signal scale polynomially with the resources needed to speed up Grover's algorithm. Hence the no-signaling principle is equivalent to the inability to solve NP-hard problems efficiently by brute force within the classes of theories analyzed.
Large-deviation probabilities for correlated Gaussian processes and intermittent dynamical systems
NASA Astrophysics Data System (ADS)
Massah, Mozhdeh; Nicol, Matthew; Kantz, Holger
2018-05-01
In its classical version, the theory of large deviations makes quantitative statements about the probability of outliers when estimating time averages, if time series data are independent and identically distributed. We study large-deviation probabilities (LDPs) for time averages in short- and long-range correlated Gaussian processes and show that long-range correlations lead to subexponential decay of LDPs. A particular deterministic intermittent map can, depending on a control parameter, also generate long-range correlated time series. We illustrate numerically, in agreement with the mathematical literature, that this type of intermittency leads to a power law decay of LDPs. The power law decay holds irrespective of whether the correlation time is finite or infinite, and hence irrespective of whether the central limit theorem applies or not.
Endometrioid adenocarcinoma of the uterus with a minimal deviation invasive pattern.
Landry, D; Mai, K T; Senterman, M K; Perkins, D G; Yazdi, H M; Veinot, J P; Thomas, J
2003-01-01
Minimal deviation adenocarcinoma of endometrioid type is a rare pathological entity. We describe a variant of typical endometrioid adenocarcinoma associated with minimal deviation adenocarcinoma of endometrioid type. One 'pilot' case of minimal deviation adenocarcinoma of endometrioid type associated with typical endometrioid adenocarcinoma was encountered at our institution in 2001. A second case of the same type was received in consultation. We reviewed 168 consecutive hysterectomy specimens diagnosed with 'endometrioid adenocarcinoma' specifically to identify areas of minimal deviation adenocarcinoma of endometrioid type. Immunohistochemistry was done with the following antibodies: MIB1, p53, oestrogen receptor (ER), progesterone receptor (PR), cytokeratin 7 (CK7), cytokeratin 20 (CK20), carcinoembryonic antigen (CEA), and vimentin (VIM). Four additional cases of minimal deviation adenocarcinoma of endometrioid type were identified. All six cases of minimal deviation adenocarcinoma of endometrioid type were associated with superficial endometrioid adenocarcinoma. In two cases with a large amount of minimal deviation adenocarcinoma of endometrioid type, the cervix was involved. The immunoprofile of two representative cases was ER+, PR+, CK7+, CK20-, CEA-, VIM+. MIB1 immunostaining of four cases revealed little proliferative activity of the minimal deviation adenocarcinoma of endometrioid type glandular cells (0-1%) compared with the associated 'typical' endometrioid adenocarcinoma (20-30%). The same four cases showed no p53 immunostaining in minimal deviation adenocarcinoma of endometrioid type compared with a range of positive staining in the associated endometrioid adenocarcinoma. Minimal deviation adenocarcinoma of endometrioid type more often develops as a result of differentiation from typical endometrioid adenocarcinoma than de novo. Due to its deceptively benign microscopic appearance, minimal deviation adenocarcinoma of endometrioid type may be overlooked and may lead to incorrect assessment of tumour depth and pathological stage. There was a tendency for tumours with a large amount of minimal deviation adenocarcinoma of endometrioid type to invade the cervix.
Adaptive Gain-based Stable Power Smoothing of a DFIG
Muljadi, Eduard; Lee, Hyewon; Hwang, Min; ...
2017-11-01
In a power system that has a high wind penetration, the output power fluctuation of a large-scale wind turbine generator (WTG) caused by the varying wind speed increases the maximum frequency deviation, which is an important metric to assess the quality of electricity, because of the reduced system inertia. This paper proposes a stable power-smoothing scheme of a doubly-fed induction generator (DFIG) that can suppress the maximum frequency deviation, particularly for a power system with a high wind penetration. To do this, the proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while guaranteeing the stable operation of a DFIG, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. Here, the simulation results based on the IEEE 14-bus system demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WTG under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.
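A hedged sketch of the adaptive-gain idea described above: the gain of the frequency-deviation loop grows with rotor-speed headroom and with the magnitude of the frequency deviation. The functional form and all constants are illustrative assumptions, not the paper's control law.

```python
import numpy as np

# Illustrative adaptive gain for the frequency-deviation control loop of a DFIG.
# Assumed shape: gain grows with available rotor kinetic energy (rotor speed above its
# minimum) and with the magnitude of the frequency deviation.
W_MIN, W_MAX = 0.7, 1.25          # rotor speed limits (p.u.), assumed
K_BASE, K_FREQ = 20.0, 400.0      # base gain and frequency-deviation weighting, assumed

def adaptive_gain(rotor_speed_pu, freq_dev_hz):
    headroom = np.clip((rotor_speed_pu - W_MIN) / (W_MAX - W_MIN), 0.0, 1.0)
    return K_BASE * headroom * (1.0 + K_FREQ * abs(freq_dev_hz))

def smoothing_power(rotor_speed_pu, freq_dev_hz):
    """Extra power command added on top of the MPPT reference (positive = inject power)."""
    return -adaptive_gain(rotor_speed_pu, freq_dev_hz) * freq_dev_hz

for w in (0.75, 1.0, 1.2):
    for df in (-0.2, -0.05, 0.05):
        print(f"rotor speed {w:.2f} p.u., freq deviation {df:+.2f} Hz -> "
              f"delta P = {smoothing_power(w, df):+.3f} p.u.")
```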
Behavior analysts and cultural analysis: Troubles and issues
Malagodi, E. F.; Jackson, Kevin
1989-01-01
Three strategic suggestions are offered to behavior analysts who are concerned with extending the interests of our discipline into domains traditionally assigned to the social sciences: (1) to expand our world-view perspectives beyond the boundaries commonly accepted by psychologists in general; (2) to build a cultural analytic framework upon the foundations we have developed for the study of individuals; and (3) to study the works of those social scientists whose views are generally compatible with, and complementary to, our own. Sociologist C. Wright Mills' distinction between troubles and issues and anthropologist Marvin Harris's principles of cultural materialism are related to topics raised by these three strategies. The pervasiveness of the “psychocentric” world view within psychology and the social sciences, and throughout our culture at large, is discussed from the points of view of Skinner, Mills, and Harris. It is suggested that a thorough commitment to radical behaviorism, and continuation of interaction between radical behaviorism and cultural materialism, are necessary for maintaining and extending an issues orientation within the discipline of behavior analysis and for guarding against dilutions and subversions of that orientation by “deviation-dampening” contingencies that exist in our profession and in our culture at large. PMID:22478014
Lee, Hyewon; Hwang, Min; Muljadi, Eduard; ...
2017-04-18
In an electric power grid that has a high penetration level of wind, the power fluctuation of a large-scale wind power plant (WPP) caused by varying wind speeds deteriorates the system frequency regulation. This paper proposes a power-smoothing scheme of a doubly-fed induction generator (DFIG) that significantly mitigates the system frequency fluctuation while preventing over-deceleration of the rotor speed. The proposed scheme employs an additional control loop relying on the system frequency deviation that operates in combination with the maximum power point tracking control loop. To improve the power-smoothing capability while preventing over-deceleration of the rotor speed, the gain of the additional loop is modified with the rotor speed and frequency deviation. The gain is set to be high if the rotor speed and/or frequency deviation is large. In conclusion, the simulation results based on the IEEE 14-bus system clearly demonstrate that the proposed scheme significantly lessens the output power fluctuation of a WPP under various scenarios by modifying the gain with the rotor speed and frequency deviation, and thereby it can regulate the frequency deviation within a narrow range.
2012-09-30
Topics include estimation methods for underwater OFDM, two iterative receivers for distributed MIMO-OFDM with large Doppler deviations, and asynchronous multiuser … Distributed multi-input multi-output (MIMO) OFDM is also pursued, where it is shown that the proposed hybrid initialization enables drastically improved receiver performance. The work on two iterative receivers for distributed MIMO-OFDM with large Doppler deviations studies a distributed system with
Siochi, R
2012-06-01
To develop a quality initiative discovery framework using process improvement techniques, software tools and operating principles. Process deviations are entered into a radiotherapy incident reporting database. Supervisors use an in-house Event Analysis System (EASy) to discuss incidents with staff. Major incidents are analyzed with an in-house Fault Tree Analysis (FTA). A meta-analysis is performed using association, text mining, keyword clustering, and differential frequency analysis. A key operating principle encourages the creation of forcing functions via rapid application development. 504 events have been logged this past year. The results of the keyword analysis indicate that the root cause for the top-ranked keywords was miscommunication. This was also the root cause found from association analysis, where 24% of the time that an event involved a physician it also involved a nurse. Differential frequency analysis revealed that sharp peaks at week 27 were followed by 3 major incidents, two of which were dose related. The peak was largely due to the front desk, which caused distractions in other areas. The analysis led to many PI projects, but there is still a major systematic issue with the use of forms. The solution we identified is to implement Smart Forms to perform error checking and interlocking. Our first initiative replaced our daily QA checklist with a form that uses custom validation routines, preventing therapists from proceeding with treatments until out-of-tolerance conditions are corrected. PITSTOP has increased the number of quality initiatives in our department, and we have discovered or confirmed common underlying causes of a variety of seemingly unrelated errors. It has motivated the replacement of all forms with smart forms. © 2012 American Association of Physicists in Medicine.
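The association step described here amounts to counting how often two roles or keywords co-occur in the same incident record; a minimal sketch on synthetic incident data is shown below (EASy and FTA are in-house tools, so only the generic co-occurrence counting is illustrated, and the incident records are made up).

```python
from collections import Counter
from itertools import combinations

# Synthetic incident log: each entry lists the roles/keywords tagged for one event.
incidents = [
    {"physician", "nurse", "scheduling"},
    {"therapist", "front desk"},
    {"physician", "nurse", "documentation"},
    {"physicist", "treatment planning"},
    {"physician", "nurse"},
    {"therapist", "front desk", "distraction"},
    {"physician", "documentation"},
    {"nurse", "scheduling"},
]

single = Counter()
pair = Counter()
for event in incidents:
    single.update(event)
    pair.update(frozenset(p) for p in combinations(sorted(event), 2))

# Conditional co-occurrence: P(keyword B present | keyword A present).
def co_occurrence(a, b):
    return pair[frozenset((a, b))] / single[a] if single[a] else 0.0

print(f"P(nurse | physician)      = {co_occurrence('physician', 'nurse'):.0%}")
print(f"P(front desk | therapist) = {co_occurrence('therapist', 'front desk'):.0%}")
```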
Propagation of rotational Risley-prism-array-based Gaussian beams in turbulent atmosphere
NASA Astrophysics Data System (ADS)
Chen, Feng; Ma, Haotong; Dong, Li; Ren, Ge; Qi, Bo; Tan, Yufeng
2018-03-01
Limited by the size and weight of the prisms and the optical assembly, the rotational Risley-prism-array system is a simple but effective way to realize high-power, high-beam-quality deflected laser output. In this paper, the propagation of a rotational Risley-prism-array-based Gaussian beam array in atmospheric turbulence is studied in detail. An analytical expression for the average intensity distribution at the receiving plane is derived based on a nonparaxial ray-tracing method and the extended Huygens-Fresnel principle. Power in the diffraction-limited bucket is chosen to evaluate beam quality. The effects of deviation angle, propagation distance and turbulence strength on beam quality are studied in detail by quantitative simulation. It is found that, as the propagation distance increases, the intensity distribution in weak turbulence gradually evolves from a multiple-petal-like shape into a pattern that contains one main lobe in the center with multiple side lobes. The beam quality of a rotational Risley-prism-array-based Gaussian beam array with a lower deviation angle is better than that of its counterpart with a higher deviation angle when propagating in weak and medium turbulence (i.e. Cn^2 < 10^-13 m^-2/3), and the beam quality of higher-deviation-angle arrays degrades faster as the turbulence becomes stronger. In the case of propagation in strong turbulence, long propagation distance (i.e. z > 10 km) and deviation angle have no influence on beam quality.
Isotropy of low redshift type Ia supernovae: A Bayesian analysis
NASA Astrophysics Data System (ADS)
Andrade, U.; Bengaly, C. A. P.; Alcaniz, J. S.; Santos, B.
2018-04-01
The standard cosmology strongly relies upon the cosmological principle, which consists of the hypotheses of large-scale isotropy and homogeneity of the Universe. Testing these assumptions is, therefore, crucial to determining whether there are deviations from the standard cosmological paradigm. In this paper, we use the latest type Ia supernova compilations, namely JLA and Union2.1, to test cosmological isotropy at low redshift (z < 0.1). This is performed through a Bayesian model selection analysis, in which we compare the standard, isotropic model with another one including a dipole correction due to peculiar velocities. The full covariance matrix of SN distance uncertainties is taken into account. We find that the JLA sample favors the standard model, whilst the Union2.1 results are inconclusive, yet the constraints from both compilations are in agreement with previous analyses. We conclude that there is no evidence for a dipole anisotropy from nearby supernova compilations, albeit this test should be greatly improved with the much-improved data sets from upcoming cosmological surveys.
Analysis of the correlation dimension for inertial particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustavsson, Kristian; Department of Physics, Göteborg University, 41296 Gothenburg; Mehlig, Bernhard
2015-07-15
We obtain an implicit equation for the correlation dimension which describes clustering of inertial particles in a complex flow onto a fractal measure. Our general equation involves a propagator of a nonlinear stochastic process in which the velocity gradient of the fluid appears as additive noise. When the long-time limit of the propagator is considered our equation reduces to an existing large-deviation formalism from which it is difficult to extract concrete results. In the short-time limit, however, our equation reduces to a solvability condition on a partial differential equation. In the case where the inertial particles are much denser than the fluid, we show how this approach leads to a perturbative expansion of the correlation dimension, for which the coefficients can be obtained exactly and in principle to any order. We derive the perturbation series for the correlation dimension of inertial particles suspended in three-dimensional spatially smooth random flows with white-noise time correlations, obtaining the first 33 non-zero coefficients exactly.
NASA Astrophysics Data System (ADS)
Barnea, A. Ronny; Cheshnovsky, Ori; Even, Uzi
2018-02-01
Interference experiments have been paramount in our understanding of quantum mechanics and are frequently the basis of testing the superposition principle in the framework of quantum theory. In recent years, several studies have challenged the nature of wave-function interference from the perspective of Born's rule—namely, the manifestation of so-called high-order interference terms in a superposition generated by diffraction of the wave functions. Here we present an experimental test of multipath interference in the diffraction of metastable helium atoms, with large-number counting statistics, comparable to photon-based experiments. We use a variation of the original triple-slit experiment and accurate single-event counting techniques to provide a new experimental bound of 2.9 × 10^-5 on the statistical deviation from the commonly approximated null third-order interference term in Born's rule for matter waves. Our value is on the order of the maximal contribution predicted for multipath trajectories by Feynman path integrals.
NASA Astrophysics Data System (ADS)
Ianson, I. K.
1991-03-01
Research in the field of high-temperature superconductors based on methods of tunneling and microcontact spectroscopy is reviewed in a systematic manner. The theoretical principles of the methods are presented, and various types of contacts are described and classified. Attention is given to deviations of the measured volt-ampere characteristics from those predicted by simple theoretical models and those observed for conventional superconductors. Results of measurements of the energy gap and fine structure of volt-ampere characteristic derivatives are presented for La(2-x)Sr(x)CuO4.
Phenomenology of small violations of Fermi and Bose statistics
NASA Astrophysics Data System (ADS)
Greenberg, O. W.; Mohapatra, Rabindra N.
1989-04-01
In a recent paper, we proposed a ``paronic'' field-theory framework for possible small deviations from the Pauli exclusion principle. This theory cannot be represented in a positive-metric (Hilbert) space. Nonetheless, the issue of possible small violations of the exclusion principle can be addressed in the framework of quantum mechanics, without being connected with a local quantum field theory. In this paper, we discuss the phenomenology of small violations of both Fermi and Bose statistics. We consider the implications of such violations in atomic, nuclear, particle, and condensed-matter physics and in astrophysics and cosmology. We also discuss experiments that can detect small violations of Fermi and Bose statistics or place stringent bounds on their validity.
Research on Dust Concentration Measurement Technique Based on the Theory of Ultrasonic Attenuation
NASA Astrophysics Data System (ADS)
Zhang, Yan; Lou, Wenzhong; Liao, Maohao
2018-03-01
In this paper, a method for measuring dust concentration is proposed, based on changes in the ultrasonic signal of a MEMS piezoelectric ultrasonic transducer. The principle is that the ultrasonic intensity is attenuated by the propagation medium and with propagation distance, and the attenuation coefficient is affected by the dust concentration. By detecting the changes of the ultrasound in the dust, the dust concentration is calculated from the attenuation-concentration model; the EACH theoretical model is based on this principle. The experimental results show that the MEMS piezoelectric ultrasonic transducer can be used to detect dust concentrations of 100-900 g/m3, with a deviation between theory and experiment smaller than 10.4%.
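The attenuation-concentration idea can be illustrated with a simple Beer-Lambert-style model. This is a generic sketch under the assumption that the attenuation coefficient grows linearly with dust concentration; the abstract does not state the exact form of the EACH model, and the coefficients below are purely illustrative.

```python
import math

def received_intensity(i0, alpha0, k, concentration, distance):
    """Ultrasonic intensity after propagating `distance` through dust of given concentration.
    alpha0: attenuation of the clean medium; k: assumed linear concentration sensitivity."""
    return i0 * math.exp(-(alpha0 + k * concentration) * distance)

def concentration_from_intensity(i0, i, alpha0, k, distance):
    """Invert the assumed attenuation-concentration model."""
    return (math.log(i0 / i) / distance - alpha0) / k

i0, alpha0, k, d = 1.0, 0.05, 2.0e-3, 0.5                               # illustrative values
i = received_intensity(i0, alpha0, k, concentration=400.0, distance=d)  # 400 g/m^3 of dust
print(concentration_from_intensity(i0, i, alpha0, k, d))                # recovers ~400
```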
NASA Astrophysics Data System (ADS)
Klein, K. G.
2016-12-01
Weakly collisional plasmas, of the type typically observed in the solar wind, are commonly in a state other than local thermodynamic equilibrium. This deviation from a Maxwellian velocity distribution can be characterized by pressure anisotropies, disjoint beams streaming at differing speeds, leptokurtic distributions at large energies, and other non-thermal features. As these features may be artifacts of dynamic processes, including the acceleration and expansion of the solar wind, and as the free energy contained in these features can drive kinetic micro-instabilities, accurate measurement and modeling of these features is essential for characterizing the solar wind. After a review of these features, a technique is presented for the efficient calculation of kinetic instabilities associated with a general, non-Maxwellian plasma. As a proof of principle, this technique is applied to bi-Maxwellian systems for which kinetic instability thresholds are known, focusing on parameter scans including beams and drifting heavy minor ions. The application of this technique to fits of velocity distribution functions from current, forthcoming, and proposed missions including WIND, DSCOVR, Solar Probe Plus, and THOR, as well as to the underlying measured distribution functions, is discussed. Particular attention is paid to the effects of instrument pointing and integration time, as well as to potential deviations between instabilities associated with the Maxwellian fits and those associated with the observed, potentially non-Maxwellian, velocity distribution. Such application may further illuminate the role instabilities play in the evolution of the solar wind.
Comparison of different methods for the in situ measurement of forest litter moisture content
NASA Astrophysics Data System (ADS)
Schunk, C.; Ruth, B.; Leuchner, M.; Wastl, C.; Menzel, A.
2015-06-01
Dead fine fuel (e.g. litter) moisture content is an important parameter for both forest fire and ecological applications as it is related to ignitability, fire behavior and soil respiration. However, the comprehensive literature review in this paper shows that there is no easy-to-use method for automated measurements available. This study investigates the applicability of four different sensor types (permittivity and electrical resistance measuring principles) for this measurement. Comparisons were made to manual gravimetric reference measurements carried out almost daily for one fire season, and overall agreement was good (highly significant correlations with 0.792 ≤ r ≤ 0.947). Standard deviations within sensor types were linearly correlated to daily sensor mean values; however, above a certain threshold they became irregular, which may be linked to exceedance of the working ranges. Thus, measurements with irregular standard deviations were considered unusable, and calibrations of all individual sensors were compared for usable periods. A large drift in the relationship between sensor raw values and litter moisture became obvious from one drought period to the next. This drift may be related to installation effects or to settling and decomposition of the litter layer throughout the fire season. Because of the drift and the in situ calibration necessary, it cannot be recommended to use the methods presented here for monitoring purposes. However, they may be interesting for scientific studies when some manual fuel moisture measurements are made anyway. Additionally, a number of potential methodological improvements are suggested.
Convex hulls of random walks in higher dimensions: A large-deviation study
NASA Astrophysics Data System (ADS)
Schawe, Hendrik; Hartmann, Alexander K.; Majumdar, Satya N.
2017-12-01
The distributions of the hypervolume V and the surface ∂V of convex hulls of (multiple) random walks in higher dimensions are determined numerically, including probabilities far smaller than P = 10^-1000, in order to estimate large-deviation properties. For arbitrary dimensions and large walk lengths T, we suggest a scaling behavior of the distribution with the walk length T similar to the two-dimensional case, as well as a form for the behavior of the distributions in the tails. We underpin both with numerical data in d = 3 and d = 4 dimensions. Further, we confirm the analytically known means of those distributions and calculate their variances for large T.
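A direct (non-large-deviation) version of the measured quantities is easy to reproduce: the sketch below samples a single 3-d random walk and computes the hypervolume and surface of its convex hull with SciPy. Reaching the far tails studied in the paper requires importance-sampling techniques that are beyond this snippet.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)

def random_walk(T, d=3):
    """Random walk of T Gaussian steps in d dimensions, starting at the origin."""
    steps = rng.standard_normal((T, d))
    return np.vstack([np.zeros(d), np.cumsum(steps, axis=0)])

walk = random_walk(T=1000, d=3)
hull = ConvexHull(walk)
print("hull volume V:", hull.volume)   # d-dimensional hypervolume
print("hull surface dV:", hull.area)   # (d-1)-dimensional surface measure
```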
Work fluctuations for a Brownian particle between two thermostats
NASA Astrophysics Data System (ADS)
Visco, Paolo
2006-06-01
We explicitly determine the large deviation function of the energy flow of a Brownian particle coupled to two heat baths at different temperatures. This toy model, initially introduced by Derrida and Brunet (2005, Einstein aujourd'hui (Les Ulis: EDP Sciences)), not only allows us to sort out the influence of initial conditions on large deviation functions but also allows us to pinpoint various restrictions bearing upon the range of validity of the Fluctuation Relation.
Continuous quantum measurements and the action uncertainty principle
NASA Astrophysics Data System (ADS)
Mensky, Michael B.
1992-09-01
The path-integral approach to quantum theory of continuous measurements has been developed in preceding works of the author. According to this approach the measurement amplitude determining probabilities of different outputs of the measurement can be evaluated in the form of a restricted path integral (a path integral “in finite limits”). With the help of the measurement amplitude, the maximum deviation of measurement outputs from the classical one can be easily determined. The aim of the present paper is to express this variance in the simpler and more transparent form of a specific uncertainty principle (called the action uncertainty principle, AUP). The simplest (but weak) form of the AUP is δS ≳ ℏ, where S is the action functional. It can be applied for a simple derivation of the Bohr-Rosenfeld inequality for the measurability of the gravitational field. A stronger (and more widely applicable) form of the AUP (for ideal measurements performed in the quantum regime) is |∫_{t'}^{t''} (δS[q]/δq(t)) Δq(t) dt| ≃ ℏ, where the paths [q] and [Δq] stand respectively for the measurement output and the measurement error. It can also be presented in the symbolic form Δ(Equation) Δ(Path) ≃ ℏ. This means that the deviation of the observed (measured) motion from that obeying the classical equation of motion is reciprocally proportional to the uncertainty in the path (the latter uncertainty resulting from the measurement error). The consequence of the AUP is that improving the measurement precision beyond the threshold of the quantum regime leads to decreasing information resulting from the measurement.
Deviations from Newton's law in supersymmetric large extra dimensions
NASA Astrophysics Data System (ADS)
Callin, P.; Burgess, C. P.
2006-09-01
Deviations from Newton's inverse-squared law at the micron length scale are smoking-gun signals for models containing supersymmetric large extra dimensions (SLEDs), which have been proposed as approaches for resolving the cosmological constant problem. Just like their non-supersymmetric counterparts, SLED models predict gravity to deviate from the inverse-square law because of the advent of new dimensions at sub-millimeter scales. However SLED models differ from their non-supersymmetric counterparts in three important ways: (i) the size of the extra dimensions is fixed by the observed value of the dark energy density, making it impossible to shorten the range over which new deviations from Newton's law must be seen; (ii) supersymmetry predicts there to be more fields in the extra dimensions than just gravity, implying different types of couplings to matter and the possibility of repulsive as well as attractive interactions; and (iii) the same mechanism which is purported to keep the cosmological constant naturally small also keeps the extra-dimensional moduli effectively massless, leading to deviations from general relativity in the far infrared of the scalar-tensor form. We here explore the deviations from Newton's law which are predicted over micron distances, and show the ways in which they differ and resemble those in the non-supersymmetric case.
A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY
Large-scale laboratory-and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT=[experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
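The truncated sentence above presumably defines the predicted relative standard deviation via the Horwitz equation; under that assumption (not visible in the truncated text), the HORRAT computation is a one-liner. The acceptance range in the final comment is the conventional rule of thumb, not taken from this abstract.

```python
import math

def horwitz_prsd_percent(mass_fraction):
    """Horwitz-predicted among-laboratories RSD (%) for an analyte at the given mass fraction."""
    return 2.0 ** (1.0 - 0.5 * math.log10(mass_fraction))

def horrat(found_rsd_percent, mass_fraction):
    """HORRAT = experimentally found among-laboratories RSD / predicted RSD."""
    return found_rsd_percent / horwitz_prsd_percent(mass_fraction)

# Example: analyte present at 1 mg/kg (mass fraction 1e-6) with a found RSD_R of 22%.
print(horrat(found_rsd_percent=22.0, mass_fraction=1e-6))  # ~1.4; roughly 0.5-2 is typically acceptable
```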
Chimluang, Janya; Thanasilp, Sureeporn; Akkayagorn, Lanchasak; Upasen, Ratchaneekorn; Pudtong, Noppamat; Tantitrakul, Wilailuck
2017-12-01
To evaluate the effect of an intervention based on basic Buddhist principles on the spiritual well-being of patients with terminal cancer. This quasi-experimental research study had pre- and post-test control groups. The experimental group received conventional care and an intervention based on basic Buddhist principles for three consecutive days, including seven activities based on precept activities, concentration activities and wisdom activities. The control group received conventional care alone. Forty-eight patients participated in this study: 23 in the experimental group and 25 in the control group. Their mean age was 53 (standard deviation 10) years. The spiritual well-being of participants in the experimental group was significantly higher than that of participants in the control group at the second post-test (P < 0.05). An intervention based on basic Buddhist principles improved the spiritual well-being of patients with terminal cancer. This result supports the beneficial effects of implementing this type of intervention for patients with terminal cancer. Copyright © 2017 Elsevier Ltd. All rights reserved.
The role of strength defects in shaping impact crater planforms
NASA Astrophysics Data System (ADS)
Watters, W. A.; Geiger, L. M.; Fendrock, M.; Gibson, R.; Hundal, C. B.
2017-04-01
High-resolution imagery and digital elevation models (DEMs) were used to measure the planimetric shapes of well-preserved impact craters. These measurements were used to characterize the size-dependent scaling of the departure from circular symmetry, which provides useful insights into the processes of crater growth and modification. For example, we characterized the dependence of the standard deviation of radius (σ_R) on crater diameter (D) as σ_R ∼ D^m. For complex craters on the Moon and Mars, m ranges from 0.9 to 1.2 among strong and weak target materials. For the martian simple craters in our data set, m varies from 0.5 to 0.8. The value of m tends toward larger values in weak materials and modified craters, and toward smaller values in relatively unmodified craters as well as craters in high-strength targets, such as young lava plains. We hypothesize that m ≈ 1 for planforms shaped by modification processes (slumping and collapse), whereas m tends toward ∼ 1/2 for planforms shaped by an excavation flow that was influenced by strength anisotropies. Additional morphometric parameters were computed to characterize the following planform properties: the planform aspect ratio or ellipticity, the deviation from a fitted ellipse, and the deviation from a convex shape. We also measured the distribution of crater shapes using Fourier decomposition of the planform, finding a similar distribution for simple and complex craters. By comparing the strength of small and large circular harmonics, we confirmed that lunar and martian complex craters are more polygonal at small sizes. Finally, we have used physical and geometrical principles to motivate scaling arguments and simple Monte Carlo models for generating synthetic planforms, which depend on a characteristic length scale of target strength defects. One of these models can be used to generate populations of synthetic planforms which are very similar to the measured population of well-preserved simple craters on Mars.
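The planform statistics described above can be illustrated on a digitized rim trace r(θ). The sketch below is a generic implementation, not the authors' pipeline: it computes the standard deviation of radius σ_R and the normalized low-order Fourier harmonics of the planform for a toy, slightly hexagonal crater rim.

```python
import numpy as np

def planform_stats(theta, r, n_max=8):
    """Standard deviation of radius and normalized Fourier harmonic amplitudes of a rim trace r(theta)."""
    r = np.asarray(r, dtype=float)
    r_mean = r.mean()
    sigma_r = r.std()
    harmonics = {}
    for n in range(2, n_max + 1):                 # n = 2 captures ellipticity, n >= 3 polygonality
        a = 2.0 * np.mean(r * np.cos(n * theta))
        b = 2.0 * np.mean(r * np.sin(n * theta))
        harmonics[n] = np.hypot(a, b) / r_mean
    return sigma_r, harmonics

theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
r = 1000.0 * (1.0 + 0.03 * np.cos(6.0 * theta)) + np.random.default_rng(1).normal(0.0, 5.0, theta.size)
sigma_r, harmonics = planform_stats(theta, r)
print(sigma_r, harmonics[6])   # the n = 6 harmonic stands out for this hexagonal example
```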
Mi, Jin-Rui; Ma, Xiang; Zhang, Ya-Juan; Wang, Yi; Wen, Ya-Dong; Zhao, Long-Lian; Li, Jun-Hui; Zhang, Lu-Da
2011-04-01
The present paper builds a model based on the Monte Carlo method for the projection of tobacco blends. This model is made up of two parts: the projection points of the tobacco materials, whose coordinates are calculated by means of the PPF (projection based on principal component and Fisher criterion) projection method applied to the tobacco near-infrared spectra; and the projection point of the tobacco blend, which is produced as a linear additive combination of the projection-point coordinates of the tobacco materials. In order to analyze how the projection points deviate from their initial levels, the Monte Carlo method is introduced to simulate the differences and changes of the raw-material projections. The results indicate that there are two major factors affecting the relative deviation: when the highest proportion of a tobacco material in the blend is too high, the deviation cannot be kept under control; and when the quantity of materials is too small, the deviation likewise cannot be controlled. This conclusion is close to the principle of actual formula design, namely to use more materials, each at a lower proportion. Finally, the paper derives theoretical upper limits on the proportions for different quantities of materials. It also has important reference value for other blended agricultural products.
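The Monte Carlo idea can be sketched as follows. This is an illustrative reconstruction, not the authors' PPF-based code: the 2-d material coordinates are stand-ins for the PPF projections and the Gaussian perturbation model is assumed.

```python
import numpy as np

rng = np.random.default_rng(42)

def blend_point(material_points, proportions):
    """Projection point of the blend as a proportion-weighted linear combination of material points."""
    proportions = np.asarray(proportions) / np.sum(proportions)
    return proportions @ material_points

def relative_deviation(material_points, proportions, noise_sd=0.1, n_trials=10_000):
    """Monte Carlo estimate of the blend point's relative deviation when material points fluctuate."""
    nominal = blend_point(material_points, proportions)
    deviations = []
    for _ in range(n_trials):
        perturbed = material_points + rng.normal(0.0, noise_sd, material_points.shape)
        deviations.append(np.linalg.norm(blend_point(perturbed, proportions) - nominal))
    return np.mean(deviations) / np.linalg.norm(nominal)

materials = rng.normal(5.0, 1.0, size=(10, 2))    # 10 materials projected to 2-d (stand-in for PPF)
even = np.full(10, 0.1)                           # many materials, each at a low proportion
skewed = np.array([0.55] + [0.05] * 9)            # one dominant material
print(relative_deviation(materials, even), relative_deviation(materials, skewed))
```

With these toy settings the dominant-material case shows the larger relative deviation, since the blend-point variance scales with the sum of squared proportions, consistent with the qualitative conclusion of the abstract.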
NASA Astrophysics Data System (ADS)
Leal-Junior, Arnaldo G.; Frizera, Anselmo; Marques, Carlos; Sánchez, Manuel R. A.; Botelho, Thomaz R.; Segatto, Marcelo V.; Pontes, Maria José
2018-03-01
This paper presents the development of a polymer optical fiber (POF) strain gauge based on the light coupling principle, in which power attenuation is created by the misalignment between two POFs. The misalignment, in this case, is proportional to the strain on the structure to which the fibers are attached. This principle has the advantages of low cost, ease of implementation, temperature insensitivity, immunity to electromagnetic fields and simplicity of sensor interrogation and signal processing. Such advantages make the proposed solution an interesting alternative to electronic strain gauges. For this reason, an analytical model for the POF strain gauge is proposed and validated. Furthermore, the proposed POF sensor is applied on an active orthosis for knee rehabilitation exercises through flexion/extension cycles. The controller of the orthosis provides 10 different levels of robotic assistance for the flexion/extension movement. The POF strain gauge is tested at each of these levels. Results show good correlation between the optical and electronic strain gauges, with a root mean squared deviation (RMSD) of 1.87 Nm when all cycles are analyzed, which represents a deviation of less than 8%. For this application, the proposed sensor presented higher stability than the electronic one, which can provide advantages for the rehabilitation exercises and for the inner controller of the device.
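The RMSD figure quoted above is a standard comparison metric. For reference, a minimal computation between two synchronously sampled torque signals looks like the sketch below; the variable names and signals are illustrative, and expressing the percentage relative to the reference signal's range is one common convention, not necessarily the one used in the paper.

```python
import numpy as np

def rmsd(optical, electronic):
    """Root mean squared deviation between two equally sampled signals."""
    optical, electronic = np.asarray(optical), np.asarray(electronic)
    return np.sqrt(np.mean((optical - electronic) ** 2))

def rmsd_percent(optical, electronic):
    """RMSD expressed as a percentage of the reference signal's peak-to-peak range."""
    electronic = np.asarray(electronic)
    return 100.0 * rmsd(optical, electronic) / (electronic.max() - electronic.min())

t = np.linspace(0.0, 1.0, 500)
electronic = 25.0 * np.sin(2.0 * np.pi * t)   # reference strain-gauge torque signal (N m), toy data
optical = electronic + np.random.default_rng(0).normal(0.0, 1.5, t.size)
print(rmsd(optical, electronic), rmsd_percent(optical, electronic))
```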
Analysis of form deviation in non-isothermal glass molding
NASA Astrophysics Data System (ADS)
Kreilkamp, H.; Grunwald, T.; Dambon, O.; Klocke, F.
2018-02-01
Especially in the market of sensors, LED lighting and medical technologies, there is a growing demand for precise yet low-cost glass optics. This demand poses a major challenge for glass manufacturers, who are confronted with the trend towards ever-higher levels of precision combined with immense pressure on market prices. Since current manufacturing technologies, especially grinding and polishing as well as Precision Glass Molding (PGM), are not able to achieve the desired production costs, glass manufacturers are looking for alternative technologies. Non-isothermal Glass Molding (NGM) has been shown to have a big potential for low-cost mass manufacturing of complex glass optics. However, the biggest drawback of this technology at the moment is the limited accuracy of the manufactured glass optics. This research addresses the specific challenges of non-isothermal glass molding with respect to form deviation of molded glass optics. Based on empirical models, the influencing factors on form deviation, in particular form accuracy, waviness and surface roughness, will be discussed. A comparison with the traditional isothermal glass molding process (PGM) will point out the specific challenges of non-isothermal process conditions. Furthermore, the underlying physical principles leading to the formation of form deviations will be analyzed in detail with the help of numerical simulation. In this way, this research contributes to a better understanding of form deviations in non-isothermal glass molding and is an important step towards new applications demanding precise yet low-cost glass optics.
Toddle temporal-spatial deviation index: Assessment of pediatric gait.
Cahill-Rowley, Katelyn; Rose, Jessica
2016-09-01
This research aims to develop a gait index, for use in the pediatric clinic as well as in research, that quantifies gait deviation in 18-22-month-old children: the Toddle Temporal-spatial Deviation Index (Toddle TDI). 81 preterm children (≤32 weeks) with very low birth weights (≤1500 g) and 42 full-term TD children aged 18-22 months, adjusted for prematurity, walked on a pressure-sensitive mat. Preterm children were administered the Bayley Scales of Infant Development-3rd Edition (BSID-III). Principal component analysis of the TD children's temporal-spatial gait parameters quantified raw gait deviation from typical, normalized to an average (standard deviation) Toddle TDI score of 100 (10), which was calculated for all participants. The Toddle TDI was significantly lower for preterm versus TD children (86 vs. 100, p=0.003), and lower in preterm children with <85 vs. ≥85 BSID-III motor composite scores (66 vs. 89, p=0.004). The Toddle TDI, which by design plateaus at the typical average (BSID-III gross motor 8-12), correlated with BSID-III gross motor (r=0.60, p<0.001) and not fine motor (r=0.08, p=0.65) scores in preterm children with gross motor scores ≤8, suggesting sensitivity to gross motor development. The Toddle TDI demonstrated sensitivity and specificity to gross motor function in very-low-birth-weight preterm children aged 18-22 months, and has potential as an easily administered, revealing clinical gait metric. Copyright © 2016 Elsevier B.V. All rights reserved.
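A gait deviation index of this general type (z-score against the typically developing reference group, project onto principal components, rescale so the TD mean is 100 with SD 10) can be sketched as below. The exact Toddle TDI construction (parameter set, number of retained components) is not given in the abstract, so treat this as a schematic analogue with synthetic data rather than the published algorithm.

```python
import numpy as np

def fit_reference(td_params, n_components=3):
    """Fit z-scoring and a PCA basis on the typically developing (TD) reference group."""
    mu, sd = td_params.mean(axis=0), td_params.std(axis=0)
    z = (td_params - mu) / sd
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    basis = vt[:n_components]                   # principal directions of the TD gait parameters
    raw = np.linalg.norm(z @ basis.T, axis=1)   # raw deviation scores of the TD group
    return mu, sd, basis, raw.mean(), raw.std()

def deviation_index(params, mu, sd, basis, raw_mean, raw_sd):
    """Scale raw deviation so the TD group has mean 100 and SD 10; lower scores mean more deviant gait."""
    raw = np.linalg.norm(((params - mu) / sd) @ basis.T, axis=1)
    return 100.0 - 10.0 * (raw - raw_mean) / raw_sd

rng = np.random.default_rng(3)
td = rng.normal(0.0, 1.0, size=(42, 8))        # 42 TD children x 8 temporal-spatial parameters (toy data)
preterm = rng.normal(0.5, 1.4, size=(81, 8))   # toy preterm group, shifted and more variable
mu, sd, basis, m, s = fit_reference(td)
print(deviation_index(td, mu, sd, basis, m, s).mean())        # ~100 by construction
print(deviation_index(preterm, mu, sd, basis, m, s).mean())   # lower, i.e. more deviant
```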
Detailed gravity anomalies from GEOS-3 satellite altimetry data
NASA Technical Reports Server (NTRS)
Gopalapillai, G. S.; Mourad, A. G.
1978-01-01
A technique for deriving mean gravity anomalies from dense altimetry data was developed. A combination of both deterministic and statistical techniques was used. The basic mathematical model was based on the Stokes equation, which describes the analytical relationship between mean gravity anomalies and the geoid undulation at a point; this undulation is a linear function of the altimetry data at that point. The overdetermined problem resulting from the excessive altimetry data available was solved using least-squares principles. These principles enable the simultaneous estimation of the associated standard deviations reflecting the internal consistency, based on the accuracy estimates provided for the altimetry data as well as for the terrestrial anomaly data. Several test computations were made of the anomalies and their accuracy estimates using GEOS-3 data.
Muon imaging: Principles, technologies and applications
NASA Astrophysics Data System (ADS)
Procureur, S.
2018-01-01
During the last 15 years muon-based imaging, or muography, has experienced an impressive development and has found applications in many different fields requiring penetrating probes. Structures of very different sizes and densities can be imaged thanks to the various implementations it offers: either in absorption/transmission or in deviation modes, not to mention the muon metrology for monitoring. The goal of this paper is to give an overview of the main principles of the muography, as well as the technologies employed nowadays and its current and potential applications. Considering the amount of studies dedicated to muography and the number of projects conducted in the last decade, this review focuses on the fields which are the most representative of the muography capabilities.
First-principles studies of PETN molecular crystal vibrational frequencies under high pressure
NASA Astrophysics Data System (ADS)
Perger, Warren; Zhao, Jijun
2005-07-01
The vibrational frequencies of the PETN molecular crystal were calculated using the first-principles CRYSTAL03 program which employs an all-electron LCAO approach and calculates analytic first derivatives of the total energy with respect to atomic displacements. Numerical second derivatives were used to enable calculation of the vibrational frequencies at ambient pressure and under various states of compression. Three different density functionals, B3LYP, PW91, and X3LYP were used to examine the effect of the exchange-correlation functional on the vibrational frequencies. The pressure-induced shift of the vibrational frequencies will be presented and compared with experiment. The average deviation with experimental results is shown to be on the order of 2-3%, depending on the functional used.
NASA Astrophysics Data System (ADS)
Baksht, E. Kh.; Buranchenko, A. G.; Kozyrev, A. V.; Tarasenko, V. F.
2017-12-01
An analysis of the fulfillment of the similarity law pτ = f(E/p) under the conditions of a pulsed discharge triggered in a gas diode with a highly inhomogeneous field at incident-wave voltages >100 kV is performed. It is shown that in this case, within the pressure range 1-12 atm, the deviations from the similarity principles E/p(pτ) and U_br(pd) are due to the nonconservation of proportions in the gas-filled diode geometry. Using a collector, a beam of runaway electrons is registered for the first time behind the anode foil in nitrogen at pressures from 5 to 12 atm.
A framework for the direct evaluation of large deviations in non-Markovian processes
NASA Astrophysics Data System (ADS)
Cavallaro, Massimo; Harris, Rosemary J.
2016-11-01
We propose a general framework to simulate stochastic trajectories with arbitrarily long memory dependence and to efficiently evaluate large deviation functions associated with time-extensive observables. This extends the ‘cloning’ procedure of Giardiná et al (2006 Phys. Rev. Lett. 96 120603) to non-Markovian systems. We demonstrate the validity of this method by testing non-Markovian variants of an ion-channel model and the totally asymmetric exclusion process, recovering results obtainable by other means.
Efficient characterisation of large deviations using population dynamics
NASA Astrophysics Data System (ADS)
Brewer, Tobias; Clark, Stephen R.; Bradford, Russell; Jack, Robert L.
2018-05-01
We consider population dynamics as implemented by the cloning algorithm for analysis of large deviations of time-averaged quantities. We use the simple symmetric exclusion process with periodic boundary conditions as a prototypical example and investigate the convergence of the results with respect to the algorithmic parameters, focussing on the dynamical phase transition between homogeneous and inhomogeneous states, where convergence is relatively difficult to achieve. We discuss how the performance of the algorithm can be optimised, and how it can be efficiently exploited on parallel computing platforms.
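For readers unfamiliar with the cloning (population dynamics) approach referred to in the last two abstracts, the heavily simplified sketch below estimates the scaled cumulant generating function ψ(s) of a time-averaged observable for a small discrete-time Markov chain. It omits all the refinements discussed in these papers (continuous time, non-Markovian memory, parallelization), and the chain and observable are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(7)

def scgf_cloning(P, obs, s, n_clones=1000, n_steps=1000):
    """Estimate psi(s) = lim (1/T) log E[exp(s * sum_t obs(x_t, x_{t+1}))] by cloning/resampling."""
    n = P.shape[0]
    states = rng.integers(0, n, size=n_clones)         # initial clone positions
    log_growth = 0.0
    for _ in range(n_steps):
        new_states = np.array([rng.choice(n, p=P[x]) for x in states])
        weights = np.exp(s * obs[states, new_states])   # bias each clone by its observable increment
        log_growth += np.log(weights.mean())            # running estimate of the population growth rate
        # resample clones proportionally to their weights (keeps the population size fixed)
        idx = rng.choice(n_clones, size=n_clones, p=weights / weights.sum())
        states = new_states[idx]
    return log_growth / n_steps

# Toy two-state chain; the observable counts jumps between the states (a "current").
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
obs = np.array([[0.0, 1.0],
                [1.0, 0.0]])
print(scgf_cloning(P, obs, s=0.5))
```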
Evaluation of True Power Luminous Efficiency from Experimental Luminance Values
NASA Astrophysics Data System (ADS)
Tsutsui, Tetsuo; Yamamato, Kounosuke
1999-05-01
A method for obtaining true external power luminous efficiencyfrom experimentally obtained luminance in organic light-emittingdiodes (LEDs) wasdemonstrated. Conventional two-layer organic LEDs with different electron-transport layer thicknesses wereprepared. Spatial distributions of emission intensities wereobserved. The large deviation in both emission spectra and spatialemission patterns were observed when the electron-transport layerthickness was varied. The deviation of emission patterns from thestandard Lambertian pattern was found to cause overestimations ofpower luminous efficiencies as large as 30%. A method for evaluatingcorrection factors was proposed.
DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M
2010-03-29
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image, which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
Bressloff, Paul C
2015-01-01
We consider applications of path-integral methods to the analysis of a stochastic hybrid model representing a network of synaptically coupled spiking neuronal populations. The state of each local population is described in terms of two stochastic variables, a continuous synaptic variable and a discrete activity variable. The synaptic variables evolve according to piecewise-deterministic dynamics describing, at the population level, synapses driven by spiking activity. The dynamical equations for the synaptic currents are only valid between jumps in spiking activity, and the latter are described by a jump Markov process whose transition rates depend on the synaptic variables. We assume a separation of time scales between the fast spiking dynamics and the slower synaptic dynamics with time constant τ. This naturally introduces a small positive parameter ε (the ratio of the fast to the slow time scale), which can be used to develop various asymptotic expansions of the corresponding path-integral representation of the stochastic dynamics. First, we derive a variational principle for maximum-likelihood paths of escape from a metastable state (large deviations in the small noise limit ε → 0). We then show how the path integral provides an efficient method for obtaining a diffusion approximation of the hybrid system for small ε. The resulting Langevin equation can be used to analyze the effects of fluctuations within the basin of attraction of a metastable state, that is, ignoring the effects of large deviations. We illustrate this by using the Langevin approximation to analyze the effects of intrinsic noise on pattern formation in a spatially structured hybrid network. In particular, we show how noise enlarges the parameter regime over which patterns occur, in an analogous fashion to PDEs. Finally, we carry out a loop expansion of the path integral in powers of ε, and use this to derive corrections to voltage-based mean-field equations, analogous to the modified activity-based equations generated from a neural master equation.
Weinberg, Seth H.; Smith, Gregory D.
2012-01-01
Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted “domains” associated with calcium influx are small enough (e.g., 10^-17 liters) that local signaling may involve 1–100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume is unrealistically large and/or the kinetics of the calcium binding are sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers. PMID:23509597
Does the Budyko curve reflect a maximum power state of hydrological systems? A backward analysis
NASA Astrophysics Data System (ADS)
Westhoff, Martijn; Zehe, Erwin; Archambeau, Pierre; Dewals, Benjamin
2016-04-01
Almost all catchments plot within a small envelope around the Budyko curve. This apparent behaviour suggests that organizing principles may play a role in the evolution of catchments. In this paper we applied the thermodynamic principle of maximum power as the organizing principle. In a top-down approach we derived mathematical formulations of the relation between relative wetness and the gradients driving runoff and evaporation for a simple one-box model. We did this in an inverse manner such that, when the conductances are optimized with the maximum power principle, the steady-state behaviour of the model leads exactly to a point on the asymptotes of the Budyko curve. Subsequently, we added dynamics in forcing and actual evaporation, causing the Budyko curve to deviate from the asymptotes. Despite the simplicity of the model, catchment observations compare reasonably well with the Budyko curves subject to observed dynamics in rainfall and actual evaporation. Thus, by constraining the model, optimized with the maximum power principle, with the asymptotes of the Budyko curve, we were able to derive more realistic values of the aridity and evaporation index without any parameter calibration. Future work should focus on better representing the boundary conditions of real catchments and eventually adding more complexity to the model.
Does the Budyko curve reflect a maximum-power state of hydrological systems? A backward analysis
NASA Astrophysics Data System (ADS)
Westhoff, M.; Zehe, E.; Archambeau, P.; Dewals, B.
2016-01-01
Almost all catchments plot within a small envelope around the Budyko curve. This apparent behaviour suggests that organizing principles may play a role in the evolution of catchments. In this paper we applied the thermodynamic principle of maximum power as the organizing principle. In a top-down approach we derived mathematical formulations of the relation between relative wetness and gradients driving run-off and evaporation for a simple one-box model. We did this in an inverse manner such that, when the conductances are optimized with the maximum-power principle, the steady-state behaviour of the model leads exactly to a point on the asymptotes of the Budyko curve. Subsequently, we added dynamics in forcing and actual evaporation, causing the Budyko curve to deviate from the asymptotes. Despite the simplicity of the model, catchment observations compare reasonably well with the Budyko curves subject to observed dynamics in rainfall and actual evaporation. Thus by constraining the model that has been optimized with the maximum-power principle with the asymptotes of the Budyko curve, we were able to derive more realistic values of the aridity and evaporation index without any parameter calibration. Future work should focus on better representing the boundary conditions of real catchments and eventually adding more complexity to the model.
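For reference, the Budyko framework discussed in the two abstracts above relates the long-term evaporation index E/P to the aridity index Ep/P, bounded by the water-limit and energy-limit asymptotes. The sketch below evaluates those asymptotes together with one commonly used closed form (the original Budyko expression); it is not the one-box maximum-power model derived in the paper.

```python
import numpy as np

def budyko_asymptotes(aridity):
    """Upper bound of the evaporation index: energy limit (E = Ep) or water limit (E = P)."""
    return np.minimum(aridity, 1.0)

def budyko_curve(aridity):
    """Original Budyko interpolation between the two asymptotes."""
    return np.sqrt(aridity * np.tanh(1.0 / aridity) * (1.0 - np.exp(-aridity)))

aridity = np.array([0.2, 0.5, 1.0, 2.0, 5.0])   # Ep/P
print(budyko_asymptotes(aridity))
print(budyko_curve(aridity))                     # always below the asymptotes
```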
Alchemical Free Energy Calculations for Nucleotide Mutations in Protein-DNA Complexes.
Gapsys, Vytautas; de Groot, Bert L
2017-12-12
Nucleotide-sequence-dependent interactions between proteins and DNA are responsible for a wide range of gene regulatory functions. Accurate and generalizable methods to evaluate the strength of protein-DNA binding have long been sought. While numerous computational approaches have been developed, most of them require fitting parameters to experimental data to a certain degree, e.g., machine learning algorithms or knowledge-based statistical potentials. Molecular-dynamics-based free energy calculations offer a robust, system-independent, first-principles-based method to calculate free energy differences upon nucleotide mutation. We present an automated procedure to set up alchemical MD-based calculations to evaluate free energy changes occurring as the result of a nucleotide mutation in DNA. We used these methods to perform a large-scale mutation scan comprising 397 nucleotide mutation cases in 16 protein-DNA complexes. The obtained prediction accuracy reaches 5.6 kJ/mol average unsigned deviation from experiment with a correlation coefficient of 0.57 with respect to the experimentally measured free energies. Overall, the first-principles-based approach performed on par with the molecular modeling approaches Rosetta and FoldX. Subsequently, we utilized the MD-based free energy calculations to construct protein-DNA binding profiles for the zinc finger protein Zif268. The calculation results compare remarkably well with the experimentally determined binding profiles. The software automating the structure and topology setup for alchemical calculations is a part of the pmx package; the utilities have also been made available online at http://pmx.mpibpc.mpg.de/dna_webserver.html .
Lower Current Large Deviations for Zero-Range Processes on a Ring
NASA Astrophysics Data System (ADS)
Chleboun, Paul; Grosskinsky, Stefan; Pizzoferrato, Andrea
2017-04-01
We study lower large deviations for the current of totally asymmetric zero-range processes on a ring with concave current-density relation. We use an approach by Jensen and Varadhan which has previously been applied to exclusion processes, to realize current fluctuations by travelling wave density profiles corresponding to non-entropic weak solutions of the hyperbolic scaling limit of the process. We further establish a dynamic transition, where large deviations of the current below a certain value are no longer typically attained by non-entropic weak solutions, but by condensed profiles, where a non-zero fraction of all the particles accumulates on a single fixed lattice site. This leads to a general characterization of the rate function, which is illustrated by providing detailed results for four generic examples of jump rates, including constant rates, decreasing rates, unbounded sublinear rates and asymptotically linear rates. Our results on the dynamic transition are supported by numerical simulations using a cloning algorithm.
A New Control Paradigm for Stochastic Differential Equations
NASA Astrophysics Data System (ADS)
Schmid, Matthias J. A.
This study presents a novel comprehensive approach to the control of dynamic systems under uncertainty governed by stochastic differential equations (SDEs). Large Deviations (LD) techniques are employed to arrive at a control law for a large class of nonlinear systems minimizing sample path deviations. Thereby, a paradigm shift is suggested from point-in-time to sample path statistics on function spaces. A suitable formal control framework which leverages embedded Freidlin-Wentzell theory is proposed and described in detail. This includes the precise definition of the control objective and comprises an accurate discussion of the adaptation of the Freidlin-Wentzell theorem to the particular situation. The new control design is enabled by the transformation of an ill-posed control objective into a well-conditioned sequential optimization problem. A direct numerical solution process is presented using quadratic programming, but the emphasis is on the development of a closed-form expression reflecting the asymptotic deviation probability of a particular nominal path. This is identified as the key factor in the success of the new paradigm. An approach employing the second variation and the differential curvature of the effective action is suggested for small deviation channels leading to the Jacobi field of the rate function and the subsequently introduced Jacobi field performance measure. This closed-form solution is utilized in combination with the supplied parametrization of the objective space. For the first time, this allows for an LD based control design applicable to a large class of nonlinear systems. Thus, Minimum Large Deviations (MLD) control is effectively established in a comprehensive structured framework. The construction of the new paradigm is completed by an optimality proof for the Jacobi field performance measure, an interpretive discussion, and a suggestion for efficient implementation. The potential of the new approach is exhibited by its extension to scalar systems subject to state-dependent noise and to systems of higher order. The suggested control paradigm is further advanced when a sequential application of MLD control is considered. This technique yields a nominal path corresponding to the minimum total deviation probability on the entire time domain. It is demonstrated that this sequential optimization concept can be unified in a single objective function which is revealed to be the Jacobi field performance index on the entire domain subject to an endpoint deviation. The emerging closed-form term replaces the previously required nested optimization and, thus, results in a highly efficient application-ready control design. This effectively substantiates Minimum Path Deviation (MPD) control. The proposed control paradigm allows the specific problem of stochastic cost control to be addressed as a special case. This new technique is employed within this study for the stochastic cost problem giving rise to Cost Constrained MPD (CCMPD) as well as to Minimum Quadratic Cost Deviation (MQCD) control. An exemplary treatment of a generic scalar nonlinear system subject to quadratic costs is performed for MQCD control to demonstrate the elementary expandability of the new control paradigm. This work concludes with a numerical evaluation of both MPD and CCMPD control for three exemplary benchmark problems. Numerical issues associated with the simulation of SDEs are briefly discussed and illustrated. The numerical examples furnish proof of the successful design. 
This study is complemented by a thorough review of statistical control methods, stochastic processes, Large Deviations techniques and the Freidlin-Wentzell theory, providing a comprehensive, self-contained account. The presentation of the mathematical tools and concepts is of a unique character, specifically addressing an engineering audience.
Dissent and Nationalism in the Soviet Baltic.
1983-09-01
and which, in the final analysis, also has an internal policing function, is manned on the principle of extraterritoriality, which means that local...descriptor of current antiestablishment activities and attitudes in the Baltic region. The common meaning of "dissent," i.e., deviation by a minority...denied any means of support. The regime's intentions to eradicate the Catholic religion in Lithuania were unmistakable. Despite the critical situation in
Astronaut mass measurement using linear acceleration method and the effect of body non-rigidity
NASA Astrophysics Data System (ADS)
Yan, Hui; Li, LuMing; Hu, ChunHua; Chen, Hao; Hao, HongWei
2011-04-01
Astronaut body mass is an essential factor in health monitoring in space. The latest mass measurement device for the International Space Station (ISS) employs a linear acceleration method. The principle of this method is that the device generates a constant pulling force, and the astronaut is accelerated on a parallelogram motion guide which rotates at a large radius to achieve a nearly linear trajectory. The acceleration is calculated by regression analysis of the displacement-versus-time trajectory, and the body mass is calculated using the formula m = F/a. However, in actual flight the device is unstable, with deviations between runs of 6-7 kg. This paper considers body non-rigidity as the major cause of error and instability and analyzes the effects of body non-rigidity from different aspects. Body non-rigidity makes the acceleration of the center of mass (C.M.) oscillate and lag behind the point where the force is applied. Actual acceleration curves showed that the overall effect of body non-rigidity is an oscillation at about 7 Hz and a deviation of about 25%. To enhance body rigidity, better body restraints were introduced and a prototype based on the linear acceleration method was built. A measurement experiment was carried out on the ground on an air table. Three human subjects weighing 60-70 kg were measured. The average variance was 0.04 kg and the average measurement error was 0.4%. This study will provide a reference for future development of China's own mass measurement device.
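The linear acceleration principle described here reduces to fitting the displacement-time trajectory and applying m = F/a. The sketch below uses illustrative numbers and a toy non-rigidity term, not the flight device's actual algorithm or data.

```python
import numpy as np

def mass_from_trajectory(t, x, force):
    """Fit x(t) = x0 + v0*t + 0.5*a*t^2 by least squares and return m = F/a."""
    coeffs = np.polyfit(t, x, deg=2)   # coeffs[0] = 0.5 * a
    acceleration = 2.0 * coeffs[0]
    return force / acceleration

# Simulated run: constant pulling force F on a 65 kg subject, with measurement noise
# and a small 7 Hz oscillation mimicking the body non-rigidity discussed above.
F, m_true = 50.0, 65.0
t = np.linspace(0.0, 1.5, 300)
x = 0.5 * (F / m_true) * t**2
x_nonrigid = x + 0.002 * np.sin(2.0 * np.pi * 7.0 * t) + np.random.default_rng(0).normal(0.0, 1e-4, t.size)
print(mass_from_trajectory(t, x, F))            # ~65 kg for the rigid-body trajectory
print(mass_from_trajectory(t, x_nonrigid, F))   # close to 65 kg, but perturbed by the simulated non-rigidity
```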
Chang, Young-Hui; Auyang, Arick G.; Scholz, John P.; Nichols, T. Richard
2009-01-01
Biomechanics and neurophysiology studies suggest whole limb function to be an important locomotor control parameter. Inverted pendulum and mass-spring models greatly reduce the complexity of the legs and predict the dynamics of locomotion, but do not address how numerous limb elements are coordinated to achieve such simple behavior. As a first step, we hypothesized whole limb kinematics were of primary importance and would be preferentially conserved over individual joint kinematics after neuromuscular injury. We used a well-established peripheral nerve injury model of cat ankle extensor muscles to generate two experimental injury groups with a predictable time course of temporary paralysis followed by complete muscle self-reinnervation. Mean trajectories of individual joint kinematics were altered as a result of deficits after injury. By contrast, mean trajectories of limb orientation and limb length remained largely invariant across all animals, even with paralyzed ankle extensor muscles, suggesting changes in mean joint angles were coordinated as part of a long-term compensation strategy to minimize change in whole limb kinematics. Furthermore, at each measurement stage (pre-injury, paralytic and self-reinnervated) step-by-step variance of individual joint kinematics was always significantly greater than that of limb orientation. Our results suggest joint angle combinations are coordinated and selected to stabilize whole limb kinematics against short-term natural step-by-step deviations as well as long-term, pathological deviations created by injury. This may represent a fundamental compensation principle allowing animals to adapt to changing conditions with minimal effect on overall locomotor function. PMID:19837893
Comparison of different methods for the in situ measurement of forest litter moisture content
NASA Astrophysics Data System (ADS)
Schunk, C.; Ruth, B.; Leuchner, M.; Wastl, C.; Menzel, A.
2016-02-01
Dead fine fuel (e.g., litter) moisture content is an important parameter for both forest fire and ecological applications as it is related to ignitability, fire behavior and soil respiration. Real-time availability of this value would thus be a great benefit to fire risk management and prevention. However, the comprehensive literature review in this paper shows that there is no easy-to-use method for automated measurements available. This study investigates the applicability of four different sensor types (permittivity and electrical resistance measuring principles) for this measurement. Comparisons were made to manual gravimetric reference measurements carried out almost daily for one fire season and overall agreement was good (highly significant correlations with 0.792 ≤ r ≤ 0.947, p < 0.001). Standard deviations within sensor types were linearly correlated to daily sensor mean values; however, above a certain threshold they became irregular, which may be linked to exceedance of the working ranges. Thus, measurements with irregular standard deviations were considered unusable and relationships between gravimetric and automatic measurements of all individual sensors were compared only for useable periods. A large drift in these relationships became obvious from drought to drought period. This drift may be related to installation effects or settling and decomposition of the litter layer throughout the fire season. Because of the drift and the in situ calibration necessary, it cannot be recommended to use the methods presented here for monitoring purposes and thus operational hazard management. However, they may be interesting for scientific studies when some manual fuel moisture measurements are made anyway. Additionally, a number of potential methodological improvements are suggested.
Fast secant methods for the iterative solution of large nonsymmetric linear systems
NASA Technical Reports Server (NTRS)
Deuflhard, Peter; Freund, Roland; Walter, Artur
1990-01-01
A family of secant methods based on general rank-1 updates was revisited in view of the construction of iterative solvers for large non-Hermitian linear systems. As it turns out, both Broyden's good and bad update techniques play a special role, but should be associated with two different line search principles. For Broyden's bad update technique, a minimum residual principle is natural, thus making it theoretically comparable with a series of well known algorithms like GMRES. Broyden's good update technique, however, is shown to be naturally linked with a minimum next correction principle, which asymptotically mimics a minimum error principle. The two minimization principles differ significantly for sufficiently large system dimension. Numerical experiments on discretized partial differential equations of convection diffusion type in 2-D with integral layers give a first impression of the possible power of the derived good Broyden variant.
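As background for the secant-update idea, a textbook sketch of Broyden's second ("bad") rank-1 update applied to a linear system F(x) = Ax - b is given below. This is not the minimum-residual or minimum-next-correction variants analyzed in the paper, just the bare update they build on, with an arbitrary well-conditioned test matrix.

```python
import numpy as np

def broyden_bad(A, b, x0, tol=1e-8, max_iter=500):
    """Solve A x = b with Broyden's second ("bad") rank-1 update of an approximate inverse Jacobian."""
    x = x0.astype(float)
    H = np.eye(len(b))               # initial approximate inverse Jacobian
    f = A @ x - b
    for _ in range(max_iter):
        dx = -H @ f
        x_new = x + dx
        f_new = A @ x_new - b
        if np.linalg.norm(f_new) < tol:
            return x_new
        df = f_new - f
        H += np.outer(dx - H @ df, df) / (df @ df)   # rank-1 "bad Broyden" update
        x, f = x_new, f_new
    return x

rng = np.random.default_rng(5)
A = np.eye(50) + 0.1 * rng.standard_normal((50, 50))   # mildly nonsymmetric test matrix
b = rng.standard_normal(50)
x = broyden_bad(A, b, x0=np.zeros(50))
print(np.linalg.norm(A @ x - b))
```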
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayyar-Rodsari, Bijan; Schweiger, Carl; /SLAC /Pavilion Technologies, Inc., Austin, TX
2010-08-25
Timely estimation of deviations from optimal performance in complex systems and the ability to identify corrective measures in response to the estimated parameter deviations has been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes, operation of large-scale public works projects and the volume of the published literature on this topic clearly indicates the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.), to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM & FP model was built (i.e. model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space, and creates a lower dimension feature space in which fault estimation results can be effectively presented to the operation personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts have focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the output of the SVM (i.e. the parameters of the beam lifetime model) are physically meaningful. (3) Numerical Efficiency of the Training - We investigated the numerical efficiency of the SVM training. More specifically, for the primal formulation of the training, we have developed a problem formulation that avoids the linear increase in the number of the constraints as a function of the number of data points. (4) Flexibility of Software Architecture - The software framework for the training of the support vector machines was designed to enable experimentation with different solvers. We experimented with two commonly used nonlinear solvers for our simulations. The primary application of interest for this project has been the sustained optimal operation of particle accelerators at the Stanford Linear Accelerator Center (SLAC). Particle storage rings are used for a variety of applications ranging from 'colliding beam' systems for high-energy physics research to highly collimated x-ray generators for synchrotron radiation science. Linear accelerators are also used for collider research such as International Linear Collider (ILC), as well as for free electron lasers, such as the Linear Coherent Light Source (LCLS) at SLAC. One common theme in the operation of storage rings and linear accelerators is the need to precisely control the particle beams over long periods of time with minimum beam loss and stable, yet challenging, beam parameters.
We strongly believe that beyond applications in particle accelerators, the high fidelity and cost benefits of a combined model-based fault estimation/correction system will attract customers from a wide variety of commercial and scientific industries. Even though the acquisition of Pavilion Technologies, Inc. by Rockwell Automation Inc. in 2007 has altered the small business status of the Pavilion and it no longer qualifies for a Phase II funding, our findings in the course of the Phase I research have convinced us that further research will render a workable model-based fault estimation and correction for particle accelerators and industrial plants feasible.
Bioinspired principles for large-scale networked sensor systems: an overview.
Jacobsen, Rune Hylsberg; Zhang, Qi; Toftegaard, Thomas Skjødeberg
2011-01-01
Biology has often been used as a source of inspiration in computer science and engineering. Bioinspired principles have found their way into network node design and research due to the appealing analogies between biological systems and large networks of small sensors. This paper provides an overview of bioinspired principles and methods such as swarm intelligence, natural time synchronization, artificial immune system and intercellular information exchange applicable for sensor network design. Bioinspired principles and methods are discussed in the context of routing, clustering, time synchronization, optimal node deployment, localization and security and privacy.
NASA Astrophysics Data System (ADS)
Duffy, Ken; Lobunets, Olena; Suhov, Yuri
2007-05-01
We propose a model of a loss averse investor who aims to maximize his expected wealth under certain constraints. The constraints are that he avoids, with high probability, incurring a (suitably defined) unacceptable loss. The methodology employed comes from the theory of large deviations. We explore a number of fundamental properties of the model and illustrate its desirable features. We demonstrate its utility by analyzing assets that follow some commonly used financial return processes: Fractional Brownian Motion, Jump Diffusion, Variance Gamma and Truncated Lévy.
Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains
NASA Astrophysics Data System (ADS)
Cofré, Rodrigo; Maldonado, Cesar
2018-01-01
We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
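As a small self-contained illustration of the large-deviation machinery mentioned above (a generic two-state chain, not the spike-train models of the paper), the rate function of a time-averaged observable can be obtained from the leading eigenvalue of a tilted transition matrix followed by a Legendre transform:

```python
# Illustration only: rate function of the time-averaged observable f of a
# two-state Markov chain, via the scaled cumulant generating function (SCGF)
# lambda(k) = log of the largest eigenvalue of the tilted matrix
# P_k(x, y) = P(x, y) * exp(k * f(y)), followed by a Legendre transform.
import numpy as np
from scipy.optimize import minimize_scalar

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # arbitrary two-state transition matrix
f = np.array([0.0, 1.0])            # observable: fraction of time in state 1

def scgf(k):
    tilted = P * np.exp(k * f)[None, :]
    return np.log(np.max(np.abs(np.linalg.eigvals(tilted))))

def rate(a):
    # I(a) = sup_k [ k*a - lambda(k) ]
    res = minimize_scalar(lambda k: -(k * a - scgf(k)),
                          bounds=(-20.0, 20.0), method="bounded")
    return -res.fun

for a in (0.1, 0.3333, 0.6):        # the stationary mean of f is 1/3
    print(f"I({a:.4f}) = {rate(a):.4f}")
```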
Gait analysis in children with cerebral palsy.
Armand, Stéphane; Decoulon, Geraldo; Bonnefoy-Mazure, Alice
2016-12-01
Cerebral palsy (CP) children present complex and heterogeneous motor disorders that cause gait deviations. Clinical gait analysis (CGA) is needed to identify, understand and support the management of gait deviations in CP. CGA assesses a large amount of quantitative data concerning patients' gait characteristics, such as video, kinematics, kinetics, electromyography and plantar pressure data. Common gait deviations in CP can be grouped into the gait patterns of spastic hemiplegia (drop foot, equinus with different knee positions) and spastic diplegia (true equinus, jump, apparent equinus and crouch) to facilitate communication. However, gait deviations in CP tend to be a continuum of deviations rather than well delineated groups. To interpret CGA, it is necessary to link gait deviations to clinical impairments and to distinguish primary gait deviations from compensatory strategies. CGA does not tell us how to treat a CP patient, but can provide objective identification of gait deviations and further the understanding of gait deviations. Numerous treatment options are available to manage gait deviations in CP. Generally, treatments strive to limit secondary deformations, re-establish the lever arm function and preserve muscle strength. Additional roles of CGA are to better understand the effects of treatments on gait deviations. Cite this article: Armand S, Decoulon G, Bonnefoy-Mazure A. Gait analysis in children with cerebral palsy. EFORT Open Rev 2016;1:448-460. DOI: 10.1302/2058-5241.1.000052.
NASA Astrophysics Data System (ADS)
Perger, Warren F.; Zhao, Jijun; Winey, J. M.; Gupta, Y. M.
2006-07-01
The vibrational frequencies of the PETN molecular crystal were calculated using the first-principles CRYSTAL03 program, which employs an all-electron LCAO approach and calculates analytic first derivatives of the total energy with respect to atomic displacements. Numerical second derivatives were used to enable calculation of the vibrational frequencies at ambient pressure and under various states of compression. Three different density functionals, B3LYP, PW91, and X3LYP, were used to examine the effect of the exchange-correlation functional on the vibrational frequencies. The average deviation from experimental results is shown to be of the order of 2-3%, depending on the functional used. The pressure-induced shift of the vibrational frequencies is presented.
Gummerum, Michaela; Keller, Monika; Takezawa, Masanori; Mata, Jutta
2008-01-01
This study interconnects developmental psychology of fair and moral behavior with economic game theory. One hundred eighty-nine 9- to 17-year-old students shared a sum of money as individuals and groups with another anonymous group (dictator game). Individual allocations did not differ by age but did by gender and were predicted by participants' preferences for fair allocations. Group decision making followed a majority process. Level of moral reasoning did not predict individual offers, but group members with a higher moral reasoning ability were more influential during group negotiations and in influencing group outcomes. The youngest participants justified offers more frequently by referring to simple distribution principles. Older participants employed more complex reasons to justify deviations from allocation principles.
Abraha, Iosief; Cozzolino, Francesco; Orso, Massimiliano; Marchesi, Mauro; Germani, Antonella; Lombardo, Guido; Eusebi, Paolo; De Florio, Rita; Luchetta, Maria Laura; Iorio, Alfonso; Montedori, Alessandro
2017-04-01
To describe the characteristics, and estimate the incidence, of trials included in systematic reviews deviating from the intention-to-treat (ITT) principle. A 5% random sample of reviews was selected (Medline 2006-2010). Trials from the reviews were classified based on ITT status: (1) ITT trials (trials reporting standard ITT analyses); (2) modified ITT (mITT) trials (trials deviating from standard ITT); or (3) no ITT trials. Of 222 reviews, 81 (36%) included at least one mITT trial. Reviews with mITT trials were more likely to contain trials that used placebo, that investigated drugs, and that reported favorable results. The incidence of reviews with mITT trials ranged from 29% (17/58) to 48% (23/48). Of the 2,349 trials, 597 (25.4%) were classified as ITT trials, 323 (13.8%) as mITT trials, and 1,429 (60.8%) as no ITT trials. The mITT trials were more likely to have reported exclusions and to have received funding compared with studies classified as ITT trials. The reporting of the type of ITT may differ according to the clinical area and the type of intervention. Deviation from ITT in randomized controlled trials is a widespread phenomenon that significantly affects systematic reviews. Copyright © 2017 Elsevier Inc. All rights reserved.
Towards denoising XMCD movies of fast magnetization dynamics using extended Kalman filter.
Kopp, M; Harmeling, S; Schütz, G; Schölkopf, B; Fähnle, M
2015-01-01
The Kalman filter is a well-established approach to obtaining information on the time-dependent state of a system from noisy observations. It was developed in the context of the Apollo project to track the deviation of the true trajectory of a rocket from the desired trajectory. Afterwards it was applied to many different systems with small numbers of components of the respective state vector (typically about 10). In all cases the equation of motion for the state vector was known exactly. Fast dissipative magnetization dynamics is often investigated by x-ray magnetic circular dichroism movies (XMCD movies), which are often very noisy. In this situation the number of components of the state vector is extremely large (about 10^5), and the equation of motion for the dissipative magnetization dynamics (especially the values of the material parameters of this equation) is not well known. In the present paper it is shown by theoretical considerations that there is nevertheless no problem in principle with using the Kalman filter to denoise XMCD movies of fast dissipative magnetization dynamics. Copyright © 2014 Elsevier B.V. All rights reserved.
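For readers unfamiliar with the filter, a minimal scalar example of the predict/update cycle is sketched below; the random-walk state model, noise levels and data are arbitrary choices for illustration, not the XMCD setting of the paper.

```python
# Minimal scalar Kalman filter: denoise noisy measurements of a slowly
# varying signal using a random-walk state model. All parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
truth = np.sin(2 * np.pi * t)                 # "true" signal (illustrative)
z = truth + rng.normal(0.0, 0.3, t.size)      # noisy observations

q, r = 5e-3, 0.3 ** 2                         # process / measurement variance
x_hat, p = 0.0, 1.0                           # initial estimate and variance
estimates = []
for zk in z:
    # predict (random-walk model: state unchanged, uncertainty grows)
    p = p + q
    # update with the new measurement
    k_gain = p / (p + r)
    x_hat = x_hat + k_gain * (zk - x_hat)
    p = (1.0 - k_gain) * p
    estimates.append(x_hat)

print("noisy RMS error   :", np.sqrt(np.mean((z - truth) ** 2)))
print("filtered RMS error:", np.sqrt(np.mean((np.array(estimates) - truth) ** 2)))
```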
A Spectral Approach for Quenched Limit Theorems for Random Expanding Dynamical Systems
NASA Astrophysics Data System (ADS)
Dragičević, D.; Froyland, G.; González-Tokman, C.; Vaienti, S.
2018-06-01
We prove quenched versions of (i) a large deviations principle (LDP), (ii) a central limit theorem (CLT), and (iii) a local central limit theorem for non-autonomous dynamical systems. A key advance is the extension of the spectral method, commonly used in limit laws for deterministic maps, to the general random setting. We achieve this via multiplicative ergodic theory and the development of a general framework to control the regularity of Lyapunov exponents of twisted transfer operator cocycles with respect to a twist parameter. While some versions of the LDP and CLT have previously been proved with other techniques, the local central limit theorem is, to our knowledge, a completely new result, and one that demonstrates the strength of our method. Applications include non-autonomous (piecewise) expanding maps, defined by random compositions of the form T_{σ^{n-1}ω} ∘ ⋯ ∘ T_{σω} ∘ T_ω. An important aspect of our results is that we only assume ergodicity and invertibility of the random driving σ: Ω → Ω; in particular no expansivity or mixing properties are required.
How to improve essential skills in introductory physics through brief, spaced, online practice.
NASA Astrophysics Data System (ADS)
Heckler, Andrew; Mikula, Brendon
2017-01-01
We developed and implemented a set of online ``essential skills'' tasks to help students achieve and retain a core level of mastery and fluency in basic skills necessary for their coursework. The task design is based on our research on student understanding and difficulties as well as three well-established cognitive principles: 1) spaced practice, to promote retention, 2) interleaved practice, to promote the ability to recognize when the learned skill is needed, and 3) mastery practice, to promote a base level of performance. We report on training a variety of skills involving vector math. Students spent a relatively small amount of time, 10-20 minutes in practice each week, answering relevant questions online until a mastery level was achieved. Results indicate significant and often dramatic gains, with average gains often exceeding one standard deviation. Notably, these large gains are retained at least several months after the final practice session, including for less-prepared students. Funding for this research was provided by the Center for Emergent Materials: an NSF MRSEC under Award Number DMR-1420451.
Lithium alloy negative electrodes
NASA Astrophysics Data System (ADS)
Huggins, Robert A.
The 1996 announcement by Fuji Photo Film of the development of lithium batteries containing convertible metal oxides has caused a great deal of renewed interest in lithium alloys as alternative materials for use in the negative electrode of rechargeable lithium cells. The earlier work on lithium alloys, both at elevated and ambient temperatures, is briefly reviewed. Basic principles relating thermodynamics, phase diagrams and electrochemical properties under near-equilibrium conditions are discussed, with the Li-Sn system as an example. Second-phase nucleation, and its hindrance under dynamic conditions, plays an important role in determining deviations from equilibrium behavior. Two general types of composite microstructure electrodes, those with a mixed-conducting matrix, and those with a solid electrolyte matrix, are discussed. The Li-Sn-Si system at elevated temperatures, and the Li-Sn-Cd system at ambient temperatures, are shown to be examples of mixed-conducting matrix microstructures. The convertible oxides are an example of the solid electrolyte matrix type. Although the reversible capacity can be very large in this case, the first cycle irreversible capacity required to convert the oxides to alloys may be a significant handicap.
Kurtosis, skewness, and non-Gaussian cosmological density perturbations
NASA Technical Reports Server (NTRS)
Luo, Xiaochun; Schramm, David N.
1993-01-01
Cosmological topological defects as well as some nonstandard inflation models can give rise to non-Gaussian density perturbations. Skewness and kurtosis are the third and fourth moments that measure the deviation of a distribution from a Gaussian. Measurement of these moments for the cosmological density field and for the microwave background temperature anisotropy can provide a test of the Gaussian nature of the primordial fluctuation spectrum. In the case of the density field, the importance of measuring the kurtosis is stressed since it will be preserved through the weakly nonlinear gravitational evolution epoch. Current constraints on skewness and kurtosis of primeval perturbations are obtained from the observed density contrast on small scales and from recent COBE observations of temperature anisotropies on large scales. It is also shown how, in principle, future microwave anisotropy experiments might be able to reveal the initial skewness and kurtosis. It is shown that present data argue that if the initial spectrum is adiabatic, then it is probably Gaussian, but non-Gaussian isocurvature fluctuations are still allowed, and these are what topological defects provide.
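For reference, the sample versions of these two moments are easily computed; with the usual conventions both the skewness and the excess kurtosis vanish for a Gaussian, so significantly nonzero values signal non-Gaussianity (the data below are synthetic and purely illustrative):

```python
# Sample skewness and excess kurtosis: both are zero (within sampling noise)
# for a Gaussian field, so significant nonzero values signal non-Gaussianity.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(2)
gaussian = rng.normal(0.0, 1.0, 100_000)
skewed = rng.chisquare(df=4, size=100_000)    # an example non-Gaussian field

for name, sample in [("gaussian", gaussian), ("chi^2(4)", skewed)]:
    print(f"{name:9s}  skewness = {skew(sample):+.3f}  "
          f"excess kurtosis = {kurtosis(sample):+.3f}")
```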
Effects of Noise on Ecological Invasion Processes: Bacteriophage-mediated Competition in Bacteria
NASA Astrophysics Data System (ADS)
Joo, Jaewook; Eric, Harvill; Albert, Reka
2007-03-01
Pathogen-mediated competition, through which an invasive species carrying and transmitting a pathogen can be a superior competitor to a more vulnerable resident species, is one of the principal driving forces influencing biodiversity in nature. Using an experimental system of bacteriophage-mediated competition in bacterial populations and a deterministic model, we have shown in [Joo et al 2005] that the competitive advantage conferred by the phage depends only on the relative phage pathology and is independent of the initial phage concentration and other phage and host parameters such as the infection-causing contact rate, the spontaneous and infection-induced lysis rates, and the phage burst size. Here we investigate the effects of stochastic fluctuations on bacterial invasion facilitated by bacteriophage, and examine the validity of the deterministic approach. We use both numerical and analytical methods of stochastic processes to identify the source of noise and assess its magnitude. We show that the conclusions obtained from the deterministic model are robust against stochastic fluctuations, yet deviations become pronounced when the phage are more pathological to the invading bacterial strain.
Schnuerch, Robert; Richter, Jasmin; Koppehele-Gossel, Judith; Gibbons, Henning
2016-06-01
Detecting one's agreement with or deviation from other people, a key principle of social cognition, relies on neurocognitive mechanisms involved in reward processing, mismatch detection, and attentional orienting. Previous studies have focused on explicit depictions of the (in)congruency of individual and group judgments. Here, we report data from a novel experimental paradigm in which participants first rated a set of images and were later simply confronted with other individuals' ostensible preferences. Participants strongly aligned their judgments in the direction of other people's deviation from their own initial rating, which was neither an effect of regression toward the mean nor of evaluative conditioning (Experiment 1). Most importantly, we provide neurophysiological evidence of the involvement of fundamental cognitive functions related to social comparison (Experiment 2), even though our paradigm did not overly boost this process. Mismatches, as compared to matches, of preferences were associated with an amplitude increase of a broadly distributed N400-like deflection, suggesting that social deviance is represented in the human brain in a similar way as conflicts or breaches of expectation. Also, both early (P2) and late (LPC) signatures of attentional selection were significantly modulated by the social (mis)match of preferences. Our data thus strengthen and valuably extend previous findings on the neurocognitive principles of social proof. © 2016 Society for Psychophysiological Research.
NASA Astrophysics Data System (ADS)
Yang, GuanYa; Wu, Jiang; Chen, ShuGuang; Zhou, WeiJun; Sun, Jian; Chen, GuanHua
2018-06-01
The neural network-based first-principles method for predicting the heat of formation (HOF) was previously demonstrated to be able to achieve chemical accuracy in a broad spectrum of target molecules [L. H. Hu et al., J. Chem. Phys. 119, 11501 (2003)]. However, its accuracy deteriorates with increasing molecular size. A closer inspection reveals a systematic correlation between the prediction error and the molecular size, which appears correctable by further statistical analysis, calling for a more sophisticated machine learning algorithm. Despite the apparent difference between simple and complex molecules, all the essential physical information is already present in a carefully selected set of small molecule representatives. A model that can capture the fundamental physics would be able to predict large and complex molecules from information extracted only from a small-molecule database. To this end, a size-independent, multi-step multi-variable linear regression-neural network-B3LYP method is developed in this work, which successfully improves the overall prediction accuracy by training with smaller molecules only. In particular, the calculation errors for larger molecules are drastically reduced to the same magnitude as those of the smaller molecules. Specifically, the method is based on a 164-molecule database that consists of molecules made of hydrogen and carbon elements. Four molecular descriptors were selected to encode each molecule's characteristics, among which the raw HOF calculated from B3LYP and the molecular size are included. Upon the size-independent machine learning correction, the mean absolute deviation (MAD) of the B3LYP/6-311+G(3df,2p)-calculated HOF is reduced from 16.58 to 1.43 kcal/mol and from 17.33 to 1.69 kcal/mol for the training and testing sets (small molecules), respectively. Furthermore, the MAD of the testing set (large molecules) is reduced from 28.75 to 1.67 kcal/mol.
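A toy sketch of the descriptor-based correction idea follows: a multivariable linear regression of the calculation error on descriptors that include the raw calculated value and the molecular size, trained on small "molecules" only and applied to larger ones. All data, descriptors and coefficients below are synthetic assumptions, not the paper's database or model.

```python
# Toy sketch of a descriptor-based correction of calculated heats of formation:
# fit a linear model of the error on descriptors (raw HOF, molecular size, ...)
# using small molecules, then apply the correction to larger ones.
# All data here are synthetic; this is not the paper's actual database or model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

def make_set(n, size_range):
    size = rng.integers(*size_range, size=n)          # number of heavy atoms
    x1 = rng.uniform(0.0, 1.0, n)                     # two other descriptors
    x2 = rng.uniform(0.0, 1.0, n)
    hof_true = -20.0 * size + 15.0 * x1 - 8.0 * x2
    # systematic, size-dependent error of the raw calculation plus noise
    hof_raw = hof_true + 0.8 * size + 3.0 * x1 + rng.normal(0.0, 1.0, n)
    X = np.column_stack([hof_raw, size, x1, x2])
    return X, hof_true, hof_raw

X_small, y_small, raw_small = make_set(150, (2, 10))   # "training": small molecules
X_large, y_large, raw_large = make_set(50, (10, 40))   # "testing": large molecules

model = LinearRegression().fit(X_small, y_small - raw_small)   # learn the error
mad = lambda a, b: np.mean(np.abs(a - b))
print("MAD before correction (large):", round(mad(raw_large, y_large), 2))
print("MAD after correction  (large):",
      round(mad(raw_large + model.predict(X_large), y_large), 2))
```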
Wijsman, Liselotte Willemijn; Cachucho, Ricardo; Hoevenaar-Blom, Marieke Peternella; Mooijaart, Simon Pieter; Richard, Edo
2017-01-01
Background: Smartphone-assisted technologies potentially provide the opportunity for large-scale, long-term, repeated monitoring of cognitive functioning at home. Objective: The aim of this proof-of-principle study was to evaluate the feasibility and validity of performing cognitive tests in people at increased risk of dementia using smartphone-based technology during a 6-month follow-up period. Methods: We used the smartphone-based app iVitality to evaluate five cognitive tests based on conventional neuropsychological tests (Memory-Word, Trail Making, Stroop, Reaction Time, and Letter-N-Back) in healthy adults. Feasibility was tested by studying adherence of all participants to perform smartphone-based cognitive tests. Validity was studied by assessing the correlation between conventional neuropsychological tests and smartphone-based cognitive tests and by studying the effect of repeated testing. Results: We included 151 participants (mean age in years=57.3, standard deviation=5.3). Mean adherence to assigned smartphone tests during 6 months was 60% (SD 24.7). There was moderate correlation between the first smartphone-based test and the conventional test for the Stroop test and the Trail Making test with Spearman ρ=.3-.5 (P<.001). Correlation increased for both tests when comparing the conventional test with the mean score of all attempts a participant had made, with the highest correlation for Stroop panel 3 (ρ=.62, P<.001). Performance on the Stroop and the Trail Making tests improved over time suggesting a learning effect, but the scores on the Letter-N-back, the Memory-Word, and the Reaction Time tests remained stable. Conclusions: Repeated smartphone-assisted cognitive testing is feasible with reasonable adherence and moderate relative validity for the Stroop and the Trail Making tests compared with conventional neuropsychological tests. Smartphone-based cognitive testing seems promising for large-scale data-collection in population studies. PMID:28546139
Explicit Computations of Instantons and Large Deviations in Beta-Plane Turbulence
NASA Astrophysics Data System (ADS)
Laurie, J.; Bouchet, F.; Zaboronski, O.
2012-12-01
We use a path integral formalism and instanton theory in order to make explicit analytical predictions about large deviations and rare events in beta-plane turbulence. The path integral formalism is a concise way to get large deviation results in dynamical systems forced by random noise. In the simplest cases, it leads to the same results as the Freidlin-Wentzell theory, but it has a wider range of applicability. This approach is however usually extremely limited, due to the complexity of the theoretical problems. As a consequence it provides explicit results in a fairly limited number of models, often extremely simple ones with only a few degrees of freedom. Few exceptions exist outside the realm of equilibrium statistical physics. We will show that the barotropic model of beta-plane turbulence is one of these non-equilibrium exceptions. We describe sets of explicit solutions to the instanton equation, and precise derivations of the action functional (or large deviation rate function). The reason why such exact computations are possible is related to the existence of hidden symmetries and conservation laws for the instanton dynamics. We outline several applications of this approach. For instance, we compute explicitly the very low probability of observing flows with an energy much larger or smaller than the typical one. Moreover, we consider regimes for which the system has multiple attractors (corresponding to different numbers of alternating jets), and discuss the computation of transition probabilities between two such attractors. These extremely rare events are of the utmost importance as the dynamics undergo qualitative macroscopic changes during such transitions.
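For orientation, in the simplest additive-noise setting the Freidlin-Wentzell/instanton formalism referred to above estimates path probabilities through an action functional of the schematic form (unit noise covariance assumed)

\[
\mathbb{P}\big[(x_t)_{0\le t\le T}\big]\asymp
\exp\!\Big(-\tfrac{1}{\epsilon}\,\mathcal{A}_T[x]\Big),
\qquad
\mathcal{A}_T[x]=\frac{1}{2}\int_0^T\big\|\dot x_t-b(x_t)\big\|^2\,dt,
\]

for dynamics \(dx_t=b(x_t)\,dt+\sqrt{\epsilon}\,dW_t\); the instanton is the minimizer of \(\mathcal{A}_T\) subject to the rare-event constraint, and the minimal action plays the role of the large deviation rate function.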
NASA Astrophysics Data System (ADS)
Radu, M. C.; Schnakovszky, C.; Herghelegiu, E.; Tampu, N. C.; Zichil, V.
2016-08-01
Experimental tests were carried out on two high-strength steel materials (Ramor 400 and Ramor 550). Quantification of the dimensional accuracy was achieved by measuring the deviations of several geometric parameters of the part (two lengths and two radii). It was found that in the case of Ramor 400 steel, at the jet inlet, the deviations of the part radii are quite small for all three analysed processes. For the linear dimensions, however, the deviations are small only in the case of laser cutting. At the jet outlet, the deviations increased slightly compared with those obtained at the jet inlet, for both materials and for all three processes. For Ramor 550 steel, at the jet inlet the deviations of the part radii are very small in the case of AWJ and laser cutting but larger in the case of plasma cutting. At the jet outlet, the deviations of the part radii are very small for all processes; for the linear dimensions, very small deviations were obtained only in the case of laser processing, the other two processes leading to very large deviations.
Distinguishing genetics and eugenics on the basis of fairness.
Ledley, F D
1994-01-01
There is concern that human applications of modern genetic technologies may lead inexorably to eugenic abuse. To prevent such abuse, it is essential to have clear, formal principles as well as algorithms for distinguishing genetics from eugenics. This work identifies essential distinctions between eugenics and genetics in the implied nature of the social contract and the importance ascribed to individual welfare relative to society. Rawls's construction of 'justice as fairness' is used as a model for how a formal systems of ethics can be used to proscribe eugenic practices. Rawls's synthesis can be applied to this problem if it is assumed that in the original condition all individuals are ignorant of their genetic constitution and unwilling to consent to social structures which may constrain their own potential. The principles of fairness applied to genetics requires that genetic interventions be directed at extending individual liberties and be applied to the greatest benefit of individuals with the least advantages. These principles are incompatible with negative eugenics which would further penalize those with genetic disadvantage. These principles limit positive eugenics to those practices which are designed to provide absolute benefit to those individuals with least advantage, are acceptable to its subjects, and further a system of basic equal liberties. This analysis also illustrates how simple deviations from first principles in Rawls's formulation could countenance eugenic applications of genetic technologies. PMID:7996561
Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets
ERIC Educational Resources Information Center
Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad
2017-01-01
Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…
NASA Astrophysics Data System (ADS)
Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.
2017-11-01
The realization of astrophysical research requires the development of highly sensitive centimeter-band parabolic space radiotelescopes (SRT) with large mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure. Mesh structures of that size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope with a 10 m diameter mirror is currently being developed in Russia within the "SPECTR-R" program. The external dimensions of the telescope exceed the size of existing thermo-vacuum chambers used to verify the SRT reflecting-surface accuracy under the action of space environment factors. Numerical simulation therefore becomes the basis for accepting the adopted design. Such modeling should rely on experimental characterization of the basic structural materials and elements of the future reflector. In this article, computational modeling of the reflecting-surface deviations of a centimeter-band large-sized deployable space reflector during its orbital operation is considered. The factors that determine the deviations are analysed, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation errors; deformations caused by the behavior of composite materials in space). A finite-element model and a set of methods are developed that allow computational modeling of the reflecting-surface deviations caused by all of these factors and account for deviation correction by the spacecraft orientation system. The results of the modeling for two operating modes of the SRT (orientation toward the Sun) are presented.
Internal fixators: a safe option for managing distal femur fractures?
Batista, Bruno Bellaguarda; Salim, Rodrigo; Paccola, Cleber Antonio Jansen; Kfuri, Mauricio
2014-01-01
OBJECTIVE: To evaluate the safety and reliability of internal fixators for the treatment of intra-articular and periarticular distal femur fractures. METHODS: A retrospective evaluation of data from 28 patients with 29 fractures fixed with internal fixators was performed. There was a predominance of male patients (53.5%), with 52% open wound fractures, 76% AO33C type fractures, and a mean follow-up of 21.3 months. Time to fracture healing, mechanical axis deviation, rate of infection and postoperative complications were registered. RESULTS: The healing rate was 93% in this sample, with an average time of 5.5 months. Twenty-seven percent of patients ended up with mechanical axis deviation, mostly resulting from poor primary intra-operative reduction. There were two cases of implant loosening, two of implant breakage, and three patients presented with a stiff knee. No case of infection was observed. The healing rate in this study was comparable with the current literature; there was a high degree of angular deviation, especially in the coronal plane. CONCLUSION: Internal fixators are a breakthrough in the treatment of knee fractures, but their use does not preclude application of the principles of anatomical articular reduction and mechanical axis restoration. Level of Evidence II, Retrospective Study. PMID:25061424
Large-deviation properties of Brownian motion with dry friction.
Chen, Yaming; Just, Wolfram
2014-10-01
We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for the Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long time limit.
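For concreteness, one commonly studied form of the model and of the functionals mentioned above is (our notation; the viscous term and normalizations vary between works)

\[
\dot v(t)=-\mu\,\operatorname{sgn}\!\big(v(t)\big)-\gamma\,v(t)+\sqrt{2D}\,\xi(t),
\]

with Gaussian white noise \(\xi\), together with the local time at \(v=0\), the occupation time of \(v>0\), and the displacement,

\[
L_T=\int_0^T\delta\big(v(t)\big)\,dt,\qquad
T^{+}_T=\int_0^T\Theta\big(v(t)\big)\,dt,\qquad
X_T=\int_0^T v(t)\,dt .
\]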
In-depth analysis and discussions of water absorption-typed high power laser calorimeter
NASA Astrophysics Data System (ADS)
Wei, Ji Feng
2017-02-01
In high-power and high-energy laser measurement, the absorber materials can be easily destroyed under long-term direct laser irradiation. In order to improve the calorimeter's measuring capacity, a measuring system directly using water flow as the absorber medium was built. The system's basic principles and the designing parameters of major parts were elaborated. The system's measuring capacity, the laser working modes, and the effects of major parameters were analyzed deeply. Moreover, the factors that may affect the accuracy of measurement were analyzed and discussed. The specific control measures and methods were elaborated. The self-calibration and normal calibration experiments show that this calorimeter has very high accuracy. In electrical calibration, the average correction coefficient is only 1.015, with standard deviation of only 0.5%. In calibration experiments, the standard deviation relative to a middle-power standard calorimeter is only 1.9%.
Large deviation approach to the generalized random energy model
NASA Astrophysics Data System (ADS)
Dorlas, T. C.; Dukes, W. M. B.
2002-05-01
The generalized random energy model is a generalization of the random energy model introduced by Derrida to mimic the ultrametric structure of the Parisi solution of the Sherrington-Kirkpatrick model of a spin glass. It was solved exactly in two special cases by Derrida and Gardner. A complete solution for the thermodynamics in the general case was given by Capocaccia et al. Here we use large deviation theory to analyse the model in a very straightforward way. We also show that the variational expression for the free energy can be evaluated easily using the Cauchy-Schwarz inequality.
Current fluctuations in periodically driven systems
NASA Astrophysics Data System (ADS)
Barato, Andre C.; Chetrite, Raphael
2018-05-01
Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.
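In standard notation, the object computed in this setting is the scaled cumulant generating function of a time-integrated current \(J_t\) and its Legendre transform, the large deviation rate function:

\[
\lambda(z)=\lim_{t\to\infty}\frac{1}{t}\ln\big\langle e^{z J_t}\big\rangle,
\qquad
I(j)=\sup_{z}\big[z\,j-\lambda(z)\big],
\qquad
\mathbb{P}\big(J_t/t\approx j\big)\asymp e^{-t\,I(j)};
\]

the result quoted above identifies \(\lambda(z)\) with a maximal Floquet exponent of the tilted, time-periodic generator.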
2013-09-01
followed by an exhaust nozzle. He considered the turbojet as a hybrid of “propeller gas turbine” and “rocket” principles. In 1936, he then conceived...first bench-test of a jet engine using liquid fuel. Simultaneous with Whittle, a German scientist was making great headway into gas turbine engine...deviations, secondary flows, and similar loss producing phenomena. The results are applicable to both military and civil applications of gas turbine
Zipf’s Law and the Frequency of Kazak Phonemes in Word Formation
NASA Astrophysics Data System (ADS)
Xin, Ruiqing; Li, Yonghong; Yu, Hongzhi
2018-03-01
Zipf’s Law is the basis of the principle of Least Effort, and is widely applicable in all natural fields. The occurrence frequency of each phoneme in all Kazak words has been counted to test the applicability of Zipf’s law to Kazak. Due to the limited sample size, deviation is unavoidable, but the overall results indicate that the occurrence frequency and the reciprocal rank of each phoneme in Kazak word formation are in line with Zipf’s distribution.
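An illustrative way to check a Zipf-type relation f(r) ∝ r^(-s) is a log-log fit of frequency against rank; the counts below are synthetic, not the Kazak phoneme counts.

```python
# Illustrative Zipf check: fit f(r) ~ C * r^(-s) to frequency-rank data on a
# log-log scale. The counts below are synthetic, not the Kazak phoneme counts.
import numpy as np

counts = np.array([980, 510, 330, 250, 200, 160, 140, 120, 110, 100])
freq = np.sort(counts)[::-1]                 # frequencies in decreasing order
rank = np.arange(1, freq.size + 1)

slope, intercept = np.polyfit(np.log(rank), np.log(freq), 1)
print(f"fitted exponent s = {-slope:.2f} (s = 1 is the classical Zipf law)")
```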
Determining Equilibrium Position For Acoustical Levitation
NASA Technical Reports Server (NTRS)
Barmatz, M. B.; Aveni, G.; Putterman, S.; Rudnick, J.
1989-01-01
Equilibrium position and orientation of acoustically-levitated weightless object determined by calibration technique on Earth. From calibration data, possible to calculate equilibrium position and orientation in presence of Earth gravitation. Sample not levitated acoustically during calibration. Technique relies on Boltzmann-Ehrenfest adiabatic-invariance principle. One converts resonant-frequency-shift data into data on normalized acoustical potential energy. Minimum of energy occurs at equilibrium point. From gradients of acoustical potential energy, one calculates acoustical restoring force or torque on objects as function of deviation from equilibrium position or orientation.
[Systemic approach to radiobiological studies].
Bulanova, K Ia; Lobanok, L M
2004-01-01
The principles of information theory were applied to the analysis of radiobiological effects. The perception of ionizing radiation as a signal enables a living organism to discern its benefit or harm, to react to absolute and relatively small deviations, to keep the logic and chronicle of events, to use past experience when reacting in the present, and to forecast consequences. The systemic analysis of the organism's response to ionizing radiation allows explaining the peculiarities of the effects of different absorbed doses, hormesis, apoptosis, remote consequences and other post-radiation effects.
Herges, T; Wenzel, W
2005-01-14
We report the reproducible first-principles folding of the 40 amino-acid, three-helix headpiece of the HIV accessory protein in a recently developed all-atom free-energy force field. Six of 20 simulations using an adapted basin-hopping method converged to better than 3 Å backbone rms deviation to the experimental structure. Using over 60 000 low-energy conformations of this protein, we constructed a decoy tree that completely characterizes its folding funnel.
Vortex-induced vibrations of a flexible cylinder at large inclination angle
Bourguet, Rémi; Triantafyllou, Michael S.
2015-01-01
The free vibrations of a flexible circular cylinder inclined at 80° within a uniform current are investigated by means of direct numerical simulation, at Reynolds number 500 based on the body diameter and inflow velocity. In spite of the large inclination angle, the cylinder exhibits regular in-line and cross-flow vibrations excited by the flow through the lock-in mechanism, i.e. synchronization of body motion and vortex formation. A profound reconfiguration of the wake is observed compared with the stationary body case. The vortex-induced vibrations are found to occur under parallel, but also oblique vortex shedding where the spanwise wavenumbers of the wake and structural response coincide. The shedding angle and frequency increase with the spanwise wavenumber. The cylinder vibrations and fluid forces present a persistent spanwise asymmetry which relates to the asymmetry of the local current relative to the body axis, owing to its in-line bending. In particular, the asymmetrical trend of flow–body energy transfer results in a monotonic orientation of the structural waves. Clockwise and counter-clockwise figure eight orbits of the body alternate along the span, but the latter are found to be more favourable to structure excitation. Additional simulations at normal incidence highlight a dramatic deviation from the independence principle, which states that the system behaviour is essentially driven by the normal component of the inflow velocity. PMID:25512586
A Validated Method for the Quality Control of Andrographis paniculata Preparations.
Karioti, Anastasia; Timoteo, Patricia; Bergonzi, Maria Camilla; Bilia, Anna Rita
2017-10-01
Andrographis paniculata is a herbal drug of Asian traditional medicine largely employed for the treatment of several diseases. Recently, it has been introduced in Europe for the prophylactic and symptomatic treatment of common cold and as an ingredient of dietary supplements. The active principles are diterpenes with andrographolide as the main representative. In the present study, an analytical protocol was developed for the determination of the main constituents in the herb and preparations of A. paniculata. Three different extraction protocols (methanol extraction using a modified Soxhlet procedure, maceration under ultrasonication, and decoction) were tested. Ultrasonication achieved the highest content of analytes. HPLC conditions were optimized in terms of solvent mixtures, time course, and temperature. A reversed phase C18 column eluted with a gradient system consisting of acetonitrile and acidified water and including an isocratic step at 30 °C was used. The HPLC method was validated for linearity, limits of quantitation and detection, repeatability, precision, and accuracy. The overall method was validated for precision and accuracy over at least three different concentration levels. Relative standard deviation was less than 1.13%, whereas recovery was between 95.50% and 97.19%. The method also proved to be suitable for the determination of a large number of commercial samples and was proposed to the European Pharmacopoeia for the quality control of Andrographidis herba. Georg Thieme Verlag KG Stuttgart · New York.
Adsorption structures and energetics of molecules on metal surfaces: Bridging experiment and theory
NASA Astrophysics Data System (ADS)
Maurer, Reinhard J.; Ruiz, Victor G.; Camarillo-Cisneros, Javier; Liu, Wei; Ferri, Nicola; Reuter, Karsten; Tkatchenko, Alexandre
2016-05-01
Adsorption geometry and stability of organic molecules on surfaces are key parameters that determine the observable properties and functions of hybrid inorganic/organic systems (HIOSs). Despite many recent advances in precise experimental characterization and improvements in first-principles electronic structure methods, reliable databases of structures and energetics for large adsorbed molecules are largely amiss. In this review, we present such a database for a range of molecules adsorbed on metal single-crystal surfaces. The systems we analyze include noble-gas atoms, conjugated aromatic molecules, carbon nanostructures, and heteroaromatic compounds adsorbed on five different metal surfaces. The overall objective is to establish a diverse benchmark dataset that enables an assessment of current and future electronic structure methods, and motivates further experimental studies that provide ever more reliable data. Specifically, the benchmark structures and energetics from experiment are here compared with the recently developed van der Waals (vdW) inclusive density-functional theory (DFT) method, DFT + vdWsurf. In comparison to 23 adsorption heights and 17 adsorption energies from experiment we find a mean average deviation of 0.06 Å and 0.16 eV, respectively. This confirms the DFT + vdWsurf method as an accurate and efficient approach to treat HIOSs. A detailed discussion identifies remaining challenges to be addressed in future development of electronic structure methods, for which the here presented benchmark database may serve as an important reference.
The acute toxicity of local anesthetics.
Mather, Laurence E
2010-11-01
Systemic toxicity, usually from overdose or intravascular dose, is feared because it mainly affects the heart and brain, and may be acutely life-threatening. Pharmacological studies of local anesthetic toxicity have largely been reviewed primarily relating to the evaluation of ropivacaine and levobupivacaine during the past decade. This review/opinion focuses more on the principles and concepts underlying the main models used, from chemical pharmacological and pharmacokinetic perspectives. Research models required to produce pivotal toxicity data are discussed. The potencies for neural blockade and systemic toxicity are associated across virtually all models, with some deviations through molecular stereochemistry. These models show that all local anesthetics can produce direct cardiovascular system toxicity and CNS excitotoxicity that may further affect the cardiovascular system response. Whereas the longer-acting local anesthetics are more likely to cause cardiac death by malignant arrhythmias, the shorter-acting agents are more likely to cause cardiac contraction failure. In most models, equi-anesthetic doses of ropivacaine and levobupivacaine are less likely to produce serious toxicity than bupivacaine. Of the various models, this reviewer favors a whole-body large animal preparation because of the comprehensive data collection possible. The conscious sheep preparation has contributed more than any other, and may be regarded as the de facto 'standard' experimental model for concurrent study of local anesthetic toxicity ± pharmacokinetics, using experimental designs that can reproduce the toxicity seen in clinical accidents.
First principle study of structural, elastic and electronic properties of APt3 (A=Mg, Sc, Y and Zr)
NASA Astrophysics Data System (ADS)
Benamer, A.; Roumili, A.; Medkour, Y.; Charifi, Z.
2018-02-01
We report results obtained from first-principles calculations on APt3 compounds with A=Mg, Sc, Y and Zr. Our results for the lattice parameter a are in good agreement with experimental data, with deviations of less than 0.8%. Single-crystal elastic constants are calculated, then polycrystalline elastic moduli (bulk, shear and Young moduli, Poisson ratio, anisotropy factor) are presented. Based on the Debye model, the Debye temperature Θ_D is calculated from the sound velocities V_l, V_t and V_m. Band structure results show that the studied compounds are electrical conductors; the conduction mechanism is ensured by Pt-d electrons. Different hybridisation states are observed between Pt-d and A-d orbitals. The study of the charge density distribution and the population analysis shows the coexistence of ionic, covalent and metallic bonds.
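For reference, the standard relations typically used in this step express the mean sound velocity and the Debye temperature in terms of the longitudinal and transverse velocities and the atomic number density n:

\[
V_m=\left[\frac{1}{3}\left(\frac{2}{V_t^{3}}+\frac{1}{V_l^{3}}\right)\right]^{-1/3},
\qquad
\Theta_D=\frac{\hbar}{k_B}\,\big(6\pi^{2}n\big)^{1/3}V_m .
\]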
Vernik, N V; Ivantsova, M A; Yashin, D I
2015-01-01
To evaluate ways of reducing complications during endoscopic procedures based on the principles of professional ethics and on improving the quality of the working environment. Data from the fundamental literature, evidence-based medicine, scientific publications and internet portals. Deontology is the fundamental principle of medical practice and one of the main factors of professional effectiveness. Complications in endoscopy are often the consequence of deviations from deontological principles. A whole range of psychological factors influences the professional activity of endoscopists, among which the emotional "burn-out" syndrome (EBS) occupies one of the main places. Prevention and timely relief of EBS serve to improve the quality of practical work. Creating a favorable working environment is a strategically important task in the prevention of endoscopy complications. The questions of practical realization of deontological principles in endoscopy are the subject of further discussion.
Stability and charge separation of different CH3NH3SnI3/TiO2 interface: A first-principles study
NASA Astrophysics Data System (ADS)
Yang, Zhenzhen; Wang, Yuanxu; Liu, Yunyan
2018-05-01
The interface has an important effect on charge separation in perovskite solar cells. Using first-principles calculations, we studied several different interfaces between CH3NH3SnI3 and TiO2. The interfacial structure and electronic structure of these interfaces are thoroughly explored. We found that the SnI2/anatase (SnI2/A) system is more stable than the other three systems, because an anatase surface allows the Sn-I bonds to relax back to their pristine values faster than a rutile surface does, and the SnI2/A system has a smaller standard deviation. The calculated plane-averaged electrostatic potential and the density of states suggest that the SnI2/anatase interface gives a better separation of photo-generated electron-hole pairs.
Cosmological implications of a large complete quasar sample.
Segal, I E; Nicoll, J F
1998-04-28
Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423-1460]. The Expanding Universe model as represented by the Friedmann-Lemaître cosmology with parameters q_0 = 0, Λ = 0, denoted C1, and chronometric cosmology (no relevant adjustable parameters), denoted C2, are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude-redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by more than 2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar "evolution," which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact.
NASA Astrophysics Data System (ADS)
Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.
2014-12-01
Extended-range high-resolution mesoscale simulations with limited-area atmospheric models, when applied to downscale regional analysis fields over large spatial domains, can provide valuable information for many applications, including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations in the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, the large scales in the simulated high-resolution meteorological fields are spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values, leading to significant inaccuracies in the predicted surface layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate the possible improvements achievable using higher spatiotemporal resolution.
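Schematically (our notation, not the exact formulation used in the study), spectral nudging adds to the model tendency a scale-selective relaxation toward the driving fields, while the surface forcing described above is a pointwise (grid) relaxation:

\[
\frac{\partial X}{\partial t}=M(X)\;-\;\sum_{|k|\le k_c}\frac{\alpha(z)}{\tau}\,\big(\hat X_k-\hat X_k^{\mathrm{drv}}\big)\,e^{\,i k\cdot x},
\qquad
\frac{\partial X_s}{\partial t}=M_s(X_s)-\frac{1}{\tau_s}\big(X_s-X_s^{\mathrm{ref}}\big),
\]

where the spectral sum runs only over wavenumbers below the cutoff \(k_c\) (the large scales), \(\alpha(z)\) is a vertical profile, \(\tau\) and \(\tau_s\) are relaxation times, and \(X_s^{\mathrm{ref}}\) denotes the surface fields from the offline surface scheme.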
New method to improve dynamic stiffness of electro-hydraulic servo systems
NASA Astrophysics Data System (ADS)
Bai, Yanhong; Quan, Long
2013-09-01
Most current research on improving stiffness focuses on the application of control theory. However, the controller in a closed-loop hydraulic control system takes effect only after the controlled position has deviated, so the control action lags. Thus the dynamic performance against force disturbances, and hence the dynamic load stiffness, cannot be improved appreciably by advanced control algorithms alone. In this paper, the elementary principle of keeping the piston position unchanged under a sudden external load change by charging additional oil is analyzed. On this basis, the concept of raising the dynamic stiffness of an electro-hydraulic position servo system by flow feedforward compensation is put forward, and a scheme using two servo valves to realize the compensation is presented, in which a second fast-response servo valve is added to the regular electro-hydraulic servo system and used specifically to compensate, in time, the compressed oil volume caused by the load impact. The two valves are arranged in parallel to control the cylinder jointly. Furthermore, the model of flow compensation is derived, from which the product of the amplitude and width of the valve's pulse command signal can be calculated, and rules for determining the amplitude and width of the pulse signal are obtained from analysis and simulations. Using the proposed scheme, simulations and experiments at different positions with different force changes are conducted. The simulation and experimental results show that the dynamic performance of the system against load force impact is greatly improved, with a smaller maximal dynamic position deviation and a shorter settling time; that is, the dynamic load stiffness of the system is evidently raised. This paper thus proposes a new method which can effectively improve the dynamic stiffness of electro-hydraulic servo systems.
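The oil-compression estimate behind the pulse sizing can be written schematically as follows (our notation; the paper's actual derivation may differ in detail):

\[
\Delta p=\frac{\Delta F}{A},\qquad
\Delta V\approx\frac{V_0\,\Delta p}{\beta_e}=\frac{V_0\,\Delta F}{A\,\beta_e},\qquad
q_{\mathrm{pulse}}\,t_{\mathrm{pulse}}\approx\Delta V,
\]

where \(\Delta F\) is the sudden load change, \(A\) the piston area, \(V_0\) the trapped oil volume, \(\beta_e\) the effective bulk modulus, and \(q_{\mathrm{pulse}}\), \(t_{\mathrm{pulse}}\) the amplitude (flow) and width (duration) of the compensating valve's pulse command.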
Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L; Bakhtina, Marina M; Becker, Donald F; Bedwell, Gregory J; Bekdemir, Ahmet; Besong, Tabot M D; Birck, Catherine; Brautigam, Chad A; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B; Chaton, Catherine T; Cölfen, Helmut; Connaghan, Keith D; Crowley, Kimberly A; Curth, Ute; Daviter, Tina; Dean, William L; Díez, Ana I; Ebel, Christine; Eckert, Debra M; Eisele, Leslie E; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A; Fairman, Robert; Finn, Ron M; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E; Cifre, José G Hernández; Herr, Andrew B; Howell, Elizabeth E; Isaac, Richard S; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A; Kwon, Hyewon; Larson, Adam; Laue, Thomas M; Le Roy, Aline; Leech, Andrew P; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R; Ma, Jia; May, Carrie A; Maynard, Ernest L; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K; Park, Jin-Ku; Pawelek, Peter D; Perdue, Erby E; Perkins, Stephen J; Perugini, Matthew A; Peterson, Craig L; Peverelli, Martin G; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E; Raynal, Bertrand D E; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E; Rosenberg, Rose; Rowe, Arthur J; Rufer, Arne C; Scott, David J; Seravalli, Javier G; Solovyova, Alexandra S; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M; Streicher, Werner W; Sumida, John P; Swygert, Sarah G; Szczepanowski, Roman H; Tessmer, Ingrid; Toth, Ronald T; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F W; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M; Schuck, Peter
2015-01-01
Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.
Zhao, Huaying; Ghirlando, Rodolfo; Alfonso, Carlos; Arisaka, Fumio; Attali, Ilan; Bain, David L.; Bakhtina, Marina M.; Becker, Donald F.; Bedwell, Gregory J.; Bekdemir, Ahmet; Besong, Tabot M. D.; Birck, Catherine; Brautigam, Chad A.; Brennerman, William; Byron, Olwyn; Bzowska, Agnieszka; Chaires, Jonathan B.; Chaton, Catherine T.; Cölfen, Helmut; Connaghan, Keith D.; Crowley, Kimberly A.; Curth, Ute; Daviter, Tina; Dean, William L.; Díez, Ana I.; Ebel, Christine; Eckert, Debra M.; Eisele, Leslie E.; Eisenstein, Edward; England, Patrick; Escalante, Carlos; Fagan, Jeffrey A.; Fairman, Robert; Finn, Ron M.; Fischle, Wolfgang; de la Torre, José García; Gor, Jayesh; Gustafsson, Henning; Hall, Damien; Harding, Stephen E.; Cifre, José G. Hernández; Herr, Andrew B.; Howell, Elizabeth E.; Isaac, Richard S.; Jao, Shu-Chuan; Jose, Davis; Kim, Soon-Jong; Kokona, Bashkim; Kornblatt, Jack A.; Kosek, Dalibor; Krayukhina, Elena; Krzizike, Daniel; Kusznir, Eric A.; Kwon, Hyewon; Larson, Adam; Laue, Thomas M.; Le Roy, Aline; Leech, Andrew P.; Lilie, Hauke; Luger, Karolin; Luque-Ortega, Juan R.; Ma, Jia; May, Carrie A.; Maynard, Ernest L.; Modrak-Wojcik, Anna; Mok, Yee-Foong; Mücke, Norbert; Nagel-Steger, Luitgard; Narlikar, Geeta J.; Noda, Masanori; Nourse, Amanda; Obsil, Tomas; Park, Chad K.; Park, Jin-Ku; Pawelek, Peter D.; Perdue, Erby E.; Perkins, Stephen J.; Perugini, Matthew A.; Peterson, Craig L.; Peverelli, Martin G.; Piszczek, Grzegorz; Prag, Gali; Prevelige, Peter E.; Raynal, Bertrand D. E.; Rezabkova, Lenka; Richter, Klaus; Ringel, Alison E.; Rosenberg, Rose; Rowe, Arthur J.; Rufer, Arne C.; Scott, David J.; Seravalli, Javier G.; Solovyova, Alexandra S.; Song, Renjie; Staunton, David; Stoddard, Caitlin; Stott, Katherine; Strauss, Holger M.; Streicher, Werner W.; Sumida, John P.; Swygert, Sarah G.; Szczepanowski, Roman H.; Tessmer, Ingrid; Toth, Ronald T.; Tripathy, Ashutosh; Uchiyama, Susumu; Uebel, Stephan F. W.; Unzai, Satoru; Gruber, Anna Vitlin; von Hippel, Peter H.; Wandrey, Christine; Wang, Szu-Huan; Weitzel, Steven E.; Wielgus-Kutrowska, Beata; Wolberger, Cynthia; Wolff, Martin; Wright, Edward; Wu, Yu-Sung; Wubben, Jacinta M.; Schuck, Peter
2015-01-01
Analytical ultracentrifugation (AUC) is a first principles based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies. PMID:25997164
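To make the correction step concrete, here is a minimal sketch of how multiplicative correction factors for elapsed time, temperature, and radial magnification could be applied to measured s-values before pooling across instruments; the numbers and the purely multiplicative combination are illustrative assumptions, not the study's calibration data or protocol.

```python
import statistics

# Hypothetical measured sedimentation coefficients (in Svedberg, S) for BSA monomer
# on three instruments, each with per-instrument calibration correction factors.
measurements = [
    # (s_measured, f_time, f_temperature, f_radial)
    (4.10, 1.002, 1.031, 1.010),
    (4.55, 0.998, 0.975, 0.992),
    (4.31, 1.000, 1.004, 1.001),
]

def corrected_s(s_measured, f_time, f_temperature, f_radial):
    """Apply external-calibration correction factors multiplicatively (assumed form)."""
    return s_measured * f_time * f_temperature * f_radial

corrected = [corrected_s(*m) for m in measurements]
print("mean = %.3f S, sd = %.3f S" % (statistics.mean(corrected), statistics.stdev(corrected)))
```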
Gordon, G M; Steyn, M
2016-05-01
A recent review paper on cranio-facial superimposition (CFS) stated that "there have been specific conceptual variances" from the original methods used in the practice of skull-photo superimposition, leading to poor results as far as accuracy is concerned. It was argued that the deviations in the practice of the technique have resulted in the reduced accuracies (for both failure to include and failure to exclude) that are noted in several recent studies. This paper aims to present the results from recent research to highlight the advancement of skull-photo/cranio-facial superimposition, and to discuss some of the issues raised regarding deviations from original techniques. The evolving methodology of CFS is clarified in context with the advancement of technology, forensic science and specifically within the field of forensic anthropology. Developments in the skull-photo/cranio-facial superimposition techniques have largely focused on testing reliability and accuracy objectively. Techniques now being employed by forensic anthropologists must conform to rigorous scientific testing and methodologies. Skull-photo/cranio-facial superimposition is constantly undergoing accuracy and repeatability testing which is in line with the principles of the scientific method and additionally allows for advancement in the field. Much of the research has indicated that CFS is useful in exclusion which is consistent with the concept of Popperian falsifiability - a hypothesis and experimental design which is falsifiable. As the hypothesis is disproved or falsified, another evolves to replace it and explain the new observations. Current and future studies employing different methods to test the accuracy and reliability of skull-photo/cranio-facial superimposition will enable researchers to establish the contribution the technique can have for identification purposes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Did recent world record marathon runners employ optimal pacing strategies?
Angus, Simon D
2014-01-01
We apply statistical analysis of high frequency (1 km) split data for the most recent two world-record marathon runs: Run 1 (2:03:59, 28 September 2008) and Run 2 (2:03:38, 25 September 2011). Based on studies in the endurance cycling literature, we develop two principles to approximate 'optimal' pacing in the field marathon. By utilising GPS and weather data, we test, and then de-trend, for each athlete's field response to gradient and headwind on course, recovering standardised proxies for power-based pacing traces. The resultant traces were analysed to ascertain if either runner followed optimal pacing principles; and characterise any deviations from optimality. Whereas gradient was insignificant, headwind was a significant factor in running speed variability for both runners, with Runner 2 targeting the (optimal) parallel variation principle, whilst Runner 1 did not. After adjusting for these responses, neither runner followed the (optimal) 'even' power pacing principle, with Runner 2's macro-pacing strategy fitting a sinusoidal oscillator with exponentially expanding envelope whilst Runner 1 followed a U-shaped, quadratic form. The study suggests that: (a) better pacing strategy could provide elite marathon runners with an economical pathway to significant performance improvements at world-record level; and (b) the data and analysis herein is consistent with a complex-adaptive model of power regulation.
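A rough sketch of the de-trending idea, not the authors' actual model: regress per-kilometre speed on gradient and headwind by ordinary least squares and treat the standardized residuals as an environment-adjusted pacing proxy. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_km = 42
gradient = rng.normal(0.0, 1.5, n_km)   # % grade per km split (synthetic)
headwind = rng.normal(0.0, 2.0, n_km)   # m/s per km split (synthetic)
speed = 5.7 - 0.03 * gradient - 0.05 * headwind + rng.normal(0, 0.05, n_km)  # m/s

# Design matrix: intercept, gradient, headwind
X = np.column_stack([np.ones(n_km), gradient, headwind])
beta, *_ = np.linalg.lstsq(X, speed, rcond=None)

residual = speed - X @ beta              # environment-adjusted pacing trace
standardized = (residual - residual.mean()) / residual.std()
print("fitted coefficients:", np.round(beta, 3))
print("first 5 standardized pacing residuals:", np.round(standardized[:5], 2))
```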
Kuritz, K; Stöhr, D; Pollak, N; Allgöwer, F
2017-02-07
Cyclic processes, in particular the cell cycle, are of great importance in cell biology. Continued improvement in cell population analysis methods like fluorescence microscopy, flow cytometry, CyTOF or single-cell omics made mathematical methods based on ergodic principles a powerful tool in studying these processes. In this paper, we establish the relationship between cell cycle analysis with ergodic principles and age structured population models. To this end, we describe the progression of a single cell through the cell cycle by a stochastic differential equation on a one dimensional manifold in the high dimensional dataspace of cell cycle markers. Given the assumption that the cell population is in a steady state, we derive transformation rules which transform the number density on the manifold to the steady state number density of age structured population models. Our theory facilitates the study of cell cycle dependent processes including local molecular events, cell death and cell division from high dimensional "snapshot" data. Ergodic analysis can in general be applied to every process that exhibits a steady state distribution. By combining ergodic analysis with age structured population models we furthermore provide the theoretic basis for extensions of ergodic principles to distribution that deviate from their steady state. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mental Health Nursing, Mechanical Restraint Measures and Patients’ Legal Rights
Birkeland, Soren; Gildberg, Frederik A.
2016-01-01
Coercive mechanical restraint (MR) in psychiatry constitutes the perhaps most extensive exception from the common health law requirement for involving patients in health care decisions and achieving their informed consent prior to treatment. Coercive measures and particularly MR seriously collide with patient autonomy principles, pose a particular challenge to psychiatric patients’ legal rights, and put intensified demands on health professional performance. Legal rights principles require rationale for coercive measure use be thoroughly considered and rigorously documented. This article presents an in-principle Danish Psychiatric Complaint Board decision concerning MR use initiated by untrained staff. The case illustrates that, judicially, weight must be put on the patient perspective on course of happenings and especially when health professional documentation is scant, patients’ rights call for taking notice of patient evaluations. Consequently, if it comes out that psychiatric staff failed to pay appropriate consideration for the patient’s mental state, perspective, and expressions, patient response deviations are to be judicially interpreted in this light potentially rendering MR use illegitimated. While specification of law criteria might possibly improve law use and promote patients’ rights, education of psychiatry professionals must address the need for, as far as possible, paying due regard to meeting patient perspectives and participation principles as well as formal law and documentation requirements. PMID:27123152
Scaling Deviations for Neutrino Reactions in Asymptotically Free Field Theories
DOE R&D Accomplishments Database
Wilczek, F. A.; Zee, A.; Treiman, S. B.
1974-11-01
Several aspects of deep inelastic neutrino scattering are discussed in the framework of asymptotically free field theories. We first consider the growth behavior of the total cross sections at large energies. Because of the deviations from strict scaling which are characteristic of such theories, the growth need not be linear. However, upper and lower bounds are established which rather closely bracket a linear growth. We next consider in more detail the expected pattern of scaling deviation for the structure functions and, correspondingly, for the differential cross sections. The analysis here is based on certain speculative assumptions. The focus is on qualitative effects of scaling breakdown as they may show up in the x and y distributions. The last section of the paper deals with deviations from the Callan-Gross relation.
Colorimetric analysis of three editions of the Velhagen-Broschmann pseudoisochromatic colour plates.
Kuchenbecker, Jörn; Nicklas, Sven; Behrens-Baumann, Wolfgang
2010-01-01
Chromatic variations across different copies and different editions of pseudoisochromatic tests and violation of underlying principles of construction for individual plates can influence test results. We analysed the colorimetric characteristics of three different editions of Velhagen-Broschmann pseudoisochromatic plates (30th edition printed in 1995, 31st edition printed in 1997, 32nd edition printed in 2000). One hundred and twelve coloured dots of 18 plates were chosen from each edition. We measured RGB and CIE XYZ values using a spectrophotometer. Differences in lightness and chromaticity between corresponding dots of different editions were analysed in terms of Delta L* and Delta u'v', respectively. For each plate deviations from dichromatic confusion lines were analysed. Furthermore, we determined the relative luminance of a target compared to its background in terms of the Weber contrast. The mean Delta L* across editions was 2.05 (+/-1.4) and the mean Delta u'v' was 0.0078 (+/-0.0029). For two plates the deviations of targets from dichromatic confusion lines exceeded suggested values. For a number of plates, the lightness contrast between the symbol and its background was high. Comparison with psychophysical data showed that these colour plates are easily detectable by colour-deficient observers. Lightness and chromatic variation across the three editions was moderate except for a small number of plates perhaps due to inaccuracies in the printing process. The design of several plates should be revised according to standard principles of construction of colour deficiency tests. (c) 2009 S. Karger AG, Basel.
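For orientation, the quantities compared across editions can be computed as follows; the colour and luminance values in the example are placeholders, not measurements from the plates.

```python
import math

def weber_contrast(l_target, l_background):
    """Weber contrast of a target relative to its background luminance."""
    return (l_target - l_background) / l_background

def delta_uv(u1, v1, u2, v2):
    """Euclidean chromaticity difference in the CIE u'v' diagram."""
    return math.hypot(u1 - u2, v1 - v2)

def delta_lightness(l1, l2):
    """Absolute lightness difference Delta L* between corresponding dots."""
    return abs(l1 - l2)

# Placeholder values for one dot in two editions
print(round(weber_contrast(42.0, 38.5), 3))
print(round(delta_uv(0.211, 0.474, 0.206, 0.470), 4))
print(round(delta_lightness(61.2, 63.0), 2))
```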
Design and Development of Lateral Flight Director
NASA Technical Reports Server (NTRS)
Kudlinski, Kim E.; Ragsdale, William A.
1999-01-01
The current control law used for the flight director in the Boeing 737 simulator is inadequate with large localizer deviations near the middle marker. Eight different control laws are investigated. A heuristic method is used to design control laws that meet specific performance criteria. The design of each is described in detail. Several tests were performed and compared with the current control law for the flight director. The goal was to design a control law for the flight director that can be used with large localizer deviations near the middle marker, which could be caused by winds or wake turbulence, without increasing its level of complexity.
On the Geometry of Chemical Reaction Networks: Lyapunov Function and Large Deviations
NASA Astrophysics Data System (ADS)
Agazzi, A.; Dembo, A.; Eckmann, J.-P.
2018-04-01
In an earlier paper, we proved the validity of large deviations theory for the particle approximation of quite general chemical reaction networks. In this paper, we extend its scope and present a more geometric insight into the mechanism of that proof, exploiting the notion of spherical image of the reaction polytope. This allows us to view the asymptotic behavior of the vector field describing the mass-action dynamics of chemical reactions as the result of an interaction between the faces of this polytope in different dimensions. We also illustrate some local aspects of the problem in a discussion of Wentzell-Freidlin theory, together with some examples.
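As a companion illustration (not taken from the paper), the mass-action vector field whose asymptotic behavior is discussed can be written down for a toy network A + B <-> C:

```python
import numpy as np

# Toy reaction network: A + B -> C (rate k1), C -> A + B (rate k2)
k1, k2 = 2.0, 1.0
stoich = np.array([[-1,  1],   # change in A per reaction channel
                   [-1,  1],   # change in B
                   [ 1, -1]])  # change in C

def propensities(x):
    """Mass-action reaction rates at concentration vector x = (A, B, C)."""
    a, b, c = x
    return np.array([k1 * a * b, k2 * c])

def vector_field(x):
    """Deterministic mass-action dynamics dx/dt = S @ r(x)."""
    return stoich @ propensities(x)

x0 = np.array([1.0, 0.5, 0.1])
print(vector_field(x0))
```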
Al-Ekrish, Asma'a A; Alfadda, Sara A; Ameen, Wadea; Hörmann, Romed; Puelacher, Wolfgang; Widmann, Gerlig
2018-06-16
To compare the surface of computer-aided design (CAD) models of the maxilla produced using ultra-low MDCT doses combined with filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR) reconstruction techniques with that produced from a standard dose/FBP protocol. A cadaveric completely edentulous maxilla was imaged using a standard dose protocol (CTDIvol: 29.4 mGy) and FBP, in addition to 5 low dose test protocols (LD1-5) (CTDIvol: 4.19, 2.64, 0.99, 0.53, and 0.29 mGy) reconstructed with FBP, ASIR 50, ASIR 100, and MBIR. A CAD model from each test protocol was superimposed onto the reference model using the 'Best Fit Alignment' function. Differences between the test and reference models were analyzed as maximum and mean deviations, and root-mean-square of the deviations, and color-coded models were obtained which demonstrated the location, magnitude and direction of the deviations. Based upon the magnitude, size, and distribution of areas of deviations, CAD models from the following protocols were comparable to the reference model: FBP/LD1; ASIR 50/LD1 and LD2; ASIR 100/LD1, LD2, and LD3; MBIR/LD1. The following protocols demonstrated deviations mostly between 1-2 mm or under 1 mm but over large areas, and so their effect on surgical guide accuracy is questionable: FBP/LD2; MBIR/LD2, LD3, LD4, and LD5. The following protocols demonstrated large deviations over large areas and therefore were not comparable to the reference model: FBP/LD3, LD4, and LD5; ASIR 50/LD3, LD4, and LD5; ASIR 100/LD4, and LD5. When MDCT is used for CAD models of the jaws, dose reductions of 86% may be possible with FBP, 91% with ASIR 50, and 97% with ASIR 100. Analysis of the stability and accuracy of CAD/CAM surgical guides as directly related to the jaws is needed to confirm the results.
NASA Astrophysics Data System (ADS)
Daglar, Bihter; Demirel, Gokcen Birlik; Khudiyev, Tural; Dogan, Tamer; Tobail, Osama; Altuntas, Sevde; Buyukserin, Fatih; Bayindir, Mehmet
2014-10-01
The melt-infiltration technique enables the fabrication of complex nanostructures for a wide range of applications in optics, electronics, biomaterials, and catalysis. Here, anemone-like nanostructures are produced for the first time under the surface/interface principles of melt-infiltration as a non-lithographic method. Functionalized anodized aluminum oxide (AAO) membranes are used as templates to provide large-area production of nanostructures, and polycarbonate (PC) films are used as active phase materials. In order to understand formation dynamics of anemone-like structures finite element method (FEM) simulations are performed and it is found that wetting behaviour of the polymer is responsible for the formation of cavities at the caps of the structures. These nanostructures are examined in the surface-enhanced-Raman-spectroscopy (SERS) experiment and they exhibit great potential in this field. Reproducible SERS signals are detected with relative standard deviations (RSDs) of 7.2-12.6% for about 10 000 individual spots. SERS measurements are demonstrated at low concentrations of Rhodamine 6G (R6G), even at the picomolar level, with an enhancement factor of ~10^11. This high enhancement factor is ascribed to the significant electric field enhancement at the cavities of nanostructures and nanogaps between them, which is supported by finite difference time-domain (FDTD) simulations. These novel nanostructured films can be further optimized to be used in chemical and plasmonic sensors and as a single molecule SERS detection platform.
Large deviations and portfolio optimization
NASA Astrophysics Data System (ADS)
Sornette, Didier
Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends by a general functional integral formulation. A major item is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return and the role of large deviations in multiplicative processes, and the different optimal strategies for the investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
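A short, self-contained illustration of the distinction between average and typical growth in a multiplicative process, which is where Cramér's theory of large deviations enters; the return values are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
up, down, p = 1.6, 0.6, 0.5   # illustrative one-period multipliers

# Exact values
avg_growth = np.log(p * up + (1 - p) * down)               # log of the expected multiplier, > 0
typical_growth = p * np.log(up) + (1 - p) * np.log(down)   # expected log multiplier, < 0

# Monte Carlo check of the typical (almost-sure) growth rate.
# The *average* wealth, by contrast, is dominated by exponentially rare lucky paths,
# which is exactly the large-deviation content of the distinction.
n_steps, n_paths = 1000, 2000
factors = rng.choice([up, down], size=(n_paths, n_steps), p=[p, 1 - p])
mc_typical = np.log(factors).sum(axis=1).mean() / n_steps

print("average growth rate (exact)  :", round(avg_growth, 4))
print("typical growth rate (exact)  :", round(typical_growth, 4))
print("typical growth rate (sampled):", round(mc_typical, 4))
```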
Implementing the "Marketing You" Project in Large Sections of Principles of Marketing
ERIC Educational Resources Information Center
Smith, Karen H.
2004-01-01
There is mounting pressure on business education to increase experiential learning at the same time that budget constraints are forcing universities to increase class size. This article explains the design and implementation of the "Marketing You" project in two large sections of Principles of Marketing to bring experiential learning into the…
On Ruch's Principle of Decreasing Mixing Distance in classical statistical physics
NASA Astrophysics Data System (ADS)
Busch, Paul; Quadt, Ralf
1990-10-01
Ruch's Principle of Decreasing Mixing Distance is reviewed as a statistical physical principle, and its basic support and geometric interpretation, the Ruch-Schranner-Seligman theorem, is generalized to be applicable to a large representative class of classical statistical systems.
Offshore fatigue design turbulence
NASA Astrophysics Data System (ADS)
Larsen, Gunner C.
2001-07-01
Fatigue damage on wind turbines is mainly caused by stochastic loading originating from turbulence. While onshore sites display large differences in terrain topology, and thereby also in turbulence conditions, offshore sites are far more homogeneous, as the majority of them are likely to be associated with shallow water areas. However, despite this fact, specific recommendations on offshore turbulence intensities, applicable for fatigue design purposes, are lacking in the present IEC code. This article presents specific guidelines for such loading. These guidelines are based on the statistical analysis of a large number of wind data originating from two Danish shallow water offshore sites. The turbulence standard deviation depends on the mean wind speed, upstream conditions, measuring height and thermal convection. Defining a population of turbulence standard deviations, at a given measuring position, uniquely by the mean wind speed, variations in upstream conditions and atmospheric stability will appear as variability of the turbulence standard deviation. Distributions of such turbulence standard deviations, conditioned on the mean wind speed, are quantified by fitting the measured data to logarithmic Gaussian distributions. By combining a simple heuristic load model with the parametrized conditional probability density functions of the turbulence standard deviations, an empirical offshore design turbulence intensity is determined. For pure stochastic loading (as associated with standstill situations), the design turbulence intensity yields a fatigue damage equal to the average fatigue damage caused by the distributed turbulence intensity. If the stochastic loading is combined with a periodic deterministic loading (as in the normal operating situation), the proposed design turbulence intensity is shown to be conservative.
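One way to turn a lognormal conditional distribution of turbulence standard deviations into a single fatigue-equivalent value is an m-th-moment (Wöhler-exponent-weighted) average; this is a common heuristic sketched under assumed parameters, not necessarily the load model used in the article.

```python
import numpy as np

def effective_sigma(mu_log, s_log, m):
    """
    Damage-equivalent turbulence standard deviation for a lognormally distributed
    sigma_u (mu_log, s_log are the parameters of ln sigma_u), assuming fatigue damage
    proportional to sigma_u**m with Woehler exponent m (a common heuristic).
    Uses E[sigma^m]**(1/m) = exp(mu_log + 0.5 * m * s_log**2) for a lognormal variable.
    """
    return np.exp(mu_log + 0.5 * m * s_log ** 2)

# Illustrative parameters for one mean-wind-speed bin
mu_log, s_log = np.log(1.2), 0.25     # ln of sigma_u in m/s
for m in (3, 10):                     # e.g. welded steel vs. glass fibre
    print(f"m = {m:2d}: sigma_eff = {effective_sigma(mu_log, s_log, m):.2f} m/s")
```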
Ranking and validation of spallation models for isotopic production cross sections of heavy residua
NASA Astrophysics Data System (ADS)
Sharma, Sushil K.; Kamys, Bogusław; Goldenbaum, Frank; Filges, Detlef
2017-07-01
The production cross sections of isotopically identified residual nuclei of spallation reactions induced by 136Xe projectiles at 500 A MeV on a hydrogen target were analyzed in a two-step model. The first stage of the reaction was described by the INCL4.6 model of an intranuclear cascade of nucleon-nucleon and pion-nucleon collisions, whereas the second stage was analyzed by means of four different models: ABLA07, GEM2, GEMINI++ and SMM. The quality of the data description was judged quantitatively using two statistical deviation factors: the H-factor and the M-factor. It was found that the present analysis leads to a different ranking of models as compared to that obtained from the qualitative inspection of the data reproduction. The disagreement was caused by the sensitivity of the deviation factors to the large statistical errors present in some of the data. A new deviation factor, the A-factor, which is not sensitive to the statistical errors of the cross sections, was proposed. The quantitative ranking of models performed using the A-factor agreed well with the qualitative analysis of the data. It was concluded that using deviation factors weighted by statistical errors may lead to erroneous conclusions when the data cover a large range of values. The quality of data reproduction by the theoretical models is discussed. Some systematic deviations of the theoretical predictions from the experimental results are observed.
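To make the sensitivity argument concrete, the sketch below contrasts an error-weighted RMS deviation factor with an error-independent log-ratio factor; these are commonly quoted forms, and the exact definitions of the H-, M-, and A-factors in the paper may differ.

```python
import numpy as np

def h_factor(calc, exp, err):
    """Error-weighted RMS deviation: points with very large statistical errors
    contribute almost nothing, which can distort model rankings."""
    return np.sqrt(np.mean(((calc - exp) / err) ** 2))

def mean_log_ratio(calc, exp):
    """Error-independent factor based on |log10(calc/exp)|, insensitive to the
    size of the experimental error bars."""
    return np.mean(np.abs(np.log10(calc / exp)))

exp = np.array([10.0, 5.0, 1.0, 0.1])       # hypothetical cross sections
calc = np.array([12.0, 4.0, 1.5, 0.05])
err_small = np.array([0.5, 0.3, 0.1, 0.01])
err_large = np.array([5.0, 3.0, 1.0, 0.09])  # large statistical errors

print(h_factor(calc, exp, err_small), h_factor(calc, exp, err_large))
print(mean_log_ratio(calc, exp))
```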
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magnelli, A; Xia, P
2015-06-15
Purpose: Spine stereotactic body radiotherapy requires very conformal dose distributions and precise delivery. Prior to treatment, a KV cone-beam CT (KV-CBCT) is registered to the planning CT to provide image-guided positional corrections, which depend on selection of the region of interest (ROI) because of imperfect patient positioning and anatomical deformation. Our objective is to determine the dosimetric impact of ROI selections. Methods: Twelve patients were selected for this study, with treatment regions varying from C-spine to T-spine. For each patient, the KV-CBCT was registered to the planning CT three times using distinct ROIs: one encompassing the entire patient, a large ROI containing large bony anatomy, and a small target-focused ROI. Each registered CBCT volume, saved as an aligned dataset, was then sent to the planning system. The treatment plan was applied to each dataset and dose was recalculated. The tumor dose coverage (percentage of target volume receiving prescription dose), maximum point dose to 0.03 cc of the spinal cord, and dose to 10% of the spinal cord volume (V10) for each alignment were compared to the original plan. Results: The average magnitude of tumor coverage deviation was 3.9%±5.8% with external contour, 1.5%±1.1% with large ROI, and 1.3%±1.1% with small ROI. Spinal cord V10 deviation from plan was 6.6%±6.6% with external contour, 3.5%±3.1% with large ROI, and 1.2%±1.0% with small ROI. Spinal cord max point dose deviation from plan was 12.2%±13.3% with external contour, 8.5%±8.4% with large ROI, and 3.7%±2.8% with small ROI. Conclusion: A small ROI focused on the target results in the smallest deviation from planned dose to target and cord, although rotations at large distances from the targets were observed. It is recommended that image fusion during CBCT focus narrowly on the target volume to minimize dosimetric error. Improvement in patient setups may further reduce residual errors.
Thin Disk Accretion in the Magnetically-Arrested State
NASA Astrophysics Data System (ADS)
Avara, Mark J.; McKinney, Jonathan; Reynolds, Christopher S.
2016-01-01
Shakura-Sunyaev thin disk theory is fundamental to black hole astrophysics. Although applications of the theory are widespread and provide powerful tools for explaining observations, such as Soltan's argument using quasar power, broadened iron line measurements, continuum fitting, and, recently, reverberation mapping, a significant large-scale magnetic field causes substantial deviations from standard thin disk behavior. We have used fully 3D general relativistic MHD simulations with cooling to explore the thin (H/R ~ 0.1) magnetically arrested disk (MAD) state and quantify these deviations. This work demonstrates that accumulation of large-scale magnetic flux into the MAD state is possible, and then extends prior numerical studies of thicker disks, allowing us to measure how jet power scales with the disk state and providing a natural explanation of phenomena like jet quenching in the high-soft state of X-ray binaries. We have also simulated thin MAD disks with a misaligned black hole spin axis in order to understand further deviations from thin disk theory that may significantly affect observations.
Solute segregation and deviation from bulk thermodynamics at nanoscale crystalline defects.
Titus, Michael S; Rhein, Robert K; Wells, Peter B; Dodge, Philip C; Viswanathan, Gopal Babu; Mills, Michael J; Van der Ven, Anton; Pollock, Tresa M
2016-12-01
It has long been known that solute segregation at crystalline defects can have profound effects on material properties. Nevertheless, quantifying the extent of solute segregation at nanoscale defects has proven challenging due to experimental limitations. A combined experimental and first-principles approach has been used to study solute segregation at extended intermetallic phases ranging from 4 to 35 atomic layers in thickness. Chemical mapping by both atom probe tomography and high-resolution scanning transmission electron microscopy demonstrates a markedly different composition for the 4-atomic-layer-thick phase, where segregation has occurred, compared to the approximately 35-atomic-layer-thick bulk phase of the same crystal structure. First-principles predictions of bulk free energies in conjunction with direct atomistic simulations of the intermetallic structure and chemistry demonstrate the breakdown of bulk thermodynamics at nanometer dimensions and highlight the importance of symmetry breaking due to the proximity of interfaces in determining equilibrium properties.
The fragmentation instability of a black hole with f(R) global monopole under GUP
NASA Astrophysics Data System (ADS)
Chen, Lingshen; Cheng, Hongbo
2018-03-01
We study the fragmentation of black holes containing an f(R) global monopole under the generalized uncertainty principle (GUP) and show the influences of this kind of monopole, of f(R) theory, and of the GUP on the evolution of black holes. We focus on the possibility that the black hole breaks into two parts, analysed by means of the second law of thermodynamics. We derive the entropies of the initial black hole and of the broken parts when the generalization of Heisenberg's uncertainty principle is introduced. We find that, without the generalization, the f(R) global monopole black hole remains stable instead of splitting, because the entropy difference is negative. The fragmentation of the black hole can happen if the black hole entropies are limited by the GUP, and a considerable deviation from general relativity leads to the case in which one fragmented black hole has the smaller mass and the other the larger.
Positronium, antihydrogen, light, and the equivalence principle
NASA Astrophysics Data System (ADS)
Karshenboim, Savely G.
2016-07-01
While discussing a certain generic difference in the effects of gravity on particles and antiparticles, various neutral particles (i.e., particles which are identical with their antiparticles) could be a perfect probe. One such neutral particle is the positronium atom, which has been available for precision experiments for a few decades. The other important neutral particle is the photon. The behavior of light in the presence of a gravitational field has been key both to building and developing the theory of general relativity and to verifying it experimentally. The very idea of antigravity for antimatter strongly contradicts both the principles of general relativity and its experimentally verified consequences. Consideration of existing experimental results on photons and positrons rules out antigravity and leads to the conclusion that the deviation of the ratio of the free-fall accelerations of particles and antiparticles cannot exceed the level of 1 × 10^-5.
A prevalence-based association test for case-control studies.
Ryckman, Kelli K; Jiang, Lan; Li, Chun; Bartlett, Jacquelaine; Haines, Jonathan L; Williams, Scott M
2008-11-01
Genetic association is often determined in case-control studies by the differential distribution of alleles or genotypes. Recent work has demonstrated that association can also be assessed by deviations from the expected distributions of alleles or genotypes. Specifically, multiple methods motivated by the principles of Hardy-Weinberg equilibrium (HWE) have been developed. However, these methods do not take into account many of the assumptions of HWE. Therefore, we have developed a prevalence-based association test (PRAT) as an alternative method for detecting association in case-control studies. This method, also motivated by the principles of HWE, uses an estimated population allele frequency to generate expected genotype frequencies instead of using the case and control frequencies separately. Our method often has greater power, under a wide variety of genetic models, to detect association than genotypic, allelic or Cochran-Armitage trend association tests. Therefore, we propose PRAT as a powerful alternative method of testing for association.
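A minimal sketch of the underlying idea, comparing observed case genotype counts with HWE expectations generated from a pooled allele frequency; this is an illustration of the logic, not the exact PRAT statistic, and the counts and the choice of two degrees of freedom are assumptions.

```python
from scipy.stats import chi2

def hwe_expected(n, p):
    """Expected genotype counts (AA, Aa, aa) for n individuals under HWE
    with frequency p of allele A."""
    return [n * p * p, n * 2 * p * (1 - p), n * (1 - p) * (1 - p)]

def genotype_chi2(observed, expected):
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=2)   # df = 2 chosen here purely for illustration

# Hypothetical genotype counts (AA, Aa, aa)
cases, controls = [60, 90, 50], [45, 110, 45]
n_cases = sum(cases)
total = [c + k for c, k in zip(cases, controls)]
# Allele frequency estimated from cases and controls pooled together
p_pooled = (2 * total[0] + total[1]) / (2 * sum(total))

stat, pval = genotype_chi2(cases, hwe_expected(n_cases, p_pooled))
print(f"chi2 = {stat:.2f}, p = {pval:.3g}")
```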
Yönt, Gülendam Hakverdioğlu; Korhan, Esra Akin; Dizer, Berna; Gümüş, Fatma; Koyuncu, Rukiye
2014-01-01
Nurses are likely to face the dilemma of whether or not to resort to physical restraints and to have a hard time making that decision. This is a descriptive study. A total of 55 nurses participated in the research. For data collection, a question form developed by the researchers to determine nurses' perceptions of ethical dilemmas in the application of physical restraint was used. A descriptive analysis was made by calculating the mean, standard deviation, and maximum and minimum values. Of the nurses, 36.4% expressed having difficulty in deciding to use physical restraint. Nurses reported that they experience ethical dilemmas mainly in relation to the ethical principles of nonmaleficence, beneficence, and convenience. We conclude that the majority of nurses working in critical care units apply physical restraint to patients, although they face ethical dilemmas concerning harm and benefit principles during the application.
The importance of ethics in the field of human tissue banking.
Morales Pedraza, Jorge; Herson, Marisa Roma
2012-03-01
A tissue bank is accountable to the community in fulfilling the expectations of tissue donors, their families and recipients. The expected output from the altruistic donation is that safe and high quality human tissue grafts will be provided for the medical treatment of patients. Thus, the undertakings of tissue banks have to not only be authorised and audited by competent national health care authorities, but also comply with a strong ethical code, a code of practices and ethical principles. Ethical practice in the field of tissue banking requires the setting of principles, the identification of possible deviations and the establishment of mechanisms that will detect and hinder abuses that may occur during the procurement, processing and distribution of human tissues for transplantation. The opinions and suggestions expressed by the authors in this paper are not necessarily a reflection of those of the institutions or community they are linked to.
Study of the unique artistry of Lopburi Province for the design of a brass tea set of the Bantahkrayang community
NASA Astrophysics Data System (ADS)
Pliansiri, V.; Seviset, S.
2017-07-01
The objectives of this study were as follows: 1) to study the production process of the handcrafted brass tea set; and 2) to design and develop the handcrafted brass tea set. The design process started from a joint analytical process and conceptual framework for product design, Quality Function Deployment, the Theory of Inventive Problem Solving, Principles of Craft Design, and the principle of Reverse Engineering. Experts in the fields of both industrial product design and brass handicraft products evaluated the brass tea set design, a prototype was created, and it was assessed by a sample of consumers who had previously bought brass tea sets of the Bantahkrayang community. The statistical methods used were percentage, mean (X̄) and standard deviation (SD). Consumer satisfaction with the handcrafted brass tea set was at a high level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daniel, Scott F.; Linder, Eric V.; Lawrence Berkeley National Laboratory, Berkeley, California
Deviations from general relativity, such as could be responsible for the cosmic acceleration, would influence the growth of large-scale structure and the deflection of light by that structure. We clarify the relations between several different model-independent approaches to deviations from general relativity appearing in the literature, devising a translation table. We examine current constraints on such deviations, using weak gravitational lensing data of the CFHTLS and COSMOS surveys, cosmic microwave background radiation data of WMAP5, and supernova distance data of Union2. A Markov chain Monte Carlo likelihood analysis of the parameters over various redshift ranges yields consistency with general relativity at the 95% confidence level.
Cosmological implications of a large complete quasar sample
Segal, I. E.; Nicoll, J. F.
1998-01-01
Objective and reproducible determinations of the probabilistic significance levels of the deviations between theoretical cosmological prediction and direct model-independent observation are made for the Large Bright Quasar Sample [Foltz, C., Chaffee, F. H., Hewett, P. C., MacAlpine, G. M., Turnshek, D. A., et al. (1987) Astron. J. 94, 1423–1460]. The Expanding Universe model as represented by the Friedman–Lemaitre cosmology with parameters qo = 0, Λ = 0 denoted as C1 and chronometric cosmology (no relevant adjustable parameters) denoted as C2 are the cosmologies considered. The mean and the dispersion of the apparent magnitudes and the slope of the apparent magnitude–redshift relation are the directly observed statistics predicted. The C1 predictions of these cosmology-independent quantities are deviant by as much as 11σ from direct observation; none of the C2 predictions deviate by >2σ. The C1 deviations may be reconciled with theory by the hypothesis of quasar “evolution,” which, however, appears incapable of being substantiated through direct observation. The excellent quantitative agreement of the C1 deviations with those predicted by C2 without adjustable parameters for the results of analysis predicated on C1 indicates that the evolution hypothesis may well be a theoretical artifact. PMID:9560182
Extending the Principles of Intensive Writing to Large Macroeconomics Classes
ERIC Educational Resources Information Center
Docherty, Peter; Tse, Harry; Forman, Ross; McKenzie, Jo
2010-01-01
The authors report on the design and implementation of a pilot program to extend the principles of intensive writing outlined by W. Lee Hansen (1998), Murray S. Simpson and Shireen E. Carroll (1999) and David Carless (2006) to large macroeconomics classes. The key aspect of this program was its collaborative nature, with staff from two specialist…
Toward Instructional Design Principles: Inducing Faraday's Law with Contrasting Cases
ERIC Educational Resources Information Center
Kuo, Eric; Wieman, Carl E.
2016-01-01
Although physics education research (PER) has improved instructional practices, there are not agreed upon principles for designing effective instructional materials. Here, we illustrate how close comparison of instructional materials could support the development of such principles. Specifically, in discussion sections of a large, introductory…
The August Krogh principle applies to plants
NASA Technical Reports Server (NTRS)
Wayne, R.; Staves, M. P.
1996-01-01
The Krogh principle refers to the use of a large number of animals to study a large number of physiological problems, rather than limiting study to a particular organism for all problems. There may be organisms that are more suited to the study of a particular problem than others. This same principle applies to plants. The authors are concerned with the recent trend in plant biology of using Arabidopsis thaliana as the "organism of choice." Arabidopsis is an excellent organism for molecular genetic research, but other plants are superior models for other research areas of plant biology. The authors present examples of the successful use of the Krogh principle in plant cell biology research, emphasizing the particular characteristics of the selected research organisms that make them the appropriate choice.
The Kolmogorov-Obukhov Statistical Theory of Turbulence
NASA Astrophysics Data System (ADS)
Birnir, Björn
2013-08-01
In 1941 Kolmogorov and Obukhov postulated the existence of a statistical theory of turbulence, which allows the computation of statistical quantities that can be simulated and measured in a turbulent system. These are quantities such as the moments, the structure functions and the probability density functions (PDFs) of the turbulent velocity field. In this paper we will outline how to construct this statistical theory from the stochastic Navier-Stokes equation. The additive noise in the stochastic Navier-Stokes equation is generic noise given by the central limit theorem and the large deviation principle. The multiplicative noise consists of jumps multiplying the velocity, modeling jumps in the velocity gradient. We first estimate the structure functions of turbulence and establish the Kolmogorov-Obukhov 1962 scaling hypothesis with the She-Leveque intermittency corrections. Then we compute the invariant measure of turbulence, writing the stochastic Navier-Stokes equation as an infinite-dimensional Ito process, and solving the linear Kolmogorov-Hopf functional differential equation for the invariant measure. Finally we project the invariant measure onto the PDF. The PDFs turn out to be the normalized inverse Gaussian (NIG) distributions of Barndorff-Nilsen, and compare well with PDFs from simulations and experiments.
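For reference, the She-Leveque intermittency-corrected structure function exponents mentioned above can be tabulated against the Kolmogorov 1941 prediction using the standard She-Leveque formula (a companion illustration, not code from the paper):

```python
def zeta_k41(p):
    """Kolmogorov 1941 scaling exponent of the p-th structure function."""
    return p / 3.0

def zeta_she_leveque(p):
    """She-Leveque (1994) intermittency-corrected exponent."""
    return p / 9.0 + 2.0 * (1.0 - (2.0 / 3.0) ** (p / 3.0))

for p in range(1, 9):
    print(f"p={p}: K41={zeta_k41(p):.3f}  She-Leveque={zeta_she_leveque(p):.3f}")
# The two agree exactly at p = 3 (both give 1), as required by the 4/5 law.
```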
Integrating pharmacology topics in high school biology and chemistry classes improves performance
NASA Astrophysics Data System (ADS)
Schwartz-Bloom, Rochelle D.; Halpin, Myra J.
2003-11-01
Although numerous programs have been developed for kindergarten through grade 12 (K-12) science education, evaluation has been difficult owing to the inherent problems of conducting controlled experiments in the typical classroom. Using a rigorous experimental design, we developed and tested a novel program containing a series of pharmacology modules (e.g., drug abuse) to help high school students learn basic principles in biology and chemistry. High school biology and chemistry teachers were recruited for the study, and they attended a 1-week workshop to learn how to integrate pharmacology into their teaching. Working with university pharmacology faculty, they also developed classroom activities. The following year, teachers field-tested the pharmacology modules in their classrooms. Students in classrooms using the pharmacology topics scored significantly higher on a multiple-choice test of basic biology and chemistry concepts compared with controls. Very large effect sizes (up to 1.27 standard deviations) were obtained when teachers used as many as four modules. In addition, biology students increased performance on chemistry questions and chemistry students increased performance on biology questions. Substantial gains in achievement may be made when high school students are taught science using topics that are interesting and relevant to their own lives.
Nagata, Yuki; Lennartz, Christian
2008-07-21
The atomistic simulation of the charge transfer process for an amorphous Alq3 system is reported. By employing electrostatic potential charges, we calculate site energies and find that the standard deviation of the site energy distribution is about twice as large as predicted in previous research. The charge mobility is calculated via the Miller-Abrahams formalism and the master equation approach. We find that the wide site energy distribution governs Poole-Frenkel-type behavior of charge mobility against electric field, while the spatially correlated site energy is not a dominant mechanism of Poole-Frenkel behavior in the range from 2×10^5 to 1.4×10^6 V/cm. We also reveal that randomly meshed connectivities are, in principle, required to account for the Poole-Frenkel mechanism. Charge carriers find a zigzag pathway at low electric field, while they find a straight pathway along the electric field when a high electric field is applied. In the space-charge-limited current scheme, the charge-carrier density increases with electric field strength so that the nonlinear behavior of charge mobility is enhanced through the strong charge-carrier density dependence of charge mobility.
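The Miller-Abrahams hopping rate underlying such mobility calculations has the standard form sketched below; the attempt frequency, inverse localization length, and distances are generic placeholder values, not parameters fitted to the amorphous Alq3 morphology.

```python
import math

KB = 8.617333262e-5  # Boltzmann constant in eV/K

def miller_abrahams_rate(r_ij, delta_e, nu0=1e12, gamma=2.0, temperature=300.0):
    """
    Miller-Abrahams hopping rate from site i to site j.
    r_ij        : hopping distance in nm
    delta_e     : site energy difference E_j - E_i in eV
    nu0         : attempt frequency in 1/s (placeholder)
    gamma       : inverse localization length in 1/nm (placeholder)
    """
    rate = nu0 * math.exp(-2.0 * gamma * r_ij)
    if delta_e > 0.0:                       # hops upward in energy are thermally activated
        rate *= math.exp(-delta_e / (KB * temperature))
    return rate                             # downward hops carry no Boltzmann penalty

print(f"{miller_abrahams_rate(1.0, +0.10):.3e} 1/s")  # uphill hop
print(f"{miller_abrahams_rate(1.0, -0.10):.3e} 1/s")  # downhill hop
```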
Sachse, Torsten; Martinez, Todd J.; Dietzek, Benjamin; ...
2018-01-03
Not only the molecular structure but also the presence or absence of aggregates determines many properties of organic materials. Theoretical investigation of such aggregates requires the prediction of a suitable set of diverse structures. Here, we present the open-source program EnergyScan for the unbiased prediction of geometrically diverse sets of small aggregates. Its bottom-up approach is complementary to existing ones by performing a detailed scan of an aggregate's potential energy surface, from which diverse local energy minima are selected. We cross-validate this approach by predicting both literature-known and heretofore unreported geometries of the urea dimer. We also predict a diverse set of dimers of the less intensely studied case of porphin, which we investigate further using quantum chemistry. For several dimers, we find strong deviations from a reference absorption spectrum, which we explain using computed transition densities. Furthermore, this proof of principle clearly shows that EnergyScan successfully predicts aggregates exhibiting large structural and spectral diversity.
Leaner and greener analysis of cannabinoids.
Mudge, Elizabeth M; Murch, Susan J; Brown, Paula N
2017-05-01
There is an explosion in the number of labs analyzing cannabinoids in marijuana (Cannabis sativa L., Cannabaceae) but existing methods are inefficient, require expert analysts, and use large volumes of potentially environmentally damaging solvents. The objective of this work was to develop and validate an accurate method for analyzing cannabinoids in cannabis raw materials and finished products that is more efficient and uses fewer toxic solvents. An HPLC-DAD method was developed for eight cannabinoids in cannabis flowers and oils using a statistically guided optimization plan based on the principles of green chemistry. A single-laboratory validation determined the linearity, selectivity, accuracy, repeatability, intermediate precision, limit of detection, and limit of quantitation of the method. Amounts of individual cannabinoids above the limit of quantitation in the flowers ranged from 0.02 to 14.9% w/w, with repeatability ranging from 0.78 to 10.08% relative standard deviation. The intermediate precision determined using HorRat ratios ranged from 0.3 to 2.0. The LOQs for individual cannabinoids in flowers ranged from 0.02 to 0.17% w/w. This is a significant improvement over previous methods and is suitable for a wide range of applications including regulatory compliance, clinical studies, direct patient medical services, and commercial suppliers.
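The HorRat values quoted above are ratios of observed precision to the precision predicted by the Horwitz equation; a small helper in its standard form (shown for orientation, not as the authors' exact implementation) is:

```python
import math

def horwitz_prsd(mass_fraction):
    """Predicted relative standard deviation (%) from the Horwitz equation,
    with concentration expressed as a mass fraction (e.g. 0.01 for 1% w/w)."""
    return 2.0 ** (1.0 - 0.5 * math.log10(mass_fraction))

def horrat(observed_rsd_percent, mass_fraction):
    """HorRat = observed RSD / Horwitz-predicted RSD."""
    return observed_rsd_percent / horwitz_prsd(mass_fraction)

# e.g. a cannabinoid present at 1% w/w measured with 4% RSD
print(round(horwitz_prsd(0.01), 2))   # ~4.0 %
print(round(horrat(4.0, 0.01), 2))    # ~1.0
```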
NASA Astrophysics Data System (ADS)
Dai, Jianhong; Yin, Yunyu; Wang, Xiao; Shen, Xudong; Liu, Zhehong; Ye, Xubin; Cheng, Jinguang; Jin, Changqing; Zhou, Guanghui; Hu, Zhiwei; Weng, Shihchang; Wan, Xiangang; Long, Youwen
2018-02-01
A new pyrochlore oxide Cd2Ir2O7 with an Ir5+ charge state was prepared by high-pressure techniques. Although strong spin-orbit coupling (SOC) dominates the electronic states in most iridates, so that a SOC-Mott state is proposed in Sr2IrO4 under the assumption of an undistorted IrO6 octahedral crystalline field, the strongly distorted one in the current Cd2Ir2O7 exhibits a competing interaction with the SOC. Contrary to expectations from the strong SOC limit, Cd2Ir2O7 deviates from a nonmagnetic and insulating J = 0 ground state. It displays short-range ferromagnetic correlations and metallic electrical transport properties. First-principles calculations reproduce the experimental observations well, revealing the large mixture between the j_eff = 1/2 and j_eff = 3/2 bands near the Fermi surface due to the significant distortion of the IrO6 octahedra. This work sheds light on the critical role of a noncubic crystalline field in electronic properties, which has been ignored in past studies of 5d-electron systems.
A dynamic-solver-consistent minimum action method: With an application to 2D Navier-Stokes equations
NASA Astrophysics Data System (ADS)
Wan, Xiaoliang; Yu, Haijun
2017-02-01
This paper discusses the necessity and strategy to unify the development of a dynamic solver and a minimum action method (MAM) for a spatially extended system when employing the large deviation principle (LDP) to study the effects of small random perturbations. A dynamic solver is used to approximate the unperturbed system, and a minimum action method is used to approximate the LDP, which corresponds to solving an Euler-Lagrange equation related to but more complicated than the unperturbed system. We will clarify possible inconsistencies induced by independent numerical approximations of the unperturbed system and the LDP, based on which we propose to define both the dynamic solver and the MAM on the same approximation space for spatial discretization. The semi-discrete LDP can then be regarded as the exact LDP of the semi-discrete unperturbed system, which is a finite-dimensional ODE system. We achieve this methodology for the two-dimensional Navier-Stokes equations using a divergence-free approximation space. The method developed can be used to study the nonlinear instability of wall-bounded parallel shear flows, and be generalized straightforwardly to three-dimensional cases. Numerical experiments are presented.
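As a toy illustration of what a minimum action method computes (for a one-dimensional ODE rather than the Navier-Stokes setting, and not the dynamic-solver-consistent scheme of the paper), one can minimize a discretized Freidlin-Wentzell action between the two stable states of a double-well system:

```python
import numpy as np
from scipy.optimize import minimize

def drift(x):
    """Gradient drift of the double-well potential V(x) = x**4/4 - x**2/2."""
    return x - x ** 3

T, n = 10.0, 100                 # time horizon and number of interior path points
dt = T / (n + 1)
x_start, x_end = -1.0, 1.0       # the two stable equilibria

def action(interior):
    """Discretized Freidlin-Wentzell action 0.5 * integral |x' - b(x)|^2 dt."""
    path = np.concatenate(([x_start], interior, [x_end]))
    velocity = np.diff(path) / dt
    midpoint = 0.5 * (path[:-1] + path[1:])
    return 0.5 * np.sum((velocity - drift(midpoint)) ** 2) * dt

initial_guess = np.linspace(x_start, x_end, n + 2)[1:-1]
result = minimize(action, initial_guess, method="L-BFGS-B")
print("minimal action ~", round(result.fun, 4))  # approaches 2*(V(0)-V(-1)) = 0.5 for large T
```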
Structure Defect Property Relationships in Binary Intermetallics
NASA Astrophysics Data System (ADS)
Medasani, Bharat; Ding, Hong; Chen, Wei; Persson, Kristin; Canning, Andrew; Haranczyk, Maciej; Asta, Mark
2015-03-01
Ordered intermetallics are light weight materials with technologically useful high temperature properties such as creep resistance. Knowledge of constitutional and thermal defects is required to understand these properties. Vacancies and antisites are the dominant defects in the intermetallics and their concentrations and formation enthalpies could be computed by using first principles density functional theory and thermodynamic formalisms such as dilute solution method. Previously many properties of the intermetallics such as melting temperatures and formation enthalpies were statistically analyzed for large number of intermetallics using structure maps and data mining approaches. We undertook a similar exercise to establish the dependence of the defect properties in binary intermetallics on the underlying structural and chemical composition. For more than 200 binary intermetallics comprising of AB, AB2 and AB3 structures, we computed the concentrations and formation enthalpies of vacancies and antisites in a small range of stoichiometries deviating from ideal stoichiometry. The calculated defect properties were datamined to gain predictive capabilities of defect properties as well as to classify the intermetallics for their suitability in high-T applications. Supported by the US DOE under Contract No. DEAC02-05CH11231 under the Materials Project Center grant (Award No. EDCBEE).
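In the simplest non-interacting dilute limit, which the dilute-solution formalism refines, equilibrium point-defect concentrations follow an Arrhenius form; the formation energy below is an arbitrary illustrative value, not one computed in this work.

```python
import math

KB = 8.617333262e-5  # Boltzmann constant in eV/K

def dilute_defect_concentration(formation_energy_ev, temperature_k, formation_entropy_kb=0.0):
    """Equilibrium site fraction of a point defect in the non-interacting dilute limit:
    c = exp(S_f / k_B) * exp(-E_f / (k_B * T))."""
    return math.exp(formation_entropy_kb) * math.exp(-formation_energy_ev / (KB * temperature_k))

for t in (800, 1200, 1600):
    print(t, f"{dilute_defect_concentration(1.5, t):.2e}")  # E_f = 1.5 eV is illustrative
```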
NASA Astrophysics Data System (ADS)
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows us to extract the infinite-time and infinite-size limit of these estimators.
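A bare-bones discrete-time analogue of the population-dynamics idea, estimating a scaled cumulant generating function for a two-state Markov chain; this is a didactic sketch rather than the continuous-time cloning algorithm analyzed in the work, and the deliberately small population makes the finite-size effects discussed above visible.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],       # transition matrix of a two-state chain
              [0.2, 0.8]])
s = 1.0                         # biasing parameter conjugate to the observable
n_clones, n_steps = 100, 2000   # small population -> visible finite-size bias

states = rng.integers(0, 2, size=n_clones)
log_growth = 0.0
for _ in range(n_steps):
    # each clone jumps according to P
    states = np.array([rng.choice(2, p=P[x]) for x in states])
    # observable increment: time spent in state 1; weight clones by exp(s * increment)
    weights = np.exp(s * (states == 1))
    log_growth += np.log(weights.mean())
    # selection step: resample the population proportionally to the weights
    states = rng.choice(states, size=n_clones, p=weights / weights.sum())

print("SCGF estimate psi(s) ~", round(log_growth / n_steps, 4))

# exact value from the largest eigenvalue of the tilted matrix P(x,y)*exp(s*1[y=1])
tilted = P * np.exp(s * np.array([0.0, 1.0]))[None, :]
print("exact psi(s)         =", round(np.log(np.max(np.linalg.eigvals(tilted).real)), 4))
```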
New ventures require accurate risk analyses and adjustments.
Eastaugh, S R
2000-01-01
For new business ventures to succeed, healthcare executives need to conduct robust risk analyses and develop new approaches to balance risk and return. Risk analysis involves examination of objective risks and harder-to-quantify subjective risks. Mathematical principles applied to investment portfolios also can be applied to a portfolio of departments or strategic business units within an organization. The ideal business investment would have a high expected return and a low standard deviation. Nonetheless, both conservative and speculative strategies should be considered in determining an organization's optimal service line and helping the organization manage risk.
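The portfolio logic mentioned above can be illustrated with a small mean-variance calculation; the service lines, expected returns, standard deviations and correlations below are invented for illustration only.

```python
# Illustrative mean-variance view of a "portfolio" of service lines.
# Expected returns, standard deviations and correlations are made up.
import numpy as np

names   = ["imaging", "outpatient surgery", "telehealth"]
exp_ret = np.array([0.06, 0.10, 0.14])          # expected annual returns
stdev   = np.array([0.05, 0.12, 0.25])          # standard deviations (risk)
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
cov = np.outer(stdev, stdev) * corr

w = np.array([0.5, 0.3, 0.2])                   # portfolio weights
port_return = w @ exp_ret
port_stdev = np.sqrt(w @ cov @ w)
print(f"portfolio expected return: {port_return:.3f}")
print(f"portfolio standard deviation: {port_stdev:.3f}")
```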
NASA Technical Reports Server (NTRS)
Khristova, R.; Vanmen, M.
1986-01-01
Based on considerations of principles and experimental data, the interference of sulfate ions in potentiometric titration of EDTA with FeCl3 was confirmed. The method of back complexometric titration of molybdenum of Nonova and Gasheva was improved by replacing hydrazine sulfate with hydrazine hydrochloride for the reduction of Mo(VI) to Mo(V). The method can be used for amounts of molybdenum from tenths of a milligram up to one milligram, with a standard deviation of 0.04 mg. A specific method for the determination of molybdenum in molybdenite concentrates is presented.
Orientation of Christian Churches
NASA Astrophysics Data System (ADS)
McCluskey, Stephen C.
The orientation of Christian churches reflects the historically documented concepts that one should turn eastward to pray and the architectural and liturgical principle that temples and churches should be constructed facing east (often specified as equinoctial east). Since many churches do not face equinoctial east, various attempts have been made to explain this deviation. Among them are the ideas that those churches were incorrectly built or that they were oriented toward sunrise on the date their foundation was laid or on the feast of the saint to whom the church was dedicated.
Hernán, Miguel A.; Sauer, Brian C.; Hernández-Díaz, Sonia; Platt, Robert; Shrier, Ian
2016-01-01
Many analyses of observational data are attempts to emulate a target trial. The emulation of the target trial may fail when researchers deviate from simple principles that guide the design and analysis of randomized experiments. We review a framework to describe and prevent biases, including immortal time bias, that result from a failure to align start of follow-up, specification of eligibility, and treatment assignment. We review some analytic approaches to avoid these problems in comparative effectiveness or safety research. PMID:27237061
NASA Astrophysics Data System (ADS)
Michoud, Clément; Carrea, Dario; Augereau, Emmanuel; Cancouët, Romain; Costa, Stéphane; Davidson, Robert; Delacourt, Christophe; Derron, Marc-Henri; Jaboyedoff, Michel; Letortu, Pauline; Maquaire, Olivier
2013-04-01
Dieppe coastal cliffs, in Normandy, France, are mainly formed by sub-horizontal deposits of chalk and flintstone. Largely destabilized by intense weathering and erosion by the Channel sea, small and large rockfalls are regularly observed and contribute to retrogressive cliff processes. During autumn 2012, cliff and intertidal topographies were acquired with a Terrestrial Laser Scanner (TLS) and a Mobile Laser Scanner (MLS), coupled with seafloor bathymetries acquired with a multibeam echosounder (MBES). MLS is a recent development of laser scanning based on the same theoretical principles as aerial LiDAR, but using smaller, cheaper and portable devices. The MLS system, which is composed of accurate dynamic positioning and orientation (INS) devices and a long-range LiDAR, is mounted on a marine vessel; it is then possible to quickly acquire, in motion, georeferenced LiDAR point clouds with a resolution of about 15 cm. For example, it takes about 1 h to scan a shoreline 2 km long. MLS is becoming a promising technique supporting erosion and rockfall assessments along the shores of lakes, fjords or seas. In this study, the MLS system used to acquire the cliffs and intertidal areas of the Cap d'Ailly was composed of the INS Applanix POS-MV 320 V4 and the LiDAR Optech ILRIS LR. On the same day, three MLS scans with large overlaps (J1, J21 and J3) were performed at ranges from 600 m at 4 knots (low tide) up to 200 m at 2.2 knots (rising tide) with a calm sea at 2.5 Beaufort (small wavelets). Mean scan resolutions go from 26 cm for the far scan (J1) to about 8.1 cm for the close scan (J3). Moreover, one TLS point cloud of this test site was acquired with a mean resolution of about 2.3 cm, using a Riegl LMS Z390i. In order to quantify the reliability of the methodology, comparisons between scans were carried out with the software Polyworks™, calculating the shortest distances between the points of one cloud and the interpolated surface of the reference point cloud. A MatLab™ routine was also written to extract relevant statistics. First, mean distances between points of the reference point cloud (J21) and its interpolated surface are about 0.35 cm with a standard deviation of 15 cm; errors introduced during the surface interpolation step, especially in vegetated areas, may explain these differences. Then, mean distances between J1's points (resp. J3) and the J21 reference surface are about 4 cm (resp. -17 cm) with a standard deviation of 53 cm (resp. 55 cm). After a best-fit alignment of J1 and J3 on J21, mean distances between J1 (resp. J3) and the J21 reference surface decrease to about 0.15 cm (resp. 1.6 cm) with a standard deviation of 41 cm (resp. 21 cm). Finally, mean distances between the TLS point cloud and the J21 reference surface are about 3.2 cm with a standard deviation of 26 cm. In conclusion, MLS devices are able to quickly scan long shorelines with a resolution of about 10 cm. The precision of the acquired data is sufficient to investigate geomorphological features of coastal cliffs. The ability of the MLS technique to detect and monitor small and large rockfalls will be investigated with new acquisitions of the Dieppe cliffs in the near future and improved post-processing steps.
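In the absence of Polyworks™, the cloud-comparison statistics described above can be approximated with a nearest-neighbour cloud-to-cloud distance, a simplification of the signed point-to-interpolated-surface distance used in the study; the synthetic clouds below merely stand in for J21 and J1.

```python
# Simplified cloud-to-cloud comparison: the signed shortest distance to an
# interpolated surface is replaced by an unsigned nearest-neighbour distance,
# which approximates the reported statistics when both clouds are dense.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
reference = rng.uniform(0, 10, size=(50_000, 3))                  # stand-in for J21
test = reference[:20_000] + rng.normal(0, 0.05, (20_000, 3))      # stand-in for J1

tree = cKDTree(reference)
dist, _ = tree.query(test, k=1)
print(f"mean distance: {dist.mean():.4f}, standard deviation: {dist.std():.4f}")
```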
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-24
... each cost allocation method must satisfy six cost allocation principles. DATES: Effective November 23... Interregional Transmission 61 Coordination Requirements III. Cost Allocation 65 1. Cost Allocation Principle 2... must satisfy six cost allocation principles. 3. In Order No. 1000-A, the Commission largely affirmed...
Rare events in networks with internal and external noise
NASA Astrophysics Data System (ADS)
Hindes, J.; Schwartz, I. B.
2017-12-01
We study rare events in networks with both internal and external noise, and develop a general formalism for analyzing rare events that combines pair-quenched techniques and large-deviation theory. The probability distribution, shape, and time scale of rare events are considered in detail for extinction in the Susceptible-Infected-Susceptible model as an illustration. We find that when both types of noise are present, there is a crossover region as the network size is increased, where the probability exponent for large deviations no longer increases linearly with the network size. We demonstrate that the form of the crossover depends on whether the endemic state is localized near the epidemic threshold or not.
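For context, a brute-force (direct sampling) estimate of an extinction probability in a well-mixed stochastic SIS model can be written as a short Gillespie simulation; for large systems such probabilities become exponentially small, which is precisely why the large-deviation formalism above is needed. All parameters are illustrative, and the well-mixed model ignores the network structure discussed in the abstract.

```python
# Brute-force estimate of the probability of epidemic extinction within a
# fixed time for a well-mixed stochastic SIS model (Gillespie algorithm).
# For large N this probability is exponentially small, motivating
# large-deviation approaches such as the one described above.
import numpy as np

rng = np.random.default_rng(2)

def sis_extinct_by(T, N=50, beta=1.5, gamma=1.0, i0=5):
    t, i = 0.0, i0
    while t < T and i > 0:
        rate_inf = beta * i * (N - i) / N     # infection rate
        rate_rec = gamma * i                  # recovery rate
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)
        i += 1 if rng.random() < rate_inf / total else -1
    return i == 0

trials = 2000
p_ext = np.mean([sis_extinct_by(T=50.0) for _ in range(trials)])
print("estimated extinction probability by T=50:", p_ext)
```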
NASA Astrophysics Data System (ADS)
Bateev, A. B.; Filippov, V. P.
2017-01-01
The article demonstrates that the computer program Univem MS for Mössbauer spectrum fitting can, in principle, be used as demonstration material when students study disciplines such as atomic and nuclear physics and numerical methods. The program works with nuclear-physical parameters such as the isomer (or chemical) shift of the nuclear energy level, the interaction of the nuclear quadrupole moment with the electric field, and that of the magnetic moment with the surrounding magnetic field. The basic processing algorithm in such programs is the least-squares method. The deviation of the experimental spectral points from the theoretical dependence is determined for concrete examples; in numerical methods this quantity is characterized as the mean square deviation. The shapes of the theoretical lines in the program are defined by Gaussian and Lorentzian distributions. The visualization of the material studied in atomic and nuclear physics can be improved with similar programs for Mössbauer spectroscopy, X-ray fluorescence analysis or X-ray diffraction analysis.
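The fitting principle described above can be illustrated with a least-squares fit of a single Lorentzian line to a synthetic transmission spectrum; this is a generic sketch, not the Univem MS program, and the line parameters are invented.

```python
# Least-squares fit of a single Lorentzian absorption line to a synthetic
# "spectrum", illustrating the fitting principle described above.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(v, baseline, depth, v0, gamma):
    # transmission dip centred at v0 with half-width gamma
    return baseline - depth * gamma**2 / ((v - v0)**2 + gamma**2)

rng = np.random.default_rng(3)
v = np.linspace(-10, 10, 512)                      # velocity scale, mm/s
true = lorentzian(v, 1.0e5, 8.0e3, -0.3, 0.5)      # line centred at -0.3 mm/s
counts = rng.poisson(true).astype(float)           # counting (Poisson) noise

popt, pcov = curve_fit(lorentzian, v, counts, p0=[1.0e5, 5.0e3, 0.0, 1.0])
residual_rms = np.sqrt(np.mean((counts - lorentzian(v, *popt))**2))
print("fitted line centre:", popt[2])
print("rms deviation of the fit:", residual_rms)
```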
Tests of local Lorentz invariance violation of gravity in the standard model extension with pulsars.
Shao, Lijing
2014-03-21
The standard model extension is an effective field theory introducing all possible Lorentz-violating (LV) operators to the standard model and general relativity (GR). In the pure-gravity sector of the minimal standard model extension, nine coefficients describe dominant observable deviations from GR. We systematically implemented 27 tests from 13 pulsar systems to tightly constrain eight linear combinations of these coefficients with extensive Monte Carlo simulations. This constitutes the first detailed and systematic test of the pure-gravity sector of the minimal standard model extension with state-of-the-art pulsar observations. No deviation from GR was detected. The limits on the LV coefficients are expressed in the canonical Sun-centered celestial-equatorial frame for the convenience of further studies. They all improve on existing limits by significant factors of tens to hundreds. As a consequence, Einstein's equivalence principle is verified substantially further by pulsar experiments in terms of local Lorentz invariance in gravity.
NASA Astrophysics Data System (ADS)
Majumdar, Paulami; Greeley, Jeffrey
2018-04-01
Linear scaling relations of adsorbate energies across a range of catalytic surfaces have emerged as a central interpretive paradigm in heterogeneous catalysis. They are, however, typically developed for low adsorbate coverages which are not always representative of realistic heterogeneous catalytic environments. Herein, we present generalized linear scaling relations on transition metals that explicitly consider adsorbate-coadsorbate interactions at variable coverages. The slopes of these scaling relations do not follow the simple bond counting principles that govern scaling on transition metals at lower coverages. The deviations from bond counting are explained using a pairwise interaction model wherein the interaction parameter determines the slope of the scaling relationship on a given metal at variable coadsorbate coverages, and the slope across different metals at fixed coadsorbate coverage is approximated by adding a coverage-dependent correction to the standard bond counting contribution. The analysis provides a compact explanation for coverage-dependent deviations from bond counting in scaling relationships and suggests a useful strategy for incorporation of coverage effects into catalytic trends studies.
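A toy pairwise-interaction picture of the coverage effect might look as follows: if the per-neighbour repulsion varies across "metals", a linear fit of high-coverage against low-coverage adsorption energies acquires a slope different from the bond-counting value of one. The energies, interaction parameters, coordination number and coverage below are all invented and do not reproduce the paper's parameterization.

```python
# Toy pairwise-interaction picture: at coverage theta each adsorbate feels
# roughly eps * z * theta of repulsion from coadsorbates.  If eps varies
# across "metals" (here, correlated with the low-coverage binding energy),
# the high-coverage energies no longer scale with slope 1.
import numpy as np

rng = np.random.default_rng(4)
E_low = np.linspace(-3.0, -1.0, 8)            # low-coverage adsorption energies (eV)
z, theta = 6, 0.5                             # coordination number, coverage
eps = 0.10 + 0.06 * (E_low + 3.0)             # metal-dependent pairwise repulsion (eV)

E_high = E_low + eps * z * theta + rng.normal(0, 0.02, E_low.size)

slope, intercept = np.polyfit(E_low, E_high, 1)
print(f"slope at coverage {theta}: {slope:.2f} (bond counting alone gives 1.00)")
```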
NASA Astrophysics Data System (ADS)
Qian, Shinan; Geckeler, Ralf D.; Just, Andreas; Idir, Mourad; Wu, Xuehui
2015-06-01
Since the development of the Nano-Optic-Measuring Machine (NOM), the accuracy of measuring the profile of an optical surface has been enhanced to the 100-nrad rms level or better. However, to update the accuracy of the NOM system to sub-50 nrad rms, the large saw-tooth deviation (269 nrad rms) of an existing electronic autocollimator, the Elcomat 3000/8, must be resolved. We carried out simulations to assess the saw-tooth-like deviation. We developed a method for setting readings to reduce the deviation to sub-50 nrad rms, suitable for testing plane mirrors. With this method, we found that all the tests conducted in a slowly rising section of the saw-tooth show a small deviation of 28.8 to <40 nrad rms. We also developed a dense-measurement method and an integer-period method to lower the saw-tooth deviation during tests of sphere mirrors. Further research is necessary for formulating a precise test for a spherical mirror. We present a series of test results from our experiments that verify the value of the improvements we made.
van Dommelen, Paula; Deurloo, Jacqueline A; Gooskens, Rob H; Verkerk, Paul H
2015-04-01
Increased head circumference is often the first and main sign leading to the diagnosis of hydrocephalus. Our aim is to investigate the diagnostic accuracy of referral criteria for head circumference to detect hydrocephalus in the first year of life. A reference group with longitudinal head circumference data (n = 1938) was obtained from the Social Medical Survey of Children Attending Child Health Clinics study. The case group comprised infants with hydrocephalus treated in a tertiary pediatric hospital who had not already been detected during pregnancy (n = 125). Head circumference data were available for 43 patients. Head circumference data were standardized according to gestational age-specific references. Sensitivity and specificity of a very large head circumference (>2.5 standard deviations on the growth chart) were, respectively, 72.1% (95% confidence interval [CI]: 56.3-84.7) and 97.1% (95% CI: 96.2-97.8). These figures were, respectively, 74.4% (95% CI: 58.8-86.5) and 93.0% (95% CI: 91.8-94.1) for a large head circumference (>2.0 standard deviations), and 76.7% (95% CI: 61.4-88.2) and 96.5% (95% CI: 95.6-97.3) for a very large head circumference and/or very large (>2.5 standard deviations) progressive growth of head circumference. A very large head circumference and/or very large progressive growth of head circumference shows the best diagnostic accuracy for detecting hydrocephalus at an early stage. Gestational age-specific growth charts are recommended. Further improvements may be possible by taking into account parental head circumference. Copyright © 2015 Elsevier Inc. All rights reserved.
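The quoted sensitivity and specificity follow from a standard 2×2 table; the counts below are chosen to be consistent with the 43 cases, the 1938 reference children, and the percentages reported for the >2.5 SD criterion, while the confidence-interval method (Wilson score here) may differ from the one used in the study.

```python
# Sensitivity/specificity from a 2x2 table, with Wilson score confidence
# intervals.  Counts are chosen to reproduce the percentages quoted above
# (31/43 ~ 72.1% sensitivity, 1882/1938 ~ 97.1% specificity).
from math import sqrt

def wilson_ci(k, n, z=1.96):
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

tp, fn = 31, 12         # cases detected / missed by the >2.5 SD criterion
tn, fp = 1882, 56       # reference children below / above the cut-off

sens, spec = tp / (tp + fn), tn / (tn + fp)
print(f"sensitivity {sens:.1%}, 95% CI {wilson_ci(tp, tp + fn)}")
print(f"specificity {spec:.1%}, 95% CI {wilson_ci(tn, tn + fp)}")
```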
On Large Time Behavior and Selection Principle for a Diffusive Carr-Penrose Model
NASA Astrophysics Data System (ADS)
Conlon, Joseph G.; Dabkowski, Michael; Wu, Jingchen
2016-04-01
This paper is concerned with the study of a diffusive perturbation of the linear LSW model introduced by Carr and Penrose. A main subject of interest is to understand how the presence of diffusion acts as a selection principle, which singles out a particular self-similar solution of the linear LSW model as determining the large time behavior of the diffusive model. A selection principle is rigorously proven for a model which is a semiclassical approximation to the diffusive model. Upper bounds on the rate of coarsening are also obtained for the full diffusive model.
ERIC Educational Resources Information Center
Salemi, Michael K.
2009-01-01
One of the most important challenges facing college instructors of economics is helping students engage. Engagement is particularly important in a large-enrollment Principles of Economics course, where it can help students achieve a long-lived understanding of how economists use basic economic ideas to look at the world. The author reports how…
Particle Orbit Analysis in the Finite Beta Plasma of the Large Helical Device using Real Coordinates
NASA Astrophysics Data System (ADS)
Seki, Ryousuke; Matsumoto, Yutaka; Suzuki, Yasuhiro; Watanabe, Kiyomasa; Itagaki, Masafumi
High-energy particles in a finite beta plasma of the Large Helical Device (LHD) are numerically traced in a real coordinate system. We investigate particle orbits by changing the beta value and/or the magnetic field strength. No significant difference is found in the particle orbit classifications between the vacuum magnetic field and the finite beta plasma cases. The deviation of a banana orbit from the flux surfaces strongly depends on the beta value, although the deviation of the orbit of a passing particle is independent of the beta value. In addition, the deviation of the orbit of the passing particle, rather than that of the banana-orbit particles, depends on the magnetic field strength. We also examine the effect of re-entering particles, which repeatedly pass in and out of the last closed flux surface, in the finite beta plasma of the LHD. It is found that the number of re-entering particles in the finite beta plasma is larger than that in the vacuum magnetic field. As a result, the role of reentering particles in the finite beta plasma of the LHD is more important than that in the vacuum magnetic field, and the effect of the charge-exchange reaction on particle confinement in the finite beta plasma is large.
Not a Copernican observer: biased peculiar velocity statistics in the local Universe
NASA Astrophysics Data System (ADS)
Hellwing, Wojciech A.; Nusser, Adi; Feix, Martin; Bilicki, Maciej
2017-05-01
We assess the effect of the local large-scale structure on the estimation of two-point statistics of the observed radial peculiar velocities of galaxies. A large N-body simulation is used to examine these statistics from the perspective of random observers as well as 'Local Group-like' observers conditioned to reside in an environment resembling the observed Universe within 20 Mpc. The local environment systematically distorts the shape and amplitude of velocity statistics with respect to ensemble-averaged measurements made by a Copernican (random) observer. The Virgo cluster has the most significant impact, introducing large systematic deviations in all the statistics. For a simple 'top-hat' selection function, an idealized survey extending to ˜160 h-1 Mpc or deeper is needed to completely mitigate the effects of the local environment. Using shallower catalogues leads to systematic deviations of the order of 50-200 per cent depending on the scale considered. For a flat redshift distribution similar to the one of the CosmicFlows-3 survey, the deviations are even more prominent in both the shape and amplitude at all separations considered (≲100 h-1 Mpc). Conclusions based on statistics calculated without taking into account the impact of the local environment should be revisited.
Large incidence angle and defocus influence cat's eye retro-reflector
NASA Astrophysics Data System (ADS)
Zhang, Lai-xian; Sun, Hua-yan; Zhao, Yan-zhong; Yang, Ji-guang; Zheng, Yong-hui
2014-11-01
A cat's eye lens retro-reflects a laser beam exactly opposite to the direction of the incident beam; this so-called cat's eye effect makes rapid acquiring, tracking and pointing for free-space optical communication possible. It is therefore useful to study how the cat's eye effect behaves in a cat's eye retro-reflector at large incidence angles. This paper analyzes, using a geometrical optics method, how the incidence angle and the focal shift affect the effective receiving area, the retro-reflected beam divergence angle, the central deviation of the cat's eye retro-reflector at large incidence angle, and the cat's eye effect factor, and presents the analytic expressions. Finally, a numerical simulation was carried out to verify the correctness of the analysis. The results show that the effective receiving area of the cat's eye retro-reflector is mainly affected by the incidence angle when the focal shift is positive, and it decreases rapidly as the incidence angle increases; the retro-reflected beam divergence and the central deviation are mainly affected by the focal shift, and within the effective receiving area the central deviation is smaller than the beam divergence most of the time, which means the incident beam can be received and retro-reflected to the other terminal most of the time. The cat's eye effect factor gain is affected by both the incidence angle and the focal shift.
Analysis of change orders in geotechnical engineering work at INDOT.
DOT National Transportation Integrated Search
2011-01-01
Change orders represent a cost to the State and to tax payers that is real and often extremely large because contractors tend to charge very large : amounts to any additional work that deviates from the work that was originally planned. Therefore, ef...
Analysis of Electric Vehicle DC High Current Conversion Technology
NASA Astrophysics Data System (ADS)
Yang, Jing; Bai, Jing-fen; Lin, Fan-tao; Lu, Da
2017-05-01
Against the background of electric vehicles, the necessity of accurate electric energy metering for electric vehicle power batteries is elaborated, and the charging and discharging characteristics of power batteries are analyzed. A DC large-current converter is needed to realize accurate calibration of the electric energy metering of power batteries. Several measuring methods based on shunts and on the magnetic induction principle are analyzed in detail. The principle of a charge and discharge calibration system for power batteries is put forward, and the ripple content and harmonic content on the AC and DC sides of the power batteries are simulated and analyzed. Suitable DC large-current measurement methods for power batteries are proposed by comparing different measurement principles, and an outlook on DC large-current measurement techniques is given.
Method of surface error visualization using laser 3D projection technology
NASA Astrophysics Data System (ADS)
Guo, Lili; Li, Lijuan; Lin, Xuezhu
2017-10-01
In the manufacture of large components, for example in the aerospace, automobile and shipbuilding industries, important molds or stamped metal plates require precise forming of the surface, which usually needs to be verified and, if necessary, corrected and reprocessed. In order to make correction of the machined surface more convenient, this paper proposes a method based on a laser 3D projection system that uses contour lines, in the manner of terrain contours, to show the deviation between the actually measured data and the theoretical mathematical model (CAD) directly on the measured surface. First, the machined surface is measured to obtain point cloud data and form a triangular mesh; second, through a coordinate transformation, the point cloud data are unified with the theoretical model and the three-dimensional deviation is calculated, and according to the sign (positive or negative) and size of the deviation, color deviation bands are used to denote the three-dimensional deviation; then, three-dimensional contour lines are drawn to represent each deviation band, creating the projection files; finally, the projection files are imported into the laser projector, and the contour lines are projected onto the processed part at 1:1 scale in the form of a laser beam. By comparing the full-color 3D deviation map with the projected graph, the deviations can be located and quantitatively corrected to meet the processing precision requirements. The method clearly displays the trend of the machined surface deviation.
In vivo dosimetry for external photon treatments of head and neck cancers by diodes and TLDS.
Tung, C J; Wang, H C; Lo, S H; Wu, J M; Wang, C J
2004-01-01
In vivo dosimetry was implemented for treatments of head and neck cancers in large fields. Diode and thermoluminescence dosemeter (TLD) measurements were carried out on linear accelerators with 6 MV photon beams. ESTRO in vivo dosimetry protocols were followed in the determination of midline doses from measurements of entrance and exit doses. Of the fields monitored by diodes, the maximum absolute deviation of measured midline doses from planned target doses was 8%, with a mean of -1.0% and a standard deviation of 2.7%. If planned target doses were calculated using radiological water-equivalent thicknesses rather than patient geometric thicknesses, the maximum absolute deviation dropped to 4%, with a mean of 0.7% and a standard deviation of 1.8%. For in vivo dosimetry monitored by TLDs, the shift in mean dose remained small but the statistical precision became poor.
Visual space under free viewing conditions.
Doumen, Michelle J A; Kappers, Astrid M L; Koenderink, Jan J
2005-10-01
Most research on visual space has been done under restricted viewing conditions and in reduced environments. In our experiments, observers performed an exocentric pointing task, a collinearity task, and a parallelity task in an entirely visible room. We varied the relative distances between the objects and the observer and the separation angle between the two objects. We were able to compare our data directly with data from experiments in an environment with less monocular depth information present. We expected that in a richer environment and under less restrictive viewing conditions, the settings would deviate less from the veridical settings. However, large systematic deviations from veridical settings were found for all three tasks. The structure of these deviations was task dependent, and the structure and the deviations themselves were comparable to those obtained under more restricted circumstances. Thus, the additional information was not used effectively by the observers.
NASA Astrophysics Data System (ADS)
Grabsch, Aurélien; Majumdar, Satya N.; Texier, Christophe
2017-06-01
Invariant ensembles of random matrices are characterized by the distribution of their eigenvalues {λ_1, …, λ_N}. We study the distribution of truncated linear statistics of the form L̃ = Σ_{i=1}^{p} f(λ_i) with p
Educational principles and techniques for interpreters.
F. David Boulanger; John P. Smith
1973-01-01
Interpretation is in large part education, since it attempts to convey information, concepts, and principles while creating attitude changes and such emotional states as wonder, delight, and appreciation. Although interpreters might profit greatly by formal training in the principles and techniques of teaching, many have not had such training. Some means of making the...
Lutz, Werner K; Vamvakas, Spyros; Kopp-Schneider, Annette; Schlatter, Josef; Stopper, Helga
2002-12-01
Sublinear dose-response relationships are often seen in toxicity testing, particularly with bioassays for carcinogenicity. This is the result of a superimposition of various effects that modulate and contribute to the process of cancer formation. Examples are saturation of detoxification pathways or DNA repair with increasing dose, or regenerative hyperplasia and indirect DNA damage as a consequence of high-dose cytotoxicity and cell death. The response to a combination treatment can appear to be supra-additive, although it is in fact dose-additive along a sublinear dose-response curve for the single agents. Because environmental exposure of humans is usually in a low-dose range and deviation from linearity is less likely at the low-dose end, combination effects should be tested at the lowest observable effect levels (LOEL) of the components. This principle has been applied to combinations of genotoxic agents in various cellular models. For statistical analysis, all experiments were analyzed for deviation from additivity with an n-factor analysis of variance with an interaction term, n being the number of components tested in combination. Benzo[a]pyrene, benz[a]anthracene, and dibenz[a,c]anthracene were tested at the LOEL, separately and in combination, for the induction of revertants in the Ames test, using Salmonella typhimurium TA100 and rat liver S9 fraction. Combined treatment produced no deviation from additivity. The induction of micronuclei in vitro was investigated with ionizing radiation from a 137Cs source and ethyl methanesulfonate. Mouse lymphoma L5178Y cells revealed a significant 40% supra-additive combination effect in an experiment based on three independent replicates for controls and single and combination treatments. On the other hand, two human lymphoblastoid cell lines (TK6 and WTK1) as well as a pilot study with human primary fibroblasts from fetal lung did not show deviation from additivity. Data derived from one cell line should therefore not be generalized. Regarding the testing of mixtures for deviation from additive toxicity, the suggested experimental protocol is easily followed by toxicologists.
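The deviation-from-additivity test described above (an n-factor ANOVA with an interaction term) can be set up as in the sketch below for n = 2 agents; the micronucleus counts are simulated, the effect sizes are invented, and statsmodels/pandas are assumed to be available.

```python
# Two-factor ANOVA with an interaction term, the kind of deviation-from-
# additivity test described above (here n = 2 agents).  A small p-value in
# the C(rad):C(ems) row would indicate departure from dose additivity.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
rows = []
for rad in (0, 1):           # ionizing radiation absent / present at its LOEL
    for ems in (0, 1):       # ethyl methanesulfonate absent / present at its LOEL
        for _ in range(3):   # three independent replicates per condition
            effect = 5 + 4 * rad + 3 * ems + 2 * rad * ems   # simulated interaction
            rows.append({"rad": rad, "ems": ems,
                         "micronuclei": rng.poisson(effect)})
df = pd.DataFrame(rows)

model = smf.ols("micronuclei ~ C(rad) * C(ems)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```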
Rapidly rotating neutron stars with a massive scalar field—structure and universal relations
NASA Astrophysics Data System (ADS)
Doneva, Daniela D.; Yazadjiev, Stoytcho S.
2016-11-01
We construct rapidly rotating neutron star models in scalar-tensor theories with a massive scalar field. The fact that the scalar field has nonzero mass leads to very interesting results, since the allowed range of values of the coupling parameters is significantly broadened. Deviations from pure general relativity can be very large for values of the parameters that are in agreement with the observations. We found that rapid rotation can magnify the differences several times compared to the static case. The universal relations between the normalized moment of inertia and quadrupole moment are also investigated for both the slowly and rapidly rotating cases. The results show that these relations are still EOS independent to a large extent and that the deviations from pure general relativity can be large. This places the massive scalar-tensor theories amongst the few alternative theories of gravity that can be tested via the universal I-Love-Q relations.
A new multiple air beam approach for in-process form error optical measurement
NASA Astrophysics Data System (ADS)
Gao, Y.; Li, R.
2018-07-01
In-process measurement can provide feedback for the control of workpiece precision in terms of size, roughness and, in particular, mid-spatial-frequency form error. Optical measurement methods are non-contact and possess the high precision required for in-process form error measurement. In precision machining, coolant is commonly used to reduce heat generation and thermal deformation on the workpiece surface. However, the coolant forms an opaque barrier if optical measurement methods are used. In this paper, a new multiple air beam approach is proposed. The new approach permits the displacement of coolant arriving from any direction and with a large thickness, i.e. with a large amount of coolant. The model, the working principle, and the key features of the new approach are presented. Based on the proposed new approach, a new in-process form error optical measurement system is developed. The coolant removal capability and the performance of the new multiple air beam approach are assessed. The experimental results show that the workpiece surface y(x, z) can be measured successfully with a standard deviation of up to 0.3011 µm even under a large amount of coolant, with a coolant thickness of 15 mm. This corresponds to a relative uncertainty of 2σ of up to 4.35% while the workpiece surface is deeply immersed in the opaque coolant. The results also show that, in terms of coolant removal capability, air supply and air velocity, the proposed approach improves on the previous single air beam approach by factors of 3.3, 1.3 and 5.3, respectively. The results demonstrate the significant improvements brought by the new multiple air beam method together with the developed measurement system.
Chen, Xiaomei; Longstaff, Andrew; Fletcher, Simon; Myers, Alan
2014-04-01
This paper presents and evaluates an active dual-sensor autofocusing system that combines an optical vision sensor and a tactile probe for autofocusing on arrays of small holes on freeform surfaces. The system has been tested on a two-axis test rig and then integrated onto a three-axis computer numerical control (CNC) milling machine, where the aim is to rapidly and controllably measure the hole position errors while the part is still on the machine. The principle of operation is for the tactile probe to locate the nominal positions of holes, and the optical vision sensor follows to focus and capture the images of the holes. The images are then processed to provide hole position measurement. In this paper, the autofocusing deviations are analyzed. First, the deviations caused by the geometric errors of the axes on which the dual-sensor unit is deployed are estimated to be 11 μm when deployed on a test rig and 7 μm on the CNC machine tool. Subsequently, the autofocusing deviations caused by the interaction of the tactile probe, surface, and small hole are mathematically analyzed and evaluated. The deviations are a result of the tactile probe radius, the curvatures at the positions where small holes are drilled on the freeform surface, and the effect of the position error of the hole on focusing. An example case study is provided for the measurement of a pattern of small holes on an elliptical cylinder on the two machines. The absolute sum of the autofocusing deviations is 118 μm on the test rig and 144 μm on the machine tool. This is much less than the 500 μm depth of field of the optical microscope. Therefore, the method is capable of capturing a group of clear images of the small holes on this workpiece for either implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji
Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This is an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with the Maxwell-Boltzmann statistics.
NASA Astrophysics Data System (ADS)
Nemoto, Takahiro; Jack, Robert L.; Lecomte, Vivien
2017-03-01
We analyze large deviations of the time-averaged activity in the one-dimensional Fredrickson-Andersen model, both numerically and analytically. The model exhibits a dynamical phase transition, which appears as a singularity in the large deviation function. We analyze the finite-size scaling of this phase transition numerically, by generalizing an existing cloning algorithm to include a multicanonical feedback control: this significantly improves the computational efficiency. Motivated by these numerical results, we formulate an effective theory for the model in the vicinity of the phase transition, which accounts quantitatively for the observed behavior. We discuss potential applications of the numerical method and the effective theory in a range of more general contexts.
On Deviations between Observed and Theoretically Estimated Values on Additivity-Law Failures
NASA Astrophysics Data System (ADS)
Nayatani, Yoshinobu; Sobagaki, Hiroaki
The authors have reported in previous studies that the average observed results are about half of the corresponding predictions in experiments with large additivity-law failures. One of the reasons for these deviations is studied and clarified by using the original observed data on additivity-law failures from the Nakano experiment. The conclusion from the observations and their analyses is that it is essentially difficult to obtain good agreement between the average observed results and the corresponding theoretical predictions in experiments with large additivity-law failures. This is caused by a kind of unavoidable psychological pressure on the subjects participating in the experiments. We should be satisfied with agreement in trend between them.
Exact Large-Deviation Statistics for a Nonequilibrium Quantum Spin Chain
NASA Astrophysics Data System (ADS)
Žnidarič, Marko
2014-01-01
We consider a one-dimensional XX spin chain in a nonequilibrium setting with a Lindblad-type boundary driving. By calculating the large-deviation rate function in the thermodynamic limit, a generalization of the free energy to a nonequilibrium setting, we obtain the complete distribution of the current, including closed expressions for the lower-order cumulants. We also identify two phase-transition-like behaviors: one in the thermodynamic limit, at which the current probability distribution becomes discontinuous, and one at maximal driving, when the range of possible current values changes discontinuously. In the thermodynamic limit the current has finite upper and lower bounds. We also explicitly confirm the nonequilibrium fluctuation relation and show that the current distribution is the same under the mapping of the coupling strength Γ→1/Γ.
NASA Technical Reports Server (NTRS)
Zuk, J.
1976-01-01
The fundamental principles governing dynamic sealing operation are discussed. Different seals are described in terms of these principles. Despite the large variety of detailed constructions, there appear to be some basic principles, or combinations of basic principles, by which all seals function; these are presented and discussed. Theoretical and practical considerations in the application of these principles are discussed. Advantages, disadvantages, limitations, and application examples of various conventional and special seals are presented. Fundamental equations governing liquid and gas flows in thin-film seals, which enable leakage calculations to be made, are also presented. The concept of flow functions, the application of the Reynolds lubrication equation, non-lubrication-equation flow, friction and wear, and seal lubrication regimes are explained.
Guang, Hui; Ji, Linhong; Shi, Yingying; Misgeld, Berno J E
2018-01-01
The robot-assisted therapy has been demonstrated to be effective in the improvements of limb function and even activities of daily living for patients after stroke. This paper presents an interactive upper-limb rehabilitation robot with a parallel mechanism and an isometric screen embedded in the platform to display trajectories. In the dynamic modeling for impedance control, the effects of friction and inertia are reduced by introducing the principle of virtual work and derivative of Jacobian matrix. To achieve the assist-as-needed impedance control for arbitrary trajectories, the strategy based on orthogonal deviations is proposed. Simulations and experiments were performed to validate the dynamic modeling and impedance control. Besides, to investigate the influence of the impedance in practice, a subject participated in experiments and performed two types of movements with the robot, that is, rectilinear and circular movements, under four conditions, that is, with/without resistance or impedance, respectively. The results showed that the impedance and resistance affected both mean absolute error and standard deviation of movements and also demonstrated the significant differences between movements with/without impedance and resistance ( p < 0.001). Furthermore, the error patterns were discussed, which suggested that the impedance environment was capable of alleviating movement deviations by compensating the synergetic inadequacy between the shoulder and elbow joints.
Radiotherapy quality assurance report from children's oncology group AHOD0031
Dharmarajan, Kavita V.; Friedman, Debra L.; FitzGerald, T.J.; McCarten, Kathleen M.; Constine, Louis S.; Chen, Lu; Kessel, Sandy K.; Iandoli, Matt; Laurie, Fran; Schwartz, Cindy L.; Wolden, Suzanne L.
2016-01-01
Purpose A phase III trial assessing response-based therapy in intermediate-risk Hodgkin lymphoma mandated real-time central review of involved field radiotherapy and imaging records by a centralized review center to maximize protocol compliance. We report the impact of centralized radiotherapy review on protocol compliance. Methods Review of simulation films, port films, and dosimetry records was required pre-treatment and after treatment completion. Records were reviewed by study-affiliated or review center-affiliated radiation oncologists. A 6–10% deviation from the protocol-specified dose was scored as "minor"; >10% was "major". A volume deviation was scored as "minor" if margins were less than specified, or "major" if fields transected disease-bearing areas. Interventional review and final compliance review scores were assigned to each radiotherapy case and compared. Results Of 1712 patients enrolled, 1173 underwent IFRT at 256 institutions in 7 countries. An interventional review was performed in 88% and a final review in 98%. Overall, minor and major deviations were found in 12% and 6%, respectively. Among the cases for which ≥1 pre-IFRT modification was requested by QARC and subsequently made by the treating institution, 100% were made compliant on final review. In contrast, among the cases for which ≥1 modification was requested but not made by the treating institution, 10% were deemed compliant on final review. Conclusion In a large trial with complex treatment pathways and heterogeneous radiotherapy fields, central review was performed in a large percentage of cases pre-IFRT and identified frequent potential deviations in a timely manner. When the suggested modifications were performed by the institutions, deviations were almost eliminated. PMID:25670539
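The dose-deviation scoring rule quoted above maps directly onto a small helper function; the example doses are invented, and the volume-deviation rule is not encoded beyond what the abstract states.

```python
# Dose-deviation scoring as stated above: a 6-10% deviation from the
# protocol-specified dose is "minor", >10% is "major"; smaller differences
# are treated as compliant.
def score_dose_deviation(delivered_gy: float, protocol_gy: float) -> str:
    deviation = abs(delivered_gy - protocol_gy) / protocol_gy * 100.0
    if deviation > 10.0:
        return "major"
    if deviation >= 6.0:
        return "minor"
    return "compliant"

print(score_dose_deviation(22.6, 21.0))   # ~7.6% -> minor
print(score_dose_deviation(25.0, 21.0))   # ~19%  -> major
```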
Assessing reference evapotranspiration in a subhumid climate in NE Austria
NASA Astrophysics Data System (ADS)
Nolz, Reinhard; Eitzinger, Josef; Cepuder, Peter
2015-04-01
Computing reference evapotranspiration and multiplying it by a specific crop coefficient, as recommended by the Food and Agriculture Organization of the United Nations (FAO), is the most widely accepted approach to estimating plant water requirements. The standardized form of the well-known FAO Penman-Monteith equation, published by the Environmental and Water Resources Institute of the American Society of Civil Engineers (ASCE-EWRI), is recommended as a standard procedure for calculating reference evapotranspiration. Applied and validated under different climatic conditions, it has generally achieved good results compared to other methods. However, several studies have documented deviations between measured and calculated reference evapotranspiration depending on local environmental conditions. Consequently, it seems advisable to evaluate the model under local environmental conditions. Evapotranspiration was determined at a subhumid site in Austria (48°12'N, 16°34'E; 157 m asl) using a large weighing lysimeter operated at (limited) reference conditions and compared with calculations according to ASCE-EWRI. The lysimeter had an inner diameter of 1.9 m and a hemispherical bottom with a maximum depth of 2.5 m. Seepage water was measured at a free-draining outlet using a tipping bucket. Lysimeter mass changes were sensed by a weighing facility with an accuracy of ±0.1 mm. Both rainfall and evapotranspiration were determined directly from lysimeter data using a simple water balance equation. Meteorological data for the ASCE-EWRI model were obtained from a weather station of the Central Institute for Meteorology and Geodynamics, Austria (ZAMG). The study period was from 2005 to 2010, and the analyses were based on daily time steps. Daily calculated reference evapotranspiration was generally overestimated at small values, whereas it was rather underestimated when evapotranspiration was large, which is also supported by other studies. In the given case, advection of sensible heat proved to have an impact. On the other hand, it could not explain the differences exclusively, as it was also shown that small net radiation in combination with small wind velocity produced, on average, better results than small net radiation with a large wind velocity, which somewhat contradicts the principle of advection. Obviously, there were also other disregarded influences, for example seasonally varying surface resistance, albedo and soil heat flux. Generally, the ASCE-EWRI equation for daily time steps performed best under average weather conditions. The outcomes should help to correctly interpret reference evapotranspiration data in the region and in similar environments and improve knowledge of the dynamics of the influencing factors that caused the deviations.
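For reference, the daily calculation can be sketched as below; the helper follows the usual FAO-56/ASCE-EWRI formulation for a short (grass) reference surface, with the sample meteorological inputs invented rather than taken from the lysimeter site.

```python
# Daily reference evapotranspiration (short grass reference) following the
# standardized ASCE-EWRI / FAO-56 Penman-Monteith form.  Inputs are daily
# means; the sample values below are invented.
from math import exp

def et0_daily(T, u2, rh_mean, Rn, G=0.0, elev=157.0):
    """T [degC], u2 wind speed at 2 m [m/s], rh_mean [%], Rn net radiation and
    G soil heat flux [MJ m-2 day-1], elev [m].  Returns ET0 in mm/day."""
    es = 0.6108 * exp(17.27 * T / (T + 237.3))        # saturation vapour pressure, kPa
    ea = es * rh_mean / 100.0                          # actual vapour pressure, kPa
    delta = 4098.0 * es / (T + 237.3) ** 2             # slope of the es curve, kPa/degC
    P = 101.3 * ((293.0 - 0.0065 * elev) / 293.0) ** 5.26
    gamma = 0.000665 * P                               # psychrometric constant, kPa/degC
    num = 0.408 * delta * (Rn - G) + gamma * 900.0 / (T + 273.0) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

print(f"ET0 = {et0_daily(T=22.0, u2=2.0, rh_mean=60.0, Rn=15.0):.2f} mm/day")
```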
Hurricane track forecast cones from fluctuations
Meuel, T.; Prado, G.; Seychelles, F.; Bessafi, M.; Kellay, H.
2012-01-01
Trajectories of tropical cyclones may show large deviations from predicted tracks, leading to uncertainty, for example, as to their landfall location. Prediction schemes usually render this uncertainty by showing track forecast cones representing the most probable region for the location of a cyclone during a period of time. By using the statistical properties of these deviations, we propose a simple method to predict possible corridors for the future trajectory of a cyclone. Examples of this scheme are implemented for hurricane Ike and hurricane Jimena. The corridors include the future trajectory up to at least 50 h before landfall. The cones proposed here shed new light on known track forecast cones as they link them directly to the statistics of these deviations. PMID:22701776
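The idea of building a corridor from deviation statistics can be sketched as follows: for each lead time, take a percentile of the archived cross-track deviations as the corridor half-width. The deviation archive below is synthetic, and the two-thirds coverage level is chosen only to be in the spirit of operational track-forecast cones, not taken from the paper.

```python
# Build a forecast corridor from deviation statistics: at each lead time the
# corridor half-width is a percentile of archived cross-track deviations.
# The archive is synthetic (errors growing with lead time); a real application
# would use past forecast errors for the basin of interest.
import numpy as np

rng = np.random.default_rng(6)
lead_hours = np.arange(6, 126, 6)                          # 6 h .. 120 h
archive = rng.normal(0.0, 0.9 * lead_hours, size=(500, lead_hours.size))

half_width = np.percentile(np.abs(archive), 66.7, axis=0)  # ~2/3 coverage
for t, w in zip(lead_hours[::4], half_width[::4]):
    print(f"lead {t:3d} h: corridor half-width ~ {w:6.1f} km")
```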
Using operations research to plan improvement of the transport of critically ill patients.
Chen, Jing; Awasthi, Anjali; Shechter, Steven; Atkins, Derek; Lemke, Linda; Fisher, Les; Dodek, Peter
2013-01-01
Operations research is the application of mathematical modeling, statistical analysis, and mathematical optimization to understand and improve processes in organizations. The objective of this study was to illustrate how the methods of operations research can be used to identify opportunities to reduce the absolute value and variability of interfacility transport intervals for critically ill patients. After linking data from two patient transport organizations in British Columbia, Canada, for all critical care transports during the calendar year 2006, the steps for transfer of critically ill patients were tabulated into a series of time intervals. Statistical modeling, root-cause analysis, Monte Carlo simulation, and sensitivity analysis were used to test the effect of changes in component intervals on overall duration and variation of transport times. Based on quality improvement principles, we focused on reducing the 75th percentile and standard deviation of these intervals. We analyzed a total of 3808 ground and air transports. Constraining time spent by transport personnel at sending and receiving hospitals was projected to reduce the total time taken by 33 minutes with as much as a 20% reduction in standard deviation of these transport intervals in 75% of ground transfers. Enforcing a policy of requiring acceptance of patients who have life- or limb-threatening conditions or organ failure was projected to reduce the standard deviation of air transport time by 63 minutes and the standard deviation of ground transport time by 68 minutes. Based on findings from our analyses, we developed recommendations for technology renovation, personnel training, system improvement, and policy enforcement. Use of the tools of operations research identifies opportunities for improvement in a complex system of critical care transport.
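A minimal Monte Carlo sketch of the kind of what-if analysis described above: the total transport time is modelled as a sum of component intervals, and the 75th percentile and standard deviation are compared before and after capping one component. The lognormal parameters and the 45-minute cap are invented, not fitted to the British Columbia data.

```python
# Monte Carlo sketch: total transport time as a sum of component intervals,
# before and after a policy that caps the time spent at the sending hospital.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
dispatch   = rng.lognormal(mean=2.5, sigma=0.4, size=n)   # minutes
at_sending = rng.lognormal(mean=3.4, sigma=0.6, size=n)
travel     = rng.lognormal(mean=3.8, sigma=0.5, size=n)

def summarize(total, label):
    print(f"{label}: 75th pct {np.percentile(total, 75):6.1f} min, "
          f"SD {total.std():6.1f} min")

summarize(dispatch + at_sending + travel, "baseline        ")
capped = np.minimum(at_sending, 45.0)                     # policy: cap at 45 min
summarize(dispatch + capped + travel,     "capped on-scene ")
```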
Psychiatry outside the framework of empiricism.
Mume, Celestine Okorome
2017-01-01
Science is interested in whatever that is empirical and objective. Any claim that cannot be objectively demonstrated has no place in science, because the subject does not deviate from the role, which it has set out to play in the life of mankind. Psychiatry, as a scientific discipline, plays along these basic principles. In the etiology, symptomatology, and management of psychiatric disorders, the biopsychosocial model recognizes the role of biological, psychological, and social factors. This essay views psychiatry from the biopsychosocial perspective and asserts that certain elements, which may not be readily and empirically verifiable, are important in the practice of psychiatry.
Hernán, Miguel A; Sauer, Brian C; Hernández-Díaz, Sonia; Platt, Robert; Shrier, Ian
2016-11-01
Many analyses of observational data are attempts to emulate a target trial. The emulation of the target trial may fail when researchers deviate from simple principles that guide the design and analysis of randomized experiments. We review a framework to describe and prevent biases, including immortal time bias, that result from a failure to align start of follow-up, specification of eligibility, and treatment assignment. We review some analytic approaches to avoid these problems in comparative effectiveness or safety research. Copyright © 2016 Elsevier Inc. All rights reserved.
Documentation of Current IDA Computer Material Developed for DCPA. Volume II,
1977-01-01
Values of sums of input values. Number of X's is the power X is raised to, number of Y's is the power Y is raised to, e.g., SXXXY = NJRT x5. v.P. L 1...Convention as with SXX. Normalize by dividing by the same powers of standard deviations as the number of B's or L's. For two-tract cities separation between...but in degrees. Fractional moments about principal axis. Here the number of B's or L's is the reciprocal of the power, e.g., FRBBBL= NfT (x
A Priori Subgrid Analysis of Temporal Mixing Layers with Evaporating Droplets
NASA Technical Reports Server (NTRS)
Okongo, Nora; Bellan, Josette
1999-01-01
Subgrid analysis of a transitional temporal mixing layer with evaporating droplets has been performed using three sets of results from a Direct Numerical Simulation (DNS) database, with Reynolds numbers (based on initial vorticity thickness) as large as 600 and with droplet mass loadings as large as 0.5. In the DNS, the gas phase is computed using a Eulerian formulation, with Lagrangian droplet tracking. The Large Eddy Simulation (LES) equations corresponding to the DNS are first derived, and the key assumptions made in deriving them are then confirmed by computing the relevant terms from the DNS database. Since LES of this flow requires the computation of unfiltered gas-phase variables at droplet locations from filtered gas-phase variables at the grid points, it is proposed to model these by assuming the gas-phase variables to be the sum of the filtered variables and a correction based on the filtered standard deviation; this correction is then computed from the Subgrid Scale (SGS) standard deviation. This model predicts the unfiltered variables at droplet locations considerably better than simply interpolating the filtered variables. Three methods are investigated for modeling the SGS standard deviation: the Smagorinsky approach, the Gradient model and the Scale-Similarity formulation. When the proportionality constant inherent in the SGS models is properly calculated, the Gradient and Scale-Similarity methods give results in excellent agreement with the DNS.
NASA Astrophysics Data System (ADS)
Bounab, S.; Bentabet, A.; Bouhadda, Y.; Belgoumri, Gh.; Fenineche, N.
2017-08-01
We have investigated the structural and electronic properties of the BAs_xSb_{1-x}, AlAs_xSb_{1-x}, GaAs_xSb_{1-x} and InAs_xSb_{1-x} semiconductor alloys using first-principles calculations under the virtual crystal approximation, within both density functional perturbation theory and the pseudopotential approach. In addition, the optical properties have been calculated using empirical methods. Ground-state properties such as the lattice constant, the bulk modulus and its derivative, the energy gap, the refractive index and the optical dielectric constant have been calculated and discussed. The obtained results are in reasonable agreement with numerous experimental and theoretical data. The compositional dependence of the lattice constant, bulk modulus, energy gap and electron effective mass of the ternary alloys shows deviations from Vegard's law, and our results are in agreement with the available data in the literature.
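The compositional dependence and its deviation from Vegard's law are commonly summarized with a quadratic bowing term, as in the sketch below; the end-point values and the bowing parameter are placeholders, not the values computed in the paper.

```python
# Vegard's law with a quadratic bowing term, the usual way of summarizing the
# compositional dependence of alloy properties such as the lattice constant
# or the band gap.  End-point values and the bowing parameter are placeholders.
import numpy as np

def alloy_property(x, p_As_endpoint, p_Sb_endpoint, bowing):
    """Property of AB(As_x Sb_1-x): linear (Vegard) part minus a bowing term."""
    return x * p_As_endpoint + (1.0 - x) * p_Sb_endpoint - bowing * x * (1.0 - x)

x = np.linspace(0.0, 1.0, 6)
a_vegard = alloy_property(x, 6.06, 6.48, 0.0)     # e.g. lattice constants (angstrom)
a_bowed  = alloy_property(x, 6.06, 6.48, 0.10)    # small deviation from Vegard
for xi, av, ab in zip(x, a_vegard, a_bowed):
    print(f"x={xi:.1f}: Vegard {av:.3f} A, with bowing {ab:.3f} A")
```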
The action uncertainty principle for continuous measurements
NASA Astrophysics Data System (ADS)
Mensky, Michael B.
1996-02-01
The action uncertainty principle (AUP) for the specification of the most probable readouts of continuous quantum measurements is proved, formulated in different forms and analyzed (for nonlinear as well as linear systems). Continuous monitoring of an observable A(p,q,t) with resolution Δa(t) is considered. The influence of the measurement process on the evolution of the measured system (quantum measurement noise) is represented by an additional term δF(t)A(p,q,t) in the Hamiltonian, where the function δF (a generalized fictitious force) is restricted by the AUP ∫|δF(t)| Δa(t) dt ≲ ℏ and is arbitrary otherwise. Quantum-nondemolition (QND) measurements are analyzed with the help of the AUP. A simple uncertainty relation for continuous quantum measurements is derived. It states that the area of a certain band in phase space should be of the order of ℏ. The width of the band depends on the measurement resolution, while its length is determined by the deviation of the system, due to the measurement, from classical behavior.
Tests of gravity with future space-based experiments
NASA Astrophysics Data System (ADS)
Sakstein, Jeremy
2018-03-01
Future space-based tests of relativistic gravitation—laser ranging to Phobos, accelerometers in orbit, and optical networks surrounding Earth—will constrain the theory of gravity with unprecedented precision by testing the inverse-square law, the strong and weak equivalence principles, and the deflection and time delay of light by massive bodies. In this paper, we estimate the bounds that could be obtained on alternative gravity theories that use screening mechanisms to suppress deviations from general relativity in the Solar System: chameleon, symmetron, and Galileon models. We find that space-based tests of the parametrized post-Newtonian parameter γ will constrain chameleon and symmetron theories to new levels, and that tests of the inverse-square law using laser ranging to Phobos will provide the most stringent constraints on Galileon theories to date. We end by discussing the potential for constraining these theories using upcoming tests of the weak equivalence principle, and conclude that further theoretical modeling is required in order to fully utilize the data.
Kruijne, Wouter; Van der Stigchel, Stefan; Meeter, Martijn
2014-03-01
The trajectory of saccades to a target is often affected whenever there is a distractor in the visual field. Distractors can cause a saccade to deviate towards their location or away from it. The oculomotor mechanisms that produce deviation towards distractors have been thoroughly explored in behavioral, neurophysiological and computational studies. The mechanisms underlying deviation away, on the other hand, remain unclear. Behavioral findings suggest a mechanism of spatially focused, top-down inhibition in a saccade map, and deviation away has become a tool to investigate such inhibition. However, this inhibition hypothesis has little neuroanatomical or neurophysiological support, and recent findings go against it. Here, we propose that deviation away results from an unbalanced saccade drive from the brainstem, caused by spike rate adaptation in brainstem long-lead burst neurons. Adaptation to stimulation in the direction of the distractor results in an unbalanced drive away from it. An existing model of the saccade system was extended with this theory. The resulting model simulates a wide range of findings on saccade trajectories, including findings that have classically been interpreted to support inhibition views. Furthermore, the model replicated the effect of saccade latency on deviation away, but predicted this effect would be absent with large (400 ms) distractor-target onset asynchrony. This prediction was confirmed in an experiment, which demonstrates that the theory both explains classical findings on saccade trajectories and predicts new findings. Copyright © 2014 Elsevier Inc. All rights reserved.
[The organization of system of quality management in large multitype hospital].
Taĭts, B M; Krichmar, G N; Stvolinskiĭ, I Iu; Grandilevskaia, O L
2013-01-01
The article presents the characteristics and an assessment of the functioning of a quality management model in a large multitype hospital. It reports the results of the work of the municipal hospital of the Holy Venerable Martyr Elizabeth of St. Petersburg on implementing a quality management system in 2001-2011, founded on the principles of total quality management of medical services and of quality management according to the international ISO standards and their Russian analogues.
NASA Technical Reports Server (NTRS)
Smith, Terence R.; Menon, Sudhakar; Star, Jeffrey L.; Estes, John E.
1987-01-01
This paper provides a brief survey of the history, structure and functions of 'traditional' geographic information systems (GIS), and then suggests a set of requirements that large-scale GIS should satisfy, together with a set of principles for their satisfaction. These principles, which include the systematic application of techniques from several subfields of computer science to the design and implementation of GIS and the integration of techniques from computer vision and image processing into standard GIS technology, are discussed in some detail. In particular, the paper provides a detailed discussion of questions relating to appropriate data models, data structures and computational procedures for the efficient storage, retrieval and analysis of spatially-indexed data.
A Novel Electronic Data Collection System for Large-Scale Surveys of Neglected Tropical Diseases
King, Jonathan D.; Buolamwini, Joy; Cromwell, Elizabeth A.; Panfel, Andrew; Teferi, Tesfaye; Zerihun, Mulat; Melak, Berhanu; Watson, Jessica; Tadesse, Zerihun; Vienneau, Danielle; Ngondi, Jeremiah; Utzinger, Jürg; Odermatt, Peter; Emerson, Paul M.
2013-01-01
Background Large cross-sectional household surveys are common for measuring indicators of neglected tropical disease control programs. As an alternative to standard paper-based data collection, we utilized novel paperless technology to collect data electronically from over 12,000 households in Ethiopia. Methodology We conducted a needs assessment to design an Android-based electronic data collection and management system. We then evaluated the system by reporting results of a pilot trial and from comparisons of two large-scale surveys, one with traditional paper questionnaires and the other with tablet computers, including accuracy, person-time days, and costs incurred. Principal Findings The electronic data collection system met core functions in household surveys and overcame constraints identified in the needs assessment. Pilot data recorders took 264 sec (standard deviation (SD) 152 sec) and 260 sec (SD 122 sec) per person registered to complete household surveys using paper and tablets, respectively (P = 0.77). Data recorders felt a lack of connection with the interviewee during the first days of using electronic devices, but preferred to collect data electronically in future surveys. Electronic data collection saved time by giving results immediately, obviating the need for double data entry and cross-correcting. The proportion of identified data entry errors in disease classification did not differ between the two data collection methods. Geographic coordinates collected using the tablets were more accurate than coordinates transcribed on a paper form. The cost of the equipment required for electronic data collection was approximately the same as the cost incurred for data entry of the paper questionnaires, and repeated use of the electronic equipment may increase cost savings. Conclusions/Significance Conducting a needs assessment and pilot testing allowed the design to specifically match the functionality required for surveys. Electronic data collection using an Android-based technology was suitable for a large-scale health survey, saved time, provided more accurate geo-coordinates, and was preferred by recorders over standard paper-based questionnaires. PMID:24066147
Probability evolution method for exit location distribution
NASA Astrophysics Data System (ADS)
Zhu, Jinjie; Chen, Zhen; Liu, Xianbin
2018-03-01
The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit has been clarified by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise of finite strength may induce nontrivial phenomena such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes an exponentially long time as the noise approaches zero, and the majority of that time is wasted on the uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified by examining two classical examples and is compared with theoretical predictions. The results show that the method performs well for weak noise but may induce certain deviations for large noise. Finally, some possible ways to improve our method are discussed.
Radar sea reflection for low-e targets
NASA Astrophysics Data System (ADS)
Chow, Winston C.; Groves, Gordon W.
1998-09-01
Modeling of radar signal reflection from a wavy sea surface uses a realistic characterization of the large surface features and parameterizes the effect of the small roughness elements. Representation of the reflection coefficient at each point of the sea surface as a function of the specular deviation angle is, to our knowledge, a novel approach. The objective is to achieve enough simplification, while retaining enough fidelity, to obtain a practical multipath model. The specular deviation angle as used in this investigation is defined and explained. Being a function of the sea elevations, which are stochastic in nature, this quantity is also random and has a probability density function. This density function depends on the relative geometry of the antenna and target positions and, together with the beam-broadening effect of the small surface ripples, determines the reflectivity of the sea surface at each point. The probability density function of the specular deviation angle is derived. The distribution of the specular deviation angle as a function of position on the mean sea surface is described.
Rare behavior of growth processes via umbrella sampling of trajectories
NASA Astrophysics Data System (ADS)
Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen
2018-03-01
We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s -ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.
Recursive utility in a Markov environment with stochastic growth
Hansen, Lars Peter; Scheinkman, José A.
2012-01-01
Recursive utility models that feature investor concerns about the intertemporal composition of risk are used extensively in applied research in macroeconomics and asset pricing. These models represent preferences as the solution to a nonlinear forward-looking difference equation with a terminal condition. In this paper we study infinite-horizon specifications of this difference equation in the context of a Markov environment. We establish a connection between the solution to this equation and the solution to an arguably simpler Perron–Frobenius eigenvalue equation of the type that occurs in the study of large deviations for Markov processes. By exploiting this connection, we establish existence and uniqueness results. Moreover, we explore a substantive link between large deviation bounds for tail events for stochastic consumption growth and preferences induced by recursive utility. PMID:22778428
Shapes of strong shock fronts in an inhomogeneous solar wind
NASA Technical Reports Server (NTRS)
Heinemann, M. A.; Siscoe, G. L.
1974-01-01
The shapes expected for solar-flare-produced strong shock fronts in the solar wind have been calculated, large-scale variations in the ambient medium being taken into account. It has been shown that for reasonable ambient solar wind conditions the mean and the standard deviation of the east-west shock normal angle are in agreement with experimental observations including shocks of all strengths. The results further suggest that near a high-speed stream it is difficult to distinguish between corotating shocks and flare-associated shocks on the basis of the shock normal alone. Although the calculated shapes are outside the range of validity of the linear approximation, these results indicate that the variations in the ambient solar wind may account for large deviations of shock normals from the radial direction.
NASA Astrophysics Data System (ADS)
Itoh, Tamitake; Yamamoto, Yuko S.; Tamaru, Hiroharu; Biju, Vasudevanpillai; Murase, Norio; Ozaki, Yukihiro
2013-06-01
We find unique properties accompanying surface-enhanced fluorescence (SEF) from dye molecules adsorbed on Ag nanoparticle aggregates, which generate surface-enhanced Raman scattering. The properties are observed in the excitation laser energy dependence of SEF after excluding plasmonic spectral modulation of the SEF. The unique properties are large blue shifts of the fluorescence spectra, deviation of the ratios between anti-Stokes and Stokes SEF intensities from those of normal fluorescence, super-broadening of the Stokes spectra, and a return to the original fluorescence under lower-energy excitation. We show that these properties are induced by electromagnetic enhancement of radiative decay rates exceeding the vibrational relaxation rates within an electronic excited state, which suggests that molecular electronic dynamics in strong plasmonic fields can deviate strongly from that in free space.
Large deviation analysis of a simple information engine
NASA Astrophysics Data System (ADS)
Maitland, Michael; Grosskinsky, Stefan; Harris, Rosemary J.
2015-11-01
Information thermodynamics provides a framework for studying the effect of feedback loops on entropy production. It has enabled the understanding of novel thermodynamic systems such as the information engine, which can be seen as a modern version of "Maxwell's Dæmon," whereby a feedback controller processes information gained by measurements in order to extract work. Here, we analyze a simple model of such an engine that uses feedback control based on measurements to obtain negative entropy production. We focus on the distribution and fluctuations of the information obtained by the feedback controller. Significantly, our model allows an analytic treatment for a two-state system with exact calculation of the large deviation rate function. These results suggest an approximate technique for larger systems, which is corroborated by simulation data.
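The abstract mentions an exact large-deviation rate function for a two-state system. As a generic illustration of that type of calculation (not the authors' information-flow observable), the sketch below tilts the generator of a two-state Markov jump process by the dynamical activity and Legendre-transforms the resulting scaled cumulant generating function; the rates a and b are arbitrary placeholders.

    import numpy as np

    # Hypothetical two-state jump rates (placeholders, not the parameters of the model above)
    a, b = 1.0, 0.5            # jump rates 0 -> 1 and 1 -> 0

    def scgf(s):
        """Scaled cumulant generating function theta(s) of the dynamical activity
        (total number of jumps per unit time), from the largest eigenvalue of the
        s-tilted generator of the two-state Markov jump process."""
        W = np.array([[-a, b * np.exp(s)],
                      [a * np.exp(s), -b]])
        return np.max(np.linalg.eigvals(W).real)

    # Rate function I(k) by a numerical Legendre transform of theta(s)
    s_grid = np.linspace(-4.0, 4.0, 4001)
    theta = np.array([scgf(s) for s in s_grid])
    k_mean = 2 * a * b / (a + b)                 # mean activity, for comparison
    for k in (0.5 * k_mean, k_mean, 2.0 * k_mean):
        I = np.max(k * s_grid - theta)
        print(f"k = {k:.3f}   I(k) ≈ {I:.4f}")

At k equal to the mean activity the rate function vanishes, while atypically slow or fast trajectories carry a positive cost, which is the structure exploited by the s-ensemble analyses cited above.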
First-principles study of giant thermoelectric power in incommensurate TlInSe2
NASA Astrophysics Data System (ADS)
Ishikawa, M.; Nakayama, T.; Wakita, K.; Shim, Y. G.; Mamedov, N.
2018-04-01
The ternary thallium compound TlInSe2 exhibits a giant Seebeck effect below around 410 K, where Tl atoms form one-dimensional incommensurate (IC) arrays. To clarify the origin of the large thermoelectric power in the IC phase, the electronic properties of Tl-atom super-structured TlInSe2 were studied using first-principles calculations. It is shown that the super-structures induce strong binding states between Se-p orbitals in nearest-neighboring layers and produce a large density of states near the lower conduction bands, which might be one of the origins of the large thermoelectric power.
On the influence of airfoil deviations on the aerodynamic performance of wind turbine rotors
NASA Astrophysics Data System (ADS)
Winstroth, J.; Seume, J. R.
2016-09-01
The manufacture of large wind turbine rotor blades is a difficult task that still involves a certain degree of manual labor. Due to this complexity, airfoil deviations between the design airfoils and the manufactured blade are certain to arise. At present, the understanding of the impact of manufacturing uncertainties on the aerodynamic performance is still incomplete. The present work analyzes the influence of a series of airfoil deviations likely to occur during manufacturing by means of Computational Fluid Dynamics and the aeroelastic code FAST. The average power production of the NREL 5MW wind turbine is used to evaluate the different airfoil deviations. The analyzed deviations include mold tilt towards the leading and trailing edges, thick bond lines, thick bond lines with cantilever correction, backward-facing steps, and airfoil waviness. The most severe influences are observed for mold tilt towards the leading edge and for thick bond lines. By applying the cantilever correction, the influence of thick bond lines is almost compensated. The effect of airfoil waviness is strongly dependent on the amplitude and on its location along the surface of the airfoil. Increased influence is observed for backward-facing steps once they are high enough to trigger boundary-layer transition close to the leading edge.
NASA Technical Reports Server (NTRS)
Spera, David A.
2008-01-01
Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally-stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. Mean deviation was found to be -0.4 percent and standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, mean deviation from 54 data points was -1.3 percent and standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.
Solar system and equivalence principle constraints on f(R) gravity by the chameleon approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Capozziello, Salvatore; Tsujikawa, Shinji
2008-05-15
We study constraints on f(R) dark energy models from solar system experiments combined with experiments on the violation of the equivalence principle. When the mass of an equivalent scalar field degree of freedom is heavy in a region with high density, a spherically symmetric body has a thin shell so that an effective coupling of the fifth force is suppressed through a chameleon mechanism. We place experimental bounds on the cosmologically viable models recently proposed in the literature that have an asymptotic form f(R) = R - λR_c[1 - (R_c/R)^{2n}] in the regime R ≫ R_c. From the solar system constraints on the post-Newtonian parameter γ, we derive the bound n > 0.5, whereas the constraints from the violations of the weak and strong equivalence principles give the bound n > 0.9. This allows a possibility to find the deviation from the Λ-cold dark matter (ΛCDM) cosmological model. For the model f(R) = R - λR_c(R/R_c)^p with 0
Surface Microparticles in Liquid Helium. Quantum Archimedes' Principle
NASA Astrophysics Data System (ADS)
Dyugaev, A. M.; Lebedeva, E. V.
2017-12-01
Deviations from Archimedes' principle for spherical molecular hydrogen particles with radius R_0 at the surface of liquid 4He have been investigated. The classical Archimedes' principle holds if R_0 is larger than the helium capillary length L_cap ≅ 500 μm; in this case, the elevation of a particle above the liquid is h_+ ∼ R_0. At 30 μm < R_0 < 500 μm, the buoyancy is suppressed by the surface tension and h_+ ∼ R_0^3/L_cap^2. At R_0 < 30 μm, the particle is situated beneath the surface of the liquid. In this case, the buoyancy competes with the Casimir force, which repels the particle from the surface deep into the liquid. The distance of the particle from the surface is h_- ∼ R_c^{5/3}/R_0^{2/3} if R_0 > R_c. Here, R_c ≅ (ħc/ρg)^{1/5} ≈ 1 μm, where ħ is Planck's constant, c is the speed of light, g is the acceleration due to gravity, and ρ is the mass density of helium. For very small particles (R_0 < R_c), the distance h_- to the surface of the liquid is independent of their size, h_- = R_c.
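A quick numerical check of the scale R_c ≅ (ħc/ρg)^{1/5} quoted above, together with the regime boundaries, is sketched below; the helium density value is an assumption, and the scalings are reproduced only up to undetermined prefactors.

    import numpy as np

    hbar = 1.054571817e-34   # J s
    c    = 2.99792458e8      # m/s
    g    = 9.80665           # m/s^2
    rho  = 145.0             # kg/m^3, approximate density of liquid 4He (assumed)

    R_c = (hbar * c / (rho * g)) ** 0.2
    print(f"R_c ≈ {R_c*1e6:.2f} micrometres")   # of the order of a micrometre

    L_cap = 500e-6   # helium capillary length quoted in the abstract, in metres

    def regime(R0):
        """Classify the floating regime sketched in the abstract (scalings only)."""
        if R0 > L_cap:
            return "classical Archimedes: elevation h_+ ~ R0"
        if R0 > 30e-6:
            return "surface-tension suppressed: h_+ ~ R0**3 / L_cap**2"
        if R0 > R_c:
            return "submerged, Casimir-dominated: depth h_- ~ R_c**(5/3) / R0**(2/3)"
        return "very small particle: depth h_- ~ R_c, independent of R0"

    for R0 in (1e-3, 100e-6, 5e-6, 0.5e-6):
        print(f"R0 = {R0*1e6:7.1f} um -> {regime(R0)}")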
Comparison of laser ray-tracing and skiascopic ocular wavefront-sensing devices
Bartsch, D-UG; Bessho, K; Gomez, L; Freeman, WR
2009-01-01
Purpose To compare two wavefront-sensing devices based on different principles. Methods Thirty-eight healthy eyes of 19 patients were measured five times in the reproducibility study. Twenty eyes of 10 patients were measured in the comparison study. The Tracey Visual Function Analyzer (VFA), based on the ray-tracing principle, and the Nidek optical pathway difference (OPD)-Scan, based on the dynamic skiascopy principle, were compared. The standard deviation (SD) of the root mean square (RMS) errors was compared to verify reproducibility. We evaluated RMS errors, Zernike terms, and conventional refractive indexes (Sph, Cyl, Ax, and spherical equivalent). Results In the RMS error readings, both devices showed similar ratios of SD to the mean measurement value (VFA: 57.5±11.7%, OPD-Scan: 53.9±10.9%). Comparison on the same eye showed that almost all terms were significantly greater using the VFA than using the OPD-Scan. However, certain high spatial frequency aberrations (tetrafoil, pentafoil, and hexafoil) were consistently measured near zero with the OPD-Scan. Conclusion Both devices showed a similar level of reproducibility; however, there was a considerable difference in the wavefront reading between machines when measuring the same eye. Differences in the number of sample points, centration, and measurement algorithms between the two instruments may explain our results. PMID:17571088
Unifying principles in terrestrial locomotion: do hopping Australian marsupials fit in?
Bennett, M B
2000-01-01
Mammalian terrestrial locomotion has many unifying principles. However, the Macropodoidea are a particularly interesting group that exhibit a number of significant deviations from the principles that seem to apply to other mammals. While the properties of the materials that comprise the musculoskeletal system of mammals are similar, evidence suggests that tendon properties in macropodoid marsupials may be size or function dependent, in contrast to the situation in placental mammals. Postural differences related to hopping versus running have a dramatic effect on the scaling of the pelvic limb musculoskeletal system. Ratios of muscle fibre to tendon cross-sectional areas for ankle extensors and digital flexors scale with positive allometry in all mammals, but the exponents are significantly higher in macropods. Tendon safety factors decline with increasing body mass in mammals, with eutherians at risk of ankle extensor tendon rupture at a body mass of about 150 kg, whereas kangaroos encounter similar problems at a body mass of approximately 35 kg. Tendon strength appears to limit locomotor performance in these animals. Elastic strain energy storage in tendons is mass dependent in all mammals, but the exponents are significantly larger in macropodids. Tibial stresses may scale with positive allometry in kangaroos, which results in lower bone safety factors in macropods compared with eutherian mammals.
Efficiency of thin magnetically arrested discs around black holes
NASA Astrophysics Data System (ADS)
Avara, Mark J.; McKinney, Jonathan C.; Reynolds, Christopher S.
2016-10-01
The radiative and jet efficiencies of thin magnetized accretion discs around black holes (BHs) are affected by BH spin and the presence of a magnetic field that, when strong, could lead to large deviations from Novikov-Thorne (NT) thin disc theory. To seek the maximum deviations, we perform general relativistic magnetohydrodynamic simulations of radiatively efficient thin (half-height H to radius R of H/R ≈ 0.10) discs around moderately rotating BHs with a/M = 0.5. First, our simulations, each evolved for more than 70 000 rg/c (gravitational radius rg and speed of light c), show that large-scale magnetic field readily accretes inward even through our thin disc and builds-up to the magnetically arrested disc (MAD) state. Secondly, our simulations of thin MADs show the disc achieves a radiative efficiency of ηr ≈ 15 per cent (after estimating photon capture), which is about twice the NT value of ηr ˜ 8 per cent for a/M = 0.5 and gives the same luminosity as an NT disc with a/M ≈ 0.9. Compared to prior simulations with ≲10 per cent deviations, our result of an ≈80 per cent deviation sets a new benchmark. Building on prior work, we are now able to complete an important scaling law which suggests that observed jet quenching in the high-soft state in BH X-ray binaries is consistent with an ever-present MAD state with a weak yet sustained jet.
Effects of Various Architectural Parameters on Six Room Acoustical Measures in Auditoria.
NASA Astrophysics Data System (ADS)
Chiang, Wei-Hwa
The effects of architectural parameters on six room acoustical measures were investigated by means of correlation analyses, factor analyses, and multiple regression analyses based on data taken in twenty halls. Architectural parameters were used to estimate acoustical measures taken at individual locations within each room as well as the averages and standard deviations of all measured values in the rooms. The six acoustical measures were Early Decay Time (EDT10), Clarity Index (C80), Overall Level (G), Bass Ratio based on Early Decay Time (BR(EDT)), Treble Ratio based on Early Decay Time (TR(EDT)), and Early Inter-aural Cross Correlation (IACC80). A comprehensive method of quantifying various architectural characteristics of rooms was developed to define a large number of architectural parameters that were hypothesized to affect the acoustical measurements made in the rooms. This study quantitatively confirmed many of the principles used in the design of concert halls and auditoria. Three groups of room architectural parameters, such as the parameters associated with the depth of diffusing surfaces, were significantly correlated with the hall standard deviations of most of the acoustical measures. Significant differences in the statistical relations among architectural parameters and receiver-specific acoustical measures were found between a group of music halls and a group of lecture halls. For example, architectural parameters such as the relative distance from the receiver to the overhead ceiling increased the percentage of the variance of acoustical measures explained by Barron's revised theory from approximately 70% to 80% only when data were taken in the group of music halls. This study revealed the major architectural parameters that have strong relations with individual acoustical measures, forming the basis for a more quantitative method for advancing the theoretical design of concert halls and other auditoria. The results of this study provide designers with the information to predict acoustical measures in buildings at very early stages of the design process without using computer models or scale models.
ERIC Educational Resources Information Center
Gilleskie, Donna B.; Salemi, Michael K.
2012-01-01
In a typical economics principles course, students encounter a large number of concepts. In a literacy-targeted course, students study a "short list" of concepts that they can use for the rest of their lives. While a literacy-targeted principles course provides better education for nonmajors, it may place economic majors at a…
Evaluating the accuracy and large inaccuracy of two continuous glucose monitoring systems.
Leelarathna, Lalantha; Nodale, Marianna; Allen, Janet M; Elleri, Daniela; Kumareswaran, Kavita; Haidar, Ahmad; Caldwell, Karen; Wilinska, Malgorzata E; Acerini, Carlo L; Evans, Mark L; Murphy, Helen R; Dunger, David B; Hovorka, Roman
2013-02-01
This study evaluated the accuracy and large inaccuracy of the Freestyle Navigator (FSN) (Abbott Diabetes Care, Alameda, CA) and Dexcom SEVEN PLUS (DSP) (Dexcom, Inc., San Diego, CA) continuous glucose monitoring (CGM) systems during closed-loop studies. Paired CGM and plasma glucose values (7,182 data pairs) were collected, every 15-60 min, from 32 adults (36.2±9.3 years) and 20 adolescents (15.3±1.5 years) with type 1 diabetes who participated in closed-loop studies. Levels 1, 2, and 3 of large sensor error with increasing severity were defined according to absolute relative deviation greater than or equal to ±40%, ±50%, and ±60% at a reference glucose level of ≥6 mmol/L or absolute deviation greater than or equal to ±2.4 mmol/L,±3.0 mmol/L, and ±3.6 mmol/L at a reference glucose level of <6 mmol/L. Median absolute relative deviation was 9.9% for FSN and 12.6% for DSP. Proportions of data points in Zones A and B of Clarke error grid analysis were similar (96.4% for FSN vs. 97.8% for DSP). Large sensor over-reading, which increases risk of insulin over-delivery and hypoglycemia, occurred two- to threefold more frequently with DSP than FSN (once every 2.5, 4.6, and 10.7 days of FSN use vs. 1.2, 2.0, and 3.7 days of DSP use for Level 1-3 errors, respectively). At levels 2 and 3, large sensor errors lasting 1 h or longer were absent with FSN but persisted with DSP. FSN and DSP differ substantially in the frequency and duration of large inaccuracy despite only modest differences in conventional measures of numerical and clinical accuracy. Further evaluations are required to confirm that FSN is more suitable for integration into closed-loop delivery systems.
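A minimal sketch of the error-grading rule described above, using the thresholds quoted in the abstract; the function name and the example readings are hypothetical.

    def large_error_level(cgm, ref):
        """Grade a CGM/reference pair (mmol/L) into large-error levels 1-3
        following the thresholds quoted in the abstract; 0 means no large error.
        Level n uses +/-(30+10n)% relative deviation when ref >= 6 mmol/L,
        or +/-(1.8+0.6n) mmol/L absolute deviation when ref < 6 mmol/L."""
        if ref >= 6.0:
            dev = abs(cgm - ref) / ref * 100.0      # absolute relative deviation, %
            thresholds = (40.0, 50.0, 60.0)
        else:
            dev = abs(cgm - ref)                    # absolute deviation, mmol/L
            thresholds = (2.4, 3.0, 3.6)
        level = 0
        for n, t in enumerate(thresholds, start=1):
            if dev >= t:
                level = n
        return level

    # Hypothetical paired readings (mmol/L): (sensor, reference)
    pairs = [(7.0, 6.5), (9.5, 6.0), (3.0, 5.6), (1.4, 5.2)]
    for cgm, ref in pairs:
        print(cgm, ref, "-> level", large_error_level(cgm, ref))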
Reflections on the EPSRC Principles of Robotics from the new far-side of the law
NASA Astrophysics Data System (ADS)
Voiculescu, Aurora
2017-04-01
The thought-provoking EPSRC Principles of Robotics stem largely from the reflection on the extent to which robots can affect our lives. These comments highlight the fact that, while the principles may address to a good extent the present technological challenges, they appear to be less immediately suited for future technological and conceptual dares. The first part of the paper is dedicated to the search of the definition of what a robot is. Such a definition should offer the basic conceptual platform on which a normative endeavour, aiming to regulate robots in society, should be based. Concluding that the Principles offer no clear yet flexible insight into such a (meta-) definition, which would allow one to take into account the parameters of informed technological imagination and of envisaged social transformation, the second half of the paper highlights a number of regulatory points of tension. Such tensions, it is argued, stem largely from the absence of an appropriate conceptual platform, influencing negatively the extent to which the principles can be effective in guiding social, ethical, legal and scientific conduct.
Evolution driven structural changes in CENP-E motor domain.
Kumar, Ambuj; Kamaraj, Balu; Sethumadhavan, Rao; Purohit, Rituraj
2013-06-01
Genetic evolution corresponds to various biochemical changes that are vital for the development of new functional traits. Phylogenetic analysis has provided important insight into the genetic closeness among species and their evolutionary relationships. The centromere-associated protein E (CENP-E) is vital for maintaining the cell cycle, and its checkpoint signal mechanisms are vital for the recruitment of other essential kinetochore proteins. In this study we focussed on the evolution-driven structural changes in the CENP-E motor domain across the primate lineage. Through molecular dynamics simulation and computational chemistry approaches we examined the changes in ATP binding affinity and the conformational deviations of the human CENP-E motor domain as compared with the other primates. Root mean square deviation (RMSD), root mean square fluctuation (RMSF), radius of gyration (Rg), and principal component analysis (PCA) results together suggested a gain in stability as we move from tarsier towards human. This study provides significant insight into how cell cycle proteins and their corresponding biochemical activities are evolving and illustrates the potency of a theoretical approach for assessing, in a single study, the structural, functional, and dynamical aspects of protein evolution.
Separate encoding of model-based and model-free valuations in the human brain.
Beierholm, Ulrik R; Anen, Cedric; Quartz, Steven; Bossaerts, Peter
2011-10-01
Behavioral studies have long shown that humans solve problems in two ways, one intuitive and fast (System 1, model-free), and the other reflective and slow (System 2, model-based). The neurobiological basis of dual process problem solving remains unknown due to challenges of separating activation in concurrent systems. We present a novel neuroeconomic task that predicts distinct subjective valuation and updating signals corresponding to these two systems. We found two concurrent value signals in human prefrontal cortex: a System 1 model-free reinforcement signal and a System 2 model-based Bayesian signal. We also found a System 1 updating signal in striatal areas and a System 2 updating signal in lateral prefrontal cortex. Further, signals in prefrontal cortex preceded choices that are optimal according to either updating principle, while signals in anterior cingulate cortex and globus pallidus preceded deviations from optimal choice for reinforcement learning. These deviations tended to occur when uncertainty regarding optimal values was highest, suggesting that disagreement between dual systems is mediated by uncertainty rather than conflict, confirming recent theoretical proposals. Copyright © 2011 Elsevier Inc. All rights reserved.
Growth rate measurement in free jet experiments
NASA Astrophysics Data System (ADS)
Charpentier, Jean-Baptiste; Renoult, Marie-Charlotte; Crumeyrolle, Olivier; Mutabazi, Innocent
2017-07-01
An experimental method was developed to measure the growth rate of the capillary instability for free liquid jets. The method uses a standard shadow-graph imaging technique to visualize a jet, produced by extruding a liquid through a circular orifice, and a statistical analysis of the entire jet. The analysis relies on the computation of the standard deviation of a set of jet profiles obtained under the same experimental conditions. The principle and robustness of the method are illustrated with a set of emulated jet profiles. The method is also applied to free-falling jet experiments conducted for various Weber numbers and two low-viscosity solutions: a Newtonian and a viscoelastic one. Growth rate measurements are found to be in good agreement with linear stability theory in the Rayleigh regime, as expected from previous studies. In addition, the standard deviation curve is used to obtain an indirect measurement of the initial perturbation amplitude and to identify beads-on-a-string structure on the jet. This last result serves to demonstrate the capability of the present technique to explore the dynamics of viscoelastic liquid jets in the future.
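A minimal sketch of the statistical idea (not the authors' processing chain): the ensemble standard deviation of repeated profiles grows in proportion to the perturbation amplitude, so a fit of log(standard deviation) versus downstream position recovers the growth rate and, from the intercept, the initial perturbation amplitude. All numbers below are synthetic assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for N repeated jet radius profiles r(z): a perturbation of
    # random phase whose amplitude grows exponentially downstream.
    N, nz = 200, 400
    z = np.linspace(0.0, 1.0, nz)          # arbitrary units
    sigma_true = 6.0                       # assumed growth rate
    a0 = 1e-3                              # assumed initial perturbation amplitude
    profiles = np.empty((N, nz))
    for i in range(N):
        phase = rng.uniform(0, 2 * np.pi)
        profiles[i] = 1.0 + a0 * np.exp(sigma_true * z) * np.cos(40 * z + phase)

    # The ensemble standard deviation grows like the perturbation amplitude;
    # a linear fit of log(std) vs z estimates the growth rate.
    std = profiles.std(axis=0)
    fit = np.polyfit(z, np.log(std), 1)
    print(f"estimated growth rate ≈ {fit[0]:.2f} (true value {sigma_true})")
    print(f"estimated initial amplitude ≈ {np.exp(fit[1]) * np.sqrt(2):.2e} (true {a0:.1e})")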
Source-Independent Quantum Random Number Generation
NASA Astrophysics Data System (ADS)
Cao, Zhu; Zhou, Hongyi; Yuan, Xiao; Ma, Xiongfeng
2016-01-01
Quantum random number generators can provide genuine randomness by appealing to the fundamental principles of quantum mechanics. In general, a physical generator contains two parts—a randomness source and its readout. The source is essential to the quality of the resulting random numbers; hence, it needs to be carefully calibrated and modeled to achieve information-theoretical provable randomness. However, in practice, the source is a complicated physical system, such as a light source or an atomic ensemble, and any deviations in the real-life implementation from the theoretical model may affect the randomness of the output. To close this gap, we propose a source-independent scheme for quantum random number generation in which output randomness can be certified, even when the source is uncharacterized and untrusted. In our randomness analysis, we make no assumptions about the dimension of the source. For instance, multiphoton emissions are allowed in optical implementations. Our analysis takes into account the finite-key effect with the composable security definition. In the limit of large data size, the length of the input random seed is exponentially small compared to that of the output random bit. In addition, by modifying a quantum key distribution system, we experimentally demonstrate our scheme and achieve a randomness generation rate of over 5 × 10^3 bit/s.
Ab Initio Crystal Field for Lanthanides.
Ungur, Liviu; Chibotaru, Liviu F
2017-03-13
An ab initio methodology for the first-principle derivation of crystal-field (CF) parameters for lanthanides is described. The methodology is applied to the analysis of CF parameters in [Tb(Pc) 2 ] - (Pc=phthalocyanine) and Dy 4 K 2 ([Dy 4 K 2 O(OtBu) 12 ]) complexes, and compared with often used approximate and model descriptions. It is found that the application of geometry symmetrization, and the use of electrostatic point-charge and phenomenological CF models, lead to unacceptably large deviations from predictions based on ab initio calculations for experimental geometry. It is shown how the predictions of standard CASSCF (Complete Active Space Self-Consistent Field) calculations (with 4f orbitals in the active space) can be systematically improved by including effects of dynamical electronic correlation (CASPT2 step) and by admixing electronic configurations of the 5d shell. This is exemplified for the well-studied Er-trensal complex (H 3 trensal=2,2',2"-tris(salicylideneimido)trimethylamine). The electrostatic contributions to CF parameters in this complex, calculated with true charge distributions in the ligands, yield less than half of the total CF splitting, thus pointing to the dominant role of covalent effects. This analysis allows the conclusion that ab initio crystal field is an essential tool for the decent description of lanthanides. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Dense image registration through MRFs and efficient linear programming.
Glocker, Ben; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir; Paragios, Nikos
2008-12-01
In this paper, we introduce a novel and efficient approach to dense image registration, which does not require a derivative of the employed cost function. In such a context, the registration problem is formulated using a discrete Markov random field objective function. First, towards dimensionality reduction on the variables we assume that the dense deformation field can be expressed using a small number of control points (registration grid) and an interpolation strategy. Then, the registration cost is expressed using a discrete sum over image costs (using an arbitrary similarity measure) projected on the control points, and a smoothness term that penalizes local deviations on the deformation field according to a neighborhood system on the grid. Towards a discrete approach, the search space is quantized resulting in a fully discrete model. In order to account for large deformations and produce results on a high resolution level, a multi-scale incremental approach is considered where the optimal solution is iteratively updated. This is done through successive morphings of the source towards the target image. Efficient linear programming using the primal dual principles is considered to recover the lowest potential of the cost function. Very promising results using synthetic data with known deformations and real data demonstrate the potentials of our approach.
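A much-reduced, one-dimensional sketch of the discrete registration model described above, assuming a coarse grid of control points, a quantized set of candidate displacements, a patch-based data term, and a pairwise smoothness penalty; the chain is solved exactly here by dynamic programming as a stand-in for the primal-dual linear-programming optimiser used in the paper. All signals and parameters are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 400)
    target = np.exp(-((x - 0.5) / 0.08) ** 2)
    true_shift = 12                                     # pixels
    source = np.roll(target, true_shift) + 0.01 * rng.standard_normal(x.size)

    controls = np.arange(40, 400, 80)                   # control-point positions
    labels = np.arange(-20, 21, 4)                      # candidate displacements (pixels)
    lam, half = 0.05, 30                                # smoothness weight, patch half-width

    def data_cost(c, d):
        """Sum of squared differences between a patch of the shifted source and the target."""
        lo, hi = c - half, c + half
        src = np.roll(source, -d)[lo:hi]
        return np.sum((src - target[lo:hi]) ** 2)

    D = np.array([[data_cost(c, d) for d in labels] for c in controls])

    # Exact optimisation on the chain by dynamic programming.
    M = D[0].copy()
    back = []
    for i in range(1, len(controls)):
        pair = lam * np.abs(labels[:, None] - labels[None, :])   # |d_prev - d_next|
        tot = M[:, None] + pair                                  # cost of each predecessor
        back.append(np.argmin(tot, axis=0))
        M = D[i] + np.min(tot, axis=0)
    best = [int(np.argmin(M))]
    for b in reversed(back):
        best.append(int(b[best[-1]]))
    best.reverse()
    print("recovered displacements:", [int(labels[k]) for k in best], "(true shift", true_shift, ")")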
The treatment of missing data in a large cardiovascular clinical outcomes study.
Little, Roderick J; Wang, Julia; Sun, Xiang; Tian, Hong; Suh, Eun-Young; Lee, Michael; Sarich, Troy; Oppenheimer, Leonard; Plotnikov, Alexei; Wittes, Janet; Cook-Bruns, Nancy; Burton, Paul; Gibson, C Michael; Mohanty, Surya
2016-06-01
The potential impact of missing data on the results of clinical trials has received heightened attention recently. A National Research Council study provides recommendations for limiting missing data in clinical trial design and conduct, and principles for analysis, including the need for sensitivity analyses to assess robustness of findings to alternative assumptions about the missing data. A Food and Drug Administration advisory committee raised missing data as a serious concern in their review of results from the ATLAS ACS 2 TIMI 51 study, a large clinical trial that assessed rivaroxaban for its ability to reduce the risk of cardiovascular death, myocardial infarction or stroke in patients with acute coronary syndrome. This case study describes a variety of measures that were taken to address concerns about the missing data. A range of analyses are described to assess the potential impact of missing data on conclusions. In particular, measures of the amount of missing data are discussed, and the fraction of missing information from multiple imputation is proposed as an alternative measure. The sensitivity analysis in the National Research Council study is modified in the context of survival analysis where some individuals are lost to follow-up. The impact of deviations from ignorable censoring is assessed by differentially increasing the hazard of the primary outcome in the treatment groups and multiply imputing events between dropout and the end of the study. Tipping-point analyses are described, where the deviation from ignorable censoring that results in a reversal of significance of the treatment effect is determined. A study to determine the vital status of participants lost to follow-up was also conducted, and the results of including this additional information are assessed. Sensitivity analyses suggest that findings of the ATLAS ACS 2 TIMI 51 study are robust to missing data; this robustness is reinforced by the follow-up study, since inclusion of data from this study had little impact on the study conclusions. Missing data are a serious problem in clinical trials. The methods presented here, namely, the sensitivity analyses, the follow-up study to determine survival of missing cases, and the proposed measurement of missing data via the fraction of missing information, have potential application in other studies involving survival analysis where missing data are a concern. © The Author(s) 2016.
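A toy illustration of the tipping-point idea (binary outcomes and a two-proportion z-test rather than the trial's survival analysis; all counts are invented): the assumed event rate among treatment-arm dropouts is inflated until the treatment effect loses significance.

    import math

    # Hypothetical trial counts (not the ATLAS ACS 2 TIMI 51 data): events / completers,
    # plus subjects lost to follow-up whose outcomes must be assumed.
    events_t, n_t, missing_t = 120, 2000, 150    # treatment arm
    events_c, n_c, missing_c = 165, 2000, 150    # control arm

    def two_prop_pvalue(e1, n1, e2, n2):
        """Two-sided p-value of a two-proportion z-test."""
        p1, p2 = e1 / n1, e2 / n2
        p = (e1 + e2) / (n1 + n2)
        se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        return math.erfc(abs(z) / math.sqrt(2))

    # Control-arm dropouts are imputed at the observed control event rate; the event
    # rate assumed for treatment-arm dropouts is then increased until significance is lost.
    rate_c = events_c / n_c
    for factor in [1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0]:
        e_t = events_t + missing_t * min(1.0, factor * events_t / n_t)
        e_c = events_c + missing_c * rate_c
        p = two_prop_pvalue(e_t, n_t + missing_t, e_c, n_c + missing_c)
        flag = "significant" if p < 0.05 else "tipped (not significant)"
        print(f"inflation x{factor:.1f}: p = {p:.4f}  {flag}")

The inflation factor at which the p-value crosses 0.05 is the tipping point; a very large factor suggests robustness to plausible departures from ignorable censoring, in the spirit of the analyses described above.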
NASA Astrophysics Data System (ADS)
Trugman, Daniel Taylor
The complexity of the earthquake rupture process makes earthquakes inherently unpredictable. Seismic hazard forecasts often presume that the rate of earthquake occurrence can be adequately modeled as a space-time homogeneous or stationary Poisson process and that the relation between the dynamical source properties of small and large earthquakes obeys self-similar scaling relations. While these simplified models provide useful approximations and encapsulate the first-order statistical features of the historical seismic record, they are inconsistent with the complexity underlying earthquake occurrence and can lead to misleading assessments of seismic hazard when applied in practice. The six principal chapters of this thesis explore the extent to which the behavior of real earthquakes deviates from these simplified models, and the implications that the observed deviations have for our understanding of earthquake rupture processes and seismic hazard. Chapter 1 provides a brief thematic overview and introduction to the scope of this thesis. Chapter 2 examines the complexity of the 2010 M7.2 El Mayor-Cucapah earthquake, focusing on the relation between its unexpected and unprecedented occurrence and anthropogenic stresses from the nearby Cerro Prieto Geothermal Field. Chapter 3 compares long-term changes in seismicity within California's three largest geothermal fields in an effort to characterize the relative influence of natural and anthropogenic stress transients on local seismic hazard. Chapter 4 describes a hybrid, hierarchical clustering algorithm that can be used to relocate earthquakes using waveform cross-correlation, and applies the new algorithm to study the spatiotemporal evolution of two recent seismic swarms in western Nevada. Chapter 5 describes a new spectral decomposition technique that can be used to analyze the dynamic source properties of large datasets of earthquakes, and applies this approach to revisit the question of self-similar scaling of southern California seismicity. Chapter 6 builds upon these results and applies the same spectral decomposition technique to examine the source properties of several thousand recent earthquakes in southern Kansas that are likely human-induced by massive oil and gas operations in the region. Chapter 7 studies the connection between source spectral properties and earthquake hazard, focusing on spatial variations in dynamic stress drop and its influence on ground motion amplitudes. Finally, Chapter 8 provides a summary of the key findings of and relations between these studies, and outlines potential avenues of future research.
Ku-band radar threshold analysis
NASA Technical Reports Server (NTRS)
Weber, C. L.; Polydoros, A.
1979-01-01
The statistics of the CFAR threshold for the Ku-band radar were determined. Exact analytical results were developed for both the mean and the standard deviation in the designated search mode. The mean value is compared to the results of a previously reported simulation. The analytical results are more optimistic than the simulation results, for which no explanation is offered. The normalized standard deviation is shown to be very sensitive to the signal-to-noise ratio and very insensitive to the noise correlation present in the range gates of the designated search mode. The substantial variation in the CFAR threshold is dominant at large values of SNR, where the normalized standard deviation is greater than 0.3. Whether or not this significantly affects the resulting probability of detection is a matter which deserves additional attention.
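For orientation, a generic Monte Carlo sketch of a cell-averaging CFAR threshold's mean and normalized standard deviation under exponentially distributed noise; the number of reference gates and the design false-alarm probability are assumptions, and the sketch does not reproduce the designated-search-mode correlation effects analyzed in the report.

    import numpy as np

    rng = np.random.default_rng(2)

    # Cell-averaging CFAR on exponentially distributed (square-law detected) noise:
    # the threshold is a scale factor times the mean of N reference cells.
    N_ref  = 16          # number of reference range gates (assumed)
    pfa    = 1e-4        # design false-alarm probability (assumed)
    alpha  = N_ref * (pfa ** (-1.0 / N_ref) - 1.0)   # CA-CFAR scale factor for the cell mean
    trials = 200_000

    noise_power = 1.0
    ref = rng.exponential(noise_power, size=(trials, N_ref))
    threshold = alpha * ref.mean(axis=1)

    print(f"scale factor alpha            : {alpha:.2f}")
    print(f"mean threshold                : {threshold.mean():.3f}")
    print(f"normalized standard deviation : {threshold.std() / threshold.mean():.3f}")
    # For independent exponential reference cells the normalized standard
    # deviation is 1/sqrt(N_ref), i.e. about 0.25 here.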
Heterogeneity-induced large deviations in activity and (in some cases) entropy production
NASA Astrophysics Data System (ADS)
Gingrich, Todd R.; Vaikuntanathan, Suriyanarayanan; Geissler, Phillip L.
2014-10-01
We solve a simple model that supports a dynamic phase transition and show conditions for the existence of the transition. Using methods of large deviation theory we analytically compute the probability distribution for activity and entropy production rates of the trajectories on a large ring with a single heterogeneous link. The corresponding joint rate function demonstrates two dynamical phases—one localized and the other delocalized, but the marginal rate functions do not always exhibit the underlying transition. Symmetries in dynamic order parameters influence the observation of a transition, such that distributions for certain dynamic order parameters need not reveal an underlying dynamical bistability. Solution of our model system furthermore yields the form of the effective Markov transition matrices that generate dynamics in which the two dynamical phases are at coexistence. We discuss the implications of the transition for the response of bacterial cells to antibiotic treatment, arguing that even simple models of a cell cycle lacking an explicit bistability in configuration space will exhibit a bistability of dynamical phases.
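A numerical companion to the kind of model described above, under assumed parameters: a symmetric random walk on a ring with a single slow link, with the generator tilted by the dynamical activity; the scaled cumulant generating function is the largest eigenvalue of the tilted generator.

    import numpy as np

    def tilted_generator(L, s, w=1.0, w_slow=0.05):
        """Generator of a symmetric random walk on a ring of L sites, with one
        heterogeneous (slow) link, tilted by e^{-s} per jump (dynamical activity)."""
        rates = np.full(L, w)
        rates[0] = w_slow                      # the single slow link between sites 0 and 1
        W = np.zeros((L, L))
        for i in range(L):
            j = (i + 1) % L
            W[j, i] += rates[i] * np.exp(-s)   # jump i -> j, counted as one event
            W[i, j] += rates[i] * np.exp(-s)   # jump j -> i across the same link
            W[i, i] -= rates[i]
            W[j, j] -= rates[i]
        return W

    L = 40
    for s in (-0.5, -0.1, 0.0, 0.1, 0.5):
        theta = np.max(np.linalg.eigvals(tilted_generator(L, s)).real)
        print(f"s = {s:+.2f}   SCGF theta(s) = {theta:.4f}")
    # theta(s) is the scaled cumulant generating function of the activity; a kink
    # developing as L grows is the signature of the localized/delocalized
    # coexistence of dynamical phases discussed above.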
Large-deviation theory for diluted Wishart random matrices
NASA Astrophysics Data System (ADS)
Castillo, Isaac Pérez; Metz, Fernando L.
2018-03-01
Wishart random matrices with a sparse or diluted structure are ubiquitous in the processing of large datasets, with applications in physics, biology, and economics. In this work, we develop a theory for the eigenvalue fluctuations of diluted Wishart random matrices based on the replica approach of disordered systems. We derive an analytical expression for the cumulant generating function of the number of eigenvalues I_N(x) smaller than x ∈ R+, from which all cumulants of I_N(x) and the rate function Ψ_x(k) controlling its large-deviation probability Prob[I_N(x) = kN] ≍ e^{-NΨ_x(k)} follow. Explicit results for the mean value and the variance of I_N(x), its rate function, and its third cumulant are discussed and thoroughly compared to numerical diagonalization, showing very good agreement. The present work establishes the theoretical framework put forward in a recent letter [Phys. Rev. Lett. 117, 104101 (2016), 10.1103/PhysRevLett.117.104101] as an exact and compelling approach to deal with eigenvalue fluctuations of sparse random matrices.
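A small numerical illustration of the observable studied above: sample sparse Wishart matrices, count the eigenvalues below a threshold x, and look at the fluctuations of I_N(x). The sparsity convention and parameters below are assumptions chosen only for illustration.

    import numpy as np

    rng = np.random.default_rng(3)

    def diluted_wishart(N, c, alpha=1.5):
        """Sparse Wishart matrix W = (1/c) X X^T, where X is N x M (M = alpha*N)
        with entries that are nonzero with probability c/N (Gaussian when present)."""
        M = int(alpha * N)
        mask = rng.random((N, M)) < c / N
        X = mask * rng.standard_normal((N, M))
        return X @ X.T / c

    N, c, x = 200, 4.0, 1.0
    samples = 400
    counts = np.empty(samples)
    for k in range(samples):
        eig = np.linalg.eigvalsh(diluted_wishart(N, c))
        counts[k] = np.sum(eig < x)

    frac = counts / N
    print(f"mean of I_N({x})/N   : {frac.mean():.4f}")
    print(f"variance of I_N({x}) : {counts.var():.3f}")
    # Typical fluctuations are modest; the rate function Psi_x(k) of the paper governs
    # the exponentially rare deviations of I_N(x)/N away from its mean.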
Rapidly rotating neutron stars with a massive scalar field—structure and universal relations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doneva, Daniela D.; Yazadjiev, Stoytcho S., E-mail: daniela.doneva@uni-tuebingen.de, E-mail: yazad@phys.uni-sofia.bg
We construct rapidly rotating neutron star models in scalar-tensor theories with a massive scalar field. The fact that the scalar field has nonzero mass leads to very interesting results, since the allowed range of values of the coupling parameters is significantly broadened. Deviations from pure general relativity can be very large for values of the parameters that are in agreement with the observations. We found that rapid rotation can magnify the differences several times compared to the static case. The universal relations between the normalized moment of inertia and quadrupole moment are also investigated both for the slowly and rapidly rotating cases. The results show that these relations are still EOS independent to a large extent and that the deviations from pure general relativity can be large. This places the massive scalar-tensor theories amongst the few alternative theories of gravity that can be tested via the universal I-Love-Q relations.
WKB theory of large deviations in stochastic populations
NASA Astrophysics Data System (ADS)
Assaf, Michael; Meerson, Baruch
2017-06-01
Stochasticity can play an important role in the dynamics of biologically relevant populations. These span a broad range of scales: from intra-cellular populations of molecules to population of cells and then to groups of plants, animals and people. Large deviations in stochastic population dynamics—such as those determining population extinction, fixation or switching between different states—are presently in a focus of attention of statistical physicists. We review recent progress in applying different variants of dissipative WKB approximation (after Wentzel, Kramers and Brillouin) to this class of problems. The WKB approximation allows one to evaluate the mean time and/or probability of population extinction, fixation and switches resulting from either intrinsic (demographic) noise, or a combination of the demographic noise and environmental variations, deterministic or random. We mostly cover well-mixed populations, single and multiple, but also briefly consider populations on heterogeneous networks and spatial populations. The spatial setting also allows one to study large fluctuations of the speed of biological invasions. Finally, we briefly discuss possible directions of future work.
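A worked example of the type of WKB estimate reviewed above, for an assumed single-step birth-death model (the stochastic SIS/logistic model, not a model from the review itself): the extinction barrier is the action S = ∫_0^{x*} ln[λ(x)/μ(x)] dx, and the mean extinction time scales as exp(N·S).

    import numpy as np

    # Assumed example: stochastic SIS / logistic model with scaled birth and death rates
    #   lambda(x) = R0*x*(1-x)  and  mu(x) = x,  where x = n/N.
    R0 = 2.0
    x_star = 1.0 - 1.0 / R0             # attracting fixed point (metastable state)

    # WKB (eikonal) action along the optimal extinction path:
    #   S = \int_0^{x*} ln[lambda(x)/mu(x)] dx, and lambda/mu simplifies to R0*(1-x).
    x = np.linspace(0.0, x_star, 20001)
    f = np.log(R0 * (1.0 - x))
    S = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))      # trapezoidal rule

    S_exact = np.log(R0) - 1.0 + 1.0 / R0                # known closed form for this model
    print(f"WKB action: numerical {S:.6f}, closed form {S_exact:.6f}")

    # The mean extinction time grows exponentially with population size N:  T ~ exp(N*S)
    for N in (20, 50, 100):
        print(f"N = {N:4d}:  exp(N*S) ≈ {np.exp(N * S):.3e}")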
Statistics of velocity fluctuations of Geldart A particles in a circulating fluidized bed riser
Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji
2017-11-21
Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This is an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with the Maxwell-Boltzmann statistics.
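A sketch of the functional form referred to above: a Maxwell-Boltzmann baseline multiplied by the leading-order Sonine correction. The Sonine coefficient a2 used here is an arbitrary placeholder, not a value fitted to the riser data.

    import numpy as np

    def sonine_vdf(c, a2, d=1):
        """Scaled velocity distribution: Maxwell-Boltzmann baseline times the
        leading-order Sonine correction, f(c) = phi(c) * [1 + a2 * S2(c^2)],
        written here for a single (1-D) velocity component (d = 1)."""
        phi = np.exp(-c**2) / np.sqrt(np.pi)
        x = c**2
        S2 = 0.5 * x**2 - 0.5 * (d + 2) * x + d * (d + 2) / 8.0
        return phi * (1.0 + a2 * S2)

    c = np.linspace(-4, 4, 9)
    a2 = 0.1          # placeholder Sonine coefficient
    f_mb = sonine_vdf(c, 0.0)
    f_s  = sonine_vdf(c, a2)
    for ci, g, h in zip(c, f_mb, f_s):
        print(f"c = {ci:+.1f}   Maxwellian {g:.4e}   Sonine-corrected {h:.4e}")
    # A positive a2 overpopulates the tails relative to the Gaussian, the kind of
    # deviation reported above for the transverse velocity fluctuations.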
NASA Astrophysics Data System (ADS)
Korotkova, T. I.; Popova, V. I.
2017-11-01
A generalized mathematical model of decision-making for the problem of planning and mode selection that provides the required heat loads in a large heat supply system is considered. The system is multilevel, decomposed into levels of main and distribution heating networks with intermediate control stages. Evaluation of the effectiveness, reliability, and safety of such a complex system is carried out simultaneously according to several indicators, in particular pressure, flow rate, and temperature. This global multicriteria optimization problem with constraints is decomposed into a number of local optimization problems and a coordination problem. A coordinated solution of the local problems provides a solution to the global multicriteria decision-making problem for the complex system. The choice of the optimal operating mode of the complex heat supply system is made on the basis of an iterative coordination process, which converges to the coordinated solution of the local optimization tasks. The interactive principle of multicriteria decision-making includes, in particular, periodic adjustments, if necessary, guaranteeing optimal safety, reliability, and efficiency of the system as a whole during operation. The required accuracy of the solution, for example the permissible deviation of the indoor air temperature from the required value, can also be changed interactively. This allows adjustment activities to be carried out in the best way and improves the quality of heat supply to consumers. At the same time, an energy-saving task is solved to determine the minimum required values of the heads at the sources and pumping stations.
Classroom Demonstrations of Polymer Principles Part II. Polymer Formation.
ERIC Educational Resources Information Center
Rodriguez, F.; And Others
1987-01-01
This is part two in a series on classroom demonstrations of polymer principles. Described is how large molecules can be assembled from subunits (the process of polymerization). Examples chosen include both linear and branched or cross-linked molecules. (RH)
Rosenfeld, Simon
2013-01-01
Complex biological systems manifest a large variety of emergent phenomena, among which prominent roles belong to self-organization and swarm intelligence. Generally, each level in a biological hierarchy possesses its own systemic properties and requires its own way of observation, conceptualization, and modeling. In this work, an attempt is made to outline general guiding principles in the exploration of a wide range of seemingly dissimilar phenomena observed in large communities of individuals devoid of any personal intelligence and interacting with each other through simple stimulus-response rules. Mathematically, these guiding principles are well captured by the Global Consensus Theorem (GCT), equally applicable to neural networks and to Lotka-Volterra population dynamics. The universality of the mechanistic principles outlined by GCT allows for a unified approach to such diverse systems as biological networks, communities of social insects, robotic communities, microbial communities, communities of somatic cells, social networks, and many other systems. Another cluster of universal laws governing self-organization in large communities of locally interacting individuals is built around the principle of self-organized criticality (SOC). The GCT and SOC, separately or in combination, provide a conceptual basis for understanding the phenomena of self-organization occurring in large communities without the involvement of a supervisory authority, without a system-wide informational infrastructure, and without mapping of a general plan of action onto the cognitive/behavioral faculties of its individual members. Cancer onset and proliferation serves as an important example of the application of these conceptual approaches. In this paper, the point of view is put forward that the apparently irreconcilable contradictions between two opposing theories of carcinogenesis, that is, the Somatic Mutation Theory and the Tissue Organization Field Theory, may be resolved using the systemic approaches provided by GCT and SOC. PMID:23471309
One-side forward-backward asymmetry in top quark pair production at the CERN Large Hadron Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Youkai; Xiao Bo; Zhu Shouhua
2010-11-01
Both D0 and CDF at Tevatron reported the measurements of forward-backward asymmetry in top pair production, which showed possible deviation from the standard model QCD prediction. In this paper, we explore how to examine the same higher-order QCD effects at the more powerful Large Hadron Collider.
Umari, P; Pasquarello, Alfredo
2005-09-23
We determine the fraction f of B atoms belonging to boroxol rings in vitreous boron oxide through a first-principles analysis. After generating a model structure of vitreous B2O3 by first-principles molecular dynamics, we address a large set of properties, including the neutron structure factor, the neutron density of vibrational states, the infrared spectra, the Raman spectra, and the 11B NMR spectra, and find overall good agreement with corresponding experimental data. From the analysis of Raman and 11B NMR spectra, we yield consistently for both probes a fraction f of approximately 0.75. This result indicates that the structure of vitreous boron oxide is largely dominated by boroxol rings.
General John J. Pershing: Critical Observations and Experiences in Manchuria and Mexico
2017-05-25
operational art and the principles of joint operations. So while General Pershing is not largely recognized as an operational artist in contemporary ... writing, his observations and experiences with regards to the elements of operational art and the principles of joint operations before World War I ... Keywords: Russo-Japanese War, Punitive Expedition, World War I, AEF, Operational Art, Principles of Joint Operations.
2017-02-01
services largely applied key principles of effective human capital management in the design of their S&I pay programs for nuclear propulsion...aviation, and cybersecurity occupations. However, the application of these key principles varied by service and occupation. Only the Navy’s S&I pay...programs for nuclear propulsion and aviation fully addressed all seven principles ; programs for other occupations and services generally exhibited a
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klink, W.H.; Wickramasekara, S., E-mail: wickrama@grinnell.edu; Department of Physics, Grinnell College, Grinnell, IA 50112
2014-01-15
In previous work we have developed a formulation of quantum mechanics in non-inertial reference frames. This formulation is grounded in a class of unitary cocycle representations of what we have called the Galilean line group, the generalization of the Galilei group that includes transformations amongst non-inertial reference frames. These representations show that in quantum mechanics, just as is the case in classical mechanics, the transformations to accelerating reference frames give rise to fictitious forces. A special feature of these previously constructed representations is that they all respect the non-relativistic equivalence principle, wherein the fictitious forces associated with linear acceleration can equivalently be described by gravitational forces. In this paper we exhibit a large class of cocycle representations of the Galilean line group that violate the equivalence principle. Nevertheless the classical mechanics analogue of these cocycle representations all respect the equivalence principle. -- Highlights: •A formulation of Galilean quantum mechanics in non-inertial reference frames is given. •The key concept is the Galilean line group, an infinite dimensional group. •A large class of general cocycle representations of the Galilean line group is constructed. •These representations show violations of the equivalence principle at the quantum level. •At the classical limit, no violations of the equivalence principle are detected.
Nonlinear Elastic Effects on the Energy Flux Deviation of Ultrasonic Waves in GR/EP Composites
NASA Technical Reports Server (NTRS)
Prosser, William H.; Kriz, R. D.; Fitting, Dale W.
1992-01-01
In isotropic materials, the direction of the energy flux (energy per unit time per unit area) of an ultrasonic plane wave is always along the same direction as the normal to the wave front. In anisotropic materials, however, this is true only along symmetry directions. Along other directions, the energy flux of the wave deviates from the intended direction of propagation. This phenomenon is known as energy flux deviation and is illustrated. The direction of the energy flux is dependent on the elastic coefficients of the material. This effect has been demonstrated in many anisotropic crystalline materials. In transparent quartz crystals, Schlieren photographs have been obtained which allow visualization of the ultrasonic waves and the energy flux deviation. The energy flux deviation in graphite/epoxy (gr/ep) composite materials can be quite large because of their high anisotropy. The flux deviation angle has been calculated for unidirectional gr/ep composites as a function of both fiber orientation and fiber volume content. Experimental measurements have also been made in unidirectional composites. It has been further demonstrated that changes in composite materials which alter the elastic properties such as moisture absorption by the matrix or fiber degradation, can be detected nondestructively by measurements of the energy flux shift. In this research, the effects of nonlinear elasticity on energy flux deviation in unidirectional gr/ep composites were studied. Because of elastic nonlinearity, the angle of the energy flux deviation was shown to be a function of applied stress. This shift in flux deviation was modeled using acoustoelastic theory and the previously measured second and third order elastic stiffness coefficients for T300/5208 gr/ep. Two conditions of applied uniaxial stress were considered. In the first case, the direction of applied uniaxial stress was along the fiber axis (x3) while in the second case it was perpendicular to the fiber axis along the laminate stacking direction (x1).
Reciprocity in the electronic stopping of slow ions in matter
NASA Astrophysics Data System (ADS)
Sigmund, P.
2008-04-01
The principle of reciprocity, i.e., the invariance of the inelastic excitation in ion-atom collisions against interchange of projectile and target, has been applied to the electronic stopping cross section of low-velocity ions and tested empirically on ion-target combinations supported by a more or less adequate amount of experimental data. Reciprocity is well obeyed (within ~10%) for many systems studied, and deviations exceeding ~20% are exceptional. Systematic deviations such as gas-solid or metal-insulator differences have been looked for but not identified on the present basis. A direct consequence of reciprocity is the equivalence of Z1 with Z2 structure for random slowing down. This feature is reasonably well supported empirically for ion-target combinations involving carbon, nitrogen, aluminium and argon. Reciprocity may be utilized as a criterion to reject questionable experimental data. In cases where a certain stopping cross section has not been or cannot be measured, the stopping cross section for the inverted system may be available and serve as a first estimate. It is suggested to build in reciprocity as a fundamental requirement into empirical interpolation schemes directed at the stopping of low-velocity ions. Examination of the SRIM and MSTAR codes reveals cases where reciprocity is obeyed accurately, but deviations of up to a factor of two are common. In case of heavy ions such as gold, electronic stopping cross sections predicted by SRIM are asserted to be almost an order of magnitude too high.
Variational principle model for the nuclear caloric curve
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das Gupta, S.
2005-12-15
Following the lead of a recent work, I perform a variational principle model calculation for the nuclear caloric curve. A Skyrme-type interaction with and without momentum dependence is used. The calculation is done for a large nucleus, i.e., in the nuclear matter limit. Thus I address the issue of volume fragmentation only. Nonetheless, the results are similar to the previous, largely phenomenological calculation for a finite nucleus. I find that the onset of fragmentation can be sudden as a function of temperature or excitation energy.
Beyond δ: Tailoring marked statistics to reveal modified gravity
NASA Astrophysics Data System (ADS)
Valogiannis, Georgios; Bean, Rachel
2018-01-01
Models which attempt to explain the accelerated expansion of the universe through large-scale modifications to General Relativity (GR) must satisfy the stringent experimental constraints of GR in the solar system. Viable candidates invoke a "screening" mechanism that dynamically suppresses deviations in high-density environments, making their overall detection challenging even for ambitious future large-scale structure surveys. We present methods to efficiently simulate the non-linear properties of such theories, and consider how a series of statistics that reweight the density field to accentuate deviations from GR can be applied to enhance the overall signal-to-noise ratio in differentiating the models from GR. Our results demonstrate that the cosmic density field can yield additional, invaluable cosmological information, beyond the simple density power spectrum, that will enable surveys to more confidently discriminate between modified gravity models and ΛCDM.
NASA Astrophysics Data System (ADS)
Yi, Huili; Tian, Jianxiang
2014-07-01
A new simple correlation based on the principle of corresponding states is proposed to estimate the temperature-dependent surface tension of normal saturated liquids. The correlation is linear and holds strongly for 41 saturated normal liquids. The new correlation requires only the triple-point temperature, triple-point surface tension and critical-point temperature as input and is able to represent the experimental surface tension data for these 41 saturated normal liquids with a mean absolute average percent deviation of 1.26% in the temperature regions considered. For most substances, the temperature range extends from the triple-point temperature to beyond the boiling temperature.
Antibacterial activity of silver-killed bacteria: the "zombies" effect
NASA Astrophysics Data System (ADS)
Wakshlak, Racheli Ben-Knaz; Pedahzur, Rami; Avnir, David
2015-04-01
We report a previously unrecognized mechanism for the prolonged action of biocidal agents, which we denote as the "zombies" effect: biocidally-killed bacteria are capable of killing living bacteria. The concept is demonstrated by first killing Pseudomonas aeruginosa PAO1 with silver nitrate and then challenging, with the dead bacteria, a viable culture of the same bacterium: efficient antibacterial activity of the killed bacteria is observed. A mechanism is suggested in terms of the action of the dead bacteria as a reservoir of silver, which, due to Le Chatelier's principle, is re-targeted to the living bacteria. Langmuirian behavior, as well as deviations from it, supports the proposed mechanism.
Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization
NASA Astrophysics Data System (ADS)
Fyodorov, Yan V.; Le Doussal, Pierre
2014-01-01
Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic problems in Optimization Theory known as the "trust region subproblem" or "constraint least square problem". When both terms in the cost function are random this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of the order N and the cost function (energy) has generically two almost degenerate minima with the Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.
NASA Astrophysics Data System (ADS)
Svenson, Eric Johan
Participants on the Invincible America Assembly in Fairfield, Iowa, and neighboring Maharishi Vedic City, Iowa, practicing Maharishi Transcendental Meditation (TM) and the TM-Sidhi programs in large groups, submitted written experiences that they had had during, and in some cases shortly after, their daily practice of the TM and TM-Sidhi programs. Participants were instructed to include in their written experiences only what they observed and to leave out interpretation and analysis. These experiences were then read by the author and compared with principles and phenomena of modern physics, particularly with quantum theory, astrophysics, quantum cosmology, and string theory as well as defining characteristics of higher states of consciousness as described by Maharishi Vedic Science. In all cases, particular principles or phenomena of physics and qualities of higher states of consciousness appeared qualitatively quite similar to the content of the given experience. These experiences are presented in an Appendix, in which the corresponding principles and phenomena of physics are also presented. These physics "commentaries" on the experiences were written largely in layman's terms, without equations, and, in nearly every case, with clear reference to the corresponding sections of the experiences to which a given principle appears to relate. An abundance of similarities were apparent between the subjective experiences during meditation and principles of modern physics. A theoretic framework for understanding these rich similarities may begin with Maharishi's theory of higher states of consciousness provided herein. We conclude that the consistency and richness of detail found in these abundant similarities warrants the further pursuit and development of such a framework.
Open inflation in the landscape
NASA Astrophysics Data System (ADS)
Yamauchi, Daisuke; Linde, Andrei; Naruko, Atsushi; Sasaki, Misao; Tanaka, Takahiro
2011-08-01
The open inflation scenario is attracting a renewed interest in the context of the string landscape. Since there are a large number of metastable de Sitter vacua in the string landscape, tunneling transitions to lower metastable vacua through the bubble nucleation occur quite naturally, which leads to a natural realization of open inflation. Although the deviation of Ω0 from unity is small by the observational bound, we argue that the effect of this small deviation on the large-angle CMB anisotropies can be significant for tensor-type perturbation in the open inflation scenario. We consider the situation in which there is a large hierarchy between the energy scale of the quantum tunneling and that of the slow-roll inflation in the nucleated bubble. If the potential just after tunneling is steep enough, a rapid-roll phase appears before the slow-roll inflation. In this case the power spectrum is basically determined by the Hubble rate during the slow-roll inflation. On the other hand, if such a rapid-roll phase is absent, the power spectrum keeps the memory of the high energy density there in the large angular components. Furthermore, the amplitude of large angular components can be enhanced due to the effects of the wall fluctuation mode if the bubble wall tension is small. Therefore, although even the dominant quadrupole component is suppressed by the factor (1-Ω0)², one can construct some models in which the deviation of Ω0 from unity is large enough to produce measurable effects. We also consider a more general class of models, where the false vacuum decay may occur due to Hawking-Moss tunneling, as well as the models involving more than one scalar field. We discuss scalar perturbations in these models and point out that a large set of such models is already ruled out by observational data, unless there was a very long stage of slow-roll inflation after the tunneling. These results show that observational data allow us to test various assumptions concerning the structure of the string theory potentials and the duration of the last stage of inflation.
Lessons from a broad view of science: a response to Dr Robergs’ article
Pires, Flavio Oliveira
2018-01-01
Dr Robergs suggested that the central governor model (CGM) is not a well-worded theory, as it deviated from the tenet of falsification criteria. According to his view of science, exercise research conducted with the intent to prove rather than disprove the theory contributes little to new knowledge and condemns the theory to the label of pseudoscience. However, exercise scientists should be aware of the limitations of the falsification criteria. First, the number of potential falsifiers for a given hypothesis is always infinite, so that there is no means to ensure asymmetric comparison between theories. Thus, assuming a competition between the CGM and dichotomised central versus peripheral fatigue theories, scientists guided by the falsification principle should know, a priori, all possible falsifiers between these two theories in order to choose the finest one, thereby leading to an oversimplification of the theories. Second, the failure to formulate a refutable hypothesis may be a simple consequence of the lack of instruments to make crucial measurements. The use of refutation principles to test the CGM theory requires technology capable of online feedback and feedforward measures integrated in the central nervous system, in real-time exercise. Consequently, the falsification principle is currently impracticable for testing the CGM theory. The falsification principle must be applied with equilibrium, as we should do with the positive induction process, otherwise Popperian philosophy will be incompatible with the actual practice of science. Rather than driving the scientific debate on a biased single view of science, researchers in the field of exercise sciences may benefit more from different views of science. PMID:29629188
Student Performance in Undergraduate Economics Courses
ERIC Educational Resources Information Center
Mumford, Kevin J.; Ohland, Matthew W.
2011-01-01
Using undergraduate student records from six large public universities from 1990 to 2003, the authors analyze the characteristics and performance of students by major in two economics courses: Principles of Microeconomics and Intermediate Microeconomics. This article documents important differences across students by major in the principles course…
ERIC Educational Resources Information Center
Siyanova-Chanturia, Anna; Martinez, Ron
2015-01-01
John Sinclair's Idiom Principle famously posited that most texts are largely composed of multi-word expressions that "constitute single choices" in the mental lexicon. At the time that assertion was made, little actual psycholinguistic evidence existed in support of that holistic, "single choice," view of formulaic language. In…
Using Behavioral Economics to Design Physician Incentives That Deliver High-Value Care.
Emanuel, Ezekiel J; Ubel, Peter A; Kessler, Judd B; Meyer, Gregg; Muller, Ralph W; Navathe, Amol S; Patel, Pankaj; Pearl, Robert; Rosenthal, Meredith B; Sacks, Lee; Sen, Aditi P; Sherman, Paul; Volpp, Kevin G
2016-01-19
Behavioral economics provides insights about the development of effective incentives for physicians to deliver high-value care. It suggests that the structure and delivery of incentives can shape behavior, as can thoughtful design of the decision-making environment. This article discusses several principles of behavioral economics, including inertia, loss aversion, choice overload, and relative social ranking. Whereas these principles have been applied to motivate personal health decisions, retirement planning, and savings behavior, they have been largely ignored in the design of physician incentive programs. Applying these principles to physician incentives can improve their effectiveness through better alignment with performance goals. Anecdotal examples of successful incentive programs that apply behavioral economics principles are provided, even as the authors recognize that its application to the design of physician incentives is largely untested, and many outstanding questions exist. Application and rigorous evaluation of infrastructure changes and incentives are needed to design payment systems that incentivize high-quality, cost-conscious care.
Part-Task Training Strategies in Simulated Carrier Landing Final Approach Training
1983-11-01
received a large amount of attention in the recent past. However, the notion that the value of flight simulation may be enhanced when principles of...as training devices through the application of principles of learning. The research proposed here is based on this point of view. THIS EXPERIMENT The...tracking. Following Goldstein's suggestion, one should look for training techniques suggested by learning principles developed from research on
Characterizing Accuracy and Precision of Glucose Sensors and Meters
2014-01-01
There is a need for a method to describe precision and accuracy of glucose measurement as a smooth continuous function of glucose level rather than as a step function for a few discrete ranges of glucose. We propose and illustrate a method to generate a "Glucose Precision Profile" showing absolute relative deviation (ARD) and/or %CV versus glucose level to better characterize measurement errors at any glucose level. We examine the relationship between glucose measured by test and comparator methods using linear regression. We examine bias by plotting deviation = (test – comparator method) versus glucose level. We compute the deviation, absolute deviation (AD), ARD, and standard deviation (SD) for each data pair. We utilize curve smoothing procedures to minimize the effects of random sampling variability to facilitate identification and display of the underlying relationships between ARD or %CV and glucose level. AD, ARD, SD, and %CV display smooth continuous relationships versus glucose level. Estimates of MARD and %CV are subject to relatively large errors in the hypoglycemic range due in part to a markedly nonlinear relationship with glucose level and in part to the limited number of observations in the hypoglycemic range. The curvilinear relationships of ARD and %CV versus glucose level are helpful when characterizing and comparing the precision and accuracy of glucose sensors and meters. PMID:25037194
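As a rough illustration of the profile described above, the sketch below computes the per-pair deviation, absolute deviation and ARD against the comparator glucose level and smooths the ARD curve; the rolling-median smoother, column names and simulated readings are assumptions for the example, not the authors' procedure.

```python
import numpy as np
import pandas as pd

def glucose_precision_profile(test, comparator, window=51):
    """Sketch of a 'Glucose Precision Profile': per-pair error metrics versus
    comparator glucose level, smoothed with a rolling median (the smoothing
    method is an assumption; the abstract only mentions 'curve smoothing')."""
    df = pd.DataFrame({"ref": comparator, "test": test})
    df["deviation"] = df["test"] - df["ref"]      # signed error (bias)
    df["AD"] = df["deviation"].abs()              # absolute deviation
    df["ARD"] = 100.0 * df["AD"] / df["ref"]      # absolute relative deviation, %
    df = df.sort_values("ref").reset_index(drop=True)
    df["ARD_smooth"] = df["ARD"].rolling(window, center=True, min_periods=10).median()
    return df

# Hypothetical paired readings (mg/dL); a real study would use sensor/meter data.
rng = np.random.default_rng(0)
ref = rng.uniform(40, 400, 2000)
test = ref * (1 + rng.normal(0, 0.07, ref.size)) + rng.normal(0, 5, ref.size)
profile = glucose_precision_profile(test, ref)
print(profile[["ref", "ARD", "ARD_smooth"]].head())
```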
Role of the standard deviation in the estimation of benchmark doses with continuous data.
Gaylor, David W; Slikker, William
2004-12-01
For continuous data, risk is defined here as the proportion of animals with values above a large percentile, e.g., the 99th percentile or below the 1st percentile, for the distribution of values among control animals. It is known that reducing the standard deviation of measurements through improved experimental techniques will result in less stringent (higher) doses for the lower confidence limit on the benchmark dose that is estimated to produce a specified risk of animals with abnormal levels for a biological effect. Thus, a somewhat larger (less stringent) lower confidence limit is obtained that may be used as a point of departure for low-dose risk assessment. It is shown in this article that it is important for the benchmark dose to be based primarily on the standard deviation among animals, s(a), apart from the standard deviation of measurement errors, s(m), within animals. If the benchmark dose is incorrectly based on the overall standard deviation among average values for animals, which includes measurement error variation, the benchmark dose will be overestimated and the risk will be underestimated. The bias increases as s(m) increases relative to s(a). The bias is relatively small if s(m) is less than one-third of s(a), a condition achieved in most experimental designs.
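The bias described above can be made concrete with a small numerical sketch (all numbers hypothetical): risk is taken as the proportion of animals above the control 99th percentile after a dose shifts the mean, and the same calculation is repeated with the overall standard deviation that wrongly includes measurement error.

```python
import numpy as np
from scipy.stats import norm

s_a = 1.0       # among-animal SD (the relevant one)
s_m = 0.6       # within-animal measurement-error SD (here > s_a/3, so bias matters)
delta = 1.0     # hypothetical mean shift produced by a benchmark dose
z99 = norm.ppf(0.99)

# Correct: cut-off and risk based on the among-animal SD only.
risk_correct = 1 - norm.cdf(z99 * s_a - delta, scale=s_a)

# Biased: cut-off and risk based on the overall SD of observed animal averages.
s_total = np.hypot(s_a, s_m)
risk_biased = 1 - norm.cdf(z99 * s_total - delta, scale=s_total)

print(f"risk from s_a only:   {risk_correct:.3f}")
print(f"risk from overall SD: {risk_biased:.3f}  (underestimated)")
```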
Towards Behavioral Reflexion Models
NASA Technical Reports Server (NTRS)
Ackermann, Christopher; Lindvall, Mikael; Cleaveland, Rance
2009-01-01
Software architecture has become essential in the struggle to manage today's increasingly large and complex systems. Software architecture views are created to capture important system characteristics on an abstract and, thus, comprehensible level. As the system is implemented and later maintained, it often deviates from the original design specification. Such deviations can have implications for the quality of the system, such as reliability, security, and maintainability. Software architecture compliance checking approaches, such as the reflexion model technique, have been proposed to address this issue by comparing the implementation to a model of the system's architecture design. However, architecture compliance checking approaches focus solely on structural characteristics and ignore behavioral conformance. This is especially an issue in Systems-of-Systems. Systems-of-Systems (SoS) are decompositions of large systems into smaller systems for the sake of flexibility. Deviations of the implementation from its behavioral design often reduce the reliability of the entire SoS. An approach is needed that supports reasoning about behavioral conformance at the architecture level. In order to address this issue, we have developed an approach for comparing the implementation of a SoS to an architecture model of its behavioral design. The approach follows the idea of reflexion models and adapts it to support the compliance checking of behaviors. In this paper, we focus on sequencing properties as they play an important role in many SoS. Sequencing deviations potentially have a severe impact on the SoS correctness and qualities. The desired behavioral specification is defined in UML sequence diagram notation and behaviors are extracted from the SoS implementation. The behaviors are then mapped to the model of the desired behavior and the two are compared. Finally, a reflexion model is constructed that shows the deviations between behavioral design and implementation. This paper discusses the approach and shows how it can be applied to investigate reliability issues in SoS.
Quantum superposition at the half-metre scale.
Kovachy, T; Asenbaum, P; Overstreet, C; Donnelly, C A; Dickerson, S M; Sugarbaker, A; Hogan, J M; Kasevich, M A
2015-12-24
The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale, as exemplified by the thought experiment of Schrödinger's cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov-Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity.
A global probabilistic tsunami hazard assessment from earthquake sources
Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana
2017-01-01
Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate.
Principles of natural regeneration
1989-01-01
To maximize chances of successful regeneration, carefully consider the following regeneration principles. Harvesting alone does not guarantee that the desired species will be established. The conditions required for the initial establishment and early growth of the desired species largely determine what regeneration method you should use and any supplemental treatments...
C3 Domain Analysis, Lessons Learned
1993-09-30
organize the domain. This approach is heavily based on the principles of library science and is geared toward a reuse effort with a large library-like...method adapts many principles from library science to the organization and implementation of a reuse library.
Evaluating the Quality of Transfer versus Nontransfer Accounting Principles Grades.
ERIC Educational Resources Information Center
Colley, J. R.; And Others
1996-01-01
Using 1989-92 student records from three colleges accepting large numbers of transfers from junior schools into accounting, regression analyses compared grades of transfer and nontransfer students. Quality of accounting principle grades of transfer students was not equivalent to that of nontransfer students. (SK)
NASA Astrophysics Data System (ADS)
Shaar, R.; Tauxe, L.; Ebert, Y.
2017-12-01
Continuous decadal-resolution paleomagnetic data from archaeological and sedimentary sources in the Levant revealed the existence of a local high-field anomaly, which spanned the first 350 years of the first millennium BCE. This so-called "Levantine Iron Age geomagnetic Anomaly" (LIAA) was characterized by a high averaged geomagnetic field (virtual axial dipole moments, VADM > 140 ZAm², nearly twice today's field), short decadal-scale geomagnetic spikes (VADM of 160-185 ZAm²), fast directional and intensity variations, and substantial deviation (20°-25°) from the dipole field direction. Similar high field values in the time frame of the LIAA have been observed north and northeast of the Levant: Eastern Anatolia, Turkmenistan, and Georgia. West of the Levant, in the Balkans, field values at the same time are moderate to low. The overall data suggest that the LIAA is a manifestation of a local positive geomagnetic field anomaly similar in magnitude and scale to the presently active negative South Atlantic Anomaly. In this presentation we review the overall archaeomagnetic and sedimentary evidence supporting the local anomaly hypothesis, and compare these observations with today's IGRF field. We analyze the global data during the first two millennia BCE, which suggest some unexpected large deviations from a simple dipolar geomagnetic structure.
Vocal singing by prelingually-deafened children with cochlear implants.
Xu, Li; Zhou, Ning; Chen, Xiuwu; Li, Yongxin; Schultz, Heather M; Zhao, Xiaoyan; Han, Demin
2009-09-01
The coarse pitch information in cochlear implants might hinder the development of singing in prelingually-deafened pediatric users. In the present study, seven prelingually-deafened children with cochlear implants (5.4-12.3 years old) sang one song that was the most familiar to him or her. The control group consisted of 14 normal-hearing children (4.1-8.0 years old). The fundamental frequencies (F0) of each note in the recorded songs were extracted. The following five metrics were computed based on the reference music scores: (1) F0 contour direction of the adjacent notes, (2) F0 compression ratio of the entire song, (3) mean deviation of the normalized F0 across the notes, (4) mean deviation of the pitch intervals, and (5) standard deviation of the note duration differences. Children with cochlear implants showed significantly poorer performance in the pitch-based assessments than the normal-hearing children. No significant differences were seen between the two groups in the rhythm-based measure. Prelingually-deafened children with cochlear implants have significant deficits in singing due to their inability to manipulate pitch in the correct directions and to produce accurate pitch height. Future studies with a large sample size are warranted in order to account for the large variability in singing performance.
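Two of the five metrics lend themselves to a short sketch; the exact definitions used in the study may differ, and the note-level F0 values below are hypothetical.

```python
import numpy as np

def contour_direction_score(f0_sung, f0_score):
    """Fraction of adjacent note pairs whose pitch moves in the same
    direction (up, down or flat) as in the reference music score."""
    return np.mean(np.sign(np.diff(f0_sung)) == np.sign(np.diff(f0_score)))

def f0_compression_ratio(f0_sung, f0_score):
    """Ratio of the sung F0 range to the score F0 range, in semitones;
    values below 1 indicate a compressed pitch range."""
    semi = lambda f: 12.0 * np.log2(np.asarray(f, dtype=float))
    sung, score = semi(f0_sung), semi(f0_score)
    return (sung.max() - sung.min()) / (score.max() - score.min())

score_f0 = [262.0, 294.0, 330.0, 294.0, 262.0]   # C4 D4 E4 D4 C4
sung_f0 = [265.0, 280.0, 300.0, 290.0, 270.0]    # hypothetical sung notes
print(contour_direction_score(sung_f0, score_f0))   # 1.0 (all directions correct)
print(f0_compression_ratio(sung_f0, score_f0))      # < 1 (compressed pitch range)
```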
Speckle interferometry with temporal phase evaluation for measuring large-object deformation.
Joenathan, C; Franze, B; Haible, P; Tiziani, H J
1998-05-01
We propose a new method for measuring large-object deformations by using temporal evolution of the speckles in speckle interferometry. The principle of the method is that by deforming the object continuously, one obtains fluctuations in the intensity of the speckle. A large number of frames of the object motion are collected to be analyzed later. The phase data for whole-object deformation are then retrieved by inverse Fourier transformation of a filtered spectrum obtained by Fourier transformation of the signal. With this method one is capable of measuring deformations of more than 100 μm, which is not possible using conventional electronic speckle pattern interferometry. We discuss the underlying principle of the method and the results of the experiments. Some nondestructive testing results are also presented.
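A minimal sketch of the temporal phase evaluation described above, for a single pixel: Fourier transform the intensity time series, keep a one-sided band around the carrier, inverse transform, and unwrap the phase of the resulting analytic signal. The sampling rate, band limits and simulated signal are assumptions for the example.

```python
import numpy as np

def temporal_phase(intensity, f_lo, f_hi, fs):
    """Retrieve the unwrapped temporal phase of one pixel's speckle signal by
    filtering its spectrum to a one-sided band [f_lo, f_hi] and inverse FFT."""
    spec = np.fft.fft(intensity - np.mean(intensity))
    freqs = np.fft.fftfreq(intensity.size, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)      # positive frequencies only
    analytic = np.fft.ifft(spec * band)
    return np.unwrap(np.angle(analytic))

# Simulated speckle intensity: 50 Hz carrier plus a slow deformation term.
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
phase_true = 2 * np.pi * 50 * t + 3 * np.sin(2 * np.pi * 0.5 * t)
intensity = 1 + 0.8 * np.cos(phase_true) + 0.05 * np.random.randn(t.size)
phi = temporal_phase(intensity, 30, 70, fs)       # recovers phase_true up to an offset
```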
Spectral Relative Standard Deviation: A Practical Benchmark in Metabolomics
Metabolomics datasets, by definition, comprise of measurements of large numbers of metabolites. Both technical (analytical) and biological factors will induce variation within these measurements that is not consistent across all metabolites. Consequently, criteria are required to...
Vacuum stability and naturalness in type-II seesaw
Haba, Naoyuki; Ishida, Hiroyuki; Okada, Nobuchika; ...
2016-06-16
Here, we study the vacuum stability and perturbativity conditions in the minimal type-II seesaw model. These conditions give characteristic constraints to the model parameters. In the model, there is an SU(2)_L triplet scalar field, which could cause a large Higgs mass correction. From the naturalness point of view, heavy Higgs masses should be lower than 350 GeV, which may be testable by the LHC Run-II results. Due to the effects of the triplet scalar field, the branching ratios of the Higgs decay (h → γγ, Zγ) deviate from the standard model, and a large parameter region is excluded by the recent ATLAS and CMS combined analysis of h → γγ. Our result for the signal strength of h → γγ is R_γγ ≲ 1.1, but its deviation is too small to observe at the LHC experiment.
Large deviations in the random sieve
NASA Astrophysics Data System (ADS)
Grimmett, Geoffrey
1997-05-01
The proportion ρ_k of gaps with length k between square-free numbers is shown to satisfy log ρ_k = −(1 + o(1))(6/π²) k log k as k → ∞. Such asymptotics are consistent with Erdős's challenge to prove that the gap following the square-free number t is smaller than c log t / log log t, for all t and some constant c satisfying c > π²/12. The results of this paper are achieved by studying the probabilities of large deviations in a certain ‘random sieve’, for which the proportions ρ_k have representations as probabilities. The asymptotic form of ρ_k may be obtained in situations of greater generality, when the squared primes are replaced by an arbitrary sequence (s_r) of relatively prime integers satisfying Σ_r 1/s_r < ∞, subject to two further conditions of regularity on this sequence.
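The asymptotics can be sanity-checked numerically (only suggestively, since the formula holds as k → ∞ and a modest sieve limit only reaches small k): sieve the square-free numbers up to N, tabulate the gap-length proportions, and compare log ρ_k with −(6/π²) k log k.

```python
import numpy as np

def squarefree_mask(n):
    """Boolean array marking which integers in [0, n] are square-free."""
    sf = np.ones(n + 1, dtype=bool)
    sf[0] = False
    for p in range(2, int(n ** 0.5) + 1):
        sf[p * p::p * p] = False      # anything divisible by p^2 is not square-free
    return sf

N = 10_000_000
squarefree = np.flatnonzero(squarefree_mask(N))
gaps = np.diff(squarefree)
ks, counts = np.unique(gaps, return_counts=True)
rho = counts / counts.sum()
for k, r in zip(ks, rho):
    asym = -(6 / np.pi ** 2) * k * np.log(k)
    print(f"k={k:2d}  log rho_k = {np.log(r):7.2f}   -(6/pi^2) k log k = {asym:7.2f}")
```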
Simple programmable voltage reference for low frequency noise measurements
NASA Astrophysics Data System (ADS)
Ivanov, V. E.; Chye, En Un
2018-05-01
The paper presents a circuit design of a low-noise voltage reference based on an electric double-layer capacitor, a microcontroller and a general purpose DAC. A large capacitance value (1 F or more) makes it possible to create a low-pass filter with a large time constant, effectively reducing low-frequency noise beyond its bandwidth. By choosing the optimum value of the resistor in the RC filter, one can achieve the best ratio between the transient time, the deviation of the output voltage from the set point and the minimum noise cut-off frequency. As experiments have shown, the spectral density of the voltage noise at a frequency of 1 kHz does not exceed 1.2 nV/√Hz, and the maximum deviation of the output voltage from the predetermined value does not exceed 1.4 % and depends on the holding time of the previous value. Subsequently, this error is reduced to a constant value and can be compensated.
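The trade-off mentioned above amounts to simple RC arithmetic; a quick sketch with a 1 F double-layer capacitor and a few hypothetical resistor values (cut-off f_c = 1/(2πRC), settling taken as roughly five time constants):

```python
import math

def rc_tradeoff(R_ohm, C_farad):
    """Noise cut-off frequency and approximate settling time of the RC low-pass."""
    tau = R_ohm * C_farad
    return 1.0 / (2 * math.pi * tau), 5 * tau

for R in (10, 100, 1000):                      # hypothetical resistor values, ohms
    f_c, settle = rc_tradeoff(R, 1.0)          # 1 F double-layer capacitor
    print(f"R = {R:5d} ohm -> f_c = {f_c * 1e3:7.3f} mHz, settling ~ {settle / 60:5.1f} min")
```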
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aad, G.; Abbott, B.; Abdallah, J.
The results of a search for gluinos in final states with an isolated electron or muon, multiple jets and large missing transverse momentum using proton–proton collision data at a centre-of-mass energy of √s = 13 TeV are presented. The dataset used was recorded in 2015 by the ATLAS experiment at the Large Hadron Collider and corresponds to an integrated luminosity of 3.2 fb⁻¹. Six signal selections are defined that best exploit the signal characteristics. The data agree with the Standard Model background expectation in all six signal selections, and the largest deviation is a 2.1 standard deviation excess. The results are interpreted in a simplified model where pair-produced gluinos decay via the lightest chargino to the lightest neutralino. In this model, gluinos are excluded up to masses of approximately 1.6 TeV depending on the mass spectrum of the simplified model, thus surpassing the limits of previous searches.
Geometric phase for a two-level system in photonic band gap crystal
NASA Astrophysics Data System (ADS)
Berrada, K.
2018-05-01
In this work, we investigate the geometric phase (GP) for a qubit system coupled to its own anisotropic and isotropic photonic band gap (PBG) crystal environment without Born or Markovian approximation. The qubit frequency affects the GP of the qubit directly through the effect of the PBG environment. The results show that the deviation of the GP depends on the detuning parameter, and this deviation will be large for relatively large detuning of the atomic frequency inside the gap with respect to the photonic band edge. For detunings outside the gap, however, the GP of the qubit changes abruptly to zero, exhibiting a collapse phenomenon of the GP. Moreover, we find that the GP in the isotropic PBG photonic crystal is more robust than that in the anisotropic PBG under the same conditions. Finally, we explore the relationship between the variation of the GP and the population in terms of the physical parameters.
Finite-key analysis for measurement-device-independent quantum key distribution.
Curty, Marcos; Xu, Feihu; Cui, Wei; Lim, Charles Ci Wen; Tamaki, Kiyoshi; Lo, Hoi-Kwong
2014-04-29
Quantum key distribution promises unconditionally secure communications. However, as practical devices tend to deviate from their specifications, the security of some practical systems is no longer valid. In particular, an adversary can exploit imperfect detectors to learn a large part of the secret key, even though the security proof claims otherwise. Recently, a practical approach--measurement-device-independent quantum key distribution--has been proposed to solve this problem. However, so far its security has only been fully proven under the assumption that the legitimate users of the system have unlimited resources. Here we fill this gap and provide a rigorous security proof against general attacks in the finite-key regime. This is obtained by applying large deviation theory, specifically the Chernoff bound, to perform parameter estimation. For the first time we demonstrate the feasibility of long-distance implementations of measurement-device-independent quantum key distribution within a reasonable time frame of signal transmission.
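To give a flavour of Chernoff-bound parameter estimation in the finite-key setting, the sketch below builds a two-sided confidence interval on an observed count; the specific (slightly loose) bound form and the failure probability are illustrative assumptions, not the paper's exact expressions.

```python
import math

def chernoff_interval(k, n, eps=1e-10):
    """Bound the unknown success probability p of n Bernoulli trials given k
    observed successes: with probability at least 1 - 2*eps the expected count
    lies within k +/- sqrt(3 * k * ln(1/eps)) (a standard multiplicative
    Chernoff-style estimate, used here in a simplified, illustrative form)."""
    delta = math.sqrt(3.0 * max(k, 1) * math.log(1.0 / eps))
    return max(0.0, (k - delta) / n), min(1.0, (k + delta) / n)

# e.g. estimating a detection gain from 5,000 clicks out of 1,000,000 signals
print(chernoff_interval(k=5_000, n=1_000_000))
```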
NASA Technical Reports Server (NTRS)
Wang, J. C.
1982-01-01
Compositional segregation of solid solution semiconducting alloys in the radial direction during unidirectional solidification was investigated by calculating the effect of a curved solid-liquid interface on the solute concentration at the interface in the solid. The formulation is similar to that given by Coriell, Boisvert, Rehm, and Sekerka except that a more realistic cylindrical coordinate system which is moving with the interface is used. Analytical results were obtained for very small and very large values of beta with beta = VR/D, where V is the velocity of solidification, R the radius of the specimen, and D the diffusivity of solute in the liquid. For both very small and very large beta, the solute concentration at the interface in the solid C(si) approaches C(o) (original solute concentration), i.e., the deviation is minimal. The maximum deviation of C(si) from C(o) occurs for some intermediate value of beta.
Efficient Predictions of Excited State for Nanomaterials Using Aces 3 and 4
2017-12-20
by first-principle methods in the software package ACES by using large parallel computers, growing to the exascale. 15. SUBJECT TERMS Computer...modeling, excited states, optical properties, structure, stability, activation barriers first principle methods, parallel computing
1993-05-01
This Principles and Practices Board project was undertaken in response to the frequent requests from HFMA members for a standard calculation of "days of revenue in receivables." The board's work on this project indicated that every element of the calculation required standards, which is what this statement provides. Since there have been few standards for accounts receivable related to patient services, the industry follows a variety of practices, which often differ from each other. This statement is intended to provide a framework for enhanced external comparison of accounts receivable related to patient services, and thereby improve management information related to this very important asset. Thus, the standards described in this statement represent long-term goals for gradual transition of recordkeeping practices and not a sudden or revolutionary change. The standards described in this statement will provide the necessary framework for the most meaningful external comparisons. Furthermore, management's understanding of deviations from these standards will immediately assist in analysis of differences in data between providers.
Bio-inspired structural bistability employing elastomeric origami for morphing applications
NASA Astrophysics Data System (ADS)
Daynes, Stephen; Trask, Richard S.; Weaver, Paul M.
2014-12-01
A structural concept based upon the principles of adaptive morphing cells is presented whereby controlled bistability from a flat configuration into a textured arrangement is shown. The material consists of multiple cells made from silicone rubber with locally reinforced regions based upon kirigami principles. On pneumatic actuation these cells fold or unfold based on the fold lines created by the interaction of the geometry with the reinforced regions. Each cell is able to maintain its shape in either a retracted or deployed state, without the aid of mechanisms or sustained actuation, due to the existence of structural bistability. Mathematical quantification of the surface texture is introduced, based on out-of-plane deviations of a deployed structure compared to a reference plane. Additionally, finite element analysis is employed to characterize the geometry and stability of an individual cell during actuation and retraction. This investigation highlights the critical role that angular rotation, at the center of each cell, plays on the deployment angle as it transitions through the elastically deployed configuration. The analysis of this novel concept is presented and a pneumatically actuated proof-of-concept demonstrator is fabricated.
Progress in Lunar Laser Ranging Tests of Relativistic Gravity
NASA Astrophysics Data System (ADS)
Williams, James G.; Turyshev, Slava G.; Boggs, Dale H.
2004-12-01
Analyses of laser ranges to the Moon provide increasingly stringent limits on any violation of the equivalence principle (EP); they also enable several very accurate tests of relativistic gravity. These analyses give an EP test of Δ(M_G/M_I)_EP = (−1.0 ± 1.4)×10⁻¹³. This result yields a strong equivalence principle (SEP) test of Δ(M_G/M_I)_SEP = (−2.0 ± 2.0)×10⁻¹³. Also, the corresponding SEP violation parameter η is (4.4 ± 4.5)×10⁻⁴, where η = 4β − γ − 3 and both β and γ are post-Newtonian parameters. Using the Cassini γ, the η result yields β − 1 = (1.2 ± 1.1)×10⁻⁴. The geodetic precession test, expressed as a relative deviation from general relativity, is K_gp = −0.0019 ± 0.0064. The search for a time variation in the gravitational constant results in Ġ/G = (4 ± 9)×10⁻¹³ yr⁻¹; consequently there is no evidence for local (~1 AU) scale expansion of the solar system.
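The quoted β − 1 follows from the η value and the Cassini measurement of γ by simple arithmetic, since η = 4β − γ − 3 implies β − 1 = [η + (γ − 1)]/4; the check below uses the Cassini result γ − 1 ≈ 2.1×10⁻⁵.

```python
# Consistency check of the quoted numbers: eta = 4*beta - gamma - 3
#   =>  beta - 1 = (eta + (gamma - 1)) / 4
eta, sigma_eta = 4.4e-4, 4.5e-4
gamma_minus_1 = 2.1e-5                 # Cassini result (its uncertainty is negligible here)
beta_minus_1 = (eta + gamma_minus_1) / 4
sigma_beta = sigma_eta / 4
print(f"beta - 1 = {beta_minus_1:.1e} +/- {sigma_beta:.1e}")   # ~1.2e-4 +/- 1.1e-4
```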
Fundamental Principles in Aesthetic Rhinoplasty
2011-01-01
This review article will highlight several fundamental principles and advances in rhinoplasty. Nasal analysis has become more sophisticated and thorough in terms of breaking down the anomaly and identifying the anatomic etiology. Performing this analysis in a systematic manner each time helps refine these skills and is a prerequisite to sound surgical planning. Dorsal augmentation with alloplastic materials continues to be used, but more conservatively and often mixed with autogenous grafts. Long-term outcomes have also taught us much with regard to wound healing and soft tissue contracture. This is best demonstrated with a hump reduction, where the progressive pinching at the middle vault creates both aesthetic and functional problems. Correcting the twisted nose is challenging and requires a more aggressive intervention than previously thought. Both cartilage and soft tissue appear to have a degree of memory that predisposes to recurrent deviations. A complete structural breakdown and destabilization may be warranted before the nose is realigned. This must be followed by careful and meticulous restabilization. Tip refinement is a common request, but no single maneuver can be universally applied; multiple techniques and grafts must be within the surgeon's armamentarium. PMID:21716951
Model-based review of Doppler global velocimetry techniques with laser frequency modulation
NASA Astrophysics Data System (ADS)
Fischer, Andreas
2017-06-01
Optical measurements of flow velocity fields are of crucial importance to understand the behavior of complex flow. One flow field measurement technique is Doppler global velocimetry (DGV). A large variety of different DGV approaches exist, e.g., applying different kinds of laser frequency modulation. In order to investigate the measurement capabilities especially of the newer DGV approaches with laser frequency modulation, a model-based review of all DGV measurement principles is performed. The DGV principles can be categorized by the respective number of required time steps. The systematic review of all DGV principles reveals drawbacks and benefits of the different measurement approaches with respect to the temporal resolution, the spatial resolution and the measurement range. Furthermore, the Cramér-Rao bound for photon shot noise is calculated and discussed, which represents a fundamental limit of the achievable measurement uncertainty. As a result, all DGV techniques provide similar minimal uncertainty limits. With N_photons as the number of scattered photons, the minimal standard deviation of the flow velocity reads about 106 m/s/√N_photons, which was calculated for a perpendicular arrangement of the illumination and observation direction and a laser wavelength of 895 nm. As a further result, the signal processing efficiencies are determined with a Monte-Carlo simulation. Except for the newest correlation-based DGV method, the signal processing algorithms are already optimal or near the optimum. Finally, the different DGV approaches are compared regarding errors due to temporal variations of the scattered light intensity and the flow velocity. The influence of a linear variation of the scattered light intensity can be reduced by maximizing the number of time steps, because more time steps provide more information for the correction of this systematic effect. However, more time steps can result in a flow velocity measurement with a lower temporal resolution when operating at the maximal frame rate of the camera. DGV without laser frequency modulation then provides the highest temporal resolutions and is not sensitive to temporal variations but to spatial variations of the scattered light intensity. In contrast to this, all DGV variants suffer from velocity variations during the measurement. In summary, the experimental conditions and the measurement task finally determine the ideal choice among the reviewed DGV methods.
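To put the quoted shot-noise limit in perspective, a one-line evaluation of σ_v ≈ 106 m/s/√N_photons for a few photon counts (the counts themselves are hypothetical):

```python
import math

SIGMA_COEFF = 106.0                              # m/s per sqrt(photon), as quoted above
for n_photons in (1e4, 1e6, 1e8):
    print(f"N = {n_photons:.0e} photons -> sigma_v ~ {SIGMA_COEFF / math.sqrt(n_photons):.3f} m/s")
```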
Thermal Conductivity and Large Isotope Effect in GaN from First Principles
2012-08-28
August 2012) We present atomistic first principles results for the lattice thermal conductivity of GaN and compare them to those for GaP, GaAs, and GaSb ...weak scattering results from stiff atomic bonds and the large Ga to N mass ratio, which give phonons high frequencies and also a pronounced energy gap...66.70.f, 63.20.kg, 71.15.m Introduction.—Gallium nitride (GaN) is a wide band gap semiconductor and a promising candidate for use in opto- electronic
Lunar brightness temperature from Microwave Radiometers data of Chang'E-1 and Chang'E-2
NASA Astrophysics Data System (ADS)
Feng, J.-Q.; Su, Y.; Zheng, L.; Liu, J.-J.
2011-10-01
Both of the Chinese lunar orbiters, Chang'E-1 and Chang'E-2, carried Microwave Radiometers (MRM) to obtain the brightness temperature of the Moon. Based on the different characteristics of these two MRMs, modified brightness temperature algorithms and specific ground calibration parameters were proposed, and the corresponding lunar global brightness temperature maps were produced. In order to analyze the data distributions of these maps, a normalization method was applied to the data series. The second-channel data with large deviations were rectified, and the reasons for the deviations are analyzed in the end.
Wijsman, Liselotte W; Richard, Edo; Cachucho, Ricardo; de Craen, Anton Jm; Jongstra, Susan; Mooijaart, Simon P
2016-06-13
Mobile phone-assisted technologies provide the opportunity to optimize the feasibility of long-term blood pressure (BP) monitoring at home, with the potential of large-scale data collection. In this proof-of-principle study, we evaluated the feasibility of home BP monitoring using mobile phone-assisted technology, by investigating (1) the association between study center and home BP measurements; (2) adherence to reminders on the mobile phone to perform home BP measurements; and (3) referrals, treatment consequences and BP reduction after a raised home BP was diagnosed. We used iVitality, a research platform that comprises a Website, a mobile phone-based app, and health sensors, to measure BP and several other health characteristics during a 6-month period. BP was measured twice at baseline at the study center. Home BP was measured on 4 days during the first week, and thereafter, at semimonthly or monthly intervals, for which participants received reminders on their mobile phone. In the monthly protocol, measurements were performed during 2 consecutive days. In the semimonthly protocol, BP was measured at 1 day. We included 151 participants (mean age [standard deviation] 57.3 [5.3] years). BP measured at the study center was systematically higher when compared with home BP measurements (mean difference systolic BP [standard error] 8.72 [1.08] and diastolic BP 5.81 [0.68] mm Hg, respectively). Correlation of study center and home measurements of BP was high (R=0.72 for systolic BP and 0.72 for diastolic BP, both P<.001). Adherence was better in participants measuring semimonthly (71.4%) compared with participants performing monthly measurements (64.3%, P=.008). During the study, 41 (27.2%) participants were referred to their general practitioner because of a high BP. Referred participants had a decrease in their BP during follow-up (mean difference final and initial [standard error] -5.29 [1.92] for systolic BP and -2.93 [1.08] for diastolic BP, both P<.05). Mobile phone-assisted technology is a reliable and promising method with good adherence to measure BP at home during a 6-month period. This provides a possibility for implementation in large-scale studies and can potentially contribute to BP reduction.
Extraction of Coastlines with Fuzzy Approach Using SENTINEL-1 SAR Image
NASA Astrophysics Data System (ADS)
Demir, N.; Kaynarca, M.; Oy, S.
2016-06-01
Coastlines are important features for water resources, sea products, energy resources, etc. Coastlines change dynamically, so automated methods are necessary for analysing and detecting changes along them. In this study, a Sentinel-1 C-band SAR image has been used to extract the coastline with a fuzzy logic approach. The SAR image has VH polarisation and 10 × 10 m spatial resolution, and covers a 57 km² area in the south-east of Puerto Rico. Radiometric calibration is applied to reduce atmospheric and orbit errors, a speckle filter is used to reduce noise, and the image is then terrain-corrected using the SRTM digital surface model. Classification of SAR imagery is a challenging task since SAR and optical sensors have very different properties; even between different bands of a SAR sensor the images look very different, so classification with traditional unsupervised methods is difficult. Here, a fuzzy approach has been applied to distinguish coastal pixels from land surface pixels. The standard deviation, mean and median values are calculated for use as parameters in the fuzzy approach. The Mean-standard-deviation (MS) Large membership function is used because the land and ocean pixels that dominate the SAR image have large mean and standard deviation values. The pixel values are multiplied by 1000 to simplify the calculations. The mean is calculated as 23 and the standard deviation as 12 for the whole image, and the multiplier parameters are selected as a = 0.58 and b = 0.05 to maximize the land surface membership. The result is evaluated first using airborne LIDAR data, for the areas where a LIDAR dataset is available, and secondly against a manually digitized coastline. Laser points below 0.5 m are classified as ocean points, and the 3D alpha-shapes algorithm is used to detect coastline points from the LIDAR data. Minimum distances are then calculated between the LIDAR coastline points and the extracted coastline; the mean is 5.82 m, the standard deviation 5.83 m and the median 4.08 m. For the second evaluation, the extracted coastline is compared with manually created lines on the SAR image. Both lines are converted to dense points at 1 m intervals and the closest distances are calculated between the points from the extracted and the manually created coastlines; the mean is 5.23 m, the standard deviation 4.52 m and the median 4.13 m. The evaluation values are within the accuracy of the SAR data used for both quality assessment approaches.
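A sketch of an "MS Large"-type membership built from the image mean and standard deviation with the quoted parameters (mean 23, standard deviation 12, a = 0.58, b = 0.05); the exact functional form used in the study is not given, so the common GIS definition below is an assumption.

```python
import numpy as np

def ms_large(x, mean, std, a=0.58, b=0.05):
    """'MS Large' fuzzy membership (one common GIS definition, assumed here):
    mu = 1 - (b*std) / (x - a*mean + b*std) for x > a*mean, else 0."""
    x = np.asarray(x, dtype=float)
    mu = np.zeros_like(x)
    above = x > a * mean
    mu[above] = 1.0 - (b * std) / (x[above] - a * mean + b * std)
    return mu

# Pixel values already scaled by 1000, as in the text.
print(ms_large([2.0, 10.0, 23.0, 60.0], mean=23.0, std=12.0))
# -> approximately [0. 0. 0.94 0.99]: low membership for dark pixels, high for bright ones
```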
Computation of rare transitions in the barotropic quasi-geostrophic equations
NASA Astrophysics Data System (ADS)
Laurie, Jason; Bouchet, Freddy
2015-01-01
We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
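A minimal sketch of a minimum action method for a toy gradient (Langevin) system rather than the geophysical models above: discretize the Freidlin-Wentzell/Onsager-Machlup action for paths joining the two minima of a double-well potential and minimize over the interior path points. All names and the potential are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

drift = lambda x: x - x ** 3          # b(x) = -V'(x) for V(x) = (x^2 - 1)^2 / 4

def action(interior, dt, x_start=-1.0, x_end=1.0):
    """Discretized action S = 1/2 * sum |xdot - b(x_mid)|^2 * dt with pinned endpoints."""
    path = np.concatenate(([x_start], interior, [x_end]))
    xdot = np.diff(path) / dt
    xmid = 0.5 * (path[1:] + path[:-1])
    return 0.5 * np.sum((xdot - drift(xmid)) ** 2) * dt

n, T = 100, 30.0                      # number of time steps and total path duration
dt = T / n
initial_guess = np.linspace(-1.0, 1.0, n + 1)[1:-1]
res = minimize(action, initial_guess, args=(dt,), method="L-BFGS-B")
# For a gradient system the minimal action tends to 2*[V(saddle) - V(minimum)] = 0.5
# as T grows and the discretization is refined.
print(res.fun)
```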
Impact of buildings on surface solar radiation over urban Beijing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Bin; Liou, Kuo-Nan; Gu, Yu
The rugged surface of an urban area due to varying buildings can interact with solar beams and affect both the magnitude and spatiotemporal distribution of surface solar fluxes. Here we systematically examine the impact of buildings on downward surface solar fluxes over urban Beijing by using a 3-D radiation parameterization that accounts for 3-D building structures vs. the conventional plane-parallel scheme. We find that the resulting downward surface solar flux deviations between the 3-D and the plane-parallel schemes are generally ±1–10 W m⁻² at 800 m grid resolution and within ±1 W m⁻² at 4 km resolution. Pairs of positive–negative flux deviations on different sides of buildings are resolved at 800 m resolution, while they offset each other at 4 km resolution. Flux deviations from the unobstructed horizontal surface at 4 km resolution are positive around noon but negative in the early morning and late afternoon. The corresponding deviations at 800 m resolution, in contrast, show diurnal variations that are strongly dependent on the location of the grids relative to the buildings. Both the magnitude and spatiotemporal variations of flux deviations are largely dominated by the direct flux. Furthermore, we find that flux deviations can potentially be an order of magnitude larger by using a finer grid resolution. Atmospheric aerosols can reduce the magnitude of downward surface solar flux deviations by 10–65 %, while the surface albedo generally has a rather moderate impact on flux deviations. The results imply that the effect of buildings on downward surface solar fluxes may not be critically significant in mesoscale atmospheric models with a grid resolution of 4 km or coarser. However, the effect can play a crucial role in meso-urban atmospheric models as well as microscale urban dispersion models with resolutions of 1 m to 1 km.
Severity of Illness Scores May Misclassify Critically Ill Obese Patients.
Deliberato, Rodrigo Octávio; Ko, Stephanie; Komorowski, Matthieu; Armengol de La Hoz, M A; Frushicheva, Maria P; Raffa, Jesse D; Johnson, Alistair E W; Celi, Leo Anthony; Stone, David J
2018-03-01
Severity of illness scores rest on the assumption that patients have normal physiologic values at baseline and that patients with similar severity of illness scores have the same degree of deviation from their usual state. Prior studies have reported differences in baseline physiology, including laboratory markers, between obese and normal weight individuals, but these differences have not been analyzed in the ICU. We compared deviation from baseline of pertinent ICU laboratory test results between obese and normal weight patients, adjusted for the severity of illness. Retrospective cohort study in a large ICU database. Tertiary teaching hospital. Obese and normal weight patients who had laboratory results documented between 3 days and 1 year prior to hospital admission. None. Seven hundred sixty-nine normal weight patients were compared with 1,258 obese patients. After adjusting for the severity of illness score, age, comorbidity index, baseline laboratory result, and ICU type, the following deviations were found to be statistically significant: WBC 0.80 (95% CI, 0.27-1.33) × 10⁹/L, p = 0.003; log (blood urea nitrogen) 0.01 (95% CI, 0.00-0.02), p = 0.014; log (creatinine) 0.03 (95% CI, 0.02-0.05), p < 0.001; with all deviations higher in obese patients. A logistic regression analysis suggested that, after adjusting for age and severity of illness, at least one of these deviations had a statistically significant effect on hospital mortality (p = 0.009). Among patients with the same severity of illness score, we detected clinically small but significant deviations in WBC, creatinine, and blood urea nitrogen from baseline in obese compared with normal weight patients. These small deviations are likely to be increasingly important as bigger data are analyzed in increasingly precise ways. Recognition of the extent to which all critically ill patients may deviate from their own baseline may improve the objectivity, precision, and generalizability of ICU mortality prediction and severity adjustment models.
[Conservative and surgical treatment of convergence excess].
Ehrt, O
2016-07-01
Convergence excess is a common finding especially in pediatric strabismus. A detailed diagnostic approach has to start after full correction of any hyperopia measured in cycloplegia. It includes measurements of manifest and latent deviation at near and distance fixation, near deviation after relaxation of accommodation with addition of +3 dpt, assessment of binocular function with and without +3 dpt as well as the accommodation range. This diagnostic approach is important for the classification into three types of convergence excess, which require different therapeutic approaches: 1) hypo-accommodative convergence excess is treated with permanent bifocal glasses, 2) norm-accommodative patients should be treated with bifocals which can be weaned over years, especially in patients with good stereopsis and 3) non-accommodative convergence excess and patients with large distance deviations need a surgical approach. The most effective operations include those which reduce the muscle torque, e. g. bimedial Faden operations or Y‑splitting of the medial rectus muscles.
Diode‐based transmission detector for IMRT delivery monitoring: a validation study
Li, Taoran; Wu, Q. Jackie; Matzen, Thomas; Yin, Fang‐Fang
2016-01-01
The purpose of this work was to evaluate the potential of a new transmission detector for real‐time quality assurance of dynamic‐MLC‐based radiotherapy. The accuracy of detecting dose variation and static/dynamic MLC position deviations was measured, as well as the impact of the device on the radiation field (surface dose, transmission). Measured dose variations agreed with the known variations within 0.3%. The measurement of static and dynamic MLC position deviations matched the known deviations with high accuracy (0.7–1.2 mm). The absorption of the device was minimal (∼ 1%). The increased surface dose was small (1%–9%) but, when added to existing collimator scatter effects, could become significant at large field sizes (≥30×30 cm²). Overall the accuracy and speed of the device show good potential for real‐time quality assurance. PACS number(s): 87.55.Qr PMID:27685115
Determination of the optimal level for combining area and yield estimates
NASA Technical Reports Server (NTRS)
Bauer, M. E. (Principal Investigator); Hixson, M. M.; Jobusch, C. D.
1981-01-01
Several levels of obtaining both area and yield estimates of corn and soybeans in Iowa were considered: county, refined strata, refined/split strata, crop reporting district, and state. Using the CCEA model form and smoothed weather data, regression coefficients at each level were derived to compute yield and its variance. Variances were also computed at the stratum level. The variance of the yield estimates was largest at the state and smallest at the county level for both crops. The refined strata had somewhat larger variances than those associated with the refined/split strata and CRD. For production estimates, the difference in standard deviations among levels was not large for corn, but for soybeans the standard deviation at the state level was more than 50% greater than for the other levels. The refined strata had the smallest standard deviations. The county level was not considered in the evaluation of production estimates due to the lack of county area variances.
NASA Technical Reports Server (NTRS)
Kimes, D. S.
1979-01-01
The effects of vegetation canopy structure on thermal infrared sensor response must be understood before vegetation surface temperatures of canopies with low percent ground cover can be accurately inferred. The response of a sensor is a function of vegetation geometric structure, the vertical surface temperature distribution of the canopy components, and sensor view angle. Large deviations between the nadir sensor effective radiant temperature (ERT) and vegetation ERT for a soybean canopy were observed throughout the growing season. The nadir sensor ERT of a soybean canopy with 35 percent ground cover deviated from the vegetation ERT by as much as 11 C during the mid-day. These deviations were quantitatively explained as a function of canopy structure and soil temperature. Remote sensing techniques which determine the vegetation canopy temperature(s) from the sensor response need to be studied.
Susanne Winter; Andreas Böck; Ronald E. McRoberts
2012-01-01
Tree diameter and height are commonly measured forest structural variables, and indicators based on them are candidates for assessing forest diversity. We conducted our study on the uncertainty of estimates for mostly large geographic scales for four indicators of forest structural gamma diversity: mean tree diameter, mean tree height, and standard deviations of tree...
Global Behavior in Large Scale Systems
2013-12-05
This research attained two main achievements: 1) ... microscopic random interactions among the agents. In this research we considered two main problems: 1) large deviation error performance in ...
Sub-Scale Analysis of New Large Aircraft Pool Fire-Suppression
2016-01-01
Discrete ordinates radiation and a single-step Khan and Greeves soot model provided radiation and soot interaction. Agent spray dynamics were ... Notable differences observed showed a modeled increase in the mockup surface heat-up rate as well as a modeled decreased rate of soot production ... Large deviation between sensors was due to sensor alignment challenges and asymmetric fuel surface ignition.
Measuring Diameters Of Large Vessels
NASA Technical Reports Server (NTRS)
Currie, James R.; Kissel, Ralph R.; Oliver, Charles E.; Smith, Earnest C.; Redmon, John W., Sr.; Wallace, Charles C.; Swanson, Charles P.
1990-01-01
Computerized apparatus produces accurate results quickly. Apparatus measures diameter of tank or other large cylindrical vessel, without prior knowledge of exact location of cylindrical axis. Produces plot of inner circumference, estimate of true center of vessel, data on radius, diameter of best-fit circle, and negative and positive deviations of radius from circle at closely spaced points on circumference. Eliminates need for time-consuming and error-prone manual measurements.
Data assimilation in the low noise regime
NASA Astrophysics Data System (ADS)
Weare, J.; Vanden-Eijnden, E.
2012-12-01
On-line data assimilation techniques such as ensemble Kalman filters and particle filters tend to lose accuracy dramatically when presented with an unlikely observation. Such an observation may be caused by an unusually large measurement error or may reflect a rare fluctuation in the dynamics of the system. Over a long enough span of time it becomes likely that one or several of these events will occur. In some cases they are signatures of the most interesting features of the underlying system, and their prediction becomes the primary focus of the data assimilation procedure. The Kuroshio, or Black Current, that runs along the eastern coast of Japan is an example of just such a system. It undergoes infrequent but dramatic changes of state between a small meander, during which the current remains close to the coast of Japan, and a large meander, during which the current bulges away from the coast. Because of the important role that the Kuroshio plays in distributing heat and salinity in the surrounding region, prediction of these transitions is of acute interest. Here we focus on a regime in which both the stochastic forcing on the system and the observational noise are small. In this setting, large deviation theory can be used to understand why standard filtering methods fail and to guide the design of more effective data assimilation techniques. Motivated by our large deviations analysis, we propose several data assimilation strategies capable of efficiently handling rare events such as the transitions of the Kuroshio. These techniques are tested on a model of the Kuroshio and shown to perform much better than standard filtering methods. In the accompanying figure, the sequence of observations (circles) is taken directly from one of our Kuroshio model's transition events from the small meander to the large meander. We tested two new algorithms (Algorithms 3 and 4 in the legend) motivated by our large deviations analysis, as well as a standard particle filter and an ensemble Kalman filter, with the parameters of each algorithm chosen so that their costs are comparable. The particle filter and the ensemble Kalman filter fail to accurately track the transition, whereas Algorithms 3 and 4 maintain accuracy (and smaller-scale resolution) throughout the transition.
Probability distributions of linear statistics in chaotic cavities and associated phase transitions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vivo, Pierpaolo; Majumdar, Satya N.; Bohigas, Oriol
2010-03-01
We establish large deviation formulas for linear statistics on the N transmission eigenvalues $T_i$ of a chaotic cavity, in the framework of random matrix theory. Given any linear statistic of interest $A=\sum_{i=1}^{N}a(T_i)$, the probability distribution $P_A(A,N)$ of A generically satisfies the large deviation formula $\lim_{N\to\infty}\left[-2\log P_A(Nx,N)/(\beta N^2)\right]=\Psi_A(x)$, where $\Psi_A(x)$ is a rate function that we compute explicitly in many cases (conductance, shot noise, and moments) and $\beta$ corresponds to different symmetry classes. Using these large deviation expressions, it is possible to recover easily known results and to produce new formulas, such as a closed-form expression for $v(n)=\lim_{N\to\infty}\mathrm{var}(\mathcal{T}_n)$ (where $\mathcal{T}_n=\sum_i T_i^n$) for arbitrary integer n. The universal limit $v^{*}=\lim_{n\to\infty}v(n)=1/(2\pi\beta)$ is also computed exactly. The distributions display a central Gaussian region flanked on both sides by non-Gaussian tails. At the junction of the two regimes, weakly nonanalytical points appear, a direct consequence of phase transitions in an associated Coulomb gas problem. Numerical checks are also provided, which are in full agreement with our asymptotic results in both real and Laplace space even for moderately small N. Part of the results have been announced by Vivo et al. [Phys. Rev. Lett. 101, 216809 (2008)].
Vicini, P; Di Nicola, S; Antonini, G; De Berardinis, E; Gentile, V; De Marco, F
2016-11-01
We present the use of a modified corporoplasty, based on geometrical principles, to determine the exact site for the incision in the tunica or plaque and the exact amount of albuginea for overlaying, in order to correct with extreme precision the different types of congenital or acquired penile curvature due to Peyronie's disease. To describe our experience with a new surgical procedure for the correction of penile curvature avoiding any overcorrection or undercorrection. Between March 2004 and April 2013, a total of 74 patients underwent the geometrical modified corporoplasty. All patients had congenital curvature up to 90° or acquired stable penile curvature of less than 60° that made sexual intercourse very difficult or impossible, normal erectile function, and absence of an hourglass or hinge effect. Preoperative testing included a physical examination, 3 photographs (frontal, dorsal and lateral) of the penis during erection, a 10 mcg PGE1-induced erection with Doppler ultrasound, and administration of the International Index of Erectile Function (IIEF-15) questionnaire. Follow-up with postoperative evaluation at 12 weeks, 12 and 24 months included the same preoperative testing. Satisfaction rates were assessed with a validated questionnaire, the International Erectile Dysfunction Inventory of Treatment Satisfaction (EDITS). Statistical analysis with Student's t-test was performed using commercially available personal computer software. A total of 25 patients had congenital penile curvature with a mean deviation of 46.8° (range 40-90°), and another 49 patients had Peyronie's disease with a mean deviation of 58.4° (range 45-60°). No major complications were reported. Postoperative correction of the curvature was achieved in all patients (100%). Neither undercorrection nor overcorrection was recorded. No significant relapse (curvature > 15°) occurred in our patients. Shortening of the penis was reported by 74% but did not influence the high overall satisfaction of 92% (patients completely satisfied with their sexual life). The erectile function was analyzed in both groups; Student's t-test showed a significant improvement in erectile function, with a preoperative average IIEF-15 score of 17.43±4.67 versus a postoperative average of 22.57±4.83 (P=0.001). This geometrical modified Nesbit corporoplasty is a valid therapy that allows penile straightening. The geometric principles make the technique reproducible in multicentre studies.
Factors reducing the expected deflection in initial orientation in clock-shifted homing pigeons.
Gagliardo, Anna; Odetti, Francesca; Ioalè, Paolo
2005-02-01
To orient from familiar sites, homing pigeons can rely on both an olfactory map and visual familiar landmarks. The latter can in principle be used in two different ways: either within a topographical map exploited for piloting or in a so-called mosaic map associated with a compass bearing. One way to investigate the matter is to put the compass and the topographical information in conflict by releasing clock-shifted pigeons from familiar locations. Although the compass orientation is in general dominant over a piloting strategy, a stronger or weaker tendency to correct towards the home direction by clock-shifted pigeons released from very familiar sites has often been observed. To investigate which factors are involved in the reduction of the deviation due to clock-shift, we performed a series of releases with intact and anosmic pigeons from familiar sites in unshifted and clock-shifted conditions and a series of releases from the same sites with naive clock-shifted birds. Our data suggest that the following factors have a role in reducing deviation due to the clock-shift: familiarity with the release site, the lack of olfactory information and some unknown site-dependent features.
Stoecklin, S; Volk, T; Yousaf, A; Reindl, L
2015-01-01
In this paper, an enhanced approach for a class E amplifier that is insensitive to coil impedance variations is presented. While state-of-the-art class E amplifiers widely used to supply implanted systems show a strong degradation of efficiency when the powering distance, coil orientation or implant current consumption deviates from the nominal design, the presented concept is able to detect these deviations on-line and to reconfigure the amplifier automatically. The concept is facilitated by a new approach for sensing the load impedance without interruption of the power supply to the implant, while the main components of the class E amplifier are programmable by software. Therefore, the device is able to perform dynamic impedance matching. Besides presenting the operational principle and the design equations, we show an adaptive prototype reader system which achieves a drain efficiency of up to 92% for a wide range of reflected coil impedances from 1 to 40 Ω. The integrated communication concept allows downlink data rates of up to 500 kBit/s, while the load-modulation-based uplink from implant to reader was verified to provide up to 1.35 MBit/s.
COBE attitude as seen from the FDF
NASA Technical Reports Server (NTRS)
Sedlak, J.; Chu, D.; Scheidker, E.
1990-01-01
The goal of the Flight Dynamics Facility (FDF) attitude support is twofold: to determine spacecraft attitude and to explain deviations from nominal attitude behavior. Attitude determination often requires resolving contradictions in the sensor observations. This may be accomplished by applying calibration corrections or by revising the observation models. After accounting for all known sources of error, solution accuracy should be limited only by observation and propagation noise. The second half of the goal is to explain why the attitude may not be as originally intended. Reasons for such deviations include sensor or actuator misalignments and control system performance. In these cases, the ability to explain the behavior should, in principle, be limited only by knowledge of the sensor and actuator data and external torques. Documented here are some results obtained to date in support of the Cosmic Background Explorer (COBE). Advantages and shortcomings of the integrated attitude determination/sensor calibration software are discussed. Some preliminary attitude solutions using data from the Diffuse Infrared Background Experiment (DIRBE) instrument are presented and compared to solutions using Sun and Earth sensors. A dynamical model is constructed to illustrate the relative importance of the various sensor imperfections. This model also shows the connection between the high- and low-frequency attitude oscillations.
Son, Ji Y; Ramos, Priscilla; DeWolf, Melissa; Loftus, William; Stigler, James W
2018-01-01
In this article, we begin to lay out a framework and approach for studying how students come to understand complex concepts in rich domains. Grounded in theories of embodied cognition, we advance the view that understanding of complex concepts requires students to practice, over time, the coordination of multiple concepts, and the connection of this system of concepts to situations in the world. Specifically, we explore the role that a teacher's gesture might play in supporting students' coordination of two concepts central to understanding in the domain of statistics: mean and standard deviation. In Study 1 we show that university students who have just taken a statistics course nevertheless have difficulty taking both mean and standard deviation into account when thinking about a statistical scenario. In Study 2 we show that presenting the same scenario with an accompanying gesture to represent variation significantly impacts students' interpretation of the scenario. Finally, in Study 3 we present evidence that instructional videos on the internet fail to leverage gesture as a means of facilitating understanding of complex concepts. Taken together, these studies illustrate an approach to translating current theories of cognition into principles that can guide instructional design.
A micro dew point sensor with a thermal detection principle
NASA Astrophysics Data System (ADS)
Kunze, M.; Merz, J.; Hummel, W.-J.; Glosch, H.; Messner, S.; Zengerle, R.
2012-01-01
We present a dew point temperature sensor with thermal detection of condensed water on a thin membrane, fabricated by silicon micromachining. The membrane (600 × 600 × ~1 µm³) is part of a silicon chip and contains a heating element as well as a thermopile for temperature measurement. By dynamically heating the membrane and simultaneously analyzing the transient increase of its temperature, it is detected whether or not condensed water is on the membrane. To cool the membrane down, a Peltier cooler is used and electronically controlled in such a way that the temperature of the membrane is constantly held at the value where condensation of water begins. This temperature is measured and output as the dew point temperature. The sensor system works over a wide range of dew point temperatures, from 1 K down to 44 K below air temperature. Experimental investigations showed that the deviation of the measured dew point temperatures from reference values is below ±0.2 K over an air temperature range of 22 to 70 °C. At low dew point temperatures of -20 °C (air temperature = 22 °C) the deviation increases to nearly -1 K.
Li, Chenzhe; Thampy, Sampreetha; Zheng, Yongping; Kweun, Joshua M; Ren, Yixin; Chan, Julia Y; Kim, Hanchul; Cho, Maenghyo; Kim, Yoon Young; Hsu, Julia W P; Cho, Kyeongjae
2016-03-31
Understanding and effectively predicting the thermal stability of ternary transition metal oxides with heavy elements using first-principles simulations is vital for understanding the performance of advanced materials. In this work, we have investigated the thermal stability of mullite RMn2O5 (R = Bi, Pr, Sm, or Gd) structures by constructing temperature phase diagrams using an efficient mixed generalized gradient approximation (GGA) and GGA + U method. Simulation-predicted stability regions without corrections on the heavy elements show a 4-200 K underestimation compared to our experimental results. We have found that the number of d/f electrons in the heavy elements shows a linear relationship with the prediction deviation. Further correction of the strongly correlated electrons in the heavy elements could significantly reduce the prediction deviations. Our corrected simulation results demonstrate that further correction of the R-site elements in RMn2O5 could effectively reduce the underestimation of the density functional theory-predicted decomposition temperature to within 30 K. Therefore, it could produce an accurate thermal stability prediction for complex ternary transition metal oxide compounds with heavy elements.
NASA Astrophysics Data System (ADS)
Lu, Zhiwei; Han, Li; Hu, Chengjun; Pan, Yong; Duan, Shengnan; Wang, Ningbo; Li, Shijian; Nuer, Maimaiti
2017-10-01
With the development of oil and gas fields, the accuracy and quantity requirements for the real-time dynamic monitoring data needed for well dynamic analysis and regulation are increasing. Permanent, distributed downhole optical fiber temperature and pressure monitoring and other online, real-time, continuous data monitoring have become important data acquisition and transmission technologies in digital and intelligent oil field construction. Given the requirement for dynamic analysis of the steam chamber development state in SAGD horizontal wells in the F oil reservoir of the Xinjiang oilfield, it is necessary to carry out real-time and continuous temperature monitoring in the horizontal section. Based on a study of the principle of optical fiber temperature measurement, the factors that cause deviations in fiber temperature sensing are analyzed, and a method of fiber temperature calibration is proposed to solve the problem of temperature deviation. Field application in three wells showed that accurate measurement of downhole temperature could be attained with temperature correction. Real-time, continuous downhole distributed fiber temperature sensing therefore has high application value in the reservoir management of SAGD horizontal wells, and it provides a reference for similar dynamic monitoring in reservoir production.
NASA Astrophysics Data System (ADS)
Wu, Weibin; Dai, Yifan; Zhou, Lin; Xu, Mingjin
2016-09-01
Material removal accuracy has a direct impact on the machining precision and efficiency of ion beam figuring. By analyzing the factors suppressing the improvement of material removal accuracy, we conclude that correcting the removal function deviation and reducing the amount of material removed during each iterative process can help to improve material removal accuracy. The removal-function correction principle can effectively compensate for the removal function deviation between the actual figuring and simulated processes, while experiments indicate that material removal accuracy decreases with long machining times, so removing only a small amount of material in each iterative process is suggested. However, more clamping and measuring steps are introduced in this way, which also generate machining errors and suppress the improvement of material removal accuracy. On this account, a free-measurement iterative process method is put forward to improve material removal accuracy and figuring efficiency by using fewer measuring and clamping steps. Finally, an experiment on a φ100-mm Zerodur planar workpiece is performed, which shows that, in similar figuring time, three free-measurement iterative processes could improve the material removal accuracy and the surface error convergence rate by 62.5% and 17.6%, respectively, compared with a single iterative process.
Quantitative assessment of joint position sense recovery in subacute stroke patients: a pilot study.
Kattenstroth, Jan-Christoph; Kalisch, Tobias; Kowalewski, Rebecca; Tegenthoff, Martin; Dinse, Hubert R
2013-11-01
To assess joint position sense performance in subacute stroke patients using a novel quantitative assessment. Proof-of-principle pilot study with a group of subacute stroke patients. Assessment at baseline and after 2 weeks of intervention. Additional data for a healthy age-matched control group. Ten subacute stroke patients (mean age 65.41 years, standard deviation 2.5; 4 females; 2.3 weeks, standard deviation 0.2, post-stroke) receiving in-patient standard rehabilitation and repetitive electrical stimulation of the affected hand. Joint position sense was assessed based on the ability to correctly perceive the opening angles of the finger joints. Patients had to report size differences of polystyrene balls of various sizes, whilst the balls were enclosed simultaneously by the affected and the non-affected hands. A total of 21 pairwise size comparisons was used to quantify joint position performance. After 2 weeks of therapeutic intervention a significant improvement in joint position sense performance was observed; however, the performance level was still below that of the healthy control group. The results indicate high feasibility and sensitivity of the joint position test in subacute stroke patients. Testing allowed quantification of both the deficit and the rehabilitation outcome.
Apparent violation of the principle of equivalence and killing horizons. [for relativity
NASA Technical Reports Server (NTRS)
Zimmerman, R. L.; Farhoosh, H.
1980-01-01
By means of the principle of equivalence the qualitative behavior of the Schwarzschild horizon about a uniformly accelerating particle was deduced. This result is confirmed for an exact solution of a uniformly accelerating object in the limit of small accelerations. For large accelerations the Schwarzschild horizon appears to violate the qualitative behavior established via the principle of equivalence. When similar arguments are extended to an observable such as the red shift between two observers, there is no departure from the results expected from the principle of equivalence. The resolution of the paradox is brought about by a compensating effect due to the Rindler horizon.
NASA Astrophysics Data System (ADS)
Varshney, Kapil; Chang, Song; Wang, Z. Jane
2013-05-01
Falling parallelograms exhibit coupled motion of autogyration and tumbling, similar to the motion of falling tulip seeds, unlike maple seeds which autogyrate but do not tumble, or rectangular cards which tumble but do not gyrate. This coupled tumbling and autogyrating motion is robust when card parameters, such as aspect ratio, internal angle, and mass density, are varied. We measure the three-dimensional (3D) falling kinematics of the parallelograms and quantify their descending speed, azimuthal rotation, tumbling rotation, and cone angle in each fall. The cone angle is insensitive to the variation of the card parameters, and the card tumbling axis does not overlap with but is close to the diagonal axis. In addition to this connection to the dynamics of falling seeds, these trajectories provide an ideal set of data to analyze 3D aerodynamic force and torque at an intermediate range of Reynolds numbers, and the results will be useful for constructing 3D aerodynamic force and torque models. Tracking these free falling trajectories gives us a nonintrusive method for deducing instantaneous aerodynamic forces. We determine the 3D aerodynamic forces and torques based on Newton-Euler equations. The dynamical analysis reveals that, although the angle of attack changes dramatically during tumbling, the aerodynamic forces have a weak dependence on the angle of attack. The aerodynamic lift is dominated by the coupling of translational and rotational velocities. The aerodynamic torque has an unexpectedly large component perpendicular to the card. The analysis of the Euler equation suggests that this large torque is related to the deviation of the tumbling axis from the principal axis of the card.
Doutel, E; Pinto, S I S; Campos, J B L M; Miranda, J M
2016-08-07
Murray developed two laws for the geometry of bifurcations in the circulatory system. Based on the principle of energy minimization, Murray found restrictions for the relation between the diameters and also between the angles of the branches. It is known that bifurcations are prone to the development of atherosclerosis, in regions associated to low wall shear stresses (WSS) and high oscillatory shear index (OSI). These indicators (size of low WSS regions, size of high OSI regions and size of high helicity regions) were evaluated in this work. All of them were normalized by the size of the outflow branches. The relation between Murray's laws and the size of low WSS regions was analysed in detail. It was found that the main factor leading to large regions of low WSS is the so called expansion ratio, a relation between the cross section areas of the outflow branches and the cross section area of the main branch. Large regions of low WSS appear for high expansion ratios. Furthermore, the size of low WSS regions is independent of the ratio between the diameters of the outflow branches. Since the expansion ratio in bifurcations following Murray's law is kept in a small range (1 and 1.25), all of them have regions of low WSS with similar size. However, the expansion ratio is not small enough to completely prevent regions with low WSS values and, therefore, Murray's law does not lead to atherosclerosis minimization. A study on the effect of the angulation of the bifurcation suggests that the Murray's law for the angles does not minimize the size of low WSS regions.
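For reference, a short worked statement of the quantities discussed above; the symmetric-daughter case is an illustrative assumption used only to show why the expansion ratio stays in a narrow range under Murray's law.

```latex
% Murray's law for a parent vessel of radius r_0 splitting into daughters r_1, r_2:
\[
r_0^3 = r_1^3 + r_2^3, \qquad
\mathrm{ER} = \frac{r_1^2 + r_2^2}{r_0^2}
\quad\text{(expansion ratio: daughter over parent cross sections).}
\]
% Symmetric case r_1 = r_2 = 2^{-1/3} r_0:
\[
\mathrm{ER} = 2\cdot 2^{-2/3} = 2^{1/3} \approx 1.26 ,
\]
% while the fully asymmetric limit (r_2 \to 0, r_1 \to r_0) gives ER -> 1,
% consistent with the narrow range of expansion ratios quoted in the abstract.
```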
Morphologic changes of the anal sphincter musculature during and after temporary stool deviation.
Sailer, M; Fein, M; Fuchs, K H; Bussen, D; Grun, C; Thiede, A
2001-04-01
Temporary stool deviation, using a stoma, is a well-known surgical principle to protect low colorectal or coloanal anastomoses. The purpose of this study was to evaluate any morphologic changes with regard to the anal sphincter muscles during and after temporary ileostomy. Forty-four patients with rectal carcinomas were studied prospectively. All patients underwent low anterior resection. Reconstruction was performed using either a coloanal pouch or a straight end-to-end anastomosis. A protective stoma was fashioned in all 44 patients (ileostomy n=41; colostomy n=3). Stoma closure was carried out after a median of 85 days (41-330 days). Using a standard protocol, anal-sphincter thickness [m. puborectalis, external anal sphincter (EAS) and internal anal (IAS) sphincter] was assessed by means of endoanal ultrasonography preoperatively, at the time of stoma closure, and every 3 months thereafter for 1 year. The diameter of the puborectal muscle decreased from a median preoperative value of 6.3 mm to 5.7 mm at the time of stoma closure (P=0.03). After 3 months, 6.2 mm was measured. This value remained stable for the complete follow-up period. Similar results were recorded for the EAS. The IAS thickness remained stable throughout the study period, measuring between 2.1 mm and 2.4 mm. Temporary stool deviation does lead to morphologic changes of the anal sphincter. While the smooth muscle remains unchanged, the striated counterpart undergoes atrophic transformation. However, after passage reconstruction, i.e., stoma closure, a rapid regeneration of the voluntary muscles is observed.
Quantitating Human Optic Disc Topography
NASA Astrophysics Data System (ADS)
Graebel, William P.; Cohan, Bruce E.; Pearch, Andrew C.
1980-07-01
A method is presented for quantitatively expressing the topography of the human optic disc, applicable in a clinical setting to the diagnosis and management of glaucoma. Photographs of the disc illuminated by a pattern of fine, high-contrast parallel lines are digitized. From the measured deviation of the lines as they traverse the disc surface, disc topography is calculated using the principles of optical sectioning. The quantitators applied to express this topography have the following advantages: sensitivity to disc shape; objectivity; going beyond the limits of cup-disc ratio estimates and volume calculations; perfect generality in a mathematical sense; and an inherent scheme for determining a non-subjective reference frame to compare different discs or the same disc over time.
Probability of stress-corrosion fracture under random loading.
NASA Technical Reports Server (NTRS)
Yang, J.-N.
1972-01-01
A method is developed for predicting the probability of stress-corrosion fracture of structures under random loadings. The formulation is based on the cumulative damage hypothesis and the experimentally determined stress-corrosion characteristics. Under both stationary and nonstationary random loadings, the mean value and the variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy. It is shown that, under stationary random loadings, the standard deviation of the cumulative damage increases in proportion to the square root of time, while the coefficient of variation (dispersion) decreases in inverse proportion to the square root of time. Numerical examples are worked out to illustrate the general results.
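The scaling statements at the end of the abstract follow from treating the cumulative damage as a sum of increments accumulated over time; a sketch of the reasoning, assuming approximately independent increments for illustration:

```latex
% Cumulative damage D(t) built from increments with mean \mu_1 and variance
% \sigma_1^2 per unit time (independence assumed for this sketch):
\[
\mathbb{E}[D(t)] = \mu_1 t, \qquad \operatorname{Var}[D(t)] = \sigma_1^2 t
\;\Rightarrow\;
\sigma_D(t) = \sigma_1 \sqrt{t}, \qquad
\mathrm{CV}(t) = \frac{\sigma_D(t)}{\mathbb{E}[D(t)]} = \frac{\sigma_1}{\mu_1 \sqrt{t}} ,
\]
% i.e. the standard deviation grows like \sqrt{t} while the coefficient of
% variation decays like 1/\sqrt{t}, as stated in the abstract.
```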
Verifying Architectural Design Rules of the Flight Software Product Line
NASA Technical Reports Server (NTRS)
Ganesan, Dharmalingam; Lindvall, Mikael; Ackermann, Chris; McComas, David; Bartholomew, Maureen
2009-01-01
This paper presents experiences of verifying architectural design rules of the NASA Core Flight Software (CFS) product line implementation. The goal of the verification is to check whether the implementation is consistent with the CFS architectural rules derived from the developer's guide. The results indicate that consistency checking helps a) identifying architecturally significant deviations that were eluded during code reviews, b) clarifying the design rules to the team, and c) assessing the overall implementation quality. Furthermore, it helps connecting business goals to architectural principles, and to the implementation. This paper is the first step in the definition of a method for analyzing and evaluating product line implementations from an architecture-centric perspective.
Lunar Laser Ranging Science: Gravitational Physics and Lunar Interior and Geodesy
NASA Technical Reports Server (NTRS)
Williams, James G.; Turyshev, Slava G.; Boggs, Dale H.; Ratcliff, J. Todd
2004-01-01
Laser pulses fired at retroreflectors on the Moon provide very accurate ranges. Analysis yields information on Earth, Moon, and orbit. The highly accurate retroreflector positions have uncertainties less than a meter. Tides on the Moon show strong dissipation, with Q = 33 ± 4 at a month and a weak dependence on period. Lunar rotation depends on interior properties; a fluid core is indicated, with radius approximately 20% that of the Moon. Tests of relativistic gravity verify the equivalence principle to ±1.4×10⁻¹³, limit deviations from Einstein's general relativity, and show no rate of change of the gravitational constant, (dG/dt)/G, with uncertainty 9×10⁻¹³/yr.
Eshelby's problem of a spherical inclusion eccentrically embedded in a finite spherical body
He, Q.-C.
2017-01-01
Resorting to the superposition principle, the solution of Eshelby's problem of a spherical inclusion located eccentrically inside a finite spherical domain is obtained in two steps: (i) the solution to the problem of a spherical inclusion in an infinite space; (ii) the solution to the auxiliary problem of the corresponding finite spherical domain subjected to appropriate boundary conditions. Moreover, a set of functions called the sectional and harmonic deviators are proposed and developed to work out the auxiliary solution in a series form, including the displacement and Eshelby tensor fields. The analytical solutions are explicitly obtained and illustrated when the geometric and physical parameters and the boundary condition are specified. PMID:28293141
Eussen, J H H
1979-01-01
The interaction between alang-alang (Imperata cylindrica) and maize or sorghum was studied in competition experiments according to the replacement principle. Dry matter yield of alang-alang in these experiments appeared hardly affected by the presence of maize or sorghum, while this yield of the latter two was strongly reduced by the presence of alang-alang. The relative yield total (RYT) reached unity except in one experiment in which a value of 0.6 was obtained.The results suggest that the allelopathic activity of alang-alang will find expression in an RYT deviating from one only if alang-alang is not able to utilize all available space.
The Future of Medical Dosimetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Robert D., E-mail: robert_adams@med.unc.edu
2015-07-01
The world of health care delivery is becoming increasingly complex. The purpose of this manuscript is to analyze current metrics and analytically predict future practices and principles of medical dosimetry. The results indicate five potential areas precipitating change factors: a) evolutionary and revolutionary thinking processes, b) social factors, c) economic factors, d) political factors, and e) technological factors. Outcomes indicate that significant changes will occur in the job structure and content of being a practicing medical dosimetrist. Discussion indicates potential variables that can occur within each process and change factor and how the predicted outcomes can deviate from normative values. Finally, based on predicted outcomes, future opportunities for medical dosimetrists are given.
Pinhas, Alexander; Linderman, Rachel; Mo, Shelley; Krawitz, Brian D; Geyman, Lawrence S; Carroll, Joseph; Rosen, Richard B; Chui, Toco Y
2018-01-01
To present a method for age-matched deviation mapping in the assessment of disease-related changes to the radial peripapillary capillaries (RPCs). We reviewed 4.5 x 4.5 mm en face peripapillary OCT-A scans of 133 healthy control eyes (133 subjects, mean 41.5 yrs, range 11-82 yrs) and 4 eyes with distinct retinal pathologies, obtained using spectral-domain optical coherence tomography angiography. Statistical analysis was performed to evaluate the impact of age on RPC perfusion densities. RPC density group mean and standard deviation maps were generated for each decade of life. Deviation maps were created for the diseased eyes based on these maps. Large peripapillary vessel (LPV; noncapillary vessel) perfusion density was also studied for impact of age. Average healthy RPC density was 42.5±1.47%. ANOVA and pairwise Tukey-Kramer tests showed that RPC density in the ≥60yr group was significantly lower compared to RPC density in all younger decades of life (p<0.01). Average healthy LPV density was 21.5±3.07%. Linear regression models indicated that LPV density decreased with age; however, ANOVA and pairwise Tukey-Kramer tests did not reach statistical significance. Deviation mapping enabled us to quantitatively and visually elucidate the significance of RPC density changes in disease. It is important to consider changes that occur with aging when analyzing RPC and LPV density changes in disease. RPC density, coupled with age-matched deviation mapping techniques, represents a potentially clinically useful method in detecting changes to peripapillary perfusion in disease.
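A minimal sketch of the age-matched deviation-mapping step described above: a patient's RPC perfusion-density map is compared, pixel by pixel, against the mean and standard-deviation maps of the matching decade of life. The z-score form, the grid size, and the simulated values are assumptions for illustration, not the study's actual maps.

```python
import numpy as np

def deviation_map(patient_density, decade_mean, decade_sd, eps=1e-6):
    """Pixel-wise deviation of a patient's RPC density map from the
    age-matched normative mean, in units of normative standard deviations."""
    return (patient_density - decade_mean) / (decade_sd + eps)

# Hypothetical 4.5 x 4.5 mm en face maps sampled on a 304 x 304 grid.
rng = np.random.default_rng(0)
norm_mean = np.full((304, 304), 42.5)      # healthy mean RPC density (%)
norm_sd = np.full((304, 304), 1.47)        # healthy standard deviation (%)
patient = norm_mean + rng.normal(0, 1.5, (304, 304)) - 5.0   # simulated capillary loss
z = deviation_map(patient, norm_mean, norm_sd)
print((z < -2).mean())   # fraction of pixels more than 2 SD below age-matched normal
```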
Wang, Chun; Zheng, Yi; Chang, Hua-Hua
2014-01-01
With the advent of web-based technology, online testing is becoming a mainstream mode in large-scale educational assessments. Most online tests are administered continuously in a testing window, which may pose test security problems because examinees who take the test earlier may share information with those who take the test later. Researchers have proposed various statistical indices to assess test security, and one of the most often used indices is the average test-overlap rate, which was further generalized to the item pooling index (Chang & Zhang, 2002, 2003). These indices, however, are all defined as means (that is, the expected proportion of common items among examinees), and they were originally proposed for computerized adaptive testing (CAT). Recently, multistage testing (MST) has become a popular alternative to CAT. The unique features of MST make it important to report not only the mean, but also the standard deviation (SD) of the test overlap rate, as we advocate in this paper. The standard deviation of the test overlap rate adds important information to the test security profile, because for the same mean, a large SD reflects that certain groups of examinees share more common items than other groups. In this study, we analytically derived the lower bounds of the SD under MST, with the results under CAT as a benchmark. It is shown that when the mean overlap rate is the same between MST and CAT, the SD of test overlap tends to be larger in MST. A simulation study was conducted to provide empirical evidence. We also compared the security of MST under the single-pool versus the multiple-pool designs; both analytical and simulation studies show that the non-overlapping multiple-pool design will slightly increase the security risk.
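To make the test-overlap statistics concrete, here is a small sketch that computes the mean and standard deviation of the pairwise item-overlap rate from simulated administered item sets. The random item-selection scheme is purely illustrative and is not the MST or CAT design analyzed in the paper.

```python
import numpy as np
from itertools import combinations

def overlap_stats(item_sets):
    """Mean and SD of pairwise test-overlap rates: for each pair of examinees,
    the number of common items divided by the (fixed) test length."""
    rates = []
    for a, b in combinations(item_sets, 2):
        rates.append(len(a & b) / len(a))
    rates = np.asarray(rates)
    return rates.mean(), rates.std(ddof=1)

# Illustrative simulation: 200 examinees each see 20 items from a 200-item pool.
rng = np.random.default_rng(1)
pool, test_len = 200, 20
item_sets = [set(rng.choice(pool, test_len, replace=False)) for _ in range(200)]
print(overlap_stats(item_sets))   # purely random selection gives mean near test_len / pool
```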
Motor equivalence during multi-finger accurate force production
Mattos, Daniela; Schöner, Gregor; Zatsiorsky, Vladimir M.; Latash, Mark L.
2014-01-01
We explored stability of multi-finger cyclical accurate force production action by analysis of responses to small perturbations applied to one of the fingers and inter-cycle analysis of variance. Healthy subjects performed two versions of the cyclical task, with and without an explicit target. The “inverse piano” apparatus was used to lift/lower a finger by 1 cm over 0.5 s; the subjects were always instructed to perform the task as accurately as they could at all times. Deviations in the spaces of finger forces and modes (hypothetical commands to individual fingers) were quantified in directions that did not change total force (motor equivalent) and in directions that changed the total force (non-motor equivalent). Motor equivalent deviations started immediately with the perturbation and increased progressively with time. After a sequence of lifting-lowering perturbations leading to the initial conditions, motor equivalent deviations were dominating. These phenomena were less pronounced for analysis performed with respect to the total moment of force about an axis parallel to the forearm/hand. Analysis of inter-cycle variance showed consistently higher variance in a subspace that did not change the total force as compared to the variance that affected total force. We interpret the results as reflections of task-specific stability of the redundant multi-finger system. Large motor equivalent deviations suggest that reactions of the neuromotor system to a perturbation involve large changes of neural commands that do not affect salient performance variables, even during actions with the purpose to correct those salient variables. Consistency of the analyses of motor equivalence and variance analysis provides additional support for the idea of task-specific stability ensured at a neural level. PMID:25344311
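A sketch of the kind of decomposition described above: deviations of the four finger forces are split into a motor-equivalent component (lying in the null space of the total-force constraint, so total force is unchanged) and a non-motor-equivalent component (changing total force). The constraint Jacobian and the example numbers are assumptions for illustration, not the study's data.

```python
import numpy as np

# Total force F is the sum of four finger forces, so the task Jacobian is J = [1 1 1 1].
J = np.ones((1, 4))

# Orthonormal basis of the null space of J (directions that leave F unchanged).
_, _, vt = np.linalg.svd(J)
null_basis = vt[1:]                  # 3 x 4: motor-equivalent directions

def decompose(deviation):
    """Split a finger-force deviation into motor-equivalent (ME) and
    non-motor-equivalent (nME) components with respect to total force."""
    d = np.asarray(deviation, dtype=float)
    me = null_basis.T @ (null_basis @ d)    # projection onto the null space
    nme = d - me                            # the remainder changes total force
    return me, nme

me, nme = decompose([0.4, -0.1, -0.2, 0.1])   # hypothetical force deviation (N)
print(me, nme, me.sum(), nme.sum())           # me sums to ~0; nme carries the force change
```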
Random matrix approach to cross correlations in financial data
NASA Astrophysics Data System (ADS)
Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene
2002-06-01
We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices
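The abstract is truncated here, but the standard RMT comparison it refers to can be sketched as follows: build the equal-time cross-correlation matrix of normalized returns and compare its eigenvalue spectrum with the Marchenko-Pastur (random-matrix) bounds. The synthetic returns and matrix sizes below are assumptions for illustration, not the databases used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 100, 1000                       # number of stocks, number of return observations
returns = rng.normal(size=(N, L))      # synthetic (uncorrelated) price returns

# Normalize each stock's return series to zero mean and unit variance.
g = (returns - returns.mean(axis=1, keepdims=True)) / returns.std(axis=1, keepdims=True)
C = g @ g.T / L                        # equal-time cross-correlation matrix

eigvals = np.linalg.eigvalsh(C)
q = N / L
lam_min, lam_max = (1 - np.sqrt(q))**2, (1 + np.sqrt(q))**2   # Marchenko-Pastur bounds
print(f"{(eigvals > lam_max).sum()} eigenvalues above the RMT upper bound {lam_max:.2f}")
# Real market data typically show a few eigenvalues far above lam_max (market/sector modes).
```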
NASA Astrophysics Data System (ADS)
Schwarz-Zanetti, Gabriela; Fäh, Donat; Gache, Sylvain; Kästli, Philipp; Loizeau, Jeanluc; Masciadri, Virgilio; Zenhäusern, Gregor
2018-03-01
The Valais is the most seismically active region of Switzerland. Strong damaging events occurred in 1755, 1855, and 1946. Based on historical documents, we discuss two known damaging events in the sixteenth century: the 1524 Ardon and the 1584 Aigle earthquakes. For the 1524 event, a document describes damage in Ardon, Plan-Conthey, and Savièse, and a stone tablet at the new bell tower of the Ardon church confirms the reconstruction of the bell tower after the earthquake. Additionally, significant construction activity in the Upper Valais churches during the second quarter of the sixteenth century is discussed, although it cannot be clearly related to this event. The assessed moment magnitude Mw of the 1524 event is 5.8, with an error of about 0.5 units corresponding to one standard deviation. The epicenter is at 46.27 N, 7.27 E with a high uncertainty of about 50 km corresponding to one standard deviation. The assessed moment magnitude Mw of the 1584 main shock is 5.9, with an error of about 0.25 units corresponding to one standard deviation. The epicenter is at 46.33 N and 6.97 E with an uncertainty of about 25 km corresponding to one standard deviation. Exceptional movements in Lake Geneva wreaked havoc along the shore of the Rhone delta. The large extent of the induced damage can be explained by an extensive subaquatic slide with a resultant tsunami and seiche in Lake Geneva. The strongest of the aftershocks occurred on March 14 with magnitude 5.4 and triggered a destructive landslide covering the villages Corbeyrier and Yvorne, VD.
The Roles and Uses of Design Principles for Developing the Trialogical Approach on Learning
ERIC Educational Resources Information Center
Paavola, Sami; Lakkala, Minna; Muukkonen, Hanni; Kosonen, Kari; Karlgren, Klas
2011-01-01
In the present paper, the development and use of a specific set of pedagogical design principles in a large research and development project are analysed. The project (the Knowledge Practices Laboratory) developed technology and a pedagogical approach to support certain kinds of collaborative knowledge creation practices related to the…
ERIC Educational Resources Information Center
FERSTER, C.B.
These experiments with verbal behavior were carried out as an extension and adaptation of general laboratory principles developed with animals. The experiments covered three areas. The first was an application of general principles of verbal behavior, largely based on Skinner's analysis, to the problems of teaching a second language. Actual…
J.R. Curry; W.L. Fons
1940-01-01
General principles of forest management were established over 200 years ago in Central Europe and the task of the American forester has been largely to adapt these principles to the management of the vast, rough, and inaccessible natural forests of this country. A series of essentially new problems has arisen, however. In the humid climate where forestry originated,...
Leave islands as refugia for low-mobility species in managed forest mosaics
Stephanie J. Wessell-Kelly; Deanna H. Olson
2013-01-01
In recent years, forest management in the Pacific Northwest has shifted from one based largely on resource extraction to one based on ecosystem management principles. Forest management based on these principles involves simultaneously balancing and sustaining multiple forest resource values, including silvicultural, social, economic, and ecological objectives. Leave...
Teaching the Economics of Urban Sprawl in the Principles of Economics Course
ERIC Educational Resources Information Center
Eckenrod, Sarah B.; Holahan, William L.
2004-01-01
The authors provide an explanation of urban sprawl using topics commonly taught in the principles of economics course. Specifically, employing the concepts of congestible public goods, they explain that underpriced road usage leads to an inefficiently large proportion of the population moving farther from the cities. Increased demand for highway…
Assessment of a Diversity Assignment in a PR Principles Course
ERIC Educational Resources Information Center
Gallicano, Tiffany Derville; Stansberry, Kathleen
2012-01-01
This study assesses an assignment for incorporating diversity into the principles of public relations course. The assignment is tailored to the challenges of using an active learning approach in a large lecture class. For the assignment, students write a goal, objectives, strategies, an identification of tactics, and evaluation plans for either…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magnelli, A; Smith, A; Chao, S
2016-06-15
Purpose: Spinal stereotactic body radiotherapy (SBRT) involves highly conformal dose distributions and steep dose gradients due to the proximity of the spinal cord to the treatment volume. To achieve the planning goals while limiting the spinal cord dose, patients are set up using kV cone-beam CT (kV-CBCT) with 6 degree-of-freedom corrections. The kV-CBCT registration with the reference CT is dependent on a user-selected region of interest (ROI). The objective of this work is to determine the dosimetric impact of ROI selection. Methods: Twenty patients were selected for this study. For each patient, the kV-CBCT was registered to the reference CT using three ROIs: 1) the external body, 2) a large anatomic region, and 3) a small region focused on the target volume. Following each registration, the aligned CBCTs and contours were input to the treatment planning system for dose evaluation. The minimum dose, dose to 99% and 90% of the tumor volume (D99%, D90%), dose to 0.03 cc, and the dose to 10% of the spinal cord subvolume (V10Gy) were compared to the planned values. Results: The average deviations in the tumor minimum dose were 2.68%±1.7%, 4.6%±4.0%, and 14.82%±9.9% for the small, large, and external ROIs, respectively. The average deviations in tumor D99% were 1.15%±0.7%, 3.18%±1.7%, and 10.0%±6.6%, respectively. The average deviations in tumor D90% were 1.00%±0.96%, 1.14%±1.05%, and 3.19%±4.77%, respectively. The average deviations in the maximum dose to the spinal cord were 2.80%±2.56%, 7.58%±8.28%, and 13.35%±13.14%, respectively. The average deviations in V10Gy to the spinal cord were 1.69%±0.88%, 1.98%±2.79%, and 2.71%±5.63%. Conclusion: When using automated registration algorithms for CBCT-reference alignment, a small target-focused ROI results in the least dosimetric deviation from the plan. It is recommended to focus narrowly on the target volume to keep the spinal cord dose below tolerance.
Motion-robust intensity-modulated proton therapy for distal esophageal cancer.
Yu, Jen; Zhang, Xiaodong; Liao, Li; Li, Heng; Zhu, Ronald; Park, Peter C; Sahoo, Narayan; Gillin, Michael; Li, Yupeng; Chang, Joe Y; Komaki, Ritsuko; Lin, Steven H
2016-03-01
To develop methods for evaluation and mitigation of dosimetric impact due to respiratory and diaphragmatic motion during free breathing in treatment of distal esophageal cancers using intensity-modulated proton therapy (IMPT). This was a retrospective study on 11 patients with distal esophageal cancer. For each patient, four-dimensional computed tomography (4D CT) data were acquired, and a nominal dose was calculated on the average phase of the 4D CT. The changes of water equivalent thickness (ΔWET) to cover the treatment volume from the peak of inspiration to the valley of expiration were calculated for a full range of beam angle rotation. Two IMPT plans were calculated: one at beam angles corresponding to small ΔWET and one at beam angles corresponding to large ΔWET. Four patients were selected for the calculation of 4D-robustness-optimized IMPT plans due to large motion-induced dose errors generated in conventional IMPT. To quantitatively evaluate motion-induced dose deviation, the authors calculated the lowest dose received by 95% (D95) of the internal clinical target volume for the nominal dose, the D95 calculated on the maximum inhale and exhale phases of 4D CT (DCT0 and DCT50), the 4D composite dose, and the 4D dynamic dose for a single fraction. The dose deviation increased with the average ΔWET of the implemented beams, ΔWETave. When ΔWETave was less than 5 mm, the dose error was less than 1 cobalt gray equivalent based on DCT0 and DCT50. The dose deviation determined on the basis of DCT0 and DCT50 was proportionally larger than that determined on the basis of the 4D composite dose. The 4D-robustness-optimized IMPT plans notably reduced the overall dose deviation of multiple fractions and the dose deviation caused by the interplay effect in a single fraction. In IMPT for distal esophageal cancer, ΔWET analysis can be used to select the beam angles that are least affected by respiratory and diaphragmatic motion. To further reduce dose deviation, the 4D-robustness optimization can be implemented for IMPT planning. Calculation of DCT0 and DCT50 is a conservative method to estimate the motion-induced dose errors.
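As a rough illustration of the ΔWET-based beam-angle selection described above, the sketch below ranks candidate gantry angles by the average change in water-equivalent thickness between the extreme breathing phases. All numbers, array shapes, and the ray-tracing stand-in are hypothetical placeholders, not data or code from the study.

```python
import numpy as np

# Hypothetical example: rank candidate IMPT beam angles by the average change in
# water-equivalent thickness (ΔWET) between peak-inhale and peak-exhale 4D CT phases.
# wet_inhale/wet_exhale would normally come from ray tracing through the CT phases;
# here they are random placeholders.
rng = np.random.default_rng(0)
angles = np.arange(0, 360, 10)                     # candidate gantry angles (degrees)
n_rays = 200                                       # rays covering the target per angle
wet_inhale = 150 + 20 * rng.random((angles.size, n_rays))          # mm, placeholder
wet_exhale = wet_inhale + rng.normal(0, 6, (angles.size, n_rays))  # mm, placeholder

delta_wet = np.abs(wet_inhale - wet_exhale)        # per-ray ΔWET (mm)
delta_wet_ave = delta_wet.mean(axis=1)             # ΔWET averaged over the target

# Beams with small average ΔWET are expected to be least affected by motion.
ranked = angles[np.argsort(delta_wet_ave)]
print("Most motion-robust candidate angles:", ranked[:4])
print("Average ΔWET at those angles (mm):", np.sort(delta_wet_ave)[:4].round(1))
```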
Fries, James F; Krishnan, Eswar
2004-01-01
The concept of 'equipoise', or the 'uncertainty principle', has been represented as a central ethical principle, and holds that a subject may be enrolled in a randomized controlled trial (RCT) only if there is true uncertainty about which of the trial arms is most likely to benefit the patient. We sought to estimate the frequency with which equipoise conditions were met in industry-sponsored RCTs in rheumatology, to explore the reasons for any deviations from equipoise, to examine the concept of 'design bias', and to consider alternative ethical formulations that might improve subject safety and autonomy. We studied abstracts accepted for the 2001 American College of Rheumatology meetings that reported RCTs, acknowledged industry sponsorship, and had clinical end-points (n = 45), and examined the proportion of studies that favored the registration or marketing of the sponsor's drug. In every trial (45/45) results were favorable to the sponsor, indicating that results could have been predicted in advance solely by knowledge of sponsorship (P < 0.0001). Equipoise clearly was being systematically violated. Publication bias appeared to be an incomplete explanation for this dramatic result; this bias occurs after a study is completed. Rather, we hypothesize that 'design bias', in which extensive preliminary data are used to design studies with a high likelihood of being positive, is the major cause of the asymmetric results. Design 'bias' occurs before the trial is begun and is inconsistent with the equipoise principle. However, design bias increases scientific efficiency, decreases drug development costs, and limits the number of subjects required, probably reducing aggregate risks to participants. Conceptual and ethical issues were found with the equipoise principle, which encourages performance of negative studies; ignores patient values, patient autonomy, and social benefits; is applied at a conceptually inappropriate decision point (after randomization rather than before); and is in conflict with the Belmont, Nuremberg, and other sets of ethical principles, as well as with US Food and Drug Administration procedures. We propose a principle of 'positive expected outcomes', which informs the assessment that a trial is ethical, together with a restatement of the priority of personal autonomy.
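As a quick sanity check on the quoted P < 0.0001, one can compute the chance of 45 out of 45 sponsor-favorable results under a strict-equipoise assumption; the modelling choice of a fair 50:50 coin per trial is ours, not the authors'.

```python
from math import comb

# Under strict equipoise one might model each sponsored trial as a fair coin:
# a 50% chance that the sponsor's arm comes out ahead. The chance that all
# 45 of 45 trials favor the sponsor is then 0.5**45.
n, k = 45, 45
p_all_favorable = sum(comb(n, i) for i in range(k, n + 1)) * 0.5**n
print(f"P(all {n} trials favor sponsor | equipoise) = {p_all_favorable:.2e}")
# roughly 3e-14, far below the P < 0.0001 quoted in the abstract
```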
Principles of cooperation across systems: from human sharing to multicellularity and cancer.
Aktipis, Athena
2016-01-01
From cells to societies, several general principles arise again and again that facilitate cooperation and suppress conflict. In this study, I describe three general principles of cooperation and how they operate across systems including human sharing, cooperation in animal and insect societies and the massively large-scale cooperation that occurs in our multicellular bodies. The first principle is that of Walk Away: that cooperation is enhanced when individuals can leave uncooperative partners. The second principle is that resource sharing is often based on the need of the recipient (i.e., need-based transfers) rather than on strict account-keeping. And the last principle is that effective scaling up of cooperation requires increasingly sophisticated and costly cheater suppression mechanisms. By comparing how these principles operate across systems, we can better understand the constraints on cooperation. This can facilitate the discovery of novel ways to enhance cooperation and suppress cheating in its many forms, from social exploitation to cancer.
Statistical isotropy violation in WMAP CMB maps resulting from non-circular beams
NASA Astrophysics Data System (ADS)
Das, Santanu; Mitra, Sanjit; Rotti, Aditya; Pant, Nidhi; Souradeep, Tarun
2016-06-01
Statistical isotropy (SI) of cosmic microwave background (CMB) fluctuations is a key observational test to validate the cosmological principle underlying the standard model of cosmology. While a detection of SI violation would have immense cosmological ramifications, it is important to recognise their possible origin in systematic effects of observations. The WMAP seven year (WMAP-7) release claimed a significant deviation from SI in specific bipolar spherical harmonic (BipoSH) coefficients. Here we present the first explicit reproduction of the measurements reported in WMAP-7, confirming that beam systematics alone can completely account for the measured SI violation. The possibility of such a systematic origin was alluded to in the WMAP-7 paper itself and by other authors, but not so explicitly as to account for it accurately. We simulate CMB maps using the actual WMAP non-circular beams and scanning strategy. Our estimated BipoSH spectra from these maps match the WMAP-7 results very well. It is also evident that only a very careful and adequately detailed modelling, as carried out here, can conclusively establish that the entire signal arises from the non-circular beam effect. This is important since cosmic SI violation signals are expected to be subtle, and dismissing a large SI violation signal as an observational artefact based on simplistic plausibility arguments runs the serious risk of "throwing the baby out with the bathwater".
Joshi, Tirtha Raj; Hakel, Peter; Hsu, Scott C.; ...
2017-03-22
In this article, we report the first direct experimental evidence of interspecies ion separation in direct-drive inertial confinement fusion experiments performed at the OMEGA laser facility via spectrally, temporally, and spatially resolved imaging x-ray-spectroscopy data [S. C. Hsu et al., Europhys. Lett. 115, 65001 (2016)]. These experiments were designed based on the expectation that interspecies ion thermo-diffusion would be the strongest for species with a large mass and charge difference. The targets were spherical plastic shells filled with D2 and a trace amount of Ar (0.1% or 1% by atom). Ar K-shell spectral features were observed primarily between the time of first-shock convergence and slightly before the neutron bang time, using a time- and space-integrated spectrometer, a streaked crystal spectrometer, and two gated multi-monochromatic x-ray imagers fielded along quasi-orthogonal lines of sight. Detailed spectroscopic analyses of spatially resolved Ar K-shell lines reveal the deviation from the initial 1% Ar gas fill and show both Ar-concentration enhancement and depletion at different times and radial positions of the implosion. The experimental results are interpreted using radiation-hydrodynamic simulations that include recently implemented, first-principles models of interspecies ion diffusion. Lastly, the experimentally inferred Ar-atom fraction profiles agree reasonably with calculated profiles associated with the incoming and rebounding first shock.
Polak, Micha; Rubinovich, Leonid
2011-10-06
Nanoconfinement entropic effects on chemical equilibrium involving a small number of molecules, which we term NCECE, are revealed by two widely diverse types of reactions. Employing statistical-mechanical principles, we show how the NCECE effect stabilizes nucleotide dimerization observed within self-assembled molecular cages. Furthermore, the effect provides the basis for dimerization even under an aqueous environment inside the nanocage. Likewise, the NCECE effect is pertinent to a longstanding issue in astrochemistry, namely the extra deuteration commonly observed for molecules reacting on interstellar dust grain surfaces. The origin of the NCECE effect is elucidated by means of the probability distributions of the reaction extent and related variations in the reactant-product mixing entropy. Theoretical modelling beyond our previous preliminary work highlights the role of the nanospace size in addition to that of the nanosystem size, namely the limited amount of molecules in the reaction mixture. Furthermore, the NCECE effect can depend also on the reaction mechanism, and on deviations from stoichiometry. The NCECE effect, leading to enhanced, greatly variable equilibrium "constants", constitutes a unique physical-chemical phenomenon, distinguished from the usual thermodynamical properties of macroscopically large systems. Being significant particularly for weakly exothermic reactions, the effects should stabilize products in other closed nanoscale structures, and thus can have notable implications for the growing nanotechnological utilization of chemical syntheses conducted within confined nanoreactors.
NASA Astrophysics Data System (ADS)
Choi, H. J.; Lee, S. B.; Lee, H. G.; Y Back, S.; Kim, S. H.; Kang, H. S.
2017-07-01
The many components of a large scientific device must be installed and operated at accurate three-dimensional coordinates (X, Y, and Z), which requires survey and alignment. The locations of the aligned components must not change in order to ensure that the electron beam parameters (Energy 10 GeV, Charge 200 pC, Bunch Length 60 fs, Emittance X/Y 0.481 μm/0.256 μm) of PAL-XFEL (X-ray Free Electron Laser of the Pohang Accelerator Laboratory) remain stable and that the facility can be operated without problems. Over time, however, the ground undergoes uplift and subsidence, which deforms building floors. The deformation of the ground and buildings changes the location of devices such as magnets and RF accelerator tubes, which eventually leads to alignment errors (∆X, ∆Y, and ∆Z). Once alignment errors occur in these components, the electron beam deviates from its course and the beam parameters change accordingly. PAL-XFEL has installed the Hydrostatic Leveling System (HLS) to measure and record the vertical change of buildings and ground consistently and systematically, and the Wire Position System (WPS) to measure the two-dimensional changes of girders. This paper introduces the operating principle and design concept of the WPS and discusses the current status of its installation and operation.
Stott, Lucy C.; Schnell, Sabine; Hogstrand, Christer; Owen, Stewart F.; Bury, Nic R.
2015-01-01
The gill is the principal site of xenobiotic transfer to and from the aqueous environment. To replace, refine or reduce (3Rs) the large numbers of fish used in in vivo uptake studies, an effective in vitro screen is required that mimics the function of the teleost gill. This study uses a rainbow trout (Oncorhynchus mykiss) primary gill cell culture system grown on permeable inserts, which tolerates apical freshwater thus mimicking the intact organ, to assess the uptake and efflux of pharmaceuticals across the gill. Bidirectional transport studies in media of seven pharmaceuticals (propranolol, metoprolol, atenolol, formoterol, terbutaline, ranitidine and imipramine) showed they were transported transcellularly across the epithelium. However, studies conducted in water showed enhanced uptake of propranolol, ranitidine and imipramine. Concentration-equilibrated conditions without a concentration gradient suggested that a proportion of the uptake of propranolol and imipramine is via a carrier-mediated process. Further study using propranolol showed that its transport is pH-dependent and at very low, environmentally relevant concentrations (ng L−1), transport deviated from linearity. At higher concentrations, passive uptake dominated. Known inhibitors of drug transport proteins (cimetidine, MK571, cyclosporine A and quinidine) inhibited propranolol uptake, whilst amantadine and verapamil were without effect. Together this suggests the involvement of specific members of the SLC and ABC drug transporter families in pharmaceutical transport. PMID:25544062
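The abstract summarizes bidirectional transport measurements; a standard way such insert data are reduced is an apparent permeability coefficient, P_app = (dQ/dt)/(A·C0). The sketch below is a generic illustration with hypothetical numbers and is not the authors' analysis pipeline.

```python
import numpy as np

# Generic apparent-permeability calculation often used for insert-grown epithelia:
# P_app = (dQ/dt) / (A * C0), with dQ/dt the slope of receiver-compartment
# accumulation. Numbers below are hypothetical placeholders.
time_h = np.array([0.5, 1.0, 2.0, 3.0])                 # sampling times (h)
receiver_amount_nmol = np.array([0.8, 1.7, 3.3, 5.0])   # cumulative amount (nmol)

area_cm2 = 1.12          # insert growth area (assumed)
c0_nmol_per_ml = 10.0    # donor concentration (assumed)

slope_nmol_per_s = np.polyfit(time_h * 3600.0, receiver_amount_nmol, 1)[0]
# units: (nmol/s) / (cm2 * nmol/cm3) = cm/s, since 1 ml = 1 cm3
p_app_cm_per_s = slope_nmol_per_s / (area_cm2 * c0_nmol_per_ml)
print(f"P_app ≈ {p_app_cm_per_s:.2e} cm/s")
```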
Greek classicism in living structure? Some deductive pathways in animal morphology.
Zweers, G A
1985-01-01
Classical temples in ancient Greece show two deterministic illusionistic principles of architecture, which govern their functional design: geometric proportionalism and a set of illusion-strengthening rules in the proportionalism's "stochastic margin". Animal morphology, in its mechanistic-deductive revival, applies just one architectural principle, which is not always satisfactory. Whether a "Greek Classical" situation occurs in the architecture of living structure is to be investigated by extreme testing with deductive methods. Three deductive methods for explanation of living structure in animal morphology are proposed: the parts, the compromise, and the transformation deduction. The methods are based upon the systems concept for an organism, the flow chart for a functionalistic picture, and the network chart for a structuralistic picture, whereas the "optimal design" serves as the architectural principle for living structure. These methods clearly show the high explanatory power of deductive methods in morphology, but they also make one open end explicit: neutral issues do exist. Full explanation of living structure requires three entries: functional design within architectural and transformational constraints. The transformational constraint necessarily brings in a stochastic component: a random variation that serves as a sort of "free management space". This variation must be a variation from the deterministic principle of the optimal design, since any transformation requires space for plasticity in structure and action, and flexibility in role fulfilment. Finally, the question arises whether a situation similar to that in Greek Classical temples exists for animal structure. This would mean that the random variation found when the optimal design is used to explain structure comprises, apart from a stochastic part, real deviations that form yet another deterministic part. This deterministic part could be a set of rules that governs actualization in the "free management space".
Protein structure similarity from Principle Component Correlation analysis.
Zhou, Xiaobo; Chou, James; Wong, Stephen T C
2006-01-25
Owing to rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square-deviation (RMSD) in their best-superimposed atomic coordinates. RMSD is the golden rule of measuring structural similarity when the structures are nearly identical; it, however, fails to detect the higher order topological similarities in proteins evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. We measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrix. In our approach, the Principle Component Correlation (PCC) analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction in the presence or absence of encoded N to C terminal sense, there are strong correlations between the principle components of interaction matrices of structurally or topologically similar proteins. The PCC method is extensively tested for protein structures that belong to the same topological class but are significantly different by RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum eigenvalues can be highly effective in clustering structurally or topologically similar proteins. We believe that the PCC analysis of interaction matrix is highly flexible in adopting various structural parameters for protein structure comparison.
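A minimal sketch of the core idea, assuming a distance-based interaction matrix: build a symmetric matrix of pairwise distances between secondary-structure elements for each protein, extract its leading eigenvectors, and correlate them between proteins. The centroids and perturbation below are synthetic placeholders, not PDB data or the authors' implementation.

```python
import numpy as np

def interaction_matrix(centroids):
    """Symmetric matrix of pairwise distances between secondary-structure centroids."""
    d = centroids[:, None, :] - centroids[None, :, :]
    return np.linalg.norm(d, axis=-1)

def principal_components(mat, k=3):
    """Leading eigenvectors (ranked by |eigenvalue|) of a symmetric interaction matrix."""
    vals, vecs = np.linalg.eigh(mat)
    order = np.argsort(np.abs(vals))[::-1]
    return vecs[:, order[:k]], vals[order[:k]]

rng = np.random.default_rng(1)
# Hypothetical centroids of secondary-structure elements for two related proteins.
protein_a = rng.normal(size=(8, 3)) * 10.0
protein_b = protein_a + rng.normal(scale=1.0, size=(8, 3))   # perturbed copy

pc_a, ev_a = principal_components(interaction_matrix(protein_a))
pc_b, ev_b = principal_components(interaction_matrix(protein_b))

# Correlate matched principal components (the sign of an eigenvector is arbitrary).
corr = [abs(np.corrcoef(pc_a[:, i], pc_b[:, i])[0, 1]) for i in range(pc_a.shape[1])]
print("PC correlations:", np.round(corr, 3))
print("Max-eigenvalue ratio:", round(ev_a[0] / ev_b[0], 3))
```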
Balanced Branching in Transcription Termination
NASA Technical Reports Server (NTRS)
Harrington, K. J.; Laughlin, R. B.; Liang, S.
2001-01-01
The theory of stochastic transcription termination based on free-energy competition requires two or more reaction rates to be delicately balanced over a wide range of physical conditions. A large body of work on glasses and large molecules suggests that this should be impossible in such a large system in the absence of a new organizing principle of matter. We review the experimental literature of termination and find no evidence for such a principle but many troubling inconsistencies, most notably anomalous memory effects. These suggest that termination has a deterministic component and may conceivably be not stochastic at all. We find that a key experiment by Wilson and von Hippel allegedly refuting deterministic termination was an incorrectly analyzed regulatory effect of Mg(2+) binding.
Robust regression for large-scale neuroimaging studies.
Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand
2015-05-01
Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
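To illustrate why an M-estimator helps with the heavy-tailed residuals described above, the toy comparison below fits ordinary least squares and a Huber regressor to data containing a few gross outliers; it is a generic scikit-learn example, not the neuroimaging pipeline used in the study.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor

# Toy illustration of why robust regression helps with heavy-tailed residuals:
# fit OLS and a Huber M-estimator to data containing a handful of gross outliers.
rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=(n, 1))
y = 2.0 * x[:, 0] + rng.normal(scale=1.0, size=n)
y[:10] += 25.0                      # a few "outlier subjects"

ols = LinearRegression().fit(x, y)
huber = HuberRegressor().fit(x, y)
print(f"true slope 2.0 | OLS {ols.coef_[0]:.2f} | Huber {huber.coef_[0]:.2f}")
```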
Optimization of hybrid power system composed of SMES and flywheel MG for large pulsed load
NASA Astrophysics Data System (ADS)
Niiyama, K.; Yagai, T.; Tsuda, M.; Hamajima, T.
2008-09-01
A superconducting magnetic energy storage (SMES) system has advantages such as rapid, large power response and high storage efficiency that are superior to those of other energy storage systems. A flywheel motor generator (FWMG) has large-scale capacity and high reliability, and hence is broadly used for large pulsed loads, while it has comparatively low storage efficiency due to high mechanical loss compared with SMES. A fusion power plant such as the International Thermonuclear Experimental Reactor (ITER) imposes a large, long pulsed load, which causes a frequency deviation in the utility power system. In order to keep the frequency within an allowable deviation, we propose a hybrid power system for the pulsed load that combines the SMES and the FWMG with the utility power system. We evaluate the installation cost and frequency control performance of three power systems combined with energy storage devices: (i) SMES with the utility power, (ii) FWMG with the utility power, and (iii) both SMES and FWMG with the utility power. The first power system has excellent frequency control performance but a high installation cost. The second has inferior frequency control performance but the lowest installation cost. The third has good frequency control performance, and its installation cost can be brought below that of the first power system by adjusting the ratio between SMES and FWMG.
NASA Technical Reports Server (NTRS)
Palmer, Michael T.; Abbott, Kathy H.
1994-01-01
This study identifies improved methods to present system parameter information for detecting abnormal conditions and to identify system status. Two workstation experiments were conducted. The first experiment determined if including expected-value-range information in traditional parameter display formats affected subject performance. The second experiment determined if using a nontraditional parameter display format, which presented relative deviation from expected value, was better than traditional formats with expected-value ranges included. The inclusion of expected-value-range information onto traditional parameter formats was found to have essentially no effect. However, subjective results indicated support for including this information. The nontraditional column deviation parameter display format resulted in significantly fewer errors compared with traditional formats with expected-value-ranges included. In addition, error rates for the column deviation parameter display format remained stable as the scenario complexity increased, whereas error rates for the traditional parameter display formats with expected-value ranges increased. Subjective results also indicated that the subjects preferred this new format and thought that their performance was better with it. The column deviation parameter display format is recommended for display applications that require rapid recognition of out-of-tolerance conditions, especially for a large number of parameters.
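A minimal sketch of the quantity such a "column deviation" format might display, under the assumption that each parameter is normalized by the half-width of its expected-value range; the parameter names, values, and ranges are hypothetical.

```python
# Sketch (with an assumed normalization) of the value behind a "column deviation"
# display: each parameter is shown as its deviation from the expected value,
# scaled by the half-width of the expected-value range, so 0 is nominal and
# |deviation| > 1 means the parameter is outside its expected range.
def column_deviation(value, expected, range_lo, range_hi):
    half_width = (range_hi - range_lo) / 2.0
    return (value - expected) / half_width

# Hypothetical parameters: (current value, expected, range low, range high)
params = {"EGT":  (620.0, 600.0, 550.0, 650.0),
          "N1":   (88.0,  90.0,  80.0, 100.0),
          "OilP": (72.0,  55.0,  40.0,  70.0)}

for name, (value, expected, lo, hi) in params.items():
    d = column_deviation(value, expected, lo, hi)
    flag = "  <-- out of expected range" if abs(d) > 1.0 else ""
    print(f"{name:5s} deviation = {d:+.2f}{flag}")
```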
NASA Astrophysics Data System (ADS)
Santillán, David; Mosquera, Juan-Carlos; Cueto-Felgueroso, Luis
2017-11-01
Hydraulic fracture trajectories in rocks and other materials are highly affected by spatial heterogeneity in their mechanical properties. Understanding the complexity and structure of fluid-driven fractures and their deviation from the predictions of homogenized theories is a practical problem in engineering and geoscience. We conduct a Monte Carlo simulation study to characterize the influence of heterogeneous mechanical properties on the trajectories of hydraulic fractures propagating in elastic media. We generate a large number of random fields of mechanical properties and simulate pressure-driven fracture propagation using a phase-field model. We model the mechanical response of the material as that of an elastic isotropic material with heterogeneous Young modulus and Griffith energy release rate, assuming that fractures propagate in the toughness-dominated regime. Our study shows that the variance and the spatial covariance of the mechanical properties are controlling factors in the tortuousness of the fracture paths. We characterize the deviation of fracture paths from the homogenous case statistically, and conclude that the maximum deviation grows linearly with the distance from the injection point. Additionally, fracture path deviations seem to be normally distributed, suggesting that fracture propagation in the toughness-dominated regime may be described as a random walk.
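One ingredient of such a Monte Carlo study is the generation of spatially correlated random fields of material properties. The sketch below synthesizes a 2-D Gaussian random field of Young's modulus with a prescribed correlation length via FFT-based spectral sampling; the covariance model and parameter values are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def gaussian_random_field(n, corr_length, mean, std, seed=0):
    """2-D Gaussian random field with an approximately Gaussian covariance,
    synthesized spectrally on an n x n grid with unit spacing."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n) * 2 * np.pi
    kx, ky = np.meshgrid(k, k, indexing="ij")
    # Spectral weight of a Gaussian covariance with correlation length `corr_length`
    spectrum = np.exp(-0.5 * (kx**2 + ky**2) * corr_length**2)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    field = np.real(np.fft.ifft2(np.sqrt(spectrum) * noise))
    field = (field - field.mean()) / field.std()      # standardize, then rescale
    return mean + std * field

# Hypothetical heterogeneous Young's modulus field (GPa) for one Monte Carlo realization.
E = gaussian_random_field(n=256, corr_length=8.0, mean=30.0, std=5.0, seed=3)
print("E field: mean %.1f GPa, std %.1f GPa, min %.1f, max %.1f"
      % (E.mean(), E.std(), E.min(), E.max()))
```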
Adair, Brooke; Rodda, Jillian; McGinley, Jennifer L; Graham, H Kerr; Morris, Meg E
2016-08-01
To examine the kinematic gait deviations at the trunk and pelvis of children with hereditary spastic paraplegia (HSP). This exploratory observational study quantified gait kinematics for the trunk and pelvis from 11 children with HSP (7 males, 4 females) using the Gait Profile Score and Gait Variable Scores (GVS), and compared the kinematics to data from children with typical development using a Mann-Whitney U test. Children with HSP (median age 11y 4mo, interquartile range 4y) demonstrated large deviations in the GVS for the trunk and pelvis in the sagittal and coronal planes when compared to the gait patterns of children with typical development (p=0.010-0.020). Specific deviations included increased range of movement for the trunk in the coronal plane and increased excursion of the trunk and pelvis in the sagittal plane. In the transverse plane, children with HSP demonstrated later peaks in posterior pelvic rotation. The kinematic gait deviations identified in this study raise questions about the contribution of muscle weakness in HSP. Further research is warranted to determine contributing factors for gait dysfunction in HSP, especially the relative influence of spasticity and weakness. © 2016 Mac Keith Press.
Analysis and Design of Launch Vehicle Flight Control Systems
NASA Technical Reports Server (NTRS)
Wie, Bong; Du, Wei; Whorton, Mark
2008-01-01
This paper describes the fundamental principles of launch vehicle flight control analysis and design. In particular, the classical concept of "drift-minimum" and "load-minimum" control principles is re-examined and its performance and stability robustness with respect to modeling uncertainties and a gimbal angle constraint is discussed. It is shown that an additional feedback of angle-of-attack or lateral acceleration can significantly improve the overall performance and robustness, especially in the presence of unexpected large wind disturbance. Non-minimum-phase structural filtering of "unstably interacting" bending modes of large flexible launch vehicles is also shown to be effective and robust.
The quantum limit for gravitational-wave detectors and methods of circumventing it
NASA Technical Reports Server (NTRS)
Thorne, K. S.; Caves, C. M.; Sandberg, V. D.; Zimmermann, M.; Drever, R. W. P.
1979-01-01
The Heisenberg uncertainty principle prevents the monitoring of the complex amplitude of a mechanical oscillator more accurately than a certain limit value. This 'quantum limit' is a serious obstacle to the achievement of a 10^-21 gravitational-wave detection sensitivity. This paper examines the principles of the back-action evasion technique and finds that this technique may be able to overcome the problem of the quantum limit. Back-action evasion does not solve, however, other problems of detection, such as weak coupling, large amplifier noise, and large Nyquist noise.
Fattebert, Jean-Luc; Lau, Edmond Y.; Bennion, Brian J.; ...
2015-10-22
Enzymes are complicated solvated systems that typically require many atoms to simulate their function with any degree of accuracy. We have recently developed numerical techniques for large scale First-Principles molecular dynamics simulations and applied them to study the enzymatic reaction catalyzed by acetylcholinesterase. We carried out Density functional theory calculations for a quantum mechanical (QM) sub-system consisting of 612 atoms with an O(N) complexity finite-difference approach. The QM sub-system is embedded inside an external potential field representing the electrostatic effect due to the environment. We obtained finite temperature sampling by First-Principles molecular dynamics for the acylation reaction of acetylcholine catalyzed by acetylcholinesterase. Our calculations show two energy barriers along the reaction coordinate for the enzyme-catalyzed acylation of acetylcholine. In conclusion, the second barrier (8.5 kcal/mole) is rate-limiting for the acylation reaction and in good agreement with experiment.
Common inputs in subthreshold membrane potential: The role of quiescent states in neuronal activity
NASA Astrophysics Data System (ADS)
Montangie, Lisandro; Montani, Fernando
2018-06-01
Experiments in certain regions of the cerebral cortex suggest that the spiking activity of neuronal populations is regulated by common non-Gaussian inputs across neurons. We model these deviations from random-walk processes with q-Gaussian distributions into simple threshold neurons, and investigate the scaling properties in large neural populations. We show that deviations from the Gaussian statistics provide a natural framework to regulate population statistics such as sparsity, entropy, and specific heat. This type of description allows us to provide an adequate strategy to explain the information encoding in the case of low neuronal activity and its possible implications on information transmission.
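A small numerical sketch of the setup described above: heavy-tailed common input drawn from a q-Gaussian-like distribution is shared across simple threshold neurons, and the population activity is summarized for a few values of q. The sampling relies on the standard identification of a q-Gaussian (1 < q < 3) with a Student-t distribution with ν = (3−q)/(q−1) degrees of freedom, treated here as an assumption of the sketch; thresholds, mixing weights, and sizes are arbitrary choices.

```python
import numpy as np

# Heavy-tailed (q-Gaussian-like) common input into simple threshold neurons.
# For 1 < q < 3 a q-Gaussian can be sampled as a (rescaled) Student-t with
# nu = (3 - q) / (q - 1) degrees of freedom.
def q_gaussian_samples(q, size, rng):
    nu = (3.0 - q) / (q - 1.0)
    return rng.standard_t(nu, size=size)

rng = np.random.default_rng(7)
n_neurons, n_samples, theta = 100, 20000, 1.5

for q in (1.01, 1.5, 2.0):            # q -> 1 approaches the Gaussian case
    common = q_gaussian_samples(q, n_samples, rng)            # shared input
    private = rng.normal(size=(n_samples, n_neurons))          # independent noise
    v = 0.7 * common[:, None] + private                        # membrane potentials
    spikes = (v > theta).astype(int)
    frac_active = spikes.mean(axis=1)
    print(f"q={q:4.2f}  mean activity {frac_active.mean():.3f}  "
          f"P(silent population) {np.mean(frac_active == 0):.3f}")
```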
Geochemical fingerprinting and source discrimination in soils at the continental scale
NASA Astrophysics Data System (ADS)
Negrel, Philippe; Sadeghi, Martiya; Ladenberger, Anna; Birke, Manfred; Reimann, Clemens
2014-05-01
Agricultural soil (Ap-horizon, 0-20 cm) samples were collected from a large part of Europe (33 countries, 5.6 million km2) at an average density of 1 sample site per 2500 km2. The resulting 2108 soil samples were air dried, sieved to <2 mm, milled and analysed for their major and trace element concentrations by wavelength dispersive X-ray fluorescence spectrometry (WD-XRF). The main goal of this study is to provide a global view of element mobility and source rocks at the continental scale, either by reference to crustal evolution or normalized patterns of element mobility during weathering processes. The survey area includes several sedimentary basins with different geological histories, developed in different climate zones and landscapes and with different land use. In order to normalize the chemical composition of soils, mean values and standard deviations of the selected elements have been checked against values for the upper continental crust (UCC). Some elements turned out to be enriched relative to the UCC (Al, P, Zr, Pb) whereas others, like Mg, Na, Sr and Pb, were depleted with regard to the variation represented by the standard deviation. The concept of UCC-extended normalization patterns has been further used for the selected elements. The mean values of Rb, K, Y, Ti, Al, Si, Zr, Ce and Fe are very close to the UCC model even if the standard deviation suggests slight enrichment or depletion, and Zr shows the best fit with the UCC model using both the mean value and the standard deviation. Lead and Cr are enriched in European soils when compared to the UCC, but their standard deviation values show very large variations, particularly towards very low values, which can be interpreted as a lithological effect. Element variability has been explored by looking at the variations using indicator elements. Soil data have been converted into Al-normalized enrichment factors, and Na was applied as the normalizing element for studying provenance, taking into account the main lithologies of the UCC. This latter normalization highlighted variations related to the soluble and insoluble behavior of some elements (K, Rb versus Ti, Al, Si, V, Y, Zr, Ba, and La, respectively), their reactivity (Fe, Mn, Zn), association with carbonates (Ca and Sr) and with phosphates (P and Ce). The maps of normalized composition revealed some problems with the use of classical element ratios due to genetic differences in the composition of parent material, reflected, for example, in large differences in titanium content in bedrock and soil throughout Europe.
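As a worked illustration of the Al-normalized enrichment factors mentioned above, EF_X = (X/Al)_sample / (X/Al)_UCC; the soil concentrations and UCC reference values below are placeholder numbers used only to demonstrate the calculation.

```python
# Sketch of an Al-normalized enrichment factor relative to upper continental crust
# (UCC): EF_X = (X/Al)_sample / (X/Al)_UCC. Concentrations are hypothetical
# placeholders (wt% for Al, mg/kg for trace elements), as are the UCC values.
ucc = {"Al": 8.15, "Pb": 17.0, "Zr": 193.0, "Sr": 320.0}          # assumed reference
sample = {"Al": 6.5, "Pb": 30.0, "Zr": 260.0, "Sr": 120.0}        # one soil sample

def enrichment_factor(element, sample, reference, norm="Al"):
    return (sample[element] / sample[norm]) / (reference[element] / reference[norm])

for el in ("Pb", "Zr", "Sr"):
    ef = enrichment_factor(el, sample, ucc)
    label = "enriched" if ef > 1 else "depleted"
    print(f"{el}: EF = {ef:.2f} ({label} relative to UCC)")
```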
Corrigan, Patrick W
2011-08-01
This column describes strategic stigma change (SSC), which comprises five principles and corresponding practices developed as a best practice to erase prejudice and discrimination associated with mental illness and promote affirming behaviors and social inclusion. SSC principles represent more than ten years of insights from the National Consortium on Stigma and Empowerment. The principles, which are centered on consumer contact that is targeted, local, credible, and continuous, were developed to inform the growth of large-scale social marketing campaigns supported by governments and nongovernmental organizations. Future social marketing efforts to address stigma and the need for evidence to determine SSC's penetration and impact are also discussed.
Data Use Agreement | Office of Cancer Clinical Proteomics Research
CPTAC requests that data users abide by the same principles that were previously established in the Fort Lauderdale and Amsterdam meetings. The recommendations from the Fort Lauderdale meeting (2003) on best practices and principles for sharing large-scale genomic data address the roles and responsibilities of data producers, data users and funders of community resource projects.
The Role of Fisher Information Theory in the Development of Fundamental Laws in Physical Chemistry
ERIC Educational Resources Information Center
Honig, J. M.
2009-01-01
The unifying principle that involves rendering the Fisher information measure an extremum is reviewed. It is shown that with this principle, in conjunction with appropriate constraints, a large number of fundamental laws can be derived from a common source in a unified manner. The resulting economy of thought pertaining to fundamental principles…
On propagators of nonlocal relativistic diffusion of galactic cosmic rays
NASA Astrophysics Data System (ADS)
Uchaikin, V. V.; Sibatov, R. T.
2018-01-01
This report discusses a new model of cosmic ray propagation in the Galaxy. In contrast to the known models based on the principles of Brownian motion, the proposed model agrees with the relativistic principle of speed limitation and takes into account the large-scale turbulence of the interstellar medium, justifying introduction of fractional differential operators.
The Contact Principle and Utilitarian Moral Judgments in Young Children
ERIC Educational Resources Information Center
Pellizzoni, Sandra; Siegal, Michael; Surian, Luca
2010-01-01
In three experiments involving 207 preschoolers and 28 adults, we investigated the extent to which young children base moral judgments of actions aimed to protect others on utilitarian principles. When asked to judge the rightness of intervening to hurt one person in order to save five others, the large majority of children aged 3 to 5 years…
NASA Technical Reports Server (NTRS)
Wolf, David A.; Schwarz, Ray P.
1992-01-01
Measurements were taken of the path of a simulated typical tissue segment or 'particle' within a rotating fluid as a function of gravitational strength, fluid rotation rate, particle sedimentation rate, and particle initial position. Parameters were examined within the useful range for tissue culture in the NASA rotating wall culture vessels. The particle moves along a nearly circular path through the fluid (as observed from the rotating reference frame of the fluid) at the same speed as its linear terminal sedimentation speed for the external gravitational field. This gravitationally induced motion causes an increasing deviation of the particle from its original position within the fluid for a decreased rotational rate, for a more rapidly sedimenting particle, and for an increased gravitational strength. Under low gravity conditions (less than 0.1 G), the particle's motion through the fluid and its deviation from its original position become negligible. Under unit gravity conditions, large distortions (greater than 0.25 inch) occur even for particles of slow sedimentation rate (less than 1.0 cm/sec). The particle's motion is nearly independent of the particle's initial position. Comparison with mathematically predicted particle paths shows that a significant error in the mathematically predicted path occurs for large particle deviations. This results from a geometric approximation and numerically accumulating error in the mathematical technique.
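A minimal numerical sketch of the geometry described above: in the lab frame the particle is advected by the solid-body rotation of the fluid and additionally sediments at its terminal speed, so in the rotating frame it traces a circle of radius v_s/ω about its initial fluid position. Parameter values are illustrative, not those of the original measurements.

```python
import numpy as np

# Path of a sedimenting particle in a solid-body-rotating fluid (lab frame).
# The particle is advected with the fluid and additionally sediments at its
# terminal speed v_s along gravity; its deviation from a pure fluid tracer
# scales with v_s / omega. Parameters are hypothetical.
omega = 2 * np.pi * 20 / 60.0          # 20 rpm -> rad/s
v_s = 0.5                               # terminal sedimentation speed, cm/s
g_hat = np.array([0.0, -1.0])           # gravity direction (unit vector)
dt, t_end = 1e-3, 10.0                  # time step and duration, s

def integrate(x0, sediment):
    x = np.array(x0, float)
    path = [x.copy()]
    for _ in range(int(t_end / dt)):
        u_fluid = omega * np.array([-x[1], x[0]])       # rigid rotation about origin
        v = u_fluid + (v_s * g_hat if sediment else 0.0)
        x = x + v * dt
        path.append(x.copy())
    return np.array(path)

particle = integrate([2.0, 0.0], sediment=True)     # start 2 cm from rotation axis
tracer = integrate([2.0, 0.0], sediment=False)
deviation = np.linalg.norm(particle - tracer, axis=1)
print(f"circle radius v_s/omega ≈ {v_s / omega:.2f} cm, "
      f"max deviation over {t_end:.0f} s ≈ {deviation.max():.2f} cm")
```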
Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil
2016-01-01
Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode in the system. The great advantage of structural model decomposition is that (i) it allows the design of residuals that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals will need to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.
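A toy sketch of the qualitative isolation step: each fault carries a signature of expected residual deviation signs, and observed deviations prune the candidate fault set. The residual names and signatures are hypothetical, not taken from the paper's electric-circuit case study.

```python
# Minimal sketch of qualitative fault isolation from residual deviation signs.
# Each fault has a signature: the expected qualitative deviation (+, -, 0) of each
# residual. Observed deviations prune the candidate set. Names and signatures are
# hypothetical placeholders.
SIGNATURES = {
    "R1_increase": {"r_voltage": "+", "r_current": "-"},
    "C1_leak":     {"r_voltage": "-", "r_current": "0"},
    "sensor_bias": {"r_voltage": "+", "r_current": "0"},
}

def isolate(candidates, observed):
    """Keep faults whose signature is consistent with every observed deviation."""
    return {f for f in candidates
            if all(SIGNATURES[f].get(res) == sign for res, sign in observed.items())}

candidates = set(SIGNATURES)
candidates = isolate(candidates, {"r_voltage": "+"})      # first deviation observed
print("after r_voltage +:", candidates)
candidates = isolate(candidates, {"r_current": "-"})      # later deviation observed
print("after r_current -:", candidates)
```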
NASA Astrophysics Data System (ADS)
Lifton, Nathaniel; Sato, Tatsuhiko; Dunai, Tibor J.
2014-01-01
Several models have been proposed for scaling in situ cosmogenic nuclide production rates from the relatively few sites where they have been measured to other sites of interest. Two main types of models are recognized: (1) those based on data from nuclear disintegrations in photographic emulsions combined with various neutron detectors, and (2) those based largely on neutron monitor data. However, stubborn discrepancies between these model types have led to frequent confusion when calculating surface exposure ages from production rates derived from the models. To help resolve these discrepancies and identify the sources of potential biases in each model, we have developed a new scaling model based on analytical approximations to modeled fluxes of the main atmospheric cosmic-ray particles responsible for in situ cosmogenic nuclide production. Both the analytical formulations and the Monte Carlo model fluxes on which they are based agree well with measured atmospheric fluxes of neutrons, protons, and muons, indicating they can serve as a robust estimate of the atmospheric cosmic-ray flux based on first principles. We are also using updated records for quantifying temporal and spatial variability in geomagnetic and solar modulation effects on the fluxes. A key advantage of this new model (herein termed LSD) over previous Monte Carlo models of cosmogenic nuclide production is that it allows for faster estimation of scaling factors based on time-varying geomagnetic and solar inputs. Comparing scaling predictions derived from the LSD model with those of previously published models suggest potential sources of bias in the latter can be largely attributed to two factors: different energy responses of the secondary neutron detectors used in developing the models, and different geomagnetic parameterizations. Given that the LSD model generates flux spectra for each cosmic-ray particle of interest, it is also relatively straightforward to generate nuclide-specific scaling factors based on recently updated neutron and proton excitation functions (probability of nuclide production in a given nuclear reaction as a function of energy) for commonly measured in situ cosmogenic nuclides. Such scaling factors reflect the influence of the energy distribution of the flux folded with the relevant excitation functions. Resulting scaling factors indicate 3He shows the strongest positive deviation from the flux-based scaling, while 14C exhibits a negative deviation. These results are consistent with a recent Monte Carlo-based study using a different cosmic-ray physics code package but the same excitation functions.
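A schematic of how a nuclide-specific scaling factor can be formed by folding a particle flux spectrum with an excitation function and taking the site-to-reference ratio; the spectra and cross-section shape below are crude placeholders, not the LSD model's tabulated inputs.

```python
import numpy as np

# Nuclide-specific scaling factor as the ratio of the production integral
# ∫ flux(E) * sigma(E) dE at a site to the same integral at a reference location.
# Spectra and excitation function are placeholder shapes only.
energy = np.logspace(0, 4, 500)                       # MeV

def flux(energy, scale, gamma):
    return scale * energy**-gamma                     # placeholder power-law spectrum

def excitation_fn(energy, threshold=5.0):
    return np.where(energy > threshold, 50.0 * (1.0 - threshold / energy), 0.0)  # mb

def integral(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))   # trapezoid rule

def production(f):
    return integral(f * excitation_fn(energy), energy)

flux_site = flux(energy, scale=3.2, gamma=1.6)        # e.g., high altitude (assumed)
flux_ref = flux(energy, scale=1.0, gamma=1.7)         # reference location (assumed)
print(f"nuclide-specific scaling factor ≈ {production(flux_site) / production(flux_ref):.2f}")
```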
A Gauge-generalized Solution for Non-Keplerian Motion in the Frenet-Serret Frame
NASA Astrophysics Data System (ADS)
Garber, Darren D.
2009-05-01
The customary modeling of perturbed planetary and spacecraft motion as a continuous sequence of unperturbed two-body orbits (instantaneous ellipses) is conveniently assigned a physical interpretation through the Keplerian and Delaunay elements and complemented mathematically by the Lagrange-type equations which describe the evolution of these variables. If however the actual motion is very non-Keplerian (i.e. the perturbed orbit varies greatly from a two-body orbit), then its modeling by a sequence of conics is not necessarily optimal in terms of its mathematical description and its resulting physical interpretation. Since, in principle a curve of any type can be represented as a sequence of points from a family of curves of any other type (Efroimsky 2005), alternate non-conic curves can be utilized to better describe the perturbed non-Keplerian motion of the body both mathematically and with a physically relevant interpretation. Non-Keplerian motion exists in both celestial mechanics and astrodynamics as evident by the complex interactions within star clusters and also as the result of a spacecraft accelerating via ion propulsion, solar sails and electro-dynamic tethers. For these cases, the sequence of simple orbits to describe the motion is not based on conics, but instead a family of spirals. The selection of spirals as the underlying simple motion is supported by the fact that it is unnecessary to describe the motion in terms of instantaneous orbits tangent to the actual trajectory (Efroimsky 2002, Newman & Efroimsky 2003) and at times there is an advantage to deviate from osculation, in order to greatly simplify the resulting mathematics via gauge freedom (Efroimsky & Goldreich 2003, Slabinski 2003, Gurfil 2004). From these two principles, (1) spirals as instantaneous orbits, and (2) controlled deviation from osculation, new planetary equations are derived for new non-osculating elements in the Frenet-Serret frame with the gauge function as a measure of non-osculation.
Analysis of magnetic fields using variational principles and CELAS2 elements
NASA Technical Reports Server (NTRS)
Frye, J. W.; Kasper, R. G.
1977-01-01
Prospective techniques for analyzing magnetic fields using NASTRAN are reviewed. A variational principle utilizing a vector potential function is presented which has, as its Euler equations, the required field equations and boundary conditions for static magnetic fields including current sources. The need to add a constraint condition to this variational principle is discussed. Some results using the Lagrange multiplier method to apply the constraint and CELAS2 elements to simulate the matrices are given. Practical considerations of using large numbers of CELAS2 elements are discussed.
Large-size space debris flyby in low earth orbits
NASA Astrophysics Data System (ADS)
Baranov, A. A.; Grishko, D. A.; Razoumny, Y. N.
2017-09-01
The analysis of the NORAD catalogue of space objects, carried out with respect to the overall sizes of upper stages and last stages of carrier rockets, allows the classification of 5 groups of large-size space debris (LSSD). These groups are defined according to the proximity of the orbital inclinations of the involved objects. The orbits within a group have various deviations in the Right Ascension of the Ascending Node (RAAN). It is proposed to use the RAAN deviations' evolution portrait to clarify the relative spatial distribution of the orbital planes in a group, with the RAAN deviations calculated with respect to the precessing orbital plane of a particular object. In the case of the first three groups (inclinations i = 71°, i = 74°, i = 81°) the straight lines of the relative RAAN deviations almost never intersect, so a simple, successive flyby of a group's elements is effective, but a significant total ΔV is required to form the drift orbits. In the case of the fifth group (Sun-synchronous orbits) these straight lines intersect each other chaotically many times due to noticeable differences in semi-major axes and orbital inclinations. The existence of intersections makes it possible to create a flyby sequence for an LSSD group in which the orbit of one LSSD object simultaneously serves as the drift orbit to reach another LSSD object. This flyby scheme, requiring less ΔV, was called "diagonal." The RAAN deviations' evolution portrait built for the fourth group (studied in this paper) contains both types of lines, so a simultaneous combination of diagonal and successive flyby schemes is possible. The total ΔV and temporal costs were calculated to cover all the elements of the 4th group. The article also presents results obtained for the flyby problem in the case of all five LSSD groups. General recommendations are given concerning the required reserve of total ΔV and the number of detachable de-orbiting units onboard the maneuvering platform and onboard the refueling vehicle.
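A small sketch of what a RAAN-deviations evolution portrait involves: secular J2 nodal precession rates are computed for each object and the RAAN drift is evaluated (here, printed) relative to a chosen reference object, giving the near-straight lines discussed above. The orbital elements are hypothetical examples, not catalogue values.

```python
import numpy as np

# RAAN-deviation "evolution portrait" sketch: relative drift of the ascending
# nodes of several debris objects with respect to a reference object, using the
# secular J2 nodal precession rate. Orbital elements below are hypothetical.
MU = 398600.4418          # km^3/s^2
RE = 6378.137             # km
J2 = 1.08263e-3

def raan_rate_deg_per_day(a_km, e, inc_deg):
    n = np.sqrt(MU / a_km**3)                       # mean motion, rad/s
    p = a_km * (1.0 - e**2)
    rate = -1.5 * J2 * n * (RE / p)**2 * np.cos(np.radians(inc_deg))
    return np.degrees(rate) * 86400.0

# Hypothetical group members: (semi-major axis km, eccentricity, inclination deg)
objects = [(7171.0, 0.002, 81.2), (7160.0, 0.001, 81.2),
           (7185.0, 0.003, 81.3), (7150.0, 0.002, 81.1)]
rates = np.array([raan_rate_deg_per_day(*o) for o in objects])
t_days = np.arange(0, 365)
# RAAN deviation of each object relative to object 0 (straight lines in the portrait)
relative_raan = np.outer(rates - rates[0], t_days)
print("relative drift after 1 year (deg):", np.round(relative_raan[:, -1], 1))
```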
NASA Astrophysics Data System (ADS)
Revuelto, J.; Dumont, M.; Tuzet, F.; Vionnet, V.; Lafaysse, M.; Lecourt, G.; Vernay, M.; Morin, S.; Cosme, E.; Six, D.; Rabatel, A.
2017-12-01
Snowpack models nowadays show a good capability for simulating the evolution of snow in mountain areas. However, deviations in the meteorological forcing and shortcomings in the modelling of snow physical processes, when accumulated over a snow season, can produce large deviations from the real snowpack state. These deviations are usually assessed with on-site observations from automatic weather stations. Nevertheless, the location of these stations can strongly influence the results of such evaluations, since local topography may have a marked influence on snowpack evolution. Although evaluations of snowpack models with automatic weather stations usually reveal good results, there is a lack of large-scale evaluations of simulation results on heterogeneous alpine terrain subject to local topographic effects. This work first presents a complete evaluation of the detailed snowpack model Crocus over an extended mountain area, the Arve upper catchment (western European Alps). This catchment has a wide elevation range, with a large area above 2000 m a.s.l. and/or glaciated. The evaluation compares results obtained with distributed and semi-distributed simulations (the latter currently used in operational forecasting). Daily observations of the snow-covered area from the MODIS satellite sensor, the seasonal glacier surface mass balance evolution measured at more than 65 locations, and the glaciers' annual equilibrium-line altitude from Landsat/Spot/Aster satellites have been used for model evaluation. Additionally, the latest advances in producing ensemble snowpack simulations for assimilating satellite reflectance data over extended areas will be presented. These advances comprise the generation of an ensemble of downscaled high-resolution meteorological forcing from meso-scale meteorological models and the application of a particle filter scheme for assimilating satellite observations. Although the results are preliminary, they show good potential for improving snowpack forecasting capabilities.
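A minimal bootstrap particle filter sketch of the assimilation step mentioned above: an ensemble of snowpack states is weighted by the likelihood of a satellite-derived observation and resampled. The toy state variable, observation operator, and error values are placeholders, not components of Crocus or the actual assimilation scheme.

```python
import numpy as np

# Bootstrap particle filter toy: assimilate one satellite-like reflectance
# observation into an ensemble of snowpack states (here reduced to SWE only).
rng = np.random.default_rng(11)
n_particles = 500

swe = rng.normal(250.0, 60.0, n_particles)      # prior ensemble of SWE (mm), assumed

def observation_operator(swe):
    """Toy mapping from snowpack state to a reflectance-like observable."""
    return 0.9 - 0.3 * np.exp(-swe / 150.0)

obs, obs_sigma = 0.78, 0.02                      # hypothetical satellite value & error

# Importance weights from a Gaussian likelihood, then resample (bootstrap filter).
innov = obs - observation_operator(swe)
weights = np.exp(-0.5 * (innov / obs_sigma) ** 2)
weights /= weights.sum()
idx = rng.choice(n_particles, size=n_particles, p=weights)
posterior = swe[idx]

print(f"prior SWE : {swe.mean():.0f} ± {swe.std():.0f} mm")
print(f"posterior : {posterior.mean():.0f} ± {posterior.std():.0f} mm")
```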
What Are “X-shaped” Radio Sources Telling Us? II. Properties of a Sample of 87
NASA Astrophysics Data System (ADS)
Saripalli, Lakshmi; Roberts, David H.
2018-01-01
In an earlier paper, we presented Jansky Very Large Array multi-frequency, multi-array continuum imaging of a unique sample of low-axial ratio radio galaxies. In this paper, the second in the series, we examine the images to learn the phenomenology of how the off-axis emission relates to the main radio source. Inversion-symmetric offset emission appears to be bimodal and to originate from one of two strategic locations: outer ends of radio lobes (outer-deviation) or from inner ends (inner-deviation). The latter sources are almost always associated with edge-brightened sources. With S- and Z-shaped sources being a subset of outer-deviation sources, this class lends itself naturally to explanations involving black hole axis precession. Our data allow us to present a plausible model for the more enigmatic inner-deviation sources with impressive wings; as for outer-deviation sources these too require black hole axis shifts, although they also require plasma backflows into relic channels. Evolution in morphology over time relates the variety in structures in inner-deviation sources including XRGs. With features such as non-collinearities, central inner-S “spine,” corresponding lobe emission peaks, double and protruding hotspots not uncommon, black hole axis precession, drifts, or flips could be active in a significant fraction of radio sources with prominent off-axis emission. At least 4% of radio galaxies appear to undergo black hole axis rotation. Quasars offer a key signature for recognizing rotating axes. With a rich haul of sources that have likely undergone axis rotation, our work shows the usefulness of low-axial ratio sources in pursuing searches for binary supermassive black holes.
Gait Deviations in Children With Osteogenesis Imperfecta Type I.
Garman, Christina R; Graf, Adam; Krzak, Joseph; Caudill, Angela; Smith, Peter; Harris, Gerald
2017-08-02
Osteogenesis imperfecta (OI) is a congenital connective tissue disorder often characterized by orthopaedic complications that impact normal gait. As such, mobility is of particular interest in the OI population as it is associated with multiple aspects of participation and quality of life. The purpose of the current study was to identify and describe common gait deviations in a large sample of individuals with type I OI and speculate the etiology with a goal of improving function. Gait analysis was performed on 44 subjects with type I (11.7±3.08 y old) and 30 typically developing controls (9.54±3.1 y old ). Spatial temporal, kinematic, and kinetic gait data were calculated from the Vicon Plug-in-Gait Model. Musculoskeletal modeling of the muscle tendon lengths (MTL) was done in OpenSim 3.3 to evaluate the MTL of the gastrocnemius and gluteus maximus. The gait deviation index, a dimensionless parameter that evaluates the deviation of 9 kinematic gait parameters from a control database, was also calculated. Walking speed, single support time, stride, and step length were lower and double support time was higher in the OI group. The gait deviation index score was lower and external hip rotation angle was higher in the OI group. Peak hip flexor, knee extensor and ankle plantarflexor moments, and power generation at the ankle were lower in the OI group. MTL analysis revealed no significant length discrepancies between the OI group and the typically developing group. Together, these findings provide a comprehensive description of gait characteristics among a group of individuals with type I OI. Such data inform clinicians about specific gait deviations in this population allowing clinicians to recommend more focused interventions. Level III-case-control study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, N; DiCostanzo, D; Fullenkamp, M
2015-06-15
Purpose: To determine appropriate couch tolerance values for modern radiotherapy linac R&V systems with indexed patient setup. Methods: Treatment table tolerance values have been the most difficult to lower, due to many factors including variations in patient positioning and differences in table tops between machines. We recently installed nine linacs with similar tables and started indexing every patient in our clinic. In this study we queried our R&V database and analyzed the deviation of couch position values from the acquired values at verification simulation for all patients treated with indexed positioning. Means and standard deviations of daily setup deviations were computed in the longitudinal, lateral, and vertical directions for 343 patient plans. The mean, median, and standard error of the standard deviations across the whole patient population and for some disease sites were computed to determine tolerance values. Results: The plot of our couch deviation values showed a Gaussian distribution, with some small deviations corresponding to setup uncertainties on non-imaging days and SRS/SRT/SBRT patients, as well as some large deviations which were spot checked and found to correspond to indexing errors that were overridden. Setting our tolerance values based on the median + 1 standard error resulted in tolerance values of 1 cm lateral and longitudinal, and 0.5 cm vertical for all non-SRS/SRT/SBRT cases. Re-analyzing the data, we found that about 92% of the treated fractions would be within these tolerance values (ignoring the mis-indexed patients). We also analyzed data for disease-site-based subpopulations and found no difference in the tolerance values that needed to be used. Conclusion: With automation, auto-setup, and other workflow efficiency tools being introduced into the radiotherapy workflow, it is essential to set table tolerances that allow safe treatments but flag setup errors that need to be reassessed before treatment.
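A sketch of the tolerance derivation described in the abstract: per-plan standard deviations of the daily couch shifts are summarized by their median plus one standard error for each axis. The shift data below are random placeholders, so the printed numbers will not reproduce the reported 1 cm / 0.5 cm tolerances.

```python
import numpy as np

# Derive couch-position tolerances from a population of indexed setups:
# summarize per-plan standard deviations of the daily couch shifts by their
# median plus one standard error, per axis. Shift data are random placeholders.
rng = np.random.default_rng(5)
n_plans, n_fractions = 343, 25

# Simulated daily deviations from the verification-simulation couch values (cm).
daily = {"lat": rng.normal(0, 0.35, (n_plans, n_fractions)),
         "lng": rng.normal(0, 0.40, (n_plans, n_fractions)),
         "vrt": rng.normal(0, 0.18, (n_plans, n_fractions))}

for axis, shifts in daily.items():
    per_plan_sd = shifts.std(axis=1)
    median_sd = np.median(per_plan_sd)
    std_err = per_plan_sd.std(ddof=1) / np.sqrt(n_plans)
    tolerance = median_sd + std_err
    within = np.mean(np.abs(shifts) <= tolerance) * 100
    print(f"{axis}: tolerance ≈ {tolerance:.2f} cm, {within:.0f}% of fractions within")
```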
On the limitations of General Circulation Climate Models
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Risbey, James S.
1990-01-01
General Circulation Models (GCMs) by definition calculate large-scale dynamical and thermodynamical processes and their associated feedbacks from first principles. This aspect of GCMs is widely believed to give them an advantage in simulating global scale climate changes as compared to simpler models which do not calculate the large-scale processes from first principles. However, it is pointed out that the meridional transports of heat simulated by GCMs used in climate change experiments differ from observational analyses and from other GCMs by as much as a factor of two. It is also demonstrated that GCM simulations of the large-scale transports of heat are sensitive to the (uncertain) subgrid-scale parameterizations. This raises the question of whether current GCMs are in fact superior to simpler models for simulating temperature changes associated with global scale climate change.
How does an asymmetric magnetic field change the vertical structure of a hot accretion flow?
NASA Astrophysics Data System (ADS)
Samadi, M.; Abbassi, S.; Lovelace, R. V. E.
2017-09-01
This paper explores the effects of large-scale magnetic fields in hot accretion flows for asymmetric configurations with respect to the equatorial plane. The solutions that we have found show that the large-scale asymmetric magnetic field can significantly affect the dynamics of the flow and also cause notable outflows in the outer parts. Previously, we treated a viscous resistive accreting disc in the presence of an odd-symmetric B-field about the equatorial plane. Now, we extend our earlier work by taking into account another configuration of the large-scale magnetic field that is no longer symmetric. We provide asymmetric field structures with small deviations from even and odd symmetric B-fields. Our results show that the disc's dynamics and appearance become different above and below the equatorial plane. The set of solutions also predicts that even a small deviation in a symmetric field causes the disc to compress on one side and expand on the other. In some cases, our solution represents a very strong outflow from just one side of the disc. Therefore, the solution may potentially explain the origin of one-sided jets in radio galaxies.
Rogue waves and large deviations in deep sea.
Dematteis, Giovanni; Grafke, Tobias; Vanden-Eijnden, Eric
2018-01-30
The appearance of rogue waves in deep sea is investigated by using the modified nonlinear Schrödinger (MNLS) equation in one spatial dimension with random initial conditions that are assumed to be normally distributed, with a spectrum approximating realistic conditions of a unidirectional sea state. It is shown that one can use the incomplete information contained in this spectrum as a prior and supplement this information with the MNLS dynamics to reliably estimate the probability distribution of the sea surface elevation far in the tail at later times. Our results indicate that rogue waves occur when the system hits unlikely pockets of wave configurations that trigger large disturbances of the surface height. The rogue wave precursors in these pockets are wave patterns of regular height, but with a very specific shape that is identified explicitly, thereby allowing for early detection. The method proposed here combines Monte Carlo sampling with tools from large deviations theory that reduce the calculation of the most likely rogue wave precursors to an optimization problem that can be solved efficiently. This approach is transferable to other problems in which the system's governing equations contain random initial conditions and/or parameters.
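The large-deviation step described above, finding the most likely precursor, amounts to a constrained optimization: among normally distributed initial conditions, select the one of smallest Gaussian cost whose evolution reaches a given surface elevation. A schematic Python sketch of that step is given below; evolve is a hypothetical stand-in for the MNLS solver returning the relevant surface-elevation observable, and the weighting of modes by the sea-state spectrum is omitted.

import numpy as np
from scipy.optimize import minimize

def most_likely_precursor(evolve, n_modes, threshold):
    """Minimise the Gaussian cost |z|^2 / 2 over initial mode amplitudes z,
    subject to the evolved surface elevation evolve(z) reaching threshold."""
    cost = lambda z: 0.5 * float(np.dot(z, z))
    constraint = {"type": "ineq", "fun": lambda z: evolve(z) - threshold}
    z0 = np.full(n_modes, 1e-3)                 # small non-zero starting point
    result = minimize(cost, z0, method="SLSQP", constraints=[constraint])
    return result.x, cost(result.x)             # precursor and its large-deviation cost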
Sanov and central limit theorems for output statistics of quantum Markov chains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horssen, Merlijn van, E-mail: merlijn.vanhorssen@nottingham.ac.uk; Guţă, Mădălin, E-mail: madalin.guta@nottingham.ac.uk
2015-02-15
In this paper, we consider the statistics of repeated measurements on the output of a quantum Markov chain. We establish a large deviations result analogous to Sanov's theorem for the multi-site empirical measure associated with finite sequences of consecutive outcomes of a classical stochastic process. Our result relies on the construction of an extended quantum transition operator (which keeps track of previous outcomes) in terms of which we compute moment generating functions, and whose spectral radius is related to the large deviations rate function. As a corollary to this, we obtain a central limit theorem for the empirical measure. Such higher-level statistics may be used to uncover critical behaviour such as dynamical phase transitions, which are not captured by lower-level statistics such as the sample mean. As a step in this direction, we give an example of a finite system whose level-1 (empirical mean) rate function is independent of a model parameter while the level-2 (empirical measure) rate is not.
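For orientation only, the classical analogue of the spectral-radius statement above is the familiar Markov-chain large-deviation recipe: tilt the transition matrix by the observable, take the logarithm of its spectral radius to obtain the scaled cumulant generating function, and Legendre-transform to obtain the level-1 rate function. The sketch below implements that classical analogue, not the paper's extended quantum transition operator.

import numpy as np

def scgf(P, f, s):
    """Scaled cumulant generating function of an additive observable f for a
    row-stochastic matrix P: log spectral radius of the tilted matrix
    P[i, j] * exp(s * f[j])."""
    P = np.asarray(P, dtype=float)
    tilt = np.exp(s * np.asarray(f, dtype=float))
    return float(np.log(np.max(np.abs(np.linalg.eigvals(P * tilt[None, :])))))

def rate_function(P, f, x, s_grid=None):
    """Level-1 rate function via a numerical Legendre transform:
    I(x) = sup_s [s * x - scgf(s)]."""
    if s_grid is None:
        s_grid = np.linspace(-5.0, 5.0, 1001)
    return max(s * x - scgf(P, f, s) for s in s_grid)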
The f(R) halo mass function in the cosmic web
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braun-Bates, F. von; Winther, H.A.; Alonso, D.
An important indicator of modified gravity is the effect of the local environment on halo properties. This paper examines the influence of the local tidal structure on the halo mass function, the halo orientation, spin and the concentration-mass relation. We use the excursion set formalism to produce a halo mass function conditional on large-scale structure. Our simple model agrees well with simulations on large scales at which the density field is linear or weakly non-linear. Beyond this, our principal result is that f(R) does affect halo abundances, the halo spin parameter and the concentration-mass relationship in an environment-independent way, whereas we find no appreciable deviation from ΛCDM for the mass function with fixed environment density, nor in the alignment of the orientation and spin vectors of the halo with the eigenvectors of the local cosmic web. There is a general trend for greater deviation from ΛCDM in underdense environments and for high-mass haloes, as expected from chameleon screening.
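The environment-conditional mass function mentioned above is built from the excursion-set first-crossing distribution. As a minimal sketch of that standard ingredient only (constant barrier; the paper's f(R) modifications to the barrier and variance are not included), the conditional first-crossing density given an environment of overdensity delta_env at variance S_env is:

import numpy as np

def conditional_first_crossing(S, S_env, delta_c, delta_env):
    """First-crossing density f(S | delta_env, S_env) for a constant barrier
    delta_c, given a large-scale environment (delta_env, S_env), for S > S_env."""
    dS = np.asarray(S, dtype=float) - S_env
    dd = delta_c - delta_env
    return dd / np.sqrt(2.0 * np.pi * dS**3) * np.exp(-dd**2 / (2.0 * dS))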
NASA Astrophysics Data System (ADS)
Aad, G.; Abbott, B.; Abdallah, J.; Abdinov, O.; Abeloos, B.; Aben, R.; Abolins, M.; AbouZeid, O. S.; Abramowicz, H.; Abreu, H.; Abreu, R.; Abulaiti, Y.; Acharya, B. S.; Adamczyk, L.; Adams, D. L.; Adelman, J.; Adomeit, S.; Adye, T.; Affolder, A. A.; Agatonovic-Jovin, T.; Agricola, J.; Aguilar-Saavedra, J. A.; Ahlen, S. P.; Ahmadov, F.; Aielli, G.; Akerstedt, H.; Åkesson, T. P. A.; Akimov, A. V.; Alberghi, G. L.; Albert, J.; Albrand, S.; Verzini, M. J. Alconada; Aleksa, M.; Aleksandrov, I. N.; Alexa, C.; Alexander, G.; Alexopoulos, T.; Alhroob, M.; Alimonti, G.; Alison, J.; Alkire, S. P.; Allbrooke, B. M. M.; Allen, B. W.; Allport, P. P.; Aloisio, A.; Alonso, A.; Alonso, F.; Alpigiani, C.; Gonzalez, B. Alvarez; Piqueras, D. Álvarez; Alviggi, M. G.; Amadio, B. T.; Amako, K.; Coutinho, Y. Amaral; Amelung, C.; Amidei, D.; Santos, S. P. Amor Dos; Amorim, A.; Amoroso, S.; Amram, N.; Amundsen, G.; Anastopoulos, C.; Ancu, L. S.; Andari, N.; Andeen, T.; Anders, C. F.; Anders, G.; Anders, J. K.; Anderson, K. J.; Andreazza, A.; Andrei, V.; Angelidakis, S.; Angelozzi, I.; Anger, P.; Angerami, A.; Anghinolfi, F.; Anisenkov, A. V.; Anjos, N.; Annovi, A.; Antonelli, M.; Antonov, A.; Antos, J.; Anulli, F.; Aoki, M.; Bella, L. Aperio; Arabidze, G.; Arai, Y.; Araque, J. P.; Arce, A. T. H.; Arduh, F. A.; Arguin, J.-F.; Argyropoulos, S.; Arik, M.; Armbruster, A. J.; Armitage, L. J.; Arnaez, O.; Arnold, H.; Arratia, M.; Arslan, O.; Artamonov, A.; Artoni, G.; Artz, S.; Asai, S.; Asbah, N.; Ashkenazi, A.; Åsman, B.; Asquith, L.; Assamagan, K.; Astalos, R.; Atkinson, M.; Atlay, N. B.; Augsten, K.; Avolio, G.; Axen, B.; Ayoub, M. K.; Azuelos, G.; Baak, M. A.; Baas, A. E.; Baca, M. J.; Bachacou, H.; Bachas, K.; Backes, M.; Backhaus, M.; Bagiacchi, P.; Bagnaia, P.; Bai, Y.; Baines, J. T.; Baker, O. K.; Baldin, E. M.; Balek, P.; Balestri, T.; Balli, F.; Balunas, W. K.; Banas, E.; Banerjee, Sw.; Bannoura, A. A. E.; Barak, L.; Barberio, E. L.; Barberis, D.; Barbero, M.; Barillari, T.; Barklow, T.; Barlow, N.; Barnes, S. L.; Barnett, B. M.; Barnett, R. M.; Barnovska, Z.; Baroncelli, A.; Barone, G.; Barr, A. J.; Navarro, L. Barranco; Barreiro, F.; da Costa, J. Barreiro Guimarães; Bartoldus, R.; Barton, A. E.; Bartos, P.; Basalaev, A.; Bassalat, A.; Basye, A.; Bates, R. L.; Batista, S. J.; Batley, J. R.; Battaglia, M.; Bauce, M.; Bauer, F.; Bawa, H. S.; Beacham, J. B.; Beattie, M. D.; Beau, T.; Beauchemin, P. H.; Bechtle, P.; Beck, H. P.; Becker, K.; Becker, M.; Beckingham, M.; Becot, C.; Beddall, A. J.; Beddall, A.; Bednyakov, V. A.; Bedognetti, M.; Bee, C. P.; Beemster, L. J.; Beermann, T. A.; Begel, M.; Behr, J. K.; Belanger-Champagne, C.; Bell, A. S.; Bella, G.; Bellagamba, L.; Bellerive, A.; Bellomo, M.; Belotskiy, K.; Beltramello, O.; Belyaev, N. L.; Benary, O.; Benchekroun, D.; Bender, M.; Bendtz, K.; Benekos, N.; Benhammou, Y.; Noccioli, E. Benhar; Benitez, J.; Garcia, J. A. Benitez; Benjamin, D. P.; Bensinger, J. R.; Bentvelsen, S.; Beresford, L.; Beretta, M.; Berge, D.; Kuutmann, E. Bergeaas; Berger, N.; Berghaus, F.; Beringer, J.; Berlendis, S.; Bernard, N. R.; Bernius, C.; Bernlochner, F. U.; Berry, T.; Berta, P.; Bertella, C.; Bertoli, G.; Bertolucci, F.; Bertram, I. A.; Bertsche, C.; Bertsche, D.; Besjes, G. J.; Bylund, O. Bessidskaia; Bessner, M.; Besson, N.; Betancourt, C.; Bethke, S.; Bevan, A. J.; Bhimji, W.; Bianchi, R. M.; Bianchini, L.; Bianco, M.; Biebel, O.; Biedermann, D.; Bielski, R.; Biesuz, N. V.; Biglietti, M.; De Mendizabal, J. 
Bilbao; Bilokon, H.; Bindi, M.; Binet, S.; Bingul, A.; Bini, C.; Biondi, S.; Bjergaard, D. M.; Black, C. W.; Black, J. E.; Black, K. M.; Blackburn, D.; Blair, R. E.; Blanchard, J.-B.; Blanco, J. E.; Blazek, T.; Bloch, I.; Blocker, C.; Blum, W.; Blumenschein, U.; Blunier, S.; Bobbink, G. J.; Bobrovnikov, V. S.; Bocchetta, S. S.; Bocci, A.; Bock, C.; Boehler, M.; Boerner, D.; Bogaerts, J. A.; Bogavac, D.; Bogdanchikov, A. G.; Bohm, C.; Boisvert, V.; Bold, T.; Boldea, V.; Boldyrev, A. S.; Bomben, M.; Bona, M.; Boonekamp, M.; Borisov, A.; Borissov, G.; Bortfeldt, J.; Bortoletto, D.; Bortolotto, V.; Bos, K.; Boscherini, D.; Bosman, M.; Sola, J. D. Bossio; Boudreau, J.; Bouffard, J.; Bouhova-Thacker, E. V.; Boumediene, D.; Bourdarios, C.; Bousson, N.; Boutle, S. K.; Boveia, A.; Boyd, J.; Boyko, I. R.; Bracinik, J.; Brandt, A.; Brandt, G.; Brandt, O.; Bratzler, U.; Brau, B.; Brau, J. E.; Braun, H. M.; Madden, W. D. Breaden; Brendlinger, K.; Brennan, A. J.; Brenner, L.; Brenner, R.; Bressler, S.; Bristow, T. M.; Britton, D.; Britzger, D.; Brochu, F. M.; Brock, I.; Brock, R.; Brooijmans, G.; Brooks, T.; Brooks, W. K.; Brosamer, J.; Brost, E.; Broughton, J. H.; de Renstrom, P. A. Bruckman; Bruncko, D.; Bruneliere, R.; Bruni, A.; Bruni, G.; Brunt, BH; Bruschi, M.; Bruscino, N.; Bryant, P.; Bryngemark, L.; Buanes, T.; Buat, Q.; Buchholz, P.; Buckley, A. G.; Budagov, I. A.; Buehrer, F.; Bugge, M. K.; Bulekov, O.; Bullock, D.; Burckhart, H.; Burdin, S.; Burgard, C. D.; Burghgrave, B.; Burka, K.; Burke, S.; Burmeister, I.; Busato, E.; Büscher, D.; Büscher, V.; Bussey, P.; Butler, J. M.; Butt, A. I.; Buttar, C. M.; Butterworth, J. M.; Butti, P.; Buttinger, W.; Buzatu, A.; Buzykaev, A. R.; Urbán, S. Cabrera; Caforio, D.; Cairo, V. M.; Cakir, O.; Calace, N.; Calafiura, P.; Calandri, A.; Calderini, G.; Calfayan, P.; Caloba, L. P.; Calvet, D.; Calvet, S.; Calvet, T. P.; Toro, R. Camacho; Camarda, S.; Camarri, P.; Cameron, D.; Armadans, R. Caminal; Camincher, C.; Campana, S.; Campanelli, M.; Campoverde, A.; Canale, V.; Canepa, A.; Bret, M. Cano; Cantero, J.; Cantrill, R.; Cao, T.; Garrido, M. D. M. Capeans; Caprini, I.; Caprini, M.; Capua, M.; Caputo, R.; Carbone, R. M.; Cardarelli, R.; Cardillo, F.; Carli, I.; Carli, T.; Carlino, G.; Carminati, L.; Caron, S.; Carquin, E.; Carrillo-Montoya, G. D.; Carter, J. R.; Carvalho, J.; Casadei, D.; Casado, M. P.; Casolino, M.; Casper, D. W.; Castaneda-Miranda, E.; Castelli, A.; Gimenez, V. Castillo; Castro, N. F.; Catinaccio, A.; Catmore, J. R.; Cattai, A.; Caudron, J.; Cavaliere, V.; Cavalli, D.; Cavalli-Sforza, M.; Cavasinni, V.; Ceradini, F.; Alberich, L. Cerda; Cerio, B. C.; Cerqueira, A. S.; Cerri, A.; Cerrito, L.; Cerutti, F.; Cerv, M.; Cervelli, A.; Cetin, S. A.; Chafaq, A.; Chakraborty, D.; Chan, S. K.; Chan, Y. L.; Chang, P.; Chapman, J. D.; Charlton, D. G.; Chatterjee, A.; Chau, C. C.; Barajas, C. A. Chavez; Che, S.; Cheatham, S.; Chegwidden, A.; Chekanov, S.; Chekulaev, S. V.; Chelkov, G. A.; Chelstowska, M. A.; Chen, C.; Chen, H.; Chen, K.; Chen, S.; Chen, S.; Chen, X.; Chen, Y.; Cheng, H. C.; Cheng, H. J.; Cheng, Y.; Cheplakov, A.; Cheremushkina, E.; Moursli, R. Cherkaoui El; Chernyatin, V.; Cheu, E.; Chevalier, L.; Chiarella, V.; Chiarelli, G.; Chiodini, G.; Chisholm, A. S.; Chitan, A.; Chizhov, M. V.; Choi, K.; Chomont, A. R.; Chouridou, S.; Chow, B. K. B.; Christodoulou, V.; Chromek-Burckhart, D.; Chudoba, J.; Chuinard, A. J.; Chwastowski, J. J.; Chytka, L.; Ciapetti, G.; Ciftci, A. K.; Cinca, D.; Cindro, V.; Cioara, I. 
A.; Ciocio, A.; Cirotto, F.; Citron, Z. H.; Ciubancan, M.; Clark, A.; Clark, B. L.; Clark, P. J.; Clarke, R. N.; Clement, C.; Coadou, Y.; Cobal, M.; Coccaro, A.; Cochran, J.; Coffey, L.; Colasurdo, L.; Cole, B.; Cole, S.; Colijn, A. P.; Collot, J.; Colombo, T.; Compostella, G.; Muiño, P. Conde; Coniavitis, E.; Connell, S. H.; Connelly, I. A.; Consorti, V.; Constantinescu, S.; Conta, C.; Conti, G.; Conventi, F.; Cooke, M.; Cooper, B. D.; Cooper-Sarkar, A. M.; Cornelissen, T.; Corradi, M.; Corriveau, F.; Corso-Radu, A.; Cortes-Gonzalez, A.; Cortiana, G.; Costa, G.; Costa, M. J.; Costanzo, D.; Cottin, G.; Cowan, G.; Cox, B. E.; Cranmer, K.; Crawley, S. J.; Cree, G.; Crépé-Renaudin, S.; Crescioli, F.; Cribbs, W. A.; Ortuzar, M. Crispin; Cristinziani, M.; Croft, V.; Crosetti, G.; Donszelmann, T. Cuhadar; Cummings, J.; Curatolo, M.; Cúth, J.; Cuthbert, C.; Czirr, H.; Czodrowski, P.; D'Auria, S.; D'Onofrio, M.; De Sousa, M. J. Da Cunha Sargedas; Via, C. Da; Dabrowski, W.; Dai, T.; Dale, O.; Dallaire, F.; Dallapiccola, C.; Dam, M.; Dandoy, J. R.; Dang, N. P.; Daniells, A. C.; Dann, N. S.; Danninger, M.; Hoffmann, M. Dano; Dao, V.; Darbo, G.; Darmora, S.; Dassoulas, J.; Dattagupta, A.; Davey, W.; David, C.; Davidek, T.; Davies, M.; Davison, P.; Davygora, Y.; Dawe, E.; Dawson, I.; Daya-Ishmukhametova, R. K.; De, K.; de Asmundis, R.; De Benedetti, A.; De Castro, S.; De Cecco, S.; De Groot, N.; de Jong, P.; De la Torre, H.; De Lorenzi, F.; De Pedis, D.; De Salvo, A.; De Sanctis, U.; De Santo, A.; De Regie, J. B. De Vivie; Dearnaley, W. J.; Debbe, R.; Debenedetti, C.; Dedovich, D. V.; Deigaard, I.; Del Peso, J.; Del Prete, T.; Delgove, D.; Deliot, F.; Delitzsch, C. M.; Deliyergiyev, M.; Dell'Acqua, A.; Dell'Asta, L.; Dell'Orso, M.; Della Pietra, M.; della Volpe, D.; Delmastro, M.; Delsart, P. A.; Deluca, C.; DeMarco, D. A.; Demers, S.; Demichev, M.; Demilly, A.; Denisov, S. P.; Denysiuk, D.; Derendarz, D.; Derkaoui, J. E.; Derue, F.; Dervan, P.; Desch, K.; Deterre, C.; Dette, K.; Deviveiros, P. O.; Dewhurst, A.; Dhaliwal, S.; Di Ciaccio, A.; Di Ciaccio, L.; Di Clemente, W. K.; Di Donato, C.; Di Girolamo, A.; Di Girolamo, B.; Di Micco, B.; Di Nardo, R.; Di Simone, A.; Di Sipio, R.; Di Valentino, D.; Diaconu, C.; Diamond, M.; Dias, F. A.; Diaz, M. A.; Diehl, E. B.; Dietrich, J.; Diglio, S.; Dimitrievska, A.; Dingfelder, J.; Dita, P.; Dita, S.; Dittus, F.; Djama, F.; Djobava, T.; Djuvsland, J. I.; do Vale, M. A. B.; Dobos, D.; Dobre, M.; Doglioni, C.; Dohmae, T.; Dolejsi, J.; Dolezal, Z.; Dolgoshein, B. A.; Donadelli, M.; Donati, S.; Dondero, P.; Donini, J.; Dopke, J.; Doria, A.; Dova, M. T.; Doyle, A. T.; Drechsler, E.; Dris, M.; Du, Y.; Duarte-Campderros, J.; Duchovni, E.; Duckeck, G.; Ducu, O. A.; Duda, D.; Dudarev, A.; Duflot, L.; Duguid, L.; Dührssen, M.; Dunford, M.; Yildiz, H. Duran; Düren, M.; Durglishvili, A.; Duschinger, D.; Dutta, B.; Dyndal, M.; Eckardt, C.; Ecker, K. M.; Edgar, R. C.; Edson, W.; Edwards, N. C.; Eifert, T.; Eigen, G.; Einsweiler, K.; Ekelof, T.; Kacimi, M. El; Ellajosyula, V.; Ellert, M.; Elles, S.; Ellinghaus, F.; Elliot, A. A.; Ellis, N.; Elmsheuser, J.; Elsing, M.; Emeliyanov, D.; Enari, Y.; Endner, O. C.; Endo, M.; Ennis, J. S.; Erdmann, J.; Ereditato, A.; Ernis, G.; Ernst, J.; Ernst, M.; Errede, S.; Ertel, E.; Escalier, M.; Esch, H.; Escobar, C.; Esposito, B.; Etienvre, A. I.; Etzion, E.; Evans, H.; Ezhilov, A.; Fabbri, F.; Fabbri, L.; Facini, G.; Fakhrutdinov, R. M.; Falciano, S.; Falla, R. 
J.; Faltova, J.; Fang, Y.; Fanti, M.; Farbin, A.; Farilla, A.; Farina, C.; Farooque, T.; Farrell, S.; Farrington, S. M.; Farthouat, P.; Fassi, F.; Fassnacht, P.; Fassouliotis, D.; Giannelli, M. Faucci; Favareto, A.; Fawcett, W. J.; Fayard, L.; Fedin, O. L.; Fedorko, W.; Feigl, S.; Feligioni, L.; Feng, C.; Feng, E. J.; Feng, H.; Fenyuk, A. B.; Feremenga, L.; Martinez, P. Fernandez; Perez, S. Fernandez; Ferrando, J.; Ferrari, A.; Ferrari, P.; Ferrari, R.; de Lima, D. E. Ferreira; Ferrer, A.; Ferrere, D.; Ferretti, C.; Parodi, A. Ferretto; Fiedler, F.; Filipčič, A.; Filipuzzi, M.; Filthaut, F.; Fincke-Keeler, M.; Finelli, K. D.; Fiolhais, M. C. N.; Fiorini, L.; Firan, A.; Fischer, A.; Fischer, C.; Fischer, J.; Fisher, W. C.; Flaschel, N.; Fleck, I.; Fleischmann, P.; Fletcher, G. T.; Fletcher, G.; Fletcher, R. R. M.; Flick, T.; Floderus, A.; Castillo, L. R. Flores; Flowerdew, M. J.; Forcolin, G. T.; Formica, A.; Forti, A.; Foster, A. G.; Fournier, D.; Fox, H.; Fracchia, S.; Francavilla, P.; Franchini, M.; Francis, D.; Franconi, L.; Franklin, M.; Frate, M.; Fraternali, M.; Freeborn, D.; Fressard-Batraneanu, S. M.; Friedrich, F.; Froidevaux, D.; Frost, J. A.; Fukunaga, C.; Torregrosa, E. Fullana; Fusayasu, T.; Fuster, J.; Gabaldon, C.; Gabizon, O.; Gabrielli, A.; Gabrielli, A.; Gach, G. P.; Gadatsch, S.; Gadomski, S.; Gagliardi, G.; Gagnon, L. G.; Gagnon, P.; Galea, C.; Galhardo, B.; Gallas, E. J.; Gallop, B. J.; Gallus, P.; Galster, G.; Gan, K. K.; Gao, J.; Gao, Y.; Gao, Y. S.; Walls, F. M. Garay; García, C.; Navarro, J. E. García; Garcia-Sciveres, M.; Gardner, R. W.; Garelli, N.; Garonne, V.; Bravo, A. Gascon; Gatti, C.; Gaudiello, A.; Gaudio, G.; Gaur, B.; Gauthier, L.; Gavrilenko, I. L.; Gay, C.; Gaycken, G.; Gazis, E. N.; Gecse, Z.; Gee, C. N. P.; Geich-Gimbel, Ch.; Geisler, M. P.; Gemme, C.; Genest, M. H.; Geng, C.; Gentile, S.; George, S.; Gerbaudo, D.; Gershon, A.; Ghasemi, S.; Ghazlane, H.; Giacobbe, B.; Giagu, S.; Giannetti, P.; Gibbard, B.; Gibson, S. M.; Gignac, M.; Gilchriese, M.; Gillam, T. P. S.; Gillberg, D.; Gilles, G.; Gingrich, D. M.; Giokaris, N.; Giordani, M. P.; Giorgi, F. M.; Giorgi, F. M.; Giraud, P. F.; Giromini, P.; Giugni, D.; Giuliani, C.; Giulini, M.; Gjelsten, B. K.; Gkaitatzis, S.; Gkialas, I.; Gkougkousis, E. L.; Gladilin, L. K.; Glasman, C.; Glatzer, J.; Glaysher, P. C. F.; Glazov, A.; Goblirsch-Kolb, M.; Godlewski, J.; Goldfarb, S.; Golling, T.; Golubkov, D.; Gomes, A.; Gonçalo, R.; Costa, J. Goncalves Pinto Firmino Da; Gonella, L.; Gongadze, A.; de la Hoz, S. González; Parra, G. Gonzalez; Gonzalez-Sevilla, S.; Goossens, L.; Gorbounov, P. A.; Gordon, H. A.; Gorelov, I.; Gorini, B.; Gorini, E.; Gorišek, A.; Gornicki, E.; Goshaw, A. T.; Gössling, C.; Gostkin, M. I.; Goudet, C. R.; Goujdami, D.; Goussiou, A. G.; Govender, N.; Gozani, E.; Graber, L.; Grabowska-Bold, I.; Gradin, P. O. J.; Grafström, P.; Gramling, J.; Gramstad, E.; Grancagnolo, S.; Gratchev, V.; Gray, H. M.; Graziani, E.; Greenwood, Z. D.; Grefe, C.; Gregersen, K.; Gregor, I. M.; Grenier, P.; Grevtsov, K.; Griffiths, J.; Grillo, A. A.; Grimm, K.; Grinstein, S.; Gris, Ph.; Grivaz, J.-F.; Groh, S.; Grohs, J. P.; Gross, E.; Grosse-Knetter, J.; Grossi, G. C.; Grout, Z. J.; Guan, L.; Guan, W.; Guenther, J.; Guescini, F.; Guest, D.; Gueta, O.; Guido, E.; Guillemin, T.; Guindon, S.; Gul, U.; Gumpert, C.; Guo, J.; Guo, Y.; Gupta, S.; Gustavino, G.; Gutierrez, P.; Ortiz, N. G. Gutierrez; Gutschow, C.; Guyot, C.; Gwenlan, C.; Gwilliam, C. B.; Haas, A.; Haber, C.; Hadavand, H. 
K.; Haddad, N.; Hadef, A.; Haefner, P.; Hageböck, S.; Hajduk, Z.; Hakobyan, H.; Haleem, M.; Haley, J.; Hall, D.; Halladjian, G.; Hallewell, G. D.; Hamacher, K.; Hamal, P.; Hamano, K.; Hamilton, A.; Hamity, G. N.; Hamnett, P. G.; Han, L.; Hanagaki, K.; Hanawa, K.; Hance, M.; Haney, B.; Hanke, P.; Hanna, R.; Hansen, J. B.; Hansen, J. D.; Hansen, M. C.; Hansen, P. H.; Hara, K.; Hard, A. S.; Harenberg, T.; Hariri, F.; Harkusha, S.; Harrington, R. D.; Harrison, P. F.; Hartjes, F.; Hartmann, N. M.; Hasegawa, M.; Hasegawa, Y.; Hasib, A.; Hassani, S.; Haug, S.; Hauser, R.; Hauswald, L.; Havranek, M.; Hawkes, C. M.; Hawkings, R. J.; Hawkins, A. D.; Hayden, D.; Hays, C. P.; Hays, J. M.; Hayward, H. S.; Haywood, S. J.; Head, S. J.; Heck, T.; Hedberg, V.; Heelan, L.; Heim, S.; Heim, T.; Heinemann, B.; Heinrich, J. J.; Heinrich, L.; Heinz, C.; Hejbal, J.; Helary, L.; Hellman, S.; Helsens, C.; Henderson, J.; Henderson, R. C. W.; Heng, Y.; Henkelmann, S.; Correia, A. M. Henriques; Henrot-Versille, S.; Herbert, G. H.; Jiménez, Y. Hernández; Herten, G.; Hertenberger, R.; Hervas, L.; Hesketh, G. G.; Hessey, N. P.; Hetherly, J. W.; Hickling, R.; Higón-Rodriguez, E.; Hill, E.; Hill, J. C.; Hiller, K. H.; Hillier, S. J.; Hinchliffe, I.; Hines, E.; Hinman, R. R.; Hirose, M.; Hirschbuehl, D.; Hobbs, J.; Hod, N.; Hodgkinson, M. C.; Hodgson, P.; Hoecker, A.; Hoeferkamp, M. R.; Hoenig, F.; Hohlfeld, M.; Hohn, D.; Holmes, T. R.; Homann, M.; Hong, T. M.; Hooberman, B. H.; Hopkins, W. H.; Horii, Y.; Horton, A. J.; Hostachy, J.-Y.; Hou, S.; Hoummada, A.; Howard, J.; Howarth, J.; Hrabovsky, M.; Hristova, I.; Hrivnac, J.; Hryn'ova, T.; Hrynevich, A.; Hsu, C.; Hsu, P. J.; Hsu, S.-C.; Hu, D.; Hu, Q.; Huang, Y.; Hubacek, Z.; Hubaut, F.; Huegging, F.; Huffman, T. B.; Hughes, E. W.; Hughes, G.; Huhtinen, M.; Hülsing, T. A.; Huseynov, N.; Huston, J.; Huth, J.; Iacobucci, G.; Iakovidis, G.; Ibragimov, I.; Iconomidou-Fayard, L.; Ideal, E.; Idrissi, Z.; Iengo, P.; Igonkina, O.; Iizawa, T.; Ikegami, Y.; Ikeno, M.; Ilchenko, Y.; Iliadis, D.; Ilic, N.; Ince, T.; Introzzi, G.; Ioannou, P.; Iodice, M.; Iordanidou, K.; Ippolito, V.; Quiles, A. Irles; Isaksson, C.; Ishino, M.; Ishitsuka, M.; Ishmukhametov, R.; Issever, C.; Istin, S.; Ito, F.; Ponce, J. M. Iturbe; Iuppa, R.; Ivarsson, J.; Iwanski, W.; Iwasaki, H.; Izen, J. M.; Izzo, V.; Jabbar, S.; Jackson, B.; Jackson, M.; Jackson, P.; Jain, V.; Jakobi, K. B.; Jakobs, K.; Jakobsen, S.; Jakoubek, T.; Jamin, D. O.; Jana, D. K.; Jansen, E.; Jansky, R.; Janssen, J.; Janus, M.; Jarlskog, G.; Javadov, N.; Javůrek, T.; Jeanneau, F.; Jeanty, L.; Jejelava, J.; Jeng, G.-Y.; Jennens, D.; Jenni, P.; Jentzsch, J.; Jeske, C.; Jézéquel, S.; Ji, H.; Jia, J.; Jiang, H.; Jiang, Y.; Jiggins, S.; Pena, J. Jimenez; Jin, S.; Jinaru, A.; Jinnouchi, O.; Johansson, P.; Johns, K. A.; Johnson, W. J.; Jon-And, K.; Jones, G.; Jones, R. W. L.; Jones, S.; Jones, T. J.; Jongmanns, J.; Jorge, P. M.; Jovicevic, J.; Ju, X.; Rozas, A. Juste; Köhler, M. K.; Kaczmarska, A.; Kado, M.; Kagan, H.; Kagan, M.; Kahn, S. J.; Kajomovitz, E.; Kalderon, C. W.; Kaluza, A.; Kama, S.; Kamenshchikov, A.; Kanaya, N.; Kaneti, S.; Kantserov, V. A.; Kanzaki, J.; Kaplan, B.; Kaplan, L. S.; Kapliy, A.; Kar, D.; Karakostas, K.; Karamaoun, A.; Karastathis, N.; Kareem, M. J.; Karentzos, E.; Karnevskiy, M.; Karpov, S. N.; Karpova, Z. M.; Karthik, K.; Kartvelishvili, V.; Karyukhin, A. N.; Kasahara, K.; Kashif, L.; Kass, R. D.; Kastanas, A.; Kataoka, Y.; Kato, C.; Katre, A.; Katzy, J.; Kawagoe, K.; Kawamoto, T.; Kawamura, G.; Kazama, S.; Kazanin, V. 
F.; Keeler, R.; Kehoe, R.; Keller, J. S.; Kempster, J. J.; Kentaro, K.; Keoshkerian, H.; Kepka, O.; Kerševan, B. P.; Kersten, S.; Keyes, R. A.; Khalil-zada, F.; Khandanyan, H.; Khanov, A.; Kharlamov, A. G.; Khoo, T. J.; Khovanskiy, V.; Khramov, E.; Khubua, J.; Kido, S.; Kim, H. Y.; Kim, S. H.; Kim, Y. K.; Kimura, N.; Kind, O. M.; King, B. T.; King, M.; King, S. B.; Kirk, J.; Kiryunin, A. E.; Kishimoto, T.; Kisielewska, D.; Kiss, F.; Kiuchi, K.; Kivernyk, O.; Kladiva, E.; Klein, M. H.; Klein, M.; Klein, U.; Kleinknecht, K.; Klimek, P.; Klimentov, A.; Klingenberg, R.; Klinger, J. A.; Klioutchnikova, T.; Kluge, E.-E.; Kluit, P.; Kluth, S.; Knapik, J.; Kneringer, E.; Knoops, E. B. F. G.; Knue, A.; Kobayashi, A.; Kobayashi, D.; Kobayashi, T.; Kobel, M.; Kocian, M.; Kodys, P.; Koffas, T.; Koffeman, E.; Kogan, L. A.; Koi, T.; Kolanoski, H.; Kolb, M.; Koletsou, I.; Komar, A. A.; Komori, Y.; Kondo, T.; Kondrashova, N.; Köneke, K.; König, A. C.; Kono, T.; Konoplich, R.; Konstantinidis, N.; Kopeliansky, R.; Koperny, S.; Köpke, L.; Kopp, A. K.; Korcyl, K.; Kordas, K.; Korn, A.; Korol, A. A.; Korolkov, I.; Korolkova, E. V.; Kortner, O.; Kortner, S.; Kosek, T.; Kostyukhin, V. V.; Kotov, V. M.; Kotwal, A.; Kourkoumeli-Charalampidi, A.; Kourkoumelis, C.; Kouskoura, V.; Koutsman, A.; Kowalewska, A. B.; Kowalewski, R.; Kowalski, T. Z.; Kozanecki, W.; Kozhin, A. S.; Kramarenko, V. A.; Kramberger, G.; Krasnopevtsev, D.; Krasny, M. W.; Krasznahorkay, A.; Kraus, J. K.; Kravchenko, A.; Kretz, M.; Kretzschmar, J.; Kreutzfeldt, K.; Krieger, P.; Krizka, K.; Kroeninger, K.; Kroha, H.; Kroll, J.; Kroseberg, J.; Krstic, J.; Kruchonak, U.; Krüger, H.; Krumnack, N.; Kruse, A.; Kruse, M. C.; Kruskal, M.; Kubota, T.; Kucuk, H.; Kuday, S.; Kuechler, J. T.; Kuehn, S.; Kugel, A.; Kuger, F.; Kuhl, A.; Kuhl, T.; Kukhtin, V.; Kukla, R.; Kulchitsky, Y.; Kuleshov, S.; Kuna, M.; Kunigo, T.; Kupco, A.; Kurashige, H.; Kurochkin, Y. A.; Kus, V.; Kuwertz, E. S.; Kuze, M.; Kvita, J.; Kwan, T.; Kyriazopoulos, D.; Rosa, A. La; Navarro, J. L. La Rosa; Rotonda, L. La; Lacasta, C.; Lacava, F.; Lacey, J.; Lacker, H.; Lacour, D.; Lacuesta, V. R.; Ladygin, E.; Lafaye, R.; Laforge, B.; Lagouri, T.; Lai, S.; Lammers, S.; Lampl, W.; Lançon, E.; Landgraf, U.; Landon, M. P. J.; Lang, V. S.; Lange, J. C.; Lankford, A. J.; Lanni, F.; Lantzsch, K.; Lanza, A.; Laplace, S.; Lapoire, C.; Laporte, J. F.; Lari, T.; Manghi, F. Lasagni; Lassnig, M.; Laurelli, P.; Lavrijsen, W.; Law, A. T.; Laycock, P.; Lazovich, T.; Lazzaroni, M.; Dortz, O. Le; Guirriec, E. Le; Menedeu, E. Le; Quilleuc, E. P. Le; LeBlanc, M.; LeCompte, T.; Ledroit-Guillon, F.; Lee, C. A.; Lee, S. C.; Lee, L.; Lefebvre, G.; Lefebvre, M.; Legger, F.; Leggett, C.; Lehan, A.; Miotto, G. Lehmann; Lei, X.; Leight, W. A.; Leisos, A.; Leister, A. G.; Leite, M. A. L.; Leitner, R.; Lellouch, D.; Lemmer, B.; Leney, K. J. C.; Lenz, T.; Lenzi, B.; Leone, R.; Leone, S.; Leonidopoulos, C.; Leontsinis, S.; Lerner, G.; Leroy, C.; Lesage, A. A. J.; Lester, C. G.; Levchenko, M.; Levêque, J.; Levin, D.; Levinson, L. J.; Levy, M.; Leyko, A. M.; Leyton, M.; Li, B.; Li, H.; Li, H. L.; Li, L.; Li, L.; Li, Q.; Li, S.; Li, X.; Li, Y.; Liang, Z.; Liao, H.; Liberti, B.; Liblong, A.; Lichard, P.; Lie, K.; Liebal, J.; Liebig, W.; Limbach, C.; Limosani, A.; Lin, S. C.; Lin, T. H.; Lindquist, B. E.; Lipeles, E.; Lipniacka, A.; Lisovyi, M.; Liss, T. M.; Lissauer, D.; Lister, A.; Litke, A. M.; Liu, B.; Liu, D.; Liu, H.; Liu, H.; Liu, J.; Liu, J. B.; Liu, K.; Liu, L.; Liu, M.; Liu, M.; Liu, Y. 
L.; Liu, Y.; Livan, M.; Lleres, A.; Merino, J. Llorente; Lloyd, S. L.; Sterzo, F. Lo; Lobodzinska, E.; Loch, P.; Lockman, W. S.; Loebinger, F. K.; Loevschall-Jensen, A. E.; Loew, K. M.; Loginov, A.; Lohse, T.; Lohwasser, K.; Lokajicek, M.; Long, B. A.; Long, J. D.; Long, R. E.; Longo, L.; Looper, K. A.; Lopes, L.; Mateos, D. Lopez; Paredes, B. Lopez; Paz, I. Lopez; Solis, A. Lopez; Lorenz, J.; Martinez, N. Lorenzo; Losada, M.; Lösel, P. J.; Lou, X.; Lounis, A.; Love, J.; Love, P. A.; Lu, H.; Lu, N.; Lubatti, H. J.; Luci, C.; Lucotte, A.; Luedtke, C.; Luehring, F.; Lukas, W.; Luminari, L.; Lundberg, O.; Lund-Jensen, B.; Lynn, D.; Lysak, R.; Lytken, E.; Lyubushkin, V.; Ma, H.; Ma, L. L.; Maccarrone, G.; Macchiolo, A.; Macdonald, C. M.; Maček, B.; Miguens, J. Machado; Madaffari, D.; Madar, R.; Maddocks, H. J.; Mader, W. F.; Madsen, A.; Maeda, J.; Maeland, S.; Maeno, T.; Maevskiy, A.; Magradze, E.; Mahlstedt, J.; Maiani, C.; Maidantchik, C.; Maier, A. A.; Maier, T.; Maio, A.; Majewski, S.; Makida, Y.; Makovec, N.; Malaescu, B.; Malecki, Pa.; Maleev, V. P.; Malek, F.; Mallik, U.; Malon, D.; Malone, C.; Maltezos, S.; Malyukov, S.; Mamuzic, J.; Mancini, G.; Mandelli, B.; Mandelli, L.; Mandić, I.; Maneira, J.; Filho, L. Manhaes de Andrade; Ramos, J. Manjarres; Mann, A.; Mansoulie, B.; Mantifel, R.; Mantoani, M.; Manzoni, S.; Mapelli, L.; Marceca, G.; March, L.; Marchiori, G.; Marcisovsky, M.; Marjanovic, M.; Marley, D. E.; Marroquim, F.; Marsden, S. P.; Marshall, Z.; Marti, L. F.; Marti-Garcia, S.; Martin, B.; Martin, T. A.; Martin, V. J.; Latour, B. Martin dit; Martinez, M.; Martin-Haugh, S.; Martoiu, V. S.; Martyniuk, A. C.; Marx, M.; Marzano, F.; Marzin, A.; Masetti, L.; Mashimo, T.; Mashinistov, R.; Masik, J.; Maslennikov, A. L.; Massa, I.; Massa, L.; Mastrandrea, P.; Mastroberardino, A.; Masubuchi, T.; Mättig, P.; Mattmann, J.; Maurer, J.; Maxfield, S. J.; Maximov, D. A.; Mazini, R.; Mazza, S. M.; Fadden, N. C. Mc; Goldrick, G. Mc; Kee, S. P. Mc; McCarn, A.; McCarthy, R. L.; McCarthy, T. G.; McClymont, L. I.; McFarlane, K. W.; Mcfayden, J. A.; Mchedlidze, G.; McMahon, S. J.; McPherson, R. A.; Medinnis, M.; Meehan, S.; Mehlhase, S.; Mehta, A.; Meier, K.; Meineck, C.; Meirose, B.; Garcia, B. R. Mellado; Meloni, F.; Mengarelli, A.; Menke, S.; Meoni, E.; Mercurio, K. M.; Mergelmeyer, S.; Mermod, P.; Merola, L.; Meroni, C.; Merritt, F. S.; Messina, A.; Metcalfe, J.; Mete, A. S.; Meyer, C.; Meyer, C.; Meyer, J.-P.; Meyer, J.; Theenhausen, H. Meyer Zu; Middleton, R. P.; Miglioranzi, S.; Mijović, L.; Mikenberg, G.; Mikestikova, M.; Mikuž, M.; Milesi, M.; Milic, A.; Miller, D. W.; Mills, C.; Milov, A.; Milstead, D. A.; Minaenko, A. A.; Minami, Y.; Minashvili, I. A.; Mincer, A. I.; Mindur, B.; Mineev, M.; Ming, Y.; Mir, L. M.; Mistry, K. P.; Mitani, T.; Mitrevski, J.; Mitsou, V. A.; Miucci, A.; Miyagawa, P. S.; Mjörnmark, J. U.; Moa, T.; Mochizuki, K.; Mohapatra, S.; Mohr, W.; Molander, S.; Moles-Valls, R.; Monden, R.; Mondragon, M. C.; Mönig, K.; Monk, J.; Monnier, E.; Montalbano, A.; Berlingen, J. Montejo; Monticelli, F.; Monzani, S.; Moore, R. W.; Morange, N.; Moreno, D.; Llácer, M. Moreno; Morettini, P.; Mori, D.; Mori, T.; Morii, M.; Morinaga, M.; Morisbak, V.; Moritz, S.; Morley, A. K.; Mornacchi, G.; Morris, J. D.; Mortensen, S. S.; Morvaj, L.; Mosidze, M.; Moss, J.; Motohashi, K.; Mount, R.; Mountricha, E.; Mouraviev, S. V.; Moyse, E. J. W.; Muanza, S.; Mudd, R. D.; Mueller, F.; Mueller, J.; Mueller, R. S. P.; Mueller, T.; Muenstermann, D.; Mullen, P.; Mullier, G. A.; Sanchez, F. J. 
Munoz; Quijada, J. A. Murillo; Murray, W. J.; Musheghyan, H.; Myagkov, A. G.; Myska, M.; Nachman, B. P.; Nackenhorst, O.; Nadal, J.; Nagai, K.; Nagai, R.; Nagai, Y.; Nagano, K.; Nagasaka, Y.; Nagata, K.; Nagel, M.; Nagy, E.; Nairz, A. M.; Nakahama, Y.; Nakamura, K.; Nakamura, T.; Nakano, I.; Namasivayam, H.; Garcia, R. F. Naranjo; Narayan, R.; Villar, D. I. Narrias; Naryshkin, I.; Naumann, T.; Navarro, G.; Nayyar, R.; Neal, H. A.; Nechaeva, P. Yu.; Neep, T. J.; Nef, P. D.; Negri, A.; Negrini, M.; Nektarijevic, S.; Nellist, C.; Nelson, A.; Nemecek, S.; Nemethy, P.; Nepomuceno, A. A.; Nessi, M.; Neubauer, M. S.; Neumann, M.; Neves, R. M.; Nevski, P.; Newman, P. R.; Nguyen, D. H.; Nickerson, R. B.; Nicolaidou, R.; Nicquevert, B.; Nielsen, J.; Nikiforov, A.; Nikolaenko, V.; Nikolic-Audit, I.; Nikolopoulos, K.; Nilsen, J. K.; Nilsson, P.; Ninomiya, Y.; Nisati, A.; Nisius, R.; Nobe, T.; Nodulman, L.; Nomachi, M.; Nomidis, I.; Nooney, T.; Norberg, S.; Nordberg, M.; Norjoharuddeen, N.; Novgorodova, O.; Nowak, S.; Nozaki, M.; Nozka, L.; Ntekas, K.; Nurse, E.; Nuti, F.; O'grady, F.; O'Neil, D. C.; O'Rourke, A. A.; O'Shea, V.; Oakham, F. G.; Oberlack, H.; Obermann, T.; Ocariz, J.; Ochi, A.; Ochoa, I.; Ochoa-Ricoux, J. P.; Oda, S.; Odaka, S.; Ogren, H.; Oh, A.; Oh, S. H.; Ohm, C. C.; Ohman, H.; Oide, H.; Okawa, H.; Okumura, Y.; Okuyama, T.; Olariu, A.; Seabra, L. F. Oleiro; Pino, S. A. Olivares; Damazio, D. Oliveira; Olszewski, A.; Olszowska, J.; Onofre, A.; Onogi, K.; Onyisi, P. U. E.; Oram, C. J.; Oreglia, M. J.; Oren, Y.; Orestano, D.; Orlando, N.; Orr, R. S.; Osculati, B.; Ospanov, R.; Garzon, G. Otero y.; Otono, H.; Ouchrif, M.; Ould-Saada, F.; Ouraou, A.; Oussoren, K. P.; Ouyang, Q.; Ovcharova, A.; Owen, M.; Owen, R. E.; Ozcan, V. E.; Ozturk, N.; Pachal, K.; Pages, A. Pacheco; Aranda, C. Padilla; Pagáčová, M.; Griso, S. Pagan; Paige, F.; Pais, P.; Pajchel, K.; Palacino, G.; Palestini, S.; Palka, M.; Pallin, D.; Palma, A.; Panagiotopoulou, E. St.; Pandini, C. E.; Vazquez, J. G. Panduro; Pani, P.; Panitkin, S.; Pantea, D.; Paolozzi, L.; Papadopoulou, Th. D.; Papageorgiou, K.; Paramonov, A.; Hernandez, D. Paredes; Parker, M. A.; Parker, K. A.; Parodi, F.; Parsons, J. A.; Parzefall, U.; Pascuzzi, V. R.; Pasqualucci, E.; Passaggio, S.; Pastore, F.; Pastore, Fr.; Pásztor, G.; Pataraia, S.; Patel, N. D.; Pater, J. R.; Pauly, T.; Pearce, J.; Pearson, B.; Pedersen, L. E.; Pedersen, M.; Lopez, S. Pedraza; Pedro, R.; Peleganchuk, S. V.; Pelikan, D.; Penc, O.; Peng, C.; Peng, H.; Penwell, J.; Peralva, B. S.; Perego, M. M.; Perepelitsa, D. V.; Codina, E. Perez; Perini, L.; Pernegger, H.; Perrella, S.; Peschke, R.; Peshekhonov, V. D.; Peters, K.; Peters, R. F. Y.; Petersen, B. A.; Petersen, T. C.; Petit, E.; Petridis, A.; Petridou, C.; Petroff, P.; Petrolo, E.; Petrov, M.; Petrucci, F.; Pettersson, N. E.; Peyaud, A.; Pezoa, R.; Phillips, P. W.; Piacquadio, G.; Pianori, E.; Picazio, A.; Piccaro, E.; Piccinini, M.; Pickering, M. A.; Piegaia, R.; Pilcher, J. E.; Pilkington, A. D.; Pin, A. W. J.; Pina, J.; Pinamonti, M.; Pinfold, J. L.; Pingel, A.; Pires, S.; Pirumov, H.; Pitt, M.; Plazak, L.; Pleier, M.-A.; Pleskot, V.; Plotnikova, E.; Plucinski, P.; Pluth, D.; Poettgen, R.; Poggioli, L.; Pohl, D.; Polesello, G.; Poley, A.; Policicchio, A.; Polifka, R.; Polini, A.; Pollard, C. S.; Polychronakos, V.; Pommès, K.; Pontecorvo, L.; Pope, B. G.; Popeneciu, G. A.; Popovic, D. S.; Poppleton, A.; Pospisil, S.; Potamianos, K.; Potrap, I. N.; Potter, C. J.; Potter, C. 
T.; Poulard, G.; Poveda, J.; Pozdnyakov, V.; Astigarraga, M. E. Pozo; Pralavorio, P.; Pranko, A.; Prell, S.; Price, D.; Price, L. E.; Primavera, M.; Prince, S.; Proissl, M.; Prokofiev, K.; Prokoshin, F.; Protopopescu, S.; Proudfoot, J.; Przybycien, M.; Puddu, D.; Puldon, D.; Purohit, M.; Puzo, P.; Qian, J.; Qin, G.; Qin, Y.; Quadt, A.; Quayle, W. B.; Queitsch-Maitland, M.; Quilty, D.; Raddum, S.; Radeka, V.; Radescu, V.; Radhakrishnan, S. K.; Radloff, P.; Rados, P.; Ragusa, F.; Rahal, G.; Rajagopalan, S.; Rammensee, M.; Rangel-Smith, C.; Ratti, M. G.; Rauscher, F.; Rave, S.; Ravenscroft, T.; Raymond, M.; Read, A. L.; Readioff, N. P.; Rebuzzi, D. M.; Redelbach, A.; Redlinger, G.; Reece, R.; Reeves, K.; Rehnisch, L.; Reichert, J.; Reisin, H.; Rembser, C.; Ren, H.; Rescigno, M.; Resconi, S.; Rezanova, O. L.; Reznicek, P.; Rezvani, R.; Richter, R.; Richter, S.; Richter-Was, E.; Ricken, O.; Ridel, M.; Rieck, P.; Riegel, C. J.; Rieger, J.; Rifki, O.; Rijssenbeek, M.; Rimoldi, A.; Rinaldi, L.; Ristić, B.; Ritsch, E.; Riu, I.; Rizatdinova, F.; Rizvi, E.; Rizzi, C.; Robertson, S. H.; Robichaud-Veronneau, A.; Robinson, D.; Robinson, J. E. M.; Robson, A.; Roda, C.; Rodina, Y.; Perez, A. Rodriguez; Rodriguez, D. Rodriguez; Roe, S.; Rogan, C. S.; Røhne, O.; Romaniouk, A.; Romano, M.; Saez, S. M. Romano; Adam, E. Romero; Rompotis, N.; Ronzani, M.; Roos, L.; Ros, E.; Rosati, S.; Rosbach, K.; Rose, P.; Rosenthal, O.; Rossetti, V.; Rossi, E.; Rossi, L. P.; Rosten, J. H. N.; Rosten, R.; Rotaru, M.; Roth, I.; Rothberg, J.; Rousseau, D.; Royon, C. R.; Rozanov, A.; Rozen, Y.; Ruan, X.; Rubbo, F.; Rubinskiy, I.; Rud, V. I.; Rudolph, M. S.; Rühr, F.; Ruiz-Martinez, A.; Rurikova, Z.; Rusakovich, N. A.; Ruschke, A.; Russell, H. L.; Rutherfoord, J. P.; Ruthmann, N.; Ryabov, Y. F.; Rybar, M.; Rybkin, G.; Ryu, S.; Ryzhov, A.; Saavedra, A. F.; Sabato, G.; Sacerdoti, S.; Sadrozinski, H. F.-W.; Sadykov, R.; Tehrani, F. Safai; Saha, P.; Sahinsoy, M.; Saimpert, M.; Saito, T.; Sakamoto, H.; Sakurai, Y.; Salamanna, G.; Salamon, A.; Loyola, J. E. Salazar; Salek, D.; De Bruin, P. H. Sales; Salihagic, D.; Salnikov, A.; Salt, J.; Salvatore, D.; Salvatore, F.; Salvucci, A.; Salzburger, A.; Sammel, D.; Sampsonidis, D.; Sanchez, A.; Sánchez, J.; Martinez, V. Sanchez; Sandaker, H.; Sandbach, R. L.; Sander, H. G.; Sanders, M. P.; Sandhoff, M.; Sandoval, C.; Sandstroem, R.; Sankey, D. P. C.; Sannino, M.; Sansoni, A.; Santoni, C.; Santonico, R.; Santos, H.; Castillo, I. Santoyo; Sapp, K.; Sapronov, A.; Saraiva, J. G.; Sarrazin, B.; Sasaki, O.; Sasaki, Y.; Sato, K.; Sauvage, G.; Sauvan, E.; Savage, G.; Savard, P.; Sawyer, C.; Sawyer, L.; Saxon, J.; Sbarra, C.; Sbrizzi, A.; Scanlon, T.; Scannicchio, D. A.; Scarcella, M.; Scarfone, V.; Schaarschmidt, J.; Schacht, P.; Schaefer, D.; Schaefer, R.; Schaeffer, J.; Schaepe, S.; Schaetzel, S.; Schäfer, U.; Schaffer, A. C.; Schaile, D.; Schamberger, R. D.; Scharf, V.; Schegelsky, V. A.; Scheirich, D.; Schernau, M.; Schiavi, C.; Schillo, C.; Schioppa, M.; Schlenker, S.; Schmieden, K.; Schmitt, C.; Schmitt, S.; Schmitz, S.; Schneider, B.; Schnellbach, Y. J.; Schnoor, U.; Schoeffel, L.; Schoening, A.; Schoenrock, B. D.; Schopf, E.; Schorlemmer, A. L. S.; Schott, M.; Schouten, D.; Schovancova, J.; Schramm, S.; Schreyer, M.; Schuh, N.; Schultens, M. J.; Schultz-Coulon, H.-C.; Schulz, H.; Schumacher, M.; Schumm, B. A.; Schune, Ph.; Schwanenberger, C.; Schwartzman, A.; Schwarz, T. 
A.; Schwegler, Ph.; Schweiger, H.; Schwemling, Ph.; Schwienhorst, R.; Schwindling, J.; Schwindt, T.; Sciolla, G.; Scuri, F.; Scutti, F.; Searcy, J.; Seema, P.; Seidel, S. C.; Seiden, A.; Seifert, F.; Seixas, J. M.; Sekhniaidze, G.; Sekhon, K.; Sekula, S. J.; Seliverstov, D. M.; Semprini-Cesari, N.; Serfon, C.; Serin, L.; Serkin, L.; Sessa, M.; Seuster, R.; Severini, H.; Sfiligoj, T.; Sforza, F.; Sfyrla, A.; Shabalina, E.; Shaikh, N. W.; Shan, L. Y.; Shang, R.; Shank, J. T.; Shapiro, M.; Shatalov, P. B.; Shaw, K.; Shaw, S. M.; Shcherbakova, A.; Shehu, C. Y.; Sherwood, P.; Shi, L.; Shimizu, S.; Shimmin, C. O.; Shimojima, M.; Shiyakova, M.; Shmeleva, A.; Saadi, D. Shoaleh; Shochet, M. J.; Shojaii, S.; Shrestha, S.; Shulga, E.; Shupe, M. A.; Sicho, P.; Sidebo, P. E.; Sidiropoulou, O.; Sidorov, D.; Sidoti, A.; Siegert, F.; Sijacki, Dj.; Silva, J.; Silverstein, S. B.; Simak, V.; Simard, O.; Simic, Lj.; Simion, S.; Simioni, E.; Simmons, B.; Simon, D.; Simon, M.; Sinervo, P.; Sinev, N. B.; Sioli, M.; Siragusa, G.; Sivoklokov, S. Yu.; Sjölin, J.; Sjursen, T. B.; Skinner, M. B.; Skottowe, H. P.; Skubic, P.; Slater, M.; Slavicek, T.; Slawinska, M.; Sliwa, K.; Slovak, R.; Smakhtin, V.; Smart, B. H.; Smestad, L.; Smirnov, S. Yu.; Smirnov, Y.; Smirnova, L. N.; Smirnova, O.; Smith, M. N. K.; Smith, R. W.; Smizanska, M.; Smolek, K.; Snesarev, A. A.; Snidero, G.; Snyder, S.; Sobie, R.; Socher, F.; Soffer, A.; Soh, D. A.; Sokhrannyi, G.; Sanchez, C. A. Solans; Solar, M.; Soldatov, E. Yu.; Soldevila, U.; Solodkov, A. A.; Soloshenko, A.; Solovyanov, O. V.; Solovyev, V.; Sommer, P.; Son, H.; Song, H. Y.; Sood, A.; Sopczak, A.; Sopko, V.; Sorin, V.; Sosa, D.; Sotiropoulou, C. L.; Soualah, R.; Soukharev, A. M.; South, D.; Sowden, B. C.; Spagnolo, S.; Spalla, M.; Spangenberg, M.; Spanò, F.; Sperlich, D.; Spettel, F.; Spighi, R.; Spigo, G.; Spiller, L. A.; Spousta, M.; Denis, R. D. St.; Stabile, A.; Stahlman, J.; Stamen, R.; Stamm, S.; Stanecka, E.; Stanek, R. W.; Stanescu, C.; Stanescu-Bellu, M.; Stanitzki, M. M.; Stapnes, S.; Starchenko, E. A.; Stark, G. H.; Stark, J.; Staroba, P.; Starovoitov, P.; Stärz, S.; Staszewski, R.; Steinberg, P.; Stelzer, B.; Stelzer, H. J.; Stelzer-Chilton, O.; Stenzel, H.; Stewart, G. A.; Stillings, J. A.; Stockton, M. C.; Stoebe, M.; Stoicea, G.; Stolte, P.; Stonjek, S.; Stradling, A. R.; Straessner, A.; Stramaglia, M. E.; Strandberg, J.; Strandberg, S.; Strandlie, A.; Strauss, M.; Strizenec, P.; Ströhmer, R.; Strom, D. M.; Stroynowski, R.; Strubig, A.; Stucci, S. A.; Stugu, B.; Styles, N. A.; Su, D.; Su, J.; Subramaniam, R.; Suchek, S.; Sugaya, Y.; Suk, M.; Sulin, V. V.; Sultansoy, S.; Sumida, T.; Sun, S.; Sun, X.; Sundermann, J. E.; Suruliz, K.; Susinno, G.; Sutton, M. R.; Suzuki, S.; Svatos, M.; Swiatlowski, M.; Sykora, I.; Sykora, T.; Ta, D.; Taccini, C.; Tackmann, K.; Taenzer, J.; Taffard, A.; Tafirout, R.; Taiblum, N.; Takai, H.; Takashima, R.; Takeda, H.; Takeshita, T.; Takubo, Y.; Talby, M.; Talyshev, A. A.; Tam, J. Y. C.; Tan, K. G.; Tanaka, J.; Tanaka, R.; Tanaka, S.; Tannenwald, B. B.; Araya, S. Tapia; Tapprogge, S.; Tarem, S.; Tartarelli, G. F.; Tas, P.; Tasevsky, M.; Tashiro, T.; Tassi, E.; Delgado, A. Tavares; Tayalati, Y.; Taylor, A. C.; Taylor, G. N.; Taylor, P. T. E.; Taylor, W.; Teischinger, F. A.; Teixeira-Dias, P.; Temming, K. K.; Temple, D.; Kate, H. Ten; Teng, P. K.; Teoh, J. J.; Tepel, F.; Terada, S.; Terashi, K.; Terron, J.; Terzo, S.; Testa, M.; Teuscher, R. J.; Theveneaux-Pelzer, T.; Thomas, J. P.; Thomas-Wilsker, J.; Thompson, E. N.; Thompson, P. 
D.; Thompson, R. J.; Thompson, A. S.; Thomsen, L. A.; Thomson, E.; Thomson, M.; Tibbetts, M. J.; Torres, R. E. Ticse; Tikhomirov, V. O.; Tikhonov, Yu. A.; Timoshenko, S.; Tipton, P.; Tisserant, S.; Todome, K.; Todorov, T.; Todorova-Nova, S.; Tojo, J.; Tokár, S.; Tokushuku, K.; Tolley, E.; Tomlinson, L.; Tomoto, M.; Tompkins, L.; Toms, K.; Tong, B.; Torrence, E.; Torres, H.; Pastor, E. Torró; Toth, J.; Touchard, F.; Tovey, D. R.; Trefzger, T.; Tricoli, A.; Trigger, I. M.; Trincaz-Duvoid, S.; Tripiana, M. F.; Trischuk, W.; Trocmé, B.; Trofymov, A.; Troncon, C.; Trottier-McDonald, M.; Trovatelli, M.; Truong, L.; Trzebinski, M.; Trzupek, A.; Tseng, J. C.-L.; Tsiareshka, P. V.; Tsipolitis, G.; Tsirintanis, N.; Tsiskaridze, S.; Tsiskaridze, V.; Tskhadadze, E. G.; Tsui, K. M.; Tsukerman, I. I.; Tsulaia, V.; Tsuno, S.; Tsybychev, D.; Tudorache, A.; Tudorache, V.; Tuna, A. N.; Tupputi, S. A.; Turchikhin, S.; Turecek, D.; Turgeman, D.; Turra, R.; Turvey, A. J.; Tuts, P. M.; Tylmad, M.; Tyndel, M.; Ucchielli, G.; Ueda, I.; Ueno, R.; Ughetto, M.; Ukegawa, F.; Unal, G.; Undrus, A.; Unel, G.; Ungaro, F. C.; Unno, Y.; Unverdorben, C.; Urban, J.; Urquijo, P.; Urrejola, P.; Usai, G.; Usanova, A.; Vacavant, L.; Vacek, V.; Vachon, B.; Valderanis, C.; Santurio, E. Valdes; Valencic, N.; Valentinetti, S.; Valero, A.; Valery, L.; Valkar, S.; Vallecorsa, S.; Ferrer, J. A. Valls; Van Den Wollenberg, W.; Van Der Deijl, P. C.; van der Geer, R.; van der Graaf, H.; van Eldik, N.; van Gemmeren, P.; Van Nieuwkoop, J.; van Vulpen, I.; van Woerden, M. C.; Vanadia, M.; Vandelli, W.; Vanguri, R.; Vaniachine, A.; Vankov, P.; Vardanyan, G.; Vari, R.; Varnes, E. W.; Varol, T.; Varouchas, D.; Vartapetian, A.; Varvell, K. E.; Vazeille, F.; Schroeder, T. Vazquez; Veatch, J.; Veloce, L. M.; Veloso, F.; Veneziano, S.; Ventura, A.; Venturi, M.; Venturi, N.; Venturini, A.; Vercesi, V.; Verducci, M.; Verkerke, W.; Vermeulen, J. C.; Vest, A.; Vetterli, M. C.; Viazlo, O.; Vichou, I.; Vickey, T.; Boeriu, O. E. Vickey; Viehhauser, G. H. A.; Viel, S.; Vigne, R.; Villa, M.; Perez, M. Villaplana; Vilucchi, E.; Vincter, M. G.; Vinogradov, V. B.; Vittori, C.; Vivarelli, I.; Vlachos, S.; Vlasak, M.; Vogel, M.; Vokac, P.; Volpi, G.; Volpi, M.; von der Schmitt, H.; von Toerne, E.; Vorobel, V.; Vorobev, K.; Vos, M.; Voss, R.; Vossebeld, J. H.; Vranjes, N.; Milosavljevic, M. Vranjes; Vrba, V.; Vreeswijk, M.; Vuillermet, R.; Vukotic, I.; Vykydal, Z.; Wagner, P.; Wagner, W.; Wahlberg, H.; Wahrmund, S.; Wakabayashi, J.; Walder, J.; Walker, R.; Walkowiak, W.; Wallangen, V.; Wang, C.; Wang, C.; Wang, F.; Wang, H.; Wang, H.; Wang, J.; Wang, J.; Wang, K.; Wang, R.; Wang, S. M.; Wang, T.; Wang, T.; Wang, X.; Wanotayaroj, C.; Warburton, A.; Ward, C. P.; Wardrope, D. R.; Washbrook, A.; Watkins, P. M.; Watson, A. T.; Watson, I. J.; Watson, M. F.; Watts, G.; Watts, S.; Waugh, B. M.; Webb, S.; Weber, M. S.; Weber, S. W.; Webster, J. S.; Weidberg, A. R.; Weinert, B.; Weingarten, J.; Weiser, C.; Weits, H.; Wells, P. S.; Wenaus, T.; Wengler, T.; Wenig, S.; Wermes, N.; Werner, M.; Werner, P.; Wessels, M.; Wetter, J.; Whalen, K.; Whallon, N. L.; Wharton, A. M.; White, A.; White, M. J.; White, R.; White, S.; Whiteson, D.; Wickens, F. J.; Wiedenmann, W.; Wielers, M.; Wienemann, P.; Wiglesworth, C.; Wiik-Fuchs, L. A. M.; Wildauer, A.; Wilkens, H. G.; Williams, H. H.; Williams, S.; Willis, C.; Willocq, S.; Wilson, J. A.; Wingerter-Seez, I.; Winklmeier, F.; Winston, O. J.; Winter, B. T.; Wittgen, M.; Wittkowski, J.; Wollstadt, S. J.; Wolter, M. 
W.; Wolters, H.; Wosiek, B. K.; Wotschack, J.; Woudstra, M. J.; Wozniak, K. W.; Wu, M.; Wu, M.; Wu, S. L.; Wu, X.; Wu, Y.; Wyatt, T. R.; Wynne, B. M.; Xella, S.; Xu, D.; Xu, L.; Yabsley, B.; Yacoob, S.; Yakabe, R.; Yamaguchi, D.; Yamaguchi, Y.; Yamamoto, A.; Yamamoto, S.; Yamanaka, T.; Yamauchi, K.; Yamazaki, Y.; Yan, Z.; Yang, H.; Yang, H.; Yang, Y.; Yang, Z.; Yao, W.-M.; Yap, Y. C.; Yasu, Y.; Yatsenko, E.; Wong, K. H. Yau; Ye, J.; Ye, S.; Yeletskikh, I.; Yen, A. L.; Yildirim, E.; Yorita, K.; Yoshida, R.; Yoshihara, K.; Young, C.; Young, C. J. S.; Youssef, S.; Yu, D. R.; Yu, J.; Yu, J. M.; Yu, J.; Yuan, L.; Yuen, S. P. Y.; Yusuff, I.; Zabinski, B.; Zaidan, R.; Zaitsev, A. M.; Zakharchuk, N.; Zalieckas, J.; Zaman, A.; Zambito, S.; Zanello, L.; Zanzi, D.; Zeitnitz, C.; Zeman, M.; Zemla, A.; Zeng, J. C.; Zeng, Q.; Zengel, K.; Zenin, O.; Ženiš, T.; Zerwas, D.; Zhang, D.; Zhang, F.; Zhang, G.; Zhang, H.; Zhang, J.; Zhang, L.; Zhang, R.; Zhang, R.; Zhang, X.; Zhang, Z.; Zhao, X.; Zhao, Y.; Zhao, Z.; Zhemchugov, A.; Zhong, J.; Zhou, B.; Zhou, C.; Zhou, L.; Zhou, L.; Zhou, M.; Zhou, N.; Zhu, C. G.; Zhu, H.; Zhu, J.; Zhu, Y.; Zhuang, X.; Zhukov, K.; Zibell, A.; Zieminska, D.; Zimine, N. I.; Zimmermann, C.; Zimmermann, S.; Zinonos, Z.; Zinser, M.; Ziolkowski, M.; Živković, L.; Zobernig, G.; Zoccoli, A.; Nedden, M. zur; Zurzolo, G.; Zwalinski, L.
2016-10-01
The results of a search for gluinos in final states with an isolated electron or muon, multiple jets and large missing transverse momentum using proton-proton collision data at a centre-of-mass energy of √s = 13 TeV are presented. The dataset used was recorded in 2015 by the ATLAS experiment at the Large Hadron Collider and corresponds to an integrated luminosity of 3.2 fb⁻¹. Six signal selections are defined that best exploit the signal characteristics. The data agree with the Standard Model background expectation in all six signal selections, and the largest deviation is a 2.1 standard deviation excess. The results are interpreted in a simplified model where pair-produced gluinos decay via the lightest chargino to the lightest neutralino. In this model, gluinos are excluded up to masses of approximately 1.6 TeV depending on the mass spectrum of the simplified model, thus surpassing the limits of previous searches.
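The quoted 2.1 standard deviation excess comes from the ATLAS profile-likelihood fit. Purely to illustrate the scale of such a statement at the counting level, a naive Gaussian significance estimate could look like the sketch below; the counts and uncertainty in the usage comment are hypothetical.

import math

def naive_significance(n_obs, n_exp, sigma_b):
    """Rough Gaussian significance of an excess: (observed - expected)
    divided by the combined statistical and background uncertainty."""
    return (n_obs - n_exp) / math.sqrt(n_exp + sigma_b ** 2)

# Example with hypothetical numbers: naive_significance(12, 6.0, 1.5) gives roughly 2.1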
Measurement of Z0-boson production at large rapidities in Pb-Pb collisions at √sNN = 5.02 TeV
NASA Astrophysics Data System (ADS)
Acharya, S.; Adamová, D.; Adolfsson, J.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Al-Turany, M.; Alam, S. N.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; Alfaro Molina, R.; Ali, Y.; Alici, A.; Alkin, A.; Alme, J.; Alt, T.; Altenkamper, L.; Altsybeev, I.; Andrei, C.; Andreou, D.; Andrews, H. A.; Andronic, A.; Angeletti, M.; Anguelov, V.; Anson, C.; Antičić, T.; Antinori, F.; Antonioli, P.; Apadula, N.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Ball, M.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barioglio, L.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartsch, E.; Bastid, N.; Basu, S.; Batigne, G.; Batyunya, B.; Batzing, P. C.; Bazo Alba, J. L.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Beltran, L. G. E.; Belyaev, V.; Bencedi, G.; Beole, S.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhaduri, P. P.; Bhasin, A.; Bhat, I. R.; Bhattacharjee, B.; Bhom, J.; Bianchi, A.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biro, G.; Biswas, R.; Biswas, S.; Blair, J. T.; Blau, D.; Blume, C.; Boca, G.; Bock, F.; Bogdanov, A.; Boldizsár, L.; Bombara, M.; Bonomi, G.; Bonora, M.; Borel, H.; Borissov, A.; Borri, M.; Botta, E.; Bourjau, C.; Bratrud, L.; Braun-Munzinger, P.; Bregant, M.; Broker, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buhler, P.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Cabala, J.; Caffarri, D.; Caines, H.; Caliva, A.; Calvo Villar, E.; Camacho, R. S.; Camerini, P.; Capon, A. A.; Carena, F.; Carena, W.; Carnesecchi, F.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Ceballos Sanchez, C.; Chandra, S.; Chang, B.; Chang, W.; Chapeland, S.; Chartier, M.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Cho, S.; Chochula, P.; Chojnacki, M.; Choudhury, S.; Chowdhury, T.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Concas, M.; Conesa Balbastre, G.; Conesa Del Valle, Z.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Costanza, S.; Crkovská, J.; Crochet, P.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; de, S.; de Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; de Falco, A.; de Gruttola, D.; De Marco, N.; de Pasquale, S.; de Souza, R. D.; Degenhardt, H. F.; Deisting, A.; Deloff, A.; Deplano, C.; Dhankher, P.; di Bari, D.; di Mauro, A.; di Nezza, P.; di Ruzza, B.; Dietel, T.; Dillenseger, P.; Ding, Y.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Doremalen, L. V. R.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Dudi, S.; Duggal, A. K.; Dukhishyam, M.; Dupieux, P.; Ehlers, R. 
J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erhardt, F.; Espagnon, B.; Eulisse, G.; Eum, J.; Evans, D.; Evdokimov, S.; Fabbietti, L.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Fernández Téllez, A.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Francisco, A.; Frankenfeld, U.; Fronze, G. G.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gajdosova, K.; Gallio, M.; Galvan, C. D.; Ganoti, P.; Garabatos, C.; Garcia-Solis, E.; Garg, K.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Gay Ducati, M. B.; Germain, M.; Ghosh, J.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Goméz Coral, D. M.; Gomez Ramirez, A.; Gonzalez, A. S.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Graczykowski, L. K.; Graham, K. L.; Greiner, L.; Grelli, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Gronefeld, J. M.; Grosa, F.; Grosse-Oetringhaus, J. F.; Grosso, R.; Guber, F.; Guernane, R.; Guerzoni, B.; Guittiere, M.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Gupta, R.; Guzman, I. B.; Haake, R.; Hadjidakis, C.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Haque, M. R.; Harris, J. W.; Harton, A.; Hassan, H.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbär, E.; Helstrup, H.; Herghelegiu, A.; Hernandez, E. G.; Herrera Corral, G.; Herrmann, F.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hills, C.; Hippolyte, B.; Hohlweger, B.; Horak, D.; Hornung, S.; Hosokawa, R.; Hristov, P.; Hughes, C.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Iddon, J. P.; Iga Buitron, S. A.; Ilkaev, R.; Inaba, M.; Ippolitov, M.; Islam, M. S.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacak, B.; Jacazio, N.; Jacobs, P. M.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jaelani, S.; Jahnke, C.; Jakubowska, M. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jercic, M.; Jimenez Bustamante, R. T.; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kang, J. H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karczmarczyk, P.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Ketzer, B.; Khabanova, Z.; Khan, P.; Khan, S.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Khatun, A.; Khuntia, A.; Kielbowicz, M. M.; Kileng, B.; Kim, B.; Kim, D.; Kim, D. J.; Kim, E. J.; Kim, H.; Kim, J. S.; Kim, J.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Köhler, M. K.; Kollegger, T.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Konyushikhin, M.; Kopcik, M.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Králik, I.; Kravčáková, A.; Kreis, L.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kuhn, C.; Kuijer, P. G.; Kumar, A.; Kumar, J.; Kumar, L.; Kumar, S.; Kundu, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kushpil, S.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Lagana Fernandes, C.; Lai, Y. S.; Lakomov, I.; Langoy, R.; Lapidus, K.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lavicka, R.; Lea, R.; Leardini, L.; Lee, S.; Lehas, F.; Lehner, S.; Lehrbach, J.; Lemmon, R. 
C.; Leogrande, E.; León Monzón, I.; Lévai, P.; Li, X.; Li, X. L.; Lien, J.; Lietava, R.; Lim, B.; Lindal, S.; Lindenstruth, V.; Lindsay, S. W.; Lippmann, C.; Lisa, M. A.; Litichevskyi, V.; Liu, A.; Llope, W. J.; Lodato, D. F.; Loenne, P. I.; Loginov, V.; Loizides, C.; Loncar, P.; Lopez, X.; López Torres, E.; Lowe, A.; Luettig, P.; Luhder, J. R.; Lunardon, M.; Luparello, G.; Lupi, M.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Mao, Y.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Martinengo, P.; Martinez, J. A. L.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Masciocchi, S.; Masera, M.; Masoni, A.; Massacrier, L.; Masson, E.; Mastroserio, A.; Mathis, A. M.; Matuoka, P. F. T.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzilli, M.; Mazzoni, M. A.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Mhlanga, S.; Miake, Y.; Mieskolainen, M. M.; Mihaylov, D. L.; Mikhaylov, K.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, A. P.; Mohanty, B.; Mohisin Khan, M.; Moreira de Godoy, D. A.; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Münning, K.; Munoz, M. I. A.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Myers, C. J.; Myrcha, J. W.; Nag, D.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Narayan, A.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Negrao de Oliveira, R. A.; Nellen, L.; Nesbo, S. V.; Neskovic, G.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Nobuhiro, A.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, H.; Ohlson, A.; Olah, L.; Oleniacz, J.; Oliveira da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Pachmayer, Y.; Pacik, V.; Pagano, D.; Paić, G.; Palni, P.; Pan, J.; Pandey, A. K.; Panebianco, S.; Papikyan, V.; Pareek, P.; Park, J.; Parmar, S.; Passfeld, A.; Pathak, S. P.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Peng, X.; Pereira, L. G.; Pereira da Costa, H.; Peresunko, D.; Perez Lezama, E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrovici, M.; Petta, C.; Pezzi, R. P.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pliquett, F.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Poppenborg, H.; Porteboeuf-Houssais, S.; Pozdniakov, V.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Punin, V.; Putschke, J.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Rana, D. B.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Ratza, V.; Ravasenga, I.; Read, K. 
F.; Redlich, K.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reshetin, A.; Reygers, K.; Riabov, V.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rodríguez Cahuantzi, M.; Røed, K.; Rogalev, R.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Rokita, P. S.; Ronchetti, F.; Rosas, E. D.; Roslon, K.; Rosnet, P.; Rossi, A.; Rotondi, A.; Roukoutakis, F.; Roy, C.; Roy, P.; Rueda, O. V.; Rui, R.; Rumyantsev, B.; Rustamov, A.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Šafařík, K.; Saha, S. K.; Sahoo, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sandoval, A.; Sarkar, A.; Sarkar, D.; Sarkar, N.; Sarma, P.; Sas, M. H. P.; Scapparone, E.; Scarlassara, F.; Schaefer, B.; Scheid, H. S.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schmidt, M. O.; Schmidt, M.; Schmidt, N. V.; Schukraft, J.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Šefčík, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sett, P.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shahoyan, R.; Shaikh, W.; Shangaraev, A.; Sharma, A.; Sharma, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shimomura, M.; Shirinkin, S.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. M.; Siemiarczuk, T.; Silaeva, S.; Silvermyr, D.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singhal, V.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Soramel, F.; Sorensen, S.; Sozzi, F.; Sputowska, I.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Stocco, D.; Storetvedt, M. M.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Šumbera, M.; Sumowidagdo, S.; Suzuki, K.; Swain, S.; Szabo, A.; Szarka, I.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thakur, D.; Thakur, S.; Thomas, D.; Thoresen, F.; Tieulent, R.; Tikhonov, A.; Timmins, A. R.; Toia, A.; Toppi, M.; Torres, S. R.; Tripathy, S.; Trogolo, S.; Trombetta, G.; Tropp, L.; Trubnikov, V.; Trzaska, W. H.; Trzeciak, B. A.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Umaka, E. N.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; van der Maarel, J.; van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vázquez Doce, O.; Vechernin, V.; Veen, A. M.; Velure, A.; Vercellin, E.; Vergara Limón, S.; Vermunt, L.; Vernet, R.; Vértesi, R.; Vickovic, L.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Virgili, T.; Vislavicius, V.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Voscek, D.; Vranic, D.; Vrláková, J.; Wagner, B.; Wang, H.; Wang, M.; Watanabe, Y.; Weber, M.; Weber, S. G.; Wegrzynek, A.; Weiser, D. F.; Wenzel, S. C.; Wessels, J. P.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Willems, G. A.; Williams, M. C. S.; Willsher, E.; Windelband, B.; Witt, W. E.; Xu, R.; Yalcin, S.; Yamakawa, K.; Yang, P.; Yano, S.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yoon, J. 
H.; Yun, E.; Yurchenko, V.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhang, C.; Zhang, Z.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zhu, Y.; Zichichi, A.; Zimmermann, M. B.; Zinovjev, G.; Zmeskal, J.; Zou, S.; Alice Collaboration
2018-05-01
The production of Z0 bosons at large rapidities in Pb-Pb collisions at √s_NN = 5.02 TeV is reported. Z0 candidates are reconstructed in the dimuon decay channel (Z0 → μ+μ-), based on muons selected with pseudo-rapidity -4.0 < η < -2.5 and p_T > 20 GeV/c. The invariant yield and the nuclear modification factor, R_AA, are presented as functions of rapidity and collision centrality. The value of R_AA for the 0-20% most central Pb-Pb collisions is 0.67 ± 0.11 (stat.) ± 0.03 (syst.) ± 0.06 (corr. syst.), exhibiting a deviation of 2.6σ from unity. The results are well described by calculations that include nuclear modifications of the parton distribution functions, while predictions using vacuum PDFs deviate from the data by 2.3σ in the 0-90% centrality class and by 3σ in the 0-20% central collisions.
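As a quick cross-check of the quoted significance, the sketch below combines the three quoted uncertainty components in quadrature and measures how far R_AA sits from unity. The quadrature combination is an assumption on my part; the abstract does not state how the 2.6σ figure was obtained.

```python
import math

# ALICE value quoted above for 0-20% central Pb-Pb collisions:
# R_AA = 0.67 +/- 0.11 (stat.) +/- 0.03 (syst.) +/- 0.06 (corr. syst.)
r_aa = 0.67
stat, syst, corr_syst = 0.11, 0.03, 0.06

# Assumption: the three components are independent and add in quadrature.
total_unc = math.sqrt(stat**2 + syst**2 + corr_syst**2)
deviation = abs(1.0 - r_aa) / total_unc

print(f"combined uncertainty ~ {total_unc:.3f}")        # ~0.129
print(f"deviation from unity ~ {deviation:.1f} sigma")  # ~2.6, matching the quoted value
```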
Twenge, Jean M; Gentile, Brittany; DeWall, C Nathan; Ma, Debbie; Lacefield, Katharine; Schurtz, David R
2010-03-01
Two cross-temporal meta-analyses find large generational increases in psychopathology among American college students (N=63,706) between 1938 and 2007 on the MMPI and MMPI-2, and among high school students (N=13,870) between 1951 and 2002 on the MMPI-A. The current generation of young people scores about a standard deviation higher (average d=1.05) on the clinical scales, including Pd (Psychopathic Deviate), Pa (Paranoia), Ma (Hypomania), and D (Depression). Five times as many now score above common cutoffs for psychopathology, including up to 40% on Ma. The birth cohort effects remain large and significant after controlling for the L and K validity scales, suggesting that the changes are not caused by response bias. The results best fit a model citing cultural shifts toward extrinsic goals, such as materialism and status, and away from intrinsic goals, such as community, meaning in life, and affiliation.
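The "five times as many above common cutoffs" statement is roughly what a one-standard-deviation mean shift predicts under a normal model. The sketch below assumes a hypothetical cutoff 1.5 SD above the earlier cohorts' mean (e.g. a T-score of 65 on a T = 50, SD = 10 scale); the cutoff value and the normality assumption are illustrative and not taken from the abstract.

```python
from statistics import NormalDist

d = 1.05          # average birth-cohort effect size reported in the abstract
cutoff_z = 1.5    # hypothetical clinical cutoff, 1.5 SD above the earlier cohorts' mean
                  # (e.g. T = 65 on a T = 50, SD = 10 scale); not from the abstract

norm = NormalDist()                      # standard normal: mean 0, SD 1
p_then = 1 - norm.cdf(cutoff_z)          # fraction above the cutoff in the earlier cohorts
p_now = 1 - norm.cdf(cutoff_z - d)       # same cutoff after the mean shifts up by d SDs

print(f"then: {p_then:.1%}  now: {p_now:.1%}  ratio ~ {p_now / p_then:.1f}x")
# then: 6.7%  now: 32.6%  ratio ~ 4.9x -- roughly the "five times as many" quoted above
```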