A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions
Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.
2017-04-12
Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyse the performance of a large-scale digital simulator, and find that a fidelity of around 70% is realizable for π-pulse infidelities below 10^-5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period. PMID:28401945
Quantum and classical behavior in interacting bosonic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertzberg, Mark P.
It is understood that in free bosonic theories, the classical field theory accurately describes the full quantum theory when the occupancy numbers of systems are very large. However, the situation is less understood in interacting theories, especially on time scales longer than the dynamical relaxation time. Recently there have been claims that the quantum theory deviates spectacularly from the classical theory on this time scale, even if the occupancy numbers are extremely large. Furthermore, it is claimed that the quantum theory quickly thermalizes while the classical theory does not. The evidence for these claims comes from noticing a spectacular difference in the time evolution of expectation values of quantum operators compared to the classical micro-state evolution. If true, this would have dramatic consequences for many important phenomena, including laboratory studies of interacting BECs, dark matter axions, preheating after inflation, etc. In this work we critically examine these claims. We show that in fact the classical theory can describe the quantum behavior in the high occupancy regime, even when interactions are large. The connection is that the expectation values of quantum operators in a single quantum micro-state are approximated by a corresponding classical ensemble average over many classical micro-states. Furthermore, by the ergodic theorem, a classical ensemble average of local fields with statistical translation invariance is the spatial average of a single micro-state. So the correlation functions of the quantum and classical field theories of a single micro-state approximately agree at high occupancy, even in interacting systems. Furthermore, both quantum and classical field theories can thermalize, when appropriate coarse graining is introduced, with the classical case requiring a cutoff on low occupancy UV modes. We discuss applications of our results.
Classical boson sampling algorithms with superior performance to near-term experiments
NASA Astrophysics Data System (ADS)
Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony
2017-12-01
It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
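The core of the classical algorithm described above is an independence sampler whose proposal is the easy-to-sample distinguishable-photon distribution. Below is a minimal Python sketch of that idea, not the authors' optimized implementation: proposals are accepted with the standard independence-sampler ratio built from matrix permanents, computed here with Ryser's formula. Collision multiplicity factors are ignored, which is a reasonable simplification when the number of modes far exceeds the square of the photon number; all names and parameters are illustrative.

```python
import itertools
import numpy as np

def permanent(A):
    """Matrix permanent via Ryser's formula, O(2^n * n^2); exact for small n."""
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            total += (-1) ** (n - r) * np.prod(A[:, list(cols)].sum(axis=1))
    return total

def propose_pattern(U, n, rng):
    """Proposal: distinguishable photons, each routed independently through U."""
    probs = np.abs(U[:, :n]) ** 2          # column k: output distribution of photon k
    return tuple(sorted(rng.choice(U.shape[0], p=probs[:, k]) for k in range(n)))

def densities(U, n, pattern):
    """Unnormalized target (bosonic) and proposal (distinguishable) probabilities.
    Collision multiplicity factors are ignored (fine when modes >> photons^2)."""
    sub = U[list(pattern), :n]
    return abs(permanent(sub)) ** 2, permanent(np.abs(sub) ** 2).real

def mis_boson_sampler(U, n, n_steps, seed=0):
    """Metropolised independence sampling over boson sampling output patterns."""
    rng = np.random.default_rng(seed)
    s = propose_pattern(U, n, rng)
    p, q = densities(U, n, s)
    chain = []
    for _ in range(n_steps):
        s_new = propose_pattern(U, n, rng)
        p_new, q_new = densities(U, n, s_new)
        # independence-sampler acceptance ratio min(1, p'q / (p q'))
        if p * q_new == 0 or rng.random() < min(1.0, (p_new * q) / (p * q_new)):
            s, p, q = s_new, p_new, q_new
        chain.append(s)
    return chain

# usage: a Haar-random 16-mode interferometer with 4 photons
rng = np.random.default_rng(1)
X = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
U, _ = np.linalg.qr(X)
print(mis_boson_sampler(U, n=4, n_steps=200)[-3:])
```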
Signatures of bifurcation on quantum correlations: Case of the quantum kicked top
NASA Astrophysics Data System (ADS)
Bhosale, Udaysinh T.; Santhanam, M. S.
2017-01-01
Quantum correlations reflect the quantumness of a system and are useful resources for quantum information and computational processes. Measures of quantum correlations do not have a classical analog and yet are influenced by classical dynamics. In this work, by modeling the quantum kicked top as a multiqubit system, the effect of classical bifurcations on measures of quantum correlations such as the quantum discord, geometric discord, and Meyer and Wallach Q measure is studied. The quantum correlation measures change rapidly in the vicinity of a classical bifurcation point. If the classical system is largely chaotic, time averages of the correlation measures are in good agreement with the values obtained by considering the appropriate random matrix ensembles. The quantum correlations scale with the total spin of the system, representing its semiclassical limit. In the vicinity of trivial fixed points of the kicked top, the scaling function decays as a power law. In the chaotic limit, for large total spin, quantum correlations saturate to a constant, which we obtain analytically, based on random matrix theory, for the Q measure. We also suggest that it can have experimental consequences.
Scale-independent inflation and hierarchy generation
Ferreira, Pedro G.; Hill, Christopher T.; Ross, Graham G.
2016-10-20
We discuss models involving two scalar fields coupled to classical gravity that satisfy the general criteria: (i) the theory has no mass input parameters; (ii) classical scale symmetry is broken only through -(1/12)ςφ²R couplings, where ς departs from the special conformal value of 1; (iii) the Planck mass is dynamically generated by the vacuum expectation values (VEVs) of the scalars; (iv) there is a stage of viable inflation associated with slow roll in the two-scalar potential; (v) the final vacuum has a small to vanishing cosmological constant and a hierarchically small ratio of the VEVs, as well as of the scalar masses to the Planck scale. In addition, this assumes the paradigm of classical scale symmetry as a custodial symmetry of large hierarchies.
Enstrophy Cascade in Decaying Two-Dimensional Quantum Turbulence
NASA Astrophysics Data System (ADS)
Reeves, Matthew T.; Billam, Thomas P.; Yu, Xiaoquan; Bradley, Ashton S.
2017-11-01
We report evidence for an enstrophy cascade in large-scale point-vortex simulations of decaying two-dimensional quantum turbulence. Devising a method to generate quantum vortex configurations with kinetic energy narrowly localized near a single length scale, we find the dynamics to be well characterized by a superfluid Reynolds number Re_s that depends only on the number of vortices and the initial kinetic energy scale. Under free evolution the vortices exhibit features of a classical enstrophy cascade, including a k^-3 power-law kinetic energy spectrum and a constant enstrophy flux associated with inertial transport to small scales. Clear signatures of the cascade emerge for N ≳ 500 vortices. Simulating up to very large Reynolds numbers (N = 32768 vortices), additional features of the classical theory are observed: the Kraichnan-Batchelor constant is found to converge to C' ≈ 1.6, and the width of the k^-3 range scales as Re_s^(1/2).
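For concreteness, a minimal Python sketch of the classical point-vortex dynamics underlying such simulations is given below. It is not the authors' code: it integrates the standard unbounded-plane point-vortex equations with a fourth-order Runge-Kutta step, uses circulations normalized to ±1 as a stand-in for quantized circulation, and omits boundaries, dissipation, and vortex-antivortex annihilation.

```python
import numpy as np

def vortex_velocities(pos, kappa):
    """Velocity induced on each point vortex by all others (unbounded plane):
    (u_i, v_i) = sum_j kappa_j/(2*pi) * (-(y_i - y_j), x_i - x_j) / r_ij^2."""
    dx = pos[:, 0][:, None] - pos[:, 0][None, :]
    dy = pos[:, 1][:, None] - pos[:, 1][None, :]
    r2 = dx ** 2 + dy ** 2
    np.fill_diagonal(r2, np.inf)               # exclude self-interaction
    u = (kappa[None, :] * -dy / r2).sum(axis=1) / (2 * np.pi)
    v = (kappa[None, :] * dx / r2).sum(axis=1) / (2 * np.pi)
    return np.stack([u, v], axis=1)

def rk4_step(pos, kappa, dt):
    """One classical fourth-order Runge-Kutta step of the vortex dynamics."""
    k1 = vortex_velocities(pos, kappa)
    k2 = vortex_velocities(pos + 0.5 * dt * k1, kappa)
    k3 = vortex_velocities(pos + 0.5 * dt * k2, kappa)
    k4 = vortex_velocities(pos + dt * k3, kappa)
    return pos + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# a neutral system of N vortices with quantized circulations +-1
rng = np.random.default_rng(0)
N = 100
pos = rng.uniform(-1.0, 1.0, (N, 2))
kappa = np.repeat([1.0, -1.0], N // 2)
for _ in range(2000):
    pos = rk4_step(pos, kappa, dt=1e-4)
print(pos[:3])
```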
ERIC Educational Resources Information Center
Caprara, Gian Vittorio; Alessandri, Guido; Eisenberg, Nancy; Kupfer, A.; Steca, Patrizia; Caprara, Maria Giovanna; Yamaguchi, Susumu; Fukuzawa, Ai; Abela, John
2012-01-01
Five studies document the validity of a new 8-item scale designed to measure "positivity," defined as the tendency to view life and experiences with a positive outlook. In the first study (N = 372), the psychometric properties of Positivity Scale (P Scale) were examined in accordance with classical test theory using a large number of…
ERIC Educational Resources Information Center
MacMillan, Peter D.
2000-01-01
Compared classical test theory (CTT), generalizability theory (GT), and multifaceted Rasch model (MFRM) approaches to detecting and correcting for rater variability using responses of 4,930 high school students graded by 3 raters on 9 scales. The MFRM approach identified far more raters as different than did the CTT analysis. GT and Rasch…
Effective model hierarchies for dynamic and static classical density functional theories
NASA Astrophysics Data System (ADS)
Majaniemi, S.; Provatas, N.; Nonomura, M.
2010-09-01
The origin and methodology of deriving effective model hierarchies are presented with applications to solidification of crystalline solids. In particular, it is discussed how the form of the equations of motion and the effective parameters on larger scales can be obtained from the more microscopic models. It will be shown that tying together the dynamic structure of the projection operator formalism with static classical density functional theories can lead to incomplete (mass) transport properties even though the linearized hydrodynamics on large scales is correctly reproduced. To facilitate a more natural way of binding together the dynamics of the macrovariables and classical density functional theory, a dynamic generalization of density functional theory based on the nonequilibrium generating functional is suggested.
The Phenomenology of Small-Scale Turbulence
NASA Astrophysics Data System (ADS)
Sreenivasan, K. R.; Antonia, R. A.
"I have sometimes thought that what makes a man's work classic is often just this multiplicity [of interpretations], which invites and at the same time resists our craving for a clear understanding." — Wright (1982, p. 34), on Wittgenstein's philosophy
Small-scale turbulence has been an area of especially active research in the recent past, and several useful research directions have been pursued. Here, we selectively review this work. The emphasis is on scaling phenomenology and kinematics of small-scale structure. After providing a brief introduction to the classical notions of universality due to Kolmogorov and others, we survey the existing work on intermittency, refined similarity hypotheses, anomalous scaling exponents, derivative statistics, intermittency models, and the structure and kinematics of small-scale structure - the latter aspect coming largely from the direct numerical simulation of homogeneous turbulence in a periodic box.
Quantum and classical ripples in graphene
NASA Astrophysics Data System (ADS)
Hašík, Juraj; Tosatti, Erio; Martoňák, Roman
2018-04-01
Thermal ripples of graphene are well understood at room temperature, but their quantum counterparts at low temperatures are in need of a realistic quantitative description. Here we present atomistic path-integral Monte Carlo simulations of freestanding graphene, which show upon cooling a striking classical-quantum evolution of height and angular fluctuations. The crossover takes place at ever-decreasing temperatures for ever-increasing wavelengths so that a completely quantum regime is never attained. Zero-temperature quantum graphene is flatter and smoother than classical graphene at large scales yet rougher at short scales. The angular fluctuation distribution of the normals can be quantitatively described by coexistence of two Gaussians, one classical strongly T -dependent and one quantum about 2° wide, of zero-point character. The quantum evolution of ripple-induced height and angular spread should be observable in electron diffraction in graphene and other two-dimensional materials, such as MoS2, bilayer graphene, boron nitride, etc.
A new model for extinction and recolonization in two dimensions: quantifying phylogeography.
Barton, Nicholas H; Kelleher, Jerome; Etheridge, Alison M
2010-09-01
Classical models of gene flow fail in three ways: they cannot explain large-scale patterns; they predict much more genetic diversity than is observed; and they assume that loosely linked genetic loci evolve independently. We propose a new model that deals with these problems. Extinction events kill some fraction of individuals in a region. These are replaced by offspring from a small number of parents, drawn from the preexisting population. This model of evolution forwards in time corresponds to a backwards model, in which ancestral lineages jump to a new location if they are hit by an event, and may coalesce with other lineages that are hit by the same event. We derive an expression for the identity in allelic state, and show that, over scales much larger than the largest event, this converges to the classical value derived by Wright and Malécot. However, rare events that cover large areas cause low genetic diversity, large-scale patterns, and correlations in ancestry between unlinked loci.
Turbulent statistics and intermittency enhancement in coflowing superfluid 4He
NASA Astrophysics Data System (ADS)
Biferale, L.; Khomenko, D.; L'vov, V.; Pomyalov, A.; Procaccia, I.; Sahoo, G.
2018-02-01
The large-scale turbulent statistics of mechanically driven superfluid 4He was shown experimentally to follow the classical counterpart. In this paper, we use direct numerical simulations to study the whole range of scales in a range of temperatures T ∈ [1.3, 2.1] K. The numerics employ self-consistent and nonlinearly coupled normal and superfluid components. The main results are that (i) the velocity fluctuations of the normal and super components are well correlated in the inertial range of scales, but decorrelate at small scales; (ii) the energy transfer by mutual friction between components is particularly efficient in the temperature range between 1.8 and 2 K, leading to an enhancement of small-scale intermittency for these temperatures; (iii) at low T and close to T_λ, the scaling properties of the energy spectra and structure functions of the two components approach those of classical hydrodynamic turbulence.
Novel doorways and resonances in large-scale classical systems
NASA Astrophysics Data System (ADS)
Franco-Villafañe, J. A.; Flores, J.; Mateos, J. L.; Méndez-Sánchez, R. A.; Novaro, O.; Seligman, T. H.
2011-05-01
We show how the concept of doorway states carries beyond its typical applications and usual concepts. The scale on which it may occur is increased to large classical wave systems. Specifically, we analyze the seismic response of sedimentary basins covered by water-logged clays, a rather common situation for urban sites. A model is introduced in which the doorway state is a plane wave propagating in the interface between the sediments and the clay. This wave is produced by the coupling of a Rayleigh and an evanescent SP-wave. This in turn leads to a strong resonant response in the soft clays near the surface of the basin. Our model calculations are compared with measurements during Mexico City earthquakes, showing quite good agreement. This not only provides a transparent explanation of catastrophic resonant seismic response in certain basins but also constitutes, to date, the largest-scale example of the doorway state mechanism in wave scattering. Furthermore, the doorway state itself has interesting and rather unusual characteristics.
NASA Astrophysics Data System (ADS)
Schuite, Jonathan; Longuevergne, Laurent; Bour, Olivier; Boudin, Frédérick; Durand, Stéphane; Lavenant, Nicolas
2015-12-01
Fractured aquifers which bear valuable water resources are often difficult to characterize with classical hydrogeological tools due to their intrinsic heterogeneities. Here we implement ground surface deformation tools (tiltmetry and optical leveling) to monitor groundwater pressure changes induced by a classical hydraulic test at the Ploemeur observatory. By jointly analyzing complementary time constraining data (tilt) and spatially constraining data (vertical displacement), our results strongly suggest that the use of these surface deformation observations allows for estimating storativity and structural properties (dip, root depth, and lateral extension) of a large hydraulically active fracture, in good agreement with previous studies. Hence, we demonstrate that ground surface deformation is a useful addition to traditional hydrogeological techniques and opens possibilities for characterizing important large-scale properties of fractured aquifers with short-term well tests as a controlled forcing.
NASA Astrophysics Data System (ADS)
Casadei, F.; Ruzzene, M.
2011-04-01
This work illustrates the possibility of extending the field of application of the Multi-Scale Finite Element Method (MsFEM) to structural mechanics problems that involve localized geometrical discontinuities like cracks or notches. The main idea is to construct finite elements with an arbitrary number of edge nodes that describe the actual geometry of the damage, with shape functions that are defined as local solutions of the differential operator of the specific problem according to the MsFEM approach. The small-scale information is then brought to the large-scale model through the coupling of the global system matrices, which are assembled using classical finite element procedures. The efficiency of the method is demonstrated through selected numerical examples that constitute classical problems of great interest to the structural health monitoring community.
Time Hierarchies and Model Reduction in Canonical Non-linear Models
Löwe, Hannes; Kremling, Andreas; Marin-Sanguino, Alberto
2016-01-01
The time-scale hierarchies of a very general class of models in differential equations are analyzed. Classical methods for model reduction and time-scale analysis have been adapted to this formalism, and a complementary method is proposed. A unified theoretical treatment shows how the structure of the system can be much better understood by inspection of two sets of singular values: one related to the stoichiometric structure of the system and another to its kinetics. The methods are exemplified first through a toy model, then a large synthetic network, and finally with numeric simulations of three classical benchmark models of real biological systems. PMID:27708665
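The two sets of singular values mentioned above can be illustrated on a toy network. The sketch below, with hypothetical rate constants chosen to produce a time-scale hierarchy, computes the singular values of a stoichiometric matrix and of the corresponding Jacobian for a linear reaction chain; it is a schematic illustration, not the paper's formalism.

```python
import numpy as np

# Toy reaction chain A -> B -> C with mass-action rates v1 = k1*[A], v2 = k2*[B].
# Stoichiometric matrix: rows = species (A, B, C), columns = reactions (v1, v2).
S = np.array([[-1.0,  0.0],
              [ 1.0, -1.0],
              [ 0.0,  1.0]])

k1, k2 = 10.0, 0.1                     # widely separated rate constants
dv_dx = np.array([[k1, 0.0, 0.0],      # d(v1)/d([A],[B],[C])
                  [0.0, k2, 0.0]])     # d(v2)/d([A],[B],[C])
J = S @ dv_dx                          # Jacobian of dx/dt = S v(x)

sv_structure = np.linalg.svd(S, compute_uv=False)
sv_kinetics = np.linalg.svd(J, compute_uv=False)
print("structural singular values:", sv_structure)   # conservation/coupling pattern
print("kinetic singular values:   ", sv_kinetics)    # spread ~ k1/k2 exposes the hierarchy
```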
Quantum-classical interface based on single flux quantum digital logic
NASA Astrophysics Data System (ADS)
McDermott, R.; Vavilov, M. G.; Plourde, B. L. T.; Wilhelm, F. K.; Liebermann, P. J.; Mukhanov, O. A.; Ohki, T. A.
2018-04-01
We describe an approach to the integrated control and measurement of a large-scale superconducting multiqubit array comprising up to 10^8 physical qubits using a proximal coprocessor based on the Single Flux Quantum (SFQ) digital logic family. Coherent control is realized by irradiating the qubits directly with classical bitstreams derived from optimal control theory. Qubit measurement is performed by a Josephson photon counter, which provides access to the classical result of projective quantum measurement at the millikelvin stage. We analyze the power budget and physical footprint of the SFQ coprocessor and discuss challenges and opportunities associated with this approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig
It is argued by extrapolation of general relativity and quantum mechanics that a classical inertial frame corresponds to a statistically defined observable that rotationally fluctuates due to Planck scale indeterminacy. Physical effects of exotic nonlocal rotational correlations on large scale field states are estimated. Their entanglement with the strong interaction vacuum is estimated to produce a universal, statistical centrifugal acceleration that resembles the observed cosmological constant.
Spatial distribution of GRBs and large scale structure of the Universe
NASA Astrophysics Data System (ADS)
Bagoly, Zsolt; Rácz, István I.; Balázs, Lajos G.; Tóth, L. Viktor; Horváth, István
We studied the space distribution of the starburst galaxies from the Millennium XXL database at z = 0.82. We examined the starburst distribution in the classical Millennium I simulation (De Lucia et al. 2006) using a semi-analytical model for the genesis of the galaxies. We simulated a starburst galaxy sample with a Markov Chain Monte Carlo method. The connection between the homogeneity of the large-scale structure and the distribution of starburst groups on a defined scale was also checked (Kofman and Shandarin 1998; Suhhonenko et al. 2011; Liivamägi et al. 2012; Park et al. 2012; Horvath et al. 2014, 2015).
Rossby waves and two-dimensional turbulence in a large-scale zonal jet
NASA Technical Reports Server (NTRS)
Shepherd, Theodore G.
1987-01-01
Homogeneous barotropic beta-plane turbulence is investigated, taking into account the effects of spatial inhomogeneity in the form of zonal shear flows. Attention is given to the case of zonal flows that are barotropically stable and of larger scale than the resulting transient eddy field. Numerical simulations reveal that large-scale zonal flows alter the picture of classical beta-plane turbulence. It is found that the disturbance field penetrates to the largest scales of motion, that the larger disturbance scales show a tendency to meridional rather than zonal anisotropy, and that the initial spectral transfer rate away from an isotropic intermediate-scale source is enhanced by the shear-induced transfer associated with straining by the zonal flow.
Quantum probability, choice in large worlds, and the statistical structure of reality.
Ross, Don; Ladyman, James
2013-06-01
Classical probability models of incentive response are inadequate in "large worlds," where the dimensions of relative risk and the dimensions of similarity in outcome comparisons typically differ. Quantum probability models for choice in large worlds may be motivated pragmatically - there is no third theory - or metaphysically: statistical processing in the brain adapts to the true scale-relative structure of the universe.
Non-classical photon correlation in a two-dimensional photonic lattice.
Gao, Jun; Qiao, Lu-Feng; Lin, Xiao-Feng; Jiao, Zhi-Qiang; Feng, Zhen; Zhou, Zheng; Gao, Zhen-Wei; Xu, Xiao-Yun; Chen, Yuan; Tang, Hao; Jin, Xian-Min
2016-06-13
Quantum interference and quantum correlation, as two main features of quantum optics, play an essential role in quantum information applications, such as multi-particle quantum walks and boson sampling. While many experimental demonstrations have been done in one-dimensional waveguide arrays, higher dimensions remain unexplored due to the tight requirements of manipulating and detecting photons at large scales. Here, we experimentally observe non-classical correlation of two identical photons in a fully coupled two-dimensional structure, i.e. a photonic lattice manufactured by three-dimensional femtosecond laser writing. The photon interference comprises 36 Hong-Ou-Mandel interference events and 9 bunching events. The overlap between the measured and simulated distributions is up to 0.890 ± 0.001. Clear photon correlation is observed in the two-dimensional photonic lattice. Combined with controllably engineered disorder, our results open new perspectives towards large-scale implementation of quantum simulation on integrated photonic chips.
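The quantum-versus-classical correlation contrast at the heart of such measurements can be written down directly from the lattice unitary. The following sketch, using a one-dimensional coupled-waveguide chain as a hypothetical stand-in for the paper's two-dimensional lattice, compares the bosonic two-photon correlation (a 2×2 permanent) with the distinguishable-particle prediction.

```python
import numpy as np
from scipy.linalg import expm

def two_photon_correlations(U, a, b):
    """Output correlation matrices for two photons injected in modes a and b:
    quantum (indistinguishable) vs classical (distinguishable). Normalization
    factors for bunched outputs (i == j) are ignored in this sketch."""
    m = U.shape[0]
    Gq = np.zeros((m, m))
    Gc = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            # permanent of the 2x2 submatrix: the two-photon amplitude
            amp = U[i, a] * U[j, b] + U[j, a] * U[i, b]
            Gq[i, j] = abs(amp) ** 2
            Gc[i, j] = abs(U[i, a] * U[j, b]) ** 2 + abs(U[j, a] * U[i, b]) ** 2
    return Gq, Gc

# toy lattice: 6 evanescently coupled waveguides, uniform nearest-neighbour coupling
m = 6
H = np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
U = expm(1j * H * 0.7)                 # unitary evolution over propagation length 0.7
Gq, Gc = two_photon_correlations(U, a=2, b=3)
print(np.round(Gq - Gc, 3))            # nonzero entries signal non-classical interference
```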
NASA Astrophysics Data System (ADS)
Fujitani, Y.; Sumino, Y.
2018-04-01
A classically scale invariant extension of the standard model predicts large anomalous Higgs self-interactions. We compute missing contributions in previous studies for probing the Higgs triple coupling of a minimal model using the process e+e- → Zhh. Employing a proper order counting, we compute the total and differential cross sections at the leading order, which incorporate the one-loop corrections between zero external momenta and their physical values. Discovery/exclusion potential of a future e+e- collider for this model is estimated. We also find a unique feature in the momentum dependence of the Higgs triple vertex for this class of models.
A spatially homogeneous and isotropic Einstein-Dirac cosmology
NASA Astrophysics Data System (ADS)
Finster, Felix; Hainzl, Christian
2011-04-01
We consider a spatially homogeneous and isotropic cosmological model where Dirac spinors are coupled to classical gravity. For the Dirac spinors we choose a Hartree-Fock ansatz where all one-particle wave functions are coherent and have the same momentum. If the scale function is large, the universe behaves like the classical Friedmann dust solution. If however the scale function is small, quantum effects lead to oscillations of the energy-momentum tensor. It is shown numerically and proven analytically that these quantum oscillations can prevent the formation of a big bang or big crunch singularity. The energy conditions are analyzed. We prove the existence of time-periodic solutions which go through an infinite number of expansion and contraction cycles.
A Theory For The Variability of The Baroclinic Quasi-geostrophic Wind Driven Circulation.
NASA Astrophysics Data System (ADS)
Ben Jelloul, M.; Huck, T.
We propose a theory of the wind-driven circulation based on the large-scale (i.e. small Burger number) quasi-geostrophic assumptions retained in the Rhines and Young (1982) classical study of the steady baroclinic flow. We therefore use multiple time-scale and asymptotic expansions to separate the steady and the time-dependent components of the flow. The barotropic flow is given by the Sverdrup balance. At first order in Burger number, the baroclinic flow can be decomposed into two parts. A steady contribution ensures no flow in the deep layer, which is at rest in the absence of dissipative processes. Since baroclinic instability is inhibited at large scale, a spectrum of neutral modes also arises. These are of three types: classical Rossby basin modes deformed through advection by the barotropic flow, recirculating modes localized in the recirculation gyre, and blocked modes corresponding to closed potential vorticity contours. At the next order in Burger number, amplitude equations for the baroclinic modes are derived. If dissipative processes are included at this order, the system adjusts towards the Rhines and Young solution with a homogenized potential vorticity pool.
Classical and quantum cosmology of minimal massive bigravity
NASA Astrophysics Data System (ADS)
Darabi, F.; Mousavi, M.
2016-10-01
In a Friedmann-Robertson-Walker (FRW) space-time background we study the classical cosmological models in the context of the recently proposed theory of nonlinear minimal massive bigravity. We show that in the presence of a perfect fluid the classical field equations acquire a contribution from the massive graviton as a cosmological term, which is positive or negative depending on the dynamical competition between the two scale factors of the bigravity metrics. We obtain the classical field equations for flat and open universes in the ordinary and Schutz representations of the perfect fluid. Focusing on the Schutz representation for the flat universe, we find classical solutions exhibiting singularities at the early universe with a vacuum equation of state. Then, in the Schutz representation, we study the quantum cosmology for the flat universe and derive the Schrödinger-Wheeler-DeWitt equation. We find its exact and wave packet solutions and discuss their properties to show that the initial singularity in the classical solutions can be avoided by quantum cosmology. Similar to the study of the Hartle-Hawking no-boundary proposal in the quantum cosmology of de Rham, Gabadadze and Tolley (dRGT) massive gravity, it turns out that the mass of the graviton predicted by the quantum cosmology of the minimal massive bigravity is large at the early universe. This is in agreement with the fact that at the early universe the cosmological constant should be large.
Low Temperature Properties for Correlation Functions in Classical N-Vector Spin Models
NASA Astrophysics Data System (ADS)
Balaban, Tadeusz; O'Carroll, Michael
We obtain convergent multi-scale expansions for the one- and two-point correlation functions of the low temperature lattice classical N-vector spin model in d ≥ 3 dimensions, N ≥ 2. The Gibbs factor is taken as
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-01
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
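A minimal sketch of the AAJ idea is given below, assuming the standard type-II Anderson mixing formulation; the parameter names (weight ω, Anderson period p, history depth m) are illustrative and the implementation is schematic rather than the authors' optimized parallel code.

```python
import numpy as np

def alternating_anderson_jacobi(A, b, omega=2/3, p=6, m=5, tol=1e-10, maxit=100000):
    """Weighted Jacobi sweeps, with an Anderson extrapolation over the last m
    residual differences applied every p-th iteration (the 'alternating' idea)."""
    Dinv = 1.0 / np.diag(A)
    x = np.zeros_like(b, dtype=float)
    X_hist, F_hist = [], []
    bnorm = np.linalg.norm(b)
    for it in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) < tol * bnorm:
            return x, it
        f = Dinv * r                               # preconditioned residual
        X_hist.append(x.copy()); F_hist.append(f.copy())
        X_hist = X_hist[-(m + 1):]; F_hist = F_hist[-(m + 1):]
        if (it + 1) % p == 0 and len(F_hist) > 1:
            # Anderson step: least-squares mix of the stored history
            dX = np.column_stack([X_hist[i + 1] - X_hist[i] for i in range(len(X_hist) - 1)])
            dF = np.column_stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)])
            gamma = np.linalg.lstsq(dF, f, rcond=None)[0]
            x = x + omega * f - (dX + omega * dF) @ gamma
        else:
            x = x + omega * f                      # plain weighted Jacobi update
    return x, maxit

# usage: 1D Poisson system, where plain Jacobi is notoriously slow
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = alternating_anderson_jacobi(A, b)
print(iters, np.linalg.norm(A @ x - b))
```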
Classical and quantum stability in putative landscapes
Dine, Michael
2017-01-18
Landscape analyses often assume the existence of large numbers of fields, N, with all of the many couplings among these fields (subject to constraints such as local supersymmetry) selected independently and randomly from simple (say Gaussian) distributions. We point out that unitarity and perturbativity place significant constraints on behavior of couplings with N, eliminating otherwise puzzling results. In would-be flux compactifications of string theory, we point out that in order that there be large numbers of light fields, the compactification radii must scale as a positive power of N; scaling of couplings with N may also be necessary for perturbativity. We show that in some simple string theory settings with large numbers of fields, for fixed R and string coupling, one can bound certain sums of squares of couplings by order one numbers. This may argue for strong correlations, possibly calling into question the assumption of uncorrelated distributions. Finally, we consider implications of these considerations for classical and quantum stability of states without supersymmetry, with low energy supersymmetry arising from tuning of parameters, and with dynamical breaking of supersymmetry.
Arbitrary-order Hilbert Spectral Analysis and Intermittency in Solar Wind Density Fluctuations
NASA Astrophysics Data System (ADS)
Carbone, Francesco; Sorriso-Valvo, Luca; Alberti, Tommaso; Lepreti, Fabio; Chen, Christopher H. K.; Němeček, Zdenek; Šafránková, Jana
2018-05-01
The properties of inertial- and kinetic-range solar wind turbulence have been investigated with the arbitrary-order Hilbert spectral analysis method, applied to high-resolution density measurements. Due to the small sample size and to the presence of strong nonstationary behavior and large-scale structures, the classical analysis in terms of structure functions may prove to be unsuccessful in detecting the power-law behavior in the inertial range, and may underestimate the scaling exponents. However, the Hilbert spectral method provides an optimal estimation of the scaling exponents, which have been found to be close to those for velocity fluctuations in fully developed hydrodynamic turbulence. At smaller scales, below the proton gyroscale, the system loses its intermittent multiscaling properties and converges to a monofractal process. The resulting scaling exponents, obtained at small scales, are in good agreement with those of classical fractional Brownian motion, indicating a long-term memory in the process, and the absence of correlations around the spectral-break scale. These results provide important constraints on models of kinetic-range turbulence in the solar wind.
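Schematically, arbitrary-order Hilbert spectral analysis combines empirical mode decomposition with the Hilbert transform and then forms amplitude moments conditioned on instantaneous frequency. The sketch below assumes the third-party PyEMD package for the decomposition and is illustrative only; the binning choices and exponent fits follow common practice rather than the paper's exact pipeline.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD     # assumed dependency for empirical mode decomposition

def hilbert_marginal_moments(signal, fs, q_values, freq_bins):
    """Arbitrary-order Hilbert marginal spectra L_q(f): decompose the signal into
    intrinsic mode functions, extract instantaneous amplitude and frequency, then
    average amplitude moments A^q within frequency bins. Scaling exponents xi(q)
    follow from log-log fits of L_q against frequency."""
    imfs = EMD().emd(signal)
    amp_all, freq_all = [], []
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)
        phase = np.unwrap(np.angle(analytic))
        freq = np.gradient(phase) * fs / (2 * np.pi)   # instantaneous frequency
        keep = freq > 0
        amp_all.append(amp[keep]); freq_all.append(freq[keep])
    amp = np.concatenate(amp_all); freq = np.concatenate(freq_all)
    idx = np.digitize(freq, freq_bins)
    Lq = np.full((len(q_values), len(freq_bins) - 1), np.nan)
    for b in range(1, len(freq_bins)):
        sel = idx == b
        if sel.any():
            for k, q in enumerate(q_values):
                Lq[k, b - 1] = np.mean(amp[sel] ** q)
    return Lq

# usage on a Brownian-motion surrogate (a monofractal with known exponents)
rng = np.random.default_rng(1)
s = np.cumsum(rng.standard_normal(2 ** 14))
bins = np.logspace(-3.0, -0.5, 20)
Lq = hilbert_marginal_moments(s, fs=1.0, q_values=[1, 2, 3], freq_bins=bins)
print(Lq[1])   # second-order marginal spectrum across the frequency bins
```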
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Qiang
The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next generation atomistic-to-continuum multiscale simulations. In addition, a rigorous study of finite element discretizations of peridynamics will be considered. Using the fact that peridynamics is spatially derivative free, we will also characterize the space of admissible peridynamic solutions and carry out systematic analyses of the models, in particular rigorously showing how peridynamics encompasses fracture and other failure phenomena. Additional aspects of the project include the mathematical and numerical analysis of peridynamics applied to stochastic peridynamics models. In summary, the project will make feasible mathematically consistent multiscale models for the analysis and design of advanced materials.
Exact Extremal Statistics in the Classical 1D Coulomb Gas
NASA Astrophysics Data System (ADS)
Dhar, Abhishek; Kundu, Anupam; Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory
2017-08-01
We consider a one-dimensional classical Coulomb gas of N like charges in a harmonic potential—also known as the one-dimensional one-component plasma. We compute, analytically, the probability distribution of the position x_max of the rightmost charge in the limit of large N. We show that the typical fluctuations of x_max around its mean are described by a nontrivial scaling function with asymmetric tails. This distribution is different from the Tracy-Widom distribution of x_max for Dyson's log gas. We also compute the large deviation functions of x_max explicitly and show that the system exhibits a third-order phase transition, as in the log gas. Our theoretical predictions are verified numerically.
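The numerical verification mentioned above can be mimicked with a simple Metropolis simulation of the 1D one-component plasma energy function; the sketch below uses an illustrative coupling and does not reproduce the paper's N-dependent scaling conventions.

```python
import numpy as np

def energy(x, alpha):
    """1D one-component plasma: harmonic confinement plus linear (1D Coulomb)
    repulsion, E = sum_i x_i^2 / 2 - alpha * sum_{i<j} |x_i - x_j|."""
    pair = np.abs(x[:, None] - x[None, :]).sum() / 2.0   # sum over i < j
    return 0.5 * np.sum(x ** 2) - alpha * pair

def sample_xmax(N=50, alpha=0.01, beta=1.0, n_sweeps=100000, step=0.2, seed=0):
    """Metropolis sampling of the rightmost-charge position x_max."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(N)
    E = energy(x, alpha)
    samples = []
    for sweep in range(n_sweeps):
        i = rng.integers(N)
        x_new = x.copy()
        x_new[i] += step * rng.standard_normal()
        E_new = energy(x_new, alpha)
        if rng.random() < np.exp(min(0.0, -beta * (E_new - E))):
            x, E = x_new, E_new
        if sweep > n_sweeps // 2:          # discard burn-in
            samples.append(x.max())
    return np.array(samples)

xmax = sample_xmax()
print(xmax.mean(), xmax.std())             # asymmetry shows up in the histogram tails
```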
Young's moduli of carbon materials investigated by various classical molecular dynamics schemes
NASA Astrophysics Data System (ADS)
Gayk, Florian; Ehrens, Julian; Heitmann, Tjark; Vorndamme, Patrick; Mrugalla, Andreas; Schnack, Jürgen
2018-05-01
For many applications, classical carbon potentials together with classical molecular dynamics are employed to calculate structures and physical properties of those carbon-based materials where quantum mechanical methods fail, either due to the excessive size, irregular structure, or long-time dynamics. Although such potentials, as for instance implemented in LAMMPS, yield reasonably accurate bond lengths and angles for several carbon materials such as graphene, it is not clear how accurate they are in terms of mechanical properties such as Young's moduli. We performed large-scale classical molecular dynamics investigations of three carbon-based materials using the various potentials implemented in LAMMPS as well as the EDIP potential of Marks. We show how the Young's moduli vary with the classical potentials and compare to experimental results. Since classical descriptions of carbon are bound to be approximations, it is not astonishing that different realizations yield differing results. One should therefore carefully check for which observables a certain potential is suited. Our aim is to contribute to such a clarification.
Insufficiency of avoided crossings for witnessing large-scale quantum coherence in flux qubits
NASA Astrophysics Data System (ADS)
Fröwis, Florian; Yadin, Benjamin; Gisin, Nicolas
2018-04-01
Do experiments based on superconducting loops segmented with Josephson junctions (e.g., flux qubits) show macroscopic quantum behavior in the sense of Schrödinger's cat example? Various arguments based on microscopic and phenomenological models were recently adduced in this debate. We approach this problem by adapting (to flux qubits) the framework of large-scale quantum coherence, which was already successfully applied to spin ensembles and photonic systems. We show that contemporary experiments might show quantum coherence more than 100 times larger than experiments in the classical regime. However, we argue that the often-used demonstration of an avoided crossing in the energy spectrum is not sufficient to make a conclusion about the presence of large-scale quantum coherence. Alternative, rigorous witnesses are proposed.
USDA-ARS?s Scientific Manuscript database
Classical quantitative genetics aids crop improvement by providing the means to estimate heritability, genetic correlations, and predicted responses to various selection schemes. Genomics has the potential to aid quantitative genetics and applied crop improvement programs via large-scale, high-thro...
Long-term drought sensitivity of trees in second-growth forests in a humid region
Neil Pederson; Kacie Tackett; Ryan W. McEwan; Stacy Clark; Adrienne Cooper; Glade Brosi; Ray Eaton; R. Drew Stockwell
2012-01-01
Classical field methods of reconstructing drought using tree rings in humid, temperate regions typically target old trees from drought-prone sites. This approach limits investigators to a handful of species and excludes large amounts of data that might be useful, especially for coverage gaps in large-scale networks. By sampling in more "typical" forests, network...
The trace anomaly and dynamical vacuum energy in cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mottola, Emil
2010-04-30
The trace anomaly of conformal matter implies the existence of massless scalar poles in physical amplitudes involving the stress-energy tensor. These poles may be described by a local effective action with massless scalar fields, which couple to classical sources, contribute to gravitational scattering processes, and can have long range gravitational effects at macroscopic scales. In an effective field theory approach, the effective action of the anomaly is an infrared relevant term that should be added to the Einstein-Hilbert action of classical General Relativity to take account of macroscopic quantum effects. The additional scalar degrees of freedom contained in this effective action may be understood as responsible for both the Casimir effect in flat spacetime and large quantum backreaction effects at the horizon scale of cosmological spacetimes. These effects of the trace anomaly imply that the cosmological vacuum energy is dynamical, and its value depends on macroscopic boundary conditions at the cosmological horizon scale, rather than sensitivity to the extreme ultraviolet Planck scale.
Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S
2014-12-09
Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
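A schematic of the CB-FS idea, with k-means standing in for the Markov-state-model state decomposition and a random forest providing the supervised feature ranking, is sketched below on synthetic data; all names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def cb_fs(features, n_states=4, top_k=10, seed=0):
    """Clustering-based feature selection, schematically: (1) assign conformations
    to states by clustering, (2) train a supervised model to predict the state,
    (3) rank input features (e.g., contacts, dihedrals) by importance."""
    states = KMeans(n_clusters=n_states, random_state=seed, n_init=10).fit_predict(features)
    clf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(features, states)
    ranking = np.argsort(clf.feature_importances_)[::-1]
    return ranking[:top_k], states

# usage with a synthetic "trajectory": 1000 frames x 50 degrees of freedom
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))
X[:500, 3] += 4.0      # feature 3 separates two hypothetical conformational states
top, states = cb_fs(X)
print(top)              # feature 3 should rank first
```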
Plasmonic eigenmodes in individual and bow-tie graphene nanotriangles
NASA Astrophysics Data System (ADS)
Wang, Weihua; Christensen, Thomas; Jauho, Antti-Pekka; Thygesen, Kristian S.; Wubs, Martijn; Mortensen, N. Asger
2015-04-01
In classical electrodynamics, nanostructured graphene is commonly modeled by the computationally demanding problem of a three-dimensional conducting film of atomic-scale thickness. Here, we propose an efficient alternative two-dimensional electrostatic approach where all calculation procedures are restricted to the graphene sheet. Furthermore, to explore possible quantum effects, we perform tight-binding calculations, adopting a random-phase approximation. We investigate multiple plasmon modes in 20 nm equilateral triangles of graphene, treating the optical response classically as well as quantum mechanically. Compared to the classical plasmonic spectrum which is "blind" to the edge termination, we find that the quantum plasmon frequencies exhibit blueshifts in the case of armchair edge termination of the underlying atomic lattice, while redshifts are found for zigzag edges. Furthermore, we find spectral features in the zigzag case which are associated with electronic edge states not present for armchair termination. Merging pairs of triangles into dimers, plasmon hybridization leads to energy splitting that appears strongest in classical calculations while splitting is lower for armchair edges and even more reduced for zigzag edges. Our various results illustrate a surprising phenomenon: Even 20 nm large graphene structures clearly exhibit quantum plasmonic features due to atomic-scale details in the edge termination.
Utilization of Large Scale Surface Models for Detailed Visibility Analyses
NASA Astrophysics Data System (ADS)
Caha, J.; Kačmařík, M.
2017-11-01
This article demonstrates the utilization of large scale surface models with fine spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the Opava River, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, the extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
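A minimal line-of-sight implementation of the Boolean viewshed discussed above is sketched below; an extended viewshed would record, for example, the angle margin above the local horizon instead of the Boolean flag. The grid, observer parameters, and sampling scheme are illustrative simplifications, not the article's workflow.

```python
import numpy as np

def boolean_viewshed(dem, vx, vy, observer_height=1.6, cell=1.0):
    """Boolean viewshed on a raster DEM: a target cell is visible when its
    elevation angle from the observer exceeds that of every cell along the
    line of sight (simple sampled line of sight, no interpolation)."""
    ny, nx = dem.shape
    z0 = dem[vy, vx] + observer_height
    visible = np.zeros((ny, nx), dtype=bool)
    visible[vy, vx] = True
    for ty in range(ny):
        for tx in range(nx):
            if (tx, ty) == (vx, vy):
                continue
            n = max(abs(tx - vx), abs(ty - vy))
            xs = np.linspace(vx, tx, n + 1)[1:]
            ys = np.linspace(vy, ty, n + 1)[1:]
            dist = np.hypot((xs - vx) * cell, (ys - vy) * cell)
            z = dem[ys.round().astype(int), xs.round().astype(int)]
            angles = (z - z0) / dist          # tangent of the elevation angle
            # an extended viewshed would store the margin angles[-1] - max(...)
            visible[ty, tx] = angles[-1] >= np.max(angles[:-1], initial=-np.inf)
    return visible

# usage on a synthetic sloped surface with random roughness
rng = np.random.default_rng(2)
dem = rng.uniform(0.0, 5.0, (60, 60)) + np.linspace(0.0, 10.0, 60)[None, :]
vs = boolean_viewshed(dem, vx=5, vy=30)
print(vs.mean())    # fraction of the terrain visible from the observer cell
```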
Bayesian Estimation of Multi-Unidimensional Graded Response IRT Models
ERIC Educational Resources Information Center
Kuo, Tzu-Chun
2015-01-01
Item response theory (IRT) has gained an increasing popularity in large-scale educational and psychological testing situations because of its theoretical advantages over classical test theory. Unidimensional graded response models (GRMs) are useful when polytomous response items are designed to measure a unified latent trait. They are limited in…
Hypertext: Behind the Hype. ERIC Digest.
ERIC Educational Resources Information Center
Bevilacqua, Ann F.
This digest begins by defining the concept of hypertext and describing the two types of hypertext--static and dynamic. Three prototype applications are then discussed: (1) Intermedia, a large-scale multimedia system at Brown University; (2) the Perseus Project at Harvard University, which is developing interactive courseware on classical Greek…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Múnera, Héctor A. (retired professor, Department of Physics, Universidad Nacional de Colombia, Bogotá, Colombia)
2016-07-07
It is postulated that there exists a fundamental energy-like fluid, which occupies the flat three-dimensional Euclidean space that contains our universe, and obeys the two basic laws of classical physics: conservation of linear momentum, and conservation of total energy; the fluid is described by the classical wave equation (CWE), which was Schrödinger's first candidate to develop his quantum theory. Novel solutions for the CWE discovered twenty years ago are nonharmonic, inherently quantized, and universal in the sense of scale invariance, thus leading to quantization at all scales of the universe, from galactic clusters to the sub-quark world, and yielding a unified Lorentz-invariant quantum theory ab initio. Quingal solutions are isomorphic under both neo-Galilean and Lorentz transformations, and exhibit another remarkable property: intrinsic instability for large values of ℓ (a quantum number), thus limiting the size of each system at a given scale. Instability and scale-invariance together lead to nested structures observed in our solar system; instability may explain the small number of rows in the chemical periodic table, and nuclear instability of nuclides beyond lead and bismuth. Quingal functions lend a mathematical basis for Boscovich's unified force (which is compatible with many pieces of evidence collected over the past century), and also yield a simple geometrical solution for the classical three-body problem, which is a useful model for electronic orbits in simple diatomic molecules. A testable prediction for the helicoidal-type force is suggested.
Quantum implications of a scale invariant regularization
NASA Astrophysics Data System (ADS)
Ghilencea, D. M.
2018-04-01
We study scale invariance at the quantum level in a perturbative approach. For a scale-invariant classical theory, the scalar potential is computed at the three-loop level while keeping this symmetry manifest. Spontaneous scale symmetry breaking is transmitted at the quantum level to the visible sector (of ϕ) by the associated Goldstone mode (dilaton σ), which enables a scale-invariant regularization and whose vacuum expectation value ⟨σ⟩ generates the subtraction scale (μ). While the hidden (σ) and visible (ϕ) sectors are classically decoupled in d = 4 due to an enhanced Poincaré symmetry, they interact through (a series of) evanescent couplings ∝ ε, dictated by the scale invariance of the action in d = 4 − 2ε. At the quantum level, these couplings generate new corrections to the potential, as scale-invariant nonpolynomial effective operators ϕ^(2n+4)/σ^(2n). These are comparable in size to "standard" loop corrections and are important for values of ϕ close to ⟨σ⟩. For n = 1, 2, the beta functions of their coefficients are computed at three loops. In the IR limit, dilaton fluctuations decouple, the effective operators are suppressed by large ⟨σ⟩, and the effective potential becomes that of a renormalizable theory with explicit scale symmetry breaking by the DR scheme (of μ = constant).
NASA Astrophysics Data System (ADS)
Briscese, Fabio
2017-09-01
In this paper it is argued how the dynamics of the classical Newtonian N-body system can be described in terms of the Schrödinger-Poisson equations in the large N limit. This result is based on the stochastic quantization introduced by Nelson, and on the Calogero conjecture. According to the Calogero conjecture, the emerging effective Planck constant is computed in terms of the parameters of the N-body system as ℏ ∼ M^(5/3) G^(1/2) (N/⟨ρ⟩)^(1/6), where G is the gravitational constant, N and M are the number and the mass of the bodies, and ⟨ρ⟩ is their average density. The relevance of this result in the context of large-scale structure formation is discussed. In particular, this finding gives a further argument in support of the validity of the Schrödinger method as a numerical double of the N-body simulations of dark matter dynamics at large cosmological scales.
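The quoted formula can be checked dimensionally and evaluated numerically; the snippet below uses hypothetical placeholder parameters (not values from the paper) and confirms that the combination carries units of J·s.

```python
# Dimensional check and a worked number for hbar_eff ~ M^(5/3) G^(1/2) (N/<rho>)^(1/6).
# The parameter values below are hypothetical placeholders, not taken from the paper.
G = 6.674e-11          # m^3 kg^-1 s^-2
M = 1e-10              # kg: mass of one "body"
N = 1e60               # number of bodies
rho = 1e-21            # kg m^-3: average density
# units: kg^(5/3) * (m^3 kg^-1 s^-2)^(1/2) * (m^3 kg^-1)^(1/6) = kg m^2 s^-1 = J s
hbar_eff = M ** (5 / 3) * G ** 0.5 * (N / rho) ** (1 / 6)
print(f"hbar_eff ~ {hbar_eff:.3e} J s")
```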
Autonomous quantum to classical transitions and the generalized imaging theorem
NASA Astrophysics Data System (ADS)
Briggs, John S.; Feagin, James M.
2016-03-01
The mechanism of the transition of a dynamical system from quantum to classical mechanics is of continuing interest. Practically it is of importance for the interpretation of multi-particle coincidence measurements performed at macroscopic distances from a microscopic reaction zone. Here we prove the generalized imaging theorem which shows that the spatial wave function of any multi-particle quantum system, propagating over distances and times large on an atomic scale but still microscopic, and subject to deterministic external fields and particle interactions, becomes proportional to the initial momentum wave function where the position and momentum coordinates define a classical trajectory. Currently, the quantum to classical transition is considered to occur via decoherence caused by stochastic interaction with an environment. The imaging theorem arises from unitary Schrödinger propagation and so is valid without any environmental interaction. It implies that a simultaneous measurement of both position and momentum will define a unique classical trajectory, whereas a less complete measurement of say position alone can lead to quantum interference effects.
Real-time molecular scale observation of crystal formation.
Schreiber, Roy E; Houben, Lothar; Wolf, Sharon G; Leitus, Gregory; Lang, Zhong-Ling; Carbó, Jorge J; Poblet, Josep M; Neumann, Ronny
2017-04-01
How molecules in solution form crystal nuclei, which then grow into large crystals, is a poorly understood phenomenon. The classical mechanism of homogeneous crystal nucleation proceeds via the spontaneous random aggregation of species from liquid or solution. However, a non-classical mechanism suggests the formation of an amorphous dense phase that reorders to form stable crystal nuclei. So far it has remained an experimental challenge to observe the formation of crystal nuclei from five to thirty molecules. Here, using polyoxometallates, we show that the formation of small crystal nuclei is observable by cryogenic transmission electron microscopy. We observe both classical and non-classical nucleation processes, depending on the identity of the cation present. The experiments verify theoretical studies that suggest non-classical nucleation is the lower of the two energy pathways. The arrangement in just a seven-molecule proto-crystal matches the order found by X-ray diffraction of a single bulk crystal, which demonstrates that the same structure was formed in each case.
NASA Astrophysics Data System (ADS)
O'Malley, D.; Vesselinov, V. V.
2017-12-01
Classical microprocessors have had a dramatic impact on hydrology for decades, due largely to the exponential growth in computing power predicted by Moore's law. However, this growth is not expected to continue indefinitely and has already begun to slow. Quantum computing is an emerging alternative to classical microprocessors. Here, we demonstrate cutting-edge inverse model analyses utilizing some of the best available resources in both worlds: high-performance classical computing and a D-Wave quantum annealer. The classical high-performance computing resources are utilized to build an advanced numerical model that assimilates data from O(10^5) observations, including water levels, drawdowns, and contaminant concentrations. The developed model accurately reproduces the hydrologic conditions at a Los Alamos National Laboratory contamination site, and can be leveraged to inform decision-making about site remediation. We demonstrate the use of a D-Wave 2X quantum annealer to solve hydrologic inverse problems. This work can be seen as an early step in quantum-computational hydrology. We compare and contrast our results with an early inverse approach in classical-computational hydrology that is comparable to the approach we use with quantum annealing. Our results show that quantum annealing can be useful for identifying regions of high and low permeability within an aquifer. While the problems we consider are small-scale compared to the problems that can be solved with modern classical computers, they are large compared to the problems that could be solved with early classical CPUs. Further, the binary nature of the high/low permeability problem makes it well-suited to quantum annealing, but challenging for classical computers.
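The reason the binary high/low permeability structure suits an annealer is that it maps onto a quadratic unconstrained binary optimization (QUBO) problem, the input format of quantum annealing. A toy sketch, with a hypothetical four-cell objective and a brute-force solver standing in for the annealer (none of this reproduces the paper's inverse model):

```python
# q[i] = 1 -> high permeability in cell i, q[i] = 0 -> low permeability.
import itertools

h = {0: 0.4, 1: -0.6, 2: -0.2, 3: 0.5}          # per-cell data-misfit terms (assumed)
J = {(0, 1): -0.3, (1, 2): -0.3, (2, 3): -0.3}  # neighbor couplings, smoothness prior

def qubo_energy(q):
    """QUBO objective: linear misfit terms plus pairwise couplings."""
    e = sum(h[i] * q[i] for i in h)
    e += sum(Jij * q[i] * q[j] for (i, j), Jij in J.items())
    return e

# Exhaustive search over 2^4 binary patterns plays the role of the annealer here.
best = min(itertools.product([0, 1], repeat=len(h)), key=qubo_energy)
print("lowest-energy permeability pattern:", best)
```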
Chemically intuited, large-scale screening of MOFs by machine learning techniques
NASA Astrophysics Data System (ADS)
Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.
2017-10-01
A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising to apply not only to gas storage in MOFs but in many other material science projects.
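The train-then-screen workflow described above can be sketched in a few lines of scikit-learn; the mock descriptors, target, and hypothetical candidate library below are placeholders, not the paper's data:

```python
# Train a regressor on a small labelled set, then screen a large library cheaply.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((500, 4))  # e.g. pore size, surface area, void fraction, density
y = X @ [2.0, 1.0, 0.5, -1.0] + 0.1 * rng.standard_normal(500)  # mock uptake target

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("CV R^2:", cross_val_score(model, X, y, cv=5).mean())  # improves with sample size
model.fit(X, y)

candidates = rng.random((10000, 4))           # hypothetical large MOF library
predicted_uptake = model.predict(candidates)  # fast screening step
```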
Large-scale fluctuations in the diffusive decomposition of solid solutions
NASA Astrophysics Data System (ADS)
Karpov, V. G.; Grimsditch, M.
1995-04-01
We suggest that the classic Ostwald ripening theory is unstable with respect to compositional fluctuations. We show that small statistical fluctuations in the precipitate phase lead to gigantic Coulomb-like fluctuations in the solute concentration, which in turn affect the ripening. As a result, large-scale fluctuations in both the precipitate and solute concentrations appear. These fluctuations are characterized by amplitudes of the order of the average values of the corresponding quantities and by a space scale L ~ (na)^{-1/2}, which is considerably greater than both the average nuclear radius and the internuclear distance. The Lifshitz-Slyozov theory of ripening is shown to remain locally applicable over length scales much less than L. The implications of these findings for elastic light scattering in solid solutions that have undergone Ostwald ripening are considered.
Many-Body Subradiant Excitations in Metamaterial Arrays: Experiment and Theory.
Jenkins, Stewart D; Ruostekoski, Janne; Papasimakis, Nikitas; Savo, Salvatore; Zheludev, Nikolay I
2017-08-04
Subradiant excitations, originally predicted by Dicke, have posed a long-standing challenge in physics owing to their weak radiative coupling to the environment. Here we engineer massive, coherently driven classical subradiance in planar metamaterial arrays as a spatially extended eigenmode comprising over 1000 metamolecules. By comparing the near- and far-field response in large-scale numerical simulations with experimental observations, we identify strong evidence for classically correlated multi-metamolecule subradiant states that dominate the total excitation energy. We show that similar spatially extended many-body subradiance can also exist in plasmonic metamaterial arrays at optical frequencies.
USDA-ARS?s Scientific Manuscript database
The ability to rear a beneficial predatory insect is often required for its use in inoculative releases for classical biological control applications. However, affordable mass production is required before a beneficial predatory insect will be commercialized for large scale repetitive releases. The...
Bosonic seesaw mechanism in a classically conformal extension of the Standard Model
Haba, Naoyuki; Ishida, Hiroyuki; Okada, Nobuchika; ...
2016-01-29
We suggest the so-called bosonic seesaw mechanism in the context of a classically conformal U(1) B-L extension of the Standard Model with two Higgs doublet fields. The U(1) B-L symmetry is radiatively broken via the Coleman–Weinberg mechanism, which also generates the mass terms for the two Higgs doublets through quartic Higgs couplings. Their masses are all positive but, nevertheless, the electroweak symmetry breaking is realized by the bosonic seesaw mechanism. Analyzing the renormalization group evolutions for all model couplings, we find that a large hierarchy among the quartic Higgs couplings, which is crucial for the bosonic seesaw mechanism to work, is dramatically reduced toward high energies. Therefore, the bosonic seesaw is naturally realized with only a mild hierarchy, if some fundamental theory, which provides the origin of the classically conformal invariance, completes our model at some high energy, for example, the Planck scale. In conclusion, we identify the regions of model parameters which satisfy the perturbativity of the running couplings and the electroweak vacuum stability as well as the naturalness of the electroweak scale.
NASA Astrophysics Data System (ADS)
Piñeiro Orioli, Asier; Boguslavski, Kirill; Berges, Jürgen
2015-07-01
We investigate universal behavior of isolated many-body systems far from equilibrium, which is relevant for a wide range of applications from ultracold quantum gases to high-energy particle physics. The universality is based on the existence of nonthermal fixed points, which represent nonequilibrium attractor solutions with self-similar scaling behavior. The corresponding dynamic universality classes turn out to be remarkably large, encompassing both relativistic as well as nonrelativistic quantum and classical systems. For the examples of nonrelativistic (Gross-Pitaevskii) and relativistic scalar field theory with quartic self-interactions, we demonstrate that infrared scaling exponents as well as scaling functions agree. We perform two independent nonperturbative calculations, first by using classical-statistical lattice simulation techniques and second by applying a vertex-resummed kinetic theory. The latter extends kinetic descriptions to the nonperturbative regime of overoccupied modes. Our results open new perspectives to learn from experiments with cold atoms aspects about the dynamics during the early stages of our universe.
Scale invariance in chaotic time series: Classical and quantum examples
NASA Astrophysics Data System (ADS)
Landa, Emmanuel; Morales, Irving O.; Stránský, Pavel; Fossion, Rubén; Velázquez, Victor; López Vieyra, J. C.; Frank, Alejandro
Important aspects of chaotic behavior appear in systems of low dimension, as illustrated by the map module 1. It is indeed a remarkable fact that all systems that make a transition from order to disorder display common properties, irrespective of their exact functional form. We discuss evidence for 1/f power spectra in the chaotic time series associated with classical and quantum examples: the one-dimensional map module 1 and the spectrum of 48Ca. A detrended fluctuation analysis (DFA) method is applied to investigate the scaling properties of the energy fluctuations in the spectrum of 48Ca obtained with a large realistic shell model calculation (ANTOINE code) and with a random shell model (TBRE) calculation, as well as in the time series obtained with the map module 1. We compare the scale invariant properties of the 48Ca nuclear spectrum with similar analyses applied to the RMT ensembles GOE and GDE. A comparison with the corresponding power spectra is made in both cases. The possible consequences of the results are discussed.
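For reference, detrended fluctuation analysis itself is compact. The generic DFA-1 sketch below (not the authors' code) returns the fluctuation function whose log-log slope gives the scaling exponent: roughly 0.5 for white noise, roughly 1 for 1/f noise:

```python
import numpy as np

def dfa(x, scales):
    """Return the DFA-1 fluctuation F(n) for each window size n in `scales`."""
    y = np.cumsum(x - np.mean(x))  # integrated profile of the series
    F = []
    for n in scales:
        n_win = len(y) // n
        ms = []
        for k in range(n_win):
            seg = y[k * n:(k + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrending
            ms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.array(F)

scales = np.array([16, 32, 64, 128, 256])
x = np.random.default_rng(1).standard_normal(4096)  # white-noise test series
alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
print("DFA exponent (expect ~0.5 for white noise):", alpha)
```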
Concurrent Spectral and Separation-space Views of Small-scale Anisotropy in Rotating Turbulence
NASA Astrophysics Data System (ADS)
Vallefuoco, D.; Godeferd, F. S.; Naso, A.
2017-12-01
Rotating turbulence is central in astrophysical, geophysical and industrial flows. A background rotation about a fixed axis introduces significant anisotropy in the turbulent dynamics through both linear and nonlinear mechanisms. The flow regime can be characterized by two independent non-dimensional parameters, e.g. the Reynolds and Rossby numbers or, equivalently, the ratio of the integral scale to the Kolmogorov scale L/η, and the ratio rZ/L, where rZ = √(ε/Ω^3) is the Zeman scale, ε is the mean dissipation and Ω is the rotation rate. rZ is the scale at which the inertial timescale equals the rotation timescale. According to classical dimensional arguments (Zeman 1994), if the Reynolds number is large, scales much larger than rZ are mainly affected by rotation while scales much smaller than rZ are dominated by the nonlinear dynamics and are expected to recover isotropy. In this work, we characterize the scale- and direction-dependent anisotropy of incompressible rotating turbulence through high Reynolds number pseudo-spectral forced DNS. We first focus on direction-dependent energy spectra in Fourier space: we show that a high-anisotropy small-wavenumber range and a low-anisotropy large-wavenumber range arise. Importantly, anisotropy arises even at scales much smaller than rZ, and no small-scale isotropy is observed in our DNS, in contrast with previous numerical results (Delache et al. 2014, Mininni et al. 2012) but in agreement with experiments (Lamriben et al. 2011). Then, we estimate the value of the threshold wavenumber kT between these two anisotropic ranges for a large number of runs, and show that it corresponds to the scale at which dissipative effects are of the same order as those of rotation. Therefore, in the asymptotic inviscid limit, kT tends to infinity and only the low-wavenumber anisotropic range should persist. In this range anisotropy decreases with wavenumber, which is consistent with the classical Zeman argument. In addition, anisotropy at scales much smaller than rZ can be detected in physical space too, in particular for the third-order two-point vector moment F = ⟨δu^2 δu⟩, where δu is the velocity increment. We find the expected inertial trends for F (Galtier 2009) at scales sufficiently larger than the dissipative scale, while smaller scales exhibit qualitatively opposite anisotropic features.
Approximation of the ruin probability using the scaled Laplace transform inversion
Mnatsakanov, Robert M.; Sarkisian, Khachatur; Hakobyan, Artak
2015-01-01
The problem of recovering the ruin probability in the classical risk model based on the scaled Laplace transform inversion is studied. It is shown how to overcome the problem of evaluating the ruin probability at large values of an initial surplus process. Comparisons of proposed approximations with the ones based on the Laplace transform inversions using a fixed Talbot algorithm as well as on the ones using the Trefethen–Weideman–Schmelzer and maximum entropy methods are presented via a simulation study. PMID:26752796
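As a baseline against which such approximations can be checked, the ruin probability in the classical (Cramér-Lundberg) model admits a direct Monte Carlo estimate. The parameters below are illustrative assumptions, chosen so that the known exponential-claims closed form, ψ(u) = (1/1.2) e^{-u/6} in the infinite-horizon limit, is available for comparison:

```python
# Monte Carlo ruin probability: Poisson claim arrivals, exponential claim
# sizes, premium income at rate c.  Ruin can only occur at claim epochs.
import numpy as np

def ruin_prob(u, lam=1.0, mean_claim=1.0, c=1.2, T=200.0, n_paths=10000, seed=0):
    """Estimate P(ruin before time T) starting from initial surplus u."""
    rng = np.random.default_rng(seed)
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / lam)      # next claim arrival
            if t >= T:
                break
            claims += rng.exponential(mean_claim)
            if u + c * t - claims < 0:           # surplus goes negative: ruin
                ruined += 1
                break
    return ruined / n_paths

print(ruin_prob(u=6.0))  # compare with (1/1.2)*exp(-1) ~ 0.31
```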
On the Monte Carlo simulation of electron transport in the sub-1 keV energy range.
Thomson, Rowan M; Kawrakow, Iwan
2011-08-01
The validity of "classic" Monte Carlo (MC) simulations of electron and positron transport at sub-1 keV energies is investigated in the context of quantum theory. Quantum theory dictates that uncertainties on the position and energy-momentum four-vectors of radiation quanta obey Heisenberg's uncertainty relation; however, these uncertainties are neglected in "classical" MC simulations of radiation transport in which position and momentum are known precisely. Using the quantum uncertainty relation and electron mean free path, the magnitudes of uncertainties on electron position and momentum are calculated for different kinetic energies; a validity bound on the classical simulation of electron transport is derived. In order to satisfy the Heisenberg uncertainty principle, uncertainties of 5% must be assigned to position and momentum for 1 keV electrons in water; at 100 eV, these uncertainties are 17 to 20% and are even larger at lower energies. In gaseous media such as air, these uncertainties are much smaller (less than 1% for electrons with energy 20 eV or greater). The classical Monte Carlo transport treatment is questionable for sub-1 keV electrons in condensed water as uncertainties on position and momentum must be large (relative to electron momentum and mean free path) to satisfy the quantum uncertainty principle. Simulations which do not account for these uncertainties are not faithful representations of the physical processes, calling into question the results of MC track structure codes simulating sub-1 keV electron transport. Further, the large difference in the scale at which quantum effects are important in gaseous and condensed media suggests that track structure measurements in gases are not necessarily representative of track structure in condensed materials on a micrometer or a nanometer scale.
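The stated bounds follow from a simple estimate: assigning equal relative uncertainties f to position (against the mean free path L) and momentum (against p), Heisenberg's relation Δx Δp ≥ ℏ/2 gives f = √(ℏ/(2Lp)). A sketch in which the mean free path is an assumed representative value, not a number taken from the paper:

```python
import math

hbar = 1.0546e-34  # J*s
m_e  = 9.109e-31   # electron mass, kg
eV   = 1.602e-19   # J

def rel_uncertainty(E_eV, mfp_m):
    """Equal relative uncertainty on position and momentum from dx*dp = hbar/2."""
    p = math.sqrt(2.0 * m_e * E_eV * eV)  # non-relativistic momentum
    return math.sqrt(hbar / (2.0 * mfp_m * p))

# With an assumed ~1 nm mean free path for 1 keV electrons in water,
# this reproduces the ~5% figure quoted above.
print(rel_uncertainty(1000.0, 1e-9))
```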
Numerical simulation of the geodynamo reaches Earth's core dynamical regime
NASA Astrophysics Data System (ADS)
Aubert, J.; Gastine, T.; Fournier, A.
2016-12-01
Numerical simulations of the geodynamo have been successful at reproducing a number of static (field morphology) and kinematic (secular variation patterns, core surface flows and westward drift) features of Earth's magnetic field, making them a tool of choice for the analysis and retrieval of geophysical information on Earth's core. However, classical numerical models have been run in a parameter regime far from that of the real system, prompting the question of whether we do get "the right answers for the wrong reasons", i.e. whether the agreement between models and nature simply occurs by chance and without physical relevance in the dynamics. In this presentation, we show that classical models succeed in describing the geodynamo because their large-scale spatial structure is essentially invariant as one progresses along a well-chosen path in parameter space to Earth's core conditions. This path is constrained by the need to enforce the relevant force balance (MAC or Magneto-Archimedes-Coriolis) and preserve the ratio of the convective overturn and magnetic diffusion times. Numerical simulations performed along this path are shown to be spatially invariant at scales larger than that where the magnetic energy is ohmically dissipated. This property enables the definition of large-eddy simulations that show good agreement with direct numerical simulations in the range where both are feasible, and that can be computed at unprecedented values of the control parameters, such as an Ekman number E=10-8. Combining direct and large-eddy simulations, large-scale invariance is observed over half the logarithmic distance in parameter space between classical models and Earth. The conditions reached at this mid-point of the path are furthermore shown to be representative of the rapidly-rotating, asymptotic dynamical regime in which Earth's core resides, with a MAC force balance undisturbed by viscosity or inertia, the enforcement of a Taylor state and strong-field dynamo action. We conclude that numerical modelling has advanced to a stage where it is possible to use models correctly representing the statics, kinematics and now the dynamics of the geodynamo. This opens the way to a better analysis of the geomagnetic field in the time and space domains.
A Classic Test of the Hubbert-Rubey Weakening Mechanism: M7.6 Thrust-Belt Earthquake Taiwan
NASA Astrophysics Data System (ADS)
Yue, L.; Suppe, J.
2005-12-01
The Hubbert-Rubey (1959) fluid-pressure hypothesis has long been accepted as a classic solution to the problem of the apparent weakness of long thin thrust sheets. This hypothesis, in its classic form, argues that ambient high pore-fluid pressures, which are common in sedimentary basins, reduce the normalized shear traction on the fault: τb/ρgH = μb(1-λb), where λb = Pf/ρgH is the normalized pore-fluid pressure and μb is the coefficient of friction. Remarkably, there have been few large-scale tests of this classic hypothesis. Here we document ambient pore-fluid pressures surrounding the active frontal thrusts of western Taiwan, including the Chulungpu thrust that slipped in the 1999 Mw7.6 Chi-Chi earthquake. We show from 3-D mapping of these thrusts that they flatten to a shallow detachment at about 5 km depth in the Pliocene Chinshui Shale. Using critical-taper wedge theory and the dip of the detachment and surface slope, we constrain the basal shear traction τb/ρgH ≍ 0.1, which is substantially weaker than common lab friction values of Byerlee's law (μb = 0.85-0.6). We have determined the pore-fluid pressures as a function of depth in 76 wells, based on in-situ formation tests, sonic logs and mud densities. Fluid pressures are regionally controlled stratigraphically by sedimentary facies. The top of overpressures is everywhere below the base of the Chinshui Shale; therefore the entire Chinshui thrust system is at ambient hydrostatic pore-fluid pressures (λb ≍ 0.4). According to the classic Hubbert-Rubey hypothesis, the required basal coefficient of friction is therefore μb ≍ 0.1-0.2. Therefore the classic Hubbert & Rubey mechanism involving static ambient excess fluid pressures is not the cause of extreme fault weakening in this western Taiwan example. We must look to other mechanisms of large-scale fault weakening, many of which are difficult to test.
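Spelling out the arithmetic behind the quoted friction value, the relation above gives, with the measured wedge strength and hydrostatic fluid pressure,

$$\mu_b = \frac{\tau_b/\rho g H}{1-\lambda_b} \approx \frac{0.1}{1-0.4} \approx 0.17,$$

which lands in the quoted range μb ≍ 0.1-0.2 and far below the Byerlee values, so hydrostatic ambient fluid pressure alone cannot supply the observed weakening.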
ERIC Educational Resources Information Center
Mobley, Catherine; Vagias, Wade M.; DeWard, Sarah L.
2010-01-01
It is often assumed that individuals who are knowledgeable and concerned about the environment will engage in environmentally responsible behavior (ERB). We use data from a large scale Web survey hosted on National Geographic's Web site in 2001-2002 to investigate this premise. We examine whether reading three classic environmental books…
Extreme Quantum Memory Advantage for Rare-Event Sampling
NASA Astrophysics Data System (ADS)
Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.
2018-02-01
We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.
Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A; Frankel, Steven H
2014-07-01
In the present study, we performed large eddy simulation (LES) of axisymmetric, and 75% stenosed, eccentric arterial models with steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, "Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow," J. Fluid Mech., 582, pp. 253-280). An in-house code (WenoHemo), employing high-order numerical methods for spatial and temporal terms, along with a second-order accurate ghost point immersed boundary method (IBM) (Mark and Vanwachem, 2008, "Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method," J. Comput. Phys., 227(13), pp. 6660-6680) for enforcing boundary conditions on curved geometries, is used for simulations. Three subgrid scale (SGS) models, namely, the classical Smagorinsky model (Smagorinsky, 1963, "General Circulation Experiments With the Primitive Equations," Mon. Weather Rev., 91(10), pp. 99-164), the recently developed Vreman model (Vreman, 2004, "An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications," Phys. Fluids, 16(10), pp. 3670-3681), and the Sigma model (Nicoud et al., 2011, "Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations," Phys. Fluids, 23(8), 085106) are evaluated in the present study. Evaluation of the SGS models suggests that the classical constant-coefficient Smagorinsky model gives the best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations are performed using the Open source field operation and manipulation (OpenFOAM) solver ("OpenFOAM," http://www.openfoam.org/) and the results are in line with those obtained with WenoHemo.
ERIC Educational Resources Information Center
Yelboga, Atilla; Tavsancil, Ezel
2010-01-01
In this research, the classical test theory and generalizability theory analyses were carried out with the data obtained by a job performance scale for the years 2005 and 2006. The reliability coefficients obtained (estimated) from the classical test theory and generalizability theory analyses were compared. In classical test theory, test retest…
Universal scaling for the quantum Ising chain with a classical impurity
NASA Astrophysics Data System (ADS)
Apollaro, Tony J. G.; Francica, Gianluca; Giuliano, Domenico; Falcone, Giovanni; Palma, G. Massimo; Plastina, Francesco
2017-10-01
We study finite-size scaling for the magnetic observables of an impurity residing at the end point of an open quantum Ising chain with transverse magnetic field, realized by locally rescaling the field by a factor μ ≠ 1. In the homogeneous chain limit at μ = 1, we find the expected finite-size scaling for the longitudinal impurity magnetization, with no specific scaling for the transverse magnetization. At variance, in the classical impurity limit μ = 0, we recover finite-size scaling for the longitudinal magnetization, while the transverse one basically does not scale. We provide both analytic approximate expressions for the magnetization and the susceptibility, as well as numerical evidence for the scaling behavior. At intermediate values of μ, finite-size scaling is violated, and we provide a possible explanation of this result in terms of the appearance of a second, impurity-related length scale. Finally, by going along the standard quantum-to-classical mapping between statistical models, we derive the classical counterpart of the quantum Ising chain with an end-point impurity as a classical Ising model on a square lattice wrapped on a half-infinite cylinder, with the links along the first circle modified as a function of μ.
Radar mapping, archaeology, and ancient land use in the Maya lowlands
NASA Technical Reports Server (NTRS)
Adams, R. E. W.; Brown, W. E., Jr.; Culbert, T. P.
1981-01-01
Data from the use of synthetic aperture radar in aerial survey of the southern Maya lowlands suggest the presence of very large areas drained by ancient canals for the purpose of intensive cultivation. Preliminary ground checks in several very limited areas confirm the existence of canals and raised fields. Excavations and ground surveys by several scholars provide valuable comparative information. Taken together, the new data suggest that Late Classic period Maya civilization was firmly grounded in large-scale and intensive cultivation of swampy zones.
Cosmos in Concert: Combining astronomy and classical music
NASA Astrophysics Data System (ADS)
Kremer, Kyle
2018-01-01
Cosmos in Concert is an outreach initiative designed to combine astronomy education with classical music. Over the past several years, this program has presented large-scale multimedia shows for symphony orchestras, educational programs at K-12 schools, and research-oriented university collaborations designed to develop techniques for the sonification of data. Cosmos in Concert has collaborated with institutions including Fermi National Lab, the Adler Planetarium, the Bienen School of Music, and the Colburn School of Music. In this talk, I will give a brief overview of some of the main Cosmos in Concert initiatives and discuss ways these initiatives may be implemented at other institutions.
Jean, Stéphane; Richer, Louis; Laberge, Luc; Mathieu, Jean
2014-11-26
Myotonic dystrophy type 1 (DM1) is an autosomal dominant genetic multisystem disorder and the commonest adult-onset form of muscular dystrophy. DM1 results from the expansion of an unstable trinucleotide cytosine-thymine-guanine (CTG) repeat mutation. CTG repeats in DM1 patients can range from 50 to several thousand, with a tendency toward increased repeats with successive generations (anticipation). Associated findings can include involvement of almost every system, including the brain, and cognitive abnormalities occur in the large majority of patients. The objectives are to describe and compare the intellectual abilities of a large sample of DM1 patients with mild and classic adult-onset phenotypes, to estimate the validity of the Wechsler Adult Intelligence Scale-Revised (WAIS-R) in DM1 patients with muscular weakness, and to appraise the relationship of intelligence quotient (IQ) to CTG repeat length, age at onset of symptoms, and disease duration. A seven-subtest WAIS-R was administered to 37 mild and 151 classic adult-onset DM1 patients to measure their Full-Scale (FSIQ), Verbal (VIQ) and Performance IQ (PIQ). To control for potential bias due to muscular weakness, Standard Progressive Matrices (SPM), a motor-independent test of intelligence, were also completed. Total mean FSIQ was 82.6, corresponding to low average IQ, and 82% were below average intelligence. Mild DM1 patients had a higher mean FSIQ (88.7 vs 81.1, p<0.001), VIQ (87.8 vs 82.3, p=0.001), and PIQ (94.8 vs 83.6, p<0.001) than classic adult-onset DM1 patients. In both mild and classic adult-onset patients, all subtest mean scaled scores were below the normative sample mean. FSIQ also strongly correlates with SPM (rs = 0.67, p<0.001), indicating that low intelligence scores are not a consequence of motor impairment. FSIQ scores decreased with both increasing (CTG)n (rs = -0.41, p<0.001) and disease duration (rs = -0.26, p=0.003). Results show that intellectual impairment is an extremely common and important feature in DM1, not only among the classic adult-onset patients but also among the least severe forms of DM1, with low IQ scores compared to the general reference population. Health care providers involved in the follow-up of these patients should be aware of their intellectual capacities and should adapt their interventions accordingly.
Parallel Clustering Algorithm for Large-Scale Biological Data Sets
Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang
2014-01-01
Background: The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtime are required for cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Methods: Two types of parallel architectures are proposed in this paper to accelerate the similarity-matrix construction and the affinity propagation algorithm. The shared-memory architecture is used to construct the similarity matrix, and the distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate way of data partition and reduction is designed in our method, in order to minimize the global communication cost among processes. Results: A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies. PMID:24705246
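The serial algorithm being parallelized, and its O(n^2) similarity-matrix bottleneck, can be seen in a few lines of scikit-learn. This sketch with mock profiles is a baseline illustration, not the authors' parallel implementation:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import pairwise_distances

X = np.random.default_rng(0).random((300, 20))       # mock expression profiles
S = -pairwise_distances(X, metric="euclidean") ** 2  # O(n^2) similarity matrix
ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
print("clusters found:", len(ap.cluster_centers_indices_))
```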
Klein, Brennan J; Li, Zhi; Durgin, Frank H
2016-04-01
What is the natural reference frame for seeing large-scale spatial scenes in locomotor action space? Prior studies indicate an asymmetric angular expansion in perceived direction in large-scale environments: Angular elevation relative to the horizon is perceptually exaggerated by a factor of 1.5, whereas azimuthal direction is exaggerated by a factor of about 1.25. Here participants made angular and spatial judgments when upright or on their sides to dissociate egocentric from allocentric reference frames. In Experiment 1, it was found that body orientation did not affect the magnitude of the up-down exaggeration of direction, suggesting that the relevant orientation reference frame for this directional bias is allocentric rather than egocentric. In Experiment 2, the comparison of large-scale horizontal and vertical extents was somewhat affected by viewer orientation, but only to the extent necessitated by the classic (5%) horizontal-vertical illusion (HVI) that is known to be retinotopic. Large-scale vertical extents continued to appear much larger than horizontal ground extents when observers lay sideways. When the visual world was reoriented in Experiment 3, the bias remained tied to the ground-based allocentric reference frame. The allocentric HVI is quantitatively consistent with differential angular exaggerations previously measured for elevation and azimuth in locomotor space. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Klein, Brennan J.; Li, Zhi; Durgin, Frank H.
2015-01-01
What is the natural reference frame for seeing large-scale spatial scenes in locomotor action space? Prior studies indicate an asymmetric angular expansion in perceived direction in large-scale environments: Angular elevation relative to the horizon is perceptually exaggerated by a factor of 1.5, whereas azimuthal direction is exaggerated by a factor of about 1.25. Here participants made angular and spatial judgments when upright or on their sides in order to dissociate egocentric from allocentric reference frames. In Experiment 1 it was found that body orientation did not affect the magnitude of the up-down exaggeration of direction, suggesting that the relevant orientation reference frame for this directional bias is allocentric rather than egocentric. In Experiment 2, the comparison of large-scale horizontal and vertical extents was somewhat affected by viewer orientation, but only to the extent necessitated by the classic (5%) horizontal-vertical illusion (HVI) that is known to be retinotopic. Large-scale vertical extents continued to appear much larger than horizontal ground extents when observers lay sideways. When the visual world was reoriented in Experiment 3, the bias remained tied to the ground-based allocentric reference frame. The allocentric HVI is quantitatively consistent with differential angular exaggerations previously measured for elevation and azimuth in locomotor space. PMID:26594884
DOE R&D Accomplishments Database
Weinberg, Alvin M.; Noderer, L. C.
1951-05-15
The large scale release of nuclear energy in a uranium fission chain reaction involves two essentially distinct physical phenomena. On the one hand there are the individual nuclear processes such as fission, neutron capture, and neutron scattering. These are essentially quantum mechanical in character, and their theory is non-classical. On the other hand, there is the process of diffusion -- in particular, diffusion of neutrons, which is of fundamental importance in a nuclear chain reaction. This process is classical; insofar as the theory of the nuclear chain reaction depends on the theory of neutron diffusion, the mathematical study of chain reactions is an application of classical, not quantum mechanical, techniques.
Fox, Stephen J; Pittock, Chris; Tautermann, Christofer S; Fox, Thomas; Christ, Clara; Malcolm, N O J; Essex, Jonathan W; Skylaris, Chris-Kriton
2013-08-15
Schemes of increasing sophistication for obtaining free energies of binding have been developed over the years, where configurational sampling is used to include the all-important entropic contributions to the free energies. However, the quality of the results will also depend on the accuracy with which the intermolecular interactions are computed at each molecular configuration. In this context, the energy change associated with the rearrangement of electrons (electronic polarization and charge transfer) upon binding is a very important effect. Classical molecular mechanics force fields do not take this effect into account explicitly, and polarizable force fields and semiempirical quantum or hybrid quantum-classical (QM/MM) calculations are increasingly employed (at higher computational cost) to compute intermolecular interactions in free-energy schemes. In this work, we investigate the use of large-scale quantum mechanical calculations from first-principles as a way of fully taking into account electronic effects in free-energy calculations. We employ a one-step free-energy perturbation (FEP) scheme from a molecular mechanical (MM) potential to a quantum mechanical (QM) potential as a correction to thermodynamic integration calculations within the MM potential. We use this approach to calculate relative free energies of hydration of small aromatic molecules. Our quantum calculations are performed on multiple configurations from classical molecular dynamics simulations. The quantum energy of each configuration is obtained from density functional theory calculations with a near-complete psinc basis set on over 600 atoms using the ONETEP program.
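The one-step MM-to-QM correction described above generally takes the Zwanzig (free-energy perturbation) exponential-average form, stated here as the standard textbook expression rather than as a quote from the paper:

$$\Delta A_{\mathrm{MM}\rightarrow\mathrm{QM}} = -k_B T \,\ln\!\left\langle \exp\!\left[-\frac{E_{\mathrm{QM}}(\mathbf{r}) - E_{\mathrm{MM}}(\mathbf{r})}{k_B T}\right] \right\rangle_{\mathrm{MM}},$$

where the average runs over configurations r sampled with the MM potential; this is why QM (here, ONETEP) energies are needed only for snapshots drawn from the classical trajectory.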
Quantum localization for a kicked rotor with accelerator mode islands.
Iomin, A; Fishman, S; Zaslavsky, G M
2002-03-01
Dynamical localization of classical superdiffusion for the quantum kicked rotor is studied in the semiclassical limit. Both classical and quantum dynamics of the system become more complicated under the conditions of mixed phase space with accelerator mode islands. Recently, long-time quantum flights due to the accelerator mode islands have been found. By exploring their dynamics, it is shown here that the classical-quantum duality of the flights leads to their localization. The classical mechanism of superdiffusion is due to accelerator mode dynamics, while quantum tunneling suppresses the superdiffusion and leads to localization of the wave function. Coupling of the regular-type dynamics inside the accelerator mode island structures to dynamics in the chaotic sea is shown to increase the localization length. A numerical procedure and an analytical method are developed to obtain an estimate of the localization length which, as is shown, has exponentially large scaling with the dimensionless Planck's constant h̃ < 1 in the semiclassical limit. Conditions for the validity of the developed method are specified.
Large-scale quantum photonic circuits in silicon
NASA Astrophysics Data System (ADS)
Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk
2016-08-01
Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ^(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from ≳30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards large-scale source integration. Finally, we review monolithic integration strategies for single-photon detectors and their essential role in on-chip feed-forward operations.
The massive fermion phase for the U(N) Chern-Simons gauge theory in D=3 at large N
Bardeen, William A.
2014-10-07
We explore the phase structure of fermions in the U(N) Chern-Simons gauge theory in three dimensions using the large N limit, where N is the number of colors and the fermions are taken to be in the fundamental representation of the U(N) gauge group. In the large N limit, the theory retains its classical conformal behavior, and considerable attention has been paid to possible AdS/CFT dualities of the theory in the conformal phase. In this paper we present a solution for the massive phase of the fermion theory that is exact to the leading order of 't Hooft's large N expansion. We present evidence for the spontaneous breaking of the exact scale symmetry and analyze the properties of the dilaton that appears as the Goldstone boson of scale symmetry breaking.
Spherical cows in the sky with fab four
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaloper, Nemanja; Sandora, McCullen, E-mail: kaloper@physics.ucdavis.edu, E-mail: mesandora@ucdavis.edu
2014-05-01
We explore spherically symmetric static solutions in a subclass of unitary scalar-tensor theories of gravity, called the 'Fab Four' models. The weak field large distance solutions may be phenomenologically viable, but only if the Gauss-Bonnet term is negligible. Only in this limit will the Vainshtein mechanism work consistently. Further, classical constraints and unitarity bounds constrain the models quite tightly. Nevertheless, in the limits where the range of individual terms at large scales is respectively Kinetic Braiding, Horndeski, and Gauss-Bonnet, the horizon scale effects may occur while the theory satisfies Solar system constraints and, marginally, unitarity bounds. On the other hand, to bring the cutoff down to below a millimeter constrains all the coupling scales such that 'Fab Fours' can't be heard outside of the Solar system.
NASA Technical Reports Server (NTRS)
Nash, Stephen G.; Polyak, R.; Sofer, Ariela
1994-01-01
When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
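For orientation, the two constructions being compared have the standard forms for a problem min f(x) subject to c_i(x) ≥ 0 (stated here as textbook definitions, not copied from the paper):

$$B(x;\mu) = f(x) - \mu \sum_i \ln c_i(x), \qquad M(x;\mu,\lambda) = f(x) - \mu \sum_i \lambda_i \ln\!\left(1 + \frac{c_i(x)}{\mu}\right).$$

In the classical barrier B the Hessian becomes ill-conditioned as μ → 0, whereas in the modified barrier M the shift by μ and the multiplier estimates λ_i keep the Hessian condition number bounded as the iterates approach the solution.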
Quantum spin chains with multiple dynamics
NASA Astrophysics Data System (ADS)
Chen, Xiao; Fradkin, Eduardo; Witczak-Krempa, William
2017-11-01
Many-body systems with multiple emergent time scales arise in various contexts, including classical critical systems, correlated quantum materials, and ultracold atoms. We investigate such nontrivial quantum dynamics in a different setting: a spin-1 bilinear-biquadratic chain. It has a solvable entangled ground state, but a gapless excitation spectrum that is poorly understood. By using large-scale density matrix renormalization group simulations, we find that the lowest excitations have a dynamical exponent z that varies from 2 to 3.2 as we vary a coupling in the Hamiltonian. We find an additional gapless mode with a continuously varying exponent 2 ≤ z < 2.7, which establishes the presence of multiple dynamics. In order to explain these striking properties, we construct a continuum wave function for the ground state, which correctly describes the correlations and entanglement properties. We also give a continuum parent Hamiltonian, but show that additional ingredients are needed to capture the excitations of the chain. By using an exact mapping to the nonequilibrium dynamics of a classical spin chain, we find that the large dynamical exponent is due to subdiffusive spin motion. Finally, we discuss the connections to other spin chains and to a family of quantum critical models in two dimensions.
NASA Technical Reports Server (NTRS)
Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.
2012-01-01
This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three dimensional computer aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle, in the form of a massive, fully detailed, CAD assembly; therefore, adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method results in a reduction in both the amount and complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes, benefiting from this approach by reducing development and design cycle time, include: Creation of analysis models for the Aerodynamic discipline; Vehicle to ground interface development; Documentation development for the vehicle assembly.
Gu, Xun; Wang, Yufeng; Gu, Jianying
2002-06-01
The classical (two-round) hypothesis of vertebrate genome duplication proposes two successive whole-genome duplication(s) (polyploidizations) predating the origin of fishes, a view now being seriously challenged. As the debate largely concerns the relative merits of the 'big-bang mode' theory (large-scale duplication) and the 'continuous mode' theory (constant creation by small-scale duplications), we tested whether a significant proportion of paralogous genes in the contemporary human genome was indeed generated in the early stage of vertebrate evolution. After an extensive search of major databases, we dated 1,739 gene duplication events from the phylogenetic analysis of 749 vertebrate gene families. We found a pattern characterized by two waves (I, II) and an ancient component. Wave I represents a recent gene family expansion by tandem or segmental duplications, whereas wave II, a rapid paralogous gene increase in the early stage of vertebrate evolution, supports the idea of genome duplication(s) (the big-bang mode). Further analysis indicated that large- and small-scale gene duplications both make a significant contribution during the early stage of vertebrate evolution to build the current hierarchy of the human proteome.
NASA Astrophysics Data System (ADS)
Larsen, G. C.; Larsen, T. J.; Chougule, A.
2017-05-01
The aim of the present paper is to demonstrate the capability of medium-fidelity modelling of wind turbine component fatigue loading when the wind turbines are subjected to wake-affected non-stationary flow fields under non-neutral atmospheric stability conditions. To accomplish this, we combine the classical Dynamic Wake Meandering model with a fundamental conjecture stating: atmospheric boundary layer stability affects primary wake meandering dynamics driven by large turbulent scales, whereas wake expansion in the meandering frame of reference is hardly affected. Inclusion of stability (i.e. buoyancy) in the description of both large- and small-scale atmospheric boundary layer turbulence is facilitated by a generalization of the classical Mann spectral tensor, which consistently includes buoyancy effects. With non-stationary wind turbine inflow fields modelled as described above, fatigue loads are obtained using the state-of-the-art aeroelastic model HAWC2. The Lillgrund offshore wind farm (WF) constitutes an interesting case study for wind farm model validation, because the wind turbine interspacing is small, which in turn means that wake effects are significant. A huge data set, comprising 5 years of blade and tower load recordings, is available for model validation. For a multitude of wake situations this data set displays considerable scatter, which to a large degree seems to be caused by atmospheric boundary layer stability effects. It is also notable that rotating wind turbine components predominantly experience high fatigue loading under stable stratification with significant shear, whereas high fatigue loading of non-rotating wind turbine components is associated with unstable atmospheric boundary layer stratification.
A DWARF TRANSITIONAL PROTOPLANETARY DISK AROUND XZ TAU B
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osorio, Mayra; Macías, Enrique; Anglada, Guillem
We report the discovery of a dwarf protoplanetary disk around the star XZ Tau B that shows all the features of a classical transitional disk but on a much smaller scale. The disk has been imaged with the Atacama Large Millimeter/submillimeter Array (ALMA), revealing that its dust emission has a quite small radius of ∼3.4 au and presents a central cavity of ∼1.3 au in radius that we attribute to clearing by a compact system of orbiting (proto)planets. Given the very small radii involved, evolution is expected to be much faster in this disk (observable changes in a few months) than in classical disks (observable changes requiring decades) and easy to monitor with observations in the near future. From our modeling we estimate that the mass of the disk is large enough to form a compact planetary system.
Low-buoyancy thermochemical plumes resolve controversy of classical mantle plume concept
NASA Astrophysics Data System (ADS)
Dannberg, Juliane; Sobolev, Stephan V.
2015-04-01
The Earth's biggest magmatic events are believed to originate from massive melting when hot mantle plumes rising from the lowermost mantle reach the base of the lithosphere. Classical models predict large plume heads that cause kilometre-scale surface uplift, and narrow (100 km radius) plume tails that remain in the mantle after the plume head spreads below the lithosphere. However, in many cases, such uplifts and narrow plume tails are not observed. Here, using numerical models, we show that the issue can be resolved if major mantle plumes contain up to 15-20% of recycled oceanic crust in the form of dense eclogite, which drastically decreases their buoyancy and makes it depth dependent. We demonstrate that, despite their low buoyancy, sufficiently large thermochemical plumes can rise through the whole mantle causing only negligible surface uplift. Their tails are bulky (>200 km radius) and remain in the upper mantle for 100 million years.
Causality as an emergent macroscopic phenomenon: The Lee-Wick O(N) model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grinstein, Benjamin; O'Connell, Donal; Wise, Mark B.
2009-05-15
In quantum mechanics the deterministic property of classical physics is an emergent phenomenon appropriate only on macroscopic scales. Lee and Wick introduced Lorentz invariant quantum theories where causality is an emergent phenomenon appropriate for macroscopic time scales. In this paper we analyze a Lee-Wick version of the O(N) model. We argue that in the large-N limit this theory has a unitary and Lorentz invariant S matrix and is therefore free of paradoxes in scattering experiments. We discuss some of its acausal properties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, William Michael; Plimpton, Steven James; Wang, Peng
2010-03-01
LAMMPS is a classical molecular dynamics code, and an acronym for Large-scale Atomic/Molecular Massively Parallel Simulator. LAMMPS has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial-decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality.
A semiparametric graphical modelling approach for large-scale equity selection.
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
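A sketch of the rank-based pipeline such a strategy suggests: estimate the latent Gaussian correlation from Kendall's tau (the elliptical-copula transform sin(πτ/2)), fit a sparse precision matrix, and prefer stocks with few conditional dependencies. The data, regularization level, and final selection rule below are illustrative assumptions:

```python
import numpy as np
import pandas as pd
from sklearn.covariance import graphical_lasso

returns = pd.DataFrame(np.random.default_rng(0).standard_normal((500, 12)))
tau = returns.corr(method="kendall").to_numpy()
corr = np.sin(np.pi * tau / 2.0)   # rank-based estimate of latent correlation
np.fill_diagonal(corr, 1.0)

_, precision = graphical_lasso(corr, alpha=0.1)      # sparse inverse correlation
edges = (np.abs(precision) > 1e-6).sum(axis=1) - 1   # graph degree per stock
print("most independent stocks:", np.argsort(edges)[:5])
```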
Evaluating the Health Impact of Large-Scale Public Policy Changes: Classical and Novel Approaches
Basu, Sanjay; Meghani, Ankita; Siddiqi, Arjumand
2018-01-01
Large-scale public policy changes are often recommended to improve public health. Despite varying widely—from tobacco taxes to poverty-relief programs—such policies present a common dilemma to public health researchers: how to evaluate their health effects when randomized controlled trials are not possible. Here, we review the state of knowledge and experience of public health researchers who rigorously evaluate the health consequences of large-scale public policy changes. We organize our discussion by detailing approaches to address three common challenges of conducting policy evaluations: distinguishing a policy effect from time trends in health outcomes or preexisting differences between policy-affected and -unaffected communities (using difference-in-differences approaches); constructing a comparison population when a policy affects a population for whom a well-matched comparator is not immediately available (using propensity score or synthetic control approaches); and addressing unobserved confounders by utilizing quasi-random variations in policy exposure (using regression discontinuity, instrumental variables, or near-far matching approaches). PMID:28384086
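As a concrete illustration of the first approach, a minimal difference-in-differences regression recovers the policy effect as the coefficient on the treated-by-post interaction; the simulated data and effect size below are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({"treated": rng.integers(0, 2, n), "post": rng.integers(0, 2, n)})
df["outcome"] = (1.0 + 0.5 * df.treated + 0.3 * df.post
                 - 0.8 * df.treated * df.post          # true policy effect
                 + rng.standard_normal(n))

fit = smf.ols("outcome ~ treated * post", data=df).fit()
print(fit.params["treated:post"])  # DiD estimate, should be close to -0.8
```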
NASA Astrophysics Data System (ADS)
Hartmann, Alfred; Redfield, Steve
1989-04-01
This paper discusses the design of large-scale (1000 x 1000) optical crossbar switching networks for use in parallel processing supercomputers. Alternative design sketches for an optical crossbar switching network are presented using free-space optical transmission with either a beam spreading/masking model or a beam steering model for internodal communications. The performance of alternative multiple access channel communications protocols (unslotted and slotted ALOHA and carrier sense multiple access, CSMA) is compared with the performance of the classic arbitrated-bus crossbar of conventional electronic parallel computing. These comparisons indicate an almost inverse relationship between ease of implementation and speed of operation. Practical issues of optical system design are addressed, and an optically addressed, composite spatial light modulator design is presented for fabrication to arbitrarily large scale. The wide range of switch architecture, communications protocol, optical systems design, device fabrication, and system performance problems presented by these design sketches poses a serious challenge to practical exploitation of highly parallel optical interconnects in advanced computer designs.
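The ease-versus-speed trade-off can be quantified with the textbook ALOHA throughput curves: unslotted ALOHA peaks at 1/(2e) ≈ 0.18 of channel capacity and slotted ALOHA at 1/e ≈ 0.37. A short numerical check:

```python
import numpy as np

G = np.linspace(0.01, 3.0, 300)   # offered load, packets per packet time
unslotted = G * np.exp(-2.0 * G)  # S = G e^{-2G}
slotted = G * np.exp(-G)          # S = G e^{-G}
print(f"peak throughput: unslotted {unslotted.max():.3f}, slotted {slotted.max():.3f}")
```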
Fucci, D; Petrosino, L; Banks, M; Zaums, K; Wilcox, C
1996-08-01
The purpose of the present study was to assess the effect of preference for three different types of music on magnitude estimation scaling behavior in young adults. Three groups of college students were tested: 10 who liked rock music, 10 who liked big band music, and 10 who liked classical music. Subjects were instructed to assign numerical values to a random series of nine suprathreshold intensity levels of 10-sec samples of rock music, big band music, and classical music. Analysis indicated that subjects who liked rock music scaled that stimulus differently from those subjects who liked big band and classical music. Subjects who liked big band music scaled that stimulus differently from those subjects who liked rock music and classical music. All subjects scaled classical music similarly regardless of their musical preferences. Results are discussed in reference to the literature concerned with personality and preference, as well as spectrographic analyses of the three different types of music used in this study.
Large-scale quantitative analysis of painting arts.
Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong
2014-12-11
Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances.
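The three measures named in this abstract can be approximated with standard image-processing code. The sketch below is an illustration under our own assumptions (histogram bin count, row-wise roughness fit), not the authors' pipeline: color variety as the Shannon entropy of a quantized RGB histogram, and the roughness exponent H from the scaling of brightness increments.

```python
import numpy as np

def color_variety(rgb, bins=16):
    """Shannon entropy of a quantized RGB histogram (color variety)."""
    q = (rgb // (256 // bins)).reshape(-1, 3)
    codes = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    p = np.bincount(codes, minlength=bins**3).astype(float)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

def roughness_exponent(brightness):
    """Fit H in <|b(x+r) - b(x)|^2> ~ r^{2H} along image rows."""
    rs = np.arange(1, 33)
    w = [np.mean((brightness[:, r:] - brightness[:, :-r]) ** 2) for r in rs]
    H, _ = np.polyfit(np.log(rs), 0.5 * np.log(w), 1)
    return H

rgb = np.random.randint(0, 256, (64, 64, 3))   # stand-in for a painting
print(color_variety(rgb), roughness_exponent(rgb.mean(axis=2)))
```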
Owor, Betty E; Shepherd, Dionne N; Taylor, Nigel J; Edema, Richard; Monjane, Adérito L; Thomson, Jennifer A; Martin, Darren P; Varsani, Arvind
2007-03-01
Leaf samples from 155 maize streak virus (MSV)-infected maize plants were collected from 155 farmers' fields in 23 districts in Uganda in May/June 2005 by leaf-pressing infected samples onto FTA Classic Cards. Viral DNA was successfully extracted from cards stored at room temperature for 9 months. The diversity of 127 MSV isolates was analysed by PCR-generated RFLPs. Six representative isolates having different RFLP patterns and causing either severe, moderate or mild disease symptoms, were chosen for amplification from FTA cards by bacteriophage phi29 DNA polymerase using the TempliPhi system. Full-length genomes were inserted into a cloning vector using a unique restriction enzyme site, and sequenced. The 1.3-kb PCR product amplified directly from FTA-eluted DNA and used for RFLP analysis was also cloned and sequenced. Comparison of cloned whole genome sequences with those of the original PCR products indicated that the correct virus genome had been cloned and that no errors were introduced by the phi29 polymerase. This is the first successful large-scale application of FTA card technology to the field, and illustrates the ease with which large numbers of infected samples can be collected and stored for downstream molecular applications such as diversity analysis and cloning of potentially new virus genomes.
Sequestering the standard model vacuum energy.
Kaloper, Nemanja; Padilla, Antonio
2014-03-07
We propose a very simple reformulation of general relativity, which completely sequesters from gravity all of the vacuum energy from a matter sector, including all loop corrections, and renders all contributions from phase transitions automatically small. The idea is to make the dimensional parameters in the matter sector functionals of the 4-volume element of the Universe. For them to be nonzero, the Universe should be finite in spacetime. If this matter is the standard model of particle physics, our mechanism prevents any of its vacuum energy, classical or quantum, from sourcing the curvature of the Universe. The mechanism is consistent with the large hierarchy between the Planck scale, electroweak scale, and curvature scale, and with early Universe cosmology, including inflation. Consequences of our proposal are that the vacuum curvature of an old and large universe is not zero, but very small, that w_DE ≃ -1 is a transient, and that the Universe will collapse in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Tong; Gu, YuanTong, E-mail: yuantong.gu@qut.edu.au
As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the accessible length scales when modeling the mechanical behavior of soft matter. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics would underestimate the thermodynamic behavior of soft matter (e.g. microfilaments in cells), which can weaken the ability of materials to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of the thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated by all-atom MD simulation. This new algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter.
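The paper's specific thermostat is not spelled out in the abstract, so the sketch below shows only the generic ingredient it builds on: a Langevin-type stochastic thermostat in which a friction term and fluctuation-dissipation-matched random kicks keep coarse-grained beads at temperature T. All parameters are illustrative.

```python
import numpy as np

kB = 1.380649e-23  # J/K

def langevin_step(x, v, force, m, gamma, T, dt, rng):
    """One Euler-Maruyama step of underdamped Langevin dynamics.

    The random-force variance is tied to the friction gamma by the
    fluctuation-dissipation relation, so kinetic energy equilibrates
    to (1/2) kB T per degree of freedom.
    """
    sigma = np.sqrt(2.0 * gamma * kB * T * m / dt)
    a = (force(x) - gamma * m * v + sigma * rng.standard_normal(x.shape)) / m
    v = v + a * dt
    x = x + v * dt
    return x, v

# Toy system: harmonic beads; check equipartition.
rng = np.random.default_rng(1)
k, m, gamma, T, dt = 1.0, 1e-21, 1e12, 300.0, 1e-15
x = np.zeros(1000); v = np.zeros(1000)
for _ in range(20000):
    x, v = langevin_step(x, v, lambda q: -k * q, m, gamma, T, dt, rng)
print(0.5 * m * np.mean(v**2) / (0.5 * kB * T))  # ~1 if correctly thermostatted
```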
Direct and inverse energy cascades in a forced rotating turbulence experiment
NASA Astrophysics Data System (ADS)
Campagne, Antoine; Gallet, Basile; Moisy, Frédéric; Cortet, Pierre-Philippe
2014-12-01
We present experimental evidence for a double cascade of kinetic energy in a statistically stationary rotating turbulence experiment. Turbulence is generated by a set of vertical flaps, which continuously injects velocity fluctuations towards the center of a rotating water tank. The energy transfers are evaluated from two-point third-order three-component velocity structure functions, which we measure using stereoscopic particle image velocimetry in the rotating frame. Without global rotation, the energy is transferred from large to small scales, as in classical three-dimensional turbulence. For nonzero rotation rates, the horizontal kinetic energy presents a double cascade: a direct cascade at small horizontal scales and an inverse cascade at large horizontal scales. By contrast, the vertical kinetic energy is always transferred from large to small horizontal scales, a behavior reminiscent of the dynamics of a passive scalar in two-dimensional turbulence. At the largest rotation rate, the flow is nearly two-dimensional, and a pure inverse energy cascade is found for the horizontal energy. To describe the scale-by-scale energy budget, we consider a generalization of the Kármán-Howarth-Monin equation to inhomogeneous turbulent flows, in which the energy input is explicitly described as the advection of turbulent energy from the flaps through the surface of the control volume where the measurements are performed.
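The quantity at the heart of this measurement, a third-order structure function whose sign gives the direction of the energy transfer across scale r, can be estimated from gridded velocity data as follows. This is a one-dimensional illustrative sketch (synthetic data), not the full two-point three-component estimator used in the experiment; in homogeneous isotropic turbulence the longitudinal version obeys the 4/5 law, S3(r) = -(4/5) ε r.

```python
import numpy as np

def s3_longitudinal(u, rs):
    """Third-order longitudinal structure function S3(r) = <(du_r)^3>.

    u: 1D array of the velocity component along the separation
    direction, sampled on a uniform grid. The sign of S3 indicates
    the direction of the mean energy transfer across scale r.
    """
    return np.array([np.mean((u[r:] - u[:-r]) ** 3) for r in rs])

u = np.random.default_rng(2).standard_normal(2**16)  # stand-in velocity record
rs = np.array([1, 2, 4, 8, 16, 32])
print(s3_longitudinal(u, rs))  # ~0 for Gaussian noise; signed in real turbulence
```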
Stochastic dynamics of genetic broadcasting networks
NASA Astrophysics Data System (ADS)
Potoyan, Davit; Wolynes, Peter
The complex genetic programs of eukaryotic cells are often regulated by key transcription factors occupying or clearing out of a large number of genomic locations. Orchestrating the residence times of these factors is therefore important for the well organized functioning of a large network. The classic models of genetic switches sidestep this timing issue by assuming the binding of transcription factors to be governed entirely by thermodynamic protein-DNA affinities. Here we show that relying on passive thermodynamics and random release times can lead to a "time-scale crisis" of master genes that broadcast their signals to a large number of binding sites. We demonstrate that this "time-scale crisis" can be resolved by actively regulating residence times through molecular stripping. We illustrate these ideas by studying the stochastic dynamics of the genetic network of the central eukaryotic master regulator NFκB, which broadcasts its signals to many downstream genes that regulate immune response, apoptosis, etc.
A semiparametric graphical modelling approach for large-scale equity selection
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption. PMID:28316507
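Two ingredients of such a semiparametric pipeline can be sketched compactly: a rank-based (Kendall's tau) estimate of the latent Gaussian correlation, which is the standard sine-transform estimator for elliptical copulas, and a greedy selection of mutually near-independent stocks. The code below illustrates the general idea only; the authors' actual structure-inference step (regularized graphical modelling with stability inference) is more involved, and all sizes are toy values.

```python
import numpy as np
from scipy.stats import kendalltau

def latent_corr(returns):
    """Rank-based estimate of the latent Gaussian correlation matrix.

    For elliptical copulas, Sigma_ij = sin(pi/2 * tau_ij) recovers the
    latent correlation from Kendall's tau, robustly to heavy tails.
    """
    n = returns.shape[1]
    S = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            tau = kendalltau(returns[:, i], returns[:, j])[0]
            S[i, j] = S[j, i] = np.sin(0.5 * np.pi * tau)
    return S

def pick_independent(S, k):
    """Greedily select k assets with small maximum pairwise |correlation|."""
    chosen = [0]
    while len(chosen) < k:
        rest = [j for j in range(S.shape[0]) if j not in chosen]
        chosen.append(min(rest, key=lambda j: max(abs(S[j, c]) for c in chosen)))
    return chosen

R = np.random.default_rng(3).standard_normal((250, 20))  # toy daily returns
print(pick_independent(latent_corr(R), 5))
```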
Constraints on the extremely high-energy cosmic ray accelerators from classical electrodynamics
NASA Astrophysics Data System (ADS)
Aharonian, F. A.; Belyanin, A. A.; Derishev, E. V.; Kocharovsky, V. V.; Kocharovsky, Vl. V.
2002-07-01
We formulate the general requirements, set by classical electrodynamics, on the sources of extremely high-energy cosmic rays (EHECRs). It is shown that the parameters of EHECR accelerators are strongly limited not only by the particle confinement in large-scale magnetic fields or by the difference in electric potentials (generalized Hillas criterion) but also by the synchrotron radiation, the electro-bremsstrahlung, or the curvature radiation of accelerated particles. Optimization of these requirements in terms of an accelerator's size and magnetic field strength results in the ultimate lower limit to the overall source energy budget, which scales as the fifth power of attainable particle energy. Hard γ rays accompanying generation of EHECRs can be used to probe potential acceleration sites. We apply the results to several populations of astrophysical objects (potential EHECR sources) and discuss their ability to accelerate protons to 10^20 eV and beyond. The possibility of gain from ultrarelativistic bulk flows is addressed, with active galactic nuclei and gamma-ray bursts being the examples.
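The fifth-power scaling of the source energy budget can be recovered by a standard back-of-envelope argument combining the Hillas criterion with the synchrotron-loss limit. The following sketch is order-of-magnitude reasoning in Gaussian units, not a derivation quoted from the paper.

```latex
% Confinement (Hillas): the accelerator must hold the particle,
\[
E \lesssim qBR \;\Rightarrow\; R \gtrsim \frac{E}{qB} .
\]
% Synchrotron limit: the acceleration power must beat the loss rate,
\[
qBc \;\gtrsim\; \frac{2}{3}\,\frac{q^4 B^2 E^2}{m^4 c^7}
\;\Rightarrow\; B \;\lesssim\; \frac{3}{2}\,\frac{m^4 c^8}{q^3 E^2}
\;\propto\; E^{-2} .
\]
% Magnetic energy budget of a source of size R:
\[
W \;\gtrsim\; \frac{B^2}{8\pi}\cdot\frac{4\pi}{3}R^3
\;=\; \frac{B^2 R^3}{6}
\;\propto\; B^2\Bigl(\frac{E}{qB}\Bigr)^{3}
\;\propto\; \frac{E^3}{B}
\;\propto\; E^{5} .
\]
```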
Constraints on the extremely high-energy cosmic rays accelerators from classical electrodynamics
NASA Astrophysics Data System (ADS)
Belyanin, A.; Aharonian, F.; Derishev, E.; Kocharovsky, V.; Kocharovsky, V.
We formulate the general requirements, set by classical electrodynamics, on the sources of extremely high-energy cosmic rays (EHECRs). It is shown that the parameters of EHECR accelerators are strongly limited not only by the particle confinement in large-scale magnetic fields or by the difference in electric potentials (generalized Hillas criterion), but also by the synchrotron radiation, the electro-bremsstrahlung, or the curvature radiation of accelerated particles. Optimization of these requirements in terms of an accelerator's size and magnetic field strength results in the ultimate lower limit to the overall source energy budget, which scales as the fifth power of attainable particle energy. Hard gamma-rays accompanying generation of EHECRs can be used to probe potential acceleration sites. We apply the results to several populations of astrophysical objects - potential EHECR sources - and discuss their ability to accelerate protons to 10^20 eV and beyond. The possibility of gain from ultrarelativistic bulk flows is addressed, with Active Galactic Nuclei and Gamma-Ray Bursts being the examples.
Phase Transitions and Scaling in Systems Far from Equilibrium
NASA Astrophysics Data System (ADS)
Täuber, Uwe C.
2017-03-01
Scaling ideas and renormalization group approaches proved crucial for a deep understanding and classification of critical phenomena in thermal equilibrium. Over the past decades, these powerful conceptual and mathematical tools were extended to continuous phase transitions separating distinct nonequilibrium stationary states in driven classical and quantum systems. In concordance with detailed numerical simulations and laboratory experiments, several prominent dynamical universality classes have emerged that govern large-scale, long-time scaling properties both near and far from thermal equilibrium. These pertain to genuine specific critical points as well as entire parameter space regions for steady states that display generic scale invariance. The exploration of nonstationary relaxation properties and associated physical aging scaling constitutes a complementary potent means to characterize cooperative dynamics in complex out-of-equilibrium systems. This review describes dynamic scaling features through paradigmatic examples that include near-equilibrium critical dynamics, driven lattice gases and growing interfaces, correlation-dominated reaction-diffusion systems, and basic epidemic models.
Large-Scale Quantitative Analysis of Painting Arts
Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong
2014-01-01
Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images – the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances. PMID:25501877
What are the low-Q and large-x boundaries of collinear QCD factorization theorems?
Moffat, E.; Melnitchouk, W.; Rogers, T. C.; ...
2017-05-26
Familiar factorized descriptions of classic QCD processes such as deeply-inelastic scattering (DIS) apply in the limit of very large hard scales, much larger than nonperturbative mass scales and other nonperturbative physical properties like intrinsic transverse momentum. Since many interesting DIS studies occur at kinematic regions where the hard scale, $Q \sim 1$-$2$ GeV, is not very much greater than the hadron masses involved, and the Bjorken scaling variable $x_{bj}$ is large, $x_{bj} \gtrsim 0.5$, it is important to examine the boundaries of the most basic factorization assumptions and assess whether improved starting points are needed. Using an idealized field-theoretic model that contains most of the essential elements that a factorization derivation must confront, we retrace in this paper the steps of factorization approximations and compare with calculations that keep all kinematics exact. We examine the relative importance of such quantities as the target mass, light quark masses, and intrinsic parton transverse momentum, and argue that a careful accounting of parton virtuality is essential for treating power corrections to collinear factorization. Finally, we use our observations to motivate searches for new or enhanced factorization theorems specifically designed to deal with moderately low-$Q$ and large-$x_{bj}$ physics.
Stochastic dynamics of genetic broadcasting networks
NASA Astrophysics Data System (ADS)
Potoyan, Davit A.; Wolynes, Peter G.
2017-11-01
The complex genetic programs of eukaryotic cells are often regulated by key transcription factors occupying or clearing out of a large number of genomic locations. Orchestrating the residence times of these factors is therefore important for the well organized functioning of a large network. The classic models of genetic switches sidestep this timing issue by assuming the binding of transcription factors to be governed entirely by thermodynamic protein-DNA affinities. Here we show that relying on passive thermodynamics and random release times can lead to a "time-scale crisis" for master genes that broadcast their signals to a large number of binding sites. We demonstrate that this time-scale crisis for clearance in a large broadcasting network can be resolved by actively regulating residence times through molecular stripping. We illustrate these ideas by studying a model of the stochastic dynamics of the genetic network of the central eukaryotic master regulator NFκ B which broadcasts its signals to many downstream genes that regulate immune response, apoptosis, etc.
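The timing problem can be reproduced with a toy stochastic simulation: with purely passive release, the time for the slowest of n sites to clear grows like ln(n)/k_off, while an added stripping channel compresses it. The Gillespie-style sketch below is an invented caricature with illustrative rates, not the authors' NFκB network model.

```python
import numpy as np

def mean_clearance_time(n_sites, k_off, k_strip, trials, seed=0):
    """Mean time for all n_sites bound factors to clear.

    Passive release: each occupied site clears at rate k_off.
    Active stripping adds a rate k_strip per site, standing in for
    a molecule that actively removes bound transcription factors.
    """
    rng = np.random.default_rng(seed)
    times = []
    for _ in range(trials):
        t, bound = 0.0, n_sites
        while bound > 0:
            rate = bound * (k_off + k_strip)   # total clearance rate
            t += rng.exponential(1.0 / rate)   # Gillespie waiting time
            bound -= 1
        times.append(t)
    return np.mean(times)

# Broadcasting to many sites: passive clearance grows like ln(n)/k_off
# (the "time-scale crisis"); stripping compresses it.
for n in (10, 1000):
    print(n, mean_clearance_time(n, k_off=0.01, k_strip=0.0, trials=200),
             mean_clearance_time(n, k_off=0.01, k_strip=1.0, trials=200))
```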
How well can regional fluxes be derived from smaller-scale estimates?
NASA Technical Reports Server (NTRS)
Moore, Kathleen E.; Fitzjarrald, David R.; Ritter, John A.
1992-01-01
Regional surface fluxes are essential lower boundary conditions for large-scale numerical weather and climate models and are the elements of global budgets of important trace gases. Surface properties affecting the exchange of heat, moisture, momentum and trace gases vary on length scales from one meter to hundreds of km. A classical difficulty is that fluxes have been measured directly only at points or along lines. The process of scaling up observations limited in space and/or time to represent larger areas has been done by assigning properties to surface classes and combining estimated or calculated fluxes using an area-weighted average. It is not clear that a simple area-weighted average is sufficient to produce the large scale from the small scale, chiefly due to the effect of internal boundary layers, nor is it known how important the uncertainty is to large-scale model outcomes. Simultaneous aircraft and tower data obtained in the relatively simple terrain of the western Alaska tundra were used to determine the extent to which surface type variation can be related to fluxes of heat, moisture, and other properties. Surface type was classified as lake or land with an aircraft-borne infrared thermometer, and flight-level heat and moisture fluxes were related to surface type. The magnitude and variety of sampling errors inherent in eddy correlation flux estimation place limits on how well any flux can be known, even in simple geometries.
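The classical scaling-up step described here, assigning a flux to each surface class and combining with an area-weighted average, is a one-liner; writing it out makes explicit the linearity assumption that internal boundary layers can violate. Class names and numbers below are invented for illustration.

```python
def area_weighted_flux(fluxes, fractions):
    """Regional flux as an area-weighted average of per-class fluxes.

    fluxes:    {surface class: flux estimate (e.g. W/m^2)}
    fractions: {surface class: fraction of the region's area}, summing to 1.
    """
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(fluxes[c] * fractions[c] for c in fluxes)

# Toy tundra example: lakes vs land (values invented for illustration).
print(area_weighted_flux({"lake": 40.0, "land": 120.0},
                         {"lake": 0.3, "land": 0.7}))  # -> 96.0 W/m^2
```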
Compiling Planning into Quantum Optimization Problems: A Comparative Study
2015-06-07
and Sipser, M. 2000. Quantum computation by adiabatic evolution. arXiv:quant-ph/0001106. Fikes, R. E., and Nilsson, N. J. 1972. STRIPS: A new...become available: quantum annealing. Quantum annealing is one of the most accessible quantum algorithms for a computer science audience not versed...in quantum computing because of its close ties to classical optimization algorithms such as simulated annealing. While large-scale universal quantum
Large Scale Single Nucleotide Polymorphism Study of PD Susceptibility
2005-03-01
identification of eight genetic loci in the familial PD, the results of intensive investigations of polymorphisms in dozens of genes related to sporadic, late...1) investigate the association between classical, sporadic PD and 2386 SNPs in 23 genes implicated in the pathogenesis of PD; (2) construct...addition, experiences derived from this study may be applied in other complex disorders for the identification of susceptibility genes , as well as in genome
NASA Technical Reports Server (NTRS)
Wang, Wenlong; Mandra, Salvatore; Katzgraber, Helmut G.
2016-01-01
In this paper, we propose a patch planting method for creating arbitrarily large spin glass instances with known ground states. The scaling of the computational complexity of these instances with various block numbers and sizes is investigated and compared with random instances using population annealing Monte Carlo and the quantum annealing DW2X machine. The method can be useful for benchmarking tests for future generation quantum annealing machines, classical and quantum mechanical optimization algorithms.
Bouncing cosmologies from quantum gravity condensates
NASA Astrophysics Data System (ADS)
Oriti, Daniele; Sindoni, Lorenzo; Wilson-Ewing, Edward
2017-02-01
We show how the large-scale cosmological dynamics can be obtained from the hydrodynamics of isotropic group field theory condensate states in the Gross-Pitaevskii approximation. The correct Friedmann equations are recovered in the classical limit for some choices of the parameters in the action for the group field theory, and quantum gravity corrections arise in the high-curvature regime causing a bounce which generically resolves the big-bang and big-crunch singularities.
Re'class'ification of 'quant'ified classical simulated annealing
NASA Astrophysics Data System (ADS)
Tanaka, Toshiyuki
2009-12-01
We discuss a classical reinterpretation of the quantum-mechanics-based analysis of classical Markov chains with detailed balance, which is based on the quantum-classical correspondence. The classical reinterpretation is then used to demonstrate that it successfully reproduces a sufficient condition on the cooling schedule in classical simulated annealing, which has the inverse-logarithmic scaling.
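The inverse-logarithmic scaling referred to is the classical sufficient condition on the cooling schedule, T(t) = c/ln(t + const), with c large enough to escape the deepest barrier. A minimal Metropolis annealing loop using such a schedule might look as follows; the energy function and constants are purely illustrative.

```python
import numpy as np

def simulated_annealing(energy, x0, steps, c, step_size, seed=0):
    """Metropolis simulated annealing with T(t) = c / ln(t + 2).

    An inverse-logarithmic schedule is the classical sufficient
    condition for convergence to the ground state (c must exceed
    the largest barrier); it is painfully slow but provably safe.
    """
    rng = np.random.default_rng(seed)
    x, e = x0, energy(x0)
    for t in range(steps):
        T = c / np.log(t + 2.0)
        y = x + step_size * rng.standard_normal()
        ey = energy(y)
        if ey <= e or rng.random() < np.exp(-(ey - e) / T):
            x, e = y, ey
    return x, e

double_well = lambda x: (x**2 - 1.0)**2 + 0.2 * x   # global minimum near x = -1
print(simulated_annealing(double_well, x0=2.0, steps=200000, c=1.0, step_size=0.3))
```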
Classical evolution of fractal measures on the lattice
NASA Astrophysics Data System (ADS)
Antoniou, N. G.; Diakonos, F. K.; Saridakis, E. N.; Tsolias, G. A.
2007-04-01
We consider the classical evolution of a lattice of nonlinear coupled oscillators for a special case of initial conditions resembling the equilibrium state of a macroscopic thermal system at the critical point. The displacements of the oscillators define initially a fractal measure on the lattice associated with the scaling properties of the order parameter fluctuations in the corresponding critical system. Assuming a sudden symmetry breaking (quench), leading to a change in the equilibrium position of each oscillator, we investigate in some detail the deformation of the initial fractal geometry as time evolves. In particular, we show that traces of the critical fractal measure can be sustained for large times, and we extract the properties of the chain that determine the associated time scales. Our analysis applies generally to critical systems for which, after a slow developing phase where equilibrium conditions are justified, a rapid evolution, induced by a sudden symmetry breaking, emerges on time scales much shorter than the corresponding relaxation or observation time. In particular, it can be used in the fireball evolution in a heavy-ion collision experiment, where the QCD critical point emerges, or in the study of evolving fractals of astrophysical and cosmological scales, and may lead to determination of the initial critical properties of the Universe through observations in the symmetry-broken phase.
Demonstration of quantum advantage in machine learning
NASA Astrophysics Data System (ADS)
Ristè, Diego; da Silva, Marcus P.; Ryan, Colm A.; Cross, Andrew W.; Córcoles, Antonio D.; Smolin, John A.; Gambetta, Jay M.; Chow, Jerry M.; Johnson, Blake R.
2017-04-01
The main promise of quantum computing is to efficiently solve certain problems that are prohibitively expensive for a classical computer. Most problems with a proven quantum advantage involve the repeated use of a black box, or oracle, whose structure encodes the solution. One measure of the algorithmic performance is the query complexity, i.e., the scaling of the number of oracle calls needed to find the solution with a given probability. Few-qubit demonstrations of quantum algorithms, such as Deutsch-Jozsa and Grover, have been implemented across diverse physical systems such as nuclear magnetic resonance, trapped ions, optical systems, and superconducting circuits. However, at the small scale, these problems can already be solved classically with a few oracle queries, limiting the obtained advantage. Here we solve an oracle-based problem, known as learning parity with noise, on a five-qubit superconducting processor. Executing classical and quantum algorithms using the same oracle, we observe a large gap in query count in favor of quantum processing. We find that this gap grows by orders of magnitude as a function of the error rates and the problem size. This result demonstrates that, while complex fault-tolerant architectures will be required for universal quantum computing, a significant quantum advantage already emerges in existing noisy systems.
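On the classical side of such a comparison, a parity-with-noise oracle that accepts chosen inputs can be learned by querying basis vectors and majority-voting the noisy answers, with the query count growing with the noise level and problem size. The sketch below illustrates this baseline idea only; it is not the paper's experimental protocol, and all sizes are toy values.

```python
import numpy as np

def noisy_parity_oracle(s, noise, rng):
    """Oracle returning x -> s.x mod 2, flipped with probability `noise`."""
    def query(x):
        return (int(np.dot(s, x)) + (rng.random() < noise)) % 2
    return query

def learn_parity(query, n, repeats, rng):
    """Estimate each secret bit by majority vote over repeated queries
    on the basis vectors; more noise demands more repeats."""
    s_hat = np.zeros(n, dtype=int)
    for i in range(n):
        e = np.zeros(n, dtype=int); e[i] = 1
        votes = sum(query(e) for _ in range(repeats))
        s_hat[i] = int(votes > repeats / 2)
    return s_hat

rng = np.random.default_rng(4)
secret = rng.integers(0, 2, 5)
oracle = noisy_parity_oracle(secret, noise=0.2, rng=rng)
print(secret, learn_parity(oracle, 5, repeats=101, rng=rng))
```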
Quantum error correction in crossbar architectures
NASA Astrophysics Data System (ADS)
Helsen, Jonas; Steudtner, Mark; Veldhorst, Menno; Wehner, Stephanie
2018-07-01
A central challenge for the scaling of quantum computing systems is the need to control all qubits in the system without a large overhead. A solution for this problem in classical computing comes in the form of so-called crossbar architectures. Recently we made a proposal for a large-scale quantum processor (Li et al., arXiv:1711.03807 (2017)) to be implemented in silicon quantum dots. This system features a crossbar control architecture which limits parallel single-qubit control, but allows the scheme to overcome control scaling issues that form a major hurdle to large-scale quantum computing systems. In this work, we develop a language that makes it possible to easily map quantum circuits to crossbar systems, taking into account their architecture and control limitations. Using this language we show how to map well known quantum error correction codes such as the planar surface and color codes in this limited control setting with only a small overhead in time. We analyze the logical error behavior of this surface code mapping for estimated experimental parameters of the crossbar system and conclude that logical error suppression to a level useful for real quantum computation is feasible.
Direct and inverse energy cascades in a forced rotating turbulence experiment
NASA Astrophysics Data System (ADS)
Campagne, Antoine; Gallet, Basile; Moisy, Frédéric; Cortet, Pierre-Philippe
2014-11-01
Turbulence in a rotating frame provides a remarkable system where 2D and 3D properties may coexist, with a possible tuning between direct and inverse cascades. We present here experimental evidence for a double cascade of kinetic energy in a statistically stationary rotating turbulence experiment. Turbulence is generated by a set of vertical flaps which continuously injects velocity fluctuations towards the center of a rotating water tank. The energy transfers are evaluated from two-point third-order three-component velocity structure functions, which we measure using stereoscopic PIV in the rotating frame. Without global rotation, the energy is transferred from large to small scales, as in classical 3D turbulence. For nonzero rotation rates, the horizontal kinetic energy presents a double cascade: a direct cascade at small horizontal scales and an inverse cascade at large horizontal scales. By contrast, the vertical kinetic energy is always transferred from large to small horizontal scales, a behavior reminiscent of the dynamics of a passive scalar in 2D turbulence. At the largest rotation rate, the flow is nearly 2D and a pure inverse energy cascade is found for the horizontal energy.
Pal, Abhro; Anupindi, Kameswararao; Delorme, Yann; Ghaisas, Niranjan; Shetty, Dinesh A.; Frankel, Steven H.
2014-01-01
In the present study, we performed large eddy simulation (LES) of axisymmetric and eccentric arterial models with 75% stenosis under steady inflow conditions at a Reynolds number of 1000. The results obtained are compared with the direct numerical simulation (DNS) data (Varghese et al., 2007, “Direct Numerical Simulation of Stenotic Flows. Part 1. Steady Flow,” J. Fluid Mech., 582, pp. 253–280). An in-house code (WenoHemo) employing high-order numerical methods for spatial and temporal terms, along with a second-order accurate ghost-point immersed boundary method (IBM) (Mark and Vanwachem, 2008, “Derivation and Validation of a Novel Implicit Second-Order Accurate Immersed Boundary Method,” J. Comput. Phys., 227(13), pp. 6660–6680) for enforcing boundary conditions on curved geometries, is used for the simulations. Three subgrid scale (SGS) models, namely, the classical Smagorinsky model (Smagorinsky, 1963, “General Circulation Experiments With the Primitive Equations,” Mon. Weather Rev., 91(10), pp. 99–164), the recently developed Vreman model (Vreman, 2004, “An Eddy-Viscosity Subgrid-Scale Model for Turbulent Shear Flow: Algebraic Theory and Applications,” Phys. Fluids, 16(10), pp. 3670–3681), and the Sigma model (Nicoud et al., 2011, “Using Singular Values to Build a Subgrid-Scale Model for Large Eddy Simulations,” Phys. Fluids, 23(8), 085106) are evaluated in the present study. Evaluation of the SGS models suggests that the classical constant-coefficient Smagorinsky model gives the best agreement with the DNS data, whereas the Vreman and Sigma models predict an early transition to turbulence in the poststenotic region. Supplementary simulations are performed using the Open source field operation and manipulation (OpenFOAM) (“OpenFOAM,” http://www.openfoam.org/) solver and the results are in line with those obtained with WenoHemo. PMID:24801556
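The constant-coefficient Smagorinsky closure that performed best here computes an eddy viscosity nu_t = (C_s Δ)^2 |S| from the resolved strain rate. A minimal uniform-grid, central-difference sketch follows; the coefficient value and the toy shear-layer field are illustrative only.

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, Cs=0.17):
    """Constant-coefficient Smagorinsky eddy viscosity on a 2D grid.

    nu_t = (Cs*dx)^2 * |S|, |S| = sqrt(2 S_ij S_ij), with the resolved
    strain rate S_ij from central differences (array axis 0 is x).
    """
    dudx, dudy = np.gradient(u, dx, dx)
    dvdx, dvdy = np.gradient(v, dx, dx)
    Sxx, Syy = dudx, dvdy
    Sxy = 0.5 * (dudy + dvdx)
    S_mag = np.sqrt(2.0 * (Sxx**2 + Syy**2 + 2.0 * Sxy**2))
    return (Cs * dx) ** 2 * S_mag

# Toy resolved field: a shear layer, u = tanh(5y), v = 0.
x = np.linspace(-1, 1, 65)
X, Y = np.meshgrid(x, x, indexing="ij")
nu_t = smagorinsky_nu_t(np.tanh(5 * Y), np.zeros_like(Y), dx=x[1] - x[0])
print(nu_t.max())
```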
Sequeira, Ana Filipa; Brás, Joana L A; Guerreiro, Catarina I P D; Vincentelli, Renaud; Fontes, Carlos M G A
2016-12-01
Gene synthesis is becoming an important tool in many fields of recombinant DNA technology, including recombinant protein production. De novo gene synthesis is quickly replacing the classical cloning and mutagenesis procedures and allows generating nucleic acids for which no template is available. In addition, when coupled with efficient gene design algorithms that optimize codon usage, it leads to high levels of recombinant protein expression. Here, we describe the development of an optimized gene synthesis platform that was applied to the large-scale production of small genes encoding venom peptides. This improved gene synthesis method uses a PCR-based protocol to assemble synthetic DNA from pools of overlapping oligonucleotides and was developed to synthesise multiple genes simultaneously. This technology incorporates an accurate, automated and cost-effective ligation-independent cloning step to directly integrate the synthetic genes into an effective Escherichia coli expression vector. The robustness of this technology to generate large libraries of dozens to thousands of synthetic nucleic acids was demonstrated through the parallel and simultaneous synthesis of 96 genes encoding animal toxins. An automated platform was developed for the large-scale synthesis of small genes encoding eukaryotic toxins. Large-scale recombinant expression of synthetic genes encoding eukaryotic toxins will allow exploring the extraordinary potency and pharmacological diversity of animal venoms, an increasingly valuable but unexplored source of lead molecules for drug discovery.
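At the core of PCR-based assembly, the target gene is split into overlapping oligonucleotides on alternating strands so that neighbours anneal and can be extended into the full-length product. The bare-bones splitter below illustrates that layout only; lengths, overlap, and the toy sequence are arbitrary, and real platforms additionally optimize codon usage and melting temperatures.

```python
def reverse_complement(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def design_oligos(gene, oligo_len=60, overlap=20):
    """Split a gene into overlapping assembly oligos, alternating strands.

    Each oligo overlaps its neighbours by `overlap` bases so that,
    in a PCR assembly, adjacent oligos can anneal and be extended
    into the full-length product.
    """
    step = oligo_len - overlap
    oligos = []
    for k, start in enumerate(range(0, max(1, len(gene) - overlap), step)):
        chunk = gene[start:start + oligo_len]
        oligos.append(chunk if k % 2 == 0 else reverse_complement(chunk))
    return oligos

gene = "ATGGCTAGCAAGGAGGTTCTGCACGATCGATCGGGCTTTAAACCCGGGTTTCATCATCACTGA"
for o in design_oligos(gene, oligo_len=30, overlap=10):
    print(o)
```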
Portelli, Geoffrey; Barrett, John M; Hilgen, Gerrit; Masquelier, Timothée; Maccione, Alessandro; Di Marco, Stefano; Berdondini, Luca; Kornprobst, Pierre; Sernagor, Evelyne
2016-01-01
How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Going beyond individual RGC coding strategies, results in the salamander suggest that the relative latencies of an RGC pair encode spatial information. Thus, a population code based on this concerted spiking could be a powerful mechanism to transmit visual information rapidly and efficiently. Here, we tested this hypothesis in mouse by recording simultaneous light-evoked responses from hundreds of RGCs, at pan-retinal level, using a new generation of large-scale, high-density multielectrode array consisting of 4096 electrodes. Interestingly, we did not find any RGCs exhibiting a clear latency tuning to the stimuli, suggesting that in mouse, individual RGC pairs may not provide sufficient information. We show that a significant amount of information is encoded synergistically in the concerted spiking of large RGC populations. Thus, the RGC population response described with relative activities, or ranks, provides more relevant information than classical independent spike count- or latency-based codes. In particular, we report for the first time that when considering the relative activities across the whole population, the wave of first stimulus-evoked spikes is an accurate indicator of stimulus content. We show that this coding strategy coexists with classical neural codes, and that it is more efficient and faster. Overall, these novel observations suggest that already at the level of the retina, concerted spiking provides a reliable and fast strategy to rapidly transmit new visual scenes.
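A rank-based population readout can be made concrete: convert each cell's first-spike latency to its rank within the population, then compare stimuli by rank correlation rather than spike counts. The snippet below is a schematic illustration of this idea with synthetic latencies, not the study's decoder.

```python
import numpy as np
from scipy.stats import rankdata, spearmanr

def rank_code(latencies):
    """Population rank code: the order in which cells fire their first spike.

    Only the relative activity across the population matters; absolute
    latencies (and hence overall response speed) drop out of the code.
    """
    return rankdata(latencies, method="average")

rng = np.random.default_rng(5)
template = rng.exponential(50.0, 100)          # latencies to stimulus A (ms)
same = template + rng.normal(0.0, 2.0, 100)    # repeat of A, jittered
other = rng.exponential(50.0, 100)             # a different stimulus
print(spearmanr(rank_code(template), rank_code(same))[0])   # high
print(spearmanr(rank_code(template), rank_code(other))[0])  # ~0
```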
Active subspace: toward scalable low-rank learning.
Liu, Guangcan; Yan, Shuicheng
2012-12-01
We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
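The factorization trick at the heart of the method, representing the large solution matrix as the product of a small orthonormal matrix Q (the active subspace) and a small coefficient matrix J, can be illustrated on a toy nuclear-norm denoising problem: because Q has orthonormal columns, ||QJ||_* = ||J||_*, so the expensive prox step shrinks only a small matrix. The alternating closed-form updates below are a simplified stand-in for the authors' augmented Lagrange alternating direction method, with invented sizes and regularization.

```python
import numpy as np

def svt(A, lam):
    """Singular value thresholding: the prox operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def active_subspace_denoise(M, r, lam, iters=50):
    """Toy low-rank denoising: min 0.5 ||M - Q J||_F^2 + lam ||J||_*
    with Q (d x r) orthonormal, via alternating closed-form updates."""
    d, n = M.shape
    Q = np.linalg.qr(np.random.default_rng(6).standard_normal((d, r)))[0]
    for _ in range(iters):
        J = svt(Q.T @ M, lam)                    # small-scale prox step
        U, _, Vt = np.linalg.svd(M @ J.T, full_matrices=False)
        Q = U @ Vt                               # orthogonal Procrustes update
    return Q @ J

M = np.outer(np.arange(100.0), np.ones(80)) \
    + np.random.default_rng(7).standard_normal((100, 80))
X = active_subspace_denoise(M, r=5, lam=5.0)
print(np.linalg.norm(M - X) / np.linalg.norm(M))
```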
Modeling near-wall turbulent flows
NASA Astrophysics Data System (ADS)
Marusic, Ivan; Mathis, Romain; Hutchins, Nicholas
2010-11-01
The near-wall region of turbulent boundary layers is a crucial region for turbulence production, but it is also a region that becomes increasingly difficult to access and make measurements in as the Reynolds number becomes very high. Consequently, it is desirable to model the turbulence in this region. Recent studies have shown that the classical description, with inner (wall) scaling alone, is insufficient to explain the behaviour of the streamwise turbulence intensities with increasing Reynolds number. Here we will review our recent near-wall model (Marusic et al., Science 329, 2010), where the near-wall turbulence is predicted given information from only the large-scale signature at a single measurement point in the logarithmic layer, considerably far from the wall. The model is consistent with the Townsend attached eddy hypothesis in that the large-scale structures associated with the log-region are felt all the way down to the wall, but also includes a non-linear amplitude modulation effect of the large structures on the near-wall turbulence. Detailed predicted spectra across the entire near-wall region will be presented, together with other higher-order statistics over a large range of Reynolds numbers varying from laboratory to atmospheric flows.
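As we recall it, the predictive inner-outer model referred to (Marusic et al., Science 329, 2010) takes the algebraic form below; the notation is ours and should be checked against the original.

```latex
\[
u_{p}^{+}(z^{+}) \;=\; u^{*}(z^{+})\bigl[\,1 + \beta\,u_{OL}^{+}\,\bigr] \;+\; \alpha\,u_{OL}^{+} ,
\]
```

Here u* is the statistically universal small-scale near-wall signal, u_OL^+ the large-scale velocity measured at the outer (log-region) point, α the superposition coefficient, and β the amplitude-modulation coefficient.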
A result about scale transformation families in approximation
NASA Astrophysics Data System (ADS)
Apprato, Dominique; Gout, Christian
2000-06-01
Scale transformations are common in approximation. In surface approximation from rapidly varying data, one wants to suppress, or at least dampen, the oscillations of the approximation near steep gradients implied by the data. In that case, scale transformations can be used to give some control over overshoot when the surface has large variations of its gradient. Conversely, in image analysis, scale transformations are used in preprocessing to enhance some features present in the image or to increase jumps of grey levels before segmentation of the image. In this paper, we establish the convergence of an approximation method which allows some control over the behavior of the approximation. More precisely, we study the convergence of an approximation constructed from a data set while using scale transformations on the values before and after classical approximation. In addition, the construction of scale transformations is also given. The algorithm is presented with some numerical examples.
Low-buoyancy thermochemical plumes resolve controversy of classical mantle plume concept
Dannberg, Juliane; Sobolev, Stephan V.
2015-01-01
The Earth's biggest magmatic events are believed to originate from massive melting when hot mantle plumes rising from the lowermost mantle reach the base of the lithosphere. Classical models predict large plume heads that cause kilometre-scale surface uplift, and narrow (100 km radius) plume tails that remain in the mantle after the plume head spreads below the lithosphere. However, in many cases, such uplifts and narrow plume tails are not observed. Here, using numerical models, we show that the issue can be resolved if major mantle plumes contain up to 15–20% of recycled oceanic crust in the form of dense eclogite, which drastically decreases their buoyancy and makes it depth dependent. We demonstrate that, despite their low buoyancy, large enough thermochemical plumes can rise through the whole mantle causing only negligible surface uplift. Their tails are bulky (>200 km radius) and remain in the upper mantle for 100 million years. PMID:25907970
Hybridizable discontinuous Galerkin method for the 2-D frequency-domain elastic wave equations
NASA Astrophysics Data System (ADS)
Bonnasse-Gahot, Marie; Calandra, Henri; Diaz, Julien; Lanteri, Stéphane
2018-04-01
Discontinuous Galerkin (DG) methods are nowadays actively studied and increasingly exploited for the simulation of large-scale time-domain (i.e. unsteady) seismic wave propagation problems. Although theoretically applicable to frequency-domain problems as well, their use in this context has been hampered by the potentially large number of coupled unknowns they incur, especially in the 3-D case, as compared to classical continuous finite element methods. In this paper, we address this issue in the framework of the so-called hybridizable discontinuous Galerkin (HDG) formulations. As a first step, we study an HDG method for the resolution of the frequency-domain elastic wave equations in the 2-D case. We describe the weak formulation of the method and provide some implementation details. The proposed HDG method is assessed numerically including a comparison with a classical upwind flux-based DG method, showing better overall computational efficiency as a result of the drastic reduction of the number of globally coupled unknowns in the resulting discrete HDG system.
Team building: electronic management-clinical translational research (eM-CTR) systems.
Cecchetti, Alfred A; Parmanto, Bambang; Vecchio, Marcella L; Ahmad, Sjarif; Buch, Shama; Zgheib, Nathalie K; Groark, Stephen J; Vemuganti, Anupama; Romkes, Marjorie; Sciurba, Frank; Donahoe, Michael P; Branch, Robert A
2009-12-01
Classical drug exposure-response studies in clinical pharmacology represent the quintessential prototype for bench-to-bedside clinical translational research. A fundamental premise of this approach is for a multidisciplinary team of researchers to design and execute complex, in-depth mechanistic studies conducted in relatively small groups of subjects. The infrastructure support for this genre of clinical research is not well handled by scaling down the infrastructure used for large Phase III clinical trials. We describe a novel, integrated strategy, whose focus is to support and manage a study using an Information Hub, Communication Hub, and Data Hub design. This design is illustrated by an application to a series of varied projects sponsored by Special Clinical Centers of Research in chronic obstructive pulmonary disease at the University of Pittsburgh. In contrast to classical informatics support, it is readily scalable to large studies. Our experience suggests the cultural consequences of research group self-empowerment are not only economically efficient but transformative to the research process.
The evolving Planck mass in classically scale-invariant theories
NASA Astrophysics Data System (ADS)
Kannike, K.; Raidal, M.; Spethmann, C.; Veermäe, H.
2017-04-01
We consider classically scale-invariant theories with non-minimally coupled scalar fields, where the Planck mass and the hierarchy of physical scales are dynamically generated. The classical theories possess a fixed point, where scale invariance is spontaneously broken. In these theories, however, the Planck mass becomes unstable in the presence of explicit sources of scale invariance breaking, such as non-relativistic matter and cosmological constant terms. We quantify the constraints on such classical models from Big Bang Nucleosynthesis that lead to an upper bound on the non-minimal coupling and require trans-Planckian field values. We show that quantum corrections to the scalar potential can stabilise the fixed point close to the minimum of the Coleman-Weinberg potential. The time-averaged motion of the evolving fixed point is strongly suppressed, thus the limits on the evolving gravitational constant from Big Bang Nucleosynthesis and other measurements do not presently constrain this class of theories. Field oscillations around the fixed point, if not damped, contribute to the dark matter density of the Universe.
Kernel methods for large-scale genomic data analysis
Xing, Eric P.; Schaid, Daniel J.
2015-01-01
Machine learning, particularly kernel methods, has been demonstrated as a promising new tool to tackle the challenges imposed by today's explosive data growth in genomics. Kernel methods provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, helping to reveal the complexity in the relationship between the genetic markers and the outcome of interest. In this review, we highlight the potential key role they will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritizing, prediction and data fusion. PMID:25053743
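A concrete instance of the kernel approach to genotype-phenotype learning is kernel ridge regression with a genetic-similarity kernel built from the SNP matrix. The sketch below uses a normalized linear kernel (akin to a genomic relationship matrix) and toy data; real analyses would add cross-validation, covariates, and nonlinear kernels.

```python
import numpy as np

def linear_kernel(G):
    """Genetic-similarity kernel from a (samples x SNPs) genotype matrix
    coded 0/1/2: a normalized linear kernel, akin to a GRM."""
    Gc = G - G.mean(axis=0)
    return Gc @ Gc.T / G.shape[1]

def kernel_ridge_fit_predict(K, y, K_test, lam=1.0):
    """Kernel ridge regression: alpha = (K + lam I)^{-1} y,
    predictions = K_test @ alpha."""
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return K_test @ alpha

rng = np.random.default_rng(8)
G = rng.integers(0, 3, (200, 500)).astype(float)   # toy SNP genotypes
beta = rng.normal(0, 0.1, 500)
y = G @ beta + rng.normal(0, 1.0, 200)             # toy phenotype
K = linear_kernel(G)
yhat = kernel_ridge_fit_predict(K, y, K)           # in-sample fit for brevity
print(np.corrcoef(y, yhat)[0, 1])
```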
Diffuse pollution of soil and water: Long term trends at large scales?
NASA Astrophysics Data System (ADS)
Grathwohl, P.
2012-04-01
Industrialization and urbanization, which have increased pressure on the environment and degraded soil and water quality for more than a century, are still ongoing. The number of potential environmental contaminants detected in surface water and groundwater is continuously increasing, from classical industrial and agricultural chemicals to flame retardants, pharmaceuticals, and personal care products. While point sources of pollution can be managed in principle, diffuse pollution is only reversible at very long time scales, if at all. Compounds which were phased out many decades ago, such as PCBs or DDT, are still abundant in soils, sediments and biota. How diffuse pollution is processed at large scales in space (e.g. catchments) and time (centuries) is unknown. The relevance at the field scale of processes well investigated at the laboratory scale (e.g. sorption/desorption and (bio)degradation kinetics) is not clear. Transport of compounds is often coupled to the water cycle, and in order to assess trends in diffuse pollution, detailed knowledge about the hydrology and the solute fluxes at the catchment scale is required (e.g. input/output fluxes, transformation rates at the field scale). This is also a prerequisite for assessing management options for reversal of adverse trends.
Realization of a Tunable Dissipation Scale in a Turbulent Cascade using a Quantum Gas
NASA Astrophysics Data System (ADS)
Navon, Nir; Eigen, Christoph; Zhang, Jinyi; Lopes, Raphael; Smith, Robert; Hadzibabic, Zoran
2017-04-01
Many turbulent flows form so-called cascades, where excitations injected at large length scales are transported to gradually smaller scales until they reach a dissipation scale. We initiate a turbulent cascade in a dilute Bose fluid by pumping energy at the container scale of an optical box trap using an oscillating magnetic force. In contrast to classical fluids, where the dissipation scale is set by the viscosity of the fluid, the turbulent cascade of our quantum gas finishes when the particles' kinetic energy exceeds the laser-trap depth. This mechanism thus allows us to effectively tune the dissipation scale where particles (and energy) are lost, and to measure the particle flux in the cascade at the dissipation scale. We observe a unit power-law decay of the particle-dissipation rate with trap depth, which confirms the surprising prediction that in a wave-turbulent direct energy cascade, the particle flux vanishes in the ideal limit where the dissipation length scale tends to zero.
Quantum Vertex Model for Reversible Classical Computing
NASA Astrophysics Data System (ADS)
Chamon, Claudio; Mucciolo, Eduardo; Ruckenstein, Andrei; Yang, Zhicheng
We present a planar vertex model that encodes the result of a universal reversible classical computation in its ground state. The approach involves Boolean variables (spins) placed on links of a two-dimensional lattice, with vertices representing logic gates. Large short-ranged interactions between at most two spins implement the operation of each gate. The lattice is anisotropic, with one direction corresponding to computational time and with transverse boundaries storing the computation's input and output. The model displays no finite-temperature phase transitions, including no glass transitions, independent of circuit. The computational complexity is encoded in the scaling of the relaxation rate into the ground state with the system size. We use thermal annealing and a novel and more efficient heuristic, "annealing with learning", to study various computational problems. To explore faster relaxation routes, we construct an explicit mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating a novel approach to reversible classical computation based on quantum annealing.
NASA Astrophysics Data System (ADS)
Rastorguev, A. S.; Utkin, N. D.; Chumak, O. V.
2017-08-01
Agekyan's λ-factor, which allows for the effect of multiplicity of stellar encounters with large impact parameters, has been used for the first time to directly calculate the diffusion coefficients in the phase space of a stellar system. Simple estimates show that the cumulative effect, i.e., the total contribution of distant encounters to the change in the velocity of a test star, given the multiplicity of stellar encounters, is finite, and the logarithmic divergence inherent in the classical description of diffusion is removed, as was shown previously by Kandrup using a different, more complex approach. In this case, the expressions for the diffusion coefficients, as in the classical description, contain the logarithm of the ratio of two independent quantities: the mean interparticle distance and the impact parameter of a close encounter. However, the physical meaning of this logarithmic factor changes radically: it reflects not the divergence but the presence of two characteristic length scales inherent in the stellar medium.
φ^4 kinks: Statistical mechanics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, S.
1995-12-31
Some recent investigations of the thermal equilibrium properties of kinks in a 1+1-dimensional classical φ^4 field theory are reviewed. The distribution function, kink density, correlation function, and certain thermodynamic quantities were studied both theoretically and via large-scale simulations. A simple double-Gaussian variational approach within the transfer operator formalism was shown to give good results in the intermediate temperature range where the dilute gas theory is known to fail.
Large Scale Single Nucleotide Polymorphism Study of PD Susceptibility
2006-03-01
familial PD, the results of intensive investigations of polymorphisms in dozens of genes related to sporadic, late onset, typical PD have not shown...association between classical, sporadic PD and 2386 SNPs in 23 genes implicated in the pathogenesis of PD; (2) construct haplotypes based on the SNP...derived from this study may be applied in other complex disorders for the identification of susceptibility genes , as well as in genome-wide SNP
Using selection bias to explain the observed structure of Internet diffusions
Golub, Benjamin; Jackson, Matthew O.
2010-01-01
Recently, large datasets stored on the Internet have enabled the analysis of processes, such as large-scale diffusions of information, at new levels of detail. In a recent study, Liben-Nowell and Kleinberg [(2008) Proc Natl Acad Sci USA 105:4633–4638] observed that the flow of information on the Internet exhibits surprising patterns whereby a chain letter reaches its typical recipient through long paths of hundreds of intermediaries. We show that a basic Galton–Watson epidemic model combined with the selection bias of observing only large diffusions suffices to explain these patterns. Thus, selection biases of which data we observe can radically change the estimation of classical diffusion processes. PMID:20534439
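The paper's point is easy to reproduce numerically: simulate near-critical Galton-Watson trees, keep only the rare large ones, and the survivors are far deeper than typical trees. The simulation below is a minimal illustration with invented parameters, not the authors' fitted model.

```python
import numpy as np

def gw_tree_depth_size(mean_offspring, rng, cap=100000):
    """Simulate one Galton-Watson tree with Poisson offspring;
    return (depth, total size), truncating runaway trees at `cap`."""
    size, depth, generation = 1, 0, 1
    while generation and size < cap:
        generation = rng.poisson(mean_offspring * generation)
        size += generation
        depth += 1
    return depth, size

rng = np.random.default_rng(9)
samples = [gw_tree_depth_size(0.98, rng) for _ in range(20000)]
all_depths = np.mean([d for d, s in samples])
big_depths = np.mean([d for d, s in samples if s > 500])  # selection bias
print(all_depths, big_depths)  # trees conditioned on being large are far deeper
```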
Stochastic inflation lattice simulations - Ultra-large scale structure of the universe
NASA Technical Reports Server (NTRS)
Salopek, D. S.
1991-01-01
Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients, a^{-1}∇, small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a toy model with a scalar field interacting with an exponential potential, where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits.
Fast large-scale object retrieval with binary quantization
NASA Astrophysics Data System (ADS)
Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi
2015-11-01
The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Where state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates to search locally in a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which allows itself to adapt to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and box ID where the SIFT feature is located inside, is compact and can be loaded into the main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.
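The indexing idea can be illustrated end to end: threshold each descriptor component against a per-dimension cut to get a bit-vector, then key an inverted file on that code so lookups reduce to Hamming comparisons. The quantizer below (median thresholds, 32-D stand-in descriptors, toy IDs) is an invented simplification of the paper's binary quantization, not its actual scheme.

```python
import numpy as np
from collections import defaultdict

def binarize(desc, thresholds):
    """Quantize a descriptor into a bit-vector: 1 where the component
    exceeds its (here, median) threshold. Packed into a Python int so
    Hamming distance is a popcount of an XOR."""
    bits = 0
    for v, t in zip(desc, thresholds):
        bits = (bits << 1) | int(v > t)
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

# Build a toy inverted file: bit-vector code -> list of (image, box) IDs.
rng = np.random.default_rng(10)
db = rng.random((1000, 32))                 # stand-in 32-D descriptors
thr = np.median(db, axis=0)
index = defaultdict(list)
for i, d in enumerate(db):
    index[binarize(d, thr)].append(("img%d" % (i // 10), i % 10))

query = db[123] + rng.normal(0, 0.01, 32)    # noisy copy of a stored box
code = binarize(query, thr)
best = min(index, key=lambda c: hamming(c, code))  # nearest stored code
print(index[best][:3])
```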
Atomic-Scale Lightning Rod Effect in Plasmonic Picocavities: A Classical View to a Quantum Effect.
Urbieta, Mattin; Barbry, Marc; Zhang, Yao; Koval, Peter; Sánchez-Portal, Daniel; Zabala, Nerea; Aizpurua, Javier
2018-01-23
Plasmonic gaps are known to produce nanoscale localization and enhancement of optical fields, providing small effective mode volumes of about a few hundred nm^3. Atomistic quantum calculations based on time-dependent density functional theory reveal the effect of subnanometric localization of electromagnetic fields due to the presence of atomic-scale features at the interfaces of plasmonic gaps. Using a classical model, we explain this as a nonresonant lightning rod effect at the atomic scale that produces an extra enhancement over that of the plasmonic background. The near-field distribution of atomic-scale hot spots around atomic features is robust against dynamical screening and spill-out effects and follows the potential landscape determined by the electron density around the atomic sites. A detailed comparison of the field distribution around atomic hot spots from full quantum atomistic calculations and from the local classical approach considering the geometrical profile of the atoms' electronic density validates the use of a classical framework to determine the effective mode volume in these extreme subnanometric optical cavities. This finding is of practical importance for the community of surface-enhanced molecular spectroscopy and quantum nanophotonics, as it provides an adequate description of the local electromagnetic fields around atomic-scale features with use of simplified classical methods.
Influence of coherent structures on the evolution of an axisymmetric turbulent jet
NASA Astrophysics Data System (ADS)
Breda, Massimiliano; Buxton, Oliver R. H.
2018-03-01
The role of initial conditions in affecting the evolution toward self-similarity of an axisymmetric turbulent jet is examined. The jet's near-field coherence was manipulated by non-circular exit geometries of identical open area, D_e^2, including a square and a fractal exit, for comparison with a classical round orifice jet. Hot-wire anemometry and 2D-planar particle image velocimetry experiments were performed between the exit and a location 26 D_e downstream, where the Reynolds stress profiles are self-similar. This study shows that a fractal geometry significantly changes the near-field structure of the jet, breaking up the large-scale coherent structures, thereby affecting the entrainment rate of the background fluid into the jet stream. It is found that many of the jet's turbulent characteristics scale with the number of eddy turnover times rather than simply the streamwise coordinate, with the entrainment rate (amongst others) found to be comparable across the different jets after approximately 3-4 eddies have been overturned. The study is concluded by investigating the jet's evolution toward a self-similar state. No differences are found for the large-scale spreading rate of the jets in the weakly self-similar region, defined as the region for which some, but not all, of the terms of the mean turbulent kinetic energy equation are self-similar. However, the dissipation rate of the turbulent kinetic energy was found to vary more gradually in x than predicted according to the classical equilibrium theories of Kolmogorov. Instead, the dissipation was found to vary in a non-equilibrium fashion for all three jets tested.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witzke, B.J.
1993-03-01
Four large-scale (2-8 Ma) T-R sedimentary sequences of M. Ord. age (late Chaz.-Sherm.) were delimited by Witzke & Kolata (1980) in the Iowa area, each bounded by local to regional unconformity/disconformity surfaces. These encompass both siliciclastic and carbonate intervals, in ascending order: (1) St. Peter-Glenwood fms., (2) Platteville Fm., (3) Decorah Fm., (4) Dunleith/upper Decorah fms. Finer-scale resolution of depth-related depositional features has led to regional recognition of smaller-scale shallowing-upward cyclicity contained within each large-scale sequence. Such smaller-scale cyclicity encompasses stratigraphic intervals of 1-10 m thickness, with estimated durations of 0.5-1.5 Ma. The St. Peter Sandst. has long been regarded as a classic transgressive sheet sand. However, four discrete shallowing-upward packages characterize the St. Peter-Glenwood interval regionally (IA, MN, NB, KS), including western facies displaying coarsening-upward sandstone packages with condensed conodont-rich brown shale and phosphatic sediments in their lower part (local oolitic ironstone), commonly above pyritic hardgrounds. Regional continuity of small-scale cyclic patterns in M. Ord. strata of the Iowa area may suggest eustatic controls; this can be tested through inter-regional comparisons.
Dudley, Peter N; Bonazza, Riccardo; Porter, Warren P
2013-07-01
Animal momentum and heat transfer analysis has historically used direct animal measurements or approximations to calculate drag and heat transfer coefficients. Research can now use modern 3D rendering and computational fluid dynamics software to simulate animal-fluid interactions. Key questions are the level of agreement between simulations and experiments and how superior they are to classical approximations. In this paper we compared experimental and simulated heat transfer and drag calculations on a scale model solid aluminum African elephant casting. We found good agreement between experimental and simulated data and large differences from classical approximations. We used the simulation results to calculate coefficients for heat transfer and drag of the elephant geometry. Copyright © 2013 Wiley Periodicals, Inc.
Electron-phonon interaction within classical molecular dynamics
Tamm, A.; Samolyuk, G.; Correa, A. A.; ...
2016-07-14
Here, we present a model for nonadiabatic classical molecular dynamics simulations that captures with high accuracy the wave-vector q dependence of the phonon lifetimes, in agreement with quantum mechanics calculations. It is based on a local view of the e-ph interaction where individual atom dynamics couples to electrons via a damping term that is obtained as the low-velocity limit of the stopping power of a moving ion in a host. The model is parameter free, as its components are derived from ab initio-type calculations, is readily extended to the case of alloys, and is adequate for large-scale molecular dynamics computer simulations. We also show how this model removes some oversimplifications of the traditional ionic damped dynamics commonly used to describe situations beyond the Born-Oppenheimer approximation.
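To make the damping term above concrete, here is a minimal sketch, assuming a Lennard-Jones pair force as a generic stand-in for the paper's ab initio-derived interactions and a scalar damping coefficient beta (in the paper the coupling follows from electronic stopping power rather than being chosen freely):

```python
import numpy as np

def pair_force(x):
    """Forces on two atoms from a Lennard-Jones pair potential (a generic
    stand-in; the published model derives its terms from ab initio data)."""
    r_vec = x[1] - x[0]
    r = np.linalg.norm(r_vec)
    f_mag = 24.0 * (2.0 * r**-13 - r**-7)   # -dV/dr, reduced units
    f = f_mag * r_vec / r
    return np.array([-f, f])

def damped_md(x, v, beta, dt, n_steps, mass=1.0):
    """Velocity Verlet with an added electron-phonon damping force
    F_i = -beta * v_i, the low-velocity limit of electronic stopping."""
    traj = [x.copy()]
    f = pair_force(x) - beta * v
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * f / mass
        x = x + dt * v_half
        f = pair_force(x) - beta * v_half   # damping evaluated at half step
        v = v_half + 0.5 * dt * f / mass
        traj.append(x.copy())
    return np.array(traj)

# Two atoms with opposing velocities: the damping drains vibrational energy
# into the (implicit) electron bath, shortening the phonon lifetime.
x0 = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
v0 = np.array([[0.05, 0.0, 0.0], [-0.05, 0.0, 0.0]])
trajectory = damped_md(x0, v0, beta=0.1, dt=0.005, n_steps=2000)
```

The design point is that the electron bath enters only through the -beta*v force, so the scheme drops into any standard classical MD integrator.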
Scalable digital hardware for a trapped ion quantum computer
NASA Astrophysics Data System (ADS)
Mount, Emily; Gaultney, Daniel; Vrijsen, Geert; Adams, Michael; Baek, So-Young; Hudek, Kai; Isabella, Louis; Crain, Stephen; van Rynbach, Andre; Maunz, Peter; Kim, Jungsang
2016-12-01
Many of the challenges of scaling quantum computer hardware lie at the interface between the qubits and the classical control signals used to manipulate them. Modular ion trap quantum computer architectures address scalability by constructing individual quantum processors interconnected via a network of quantum communication channels. Successful operation of such quantum hardware requires a fully programmable classical control system capable of frequency stabilizing the continuous wave lasers necessary for loading, cooling, initialization, and detection of the ion qubits, stabilizing the optical frequency combs used to drive logic gate operations on the ion qubits, providing a large number of analog voltage sources to drive the trap electrodes, and maintaining phase coherence among all the controllers that manipulate the qubits. In this work, we describe scalable solutions to these hardware development challenges.
Semi-automatic assessment of skin capillary density: proof of principle and validation.
Gronenschild, E H B M; Muris, D M J; Schram, M T; Karaca, U; Stehouwer, C D A; Houben, A J H M
2013-11-01
Skin capillary density and recruitment have been proven to be relevant measures of microvascular function. Unfortunately, the assessment of skin capillary density from movie files is very time-consuming, since this is done manually. This impedes the use of this technique in large-scale studies. We aimed to develop a (semi-)automated assessment of skin capillary density. CapiAna (Capillary Analysis) is a newly developed semi-automatic image analysis application. The technique involves four steps: 1) movement correction, 2) selection of the frame range and positioning of the region of interest (ROI), 3) automatic detection of capillaries, and 4) manual correction of detected capillaries. To gain insight into the performance of the technique, skin capillary density was measured in twenty participants (ten women; mean age 56.2 [42-72] years). To investigate the agreement between CapiAna and the classic manual counting procedure, we used weighted Deming regression and Bland-Altman analyses. In addition, intra- and inter-observer coefficients of variation (CVs) and differences in analysis time were assessed. We found good agreement between CapiAna and the classic manual method, with a Pearson's correlation coefficient (r) of 0.95 (P<0.001) and a Deming regression coefficient of 1.01 (95%CI: 0.91; 1.10). In addition, we found no significant differences between the two methods, with an intercept of the Deming regression of 1.75 (-6.04; 9.54), while the Bland-Altman analysis showed a mean difference (bias) of 2.0 (-13.5; 18.4) capillaries/mm². The intra- and inter-observer CVs of CapiAna were 2.5% and 5.6% respectively, while for the classic manual counting procedure these were 3.2% and 7.2%, respectively. Finally, the analysis time for CapiAna ranged between 25 and 35 min, versus 80 and 95 min for the manual counting procedure. We have developed a semi-automatic image analysis application (CapiAna) for the assessment of skin capillary density, which agrees well with the classic manual counting procedure, is time-saving, and has better reproducibility than the classic manual counting procedure. As a result, the use of skin capillaroscopy is feasible in large-scale studies, which importantly extends the possibilities to perform microcirculation research in humans. © 2013.
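As an illustration of the agreement statistics quoted above, the following sketch computes an (unweighted) Deming regression and a Bland-Altman bias with 95% limits of agreement; the paired counts are hypothetical, and the paper's weighted variant is omitted:

```python
import numpy as np

def deming(x, y, lam=1.0):
    """Deming regression slope/intercept for error variance ratio `lam`
    (lam=1 gives orthogonal regression); the paper's weighting is omitted."""
    mx, my = x.mean(), y.mean()
    sxx = np.mean((x - mx) ** 2)
    syy = np.mean((y - my) ** 2)
    sxy = np.mean((x - mx) * (y - my))
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2
             + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

def bland_altman(x, y):
    """Mean difference (bias) and 95% limits of agreement."""
    d = y - x
    bias = d.mean()
    loa = 1.96 * d.std(ddof=1)
    return bias, bias - loa, bias + loa

# Hypothetical paired densities (capillaries/mm^2): manual vs CapiAna counts.
manual  = np.array([55.0, 62.0, 48.0, 71.0, 66.0, 59.0, 80.0, 52.0])
capiana = np.array([57.0, 60.0, 50.0, 73.0, 68.0, 57.0, 82.0, 54.0])
slope, intercept = deming(manual, capiana)
bias, lo, hi = bland_altman(manual, capiana)
print(f"Deming: y = {slope:.2f}x + {intercept:.2f}; bias {bias:.1f} ({lo:.1f}; {hi:.1f})")
```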
Quantum no-scale regimes in string theory
NASA Astrophysics Data System (ADS)
Coudarchet, Thibaut; Fleming, Claude; Partouche, Hervé
2018-05-01
We show that in generic no-scale models in string theory, the flat, expanding cosmological evolutions found at the quantum level can be attracted to a "quantum no-scale regime", where the no-scale structure is restored asymptotically. In this regime, the quantum effective potential is dominated by the classical kinetic energies of the no-scale modulus and dilaton. We find that this natural preservation of the classical no-scale structure at the quantum level occurs when the initial conditions of the evolutions sit in a subcritical region of their space. On the contrary, supercritical initial conditions yield solutions that have no analogue at the classical level. The associated intrinsically quantum universes are sentenced to collapse and their histories last finite cosmic times. Our analysis is done at 1-loop, in perturbative heterotic string compactified on tori, with spontaneous supersymmetry breaking implemented by a stringy version of the Scherk-Schwarz mechanism.
NASA Astrophysics Data System (ADS)
Rewieński, M.; Lamecki, A.; Mrozowski, M.
2013-09-01
This paper proposes a technique, based on the Inexact Shift-Invert Lanczos (ISIL) method with Inexact Jacobi Orthogonal Component Correction (IJOCC) refinement, and a preconditioned conjugate-gradient (PCG) linear solver with a multilevel preconditioner, for finding several eigenvalues of generalized symmetric eigenproblems. Several eigenvalues are found by constructing (with the ISIL process) an extended projection basis. The presented results of numerical experiments confirm that the technique can be effectively applied to challenging, large-scale problems characterized by very dense spectra, such as resonant cavities with spatial dimensions which are large with respect to the wavelengths of the resonating electromagnetic fields. It is also shown that the proposed scheme based on inexact linear solves delivers superior performance compared to methods which rely on exact linear solves, indicating the tremendous potential of the 'inexact solve' concept. Finally, the scheme which generates an extended projection basis is found to provide a cost-efficient alternative to classical deflation schemes when several eigenvalues are computed.
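For orientation, the sketch below shows the standard shift-invert approach to a generalized symmetric eigenproblem using SciPy; note that SciPy factorizes the shifted matrix exactly, whereas the paper's point is to replace that inner solve with inexact preconditioned CG iterations. The 1D Laplacian is only a toy stand-in for the cavity FEM matrices:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Toy generalized symmetric eigenproblem K x = lambda M x (a 1D Laplacian
# stands in for the cavity matrices; the paper's problems are far larger).
n = 2000
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
M = sp.identity(n, format="csc")

# Shift-invert targets eigenvalues nearest the shift sigma, which is how
# interior modes of a dense spectrum are reached; SciPy factorizes
# (K - sigma*M) exactly, the step the paper replaces with inexact solves.
sigma = 0.5
vals, vecs = eigsh(K, k=6, M=M, sigma=sigma, which="LM")
print(np.sort(vals))
```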
2010-01-01
Objectives. To evaluate, by age, the performance of 2 disability measures based on needing help: one using 5 classic activities of daily living (ADL) and another using an expanded set of 14 activities including instrumental activities of daily living (IADL), walking, getting outside, and ADL (IADL/ADL). Methods. Guttman and item response theory (IRT) scaling methods are used with a large (N = 25,470) nationally representative household survey of individuals aged 18 years and older. Results. Guttman scalability of the ADL items increases steadily with age, reaching a high level at ages 75 years and older. That is reflected in an IRT model by age-related differential item functioning (DIF) resulting in age-biased measurement of ADL. Guttman scalability of the IADL/ADL items also increases with age but is lower than the ADL. Although age-related DIF also occurs with IADL/ADL items, DIF is lower in magnitude and balances out without causing age bias. Discussion. An IADL/ADL scale measuring need for help is hierarchical, unidimensional, and unbiased by age. It has greater content validity for measuring need for help in the community and shows greater sensitivity by age than the classic ADL measure. As demand for community services is increasing among adults of all ages, an expanded IADL/ADL measure is more useful than ADL. PMID:20100786
Experimental two-dimensional quantum walk on a photonic chip
Lin, Xiao-Feng; Feng, Zhen; Chen, Jing-Yuan; Gao, Jun; Sun, Ke; Wang, Chao-Yue; Lai, Peng-Cheng; Xu, Xiao-Yun; Wang, Yao; Qiao, Lu-Feng; Yang, Ai-Lin
2018-01-01
Quantum walks, by virtue of the coherent superposition and quantum interference, have exponential superiority over their classical counterpart in applications of quantum searching and quantum simulation. The quantum-enhanced power is highly related to the state space of quantum walks, which can be expanded by enlarging the photon number and/or the dimensions of the evolution network, but the former is considerably challenging due to probabilistic generation of single photons and multiplicative loss. We demonstrate a two-dimensional continuous-time quantum walk by using the external geometry of photonic waveguide arrays, rather than the internal degrees of freedom of photons. Using femtosecond laser direct writing, we construct a large-scale three-dimensional structure that forms a two-dimensional lattice with up to 49 × 49 nodes on a photonic chip. We demonstrate spatial two-dimensional quantum walks using heralded single photons and single-photon-level imaging. We analyze the quantum transport properties via observing the ballistic evolution pattern and the variance profile, which agree well with simulation results. We further reveal the transient nature that is the unique feature for quantum walks of beyond one dimension. An architecture that allows a quantum walk to freely evolve in all directions and at a large scale, combined with defect and disorder control, may bring up powerful and versatile quantum walk machines for classically intractable problems. PMID:29756040
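A minimal sketch of the underlying mathematics: a continuous-time quantum walk on an n × n lattice evolves an initial single-site state under U = exp(-iHt), with H the lattice adjacency matrix (uniform couplings assumed here; on the chip the couplings are set by waveguide spacing). The spatial variance grows ballistically, roughly as t², in contrast with the linear growth of a classical random walk:

```python
import numpy as np
from scipy.linalg import expm

def ctqw_2d(n, t):
    """Continuous-time quantum walk on an n x n lattice: U = exp(-i H t)
    with H the adjacency matrix (uniform nearest-neighbor couplings)."""
    dim = n * n
    H = np.zeros((dim, dim))
    idx = lambda r, c: r * n + c
    for r in range(n):
        for c in range(n):
            if r + 1 < n:
                H[idx(r, c), idx(r + 1, c)] = H[idx(r + 1, c), idx(r, c)] = 1.0
            if c + 1 < n:
                H[idx(r, c), idx(r, c + 1)] = H[idx(r, c + 1), idx(r, c)] = 1.0
    psi0 = np.zeros(dim)
    psi0[idx(n // 2, n // 2)] = 1.0          # photon launched at the center
    psi = expm(-1j * H * t) @ psi0
    return np.abs(psi.reshape(n, n)) ** 2

# Ballistic spreading: variance grows ~t^2, unlike a classical random walk.
for t in (0.5, 1.0, 2.0):
    p = ctqw_2d(15, t)
    d = np.indices(p.shape) - 7              # displacement from launch site
    print(t, (p * (d[0] ** 2 + d[1] ** 2)).sum())
```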
Large-scale Advanced Prop-fan (LAP) high speed wind tunnel test report
NASA Technical Reports Server (NTRS)
Campbell, William A.; Wainauski, Harold S.; Arseneaux, Peter J.
1988-01-01
High Speed Wind Tunnel testing of the SR-7L Large Scale Advanced Prop-Fan (LAP) is reported. The LAP is a 2.74 meter (9.0 ft) diameter, 8-bladed tractor type rated for 4475 KW (6000 SHP) at 1698 rpm. It was designed and built by Hamilton Standard under contract to the NASA Lewis Research Center. The LAP employs thin swept blades to provide efficient propulsion at flight speeds up to Mach 0.85. Testing was conducted in the ONERA S1-MA Atmospheric Wind Tunnel in Modane, France. The test objectives were to confirm that the LAP is free from high speed classical flutter, determine the structural and aerodynamic response to angular inflow, measure blade surface pressures (static and dynamic), and evaluate the aerodynamic performance at various blade angles, rotational speeds and Mach numbers. The measured structural and aerodynamic performance of the LAP correlated well with analytical predictions, thereby providing confidence in the computer prediction codes used for the design. There were no signs of classical flutter throughout all phases of the test up to and including the 0.84 maximum Mach number achieved. Steady and unsteady blade surface pressures were successfully measured for a wide range of Mach numbers, inflow angles, rotational speeds and blade angles. No barriers were discovered that would prevent proceeding with the PTA (Prop-Fan Test Assessment) Flight Test Program scheduled for early 1987.
Experimental two-dimensional quantum walk on a photonic chip.
Tang, Hao; Lin, Xiao-Feng; Feng, Zhen; Chen, Jing-Yuan; Gao, Jun; Sun, Ke; Wang, Chao-Yue; Lai, Peng-Cheng; Xu, Xiao-Yun; Wang, Yao; Qiao, Lu-Feng; Yang, Ai-Lin; Jin, Xian-Min
2018-05-01
Quantum walks, by virtue of the coherent superposition and quantum interference, have exponential superiority over their classical counterpart in applications of quantum searching and quantum simulation. The quantum-enhanced power is highly related to the state space of quantum walks, which can be expanded by enlarging the photon number and/or the dimensions of the evolution network, but the former is considerably challenging due to probabilistic generation of single photons and multiplicative loss. We demonstrate a two-dimensional continuous-time quantum walk by using the external geometry of photonic waveguide arrays, rather than the internal degrees of freedom of photons. Using femtosecond laser direct writing, we construct a large-scale three-dimensional structure that forms a two-dimensional lattice with up to 49 × 49 nodes on a photonic chip. We demonstrate spatial two-dimensional quantum walks using heralded single photons and single-photon-level imaging. We analyze the quantum transport properties via observing the ballistic evolution pattern and the variance profile, which agree well with simulation results. We further reveal the transient nature that is the unique feature for quantum walks of beyond one dimension. An architecture that allows a quantum walk to freely evolve in all directions and at a large scale, combined with defect and disorder control, may bring up powerful and versatile quantum walk machines for classically intractable problems.
BSIFT: toward data-independent codebook for large scale image search.
Zhou, Wengang; Li, Houqiang; Hong, Richang; Lu, Yijuan; Tian, Qi
2015-03-01
Bag-of-Words (BoWs) model based on Scale Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model, which generates visual words from the high-dimensional SIFT features, so as to adapt to the inverted file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid the above problems, in this paper, a novel feature quantization scheme is proposed to efficiently quantize each SIFT descriptor to a descriptive and discriminative bit-vector, which is called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits out from BSIFT as the code word, the generated BSIFT naturally lends itself to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied in image search in some resource-limited scenarios. We evaluate the proposed algorithm for large scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
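A rough sketch of the indexing idea, assuming a simple median-threshold binarization (the published BSIFT thresholding rule may differ in detail): each descriptor becomes a 128-bit vector, its first 32 bits serve as the code word addressing the inverted file, and the remaining bits are stored for verification:

```python
import numpy as np

def binarize_sift(desc):
    """Turn a 128-D SIFT descriptor into a 128-bit vector by thresholding
    each element against the descriptor's median (a schematic variant;
    the published BSIFT rule may differ in detail)."""
    return (desc > np.median(desc)).astype(np.uint8)

def code_word(bits, n=32):
    """Pack the first n bits into an integer index for the inverted file."""
    return int("".join(map(str, bits[:n])), 2)

rng = np.random.default_rng(0)
desc = rng.integers(0, 256, size=128).astype(float)  # stand-in SIFT descriptor
bits = binarize_sift(desc)
print(f"inverted-file entry: {code_word(bits):#010x}; payload: remaining 96 bits")
```

No codebook training is needed because the quantizer depends only on the descriptor itself, which is the data-independence the title refers to.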
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haba, Naoyuki; Ishida, Hiroyuki; Okada, Nobuchika
We suggest the so-called bosonic seesaw mechanism in the context of a classically conformal U(1) B-L extension of the Standard Model with two Higgs doublet fields. The U(1) B-L symmetry is radiatively broken via the Coleman–Weinberg mechanism, which also generates the mass terms for the two Higgs doublets through quartic Higgs couplings. Their masses are all positive but, nevertheless, the electroweak symmetry breaking is realized by the bosonic seesaw mechanism. Analyzing the renormalization group evolutions for all model couplings, we find that a large hierarchy among the quartic Higgs couplings, which is crucial for the bosonic seesaw mechanism to work, is dramatically reduced toward high energies. Therefore, the bosonic seesaw is naturally realized with only a mild hierarchy, if some fundamental theory, which provides the origin of the classically conformal invariance, completes our model at some high energy, for example, the Planck scale. In conclusion, we identify the regions of model parameters which satisfy the perturbativity of the running couplings and the electroweak vacuum stability as well as the naturalness of the electroweak scale.
Interfacing spin qubits in quantum dots and donors—hot, dense, and coherent
NASA Astrophysics Data System (ADS)
Vandersypen, L. M. K.; Bluhm, H.; Clarke, J. S.; Dzurak, A. S.; Ishihara, R.; Morello, A.; Reilly, D. J.; Schreiber, L. R.; Veldhorst, M.
2017-09-01
Semiconductor spins are one of the few qubit realizations that remain a serious candidate for the implementation of large-scale quantum circuits. Excellent scalability is often argued for spin qubits defined by lithography and controlled via electrical signals, based on the success of conventional semiconductor integrated circuits. However, the wiring and interconnect requirements for quantum circuits are completely different from those for classical circuits, as individual direct current, pulsed and in some cases microwave control signals need to be routed from external sources to every qubit. This is further complicated by the requirement that these spin qubits currently operate at temperatures below 100 mK. Here, we review several strategies that are considered to address this crucial challenge in scaling quantum circuits based on electron spin qubits. Key assets of spin qubits include the potential to operate at 1 to 4 K, the high density of quantum dots or donors combined with possibilities to space them apart as needed, the extremely long spin coherence times, and the rich options for integration with classical electronics based on the same technology.
A stochastic two-scale model for pressure-driven flow between rough surfaces
Larsson, Roland; Lundström, Staffan; Wall, Peter; Almqvist, Andreas
2016-01-01
Seal surface topography typically consists of global-scale geometric features as well as local-scale roughness details and homogenization-based approaches are, therefore, readily applied. These provide for resolving the global scale (large domain) with a relatively coarse mesh, while resolving the local scale (small domain) in high detail. As the total flow decreases, however, the flow pattern becomes tortuous and this requires a larger local-scale domain to obtain a converged solution. Therefore, a classical homogenization-based approach might not be feasible for simulation of very small flows. In order to study small flows, a model allowing feasibly-sized local domains, for really small flow rates, is developed. Realization was made possible by coupling the two scales with a stochastic element. Results from numerical experiments, show that the present model is in better agreement with the direct deterministic one than the conventional homogenization type of model, both quantitatively in terms of flow rate and qualitatively in reflecting the flow pattern. PMID:27436975
Molecular dynamics simulations of bubble nucleation in dark matter detectors.
Denzel, Philipp; Diemand, Jürg; Angélil, Raymond
2016-01-01
Bubble chambers and droplet detectors used in dosimetry and dark matter particle search experiments use a superheated metastable liquid in which nuclear recoils trigger bubble nucleation. This process is described by the classical heat spike model of F. Seitz [Phys. Fluids 1, 2 (1958), doi:10.1063/1.1724333], which uses classical nucleation theory to estimate the amount and the localization of the deposited energy required for bubble formation. Here we report on direct molecular dynamics simulations of heat-spike-induced bubble formation. They allow us to test the nanoscale process described in the classical heat spike model. 40 simulations were performed, each containing about 20 million atoms, which interact by a truncated force-shifted Lennard-Jones potential. We find that the energy per length unit needed for bubble nucleation agrees quite well with theoretical predictions, but the allowed spike length and the required total energy are about twice as large as predicted. This could be explained by the rapid energy diffusion measured in the simulation: contrary to the assumption in the classical model, we observe significantly faster heat diffusion than the bubble formation time scale. Finally we examine α-particle tracks, which are much longer than those of neutrons and potential dark matter particles. Empirically, α events were recently found to result in louder acoustic signals than neutron events. This distinction is crucial for the background rejection in dark matter searches. We show that a large number of individual bubbles can form along an α track, which explains the observed larger acoustic amplitudes.
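For reference, a minimal sketch of the truncated force-shifted Lennard-Jones potential named above, in reduced units with a generic cutoff (the simulation's actual parameters are not restated in the abstract): both the potential and its derivative go to zero at the cutoff, avoiding force discontinuities:

```python
import numpy as np

def force_shifted_lj(r, eps=1.0, sigma=1.0, rc=2.5):
    """Truncated force-shifted Lennard-Jones potential:
    V_fs(r) = V(r) - V(rc) - (r - rc) V'(rc) for r < rc, else 0,
    so that potential and force vanish smoothly at the cutoff rc."""
    def v(r_):
        sr6 = (sigma / r_) ** 6
        return 4.0 * eps * (sr6 ** 2 - sr6)
    def dv(r_):  # dV/dr
        sr6 = (sigma / r_) ** 6
        return -24.0 * eps * (2.0 * sr6 ** 2 - sr6) / r_
    r = np.asarray(r, dtype=float)
    return np.where(r < rc, v(r) - v(rc) - (r - rc) * dv(rc), 0.0)

print(force_shifted_lj([1.0, 1.5, 2.0, 2.5, 3.0]))
```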
Wavelet-based multiscale window transform and energy and vorticity analysis
NASA Astrophysics Data System (ADS)
Liang, Xiang San
A new methodology, Multiscale Energy and Vorticity Analysis (MS-EVA), is developed to investigate sub-mesoscale, meso-scale, and large-scale dynamical interactions in geophysical fluid flows which are intermittent in space and time. The development begins with the construction of a wavelet-based functional analysis tool, the multiscale window transform (MWT), which is local, orthonormal, self-similar, and windowed on scale. The MWT is first built over the real line and then modified onto a finite domain. Properties are explored, the most important one being the property of marginalization, which brings together a quadratic quantity in physical space with its phase space representation. Based on the MWT, the MS-EVA is developed. Energy and enstrophy equations for the large-, meso-, and sub-meso-scale windows are derived and their terms interpreted. The processes thus represented are classified into four categories: transport, transfer, conversion, and dissipation/diffusion. The separation of transport from transfer is made possible with the introduction of the concept of perfect transfer. By the property of marginalization, the classical energetic analysis proves to be a particular case of the MS-EVA. The MS-EVA developed is validated with classical instability problems. The validation is carried out in two steps. First, it is established that the barotropic and baroclinic instabilities are indicated by the spatial averages of certain transfer term interaction analyses. Then calculations of these indicators are made with an Eady model and a Kuo model. The results agree precisely with what is expected from their analytical solutions, and the energetics reproduced reveal a consistent and important aspect of the unknown dynamic structures of instability processes. As an application, the MS-EVA is used to investigate the Iceland-Faeroe frontal (IFF) variability. A MS-EVA-ready dataset is first generated through a forecasting study with the Harvard Ocean Prediction System, using the data gathered during the 1993 NRV Alliance cruise. The application starts with a determination of the scale window bounds, which characterize a double-peak structure in either the time wavelet spectrum or the space wavelet spectrum. The resulting energetics, when locally averaged, reveal that there is a clear baroclinic instability happening around the cold tongue intrusion observed in the forecast. Moreover, an interaction analysis shows that the energy released by the instability indeed goes to the meso-scale window and fuels the growth of the intrusion. The sensitivity study shows that, in this case, the key to a successful application is a correct decomposition of the large-scale window from the meso-scale window.
Roadmap for Scaling and Multifractals in Geosciences: still a long way to go ?
NASA Astrophysics Data System (ADS)
Schertzer, Daniel; Lovejoy, Shaun
2010-05-01
The interest in scale symmetries (scaling) in Geosciences has never lessened since the first pioneering EGS session on chaos and fractals 22 years ago. The corresponding NP activities have been steadily increasing, covering a wider and wider diversity of geophysical phenomena and range of space-time scales. Whereas interest was initially largely focused on atmospheric turbulence, rain and clouds at small scales, it has quickly broadened to much larger scales and to much wider scale ranges, to include ocean sciences, solid earth and space physics. Indeed, the scale problem being ubiquitous in Geosciences, it is indispensable to share the efforts and the resulting knowledge as much as possible. There have been numerous achievements which have followed from the exploration of larger and larger datasets with finer and finer resolutions, from both modelling and theoretical discussions, particularly on formalisms for intermittency, anisotropy and scale symmetry, and multiple scaling (multifractals) vs. simple scaling. We are now way beyond the early pioneering but tentative attempts using crude estimates of unique scaling exponents to bring some credence to the fact that scale symmetries are key to most nonlinear geoscience problems. Nowadays, we need to better demonstrate that scaling brings effective solutions to geosciences and therefore to society. A large part of the answer corresponds to our capacity to create much more universal and flexible tools to multifractally analyse, in straightforward and reliable manners, complex and complicated systems such as the climate. Preliminary steps in this direction are already quite encouraging: they show that such approaches explain both the difficulty of classical techniques to find trends in climate scenarios (particularly for extremes) and resolve them with the help of scaling estimators. The question of the reliability and accuracy of these methods is not trivial. After discussing these important, but rather short term issues, we will point out more general questions, which can be put together into the following provocative question: how to convert the classical time-evolving deterministic PDEs into dynamical multifractal systems? We will argue that this corresponds to an already active field of research, which includes: multifractals as generic solutions of nonlinear PDEs (exact results for the 1D Burgers equation and a few other caricatures of the Navier-Stokes equations, prospects for 3D Burgers equations), cascade structures of numerical weather models, links between multifractal processes and random dynamical systems, and the challenging debate on the most relevant stochastic multifractal formalism, whereas there is already a rather general consensus about the deterministic one.
A method for data handling numerical results in parallel OpenFOAM simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anton, Alin; Muntean, Sebastian
Parallel computational fluid dynamics simulations produce vast amounts of numerical result data. This paper introduces a method for reducing the size of the data by replaying the interprocessor traffic. The results are recovered only in certain regions of interest configured by the user. A known test case is used for several mesh partitioning scenarios using the OpenFOAM® toolkit [1]. The space savings obtained with classic algorithms remain constant for more than 60 Gb of floating point data. Our method is most efficient on large simulation meshes and is much better suited for compressing large scale simulation results than the regular algorithms.
Photon and graviton mass limits
NASA Astrophysics Data System (ADS)
Goldhaber, Alfred Scharff; Nieto, Michael Martin
2010-01-01
Efforts to place limits on deviations from canonical formulations of electromagnetism and gravity have probed length scales increasing dramatically over time. Historically, these studies have passed through three stages: (1) testing the power in the inverse-square laws of Newton and Coulomb, (2) seeking a nonzero value for the rest mass of photon or graviton, and (3) considering more degrees of freedom, allowing mass while preserving explicit gauge or general-coordinate invariance. Since the previous review the lower limit on the photon Compton wavelength has improved by four orders of magnitude, to about one astronomical unit, and rapid current progress in astronomy makes further advance likely. For gravity there have been vigorous debates about even the concept of graviton rest mass. Meanwhile there are striking observations of astronomical motions that do not fit Einstein gravity with visible sources. “Cold dark matter” (slow, invisible classical particles) fits well at large scales. “Modified Newtonian dynamics” provides the best phenomenology at galactic scales. Satisfying this phenomenology is a requirement if dark matter, perhaps as invisible classical fields, could be correct here too. “Dark energy” might be explained by a graviton-mass-like effect, with associated Compton wavelength comparable to the radius of the visible universe. Significant mass limits are summarized in a table.
Photon and graviton mass limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldhaber, Alfred Scharff; Nieto, Michael Martin; Theoretical Division
2010-01-15
Efforts to place limits on deviations from canonical formulations of electromagnetism and gravity have probed length scales increasing dramatically over time. Historically, these studies have passed through three stages: (1) testing the power in the inverse-square laws of Newton and Coulomb, (2) seeking a nonzero value for the rest mass of photon or graviton, and (3) considering more degrees of freedom, allowing mass while preserving explicit gauge or general-coordinate invariance. Since the previous review the lower limit on the photon Compton wavelength has improved by four orders of magnitude, to about one astronomical unit, and rapid current progress in astronomy makes further advance likely. For gravity there have been vigorous debates about even the concept of graviton rest mass. Meanwhile there are striking observations of astronomical motions that do not fit Einstein gravity with visible sources. "Cold dark matter" (slow, invisible classical particles) fits well at large scales. "Modified Newtonian dynamics" provides the best phenomenology at galactic scales. Satisfying this phenomenology is a requirement if dark matter, perhaps as invisible classical fields, could be correct here too. "Dark energy" might be explained by a graviton-mass-like effect, with associated Compton wavelength comparable to the radius of the visible universe. Significant mass limits are summarized in a table.
Vortex survival in 3D self-gravitating accretion discs
NASA Astrophysics Data System (ADS)
Lin, Min-Kai; Pierens, Arnaud
2018-07-01
Large-scale, dust-trapping vortices may account for observations of asymmetric protoplanetary discs. Disc vortices are also potential sites for accelerated planetesimal formation by concentrating dust grains. However, in 3D discs vortices are subject to destructive `elliptic instabilities', which reduce their viability as dust traps. The survival of vortices in 3D accretion discs is thus an important issue to address. In this work, we perform shearing box simulations to show that disc self-gravity enhances the survival of 3D vortices, even when self-gravity is weak in the classic sense (e.g. with a Toomre Q ≃ 5). We find a 3D self-gravitating vortex can grow on secular time-scales in spite of the elliptic instability. The vortex aspect ratio decreases as it strengthens, which feeds the elliptic instability. The result is a 3D vortex with a turbulent core that persists for ~10³ orbits. We find when gravitational and hydrodynamic stresses become comparable, the vortex may undergo episodic bursts, which we interpret as an interaction between elliptic and gravitational instabilities. We estimate the distribution of dust particles in self-gravitating, turbulent vortices. Our results suggest large-scale vortices in protoplanetary discs are more easily observed at large radii.
Applications of species accumulation curves in large-scale biological data analysis.
Deng, Chao; Daley, Timothy; Smith, Andrew D
2015-09-01
The species accumulation curve, or collector's curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45-63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible.
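A minimal sketch of the underlying Good-Toulmin estimator, without the rational function (Padé) stabilization that is the paper's contribution; the input sample here is synthetic:

```python
import numpy as np
from collections import Counter

def good_toulmin(observations, t):
    """Good-Toulmin power series for the expected number of *new* species
    when the sample is scaled up by a factor (1 + t):
        E[new] = sum_{j>=1} (-1)^(j+1) * t^j * n_j,
    where n_j is the number of species seen exactly j times.  The raw
    alternating series diverges for t > 1; the paper tames it with
    rational (Pade) approximants, which are omitted here for brevity."""
    freq_of_freq = Counter(Counter(observations).values())
    return sum((-1) ** (j + 1) * t ** j * n_j
               for j, n_j in freq_of_freq.items())

# Hypothetical capture data: species labels drawn from a heavy-tailed
# abundance distribution, mimicking a sequencing library.
rng = np.random.default_rng(1)
sample = rng.zipf(2.0, size=5000)
print(good_toulmin(sample, t=0.5))   # expected new species at 1.5x sampling
```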
Applications of species accumulation curves in large-scale biological data analysis
Deng, Chao; Daley, Timothy; Smith, Andrew D
2016-01-01
The species accumulation curve, or collector’s curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45–63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible. PMID:27252899
The Embedded Atom Model and large-scale MD simulation of tin under shock loading
NASA Astrophysics Data System (ADS)
Sapozhnikov, F. A.; Ionov, G. V.; Dremov, V. V.; Soulard, L.; Durand, O.
2014-05-01
The goal of the work was to develop an interatomic potential that can be used in large-scale classical MD simulations to predict tin properties near the melting curve, the melting curve itself, and the kinetics of melting and solidification under shock and ramp loading. According to the phase diagram, shocked tin melts from the bcc phase, and since the main objective was to investigate melting, the EAM was parameterized for the bcc phase. The EAM was optimized using isothermal compression data (experimental at T=300 K and ab initio at T=0 K for bcc, fcc, and bct structures), and experimental and QMD data on the Hugoniot and on melting at elevated pressures. The Hugoniostat calculations centred at β-tin at ambient conditions showed that the calculated Hugoniot is in good agreement with experimental and QMD data above the p-bct transition pressure. Calculations of overcooled liquid in the pressure range corresponding to the bcc phase showed crystallization into the bcc phase. Since the principal Hugoniot of tin originates from β-tin, which is not described by this EAM, a special initial state of the bcc samples was constructed to perform large-scale MD simulations of shock loading.
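To fix notation, a minimal sketch of the Embedded Atom Model energy functional, with placeholder pair, density, and embedding functions (the actual tin potential is fitted to the ab initio and shock data described above, not to these forms):

```python
import numpy as np

def eam_energy(positions, embed, pair, density, cutoff):
    """Total EAM energy  E = sum_i F(rho_i) + 1/2 sum_{i!=j} phi(r_ij),
    with host electron density rho_i = sum_{j!=i} rho(r_ij)."""
    n = len(positions)
    e_pair, rho = 0.0, np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            if r < cutoff:
                e_pair += pair(r)
                rho[i] += density(r)
                rho[j] += density(r)
    return e_pair + sum(embed(x) for x in rho)

# Placeholder functional forms (illustrative, not the fitted tin potential):
pair    = lambda r: np.exp(-2.0 * (r - 1.0)) - 2.0 * np.exp(-(r - 1.0))
density = lambda r: np.exp(-r)
embed   = lambda rho: -np.sqrt(rho)    # Finnis-Sinclair-style embedding

pos = np.array([[0.0, 0, 0], [1.1, 0, 0], [0, 1.1, 0], [1.1, 1.1, 0]])
print(eam_energy(pos, embed, pair, density, cutoff=3.0))
```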
NASA Astrophysics Data System (ADS)
Zhu, Zheng; Ochoa, Andrew J.; Katzgraber, Helmut G.
2018-05-01
The search for problems where quantum adiabatic optimization might excel over classical optimization techniques has sparked a recent interest in inducing a finite-temperature spin-glass transition in quasiplanar topologies. We have performed large-scale finite-temperature Monte Carlo simulations of a two-dimensional square-lattice bimodal spin glass with next-nearest ferromagnetic interactions claimed to exhibit a finite-temperature spin-glass state for a particular relative strength of the next-nearest to nearest interactions [Phys. Rev. Lett. 76, 4616 (1996), 10.1103/PhysRevLett.76.4616]. Our results show that the system is in a paramagnetic state in the thermodynamic limit, despite zero-temperature simulations [Phys. Rev. B 63, 094423 (2001), 10.1103/PhysRevB.63.094423] suggesting the existence of a finite-temperature spin-glass transition. Therefore, deducing the finite-temperature behavior from zero-temperature simulations can be dangerous when corrections to scaling are large.
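As a pointer to the model being simulated, here is a minimal single-flip Metropolis sketch of a 2D ±J (bimodal) spin glass with uniform ferromagnetic next-nearest (diagonal) couplings of relative strength lam; the study itself relied on large-scale parallel-tempering Monte Carlo and finite-size scaling, which this omits:

```python
import numpy as np

rng = np.random.default_rng(3)
L, lam, beta = 16, 0.5, 1.0        # lattice size, NNN strength, inverse T

s  = rng.choice([-1, 1], size=(L, L))    # Ising spins
Jx = rng.choice([-1, 1], size=(L, L))    # bimodal +-J bond to +x neighbor
Jy = rng.choice([-1, 1], size=(L, L))    # bimodal +-J bond to +y neighbor

def local_field(i, j):
    """Field on spin (i,j): random +-J nearest-neighbor bonds plus uniform
    ferromagnetic next-nearest (diagonal) couplings of strength lam."""
    up, dn, lf, rt = (i - 1) % L, (i + 1) % L, (j - 1) % L, (j + 1) % L
    h = (Jx[i, j] * s[i, rt] + Jx[i, lf] * s[i, lf]
         + Jy[i, j] * s[dn, j] + Jy[up, j] * s[up, j])
    h += lam * (s[up, lf] + s[up, rt] + s[dn, lf] + s[dn, rt])
    return h

for sweep in range(200):                 # Metropolis sweeps
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        dE = 2.0 * s[i, j] * local_field(i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]

print("magnetization per spin:", s.mean())   # crude sanity check
```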
Inherent polarization entanglement generated from a monolithic semiconductor chip
Horn, Rolf T.; Kolenderski, Piotr; Kang, Dongpeng; Abolghasem, Payam; Scarcella, Carmelo; Frera, Adriano Della; Tosi, Alberto; Helt, Lukas G.; Zhukovsky, Sergei V.; Sipe, J. E.; Weihs, Gregor; Helmy, Amr S.; Jennewein, Thomas
2013-01-01
Creating miniature chip scale implementations of optical quantum information protocols is a dream for many in the quantum optics community. This is largely because of the promise of stability and scalability. Here we present a monolithically integratable chip architecture upon which is built a photonic device primitive called a Bragg reflection waveguide (BRW). Implemented in gallium arsenide, we show that, via the process of spontaneous parametric down conversion, the BRW is capable of directly producing polarization entangled photons without additional path difference compensation, spectral filtering or post-selection. After splitting the twin photons immediately after they emerge from the chip, we perform a variety of correlation tests on the photon pairs and show non-classical behaviour in their polarization. Combined with the BRW's versatile architecture, our results mark the BRW design as a serious contender on which to build large scale implementations of optical quantum processing devices. PMID:23896982
A networked voting rule for democratic representation
NASA Astrophysics Data System (ADS)
Hernández, Alexis R.; Gracia-Lázaro, Carlos; Brigatti, Edgardo; Moreno, Yamir
2018-03-01
We introduce a general framework for exploring the problem of selecting a committee of representatives with the aim of studying a networked voting rule based on a decentralized large-scale platform, which can assure a strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain a high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representativeness exists in the form of an inverse square root law and that the normalized committee size approximately scales with the inverse of the community size, allowing scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals' interactions, except for the presence of a few individuals with very high connectivity, which can have a marginal negative effect on the committee selection process.
Coral mass spawning predicted by rapid seasonal rise in ocean temperature
Maynard, Jeffrey A.; Edwards, Alasdair J.; Guest, James R.; Rahbek, Carsten
2016-01-01
Coral spawning times have been linked to multiple environmental factors; however, to what extent these factors act as generalized cues across multiple species and large spatial scales is unknown. We used a unique dataset of coral spawning from 34 reefs in the Indian and Pacific Oceans to test if month of spawning and peak spawning month in assemblages of Acropora spp. can be predicted by sea surface temperature (SST), photosynthetically available radiation, wind speed, current speed, rainfall or sunset time. Contrary to the classic view that high mean SST initiates coral spawning, we found rapid increases in SST to be the best predictor in both cases (month of spawning: R² = 0.73, peak: R² = 0.62). Our findings suggest that a rapid increase in SST provides the dominant proximate cue for coral mass spawning over large geographical scales. We hypothesize that coral spawning is ultimately timed to ensure optimal fertilization success. PMID:27170709
Hyperresonance Unifying Theory and the resulting Law
NASA Astrophysics Data System (ADS)
Omerbashich, Mensur
2012-07-01
Hyperresonance Unifying Theory (HUT) is herein conceived based on theoretical and experimental geophysics, as that absolute extension of both Multiverse and String Theories, in which all universes (the Hyperverse) - of non-prescribed energies and scales - mutually orbit as well as oscillate in tune. The motivation for this is to explain oddities of "attraction at a distance" and physical unit(s) attached to the Newtonian gravitational constant G. In order to make sure HUT holds absolutely, we operate over non-temporal, unitless quantities and quantities with derived units only. A HUT's harmonic geophysical localization (here for the Earth-Moon system; the Georesonator) is indeed achieved for mechanist and quantum scales, in form of the Moon's Equation of Levitation (of Anti-gravity). HUT holds true for our Solar system the same as its localized equation holds down to the precision of terrestrial G-experiments, regardless of the scale: to 10^-11 and 10^-39 for mechanist and quantum scales, respectively. Due to its absolute accuracy (within NIST experimental limits), the derived equation is regarded a law. HUT can indeed be demonstrated for our entire Solar system in various albeit empirical ways. In summary, HUT shows: (i) how classical gravity can be expressed in terms of scale and the speed of light; (ii) the tuning-forks principle is universal; (iii) the body's fundamental oscillation note is not a random number as previously believed; (iv) earthquakes of about M6 and stronger arise mainly due to Earth's alignments longer than three days to two celestial objects in our Solar system, whereas M7+ earthquakes occur mostly during two simultaneous such alignments; etc. HUT indicates: (v) quantum physics is objectocentric, i.e. trivial in absolute terms so it cannot be generalized beyond classical mass-bodies; (vi) geophysics is largely due to the magnification of mass resonance; etc. HUT can be extended to multiverse (10^17) and string scales (10^-67) too, providing a constraint to String Theory. HUT is the unifying theory as it demotes classical forces to states of stringdom. The String Theory's paradigm on vibrational rather than particlegenic reality has thus been confirmed.
Scale-dependent coupling of hysteretic capillary pressure, trapping, and fluid mobilities
NASA Astrophysics Data System (ADS)
Doster, F.; Celia, M. A.; Nordbotten, J. M.
2012-12-01
Many applications of multiphase flow in porous media, including CO2 storage and enhanced oil recovery, require mathematical models that span a large range of length scales. In the context of numerical simulations, practical grid sizes are often on the order of tens of meters, thereby de facto defining a coarse model scale. Under particular conditions, it is possible to approximate the sub-grid-scale distribution of the fluid saturation within a grid cell; that reconstructed saturation can then be used to compute effective properties at the coarse scale. If both the density difference between the fluids and the vertical extent of the grid cell are large, and buoyant segregation within the cell occurs on a sufficiently short time scale, then the phase pressure distributions are essentially hydrostatic and the saturation profile can be reconstructed from the inferred capillary pressures. However, the saturation reconstruction may not be unique, because the parameters and parameter functions of classical formulations of two-phase flow in porous media - the relative permeability functions, the capillary pressure-saturation relationship, and the residual saturations - show path dependence, i.e. their values depend not only on the state variables but also on their drainage and imbibition histories. In this study we focus on capillary pressure hysteresis and trapping and show that the contribution of hysteresis to effective quantities is dependent on the vertical length scale. By studying the transition from the two extreme cases - the homogeneous saturation distribution for small vertical extents and the completely segregated distribution for large extents - we identify how hysteretic capillary pressure at the local scale induces hysteresis in all coarse-scale quantities for medium vertical extents and finally vanishes for large vertical extents. Our results allow for more accurate vertically integrated modeling while improving our understanding of the coupling of capillary pressure and relative permeabilities over larger length scales.
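A minimal sketch of the hydrostatic reconstruction step described above, assuming a generic Brooks-Corey drainage curve and fluid properties chosen only for illustration (the paper's point is precisely that hysteresis makes this pc-S relation non-unique):

```python
import numpy as np

def brooks_corey_inverse(pc, p_entry=1.0e4, lam=2.0):
    """Invert a Brooks-Corey capillary pressure curve to effective wetting
    saturation: S_e = (pc / p_entry)^(-lam) for pc >= p_entry, else 1.
    (A generic drainage curve; hysteresis, the paper's focus, is omitted.)"""
    pc = np.asarray(pc, dtype=float)
    return np.where(pc >= p_entry, (pc / p_entry) ** (-lam), 1.0)

def reconstruct_saturation(pc_bottom, height, d_rho=300.0, g=9.81, nz=200):
    """Hydrostatic reconstruction inside one coarse cell: capillary pressure
    grows linearly with elevation, pc(z) = pc_bottom + d_rho*g*z, and the
    fine-scale saturation profile follows by inverting the pc-S relation."""
    z = np.linspace(0.0, height, nz)
    pc = pc_bottom + d_rho * g * z
    s_w = brooks_corey_inverse(pc)
    return z, s_w, s_w.mean()     # profile plus the coarse-cell average

z, s_w, s_bar = reconstruct_saturation(pc_bottom=5.0e3, height=50.0)
print(f"coarse-cell wetting saturation: {s_bar:.3f}")
```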
Sequential estimation and satellite data assimilation in meteorology and oceanography
NASA Technical Reports Server (NTRS)
Ghil, M.
1986-01-01
The role of dynamics in estimating the state of the atmosphere and ocean from incomplete and noisy data is discussed and the classical applications of four-dimensional data assimilation to large-scale atmospheric dynamics are presented. It is concluded that sequential updating of a forecast model with continuously incoming conventional and remote-sensing data is the most natural way of extracting the maximum amount of information from the imperfectly known dynamics, on the one hand, and the inaccurate and incomplete observations, on the other.
2014-05-01
exact one is solved later — assigned as step 5 of Algorithm 2 — because at each iteration the ADMM updates the variables in the Gauss-Seidel fashion ... Jacobi ADMM (see Algorithm 5 below). Unlike the Gauss-Seidel ADMM, the Jacobi ADMM updates all the blocks in parallel at every iteration: x_i^{k+1} ... that extending ADMM straightforwardly from the classic Gauss-Seidel setting to the Jacobi setting, from two blocks to multiple blocks, will preserve
Volume weighting the measure of the universe from classical slow-roll expansion
NASA Astrophysics Data System (ADS)
Sloan, David; Silk, Joseph
2016-05-01
One of the most frustrating issues in early universe cosmology centers on how to reconcile the vast choice of universes in string theory and in its most plausible high energy sibling, eternal inflation, which jointly generate the string landscape with the fine-tuned and hence relatively small number of universes that have undergone a large expansion and can accommodate observers and, in particular, galaxies. We show that such observations are highly favored for any system whereby physical parameters are distributed at a high energy scale, due to the conservation of the Liouville measure and the gauge nature of volume, asymptotically approaching a period of large isotropic expansion characterized by w =-1 . Our interpretation predicts that all observational probes for deviations from w =-1 in the foreseeable future are doomed to failure. The purpose of this paper is not to introduce a new measure for the multiverse, but rather to show how what is perhaps the most natural and well-known measure, volume weighting, arises as a consequence of the conservation of the Liouville measure on phase space during the classical slow-roll expansion.
Kicking atoms with finite duration pulses
NASA Astrophysics Data System (ADS)
Fekete, Julia; Chai, Shijie; Daszuta, Boris; Andersen, Mikkel F.
2016-05-01
The atom optics delta-kicked particle is a paradigmatic system for experimental studies of quantum chaos and classical-quantum correspondence. It consists of a cloud of laser cooled atoms exposed to a periodically pulsed standing wave of far off-resonant laser light. Purely quantum phenomena in such systems are quantum resonances, which transfer the atoms into a coherent superposition of largely separated momentum states. Using such large momentum transfer "beamsplitters" in atom interferometers may have applications in high precision metrology. The growth in momentum separation cannot be maintained indefinitely due to finite laser power. The largest momentum transfer is achieved by violating the usual delta-kick assumption. Therefore we explore the behavior of the atom optics kicked particle with finite pulse duration. We have developed a semi-classical model which shows good agreement with the full quantum description as well as our experiments. Furthermore we have found a simple scaling law that helps to identify optimal parameters for an atom interferometer. We verify this by measurements of the "Talbot time" (a measurement of h/m), which together with other well-known constants constitutes a measurement of the fine structure constant.
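For context, a minimal sketch of the idealized delta-kicked particle the work starts from (the finite pulse duration that is its actual subject is not modeled here): at the Talbot-time resonance the free-evolution phases vanish for integer momenta, so the kicks add coherently and the momentum distribution spreads ballistically:

```python
import numpy as np

def kicked_particle(phi_d, T, n_kicks, n_grid=2048):
    """Split-step evolution of the delta-kicked particle on a ring:
    kick exp(-i*phi_d*cos(x)), then free flight exp(-i*T*l^2/2) over the
    integer momentum modes l (scaled units; the finite pulse duration
    studied in the paper is idealized away)."""
    x = np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False)
    l = np.fft.fftfreq(n_grid, d=1.0 / n_grid)       # integer momenta
    psi = np.ones(n_grid, dtype=complex) / np.sqrt(n_grid)
    for _ in range(n_kicks):
        psi = psi * np.exp(-1j * phi_d * np.cos(x))  # instantaneous kick
        psi = np.fft.ifft(np.exp(-1j * T * l ** 2 / 2.0) * np.fft.fft(psi))
    prob_l = np.abs(np.fft.fft(psi)) ** 2
    return l, prob_l / prob_l.sum()

# At the Talbot-time resonance T = 4*pi, exp(-i*T*l^2/2) = 1 for integer l,
# so after N kicks the momentum variance grows as (N*phi_d)^2 / 2.
l, prob_l = kicked_particle(phi_d=1.5, T=4 * np.pi, n_kicks=10)
print("momentum variance:", np.sum(prob_l * l ** 2))
```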
NASA Astrophysics Data System (ADS)
Liu, Cheng-Wei
Phase transitions and their associated critical phenomena are of fundamental importance and play a crucial role in the development of statistical physics for both classical and quantum systems. Phase transitions embody diverse aspects of physics and also have numerous applications outside physics, e.g., in chemistry, biology, and combinatorial optimization problems in computer science. Many problems can be reduced to a system consisting of a large number of interacting agents, which under some circumstances (e.g., changes of external parameters) exhibit collective behavior; this type of scenario also underlies phase transitions. The theoretical understanding of equilibrium phase transitions was put on a solid footing with the establishment of the renormalization group. In contrast, non-equilibrium phase transitions are less well understood and currently a very active research topic. One important milestone here is the Kibble-Zurek (KZ) mechanism, which provides a useful framework for describing a system with a transition point approached through a non-equilibrium quench process. I developed two efficient Monte Carlo techniques for studying phase transitions, one for classical phase transitions and the other for quantum phase transitions, both under the framework of KZ scaling. For classical phase transitions, I develop a non-equilibrium quench (NEQ) simulation that can completely avoid the critical slowing-down problem. For quantum phase transitions, I develop a new algorithm, named the quasi-adiabatic quantum Monte Carlo (QAQMC) algorithm, for studying quantum quenches. I demonstrate the utility of QAQMC on the quantum Ising model and obtain high-precision results at the transition point, in particular showing generalized dynamic scaling in the quantum system. To further extend the methods, I study more complex systems such as spin-glasses and random graphs. The techniques allow us to investigate the problems efficiently. From the classical perspective, using the NEQ approach I verify the universality class of the 3D Ising spin-glasses. I also investigate the random 3-regular graphs in terms of both classical and quantum phase transitions. I demonstrate that under this simulation scheme, one can extract information associated with the classical and quantum spin-glass transitions without any knowledge prior to the simulation.
Randomized central limit theorems: A unified theory.
Eliazar, Iddo; Klafter, Joseph
2010-08-01
The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic: scaling all ensemble components by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic: scaling the ensemble components by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs), in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes, and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
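A small Monte Carlo illustration of the contrast drawn above, with an assumed Pareto distribution of random scales: deterministic scaling yields light Gaussian tails, while stochastic scaling pushes the aggregate toward heavy, Lévy-like tails:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 1000, 20000      # ensemble size, number of Monte Carlo repetitions

# Deterministic scaling: the classical CLT, Gaussian aggregate statistics.
classic = rng.standard_normal((m, n)).sum(axis=1) / np.sqrt(n)

# Stochastic scaling: each component carries its own random power-law scale
# (Pareto with tail index a < 2), mimicking a "random environment".
a = 1.5
scales = rng.pareto(a, size=(m, n)) + 1.0
random_env = (scales * rng.standard_normal((m, n))).sum(axis=1) / n ** (1.0 / a)

# The deterministic aggregate has light Gaussian tails; the randomized one
# develops heavy, Levy-like tails, in line with the RCLT picture.
for tag, x in (("deterministic", classic), ("randomized", random_env)):
    print(tag, "P(|X| > 5) =", np.mean(np.abs(x) > 5))
```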
Randomized central limit theorems: A unified theory
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2010-08-01
The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles’ aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles’ extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic—scaling all ensemble components by a common deterministic scale. However, there are “random environment” settings in which the underlying scaling schemes are stochastic—scaling the ensemble components by different random scales. Examples of such settings include Holtsmark’s law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)—in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes—and present “randomized counterparts” to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.
On the asymptotic behavior of a subcritical convection-diffusion equation with nonlocal diffusion
NASA Astrophysics Data System (ADS)
Cazacu, Cristian M.; Ignat, Liviu I.; Pazoto, Ademir F.
2017-08-01
In this paper we consider a subcritical model that involves nonlocal diffusion and a classical convective term. In spite of the nonlocal diffusion, we obtain an Oleinik type estimate similar to the case when the diffusion is local. First we prove that the entropy solution can be obtained by adding a small viscous term μu_xx and letting μ → 0. Then, by using uniform Oleinik estimates for the viscous approximation we are able to prove the well-posedness of the entropy solutions with L¹ initial data. Using a scaling argument and hyperbolic estimates given by Oleinik's inequality, we obtain the first term in the asymptotic behavior of the nonnegative solutions. Finally, the large time behavior of changing sign solutions is proved using the classical flux-entropy method and estimates for the nonlocal operator.
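For readers unfamiliar with the estimate being generalized, this is the classical (local-diffusion) Oleinik bound the abstract refers to, stated here for a uniformly convex flux; the precise form used in the paper is an assumption on our part:

```latex
% One-sided Oleinik estimate for u_t + (f(u))_x = 0 with f'' \ge c > 0:
\[
  u_x(x,t) \;\le\; \frac{1}{c\,t}
  \qquad \text{for a.e. } x \in \mathbb{R},\ t > 0,
\]
% obtained as the vanishing-viscosity limit of
\[
  u^{\mu}_t + \big( f(u^{\mu}) \big)_x = \mu\, u^{\mu}_{xx},
  \qquad \mu \to 0^{+}.
\]
```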
Programmable dispersion on a photonic integrated circuit for classical and quantum applications.
Notaros, Jelena; Mower, Jacob; Heuck, Mikkel; Lupo, Cosmo; Harris, Nicholas C; Steinbrecher, Gregory R; Bunandar, Darius; Baehr-Jones, Tom; Hochberg, Michael; Lloyd, Seth; Englund, Dirk
2017-09-04
We demonstrate a large-scale tunable-coupling ring resonator array, suitable for high-dimensional classical and quantum transforms, in a CMOS-compatible silicon photonics platform. The device consists of a waveguide coupled to 15 ring-based dispersive elements with programmable linewidths and resonance frequencies. The ability to control both the quality factor and the frequency of each ring provides an unprecedented 30 degrees of freedom in dispersion control on a single spatial channel. This programmable dispersion control system has a range of applications, including mode-locked lasers, quantum key distribution, and photon-pair generation. We also propose a novel application enabled by this circuit, high-speed quantum communication using temporal-mode-based quantum data locking, and discuss the utility of the system for performing the high-dimensional unitary optical transformations necessary for a quantum data locking demonstration.
Efficient quantum transmission in multiple-source networks.
Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun
2014-04-02
A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. Transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters, such that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design and on the reduction of the multiple-source network under the assumption of restricted maximum flow, an optimal scheme is proposed for the specially quantized multiple-source network. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency.
The basis for cosmic ray feedback: Written on the wind
Zweibel, Ellen G.
2017-01-01
Star formation and supermassive black hole growth in galaxies appear to be self-limiting. The mechanisms for self-regulation are known as feedback. Cosmic rays, the relativistic particle component of interstellar and intergalactic plasma, are among the agents of feedback. Because cosmic rays are virtually collisionless in the plasma environments of interest, their interaction with the ambient medium is primarily mediated by large scale magnetic fields and kinetic scale plasma waves. Because kinetic scales are much smaller than global scales, this interaction is most conveniently described by fluid models. In this paper, I discuss the kinetic theory and the classical theory of cosmic ray hydrodynamics (CCRH) which follows from assuming cosmic rays interact only with self-excited waves. I generalize CCRH to generalized cosmic ray hydrodynamics, which accommodates interactions with extrinsic turbulence, present examples of cosmic ray feedback, and assess where progress is needed. PMID:28579734
The basis for cosmic ray feedback: Written on the wind
NASA Astrophysics Data System (ADS)
Zweibel, Ellen G.
2017-05-01
Star formation and supermassive black hole growth in galaxies appear to be self-limiting. The mechanisms for self-regulation are known as feedback. Cosmic rays, the relativistic particle component of interstellar and intergalactic plasma, are among the agents of feedback. Because cosmic rays are virtually collisionless in the plasma environments of interest, their interaction with the ambient medium is primarily mediated by large scale magnetic fields and kinetic scale plasma waves. Because kinetic scales are much smaller than global scales, this interaction is most conveniently described by fluid models. In this paper, I discuss the kinetic theory and the classical theory of cosmic ray hydrodynamics (CCRH) which follows from assuming cosmic rays interact only with self-excited waves. I generalize CCRH to generalized cosmic ray hydrodynamics, which accommodates interactions with extrinsic turbulence, present examples of cosmic ray feedback, and assess where progress is needed.
The basis for cosmic ray feedback: Written on the wind.
Zweibel, Ellen G
2017-05-01
Star formation and supermassive black hole growth in galaxies appear to be self-limiting. The mechanisms for self-regulation are known as feedback. Cosmic rays, the relativistic particle component of interstellar and intergalactic plasma, are among the agents of feedback. Because cosmic rays are virtually collisionless in the plasma environments of interest, their interaction with the ambient medium is primarily mediated by large scale magnetic fields and kinetic scale plasma waves. Because kinetic scales are much smaller than global scales, this interaction is most conveniently described by fluid models. In this paper, I discuss the kinetic theory and the classical theory of cosmic ray hydrodynamics (CCRH) which follows from assuming cosmic rays interact only with self-excited waves. I generalize CCRH to generalized cosmic ray hydrodynamics, which accommodates interactions with extrinsic turbulence, present examples of cosmic ray feedback, and assess where progress is needed.
BULGES OF NEARBY GALAXIES WITH SPITZER: SCALING RELATIONS IN PSEUDOBULGES AND CLASSICAL BULGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fisher, David B.; Drory, Niv, E-mail: dbfisher@astro.as.utexas.ed
2010-06-20
We investigate scaling relations of bulges using bulge-disk decompositions at 3.6 μm and present bulge classifications for 173 E-Sd galaxies within 20 Mpc. Pseudobulges and classical bulges are identified using Sersic index, Hubble Space Telescope morphology, and star formation activity (traced by 8 μm emission). In the near-IR pseudobulges have n_b < 2 and classical bulges have n_b > 2, as found in the optical. Sersic index and morphology are essentially equivalent properties for bulge classification purposes. We confirm, using a much more robust sample, that the Sersic index of pseudobulges is uncorrelated with other bulge structural properties, unlike for classical bulges and elliptical galaxies. Also, the half-light radius of pseudobulges is not correlated with any other bulge property. We also find a new correlation between surface brightness and pseudobulge luminosity; pseudobulges become more luminous as they become more dense. Classical bulges follow the well-known scaling relations between surface brightness, luminosity, and half-light radius that are established by elliptical galaxies. We show that those pseudobulges (as indicated by Sersic index and nuclear morphology) that have low specific star formation rates are very similar to models of galaxies in which both a pseudobulge and classical bulge exist. Therefore, pseudobulge identification that relies only on structural indicators is incomplete. Our results, especially those on scaling relations, imply that pseudobulges are very different types of objects than elliptical galaxies.
NASA Astrophysics Data System (ADS)
Delle Site, Luigi
2018-01-01
A theoretical scheme for the treatment of an open molecular system with electrons and nuclei is proposed. The idea is based on the Grand Canonical description of a quantum region embedded in a classical reservoir of molecules. Electronic properties of the quantum region are calculated at constant electronic chemical potential, equal to that of the corresponding (large) bulk system treated at the full quantum level. Instead, the exchange of molecules between the quantum region and the classical environment occurs at the chemical potential of the macroscopic thermodynamic conditions. The Grand Canonical Adaptive Resolution Scheme is proposed for the treatment of the classical environment; such an approach can treat the exchange of molecules according to the first principles of statistical mechanics and thermodynamics. The overall scheme is built on the basis of physical consistency, with corresponding numerical criteria to control the approximations implied by the coupling. Given the wide range of expertise required, this work is intended to provide guiding principles for the construction of a well-founded computational protocol for actual multiscale simulations from the electronic to the mesoscopic scale.
Quantum chemistry simulation on quantum computers: theories and experiments.
Lu, Dawei; Xu, Boruo; Xu, Nanyang; Li, Zhaokai; Chen, Hongwei; Peng, Xinhua; Xu, Ruixue; Du, Jiangfeng
2012-07-14
It has been claimed that quantum computers can mimic quantum systems efficiently, with polynomial scaling. Traditionally, such simulations are carried out numerically on classical computers, which are inevitably confronted with an exponential growth of required resources as the size of the quantum system increases. Quantum computers avoid this problem, and thus provide a possible solution for large quantum systems. In this paper, we first discuss the ideas of quantum simulation, the background of quantum simulators, their categories, and the developments in both theory and experiment. We then present a brief introduction to quantum chemistry evaluated via classical computers, followed by typical procedures for quantum simulation applied to quantum chemistry. Reviewed are not only theoretical proposals but also proof-of-principle experimental implementations, via a small quantum computer, which include the evaluation of static molecular eigenenergies and the simulation of chemical reaction dynamics. Although the experimental development still lags behind the theory, we give prospects and suggestions for future experiments. We anticipate that in the near future quantum simulation will become a powerful tool for quantum chemistry beyond the reach of classical computation.
A dynamic regularized gradient model of the subgrid-scale stress tensor for large-eddy simulation
NASA Astrophysics Data System (ADS)
Vollant, A.; Balarac, G.; Corre, C.
2016-02-01
Large-eddy simulation (LES) resolves only the large-scale part of turbulent flows, using a scale separation based on a filtering operation. Solving the filtered Navier-Stokes equations then requires a model for the subgrid-scale (SGS) stress tensor to account for the effect of scales smaller than the filter size. In this work, a new SGS stress model is proposed. The model formulation is based on a regularization procedure applied to the gradient model to correct its unstable behavior. The model is developed based on a priori tests to improve the accuracy of the modeling for both structural and functional performance, i.e., the model's ability to locally approximate the SGS unknown term and to reproduce enough global SGS dissipation, respectively. LES is then performed for a posteriori validation. This work is an extension to the SGS stress tensor of the regularization procedure proposed by Balarac et al. ["A dynamic regularized gradient model of the subgrid-scale scalar flux for large eddy simulations," Phys. Fluids 25(7), 075107 (2013)] to model the SGS scalar flux. A set of dynamic regularized gradient (DRG) models is thus made available for both the momentum and the scalar equations. The second objective of this work is to compare this new set of DRG models against direct numerical simulations (DNS) and filtered DNS for classic flows simulated with a pseudo-spectral solver, and against the standard set of models based on the dynamic Smagorinsky model. Various flow configurations are considered: decaying homogeneous isotropic turbulence, a turbulent plane jet, and turbulent channel flows. These tests demonstrate the stable behavior provided by the regularization procedure, along with substantial improvements in the prediction of velocity and scalar statistics.
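For reference, the (unregularized) gradient model that the regularization procedure above starts from is conventionally obtained by Taylor-expanding the filtering operation; in standard notation, with filter width Δ and filtered velocity ũ (this transcription comes from the general LES literature, not from the paper itself):

    \tau_{ij} \;=\; \widetilde{u_i u_j} - \tilde{u}_i \tilde{u}_j
    \;\approx\; \frac{\Delta^2}{12}\,
    \frac{\partial \tilde{u}_i}{\partial x_k}\,
    \frac{\partial \tilde{u}_j}{\partial x_k},

whose unstable behavior in forward simulations, commonly attributed to insufficient SGS dissipation and unbounded backscatter, is what the regularization is designed to correct.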
Thermodynamics in the vicinity of a relativistic quantum critical point in 2+1 dimensions.
Rançon, A; Kodio, O; Dupuis, N; Lecheminant, P
2013-07-01
We study the thermodynamics of the relativistic quantum O(N) model in two space dimensions. In the vicinity of the zero-temperature quantum critical point (QCP), the pressure can be written in the scaling form P(T) = P(0) + N (T³/c²) F_N(Δ/T), where c is the velocity of the excitations at the QCP and |Δ| a characteristic zero-temperature energy scale. Using both a large-N approach to leading order and the nonperturbative renormalization group, we compute the universal scaling function F_N. For small values of N (N ≲ 10) we find that F_N(x) is nonmonotonic in the quantum critical regime (|x| ≲ 1) with a maximum near x = 0. The large-N approach, if properly interpreted, is a good approximation both in the renormalized classical (x ≲ -1) and quantum disordered (x ≳ 1) regimes, but fails to describe the nonmonotonic behavior of F_N in the quantum critical regime. We discuss the renormalization-group flows in the various regimes near the QCP and make the connection with the quantum nonlinear sigma model in the renormalized classical regime. We compute the Berezinskii-Kosterlitz-Thouless transition temperature T_BKT in the quantum O(2) model and find that in the vicinity of the QCP the universal ratio T_BKT/ρ_s(0) is very close to π/2, implying that the stiffness ρ_s(T_BKT⁻) at the transition is only slightly reduced with respect to the zero-temperature stiffness ρ_s(0). Finally, we briefly discuss the experimental determination of the universal function F_2 from the pressure of a Bose gas in an optical lattice near the superfluid-Mott-insulator transition.
Functional Basis for Efficient Physical Layer Classical Control in Quantum Processors
NASA Astrophysics Data System (ADS)
Ball, Harrison; Nguyen, Trung; Leong, Philip H. W.; Biercuk, Michael J.
2016-12-01
The rapid progress seen in the development of quantum-coherent devices for information processing has motivated serious consideration of quantum computer architecture and organization. One topic which remains open for investigation and optimization relates to the design of the classical-quantum interface, where control operations on individual qubits are applied according to higher-level algorithms; accommodating competing demands on performance and scalability remains a major outstanding challenge. In this work, we present a resource-efficient, scalable framework for the implementation of embedded physical layer classical controllers for quantum-information systems. Design drivers and key functionalities are introduced, leading to the selection of Walsh functions as an effective functional basis for both programming and controller hardware implementation. This approach leverages the simplicity of real-time Walsh-function generation in classical digital hardware, and the fact that a wide variety of physical layer controls, such as dynamic error suppression, are known to fall within the Walsh family. We experimentally implement a real-time field-programmable-gate-array-based Walsh controller producing Walsh timing signals and Walsh-synthesized analog waveforms appropriate for critical tasks in error-resistant quantum control and noise characterization. These demonstrations represent the first step towards a unified framework for the realization of physical layer controls compatible with large-scale quantum-information processing.
Experimental Observation of Two Features Unexpected from the Classical Theories of Rubber Elasticity
NASA Astrophysics Data System (ADS)
Nishi, Kengo; Fujii, Kenta; Chung, Ung-il; Shibayama, Mitsuhiro; Sakai, Takamasa
2017-12-01
Although the elastic modulus of a Gaussian chain network is thought to be successfully described by classical theories of rubber elasticity, such as the affine and phantom models, verification experiments are largely lacking owing to difficulties in precisely controlling of the network structure. We prepared well-defined model polymer networks experimentally, and measured the elastic modulus G for a broad range of polymer concentrations and connectivity probabilities, p . In our experiment, we observed two features that were distinct from those predicted by classical theories. First, we observed the critical behavior G ˜|p -pc|1.95 near the sol-gel transition. This scaling law is different from the prediction of classical theories, but can be explained by analogy between the electric conductivity of resistor networks and the elasticity of polymer networks. Here, pc is the sol-gel transition point. Furthermore, we found that the experimental G -p relations in the region above C* did not follow the affine or phantom theories. Instead, all the G /G0-p curves fell onto a single master curve when G was normalized by the elastic modulus at p =1 , G0. We show that the effective medium approximation for Gaussian chain networks explains this master curve.
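For context, the classical affine and phantom predictions that the measured moduli above are tested against take the standard forms (with ν the number density of elastically effective strands, f the junction functionality, and k_B T the thermal energy; these symbols are supplied here and are not from the abstract):

    G_{\text{affine}} = \nu k_B T, \qquad
    G_{\text{phantom}} = \Bigl(1 - \frac{2}{f}\Bigr)\,\nu k_B T,

both of which predict a modulus linear in strand density, rather than the critical scaling G ∼ |p - p_c|^{1.95} observed near the sol-gel transition.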
Classical and quantum simulations of warm dense carbon
NASA Astrophysics Data System (ADS)
Whitley, Heather; Sanchez, David; Hamel, Sebastien; Correa, Alfredo; Benedict, Lorin
We have applied classical and DFT-based molecular dynamics (MD) simulations to study the equation of state of carbon in the warm dense matter regime (ρ = 3.7 g/cc, 0.86 eV
Quantum Metrology beyond the Classical Limit under the Effect of Dephasing
NASA Astrophysics Data System (ADS)
Matsuzaki, Yuichiro; Benjamin, Simon; Nakayama, Shojun; Saito, Shiro; Munro, William J.
2018-04-01
Quantum sensors have the potential to outperform their classical counterparts. For classical sensing, the uncertainty of the estimation of the target fields scales inversely with the square root of the measurement time T. On the other hand, by using quantum resources, we can reduce this scaling of the uncertainty with time to 1/T. However, as quantum states are susceptible to dephasing, it has not been clear whether we can achieve sensitivities with a scaling of 1/T for a measurement time longer than the coherence time. Here, we propose a scheme that estimates the amplitude of globally applied fields with an uncertainty scaling as 1/T for an arbitrary time scale under the effect of dephasing. We use one-way quantum-computing-based teleportation between qubits to prevent correlations between the quantum state and its local environment from building up, and we show that such a teleportation protocol can suppress the local dephasing while the information from the target fields keeps growing. Our method has the potential to realize a quantum sensor with a sensitivity far beyond that of any classical sensor.
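Written out, the two uncertainty scalings contrasted in this abstract are (a transcription of the stated claims, with δ the estimation uncertainty and T the total measurement time):

    \delta_{\text{classical}} \propto \frac{1}{\sqrt{T}}, \qquad
    \delta_{\text{quantum}} \propto \frac{1}{T},

the paper's contribution being a teleportation-based protocol that preserves the 1/T scaling even for T longer than the coherence time.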
A Scalable Approach for Protein False Discovery Rate Estimation in Large Proteomic Data Sets.
Savitski, Mikhail M; Wilhelm, Mathias; Hahne, Hannes; Kuster, Bernhard; Bantscheff, Marcus
2015-09-01
Calculating the number of confidently identified proteins and estimating the false discovery rate (FDR) is a challenge when analyzing very large proteomic data sets such as entire human proteomes. Biological and technical heterogeneity in proteomic experiments further adds to the challenge; there are strong differences in opinion regarding the conceptual validity of a protein FDR and no consensus on the methodology for protein FDR determination. There are also limitations inherent to the widely used classic target-decoy strategy that become particularly apparent when analyzing very large data sets and that lead to a strong over-representation of decoy identifications. In this study, we investigated the merits of the classic, as well as a novel, target-decoy-based protein FDR estimation approach, taking advantage of a heterogeneous data collection comprised of ∼19,000 LC-MS/MS runs deposited in ProteomicsDB (https://www.proteomicsdb.org). The "picked" protein FDR approach treats target and decoy sequences of the same protein as a pair rather than as individual entities and chooses either the target or the decoy sequence depending on which receives the highest score. We investigated the performance of this approach in combination with q-value based peptide scoring to normalize sample-, instrument-, and search engine-specific differences. The "picked" target-decoy strategy performed best when protein scoring was based on the best peptide q-value for each protein, yielding a stable number of true-positive protein identifications over a wide range of q-value thresholds. We show that this simple and unbiased strategy eliminates a conceptual issue in the commonly used "classic" protein FDR approach that causes overprediction of false-positive protein identifications in large data sets. The approach scales from small to very large data sets without losing performance, consistently increases the number of true-positive protein identifications, and is readily implemented in proteomics analysis software. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
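A minimal Python sketch of the "picked" target-decoy idea described above (the data layout and the decoys/targets ratio used as the FDR estimate are illustrative assumptions, not code from the paper):

    # Each protein contributes a (target_score, decoy_score) pair; higher is
    # better (e.g. -log10 of the best peptide q-value, as in the study).
    def picked_protein_fdr(score_pairs):
        picked = []  # (score, is_decoy) for the winner of each pair
        for target_score, decoy_score in score_pairs.values():
            if target_score >= decoy_score:
                picked.append((target_score, False))
            else:
                picked.append((decoy_score, True))
        picked.sort(key=lambda entry: entry[0], reverse=True)

        fdr_curve, n_decoy = [], 0
        for rank, (score, is_decoy) in enumerate(picked, start=1):
            n_decoy += is_decoy
            n_target = rank - n_decoy
            # Decoy hits above the threshold estimate the false targets.
            fdr_curve.append((score, n_decoy / max(n_target, 1)))
        return fdr_curve

Because each target/decoy pair contributes exactly one entry, decoys can no longer accumulate independently of targets, which is the mechanism the abstract credits for avoiding decoy over-representation in very large data sets.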
Flow Control via a Single Spanwise Wire on the Surface of a Stationary Cylinder
NASA Astrophysics Data System (ADS)
Ekmekci, Alis; Rockwell, Donald
2007-11-01
The flow structure arising from a single spanwise wire attached along the surface of a circular stationary cylinder is investigated experimentally via a cinema technique of digital particle image velocimetry (DPIV). Consideration is given to wires that have smaller and larger scales than the thickness of the unperturbed boundary layer that develops around the cylinder prior to flow separation. The wires have diameters that are 1% and 3% of the cylinder diameter. Over a certain range of angular positions with respect to the approach flow, both small- and large-scale wires show important global effects on the entire near-wake. Two critical angles are identified on the basis of the near-wake structure. These critical angles are associated with extension and contraction of the near-wake, relative to the wake in the absence of a surface disturbance. The critical angle of the wire that yields near-wake extension is associated with bistable oscillations of the separating shear layer, at irregular time intervals much longer than the time scale associated with classical Karman vortex shedding. Moreover, for the large-scale wire, in specific cases, either attenuation or enhancement of the Karman mode of vortex formation is observed.
The Multi-Scale Network Landscape of Collaboration.
Bae, Arram; Park, Doheum; Ahn, Yong-Yeol; Park, Juyong
2016-01-01
Propelled by the increasing availability of large-scale, high-quality data, advanced data modeling and analysis techniques are enabling novel and significant scientific understanding of a wide range of complex social, natural, and technological systems. These developments also provide opportunities for studying cultural systems and phenomena, which can be said to refer to all products of human creativity and way of life. An important characteristic of a cultural product is that it does not exist in isolation from others, but forms an intricate web of connections on many levels. In the creation and dissemination of cultural products and artworks in particular, collaboration and communication of ideas play an essential role, which can be captured in the heterogeneous network of the creators and practitioners of art. In this paper we propose novel methods to analyze and uncover meaningful patterns from such a network, using the network of western classical musicians constructed from a large-scale, comprehensive data set of Compact Disc recordings. We characterize the complex patterns in the network landscape of collaboration between musicians across multiple scales, ranging from the macroscopic to the mesoscopic and microscopic, which represent the diversity of cultural styles and the individuality of the artists.
Local structure of scalar flux in turbulent passive scalar mixing
NASA Astrophysics Data System (ADS)
Konduri, Aditya; Donzis, Diego
2012-11-01
Understanding the properties of the scalar flux is important in the study of turbulent mixing. Classical theories suggest that it depends mainly on the large-scale structures in the flow. Recent studies suggest that the mean scalar flux reaches an asymptotic value at high Peclet numbers, independent of the molecular transport properties of the fluid. A large DNS database of isotropic turbulence with passive scalars forced with a mean scalar gradient, at resolutions up to 4096³, is used to explore the structure of the scalar flux based on the local topology of the flow. It is found that regions of small velocity gradients, where dissipation and enstrophy are small, constitute the main contribution to the scalar flux. On the other hand, regions of very small scalar gradient (and scalar dissipation) become less important to the scalar flux at high Reynolds numbers. The scaling of the scalar flux spectra is also investigated. The k^(-7/3) scaling proposed by Lumley (1964) is observed at high Reynolds numbers, but the collapse is not complete. A spectral bump similar to that in the velocity spectrum is observed close to the dissipative scales. A number of features, including the height of the bump, appear to reach an asymptotic value at high Schmidt number.
Do inertial wave interactions control the rate of energy dissipation of rotating turbulence?
NASA Astrophysics Data System (ADS)
Cortet, Pierre-Philippe; Campagne, Antoine; Machicoane, Nathanael; Gallet, Basile; Moisy, Frederic
2015-11-01
The scaling law of the energy dissipation rate, ε ~ U³/L (with U and L the characteristic velocity and length scale), is one of the most robust features of fully developed turbulence. How this scaling is affected by a background rotation is still a controversial issue, with importance for geo- and astrophysical flows. At asymptotically small Rossby numbers Ro = U/(ΩL), i.e. in the weakly nonlinear limit, wave-turbulence arguments suggest that ε should be reduced by a factor Ro. Such a scaling has, however, never been demonstrated directly, neither experimentally nor numerically. We report here direct measurements of the injected power, and therefore of ε, in an experiment where a propeller is rotating at a constant rate in a large volume of fluid rotating at Ω. In co-rotation, we find a transition between the wave-turbulence scaling at small Ro and the classical Kolmogorov law at large Ro. The transition between these two regimes is characterized from experiments varying the propeller and tank dimensions. In counter-rotation, the scenario is much richer, with the observation of an additional peak of dissipation, similar to the one found in Taylor-Couette experiments.
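Side by side, the two limiting scalings under discussion are (transcribing the abstract's claims, with ε the energy dissipation rate):

    \varepsilon_{\text{Kolmogorov}} \sim \frac{U^3}{L}, \qquad
    \varepsilon_{\text{wave turbulence}} \sim Ro\,\frac{U^3}{L} = \frac{U^4}{\Omega L^2},

with the experiment locating the transition between the two regimes as Ro is varied.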
The Multi-Scale Network Landscape of Collaboration
Ahn, Yong-Yeol; Park, Juyong
2016-01-01
Propelled by the increasing availability of large-scale, high-quality data, advanced data modeling and analysis techniques are enabling novel and significant scientific understanding of a wide range of complex social, natural, and technological systems. These developments also provide opportunities for studying cultural systems and phenomena, which can be said to refer to all products of human creativity and way of life. An important characteristic of a cultural product is that it does not exist in isolation from others, but forms an intricate web of connections on many levels. In the creation and dissemination of cultural products and artworks in particular, collaboration and communication of ideas play an essential role, which can be captured in the heterogeneous network of the creators and practitioners of art. In this paper we propose novel methods to analyze and uncover meaningful patterns from such a network, using the network of western classical musicians constructed from a large-scale, comprehensive data set of Compact Disc recordings. We characterize the complex patterns in the network landscape of collaboration between musicians across multiple scales, ranging from the macroscopic to the mesoscopic and microscopic, which represent the diversity of cultural styles and the individuality of the artists. PMID:26990088
NASA Astrophysics Data System (ADS)
Folsom, C. P.; Bouvier, J.; Petit, P.; Lèbre, A.; Amard, L.; Palacios, A.; Morin, J.; Donati, J.-F.; Vidotto, A. A.
2018-03-01
There is a large change in the surface rotation rates of sun-like stars on the pre-main sequence and early main sequence. Since these stars have dynamo-driven magnetic fields, this implies a strong evolution of their magnetic properties over this time period. The spin-down of these stars is controlled by interactions between stellar winds and magnetic fields, thus magnetic evolution in turn plays an important role in rotational evolution. We present here the second part of a study investigating the evolution of large-scale surface magnetic fields in this critical time period. We observed stars in open clusters and stellar associations with known ages between 120 and 650 Myr, and used spectropolarimetry and Zeeman Doppler Imaging to characterize their large-scale magnetic field strength and geometry. We report 15 stars with magnetic detections here. These stars have masses from 0.8 to 0.95 M⊙, rotation periods from 0.326 to 10.6 d, and we find large-scale magnetic field strengths from 8.5 to 195 G with a wide range of geometries. We find a clear trend towards decreasing magnetic field strength with age, and a power-law decrease in magnetic field strength with Rossby number. There is some tentative evidence for saturation of the large-scale magnetic field strength at Rossby numbers below 0.1, although the saturation point is not yet well defined. Comparing with younger classical T Tauri stars, we support the hypothesis that differences in internal structure produce large differences in observed magnetic fields; however, for weak-lined T Tauri stars this is less clear.
Using classical population genetics tools with heterochroneous data: time matters!
Depaulis, Frantz; Orlando, Ludovic; Hänni, Catherine
2009-01-01
New polymorphism datasets from heterochroneous data have arisen thanks to recent advances in experimental and microbial molecular evolution, and the sequencing of ancient DNA (aDNA). However, classical tools for population genetics analyses do not take into account heterochrony between subsets, despite potential bias on neutrality and population structure tests. Here, we characterize the extent of such possible biases using serial coalescent simulations. We first use a coalescent framework to generate datasets assuming no or different levels of heterochrony and contrast most classical population genetic statistics. We show that even weak levels of heterochrony (approximately 10% of the average depth of a standard population tree) affect the distribution of polymorphism substantially, leading to overestimates of the level of polymorphism θ and to star-like trees, with an excess of rare mutations and a deficit of linkage disequilibrium, which are the hallmarks of, e.g., population expansion (possibly after a drastic bottleneck). Substantial departures of the tests are detected in the opposite direction for more heterochroneous and equilibrated datasets, with balanced trees mimicking, in particular, population contraction, balancing selection, and population differentiation. We therefore introduce simple corrections to classical estimators of polymorphism and of the genetic distance between populations, in order to remove heterochrony-driven bias. Finally, we show that these effects do occur in real aDNA datasets, taking advantage of the currently available sequence data for Cave Bears (Ursus spelaeus), for which large mtDNA haplotypes have been reported over a substantial time period (22-130 thousand years ago (KYA)). Considering serial sampling changed the conclusions of several tests, indicating that neglecting heterochrony could provide significant support for a false past history of populations and inappropriate conservation decisions. We therefore argue for systematically considering heterochroneous models when analyzing heterochroneous samples covering a large time scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, A.; Han, T. Y.
Cuprous oxide is a p-type semiconducting material that has been highly researched for its interesting properties. Many small-scale syntheses have exhibited excellent control over size and morphology. As the demand for cuprous oxide grows, the synthesis method needs to evolve to facilitate large-scale production. This paper supplies a facile bulk synthesis method for Cu₂O; on average, a 1-liter reaction volume can produce 1 gram of particles. In order to study the shape and size control mechanisms on such a scale, the reaction volume was reduced to 250 mL, producing on average 0.3 grams of nanoparticles per batch. Well-shaped nanoparticles have been synthesized using an aqueous solution of CuCl₂, NaOH, SDS surfactant, and NH₂OH-HCl at mild temperatures. The time allotted between the addition of NaOH and NH₂OH-HCl was determined to be critical for Cu(OH)₂ production, an important precursor to the final product. The effect of stirring rate at large scale was also analyzed during reagent addition and post reagent addition. A morphological change from rhombic dodecahedra to spheres occurred as the stirring speed was increased. The effects of NH₂OH-HCl concentration were also studied to control the etching of the final product.
Large eddy simulation of turbine wakes using higher-order methods
NASA Astrophysics Data System (ADS)
Deskos, Georgios; Laizet, Sylvain; Piggott, Matthew D.; Sherwin, Spencer
2017-11-01
Large-eddy simulations (LES) of a horizontal-axis turbine wake are presented using the well-known actuator line (AL) model. The fluid flow is resolved by employing higher-order numerical schemes on a 3D Cartesian mesh, combined with a 2D domain decomposition strategy for efficient use of supercomputers. In order to simulate flows at relatively high Reynolds numbers for a reasonable computational cost, a novel strategy is used to introduce controlled numerical dissipation over a selected range of small scales. The idea is to mimic the contribution of the unresolved small scales by imposing a targeted numerical dissipation at small scales when evaluating the viscous term of the Navier-Stokes equations. The numerical technique is shown to behave similarly to traditional eddy-viscosity sub-filter scale models such as the classic or dynamic Smagorinsky models. The results from the simulations are compared to experimental data for a diameter-based Reynolds number Re_D = 1,000,000, and both the time-averaged streamwise velocity and the turbulent kinetic energy (TKE) show good overall agreement. Finally, suggestions are made for the amount of numerical dissipation required by our approach in the particular case of horizontal-axis turbine wakes.
Calculating the Sachs-Wolfe Effect from Solutions of Null Geodesics in Perturbed FRW Spacetime
NASA Astrophysics Data System (ADS)
Arroyo-Cárdenas, C. A.; Muñoz-Cuartas, J. C.
2017-07-01
In the upcoming precision era in cosmology, fine-grained effects will be measured accurately. In particular, measurements of the late integrated Sachs-Wolfe (ISW) effect will be improved to unprecedented levels of precision. The ISW effect consists of temperature fluctuations in the CMB due to gravitational redshift induced by the evolving potential wells of large-scale structure in the Universe. There is currently considerable controversy regarding the actual observability of the ISW effect. In principle, it is expected that, as an effect of the late accelerated expansion of the universe driven by the current amount of dark energy, large-scale structures may evolve rapidly, inducing an observable signature in the CMB photons in the form of an ISW anisotropy. Tension arises because some works, using galaxy redshift surveys, report temperature fluctuations with an amplitude smaller than predicted by Lambda-CDM. We argue that these discrepancies may originate in the approximation that one has to make to obtain the classic Sachs-Wolfe effect. In this work, we compare the classic Sachs-Wolfe approximation with an exact solution for the propagation of photons in a dynamical background. We solve numerically the null geodesics on a perturbed FRW spacetime in the Newtonian gauge. From the null geodesics, the temperature fluctuations in the CMB due to the evolving potential are calculated. Since solving the geodesics accounts for more terms than solving the (approximate) Sachs-Wolfe integral, our results are more accurate. We have been able to subtract the background cosmological redshift with the information provided by the null geodesics, which allows us to estimate the integrated Sachs-Wolfe contribution to the temperature of the CMB.
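For reference, the late-ISW temperature fluctuation being computed is conventionally written as a line-of-sight integral over the time derivative of the gravitational potential (standard form in the Newtonian gauge with equal metric potentials; supplied here for context, not a result of this paper):

    \frac{\Delta T}{T}\bigg|_{\text{ISW}}
    = \frac{2}{c^2} \int_{\eta_*}^{\eta_0} \frac{\partial \Phi}{\partial \eta}\, \mathrm{d}\eta,

where η is conformal time and the integral runs along the photon geodesic from last scattering to the observer; the paper's point is that integrating the null geodesics directly retains terms this approximation drops.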
Disruption of circumstellar discs by large-scale stellar magnetic fields
NASA Astrophysics Data System (ADS)
ud-Doula, Asif; Owocki, Stanley P.; Kee, Nathaniel Dylan
2018-05-01
Spectropolarimetric surveys reveal that 8-10% of OBA stars harbor large-scale magnetic fields, but thus far no such fields have been detected in any classical Be stars. Motivated by this, we present here MHD simulations for how a pre-existing Keplerian disc, like that inferred to form from decretion of material from rapidly rotating Be stars, can be disrupted by a rotation-aligned stellar dipole field. For characteristic stellar and disc parameters of a near-critically rotating B2e star, we find that a polar surface field strength of just 10 G can significantly disrupt the disc, while a field of 100 G, near the observational upper limit inferred for most Be stars, completely destroys the disc over just a few days. Our parameter study shows that the efficacy of this magnetic disruption of a disc scales with the characteristic plasma beta (defined as the ratio between thermal and magnetic pressure) in the disc, but is surprisingly insensitive to other variations, e.g. in stellar rotation speed, or the mass loss rate of the star's radiatively driven wind. The disc disruption seen here for even a modest field strength suggests that the presumed formation of such Be discs by decretion of material from the star would likely be strongly inhibited by such fields; this provides an attractive explanation for why no large-scale fields are detected from such Be stars.
The contrasting roles of Planck's constant in classical and quantum theories
NASA Astrophysics Data System (ADS)
Boyer, Timothy H.
2018-04-01
We trace the historical appearance of Planck's constant in physics, and we note that initially the constant did not appear in connection with quanta. Furthermore, we emphasize that Planck's constant can appear in both classical and quantum theories. In both theories, Planck's constant sets the scale of atomic phenomena. However, the roles played in the foundations of the theories are sharply different. In quantum theory, Planck's constant is crucial to the structure of the theory. On the other hand, in classical electrodynamics, Planck's constant is optional, since it appears only as the scale factor for the (homogeneous) source-free contribution to the general solution of Maxwell's equations. Since classical electrodynamics can be solved while taking the homogeneous source-free contribution in the solution as zero or non-zero, there are naturally two different theories of classical electrodynamics, one in which Planck's constant is taken as zero and one where it is taken as non-zero. The textbooks of classical electromagnetism present only the version in which Planck's constant is taken to vanish.
Voss, Clifford I.; Soliman, Safaa M.
2014-01-01
Parsimonious groundwater modeling provides insight into hydrogeologic functioning of the Nubian Aquifer System (NAS), the world’s largest non-renewable groundwater system (belonging to Chad, Egypt, Libya, and Sudan). Classical groundwater-resource issues exist (magnitude and lateral extent of drawdown near pumping centers) with joint international management questions regarding transboundary drawdown. Much of NAS is thick, containing a large volume of high-quality groundwater, but receives insignificant recharge, so water-resource availability is time-limited. Informative aquifer data are lacking regarding large-scale response, providing only local-scale information near pumps. Proxy data provide primary underpinning for understanding regional response: Holocene water-table decline from the previous pluvial period, after thousands of years, results in current oasis/sabkha locations where the water table still intersects the ground. Depletion is found to be controlled by two regional parameters, hydraulic diffusivity and vertical anisotropy of permeability. Secondary data that provide insight are drawdowns near pumps and isotope-groundwater ages (million-year-old groundwaters in Egypt). The resultant strong simply structured three-dimensional model representation captures the essence of NAS regional groundwater-flow behavior. Model forecasts inform resource management that transboundary drawdown will likely be minimal—a nonissue—whereas drawdown within pumping centers may become excessive, requiring alternative extraction schemes; correspondingly, significant water-table drawdown may occur in pumping centers co-located with oases, causing oasis loss and environmental impacts.
NASA Astrophysics Data System (ADS)
Voss, Clifford I.; Soliman, Safaa M.
2014-03-01
Parsimonious groundwater modeling provides insight into hydrogeologic functioning of the Nubian Aquifer System (NAS), the world's largest non-renewable groundwater system (belonging to Chad, Egypt, Libya, and Sudan). Classical groundwater-resource issues exist (magnitude and lateral extent of drawdown near pumping centers) with joint international management questions regarding transboundary drawdown. Much of NAS is thick, containing a large volume of high-quality groundwater, but receives insignificant recharge, so water-resource availability is time-limited. Informative aquifer data are lacking regarding large-scale response, providing only local-scale information near pumps. Proxy data provide primary underpinning for understanding regional response: Holocene water-table decline from the previous pluvial period, after thousands of years, results in current oasis/sabkha locations where the water table still intersects the ground. Depletion is found to be controlled by two regional parameters, hydraulic diffusivity and vertical anisotropy of permeability. Secondary data that provide insight are drawdowns near pumps and isotope-groundwater ages (million-year-old groundwaters in Egypt). The resultant strong simply structured three-dimensional model representation captures the essence of NAS regional groundwater-flow behavior. Model forecasts inform resource management that transboundary drawdown will likely be minimal—a nonissue—whereas drawdown within pumping centers may become excessive, requiring alternative extraction schemes; correspondingly, significant water-table drawdown may occur in pumping centers co-located with oases, causing oasis loss and environmental impacts.
A human factors approach to range scheduling for satellite control
NASA Technical Reports Server (NTRS)
Wright, Cameron H. G.; Aitken, Donald J.
1991-01-01
Range scheduling for satellite control presents a classical problem: supervisory control of a large-scale dynamic system, with unwieldy amounts of interrelated data used as inputs to the decision process. Increased automation of the task, with the appropriate human-computer interface, is highly desirable. The development and user evaluation of a semi-automated network range scheduling system is described. The system incorporates a synergistic human-computer interface consisting of a large screen color display, voice input/output, a 'sonic pen' pointing device, a touchscreen color CRT, and a standard keyboard. From a human factors standpoint, this development represents the first major improvement in almost 30 years to the satellite control network scheduling task.
Extended self-similarity in the two-dimensional metal-insulator transition
NASA Astrophysics Data System (ADS)
Moriconi, L.
2003-09-01
We show that extended self-similarity, a scaling phenomenon first observed in classical turbulent flows, holds for a two-dimensional metal-insulator transition that belongs to the universality class of random Dirac fermions. Deviations from multifractality, which in turbulence are due to the dominance of diffusive processes at small scales, appear in the condensed-matter context as a large-scale, finite-size effect related to the imposition of an infrared cutoff in the field theory formulation. We propose a phenomenological interpretation of extended self-similarity in the metal-insulator transition within the framework of the random β-model description of multifractal sets. As a natural step, our discussion is bridged to the analysis of strange attractors, where crossovers between multifractal and nonmultifractal regimes are found and extended self-similarity turns out to be verified as well.
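For readers outside turbulence, extended self-similarity conventionally refers to relative scaling between structure functions (standard definitions, supplied here for context and not taken from this paper):

    S_p(r) \equiv \langle |\delta u(r)|^p \rangle, \qquad
    S_p(r) \propto S_3(r)^{\zeta_p/\zeta_3},

which holds over a much wider range of scales r than the direct power laws S_p(r) ∝ r^{ζ_p}; the abstract's claim is that an analogous relative scaling holds in the two-dimensional metal-insulator transition.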
Physics of chewing in terrestrial mammals.
Virot, Emmanuel; Ma, Grace; Clanet, Christophe; Jung, Sunghwan
2017-03-07
Previous studies on chewing frequency across animal species have focused on finding a single universal scaling law. Controversy between the different models has been aroused without elucidating the variations in chewing frequency. In the present study we show that vigorous chewing is limited by the maximum force of muscle, so that the upper chewing frequency scales as the -1/3 power of body mass for large animals and as a constant frequency for small animals. On the other hand, gentle chewing to mix food uniformly without excess of saliva describes the lower limit of chewing frequency, scaling approximately as the -1/6 power of body mass. These physical constraints frame the -1/4 power law classically inferred from allometry of animal metabolic rates. All of our experimental data stay within these physical boundaries over six orders of magnitude of body mass regardless of food types.
Physics of chewing in terrestrial mammals
NASA Astrophysics Data System (ADS)
Virot, Emmanuel; Ma, Grace; Clanet, Christophe; Jung, Sunghwan
2017-03-01
Previous studies on chewing frequency across animal species have focused on finding a single universal scaling law. Controversy between the different models has been aroused without elucidating the variations in chewing frequency. In the present study we show that vigorous chewing is limited by the maximum force of muscle, so that the upper chewing frequency scales as the -1/3 power of body mass for large animals and as a constant frequency for small animals. On the other hand, gentle chewing to mix food uniformly without excess of saliva describes the lower limit of chewing frequency, scaling approximately as the -1/6 power of body mass. These physical constraints frame the -1/4 power law classically inferred from allometry of animal metabolic rates. All of our experimental data stay within these physical boundaries over six orders of magnitude of body mass regardless of food types.
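The three power laws in play can be set side by side (transcribing the abstract's exponents, with f the chewing frequency and M the body mass):

    f_{\max} \propto M^{-1/3} \ (\text{large animals; constant for small animals}), \qquad
    f_{\min} \propto M^{-1/6}, \qquad
    f_{\text{allometric}} \propto M^{-1/4},

so the classical -1/4 allometric law sits between the two physical bounds, consistent with the data spanning six orders of magnitude in body mass.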
Molecular Origins of Mesoscale Ordering in a Metalloamphiphile Phase
2015-01-01
Controlling the assembly of soft and deformable molecular aggregates into mesoscale structures is essential for understanding and developing a broad range of processes, including rare earth extraction and the cleaning of water, as well as for developing materials with unique properties. By combining synchrotron small- and wide-angle X-ray scattering with large-scale atomistic molecular dynamics simulations, we analyze here a metalloamphiphile–oil solution that organizes on multiple length scales. The molecules associate into aggregates, and aggregates flocculate into meso-ordered phases. Our study demonstrates that dipolar interactions, centered on the amphiphile headgroup, bridge ionic aggregate cores and drive aggregate flocculation. By identifying specific intermolecular interactions that drive mesoscale ordering in solution, we bridge two different length scales that are classically addressed separately. Our results highlight the importance of individual intermolecular interactions in driving mesoscale ordering. PMID:27163014
A biological rationale for musical scales.
Gill, Kamraan Z; Purves, Dale
2009-12-03
Scales are collections of tones that divide octaves into specific intervals used to create music. Since humans can distinguish about 240 different pitches over an octave in the mid-range of hearing, in principle a very large number of tone combinations could have been used for this purpose. Nonetheless, compositions in Western classical, folk and popular music as well as in many other musical traditions are based on a relatively small number of scales that typically comprise only five to seven tones. Why humans employ only a few of the enormous number of possible tone combinations to create music is not known. Here we show that the component intervals of the most widely used scales throughout history and across cultures are those with the greatest overall spectral similarity to a harmonic series. These findings suggest that humans prefer tone combinations that reflect the spectral characteristics of conspecific vocalizations. The analysis also highlights the spectral similarity among the scales used by different cultures.
High flexibility of DNA on short length scales probed by atomic force microscopy.
Wiggins, Paul A; van der Heijden, Thijn; Moreno-Herrero, Fernando; Spakowitz, Andrew; Phillips, Rob; Widom, Jonathan; Dekker, Cees; Nelson, Philip C
2006-11-01
The mechanics of DNA bending on intermediate length scales (5-100 nm) plays a key role in many cellular processes, and is also important in the fabrication of artificial DNA structures, but previous experimental studies of DNA mechanics have focused on longer length scales than these. We use high-resolution atomic force microscopy on individual DNA molecules to obtain a direct measurement of the bending energy function appropriate for scales down to 5 nm. Our measurements imply that the elastic energy of highly bent DNA conformations is lower than predicted by classical elasticity models such as the worm-like chain (WLC) model. For example, we found that on short length scales, spontaneous large-angle bends are many times more prevalent than predicted by the WLC model. We test our data and model with an interlocking set of consistency checks. Our analysis also shows how our model is compatible with previous experiments, which have sometimes been viewed as confirming the WLC.
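For reference, the worm-like chain energy that these measurements challenge is conventionally written as (textbook form, with persistence length ℓ_p ≈ 50 nm for double-stranded DNA; supplied here for context, not taken from the paper):

    \frac{E_{\text{WLC}}}{k_B T}
    = \frac{\ell_p}{2} \int_0^L \left(\frac{\mathrm{d}\theta}{\mathrm{d}s}\right)^2 \mathrm{d}s
    = \frac{\ell_p\,\theta^2}{2L}
    \quad (\text{uniform bend of angle } \theta \text{ over contour length } L),

which makes large-angle bends over a few nanometres exponentially rare, the prediction the AFM bend-angle statistics contradict.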
From symptoms to social functioning: differential effects of antidepressant therapy.
Kasper, S
1999-05-01
Significant impairments in social functioning frequently occur simultaneously with depressive symptoms. The implications of such impairments extend beyond the depressed individual to their family, friends and society at large. Classical rating scales such as the Hamilton rating scale for depression primarily assess the core symptoms of depression. A range of rating scales are available, both self-report and clinician-administered; however, many have been criticised for their unspecified conceptual background and for being complex and time-consuming. While antidepressants in general appear to improve social functioning, no clear advantage for any single class of agent has been reported. Recently, a new self-report rating scale, the Social Adaptation Self-evaluation Scale, has been developed and used to compare the novel selective noradrenaline reuptake inhibitor, reboxetine, with the selective serotonin reuptake inhibitor, fluoxetine. The noradrenergic agent, reboxetine, was shown to be significantly more effective in improving social functioning than the serotonergic agent, fluoxetine. These findings are consistent with previous observations that noradrenaline may preferentially improve vigilance, motivation and self-perception.
A Biological Rationale for Musical Scales
Gill, Kamraan Z.; Purves, Dale
2009-01-01
Scales are collections of tones that divide octaves into specific intervals used to create music. Since humans can distinguish about 240 different pitches over an octave in the mid-range of hearing [1], in principle a very large number of tone combinations could have been used for this purpose. Nonetheless, compositions in Western classical, folk and popular music as well as in many other musical traditions are based on a relatively small number of scales that typically comprise only five to seven tones [2]–[6]. Why humans employ only a few of the enormous number of possible tone combinations to create music is not known. Here we show that the component intervals of the most widely used scales throughout history and across cultures are those with the greatest overall spectral similarity to a harmonic series. These findings suggest that humans prefer tone combinations that reflect the spectral characteristics of conspecific vocalizations. The analysis also highlights the spectral similarity among the scales used by different cultures. PMID:19997506
A Systematic Comparison between Classical Optimal Scaling and the Two-Parameter IRT Model
ERIC Educational Resources Information Center
Warrens, Matthijs J.; de Gruijter, Dato N. M.; Heiser, Willem J.
2007-01-01
In this article, the relationship between two alternative methods for the analysis of multivariate categorical data is systematically explored. It is shown that the person score of the first dimension of classical optimal scaling correlates strongly with the latent variable for the two-parameter item response theory (IRT) model. Next, under the…
Hidden Statistics Approach to Quantum Simulations
NASA Technical Reports Server (NTRS)
Zak, Michail
2010-01-01
Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent structured data, and that quantum mechanisms can be devised and built to perform operations with these data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large amounts of highly correlated data. Unfortunately, these advantages of quantum mechanics came at a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world causes the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that preserves the three main properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing its state variables to be measured using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both the quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing its state variables to be measured using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of the quantum potential (which has been overlooked in previous treatments of the Madelung equation). The role of the transitional potential is to provide a jump from a deterministic state to a random state with prescribed probability density. This jump is triggered by a blowup instability due to violation of the Lipschitz condition generated by the quantum potential. As a result, the dynamics attains quantum properties on a classical scale. The model can be implemented physically as an analog VLSI-based (very-large-scale-integration-based) computer, or numerically on a digital computer. This work opens a way of developing fundamentally new algorithms for quantum simulations of exponentially complex problems that expand NASA's capabilities in conducting space activities. It is illustrated that the complexity of simulating particle interactions can be reduced from exponential to polynomial.
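For context, the Madelung form referred to above is the standard polar decomposition of the wave function (textbook transformation, not specific to this work):

    \psi = \sqrt{\rho}\, e^{iS/\hbar}, \qquad
    \partial_t \rho + \nabla \cdot \Bigl(\rho\, \frac{\nabla S}{m}\Bigr) = 0, \qquad
    \partial_t S + \frac{|\nabla S|^2}{2m} + V + Q = 0, \qquad
    Q = -\frac{\hbar^2}{2m} \frac{\nabla^2 \sqrt{\rho}}{\sqrt{\rho}},

where Q is the quantum potential whose transitional component the abstract singles out as the trigger for the jump from deterministic to random dynamics.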
Preservation of large-scale chromatin structure in FISH experiments
Hepperger, Claudia; Otten, Simone; von Hase, Johann
2006-01-01
The nuclear organization of specific endogenous chromatin regions can be investigated only by fluorescence in situ hybridization (FISH). One of two fixation procedures is typically applied: (1) buffered formaldehyde or (2) hypotonic shock with methanol acetic acid fixation followed by dropping of nuclei on glass slides and air drying. In this study, we compared the effects of these two procedures and some variations on nuclear morphology and on FISH signals. We analyzed mouse erythroleukemia and mouse embryonic stem cells because their clusters of subcentromeric heterochromatin provide an easy means to assess preservation of chromatin. Qualitative and quantitative analyses revealed that formaldehyde fixation provided good preservation of large-scale chromatin structures, while classical methanol acetic acid fixation after hypotonic treatment severely impaired nuclear shape and led to disruption of chromosome territories, heterochromatin structures, and large transgene arrays. Our data show that such preparations do not faithfully reflect in vivo nuclear architecture. PMID:17119992
Photosynthetic Energy Transfer at the Quantum/Classical Border.
Keren, Nir; Paltiel, Yossi
2018-06-01
Quantum mechanics diverges from the classical description of our world when very small scales or very fast processes are involved. Unlike classical mechanics, quantum effects cannot be easily related to our everyday experience and are often counterintuitive. Nevertheless, the dimensions and time scales of photosynthetic energy transfer processes put them close to the quantum/classical border, bringing them into the range of measurable quantum effects. Here we review recent advances in the field and suggest that photosynthetic processes can take advantage of the sensitivity of quantum effects to environmental 'noise' as a means of tuning exciton energy transfer efficiency. If true, this design principle could be a basis for 'nontrivial' coherent wave-property nanodevices. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sagui, Celeste
2006-03-01
An accurate and numerically efficient treatment of electrostatics is essential for biomolecular simulations, as electrostatics stabilizes much of the delicate 3-d structure associated with biomolecules. Currently, force fields such as AMBER and CHARMM assign ``partial charges'' to every atom in a simulation in order to model the interatomic electrostatic forces, so that the calculation of the electrostatics rapidly becomes the computational bottleneck in large-scale simulations. There are two main issues associated with the current treatment of classical electrostatics: (i) how does one eliminate, in a physically meaningful way, the artifacts associated with the point charges used in the force fields (e.g., the underdetermined nature of the current RESP fitting procedure for large, flexible molecules)? (ii) how does one efficiently simulate the very costly long-range electrostatic interactions? Recently, we have dealt with both of these challenges as follows. In order to improve the description of the molecular electrostatic potentials (MEPs), a new distributed multipole analysis based on localized functions -- Wannier, Boys, and Edmiston-Ruedenberg -- was introduced, which allows for a first-principles calculation of the partial charges and multipoles. Through a suitable generalization of the particle mesh Ewald (PME) and multigrid methods, one can treat electrostatic multipoles all the way up to hexadecapoles without prohibitive extra cost. The importance of these methods for large-scale simulations will be discussed and exemplified by simulations of polarizable DNA models.
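As a reference point for the multipole-PME generalization described above, a sketch of the classical point-charge Ewald splitting (Gaussian units; β is the splitting parameter and S(k) the charge structure factor; the multipole extension adds higher-order terms not shown):

```latex
E = E_{\mathrm{real}} + E_{\mathrm{recip}} + E_{\mathrm{self}}
E_{\mathrm{real}} = \frac{1}{2}\sum_{i\neq j} q_i q_j\,
    \frac{\operatorname{erfc}(\beta r_{ij})}{r_{ij}}
E_{\mathrm{recip}} = \frac{1}{2V}\sum_{\mathbf{k}\neq 0}
    \frac{4\pi}{k^2}\, e^{-k^2/4\beta^2}\, |S(\mathbf{k})|^2,
\qquad S(\mathbf{k}) = \sum_j q_j\, e^{i\mathbf{k}\cdot\mathbf{r}_j}
E_{\mathrm{self}} = -\frac{\beta}{\sqrt{\pi}}\sum_i q_i^2
```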
A "Stepping Stone" Approach for Obtaining Quantum Free Energies of Hydration.
Sampson, Chris; Fox, Thomas; Tautermann, Christofer S; Woods, Christopher; Skylaris, Chris-Kriton
2015-06-11
We present a method which uses DFT (quantum, QM) calculations to improve free energies of binding computed with classical force fields (classical, MM). To overcome the incomplete overlap of the configurational spaces of MM and QM, we use a hybrid Monte Carlo approach to quickly generate correct ensembles of structures for intermediate states between the MM and a QM/MM description, hence taking into account a large fraction of the electronic polarization of the quantum system while still being able to use thermodynamic integration to compute the free energy of the transition between MM and QM/MM. Then, we perform a final transition from QM/MM to full QM using a one-step free energy perturbation approach. By using QM/MM as a stepping stone toward the full QM description, we find very small convergence errors (<1 kJ/mol) in the transition to full QM. We apply this method to compute hydration free energies, and we obtain consistent improvements over the MM values for all molecules used in this study. This approach requires large-scale DFT calculations, as the full QM systems involve the ligands and all waters in their simulation cells, so the linear-scaling DFT code ONETEP was used for these calculations.
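A minimal sketch of the one-step free energy perturbation (Zwanzig) estimator used for the final QM/MM-to-QM step; the function name, energy gaps, and temperature below are hypothetical illustrations, not the authors' data or code:

```python
import numpy as np

def fep_delta_g(delta_e, temperature=298.15):
    """One-step free energy perturbation (Zwanzig) estimate.

    delta_e : array of E_QM - E_QM/MM energy gaps (kJ/mol), evaluated on
              configurations sampled from the QM/MM (reference) ensemble.
    Returns the free energy of the QM/MM -> QM transition in kJ/mol.
    """
    kT = 8.314462618e-3 * temperature  # gas constant in kJ/(mol K) times T
    # Subtract the minimum gap before exponentiating, for numerical stability.
    shift = delta_e.min()
    return shift - kT * np.log(np.mean(np.exp(-(delta_e - shift) / kT)))

# Hypothetical usage: a narrow gap distribution signals good phase-space
# overlap, which is what the QM/MM stepping stone is designed to ensure.
rng = np.random.default_rng(0)
gaps = rng.normal(loc=5.0, scale=0.5, size=200)
print(f"dG(QM/MM -> QM) ~ {fep_delta_g(gaps):.2f} kJ/mol")
```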
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saint-Michel, B.; Herbert, E.
2014-12-15
We report measurements of the dissipation in the Superfluid helium high REynolds number von Kármán flow experiment for different forcing conditions. Statistically steady flows are reached; they display a hysteretic behavior similar to what has been observed in a 1:4 scale water experiment. Our macroscopic measurements indicate no noticeable difference between classical and superfluid flows, thereby providing evidence of the same dissipation scaling laws in the two phases. A detailed study of the evolution of the hysteresis cycle with the Reynolds number supports the idea that the stability of the steady states of classical turbulence in this closed flow is partly governed by the dissipative scales. It also supports the idea that the normal and the superfluid components at these temperatures (1.6 K) are locked down to the dissipative length scale.
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
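The abstract's implementation is in Julia within MADS; purely as an illustration of the core "sketching" idea, a NumPy sketch on a linearized toy problem (all names and sizes hypothetical):

```python
import numpy as np

def sketch_observations(forward_matrix, observations, sketch_size, seed=0):
    """Compress a tall inverse problem with a random sketching matrix.

    forward_matrix : (n_obs, n_params) linearized forward operator
    observations   : (n_obs,) data vector
    sketch_size    : reduced dimension k << n_obs
    Returns the sketched operator and data, S @ A and S @ d.
    """
    n_obs = observations.shape[0]
    rng = np.random.default_rng(seed)
    # Dense Gaussian sketch, scaled so E[S^T S] = I.
    S = rng.standard_normal((sketch_size, n_obs)) / np.sqrt(sketch_size)
    return S @ forward_matrix, S @ observations

# Hypothetical toy problem: 100,000 observations, 50 parameters.
rng = np.random.default_rng(1)
A = rng.standard_normal((100_000, 50))
x_true = rng.standard_normal(50)
d = A @ x_true + 0.01 * rng.standard_normal(100_000)
A_s, d_s = sketch_observations(A, d, sketch_size=500)
x_hat = np.linalg.lstsq(A_s, d_s, rcond=None)[0]
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```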
Thermodynamics of stoichiometric biochemical networks in living systems far from equilibrium.
Qian, Hong; Beard, Daniel A
2005-04-22
The principles of thermodynamics apply to both equilibrium and nonequilibrium biochemical systems. The mathematical machinery of classical thermodynamics, however, mainly applies to systems in equilibrium. We introduce a thermodynamic formalism for the study of metabolic biochemical reaction (open, nonlinear) networks in both time-dependent and time-independent nonequilibrium states. Classical concepts of equilibrium thermodynamics (enthalpy, entropy, and Gibbs free energy of biochemical reaction systems) are generalized to nonequilibrium settings. Chemical motive force, heat dissipation rate, and entropy production (creation) rate, key concepts in nonequilibrium systems, are introduced. Dynamic equations for the thermodynamic quantities are presented in terms of the key observables of a biochemical network -- the stoichiometric matrix Q, the reaction fluxes J, and the chemical potentials of the species μ -- without invoking empirical rate laws. Energy conservation and the Second Law are established for steady-state and dynamic biochemical networks. The theory provides the physicochemical basis for analyzing large-scale metabolic networks in living organisms.
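A hedged sketch of the bookkeeping implied above, assuming Q maps reaction fluxes to species production rates (the paper's precise definitions may differ):

```latex
% Species concentrations c driven by reaction fluxes J:
\frac{d\mathbf{c}}{dt} = Q\,\mathbf{J}
% Reaction affinities (the "chemical motive force") from the potentials:
\mathbf{A} = -\,Q^{\mathsf{T}}\boldsymbol{\mu}
% Entropy production rate, non-negative by the Second Law:
\sigma = \frac{1}{T}\,\mathbf{J}\cdot\mathbf{A} \;\ge\; 0
```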
Time-Series Analysis of Intermittent Velocity Fluctuations in Turbulent Boundary Layers
NASA Astrophysics Data System (ADS)
Zayernouri, Mohsen; Samiee, Mehdi; Meerschaert, Mark M.; Klewicki, Joseph
2017-11-01
Classical turbulence theory is modified under the inhomogeneities produced by the presence of a wall. In this regard, we propose a new time series model for the streamwise velocity fluctuations in the inertial sub-layer of turbulent boundary layers. The new model employs tempered fractional calculus and seamlessly extends the classical 5/3 spectral model of Kolmogorov in the inertial subrange to the whole spectrum from large to small scales. Moreover, the proposed time-series model allows the quantification of data uncertainties in the underlying stochastic cascade of turbulent kinetic energy. The model is tested using well-resolved streamwise velocity measurements up to friction Reynolds numbers of about 20,000. The physics of the energy cascade is briefly described within the context of the determined model parameters. This work was supported by the AFOSR Young Investigator Program (YIP) award (FA9550-17-1-0150) and partially by MURI/ARO (W911NF-15-1-0562).
Hybrid propulsion technology program
NASA Technical Reports Server (NTRS)
1990-01-01
Technology was identified which will enable application of hybrid propulsion to manned and unmanned space launch vehicles. Two design concepts are proposed. The first is a hybrid propulsion system using the classical method of regression (classical hybrid) resulting from the flow of oxidizer across a fuel grain surface. The second system uses a self-sustaining gas generator (gas generator hybrid) to produce a fuel-rich exhaust that is mixed with oxidizer in a separate combustor. Both systems offer cost and reliability improvements over the existing solid rocket booster and proposed liquid boosters. The designs were evaluated using life cycle cost and reliability analyses. The program consisted of: (1) identification and evaluation of candidate oxidizers and fuels; (2) preliminary evaluation of booster design concepts; (3) preparation of a detailed point design including life cycle cost and reliability analyses; (4) identification of those hybrid-specific technologies needing improvement; and (5) preparation of a technology acquisition plan and large-scale demonstration plan.
NASA Technical Reports Server (NTRS)
Srivastava, Deepak; Saini, Subhash (Technical Monitor)
1998-01-01
The tubular forms of fullerenes popularly known as carbon nanotubes are experimentally produced in single-wall, multiwall, and rope configurations. The nanotubes and nanoropes have been shown to exhibit unusual mechanical and electronic properties. Single-wall nanotubes exhibit both semiconducting and metallic behavior. In short, defect-free lengths they are the strongest fibers known, remaining unbroken even when bent in half. Grown into ropes, their tensile strength is approximately 100 times greater than that of steel at only one sixth the weight. Employing large-scale classical and quantum molecular dynamics simulations, we will explore the use of carbon nanotubes and carbon nanotube junctions in 2-, 3-, and 4-point molecular electronic device components, dynamic strength characterization for compressive, bending, and torsional strains, and chemical functionalization for possible use in a nanoscale molecular motor. The above is unclassified material produced for non-competitive basic research in the nanotechnology area.
Efficient Quantum Transmission in Multiple-Source Networks
Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun
2014-01-01
A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. Transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters, so that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design and the reduction of the multiple-source network under the assumption of restricted maximum flow, an optimal scheme is proposed for the specially quantized multiple-source network. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency. PMID:24691590
Role of inhibitory control in modulating focal seizure spread.
Liou, Jyun-You; Ma, Hongtao; Wenzel, Michael; Zhao, Mingrui; Baird-Daniel, Eliza; Smith, Elliot H; Daniel, Andy; Emerson, Ronald; Yuste, Rafael; Schwartz, Theodore H; Schevon, Catherine A
2018-05-10
Focal seizure propagation is classically thought to be spatially contiguous. However, distribution of seizures through a large-scale epileptic network has been theorized. Here, we used a multielectrode array, wide field calcium imaging, and two-photon calcium imaging to study focal seizure propagation pathways in an acute rodent neocortical 4-aminopyridine model. Although ictal neuronal bursts did not propagate beyond a 2-3-mm region, they were associated with hemisphere-wide field potential fluctuations and parvalbumin-positive interneuron activity outside the seizure focus. While bicuculline surface application enhanced contiguous seizure propagation, focal bicuculline microinjection at sites distant to the 4-aminopyridine focus resulted in epileptic network formation with maximal activity at the two foci. Our study suggests that both classical and epileptic network propagation can arise from localized inhibition defects, and that the network appearance can arise in the context of normal brain structure without requirement for pathological connectivity changes between sites.
Reliable quantum certification of photonic state preparations
Aolita, Leandro; Gogolin, Christian; Kliesch, Martin; Eisert, Jens
2015-01-01
Quantum technologies promise a variety of exciting applications. Even though impressive progress has been achieved recently, a major bottleneck currently is the lack of practical certification techniques. The challenge consists of ensuring that classically intractable quantum devices perform as expected. Here we present an experimentally friendly and reliable certification tool for photonic quantum technologies: an efficient certification test for experimental preparations of multimode pure Gaussian states, pure non-Gaussian states generated by linear-optical circuits with Fock-basis states of constant boson number as inputs, and pure states generated from the latter class by post-selecting with Fock-basis measurements on ancillary modes. Only classical computing capabilities and homodyne or heterodyne detection are required. Minimal assumptions are made on the noise or experimental capabilities of the preparation. The method constitutes a step forward in many-body quantum certification, which is ultimately about testing quantum mechanics at large scales. PMID:26577800
Conditions for Aeronomic Applicability of the Classical Electron Heat Conduction Formula
NASA Technical Reports Server (NTRS)
Cole, K. D.; Hoegy, W. R.
1998-01-01
Conditions for the applicability of the classical formula for heat conduction by electrons in an ionized gas are investigated. In a fully ionized gas (ν_ei ≫ ν_en), when the mean free path for electron-electron (or electron-ion) collisions is much larger than the characteristic thermal scale length of the observed system, the conditions for applicability break down. In the case of the Venus ionosphere this breakdown is indicated for a large fraction of the electron temperature data from altitudes greater than 180 km, for electron densities less than 10^4 cm^-3. In a partially ionized gas such that ν_en ≫ ν_ei, there is breakdown of the formula not only when the mean free path of electrons greatly exceeds the thermal scale length, but also when the gradient of neutral particle density exceeds the electron thermal gradient. It is shown that electron heat conduction may be neglected in estimating the temperature of Joule-heated electrons by observed strong 100 Hz electric fields when the conduction flux is limited by the saturation flux. The results of this paper support our earlier aeronomical arguments against the hypothesis of planetary-scale whistlers for the 100 Hz electric field signal. In turn this means that data from the 100 Hz signal may not be used to support the case for lightning on Venus.
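For orientation, a sketch of the standard applicability criterion at issue (the paper's precise conditions, particularly for the partially ionized case, are more detailed):

```latex
% Classical (Spitzer-type) electron heat flux:
q_{\mathrm{cl}} = -\kappa_e \nabla T_e, \qquad \kappa_e \propto T_e^{5/2}
% Valid only while the electron mean free path is short compared with the
% thermal scale length:
\lambda_{\mathrm{mfp}} \;\ll\; L_T \equiv \frac{T_e}{|\nabla T_e|}
% Otherwise the flux is bounded by the free-streaming (saturation) value:
q_{\mathrm{sat}} \sim n_e k_B T_e\, v_{\mathrm{th},e}
```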
NASA Astrophysics Data System (ADS)
Pabon, Rommel; Barnard, Casey; Ukeiley, Lawrence; Sheplak, Mark
2016-11-01
Particle image velocimetry (PIV) and fluctuating wall shear stress experiments were performed on a flat plate turbulent boundary layer (TBL) under zero pressure gradient conditions. The fluctuating wall shear stress was measured using a microelectromechanical 1 mm × 1 mm floating-element capacitive shear stress sensor (CSSS) developed at the University of Florida. The experiments elucidated the imprint of the organized motions in a TBL on the wall shear stress through its direct measurement. Spatial autocorrelation of the streamwise velocity from the PIV snapshots revealed large-scale motions that scale on the order of the boundary layer thickness. However, the captured inclination angle was lower than that determined using the classic method based on wall shear stress and hot-wire anemometry (HWA) temporal cross-correlations and a frozen-field hypothesis with an assumed convection velocity. The current study suggests that the large size of these motions begins to degrade the applicability of the frozen-field hypothesis for the time-resolved HWA experiments. The simultaneous PIV and CSSS measurements are also used for spatial reconstruction of the velocity field during conditionally sampled intense wall shear stress events. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1315138.
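A minimal sketch of the classic cross-correlation inclination-angle estimate referenced above; the function, sampling rate, and synthetic check are hypothetical illustrations, not the experimental pipeline:

```python
import numpy as np

def inclination_angle(tau_wall, u_probe, dt, probe_height, u_convect):
    """Structure-inclination estimate from a temporal cross-correlation peak.

    tau_wall : wall shear stress time series (CSSS-like sensor)
    u_probe  : streamwise velocity time series at height probe_height [m]
    dt       : sampling interval [s]
    u_convect: assumed convection velocity (frozen-field hypothesis) [m/s]
    Returns the inclination angle in degrees.
    """
    a = tau_wall - tau_wall.mean()
    b = u_probe - u_probe.mean()
    lags = np.arange(-len(a) + 1, len(a))
    xcorr = np.correlate(a, b, mode="full")
    lag_peak = lags[np.argmax(np.abs(xcorr))] * dt
    # Convert the time delay to a streamwise offset, then to an angle.
    dx = u_convect * abs(lag_peak)
    return np.degrees(np.arctan2(probe_height, dx))

# Hypothetical synthetic check: wall signal lags the velocity by 0.01 s,
# so with u_convect = 1 m/s and a 0.01 m probe height the angle is ~45 deg.
fs, n = 1000.0, 4_000
rng = np.random.default_rng(2)
u = rng.standard_normal(n)
tau = np.roll(u, 10) + 0.1 * rng.standard_normal(n)  # 10 samples = 0.01 s
print(inclination_angle(tau, u, 1.0 / fs, probe_height=0.01, u_convect=1.0))
```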
Late stages of accumulation and early evolution of the planets
NASA Technical Reports Server (NTRS)
Vityazev, Andrey V.; Perchernikova, G. V.
1991-01-01
Recently developed solutions are discussed for problems traditionally considered fundamental in classical solar system cosmogony: determination of planetary orbit distribution patterns, values for the mean eccentricities and orbital inclinations of the planets, and the rotation periods and rotation axis inclinations of the planets. Two important cosmochemical aspects of accumulation are examined: the time scale for gas loss from the terrestrial planet zone, and the composition of the planets in terms of isotope data. It is concluded that the early onset of planet differentiation is a function of the heating of protoplanets during collisions with large (thousands of kilometers) bodies. Energetics, heat and mass transfer processes, and the characteristic time scales of these processes at the early stages of planet evolution are considered.
NASA Astrophysics Data System (ADS)
Kooi, Henk; Beaumont, Christopher
1996-02-01
Linear systems analysis is used to investigate the response of a surface processes model (SPM) to tectonic forcing. The SPM calculates subcontinental scale denudational landscape evolution on geological timescales (1 to hundreds of million years) as the result of simultaneous hillslope transport, modeled by diffusion, and fluvial transport, modeled by advection and reaction. The tectonically forced SPM accommodates the large-scale behavior envisaged in classical and contemporary conceptual geomorphic models and provides a framework for their integration and unification. The following three model scales are considered: micro-, meso-, and macroscale. The concepts of dynamic equilibrium and grade are quantified at the microscale for segments of uniform gradient subject to tectonic uplift. At the larger meso- and macroscales (which represent individual interfluves and landscapes including a number of drainage basins, respectively) the system response to tectonic forcing is linear for uplift geometries that are symmetric with respect to baselevel and which impose a fully integrated drainage to baselevel. For these linear models the response time and the transfer function as a function of scale characterize the model behavior. Numerical experiments show that the styles of landscape evolution depend critically on the timescales of the tectonic processes in relation to the response time of the landscape. When tectonic timescales are much longer than the landscape response time, the resulting dynamic equilibrium landscapes correspond to those envisaged by Hack (1960). When tectonic timescales are of the same order as the landscape response time and when tectonic variations take the form of pulses (much shorter than the response time), evolving landscapes conform to the Penck type (1972) and to the Davis (1889, 1899) and King (1953, 1962) type frameworks, respectively. The behavior of the SPM highlights the importance of phase shifts or delays of the landform response and sediment yield in relation to the tectonic forcing. Finally, nonlinear behavior resulting from more general uplift geometries is discussed. A number of model experiments illustrate the importance of "fundamental form," which is an expression of the conformity of antecedent topography with the current tectonic regime. Lack of conformity leads to models that exhibit internal thresholds and a complex response.
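A hedged sketch of the two transport laws named above; the coefficients and the fluvial closure follow the SPM's formulation only schematically, with κ a hillslope diffusivity, q_s the sediment flux, q_eq its carrying capacity, and l_f a reaction length:

```latex
% Hillslope transport as diffusion of topographic height h:
\left.\frac{\partial h}{\partial t}\right|_{\mathrm{hillslope}}
   = \kappa\,\nabla^2 h
% Fluvial transport as advection with a reaction term relaxing the
% sediment flux toward its carrying capacity (erosion where q_eq > q_s):
\left.\frac{\partial h}{\partial t}\right|_{\mathrm{fluvial}}
   = -\,\frac{\partial q_s}{\partial x},
\qquad
\frac{\partial q_s}{\partial x} = \frac{q_{eq} - q_s}{l_f}
```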
Computational Nanotechnology of Materials, Devices, and Machines: Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Srivastava, Deepak; Kwak, Dolhan (Technical Monitor)
2000-01-01
The mechanics and chemistry of carbon nanotubes have relevance for their numerous electronic applications. Mechanical deformations such as bending and twisting affect the nanotubes' conductive properties, while at the same time the tubes possess high strength and elasticity. Two principal techniques were utilized: large-scale classical molecular dynamics on a shared-memory architecture machine, and a quantum molecular dynamics methodology. In carbon-based electronics, nanotubes are used as molecular wires, with topological defects mediated through various means. Nanotubes can be connected to form junctions.
Computational Nanotechnology of Materials, Electronics and Machines: Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Srivastava, Deepak
2001-01-01
This report presents the goals and research of the Integrated Product Team (IPT) on Devices and Nanotechnology. NASA's needs for this technology are discussed and then related to the research focus of the team. The two areas of focus for technique development are: 1) large scale classical molecular dynamics on a shared memory architecture machine; and 2) quantum molecular dynamics methodology. The areas of focus for research are: 1) nanomechanics/materials; 2) carbon based electronics; 3) BxCyNz composite nanotubes and junctions; 4) nano mechano-electronics; and 5) nano mechano-chemistry.
An efficient quantum scheme for Private Set Intersection
NASA Astrophysics Data System (ADS)
Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun
2016-01-01
Private Set Intersection allows a client to privately compute the set intersection with the collaboration of the server, one of the most fundamental problems in privacy-preserving multiparty collaborative computation. In this paper, we first present a cheat-sensitive quantum scheme for Private Set Intersection. Compared with classical schemes, our scheme has lower communication complexity, which is independent of the size of the server's set. It is therefore very suitable for big data services in the Cloud and for large-scale client-server networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fishman, S., E-mail: fishman@physics.technion.ac.il; Soffer, A., E-mail: soffer@math.rutgers.edu
2016-07-15
We employ the recently developed multi-time scale averaging method to study the large time behavior of slowly changing (in time) Hamiltonians. We treat some known cases in a new way, such as the Zener problem, and we give another proof of the adiabatic theorem in the gapless case. We prove a new uniform ergodic theorem for slowly changing unitary operators. This theorem is then used to derive the adiabatic theorem, to develop the scattering theory for such Hamiltonians, and to prove some classical propagation estimates and asymptotic completeness.
Effects of Structural Deformation and Tube Chirality on Electronic Conductance of Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Svizhenko, Alexei; Maiti, Amitesh; Anantram, M. P.; Biegel, Bryan A. (Technical Monitor)
2002-01-01
A combination of large-scale classical force-field (UFF), density functional theory (DFT), and tight-binding Green's function transport calculations is used to study the electronic properties of carbon nanotubes under twist, bending, and atomic force microscope (AFM) tip deformation. We found that, in agreement with experiment, a significant change in electronic conductance can be induced by AFM-tip deformation of metallic zigzag tubes and by twist deformation of armchair tubes. The effect is explained in terms of band structure changes under deformation.
Molecular biology of bladder cancer.
Martin-Doyle, William; Kwiatkowski, David J
2015-04-01
Classic as well as more recent large-scale genomic analyses have uncovered multiple genes and pathways important for bladder cancer development. Genes involved in cell-cycle control, chromatin regulation, and receptor tyrosine and PI3 kinase-mammalian target of rapamycin signaling pathways are commonly mutated in muscle-invasive bladder cancer. Expression-based analyses have identified distinct types of bladder cancer that are similar to subsets of breast cancer, and have prognostic and therapeutic significance. These observations are leading to novel therapeutic approaches in bladder cancer, providing optimism for therapeutic progress. Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Roeck, W.; Maes, C.; Schütz, M.
2015-02-15
We study the projection on classical spins starting from quantum equilibria. We show Gibbsianness or quasi-locality of the resulting classical spin system for a class of gapped quantum systems at low temperatures including quantum ground states. A consequence of Gibbsianness is the validity of a large deviation principle in the quantum system which is known and here recovered in regimes of high temperature or for thermal states in one dimension. On the other hand, we give an example of a quantum ground state with strong nonlocality in the classical restriction, giving rise to what we call measurement induced entanglement and still satisfying a large deviation principle.
Classically and quantum stable emergent universe from conservation laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campo, Sergio del; Herrera, Ramón; Guendelman, Eduardo I.
It has recently been pointed out by Mithani and Vilenkin [1-4] that certain emergent universe scenarios which are classically stable are nevertheless semiclassically unstable to collapse. Here, we show that there is a class of emergent universes derived from scale invariant two measures theories with spontaneous symmetry breaking (s.s.b) of the scale invariance, which have classical stability and do not suffer the instability pointed out by Mithani and Vilenkin towards collapse. We find that this stability is due to the presence of a symmetry in the 'emergent phase', which, together with the nonlinearities of the theory, does not allow the FLRW scale factor to become smaller than a certain minimum value a_0 in a certain protected region.
Integrability in AdS/CFT correspondence: quasi-classical analysis
NASA Astrophysics Data System (ADS)
Gromov, Nikolay
2009-06-01
In this review, we consider a quasi-classical method applicable to integrable field theories which is based on a classical integrable structure—the algebraic curve. We apply it to the Green-Schwarz superstring on the AdS5 × S5 space. We show that the proposed method reproduces perfectly the earlier results obtained by expanding the string action for some simple classical solutions. The construction is explicitly covariant and is not based on a particular parameterization of the fields, and as a result it is free from ambiguities. On the other hand, the finite size corrections in a particularly important scaling limit are studied in this paper for a system of Bethe equations. For the general superalgebra su(N|K), the result for the 1/L corrections is obtained. We find an integral equation which describes these corrections in a closed form. As an application, we consider the conjectured Beisert-Staudacher (BS) equations with the Hernandez-Lopez dressing factor, where the finite size corrections should reproduce quasi-classical results around a general classical solution. Indeed, we show that our integral equation can be interpreted as a sum over all physical fluctuations and thus prove the complete one-loop consistency of the BS equations. We demonstrate that any local conserved charge (including the AdS energy) computed from the BS equations is indeed given at one loop by the sum of the charges of fluctuations, with exponential precision for large S5 angular momentum of the string. As an independent result, the BS equations in an su(2) sub-sector were derived from Zamolodchikov's S-matrix. The paper is based on the author's PhD thesis.
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
2017-10-26
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
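A minimal 1-D NumPy illustration of the SPDE sampling idea; finite differences stand in for the paper's mixed finite element formulation, and all parameters are hypothetical:

```python
import numpy as np

def sample_gaussian_field_1d(n, kappa, seed=0):
    """Sample an approximately Matern-type Gaussian field on a 1-D grid by
    solving the SPDE (kappa^2 - Laplacian) u = W with white-noise forcing W.
    Larger kappa gives a shorter correlation length (roughly 1/kappa).
    """
    h = 1.0 / (n - 1)
    rng = np.random.default_rng(seed)
    # Tridiagonal operator for kappa^2 u - u'' with homogeneous Dirichlet BCs.
    main = np.full(n, kappa**2 + 2.0 / h**2)
    off = np.full(n - 1, -1.0 / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    # White noise scaled by 1/sqrt(h) so its variance is grid-independent.
    w = rng.standard_normal(n) / np.sqrt(h)
    return np.linalg.solve(A, w)

# Hypothetical usage: one realization on a 512-point grid.
u = sample_gaussian_field_1d(n=512, kappa=30.0)
print(u.shape, float(u.std()))
```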
A networked voting rule for democratic representation
Brigatti, Edgardo; Moreno, Yamir
2018-01-01
We introduce a general framework for exploring the problem of selecting a committee of representatives with the aim of studying a networked voting rule based on a decentralized large-scale platform, which can assure a strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain a high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representatives exists in the form of an inverse square root law and that the normalized committee size approximately scales with the inverse of the community size, allowing the scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals’ interactions, except for the presence of few individuals with very high connectivity which can have a marginal negative effect in the committee selection process. PMID:29657817
A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto
In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen--Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
[Industrialization condition and development strategy of Notopterygii Rhizoma et Radix].
Jiang, Shun-Yuan; Sun, Hui; Wang, Hong-Lan; Ma, Xiao-Jun; Qin, Ji-Hong; Xin, Jun; Sun, Hong-Bing; Du, Jiu-Zhen; Yin, Li
2017-07-01
Notopterygii Rhizoma et Radix, the underground part of Notopterygium incisum and N. franchetii, is used as a classical traditional Chinese medicine and as a raw material for 262 Chinese patent drugs currently produced in 694 pharmaceutical factories. It plays an important role in the Chinese medicine industry as a whole, with irreplaceable economic and social value. However, the wild resource has been abruptly depleted, and large-scale artificial cultivation has not yet been practicable. In this study, the utilization history and industrialization status of Notopterygii Rhizoma et Radix are summarized. Resource distribution, ecological suitability, and core technologies for seed production, seedling breeding, and large-scale cultivation have been reported in current studies, and the basic conditions are already in place for industrialized production of Notopterygii Rhizoma et Radix. However, some key technical problems still need to be solved in further research, and some policy dimensions need attention in the coming industrialized cultivation of Notopterygii Rhizoma et Radix. Copyright© by the Chinese Pharmaceutical Association.
Ingestion of bacterially expressed double-stranded RNA inhibits gene expression in planarians.
Newmark, Phillip A; Reddien, Peter W; Cebrià, Francesc; Sánchez Alvarado, Alejandro
2003-09-30
Freshwater planarian flatworms are capable of regenerating complete organisms from tiny fragments of their bodies; the basis for this regenerative prowess is an experimentally accessible stem cell population that is present in the adult planarian. The study of these organisms, classic experimental models for investigating metazoan regeneration, has been revitalized by the application of modern molecular biological approaches. The identification of thousands of unique planarian ESTs, coupled with large-scale whole-mount in situ hybridization screens, and the ability to inhibit planarian gene expression through double-stranded RNA-mediated genetic interference, provide a wealth of tools for studying the molecular mechanisms that regulate tissue regeneration and stem cell biology in these organisms. Here we show that, as in Caenorhabditis elegans, ingestion of bacterially expressed double-stranded RNA can inhibit gene expression in planarians. This inhibition persists throughout the process of regeneration, allowing phenotypes with disrupted regenerative patterning to be identified. These results pave the way for large-scale screens for genes involved in regenerative processes.
Fractional Transport in Strongly Turbulent Plasmas.
Isliker, Heinz; Vlahos, Loukas; Constantinescu, Dana
2017-07-28
We analyze statistically the energization of particles in a large-scale environment of strong turbulence that is fragmented into a large number of distributed current filaments. The turbulent environment is generated through strongly perturbed, 3D, resistive magnetohydrodynamics simulations, and it emerges naturally from the nonlinear evolution, without a specific reconnection geometry being set up. Based on test-particle simulations, we estimate the transport coefficients in energy space for use in the classical Fokker-Planck (FP) equation, and we show that the latter fails to reproduce the simulation results. The reason is that transport in energy space is highly anomalous (strange): the particles perform Lévy flights, and the energy distributions show extended power-law tails. We therefore motivate the use of, and derive the specific form of, a fractional transport equation (FTE); we determine its parameters and the order of the fractional derivatives from the simulation data, and we show that the FTE is able to reproduce the high energy part of the simulation data very well. The procedure for determining the FTE parameters also makes clear that it is the analysis of the simulation data that allows us to decide whether a classical FP equation or an FTE is appropriate.
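For contrast, schematic forms of the two competing descriptions (a hedged sketch; the paper determines the actual operators, coefficients, and fractional orders from the simulation data):

```latex
% Classical Fokker-Planck equation in energy space (drift F, diffusion D):
\frac{\partial P}{\partial t}
  = -\frac{\partial}{\partial E}\left[F(E)\,P\right]
    + \frac{\partial^2}{\partial E^2}\left[D(E)\,P\right]
% A representative fractional transport equation: fractional orders beta
% (time) and alpha (energy) replace the integer derivatives:
\frac{\partial^{\beta} P}{\partial t^{\beta}}
  = D_{\alpha,\beta}\,\frac{\partial^{\alpha} P}{\partial |E|^{\alpha}},
\qquad 0<\beta\le 1,\quad 0<\alpha\le 2
```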
Fractional Transport in Strongly Turbulent Plasmas
NASA Astrophysics Data System (ADS)
Isliker, Heinz; Vlahos, Loukas; Constantinescu, Dana
2017-07-01
We analyze statistically the energization of particles in a large-scale environment of strong turbulence that is fragmented into a large number of distributed current filaments. The turbulent environment is generated through strongly perturbed, 3D, resistive magnetohydrodynamics simulations, and it emerges naturally from the nonlinear evolution, without a specific reconnection geometry being set up. Based on test-particle simulations, we estimate the transport coefficients in energy space for use in the classical Fokker-Planck (FP) equation, and we show that the latter fails to reproduce the simulation results. The reason is that transport in energy space is highly anomalous (strange): the particles perform Lévy flights, and the energy distributions show extended power-law tails. We therefore motivate the use of, and derive the specific form of, a fractional transport equation (FTE); we determine its parameters and the order of the fractional derivatives from the simulation data, and we show that the FTE is able to reproduce the high energy part of the simulation data very well. The procedure for determining the FTE parameters also makes clear that it is the analysis of the simulation data that allows us to decide whether a classical FP equation or an FTE is appropriate.
How decoherence affects the probability of slow-roll eternal inflation
NASA Astrophysics Data System (ADS)
Boddy, Kimberly K.; Carroll, Sean M.; Pollack, Jason
2017-07-01
Slow-roll inflation can become eternal if the quantum variance of the inflaton field around its slowly rolling classical trajectory is converted into a distribution of classical spacetimes inflating at different rates, and if the variance is large enough compared to the rate of classical rolling that the probability of an increased rate of expansion is sufficiently high. Both of these criteria depend sensitively on whether and how perturbation modes of the inflaton interact and decohere. Decoherence is inevitable as a result of gravitationally sourced interactions whose strength is proportional to the slow-roll parameters. However, the weakness of these interactions means that decoherence is typically delayed until several Hubble times after modes grow beyond the Hubble scale. We present perturbative evidence that decoherence of long-wavelength inflaton modes indeed leads to an ensemble of classical spacetimes with differing cosmological evolutions. We introduce the notion of per-branch observables—expectation values with respect to the different decohered branches of the wave function—and show that the evolution of modes on individual branches varies from branch to branch. Thus, single-field slow-roll inflation fulfills the quantum-mechanical criteria required for the validity of the standard picture of eternal inflation. For a given potential, the delayed decoherence can lead to slight quantitative adjustments to the regime in which the inflaton undergoes eternal inflation.
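For orientation, a sketch of the standard slow-roll eternal-inflation criterion that the decoherence analysis above underwrites: per Hubble time, the quantum spread of the inflaton competes with its classical roll, and eternal inflation requires the ratio to be sufficiently large,

```latex
\frac{\delta\phi_{\mathrm{qu}}}{\delta\phi_{\mathrm{cl}}}
  \;\sim\; \frac{H/2\pi}{\dot{\phi}/H}
  \;=\; \frac{H^{2}}{2\pi\,|\dot{\phi}|}
  \;\gtrsim\; 1
```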
NASA Astrophysics Data System (ADS)
Ulrich, T.; Gabriel, A. A.
2016-12-01
The geometry of faults is subject to a large degree of uncertainty. Being buried structures that are not directly observable, their complex shapes can only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming to assess the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging and require optimized codes able to run efficiently on high-performance computing infrastructure while simultaneously handling complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer, in terms of complexity, to source models inverted from observations. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g., onset of supershear transition, rupture front coherence, propagation of self-healing pulses) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, relative to the rupture's inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip-weakening friction on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.
Norris, Scott A; Brenner, Michael P; Aziz, Michael J
2009-06-03
We develop a methodology for deriving continuum partial differential equations for the evolution of large-scale surface morphology directly from molecular dynamics simulations of the craters formed from individual ion impacts. Our formalism relies on the separation between the length scale of ion impact and the characteristic scale of pattern formation, and expresses the surface evolution in terms of the moments of the crater function. We demonstrate that the formalism reproduces the classical Bradley-Harper results, as well as ballistic atomic drift, under the appropriate simplifying assumptions. Given an actual set of converged molecular dynamics moments and their derivatives with respect to the incidence angle, our approach can be applied directly to predict the presence and absence of surface morphological instabilities. This analysis represents the first work systematically connecting molecular dynamics simulations of ion bombardment to partial differential equations that govern topographic pattern-forming instabilities.
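A hedged sketch of the moment expansion described above (notation illustrative: I is the ion flux, θ the local incidence angle, and the M^(i) are moments of the single-impact crater function):

```latex
\frac{\partial h}{\partial t}
  \;\approx\; I\,M^{(0)}(\theta)
  \;-\; \nabla\cdot\!\left[I\,\mathbf{M}^{(1)}(\theta)\right]
  \;+\; \mathcal{O}\!\left(\nabla^{2}\right)
% The zeroth moment is the mean volume removed per impact; the first
% moment captures ballistic drift of redistributed atoms.
```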
Suppressed ion-scale turbulence in a hot high-β plasma
NASA Astrophysics Data System (ADS)
Schmitz, L.; Fulton, D. P.; Ruskov, E.; Lau, C.; Deng, B. H.; Tajima, T.; Binderbauer, M. W.; Holod, I.; Lin, Z.; Gota, H.; Tuszewski, M.; Dettrick, S. A.; Steinhauer, L. C.
2016-12-01
An economic magnetic fusion reactor favours a high ratio of plasma kinetic pressure to magnetic pressure in a well-confined, hot plasma with low thermal losses across the confining magnetic field. Field-reversed configuration (FRC) plasmas are potentially attractive as a reactor concept, achieving high plasma pressure in a simple axisymmetric geometry. Here, we show that FRC plasmas have unique, beneficial microstability properties that differ from typical regimes in toroidal confinement devices. Ion-scale fluctuations are found to be absent or strongly suppressed in the plasma core, mainly due to the large FRC ion orbits, resulting in near-classical thermal ion confinement. In the surrounding boundary layer plasma, ion- and electron-scale turbulence is observed once a critical pressure gradient is exceeded. The critical gradient increases in the presence of sheared plasma flow induced via electrostatic biasing, opening the prospect of active boundary and transport control in view of reactor requirements.
Suppressed ion-scale turbulence in a hot high-β plasma
Schmitz, L.; Fulton, D. P.; Ruskov, E.; Lau, C.; Deng, B. H.; Tajima, T.; Binderbauer, M. W.; Holod, I.; Lin, Z.; Gota, H.; Tuszewski, M.; Dettrick, S. A.; Steinhauer, L. C.
2016-01-01
An economic magnetic fusion reactor favours a high ratio of plasma kinetic pressure to magnetic pressure in a well-confined, hot plasma with low thermal losses across the confining magnetic field. Field-reversed configuration (FRC) plasmas are potentially attractive as a reactor concept, achieving high plasma pressure in a simple axisymmetric geometry. Here, we show that FRC plasmas have unique, beneficial microstability properties that differ from typical regimes in toroidal confinement devices. Ion-scale fluctuations are found to be absent or strongly suppressed in the plasma core, mainly due to the large FRC ion orbits, resulting in near-classical thermal ion confinement. In the surrounding boundary layer plasma, ion- and electron-scale turbulence is observed once a critical pressure gradient is exceeded. The critical gradient increases in the presence of sheared plasma flow induced via electrostatic biasing, opening the prospect of active boundary and transport control in view of reactor requirements. PMID:28000675
Trapped-Ion Quantum Logic with Global Radiation Fields.
Weidt, S; Randall, J; Webster, S C; Lake, K; Webb, A E; Cohen, I; Navickas, T; Lekitsch, B; Retzker, A; Hensinger, W K
2016-11-25
Trapped ions are a promising tool for building a large-scale quantum computer. However, the number of required radiation fields for the realization of quantum gates in any proposed ion-based architecture scales with the number of ions within the quantum computer, posing a major obstacle when imagining a device with millions of ions. Here, we present a fundamentally different approach for trapped-ion quantum computing where this detrimental scaling vanishes. The method is based on individually controlled voltages applied to each logic gate location to facilitate the actual gate operation analogous to a traditional transistor architecture within a classical computer processor. To demonstrate the key principle of this approach we implement a versatile quantum gate method based on long-wavelength radiation and use this method to generate a maximally entangled state of two quantum engineered clock qubits with fidelity 0.985(12). This quantum gate also constitutes a simple-to-implement tool for quantum metrology, sensing, and simulation.
ERIC Educational Resources Information Center
Sussman, Joshua; Beaujean, A. Alexander; Worrell, Frank C.; Watson, Stevie
2013-01-01
Item response models (IRMs) were used to analyze Cross Racial Identity Scale (CRIS) scores. Rasch analysis scores were compared with classical test theory (CTT) scores. The partial credit model demonstrated a high goodness of fit and correlations between Rasch and CTT scores ranged from 0.91 to 0.99. CRIS scores are supported by both methods.…
Dispersion and Cluster Scales in the Ocean
NASA Astrophysics Data System (ADS)
Kirwan, A. D., Jr.; Chang, H.; Huntley, H.; Carlson, D. F.; Mensa, J. A.; Poje, A. C.; Fox-Kemper, B.
2017-12-01
Ocean flow space scales range from centimeters to thousands of kilometers. Because of their large Reynolds numbers these flows are considered turbulent. However, because of rotation and stratification constraints they do not conform to classical turbulence scaling theory. Mesoscale and large-scale motions are well described by geostrophic or "2D turbulence" theory, but extending this theory to submesoscales has proved problematic. One obvious reason is the difficulty of obtaining reliable data over many orders of magnitude of spatial scales in an ocean environment. The goal of this presentation is to provide a preliminary synopsis of two recent experiments that overcame these obstacles. The first experiment, the Grand LAgrangian Deployment (GLAD), was conducted during July 2012 in the eastern half of the Gulf of Mexico. Here approximately 300 GPS-tracked drifters were deployed with the primary goal of determining whether the relative dispersion of an initially densely clustered array was driven by processes acting at local pair separation scales or by straining imposed by mesoscale motions. The second experiment was a component of the LAgrangian Submesoscale Experiment (LASER) conducted during the winter of 2016. Here thousands of bamboo plates were tracked optically from an aerostat. Together these two deployments provided an unprecedented data set on dispersion and clustering processes at scales from 1 to 10^6 m. Calculations of statistics such as two-point separations, structure functions, and scale-dependent relative diffusivities showed an inverse energy cascade, as expected, at scales above 10 km and a forward energy cascade at scales below 10 km, with a possible energy input at Langmuir circulation scales. We also find evidence from structure function calculations for surface flow convergence at scales less than 10 km that accounts for material clustering at the ocean surface.
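A minimal sketch of the basic two-point statistic behind these dispersion calculations, run here on synthetic random-walk tracks purely for illustration:

```python
import numpy as np

def relative_dispersion(tracks):
    """Mean squared pair separation <r^2(t)> from drifter trajectories.

    tracks : array of shape (n_drifters, n_times, 2) with x, y positions [m].
    Returns an (n_times,) array averaged over all drifter pairs -- the basic
    two-point statistic behind scale-dependent relative diffusivities.
    """
    n = tracks.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    sep2 = [np.sum((tracks[i] - tracks[j]) ** 2, axis=-1) for i, j in pairs]
    return np.mean(sep2, axis=0)

# Hypothetical synthetic check: independent random walks give <r^2> ~ t;
# real drifter data bend this curve through ballistic/Richardson regimes.
rng = np.random.default_rng(3)
steps = rng.standard_normal((30, 1000, 2))
tracks = np.cumsum(steps, axis=1)
r2 = relative_dispersion(tracks)
print(float(r2[1]), float(r2[-1]))
```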
Hirshberg, Barak; Sagiv, Lior; Gerber, R Benny
2017-03-14
Algorithms for quantum molecular dynamics simulations that directly use ab initio methods have many potential applications. In this article, the ab initio classical separable potentials (AICSP) method is proposed as the basis for approximate algorithms of this type. The AICSP method assumes separability of the total time-dependent wave function of the nuclei and employs mean-field potentials that govern the dynamics of each degree of freedom. In the proposed approach, the mean-field potentials are determined by classical ab initio molecular dynamics simulations. The nuclear wave function can thus be propagated in time using the effective potentials generated "on the fly". As a test of the method for realistic systems, calculations of the stationary anharmonic frequencies of hydrogen stretching modes were carried out for several polyatomic systems, including three amino acids and the guanine-cytosine pair of nucleobases. Good agreement with experiments was found. The method scales very favorably with the number of vibrational modes and should be applicable for very large molecules, e.g., peptides. The method should also be applicable for properties such as vibrational line widths and line shapes. Work in these directions is underway.
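A hedged sketch of the separability ansatz behind the AICSP method (schematic notation; the defining feature is that the mean-field potentials are generated "on the fly" along classical ab initio trajectories):

```latex
% The nuclear wave function is taken as a product over modes:
\Psi(q_1,\dots,q_N,t) \;\approx\; \prod_{i=1}^{N} \phi_i(q_i,t)
% Each mode evolves under a one-dimensional effective potential evaluated
% with the remaining coordinates following their classical trajectories:
i\hbar\,\frac{\partial \phi_i}{\partial t}
  = \left[-\frac{\hbar^2}{2m_i}\frac{\partial^2}{\partial q_i^2}
          + \bar{V}_i(q_i,t)\right]\phi_i,
\qquad
\bar{V}_i(q_i,t) = V\!\big(q_i,\{\bar{q}_{j\neq i}(t)\}\big)
```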
2018-04-23
Grade 3a Follicular Lymphoma; Grade 3b Follicular Lymphoma; Recurrent Classical Hodgkin Lymphoma; Recurrent Diffuse Large B-Cell Lymphoma; Recurrent Follicular Lymphoma; Recurrent Grade 1 Follicular Lymphoma; Recurrent Grade 2 Follicular Lymphoma; Recurrent Mediastinal (Thymic) Large B-Cell Lymphoma; Refractory Classical Hodgkin Lymphoma; Refractory Diffuse Large B-Cell Lymphoma; Refractory Follicular Lymphoma; Refractory Mediastinal (Thymic) Large B-Cell Lymphoma
An energy dependent earthquake frequency-magnitude distribution
NASA Astrophysics Data System (ADS)
Spassiani, I.; Marzocchi, W.
2017-12-01
The most popular description of the frequency-magnitude distribution of seismic events is the exponential Gutenberg-Richter (G-R) law, which is widely used in earthquake forecasting and seismic hazard models. Although it has been well validated experimentally in many catalogs worldwide, it is not yet clear at which space-time scales the G-R law still holds. For instance, in a small area where a large earthquake has just happened, the probability that another very large earthquake nucleates in a short time window should diminish, because it takes time to recover the level of elastic energy just released. In short, the frequency-magnitude distribution before and after a large earthquake in a small area should be different because of the different amount of available energy. Our study is therefore aimed at exploring a possible modification of the classical G-R distribution by including a dependence on an energy parameter. In a nutshell, this more general version of the G-R law should be such that a higher release of energy corresponds to a lower probability of strong aftershocks. In addition, this new frequency-magnitude distribution has to satisfy an invariance condition: when integrating over large areas, that is, over infinite available energy, the G-R law must be recovered. Finally, we apply the proposed generalization of the G-R law to different seismic catalogs to show how it works and the differences with respect to the classical G-R law.
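Schematically, the construction under study (a hedged sketch; the paper's actual functional form may differ):

```latex
% Classical G-R magnitude density above a cutoff m_0:
P(m) = b\,\ln(10)\; 10^{-b\,(m - m_0)}, \qquad m \ge m_0
% A candidate energy-dependent generalization tapers large magnitudes when
% the locally available energy E is small, subject to the invariance
% condition that the pure G-R form is recovered as E grows without bound:
P(m \mid E) \;\propto\; 10^{-b\,(m - m_0)}\, f(m, E),
\qquad f(m, E) \xrightarrow{\;E \to \infty\;} 1
```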
Intrinsic fluctuations of the proton saturation momentum scale in high multiplicity p+p collisions
McLerran, Larry; Tribedy, Prithwish
2015-11-02
High multiplicity events in p+p collisions are studied using the theory of the Color Glass Condensate. Here, we show that intrinsic fluctuations of the proton saturation momentum scale are needed in addition to the sub-nucleonic color charge fluctuations to explain the very high multiplicity tail of distributions in p+p collisions. It is presumed that the origin of such intrinsic fluctuations is non-perturbative in nature. Classical Yang-Mills simulations using the IP-Glasma model are performed to make quantitative estimations. Furthermore, we find that fluctuations as large as O(1) of the average values of the saturation momentum scale can lead to the rare high multiplicity events seen in p+p data at RHIC and LHC energies. Using the available data on multiplicity distributions we try to constrain the distribution of the proton saturation momentum scale and make predictions for the multiplicity distribution in 13 TeV p+p collisions.
On the distribution of local dissipation scales in turbulent flows
NASA Astrophysics Data System (ADS)
May, Ian; Morshed, Khandakar; Venayagamoorthy, Karan; Dasi, Lakshmi
2014-11-01
Universality of dissipation scales in turbulence relies on self-similar scaling and large-scale independence. We show that the probability density function of dissipation scales, Q(η), is analytically defined by the two-point correlation function and the Reynolds number (Re). We also present a new analytical form for the two-point correlation function for the dissipation scales through a generalized definition of a directional Taylor microscale. Comparison of Q(η) predicted within this framework and published DNS data shows excellent agreement. It is shown that for finite Re no single similarity law exists even for the case of homogeneous isotropic turbulence. Instead a family of scaling is presented, defined by Re and a dimensionless local inhomogeneity parameter based on the spatial gradient of the rms velocity. For moderate-Re inhomogeneous flows, we note a strong directional dependence of Q(η) dictated by the principal Reynolds stresses. It is shown that the mode of the distribution Q(η) significantly shifts to sub-Kolmogorov scales along the inhomogeneous directions, as in wall-bounded turbulence. This work extends the classical Kolmogorov theory to finite-Re homogeneous isotropic turbulence as well as the case of inhomogeneous anisotropic turbulence.
Holography as a highly efficient renormalization group flow. I. Rephrasing gravity
NASA Astrophysics Data System (ADS)
Behr, Nicolas; Kuperstein, Stanislav; Mukhopadhyay, Ayan
2016-07-01
We investigate how the holographic correspondence can be reformulated as a generalization of Wilsonian renormalization group (RG) flow in a strongly interacting large-N quantum field theory. We first define a highly efficient RG flow as one in which the Ward identities related to local conservation of energy, momentum and charges preserve the same form at each scale. To achieve this, it is necessary to redefine the background metric and external sources at each scale as functionals of the effective single-trace operators. These redefinitions also absorb the contributions of the multitrace operators to these effective Ward identities. Thus, the background metric and external sources become effectively dynamical, reproducing the dual classical gravity equations in one higher dimension. Here, we focus on reconstructing the pure gravity sector as a highly efficient RG flow of the energy-momentum tensor operator, leaving the explicit constructive field theory approach for generating such RG flows to the second part of the work. We show that special symmetries of the highly efficient RG flows carry information through which we can decode the gauge fixing of bulk diffeomorphisms in the corresponding gravity equations. We also show that the highly efficient RG flow which reproduces a given classical gravity theory in a given gauge is unique provided the endpoint can be transformed to a nonrelativistic fixed point with a finite number of parameters under a universal rescaling. The results obtained here are used in the second part of this work, where we do an explicit field-theoretic construction of the RG flow and obtain the dual classical gravity theory.
NASA Astrophysics Data System (ADS)
Oldroyd, H. J.; Pardyjak, E.; Higgins, C. W.; Parlange, M. B.
2015-12-01
As micrometeorological research shifts to increasingly non-idealized environments, the lens through which we view classical atmospheric boundary layer theory must also shift to accommodate unfamiliar behavior. We present observations of katabatic flow over a steep (35.5 degree), alpine slope and draw comparisons with classical theory for nocturnal boundary layers (NBL) over flat terrain to delineate key physical differences and similarities. In both cases, the NBL is characterized by a strong, terrain-aligned thermal stratification. Over flat terrain, this temperature inversion tends to stabilize perturbations and suppresses vertical motions. Hence, the buoyancy term in the TKE budget equation acts as a sink. In contrast, the steep-slope katabatic flow regime is characterized by buoyant TKE production despite NBL thermal stratification. This buoyant TKE production occurs because streamwise (upslope) heat fluxes, which are typically treated as unimportant over flat terrain, contribute to the total vertical buoyancy flux since the gravity vector is not terrain-normal. Due to a relatively small number of observations over steep terrain, the turbulence structure of such flows and the implications of buoyant TKE production in the NBL have gone largely unexplored. As an important consequence of this characteristic, we show that conventional stability characterizations require careful coordinate system alignment and interpretation for katabatic flows. The streamwise heat fluxes play an integral role in characterizing stability and turbulent transport, more broadly, in katabatic flows. Therefore, multi-scale statistics and budget analyses describing physical interactions between turbulent fluxes at various scales are presented to interpret similarities and differences between the observations and classical theories regarding streamwise heat fluxes.
NASA Astrophysics Data System (ADS)
Berenstein, David; Miller, Alexandra
2016-09-01
In this paper, we argue that for classical configurations of gravity in the AdS/CFT setup, it is in general impossible to reconstruct the bulk geometry from the leading asymptotic behavior of the classical fields in gravity alone. This is possible sufficiently near the vacuum, but not more generally. We argue this by using a counter-example that utilizes the supersymmetric geometries constructed by Lin, Lunin, and Maldacena. In the dual quantum field theory, the additional data required to complete the geometry is encoded in modes that near the vacuum geometry lie beyond the Planck scale.
Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul
2016-01-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high-dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (Supervised Principal Components, Regularization, and Boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods and as primary analytic tools in discovery-phase research. We conclude that despite their differences from the classic null-hypothesis testing approach, or perhaps because of them, SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
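As an illustration of the cross-validation idea, the following sketch builds a criterion-keyed scale with L1-regularized logistic regression in scikit-learn; the synthetic items, outcome, and hyperparameter grid are assumptions for demonstration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))              # 300 candidate personality items
beta = np.zeros(300); beta[:10] = 0.5         # only 10 items truly predictive
y = ((X @ beta + rng.normal(size=1000)) > 0).astype(int)   # binary outcome

# L1-penalized logistic regression; the penalty strength is chosen by 5-fold
# cross-validation, i.e. by estimated out-of-sample error rather than fit.
model = LogisticRegressionCV(Cs=10, cv=5, penalty="l1", solver="liblinear")
model.fit(X, y)
scale_items = np.flatnonzero(model.coef_)     # items retained in the keyed scale
print(len(scale_items), "items selected")
```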
NASA Astrophysics Data System (ADS)
Tseng, Yu-Heng; Meneveau, Charles; Parlange, Marc B.
2004-11-01
Large Eddy Simulations (LES) of atmospheric boundary-layer air movement in urban environments are especially challenging due to complex ground topography. Typically in such applications, fairly coarse grids must be used, where the subgrid-scale (SGS) model is expected to play a crucial role. A LES code using pseudo-spectral discretization in horizontal planes and second-order differencing in the vertical is implemented in conjunction with the immersed boundary method to incorporate complex ground topography, with the classic equilibrium log-law boundary condition in the near-wall region, and with several versions of the eddy-viscosity model: (1) the constant-coefficient Smagorinsky model, (2) the dynamic, scale-invariant Lagrangian model, and (3) the dynamic, scale-dependent Lagrangian model. Other planar-averaged dynamic models are not suitable because spatial averaging is not possible without directions of statistical homogeneity. These SGS models are tested in LES of flow around a square cylinder and of flow over surface-mounted cubes. Effects on the mean flow are documented and found not to be major. Dynamic Lagrangian models give a physically more realistic SGS viscosity field, and in general the scale-dependent Lagrangian model produces a larger Smagorinsky coefficient than the scale-invariant one, leading to reduced resolved rms velocities, especially in the boundary layers near the bluff bodies.
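For reference, here is a minimal sketch of the constant-coefficient Smagorinsky model named above, on a uniform finite-difference grid; the Cs value and the gradient discretization are assumptions (the study's code is pseudo-spectral in horizontal planes).

```python
import numpy as np

def smagorinsky_viscosity(u, v, w, dx, cs=0.16):
    """SGS eddy viscosity nu_t = (cs*dx)**2 * |S| on a uniform grid.

    |S| = sqrt(2 S_ij S_ij), with S_ij the resolved strain-rate tensor.
    """
    grads = np.array([np.gradient(u, dx), np.gradient(v, dx), np.gradient(w, dx)])
    s = 0.5 * (grads + grads.transpose(1, 0, 2, 3, 4))   # symmetrize: S_ij
    s_mag = np.sqrt(2.0 * np.sum(s * s, axis=(0, 1)))
    return (cs * dx) ** 2 * s_mag

u = v = w = np.random.rand(16, 16, 16)        # toy resolved velocity field
nu_t = smagorinsky_viscosity(u, v, w, dx=0.1)
```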
Study of near SOL decay lengths in ASDEX Upgrade under attached and detached divertor conditions
NASA Astrophysics Data System (ADS)
Sun, H. J.; Wolfrum, E.; Kurzan, B.; Eich, T.; Lackner, K.; Scarabosio, A.; Paradela Pérez, I.; Kardaun, O.; Faitsch, M.; Potzel, S.; Stroth, U.; the ASDEX Upgrade Team
2017-10-01
A database with attached, partially detached and completely detached divertors has been constructed from ASDEX Upgrade discharges in both H-mode and L-mode plasmas with Thomson scattering data suitable for the analysis of the upstream SOL electron profiles. By comparing the upstream temperature decay width, λ_Te,u, with the scaling of the SOL power decay width, λ_q∥e, based on the downstream IR measurements, it is found that a simple relation based on classical electron conduction relates λ_Te,u and λ_q∥e well. The combined dataset can be described both by a single scaling and by separate scalings for H-modes and L-modes. For the single scaling, a strong inverse dependence of λ_Te,u on the separatrix temperature, T_e,u, is found, suggesting classical parallel Spitzer-Harm conductivity as the dominant mechanism controlling the SOL width in both L-mode and H-mode over a large set of plasma parameters. This dependence on T_e,u explains why, for the same global plasma parameters, λ_q∥e in L-mode is approximately twice that in H-mode, and why, under detached conditions, the SOL upstream electron profile broadens when the density reaches a critical value. Comparing the derived scaling from experimental data with power balance gives the cross-field thermal diffusivity χ⊥ ∝ T_e^(1/2)/n_e, consistent with earlier studies on COMPASS-D, JET and Alcator C-Mod. However, the possibility of separate scalings for the different regimes cannot be excluded; these give results similar to those previously reported for H-mode, while the wider SOL width for L-mode plasmas is explained simply by a larger premultiplying coefficient. The relative merits of the two scalings in representing the data and their theoretical implications are discussed.
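The quoted conduction argument can be checked numerically: for Spitzer-Harm conduction, q∥ ∝ T^(7/2), so an exponential upstream temperature profile with width λ_Te,u yields a power-flux width λ_q∥e = (2/7)λ_Te,u. A minimal sketch, with an assumed profile width:

```python
import numpy as np

lam_T = 6.0e-3                           # assumed upstream Te decay width (m)
r = np.linspace(0.0, 0.02, 200)          # distance from separatrix (m)
T = np.exp(-r / lam_T)                   # exponential upstream Te profile
q = T ** 3.5                             # Spitzer-Harm: q_par ~ T**(7/2)
lam_q = -1.0 / np.polyfit(r, np.log(q), 1)[0]
print(lam_q, (2.0 / 7.0) * lam_T)        # both ~1.71e-3 m
```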
A Parameterization of Dry Thermals and Shallow Cumuli for Mesoscale Numerical Weather Prediction
NASA Astrophysics Data System (ADS)
Pergaud, Julien; Masson, Valéry; Malardel, Sylvie; Couvreux, Fleur
2009-07-01
For numerical weather prediction models and models resolving deep convection, shallow convective ascents are subgrid processes that are not parameterized by classical local turbulent schemes. The mass-flux formulation of convective mixing is now largely accepted as an efficient approach for parameterizing the contribution of larger plumes in convective dry and cloudy boundary layers. We propose a new formulation of the EDMF (Eddy Diffusivity/Mass Flux) scheme based on a single updraft that improves the representation of dry thermals and shallow convective clouds and conserves a correct representation of stratocumulus in mesoscale models. The definition of entrainment and detrainment in the dry part of the updraft is original, being specified as proportional to the ratio of buoyancy to vertical velocity. In the cloudy part of the updraft, the classical buoyancy-sorting approach is chosen. The main closure of the scheme is based on the mass flux near the surface, which is proportional to the sub-cloud-layer convective velocity scale w*. The link with the prognostic grid-scale cloud content and cloud cover, and the projection onto the non-conservative variables, is handled by the cloud scheme. The validation of this new formulation using large-eddy simulations focused on showing the robustness of the scheme in representing three different boundary-layer regimes. For dry convective cases, this parameterization enables a correct representation of the countergradient zone, where the mass-flux part represents the top entrainment (IHOP case). It can also handle the diurnal cycle of boundary-layer cumulus clouds (EUROCS ARM) and conserve a realistic evolution of stratocumulus (EUROCS FIRE).
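A minimal sketch of the dry-updraft entrainment/detrainment closure described above; the rates are written here in the dimensionally consistent B/w² form (buoyancy over vertical velocity squared), and the coefficients and the floor on w are assumptions for illustration.

```python
def entrainment_detrainment(buoyancy, w_up, c_eps=0.55, c_delta=0.002):
    """Dry-updraft entrainment/detrainment rates (1/m) from B/w**2.

    Entrain where the parcel is positively buoyant, detrain where it is
    negatively buoyant; coefficients and the w floor are illustrative.
    """
    ratio = buoyancy / max(w_up ** 2, 1e-6)
    return max(c_eps * ratio, 0.0), max(-c_delta * ratio, 0.0)

print(entrainment_detrainment(buoyancy=0.02, w_up=1.5))   # buoyant: entrainment only
```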
Microphysically Derived Expressions for Rate-and-State Friction Parameters, a, b, and Dc
NASA Astrophysics Data System (ADS)
Chen, Jianye; Niemeijer, A. R.; Spiers, Christopher J.
2017-12-01
Rate-and-state friction (RSF) laws are extensively applied in fault mechanics but have a largely empirical basis reflecting only limited understanding of the underlying physical mechanisms. We recently proposed a microphysical model describing the frictional behavior of a granular fault gouge undergoing deformation in terms of granular flow accompanied by thermally activated creep and intergranular sliding at grain contacts. Numerical solutions reproduced typical experimental results well. Here we extend our model to obtain physically meaningful, analytical expressions for the steady state frictional strength and standard RSF parameters, a, b, and Dc. The frictional strength contains two components, namely, grain boundary friction and friction due to intergranular dilatation. The expressions obtained for a and b linearly reflect the rate dependence of these two terms. Dc scales with slip band thickness and varies only slightly with velocity. The values of a, b, and Dc predicted show quantitative agreement with previous experimental results, and inserting their values into classical RSF laws gives simulated friction behavior that is consistent with the predictions of our numerically implemented model for small departures from steady state. For large velocity steps, the model produces mixed RSF behavior that falls between the Slowness and Slip laws, for example, with an intermediate equivalent slip(-weakening) distance d0. Our model possesses the interesting property not only that a and b are velocity dependent but also that Dc and d0 scale differently from classical RSF models, potentially explaining behaviour seen in many hydrothermal friction experiments and having substantial implications for natural fault friction.
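For context, the classical RSF law that the derived a, b, and Dc feed into can be integrated directly; the sketch below applies the aging (Slowness) law to a tenfold velocity step, with parameter values that are illustrative assumptions rather than the paper's microphysical predictions.

```python
import numpy as np

# Classical RSF: mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc), with the aging
# (Slowness) law d(theta)/dt = 1 - V*theta/Dc. Parameters are illustrative.
mu0, a, b, Dc, V0 = 0.6, 0.010, 0.015, 1e-5, 1e-6

def friction_after_step(V_new, t_end=200.0, n=20000):
    theta = Dc / V0                    # steady state at the old velocity V0
    dt = t_end / n
    for _ in range(n):                 # integrate the aging law after the step
        theta += dt * (1.0 - V_new * theta / Dc)
    return mu0 + a * np.log(V_new / V0) + b * np.log(V0 * theta / Dc)

# A 10x velocity step: direct effect +a*ln(10), then evolution by -b*ln(10),
# giving steady-state weakening of (a - b)*ln(10) < 0 for b > a.
print(friction_after_step(1e-5), mu0 + (a - b) * np.log(10.0))
```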
Statistical measures of Planck scale signal correlations in interferometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, Craig J.; Kwon, Ohkyung
2015-06-22
A model-independent statistical framework is presented to interpret data from systems where the mean time derivative of positional cross-correlation between world lines, a measure of spreading in a quantum geometrical wave function, is measured with a precision smaller than the Planck time. The framework provides a general way to constrain possible departures from perfect independence of classical world lines, associated with Planck scale bounds on positional information. A parametrized candidate set of possible correlation functions is shown to be consistent with the known causal structure of the classical geometry measured by an apparatus, and with the holographic scaling of information suggested by gravity. Frequency-domain power spectra are derived that can be compared with interferometer data. Simple projections of sensitivity for specific experimental set-ups suggest that measurements will directly yield constraints on a universal time derivative of the correlation function, and thereby confirm or rule out a class of Planck scale departures from classical geometry.
Plasmon mass scale and quantum fluctuations of classical fields on a real time lattice
NASA Astrophysics Data System (ADS)
Kurkela, Aleksi; Lappi, Tuomas; Peuron, Jarkko
2018-03-01
Classical real-time lattice simulations play an important role in understanding non-equilibrium phenomena in gauge theories and are used in particular to model the prethermal evolution of heavy-ion collisions. Above the Debye scale, the classical Yang-Mills (CYM) theory can be matched smoothly to kinetic theory. First, we study the limits of the quasiparticle picture of the CYM fields by determining the plasmon mass of the system using three different methods. Then we argue that one needs a numerical calculation of a system of classical gauge fields and small linearized fluctuations, corresponding to quantum fluctuations, in a way that keeps the separation between the two manifest. We demonstrate and test an implementation of such an algorithm with linearized fluctuations, showing that the linearization indeed works and that Gauss's law is conserved.
Large Impact Features on Europa: Results of the Galileo Nominal Mission
NASA Technical Reports Server (NTRS)
Moore, Jeffrey M.; Asphaug, Erik; Sullivan, Robert J.; Klemaszewski, James E.; Bender, Kelly C.; Greeley, Ronald; Geissler, Paul E.; McEwen, Alfred S.; Turtle, Elizabeth P.; Phillips, Cynthia B.
1998-01-01
The Galileo Orbiter examined several impact features on Europa at considerably better resolution than was possible from Voyager. The new data allow us to describe the morphology and infer the geology of the largest impact features on Europa, which are probes into the crust. We observe two basic types of large impact features: (1) "classic" impact craters that grossly resemble well-preserved lunar craters of similar size but are more topographically subdued (e.g., Pwyll) and (2) very flat circular features that lack the basic topographic structures of impact craters such as raised rims, a central depression, or central peaks, and which largely owe their identification as impact features to the field of secondary craters radially sprayed about them (e.g., Callanish). Our interpretation is that the classic craters (all <30 km diameter) formed entirely within a solid target at least 5 to 10 km thick that exhibited brittle behavior on time scales of the impact events. Some of the classic craters have a more subdued topography than fresh craters of similar size on other icy bodies such as Ganymede and Callisto, probably due to the enhanced viscous relaxation produced by a steeper thermal gradient on Europa. Pedestal ejecta facies on Europa (and Ganymede) may be produced by the relief-flattening movement of plastically deforming but otherwise solid ice that was warm at the time of emplacement. Callanish and Tyre do not appear to be larger and even more viscously relaxed versions of the classic craters; rather they display totally different morphologies such as distinctive textures and a series of large concentric structural rings cutting impact-feature-related materials. Impact simulations suggest that the distinctive morphologies would not be produced by impact into a solid ice target, but may be explained by impact into an ice layer approximately 10 to 15 km thick overlying a low-viscosity material such as water. The very wide (near antipodal) separation of Callanish and Tyre implies that approximately 10-15 km may have been the global average thickness of the rigid crust of Europa when these impacts occurred. The absence of detectable craters superposed on the interior deposits of Callanish suggests that it is geologically young (<10^8 years). Hence, it seems likely that our preliminary conclusions about the subsurface structure of Europa apply to the current day.
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
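A minimal sketch of the weighted Beer-Lambert rescaling described above; the zero-absorption curve, refractive index, layer absorptions, and path fractions are assumptions standing in for the stored MC outputs.

```python
import numpy as np

c_tissue = 3e8 / 1.4                   # photon speed in tissue (n = 1.4), m/s

def scale_reflectance(t, r0, mu_a, f):
    """Scale zero-absorption reflectance r0(t) to absorbing layers.

    t    : time axis (s)
    r0   : zero-absorption reflectance at those times
    mu_a : absorption coefficient per layer (1/m)
    f    : fraction of the classical path spent in each layer (sums to 1)
    """
    path = c_tissue * t                           # total path length vs time
    atten = np.exp(-path * np.dot(f, mu_a))       # weighted Beer-Lambert factor
    return r0 * atten

t = np.linspace(1e-12, 2e-9, 400)
r0 = t ** -1.5 * np.exp(-1e-10 / t)               # stand-in diffusion-like curve
r = scale_reflectance(t, r0, mu_a=np.array([10.0, 50.0]), f=np.array([0.6, 0.4]))
```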
Quantum superposition at the half-metre scale.
Kovachy, T; Asenbaum, P; Overstreet, C; Donnelly, C A; Dickerson, S M; Sugarbaker, A; Hogan, J M; Kasevich, M A
2015-12-24
The quantum superposition principle allows massive particles to be delocalized over distant positions. Though quantum mechanics has proved adept at describing the microscopic world, quantum superposition runs counter to intuitive conceptions of reality and locality when extended to the macroscopic scale, as exemplified by the thought experiment of Schrödinger's cat. Matter-wave interferometers, which split and recombine wave packets in order to observe interference, provide a way to probe the superposition principle on macroscopic scales and explore the transition to classical physics. In such experiments, large wave-packet separation is impeded by the need for long interaction times and large momentum beam splitters, which cause susceptibility to dephasing and decoherence. Here we use light-pulse atom interferometry to realize quantum interference with wave packets separated by up to 54 centimetres on a timescale of 1 second. These results push quantum superposition into a new macroscopic regime, demonstrating that quantum superposition remains possible at the distances and timescales of everyday life. The sub-nanokelvin temperatures of the atoms and a compensation of transverse optical forces enable a large separation while maintaining an interference contrast of 28 per cent. In addition to testing the superposition principle in a new regime, large quantum superposition states are vital to exploring gravity with atom interferometers in greater detail. We anticipate that these states could be used to increase sensitivity in tests of the equivalence principle, measure the gravitational Aharonov-Bohm effect, and eventually detect gravitational waves and phase shifts associated with general relativity.
Experimental investigation of 4 K pulse tube refrigerator
NASA Astrophysics Data System (ADS)
Gao, J. L.; Matsubara, Y.
During the last decades superconducting electronics has been the most prominent area of research for small-scale applications of superconductivity. It has experienced quite a stormy development, from individual low-frequency devices to devices with high integration density and picosecond switching times. Nowadays it offers small losses, high speed and the potential for large-scale integration, and is superior to semiconducting devices in many ways, apart from the need for cooling by liquid helium for devices based on classical superconductors like niobium, or cooling by liquid nitrogen or cryocoolers (40 K to 77 K) for high-Tc superconductors like YBa2Cu3O7. This article gives a short overview of the current state of the art on typical devices from the main application areas of superconducting electronics.
General Entanglement Scaling Laws from Time Evolution
NASA Astrophysics Data System (ADS)
Eisert, Jens; Osborne, Tobias J.
2006-10-01
We establish a general scaling law for the entanglement of a large class of ground states and dynamically evolving states of quantum spin chains: we show that the geometric entropy of a distinguished block saturates, and hence follows an entanglement-boundary law. These results apply to any ground state of a gapped model resulting from dynamics generated by a local Hamiltonian, as well as, dually, to states that are generated via a sudden quench of an interaction as recently studied in the case of dynamics of quantum phase transitions. We achieve these results by exploiting ideas from quantum information theory and tools provided by Lieb-Robinson bounds. We also show that there exist noncritical fermionic systems and equivalent spin chains with rapidly decaying interactions violating this entanglement-boundary law. Implications for the classical simulatability are outlined.
NASA Astrophysics Data System (ADS)
Shmerlin, B. Ya; Kalashnik, M. V.
2013-05-01
Convective motions in moist saturated air are accompanied by the release of latent heat of condensation. Taking this effect into account, we consider the problem of convective instability of a moist saturated air layer, generalizing the formulation of the classical Rayleigh problem. An analytic solution demonstrating the fundamental difference between moist convection and Rayleigh convection is obtained. Upon losing stability in the two-dimensional case, localized convective rolls or spatially periodic chains of rolls with localized areas of upward motion evolve. In the case of axial symmetry, the growth of localized convective vortices with circulation characteristic of tropical cyclones (hurricanes) is possible at the early stages of development, on scales ranging from tornadoes to tropical cyclones.
Astrophysical constraints on Planck scale dissipative phenomena.
Liberati, Stefano; Maccione, Luca
2014-04-18
The emergence of a classical spacetime from any quantum gravity model is still a subtle and only partially understood issue. If indeed spacetime is arising as some sort of large scale condensate of more fundamental objects, then it is natural to expect that matter, being a collective excitation of the spacetime constituents, will present modified kinematics at sufficiently high energies. We consider here the phenomenology of the dissipative effects necessarily arising in such a picture. Adopting dissipative hydrodynamics as a general framework for the description of the energy exchange between collective excitations and the spacetime fundamental degrees of freedom, we discuss how rates of energy loss for elementary particles can be derived from dispersion relations and used to provide strong constraints on the base of current astrophysical observations of high-energy particles.
Large scale spontaneous synchronization of cell cycles in amoebae
NASA Astrophysics Data System (ADS)
Segota, Igor; Boulet, Laurent; Franck, Carl
2014-03-01
Unicellular eukaryotic amoebae Dictyostelium discoideum are generally believed to grow in their vegetative state as single cells until starvation, when their collective aspect emerges and they differentiate to form a multicellular slime mold. While major efforts continue to be aimed at their starvation-induced social aspect, our understanding of population dynamics and the cell cycle in the vegetative growth phase has remained incomplete. We show that substrate-grown cell populations spontaneously synchronize their cell cycles within several hours. These collective population-wide cell cycle oscillations span millimeter length scales and can be completely suppressed by washing away putative cell-secreted signals, implying signaling by means of a diffusible growth factor or mitogen. These observations give strong evidence for collective proliferation behavior in the vegetative state and provide opportunities for synchronization theories beyond classic Kuramoto models.
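For reference, here is a minimal sketch of the classic Kuramoto model the authors contrast with: globally coupled phase oscillators synchronize once the coupling K exceeds a critical value; all parameters are illustrative.

```python
import numpy as np

def kuramoto_order(n=500, k=2.0, steps=4000, dt=0.01, seed=0):
    """Integrate the mean-field Kuramoto model and return the order parameter r."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)        # initial phases
    omega = rng.normal(0.0, 1.0, n)             # natural frequencies
    for _ in range(steps):
        z = np.exp(1j * theta).mean()           # complex order parameter
        theta += dt * (omega + k * np.abs(z) * np.sin(np.angle(z) - theta))
    return np.abs(np.exp(1j * theta).mean())    # r ~ 0 incoherent, r ~ 1 synced

print(kuramoto_order(k=0.5), kuramoto_order(k=4.0))
```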
Dynamical Crossovers in Prethermal Critical States.
Chiocchetta, Alessio; Gambassi, Andrea; Diehl, Sebastian; Marino, Jamir
2017-03-31
We study the prethermal dynamics of an interacting quantum field theory with an N-component order parameter and O(N) symmetry, suddenly quenched in the vicinity of a dynamical critical point. Depending on the initial conditions, the evolution of the order parameter, and of the response and correlation functions, can exhibit a temporal crossover between universal dynamical scaling regimes governed, respectively, by a quantum and a classical prethermal fixed point, as well as a crossover from a Gaussian to a non-Gaussian prethermal dynamical scaling. Together with a recent experiment, this suggests that quenches may be used in order to explore the rich variety of dynamical critical points occurring in the nonequilibrium dynamics of a quantum many-body system. We illustrate this fact by using a combination of renormalization group techniques and a nonperturbative large-N limit.
Amarante Andrade, Pedro; Švec, Jan G
2016-07-01
Differences in classical and non-classical singing are due primarily to aesthetic style requirements. The head position can affect the sound quality. This study aimed at comparing the head position for famous classical and non-classical male singers performing high notes. Images of 39 Western classical and 34 non-classical male singers during live performances were obtained from YouTube. Ten raters evaluated the frontal rotational head position (depression versus elevation) and transverse head position (retraction versus protraction) visually using a visual analogue scale. The results showed a significant difference for frontal rotational head position. Most non-classical singers in the sample elevated their heads for high notes while the classical singers were observed to keep it around the neutral position. This difference may be attributed to different singing techniques and phonatory system adjustments utilized by each group.
A randomized controlled trial of single point acupuncture in primary dysmenorrhea.
Liu, Cun-Zhi; Xie, Jie-Ping; Wang, Lin-Peng; Liu, Yu-Qi; Song, Jia-Shan; Chen, Yin-Ying; Shi, Guang-Xia; Zhou, Wei; Gao, Shu-Zhong; Li, Shi-Liang; Xing, Jian-Min; Ma, Liang-Xiao; Wang, Yan-Xia; Zhu, Jiang; Liu, Jian-Ping
2014-06-01
Acupuncture is often used for primary dysmenorrhea, but convincing evidence is lacking due to the low methodological quality of previous studies. We aim to assess the immediate effect of acupuncture at a specific acupoint compared with an unrelated acupoint and a nonacupoint on primary dysmenorrhea. The Acupuncture Analgesia Effect in Primary Dysmenorrhoea-II is a multicenter controlled trial conducted in six large hospitals of China. Patients who met the inclusion criteria were randomly assigned to a classic acupoint (N = 167), unrelated acupoint (N = 167), or nonacupoint (N = 167) group on a 1:1:1 basis. They received three sessions of electro-acupuncture at a classic acupoint (Sanyinjiao, SP6), an unrelated acupoint (Xuanzhong, GB39), or a nonacupoint location, respectively. The primary outcome was subjective pain as measured by a 100-mm visual analog scale (VAS). Measurements were obtained at 0, 5, 10, 30, and 60 minutes following the first intervention. In addition, patients scored changes in general complaints using Cox retrospective symptom scales (RSS-Cox) and a 7-point verbal rating scale (VRS) during three menstrual cycles. Secondary outcomes included VAS score for average pain, total pain time, additional in-bed time, and the proportion of participants using analgesics during three menstrual cycles. Five hundred and one people underwent random assignment. The primary comparison of VAS scores following the first intervention demonstrated that the classic acupoint group was more effective than both the unrelated acupoint (-4.0 mm, 95% CI -7.1 to -0.9, P = 0.010) and nonacupoint (-4.0 mm, 95% CI -7.0 to -0.9, P = 0.012) groups. However, no significant differences were detected among the three acupuncture groups for RSS-Cox or VRS outcomes. The per-protocol analysis showed a similar pattern. No serious adverse events were noted. Specific acupoint acupuncture produced a statistically, but not clinically, significant effect compared with unrelated acupoint and nonacupoint acupuncture in primary dysmenorrhea patients. Future studies should focus on the effects of multiple-point acupuncture on primary dysmenorrhea.
NASA Astrophysics Data System (ADS)
Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus
2016-04-01
Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which has notably shown significant potential for the assimilation of datasets that are diverse with regard to their spatial resolution and mutual relationships. However, these types of BSS applications require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant-spiral/superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points, using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. Finally, a multi-grid simulation path was created to enforce large-scale variance and to allow for adapting parameters, such as the log-linear weights or the type of simulation path, at various scales. The newly implemented search method for kriging reduces the computational cost from an exponential dependence on grid size in the original algorithm to a linear relationship, as each neighboring search becomes independent of the grid size. For the considered examples, our results show a sevenfold reduction in run time for each additional realization when a constant simulation path is used. The traditional criticism that constant-path techniques introduce a bias into the simulations was explored, and our findings do indeed reveal a minor reduction in the diversity of the simulations. This bias can, however, be largely eliminated by changing the path type at different scales through the use of the multi-grid approach. Finally, we show that adapting the aggregation weight at each scale considered in our multi-grid approach allows for reproducing both the variogram and histogram, as well as the spatial trend of the underlying data.
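A minimal sketch of the log-linear pooling operator mentioned above, which reduces to the usual independence-based product when all weights equal one; the discretized toy distributions and weights are assumptions.

```python
import numpy as np

def log_linear_pool(pdfs, weights):
    """Pool discretized pdfs (rows of `pdfs`) with one weight per source."""
    w = np.asarray(weights)
    log_pool = w @ np.log(np.maximum(pdfs, 1e-300))   # weighted sum of log pdfs
    pooled = np.exp(log_pool - log_pool.max())        # stabilize before exponentiating
    return pooled / pooled.sum()

x = np.linspace(-3, 3, 61)
prior = np.exp(-0.5 * x**2); prior /= prior.sum()                  # e.g. kriging estimate
likelihood = np.exp(-0.5 * (x - 1)**2 / 0.25); likelihood /= likelihood.sum()
posterior = log_linear_pool(np.vstack([prior, likelihood]), weights=[0.3, 0.7])
```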
Everaers, Ralf; Rosa, Angelo
2012-01-07
The quantitative description of polymeric systems requires hierarchical modeling schemes, which bridge the gap between the atomic scale, relevant to chemical or biomolecular reactions, and the macromolecular scale, where the longest relaxation modes occur. Here, we use the formalism for diffusion-controlled reactions in polymers developed by Wilemski, Fixman, and Doi to discuss the renormalisation of the reactivity parameters in polymer models with varying spatial resolution. In particular, we show that the adjustments are independent of chain length. As a consequence, it is possible to match reaction times between descriptions with different resolution for relatively short reference chains and to use the coarse-grained model to make quantitative predictions for longer chains. We illustrate our results by a detailed discussion of the classical problem of chain cyclization in the Rouse model, which offers the simplest example of a multi-scale description, if we consider differently discretized Rouse models for the same physical system. Moreover, we are able to explore different combinations of compact and non-compact diffusion in the local and large-scale dynamics by varying the embedding dimension.
Simulation of surface processes
Jónsson, Hannes
2011-01-01
Computer simulations of surface processes can reveal unexpected insight regarding atomic-scale structure and transitions. Here, the strengths and weaknesses of some commonly used approaches are reviewed as well as promising avenues for improvements. The electronic degrees of freedom are usually described by gradient-dependent functionals within Kohn–Sham density functional theory. Although this level of theory has been remarkably successful in numerous studies, several important problems require a more accurate theoretical description. It is important to develop new tools to make it possible to study, for example, localized defect states and band gaps in large and complex systems. Preliminary results presented here show that orbital density-dependent functionals provide a promising avenue, but they require the development of new numerical methods and substantial changes to codes designed for Kohn–Sham density functional theory. The nuclear degrees of freedom can, in most cases, be described by the classical equations of motion; however, they still pose a significant challenge, because the time scale of interesting transitions, which typically involve substantial free energy barriers, is much longer than the time scale of vibrations—often 10 orders of magnitude. Therefore, simulation of diffusion, structural annealing, and chemical reactions cannot be achieved with direct simulation of the classical dynamics. Alternative approaches are needed. One such approach is transition state theory as implemented in the adaptive kinetic Monte Carlo algorithm, which, thus far, has relied on the harmonic approximation but could be extended and made applicable to systems with rougher energy landscape and transitions through quantum mechanical tunneling. PMID:21199939
Using harmonic oscillators to determine the spot size of Hermite-Gaussian laser beams
NASA Technical Reports Server (NTRS)
Steely, Sidney L.
1993-01-01
The similarity of the functional forms of quantum mechanical harmonic oscillators and the modes of Hermite-Gaussian laser beams is illustrated. This functional similarity provides a direct correlation to investigate the spot size of large-order mode Hermite-Gaussian laser beams. The classical limits of a corresponding two-dimensional harmonic oscillator provide a definition of the spot size of Hermite-Gaussian laser beams. The classical limits of the harmonic oscillator provide integration limits for the photon probability densities of the laser beam modes to determine the fraction of photons detected therein. Mathematica is used to integrate the probability densities for large-order beam modes and to illustrate the functional similarities. The probabilities of detecting photons within the classical limits of Hermite-Gaussian laser beams asymptotically approach unity in the limit of large-order modes, in agreement with the Correspondence Principle. The classical limits for large-order modes include all of the nodes for Hermite-Gaussian laser beams; Sturm's theorem provides a direct proof.
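The computation described above can be reproduced in a few lines: integrate the squared harmonic-oscillator eigenfunctions between the classical turning points ±√(2n+1) (dimensionless units assumed here) and watch the enclosed probability approach unity with mode order.

```python
import numpy as np
from scipy.special import eval_hermite, factorial
from scipy.integrate import quad

def prob_inside_classical_limits(n):
    """Fraction of |psi_n|^2 between the classical turning points x = ±sqrt(2n+1)."""
    norm = np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
    density = lambda x: (eval_hermite(n, x) * np.exp(-x**2 / 2) / norm) ** 2
    xt = np.sqrt(2 * n + 1)
    p, _ = quad(density, -xt, xt)
    return p

for n in (0, 5, 10, 20):                 # fraction rises toward 1 with mode order
    print(n, prob_inside_classical_limits(n))
```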
NASA Astrophysics Data System (ADS)
Wolf-Grosse, Tobias; Esau, Igor; Reuder, Joachim
2017-06-01
Street-level urban air pollution is a challenging concern for modern urban societies. Pollution dispersion models assume that concentrations decrease monotonically with rising wind speed. This convenient assumption breaks down when applied to flows with local recirculations, such as those found in topographically complex coastal areas. This study looks at a practically important and sufficiently common case of air pollution in a coastal valley city. Here, the observed concentrations are determined by the interaction between large-scale topographically forced and local-scale breeze-like recirculations. Analysis of a long observational dataset in Bergen, Norway, revealed that the most extreme cases of recurring wintertime air pollution episodes were accompanied by increased large-scale wind speeds above the valley. Contrary to the theoretical assumption and intuitive expectations, the maximum NO2 concentrations were not found for the lowest 10 m ERA-Interim wind speeds but in situations with wind speeds of 3 m s^-1. To explain this phenomenon, we investigated empirical relationships between the large-scale forcing and the local wind and air quality parameters. We conducted 16 large-eddy simulation (LES) experiments with the Parallelised Large-Eddy Simulation Model (PALM) for atmospheric and oceanic flows. The LES accounted for the realistic relief and coastal configuration as well as for the large-scale forcing and local surface condition heterogeneity in Bergen. They revealed that emerging local breeze-like circulations strongly enhance the urban ventilation and dispersion of the air pollutants in situations with weak large-scale winds. Slightly stronger large-scale winds, however, can counteract these local recirculations, leading to enhanced surface air stagnation. Furthermore, this study looks at the concrete impact of the relative configuration of warmer water bodies in the city and the major transport corridor. We found that a relatively small local water body acted as a barrier to the horizontal transport of air pollutants from the largest street in the valley and along the valley bottom, transporting them vertically instead and hence diluting them. We found that the stable stratification accumulates the street-level pollution from the transport corridor in shallow air pockets near the surface. The polluted air pockets are transported by the local recirculations to other less polluted areas with only slow dilution. This combination of relatively long-distance and complex transport paths together with weak dispersion is not sufficiently resolved in classical air pollution models. The findings have important implications for air quality predictions over urban areas. Any prediction not resolving these or similar local dynamic features might not be able to correctly simulate the dispersion of pollutants in cities.
Large scale electromechanical transistor with application in mass sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Leisheng; Li, Lijie, E-mail: L.Li@swansea.ac.uk
The nanomechanical transistor (NMT) has evolved from the single-electron transistor, a device that operates by shuttling electrons with a self-excited central conductor. The unfavoured aspects of the NMT are the complexity of the fabrication process and its signal processing unit, which could potentially be overcome by designing much larger devices. This paper reports a new design of large scale electromechanical transistor (LSEMT), still taking advantage of the principle of shuttling electrons. However, because of the large size, nonlinear electrostatic forces induced by the transistor itself are not sufficient to drive the mechanical member into vibration, so an external force has to be used. In this paper, a LSEMT device is modelled, and its new application in mass sensing is postulated using two coupled mechanical cantilevers, with one of them embedded in the transistor. The sensor is capable of detecting added mass using the eigenstate-shift method by reading the change of electrical current from the transistor, which has much higher sensitivity than the conventional eigenfrequency-shift approach used in classical cantilever-based mass sensors. Numerical simulations are conducted to investigate the performance of the mass sensor.
Primordial black holes from polynomial potentials in single field inflation
NASA Astrophysics Data System (ADS)
Hertzberg, Mark P.; Yamada, Masaki
2018-04-01
Within canonical single field inflation models, we provide a method to reverse engineer and reconstruct the inflaton potential from a given power spectrum. This is not only a useful tool to find a potential from observational constraints, but also gives insight into how to generate a large amplitude spike in density perturbations, especially those that may lead to primordial black holes (PBHs). In accord with other works, we find that the usual slow-roll conditions need to be violated in order to generate a significant spike in the spectrum. We find that a way to achieve a very large amplitude spike in single field models is for the classical roll of the inflaton to overshoot a local minimum during inflation. We provide an example of a quintic polynomial potential that implements this idea and leads to the observed spectral index, observed amplitude of fluctuations on large scales, significant PBH formation on small scales, and is compatible with other observational constraints. We quantify how much fine-tuning is required to achieve this in a family of random polynomial potentials, which may be useful to estimate the probability of PBH formation in the string landscape.
Impact of compressibility on heat transport characteristics of large terrestrial planets
NASA Astrophysics Data System (ADS)
Čížková, Hana; van den Berg, Arie; Jacobs, Michel
2017-07-01
We present heat transport characteristics for mantle convection in large terrestrial exoplanets (M ⩽ 8M⊕). Our thermal convection model is based on a truncated anelastic liquid approximation (TALA) for compressible fluids and takes into account a self-consistent thermodynamic description of material properties derived from mineral physics based on a multi-Einstein vibrational approach. We compare heat transport characteristics in compressible models with those obtained with incompressible models based on the classical and extended Boussinesq approximations (BA and EBA, respectively). Our scaling analysis shows that the heat flux scales with the effective dissipation number as Nu ∼ Di_eff^(-0.71) and with the Rayleigh number as Nu ∼ Ra_eff^(0.27). The surface heat flux of the BA models strongly overestimates the values from the corresponding compressible models, whereas the EBA models systematically underestimate the heat flux by ∼10%-15% with respect to the corresponding compressible case. Compressible models are also systematically warmer than the EBA models. Compressibility effects are therefore important for mantle dynamic processes, especially for large rocky exoplanets, and consequently also for the formation of planetary atmospheres, through outgassing, and the existence of a magnetic field, through thermal coupling of the mantle and core dynamic systems.
Vortex survival in 3D self-gravitating accretion discs
NASA Astrophysics Data System (ADS)
Lin, Min-Kai; Pierens, Arnaud
2018-04-01
Large-scale, dust-trapping vortices may account for observations of asymmetric protoplanetary discs. Disc vortices are also potential sites for accelerated planetesimal formation by concentrating dust grains. However, in 3D discs vortices are subject to destructive `elliptic instabilities', which reduce their viability as dust traps. The survival of vortices in 3D accretion discs is thus an important issue to address. In this work, we perform shearing box simulations to show that disc self-gravity enhances the survival of 3D vortices, even when self-gravity is weak in the classic sense (e.g. with a Toomre Q ≃ 5). We find a 3D, self-gravitating vortex can grow on secular timescales in spite of the elliptic instability. The vortex aspect ratio decreases as it strengthens, which feeds the elliptic instability. The result is a 3D vortex with a turbulent core that persists for ~10^3 orbits. We find that when gravitational and hydrodynamic stresses become comparable, the vortex may undergo episodic bursts, which we interpret as interaction between elliptic and gravitational instabilities. We estimate the distribution of dust particles in self-gravitating, turbulent vortices. Our results suggest large-scale vortices in protoplanetary discs are more easily observed at large radii.
Kitaura, Francisco-Shu; Chuang, Chia-Hsun; Liang, Yu; Zhao, Cheng; Tao, Charling; Rodríguez-Torres, Sergio; Eisenstein, Daniel J; Gil-Marín, Héctor; Kneib, Jean-Paul; McBride, Cameron; Percival, Will J; Ross, Ashley J; Sánchez, Ariel G; Tinker, Jeremy; Tojeiro, Rita; Vargas-Magana, Mariana; Zhao, Gong-Bo
2016-04-29
Sound waves from the primordial fluctuations of the Universe imprinted in the large-scale structure, called baryon acoustic oscillations (BAOs), can be used as standard rulers to measure the scale of the Universe. These oscillations have already been detected in the distribution of galaxies. Here we propose to measure BAOs from the troughs (minima) of the density field. Based on two sets of accurate mock halo catalogues with and without BAOs in the seed initial conditions, we demonstrate that the BAO signal cannot be obtained from the clustering of classical disjoint voids, but it is clearly detected from overlapping voids. The latter represent an estimate of all troughs of the density field. We compute them from the empty circumsphere centers constrained by tetrahedra of galaxies using Delaunay triangulation. Our theoretical models based on an unprecedented large set of detailed simulated void catalogues are remarkably well confirmed by observational data. We use the largest recently publicly available sample of luminous red galaxies from SDSS-III BOSS DR11 to unveil for the first time a >3σ BAO detection from voids in observations. Since voids are nearly isotropically expanding regions, their centers represent the most quiet places in the Universe, keeping in mind the cosmos origin and providing a new promising window in the analysis of the cosmological large-scale structure from galaxy surveys.
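A minimal sketch of the circumsphere construction described above: the circumcenter of each Delaunay tetrahedron solves a small linear system; the random points stand in for galaxy positions.

```python
import numpy as np
from scipy.spatial import Delaunay

def circumspheres(points):
    """Circumcenters and radii of the Delaunay tetrahedra of a 3D point set.

    For each tetrahedron with vertices p0..p3, the circumcenter c satisfies
    |c - pi|^2 = R^2 for all i, which reduces to a 3x3 linear system.
    """
    tri = Delaunay(points)
    centers, radii = [], []
    for simplex in tri.simplices:
        p = points[simplex]            # (4, 3) vertex coordinates
        a = 2.0 * (p[1:] - p[0])       # 3x3 system matrix
        b = np.sum(p[1:]**2 - p[0]**2, axis=1)
        c = np.linalg.solve(a, b)
        centers.append(c)
        radii.append(np.linalg.norm(c - p[0]))
    return np.array(centers), np.array(radii)

pts = np.random.rand(500, 3)           # stand-in for galaxy positions
centers, radii = circumspheres(pts)    # trough (void) centers and sizes
```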
Life histories of hosts and pathogens predict patterns in tropical fungal plant diseases.
García-Guzmán, Graciela; Heil, Martin
2014-03-01
Plant pathogens affect the fitness of their hosts and maintain biodiversity. However, we lack theories to predict the type and intensity of infections in wild plants. Here we demonstrate using fungal pathogens of tropical plants that an examination of the life histories of hosts and pathogens can reveal general patterns in their interactions. Fungal infections were more commonly reported for light-demanding than for shade-tolerant species and for evergreen rather than for deciduous hosts. Both patterns are consistent with classical defence theory, which predicts lower resistance in fast-growing species and suggests that the deciduous habit can reduce enemy populations. In our literature survey, necrotrophs were found mainly to infect shade-tolerant woody species whereas biotrophs dominated in light-demanding herbaceous hosts. Far-red signalling and its inhibitory effects on jasmonic acid signalling are likely to explain this phenomenon. Multiple changes between the necrotrophic and the symptomless endophytic lifestyle at the ecological and evolutionary scale indicate that endophytes should be considered when trying to understand large-scale patterns in the fungal infections of plants. Combining knowledge about the molecular mechanisms of pathogen resistance with classical defence theory enables the formulation of testable predictions concerning general patterns in the infections of wild plants by fungal pathogens. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
Bioassays as one of the Green Chemistry tools for assessing environmental quality: A review.
Wieczerzak, M; Namieśnik, J; Kudłak, B
2016-09-01
For centuries, mankind has contributed to irreversible environmental changes, but due to the modern science of recent decades, scientists are able to assess the scale of this impact. The introduction of laws and standards to ensure environmental cleanliness requires comprehensive environmental monitoring, which should also meet the requirements of Green Chemistry. The broad spectrum of Green Chemistry principle applications should also include all of the techniques and methods of pollutant analysis and environmental monitoring. The classical methods of chemical analyses do not always match the twelve principles of Green Chemistry, and they are often expensive and employ toxic and environmentally unfriendly solvents in large quantities. These solvents can generate hazardous and toxic waste while consuming large volumes of resources. Therefore, there is a need to develop reliable techniques that would not only meet the requirements of Green Analytical Chemistry, but they could also complement and sometimes provide an alternative to conventional classical analytical methods. These alternatives may be found in bioassays. Commercially available certified bioassays often come in the form of ready-to-use toxkits, and they are easy to use and relatively inexpensive in comparison with certain conventional analytical methods. The aim of this study is to provide evidence that bioassays can be a complementary alternative to classical methods of analysis and can fulfil Green Analytical Chemistry criteria. The test organisms discussed in this work include single-celled organisms, such as cell lines, fungi (yeast), and bacteria, and multicellular organisms, such as invertebrate and vertebrate animals and plants. Copyright © 2016 Elsevier Ltd. All rights reserved.
Generating descriptive visual words and visual phrases for large-scale image applications.
Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen
2011-09-01
Bag-of-visual-words (BoW) representation has been applied to various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, the visual vocabulary created from single-image local descriptors is often not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, more comparable with text words than classic visual words. We apply the identified DVWs and DVPs in several applications, including large-scale near-duplicate image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicate image retrieval in terms of accuracy, efficiency and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
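A minimal sketch of mining frequently co-occurring visual word pairs as candidate visual phrases; real DVP mining also uses spatial proximity and descriptiveness scores, so the per-image co-occurrence counting and support threshold here are simplifying assumptions.

```python
from collections import Counter
from itertools import combinations

def mine_visual_phrases(images, min_support=50):
    """Count visual-word pairs that co-occur in at least `min_support` images.

    `images` is a list of per-image visual-word ID lists; surviving pairs are
    the candidate descriptive visual phrases (DVPs).
    """
    pair_counts = Counter()
    for words in images:
        for pair in combinations(sorted(set(words)), 2):
            pair_counts[pair] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_support}

phrases = mine_visual_phrases([[1, 2, 3], [1, 2], [2, 3]], min_support=2)
print(phrases)                         # {(1, 2): 2, (1, 3): 1, ...} filtered to support >= 2
```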
Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks.
Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk
2015-01-01
Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as part of HetNets makes careful network planning a key challenge for operators. In particular, massive and unplanned deployment of base stations can cause high interference, severely degrading network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small long term evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet, EA performance often deteriorates rapidly with the growth of search space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we attempt to decompose the problem and tackle its subcomponents individually. Particularly noting that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. Both the simulation and analytical results indicate that, in terms of system throughput, the proposed solution outperforms a random-grouping based EA as well as an EA that detects interacting variables by monitoring changes in the objective function.
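The correlation-grouping idea can be read as a graph decomposition: cells whose mutual interference exceeds a threshold land in the same subproblem, and each subproblem is then handed to its own EA instance. A minimal sketch, assuming a symmetric interference matrix and an arbitrary threshold; the paper's actual grouping criterion may differ.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def correlation_groups(interference, threshold):
    """Split cells into subproblems: cells linked by mutual
    interference above `threshold` share a group, and each group is
    then optimized by its own EA instance.  The threshold is an
    assumed tuning knob, not a value from the paper."""
    adj = csr_matrix(interference > threshold)
    n_groups, labels = connected_components(adj, directed=False)
    return [np.flatnonzero(labels == g) for g in range(n_groups)]

# toy usage: six cells forming two interference clusters
I = np.zeros((6, 6))
I[0, 1] = I[1, 2] = I[3, 4] = I[4, 5] = 0.9
I = I + I.T
print(correlation_groups(I, threshold=0.5))  # [array([0, 1, 2]), array([3, 4, 5])]
```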
Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan
While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in a way that differs from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and weak scaling of a 32-node virtual cluster. Overall, we find single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
Dynamics of Topological Excitations in a Model Quantum Spin Ice
NASA Astrophysics Data System (ADS)
Huang, Chun-Jiong; Deng, Youjin; Wan, Yuan; Meng, Zi Yang
2018-04-01
We study the quantum spin dynamics of a frustrated XXZ model on a pyrochlore lattice by using large-scale quantum Monte Carlo simulation and stochastic analytic continuation. In the low-temperature quantum spin ice regime, we observe signatures of coherent photon and spinon excitations in the dynamic spin structure factor. As the temperature rises to the classical spin ice regime, the photon disappears from the dynamic spin structure factor, whereas the dynamics of the spinon remain coherent in a broad temperature window. Our results provide experimentally relevant, quantitative information for the ongoing pursuit of quantum spin ice materials.
Aeroacoustics of Flight Vehicles: Theory and Practice. Volume 1: Noise Sources
NASA Technical Reports Server (NTRS)
Hubbard, Harvey H. (Editor)
1991-01-01
Methodology recommended for evaluating aeroacoustic-related problems is provided, and approaches to their solutions are suggested without extensive tables, nomographs, and derivations. The orientation is toward flight vehicles, and the emphasis is on underlying physical concepts. Theoretical, experimental, and applied aspects are covered, including the main formulations and comparisons of theory and experiment. The topics covered include: propeller and propfan noise, rotor noise, turbomachinery noise, classical jet noise theory and experiments, noise from turbulent shear flows, jet noise generated by large-scale coherent motion, airframe noise, propulsive lift noise, combustion and core noise, and sonic booms.
Geoid, topography, and convection-driven crustal deformation on Venus
NASA Technical Reports Server (NTRS)
Simons, Mark; Hager, Bradford H.; Solomon, Sean C.
1992-01-01
High-resolution Magellan images and altimetry of Venus reveal a wide range of styles and scales of surface deformation that cannot readily be explained within the classical terrestrial plate tectonic paradigm. The high correlation of long-wavelength topography and gravity and the large apparent depths of compensation suggest that Venus lacks an upper-mantle low-viscosity zone. A key difference between Earth and Venus may be the degree of coupling between the convecting mantle and the overlying lithosphere. Mantle flow should then have recognizable signatures in the relationships between surface topography, crustal deformation, and the observed gravity field.
Quantum Corrections in Nanoplasmonics: Shape, Scale, and Material
NASA Astrophysics Data System (ADS)
Christensen, Thomas; Yan, Wei; Jauho, Antti-Pekka; Soljačić, Marin; Mortensen, N. Asger
2017-04-01
The classical treatment of plasmonics is insufficient at the nanometer-scale due to quantum mechanical surface phenomena. Here, an extension of the classical paradigm is reported which rigorously remedies this deficiency through the incorporation of first-principles surface response functions—the Feibelman d parameters—in general geometries. Several analytical results for the leading-order plasmonic quantum corrections are obtained in a first-principles setting; particularly, a clear separation of the roles of shape, scale, and material is established. The utility of the formalism is illustrated by the derivation of a modified sum rule for complementary structures, a rigorous reformulation of Kreibig's phenomenological damping prescription, and an account of the small-scale resonance shifting of simple and noble metal nanostructures.
A low-order model for long-range infrasound propagation in random atmospheric waveguides
NASA Astrophysics Data System (ADS)
Millet, C.; Lott, F.
2014-12-01
In numerical modeling of long-range infrasound propagation in the atmosphere, the wind and temperature profiles are usually obtained by matching atmospheric models to empirical data. The atmospheric models are classically obtained from operational numerical weather prediction centers (the NOAA Global Forecast System or the ECMWF Integrated Forecast System) as well as atmospheric climate reanalysis activities, and thus do not explicitly resolve atmospheric gravity waves (GWs). The GWs are generally too small to be represented in global circulation models, and their effects on the resolved scales need to be parameterized in order to account for fine-scale atmospheric inhomogeneities (length scales less than 100 km). In the present approach, the sound speed profiles are considered as random functions, obtained by superimposing a stochastic GW field on the ECMWF reanalysis ERA-Interim. The spectral domain is binned into a large number of monochromatic GWs, and the breaking of each GW is treated independently from the others. The wave equation is solved using a reduced-order model, starting from the classical normal mode technique. We focus on the asymptotic behavior of the transmitted waves in the weakly heterogeneous regime (for which the coupling between the wave and the medium is weak), with a fixed number of propagating modes obtained by rearranging the eigenvalues by decreasing Sobol indices. The most important feature of the stochastic approach lies in the fact that the model order (i.e., the number of relevant eigenvalues) can be chosen to satisfy a given statistical accuracy whatever the frequency. As the low-order model preserves the overall structure of waveforms under sufficiently small perturbations of the profile, it can be applied to sensitivity analysis and uncertainty quantification. The gain in CPU cost provided by the low-order model is essential for extracting statistical information from simulations. The statistics of a transmitted broadband pulse are computed by decomposing the original pulse into a sum of modal pulses that propagate with different phase speeds and can be described by a front pulse stabilization theory. The method is illustrated on two large-scale infrasound calibration experiments conducted at the Sayarim Military Range, Israel, in 2009 and 2011.
Using Classical Population Genetics Tools with Heterochroneous Data: Time Matters!
Depaulis, Frantz; Orlando, Ludovic; Hänni, Catherine
2009-01-01
Background: New polymorphism datasets from heterochroneous data have arisen thanks to recent advances in experimental and microbial molecular evolution, and the sequencing of ancient DNA (aDNA). However, classical tools for population genetics analyses do not take into account heterochrony between subsets, despite potential bias on neutrality and population structure tests. Here, we characterize the extent of such possible biases using serial coalescent simulations. Methodology/Principal Findings: We first use a coalescent framework to generate datasets assuming no or different levels of heterochrony and contrast most classical population genetic statistics. We show that even weak levels of heterochrony (~10% of the average depth of a standard population tree) affect the distribution of polymorphism substantially, leading to overestimates of the level of polymorphism θ and to star-like trees with an excess of rare mutations and a deficit of linkage disequilibrium, which are hallmarks of, e.g., population expansion (possibly after a drastic bottleneck). Substantial departures of the tests are detected in the opposite direction for more heterochroneous and equilibrated datasets, with balanced trees mimicking, in particular, population contraction, balancing selection, and population differentiation. We therefore introduce simple corrections to classical estimators of polymorphism and of the genetic distance between populations, in order to remove heterochrony-driven bias. Finally, we show that these effects do occur in real aDNA datasets, taking advantage of the currently available sequence data for Cave Bears (Ursus spelaeus), for which large mtDNA haplotypes have been reported over a substantial time period (22-130 thousand years ago (KYA)). Conclusions/Significance: Considering serial sampling changed the conclusions of several tests, indicating that neglecting heterochrony could provide significant support for a false past history of populations and inappropriate conservation decisions. We therefore argue for systematically considering heterochroneous models when analyzing heterochroneous samples covering a large time scale. PMID:19440242
Holography and the Coleman-Mermin-Wagner theorem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anninos, Dionysios; Hartnoll, Sean A.; Iqbal, Nabil
2010-09-15
In 2+1 dimensions at finite temperature, spontaneous symmetry breaking of global symmetries is precluded by large thermal fluctuations of the order parameter. The holographic correspondence implies that analogous effects must also occur in 3+1 dimensional theories with gauged symmetries in certain curved spacetimes with horizon. By performing a one-loop computation in the background of a holographic superconductor, we show that bulk quantum fluctuations wash out the classical order parameter at sufficiently large distance scales. The low temperature phase is seen to exhibit algebraic long-range order. Beyond the specific example we study, holography suggests that IR-singular quantum fluctuations of the fields and geometry will play an interesting role for many 3+1 dimensional asymptotically anti-de Sitter spacetimes with planar horizon.
Cavallo's multiplier for in situ generation of high voltage
NASA Astrophysics Data System (ADS)
Clayton, S. M.; Ito, T. M.; Ramsey, J. C.; Wei, W.; Blatnik, M. A.; Filippone, B. W.; Seidel, G. M.
2018-05-01
A classic electrostatic induction machine, Cavallo's multiplier, is suggested for in situ production of very high voltage in cryogenic environments. The device is suitable for generating a large electrostatic field under conditions of very small load current. Operation of the Cavallo multiplier is analyzed, with quantitative description in terms of mutual capacitances between electrodes in the system. A demonstration apparatus was constructed, and measured voltages are compared to predictions based on measured capacitances in the system. The simplicity of the Cavallo multiplier makes it amenable to electrostatic analysis using finite element software, and electrode shapes can be optimized to take advantage of a high dielectric strength medium such as liquid helium. A design study is presented for a Cavallo multiplier in a large-scale, cryogenic experiment to measure the neutron electric dipole moment.
Conditional Dispersive Readout of a CMOS Single-Electron Memory Cell
NASA Astrophysics Data System (ADS)
Schaal, S.; Barraud, S.; Morton, J. J. L.; Gonzalez-Zalba, M. F.
2018-05-01
Quantum computers require interfaces with classical electronics for efficient qubit control, measurement, and fast data processing. Fabricating the qubit and the classical control layer using the same technology is appealing because it will facilitate the integration process, improving feedback speeds and offering potential solutions to wiring and layout challenges. Integrating classical and quantum devices monolithically, using complementary metal-oxide-semiconductor (CMOS) processes, enables the processor to profit from the most mature industrial technology for the fabrication of large-scale circuits. We demonstrate a CMOS single-electron memory cell composed of a single quantum dot and a transistor that locks charge on the quantum-dot gate. The single-electron memory cell is conditionally read out by gate-based dispersive sensing using a lumped-element LC resonator. The control field-effect transistor (FET) and quantum dot are fabricated on the same chip using fully depleted silicon-on-insulator technology. We obtain a charge sensitivity of δq = 95 × 10^-6 e/√Hz when the quantum-dot readout is enabled by the control FET, comparable to results without the control FET. Additionally, we observe a single-electron retention time on the order of a second when storing a single-electron charge on the quantum dot at millikelvin temperatures. These results demonstrate first steps towards time-based multiplexing of gate-based dispersive readout in CMOS quantum devices, opening the path for the development of an all-silicon quantum-classical processor.
Statistical nature of infrared dynamics on de Sitter background
NASA Astrophysics Data System (ADS)
Tokuda, Junsei; Tanaka, Takahiro
2018-02-01
In this study, we formulate a systematic way of deriving an effective equation of motion (EoM) for long-wavelength modes of a massless scalar field with a general potential V(φ) on a de Sitter background, and investigate whether or not the effective EoM can be described as a classical stochastic process. Our formulation extends the usual stochastic formalism to include the sub-leading secular growth coming from the nonlinearity of short-wavelength modes. Applying our formalism to λφ⁴ theory, we explicitly derive an effective EoM which correctly recovers the next-to-leading secularly growing part at late times, and show that this effective EoM can be seen as a classical stochastic process. Our extended stochastic formalism can describe all secularly growing terms which appear in all correlation functions with a specific operator ordering. The restriction on the operator ordering is not a serious drawback, because the commutator of a light scalar field becomes negligible at large scales owing to the squeezing.
Tensor network method for reversible classical computation
NASA Astrophysics Data System (ADS)
Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.
2018-03-01
We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
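The counting-by-contraction idea can be demonstrated on a toy problem: encode each constraint's truth table as a 0/1 tensor and contract the shared indices, so the full tensor trace equals the number of satisfying assignments. A minimal sketch on a chain of XOR constraints rather than the paper's square-lattice vertex models; the ICD compression scheme itself is not reproduced here.

```python
import numpy as np

# Each constraint is encoded as a 0/1 tensor: T[a, b] = 1 iff the
# assignment (a, b) satisfies the constraint.  Contracting shared
# indices sums over all assignments, so the full contraction (the
# tensor trace) counts satisfying solutions -- the same principle
# the paper applies to vertex models on a square lattice.
XOR = np.array([[0, 1], [1, 0]])  # satisfied iff a != b

# chain of constraints: x1^x2=1, x2^x3=1, x3^x4=1; x1 and x4 free
count = np.einsum('ab,bc,cd->', XOR, XOR, XOR)
print(count)  # 2 solutions: 0101 and 1010
```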
On the shape of the hospital industry long run average cost curve.
Finkler, S A
1979-01-01
Empirical studies of the hospital industry have produced conflicting results with respect to the shape of the industry's long run average cost (LRAC) curve. Some of the studies have found a classical U-shaped curve. Others have produced results indicating that the LRAC curve is much closer to being L-shaped. Some theoretical support exists for both sets of findings. While classical theory predicts that the LRAC curve will be U-shaped, Alchian has presented theoretical arguments explaining why such curves would be L-shaped. This paper reconciles the results of these studies. The basis for the reconciliation is recognition of the failure of individual hospitals to produce all their individual product lines at efficient volumes. Such inefficient production is feasible and perhaps common, given the incentive structure which exists under current cost reimbursement systems. The implication of this paper is that large hospitals may have a greater potential for scale economies than has previously been recognized. PMID:528221
Foundation plan. San Bernardino Valley Union Junior College, Classics Building. ...
Foundation plan. San Bernardino Valley Union Junior College, Classics Building. Also includes sections AA-KK (except DD). Howard E. Jones, Architect, San Bernardino, California. Sheet 1, job no. 312. Scales 1/8 inch to the foot (plan) and 1/2 inch to the foot (sections). February 15, 1927. - San Bernardino Valley College, Classics Building, 701 South Mount Vernon Avenue, San Bernardino, San Bernardino County, CA
The state of Hawking radiation is non-classical
NASA Astrophysics Data System (ADS)
Brustein, Ram; Medved, A. J. M.; Zigdon, Yoav
2018-01-01
We show that the state of the Hawking radiation emitted from a large Schwarzschild black hole (BH) deviates significantly from a classical state, in spite of its apparent thermal nature. For this state, the occupation numbers of single modes of massless asymptotic fields, such as photons, gravitons and possibly neutrinos, are small and, as a result, their relative fluctuations are large. The occupation numbers of massive fields are much smaller and suppressed beyond even the expected Boltzmann suppression. It follows that this type of thermal state cannot be viewed as classical or even semiclassical. We substantiate this claim by showing that, in a state with low occupation numbers, physical observables have large quantum fluctuations and, as such, cannot be faithfully described by a mean-field or by a WKB-like semiclassical state. Since the evolution of the BH is unitary, our results imply that the state of the BH interior must also be non-classical when described in terms of the asymptotic fields. We show that such a non-classical interior cannot be described in terms of a semiclassical geometry, even though the average curvature is sub-Planckian.
NASA Astrophysics Data System (ADS)
Most, S.; Dentz, M.; Bolster, D.; Bijeljic, B.; Nowak, W.
2017-12-01
Transport in real porous media shows non-Fickian characteristics. In the Lagrangian perspective this leads to skewed distributions of particle arrival times. The skewness is triggered by the particles' memory of velocity, which persists over a characteristic length. Capturing process memory is essential to represent non-Fickianity thoroughly. Classical non-Fickian models (e.g., CTRW models) simulate the effects of memory but not the mechanisms leading to process memory. CTRWs have been applied successfully in many studies, but they nonetheless have drawbacks. In classical CTRWs each particle makes spatial transitions, drawing a random transit time for each. Consecutive transit times are drawn independently from each other, which is only valid for sufficiently large spatial transitions; to apply a finer numerical resolution than that, memory has to be implemented into the simulation. Recent CTRW methods use transition matrices to simulate correlated transit times. However, deriving such transition matrices requires transport data from a fine-scale transport simulation, and the obtained transition matrix is valid only for that single Péclet regime. The CTRW method we propose overcomes all three drawbacks: (1) we simulate transport without restrictions on the transition length; (2) we parameterize our CTRW without requiring a transport simulation; and (3) our parameterization scales across Péclet regimes. We do so by sampling the pore-scale velocity distribution to generate correlated transit times as a Lévy flight on the CDF axis of velocities, with reflection at 0 and 1. The Lévy flight is parameterized only by the correlation length. We explicitly model memory, including the evolution and decay of non-Fickianity, so it extends from local via pre-asymptotic to asymptotic scales.
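A minimal sketch of the sampling idea described above: a heavy-tailed random walk on the CDF axis of the velocity distribution, reflected at 0 and 1, yields correlated consecutive velocities and hence correlated transit times. The Cauchy step law, the inverse CDF, and all parameter values below are illustrative stand-ins, not the authors' parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_velocities(n_steps, step_scale=0.05,
                          inv_cdf=lambda u: u ** 2 + 1e-3):
    """Correlated velocity series from a heavy-tailed random walk on
    the CDF axis of the velocity distribution, reflected at 0 and 1.

    `step_scale` plays the role of the correlation parameter: small
    steps keep consecutive velocities similar.  The Cauchy steps and
    the inverse CDF are placeholders for a measured pore-scale
    velocity distribution."""
    u = rng.uniform()
    us = np.empty(n_steps)
    for k in range(n_steps):
        u = (u + step_scale * rng.standard_cauchy()) % 2.0  # fold to [0, 2)
        u = 2.0 - u if u > 1.0 else u                       # reflect to [0, 1]
        us[k] = u
    return inv_cdf(us)

v = correlated_velocities(1000)
transit_times = 1.0 / v          # transit times for a fixed transition length
print(transit_times[:5])
```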
First passage times and asymmetry of DNA translocation
NASA Astrophysics Data System (ADS)
Lua, Rhonald C.; Grosberg, Alexander Y.
2005-12-01
Motivated by experiments in which single-stranded DNA with a short hairpin loop at one end undergoes unforced diffusion through a narrow pore, we study the first passage times for a particle, executing one-dimensional Brownian motion in an asymmetric sawtooth potential, to exit one of the boundaries. We consider the first passage times for the case of classical diffusion, characterized by a mean-square displacement of the form ⟨(Δx)²⟩ ~ t, and for the case of anomalous diffusion or subdiffusion, characterized by a mean-square displacement of the form ⟨(Δx)²⟩ ~ t^γ with 0 < γ < 1. In the context of classical diffusion, we obtain an expression for the mean first passage time and show that this quantity changes when the direction of the sawtooth is reversed or, equivalently, when the reflecting and absorbing boundaries are exchanged. We discuss at which numbers of "teeth" N (or number of DNA nucleotides) and at which heights of the sawtooth potential this difference becomes significant. For large N, it is well known that the mean first passage time scales as N². In the context of subdiffusion, the mean first passage time does not exist. Therefore, we obtain instead the distribution of first passage times in the limit of long times. We show that the prefactor in the power relation for this distribution is simply the expression for the mean first passage time in classical diffusion. We also describe a hypothetical experiment to calculate the average of the first passage times for the fraction of passage events that each end within some time t*. We show that this average first passage time scales as N^(2/γ) in subdiffusion.
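In the classical-diffusion case, the setup can be reproduced numerically with an Euler-Maruyama walk in a sawtooth potential, reflecting at one boundary and absorbing at the other. The sketch below is an illustrative stand-in (potential shape, barrier height, and step sizes are arbitrary choices), not the paper's analytical calculation; reversing the sawtooth direction corresponds to exchanging the boundaries, as discussed above.

```python
import numpy as np

rng = np.random.default_rng(2)

def sawtooth_force(x, period=1.0, height=1.5, asym=0.8):
    """Force -U'(x) for an asymmetric sawtooth: U rises by `height`
    over a fraction `asym` of each period and drops over the rest.
    All shape parameters are arbitrary illustrative choices."""
    s = (x / period) % 1.0
    up, down = height / (asym * period), height / ((1 - asym) * period)
    return -up if s < asym else down

def mean_first_passage(n_teeth=3, D=1.0, dt=1e-3, n_traj=50):
    """Euler-Maruyama estimate of the MFPT to be absorbed at
    x = n_teeth, with a reflecting boundary at x = 0."""
    L, total = float(n_teeth), 0.0
    for _ in range(n_traj):
        x, t = 0.0, 0.0
        while x < L:
            x += sawtooth_force(x) * dt + np.sqrt(2 * D * dt) * rng.normal()
            x = abs(x)           # reflect at the origin
            t += dt
        total += t
    return total / n_traj

print(f"estimated MFPT: {mean_first_passage():.2f}")
```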
NASA Astrophysics Data System (ADS)
Carlowitz, Christian; Girg, Thomas; Ghaleb, Hatem; Du, Xuan-Quang
2017-09-01
For ultra-high-speed communication systems at high center frequencies above 100 GHz, we propose a disruptive change in system architecture to address major issues regarding amplifier chains with a large number of amplifier stages, which cause a high noise figure and high power consumption when operating close to the frequency limits of the underlying semiconductor technologies. Instead of scaling a classic homodyne transceiver system, we employ repeated amplification in single-stage amplifiers through positive feedback, as well as synthesizer-free self-mixing demodulation at the receiver, to simplify the system architecture notably. Since the amplitude and phase information of the emerging oscillation is defined by the input signal and the oscillator is only turned on for a very short time, it can be left unstabilized and thus operate without a PLL. As soon as gain is no longer the most prominent issue, relaxed requirements for all the other major components allow their implementation concepts to be reconsidered, to achieve further improvements over classic systems. This paper provides the first comprehensive overview of all major design aspects that need to be addressed in realizing a SPARS-based transceiver. At the system level, we show how to achieve high data rates and a noise performance comparable to classic systems, backed by scaled demonstrator experiments. Regarding the transmitter, design considerations for efficient quadrature modulation are discussed. For the frontend components that replace PA and LNA amplifier chains, implementation techniques for regenerative sampling circuits based on super-regenerative oscillators are presented. Finally, an analog-to-digital converter with outstanding performance and complete interfaces, both to the analog baseband and to the digital side, completes the set of building blocks for efficient ultra-high-speed communication.
Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R
2016-12-01
Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high-dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different from maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms (supervised principal components, regularization, and boosting) can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods and as primary analytic tools in discovery-phase research. We conclude that, despite their differences from the classic null-hypothesis testing approach (or perhaps because of them), SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
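As a toy version of the criterion-keyed scale construction described above, the sketch below fits an L1-penalized logistic regression whose penalty strength is chosen by cross-validation, so that item retention is driven by estimated out-of-sample prediction error rather than within-sample fit. Data dimensions and parameter choices are synthetic stand-ins for the personality item pool and mortality outcome, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

# Synthetic stand-in for a personality item pool (n persons x p items)
# and a binary outcome (e.g., mortality); only 10 items are informative.
rng = np.random.default_rng(3)
n, p = 500, 200
items = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:10] = 0.5
outcome = rng.binomial(1, 1 / (1 + np.exp(-items @ beta)))

# L1-penalized logistic regression with cross-validated penalty:
# cross-validation approximates expected prediction error, so the
# fitted scale is only as complex as out-of-sample accuracy supports.
model = LogisticRegressionCV(Cs=10, penalty='l1', solver='saga',
                             cv=5, max_iter=5000).fit(items, outcome)
kept = np.flatnonzero(model.coef_.ravel() != 0)
print(f"{kept.size} items retained for the criterion-keyed scale")
```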
Optimization of protein electroextraction from microalgae by a flow process.
Coustets, Mathilde; Joubert-Durigneux, Vanessa; Hérault, Josiane; Schoefs, Benoît; Blanckaert, Vincent; Garnier, Jean-Pierre; Teissié, Justin
2015-06-01
Classical methods used for large-scale treatments, such as mechanical or chemical extraction, affect the integrity of extracted cytosolic proteins by releasing the proteases contained in vacuoles. Our previous experiments on flow-process electroextraction of yeasts showed that pulsed-electric-field technology preserves the integrity of released cytosolic proteins by not affecting vacuole membranes. Furthermore, large cell-culture volumes are easily treated by the flow technology. Based on this previous knowledge, we developed a new protocol to electro-extract total cytoplasmic proteins from microalgae (Nannochloropsis salina, Chlorella vulgaris and Haematococcus pluvialis). Given that the induction of electropermeabilization is controlled by target cell size, and that the mean diameter of N. salina is only 2.5 μm, we used repetitive 2 ms pulses of alternating polarities with stronger field strengths than previously described for yeasts. The electric treatment was followed by a 24 h incubation period in a salty buffer. The amount of total protein released was assessed by a classical Bradford assay, and a more accurate evaluation of protein release was obtained by SDS-PAGE. Similar results were obtained with C. vulgaris and H. pluvialis under milder electrical conditions, as expected from their larger size. Copyright © 2014 Elsevier B.V. All rights reserved.
Cavity QED at the quantum-classical boundary
NASA Astrophysics Data System (ADS)
Fink, J. M.; Steffen, L.; Bishop, L. S.; Wallraff, A.
2010-03-01
The quantum limit of cavity QED is characterized by a well resolved vacuum Rabi mode splitting spectrum. If the number of excitations n in the resonantly coupled matter-light system is increased from one, the nonlinear √n scaling of the dressed eigenstates is observed [1]. At very large photon numbers the transmission spectrum turns into a single Lorentzian line as expected from the correspondence principle. This classical limit emerges when the occupancy of the low energy dressed states is increased until the quantum nonlinearity of the available transitions becomes small compared to dephasing and relaxation rates [2]. We explore this quantum-classical crossover in a circuit QED system where we vary the thermal occupation of the resonator by 5 orders of magnitude using a quasi-thermal noise source. From vacuum Rabi spectra measured in linear response and from time resolved vacuum Rabi oscillation measurements we consistently extract cavity field temperatures between 100 mK and 10 K using a master equation model. The presented experimental approach is useful to determine the thermal occupation of a quantum system and offers the possibility to study entanglement and decoherence at elevated temperatures. [1] J. M. Fink et al. Nature 454, 315 (2008). [2] I. Rau, et al. Phys. Rev. B 70, 054521 (2004).
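The √n scaling mentioned above comes from the resonant Jaynes-Cummings ladder, where the n-excitation doublet is split by 2g√n. A minimal numerical illustration; the coupling value is an assumption of roughly the magnitude reported in circuit QED experiments such as [1], not a quoted parameter.

```python
import numpy as np

g = 2 * np.pi * 150e6   # qubit-photon coupling rate (rad/s), assumed value
for n in range(1, 6):
    splitting_MHz = 2 * g * np.sqrt(n) / (2 * np.pi) / 1e6
    print(f"n = {n}: dressed-state splitting 2g*sqrt(n) = {splitting_MHz:.0f} MHz")
```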
De Sanctis, Bianca; Krukov, Ivan; de Koning, A P Jason
2017-09-19
Determination of the age of an allele based on its population frequency is a well-studied problem in population genetics, for which a variety of approximations have been proposed. We present a new result that, surprisingly, allows the expectation and variance of allele age to be computed exactly (within machine precision) for any finite absorbing Markov chain model in a matter of seconds. This approach makes none of the classical assumptions (e.g., weak selection, reversibility, infinite sites), exploits modern sparse linear algebra techniques, integrates over all sample paths, and is rapidly computable for Wright-Fisher populations up to N_e = 100,000. With this approach, we study the joint effect of recurrent mutation, dominance, and selection, and demonstrate new examples of "selective strolls" where the classical symmetry of allele age with respect to selection is violated by weakly selected alleles that are older than neutral alleles at the same frequency. We also show evidence for a strong age imbalance, where rare deleterious alleles are expected to be substantially older than advantageous alleles observed at the same frequency when population-scaled mutation rates are large. These results highlight the under-appreciated utility of computational methods for the direct analysis of Markov chain models in population genetics.
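The core computation can be sketched directly: for an absorbing Markov chain with transient-to-transient block Q, expected absorption times solve the sparse linear system (I - Q) t = 1. The toy birth-death chain below is a stand-in for a full Wright-Fisher transition matrix (no mutation, selection, or allele-age conditioning), so only the sparse-solve pattern is illustrated.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import spsolve

def expected_absorption_time(Q):
    """Expected steps to absorption from each transient state,
    from the exact sparse solve (I - Q) t = 1."""
    n = Q.shape[0]
    return spsolve(identity(n, format='csc') - Q, np.ones(n))

# Toy birth-death chain on allele counts 1..N-1 (0 and N absorb):
# from count k, move up or down with probability q_k = (k/N)(1-k/N).
N = 500
k = np.arange(1, N)
q = (k / N) * (1 - k / N)
Q = diags([q[1:], 1 - 2 * q, q[:-1]], offsets=[-1, 0, 1], format='csc')

t = expected_absorption_time(Q)
print(f"expected steps to absorption from a singleton: {t[0]:,.0f}")
```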
Forstmann, Matthias; Sagioglou, Christina
2017-08-01
In a large-scale (N = 1487) general population online study, we investigated the relationship between past experience with classic psychedelic substances (e.g. LSD, psilocybin, mescaline), nature relatedness, and ecological behavior (e.g. saving water, recycling). Using structural equation modeling we found that experience with classic psychedelics uniquely predicted self-reported engagement in pro-environmental behaviors, and that this relationship was statistically explained by people's degree of self-identification with nature. Our model controlled for experiences with other classes of psychoactive substances (cannabis, dissociatives, empathogens, popular legal drugs) as well as common personality traits that usually predict drug consumption and/or nature relatedness (openness to experience, conscientiousness, conservatism). Although correlational in nature, results suggest that lifetime experience with psychedelics in particular may indeed contribute to people's pro-environmental behavior by changing their self-construal in terms of an incorporation of the natural world, regardless of core personality traits or general propensity to consume mind-altering substances. Thereby, the present research adds to the contemporary literature on the beneficial effects of psychedelic substance use on mental wellbeing, hinting at a novel area for future research investigating their potentially positive effects on a societal level. Limitations of the present research and future directions are discussed.
NASA Astrophysics Data System (ADS)
de Rooij, G. H.
2010-09-01
Soil water is confined behind the menisci of its water-air interface. Catchment-scale fluxes (groundwater recharge, evaporation, transpiration, precipitation, etc.) affect the matric potential, and thereby the interface curvature and the configuration of the phases. In turn, these affect the fluxes (except precipitation), creating feedbacks between pore-scale and catchment-scale processes. Tracking pore-scale processes beyond the Darcy scale is not feasible. Instead, for a simplified system based on the classical Darcy's Law and Laplace-Young Law we i) clarify how menisci transfer pressure from the atmosphere to the soil water, ii) examine large-scale phenomena arising from pore-scale processes, and iii) analyze the relationship between average meniscus curvature and average matric potential. In stagnant water, changing the gravitational potential or the curvature of the air-water interface changes the pressure throughout the water. Adding small amounts of water can thus profoundly affect water pressures in a much larger volume. The pressure-regulating effect of the interface curvature showcases the meniscus as a pressure port that transfers the atmospheric pressure to the water with an offset directly proportional to its curvature. This property causes an extremely rapid rise of phreatic levels in soils once the capillary fringe extends to the soil surface and the menisci flatten. For large bodies of subsurface water, the curvature and vertical position of any meniscus quantify the uniform hydraulic potential under hydrostatic equilibrium. During unit-gradient flow, the matric potential corresponding to the mean curvature of the menisci should provide a good approximation of the intrinsic phase average of the matric potential.
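The "pressure port" picture admits a one-line quantitative check via the Laplace-Young law: the water pressure just inside a meniscus is offset from atmospheric pressure by 2γ/r. A minimal sketch assuming a hemispherical, perfectly wetting meniscus; the radii are illustrative.

```python
# The meniscus transfers atmospheric pressure to the soil water with
# an offset proportional to its curvature: psi = -2*gamma/r for a
# hemispherical wetting meniscus of radius r (an assumed geometry).
gamma = 0.0728                     # surface tension of water at 20 C, N/m
for r_um in (0.1, 1.0, 10.0, 100.0):
    r = r_um * 1e-6                # meniscus radius in meters
    psi = -2 * gamma / r           # matric potential in Pa
    print(f"r = {r_um:6.1f} um -> psi = {psi / 1e3:9.1f} kPa")
```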
Coupling LAMMPS with Lattice Boltzmann fluid solver: theory, implementation, and applications
NASA Astrophysics Data System (ADS)
Tan, Jifu; Sinno, Talid; Diamond, Scott
2016-11-01
The study of fluid flow coupled with solids has many applications in biological and engineering problems, e.g., blood cell transport, particulate flow, and drug delivery. We present a partitioned approach to solve this coupled multiphysics problem. The fluid motion is solved by the lattice Boltzmann method, while the solid displacement and deformation are simulated by the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS). The coupling is achieved through the immersed boundary method, so that the expensive remeshing step is eliminated. The code can model both rigid and deformable solids and shows very good scaling. It was validated on classic problems such as the migration of rigid particles and the orbit of an ellipsoidal particle in shear flow. Examples of applications in blood flow, drug delivery, and platelet adhesion and rupture are also given in the paper.
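The remeshing-free coupling rests on the immersed boundary method: lattice-fluid velocities are interpolated to off-lattice solid markers (and forces are spread back) through a regularized delta function. The sketch below shows the interpolation half of that exchange in 1D using Peskin's 4-point kernel; it illustrates the method in general, not code from the described LAMMPS-LB implementation.

```python
import numpy as np

def peskin_delta(r):
    """Peskin's 4-point regularized delta function (1D factor),
    with r measured in grid units; support is |r| < 2."""
    r = np.abs(r)
    phi = np.zeros_like(r)
    m1 = r < 1.0
    m2 = (r >= 1.0) & (r < 2.0)
    phi[m1] = (3 - 2 * r[m1] + np.sqrt(1 + 4 * r[m1] * (1 - r[m1]))) / 8
    phi[m2] = (5 - 2 * r[m2] - np.sqrt(-7 + 12 * r[m2] - 4 * r[m2] ** 2)) / 8
    return phi

def interpolate_velocity(u_grid, x_marker, h=1.0):
    """Interpolate lattice velocities to an off-lattice marker: the
    step that lets the LB fluid talk to solid particles without
    remeshing.  1D and illustrative."""
    nodes = np.arange(len(u_grid))
    w = peskin_delta(x_marker / h - nodes)
    return np.sum(w * u_grid)

u = np.sin(2 * np.pi * np.arange(32) / 32)   # toy velocity field
print(interpolate_velocity(u, x_marker=5.3))
```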
NASA Astrophysics Data System (ADS)
Kiani, Keivan
2017-09-01
The large-deformation regime of micro-scale slender beam-like structures subjected to axially pointed loads is of high interest to nanotechnologists and the applied mechanics community. Herein, size-dependent nonlinear governing equations are derived by employing modified couple stress theory. Under various boundary conditions, analytical relations between axially applied loads and deformations are presented. Additionally, a novel Galerkin-based assumed mode method (AMM) is established to solve the highly nonlinear equations. In particular cases, the predictions of the analytical approach are checked against those of the AMM, and reasonably good agreement is reported. Subsequently, the key role of the material length scale in the load-deformation behavior of microbeams is discussed, and the deficiencies of classical elasticity theory in predicting such a crucial mechanical behavior are explained in some detail. The influences of slenderness ratio and thickness of the microbeam on the obtained results are also examined. The present work could be considered a pivotal step towards better understanding the postbuckling behavior of nano-/micro-electro-mechanical systems consisting of microbeams.
AIDS chief says nonoxynol-9 not effective against HIV.
According to Dr. Peter Piot, executive director of the Joint UN Programme on HIV/AIDS (UNAIDS), the international multi-site trial of the spermicide nonoxynol-9 in gel form has shown that it is not effective in protecting women from HIV infection. This large-scale Phase III efficacy trial was conducted among female sex workers in Benin, Cote d'Ivoire, South Africa, and Thailand. Apart from receiving the trial microbicide or a placebo, participants also received classical HIV prevention support, such as free condoms, free treatment for sexually transmitted infections, counseling, and peer support. One positive outcome of the trial is that fewer of the sex workers who participated became infected with HIV, compared with the sex workers who did not participate at all in the study. However, Piot states that even if the results of the trials are disappointing, the search for an effective microbicide continues. To this effect, at least 36 compounds are at the preclinical testing stage, while 20 are ready for early safety trials in human volunteers and three additional compounds are being considered for large-scale trials.
Performing a Large-Scale Modal Test on the B2 Stand Crane at NASA's Stennis Space Center
NASA Technical Reports Server (NTRS)
Stasiunas, Eric C.; Parks, Russel A.; Sontag, Brendan D.
2018-01-01
A modal test of NASA's Space Launch System (SLS) Core Stage is scheduled to occur at the Stennis Space Center B2 test stand. A derrick crane with a 150-ft long boom, located at the top of the stand, will be used to suspend the Core Stage in order to achieve defined boundary conditions. During this suspended modal test, it is expected that dynamic coupling will occur between the crane and the Core Stage. Therefore, a separate modal test was performed on the B2 crane itself, in order to evaluate the varying dynamic characteristics and correlate math models of the crane. Performing a modal test on such a massive structure was challenging and required creative test setup and procedures, including implementing both AC and DC accelerometers, and performing both classical hammer and operational modal analysis. This paper describes the logistics required to perform this large-scale test, as well as details of the test setup, the modal test methods used, and an overview and application of the results.
Task-driven dictionary learning.
Mairal, Julien; Bach, Francis; Ponce, Jean
2012-04-01
Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.
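The unsupervised inner problem (learning a dictionary D in which signals admit sparse codes) can be sketched with standard tools; the paper's contribution is the supervised outer loop that tunes D for a task, which is only indicated in a comment here. The data and parameters below are synthetic assumptions.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import Lasso

# Unsupervised stage: learn D so that signals are sparse in D.
rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 64))           # stand-in for image patches

dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                   batch_size=64, random_state=0).fit(X)
D = dico.components_                      # 100 atoms of dimension 64

# Sparse coding of one signal: the inner problem of task-driven
# learning.  The supervised outer loop of the paper would then
# back-propagate a task loss through these codes to update D.
code = Lasso(alpha=0.1, fit_intercept=False).fit(D.T, X[0]).coef_
print(f"{np.count_nonzero(code)} active atoms out of {len(code)}")
```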
Performing a Large-Scale Modal Test on the B2 Stand Crane at NASA's Stennis Space Center
NASA Technical Reports Server (NTRS)
Stasiunas, Eric C.; Parks, Russel A.
2018-01-01
A modal test of NASA’s Space Launch System (SLS) Core Stage is scheduled to occur prior to propulsion system verification testing at the Stennis Space Center B2 test stand. A derrick crane with a 180-ft long boom, located at the top of the stand, will be used to suspend the Core Stage in order to achieve defined boundary conditions. During this suspended modal test, it is expected that dynamic coupling will occur between the crane and the Core Stage. Therefore, a separate modal test was performed on the B2 crane itself, in order to evaluate the varying dynamic characteristics and correlate math models of the crane. Performing a modal test on such a massive structure was challenging and required creative test setup and procedures, including implementing both AC and DC accelerometers, and performing both classical hammer and operational modal analysis. This paper describes the logistics required to perform this large-scale test, as well as details of the test setup, the modal test methods used, and an overview of the results.
Flutter performance of bend-twist coupled large-scale wind turbine blades
NASA Astrophysics Data System (ADS)
Hayat, Khazar; de Lecea, Alvaro Gorostidi Martinez; Moriones, Carlos Donazar; Ha, Sung Kyu
2016-05-01
Bend-twist coupling (BTC) has been proven effective in mitigating fatigue loads for large-scale wind turbine blades, but at the same time it may introduce a risk of flutter instability. BTC is defined as a feature of twisting of the blade induced by the primary bending deformation. In classical flutter, the coupling arises from the aerodynamic loads changing with the angle of attack. In this study, the effects of structural BTC on flutter are investigated by considering layup unbalances (ply angle, material, and thickness of the composite laminates) in the NREL 5-MW wind turbine rotor blade of glass fiber/epoxy [02/+45/-45]S laminates. It is numerically shown that the flutter speed may decrease by about 5 percent with unbalanced ply angle alone (one side angle, from 45° to 25°). It is then demonstrated that the flutter performance of the wind turbine blade can be improved by using lighter and stiffer carbon fibers, which at the same time ensure higher structural BTC.
Exploratory Item Classification Via Spectral Graph Clustering
Chen, Yunxiao; Li, Xiaoou; Liu, Jingchen; Xu, Gongjun; Ying, Zhiliang
2017-01-01
Large-scale assessments are supported by a large item pool. An important task in test development is to assign items into scales that measure different characteristics of individuals, and a popular approach is cluster analysis of items. Classical methods in cluster analysis, such as the hierarchical clustering, K-means method, and latent-class analysis, often induce a high computational overhead and have difficulty handling missing data, especially in the presence of high-dimensional responses. In this article, the authors propose a spectral clustering algorithm for exploratory item cluster analysis. The method is computationally efficient, effective for data with missing or incomplete responses, easy to implement, and often outperforms traditional clustering algorithms in the context of high dimensionality. The spectral clustering algorithm is based on graph theory, a branch of mathematics that studies the properties of graphs. The algorithm first constructs a graph of items, characterizing the similarity structure among items. It then extracts item clusters based on the graphical structure, grouping similar items together. The proposed method is evaluated through simulations and an application to the revised Eysenck Personality Questionnaire. PMID:29033476
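A generic version of the pipeline reads: build an item-similarity graph, take the leading eigenvectors of its normalized Laplacian, and run k-means on the embedded items. The sketch below follows that standard spectral clustering recipe rather than the authors' exact algorithm; the toy similarity matrix is an assumption. Missing responses only need to be handled when building the similarity matrix (e.g., pairwise-complete correlations), which is one reason the approach copes well with incomplete data.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_item_clusters(similarity, k):
    """Cluster items from a symmetric similarity matrix via the
    normalized graph Laplacian (standard spectral clustering)."""
    d = similarity.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(d)) - d_inv_sqrt[:, None] * similarity * d_inv_sqrt[None, :]
    _, vecs = eigh(L, subset_by_index=[0, k - 1])   # k smallest eigenpairs
    rows = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(rows)

# toy usage: two blocks of mutually similar items
S = np.kron(np.eye(2), np.full((5, 5), 0.9)) + 0.05
np.fill_diagonal(S, 1.0)
print(spectral_item_clusters(S, k=2))
```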
Spectral Gap Energy Transfer in Atmospheric Boundary Layer
NASA Astrophysics Data System (ADS)
Bhushan, S.; Walters, K.; Barros, A. P.; Nogueira, M.
2012-12-01
Experimental measurements of atmospheric turbulence energy spectra show E(k) ~ k^-3 slopes at synoptic scales (~600-2000 km) and k^-5/3 slopes at the mesoscales (< 400 km). The -5/3 spectrum is presumably related to 3D turbulence, which is dominated by the classical Kolmogorov energy cascade. The -3 spectrum is related to 2D turbulence, which is dominated by strong forward scatter of enstrophy and weak forward scatter of energy. In classical 2D turbulence theory, a strong backward energy cascade would develop at the synoptic scale and circulation would grow without bound; to limit this backward transfer, energy arrest at macroscales must be introduced. The turbulence models most commonly used to mimic this energy transfer include energy-backscatter models for 2D turbulence in the horizontal plane via Large Eddy Simulation (LES), dissipative URANS models in the vertical plane, and Ekman friction for the energy arrest. One controversial issue surrounding the atmospheric turbulence spectrum is how the 2D and 3D spectra, and the transition between them, are generated when energy is injected at the synoptic scales. Lilly (1989) proposed that the coexistence of 2D and 3D spectra can only be explained by an additional energy injection in the mesoscale region. A second issue is related to observations of dual-peak spectra with small variance in the mesoscale, suggesting that the energy transfer occurs across a spectral gap (Van Der Hoven, 1957). Several studies have confirmed the spectral gap for mesoscale circulations and have suggested that they are enhanced by smaller-scale vertical convection rather than by the synoptic scales. Further, the widely accepted energy-arrest mechanism by boundary-layer friction is closely related to the spectral-gap transfer. This study proposes an energy transfer mechanism for atmospheric turbulence with synoptic-scale injection, wherein the generation of the 2D and 3D spectra is explained using spectral-gap energy transfer. The existence of the spectral-gap energy transfer is validated by performing LES of the interaction of a large-scale circulation with a wall, and studying the evolution of the energy spectra both near to and far from the wall. Simulations are also performed using the Advanced Research Weather Research and Forecasting model (WRF-ARW) for moist zonal flow over a Gaussian ridge, and the energy spectra near and away from the ground are studied. The energy spectra predicted by WRF-ARW are qualitatively compared with the LES results to emphasize the limitations of currently used turbulence parameterizations. Ongoing validation efforts include: (1) extending the wall-interaction simulations to finer grids to capture a wider range of wavenumbers; and (2) a planned coupled 2D-3D simulation to predict the entire atmospheric turbulence spectrum at very low computational expense. The overarching objective of this study is to develop turbulence modeling capability based on the energy transfer mechanisms proposed here. Such a model will be implemented in WRF-ARW and applied to atmospheric simulations, for example the prediction of moisture-convergence patterns at the mesoscale in the southeast United States (Tao & Barros, 2008).
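The spectral slopes discussed above are diagnosed from velocity fields by Fourier transform. The sketch below computes a 1D energy spectrum and recovers the slope of a synthetic signal built with a k^-5/3 range; it is purely diagnostic and illustrative, implementing none of the proposed transfer mechanism.

```python
import numpy as np

def energy_spectrum(u, dx=1.0):
    """1D kinetic-energy spectrum of a velocity signal via FFT: the
    quantity whose k^-3 and k^-5/3 ranges are discussed above."""
    n = len(u)
    uk = np.fft.rfft(u - u.mean()) / n
    E = 0.5 * np.abs(uk) ** 2
    k = np.fft.rfftfreq(n, d=dx) * 2 * np.pi
    return k[1:], E[1:]

# synthetic signal with a prescribed -5/3 range, for illustration only
rng = np.random.default_rng(5)
n = 4096
k = np.fft.rfftfreq(n, d=1.0) * 2 * np.pi
amps = np.zeros_like(k)
amps[1:] = k[1:] ** (-5 / 6)              # E ~ |u_k|^2 ~ k^-5/3
phases = rng.uniform(0, 2 * np.pi, len(k))
u = np.fft.irfft(amps * np.exp(1j * phases), n)

kk, E = energy_spectrum(u)
slope = np.polyfit(np.log(kk[10:400]), np.log(E[10:400]), 1)[0]
print(f"fitted spectral slope: {slope:.2f}")  # close to -5/3
```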
NASA Astrophysics Data System (ADS)
Khani, Sina; Porté-Agel, Fernando
2017-12-01
The performance of the modulated-gradient subgrid-scale (SGS) model is investigated using large-eddy simulation (LES) of the neutral atmospheric boundary layer within the Weather Research and Forecasting model. Since the model includes a finite-difference scheme for spatial derivatives, discretization errors may affect the simulation results. We focus here on understanding the effects of finite-difference schemes on the momentum balance and the mean velocity distribution, and on whether an ad hoc canopy model is required. We find that, unlike with the Smagorinsky and turbulent kinetic energy (TKE) models, the mean velocity and vertical shear calculated using the modulated-gradient model are in good agreement with Monin-Obukhov similarity theory, without the need for an extra near-wall canopy model. The structure of the near-wall turbulent eddies is better resolved using the modulated-gradient model than with the classical Smagorinsky and TKE models, which are too dissipative and yield unrealistic smoothing of the smallest resolved scales. Moreover, the SGS fluxes obtained from the modulated-gradient model are much smaller near the wall than those obtained from the regular Smagorinsky and TKE models. The apparent inability of the LES model to reproduce the mean streamwise component of the momentum balance using the total (resolved plus SGS) stress near the surface is probably due to discretization errors, which can be calculated a posteriori using a Taylor-series expansion of the resolved velocity field. Overall, we demonstrate that the modulated-gradient model is less dissipative and yields more accurate results than the classical Smagorinsky model, at similar computational cost.
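For reference, the classical Smagorinsky closure against which the modulated-gradient model is compared computes an eddy viscosity ν_t = (C_s Δ)² |S| from the resolved strain rate. A minimal 2D sketch with an illustrative C_s; the modulated-gradient model itself is not reproduced here.

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, Cs=0.16):
    """Classical Smagorinsky eddy viscosity nu_t = (Cs*Delta)^2 |S|
    for a 2D slice of resolved velocity gradients, where
    |S| = sqrt(2 S_ij S_ij).  Cs and the 2D restriction are
    illustrative simplifications."""
    Sxx, Syy = dudx, dvdy
    Sxy = 0.5 * (dudy + dvdx)
    S_mag = np.sqrt(2 * (Sxx ** 2 + Syy ** 2 + 2 * Sxy ** 2))
    return (Cs * delta) ** 2 * S_mag

# toy usage on a random resolved-gradient field
rng = np.random.default_rng(6)
g = rng.normal(size=(4, 64, 64))
nu_t = smagorinsky_nu_t(g[0], g[1], g[2], g[3], delta=0.1)
print(f"mean eddy viscosity: {nu_t.mean():.4e}")
```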
Transient and residual stresses in large castings, taking time effects into account
NASA Astrophysics Data System (ADS)
Thorborg, J.; Klinkhammer, J.; Heitzer, M.
2012-07-01
Casting of large-scale steel and iron parts leads to long solidification and cooling times. Solid-mechanics calculations for these castings have to take the time scale of the process into account in order to predict transient and residual stress levels with reasonable accuracy. This paper presents a study on modelling the thermo-mechanical conditions in the cast material using a unified approach to describe the constitutive behaviour: a classical splitting of the mechanical strain into elastic and inelastic contributions, where the inelastic strain is formulated only in the deviatoric space in terms of the J2 invariant. At high temperatures, creep is pronounced, and since the cooling time is long, the model includes a Norton-type power law to integrate the significant contribution of creep to the inelastic strains. At these temperature levels annealing effects are also dominant, and hence no hardening is modelled. At intermediate and lower temperature levels, however, hardening is more pronounced and isotropic hardening is considered. Different hardening models have been studied and selected based on their ability to describe the behaviour at the different temperature levels. At the lower temperature levels, time effects diminish and the formulation reduces to a time-independent one, like classical J2-flow theory. Several tensile and creep experiments were performed at different temperature levels to provide input data for selecting the appropriate contributions to the material model; the measurements were furthermore used to extract material data for the model. The numerical model is applied to different industrial examples to verify the agreement between measured and calculated deformations.
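The creep contribution can be sketched with the Norton law named above, eps_dot = A * sigma^n * exp(-Q/(R*T)): at constant stress and temperature, the secondary creep strain grows linearly in time. The material constants below are placeholders, not the values fitted to the study's tensile and creep experiments.

```python
import numpy as np

def norton_creep_strain(stress, T, t_end, A=1e-10, n=5.0, Q=3.0e5):
    """Secondary creep strain from Norton's power law,
    eps_dot = A * sigma^n * exp(-Q / (R*T)), integrated at constant
    stress and temperature.  A, n, Q are placeholder constants."""
    R = 8.314                                       # J/(mol K)
    rate = A * stress ** n * np.exp(-Q / (R * T))   # constant creep rate
    return rate * t_end

for T in (800.0, 1000.0, 1200.0):                   # K, illustrative
    eps = norton_creep_strain(stress=50.0, T=T, t_end=3600.0)
    print(f"T = {T:6.0f} K: creep strain after 1 h = {eps:.2e}")
```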
Rayleigh instability at small length scales.
Gopan, Nandu; Sathian, Sarith P
2014-09-01
The Rayleigh instability (also called the Plateau-Rayleigh instability) of a nanosized liquid propane thread is investigated using molecular dynamics (MD). The validity of classical predictions at small length scales is tested by comparing the temporal evolution of the MD-simulated liquid thread against those predictions. Previous works have shown that thermal fluctuations become dominant at small length scales, so the role and influence of their stochastic nature in determining the instability at small length scales is also investigated. Thermal fluctuations are seen to dominate and accelerate the breakup process only during the last stages of breakup. The simulations also reveal that the breakup profiles of nanoscale threads are modified by the reorganization of molecules through the evaporation-condensation process.
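For context, the classical (inviscid) Rayleigh-Plateau result against which nanoscale breakup is usually compared: perturbations with kR < 1 grow, and the fastest-growing mode sits near kR ≈ 0.697, i.e. a wavelength of about 9.01R. A one-liner with an illustrative thread radius:

```python
import numpy as np

# Classical fastest-growing mode of an inviscid liquid cylinder:
# k_max * R ~ 0.697, so lambda_max ~ 9.01 * R.  The radius is an
# illustrative choice, not the simulated propane thread's value.
R = 2.0e-9                         # thread radius, m
k_max = 0.697 / R
print(f"fastest-growing wavelength ~ {2 * np.pi / k_max / 1e-9:.1f} nm")
```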
NASA Astrophysics Data System (ADS)
Berloff, P. S.
2016-12-01
This work aims at developing a framework for dynamically consistent parameterization of mesoscale eddy effects for use in non-eddy-resolving ocean circulation models. The proposed eddy parameterization framework is successfully tested on the classical wind-driven double-gyre model, which is solved both with an explicitly resolved, vigorous eddy field and in a non-eddy-resolving configuration with the eddy parameterization replacing the eddy effects. The parameterization focuses on the effect of the stochastic part of the eddy forcing that backscatters and induces the eastward jet extension of the western boundary currents and its adjacent recirculation zones. The parameterization locally approximates transient eddy flux divergence by a spatially localized and temporally periodic forcing, referred to as the plunger, and focuses on the linear-dynamics flow solution induced by it. The nonlinear self-interaction of this solution, referred to as the footprint, characterizes and quantifies the induced eddy forcing exerted on the large-scale flow. We find that the spatial pattern and amplitude of each footprint strongly depend on the underlying large-scale flow, and the corresponding relationships provide the basis for the eddy parameterization and its closure on the large-scale flow properties. Dependencies of the footprints on other important parameters of the problem are also systematically analyzed. The parameterization utilizes the local large-scale flow information, constructs and scales the corresponding footprints, and then sums them up over the gyres to produce the resulting eddy forcing field, which is interactively added to the model as an extra forcing. Thus, the assumed ensemble of plunger solutions can be viewed as a simple model for the cumulative effect of the stochastic eddy forcing. The parameterization framework is implemented in the simplest way, but it provides a systematic strategy for improving the implementation algorithm.
NASA Astrophysics Data System (ADS)
Bonne, François; Alamir, Mazen; Bonnay, Patrick
2014-01-01
In this paper, a physical method to obtain control-oriented dynamical models of large-scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering safer utilization of cryoplants. The paper gives details on how basic components used in the field of large-scale helium refrigeration (especially those present on the 400 W @ 1.8 K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400 W @ 1.8 K (in the 400 W @ 4.4 K configuration) helium test facility model is then validated against experimental data, and optimal control of both the Joule-Thomson valve and the turbine valve is proposed to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonne, François; Bonnay, Patrick; Alamir, Mazen
2014-01-29
In this paper, a physical method to obtain control-oriented dynamical models of large-scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering safer utilization of cryoplants. The paper gives details on how basic components used in the field of large-scale helium refrigeration (especially those present on the 400 W @ 1.8 K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400 W @ 1.8 K (in the 400 W @ 4.4 K configuration) helium test facility model is then validated against experimental data, and optimal control of both the Joule-Thomson valve and the turbine valve is proposed to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
Locking classical correlations in quantum states.
DiVincenzo, David P; Horodecki, Michał; Leung, Debbie W; Smolin, John A; Terhal, Barbara M
2004-02-13
We show that there exist bipartite quantum states which contain a large locked classical correlation that is unlocked by a disproportionately small amount of classical communication. In particular, there are (2n+1)-qubit states for which a one-bit message doubles the optimal classical mutual information between measurement results on the subsystems, from n/2 bits to n bits. This phenomenon is impossible classically. However, states exhibiting this behavior need not be entangled. We study the range of states exhibiting this phenomenon and bound its magnitude.
Polymer physics of chromosome large-scale 3D organisation
NASA Astrophysics Data System (ADS)
Chiariello, Andrea M.; Annunziatella, Carlo; Bianco, Simona; Esposito, Andrea; Nicodemi, Mario
2016-07-01
Chromosomes have a complex architecture in the cell nucleus, which serves vital functional purposes, yet its structure and folding mechanisms remain incompletely understood. Here we show that genome-wide chromatin architecture data, as mapped by Hi-C methods across mammalian cell types and chromosomes, are well described by classical scaling concepts of polymer physics, from the sub-Mb to chromosomal scales. Chromatin is a complex mixture of different regions, folded in the conformational classes predicted by polymer thermodynamics. The contact matrix of the Sox9 locus, a region linked to severe human congenital diseases, is derived with high accuracy in mESCs and its molecular determinants identified by the theory; Sox9 self-assembles hierarchically into higher-order domains, involving abundant many-body contacts. Our approach is also applied to the Bmp7 locus. Finally, the model predictions on the effects of mutations on folding are tested against available data on a deletion in the Xist locus. Our results can help in developing new diagnostic tools for diseases linked to chromatin misfolding.
Cappelleri, Joseph C; Jason Lundy, J; Hays, Ron D
2014-05-01
The US Food and Drug Administration's guidance for industry document on patient-reported outcomes (PRO) defines content validity as "the extent to which the instrument measures the concept of interest" (FDA, 2009, p. 12). According to Strauss and Smith (2009), construct validity "is now generally viewed as a unifying form of validity for psychological measurements, subsuming both content and criterion validity" (p. 7). Hence, both qualitative and quantitative information are essential in evaluating the validity of measures. We review classical test theory and item response theory (IRT) approaches to evaluating PRO measures, including frequency of responses to each category of the items in a multi-item scale, the distribution of scale scores, floor and ceiling effects, the relationship between item response options and the total score, and the extent to which the hypothesized "difficulty" (severity) order of items is represented by observed responses. If a researcher has limited qualitative data and wants preliminary information about the content validity of the instrument, then descriptive assessments using classical test theory should be the first step. As the sample size grows during subsequent stages of instrument development, confidence in the numerical estimates from Rasch and other IRT models (as well as those of classical test theory) would also grow. Classical test theory and IRT can be useful in providing a quantitative assessment of items and scales during the content-validity phase of PRO-measure development. Depending on the particular type of measure and the specific circumstances, classical test theory and/or IRT should be considered to help maximize the content validity of PRO measures. Copyright © 2014 Elsevier HS Journals, Inc. All rights reserved.
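As a concrete illustration of the descriptive classical-test-theory checks listed above (item-total relationships, floor and ceiling effects, internal consistency), here is a minimal sketch on synthetic responses; the scale length, scoring, and data are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
items = rng.integers(0, 5, size=(200, 5)).astype(float)   # respondents x items, scored 0-4
total = items.sum(axis=1)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / total.var(ddof=1))

# Corrected item-total correlation (each item vs. the total of the remaining items)
for j in range(k):
    rest = total - items[:, j]
    r = np.corrcoef(items[:, j], rest)[0, 1]
    print(f"item {j}: corrected item-total r = {r:+.2f}")

# Floor/ceiling effects: share of respondents at the scale extremes
print(f"alpha = {alpha:.2f}, floor = {(total == 0).mean():.1%}, "
      f"ceiling = {(total == 4 * k).mean():.1%}")
```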
NASA Technical Reports Server (NTRS)
Laymon, C.; Quattrochi, D.; Malek, E.; Hipps, L.; Boettinger, J.; McCurdy, G.
1998-01-01
Landsat Thematic Mapper data are used to estimate instantaneous regional-scale surface water and energy fluxes in a semi-arid Great Basin desert of the western United States. Results suggest that it is possible to scale from point measurements of environmental state variables to regional estimates of water and energy exchange. This research characterizes the unifying thread in the classical climate-topography-soil-vegetation relation - the surface water and energy balance - through maps of the partitioning of energy throughout the landscape. The study was conducted in Goshute Valley of northeastern Nevada, which is characteristic of most faulted graben valleys of the Basin and Range Province of the western United States. The valley comprises a central playa and lake plain bordered by alluvial fans emanating from the surrounding mountains. The distribution of evapotranspiration (ET) is lowest in the middle reaches of the fans, where the water table is deep and plants are small, resulting in low evaporation and transpiration. Highest ET occurs in the center of the valley, particularly in the playa, where limited to no vegetation occurs but evaporation is relatively high because of a shallow water table and silty clay soil capable of large capillary movement. Intermediate values of ET are associated with large shrubs and are dominated by transpiration.
NASA Technical Reports Server (NTRS)
Laymon, C.; Quattrochi, D.; Malek, E.; Hipps, L.; Boettinger, J.; McCurdy, G.
1997-01-01
Landsat Thematic Mapper data are used to estimate instantaneous regional-scale surface water and energy fluxes in a semi-arid Great Basin desert of the western United States. Results suggest that it is possible to scale from point measurements of environmental state variables to regional estimates of water and energy exchange. This research characterizes the unifying thread in the classical climate-topography-soil-vegetation relation - the surface water and energy balance - through maps of the partitioning of energy throughout the landscape. The study was conducted in Goshute Valley of northeastern Nevada, which is characteristic of most faulted graben valleys of the Basin and Range Province of the western United States. The valley comprises a central playa and lake plain bordered by alluvial fans emanating from the surrounding mountains. The distribution of evapotranspiration (ET) is lowest in the middle reaches of the fans, where the water table is deep and plants are small, resulting in low evaporation and transpiration. Highest ET occurs in the center of the valley, particularly in the playa, where limited to no vegetation occurs but evaporation is relatively high because of a shallow water table and silty clay soil capable of large capillary movement. Intermediate values of ET are associated with large shrubs and are dominated by transpiration.
Local patches of turbulent boundary layer behaviour in classical-state vertical natural convection
NASA Astrophysics Data System (ADS)
Ng, Chong Shen; Ooi, Andrew; Lohse, Detlef; Chung, Daniel
2016-11-01
We present evidence of local patches in vertical natural convection that are reminiscent of Prandtl-von Kármán turbulent boundary layers, for Rayleigh numbers 10^5-10^9 and Prandtl number 0.709. These local patches exist in the classical state, where boundary layers exhibit a laminar-like Prandtl-Blasius-Pohlhausen scaling at the global level, and are distinguished by regions dominated by high shear and low buoyancy flux. Within these patches, the locally averaged mean temperature profiles appear to obey a log-law with the universal constants of Yaglom (1979). We find that the local Nusselt number versus Rayleigh number scaling relation agrees with the logarithmically corrected power-law scaling predicted in the ultimate state of thermal convection, with an exponent consistent with Rayleigh-Bénard convection and Taylor-Couette flows. The local patches grow in size with increasing Rayleigh number, suggesting that the transition from the classical state to the ultimate state is characterised by increasingly larger patches of turbulent boundary layers.
Quantum systems as embarrassed colleagues: what do tax evasion and state tomography have in common?
NASA Astrophysics Data System (ADS)
Ferrie, Chris; Blume-Kohout, Robin
2011-03-01
Quantum state estimation (a.k.a. "tomography") plays a key role in designing quantum information processors. As a problem, it resembles probability estimation - e.g. for classical coins or dice - but with some subtle and important discrepancies. We demonstrate an improved classical analogue that captures many of these differences: the "noisy coin." Observations on noisy coins are unreliable - much like soliciting sensitive information such as one's tax preparation habits. So, like a quantum system, it cannot be sampled directly. Unlike standard coins or dice, whose worst-case estimation risk scales as 1/N for all states, noisy coins (and quantum states) have a worst-case risk that scales as 1/√N and is overwhelmingly dominated by nearly-pure states. The resulting optimal estimation strategies for noisy coins are surprising and counterintuitive. We demonstrate some important consequences for quantum state estimation - in particular, that adaptive tomography can recover the 1/N risk scaling of classical probability estimation.
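A small simulation makes the noisy-coin analogy tangible. The sketch below assumes a readout that flips each observation with probability eps and uses a truncated linear inversion as the estimator; these choices are illustrative and simpler than the optimal strategies the authors derive.

```python
import numpy as np

rng = np.random.default_rng(1)
eps, N, trials = 0.1, 10_000, 2_000   # flip probability, samples, Monte Carlo repeats

def risk(p):
    """Mean squared error of a truncated inversion estimator for true bias p."""
    q = (1 - eps) * p + eps * (1 - p)                  # observed heads probability
    q_hat = rng.binomial(N, q, size=trials) / N
    p_hat = np.clip((q_hat - eps) / (1 - 2 * eps), 0.0, 1.0)
    return np.mean((p_hat - p) ** 2)

# The risk is far larger near the "pure" boundary p = 0 than at p = 0.5,
# illustrating how nearly-pure states dominate the worst case.
print("risk at p=0.5:", risk(0.5))
print("risk at p=0.0:", risk(0.0))
```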
NASA Astrophysics Data System (ADS)
Zamani Kouhpanji, Mohammad Reza; Behzadirad, Mahmoud; Busani, Tito
2017-12-01
We used the stable strain gradient theory including acceleration gradients to investigate the classical and nonclassical mechanical properties of gallium nitride (GaN) nanowires (NWs). We predicted the static length scale, Young's modulus, and shear modulus of the GaN NWs from the experimental data. Combining these results with atomic simulations, we also found the dynamic length scale of the GaN NWs. Young's modulus, shear modulus, and the static and dynamic length scales were found to be 318 GPa, 131 GPa, 8 nm, and 8.9 nm, respectively, applicable to describing the static and dynamic behavior of GaN NWs with diameters from a few nanometers up to bulk dimensions. Furthermore, the experimental data were analyzed with classical continuum theory (CCT) and compared with the available literature to illustrate the size-dependency of the mechanical properties of GaN NWs. This resolves previously published discrepancies that arose from the limitations of CCT in determining the mechanical properties of GaN NWs and their size-dependency.
Reichmann, H; Jost, W H
2010-09-01
The MAO-B inhibitor rasagiline is indicated for the treatment of idiopathic Parkinson's disease (PD), and its use is supported by evidence from large-scale, controlled clinical studies. The post-marketing observational study presented here investigated the efficacy and tolerability of rasagiline treatment (monotherapy or combination therapy) in daily clinical practice. The study included patients with idiopathic PD who received rasagiline (recommended dose 1 mg, once daily) as monotherapy or combination therapy. The treatment and observation period was approximately 4 months. Outcome measures included the change from baseline in the Columbia University Rating Scale (CURS), the Unified PD Rating Scale fluctuation subscale, daily OFF time (patient home diaries) and the PD Questionnaire-39. Adverse drug reactions/adverse events (ADRs/AEs) and the physician's global judgement of tolerability and efficacy were also examined. Overall, 754 patients received rasagiline during the study. Patients treated with rasagiline (monotherapy or combination therapy) showed significant improvements from baseline in symptom severity (including classical motor and non-classical motor/non-motor symptoms) and quality of life (QoL). Patients receiving combination therapy also experienced significant reductions in daily OFF time. Tolerability was rated as good/very good in over 90% of patients. In daily clinical practice, monotherapy or combination therapy with rasagiline is able to improve PD symptoms, reduce OFF time, and improve QoL, whilst demonstrating favourable tolerability. In addition, rasagiline has a simple dosing schedule of one tablet, once daily, with no titration. These results are consistent with the pivotal rasagiline clinical studies (TEMPO, LARGO and PRESTO).
Flow topologies and turbulence scales in a jet-in-cross-flow
Oefelein, Joseph C.; Ruiz, Anthony M.; Lacaze, Guilhem
2015-04-03
This study presents a detailed analysis of the flow topologies and turbulence scales in the jet-in-cross-flow experiment of Su and Mungal (JFM, 2004). The analysis is performed using the Large Eddy Simulation (LES) technique with a highly resolved grid and time-step and well-controlled boundary conditions. This enables quantitative agreement with the first and second moments of turbulence statistics measured in the experiment. LES is used to perform the analysis since experimental measurements of time-resolved 3D fields are still in their infancy and because sampling periods are generally limited with direct numerical simulation. A major focal point is the comprehensive characterization of the turbulence scales and their evolution. Time-resolved probes are used with long sampling periods to obtain maps of the integral scales, Taylor microscales, and turbulent kinetic energy spectra. Scalar-fluctuation scales are also quantified. In the near-field, coherent structures are clearly identified, both in physical and spectral space. Along the jet centerline, turbulence scales grow according to a classical one-third power law. However, the derived maps of turbulence scales reveal strong inhomogeneities in the flow. From the modeling perspective, these insights are useful for designing optimized grids and improving numerical predictions in similar configurations.
Chaussenot, A; Rouzier, C; Quere, M; Plutino, M; Ait-El-Mkadem, S; Bannwarth, S; Barth, M; Dollfus, H; Charles, P; Nicolino, M; Chabrol, B; Vialettes, B; Paquis-Flucklinger, V
2015-05-01
WFS1 mutations are responsible for Wolfram syndrome (WS), characterized by juvenile-onset diabetes mellitus and optic atrophy, and for low-frequency sensorineural hearing loss (LFSNHL). Our aim was to analyze the French cohort of 96 patients with WFS1-related disorders in order (i) to update clinical and molecular data with 37 novel affected individuals, (ii) to describe uncommon phenotypes and (iii) to determine the frequency of large-scale rearrangements in WFS1. We performed quantitative polymerase chain reaction (PCR) in 13 patients carrying only one heterozygous variant to identify large-scale rearrangements in WFS1. Among the 37 novel patients, 15 carried 15 novel putatively deleterious mutations, including one large deletion of 17,444 base pairs. The analysis of the cohort revealed unexpected phenotypes including (i) late-onset symptoms in 13.8% of patients with a probable autosomal recessive transmission; (ii) two siblings with recessive optic atrophy without diabetes mellitus and (iii) six patients from four families with dominantly inherited deafness and optic atrophy. We highlight the expanding spectrum of WFS1-related disorders and we show that, even if large deletions are rare events, they should be searched for in patients with classical WS carrying only one WFS1 mutation after sequencing. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Large-scale functional networks connect differently for processing words and symbol strings.
Liljeström, Mia; Vartiainen, Johanna; Kujala, Jan; Salmelin, Riitta
2018-01-01
Reconfigurations of synchronized large-scale networks are thought to be central neural mechanisms that support cognition and behavior in the human brain. Magnetoencephalography (MEG) recordings together with recent advances in network analysis now allow for sub-second snapshots of such networks. In the present study, we compared frequency-resolved functional connectivity patterns underlying reading of single words and visual recognition of symbol strings. Word reading emphasized coherence in a left-lateralized network with nodes in classical perisylvian language regions, whereas symbol processing recruited a bilateral network, including connections between frontal and parietal regions previously associated with spatial attention and visual working memory. Our results illustrate the flexible nature of functional networks, whereby processing of different form categories, written words vs. symbol strings, leads to the formation of large-scale functional networks that operate at distinct oscillatory frequencies and incorporate task-relevant regions. These results suggest that category-specific processing should be viewed not so much as a local process but as a distributed neural process implemented in signature networks. For words, increased coherence was detected particularly in the alpha (8-13 Hz) and high gamma (60-90 Hz) frequency bands, whereas increased coherence for symbol strings was observed in the high beta (21-29 Hz) and low gamma (30-45 Hz) frequency range. These findings attest to the role of coherence in specific frequency bands as a general mechanism for integrating stimulus-dependent information across brain regions.
Towards Simulating the Transverse Ising Model in a 2D Array of Trapped Ions
NASA Astrophysics Data System (ADS)
Sawyer, Brian
2013-05-01
Two-dimensional Coulomb crystals provide a useful platform for large-scale quantum simulation. Penning traps enable confinement of large numbers of ions (>100) and allow for the tunable-range spin-spin interactions demonstrated in linear ion strings, facilitating simulation of quantum magnetism at a scale that is currently intractable on classical computers. We readily confine hundreds of Doppler laser-cooled 9Be+ within a Penning trap, producing a planar array of ions with self-assembled triangular order. The transverse "drumhead" modes of our 2D crystal along with the valence electron spin of Be+ serve as a resource for generating spin-motion and spin-spin entanglement. Applying a spin-dependent optical dipole force (ODF) to the ion array, we perform spectroscopy and thermometry of individual drumhead modes. This ODF also allows us to engineer long-range Ising spin couplings of either ferromagnetic or anti-ferromagnetic character whose approximate power-law scaling with inter-ion distance, d, may be varied continuously from 1/d^0 to 1/d^3. An effective transverse magnetic field is applied via microwave radiation at the ~124-GHz spin-flip frequency, and ground states of the effective Ising Hamiltonian may in principle be prepared adiabatically by slowly decreasing this transverse field in the presence of the induced Ising coupling. Long-range anti-ferromagnetic interactions are of particular interest due to their inherent spin frustration and resulting large, near-degenerate manifold of ground states. We acknowledge support from NIST and the DARPA-OLE program.
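To illustrate the tunable-range couplings described above, the sketch below builds a power-law Ising coupling matrix J_ij proportional to 1/d_ij^alpha for ions on a small triangular lattice; the geometry, lattice size, and coupling strength J0 are placeholders rather than the experimental Penning-trap parameters.

```python
import numpy as np

def triangular_positions(rows, cols, a=1.0):
    """Ion positions on a small triangular patch with lattice constant a (illustrative)."""
    pts = [(a * (c + 0.5 * (r % 2)), a * r * np.sqrt(3) / 2)
           for r in range(rows) for c in range(cols)]
    return np.array(pts)

def ising_couplings(pos, alpha, J0=1.0):
    """Long-range couplings J_ij = J0 / d_ij**alpha, alpha tunable in [0, 3]."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    J = np.zeros_like(d)
    off = ~np.eye(len(pos), dtype=bool)   # no self-coupling
    J[off] = J0 / d[off] ** alpha
    return J

pos = triangular_positions(4, 5)
for alpha in (0.0, 1.5, 3.0):
    J = ising_couplings(pos, alpha)
    print(f"alpha={alpha}: nearest/farthest coupling ratio = "
          f"{J.max() / J[J > 0].min():.1f}")
```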
NASA Astrophysics Data System (ADS)
Volwerk, Martin; Goetz, Charlotte; Richter, Ingo; Delva, Magda; Ostaszewski, Katharina; Schwingenschuh, Konrad; Glassmeier, Karl-Heinz
2018-06-01
Context. The Rosetta Plasma Consortium (RPC) magnetometer (MAG) data during the tail excursion in March-April 2016 are used to investigate the magnetic structure of, and activity in, the tail region of the weakly outgassing comet 67P/Churyumov-Gerasimenko (67P). Aims: The goal of this study is to compare the large-scale (near) tail structure with that observed by earlier missions to strongly outgassing comets, and the small-scale turbulent energy cascade (un)related to the singing comet phenomenon. Methods: The usual methods of space plasma physics are used to analyse the magnetometer data, such as minimum variance analysis, spectral analysis, and power-law fitting. The cone angle and clock angle of the magnetic field are also calculated to interpret the data. Results: It is found that comet 67P does not have a classical draped magnetic field and no bi-lobal tail structure at this late stage of the mission, when the comet is already at 2.7 AU from the Sun. The main magnetic field direction seems to be more across the tail direction, which may indicate an asymmetric pick-up cloud. During periods of singing comet activity the propagation direction of the waves is at large angles with respect to the magnetic field and to the radial direction towards the comet. The turbulent cascade of magnetic energy from large to small scales differs in the presence of singing from that without it.
NASA Astrophysics Data System (ADS)
Tan, Xuezhi; Gan, Thian Yew; Chen, Shu; Liu, Bingjun
2018-05-01
Climate change and large-scale climate patterns may result in changes in the probability distributions of climate variables, associated with changes in the mean, the variability, and the severity of extreme climate events. In this paper, we applied a flexible framework based on the Bayesian spatiotemporal quantile regression (BSTQR) model to identify climate changes at different quantile levels and their teleconnections to large-scale climate patterns such as El Niño-Southern Oscillation (ENSO), the Pacific Decadal Oscillation (PDO), the North Atlantic Oscillation (NAO) and the Pacific-North American pattern (PNA). Using the BSTQR model with time (year) as a covariate, we estimated changes in Canadian winter precipitation and their uncertainties at different quantile levels. Some stations in eastern Canada showed distributional changes in winter precipitation, such as an increase in low quantiles but a decrease in high quantiles. Because quantile functions in the BSTQR model vary with space and time and assimilate spatiotemporal precipitation data, the BSTQR model produced much spatially smoother and less uncertain quantile changes than classical regression without spatiotemporal correlations. Using the BSTQR model with five teleconnection indices (i.e., SOI, PDO, PNA, NP and NAO) as covariates, we investigated effects of large-scale climate patterns on Canadian winter precipitation at different quantile levels. Winter precipitation responses to these five teleconnections were found to differ across quantile levels. Effects of the five teleconnections on Canadian winter precipitation were stronger at low and high than at medium quantile levels.
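For orientation, the classical (non-Bayesian, non-spatial) baseline that the BSTQR model generalizes can be run in a few lines with statsmodels; the precipitation and index data below are synthetic placeholders, not the Canadian station records.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
X = rng.standard_normal((n, 2))                   # stand-ins for e.g. SOI and PDO indices
y = 50 + 5 * X[:, 0] - 3 * X[:, 1] + rng.gamma(2.0, 5.0, n)   # skewed "precipitation"

# Fit separate quantile regressions at low, median, and high quantile levels
Xc = sm.add_constant(X)
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(y, Xc).fit(q=q)
    print(f"q={q}: coefficients = {np.round(np.asarray(res.params), 2)}")
# The coefficients can differ across quantile levels, the kind of
# distributional response the BSTQR framework is designed to capture.
```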
Nonextensive Entropy Approach to Space Plasma Fluctuations and Turbulence
NASA Astrophysics Data System (ADS)
Leubner, M. P.; Vörös, Z.; Baumjohann, W.
Spatial intermittency in fully developed turbulence is an established feature of astrophysical plasma fluctuations and is particularly apparent in the interplanetary medium from in situ observations. In this situation the classical Boltzmann-Gibbs extensive thermo-statistics, applicable when microscopic interactions and memory are short-ranged and the environment is a continuous and differentiable manifold, fails. Upon generalization of the entropy function to nonextensivity, accounting for long-range interactions and thus for correlations in the system, it is demonstrated that the corresponding probability distribution functions (PDFs) are members of a family of specific power-law distributions. In particular, the resulting theoretical bi-κ functional reproduces accurately the observed global leptokurtic, non-Gaussian shape of the increment PDFs of characteristic solar wind variables on all scales, where nonlocality in turbulence is controlled via a multi-scale coupling parameter. Gradual decoupling is obtained by enhancing the spatial separation scale, corresponding to increasing κ-values, in the case of slow solar wind conditions, where a Gaussian is approached in the limit of large scales. In contrast, the scaling properties in the high-speed solar wind are predominantly governed by the mean energy or variance of the distribution, appearing as a second parameter in the theory. The PDFs of solar wind scalar field differences are computed from WIND and ACE data for different time lags and bulk speeds and analyzed within the nonextensive theory, where a particular nonlinear dependence of the coupling parameter and variance on scale also arises for the best-fitting theoretical PDFs. Consequently, nonlocality in fluctuations, related to both turbulence and its large-scale driving, should be linked to long-range interactions in the context of nonextensive entropy generalization, providing fundamentally the physical background of the observed scale dependence of fluctuations in intermittent space plasmas.
Gross, Rainer; Buehler, Katja; Schmid, Andreas
2013-02-01
This study evaluates the technical feasibility of biofilm-based biotransformations at an industrial scale by theoretically designing a process employing membrane fiber modules as used in the chemical industry, and compares the respective process parameters to classical stirred-tank studies. To our knowledge, catalytic biofilm processes for fine chemicals production have so far not been reported on a technical scale. As model reactions, we applied the previously studied asymmetric styrene epoxidation employing Pseudomonas sp. strain VLB120ΔC biofilms and the here-described selective alkane hydroxylation. Using the non-heme iron-containing alkane hydroxylase system (AlkBGT) from P. putida GPo1 in the recombinant P. putida PpS81 pBT10 biofilm, we were able to continuously produce 1-octanol from octane with a maximal productivity of 1.3 g L(aq)⁻¹ day⁻¹ in a single-tube microreactor. For a possible industrial application, a cylindrical membrane fiber module packed with 84,000 polypropylene fibers is proposed. Based on the calculations presented here, 59 membrane fiber modules (of 0.9 m diameter and 2 m length) would suffice to realize a production process of 1,000 tons/year for styrene oxide. Moreover, the product yield on carbon can at least be doubled, and over 400-fold less biomass waste would be generated compared to classical stirred-tank reactor processes. For the octanol process, instead, further intensification of the biological activity and/or enlargement of the membrane surface is required to reach production scale. By taking into consideration challenges such as controlling biomass growth and maintaining a constant biological activity, this study shows that a biofilm process at an industrial scale for the production of fine chemicals is a sustainable alternative in terms of product yield and biomass waste production. Copyright © 2012 Wiley Periodicals, Inc.
Valera, Alexandra; Epistolio, Samantha; Colomo, Lluis; Riva, Alice; Balagué, Olga; Dlouhy, Ivan; Tzankov, Alexandar; Bühler, Marco; Haralambieva, Eugenia; Campo, Elias; Soldini, Davide; Mazzucchelli, Luca; Martin, Vittoria
2016-08-01
MYC rearrangement can be detected in a subgroup of diffuse large B-cell lymphoma characterized by unfavorable prognosis. In contrast to Burkitt lymphoma, the correlation between MYC rearrangement and MYC protein expression in diffuse large B-cell lymphoma is less clear, as approximately one-third of rearranged cases show negative or low expression by immunohistochemistry. To better understand whether specific characteristics of the MYC rearrangement may influence its protein expression, we investigated 43 de novo diffuse large B-cell lymphomas positive for 8q24 rearrangement by FISH, using 14 Burkitt lymphomas for comparison. Different cell populations (clones), breakpoints (classical vs non-classical FISH patterns), partner genes (IGH vs non-IGH) and immunostaining were detected and analyzed using computerized image systems. In a subgroup of diffuse large B-cell lymphoma, we observed different clones within the same tumor, distinguishing the founder clone with MYC rearrangement alone from other subclones carrying MYC rearrangement coupled with loss/extra copies of derivative/normal alleles. This picture, which we defined as MYC genetic heteroclonality, was found in 42% of cases and correlated with negative MYC expression (P=0.026). Non-classical FISH breakpoints were detected in 16% of diffuse large B-cell lymphomas without affecting expression (P=0.040). A non-IGH gene was the preferential rearrangement partner in those diffuse large B-cell lymphomas showing MYC heteroclonality (P=0.016) and/or non-classical FISH breakpoints (P=0.058). MYC heteroclonality was not observed in Burkitt lymphoma, and all cases had positive MYC expression. Non-classical FISH MYC breakpoints and non-IGH partners were found in 29 and 20% of Burkitt lymphomas, respectively. In conclusion, MYC genetic heteroclonality is a frequent event in diffuse large B-cell lymphoma and may have a relevant role in modulating MYC expression.
Quantum algorithms for topological and geometric analysis of data
Lloyd, Seth; Garnerone, Silvano; Zanardi, Paolo
2016-01-01
Extracting useful information from large data sets can be a daunting task. Topological methods for analysing data sets provide a powerful technique for extracting such information. Persistent homology is a sophisticated tool for identifying topological features and for determining how such features persist as the data is viewed at different scales. Here we present quantum machine learning algorithms for calculating Betti numbers—the numbers of connected components, holes and voids—in persistent homology, and for finding eigenvectors and eigenvalues of the combinatorial Laplacian. The algorithms provide an exponential speed-up over the best currently known classical algorithms for topological data analysis. PMID:26806491
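The classical quantity at the heart of the quantum speed-up claim can be illustrated cheaply at its lowest order: the sketch below tracks the 0th Betti number (number of connected components) of a Vietoris-Rips complex across scales using union-find. It is a toy classical counterpart for intuition, not the quantum algorithm itself; the point cloud and scales are arbitrary.

```python
import numpy as np
from itertools import combinations

def betti0(points, scale):
    """Connected components (Betti-0) of the Vietoris-Rips graph at a given scale."""
    n = len(points)
    parent = list(range(n))
    def find(i):                       # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(n), 2):
        if np.linalg.norm(points[i] - points[j]) <= scale:
            parent[find(i)] = find(j)  # merge components joined by an edge
    return len({find(i) for i in range(n)})

rng = np.random.default_rng(3)
pts = rng.random((30, 2))
for s in (0.05, 0.15, 0.3, 0.6):       # components "persist" or merge as scale grows
    print(f"scale {s}: b0 = {betti0(pts, s)}")
```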
A kpc-scale X-ray jet in the BL Lac source S5 2007+777
NASA Technical Reports Server (NTRS)
Sambruna, Rita; Maraschi, Laura; Tavecchio, Fabrizio
2008-01-01
The BL Lac S5 2007+777, a classical radio-selected BL Lac from the sample of Stickel et al. exhibiting an extended (19") radio jet, was observed with Chandra, revealing an X-ray jet with similar morphology. The hard X-ray spectrum and broad-band SED are consistent with an IC/CMB origin for the X-ray emission, implying a highly relativistic flow at a small angle to the line of sight with an unusually large deprojected length, 300 kpc. A structured jet consisting of a fast spine and a slow wall is consistent with the observations.
Large scale exact quantum dynamics calculations: Ten thousand quantum states of acetonitrile
NASA Astrophysics Data System (ADS)
Halverson, Thomas; Poirier, Bill
2015-03-01
'Exact' quantum dynamics (EQD) calculations of the vibrational spectrum of acetonitrile (CH3CN) are performed, using two different methods: (1) a phase-space-truncated momentum-symmetrized Gaussian basis and (2) a correlated truncated harmonic oscillator basis. In both cases, a simple classical phase space picture is used to optimize the selection of individual basis functions, leading to drastic reductions in basis size in comparison with existing methods. Massive parallelization is also employed. Together, these tools, implemented into a single, easy-to-use computer code, enable a calculation of tens of thousands of vibrational states of CH3CN to an accuracy of 0.001-10 cm⁻¹.
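The basis-selection idea, keeping only product functions whose zeroth-order energy falls below a cutoff rather than truncating each mode independently, can be sketched directly; the frequencies and cutoff below are illustrative, not the CH3CN values used in the paper.

```python
from itertools import product

# Placeholder harmonic frequencies (cm^-1) for a 4-mode toy system and an
# energy cutoff; neither corresponds to the actual acetonitrile calculation.
freqs = [365.0, 920.0, 1390.0, 2250.0]
e_max = 6000.0

# Per-mode upper bounds implied by the cutoff (a rectangular "direct product" box)
n_max = [int(e_max / w) for w in freqs]

# Energy-truncated basis: keep products whose harmonic energy (incl. zero point)
# lies below the cutoff
basis = [q for q in product(*(range(n + 1) for n in n_max))
         if sum((qi + 0.5) * w for qi, w in zip(q, freqs)) <= e_max]

full = 1
for n in n_max:
    full *= (n + 1)
print(f"kept {len(basis)} of {full} direct-product functions")
```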
Fault-tolerant Remote Quantum Entanglement Establishment for Secure Quantum Communications
NASA Astrophysics Data System (ADS)
Tsai, Chia-Wei; Lin, Jason
2016-07-01
This work presents a strategy for constructing long-distance quantum communication among a number of remote users through a collective-noise channel. With the assistance of semi-honest quantum certificate authorities (QCAs), the remote users can share a secret key through fault-tolerant entanglement swapping. The proposed protocol is feasible for large-scale distributed quantum networks with numerous users. Each pair of communicating parties only needs to establish quantum channels and classical authenticated channels with their local QCA. Thus, it enables any user to communicate freely without pre-establishing any point-to-point communication channels, which is efficient and feasible for practical environments.
Two-dimensional electromagnetic Child-Langmuir law of a short-pulse electron flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, S. H.; Tai, L. C.; Liu, Y. L.
Two-dimensional electromagnetic particle-in-cell simulations were performed to study the effect of the displacement current and the self-magnetic field on the space-charge-limited current density, or the Child-Langmuir law, of a short-pulse electron flow with a propagation distance of ζ and an emitting width of W, from the classical regime to the relativistic regime. A numerical scaling of the two-dimensional electromagnetic Child-Langmuir law was constructed; it scales with (ζ/W) and (ζ/W)^2 in the classical and relativistic regimes, respectively. Our findings reveal that the displacement current can considerably enhance the space-charge-limited current density compared to the well-known two-dimensional electrostatic Child-Langmuir law, even in the classical regime.
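For reference, the classical 1D Child-Langmuir density is easy to evaluate, and the reported (ζ/W)-dependent 2D enhancement can be mocked up as a generic multiplicative factor; the form and coefficient of that correction below are placeholders, not the fitted scaling from the simulations.

```python
import numpy as np
from scipy.constants import epsilon_0, elementary_charge as e, electron_mass as m_e

def j_cl_1d(V, d):
    """Classical 1D Child-Langmuir space-charge-limited current density [A/m^2]
    for gap voltage V [V] and gap spacing d [m]."""
    return (4.0 * epsilon_0 / 9.0) * np.sqrt(2.0 * e / m_e) * V**1.5 / d**2

def j_cl_2d(V, d, zeta, W, a=1.0):
    """Illustrative 2D enhancement ~ (1 + a*zeta/W) on top of the 1D law.
    The linear form and coefficient a are assumptions for demonstration."""
    return j_cl_1d(V, d) * (1.0 + a * zeta / W)

print(f"1D CL density at 10 kV, 1 mm gap: {j_cl_1d(1e4, 1e-3):.3e} A/m^2")
print(f"with a geometric factor (zeta/W = 0.5): {j_cl_2d(1e4, 1e-3, 0.5, 1.0):.3e} A/m^2")
```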
Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency.
Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio
2015-01-01
Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them is the management of the massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the commonly used relational database model has become a compelling task. Other data models may be more effective when dealing with very large amounts of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.
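A minimal sketch of the kind of Cassandra schema and access pattern under discussion, using the DataStax Python driver, follows; the keyspace, table, and column names are invented for illustration, and a local single-node cluster is assumed (not the setup benchmarked in the paper).

```python
from cassandra.cluster import Cluster

# Connect to a local single-node Cassandra cluster (assumption for this sketch)
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# Hypothetical keyspace/table for sequence reads, partitioned by sample
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS genomics
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS genomics.reads (
        sample_id text, read_id text, sequence text, quality text,
        PRIMARY KEY (sample_id, read_id))
""")

# Write and read back one record (the write-heavy path NoSQL stores favor)
session.execute(
    "INSERT INTO genomics.reads (sample_id, read_id, sequence, quality) "
    "VALUES (%s, %s, %s, %s)",
    ("S1", "r0001", "ACGTACGT", "IIIIIIII"))
row = session.execute(
    "SELECT sequence FROM genomics.reads WHERE sample_id=%s AND read_id=%s",
    ("S1", "r0001")).one()
print(row.sequence)
```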
Universal attractor in a highly occupied non-Abelian plasma
NASA Astrophysics Data System (ADS)
Berges, J.; Boguslavski, K.; Schlichting, S.; Venugopalan, R.
2014-06-01
We study the thermalization process in highly occupied non-Abelian plasmas at weak coupling. The nonequilibrium dynamics of such systems is classical in nature and can be simulated with real-time lattice gauge theory techniques. We provide a detailed discussion of this framework and elaborate on the results reported in J. Berges, K. Boguslavski, S. Schlichting, and R. Venugopalan, Phys. Rev. D 89, 074011 (2014), 10.1103/PhysRevD.89.074011, along with novel findings. We demonstrate the emergence of universal attractor solutions, which govern the nonequilibrium evolution on large time scales both for nonexpanding and expanding non-Abelian plasmas. The turbulent attractor for a nonexpanding plasma drives the system close to thermal equilibrium on a time scale t ~ Q^-1 α_s^-7/4. The attractor solution for an expanding non-Abelian plasma leads to a strongly interacting albeit highly anisotropic system at the transition to the low-occupancy or quantum regime. This evolution in the classical regime is, within the uncertainties of our simulations, consistent with the "bottom up" thermalization scenario [R. Baier, A. H. Mueller, D. Schiff, and D. T. Son, Phys. Lett. B 502, 51 (2001), 10.1016/S0370-2693(01)00191-5]. While the focus of this paper is to understand the nonequilibrium dynamics in weak coupling asymptotics, we also discuss the relevance of our results for larger couplings in the early-time dynamics of heavy-ion collision experiments.
Network Theory: A Primer and Questions for Air Transportation Systems Applications
NASA Technical Reports Server (NTRS)
Holmes, Bruce J.
2004-01-01
A new understanding (with potential applications to air transportation systems) has emerged in the past five years in the scientific field of networks. This development emerges in large part because we now have a new laboratory for developing theories about complex networks: the Internet. The premise of this new understanding is that most complex networks of interest, both of nature and of human contrivance, exhibit fundamentally different behavior than was thought for over two hundred years under classical graph theory. Classical theory held that networks exhibited random behavior, characterized by normal (e.g., Gaussian or Poisson) degree distributions of the connectivity between nodes by links. The new understanding turns this idea on its head: networks of interest exhibit scale-free (or small-world) degree distributions of connectivity, characterized by power-law distributions. The implications of scale-free behavior for air transportation systems include the potential that some behaviors of complex system architectures might be analyzed through relatively simple approximations of local elements of the system. For air transportation applications, this presentation proposes a framework for constructing topologies (architectures) that represent the relationships between mobility, flight operations, aircraft requirements, and airspace capacity, and the related externalities in airspace procedures and architectures. The proposed architectures or topologies may serve as a framework for posing comparative and combinative analyses of performance, cost, security, environmental, and related metrics.
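The contrast between the two network classes is easy to demonstrate numerically; the sketch below, with arbitrary sizes, compares degree statistics of an Erdős-Rényi random graph (near-Poisson degrees) and a Barabási-Albert scale-free graph (power-law tail) using networkx.

```python
import networkx as nx
import numpy as np

n = 5000
er = nx.erdos_renyi_graph(n, p=6.0 / n, seed=0)    # classical random graph
ba = nx.barabasi_albert_graph(n, m=3, seed=0)      # scale-free (preferential attachment)

for name, g in (("random", er), ("scale-free", ba)):
    deg = np.array([d for _, d in g.degree()])
    print(f"{name}: mean degree {deg.mean():.1f}, max degree {deg.max()}")
# The scale-free graph develops hubs (a heavy-tailed degree distribution);
# the random graph's degrees stay tightly clustered around the mean.
```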
Investigations on the Bundle Adjustment Results from SfM-Based Software for Mapping Purposes
NASA Astrophysics Data System (ADS)
Lumban-Gaol, Y. A.; Murtiyoso, A.; Nugroho, B. H.
2018-05-01
Since its first inception, aerial photography has been used for topographic mapping. Large-scale aerial photography contributed to the creation of many of the topographic maps around the world. In Indonesia, a 2013 government directive on spatial management has re-stressed the need for topographic maps, with aerial photogrammetry providing the main method of acquisition. However, the large need to generate such maps is often limited by budgetary constraints. Today, SfM (Structure-from-Motion) offers quicker and less expensive solutions to this problem. However, considering the precision required for topographic missions, these solutions need to be assessed to see if they provide a sufficient level of accuracy. In this paper, the popular SfM-based software Agisoft PhotoScan is used to perform bundle adjustment on a set of large-scale aerial images. The aim of the paper is to compare its bundle adjustment results with those generated by more classical photogrammetric software, namely Trimble Inpho and ERDAS IMAGINE. Furthermore, in order to provide more bundle adjustment statistics for comparison, the Damped Bundle Adjustment Toolbox (DBAT) was also used to reprocess the PhotoScan project. Results show that PhotoScan results are less stable than those generated by the two photogrammetric software programmes. This translates to lower accuracy, which may impact the final photogrammetric product.
Quantifying the Extremity of Windstorms for Regions Featuring Infrequent Events
NASA Astrophysics Data System (ADS)
Walz, M. A.; Leckebusch, G. C.; Kruschke, T.; Rust, H.; Ulbrich, U.
2017-12-01
This paper introduces the Distribution-Independent Storm Severity Index (DI-SSI). The DI-SSI represents an approach to quantifying the severity of exceptional surface wind speeds of large-scale windstorms that is complementary to the Storm Severity Index (SSI) introduced by Leckebusch et al. (2008). While the SSI approaches the extremeness of a storm from a meteorological and potential-loss (impact) perspective, the DI-SSI defines severity from a more climatological perspective. The idea is to assign equal index values to wind speeds of the same singularity (e.g. the 99th percentile), under consideration of the shape of the tail of the local wind speed climatology. Especially in regions at the edge of the classical storm track, the DI-SSI shows more equitable severity estimates, e.g. for the extra-tropical cyclone Klaus. Here we compare the integral severity indices for several prominent windstorms in the European domain and discuss the advantages and disadvantages of each index. In order to compare the indices, their relation to the North Atlantic Oscillation (NAO), one of the main large-scale drivers of the intensity of European windstorms, is studied. Additionally, we identify a significant relationship between the frequency and intensity of windstorms for large parts of the European domain.
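For concreteness, the loss-oriented SSI of Leckebusch et al. (2008), against which the DI-SSI is compared, sums the cubed relative exceedance of the local 98th wind-speed percentile over grid cells and time steps; the sketch below applies that formula to a synthetic wind field (the field, grid, and climatology are placeholders).

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic 10-m wind speed field: time x lat x lon [m/s] (placeholder data)
wind = rng.weibull(2.0, size=(240, 50, 60)) * 12.0

# Local climatological threshold: the 98th percentile at each grid cell
v98 = np.percentile(wind, 98, axis=0)

# SSI: sum of (v/v98 - 1)^3 wherever the local threshold is exceeded
exceed = np.clip(wind / v98 - 1.0, 0.0, None) ** 3
print(f"SSI = {exceed.sum():.2f}")
```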
NASA Astrophysics Data System (ADS)
Junqueira Leão, Rodrigo; Raffaelo Baldo, Crhistian; Collucci da Costa Reis, Maria Luisa; Alves Trabanco, Jorge Luiz
2018-03-01
The building blocks of particle accelerators are magnets responsible for keeping beams of charged particles at a desired trajectory. Magnets are commonly grouped in support structures named girders, which are mounted on vertical and horizontal stages. The performance of this type of machine is highly dependent on the relative alignment between its main components. The length of particle accelerators ranges from small machines to large-scale national or international facilities, with typical lengths of hundreds of meters to a few kilometers. This relatively large volume together with micrometric positioning tolerances make the alignment activity a classical large-scale dimensional metrology problem. The alignment concept relies on networks of fixed monuments installed on the building structure to which all accelerator components are referred. In this work, the Sirius accelerator is taken as a case study, and an alignment network is optimized via computational methods in terms of geometry, densification, and surveying procedure. Laser trackers are employed to guide the installation and measure the girders’ positions, using the optimized network as a reference and applying the metric developed in part I of this paper. Simulations demonstrate the feasibility of aligning the 220 girders of the Sirius synchrotron to better than 0.080 mm, at a coverage probability of 95%.
ERIC Educational Resources Information Center
Lange, Elizabeth
2015-01-01
This article argues that sociology has been a foundational discipline for the field of adult education, but it has been largely implicit, until recently. This article contextualizes classical theories of sociology within contemporary critiques, reviews the historical roots of sociology and then briefly introduces the classical theories…
Temperature structure and kinematics of the IRDC G035.39-00.33
NASA Astrophysics Data System (ADS)
Sokolov, Vlas; Wang, Ke; Pineda, Jaime E.; Caselli, Paola; Henshaw, Jonathan D.; Tan, Jonathan C.; Fontani, Francesco; Jiménez-Serra, Izaskun; Lim, Wanggi
2017-10-01
Aims: Infrared dark clouds represent the earliest stages of high-mass star formation. Detailed observations of their physical conditions on all physical scales are required to improve our understanding of their role in fueling star formation. Methods: We investigate the large-scale structure of the IRDC G035.39-00.33, probing the dense gas with the classical ammonia thermometer. This allows us to put reliable constraints on the temperature of the extended, pc-scale dense gas reservoir and to probe the magnitude of its non-thermal motions. Available far-infrared observations can be used in tandem with the observed ammonia emission to estimate the total gas mass contained in G035.39-00.33. Results: We identify a main velocity component as a prominent filament, manifested as an ammonia emission intensity ridge spanning more than 6 pc, consistent with previous studies of the northern part of the cloud. A number of additional line-of-sight components are found, and a large-scale linear velocity gradient of 0.2 km s⁻¹ pc⁻¹ is found along the ridge of the IRDC. In contrast to the dust temperature map, an ammonia-derived kinetic temperature map, presented for the entirety of the cloud, reveals local temperature enhancements towards the massive protostellar cores. We show that without properly accounting for the line-of-sight contamination, the dust temperature is 2-3 K larger than the gas temperature measured with NH3. Conclusions: While both the large-scale kinematics and temperature structure are consistent with those of starless dark filaments, the kinetic gas temperature profile on smaller scales is suggestive of a heating mechanism coincident with the locations of massive protostellar cores. The reduced spectral cubes (FITS format) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A133
What Actually Happens When Granular Materials Deform Under Shear: A Look Within
NASA Astrophysics Data System (ADS)
Viggiani, C.
2012-12-01
We all know that geomaterials (soil and rock) are composed of particles. However, when dealing with them, we often use continuum models, which ignore particles and make use of abstract variables such as stress and strain. Continuum mechanics is the classical tool that geotechnical engineers have always used for their everyday calculations: estimating settlements of an embankment, the deformation of a sheet pile wall, the stability of a dam or a foundation, etc. History tells us that, in general, this works fine. While we are happily ignoring particles, they will at times come back to haunt us. This happens when deformation is localized in regions so small that the detail of the soil's (or rock's) particulate structure cannot safely be ignored. Failure is the perfect example of this. Researchers in geomechanics (and more generally in solid mechanics) have long known that all classical continuum models typically break down when trying to model failure. All sorts of numerical troubles ensue - all of them pointing to a fundamental deficiency of the model: the lack of microstructure. N.B.: the term microstructure doesn't prescribe a dimension (e.g., microns), but rather a scale - the scale of the mechanisms responsible for failure. A possible remedy to this deficiency is represented by the so-called "double scale" models, in which the small scale (the microstructure) is explicitly taken into account. Typically, two numerical problems are defined and solved - one at the large (continuum) scale, and the other at the small scale. This sort of approach requires a link between the two scales to complete the picture. Imagine we are solving at the small scale a simulation of an assembly of a few grains, for example using the Discrete Element Method, whose results are in turn fed back to the large-scale Finite Element simulation. The key feature of a double-scale model is that one can inject the relevant physics at the appropriate scale. The success of such a model crucially depends on the quality of the physics one injects: ideally, this comes directly from experiments. In Grenoble, this is what we do, combining various advanced experimental techniques. We are able to image, in three dimensions and at small scales, the deformation processes accompanying failure in geomaterials. This allows us to understand these processes and subsequently to define models at a pertinently small scale. I will present a few examples of the kind of experimental results that could inform a micro-scale model. X-ray micro-tomography imaging is the key measurement tool. It is used during loading, providing complete 3D images of a sand specimen at several stages throughout a triaxial compression test. Images from x-rays are then analyzed either in a continuum sense (using 3D Digital Image Correlation) or by looking at the individual particle kinematics (Particle Tracking). I will show some of our most recent results, in which individual sand grains are followed with a technique combining very recent developments in image correlation and particle tracking. These advanced techniques offer us a look at what actually happens when a granular material deforms and eventually fails.
NASA Astrophysics Data System (ADS)
Johnson, Ben T.; Haywood, James M.; Langridge, Justin M.; Darbyshire, Eoghan; Morgan, William T.; Szpek, Kate; Brooke, Jennifer K.; Marenco, Franco; Coe, Hugh; Artaxo, Paulo; Longo, Karla M.; Mulcahy, Jane P.; Mann, Graham W.; Dalvi, Mohit; Bellouin, Nicolas
2016-11-01
We present observations of biomass burning aerosol from the South American Biomass Burning Analysis (SAMBBA) and other measurement campaigns, and use these to evaluate the representation of biomass burning aerosol properties and processes in a state-of-the-art climate model. The evaluation includes detailed comparisons with aircraft and ground data, along with remote sensing observations from MODIS and AERONET. We demonstrate several improvements to aerosol properties following the implementation of the Global Model for Aerosol Processes (GLOMAP-mode) modal aerosol scheme in the HadGEM3 climate model. This predicts the particle size distribution, composition, and optical properties, giving increased accuracy in the representation of aerosol properties and physical-chemical processes over the Coupled Large-scale Aerosol Scheme for Simulations in Climate Models (CLASSIC) bulk aerosol scheme previously used in HadGEM2. Although both models give similar regional distributions of carbonaceous aerosol mass and aerosol optical depth (AOD), GLOMAP-mode is better able to capture the observed size distribution, single scattering albedo, and Ångström exponent across different tropical biomass burning source regions. Both aerosol schemes overestimate the uptake of water compared to recent observations, CLASSIC more so than GLOMAP-mode, leading to a likely overestimation of aerosol scattering, AOD, and single scattering albedo at high relative humidity. Observed aerosol vertical distributions were well captured when biomass burning aerosol emissions were injected uniformly from the surface to 3 km. Finally, good agreement between observed and modelled AOD was gained only after scaling up GFED3 emissions by a factor of 1.6 for CLASSIC and 2.0 for GLOMAP-mode. We attribute this difference in scaling factor mainly to different assumptions for the water uptake and growth of aerosol mass during ageing via oxidation and condensation of organics. We also note that similar agreement with observed AOD could have been achieved with lower scaling factors if the ratio of organic carbon to primary organic matter was increased in the models toward the upper range of observed values. Improved knowledge from measurements is required to reduce uncertainties in emission ratios for black carbon and organic carbon, and the ratio of organic carbon to primary organic matter for primary emissions from biomass burning.
NASA Astrophysics Data System (ADS)
Kato, K.; Sato, S.; Kato, H.; Akagi, R.; Sueishi, N.; Mori, T.; Nakakura, T.; Irie, I.
2012-04-01
There are many steps in the rapid seasonal transitions in East Asia influenced by the seasonal cycle of the Asian monsoon system, resulting in a variety of "seasonal feelings" there. For example, the extremely cold air flowing from the Siberian continent to the Japan Islands is transformed by the huge supply of heat and moisture from the underlying sea (the Japan Sea) in midwinter, which brings a large amount of snowfall on the Japan Sea side of the Japan Islands. However, although the air temperature there is rather higher from November to early December than in midwinter, such a wintertime weather pattern often appears due to the early development of the Siberian high (in that case, however, the precipitation falls not as snow but as rain). The intermittent rainfall in such situations, due to shallow cumulus clouds from late autumn to early winter, is called "Shi-gu-re" in Japanese. It is also well known that "Shi-gu-re" is often used to express the "seasonal feeling" in classical Japanese literature (especially in the classical Japanese poems called "Wa-Ka"). The present study reports a trial of a cross-disciplinary class on the seasonal cycle in East Asia in association with the "seasonal feeling" from autumn to winter, carried out as a joint activity of meteorology with Japanese classical literature, music, and art. Firstly, we summarize the characteristics of the large-scale climate systems and the daily weather situations from autumn to winter. We also introduce some examples of the expression of weather situations found in classical Japanese poems. Next, the outline of the cross-disciplinary classes on such topics at the Faculty of Education, Okayama University, and those at Okayama-Ichinomiya High School and the Attached Junior High School of Okayama University is presented, together with analyses of these practices. We should note that the present trial might also contribute to providing study materials for cultural understanding, which is one of the important elements of ESD (Education for Sustainable Development).
Models for the rise of the dinosaurs.
Benton, Michael J; Forth, Jonathan; Langer, Max C
2014-01-20
Dinosaurs arose in the early Triassic in the aftermath of the greatest mass extinction ever and became hugely successful in the Mesozoic. Their initial diversification is a classic example of a large-scale macroevolutionary change. Diversifications at such deep-time scales can now be dissected, modelled and tested. New fossils suggest that dinosaurs originated early in the Middle Triassic, during the recovery of life from the devastating Permo-Triassic mass extinction. Improvements in stratigraphic dating and a new suite of morphometric and comparative evolutionary numerical methods now allow a forensic dissection of one of the greatest turnovers in the history of life. Such studies mark a move from the narrative to the analytical in macroevolutionary research, and they allow us to begin to answer the proposal of George Gaylord Simpson, to explore adaptive radiations using numerical methods. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Greatest soil microbial diversity found in micro-habitats
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bach, Elizabeth M.; Williams, Ryan J.; Hargreaves, Sarah K.
Microbial interactions occur in habitats much smaller than those typically considered in classic ecological studies. This study uses soil aggregates to examine soil microbial community composition and structure of both bacteria and fungi at a microbially relevant scale. Aggregates were isolated from three land management systems in central Iowa, USA, to test whether aggregate-level microbial responses were sensitive to large-scale shifts in plant community and management practices. Bacteria and fungi exhibited similar patterns of community structure and diversity among soil aggregates, regardless of land management. Microaggregates supported more diverse microbial communities, both taxonomically and functionally. Calculation of a weighted proportional whole-soil diversity, which accounted for microbes found in aggregate fractions, resulted in 65% greater bacterial richness and 100% greater fungal richness compared to independently sampled whole soil. Our results show microaggregates support a previously unrecognized diverse microbial community that likely affects microbial access to and metabolism of soil substrates.
Unfolding an electronic integrate-and-fire circuit.
Carrillo, Humberto; Hoppensteadt, Frank
2010-01-01
Many physical and biological phenomena involve accumulation and discharge processes that can occur on significantly different time scales. Models of these processes have contributed to the understanding of excitability, self-sustained oscillations, and synchronization in arrays of oscillators. Integrate-and-fire (I+F) models are popular minimal fill-and-flush mathematical models. They are used in neuroscience to study spiking and phase locking in single neuron membranes and large-scale neural networks, and in a variety of applications in physics and electrical engineering. We show here how the classical first-order I+F model fits into the theory of nonlinear oscillators of van der Pol type by demonstrating that a particular second-order oscillator with small parameters converges in a singular perturbation limit to the I+F model. In this sense, our study provides a novel unfolding of such models and identifies a constructible electronic circuit that is closely related to I+F.
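A minimal sketch of the classical first-order fill-and-flush dynamics described above; the threshold, reset value, and constant drive are illustrative choices, not parameters from the paper:

```python
import numpy as np

def integrate_and_fire(i_input, threshold=1.0, v_reset=0.0, dt=1e-3):
    """Classical first-order I+F model: accumulate the input (fill)
    until the threshold is crossed, then discharge and reset (flush)."""
    v = v_reset
    spike_times = []
    for step, i_t in enumerate(i_input):
        v += dt * i_t              # first-order accumulation
        if v >= threshold:         # threshold crossing fires the unit
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Constant drive I = 2: the firing period should be threshold / I = 0.5 s.
print(integrate_and_fire(np.full(5000, 2.0))[:3])  # roughly [0.5, 1.0, 1.5]
```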
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Yin; Jiang, Yuanwen; Cherukara, Mathew J.
2017-12-08
Large-scale assembly of individual atoms over smooth surfaces is difficult to achieve. A configuration of an atom reservoir, in which individual atoms can be readily extracted, may successfully address this challenge. In this work, we demonstrate that a liquid gold-silicon alloy established in classical vapor-liquid-solid growth can deposit ordered and three-dimensional rings of isolated gold atoms over silicon nanowire sidewalls. Here, we perform ab initio molecular dynamics simulation and unveil a surprising single atomic gold-catalyzed chemical etching of silicon. Experimental verification of this catalytic process in silicon nanowires yields dopant-dependent, massive and ordered 3D grooves with spacing down to ~5 nm. Finally, we use these grooves as self-labeled and ex situ markers to resolve several complex silicon growths, including the formation of nodes, kinks, scale-like interfaces, and curved backbones.
Thermal Destruction of TETS: Experiments and Modeling ...
Symposium Paper. In the event of contamination involving chemical warfare agents (CWAs) or toxic industrial chemicals (TICs), large quantities of potentially contaminated materials, both indoor and outdoor, may be treated by thermal incineration during the site remediation process. Even if the CWAs or TICs of interest are not particularly thermally stable and might be expected to decompose readily in a high-temperature combustion environment, the refractory nature of many materials found inside and outside buildings may present heat transfer challenges in an incineration system, depending on how the materials are packaged and fed into the incinerator. This paper reports on a study to examine the thermal decomposition of a banned rodenticide, tetramethylene disulfotetramine (TETS), in a laboratory reactor, the analysis of the results using classical reactor design theory, and the subsequent scale-up of the results to a computer simulation of a full-scale commercial hazardous waste incinerator processing ceiling tile contaminated with residual TETS.
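For illustration only, the kind of relation classical reactor design theory supplies for such a scale-up: first-order Arrhenius decomposition in an ideal plug-flow reactor. The paper's actual kinetic model and parameters are not reproduced here.

```latex
% Illustrative classical reactor-design relation (assumed, not the
% paper's model): first-order destruction in an ideal plug-flow reactor.
\[
  X = 1 - e^{-k\tau}, \qquad k = A\,e^{-E_a/RT},
\]
% X: fractional destruction of TETS over residence time tau at
% combustion temperature T, with Arrhenius parameters A and E_a.
```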
Grain-scale supercharging and breakdown on airless regoliths
NASA Astrophysics Data System (ADS)
Zimmerman, M. I.; Farrell, W. M.; Hartzell, C. M.; Wang, X.; Horanyi, M.; Hurley, D. M.; Hibbitts, K.
2016-10-01
Interactions of the solar wind and emitted photoelectrons with airless bodies have been studied extensively. However, the details of how charged particles interact with the regolith at the scale of a single grain have remained largely uncharacterized. Recent efforts have focused upon determining total surface charge under photoemission and solar wind bombardment and the associated electric field and potential. In this work, theory and simulations are used to show that grain-grain charge differences can exceed classical sheath predictions by several orders of magnitude, sometimes reaching dielectric breakdown levels. Temperature-dependent electrical conductivity works against supercharging by allowing current to leak through individual grains; the balance between internal conduction and surface charging controls the maximum possible grain-to-grain electric field. Understanding the finer details of regolith grain charging, conductive equilibrium, and dielectric breakdown will improve future numerical studies of space weathering and dust levitation on airless bodies.
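A back-of-the-envelope rendering of the conduction/charging balance described above, in our own notation rather than the authors':

```latex
% Sketch (our paraphrase): the grain-scale field grows until internal
% conduction leaks charge as fast as the plasma environment supplies it,
\[
  J_{\mathrm{charge}} \simeq \sigma(T)\,E
  \quad\Longrightarrow\quad
  E_{\max} \simeq \frac{J_{\mathrm{charge}}}{\sigma(T)},
\]
% with dielectric breakdown expected wherever E_max exceeds the
% breakdown strength of the grain material.
```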
Scaling properties of ballistic nano-transistors
2011-01-01
Recently, we have suggested a scale-invariant model for a nano-transistor. In agreement with experiments, a close-to-linear threshold trace was found in the calculated ID-VD traces, separating the regimes of classically allowed transport and tunneling transport. In this conference contribution, the relevant physical quantities in our model and its range of applicability are discussed in more detail. Extending the temperature range of our studies, it is shown that a close-to-linear threshold trace results at room temperature as well. In qualitative agreement with the experiments, the ID-VG traces for small drain voltages show thermally activated transport below the threshold gate voltage. In contrast, at large drain voltages the gate-voltage dependence is weaker. As can be expected in our relatively simple model, the theoretical drain current is larger than the experimental one by a little less than a decade. PMID:21711899
Zero-temperature quantum annealing bottlenecks in the spin-glass phase.
Knysh, Sergey
2016-08-05
A promising approach to solving hard binary optimization problems is quantum adiabatic annealing in a transverse magnetic field. An instantaneous ground state, initially a symmetric superposition of all possible assignments of N qubits, is closely tracked as it becomes more and more localized near the global minimum of the classical energy. Regions where the energy gap to excited states is small (for instance at the phase transition) are the algorithm's bottlenecks. Here I show how for large problems the complexity becomes dominated by O(log N) bottlenecks inside the spin-glass phase, where the gap scales as a stretched exponential. For smaller N, only the gap at the critical point is relevant, where it scales polynomially, as long as the phase transition is second order. This phenomenon is demonstrated rigorously for the two-pattern Gaussian Hopfield model. Qualitative comparison with the Sherrington-Kirkpatrick model leads to similar conclusions.
The onset of electrospray: the universal scaling laws of the first ejection
Gañán-Calvo, A. M.; López-Herrera, J. M.; Rebollo-Muñoz, N.; Montanero, J. M.
2016-01-01
The disintegration of liquid drops with low electrical conductivity and subject to an electric field is investigated both theoretically and experimentally. This disintegration takes place through the development of a conical cusp that eventually ejects an ultrathin liquid ligament. A first tiny drop is emitted from the end of this ligament. Due to its exceptionally small size and large electric charge per unit volume, that drop has been the object of relevant recent studies. In this paper, universal scaling laws for the diameter and electric charge of the first issued droplet are proposed and validated both numerically and experimentally. Our analysis shows how charge relaxation is the mechanism that differentiates the onset of electrospray, including the first droplet ejection, from the classical steady cone-jet mode. In this way, our study identifies when and where charge relaxation and electrokinetic phenomena come into play in electrospray, a subject of live controversy in the field. PMID:27581554
NASA Astrophysics Data System (ADS)
Csordás, A.; Graham, R.; Szépfalusy, P.; Vattay, G.
1994-01-01
One wall of an Artin's billiard on the Poincaré half-plane is replaced by a one-parameter (cp) family of nongeodetic walls. A brief description of the classical phase space of this system is given. In the quantum domain, the continuous and gradual transition from the Poisson-like to Gaussian-orthogonal-ensemble (GOE) level statistics due to the small perturbations breaking the symmetry responsible for the "arithmetic chaos" at cp=1 is studied. Another GOE→Poisson transition due to the mixed phase space for large perturbations is also investigated. A satisfactory description of the intermediate level statistics by the Brody distribution was found in both cases. The study supports the existence of a scaling region around cp=1. A finite-size scaling relation for the Brody parameter as a function of 1-cp and the number of levels considered can be established.
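For reference, the Brody distribution used above to describe the intermediate nearest-neighbour spacing statistics; it interpolates between Poisson (q = 0) and GOE/Wigner (q = 1):

```latex
\[
  P_q(s) = (q+1)\,b\,s^{q}\,e^{-b\,s^{q+1}},
  \qquad
  b = \left[\Gamma\!\left(\frac{q+2}{q+1}\right)\right]^{q+1},
\]
% normalized so that the mean spacing is unity; q is the Brody parameter.
```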
NASA Astrophysics Data System (ADS)
Green, J. A.; Gray, M. D.; Robishaw, T.; Caswell, J. L.; McClure-Griffiths, N. M.
2014-06-01
Recent comparisons of magnetic field directions derived from maser Zeeman splitting with those derived from continuum source rotation measures have prompted new analysis of the propagation of the Zeeman split components, and the inferred field orientation. In order to do this, we first review differing electric field polarization conventions used in past studies. With these clearly and consistently defined, we then show that for a given Zeeman splitting spectrum, the magnetic field direction is fully determined and predictable on theoretical grounds: when a magnetic field is oriented away from the observer, the left-hand circular polarization is observed at higher frequency and the right-hand polarization at lower frequency. This is consistent with classical Lorentzian derivations. The consequent interpretation of recent measurements then raises the possibility of a reversal between the large-scale field (traced by rotation measures) and the small-scale field (traced by maser Zeeman splitting).
Dispersion/dilution enhances phytoplankton blooms in low-nutrient waters
NASA Astrophysics Data System (ADS)
Lehahn, Yoav; Koren, Ilan; Sharoni, Shlomit; D'Ovidio, Francesco; Vardi, Assaf; Boss, Emmanuel
2017-03-01
Spatial characteristics of phytoplankton blooms often reflect the horizontal transport properties of the oceanic turbulent flow in which they are embedded. Classically, bloom response to horizontal stirring is regarded in terms of generation of patchiness following large-scale bloom initiation. Here, using satellite observations from the North Pacific Subtropical Gyre and a simple ecosystem model, we show that the opposite scenario of turbulence dispersing and diluting fine-scale (~1-100 km) nutrient-enriched water patches has the critical effect of regulating the dynamics of nutrients-phytoplankton-zooplankton ecosystems and enhancing accumulation of photosynthetic biomass in low-nutrient oceanic environments. A key factor in determining ecological and biogeochemical consequences of turbulent stirring is the horizontal dilution rate, which depends on the effective eddy diffusivity and surface area of the enriched patches. Implementation of the notion of horizontal dilution rate quantitatively explains the plankton response to turbulence and improves our ability to represent ecological and biogeochemical processes in oligotrophic oceans.
Single-user MIMO system, Painlevé transcendents, and double scaling
NASA Astrophysics Data System (ADS)
Chen, Hongmei; Chen, Min; Blower, Gordon; Chen, Yang
2017-12-01
In this paper, we study a particular Painlevé V (denoted PV) that arises from multi-input-multi-output wireless communication systems. Such PV appears through its intimate relation with the Hankel determinant that describes the moment generating function (MGF) of the Shannon capacity. This originates through the multiplication of the Laguerre weight or the gamma density x^α e^(-x), x > 0, for α > -1 by (1 + x/t)^λ with t > 0 a scaling parameter. Here the λ parameter "generates" the Shannon capacity; see Chen, Y. and McKay, M. R. [IEEE Trans. Inf. Theory 58, 4594-4634 (2012)]. It was found that the MGF has an integral representation as a functional of y(t) and y'(t), where y(t) satisfies the "classical form" of PV. In this paper, we consider the situation where n, the number of transmit antennas (or the size of the random matrix), tends to infinity and the signal-to-noise ratio, P, tends to infinity such that s = 4n²/P is finite. Under such double scaling, the MGF, effectively an infinite determinant, has an integral representation in terms of a "lesser" Painlevé III (PIII). We also consider the situations where α = k + 1/2, k ∈ ℕ, and α ∈ {0, 1, 2, …}, λ ∈ {1, 2, …}, linking the relevant quantity to a solution of the two-dimensional sine-Gordon equation in radial coordinates and a certain discrete Painlevé II. From the large-n asymptotics of the orthogonal polynomials, which appear naturally, we obtain the double-scaled MGF for small and large s, together with the constant term in the large-s expansion. With the aid of these, we derive a number of cumulants and find that the capacity distribution function is non-Gaussian.
Renosh, P R; Schmitt, Francois G; Loisel, Hubert
2015-01-01
Satellite remote sensing observations allow the ocean surface to be sampled synoptically over large spatio-temporal scales. The images provided by visible and thermal infrared satellite observations are widely used in physical, biological, and ecological oceanography. The present work proposes a method for understanding the multi-scaling properties of satellite products such as Chlorophyll-a (Chl-a) and Sea Surface Temperature (SST), which have rarely been studied in this respect. The specific objective of this study is to show how the small-scale heterogeneities of satellite images can be characterised using tools borrowed from the field of turbulence. For that purpose, we show how the structure function, classically used in the framework of scaling time-series analysis, can also be used in 2D. The main advantage of this method is that it can be applied to images with missing data. Based on both simulated and real images, we demonstrate that coarse-graining (CG) of a gradient-modulus transform of the original image does not provide correct scaling exponents. Using a 2D fractional Brownian simulation, we show that the structure function (SF) can be computed from randomly sampled pairs of points, and verify that one million pairs of points provide sufficient statistics.
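A minimal sketch of the random-pair structure-function estimator described above, tolerant of missing data; the pair count, binning, and order are illustrative choices, not the authors' settings:

```python
import numpy as np

def structure_function_2d(img, order=2, n_pairs=1_000_000, seed=0):
    """Estimate the isotropic structure function
    S_q(r) = <|f(x + r) - f(x)|^q> from randomly sampled pixel pairs,
    skipping missing data (NaNs). A sketch of the approach, not the
    authors' code."""
    rng = np.random.default_rng(seed)
    ny, nx = img.shape
    a = rng.integers(0, (ny, nx), size=(n_pairs, 2))   # random pixel...
    b = rng.integers(0, (ny, nx), size=(n_pairs, 2))   # ...pairs
    f1, f2 = img[a[:, 0], a[:, 1]], img[b[:, 0], b[:, 1]]
    ok = ~(np.isnan(f1) | np.isnan(f2))                # tolerate gaps
    r = np.hypot(*(a - b)[ok].T)                       # pair separations
    incr = np.abs(f1 - f2)[ok] ** order
    edges = np.logspace(0, np.log10(max(ny, nx)), 25)  # log-spaced r bins
    idx = np.digitize(r, edges)
    sf = np.array([incr[idx == k].mean() if np.any(idx == k) else np.nan
                   for k in range(1, len(edges))])
    return edges, sf
```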
Tang, Yuan-Yuan; Li, Peng-Fei; Zhang, Wan-Ying; Ye, Heng-Yun; You, Yu-Meng; Xiong, Ren-Gen
2017-10-04
The classical organic ferroelectric, poly(vinylidene fluoride) (PVDF), has attracted much attention as a promising candidate for data storage applications compatible with all-organic electronics. However, its low crystallinity, large coercive field, and limited thermal stability of remanent polarization severely hinder large-scale integration. In light of that, we show a molecular ferroelectric thin film of [Hdabco][ReO4] (dabco = 1,4-diazabicyclo[2.2.2]octane) (1), belonging to another class of typical organic ferroelectrics. Remarkably, it displays not only the highest Curie temperature (499.6 K) but also the fastest polarization switching (100 kHz) among all reported molecular ferroelectrics. Combined with the large remanent polarization (~9 μC/cm²), the low coercive voltages (~10 V), and the unique multiaxial ferroelectric nature, 1 becomes a promising and viable alternative to PVDF for data storage applications in next-generation flexible devices, wearable devices, and bionics.
Sauer, Jeremy A.; Munoz-Esparza, Domingo; Canfield, Jesse M.; ...
2016-06-24
In this study, the impact of atmospheric boundary layer (ABL) interactions with large-scale stably stratified flow over an isolated, two-dimensional hill is investigated using turbulence-resolving large-eddy simulations. The onset of internal gravity wave breaking and leeside flow response regimes of trapped lee waves and nonlinear breakdown (or hydraulic-jump-like state) as they depend on the classical inverse Froude number, Fr⁻¹ = Nh/U_g, is explored in detail. Here, N is the Brunt–Väisälä frequency, h is the hill height, and U_g is the geostrophic wind. The results here demonstrate that the presence of a turbulent ABL influences mountain wave (MW) development in critical aspects, such as dissipation of trapped lee waves and amplified stagnation zone turbulence through Kelvin–Helmholtz instability. It is shown that the nature of interactions between the large-scale flow and the ABL is better characterized by a proposed inverse compensated Froude number, Fr_c⁻¹ = N(h - z_i)/U_g, where z_i is the ABL height. In addition, it is found that the onset of the nonlinear-breakdown regime, Fr_c⁻¹ ≈ 1.0, is initiated when the vertical wavelength becomes comparable to the sufficiently energetic scales of turbulence in the stagnation zone and ABL, yielding an abrupt change in leeside flow response. Lastly, energy spectra are presented in the context of MW flows, supporting the existence of a clear transition in leeside flow response, and illustrating two distinct energy distribution states for the trapped-lee-wave and the nonlinear-breakdown regimes.
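A small numerical illustration of the two dimensionless numbers; the values below are invented, not taken from the simulations:

```python
def inverse_froude(N, h, Ug, zi=0.0):
    """Classical inverse Froude number Fr^-1 = N h / Ug, and the
    compensated version Fr_c^-1 = N (h - zi) / Ug proposed above,
    which discounts the part of the hill buried in the boundary layer."""
    return N * h / Ug, N * (h - zi) / Ug

# Illustrative values: N = 0.01 s^-1, a 1000 m hill, Ug = 10 m/s,
# and a 300 m deep ABL.
fr_inv, fr_c_inv = inverse_froude(N=0.01, h=1000.0, Ug=10.0, zi=300.0)
print(fr_inv, fr_c_inv)  # 1.0 0.7
```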
Shrestha, Sourya; Bjørnstad, Ottar N.; King, Aaron A.
2014-01-01
Classical life-history theory predicts that acute, immunizing pathogens should maximize between-host transmission. When such pathogens induce violent epidemic outbreaks, however, a pathogen’s short-term advantage at invasion may come at the expense of its ability to persist in the population over the long term. Here, we seek to understand how the classical and invasion-persistence trade-offs interact to shape pathogen life-history evolution as a function of the size and structure of the host population. We develop an individual-based infection model at three distinct levels of organization: within an individual host, among hosts within a local population, and among local populations within a metapopulation. We find a continuum of evolutionarily stable pathogen strategies. At one end of the spectrum—in large well-mixed populations—pathogens evolve to greater acuteness to maximize between-host transmission: the classical trade-off theory applies in this regime. At the other end of the spectrum—when the host population is broken into many small patches—selection favors less acute pathogens, which persist longer within a patch and thereby achieve enhanced between-patch transmission: the invasion-persistence tradeoff dominates in this regime. Between these extremes, we explore the effects of the size and structure of the host population in determining pathogen strategy. In general, pathogen strategies respond to evolutionary pressures arising at both scales. PMID:25214895
Calibrating EASY-Care independence scale to improve accuracy
Jotheeswaran, A. T.; Dias, Amit; Philp, Ian; Patel, Vikram; Prince, Martin
2016-01-01
Background: there is currently limited support for the reliability and validity of the EASY-Care independence scale, with little work carried out in low- or middle-income countries. Objective: we assessed the internal construct validity and the hierarchical and classical scaling properties of the scale among frail, dependent older people in the community. Methods: three primary care physicians administered the EASY-Care comprehensive geriatric assessment to 150 frail and/or dependent older people in the primary care setting. A Mokken model was applied to investigate the hierarchical scaling properties of the EASY-Care independence scale, and the internal consistency (Cronbach's alpha) of the scale was also examined. Results: we found that the EASY-Care independence scale is highly internally consistent and is a strong hierarchical scale, hence providing strong evidence for unidimensionality. However, two items in the scale (unable to use telephone and manage finances) had much lower item Loevinger H coefficients than the others. Exclusion of these two items improved the overall internal consistency of the scale. Conclusions: the strong performance of the EASY-Care independence scale among community-dwelling frail older people is encouraging. This study confirms that the EASY-Care independence scale is highly internally consistent and a strong hierarchical scale. PMID:27496925
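For reference, a short sketch of the internal-consistency statistic (Cronbach's alpha) reported above; the Mokken scaling analysis itself is not reproduced here:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total score
    return k / (k - 1.0) * (1.0 - item_var / total_var)
```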
Young, Paul E; Kum Jew, Stephen; Buckland, Michael E; Pamphlett, Roger; Suter, Catherine M
2017-01-01
Amyotrophic lateral sclerosis (ALS) is a devastating late-onset neurodegenerative disorder in which only a small proportion of patients carry an identifiable causative genetic lesion. Despite high heritability estimates, a genetic etiology for most sporadic ALS remains elusive. Here we report the epigenetic profiling of five monozygotic twin pairs discordant for ALS, four with classic ALS and one with the progressive muscular atrophy ALS variant, in whom previous whole genome sequencing failed to uncover a genetic basis for their disease discordance. By studying cytosine methylation patterns in peripheral blood DNA we identified thousands of large between-twin differences at individual CpGs. While the specific sites of differences were mostly idiosyncratic to a twin pair, a proportion involving GABA signalling were common to all ALS individuals. For both idiosyncratic and common sites the differences occurred within genes and pathways related to neurobiological functions or dysfunctions, some of particular relevance to ALS such as glutamate metabolism and the Golgi apparatus. All four classic ALS patients were epigenetically older than their unaffected co-twins, suggesting accelerated aging in multiple tissues in this disease. In conclusion, widespread changes in methylation patterns were found in ALS-affected co-twins, consistent with an epigenetic contribution to disease. These DNA methylation findings could be used to develop blood-based ALS biomarkers, gain insights into disease pathogenesis, and provide a reference for future large-scale ALS epigenetic studies.
Dykstra, Andrew R.; Halgren, Eric; Thesen, Thomas; Carlson, Chad E.; Doyle, Werner; Madsen, Joseph R.; Eskandar, Emad N.; Cash, Sydney S.
2011-01-01
The auditory system must constantly decompose the complex mixture of sound arriving at the ear into perceptually independent streams constituting accurate representations of individual sources in the acoustic environment. How the brain accomplishes this task is not well understood. The present study combined a classic behavioral paradigm with direct cortical recordings from neurosurgical patients with epilepsy in order to further describe the neural correlates of auditory streaming. Participants listened to sequences of pure tones alternating in frequency and indicated whether they heard one or two “streams.” The intracranial EEG was simultaneously recorded from sub-dural electrodes placed over temporal, frontal, and parietal cortex. Like healthy subjects, patients heard one stream when the frequency separation between tones was small and two when it was large. Robust evoked-potential correlates of frequency separation were observed over widespread brain areas. Waveform morphology was highly variable across individual electrode sites both within and across gross brain regions. Surprisingly, few evoked-potential correlates of perceptual organization were observed after controlling for physical stimulus differences. The results indicate that the cortical areas engaged during the streaming task are more complex and widespread than has been demonstrated by previous work, and that, by-and-large, correlates of bistability during streaming are probably located on a spatial scale not assessed – or in a brain area not examined – by the present study. PMID:21886615
Diverse Power Iteration Embeddings and Its Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang H.; Yoo S.; Yu, D.
2014-12-14
Spectral embedding is one of the most effective dimension reduction algorithms in data mining. However, its computational complexity has to be mitigated in order to apply it to real-world large-scale data analysis. Much research has focused on developing approximate spectral embeddings that are more efficient but far less effective. This paper proposes Diverse Power Iteration Embeddings (DPIE), which not only retain the efficiency of power iteration methods but also produce a series of diverse and more effective embedding vectors. We test this novel method by applying it to various data mining applications (e.g., clustering, anomaly detection, and feature selection) and evaluating the resulting performance improvements. The experimental results show that our proposed DPIE is more effective than popular spectral approximation methods, and obtains quality similar to that of classic spectral embeddings derived from eigen-decompositions. Moreover, it is extremely fast on big data applications: in terms of clustering results, for example, DPIE achieves 95% of the quality of classic spectral clustering on complex datasets while being over 4000 times faster in limited-memory environments.
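A hedged sketch in the spirit of power-iteration embeddings with a diversity (deflation) step; the published DPIE algorithm may differ in its details:

```python
import numpy as np

def diverse_power_iteration_embedding(W, n_vectors=3, n_iter=50, seed=0):
    """Run power iteration on the row-normalized affinity matrix from
    several random starts, orthogonalizing each vector against earlier
    ones so the embedding directions stay diverse. Assumes a dense,
    non-negative affinity matrix W with nonzero row sums."""
    rng = np.random.default_rng(seed)
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic affinity
    vectors = []
    for _ in range(n_vectors):
        v = rng.random(P.shape[0])
        for _ in range(n_iter):
            v = P @ v
            for u in vectors:              # deflate against earlier vectors
                v -= (v @ u) * u
            v /= np.linalg.norm(v)
        vectors.append(v)
    return np.column_stack(vectors)        # (n_points, n_vectors) embedding
```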
Integrated control-system design via generalized LQG (GLQG) theory
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Hyland, David C.; Richter, Stephen; Haddad, Wassim M.
1989-01-01
Thirty years of control systems research has produced an enormous body of theoretical results in feedback synthesis. Yet such results see relatively little practical application, and there remains an unsettling gap between classical single-loop techniques (Nyquist, Bode, root locus, pole placement) and modern multivariable approaches (LQG and H-infinity theory). Large-scale, complex systems, such as high-performance aircraft and flexible space structures, now demand efficient, reliable design of multivariable feedback controllers which optimally trade off performance against modeling accuracy, bandwidth, sensor noise, actuator power, and control law complexity. A methodology is described which encompasses numerous practical design constraints within a single unified formulation. The approach, which is based upon coupled systems of modified Riccati and Lyapunov equations, encompasses time-domain linear-quadratic-Gaussian theory and frequency-domain H-infinity theory, as well as classical objectives such as gain and phase margin via the Nyquist circle criterion. In addition, this approach encompasses the optimal projection approach to reduced-order controller design. The current status of the overall theory is reviewed, including both continuous-time and discrete-time (sampled-data) formulations.
A quantum physical design flow using ILP and graph drawing
NASA Astrophysics Data System (ADS)
Yazdani, Maryam; Saheb Zamani, Morteza; Sedighi, Mehdi
2013-10-01
Implementing large-scale quantum circuits is one of the challenges of quantum computing. One of the central challenges in accurately modeling the architecture of these circuits is to schedule a quantum application and generate the layout while taking into account the cost of communications and classical resources as well as the maximum exploitable parallelism. In this paper, we present and evaluate a design flow for arbitrary quantum circuits in ion trap technology. Our design flow consists of two parts. First, a scheduler takes a description of a circuit and finds the best order for the execution of its quantum gates using integer linear programming, with respect to the available resources (qubits) and instruction dependencies. Then a layout generator receives the schedule produced by the scheduler and generates a layout for this circuit using a graph-drawing algorithm. Our experimental results show that the proposed flow decreases the average latency of quantum circuits by about 11% for one set of benchmarks and by about 9% for another, compared with the best results in the literature.
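A toy version of the scheduling step, assuming the open-source PuLP modeller. The gate set, durations, and dependencies are invented, and qubit resource conflicts are omitted for brevity:

```python
# Order gates with an ILP that respects dependencies and minimizes latency.
import pulp

gates = {"g1": [], "g2": ["g1"], "g3": ["g1"], "g4": ["g2", "g3"]}  # deps
dur = {"g1": 1, "g2": 2, "g3": 2, "g4": 1}

prob = pulp.LpProblem("gate_schedule", pulp.LpMinimize)
start = {g: pulp.LpVariable(f"s_{g}", lowBound=0) for g in gates}
makespan = pulp.LpVariable("makespan", lowBound=0)
prob += makespan                              # objective: total latency
for g, deps in gates.items():
    prob += start[g] + dur[g] <= makespan
    for d in deps:                            # a gate starts after its inputs
        prob += start[d] + dur[d] <= start[g]
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({g: start[g].value() for g in gates}, makespan.value())  # makespan 4.0
```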
Smart Kirigami open honeycombs in shape changing actuation and dynamics
NASA Astrophysics Data System (ADS)
Neville, R. M.; Scarpa, F.; Leng, J.
2017-04-01
Kirigami is the ancient Japanese art of cutting and folding paper, widespread in Asia since the 17th century. Kirigami offers a broader set of geometries and topologies than classical fold/valley origami because of the presence of cuts. Moreover, Kirigami can be readily applied to a large set of composite and smart 2D materials, and can be used in up-scaled production with modular molding. We describe the manufacturing and testing of a topology of Kirigami cellular structures defined as Open Honeycombs. Open Honeycombs (OHs) can assume a fully closed shape resembling classical hexagonal centrosymmetric honeycombs, or can vary their morphology by tuning the opening angle and rotational stiffness of the folds. We show the performance of experimental PEEK OHs with cable actuation and morphing shape characteristics, and the analogous morphing behavior of styrene SMPs under combined mechanical and thermal loading. We also show the dynamic (modal analysis) behavior of OH configurations parameterized against their geometric characteristics, and the controllable modal density characteristics that can be obtained by tuning the topology and folding properties.
NASA Astrophysics Data System (ADS)
Ignatyev, A. V.; Ignatyev, V. A.; Onischenko, E. V.
2017-11-01
This article continues the authors' work on the development of algorithms that implement the finite element method in the form of a classical mixed method for the analysis of geometrically nonlinear bar systems [1-3]. The paper describes an improved algorithm for forming the system of nonlinear governing equations for flexible plane frames and bars with large displacements of nodes, based on the finite element method in mixed classical form and the use of a step-by-step loading procedure. An example analysis is given.
The small length scale effect for a non-local cantilever beam: a paradox solved.
Challamel, N; Wang, C M
2008-08-27
Non-local continuum mechanics allows one to account for the small length scale effect that becomes significant when dealing with microstructures or nanostructures. This paper presents some simplified non-local elastic beam models, for the bending analyses of small scale rods. Integral-type or gradient non-local models abandon the classical assumption of locality, and admit that stress depends not only on the strain value at that point but also on the strain values of all points on the body. There is a paradox still unresolved at this stage: some bending solutions of integral-based non-local elastic beams have been found to be identical to the classical (local) solution, i.e. the small scale effect is not present at all. One example is the Euler-Bernoulli cantilever nanobeam model with a point load which has application in microelectromechanical systems and nanoelectromechanical systems as an actuator. In this paper, it will be shown that this paradox may be overcome with a gradient elastic model as well as an integral non-local elastic model that is based on combining the local and the non-local curvatures in the constitutive elastic relation. The latter model comprises the classical gradient model and Eringen's integral model, and its application produces small length scale terms in the non-local elastic cantilever beam solution.
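For background, the standard forms involved, in our notation (signs and normalizations follow common conventions and may differ from the paper's): Eringen's differential nonlocal beam relation, and a two-phase local/nonlocal constitutive mixture of the kind used to resolve the cantilever paradox:

```latex
% Eringen's differential nonlocal model for a beam (up to sign
% convention), with small length scale parameter e_0 a:
\[
  M(x) - (e_0 a)^2\,\frac{d^2 M}{dx^2} = EI\,\kappa(x),
\]
% and a combined local/nonlocal (two-phase) constitutive relation
% mixing local strain with a kernel-weighted nonlocal average:
\[
  \sigma(x) = E\!\left[\xi_1\,\varepsilon(x)
    + \xi_2\!\int \alpha(|x-x'|)\,\varepsilon(x')\,dx'\right],
  \qquad \xi_1 + \xi_2 = 1.
\]
```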
Statistical Extremes of Turbulence and a Cascade Generalisation of Euler's Gyroscope Equation
NASA Astrophysics Data System (ADS)
Tchiguirinskaia, Ioulia; Scherzer, Daniel
2016-04-01
Turbulence refers to a rather well-defined hydrodynamical phenomenon uncovered by Reynolds. Nowadays, the word turbulence is used to designate the loss of order in many different geophysical fields and the related fundamental extreme variability of environmental data over a wide range of scales. Classical statistical techniques for estimating extremes, being largely limited to statistical distributions, do not take into account the mechanisms generating such extreme variability. Alternative approaches to nonlinear variability are based on a fundamental property of the nonlinear equations: scale invariance, which means that these equations are formally invariant under given scale transforms. Its specific framework is that of multifractals. In this framework, extreme variability builds up scale by scale, leading to non-classical statistics. Although multifractals are increasingly understood as a basic framework for handling such variability, there is still a gap between their potential and their actual use. In this presentation we discuss how to deal with highly theoretical problems of mathematical physics together with a wide range of geophysical applications. We use Euler's gyroscope equation as a basic element in constructing a complex deterministic system that preserves not only the scale symmetry of the Navier-Stokes equations, but several more of their symmetries. Euler's equation has been not only the object of many theoretical investigations of the gyroscope device, but has also been generalised enough to become the basic equation of fluid mechanics. Therefore, it is no surprise that a cascade generalisation of this equation can be used to characterise the intermittency of turbulence, and to better understand the links between the multifractal exponents and the structure of a simplified, but not simplistic, version of the Navier-Stokes equations. In a certain way, this approach is similar to that of Lorenz, who studied how the flap of a butterfly wing could generate a cyclone with the help of a 3D ordinary differential system. Well supported by extensive numerical results, the cascade generalisation of Euler's gyroscope equation opens new horizons for predictability and prediction of processes having long-range dependences.
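For reference, the classical torque-free Euler gyroscope equations that serve as the basic element of the cascade construction:

```latex
\[
  I_1\dot{\omega}_1 = (I_2 - I_3)\,\omega_2\omega_3,\qquad
  I_2\dot{\omega}_2 = (I_3 - I_1)\,\omega_3\omega_1,\qquad
  I_3\dot{\omega}_3 = (I_1 - I_2)\,\omega_1\omega_2.
\]
% I_i: principal moments of inertia; omega_i: angular velocity components.
```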
Sweat, Noah W; Bates, Larry W; Hendricks, Peter S
2016-01-01
Developing methods for improving creativity is of broad interest. Classic psychedelics may enhance creativity; however, the underlying mechanisms of action are unknown. This study was designed to assess whether a relationship exists between naturalistic classic psychedelic use and heightened creative problem-solving ability and if so, whether this is mediated by lifetime mystical experience. Participants (N = 68) completed a survey battery assessing lifetime mystical experience and circumstances surrounding the most memorable experience. They were then administered a functional fixedness task in which faster completion times indicate greater creative problem-solving ability. Participants reporting classic psychedelic use concurrent with mystical experience (n = 11) exhibited significantly faster times on the functional fixedness task (Cohen's d = -.87; large effect) and significantly greater lifetime mystical experience (Cohen's d = .93; large effect) than participants not reporting classic psychedelic use concurrent with mystical experience. However, lifetime mystical experience was unrelated to completion times on the functional fixedness task (standardized β = -.06), and was therefore not a significant mediator. Classic psychedelic use may increase creativity independent of its effects on mystical experience. Maximizing the likelihood of mystical experience may need not be a goal of psychedelic interventions designed to boost creativity.
Berges, Jürgen; Reygers, Klaus; Tanji, Naoto; ...
2017-05-09
Recent classical-statistical numerical simulations have established the "bottom-up" thermalization scenario of Baier et al. [Phys. Lett. B 502, 51 (2001)] as the correct weak coupling effective theory for thermalization in ultrarelativistic heavy-ion collisions. In this paper, we perform a parametric study of photon production in the various stages of this bottom-up framework to ascertain the relative contribution of the off-equilibrium "glasma" relative to that of a thermalized quark-gluon plasma. Taking into account the constraints imposed by the measured charged hadron multiplicities at the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC), we find that glasma contributions are important, especially for large values of the saturation scale at both energies. These nonequilibrium effects should therefore be taken into account in studies where weak coupling methods are employed to compute photon yields.
Single-copy entanglement in critical quantum spin chains
NASA Astrophysics Data System (ADS)
Eisert, J.; Cramer, M.
2005-10-01
We consider the single-copy entanglement as a quantity to assess quantum correlations in the ground state in quantum many-body systems. We show for a large class of models that already on the level of single specimens of spin chains, criticality is accompanied with the possibility of distilling a maximally entangled state of arbitrary dimension from a sufficiently large block deterministically, with local operations and classical communication. These analytical results—which refine previous results on the divergence of block entropy as the rate at which maximally entangled pairs can be distilled from many identically prepared chains—are made quantitative for general isotropic translationally invariant spin chains that can be mapped onto a quasifree fermionic system, and for the anisotropic XY model. For the XX model, we provide the asymptotic scaling of ~(1/6)log₂(L), and contrast it with the block entropy.
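The stated scaling set against the block entropy of the same critical XX chain; these standard results imply the single-copy entanglement is asymptotically half the block entropy:

```latex
\[
  E_1(L) \sim \tfrac{1}{6}\log_2 L,\qquad
  S(L) \sim \tfrac{1}{3}\log_2 L,\qquad
  \lim_{L\to\infty}\frac{E_1(L)}{S(L)} = \frac{1}{2}.
\]
```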
Müller, Jakob C; Hidde, Dennis; Seitz, Alfred
2002-01-01
Since the mid-1980s the zebra mussel, Dreissena polymorpha, Pallas 1771, has become the protagonist of a spectacular freshwater invasion in North America due to its large economic and biological impact. Several genetic studies on American populations have failed to detect any large-scale geographical patterns. In western Europe, where D. polymorpha has been a classical invader from the Pontocaspian since the early 19th century, the situation is strikingly different. Here, we show with genetic markers that two major western European invasion lineages with lowered genetic variability within and among populations can be discriminated. These two invasion lineages correspond with two separate navigable waterways to western Europe. We found a rapid and asymmetrical genetic interchange of the two invasion lineages after the construction of the Main-Danube canal in 1992, which interconnected the two waterways across the main watershed. PMID:12061957
6th International Conference on Nanomaterials by Severe Plastic Deformation (NanoSPD6)
NASA Astrophysics Data System (ADS)
2014-08-01
"NanoSPD" means Nano-material by Severe Plastic Deformation (SPD), which is an efficient way to obtain bulk nano-structured materials. During SPD, the microstructure of the material is transformed into a very fine structure consisting of ultrafine grains (UFG) approaching even the nano-scale. SPD differs from classical large-strain forming processes in two aspects: 1. The sample undergoes extremely large strains without significant change in its dimensions; 2. In most SPD processes high hydrostatic stress is applied, which makes it possible to deform difficult-to-form materials. This conference is part of a series of conferences taking place every third year; the history of NanoSPD conferences began in 1999 in Moscow (Russia), followed by Vienna in 2002 (Austria), Fukuoka in 2005 (Japan), Goslar in 2008 (Germany), Nanjing in 2011 (China), and Metz in 2014 (France). The preface continues in the pdf.
Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency
Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio
2015-01-01
Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them refers to the management of massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the frequently considered relational database model becomes a compelling task. Other data models may be more effective when dealing with a very large amount of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB. PMID:26558254
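A hypothetical sketch of the kind of write path such a benchmark exercises, using the DataStax Python driver; the keyspace, table, and schema below are invented for illustration and are not from the paper:

```python
from cassandra.cluster import Cluster

# Connect to a local Cassandra node (address is illustrative).
session = Cluster(["127.0.0.1"]).connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS genomics
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
# Partition reads by (sample, chromosome) and cluster by position.
session.execute("""
    CREATE TABLE IF NOT EXISTS genomics.reads (
        sample_id text, chrom text, pos bigint, read_seq text,
        PRIMARY KEY ((sample_id, chrom), pos)
    )
""")
insert = session.prepare(
    "INSERT INTO genomics.reads (sample_id, chrom, pos, read_seq) "
    "VALUES (?, ?, ?, ?)")
session.execute(insert, ("S1", "chr1", 101, "ACGTACGT"))
```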
Fast Poisson noise removal by biorthogonal Haar domain hypothesis testing
NASA Astrophysics Data System (ADS)
Zhang, B.; Fadili, M. J.; Starck, J.-L.; Digel, S. W.
2008-07-01
Methods based on hypothesis tests (HTs) in the Haar domain are widely used to denoise Poisson count data. Facing large datasets or real-time applications, Haar-based denoisers have to use the decimated transform to meet limited-memory or computation-time constraints. Unfortunately, for regular underlying intensities, decimation yields discontinuous estimates and strong "staircase" artifacts. In this paper, we propose to combine the HT framework with the decimated biorthogonal Haar (Bi-Haar) transform instead of the classical Haar. The Bi-Haar filter bank is normalized such that the p-values of Bi-Haar coefficients (p_BH) provide a good approximation to those of Haar (p_H) for high-intensity settings or large scales; for low-intensity settings and small scales, we show that p_BH are essentially upper-bounded by p_H. Thus, we may apply the Haar-based HTs to Bi-Haar coefficients to control a prefixed false positive rate. By doing so, we benefit from the regular Bi-Haar filter bank to gain a smooth estimate while always maintaining a low computational complexity. A Fisher-approximation-based threshold implementing the HTs is also established. The efficiency of this method is illustrated on an example of hyperspectral-source-flux estimation.
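A one-level sketch of the classical Haar-domain hypothesis test for Poisson counts that the method builds on; the Bi-Haar filter bank and Fisher-approximation threshold of the paper are not reproduced. Assumes an even number of input bins:

```python
import numpy as np
from scipy.stats import binom

def haar_poisson_denoise(counts, alpha=1e-3):
    """One-level Haar hypothesis test for Poisson counts: under a locally
    flat intensity, x1 ~ Binomial(x1 + x2, 1/2), so the detail x1 - x2 is
    kept only when that null hypothesis is rejected at level alpha."""
    counts = np.asarray(counts, dtype=int)      # assumes even length
    x1, x2 = counts[0::2], counts[1::2]
    approx, detail = x1 + x2, x1 - x2
    # Two-sided binomial p-value of the observed split (sketch-level).
    p = 2.0 * np.minimum(binom.cdf(np.minimum(x1, x2), approx, 0.5), 0.5)
    detail = np.where(p < alpha, detail, 0)     # zero insignificant details
    rec = np.empty(counts.size, dtype=float)    # invert the Haar step
    rec[0::2] = (approx + detail) / 2.0
    rec[1::2] = (approx - detail) / 2.0
    return rec
```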
Decentralized Adaptive Neural Output-Feedback DSC for Switched Large-Scale Nonlinear Systems.
Lijun Long; Jun Zhao
2017-04-01
In this paper, for a class of switched large-scale uncertain nonlinear systems with unknown control coefficients and unmeasurable states, a switched-dynamic-surface-based decentralized adaptive neural output-feedback control approach is developed. The proposed approach extends the classical dynamic surface control (DSC) technique from the nonswitched to the switched setting by designing switched first-order filters, which overcomes the problem of multiple "explosions of complexity." Also, a dual common coordinate transformation of all subsystems is exploited to avoid the individual coordinate transformations for subsystems that are required when applying the backstepping recursive design scheme. Nussbaum-type functions are utilized to handle the unknown control coefficients, and a switched neural network observer is constructed to estimate the unmeasurable states. Combining the average dwell time method with backstepping and the DSC technique, decentralized adaptive neural controllers for the subsystems are explicitly designed. It is proved that the proposed approach guarantees semiglobal uniform ultimate boundedness of all signals in the closed-loop system under a class of switching signals with average dwell time, with the tracking errors converging to a small neighborhood of the origin. A system of two inverted pendulums is used to demonstrate the effectiveness of the proposed method.
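A minimal sketch of the core DSC device referenced above: passing a virtual control through a first-order filter so its derivative is available analytically instead of by repeated differentiation (the source of backstepping's "explosion of complexity"). Parameters are illustrative:

```python
import numpy as np

def dsc_filter(virtual_control, tau=0.05, dt=1e-3):
    """First-order DSC filter: tau * xf' + xf = alpha. The filtered
    signal's derivative is simply xf' = (alpha - xf)/tau, so no symbolic
    differentiation of the virtual control alpha is ever needed."""
    xf = virtual_control[0]
    filtered, derivs = [], []
    for a in virtual_control:
        dxf = (a - xf) / tau       # analytic derivative of the filter state
        xf += dt * dxf             # forward-Euler integration
        filtered.append(xf)
        derivs.append(dxf)
    return np.array(filtered), np.array(derivs)
```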
An investigation of the sound field above the audience in large lecture halls with a scale model.
Kahn, D W; Tichy, J
1986-09-01
Measurements of steady-state sound pressure levels above the audience in large lecture halls show that the classical equation for predicting the sound pressure level is not accurate. The direct field above the seats was measured on a 1:10 scale model and was found to be dependent on the incidence angle and direction of sound propagation across the audience. The reverberant field above the seats in the model was calculated by subtracting the direct field from the measured total field and was found to be dependent on the magnitude and particularly on the placement of absorption. The decrease of sound pressure level versus distance in the total field depends on the angle (controlled by absorption placement) at which the strong reflections are incident upon the audience area. Sound pressure level decreases at a fairly constant rate with distance from the sound source in both the direct and reverberant field, and the decrease rate depends strongly on the absorption placement. The lowest rate of decay occurs when the side walls are absorptive, and both the ceiling and rear wall are reflective. These consequences are discussed with respect to prediction of speech intelligibility.
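For reference, the classical steady-state prediction that the scale-model measurements call into question (standard diffuse-field form):

```latex
\[
  L_p = L_W + 10\log_{10}\!\left(\frac{Q}{4\pi r^2} + \frac{4}{R}\right),
  \qquad
  R = \frac{S\,\bar{\alpha}}{1-\bar{\alpha}},
\]
% Q: source directivity factor, r: source-receiver distance, R: room
% constant from total surface area S and mean absorption coefficient.
% The measurements show the direct and reverberant terms are not in
% fact independent of seating geometry and absorption placement.
```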
[Radar as imaging tool in ecology and conservation biology].
Matyjasiak, Piotr
2017-01-01
Migrations and dispersal are among the most important ecological processes that shape ecosystems and influence our economy, health, and safety. Movements of birds, bats, and insects occur on a large spatial scale - regional, continental, or intercontinental. However, studies of these phenomena using classic methods are usually local. A breakthrough came with the development of radar technology, which enabled researchers to study animal movements in the atmosphere on a large spatial and temporal scale. The aim of this article is to present the radar imaging methods used in research on the aerial movements of birds, bats, and insects. The types of radars used in research are described, and examples of the use of radar in basic research and in conservation biology are discussed. Radar visualizations are used in studies of the effect of meteorological conditions on bird migration, of the spatial and temporal dynamics of movements of birds, bats, and insects, and of the mechanisms of orientation of migrating birds and insects. In conservation biology, radars are used in the monitoring of endangered species of birds and bats, to monitor bird activity at airports, and to assess the impact of tall structures on flying birds and bats.
Extracting Communities from Complex Networks by the k-Dense Method
NASA Astrophysics Data System (ADS)
Saito, Kazumi; Yamada, Takeshi; Kazama, Kazuhiro
To understand the structural and functional properties of large-scale complex networks, it is crucial to efficiently extract a set of cohesive subnetworks as communities. Several such community extraction methods have been proposed in the literature, including the classical k-core decomposition method and, more recently, the k-clique-based community extraction method. The k-core method, although computationally efficient, is often not powerful enough for uncovering a detailed community structure and produces only coarse-grained and loosely connected communities. The k-clique method, on the other hand, can extract fine-grained and tightly connected communities but requires a substantial computational load for large-scale complex networks. In this paper, we present a new notion of a subnetwork called k-dense, and propose an efficient algorithm for extracting k-dense communities. We applied our method to three different types of networks assembled from real data, namely blog trackbacks, word associations, and Wikipedia references, and demonstrated that the k-dense method can extract communities almost as efficiently as the k-core method, while the quality of the extracted communities is comparable to that obtained by the k-clique method.
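A compact sketch of k-dense extraction written from the definition above (each edge's endpoints must share at least k - 2 neighbours within the subnetwork); this is our reading of the method, not the authors' code:

```python
def k_dense(adj, k):
    """Prune edges whose two endpoints share fewer than k - 2 common
    neighbours until the condition holds everywhere; return the
    surviving edges. `adj` maps each node to a set of neighbours."""
    adj = {u: set(vs) for u, vs in adj.items()}   # private working copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            for v in list(adj[u]):
                if u < v and len(adj[u] & adj[v]) < k - 2:
                    adj[u].discard(v)             # prune the weak edge
                    adj[v].discard(u)
                    changed = True
    return {(u, v) for u in adj for v in adj[u] if u < v}

# A 4-clique plus a pendant edge: the six clique edges survive the
# 4-dense condition; the pendant edge (4, 5) is pruned.
g = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5}, 5: {4}}
print(k_dense(g, 4))
```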
Consensus properties and their large-scale applications for the gene duplication problem.
Moon, Jucheol; Lin, Harris T; Eulenstein, Oliver
2016-06-01
Solving the gene duplication problem is a classical approach for species tree inference from gene trees that are confounded by gene duplications. This problem takes a collection of gene trees and seeks a species tree that implies the minimum number of gene duplications. Wilkinson et al. posed the conjecture that the gene duplication problem satisfies the desirable Pareto property for clusters. That is, for every instance of the problem, all clusters that are commonly present in the input gene trees of this instance, called strict consensus, will also be found in every solution to this instance. We prove that this conjecture does not generally hold. Despite this negative result we show that the gene duplication problem satisfies a weaker version of the Pareto property where the strict consensus is found in at least one solution (rather than all solutions). This weaker property contributes to our design of an efficient scalable algorithm for the gene duplication problem. We demonstrate the performance of our algorithm in analyzing large-scale empirical datasets. Finally, we utilize the algorithm to evaluate the accuracy of standard heuristics for the gene duplication problem using simulated datasets.
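A minimal sketch of the duplication cost at the heart of the problem, via the classical LCA mapping of gene-tree nodes into the species tree; the tree encodings and example are illustrative:

```python
# Trees are nested tuples with species names at the leaves; the species
# tree is given as a child -> parent map.

def ancestors(parent, v):
    path = [v]
    while v in parent:
        v = parent[v]
        path.append(v)
    return path

def lca(parent, a, b):
    up = set(ancestors(parent, a))
    for v in ancestors(parent, b):
        if v in up:
            return v
    raise ValueError("nodes are not in the same tree")

def duplications(gene_tree, parent):
    """Return (LCA mapping, duplication count) for a binary gene tree:
    a node is a duplication if its mapping equals a child's mapping."""
    if isinstance(gene_tree, str):                    # leaf maps to itself
        return gene_tree, 0
    (ml, dl), (mr, dr) = (duplications(c, parent) for c in gene_tree)
    m = lca(parent, ml, mr)
    return m, dl + dr + (m in (ml, mr))

# Species tree ((A,B),C); the gene tree needs exactly one duplication.
parent = {"A": "AB", "B": "AB", "AB": "ABC", "C": "ABC"}
print(duplications((("A", "B"), ("A", "C")), parent))  # ('ABC', 1)
```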
Physics of Magnetospheric Variability
NASA Astrophysics Data System (ADS)
Vasyliūnas, Vytenis M.
2011-01-01
Many widely used methods for describing and understanding the magnetosphere are based on balance conditions for quasi-static equilibrium (this is particularly true of the classical theory of magnetosphere/ionosphere coupling, which in addition presupposes the equilibrium to be stable); they may therefore be of limited applicability for dealing with time-variable phenomena as well as for determining cause-effect relations. The large-scale variability of the magnetosphere can be produced both by changing external (solar-wind) conditions and by non-equilibrium internal dynamics. Its developments are governed by the basic equations of physics, especially Maxwell's equations combined with the unique constraints of large-scale plasma; the requirement of charge quasi-neutrality constrains the electric field to be determined by plasma dynamics (generalized Ohm's law) and the electric current to match the existing curl of the magnetic field. The structure and dynamics of the ionosphere/magnetosphere/solar-wind system can then be described in terms of three interrelated processes: (1) stress equilibrium and disequilibrium, (2) magnetic flux transport, (3) energy conversion and dissipation. This provides a framework for a unified formulation of settled as well as of controversial issues concerning, e.g., magnetospheric substorms and magnetic storms.
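For reference, a standard form of the generalized Ohm's law invoked above (term grouping follows common plasma-physics conventions):

```latex
\[
  \mathbf{E} + \mathbf{v}\times\mathbf{B}
  = \eta\,\mathbf{J}
  + \frac{1}{ne}\left(\mathbf{J}\times\mathbf{B} - \nabla\!\cdot\!\mathsf{P}_e\right)
  + \frac{m_e}{ne^2}\,\frac{\partial\mathbf{J}}{\partial t},
\]
% with the current additionally constrained to match the curl of the
% magnetic field (Ampere's law without displacement current),
% J = (1/mu_0) curl B, for large-scale quasi-neutral plasma.
```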
Mena, Jorge Humberto; Sanchez, Alvaro Ignacio; Rubiano, Andres M.; Peitzman, Andrew B.; Sperry, Jason L.; Gutierrez, Maria Isabel; Puyana, Juan Carlos
2011-01-01
Objective: The Glasgow Coma Scale (GCS) classifies traumatic brain injuries (TBI) as mild (14-15), moderate (9-13), or severe (3-8). The ATLS modified this classification so that a GCS score of 13 is categorized as mild TBI. We investigated the effect of this modification on mortality prediction, comparing patients with a GCS of 13 classified as moderate TBI (classic model) to patients with a GCS of 13 classified as mild TBI (modified model). Methods: We selected adult TBI patients from the Pennsylvania Trauma Outcome Study database (PTOS). Logistic regressions adjusting for age, sex, cause, severity, trauma center level, comorbidities, and isolated TBI were performed. A second evaluation included the time trend of mortality. A third evaluation also included hypothermia, hypotension, mechanical ventilation, screening for drugs, and severity of TBI. Discrimination of the models was evaluated using the area under the receiver operating characteristic curve (AUC). Calibration was evaluated using the Hosmer-Lemeshow goodness-of-fit (GOF) test. Results: In the first evaluation, the AUCs were 0.922 (95% CI, 0.917-0.926) and 0.908 (95% CI, 0.903-0.912) for the classic and modified models, respectively. Both models showed poor calibration (p<0.001). In the third evaluation, the AUCs were 0.946 (95% CI, 0.943-0.949) and 0.938 (95% CI, 0.934-0.940) for the classic and modified models, respectively, with improvements in calibration (p=0.30 and p=0.02 for the classic and modified models, respectively). Conclusion: The lack of overlap between the ROC curves of the two models reveals a statistically significant difference in their ability to predict mortality. The classic model demonstrated better GOF than the modified model. A GCS of 13 classified as moderate TBI in a multivariate logistic regression model performed better than a GCS of 13 classified as mild. PMID:22071923
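A small sketch of the two severity codings compared in the study:

```python
def classify_tbi(gcs, model="classic"):
    """Severity bands compared above: classic GCS bands (mild 14-15,
    moderate 9-13, severe 3-8) versus the ATLS-modified bands that move
    a GCS of 13 into the mild category."""
    if not 3 <= gcs <= 15:
        raise ValueError("GCS is defined on 3-15")
    mild_floor = 14 if model == "classic" else 13
    if gcs >= mild_floor:
        return "mild"
    return "moderate" if gcs >= 9 else "severe"

print(classify_tbi(13, "classic"), classify_tbi(13, "modified"))
# -> moderate mild
```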
Transient chaos - a resolution of breakdown of quantum-classical correspondence in optomechanics.
Wang, Guanglei; Lai, Ying-Cheng; Grebogi, Celso
2016-10-17
Recently, the phenomenon of quantum-classical correspondence breakdown was uncovered in optomechanics, where in the classical regime the system exhibits chaos but in the corresponding quantum regime the motion is regular - there appears to be no signature of classical chaos whatsoever in the corresponding quantum system, generating a paradox. We find that transient chaos, besides being a physically meaningful phenomenon by itself, provides a resolution. Using the method of quantum state diffusion to simulate the system dynamics subject to continuous homodyne detection, we uncover transient chaos associated with quantum trajectories. The transient behavior is consistent with chaos in the classical limit, while the long term evolution of the quantum system is regular. Transient chaos thus serves as a bridge for the quantum-classical transition (QCT). Strikingly, as the system transitions from the quantum to the classical regime, the average chaotic transient lifetime increases dramatically (faster than the Ehrenfest time characterizing the QCT for isolated quantum systems). We develop a physical theory to explain the scaling law. PMID:27748418
Vortex scaling ranges in two-dimensional turbulence
NASA Astrophysics Data System (ADS)
Burgess, B. H.; Dritschel, D. G.; Scott, R. K.
2017-11-01
We survey the role of coherent vortices in two-dimensional turbulence, including formation mechanisms, implications for classical similarity and inertial range theories, and characteristics of the vortex populations. We review early work on the spatial and temporal scaling properties of vortices in freely evolving turbulence and more recent developments, including a spatiotemporal scaling theory for vortices in the forced inverse energy cascade. We emphasize that Kraichnan-Batchelor similarity theories and vortex scaling theories are best viewed as complementary and together provide a more complete description of two-dimensional turbulence. In particular, similarity theory has a continued role in describing the weak filamentary sea between the vortices. Moreover, we locate both classical inertial and vortex scaling ranges within the broader framework of scaling in far-from-equilibrium systems, which generically exhibit multiple fixed point solutions with distinct scaling behaviour. We describe how stationary transport in a range of scales comoving with the dilatation of flow features, as measured by the growth in vortex area, constrains the vortex number density in both freely evolving and forced two-dimensional turbulence. The new theories for coherent vortices reveal previously hidden nontrivial scaling, point to new dynamical understanding, and provide a novel exciting window into two-dimensional turbulence.
Quantum communication complexity advantage implies violation of a Bell inequality
Buhrman, Harry; Czekaj, Łukasz; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Markiewicz, Marcin; Speelman, Florian; Strelchuk, Sergii
2016-01-01
We obtain a general connection between a large quantum advantage in communication complexity and Bell nonlocality. We show that given any protocol offering a sufficiently large quantum advantage in communication complexity, there exists a way of obtaining measurement statistics that violate some Bell inequality. Our main tool is port-based teleportation. If the gap between quantum and classical communication complexity can grow arbitrarily large, the ratio of the quantum value to the classical value of the Bell quantity becomes unbounded with the increase in the number of inputs and outputs. PMID:26957600
Some Properties of Estimated Scale Invariant Covariance Structures.
ERIC Educational Resources Information Center
Dijkstra, T. K.
1990-01-01
An example of scale invariance is provided via the LISREL model that is subject only to classical normalizations and zero constraints on the parameters. Scale invariance implies that the estimated covariance matrix must satisfy certain equations, and the nature of these equations depends on the fitting function used. (TJH)
The Research Identity Scale: Psychometric Analyses and Scale Refinement
ERIC Educational Resources Information Center
Jorgensen, Maribeth F.; Schweinle, William E.
2018-01-01
The 68-item Research Identity Scale (RIS) was informed through qualitative exploration of research identity development in master's-level counseling students and practitioners. Classical psychometric analyses revealed the items had strong validity and reliability and a single factor. A one-parameter Rasch analysis and item review was used to…
Méndez, Lídice; González, Nemecio; Parra, Francisco; Martín-Alonso, José M.; Limonta, Miladys; Sánchez, Kosara; Cabrales, Ania; Estrada, Mario P.; Rodríguez-Mallón, Alina; Farnós, Omar
2013-01-01
Recombinant virus-like particles (VLP) antigenically similar to rabbit hemorrhagic disease virus (RHDV) were recently expressed at high levels inside Pichia pastoris cells. Based on the potential of RHDV VLP as a platform for diverse vaccination purposes, we undertook the design, development and scale-up of a production process. Conformational and stability issues were addressed to improve process control and optimization. Analyses of the structure, morphology and antigenicity of these multimers were carried out at different pH values during cell disruption and purification by size-exclusion chromatography. Process steps and environmental stresses in which aggregation or conformational instability can be detected were included. These analyses revealed higher stability and recoveries of properly assembled high-purity capsids at acidic and neutral pH in phosphate buffer. The use of stabilizers during long-term storage in solution showed that sucrose, sorbitol, trehalose and glycerol acted as useful aggregation-reducing agents. The VLP emulsified in an oil-based adjuvant were subjected to accelerated thermal stress treatments. Little to no variation was detected in the stability of the formulations or in the structure of the recovered capsids. A comprehensive analysis of scale-up strategies was accomplished and a nine-step large-scale production process was established. VLP produced after chromatographic separation protected rabbits against a lethal challenge, and the minimum protective dose was identified. Stabilized particles were ultimately assayed as carriers of a foreign viral epitope from another pathogen affecting a larger animal species. For that purpose, a linear protective B-cell epitope from the Classical Swine Fever Virus (CSFV) E2 envelope protein was chemically coupled to RHDV VLP. The conjugates were able to present the E2 peptide fragment for immune recognition and significantly enhanced the peptide-specific antibody response in vaccinated pigs. Overall, these results established improved conditions for the conformational stability and recovery of these multimers during large-scale production and their potential use in different animal species or in humans. PMID:23460801
Is patience a virtue? Cosmic censorship of infrared effects in de Sitter
NASA Astrophysics Data System (ADS)
Ferreira, Ricardo Z.; Sandora, Mccullen; Sloth, Martin S.
While the accumulation of long-wavelength modes during inflation wreaks havoc on the large-scale structure of spacetime, the question of whether their presence is even observable by any local observer has led to considerable confusion. Although it is commonly agreed that infrared effects are not visible to a single sub-horizon observer at late times, we argue that the question is less trivial for a patient observer who has lived long enough to have a record of the state before the soft mode was created. Although classically there is no obstruction to measuring this effect locally, we give several indications that quantum mechanical uncertainties censor the effect, rendering the observation of long modes ultimately forbidden.
Characterizing and modeling the dynamics of online popularity.
Ratkiewicz, Jacob; Fortunato, Santo; Flammini, Alessandro; Menczer, Filippo; Vespignani, Alessandro
2010-10-08
Online popularity has an enormous impact on opinions, culture, policy, and profits. We provide a quantitative, large-scale, temporal analysis of the dynamics of online content popularity in two massive model systems: Wikipedia and an entire country's Web space. We find that the dynamics of popularity are characterized by bursts, displaying characteristic features of critical systems such as fat-tailed distributions of magnitude and interevent time. We propose a minimal model combining the classic preferential popularity increase mechanism with the occurrence of random popularity shifts due to exogenous factors. The model recovers the critical features observed in the empirical analysis of the systems analyzed here, highlighting the key factors needed in the description of popularity dynamics.
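A minimal sketch of the kind of model described above, in Python: popularity grows by a rich-get-richer (preferential) rule, punctuated by rare random exogenous shifts that redirect attention. All parameter values and rates here are illustrative assumptions, not those fitted in the paper.

# Sketch of a preferential-popularity model with random exogenous shifts.
# Rates and magnitudes are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_items, n_steps = 1000, 100000
popularity = np.ones(n_items)          # one initial "hit" per item
attractiveness = np.ones(n_items)      # exogenous multiplier per item

for t in range(n_steps):
    if rng.random() < 1e-3:            # rare exogenous event (news coverage, etc.)
        attractiveness[rng.integers(n_items)] *= rng.uniform(10, 100)
    w = popularity * attractiveness    # preferential + exogenous weighting
    popularity[rng.choice(n_items, p=w / w.sum())] += 1

# A fat-tailed popularity distribution emerges; inspect the largest items:
print(np.sort(popularity)[-10:])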
A new nonlinear conjugate gradient coefficient under strong Wolfe-Powell line search
NASA Astrophysics Data System (ADS)
Mohamed, Nur Syarafina; Mamat, Mustafa; Rivaie, Mohd
2017-08-01
A nonlinear conjugate gradient (CG) method plays an important role in solving large-scale unconstrained optimization problems, and is widely used due to its simplicity. The method is known to possess a sufficient descent condition and global convergence properties. In this paper, a new nonlinear CG coefficient βk is presented, derived by employing the strong Wolfe-Powell inexact line search. The performance of the new βk is tested in terms of the number of iterations and central processing unit (CPU) time, using MATLAB software on an Intel Core i7-3470 CPU. Numerical results show that the new βk converges rapidly compared with classical CG methods.
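For orientation, below is a sketch of nonlinear CG under a strong Wolfe line search in Python. The paper's new βk formula is not reproduced in the abstract, so the classical Fletcher-Reeves coefficient stands in here as an assumed placeholder; any βk rule can be swapped into the marked line.

# Sketch of nonlinear CG with a strong Wolfe-Powell line search.
# The Fletcher-Reeves beta_k below is a stand-in, not the paper's new coefficient.
import numpy as np
from scipy.optimize import line_search

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=500):
    x = x0.copy()
    g = grad(x)
    d = -g                                    # initial steepest-descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Strong Wolfe line search along d (scipy's line_search enforces Wolfe conditions).
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:                     # line search failed: restart with small step
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = g_new @ g_new / (g @ g)        # Fletcher-Reeves beta_k (placeholder)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                           200*(x[1] - x[0]**2)])
print(nonlinear_cg(f, grad, np.array([-1.2, 1.0])))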
[Rhabdomyosarcoma of the soft palate: a case report].
Arias Marzán, F; De Bonis Redondo, M; Redondo Ventura, F; Betancor Martínez, L; Sanginés Yzzo, M; Arias Marzán, J; De Bonis Braun, C; Zurita Expósito, V; Reig Ripoll, F; De Lucas Carmona, G
2006-01-01
Rhabdomyosarcomas (RMS) are infrequent tumors, described principally in infancy and located in the head and neck in 35% of cases. Nasopharyngeal localization is relatively rare; in these cases the tongue, palate and oral mucosa are the preferred sites. Classically, patients fared poorly when treated with surgery and radiotherapy alone. The introduction of systemic chemotherapy as a complementary treatment in the mid-1970s improved survival rates substantially. We describe the case of an adolescent patient who presented with an RMS of the soft palate, the diagnostic procedure, and the therapeutic decision adopted after a review of recent studies on the subject.
NASA Astrophysics Data System (ADS)
James, L. Allan; Phillips, Jonathan D.; Lecce, Scott A.
2017-10-01
This special issue celebrates the centennial of the publication of G.K. Gilbert's (1917) monograph, Hydraulic-Mining Débris in the Sierra Nevada, U.S. Geological Survey Professional Paper 105 (PP105). Reasons to celebrate PP105 are manifold. It was the last of four classic monographs that Gilbert wrote in a career that spanned five decades. The monograph, PP105, introduced several important concepts and provided an integrated view of watersheds that was uncommon in its day. It also provided an extreme, lucid example of anthropogenic changes and legacy sediment and how to approach such large-scale phenomena from an objective, quantitative basis.
Continuous-variable quantum key distribution with 1 Mbps secure key rate.
Huang, Duan; Lin, Dakai; Wang, Chao; Liu, Weiqi; Fang, Shuanghong; Peng, Jinye; Huang, Peng; Zeng, Guihua
2015-06-29
We report the first continuous-variable quantum key distribution (CVQKD) experiment to achieve a 1 Mbps secure key rate over 25 km of standard telecom fiber in a coarse wavelength division multiplexing (CWDM) environment. The result is achieved with two major technological advances: the use of a 1 GHz shot-noise-limited homodyne detector and the implementation of a 50 MHz clock system. The excess noise due to noise photons from the local oscillator and classical data channels in CWDM is controlled effectively. We note that the experimental verification of high-bit-rate CVQKD in a multiplexing environment is a significant step toward large-scale deployment in fiber networks.
Bennett, Kochise; Mukamel, Shaul
2014-01-28
The semi-classical theory of radiation-matter coupling misses local-field effects that may alter the pulse time-ordering and cascading that leads to the generation of new signals. These are then introduced macroscopically by solving Maxwell's equations. This procedure is convenient and intuitive but ad hoc. We show that both effects emerge naturally by including coupling, to second order, to quantum modes of the radiation field that are initially in the vacuum state. This approach is systematic and suggests a more general class of corrections that only arise in a QED framework. In the semi-classical theory, which only includes classical field modes, the susceptibility of a collection of N non-interacting molecules is additive and scales as N. Second-order coupling to a vacuum mode generates an effective retarded interaction that leads to cascading and local-field effects, both of which scale as N².
Micro-Macro Simulation of Viscoelastic Fluids in Three Dimensions
NASA Astrophysics Data System (ADS)
Rüttgers, Alexander; Griebel, Michael
2012-11-01
The development of the chemical industry resulted in various complex fluids that cannot be correctly described by classical fluid mechanics. For instance, this includes paint, engine oils with polymeric additives and toothpaste. We currently perform multiscale viscoelastic flow simulations for which we have coupled our three-dimensional Navier-Stokes solver NaSt3dGPF with the stochastic Brownian configuration field method on the micro-scale. In this method, we represent a viscoelastic fluid as a dumbbell system immersed in a three-dimensional Newtonian liquid which leads to a six-dimensional problem in space. The approach requires large computational resources and therefore depends on an efficient parallelisation strategy. Our flow solver is parallelised with a domain decomposition approach using MPI. It shows excellent scale-up results for up to 128 processors. In this talk, we present simulation results for viscoelastic fluids in square-square contractions due to their relevance for many engineering applications such as extrusion. Another aspect of the talk is the parallel implementation in NaSt3dGPF and the parallel scale-up and speed-up behaviour.
NASA Astrophysics Data System (ADS)
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
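As a concrete illustration of the iteratively reweighted least-squares idea used here, below is a minimal sketch for a TV-regularized linear inverse problem, reduced to one dimension for clarity. The operator, data, and parameters are synthetic placeholders; the paper's randomized generalized singular value decomposition and alternating direction acceleration are not reproduced.

# Illustrative IRLS for min ||Ax - b||^2 + lam * TV(x), in 1-D.
import numpy as np

def irls_tv(A, b, lam=1.0, n_iter=50, eps=1e-6):
    m, n = A.shape
    D = np.diff(np.eye(n), axis=0)            # 1-D finite-difference operator
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # initial unregularized solution
    for _ in range(n_iter):
        # Reweight so the quadratic penalty approximates the TV (L1) norm.
        w = 1.0 / np.sqrt((D @ x) ** 2 + eps)
        DtWD = D.T @ (w[:, None] * D)
        x = np.linalg.solve(A.T @ A + lam * DtWD, A.T @ b)
    return x

# Blocky test model with sharp discontinuities, recovered from noisy data.
rng = np.random.default_rng(0)
n = 100
x_true = np.zeros(n); x_true[30:60] = 1.0
A = rng.normal(size=(80, n))                  # generic underdetermined operator
b = A @ x_true + 0.05 * rng.normal(size=80)
x_hat = irls_tv(A, b, lam=5.0)
print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))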
Local dispersal promotes biodiversity in a real-life game of rock-paper-scissors
NASA Astrophysics Data System (ADS)
Kerr, Benjamin; Riley, Margaret A.; Feldman, Marcus W.; Bohannan, Brendan J. M.
2002-07-01
One of the central aims of ecology is to identify mechanisms that maintain biodiversity. Numerous theoretical models have shown that competing species can coexist if ecological processes such as dispersal, movement, and interaction occur over small spatial scales. In particular, this may be the case for non-transitive communities, that is, those without strict competitive hierarchies. The classic non-transitive system involves a community of three competing species satisfying a relationship similar to the children's game rock-paper-scissors, where rock crushes scissors, scissors cuts paper, and paper covers rock. Such relationships have been demonstrated in several natural systems. Some models predict that local interaction and dispersal are sufficient to ensure coexistence of all three species in such a community, whereas diversity is lost when ecological processes occur over larger scales. Here, we test these predictions empirically using a non-transitive model community containing three populations of Escherichia coli. We find that diversity is rapidly lost in our experimental community when dispersal and interaction occur over relatively large spatial scales, whereas all populations coexist when ecological processes are localized.
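A schematic lattice sketch of the modelled dynamics follows: three strains displace one another cyclically, with interactions restricted to nearest neighbours. The rules and rates are illustrative assumptions, not the parameters of the published models or the E. coli experiment.

# Schematic rock-paper-scissors lattice: strain x beats strain (x+1) mod 3.
# With purely local interaction/dispersal, all three strains typically coexist.
import numpy as np

rng = np.random.default_rng(2)
L = 100
grid = rng.integers(0, 3, size=(L, L))        # strains 0, 1, 2

def step_local(grid, n_events=10000):
    for _ in range(n_events):
        i, j = rng.integers(L, size=2)
        di, dj = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])  # random neighbour
        ni, nj = (i + di) % L, (j + dj) % L
        a, b = grid[i, j], grid[ni, nj]
        if (a - b) % 3 == 1:                  # neighbour b beats resident a: invasion
            grid[i, j] = b
    return grid

for epoch in range(20):
    step_local(grid)
print("strains surviving:", np.unique(grid))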
Learning about the scale of the solar system using digital planetarium visualizations
NASA Astrophysics Data System (ADS)
Yu, Ka Chun; Sahami, Kamran; Dove, James
2017-07-01
We studied the use of a digital planetarium for teaching relative distances and sizes in introductory undergraduate astronomy classes. Inspired in part by the classic short film The Powers of Ten and large physical scale models of the Solar System that can be explored on foot, we created lectures using virtual versions of these two pedagogical approaches for classes that saw either an immersive treatment in the planetarium or a non-immersive version in the regular classroom (with N = 973 students participating in total). Students who visited the planetarium had not only the greatest learning gains, but their performance increased with time, whereas students who saw the same visuals projected onto a flat display in their classroom showed less retention over time. The gains seen in the students who visited the planetarium reveal that this medium is a powerful tool for visualizing scale over multiple orders of magnitude. However the modest gains for the students in the regular classroom also show the utility of these visualization approaches for the broader category of classroom physics simulations.
Timescales of orogeny: Jurassic construction of the Klamath Mountains
NASA Astrophysics Data System (ADS)
Hacker, Bradley R.; Donato, Mary M.; Barnes, Calvin G.; McWilliams, M. O.; Ernst, W. G.
1995-06-01
Classical interpretations of orogeny were based on relatively imprecise biostratigraphic and isotopic age determinations that necessitated grouping apparently related features that may in reality have been greatly diachronous. Isotopic age techniques now have the precision required to resolve the timing of orogenic events on a scale much smaller than that of entire mountain belts. Forty-five new 40Ar/39Ar ages from the Klamath Mountains illuminate the deformation, metamorphism, magmatism, and sedimentation involved in the Jurassic construction of that orogen, leading to a new level of understanding regarding how preserved orogenic features relate to ancient plate tectonic processes. The new geochronologic relationships show that many Jurassic units of the Klamath Mountains had 200 Ma or older volcanoplutonic basement. Subsequent formation of a large ˜170 Ma arc was followed by contractional collapse of the arc. Collision with a spreading ridge may have led to large-scale NW-SE extension in the central and northern Klamaths from 167 to ˜155 Ma, coincident with the crystallization of voluminous plutonic and volcanic suites. Marked cooling of a large region of the central Klamath Mountains to below ˜350°C at ˜150 Ma may have occurred as the igneous belt was extinguished by subduction of colder material at deeper structural levels. These data demonstrate that the Klamath Mountains—and perhaps other similar orogens—were constructed during areally and temporally variant episodes of contraction, extension, and magmatism that do not fit classical definitions of orogeny.
Quantum realization of the bilinear interpolation method for NEQR.
Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou
2017-05-31
In recent years, quantum image processing has been one of the most active fields in quantum computation and quantum information. Image scaling, a kind of geometric transformation, has been widely studied and applied in classical image processing; however, a quantum version has been lacking. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then the concrete quantum circuits for bilinear interpolation, including scaling up and scaling down for NEQR, are given using the multiply Control-Not operation, the special adding-one operation, the reverse parallel adder, the parallel subtractor, and the multiplier and division operations. Finally, the complexity of the quantum network circuit is analyzed in terms of basic quantum gates. Simulation results show that an image scaled up using bilinear interpolation is clearer and less distorted than one scaled with nearest-neighbour interpolation.
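For reference, the classical bilinear interpolation that the paper adapts to the quantum setting computes each output pixel as a weighted average of the four nearest input pixels. The sketch below is a plain array-based NumPy version, not a quantum circuit.

# Classical bilinear image scaling: each output pixel mixes its four
# nearest input pixels with weights given by the fractional coordinates.
import numpy as np

def bilinear_scale(img, out_h, out_w):
    in_h, in_w = img.shape
    ys = np.linspace(0, in_h - 1, out_h)      # output rows mapped into input coords
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)] +
            (1 - wy) * wx       * img[np.ix_(y0, x1)] +
            wy       * (1 - wx) * img[np.ix_(y1, x0)] +
            wy       * wx       * img[np.ix_(y1, x1)])

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear_scale(img, 8, 8).round(2))     # scale up by a factor of 2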
Optical, analog and digital domain architectural considerations for visual communications
NASA Astrophysics Data System (ADS)
Metz, W. A.
2008-01-01
The end of the performance entitlement historically achieved by classic scaling of CMOS devices is within sight, driven ultimately by fundamental limits. Performance entitlements predicted by classic CMOS scaling have progressively failed to be realized in recent process generations due to excessive leakage, increasing interconnect delays and scaling of gate dielectrics. Prior to reaching fundamental limits, trends in technology, architecture and economics will pressure the industry to adopt new paradigms. A likely response is to repartition system functions away from digital implementations and into new architectures. Future architectures for visual communications will require extending the implementation into the optical and analog processing domains. The fundamental properties of these domains will in turn give rise to new architectural concepts. The limits of CMOS scaling and impact on architectures will be briefly reviewed. Alternative approaches in the optical, electronic and analog domains will then be examined for advantages, architectural impact and drawbacks.
Wang, Jing-Jing; Chen, Tzu-An; Baranowski, Tom; Lau, Patrick W C
2017-09-16
This study aimed to evaluate the psychometric properties of four self-efficacy scales (self-efficacy for fruit (FSE), vegetable (VSE), and water (WSE) intakes, and physical activity (PASE)) and to investigate differences in item functioning across sex, age, and body weight status groups using item response modeling (IRM) and differential item functioning (DIF) analysis. The four self-efficacy scales were administered to 763 Hong Kong Chinese children (55.2% boys) aged 8-13 years. Classical test theory (CTT) was used to examine the reliability and factorial validity of the scales. IRM was conducted and DIF analyses were performed to assess the item parameter estimates across children's sex, age and body weight status. All self-efficacy scales demonstrated adequate to excellent internal consistency reliability (Cronbach's α: 0.79-0.91). One FSE misfit item and one PASE misfit item were detected. Small DIF was found for all scale items across age groups, while items with medium to large DIF were detected across sex and body weight status groups; these will require modification. A Wright map revealed that the items covered the range of the distribution of participants' self-efficacy for each scale except the VSE. Because several items functioned differently by children's sex and body weight status, additional research is required to modify the four self-efficacy scales to minimize these moderating influences for application.
Lenalidomide and Blinatumomab in Treating Patients With Relapsed Non-Hodgkin Lymphoma
2018-06-11
CD19 Positive; Mediastinal Lymphoma; Recurrent B-Cell Lymphoma, Unclassifiable, With Features Intermediate Between Diffuse Large B-Cell Lymphoma and Classic Hodgkin Lymphoma; Recurrent Burkitt Lymphoma; Recurrent Diffuse Large B-Cell Lymphoma; Recurrent Grade 1 Follicular Lymphoma; Recurrent Grade 2 Follicular Lymphoma; Recurrent Grade 3 Follicular Lymphoma; Recurrent Mantle Cell Lymphoma; Recurrent Marginal Zone Lymphoma; Recurrent Non-Hodgkin Lymphoma; Recurrent Small Lymphocytic Lymphoma; Refractory B-Cell Lymphoma, Unclassifiable, With Features Intermediate Between Diffuse Large B-Cell Lymphoma and Classic Hodgkin Lymphoma; Refractory Burkitt Lymphoma; Refractory Diffuse Large B-Cell Lymphoma; Refractory Follicular Lymphoma; Refractory Mantle Cell Lymphoma; Refractory Small Lymphocytic Lymphoma
Fischbein, D; Corley, J C
2015-02-01
Classical biological control is a key method for managing populations of pests in long-lived crops such as plantation forestry. The execution of biological control programmes in general, as well as the evaluation of potential natural enemies, remains to a large extent an empirical endeavour. Thus, characterizing specific cases to determine patterns that may lead to more accurate predictions of success is an important goal of applied ecological research. We review the history of introduction, ecology and behaviour of the parasitoid Ibalia leucospoides, a natural enemy of Sirex noctilio, one of the most important pests of pine afforestation worldwide. We use an invasion ecology perspective, given the analogy between the main stages involved in classical biological control and the biological invasion process. We conclude that establishment success, a common point of failure in biocontrol, is not a limiting factor for I. leucospoides. A mismatch between the spread capacity of the parasitoid and that of its host could nevertheless affect control at a regional scale. In addition, we suggest that, given its known life history traits, this natural enemy may be a better regulator than suppressor of the host population. Moreover, spatial and temporal refuges of the host population that may favour the local persistence of the interaction probably reduce the degree to which the S. noctilio population is suppressed by the parasitoid. We emphasize that some of the biological attributes that promote establishment may negatively affect the suppression levels achieved. Studies of established non-native pest-parasitoid interactions may contribute to defining selection criteria for classical biological control, which may prove especially useful in integrated pest management (IPM) programmes for invasive forest insects.
Use of ruthenium dyes for subnanosecond detector fidelity testing in real time transient absorption
NASA Astrophysics Data System (ADS)
Byrdin, Martin; Thiagarajan, Viruthachalam; Villette, Sandrine; Espagne, Agathe; Brettel, Klaus
2009-04-01
Transient absorption spectroscopy is a powerful tool for the study of photoreactions on time scales from femtoseconds to seconds. Typically, reactions slower than ˜1 ns are recorded by the "classical" technique; the reaction is triggered by an excitation flash, and absorption changes accompanying the reaction are recorded in real time using a continuous monitoring light beam and a detection system with sufficiently fast response. The pico- and femtosecond region can be accessed by the more recent "pump-probe" technique, which circumvents the difficulties of real time detection on a subnanosecond time scale. This is paid for by accumulation of an excessively large number of shots to sample the reaction kinetics. Hence, it is of interest to extend the classical real time technique as far as possible to the subnanosecond range. In order to identify and minimize detection artifacts common on a subnanosecond scale, like overshoot, ringing, and signal reflections, rigorous testing is required of how the detection system responds to fast changes of the monitoring light intensity. Here, we introduce a novel method to create standard signals for detector fidelity testing on a time scale from a few picoseconds to tens of nanoseconds. The signals result from polarized measurements of absorption changes upon excitation of ruthenium complexes {[Ru(bpy)3]2+ and a less symmetric derivative} by a short laser flash. Two types of signals can be created depending on the polarization of the monitoring light with respect to that of the excitation flash: a fast steplike bleaching at magic angle and a monoexponentially decaying bleaching for parallel polarizations. The lifetime of the decay can be easily varied via temperature and viscosity of the solvent. The method is applied to test the performance of a newly developed real time transient absorption setup with 300 ps time resolution and high sensitivity.
Pisani, Pasquale; Caporuscio, Fabiana; Carlino, Luca; Rastelli, Giulio
2016-01-01
Protein kinases are key regulatory nodes in cellular networks and their function has been shown to be intimately coupled with their structural flexibility. However, understanding the key structural mechanisms of large conformational transitions remains a difficult task. CDK2 is a crucial regulator of cell cycle. Its activity is finely tuned by Cyclin E/A and the catalytic segment phosphorylation, whereas its deregulation occurs in many types of cancer. ATP competitive inhibitors have failed to be approved for clinical use due to toxicity issues raised by a lack of selectivity. However, in the last few years type III allosteric inhibitors have emerged as an alternative strategy to selectively modulate CDK2 activity. In this study we have investigated the conformational variability of CDK2. A low dimensional conformational landscape of CDK2 was modeled using classical multidimensional scaling on a set of 255 crystal structures. Microsecond-scale plain and accelerated MD simulations were used to populate this landscape by using an out-of-sample extension of multidimensional scaling. CDK2 was simulated in the apo-form and in complex with the allosteric inhibitor 8-anilino-1-napthalenesulfonic acid (ANS). The apo-CDK2 landscape analysis showed a conformational equilibrium between an Src-like inactive conformation and an active-like form. These two states are separated by different metastable states that share hybrid structural features with both forms of the kinase. In contrast, the CDK2/ANS complex landscape is compatible with a conformational selection picture where the binding of ANS in proximity of the αC helix causes a population shift toward the inactive conformation. Interestingly, the new metastable states could enlarge the pool of candidate structures for the development of selective allosteric CDK2 inhibitors. The method here presented should not be limited to the CDK2 case but could be used to systematically unmask similar mechanisms throughout the human kinome. PMID:27100206
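For reference, classical multidimensional scaling (MDS), used above to build the low-dimensional landscape, embeds points from a pairwise distance matrix via double-centering and an eigendecomposition. The sketch below uses a synthetic distance matrix, not the 255 CDK2 crystal structures.

# Classical MDS: recover low-dimensional coordinates from pairwise distances.
import numpy as np

def classical_mds(D, k=2):
    # Double-center the squared distance matrix: B = -0.5 * J D^2 J.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # The top-k eigenpairs of B give the embedding coordinates.
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Synthetic example: points on a noisy circle, recovered from distances alone.
rng = np.random.default_rng(3)
theta = np.sort(rng.uniform(0, 2 * np.pi, 50))
pts = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(50, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(classical_mds(D, k=2)[:5])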
Squamation and ecology of thelodonts.
Ferrón, Humberto G; Botella, Héctor
2017-01-01
Thelodonts are an enigmatic group of Paleozoic jawless vertebrates that have been well studied from taxonomic, biostratigraphic and paleogeographic points of view, although our knowledge of their ecology and mode of life is still scant. Their bodies were covered by micrometric scales whose morphology, histology and development are extremely similar to those of extant sharks. Based on these similarities, and on the well-recognized relationship between squamation and ecology in sharks, here we explore the ecological diversity and lifestyles of thelodonts. For this we use classic morphometrics and discriminant analysis to characterize the squamation patterns of a significant number of extant shark species whose ecology is well known. Multivariate analyses define a characteristic squamation pattern for each ecological group, thus establishing a comparative framework for inferring lifestyles in thelodonts. We then use this information to study the squamation of the 147 currently described species of thelodonts, known from both articulated and disarticulated remains. Discriminant analysis recognizes squamation patterns comparable to those of sharks and links them to specific ecological groups. Our results suggest a remarkable ecological diversity in thelodonts. A large number of them were probably demersal species inhabiting hard substrates, within caves and crevices in rocky environments or reefs, taking advantage of the flexibility provided by their micromeric squamations. Contrary to classical interpretations, only a few thelodonts were placed among demersal species inhabiting sandy and muddy substrates. Schooling species with defensive scales against ectoparasites could also have been abundant, suggesting that social interactions and ectoparasite pressure were present in vertebrates as early as the Silurian. The presence of species with scales suggestive of low to moderate speed and a lifestyle presumably associated with open-water environments indicates adaptation of thelodonts to deep-water habitats. Scale morphology suggests that some other thelodonts were strong-swimming pelagic species, most of them radiating during the Early Devonian in association with the Nekton Revolution. PMID:28241029
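A sketch of the discriminant-analysis step described above: scale morphometrics from sharks of known ecology train a classifier that then assigns fossil thelodonts to ecological groups. The feature names, group means, and data below are hypothetical placeholders, not the paper's measurements.

# Train a linear discriminant on (synthetic) shark scale morphometrics,
# then classify (synthetic) thelodont scales into ecological groups.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
groups = ["demersal", "pelagic", "schooling"]
# Hypothetical morphometric features per scale: crown length/width ratio,
# ridge spacing, crown-base angle.
X_train = np.vstack([rng.normal(loc=mu, scale=0.3, size=(40, 3))
                     for mu in ([1.0, 0.5, 2.0], [2.0, 1.5, 1.0], [1.5, 1.0, 3.0])])
y_train = np.repeat(groups, 40)

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
thelodont_scales = rng.normal(loc=[1.1, 0.6, 2.1], size=(5, 3))  # "unknown" fossils
print(lda.predict(thelodont_scales))          # inferred ecological group per fossil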
Classical conformal blocks and accessory parameters from isomonodromic deformations
NASA Astrophysics Data System (ADS)
Lencsés, Máté; Novaes, Fábio
2018-04-01
Classical conformal blocks appear in the large central charge limit of 2D Virasoro conformal blocks. In the AdS3 /CFT2 correspondence, they are related to classical bulk actions and used to calculate entanglement entropy and geodesic lengths. In this work, we discuss the identification of classical conformal blocks and the Painlevé VI action showing how isomonodromic deformations naturally appear in this context. We recover the accessory parameter expansion of Heun's equation from the isomonodromic τ -function. We also discuss how the c = 1 expansion of the τ -function leads to a novel approach to calculate the 4-point classical conformal block.
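For orientation, the semiclassical limit referred to here is conventionally written as follows (standard conventions from the literature; the paper's normalizations may differ). The Virasoro block exponentiates at large central charge,

\mathcal{F}(c, \Delta_i, \Delta; x) \;\sim\; \exp\!\left[\frac{c}{6}\, f(\delta_i, \delta; x)\right], \qquad c \to \infty,

with the rescaled dimensions \delta_i = 6\Delta_i/c and \delta = 6\Delta/c held fixed; the accessory parameter of the associated Fuchsian (Heun-type) equation is then obtained from the x-derivative of the classical block f.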
Scaling analysis and instantons for thermally assisted tunneling and quantum Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Jiang, Zhang; Smelyanskiy, Vadim N.; Isakov, Sergei V.; Boixo, Sergio; Mazzola, Guglielmo; Troyer, Matthias; Neven, Hartmut
2017-01-01
We develop an instantonic calculus to derive an analytical expression for the thermally assisted tunneling decay rate of a metastable state in a fully connected quantum spin model. The tunneling decay problem can be mapped onto the Kramers escape problem of a classical random dynamical field. This dynamical field is simulated efficiently by path-integral quantum Monte Carlo (QMC). We show analytically that the exponential scaling with the number of spins of the thermally assisted quantum tunneling rate and the escape rate of the QMC process are identical. We relate this effect to the existence of a dominant instantonic tunneling path. The instanton trajectory is described by nonlinear dynamical mean-field theory equations for a single-site magnetization vector, which we solve exactly. Finally, we derive scaling relations for the "spiky" barrier shape when the spin tunneling and QMC rates scale polynomially with the number of spins N while a purely classical over-the-barrier activation rate scales exponentially with N .
NASA Astrophysics Data System (ADS)
Verrier, Sébastien; Crépon, Michel; Thiria, Sylvie
2014-09-01
Spectral scaling properties have already been evidenced in oceanic numerical simulations and have been subject to several interpretations. They can be used to evaluate classical turbulence theories that predict scaling with specific exponents, and to evaluate the quality of GCM outputs from a statistical and multiscale point of view. However, a more complete framework based on multifractal cascades generalizes the classical but restrictive second-order spectral framework to other moment orders, providing an accurate description of the probability distributions of the fields at multiple scales. The predictions of this formalism still need systematic verification in oceanic GCMs, while their atmospheric counterparts have recently been confirmed by several papers. The present paper is devoted to a systematic analysis of several oceanic fields produced by the NEMO oceanic GCM. Attention is focused on regional, idealized configurations that permit evaluation of the NEMO engine core from a scaling point of view, free of the limitations introduced by land masks. Based on classical multifractal analysis tools, multifractal properties were evidenced for several oceanic state variables (sea surface temperature and salinity, velocity components, etc.). While first-order structure functions estimated a different nonconservativity parameter H in two scaling ranges, the multiorder statistics of turbulent fluxes were scaling over almost the whole available range. This multifractal scaling was then parameterized with the help of the universal multifractal framework, providing parameters that are coherent with the existing empirical literature. Finally, we argue that knowledge of these properties may be useful for oceanographers. The framework seems well suited for the statistical evaluation of OGCM outputs, and it also provides practical solutions for simulating subpixel variability stochastically for GCM downscaling purposes. As an independent perspective, the existence of multifractal properties in oceanic flows is also of interest for investigating scale dependencies in remote sensing inversion algorithms.
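For reference, the universal multifractal framework invoked above is conventionally summarized as follows (standard Schertzer-Lovejoy notation; the paper's symbols may differ). Structure functions of a field f scale as

\langle |\Delta f(\Delta x)|^{q} \rangle \;\sim\; \Delta x^{\,\zeta(q)}, \qquad \zeta(q) = qH - K(q),

where the moment scaling function of the conservative flux takes the two-parameter universal form

K(q) = \frac{C_1}{\alpha - 1}\left(q^{\alpha} - q\right),

with H the nonconservativity parameter, C_1 the codimension of the mean, and 0 \le \alpha \le 2 the multifractality (Lévy) index.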
A Scalable Approach for Protein False Discovery Rate Estimation in Large Proteomic Data Sets
Savitski, Mikhail M.; Wilhelm, Mathias; Hahne, Hannes; Kuster, Bernhard; Bantscheff, Marcus
2015-01-01
Calculating the number of confidently identified proteins and estimating false discovery rate (FDR) is a challenge when analyzing very large proteomic data sets such as entire human proteomes. Biological and technical heterogeneity in proteomic experiments further add to the challenge and there are strong differences in opinion regarding the conceptual validity of a protein FDR and no consensus regarding the methodology for protein FDR determination. There are also limitations inherent to the widely used classic target–decoy strategy that particularly show when analyzing very large data sets and that lead to a strong over-representation of decoy identifications. In this study, we investigated the merits of the classic, as well as a novel target–decoy-based protein FDR estimation approach, taking advantage of a heterogeneous data collection comprised of ∼19,000 LC-MS/MS runs deposited in ProteomicsDB (https://www.proteomicsdb.org). The “picked” protein FDR approach treats target and decoy sequences of the same protein as a pair rather than as individual entities and chooses either the target or the decoy sequence depending on which receives the highest score. We investigated the performance of this approach in combination with q-value based peptide scoring to normalize sample-, instrument-, and search engine-specific differences. The “picked” target–decoy strategy performed best when protein scoring was based on the best peptide q-value for each protein yielding a stable number of true positive protein identifications over a wide range of q-value thresholds. We show that this simple and unbiased strategy eliminates a conceptual issue in the commonly used “classic” protein FDR approach that causes overprediction of false-positive protein identification in large data sets. The approach scales from small to very large data sets without losing performance, consistently increases the number of true-positive protein identifications and is readily implemented in proteomics analysis software. PMID:25987413
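The core of the "picked" strategy described above is simple enough to sketch directly: each protein's target and decoy sequence compete, only the higher-scoring member of the pair survives, and the FDR is estimated from the surviving decoys. The accession names and scores below are illustrative, not from ProteomicsDB.

# "Picked" target-decoy protein FDR: keep the winner of each target/decoy pair.
def picked_protein_fdr(scores):
    # scores: dict mapping protein accession -> best peptide score,
    # with decoys marked by a "DECOY_" prefix on the accession.
    picked = {}
    for acc, score in scores.items():
        base = acc.removeprefix("DECOY_")
        if base not in picked or score > picked[base][1]:
            picked[base] = (acc.startswith("DECOY_"), score)
    # Walk down the retained entries by score, accumulating a decoy-based FDR.
    entries = sorted(picked.values(), key=lambda e: -e[1])
    decoys = targets = 0
    results = []
    for is_decoy, score in entries:
        decoys += is_decoy
        targets += not is_decoy
        results.append((score, decoys / max(targets, 1)))
    return results

scores = {"P1": 0.99, "DECOY_P1": 0.20, "P2": 0.40, "DECOY_P2": 0.95, "P3": 0.90}
for score, fdr in picked_protein_fdr(scores):
    print(f"score={score:.2f} est. FDR={fdr:.2f}")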
Stress transfer mechanisms at the submicron level for graphene/polymer systems.
Anagnostopoulos, George; Androulidakis, Charalampos; Koukaras, Emmanuel N; Tsoukleri, Georgia; Polyzos, Ioannis; Parthenios, John; Papagelis, Konstantinos; Galiotis, Costas
2015-02-25
The stress transfer mechanism from a polymer substrate to a nanoinclusion, such as a graphene flake, is of extreme interest for the production of effective nanocomposites. Previous work conducted mainly at the micron scale has shown that the intrinsic mechanism of stress transfer is shear at the interface. However, since the interfacial shear takes its maximum value at the very edge of the nanoinclusion it is of extreme interest to assess the effect of edge integrity upon axial stress transfer at the submicron scale. Here, we conduct a detailed Raman line mapping near the edges of a monolayer graphene flake that is simply supported onto an epoxy-based photoresist (SU8)/poly(methyl methacrylate) matrix at steps as small as 100 nm. We show for the first time that the distribution of axial strain (stress) along the flake deviates somewhat from the classical shear-lag prediction for a region of ∼ 2 μm from the edge. This behavior is mainly attributed to the presence of residual stresses, unintentional doping, and/or edge effects (deviation from the equilibrium values of bond lengths and angles, as well as different edge chiralities). By considering a simple balance of shear-to-normal stresses at the interface we are able to directly convert the strain (stress) gradient to values of interfacial shear stress for all the applied tensile levels without assuming classical shear-lag behavior. For large flakes a maximum value of interfacial shear stress of 0.4 MPa is obtained prior to flake slipping.
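For reference, the shear-to-normal stress balance mentioned above follows from the equilibrium of a flake element of thickness t: the interfacial shear stress is proportional to the measured axial strain gradient (standard shear-lag reasoning; the symbols here are generic rather than taken from the paper),

\tau(x) \;=\; t\,\frac{d\sigma(x)}{dx} \;=\; t\,E_{f}\,\frac{d\varepsilon_{f}(x)}{dx},

while the classical shear-lag prediction for the axial strain along a flake of length L under matrix strain \varepsilon_{m} has the familiar hyperbolic profile

\varepsilon_{f}(x) \;=\; \varepsilon_{m}\left[1 - \frac{\cosh \beta x}{\cosh(\beta L/2)}\right],

with \beta set by the interface and flake stiffnesses. The observed deviation near the edges is measured against this profile.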
Mesoscopic fluctuations in biharmonically driven flux qubits
NASA Astrophysics Data System (ADS)
Ferrón, Alejandro; Domínguez, Daniel; Sánchez, María José
2017-01-01
We investigate flux qubits driven by a biharmonic magnetic signal, with a phase lag that acts as an effective time-reversal-breaking parameter. The driving-induced transition rate between the ground and excited states of the flux qubit can be thought of as an effective transmittance, profiting from a direct analogy between interference effects at avoided level crossings and scattering events in disordered electronic systems. For time scales prior to full relaxation, but large compared to the decoherence time, this characteristic rate has been accessed experimentally by Gustavsson et al. [Phys. Rev. Lett. 110, 016603 (2013)], 10.1103/PhysRevLett.110.016603, and its sensitivity to both the phase lag and the dc flux detuning explored. In this way, signatures of universal-conductance-fluctuation-like effects have been analyzed and compared with predictions from a phenomenological model that accounts for decoherence only as classical noise. Here we go beyond the classical noise model and solve the full dynamics of the driven flux qubit in contact with a quantum bath, employing the Floquet-Born-Markov master equation. Within this formalism, the computed relaxation and decoherence rates turn out to be strongly dependent on both the phase lag and the dc flux detuning. Consequently, the associated pattern of fluctuations in the characteristic rates displays important differences from those obtained within the phenomenological model. In particular, we demonstrate a weak-localization-like effect in the average values of the relaxation rate. Our predictions can be tested for accessible but longer time scales than current experimental times.
Studying Radiation Damage in Structural Materials by Using Ion Accelerators
NASA Astrophysics Data System (ADS)
Hosemann, Peter
2011-02-01
Radiation damage in structural materials is of major concern and a limiting factor for a wide range of engineering and scientific applications, including nuclear power production, medical applications, or components for scientific radiation sources. The usefulness of these applications is largely limited by the damage a material can sustain in the extreme environments of radiation, temperature, stress, and fatigue, over long periods of time. Although a wide range of materials has been extensively studied in nuclear reactors and neutron spallation sources since the beginning of the nuclear age, ion beam irradiations using particle accelerators are a more cost-effective alternative to study radiation damage in materials in a rather short period of time, allowing researchers to gain fundamental insights into the damage processes and to estimate the property changes due to irradiation. However, the comparison of results gained from ion beam irradiation, large-scale neutron irradiation, and a variety of experimental setups is not straightforward, and several effects have to be taken into account. It is the intention of this article to introduce the reader to the basic phenomena taking place and to point out the differences between classic reactor irradiations and ion irradiations. It will also provide an assessment of how accelerator-based ion beam irradiation is used today to gain insight into the damage in structural materials for large-scale engineering applications.
NASA Astrophysics Data System (ADS)
Ma, Yulong; Liu, Heping
2017-12-01
Atmospheric flow over complex terrain, particularly recirculation flows, greatly influences wind-turbine siting, forest-fire behaviour, and trace-gas and pollutant dispersion. However, there is a large uncertainty in the simulation of flow over complex topography, which is attributable to the type of turbulence model, the subgrid-scale (SGS) turbulence parametrization, terrain-following coordinates, and numerical errors in finite-difference methods. Here, we upgrade the large-eddy simulation module within the Weather Research and Forecasting model by incorporating the immersed-boundary method into the module to improve simulations of the flow and recirculation over complex terrain. Simulations over the Bolund Hill indicate improved mean absolute speed-up errors with respect to previous studies, as well an improved simulation of the recirculation zone behind the escarpment of the hill. With regard to the SGS parametrization, the Lagrangian-averaged scale-dependent Smagorinsky model performs better than the classic Smagorinsky model in reproducing both velocity and turbulent kinetic energy. A finer grid resolution also improves the strength of the recirculation in flow simulations, with a higher horizontal grid resolution improving simulations just behind the escarpment, and a higher vertical grid resolution improving results on the lee side of the hill. Our modelling approach has broad applications for the simulation of atmospheric flows over complex topography.
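For reference, the classic Smagorinsky closure compared above models the SGS eddy viscosity as (standard form; the Lagrangian-averaged scale-dependent variant instead computes the coefficient dynamically along fluid trajectories rather than fixing it)

\nu_{t} = \left(C_{s}\,\Delta\right)^{2} |\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad \bar{S}_{ij} = \frac{1}{2}\left(\frac{\partial \bar{u}_{i}}{\partial x_{j}} + \frac{\partial \bar{u}_{j}}{\partial x_{i}}\right),

with C_s a fixed constant and \Delta the grid filter width.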
Senthil, Raja; Mohapatra, Ranjan Kumar; Sampath, Mouleeswaran Koramadai; Sundaraiya, Sumati
2016-01-01
Anaplastic large cell lymphoma (ALCL) is a rare type of non-Hodgkin's lymphoma (NHL), but one of the most common subtypes of T-cell lymphoma. It is an aggressive T-cell lymphoma, and some ALCL may mimic the less aggressive classical Hodgkin lymphoma (HL) histopathologically; it may be misdiagnosed unless careful immunohistochemical examination is performed. As the prognosis and management of these two lymphomas differ significantly, it is important to make a correct diagnosis. We describe a case diagnosed as classical HL on histopathological examination of a cervical lymph node, in whom (18)F-fluorodeoxyglucose positron emission tomography/computed tomography appearances were unusual for HL and warranted review of the histopathology, which revealed anaplastic lymphoma kinase-1-negative anaplastic large T-cell lymphoma, Hodgkin-like variant, thereby changing the management.
Introduction to Methods of Approximation in Physics and Astronomy
NASA Astrophysics Data System (ADS)
van Putten, Maurice H. P. M.
2017-04-01
Modern astronomy reveals an evolving Universe rife with transient sources, mostly discovered - few predicted - in multi-wavelength observations. Our window of observations now includes electromagnetic radiation, gravitational waves and neutrinos. For the practicing astronomer, these are highly interdisciplinary developments that pose a novel challenge to be well-versed in astroparticle physics and data analysis. In realizing the full discovery potential of these multimessenger approaches, the latter increasingly involves high-performance supercomputing. These lecture notes developed out of lectures on mathematical-physics in astronomy to advanced undergraduate and beginning graduate students. They are organised to be largely self-contained, starting from basic concepts and techniques in the formulation of problems and methods of approximation commonly used in computation and numerical analysis. This includes root finding, integration, signal detection algorithms involving the Fourier transform and examples of numerical integration of ordinary differential equations and some illustrative aspects of modern computational implementation. In the applications, considerable emphasis is put on fluid dynamical problems associated with accretion flows, as these are responsible for a wealth of high energy emission phenomena in astronomy. The topics chosen are largely aimed at phenomenological approaches, to capture main features of interest by effective methods of approximation at a desired level of accuracy and resolution. Formulated in terms of a system of algebraic, ordinary or partial differential equations, this may be pursued by perturbation theory through expansions in a small parameter or by direct numerical computation. Successful application of these methods requires a robust understanding of asymptotic behavior, errors and convergence. In some cases, the number of degrees of freedom may be reduced, e.g., for the purpose of (numerical) continuation or to identify secular behavior. For instance, secular evolution of orbital parameters may derive from averaging over essentially periodic behavior on relatively short, orbital periods. When the original number of degrees of freedom is large, averaging over dynamical time scales may lead to a formulation in terms of a system in approximately thermodynamic equilibrium subject to evolution on a secular time scale by a regular or singular perturbation. In modern astrophysics and cosmology, gravitation is being probed across an increasingly broad range of scales and more accurately so than ever before. These observations probe weak gravitational interactions below what is encountered in our solar system by many orders of magnitude. These observations hereby probe (curved) spacetime at low energy scales that may reveal novel properties hitherto unanticipated in the classical vacuum of Newtonian mechanics and Minkowski spacetime. Dark energy and dark matter encountered on the scales of galaxies and beyond, therefore, may be, in part, revealing our ignorance of the vacuum at the lowest energy scales encountered in cosmology. In this context, our application of Newtonian mechanics to globular clusters, galaxies and cosmology is an approximation assuming a classical vacuum, ignoring the potential for hidden low energy scales emerging on cosmological scales. Given our ignorance of the latter, this poses a challenge in the potential for unknown systematic deviations. If of quantum mechanical origin, such deviations are often referred to as anomalies. 
While they are small in traditional, macroscopic Newtonian experiments in the laboratory, the same is not a given in the limit of arbitrarily weak gravitational interactions. We hope this selection of introductory material is useful and kindles the reader's interest to become a creative member of modern astrophysics and cosmology.
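To make the flavor of the numerical building blocks listed above concrete, here is a minimal Python sketch (ours, not taken from the lecture notes; all names are illustrative) that combines Newton root finding with a classical fourth-order Runge-Kutta step, applied to Kepler's equation as an astronomy-flavored example:

import numpy as np

def newton(f, x0, tol=1e-12, max_iter=50, h=1e-7):
    """Newton's method with a central-difference derivative estimate."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfdx = (f(x + h) - f(x - h)) / (2.0 * h)
        x -= fx / dfdx
    return x

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: solve Kepler's equation E - e*sin(E) = M for the eccentric anomaly E.
E = newton(lambda E: E - 0.1 * np.sin(E) - 1.0, x0=1.0)

Both routines illustrate the themes of the notes: iteration with a convergence criterion, and a discretization whose error scales with a known power of the step size.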
NASA Astrophysics Data System (ADS)
Piniewski, Mikołaj
2016-05-01
The objective of this study was to apply a previously developed large-scale and high-resolution SWAT model of the Vistula and the Odra basins, calibrated with a focus on natural flow simulation, in order to assess the impact of three different dam reservoirs on streamflow using the Indicators of Hydrologic Alteration (IHA). A tailored spatial calibration approach was designed, in which calibration was focused on a large set of relatively small non-nested sub-catchments with semi-natural flow regime. These were classified into calibration clusters based on the similarity of their flow statistics. After calibration and validation gave overall positive results, the calibrated parameter values were transferred to the remaining part of the basins using an approach based on hydrological similarity of donor and target catchments. The calibrated model was applied in three case studies with the purpose of assessing the effect of dam reservoirs (the Włocławek, Siemianówka and Czorsztyn Reservoirs) on streamflow alteration. Both the assessment based on gauged streamflow (Before-After design) and the one based on simulated natural streamflow showed large alterations in selected flow statistics related to magnitude, duration, high and low flow pulses and rate of change. Some benefits of using a large-scale and high-resolution hydrological model for the assessment of streamflow alteration include: (1) providing an alternative or complementary approach to the classical Before-After designs, (2) isolating the climate variability effect from the dam (or any other source of alteration) effect, (3) providing a practical tool that can be applied at a range of spatial scales over a large area, such as a country, in a uniform way. Thus, the presented approach can be applied for designing more natural flow regimes, which is crucial for river and floodplain ecosystem restoration in the context of the European Union's policy on environmental flows.
NASA Astrophysics Data System (ADS)
Charbonnier, A.; Combet, C.; Daniel, M.; Funk, S.; Hinton, J. A.; Maurin, D.; Power, C.; Read, J. I.; Sarkar, S.; Walker, M. G.; Wilkinson, M. I.
2011-12-01
Due to their large dynamical mass-to-light ratios, dwarf spheroidal galaxies (dSphs) are promising targets for the indirect detection of dark matter (DM) in γ-rays. We examine their detectability by present and future γ-ray observatories. The key innovative features of our analysis are as follows: (i) we take into account the angular size of the dSphs; while nearby objects have higher γ-ray flux, their larger angular extent can make them less attractive targets for background-dominated instruments; (ii) we derive DM profiles and the astrophysical J-factor (which parametrizes the expected γ-ray flux, independently of the choice of DM particle model) for the classical dSphs directly from photometric and kinematic data. We assume very little about the DM profile, modelling this as a smooth split-power-law distribution, with and without subclumps; (iii) we use a Markov chain Monte Carlo technique to marginalize over unknown parameters and determine the sensitivity of our derived J-factors to both model and measurement uncertainties; and (iv) we use simulated DM profiles to demonstrate that our J-factor determinations recover the correct solution within our quoted uncertainties. Our key findings are as follows: (i) subclumps in the dSphs do not usefully boost the signal; (ii) the sensitivity of atmospheric Cherenkov telescopes to dSphs within ˜20 kpc with cored haloes can be up to ˜50 times worse than when estimated assuming them to be point-like. Even for the satellite-borne Fermi-Large Area Telescope (Fermi-LAT), the sensitivity is significantly degraded on the relevant angular scales for long exposures; hence, it is vital to consider the angular extent of the dSphs when selecting targets; (iii) no DM profile has been ruled out by current data, but using a prior on the inner DM cusp slope 0 ≤ γ_prior ≤ 1 provides J-factor estimates accurate to a factor of a few if an appropriate angular scale is chosen; (iv) the J-factor is best constrained at a critical integration angle α_c = 2 r_h/d (where r_h is the half-light radius and d is the distance from the dwarf) and we estimate the corresponding sensitivity of γ-ray observatories; (v) the 'classical' dSphs can be grouped into three categories: well constrained and promising (Ursa Minor, Sculptor and Draco), well constrained but less promising (Carina, Fornax and Leo I), and poorly constrained (Sextans and Leo II); and (vi) observations of classical dSphs with the Fermi-LAT integrated over the mission lifetime are more promising than observations with the planned Cherenkov Telescope Array for DM particle mass ≲ 700 GeV. However, even the Fermi-LAT will not have sufficient integrated signal from the classical dwarfs to detect DM in the 'vanilla' Minimal Supersymmetric Standard Model. Both the Galactic Centre and the 'ultrafaint' dwarfs are likely to be better targets and will be considered in future work.
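For reference, the astrophysical J-factor mentioned above is conventionally defined as a line-of-sight integral of the squared DM density over a solid angle (normalization conventions vary between papers; this is one common form):

\[ J(\Delta\Omega) \;=\; \int_{\Delta\Omega}\mathrm{d}\Omega \int_{\rm l.o.s.} \rho_{\rm DM}^{2}(l,\Omega)\,\mathrm{d}l , \]

so that the predicted annihilation flux factorizes into J times a purely particle-physics term (cross section, mass and photon spectrum).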
Three-dimensional Numerical Simulations of Rayleigh-Taylor Unstable Flames in Type Ia Supernovae
NASA Astrophysics Data System (ADS)
Zingale, M.; Woosley, S. E.; Rendleman, C. A.; Day, M. S.; Bell, J. B.
2005-10-01
Flame instabilities play a dominant role in accelerating the burning front to a large fraction of the speed of sound in a Type Ia supernova. We present a three-dimensional numerical simulation of a Rayleigh-Taylor unstable carbon flame, following its evolution through the transition to turbulence. A low-Mach number hydrodynamics method is used, freeing us from the harsh time step restrictions imposed by sound waves. We fully resolve the thermal structure of the flame and its reaction zone, eliminating the need for a flame model. A single density is considered, 1.5×10^7 g cm^-3, and half-carbon, half-oxygen fuel: conditions under which the flame propagated in the flamelet regime in our related two-dimensional study. We compare to a corresponding two-dimensional simulation and show that while fire polishing keeps the small features suppressed in two dimensions, turbulence wrinkles the flame on far smaller scales in the three-dimensional case, suggesting that the transition to the distributed burning regime occurs at higher densities in three dimensions. Detailed turbulence diagnostics are provided. We show that the turbulence follows a Kolmogorov spectrum and is highly anisotropic on the large scales, with a much larger integral scale in the direction of gravity. Furthermore, we demonstrate that it becomes more isotropic as it cascades down to small scales. On the basis of the turbulent statistics and the flame properties of our simulation, we compute the Gibson scale. We show the progress of the turbulent flame through a classic combustion regime diagram, indicating that the flame just enters the distributed burning regime near the end of our simulation.
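A brief aside on the Gibson scale computed above: it is commonly defined as the scale l_G at which the turbulent velocity fluctuation matches the laminar flame speed S_l. With the Kolmogorov scaling reported for this simulation (our notation; u_L is the velocity fluctuation at the integral scale L),

\[ u(l_G) = S_l, \qquad u(l) \simeq u_L \left(\tfrac{l}{L}\right)^{1/3} \;\;\Rightarrow\;\; l_G = L \left(\tfrac{S_l}{u_L}\right)^{3}, \]

so the flame enters the distributed burning regime when l_G falls below the flame thickness.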
Instability mechanisms in microfluidics and nanomaterials
NASA Astrophysics Data System (ADS)
Thamida, Sunil Kumar
Recent scientific advances in chemical engineering are leading to synthesis of micro-scale and nano-scale functional devices and materials. However, optimal design and performance of these devices and materials requires a fundamental understanding of the interfacial phenomena at micro-scale and nano-scale. Due to new physical forces unique to small scales, new phenomena appear that are unexpected at large scales. A study of new interfacial patterns that arise from various interfacial instabilities at these scales is carried out in this dissertation. Nevertheless, interfacial patterns ranging from micro to macro scale are ubiquitous in multiphase systems and material synthesis involving a surface reaction. Fractal break up of a thin viscous oil film dewetting between two separating plates is studied experimentally. Unlike the classical patterns of pores and dendrites, it forms a fractal pattern like a branching tree with its origin at the center of the circular film. Lubrication theory is extended to such a fractal geometry, which is unlike the circular geometry of a classical dewetting problem. A power law scaling is obtained for the radial air finger length distribution to construct an idealized Cayley fractal structure. Our theory yields the result that the plate detachment time decreases by half in the limit of a fully fractal pattern, which is confirmed experimentally. Nanopore formation in anodized alumina is also found to bear similarities to the interfacial pattern formation of the dewetting film between two separating plates. The oxide layer formed on the aluminum during the initial stages of anodizing is found to be unstable to perturbations on the scale of a few nanometers and hence it leads to the nanopore formation. A linear stability analysis of the dual interfacial dynamics followed by a leading mode projection produces a single evolution equation for the pores. Numerical simulations of the nonlinear model reveal the hexagonal packing and self-organization dynamics of the pores. In microfluidic devices, electrokinetic flow produces spiral vortices and corner aggregation of particles and proteins at an inner corner of a channel turn that is unexplained by the short-ranged DLVO forces. A field-leakage effect due to the imperfectly insulating wall reveals a nonlinear singular and ejecting slip velocity condition at an acute-angled sharp corner. The complete flow streamlines, vortices and the corner entrainment are revealed by conformal mapping, harmonic analysis and numerical simulation using the lattice Boltzmann method (LBM). The method of hodograph transform developed for the earlier projects to solve the Laplace equation is also applied to find optimum shapes of dispersion-free turns for electro-osmotic microfluidic channels.
Duration of classicality in highly degenerate interacting Bosonic systems
Sikivie, Pierre; Todarello, Elisa M.
2017-04-28
We study sets of oscillators that have high quantum occupancy and that interact by exchanging quanta. It is shown by analytical arguments and numerical simulation that such systems obey classical equations of motion only on time scales of order their relaxation time τ and not longer than that. The results are relevant to the cosmology of axions and axion-like particles.
An O(√n L) primal-dual affine scaling algorithm for linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Siming
1994-12-31
We present a new primal-dual affine scaling algorithm for linear programming. The search direction of the algorithm is a combination of the classical affine scaling direction of Dikin and a recent new affine scaling direction of Jansen, Roos and Terlaky. The algorithm has an iteration complexity of O(√n L), compared to the O(nL) complexity of the algorithm of Jansen, Roos and Terlaky.
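For orientation (a generic textbook form, not the specific combined direction of this paper): for the LP min c^T x subject to Ax = b, x ≥ 0, with dual slacks s, the primal-dual affine-scaling direction solves the Newton system

\[ A\,\Delta x = 0, \qquad A^{\mathsf{T}}\Delta y + \Delta s = 0, \qquad S\,\Delta x + X\,\Delta s = -X S e, \]

where X = diag(x), S = diag(s) and e is the all-ones vector. The algorithm's combined direction mixes a step of this type with Dikin's classical primal affine-scaling direction.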
Modifying shallow-water equations as a model for wave-vortex turbulence
NASA Astrophysics Data System (ADS)
Mohanan, A. V.; Augier, P.; Lindborg, E.
2017-12-01
The one-layer shallow-water equations are a simple two-dimensional model of the complex dynamics of the oceans and the atmosphere. We carry out forced-dissipative numerical simulations, either by forcing medium-scale wave modes, or by injecting available potential energy (APE). With pure wave forcing in non-rotating cases, a statistically stationary regime is obtained for a range of forcing Froude numbers F_f = ε/(k_f c), where ε is the energy dissipation rate, k_f the forcing wavenumber and c the wave speed. Interestingly, the spectra scale as k^-2 and third and higher order structure functions scale as r. Such statistics are a manifestation of shock turbulence, or Burgulence, which dominates the flow. Rotating cases exhibit some inverse energy cascade, along with a stronger forward energy cascade dominated by wave-wave interactions. We also propose two modifications to the classical shallow-water equations to construct a toy model. The properties of the model are explored by forcing in APE at a small and a medium wavenumber. The toy model simulations are then compared with results from the shallow-water equations and a full General Circulation Model (GCM) simulation. The most distinctive feature of this model is that, unlike the shallow-water equations, it avoids shocks and conserves quadratic energy. In Fig. 1, for the shallow-water equations, shocks appear as thin dark lines in the divergence (∇·u) field and as discontinuities in the potential temperature (θ) field, whereas only waves appear in the corresponding fields from the toy model simulation. Forward energy cascade results in a wave field with a k^-5/3 spectrum, along with equipartition of KE and APE at small scales. The vortical field develops into a k^-3 spectrum. With a medium forcing wavenumber, at large scales, energy converted from APE to KE undergoes inverse cascade as a result of nonlinear fluxes composed of vortical modes alone. Gradually, coherent vortices emerge with a strong preference for anticyclonic motion. The model can serve as a closer representation of real geophysical turbulence than the classical shallow-water equations. Fig. 1. Divergence and potential temperature fields of shallow-water (top row) and toy model (bottom row) simulations.
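For concreteness, the classical rotating one-layer shallow-water equations referenced here take the standard form (our notation; the paper's toy model modifies these equations):

\[ \partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} + f\,\hat{\mathbf{z}}\times\mathbf{u} = -g\,\nabla h, \qquad \partial_t h + \nabla\cdot(h\,\mathbf{u}) = 0, \]

with Coriolis parameter f, layer depth h and gravity wave speed c = √(gH) for mean depth H.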
Classical-to-Quantum Transition with Broadband Four-Wave Mixing
NASA Astrophysics Data System (ADS)
Vered, Rafi Z.; Shaked, Yaakov; Ben-Or, Yelena; Rosenbluh, Michael; Pe'er, Avi
2015-02-01
A key question of quantum optics is how nonclassical biphoton correlations at low power evolve into classical coherence at high power. Direct observation of the crossover from quantum to classical behavior is desirable, but difficult due to the lack of adequate experimental techniques that cover the ultrawide dynamic range in photon flux from the single photon regime to the classical level. We investigate biphoton correlations within the spectrum of light generated by broadband four-wave mixing over a large dynamic range of ˜80 dB in photon flux across the classical-to-quantum transition using a two-photon interference effect that distinguishes between classical and quantum behavior. We explore the quantum-classical nature of the light by observing the interference contrast dependence on internal loss and demonstrate quantum collapse and revival of the interference when the four-wave mixing gain in the fiber becomes imaginary.
Listening to classical music ameliorates unilateral neglect after stroke.
Tsai, Pei-Luen; Chen, Mei-Ching; Huang, Yu-Ting; Lin, Keh-Chung; Chen, Kuan-Lin; Hsu, Yung-Wen
2013-01-01
OBJECTIVE. We determined whether listening to excerpts of classical music ameliorates unilateral neglect (UN) in stroke patients. METHOD. In this within-subject study, we recruited and separately tested 16 UN patients with a right-hemisphere stroke under three conditions within 1 wk. In each condition, participants were asked to complete three subtests of the Behavioral Inattention Test while listening to classical music, white noise, or nothing. All conditions and the presentation of the tests were counterbalanced across participants. Visual analog scales were used to provide self-reported ratings of arousal and mood. RESULTS. Participants generally had the highest scores under the classical music condition and the lowest scores under the silence condition. In addition, most participants rated their arousal as highest after listening to classical music. CONCLUSION. Listening to classical music may improve visual attention in stroke patients with UN. Future research with larger study populations is necessary to validate these findings.
Earthquake cycle deformation in the Tibetan plateau with a weak mid-crustal layer
NASA Astrophysics Data System (ADS)
DeVries, Phoebe M. R.; Meade, Brendan J.
2013-06-01
Observations of interseismic deformation across the Tibetan plateau contain information about both tectonic and earthquake cycle processes. Time-variations in surface velocities between large earthquakes are sensitive to the rheological structure of the subseismogenic crust, and, in particular, the viscosity of the middle and lower crust. Here we develop a semianalytic solution for time-dependent interseismic velocities resulting from viscoelastic stress relaxation in a localized midcrustal layer in response to forcing by a sequence of periodic earthquakes. Earthquake cycle models with a weak midcrustal layer exhibit substantially more near-fault preseismic strain localization than do classic two-layer models at short (<100 yr) Maxwell times. We apply both this three-layer model and the classic two-layer model to geodetic observations before and after the 1997 M_W = 7.6 Manyi and 2001 M_W = 7.8 Kokoxili strike-slip earthquakes in Tibet to estimate the viscosity of the crust below a 20 km thick seismogenic layer. For these events, interseismic stress relaxation in a weak (viscosity ≤ 10^18.5 Pa·s) and thin (height ≤ 20 km) midcrustal layer explains observations of both preseismic near-fault strain localization and rapid (>50 mm/yr) postseismic velocities in the years following the coseismic ruptures. We suggest that earthquake cycle models with a localized midcrustal layer can simultaneously explain both preseismic and postseismic geodetic observations with a single Maxwell viscosity, while the classic two-layer model requires a rheology with multiple relaxation time scales.
On Ruch's Principle of Decreasing Mixing Distance in classical statistical physics
NASA Astrophysics Data System (ADS)
Busch, Paul; Quadt, Ralf
1990-10-01
Ruch's Principle of Decreasing Mixing Distance is reviewed as a statistical physical principle, and its basic support and geometric interpretation, the Ruch-Schranner-Seligman theorem, is generalized to be applicable to a large representative class of classical statistical systems.
Origin of leucite-rich and sanidine-rich flow layers in the Leucite Hills Volcanic Field, Wyoming
NASA Astrophysics Data System (ADS)
Gunter, W. D.; Hoinkes, Georg; Ogden, Palmer; Pajari, G. E.
1990-09-01
Two types of orendite (sanidine-phlogopite lamproite) and wyomingite (leucite-phlogopite lamproite) intraflow layering are present in the ultrapotassic Leucite Hills Volcanic Field, Wyoming. In large-scale layering, wyomingites are confined to the base of the flow, while in centimeter-scale layering, orendite and wyomingite alternate throughout the flow. The mineralogy of the orendites and wyomingites is the same; only the relative amounts of each mineral vary substantially. The chemical compositions of adjacent layers of wyomingite and orendite are almost identical except for water. The centimeter-scale flow layering probably represents fossil streamlines of the lava and therefore defines the path of circulation of the viscous melt. Toward the front of the flow, the layers are commonly folded. Structures indicating that the flows may have possessed a yield strength include limb shears, boudinage, and slumping. Phlogopite phenocrysts are poorly aligned in the orendite layers, while they are often in subparallel alignment in the wyomingite layers, and they are used as a measure of shearing intensity during emplacement of the flow. Vesicle volumes are concentrated in the orendite layers. In the large-scale layering, a discontinuous basal rubble zone of autobreccia is overlain by a thin platy zone followed by a massive zone comprising more than the upper 75% of the flow. Consequently, we feel that the origin of the layering may be related to shearing. Two extremes in the geometry of shearing are proposed: closely spaced, thin, densely sheared layers separated by discrete intervals throughout a lava flow, as in the centimeter-scale layering, and classical plug flow where all the shearing is confined to the base, as in the large-scale layering. A mechanism is proposed which causes thixotropic behavior and localizes shearing: the driving force is the breakdown of molecular water to form T-OH bonds, which establishes a chemical potential gradient for water in the melt. The higher activity of water in the nonsheared regions allows sanidine to crystallize, whereas the lower activity of water in the areas of active shearing causes leucite to crystallize.
Gravitational self-interactions of a degenerate quantum scalar field
NASA Astrophysics Data System (ADS)
Chakrabarty, Sankha S.; Enomoto, Seishi; Han, Yaqi; Sikivie, Pierre; Todarello, Elisa M.
2018-02-01
We develop a formalism to help calculate in quantum field theory the departures from the description of a system by classical field equations. We apply the formalism to a homogeneous condensate with attractive contact interactions and to a homogeneous self-gravitating condensate in critical expansion. In their classical descriptions, such condensates persist forever. We show that in their quantum description, parametric resonance causes quanta to jump in pairs out of the condensate into all modes with wave vector less than some critical value. We calculate, in each case, the time scale over which the homogeneous condensate is depleted and after which a classical description is invalid. We argue that the duration of classicality of inhomogeneous condensates is shorter than that of homogeneous condensates.
FLARE VERSUS SHOCK ACCELERATION OF HIGH-ENERGY PROTONS IN SOLAR ENERGETIC PARTICLE EVENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cliver, E. W.
2016-12-01
Recent studies have presented evidence for a significant to dominant role for a flare-resident acceleration process for high-energy protons in large (“gradual”) solar energetic particle (SEP) events, contrary to the more generally held view that such protons are primarily accelerated at shock waves driven by coronal mass ejections (CMEs). The new support for this flare-centric view is provided by correlations between the sizes of X-ray and/or microwave bursts and associated SEP events. For one such study that considered >100 MeV proton events, we present evidence based on CME speeds and widths, shock associations, and electron-to-proton ratios that indicates that events omitted from that investigation’s analysis should have been included. Inclusion of these outlying events reverses the study’s qualitative result and supports shock acceleration of >100 MeV protons. Examination of the ratios of 0.5 MeV electron intensities to >100 MeV proton intensities for the Grechnev et al. event sample provides additional support for shock acceleration of high-energy protons. Simply scaling up a classic “impulsive” SEP event to produce a large >100 MeV proton event implies the existence of prompt 0.5 MeV electron events that are approximately two orders of magnitude larger than are observed. While classic “impulsive” SEP events attributed to flares have high electron-to-proton ratios (≳5 × 10^5) due to a near absence of >100 MeV protons, large poorly connected (≥W120) gradual SEP events, attributed to widespread shock acceleration, have electron-to-proton ratios of ∼2 × 10^3, similar to those of comparably sized well-connected (W20–W90) SEP events.
Flare vs. Shock Acceleration of High-energy Protons in Solar Energetic Particle Events
NASA Astrophysics Data System (ADS)
Cliver, E. W.
2016-12-01
Recent studies have presented evidence for a significant to dominant role for a flare-resident acceleration process for high-energy protons in large (“gradual”) solar energetic particle (SEP) events, contrary to the more generally held view that such protons are primarily accelerated at shock waves driven by coronal mass ejections (CMEs). The new support for this flare-centric view is provided by correlations between the sizes of X-ray and/or microwave bursts and associated SEP events. For one such study that considered >100 MeV proton events, we present evidence based on CME speeds and widths, shock associations, and electron-to-proton ratios that indicates that events omitted from that investigation’s analysis should have been included. Inclusion of these outlying events reverses the study’s qualitative result and supports shock acceleration of >100 MeV protons. Examination of the ratios of 0.5 MeV electron intensities to >100 MeV proton intensities for the Grechnev et al. event sample provides additional support for shock acceleration of high-energy protons. Simply scaling up a classic “impulsive” SEP event to produce a large >100 MeV proton event implies the existence of prompt 0.5 MeV electron events that are approximately two orders of magnitude larger than are observed. While classic “impulsive” SEP events attributed to flares have high electron-to-proton ratios (≳5 × 10^5) due to a near absence of >100 MeV protons, large poorly connected (≥W120) gradual SEP events, attributed to widespread shock acceleration, have electron-to-proton ratios of ˜2 × 10^3, similar to those of comparably sized well-connected (W20-W90) SEP events.
A robust method of thin plate spline and its application to DEM construction
NASA Astrophysics Data System (ADS)
Chen, Chuanfa; Li, Yanyan
2012-11-01
In order to avoid the ill-conditioning problem of thin plate spline (TPS), the orthogonal least squares (OLS) method was introduced, and a modified OLS (MOLS) was developed. The MOLS of TPS (TPS-M) can not only select significant points, termed knots, from large and dense sampling data sets, but also easily compute the weights of the knots in terms of back-substitution. For interpolating large sets of sampling points, we developed a local TPS-M, where some neighboring sampling points around the point being estimated are selected for computation. Numerical tests indicate that, irrespective of sampling noise level, the average performance of TPS-M is on par with that of smoothing TPS. Under the same simulation accuracy, the computational time of TPS-M decreases with the increase of the number of sampling points. The smooth fitting results on lidar-derived noisy data indicate that TPS-M has an obvious smoothing effect, which is on par with smoothing TPS. The example of constructing a series of large-scale DEMs, located in Shandong province, China, was employed to comparatively analyze the estimation accuracies of the two versions of TPS and the classical interpolation methods including inverse distance weighting (IDW), ordinary kriging (OK) and universal kriging with the second-order drift function (UK). Results show that regardless of sampling interval and spatial resolution, TPS-M is more accurate than the classical interpolation methods, except for the smoothing TPS at the finest sampling interval of 20 m, and the two versions of kriging at the spatial resolution of 15 m. In conclusion, TPS-M, which avoids the ill-conditioning problem, is a robust method for DEM construction.
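As a hedged illustration of thin plate spline interpolation for DEM-like data (this Python sketch uses SciPy's standard smoothing TPS, not the TPS-M variant developed in the paper; the neighbors argument loosely mimics the local version, and all data here are synthetic):

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(2000, 2))     # sampled (x, y) positions, metres
z = np.sin(xy[:, 0] / 150) * np.cos(xy[:, 1] / 200) + rng.normal(0, 0.05, 2000)

# Thin plate spline with smoothing; neighbors=100 restricts each evaluation
# to nearby samples, analogous in spirit to the paper's local TPS-M.
tps = RBFInterpolator(xy, z, kernel='thin_plate_spline',
                      smoothing=1.0, neighbors=100)

grid = np.mgrid[0:1000:200j, 0:1000:200j].reshape(2, -1).T
dem = tps(grid).reshape(200, 200)             # interpolated DEM on a 200x200 grid

The smoothing parameter plays the role of the noise trade-off discussed above: zero reproduces the data exactly, larger values damp lidar-type noise at the cost of detail.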
NASA Astrophysics Data System (ADS)
Kacem, I.; Jacquey, C.; Génot, V.; Lavraud, B.; Vernisse, Y.; Marchaudon, A.; Le Contel, O.; Breuillard, H.; Phan, T. D.; Hasegawa, H.; Oka, M.; Trattner, K. J.; Farrugia, C. J.; Paulson, K.; Eastwood, J. P.; Fuselier, S. A.; Turner, D.; Eriksson, S.; Wilder, F.; Russell, C. T.; Øieroset, M.; Burch, J.; Graham, D. B.; Sauvaud, J.-A.; Avanov, L.; Chandler, M.; Coffey, V.; Dorelli, J.; Gershman, D. J.; Giles, B. L.; Moore, T. E.; Saito, Y.; Chen, L.-J.; Penou, E.
2018-03-01
The occurrence of spatially and temporally variable reconnection at the Earth's magnetopause leads to the complex interaction of magnetic fields from the magnetosphere and magnetosheath. Flux transfer events (FTEs) constitute one such type of interaction. Their main characteristics are (1) an enhanced core magnetic field magnitude and (2) a bipolar magnetic field signature in the component normal to the magnetopause, reminiscent of a large-scale helicoidal flux tube magnetic configuration. However, other geometrical configurations which do not fit this classical picture have also been observed. Using high-resolution measurements from the Magnetospheric Multiscale mission, we investigate an event in the vicinity of the Earth's magnetopause on 7 November 2015. Despite signatures that, at first glance, appear consistent with a classic FTE, based on detailed geometrical and dynamical analyses as well as on topological signatures revealed by suprathermal electron properties, we demonstrate that this event is not consistent with a single, homogeneous helicoidal structure. Our analysis rather suggests that it consists of the interaction of two separate sets of magnetic field lines with different connectivities. This complex three-dimensional interaction constructively conspires to produce signatures partially consistent with that of an FTE. We also show that, at the interface between the two sets of field lines, where the observed magnetic pileup occurs, a thin and strong current sheet forms with a large ion jet, which may be consistent with magnetic flux dissipation through magnetic reconnection in the interaction region.
Young, Paul E.; Kum Jew, Stephen; Buckland, Michael E.; Pamphlett, Roger
2017-01-01
Amyotrophic lateral sclerosis (ALS) is a devastating late-onset neurodegenerative disorder in which only a small proportion of patients carry an identifiable causative genetic lesion. Despite high heritability estimates, a genetic etiology for most sporadic ALS remains elusive. Here we report the epigenetic profiling of five monozygotic twin pairs discordant for ALS, four with classic ALS and one with the progressive muscular atrophy ALS variant, in whom previous whole genome sequencing failed to uncover a genetic basis for their disease discordance. By studying cytosine methylation patterns in peripheral blood DNA we identified thousands of large between-twin differences at individual CpGs. While the specific sites of differences were mostly idiosyncratic to a twin pair, a proportion, involving GABA signalling, was common to all ALS individuals. For both idiosyncratic and common sites the differences occurred within genes and pathways related to neurobiological functions or dysfunctions, some of particular relevance to ALS such as glutamate metabolism and the Golgi apparatus. All four classic ALS patients were epigenetically older than their unaffected co-twins, suggesting accelerated aging in multiple tissues in this disease. In conclusion, widespread changes in methylation patterns were found in ALS-affected co-twins, consistent with an epigenetic contribution to disease. These DNA methylation findings could be used to develop blood-based ALS biomarkers, gain insights into disease pathogenesis, and provide a reference for future large-scale ALS epigenetic studies. PMID:28797086
Charretier, Cédric; Saulnier, Aure; Benair, Loïc; Armanet, Corinne; Bassard, Isabelle; Daulon, Sandra; Bernigaud, Bertrand; Rodrigues de Sousa, Emanuel; Gonthier, Clémence; Zorn, Edouard; Vetter, Emmanuelle; Saintpierre, Claire; Riou, Patrice; Gaillac, David
2018-02-01
The classical cell-culture methods, such as cell culture infectious dose 50% (CCID50) assays, are time-consuming, end-point assays currently used during the development of a viral vaccine production process to measure viral infectious titers. However, they are not suitable for handling the large number of tests required for high-throughput and large-scale screening analyses. Impedance-based bio-sensing techniques used in real-time cell analysis (RTCA) to assess cell layer biological status in vitro provide real-time data. In this proof-of-concept study, we assessed the correlation between the results from CCID50 and RTCA assays and compared time and costs using monovalent and tetravalent chimeric yellow fever dengue (CYD) vaccine strains. For the RTCA assay, Vero cells were infected with the CYD sample and real-time impedance was recorded, using the dimensionless cell index (CI). The CI peaked just after infection and decreased as the viral cytopathic effect occurred in a dose-dependent manner. The time to the median CI (CIT_med) was correlated with viral titers determined by CCID50 over a range of about 4-5 log10 CCID50/ml. This in-house RTCA virus-titration assay was shown to be a robust method for determining real-time viral infectious titers, and could be an alternative to the classical CCID50 assay during the development of a viral vaccine production process.
Psychometric Properties of the Fatigue Severity Scale in Polio Survivors
ERIC Educational Resources Information Center
Burger, Helena; Franchignoni, Franco; Puzic, Natasa; Giordano, Andrea
2010-01-01
The objective of this study was to evaluate by means of classical test theory and Rasch analysis the scaling characteristics and psychometric properties of the Fatigue Severity Scale (FSS) in polio survivors. A questionnaire, consisting of five general questions (sex, age, age at time of acute polio, sequelae of polio, and new symptoms), the FSS,…
EMBAYMENT CHARACTERISTIC TIME AND BIOLOGY VIA TIDAL PRISM MODEL
Transport time scales in water bodies are classically based on their physical and chemical aspects rather than on their ecological and biological character. The direct connection between a physical time scale and ecological effects has to be investigated in order to quantitativel...
Item Response Modeling of Forced-Choice Questionnaires
ERIC Educational Resources Information Center
Brown, Anna; Maydeu-Olivares, Alberto
2011-01-01
Multidimensional forced-choice formats can significantly reduce the impact of numerous response biases typically associated with rating scales. However, if scored with classical methodology, these questionnaires produce ipsative data, which lead to distorted scale relationships and make comparisons between individuals problematic. This research…
Cosine problem in EPRL/FK spinfoam model
NASA Astrophysics Data System (ADS)
Vojinović, Marko
2014-01-01
We calculate the classical limit effective action of the EPRL/FK spinfoam model of quantum gravity coupled to matter fields. By employing the standard QFT background field method adapted to the spinfoam setting, we find that the model has many different classical effective actions. Most notably, these include the ordinary Einstein-Hilbert action coupled to matter, but also an action which describes antigravity. All those multiple classical limits appear as a consequence of the fact that the EPRL/FK vertex amplitude has cosine-like large spin asymptotics. We discuss some possible ways to eliminate the unwanted classical limits.
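Schematically, and as we understand the large-spin asymptotics literature the paper builds on, the cosine behavior referred to in the title takes the form

\[ A_v \;\sim\; N_{+}\, e^{\,i S_{\rm Regge}} \;+\; N_{-}\, e^{-i S_{\rm Regge}} \;\propto\; \cos S_{\rm Regge}, \]

so a stationary-phase evaluation of the path integral picks up both signs of the Regge action, which is the origin of the multiple classical limits (including the antigravity sector) discussed in the abstract.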
Dark matter from a classically scale-invariant SU(3)_X
NASA Astrophysics Data System (ADS)
Karam, Alexandros; Tamvakis, Kyriakos
2016-09-01
In this work we study a classically scale-invariant extension of the Standard Model in which the dark matter and electroweak scales are generated through the Coleman-Weinberg mechanism. The extra SU(3)_X gauge factor gets completely broken by the vacuum expectation values of two scalar triplets. Out of the eight resulting massive vector bosons the three lightest are stable due to an intrinsic Z_2 × Z_2' discrete symmetry and can constitute dark matter candidates. We analyze the phenomenological viability of the predicted multi-Higgs sector imposing theoretical and experimental constraints. We perform a comprehensive analysis of the dark matter predictions of the model, solving numerically the set of coupled Boltzmann equations involving all relevant dark matter processes, and explore the direct detection prospects of the dark matter candidates.
Numerical studies from quantum to macroscopic scales of carbon nanoparticules in hydrogen plasma
NASA Astrophysics Data System (ADS)
Lombardi, Guillaume; Ngandjong, Alain; Mezei, Zsolt; Mougenot, Jonathan; Michau, Armelle; Hassouni, Khaled; Seydou, Mahamadou; Maurel, François
2016-09-01
Dusty plasmas are relevant to broad scientific domains, from Universe science to nanomaterial synthesis processes. They are often generated by growth from molecular precursors. This growth leads to the formation of larger clusters, which induce the nucleation of solid germs. The particles formed are described by aerosol dynamics taking into account coagulation, molecular deposition and transport processes. These processes are controlled by the elementary particles, so there is a strong coupling between particle dynamics and the plasma discharge equilibrium. This study is focused on the development of a multiscale physical and numerical model of hydrogen plasmas and carbon particles around three essential coupled axes describing the various physical phenomena: (i) macro/mesoscopic fluid modeling describing, in a self-consistent way, the characteristics of the plasma, molecular clusters and aerosol behavior; (ii) classical molecular dynamics, offering a molecular-scale description of the chains of chemical reactions and of aggregation phenomena; (iii) quantum chemistry, to establish the activation barriers of the different processes driving nanoparticle formation.
Effect of interfaces on the nearby Brownian motion
Huang, Kai; Szlufarska, Izabela
2015-01-01
Near-boundary Brownian motion is a classic hydrodynamic problem of great importance in a variety of fields, from biophysics to micro-/nanofluidics. However, owing to challenges in experimental measurements of near-boundary dynamics, the effect of interfaces on Brownian motion has remained elusive. Here we report a computational study of this effect using μs-long large-scale molecular dynamics simulations and our newly developed Green–Kubo relation for friction at the liquid–solid interface. Our computer experiment unambiguously reveals that the t^(-3/2) long-time decay of the velocity autocorrelation function of a Brownian particle in bulk liquid is replaced by a t^(-5/2) decay near a boundary. We discover a general breakdown of traditional no-slip boundary condition at short time scales and we show that this breakdown has a profound impact on the near-boundary Brownian motion. Our results demonstrate the potential of Brownian-particle-based micro-/nanosonar to probe the local wettability of liquid–solid interfaces. PMID:26438034
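For context, a widely used Green-Kubo-type expression for the liquid-solid friction coefficient (the classic Bocquet-Barrat form; the relation newly developed in this paper differs in detail) is

\[ \lambda \;=\; \frac{1}{A\,k_B T}\int_{0}^{\infty}\big\langle F_x(t)\,F_x(0)\big\rangle\,\mathrm{d}t, \]

where F_x is the total lateral force exerted by the liquid on a wall of area A, and the average is taken at equilibrium.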
Athay, M. Michele
2012-01-01
This paper presents the psychometric evaluation of the Satisfaction with Life Scale (SWLS; Diener, Emmons, Larsen & Griffin, 1985) used with a large sample (N = 610) of caregivers for youth receiving mental health services. Methods from classical test theory, factor analysis and item response theory are utilized. Additionally, this paper investigates whether caregiver strain mediates the effect of youth symptom severity on caregiver life satisfaction (N = 356). Bootstrapped confidence intervals are used to determine the significance of the mediated effects. Results indicate that the SWLS is a psychometrically sound instrument to be used with caregivers of clinically-referred youth. Mediation analyses found that the effect of youth symptom severity on caregiver life satisfaction is mediated by caregiver strain but that the mediation effect differs based on the type of youth symptoms. Caregiver strain is a partial mediator when externalizing symptoms are measured and a full mediator when internalizing symptoms are measured. Implications for future research and clinical practice are discussed. PMID:22407554
Effect of interfaces on the nearby Brownian motion.
Huang, Kai; Szlufarska, Izabela
2015-10-06
Near-boundary Brownian motion is a classic hydrodynamic problem of great importance in a variety of fields, from biophysics to micro-/nanofluidics. However, owing to challenges in experimental measurements of near-boundary dynamics, the effect of interfaces on Brownian motion has remained elusive. Here we report a computational study of this effect using μs-long large-scale molecular dynamics simulations and our newly developed Green-Kubo relation for friction at the liquid-solid interface. Our computer experiment unambiguously reveals that the t^(-3/2) long-time decay of the velocity autocorrelation function of a Brownian particle in bulk liquid is replaced by a t^(-5/2) decay near a boundary. We discover a general breakdown of traditional no-slip boundary condition at short time scales and we show that this breakdown has a profound impact on the near-boundary Brownian motion. Our results demonstrate the potential of Brownian-particle-based micro-/nanosonar to probe the local wettability of liquid-solid interfaces.
OWL: A scalable Monte Carlo simulation suite for finite-temperature study of materials
NASA Astrophysics Data System (ADS)
Li, Ying Wai; Yuk, Simuck F.; Cooper, Valentino R.; Eisenbach, Markus; Odbadrakh, Khorgolkhuu
The OWL suite is a simulation package for performing large-scale Monte Carlo simulations. Its object-oriented, modular design enables it to interface with various external packages for energy evaluations. It is therefore applicable to studying the finite-temperature properties of a wide range of systems: from simple classical spin models to materials where the energy is evaluated by ab initio methods. This scheme not only allows for the study of thermodynamic properties based on first-principles statistical mechanics, it also provides a means for massive, multi-level parallelism to fully exploit the capacity of modern heterogeneous computer architectures. We will demonstrate how improved strong and weak scaling is achieved by employing novel, parallel and scalable Monte Carlo algorithms, as well as the applications of OWL to a few selected frontier materials research problems. This research was supported by the Office of Science of the Department of Energy under contract DE-AC05-00OR22725.
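A minimal sketch of the kind of classical-spin Monte Carlo such a suite handles, with the energy evaluation kept behind a function so it could in principle be swapped for an external (e.g. ab initio) backend. This is our own Python illustration, not OWL's API:

import numpy as np

def local_field(spins, i, j):
    """Sum of nearest neighbours on a periodic square lattice."""
    n = spins.shape[0]
    return (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
            + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D Ising model, H = -sum_<ij> s_i s_j."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(n, size=2)
        dE = 2.0 * spins[i, j] * local_field(spins, i, j)  # cost of flipping s_ij
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(32, 32))
for sweep in range(1000):
    metropolis_sweep(spins, beta=0.5, rng=rng)

Keeping the energy difference behind a single function is the design point the abstract emphasizes: the sampler is agnostic to whether dE comes from a spin Hamiltonian or a first-principles code.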
Quantum matter bounce with a dark energy expanding phase
NASA Astrophysics Data System (ADS)
Colin, Samuel; Pinto-Neto, Nelson
2017-09-01
Analyzing quantum cosmological scenarios containing one scalar field with exponential potential, we have obtained a universe model which realizes a classical dust contraction from very large scales, the initial repeller of the model, and moves to a stiff matter contraction near the singularity, which is avoided due to a quantum bounce. The universe is then launched in a stiff matter expanding phase, which then moves to a dark energy era, finally returning to the dust expanding phase, the final attractor of the model. Hence, one has obtained a nonsingular cosmological model where a single scalar field can describe both the matter contracting phase of a bouncing model, necessary to give an almost scale invariant spectrum of scalar cosmological perturbations, and a transient expanding dark energy phase. As the universe is necessarily dust dominated in the far past, usual adiabatic vacuum initial conditions can be easily imposed in this era, avoiding the usual issues appearing when dark energy is considered in bouncing models.
Qiao, Zhen-An; Chai, Song-Hai; Nelson, Kimberly; Bi, Zhonghe; Chen, Jihua; Mahurin, Shannon M; Zhu, Xiang; Dai, Sheng
2014-04-16
High-performance polymeric membranes for gas separation are attractive for molecular-level separations in industrial-scale chemical, energy and environmental processes. Molecular sieving materials are widely regarded as the next-generation membranes to simultaneously achieve high permeability and selectivity. However, most polymeric molecular sieve membranes are based on a few solution-processable polymers such as polymers of intrinsic microporosity. Here we report an in situ cross-linking strategy for the preparation of polymeric molecular sieve membranes with hierarchical and tailorable porosity. These membranes demonstrate exceptional performance as molecular sieves with high gas permeabilities and selectivities for smaller gas molecules, such as carbon dioxide and oxygen, over larger molecules such as nitrogen. Hence, these membranes have potential for large-scale gas separations of commercial and environmental relevance. Moreover, this strategy could provide a possible alternative to 'classical' methods for the preparation of porous membranes and, in some cases, the only viable synthetic route towards certain membranes.
Anomalous time delays and quantum weak measurements in optical micro-resonators
Asano, M.; Bliokh, K. Y.; Bliokh, Y. P.; Kofman, A. G.; Ikuta, R.; Yamamoto, T.; Kivshar, Y. S.; Yang, L.; Imoto, N.; Özdemir, Ş.K.; Nori, F.
2016-01-01
Quantum weak measurements, wavepacket shifts and optical vortices are universal wave phenomena, which originate from fine interference of multiple plane waves. These effects have attracted considerable attention in both classical and quantum wave systems. Here we report on a phenomenon that brings together all the above topics in a simple one-dimensional scalar wave system. We consider inelastic scattering of Gaussian wave packets with parameters close to a zero of the complex scattering coefficient. We demonstrate that the scattered wave packets experience anomalously large time and frequency shifts in such near-zero scattering. These shifts reveal close analogies with the Goos–Hänchen beam shifts and quantum weak measurements of the momentum in a vortex wavefunction. We verify our general theory by an optical experiment using the near-zero transmission (near-critical coupling) of Gaussian pulses propagating through a nano-fibre with a side-coupled toroidal micro-resonator. Measurements demonstrate the amplification of the time delays from the typical inverse-resonator-linewidth scale to the pulse-duration scale. PMID:27841269
Coupling molecular dynamics with lattice Boltzmann method based on the immersed boundary method
NASA Astrophysics Data System (ADS)
Tan, Jifu; Sinno, Talid; Diamond, Scott
2017-11-01
The study of viscous fluid flow coupled with rigid or deformable solids has many applications in biological and engineering problems, e.g., blood cell transport, drug delivery, and particulate flow. We developed a partitioned approach to solve this coupled multiphysics problem. The fluid motion was solved by Palabos (Parallel Lattice Boltzmann Solver), while the solid displacement and deformation was simulated by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator). The coupling was achieved through the immersed boundary method (IBM). The code modeled both rigid and deformable solids exposed to flow. The code was validated with the classic problem of rigid ellipsoid particle orbit in shear flow, a blood cell stretching test and effective blood viscosity, and demonstrated essentially linear scaling over 16 cores. An example of the fluid-solid coupling was given for the transport of flexible filaments (drug carriers) in a flowing blood cell suspension, highlighting the advantages and capabilities of the developed code. NIH 1U01HL131053-01A1.
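A sketch of the immersed boundary coupling step described above, reduced to one dimension with the standard 4-point Peskin kernel (our simplification in Python; the actual Palabos-LAMMPS coupling is three-dimensional and parallel):

import numpy as np

def peskin_delta(r):
    """Peskin's 4-point regularized delta function (support |r| < 2)."""
    r = np.abs(r)
    out = np.zeros_like(r)
    m1 = r < 1
    m2 = (r >= 1) & (r < 2)
    out[m1] = (3 - 2 * r[m1] + np.sqrt(1 + 4 * r[m1] - 4 * r[m1] ** 2)) / 8
    out[m2] = (5 - 2 * r[m2] - np.sqrt(-7 + 12 * r[m2] - 4 * r[m2] ** 2)) / 8
    return out

def interpolate_velocity(u_grid, x_marker, dx):
    """Interpolate fluid velocity from the lattice to a solid marker position."""
    idx = np.arange(u_grid.size)
    w = peskin_delta(idx - x_marker / dx)
    return np.sum(w * u_grid)

def spread_force(f_grid, x_marker, force, dx):
    """Spread a marker force back onto the lattice (adjoint of interpolation)."""
    idx = np.arange(f_grid.size)
    f_grid += force * peskin_delta(idx - x_marker / dx) / dx
    return f_grid

Using the same kernel for interpolation and spreading is what keeps the momentum exchange between the lattice Boltzmann fluid and the molecular dynamics solid consistent.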
Quantum versus classical hyperfine-induced dynamics in a quantum dot
NASA Astrophysics Data System (ADS)
Coish, W. A.; Loss, Daniel; Yuzbashyan, E. A.; Altshuler, B. L.
2007-04-01
In this article we analyze spin dynamics for electrons confined to semiconductor quantum dots due to the contact hyperfine interaction. We compare mean-field (classical) evolution of an electron spin in the presence of a nuclear field with the exact quantum evolution for the special case of uniform hyperfine coupling constants. We find that (in this special case) the zero-magnetic-field dynamics due to the mean-field approximation and quantum evolution are similar. However, in a finite magnetic field, the quantum and classical solutions agree only up to a certain time scale t < τ_c, after which they differ markedly.
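In the notation we adopt, the special case analyzed here is the contact hyperfine Hamiltonian with uniform couplings,

\[ H \;=\; \mathbf{S}\cdot\sum_{k} A_k \mathbf{I}_k \;\xrightarrow{\,A_k \equiv A\,}\; A\,\mathbf{S}\cdot\mathbf{I}_{\rm tot}, \qquad \mathbf{I}_{\rm tot}=\sum_k \mathbf{I}_k, \]

for which the total nuclear spin is conserved and the quantum dynamics is exactly solvable, enabling the comparison with the mean-field evolution of S in the nuclear (Overhauser) field.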
Limit Theorems for Dispersing Billiards with Cusps
NASA Astrophysics Data System (ADS)
Bálint, P.; Chernov, N.; Dolgopyat, D.
2011-12-01
Dispersing billiards with cusps are deterministic dynamical systems with a mild degree of chaos, exhibiting "intermittent" behavior that alternates between regular and chaotic patterns. Their statistical properties are therefore weak and delicate. They are characterized by a slow (power-law) decay of correlations, and as a result the classical central limit theorem fails. We prove that a non-classical central limit theorem holds, with a scaling factor of √(n log n) replacing the standard √n. We also derive the respective Weak Invariance Principle, and we identify the class of observables for which the classical CLT still holds.
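In symbols: for an observable f with zero mean and partial sums taken along the billiard map T, the non-classical central limit theorem stated above reads

\[ \frac{1}{\sqrt{n\log n}} \sum_{k=0}^{n-1} f\circ T^{k} \;\xrightarrow{\;d\;}\; \mathcal{N}\!\left(0,\sigma^{2}\right), \]

with the √(n log n) normalization replacing the usual √n of the classical CLT.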
Crowdsourcing for Cognitive Science – The Utility of Smartphones
Brown, Harriet R.; Zeidman, Peter; Smittenaar, Peter; Adams, Rick A.; McNab, Fiona; Rutledge, Robb B.; Dolan, Raymond J.
2014-01-01
By 2015, there will be an estimated two billion smartphone users worldwide. This technology presents exciting opportunities for cognitive science as a medium for rapid, large-scale experimentation and data collection. At present, cost and logistics limit most study populations to small samples, restricting the experimental questions that can be addressed. In this study we investigated whether the mass collection of experimental data using smartphone technology is valid, given the variability of data collection outside of a laboratory setting. We presented four classic experimental paradigms as short games, available as a free app and over the first month 20,800 users submitted data. We found that the large sample size vastly outweighed the noise inherent in collecting data outside a controlled laboratory setting, and show that for all four games canonical results were reproduced. For the first time, we provide experimental validation for the use of smartphones for data collection in cognitive science, which can lead to the collection of richer data sets and a significant cost reduction as well as provide an opportunity for efficient phenotypic screening of large populations. PMID:25025865
Crowdsourcing for cognitive science--the utility of smartphones.
Brown, Harriet R; Zeidman, Peter; Smittenaar, Peter; Adams, Rick A; McNab, Fiona; Rutledge, Robb B; Dolan, Raymond J
2014-01-01
By 2015, there will be an estimated two billion smartphone users worldwide. This technology presents exciting opportunities for cognitive science as a medium for rapid, large-scale experimentation and data collection. At present, cost and logistics limit most study populations to small samples, restricting the experimental questions that can be addressed. In this study we investigated whether the mass collection of experimental data using smartphone technology is valid, given the variability of data collection outside of a laboratory setting. We presented four classic experimental paradigms as short games, available as a free app and over the first month 20,800 users submitted data. We found that the large sample size vastly outweighed the noise inherent in collecting data outside a controlled laboratory setting, and show that for all four games canonical results were reproduced. For the first time, we provide experimental validation for the use of smartphones for data collection in cognitive science, which can lead to the collection of richer data sets and a significant cost reduction as well as provide an opportunity for efficient phenotypic screening of large populations.
Classical and sequential limit analysis revisited
NASA Astrophysics Data System (ADS)
Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi
2018-04-01
Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework - sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.
Lima, Nicola; Caneschi, Andrea; Gatteschi, Dante; Kritikos, Mikael; Westin, L Gunnar
2006-03-20
The susceptibility of the large transition-metal cluster [Mn19O12(MOE)14(MOEH)10].MOEH (MOE = OC2H2O-CH3) has been fitted through classical Monte Carlo simulation, and the exchange coupling constants have been estimated. With these results, it has been possible to perform a full-matrix diagonalization of the cluster core, which was used to provide information on the nature of the low-lying levels.
Rest but busy: Aberrant resting-state functional connectivity of triple network model in insomnia.
Dong, Xiaojuan; Qin, Haixia; Wu, Taoyu; Hu, Hua; Liao, Keren; Cheng, Fei; Gao, Dong; Lei, Xu
2018-02-01
One classical hypothesis among many models to explain the etiology and maintenance of insomnia disorder (ID) is hyperarousal. Aberrant functional connectivity among resting-state large-scale brain networks may be the underlying neurological mechanism of this hypothesis. The aim of the current study was to investigate the functional network connectivity (FNC) among large-scale brain networks in patients with ID during the resting state. In the present study, resting-state fMRI was used to evaluate whether patients with ID showed aberrant FNC among the dorsal attention network (DAN), frontoparietal control network (FPC), anterior default mode network (aDMN), and posterior default mode network (pDMN) compared with healthy good sleepers (HGSs). Pearson's correlation analysis was employed to explore whether the abnormal FNC observed in patients with ID was associated with sleep parameters, cognitive and emotional scores, and behavioral performance assessed by questionnaires and tasks. Patients with ID had worse subjective thought control ability, measured by the Thought Control Ability Questionnaire (TCAQ), and more negative affect than HGSs. Intriguingly, relative to HGSs, patients with ID showed a significant increase in FNC between DAN and FPC, but a significant decrease in FNC between aDMN and pDMN. Exploratory analysis in patients with ID revealed a significantly positive correlation between the DAN-FPC FNC and reaction time (RT) on a psychomotor vigilance task (PVT). The current study demonstrated that even during the resting state, the task-activated and task-deactivated large-scale brain networks in insomniacs may still maintain a hyperarousal state, looking quite similar to the pattern in a task condition with external stimuli. These results support the hyperarousal model of insomnia.
Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.
Hero, Alfred O; Rajaratnam, Bala
2016-01-01
When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exascale data dimensions. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
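To make the sample-starved regime concrete (a toy Python illustration of the phenomenon, not the authors' framework): with n fixed and p growing, even mutually independent variables produce spuriously large sample correlations, so discovery thresholds must scale with p.

import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 2000                      # few samples, many variables
X = rng.standard_normal((n, p))      # all variables truly independent

R = np.corrcoef(X, rowvar=False)     # p x p sample correlation matrix
off_diag = np.abs(R[np.triu_indices(p, k=1)])

# Largest spurious correlation among ~p^2/2 null pairs; grows with p at fixed n.
print(off_diag.max(), (off_diag > 0.8).sum())

Rerunning with larger p while holding n = 20 fixed shows the maximal null correlation creeping toward 1, the purely high dimensional regime the paper targets.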
Direct synthesis of BiCuChO-type oxychalcogenides by mechanical alloying
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pele, Vincent; Barreteau, Celine; CNRS, Orsay F-91405
2013-07-15
We report on the direct synthesis of BiCuChO based materials by mechanical alloying (Ch = Se, Te). We show that, contrary to the synthesis paths used in previous reports dealing with this family of materials, which use costly annealings in closed silica tubes under controlled atmosphere, this new synthesis route enables the synthesis of pure phase materials at room temperature under air, with reasonable milling time. This synthesis procedure is easily scalable for large scale applications. - Highlights: • Phase pure BiCuSeO, doped and undoped, prepared by mechanical alloying. • Synthesis performed under air at room temperature. • Electrical properties similar to those of samples synthesized by a classical path.
A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions
NASA Astrophysics Data System (ADS)
Liang, Yihao; Xing, Xiangjun; Li, Yaohang
2017-06-01
In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures, and adopts the sequential updating scheme of Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We further use this method to simulate primitive model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
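A serial NumPy sketch of the exact-energy Metropolis update described above, for a single proposed ion move in a primitive-model electrolyte (our illustration, in reduced Gaussian-type units; the paper's GPU version parallelizes this O(N) energy difference over SIMD lanes, and the hard-core overlap check of the primitive model is omitted here):

import numpy as np

def delta_energy(pos, q, i, new_ri, box):
    """O(N) Coulomb energy change for moving ion i (minimum-image convention)."""
    d_old = pos - pos[i]
    d_new = pos - new_ri
    for d in (d_old, d_new):
        d -= box * np.round(d / box)          # wrap displacements into the box
    r_old = np.linalg.norm(d_old, axis=1)
    r_new = np.linalg.norm(d_new, axis=1)
    mask = np.arange(len(q)) != i             # exclude self-interaction
    return np.sum(q[i] * q[mask] * (1 / r_new[mask] - 1 / r_old[mask]))

def metropolis_move(pos, q, beta, box, step, rng):
    """Propose and accept/reject one single-ion displacement."""
    i = rng.integers(len(q))
    trial = pos[i] + rng.uniform(-step, step, 3)
    dE = delta_energy(pos, q, i, trial, box)
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        pos[i] = trial % box                  # accept and wrap into the box
    return pos

The vectorized pairwise energy difference is the natural unit of SIMD work: every thread handles one j-term of the sum, which is how a GPU implementation of this scheme achieves its speedup without approximating the energy.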