Weak scale from the maximum entropy principle
NASA Astrophysics Data System (ADS)
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle in general, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ≈ T_BBN^2 / (M_Pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_Pl is the Planck mass.
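As a rough numerical sanity check of the quoted relation v_h ≈ T_BBN^2 / (M_Pl y_e^5), the following sketch plugs in standard rough values (assumed here, not taken from the paper): T_BBN ~ 1 MeV, M_Pl ~ 1.22e19 GeV, and y_e computed from the electron mass and the observed Higgs expectation value.

```python
import math

# Illustrative inputs (standard rough values, assumed here):
T_BBN = 1e-3          # GeV, rough temperature at the onset of BBN
M_PL = 1.22e19        # GeV, Planck mass
M_E = 0.511e-3        # GeV, electron mass
V_OBS = 246.0         # GeV, observed Higgs vacuum expectation value

y_e = math.sqrt(2) * M_E / V_OBS      # electron Yukawa coupling
v_h = T_BBN**2 / (M_PL * y_e**5)      # the abstract's weak-scale estimate

print(f"y_e ~ {y_e:.3e}")
print(f"v_h ~ {v_h:.0f} GeV")         # lands at a few hundred GeV
```

With these inputs the estimate indeed comes out at O(300 GeV), consistent with the abstract's claim.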
Discrete maximum principle for the P1 - P0 weak Galerkin finite element approximations
NASA Astrophysics Data System (ADS)
Wang, Junping; Ye, Xiu; Zhai, Qilong; Zhang, Ran
2018-06-01
This paper presents two discrete maximum principles (DMP) for the numerical solution of second order elliptic equations arising from the weak Galerkin finite element method. The results are established by assuming an h-acute angle condition for the underlying finite element triangulations. The mathematical theory is based on the well-known De Giorgi technique adapted in the finite element context. Some numerical results are reported to validate the theory of DMP.
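The algebraic mechanism behind a discrete maximum principle can be illustrated with a minimal sketch, assuming the classical 1-D P1 setting rather than the paper's P1-P0 weak Galerkin method: under the (trivially satisfied) acuteness condition, the stiffness matrix is an M-matrix, so a discrete solution with nonpositive load attains its maximum on the boundary.

```python
import numpy as np

# Standard P1 stiffness matrix for -u'' = f on [0,1], homogeneous
# Dirichlet conditions, uniform mesh with n interior nodes.
n, h = 9, 1.0 / 10
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

# M-matrix structure: positive diagonal, nonpositive off-diagonal,
# nonnegative inverse -- the algebraic core of a DMP.
assert np.all(np.diag(A) > 0)
assert np.all(A - np.diag(np.diag(A)) <= 0)
assert np.all(np.linalg.inv(A) >= -1e-12)

# DMP consequence: for f <= 0 the discrete solution is <= 0 everywhere,
# so its maximum (zero) is attained on the boundary.
f = -np.ones(n)
u = np.linalg.solve(A, h * f)   # lumped load vector h*f
print(u.max() <= 1e-12)         # True
```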
The charge conserving Poisson-Boltzmann equations: Existence, uniqueness, and maximum principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Chiun-Chang, E-mail: chlee@mail.nhcue.edu.tw
2014-05-15
The present article is concerned with the charge conserving Poisson-Boltzmann (CCPB) equation in high-dimensional bounded smooth domains. The CCPB equation is a Poisson-Boltzmann type of equation with nonlocal coefficients. First, under the Robin boundary condition, we get the existence of weak solutions to this equation. The main approach is variational, based on minimization of a logarithm-type energy functional. To deal with the regularity of weak solutions, we establish a maximum modulus estimate for the standard Poisson-Boltzmann (PB) equation to show that weak solutions of the CCPB equation are essentially bounded. Then the classical solutions follow from the elliptic regularity theorem. Second, a maximum principle for the CCPB equation is established. In particular, we show that in the case of global electroneutrality, the solution achieves both its maximum and minimum values at the boundary. However, in the case of global non-electroneutrality, the solution may attain its maximum value at an interior point. In addition, under certain conditions on the boundary, we show that the global non-electroneutrality implies pointwise non-electroneutrality.
Optimal startup control of a jacketed tubular reactor.
NASA Technical Reports Server (NTRS)
Hahn, D. R.; Fan, L. T.; Hwang, C. L.
1971-01-01
The optimal startup policy of a jacketed tubular reactor, in which a first-order, reversible, exothermic reaction takes place, is presented. A distributed maximum principle is presented for determining weak necessary conditions for optimality of a diffusional distributed parameter system. A numerical technique is developed for practical implementation of the distributed maximum principle. This involves the sequential solution of the state and adjoint equations, in conjunction with a functional gradient technique for iteratively improving the control function.
Twenty-five years of maximum-entropy principle
NASA Astrophysics Data System (ADS)
Kapur, J. N.
1983-04-01
The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.
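The use of the MEP for characterizing distributions can be made concrete with Jaynes' classic dice illustration (a sketch assumed here, not drawn from Kapur's survey): among all distributions on {1,...,6} with a prescribed mean, the entropy maximizer has the exponential form p_i ∝ exp(-λi), with λ fixed by the mean constraint.

```python
import math

def mean_of(lam):
    """Mean of the maxent distribution p_i ~ exp(-lam*i) on {1..6}."""
    w = [math.exp(-lam * i) for i in range(1, 7)]
    return sum(i * wi for i, wi in zip(range(1, 7), w)) / sum(w)

# Solve the mean constraint by bisection; mean_of is decreasing in lam.
target = 4.5
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_of(mid) > target:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

w = [math.exp(-lam * i) for i in range(1, 7)]
z = sum(w)
p = [wi / z for wi in w]
print([round(pi, 4) for pi in p])   # probabilities tilted toward high faces
```

For a target mean above 3.5, λ comes out negative and the distribution is tilted toward the high faces, exactly as the MEP prescribes.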
Optimal birth control of age-dependent competitive species III. Overtaking problem
NASA Astrophysics Data System (ADS)
He, Ze-Rong; Cheng, Ji-Shu; Zhang, Chun-Guo
2008-01-01
A study is made of an overtaking optimal problem for a population system consisting of two competing species, which is controlled by fertilities. The existence of optimal policy is proved and a maximum principle is carefully derived under less restrictive conditions. Weak and strong turnpike properties of optimal trajectories are established.
From Maximum Entropy Models to Non-Stationarity and Irreversibility
NASA Astrophysics Data System (ADS)
Cofre, Rodrigo; Cessac, Bruno; Maldonado, Cesar
The maximum entropy distribution can be obtained from a variational principle. This is important as a matter of principle and for the purpose of finding approximate solutions, and one can exploit this fact to obtain relevant information about the underlying stochastic process. We report here on recent progress in three aspects of this approach. (1) Biological systems are expected to show some degree of irreversibility in time. Based on the transfer-matrix technique for finding the spatio-temporal maximum entropy distribution, we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. (2) The maximum entropy solution is characterized by a functional called the Gibbs free energy (the solution of the variational principle). The Legendre transform of this functional is the rate function, which controls the speed of convergence of empirical averages to their ergodic mean. We show how the correct description of this functional is decisive for a more rigorous characterization of first- and higher-order phase transitions. (3) We assess the impact of a weak time-dependent external stimulus on the collective statistics of spiking neuronal networks, and show how to evaluate this impact on any higher-order spatio-temporal correlation. RC supported by ERC Advanced Grant "Bridges"; BC: KEOPS ANR-CONICYT, Renvision; CM: CONICYT-FONDECYT No. 3140572.
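One standard way to quantify the time irreversibility of a stationary Markov chain (a sketch assumed here, not the authors' transfer-matrix construction) is the entropy production rate EP = Σ_ij π_i P_ij log(π_i P_ij / (π_j P_ji)), which vanishes exactly when detailed balance holds.

```python
import numpy as np

def entropy_production(P):
    """Entropy production rate of a stationary Markov chain with
    transition matrix P; zero iff the chain satisfies detailed balance."""
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()                      # stationary distribution
    flux = pi[:, None] * P                  # flux[i, j] = pi_i * P_ij
    mask = (flux > 0) & (flux.T > 0)
    return np.sum(flux[mask] * np.log(flux[mask] / flux.T[mask]))

# Reversible chain (detailed balance): EP = 0.
P_rev = np.array([[0.5, 0.5], [0.5, 0.5]])
# 3-state cycle with a preferred direction: EP > 0.
P_cyc = np.array([[0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8],
                  [0.8, 0.1, 0.1]])

print(entropy_production(P_rev))   # ~0
print(entropy_production(P_cyc))   # strictly positive
```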
Gravitational Thermodynamics for Interstellar Gas and Weakly Degenerate Quantum Gas
NASA Astrophysics Data System (ADS)
Zhu, Ding Yu; Shen, Jian Qi
2016-03-01
The temperature distribution of an ideal gas in gravitational fields has been identified as a longstanding problem in thermodynamics and statistical physics. According to the principle of entropy increase (i.e., the principle of maximum entropy), we apply a variational principle to the thermodynamical entropy functional of an ideal gas and establish a relationship between temperature gradient and gravitational field strength. As an illustrative example, the temperature and density distributions of an ideal gas in two simple but typical gravitational fields (i.e., a uniform gravitational field and an inverse-square gravitational field) are considered on the basis of entropic and hydrostatic equilibrium conditions. The effect of temperature inhomogeneity in gravitational fields is also addressed for a weakly degenerate quantum gas (e.g., Fermi and Bose gas). The present gravitational thermodynamics of a gas would have potential applications in quantum fluids, e.g., Bose-Einstein condensates in Earth’s gravitational field and the temperature fluctuation spectrum in cosmic microwave background radiation.
Complete spacelike hypersurfaces in orthogonally splitted spacetimes
NASA Astrophysics Data System (ADS)
Colombo, Giulio; Rigoli, Marco
2017-10-01
We provide some "half-space theorems" for complete non-compact spacelike hypersurfaces in orthogonally splitted spacetimes. In particular, we generalize some recent work of Rubio and Salamanca on compact maximal spacelike hypersurfaces. Besides compactness, we also relax some of their curvature assumptions and even consider the case of nonconstant mean curvature bounded from above. The analytic tools used in the various arguments are based on forms of the weak maximum principle.
NASA Astrophysics Data System (ADS)
Lineweaver, C. H.
2005-12-01
The principle of Maximum Entropy Production (MEP) is being usefully applied to a wide range of non-equilibrium processes including flows in planetary atmospheres and the bioenergetics of photosynthesis. Our goal of applying the principle of maximum entropy production to an even wider range of Far From Equilibrium Dissipative Systems (FFEDS) depends on the reproducibility of the evolution of the system from macro-state A to macro-state B. In an attempt to apply the principle of MEP to astronomical and cosmological structures, we investigate the problematic relationship between gravity and entropy. In the context of open and non-equilibrium systems, we use a generalization of the Gibbs free energy to include the sources of free energy extracted by non-living FFEDS such as hurricanes and convection cells. Redox potential gradients and thermal and pressure gradients provide the free energy for a broad range of FFEDS, both living and non-living. However, these gradients have to be within certain ranges. If the gradients are too weak, FFEDS do not appear. If the gradients are too strong FFEDS disappear. Living and non-living FFEDS often have different source gradients (redox potential gradients vs thermal and pressure gradients) and when they share the same gradient, they exploit different ranges of the gradient. In a preliminary attempt to distinguish living from non-living FFEDS, we investigate the parameter space of: type of gradient and steepness of gradient.
Spontaneous evolution of microstructure in materials
NASA Astrophysics Data System (ADS)
Kirkaldy, J. S.
1993-08-01
Microstructures which evolve spontaneously from random solutions in near isolation often exhibit patterns of remarkable symmetry which can only in part be explained by boundary and crystallographic effects. With reference to the detailed experimental record, we seek the source of causality in this natural tendency to constructive autonomy, usually designated as a principle of pattern or wavenumber selection in a free boundary problem. The phase field approach which incorporates detailed boundary structure and global rate equations has enjoyed some currency in removing internal degrees of freedom, and this will be examined critically in reference to the migration of phase-antiphase boundaries produced in an order-disorder transformation. Analogous problems for singular interfaces including solute trapping are explored. The microscopic solvability hypothesis has received much attention, particularly in relation to dendrite morphology and the Saffman-Taylor fingering problem in hydrodynamics. A weak form of this will be illustrated in relation to local equilibrium binary solidification cells which renders the free boundary problem unique. However, the main thrust of this article concerns dynamic configurations at anisotropic singular interfaces and the related patterns of eutectoid(ic)s, nonequilibrium cells, cellular dendrites, and Liesegang figures where there is a recognizable macroscopic phase space of pattern fluctuations and/or solitons. These possess a weakly defective stability point and thereby submit to a statistical principle of maximum path probability and to a variety of corollary dissipation principles in the determination of a unique average patterning behavior. A theoretical development of the principle based on Hamilton's principle for frictional systems is presented in an Appendix. Elements of the principles of scaling, universality, and deterministic chaos are illustrated.
Equivalence principles and electromagnetism
NASA Technical Reports Server (NTRS)
Ni, W.-T.
1977-01-01
The implications of the weak equivalence principles are investigated in detail for electromagnetic systems in a general framework. In particular, it is shown that the universality of free-fall trajectories (Galileo weak equivalence principle) does not imply the validity of the Einstein equivalence principle. However, the Galileo principle plus the universality of free-fall rotation states does imply the Einstein principle.
NASA Astrophysics Data System (ADS)
Pinar, Ali; Coskun, Zeynep; Mert, Aydin; Kalafat, Dogan
2015-04-01
The general consensus based on historical earthquake data points out that the last major moment release on the Prince's Islands fault was in 1766, which in turn signals an increased seismic risk for the Istanbul metropolitan area, considering that most of the 20 mm/yr GPS-derived slip rate for the region is accommodated by that fault segment. The orientation of the Prince's Islands fault segment overlaps with the NW-SE direction of the maximum principal stress axis derived from the focal mechanism solutions of the large and moderate-sized earthquakes that occurred in the Marmara region. As such, the NW-SE trending fault segment transfers the motion between the two E-W trending branches of the North Anatolian fault zone: one extending from the Gulf of Izmit towards the Çınarcık basin, and the other extending between offshore Bakırköy and Silivri. The basic relation between the orientation of the maximum and minimum principal stress axes, the shear and normal stresses, and the orientation of a fault provides a clue to the strength of the fault, i.e., its frictional coefficient. Here, the angle between the fault normal and the maximum compressive stress axis is a key parameter, and a fault-normal or fault-parallel maximum compressive stress might be a necessary and sufficient condition for a creeping event. That relation also implies that when the trend of the sigma-1 axis is close to the strike of the fault, the shear stress acting on the fault plane approaches zero. On the other hand, the ratio between the shear and normal stresses acting on a fault plane is proportional to the frictional coefficient of the fault. Accordingly, the geometry between the Prince's Islands fault segment and the maximum principal stress axis matches a weak fault model. In the presentation we analyze seismological data acquired in the Marmara region and interpret the results in conjunction with the above-mentioned weak fault model.
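The stress relation invoked above can be sketched in a minimal 2-D Andersonian form (assumed here for illustration, not the authors' computation): with theta the angle between the fault normal and the sigma-1 axis, sigma_n = s1*cos^2(theta) + s3*sin^2(theta) and tau = (s1 - s3)*sin(theta)*cos(theta). As theta approaches 90 degrees (sigma-1 trending along the strike), the resolved shear stress, and hence the apparent friction tau/sigma_n, goes to zero. The stress magnitudes below are hypothetical placeholders.

```python
import math

def resolved(s1, s3, theta_deg):
    """Normal and shear stress on a plane whose normal makes angle
    theta_deg with the maximum principal stress sigma-1 (2-D)."""
    t = math.radians(theta_deg)
    sigma_n = s1 * math.cos(t) ** 2 + s3 * math.sin(t) ** 2
    tau = (s1 - s3) * math.sin(t) * math.cos(t)
    return sigma_n, tau

s1, s3 = 100.0, 40.0   # MPa, hypothetical principal stresses

# As sigma-1 rotates toward the strike (theta -> 90 deg), the shear
# stress vanishes and the fault looks frictionally weak.
for theta in (45.0, 70.0, 85.0, 89.0):
    sigma_n, tau = resolved(s1, s3, theta)
    print(theta, round(tau / sigma_n, 3))   # ratio decreases toward 0
```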
A weak Hamiltonian finite element method for optimal control problems
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Bless, Robert R.
1989-01-01
A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.
A weak Hamiltonian finite element method for optimal control problems
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Bless, Robert R.
1990-01-01
A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.
Weak Hamiltonian finite element method for optimal control problems
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Bless, Robert R.
1991-01-01
A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.
Entropy criteria applied to pattern selection in systems with free boundaries
NASA Astrophysics Data System (ADS)
Kirkaldy, J. S.
1985-10-01
The steady state differential or integral equations which describe patterned dissipative structures, typically to be identified with first order phase transformation morphologies like isothermal pearlites, are invariably degenerate in one or more order parameters (the lamellar spacing in the pearlite case). It is often observed that a different pattern is attained at the steady state for each initial condition (the hysteresis or metastable case). Alternatively, boundary perturbations and internal fluctuations during transition up to, or at the steady state, destroy the path coherence. In this case a statistical ensemble of imperfect patterns often emerges which represents a fluctuating but recognizably patterned and unique average steady state. It is cases like cellular, lamellar pearlite, involving an assembly of individual cell patterns which are regularly perturbed by local fluctuation and growth processes, which concern us here. Such weakly fluctuating nonlinear steady state ensembles can be arranged in a thought experiment so as to evolve as subsystems linking two very large mass-energy reservoirs in isolation. Operating on this discontinuous thermodynamic ideal, Onsager’s principle of maximum path probability for isolated systems, which we interpret as a minimal time correlation function connecting subsystem and baths, identifies the stable steady state at a parametric minimum or maximum (or both) in the dissipation rate. This nonlinear principle is independent of the Principle of Minimum Dissipation which is applicable in the linear regime of irreversible thermodynamics. The statistical argument is equivalent to the weak requirement that the isolated system entropy as a function of time be differentiable to the second order despite the macroscopic pattern fluctuations which occur in the subsystem. This differentiability condition is taken for granted in classical stability theory based on the 2nd Law. 
The optimal principle as applied to isothermal and forced-velocity pearlites (in this case maximal) possesses a Le Chatelier (perturbation) principle which can be formulated exactly via Langer’s conjecture that “each lamella must grow in a direction which is perpendicular to the solidification front”. This is the first example of such an equivalence to be experimentally and theoretically recognized in nonlinear irreversible thermodynamics. A further application to binary solidification cells is reviewed. In this case the optimum in the dissipation is a minimum and the agreement between theory and experiment is excellent. Other applications in thermal-hydraulics, biology, and solid state physics are briefly described.
On the Pontryagin maximum principle for systems with delays. Economic applications
NASA Astrophysics Data System (ADS)
Kim, A. V.; Kormyshev, V. M.; Kwon, O. B.; Mukhametshin, E. R.
2017-11-01
The Pontryagin maximum principle [6] is the keystone of finite-dimensional optimal control theory [1, 2, 5]. From its discovery onward, it has therefore been important to extend the maximum principle to various classes of dynamical systems. In the paper we consider some aspects of the application of i-smooth analysis [3, 4] in the theory of the Pontryagin maximum principle [6] for systems with delays; the results obtained can be applied in elaborating optimal program controls in economic models with delays.
Rigidity of complete generic shrinking Ricci solitons
NASA Astrophysics Data System (ADS)
Chu, Yawei; Zhou, Jundong; Wang, Xue
2018-01-01
Let (M^n, g, X) be a complete generic shrinking Ricci soliton of dimension n ≥ 3. In this paper, by employing curvature inequalities, the formula for the X-Laplacian of the norm square of the trace-free curvature tensor, the weak maximum principle, and the estimate of the scalar curvature of (M^n, g), we prove some rigidity results for (M^n, g, X). In particular, it is shown that (M^n, g, X) is isometric to R^n or a finite quotient of S^n under a pointwise pinching condition. Moreover, we establish several optimal inequalities and classify the shrinking solitons for which equality holds.
NASA Astrophysics Data System (ADS)
Nakanishi, Akitaka; Katayama-Yoshida, Hiroshi
2012-12-01
We have performed first-principles calculations of the superconducting transition temperature Tc of hole-doped delafossite CuAlO2, AgAlO2 and AuAlO2. The calculated Tc values are at most about 50 K (CuAlO2), 40 K (AgAlO2) and 3 K (AuAlO2) at the optimum hole-doping concentration. The low Tc of AuAlO2 is attributed to the weak electron-phonon interaction caused by the low covalency and heavy atomic mass.
When Good Evidence Goes Bad: The Weak Evidence Effect in Judgment and Decision-Making
ERIC Educational Resources Information Center
Fernbach, Philip M.; Darlow, Adam; Sloman, Steven A.
2011-01-01
An indispensable principle of rational thought is that positive evidence should increase belief. In this paper, we demonstrate that people routinely violate this principle when predicting an outcome from a weak cause. In Experiment 1 participants given weak positive evidence judged outcomes of public policy initiatives to be less likely than…
NASA Astrophysics Data System (ADS)
Wang, Yi Jiao; Feng, Qing Yi; Chai, Li He
As one of the most important financial markets and one of the main parts of the economic system, the stock market has become a research focus in economics. The stock market is a typical complex open system far from equilibrium. Many available models make huge contributions to research on the market and are strong in describing it; however, they ignore strong nonlinear interactions among active agents and are weak in revealing the underlying dynamic mechanisms of the structural evolution of the market. From an econophysical perspective, this paper analyzes the complex interactions among agents and defines a generalized entropy for stock markets. A nonlinear evolutionary dynamic equation for stock markets is then derived from the maximum generalized entropy principle. Simulations are accordingly conducted for a typical case with the given data, by which the structural evolution of the stock market system is demonstrated. Some discussions and implications are finally provided.
The Next Breakthrough for Organic Photovoltaics?
Jackson, Nicholas E; Savoie, Brett M; Marks, Tobin J; Chen, Lin X; Ratner, Mark A
2015-01-02
While the intense focus on energy level tuning in organic photovoltaic materials has afforded large gains in device performance, we argue here that strategies based on microstructural/morphological control are at least as promising in any rational design strategy. In this work, a meta-analysis of ∼150 bulk heterojunction devices fabricated with different materials combinations is performed and reveals strong correlations between power conversion efficiency and morphology-dominated properties (short-circuit current, fill factor) and surprisingly weak correlations between efficiency and energy level positioning (open-circuit voltage, enthalpic offset at the interface, optical gap). While energy level positioning should in principle provide the theoretical maximum efficiency, the optimization landscape that must be navigated to reach this maximum is unforgiving. Thus, research aimed at developing understanding-based strategies for more efficient optimization of an active layer microstructure and morphology are likely to be at least as fruitful.
The Universe and Life: Deductions from the Weak Anthropic Principle
NASA Astrophysics Data System (ADS)
Hoyle, Fred; Wickramasinghe, Chandra
The existence of life in the Universe is interpreted in terms of the "Weak Anthropic Principle". It is shown that cosmological models are constrained to a class that involves an open timescale and access to infinite quantities of carbonaceous material.
Gebauer, Petr; Malá, Zdena; Bocek, Petr
2010-03-01
This contribution introduces a new separation principle in CE which offers focusing of weak nonamphoteric ionogenic species and their inherent transport to the detector. The prerequisite condition for application of this principle is the existence of an inverse electromigration dispersion profile, i.e. a profile where pH is decreasing toward the anode or cathode for focusing of anionic or cationic weak analytes, respectively. The theory presented defines the principal conditions under which an analyte is focused on a profile of this type. Since electromigration dispersion profiles are migrating ones, the new principle offers inherent transport of focused analytes into the detection cell. The focusing principle described utilizes a mechanism different from both CZE (where separation is based on the difference in mobilities) and IEF (where separation is based on difference in pI), and hence, offers another separation dimension in CE. The new principle and its theory presented here are supplemented by convincing experiments as their proof.
Higher-order gravity and the classical equivalence principle
NASA Astrophysics Data System (ADS)
Accioly, Antonio; Herdy, Wallace
2017-11-01
As is well known, the deflection of any particle by a gravitational field within the context of Einstein's general relativity (a geometrical theory) is, of course, nondispersive. Nevertheless, as we shall show in this paper, that result changes entirely if the bending is analyzed, at the tree level, in the framework of higher-order gravity. Indeed, to first order, the deflection angle corresponding to the scattering of different quantum particles by the gravitational field mentioned above is not only spin dependent but also dispersive (energy dependent). Consequently, it violates the classical equivalence principle (universality of free fall, or equality of inertial and gravitational masses), which is a nonlocal principle. However, contrary to popular belief, it is in agreement with the weak equivalence principle, which is nothing but a statement about purely local effects. It is worthy of note that the weak equivalence principle encompasses the classical equivalence principle locally. We also show that the claim that there exists an incompatibility between quantum mechanics and the weak equivalence principle is incorrect.
Maximum Principle for General Controlled Systems Driven by Fractional Brownian Motions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han Yuecai; Hu Yaozhong; Song Jian, E-mail: jsong2@math.rutgers.edu
2013-04-15
We obtain a maximum principle for stochastic control problem of general controlled stochastic differential systems driven by fractional Brownian motions (of Hurst parameter H>1/2). This maximum principle specifies a system of equations that the optimal control must satisfy (necessary condition for the optimal control). This system of equations consists of a backward stochastic differential equation driven by both fractional Brownian motions and the corresponding underlying standard Brownian motions. In addition to this backward equation, the maximum principle also involves the Malliavin derivatives. Our approach is to use conditioning and Malliavin calculus. To arrive at our maximum principle we need to develop some new results of stochastic analysis of the controlled systems driven by fractional Brownian motions via fractional calculus. Our approach of conditioning and Malliavin calculus is also applied to classical system driven by standard Brownian motions while the controller has only partial information. As a straightforward consequence, the classical maximum principle is also deduced in this more natural and simpler way.
Swine influenza and vaccines: an alternative approach for decision making about pandemic prevention.
Basili, Marcello; Ferrini, Silvia; Montomoli, Emanuele
2013-08-01
During the global pandemic of A/H1N1/California/07/2009 (A/H1N1/Cal) influenza, many governments signed contracts with vaccine producers for a universal influenza immunization program and bought hundreds of millions of vaccine doses. We argue that, as Health Ministers assumed the occurrence of the worst possible scenario (generalized pandemic influenza) and followed the strong version of the Precautionary Principle, they undervalued the possibility of a mild or weak pandemic wave. An alternative decision rule, based on the non-extensive entropy principle, is introduced, and a different characterization of the Precautionary Principle is applied. This approach values extreme negative results (catastrophic events) in a different way and predicts more plausible and mild events. It introduces less pessimistic forecasts in the case of uncertain influenza pandemic outbreaks. A simplified application is presented using seasonal data of morbidity and severity of influenza-like illness among Italian children for the period 2003-10. Established literature results predict an average attack rate of not less than 15% for the next pandemic influenza [Meltzer M, Cox N, Fukuda K. The economic impact of pandemic influenza in the United States: implications for setting priorities for interventions. Emerg Infect Dis 1999;5:659-71; Meltzer M, Cox N, Fukuda K. Modeling the Economic Impact of Pandemic Influenza in the United States: Implications for Setting Priorities for Intervention. Background paper. Atlanta, GA: CDC, 1999. Available at: http://www.cdc.gov/ncidod/eid/vol5no5/melt_back.htm (7 January 2011, date last accessed)]. The strong version of the Precautionary Principle would suggest using this prediction for vaccination campaigns. On the contrary, the non-extensive maximum entropy principle predicts a lower attack rate, which induces a 20% saving in public funding for vaccine doses.
The need for an effective influenza pandemic prevention program, coupled with an efficient use of public funding, calls for a rethinking of the Precautionary Principle. The non-extensive maximum entropy principle, which incorporates the vague and incomplete information available to decision makers, produces a more coherent forecast of a possible influenza pandemic and more conservative spending of public funds.
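The non-extensive entropy invoked here is usually taken in the Tsallis form, S_q = (1 - Σ_i p_i^q)/(q - 1), which recovers the Shannon entropy as q → 1; the parameter q tunes how much weight rare, catastrophic outcomes receive. A minimal Python sketch (the scenario probabilities and q values are illustrative, not taken from the paper):

```python
import math

def tsallis_entropy(p, q):
    """Non-extensive (Tsallis) entropy S_q = (1 - sum p_i^q) / (q - 1)."""
    if abs(q - 1.0) < 1e-12:
        # the q -> 1 limit recovers the Shannon entropy
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# hypothetical attack-rate scenarios: mild, moderate, severe
p = [0.6, 0.3, 0.1]
for q in (0.5, 1.0, 2.0):
    print(q, tsallis_entropy(p, q))
```

For q > 1 the functional discounts low-probability tails, which is one way such a rule can yield less pessimistic forecasts than a worst-case reading of the Precautionary Principle.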
The maximum entropy production principle: two basic questions.
Martyushev, Leonid M
2010-05-12
The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available today.
NASA Technical Reports Server (NTRS)
Von Roos, O.
1978-01-01
The limitations imposed by Heisenberg's uncertainty principle on the sequential detection of extremely weak signals (gravitational radiation, for instance) have been explored recently. A variety of schemes have been proposed to circumvent these limitations. Although all of the earlier attempts have proven fruitless, a recent proposal seems quite promising. The scheme, consisting of two harmonic oscillators interacting with each other in a peculiar way, allows for an exact analytical solution, which is derived here. If it can be assumed that the expectation value of one of the canonical variables of the total system suffices to monitor the weak signal, it can be shown that, in the absence of thermal noise, arbitrarily weak signals can in principle be measured without interference from the uncertainty principle.
NASA Technical Reports Server (NTRS)
Parker, P. D. M.
1981-01-01
Violation of the equivalence principle by the weak interaction is tested. Any variation of the weak-interaction coupling constant with gravitational potential, i.e., a spatial variation of the fundamental constants, is investigated. The level of sensitivity required for such a measurement is estimated on the basis of the size of the change in gravitational potential which is accessible. The alpha-particle spectrum is analyzed, and the counting rate is improved by a factor of approximately 100.
Can quantum probes satisfy the weak equivalence principle?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seveso, Luigi, E-mail: luigi.seveso@unimi.it; Paris, Matteo G.A.; INFN, Sezione di Milano, I-20133 Milano
We address the question whether quantum probes in a gravitational field can be considered as test particles obeying the weak equivalence principle (WEP). A formulation of the WEP is proposed which applies also in the quantum regime, while maintaining the physical content of its classical counterpart. Such a formulation requires that the introduction of a gravitational field not modify the Fisher information about the mass of a freely-falling probe, extractable through measurements of its position. We discover that, while in a uniform field quantum probes satisfy our formulation of the WEP exactly, gravity gradients can encode nontrivial information about the particle's mass in its wavefunction, leading to violations of the WEP. - Highlights: • Can quantum probes under gravity be approximated as test-bodies? • A formulation of the weak equivalence principle for quantum probes is proposed. • Quantum probes are found to violate it as a matter of principle.
A stochastic maximum principle for backward control systems with random default time
NASA Astrophysics Data System (ADS)
Shen, Yang; Kuen Siu, Tak
2013-05-01
This paper establishes a necessary and sufficient stochastic maximum principle for backward systems, where the state processes are governed by jump-diffusion backward stochastic differential equations with random default time. An application of the sufficient stochastic maximum principle to an optimal investment and capital injection problem in the presence of default risk is discussed.
Constraining the generalized uncertainty principle with the atomic weak-equivalence-principle test
NASA Astrophysics Data System (ADS)
Gao, Dongfeng; Wang, Jin; Zhan, Mingsheng
2017-04-01
Various models of quantum gravity imply Planck-scale modifications of Heisenberg's uncertainty principle into a so-called generalized uncertainty principle (GUP). The GUP effects on high-energy physics, cosmology, and astrophysics have been extensively studied. Here, we focus on the weak-equivalence-principle (WEP) violation induced by the GUP. Results from the WEP test with the 85Rb-87Rb dual-species atom interferometer are used to set upper bounds on parameters in two GUP proposals. A 10^45-level bound on the Kempf-Mangano-Mann proposal and a 10^27-level bound on Maggiore's proposal, which are consistent with bounds from other experiments, are obtained. All these bounds have considerable room for improvement in the future.
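For context, the Kempf-Mangano-Mann proposal modifies the canonical commutator with a quadratic momentum term (a standard form of the quadratic GUP; the paper's exact conventions are assumptions here):

```latex
[\hat{x},\hat{p}] = i\hbar\,\bigl(1+\beta \hat{p}^{2}\bigr)
\quad\Rightarrow\quad
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\Bigl(1+\beta\,(\Delta p)^{2}\Bigr),
\qquad \beta = \frac{\beta_{0}}{(M_{\mathrm{pl}}\,c)^{2}},
```

so the experimental bounds quoted above translate into upper limits on the dimensionless parameter β₀.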
NASA Astrophysics Data System (ADS)
Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo
2018-05-01
Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.
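The replica-averaged restraint discussed here is commonly implemented as a harmonic penalty on the replica average of an observable; making the target or the coupling time-dependent gives the maximum-caliber variant. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def replica_restraint_force(obs_per_replica, obs_exp, k):
    """
    Harmonic restraint on the replica-averaged observable:
        U = (k/2) * (<O>_replicas - O_exp)^2
    Returns the restraint energy and dU/dO_r for each replica r.
    Sketch of a replica-averaged maximum-entropy restraint;
    names and the observable are illustrative.
    """
    n = len(obs_per_replica)
    avg = sum(obs_per_replica) / n
    diff = avg - obs_exp
    energy = 0.5 * k * diff ** 2
    # each replica feels 1/n of the gradient through the average
    grads = [k * diff / n for _ in obs_per_replica]
    return energy, grads
```

In the static (maximum-entropy) case `obs_exp` is a fixed equilibrium datum; in the time-dependent (maximum-caliber) case it would be the time-resolved target at the current simulation step.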
2009-01-01
The past decade witnessed great progress in research on health inequities. The most widely cited definition of health inequity is, arguably, the one proposed by Whitehead and Dahlgren: "Health inequalities that are avoidable, unnecessary, and unfair are unjust." We argue that this definition is useful but in need of further clarification because it is not linked to broader theories of justice. We propose an alternative, pluralist notion of fair distribution of health that is compatible with several theories of distributive justice. Our proposed view consists of the weak principle of health equality and the principle of fair trade-offs. The weak principle of health equality offers an alternative definition of health equity to those proposed in the past. It maintains the all-encompassing nature of the popular Whitehead/Dahlgren definition of health equity, and at the same time offers a richer philosophical foundation. This principle states that every person or group should have equal health except when: (a) health equality is only possible by making someone less healthy, or (b) there are technological limitations on further health improvement. In short, health inequalities that are amenable to positive human intervention are unfair. The principle of fair trade-offs states that weak equality of health is morally objectionable if and only if: (c) further reduction of weak inequality leads to unacceptable sacrifices of average or overall health of the population, or (d) further reduction in weak health inequality would result in unacceptable sacrifices of other important goods, such as education, employment, and social security. PMID:19922612
NASA Technical Reports Server (NTRS)
Schmid, L. A.
1977-01-01
The first and second variations are calculated for the irreducible form of Hamilton's Principle that involves the minimum number of dependent variables necessary to describe the kinematics and thermodynamics of inviscid, compressible, baroclinic flow in a specified gravitational field. The form of the second variation shows that, in the neighborhood of a stationary point that corresponds to physically stable flow, the action integral is a complex saddle surface in parameter space. There exists a form of Hamilton's Principle for which a direct solution of a flow problem is possible. This second form is related to the first by a Friedrichs transformation of the thermodynamic variables. This introduces an extra dependent variable, but the first and second variations are shown to have direct physical significance: they are equal to the free energy of fluctuations about the equilibrium flow that satisfies the equations of motion. If this equilibrium flow is physically stable, and if a very weak second-order integral constraint on the correlation between the fluctuations of otherwise independent variables is satisfied, then the second variation of the action integral for this free-energy form of Hamilton's Principle is positive-definite, so the action integral is a minimum and can serve as the basis for a direct trial-and-error solution. The second-order integral constraint states that the unavailable energy must be maximum at equilibrium, i.e., the fluctuations must be so correlated as to produce a second-order decrease in the total unavailable energy.
Chen, Zheng; Huang, Hongying; Yan, Jue
2015-12-21
We develop third-order maximum-principle-satisfying direct discontinuous Galerkin methods [8], [9], [19] and [21] for convection-diffusion equations on unstructured triangular meshes. We carefully calculate the normal-derivative numerical flux across element edges and prove that, with a proper choice of the parameter pair (β0, β1) in the numerical flux formula, the quadratic polynomial solution satisfies a strict maximum principle. The polynomial solution is bounded within the given range and third-order accuracy is maintained. There is no geometric restriction on the meshes, and obtuse triangles are allowed in the partition. A sequence of numerical examples is carried out to demonstrate the accuracy and capability of the maximum-principle-satisfying limiter.
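A widely used way to enforce a strict maximum principle is a scaling limiter that shrinks the polynomial toward its cell average until all node values lie in the admissible range [m, M]; the sketch below is this generic (Zhang-Shu type) limiter, not the paper's specific DDG flux construction:

```python
def mp_limiter(node_vals, cell_avg, m, M):
    """
    Generic maximum-principle-satisfying scaling limiter: scale the
    polynomial's deviations from its cell average by a factor theta
    in [0, 1] so that all node values lie in [m, M].  Assumes the
    cell average itself already lies in [m, M].
    """
    vmax, vmin = max(node_vals), min(node_vals)
    theta = 1.0
    if vmax > cell_avg:
        theta = min(theta, (M - cell_avg) / (vmax - cell_avg))
    if vmin < cell_avg:
        theta = min(theta, (m - cell_avg) / (vmin - cell_avg))
    # the cell average (and hence conservation) is untouched
    return [cell_avg + theta * (v - cell_avg) for v in node_vals]
```

The scaling preserves the cell average, so conservation is retained while overshoots and undershoots at the nodes are clipped back into [m, M].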
Analysis of weak interactions and Eotvos experiments
NASA Technical Reports Server (NTRS)
Hsu, J. P.
1978-01-01
The intermediate-vector-boson model is preferred over the current-current model as a basis for calculating effects due to weak self-energy. Attention is given to a possible violation of the equivalence principle by weak-interaction effects, and it is noted that effects due to weak self-energy are at least an order of magnitude greater than those due to the weak binding energy for typical nuclei. It is assumed that the weak and electromagnetic energies are independent.
CP Violation, Neutral Currents, and Weak Equivalence
DOE R&D Accomplishments Database
Fitch, V. L.
1972-03-23
Within the past few months two excellent summaries of the state of our knowledge of the weak interactions have been presented. Correspondingly, we will not attempt a comprehensive review but instead concentrate this discussion on the status of CP violation, the question of the neutral currents, and the weak equivalence principle.
Copernicus, Kant, and the anthropic cosmological principles
NASA Astrophysics Data System (ADS)
Roush, Sherrilyn
In the last three decades several cosmological principles and styles of reasoning termed 'anthropic' have been introduced into physics research and popular accounts of the universe and human beings' place in it. I discuss the circumstances of 'fine tuning' that have motivated this development, and what is common among the principles. I examine the two primary principles, and find a sharp difference between these 'Weak' and 'Strong' varieties: contrary to the view of the progenitors that all anthropic principles represent a departure from Copernicanism in cosmology, the Weak Anthropic Principle is an instance of Copernicanism. It has close affinities with the step of Copernicus that Immanuel Kant took himself to be imitating in the 'critical' turn that gave rise to the Critique of Pure Reason. I conclude that the fact that a way of going about natural science mentions human beings is not sufficient reason to think that it is a subjective approach; in fact, it may need to mention human beings in order to be objective.
Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.
Shalymov, Dmitry S; Fradkov, Alexander L
2016-01-01
We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originating in control theory. The maximum Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and a probability density function (PDF) are used. We consider mass-conservation and energy-conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
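Dynamics of this kind can be sketched as projected gradient ascent on the Rényi entropy under the normalization constraint; the explicit Euler step below is an illustration (q ≠ 1, mass conservation only), not the paper's exact equations:

```python
import math

def renyi_entropy(p, q):
    """Renyi entropy H_q = ln(sum p_i^q) / (1 - q), for q != 1."""
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)

def speed_gradient_step(p, q, dt):
    """
    One explicit Euler step of projected-gradient ('speed-gradient')
    dynamics that increases the Renyi entropy while conserving total
    probability.  Illustrative sketch only.
    """
    s = sum(pi ** q for pi in p)
    grad = [q * pi ** (q - 1) / ((1.0 - q) * s) for pi in p]
    # subtract the mean gradient: the step stays on the simplex sum(p) = 1
    mean_g = sum(grad) / len(grad)
    return [pi + dt * (g - mean_g) for pi, g in zip(p, grad)]
```

Iterating the step drives the distribution toward the uniform limit (the unconstrained Rényi maximizer); adding an energy constraint would require a second projection.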
Takagiwa, Yoshiki; Kimura, Kaoru
2014-08-01
In this article, we review the characteristic features of icosahedral cluster solids, metallic-covalent bonding conversion (MCBC), and the thermoelectric properties of Al-based icosahedral quasicrystals and approximants. MCBC is clearly distinguishable from and closely related to the well-known metal-insulator transition. This unique bonding conversion has been experimentally verified in 1/1-AlReSi and 1/0-Al12Re approximants by the maximum entropy method and Rietveld refinement for powder x-ray diffraction data, and is caused by a central atom inside the icosahedral clusters. This helps to understand pseudogap formation in the vicinity of the Fermi energy and establish a guiding principle for tuning the thermoelectric properties. From the electron density distribution analysis, rigid heavy clusters weakly bonded with glue atoms are observed in the 1/1-AlReSi approximant crystal, whose physical properties are close to icosahedral Al-Pd-TM (TM: Re, Mn) quasicrystals. They are considered to be an intermediate state among the three typical solids: metals, covalently bonded networks (semiconductors), and molecular solids. Using the above picture and detailed effective mass analysis, we propose a guiding principle of weakly bonded rigid heavy clusters to increase the thermoelectric figure of merit (ZT) by optimizing the bond strengths of intra- and inter-icosahedral clusters. Through element substitutions that mainly weaken the inter-cluster bonds, a dramatic increase of ZT from less than 0.01 to 0.26 was achieved. To further increase ZT, materials should form a real gap to obtain a higher Seebeck coefficient.
Corrected Implicit Monte Carlo
Cleveland, Mathew Allen; Wollaber, Allan Benton
2018-01-02
In this work we develop a set of nonlinear correction equations to enforce a consistent time-implicit emission temperature for the original semi-implicit IMC equations. We present two possible forms of correction equations: one results in a set of non-linear, zero-dimensional, non-negative, explicit correction equations, and the other results in a non-linear, non-negative Boltzmann transport correction equation. The zero-dimensional correction equations adhere to the maximum principle for the material temperature, regardless of frequency dependence, but do not prevent maximum-principle violation in the photon intensity, eventually leading to material overheating. The Boltzmann transport correction guarantees adherence to the maximum principle for frequency-independent simulations, at the cost of evaluating a reduced-source non-linear Boltzmann equation. Finally, we present numerical evidence suggesting that the Boltzmann transport correction, in its current form, significantly improves time-step limitations but does not guarantee adherence to the maximum principle for frequency-dependent simulations.
Maximum entropy production: Can it be used to constrain conceptual hydrological models?
M.C. Westhoff; E. Zehe
2013-01-01
In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one such principle and is the subject of this study. It states that a steady-state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...
Using Design Principles to Teach Technical Communication.
ERIC Educational Resources Information Center
Markel, Mike
1995-01-01
Compares the writing of two students--a competent writer and a weak one--in a technical communication course before and after discussion of design principles. Finds that a basic understanding of design principles helped them improve document macrostructure but had little effect on document microstructure. Suggests that integrating document design…
Weak values, 'negative probability', and the uncertainty principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokolovski, D.
2007-10-15
A quantum transition can be seen as a result of interference between various pathways (e.g., Feynman paths), which can be labeled by a variable f. An attempt to determine the value of f without destroying the coherence between the pathways produces a weak value of f. We show f to be an average obtained with an amplitude distribution which can, in general, take negative values and which, in accordance with the uncertainty principle, need not contain information about the actual range of f which contributes to the transition. It is also demonstrated that the moments of such alternating distributions have a number of unusual properties which may lead to a misinterpretation of the weak-measurement results. We provide a detailed analysis of weak measurements with and without post-selection. Examples include the double-slit diffraction experiment, weak von Neumann and von Neumann-like measurements, traversal time for an elastic collision, phase time, and local angular momentum.
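The weak value itself is the standard expression A_w = ⟨φ|A|ψ⟩/⟨φ|ψ⟩, which for nearly orthogonal pre- and post-selected states can lie far outside the operator's eigenvalue range. A small self-contained sketch for a qubit (states and angles chosen purely for illustration):

```python
import math

def weak_value(pre, post, op):
    """
    Weak value A_w = <post|A|pre> / <post|pre> for a two-level system.
    States are 2-vectors; op is a 2x2 matrix (real or complex entries).
    """
    def mat_vec(m, v):
        return [m[0][0] * v[0] + m[0][1] * v[1],
                m[1][0] * v[0] + m[1][1] * v[1]]
    def inner(a, b):  # <a|b>, conjugating the bra
        return a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
    return inner(post, mat_vec(op, pre)) / inner(post, pre)

sz = [[1, 0], [0, -1]]                       # Pauli z, eigenvalues +/- 1
pre = [math.cos(0.8), math.sin(0.8)]         # pre-selected state
post = [math.cos(0.8), -math.sin(0.8)]       # nearly orthogonal post-selection
wv = weak_value(pre, post, sz)               # far outside [-1, 1]
```

Here ⟨post|pre⟩ = cos(1.6) is small, so the weak value 1/cos(1.6) ≈ -34 greatly exceeds the eigenvalue range, illustrating the "anomalous" averages the abstract attributes to alternating-sign amplitude distributions.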
Metamagnetism and weak ferromagnetism in nickel (II) oxalate crystals
NASA Astrophysics Data System (ADS)
Romero-Tela, E.; Mendoza, M. E.; Escudero, R.
2012-05-01
Microcrystals of orthorhombic nickel (II) oxalate dihydrate were synthesized through a precipitation reaction of aqueous solutions of nickel chloride and oxalic acid. The magnetic susceptibility exhibits a sharp peak at 3.3 K and a broad rounded maximum near 43 K. We associate the lower maximum with a metamagnetic transition that occurs when the magnetic field is about 3.5 T or greater. The maximum at 43 K is typical of 1D antiferromagnets, whereas weak-ferromagnetic behavior is observed in the range of 3.3-43 K.
A restricted proof that the weak equivalence principle implies the Einstein equivalence principle
NASA Technical Reports Server (NTRS)
Lightman, A. P.; Lee, D. L.
1973-01-01
Schiff has conjectured that the weak equivalence principle (WEP) implies the Einstein equivalence principle (EEP). A proof is presented of Schiff's conjecture, restricted to: (1) test bodies made of electromagnetically interacting point particles that fall from rest in a static, spherically symmetric gravitational field; (2) theories of gravity within a certain broad class - a class that includes almost all complete relativistic theories that have been found in the literature, but with each theory truncated to contain only point particles plus electromagnetic and gravitational fields. The proof shows that every nonmetric theory in the class (every theory that violates EEP) must violate WEP. A formula is derived for the magnitude of the violation. It is shown that WEP is a powerful theoretical and experimental tool for constraining the manner in which gravity couples to electromagnetism in gravitation theories.
Why is the correlation between gene importance and gene evolutionary rate so weak?
Wang, Zhi; Zhang, Jianzhi
2009-01-01
One of the few commonly believed principles of molecular evolution is that functionally more important genes (or DNA sequences) evolve more slowly than less important ones. This principle is widely used by molecular biologists in daily practice. However, recent genomic analysis of a diverse array of organisms found only weak, negative correlations between the evolutionary rate of a gene and its functional importance, typically measured under a single benign lab condition. A frequently suggested cause of the above finding is that gene importance determined in the lab differs from that in an organism's natural environment. Here, we test this hypothesis in yeast using gene importance values experimentally determined in 418 lab conditions or computationally predicted for 10,000 nutritional conditions. In no single condition or combination of conditions did we find a much stronger negative correlation, which is explainable by our subsequent finding that always-essential (enzyme) genes do not evolve significantly more slowly than sometimes-essential or always-nonessential ones. Furthermore, we verified that functional density, approximated by the fraction of amino acid sites within protein domains, is uncorrelated with gene importance. Thus, neither the lab-nature mismatch nor a potentially biased among-gene distribution of functional density explains the observed weakness of the correlation between gene importance and evolutionary rate. We conclude that the weakness is factual, rather than artifactual. In addition to being weakened by population genetic reasons, the correlation is likely to have been further weakened by the presence of multiple nontrivial rate determinants that are independent from gene importance. 
These findings notwithstanding, we show that the principle of slower evolution of more important genes does have some predictive power when genes with vastly different evolutionary rates are compared, explaining why the principle can be practically useful despite the weakness of the correlation.
Looking forwards and backwards: The real-time processing of Strong and Weak Crossover
Lidz, Jeffrey; Phillips, Colin
2017-01-01
We investigated the processing of pronouns in Strong and Weak Crossover constructions as a means of probing the extent to which the incremental parser can use syntactic information to guide antecedent retrieval. In Experiment 1 we show that the parser accesses a displaced wh-phrase as an antecedent for a pronoun when no grammatical constraints prohibit binding, but the parser ignores the same wh-phrase when it stands in a Strong Crossover relation to the pronoun. These results are consistent with two possibilities. First, the parser could apply Principle C at antecedent retrieval to exclude the wh-phrase on the basis of the c-command relation between its gap and the pronoun. Alternatively, retrieval might ignore any phrases that do not occupy an Argument position. Experiment 2 distinguished between these two possibilities by testing antecedent retrieval under Weak Crossover. In Weak Crossover binding of the pronoun is ruled out by the argument condition, but not Principle C. The results of Experiment 2 indicate that antecedent retrieval accesses matching wh-phrases in Weak Crossover configurations. On the basis of these findings we conclude that the parser can make rapid use of Principle C and c-command information to constrain retrieval. We discuss how our results support a view of antecedent retrieval that integrates inferences made over unseen syntactic structure into constraints on backward-looking processes like memory retrieval. PMID:28936483
Extended Huygens-Fresnel principle and optical waves propagation in turbulence: discussion.
Charnotskii, Mikhail
2015-07-01
The extended Huygens-Fresnel (EHF) principle is currently the most common technique used in theoretical studies of optical propagation in turbulence. A recent review paper [J. Opt. Soc. Am. A 31, 2038 (2014)] cites several dozen papers that are based exclusively on the EHF principle. We revisit the foundations of the EHF and show that it is burdened by very restrictive assumptions that make it valid only under weak scintillation conditions. We compare the EHF to the less restrictive Markov approximation and show that both theories deliver identical results for the second moment of the field, rendering the EHF essentially worthless. For the fourth moment of the field, the EHF principle is accurate under weak scintillation conditions, but is known to provide erroneous results for strong scintillation conditions. In addition, since the EHF does not obey the energy conservation principle, its results cannot be accurate for scintillations of partially coherent beam waves.
ATAC Autocuer Modeling Analysis.
1981-01-01
The analysis of the simple rectangular segmentation (1) is based on detection and estimation theory (2). This approach uses the concept of maximum ...continuous wave forms. In order to develop the principles of maximum likelihood, it is convenient to develop the principles for the "classical...the concept of maximum likelihood is significant in that it provides the optimum performance of the detection/estimation problem. With a knowledge of
Mathematical analysis of a sharp-diffuse interfaces model for seawater intrusion
NASA Astrophysics Data System (ADS)
Choquet, C.; Diédhiou, M. M.; Rosier, C.
2015-10-01
We consider a new model mixing sharp- and diffuse-interface approaches for seawater intrusion phenomena in free aquifers. More precisely, a phase-field model is introduced in the boundary conditions on the virtual sharp interfaces. We thus include in the model the existence of diffuse transition zones while preserving the simplified structure that allows front tracking. The three-dimensional problem then reduces to a two-dimensional model involving a strongly coupled system of partial differential equations of parabolic type describing the evolution of the depths of the two free surfaces, that is, the interface between salt water and freshwater, and the water table. We prove the existence of a weak solution for the model completed with initial and boundary conditions. We also prove that the depths of the two interfaces satisfy a coupled maximum principle.
How Islamism Imperils the Western Liberal Order
2017-02-13
pragmatic approach to the conflict in worldviews that does not abandon liberal principle, but shapes what the environment offers will yield the truest...cannot abandon its principles to practicalities, yet at times, those principles seem to present weakness where there should be strength. The Islamist...formulation of the state's moral principles, typically described by the state's constitution, and at least in the West, a commitment to liberal democratic
Phansalkar, Shobha; Zachariah, Marianne; Seidling, Hanna M; Mendes, Chantal; Volk, Lynn; Bates, David W
2014-01-01
Introduction Increasing the adoption of electronic health records (EHRs) with integrated clinical decision support (CDS) is a key initiative of the current US healthcare administration. High over-ride rates of CDS alerts strongly limit these potential benefits. As a result, EHR designers aspire to improve alert design to achieve better acceptance rates. In this study, we evaluated drug–drug interaction (DDI) alerts generated in EHRs and compared them for compliance with human factors principles. Methods We utilized a previously validated questionnaire, the I-MeDeSA, to assess compliance with nine human factors principles of DDI alerts generated in 14 EHRs. Two reviewers independently assigned scores evaluating the human factors characteristics of each EHR. Rankings were assigned based on these scores and recommendations for appropriate alert design were derived. Results The 14 EHRs evaluated in this study received scores ranging from 8 to 18.33, with a maximum possible score of 26. Cohen's κ (κ=0.86) reflected excellent agreement among reviewers. The six vendor products tied for second and third place rankings, while the top system and bottom five systems were home-grown products. The most common weaknesses included the absence of characteristics such as alert prioritization, clear and concise alert messages indicating interacting drugs, actions for clinical management, and a statement indicating the consequences of over-riding the alert. Conclusions We provided detailed analyses of the human factors principles which were assessed and described our recommendations for effective alert design. Future studies should assess whether adherence to these recommendations can improve alert acceptance. PMID:24780721
NASA Technical Reports Server (NTRS)
Fennelly, A. J.
1981-01-01
The TH epsilon mu formalism, used in analyzing equivalence principle experiments of metric and nonmetric gravity theories, is adapted to the description of the electroweak interaction using the Weinberg-Salam unified SU(2) x U(1) model. The use of the TH epsilon mu formalism is thereby extended to the weak interactions, showing how the gravitational field affects W sub mu (+ or -) and Z sub mu (0) boson propagation and the rates of interactions mediated by them. The possibility of a similar extension to the strong interactions via SU(5) grand unified theories is briefly discussed. Also, using the effects of the potentials on the baryon and lepton wave functions, the effects of gravity on weak-interaction-mediated transitions in high-A atoms which are electromagnetically forbidden are considered. Three technologically feasible experiments to test the equivalence principle in the presence of the weak interactions are then briefly outlined: (1) K-capture by the Fe nucleus (counting the emitted X-ray); (2) forbidden absorption transitions in the vapor of high-A atoms; and (3) counting the relative beta-decay rates in a suitable alpha-beta decay chain, assuming the strong interactions obey the equivalence principle.
Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.
ERIC Educational Resources Information Center
Cooper, William S.
1983-01-01
Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…
NASA Astrophysics Data System (ADS)
Chen, Xi; Zhong, Jiaqi; Song, Hongwei; Zhu, Lei; Wang, Jin; Zhan, Mingsheng
2014-08-01
Vibrational noise is one of the most important noise sources limiting the performance of weak-equivalence-principle (WEP) tests based on non-isotope atom interferometers (AIs). By analyzing the vibration-induced phases, we find that, although the induced phases are not completely common, their ratio is always a constant at every experimental data point, a fact not fully exploited in the traditional elliptic curve-fitting method. Starting from this observation, we propose a strategy that can greatly suppress the vibration-induced phase noise by stabilizing the Raman laser frequencies at high precision and controlling the scanning-phase ratio. The noise rejection ratio can be as high as 10^15 with arbitrary dual-species AIs. Our method provides a Lissajous curve, and the shape of the curve indicates the breakdown of the weak-equivalence-principle signal. We then derive an estimator for the differential phase of the Lissajous curve. This strategy could be helpful in extending the candidate atomic species for high-precision AI-based WEP-test experiments.
GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation
Wang, Fei; Li, Hong; Lu, Mingquan
2017-01-01
Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks. PMID:28665318
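The GLRT-based detection step described above can be illustrated with a generic sketch. This is not the authors' GNSS-specific statistic; it is the textbook case of detecting a known template of unknown amplitude in white Gaussian noise, where the GLRT reduces to a matched-filter energy statistic. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

def glrt_statistic(x, template):
    """GLRT statistic for a scaled template in unit-variance white
    Gaussian noise (amplitude unknown). Under H0 the data is pure
    noise; under H1 it contains a*template. The GLRT plugs in the
    ML amplitude estimate and reduces to a matched-filter statistic."""
    a_hat = np.dot(x, template) / np.dot(template, template)  # ML amplitude
    return a_hat**2 * np.dot(template, template)  # ~ 2*log LR (up to sigma^2)

rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * 0.05 * np.arange(200))
noise_only = rng.normal(size=200)
with_signal = noise_only + 2.0 * template

# The statistic is large only when the hypothesized signal is present.
assert glrt_statistic(with_signal, template) > glrt_statistic(noise_only, template)
```

A detector compares this statistic to a threshold chosen from the desired false-alarm rate; the paper's method additionally decomposes the MLE cost function to separate spoofing from authentic signals.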
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Zheng; Huang, Hongying; Yan, Jue
We develop 3rd order maximum-principle-satisfying direct discontinuous Galerkin methods [8], [9], [19] and [21] for convection diffusion equations on unstructured triangular meshes. We carefully calculate the normal derivative numerical flux across element edges and prove that, with a proper choice of the parameter pair (β0, β1) in the numerical flux formula, the quadratic polynomial solution satisfies a strict maximum principle. The polynomial solution is bounded within the given range and third order accuracy is maintained. There is no geometric restriction on the meshes, and obtuse triangles are allowed in the partition. A sequence of numerical examples is carried out to demonstrate the accuracy and capability of the maximum-principle-satisfying limiter.
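The core idea of a maximum-principle-satisfying limiter can be sketched with the related Zhang-Shu scaling limiter (a simpler device than the DDG flux construction in the abstract above, used here purely for illustration): node values of a cell's polynomial are linearly squeezed toward the cell average until they lie in the prescribed range, which preserves the cell average and hence conservation.

```python
import numpy as np

def scaling_limiter(node_values, m, M):
    """Zhang-Shu-style scaling limiter: linearly squeeze a cell's
    polynomial node values toward the cell average so that every
    value lies in [m, M], while the cell average is preserved."""
    avg = node_values.mean()
    vmax, vmin = node_values.max(), node_values.min()
    theta = min(1.0,
                (M - avg) / (vmax - avg) if vmax > avg else 1.0,
                (m - avg) / (vmin - avg) if vmin < avg else 1.0)
    return avg + theta * (node_values - avg)

vals = np.array([-0.2, 0.4, 1.3])        # overshoots the range [0, 1]
limited = scaling_limiter(vals, 0.0, 1.0)

assert limited.min() >= 0.0 and limited.max() <= 1.0   # bounds enforced
assert abs(limited.mean() - vals.mean()) < 1e-12       # average preserved
```

The limiter is applied cell by cell after each stage of the time integrator; the papers' contribution is proving that the DG update itself keeps cell averages in range so the limiter never degrades accuracy.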
Lagrange Multipliers, Adjoint Equations, the Pontryagin Maximum Principle and Heuristic Proofs
ERIC Educational Resources Information Center
Ollerton, Richard L.
2013-01-01
Deeper understanding of important mathematical concepts by students may be promoted through the (initial) use of heuristic proofs, especially when the concepts are also related back to previously encountered mathematical ideas or tools. The approach is illustrated by use of the Pontryagin maximum principle which is then illuminated by reference to…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reister, D.B.
This paper uses the Pontryagin maximum principle to find time optimal paths for a constant speed unicycle. The time optimal paths consist of sequences of arcs of circles and straight lines. The maximum principle introduced concepts (dual variables, bang-bang solutions, singular solutions, and transversality conditions) that provide important insight into the nature of the time optimal paths. 10 refs., 6 figs.
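The structure of those time-optimal paths (arcs of circles joined to straight lines) follows from bang-bang steering: the Pontryagin maximum principle forces the turn rate to sit at a bound or at zero. A minimal simulation sketch, with illustrative speed and turn-rate values not taken from the paper:

```python
import numpy as np

def unicycle_step(state, u, v=1.0, dt=0.01):
    """One Euler step of the constant-speed unicycle model
    x' = v cos(th), y' = v sin(th), th' = u, with |u| <= u_max."""
    x, y, th = state
    return (x + v * np.cos(th) * dt, y + v * np.sin(th) * dt, th + u * dt)

# Bang-bang steering (u pinned at its bound, then zero) traces a circular
# arc followed by a straight line: the building blocks of the optimal paths.
state = (0.0, 0.0, 0.0)
for _ in range(157):               # arc: quarter turn at u = u_max = 1
    state = unicycle_step(state, 1.0)
for _ in range(100):               # straight segment: u = 0
    state = unicycle_step(state, 0.0)

assert abs(state[2] - 1.57) < 1e-6   # heading ~ pi/2 after the arc
```

Which sequence of arcs and lines is optimal for a given start and goal is decided by the dual variables and transversality conditions the abstract mentions.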
Maximum Principles and Application to the Analysis of An Explicit Time Marching Algorithm
NASA Technical Reports Server (NTRS)
LeTallec, Patrick; Tidriri, Moulay D.
1996-01-01
In this paper we develop local and global estimates for the solution of convection-diffusion problems. We then study the convergence properties of a Time Marching Algorithm solving Advection-Diffusion problems on two domains using incompatible discretizations. This study is based on a De-Giorgi-Nash maximum principle.
A Huygens principle for diffusion and anomalous diffusion in spatially extended systems
Gottwald, Georg A.; Melbourne, Ian
2013-01-01
We present a universal view on diffusive behavior in chaotic spatially extended systems for anisotropic and isotropic media. For anisotropic systems, strong chaos leads to diffusive behavior (Brownian motion with drift) and weak chaos leads to superdiffusive behavior (Lévy processes with drift). For isotropic systems, the drift term vanishes and strong chaos again leads to Brownian motion. We establish the existence of a nonlinear Huygens principle for weakly chaotic systems in isotropic media whereby the dynamics behaves diffusively in even space dimension and exhibits superdiffusive behavior in odd space dimensions. PMID:23653481
Why Is the Correlation between Gene Importance and Gene Evolutionary Rate So Weak?
Wang, Zhi; Zhang, Jianzhi
2009-01-01
One of the few commonly believed principles of molecular evolution is that functionally more important genes (or DNA sequences) evolve more slowly than less important ones. This principle is widely used by molecular biologists in daily practice. However, recent genomic analysis of a diverse array of organisms found only weak, negative correlations between the evolutionary rate of a gene and its functional importance, typically measured under a single benign lab condition. A frequently suggested cause of the above finding is that gene importance determined in the lab differs from that in an organism's natural environment. Here, we test this hypothesis in yeast using gene importance values experimentally determined in 418 lab conditions or computationally predicted for 10,000 nutritional conditions. In no single condition or combination of conditions did we find a much stronger negative correlation, which is explainable by our subsequent finding that always-essential (enzyme) genes do not evolve significantly more slowly than sometimes-essential or always-nonessential ones. Furthermore, we verified that functional density, approximated by the fraction of amino acid sites within protein domains, is uncorrelated with gene importance. Thus, neither the lab-nature mismatch nor a potentially biased among-gene distribution of functional density explains the observed weakness of the correlation between gene importance and evolutionary rate. We conclude that the weakness is factual, rather than artifactual. In addition to being weakened by population genetic reasons, the correlation is likely to have been further weakened by the presence of multiple nontrivial rate determinants that are independent from gene importance. 
These findings notwithstanding, we show that the principle of slower evolution of more important genes does have some predictive power when genes with vastly different evolutionary rates are compared, explaining why the principle can be practically useful despite the weakness of the correlation. PMID:19132081
Does the Budyko curve reflect a maximum power state of hydrological systems? A backward analysis
NASA Astrophysics Data System (ADS)
Westhoff, Martijn; Zehe, Erwin; Archambeau, Pierre; Dewals, Benjamin
2016-04-01
Almost all catchments plot within a small envelope around the Budyko curve. This apparent behaviour suggests that organizing principles may play a role in the evolution of catchments. In this paper we applied the thermodynamic principle of maximum power as the organizing principle. In a top-down approach we derived mathematical formulations of the relation between relative wetness and gradients driving runoff and evaporation for a simple one-box model. We did this in an inverse manner such that, when the conductances are optimized with the maximum power principle, the steady-state behaviour of the model leads exactly to a point on the asymptotes of the Budyko curve. Subsequently, we added dynamics in forcing and actual evaporation, causing the Budyko curve to deviate from the asymptotes. Despite the simplicity of the model, catchment observations compare reasonably well with the Budyko curves subject to observed dynamics in rainfall and actual evaporation. Thus, by constraining the model optimized with the maximum power principle with the asymptotes of the Budyko curve, we were able to derive more realistic values of the aridity and evaporation index without any parameter calibration. Future work should focus on better representing the boundary conditions of real catchments and eventually adding more complexity to the model.
Does the Budyko curve reflect a maximum-power state of hydrological systems? A backward analysis
NASA Astrophysics Data System (ADS)
Westhoff, M.; Zehe, E.; Archambeau, P.; Dewals, B.
2016-01-01
Almost all catchments plot within a small envelope around the Budyko curve. This apparent behaviour suggests that organizing principles may play a role in the evolution of catchments. In this paper we applied the thermodynamic principle of maximum power as the organizing principle. In a top-down approach we derived mathematical formulations of the relation between relative wetness and gradients driving run-off and evaporation for a simple one-box model. We did this in an inverse manner such that, when the conductances are optimized with the maximum-power principle, the steady-state behaviour of the model leads exactly to a point on the asymptotes of the Budyko curve. Subsequently, we added dynamics in forcing and actual evaporation, causing the Budyko curve to deviate from the asymptotes. Despite the simplicity of the model, catchment observations compare reasonably well with the Budyko curves subject to observed dynamics in rainfall and actual evaporation. Thus by constraining the model that has been optimized with the maximum-power principle with the asymptotes of the Budyko curve, we were able to derive more realistic values of the aridity and evaporation index without any parameter calibration. Future work should focus on better representing the boundary conditions of real catchments and eventually adding more complexity to the model.
Does the Budyko curve reflect a maximum power state of hydrological systems? A backward analysis
NASA Astrophysics Data System (ADS)
Westhoff, M.; Zehe, E.; Archambeau, P.; Dewals, B.
2015-08-01
Almost all catchments plot within a small envelope around the Budyko curve. This apparent behaviour suggests that organizing principles may play a role in the evolution of catchments. In this paper we applied the thermodynamic principle of maximum power as the organizing principle. In a top-down approach we derived mathematical formulations of the relation between relative wetness and gradients driving runoff and evaporation for a simple one-box model. We did this in such a way that when the conductances are optimized with the maximum power principle, the steady state behaviour of the model leads exactly to a point on the Budyko curve. Subsequently we derived gradients that, under constant forcing, resulted in a Budyko curve following the asymptotes closely. With these gradients we explored the sensitivity of dry spells and dynamics in actual evaporation. Despite the simplicity of the model, catchment observations compare reasonably well with the Budyko curves derived with dynamics in rainfall and evaporation. This indicates that the maximum power principle may be used (i) to derive the Budyko curve and (ii) to move away from the empiricism in free parameters present in many Budyko functions. Future work should focus on better representing the boundary conditions of real catchments and eventually adding more complexity to the model.
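The maximum power principle invoked in the three records above rests on the electrical maximum-power-transfer analogy: a flux driven by a gradient that the flux itself depletes delivers peak power at an intermediate conductance. A minimal numerical sketch, with an assumed linear feedback X(k) = X0/(1 + k/k0) that is purely illustrative and not the papers' catchment model:

```python
import numpy as np

# Flux F = k * X through conductance k, driven by gradient X which is
# depleted by the flux itself (linear feedback, an assumption for
# illustration). Power P = F * X peaks at an intermediate conductance.
X0, k0 = 1.0, 1.0                     # illustrative source gradient / scale
k = np.linspace(0.01, 10.0, 10000)    # candidate conductances
X = X0 / (1.0 + k / k0)               # depleted gradient
P = k * X**2                          # power delivered by the flux

k_opt = k[np.argmax(P)]
assert abs(k_opt - k0) < 0.01         # power peaks where load matches source
```

In the papers, the conductances of runoff and evaporation are optimized this way, and the resulting steady states are compared against the asymptotes of the Budyko curve.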
Shadow-free single-pixel imaging
NASA Astrophysics Data System (ADS)
Li, Shunhua; Zhang, Zibang; Ma, Xiao; Zhong, Jingang
2017-11-01
Single-pixel imaging is an innovative imaging scheme that has received increasing attention in recent years, for it is applicable to imaging at non-visible wavelengths and imaging under weak light conditions. However, as in conventional imaging, shadows are likely to occur in single-pixel imaging and sometimes have negative effects in practical use. In this paper, the principle of shadow occurrence in single-pixel imaging is analyzed, following which a technique for shadow removal is proposed. In the proposed technique, several single-pixel detectors are used to detect the backscattered light at different locations so that the shadows in the reconstructed images corresponding to the different detectors are complementary. A shadow-free reconstruction can be derived by fusing the shadow-complementary images using the maximum selection rule. To deal with the problem of intensity mismatch in image fusion, we put forward a simple calibration. As experimentally demonstrated, the technique is able to reconstruct monochromatic and full-color shadow-free images.
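The maximum selection rule used for fusion above is simply a pixel-wise maximum over the shadow-complementary reconstructions. A toy sketch (ignoring the intensity-mismatch calibration step the paper also describes; all array values are illustrative):

```python
import numpy as np

def fuse_max(images):
    """Maximum-selection-rule fusion: take, at every pixel, the
    brightest value over the set of shadow-complementary images.
    A pixel shadowed in one image is recovered from another."""
    return np.max(np.stack(images), axis=0)

scene = np.full((4, 4), 0.8)          # true shadow-free scene
img_a, img_b = scene.copy(), scene.copy()
img_a[:2, :] = 0.1                    # shadow in the top half
img_b[2:, :] = 0.1                    # complementary shadow in the bottom half

fused = fuse_max([img_a, img_b])
assert np.allclose(fused, scene)      # shadow-free reconstruction
```

This only works when the shadows are complementary, which is why the detectors must be placed at different locations around the scene.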
Origins of structure in globular proteins.
Chan, H S; Dill, K A
1990-01-01
The principal forces of protein folding--hydrophobicity and conformational entropy--are nonspecific. A long-standing puzzle has, therefore, been: What forces drive the formation of the specific internal architectures in globular proteins? We find that any self-avoiding flexible polymer molecule will develop large amounts of secondary structure, helices and parallel and antiparallel sheets, as it is driven to increasing compactness by any force of attraction among the chain monomers. Thus structure formation arises from the severity of steric constraints in compact polymers. This steric principle of organization can account for why short helices are stable in globular proteins, why there are parallel and anti-parallel sheets in proteins, and why weakly unfolded proteins have some secondary structure. On this basis, it should be possible to construct copolymers, not necessarily using amino acids, that can collapse to maximum compactness in incompatible solvents and that should then have structural organization resembling that of proteins. PMID:2385597
Success through Identification and Curriculum Change.
ERIC Educational Resources Information Center
Sapulpa Public Schools, OK.
One of the programs included in "Effective Reading Programs...," this program is based on the principle of early identification of students' strengths and weaknesses and the development of individualized methods to correct the weaknesses and emphasize the strengths. The program, begun in 1972, serves 749 kindergarten and first-grade…
Rosi, G.; D'Amico, G.; Cacciapuoti, L.; Sorrentino, F.; Prevedelli, M.; Zych, M.; Brukner, Č.; Tino, G. M.
2017-01-01
The Einstein equivalence principle (EEP) has a central role in the understanding of gravity and space–time. In its weak form, or weak equivalence principle (WEP), it directly implies equivalence between inertial and gravitational mass. Verifying this principle in a regime where the relevant properties of the test body must be described by quantum theory has profound implications. Here we report on a novel WEP test for atoms: a Bragg atom interferometer in a gravity gradiometer configuration compares the free fall of rubidium atoms prepared in two hyperfine states and in their coherent superposition. The use of the superposition state allows testing genuine quantum aspects of EEP with no classical analogue, which have remained completely unexplored so far. In addition, we measure the Eötvös ratio of atoms in two hyperfine levels with relative uncertainty in the low 10^-9, improving previous results by almost two orders of magnitude. PMID:28569742
Gruendling, Till; Guilhaus, Michael; Barner-Kowollik, Christopher
2008-09-15
We report on the successful application of size exclusion chromatography (SEC) combined with electrospray ionization mass spectrometry (ESI-MS) and refractive index (RI) detection for the determination of accurate molecular weight distributions of synthetic polymers, corrected for chromatographic band broadening. The presented method makes use of the ability of ESI-MS to accurately depict the peak profiles and retention volumes of individual oligomers eluting from the SEC column, whereas quantitative information on the absolute concentration of oligomers is obtained from the RI-detector only. A sophisticated computational algorithm based on the maximum entropy principle is used to process the data gained by both detectors, yielding an accurate molecular weight distribution, corrected for chromatographic band broadening. Poly(methyl methacrylate) standards with molecular weights up to 10 kDa serve as model compounds. Molecular weight distributions (MWDs) obtained by the maximum entropy procedure are compared to MWDs, which were calculated by a conventional calibration of the SEC-retention time axis with peak retention data obtained from the mass spectrometer. Comparison showed that for the employed chromatographic system, distributions below 7 kDa were only weakly influenced by chromatographic band broadening. However, the maximum entropy algorithm could successfully correct the MWD of a 10 kDa standard for band broadening effects. Molecular weight averages were between 5 and 14% lower than the manufacturer stated data obtained by classical means of calibration. The presented method demonstrates a consistent approach for analyzing data obtained by coupling mass spectrometric detectors and concentration sensitive detectors to polymer liquid chromatography.
Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle
Shoufan Fang; George Z. Gertner
2000-01-01
When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...
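The Maximum-Entropy Principle used in the record above picks, among all distributions consistent with the known constraints, the one with maximal entropy. For a mean constraint on a discrete support the solution is an exponential family with a single Lagrange multiplier, which can be found by bisection. A minimal sketch (the support and target mean are illustrative, not the forest self-thinning parameters of the paper):

```python
import numpy as np

def maxent_discrete(support, mean_target, iters=200):
    """Maximum-entropy distribution on a discrete support subject to
    a prescribed mean: p_i proportional to exp(-lam * x_i), with the
    multiplier lam found by bisection so the constraint is met."""
    x = np.asarray(support, float)
    lo, hi = -50.0, 50.0                 # bracket for the multiplier
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        p = np.exp(-lam * x)
        p /= p.sum()
        if p @ x > mean_target:
            lo = lam                     # mean too high -> increase lam
        else:
            hi = lam
    return p

p = maxent_discrete(range(6), mean_target=1.5)
assert abs(p @ np.arange(6) - 1.5) < 1e-6   # mean constraint satisfied
```

Confidence intervals such as the paper's 95 percent CI then follow by reading quantiles off the resulting distribution.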
The design of preamplifier and ADC circuit based on weak e-optical signal
NASA Astrophysics Data System (ADS)
Fen, Leng; Ying-ping, Yang; Ya-nan, Yu; Xiao-ying, Xu
2011-02-01
To meet the demands of weak electro-optical signal processing in a QPD detection system, the article introduces the design of a preamplifier and ADC circuit comprising I/V conversion, an instrumentation amplifier, a low-pass filter, and 16-bit A/D conversion. The article also discusses the circuit's noise suppression and isolation according to the characteristics of weak signals, and gives a method of software correction. Finally, the weak signal was tested against a Keithley 2000 multimeter, with good results.
Maximum principle for a stochastic delayed system involving terminal state constraints.
Wen, Jiaqiang; Shi, Yufeng
2017-01-01
We investigate a stochastic optimal control problem where the controlled system is depicted as a stochastic differential delayed equation, and at the terminal time the state is constrained to lie in a convex set. We first introduce an equivalent backward delayed system depicted as a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state-constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main result.
Testing Einstein's theory of gravity in a millisecond pulsar triple system
NASA Astrophysics Data System (ADS)
Archibald, Anne
2015-04-01
Einstein's theory of gravity depends on a key postulate, the strong equivalence principle. This principle says, among other things, that all objects fall the same way, even objects with strong self-gravity. Almost every metric theory of gravity other than Einstein's general relativity violates the strong equivalence principle at some level. While the weak equivalence principle--for objects with negligible self-gravity--has been tested in the laboratory, the strong equivalence principle requires astrophysical tests. Lunar laser ranging provides the best current tests by measuring whether the Earth and the Moon fall the same way in the gravitational field of the Sun. These tests are limited by the weak self-gravity of the Earth: the gravitational binding energy (over c^2) over the mass is only 4.6×10^-10. By contrast, for neutron stars this same ratio is expected to be roughly 0.1. Thus the recently-discovered system PSR J0337+17, a hierarchical triple consisting of a millisecond pulsar and two white dwarfs, offers the possibility of a test of the strong equivalence principle that is more sensitive by a factor of 20 to 100 than the best existing test. I will describe our observations of this system and our progress towards such a test.
Statistical mechanical theory for steady state systems. VI. Variational principles
NASA Astrophysics Data System (ADS)
Attard, Phil
2006-12-01
Several variational principles that have been proposed for nonequilibrium systems are analyzed. These include the principle of minimum rate of entropy production due to Prigogine [Introduction to Thermodynamics of Irreversible Processes (Interscience, New York, 1967)], the principle of maximum rate of entropy production, which is common on the internet and in the natural sciences, two principles of minimum dissipation due to Onsager [Phys. Rev. 37, 405 (1931)] and to Onsager and Machlup [Phys. Rev. 91, 1505 (1953)], and the principle of maximum second entropy due to Attard [J. Chem. Phys. 122, 154101 (2005); Phys. Chem. Chem. Phys. 8, 3585 (2006)]. The approaches of Onsager and Attard are argued to be the only viable theories. These two are related, although their physical interpretation and mathematical approximations differ. A numerical comparison with computer simulation results indicates that Attard's expression is the only accurate theory. The implications for the Langevin and other stochastic differential equations are discussed.
How the Second Law of Thermodynamics Has Informed Ecosystem Ecology through Its History
NASA Astrophysics Data System (ADS)
Chapman, E. J.; Childers, D. L.; Vallino, J. J.
2014-12-01
Throughout the history of ecosystem ecology many attempts have been made to develop a general principle governing how systems develop and organize. We reviewed the historical developments that led to conceptualization of several goal-oriented principles in ecosystem ecology and the relationships among them. We focused our review on two prominent principles—the Maximum Power Principle and the Maximum Entropy Production Principle—and the literature that applies to both. While these principles have considerable conceptual overlap and both use concepts in physics (power and entropy), we found considerable differences in their historical development, the disciplines that apply these principles, and their adoption in the literature. We reviewed the literature using Web of Science keyword searches for the MPP, the MEPP, as well as for papers that cited pioneers in the MPP and the MEPP development. From the 6000 papers that our keyword searches returned, we limited our further meta-analysis to 32 papers by focusing on studies with a foundation in ecosystems research. Despite these seemingly disparate pasts, we concluded that the conceptual approaches of these two principles were more similar than dissimilar and that maximization of power in ecosystems occurs with maximum entropy production. We also found that these two principles have great potential to explain how systems develop, organize, and function, but there are no widely agreed upon theoretical derivations for the MEPP or the MPP, possibly hindering their broader use in ecological research. We end with recommendations for how ecosystems-level studies may better use these principles.
First-Principles Monte Carlo Simulations of Reaction Equilibria in Compressed Vapors
2016-01-01
Predictive modeling of reaction equilibria presents one of the grand challenges in the field of molecular simulation. Difficulties in the study of such systems arise from the need (i) to accurately model both strong, short-ranged interactions leading to the formation of chemical bonds and weak interactions arising from the environment, and (ii) to sample the range of time scales involving frequent molecular collisions, slow diffusion, and infrequent reactive events. Here we present a novel reactive first-principles Monte Carlo (RxFPMC) approach that allows for investigation of reaction equilibria without the need to prespecify a set of chemical reactions and their ideal-gas equilibrium constants. We apply RxFPMC to investigate a nitrogen/oxygen mixture at T = 3000 K and p = 30 GPa, i.e., conditions that are present in atmospheric lightning strikes and explosions. The RxFPMC simulations show that the solvation environment leads to a significantly enhanced NO concentration that reaches a maximum when oxygen is present in slight excess. In addition, the RxFPMC simulations indicate the formation of NO2 and N2O in mole fractions approaching 1%, whereas N3 and O3 are not observed. The equilibrium distributions obtained from the RxFPMC simulations agree well with those from a thermochemical computer code parametrized to experimental data. PMID:27413785
On the maximum principle for complete second-order elliptic operators in general domains
NASA Astrophysics Data System (ADS)
Vitolo, Antonio
This paper is concerned with the maximum principle for second-order linear elliptic equations in wide generality. By means of a geometric condition previously stressed by Berestycki-Nirenberg-Varadhan, Cabré was able to improve the classical ABP estimate, obtaining the maximum principle also in unbounded domains, such as infinite strips and open connected cones with closure different from the whole space. Here we introduce a new geometric condition that extends the result to a more general class of domains including the complements of hypersurfaces, as for instance the cut plane. The methods developed here allow us to deal with complete second-order equations, where the admissible first-order term, forced to be zero in a preceding result with Cafagna, depends on the geometry of the domain.
Perspective: Maximum caliber is a general variational principle for dynamical systems
NASA Astrophysics Data System (ADS)
Dixit, Purushottam D.; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A.
2018-01-01
We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics—such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production—are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.
THE MAXIMUM POWER PRINCIPLE: AN EMPIRICAL INVESTIGATION
The maximum power principle is a potential guide to understanding the patterns and processes of ecosystem development and sustainability. The principle predicts the selective persistence of ecosystem designs that capture a previously untapped energy source. This hypothesis was in...
A preliminary study of a cryogenic equivalence principle experiment on Shuttle
NASA Technical Reports Server (NTRS)
Everitt, C. W. F.; Worden, P. W., Jr.
1985-01-01
The Weak Equivalence Principle is the hypothesis that all test bodies fall with the same acceleration in the same gravitational field. The current limit on violations of the Weak Equivalence Principle, measured by the ratio of the difference in acceleration of two test masses to their average acceleration, is about 3 parts in one-hundred billion. It is anticipated that this can be improved in a shuttle experiment to a part in one quadrillion. Topics covered include: (1) studies of the shuttle environment, including interference with the experiment, interfacing to the experiment, and possible alternatives; (2) numerical simulations of the proposed experiment, including analytic solutions for special cases of the mass motion and preliminary estimates of sensitivity and time required; (3) error analysis of several noise sources such as thermal distortion, gas and radiation pressure effects, and mechanical distortion; and (4) development and performance tests of a laboratory version of the instrument.
Use of General Principles in Teaching Biochemistry.
ERIC Educational Resources Information Center
Fernandez, Rolando Hernandez; Tomey, Agustin Vicedo
1991-01-01
Presents Principles of Biochemistry for use as main focus of a biochemistry course. The nine guiding ideas are the principles of continual turnover, macromolecular organization, molecular recognition, multiplicity of utilization, maximum efficiency, gradual change, interrelationship, transformational reciprocity, and information transfer. In use…
Son Hing, Leanne S; Bobocel, D Ramona; Zanna, Mark P; Garcia, Donna M; Gee, Stephanie S; Orazietti, Katie
2011-09-01
We argue that the preference for the merit principle is a separate construct from hierarchy-legitimizing ideologies (i.e., system justification beliefs, prejudice, social dominance orientation), including descriptive beliefs that meritocracy currently exists in society. Moreover, we hypothesized that prescriptive beliefs about merit should have a stronger influence on reactions to the status quo when hierarchy-legitimizing ideologies are weak (vs. strong). In 4 studies, participants' preference for the merit principle and hierarchy-legitimizing ideologies were assessed; later, the participants evaluated organizational selection practices that support or challenge the status quo. Participants' prescriptive and descriptive beliefs about merit were separate constructs; only the latter predicted other hierarchy-legitimizing ideologies. In addition, as hypothesized, among participants who weakly endorsed hierarchy-legitimizing ideologies, the stronger their preference for the merit principle, the more they opposed selection practices that were perceived to be merit-violating and the more they supported practices that were perceived to be merit-restoring. In contrast, those who strongly endorsed hierarchy-legitimizing ideologies were always motivated to support the status quo, regardless of their preference for the merit principle. PsycINFO Database Record (c) 2011 APA, all rights reserved.
An ESS maximum principle for matrix games.
Vincent, T L; Cressman, R
2000-11-01
Previous work has demonstrated that for games defined by differential or difference equations with a continuum of strategies, there exists a G-function, related to individual fitness, that must take on a maximum with respect to a virtual variable v whenever v is one of the vectors in the coalition of vectors which make up the evolutionarily stable strategy (ESS). This result, called the ESS maximum principle, is quite useful in determining candidates for an ESS. This principle is reformulated here, so that it may be conveniently applied to matrix games. In particular, we define a matrix game to be one in which fitness is expressed in terms of strategy frequencies and a matrix of expected payoffs. It is shown that the G-function in the matrix game setting must again take on a maximum value at all the strategies which make up the ESS coalition vector. The reformulated maximum principle is applicable to both bilinear and nonlinear matrix games. One advantage in employing this principle to solve the traditional bilinear matrix game is that the same G-function is used to find both pure and mixed strategy solutions by simply specifying an appropriate strategy space. Furthermore, we show how the theory may be used to solve matrix games which are not in the usual bilinear form. We examine in detail two nonlinear matrix games: the game between relatives and the sex ratio game. In both of these games an ESS solution is determined. These examples not only illustrate the usefulness of this approach for finding solutions to an expanded class of matrix games, but also aid in understanding the nature of the ESS.
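The G-function test described above can be checked numerically in the simplest bilinear case. The sketch below (ours, not from the paper) uses a hypothetical Hawk-Dove payoff matrix with assumed values V = 2, C = 4; at the classical mixed ESS p* = V/C the G-function is flat in the virtual strategy v, so its maximum over the strategy space is attained at v = p*:

```python
# Hawk-Dove payoff matrix (illustrative values V=2, C=4):
# rows = focal strategy (Hawk, Dove), columns = opponent strategy
V, C = 2.0, 4.0
A = [[(V - C) / 2, V],
     [0.0, V / 2]]

def G(v, p):
    """G-function: expected payoff of a rare individual playing Hawk with
    probability v in a population that plays Hawk with probability p."""
    pop = [p, 1 - p]
    strat = [v, 1 - v]
    return sum(strat[i] * A[i][j] * pop[j]
               for i in range(2) for j in range(2))

p_ess = V / C  # classical mixed ESS for V < C
# At the ESS the G-function is constant in v (pure strategies earn equal
# payoff), so its maximum over v is attained at v = p_ess.
vals = [G(v / 100, p_ess) for v in range(101)]
```

Sampling G over the whole strategy interval confirms it is flat at p* = 0.5, which is the bilinear-game version of the maximum principle: no mutant strategy can do better than the ESS population.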
While Heisenberg Is Not Looking: The Strength of "Weak Measurements" in Educational Research
ERIC Educational Resources Information Center
Geelan, David R.
2015-01-01
The concept of "weak measurements" in quantum physics is a way of "cheating" the Uncertainty Principle. Heisenberg stated (and 85 years of experiments have demonstrated) that it is impossible to know both the position and momentum of a particle with arbitrary precision. More precise measurements of one decrease the precision…
Motion control of a gantry crane with a container
NASA Astrophysics Data System (ADS)
Shugailo, T. S.; Yushkov, M. P.
2018-05-01
The transportation of a container by a gantry crane in a given time from one point of space to another is considered. The system is at rest at the end of the motion. A maximum admissible speed is taken into account. The control force is found using either the Pontryagin maximum principle or the generalized Gauss principle. The advantages of the second method over the first are demonstrated.
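For intuition about the bang-bang structure that the Pontryagin maximum principle yields in such transport problems, here is a minimal sketch (illustrative parameters only, unrelated to the crane model in the paper) of the minimum-time rest-to-rest profile for a double integrator with bounded acceleration and a maximum admissible speed:

```python
import math

def min_time_profile(d, a_max, v_max):
    """Minimum time for a rest-to-rest move over distance d with |a| <= a_max
    and |v| <= v_max; the optimal control is bang-bang (accelerate, possibly
    cruise at the speed limit, then decelerate)."""
    t_acc = v_max / a_max             # time needed to reach the speed limit
    d_acc = 0.5 * a_max * t_acc ** 2  # distance covered while accelerating
    if 2 * d_acc >= d:
        # triangular profile: the speed limit is never reached
        t_half = math.sqrt(d / a_max)
        return 2 * t_half
    # trapezoidal profile: accelerate, cruise at v_max, decelerate
    t_cruise = (d - 2 * d_acc) / v_max
    return 2 * t_acc + t_cruise

# hypothetical numbers: 30 m travel, 0.5 m/s^2 acceleration, 2 m/s speed cap
T = min_time_profile(30.0, 0.5, 2.0)   # trapezoidal case: T = 19 s
```

The speed constraint is what turns the pure bang-bang (triangular) profile into a trapezoidal one, which is the situation the abstract's "maximum admissible speed" refers to.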
Thermodynamic resource theories, non-commutativity and maximum entropy principles
NASA Astrophysics Data System (ADS)
Lostaglio, Matteo; Jennings, David; Rudolph, Terry
2017-04-01
We discuss some features of thermodynamics in the presence of multiple conserved quantities. We prove a generalisation of Landauer's principle, illustrating tradeoffs between the erasure costs paid in different 'currencies'. We then show how the maximum entropy and complete passivity approaches give different answers in the presence of multiple observables. We discuss how this seems to prevent current resource theories from fully capturing thermodynamic aspects of non-commutativity.
Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barletti, Luigi, E-mail: luigi.barletti@unifi.it
2014-08-15
The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.
Maximum Principle in the Optimal Design of Plates with Stratified Thickness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roubicek, Tomas
2005-03-15
An optimal design problem for a plate governed by a linear, elliptic equation with bounded thickness varying only in a single prescribed direction and with unilateral isoperimetrical-type constraints is considered. Using Murat-Tartar's homogenization theory for stratified plates and Young-measure relaxation theory, smoothness of the extended cost and constraint functionals is proved, and then the maximum principle necessary for an optimal relaxed design is derived.
Continuous quantum measurements and the action uncertainty principle
NASA Astrophysics Data System (ADS)
Mensky, Michael B.
1992-09-01
The path-integral approach to the quantum theory of continuous measurements has been developed in preceding works of the author. According to this approach, the measurement amplitude determining the probabilities of different outputs of the measurement can be evaluated in the form of a restricted path integral (a path integral "in finite limits"). With the help of the measurement amplitude, the maximum deviation of measurement outputs from the classical one can be easily determined. The aim of the present paper is to express this variance in the simpler and more transparent form of a specific uncertainty principle (called the action uncertainty principle, AUP). The simplest (but weak) form of the AUP is δS ≳ ℏ, where S is the action functional. It can be applied for a simple derivation of the Bohr-Rosenfeld inequality for the measurability of the gravitational field. A stronger form of the AUP, with wider application (for ideal measurements performed in the quantum regime), is |∫_{t'}^{t''} (δS[q]/δq(t)) Δq(t) dt| ≃ ℏ, where the paths [q] and [Δq] stand correspondingly for the measurement output and the measurement error. It can also be presented in the symbolic form Δ(Equation) Δ(Path) ≃ ℏ. This means that the deviation of the observed (measured) motion from that obeying the classical equation of motion is reciprocally proportional to the uncertainty in the path (the latter uncertainty resulting from the measurement error). The consequence of the AUP is that improving the measurement precision beyond the threshold of the quantum regime leads to decreasing information resulting from the measurement.
Weak photoacoustic signal detection based on the differential duffing oscillator
NASA Astrophysics Data System (ADS)
Li, Chenjing; Xu, Xuemei; Ding, Yipeng; Yin, Linzi; Dou, Beibei
2018-04-01
In view of photoacoustic spectroscopy theory, the relationship between a weak photoacoustic signal and gas concentration is described. Studies of the Duffing oscillator's ability to identify state transitions and to determine the threshold value have proven the feasibility of applying the Duffing oscillator to weak signal detection. An improved differential Duffing oscillator is proposed to identify weak signals of arbitrary frequency and to improve the signal-to-noise ratio. The analytical methods and numerical experiments of the novel model are introduced in detail to confirm its superiority. A weak photoacoustic signal detection system based on the differential Duffing oscillator is then constructed; to our knowledge, this is the first time a differential Duffing oscillator detection method has been applied successfully in photoacoustic spectroscopy gas monitoring.
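A minimal numerical sketch of the single (non-differential) Holmes-type Duffing oscillator that underlies such detectors is given below; the damping, forcing, and frequency values are our own illustrative assumptions, not the paper's settings. In practice the detector watches for the qualitative state transition as the drive amplitude crosses a threshold; here we only integrate the equation of motion with a fourth-order Runge-Kutta step:

```python
import math

def duffing_step(x, v, t, dt, k, f, omega):
    """One RK4 step of the Holmes Duffing oscillator
    x'' + k x' - x + x^3 = f cos(omega t)."""
    def acc(x, v, t):
        return f * math.cos(omega * t) - k * v + x - x ** 3
    k1x, k1v = v, acc(x, v, t)
    k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x, v + 0.5*dt*k1v, t + 0.5*dt)
    k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x, v + 0.5*dt*k2v, t + 0.5*dt)
    k4x, k4v = v + dt*k3v, acc(x + dt*k3x, v + dt*k3v, t + dt)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
    return x, v

# assumed detector parameters: damping k = 0.5, drive frequency omega = 1
x, v, dt = 0.1, 0.0, 0.01
xs = []
for i in range(20000):
    x, v = duffing_step(x, v, i * dt, dt, k=0.5, f=0.3, omega=1.0)
    xs.append(x)
```

Because the system is dissipative, the trajectory remains bounded in the double-well potential; a detector built on this model compares trajectories just below and just above the threshold forcing to infer the presence of a weak periodic component.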
Galactic Shapiro delay to the Crab pulsar and limit on weak equivalence principle violation
NASA Astrophysics Data System (ADS)
Desai, Shantanu; Kahya, Emre
2018-02-01
We calculate the total galactic Shapiro delay to the Crab pulsar by including the contributions from the dark matter as well as the baryonic matter along the line of sight. The total delay due to the dark matter potential is about 3.4 days. For baryonic matter, we included the contributions from both the bulge and the disk, which are approximately 0.12 and 0.32 days respectively. The total delay from all the matter distribution is therefore 3.84 days. We also calculate the limit on violations of the weak equivalence principle by using observations of "nano-shot" giant pulses from the Crab pulsar with time delay < 0.4 ns, as well as by using time differences between radio and optical photons observed from this pulsar. Using the former, we obtain a limit on violation of the weak equivalence principle in terms of the PPN parameter Δγ < 2.41 × 10^{-15}. From the time difference between simultaneous optical and radio observations, we get Δγ < 1.54 × 10^{-9}. We also point out differences between our calculation of the Shapiro delay and those of two recent papers (Yang and Zhang, Phys Rev D 94(10):101501, 2016; Zhang and Gong, Astrophys J 837:134, 2017), which used the same observations to obtain a corresponding limit on Δγ.
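The quoted nano-shot bound on Δγ follows from a one-line calculation: the differential Shapiro delay between two signal species is (Δγ/2) times the total galactic Shapiro delay. Using only the numbers stated in the abstract:

```python
# Differential Shapiro delay between two signals is (delta_gamma / 2) times
# the total galactic Shapiro delay, so delta_gamma = 2 * dt_obs / dt_shapiro.
shapiro_delay_s = 3.84 * 86400   # total galactic delay of 3.84 days, in seconds
dt_obs_s = 0.4e-9                # "nano-shot" timing bound: 0.4 ns
delta_gamma = 2 * dt_obs_s / shapiro_delay_s   # ~2.41e-15, as quoted
```

The same arithmetic with the radio-optical time difference reproduces the weaker 10^{-9}-level bound.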
Satellite Test of the Equivalence Principle as a Probe of Modified Newtonian Dynamics.
Pereira, Jonas P; Overduin, James M; Poyneer, Alexander J
2016-08-12
The proposed satellite test of the equivalence principle (STEP) will detect possible violations of the weak equivalence principle by measuring relative accelerations between test masses of different composition with a precision of one part in 10^{18}. A serendipitous by-product of the experimental design is that the absolute or common-mode acceleration of the test masses is also measured to high precision as they oscillate along a common axis under the influence of restoring forces produced by the position sensor currents, which in drag-free mode lead to Newtonian accelerations as small as 10^{-14} g. This is deep inside the low-acceleration regime where modified Newtonian dynamics (MOND) diverges strongly from the Newtonian limit of general relativity. We show that MOND theories (including those based on the widely used "n family" of interpolating functions as well as the covariant tensor-vector-scalar formulation) predict an easily detectable increase in the frequency of oscillations of the STEP test masses if the strong equivalence principle holds. If it does not hold, MOND predicts a cumulative increase in oscillation amplitude which is also detectable. STEP thus provides a new and potentially decisive test of Newton's law of inertia, as well as the equivalence principle in both its strong and weak forms.
Existence of weak solutions to degenerate p-Laplacian equations and integral formulas
NASA Astrophysics Data System (ADS)
Chua, Seng-Kee; Wheeden, Richard L.
2017-12-01
We study the problem of solving some general integral formulas and then apply the conclusions to obtain results about the existence of weak solutions of various degenerate p-Laplacian equations. We adapt Variational Calculus methods and the Mountain Pass Lemma without the Palais-Smale condition, and we use an abstract version of Lions' Concentration Compactness Principle II.
The classroom competence and attitudes towards pedagogical principles of beginning teachers.
Preece, P F
1994-06-01
The relationship between preservice education students' (N = 135) attitudes towards general pedagogical principles and the quality of their classroom teaching was investigated. A weak negative relationship was obtained between students' attitudes and the assessment category 'relationships with children'. This suggests that fostering positive attitudes in preservice students towards general pedagogical principles (principles based on practices directly associated with enhancing pupil achievement) may result in lower-quality teaching because of an adverse effect on pupil-teacher relationships.
NASA Astrophysics Data System (ADS)
Bulgakov, V. K.; Strigunov, V. V.
2009-05-01
The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.
Second Order Boltzmann-Gibbs Principle for Polynomial Functions and Applications
NASA Astrophysics Data System (ADS)
Gonçalves, Patrícia; Jara, Milton; Simon, Marielle
2017-01-01
In this paper we give a new proof of the second order Boltzmann-Gibbs principle introduced in Gonçalves and Jara (Arch Ration Mech Anal 212(2):597-644, 2014). The proof does not require knowledge of the spectral gap inequality for the underlying model; instead, it relies on a proper decomposition of the antisymmetric part of the current of the system in terms of polynomial functions. In addition, we fully derive the convergence of the equilibrium fluctuations towards (1) a trivial process in the case of super-diffusive systems, (2) an Ornstein-Uhlenbeck process or the unique energy solution of the stochastic Burgers equation, as defined in Gubinelli and Jara (SPDEs Anal Comput (1):325-350, 2013) and Gubinelli and Perkowski (arXiv:1508.07764, 2015), in the case of weakly asymmetric diffusive systems. Examples and applications are presented for weakly and partially asymmetric exclusion processes, weakly asymmetric speed change exclusion processes, and Hamiltonian systems with exponential interactions.
Maximum predictive power and the superposition principle
NASA Technical Reports Server (NTRS)
Summhammer, Johann
1994-01-01
In quantum physics the direct observables are probabilities of events. We ask how observed probabilities must be combined to achieve what we call maximum predictive power. According to this concept the accuracy of a prediction must only depend on the number of runs whose data serve as input for the prediction. We transform each probability to an associated variable whose uncertainty interval depends only on the amount of data and strictly decreases with it. We find that for a probability which is a function of two other probabilities maximum predictive power is achieved when linearly summing their associated variables and transforming back to a probability. This recovers the quantum mechanical superposition principle.
NASA Astrophysics Data System (ADS)
Varga, T.; Kumar, A.; Vlahos, E.; Denev, S.; Park, M.; Hong, S.; Sanehira, T.; Wang, Y.; Fennie, C. J.; Streiffer, S. K.; Ke, X.; Schiffer, P.; Gopalan, V.; Mitchell, J. F.
2009-07-01
We report the magnetic and electrical characteristics of polycrystalline FeTiO3 synthesized at high pressure that is isostructural with acentric LiNbO3 (LBO). Piezoresponse force microscopy, optical second harmonic generation, and magnetometry demonstrate ferroelectricity at and below room temperature and weak ferromagnetism below ~120 K. These results validate symmetry-based criteria and first-principles calculations of the coexistence of ferroelectricity and weak ferromagnetism in a series of transition metal titanates crystallizing in the LBO structure.
Varga, T; Kumar, A; Vlahos, E; Denev, S; Park, M; Hong, S; Sanehira, T; Wang, Y; Fennie, C J; Streiffer, S K; Ke, X; Schiffer, P; Gopalan, V; Mitchell, J F
2009-07-24
We report the magnetic and electrical characteristics of polycrystalline FeTiO_{3} synthesized at high pressure that is isostructural with acentric LiNbO_{3} (LBO). Piezoresponse force microscopy, optical second harmonic generation, and magnetometry demonstrate ferroelectricity at and below room temperature and weak ferromagnetism below approximately 120 K. These results validate symmetry-based criteria and first-principles calculations of the coexistence of ferroelectricity and weak ferromagnetism in a series of transition metal titanates crystallizing in the LBO structure.
NASA Astrophysics Data System (ADS)
Hey, Anthony J. G.; Walters, Patrick
This book provides a descriptive, popular account of quantum physics. The basic topics addressed include: waves and particles, the Heisenberg uncertainty principle, the Schroedinger equation and matter waves, atoms and nuclei, quantum tunneling, the Pauli exclusion principle and the elements, quantum cooperation and superfluids, Feynman rules, weak photons, quarks, and gluons. The applications of quantum physics to astrophysics, nuclear technology, and modern electronics are addressed.
Generalized uncertainty principle and the maximum mass of ideal white dwarfs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rashidi, Reza, E-mail: reza.rashidi@srttu.edu
The effects of a generalized uncertainty principle on the structure of an ideal white dwarf star are investigated. The equation describing the equilibrium configuration of the star is a generalized form of the Lane-Emden equation. It is proved that the star always has a finite size. It is then argued that the maximum mass of such an ideal white dwarf tends to infinity, as opposed to the conventional case where it has a finite value.
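The abstract's equilibrium equation generalizes the Lane-Emden equation; for reference, the standard (non-generalized) n = 3 polytrope, which governs the conventional relativistic white dwarf, can be integrated in a few lines. This is a textbook computation, not the paper's generalized model; the first zero ξ₁ ≈ 6.897 marks the finite stellar surface:

```python
def lane_emden_first_zero(n=3.0, h=1e-3):
    """Integrate the standard Lane-Emden equation
        (1/xi^2) d/dxi (xi^2 dtheta/dxi) = -theta^n
    outward from the series solution near xi = 0 (RK4) and return the first
    zero xi_1 of theta, i.e. the stellar surface."""
    xi = 1e-4
    theta = 1 - xi ** 2 / 6   # series expansion near the centre
    dtheta = -xi / 3
    while theta > 0:
        def f(xi, th, dth):
            # clamp theta at 0 so theta**n stays real near the surface
            return (dth, -max(th, 0.0) ** n - 2 * dth / xi)
        k1 = f(xi, theta, dtheta)
        k2 = f(xi + h/2, theta + h/2*k1[0], dtheta + h/2*k1[1])
        k3 = f(xi + h/2, theta + h/2*k2[0], dtheta + h/2*k2[1])
        k4 = f(xi + h, theta + h*k3[0], dtheta + h*k3[1])
        theta += h * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6
        dtheta += h * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6
        xi += h
    return xi

xi1 = lane_emden_first_zero()   # ~6.897 for n = 3
```

In the conventional theory this finite ξ₁, combined with the n = 3 mass relation, gives the Chandrasekhar mass; the paper's point is that the generalized equation changes that conclusion.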
First-principles Monte Carlo simulations of reaction equilibria in compressed vapors
Fetisov, Evgenii O.; Kuo, I-Feng William; Knight, Chris; ...
2016-06-13
Predictive modeling of reaction equilibria presents one of the grand challenges in the field of molecular simulation. Difficulties in the study of such systems arise from the need (i) to accurately model both strong, short-ranged interactions leading to the formation of chemical bonds and weak interactions arising from the environment, and (ii) to sample the range of time scales involving frequent molecular collisions, slow diffusion, and infrequent reactive events. Here we present a novel reactive first-principles Monte Carlo (RxFPMC) approach that allows for investigation of reaction equilibria without the need to prespecify a set of chemical reactions and their ideal-gas equilibrium constants. We apply RxFPMC to investigate a nitrogen/oxygen mixture at T = 3000 K and p = 30 GPa, i.e., conditions that are present in atmospheric lightning strikes and explosions. The RxFPMC simulations show that the solvation environment leads to a significantly enhanced NO concentration that reaches a maximum when oxygen is present in slight excess. In addition, the RxFPMC simulations indicate the formation of NO2 and N2O in mole fractions approaching 1%, whereas N3 and O3 are not observed. Lastly, the equilibrium distributions obtained from the RxFPMC simulations agree well with those from a thermochemical computer code parametrized to experimental data.
A proposed power-assisted system for a manual wheelchair based on universal design for the elderly
NASA Astrophysics Data System (ADS)
Susmartini, Susy; Pryadhitama, Ilham; Herdiman, Lobes; Wahyufitriani, Cindy
2017-11-01
Difficulty in walking accounts for a high percentage of mobility limitations among the elderly. An assistive technology commonly used to help elderly people who have difficulty walking is the manual wheelchair. However, the elderly frequently experience difficulties in operating a manual wheelchair because of the gradual degradation of their physical condition. Preliminary results showed that the average hand grip strength of seven elderly subjects was 13.8 ± 6.96 kg, which is relatively weak. In addition, the mean maximum speed of the seven elderly subjects when propelling the wheelchair was 0.6 ± 0.2 m/s, only 56.4% of the average speed of a 20-23-year age group (8 males), which was 1.1 ± 0.1 m/s. This shows that elderly people who have difficulty walking have low grip strength and low speed when operating a wheelchair, while manual wheelchairs do not yet offer an adequate technological solution to this problem. Therefore, an assistive technology is proposed to create a mobility aid that accommodates the needs of the elderly. One approach used is Universal Design. This paper proposes an intervention in the manual wheelchair guided by the 7 principles of the Universal Design approach; the principles that the preliminary design has not been able to satisfy for the elderly become the reference for the design proposed in this study.
Non-life insurance pricing: multi-agent model
NASA Astrophysics Data System (ADS)
Darooneh, A. H.
2004-11-01
We use the maximum entropy principle for the pricing of non-life insurance and recover the Bühlmann results for the economic premium principle. The concept of economic equilibrium is revised in this respect.
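Bühlmann's economic premium principle, which the abstract recovers from maximum entropy, amounts to pricing under an exponentially tilted (Esscher-type) measure. The sketch below uses a hypothetical three-point loss distribution and risk-aversion parameter; both are our own illustrative choices, not values from the paper:

```python
import math

def economic_premium(losses, probs, a):
    """Esscher-type premium E[X e^{aX}] / E[e^{aX}] of the kind that emerges
    from Buhlmann's equilibrium arguments; a is the risk-aversion parameter
    (a = 0 recovers the plain expected loss)."""
    w = [p * math.exp(a * x) for x, p in zip(losses, probs)]
    z = sum(w)
    return sum(wi * x for wi, x in zip(w, losses)) / z

# hypothetical loss model: 90% no claim, 9% small claim, 1% large claim
losses = [0.0, 10.0, 100.0]
probs = [0.90, 0.09, 0.01]
net = economic_premium(losses, probs, a=0.0)    # net (expected) loss
prem = economic_premium(losses, probs, a=0.02)  # risk-loaded premium
```

The exponential tilt up-weights large losses, so the loaded premium exceeds the net premium, which is the economically sensible loading that the maximum entropy derivation reproduces.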
Weak measurements beyond the Aharonov-Albert-Vaidman formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu Shengjun; Li Yang
2011-05-15
We extend the idea of weak measurements to the general case, provide a complete treatment, and obtain results both for the regime when the preselected and postselected states (PPS) are almost orthogonal and for the regime when they are exactly orthogonal. Surprisingly, we find that for a fixed interaction strength, there may exist a maximum signal amplification and a corresponding optimum overlap of the PPS to achieve it. For weak measurements in the orthogonal regime, we find interesting quantities that play the same role that weak values play in the nonorthogonal regime.
How the Weak Variance of Momentum Can Turn Out to be Negative
NASA Astrophysics Data System (ADS)
Feyereisen, M. R.
2015-05-01
Weak values are average quantities; therefore investigating their associated variance is crucial in understanding their place in quantum mechanics. We develop the concept of a position-postselected weak variance of momentum as cohesively as possible, building primarily on material from Moyal (Mathematical Proceedings of the Cambridge Philosophical Society, Cambridge University Press, Cambridge, 1949) and Sonego (Found Phys 21(10):1135, 1991). The weak variance is defined in terms of the Wigner function, using a standard construction from probability theory. We show this corresponds to a measurable quantity, which is not itself a weak value. It also leads naturally to a connection between the imaginary part of the weak value of momentum and the quantum potential. We study how the negativity of the Wigner function causes negative weak variances, and the implications this has for a class of 'subquantum' theories. We also discuss the role of weak variances in studying determinism, deriving the classical limit from a variational principle.
Chattaraj, Pratim K; Ayers, Paul W; Melin, Junia
2007-08-07
Ayers, Parr, and Pearson recently showed that insight into the hard/soft acid/base (HSAB) principle could be obtained by analyzing the energy of reactions in hard/soft exchange reactions, i.e., reactions in which a soft acid replaces a hard acid or a soft base replaces a hard base [J. Chem. Phys., 2006, 124, 194107]. We show, in accord with the maximum hardness principle, that the hardness increases for favorable hard/soft exchange reactions and decreases when the HSAB principle indicates that hard/soft exchange reactions are unfavorable. This extends the previous work of the authors, which treated only the "double hard/soft exchange" reaction [P. K. Chattaraj and P. W. Ayers, J. Chem. Phys., 2005, 123, 086101]. We also discuss two different approaches to computing the hardness of molecules from the hardness of the composing fragments, and explain how the results differ. In the present context, it seems that the arithmetic mean of fragment softnesses is the preferable definition.
Identical Quantum Particles and Weak Discernibility
NASA Astrophysics Data System (ADS)
Dieks, Dennis; Versteegh, Marijn A. M.
2008-10-01
Saunders has recently claimed that "identical quantum particles" with an anti-symmetric state (fermions) are weakly discernible objects, just like irreflexively related ordinary objects in situations with perfect symmetry (Black's spheres, for example). Weakly discernible objects have all their qualitative properties in common but nevertheless differ from each other by virtue of (a generalized version of) Leibniz's principle, since they stand in relations an entity cannot have to itself. This notion of weak discernibility has been criticized as question begging, but we defend and accept it for classical cases like Black's spheres. We argue, however, that the quantum mechanical case is different. Here the application of the notion of weak discernibility indeed is question begging and in conflict with standard interpretational ideas. We conclude that the introduction of the conceptual resource of weak discernibility does not change the interpretational status quo in quantum mechanics.
Mega-sized concerns from the nano-sized world: the intersection of nano- and environmental ethics.
Attia, Peter
2013-09-01
As rapid advances in nanotechnology are made, we must set guidelines to balance the interests of both human beneficiaries and the environment by combining nanoethics and environmental ethics. In this paper, I reject Leopoldian holism as a practical environmental ethic with which to gauge nanotechnologies because, as a nonanthropocentric ethic, it does not value the humans who will actually use the ethic. Weak anthropocentrism is suggested as a reasonable alternative to ethics without a substantial human interest, as it treats nonhuman interests as human interests. I also establish the precautionary principle as a useful situational guideline for decision makers. Finally, I examine existing and potential applications of nanotechnology, including water purification, agriculture, mining, energy, and pollutant removal, from the perspective of weak anthropocentrism using the precautionary principle.
Shortcomings in Dealing with Psychological Effects of Natural Disasters in Iran
RABIEI, Ali; NAKHAEE, Nouzar; POURHOSSEINI, Samira Sadat
2014-01-01
Background: Natural disasters result in numerous economic, social, psychological and cultural consequences. Among them, the psychological consequences of disasters affect people's lives long after the critical conditions end. Thus, given the importance of psychological support in disasters, this study identified problems and weaknesses in dealing with the psychological effects of disasters that have occurred in Iran. Methods: This qualitative study was carried out using semi-structured in-depth interviews and focus groups. The sample consisted of 26 experts in the field of disaster management. Content analysis was used to analyze the data. Results: Nine major problems were identified as weaknesses in handling the psychological effects of disaster. These weaknesses include: rescuers' unfamiliarity with the basic principles of psychosocial support, a shortage of relevant experts and inadequate training, inattention to the needs of specific groups, weaknesses in organizational communications, discontinuation of psychological support after a disaster, unfamiliarity with the native language and culture of the disaster area, little attention paid by the media to psychological principles in broadcasting news, and people's long-term dependence on governmental aid. Conclusions: Disaster management has various aspects; in Iran, less attention has been paid to psychological support in disasters. Increasing education at all levels, establishing responsible structures, and planning seem necessary in dealing with the psychological effects of disasters. PMID:25927043
NASA Astrophysics Data System (ADS)
Mapakshi, N. K.; Chang, J.; Nakshatrala, K. B.
2018-04-01
Mathematical models for flow through porous media typically enjoy the so-called maximum principles, which place bounds on the pressure field. It is highly desirable to preserve these bounds on the pressure field in predictive numerical simulations, that is, one needs to satisfy discrete maximum principles (DMP). Unfortunately, many of the existing formulations for flow through porous media models do not satisfy DMP. This paper presents a robust, scalable numerical formulation based on variational inequalities (VI), to model non-linear flows through heterogeneous, anisotropic porous media without violating DMP. VI is an optimization technique that places bounds on the numerical solutions of partial differential equations. To crystallize the ideas, a modification to Darcy equations by taking into account pressure-dependent viscosity will be discretized using the lowest-order Raviart-Thomas (RT0) and Variational Multi-scale (VMS) finite element formulations. It will be shown that these formulations violate DMP, and, in fact, these violations increase with an increase in anisotropy. It will be shown that the proposed VI-based formulation provides a viable route to enforce DMP. Moreover, it will be shown that the proposed formulation is scalable, and can work with any numerical discretization and weak form. A series of numerical benchmark problems are solved to demonstrate the effects of heterogeneity, anisotropy and non-linearity on DMP violations under the two chosen formulations (RT0 and VMS), and that of non-linearity on solver convergence for the proposed VI-based formulation. Parallel scalability on modern computational platforms will be illustrated through strong-scaling studies, which will prove the efficiency of the proposed formulation in a parallel setting. Algorithmic scalability as the problem size is scaled up will be demonstrated through novel static-scaling studies. 
The performed static-scaling studies can serve as a guide for users to be able to select an appropriate discretization for a given problem size.
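The variational-inequality step described above amounts to a bound-constrained minimization of the discrete energy. A deliberately minimal sketch (a toy quadratic program, not the paper's RT0/VMS implementation; the matrix and forcing are invented so that the unconstrained solve violates the bound, standing in for a DMP violation) shows the mechanics: the unconstrained Galerkin-style solve dips below zero, while a projected-gradient VI solve with the constraint u ≥ 0 does not.

```python
import numpy as np

# Toy SPD "stiffness" matrix and forcing (invented) chosen so that the
# unconstrained solution violates the lower bound u >= 0.
K = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
f = np.array([1.0, -3.0, 1.0])

u_free = np.linalg.solve(K, f)  # unconstrained Galerkin-style solve

# Variational inequality u >= 0: minimize 0.5 u'Ku - f'u over the convex
# set {u >= 0} by projected gradient descent (step < 2 / lambda_max(K)).
u = np.zeros(3)
alpha = 0.25
for _ in range(5000):
    u = np.maximum(0.0, u - alpha * (K @ u - f))

print(u_free)  # has negative entries (bound violated)
print(u)       # satisfies the bound
```

Production VI solvers (semismooth Newton, reduced-space active set) replace the projected-gradient loop, but the feasible set and optimality conditions are the same.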
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arima, Takashi, E-mail: tks@stat.nitech.ac.jp; Mentrelli, Andrea, E-mail: andrea.mentrelli@unibo.it; Ruggeri, Tommaso, E-mail: tommaso.ruggeri@unibo.it
Molecular extended thermodynamics of rarefied polyatomic gases is characterized by two hierarchies of equations for moments of a suitable distribution function in which the internal degrees of freedom of a molecule are taken into account. On the basis of physical relevance, the truncation orders of the two hierarchies are proven not to be independent of each other, and the closure procedures based on the maximum entropy principle (MEP) and on the entropy principle (EP) are proven to be equivalent. The characteristic velocities of the emerging hyperbolic system of differential equations are compared to those obtained for monatomic gases, and the lower-bound estimate for the maximum equilibrium characteristic velocity established for monatomic gases (characterized by only one hierarchy of moments with truncation order N) by Boillat and Ruggeri (1997), λ_(N)^{E,max}/c_0 ≥ √(6/5 (N − 1/2)), with c_0 = √(5kT/(3m)), is proven to hold also for rarefied polyatomic gases, independently of the degrees of freedom of a molecule. Highlights: • Molecular extended thermodynamics of rarefied polyatomic gases is studied. • The relation between the two hierarchies of moment equations is derived. • The equivalence of the maximum entropy principle and the entropy principle is proven. • The characteristic velocities are compared to those of monatomic gases. • The lower bound of the maximum characteristic velocity is estimated.
Method and apparatus for evaluating structural weakness in polymer matrix composites
Wachter, E.A.; Fisher, W.G.
1996-01-09
A method and apparatus for evaluating structural weaknesses in polymer matrix composites is described. An object to be studied is illuminated with laser radiation and fluorescence emanating therefrom is collected and filtered. The fluorescence is then imaged and the image is studied to determine fluorescence intensity over the surface of the object being studied and the wavelength of maximum fluorescent intensity. Such images provide a map of the structural integrity of the part being studied and weaknesses, particularly weaknesses created by exposure of the object to heat, are readily visible in the image. 6 figs.
Method and apparatus for evaluating structural weakness in polymer matrix composites
Wachter, Eric A.; Fisher, Walter G.
1996-01-01
A method and apparatus for evaluating structural weaknesses in polymer matrix composites is described. An object to be studied is illuminated with laser radiation and fluorescence emanating therefrom is collected and filtered. The fluorescence is then imaged and the image is studied to determine fluorescence intensity over the surface of the object being studied and the wavelength of maximum fluorescent intensity. Such images provide a map of the structural integrity of the part being studied and weaknesses, particularly weaknesses created by exposure of the object to heat, are readily visible in the image.
Theory and Applications of Weakly Interacting Markov Processes
2018-02-03
Moderate deviation principles for stochastic dynamical systems. Boston University, Math Colloquium, March 27, 2015. • Moderate Deviation Principles for... Markov chain approximation method. Submitted. [8] E. Bayraktar and M. Ludkovski. Optimal trade execution in illiquid markets. Math. Finance, 21(4):681–701, 2011. [9] E. Bayraktar and M. Ludkovski. Liquidation in limit order books with controlled intensity. Math. Finance, 24(4):627–650, 2014. [10] P.D
Weak Galilean invariance as a selection principle for coarse-grained diffusive models.
Cairoli, Andrea; Klages, Rainer; Baule, Adrian
2018-05-29
How does the mathematical description of a system change in different reference frames? Galilei first addressed this fundamental question by formulating the famous principle of Galilean invariance. It prescribes that the equations of motion of closed systems remain the same in different inertial frames related by Galilean transformations, thus imposing strong constraints on the dynamical rules. However, real world systems are often described by coarse-grained models integrating complex internal and external interactions indistinguishably as friction and stochastic forces. Since Galilean invariance is then violated, there is seemingly no alternative principle to assess a priori the physical consistency of a given stochastic model in different inertial frames. Here, starting from the Kac-Zwanzig Hamiltonian model generating Brownian motion, we show how Galilean invariance is broken during the coarse-graining procedure when deriving stochastic equations. Our analysis leads to a set of rules characterizing systems in different inertial frames that have to be satisfied by general stochastic models, which we call "weak Galilean invariance." Several well-known stochastic processes are invariant in these terms, except the continuous-time random walk for which we derive the correct invariant description. Our results are particularly relevant for the modeling of biological systems, as they provide a theoretical principle to select physically consistent stochastic models before a validation against experimental data.
Weak characteristic information extraction from early fault of wind turbine generator gearbox
NASA Astrophysics Data System (ADS)
Xu, Xiaoli; Liu, Xiuli
2017-09-01
Given the weak early degradation characteristic information during early fault evolution in the gearbox of a wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in loss of useful information. A weak characteristic information extraction method based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: determine the denoising order based on the cumulative contribution rate, perform signal reconstruction, extract and subject the noisy part of the signal to LMD and μ-SVD denoising, and obtain the denoised signal through superposition. Experimental results show that this method can significantly weaken signal noise, effectively extract the weak characteristic information of early faults, and facilitate early fault warning and dynamic predictive maintenance.
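The SVD-denoising step with truncation order chosen by cumulative contribution rate can be sketched on a synthetic signal. This is an illustrative sketch only, not the paper's μ-SVD/LMD pipeline: the window length, the 80% contribution-rate threshold, and the sine "fault signature" are all invented.

```python
import numpy as np

def svd_denoise(x, window, rate=0.80):
    """Truncated-SVD denoising of a 1-D signal via its Hankel matrix.

    The truncation order k is the smallest rank whose cumulative
    contribution rate (of squared singular values) reaches `rate`.
    """
    n = len(x)
    cols = n - window + 1
    H = np.array([x[i:i + cols] for i in range(window)])  # Hankel: H[i, j] = x[i + j]
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(energy, rate)) + 1
    Hk = (U[:, :k] * s[:k]) @ Vt[:k]
    # Anti-diagonal averaging maps the rank-k matrix back to a signal.
    out = np.zeros(n)
    cnt = np.zeros(n)
    for i in range(window):
        out[i:i + cols] += Hk[i]
        cnt[i:i + cols] += 1
    return out / cnt

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)
clean = np.sin(2 * np.pi * 25 * t)               # stand-in periodic component
noisy = clean + 0.3 * rng.standard_normal(t.size)
den = svd_denoise(noisy, window=80)              # much closer to `clean`
```

For a pure sinusoid the Hankel matrix is rank 2, so the contribution-rate criterion keeps two singular values and discards the noise subspace.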
NASA Astrophysics Data System (ADS)
Bergé, Joel; Brax, Philippe; Métris, Gilles; Pernot-Borràs, Martin; Touboul, Pierre; Uzan, Jean-Philippe
2018-04-01
The existence of a light or massive scalar field with a coupling to matter weaker than gravitational strength is a possible source of violation of the weak equivalence principle. We use the first results on the Eötvös parameter by the MICROSCOPE experiment to set new constraints on such scalar fields. For a massive scalar field of mass smaller than 10^-12 eV (i.e., range larger than a few 10^5 m), we improve existing constraints by one order of magnitude to |α| < 10^-11 if the scalar field couples to the baryon number and to |α| < 10^-12 if the scalar field couples to the difference between the baryon and the lepton numbers. We also consider a model describing the coupling of a generic dilaton to the standard matter fields with five parameters, for a light field: We find that, for masses smaller than 10^-12 eV, the constraints on the dilaton coupling parameters are improved by one order of magnitude compared to previous equivalence-principle tests.
Bergé, Joel; Brax, Philippe; Métris, Gilles; Pernot-Borràs, Martin; Touboul, Pierre; Uzan, Jean-Philippe
2018-04-06
The existence of a light or massive scalar field with a coupling to matter weaker than gravitational strength is a possible source of violation of the weak equivalence principle. We use the first results on the Eötvös parameter by the MICROSCOPE experiment to set new constraints on such scalar fields. For a massive scalar field of mass smaller than 10^{-12} eV (i.e., range larger than a few 10^{5} m), we improve existing constraints by one order of magnitude to |α|<10^{-11} if the scalar field couples to the baryon number and to |α|<10^{-12} if the scalar field couples to the difference between the baryon and the lepton numbers. We also consider a model describing the coupling of a generic dilaton to the standard matter fields with five parameters, for a light field: We find that, for masses smaller than 10^{-12} eV, the constraints on the dilaton coupling parameters are improved by one order of magnitude compared to previous equivalence principle tests.
2002-12-01
Accounting and Reporting System-Field Level; SWOT (Strengths, Weaknesses, Opportunities, Threats); TMA (Tricare Management Activity); TOA (Total Obligational...) progression of the four principles. [Ref 3] The organization uses SWOT analysis to assist in developing the mission and business... strategy. SWOT stands for the strengths and weaknesses of the organization and the opportunities for and threats to the organization
Necessary optimality conditions for infinite dimensional state constrained control problems
NASA Astrophysics Data System (ADS)
Frankowska, H.; Marchini, E. M.; Mazzola, M.
2018-06-01
This paper is concerned with first-order necessary optimality conditions for state-constrained control problems in separable Banach spaces. Assuming inward-pointing conditions on the constraint, we give a simple proof of the Pontryagin maximum principle, relying on infinite-dimensional neighboring feasible trajectories theorems proved in [20]. Further, we provide sufficient conditions guaranteeing normality of the maximum principle. We work in the abstract semigroup setting, but nevertheless we apply our results to several concrete models involving controlled PDEs. Pointwise state constraints (such as positivity of the solutions) are allowed.
DeWitt, J.B.; Springer, P.F.
1957-01-01
Short paper that reviews some of the facts about effects of insecticides on wildlife and states principles that should be followed for maximum safety in treatment. These principles include minimal doses, good ground-to-plane control to avoid overdoses, and least possible pollution of water areas.
DeWitt, J.B.; Springer, P.F.
1958-01-01
Short paper that reviews some of the facts about effects of insecticides on wildlife and states principles that should be followed for maximum safety in treatment. These principles include minimal doses, good ground-to-plane control to avoid overdoses, and least possible pollution of water areas.
Meta-Analyses of Seven of NIDA’s Principles of Drug Addiction Treatment
Pearson, Frank S.; Prendergast, Michael L.; Podus, Deborah; Vazan, Peter; Greenwell, Lisa; Hamilton, Zachary
2011-01-01
Seven of the 13 Principles of Drug Addiction Treatment disseminated by the National Institute on Drug Abuse (NIDA) were meta-analyzed as part of the Evidence-based Principles of Treatment (EPT) project. By averaging outcomes over the diverse programs included in EPT, we found that five of the NIDA principles examined are supported: matching treatment to the client’s needs; attending to the multiple needs of clients; behavioral counseling interventions; treatment plan reassessment; and counseling to reduce risk of HIV. Two of the NIDA principles are not supported: remaining in treatment for an adequate period of time and frequency of testing for drug use. These weak effects could be the result of the principles being stated too generally to apply to the diverse interventions and programs that exist or of unmeasured moderator variables being confounded with the moderators that measured the principles. Meta-analysis should be a standard tool for developing principles of effective treatment for substance use disorders. PMID:22119178
Pearson, Frank S; Prendergast, Michael L; Podus, Deborah; Vazan, Peter; Greenwell, Lisa; Hamilton, Zachary
2012-07-01
Of the 13 principles of drug addiction treatment disseminated by the National Institute on Drug Abuse (NIDA), 7 were meta-analyzed as part of the Evidence-based Principles of Treatment (EPT) project. By averaging outcomes over the diverse programs included in the EPT, we found that 5 of the NIDA principles examined are supported: matching treatment to the client's needs, attending to the multiple needs of clients, behavioral counseling interventions, treatment plan reassessment, and counseling to reduce risk of HIV. Two of the NIDA principles are not supported: remaining in treatment for an adequate period and frequency of testing for drug use. These weak effects could be the result of the principles being stated too generally to apply to the diverse interventions and programs that exist or unmeasured moderator variables being confounded with the moderators that measured the principles. Meta-analysis should be a standard tool for developing principles of effective treatment for substance use disorders. Copyright © 2012 Elsevier Inc. All rights reserved.
Bayesian structural equation modeling in sport and exercise psychology.
Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus
2015-08-01
Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.
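The effect of a weakly informative prior can be seen in a deliberately minimal conjugate example (not a full Bayesian SEM or the Sport Motivation Scale analysis): estimating a single cross-loading-like coefficient λ in y = λx + ε, a zero-centered normal prior shrinks the estimate toward zero relative to maximum likelihood. All numbers below (sample size, variances, the "true" loading) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50
x = rng.standard_normal(n)
sigma = 1.0                  # residual s.d., assumed known for the conjugate form
lam_true = 0.3               # a small cross-loading-like coefficient (invented)
y = lam_true * x + sigma * rng.standard_normal(n)

# Maximum likelihood (ordinary least squares) estimate of lambda.
lam_ml = (x @ y) / (x @ x)

# Posterior mean under a weakly informative prior lambda ~ N(0, tau2):
# a precision-weighted average of the prior mean (0) and the data.
tau2 = 0.05
lam_bayes = (x @ y / sigma**2) / (x @ x / sigma**2 + 1.0 / tau2)

print(lam_ml, lam_bayes)  # the Bayesian estimate is pulled toward zero
```

In a full Bayesian SEM the same shrinkage is applied simultaneously to many cross-loadings and residual correlations, which is what lets the model remain identified while freeing parameters a maximum-likelihood CFA must fix at zero.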
A weak Hamiltonian finite element method for optimal guidance of an advanced launch vehicle
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Calise, Anthony J.; Bless, Robert R.; Leung, Martin
1989-01-01
A temporal finite-element method based on a mixed form of the Hamiltonian weak principle is presented for optimal control problems. The mixed form of this principle contains both states and costates as primary variables, which are expanded in terms of nodal values and simple shape functions. Time derivatives of the states and costates do not appear in the governing variational equation; the only quantities whose time derivatives appear therein are virtual states and virtual costates. Numerical results are presented for an elementary trajectory optimization problem; they show very good agreement with the exact solution along with excellent computational efficiency and self-starting capability. The feasibility of this approach for real-time guidance applications is evaluated. A simplified model for an advanced launch vehicle application that is suitable for finite-element solution is presented.
NASA Astrophysics Data System (ADS)
Moriarty, John A.
1988-08-01
The first-principles, density-functional version of the generalized pseudopotential theory (GPT) developed in papers I and II of this series [Phys. Rev. B 16, 2537 (1977); 26, 1754 (1982)] for empty- and filled-d-band metals is here extended to pure transition metals with partially filled d bands. The present focus is on a rigorous, real-space expansion of the bulk total energy in terms of widely transferable, structure-independent interatomic potentials, including both central-force pair interactions and angular-force triplet and quadruplet interactions. To accomplish this expansion, a specialized set of starting equations is derived from the basic local-density formalism for a pure metal, including refined expansions for the exchange-correlation terms and a simplified yet accurate representation of the cohesive energy. The parent pseudo-Green's-function formalism of the GPT is then used to develop these equations in a plane-wave, localized-d-state basis. In this basis, the cohesive energy divides quite naturally into a large volume component and a smaller structural component. The volume component,which includes all one-ion intra-atomic energy contributions, already gives a good description of the cohesion in lowest order. The structural component is expanded in terms of weak interatomic matrix elements and gives rise to a multi-ion series which establishes the interatomic potentials. Special attention is focused on the dominant d-electron contributions to this series and complete formal results for the two-ion, three-ion, and four-ion d-state potentials (vd2, vd3, and vd4) are derived. In addition, a simplified model is used to demonstrate that while vd3 can be of comparable importance to vd2, vd4 is inherently small and the series is rapidly convergent beyond three-ion interactions. Analytic model forms are also derived for vd2 and vd3 in the case of canonical d bands. 
In this limit, vd2 is purely attractive and varies with interatomic distance as r^-10, while vd3 is weak and attractive for almost empty or filled d bands and maximum in strength and repulsive for half-filled d bands. Full first-principles expressions are then developed for the total two-ion and three-ion potentials and implemented for all 20 3d and 4d transition metals. The first-principles potentials qualitatively display all of the trends predicted by the model results, but they also reflect additional effects, including long-range hybridization tails which must be suitably screened in real-space calculations. Finally, illustrative application of the first-principles potentials is made to the calculation of the [100] phonon spectrum for V and Cr, where the importance of three-ion angular forces is explicitly demonstrated.
Code of Federal Regulations, 2010 CFR
2010-01-01
... costs. These cost principles shall apply to transactions and activities conducted under grants... AGRICULTURE UNIFORM FEDERAL ASSISTANCE REGULATIONS Cost Principles § 3015.190 Scope. This subpart makes the allowable costs incurred by the recipient the maximum amount of money a recipient is entitled to receive...
NASA Technical Reports Server (NTRS)
Gentry, R. C.; Rodgers, E.; Steranka, J.; Shenk, W. E.
1978-01-01
A regression technique was developed to forecast 24-hour changes in the maximum winds of weak (maximum winds less than or equal to 65 kt) and strong (maximum winds greater than 65 kt) tropical cyclones by utilizing satellite-measured equivalent blackbody temperatures around the storm, alone and together with the changes in maximum winds during the preceding 24 hours and the current maximum winds. Independent testing of these regression equations shows that their mean errors are lower than those of forecasts made by persistence techniques.
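The regression setup can be mimicked on synthetic data. Everything below is illustrative: the predictor scales, the generating coefficients, and the noise level are invented, not the study's values; the point is only that ordinary least squares on the three predictor types named in the abstract recovers the generating relationship.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
tbb = rng.normal(-60.0, 10.0, n)   # equivalent blackbody temperature (invented scale)
dv24 = rng.normal(0.0, 15.0, n)    # change in max winds over preceding 24 h (kt)
v0 = rng.uniform(20.0, 65.0, n)    # current maximum winds (kt)

# Design matrix with intercept; target is the next 24-h change in max winds.
X = np.column_stack([np.ones(n), tbb, dv24, v0])
beta_true = np.array([5.0, -0.3, 0.4, 0.1])  # invented coefficients
y = X @ beta_true + rng.normal(0.0, 2.0, n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
forecast = X @ beta_hat
```

The study's actual equations were fit separately for weak and strong storms; in this sketch that simply means fitting the same least-squares problem on the two subsamples.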
NASA Astrophysics Data System (ADS)
Benfenati, Francesco; Beretta, Gian Paolo
2018-04-01
We show that to prove the Onsager relations using the microscopic time reversibility one necessarily has to make an ergodic hypothesis, or a hypothesis closely linked to that. This is true in all the proofs of the Onsager relations in the literature: from the original proof by Onsager, to more advanced proofs in the context of linear response theory and the theory of Markov processes, to the proof in the context of the kinetic theory of gases. The only three proofs that do not require any kind of ergodic hypothesis are based on additional hypotheses on the macroscopic evolution: Ziegler's maximum entropy production principle (MEPP), the principle of time reversal invariance of the entropy production, or the steepest entropy ascent principle (SEAP).
A review of the generalized uncertainty principle.
Tawfik, Abdel Nasser; Diab, Abdel Magied
2015-12-01
Based on string theory, black hole physics, doubly special relativity and some 'thought' experiments, minimal distance and/or maximum momentum are proposed. As alternatives to the generalized uncertainty principle (GUP), the modified dispersion relation, the space noncommutativity, the Lorentz invariance violation, and the quantum-gravity-induced birefringence effects are summarized. The origin of minimal measurable quantities and the different GUP approaches are reviewed and the corresponding observations are analysed. Bounds on the GUP parameter are discussed and implemented in the understanding of recent PLANCK observations of cosmic inflation. The higher-order GUP approaches predict minimal length uncertainty with and without maximum momenta. Possible arguments against the GUP are discussed; for instance, the concern about its compatibility with the equivalence principles, the universality of gravitational redshift and the free fall and law of reciprocal action are addressed.
Nearfield acoustic holography. I - Theory of generalized holography and the development of NAH
NASA Technical Reports Server (NTRS)
Maynard, J. D.; Williams, E. G.; Lee, Y.
1985-01-01
Because its underlying principles are so fundamental, holography has been studied and applied in many areas of science. Recently, a technique has been developed which takes the maximum advantage of the fundamental principles and extracts much more information from a hologram than is customarily associated with such a measurement. In this paper the fundamental principles of holography are reviewed, and a sound radiation measurement system, called nearfield acoustic holography (NAH), which fully exploits the fundamental principles, is described.
Use of Desired Student Outcomes in Devising Agronomic Curricula.
ERIC Educational Resources Information Center
Grabau, L. J.
1990-01-01
Four models which illustrate potential orientations for baccalaureate programs in agronomy are presented. Included are Technical Training; Information Transfer; Principles Application; and Systems Agronomy. Strengths and weaknesses of each program are discussed. (CW)
Schmidt, Jürgen
2005-01-01
Workers' autobiographies of the late 19th and early 20th centuries depict diseases at length, both in terms of physical description and impact and in terms of psychological effects. Drastic physical defects and their consequences are described explicitly. Many writers appear weak when measured against the prevailing presumption of the strong, male worker's body. Mourning and dejection over the authors' own weaknesses and the illnesses of others (relatives and colleagues) are prevalent. However, the masculinity of the first-person narrator is, in principle, not cast into doubt by disease or a weakened physical condition. Diseases serve as metaphors for bad social conditions that lead to weakness, while the authors succeeded in coping with their weaknesses by compensating with other abilities and talents.
COMPLEMENTARITY OF ECOLOGICAL GOAL FUNCTIONS
This paper summarizes, in the framework of network environ analysis, a set of analyses of energy-matter flow and storage in steady state systems. The network perspective is used to codify and unify ten ecological orientors or extremal principles: maximum power (Lotka), maximum st...
Lehmann, A; Scheffler, Ch; Hermanussen, M
2010-02-01
Recent progress in modelling individual growth has been achieved by combining principal component analysis and the maximum likelihood principle. This combination models growth even in incomplete sets of data and in data obtained at irregular intervals. We re-analysed late-18th-century longitudinal growth of German boys from the boarding school Carlsschule in Stuttgart. The boys, aged 6-23 years, were measured at irregular 3-12 monthly intervals during the period 1771-1793. At the age of 18 years, mean height was 1652 mm, but height variation was large. The shortest boy reached 1474 mm, the tallest 1826 mm. Measured height closely paralleled modelled height, with a mean difference of 4 mm, SD 7 mm. Seasonal height variation was found: low growth rates occurred in spring and high growth rates in summer and autumn. The present study demonstrates that combining principal component analysis and the maximum likelihood principle enables growth modelling also in historical height data. Copyright (c) 2009 Elsevier GmbH. All rights reserved.
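The core idea, using principal components to complete incomplete, irregularly sampled growth records, can be sketched with a generic iterative SVD imputation on synthetic low-rank "height" data. This is an illustrative stand-in, not the authors' estimator: the rank, cohort size, mean curve, and noise level are all invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n_boys, n_ages = 60, 12
ages = np.linspace(6.0, 18.0, n_ages)
# Synthetic growth data: a mean curve plus two principal components (invented).
mean_curve = 1100.0 + 45.0 * (ages - 6.0)  # mm
pc = np.vstack([np.linspace(-1.0, 1.0, n_ages),
                np.sin(np.linspace(0.0, np.pi, n_ages))])
scores = rng.normal(0.0, [40.0, 15.0], (n_boys, 2))
heights = mean_curve + scores @ pc + rng.normal(0.0, 3.0, (n_boys, n_ages))

# Irregular measurement schedule: drop 30% of entries at random.
mask = rng.random(heights.shape) < 0.3
obs = heights.copy()
obs[mask] = np.nan

# Iterative rank-2 SVD imputation (EM-style): fill, fit, refill.
filled = np.where(mask, np.nanmean(obs, axis=0), obs)
for _ in range(50):
    mu = filled.mean(axis=0)
    U, s, Vt = np.linalg.svd(filled - mu, full_matrices=False)
    model = mu + (U[:, :2] * s[:2]) @ Vt[:2]
    filled = np.where(mask, model, obs)

# Compare imputation error against the naive column-mean fill.
rmse0 = np.sqrt(np.mean((np.nanmean(obs, axis=0)[None, :] - heights)[mask] ** 2))
rmse = np.sqrt(np.mean((filled - heights)[mask] ** 2))
```

Because each boy's deviation from the mean curve lives in a low-dimensional component space, a handful of measurements per boy suffices to pin down his scores and hence his full modelled curve.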
A minimum entropy principle in the gas dynamics equations
NASA Technical Reports Server (NTRS)
Tadmor, E.
1986-01-01
Let u(x̄, t) be a weak solution of the Euler equations governing inviscid polytropic gas dynamics; in addition, u(x̄, t) is assumed to respect the usual entropy conditions connected with the conservative Euler equations. We show that such entropy solutions of the gas dynamics equations satisfy a minimum entropy principle, namely, that the spatial minimum of their specific entropy, ess inf_x s(u(x, t)), is an increasing function of time. This principle equally applies to discrete approximations of the Euler equations such as the Godunov-type and Lax-Friedrichs schemes. Our derivation of this minimum principle makes use of the fact that there is a family of generalized entropy functions connected with the conservative Euler equations.
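The discrete statement can be checked numerically. The sketch below (illustrative choices throughout: grid size, CFL number, and Sod-type Riemann data are arbitrary) advances the 1-D Euler equations with the Lax-Friedrichs scheme and records the spatial minimum of the specific entropy s = ln(p/ρ^γ) at every step; the recorded minima never decrease.

```python
import numpy as np

gamma = 1.4

def specific_entropy(U):
    rho, mom, E = U
    p = (gamma - 1.0) * (E - 0.5 * mom**2 / rho)
    return np.log(p / rho**gamma)

def lax_friedrichs_step(U, dx, dt):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)
    F = np.array([mom, mom * u + p, (E + p) * u])
    out = U.copy()
    out[:, 1:-1] = (0.5 * (U[:, :-2] + U[:, 2:])
                    - dt / (2.0 * dx) * (F[:, 2:] - F[:, :-2]))
    return out  # boundary cells held fixed; waves never reach them here

# Sod-type Riemann data on [0, 1], fluid initially at rest.
N = 200
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx
rho = np.where(x < 0.5, 1.0, 0.125)
p = np.where(x < 0.5, 1.0, 0.1)
U = np.array([rho, np.zeros(N), p / (gamma - 1.0)])

mins = [specific_entropy(U).min()]
t, t_end = 0.0, 0.15
while t < t_end - 1e-12:
    rho, mom, E = U
    u = mom / rho
    c = np.sqrt(gamma * (gamma - 1.0) * (E / rho - 0.5 * u**2))  # sound speed
    dt = min(0.4 * dx / np.max(np.abs(u) + c), t_end - t)        # CFL 0.4
    U = lax_friedrichs_step(U, dx, dt)
    t += dt
    mins.append(specific_entropy(U).min())

mins = np.array(mins)  # spatial minimum of specific entropy at each step
```

This is exactly the scheme-level form of the principle: each Lax-Friedrichs cell average has specific entropy no smaller than the minimum over its stencil, so the global spatial minimum is non-decreasing in time.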
A Parametric Oscillator Experiment for Undergraduates
NASA Astrophysics Data System (ADS)
Huff, Alison; Thompson, Johnathon; Pate, Jacob; Kim, Hannah; Chiao, Raymond; Sharping, Jay
We describe an upper-division undergraduate-level analytic mechanics experiment or classroom demonstration of a weakly-damped pendulum driven into parametric resonance. Students can derive the equations of motion from first principles and extract key oscillator features, such as quality factor and parametric gain, from experimental data. The apparatus is compact, portable and easily constructed from inexpensive components. Motion control and data acquisition are accomplished using an Arduino micro-controller incorporating a servo motor, laser sensor, and data logger. We record the passage time of the pendulum through its equilibrium position and obtain the maximum speed per oscillation as a function of time. As examples of the interesting physics which the experiment reveals, we present contour plots depicting the energy of the system as functions of driven frequency and modulation depth. We observe the transition to steady state oscillation and compare the experimental oscillation threshold with theoretical expectations. A thorough understanding of this hands-on laboratory exercise provides a foundation for current research in quantum information and opto-mechanics, where damped harmonic motion, quality factor, and parametric amplification are central.
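The threshold behavior at the heart of the experiment can be reproduced with a linearized, damped Mathieu-type model (a numerical sketch, not the authors' Arduino apparatus; all parameter values are invented). For modulation depth h above roughly 4β/ω₀ the parametric gain beats the damping and the oscillation grows; below that, it decays.

```python
import numpy as np

def simulate(h, omega0=2 * np.pi, beta_ratio=0.05, t_end=20.0, dt=1e-3):
    """RK4 integration of x'' + 2*beta*x' + omega0^2*(1 + h*cos(2*omega0*t))*x = 0."""
    beta = beta_ratio * omega0

    def deriv(t, y):
        x, v = y
        a = -2.0 * beta * v - omega0**2 * (1.0 + h * np.cos(2.0 * omega0 * t)) * x
        return np.array([v, a])

    y = np.array([1e-3, 0.0])  # small initial displacement, at rest
    n = int(t_end / dt)
    xs = np.empty(n)
    t = 0.0
    for i in range(n):
        k1 = deriv(t, y)
        k2 = deriv(t + dt / 2, y + dt / 2 * k1)
        k3 = deriv(t + dt / 2, y + dt / 2 * k2)
        k4 = deriv(t + dt, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        xs[i] = y[0]
    return xs

above = simulate(h=0.4)   # above threshold h_c = 4*beta/omega0 = 0.2: grows
below = simulate(h=0.05)  # below threshold: parametric gain loses to damping
```

Sweeping h and the drive frequency in this model produces the same kind of gain contours the experiment maps out, with the steady-state threshold at the boundary between growing and decaying solutions.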
Spiking computation and stochastic amplification in a neuron-like semiconductor microstructure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samardak, A. S.; Laboratory of Thin Film Technologies, Far Eastern Federal University, Vladivostok 690950; Nogaret, A.
2011-05-15
We have demonstrated the proof of principle of a semiconductor neuron, which has dendrites, axon, and a soma and computes information encoded in electrical pulses in the same way as biological neurons. Electrical impulses applied to dendrites diffuse along microwires to the soma. The soma is the active part of the neuron, which regenerates input pulses above a voltage threshold and transmits them into the axon. Our concept of neuron is a major step forward because its spatial structure controls the timing of pulses, which arrive at the soma. Dendrites and axon act as transmission delay lines, which modify the information coded in the timing of pulses. We have finally shown that noise enhances the detection sensitivity of the neuron by helping the transmission of weak periodic signals. A maximum enhancement of signal transmission was observed at an optimum noise level known as stochastic resonance. The experimental results are in excellent agreement with simulations of the FitzHugh-Nagumo model. Our neuron is therefore extremely well suited to providing feedback on the various mathematical approximations of neurons and building functional networks.
Maximum Tsallis entropy with generalized Gini and Gini mean difference indices constraints
NASA Astrophysics Data System (ADS)
Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.
2017-04-01
Using the maximum entropy principle with Tsallis entropy, some distribution families for modeling income distribution are obtained. By considering income inequality measures, maximum Tsallis entropy distributions under constraints on the generalized Gini and Gini mean difference indices are derived. It is shown that the Tsallis entropy maximizers with the considered constraints belong to the generalized Pareto family.
Rectification of graphene self-switching diodes: First-principles study
NASA Astrophysics Data System (ADS)
Ghaziasadi, Hassan; Jamasb, Shahriar; Nayebi, Payman; Fouladian, Majid
2018-05-01
First-principles calculations based on self-consistent charge density-functional tight-binding were performed to investigate the electrical properties and rectification behavior of graphene self-switching diodes (GSSD). The devices comprise two structures, called CG-GSSD and DG-GSSD, whose side gates are metallic or semiconducting depending on whether they are single- or double-hydrogen edge-functionalized. We relaxed the devices and calculated I-V curves, transmission spectra and maximum rectification ratios. We found that the DG-MSM devices are more favorable and more stable, and that they have better maximum rectification ratios and currents. Moreover, by changing the side-gate widths and behaviors from semiconductor to metal, the threshold voltages under forward bias changed from +1.2 V to +0.3 V, and the maximum currents obtained ranged from 1.12 μA to 10.50 μA. Finally, the MSM and SSS types of all devices have the minimum and maximum values of threshold voltage and maximum rectification ratio, but the 769-DG devices do not obey this rule.
Entropy and equilibrium via games of complexity
NASA Astrophysics Data System (ADS)
Topsøe, Flemming
2004-09-01
It is suggested that thermodynamical equilibrium equals game theoretical equilibrium. Aspects of this thesis are discussed. The philosophy is consistent with the maximum entropy thinking of Jaynes, but goes one step deeper by deriving the maximum entropy principle from an underlying game theoretical principle. The games introduced are based on measures of complexity. Entropy is viewed as minimal complexity. It is demonstrated that Tsallis entropy (q-entropy) and Kaniadakis entropy (κ-entropy) can be obtained in this way, based on suitable complexity measures. A certain unifying effect is obtained by embedding these measures in a two-parameter family of entropy functions.
Fast polarization changes in mm microwave emission of weak multistructured solar bursts
NASA Technical Reports Server (NTRS)
Kaufmann, P.; Strauss, F. M.; Costa, J. E. R.; Dennis, B. R.
1982-01-01
Circular polarization of weak multistructured solar bursts was measured at mm microwaves with unprecedented sensitivity (0.03 sfu rms) and high time resolution (1 ms). It was shown that sudden changes occur in the degree of polarization on time scales of 0.04 to 0.3 s. In most cases the degree of polarization attained maximum values before the maximum flux in both mm microwaves and hard X-rays, with time scales of 0.04 to 1.0 s. The timing accuracy in determining the degree of polarization was 40 ms. Physical phenomena are discussed invoking one or a combination of various possible causes for the observed effects. The bursts at mm microwaves were weak compared to the contribution of the preexisting active regions, and therefore changes in magnetoionic propagation conditions for the emerging radiation play an important role in the observed effects. Composite effects due to more than one polarizing mechanism or more than one polarized spot within the antenna beam are discussed.
Large Deviations and Transitions Between Equilibria for Stochastic Landau-Lifshitz-Gilbert Equation
NASA Astrophysics Data System (ADS)
Brzeźniak, Zdzisław; Goldys, Ben; Jegaraj, Terence
2017-11-01
We study a stochastic Landau-Lifshitz equation on a bounded interval and with finite dimensional noise. We first show that there exists a pathwise unique solution to this equation and that this solution enjoys the maximal regularity property. Next, we prove the large deviations principle for the small noise asymptotic of solutions using the weak convergence method. An essential ingredient of the proof is the compactness, or weak to strong continuity, of the solution map for a deterministic Landau-Lifschitz equation when considered as a transformation of external fields. We then apply this large deviations principle to show that small noise can cause magnetisation reversal. We also show the importance of the shape anisotropy parameter for reducing the disturbance of the solution caused by small noise. The problem is motivated by applications from ferromagnetic nanowires to the fabrication of magnetic memories.
Implication of Two-Coupled Differential Van der Pol Duffing Oscillator in Weak Signal Detection
NASA Astrophysics Data System (ADS)
Peng, Hang-hang; Xu, Xue-mei; Yang, Bing-chu; Yin, Lin-zi
2016-04-01
The principle of the Van der Pol Duffing oscillator for state transition and for determining critical value is described, which has been studied to indicate that the application of the Van der Pol Duffing oscillator in weak signal detection is feasible. On the basis of this principle, an improved two-coupled differential Van der Pol Duffing oscillator is proposed which can identify signals under any frequency and ameliorate signal-to-noise ratio (SNR). The analytical methods of the proposed model and the construction of the proposed oscillator are introduced in detail. Numerical experiments on the properties of the proposed oscillator compared with those of the Van der Pol Duffing oscillator are carried out. Our numerical simulations have confirmed the analytical treatment. The results demonstrate that this novel oscillator has better detection performance than the Van der Pol Duffing oscillator.
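As a rough numerical sketch of the detection idea (the paper's improved two-coupled differential oscillator is not reproduced here), one can integrate a standard Holmes-type Duffing detector and compare trajectories with and without a weak increment to the drive amplitude. The parameter values below are illustrative, chosen near the commonly quoted chaotic-to-periodic threshold for this system; detection rests on the qualitative state transition between the two regimes:

```python
import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, y, k, gamma, omega, signal_amp):
    # Holmes-type Duffing detector: x'' + k*x' - x + x^3 = (gamma + s)*cos(omega*t)
    x, v = y
    drive = (gamma + signal_amp) * np.cos(omega * t)
    return [v, -k * v + x - x**3 + drive]

def trajectory(signal_amp, k=0.5, gamma=0.82, omega=1.0, t_end=100.0):
    # Integrate from rest; the detector state (chaotic vs. large-scale periodic)
    # depends on whether the total drive amplitude crosses the critical value.
    sol = solve_ivp(duffing, (0.0, t_end), [0.0, 0.0],
                    args=(k, gamma, omega, signal_amp),
                    max_step=0.05, rtol=1e-6, atol=1e-9)
    return sol.y[0]

x_ref = trajectory(0.0)    # reference drive, tuned just below the critical amplitude
x_sig = trajectory(0.05)   # a weak additive signal nudges the drive past threshold
```

In practice the two regimes are distinguished by inspecting the phase portrait or a spectral statistic of `x_ref` versus `x_sig`; the threshold drive value depends on the damping and would be calibrated for the actual oscillator used.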
Principles and Design of a Zeeman–Sisyphus Decelerator for Molecular Beams
Tarbutt, M. R.
2016-01-01
Abstract We explore a technique for decelerating molecules using a static magnetic field and optical pumping. Molecules travel through a spatially varying magnetic field and are repeatedly pumped into a weak‐field seeking state as they move towards each strong field region, and into a strong‐field seeking state as they move towards weak field. The method is time‐independent and so is suitable for decelerating both pulsed and continuous molecular beams. By using guiding magnets at each weak field region, the beam can be simultaneously guided and decelerated. By tapering the magnetic field strength in the strong field regions, and exploiting the Doppler shift, the velocity distribution can be compressed during deceleration. We develop the principles of this deceleration technique, provide a realistic design, use numerical simulations to evaluate its performance for a beam of CaF, and compare this performance to other deceleration methods. PMID:27629547
Parity violation and the masslessness of the neutrino
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mannheim, P.D.
1978-09-01
It is proposed that the weak interaction be obtained by gauging the strong interaction chiral flavor group. The neutrinos are then four-component spinors. Pairs of right-handed neutrinos are allowed to condense into the vacuum. This produces maximal parity violation in both the quark and lepton sectors of the weak interaction, keeps the neutrinos massless, and also leads to the conventional Weinberg mixing pattern. The approach also in principle provides a way of calculating the Cabibbo angle. 11 references.
Possible dynamical explanations for Paltridge's principle of maximum entropy production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virgo, Nathaniel, E-mail: nathanielvirgo@gmail.com; Ikegami, Takashi, E-mail: nathanielvirgo@gmail.com
2014-12-05
Throughout the history of non-equilibrium thermodynamics a number of theories have been proposed in which complex, far from equilibrium flow systems are hypothesised to reach a steady state that maximises some quantity. Perhaps the most celebrated is Paltridge's principle of maximum entropy production for the horizontal heat flux in Earth's atmosphere, for which there is some empirical support. There have been a number of attempts to derive such a principle from maximum entropy considerations. However, we currently lack a more mechanistic explanation of how any particular system might self-organise into a state that maximises some quantity. This is in contrast to equilibrium thermodynamics, in which models such as the Ising model have been a great help in understanding the relationship between the predictions of MaxEnt and the dynamics of physical systems. In this paper we show that, unlike in the equilibrium case, Paltridge-type maximisation in non-equilibrium systems cannot be achieved by a simple dynamical feedback mechanism. Nevertheless, we propose several possible mechanisms by which maximisation could occur. Showing that these occur in any real system is a task for future work. The possibilities presented here may not be the only ones. We hope that by presenting them we can provoke further discussion about the possible dynamical mechanisms behind extremum principles for non-equilibrium systems, and their relationship to predictions obtained through MaxEnt.
A novel conceptual framework for understanding the mechanism of adherence to long term therapies
Reach, Gérard
2008-01-01
The World Health Organization claimed recently that improving patient adherence to long term therapies would be more beneficial than any biomedical progress. First, however, we must understand its mechanisms. In this paper I propose a novel approach using concepts elaborated in a field rarely explored in medicine, the philosophy of mind. While conventional psychological models (eg, the Health Belief Model) provide explanations and predictions which have only a statistical value, the philosophical assumption that mental states (eg, beliefs) are causally efficient (mental causation) can provide the basis for a causal theory of health behaviors. This paper shows that nonadherence to long term therapies can be described as the medical expression of a philosophical concept, that is, weakness of will. I use philosophical explanations of this concept to suggest a mechanistic explanation of nonadherence. I propose that it results from the failure of two principles of rationality. First, a principle of continence, described by the philosopher Donald Davidson in his explanation of weakness of will. This principle exhorts us to act after having considered all available arguments and according to which option we consider best. However, patients conforming to this principle of continence should rationally be nonadherent. Indeed, when patients face a choice between adherence and nonadherence, they must decide, in general, between a large, but delayed reward (eg, health) and a small, but immediate reward (eg, smoking a cigarette). According to concepts elaborated by George Ainslie and Jon Elster, the force of our desires is strongly influenced by the proximity of reward. This inter-temporal choice theory on one hand, and the mere principle of continence on the other, should therefore lead to nonadherence. 
Nevertheless, adherence to long term therapies is possible, as a result of the intervention of an additional principle, the principle of foresight, which tells us to give priority to mental states oriented towards the future. PMID:19920939
Valente, Giordano; Taddei, Fulvia; Jonkers, Ilse
2013-09-03
The weakness of hip abductor muscles is related to lower-limb joint osteoarthritis, and joint overloading may increase the risk for disease progression. The relationship between muscle strength, structural joint deterioration and joint loading makes the latter an important parameter in the study of onset and follow-up of the disease. Since the relationship between hip abductor weakness and joint loading still remains an open question, the purpose of this study was to adopt a probabilistic modeling approach to give insights into how the weakness of hip abductor muscles, to the extent that normal gait remains unaltered, affects ipsilateral joint contact forces. A generic musculoskeletal model was scaled to each healthy subject included in the study, and the maximum force-generating capacity of each hip abductor muscle in the model was perturbed to evaluate how all physiologically possible configurations of hip abductor weakness affected the joint contact forces during walking. In general, the muscular system was able to compensate for abductor weakness. The reduced force-generating capacity of the abductor muscles affected joint contact forces to a mild extent, with 50th percentile mean differences up to 0.5 BW (maximum 1.7 BW). There were greater increases in the peak knee joint loads than in loads at the hip or ankle. Gluteus medius, particularly the anterior compartment, was the abductor muscle with the most influence on hip and knee loads. Further studies should assess if these increases in joint loading may affect initiation and progression of osteoarthritis. Copyright © 2013 Elsevier Ltd. All rights reserved.
What's Wrong With Conservation Education?
ERIC Educational Resources Information Center
Hobart, Willis L.
1972-01-01
Conservation and conservation education are critically examined, their definitions, foundations, principles, concepts, weaknesses, failures, and successes. It is concluded educators must first educate themselves, or environmental education will fail the same way conservation education has failed to educate Americans to the intricate…
A first-principles model for orificed hollow cathode operation
NASA Technical Reports Server (NTRS)
Salhi, A.; Turchi, P. J.
1992-01-01
A theoretical model describing the orificed hollow cathode discharge is presented. The approach adopted is based on a purely analytical formulation founded on first principles. The present model predicts the emission surface temperature and plasma properties such as electron temperature, number densities and plasma potential. In general, good agreement between theory and experiment is obtained. Comparison of the results with the available related experimental data shows a maximum difference of 10 percent in emission surface temperature, 20 percent in electron temperature and 35 percent in plasma potential. For the variation of the electron number density with the discharge current, a maximum discrepancy of 36 percent is obtained; however, for the variation with the cathode internal pressure, the predicted electron number density is higher than the experimental data by a maximum factor of 2.
Jarzynski equality in the context of maximum path entropy
NASA Astrophysics Data System (ADS)
González, Diego; Davis, Sergio
2017-06-01
In the global framework of finding an axiomatic derivation of nonequilibrium Statistical Mechanics from fundamental principles, such as the maximum path entropy (also known as the Maximum Caliber principle), this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy difference between two equilibrium thermodynamic states to the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality is performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social, financial and ecological systems.
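The equality itself is easy to check by direct sampling in an exactly solvable case. The sketch below is not the path-entropy derivation of the paper; it uses an instantaneous quench of a one-dimensional harmonic trap, for which exp(-βΔF) = sqrt(k0/k1) analytically:

```python
import numpy as np

# Instantaneous quench of a 1-D harmonic trap, stiffness k0 -> k1, at inverse
# temperature beta. Jarzynski: <exp(-beta*W)> over initial equilibrium samples
# equals exp(-beta*dF), which for this protocol is sqrt(k0/k1) in closed form.
rng = np.random.default_rng(0)
beta, k0, k1, n = 1.0, 1.0, 4.0, 400_000

x = rng.normal(0.0, np.sqrt(1.0 / (beta * k0)), size=n)  # x ~ equilibrium of V0
work = 0.5 * (k1 - k0) * x**2                            # W = V1(x) - V0(x)

jarzynski_avg = np.mean(np.exp(-beta * work))  # exponential work average
exact = np.sqrt(k0 / k1)                       # exp(-beta * dF) analytically
naive = np.exp(-beta * work.mean())            # Jensen bound: undershoots
```

Note the naive estimate exp(-β⟨W⟩) lies below the exponential average, illustrating the second-law inequality ⟨W⟩ ≥ ΔF that Jarzynski's identity sharpens.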
Other Questions with Respect to the Weak Equivalence Principle
NASA Astrophysics Data System (ADS)
Smarandache, Florentin
2017-01-01
A disc rotating at high speed will exert out-of-plane forces resembling an accelerating field. Is the principle of equivalence also applicable for this process? Will someone inside an elevator in free-fall and rotating around its vertical centre, feel a gravitational force? Or will he feel a gravitational force larger than what equivalence principle requires? Does the equivalence principle remain applicable here? An airplane flies at an altitude of 1 km. The co-pilot drops an elevator-room without a passenger inside it. After one second has elapsed, the co-pilot drops four grenades in the direction of the freely-falling elevator's path. The question: Will the grenades reach the elevator before it reaches the ground? If no, why? If yes, which grenade? How will the air resistance influence the outcome?
Foundations for a theory of gravitation theories
NASA Technical Reports Server (NTRS)
Thorne, K. S.; Lee, D. L.; Lightman, A. P.
1972-01-01
A foundation is laid for future analyses of gravitation theories. This foundation is applicable to any theory formulated in terms of geometric objects defined on a 4-dimensional spacetime manifold. The foundation consists of (1) a glossary of fundamental concepts; (2) a theorem that delineates the overlap between Lagrangian-based theories and metric theories; (3) a conjecture (due to Schiff) that the Weak Equivalence Principle implies the Einstein Equivalence Principle; and (4) a plausibility argument supporting this conjecture for the special case of relativistic, Lagrangian-based theories.
Electronic structure and microscopic model of CoNb2O6
NASA Astrophysics Data System (ADS)
Molla, Kaimujjaman; Rahaman, Badiur
2018-05-01
We present first-principles density functional calculations to determine the underlying spin model of CoNb2O6. The first-principles calculations identify the main superexchange interaction paths between Co spins in this compound. We discuss the nature of the exchange paths and provide quantitative estimates of the magnetic exchange couplings. A microscopic model based on analysis of the electronic structure of this system places it in the interesting class of weakly coupled, geometrically frustrated isosceles triangular Ising antiferromagnets.
de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie
2011-12-14
We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics
Kumwenda, Maureen; Nzala, Selestine; Zulu, Joseph M
2017-08-22
While health care needs assessments have been conducted among juveniles or adolescents by researchers in developed countries, assessments using an ethics framework, particularly in developing countries, are lacking. We analysed the health care needs among adolescents at the Nakambala Correctional Institution in Zambia, using the Beauchamp and Childress ethics framework. The ethics approach facilitated analysis of moral injustices or dilemmas triggered by health care needs at the individual (adolescent) level. The research team utilized 35 in-depth interviews with juveniles, 6 key informant interviews and 2 focus group discussions to collect data. We analysed the data using thematic analysis. The use of three sources of data facilitated triangulation. Common health problems included HIV/AIDS, STIs, flu, diarrhoea, rashes, and malaria. Although there are some health promotion strategies at the Nakambala Approved School, the respondents classified the health care system as inadequate. The unfavourable social context, which included crowded rooms and a lack of adolescent-friendly health services, unfairly exposed adolescents to several health risks and behaviours, thus undermining the ethics principle of social justice. In addition, the limited prioritisation of adolescent centres by the stakeholders and erratic funding also worsened injustices by weakening the health care system. Meanwhile, inadequate medical and drug supplies, a shortage of health workers in the nearby health facilities and weak referral systems excluded the juveniles from enjoying maximum health benefits, thus undermining adolescents' wellbeing or beneficence. Inadequate medical and drug supplies, as well as the non-availability of adolescent-friendly health services at the nearest health facility, not only affected the social justice and beneficence ethics principles but also threatened juveniles' privacy, liberty and confidentiality as well as autonomy with regard to health service utilisation.
Adequately addressing the health needs in correctional institutions may require adopting an ethics framework in conducting health needs assessment. An ethics approach is important because it facilitates understanding of moral dilemmas that arise due to health needs. Furthermore, strategies for addressing health needs related to one ethics principle may have a positive ripple effect over other health needs as the principles are intertwined thus facilitating a comprehensive response to health needs.
Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn; Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk
2017-06-15
In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.
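For orientation, the large deviation principle being established has the standard form: for a rate function $I$ and solutions $u^{\epsilon}$ of the small-noise equation, the two Laplace-type bounds below are proven (here via the Budhiraja-Dupuis weak convergence method rather than discretization estimates):

```latex
\limsup_{\epsilon \to 0} \, \epsilon \log \mathbb{P}\left(u^{\epsilon} \in F\right)
  \le -\inf_{u \in F} I(u) \quad \text{for every closed set } F,
\qquad
\liminf_{\epsilon \to 0} \, \epsilon \log \mathbb{P}\left(u^{\epsilon} \in G\right)
  \ge -\inf_{u \in G} I(u) \quad \text{for every open set } G.
```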
NASA Astrophysics Data System (ADS)
Wu, Xing-Gang; Shen, Jian-Ming; Du, Bo-Lun; Brodsky, Stanley J.
2018-05-01
As a basic requirement of renormalization group invariance, any physical observable must be independent of the choice of both the renormalization scheme and the initial renormalization scale. In this paper, we show that by using the newly suggested C-scheme coupling, one can demonstrate that the principle of maximum conformality prediction is scheme-independent to all orders for any renormalization scheme, thus satisfying all of the conditions of renormalization group invariance. We illustrate these features for the nonsinglet Adler function and for τ decay to ν + hadrons at the four-loop level.
NASA Astrophysics Data System (ADS)
Mardlijah; Jamil, Ahmad; Hanafi, Lukman; Sanjaya, Suharmadi
2017-09-01
Algae have many benefits; one is their use as a renewable and sustainable energy source. Greater algae growth increases biodiesel production, and the growth of algae is influenced by glucose, nutrients and the photosynthesis process. In this paper, the optimal control problem of the growth of algae is discussed. The objective function is to maximize the concentration of dry algae, while the controls are the flow of carbon dioxide and the nutrition. The solution is obtained by applying the Pontryagin Maximum Principle, and the results show that the concentration of algae increases by more than 15%.
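The paper's algae model is not reproduced here, but the structure of a Pontryagin Maximum Principle solution can be illustrated on a toy problem with a known answer: minimize the control effort ∫₀¹ u²/2 dt subject to x' = u, x(0) = 0, x(1) = 1. The PMP gives a constant costate and hence the constant control u*(t) = 1, which a direct discretization recovers:

```python
import numpy as np
from scipy.optimize import minimize

# Toy optimal control: minimize J = \int_0^1 u^2/2 dt, with x' = u, x(0)=0, x(1)=1.
# PMP: H = u^2/2 + p*u; dH/du = 0 gives u = -p, and p' = -dH/dx = 0 makes p
# constant, so the optimal control is constant; the endpoint forces u*(t) = 1.
N = 50
dt = 1.0 / N

def cost(u):
    return 0.5 * np.sum(u**2) * dt

def endpoint(u):
    # x(1) - 1 under forward-Euler dynamics x_{i+1} = x_i + u_i * dt
    return np.sum(u) * dt - 1.0

res = minimize(cost, np.zeros(N),
               constraints=[{"type": "eq", "fun": endpoint}], method="SLSQP")
u_opt = res.x   # should be approximately 1 everywhere, with cost 1/2
```

The same shooting-on-the-costate structure carries over to state equations like the algae growth dynamics, only with a nontrivial costate ODE.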
Optimal Control of the Valve Based on Traveling Wave Method in the Water Hammer Process
NASA Astrophysics Data System (ADS)
Cao, H. Z.; Wang, F.; Feng, J. L.; Tan, H. P.
2011-09-01
Valve regulation is an effective method for process control during the water hammer. The principle of d'Alembert's traveling wave theory was used in this paper to construct the exact analytical solution of the water hammer, and the optimal speed law of the valve that reduces the water hammer pressure to the maximum extent was obtained. Combining this law with the valve characteristic curve, the principle governing how the valve opening changes with time was obtained, which can be used to guide the process of valve closing and to reduce the water hammer pressure to the maximum extent.
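For orientation, the surge that a well-designed closure law avoids is bounded by the classical Joukowsky relation Δp = ρcΔv for an instantaneous velocity change at the valve; the numbers below are illustrative, not taken from the paper:

```python
# Joukowsky estimate of the water-hammer surge for an instantaneous change of
# flow velocity at the valve: dp = rho * c * dv. Illustrative values only.
rho = 1000.0   # water density, kg/m^3
c = 1200.0     # pressure-wave propagation speed in the pipe, m/s
dv = 2.0       # flow-velocity change at the valve, m/s

dp = rho * c * dv      # surge pressure, Pa
dp_bar = dp / 1e5      # same in bar; 24 bar for these numbers
```

A gradual, optimally scheduled closure spreads Δv over several wave round-trip times, which is why the optimal speed law derived from the traveling-wave solution can hold the peak pressure well below this bound.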
NASA Astrophysics Data System (ADS)
Bauer, Sebastian; Suchaneck, Andre; Puente León, Fernando
2014-01-01
Depending on the actual battery temperature, electrical power demands in general have a varying impact on the life span of a battery. As electrical energy provided by the battery is needed to temper it, the question arises at which temperature which amount of energy optimally should be utilized for tempering. Therefore, the objective function that has to be optimized contains both the goal to maximize life expectancy and to minimize the amount of energy used for obtaining the first goal. In this paper, Pontryagin's maximum principle is used to derive a causal control strategy from such an objective function. The derivation of the causal strategy includes the determination of major factors that rule the optimal solution calculated with the maximum principle. The optimization is calculated offline on a desktop computer for all possible vehicle parameters and major factors. For the practical implementation in the vehicle, it is sufficient to have the values of the major factors determined only roughly in advance and the offline calculation results available. This feature sidesteps the drawback of several optimization strategies that require the exact knowledge of the future power demand. The resulting strategy's application is not limited to batteries in electric vehicles.
Applications of the principle of maximum entropy: from physics to ecology.
Banavar, Jayanth R; Maritan, Amos; Volkov, Igor
2010-02-17
There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori.
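The classical version of the principle is easy to verify numerically: maximizing Shannon entropy over a discrete set of states with a fixed mean yields the Boltzmann form p_i ∝ exp(-λE_i). The sketch below, with arbitrary illustrative "energies", checks this by constrained optimization:

```python
import numpy as np
from scipy.optimize import minimize

# Maximize Shannon entropy subject to normalization and a fixed mean.
# The analytic maximizer is Boltzmann-like: p_i proportional to exp(-lam * E_i).
E = np.array([0.0, 1.0, 2.0, 3.0])   # illustrative state values
target = 1.0                          # illustrative mean constraint

def neg_entropy(p):
    return np.sum(p * np.log(p))      # minimize -H(p)

cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: p @ E - target}]
res = minimize(neg_entropy, np.full(len(E), 0.25),
               bounds=[(1e-10, 1.0)] * len(E), constraints=cons, method="SLSQP")
p = res.x

# Boltzmann check: log p_i should be affine in E_i, i.e. all successive
# log-ratios log(p[i+1]/p[i]) are equal (to -lam).
ratios = np.diff(np.log(p))
```

As the abstract stresses, the answer depends on the chosen variables and constraints; maximizing *relative* entropy against a non-uniform prior would tilt this result accordingly.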
A novel simultaneous streak and framing camera without principle errors
NASA Astrophysics Data System (ADS)
Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.
2018-02-01
A novel simultaneous streak and framing camera with continuous access has been developed; the complete information it records is important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena. The camera, with a maximum imaging frequency of 2 × 10^6 fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction with zero framing-frequency principle error for framing records, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136%-0.277% for streak records. The test data have verified the performance of the camera quantitatively. This camera, which simultaneously gains frames and streak with a parallax-free, identical time base, is characterized by a plane optical system at oblique incidence (as distinct from a space system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high quality pictures of detonation events.
Wigner's quantum phase-space current in weakly-anharmonic weakly-excited two-state systems
NASA Astrophysics Data System (ADS)
Kakofengitis, Dimitris; Steuernagel, Ole
2017-09-01
There are no phase-space trajectories for anharmonic quantum systems, but Wigner's phase-space representation of quantum mechanics features the Wigner current J. This current reveals fine details of quantum dynamics, finer than is ordinarily thought accessible according to quantum folklore invoking Heisenberg's uncertainty principle. Here, we focus on the simplest, most intuitive, and analytically accessible aspects of J. We investigate features of J for bound states of time-reversible, weakly-anharmonic one-dimensional quantum-mechanical systems which are weakly excited. We establish that weakly-anharmonic potentials can be grouped into three distinct classes: hard, soft, and odd potentials. We stress connections among these classes and with the harmonic case. We show that their Wigner current fieldline patterns can be characterised by J's discrete stagnation points, how these arise, and how a quantum system's dynamics is constrained by the stagnation points' topological charge conservation. We additionally show that quantum dynamics in phase space, in the case of vanishing Planck constant ℏ or vanishing anharmonicity, does not pointwise converge to classical dynamics.
Cooper, Valentino R.; Lee, Jun Hee; Krogel, Jaron T.; ...
2015-08-06
Multiferroic BiFeO3 exhibits excellent magnetoelectric coupling, critical for magnetic information processing with minimal power consumption. However, the degenerate nature of the easy spin axis in the (111) plane presents roadblocks for real-world applications. Here, we explore the stabilization and switchability of the weak ferromagnetic moments under applied epitaxial strain using a combination of first-principles calculations and group-theoretic analyses. We demonstrate that the antiferromagnetic moment vector can be stabilized along unique crystallographic directions ([110] and [-110]) under compressive and tensile strains. A direct coupling between the anisotropic antiferrodistortive rotations and Dzyaloshinskii-Moriya interactions drives the stabilization of weak ferromagnetism. Furthermore, energetically competing C- and G-type magnetic orderings are observed at high compressive strains, suggesting that it may be possible to switch the weak ferromagnetism on and off under application of strain. These findings emphasize the importance of strain and antiferrodistortive rotations as routes to enhancing induced weak ferromagnetism in multiferroic oxides.
NASA Technical Reports Server (NTRS)
Sohn, Byung-Ju; Smith, Eric A.
1993-01-01
The maximum entropy production principle suggested by Paltridge (1975) is applied to separating the satellite-determined required total transports into atmospheric and oceanic components. Instead of using the excessively restrictive equal energy dissipation hypothesis as a deterministic tool for separating transports between the atmosphere and ocean fluids, the satellite-inferred required 2D energy transports are imposed on Paltridge's energy balance model, which is then solved as a variational problem using the equal energy dissipation hypothesis only to provide an initial guess field. It is suggested that Southern Ocean transports are weaker than previously reported. It is argued that a maximum entropy production principle can serve as a governing rule on macroscale global climate, and, in conjunction with conventional satellite measurements of the net radiation balance, provides a means to decompose atmosphere and ocean transports from the total transport field.
School District Financial Management and Banking.
ERIC Educational Resources Information Center
Dembowski, Frederick L.; Davey, Robert D.
This chapter of "Principles of School Business Management" introduces the concept of cash management, or the process of managing an institution's moneys to ensure maximum cash availability and maximum yield on investments. Four activities are involved: (1) conversion of accounts receivable to cash receipts; (2) conversion of accounts payable to…
Department of Defense Performance and Accountability Report, Fiscal Year 2006
2006-11-15
FY 2006 with a total of 35, resulting in a net gain of one material weakness over FY 2005. Each weakness and their corrective action plans are...held due to statutory requirements for use in national defense, conservation, or national emergencies. The Annual Materials Plan lists the maximum...of non- materiality instances where planning for periods of crisis were not fully developed. (Office of the Under Secretary of Defense
Jones, Harrison N; Crisp, Kelly D; Moss, Tronda; Strollo, Katherine; Robey, Randy; Sank, Jeffrey; Canfield, Michelle; Case, Laura E; Mahler, Leslie; Kravitz, Richard M; Kishnani, Priya S
2014-01-01
Respiratory muscle weakness is a primary therapeutic challenge for patients with infantile Pompe disease. We previously described the clinical implementation of a respiratory muscle training (RMT) regimen in two adults with late-onset Pompe disease; both demonstrated marked increases in inspiratory and expiratory muscle strength in response to RMT. However, the use of RMT in pediatric survivors of infantile Pompe disease has not been previously reported. We report the effects of an intensive RMT program on maximum inspiratory pressure (MIP) and maximum expiratory pressure (MEP) using A-B-A (baseline-treatment-posttest) single subject experimental design in two pediatric survivors of infantile Pompe disease. Both subjects had persistent respiratory muscle weakness despite long-term treatment with alglucosidase alfa. Subject 1 demonstrated negligible to modest increases in MIP/MEP (6% increase in MIP, d=0.25; 19% increase in MEP, d=0.87), while Subject 2 demonstrated very large increases in MIP/MEP (45% increase in MIP, d=2.38; 81% increase in MEP, d=4.31). Following three-month RMT withdrawal, both subjects maintained these strength increases and demonstrated maximal MIP and MEP values at follow-up. Intensive RMT may be a beneficial treatment for respiratory muscle weakness in pediatric survivors of infantile Pompe disease.
NASA Astrophysics Data System (ADS)
Yu, Zi; Xu, Yan; Zhang, Gui-Qing; Hu, Tao-Ping
2018-04-01
In the framework of the relativistic mean field theory including the hyperon-hyperon (YY) interactions, protoneutron stars with a weakly interacting light U boson are studied. The U boson leads to an increase of the maximum mass of the star. The modification to the maximum mass by the U boson with the strong YY interaction is larger than that with the weak YY interaction. The maximum mass of the protoneutron star is less sensitive to the U boson than that of the neutron star. The inclusion of the U boson narrows down the mass window for the hyperonized protoneutron stars. As g²/μ² increases, the species of hyperons that can appear in a stable protoneutron star decrease. The rotation frequency, the redshift, the moment of inertia and the total neutrino fraction of PSR J1903-0327 are sensitive to the U boson and change with g²/μ² in an approximately linear trend. A possible way to constrain the coupling constants of the U boson is discussed. Supported by Jiangsu Province Natural Science Foundation Youth Fund of China under Grant No. Bk20140982, National Natural Science Foundation of China under Grant No. 11447165, Youth Innovation Promotion Association, Chinese Academy of Sciences under Grant No. 2016056, and the Development Project of Science and Technology of Jilin Province under Grant No. 20180520077JH
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth, or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and show, with the help of simple examples of well-known chemical and physical systems, that they are in error. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic; there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
NASA Astrophysics Data System (ADS)
Hall, Alex; Taylor, Andy
2017-06-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.
Collaborative, Sequential and Isolated Decisions in Design
NASA Technical Reports Server (NTRS)
Lewis, Kemper; Mistree, Farrokh
1997-01-01
The Massachusetts Institute of Technology (MIT) Commission on Industrial Productivity, in their report Made in America, found that six recurring weaknesses were hampering American manufacturing industries. The two weaknesses most relevant to product development were 1) technological weakness in development and production, and 2) failures in cooperation. The remedies to these weaknesses are considered the essential twin pillars of CE: 1) improved development process, and 2) closer cooperation. In the MIT report, it is recognized that total cooperation among teams in a CE environment is rare in American industry, while the majority of the design research in mathematically modeling CE has assumed total cooperation. In this paper, we present mathematical constructs, based on game theoretic principles, to model degrees of collaboration characterized by approximate cooperation, sequential decision making and isolation. The design of a pressure vessel and a passenger aircraft are included as illustrative examples.
Deciphering The Fall And Rise Of The Dead Sea In Relation To Solar Forcing
NASA Astrophysics Data System (ADS)
Yousef, Shahinaz M.
2005-03-01
Solar forcing on closed seas and lakes is space-time dependent. The cipher of the Dead Sea level variation since 1200 BC is solved in the context of millennium and Wolf-Gleissberg solar cycle time scales. It is found that the pattern of Dead Sea level variation follows the pattern of the major millennium solar cycles. The 70 m rise of the Dead Sea around 1 AD is due to the forcing of the maximum of the major millennium solar cycle. Although the pattern of Dead Sea level variation is almost identical to the major solar cycle pattern between 1100 and 1980 AD, there is a dating problem in the Dead Sea time series around 1100-1300 AD, a discrepancy that should be corrected for the solar and Dead Sea series to fit. Detailed variations of the Dead Sea level over the past 200 years are explained in terms of the 80-120 year solar Wolf-Gleissberg magnetic cycles. Solar-induced climate changes do happen at the turning points of those cycles. Those end-start and maximum turning points coincide with changes in the solar rotation rate due to the presence of weak solar cycles. Such weak cycles occur in series of a few cycles between the end and start of those Wolf-Gleissberg cycles. Another one or two weak solar cycles occur following the maximum of those Wolf-Gleissberg cycles. Weak cycles induce a drop in the energy budget emitted from the Sun and reaching the Earth, thus causing solar-induced climate change. A sudden 8 m rise of the Dead Sea occurred prior to 1900 AD due to positive solar forcing on the Dead Sea by the second cycle of the weak-cycle series. The same second weak cycle induced negative solar forcing on Lake Chad. The first weak solar cycle forced Lake Victoria to rise abruptly in 1878. The maximum turning point of the solar Wolf-Gleissberg cycle induced negative forcing on both the Aral Sea and the Dead Sea, causing their shrinkage to an alarmingly reduced area ever since.
On the other hand, positive forcing delayed by a few years caused Lake Chad and the equatorial African lakes to rise abruptly by several meters. Since the present solar cycle, number 23, is the first weak cycle of a series, and since it caused a sharp 1.6 m rise in Lake Victoria in 1997, there is a high probability that the Dead Sea will rise at the beginning of the second weak cycle in a few years' time. And since the Aral Sea and the Dead Sea have been very much in coherence since the late 1950s, it is rather likely that the Aral Sea will rise, with God's wish, in the near future. However, it is also demanded that Israel should allow more water of the Jordan River to feed the Dead Sea before its real death. Plans for joining the Dead Sea to the Red Sea and/or to the Mediterranean Sea should be cancelled, owing to the damage this would cause to the Dead Sea as a perfect indicator of solar-induced climate change. Moreover, the Dead Sea time series always shows abrupt changes that can be as high as 70 m; if we add to this a planned artificial rise of the Dead Sea to its level of the thirties, then a damaging flooding effect will ruin the establishments and the environment greatly.
Comprehensive Assessment of Children and Youth with ADHD.
ERIC Educational Resources Information Center
Burcham, Barbara G.; DeMers, Stephen T.
1995-01-01
Principles of comprehensive assessment of students with attention deficit hyperactivity disorder are examined in relation to legal compliance, special considerations for cultural diversity, medical diagnosis versus educational identification, and the problem-solving assessment model. Strengths and weaknesses of specific strategies are identified as…
Beyond the standard Higgs after the 125 GeV Higgs discovery.
Grojean, C
2015-01-13
An elementary weakly coupled and solitary Higgs boson allows one to extend the validity of the Standard Model up to very high energy, maybe as high as the Planck scale. Nonetheless, this scenario fails to fill the universe with dark matter and does not explain the matter-antimatter asymmetry. However, amending the Standard Model tends to destabilize the weak scale by large quantum corrections to the Higgs potential. New degrees of freedom, new forces, new organizing principles are required to provide a consistent and natural description of physics beyond the standard Higgs.
Robust interferometry against imperfections based on weak value amplification
NASA Astrophysics Data System (ADS)
Fang, Chen; Huang, Jing-Zheng; Zeng, Guihua
2018-06-01
Optical interferometry has been widely used in various high-precision applications. In practice, the achievable precision of an interferometer is usually limited by various technical noises. To suppress such noises, we propose a scheme which combines weak measurement with standard interferometry. The proposed scheme dramatically outperforms standard interferometry in signal-to-noise ratio and in robustness against noises caused by the optical elements' reflections and the offset fluctuation between the two paths. A proof-of-principle experiment is demonstrated to validate the amplification theory.
NASA Technical Reports Server (NTRS)
Balbus, Steven A.; Hawley, John F.
1991-01-01
A broad class of astronomical accretion disks is presently shown to be dynamically unstable to axisymmetric disturbances in the presence of a weak magnetic field, an insight with consequently broad applicability to gaseous, differentially-rotating systems. In the first part of this work, a linear analysis is presented of the instability, which is local and extremely powerful; the maximum growth rate, which is of the order of the angular rotation velocity, is independent of the strength of the magnetic field. Fluid motions associated with the instability directly generate both poloidal and toroidal field components. In the second part of this investigation, the scaling relation between the instability's wavenumber and the Alfven velocity is demonstrated, and the independence of the maximum growth rate from magnetic field strength is confirmed.
LensEnt2: Maximum-entropy weak lens reconstruction
NASA Astrophysics Data System (ADS)
Marshall, P. J.; Hobson, M. P.; Gull, S. F.; Bridle, S. L.
2013-08-01
LensEnt2 is a maximum entropy reconstructor of weak lensing mass maps. The method takes each galaxy shape as an independent estimator of the reduced shear field and incorporates an intrinsic smoothness, determined by Bayesian methods, into the reconstruction. The uncertainties from both the intrinsic distribution of galaxy shapes and galaxy shape estimation are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures is calculated with corresponding uncertainties. The input is a galaxy ellipticity catalog with each measured galaxy shape treated as a noisy tracer of the reduced shear field, which is inferred on a fine pixel grid assuming positivity, and smoothness on scales of w arcsec, where w is an input parameter. The ICF width w can be chosen by computing the evidence for it.
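The entropy-regularized, positivity-enforcing idea behind such reconstructions can be caricatured in a few lines. This is a minimal sketch of the general maximum-entropy approach, not LensEnt2's actual algorithm: the function name, default level m0, data model (a simple per-pixel denoising), and Newton solver are all illustrative assumptions.

```python
import math

# Sketch: recover a positive map m from noisy data d by minimizing
#   chi^2/2 - alpha * S(m),  with entropy  S(m) = sum(m - m0 - m*log(m/m0)),
# which pulls the solution gently toward the default level m0 and, through
# the log term, keeps every pixel strictly positive.
def entropy_reconstruct(d, sigma=0.1, alpha=0.5, m0=1.0, iters=50):
    m = [m0] * len(d)
    for _ in range(iters):
        # Per-pixel Newton step: gradient = (m - d)/sigma^2 + alpha*log(m/m0),
        # Hessian = 1/sigma^2 + alpha/m (positive for m > 0, so the step is stable).
        m = [max(1e-12,
                 mi - ((mi - di) / sigma**2 + alpha * math.log(mi / m0))
                      / (1.0 / sigma**2 + alpha / mi))
             for mi, di in zip(m, d)]
    return m

m = entropy_reconstruct([1.5, 0.2, 2.0])
```

With a small regularization weight the reconstruction stays close to the data while remaining positive everywhere; increasing alpha shrinks the map toward m0.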
On a stochastic control method for weakly coupled linear systems. M.S. Thesis
NASA Technical Reports Server (NTRS)
Kwong, R. H.
1972-01-01
The stochastic control of two weakly coupled linear systems with different controllers is considered. Each controller makes measurements only of his own system; no information about the other system is assumed to be available. Based on the noisy measurements, the controllers are to generate independently suitable control policies which minimize a quadratic cost functional. To account for the effects of weak coupling directly, an approximate model is proposed in which the influence of one system on the other is replaced by a white noise process. A simple suboptimal control problem for calculating the covariances of these noises is solved using the matrix minimum principle. The overall system performance based on this scheme is analyzed as a function of the degree of intersystem coupling.
NASA Technical Reports Server (NTRS)
Eby, P. B.
1978-01-01
The construction of a clock based on the beta decay process is proposed to test for any violations by the weak interaction of the strong equivalence principle by determining whether the weak interaction coupling constant beta is spatially constant or is a function of the gravitational potential (U). The clock can be constructed by simply counting the beta disintegrations of some suitable source. The total number of counts is to be taken as a measure of elapsed time. The accuracy of the clock is limited by the statistical fluctuations in the number of counts, N, which are equal to the square root of N. Increasing N gives a corresponding increase in accuracy. A source based on the electron capture process can be used so as to avoid low-energy electron discrimination problems. Solid state and gaseous detectors are being considered. While the accuracy of this type of beta decay clock is much less than that of clocks based on the electromagnetic interaction, there is a corresponding lack of knowledge of the behavior of beta as a function of gravitational potential. No predictions from nonmetric theories as to variations in beta are available as yet, but they may occur at the U/c² level.
On the connection between Maximum Drag Reduction and Newtonian fluid flow
NASA Astrophysics Data System (ADS)
Whalley, Richard; Park, Jae-Sung; Kushwaha, Anubhav; Dennis, David; Graham, Michael; Poole, Robert
2014-11-01
To date, the most successful turbulence control technique is the dissolution of certain rheology-modifying additives in liquid flows, which results in a universal maximum drag reduction (MDR) asymptote. The MDR asymptote is a well-known phenomenon in the turbulent flow of complex fluids; yet recent direct numerical simulations of Newtonian fluid flow have identified time intervals showing key features of MDR. These intervals have been termed ``hibernating turbulence'' and are a weak turbulence state which is characterised by low wall-shear stress and weak vortical flow structures. Here, in this experimental investigation, we monitor the instantaneous wall-shear stress in a fully-developed turbulent channel flow of a Newtonian fluid with a hot-film probe whilst simultaneously measuring the streamwise velocity at various distances above the wall with laser Doppler velocimetry. We show, by conditionally sampling the streamwise velocity during low wall-shear stress events, that the MDR velocity profile is approached in an additive-free, Newtonian fluid flow. This result corroborates recent numerical investigations, which suggest that the MDR asymptote in polymer solutions is closely connected to weak, transient Newtonian flow structures.
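The conditional-sampling step described here is simple to sketch. The synthetic series, threshold choice, and perfect tau-u correlation below are illustrative assumptions, not the hot-film/LDV measurements: the idea is just to flag "hibernation-like" intervals where wall shear drops below a threshold and average the velocity over those samples only.

```python
import math

n = 1000
# Synthetic stand-ins for the measured wall shear tau (hot-film) and
# streamwise velocity u (LDV); here u is constructed to rise and fall with tau.
tau = [1.0 + 0.3 * math.sin(2 * math.pi * i / 100) for i in range(n)]
u = [10.0 + 5.0 * (t - 1.0) for t in tau]

mean_tau = sum(tau) / n
threshold = mean_tau - 0.2          # "low wall-shear event" criterion (assumed)
low = [ui for ui, ti in zip(u, tau) if ti < threshold]

u_mean = sum(u) / n                 # unconditional mean velocity
u_low = sum(low) / len(low)         # mean conditioned on low-shear events
print(u_low, u_mean)
```

Because the velocity co-varies with the wall shear, the conditional mean sits below the unconditional one; in the experiment, repeating this at each wall-normal position builds up the conditionally sampled velocity profile that is compared with MDR.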
Shpielberg, O; Akkermans, E
2016-06-17
A stability analysis is presented for boundary-driven and out-of-equilibrium systems in the framework of the hydrodynamic macroscopic fluctuation theory. A Hamiltonian description is proposed which allows us to thermodynamically interpret the additivity principle. A necessary and sufficient condition for the validity of the additivity principle is obtained as an extension of the Le Chatelier principle. These stability conditions result from a diagonal quadratic form obtained using the cumulant generating function. This approach allows us to provide a proof for the stability of the weakly asymmetric exclusion process and to reduce the search for stability to the solution of two coupled linear ordinary differential equations instead of nonlinear partial differential equations. Additional potential applications of these results are discussed in the realm of classical and quantum systems.
The principle of finiteness - a guideline for physical laws
NASA Astrophysics Data System (ADS)
Sternlieb, Abraham
2013-04-01
I propose a new principle in physics: the principle of finiteness (FP). It stems from the definition of physics as a science that deals with measurable, dimensional physical quantities. Since measurement results, including their errors, are always finite, FP postulates that the mathematical formulation of legitimate laws in physics should prevent exactly zero or infinite solutions. I propose finiteness as a postulate, as opposed to a statement whose validity has to be corroborated by, or derived theoretically or experimentally from, other facts, theories or principles. Some consequences of FP are discussed, first in general, and then more specifically in the fields of special relativity, quantum mechanics, and quantum gravity. The corrected Lorentz transformations include an additional translation term depending on the minimum length epsilon. The relativistic gamma is replaced by a corrected gamma that is finite for v = c. To comply with FP, physical laws should include the relevant extremum finite values in their mathematical formulation. An important prediction of FP is that there is a maximum attainable relativistic mass/energy which is the same for all subatomic particles, meaning that there is a maximum theoretical value for cosmic-ray energy. The Generalized Uncertainty Principle required by Quantum Gravity is actually a necessary consequence of FP at the Planck scale. Therefore, FP may possibly contribute to the axiomatic foundation of Quantum Gravity.
NASA Astrophysics Data System (ADS)
Feehan, Paul M. N.
2017-09-01
We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with "mixed" boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the "degenerate" and "non-degenerate" boundaries touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite "slab". The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16].
Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of continuous subsolutions and supersolutions for boundary value and obstacle problems for degenerate-elliptic operators, and maximum and comparison principle estimates previously developed by the author [13].
Inter-comparison of three-dimensional models of volcanic plumes
Suzuki, Yujiro; Costa, Antonio; Cerminara, Matteo; Esposti Ongaro, Tomaso; Herzog, Michael; Van Eaton, Alexa; Denby, Leif
2016-01-01
We performed an inter-comparison study of three-dimensional models of volcanic plumes. A set of common volcanological input parameters and meteorological conditions were provided for two kinds of eruptions, representing a weak and a strong eruption column. From the different models, we compared the maximum plume height, neutral buoyancy level (where plume density equals that of the atmosphere), and level of maximum radial spreading of the umbrella cloud. We also compared the vertical profiles of eruption column properties, integrated across cross-sections of the plume (integral variables). Although the models use different numerical procedures and treatments of subgrid turbulence and particle dynamics, the inter-comparison shows qualitatively consistent results. In the weak plume case (mass eruption rate 1.5 × 10⁶ kg s⁻¹), the vertical profiles of plume properties (e.g., vertical velocity, temperature) are similar among models, especially in the buoyant plume region. Variability among the simulated maximum heights is ~ 20%, whereas neutral buoyancy level and level of maximum radial spreading vary by ~ 10%. Time-averaging of the three-dimensional (3D) flow fields indicates an effective entrainment coefficient around 0.1 in the buoyant plume region, with much lower values in the jet region, which is consistent with findings of small-scale laboratory experiments. On the other hand, the strong plume case (mass eruption rate 1.5 × 10⁹ kg s⁻¹) shows greater variability in the vertical plume profiles predicted by the different models. Our analysis suggests that the unstable flow dynamics in the strong plume enhances differences in the formulation and numerical solution of the models. This is especially evident in the overshooting top of the plume, which extends a significant portion (~ 1/8) of the maximum plume height. Nonetheless, overall variability in the spreading level and neutral buoyancy level is ~ 20%, whereas that of maximum height is ~ 10%. 
This inter-comparison study has highlighted the different capabilities of 3D volcanic plume models, and identified key features of weak and strong plumes, including the roles of jet stability, entrainment efficiency, and particle non-equilibrium, which deserve future investigation in field, laboratory, and numerical studies.
NASA Astrophysics Data System (ADS)
Kovalev, A. M.
The problem of the motion of a mechanical system with constraints conforming to Hamilton's principle is stated as an optimum control problem, with equations of motion obtained on the basis of Pontriagin's principle. A Hamiltonian function in Rodrigues-Hamilton parameters for a gyrostat in a potential force field is obtained as an example. Equations describing the motion of a skate on a sloping surface and the motion of a disk on a horizontal plane are examined.
Cosmological horizons, uncertainty principle, and maximum length quantum mechanics
NASA Astrophysics Data System (ADS)
Perivolaropoulos, L.
2017-05-01
The cosmological particle horizon is the maximum measurable length in the Universe. The existence of such a maximum observable length scale implies a modification of the quantum uncertainty principle. Thus, due to the nonlocality of quantum mechanics, the global properties of the Universe could produce a signature on the behavior of local quantum systems. A generalized uncertainty principle (GUP) that is consistent with the existence of such a maximum observable length scale l_max is Δx Δp ≥ (ℏ/2) · 1/(1 − α Δx²), where α = 1/l_max² ≃ (H0/c)² (H0 is the Hubble parameter and c is the speed of light). In addition to the existence of a maximum measurable length l_max = 1/√α, this form of the GUP also implies the existence of a minimum measurable momentum p_min = (3√3/4) ℏ√α. Using an appropriate representation of the position and momentum quantum operators, we show that the spectrum of the one-dimensional harmonic oscillator becomes Ē_n = 2n + 1 + λ_n ᾱ, where Ē_n ≡ 2E_n/(ℏω) is the dimensionless, properly normalized nth energy level, ᾱ is a dimensionless parameter with ᾱ ≡ αℏ/(mω), and λ_n ~ n² for n ≫ 1 (we show the full form of λ_n in the text). For a typical vibrating diatomic molecule and l_max = c/H0, we find ᾱ ~ 10⁻⁷⁷, and therefore for such a system this effect is beyond the reach of current experiments. However, this effect could be more important in the early Universe and could produce signatures in the primordial perturbation spectrum induced by quantum fluctuations of the inflaton field.
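The quoted minimum momentum p_min = (3√3/4) ℏ√α follows from the GUP by a short, standard optimization; this derivation is our reconstruction from the stated bound, not text from the paper:

```latex
% The GUP gives a lower bound on \Delta p at each \Delta x:
\Delta p \;\ge\; \frac{\hbar}{2\,\Delta x\,\left(1-\alpha\,\Delta x^{2}\right)} .
% The bound is smallest where the denominator is largest:
\frac{d}{d(\Delta x)}\left[\Delta x-\alpha\,\Delta x^{3}\right]
   = 1-3\alpha\,\Delta x^{2} = 0
\quad\Longrightarrow\quad
\Delta x = \frac{1}{\sqrt{3\alpha}} ,
% at which point \Delta x\,(1-\alpha\,\Delta x^{2}) = \frac{2}{3\sqrt{3\alpha}}, so
p_{\min} \;=\; \frac{\hbar}{2}\cdot\frac{3\sqrt{3\alpha}}{2}
         \;=\; \frac{3\sqrt{3}}{4}\,\hbar\sqrt{\alpha} .
```

The same extremum Δx = 1/√(3α), being smaller than l_max = 1/√α, lies inside the allowed range, so the bound is attained.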
Verification of the Uncertainty Principle by Using Diffraction of Light Waves
ERIC Educational Resources Information Center
Nikolic, D.; Nesic, Lj
2011-01-01
We describe a simple idea for experimental verification of the uncertainty principle for light waves. We used single-slit diffraction of a laser beam to measure the angular width of the zero-order diffraction maximum and obtained the corresponding wave-number uncertainty. The uncertainty in position is taken to be the slit width. For the…
NASA Astrophysics Data System (ADS)
Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.
2017-11-01
This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE), and a linear pseudo-model for the nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. The present research paper, however, introduces an innovative method to compute the NLSE using principles of multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo-model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo-model for the nonlinear regression model using multivariate calculus. The linear pseudo-model of Edmond Malinvaud [4] has been explained in a very different way in this paper. David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for the fitting of a nonlinear regression function in 2006. In Jae Myung [13] provided a conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
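The interplay between a linear pseudo-model and the NLSE can be illustrated on a toy exponential regression. The model, data, and Gauss-Newton solver below are our own illustrative assumptions, not the estimators derived in the paper: log-linearizing y = a·e^(bx) gives an ordinary least-squares problem (the pseudo-model) whose solution then seeds a Gauss-Newton refinement of the nonlinear least-squares fit.

```python
import math

# Hypothetical model: y = a * exp(b * x), with true a = 2.0, b = 0.5.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]

# Step 1: linear pseudo-model. Taking logs gives ln y = ln a + b x,
# an ordinary least-squares problem solved in closed form.
n = len(xs)
lys = [math.log(y) for y in ys]
mx = sum(xs) / n
my = sum(lys) / n
b = sum((x - mx) * (ly - my) for x, ly in zip(xs, lys)) \
    / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

# Step 2: Gauss-Newton refinement of the nonlinear least-squares problem.
for _ in range(20):
    r = [y - a * math.exp(b * x) for x, y in zip(xs, ys)]          # residuals
    J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]   # Jacobian rows
    # Normal equations (J^T J) delta = J^T r, solved by Cramer's rule (2x2).
    s11 = sum(j1 * j1 for j1, _ in J)
    s12 = sum(j1 * j2 for j1, j2 in J)
    s22 = sum(j2 * j2 for _, j2 in J)
    g1 = sum(j1 * ri for (j1, _), ri in zip(J, r))
    g2 = sum(j2 * ri for (_, j2), ri in zip(J, r))
    det = s11 * s22 - s12 * s12
    a += (g1 * s22 - g2 * s12) / det
    b += (g2 * s11 - g1 * s12) / det

print(a, b)  # recovers a ≈ 2, b ≈ 0.5
```

With noiseless data the pseudo-model alone already recovers the parameters, so the Gauss-Newton corrections are essentially zero; with noisy data the two estimators differ, and the refinement is what turns the pseudo-model fit into the NLSE.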
Simulation of a Moving Elastic Beam Using Hamilton’s Weak Principle
2006-03-01
versions were limited to two-dimensional systems with open tree configurations (where a cut in any component separates the system in half) [48]. This...whose components experienced large angular rotations (turbomachinery, camshafts, flywheels, etc.). More complex systems required the simultaneous
Magnetic effect in the test of the weak equivalence principle using a rotating torsion pendulum
NASA Astrophysics Data System (ADS)
Zhu, Lin; Liu, Qi; Zhao, Hui-Hui; Yang, Shan-Qing; Luo, Pengshun; Shao, Cheng-Gang; Luo, Jun
2018-04-01
The high precision test of the weak equivalence principle (WEP) using a rotating torsion pendulum requires thorough analysis of systematic effects. Here we investigate one of the main systematic effects, the coupling of the ambient magnetic field to the pendulum. It is shown that the dominant term, the interaction between the average magnetic field and the magnetic dipole of the pendulum, is decreased by a factor of 1.1 × 10⁴ with multi-layer magnetic shield shells. The shield shells reduce the magnetic field to 1.9 × 10⁻⁹ T in the transverse direction so that the dipole-interaction limited WEP test is expected at η ≲ 10⁻¹⁴ for a pendulum dipole less than 10⁻⁹ A m². The high-order effect, the coupling of the magnetic field gradient to the magnetic quadrupole of the pendulum, would also contribute to the systematic errors for a test precision down to η ~ 10⁻¹⁴.
Theory of dissociative tunneling ionization
NASA Astrophysics Data System (ADS)
Svensmark, Jens; Tolstikhin, Oleg I.; Madsen, Lars Bojer
2016-05-01
We present a theoretical study of the dissociative tunneling ionization process. Analytic expressions for the nuclear kinetic energy distribution of the ionization rates are derived. A particularly simple expression for the spectrum is found by using the Born-Oppenheimer (BO) approximation in conjunction with the reflection principle. These spectra are compared to exact non-BO ab initio spectra obtained through model calculations with a quantum mechanical treatment of both the electronic and nuclear degrees of freedom. In the regime where the BO approximation is applicable, imaging of the BO nuclear wave function is demonstrated to be possible through reverse use of the reflection principle, when accounting appropriately for the electronic ionization rate. A qualitative difference between the exact and BO wave functions in the asymptotic region of large electronic distances is shown. Additionally, the behavior of the wave function across the turning line is seen to be reminiscent of light refraction. For weak fields, where the BO approximation does not apply, the weak-field asymptotic theory describes the spectrum accurately.
Tests of gravity with future space-based experiments
NASA Astrophysics Data System (ADS)
Sakstein, Jeremy
2018-03-01
Future space-based tests of relativistic gravitation—laser ranging to Phobos, accelerometers in orbit, and optical networks surrounding Earth—will constrain the theory of gravity with unprecedented precision by testing the inverse-square law, the strong and weak equivalence principles, and the deflection and time delay of light by massive bodies. In this paper, we estimate the bounds that could be obtained on alternative gravity theories that use screening mechanisms to suppress deviations from general relativity in the Solar System: chameleon, symmetron, and Galileon models. We find that space-based tests of the parametrized post-Newtonian parameter γ will constrain chameleon and symmetron theories to new levels, and that tests of the inverse-square law using laser ranging to Phobos will provide the most stringent constraints on Galileon theories to date. We end by discussing the potential for constraining these theories using upcoming tests of the weak equivalence principle, and conclude that further theoretical modeling is required in order to fully utilize the data.
An EEG blind source separation algorithm based on a weak exclusion principle.
Lan Ma; Blu, Thierry; Wang, William S-Y
2016-08-01
The question of how to separate individual brain and non-brain signals, mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings, is a significant problem in contemporary neuroscience. This study proposes and evaluates a novel EEG Blind Source Separation (BSS) algorithm based on a weak exclusion principle (WEP). The chief point in which it differs from most previous EEG BSS algorithms is that the proposed algorithm is not based upon the hypothesis that the sources are statistically independent. Our first step was to investigate algorithm performance on simulated signals that have ground truth. The purpose of this simulation is to illustrate the proposed algorithm's efficacy. The results show that the proposed algorithm has good separation performance. Then, we used the proposed algorithm to separate real EEG signals from a memory study using a revised version of the Sternberg task. The results show that the proposed algorithm can effectively separate the non-brain and brain sources.
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Monasson, Rémi
2015-09-01
The maximum entropy principle (MEP) is a very useful working hypothesis in a wide variety of inference problems, ranging from biological to engineering tasks. To better understand the reasons for the success of the MEP, we propose a statistical-mechanical formulation to treat the space of probability distributions constrained by the measures of (experimental) observables. In this paper we first review the results of a detailed analysis of the simplest case of randomly chosen observables. In addition, we investigate by numerical and analytical means the case of smooth observables, which is of practical relevance. Our preliminary results are presented and discussed with respect to the efficiency of the MEP.
Thermal Conductivity and Large Isotope Effect in GaN from First Principles
2012-08-28
We present atomistic first principles results for the lattice thermal conductivity of GaN and compare them to those for GaP, GaAs, and GaSb. …weak scattering results from stiff atomic bonds and the large Ga to N mass ratio, which give phonons high frequencies and also a pronounced energy gap… Gallium nitride (GaN) is a wide band gap semiconductor and a promising candidate for use in optoelectronic…
Weak value amplification considered harmful
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-03-01
We show using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show that these conclusions do not change in the presence of technical noise.
Maximum entropy method applied to deblurring images on a MasPar MP-1 computer
NASA Technical Reports Server (NTRS)
Bonavito, N. L.; Dorband, John; Busse, Tim
1991-01-01
A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.
NASA Astrophysics Data System (ADS)
Gauthier, D.; Hutchinson, D. J.
2012-04-01
We present simple estimates of the maximum possible critical length of damage or fracture in a weak snowpack layer required to maintain the propagation that leads to avalanche release, based on observations of 'en-echelon' slab fractures during avalanche release. These slab fractures may be preserved in situ if the slab does not slide down slope. The en-echelon fractures are spaced evenly, normally with one every one to ten metres or more. We consider a simple two-dimensional model of a slab and weak layer, with an upslope fracture propagating through the weak layer, and examine the relationship between the weak layer and en-echelon slab fractures. We assume that the slab fracture occurs in tension, and initiates at either the base or surface of the slab in the area of peak tensile stress at the tip of the weak layer fracture. We also assume that, at the time the slab is completely bisected by fracture, the propagation in the weak layer will arrest spontaneously if it has not advanced beyond the critical length. In this scenario, en-echelon slab fractures may only form when the weak layer fracture repeatedly exceeds the critical length; otherwise, there could be only a single slab fracture. We estimate the position of the weak layer fracture at the time of slab bisection using the slab thickness and the ratio between the fracture speeds in the weak layer and slab. We show that in the simple model en-echelon fractures only form if the slab thickness multiplied by the velocity ratio is greater than the critical length. Of course, the critical length must also be less than the en-echelon spacing. It follows that the first relationship must be valid independent of the occurrence of en-echelon fractures, although the speed ratio may be process-dependent and difficult to estimate. We use this method to calculate maximum critical lengths for propagation in actual avalanches with and without en-echelon fractures, and discuss the implications for comparing competing propagation models.
Furthermore, we discuss the possible applications to other cases of progressive basal failure and en-echelon fracturing, e.g. the ribbed flow bowls or so-called 'thumbprint' morphology which sometimes develops during landsliding in sensitive clay soils.
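The abstract above reduces to two inequalities: slab thickness times the fracture-speed ratio must exceed the critical length, and the critical length must be less than the en-echelon spacing. A minimal sketch of that check with made-up numbers (function and parameter names are ours, not the paper's):

```python
def en_echelon_possible(slab_thickness_m, speed_ratio, critical_length_m, spacing_m):
    """Check the two geometric conditions sketched in the abstract.

    speed_ratio is the (process-dependent) ratio of the fracture speed in the
    weak layer to that in the slab; all lengths are in metres.  Illustrative
    only -- not the paper's implementation.
    """
    reaches_slab_fracture = slab_thickness_m * speed_ratio > critical_length_m
    fits_between_fractures = critical_length_m < spacing_m
    return reaches_slab_fracture and fits_between_fractures

# e.g. a 0.5 m slab, speed ratio 5, critical length 2 m, spacing 3 m
print(en_echelon_possible(0.5, 5.0, 2.0, 3.0))  # True: 2.5 m > 2 m and 2 m < 3 m
```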
NASA Astrophysics Data System (ADS)
Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf
2017-09-01
There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H (p ) =-∑ipilogpi . For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as SEXT for extensive entropy, SIT for the source information rate in information theory, and SMEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
NASA Astrophysics Data System (ADS)
Nakanishi, Akitaka; Fukushima, Tetsuya; Uede, Hiroki; Katayama-Yoshida, Hiroshi
2015-03-01
In order to realize super-high-TC superconductors (TC > 1,000 K) based on the general design rules for the negative Ueff system, we have performed computational materials design for the Ueff < 0 system in the hole-doped two-dimensional (2D) delafossites CuAlO2, AgAlO2 and AuAlO2 from first principles. We find an interesting chemical trend of TC in the 2D and 3D systems: TC increases exponentially in the weak coupling regime (|Ueff (-0.44 eV)| < W (2 eV), where W is the band width) for hole-doped CuFeS2, goes through a maximum when |Ueff (-4.88 eV, -4.14 eV)| ~ W (2.8 eV, 3.5 eV) for hole-doped AgAlO2 and AuAlO2, and decreases with increasing |Ueff| in the strong coupling regime, where |Ueff (-4.53 eV)| > W (1.7 eV) for hole-doped CuAlO2.
Hu, Xiaogang; Suresh, Aneesha K; Rymer, William Z; Suresh, Nina L
2015-12-01
The advancement of surface electromyogram (sEMG) recording and signal processing techniques has allowed us to characterize the recruitment properties of a substantial population of motor units (MUs) non-invasively. Here we seek to determine whether MU recruitment properties are modified in paretic muscles of hemispheric stroke survivors. Using an advanced EMG sensor array, we recorded sEMG during isometric contractions of the first dorsal interosseous muscle over a range of contraction levels, from 20% to 60% of maximum, in both paretic and contralateral muscles of stroke survivors. Using MU decomposition techniques, MU action potential amplitudes and recruitment thresholds were derived for simultaneously activated MUs in each isometric contraction. Our results show a significant disruption of recruitment organization in paretic muscles, in that the size principle describing recruitment rank order was materially distorted. MUs were recruited over a very narrow force range with increasing force output, generating a strong clustering effect, when referenced to recruitment force magnitude. Such disturbances in MU properties also correlated well with the impairment of voluntary force generation. Our findings provide direct evidence regarding MU recruitment modifications in paretic muscles of stroke survivors, and suggest that these modifications may contribute to weakness for voluntary contractions.
MICROSCOPE Mission: First Results of a Space Test of the Equivalence Principle.
Touboul, Pierre; Métris, Gilles; Rodrigues, Manuel; André, Yves; Baghi, Quentin; Bergé, Joël; Boulanger, Damien; Bremer, Stefanie; Carle, Patrice; Chhun, Ratana; Christophe, Bruno; Cipolla, Valerio; Damour, Thibault; Danto, Pascale; Dittus, Hansjoerg; Fayet, Pierre; Foulon, Bernard; Gageant, Claude; Guidotti, Pierre-Yves; Hagedorn, Daniel; Hardy, Emilie; Huynh, Phuong-Anh; Inchauspe, Henri; Kayser, Patrick; Lala, Stéphanie; Lämmerzahl, Claus; Lebat, Vincent; Leseur, Pierre; Liorzou, Françoise; List, Meike; Löffler, Frank; Panet, Isabelle; Pouilloux, Benjamin; Prieur, Pascal; Rebray, Alexandre; Reynaud, Serge; Rievers, Benny; Robert, Alain; Selig, Hanns; Serron, Laura; Sumner, Timothy; Tanguy, Nicolas; Visser, Pieter
2017-12-08
According to the weak equivalence principle, all bodies should fall at the same rate in a gravitational field. The MICROSCOPE satellite, launched in April 2016, aims to test its validity at the 10^{-15} precision level, by measuring the force required to maintain two test masses (of titanium and platinum alloys) exactly in the same orbit. A nonvanishing result would correspond to a violation of the equivalence principle, or to the discovery of a new long-range force. Analysis of the first data gives δ(Ti,Pt)=[-1±9(stat)±9(syst)]×10^{-15} (1σ statistical uncertainty) for the titanium-platinum Eötvös parameter characterizing the relative difference in their free-fall accelerations.
The SWOT Team Approach: Focusing on Minorities.
ERIC Educational Resources Information Center
Gorski, Susan E.
1991-01-01
Underscores the applicability of marketing principles to minority student recruitment and retention at community colleges. Proposes the assessment of an institution's Strengths, Weaknesses, and external Opportunities and Threats (SWOT) to strategically market the college. Considers the development of a plan for action based on the SWOT analysis.…
In Praise of Monetary Motivation.
ERIC Educational Resources Information Center
Piamonte, John S.
1979-01-01
Although management has built remuneration policies on the belief that money does not motivate personnel, the author states that the best way to encourage high performance is still money if administered correctly. He discusses behavior theories, incentive/contingency principles, the weaknesses of many merit pay schemes, and factors in employee…
F-region enhancements induced by solar flares
NASA Technical Reports Server (NTRS)
Donnelly, R. F.; Davies, K.; Grubb, R. N.; Fritz, R. B.
1976-01-01
ATS-6 total electron content (NT) observations during solar flares exhibit four types of response: (1) a sudden increase in NT (SITEC) for about 2 min with several maxima in growth rate, then a maximum or a distinct slowing in growth, followed by a slow smooth increase to a flat peak, and finally a slow decay in NT; (2) a SITEC that occurs during ionospheric storms, where NT decays abruptly after the first maximum; (3) slow enhancements devoid of distinct impulsive structure in growth rate; and (4) no distinct response in NT, even for relatively large soft X-ray flares. Flare-induced increases in NT are dominated by low-loss F2 ionization produced by 90-911-Å emission. The impulsive flare component is relatively intense in the 90-911-Å range, but is short lived and weak for flares near the edge of the visible solar disk and for certain slow flares. The impulsive flare component produces the rapid rise, the sharp maxima in growth rate, and the first maximum in SITECs. The slow flare components are strong in the 1-90-Å range but relatively weak in the 90-911-Å range and accumulatively contribute to the second maximum in type 1 and 3 events, except during storms when F2 loss rates are abnormally high in type 2 events.
Ko, Mi-Hwa
2018-01-01
In this paper, based on the Rosenthal-type inequality for asymptotically negatively associated random vectors with values in [Formula: see text], we establish results on [Formula: see text]-convergence and complete convergence of the maxima of partial sums. We also obtain weak laws of large numbers for coordinatewise asymptotically negatively associated random vectors with values in [Formula: see text].
Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi
2017-07-04
BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, where frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherent and maximum likelihood is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, and the differential delay time is set to be a multiple of bit cycle to remove the influence of NH code. Secondly, the maximum likelihood function detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm under the CN0s of 20~40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method with a frequency deviation of 50 Hz. This algorithm can remove the effect of BeiDou NH code effectively and weaken the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak signal bit synchronization with large frequency deviation.
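The two-step idea above (differential products with a one-bit delay cancel a constant frequency offset, then a likelihood-style search over the candidate bit-edge offsets) can be illustrated with a noise-free toy. This is not the paper's algorithm: the data layout, names, and scoring metric are our assumptions.

```python
import numpy as np

def bit_sync_diff_coherent(prompt_1ms, bit_len=20):
    """Toy sketch: pick the bit-edge offset (0..bit_len-1) maximising a
    differential-coherent metric.  prompt_1ms is a complex array of 1-ms
    prompt correlator outputs; illustrative only."""
    # With a delay of one bit cycle, a constant frequency offset contributes
    # the same phase factor to every product z[k+bit_len] * conj(z[k]),
    # so it drops out of the magnitudes below.
    diff = prompt_1ms[bit_len:] * np.conj(prompt_1ms[:-bit_len])
    scores = []
    for offset in range(bit_len):
        # segments aligned with the true bit edges sum coherently
        n_bits = (len(diff) - offset) // bit_len
        seg = diff[offset:offset + n_bits * bit_len].reshape(n_bits, bit_len)
        scores.append(np.abs(seg.sum(axis=1)).sum())
    return int(np.argmax(scores))

# synthetic stream: random +/-1 bits of 20 ms, edges at offset 7, 50 Hz offset
rng = np.random.default_rng(0)
bits = rng.choice([-1.0, 1.0], size=60)
stream = np.repeat(bits, 20)
k = np.arange(1000)
z = stream[20 - 7:20 - 7 + 1000] * np.exp(1j * 2 * np.pi * 50.0 * k * 1e-3)
print(bit_sync_diff_coherent(z))  # recovers the bit-edge offset: 7
```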
2017-03-01
This study applied knowledge management (KM) theories and principles to develop and implement a KM program for the Naval Sea Systems Command (NAVSEA) that strengthens the workforce's understanding of the…
Design principles for wave plate metasurfaces using plasmonic L-shaped nanoantennas
NASA Astrophysics Data System (ADS)
Tahir, Asad A.; Schulz, Sebastian A.; De Leon, Israel; Boyd, Robert W.
2017-03-01
Plasmonic L-shaped antennas are an important building block of metasurfaces and have been used to fabricate ultra-thin wave plates. In this work we present principles that can be used to design wave plates at a wavelength of choice and for diverse application requirements using arrays of L-shaped plasmonic antennas. We derive these design principles by studying the behavior of the vast parameter space of these antenna arrays. We show that there are two distinct regimes: a weak inter-particle coupling and a strong inter-particle coupling regime. We describe the behavior of the antenna array in each regime with regards to wave plate functionality, without resorting to approximate theoretical models. Our work is the first to explain these design principles and serves as a guide for designing wave plates for specific application requirements using plasmonic L-shaped antenna arrays.
Cureton's Basic Principles of Physical Fitness Work (Rules for Conducting Exercise).
ERIC Educational Resources Information Center
President's Council on Physical Fitness and Sports, Washington, DC.
This document is an annotated list of 20 rules for conducting exercise. Among the rules described are the warm-up rule, the rule for regulation of exercise dosage, recuperation rule, posture rule, glandular fitness rule, maximum respiration rule, and maximum circulation rule. The time of workout and procedures for taking cool baths are…
Stochastic Resonance in an Underdamped System with Pinning Potential for Weak Signal Detection
Zhang, Haibin; He, Qingbo; Kong, Fanrang
2015-01-01
Stochastic resonance (SR) has been proved to be an effective approach for weak sensor signal detection. This study presents a new weak signal detection method based on a SR in an underdamped system, which consists of a pinning potential model. The model was firstly discovered from magnetic domain wall (DW) in ferromagnetic strips. We analyze the principle of the proposed underdamped pinning SR (UPSR) system, the detailed numerical simulation and system performance. We also propose the strategy of selecting the proper damping factor and other system parameters to match a weak signal, input noise and to generate the highest output signal-to-noise ratio (SNR). Finally, we have verified its effectiveness with both simulated and experimental input signals. Results indicate that the UPSR performs better in weak signal detection than the conventional SR (CSR) with merits of higher output SNR, better anti-noise and frequency response capability. Besides, the system can be designed accurately and efficiently owing to the sensibility of parameters and potential diversity. The features also weaken the limitation of small parameters on SR system. PMID:26343662
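The underdamped SR mechanism can be illustrated with the textbook quartic double well standing in for the paper's magnetic pinning potential. This is a loose sketch under that substitution; all parameter values are illustrative, not from the study.

```python
import numpy as np

def underdamped_sr(a=1.0, b=1.0, gamma=0.5, amp=0.3, omega=0.1,
                   noise_d=0.3, dt=0.01, n_steps=50_000, seed=1):
    """Euler-Maruyama integration of an underdamped bistable oscillator
    driven by a weak periodic signal plus white noise.  Uses the
    conventional double well U(x) = -a x^2/2 + b x^4/4 as a stand-in for
    the magnetic pinning potential; illustrative only."""
    rng = np.random.default_rng(seed)
    x, v = 1.0, 0.0                      # start in the right-hand well
    xs = np.empty(n_steps)
    for k in range(n_steps):
        force = a * x - b * x**3         # -dU/dx
        v += (-gamma * v + force + amp * np.cos(omega * k * dt)) * dt \
             + np.sqrt(2.0 * noise_d * dt) * rng.standard_normal()
        x += v * dt
        xs[k] = x
    return xs
```

With sufficient noise the trajectory hops between the two wells roughly in step with the weak drive, which is the resonance the detection scheme exploits; the output SNR would then be read off a periodogram of `xs`.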
Large Deviations in Weakly Interacting Boundary Driven Lattice Gases
NASA Astrophysics Data System (ADS)
van Wijland, Frédéric; Rácz, Zoltán
2005-01-01
One-dimensional, boundary-driven lattice gases with local interactions are studied in the weakly interacting limit. The density profiles and the correlation functions are calculated to first order in the interaction strength for zero-range and short-range processes differing only in the specifics of the detailed-balance dynamics. Furthermore, the effective free-energy (large-deviation function) and the integrated current distribution are also found to this order. From the former, we find that the boundary drive generates long-range correlations only for the short-range dynamics while the latter provides support to an additivity principle recently proposed by Bodineau and Derrida.
Strong correlation effects on surfaces of topological insulators via holography
NASA Astrophysics Data System (ADS)
Seo, Yunseok; Song, Geunho; Sin, Sang-Jin
2017-07-01
We investigate the effects of strong correlation on the surface state of a topological insulator (TI). We argue that electrons in the regime of crossover from weak antilocalization to weak localization are strongly correlated, and calculate the magnetotransport coefficients of TIs using the gauge-gravity principle. Then, we examine the magnetoconductivity (MC) formula and find excellent agreement with the data of chrome-doped Bi2Te3 in the crossover regime. We also find that the cusplike peak in MC at low doping is absent, which is natural since quasiparticles disappear due to the strong correlation.
Limit Theorems for Dispersing Billiards with Cusps
NASA Astrophysics Data System (ADS)
Bálint, P.; Chernov, N.; Dolgopyat, D.
2011-12-01
Dispersing billiards with cusps are deterministic dynamical systems with a mild degree of chaos, exhibiting "intermittent" behavior that alternates between regular and chaotic patterns. Their statistical properties are therefore weak and delicate. They are characterized by a slow (power-law) decay of correlations, and as a result the classical central limit theorem fails. We prove that a non-classical central limit theorem holds, with a scaling factor of √(n log n) replacing the standard √n. We also derive the respective Weak Invariance Principle, and we identify the class of observables for which the classical CLT still holds.
NASA Technical Reports Server (NTRS)
Hoflich, P.; Khokhlov, A. M.; Wheeler, J. C.
1995-01-01
We compute optical and infrared light curves of the pulsating class of delayed detonation models for Type Ia supernovae (SN Ia's) using an elaborate treatment of the Local Thermodynamic Equilibrium (LTE) radiation transport, equation of state and ionization balance, expansion opacity including the cooling by CO, CO(+), and SiO, and a Monte Carlo gamma-ray deposition scheme. The models have an amount of Ni-56 in the range from approximately or equal to 0.1 solar mass up to 0.7 solar mass depending on the density at which the transition from a deflagration to a detonation occurs. Models with a large nickel production give light curves comparable to those of typical Type Ia supernovae. Subluminous supernovae can be explained by models with a low nickel production. Multiband light curves are presented in comparison with the normally bright event SN 1992bc and the subluminous events SN 1991bg and SN 1992bo to establish the principle that the delayed detonation paradigm in Chandrasekhar mass models may give a common explosion mechanism accounting for both normal and subluminous SN Ia's. Secondary IR-maxima are formed in the models of normal SN Ia's as a photospheric effect if the photospheric radius continues to increase well after maximum light. Secondary maxima appear later and stronger in models with moderate expansion velocities and with radioactive material closer to the surface. Model light curves for subluminous SN Ia's tend to show only one 'late' IR-maximum. In some delayed detonation models shell-like envelopes form, which consist of unburned carbon and oxygen. The formation of molecules in these envelopes is addressed. If the model retains a C/O-envelope and is subluminous, strong vibration bands of CO may appear, typically several weeks past maximum light. CO should be very weak or absent in normal SN Ia's.
Maximum caliber inference of nonequilibrium processes
NASA Astrophysics Data System (ADS)
Otten, Moritz; Stock, Gerhard
2010-07-01
Thirty years ago, Jaynes suggested a general theoretical approach to nonequilibrium statistical mechanics, called maximum caliber (MaxCal) [Annu. Rev. Phys. Chem. 31, 579 (1980)]. MaxCal is a variational principle for dynamics in the same spirit that maximum entropy is a variational principle for equilibrium statistical mechanics. Motivated by the success of maximum entropy inference methods for equilibrium problems, in this work the MaxCal formulation is applied to the inference of nonequilibrium processes. That is, given some time-dependent observables of a dynamical process, one constructs a model that reproduces these input data and moreover, predicts the underlying dynamics of the system. For example, the observables could be some time-resolved measurements of the folding of a protein, which are described by a few-state model of the free energy landscape of the system. MaxCal then calculates the probabilities of an ensemble of trajectories such that on average the data are reproduced. From this probability distribution, any dynamical quantity of the system can be calculated, including population probabilities, fluxes, or waiting time distributions. After briefly reviewing the formalism, the practical numerical implementation of MaxCal in the case of an inference problem is discussed. Adopting various few-state models of increasing complexity, it is demonstrated that the MaxCal principle indeed works as a practical method of inference: The scheme is fairly robust and yields correct results as long as the input data are sufficient. As the method is unbiased and general, it can deal with any kind of time dependency such as oscillatory transients and multitime decays.
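The inference step described above has the same skeleton as ordinary maximum entropy inference: write the distribution as an exponential in Lagrange multipliers times the constrained observables, then tune the multipliers until the averages match the data. A minimal, self-contained sketch of that skeleton on Jaynes's classic loaded-die example (faces in place of trajectories; the function name and the bisection approach are ours):

```python
import math

def maxent_dice(target_mean, lo=-10.0, hi=10.0, tol=1e-12):
    """Maximum-entropy distribution over faces 1..6 with a fixed mean,
    found by bisection on the Lagrange multiplier lam (p_i ~ exp(-lam*i)).
    A MaxCal inference has the same structure, with trajectories in place
    of faces and path observables in place of the mean."""
    def mean(lam):
        w = [math.exp(-lam * i) for i in range(1, 7)]
        return sum(i * wi for i, wi in zip(range(1, 7), w)) / sum(w)
    # mean(lam) decreases monotonically in lam, so bisect
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * i) for i in range(1, 7)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_dice(4.5)
print(sum(i * pi for i, pi in zip(range(1, 7), p)))  # ~4.5, matching the constraint
```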
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reister, D.B.; Lenhart, S.M.
Recent theoretical results have completely solved the problem of determining the minimum length path for a vehicle with a minimum turning radius moving from an initial configuration to a final configuration. Time optimal paths for a constant speed vehicle are a subset of the minimum length paths. This paper uses the Pontryagin maximum principle to find time optimal paths for a constant speed vehicle. The time optimal paths consist of sequences of arcs of circles and straight lines. The maximum principle introduces concepts (dual variables, bang-bang solutions, singular solutions, and transversality conditions) that provide important insight into the nature of the time optimal paths. We explore the properties of the optimal paths and present some experimental results for a mobile robot following an optimal path.
First-principles prediction of a promising p-type transparent conductive material CsGeCl3
NASA Astrophysics Data System (ADS)
Huang, Dan; Zhao, Yu-Jun; Ju, Zhi-Ping; Gan, Li-Yong; Chen, Xin-Man; Li, Chang-Sheng; Yao, Chun-mei; Guo, Jin
2014-04-01
Most reported p-type transparent conductive materials are Cu-based compounds such as CuAlO2 and CuCrO2. Here, we report that compounds based on ns2 cations with low binding energy can also possess high valence band maximum, which is crucial for the p-type doping according to the doping limit rules. In particular, CsGeCl3, a compound with valence band maximum from ns2 cations, is predicted as a promising p-type transparent conductive material by first-principles calculations. Our results show that the p-type defect Ge vacancy dominates its intrinsic defects with a shallow transition level, and the calculated hole effective masses are low in CsGeCl3.
An understanding of human dynamics in urban subway traffic from the Maximum Entropy Principle
NASA Astrophysics Data System (ADS)
Yong, Nuo; Ni, Shunjiang; Shen, Shifei; Ji, Xuewei
2016-08-01
We studied the distribution of entry time intervals in Beijing subway traffic by analyzing smart card transaction data, and then deduced the probability distribution function of the entry time interval from the Maximum Entropy Principle. Both theoretical derivation and data statistics indicate that the entry time interval obeys a power-law distribution with an exponential cutoff. In addition, we point out the constraint conditions that determine the distribution form and discuss how the constraints affect the distribution function. We speculate that for bursts and heavy tails in human dynamics, when the fitted power exponent is less than 1.0, the distribution cannot be a pure power law but must carry an exponential cutoff, a feature that may have been overlooked in previous studies.
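Maximizing entropy with both ⟨x⟩ and ⟨ln x⟩ constrained indeed yields p(x) ∝ x^(−β) e^(−x/κ), a power law with exponential cutoff. A small sketch (the discrete support and the two target moments are invented stand-ins, not the Beijing data) solves for the two Lagrange multipliers by root finding:

```python
import numpy as np
from scipy.optimize import root

x = np.arange(1.0, 201.0)          # discrete support of entry time intervals (assumed units)
target = np.array([20.0, 2.0])     # assumed values of <x> and <ln x> from data

def p_of(theta):
    """MaxEnt family for constraints on <x> and <ln x>: x**(-beta) * exp(-x/kappa)."""
    beta, inv_kappa = theta
    w = x ** (-beta) * np.exp(-inv_kappa * x)
    return w / w.sum()

def moment_gap(theta):
    p = p_of(theta)
    return [p @ x - target[0], p @ np.log(x) - target[1]]

sol = root(moment_gap, x0=[0.5, 0.05])   # solve for the Lagrange multipliers
p = p_of(sol.x)
beta, inv_kappa = sol.x
```

The fitted exponent β and cutoff scale κ = 1/inv_kappa then follow directly from the constraint values rather than from a curve fit.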
Efficiency of autonomous soft nanomachines at maximum power.
Seifert, Udo
2011-01-14
We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
Longitudinal vector form factors in weak decays of nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Šimkovic, F.; Department of Nuclear Physics and Biophysics, Comenius University, Mlynská dolina F1 SK–842 48 Bratislava; Kovalenko, S.
2015-10-28
The longitudinal form factors of the weak vector current of particles with spin J = 1/2 and isospin I = 1/2 are determined by the mass difference and the charge radii of members of the isotopic doublets. The most promising reactions to measure these form factors are the reactions with large momentum transfers involving the spin-1/2 isotopic doublets with a maximum mass splitting. Numerical estimates of longitudinal form factors are given for nucleons and eight nuclear spin-1/2 isotopic doublets.
The maximum entropy production and maximum Shannon information entropy in enzyme kinetics
NASA Astrophysics Data System (ADS)
Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš
2018-04-01
We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed that enables maximization of the density of entropy production with respect to the enzyme rate constants for an enzyme reaction in a steady state. Mass and Gibbs free energy conservation are imposed as optimization constraints. The optimal steady-state enzyme rate constants computed in this way also yield the most uniform probability distribution of the enzyme states, which corresponds to the maximal Shannon information entropy. A stability analysis further demonstrates that maximal density of entropy production in the enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme glucose isomerase.
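A stripped-down version of such a constrained maximization (a two-state reversible cycle rather than the paper's glucose isomerase mechanism; the fixed equilibrium constant and total rate budget are invented stand-ins for the Gibbs energy and mass constraints) can be run with scipy:

```python
import numpy as np
from scipy.optimize import minimize

K = 100.0           # fixed overall equilibrium constant (plays the role of the Gibbs constraint)
S_tot = 4.0         # fixed sum of the four rate constants (plays the role of mass conservation)
force = np.log(K)   # thermodynamic force per completed cycle, in units of k_B

def neg_sigma(logk):
    """Negative entropy production = -(steady-state cycle flux) * force."""
    k1, k2, km1, km2 = np.exp(logk)
    J = (k1 * k2 - km1 * km2) / (k1 + k2 + km1 + km2)   # two-state cycle flux
    return -J * force

cons = [
    {"type": "eq", "fun": lambda lk: lk[0] + lk[1] - lk[2] - lk[3] - np.log(K)},
    {"type": "eq", "fun": lambda lk: np.exp(lk).sum() - S_tot},
]
res = minimize(neg_sigma, x0=np.log([1.2, 0.9, 0.05, 0.1]),
               constraints=cons, method="SLSQP")
k_opt = np.exp(res.x)
```

For this symmetric toy problem the optimum can be checked by hand: k1 = k2 = 10·km1 = 10·km2, giving flux 9/11 and entropy production (9/11)·ln 100.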
NASA Astrophysics Data System (ADS)
Gavrus, Adinel
2017-10-01
This scientific paper sets out to prove that the maximum work principle used in the theory of continuum plasticity can be regarded as a consequence of an optimization problem based on the constructal theory of Prof. Adrian Bejan. Thermodynamics defines the conservation of energy and the irreversibility of the evolution of natural systems. From a mechanical point of view, the first of these permits the definition of the momentum balance equation, i.e. the virtual power principle, while the second explains the tendency of all currents to flow from high to low values. According to the constructal law, every finite-size system evolves toward configurations that flow more and more easily over time, distributing imperfections so as to maximize entropy and minimize losses or dissipations. Applying the principles of constructal theory to a material forming process leads to the conclusion that, under external loads, the material flows in the way that makes the total dissipated mechanical power (deformation and friction) minimal. From a mechanical point of view it is then possible to characterize the real state of the mechanical variables (stress, strain, strain rate) as the one that, among all virtual non-equilibrium states, minimizes the total dissipated power. This yields a variational minimization problem, and this paper proves in a mathematical sense that, starting from this formulation, the maximum work principle can be recovered in a more general form, together with an equivalent form for the friction term. An application to the plane compression of a plastic material shows the feasibility of the proposed minimization formulation for finding analytical solutions in two cases: one without friction and a second that takes into account the Tresca friction law.
To validate the proposed formulation, a comparison with classical analytical analyses based on the slice and upper/lower bound methods, and with a numerical finite element simulation, is also presented.
Development for equipment of the milk macromolecules content detection
NASA Astrophysics Data System (ADS)
Ding, Guochao; Li, Weimin; Shang, Tingyi; Xi, Yang; Gao, Yunli; Zhou, Zhen
We developed an experimental device for rapid and accurate detection of the macromolecular content of milk. The device is based on the principle of laser scattering: the macromolecular content is characterized by the ratio of scattered to transmitted light. A peristaltic pump provides automatic input and output of the milk samples, and a weak-signal detection amplifier circuit based on the ICL7650 measures the intensity ratio. The software is built on the real-time operating system μC/OS-II. The experimental data prove that the device achieves fast, real-time measurement of milk macromolecules.
From the traditional concept of safety management to safety integrated with quality.
García Herrero, Susana; Mariscal Saldaña, Miguel Angel; Manzanedo del Campo, Miguel Angel; Ritzel, Dale O
2002-01-01
This editorial reviews the evolution of the concepts of safety and quality that have been used in the traditional workplace. The traditional programs of safety are explored showing strengths and weaknesses. The concept of quality management is also viewed. Safety management and quality management principles, stages, and measurement are highlighted. The concepts of quality and safety guarantee are assessed. Total Quality Management concepts are reviewed and applied to safety quality. Total safety management principles are discussed. Finally, an analysis of the relationship between quality and safety from data collected from a company in Spain is presented.
The quantum limit for gravitational-wave detectors and methods of circumventing it
NASA Technical Reports Server (NTRS)
Thorne, K. S.; Caves, C. M.; Sandberg, V. D.; Zimmermann, M.; Drever, R. W. P.
1979-01-01
The Heisenberg uncertainty principle prevents the monitoring of the complex amplitude of a mechanical oscillator more accurately than a certain limit value. This 'quantum limit' is a serious obstacle to the achievement of a 10 to the -21st gravitational-wave detection sensitivity. This paper examines the principles of the back-action evasion technique and finds that this technique may be able to overcome the problem of the quantum limit. Back-action evasion does not solve, however, other problems of detection, such as weak coupling, large amplifier noise, and large Nyquist noise.
NASA Astrophysics Data System (ADS)
Gross, Markus
2018-03-01
We consider a one-dimensional fluctuating interfacial profile governed by the Edwards–Wilkinson or the stochastic Mullins-Herring equation for periodic, standard Dirichlet and Dirichlet no-flux boundary conditions. The minimum action path of an interfacial fluctuation conditioned to reach a given maximum height M at a finite (first-passage) time T is calculated within the weak-noise approximation. Dynamic and static scaling functions for the profile shape are obtained in the transient and the equilibrium regime, i.e. for first-passage times T smaller or larger than the characteristic relaxation time, respectively. In both regimes, the profile approaches the maximum height M with a universal algebraic time dependence characterized solely by the dynamic exponent of the model. It is shown that, in the equilibrium regime, the spatial shape of the profile depends sensitively on boundary conditions and conservation laws, but it is essentially independent of them in the transient regime.
Kimbrell, George A
2009-01-01
Good governance for nanotechnology and nanomaterials is predicated on principles of general good governance. This paper discusses what lessons we can learn from the oversight of past emerging technologies in formulating these principles. Nanotechnology provides a valuable opportunity to apply these lessons and a duty to avoid repeating past mistakes. Doing so will require mandatory regulation, grounded in precaution, that takes into account the uniqueness of nanomaterials. Moreover, this policy dialogue is not taking place in a vacuum. In applying the lessons of the past, nanotechnology provides a window to renegotiate the public's social contract on chemicals, health, the environment, and risks. Emerging technologies illuminate structural weaknesses, providing a crucial chance to ameliorate lingering regulatory inadequacies and provide much-needed updates of existing laws.
Implicit Learning of Arithmetic Regularities Is Facilitated by Proximal Contrast
Prather, Richard W.
2012-01-01
Natural number arithmetic is a simple, powerful, and important symbolic system. Despite the intense focus on learning in cognitive development and educational research, many adults have weak knowledge of the system. In the current study, participants learn arithmetic principles via an implicit learning paradigm. Participants learn not by solving arithmetic equations, but by viewing and evaluating example equations, similar to the implicit learning of artificial grammars; we extend this paradigm to the symbolic arithmetic system. Specifically, we find that exposure to principle-inconsistent examples facilitates the acquisition of arithmetic principle knowledge when the equations are presented to the learner in a temporally proximate fashion. The results expand on research on the implicit learning of regularities and suggest that contrasting cases, shown to facilitate explicit arithmetic learning, are also relevant to the implicit learning of arithmetic. PMID:23119101
Anomalous skin effects in a weakly magnetized degenerate electron plasma
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbas, G., E-mail: gohar.abbas@gcu.edu.pk; Sarfraz, M.; Shah, H. A.
2014-09-15
Fully relativistic analysis of anomalous skin effects for parallel propagating waves in a weakly magnetized degenerate electron plasma is presented, and a graphical comparison is made with the results obtained using the relativistic Maxwellian distribution function [G. Abbas, M. F. Bashir, and G. Murtaza, Phys. Plasmas 18, 102115 (2011)]. It is found that the penetration depth of the R- and L-waves in the degenerate case is qualitatively smaller than in the Maxwellian plasma case. The reduction of the R-wave skin depth due to the weak magnetic field is larger for the degenerate plasma than for the non-degenerate one. On ignoring the ambient magnetic field, previous results for the degenerate field-free case are recovered [A. F. Alexandrov, A. S. Bogdankevich, and A. A. Rukhadze, Principles of Plasma Electrodynamics (Springer-Verlag, Berlin/Heidelberg, 1984), p. 90].
Detection of light-matter interaction in the weak-coupling regime by quantum light
NASA Astrophysics Data System (ADS)
Bin, Qian; Lü, Xin-You; Zheng, Li-Li; Bin, Shang-Wu; Wu, Ying
2018-04-01
"Mollow spectroscopy" is a photon-statistics spectroscopy obtained by scanning the quantum light scattered from a source system. Here, we apply this technique to detect the weak light-matter interaction between a cavity and an atom (or a mechanical oscillator) when strong system dissipation is included. We find that the weak interaction can be measured with high accuracy when the target cavity is excited by quantum light scattered from the source halfway between the central peak and each side peak. This originates from the strong correlation of the injected quantum photons. In principle, our proposal applies to the normal cavity quantum electrodynamics system described by the Jaynes-Cummings model and to an optomechanical system. Furthermore, it is within the reach of state-of-the-art experiments even when the interaction strength is reduced to a very small value.
NASA Technical Reports Server (NTRS)
Atluri, Satya N.; Shen, Shengping
2002-01-01
In this paper, a very simple method is used to derive the weakly singular traction boundary integral equation based on the integral relationships for displacement gradients. The concept of the MLPG method is employed to solve the integral equations, especially those arising in solid mechanics. A moving least squares (MLS) interpolation is selected to approximate the trial functions. Five boundary integral solution methods are introduced: the direct solution method; the displacement boundary-value problem; the traction boundary-value problem; the mixed boundary-value problem; and the boundary variational principle. Based on the local weak form of the BIE, four different nodal-based local test functions are selected, leading to four different MLPG methods for each BIE solution method. These methods combine the advantages of the MLPG method and the boundary element method.
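A one-dimensional sketch of the MLS ingredient (linear basis and Gaussian weight; the nodes, test function and support size are illustrative, not taken from the paper):

```python
import numpy as np

def mls(x_eval, x_nodes, u_nodes, h=0.1):
    """Moving least squares approximation with linear basis [1, x] and Gaussian weight."""
    x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
    out = np.empty_like(x_eval)
    for i, xe in enumerate(x_eval):
        w = np.exp(-((x_nodes - xe) / h) ** 2)           # nodal weight function
        P = np.column_stack([np.ones_like(x_nodes), x_nodes - xe])
        A = P.T @ (w[:, None] * P)                       # weighted moment matrix
        b = P.T @ (w * u_nodes)
        out[i] = np.linalg.solve(A, b)[0]                # local fit evaluated at xe
    return out

nodes = np.linspace(0.0, 1.0, 21)
vals = np.sin(2 * np.pi * nodes)
approx = mls([0.25, 0.5, 0.75], nodes, vals)
```

Because the basis is recentered at every evaluation point, the first coefficient of each local weighted fit is the approximation itself; higher-order bases sharpen the fit near extrema.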
Weak Bond-Based Injectable and Stimuli Responsive Hydrogels for Biomedical Applications
Ding, Xiaochu; Wang, Yadong
2017-01-01
Here we define hydrogels crosslinked by weak bonds as physical hydrogels. They possess unique features including reversible bonding, shear thinning and stimuli-responsiveness. Unlike covalently crosslinked hydrogels, physical hydrogels do not require triggers to initiate chemical reactions for in situ gelation. The drug can be fully loaded in a pre-formed hydrogel for delivery with minimal cargo leakage during injection. These benefits make physical hydrogels useful as delivery vehicles for applications in biomedical engineering. This review focuses on recent advances of physical hydrogels crosslinked by weak bonds: hydrogen bonds, ionic interactions, host-guest chemistry, hydrophobic interactions, coordination bonds and π-π stacking interactions. Understanding the principles and the state of the art of gels with these dynamic bonds may give rise to breakthroughs in many biomedical research areas including drug delivery and tissue engineering. PMID:29062484
Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray
2014-05-13
The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.
Finite Volume Methods: Foundation and Analysis
NASA Technical Reports Server (NTRS)
Barth, Timothy; Ohlberger, Mario
2003-01-01
Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, porous media flow, meteorology, electromagnetics, models of biological processes, semi-conductor device simulation and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through the obtention of discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for non-oscillatory discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, E-fluxes, TVD discretization, non-oscillatory reconstruction, slope limiters, positive coefficient schemes, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered such as high order time integration, discretization of diffusion terms and the extension to systems of nonlinear conservation laws.
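As a minimal illustration of the building blocks named above (a monotone upwind flux combined with minmod slope limiting; the grid and initial data are arbitrary), the following MUSCL scheme for 1D linear advection satisfies a discrete maximum principle and is locally conservative:

```python
import numpy as np

def minmod(a, b):
    """Limited slope: the smaller-magnitude argument when signs agree, else zero."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(u0, cfl=0.45, steps=200):
    """MUSCL finite-volume scheme for u_t + u_x = 0 on a periodic grid:
    minmod-limited linear reconstruction with upwind fluxes (unit speed)."""
    u = u0.astype(float).copy()
    for _ in range(steps):
        slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        face = u + 0.5 * slope                       # reconstructed value at face i+1/2
        u = u - cfl * (face - np.roll(face, 1))      # conservative update, cfl = dt/dx
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)       # square wave initial data
u = advect(u0)
```

With minmod slopes each Euler update is a convex combination of neighbouring cell averages for cfl ≤ 2/3, so no new extrema are created even at the discontinuities.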
The phenology of Arctic Ocean surface warming.
Steele, Michael; Dickinson, Suzanne
2016-09-01
In this work, we explore the seasonal relationships (i.e., the phenology) between sea ice retreat, sea surface temperature (SST), and atmospheric heat fluxes in the Pacific Sector of the Arctic Ocean, using satellite and reanalysis data. We find that where ice retreats early in most years, maximum summertime SSTs are usually warmer, relative to areas with later retreat. For any particular year, we find that anomalously early ice retreat generally leads to anomalously warm SSTs. However, this relationship is weak in the Chukchi Sea, where ocean advection plays a large role. It is also weak where retreat in a particular year happens earlier than usual, but still relatively late in the season, primarily because atmospheric heat fluxes are weak at that time. This result helps to explain the very different ocean warming responses found in two recent years with extreme ice retreat, 2007 and 2012. We also find that the timing of ice retreat impacts the date of maximum SST, owing to a change in the ocean surface buoyancy and momentum forcing that occurs in early August that we term the Late Summer Transition (LST). After the LST, enhanced mixing of the upper ocean leads to cooling of the ocean surface even while atmospheric heat fluxes are still weakly downward. Our results indicate that in the near-term, earlier ice retreat is likely to cause enhanced ocean surface warming in much of the Arctic Ocean, although not where ice retreat still occurs late in the season.
The Evaluation of Foreign-Language-Teacher Education Programmes
ERIC Educational Resources Information Center
Peacock, Matthew
2009-01-01
This article presents a new procedure for the evaluation of EFL teacher-training programmes based on principles of programme evaluation and foreign-language-teacher (FLT) education. The procedure focuses on programme strengths and weaknesses and how far the programme meets the needs of students. I tested the procedure through an evaluation of a…
Unidimensional and Multidimensional Models for Item Response Theory.
ERIC Educational Resources Information Center
McDonald, Roderick P.
This paper provides an up-to-date review of the relationship between item response theory (IRT) and (nonlinear) common factor theory and draws out of this relationship some implications for current and future research in IRT. Nonlinear common factor analysis yields a natural embodiment of the weak principle of local independence in appropriate…
Democracy's Untold Story: What World History Textbooks Neglect.
ERIC Educational Resources Information Center
Gagnon, Paul
Content weakness in textbooks is a major obstacle to effective social studies teaching. Chapters 1-3 of this book provide the Education for Democracy Project's Statement of Principles, a consideration of history's role as the core of social studies education, and the role of textbooks in teaching world history. Chapters 4-14 examine five selected…
A Critical Study of Vocational-Industrial Education in Taiwan.
ERIC Educational Resources Information Center
Koo, Po-Ken
This study was concerned with determining the kind of vocational-industrial educational programs that would best suit the needs of Taiwan. The general conditions and provisions of 27 existing vocational-industrial programs were studied to determine their strengths and weaknesses and to provide a set of principles that would serve as guideposts for…
Bologna Process Principles Integrated into Education System of Kazakhstan
ERIC Educational Resources Information Center
Nessipbayeva, Olga
2013-01-01
The purpose of this paper is to analyze the fulfillment of the parameters of the Bologna Process in the education system of Kazakhstan. The author gives short review of higher education system of the Republic of Kazakhstan with necessary data. And the weaknesses of the system of higher education are identified. Moreover, implementing…
Spin precession in spin-orbit coupled weak links: Coulomb repulsion and Pauli quenching
NASA Astrophysics Data System (ADS)
Shekhter, R. I.; Entin-Wohlman, O.; Jonson, M.; Aharony, A.
2017-12-01
A simple model for the transmission of pairs of electrons through a weak electric link in the form of a nanowire made of a material with strong electron spin-orbit interaction (SOI) is presented, with emphasis on the effects of Coulomb interactions and the Pauli exclusion principle. The constraints due to the Pauli principle are shown to "quench" the coherent SOI-induced precession of the spins when the spatial wave packets of the two electrons overlap significantly. The quenching, which results from the projection of the pair's spin states onto spin-up and spin-down states on the link, breaks up the coherent propagation in the link into a sequence of coherent hops that add incoherently. Applying the model to the transmission of Cooper pairs between two superconductors, we find that in spite of Pauli quenching, the Josephson current oscillates with the strength of the SOI, but may even change its sign (compared to the limit of the Coulomb blockade, when the quenching is absent). Conditions for an experimental detection of these features are discussed.
1985-12-01
The Dansyl group was chosen because its fluorescence emission maximum and quantum yield are sensitive to the polarity and acidity of the local environment. The wavelength of maximum fluorescence depended only weakly on the character of the contacting liquid phase; the difference between
ERIC Educational Resources Information Center
Marshall, Rick
2015-01-01
Many icebergs are vulnerable to capsizing. In doing so the gravitational potential energy of the ice is increased, while that of the displaced sea water is decreased. Applying the principle of the conservation of energy shows that by capsizing, there is also a net transfer of energy to the surrounding sea water. This will be a maximum for a…
Wesson, R.L.
1988-01-01
Preliminary measurements of the stress orientation at a depth of 2 km are interpreted to indicate that the regional orientation of the maximum compression is normal to the fault, and are taken as evidence for a very weak fault. The orientation expected from plate tectonic arguments is about 66° NE from the strike of the fault. Geodetic data indicate that the orientation of the maximum compressive strain rate is about 43° NE from the strike of the fault, and show nearly pure right-lateral shear acting parallel to the fault. These apparent conflicts in the inferred orientation of the axis of maximum compression may be explained in part by a model in which the fault zone is locked over a depth interval in the range of 2-5 to 15 km, but is very weak above and below that interval. This solution requires, however, a few mm/yr of creep at the surface on the San Andreas or nearby sub-parallel faults (such as the San Jacinto), which has not yet been observed, or a shallow zone of distributed deformation near the faults. -from Author
Code of Federal Regulations, 2010 CFR
2010-07-01
... paragraphs (b) and (c) of this section on the basis of a minimum rate of $4.25 per hour. This principle is... exceptions, the maximum part of the aggregate disposable earnings of an individual for any workweek which is... earner the specified amount of compensation for his personal services rendered in the workweek, or a...
Interatomic potentials in condensed matter via the maximum-entropy principle
NASA Astrophysics Data System (ADS)
Carlsson, A. E.
1987-09-01
A general method is described for the calculation of interatomic potentials in condensed-matter systems by use of a maximum-entropy Ansatz for the interatomic correlation functions. The interatomic potentials are given explicitly in terms of statistical correlation functions involving the potential energy and the structure factor of a ``reference medium.'' Illustrations are given for Al-Cu alloys and a model transition metal.
NASA Astrophysics Data System (ADS)
Anderson, R.; Dobrev, V.; Kolev, Tz.; Kuzmin, D.; Quezada de Luna, M.; Rieben, R.; Tomov, V.
2017-04-01
In this work we present a FCT-like Maximum-Principle Preserving (MPP) method to solve the transport equation. We use high-order polynomial spaces; in particular, we consider up to 5th order spaces in two and three dimensions and 23rd order spaces in one dimension. The method combines the concepts of positive basis functions for discontinuous Galerkin finite element spatial discretization, locally defined solution bounds, element-based flux correction, and non-linear local mass redistribution. We consider a simple 1D problem with non-smooth initial data to explain and understand the behavior of different parts of the method. Convergence tests in space indicate that high-order accuracy is achieved. Numerical results from several benchmarks in two and three dimensions are also reported.
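The flux-correction idea can be sketched in one dimension for first-order finite volumes (a much simpler setting than the paper's high-order discontinuous Galerkin spaces; the Zalesak-style limiter and the test data are illustrative): a bounded low-order upwind solution is corrected toward the oscillatory Lax-Wendroff flux only as far as locally defined solution bounds allow.

```python
import numpy as np

def fct_advect(u0, nu=0.4, steps=100):
    """Flux-corrected transport for u_t + u_x = 0 on a periodic grid: a monotone
    low-order upwind update plus a Zalesak-limited antidiffusive correction
    toward the (oscillatory) Lax-Wendroff flux."""
    u = u0.astype(float).copy()
    for _ in range(steps):
        u_low = u - nu * (u - np.roll(u, 1))              # monotone upwind update
        A = 0.5 * nu * (1.0 - nu) * (np.roll(u, -1) - u)  # antidiffusive flux at face i+1/2
        Am = np.roll(A, 1)                                # flux at face i-1/2
        # locally defined solution bounds from low-order neighbours
        umax = np.maximum.reduce([np.roll(u_low, 1), u_low, np.roll(u_low, -1)])
        umin = np.minimum.reduce([np.roll(u_low, 1), u_low, np.roll(u_low, -1)])
        Pp = np.maximum(Am, 0.0) - np.minimum(A, 0.0)     # antidiffusion into each cell
        Pm = np.maximum(A, 0.0) - np.minimum(Am, 0.0)     # antidiffusion out of each cell
        Rp = np.where(Pp > 0.0,
                      np.minimum(1.0, (umax - u_low) / np.where(Pp > 0.0, Pp, 1.0)), 1.0)
        Rm = np.where(Pm > 0.0,
                      np.minimum(1.0, (u_low - umin) / np.where(Pm > 0.0, Pm, 1.0)), 1.0)
        # each face flux is limited by both the cell it leaves and the cell it enters
        C = np.where(A >= 0.0, np.minimum(np.roll(Rp, -1), Rm),
                     np.minimum(Rp, np.roll(Rm, -1)))
        u = u_low + np.roll(C * A, 1) - C * A             # conservative limited update
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)            # discontinuous initial data
u = fct_advect(u0)
```

Because the limited correction can never push a cell past its local bounds, and the face fluxes cancel in pairs, the scheme stays within the initial range while conserving mass exactly.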
Promising thermoelectric properties of phosphorenes.
Sevik, Cem; Sevinçli, Hâldun
2016-09-02
Electronic, phononic, and thermoelectric transport properties of single-layer black- and blue-phosphorene structures are investigated with first-principles based ballistic electron and phonon transport calculations employing hybrid functionals. The maximum values of the room-temperature thermoelectric figure of merit ZT along the armchair and zigzag directions of black-phosphorene, ∼0.5 and ∼0.25, are found to be rather smaller than those obtained with first-principles based semiclassical Boltzmann transport theory calculations. On the other hand, the maximum room-temperature ZT of blue-phosphorene is predicted to be substantially high, and remarkable values as high as 2.5 are obtained at elevated temperatures. Although these figures are obtained in the ballistic limit, our findings mark the strong possibility of high thermoelectric performance of blue-phosphorene in new-generation thermoelectric applications.
Time optimal paths for high speed maneuvering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reister, D.B.; Lenhart, S.M.
1993-01-01
Recent theoretical results have completely solved the problem of determining the minimum length path for a vehicle with a minimum turning radius moving from an initial configuration to a final configuration. Time optimal paths for a constant speed vehicle are a subset of the minimum length paths. This paper uses the Pontryagin maximum principle to find time optimal paths for a constant speed vehicle. The time optimal paths consist of sequences of arcs of circles and straight lines. The maximum principle introduces concepts (dual variables, bang-bang solutions, singular solutions, and transversality conditions) that provide important insight into the nature of the time optimal paths. We explore the properties of the optimal paths and present some experimental results for a mobile robot following an optimal path.
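The circle-and-line structure of the optimal paths makes their lengths elementary to evaluate. A sketch for two of the six candidate words (LSL and RSR; a complete planner would also compare LSR, RSL, RLR and LRL, and the start and goal configurations here are arbitrary):

```python
import numpy as np

def lsl_length(p0, p1, r=1.0):
    """Length of the Left-Straight-Left word, one of the circle-line
    sequences that the maximum principle restricts optimal paths to."""
    (x0, y0, t0), (x1, y1, t1) = p0, p1
    c0 = np.array([x0 - r * np.sin(t0), y0 + r * np.cos(t0)])  # left turning-circle centers
    c1 = np.array([x1 - r * np.sin(t1), y1 + r * np.cos(t1)])
    d = c1 - c0
    phi = np.arctan2(d[1], d[0])               # heading of the straight segment
    arc0 = (phi - t0) % (2 * np.pi)            # first left arc
    arc1 = (t1 - phi) % (2 * np.pi)            # final left arc
    return r * (arc0 + arc1) + np.hypot(*d)

def rsr_length(p0, p1, r=1.0):
    """Length of the Right-Straight-Right word."""
    (x0, y0, t0), (x1, y1, t1) = p0, p1
    c0 = np.array([x0 + r * np.sin(t0), y0 - r * np.cos(t0)])  # right turning-circle centers
    c1 = np.array([x1 + r * np.sin(t1), y1 - r * np.cos(t1)])
    d = c1 - c0
    phi = np.arctan2(d[1], d[0])
    arc0 = (t0 - phi) % (2 * np.pi)
    arc1 = (phi - t1) % (2 * np.pi)
    return r * (arc0 + arc1) + np.hypot(*d)

start, goal = (0.0, 0.0, 0.0), (4.0, 4.0, np.pi / 2)
best = min(lsl_length(start, goal), rsr_length(start, goal))
```

For these configurations the LSL word wins, with length √18 + π/2 at unit turning radius.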
Elements of the cognitive universe
NASA Astrophysics Data System (ADS)
Topsøe, Flemming
2017-06-01
"The least biased inference, taking available information into account, is the one with maximum entropy". So we are taught by Jaynes. The many followers from a broad spectrum of the natural and social sciences point to the wisdom of this principle, the maximum entropy principle, MaxEnt. But "entropy" need not be tied only to classical entropy and thus to probabilistic thinking. In fact, the arguments found in Jaynes' writings and elsewhere can, as we shall attempt to demonstrate, profitably be revisited, elaborated and transformed to apply in a much more general abstract setting. The approach is based on game theoretical thinking. Philosophical considerations dealing with notions of cognition - basically truth and belief - lie behind. Quantitative elements are introduced via a concept of description effort. An interpretation of Tsallis Entropy is indicated.
Oscillatory electrostatic potential on graphene induced by group IV element decoration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Du, Chunyan; Yu, Liwei; Liu, Xiaojie
The structures and electronic properties of graphene partially decorated with C, Si and Ge were investigated by first-principles calculations. The calculations show that the interaction between graphene and the decoration patches is weak and that the semiconductor patches act as agents for weak electron doping without much disturbing the graphene electronic π-bands. Redistribution of electrons due to the partial decoration lowers the electrostatic potential in the decorated graphene areas, thus inducing an electric field across the boundary between the decorated and non-decorated domains. Such an alternating electric field can change normal stochastic adatom diffusion to biased diffusion, leading to selective mass transport.
Transonic Shock-Wave/Boundary-Layer Interactions on an Oscillating Airfoil
NASA Technical Reports Server (NTRS)
Davis, Sanford S.; Malcolm, Gerald N.
1980-01-01
Unsteady aerodynamic loads were measured on an oscillating NACA 64A010 airfoil in the NASA Ames 11 by 11 ft Transonic Wind Tunnel. Data are presented to show the effect of the unsteady shock-wave/boundary-layer interaction on the fundamental frequency lift, moment, and pressure distributions. The data show that weak shock waves induce an unsteady pressure distribution that can be predicted quite well, while stronger shock waves cause complex frequency-dependent distributions due to flow separation. An experimental test of the principles of linearity and superposition showed that they hold for weak shock waves, while flows with stronger shock waves cannot be superimposed.
Oscillatory electrostatic potential on graphene induced by group IV element decoration
Du, Chunyan; Yu, Liwei; Liu, Xiaojie; ...
2017-10-13
The structures and electronic properties of graphene partially decorated with C, Si and Ge were investigated by first-principles calculations. The calculations show that the interaction between graphene and the decoration patches is weak and that the semiconductor patches act as agents for weak electron doping without much disturbing the graphene electronic π-bands. Redistribution of electrons due to the partial decoration lowers the electrostatic potential in the decorated graphene areas, thus inducing an electric field across the boundary between the decorated and non-decorated domains. Such an alternating electric field can change normal stochastic adatom diffusion to biased diffusion, leading to selective mass transport.
Translational vibrations between chains of hydrogen-bonded molecules in solid-state aspirin form I
NASA Astrophysics Data System (ADS)
Takahashi, Masae; Ishikawa, Yoichi
2013-06-01
We perform dispersion-corrected first-principles calculations, and far-infrared (terahertz) spectroscopic experiments at 4 K, to examine translational vibrations between chains of hydrogen-bonded molecules in solid-state aspirin form I. The calculated frequencies and relative intensities reproduce the observed spectrum to an accuracy of 11 cm-1 or less. The stronger of the two peaks assigned to the translational mode includes the stretching vibration of the weak hydrogen bond between the acetyl groups of a neighboring one-dimensional chain. A calculation for aspirin form II, performed for comparison, gives the stretching vibration of the weak hydrogen bond within the one-dimensional chain.
Maximum powers of low-loss series-shunt FET RF switches
NASA Astrophysics Data System (ADS)
Yang, Z.; Hu, X.; Yang, J.; Simin, G.; Shur, M.; Gaska, R.
2009-02-01
A low-loss, high-power single-pole single-throw (SPST) monolithic RF switch based on AlGaN/GaN heterojunction field effect transistors (HFETs) demonstrates an insertion loss and isolation of 0.15 dB and 45.9 dB at 0.5 GHz, and 0.23 dB and 34.3 dB at 2 GHz. The maximum switching power is estimated to be +47 dBm or higher. Factors determining the maximum switching power are analyzed. Design principles to obtain equally high switching powers in the ON and OFF states are developed.
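The quoted figures are logarithmic, so a quick conversion helps in reading them: 0.15 dB insertion loss means about 96.6% of the power is transmitted, 45.9 dB isolation means only about 2.6e-5 of the power leaks through, and +47 dBm corresponds to roughly 50 W. A sketch of the standard conversions (helper names are illustrative):

```python
def db_to_ratio(db):
    """Convert a decibel value to a linear power ratio: 10**(dB/10)."""
    return 10.0 ** (db / 10.0)

def dbm_to_watts(dbm):
    """Convert dBm (decibels relative to 1 mW) to watts."""
    return db_to_ratio(dbm) * 1e-3

transmitted = db_to_ratio(-0.15)   # fraction of power surviving a 0.15 dB loss
leaked = db_to_ratio(-45.9)        # fraction leaking through 45.9 dB isolation
max_power_w = dbm_to_watts(47.0)   # +47 dBm expressed in watts
```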
NASA Astrophysics Data System (ADS)
Sondag, Andrea; Dittus, Hansjörg
2016-08-01
The Weak Equivalence Principle (WEP) is at the basis of General Relativity, the best theory of gravitation today. It has been, and still is, tested with different methods and accuracies. In this paper an overview is given of tests of the Weak Equivalence Principle done in the past, developed in the present and planned for the future. The best result up to now is derived from the data of torsion balance experiments by Schlamminger et al. (2008). An intuitive test of the WEP consists of comparing the accelerations of two free-falling test masses of different composition. This was carried out by Kuroda & Mio (1989, 1990) with what is to date the most precise result for this setup. There is still more potential in this method, especially with a longer free-fall time and sensors with a higher resolution. Providing a free-fall time of 4.74 s (9.3 s using the catapult), the drop tower of the Center of Applied Space Technology and Microgravity (ZARM) at the University of Bremen is an ideal facility for further improvements. In 2001 a free-fall experiment with highly sensitive SQUID (Superconducting QUantum Interference Device) sensors tested the WEP with an accuracy of 10^-7 (Nietzsche, 2001). Under optimal conditions one could reach an accuracy of 10^-13 with this setup (Vodel et al., 2001). A description of this experiment and its results is given in the next part of this paper. For the free fall of macroscopic test masses it is important to start with precisely defined starting conditions for the positions and velocities of the test masses. An Electrostatic Positioning System (EPS) has been developed for this purpose. It is described in the last part of this paper.
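The quoted 4.74 s free-fall time can be sanity-checked with the elementary relations h = g t^2 / 2 and v = g t, which put the usable drop at roughly 110 m and the capsule speed at deceleration near 46.5 m/s. A back-of-envelope sketch neglecting drag (names are illustrative):

```python
G = 9.81  # standard gravity in m/s^2

def drop_height(t):
    """Distance fallen from rest in time t, neglecting drag: h = g*t^2/2."""
    return 0.5 * G * t * t

def impact_speed(t):
    """Speed after a free fall of duration t, neglecting drag: v = g*t."""
    return G * t

height = drop_height(4.74)    # ~110 m of free fall for the quoted 4.74 s
speed = impact_speed(4.74)    # ~46.5 m/s at the end of the drop
```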
Physical Premium Principle: A New Way for Insurance Pricing
NASA Astrophysics Data System (ADS)
Darooneh, Amir H.
2005-03-01
In our previous work we suggested a way of computing the non-life insurance premium. The probable surplus of the insurer company was assumed to be distributed according to canonical ensemble theory. The Esscher premium principle appeared as a special case. The difference between our method and traditional premium calculation principles was shown by simulation. Here we construct a theoretical foundation for the main assumption in our method; in this respect we present a new (physical) definition of economic equilibrium. This approach lets us apply the maximum entropy principle to economic systems. We also extend our method to deal with the problem of premium calculation for correlated risk categories. Like the Bühlmann economic premium principle, our method considers the effect of the market on the premium, but in a different way.
Comment on "Inference with minimal Gibbs free energy in information field theory".
Iatsenko, D; Stefanovska, A; McClintock, P V E
2012-03-01
Enßlin and Weig [Phys. Rev. E 82, 051112 (2010)] have introduced a "minimum Gibbs free energy" (MGFE) approach for estimation of the mean signal and signal uncertainty in Bayesian inference problems: it aims to combine the maximum a posteriori (MAP) and maximum entropy (ME) principles. We point out, however, that there are some important questions to be clarified before the new approach can be considered fully justified, and therefore able to be used with confidence. In particular, after obtaining a Gaussian approximation to the posterior in terms of the MGFE at some temperature T, this approximation should always be raised to the power of T to yield a reliable estimate. In addition, we show explicitly that MGFE indeed incorporates the MAP principle, as well as the MDI (minimum discrimination information) approach, but not the well-known ME principle of Jaynes [E.T. Jaynes, Phys. Rev. 106, 620 (1957)]. We also illuminate some related issues and resolve apparent discrepancies. Finally, we investigate the performance of MGFE estimation for different values of T, and we discuss the advantages and shortcomings of the approach.
A mechanism producing power law etc. distributions
NASA Astrophysics Data System (ADS)
Li, Heling; Shen, Hongjun; Yang, Bin
2017-07-01
Power law distributions play an increasingly important role in the study of complex systems. Motivated by the intractability of complex systems, the idea of incomplete statistics is adopted and extended: three different exponential factors are introduced into the normalization condition, the statistical average, and the Shannon entropy, and probability distribution functions of exponential form, of power-law form, and of the product form of a power function and an exponential function are derived from the Shannon entropy and the maximum entropy principle. It is thus shown that the maximum entropy principle can completely replace the equal-probability hypothesis. Since distributions of the product form between a power function and an exponential function cannot be derived from the equal-probability hypothesis but can be derived with the aid of the maximum entropy principle, it can be concluded that the maximum entropy principle is the more fundamental principle, embodying concepts more broadly and revealing the laws of motion of objects more fundamentally. At the same time, this principle also reveals an intrinsic link between nature and the diverse objects of human society.
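The mechanics behind this claim can be made concrete with a standard Lagrange-multiplier derivation (a textbook sketch, not the authors' specific incomplete-statistics formalism):

```latex
\text{Maximize } S = -\sum_i p_i \ln p_i
\quad\text{subject to}\quad
\sum_i p_i = 1, \qquad \sum_i p_i x_i = \mu .
\\[4pt]
\frac{\partial}{\partial p_i}\Big[S
  - \alpha\Big(\sum_j p_j - 1\Big)
  - \lambda\Big(\sum_j p_j x_j - \mu\Big)\Big]
  = -\ln p_i - 1 - \alpha - \lambda x_i = 0
\;\Longrightarrow\;
p_i \propto e^{-\lambda x_i}.
```

Adding a further constraint on $\langle \ln x \rangle = \sum_i p_i \ln x_i$ (multiplier $\nu$) yields $p_i \propto x_i^{-\nu} e^{-\lambda x_i}$, the product of a power function and an exponential; dropping the mean constraint instead leaves the pure power law $p_i \propto x_i^{-\nu}$, which is exactly the family of forms the abstract attributes to the maximum entropy principle.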
Freeman, Laura A; Anwer, Bilal; Brady, Ryan P; Smith, Benjamin C; Edelman, Theresa L; Misselt, Andrew J; Cressman, Erik N K
2010-03-01
To measure and compare temperature changes in a recently developed gel phantom for thermochemical ablation as a function of reagent strength and concentration with several acids and bases. Aliquots (0.5-1 mL) of hydrochloric acid or acetic acid and sodium hydroxide or aqueous ammonia were injected for 5 seconds into a hydrophobic gel phantom. Stepwise increments in concentration were used to survey the temperature changes caused by these reactions. Injections were performed in triplicate, measured with a thermocouple probe, and plotted as functions of concentration and time. Maximum temperatures were reached almost immediately in all cases, reaching 75 degrees C-110 degrees C at the higher concentrations. The highest temperatures were seen with hydrochloric acid and either base. More concentrated solutions of sodium hydroxide tended to mix incompletely, such that experiments at 9 M and higher were difficult to perform consistently. Higher concentrations for any reagent resulted in higher temperatures. Stronger acid and base combinations resulted in higher temperatures versus weak acid and base combinations at the same concentration. Maximum temperatures obtained are in a range known to cause tissue coagulation, and all combinations tested therefore appeared suitable for further investigation in thermochemical ablation. Because of the loss of the reaction chamber shape at higher concentrations of stronger agents, the phantom does not allow complete characterization under these circumstances. Adequate mixing of reagents to maximize heating potential and avoid systemic exposure to unreacted acid and base must be addressed if the method is to be safely employed in tissues. In addition, understanding factors that control lesion shape in a more realistic tissue model will be critical. Copyright 2010 SIR. Published by Elsevier Inc. All rights reserved.
Active and hibernating turbulence in minimal channel flow of newtonian and polymeric fluids.
Xi, Li; Graham, Michael D
2010-05-28
Turbulent channel flow of drag-reducing polymer solutions is simulated in minimal flow geometries. Even in the Newtonian limit, we find intervals of "hibernating" turbulence that display many features of the universal maximum drag reduction asymptote observed in polymer solutions: weak streamwise vortices, nearly nonexistent streamwise variations, and a mean velocity gradient that quantitatively matches experiments. As viscoelasticity increases, the frequency of these intervals also increases, while the intervals themselves are unchanged, leading to flows that increasingly resemble maximum drag reduction.
Relaxation of a High-Energy Quasiparticle in a One-Dimensional Bose Gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Shina; Glazman, Leonid I.; Pustilnik, Michael
2010-08-27
We evaluate the relaxation rate of high-energy quasiparticles in a weakly interacting one-dimensional Bose gas. Unlike in higher dimensions, the rate is a nonmonotonic function of temperature, with a maximum at the crossover to the state of suppressed density fluctuations. At the maximum, the relaxation rate may significantly exceed its zero-temperature value. We also find the dependence of the differential inelastic scattering rate on the transferred energy. This rate yields information about temperature dependence of local pair correlations.
Shiina, Yumi; Inai, Kei; Takahashi, Tatsunori; Shimomiya, Yamato; Ishizaki, Umiko; Fukushima, Kenji; Nagao, Michinobu
2018-02-01
We developed a novel imaging technique, designated as vortex flow (VF) mapping, which presents a vortex flow visually on conventional two-dimensional (2D) cine MRI. Using it, we assessed circumferential VF patterns and influences on RA thrombus and supraventricular tachycardia (SVT) in AP connection-type Fontan circulation. Retrospectively, we enrolled 27 consecutive patients (25.1 ± 9.2 years) and 7 age-matched controls who underwent cardiac MRI. Conventional cine images acquired using a 1.5-Tesla scanner were scanned for axial and coronal cross sections of the RA. We developed "vortex flow mapping" to demonstrate the ratio of the circumferential voxel movement at each phase to the total movement throughout a cardiac cycle towards the RA center. The maximum ratio was used as a magnitude of vortex flow (MVF%) in RA cine imaging. We also measured percentages of strong and weak VF areas (VFA%). Furthermore, in 10 out of 27, we compared VF between previous CMR (3.8 ± 1.5 years ago) and latest CMR. Of the patients, 15 had cardiovascular complications (Group A); 12 did not (Group B). A transaxial image showed that strong VFA% in Group A was significantly smaller than that in Group B or controls. A coronal view revealed that strong VFA% was also smaller, and weak VFA% was larger, in Group A than in Group B or controls (P < 0.05 and P < 0.05). Maximum MVF% in Group A was significantly smaller than in other groups (P < 0.001). Univariate logistic analyses revealed weak VFA% on a coronal image and serum total bilirubin level as factors affecting cardiovascular complications (Odds ratio 1.14 and 66.1, 95% CI 1.004-1.30 and 1.59-2755.6, P values < 0.05 and < 0.05, respectively). Compared to the previous CMR, smaller maximum MVF%, smaller strong VFA%, and larger weak VFA% were identified in the latest CMR. Circumferentially weak VFA% on a coronal image can be one surrogate marker of SVT and thrombus in AP connection-type Fontan circulation. This simple VF assessment is clinically useful to detect blood stagnation.
NASA Astrophysics Data System (ADS)
Gomez, John A.; Henderson, Thomas M.; Scuseria, Gustavo E.
2017-11-01
In electronic structure theory, restricted single-reference coupled cluster (CC) captures weak correlation but fails catastrophically under strong correlation. Spin-projected unrestricted Hartree-Fock (SUHF), on the other hand, misses weak correlation but captures a large portion of strong correlation. The theoretical description of many important processes, e.g. molecular dissociation, requires a method capable of accurately capturing both weak and strong correlation simultaneously, and would likely benefit from a combined CC-SUHF approach. Based on what we have recently learned about SUHF written as particle-hole excitations out of a symmetry-adapted reference determinant, we here propose a heuristic CC doubles model to attenuate the dominant spin collective channel of the quadratic terms in the CC equations. Proof of principle results presented here are encouraging and point to several paths forward for improving the method further.
Designing perturbative metamaterials from discrete models.
Matlack, Kathryn H; Serra-Garcia, Marc; Palermo, Antonio; Huber, Sebastian D; Daraio, Chiara
2018-04-01
Identifying material geometries that lead to metamaterials with desired functionalities presents a challenge for the field. Discrete, or reduced-order, models provide a concise description of complex phenomena, such as negative refraction, or topological surface states; therefore, the combination of geometric building blocks to replicate discrete models presenting the desired features represents a promising approach. However, there is no reliable way to solve such an inverse problem. Here, we introduce 'perturbative metamaterials', a class of metamaterials consisting of weakly interacting unit cells. The weak interaction allows us to associate each element of the discrete model with individual geometric features of the metamaterial, thereby enabling a systematic design process. We demonstrate our approach by designing two-dimensional elastic metamaterials that realize Veselago lenses, zero-dispersion bands and topological surface phonons. While our selected examples are within the mechanical domain, the same design principle can be applied to acoustic, thermal and photonic metamaterials composed of weakly interacting unit cells.
NASA Astrophysics Data System (ADS)
Goddard, William
2013-03-01
For soft materials applications it is essential to obtain accurate descriptions of the weak (London dispersion, electrostatic) interactions between nonbond units, to include interactions with and stabilization by solvent, and to obtain accurate free energies and entropic changes during chemical, physical, and thermal processing. We will describe some of the advances being made in first-principles-based methods for treating soft materials, with applications selected from new organic electrodes and electrolytes for batteries and fuel cells, forward osmosis for water cleanup, extended matter stable at ambient conditions, and drugs for modulating activation of GPCR membrane proteins.
Higher order temporal finite element methods through mixed formalisms.
Kim, Jinkyu
2014-01-01
The extended framework of Hamilton's principle and the mixed convolved action principle provide new rigorous weak variational formalism for a broad range of initial boundary value problems in mathematical physics and mechanics. In this paper, their potential when adopting temporally higher order approximations is investigated. The classical single-degree-of-freedom dynamical systems are primarily considered to validate and to investigate the performance of the numerical algorithms developed from both formulations. For the undamped system, all the algorithms are symplectic and unconditionally stable with respect to the time step. For the damped system, they are shown to be accurate with good convergence characteristics.
Principles of the radiosity method versus radiative transfer for canopy reflectance modeling
NASA Technical Reports Server (NTRS)
Gerstl, Siegfried A. W.; Borel, Christoph C.
1992-01-01
The radiosity method is introduced to plant canopy reflectance modeling. We review the physics principles of the radiosity method which originates in thermal radiative transfer analyses when hot and cold surfaces are considered within a given enclosure. The radiosity equation, which is an energy balance equation for discrete surfaces, is described and contrasted with the radiative transfer equation, which is a volumetric energy balance equation. Comparing the strengths and weaknesses of the radiosity method and the radiative transfer method, we conclude that both methods are complementary to each other. Results of sample calculations are given for canopy models with up to 20,000 discrete leaves.
THE STRUCTURE, MAGNETISM AND CONDUCTIVITY OF Li3V2(PO4)3: A THEORETICAL AND EXPERIMENTAL STUDY
NASA Astrophysics Data System (ADS)
Lin, Zhi-Ping; Zhao, Yu-Jun; Zhao, Yan-Ming
2013-10-01
In this paper, we present a combination of first-principles and experimental investigations of the structural, magnetic and electronic properties of monoclinic Li3V2(PO4)3. The change in dielectric constant indicates that a structural phase transition appears around 120°C. First-principles calculations and magnetic measurements show that Li3V2(PO4)3 is a compound with weak ferromagnetism, with a Curie constant of C = 0.004 and a Curie temperature of 140 K. The experimental and theoretical results demonstrate that Li3V2(PO4)3 is a typical semiconductor.
2015-08-20
evapotranspiration (ET) over oceans may be significantly lower than previously thought. The MEP model parameterized turbulent transfer coefficients...fluxes, ocean freshwater fluxes, regional crop yield among others. An on-going study suggests that the global annual evapotranspiration (ET) over...Bras, Jingfeng Wang. A model of evapotranspiration based on the theory of maximum entropy production, Water Resources Research, (03 2011): 0. doi
Quality of Life in Rural Areas: Processes of Divergence and Convergence
ERIC Educational Resources Information Center
Spellerberg, Annette; Huschka, Denis; Habich, Roland
2007-01-01
In Germany, processes can be observed that have long been out of keeping with the principle of equality of opportunity. Unemployment is concentrated in the structurally weak peripheral areas, in Eastern Germany in particular; emigration of young and better-educated people to the West is not diminishing, but contrary to expectation is again on the…
Sometimes … Dances … 'Do More' on the Page than They Ever Did on the Stage
ERIC Educational Resources Information Center
Curl, Gordon
2018-01-01
This critical review aims to expose the confusion that exists in professional arts criticism in general, and dance criticism in particular--with implications for dance education. The underlying principles of formalism are outlined and its strengths and weaknesses highlighted. A demand for more interpretive, theoretical and contextual criticism is…
Generating Enthusiasm with Generative Phonology.
ERIC Educational Resources Information Center
Dickerson, Wayne B.
This paper attempts a systematic approach to the teaching of word stress in the ESL classroom. Stress assignment rules from Chomsky and Halle and from Ross are used to establish the SISL Principle (Stress Initial Strong Left), for final weak-syllable words. On the basis of spelling, this rule can be applied correctly to 95 out of 100 cases. (AM)
A Report to the Iowa General Assembly on the Community College Funding Formula.
ERIC Educational Resources Information Center
Iowa State Dept. of Education, Des Moines. Div. of Community Colleges and Workforce Preparation.
Examining the methodology used to fund Iowa's 15 community colleges, this report reviews the history of the state's community colleges, highlights the strengths and weaknesses of the funding formula, and describes principles upon which sound funding should be based. Following a preface and executive summary, an introduction is provided to Iowa's…
Fostering Good Governance at School Level in Honduras: The Role of Transparency Bulletin Boards
ERIC Educational Resources Information Center
Boehm, Frédéric; Caprio, Temby
2014-01-01
Corruption is at the core of weak governance. In the education sector, corruption is a threat to the quality of and access to education. Although the diagnosis is straightforward, effective reforms are more difficult to implement. The principles of good governance (transparency, participation, accountability, and integrity) provide us guidance,…
Fully- and weakly-nonlinear biperiodic traveling waves in shallow water
NASA Astrophysics Data System (ADS)
Hirakawa, Tomoaki; Okamura, Makoto
2018-04-01
We directly calculate fully nonlinear traveling waves that are periodic in two independent horizontal directions (biperiodic) in shallow water. Based on the Riemann theta function, we also calculate exact periodic solutions to the Kadomtsev-Petviashvili (KP) equation, which can be obtained by assuming weakly-nonlinear, weakly-dispersive, weakly-two-dimensional waves. To clarify how the accuracy of the biperiodic KP solution is affected when some of the KP approximations are not satisfied, we compare the fully- and weakly-nonlinear periodic traveling waves of various wave amplitudes, wave depths, and interaction angles. As the interaction angle θ decreases, the wave frequency and the maximum wave height of the biperiodic KP solution both increase, and the central peak sharpens and grows beyond the height of the corresponding direct numerical solutions, indicating that the biperiodic KP solution cannot qualitatively model direct numerical solutions for θ ≲ 45°. To remedy the weak two-dimensionality approximation, we apply the correction of Yeh et al (2010 Eur. Phys. J. Spec. Top. 185 97-111) to the biperiodic KP solution, which substantially improves the solution accuracy and results in wave profiles that are indistinguishable from most other cases.
Dynamical transition between weak and strong coupling in Brillouin laser pulse amplification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schluck, F.; Lehmann, G.; Müller, C.
Short laser pulse amplification via stimulated Brillouin backscattering in plasma is considered. Previous work distinguishes between the weakly and strongly coupled regimes and treats them separately. It is shown here that such a separation is not generally applicable, because strong and weak coupling interaction regimes are entwined with each other. An initially weakly coupled amplification scenario may dynamically transform into strong coupling. This happens when the local seed amplitude grows and thus triggers the strongly driven plasma response. On the other hand, when the pump pulse in a strong coupling scenario gets depleted, its amplitude might drop below the strong coupling threshold. This may cause significant changes in the final seed pulse shape. Furthermore, experimentally used pump pulses are typically Gaussian-shaped. The intensity threshold for strong coupling may only be exceeded around the maximum and not in the wings of the pulse. Here too, a description valid in both strong and weak coupling regimes is required. We propose such a unified treatment, which allows us, in particular, to study the dynamic transition between weak and strong coupling. Consequences for the pulse forms of the amplified seed are discussed.
Maximum entropy production in environmental and ecological systems.
Kleidon, Axel; Malhi, Yadvinder; Cox, Peter M
2010-05-12
The coupled biosphere-atmosphere system entails a vast range of processes at different scales, from ecosystem exchange fluxes of energy, water and carbon to the processes that drive global biogeochemical cycles, atmospheric composition and, ultimately, the planetary energy balance. These processes are generally complex with numerous interactions and feedbacks, and they are irreversible in their nature, thereby producing entropy. The proposed principle of maximum entropy production (MEP), based on statistical mechanics and information theory, states that thermodynamic processes far from thermodynamic equilibrium will adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate. This issue focuses on the latest development of applications of MEP to the biosphere-atmosphere system including aspects of the atmospheric circulation, the role of clouds, hydrology, vegetation effects, ecosystem exchange of energy and mass, biogeochemical interactions and the Gaia hypothesis. The examples shown in this special issue demonstrate the potential of MEP to contribute to improved understanding and modelling of the biosphere and the wider Earth system, and also explore limitations and constraints to the application of the MEP principle.
DEM interpolation weight calculation modulus based on maximum entropy
NASA Astrophysics Data System (ADS)
Chen, Tian-wei; Yang, Xia
2015-12-01
Traditional interpolation methods for gridded DEMs can produce negative weights. In this article, the principle of maximum entropy is used to analyze a model system that depends on the moduli of the spatial weights. The negative-weight problem of DEM interpolation is studied by building a maximum entropy model; by adding nonnegativity and first- and second-order moment constraints, the negative-weight problem is solved. The correctness and accuracy of the method were validated with a genetic algorithm implemented in MATLAB. The method is compared with the Yang Chizhong interpolation method and quadratic programming. The comparison shows that the magnitude and scaling of the maximum entropy weights fit the spatial relations, and that the accuracy is superior to the latter two methods.
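The reason maximum entropy eliminates negative weights is that the entropy-maximizing weights take an exponential form, which is nonnegative by construction. A one-dimensional sketch with only the normalization and first-order moment constraints (a simplified illustration of the idea, not the paper's full model with second-order constraints; names are illustrative), solving for the Lagrange multiplier by bisection:

```python
import math

def maxent_weights(xs, x0, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy interpolation weights in one dimension.

    Constraints: sum(w) = 1 (normalization) and sum(w*x) = x0 (first
    moment, i.e. the weights reproduce the target coordinate).  The
    entropy-maximizing weights have the form w_i ~ exp(lam * x_i), so
    they are nonnegative by construction; lam is found by bisection.
    """
    def moment(lam):
        ws = [math.exp(lam * x) for x in xs]
        z = sum(ws)
        return sum(w * x for w, x in zip(ws, xs)) / z

    for _ in range(iters):      # moment(lam) is increasing in lam
        mid = 0.5 * (lo + hi)
        if moment(mid) < x0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(lam * x) for x in xs]
    z = sum(ws)
    return [w / z for w in ws]
```

For a target at the centroid of the sample coordinates the multiplier vanishes and the weights become uniform; moving the target toward one end skews the weights monotonically toward the nearer samples, always staying positive.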
τ mapping of the autofluorescence of the human ocular fundus
NASA Astrophysics Data System (ADS)
Schweitzer, Dietrich; Kolb, Achim; Hammer, Martin; Thamm, Eike
2000-12-01
Changes in the autofluorescence of the living eye-ground are assumed to be an important marker for discovering the pathomechanism of age-related macular degeneration. Discrimination of fluorophores is required, as well as presentation of their 2D distribution. Because of the transmission of the ocular media, differentiation between fluorophores by their spectral excitation and emission ranges is limited. Using the laser scanner principle, the fluorescence lifetime can be measured in 2D. Keeping within the maximum permissible exposure, only a very weak signal is detectable, which is optimal for the application of time-correlated single photon counting (TCSPC). In an experimental set-up, pulses of an actively mode-locked Ar+ laser (FWHM = 300 ps, repetition rate = 77.3 MHz, selectable wavelengths: 457.9, 465.8, 472.7, 496.5, 501.7, 514.5 nm) excite the eye-ground during the scanning process. A routing module realizes the synchronization between scanning and TCSPC. Investigation of structured samples of Rhodamine 6G and Coumarin 522 showed that a mono-exponential decay can be calculated with an error of less than 10 percent using only a few hundred photons. The maximum likelihood algorithm delivers the most correct results. A first in vivo τ image exhibits a lifetime of 1.5 ns in the nasal part and 5 ns at large retinal vessels.
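The claim that a few hundred photons suffice for a sub-10% mono-exponential fit follows from the maximum likelihood estimator for an exponential decay, whose relative error scales as 1/sqrt(N) (about 5% at N = 400). A hedged sketch of the estimator on simulated arrival times (an idealized model without the instrument response function or background counts; names are illustrative):

```python
import random

def mle_lifetime(times):
    """Maximum-likelihood lifetime for a mono-exponential decay.

    For p(t) = (1/tau) * exp(-t/tau), maximizing the log-likelihood
    over the photon arrival times gives tau_hat = mean(times).
    """
    return sum(times) / len(times)

def simulate_photons(tau, n, seed=0):
    """Draw n exponential photon arrival times (in ns) for lifetime tau."""
    rng = random.Random(seed)
    return [rng.expovariate(1.0 / tau) for _ in range(n)]

# A few hundred photons: relative error ~ 1/sqrt(N), i.e. ~5% at N = 400,
# consistent with the <10% error reported for the structured samples.
tau_hat = mle_lifetime(simulate_photons(1.5, 400))
```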
NASA Astrophysics Data System (ADS)
Hu, Xiaogang; Suresh, Aneesha K.; Rymer, William Z.; Suresh, Nina L.
2015-12-01
Objective. The advancement of surface electromyogram (sEMG) recording and signal processing techniques has allowed us to characterize the recruitment properties of a substantial population of motor units (MUs) non-invasively. Here we seek to determine whether MU recruitment properties are modified in paretic muscles of hemispheric stroke survivors. Approach. Using an advanced EMG sensor array, we recorded sEMG during isometric contractions of the first dorsal interosseous muscle over a range of contraction levels, from 20% to 60% of maximum, in both paretic and contralateral muscles of stroke survivors. Using MU decomposition techniques, MU action potential amplitudes and recruitment thresholds were derived for simultaneously activated MUs in each isometric contraction. Main results. Our results show a significant disruption of recruitment organization in paretic muscles, in that the size principle describing recruitment rank order was materially distorted. MUs were recruited over a very narrow force range with increasing force output, generating a strong clustering effect, when referenced to recruitment force magnitude. Such disturbances in MU properties also correlated well with the impairment of voluntary force generation. Significance. Our findings provide direct evidence regarding MU recruitment modifications in paretic muscles of stroke survivors, and suggest that these modifications may contribute to weakness for voluntary contractions.
LETTER TO THE EDITOR: Free-response operator characteristic models for visual search
NASA Astrophysics Data System (ADS)
Hutchinson, T. P.
2007-05-01
Computed tomography of diffraction enhanced imaging (DEI-CT) is a novel x-ray phase-contrast computed tomography technique applied to inspect weakly absorbing low-Z samples. Refraction-angle images, extracted from a series of raw DEI images measured at different positions on the rocking curve of the analyser, can be regarded as projections for DEI-CT. Based on them, the distribution of the refractive index decrement in the sample can be reconstructed according to the principles of CT. How to combine extraction methods and reconstruction algorithms to obtain the most accurate reconstructed results is investigated in detail in this paper. Two kinds of comparison, a comparison of different extraction methods and a comparison between 'two-step' algorithms and the Hilbert filtered backprojection (HFBP) algorithm, lead to the conclusion that the HFBP algorithm based on the maximum refraction-angle (MRA) method may be the best combination at present. Though all current extraction methods, including the MRA method, are approximate and cannot handle very large refraction-angle values, the HFBP algorithm based on the MRA method is able to provide quite acceptable estimates of the distribution of the refractive index decrement of the sample. The conclusion is confirmed by experimental results at the Beijing Synchrotron Radiation Facility.
Optimal Control of Malaria Transmission using Insecticide Treated Nets and Spraying
NASA Astrophysics Data System (ADS)
Athina, D.; Bakhtiar, T.; Jaharuddin
2017-03-01
In this paper, we consider a model of malaria transmission developed by Silva and Torres, equipped with two control variables: the use of insecticide-treated nets (ITN) to reduce the number of infected humans, and spraying to reduce the number of mosquitoes. The Pontryagin maximum principle was applied to derive the differential equation system constituting the optimality conditions that must be satisfied by the optimal control variables. The Mangasarian sufficiency theorem shows that the Pontryagin maximum principle provides conditions that are both necessary and sufficient for this optimization problem. The 4th-order Runge-Kutta method was then used to solve the differential equation system. The numerical results show that applying both controls at once reduces the number of infected individuals as well as the number of mosquitoes, thereby reducing the impact of malaria transmission.
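The state and adjoint systems produced by the Pontryagin maximum principle are ordinary differential equations, which is why a 4th-order Runge-Kutta integrator is the numerical workhorse here. A minimal sketch of the RK4 step applied to a toy infected-fraction model with a constant ITN-style control; the dynamics and parameter values are illustrative assumptions, not the Silva-Torres model:

```python
def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy infected-fraction dynamics with an ITN-style control u in [0, 1]
# reducing the transmission term (illustrative only, not the
# Silva-Torres model): dI/dt = beta*(1 - u)*I*(1 - I) - gamma*I.
BETA, GAMMA = 0.8, 0.2

def infected_fraction(u, t_end=60.0, h=0.1):
    f = lambda t, i: BETA * (1 - u) * i * (1 - i) - GAMMA * i
    i, t = 0.1, 0.0
    while t < t_end - 1e-9:
        i = rk4_step(f, t, i, h)
        t += h
    return i

# With or without control the epidemic settles to an endemic
# equilibrium I* = 1 - gamma/(beta*(1 - u)); the control lowers it
# (0.75 -> 0.5 for these rates).
print(infected_fraction(0.0), infected_fraction(0.5))
```

In a full forward-backward sweep, the state system is integrated forward in time and the adjoint system backward, with the control updated between sweeps from the optimality condition.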
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhattacharya, Sourav; Dialektopoulos, Konstantinos F.; Romano, Antonio Enea
The maximum size of a cosmic structure is given by the maximum turnaround radius, the scale at which the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulae for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulae agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the ΛCDM value, by a factor of 1 + 1/(3ω), where ω ≫ 1 is the Brans-Dicke parameter, implying consistency of the theory with current data.
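The quoted enhancement factor is easy to evaluate. A sketch, using for illustration the solar-system bound ω ≳ 40 000 (an outside assumption, not a figure from the abstract):

```python
# Fractional excess of the Brans-Dicke maximum turnaround radius over
# the LambdaCDM value, from the 1 + 1/(3*omega) factor quoted above.
def turnaround_excess(omega):
    return 1.0 / (3.0 * omega)

# For the solar-system bound omega > ~40000 (assumed here for
# illustration) the excess is below 1e-5, far too small to conflict
# with current structure data.
for omega in (500.0, 4.0e4):
    print(omega, turnaround_excess(omega))
```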
NASA Technical Reports Server (NTRS)
Hauser, Cavour H; Plohr, Henry W
1951-01-01
The nature of the flow at the exit of a row of turbine blades for the range of conditions represented by four different blade configurations was evaluated by the conservation-of-momentum principle using static-pressure surveys and by analysis of Schlieren photographs of the flow. It was found that for blades of the type investigated, the maximum exit tangential-velocity component is a function of the blade geometry only and can be accurately predicted by the method of characteristics. A maximum value of exit velocity coefficient is obtained at a pressure ratio immediately below that required for maximum blade loading followed by a sharp drop after maximum blade loading occurs.
On the maximum energy of shock-accelerated cosmic rays at ultra-relativistic shocks
NASA Astrophysics Data System (ADS)
Reville, B.; Bell, A. R.
2014-04-01
The maximum energy to which cosmic rays can be accelerated at weakly magnetised ultra-relativistic shocks is investigated. We demonstrate that for such shocks, in which the scattering of energetic particles is mediated exclusively by ion skin-depth scale structures, as might be expected for a Weibel-mediated shock, there is an intrinsic limit on the maximum energy to which particles can be accelerated. This maximum energy is determined from the requirement that particles must be isotropized in the downstream plasma frame before the mean field transports them far downstream, and falls considerably short of what is required to produce ultra-high-energy cosmic rays. To circumvent this limit, a highly disorganized field is required on larger scales. The growth of cosmic ray-induced instabilities on wavelengths much longer than the ion-plasma skin depth, both upstream and downstream of the shock, is considered. While these instabilities may play an important role in magnetic field amplification at relativistic shocks, on scales comparable to the gyroradius of the most energetic particles, the calculated growth rates have insufficient time to modify the scattering. Since strong modification is a necessary condition for particles in the downstream region to re-cross the shock, in the absence of an alternative scattering mechanism, these results imply that acceleration to higher energies is ruled out. If weakly magnetized ultra-relativistic shocks are disfavoured as high-energy particle accelerators in general, the search for potential sources of ultra-high-energy cosmic rays can be narrowed.
Principles of War for Cyberspace
2011-01-15
knowing that relationships between things matter most in the strategy of war. It is essential to examine which tradition is the best guide for...Clausewitzian Cyberthink Clausewitz's principles of war are based on a western Newtonian view of the world. Clausewitz states war is an act of force...to compel our enemy to do our will, maximum use of force is required, the aim is to disarm the enemy, and the motive of war is the political
2016-09-01
Characteristics of Silver Carp (Hypophthalmichthys molitrix) Using Video Analyses and Principles of Projectile Physics by Glenn R. Parsons, Ehlana Stell...2002) estimated maximum swim speeds of videotaped, captive, and free-ranging dolphins, Delphinidae, by timed sequential analyses of video frames... videos to estimate the swim speeds and leap characteristics of carp as they exit the waters’ surface. We used both direct estimates of swim speeds as
Principles of time evolution in classical physics
NASA Astrophysics Data System (ADS)
Güémez, J.; Fiolhais, M.
2018-07-01
We address principles of time evolution in classical mechanical/thermodynamical systems in translational and rotational motion, in three cases: when there is conservation of mechanical energy, when there is energy dissipation and when there is mechanical energy production. In the first case, the time derivative of the Hamiltonian vanishes. In the second one, when dissipative forces are present, the time evolution is governed by the minimum potential energy principle, or, equivalently, maximum increase of the entropy of the universe. Finally, in the third situation, when internal sources of work are available to the system, it evolves in time according to the principle of minimum Gibbs function. We apply the Lagrangian formulation to the systems, dealing with the non-conservative forces using restriction functions such as the Rayleigh dissipative function.
Femtosecond Photon-Counting Receiver
NASA Technical Reports Server (NTRS)
Krainak, Michael A.; Rambo, Timothy M.; Yang, Guangning; Lu, Wei; Numata, Kenji
2016-01-01
An optical correlation receiver is described that provides ultra-precise distance and/or time/pulse-width measurements even for weak (single photons) and short (femtosecond) optical signals. A new type of optical correlation receiver uses a fourth-order (intensity) interferometer to provide micron distance measurements even for weak (single photons) and short (femtosecond) optical signals. The optical correlator uses a low-noise-integrating detector that can resolve photon number. The correlation (range as a function of path delay) is calculated from the variance of the photon number of the difference of the optical signals on the two detectors. Our preliminary proof-of-principle data (using a short-pulse diode laser transmitter) demonstrates tens of microns precision.
Semistrict higher gauge theory
NASA Astrophysics Data System (ADS)
Jurčo, Branislav; Sämann, Christian; Wolf, Martin
2015-04-01
We develop semistrict higher gauge theory from first principles. In particular, we describe the differential Deligne cohomology underlying semistrict principal 2-bundles with connective structures. Principal 2-bundles are obtained in terms of weak 2-functors from the Čech groupoid to weak Lie 2-groups. As is demonstrated, some of these Lie 2-groups can be differentiated to semistrict Lie 2-algebras by a method due to Ševera. We further derive the full description of connective structures on semistrict principal 2-bundles including the non-linear gauge transformations. As an application, we use a twistor construction to derive superconformal constraint equations in six dimensions for a non-Abelian tensor multiplet taking values in a semistrict Lie 2-algebra.
Song, Young Dong; Jain, Nimash; Kang, Yeon Gwi; Kim, Tae Yune; Kim, Tae Kyun
2016-06-01
Correlations between maximum flexion and functional outcomes in total knee arthroplasty (TKA) patients are reportedly weak. We investigated whether there are differences between passive maximum flexion in nonweight bearing and other types of maximum flexion and whether the type of maximum flexion correlates with functional outcomes. A total of 210 patients (359 knees) underwent preoperative evaluation and postoperative follow-up evaluations (6, 12, and 24 months) for the assessment of clinical outcomes including maximum knee flexion. Maximum flexion was measured under five conditions: passive nonweight bearing, passive weight bearing, active nonweight bearing, and active weight bearing with or without arm support. Data were analyzed for relationships between passive maximum flexion in nonweight bearing and the other flexion types by Pearson correlation analyses, and variance between measurement techniques was compared via paired t test. We observed substantial differences between passive maximum flexion in nonweight bearing and the other four maximum flexion types. At all time points, passive maximum flexion in nonweight bearing correlated poorly with active maximum flexion in weight bearing with or without arm support. Active maximum flexion in weight bearing better correlated with functional outcomes than the other maximum flexion types. Our study suggests active maximum flexion in weight bearing should be reported together with passive maximum flexion in nonweight bearing in research on the knee motion arc after TKA.
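The two analyses named above, Pearson correlation between flexion measures and a paired t test between measurement techniques, can be reproduced in a few lines. A sketch on hypothetical flexion readings (degrees), not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def paired_t(x, y):
    """t statistic for paired samples (df = n - 1)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(d.size)))

# Hypothetical flexion readings (degrees) for a handful of knees,
# illustrating the two analyses; not the study's measurements.
passive_nwb = [130, 125, 140, 135, 128, 138]
active_wb   = [118, 112, 131, 120, 119, 127]
print(round(pearson_r(passive_nwb, active_wb), 2),
      round(paired_t(passive_nwb, active_wb), 2))
```

A high correlation can coexist with a large paired t statistic: the two measures can rank knees similarly while still differing systematically in magnitude, which is exactly the distinction the study draws.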
RAJABI, Fateme; ESMAILZADEH, Hamid; ROSTAMIGOORAN, Narges; MAJDZADEH, Reza
2013-01-01
Background: Preparing a long-term reform plan for the health system, like other macro plans, requires guiding principles that accord with its values and, as a bridge, connect the ideals and values to the goals. This study was designed with the purpose of explaining the values and principles of the health system, as a prerequisite to compilation of Iran's health system reform plan for 2025. Method: The document of values and principles of the health system reform plan for 2025 was developed by reviewing the literature and receiving the opinions of senior experts of the health system, and was critiqued in focus group discussion sessions of experts and decision makers. Results: The identified values are: human dignity, the right to the maximum attainable level of health, comprehensive health, equity, and social cohesion. The principles of the health system include: institutionalizing ethical values, responsiveness and accountability, equitable access (utilization), prevention and health promotion, community participation, inter-sectoral collaboration, integrated stewardship, benefiting from innovation and desired technology, human resources promotion and excellence, and harmony. Conclusion: Based on the perception of cultural and religious teachings in Iran, protecting human dignity and human prosperity is the ultimate social goal. In this sense, health and healthy humans, in the holistic concept (physical, mental, social, and spiritual health), are central, and development in any form should lead to human prosperity such that each individual can enjoy the maximum attainable level of health, in its holistic meaning, in a fair manner. PMID:23515322
NASA Astrophysics Data System (ADS)
Sylwester, Barbara; Sylwester, Janusz; Siarkowski, Marek; Gburek, Szymon; Phillips, Kenneth
The very high sensitivity of the SphinX soft X-ray spectrophotometer aboard CORONAS-Photon allows observation of the spectra of small X-ray brightenings (microflares), many with maximum intensities well below the GOES or RHESSI sensitivity thresholds. Hundreds of such small flare-like events were observed in the period between March and November 2009 with an energy resolution better than 0.5 keV. The spectra have been measured in the energy range extending above 1 keV. In this study we investigate the time variability of the basic plasma parameters, temperature T and emission measure EM, for a number of these weak flare-like events and discuss the respective evolutionary patterns on EM-T diagnostic diagrams. For some of these events, unusual behavior is observed, different from that characteristic of "normal" flares of higher maximum intensity. Physical scenarios providing possible explanations for such unusual evolutionary patterns will be discussed.
Reflection by absorbing periodically stratified media
NASA Astrophysics Data System (ADS)
Lekner, John
2014-03-01
Existing theory gives the optical properties of a periodically stratified medium in terms of a two by two matrix. This theory is valid also for absorbing media, because the matrix remains unimodular. The main effect of absorption is that the reflection (of either polarization) becomes independent of the number of periods N, and of the substrate properties, provided N exceeds a certain value which depends on the absorption. The s and p reflections are then given by simple formulae. The stop-band structure, which gives total reflection in bands of frequency and angle of incidence in the non-absorbing case, remains influential in weakly absorbing media, causing strong variations in reflectivity. The theory is applied to the frequency dependence of the normal-incidence reflectivity of a quarter-wave stack in which the high-index and low-index layers both absorb weakly. Analytical expressions are obtained for the frequency at which the reflectivity is maximum, the maximum reflectivity, and also for the reflectivity at the band edges of the stop band of the non-absorbing stack.
Trezise, J; Collier, N; Blazevich, A J
2016-06-01
This study examined the relative influence of anatomical and neuromuscular variables on maximal isometric and concentric knee extensor torque and provided a comparative dataset for healthy young males. Quadriceps cross-sectional area (CSA), fascicle length (l_f) and angle (θ_f) of the four quadriceps components; agonist (EMG:M) and antagonist muscle activity, and percent voluntary activation (%VA); patellar tendon moment arm distance (MA); and maximal voluntary isometric and concentric (60° s^-1) torques were measured in 56 men. Linear regression models predicting maximum torque were ranked using Akaike's information criterion (AICc), and Pearson's correlation coefficients assessed relationships between variables. The best-fit models explained up to 72% of the variance in maximal voluntary knee extension torque. The combination 'CSA + θ_f + EMG:M + %VA' best predicted maximum isometric torque (R^2 = 72%, AICc weight = 0.38) and 'CSA + θ_f + MA' (R^2 = 65%, AICc weight = 0.21) best predicted maximum concentric torque. Proximal quadriceps CSA was included in all models rather than the traditionally used mid-muscle CSA. Fascicle angle appeared consistently in all models despite its weak correlation with maximum torque in isolation, emphasising the importance of examining interactions among variables. While muscle activity was important for torque prediction in both contraction modes, MA strongly influenced only maximal concentric torque. These models identify the main sources of inter-individual differences strongly influencing maximal knee extension torque production in healthy men. The comparative dataset allows the identification of potential variables to target (i.e., weaknesses) in individuals.
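Ranking regression models by AICc and converting the score differences to Akaike weights, as done above, can be sketched as follows. The predictor names echo the study, but the data are synthetic and all coefficients are hypothetical:

```python
import numpy as np

def aicc_ols(X, y):
    """AICc for an OLS fit of y on X (X includes an intercept column);
    k counts the regression coefficients plus the error variance."""
    n, k = X.shape[0], X.shape[1] + 1
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction

rng = np.random.default_rng(1)
n = 56                                  # sample size as in the study
csa = rng.normal(80, 10, n)             # hypothetical predictor values
theta = rng.normal(20, 3, n)
emg = rng.normal(1.0, 0.2, n)
torque = 2.0 * csa + 3.0 * theta + 40 * emg + rng.normal(0, 10, n)

ones = np.ones(n)
models = {
    "CSA":           np.column_stack([ones, csa]),
    "CSA+theta":     np.column_stack([ones, csa, theta]),
    "CSA+theta+EMG": np.column_stack([ones, csa, theta, emg]),
}
scores = {name: aicc_ols(X, torque) for name, X in models.items()}

# Akaike weights: relative likelihood of each model given the set.
delta = {m: s - min(scores.values()) for m, s in scores.items()}
raw = {m: np.exp(-d / 2) for m, d in delta.items()}
weights = {m: w / sum(raw.values()) for m, w in raw.items()}
for m in models:
    print(m, round(scores[m], 1), round(weights[m], 2))
```

The weight of a model is interpretable as the probability, within the candidate set, that it is the best Kullback-Leibler approximation to the data-generating process.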
Drinkable, But. . . Much to be Done
ERIC Educational Resources Information Center
Sterrett, Frances S.
1977-01-01
Maximum levels of the principal water contaminants are discussed in this article. Difficulties related to establishing standards for contaminants in drinking water are identified and the possible results of high levels of these contaminants included. (MA)
Continuous protein concentration via free-flow moving reaction boundary electrophoresis.
Kong, Fanzhi; Zhang, Min; Chen, Jingjing; Fan, Liuyin; Xiao, Hua; Liu, Shaorong; Cao, Chengxi
2017-07-28
In this work, we developed the model and theory of free-flow moving reaction boundary electrophoresis (FFMRB) for continuous protein concentration for the first time. The theoretical results indicated that (i) the moving reaction boundary (MRB) can be quantitatively designed in a free-flow electrophoresis (FFE) system; (ii) charge-to-mass ratio (Z/M) analysis can provide guidance for optimization of protein concentration; and (iii) the maximum processing capacity can be predicted. To demonstrate the model and theory, three model proteins, hemoglobin (Hb), cytochrome C (Cyt C), and C-phycocyanin (C-PC), were chosen for the experiments. The experimental results verified that (i) stable MRBs with different velocities could be established in the FFE apparatus with a weak acid/weak base neutralization reaction system; (ii) Hb, Cyt C, and C-PC were well concentrated with FFMRB; and (iii) a maximum processing capacity of 126 mL/h and a recovery ratio of 95.5% were achieved for Cyt C enrichment, and a maximum enrichment factor of 12.6 was achieved for Hb. All of the experiments demonstrated the protein concentration model and theory. In contrast to other methods, the continuous processing ability enables FFMRB to efficiently enrich dilute proteins or peptides in large-volume solutions. Copyright © 2017 Elsevier B.V. All rights reserved.
Nakano, Masayoshi
2017-01-01
Open-shell character, e.g., diradical character, is a quantum chemically well-defined quantity in ground-state molecular systems, which is not an observable but can quantify the degree of effective bond weakness in the chemical sense or electron correlation strength in the physical sense. Because this quantity also correlates to specific excited states, physicochemical properties concerned with those states are expected to correlate strongly with the open-shell character. This feature opens a new path to revealing the mechanism of these properties as well as to realizing new design principles for efficient functional molecular systems. This account explains the open-shell-character-based molecular design principles and introduces their applications to the rational design of highly efficient nonlinear optical and singlet fission molecular systems. © 2017 The Chemical Society of Japan & Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
The STEP mission - Satellite test of the equivalence principle
NASA Technical Reports Server (NTRS)
Atzei, A.; Swanson, P.; Anselmi, A.
1992-01-01
The STEP experiment is a joint ESA/NASA mission candidate for selection as the next medium science project in the ESA scientific program. ESA and NASA have undertaken a joint feasibility study of STEP. The principles of STEP and details of the mission are presented and the mission and spacecraft are described. The primary objective of STEP is to measure differences in the rate of fall of test masses of different compositions to one part in 10^17 of the total gravitational acceleration, a factor of 10^8 improvement in sensitivity over previous experiments. STEP constitutes a comparison of gravitational and inertial mass or a test of the weak equivalence principle (WEP). A test of WEP that is six orders of magnitude more accurate than previous tests will reveal whether the underlying structure of the universe is filled with undiscovered small forces, necessitating a fundamental change in our theories of matter on all scales.
Traditional Chinese medicine on the effects of low-intensity laser irradiation on cells
NASA Astrophysics Data System (ADS)
Liu, Timon C.; Duan, Rui; Li, Yan; Cai, Xiongwei
2002-04-01
In a previous paper, process-specific times (PSTs) were defined by use of molecular reaction dynamics and the time quantum theory established by TCY Liu et al., and the changes of the PSTs representing two weakly nonlinearly coupled bio-processes were shown to be parallel, which is called the time parallel principle (TPP). The PST of a physiological process (PP) is called its physiological time (PT). After the PTs of two PPs were compared with their Yin-Yang property in traditional Chinese medicine (TCM), the PST model of Yin and Yang (YPTM) was put forward: for two related processes, the process of small PST is Yin, and the other process is Yang. The Yin-Yang parallel principle (YPP), the fundamental principle of TCM, was put forward in terms of the YPTM and the TPP. In this paper, we apply it to study TCM accounts of the effects of low-intensity laser irradiation on cells, and successfully explain the observed phenomena.
On the ground state energy of the delta-function Fermi gas
NASA Astrophysics Data System (ADS)
Tracy, Craig A.; Widom, Harold
2016-10-01
The weak coupling asymptotics to order γ of the ground state energy of the delta-function Fermi gas, derived heuristically in the literature, is here made rigorous. Further asymptotics are in principle computable. The analysis applies to the Gaudin integral equation, a method previously used by one of the authors for the asymptotics of large Toeplitz matrices.
ERIC Educational Resources Information Center
Brennan, Jewel E.
Alcoholism is a problem of immense proportions. Views about alcoholism range from consideration of the problem as a moral weakness to the disease concept approach. Since the effects of alcoholic intake can be benevolent as well as toxic, the dilemma centers around alcohol usage. Various theories have been formulated, experimented with, and…
ERIC Educational Resources Information Center
Inner London Education Authority (England).
This unit on equilibrium is one of 10 first year units produced by the Independent Learning Project for Advanced Chemistry (ILPAC). The unit, which consists of two levels, focuses on the application of equilibrium principles to equilibria involving weak acids and bases, including buffer solutions and indicators. Level one uses Le Chatelier's…
Should You Trust Your Money to a Robot?
Dhar, Vasant
2015-06-01
Financial markets emanate massive amounts of data from which machines can, in principle, learn to invest with minimal initial guidance from humans. I contrast human and machine strengths and weaknesses in making investment decisions. The analysis reveals areas in the investment landscape where machines are already very active and those where machines are likely to make significant inroads in the next few years.
van Rees, Lauren J; Ballard, Kirrie J; McCabe, Patricia; Macdonald-D'Silva, Anita G; Arciuli, Joanne
2012-08-01
Impaired lexical stress production characterizes multiple pediatric speech disorders. Effective remediation strategies are not available, and little is known about the normal process of learning to assign and produce lexical stress. This study examined whether typically developing (TD) children can be trained to produce lexical stress on bisyllabic pseudowords that are orthographically biased to a strong-weak or weak-strong pattern (e.g., MAMbey or beDOON), in combination with the principles of motor learning (PML). Fourteen TD children ages 5;0 (years;months) to 13;0 were randomly assigned to a training or control group using concealed allocation within blocks. A pre- to posttraining group design was used to examine the acquisition, retention, and generalization of lexical stress production. The training group learned to produce appropriate lexical stress for the pseudowords with strong maintenance and generalization to related untrained stimuli. Accuracy of stress production did not change in the control group. TD children can learn to produce lexical stress patterns for orthographically biased pseudowords via explicit training methods. Findings have relevance for the study of languages other than English and for a range of prosodic disorders.
Multiple testing and power calculations in genetic association studies.
So, Hon-Cheong; Sham, Pak C
2011-01-01
Modern genetic association studies typically involve multiple single-nucleotide polymorphisms (SNPs) and/or multiple genes. With the development of high-throughput genotyping technologies and the reduction in genotyping cost, investigators can now assay up to a million SNPs for direct or indirect association with disease phenotypes. In addition, some studies involve multiple disease or related phenotypes and use multiple methods of statistical analysis. The combination of multiple genetic loci, multiple phenotypes, and multiple methods of evaluating associations between genotype and phenotype means that modern genetic studies often involve the testing of an enormous number of hypotheses. When multiple hypothesis tests are performed in a study, there is a risk of inflation of the type I error rate (i.e., the chance of falsely claiming an association when there is none). Several methods for multiple-testing correction are in popular use, and they all have strengths and weaknesses. Because no single method is universally adopted or always appropriate, it is important to understand the principles, strengths, and weaknesses of the methods so that they can be applied appropriately in practice. In this article, we review the three principal methods for multiple-testing correction and provide guidance for calculating statistical power.
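The trade-offs among correction methods are easiest to see side by side. A sketch of two standard procedures, Bonferroni (family-wise error control) and Benjamini-Hochberg (false discovery rate control), applied to a hypothetical set of p-values:

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H0 where p < alpha/m; controls the family-wise error rate."""
    p = np.asarray(pvals)
    return p < alpha / p.size

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure; controls the false
    discovery rate and is typically more powerful than Bonferroni."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    # Compare the i-th smallest p-value against alpha * i / m.
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()       # largest passing rank
        reject[order[:k + 1]] = True
    return reject

# Hypothetical p-values from six association tests.
pvals = [0.001, 0.012, 0.014, 0.041, 0.20, 0.74]
print(bonferroni(pvals).sum(), benjamini_hochberg(pvals).sum())
```

Here Bonferroni rejects one hypothesis while Benjamini-Hochberg rejects three, illustrating the power gained by controlling the false discovery rate instead of the family-wise error rate.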
NASA Astrophysics Data System (ADS)
Ninomiya, K.; Akiyama, T.; Hata, M.; Hatori, M.; Iguri, T.; Ikeda, Y.; Inaba, S.; Kawamura, H.; Kishi, R.; Murakami, H.; Nakaya, Y.; Nishio, H.; Ogawa, N.; Onishi, J.; Saiba, S.; Sakuta, T.; Tanaka, S.; Tanuma, R.; Totsuka, Y.; Tsutsui, R.; Watanabe, K.; Murata, J.
2017-09-01
The composition dependence of the gravitational constant G is measured at the millimeter scale to test the weak equivalence principle, which may be violated at short range through new Yukawa interactions such as the dilaton exchange force. A torsion balance on a turntable with two identical tungsten targets surrounded by two different attractor materials (copper and aluminum) is used to measure the gravitational torque by means of digital measurements of a position sensor. Values of the ratios G̃_Al-W/G̃_Cu-W − 1 and G̃_Cu-W/G_N − 1 were (0.9 ± 1.1_stat ± 4.8_sys) × 10^-2 and (0.2 ± 0.9_stat ± 2.1_sys) × 10^-2, respectively; these were obtained at a center-to-center separation of 1.7 cm and a surface-to-surface separation of 4.5 mm between target and attractor, consistent with the universality of G. A weak equivalence principle (WEP) violation parameter of η_Al-Cu(r ∼ 1 cm) = (0.9 ± 1.1_stat ± 4.9_sys) × 10^-2 at the shortest range of around 1 cm was also obtained.
Transient AC voltage related phenomena for HVDC schemes connected to weak AC systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilotto, L.A.S.; Szechtman, M.; Hammad, A.E.
1992-07-01
This paper presents a didactic explanation of the voltage-stability phenomena associated with HVDC terminals. Conditions leading to AC voltage collapse are identified, and a mechanism that excites control-induced voltage oscillations is shown. The voltage stability factor is used to obtain the maximum power limits of AC/DC systems operating with different control strategies, and its correlation to Pd × Id curves is given. Solutions for eliminating the risk of voltage collapse and for avoiding control-induced oscillations are discussed. The results are supported by detailed digital simulations of a weak AC/DC system using EMTP.
Han, Xiao; Wang, Hai Bo; Wang, Xiao di; Shi, Xiang Bin; Wang, Bao Liang; Zheng, Xiao Cui; Wang, Zhi Qiang; Liu, Feng Zhi
2017-10-01
The photo response curves of 11 rootstock-scion combinations, including summer black/Beta, summer black/1103P, summer black/101-14, summer black/3309C, summer black/140Ru, summer black/5C, summer black/5BB, summer black/420A, summer black/SO4, summer black/Kangzhen No.1 and summer black/Huapu No.1, were fitted by the rectangular hyperbola model, non-rectangular hyperbola model, modified rectangular hyperbola model and exponential model, respectively, and the differences in goodness of fit were analyzed using the coefficient of determination, light compensation point, light saturation point, initial quantum efficiency, maximum photosynthetic rate and dark respiration rate. The results showed that the fit coefficients of all four models were above 0.98, and there was no obvious difference in the fitted values of the light compensation point among the four models. The modified rectangular hyperbola model fitted the light saturation point, apparent quantum yield, maximum photosynthetic rate and dark respiration rate best, and had the minimum Akaike information criterion (AIC) value; it was therefore the best of the four. The clustering analysis indicated that the summer black/SO4 and summer black/420A combinations had a low light compensation point, high apparent quantum yield and low dark respiration rate among the 11 rootstock-scion combinations, suggesting that these two combinations could use weak light more efficiently due to their lower respiratory consumption and higher weak-light tolerance. The Topsis comparison method ranked the summer black/SO4 and summer black/420A combinations as No. 1 and No. 2 in weak-light tolerance, respectively, which was consistent with the cluster analysis. Consequently, summer black has the highest weak-light tolerance when grafted onto 420A or SO4, which could therefore be the most suitable rootstock-scion combinations for protected cultivation.
A new method of differential structural analysis of gamma-family basic parameters
NASA Technical Reports Server (NTRS)
Melkumian, L. G.; Ter-Antonian, S. V.; Smorodin, Y. A.
1985-01-01
The maximum likelihood method is used for the first time to reconstruct the parameters of electron-photon cascades registered on X-ray films. The method permits a structural analysis of the darkening spots of gamma-quanta families independent of the degree to which the gamma quanta overlap, and yields the maximum admissible accuracy in estimating the energies of the gamma quanta composing a family. The parameter estimation accuracy depends only weakly on the values of the parameters themselves and exceeds that of integral methods by an order of magnitude.
Convex Accelerated Maximum Entropy Reconstruction
Worley, Bradley
2016-01-01
Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476
Zhou, Lin; Long, Shitong; Tang, Biao; Chen, Xi; Gao, Fen; Peng, Wencui; Duan, Weitao; Zhong, Jiaqi; Xiong, Zongyuan; Wang, Jin; Zhang, Yuanzhong; Zhan, Mingsheng
2015-07-03
We report an improved test of the weak equivalence principle by using a simultaneous 85Rb-87Rb dual-species atom interferometer. We propose and implement a four-wave double-diffraction Raman transition scheme for the interferometer, and demonstrate its ability to suppress the common-mode phase noise of the Raman lasers after their frequencies and intensity ratios are optimized. The statistical uncertainty of the experimental data for the Eötvös parameter η is 0.8×10^-8 at 3200 s. With various systematic errors corrected, the final value is η=(2.8±3.0)×10^-8. The major uncertainty is attributed to the Coriolis effect.
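For orientation, the Eötvös parameter η reported above is the normalized differential free-fall acceleration of the two test species. A one-line sketch of the definition (my own illustration, not the authors' analysis code; the input accelerations are made-up values):

```python
def eotvos_parameter(a1, a2):
    """Eötvös parameter: normalized differential free-fall acceleration
    of two test bodies (e.g. 85Rb vs 87Rb in a dual-species interferometer)."""
    return 2.0 * (a1 - a2) / (a1 + a2)

# Two bodies whose accelerations differ fractionally by 3e-8 around g:
g = 9.81
eta = eotvos_parameter(g * (1 + 1.5e-8), g * (1 - 1.5e-8))  # eta is ~3e-8
```

A null result (identical accelerations) gives η = 0, which is what the weak equivalence principle predicts.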
Iafolla, V; Lefevre, C; Fiorenza, E; Santoli, F; Nozzoli, S; Magnafico, C; Lucente, M; Lucchesi, D; Peron, R; Shapiro, I I; Glashow, S; Lorenzini, E C
2014-01-01
A cryogenic differential accelerometer has been developed to test the weak equivalence principle to a few parts in 10^15 within the framework of the general relativity accuracy test in an Einstein elevator experiment. The prototype sensor was designed to identify, address, and solve the major issues associated with various aspects of the experiment. This paper illustrates the measurements conducted on this prototype sensor to attain a high quality factor (Q ∼ 10^5) at low frequencies (<20 Hz). Such a value is necessary for reducing the Brownian noise to match the target acceleration noise of 10^-14 g/√Hz, hence providing the desired experimental accuracy.
Quantum Theory of Jaynes' Principle, Bayes' Theorem, and Information
NASA Astrophysics Data System (ADS)
Haken, Hermann
2014-12-01
After a reminder of Jaynes' maximum entropy principle and of my quantum theoretical extension, I consider two coupled quantum systems A, B and formulate a quantum version of Bayes' theorem. The application of Feynman's disentangling theorem allows me to calculate the conditional density matrix ρ(A|B), if system A is an oscillator (or a set of them), linearly coupled to an arbitrary quantum system B. Expectation values can simply be calculated by means of the normalization factor of ρ(A|B) that is derived.
ERIC Educational Resources Information Center
Boyd, James N.
1991-01-01
Presents a mathematical problem that, when examined and generalized, develops the relationships between power and efficiency in energy transfer. Offers four examples of simple electrical and mechanical systems to illustrate the principle that maximum power occurs at 50 percent efficiency. (MDH)
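The principle summarized above (maximum power transfer occurring at 50 percent efficiency) is easy to verify numerically for the standard source-plus-load circuit. A minimal sketch, with my own function names and illustrative values:

```python
def load_power(V, r, R):
    """Power delivered to a load R by a source of EMF V and internal resistance r."""
    I = V / (r + R)          # current through the series circuit
    return I * I * R         # power dissipated in the load

def efficiency(r, R):
    """Fraction of the total dissipated power that reaches the load."""
    return R / (r + R)

# Sweep the load: delivered power peaks at R == r, where efficiency is exactly 50%.
V, r = 10.0, 2.0
best_R = max((load_power(V, r, R), R) for R in [0.5, 1.0, 2.0, 4.0, 8.0])[1]
```

The sweep picks out R = r as the maximizer, and efficiency(r, r) evaluates to 0.5, matching the article's claim.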
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
Sanchez-Martinez, M; Crehuet, R
2014-12-21
We present a method based on the maximum entropy principle that can re-weight an ensemble of protein structures based on data from residual dipolar couplings (RDCs). The RDCs of intrinsically disordered proteins (IDPs) provide information on the secondary structure elements present in an ensemble; however, even two sets of RDCs are not enough to fully determine the distribution of conformations, and the force field used to generate the structures has a pervasive influence on the refined ensemble. Two physics-based coarse-grained force fields, Profasi and Campari, are able to predict the secondary structure elements present in an IDP, but even after including the RDC data, the re-weighted ensembles differ between the two force fields. Thus, the spread of IDP ensembles highlights the need for better force fields. We distribute our algorithm in an open-source Python code.
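A single-constraint version of such maximum-entropy re-weighting can be sketched in a few lines. This is an illustrative toy, not the authors' distributed code: the function names, the uniform prior over structures, and the bisection solver for the Lagrange multiplier are my assumptions, and `obs` stands for one scalar observable (e.g. a back-calculated RDC) per structure.

```python
import math

def maxent_weights(obs, lam):
    """Weights w_i proportional to exp(-lam * f_i) over a uniform prior ensemble:
    the minimal-perturbation (maximum-entropy) reweighting for one constraint."""
    w = [math.exp(-lam * f) for f in obs]
    z = sum(w)
    return [x / z for x in w]

def solve_lambda(obs, target, lo=-50.0, hi=50.0, iters=200):
    """Bisect for the Lagrange multiplier so the reweighted ensemble average
    matches the experimental target. The weighted mean decreases monotonically
    in lam, so bisection converges."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        m = sum(f * w for f, w in zip(obs, maxent_weights(obs, mid)))
        if m > target:
            lo = mid   # mean too high -> need a larger multiplier
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With several observables (e.g. two RDC sets), one multiplier per constraint would be fitted jointly instead of by scalar bisection.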
Numerical Schemes for the Hamilton-Jacobi and Level Set Equations on Triangulated Domains
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Sethian, James A.
1997-01-01
Borrowing from techniques developed for conservation law equations, numerical schemes which discretize the Hamilton-Jacobi (H-J), level set, and Eikonal equations on triangulated domains are presented. The first scheme is a provably monotone discretization for certain forms of the H-J equations. Unfortunately, the basic scheme lacks proper Lipschitz continuity of the numerical Hamiltonian. By employing a virtual edge flipping technique, Lipschitz continuity of the numerical flux is restored on acute triangulations. Next, schemes are introduced and developed based on the weaker concept of positive coefficient approximations for homogeneous Hamiltonians. These schemes possess a discrete maximum principle on arbitrary triangulations and naturally exhibit proper Lipschitz continuity of the numerical Hamiltonian. Finally, a class of Petrov-Galerkin approximations is considered. These schemes are stabilized via a least-squares bilinear form. The Petrov-Galerkin schemes do not possess a discrete maximum principle but generalize to high order accuracy.
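The "positive coefficient" idea behind a discrete maximum principle can be illustrated in one dimension, far more simply than on the triangulated domains of the paper. A minimal sketch (mine, with hypothetical names) of a monotone fast-sweeping solver for the 1-D Eikonal equation |u'(x)| = 1:

```python
def eikonal_1d(n, h, sources):
    """Monotone upwind solve of |u'(x)| = 1 on an n-point grid with spacing h,
    u = 0 at the source nodes, via forward and backward Gauss-Seidel sweeps.
    Each update is a min over neighbor values plus a positive increment, so new
    values never overshoot -- a discrete maximum principle in miniature."""
    INF = float("inf")
    u = [INF] * n
    for s in sources:
        u[s] = 0.0
    for sweep in (range(n), range(n - 1, -1, -1)):
        for i in sweep:
            if i in sources:
                continue
            left = u[i - 1] if i > 0 else INF
            right = u[i + 1] if i < n - 1 else INF
            u[i] = min(u[i], min(left, right) + h)
    return u

dist = eikonal_1d(5, 1.0, {2})  # distance to the source at node 2
```

On triangulations the update direction must be chosen per triangle, which is where the acute-angle and edge-flipping conditions of the paper enter.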
Random versus maximum entropy models of neural population activity
NASA Astrophysics Data System (ADS)
Ferrari, Ulisse; Obuchi, Tomoyuki; Mora, Thierry
2017-04-01
The principle of maximum entropy provides a useful method for inferring statistical mechanics models from observations in correlated systems, and is widely used in a variety of fields where accurate data are available. While the assumptions underlying maximum entropy are intuitive and appealing, its adequacy for describing complex empirical data has been little studied in comparison to alternative approaches. Here, data from the collective spiking activity of retinal neurons is reanalyzed. The accuracy of the maximum entropy distribution constrained by mean firing rates and pairwise correlations is compared to a random ensemble of distributions constrained by the same observables. For most of the tested networks, maximum entropy approximates the true distribution better than the typical or mean distribution from that ensemble. This advantage improves with population size, with groups as small as eight being almost always better described by maximum entropy. Failure of maximum entropy to outperform random models is found to be associated with strong correlations in the population.
Circular Regression in a Dual-Phase Lock-In Amplifier for Coherent Detection of Weak Signal
Wang, Gaoxuan; Reboul, Serge; Fertein, Eric
2017-01-01
Lock-in amplification (LIA) is an effective approach for recovery of weak signal buried in noise. Determination of the input signal amplitude in a classical dual-phase LIA is based on incoherent detection which leads to a biased estimation at low signal-to-noise ratio. This article presents, for the first time to our knowledge, a new architecture of LIA involving phase estimation with a linear-circular regression for coherent detection. The proposed phase delay estimate, between the input signal and a reference, is defined as the maximum-likelihood of a set of observations distributed according to a von Mises distribution. In our implementation this maximum is obtained with a Newton-Raphson algorithm. We show that the proposed LIA architecture provides an unbiased estimate of the input signal amplitude. Theoretical simulations with synthetic data demonstrate that the classical LIA estimates are biased for SNR of the input signal lower than −20 dB, while the proposed LIA is able to accurately recover the weak signal amplitude. The novel approach is applied to an optical sensor for accurate measurement of NO2 concentrations at the sub-ppbv level in the atmosphere. Side-by-side intercomparison measurements with a commercial LIA (SR830, Stanford Research Inc., Sunnyvale, CA, USA) demonstrate that the proposed LIA has an identical performance in terms of measurement accuracy and precision but with simplified hardware architecture. PMID:29135951
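For contrast with the proposed coherent scheme, the classical dual-phase demodulation that the article improves on can be sketched in a few lines (an illustration under my own naming, not the authors' implementation; the synthetic tone parameters are made up):

```python
import math

def dual_phase_lia(samples, fs, f_ref):
    """Classical dual-phase lock-in: mix the input with quadrature references
    at f_ref and average over an integer number of reference periods.
    For s[k] = A*cos(2*pi*f_ref*k/fs + phi): I ~ A*cos(phi), Q ~ -A*sin(phi),
    so amplitude = hypot(I, Q) and phase = atan2(-Q, I)."""
    n = len(samples)
    I = sum(s * math.cos(2 * math.pi * f_ref * k / fs)
            for k, s in enumerate(samples)) * 2.0 / n
    Q = sum(s * math.sin(2 * math.pi * f_ref * k / fs)
            for k, s in enumerate(samples)) * 2.0 / n
    return math.hypot(I, Q), math.atan2(-Q, I)

# Synthetic noiseless input: A = 1.5, phi = 0.7 rad, 50 Hz tone sampled at 1 kHz.
fs, f, A, phi = 1000.0, 50.0, 1.5, 0.7
sig = [A * math.cos(2 * math.pi * f * k / fs + phi) for k in range(1000)]
amp, ph = dual_phase_lia(sig, fs, f)
```

In noise, the amplitude estimate hypot(I, Q) acquires the positive bias at low SNR that motivates the article's coherent, von Mises-based phase estimator.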
NASA Astrophysics Data System (ADS)
Nath, Sunil
2018-05-01
Metabolic energy obtained from the coupled chemical reactions of oxidative phosphorylation (OX PHOS) is harnessed in the form of ATP by cells. We experimentally measured thermodynamic forces and fluxes during ATP synthesis, and calculated the thermodynamic efficiency, η and the rate of free energy dissipation, Φ. We show that the OX PHOS system is tuned such that the coupled nonequilibrium processes operate at optimal η. This state does not coincide with the state of minimum Φ but is compatible with maximum Φ under the imposed constraints. Conditions that must hold for species concentration in order to satisfy the principle of optimal efficiency are derived analytically and a molecular explanation based on Nath's torsional mechanism of energy transduction and ATP synthesis is suggested. Differences of the proposed principle with Prigogine's principle are discussed.
Bachorz, Rafał A; Klopper, Wim; Gutowski, Maciej; Li, Xiang; Bowen, Kit H
2008-08-07
The photoelectron spectrum (PES) of the uracil anion is reported and discussed from the perspective of quantum chemical calculations of the vertical detachment energies (VDEs) of the anions of various tautomers of uracil. The PES peak maximum is found at an electron binding energy of 2.4 eV, and the width of the main feature suggests that the parent anions are in a valence rather than a dipole-bound state. The canonical tautomer as well as four tautomers that result from proton transfer from an NH group to a C atom were investigated computationally. At the Hartree-Fock and second-order Møller-Plesset perturbation theory levels, the adiabatic electron affinity (AEA) and the VDE have been converged to the limit of a complete basis set to within ±1 meV. Post-MP2 electron-correlation effects have been determined at the coupled-cluster level of theory including single, double, and noniterative triple excitations. The quantum chemical calculations suggest that the most stable valence anion of uracil is the anion of a tautomer that results from a proton transfer from N1H to C5. It is characterized by an AEA of 135 meV and a VDE of 1.38 eV. The peak maximum is as much as 1 eV larger, however, and the photoelectron intensity is only very weak at 1.38 eV. Nor does the PES lend support to the valence anion of the canonical tautomer, which is the second most stable anion and whose VDE is computed at about 0.60 eV. Agreement between the peak maximum and the computed VDE is found only for the third most stable tautomer, which shows an AEA of approximately -0.1 eV and a VDE of 2.58 eV. This tautomer results from a proton transfer from N3H to C5. The results illustrate that the characteristics of biomolecular anions are highly dependent on their tautomeric form. If indeed the third most stable anion is observed in the experiment, then it remains an open question why and how this species is formed under the given conditions.
Cavity electromagnetically induced transparency via spontaneously generated coherence
NASA Astrophysics Data System (ADS)
Tariq, Muhammad; Ziauddin, Bano, Tahira; Ahmad, Iftikhar; Lee, Ray-Kuang
2017-09-01
A four-level N-type atomic ensemble enclosed in a cavity is revisited to investigate the influence of spontaneously generated coherence (SGC) on the transmission of a weak probe light field. The weak probe field propagates through the cavity, where each atom follows a four-level N-type atom-field configuration of rubidium. We use input-output theory to study the interaction of the atomic ensemble with three cavity fields coupled to the same cavity mode. SGC affects the transmission properties of the weak probe field such that a transparency window (cavity EIT) appears. At resonance, the transparency window widens with increasing SGC in the system. We also study the influence of SGC on the group delay and find that its magnitude is enhanced for maximum SGC in the system.
Strong Measurements Give a Better Direct Measurement of the Quantum Wave Function.
Vallone, Giuseppe; Dequal, Daniele
2016-01-29
Weak measurements have thus far been considered instrumental in the so-called direct measurement of the quantum wave function [J. S. Lundeen et al., Nature 474, 188 (2011)]. Here we show that a direct measurement of the wave function can be obtained by using measurements of arbitrary strength. In particular, in the case of strong measurements, i.e., those in which the coupling between the system and the measuring apparatus is maximum, we compare the precision and accuracy of the two methods, showing that strong measurements outperform weak measurements in both, for arbitrary quantum states, in most cases. We also give the exact expression for the difference between the original and reconstructed wave functions obtained by the weak measurement approach; this allows one to define the range of applicability of that method.
Sample of CFD optimization of a centrifugal compressor stage
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.
2015-08-01
An industrial centrifugal compressor stage is a complicated object for gas-dynamic design when the goal is to achieve maximum efficiency. The authors analyzed the results of CFD performance modeling (NUMECA Fine Turbo calculations). Performance prediction as a whole was modest or poor in all known cases; maximum-efficiency prediction, by contrast, was quite satisfactory. The flow structure in the stator elements was in good agreement with known data. An intermediate-type stage ("3D impeller + vaneless diffuser + return channel") was designed using principles well proven for stages with 2D impellers. CFD calculations of vaneless diffuser candidates demonstrated flow separation in a VLD of constant width; the candidate with a symmetrically tapered inlet part (b3/b2 = 0.73) proved to be the best. Flow separation takes place in the crossover with the standard configuration, so an alternative variant was developed and numerically tested. The experience obtained was formulated as corrected design recommendations. Several impeller candidates were compared by maximum stage efficiency; the variant following standard gas-dynamic principles of blade cascade design proved to be the best. Quasi-3D inviscid calculations were applied to optimize the blade velocity diagrams: non-incidence inlet, and control of the diffusion factor and of the average blade load. A "geometric" principle of blade formation, with a linear change of blade angles along the blade length, proved less effective. Candidates with different geometry parameters were designed with the 6th version of the math model and compared; the candidate with optimal parameters (number of blades, inlet diameter, and leading-edge meridian position) is 1% more efficient than the initial design.
Metabolic networks evolve towards states of maximum entropy production.
Unrean, Pornkamol; Srienc, Friedrich
2011-11-01
A metabolic network can be described by a set of elementary modes or pathways representing discrete metabolic states that support cell function. We have recently shown that in the most likely metabolic state the usage probability of individual elementary modes is distributed according to the Boltzmann distribution law while complying with the principle of maximum entropy production. To demonstrate that a metabolic network evolves towards such a state, we have carried out adaptive evolution experiments with Thermoanaerobacterium saccharolyticum operating with a reduced metabolic functionality based on a reduced set of elementary modes. In such a reduced metabolic network, metabolic fluxes can be conveniently computed from the measured metabolite secretion pattern. Over a time span of 300 generations, the specific growth rate of the strain continuously increased together with a continuous increase in the rate of entropy production. We show that the rate of entropy production asymptotically approaches the maximum entropy production rate predicted for the state in which the usage probability of individual elementary modes is distributed according to the Boltzmann distribution. Therefore, the outcome of evolution of a complex biological system can be predicted in highly quantitative terms using basic statistical mechanical principles. Copyright © 2011 Elsevier Inc. All rights reserved.
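The Boltzmann-weighted usage of elementary modes described above can be sketched as a toy calculation. This is an illustration only: the per-mode "rates", the scale parameter beta, and the function names are hypothetical placeholders, not values from the study.

```python
import math

def mode_usage_probabilities(rates, beta=1.0):
    """Boltzmann-distributed usage of elementary modes: modes with a higher
    entropy-production rate are exponentially more likely to be used
    (beta is a hypothetical scale parameter)."""
    w = [math.exp(beta * r) for r in rates]
    z = sum(w)  # partition function
    return [x / z for x in w]

def network_entropy_production(rates, beta=1.0):
    """Ensemble-averaged entropy production rate of the whole network."""
    p = mode_usage_probabilities(rates, beta)
    return sum(pi * ri for pi, ri in zip(p, rates))
```

Because the weighting favors high-rate modes, the network average always exceeds the unweighted mean of the mode rates, mirroring the drift towards maximum entropy production reported in the experiments.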
Revealing the transport properties of the spin-polarized β‧-Tb2(MoO4)3: DFT+U
NASA Astrophysics Data System (ADS)
Reshak, A. H.
2017-11-01
The thermoelectric properties of the spin-polarized β′-Tb2(MoO4)3 phase are calculated using first-principles and second-principles methods to solve the semi-classical Bloch-Boltzmann transport equations. Interestingly, the calculated electronic band structure reveals that β′-Tb2(MoO4)3 has parabolic bands in the vicinity of the Fermi level (EF); the carriers therefore exhibit low effective mass and hence high mobility. The strong covalent bonds between Mo and O in the MoO4 tetrahedra are more favorable for carrier transport than the ionic bonds. The carrier concentrations of spin-up (↑) and spin-down (↓) increase linearly with increasing temperature and reach a maximum at EF. The calculations reveal that β′-Tb2(MoO4)3 exhibits maximum electrical conductivity, minimum electronic thermal conductivity, a large Seebeck coefficient and a high power factor at EF for (↑) and (↓). The vicinity of EF is therefore where β′-Tb2(MoO4)3 is expected to show maximum efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yuwei; Sun, Jifeng; Singh, David J.
2018-03-26
In this paper, we report the properties of the reported transparent conductor CuI, including the effect of heavy p-type doping. The results, based on first-principles calculations, include an analysis of the electronic structure and calculations of optical and dielectric properties. We find that the origin of the favorable transparent conducting behavior lies in the absence in the visible of strong interband transitions between deeper valence bands and states at the valence-band maximum that become empty with p-type doping. Instead, strong interband transitions to the valence-band maximum are concentrated in the infrared, with energies below 1.3 eV. This is in contrast to the valence bands of many wide-band-gap materials. Turning to the mobility, we find that the states at the valence-band maximum are relatively dispersive. This originates from their antibonding Cu d-I p character. We find a modest enhancement of the Born effective charges relative to nominal values, leading to a dielectric constant ε(0) = 6.3. This is sufficiently large to reduce ionized impurity scattering, leading to the expectation that the properties of CuI can still be significantly improved through sample quality.
Kleidon, Axel
2009-06-01
The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.
Cancer cachexia decreases specific force and accelerates fatigue in limb muscle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberts, B.M.; Frye, G.S.; Ahn, B.
Highlights: •C-26 cancer cachexia causes a significant decrease in limb muscle absolute force. •C-26 cancer cachexia causes a significant decrease in limb muscle specific force. •C-26 cancer cachexia decreases fatigue resistance in the soleus muscle. •C-26 cancer cachexia prolongs time to peak twitch tension in limb muscle. •C-26 cancer cachexia prolongs one-half twitch relaxation time in limb muscle. -- Abstract: Cancer cachexia is a complex metabolic syndrome that is characterized by the loss of skeletal muscle mass and weakness, which compromises physical function, reduces quality of life, and ultimately can lead to mortality. Experimental models of cancer cachexia have recapitulated this skeletal muscle atrophy and the consequent decline in muscle force-generating capacity. More recently, however, we provided evidence that during severe cancer cachexia, muscle weakness in the diaphragm cannot be entirely accounted for by muscle atrophy, indicating that muscle weakness is not just a consequence of muscle atrophy but that there is also significant contractile dysfunction. The current study aimed to determine whether contractile dysfunction is also present in limb muscles during severe Colon-26 (C-26) carcinoma cachexia by studying the glycolytic extensor digitorum longus (EDL) muscle and the oxidative soleus muscle, which has an activity pattern that more closely resembles the diaphragm. Severe C-26 cancer cachexia caused significant muscle fiber atrophy and a reduction in maximum absolute force in both the EDL and soleus muscles. However, normalization to muscle cross-sectional area further demonstrated a 13% decrease in maximum isometric specific force in the EDL and an even greater decrease (17%) in the soleus. Time to peak tension and half-relaxation time were also significantly slowed in both the EDL and the solei from C-26 mice compared to controls.
Since, in addition to postural control, the oxidative soleus is also important for normal locomotion, we further performed a fatigue trial in the soleus and found that the decrease in relative force was greater and more rapid in solei from C-26 mice compared to controls. These data demonstrate that severe cancer cachexia causes profound muscle weakness that is not entirely explained by muscle atrophy. In addition, cancer cachexia decreases the fatigue resistance of the soleus, a postural muscle typically resistant to fatigue. Thus, specifically targeting contractile dysfunction represents an additional means to counter muscle weakness in cancer cachexia, beyond preventing muscle atrophy.
Negative specific heat of a magnetically self-confined plasma torus
Kiessling, Michael K.-H.; Neukirch, Thomas
2003-01-01
It is shown that the thermodynamic maximum-entropy principle predicts negative specific heat for a stationary, magnetically self-confined current-carrying plasma torus. Implications for the magnetic self-confinement of fusion plasma are considered. PMID:12576553
IMPROVING THE TMDL PROCESS USING WATERSHED RISK ASSESSMENT PRINCIPLES
Watershed ecological risk assessment (WERA) evaluates potential causal relationships between multiple sources and stressors and impacts on valued ecosystem components. This has many similarities to the place-based analyses that are undertaken to develop total maximum daily loads...
Intelligent Sensors for Atomization Processing of Molten Metals and Alloys
1988-06-01
Hirleman, Dan E. Particle Sizing by Optical, Nonimaging Techniques. Liquid Particle Size Measurement Techniques, ASTM, 1984, pp. 35ff. ... sensors are based on electric, electromagnetic, or optical principles, the latter being most developed in fields obviously related to atomization. Optical ... beams to observe various interference, diffraction, and heterodyning effects, and to observe, with high signal-to-noise ratio, even weak optical
ERIC Educational Resources Information Center
Okoye, K. R. E.; Michael, Ofonmbuk Isaac
2015-01-01
This paper attempts to examine the concept of Competency-Based Training (CBT) as a veritable mode of delivery of Technical and Vocational Education and Training (TVET) and at the same time highlights some of the strengths and weaknesses of implementing competency-based training. The characteristics, principles and benefits of CBT were also x-rayed.…
If the U.S. had Canada's stumpage system
Henry Spelter
2006-01-01
North American log markets function on different principles -- a profit allowance for the wood processor plays a role in timber pricing in Canada, while in the United States, it is a byproduct of the give and take of arm's-length market negotiations. The former is characterized by high elasticities of price transmission and, at times of market weakness, by low...
Experimental constraints on metric and non-metric theories of gravity
NASA Technical Reports Server (NTRS)
Will, Clifford M.
1989-01-01
Experimental constraints on metric and non-metric theories of gravitation are reviewed. Tests of the Einstein Equivalence Principle indicate that only metric theories of gravity are likely to be viable. Solar system experiments constrain the parameters of the weak field, post-Newtonian limit to be close to the values predicted by general relativity. Future space experiments will provide further constraints on post-Newtonian gravity.
Weaknesses in Applying a Process Approach in Industry Enterprises
NASA Astrophysics Data System (ADS)
Kučerová, Marta; Mĺkva, Miroslava; Fidlerová, Helena
2012-12-01
The paper deals with the process approach as one of the main principles of quality management. Quality management systems based on the process approach currently represent one of the proven ways to manage an organization. The volume of sales, costs and profit levels are influenced by the quality of processes and by efficient process flow. As the results of the research project showed, there are weaknesses in applying the process approach in industrial practice: in many organizations in Slovakia there has often been only a formal change from functional management to process management. For efficient process management it is essential that companies pay attention to how they organize their processes and seek their continuous improvement.
Underwater electric field detection system based on weakly electric fish
NASA Astrophysics Data System (ADS)
Xue, Wei; Wang, Tianyu; Wang, Qi
2018-04-01
Weakly electric fish sense their surroundings in complete darkness with their active electric field detection system. However, because the detection capability of this electric sense is insufficient, the detection distance and accuracy are limited. In this paper, a method of underwater detection based on rotating current field theory is proposed to improve the performance of an underwater electric field detection system. First, building on previous results, we constructed a mathematical model of the underwater detection system based on rotating current field theory. We then completed a principle prototype and carried out detection experiments on metal objects in an underwater environment, laying the foundation for further experiments.
A paradox on quantum field theory of neutrino mixing and oscillations
NASA Astrophysics Data System (ADS)
Li, Yu-Feng; Liu, Qiu-Yu
2006-10-01
Neutrino mixing and oscillations in the quantum field theory framework have been studied before, showing that the Fock space of flavor states is unitarily inequivalent to that of mass states (the inequivalent vacua model). A paradox emerges when we use these neutrino weak states to calculate the amplitude of W boson decay. The branching ratio of W+→e++νμ to W+→e++νe is approximately of the order of O(m_i^2/k^2). The existence of such flavor-changing currents contradicts the Hamiltonian we started from, and the usual knowledge of weak processes. Also, negative-energy neutrinos (or violation of the principle of energy conservation) appear in this framework. We discuss possible reasons for the appearance of this paradox.
The Electron Drift Technique for Measuring Electric and Magnetic Fields
NASA Technical Reports Server (NTRS)
Paschmann, G.; McIlwain, C. E.; Quinn, J. M.; Torbert, R. B.; Whipple, E. C.; Christensen, John (Technical Monitor)
1998-01-01
The electron drift technique is based on sensing the drift of a weak beam of test electrons that is caused by electric fields and/or gradients in the magnetic field. These quantities can, by use of different electron energies, in principle be determined separately. Depending on the ratio of drift speed to magnetic field strength, the drift velocity can be determined either from the two emission directions that cause the electrons to gyrate back to detectors placed some distance from the emitting guns, or from measurements of the time of flight of the electrons. As a by-product of the time-of-flight measurements, the magnetic field strength is also determined. The paper describes strengths and weaknesses of the method as well as technical constraints.
Effect of clay type on the velocity and run-out distance of cohesive sediment gravity flows
NASA Astrophysics Data System (ADS)
Baker, Megan; Baas, Jaco H.; Malarkey, Jonathan; Kane, Ian
2016-04-01
Novel laboratory experiments in a lock-exchange flume filled with natural seawater revealed that sediment gravity flows (SGFs) laden with kaolinite clay (weakly cohesive), bentonite clay (strongly cohesive) and silica flour (non-cohesive) have strongly contrasting flow properties. Knowledge of cohesive clay-laden sediment gravity flows is limited, despite clay being one of the most abundant sediment types on earth and subaqueous SGFs transporting the greatest volumes of sediment on our planet. Cohesive SGFs are particularly complex owing to the dynamic interplay between turbulent and cohesive forces. Cohesive forces allow the formation of clay flocs and gels, which increase the viscosity and shear strength of the flow, and attenuate shear-induced turbulence. The experimental SGFs ranged from dilute turbidity currents to dense debris flows. For each experiment, the run-out distance, head velocity and thickness distribution of the deposit were measured, and the flow properties were recorded using high-resolution video. Increasing the volume concentration of kaolinite and bentonite above 22% and 17%, respectively, reduced both the maximum head velocity and the run-out distances of the SGFs. We infer that increasing the concentration of clay particles enhances the opportunity for the particles to collide and flocculate, thus increasing the viscosity and shear strength of the flows at the expense of turbulence, and reducing their forward momentum. Increasing the volume concentration in the silica-flour laden flows from 1% to 46% increased the maximum head velocity, owing to the gradual increase in excess density. Thereafter, however, intergranular friction is inferred to have attenuated the turbulence, causing a rapid reduction in the maximum head velocity and run-out distance as suspended sediment concentration was increased. 
Moving from flows carrying bentonite via kaolinite to silica flour, a progressively larger volumetric suspended sediment concentration was needed to produce similar run-out distances and maximum head velocities. Strongly cohesive bentonite flows were able to create a stronger network of particle bonds than weakly cohesive kaolinite flows of a similar concentration, thus producing the lower maximum head velocities and run-out distances observed. The lack of cohesion in the silica-flour laden flows meant that extremely high suspended sediment concentrations, i.e. close to the cubic packing density, were required to produce a high enough frictional strength to reduce the forward momentum of these flows. These experimental results can be used to improve our understanding of the deposit geometry and run-out distance of fine-grained SGFs in the natural environment. We suggest that natural SGFs that carry weakly cohesive clays (e.g. kaolinite) reach a greater distance from their origin than flows that contain strongly cohesive clays (e.g. bentonite) at similar suspended sediment concentrations, whilst equivalent fine-grained, non-cohesive SGFs travel the furthest. In addition, weakly cohesive SGFs may cover a larger surface area and have thinner deposits, with important ramifications for the architecture of stacked event beds.
Digitally balanced detection for optical tomography.
Hafiz, Rehan; Ozanyan, Krikor B
2007-10-01
Analog balanced photodetection has found extensive use for sensing a weak absorption signal buried in laser intensity noise. This paper proposes schemes for compact, affordable, and flexible digital implementation of the already established analog balanced detection, as part of a multichannel digital tomography system. Variants of digitally balanced detection (DBD) schemes, suitable for weak signals on a largely varying background or weakly varying envelopes of high-frequency carrier waves, are introduced analytically and elaborated in terms of algorithmic and hardware flow. The DBD algorithms are implemented on a low-cost general-purpose reconfigurable hardware (field-programmable gate array), utilizing less than half of its resources. The performance of the DBD schemes compares favorably with their analog counterpart: a common-mode rejection ratio of 50 dB was observed over a bandwidth of 300 kHz, limited mainly by the host digital hardware. The close relationship between the DBD outputs and those of known analog balancing circuits is discussed in principle and shown experimentally in the example case of propane gas detection.
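The core balancing idea can be sketched numerically: subtract a least-squares-scaled copy of the reference channel from the signal channel so that common-mode laser intensity noise (including gain mismatch between the two photodetector channels) cancels while the weak absorption feature survives. This is an illustrative sketch of the general principle only, not the paper's FPGA algorithms; all signal parameters below are made up.

```python
import numpy as np

def digitally_balanced_detect(signal, reference):
    """Cancel common-mode noise by subtracting a least-squares-scaled
    copy of the reference channel from the signal channel."""
    # Scale factor minimizing the residual power of (signal - g * reference)
    g = np.dot(signal, reference) / np.dot(reference, reference)
    return signal - g * reference

# Example: a weak absorption dip buried in common laser intensity noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
laser_noise = 1.0 + 0.05 * rng.standard_normal(t.size)   # common-mode noise
dip = 0.002 * np.exp(-((t - 0.5) / 0.02) ** 2)           # weak absorption feature
sig = laser_noise * (1.0 - dip)      # signal channel sees the absorption
ref = 0.7 * laser_noise              # reference channel, mismatched gain
balanced = digitally_balanced_detect(sig, ref)
```

Naive subtraction `sig - ref` fails here because of the gain mismatch; the fitted scale factor recovers the ~0.002-deep dip from noise 25 times larger.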
REMARKS ON COMPOUND MODELS, CONSERVED CURRENTS AND WEAK INTERACTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayer, M.E.
A discussion is given of some implications of a symmetry principle, conjectured by Gamba, Marshak, and Okubo (GMO), in connection with the compound models for elementary particles, and the interpretation of weak interactions by a heavy vector meson coupled to the conserved V and A currents of the fermions. GMO observed that, for weak interactions, the three baryons Λ⁰, n, p are equivalent to the leptons μ⁻, e⁻, ν in the sense that any reaction permitted or observed for one of the groups is permitted for the other and, conversely, no reaction forbidden for one is observed in the other. This permitted the extension of the notions of isospin and strangeness to leptons and led to the expression of the electric charge in terms of the isospin projection T_3 and the baryon and lepton numbers B and L: Q = T_3 + 1/2(S + B - L).
Transition between strong and weak topological insulator in ZrTe5 and HfTe5
NASA Astrophysics Data System (ADS)
Fan, Zongjian; Liang, Qi-Feng; Chen, Y. B.; Yao, Shu-Hua; Zhou, Jian
2017-04-01
ZrTe5 and HfTe5 have attracted increasing attention recently since the theoretical prediction that they are topological insulators (TIs). However, subsequent works show many contradictions about their topological nature. Three possible phases, i.e. strong TI, weak TI, and Dirac semi-metal, have been observed in different experiments until now. Essentially, whether ZrTe5 or HfTe5 has a band gap or not is still an open question. Here, we present detailed first-principles calculations of the electronic and topological properties of ZrTe5 and HfTe5 at various volumes and clearly demonstrate the topological phase transition from a strong TI, through an intermediate Dirac semi-metal state, to a weak TI as the crystal expands. Our work may give a unified explanation of the divergent experimental results and provide crucial clues for further experiments to elucidate the topological nature of these materials.
Obtaining maximal stability with a septal extension technique in East asian rhinoplasty.
Jeong, Jae Yong
2014-01-01
Recently, in Korea, the septal extension graft from the septum or rib has become a common method of correcting a small or short nose. The success rate of this method has led to the blind faith that it provides superior tip projection and definition, and to the failure to notice its weaknesses. Even if there is a sufficient amount of cartilage, improper separation or fixation might waste the cartilage, resulting in an inefficient operation. Appropriate resection and effective fixation are essential factors for economical rhinoplasty. The septal extension graft is a remarkable procedure since it can control the nasal tip bidirectionally and three dimensionally. Nevertheless, it has a serious drawback since resection is responsible for septal weakness. Safe resection and firm reconstruction of the framework should be carried out. Operating on the basis of the principle of "safe harvest" and rebuilding the structures is important. Further, it is important to learn several techniques to manage septal weakness, insufficient cartilage quantity, and failure of the rigid frame during the surgery.
Li, Dongmei; Guan, Tian; He, Yonghong; Liu, Fang; Yang, Anping; He, Qinghua; Shen, Zhiyuan; Xin, Meiguo
2018-07-01
A new chiral sensor based on weak measurement has been developed to accurately measure the optical rotation (OR) for the estimation of trace amounts of a chiral molecule. With the principle of optical weak measurement in the frequency domain, the central wavelength shift of the output spectra is quantitatively related to the angle of preselected polarization. Hence, a chiral molecule (e.g., an L-amino acid or D-amino acid) can be enantioselectively determined by modifying the preselection angle with the OR, which causes rotation of the polarization plane. The concentration of the chiral sample, corresponding to its optical activity, is quantitatively analyzed via the central wavelength shift of the output spectra, which can be collected in real time. Immune to refractive-index changes, the proposed chiral sensor is valid in complicated measuring circumstances. Detection of proline enantiomer concentrations in different solvents was implemented. The results demonstrated that weak measurement acts as a reliable method for chiral recognition of proline enantiomers in diverse circumstances, with the merits of high precision and good robustness. In addition, this real-time monitoring approach plays a crucial part in asymmetric synthesis and biological systems. Copyright © 2018. Published by Elsevier B.V.
Cosmology with cosmic shear observations: a review.
Kilbinger, Martin
2015-07-01
Cosmic shear is the distortion of images of distant galaxies due to weak gravitational lensing by the large-scale structure in the Universe. Such images are coherently deformed by the tidal field of matter inhomogeneities along the line of sight. By measuring galaxy shape correlations, we can study the properties and evolution of structure on large scales as well as the geometry of the Universe. Thus, cosmic shear has become a powerful probe into the nature of dark matter and the origin of the current accelerated expansion of the Universe. Over the last years, cosmic shear has evolved into a reliable and robust cosmological probe, providing measurements of the expansion history of the Universe and the growth of its structure. We review here the principles of weak gravitational lensing and show how cosmic shear is interpreted in a cosmological context. Then we give an overview of weak-lensing measurements, and present the main observational cosmic-shear results since it was discovered 15 years ago, as well as the implications for cosmology. We then conclude with an outlook on the various future surveys and missions, for which cosmic shear is one of the main science drivers, and discuss promising new weak cosmological lensing techniques for future observations.
Principle of Maximum Fisher Information from Hardy’s Axioms Applied to Statistical Systems
Frieden, B. Roy; Gatenby, Robert A.
2014-01-01
Consider a finite-sized, multidimensional system in a parameter state a. The system is in either a state of equilibrium or general non-equilibrium, and may obey either classical or quantum physics. L. Hardy’s mathematical axioms provide a basis for the physics obeyed by any such system. One axiom is that the number N of distinguishable states a in the system obeys N = max. This assumes that N is known as deterministic prior knowledge. However, most observed systems suffer statistical fluctuations, for which N is therefore only known approximately. Then what happens if the scope of the axiom N = max is extended to include such observed systems? It is found that the state a of the system must obey a principle of maximum Fisher information, I = Imax. This is important because many physical laws have been derived, assuming as a working hypothesis that I = Imax. These derivations include uses of the principle of Extreme physical information (EPI). Examples of such derivations were of the De Broglie wave hypothesis, quantum wave equations, Maxwell’s equations, new laws of biology (e.g. of Coulomb force-directed cell development, and of in situ cancer growth), and new laws of economic fluctuation and investment. That the principle I = Imax itself derives, from suitably extended Hardy axioms, thereby eliminates its need to be assumed in these derivations. Thus, uses of I = Imax and EPI express physics at its most fundamental level – its axiomatic basis in math. PMID:24229152
Principle of maximum Fisher information from Hardy's axioms applied to statistical systems.
Frieden, B Roy; Gatenby, Robert A
2013-10-01
Consider a finite-sized, multidimensional system in parameter state a. The system is either at statistical equilibrium or general nonequilibrium, and may obey either classical or quantum physics. L. Hardy's mathematical axioms provide a basis for the physics obeyed by any such system. One axiom is that the number N of distinguishable states a in the system obeys N=max. This assumes that N is known as deterministic prior knowledge. However, most observed systems suffer statistical fluctuations, for which N is therefore only known approximately. Then what happens if the scope of the axiom N=max is extended to include such observed systems? It is found that the state a of the system must obey a principle of maximum Fisher information, I=I(max). This is important because many physical laws have been derived, assuming as a working hypothesis that I=I(max). These derivations include uses of the principle of extreme physical information (EPI). Examples of such derivations were of the De Broglie wave hypothesis, quantum wave equations, Maxwell's equations, new laws of biology (e.g., of Coulomb force-directed cell development and of in situ cancer growth), and new laws of economic fluctuation and investment. That the principle I=I(max) itself derives from suitably extended Hardy axioms thereby eliminates its need to be assumed in these derivations. Thus, uses of I=I(max) and EPI express physics at its most fundamental level, its axiomatic basis in math.
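For reference, the quantity maximized in both records above is the standard scalar-parameter Fisher information, which for a likelihood p(x|a) reads:

```latex
I(a) \;=\; \int \mathrm{d}x \; p(x \mid a)
  \left( \frac{\partial \ln p(x \mid a)}{\partial a} \right)^{2}
```

The EPI derivations cited then select the state a for which I attains its maximum under the system's constraints.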
Use and validity of principles of extremum of entropy production in the study of complex systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitor Reis, A., E-mail: ahr@uevora.pt
2014-07-15
It is shown how both principles of extremum of entropy production, which are often used in the study of complex systems, follow from the maximization of overall system conductivities under appropriate constraints. In this way, the maximum rate of entropy production (MEP) occurs when all the forces in the system are kept constant. On the other hand, the minimum rate of entropy production (mEP) occurs when all the currents that cross the system are kept constant. A brief discussion of the validity of the application of the mEP and MEP principles in several cases, and in particular to the Earth's climate, is also presented. -- Highlights: •The principles of extremum of entropy production are not first principles. •They result from the maximization of conductivities under appropriate constraints. •The conditions of their validity are set explicitly. •Some long-standing controversies are discussed and clarified.
NASA Astrophysics Data System (ADS)
Gao, J.; Lythe, M. B.
1996-06-01
This paper presents the principle of the Maximum Cross-Correlation (MCC) approach in detecting translational motions within dynamic fields from time-sequential remotely sensed images. A C program implementing the approach is presented and illustrated in a flowchart. The program is tested with a pair of sea-surface temperature images derived from Advanced Very High Resolution Radiometer (AVHRR) images near East Cape, New Zealand. Results show that the mean currents in the region have been detected satisfactorily with the approach.
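The MCC idea can be sketched as follows: for a template window in the first image, search a neighbourhood in the second image for the integer displacement that maximizes the normalized cross-correlation. This is a minimal Python illustration of the principle (the paper's actual implementation is a C program); the window and search sizes below are arbitrary.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def mcc_displacement(img1, img2, y, x, win=8, search=4):
    """Displacement (dy, dx) of the window centred at (y, x) in img1
    that maximizes NCC against windows of img2 within the search range."""
    tpl = img1[y - win:y + win, x - win:x + win]
    best, best_d = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img2[y + dy - win:y + dy + win, x + dx - win:x + dx + win]
            c = ncc(tpl, cand)
            if c > best:
                best, best_d = c, (dy, dx)
    return best_d, best

# Synthetic check: shift a random field by (2, 3) pixels and recover it
rng = np.random.default_rng(1)
field = rng.standard_normal((64, 64))
shifted = np.roll(field, (2, 3), axis=(0, 1))
d, c = mcc_displacement(field, shifted, 32, 32)
```

Applied to a pair of sea-surface temperature images, the recovered displacement field divided by the time interval gives the advection velocity.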
Brain tissues volume measurements from 2D MRI using parametric approach
NASA Astrophysics Data System (ADS)
L'vov, A. A.; Toropova, O. A.; Litovka, Yu. V.
2018-04-01
The purpose of the paper is to propose a fully automated method for assessing the volume of structures within the human brain. Our statistical approach uses the maximum interdependency principle in the decision-making process for measurement consistency and unequal observations. Outlier detection is performed using the maximum normalized residual test. We propose a statistical model that utilizes knowledge of tissue distribution in the human brain and applies partial data restoration to improve precision. The proposed approach is computationally efficient and independent of the segmentation algorithm used in the application.
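The maximum normalized residual test mentioned above (also known as Grubbs' test) flags the observation with the largest studentized deviation from the sample mean and rejects it if that deviation exceeds a tabulated critical value. A minimal sketch; the critical value below is the standard Grubbs value for n = 6 at roughly the 5% significance level, and the data are made up.

```python
import numpy as np

def max_normalized_residual_outlier(x, g_crit):
    """One pass of the maximum normalized residual (Grubbs) test.

    Returns the index of the detected outlier, or None if the largest
    normalized residual does not exceed the critical value g_crit
    (taken from Grubbs-test tables for the chosen level and sample size)."""
    x = np.asarray(x, dtype=float)
    r = np.abs(x - x.mean()) / x.std(ddof=1)   # normalized residuals
    i = int(np.argmax(r))
    return i if r[i] > g_crit else None

measurements = [10.1, 9.9, 10.2, 10.0, 9.8, 14.5]   # last value inconsistent
idx = max_normalized_residual_outlier(measurements, g_crit=1.887)
```

In practice the test is applied iteratively: remove the flagged observation and repeat on the remainder until no residual exceeds the critical value.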
Probability of stress-corrosion fracture under random loading
NASA Technical Reports Server (NTRS)
Yang, J. N.
1974-01-01
The mathematical formulation is based on a cumulative-damage hypothesis and experimentally determined stress-corrosion characteristics. Under stationary random loadings, the mean value and variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy.
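As a concrete illustration of the maximum entropy step: when only the mean and variance of the cumulative damage D are known, the maximum-entropy distribution on the real line with those moments is Gaussian, and the fracture probability is the tail mass beyond the failure threshold (conventionally D = 1 in cumulative-damage theory). This is a generic sketch under those assumptions, not the paper's exact formulation; the numbers are illustrative.

```python
import math

def fracture_probability(mean_damage, var_damage, threshold=1.0):
    """P(D >= threshold) given only the mean and variance of D.

    The maximum-entropy distribution on the real line with fixed mean
    and variance is Gaussian, so the tail probability follows from the
    normal survival function."""
    sigma = math.sqrt(var_damage)
    z = (threshold - mean_damage) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p = fracture_probability(0.6, 0.04)   # mean damage 0.6, standard deviation 0.2
```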
Obstacles to integrated pest management adoption in developing countries
Parsa, Soroush; Morse, Stephen; Bonifacio, Alejandro; Chancellor, Timothy C. B.; Condori, Bruno; Crespo-Pérez, Verónica; Hobbs, Shaun L. A.; Kroschel, Jürgen; Ba, Malick N.; Rebaudo, François; Sherwood, Stephen G.; Vanek, Steven J.; Faye, Emile; Herrera, Mario A.; Dangles, Olivier
2014-01-01
Despite its theoretical prominence and sound principles, integrated pest management (IPM) continues to suffer from anemic adoption rates in developing countries. To shed light on the reasons, we surveyed the opinions of a large and diverse pool of IPM professionals and practitioners from 96 countries by using structured concept mapping. The first phase of this method elicited 413 open-ended responses on perceived obstacles to IPM. Analysis of responses revealed 51 unique statements on obstacles, the most frequent of which was “insufficient training and technical support to farmers.” Cluster analyses, based on participant opinions, grouped these unique statements into six themes: research weaknesses, outreach weaknesses, IPM weaknesses, farmer weaknesses, pesticide industry interference, and weak adoption incentives. Subsequently, 163 participants rated the obstacles expressed in the 51 unique statements according to importance and remediation difficulty. Respondents from developing countries and high-income countries rated the obstacles differently. As a group, developing-country respondents rated “IPM requires collective action within a farming community” as their top obstacle to IPM adoption. Respondents from high-income countries prioritized instead the “shortage of well-qualified IPM experts and extensionists.” Differential prioritization was also evident among developing-country regions, and when obstacle statements were grouped into themes. Results highlighted the need to improve the participation of stakeholders from developing countries in the IPM adoption debate, and also to situate the debate within specific regional contexts. PMID:24567400
Ai, Shiwei; Guo, Rui; Liu, Bailin; Ren, Liang; Naeem, Sajid; Zhang, Wenya; Zhang, Yingmei
2016-10-01
Vegetables and crops can take up heavy metals when grown on polluted lands. The concentrations and dynamic uptake of heavy metals vary at different growth points for different vegetables. In order to assess the safe consumption of vegetables in weakly alkaline farmlands, Chinese cabbage and radish were planted on the farmlands of Baiyin (polluted site) and Liujiaxia (relatively unpolluted site). First, the growth processes of the two vegetables were recorded. The growth curves of both vegetables showed a slow growth at the beginning, an exponential growth period, and a plateau towards the end. Maximum concentrations of copper (Cu), zinc (Zn), lead (Pb), and cadmium (Cd) occurred in the slow growth period and showed a downtrend thereafter, except in the radish shoot. The concentrations of heavy metals (Cu, Zn, and Cd) in vegetables from Baiyin were higher than those from Liujiaxia. Meanwhile, the uptake contents either continued to increase during growth or peaked at a certain stage. The maximum uptake rates were found at maturity, except for the radish shoot, where they occurred during the exponential growth stage of the root. The sigmoid model could simulate the dynamic processes of growth and heavy-metal uptake of Chinese cabbage and radish. In conclusion, heavy metals show a higher bioaccumulation tendency in the roots of Chinese cabbage and in the shoots of radish.
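The sigmoid (logistic) growth model referred to above has the closed form W(t) = W_max / (1 + e^(-k(t - t_mid))), which reproduces the slow start, exponential phase, and plateau. A small sketch with made-up parameter values, not the study's fitted ones:

```python
import math

def logistic(t, w_max, k, t_mid):
    """Sigmoid (logistic) growth: slow start, exponential phase, plateau.

    w_max  : asymptotic maximum (e.g. final fresh weight)
    k      : growth-rate constant
    t_mid  : time of the inflection point (half of w_max reached)"""
    return w_max / (1.0 + math.exp(-k * (t - t_mid)))

# Illustrative growth curve sampled every 10 days over 90 days
curve = [logistic(t, w_max=500.0, k=0.15, t_mid=45.0) for t in range(0, 91, 10)]
```

The same functional form can be fitted to cumulative metal uptake by replacing the weight parameters with uptake parameters.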
STATISTICAL CHARACTERISTICS OF ELEMENTAL ABUNDANCE RATIOS: OBSERVATIONS FROM THE ACE SPACECRAFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, L.-L.; Zhang, H.
We statistically analyze the elemental galactic cosmic ray (GCR) composition measurements of elements 5 ≤ Z ≤ 28 within the energy range 30–500 MeV/nucleon from the CRIS instrument on board the ACE spacecraft in orbit about the L1 Lagrange point during the period from 1997 to 2014. Similarly to the last unusual solar minimum, the elevated elemental intensities of all heavy nuclei during the current weak solar maximum in 2014 are ∼40% higher than those of the previous solar maximum in 2002, which has been attributed to the weak modulation associated with low solar activity levels during the ongoing weakest solar maximum since the dawn of the space age. In addition, the abundance ratios of heavy nuclei with respect to elemental oxygen are generally independent of kinetic energy per nucleon in the energy region 60–200 MeV/nuc, in good agreement with previous experiments. Furthermore, the abundance ratios of most relatively abundant species, except carbon, exhibit considerable solar-cycle variation and are positively correlated with the sunspot numbers with about a one-year time lag. We also find that the percentage variations of abundance ratios for most elements are approximately identical. These preliminary results provide valuable insights into the characteristics of elemental heavy nuclei composition and place new and significant constraints on future GCR heavy nuclei propagation and modulation models.
NASA Astrophysics Data System (ADS)
Pan, Xinpeng; Zhang, Guangzhi; Yin, Xingyao
2018-01-01
Seismic amplitude variation with offset and azimuth (AVOaz) inversion is well known as a popular and pragmatic tool utilized to estimate fracture parameters. A single set of vertical fractures aligned along a preferred horizontal direction embedded in a horizontally layered medium can be considered as an effective long-wavelength orthorhombic medium. Estimation of Thomsen's weak-anisotropy (WA) parameters and fracture weaknesses plays an important role in characterizing the orthorhombic anisotropy in a weakly anisotropic medium. Our goal is to demonstrate an orthorhombic anisotropic AVOaz inversion approach to describe the orthorhombic anisotropy utilizing the observable wide-azimuth seismic reflection data in a fractured reservoir with the assumption of orthorhombic symmetry. Combining Thomsen's WA theory and linear-slip model, we first derive a perturbation in stiffness matrix of a weakly anisotropic medium with orthorhombic symmetry under the assumption of small WA parameters and fracture weaknesses. Using the perturbation matrix and scattering function, we then derive an expression for linearized PP-wave reflection coefficient in terms of P- and S-wave moduli, density, Thomsen's WA parameters, and fracture weaknesses in such an orthorhombic medium, which avoids the complicated nonlinear relationship between the orthorhombic anisotropy and azimuthal seismic reflection data. Incorporating azimuthal seismic data and Bayesian inversion theory, the maximum a posteriori solutions of Thomsen's WA parameters and fracture weaknesses in a weakly anisotropic medium with orthorhombic symmetry are reasonably estimated with the constraints of Cauchy a priori probability distribution and smooth initial models of model parameters to enhance the inversion resolution and the nonlinear iteratively reweighted least squares strategy. 
The synthetic examples containing a moderate noise demonstrate the feasibility of the derived orthorhombic anisotropic AVOaz inversion method, and the real data illustrate the inversion stabilities of orthorhombic anisotropy in a fractured reservoir.
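The iteratively reweighted least squares (IRLS) strategy mentioned above can be illustrated generically: a Cauchy-type penalty translates into iteration-dependent weights w = 1/(1 + (r/σ)²) that progressively downweight large entries. Note that in the paper the Cauchy distribution is an a priori constraint on the model parameters; for simplicity this sketch applies Cauchy weights to data residuals (a robust line fit) rather than reproducing the authors' full AVOaz inversion, and all data below are synthetic.

```python
import numpy as np

def irls_cauchy(G, d, sigma=1.0, n_iter=20):
    """Iteratively reweighted least squares with Cauchy-type weights.

    Each iteration solves a weighted normal-equations system in which
    observations with large residuals r get weight 1 / (1 + (r/sigma)^2),
    so gross outliers are progressively ignored."""
    m = np.linalg.lstsq(G, d, rcond=None)[0]       # ordinary LS start
    for _ in range(n_iter):
        r = d - G @ m
        w = 1.0 / (1.0 + (r / sigma) ** 2)          # Cauchy weights
        W = np.diag(w)
        m = np.linalg.solve(G.T @ W @ G, G.T @ W @ d)
    return m

# Toy problem: fit a line d = 1 + 2x despite one gross outlier
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 30)
G = np.column_stack([np.ones_like(x), x])
d = 1.0 + 2.0 * x + 0.01 * rng.standard_normal(x.size)
d[5] += 5.0                                         # gross outlier
m = irls_cauchy(G, d, sigma=0.05)
```

An ordinary least-squares fit is pulled off the true line by the outlier; the Cauchy reweighting recovers intercept ≈ 1 and slope ≈ 2.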
Possibility to implement invasive species control in Swedish forests.
Pettersson, Maria; Strömberg, Caroline; Keskitalo, E Carina H
2016-02-01
Invasive alien species constitute an increasing risk to forestry, as indeed to natural systems in general. This study reviews the legislative framework governing invasive species in the EU and Sweden, drawing upon both a legal analysis and interviews with the main national-level agencies responsible for implementing this framework. The study concludes that the EU and Sweden are limited in how well they can act on invasive species, in particular because of the weak interpretation of the precautionary principle in the World Trade Organisation and Sanitary and Phytosanitary agreements. In the Swedish case, this interpretation also conflicts with the stronger interpretation of the precautionary principle under the Swedish Environmental Code, which could in itself provide stronger possibilities to act on invasive species.
Does quantity generate quality? Testing the fundamental principle of brainstorming.
Muñoz Adánez, Alfredo
2005-11-01
The purpose of this work is to test the chief principle of brainstorming, formulated as "quantity generates quality." The study is included within a broad program whose goal is to detect the strong and weak points of creative techniques. In a sample of 69 groups, containing between 3 and 8 members, the concurrence of two commonly accepted criteria was established as a quality rule: originality and utility or value. The results fully support the quantity-quality relation (r = .893): the more ideas produced to solve a problem, the better quality of the ideas. The importance of this finding, which supports Osborn's theory, is discussed, and the use of brainstorming is recommended to solve the many open problems faced by our society.
Principle, system, and applications of tip-enhanced Raman spectroscopy
NASA Astrophysics Data System (ADS)
Zhang, MingQian; Wang, Rui; Wu, XiaoBin; Wang, Jia
2012-08-01
Raman spectroscopy is a powerful technique for chemical information characterization. However, this spectral method faces two obstacles in nano-material detection: one is the diffraction-limited spatial resolution, and the other is the inherently small Raman cross section and weak signal. To resolve these problems, a new approach has been developed, denoted tip-enhanced Raman spectroscopy (TERS). TERS is capable of high-resolution and high-sensitivity detection and has been demonstrated to be a promising spectroscopic and micro-topographic method to characterize nano-materials and nanostructures. In this paper, the principle and experimental system of TERS are discussed. The latest applications of TERS in molecule detection, biological specimen identification, nano-material characterization, and semiconductor material determination are presented with some specific experimental examples.
Weakly Informative Prior for Point Estimation of Covariance Matrices in Hierarchical Models
ERIC Educational Resources Information Center
Chung, Yeojin; Gelman, Andrew; Rabe-Hesketh, Sophia; Liu, Jingchen; Dorie, Vincent
2015-01-01
When fitting hierarchical regression models, maximum likelihood (ML) estimation has computational (and, for some users, philosophical) advantages compared to full Bayesian inference, but when the number of groups is small, estimates of the covariance matrix (S) of group-level varying coefficients are often degenerate. One can do better, even from…
High-reflectivity phase conjugation using Brillouin preamplification.
Ridley, K D; Scott, A M
1990-07-15
We describe experiments in which a weak laser pulse is phase conjugated by using a high-gain Brillouin amplifier in front of a stimulated Brillouin scattering phase-conjugate mirror. We observe phase conjugation with signal energies as low as 3 × 10^-13 J and with a maximum reflection coefficient of 2 × 10^8.
Dynamics on the laminar-turbulent boundary and the origin of the maximum drag reduction asymptote.
Xi, Li; Graham, Michael D
2012-01-13
Dynamical trajectories on the boundary in state space between laminar and turbulent plane channel flow (edge states) are computed for Newtonian and viscoelastic fluids. Viscoelasticity has a negligible effect on the properties of these solutions, and, at least at low Reynolds number, their mean velocity profiles correspond closely to experimental observations for polymer solutions in the maximum drag reduction regime. These results confirm the existence of weak turbulence states that cannot be suppressed by polymer additives, explaining the fact that there is an upper limit for polymer-induced drag reduction.
EPRB Gedankenexperiment and Entanglement with Classical Light Waves
NASA Astrophysics Data System (ADS)
Rashkovskiy, Sergey A.
2018-06-01
In this article we show that results similar to those of the Einstein-Podolsky-Rosen-Bohm (EPRB) Gedankenexperiment and entanglement of photons can be obtained using weak classical light waves if we take into account the discrete (atomic) structure of the detectors and the specific nature of the light-atom interaction. We show that the CHSH (Clauser, Horne, Shimony, and Holt) criterion in the EPRB Gedankenexperiment with classical light waves can exceed not only the maximum value S_HV = 2 predicted by local hidden-variable theories but also the maximum value S_QM = 2√2 predicted by quantum mechanics.
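For reference, the quantum bound S_QM = 2√2 (Tsirelson's bound) follows from the singlet-state correlation E(a, b) = -cos(a - b) evaluated at the standard optimal analyzer angles. A minimal sketch of this textbook quantum prediction (not the paper's classical-wave model):

```python
import math

def E(a, b):
    """Singlet-state spin correlation for analyzer angles a, b (radians)."""
    return -math.cos(a - b)

# Standard optimal CHSH analyzer settings
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
# |S| saturates Tsirelson's bound 2*sqrt(2), exceeding the
# local-hidden-variable limit |S| <= 2
```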
Effects of pulse width and coding on radar returns from clear air
NASA Technical Reports Server (NTRS)
Cornish, C. R.
1983-01-01
In atmospheric radar studies it is desirable to obtain maximum information about the atmosphere while using the radar transmitter and processing hardware efficiently. Large pulse widths are used to increase the signal-to-noise ratio, since clear-air returns are generally weak and maximum height coverage is desired. Yet because good height resolution is equally important, pulse compression techniques such as phase coding are employed to optimize the average power of the transmitter. Considerations in implementing a coding scheme and the subsequent effects of an impinging pulse on the atmosphere are investigated.
A Rejection Principle for Sequential Tests of Multiple Hypotheses Controlling Familywise Error Rates
BARTROFF, JAY; SONG, JINLIN
2015-01-01
We present a unifying approach to multiple testing procedures for sequential (or streaming) data by giving sufficient conditions for a sequential multiple testing procedure to control the familywise error rate (FWER). Together we call these conditions a “rejection principle for sequential tests,” which we then apply to some existing sequential multiple testing procedures to give simplified understanding of their FWER control. Next the principle is applied to derive two new sequential multiple testing procedures with provable FWER control, one for testing hypotheses in order and another for closed testing. Examples of these new procedures are given by applying them to a chromosome aberration data set and to finding the maximum safe dose of a treatment. PMID:26985125
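For comparison with the sequential procedures developed here, a classical fixed-sample method with provable FWER control is Holm's step-down procedure, which compares ordered p-values against successively looser thresholds. A minimal sketch (standard Holm, not the paper's sequential tests):

```python
def holm_stepdown(p_values, alpha=0.05):
    """Holm's step-down procedure: return indices of rejected hypotheses.
    Controls the familywise error rate at level alpha under any dependence."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = []
    for rank, i in enumerate(order):
        # the k-th smallest p-value is compared against alpha / (m - k + 1)
        if p_values[i] <= alpha / (m - rank):
            rejected.append(i)
        else:
            break  # stepping down stops at the first failure
    return sorted(rejected)

# Example: four hypotheses, FWER controlled at 0.05
rejects = holm_stepdown([0.001, 0.2, 0.01, 0.03])  # → [0, 2]
```

Here 0.001 ≤ 0.05/4 and 0.01 ≤ 0.05/3 are rejected, but 0.03 > 0.05/2 stops the procedure.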
[Study on culture and philosophy of processing of traditional Chinese medicines].
Yang, Ming; Zhang, Ding-Kun; Zhong, Ling-Yun; Wang, Fang
2013-07-01
From the perspective of cultural views and philosophical thought, this paper studies the cultural origin, thinking modes, core principles, and general rules and methods of the processing of traditional Chinese medicines. It traces the culture and history of processing, including its generation and evolution, its accumulation of experience, and its core value, and it summarizes the basic principles of processing, which are guided by holistic, objective, dynamic, balanced, and appropriateness-oriented thinking. The aims are to propagate the cultural characteristics and philosophical wisdom of traditional Chinese medicine processing, to promote the inheritance and development of processing, and to ensure the maximum therapeutic value of Chinese medicine in clinical use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jianhui; Chen, Chen; Lu, Xiaonan
2015-08-01
This guideline focuses on the integration of DMS with DERMS and microgrids connected to the distribution grid by defining generic and fundamental design and implementation principles and strategies. It starts by addressing the current status, objectives, and core functionalities of each system, and then discusses the new challenges and the common principles of DMS design and implementation for integration with DERMS and microgrids to realize enhanced grid operation reliability and quality power delivery to consumers while also achieving the maximum energy economics from the DER and microgrid connections.
JPRS Report. Soviet Union: World Economy & International Relations, No. 7, July 1989.
1989-11-09
developing countries is based not so much on a national industrial bourgeoisie (which in principle should be the agent of capitalist development...as on the bureaucratic bourgeoisie, which represents the state-capitalist structure (a kind of "surrogate" bourgeoisie). The socialist tendency...countries of Tropical Africa. The industrial bourgeoisie is financially weak, is not inclined to take risks, lacks authority, and is inexperienced and
2014-04-01
FISCAM Federal Information System Controls Audit Manual FMFIA Federal Managers' Financial Integrity Act FMR Financial Management Regulation GAAP ...rules are incorporated into generally accepted accounting principles (GAAP) for the federal government. For additional information on the two methods of...to hold executive branch officials accountable for proper use of budgetary resources, and to ensure proper stewardship and transparency of the use
2001-12-01
Act of 1996 FMFIA Federal Managers' Financial Integrity Act of 1982 FTE full-time equivalent GAAP generally...statements.11 This guidance requires that financial statements be prepared in accordance with U.S. generally accepted accounting principles (GAAP)12 and the...Federal Financial Statements. 12The Federal Accounting Standards Advisory Board promulgates GAAP for federal government entities. Annual Financial
Guidelines and Options for Computer Access from a Reclined Position.
Grott, Ray
2015-01-01
Many people can benefit from working in a reclined position when accessing a computer. This can be due to disabilities involving musculoskeletal weakness, or the need to offload pressure on the spine or elevate the legs. Although there are "reclining workstations" on the market that work for some people, potentially better solutions tailored to individual needs can be configured at modest cost by following some basic principles.
The Second Economy in the USSR and Eastern Europe: A Bibliography
1985-09-01
34 with an appendix "Maximum Principle for Speculative Money Balances" by Andrzej Zieba. In Gaertner and Wenig, Economics of the Shadow Economy, 377-391...1976. Translation of Partiia ili mafia (Paris, 1976). Zieba, Andrzej. -- see Brus/Laski.
Wormholes, emergent gauge fields, and the weak gravity conjecture
Harlow, Daniel
2016-01-20
This paper revisits the question of reconstructing bulk gauge fields as boundary operators in AdS/CFT. In the presence of the wormhole dual to the thermofield double state of two CFTs, the existence of bulk gauge fields is in some tension with the microscopic tensor factorization of the Hilbert space. Here, I explain how this tension can be resolved by splitting the gauge field into charged constituents, and I argue that this leads to a new argument for the "principle of completeness", which states that the charge lattice of a gauge theory coupled to gravity must be fully populated. I also claim that it leads to a new motivation for (and a clarification of) the "weak gravity conjecture", which I interpret as a strengthening of this principle. This setup gives a simple example of a situation where describing low-energy bulk physics in CFT language requires knowledge of high-energy bulk physics. This contradicts, to some extent, the notion of "effective conformal field theory", but in fact is an expected feature of the resolution of the black hole information problem. An analogous factorization issue exists for the gravitational field as well, and I comment on several of its implications for reconstructing black hole interiors and the emergence of spacetime more generally.
Limits on amplification by Aharonov-Albert-Vaidman weak measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koike, Tatsuhiko; Tanaka, Saki
2011-12-15
We analyze the amplification by the Aharonov-Albert-Vaidman weak quantum measurement on a Sagnac interferometer [Dixon et al., Phys. Rev. Lett. 102, 173601 (2009)] up to all orders of the coupling strength between the measured system and the measuring device. The amplifier transforms a small tilt of a mirror into a large transverse displacement of the laser beam. The conventional analysis has shown that the measured value is proportional to the weak value, so that the amplification can be made arbitrarily large at the cost of decreasing output laser intensity. We show that the measured displacement and the amplification factor are in fact not proportional to the weak value and instead vanish in the limit of infinitesimal output intensity. We derive the optimal overlap of the pre- and postselected states for which the amplification becomes maximal. We also show that nonlinear effects begin to arise in the performed experiments, so that any improvement of the experiment, typically beyond an amplification of 100, would require the nonlinear theory to translate the observed value into the original displacement.
Study and simulation of the DFIG (Etude et simulation de la MADA)
NASA Astrophysics Data System (ADS)
Defontaines, Remi
Over the past ten years, the production of electric energy from wind turbines has increased eightfold. This form of energy production is expanding rapidly, and researchers now have various means at their disposal to exploit it fully. The doubly fed induction generator (DFIG) is a type of wind turbine that has been the subject of numerous studies in recent years. This turbine operates over a range of wind speeds. Its main particularity is that it is built around a wound-rotor asynchronous machine and can deliver active power to the grid through both the stator and the rotor. This structure yields good performance over a wide range of wind speeds at a reasonable cost, because it uses power converters of modest rating. Despite these advantages, its connection to the grid poses a problem. The electric grid is not always stable; it regularly suffers voltage disturbances (undervoltage, unbalance, or overvoltage). These disturbances degrade the quality of the machine's operation and can damage or even destroy the power converters. To avoid this, the wind turbine is disconnected from the grid when such disturbances occur. The goal of this research is to find a control strategy that allows the wind turbine to keep operating when the grid voltage deteriorates, thereby avoiding disconnection and the resulting loss of electrical power.
Saravana Kumar, P; Duraipandiyan, V; Ignacimuthu, S
2014-09-01
Thirty-seven actinomycetes strains were isolated from soil samples collected from an agriculture field in Vengodu, Thiruvannamalai District, Tamil Nadu, India (latitude: 12° 54' 0033″, North; longitude: 79° 78' 5216″, East; elevation: 228.6/70.0 ft/m). The isolates were assessed for antagonistic activity against five Gram-positive bacteria, seven Gram-negative bacteria, and two pathogenic fungi. During the initial screening, 43% of the strains showed weak activity, 16% showed moderate activity, 5% showed good activity, and 35% showed no antagonistic activity. Among the strains tested, SCA 7 showed strong antimicrobial activity. Maximum biological activity was obtained on modified nutrient glucose agar (MNGA) medium. The mycelia of SCA 7 were extracted with methanol and tested against microbial pathogens using the disc diffusion method. The crude extract was purified partially using column chromatography and assessed for antimicrobial activity. Fraction 10 showed good activity against Staphylococcus epidermidis (31.25 μg/mL) and Malassezia pachydermatis (500 μg/mL) and the active principle (fraction 10) was identified as 2,4-bis (1,1-dimethylethyl) phenol. Based on morphological, physiological, biochemical, cultural, and molecular characteristics (16S rDNA sequencing), this strain was identified as Streptomyces sp. SCA 7. It could be used in the development of new substances for pharmaceutical or agricultural purposes. Copyright © 2014. Published by Elsevier B.V.
Antiferromagnetism in the van der Waals layered spin-lozenge semiconductor CrTe 3
McGuire, Michael A.; Garlea, V. Ovidiu; KC, Santosh; ...
2017-04-14
We have investigated the crystallographic, magnetic, and transport properties of the van der Waals bonded, layered compound CrTe3 on single-crystal and polycrystalline materials. The crystal structure contains layers made up of lozenge-shaped Cr4 tetramers. Electrical resistivity measurements show the crystals to be semiconducting, with a temperature dependence consistent with a band gap of 0.3 eV. The magnetic susceptibility exhibits a broad maximum near 300 K, characteristic of low dimensional magnetic systems. Weak anomalies are observed in the susceptibility and heat capacity near 55 K, and single-crystal neutron diffraction reveals the onset of long-range antiferromagnetic order at this temperature. Strongly dispersive spin waves are observed in the ordered state. Significant magnetoelastic coupling is indicated by the anomalous temperature dependence of the lattice parameters and is evident in structural optimization in van der Waals density functional theory calculations for different magnetic configurations. The cleavability of the compound is apparent from its handling and is confirmed by first-principles calculations, which predict a cleavage energy of 0.5 J/m², similar to graphite. Based on our results, CrTe3 is identified as a promising compound for studies of low dimensional magnetism in bulk crystals as well as magnetic order in monolayer materials and van der Waals heterostructures.
NASA Astrophysics Data System (ADS)
Midha, Tripti; Kolomeisky, Anatoly B.; Gupta, Arvind Kumar
2018-04-01
Stimulated by the effect of nearest-neighbor interactions in vehicular traffic and motor proteins, we study a 1D driven lattice gas model in which the nearest-neighbor particle interactions are chosen in accordance with thermodynamic concepts. The non-equilibrium steady-state properties of the system are analyzed under both open and periodic boundary conditions using a combination of cluster mean-field analysis and Monte Carlo simulations. Interestingly, the fundamental diagram of current versus density shows a complex behavior, with a unimodal dependence for attractions and weak repulsions that turns into bimodal behavior for stronger repulsive interactions. Specific details of the system-reservoir coupling for the open system have a strong effect on the stationary phases. We produce the steady-state phase diagrams for the bulk-adapted coupling to the reservoir using the minimum and maximum current principles. The strength and nature of the interaction energy have a striking influence on the number of stationary phases. We observe that interactions lead to correlations that have a strong impact on the system's dynamical properties. The correlation between any two sites decays exponentially as the distance between the sites increases; moreover, the correlations are found to be short-range for repulsions and long-range for attractions. Our results also suggest that repulsions and attractions asymmetrically modify the dynamics of interacting particles in exclusion processes.
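The non-interacting limit of this model class is the totally asymmetric simple exclusion process (TASEP), whose fundamental diagram is the unimodal J = ρ(1 − ρ). A minimal Monte Carlo sketch on a ring (random-sequential updates, no interaction energy; parameter values are illustrative):

```python
import random

def tasep_current(L=100, N=50, sweeps=4000, burn_in=500, seed=7):
    """Estimate the stationary particle current of a TASEP on a ring
    (random-sequential dynamics, hard-core exclusion, unit hop rate)."""
    rng = random.Random(seed)
    occ = [1] * N + [0] * (L - N)
    rng.shuffle(occ)
    hops, attempts = 0, 0
    for sweep in range(sweeps):
        for _ in range(L):  # one sweep = L random single-site update attempts
            i = rng.randrange(L)
            j = (i + 1) % L
            if occ[i] and not occ[j]:  # particle hops right into a hole
                occ[i], occ[j] = 0, 1
                if sweep >= burn_in:
                    hops += 1
            if sweep >= burn_in:
                attempts += 1
    return hops / attempts  # hop probability per attempt = current per bond

# At density rho = 0.5 the estimate should lie close to rho * (1 - rho) = 0.25
J = tasep_current()
```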
Lineshapes of Dipole-Dipole Resonances in a Cold Rydberg Gas
NASA Astrophysics Data System (ADS)
Richards, B. G.; Jones, R. R.
2015-05-01
We have examined the lineshapes associated with Stark-tuned, dipole-dipole resonances involving Rydberg atoms in a cold gas. Rb atoms in a MOT are laser excited from the 5p level to 32p_{3/2} in the presence of a weak electric field. A fast-rising electric field pulse Stark tunes the total energy of pairs of 32p atoms so it is (nearly) degenerate with that of the 32s_{1/2} + 33s_{1/2} states. Because of the dipole-dipole coupling, atom pairs separated by a distance R develop 32s_{1/2} + 33s_{1/2} character. The maximum probability for finding atoms in s-states depends on the detuning from degeneracy and on the dipole-dipole coupling. We obtain the ``resonance'' lineshape by measuring, via state-selective field ionization, the s-state population as a function of the tuning field. The resonance width decreases with density due to the R^{-3} dependence of the dipole-dipole coupling. In principle, the lineshape provides information about the distribution of Rydberg atom spacings in the sample. For equally spaced atoms, the lineshape should be Lorentzian, while for a random nearest-neighbor distribution it appears as a cusp. At low densities nearly Gaussian lineshapes are observed, with widths that are too large to be the result of inhomogeneous electric or magnetic fields. Supported by the NSF.
Soobrattee, Muhammad A; Bahorun, Theeshan; Neergheen, Vidushi S; Googoolye, Kreshna; Aruoma, Okezie I
2008-02-01
There is continued interest in assessing the bioefficacy of the active principles in extracts from a variety of traditional medicine and food plants, in order to determine their impact on the management of a variety of clinical conditions and the maintenance of health. The polyphenolic composition and antioxidant potential of Mauritian endemic plants of the Rubiaceae, Ebenaceae, Celastraceae, Erythroxylaceae and Sterculaceae families were determined. The phenolics level of the plant extracts varied from 1 to 75 mg/g FW, with the maximum level measured in Diospyros neraudii (Ebenaceae). Coffea macrocarpa showed the highest flavonoid content, at 18 ± 0.7 mg/g FW. The antioxidant capacities based on the TEAC and FRAP values were strongly related to total phenolics and proanthocyanidins content, while a weaker correlation was observed with (-) gallic acid. Erythroxylum sideroxyloides showed the highest protective effect in the lipid peroxidation systems, with IC50 values of 0.0435 ± 0.001 mg FW/ml in the Fe3+/ascorbate system and 0.05 ± 0.002 mg FW/ml in the AAPH system. Cassine orientalis, E. sideroxyloides, Diospyros mellanida and Chassalia coriancea var. johnstonii were weakly prooxidant only at concentrations greater than 10 g FW/L, indicating potential safety. Mauritian endemic plants, particularly the genus Diospyros, are good sources of phenolic antioxidants and potential candidates for the development of prophylactic agents.
NASA Astrophysics Data System (ADS)
Liu, Molin; Zhao, Zonghua; You, Xiaohe; Lu, Jianbo; Xu, Lixin
2017-07-01
About 0.4 s after the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected a transient gravitational-wave (GW) signal, GW150914, the Fermi Gamma-ray Burst Monitor (GBM) also found a weak electromagnetic transient (GBM transient 150914). Time and location coincidences favor a possible association between GW150914 and GBM transient 150914. Under this possible association, we adopt Fermi's electromagnetic (EM) localization and derive constraints on possible violations of the Weak Equivalence Principle (WEP) from the observations of the two events. Our calculations are based on four comparisons: (1) The first is the comparison of the initial GWs detected at the two LIGO sites. From the different polarizations of these initial GWs, we obtain a limit on any difference in the parametrized post-Newtonian (PPN) parameter of Δγ ≲ 10^-10. (2) The second is a comparison of GWs and possible EM waves. Using a traditional super-Eddington accretion model for GBM transient 150914, we again obtain an upper limit Δγ ≲ 10^-10. Compared with previous results for photons and neutrinos, our limits are five orders of magnitude stronger than those from PeV neutrinos in blazar flares, and seven orders of magnitude stronger than those from MeV neutrinos in SN1987A. (3) The third is a comparison of GWs with different frequencies in the range [35 Hz, 250 Hz]. (4) The fourth is a comparison of EM waves with different energies in the range [1 keV, 10 MeV]. These last two comparisons lead to an even stronger limit, Δγ ≲ 10^-8. Our results highlight the potential of multi-messenger signals exploiting different emission channels to strengthen existing tests of the WEP.
NASA Astrophysics Data System (ADS)
Zheng, Z. D.; Wang, X. C.; Mi, W. B.
2018-04-01
The electronic structure of Fe-adsorbed g-C2N with different numbers of layers is investigated by first-principles calculations. Fe1 and Fe2 denote Fe adsorption at the C-C and C-N rings, respectively, and the Fe11 and Fe121 adsorption sites are also considered. Fe1-adsorbed g-C2N is metallic for layer numbers n = 1 to 4, with maximum spin splittings of 515, 428, 46 and 133 meV. The band gap of Fe2-adsorbed g-C2N with different layers is 0, 0, 117 and 6 meV, with maximum spin splittings of 565, 369, 195 and 146 meV, respectively. All of the Fe11-adsorbed g-C2N systems are metallic for n = 1 to 4, with maximum spin splittings of 199, 0, 83 and 203 meV. An indirect band gap of 215 meV appears in Fe121-adsorbed g-C2N at n = 3, with maximum spin splittings of 283, 211, 304 and 153 meV, respectively. Our results show that the electronic structure of the Fe-adsorbed novel two-dimensional semiconductor g-C2N can be tuned by the number of layers. Moreover, the spin splitting of Fe2-adsorbed g-C2N decreases monotonically as the number of g-C2N layers increases from n = 1 to 4, which may enable further applications in spintronic devices.
Smith, Isaac H; Aquino, Karl; Koleva, Spassena; Graham, Jesse
2014-08-01
Throughout history, principles such as obedience, loyalty, and purity have been instrumental in binding people together and helping them thrive as groups, tribes, and nations. However, these same principles have also led to in-group favoritism, war, and even genocide. Does adhering to the binding moral foundations that underlie such principles unavoidably lead to the derogation of out-group members? We demonstrated that for people with a strong moral identity, the answer is "no," because they are more likely than those with a weak moral identity to extend moral concern to people belonging to a perceived out-group. Across three studies, strongly endorsing the binding moral foundations indeed predicted support for the torture of out-group members (Studies 1a and 1b) and withholding of necessary help from out-group members (Study 2), but this relationship was attenuated among participants who also had a strong moral identity. © The Author(s) 2014.
Noncommutative Common Cause Principles in algebraic quantum field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hofer-Szabo, Gabor; Vecsernyes, Peter
2013-04-15
States in algebraic quantum field theory 'typically' establish correlations between spacelike separated events. Reichenbach's Common Cause Principle, generalized to the quantum field theoretical setting, offers an apt tool to causally account for these superluminal correlations. In the paper we first motivate why commutativity between the common cause and the correlating events should be abandoned in the definition of the common cause. Then we show that the Noncommutative Weak Common Cause Principle holds in algebraic quantum field theory with locally finite degrees of freedom. Namely, for any pair of projections A, B supported in spacelike separated regions V_A and V_B, respectively, there is a local projection C, not necessarily commuting with A and B, such that C is supported within the union of the backward light cones of V_A and V_B and the set {C, C^⊥} screens off the correlation between A and B.
NASA Astrophysics Data System (ADS)
Biswas, Sohag; Dasgupta, Teesta; Mallik, Bhabani S.
2016-09-01
We present the reactivity of an organic intermediate by studying the proton transfer process from water to the ketyl radical anion, using gas-phase electronic structure calculations and metadynamics-based first-principles molecular dynamics (FPMD) simulations. Our results indicate that, upon systematic microsolvation of the anion by water molecules, a minimum of three water molecules in the gas-phase cluster is sufficient to observe the proton transfer event. The analysis of trajectories obtained from an initial FPMD simulation of an aqueous solution of the anion does not show any evidence of complete transfer of the proton from water. The cooperativity of water molecules and the relatively weak anion-water interaction in the liquid state prohibit the full release of the proton. Using a biasing potential through first-principles metadynamics simulations, we report the observation of the proton transfer reaction from water to the ketyl radical anion with a barrier height of 16.0 kJ/mol.
Combining Experiments and Simulations Using the Maximum Entropy Principle
Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten
2014-01-01
A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus on how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights into the problem. We highlight each of these contributions in turn and conclude with a discussion of remaining challenges. PMID:24586124
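The reweighting flavor of the maximum entropy idea can be illustrated on a toy, one-observable example: give each simulation frame a weight w_i ∝ exp(λ x_i), with λ tuned so the reweighted average matches the experimental value; these weights minimize the relative entropy to the original uniform ensemble subject to that constraint. A minimal sketch (toy data; practical applications use more robust solvers and often perturb the potential energy function instead of post-hoc weights):

```python
import math

def maxent_reweight(samples, target, lam_lo=-50.0, lam_hi=50.0):
    """Maximum-entropy reweighting for one observable: find lam such that
    sum_i w_i * x_i = target with w_i proportional to exp(lam * x_i).
    The target must lie strictly between min(samples) and max(samples)."""
    def weighted_mean(lam):
        w = [math.exp(lam * x) for x in samples]
        Z = sum(w)
        return sum(wi * xi for wi, xi in zip(w, samples)) / Z

    # weighted_mean is monotonically increasing in lam (its derivative is a
    # weighted variance), so plain bisection suffices
    lo, hi = lam_lo, lam_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if weighted_mean(mid) < target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in samples]
    Z = sum(w)
    return [wi / Z for wi in w], lam

# Toy "simulation" samples of one observable, "experimental" average 2.0
weights, lam = maxent_reweight([0.0, 1.0, 2.0, 3.0], target=2.0)
```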
Energetics of slope flows: linear and weakly nonlinear solutions of the extended Prandtl model
NASA Astrophysics Data System (ADS)
Güttler, Ivan; Marinović, Ivana; Večenaj, Željko; Grisogono, Branko
2016-07-01
The Prandtl model succinctly combines the 1D stationary boundary-layer dynamics and thermodynamics of simple anabatic and katabatic flows over uniformly inclined surfaces. It assumes a balance between the along-the-slope buoyancy component and adiabatic warming/cooling, and the turbulent mixing of momentum and heat. In this study, the energetics of the Prandtl model is addressed in terms of the total energy (TE) concept. Furthermore, since the authors recently developed a weakly nonlinear version of the Prandtl model, the TE approach is also exercised on this extended model version, which includes an additional nonlinear term in the thermodynamic equation. Hence, the interplay among diffusion, dissipation and temperature-wind interaction of the mean slope flow is further explored. The TE of the nonlinear Prandtl model is assessed in an ensemble of solutions in which the Prandtl number, the slope angle and the nonlinearity parameter are perturbed. It is shown that, compared to the other two governing parameters, nonlinear effects have the lowest impact on variability in the ensemble of solutions of the weakly nonlinear Prandtl model. The general behavior of the nonlinear solution is similar to the linear solution, except that the maximum of the along-the-slope wind speed in the nonlinear solution decreases for larger slope angles. Also, the dominance of PE near the sloped surface and the elevated maximum of KE, found in the linear and nonlinear energetics of the extended Prandtl model, appear in the PASTEX-94 measurements. The corresponding level where KE > PE most likely marks the bottom of the sublayer subject to shear-driven instabilities. Finally, possible limitations of the weakly nonlinear solutions of the extended Prandtl model are raised. In linear solutions, the local storage of TE is zero, reflecting the stationarity of the solutions by definition. However, in nonlinear solutions, the diffusion, dissipation and interaction terms (where the height of the maximum interaction is proportional to the height of the low-level jet by a factor of ≈4/9) do not balance, and the local storage of TE attains non-zero values. In order to examine this issue of non-stationarity, the inclusion of velocity-pressure covariance in the momentum equation is suggested for future development of the extended Prandtl model.
Laser-beam scintillations for weak and moderate turbulence
NASA Astrophysics Data System (ADS)
Baskov, R. A.; Chumak, O. O.
2018-04-01
The scintillation index is obtained for the practically important range of weak and moderate atmospheric turbulence. To study this challenging range, the Boltzmann-Langevin kinetic equation describing light propagation is derived from the first principles of quantum optics, based on the technique of the photon distribution function (PDF) [Berman et al., Phys. Rev. A 74, 013805 (2006), 10.1103/PhysRevA.74.013805]. The paraxial approximation for laser beams reduces the collision integral for the PDF to a two-dimensional operator in momentum space. Analytical solutions for the average value of the PDF as well as for its fluctuating constituent are obtained using an iterative procedure. The calculated scintillation index is considerably greater than that obtained within the Rytov approximation, even at moderate turbulence strength. An explanation for this difference is proposed.
Seismic waves in a self-gravitating planet
NASA Astrophysics Data System (ADS)
Brazda, Katharina; de Hoop, Maarten V.; Hörmann, Günther
2013-04-01
The elastic-gravitational equations describe the propagation of seismic waves including the effect of self-gravitation. We rigorously derive and analyze this system of partial differential equations and boundary conditions for a general, uniformly rotating, elastic, but aspherical, inhomogeneous, and anisotropic, fluid-solid earth model, under minimal assumptions concerning the smoothness of material parameters and geometry. For this purpose we first establish a consistent mathematical formulation of the low regularity planetary model within the framework of nonlinear continuum mechanics. Using calculus of variations in a Sobolev space setting, we then show how the weak form of the linearized elastic-gravitational equations directly arises from Hamilton's principle of stationary action. Finally we prove existence and uniqueness of weak solutions by the method of energy estimates and discuss additional regularity properties.
Comparative study of electronic structure and microscopic model of SrMn3P4O14 and Sr3Cu3(PO4)4
NASA Astrophysics Data System (ADS)
Khanam, Dilruba; Rahaman, Badiur
2018-05-01
We present first-principles density functional calculations to carry out a comparative study of the underlying spin models of SrMn3P4O14 and Sr3Cu3(PO4)4. We explicitly discuss the nature of the exchange paths and provide quantitative estimates of the magnetic exchange couplings for both compounds. A microscopic modeling based on analysis of the electronic structure of both systems puts them in the interesting class of weakly coupled trimer units, which form chains with S = 5/2 for SrMn3P4O14 and S = 1/2 for Sr3Cu3(PO4)4 that are in turn weakly coupled to each other.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohammed, Irshad; Gnedin, Nickolay Y.
Baryonic effects are amongst the most severe systematics in the tomographic analysis of weak lensing data, which is the principal probe in many future generations of cosmological surveys such as LSST and Euclid. Modeling or parameterizing these effects is essential in order to extract valuable constraints on cosmological parameters. In a recent paper, Eifler et al. (2015) suggested a reduction technique for baryonic effects by conducting a principal component analysis (PCA) and removing the largest baryonic eigenmodes from the data. In this article, we carried the investigation further and addressed two critical aspects. Firstly, we performed the analysis by separating the simulations into training and test sets, computing a minimal set of principal components from the training set and examining the fits on the test set. We found that using only four parameters, corresponding to the four largest eigenmodes of the training set, the test sets can be fitted well, with an RMS of ~0.0011. Secondly, we explored the significance of outliers, the most exotic/extreme baryonic scenarios, in this method. We found that excluding the outliers from the training set results in a relatively bad fit and degrades the RMS by nearly a factor of 3. Therefore, for a direct employment of this method in the tomographic analysis of weak lensing data, the principal components should be derived from a training set that comprises adequately exotic but physically reasonable models, such that reality is included inside the parameter domain sampled by the training set. The baryonic effects can be parameterized as the coefficients of these principal components and should be marginalized over the cosmological parameter space.
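The train/test PCA reduction described in this abstract can be sketched numerically. The following is a minimal illustration on synthetic data (the dimensions, variable names, and generated "spectra" are assumptions, not the paper's actual pipeline): deviations built from a handful of baryonic modes are fitted on a test set using the four leading principal components of a training set.

```python
import numpy as np

# Hypothetical setup: each row is one baryonic scenario's deviation of the
# weak-lensing power spectrum from a baseline, sampled in 30 bins.
rng = np.random.default_rng(0)
n_bins = 30
basis = rng.normal(size=(4, n_bins))       # 4 underlying "baryonic" modes
training = rng.normal(size=(12, 4)) @ basis
test = rng.normal(size=(5, 4)) @ basis

# Principal components of the training set via SVD.
_, _, vt = np.linalg.svd(training, full_matrices=False)
pc4 = vt[:4]                               # the four largest eigenmodes

# Fit each test scenario with 4 coefficients and measure the residual RMS.
coeffs = test @ pc4.T
residual = test - coeffs @ pc4
rms = float(np.sqrt(np.mean(residual**2)))
print(f"RMS of 4-mode fit: {rms:.2e}")
```

Here the test scenarios lie (by construction) in the span of the training modes, so the four-component fit is essentially exact; in the paper's setting the analogous residual is the quoted RMS ~0.0011.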
Tentative Identification of Interstellar Dust in the Magnetic Wall of the Heliosphere
NASA Astrophysics Data System (ADS)
Frisch, Priscilla C.
2005-10-01
Observations of the weak polarization of light from nearby stars, reported by Tinbergen, are consistent with polarization by small (radius <0.14 μm), interstellar dust grains entrained in the magnetic wall of the heliosphere. The region of maximum polarization is toward ecliptic coordinates (λ, β) ~ (295°, 0°), corresponding to (l, b) = (20°, -21°). The direction of maximum polarization is offset along the ecliptic longitude by ~35° from the nose of the heliosphere and extends to low ecliptic latitudes. An offset is also seen between the region with the best-aligned dust grains, λ ~ 281°-330°, and the upwind direction of the undeflected large grains, λ ~ 259°, β ~ +8°, which are observed by Ulysses and Galileo to be flowing into the heliosphere. In the aligned-grain region, the strength of polarization anticorrelates with ecliptic latitude, indicating that the magnetic wall is predominantly at negative ecliptic latitudes. An extension of the magnetic wall to β < 0°, formed by the interstellar magnetic field BIS draped over the heliosphere, is consistent with predictions by Linde (1998). A consistent interpretation follows if the maximum-polarization region traces the heliosphere magnetic wall in a direction approximately perpendicular to BIS, while the region of best-aligned dust samples the region where BIS drapes smoothly over the heliosphere with maximum compression. These data are consistent with BIS being tilted by 60° with respect to the ecliptic plane and parallel to the Galactic plane. Interstellar dust grains captured in the heliosheath may also introduce a weak, but important, large-scale contaminant for the cosmic microwave background signal with a symmetry consistent with the relative tilts of BIS and the ecliptic.
The Importance of Motivation in the Typewriting Classroom
ERIC Educational Resources Information Center
Jacks, Mary L.
1976-01-01
If a typing teacher makes maximum use of intrinsic rewards, it will not be necessary to use many extrinsic motivational devices. The implications of Maslow's "hierarchy of needs" for teachers of adolescents, and the basic motivational principles developed by Rowe are presented. (Author/AJ)
ERIC Educational Resources Information Center
Natale, Joseph L.
This chapter of "Principles of School Business Management" discusses the effective management of purchasing processes in a school district. These processes include obtaining materials, supplies, and equipment of maximum value for the least expense, and receiving, storing, and distributing the items obtained. The chapter opens with an overview of…
Optimal Control for Stochastic Delay Evolution Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Qingxin, E-mail: mqx@hutc.zj.cn; Shen, Yang, E-mail: skyshen87@gmail.com
2016-08-15
In this paper, we investigate a class of infinite-dimensional optimal control problems, where the state equation is given by a stochastic delay evolution equation with random coefficients, and the corresponding adjoint equation is given by an anticipated backward stochastic evolution equation. We first prove continuous dependence theorems for stochastic delay evolution equations and anticipated backward stochastic evolution equations, and show the existence and uniqueness of solutions to anticipated backward stochastic evolution equations. Then we establish necessary and sufficient conditions for optimality of the control problem in the form of Pontryagin's maximum principle. To illustrate the theoretical results, we apply the stochastic maximum principle to study two examples: an infinite-dimensional linear-quadratic control problem with delay, and an optimal control of a Dirichlet problem for a stochastic partial differential equation with delay. Further applications of the two examples to a Cauchy problem for a controlled linear stochastic partial differential equation and an optimal harvesting problem are also considered.
Implications of the principle of maximum conformality for the QCD strong coupling
Deur, Alexandre; Shen, Jian-Ming; Wu, Xing-Gang; ...
2017-08-14
The Principle of Maximum Conformality (PMC) provides scale-fixed perturbative QCD predictions which are independent of the choice of the renormalization scheme, as well as the choice of the initial renormalization scale. In this article, we test the PMC by comparing its predictions for the strong coupling $\alpha^{s}_{g_1}(Q)$, defined from the Bjorken sum rule, with predictions using conventional pQCD scale-setting. The two results are found to be compatible with each other and with the available experimental data. However, the PMC provides a significantly more precise determination, although its domain of applicability ($Q \gtrsim 1.5$ GeV) does not extend to as small values of momentum transfer as that of a conventional pQCD analysis ($Q \gtrsim 1$ GeV). In conclusion, we suggest that the PMC range of applicability could be improved by a modified intermediate scheme choice or by using a single effective PMC scale.
Principle of maximum entropy for reliability analysis in the design of machine components
NASA Astrophysics Data System (ADS)
Zhang, Yimin
2018-03-01
We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
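The PME selection step can be illustrated with Jaynes' classic dice example rather than the paper's component models (a hedged sketch; the constraint value and solution method here are assumptions for illustration): among all distributions over die faces consistent with a prescribed mean, the maximum-entropy distribution is exponential in the face value, and its Lagrange multiplier can be found by bisection.

```python
import numpy as np

# Maximum entropy over faces 1..6 subject to a prescribed mean of 4.5.
faces = np.arange(1, 7)
target_mean = 4.5

# The maxent solution has the form p_i ∝ exp(lam * x_i); the implied mean
# increases monotonically with lam, so bisect on lam.
lo, hi = -10.0, 10.0
for _ in range(100):
    lam = 0.5 * (lo + hi)
    p = np.exp(lam * faces)
    p /= p.sum()
    if p @ faces < target_mean:
        lo = lam
    else:
        hi = lam

print(np.round(p, 4))               # maxent probabilities, tilted to high faces
print(round(float(p @ faces), 3))   # → 4.5
```

The same logic extends to the paper's setting: the more moments one constrains (the abstract uses the first four moments of the state function), the more structured the least-biased PDF becomes.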
NASA Technical Reports Server (NTRS)
Jones, David J.; Kurath, Peter
1988-01-01
Fully reversed uniaxial strain-controlled fatigue tests were performed on smooth cylindrical specimens made of 304 stainless steel. Fatigue life data and cracking observations for uniaxial tests were compared with life data and cracking behavior observed in fully reversed torsional tests. It was determined that the product of the maximum principal strain amplitude and the maximum principal stress provided the best correlation of fatigue lives for these two loading conditions. Implementation of this parameter is in agreement with observed physical damage, and it accounts for the variation of stress-strain response, which is unique to specific loading conditions. Biaxial fatigue tests were conducted on tubular specimens employing both in-phase and out-of-phase tension-torsion cyclic strain paths. Cracking observations indicated that the physical damage which occurred in the biaxial tests was similar to the damage observed in uniaxial and torsional tests. The Smith, Watson, and Topper parameter was then extended to predict the fatigue lives resulting from the more complex loading conditions.
Reinterpreting maximum entropy in ecology: a null hypothesis constrained by ecological mechanism.
O'Dwyer, James P; Rominger, Andrew; Xiao, Xiao
2017-07-01
Simplified mechanistic models in ecology have been criticised for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains: what principle tells us which state variables to constrain? Here we attempt to solve both problems simultaneously, by translating a given set of mechanisms into the state variables to be used in MaxEnt, and then using this MaxEnt theory as a null model against which to compare mechanistic predictions. In particular, we identify the sufficient statistics needed to parametrise a given mechanistic model from data and use them as MaxEnt constraints. Our approach isolates exactly what mechanism is telling us over and above the state variables alone. © 2017 John Wiley & Sons Ltd/CNRS.
Saha, Ranajit; Pan, Sudip; Chattaraj, Pratim K
2016-11-05
The validity of the maximum hardness principle (MHP) is tested in the cases of 50 chemical reactions, most of which are organic in nature and exhibit anomeric effect. To explore the effect of the level of theory on the validity of MHP in an exothermic reaction, B3LYP/6-311++G(2df,3pd) and LC-BLYP/6-311++G(2df,3pd) (def2-QZVP for iodine and mercury) levels are employed. Different approximations like the geometric mean of hardness and combined hardness are considered in case there are multiple reactants and/or products. It is observed that, based on the geometric mean of hardness, while 82% of the studied reactions obey the MHP at the B3LYP level, 84% of the reactions follow this rule at the LC-BLYP level. Most of the reactions possess the hardest species on the product side. A 50% null hypothesis is rejected at a 1% level of significance.
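The bookkeeping behind an MHP check can be sketched in a few lines (the hardness values below are made-up illustrations, not the paper's computed data): when a reaction side contains several species, the abstract's geometric-mean approximation combines their hardnesses, and the MHP is judged satisfied if the product side is harder.

```python
import math

def geometric_mean(hardnesses):
    """Geometric mean of per-species hardness values."""
    return math.prod(hardnesses) ** (1.0 / len(hardnesses))

# Hypothetical hardnesses (eV) for a two-reactant, two-product reaction.
reactants = [4.1, 5.0]
products = [5.2, 4.8]

obeys_mhp = geometric_mean(products) > geometric_mean(reactants)
print(obeys_mhp)  # → True
```

In the paper this comparison is repeated across 50 reactions at two levels of theory, yielding the quoted 82% (B3LYP) and 84% (LC-BLYP) compliance rates.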
Biological evolution of replicator systems: towards a quantitative approach.
Martin, Osmel; Horvath, J E
2013-04-01
The aim of this work is to study the features of a simple replicator chemical model of the relation between kinetic stability and entropy production under the action of external perturbations. We quantitatively explore the different paths leading to evolution in a toy model where two independent replicators compete for the same substrate. To do so, the scenario described originally by Pross (J Phys Org Chem 17:312-316, 2004) is revisited and new criteria to define kinetic stability are proposed. Our results suggest that fast replicator populations are continually favored by strong stochastic environmental fluctuations capable of determining the global population, such fluctuations being assumed to be the only acting evolutionary force. We demonstrate that the process is continually driven by strong perturbations only, and that population crashes may be useful proxies for these catastrophic environmental fluctuations. As expected, such behavior is particularly enhanced under very large-scale perturbations, suggesting a likely dynamical footprint in the recovery patterns of new species after mass extinction events in the Earth's geological past. Furthermore, the hypothesis that natural selection always favors the faster processes may give theoretical support to different studies that claim the applicability of maximum principles like the Maximum Metabolic Flux (MMF) or the Maximum Entropy Production Principle (MEPP), seen as the main goal of biological evolution.
Neurogenic Orofacial Weakness and Speech in Adults With Dysarthria
Makashay, Matthew J.; Helou, Leah B.; Clark, Heather M.
2017-01-01
Purpose: This study compared orofacial strength between adults with dysarthria and neurologically normal (NN) matched controls. In addition, orofacial muscle weakness was examined for potential relationships to speech impairments in adults with dysarthria. Method: Matched groups of 55 adults with dysarthria and 55 NN adults generated maximum pressure (Pmax) against an air-filled bulb during lingual elevation, protrusion and lateralization, and buccodental and labial compressions. These orofacial strength measures were compared with speech intelligibility, perceptual ratings of speech, articulation rate, and fast syllable-repetition rate. Results: The dysarthria group demonstrated significantly lower orofacial strength than the NN group on all tasks. Lingual strength correlated moderately, and buccal strength weakly, with most ratings of speech deficits. Speech intelligibility was not sensitive to dysarthria severity. Individuals with severely reduced anterior lingual elevation Pmax (< 18 kPa) had normal to profoundly impaired sentence intelligibility (99%–6%) and moderately to severely impaired speech (26%–94% articulatory imprecision; 33%–94% overall severity). Conclusions: Results support the presence of orofacial muscle weakness in adults with dysarthrias of varying etiologies but reinforce tenuous links between orofacial strength and speech production disorders. By examining individual data, preliminary evidence emerges to suggest that speech, but not necessarily intelligibility, is likely to be impaired when lingual weakness is severe. PMID:28763804
More about unphysical zeroes in quark mass matrices
NASA Astrophysics Data System (ADS)
Emmanuel-Costa, David; González Felipe, Ricardo
2017-01-01
We look for all weak bases that lead to texture zeroes in the quark mass matrices and contain a minimal number of parameters in the framework of the standard model. Since there are ten physical observables, namely, six nonvanishing quark masses, three mixing angles and one CP phase, the maximum number of texture zeroes in both quark sectors is altogether nine. The nine zero entries can only be distributed between the up- and down-quark sectors in matrix pairs with six and three texture zeroes or five and four texture zeroes. In the weak basis where a quark mass matrix is nonsingular and has six zeroes in one sector, we find that there are 54 matrices with three zeroes in the other sector, obtainable through right-handed weak basis transformations. It is also found that all pairs composed of a nonsingular matrix with five zeroes and a nonsingular and nondecoupled matrix with four zeroes simply correspond to a weak basis choice. Without any further assumptions, none of these pairs of up- and down-quark mass matrices has physical content. It is shown that all non-weak-basis pairs of quark mass matrices that contain nine zeroes are not compatible with current experimental data. The particular case of the so-called nearest-neighbour-interaction pattern is also discussed.
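The counting behind the 6 + 3 split can be illustrated with a small brute-force check (a sketch under the assumption that generic random entries stand in for arbitrary nonzero matrix elements; this is not the paper's weak-basis analysis): among all 84 ways of placing six texture zeroes in a 3x3 mass matrix, only the six permutation-type patterns leave the matrix nonsingular, which is why six is the maximum number of zeroes in a nonsingular sector.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
viable = 0
for pattern in itertools.combinations(range(9), 6):  # positions forced to zero
    m = rng.uniform(1.0, 2.0, size=9)                # generic nonzero entries
    m[list(pattern)] = 0.0
    if abs(np.linalg.det(m.reshape(3, 3))) > 1e-12:  # still nonsingular?
        viable += 1
print(viable)  # → 6
```

The six surviving patterns are exactly those whose three nonzero entries occupy one slot per row and per column, i.e. permutation patterns.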
Using a Simple Neural Network to Delineate Some Principles of Distributed Economic Choice.
Balasubramani, Pragathi P; Moreno-Bote, Rubén; Hayden, Benjamin Y
2018-01-01
The brain uses a mixture of distributed and modular organization to perform computations and generate appropriate actions. While the principles under which the brain might perform computations using modular systems have been more amenable to modeling, the principles by which the brain might make choices using distributed principles have not been explored. Our goal in this perspective is to delineate some of those distributed principles using a neural network method and use its results as a lens through which to reconsider some previously published neurophysiological data. To allow for direct comparison with our own data, we trained the neural network to perform binary risky choices. We find that value correlates are ubiquitous and are always accompanied by non-value information, including spatial information (i.e., no pure value signals). Evaluation, comparison, and selection were not distinct processes; indeed, value signals even in the earliest stages contributed directly, albeit weakly, to action selection. There was no place, other than at the level of action selection, at which dimensions were fully integrated. No units were specialized for specific offers; rather, all units encoded the values of both offers in an anti-correlated format, thus contributing to comparison. Individual network layers corresponded to stages in a continuous rotation from input to output space rather than to functionally distinct modules. While our network is likely to not be a direct reflection of brain processes, we propose that these principles should serve as hypotheses to be tested and evaluated for future studies.
ERIC Educational Resources Information Center
Machado, Marco; Willardson, Jeffrey M.; Silva, Dailson P.; Frigulha, Italo C.; Koch, Alexander J.; Souza, Sergio C.
2012-01-01
In the current study, we examined the relationship between serum creatine kinase (CK) activity following upper body resistance exercise with a 1- or 3-min rest between sets. Twenty men performed two sessions, each consisting of four sets with a 10-repetition maximum load. The results demonstrated significantly greater volume for the 3-min…
NASA Astrophysics Data System (ADS)
Boehm, R. F.
1985-09-01
A review of thermodynamic principles is given in an effort to see if these concepts may indicate possibilities for improvements in solar central receiver power plants. Aspects related to rate limitations in cycles, thermodynamic availability of solar radiation, and sink temperature considerations are noted. It appears that considerably higher instantaneous plant efficiencies are possible by raising the maximum temperature and lowering the minimum temperature of the cycles. Of course, many practical engineering problems will have to be solved to realize the promised benefits.
Killeen, Peter R.; Sitomer, Matthew T.
2008-01-01
Mathematical Principles of Reinforcement (MPR) is a theory of reinforcement schedules. This paper reviews the origin of the principles constituting MPR: arousal, association and constraint. Incentives invigorate responses, in particular those preceding and predicting the incentive. The process that generates an associative bond between stimuli, responses and incentives is called coupling. The combination of arousal and coupling constitutes reinforcement. Models of coupling play a central role in the evolution of the theory. The time required to respond constrains the maximum response rates, and generates a hyperbolic relation between rate of responding and rate of reinforcement. Models of control by ratio schedules are developed to illustrate the interaction of the principles. Correlations among parameters are incorporated into the structure of the models, and assumptions that were made in the original theory are refined in light of current data. PMID:12729968
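The constraint principle mentioned above can be sketched with a generic saturating-rate function (the functional form and parameter values here are illustrative assumptions, not MPR's fitted model): if each response occupies a fixed duration delta, the emitted response rate is bounded by 1/delta, and the relation between response rate and reinforcement-driven demand is hyperbolic.

```python
def response_rate(a, r, delta):
    """Hyperbolic rate: arousal-driven demand a*r, capped by response time delta."""
    return a * r / (1.0 + a * r * delta)

delta = 0.25  # hypothetical seconds per response; ceiling rate = 1/delta = 4
for r in (0.1, 1.0, 10.0, 100.0):
    print(round(response_rate(2.0, r, delta), 3))
```

At low reinforcement rates the response rate grows almost linearly; at high rates it approaches the ceiling 1/delta, reproducing the hyperbolic shape the theory derives.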
Teaching the principles of statistical dynamics
Ghosh, Kingshuk; Dill, Ken A.; Inamdar, Mandar M.; Seitaridou, Effrosyni; Phillips, Rob
2012-01-01
We describe a simple framework for teaching the principles that underlie the dynamical laws of transport: Fick’s law of diffusion, Fourier’s law of heat flow, the Newtonian viscosity law, and the mass-action laws of chemical kinetics. In analogy with the way that the maximization of entropy over microstates leads to the Boltzmann distribution and predictions about equilibria, maximizing a quantity that E. T. Jaynes called “caliber” over all the possible microtrajectories leads to these dynamical laws. The principle of maximum caliber also leads to dynamical distribution functions that characterize the relative probabilities of different microtrajectories. A great source of recent interest in statistical dynamics has resulted from a new generation of single-particle and single-molecule experiments that make it possible to observe dynamics one trajectory at a time. PMID:23585693
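The "one trajectory at a time" viewpoint at the end of this abstract can be sketched with a simple simulation (an illustrative example with made-up parameters, not from the paper): averaging many single-particle random walks recovers the diffusive mean-square-displacement growth that Fick's law describes macroscopically.

```python
import numpy as np

rng = np.random.default_rng(1)
steps = rng.choice([-1, 1], size=(5000, 400))  # 5000 walkers, 400 unit steps
paths = np.cumsum(steps, axis=1)               # individual microtrajectories
msd = np.mean(paths**2, axis=0)                # ensemble mean-square displacement

# For an unbiased unit-step walk, <x^2(t)> = t, i.e. diffusive growth.
t = np.arange(1, 401)
slope = float(np.polyfit(t, msd, 1)[0])
print(round(slope, 2))  # close to 1
```

Individual trajectories fluctuate wildly; only the ensemble statistics obey the smooth transport law, which is the distinction the caliber framework formalizes.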
Lezon, Timothy R; Banavar, Jayanth R; Cieplak, Marek; Maritan, Amos; Fedoroff, Nina V
2006-12-12
We describe a method based on the principle of entropy maximization to identify the gene interaction network with the highest probability of giving rise to experimentally observed transcript profiles. In its simplest form, the method yields the pairwise gene interaction network, but it can also be extended to deduce higher-order interactions. Analysis of microarray data from genes in Saccharomyces cerevisiae chemostat cultures exhibiting energy metabolic oscillations identifies a gene interaction network that reflects the intracellular communication pathways that adjust cellular metabolic activity and cell division to the limiting nutrient conditions that trigger metabolic oscillations. The success of the present approach in extracting meaningful genetic connections suggests that the maximum entropy principle is a useful concept for understanding living systems, as it is for other complex, nonequilibrium systems.
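A minimal sketch of the pairwise entropy-maximization idea, on synthetic data rather than the yeast microarray set (the network and sample counts below are assumptions): for real-valued profiles constrained only by their means and covariances, the maximum-entropy model is Gaussian, so pairwise interactions correspond to entries of the inverse covariance (precision) matrix and can be recovered from samples.

```python
import numpy as np

# Hypothetical 3-gene network: genes 0-1 and 1-2 interact, 0-2 do not.
true_precision = np.array([[ 2.0, -0.8,  0.0],
                           [-0.8,  2.0, -0.5],
                           [ 0.0, -0.5,  1.5]])
cov = np.linalg.inv(true_precision)

# Draw synthetic "transcript profiles" from the implied Gaussian model.
rng = np.random.default_rng(2)
profiles = rng.multivariate_normal(np.zeros(3), cov, size=20000)

# Pairwise maxent inference: invert the sample covariance.
inferred = np.linalg.inv(np.cov(profiles.T))
print(np.round(inferred, 1))
```

With enough samples the inferred precision matrix reproduces the interaction pattern, including the zero for the non-interacting pair; higher-order extensions constrain moments beyond the covariance.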
NASA Astrophysics Data System (ADS)
Kruempelmann, J.; Mariappan, C. R.; Schober, C.; Roling, B.
2010-12-01
We have measured potential-dependent interfacial capacitances of two Na-Ca-phosphosilicate glasses and of an AgI-doped silver borate glass between ion-blocking Pt electrodes. An asymmetric electrode configuration with highly dissimilar electrode areas on both faces of the glass samples allowed us to determine the capacitance at the small-area electrode. Using equivalent circuit fitting we extract potential-dependent double-layer capacitances. The potential-dependent anodic capacitance exhibits a weak maximum and drops strongly at higher potentials. The cathodic capacitance exhibits a more pronounced maximum, this maximum being responsible for the maximum in the total capacitance observed in measurements in a symmetrical electrode configuration. The capacitance maxima of the Na-Ca phosphosilicate glasses show up at higher electrode potentials than the maxima of the AgI-doped silver borate glass. Remarkably, for both types of glasses, the potential of the cathodic capacitance maximum is closely related to the activation energy of the bulk ion transport. We compare our results to recent theoretical predictions by Shklovskii and co-workers.
The Mentally Retarded in Sweden.
ERIC Educational Resources Information Center
Grunewald, Karl
Described are residential and educational services provided for mentally retarded (MR) children and adults in Sweden. Normalization is the focus of the services, which make maximum use of mental and physical capacities to reduce the handicap of mental retardation. Described are general principles, and four stages involving development of services…
Four-phonon scattering significantly reduces intrinsic thermal conductivity of solids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Tianli; Lindsay, Lucas R.; Ruan, Xiulin
2017-10-27
We rigorously calculate intrinsic phonon thermal resistance from four-phonon scattering processes using first-principles Boltzmann transport methods. Fundamental questions concerning the role of higher-order scattering at high temperature and in systems with otherwise weak intrinsic scattering are answered. Using diamond and silicon as benchmark materials, the predicted thermal conductivity including intrinsic four-phonon resistance gives significantly better agreement with measurements at high temperatures than previous first-principles calculations. In the predicted ultrahigh-thermal-conductivity material, zincblende BAs, four-phonon scattering is strikingly strong when compared to three-phonon processes, even at room temperature, as the latter have an extremely limited phase space for scattering. Including four-phonon thermal resistance reduces the predicted thermal conductivity of BAs from 2200 W/m-K to 1400 W/m-K.
The problem with rescue medicine.
Jecker, Nancy S
2013-02-01
Is there a rational and ethical basis for efforts to rescue individuals in dire straits? When does rescue have ethical support, and when does it reflect an irrational impulse? This paper defines a Rule of Rescue and shows its intuitive appeal. It then proceeds to argue that this rule lacks support from standard principles of justice and from ethical principles more broadly, and should be rejected in many situations. I distinguish between agent-relative and agent-neutral reasons, and argue that the Rule of Rescue qualifies only in a narrow range of cases where agent-relative considerations apply. I conclude that it would be wise to set aside the Rule of Rescue in many cases, especially those involving public policies, where it has only weak normative justification. The broader implications of this analysis are noted.
Liang, Wen-Ye; Wang, Shuang; Li, Hong-Wei; Yin, Zhen-Qiang; Chen, Wei; Yao, Yao; Huang, Jing-Zheng; Guo, Guang-Can; Han, Zheng-Fu
2014-01-01
We have demonstrated a proof-of-principle experiment of reference-frame-independent phase coding quantum key distribution (RFI-QKD) over an 80-km optical fiber. After considering the finite-key bound, we still achieve a distance of 50 km. In this scenario, the phases of the basis states are related by a slowly time-varying transformation. Furthermore, we developed and realized a new decoy state method for RFI-QKD systems with weak coherent sources to counteract the photon-number-splitting attack. With the help of a reference-frame-independent protocol and a Michelson interferometer with Faraday rotator mirrors, our system is rendered immune to the slow phase changes of the interferometer and the polarization disturbances of the channel, making the procedure very robust. PMID:24402550
Four-phonon scattering significantly reduces intrinsic thermal conductivity of solids
Feng, Tianli; Lindsay, Lucas R.; Ruan, Xiulin
2017-10-27
We rigorously calculate intrinsic phonon thermal resistance from four-phonon scattering processes using first-principles Boltzmann transport methods. Fundamental questions concerning the role of higher-order scattering at high temperature and in systems with otherwise weak intrinsic scattering are answered. Using diamond and silicon as benchmark materials, the predicted thermal conductivity including intrinsic four-phonon resistance gives significantly better agreement with measurements at high temperatures than previous first-principles calculations. In the predicted ultrahigh-thermal-conductivity material, zincblende BAs, four-phonon scattering is strikingly strong when compared to three-phonon processes, even at room temperature, as the latter have an extremely limited phase space for scattering. Including four-phonon thermal resistance reduces the predicted thermal conductivity of BAs from 2200 W/m-K to 1400 W/m-K.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Heidi, E-mail: heidi.mwalker@yahoo.ca; Sinclair, A. John, E-mail: john.sinclair@ad.umanitoba.ca; Spaling, Harry, E-mail: harry.spaling@kingsu.ca
Meaningful public engagement is a challenging, but promising, feature of strategic environmental assessment (SEA) due to its potential for integrating sustainability principles into policies, plans and programs in developing countries such as Kenya. This research examined two selected SEA case studies to identify the extent of participation, learning outcomes attributable to participation, and whether any learning outcomes led to social action for sustainability at the community level. Strengths across the two cases were the inclusion of marginalized populations and consideration of socio-economic concerns. Consistent weaknesses included inadequate notice, document inaccessibility, lack of feedback and communication, and late analysis of alternatives. Despite some learning conditions being unfulfilled, examples of instrumental, communicative, and transformative learning were identified through a focus group and semi-structured interviews with community participants and public officials. Some of these learning outcomes led to individual and social actions that contribute to sustainability. Highlights: • The strengths and weaknesses of Kenyan SEA public participation processes were identified. • Multiple deficiencies in the SEA process likely frustrate meaningful public engagement. • Participant learning was observed despite process weaknesses. • Participant learning can lead to action for sustainability at the community level.
Analysis of the autonomous problem about coupled active non-Newtonian multi-seepage in sparse medium
NASA Astrophysics Data System (ADS)
Deng, Shuxian; Li, Hongen
2017-10-01
The flow field of a non-Newtonian fluid in a sparse medium was analyzed by the computational fluid dynamics (CFD) method. The results show that the axial and radial velocities of the non-Newtonian fluid are larger than those of a Newtonian fluid, due to the coupling between the viscosity of the non-Newtonian fluid and the shear rate, while the tangential velocity is smaller than that of the Newtonian fluid; these differences give non-Newtonian fluids a special character in sparse media. The influence of the weight function on the global existence and blow-up of the problem is discussed by analyzing the non-Newtonian percolation equation with nonlocal and weighted nonlocal Dirichlet boundary conditions. Starting from the non-Newtonian percolation equation, we define the weak solution of the problem and establish its local existence. We then construct a test function and prove the weak comparison principle using the Gronwall inequality. Global existence and blow-up are analyzed by constructing upper and lower solutions.
A unified approach to computational drug discovery.
Tseng, Chih-Yuan; Tuszynski, Jack
2015-11-01
It has been reported that a slowdown in the development of new medical therapies is affecting clinical outcomes. The FDA has thus initiated the Critical Path Initiative project investigating better approaches. We review the current strategies in drug discovery and focus on the advantages of the maximum entropy method being introduced in this area. The maximum entropy principle is derived from statistical thermodynamics and has been demonstrated to be an inductive inference tool. We propose a unified method to drug discovery that hinges on robust information processing using entropic inductive inference. Increasingly, applications of maximum entropy in drug discovery employ this unified approach and demonstrate the usefulness of the concept in the area of pharmaceutical sciences. Copyright © 2015. Published by Elsevier Ltd.
The iterative thermal emission method: A more implicit modification of IMC
NASA Astrophysics Data System (ADS)
Long, A. R.; Gentile, N. A.; Palmer, T. S.
2014-11-01
For over 40 years, the Implicit Monte Carlo (IMC) method has been used to solve challenging problems in thermal radiative transfer. These problems typically contain regions that are optically thick and diffusive, as a consequence of the high degree of "pseudo-scattering" introduced to model the absorption and reemission of photons from a tightly-coupled, radiating material. IMC has several well-known features that could be improved: a) it can be prohibitively computationally expensive, b) it introduces statistical noise into the material and radiation temperatures, which may be problematic in multiphysics simulations, and c) under certain conditions, solutions can be nonphysical, in that they violate a maximum principle, where IMC-calculated temperatures can be greater than the maximum temperature used to drive the problem. We have developed a variant of IMC called iterative thermal emission IMC (ITE IMC), which is designed to have a reduced parameter space in which the maximum principle is violated. ITE IMC is a more implicit version of IMC in that it uses the information obtained from a series of IMC photon histories to improve the estimate for the end-of-time-step material temperature during a time step. A better estimate of the end-of-time-step material temperature allows for a more implicit estimate of other temperature-dependent quantities: opacity, heat capacity, Fleck factor (the probability that a photon absorbed during a time step is not reemitted) and the Planckian emission source. We have verified the ITE IMC method against 0-D and 1-D analytic solutions and problems from the literature. These results are compared with traditional IMC. We perform an infinite medium stability analysis of ITE IMC and show that it is slightly more numerically stable than traditional IMC. We find that significantly larger time steps can be used with ITE IMC without violating the maximum principle, especially in problems with non-linear material properties.
The ITE IMC method does however yield solutions with larger variance because each sub-step uses a different Fleck factor (even at equilibrium).
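For background (standard IMC material, not stated in this abstract; conventions for the heat-capacity normalization vary between references), the Fleck factor has the form:

```latex
% Fleck factor, one common convention:
f \;=\; \frac{1}{1 + \alpha\,\beta\,c\,\Delta t\,\sigma_P},
\qquad
\beta \;=\; \frac{4 a T^3}{c_v},
```

where α ∈ [1/2, 1] is the time-centering parameter, c the speed of light, Δt the time step, σ_P the Planck-mean opacity, a the radiation constant, and c_v the material heat capacity per unit volume. A photon absorbed during the step is effectively reemitted within the step with probability 1 − f, which is the "pseudo-scattering" the abstract refers to.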
The iterative thermal emission method: A more implicit modification of IMC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, A.R., E-mail: arlong.ne@tamu.edu; Gentile, N.A.; Palmer, T.S.
2014-11-15
For over 40 years, the Implicit Monte Carlo (IMC) method has been used to solve challenging problems in thermal radiative transfer. These problems typically contain regions that are optically thick and diffusive, as a consequence of the high degree of “pseudo-scattering” introduced to model the absorption and reemission of photons from a tightly-coupled, radiating material. IMC has several well-known features that could be improved: a) it can be prohibitively computationally expensive, b) it introduces statistical noise into the material and radiation temperatures, which may be problematic in multiphysics simulations, and c) under certain conditions, solutions can be nonphysical, in that they violate a maximum principle, where IMC-calculated temperatures can be greater than the maximum temperature used to drive the problem. We have developed a variant of IMC called iterative thermal emission IMC, which is designed to have a reduced parameter space in which the maximum principle is violated. ITE IMC is a more implicit version of IMC in that it uses the information obtained from a series of IMC photon histories to improve the estimate for the end of time step material temperature during a time step. A better estimate of the end of time step material temperature allows for a more implicit estimate of other temperature-dependent quantities: opacity, heat capacity, Fleck factor (probability that a photon absorbed during a time step is not reemitted) and the Planckian emission source. We have verified the ITE IMC method against 0-D and 1-D analytic solutions and problems from the literature. These results are compared with traditional IMC. We perform an infinite medium stability analysis of ITE IMC and show that it is slightly more numerically stable than traditional IMC. We find that significantly larger time steps can be used with ITE IMC without violating the maximum principle, especially in problems with non-linear material properties.
The ITE IMC method does however yield solutions with larger variance because each sub-step uses a different Fleck factor (even at equilibrium).
NASA Astrophysics Data System (ADS)
Leyva, R.; Artillan, P.; Cabal, C.; Estibals, B.; Alonso, C.
2011-04-01
The article studies the dynamic performance of a family of maximum power point tracking (MPPT) circuits used for photovoltaic generation. It revisits the sinusoidal extremum seeking control (ESC) technique, which can be considered a particular subgroup of the Perturb and Observe algorithms. Sinusoidal ESC consists of adding a small sinusoidal disturbance to the input and processing the perturbed output to drive the operating point to its maximum. The output processing involves a synchronous multiplication and a filtering stage. The filter instance determines the dynamic performance of an MPPT based on the sinusoidal ESC principle. The approach uses the well-known root-locus method to give insight into the damping and settling time of the maximum-seeking waveforms. The article shows the transient waveforms for three different filter instances to illustrate the approach. Finally, an experimental prototype corroborates the dynamic analysis.
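The perturb-demodulate-filter loop described above can be sketched in a few lines. This is an illustrative toy, not the article's circuit: the quadratic power curve, every gain, and every time constant below are assumptions chosen only to show the mechanism (sine probe, washout filter, synchronous multiplication, low-pass filtering, integration).

```python
import math

# Minimal sinusoidal extremum-seeking loop (illustrative sketch; all
# parameters are assumed, not taken from the article).
def esc_track(v0=20.0, v_star=30.0, amp=0.5, freq=50.0,
              gain=4.0, tau_hp=0.05, tau_lp=0.05, dt=1e-4, t_end=10.0):
    def power(v):                               # assumed PV-like power curve
        return 100.0 - 0.5 * (v - v_star) ** 2

    omega = 2 * math.pi * freq
    v_hat, y_bar, grad = v0, power(v0), 0.0
    for n in range(int(t_end / dt)):
        probe = math.sin(omega * n * dt)
        y = power(v_hat + amp * probe)          # perturbed plant output
        y_bar += (dt / tau_hp) * (y - y_bar)    # washout (high-pass) filter
        demod = (y - y_bar) * probe             # synchronous multiplication
        grad += (dt / tau_lp) * (demod - grad)  # low-pass -> gradient estimate
        v_hat += gain * grad * dt               # integrate toward the maximum
    return v_hat

v_final = esc_track()
print(f"tracked operating point: {v_final:.2f} (true maximum at 30.0)")
```

The DC component of the demodulated signal is proportional to the local slope of the power curve, so the integrator drives the operating point toward the maximum; the low-pass filter's time constant is exactly the "filter instance" whose choice the article analyzes via root locus.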
29 CFR 778.418 - Pieceworkers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... applicable maximum hours standard for the particular workweek; and (4) The compensation paid for the overtime... Principles Computing Overtime Pay on the Rate Applicable to the Type of Work Performed in Overtime Hours... the basis of a piece rate for the work performed during nonovertime hours may agree with his employer...
A robot control formalism based on an information quality concept
NASA Technical Reports Server (NTRS)
Ekman, A.; Torne, A.; Stromberg, D.
1994-01-01
A relevance measure based on Jaynes' maximum entropy principle is introduced. Information quality is the conjunction of accuracy and relevance. A formalism based on information quality is developed for one-agent applications. The robot requires a well-defined working environment in which the properties of each object are accurately specified.
ERIC Educational Resources Information Center
Formann, Anton K.
1986-01-01
It is shown that for equal parameters explicit formulas exist, facilitating the application of the Newton-Raphson procedure to estimate the parameters in the Rasch model and related models according to the conditional maximum likelihood principle. (Author/LMO)
40 CFR 30.27 - Allowable costs.
Code of Federal Regulations, 2011 CFR
2011-07-01
... of appendix E of 45 CFR part 74, “Principles for Determining Costs Applicable to Research and... consultants retained by recipients or by a recipient's contractors or subcontractors to the maximum daily rate... designated individuals with specialized skills who are paid at a daily or hourly rate. This rate does not...
Tsallis Entropy and the Transition to Scaling in Fragmentation
NASA Astrophysics Data System (ADS)
Sotolongo-Costa, Oscar; Rodriguez, Arezky H.; Rodgers, G. J.
2000-12-01
By using the maximum entropy principle with Tsallis entropy we obtain a fragment size distribution function which undergoes a transition to scaling. This distribution function reduces to those obtained by other authors using Shannon entropy. The treatment is easily generalisable to any process of fractioning with suitable constraints.
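The construction sketched in the abstract can be written generically. The mean-value constraint below is an assumption for illustration; the authors' actual constraint on fragment sizes may differ:

```latex
% Tsallis entropy and a generic maximum-entropy problem:
S_q = \frac{1 - \sum_i p_i^{\,q}}{q - 1},
\qquad \text{maximize } S_q \ \text{ s.t. } \ \sum_i p_i = 1,\quad \sum_i p_i\,\varepsilon_i = U .
% In one common convention the stationary distribution is the q-exponential:
p_i \;\propto\; \bigl[\,1 - (1-q)\,\beta\,\varepsilon_i\,\bigr]^{\frac{1}{1-q}}
\;\xrightarrow{\;q \to 1\;}\; e^{-\beta \varepsilon_i}.
```

In the limit q → 1 the q-exponential reduces to the Boltzmann exponential, consistent with the abstract's statement that the distribution reduces to those obtained with Shannon entropy.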
33 CFR 241.5 - Procedures for estimating the alternative cost-share.
Code of Federal Regulations, 2014 CFR
2014-07-01
... THE ARMY, DEPARTMENT OF DEFENSE FLOOD CONTROL COST-SHARING REQUIREMENTS UNDER THE ABILITY TO PAY.... Determine the maximum possible reduction in the level of non-Federal cost-sharing for any project. (1) Calculate the ratio of flood control benefits (developed using the Water Resources Council's Principles and...
33 CFR 241.5 - Procedures for estimating the alternative cost-share.
Code of Federal Regulations, 2011 CFR
2011-07-01
... THE ARMY, DEPARTMENT OF DEFENSE FLOOD CONTROL COST-SHARING REQUIREMENTS UNDER THE ABILITY TO PAY.... Determine the maximum possible reduction in the level of non-Federal cost-sharing for any project. (1) Calculate the ratio of flood control benefits (developed using the Water Resources Council's Principles and...
33 CFR 241.5 - Procedures for estimating the alternative cost-share.
Code of Federal Regulations, 2010 CFR
2010-07-01
... THE ARMY, DEPARTMENT OF DEFENSE FLOOD CONTROL COST-SHARING REQUIREMENTS UNDER THE ABILITY TO PAY.... Determine the maximum possible reduction in the level of non-Federal cost-sharing for any project. (1) Calculate the ratio of flood control benefits (developed using the Water Resources Council's Principles and...
33 CFR 241.5 - Procedures for estimating the alternative cost-share.
Code of Federal Regulations, 2013 CFR
2013-07-01
... THE ARMY, DEPARTMENT OF DEFENSE FLOOD CONTROL COST-SHARING REQUIREMENTS UNDER THE ABILITY TO PAY.... Determine the maximum possible reduction in the level of non-Federal cost-sharing for any project. (1) Calculate the ratio of flood control benefits (developed using the Water Resources Council's Principles and...
33 CFR 241.5 - Procedures for estimating the alternative cost-share.
Code of Federal Regulations, 2012 CFR
2012-07-01
... THE ARMY, DEPARTMENT OF DEFENSE FLOOD CONTROL COST-SHARING REQUIREMENTS UNDER THE ABILITY TO PAY.... Determine the maximum possible reduction in the level of non-Federal cost-sharing for any project. (1) Calculate the ratio of flood control benefits (developed using the Water Resources Council's Principles and...
Rippon, Gina; Jordan-Young, Rebecca; Kaiser, Anelis; Fine, Cordelia
2014-01-01
Neuroimaging (NI) technologies are having increasing impact in the study of complex cognitive and social processes. In this emerging field of social cognitive neuroscience, a central goal should be to increase the understanding of the interaction between the neurobiology of the individual and the environment in which humans develop and function. The study of sex/gender is often a focus for NI research, and may be motivated by a desire to better understand general developmental principles, mental health problems that show female-male disparities, and gendered differences in society. In order to ensure the maximum possible contribution of NI research to these goals, we draw attention to four key principles (overlap, mosaicism, contingency and entanglement) that have emerged from sex/gender research and that should inform NI research design, analysis and interpretation. We discuss the implications of these principles in the form of constructive guidelines and suggestions for researchers, editors, reviewers and science communicators.
Metz, Johan A Jacob; Staňková, Kateřina; Johansson, Jacob
2016-03-01
This paper should be read as an addendum to Dieckmann et al. (J Theor Biol 241:370-389, 2006) and Parvinen et al. (J Math Biol 67:509-533, 2013). Our goal is, using little more than high-school calculus, to (1) exhibit the form of the canonical equation (CE) of adaptive dynamics for classical life history problems, where the examples in Dieckmann et al. and Parvinen et al. are chosen such that they avoid a number of the problems that arise in this most relevant of applications, (2) derive the fitness gradient occurring in the CE from simple fitness return arguments, (3) show explicitly that setting said fitness gradient equal to zero results in the classical marginal value principle from evolutionary ecology, (4) show that the latter in turn is equivalent to Pontryagin's maximum principle, a well-known equivalence that in the literature is either given ex cathedra or proven with more advanced tools, (5) connect the classical optimisation arguments of life history theory a little better to real biology (Mendelian populations with separate sexes subject to an environmental feedback loop), and (6) make a minor improvement to the form of the CE for the examples in Dieckmann et al. and Parvinen et al.
The zero age main sequence of WIMP burners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fairbairn, Malcolm; Scott, Pat; Edsjoe, Joakim
2008-02-15
We modify a stellar structure code to estimate the effect upon the main sequence of the accretion of weakly-interacting dark matter onto stars and its subsequent annihilation. The effect upon the stars depends upon whether the energy generation rate from dark matter annihilation is large enough to shut off the nuclear burning in the star. Main sequence weakly-interacting massive particle (WIMP) burners look much like proto-stars moving on the Hayashi track, although they are in principle completely stable. We make some brief comments about where such stars could be found, how they might be observed, and more detailed simulations which are currently in progress. Finally we comment on whether or not it is possible to link the paradoxically hot, young stars found at the galactic center with WIMP burners.
Metric Properties of Relativistic Rotating Frames with Axial Symmetry
NASA Astrophysics Data System (ADS)
Torres, S. A.; Arenas, J. R.
2017-07-01
This abstract summarizes our poster contribution to the conference. We study the properties of an axially symmetric stationary gravitational field by considering the spacetime properties of a uniformly rotating frame and Einstein's Equivalence Principle (EEP). To this end, the weak-field and slow-rotation limit of the Kerr metric is determined by making a first-order perturbation to the metric of a rotating frame. We also show a local connection between the effects of centrifugal and Coriolis forces and the effects of an axially symmetric stationary weak gravitational field, by calculating the geodesic equations of a free particle. It is observed that these geodesics, by applying the EEP, are locally equivalent to the geodesic equations of a free particle in a rotating frame. Furthermore, some additional properties, such as the Lense-Thirring effect and the Sagnac effect, among others, are studied.
Finite element method for optimal guidance of an advanced launch vehicle
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Bless, Robert R.; Calise, Anthony J.; Leung, Martin
1992-01-01
A temporal finite element based on a mixed form of Hamilton's weak principle is summarized for optimal control problems. The resulting weak Hamiltonian finite element method is extended to allow for discontinuities in the states and/or discontinuities in the system equations. An extension of the formulation to allow for control inequality constraints is also presented. The formulation does not require element quadrature, and it produces a sparse system of nonlinear algebraic equations. To evaluate its feasibility for real-time guidance applications, this approach is applied to the trajectory optimization of a four-state, two-stage model with inequality constraints for an advanced launch vehicle. Numerical results for this model are presented and compared to results from a multiple-shooting code. The results show the accuracy and computational efficiency of the finite element method.
Ross, Jennifer L
2016-09-06
The inside of the cell is full of important, yet invisible species of molecules and proteins that interact weakly but couple together to have huge and important effects in many biological processes. Such "dark matter" inside cells remains mostly hidden, because our tools were developed to investigate strongly interacting species and folded proteins. Example dark-matter species include intrinsically disordered proteins, posttranslational states, ion species, and rare, transient, and weak interactions undetectable by biochemical assays. The dark matter of biology is likely to have multiple, vital roles to regulate signaling, rates of reactions, water structure and viscosity, crowding, and other cellular activities. We need to create new tools to image, detect, and understand these dark-matter species if we are to truly understand fundamental physical principles of biology. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
A Simple Power Law Governs Many Sensory Amplifications and Multisensory Enhancements.
Billock, Vincent A; Havig, Paul R
2018-05-16
When one sensory response occurs in the presence of a different sensory stimulation, the sensory response is often amplified. The variety of sensory enhancement data tends to obscure the underlying rules, but it has long been clear that weak signals are usually amplified more than strong ones (the Principle of Inverse Effectiveness). Here we show that for many kinds of sensory amplification, the underlying law is simple and elegant: the amplified response is a power law of the unamplified response, with a compressive exponent that amplifies weak signals more than strong. For both psychophysics and cortical electrophysiology, for both humans and animals, and for both sensory integration and enhancement within a sense, gated power law amplification (amplification of one sense triggered by the presence of a different sensory signal) is often sufficient to explain sensory enhancement.
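A toy illustration of the claim (the gain k = 2.0 and exponent a = 0.9 are invented for illustration, not fitted values from the paper): with a compressive exponent, gated power-law amplification boosts weak responses proportionally more than strong ones.

```python
# Gated power-law amplification sketch: in the presence of a second
# sensory signal, the response R is replaced by k * R**a with a
# compressive exponent a < 1 (illustrative parameter choices).
def amplified(response: float, k: float = 2.0, a: float = 0.9) -> float:
    return k * response ** a

for r in (1.0, 10.0, 100.0):
    gain = amplified(r) / r  # multiplicative enhancement factor
    print(f"unamplified {r:6.1f} -> amplified {amplified(r):8.2f} (gain {gain:.2f}x)")
```

The per-signal gain k * R**(a-1) falls monotonically with R, which is exactly the Principle of Inverse Effectiveness the abstract describes: weak signals are amplified more than strong ones.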
Nonequilibrium Thermodynamics in Biological Systems
NASA Astrophysics Data System (ADS)
Aoki, I.
2005-12-01
1. Respiration: Oxygen uptake by respiration in organisms decomposes macromolecules such as carbohydrates, proteins and lipids and liberates chemical energy of high quality, which is then used for chemical reactions and the motion of matter in organisms to support the lively order of structure and function. Finally, this chemical energy becomes heat energy of low quality and is discarded to the outside (dissipation function). Along with this heat energy, the entropy production that inevitably accompanies irreversibility is also discarded to the outside. Dissipation function and entropy production are estimated from respiration data. 2. Human body: From observed respiration (oxygen absorption) data, the entropy production in the human body can be estimated. Entropy production from 0 to 75 years of age has been obtained, and extrapolated to the fertilized egg (the beginning of human life) and to 120 years (the maximum human life span). Entropy production shows characteristic behavior over the human life span: an early rapid increase during the short growing phase and a later slow decrease during the long aging phase. It is proposed that this tendency is ubiquitous and constitutes a Principle of Organization in complex biotic systems. 3. Ecological communities: From the respiration data of eighteen aquatic communities, specific (i.e., per-biomass) entropy productions are obtained. They show a two-phase character with respect to trophic diversity: an early increase and a later decrease as trophic diversity increases. The trophic diversity in these aquatic ecosystems is shown to be positively correlated with the degree of eutrophication, and the degree of eutrophication is an "arrow of time" in the hierarchy of aquatic ecosystems. Hence specific entropy production has two phases: an early increase and a later decrease with time. 4. Entropy principle for living systems: The Second Law of Thermodynamics has been expressed as follows.
1) In isolated systems, entropy increases with time and approaches a maximum value. This is the well-known classical Clausius principle. 2) In open systems near equilibrium, entropy production always decreases with time, approaching a minimum stationary level. This is the minimum entropy production principle of Prigogine. These two principles are well established. However, living systems are neither isolated nor near equilibrium, so neither principle can be applied to them. What is the entropy principle for living systems? Answer: entropy production in living systems consists of multiple stages in time: early increasing, later decreasing and/or intermediate stages. This tendency is supported by various living systems.
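As a rough back-of-the-envelope illustration of how entropy production is estimated from respiration (the round numbers below are assumptions for illustration, not data from the study): metabolic heat discarded at body temperature carries entropy at a rate of roughly the heat output divided by the absolute temperature.

```python
# Entropy discarded to the surroundings per unit time, in W/K.
# Assumed round numbers: ~100 W resting metabolic heat, ~310 K body temperature.
def entropy_production_rate(heat_watts: float, temperature_kelvin: float) -> float:
    return heat_watts / temperature_kelvin

rate = entropy_production_rate(100.0, 310.0)
print(f"{rate:.3f} W/K")                       # roughly 0.3 W/K
print(f"{rate * 86400 / 1000:.1f} kJ/K per day")
```

Age-dependent oxygen-uptake data then turn this single estimate into the life-span curve (early rapid increase, later slow decrease) described in the abstract.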
Optical gain in an optically driven three-level Λ system in atomic Rb vapor
NASA Astrophysics Data System (ADS)
Ballmann, C. W.; Yakovlev, V. V.
2018-06-01
In this work, we report experimentally achieved optical gain of a weak probe beam in a three-level Λ system in a low-density rubidium vapor cell driven by a single pump beam. The maximum measured gain of the probe beam was about 0.12%. This work could lead to new approaches for enhancing molecular spectroscopy applications.
Jia, Feng; Lei, Yaguo; Shan, Hongkai; Lin, Jing
2015-01-01
The early fault characteristics of rolling element bearings carried by vibration signals are quite weak because the signals are generally masked by heavy background noise. To extract the weak fault characteristics of bearings from the signals, an improved spectral kurtosis (SK) method is proposed based on maximum correlated kurtosis deconvolution (MCKD). The proposed method combines the ability of MCKD in indicating the periodic fault transients and the ability of SK in locating these transients in the frequency domain. A simulation signal overwhelmed by heavy noise is used to demonstrate the effectiveness of the proposed method. The results show that MCKD is beneficial to clarify the periodic impulse components of the bearing signals, and the method is able to detect the resonant frequency band of the signal and extract its fault characteristic frequency. Through analyzing actual vibration signals collected from wind turbines and hot strip rolling mills, we confirm that by using the proposed method, it is possible to extract fault characteristics and diagnose early faults of rolling element bearings. Based on the comparisons with the SK method, it is verified that the proposed method is more suitable to diagnose early faults of rolling element bearings. PMID:26610501
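The reason kurtosis-based criteria flag bearing faults can be shown with a minimal sketch. This is a simplification of the abstract's method (the real approach applies MCKD deconvolution and computes spectral kurtosis per frequency band, rather than one kurtosis of the raw signal as here), and the signal parameters are invented for illustration.

```python
import math, random

# Sample kurtosis (Pearson definition): ~3 for a Gaussian signal,
# much larger for a signal containing sharp periodic transients.
def kurtosis(x):
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m4 = sum((v - mean) ** 4 for v in x) / n
    return m4 / m2 ** 2

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(4096)]
# Simulated fault: a sharp impulse every 128 samples on top of the noise.
impulsive = [v + (8.0 if i % 128 == 0 else 0.0) for i, v in enumerate(noise)]

print(f"kurtosis of plain noise:      {kurtosis(noise):.2f}")
print(f"kurtosis with fault impulses: {kurtosis(impulsive):.2f}")
```

The periodic impulses raise the kurtosis far above the Gaussian baseline; SK locates the frequency band where this impulsiveness concentrates, and MCKD's correlated-kurtosis criterion additionally rewards the known fault period.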
Deformation Behaviors of Geosynthetic Reinforced Soil Walls on Shallow Weak Ground
NASA Astrophysics Data System (ADS)
Kim, You-Seong; Won, Myoung-Soo
In this study, the fifteen-month behavior of two geosynthetic reinforced soil (GRS) walls constructed on shallow weak ground was measured and analyzed. The walls were backfilled with clayey soil obtained from a nearby construction site, and the safety factors obtained from general limit equilibrium analysis were less than 1.3 for both walls. To compare with the data measured from the real GRS walls and an unreinforced soil mass, a series of finite element method (FEM) analyses of the two field GRS walls and the unreinforced soil mass were conducted. The FEM results showed that the failure plane of the unreinforced soil mass was consistent with the Rankine active state, but no failure plane occurred in the GRS walls. In addition, maximum horizontal displacements and shear strains in the GRS walls were 50% smaller than those found in the unreinforced soil mass. Modeling results such as the maximum horizontal displacements, horizontal pressures, and geosynthetic tensile strengths in the GRS walls are in good agreement with the measured data. Based on this study, it can be concluded that geosynthetic reinforcement is effective in reducing the displacement of the wall face and/or the deformation of the backfill soil even if the mobilized tensile stress after construction is very small.
The coupling to matter in massive, bi- and multi-gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noller, Johannes; Melville, Scott, E-mail: noller@physics.ox.ac.uk, E-mail: scott.melville@queens.ox.ac.uk
2015-01-01
In this paper we construct a family of ways in which matter can couple to one or more 'metrics'/spin-2 fields in the vielbein formulation. We do so subject to requiring the weak equivalence principle and the absence of ghosts from pure spin-2 interactions generated by the matter action. Results are presented for Massive, Bi- and Multi-Gravity theories and we give explicit expressions for the effective matter metric in all of these cases.
Modified gravity (MOG), the speed of gravitational radiation and the event GW170817/GRB170817A
NASA Astrophysics Data System (ADS)
Green, M. A.; Moffat, J. W.; Toth, V. T.
2018-05-01
Modified gravity (MOG) is a covariant, relativistic, alternative gravitational theory whose field equations are derived from an action that supplements the spacetime metric tensor with vector and scalar fields. Both gravitational (spin 2) and electromagnetic waves travel on null geodesics of the theory's one metric. MOG satisfies the weak equivalence principle and is consistent with observations of the neutron star merger and gamma ray burster event GW170817/GRB170817A.
Towards A Predictive First Principles Understanding Of Molecular Adsorption On Graphene
2016-10-05
used and developed state-of-the-art quantum mechanical methods to make accurate predictions about the interaction strength and adsorption structure...important physical properties for a whole class of systems with weak non-covalent interactions, for example those involving the binding between water... Keywords: density functional theory, ab initio methods
Midfield wireless powering of subwavelength autonomous devices.
Kim, Sanghoek; Ho, John S; Poon, Ada S Y
2013-05-17
We obtain an analytical bound on the efficiency of wireless power transfer to a weakly coupled device. The optimal source is solved for a multilayer geometry in terms of a representation based on the field equivalence principle. The theory reveals that optimal power transfer exploits the properties of the midfield to achieve efficiencies far greater than conventional coil-based designs. As a physical realization of the source, we present a slot array structure whose performance closely approaches the theoretical bound.
Problems of Automation and Management Principles Information Flow in Manufacturing
NASA Astrophysics Data System (ADS)
Grigoryuk, E. N.; Bulkin, V. V.
2017-07-01
Automated process control systems are complex systems characterized by elements with a common purpose, by the systemic nature of the algorithms they implement for exchanging and processing information, and by a large number of functional subsystems. The article gives examples of automatic control systems and automated process control systems, drawing parallels between them by identifying their strengths and weaknesses. A non-standard process control system is also proposed.
Basic Properties of Strong Mixing Conditions.
1985-06-01
H. Dehling and W. Philipp. Almost sure invariance principles for weakly dependent vector-valued random variables. Ann. Probab. 10 (1982) 689-701. ...Harris chains will not be discussed here.) It is well known that every stationary Harris chain has a well-defined "period" p ∈ {1, 2, 3, ...} (the chain is...chain is absolutely regular. (ii) More generally, for any strictly stationary real Harris chain, lim_n β(n) = 1 - 1/p, where p is the period.
Newtonian self-gravitating system in a relativistic huge void universe model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nishikawa, Ryusuke; Nakao, Ken-ichi; Yoo, Chul-Moon, E-mail: ryusuke@sci.osaka-cu.ac.jp, E-mail: knakao@sci.osaka-cu.ac.jp, E-mail: yoo@gravity.phys.nagoya-u.ac.jp
We consider a test of the Copernican Principle through observations of the large-scale structures, and for this purpose we study the self-gravitating system in a relativistic huge void universe model which does not invoke the Copernican Principle. If we focus on the weakly self-gravitating and slowly evolving system whose spatial extent is much smaller than the scale of the cosmological horizon in the homogeneous and isotropic background universe model, the cosmological Newtonian approximation is available. Also in the huge void universe model, the same kind of approximation as the cosmological Newtonian approximation is available for the analysis of the perturbations contained in a region whose spatial size is much smaller than the scale of the huge void: the effects of the huge void are taken into account in a perturbative manner by using the Fermi-normal coordinates. By using this approximation, we derive the equations of motion for the weakly self-gravitating perturbations whose elements have relative velocities much smaller than the speed of light, and show that the derived equations can be significantly different from those in the homogeneous and isotropic universe model, due to the anisotropic volume expansion in the huge void. We linearize the derived equations of motion and solve them. The solutions show that the behaviors of linear density perturbations are very different from those in the homogeneous and isotropic universe model.
Emergence of Super Cooperation of Prisoner’s Dilemma Games on Scale-Free Networks
Li, Angsheng; Yong, Xi
2015-01-01
Recently, the authors proposed a quantum prisoner’s dilemma game based on the spatial game of Nowak and May, and showed that the game can be played classically. By using this idea, we propose three generalized prisoner’s dilemma (GPD, for short) games based on the weak, the full, and the normalized prisoner’s dilemma games, denoted GPDW, GPDF, and GPDN, respectively. Our games consist of two players, each of which has three strategies: cooperator (C), defector (D), and super cooperator (denoted by Q), and have a parameter γ that measures the entangled relationship between the two players. We found that our generalized prisoner’s dilemma games have new Nash equilibrium principles; that entanglement is the principle of emergence and convergence (i.e., guaranteed emergence) of super cooperation in evolutions of these games on scale-free networks; that entanglement provides a threshold for a phase transition of super cooperation in these evolutions; that the role of heterogeneity of the scale-free networks in cooperation and super cooperation is very limited; and that well-defined structures of scale-free networks allow coexistence of cooperators and super cooperators in the evolutions of the weak version of our generalized prisoner’s dilemma games. PMID:25643279
A New Method to Test the Einstein’s Weak Equivalence Principle
NASA Astrophysics Data System (ADS)
Yu, Hai; Xi, Shao-Qiang; Wang, Fa-Yin
2018-06-01
Einstein’s weak equivalence principle (WEP) is one of the foundational assumptions of general relativity and of some other gravity theories. In parametrized post-Newtonian (PPN) theory, the difference Δγ between the PPN parameters γ of different particles, or of the same type of particle at different energies, represents a violation of the WEP. Current constraints on Δγ are derived from the observed time delays between correlated particles from astronomical sources. However, the observed time delay is contaminated by other effects, such as delays due to different particle emission times, potential Lorentz invariance violation, and a nonzero photon rest mass. Therefore, current constraints are only upper limits. Here, we propose a new method to test the WEP based on the fact that the gravitational time delay is direction-dependent while the others are not. This is the first method that can naturally correct for the other time-delay effects. Using the time-delay measurements of the BATSE gamma-ray burst sample and the gravitational potential of the local supercluster Laniakea, we find that the constraint on Δγ for photons of different energies can be as low as 10^-14. In the future, if more gravitational wave events and fast radio bursts with much more precise time-delay measurements are observed, this method can give a reliable and tight constraint on the WEP.
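The size of such a Shapiro-delay bound is easy to sketch numerically. The snippet below uses the common point-mass approximation Δt_gra ≈ Δγ (GM/c³) ln(d/b); the mass, distance, impact parameter, and observed delay are illustrative placeholders (not values taken from the paper), chosen only to show how a sub-second delay against a supercluster-scale potential translates into a Δγ bound of order 10^-14.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
MPC = 3.086e22       # megaparsec, m

# Illustrative (assumed) numbers -- not taken from the paper:
M = 1e17 * M_SUN     # rough mass scale of a Laniakea-like supercluster
d = 1000 * MPC       # source distance
b = 1 * MPC          # impact parameter relative to the potential's center
dt_obs = 0.1         # observed time delay between correlated photons, s

# Point-mass Shapiro delay difference: dt_gra = dgamma * (G*M/c^3) * ln(d/b),
# so attributing the whole observed delay to gravity bounds dgamma from above.
shapiro_scale = (G * M / c**3) * math.log(d / b)   # seconds per unit dgamma
dgamma = dt_obs / shapiro_scale

print(f"Delta gamma <~ {dgamma:.1e}")
```

With these round inputs the bound comes out at a few times 10^-14, consistent with the order of magnitude quoted in the abstract.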
New test of weak equivalence principle using polarized light from astrophysical events
NASA Astrophysics Data System (ADS)
Wu, Xue-Feng; Wei, Jun-Jie; Lan, Mi-Xiang; Gao, He; Dai, Zi-Gao; Mészáros, Peter
2017-05-01
Einstein's weak equivalence principle (WEP) states that any freely falling, uncharged test particle follows the same identical trajectory independent of its internal structure and composition. Since the polarization of a photon is considered to be part of its internal structure, we propose that polarized photons from astrophysical transients, such as gamma-ray bursts (GRBs) and fast radio bursts (FRBs), can be used to constrain the accuracy of the WEP through the Shapiro time delay effect. Assuming that the arrival time delays of photons with different polarizations are mainly attributed to the gravitational potential of the Laniakea supercluster of galaxies, we obtain strict upper limits on the differences of the parametrized post-Newtonian parameter γ: Δγ < 1.2 × 10^-10 for the polarized optical emission of GRB 120308A, Δγ < 1.2 × 10^-10 for the polarized gamma-ray emission of GRB 100826A, and Δγ < 2.2 × 10^-16 for the polarized radio emission of FRB 150807. These are the first direct verifications of the WEP for multiband photons with different polarizations. In particular, the result from FRB 150807 provides the most stringent limit to date on a deviation from the WEP, improving by one order of magnitude the previous best result based on Crab pulsar photons with different energies.
IMPACT OF GRAVITY LOADING ON POST-STROKE REACHING AND ITS RELATIONSHIP TO WEAKNESS
Beer, Randall F.; Ellis, Michael D.; Holubar, Bradley G.; Dewald, Julius P.A.
2010-01-01
The ability to extend the elbow following stroke depends on the magnitude and direction of torques acting at the shoulder. The mechanisms underlying this link remain unclear. The purpose of this study was to evaluate whether the effects of shoulder loading on elbow function were related to weakness or its distribution in the paretic limb. Ten subjects with longstanding hemiparesis performed movements with the arm either passively supported against gravity by an air bearing, or by activation of shoulder muscles. Isometric maximum voluntary torques at the elbow and shoulder were measured using a load cell. The speed and range of elbow extension movements were negatively impacted by actively supporting the paretic limb against gravity. However, the effects of gravity loading were not related to proximal weakness or abnormalities in the elbow flexor–extensor strength balance. The findings support the existence of abnormal descending motor commands that constrain the ability of stroke survivors to generate elbow extension torque in combination with abduction torque at the shoulder. PMID:17486581
Lezon, Timothy R.; Banavar, Jayanth R.; Cieplak, Marek; Maritan, Amos; Fedoroff, Nina V.
2006-01-01
We describe a method based on the principle of entropy maximization to identify the gene interaction network with the highest probability of giving rise to experimentally observed transcript profiles. In its simplest form, the method yields the pairwise gene interaction network, but it can also be extended to deduce higher-order interactions. Analysis of microarray data from genes in Saccharomyces cerevisiae chemostat cultures exhibiting energy metabolic oscillations identifies a gene interaction network that reflects the intracellular communication pathways that adjust cellular metabolic activity and cell division to the limiting nutrient conditions that trigger metabolic oscillations. The success of the present approach in extracting meaningful genetic connections suggests that the maximum entropy principle is a useful concept for understanding living systems, as it is for other complex, nonequilibrium systems. PMID:17138668
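For continuous variables with fixed means and covariances, the maximum-entropy distribution is a Gaussian, so the pairwise interaction matrix of such a model is (up to sign) the inverse of the data covariance matrix. A minimal sketch of this inference on synthetic data follows; the three-gene system and its coupling values are invented for illustration and are not the yeast microarray data analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-gene system: genes 0 and 1 interact, gene 2 is independent.
true_interactions = np.array([[ 2.0, -1.2,  0.0],
                              [-1.2,  2.0,  0.0],
                              [ 0.0,  0.0,  1.0]])   # precision (inverse covariance)
cov_true = np.linalg.inv(true_interactions)

# Simulated "transcript profiles": one row per experiment, one column per gene.
profiles = rng.multivariate_normal(np.zeros(3), cov_true, size=20000)

# Maximum-entropy pairwise model: coupling J_ij = -(C^-1)_ij for i != j.
C = np.cov(profiles, rowvar=False)
J = -np.linalg.inv(C)

print("inferred coupling gene0-gene1:", round(J[0, 1], 2))  # near +1.2
print("inferred coupling gene0-gene2:", round(J[0, 2], 2))  # near 0
```

The recovered off-diagonal entries of the negative inverse covariance single out the truly interacting pair, which is the sense in which the pairwise maximum-entropy model identifies the interaction network behind observed profiles.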
Bicakli, Derya Hopanci; Ozveren, Ahmet; Uslu, Ruchan; Dalak, Reci Meseri; Cehreli, Ruksan; Uyar, Mehmet; Karabulut, Bulent; Akcicek, Fehmi
2018-03-01
Malnutrition is common in patients with geriatric gastrointestinal system (GIS) cancer. This study aimed to evaluate patients with geriatric GIS cancer in terms of nutritional status and weakness and determine the changes caused by chemotherapy (CT). Patients with geriatric GIS cancer who received CT were included in the study. Their nutritional status was assessed with the Mini Nutritional Assessment, and weakness was assessed with the handgrip strength/body mass index ratio. After CT (minimum 4 wk and maximum 6 wk later), patients were assessed for the same parameters. A total of 153 patients aged ≥65 y (mean age, 70.5 ± 5.6 y; 44 female and 109 male) were evaluated. The population consisted of patients who were diagnosed with colorectal (51.6%), gastric (26.8%), pancreatic (11.8%), hepatic (7.2%), biliary tract (2%), and esophageal (0.7%) cancer. Of these patients, 37.9% were malnourished, 34.6% were at risk of malnutrition, and 27.5% were well nourished. After one course of CT, the frequency of malnutrition increased to 46.4% (P = 0.001). The patient groups with the highest rates of weakness were those who were diagnosed with biliary tract, hepatic, and colorectal cancer (33.3%, 27.3%, and 20%, respectively). Weakness was significantly increased after one course of CT in patients who received CT before (P = 0.039). Malnutrition and weakness were common in patients with geriatric GIS cancer, and even one course of CT worsened the nutritional status of the patients. Patients who have received CT previously should be carefully monitored for weakness. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Sarkarinejad, Khalil; Zafarmand, Bahareh; Oveisi, Behnam
2018-03-01
The NW-SE trending Zagros orogenic belt was initiated during the convergence of the Afro-Arabian continent and the Iranian microcontinent in the Late Cretaceous. Ongoing convergence is confirmed by intense collision-related seismicity in the Zagros orogenic belt, reflecting the reactivation of early extensional faults as later compressional segmented strike-slip and dip-slip faults. These activities are strongly related either to deep-seated basement faults (deep-seated earthquakes) underlying the sedimentary cover or to the gently dipping, shallow-seated décollement horizon in the rheologically weak rocks of the infra-Cambrian Hormuz salt. The compressional stress regimes in the different units play an important role in controlling the stress conditions between the different units within the sedimentary cover and the basement. A significant set of nearly N-S trending right-lateral strike-slip faults exists throughout the study area in the Fars region of the Zagros Foreland Folded Belt. Fault-slip and focal mechanism data were analyzed using the stress inversion method to reconstruct the paleo- and recent stress conditions. The results suggest that the current direction of maximum principal stress averages N19°E, compared with N38°E for the past, from the Cretaceous to the Tertiary (although a few sites on the Kar-e-Bass fault yield a different direction). The results are consistent with the collision of the Afro-Arabian continent and the Iranian microcontinent. The difference between the current and paleo-stress directions indicates an anticlockwise rotation of the maximum principal stress direction over time. This difference resulted from changes in the continental convergence path, but was also influenced by the local structural evolution, including the lateral propagation of folds and the presence of several local décollement horizons that facilitated decoupling of the deformation between the basement and the sedimentary cover.
The obliquity of the maximum compressional stress to the fault trends reveals a typical stress partitioning of thrust and strike-slip motion in the Kazerun, Kar-e-Bass, Sabz-Pushan, and Sarvestan fault zones, causing these fault zones to behave as segmented strike-slip and dip-slip faults.
McCaffrey, R; Goldfinger, C
1995-02-10
The maximum size of thrust earthquakes at the world's subduction zones appears to be limited by anelastic deformation of the overriding plate. Anelastic strain in weak forearcs and roughness of the plate interface produced by faults cutting the forearc may limit the size of thrust earthquakes by inhibiting the buildup of elastic strain energy or slip propagation or both. Recently discovered active strike-slip faults in the submarine forearc of the Cascadia subduction zone show that the upper plate there deforms rapidly in response to arc-parallel shear. Thus, Cascadia, as a result of its weak, deforming upper plate, may be the type of subduction zone at which great (moment magnitude approximately 9) thrust earthquakes do not occur.
Generalized energy detector for weak random signals via vibrational resonance
NASA Astrophysics Data System (ADS)
Ren, Yuhao; Pan, Yan; Duan, Fabing
2018-03-01
In this paper, the generalized energy (GE) detector is investigated for detecting weak random signals via vibrational resonance (VR). By artificially injecting the high-frequency sinusoidal interferences into an array of GE statistics formed for the detector, we show that the normalized asymptotic efficacy can be maximized when the interference intensity takes an appropriate non-zero value. It is demonstrated that the normalized asymptotic efficacy of the dead-zone-limiter detector, aided by the VR mechanism, outperforms that of the GE detector without the help of high-frequency interferences. Moreover, the maximum normalized asymptotic efficacy of dead-zone-limiter detectors can approach a quarter of the second-order Fisher information for a wide range of non-Gaussian noise types.
Weak Lensing by Large-Scale Structure: A Dark Matter Halo Approach.
Cooray; Hu; Miralda-Escudé
2000-05-20
Weak gravitational lensing observations probe the spectrum and evolution of density fluctuations and the cosmological parameters that govern them, but they are currently limited to small fields and subject to selection biases. We show how the expected signal from large-scale structure arises from the contributions from and correlations between individual halos. We determine the convergence power spectrum as a function of the maximum halo mass and so provide the means to interpret results from surveys that lack high-mass halos either through selection criteria or small fields. Since shot noise from rare massive halos is mainly responsible for the sample variance below 10', our method should aid our ability to extract cosmological information from small fields.
Crowding-facilitated macromolecular transport in attractive micropost arrays.
Chien, Fan-Tso; Lin, Po-Keng; Chien, Wei; Hung, Cheng-Hsiang; Yu, Ming-Hung; Chou, Chia-Fu; Chen, Yeng-Long
2017-05-02
Our study of DNA dynamics in weakly attractive nanofabricated post arrays revealed that crowding enhances polymer transport, in contrast to the hindered transport observed in repulsive media. The coupling of DNA diffusion and adsorption to the microposts results in more frequent cross-post hopping and increased long-term diffusivity with increased crowding density. We performed Langevin dynamics simulations and found maximum long-term diffusivity in post arrays with gap sizes comparable to the polymer radius of gyration. We found that macromolecular transport in weakly attractive post arrays is faster than in a non-attractive dense medium. Furthermore, we employed hidden Markov analysis to determine the transitions of macromolecular adsorption-desorption on posts and hopping between posts. The apparent free energy barriers are comparable to theoretical estimates determined from polymer conformational fluctuations.
Global Harmonization of Maximum Residue Limits for Pesticides.
Ambrus, Árpád; Yang, Yong Zhen
2016-01-13
International trade plays an important role in national economics. The Codex Alimentarius Commission develops harmonized international food standards, guidelines, and codes of practice to protect the health of consumers and to ensure fair practices in the food trade. The Codex maximum residue limits (MRLs) elaborated by the Codex Committee on Pesticide Residues are based on the recommendations of the FAO/WHO Joint Meeting on Pesticides (JMPR). The basic principles applied currently by the JMPR for the evaluation of experimental data and related information are described together with some of the areas in which further developments are needed.
Lorenz, Ralph D
2010-05-12
The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day : night temperature contrast observed on the extrasolar planet HD 189733b.
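The two-box MEP calculation is easy to reproduce in outline: fix the absorbed solar flux in each box, let outgoing radiation depend linearly on box temperature, and scan the interbox heat flux Q for the value that maximizes the entropy production Q(1/T_cold - 1/T_hot). The fluxes and linearization constants below are round illustrative numbers, not those used in the paper.

```python
import numpy as np

# Illustrative (assumed) parameters for an Earth-like two-box climate:
F_eq, F_pole = 300.0, 160.0   # absorbed solar flux per box, W/m^2
A, B = -220.0, 1.7            # linearized OLR = A + B*T (T in kelvin), W/m^2

def temperatures(Q):
    """Box temperatures from energy balance with interbox heat flux Q (W/m^2)."""
    T_eq = (F_eq - Q - A) / B      # equator box loses Q to transport
    T_pole = (F_pole + Q - A) / B  # pole box gains Q
    return T_eq, T_pole

def entropy_production(Q):
    T_eq, T_pole = temperatures(Q)
    return Q * (1.0 / T_pole - 1.0 / T_eq)   # W m^-2 K^-1

# Scan heat fluxes up to the value that would equalize the two boxes.
Q_grid = np.linspace(0.0, (F_eq - F_pole) / 2.0, 2001)
sigma = np.array([entropy_production(Q) for Q in Q_grid])
Q_mep = Q_grid[np.argmax(sigma)]

T_eq, T_pole = temperatures(Q_mep)
print(f"MEP heat flux ~ {Q_mep:.1f} W/m^2, T_eq ~ {T_eq:.0f} K, T_pole ~ {T_pole:.0f} K")
```

The qualitative point is that entropy production vanishes both at zero transport and at the flux that equalizes the boxes, so the maximum lies strictly in between, leaving a finite equator-pole temperature contrast.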
Predictions of the causal entropic principle for environmental conditions of the universe
NASA Astrophysics Data System (ADS)
Cline, James M.; Frey, Andrew R.; Holder, Gilbert
2008-03-01
The causal entropic principle has been proposed as an alternative to the anthropic principle for understanding the magnitude of the cosmological constant. In this approach, the probability to create observers is assumed to be proportional to the entropy production ΔS in a maximal causally connected region—the causal diamond. We improve on the original treatment by better quantifying the entropy production due to stars, using an analytic model for the star formation history which accurately accounts for changes in cosmological parameters. We calculate the dependence of ΔS on the density contrast Q=δρ/ρ, and find that our universe is much closer to the most probable value of Q than in the usual anthropic approach and that probabilities are relatively weakly dependent on this amplitude. In addition, we make first estimates of the dependence of ΔS on the baryon fraction and overall matter abundance. Finally, we also explore the possibility that decays of dark matter, suggested by various observed gamma ray excesses, might produce a comparable amount of entropy to stars.
Biominerals- hierarchical nanocomposites: the example of bone
Beniash, Elia
2010-01-01
Many organisms incorporate inorganic solids in their tissues to enhance their functional, primarily mechanical, properties. These mineralized tissues, also called biominerals, are unique organo-mineral nanocomposites, organized at several hierarchical levels, from the nano- to the macroscale. Unlike man-made composite materials, which often are simple physical blends of their components, the organic and inorganic phases in biominerals interface at the molecular level. Although these tissues are made of relatively weak components at ambient conditions, their hierarchical structural organization and the intimate interactions between different elements lead to superior mechanical properties. Understanding the basic principles of the formation, structure, and functional properties of these tissues might lead to novel bioinspired strategies for material design and better treatments for diseases of the mineralized tissues. This review focuses on the general principles of the structural organization, formation, and functional properties of biominerals, using bone tissue as an example. PMID:20827739
NASA Technical Reports Server (NTRS)
Hsia, Wei-Shen
1986-01-01
In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.