SDI: Statistical dynamic interactions
Blann, M.; Mustafa, M.G.; Peilert, G.; Stoecker, H.; Greiner, W. (Inst. fuer Theoretische Physik)
1991-04-01
We focus on the combined statistical and dynamical aspects of heavy-ion-induced reactions. The overall picture is illustrated by considering the reaction ³⁶Ar + ²³⁸U at a projectile energy of 35 MeV/nucleon. We illustrate the time-dependent bound excitation energy due to the fusion/relaxation dynamics as calculated with the Boltzmann master equation. An estimate of the mass, charge, and excitation of an equilibrated nucleus surviving the fast (dynamic) fusion-relaxation process is used as input into an evaporation calculation which includes 20 heavy-fragment exit channels. The distribution of excitations between residue and clusters is explicitly calculated, as is the further deexcitation of clusters to bound nuclei. These results are compared with the exclusive cluster multiplicity measurements of Kim et al., and are found to give excellent agreement. We also consider an equilibrated residue system at 25% lower initial excitation, which gives an unsatisfactory exclusive multiplicity distribution. This illustrates that exclusive fragment multiplicity may provide a thermometer for system excitation. This analysis of data involves successive binary decay with no compressional effects or phase transitions. Several examples of primary versus final (stable) cluster decay probabilities for an A = 100 nucleus at excitations of 100 to 800 MeV are presented. From these results a large change in multifragmentation patterns may be understood as a simple phase-space consequence, invoking neither phase transitions nor equation-of-state information. These results are used to illustrate physical quantities which are ambiguous to deduce from experimental fragment measurements. 14 refs., 4 figs.
Excursions in statistical dynamics
NASA Astrophysics Data System (ADS)
Crooks, Gavin Earl
There are only a very few known relations in statistical dynamics that are valid for systems driven arbitrarily far away from equilibrium by an external perturbation. One of these is the fluctuation theorem, which places conditions on the entropy production probability distribution of nonequilibrium systems. Another recently discovered far-from-equilibrium expression relates nonequilibrium measurements of the work done on a system to equilibrium free energy differences. In contrast to linear response theory, these expressions are exact no matter the strength of the perturbation, or how far the system has been driven from equilibrium. In this work I show that these relations (and several other closely related results) can all be considered special cases of a single theorem. This expression is explicitly derived for discrete time and space Markovian dynamics, with the additional assumptions that the unperturbed dynamics preserve the appropriate equilibrium ensemble, and that the energy of the system remains finite. These theoretical results indicate that the most interesting nonequilibrium phenomena will be observed during rare excursions of the system away from the stable states. However, direct simulation of the dynamics is inherently inefficient, since the majority of the computation time is taken watching small, uninteresting fluctuations about the stable states. Transition path sampling has been developed as a Monte Carlo algorithm to efficiently sample rare transitions between stable or metastable states in equilibrium systems. Here, the transition path sampling methodology is adapted to the efficient sampling of large fluctuations in nonequilibrium systems evolving according to Langevin's equations of motion. Simulations are then used to study the behavior of the Maier-Stein system, an important model for a large class of nonequilibrium systems. 
Path sampling is also implemented for the kinetic Ising model, which is then employed to study surface induced evaporation.
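The two far-from-equilibrium relations cited in this abstract can be stated compactly. In the usual notation (β the inverse temperature, W the work done on the system, ΔF the equilibrium free-energy difference between the endpoints of the driving protocol), the fluctuation theorem relating the forward (F) and time-reversed (R) work distributions, and the nonequilibrium work relation it implies, read:

```latex
\frac{P_{F}(+W)}{P_{R}(-W)} = e^{\beta (W - \Delta F)}
\qquad\Longrightarrow\qquad
\left\langle e^{-\beta W} \right\rangle_{F} = e^{-\beta \Delta F}
```

The second identity follows from the first by writing \(P_{F}(W)\,e^{-\beta W} = P_{R}(-W)\,e^{-\beta \Delta F}\) and integrating over W; both hold regardless of how strongly the system is driven.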
Wakefield accelerators for SDI
NASA Astrophysics Data System (ADS)
Jones, M. E.; Keinigs, R.; Faehl, R. J.; Devolder, B. G.
The wakefield accelerator concept consists of utilizing the electric field generated by the motion of one group of particles to accelerate another group of particles. Essentially, the wakefield accelerator is a transformer which transfers the energy of a large number of relatively low-energy particles to a smaller number of particles, accelerating the latter to high energy. The basic physics of wakefield acceleration is described, along with issues relevant to SDI applications. For such applications the total energy in the beam must be maximized. Specific wakefield concepts are described: the RF-cavity and slow-wave-structure wakefield accelerators and the plasma wakefield accelerator.
Lee, S.
2011-05-05
The Savannah River Remediation (SRR) Organization requested that Savannah River National Laboratory (SRNL) develop a Computational Fluid Dynamics (CFD) method to mix and blend the miscible contents of the blend tanks, to ensure the contents are properly blended before they are transferred from the blend tank (e.g., Tank 50H) to the Salt Waste Processing Facility (SWPF) feed tank. The work described here consists of two modeling areas: the mixing modeling analysis during the miscible-liquid blending operation, and the flow pattern analysis during transfer of the blended liquid. The transient CFD governing equations, consisting of three momentum equations, one mass balance, two turbulence transport equations for kinetic energy and dissipation rate, and one species transport equation, were solved by an iterative technique until the species concentrations of the tank fluid were in equilibrium. The steady-state flow solutions for the entire tank fluid were used for flow pattern analysis, for velocity scaling analysis, and as initial conditions for the transient blending calculations. A series of modeling calculations was performed to estimate the blending times for various jet flow conditions, and to investigate the impact of the cooling coils on the blending time of the tank contents. The modeling results were benchmarked against pilot-scale test results. All of the flow and mixing models were run with the nozzles installed at the mid-elevation and parallel to the tank wall. From the CFD modeling calculations, the main results are summarized as follows: (1) The benchmark analyses for the CFD flow velocity and blending models demonstrate their consistency with Engineering Development Laboratory (EDL) and literature test results in terms of local velocity measurements and experimental observations.
Thus, an application of the established criterion to the SRS full-scale tank will provide a better, physically based estimate of the required mixing time and of the transfer-pump elevation for minimum sludge disturbance. (2) An empirical equation for a tank with no cooling coils agrees reasonably with the current modeling results for the dual jet. (3) From the sensitivity study of the cooling coils, it was found that the mixing time for the coiled tank was about two times longer than that for the tank with no coils at 1/10th scale, while the coiled tank required only 50% longer than the one without coils at the full scale of Tank 50H. In addition, the time difference is reduced when the pumping U₀d₀ value is increased for a given tank. (4) The blending time for the T-shaped dual-jet pump is about 20% longer than that of the 15° upward V-shaped pump in the 1/10th pilot-scale tank, while the time difference between the two pumps is about 12% for the full-scale Tank 50H. These results are consistent with the literature. (5) A transfer pump with a solid-plate suction screen operating at 130 gpm can be located 9.5 inches above settled sludge, for a 2 in. screen height in an 85 ft waste tank, without disturbing any sludge. Detailed results are summarized in Table 13. Final pump performance calculations were made using the established CW pump design and operating conditions to satisfy the two requirements of minimum sludge disturbance and adequate blending of tank contents. The final calculation results show that the blending times for the coiled and uncoiled tanks coupled with the CW pump design are 159 and 83 minutes, respectively. All the results are provided in Table 16.
NASP and SDI Spearhead CFD Developments
NASA Technical Reports Server (NTRS)
Mehta, Unmeel B.
1992-01-01
The National Aerospace Plane (NASP) program's purpose, as stated by the National Space Council, is to "develop and demonstrate hypersonic technologies with the ultimate goal of single stage to orbit." The council has also directed that "performance of the experimental flight vehicle will be constrained to the minimum necessary to meet the highest priority research, as opposed to operational objectives .... The program will be conducted in such a way as to minimize technical and cost uncertainty associated with the experimental vehicle." The purpose of the Strategic Defense Initiative (SDI), as defined by President Bush, is "...protection from limited ballistic missile strikes, whatever their source." Computational fluid dynamics (CFD) plays a vital role in both endeavors.
The Evaluation of SISMAKOM (Computerized SDI Project).
ERIC Educational Resources Information Center
University of Science, Penang (Malaysia).
A survey of 88 users of SISMAKOM, a computerized selective dissemination of information (SDI) and document delivery service provided by the Universiti Sains Malaysia and four other Malaysian universities, was conducted in August 1982 in order to collect data about SISMAKOM and to assess the value of a computerized SDI service in a developing…
Statistical dynamics of religion evolutions
NASA Astrophysics Data System (ADS)
Ausloos, M.; Petroni, F.
2009-10-01
A religious affiliation can be considered as a “degree of freedom” of an agent on the human-genre network. A brief review is given of the state of the art in data analysis and modeling of religious “questions,” in order to suggest and, if possible, initiate further research after applying a “statistical physics filter.” We present a discussion of the evolution of 18 so-called religions, as measured through their number of adherents between 1900 and 2000. Some emphasis is placed on a few cases presenting a minimum or a maximum in the investigated time range, thereby suggesting a competitive ingredient to be considered besides the well-accepted “at birth” attachment effect. The importance of the “external field” is stressed through an Avrami late-stage crystal-growth-like parameter. The observed features and some intuitive interpretations point to opinion-based models with vector-like, rather than scalar-like, agents.
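For reference, the Avrami growth law invoked here has, in its classical form, a transformed (here, attached) fraction

```latex
f(t) \;=\; 1 - \exp\!\left(-K\,t^{\,n}\right)
```

with K a rate constant and n the Avrami exponent. Identifying the abstract's "external field" parameter with K in this expression is our reading of the abstract, not a statement from the paper itself.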
SDI spinoffs: research now, standards later
Smith, T.K. Jr.
1986-04-01
A major benefit of the Strategic Defense Initiative (SDI) is its potential for technological spinoffs. The lack of a consistent answer on the feasibility of developing an effective ballistic missile defense system may force Congress to look at the possible spinoffs in order to make a funding decision on SDI. Spinoffs have historically played an important role in providing industry with commercial applications, but there are also a number of unattractive aspects: unpredictability and possible suppression for national security reasons. Edward Teller is among those who promote X-ray lasers, while others support gamma-ray laser research. The possibility of SDI technology and spinoffs gives scientists and engineers a chance to participate in the development of new standards. 7 references.
Statistical dynamics of dissipative drift wave turbulence
Gang, F.Y.; Diamond, P.H. (General Atomics, San Diego, California 92138); Crotinger, J.A.; Koniges, A.E.
1991-04-01
The statistical dynamics of a two-field model of dissipative drift wave turbulence is investigated using the EDQNM (eddy-damped quasinormal Markovian) closure method (J. Fluid Mech. 41, 363 (1970)). The analyses include studies of statistical closure equations, derivation of an H theorem, and its application to the formulation of selective decay hypotheses for the turbulent relaxation process. The results show that the dynamics of the two-field model is fundamentally different from that of the familiar one-field Hasegawa-Mima model (Phys. Fluids 21, 87 (1978)). In particular, density fluctuations nonlinearly couple to small scales, as does enstrophy. This transfer process is nonlinearly regulated by the dynamics of the density-vorticity cross correlation. Since density perturbations are not simply related to potential perturbations, as vorticity is, their transfer rate is greater. As a result, turbulent relaxation processes exhibit both dynamic alignment of density and vorticity and coherent vortex formation.
Multifragmentation at intermediate energy: Dynamics or statistics
Beaulieu, L.; Phair, L.; Moretto, L.G.; Wozniak, G.J.
1998-01-01
In this report the authors consider two contradictory claims that have been advanced recently: (1) the claim for a predominantly dynamical fragment production mechanism; and (2) the claim for a dominant statistical and thermal process. They present a new analysis in terms of Poissonian reducibility and thermal scaling, which addresses some of the criticisms of the binomial analysis.
Operational Results of an Adaptive SDI System.
ERIC Educational Resources Information Center
Sage, C. R.; Fitzwater, D. R.
The Ames Laboratory SDI system requires a minimum of human intervention. The adaptability of the system provides two major contributions to information dissemination. (1) The user benefits proportionately from the amount of effort he expends in setting up his profile and the diligence in sending back responses. (2) The document input has only to…
Automated SDI Services. (Selective Dissemination of Information).
ERIC Educational Resources Information Center
Altmann, Berthold
An automated SDI service based on tapes supplied by DDC, Science Abstracts, and Engineering Index is evaluated as a component element of the entire HDL information system. Current studies for improving the efficiency are briefly described,--in particular, the establishment of a parameter reference service that should shorten the lead-time for the…
Toward Statistical Descriptions of Convective Cloud Dynamics
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi; Quaas, Johannes; Wagner, Till M.; Plant, Robert S.
2008-06-01
Workshop on Concepts for Convective Parameterizations in Large-Scale Models; Hamburg, Germany, 12-14 February 2008; An accurate representation (parameterization) of deep convective clouds is essential for the successful simulation of precipitation in climate models. However, the question of closure (i.e., how to find a closed system of equations) for convective parameterizations remains unsettled. Because a parameterization is conceptually a description of an ensemble of convective clouds, the development of "statistical cumulus dynamics" (SCD) would be the ultimate way to provide the closure, just as the statistical mechanics of a microphysical system provides the ultimate basis for its macrophysical thermodynamics.
Takatsuka, Kazuo; Matsumoto, Kentaro
2016-01-21
We present a basic theory to study real-time dynamics embedded in a large environment that is treated using a statistical method. In light of great progress in molecular-level studies of time-resolved spectroscopies, chemical reaction dynamics, and so on, not only in the gas phase but also in condensed phases like liquid solvents and even in crowded environments in living cells, we need to bridge the gap between statistical mechanics and microscopic real-time dynamics. For instance, an analogy to gas-phase dynamics, in which molecules are driven by the gradient of the potential energy hypersurfaces (PESs), suggests that particles in condensed phases should instead run on the free energy surface. The question is whether this anticipation is correct. To answer it, we propose a mixed dynamical and statistical representation to treat chemical dynamics embedded in a statistical ensemble. We first define the entropy functional, which is a function of the phase-space position of the dynamical subsystem, dressed with statistical weights from the statistical counterpart. We then consider the functionals of temperature, free energy, and chemical potential as their extensions in statistical mechanics, through which one can clarify the relationship between real-time microscopic dynamics and statistical quantities. As an illustrative example, we show that molecules in the dynamical subsystem run on the free-energy functional surface if and only if the spatial gradients of the temperature functional are all zero. Otherwise, additional forces emerge from the gradient of the temperature functional. Numerical demonstrations are presented at the very basic level of this theory for molecular dissociation in atomic cluster solvents. PMID:26674298
The origins of SDI, 1944--1983
Baucom, D.R.
1992-01-01
The most distinctive and important contribution of this new book on the Strategic Defense Initiative is that it ends where most other studies begin, with President Ronald Reagan's famous (or infamous, depending on one's perspective) March 1983 speech that introduced the Star Wars concept. In taking this approach, Donald R. Baucom - a former Air Force historian who has been the official historian of the Strategic Defense Initiative Organization since May 1987 - helps to correct the common misperception that US efforts in strategic defense began and ended with the SDI. Although Baucom tells us that The Origins of SDI is a significantly revised version of an SDIO study he completed in 1989, representing his own views and not those of the SDIO, the reader should be warned that the book reads like an official history. It is often dry or too episodic and offers little that is new in the way of analysis or interpretation.
SDI (Strategic Defense Initiative): Shield or sword. Study Project
Butler, C.S.; Spiczak, G.R.
1989-05-15
The paper attempts to answer the fundamental question: Is SDI an adjunct to a first-strike strategy? As its criteria, it discusses opposing Soviet and U.S. views on SDI, a historical application of the Mutual Assured Destruction strategy, and Soviet and U.S. thinking on first-strike capability. President Reagan's March 1983 address on SDI is used as the backdrop to set the stage for the discussion. The authors' objective is to evaluate and analyze the potential impact of SDI on a first strike.
Statistical Physics Applied to Human Heartbeat Dynamics
NASA Astrophysics Data System (ADS)
Stanley, H. Eugene
2000-03-01
A major problem in biology is the quantitative analysis of nonstationary time series. A central question is whether such noisy fluctuating signals contain information useful for understanding underlying physiological mechanisms. This review talk summarizes recent work that analyzes physiological signals, principally lengthy time series of interbeat heart intervals, using a range of approaches adapted from modern statistical mechanics. These approaches include (i) detrended fluctuation analysis of long-range anticorrelations, (ii) wavelet analysis, and (iii) multifractal analysis. The work reported here was carried out primarily by L. A. Nunes Amaral, A. L. Goldberger, S. Havlin, P. Ch. Ivanov, C.-K. Peng, M. G. Rosenblum, and Z. Struzik; see [1-5] and references therein for details. [1] For an overview, see H. E. Stanley, L. A. N. Amaral, A. L. Goldberger, S. Havlin, P. Ch. Ivanov, and C.-K. Peng, "Statistical Physics and Physiology: Monofractal and Multifractal Approaches," Physica A 270 (1999) 309. [2] C.-K. Peng, S. Havlin, H. E. Stanley, and A. L. Goldberger, "Quantification of Scaling Exponents and Crossover Phenomena in Nonstationary Heartbeat Time Series," Chaos 5 (1995) 82. [3] L. A. N. Amaral, A. L. Goldberger, P. Ch. Ivanov, and H. E. Stanley, "Scale-Independent Measures and Pathologic Cardiac Dynamics," Phys. Rev. Lett. 81 (1998) 2388. [4] P. Ch. Ivanov, A. L. Goldberger, S. Havlin, C.-K. Peng, and H. E. Stanley, "Wavelets in Medicine and Physiology," in Wavelets, edited by H. C. van den Berg (Cambridge University Press, Cambridge, 1999). [5] P. Ch. Ivanov, L. A. N. Amaral, A. L. Goldberger, S. Havlin, M. G. Rosenblum, Z. Struzik, and H. E. Stanley, "Multifractality in Human Heartbeat Dynamics," Nature 399 (1999) 461.
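Of the approaches listed, detrended fluctuation analysis (i) is compact enough to sketch. The following minimal Python illustration (not the authors' code; function and variable names are ours) computes the DFA fluctuation function for synthetic white noise, for which the scaling exponent should come out near 0.5; interbeat-interval series analyzed this way are reported to yield markedly different exponents for healthy and pathological subjects.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis (DFA) of a 1-D series x.

    For each window size s: integrate the mean-subtracted series,
    split it into len(x)//s non-overlapping windows, remove a linear
    trend from each window, and collect the root-mean-square residual.
    The slope of log F(s) versus log s estimates the scaling exponent
    alpha (~0.5 for uncorrelated noise, ~1.0 for 1/f noise).
    """
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for s in scales:
        n = len(y) // s                      # number of full windows
        segments = y[:n * s].reshape(n, s)
        t = np.arange(s)
        resid = []
        for seg in segments:                 # linear detrend per window
            coef = np.polyfit(t, seg, 1)
            resid.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(resid)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                # white-noise test signal
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # expect ~0.5
```

In practice the exponent is read off as the slope of the log-log fit, as in the last line; crossovers between scaling regimes (the subject of ref. [2]) appear as changes of slope across ranges of s.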
Using SDI-12 with ST microelectronics MCU's
Saari, Alexandra; Hinzey, Shawn Adrian; Frigo, Janette Rose; Proicou, Michael Chris; Borges, Louis
2015-09-03
ST Microelectronics microcontrollers and processors are readily available, capable, and economical. Unfortunately, they lack the broad user base of similar offerings from Texas Instruments, Atmel, or Microchip. All of these devices could be useful in economical remote-sensing applications for environmental monitoring. With the increased need for environmental studies, and limited budgets, flexibility in hardware is very important. To that end, and in an effort to increase open support for ST devices, I share my team's experience in interfacing a common environmental sensor communication protocol (SDI-12) with ST devices.
Investigating strategies to improve crop germination when using SDI
Technology Transfer Automated Retrieval System (TEKTRAN)
As the nation's population increases and available irrigation water decreases, new technologies are being developed to maintain or increase production on fewer acres. One of these advancements has been the use of subsurface drip irrigation (SDI) on field crops. Research has shown that SDI is the m...
Lost in space: SDI struggles through its sixth year
MacDonald, B.W.
1989-09-01
After six years of debate, it is clear that Congress is willing to support a robust research program for SDI, but it is also clear that Congress will not support SDI annual outlays on the order of $10 billion. Thus the policy choice is between a good research program that meshes with fiscal reality and an inadequate and wasteful development program that continues to focus on preparing for a Phase I deployment for which the funds simply will not be available. The Bush administration so far seems trapped by its own rhetoric from coming to grips with the implications of the new SDI reality. The responsibility for getting SDI on a steadier course toward more realistic research objectives thus seems to lie with Congress in the near term. Since Congress has been reluctant to earmark SDI research funds for specific objectives, it will take a change in administration perceptions before SDI program goals can be shifted away from Phase I deployment. The only likely way this could happen in the near term would be as a result of a Congress-executive branch summit agreement on SDI objectives and funding levels. In the absence of such an agreement, SDI will be sailing under ever weaker fiscal and political winds and runs the risk of finding itself becalmed, working ceaselessly toward goals that will never be fulfilled.
Subsurface drip irrigation (SDI): Status of the technology in 2010
Technology Transfer Automated Retrieval System (TEKTRAN)
Subsurface drip irrigation (SDI), although a much smaller fraction of the microirrigated land area than surface drip irrigation, is growing at a much faster rate and is the subject of considerable research and educational efforts in the United States. This paper will discuss the growth in SDI, highl...
Teachers' Use of Transnumeration in Solving Statistical Tasks with Dynamic Statistical Software
ERIC Educational Resources Information Center
Lee, Hollylynne S.; Kersaint, Gladis; Harper, Suzanne R.; Driskell, Shannon O.; Jones, Dusty L.; Leatham, Keith R.; Angotti, Robin L.; Adu-Gyamfi, Kwaku
2014-01-01
This study examined a random stratified sample (n = 62) of teachers' work across eight institutions on three tasks that utilized dynamic statistical software. We considered how teachers may utilize and develop their statistical knowledge and technological statistical knowledge when investigating a statistical task. We examined how teachers…
Artificial intelligence applications in space and SDI: A survey
NASA Technical Reports Server (NTRS)
Fiala, Harvey E.
1988-01-01
The purpose of this paper is to survey existing and planned Artificial Intelligence (AI) applications to show that they are sufficiently advanced for 32 percent of all space and SDI (Strategic Defense Initiative) software to be AI-based. To best define the needs that AI can fill in space and SDI programs, the paper enumerates primary areas of research and lists generic application areas. Current and planned NASA and military space projects in AI are reviewed, largely in the selected area of expert systems. Finally, direct applications of AI to SDI are treated. The conclusion covers the importance of AI to space and SDI applications and, conversely, their importance to AI.
Air Force Satellite Control Network and SDI development
NASA Astrophysics Data System (ADS)
Bleier, T.
The Air Force Satellite Control Network (AFSCN) is a military, worldwide network of control centers and remote tracking sites (RTS) that supports a relatively large and growing constellation of DOD satellites. The near-term and long-term plans for the AFSCN are discussed, also taking into account the impact of the Strategic Defense Initiative (SDI) on the AFSCN. It is pointed out that the SDI adds a new dimension to the support the AFSCN provides to DOD satellites, because some SDI scenarios being considered include many more satellite platforms, each containing multiple kinetic energy weapons. Space-ground link sites are discussed, along with AFSCN control sites and communication between the RTS and control centers. Attention is given to changing roles and responsibilities, the Satellite Test Center (STC) as an excellent site for the R&D phase of SDI development, an operational concept for a highly proliferated weapons-platform architecture, and the goal of developing more survivable satellite systems.
Multifragmentation: New dynamics or old statistics?
Moretto, L.G.; Delis, D.N.; Wozniak, G.J.
1993-10-01
The understanding of the fission process as it has developed over the last fifty years has been applied to multifragmentation. Two salient aspects have been discovered: (1) a strong decoupling of the entrance and exit channels, with the formation of well-characterized sources; and (2) a statistical competition between two-, three-, four-, five-, ... n-body decays.
Segmenting Dynamic Human Action via Statistical Structure
ERIC Educational Resources Information Center
Baldwin, Dare; Andersson, Annika; Saffran, Jenny; Meyer, Meredith
2008-01-01
Human social, cognitive, and linguistic functioning depends on skills for rapidly processing action. Identifying distinct acts within the dynamic motion flow is one basic component of action processing; for example, skill at segmenting action is foundational to action categorization, verb learning, and comprehension of novel action sequences. Yet…
Dynamics and statistics of unstable quantum states
NASA Astrophysics Data System (ADS)
Sokolov, V. V.; Zelevinsky, V. G.
1989-11-01
The statistical theory of spectra formulated in terms of random matrices is extended to unstable states. The energies and widths of these states are treated as the real and imaginary parts of complex eigenvalues of an effective non-Hermitian Hamiltonian. Eigenvalue statistics are investigated under simple assumptions. If the coupling through common decay channels is weak, we obtain a Wigner distribution for the level spacings and a Porter-Thomas distribution for the widths, except for spacings smaller than the widths, where level repulsion fades out. Meanwhile, in the complex energy plane the repulsion of eigenvalues is quadratic, in accordance with the T-noninvariant character of decaying systems. In the opposite case of strong coupling with the continuum, k short-lived states are formed (k is the number of open decay channels). These states accumulate almost the whole total width, the remaining states becoming long-lived. Such a perestroika corresponds to the separation of direct processes (a nuclear analogue of Dicke coherent superradiance). At small channel number, Ericson fluctuations of the cross sections are found to be suppressed. The one-channel case is considered in detail. The joint distribution of energies and widths is obtained. The average cross sections and the density of unstable states are calculated.
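The weak-to-strong coupling restructuring described here is easy to reproduce numerically. The sketch below is a schematic illustration, not the paper's calculation: it takes the effective non-Hermitian Hamiltonian as H_eff = H - (i/2)γ vvᵀ, with H a GOE-like Hermitian part and v the coupling vector of a single open decay channel (k = 1), and reads the widths off as Γ = -2 Im E. At weak coupling the widths are shared Porter-Thomas-like among all states; at strong coupling a single short-lived "superradiant" state accumulates almost the entire summed width.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200                                    # number of internal states

# GOE-like Hermitian part: eigenvalues confined to roughly [-2, 2]
A = rng.standard_normal((N, N))
H = (A + A.T) / np.sqrt(2 * N)

v = rng.standard_normal(N)                 # single open decay channel

def max_width_fraction(gamma):
    """Fraction of the total width carried by the broadest state."""
    H_eff = H - 0.5j * gamma * np.outer(v, v)
    widths = -2.0 * np.linalg.eigvals(H_eff).imag
    return widths.max() / widths.sum()

weak = max_width_fraction(0.001)           # widths spread over many states
strong = max_width_fraction(10.0)          # one state takes almost all width
```

Because the anti-Hermitian part has rank one, the total width γ|v|² is fixed by the trace, so the contrast between the two regimes shows up directly in how that fixed budget is partitioned among eigenvalues.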
Photon Counts Statistics in Leukocyte Cell Dynamics
NASA Astrophysics Data System (ADS)
van Wijk, Eduard; van der Greef, Jan; van Wijk, Roeland
2011-12-01
In the present experiment, ultra-weak photon emission/chemiluminescence from isolated neutrophils was recorded. It is associated with the production of reactive oxygen species (ROS) in the "respiratory burst" process, which can be activated by PMA (phorbol 12-myristate 13-acetate). Commonly, the reaction is demonstrated utilizing the enhancer luminol; however, with highly sensitive photomultiplier equipment it can also be recorded without an enhancer. In that case, it can be hypothesized that photon count statistics may assist in understanding the underlying metabolic activity and cooperation of these cells. To study this hypothesis, leukocytes were stimulated with PMA, and the increased photon signals recorded in the quasi-stable period were analyzed utilizing the Fano factor at different window sizes. The Fano factor is defined as the variance over the mean of the number of photons within the observation time. The analysis demonstrated that the Fano factor of the true signal, but not of surrogate signals obtained by random shuffling, increases as the window size increases. It is concluded that photon count statistics, in particular Fano factor analysis, provides information regarding leukocyte interactions. This opens the perspective of utilizing this analytical procedure in (in vivo) inflammation research; however, this needs further validation.
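The Fano factor analysis described in this abstract is simple to reproduce on synthetic data (the example below is illustrative only; the count rates and modulation are invented, not the experimental values). Counts are summed into non-overlapping windows and the variance-to-mean ratio of the window sums is taken: for an uncorrelated Poissonian stream this ratio stays near 1 at every window size, whereas for a correlated (slowly modulated) stream it grows with window size, which is the signature reported for the true signal but not for shuffled surrogates.

```python
import numpy as np

def fano_factor(counts, window):
    """Variance-to-mean ratio of counts summed over non-overlapping windows."""
    n = len(counts) // window
    binned = counts[:n * window].reshape(n, window).sum(axis=1)
    return binned.var() / binned.mean()

rng = np.random.default_rng(42)
T = 10_000

# Uncorrelated Poissonian photon stream: Fano factor ~ 1 at any window size
poisson = rng.poisson(5.0, size=T)

# Correlated stream: same mean rate, but slowly modulated in time
t = np.arange(T)
modulated = rng.poisson(5.0 + 2.0 * np.sin(2 * np.pi * t / 1000))

f_poisson = fano_factor(poisson, 100)
f_small = fano_factor(modulated, 10)       # short windows: small excess
f_large = fano_factor(modulated, 500)      # long windows: large excess
```

Randomly shuffling the modulated series destroys the temporal correlations while preserving the count histogram, which is why the surrogate signals in the experiment show no growth of the Fano factor with window size.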
SDI-Based Groundwater Information Interoperability
NASA Astrophysics Data System (ADS)
Brodaric, B.; Boisvert, E.
2007-12-01
Though groundwater data are important inputs to hydrologic decision-making, they are highly distributed and heterogeneous, and thus difficult to access in a coordinated manner. The Geological Survey of Canada (GSC) is developing an information system for coordinated groundwater data access, using the standards and technologies of Spatial Data Infrastructures (SDI). In mid-stage development, the system is designed to manage and disseminate data produced by GSC scientists, as well as potentially disseminate data produced by other groundwater agencies. The system involves a typical three-tiered, mediator-wrapper architecture that includes a data tier, a mediator tier, and an applications tier. At the data tier local data sources are wrapped by OGC web services (WFS, WMS), which deliver diversely structured data to the mediator tier. The mediator tier acts as: (1) a central registry for the distributed data and other services; (2) a translator of the local data to the standard data format, GroundWater Markup Language; and (3) a consistent set of OGC web services that enable users to access the distributed data as one source. The applications tier involves both GSC and third-party web applications, such as analysis tools or on-line atlases, that provide user interfaces to the system. Apart from the data format standards used to achieve schematic interoperability, the system also deploys some light-weight data content standards to move toward semantic interoperability. These content standards include the definition of common categories for datasets such as standard subject classifications and map layers. A demonstration of the working prototype will be available, as well as discussion of the architecture of the system and the impacts on interoperability. The intent of the development is to grow the system into a national enterprise with a broad range of contributors and users.
Protein electron transfer: Dynamics and statistics
NASA Astrophysics Data System (ADS)
Matyushov, Dmitry V.
2013-07-01
Electron transfer between redox proteins participating in the energy chains of biology is required to proceed with high energetic efficiency, minimizing losses of redox energy to heat. Within the standard models of electron transfer, this requirement, combined with the need for unidirectional (preferably activationless) transitions, translates into the need to minimize the reorganization energy of electron transfer. This design program is, however, unrealistic for proteins whose active sites are typically positioned close to the polar and flexible protein-water interface to allow inter-protein electron tunneling. The high flexibility of the interfacial region makes both the hydration water and the surface protein layer act as highly polar solvents. The reorganization energy, as measured by fluctuations, is not minimized but rather maximized in this region. Natural systems in fact exploit the broad spectrum of interfacial electrostatic fluctuations, but in ways not anticipated by the standard models based on equilibrium thermodynamics. The combination of the broad spectrum of static fluctuations with their dispersive dynamics offers the mechanism of dynamical freezing (ergodicity breaking) of subsets of nuclear modes on the time-scale of reaction/residence of the electron at a redox cofactor. The separation of time-scales of nuclear modes coupled to electron transfer allows dynamical freezing. In particular, the separation between the relaxation time of electro-elastic fluctuations of the interface and the time of conformational transitions of the protein caused by the changing redox state results in dynamical freezing of the latter for sufficiently fast electron transfer. The observable consequence of this dynamical freezing is significantly different reorganization energies describing the curvature at the bottom of electron-transfer free energy surfaces (large) and the distance between their minima (Stokes shift, small).
The ratio of the two reorganization energies establishes the parameter by which the energetic efficiency of protein electron transfer is increased relative to the standard expectations, thus minimizing losses of energy to heat. Energetically efficient electron transfer occurs in a chain of conformationally quenched cofactors and is characterized by flattened free energy surfaces, reminiscent of the flat and rugged landscape at the stability basin of a folded protein.
Statistics and dynamics of the perturbed universe
NASA Astrophysics Data System (ADS)
Lemson, G.
1995-09-01
In the not too distant past, our theorizing about the nature of the Universe we live in was not much limited by observational constraints. Consequently, no true science could be developed dealing with the nature of the Universe at large: its origin, its present state, and its future. This was the realm of religion and philosophy. In this century, revolutionary developments in physics have provided the framework within which to describe the Universe as a whole, and have finally made it possible to obtain tentative answers to questions we have only recently learned to ask. In this thesis, I present investigations that deal with a small part of the theory of cosmology. In particular, I have investigated certain aspects of the theory of structure formation in the Universe. This subject has been studied extensively in the last few decades. It originated from the realization that the Universe has not always been the same as observed at present. The Universe as we observe it today is filled with objects of a great variety of sizes and shapes. In the second and third decades of this century, Hubble discovered that our Universe is expanding. This implies that in the past the Universe was smaller and therefore denser. All the structures we observe nowadays, if also existing in the past, would have been closer together and at some time would have touched and overlapped. Furthermore, the theories that were developed to describe such an expanding Universe in quantitative detail required that the Universe be homogeneous and isotropic, i.e. that it look the same at every position and in every direction. All mass and radiation must once have been distributed uniformly throughout space. With these theories, Gamow (1946, 1948ab) predicted that in the past the Universe must have been much hotter than at present, and that the afterglow of this epoch should still be observable as a faint radio signal at a temperature a few degrees above absolute zero.
In the mid-sixties, Penzias and Wilson discovered the corresponding radiation field, at a temperature of roughly 3 K (Penzias & Wilson, 1965). It soon appeared that this microwave background radiation was isotropic to a high degree, which confirmed the assumptions made about the homogeneity of the early Universe. At present, however, we see that the Universe is no longer featureless and smooth. Starting from the smallest scales, we see matter organized in structures up to very large scales: from planets to stars to stellar systems to galaxies to groups and clusters of galaxies, up to superclusters, where clusters and galaxies are organized in the largest structures known. Somewhere during the evolution of the Universe, these structures must have developed out of the featureless, uniform sea of matter and radiation. Various theories have been developed to explain the emergence of structure, but in this thesis I will concentrate exclusively on the most generally accepted one, that of gravitational instability. In this theory it is assumed that small fluctuations in the density were present in the early Universe, and that these would grow under the influence of gravity into the presently observed structures. There is actually a rather complete theory of the early stages of this process, the regime where the deviations from homogeneity are small. In that case, the inhomogeneous field may be seen as a small disturbance to the uniform model, and the standard apparatus of perturbation theory may be applied. In this thesis I investigate the later stages of this process of structure formation, where the fluctuations have grown to such a size that this 'linear' perturbation approach breaks down. There is as yet no comprehensive model describing this 'nonlinear' regime as successfully as the linear theory describes the early stages of structure formation.
Instead, the problem is approached from many different directions, using different, approximate models for describing the dynamics and other techniques for describing the resulting patterns in the matter distribution. Throughout this thesis I will argue that in fact this is the greatest hindrance for progress in this field; namely, the dynamics of the matter distribution and its structural characteristics are described using different techniques, and it is difficult to translate the results from one into the other. In the rest of this Introduction I will explain how this comes about, using the example of the linear regime, where this discrepancy does not yet exist. First I will give a more detailed description of the homogeneous Universe and then apply the perturbation approach to derive the equation governing the evolution of the density fluctuations in the linear regime. From these it is easy to see how the development of nonlinearities spoils the unity between description and dynamics, and in the following section I will give a short description of some of the models that have been used to treat this regime. At the end of this Introduction I will then give an overview of the work presented in this thesis.
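The linear-regime evolution equation the introduction alludes to is not reproduced in this summary; it is, however, a standard textbook result. In the usual notation of cosmological perturbation theory (density contrast δ = (ρ − ρ̄)/ρ̄, scale factor a, mean matter density ρ̄), the growth of small fluctuations in a matter-dominated expanding background obeys:

```latex
% Linearized growth of the density contrast in an expanding background.
% Standard result of cosmological perturbation theory, not quoted from this thesis:
\ddot{\delta} + 2\,\frac{\dot{a}}{a}\,\dot{\delta} = 4\pi G \,\bar{\rho}\,\delta
```

The Hubble friction term 2(ȧ/a)δ̇ slows the growth relative to the exponential collapse of a static medium, which is why linear structure growth in an expanding Universe is a power law in time.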
SDI (Strategic Defense Initiative) and national security policy. Research report
Davis, R.W.
1988-04-01
The paper attempts to answer the fundamental question of whether SDI can make a significant contribution to US national security. It uses as its evaluation criteria historical arms-control measurements of stability, reduction in the probability of war, reduction in the consequences of war, economic benefits, and political benefits. A historical discussion of US nuclear strategy development, along with Soviet thinking, is provided as a backdrop to set the stage for an analysis of the reasons for President Reagan's March 1983 speech. The objectives of SDI are discussed along with the major concerns expressed by the program's critics. Using the evaluation criteria defined above, the author analyzes SDI's potential position in a long-term integrated national strategy that includes arms control and competitive strategies.
Applying GPS to ERIS and other SDI applications
NASA Astrophysics Data System (ADS)
Hoefener, C. E.; Clark, J.
The necessity of adapting GPS technology to SDI program requirements is demonstrated. Basic measurement techniques for direct hit, near miss, and miss situations are described. It is concluded that GPS is far superior to tracking radars for time, space, and position information (TSPI) applications in the SDI test and evaluation program of the future. It is noted that in the case of ground launched interceptors such as ERIS or HEDI, GPS frequency translators would be used both in the interceptors and in the reentry vehicle targets.
Statistical coarse-graining of molecular dynamics into peridynamics.
Silling, Stewart Andrew; Lehoucq, Richard B.
2007-10-01
This paper describes an elegant statistical coarse-graining of molecular dynamics at finite temperature into peridynamics, a continuum theory. Peridynamics is an efficient alternative to molecular dynamics enabling dynamics at larger length and time scales. In direct analogy with molecular dynamics, peridynamics uses a nonlocal model of force and does not employ stress/strain relationships germane to classical continuum mechanics. In contrast with classical continuum mechanics, the peridynamic representation of a system of linear springs and masses is shown to have the same dispersion relation as the original spring-mass system.
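The claimed equivalence can be checked directly in the simplest case: a 1-D chain of masses m coupled by springs of stiffness K and spacing a has the textbook dispersion relation ω(k) = 2√(K/m)·|sin(ka/2)|, which any faithful coarse-graining must reproduce. A minimal numerical sketch (parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def spring_mass_dispersion(k_wave, stiffness, mass, spacing):
    """Dispersion relation omega(k) = 2*sqrt(K/m)*|sin(k*a/2)| of a 1-D
    spring-mass chain; a nonlocal continuum model must match this curve."""
    return 2.0 * np.sqrt(stiffness / mass) * np.abs(np.sin(k_wave * spacing / 2.0))

K, m, a = 1.0, 1.0, 1.0          # illustrative units
k_small = 1e-4                   # long-wavelength limit
omega = spring_mass_dispersion(k_small, K, m, a)
print(omega / k_small)           # ≈ a*sqrt(K/m) = 1.0, the continuum sound speed
```

In the long-wavelength limit ω/k approaches the continuum sound speed, while at the zone boundary k = π/a the discrete chain (and a matching peridynamic model) deviates strongly from the classical local-elasticity line ω = ck.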
A Dynamical System Having Deterministic Behavior Governed by Statistics
NASA Astrophysics Data System (ADS)
Baylor, Martha-Elizabeth; Anderson, Dana; Popovic, Zoya
2007-03-01
We describe a holographic optoelectronic circuit whose dynamics to lowest order is described by a Lotka-Volterra system in which the parameters are determined by the second- and fourth- order statistical moments of a collection of input signals. The system is multistable, metastable, or monostable, depending on whether the input signal statistical fourth moments are sub-Gaussian, Gaussian, super-Gaussian, or a mixture of statistical classes. More generally the circuit gain is directly derived from the input-space statistical characteristic function. We use the dynamical properties to demonstrate the cocktail party effect in which the circuit unscrambles a mixed pair of audio or radio frequency signals in the absence of any a priori information about the mixture.
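The sub-/super-Gaussian distinction the abstract invokes is conventionally measured by the excess kurtosis of the input signals (negative for sub-Gaussian, positive for super-Gaussian, zero for Gaussian). A small sketch of that classification step, not taken from the paper's circuit model:

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (zero for a Gaussian signal)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.mean(z**4) - 3.0

def classify(x, tol=0.1):
    """Label a signal by its fourth-moment statistics."""
    k = excess_kurtosis(x)
    if k < -tol:
        return "sub-Gaussian"    # e.g. uniform or sinusoidal signals
    if k > tol:
        return "super-Gaussian"  # e.g. sparse or impulsive signals
    return "Gaussian"

rng = np.random.default_rng(0)
print(classify(rng.uniform(-1, 1, 100_000)))  # uniform noise: sub-Gaussian
print(classify(rng.laplace(0, 1, 100_000)))   # Laplacian noise: super-Gaussian
```

The stability class of the Lotka-Volterra fixed points then follows from the sign of these moments, which is how the circuit separates mixtures of statistically distinct sources.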
Design and Installation of SDI Systems in North Carolina
Technology Transfer Automated Retrieval System (TEKTRAN)
As a part of the humid Southeast, North Carolina’s climate, topography, soils, cropping systems, and water sources require special consideration when considering and implementing a subsurface drip irrigation (SDI) system. This publication is not a step-by-step design manual, but it will help you in ...
Selective Dissemination of Information (SDI) in a Technological University Library
ERIC Educational Resources Information Center
Tell, Bjorn V.
1972-01-01
The Royal Institute of Technology Library in Stockholm operates a computerized SDI service to industry which has encouraged greater use of the library's collections through loans and photocopies. A useful by-product is a union catalog of serials in machine-readable form, from which a photo-composed list was published. (4 references) (Author/SJ)
Soviet SDI Rhetoric: The "Evil Empire" Vision of Mikhail Gorbachev.
ERIC Educational Resources Information Center
Kelley, Colleen E.
The symbolic presence of Ronald Reagan's Strategic Defense Initiative (SDI) has been and continues to be the pivot point in all summitry rhetoric between the American President and Soviet General Secretary Mikhail Gorbachev. To examine some of the rhetorical choices made by Gorbachev to dramatize his vision of why Ronald Reagan refuses to…
SDI Considerations for North Carolina Growers and Producers
Technology Transfer Automated Retrieval System (TEKTRAN)
Humid areas, such as the southeastern and midsouthern United States, have particular climate, topography, soils, cropping systems, and water sources that require special consideration when implementing a subsurface drip irrigation (SDI) system. These factors are normally different enough in value or...
Ames Selective Dissemination of Information (SDI) System Operating Manual.
ERIC Educational Resources Information Center
Anderson, Lloyd E.; Wegner, Waldo W.
The Ames Selective Dissemination of Information (SDI) System is an attempt to efficiently place rapidly increasing amounts of information into the hands of scientists and engineers who can exploit it. It is a computerized current awareness system designed to increase researchers' literature searching capabilities by bringing to their attention…
Exploring Foundation Concepts in Introductory Statistics Using Dynamic Data Points
ERIC Educational Resources Information Center
Ekol, George
2015-01-01
This paper analyses introductory statistics students' verbal and gestural expressions as they interacted with a dynamic sketch (DS) designed using "Sketchpad" software. The DS involved numeric data points built on the number line whose values changed as the points were dragged along the number line. The study is framed on aggregate…
Statistical determination of space shuttle component dynamic magnification factors
NASA Technical Reports Server (NTRS)
Lehner, F.
1973-01-01
A method is presented of obtaining vibration design loads for components and brackets. Dynamic Magnification Factors from applicable Saturn/Apollo qualification, reliability, and vibroacoustic tests have been statistically formulated into design nomographs. These design nomographs have been developed for different component and bracket types, mounted on backup structure or rigidly mounted and excited by sinusoidal or random inputs. Typical nomographs are shown.
Seasonal statistical-dynamical forecasts of droughts over Western Iberia
NASA Astrophysics Data System (ADS)
Ribeiro, Andreia; Pires, Carlos
2015-04-01
The Standardized Precipitation Index (SPI) is used here as a drought predictand in order to assess seasonal drought predictability over western Iberia. Hybrid (statistical-dynamical) long-range forecasts of the drought index SPI are estimated with lead times up to 6 months over the period 1987-2008. Operational forecasts of geopotential height and total precipitation from the UK Met Office operational forecasting system are considered. Past ERA-Interim reanalysis data, prior to the forecast launch, are used to build a set of SPI predictors integrating recent past observations. Then a two-step hybridization procedure is adopted: in the first step, both the forecasted and observational large-scale fields are subjected to a Principal Component Analysis (PCA), and forecasted PCs and persistent PCs are used as predictors. The second hybridization step consists of a statistical/hybrid downscaling to the regional scale based on regression techniques, after selection of the statistically significant predictors. The large-scale predictors from past observations and operational forecasts are used to downscale the SPI, and the advantage of combining predictors with both dynamical and statistical backgrounds in the prediction of drought conditions at different lags is evaluated. The SPI estimates and the added value of combining dynamical and statistical methods are evaluated in cross-validation mode. Results show that winter is the most predictable season, and that most of the predictive power lies in the large-scale fields and at the shorter lead times. The hybridization improves drought forecasting skill in comparison to purely dynamical forecasts, since the persistence of large-scale patterns plays the main role in the long-range predictability of precipitation.
These findings provide clues about the predictability of the SPI, particularly in Portugal, and may contribute to the predictability of crop yields and offer some guidance to users (such as farmers) in their decision-making processes.
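For context, the SPI predictand itself is conventionally computed by fitting a distribution (commonly a gamma) to accumulated precipitation and mapping the fitted CDF onto a standard normal, so that SPI values are z-scores of drought severity. A minimal sketch with synthetic data (operational SPI uses per-calendar-month fits over long records; this simplification is ours, not the paper's):

```python
import numpy as np
from scipy import stats

def spi(precip_totals):
    """Standardized Precipitation Index: fit a gamma distribution to
    accumulated precipitation, then map its CDF onto a standard normal."""
    shape, loc, scale = stats.gamma.fit(precip_totals, floc=0.0)
    cdf = stats.gamma.cdf(precip_totals, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)  # z-scores: negative = drier than typical

rng = np.random.default_rng(1)
monthly = rng.gamma(2.0, 50.0, size=240)  # 20 years of synthetic monthly totals
z = spi(monthly)
print(np.mean(z), np.std(z))  # close to (0, 1) by construction
```

SPI < -1 is then commonly read as moderate drought and SPI < -2 as extreme drought, which is the quantity the hybrid statistical-dynamical scheme above is forecasting.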
Dynamical control of 'statistical' ion-molecule reactions
NASA Astrophysics Data System (ADS)
Liu, Jianbo; Anderson, Scott L.
2005-03-01
Experimental and theoretical studies of two ion-molecule reactions are reviewed. The reactions of H2CO+ and C2H2+ with methane are both mediated by long-lived complexes at low collision energies. The complex lifetimes, product recoil energy and angular distributions, and product branching ratios are all in good agreement with predictions based on statistical decay of the intermediate complexes. Nonetheless, it is clear that both reactions are, in fact, controlled by dynamical effects. In particular, reactivity is strongly and mode-specifically dependent on the vibrational state of the reactants, whereas a statistical mechanism would depend only on the vibrational energy content. The vibrational effects reflect the dynamics involved in the formation and decay of weakly bound precursor complexes, before the collisional interaction can scramble the initial state information.
Statistical Computations Underlying the Dynamics of Memory Updating
Gershman, Samuel J.; Radulescu, Angela; Norman, Kenneth A.; Niv, Yael
2014-01-01
Psychophysical and neurophysiological studies have suggested that memory is not simply a carbon copy of our experience: Memories are modified or new memories are formed depending on the dynamic structure of our experience, and specifically, on how gradually or abruptly the world changes. We present a statistical theory of memory formation in a dynamic environment, based on a nonparametric generalization of the switching Kalman filter. We show that this theory can qualitatively account for several psychophysical and neural phenomena, and present results of a new visual memory experiment aimed at testing the theory directly. Our experimental findings suggest that humans can use temporal discontinuities in the structure of the environment to determine when to form new memory traces. The statistical perspective we offer provides a coherent account of the conditions under which new experience is integrated into an old memory versus forming a new memory, and shows that memory formation depends on inferences about the underlying structure of our experience. PMID:25375816
Dynamics, stability, and statistics on lattices and networks
Livi, Roberto
2014-07-15
These lectures aim at surveying some dynamical models that have been widely explored in the recent scientific literature as case studies of complex dynamical evolution emerging from the spatio-temporal organization of several coupled dynamical variables. The first message is that a suitable mathematical description of such models needs tools and concepts borrowed from the general theory of dynamical systems and from out-of-equilibrium statistical mechanics. The second message is that the overall scenario is definitely richer than the standard problems in these fields. For instance, systems exhibiting complex unpredictable evolution do not necessarily exhibit deterministic chaotic behavior (i.e., Lyapunov chaos), as happens for dynamical models made of a few degrees of freedom. In fact, a very large number of spatially organized dynamical variables may yield unpredictable evolution even in the absence of Lyapunov instability. Such a mechanism may emerge from the combination of spatial extension and nonlinearity. Moreover, spatial extension allows one to naturally introduce disorder, or heterogeneity of the interactions, as important ingredients for complex evolution. It is worth pointing out that the models discussed in these lectures share such features, even though they were inspired by quite different physical and biological problems. In these lectures we also describe some of the technical tools employed for the study of such models, e.g., Lyapunov stability analysis, unpredictability indicators for "stable chaos," hydrodynamic description of transport in low spatial dimension, spectral decomposition of stochastic dynamics on directed networks, etc.
Statistical energy conservation principle for inhomogeneous turbulent dynamical systems
Majda, Andrew J.
2015-01-01
Understanding the complexity of anisotropic turbulent processes over a wide range of spatiotemporal scales in engineering shear turbulence as well as climate atmosphere ocean science is a grand challenge of contemporary science with important societal impact. In such inhomogeneous turbulent dynamical systems there is a large dimensional phase space with a large dimension of unstable directions where a large-scale ensemble mean and the turbulent fluctuations exchange energy and strongly influence each other. These complex features strongly impact practical prediction and uncertainty quantification. A systematic energy conservation principle is developed here in a Theorem that precisely accounts for the statistical energy exchange between the mean flow and the related turbulent fluctuations. This statistical energy is a sum of the energy in the mean and the trace of the covariance of the fluctuating turbulence. This result applies to general inhomogeneous turbulent dynamical systems including the above applications. The Theorem involves an assessment of statistical symmetries for the nonlinear interactions and a self-contained treatment is presented below. Corollary 1 and Corollary 2 illustrate the power of the method with general closed differential equalities for the statistical energy in time either exactly or with upper and lower bounds, provided that the negative symmetric dissipation matrix is diagonal in a suitable basis. Implications of the energy principle for low-order closure modeling and automatic estimates for the single point variance are discussed below. PMID:26150510
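The statistical energy defined in the abstract, the energy of the ensemble mean plus the trace of the covariance of the fluctuations, can be estimated directly from an ensemble of states. A small sketch with a synthetic Gaussian ensemble (illustrative only, not the paper's turbulent system):

```python
import numpy as np

def statistical_energy(ensemble):
    """E = 0.5 * (|u_bar|^2 + tr(R)): energy of the ensemble mean plus the
    trace of the fluctuation covariance (the total variance)."""
    u = np.asarray(ensemble, dtype=float)      # shape (members, dimensions)
    u_bar = u.mean(axis=0)                     # large-scale ensemble mean
    fluct = u - u_bar                          # turbulent fluctuations
    trace_R = np.mean(np.sum(fluct**2, axis=1))  # tr of sample covariance
    return 0.5 * (np.dot(u_bar, u_bar) + trace_R)

rng = np.random.default_rng(2)
# 2-D ensemble: mean (3, 4) gives |u_bar|^2 = 25; unit variance per dim gives tr(R) = 2
ens = rng.normal([3.0, 4.0], 1.0, size=(200_000, 2))
print(statistical_energy(ens))  # ≈ 0.5 * (25 + 2) = 13.5
```

The Theorem's content is that, under the stated statistical symmetries, this combined quantity obeys a closed differential equality in time even though energy is continually exchanged between the mean and the fluctuations.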
NASA Astrophysics Data System (ADS)
Zheng, Yujun; Yi, Xizhang; Guan, Daren; Meng, Qingtian
2000-05-01
A dynamical Lie algebraic approach to the statistical dynamics of rotationally inelastic gas-surface scattering is described. This method is applied to the study of the scattering of NO from the Ag(111) surface. Statistical average values of some physical observables, such as the translational-to-rotational (T→R) energy transfer and the interaction potential, and their dependence on various dynamic variables of the system, are given analytically. The calculations predict a strong dependence of the average energy transfer and average interaction potential on temperature and the incident translational energy. The results imply that the dynamical Lie algebraic method has a wide range of validity for describing the statistical dynamics of gas-surface scattering.
Soviet military on SDI (Strategic Defense Initiative). Professional paper
Fitzgerald, M.C.
1987-08-01
Numerous Western analysts have suggested that all American assessments of SDI should proceed not only from a consideration of American intentions, but also from the outlook of Soviet perceptions. Since 23 March 1983, the prevailing tone of Soviet military writings on SDI has been overwhelmingly negative. Myron Hedlin has concluded that this harsh reaction to a U.S. initiative still years from realization suggests both a strong concern about the ultimate impact of these plans on the strategic balance, and a perceived opportunity for scoring propaganda points. Indeed, the present review of Soviet writings since President Reagan's so-called Star Wars speech has yielded both objective Soviet concerns and regressions to psychological warfare. This, in turn, has necessitated a careful effort to separate rhetoric from more official assessments of SDI. While there has long been dispute in the West over the validity of Soviet statements, they have time and again been subsequently confirmed in Soviet hardware, exercises, and operational behavior. Some Western analysts will nonetheless contend that the Soviet statements under examination in this study are merely a commodity for export.
A Stochastic Fractional Dynamics Model of Rainfall Statistics
NASA Astrophysics Data System (ADS)
Kundu, Prasun; Travis, James
2013-04-01
Rainfall varies in space and time in a highly irregular manner and is naturally described in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed, based on a stochastic differential equation of fractional order for the point rain rate, that allows a concise description of the second-moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted, together with other model parameters, to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain, but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field are spatially homogeneous, isotropic, and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida, and in Kwajalein Atoll, Marshall Islands, in the tropical Pacific. We estimate the parameters by tuning them to the second-moment statistics of the radar data. The model predictions are then found to fit the second-moment statistics of the gauge data reasonably well without any further adjustment. Some data sets containing periods of non-stationary behavior, involving occasional anomalously correlated rain events, present a challenge for the model.
Statistical characterization of complex object structure by dynamic tomography
NASA Astrophysics Data System (ADS)
Tillack, Gerd-Rüdiger; Goebbels, Jürgen; Illerhaus, Bernhard; Artemiev, Valentin; Naumov, Alexander
2002-05-01
Considering modern materials like reinforced plastics or metal foams, the mechanical properties of a component are not determined by any single structural element, such as a single fiber in a composite. Rather, the ensemble mean and correlation properties of all structural elements form the mechanical properties of the component. Accordingly, a statistical description of material properties on a macroscopic scale allows the component's mechanical behavior or aging to be characterized. State-of-the-art tomographic techniques assign a measure of material properties to a volume element. The discretization, i.e., the volume or size of a single element, is limited mainly by the physical mechanisms and the equipment used for data acquisition. In any case, the result of reconstruction yields a statistical average within the considered volume element. To evaluate the integrity of the component, the determined measures have to be correlated with its mechanical properties. Special reconstruction algorithms are investigated that allow the statistical description of complex object structures, including their dynamics. The algorithm is based on the Kalman filter using a statistical prior. The prior includes knowledge about the covariance matrix as well as a prior assumption about the probability density distribution function. The resulting algorithm is recursive, yielding a quasi-optimal solution at every reconstruction step. The applicability of the developed algorithm is discussed for the investigation of a specimen made from aluminum foam.
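The recursive, quasi-optimal estimator described here is, at its core, a Kalman-filter update that blends a statistical prior with new projection data. A generic single update step on a two-voxel toy object, purely illustrative (the paper's priors and system model are more elaborate):

```python
import numpy as np

def kalman_update(x_prior, P_prior, H, R, y):
    """One recursive update: blend a prior estimate x_prior (covariance
    P_prior) with new measurement y = H x + noise (noise covariance R)."""
    S = H @ P_prior @ H.T + R                  # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_post = x_prior + K @ (y - H @ x_prior)   # quasi-optimal posterior mean
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post

# Two-voxel toy object measured by a single ray summing both voxels
x_prior = np.zeros(2)              # prior reconstruction: empty object
P_prior = np.eye(2)                # prior uncertainty
H = np.array([[1.0, 1.0]])         # one projection: sum of the two voxels
R = np.array([[0.01]])             # low measurement noise
y = np.array([2.0])                # observed ray sum
x_post, P_post = kalman_update(x_prior, P_prior, H, R, y)
print(x_post)                      # both voxels pulled toward ≈ 0.995
```

With each further projection the same update is repeated, which is what makes the reconstruction recursive and lets prior covariance knowledge regularize an otherwise under-determined inversion.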
General Properties of Landscapes: Vacuum Structure, Dynamics and Statistics
NASA Astrophysics Data System (ADS)
Zukowski, Claire Elizabeth
Even the simplest extra-dimensional theory, when compactified, can lead to a vast and complex landscape. To make progress, it is useful to focus on generic features of landscapes and compactifications. In this work we will explore universal features and consequences of (i) vacuum structure, (ii) dynamics resulting from symmetry breaking, and (iii) statistical predictions for low-energy parameters and observations. First, we focus on deriving general properties of the vacuum structure of a theory independent of the details of the geometry. We refine the procedure for performing compactifications by proposing a general gauge-invariant method to obtain the full set of Kaluza-Klein towers of fields for any internal geometry. Next, we study dynamics in a toy model for flux compactifications. We show that the model exhibits symmetry-breaking instabilities for the geometry to develop lumps, and suggest that similar dynamical effects may occur generically in other landscapes. The questions of the observed arrow of time as well as the observed value of the neutrino mass lead us to consider statistics within a landscape, and we verify that our observations are in fact typical given the correct vacuum structure and (in the case of the arrow of time) initial conditions. Finally, we address the question of subregion duality in AdS/CFT, arguing for a criterion for a bulk region to be reconstructable from a given boundary subregion by local operators. While of less direct relevance to cosmological space-times, this work provides an improved understanding of the UV/IR correspondence, a principle that underlies the construction of many holographically-inspired measures used to make statistical predictions in landscapes.
Hydrological responses to dynamically and statistically downscaled climate model output
Wilby, R.L.; Hay, L.E.; Gutowski, W.J., Jr.; Arritt, R.W.; Takle, E.S.; Pan, Z.; Leavesley, G.H.; Clark, M.P.
2000-01-01
Daily rainfall and surface temperature series were simulated for the Animas River basin, Colorado using dynamically and statistically downscaled output from the National Center for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) re-analysis. A distributed hydrological model was then applied to the downscaled data. Relative to raw NCEP output, downscaled climate variables provided more realistic simulations of basin scale hydrology. However, the results highlight the sensitivity of modeled processes to the choice of downscaling technique, and point to the need for caution when interpreting future hydrological scenarios.
Role of quantum statistics in multi-particle decay dynamics
Marchewka, Avi; Granot, Er’el
2015-04-15
The role of quantum statistics in the decay dynamics of a multi-particle state, which is suddenly released from a confining potential, is investigated. For an initially confined double-particle state, the exact dynamics is presented for both bosons and fermions. The time-evolution of the probability of measuring the two particles is evaluated and some counterintuitive features are discussed. For instance, it is shown that although there is a higher chance of finding the two bosons (as opposed to fermions, and even distinguishable particles) at the initial trap region, there is a higher chance (higher than for fermions) of finding them on two opposite sides of the trap, as if the repulsion between bosons were stronger than the repulsion between fermions. The results are demonstrated by numerical simulations and are calculated analytically in the short-time approximation. Furthermore, experimental validation is suggested.
Solar wind dynamic pressure variations: Quantifying the statistical magnetospheric response
NASA Technical Reports Server (NTRS)
Sibeck, D. G.
1990-01-01
Solar wind dynamic pressure variations are common and have large amplitudes. Existing models for the transient magnetospheric and ionospheric response to solar wind dynamic pressure variations are quantified. The variations drive large-amplitude (approximately 1 R_E) magnetopause motion with velocities of approximately 60 km/s and transient dayside ionospheric flows of 2 km/s which are organized into double convection vortices. Ground magnetometer signatures are more pronounced under the auroral ionosphere, where they reach 60 to 300 nT, and under the equatorial electrojet. A statistical comparison of transient ground magnetometer events seen at a South Pole station and at geosynchronous orbit indicates that all but the weakest ground events are associated with clear compressional signatures at dayside geosynchronous orbit.
Extreme event statistics of daily rainfall: dynamical systems approach
NASA Astrophysics Data System (ADS)
Cigdem Yalcin, G.; Rabassa, Pau; Beck, Christian
2016-04-01
We analyse the probability densities of daily rainfall amounts at a variety of locations on Earth. The observed distributions of the amount of rainfall fit well to a q-exponential distribution with exponent q ≈ 1.3. We discuss possible reasons for the emergence of this power law. In contrast, the waiting time distribution between rainy days is observed to follow a near-exponential distribution. A careful investigation shows that a q-exponential with q ≈ 1.05 yields the best fit of the data. A Poisson process where the rate fluctuates slightly in a superstatistical way is discussed as a possible model for this. We discuss the extreme value statistics for extreme daily rainfall, which can potentially lead to flooding. This is described by Fréchet distributions, as the corresponding distributions of the amount of daily rainfall decay with a power law. Looking at extreme event statistics of waiting times between rainy days (leading to droughts for very long dry periods), we obtain from the observed near-exponential decay of waiting times extreme event statistics close to Gumbel distributions. We discuss superstatistical dynamical systems as simple models in this context.
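The q-exponential density referred to above can be written down and checked numerically. The sketch below is illustrative only: the parameter values (q = 1.3 and a rate lam = 0.5) are hypothetical, not fitted to the rainfall data.

```python
import numpy as np

def q_exponential_pdf(x, q, lam):
    """Tsallis q-exponential density for 1 < q < 2:
    p(x) = (2 - q) * lam * [1 + (q - 1) * lam * x]^(-1/(q-1)).
    As q -> 1 it reduces to the ordinary exponential lam * exp(-lam * x);
    for q > 1 the tail decays as a power law x^(-1/(q-1))."""
    return (2 - q) * lam * (1 + (q - 1) * lam * x) ** (-1.0 / (q - 1))

x = np.linspace(0.0, 50.0, 1000)
p = q_exponential_pdf(x, q=1.3, lam=0.5)
```

For q = 1.3 the tail exponent is 1/(q - 1) ≈ 3.3, so the density is normalizable but much heavier-tailed than an exponential, which is what makes Fréchet (rather than Gumbel) extreme value statistics appear for the rainfall amounts.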
Moments of probable seas: statistical dynamics of Planet Ocean
NASA Astrophysics Data System (ADS)
Holloway, Greg
The ocean is too big. From the scale of the planetary radius to the scales of turbulent microstructure, the range of length scales spans a factor of 10^9. Likewise for time scales. Classical geophysical fluid dynamics does not have an apparatus for dealing with such complexity, while `brute force' computing on the most powerful supercomputers, extant or presently foreseen, barely scratches this complexity. Yet the everywhere-swirling-churning ocean interacts unpredictably in climate history and climate future - against which we attempt to devise planetary stewardship. Can we better take into account the unpredictability of oceans to improve upon present ocean/climate forecasting? What to do? First, recognize that our goal is to comprehend probabilities of possible oceans. Questions we would ask are posed as moments (expectations). Then the dynamical goal is clear: we seek equations of motion of moments of probable oceans. Classical fluid mechanics offers part of the answer but fails to recognize statistical dynamical aspects (missing the arrow of time from past to future). At the level of probabilities of oceans, the missing physics emerges: moments are forced by gradients of entropy with respect to moments. Time regains its arrow, and first (simplest) approximations to entropy-gradient forces enhance the fidelity of ocean theories and practical models.
Statistical work-energy theorems in deterministic dynamics
NASA Astrophysics Data System (ADS)
Kim, Chang Sub
2015-07-01
We theoretically explore the Bochkov-Kuzovlev-Jarzynski-Crooks work theorems in a finite system subject to external control, which is coupled to a heat reservoir. We first elaborate the mechanical energy balance between the system and the surrounding reservoir and then proceed to formulate its statistical counterpart under the general nonequilibrium conditions. Consequently, a consistency condition is derived, underpinning the nonequilibrium equalities, both in the framework of the system-centric and nonautonomous Hamiltonian pictures, and its utility is examined in a few examples. Also, we elucidate that the symmetric fluctuation associated with forward and backward manipulation of the nonequilibrium work is contingent on time-reversal invariance of the underlying mesoscopic dynamics.
Forecasting: it is not about statistics, it is about dynamics.
Judd, Kevin; Stemler, Thomas
2010-01-13
In 1963, the mathematician and meteorologist Edward Lorenz published a paper (Lorenz 1963 J. Atmos. Sci. 20, 130-141) that changed the way scientists think about the prediction of geophysical systems, by introducing the ideas of chaos, attractors, sensitivity to initial conditions and the limitations to forecasting nonlinear systems. Three years earlier, the mathematician and engineer Rudolf Kalman had published a paper (Kalman 1960 Trans. ASME Ser. D, J. Basic Eng. 82, 35-45) that changed the way engineers thought about the prediction of electronic and mechanical systems. Ironically, in recent years, geophysicists have become increasingly interested in Kalman filters, whereas engineers have become increasingly interested in chaos. It is argued that, more often than not, the tracking and forecasting of nonlinear systems has more to do with the nonlinear dynamics that Lorenz considered than with the statistics that Kalman considered, a position with which both Lorenz and Kalman would appear to agree. PMID:19948555
Vegetation patchiness: Pareto statistics, cluster dynamics and desertification.
NASA Astrophysics Data System (ADS)
Shnerb, N. M.
2009-04-01
Recent studies [1-4] of the cluster distribution of vegetation in drylands revealed Pareto statistics for the size of spatial colonies. These results were supported by cellular automata simulations that yield robust criticality for endogenous pattern formation based on positive feedback. We show that this self-organized criticality is a manifestation of the law of proportionate effect: mapping the stochastic model to a Markov birth-death process, the transition rates are shown to scale linearly with cluster size. This mapping provides a connection between patch statistics and the dynamics of the ecosystem; the "first passage time" for different colonies emerges as a powerful tool that discriminates between endogenous and exogenous clustering mechanisms. Imminent catastrophic shifts (like desertification) manifest themselves in a drastic change of the stability properties of spatial colonies, as the chance of a cluster to disappear depends logarithmically, rather than linearly, on its size. [1] Scanlon et al., Nature 449, 209-212 (2007). [2] Kefi et al., Nature 449, 213-217 (2007). [3] Sole R., Nature 449, p. 151 (2007). [4] Vandermeer et al., Nature 451, p. 457 (2008).
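The birth-death mapping above can be made concrete: if both birth and death rates scale linearly with cluster size n (b_n = b*n, d_n = d*n), detailed balance gives a stationary logarithmic-series distribution whose 1/n factor produces Pareto-like statistics as the rates approach balance. The sketch below is a minimal illustration under that assumption; the function name and parameter values are hypothetical.

```python
import numpy as np

def stationary_logseries(b, d, nmax):
    """Stationary distribution of a birth-death chain whose birth and death
    rates both scale linearly with cluster size n (proportionate effect):
    b_n = b*n, d_n = d*n. Detailed balance, pi_{n+1} * d_{n+1} = pi_n * b_n,
    gives pi_n proportional to (b/d)^n / n: a logarithmic-series law whose
    1/n tail becomes Pareto-like near criticality (b -> d)."""
    n = np.arange(1, nmax + 1)
    w = (b / d) ** n / n
    return n, w / w.sum()

# slightly subcritical birth rate: heavy, nearly power-law cluster sizes
n, pi = stationary_logseries(b=0.99, d=1.0, nmax=10000)
```

The successive-ratio test below checks the detailed-balance relation pi_{n+1}/pi_n = (b/d) * n/(n+1) term by term, which is the defining property of the stationary law.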
ERIC Educational Resources Information Center
Scheffler, F. L.; March, J. F.
The Aerospace Materials Information Center (AMIC) Selective Dissemination of Information (SDI) program was evaluated by an interview technique after one year of operation. The data base for the SDI consists of the periodic document index records input to the AMIC system. The users are 63 engineers, scientists, and technical administrators at the…
Subsurface drip irrigation (SDI) research at USDA-ARS in Bushland, TX
Technology Transfer Automated Retrieval System (TEKTRAN)
Producers in the Texas High Plains have recently adopted subsurface drip irrigation (SDI) at unprecedented rates in response to drought, declining water resources from the Ogallala Aquifer, and increasing energy costs to pump groundwater. However, SDI has much greater capital and maintenance require...
Statistical and dynamical remastering of classic exoplanet systems
NASA Astrophysics Data System (ADS)
Nelson, Benjamin Earl
The most powerful constraints on planet formation will come from characterizing the dynamical state of complex multi-planet systems. Unfortunately, with that complexity comes a number of factors that make analyzing these systems a computationally challenging endeavor: the sheer number of model parameters, wonky-shaped posterior distributions, and hundreds to thousands of time series measurements. In this dissertation, I will review our efforts to improve the statistical analyses of radial velocity (RV) data and their applications to some renowned, dynamically complex exoplanet systems. In the first project (Chapters 2 and 4), we develop a differential evolution Markov chain Monte Carlo (RUN DMC) algorithm to tackle the aforementioned difficult aspects of data analysis. We test the robustness of the algorithm in regards to the number of modeled planets (model dimensionality) and increasing dynamical strength. We apply RUN DMC to a couple of classic multi-planet systems and one highly debated system from radial velocity surveys. In the second project (Chapter 5), we analyze RV data of 55 Cancri, a wide binary system known to harbor five planets orbiting the primary. We find the inner-most planet "e" must be coplanar to within 40 degrees of the outer planets, otherwise Kozai-like perturbations will cause the planet to enter the stellar photosphere through its periastron passage. We find the orbits of planets "b" and "c" are apsidally aligned and librating with low to median amplitude (50 (+6/-10) degrees), but they are not orbiting in a mean-motion resonance. In the third project (Chapters 3, 4, 6), we analyze RV data of Gliese 876, a four-planet system with three planets participating in a multi-body resonance, i.e. a Laplace resonance. From a combined observational and statistical analysis computing Bayes factors, we find a four-planet model is favored over one with three planets.
Conditioned on this preferred model, we meaningfully constrain the three-dimensional orbital architecture of all the planets orbiting Gliese 876 based on the radial velocity data alone. By demanding orbital stability, we find the resonant planets have low mutual inclinations phi, so they must be roughly coplanar (phi_cb = 1.41 (+0.62/-0.57) degrees and phi_be = 3.87 (+1.99/-1.86) degrees). The three-dimensional Laplace argument librates chaotically with an amplitude of 50.5 (+7.9/-10.0) degrees, indicating significant past disk migration and ensuring long-term stability. In the final project (Chapter 7), we analyze the RV data for nu Octantis, a closely separated binary with an alleged planet orbiting interior and retrograde to the binary. Preliminary results place very tight constraints on the planet-binary mutual inclination, but no model is dynamically stable beyond 10^5 years. These empirically derived models motivate the need for more sophisticated algorithms to analyze exoplanet data and will provide new challenges for planet formation models.
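The differential evolution MCMC machinery underlying RUN DMC can be sketched in miniature. This is not the dissertation's code: the toy version below targets a 2-D Gaussian posterior rather than a radial-velocity model, and the function names and settings are illustrative assumptions. The core idea is that each chain proposes a jump along the difference of two other randomly chosen chains, so the proposal self-tunes to the shape of the posterior.

```python
import numpy as np

def demc_step(chains, log_post, rng, gamma=None, eps_scale=1e-4):
    """One generation of differential evolution MCMC (ter Braak-style):
    chain i proposes chains[i] + gamma*(chains[a] - chains[b]) + small noise,
    accepted with the usual Metropolis ratio."""
    n, d = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * d)      # standard near-optimal jump scale
    new = chains.copy()
    for i in range(n):
        a, b = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        prop = chains[i] + gamma * (chains[a] - chains[b]) \
               + eps_scale * rng.standard_normal(d)
        if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
            new[i] = prop
    return new

log_post = lambda x: -0.5 * x @ x          # toy target: standard 2-D Gaussian
rng = np.random.default_rng(0)
chains = rng.standard_normal((10, 2)) * 5.0  # over-dispersed start
samples = []
for t in range(500):
    chains = demc_step(chains, log_post, rng)
    if t >= 300:                            # keep post-burn-in generations
        samples.append(chains.copy())
samples = np.concatenate(samples)
```

Because the jump direction is drawn from the current population, a single tuning constant gamma replaces the full proposal covariance that an ordinary Metropolis sampler would need, which is what makes the method attractive for the high-dimensional, correlated posteriors of multi-planet fits.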
Characterizing Uncertainties in Hydrologic Extremes: Statistical vs. Dynamical Downscaling
NASA Astrophysics Data System (ADS)
Mauger, G. S.; Salathe, E. P., Jr.
2013-12-01
Numerous agencies are now charged with considering the impacts of climate change in management decisions, both from the standpoint of adapting to changing conditions and minimizing emissions of greenhouse gases. These decisions require robust projections of change and defensible estimates of their uncertainty. We present work that is specifically focused on characterizing the uncertainty in projections of hydrologic extremes. Much recent work has been devoted to characterizing the uncertainty in hydrologic projections due to differences in downscaling methodology (e.g., Abatzoglou and Brown, 2012; Bürger et al., 2012; Rasmussen et al., 2011; Wetterhall et al., 2012) and among hydrologic models (e.g., Bennett et al., 2012; Clark et al., 2008; Fenicia et al., 2008; Smith and Marshall, 2010; Vano et al., 2012). These have established a basis for such analyses, but have generally focused on the implications for monthly and annual flows rather than flow extremes. In addition, few among these have been focused within the Pacific Northwest. In this work we assess the uncertainty in projected changes to hydrologic extremes associated with dynamical vs. statistical downscaling. The analysis is focused on 3 distinct watersheds within the Pacific Northwest - the Skagit, Green, and Willamette river basins. Results highlight the sensitivity of flood projections to downscaling approach and hydrologic model assumptions. Sensitivities are characterized as a function of geographic location, hydrologic regime, and climate - identifying circumstances under which projections are reliable and others in which answers differ markedly based on methodology. For example, one notable result is that dynamically downscaled projections appear to refute the assumed relationship between watershed type (snow-dominant vs. rain-dominant) and projected changes to flood risk - currently considered a key indicator of future flood risk. 
Results presented here provide key information for decision-making as well as for prioritizing future impacts research.
The role of the medical librarian in SDI systems.
Garfield, E
1969-10-01
Many designers of ongoing selective dissemination systems assume that the librarian can be omitted from active participation in the execution of the master plan. ISI's four years of experience with the ASCA(R) service have shown that librarians must be an integral part of the system and engage in an active dialogue between users and the machine. Specific examples of how librarians can best serve the information needs of scientists using SDI systems are examined. It is the basic contention of this paper that the librarian should serve as an intermediary between users and the numerous new information media. In this manner the librarian can filter and translate the requirements of individual scientists to conform with the inherent limitations of all machine systems while exploiting their capabilities to the fullest. PMID:5823506
Modeling Insurgent Dynamics Including Heterogeneity. A Statistical Physics Approach
NASA Astrophysics Data System (ADS)
Johnson, Neil F.; Manrique, Pedro; Hui, Pak Ming
2013-05-01
Despite the myriad complexities inherent in human conflict, a common pattern has been identified across a wide range of modern insurgencies and terrorist campaigns involving the severity of individual events, namely an approximate power law x^(-α) with exponent α ≈ 2.5. We recently proposed a simple toy model to explain this finding, built around the reported loose and transient nature of operational cells of insurgents or terrorists. Although it reproduces the 2.5 power law, this toy model assumes every actor is identical. Here we generalize this toy model to incorporate individual heterogeneity while retaining the model's analytic solvability. In the case of kinship or team rules guiding the cell dynamics, we find that this 2.5 analytic result persists; however, an interesting new phase transition emerges whereby the cell distribution undergoes a transition to a phase in which the individuals become isolated and hence all the cells have spontaneously disintegrated. Apart from extending our understanding of the empirical 2.5 result for insurgencies and terrorism, this work illustrates how other statistical physics models of human grouping might usefully be generalized in order to explore the effect of diverse human social, cultural or behavioral traits.
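The loose, transient cell dynamics described above are of the coalescence-fragmentation type, which can be illustrated with a toy simulation; in such models the steady-state cell sizes are known to follow approximately s^(-5/2), matching the empirical α ≈ 2.5. The sketch below is a generic illustration, not the authors' model, and its function name and parameter values are hypothetical.

```python
import numpy as np

def coalescence_fragmentation(n_agents=500, steps=5000, nu=0.01, seed=1):
    """Toy coalescence-fragmentation process: pick a cell with probability
    proportional to its size (equivalently, pick a random agent); with
    probability nu that cell fragments entirely into singletons, otherwise
    it coalesces with a second size-biased pick."""
    rng = np.random.default_rng(seed)
    cells = [1] * n_agents                         # all-singleton start
    for _ in range(steps):
        sizes = np.array(cells, dtype=float)
        p = sizes / sizes.sum()
        i = int(rng.choice(len(cells), p=p))
        if rng.random() < nu:                      # total fragmentation of cell i
            k = cells.pop(i)
            cells.extend([1] * k)
        else:                                      # coalescence with another cell
            j = int(rng.choice(len(cells), p=p))
            if j != i:
                merged = cells[i] + cells[j]
                for idx in sorted((i, j), reverse=True):
                    cells.pop(idx)
                cells.append(merged)
    return cells

cells = coalescence_fragmentation()
```

The total number of agents is conserved at every step, and for small fragmentation probability nu the population organizes into a broad, heavy-tailed mixture of cell sizes rather than remaining fragmented.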
A Statistical Model for In Vivo Neuronal Dynamics
Surace, Simone Carlo; Pfister, Jean-Pascal
2015-01-01
Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. These types of models have been extensively fitted to in vitro data, where the input current is controlled. They are, however, of little use when it comes to characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptation as well as arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. We finally show that this model can be used to characterize and therefore precisely compare various intracellular in vivo recordings from different animals and experimental conditions. PMID:26571371
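The generative structure described above can be sketched with an Ornstein-Uhlenbeck process (a Gaussian process) for the subthreshold potential and an exponential spike intensity. This is an illustrative reduction, not the authors' fitted model: it omits the spiking-history dependence, and all parameter values are hypothetical.

```python
import numpy as np

def simulate_neuron(T=2000, dt=1.0, tau=20.0, mu=-60.0, sigma=0.5,
                    vt=-55.0, beta=1.0, rate0=0.05, seed=0):
    """Sketch of the generative model: the membrane potential V follows an
    Ornstein-Uhlenbeck process (mean mu mV, time constant tau ms), and spikes
    are drawn from a point process whose intensity grows exponentially with
    the distance of V from a soft threshold vt."""
    rng = np.random.default_rng(seed)
    V = np.empty(T)
    V[0] = mu
    for t in range(1, T):
        V[t] = V[t-1] + dt * (mu - V[t-1]) / tau \
               + sigma * np.sqrt(dt) * rng.standard_normal()
    lam = rate0 * np.exp(beta * (V - vt))            # spike intensity per ms
    spikes = rng.random(T) < 1.0 - np.exp(-lam * dt)  # Bernoulli approximation
    return V, lam, spikes

V, lam, spikes = simulate_neuron()
```

Replacing the OU kernel with any other stationary covariance function changes the subthreshold autocovariance without touching the spiking mechanism, which is the flexibility the abstract refers to.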
NASA Technical Reports Server (NTRS)
Schweikhard, W. G.; Chen, Y. S.
1986-01-01
The Melick method of inlet flow dynamic distortion prediction by statistical means is outlined. A hypothetical vortex model is used as the basis for the mathematical formulations. The main variables are identified by matching the theoretical total pressure rms ratio with the measured total pressure rms ratio. Data comparisons, using the HiMAT inlet test data set, indicate satisfactory prediction of the dynamic peak distortion for cases with boundary layer control device vortex generators. A method for dynamic probe selection was developed. Validity of the probe selection criteria is demonstrated by comparing the reduced-probe predictions with the 40-probe predictions. It is indicated that the number of dynamic probes can be reduced to as few as two while still retaining good accuracy.
NASA Astrophysics Data System (ADS)
Potirakis, Stelios M.; Zitis, Pavlos I.; Eftaxias, Konstantinos
2013-07-01
The field of study of complex systems holds that the dynamics of complex systems are founded on universal principles that may be used to describe a great variety of scientific and technological approaches to different types of natural, artificial, and social systems. Several authors have suggested that earthquake dynamics and the dynamics of economic (financial) systems can be analyzed within similar mathematical frameworks. We apply concepts of nonextensive statistical physics to time-series data of observable manifestations of the underlying complex processes that culminate in these different extreme events, in order to support the suggestion that a dynamical analogy exists between a financial crisis (in the form of a share or index price collapse) and a single earthquake. We also investigate the existence of such an analogy by means of scale-free statistics (the Gutenberg-Richter distribution of event sizes). We show that the populations of: (i) fracto-electromagnetic events rooted in the activation of a single fault, emerging prior to a significant earthquake, (ii) the trade volume events of different shares/economic indices, prior to a collapse, and (iii) the price fluctuation events (the difference between the maximum and minimum price within a day) of different shares/economic indices, prior to a collapse, follow both the traditional Gutenberg-Richter law as well as a nonextensive model for earthquake dynamics, with similar parameter values. The obtained results imply the existence of a dynamical analogy between earthquakes and economic crises, which moreover follow the dynamics of seizures, magnetic storms and solar flares.
Statistical predictability in the atmosphere and other dynamical systems
NASA Astrophysics Data System (ADS)
Kleeman, Richard
2007-06-01
Ensemble predictions are an integral part of routine weather and climate prediction because of the sensitivity of such projections to the specification of the initial state. In many discussions it is tacitly assumed that ensembles are equivalent to probability distribution functions (p.d.f.s) of the random variables of interest. In general for vector valued random variables this is not the case (not even approximately) since practical ensembles do not adequately sample the high dimensional state spaces of dynamical systems of practical relevance. In this contribution we place these ideas on a rigorous footing using concepts derived from Bayesian analysis and information theory. In particular we show that ensembles must imply a coarse graining of state space and that this coarse graining implies loss of information relative to the converged p.d.f. To cope with the needed coarse graining in the context of practical applications, we introduce a hierarchy of entropic functionals. These measure the information content of multivariate marginal distributions of increasing order. For fully converged distributions (i.e. p.d.f.s) these functionals form a strictly ordered hierarchy. As one proceeds up the hierarchy with ensembles instead however, increasingly coarser partitions are required by the functionals which implies that the strict ordering of the p.d.f. based functionals breaks down. This breakdown is symptomatic of the necessarily limited sampling by practical ensembles of high dimensional state spaces and is unavoidable for most practical applications. In the second part of the paper the theoretical machinery developed above is applied to the practical problem of mid-latitude weather prediction. 
We show that the functionals derived in the first part all decline essentially linearly with time and there appears in fact to be a fairly well defined cut off time (roughly 45 days for the model analyzed) beyond which initial condition information is unimportant to statistical prediction.
Building Better SDI Profiles for Users of Large, Multidisciplinary Data Bases.
ERIC Educational Resources Information Center
Sprague, Robert J.; Frendenreich, L. Ben
1978-01-01
Reports an investigation conducted to improve the performance of SDI profiles in a large, complex data base that contains over a million references to scientific and technical literature, and uses full text indexing for retrieval. (Author/MBR)
Comparison of grain sorghum, soybean, and cotton production under spray, LEPA, and SDI
Technology Transfer Automated Retrieval System (TEKTRAN)
Crop production was compared under subsurface drip irrigation (SDI), low energy precision applicators (LEPA), low elevation spray applicators (LESA), and mid elevation spray applicators (MESA) at the USDA-Agricultural Research Service Conservation and Production Research Laboratory, Bushland, Tex., ...
Dynamical instability and statistical behaviour of N-body systems
NASA Astrophysics Data System (ADS)
Cipriani, Piero; Di Bari, Maria
1998-12-01
In this paper, we argue for a synthetic characterization of the qualitative properties of generic many-degrees-of-freedom (mdf) dynamical systems (DS's) by means of a geometric description of the dynamics [the Geometro-Dynamical Approach (GDA)]. We exhaustively describe the mathematical framework needed to link geometry and dynamical (in)stability, discussing in particular which geometrical quantity is actually related to instability and why some others cannot give, in general, any indication of the occurrence of chaos. The relevance of the Schur theorem in selecting such Geometrodynamic Indicators (GDI's) of instability is then emphasized, as its implications seem to have been underestimated in some previous works. We then compare the analytical and numerical results obtained by us and by Pettini and coworkers concerning the FPU chain, verifying a complete agreement between the outcomes of averaging the relevant GDI's over phase space (Casetti and Pettini, 1995) and our findings (Cipriani, 1993), obtained in a more conservative way, time-averaging along geodesics. Along with the check of the ergodic properties of GDI's, these results confirm that the mechanism responsible for chaos in realistic DS's largely depends on the fluctuations of curvatures rather than on their negative values, whose occurrence is very unlikely. On these grounds we emphasize the importance of the virialization process, which separates two different regimes of instability. This evolutionary path, predicted on the basis of analytical estimates, receives clear support from numerical simulations, which at the same time also confirm the features of the evolution of the GDI's, along with their dependence on the number of degrees of freedom, N, and on the other relevant parameters of the system, pointing out the scarce relevance of negative curvature (for N ≫ 1) as a source of instability.
The general arguments outlined above are then concretely applied to two specific N-body problems, obtaining some new insights into known outcomes and also some new results. The comparative analysis of the FPU chain and the gravitational N-body system allows us to suggest a new definition of strong stochasticity for any DS. The generalization of the concept of dynamical time-scale, tD, is at the basis of this new criterion. We derive for both the mdf systems considered the (N, ɛ)-dependence of tD (ɛ being the specific energy) of the system. In light of this, the results obtained (Cerruti-Sola and Pettini, 1995) indeed turn out to be reliable, the perplexity there raised originating from the neglected N-dependence of tD, and not from an excessive degree of approximation in the averaged equations used. This also points out the peculiarities of gravitationally bound systems, which are always in a regime of strong instability; the dimensionless quantity L1 = γ1 · tD [γ1 is the maximal Lyapunov Characteristic Number (LCN)] being always positive and independent of ɛ, as happens for the FPU chain only above the strong stochasticity threshold (SST). The numerical checks on the analytical estimates of the (N, ɛ)-dependence of GDI's allow us to single out their scaling laws, which support our claim that, for N ≫ 1, the probability of finding a negative value of the Ricci curvature is practically negligible, always for the FPU chain, whereas in the case of the gravitational N-body system this is certainly true once virial equilibrium has been attained. The strong stochasticity of the latter DS is clearly due to the large amplitude of curvature fluctuations. To prove the positivity of the Ricci curvature, we need to discuss the pathologies of the mathematical Newtonian interaction, which have some implications also for the ergodicity of the GDI's for this DS.
We discuss the statistical mechanical properties of gravity, arguing that they are related to its long-range nature rather than to its short-scale divergencies. The N-scaling behaviour of the single terms entering the Ricci curvature shows that the dominant contribution comes from the Laplacian of the potential energy, whose singularity is reflected in the issue of equality between time and static averages. However, we find that the physical N-body system is actually ergodic where the GDI's are concerned, and that the associated Ricci curvature is indeed almost everywhere (and then almost always) positive, as long as N ≫ 1 and the system is gravitationally bound and virialized. On these grounds the equality among the above-mentioned averages is restored, and the GDA to the instability of gravitating systems gives fully reliable and understandable results. Finally, as a by-product of the numerical simulations performed for both the DS's considered, it emerges that the time averages of GDI's quickly approach the corresponding canonical ones, even in the quasi-integrable limit, whereas, as expected, their fluctuations relax on much longer timescales, in particular below the SST.
ERIC Educational Resources Information Center
Lee, Hollylynne Stohl; Kersaint, Gladis; Harper, Suzanne; Driskell, Shannon O.; Leatham, Keith R.
2012-01-01
This study examined a random stratified sample (n = 62) of prospective teachers' work across eight institutions on three tasks that utilized dynamic statistical software. The authors considered how teachers utilized their statistical knowledge and technological statistical knowledge to engage in cycles of investigation. This paper characterizes…
Crystallization and preliminary X-ray studies of SdiA from Escherichia coli
Wu, Chunai; Lokanath, Neratur K.; Kim, Dong Young; Nguyen, Lan Dao Ngoc; Kim, Kyeong Kyu
2008-01-01
E. coli SdiA was overexpressed, purified and crystallized. The crystals belonged to the hexagonal space group P6{sub 1}22 or P6{sub 5}22 and diffracted to 2.7 Å resolution. SdiA enhances cell division by regulating the ftsQAZ operon in Escherichia coli as a transcription activator. In addition, SdiA is suggested to play a role in detecting quorum signals that emanate from other species. It is therefore a homologue of LuxR, a cognate quorum-sensing receptor that recognizes a quorum signal and activates the quorum responses. To elucidate the role of SdiA and its functional and structural relationship to LuxR, structural studies were performed on E. coli SdiA. Recombinant SdiA was overexpressed, purified and crystallized at 287 K using the hanging-drop vapour-diffusion method. X-ray diffraction data from a native crystal were collected with 99.7% completeness to 2.7 Å resolution with an R{sub merge} of 6.0%. The crystals belong to the hexagonal space group P6{sub 1}22 or P6{sub 5}22, with unit-cell parameters a = b = 130.47 Å, c = 125.23 Å.
Examining rainfall and cholera dynamics in Haiti using statistical and dynamic modeling approaches.
Eisenberg, Marisa C; Kujbida, Gregory; Tuite, Ashleigh R; Fisman, David N; Tien, Joseph H
2013-12-01
Haiti has been in the midst of a cholera epidemic since October 2010. Rainfall is thought to be associated with cholera there, but this relationship has only begun to be examined quantitatively. In this paper, we quantitatively examine the link between rainfall and cholera in Haiti for several different settings (including urban areas, rural areas, and displaced-person camps) and spatial scales, using a combination of statistical and dynamic models. Statistical analysis of the lagged relationship between rainfall and cholera incidence was conducted using case-crossover analysis and distributed-lag nonlinear models. Dynamic models consisted of compartmental differential equation models including direct (fast) and indirect (delayed) disease transmission, where indirect transmission was forced by empirical rainfall data. Data sources include cholera case and hospitalization time series from the Haitian Ministry of Public Health, the United Nations Water, Sanitation and Health Cluster, the International Organization for Migration, and Hôpital Albert Schweitzer. Rainfall data were obtained from rain gauges of the U.S. Geological Survey and the Haiti Regeneration Initiative, and from remote sensing by the National Aeronautics and Space Administration Tropical Rainfall Measuring Mission. A strong relationship between rainfall and cholera was found for all spatial scales and locations examined. Increased rainfall was significantly correlated with increased cholera incidence 4-7 days later. Forcing the dynamic models with rainfall data resulted in good fits to the cholera case data, and rainfall-based predictions from the dynamic models closely matched observed cholera cases. These models provide a tool for planning and managing the epidemic as it continues. PMID:24267876
NASA Astrophysics Data System (ADS)
Guan, Daren; Yi, Xizhang; Meng, Qingtian; Zheng, Yujun
2001-05-01
The dynamical Lie algebraic (DLA) method is applied to statistical dynamics of energy transfer in rotationally inelastic molecule-surface scattering of NO molecules from Ag(1 1 1) surfaces. The statistical average values of the translational-to-rotational energy transfer and their dependence on main dynamical variables for the system, especially collision time, are obtained by the method in terms of the density operator formalism in statistical mechanics. It is shown that the DLA method appears to provide an alternative efficient technique to treat the energy transfer in the gas-surface scattering.
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
NASA Astrophysics Data System (ADS)
Schepen, Andrew; Wang, Q. J.; Robertson, David E.
2012-10-01
Forecasting rainfall at the seasonal time scale is highly challenging. Seasonal rainfall forecasts are typically made using statistical or dynamical models. The two types of models have different strengths, and their combination has the potential to increase forecast skill. In this study, statistical-dynamical forecasts of Australian seasonal rainfall are assessed. Statistical rainfall forecasts are made based on observed relationships with lagged climate indices. Dynamical forecasts are made by calibrating raw outputs from multiple general circulation models. The statistical and dynamical forecasts are then merged using a Bayesian model averaging (BMA) method. The skill and reliability of the forecasts are assessed through cross-validation for the period 1980-2010. We confirm that the dynamical and statistical groups of models give skill in different locations and seasons, and that the merged statistical-dynamical forecasts represent a significant improvement in terms of maximizing the spatial and temporal coverage of skillfulness. We find that the merged statistical-dynamical forecasts are reliable in representing forecast uncertainty.
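The merging step described above can be sketched numerically. The following is a minimal illustration of Bayesian model averaging of a statistical and a dynamical forecast, assuming Gaussian predictive densities and entirely synthetic hindcast numbers; the paper's actual BMA formulation, predictors, and data are not reproduced here.

```python
import numpy as np

def gauss_logpdf(x, mu, sd):
    # Log density of a normal distribution, written out to stay self-contained.
    return -0.5 * np.log(2 * np.pi * sd**2) - (x - mu)**2 / (2 * sd**2)

# Hindcast period: observed seasonal rainfall totals (mm) and each model's
# forecast means; all numbers below are invented for illustration.
obs = np.array([210.0, 180.0, 250.0, 190.0, 230.0])
stat_mu, stat_sd = np.array([200.0, 185.0, 240.0, 200.0, 220.0]), 20.0
dyn_mu, dyn_sd = np.array([220.0, 170.0, 230.0, 185.0, 245.0]), 25.0

# BMA-style weights: proportional to each model's predictive likelihood of
# the observed hindcast values (computed in log space for stability).
ll = np.array([gauss_logpdf(obs, stat_mu, stat_sd).sum(),
               gauss_logpdf(obs, dyn_mu, dyn_sd).sum()])
w = np.exp(ll - ll.max())
w /= w.sum()

def merged_pdf(x, mu_stat, mu_dyn):
    # Merged forecast density: a weighted mixture of the two models.
    return (w[0] * np.exp(gauss_logpdf(x, mu_stat, stat_sd))
            + w[1] * np.exp(gauss_logpdf(x, mu_dyn, dyn_sd)))
```

The mixture automatically widens where the two models disagree, which is one reason merged forecasts tend to represent uncertainty more reliably.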
Statistical Anomaly Detection for Monitoring of Human Dynamics
NASA Astrophysics Data System (ADS)
Kamiya, K.; Fuse, T.
2015-05-01
Understanding of human dynamics has drawn attention in various areas. Owing to the wide spread of positioning technologies that use GPS or public Wi-Fi, location information can be obtained at high spatial-temporal resolution and at low cost. By collecting sets of individual location information in real time, monitoring of human dynamics has recently become feasible and is expected to lead to dynamic traffic control in the future. Although this monitoring focuses on detecting anomalous states of human dynamics, anomaly detection methods have been developed ad hoc and are not fully systematized. This research aims to define the anomaly detection problem of human dynamics monitoring with gridded population data and to develop an anomaly detection method based on that definition. Drawing on a comprehensive review, we discuss the characteristics of anomaly detection for human dynamics monitoring and categorize our problem as a semi-supervised anomaly detection problem of detecting contextual anomalies behind time-series data. We developed an anomaly detection method based on a sticky HDP-HMM, which is able to estimate the number of hidden states according to the input data. Results of an experiment with synthetic data showed that the proposed method has good fundamental performance with respect to the detection rate. In an experiment with real gridded population data, an anomaly was detected when and where an actual social event had occurred.
Measures of trajectory ensemble disparity in nonequilibrium statistical dynamics
Crooks, Gavin; Sivak, David
2011-06-03
Many interesting divergence measures between conjugate ensembles of nonequilibrium trajectories can be experimentally determined from the work distribution of the process. Herein, we review the statistical and physical significance of several of these measures, in particular the relative entropy (dissipation), Jeffreys divergence (hysteresis), Jensen-Shannon divergence (time-asymmetry), Chernoff divergence (work cumulant generating function), and Rényi divergence.
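As a toy illustration of how such a divergence can be estimated from work measurements, the sketch below computes a Jensen-Shannon divergence between two synthetic work distributions via histograms; the Gaussian distributions, bin choices, and units (kT) are assumptions for illustration, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic work samples (in kT) for a forward protocol and its conjugate
# time-reversed protocol; purely illustrative Gaussians.
w_fwd = rng.normal(2.0, 1.0, 100_000)
w_rev = rng.normal(-1.0, 1.0, 100_000)

bins = np.linspace(-8.0, 8.0, 161)
p, _ = np.histogram(w_fwd, bins, density=True)
q, _ = np.histogram(w_rev, bins, density=True)
dx = bins[1] - bins[0]

def kl(a, b):
    # Discretized Kullback-Leibler divergence, skipping empty bins.
    m = (a > 0) & (b > 0)
    return np.sum(a[m] * np.log(a[m] / b[m])) * dx

mix = 0.5 * (p + q)
js = 0.5 * kl(p, mix) + 0.5 * kl(q, mix)  # bounded above by ln 2
```

The Jensen-Shannon divergence approaches ln 2 as the two ensembles become fully distinguishable, i.e. as the process becomes strongly time-asymmetric.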
Control system design for dynamical systems with statistical model uncertainty
NASA Astrophysics Data System (ADS)
Huerta-Ochoa, Ruben Tarsicio
2000-10-01
This dissertation is devoted to the study of control systems for which the plant models are uncertain. The plant model uncertainty considered is of a statistical nature. The model uncertainty is assumed to be given in terms of first- and second-order statistics of random model parameters, or alternatively, by a joint probability distribution. The first problem is the characterization of the eigenvalue variance of an uncertain matrix. The variance of an uncertain eigenvalue is expressed approximately in terms of the statistics of the matrix's uncertain parameters. The variance expression is then used to construct uncertainty cost functions that optimize the robustness of a control system design in a state-space framework. An example from the aerospace industry is presented to illustrate the methodology. The second problem is analogous to the first one. The root variance of an uncertain polynomial is calculated approximately as a function of the uncertain polynomial parameters. It is then used to construct design cost functions that optimize the robustness of a control system design in a transfer function framework. A DC servomotor control system design is presented in order to illustrate the methodology. The third part of this work describes the application of the root variance formula to the construction of the stochastic root locus (SRL). The SRL is defined here in such a way that it reduces to the root locus when the model uncertainty is reduced to zero. A seismic structural control application is employed for illustration purposes. Finally, the problem of model uncertainty characterization is addressed within a statistical framework. Both coprime factor and additive uncertainty structures are studied. A Gaussian structure of the joint probability distribution for the uncertain plant parameters is assumed, and elliptical contours of uncertainty are then plotted in the complex plane. 
The idea in mind is to translate statistical model uncertainty into weighting factors instrumental in an H∞ control system design. The last part of the document includes the conclusions and suggestions for further study of the problem.
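The approximate eigenvalue variance in the dissertation's first problem can be sketched with the standard first-order perturbation result dλ/dp = yᴴBx / (yᴴx), where x and y are right and left eigenvectors of the nominal matrix. The plant matrix, sensitivity matrix B, and parameter statistics below are invented for illustration, and the approximation is checked against Monte Carlo sampling.

```python
import numpy as np

rng = np.random.default_rng(1)
A0 = np.array([[0.0, 1.0], [-2.0, -3.0]])  # nominal plant matrix (illustrative)
B = np.array([[0.0, 0.0], [-1.0, 0.0]])    # sensitivity of A to parameter p
sigma = 0.05                               # std of the uncertain parameter p

# First-order sensitivity of each eigenvalue: rows of inv(X) are the left
# eigenvectors, already normalized so that y_i^H x_i = 1.
lam, X = np.linalg.eig(A0)
Xinv = np.linalg.inv(X)
dlam = np.array([Xinv[i] @ B @ X[:, i] for i in range(2)])
var_approx = (np.abs(dlam) * sigma) ** 2   # approximate eigenvalue variances

# Monte Carlo check: sample the parameter, recompute and sort eigenvalues.
samples = np.array([np.sort(np.linalg.eigvals(A0 + rng.normal(0, sigma) * B))
                    for _ in range(20_000)])
var_mc = samples.var(axis=0)
```

For this linear-in-p uncertainty the first-order variance matches the sampled variance closely, which is the property the dissertation exploits to build robustness cost functions.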
SdiA Aids Enterohemorrhagic Escherichia coli Carriage by Cattle Fed a Forage or Grain Diet
Sheng, Haiqing; Nguyen, Y. N.
2013-01-01
Enterohemorrhagic Escherichia coli (EHEC) causes hemorrhagic colitis and life-threatening complications. The main reservoirs for EHEC are healthy ruminants. We reported that SdiA senses acyl homoserine lactones (AHLs) in the bovine rumen to activate expression of the glutamate-dependent acid resistance (gad) genes, priming EHEC's acid resistance before the bacteria pass into the acidic abomasum. Conversely, SdiA represses expression of the locus of enterocyte effacement (LEE) genes, whose expression is not required for bacterial survival in the rumen but is necessary for efficient colonization at the rectoanal junction (RAJ) mucosa. Our previous studies showed that SdiA-dependent regulation was necessary for efficient EHEC colonization of cattle fed a grain diet. Here, we compared the role of SdiA in EHEC colonization of cattle fed a forage hay diet. We detected AHLs in the rumen of cattle fed a hay diet, and these AHLs activated gad gene expression in an SdiA-dependent manner. The rumen fluid and fecal samples from hay-fed cattle were near neutrality, while the same digesta samples from grain-fed animals were acidic. Cattle fed either grain or hay and challenged with EHEC orally carried the bacteria similarly. EHEC was cleared from the rumen within days and from the RAJ mucosa after approximately one month. In competition trials, where animals were challenged with both wild-type and SdiA deletion mutant bacteria, diet did not affect the outcome that the wild-type strain was better able to persist and colonize. However, the wild-type strain had a greater advantage over the SdiA deletion mutant at the RAJ mucosa among cattle fed the grain diet. PMID:23836826
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
Dynamical and statistical modeling of seasonal precipitation over Mexico
NASA Astrophysics Data System (ADS)
Fuentes-Franco, R.; Coppola, E.; Giorgi, F.; Pavia, E. G.; Graef Ziehl, F.
2012-12-01
Simulated patterns of seasonal precipitation over Mexico (Pmex) by a statistical model and by the recently released version of the Regional Climate Model (RegCM4) are compared. The European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis ERA-Interim is used to provide initial and lateral boundary conditions for the RegCM4 simulation over the CORDEX Central America region, while regions of high correlation between Pmex and global sea surface temperatures (SST) over the Atlantic and Pacific Oceans are used as predictors in the statistical model. Compared with observations, the RegCM4 simulation shows a wet bias in topographically complex regions and a dry bias over Yucatan and northwestern Mexico. The wet bias is probably caused by the model's convection scheme, but the dry bias may be due to a lack of topographical features (in Yucatan) and a weakened representation of the North American Monsoon (in northwestern Mexico). RegCM4 simulates the seasonal precipitation patterns quite well, as well as the inter-seasonal variability, reproducing the observed wetter- or drier-than-normal seasons. RegCM4 also reproduces the mid-summer drought in the south of Mexico reasonably well. The statistical model likewise reproduces the inter-seasonal precipitation variability well, simulating Pmex better over southern and central Mexico than over northern Mexico. This may suggest that Pmex over northern Mexico is less dependent on SST than over other regions of the country.
NASA Astrophysics Data System (ADS)
Guan, Daren; Yi, Xizhang; Zheng, Yujun; Ding, Shiliang; Sun, Jiazhong
2000-09-01
The dynamical Lie algebraic method is used for the description of the statistical mechanics of rotationally inelastic molecule-surface scattering. A main advantage of this method is that it can not only give the expression for the evolution operator in terms of the group parameters, but also provide the expression for the density operator for a given system. The group parameters may then be determined by solving a set of coupled nonlinear differential equations. Thus, the expressions for the statistical average values of the translational-to-rotational energy transfer, the interaction potential, and their dependence on the main dynamical variables for the system are derived in terms of the density operator formalism in statistical mechanics. The method is applied to the scattering of NO molecules from a static, flat Ag(111) surface to illustrate its general procedure. The results demonstrate that the dynamical Lie algebraic method can be useful for describing the statistical dynamics of gas-surface scattering.
Viscoelastic effects in avalanche dynamics: a key to earthquake statistics.
Jagla, E A; Landes, François P; Rosso, Alberto
2014-05-01
In many complex systems a continuous input of energy over time can be suddenly relaxed in the form of avalanches. Conventional avalanche models disregard the possibility of internal dynamical effects in the interavalanche periods, and thus miss basic features observed in some real systems. We address this issue by studying a model with viscoelastic relaxation, showing how coherent oscillations of the stress field can emerge spontaneously. Remarkably, these oscillations generate avalanche patterns that are similar to those observed in seismic phenomena. PMID:24836251
Chandrasekhar's dynamical friction and non-extensive statistics
NASA Astrophysics Data System (ADS)
Silva, J. M.; Lima, J. A. S.; de Souza, R. E.; Del Popolo, A.; Le Delliou, Morgan; Lee, Xi-Guo
2016-05-01
The motion of a point-like object of mass M passing through the background potential of massive collisionless particles (m ≪ M) suffers a steady deceleration known as dynamical friction. In his classical work, Chandrasekhar assumed a Maxwellian velocity distribution in the halo and neglected the self-gravity of the wake induced by the gravitational focusing of the mass M. In this paper, relaxing the validity of the Maxwellian distribution due to the presence of long-range forces, we derive an analytical formula for the dynamical friction in the context of the q-nonextensive kinetic theory. In the extensive limiting case (q = 1), the classical Gaussian Chandrasekhar result is recovered. As an application, the dynamical friction timescale for globular clusters spiraling to the galactic center is explicitly obtained. Our results suggest that the problem of the large timescales derived from numerical N-body simulations or semi-analytical models can be understood as a departure from the standard extensive Maxwellian regime, as measured by the Tsallis nonextensive q-parameter.
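For reference, the classical (q = 1, Maxwellian) Chandrasekhar deceleration that the paper generalizes can be evaluated directly. The formula below is the standard textbook expression; the halo parameters and galactic units (kpc, km/s, solar masses) are illustrative choices, not values taken from the paper.

```python
import math

def chandrasekhar_decel(v, M, rho, sigma, ln_lambda, G=4.30091e-6):
    """Classical Maxwellian dynamical-friction deceleration.

    v: speed of the massive body (km/s); M: its mass (Msun);
    rho: local background density (Msun/kpc^3); sigma: 1D velocity
    dispersion of the background (km/s); ln_lambda: Coulomb logarithm.
    G is in kpc (km/s)^2 / Msun, so the result is in (km/s)^2 / kpc.
    """
    X = v / (math.sqrt(2.0) * sigma)
    bracket = math.erf(X) - 2.0 * X / math.sqrt(math.pi) * math.exp(-X * X)
    return 4.0 * math.pi * G**2 * M * rho * ln_lambda * bracket / v**2

# Deceleration of a 1e6 Msun globular cluster moving at 200 km/s through a
# halo with density 1e7 Msun/kpc^3 and dispersion 150 km/s (made-up numbers).
a = chandrasekhar_decel(v=200.0, M=1e6, rho=1e7, sigma=150.0, ln_lambda=10.0)
```

Note the characteristic fast-body behavior: at high v the bracket saturates near 1 while the 1/v² prefactor dominates, so the drag weakens with speed.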
An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models
ERIC Educational Resources Information Center
Prindle, John J.; McArdle, John J.
2012-01-01
This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…
Statistical methodologies for the control of dynamic remapping
NASA Technical Reports Server (NTRS)
Saltz, J. H.; Nicol, D. M.
1986-01-01
Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping two qualitatively different kinds of problems. One problem assumes that performance deteriorates gradually, the other assumes that performance deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.
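A minimal sketch of the gradual-deterioration case under a simple threshold remapping policy is given below; the deterioration model, remap cost, and thresholds are entirely invented, and the paper's statistical models and optimal decision policies are more elaborate than this.

```python
import random

def total_cost(threshold, remap_cost=50.0, steps=1000, seed=0):
    # Gradual deterioration: per-step overhead grows by a random increment;
    # remapping resets the overhead to zero at a fixed cost.
    rng = random.Random(seed)
    overhead, total = 0.0, 0.0
    for _ in range(steps):
        overhead += rng.uniform(0.0, 0.2)   # performance slowly deteriorates
        total += overhead                   # pay the current overhead each step
        if overhead > threshold:            # policy: remap past the threshold
            total += remap_cost
            overhead = 0.0
    return total

# Compare a few thresholds: remapping too eagerly pays the remap cost too
# often; remapping too lazily accumulates large overheads.
costs = {t: total_cost(t) for t in (2.0, 5.0, 10.0, 20.0)}
best = min(costs, key=costs.get)
```

This reproduces the basic cost/benefit tradeoff the abstract describes: intermediate policies balance remapping cost against accumulated performance loss.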
Statistical mechanics of neocortical interactions - Dynamics of synaptic modification
NASA Technical Reports Server (NTRS)
Ingber, L.
1983-01-01
A recent study has demonstrated that several scales of neocortical interactions can be consistently analyzed with the use of methods of modern nonlinear nonequilibrium statistical mechanics. The formation, stability, and interaction of spatial-temporal patterns of columnar firings are explicitly calculated, to test hypothesized mechanisms relating to information processing. In this context, most probable patterns of columnar firings are associated with chemical and electrical synaptic modifications. It is stressed that synaptic modifications and shifts in most-probable firing patterns are highly nonlinear and interactive sets of phenomena. A detailed scenario of information processing is calculated of columnar coding of external stimuli, short-term storage via hysteresis, and long-term storage via synaptic modification.
Human turnover dynamics during sleep: Statistical behavior and its modeling
NASA Astrophysics Data System (ADS)
Yoneyama, Mitsuru; Okuma, Yasuyuki; Utsumi, Hiroya; Terashi, Hiroo; Mitoma, Hiroshi
2014-03-01
Turnover is a typical intermittent body movement during sleep. Exploring its behavior may provide insights into the mechanisms and management of sleep. However, little is understood about the dynamic nature of turnover in healthy humans and how it can be modified by disease. Here we present a detailed analysis of turnover signals collected by accelerometry from healthy elderly subjects and age-matched patients with neurodegenerative disorders such as Parkinson's disease. In healthy subjects, the time intervals between consecutive turnover events exhibit a well-separated bimodal distribution with one mode at ⩽10 s and the other at ⩾100 s, whereas this bimodality tends to disappear in neurodegenerative patients. The discovery of bimodality and fine temporal structure (⩽10 s) is a contribution not revealed by conventional sleep recordings with lower time resolution (≈30 s). Moreover, we estimate the scaling exponent of the interval fluctuations, which also shows a clear difference between healthy subjects and patients. We incorporate these experimental results into a computational model of human decision making. A decision is made at each simulation step between two choices: to keep on sleeping or to make a turnover, the selection of which is determined dynamically by comparing a pair of random numbers assigned to each choice. This decision is weighted by a single parameter that reflects the depth of sleep. The resulting simulated behavior accurately replicates many aspects of the observed turnover patterns, including the appearance or disappearance of bimodality, and leads to several predictions, suggesting that the depth parameter may be useful as a quantitative measure for differentiating between normal and pathological sleep. These findings have significant clinical implications and may pave the way for the development of practical sleep assessment technologies.
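A rough sketch of the two-random-number decision rule is given below. The abstract does not specify the exact weighting, so the flip rule, the two-state switching of the depth parameter (added here so that the toy model produces bimodal intervals), and all rates are our assumptions, not the authors' model.

```python
import random

def simulate_intervals(steps=500_000, seed=2):
    # Toy two-state elaboration of the single-depth-parameter model: sleep
    # alternates between a light state (small depth weight, frequent
    # turnovers) and a deep state, yielding a bimodal interval distribution
    # qualitatively like the one reported for healthy subjects.
    rng = random.Random(seed)
    depth = 50.0                       # current sleep-depth weight
    intervals, t = [], 0
    for _ in range(steps):             # one decision per second
        t += 1
        if rng.random() < 0.001:       # occasional light/deep switching
            depth = 2.0 if depth == 50.0 else 50.0
        # turnover decision: compare two random draws, weighted by depth
        if rng.random() > depth * rng.random():
            intervals.append(t)
            t = 0
    return intervals

iv = simulate_intervals()
short = sum(1 for x in iv if x <= 10)    # mode from light sleep
long_ = sum(1 for x in iv if x >= 100)   # mode from deep sleep
```

With a single fixed depth the intervals are geometric (one mode); the bimodality here comes from the added depth switching, illustrating why the depth parameter controls the shape of the interval distribution.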
Statistical treatment of dynamical electron diffraction from growing surfaces
NASA Astrophysics Data System (ADS)
Dudarev, S. L.; Vvedensky, D. D.; Whelan, M. J.
1994-11-01
Statistical methods developed previously for the evaluation of the electrical conductivity of metals and the description of the propagation of waves through random media are applied to the problem of scattering of high-energy electrons from the rough, growing surface of a crystal, where the roughness is caused by local fluctuations of site occupation numbers occurring during growth. We derive the relevant Dyson and Bethe-Salpeter equations and define the short-range order correlation functions that determine the behavior of the reflection high-energy electron diffraction (RHEED) intensities. To analyze the temporal evolution of these correlation functions, we employ an exactly solvable model of local perfect-layer growth [A. K. Myers-Beaghton and D. D. Vvedensky, J. Phys. A 22, L467 (1989)]. Our approach makes it possible to separate the individual contributions of the various processes that give rise to oscillations of the RHEED reflections. We find that, provided the Bragg conditions of incidence are satisfied, it is the diffuse scattering by the disordered surface layer that is largely responsible for the oscillations of the RHEED intensities. The temporal evolution of the angular distribution of the diffusely scattered electrons exhibits an enhancement of the intensity of the Kikuchi lines with increasing surface disorder, as was observed experimentally [J. Zhang et al., Appl. Phys. A 42, 317 (1987)]. An explanation of the origin of this phenomenon is given using the concept of the final-state standing wave pattern.
Superconducting Magnetic Energy Storage and other large-scale SDI cryogenic applications programs
NASA Astrophysics Data System (ADS)
Verga, Richard L.
The paper describes the Superconducting Magnetic Energy Storage (SMES) program for terrestrial storage of energy for use in powering ground-based directed energy weapons. Special attention is given to SMES technology for SDI applications, the components of a SMES system, the SMES Engineering Test Model Development Program, and the SMES critical technologies. It is pointed out that SMES has applications other than SDI, such as the commercial electric utility industry and space power systems, including hydrogen-cooled cryoconductors, superconducting turboalternators, and high-temperature superconductor power leads.
Statistical Physics Approaches to Respiratory Dynamics and Lung Structure
NASA Astrophysics Data System (ADS)
Suki, Bela
2004-03-01
The lung consists of a branching airway tree embedded in viscoelastic tissue and provides life-sustaining gas exchange to the body. In disease, its structure is damaged and its function is compromised. We review two recent works about lung structure and dynamics and how they change in disease. 1) We introduced a new acoustic imaging approach to study airway structure. When airways in a collapsed lung are inflated, they pop open in avalanches. A single opening emits a sound package called a crackle, consisting of an initial spike (s) followed by ringing. The distribution n(s) of s follows a power law, and the exponent of n(s) can be used to calculate the diameter ratio d, defined as the ratio of the diameter of an airway to that of its parent, averaged over all bifurcations. To test this method, we measured crackles in dogs, rabbits, rats and mice by inflating collapsed isolated lungs with air or helium while recording crackles with a microphone. In each species, n(s) follows a power law with an exponent that depends on species, but not on the gas, in agreement with theory. Values of d from crackles compare well with those calculated from morphometric data, suggesting that this approach is suitable for studying airway structure in disease. 2) Using novel experiments and computer models, we studied pulmonary emphysema, which is caused by cigarette smoking. In emphysema, the elastic protein fibers of the tissue are actively remodeled by lung cells due to the chemicals present in smoke. We measured the mechanical properties of tissue sheets from normal and emphysematous lungs and imaged their structure, which appears as a heterogeneous hexagonal network of fibers. We found evidence that during uniaxial stretching, the collagen and elastin fibers in emphysematous tissue can fail at a critical stress, generating holes of various sizes (h). We developed network models of the failure process. 
When the failure is governed by mechanical forces, the distribution n(h) of h is a power law which compares well with Computed Tomographic images of patients. These results suggest that the progressive nature of emphysema may be due to a complex breakdown process initiated by chemicals in the smoke and maintained by mechanical failure of the remodeled fiber network.
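The exponent of a power-law spike-size distribution such as n(s) can be estimated with the standard continuous-power-law maximum-likelihood (Hill) estimator; the sketch below applies it to synthetic samples with a known exponent. This illustrates the estimator only, not the paper's species-specific analysis or the crackle-to-diameter-ratio conversion.

```python
import math
import random

def powerlaw_mle(samples, s_min):
    # Continuous power-law MLE with lower cutoff s_min:
    #   alpha_hat = 1 + n / sum(ln(s_i / s_min))
    tail = [s for s in samples if s >= s_min]
    return 1.0 + len(tail) / sum(math.log(s / s_min) for s in tail)

# Synthetic "spike amplitudes" drawn from n(s) ~ s^(-2.5), s >= 1, via
# inverse-CDF sampling (the true exponents differ by species in the paper).
rng = random.Random(3)
alpha_true = 2.5
samples = [(1.0 - rng.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(100_000)]
alpha_hat = powerlaw_mle(samples, s_min=1.0)
```

The MLE avoids the well-known biases of fitting a straight line to a log-log histogram, which matters when the exponent is then used quantitatively, as it is here to infer the airway diameter ratio.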
NASA Astrophysics Data System (ADS)
Eftaxias, Konstantinos; Minadakis, George; Potirakis, Stelios M.; Balasis, Georgios
2013-02-01
The field of study of complex systems considers that the dynamics of complex systems are founded on universal principles that may be used to describe a great variety of scientific and technological approaches of different types of natural, artificial, and social systems. Several authors have suggested that earthquake dynamics and neurodynamics can be analyzed within similar mathematical frameworks. Recently, authors have shown that a dynamical analogy supported by scale-free statistics exists between seizures and earthquakes, analyzing populations of different seizures and earthquakes, respectively. The purpose of this paper is to suggest a shift in emphasis from the large to the small scale: our analyses focus on a single epileptic seizure generation and the activation of a single fault (earthquake) and not on the statistics of sequences of different seizures and earthquakes. We apply the concepts of the nonextensive statistical physics to support the suggestion that a dynamical analogy exists between the two different extreme events, seizures and earthquakes. We also investigate the existence of such an analogy by means of scale-free statistics (the Gutenberg-Richter distribution of event sizes and the distribution of the waiting time until the next event). The performed analysis confirms the existence of a dynamic analogy between earthquakes and seizures, which moreover follow the dynamics of magnetic storms and solar flares.
Social Development in Hong Kong: Development Issues Identified by Social Development Index (SDI)
ERIC Educational Resources Information Center
Chua, Hoi-wai; Wong, Anthony K. W.; Shek, Daniel T. L.
2010-01-01
Surviving the aftermaths of the Asian Financial Crisis and SARS in 2003, Hong Kong's economy has re-gained its momentum and its economic growth has been quite remarkable too in recent few years. Nevertheless, as reflected by the Social Development Index (SDI), economic growth in Hong Kong does not seem to have benefited the people of the city at…
Near-surface soil water and temperature for SDI, LEPA, and spray irrigation
Technology Transfer Automated Retrieval System (TEKTRAN)
Near-surface soil temperatures and volumetric soil water contents were compared for SDI, LEPA, and spray irrigation in a Pullman clay loam soil planted in cotton. Soil temperatures were measured by type-T thermocouples and volumetric water contents were measured by time domain reflectometry (TDR) in...
Kazumba, Shija; Gillerman, Leonid; DeMalach, Yoel; Oron, Gideon
2010-01-01
Scarcity of fresh high-quality water has heightened the importance of wastewater reuse, primarily in dry regions, together with improving its efficient use by implementing the Subsurface Drip Irrigation (SDI) method. Sustainable effluent reuse combines soil and plant aspects, along with the maintainability of the application system. In this study, field experiments were conducted for two years on the commercial Revivim and Mashabay-Sade farm (RMF) southeast of the City of Beer-Sheva, Israel. The purpose was to examine the response of alfalfa (Medicago sativa), as a perennial model crop, to secondary domestic effluent application by means of an SDI system as compared with conventional overhead sprinkler irrigation. Emitters were installed at different depths and spacings. Similar amounts of effluent were applied to all plots during the experimental period. The results indicated that in all SDI treatments except the one in which the drip laterals were 200 cm apart, the alfalfa yields were 11% to 25% higher than those obtained in the sprinkler-irrigated plots. The average Water Use Efficiency (WUE) was better in all SDI treatments in comparison with the sprinkler-irrigated plots. An economic assessment reveals the dependence of the net profit on the emitters' installation geometry, combined with the return for alfalfa in the market. PMID:20150698
Measuring dynamical randomness of quantum chaos by statistics of Schmidt eigenvalues
NASA Astrophysics Data System (ADS)
Kubotani, Hiroto; Adachi, Satoshi; Toda, Mikito
2013-06-01
We study statistics of entanglement generated by quantum chaotic dynamics. Using an ensemble of a very large number (≳10⁷) of quantum states obtained from the temporally evolving coupled kicked tops, we verify that the estimated one-body distribution of the squared Schmidt eigenvalues for the quantum chaotic dynamics can agree surprisingly well with the analytical one for the universality class of the random matrices described by the fixed trace ensemble (FTE). In order to quantify this agreement, we introduce the L1 norm of the difference between the one-body distributions for the quantum chaos and FTE and use it as an indicator of the dynamical randomness. As we increase the scaled coupling constant, the L1 difference decreases. When the effective Planck constant is not small enough, the decrease saturates, which implies quantum suppression of dynamical randomness. On the other hand, when the effective Planck constant is small enough, the decrease of the L1 difference continues until it is masked by statistical fluctuation due to the finiteness of the ensemble. Furthermore, we carry out two statistical analyses, the χ2 goodness-of-fit test and an autocorrelation analysis, on the difference between the distributions to search for dynamical remnants buried under the statistical fluctuation. We observe that almost all fluctuating deviations are statistical. However, even for well-developed quantum chaos, unexpectedly, we find a slight nonstatistical deviation near the largest Schmidt eigenvalue. In this way, the statistics of Schmidt eigenvalues enables us to measure the dynamical randomness of quantum chaos with reference to the random matrix theory of FTE.
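The L1 indicator itself is straightforward to compute from binned one-body distributions. The sketch below uses synthetic Beta-distributed samples in place of the actual Schmidt-eigenvalue and FTE distributions, purely to illustrate the measure; the distributions and bin count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-ins for the two distributions being compared: "chaos" plays the role
# of the sampled squared Schmidt eigenvalues, "ref" the analytical FTE
# prediction (both synthetic Beta draws here).
chaos = rng.beta(1.0, 9.0, 200_000)
ref = rng.beta(1.2, 9.0, 200_000)

bins = np.linspace(0.0, 1.0, 201)
p, _ = np.histogram(chaos, bins, density=True)
q, _ = np.histogram(ref, bins, density=True)
dx = bins[1] - bins[0]

# L1 distance between the binned densities: 0 for identical distributions,
# 2 for fully disjoint support.
l1 = np.sum(np.abs(p - q)) * dx
```

Because histogram noise always contributes a positive bias to l1, a finite ensemble sets a floor on the measurable difference, which is exactly the masking effect the abstract describes.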
Statistical-Dynamical Seasonal Forecasts of Central-Southwest Asian Winter Precipitation.
NASA Astrophysics Data System (ADS)
Tippett, Michael K.; Goddard, Lisa; Barnston, Anthony G.
2005-06-01
Interannual precipitation variability in central-southwest (CSW) Asia has been associated with East Asian jet stream variability and western Pacific tropical convection. However, atmospheric general circulation models (AGCMs) forced by observed sea surface temperature (SST) poorly simulate the region's interannual precipitation variability. The statistical-dynamical approach uses statistical methods to correct systematic deficiencies in the response of AGCMs to SST forcing. Statistical correction methods linking model-simulated Indo-west Pacific precipitation and observed CSW Asia precipitation result in modest, but statistically significant, cross-validated simulation skill in the northeast part of the domain for the period from 1951 to 1998. The statistical-dynamical method is also applied to recent (winter 1998/99 to 2002/03) multimodel, two-tier December-March precipitation forecasts initiated in October. This period includes four years (winters 1998/99 to 2001/02) of severe drought. Tercile probability forecasts are produced using ensemble-mean forecasts and forecast error estimates. The statistical-dynamical forecasts show enhanced probability of below-normal precipitation for the four drought years and capture the return to normal conditions in part of the region during the winter of 2002/03. "May Kabul be without gold, but not without snow." (Traditional Afghan proverb)
Stochastic dynamics of N correlated binary variables and non-extensive statistical mechanics
NASA Astrophysics Data System (ADS)
Kononovicius, A.; Ruseckas, J.
2016-04-01
Non-extensive statistical mechanics has been applied to describe a variety of complex systems with inherent correlations and feedback loops. Here we present a dynamical model, based on a previously proposed static model, that exhibits in the thermodynamic limit the extensivity of the Tsallis entropy with q < 1 as well as a q-Gaussian distribution. The dynamical model consists of a one-dimensional ring of particles characterized by correlated binary random variables, which are allowed to flip according to a simple random-walk rule. The proposed dynamical model provides insight into how a mesoscopic dynamics characterized by non-extensive statistical mechanics could emerge from a microscopic description of the system.
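A minimal sketch of such a microscopic ring dynamics, under assumptions of my own: a plain nearest-neighbour swap stands in for the paper's correlation-building flip rule, so only the geometry of the model (binary variables on a ring, random-walk-style updates) is faithful here.

```python
import random

def step(ring, rng):
    """One microscopic update of a one-dimensional ring of binary
    variables: pick a random site and swap it with a random neighbour,
    so occupied sites perform a simple random walk on the ring.
    Illustrative only: the published model's flip rates depend on the
    configuration, which is what builds in the correlations."""
    n = len(ring)
    i = rng.randrange(n)
    j = (i + rng.choice((-1, 1))) % n
    ring[i], ring[j] = ring[j], ring[i]

rng = random.Random(7)
ring = [1] * 30 + [0] * 70        # 30 "particles" on a ring of 100 sites
for _ in range(10_000):
    step(ring, rng)
occupied = sum(ring)              # swaps conserve the particle number: 30
```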
Dynamics of statistical distance: Quantum limits for two-level clocks
Braunstein, S.L.; Milburn, G.J.
1995-03-01
We study the evolution of statistical distance on the Bloch sphere under unitary and nonunitary dynamics. This corresponds to studying the limits to clock precision for a clock constructed from a two-state system. We find that the initial motion away from pure states under nonunitary dynamics yields the greatest accuracy for a "one-tick" clock; in this case the clock's precision is not limited by the largest frequency of the system.
Sapsis, Themistoklis P; Majda, Andrew J
2013-08-20
A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra. PMID:23918398
Nguyen, Y.; Nguyen, Nam X.; Rogers, Jamie L.; Liao, Jun; MacMillan, John B.; Jiang, Youxing; Sperandio, Vanessa
2015-05-19
Bacteria engage in chemical signaling, termed quorum sensing (QS), to mediate intercellular communication, mimicking multicellular organisms. The LuxR family of QS transcription factors regulates gene expression, coordinating population behavior by sensing endogenous acyl homoserine lactones (AHLs). However, some bacteria (such as Escherichia coli) do not produce AHLs. These LuxR orphans sense exogenous AHLs but also regulate transcription in the absence of AHLs. Importantly, this AHL-independent regulatory mechanism is still largely unknown. Here we present several structures of one such orphan LuxR-type protein, SdiA, from enterohemorrhagic E. coli (EHEC), in the presence and absence of AHL. SdiA is actually not in an apo state without AHL but is regulated by a previously unknown endogenous ligand, 1-octanoyl-rac-glycerol (OCL), which is ubiquitously found throughout the tree of life and serves as an energy source, signaling molecule, and substrate for membrane biogenesis. While exogenous AHL renders to SdiA higher stability and DNA binding affinity, OCL may function as a chemical chaperone placeholder that stabilizes SdiA, allowing for basal activity. Structural comparison between SdiA-AHL and SdiA-OCL complexes provides crucial mechanistic insights into the ligand regulation of AHL-dependent and -independent function of LuxR-type proteins. Importantly, in addition to its contribution to basic science, this work has implications for public health, inasmuch as the SdiA signaling system aids the deadly human pathogen EHEC to adapt to a commensal lifestyle in the gastrointestinal (GI) tract of cattle, its main reservoir. These studies open exciting and novel avenues to control shedding of this human pathogen in the environment. IMPORTANCE Quorum sensing refers to bacterial chemical signaling. The QS acyl homoserine lactone (AHL) signals are recognized by LuxR-type receptors that regulate gene transcription. 
However, some bacteria have orphan LuxR-type receptors and do not produce AHLs, sensing them from other bacteria. We solved three structures of the E. coli SdiA orphan, in the presence and absence of AHL. SdiA with no AHL is not in an apo state but is regulated by a previously unknown endogenous ligand, 1-octanoyl-rac-glycerol (OCL). OCL is ubiquitously found in prokaryotes and eukaryotes and is a phospholipid precursor for membrane biogenesis and a signaling molecule. While AHL renders to SdiA higher stability and DNA-binding affinity, OCL functions as a chemical chaperone placeholder, stabilizing SdiA and allowing for basal activity. Our studies provide crucial mechanistic insights into the ligand regulation of SdiA activity.
Statistical and dynamical assessment of vegetation feedbacks on climate over the boreal forest
NASA Astrophysics Data System (ADS)
Notaro, Michael; Liu, Zhengyu
2008-11-01
Vegetation feedbacks over Asiatic Russia are assessed through a combined statistical and dynamical approach in a fully coupled atmosphere-ocean-land model, FOAM-LPJ. The dynamical assessment is comprised of initial value ensemble experiments in which the forest cover fraction is initially reduced over Asiatic Russia, replaced by grass cover, and then the climatic response is determined. The statistical feedback approach, adopted from previous studies of ocean-atmosphere interactions, is applied to compute the feedback of forest cover on subsequent temperature and precipitation in the control simulation. Both methodologies indicate a year-round positive feedback on temperature and precipitation, strongest in spring and moderately substantial in summer. Reduced boreal forest cover enhances the surface albedo, leading to an extended snow season, lower air temperatures, increased atmospheric stability, and enhanced low cloud cover. Changes in the hydrological cycle include diminished transpiration and moisture recycling, supporting a reduction in precipitation. The close agreement in sign and magnitude between the statistical and dynamical feedback assessments testifies to the reliability of the statistical approach. An additional statistical analysis of monthly vegetation feedbacks over Asiatic Russia reveals a robust positive feedback on air temperature of similar quantitative strength in two coupled models, FOAM-LPJ and CAM3-CLM3, and the observational record.
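The lagged-covariance idea behind such a statistical feedback assessment can be sketched as follows. This is a generic illustration with synthetic data and a prescribed feedback strength, not the FOAM-LPJ analysis itself; all variable names are hypothetical.

```python
import numpy as np

def feedback_parameter(atm, veg, lag=1):
    """Lagged-covariance feedback estimate in the spirit of the
    statistical (equilibrium feedback) assessment:
    cov(atm(t+lag), veg(t)) / cov(veg(t+lag), veg(t))."""
    a, v = np.asarray(atm), np.asarray(veg)
    num = np.cov(a[lag:], v[:-lag])[0, 1]
    den = np.cov(v[lag:], v[:-lag])[0, 1]
    return num / den

# Synthetic test: a slowly varying AR(1) "vegetation" anomaly that
# feeds back on a noisy "atmosphere" with prescribed strength 0.5.
rng = np.random.default_rng(1)
n, lam = 200_000, 0.5
eps = rng.normal(size=n)
veg = np.empty(n)
veg[0] = 0.0
for t in range(1, n):
    veg[t] = 0.9 * veg[t - 1] + eps[t]
atm = lam * veg + rng.normal(size=n)
est = feedback_parameter(atm, veg, lag=1)  # recovers approximately 0.5
```

Because the fast atmospheric noise is uncorrelated with earlier vegetation, the lagged ratio isolates the vegetation's influence on the atmosphere rather than the reverse.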
Enriching Spatial Data Infrastructure (SDI) by User Generated Contents for Transportation
NASA Astrophysics Data System (ADS)
Shakeri, M.; Alimohammadi, A.; Sadeghi-Niaraki, A.; Alesheikh, A. A.
2013-09-01
Spatial data is one of the most critical elements underpinning decision making in many disciplines, yet accessing and sharing it has long been a struggle for researchers. Spatial data infrastructure (SDI) plays a key role in spatial data sharing by providing a suitable platform for collaboration and cooperation among different data-producing organizations. In recent years, the SDI vision has moved toward a user-centric platform, leading to a new, enriched (third) generation of SDI whose aim is an environment where users can cooperate to handle spatial data in an effective and satisfactory way. User-centric SDI concentrates on users, their requirements, and their preferences, whereas earlier SDI initiatives concentrated mainly on technological issues such as data harmonization, standardized metadata models, and standardized web services for data discovery, visualization, and download. At the same time, new technologies such as GPS-equipped smartphones, navigation devices, and Web 2.0 have enabled citizens to participate actively in the production and sharing of spatial information. This has led to the emergence of a new phenomenon called Volunteered Geographic Information (VGI). VGI describes any voluntarily collected content with a geographic element; its distinctive feature is that the geographic information can be collected and produced by citizens with varying formal expertise and knowledge of spatial or geographical concepts. Ordinary citizens can therefore contribute massive sources of information that cannot be ignored, and these are valuable spatial information sources for SDI, usable for completing, improving, and updating existing databases. Spatial information and technologies are an important part of transportation systems.
Planning, design, and operation of transportation systems require the exchange of large volumes of spatial data and often close cooperation among various organizations. However, there is as yet no technical and organizational process that yields a data infrastructure suited to the diverse needs of transportation, so common standards and a simple data-exchange mechanism are strongly needed in this field for decision support. Since one of the main purposes of transportation projects is to improve the quality of services provided to users, the users themselves must be involved in the decision-making processes, through public participation and involvement at all stages of a project. In other words, using public knowledge and information as an additional source of information is very important for making better and more efficient decisions. Public participation in transportation projects can also help organizations strengthen public support, since a lack of public support can lead to the failure of technically valid projects. Nevertheless, owing to the complexity of transportation tasks and the lack of suitable environments and methods for facilitating public participation and for collecting and analyzing public information and opinions, public participation in this field has so far received little attention. This paper reviews previous research on enriched SDI development and its movement toward VGI, focusing on public participation in transportation projects. To this end, the methods and models used in previous studies are first surveyed and classified; then, methods from prior work on VGI and transportation are conceptualized within SDI; finally, a method for transportation projects is proposed. The results indicate that the new generation of SDI can be successfully integrated with public participation for transportation projects.
Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function
ERIC Educational Resources Information Center
Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.
2011-01-01
In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
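The concept the graphic teaches can also be sketched in a few lines of code (a generic illustration, not the authors' Excel workbook): a density's height responds to its parameters, but the area beneath it is always 1.

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Probability density of the normal distribution N(mu, sigma^2)."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def area_under(pdf, lo, hi, n=100_000):
    """Midpoint-rule approximation of the integral of pdf over [lo, hi]."""
    h = (hi - lo) / n
    return sum(pdf(lo + (k + 0.5) * h) for k in range(n)) * h

# Key conceptual point: the density itself may exceed 1 at a point,
# yet the total area under the curve is always 1.
peak = normal_pdf(0.0, mu=0.0, sigma=0.25)        # above 1 for small sigma
total = area_under(lambda x: normal_pdf(x, mu=0.0, sigma=0.25), -5.0, 5.0)
```

Re-evaluating `peak` and `total` while varying `mu` and `sigma` reproduces, in code, the interaction the dynamic graphic offers in a spreadsheet.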
NASA Astrophysics Data System (ADS)
Schepen, Andrew; Wang, Q. J.
2015-03-01
The Australian Bureau of Meteorology produces statistical and dynamic seasonal streamflow forecasts. The statistical and dynamic forecasts are similarly reliable in ensemble spread; however, skill varies by catchment and season. Therefore, it may be possible to optimize forecasting skill by weighting and merging statistical and dynamic forecasts. Two model averaging methods are evaluated for merging forecasts for 12 locations. The first method, Bayesian model averaging (BMA), applies averaging to forecast probability densities (and thus cumulative probabilities) for a given forecast variable value. The second method, quantile model averaging (QMA), applies averaging to forecast variable values (quantiles) for a given cumulative probability (quantile fraction). BMA and QMA are found to perform similarly in terms of overall skill scores and reliability in ensemble spread. Both methods improve forecast skill across catchments and seasons. However, when both the statistical and dynamical forecasting approaches are skillful but produce, on special occasions, very different event forecasts, the BMA merged forecasts for these events can have unusually wide and bimodal distributions. In contrast, the distributions of the QMA merged forecasts for these events are narrower, unimodal and generally more smoothly shaped, and are potentially more easily communicated to and interpreted by the forecast users. Such special occasions are found to be rare. However, every forecast counts in an operational service, and therefore the occasional contrast in merged forecasts between the two methods may be more significant than the indifference shown by the overall skill and reliability performance.
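The two merging schemes differ only in what gets averaged. A toy sketch using Python's statistics.NormalDist, with two equally weighted and strongly disagreeing member forecasts (an illustration, not the Bureau's operational configuration), makes the contrast concrete: BMA yields a wide, bimodal mixture, while QMA stays unimodal and central.

```python
from statistics import NormalDist

# Two equally weighted member forecasts that disagree strongly.
f1, f2 = NormalDist(10.0, 1.0), NormalDist(20.0, 1.0)
w1, w2 = 0.5, 0.5

def bma_cdf(x):
    """BMA: average the cumulative probabilities at a given value x."""
    return w1 * f1.cdf(x) + w2 * f2.cdf(x)

def qma_quantile(p):
    """QMA: average the quantile values at a given cumulative probability p."""
    return w1 * f1.inv_cdf(p) + w2 * f2.inv_cdf(p)

# BMA keeps both members' modes, so the merged density has a deep
# trough midway between them; QMA instead places its median there.
bma_density_mid = w1 * f1.pdf(15.0) + w2 * f2.pdf(15.0)  # near zero
bma_cdf_mid = bma_cdf(15.0)                              # 0.5: mass is split
qma_median = qma_quantile(0.5)                           # 15.0: central peak
```

When the members agree, the two schemes give nearly identical merged forecasts, which matches the abstract's finding that the contrast matters only on the rare occasions when skilful approaches disagree strongly.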
A new statistical dynamic analysis of ecological niches for China’s financial centres
NASA Astrophysics Data System (ADS)
Du, Huibin; Xia, Qiongqiong; Ma, Xuan; Chai, Lihe
2014-02-01
This study, undertaken from the perspective of statistical dynamics, proposes the treatment of financial centres as an ecosystem, creates a multidimensional financial centre niche (FC-niche) under given generalised entropy and constraints, and interprets the evolutionary process of an FC-niche with dynamic equations obtained from the maximum generalised entropy principle (MGEP). To solve these dynamic equations, a self-organised feature map (SOM) is designed. Finally, the values and evolutionary rules of FC-niches in China’s 29 major cities are simulated as a case study.
Dynamical and statistical description of multifragmentation in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Mao, Lihua; Wang, Ning; Ou, Li
2015-04-01
To explore the roles of dynamical and statistical models in describing multifragmentation in heavy-ion collisions at intermediate energies, the fragment charge distributions of 197Au + 197Au at 35 MeV/u are analyzed with a hybrid of the improved quantum molecular dynamics (ImQMD) model and the statistical model GEMINI. We find that the ImQMD model alone describes well the charge distributions of fragments produced in central and semicentral collisions. For peripheral collisions of Au + Au at 35 MeV/u, however, the statistical model is required at the end of the ImQMD simulations for a better description of the fragment charge distribution. With the hybrid ImQMD+GEMINI model, the fragment charge distribution of Au + Au at 35 MeV/u is reproduced reasonably well. The time evolution of the excitation energies of the primary fragments is also investigated.
Bargueño, P; Jambrina, P G; Alvariño, J M; Menéndez, M; Verdasco, E; Hankel, M; Smith, S C; Aoiz, F J; González-Lezana, T
2011-05-14
The dynamics of the reaction O(1D) + HCl → ClO + H, OH + Cl has been investigated in detail by means of a time-dependent wave packet (TDWP) method, in comparison with quasiclassical trajectory (QCT) and statistical approaches, on the ground potential energy surface of Martínez et al. [Phys. Chem. Chem. Phys., 2000, 2, 589]. Fully coupled quantum mechanical (QM) reaction probabilities for high values of the total angular momentum (J ≤ 50) are reported for the first time. In the low collision energy regime (Ec ≤ 0.4 eV) the TDWP probabilities are well reproduced by the QCT and statistical results for the ClO-forming product channel, but for the OH + Cl arrangement only the QCT probabilities agree with the QM values. The good accord found between the rigorous statistical models and the dynamical QM and QCT calculations for the O + HCl → ClO + H process supports the assumption that the reaction pathway leading to ClO is predominantly governed by a complex-forming mechanism. In addition, to further test the statistical character of this reaction channel, the laboratory angular distribution and time-of-flight spectra obtained in a crossed molecular beam study by Balucani et al. [Chem. Phys. Lett., 1991, 180, 34] at a collision energy as high as 0.53 eV have been simulated using the state-resolved differential cross sections obtained with the statistical approaches, yielding satisfactory agreement with the experimental results. For the other channel, O + HCl → OH + Cl, noticeable differences between the statistical results and those of the QCT calculations suggest that the dynamics of the reaction is controlled by a direct mechanism. The comparison between the QCT and QM-TDWP results over the whole range of collision energies lends credence to the QCT description of the dynamics of this reaction. PMID:21431209
Indole is an inter-species biofilm signal mediated by SdiA
Lee, Jintae; Jayaraman, Arul; Wood, Thomas K
2007-01-01
Background: As a stationary-phase signal, indole is secreted in large quantities into rich medium by Escherichia coli and has been shown to control several genes (e.g., astD, tnaB, gabT), multi-drug exporters, and the pathogenicity island of E. coli; however, its impact on biofilm formation has not been well studied. Results: Through a series of global transcriptome analyses, confocal microscopy, isogenic mutants, and dual-species biofilms, we show here that indole is a non-toxic signal that controls E. coli biofilms by repressing motility, inducing the sensor of the quorum-sensing signal autoinducer-1 (SdiA), and influencing acid resistance (e.g., hdeABD, gadABCEX). Isogenic mutants showed that these associated proteins are directly related to biofilm formation (e.g., the sdiA mutation increased biofilm formation 50-fold), and SdiA-mediated transcription was shown to be influenced by indole. The reduction in motility due to indole addition changes the biofilm architecture from scattered towers to flat colonies. Additionally, there are 12-fold more E. coli cells in dual-species biofilms grown in the presence of Pseudomonas cells engineered to express toluene o-monooxygenase (TOM, which converts indole to an insoluble indigoid) than in biofilms with pseudomonads that do not express TOM, owing to a 22-fold reduction in extracellular indole. Indole also stimulates biofilm formation in pseudomonads. Further evidence that the indole effects are mediated by SdiA and homoserine lactone quorum sensing is that the addition of N-butyryl-, N-hexanoyl-, and N-octanoyl-L-homoserine lactones represses E. coli biofilm formation in the wild-type strain but not in the sdiA mutant. Conclusion: Indole is an interspecies signal that decreases E. coli biofilms through SdiA and increases those of pseudomonads. Indole may be manipulated to control biofilm formation by oxygenases of bacteria that do not synthesize it in a dual-species biofilm. Furthermore, E. coli changes its biofilm in response to signals it cannot synthesize (homoserine lactones), and pseudomonads respond to signals they do not synthesize (indole). PMID:17511876
NASA Astrophysics Data System (ADS)
Chaikov, Leonid L.; Kirichenko, Marina N.; Krivokhizha, Svetlana V.; Zaritskiy, Alexander R.
2015-05-01
This work studies the sizes and concentrations of proteins and their aggregates in blood plasma samples using static and dynamic light scattering. A new approach is proposed, based on many repeated measurements of the intensity size distribution and on counting the number of registrations of different sizes, which makes it possible to obtain statistically confident particle sizes and concentrations in the blood plasma. The statistically confident particle sizes in the blood plasma were found to be stable over 30 h of observation, whereas the concentrations of particles of different sizes varied as material was redistributed between them by protein degradation processes.
Yakhnin, Helen; Baker, Carol S.; Berezin, Igor; Evangelista, Michael A.; Rassin, Alisa; Romeo, Tony; Babitzke, Paul
2011-01-01
The RNA binding protein CsrA is the central component of a conserved global regulatory system that activates or represses gene expression posttranscriptionally. In every known example of CsrA-mediated translational control, CsrA binds to the 5′ untranslated region of target transcripts, thereby repressing translation initiation and/or altering the stability of the RNA. Furthermore, with few exceptions, repression by CsrA involves binding directly to the Shine-Dalgarno sequence and blocking ribosome binding. sdiA encodes the quorum-sensing receptor for N-acyl-l-homoserine lactone in Escherichia coli. Because sdiA indirectly stimulates transcription of csrB, which encodes a small RNA (sRNA) antagonist of CsrA, we further explored the relationship between sdiA and the Csr system. Primer extension analysis revealed four putative transcription start sites within 85 nucleotides of the sdiA initiation codon. Potential σ70-dependent promoters were identified for each of these primer extension products. In addition, two CsrA binding sites were predicted in the initially translated region of sdiA. Expression of chromosomally integrated sdiA′-′lacZ translational fusions containing the entire promoter and CsrA binding site regions indicates that CsrA represses sdiA expression. The results from gel shift and footprint studies demonstrate that tight binding of CsrA requires both of these sites. Furthermore, the results from toeprint and in vitro translation experiments indicate that CsrA represses translation of sdiA by directly competing with 30S ribosomal subunit binding. Thus, this represents the first example of CsrA preventing translation by interacting solely within the coding region of an mRNA target. PMID:21908661
NASA Astrophysics Data System (ADS)
Taylor, Keith; Amitay, Michael
2015-11-01
The presence of dynamic stall on wind turbines complicates the goal of energy production, as variations in input loading run counter to the aim of producing a continuous, level power output from a wind turbine. While dynamic stall has been studied extensively both experimentally and computationally, its control through active flow control is still a nascent field of research. To understand the flow field around a dynamically pitching finite-span airfoil, a new method of characterizing the effectiveness of flow control in a statistical sense is presented. This method applies the gamma-1 criterion to particle image velocimetry (PIV) images to identify the shed vortices, and then statistically describes how the distribution of their circulation strengths changes during dynamic stall. This contrasts with previous work, which addressed only the phase-averaged flow field; phase averaging does not fully illustrate how the flow field varies loop by loop, since there is significant variation between the phase-averaged and the measured instantaneous flow fields. The purpose of this work is to present a new method of characterizing the effectiveness of flow control under dynamic conditions without the need to capture PIV at high frequencies.
NASA Astrophysics Data System (ADS)
Notaro, M.; Liu, Z.
2007-12-01
Vegetation feedbacks over Asiatic Russia are assessed through a combined statistical and dynamical approach in a fully coupled atmosphere-ocean-land model, FOAM-LPJ. The dynamical assessment is comprised of initial value ensemble experiments in which the forest cover fraction is initially reduced over Asiatic Russia, replaced by grass cover, and then the climatic response is determined. The statistical feedback approach, adopted from previous studies of ocean-atmosphere interactions, is applied to compute the feedback of forest cover on subsequent temperature and precipitation in the control simulation. Both methodologies indicate a year-round positive feedback on temperature and precipitation, strongest in spring and moderately substantial in summer. Reduced boreal forest cover enhances the surface albedo, leading to an extended snow season, lower air temperatures, increased atmospheric stability, and enhanced low cloud cover. Changes in the hydrological cycle include diminished transpiration and moisture recycling, supporting a reduction in precipitation. The close agreement in sign and magnitude between the statistical and dynamical feedback assessments testifies to the reliability of the statistical approach. This study supports the previous finding of a strong positive vegetation feedback on air temperature over Asiatic Russia in the observational record.
SDI-based business processes: A territorial analysis web information system in Spain
NASA Astrophysics Data System (ADS)
Béjar, Rubén; Latre, Miguel Á.; Lopez-Pellicer, Francisco J.; Nogueras-Iso, Javier; Zarazaga-Soria, F. J.; Muro-Medrano, Pedro R.
2012-09-01
Spatial Data Infrastructures (SDIs) provide access to geospatial data and operations through interoperable Web services. These data and operations can be chained to set up specialized geospatial business processes, and these processes can give support to different applications. End users can benefit from these applications, while experts can integrate the Web services in their own business processes and developments. This paper presents an SDI-based territorial analysis Web information system for Spain, which gives access to land cover, topography and elevation data, as well as to a number of interoperable geospatial operations by means of a Web Processing Service (WPS). Several examples illustrate how different territorial analysis business processes are supported. The system has been established by the Spanish National SDI (Infraestructura de Datos Espaciales de España, IDEE) both as an experimental platform for geoscientists and geoinformation system developers, and as a mechanism to contribute to the Spanish citizens' knowledge about their territory.
Dynamical role of anyonic excitation statistics in rapidly rotating bose gases.
Fischer, Uwe R
2004-10-15
We show that for rotating harmonically trapped Bose gases in a fractional quantum Hall state, the anyonic excitation statistics in the rotating gas can effectively play a dynamical role. For particular values of the two-dimensional coupling constant g = -2πℏ²(2k-1)/m, where k is a positive integer, the system becomes a noninteracting gas of anyons, with exactly obtainable solutions satisfying Bogomol'nyi self-dual order parameter equations. Attractive Bose gases under rapid rotation thus can be stabilized in the thermodynamic limit due to the anyonic statistics of their quasiparticle excitations. PMID:15524959
A Statistical Dynamic Approach to Structural Evolution of Complex Capital Market Systems
NASA Astrophysics Data System (ADS)
Shao, Xiao; Chai, Li H.
As an important part of modern financial systems, the capital market plays a crucial role in diverse social resource allocation and economic exchange. Going beyond traditional models and theories based on neoclassical economics, and treating capital markets as typical complex open systems, this paper attempts to develop a new approach that overcomes some shortcomings of the available research. By defining the generalized entropy of capital market systems, a theoretical model and a nonlinear dynamic equation for the operation of capital markets are proposed from a statistical dynamic perspective. The US securities market from 1995 to 2001 is then simulated and analyzed as a typical case. Some instructive results are discussed and summarized.
The applications of Complexity Theory and Tsallis Non-extensive Statistics to Solar Plasma Dynamics
NASA Astrophysics Data System (ADS)
Pavlos, George
2015-04-01
As the solar plasma lives far from equilibrium, it is an excellent laboratory for testing complexity theory and non-equilibrium statistical mechanics. In this study, we present the highlights of complexity theory and Tsallis non-extensive statistical mechanics as concerns their applications to solar plasma dynamics, especially to sunspot, solar flare and solar wind phenomena. Generally, when a physical system is driven far from equilibrium states, some novel characteristics can be observed related to the nonlinear character of its dynamics. In particular, the nonlinearity in space plasma dynamics can generate intermittent turbulence with the typical characteristics of the anomalous diffusion process and strange topologies of stochastic space plasma fields (velocity and magnetic fields) caused by the strange dynamics and strange kinetics (Zaslavsky, 2002). In addition, according to Zelenyi and Milovanov (2004), the complex character of the space plasma system includes the existence of non-equilibrium (quasi-)stationary states (NESS) having the topology of a percolating fractal set. The stabilization of a system near the NESS is perceived as a transition into a turbulent state determined by self-organization processes. The long-range correlation effects manifest themselves as a strange non-Gaussian behavior of kinetic processes near the NESS plasma state. The complex character of space plasma can also be described by the non-extensive statistical thermodynamics pioneered by Tsallis, which offers a consistent and effective theoretical framework, based on a generalization of the Boltzmann-Gibbs (BG) entropy, to describe far-from-equilibrium nonlinear complex dynamics (Tsallis, 2009). In a series of recent papers, the hypothesis of Tsallis non-extensive statistics in the magnetosphere, sunspot dynamics, solar flares, solar wind and space plasma in general was tested and verified (Karakatsanis et al., 2013; Pavlos et al., 2014; 2015).
Our study includes the analysis of solar plasma time series in three cases: sunspot index, solar flare and solar wind data. The non-linear analysis of the sunspot index is embedded in the non-extensive statistical theory of Tsallis (1988; 2004; 2009). The q-triplet of Tsallis, as well as the correlation dimension and the Lyapunov exponent spectrum, were estimated for the SVD components of the sunspot index time series. Also, the multifractal scaling exponent spectrum f(a), the generalized Renyi dimension spectrum D(q) and the spectrum J(p) of the structure function exponents were estimated experimentally and theoretically by using the q-entropy principle included in Tsallis non-extensive statistical theory, following Arimitsu and Arimitsu (2000, 2001). Our analysis showed clearly the following: (a) a phase transition process in the solar dynamics from a high dimensional non-Gaussian SOC state to a low dimensional non-Gaussian chaotic state, (b) strong intermittent solar turbulence and an anomalous (multifractal) diffusion solar process, which is strengthened as the solar dynamics makes a phase transition to low dimensional chaos, in accordance with the studies of Ruzmaikin, Zelenyi and Milovanov (Zelenyi and Milovanov, 1991; Milovanov and Zelenyi, 1993; Ruzmaikin et al., 1996), (c) faithful agreement of Tsallis non-equilibrium statistical theory with the experimental estimations of: (i) the non-Gaussian probability distribution function P(x), (ii) the multifractal scaling exponent spectrum f(a) and generalized Renyi dimension spectrum D(q), (iii) the exponent spectrum J(p) of the structure functions estimated for the sunspot index and its underlying non-equilibrium solar dynamics. Also, the q-triplet of Tsallis, as well as the correlation dimension and the Lyapunov exponent spectrum, were estimated for the singular value decomposition (SVD) components of the solar flares time series.
Also, the multifractal scaling exponent spectrum f(a), the generalized Renyi dimension spectrum D(q) and the spectrum J(p) of the structure function exponents were estimated experimentally and theoretically by using the q-entropy principle included in Tsallis non-extensive statistical theory, following Arimitsu and Arimitsu (2000). Our analysis showed clearly the following: (a) a phase transition process in the solar flare dynamics from a high dimensional non-Gaussian self-organized critical (SOC) state to a low dimensional, also non-Gaussian, chaotic state, (b) strong intermittent solar corona turbulence and an anomalous (multifractal) diffusion solar corona process, which is strengthened as the solar corona dynamics makes a phase transition to low dimensional chaos, (c) faithful agreement of Tsallis non-equilibrium statistical theory with the experimental estimations of the functions: (i) the non-Gaussian probability distribution function P(x), (ii) f(a) and D(q), and (iii) J(p) for the solar flares time series and its underlying non-equilibrium solar dynamics, and (d) a solar flare dynamical profile similar to that of the solar corona zone as regards the phase transition process from self-organized criticality (SOC) to a chaotic state. However, the solar low corona (solar flare) dynamical characteristics can be clearly discriminated from those of the solar convection zone. Finally, we present novel results revealing non-equilibrium phase transition processes in the solar wind plasma during a strong shock event. The solar wind plasma, as well as the entire solar plasma system, is a typical case of stochastic spatiotemporal distribution of physical state variables such as force fields and matter fields (particle and current densities or bulk plasma distributions).
This study shows clearly the non-extensive and non-Gaussian character of the solar wind plasma and the existence of multi-scale strong correlations from the microscopic to the macroscopic level. It also underlines the inefficiency of classical magneto-hydro-dynamic (MHD) or plasma statistical theories, based on the classical central limit theorem (CLT), to explain the complexity of solar wind dynamics, since these theories assume smooth and differentiable spatio-temporal functions (MHD theory) or Gaussian statistics (Boltzmann-Maxwell statistical mechanics). On the contrary, the results of this study indicate the presence of non-Gaussian, non-extensive statistics with heavy-tailed probability distribution functions, which are related to the q-extension of the CLT. Finally, the results of this study can be understood in the framework of modern theoretical concepts such as non-extensive statistical mechanics (Tsallis, 2009), fractal topology (Zelenyi and Milovanov, 2004), turbulence theory (Frisch, 1996), strange dynamics (Zaslavsky, 2002), percolation theory (Milovanov, 1997), anomalous diffusion and anomalous transport theory (Milovanov, 2001), fractional dynamics (Tarasov, 2013) and non-equilibrium phase transition theory (Chang, 1992).
References:
1. T. Arimitsu, N. Arimitsu, Tsallis statistics and fully developed turbulence, J. Phys. A: Math. Gen. 33 (2000) L235.
2. T. Arimitsu, N. Arimitsu, Analysis of turbulence by statistics based on generalized entropies, Physica A 295 (2001) 177-194.
3. T. Chang, Low-dimensional behavior and symmetry breaking of stochastic systems near criticality: can these effects be observed in space and in the laboratory?, IEEE Trans. Plasma Sci. 20 (6) (1992) 691-694.
4. U. Frisch, Turbulence, Cambridge University Press, Cambridge, UK, 1996, p. 310.
5. L.P. Karakatsanis, G.P. Pavlos, M.N. Xenakis, Tsallis non-extensive statistics, intermittent turbulence, SOC and chaos in the solar plasma. Part two: Solar flares dynamics, Physica A 392 (2013) 3920-3944.
6. A.V. Milovanov, Topological proof for the Alexander-Orbach conjecture, Phys. Rev. E 56 (3) (1997) 2437-2446.
7. A.V. Milovanov, L.M. Zelenyi, Fracton excitations as a driving mechanism for the self-organized dynamical structuring in the solar wind, Astrophys. Space Sci. 264 (1-4) (1999) 317-345.
8. A.V. Milovanov, Stochastic dynamics from the fractional Fokker-Planck-Kolmogorov equation: large-scale behavior of the turbulent transport coefficient, Phys. Rev. E 63 (2001) 047301.
9. G.P. Pavlos, et al., Universality of non-extensive Tsallis statistics and time series analysis: Theory and applications, Physica A 395 (2014) 58-95.
10. G.P. Pavlos, et al., Tsallis non-extensive statistics and solar wind plasma complexity, Physica A 422 (2015) 113-135.
11. A.A. Ruzmaikin, et al., Spectral properties of solar convection and diffusion, ApJ 471 (1996) 1022.
12. V.E. Tarasov, Review of some promising fractional physical models, Internat. J. Modern Phys. B 27 (9) (2013) 1330005.
13. C. Tsallis, Possible generalization of Boltzmann-Gibbs statistics, J. Stat. Phys. 52 (1-2) (1988) 479-487.
14. C. Tsallis, Nonextensive statistical mechanics: construction and physical interpretation, in: M. Gell-Mann, C. Tsallis (Eds.), Nonextensive Entropy - Interdisciplinary Applications, Oxford Univ. Press, 2004, pp. 1-53.
15. C. Tsallis, Introduction to Nonextensive Statistical Mechanics, Springer, 2009.
16. G.M. Zaslavsky, Chaos, fractional kinetics, and anomalous transport, Physics Reports 371 (2002) 461-580.
17. L.M. Zelenyi, A.V. Milovanov, Fractal properties of sunspots, Sov. Astron. Lett. 17 (6) (1991) 425.
18. L.M. Zelenyi, A.V. Milovanov, Fractal topology and strange kinetics: from percolation theory to problems in cosmic electrodynamics, Phys.-Usp. 47 (8) (2004) 749-788.
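As background to the non-extensive framework invoked throughout this abstract, the Tsallis q-entropy S_q = (1 - Σ_i p_i^q)/(q - 1) generalizes the Boltzmann-Gibbs entropy, which is recovered in the limit q → 1. A minimal numerical sketch (illustrative only, with k_B set to 1):

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis q-entropy S_q = (1 - sum_i p_i^q) / (q - 1), with k_B = 1.
    In the limit q -> 1 it reduces to the Boltzmann-Gibbs (Shannon) entropy."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                                  # ignore zero-probability states
    if abs(q - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))             # BG limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Uniform distribution over W states: S_q = (1 - W**(1 - q)) / (q - 1)
W = 8
p_uniform = np.full(W, 1.0 / W)
print(round(tsallis_entropy(p_uniform, 1.0), 4))  # BG limit: ln 8 ≈ 2.0794
print(round(tsallis_entropy(p_uniform, 2.0), 4))  # 1 - 1/8 = 0.875
```

The deviation of the estimated q from 1 (the q-triplet mentioned above) is what quantifies how far the plasma statistics depart from the Gaussian, extensive BG case.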
Dynamic heterogeneity and non-Gaussian statistics for acetylcholine receptors on live cell membrane.
He, W; Song, H; Su, Y; Geng, L; Ackerson, B J; Peng, H B; Tong, P
2016-01-01
The Brownian motion of molecules at thermal equilibrium usually has a finite correlation time and will eventually be randomized after a long delay time, so that their displacement follows Gaussian statistics. This is true even when the molecules have experienced a complex environment with a finite correlation time. Here, we report that the lateral motion of the acetylcholine receptors on live muscle cell membranes does not follow the Gaussian statistics of normal Brownian diffusion. From a careful analysis of a large volume of protein trajectories obtained over a wide range of sampling rates and long durations, we find that the normalized histogram of the protein displacements shows an exponential tail, which is robust and universal for cells under different conditions. The experiment indicates that the observed non-Gaussian statistics and dynamic heterogeneity are inherently linked to the slow, active remodelling of the underlying cortical actin network. PMID:27226072
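A common way to quantify the departure from Gaussian displacement statistics described above is the non-Gaussian parameter α2 = ⟨Δx⁴⟩/(3⟨Δx²⟩²) - 1, which vanishes for Gaussian steps and is positive for heavy (e.g. exponential) tails. A short sketch on synthetic displacements, not the paper's receptor data:

```python
import numpy as np

rng = np.random.default_rng(0)

def non_gaussian_parameter(dx):
    """1-D non-Gaussian parameter alpha_2 = <dx^4> / (3 <dx^2>^2) - 1.
    Zero for Gaussian displacements; positive for exponential-tailed ones."""
    dx = np.asarray(dx, dtype=float)
    return np.mean(dx ** 4) / (3.0 * np.mean(dx ** 2) ** 2) - 1.0

gauss = rng.normal(size=200_000)      # normal Brownian steps -> alpha_2 close to 0
laplace = rng.laplace(size=200_000)   # exponential tails: <x^4> = 6 <x^2>^2,
                                      # so alpha_2 is close to 6/3 - 1 = 1
print(round(non_gaussian_parameter(gauss), 2))
print(round(non_gaussian_parameter(laplace), 2))
```

Tracking α2 as a function of lag time is one way the dynamic heterogeneity reported in the abstract can be made quantitative.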
NASA Astrophysics Data System (ADS)
Laugel, Amélie; Menendez, Melisa; Benoit, Michel; Mattarolo, Giovanni; Méndez, Fernando
2014-12-01
The estimation of possible impacts related to climate change on the wave climate is subject to several levels of uncertainty. In this work, we focus on the uncertainties inherent in the method applied to project the wave climate using atmospheric simulations. Two approaches are commonly used to obtain the regional wave climate: dynamical and statistical downscaling from atmospheric data. We apply both approaches based on the outputs of a global climate model (GCM), ARPEGE-CLIMAT, under three possible future scenarios (B1, A1B and A2) of the Fourth Assessment Report, AR4 (IPCC, 2007), along the French coast and evaluate their results for the wave climate with a high level of precision. The performance of the dynamical and the statistical methods is determined through a comparative analysis of the estimated means, standard deviations and monthly quantile distributions of significant wave heights, the joint probability distributions of wave parameters and seasonal and interannual variability. Analysis of the results shows that the statistical projections are able to reproduce the wave climatology as well as the dynamical projections, with some deficiencies being observed in the summer and for the upper tail of the significant wave height. In addition, with its low computational time requirements, the statistical downscaling method allows an ensemble of simulations to be calculated faster than the dynamical method. It then becomes possible to quantify the uncertainties associated with the choice of the GCM or the socio-economic scenarios, which will improve estimates of the impact of wave climate change along the French coast.
NASA Astrophysics Data System (ADS)
Xu, Hao; Lu, Bo; Su, Zhongqing; Cheng, Li
2015-09-01
A previously developed damage identification strategy, named Pseudo-Excitation (PE), was enhanced using a statistical processing approach. In terms of the local dynamic equilibrium of the structural component under inspection, the distribution of its vibration displacements, which is necessary to construct the damage index in the PE, was re-defined using solely dynamic strains based on the statistical method. On top of the advantages inherited from the original PE over traditional vibration-based damage detection, including independence from baseline signals and pre-developed benchmark structures, the enhanced PE (EPE) possesses improved immunity to the interference of measurement noise. Moreover, the EPE can facilitate practical implementation of online structural health monitoring, benefiting from the use of strain information alone. A proof-of-concept numerical study was conducted to examine the feasibility and accuracy of the EPE, and the effectiveness of the proposed statistical enhancement in re-constructing the vibration displacements was evaluated under noise influence; experimental validation followed, characterizing multiple cracks in a beam-like structure in which the dynamic strains were measured using lead zirconate titanate (PZT) sensors. For comparison, the original PE, the Gapped Smoothing Method (GSM) and the EPE were each used to evaluate the cracks. It was observed from the damage identification results that both the GSM and the EPE were able to achieve higher identification accuracy than the original PE, and the robustness of the EPE in damage identification was proven superior to that of the GSM.
Model averaging methods to merge statistical and dynamic seasonal streamflow forecasts in Australia
NASA Astrophysics Data System (ADS)
Schepen, A.; Wang, Q. J.
2014-12-01
The Australian Bureau of Meteorology operates a statistical seasonal streamflow forecasting service. It has also developed a dynamic seasonal streamflow forecasting approach. The two approaches produce similarly reliable forecasts in terms of ensemble spread but can differ in forecast skill depending on catchment and season. Therefore, it may be possible to augment the skill of the existing service by objectively weighting and merging the forecasts. Bayesian model averaging (BMA) is first applied to merge statistical and dynamic forecasts for 12 locations using leave-five-years-out cross-validation. It is seen that the BMA merged forecasts can sometimes be too uncertain, as shown by ensemble spreads that are unrealistically wide and even bi-modal. The BMA method applies averaging to forecast probability densities (and thus cumulative probabilities) for a given forecast variable value. An alternative approach is quantile model averaging (QMA), whereby forecast variable values (quantiles) are averaged for a given cumulative probability (quantile fraction). For the 12 locations, QMA is compared to BMA. BMA and QMA perform similarly in terms of forecast accuracy skill scores and reliability in terms of ensemble spread. Both methods improve forecast skill across catchments and seasons by combining the different strengths of the statistical and dynamic approaches. A major advantage of QMA over BMA is that it always produces reasonably well defined forecast distributions, even in the special cases where BMA does not. Optimally estimated QMA weights and BMA weights are similar; however, BMA weights are more efficiently estimated.
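The difference between the two averaging schemes above can be made concrete with a toy example: BMA averages forecast densities (a weighted mixture), while QMA averages forecast values at fixed cumulative probability, which for equal-size ensembles amounts to averaging the sorted members. The ensembles and weights below are invented for illustration; they are not the Bureau's forecasts:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two model forecasts of seasonal streamflow, as equal-size ensembles (assumed data):
stat_model = rng.normal(100.0, 10.0, 5000)   # statistical forecast ensemble
dyn_model = rng.normal(160.0, 10.0, 5000)    # dynamic forecast ensemble
w = 0.5                                      # equal merging weights

# BMA mixes probability densities: draw each sample from one member model.
pick = rng.random(5000) < w
bma = np.where(pick, stat_model, dyn_model)

# QMA averages quantiles: combine the sorted ensembles fraction by fraction.
qma = w * np.sort(stat_model) + (1 - w) * np.sort(dyn_model)

def density_near(sample, x, h=5.0):
    """Crude density estimate: fraction of the sample within x +/- h, per unit x."""
    return np.mean(np.abs(sample - x) < h) / (2 * h)

# When the member forecasts disagree strongly, the BMA mixture is bimodal
# (mass near 100 and 160, a dip near 130), while QMA stays unimodal near 130.
print(density_near(bma, 130.0) < density_near(bma, 100.0))   # True
print(density_near(qma, 130.0) > density_near(qma, 100.0))   # True
```

This reproduces, in miniature, the abstract's observation that BMA spreads can become unrealistically wide or bi-modal while QMA always yields a reasonably well defined forecast distribution.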
NASA Astrophysics Data System (ADS)
Moradkhani, Hamid
2015-04-01
Drought forecasting is vital for resource management and planning. Both societal and agricultural requirements weigh heavily on water resources, which may become scarce in the event of drought. Although drought forecasts are an important tool for managing water in hydrologic systems, these forecasts are plagued by uncertainties, owing to the complexities of water dynamics and the spatial heterogeneities of pertinent variables. Due to these uncertainties, it is necessary to frame forecasts in a probabilistic manner. Here we present a statistical-dynamical probabilistic drought forecast framework within Bayesian networks. The statistical forecast model applies a family of multivariate distribution functions to forecast future drought conditions given the drought status in the past. The advantage of the statistical forecast model is that it develops conditional probabilities of a given forecast variable, and returns the most probable forecast along with an assessment of the uncertainty around that value. The dynamical model relies on data assimilation to characterize the initial land surface condition uncertainty, which is correspondingly reflected in the drought forecast. In addition, the recovery from drought is examined. From these forecasts, it is found that drought recovery is a longer process than suggested in recent literature. Drought in land surface variables (snow, soil moisture) is shown to persist for up to a year in certain locations, depending on the intensity of the drought. Location within the basin appears to be a driving factor in the ability of the land surface to recover from drought, allowing for differentiation between drought-prone and drought-resistant regions.
NASA Astrophysics Data System (ADS)
Alfi, V.; Cristelli, M.; Pietronero, L.; Zaccaria, A.
2009-02-01
We present a detailed study of the statistical properties of the Agent Based Model introduced in paper I [Eur. Phys. J. B, DOI: 10.1140/epjb/e2009-00028-4] and of its generalization to multiplicative dynamics. The aim of the model is to identify the minimal elements needed to understand the origin of the stylized facts and their self-organization. The key elements are fundamentalist agents, chartist agents, herding dynamics and price behavior. The first two elements correspond to the competition between stability and instability tendencies in the market. The herding behavior governs the possibility of the agents to change strategy and is a crucial element of this class of models. We consider a linear approximation for the price dynamics which permits a simple interpretation of the model dynamics and, for many properties, makes it possible to derive analytical results. The generalized nonlinear dynamics turns out to be far more sensitive to the parameter space and much more difficult to analyze and control. The main results for the nature and self-organization of the stylized facts are, however, very similar in the two cases. The main peculiarity of the nonlinear dynamics is an enhancement of the fluctuations and more marked evidence of the stylized facts. We also discuss some modifications of the model that introduce elements more realistic with respect to real markets.
NASA Astrophysics Data System (ADS)
Nielsen, Eric L.; Close, Laird M.; Biller, Beth A.; Masciadri, Elena; Lenzen, Rainer
2008-02-01
We examine the implications for the distribution of extrasolar planets based on the null results from two of the largest direct imaging surveys published to date. Combining the measured contrast curves from 22 of the stars observed with the VLT NACO adaptive optics system by Masciadri and coworkers and 48 of the stars observed with the VLT NACO SDI and MMT SDI devices by Biller and coworkers (for a total of 60 unique stars), we consider what distributions of planet masses and semimajor axes can be ruled out by these data, based on Monte Carlo simulations of planet populations. We can set the following upper limit with 95% confidence: the fraction of stars with planets with semimajor axis between 20 and 100 AU, and mass above 4 MJup, is 20% or less. Also, with a distribution of planet mass of dN/dM ∝ M^(-1.16) in the range of 0.5-13 MJup, we can rule out a power-law distribution for semimajor axis (dN/da ∝ a^α) with index 0 and upper cutoff of 18 AU, and index -0.5 with an upper cutoff of 48 AU. For the distribution suggested by Cumming et al., a power law of index -0.61, we can place an upper limit of 75 AU on the semimajor axis distribution. In general, we find that even null results from direct imaging surveys are very powerful in constraining the distributions of giant planets (0.5-13 MJup) at large separations, but more work needs to be done to close the gap between planets that can be detected by direct imaging, and those to which the radial velocity method is sensitive.
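The logic of turning a null result into an upper limit can be sketched as follows: if a fraction f of stars host a planet drawn from the assumed population, and the survey would have detected such a planet around star i with probability p_i (derived from its contrast curve), then the chance of seeing nothing is Π_i(1 - f·p_i); the 95% upper limit is the smallest f that drives this probability below 0.05. The per-star completeness values below are invented placeholders, not the actual NACO/SDI contrast-curve results:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-star completeness: probability that a planet drawn from the
# assumed population would have been detected around each of 60 surveyed stars.
completeness = rng.uniform(0.6, 0.9, 60)   # placeholder values

def prob_all_null(f, comp):
    """Probability of zero detections if a fraction f of stars host such a planet."""
    return np.prod(1.0 - f * comp)

# 95% confidence upper limit on f: smallest f with P(all null) <= 0.05.
f_grid = np.linspace(0.0, 1.0, 10_001)
upper = f_grid[np.argmax([prob_all_null(f, completeness) <= 0.05 for f in f_grid])]
print(round(upper, 3))   # a few percent for this assumed completeness
```

In the actual analysis the p_i come from Monte Carlo draws of planet mass and orbit against each star's measured contrast curve, which is why deeper completeness tightens the limit.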
Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui
2012-01-01
Background The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. Methods This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Results Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. Conclusions This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. 
Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications. PMID:22998945
NASA Astrophysics Data System (ADS)
Miksovsky, J.; Huth, R.; Halenka, T.; Belda, M.; Farda, A.; Skalak, P.; Stepanek, P.
2009-12-01
To bridge the resolution gap between the outputs of global climate models (GCMs) and finer-scale data needed for studies of the climate change impacts, two approaches are widely used: dynamical downscaling, based on application of regional climate models (RCMs) embedded into the domain of the GCM simulation, and statistical downscaling (SDS), using empirical transfer functions between the large-scale data generated by the GCM and local measurements. In our contribution, we compare the performance of different variants of both techniques for the region of Central Europe. The dynamical downscaling is represented by the outputs of two regional models run in the 10 km horizontal grid, ALADIN-CLIMATE/CZ (co-developed by the Czech Hydrometeorological Institute and Meteo-France) and RegCM3 (developed by the Abdus Salam Centre for Theoretical Physics). The applied statistical methods were based on multiple linear regression, as well as on several of its nonlinear alternatives, including techniques employing artificial neural networks. Validation of the downscaling outputs was carried out using measured data, gathered from weather stations in the Czech Republic, Slovakia, Austria and Hungary for the end of the 20th century; series of daily values of maximum and minimum temperature, precipitation and relative humidity were analyzed. None of the regional models or statistical downscaling techniques could be identified as the universally best one. For instance, while most statistical methods misrepresented the shape of the statistical distribution of the target variables (especially in the more challenging cases such as estimation of daily precipitation), RCM-generated data often suffered from severe biases. It is also shown that further enhancement of the simulated fields of climate variables can be achieved through a combination of dynamical downscaling and statistical postprocessing. 
This can not only be used to reduce biases and other systematic flaws in the generated time series, but also to further localize the RCM outputs beyond the resolution of their original grid. The resulting data then provide a suitable input for subsequent studies of the local climate and its change in the target region.
Specificity of mathematical description of statistical and dynamical properties of CELSS
NASA Astrophysics Data System (ADS)
Bartsev, Sergey
CELSS for long-term space missions have to possess a high level of matter turnover closure. Designing, studying, and maintaining such systems does not seem possible without accounting for their specific property: high closure. For measuring this specific property, a potentially universal coefficient of closure is suggested and discussed. It can be shown that standard statistical formulas are incorrect for estimating mean values of the biomass of CELSS components. Accounting for closure as a specific constraint of closed ecological systems allows correct formulas to be obtained for calculating mean values of biomass and the composition of chemical compounds in a CELSS. Errors due to using standard statistical evaluations are discussed. Organisms composing a biological LSS consume and produce a spectrum of different substances. Providing a high level of closure - the absence of deadlocks - depends on the accuracy of adjusting the inputs and outputs of all organisms to each other. This is a practical objective of high importance. Adequate mathematical models ought to describe the ability of organisms to vary their consumption and production spectrum (stoichiometric ratio). Traditional ecological models describing the dynamics of a limiting element cannot be adequately applied to describing CELSS dynamics over all possible operating regimes. The possible use of adaptive metabolism models for providing a correct description of CELSS dynamics is considered.
Heterogeneous Structure of Stem Cells Dynamics: Statistical Models and Quantitative Predictions
NASA Astrophysics Data System (ADS)
Bogdan, Paul; Deasy, Bridget M.; Gharaibeh, Burhan; Roehrs, Timo; Marculescu, Radu
2014-04-01
Understanding stem cell (SC) population dynamics is essential for developing models that can be used in basic science and medicine to aid in predicting cell fate. These models can be used as tools, e.g., in studying patho-physiological events at the cellular and tissue level, predicting (mal)functions along the developmental course, and personalized regenerative medicine. Using time-lapsed imaging and statistical tools, we show that the dynamics of SC populations involve a heterogeneous structure consisting of multiple sub-population behaviors. Using non-Gaussian statistical approaches, we identify the co-existence of fast and slow dividing subpopulations, and quiescent cells, in stem cells from three species. The mathematical analysis also shows that, instead of developing independently, SCs exhibit a time-dependent fractal behavior as they interact with each other through molecular and tactile signals. These findings suggest that more sophisticated models of SC dynamics should view SC populations as a collective and avoid the simplifying homogeneity assumption by accounting for the presence of more than one dividing sub-population, and their multi-fractal characteristics.
Quantum particle statistics on the holographic screen leads to modified Newtonian dynamics
NASA Astrophysics Data System (ADS)
Pazy, E.; Argaman, N.
2012-05-01
Employing a thermodynamic interpretation of gravity based on the holographic principle, and assuming underlying particle statistics, fermionic or bosonic, for the excitations of the holographic screen, leads to modified Newtonian dynamics (MOND). A connection between the acceleration scale a0 appearing in MOND and the Fermi energy of the holographic fermionic degrees of freedom is obtained. In this formulation the physics of MOND results from the quantum-classical crossover in the fermionic specific heat. However, due to the dimensionality of the screen, the formalism is general and applies to two-dimensional bosonic excitations as well. It is shown that replacing the assumption of the equipartition of energy on the holographic screen by a standard quantum-statistical-mechanics description, wherein some of the degrees of freedom are frozen out at low temperatures, is the physical basis for the MOND interpolating function μ̃. The interpolating function μ̃ is calculated within the statistical mechanical formalism and compared to the leading and most commonly used phenomenological interpolating functions. Based on the statistical mechanical view of MOND, its cosmological implications are reinterpreted: the connection between a0 and the Hubble constant is described as a quantum uncertainty relation, and the relationship between a0 and the cosmological constant is better understood physically.
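For reference, the interpolating function discussed above plays the role of the μ of standard MOND phenomenology; the relations below summarize that generic phenomenology, not the paper's statistical-mechanical derivation of μ̃:

```latex
% Newtonian dynamics is modified below the acceleration scale a_0:
\mu\!\left(\frac{a}{a_0}\right)\vec{a} = \vec{a}_{\mathrm{N}},
\qquad \mu(x)\to 1 \;\;(x\gg 1), \qquad \mu(x)\to x \;\;(x\ll 1),
% with common phenomenological choices
\mu(x) = \frac{x}{1+x} \;\;\text{(``simple'')}, \qquad
\mu(x) = \frac{x}{\sqrt{1+x^{2}}} \;\;\text{(``standard'')}.
```

The paper's comparison is between its statistically derived μ̃ and phenomenological forms of this kind.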
Dynamic whole-body PET parametric imaging: II. Task-oriented statistical estimation
NASA Astrophysics Data System (ADS)
Karakatsanis, Nicolas A.; Lodge, Martin A.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman
2013-10-01
In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15-20 cm) of a single-bed position and have not been translated to the whole-body clinical imaging domain. By contrast, standardized uptake value (SUV) PET imaging, considered the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole-body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve a superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) relative to OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value.
The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical 18F-deoxyglucose patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30 min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole-body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection.
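The Patlak estimation step underlying this framework reduces, per voxel, to a straight-line fit in transformed coordinates: the slope gives the uptake rate Ki and the intercept the distribution volume V. A minimal OLS sketch is given below; the function name and the trapezoidal integration of the input function are illustrative assumptions, and the hybrid WR-driven regression of the paper is not reproduced here.

```python
import numpy as np

def patlak_ols(t, cp, ct):
    """OLS Patlak fit for one voxel (illustrative sketch).

    t  : frame mid-times
    cp : plasma input function sampled at t (must be nonzero)
    ct : tissue time-activity curve at t
    Returns (Ki, V): uptake-rate slope and distribution-volume intercept.
    """
    # Patlak transform: x = (integral of cp up to t) / cp(t), y = ct / cp(t);
    # after equilibration the (x, y) plot becomes linear.
    cum_cp = np.concatenate(
        ([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1])))
    )
    x = cum_cp / cp
    y = ct / cp
    # Ordinary least squares for slope (Ki) and intercept (V)
    A = np.vstack([x, np.ones_like(x)]).T
    (Ki, V), *_ = np.linalg.lstsq(A, y, rcond=None)
    return Ki, V
```

In practice the fit would be restricted to the late, linear frames; the sketch fits all frames for brevity.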
NASA Astrophysics Data System (ADS)
Guan, Daren; Yi, Xizhang; Meng, Qingtian; Zheng, Yujun; Liu, Chengbu; Jiang, Yuansheng
2002-07-01
The dynamical Lie algebraic (DLA) method of Alhassid and Levine [Phys. Rev. A 18 (1978) 89] is applied to statistical mechanics in rotationally inelastic scattering of molecules from surfaces. Specifically, the method is generalized to include the motion of surface atoms, i.e., phonons. For a given Hamiltonian and initial state, the set of constraints required to obtain the solution of the equations of motion is determined by an algebraic procedure. It is furthermore found possible to derive the equations of motion for the mean values of the constraints. The method is applied to the scattering of NO molecules from a Pt(111) surface. The mean values of the final energies of NO molecules scattered from the surface, obtained using the DLA method, are in good qualitative agreement with experimental results. The DLA method thus appears to have a wide range of validity for describing the statistical mechanics of gas-surface scattering.
Cluster statistics and quasisoliton dynamics in microscopic optimal-velocity models
NASA Astrophysics Data System (ADS)
Yang, Bo; Xu, Xihua; Pang, John Z. F.; Monterola, Christopher
2016-04-01
Using the non-linear optimal velocity models as an example, we show that there exists an emergent intrinsic scale that characterizes the interaction strength between multiple clusters appearing in the solutions of such models. The interaction characterizes the dynamics of the localized quasisoliton structures given by the time derivative of the headways, and the intrinsic scale is analogous to the "charge" of the quasisolitons, leading to non-trivial cluster statistics from the random perturbations to the initial steady states of uniform headways. The cluster statistics depend both on the quasisoliton charge and the density of the traffic. The intrinsic scale is also related to an emergent quantity that gives the extremum headways in the cluster formation, as well as the coexistence curve separating the absolute stable phase from the metastable phase. The relationship is qualitatively universal for general optimal velocity models.
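The setting can be sketched numerically. The snippet below, which assumes the common Bando form of the optimal-velocity function and simple Euler integration on a ring (assumptions for illustration, not the authors' exact setup), perturbs a uniform-headway state in the unstable regime so that clusters form; the time derivative of the headways then carries the quasisoliton structure discussed in the abstract.

```python
import numpy as np

def optimal_velocity(h):
    # Bando-type OV function (a common choice, assumed here)
    return np.tanh(h - 2.0) + np.tanh(2.0)

def step(x, v, L, a=1.0, dt=0.05):
    """One Euler step of dv_i/dt = a (V(h_i) - v_i) on a ring of length L."""
    h = (np.roll(x, -1) - x) % L          # headways with periodic boundary
    v_new = v + a * (optimal_velocity(h) - v) * dt
    return (x + v * dt) % L, v_new

# Uniform flow plus a small random perturbation of the positions;
# clusters emerge because a < 2 V'(L/n) makes the uniform state unstable.
n, L = 60, 150.0
rng = np.random.default_rng(0)
x = np.sort(rng.normal(np.arange(n) * L / n, 0.01) % L)
v = np.full(n, optimal_velocity(L / n))
for _ in range(2000):
    x, v = step(x, v, L)
# dh_i/dt = v_{i+1} - v_i localizes into the quasisoliton structures
dh_dt = np.roll(v, -1) - v
```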
Statistics of voltage drop in distribution circuits: a dynamic programming approach
Turitsyn, Konstantin S
2010-01-01
We analyze a power distribution line with high penetration of distributed generation and strong variations of power consumption and generation levels. In the presence of uncertainty, a statistical description of the system is required to assess the risks of power outages. In order to find the probability of exceeding the constraints on voltage levels, we introduce the probability distribution of the maximal voltage drop and propose an algorithm for finding this distribution. The algorithm is based on the assumption of random but statistically independent distribution of loads on buses. Linear complexity in the number of buses is achieved through dynamic programming. We illustrate the performance of the algorithm by analyzing a simple 4-bus system with high variations of load levels.
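The dynamic-programming idea can be illustrated on a simplified case: with independent discrete loads and no generation, the drop at the far end of a radial line is a sum of independent terms, so its exact distribution can be built up bus by bus in linear time. The sketch below handles only this end-of-line drop (the paper's algorithm handles the maximum over all buses); the function name and the discrete-load representation are assumptions.

```python
from collections import defaultdict

def drop_distribution(r, loads):
    """Exact pmf of the voltage drop at the end of a radial line (sketch).

    r     : per-segment resistances r[0..n-1]
    loads : loads[j] is a dict {load_value: probability} for bus j,
            loads assumed independent and discrete.
    The end-of-line drop is sum_j s_j * R_j, with R_j the cumulative
    resistance up to bus j, so the pmf is built by one convolution per
    bus -- linear complexity in the number of buses.
    """
    R, pmf = 0.0, {0.0: 1.0}
    for rj, dist in zip(r, loads):
        R += rj
        nxt = defaultdict(float)
        for d, p in pmf.items():
            for s, q in dist.items():
                nxt[round(d + s * R, 9)] += p * q
        pmf = dict(nxt)
    return pmf
```

For example, two buses with unit segment resistances and 50/50 on-off unit loads give equal probability 1/4 to drops of 0, 1, 2 and 3.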
Shimada, Tomohiro; Shimada, Kaori; Matsui, Makoto; Kitai, Yuichi; Igarashi, Jun; Suga, Hiroaki; Ishihama, Akira
2014-05-01
In Gram-negative bacteria, N-acylhomoserine lactone (HSL) is used as a signal in cell-cell communication and quorum sensing (QS). The model prokaryote Escherichia coli lacks the system for HSL synthesis but is capable of monitoring HSL signals in the environment. The transcription factor SdiA, involved in cell division control, is believed to act as an HSL sensor. Using a collection of 477 species of chemically synthesized HSL analogues, we identified three synthetic signal molecules (SSMs) that bind in vitro to purified SdiA. After SELEX-chip screening of SdiA-binding DNA sequences, a striking difference was found between these SSMs in the pattern of regulation target genes on the E. coli genome. Based on Northern blot analysis in vivo, one set of target genes was found to be repressed by SdiA in the absence of effectors and derepressed by the addition of SSMs. Another set of genes was, however, expressed in the absence of effector ligands but repressed by the addition of SSMs. Taken together, we propose that the spectrum of target gene selection by SdiA is modulated in multiple modes depending on the interacting HSL-like signal molecules. PMID:24645791
An open-source wireless sensor stack: from Arduino to SDI-12 to Water One Flow
NASA Astrophysics Data System (ADS)
Hicks, S.; Damiano, S. G.; Smith, K. M.; Olexy, J.; Horsburgh, J. S.; Mayorga, E.; Aufdenkampe, A. K.
2013-12-01
Implementing a large-scale streaming environmental sensor network has previously been limited by the high cost of the datalogging and data communication infrastructure. The Christina River Basin Critical Zone Observatory (CRB-CZO) is overcoming the obstacles to large near-real-time data collection networks by using Arduino, an open source electronics platform, in combination with XBee ZigBee wireless radio modules. These extremely low-cost and easy-to-use open source electronics are at the heart of the new DIY movement and have provided solutions to countless projects by over half a million users worldwide. However, their use in environmental sensing is in its infancy. At present a primary limitation to widespread deployment of open-source electronics for environmental sensing is the lack of a simple, open-source software stack to manage streaming data from heterogeneous sensor networks. Here we present a functioning prototype software stack that receives sensor data over a self-meshing ZigBee wireless network from over a hundred sensors, stores the data locally and serves it on demand as a CUAHSI Water One Flow (WOF) web service. We highlight a few new, innovative components, including: (1) a versatile open data logger design based on the Arduino electronics platform and ZigBee radios; (2) a software library implementing the SDI-12 communication protocol between any Arduino platform and SDI12-enabled sensors without the need for additional hardware (https://github.com/StroudCenter/Arduino-SDI-12); and (3) 'midStream', a lightweight set of Python code that receives streaming sensor data, appends it with metadata on the fly by querying a relational database structured on an early version of the Observations Data Model version 2.0 (ODM2), and uses the WOFpy library to serve the data as WaterML via SOAP and REST web services.
Dynamics and statistics of wave-particle interactions in a confined geometry
NASA Astrophysics Data System (ADS)
Gilet, Tristan
2014-11-01
A walker is a droplet bouncing on a liquid surface and propelled by the waves that it generates. This macroscopic wave-particle association exhibits behaviors reminiscent of quantum particles. This article presents a toy model of the coupling between a particle and a confined standing wave. The resulting two-dimensional iterated map captures many features of the walker dynamics observed in different configurations of confinement. These features include the time decomposition of the chaotic trajectory in quantized eigenstates and the particle statistics being shaped by the wave. It shows that deterministic wave-particle coupling expressed in its simplest form can account for some quantumlike behaviors.
NASA Astrophysics Data System (ADS)
Shaklan, Stuart B.; Marchen, Luis; Peterson, Lee; Levine, Marie B.
2014-08-01
We have combined our Excel-based coronagraph dynamics error budget spreadsheets with DAKOTA scripts to perform statistical analyses of the predicted dark-hole contrast. Whereas in the past we have reported the expected contrast level for an input set of allocated parameters, we now generate confidence intervals for the predicted contrast. Further, we explore the sensitivity to individual or groups of parameters and model uncertainty factors through aleatory-epistemic simulations based on a surrogate model fitted to the error budget. We show example results for a generic high-contrast coronagraph.
q-deformed statistical-mechanical structure in the dynamics of the Feigenbaum attractor
NASA Astrophysics Data System (ADS)
Robledo, A.
2010-09-01
We show that the two complementary parts of the dynamics associated to the Feigenbaum attractor, inside and towards the attractor, form together a q-deformed statistical-mechanical structure. A time-dependent partition function produced by summing distances between neighboring positions of the attractor leads to a q-entropy that measures the ratio of ensemble trajectories still away at a given time from the attractor (and the repellor). The values of the q-indexes are given by the attractor's universal constants, while the thermodynamic framework is closely related to that first developed for multifractals.
Statistical study of flux ropes observed by the Solar Dynamics Observatory
NASA Astrophysics Data System (ADS)
Zhang, Jun; Yang, Shuhong; Li, Ting
With observations from the Atmospheric Imaging Assembly (AIA) aboard the Solar Dynamics Observatory, we statistically investigate the flux ropes observed from January 2013 to December 2013. A total of 1417 ropes were detected during this period. Of these, 725 were identified through the eruptions or activities of filaments and surges, and the others were detected by their emergence from the photosphere or brightening in the corona. 531 ropes, 38% of the total, were detected in the northern hemisphere, implying a significantly imbalanced distribution of the ropes over the whole Sun.
NASA Astrophysics Data System (ADS)
Haas, Rabea; Pinto, Joaquim G.
2013-04-01
The occurrence of mid-latitude windstorms is related to strong socio-economic effects. For detailed and reliable regional impact studies, large datasets of high-resolution wind fields are required. In this study, a statistical downscaling approach in combination with dynamical downscaling is introduced to derive storm related gust speeds on a high-resolution grid over Europe. Multiple linear regression models are trained using reanalysis data and wind gusts from regional climate model simulations for a sample of 100 top ranking windstorm events. The method is computationally inexpensive and reproduces individual windstorm footprints adequately. Compared to observations, the results for Germany are at least as good as pure dynamical downscaling. This new tool can be easily applied to large ensembles of general circulation model simulations and thus contribute to a better understanding of the regional impact of windstorms based on decadal and climate change projections.
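The core of this statistical downscaling step is a multiple linear regression per target grid point, trained on large-scale predictors against RCM gust speeds. The sketch below is a generic least-squares version under that assumption; the function names and the form of the predictor matrix are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_gust_model(X, y):
    """Fit gust = X @ beta + b by ordinary least squares.

    X : (events, predictors) large-scale predictors, e.g. reanalysis winds,
        one row per windstorm event
    y : (events,) RCM-simulated gust speed at one target grid point
    Returns the coefficient vector with the intercept as the last entry.
    """
    A = np.hstack([X, np.ones((X.shape[0], 1))])   # append intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict_gust(coef, X):
    """Apply a fitted model to new events (e.g. GCM-derived predictors)."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    return A @ coef
```

One such model would be trained per high-resolution grid point, which is what keeps the approach computationally inexpensive compared to pure dynamical downscaling.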
Fictitious level dynamics: A novel approach to spectral statistics in disordered conductors
Chalker, J.T.; Lerner, I.V.; Smith, R.A.
1996-10-01
We establish a new approach to calculating spectral statistics in disordered conductors, by considering how energy levels move in response to changes in the impurity potential. We use this fictitious dynamics to calculate the spectral form factor in two ways. First, describing the dynamics using a Fokker-Planck equation, we make a physically motivated decoupling, obtaining the spectral correlations in terms of the quantum return probability. Second, from an identity which we derive between two- and three-particle correlation functions, we make a mathematically controlled decoupling to obtain the same result. We also calculate weak localization corrections to this result, and show for two-dimensional systems (which are of most interest) that corrections vanish to three-loop order. © 1996 American Institute of Physics.
Hotspots of boundary accumulation: dynamics and statistics of micro-swimmers in flowing films.
Mathijssen, Arnold J T M; Doostmohammadi, Amin; Yeomans, Julia M; Shendruk, Tyler N
2016-02-01
Biological flows over surfaces and interfaces can result in accumulation hotspots or depleted voids of microorganisms in natural environments. Apprehending the mechanisms that lead to such distributions is essential for understanding biofilm initiation. Using a systematic framework, we resolve the dynamics and statistics of swimming microbes within flowing films, considering the impact of confinement through steric and hydrodynamic interactions, flow and motility, along with Brownian and run-tumble fluctuations. Micro-swimmers can be peeled off the solid wall above a critical flow strength. However, the interplay of flow and fluctuations causes organisms to migrate back towards the wall above a secondary critical value. Hence, faster flows may not always be the most efficacious strategy to discourage biofilm initiation. Moreover, we find run-tumble dynamics commonly used by flagellated microbes to be an intrinsically more successful strategy to escape from boundaries than equivalent levels of enhanced Brownian noise in ciliated organisms. PMID:26841796
NASA Astrophysics Data System (ADS)
Arbona, A.; Bona, C.; Miñano, B.; Plastino, A.
2014-09-01
The definition of complexity through Statistical Complexity Measures (SCM) has recently seen major improvements. Mostly, the effort is concentrated in measures on time series. We propose a SCM definition for spatial dynamical systems. Our definition is in line with the trend to combine entropy with measures of structure (such as disequilibrium). We study the behaviour of our definition against the vectorial noise model of Collective Motion. From a global perspective, we show how our SCM is minimal at both the microscale and macroscale, while it reaches a maximum at the ranges that define the mesoscale in this model. From a local perspective, the SCM is minimum both in highly ordered and disordered areas, while it reaches a maximum at the edges between such areas. These characteristics suggest this is a good candidate for detecting the mesoscale of arbitrary dynamical systems as well as regions where the complexity is maximal in such systems.
Neutral dynamics with environmental noise: Age-size statistics and species lifetimes
NASA Astrophysics Data System (ADS)
Kessler, David; Suweis, Samir; Formentin, Marco; Shnerb, Nadav M.
2015-08-01
Neutral dynamics, where taxa are assumed to be demographically equivalent and their abundance is governed solely by the stochasticity of the underlying birth-death process, has proved itself as an important minimal model that accounts for many empirical datasets in genetics and ecology. However, the restriction of the model to demographic [O(√N)] noise yields relatively slow dynamics that appears to be in conflict with both short-term and long-term characteristics of the observed systems. Here we analyze two of these problems, age-size relationships and species extinction time, in the framework of a neutral theory with both demographic and environmental stochasticity. It turns out that environmentally induced variations of the demographic rates control the long-term dynamics and modify dramatically the predictions of the neutral theory with demographic noise only, yielding much better agreement with empirical data. We consider two prototypes of "zero mean" environmental noise, one which is balanced with regard to the arithmetic abundance, another balanced in the logarithmic (fitness) space, study their species lifetime statistics, and discuss their relevance to realistic models of community dynamics.
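A toy version of the "arithmetically balanced" noise prototype can be simulated for a single taxon: the log-growth rate fluctuates generation to generation with zero mean in abundance, while Poisson sampling supplies the demographic noise. This is a sketch under those assumptions (the function name and update rule are illustrative, not the paper's model), but it shows how one would collect species-lifetime statistics.

```python
import numpy as np

def lifetime(n0, sigma_env=0.2, tmax=10000, rng=None):
    """Generations until extinction of one taxon (toy sketch).

    Each generation the log-growth rate is drawn with mean
    -sigma_env**2 / 2 and std sigma_env, so the expected arithmetic
    abundance is unchanged ("zero mean" environmental noise in the
    arithmetic sense); Poisson sampling adds demographic noise.
    Returns tmax if the taxon survives the whole run.
    """
    rng = rng or np.random.default_rng()
    n = n0
    for t in range(1, tmax + 1):
        if n == 0:
            return t                     # extinction observed
        s = rng.normal(-0.5 * sigma_env**2, sigma_env)
        n = rng.poisson(n * np.exp(s))   # demographic + environmental noise
    return tmax
```

Averaging `lifetime` over many runs and initial abundances gives the species-lifetime statistics whose empirical relevance the paper discusses.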
A Statistical Approach for the Concurrent Coupling of Molecular Dynamics and Finite Element Methods
NASA Technical Reports Server (NTRS)
Saether, E.; Yamakov, V.; Glaessgen, E.
2007-01-01
Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, increasing the size of the MD domain quickly presents intractable computational demands. A robust approach to surmount this computational limitation has been to unite continuum modeling procedures such as the finite element method (FEM) with MD analyses thereby reducing the region of atomic scale refinement. The challenging problem is to seamlessly connect the two inherently different simulation techniques at their interface. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the typical boundary value problem used to define a coupled domain. The method uses statistical averaging of the atomistic MD domain to provide displacement interface boundary conditions to the surrounding continuum FEM region, which, in return, generates interface reaction forces applied as piecewise constant traction boundary conditions to the MD domain. The two systems are computationally disconnected and communicate only through a continuous update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM) as opposed to a direct coupling method where interface atoms and FEM nodes are individually related. The methodology is inherently applicable to three-dimensional domains, avoids discretization of the continuum model down to atomic scales, and permits arbitrary temperatures to be applied.
Avalappampatty Sivasamy, Aneetha; Sundan, Bose
2015-01-01
The ever expanding communication requirements in today's world demand extensive and efficient network systems with equally efficient and reliable security features integrated for safe, confident, and secured communication and data transfer. Providing effective security protocols for any network environment, therefore, assumes paramount importance. Attempts are made continuously for designing more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling's T2 method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling's T2 statistical model and necessary profiles have been generated based on the T-square distance metrics. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified either as normal or attack types. Performance of the model, as evaluated through validation and testing using KDD Cup'99 dataset, has shown very high detection rates for all classes with low false alarm rates. Accuracy of the model presented in this work, in comparison with the existing models, has been found to be much better. PMID:26357668
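The detection statistic at the heart of this approach is the Hotelling T-square distance of an observed traffic profile from the baseline of normal profiles. A minimal sketch follows; the function names are assumptions, and the paper's CLT-based threshold-range construction and KDD Cup'99 preprocessing are not reproduced.

```python
import numpy as np

def t_square(X_train, x):
    """Hotelling's T^2 distance of profile x from baseline X_train.

    X_train : (observations, features) matrix of normal traffic profiles
    x       : (features,) observed profile to score
    """
    mu = X_train.mean(axis=0)
    S = np.cov(X_train, rowvar=False)    # sample covariance of the baseline
    d = x - mu
    return float(d @ np.linalg.solve(S, d))

def is_attack(X_train, x, threshold):
    # Classify the profile as an attack when T^2 exceeds the threshold
    return t_square(X_train, x) > threshold
```

Profiles close to the baseline mean score near zero, while anomalous profiles score large, which is what makes a single scalar threshold sufficient for classification.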
NASA Astrophysics Data System (ADS)
Huth, Radan; Mikšovský, Jiří; Štěpánek, Petr; Belda, Michal; Farda, Aleš; Chládová, Zuzana; Pišoft, Petr
2015-05-01
Minimum and maximum temperature in two regional climate models and five statistical downscaling models are validated according to a unified set of criteria that have a potential relevance for impact assessments: persistence (temporal autocorrelations), spatial autocorrelations, extreme quantiles, skewness, kurtosis, and the degree of fit to observed data on both short and long time scales. The validation is conducted on two dense grids in central Europe as follows: (1) a station network and (2) a grid with a resolution of 10 km. The gridded dataset is not contaminated by artifacts of the interpolation procedure; therefore, we claim that using a gridded dataset as a validation base is a valid approach. The fit to observations on short time scales is equally good for the statistical downscaling (SDS) models and regional climate models (RCMs) in winter, while it is much better for the SDS models in summer. The reproduction of variability on long time scales, expressed as linear trends, is similarly successful by both SDS models and RCMs. Results for other criteria suggest that there is no justification for preferring dynamical models at the expense of statistical models, and vice versa. The non-linear SDS models do not outperform the linear ones.
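Most of the validation criteria listed above are simple moment and quantile statistics of a temperature series. The sketch below computes a representative subset (lag-1 persistence, an extreme quantile, skewness, excess kurtosis); the function name and the particular moment estimators are illustrative assumptions, not the authors' exact definitions.

```python
import numpy as np

def validation_stats(series, lag=1, q=0.95):
    """A subset of validation criteria for a downscaled temperature series."""
    x = np.asarray(series, dtype=float)
    xc = x - x.mean()
    ac = (xc[:-lag] @ xc[lag:]) / (xc @ xc)   # lag autocorrelation (persistence)
    m2 = np.mean(xc**2)
    skew = np.mean(xc**3) / m2**1.5           # standardized third moment
    kurt = np.mean(xc**4) / m2**2 - 3.0       # excess kurtosis
    return {"autocorr": ac, "quantile": np.quantile(x, q),
            "skew": skew, "kurt": kurt}
```

Computing these for both the model output and the observations, grid point by grid point, gives the comparison tables on which such a validation rests.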
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Marks, David A.; Travis, James E.
2014-01-01
A statistical method is developed for comparing precipitation data from measurements performed by (hypothetical) perfect instruments using a recently developed stochastic model of rainfall. The stochastic dynamical equation that describes the underlying random process naturally leads to a consistent spectrum and incorporates the subtle interdependence of the length and time scales governing the statistical fluctuations of the rain rate field. The main attraction of such a model is that the complete set of second-moment statistics embodied in the space-time covariance of both the area-averaged instantaneous rain rate (represented by radar or passive microwave data near the ground) and the time-averaged point rain rate (represented by rain gauge data) can be expressed as suitable integrals over the spectrum. With the help of this framework, the model allows one to carry out a faithful intercomparison of precipitation estimates derived from radar or passive microwave remote sensing over an area with direct observations by rain gauges or disdrometers, assuming all the measuring instruments to be ideal. A standard linear regression analysis approach to the intercomparison of radar and gauge rain rate estimates is formulated in terms of the appropriate observed and model-derived quantities. We also estimate the relative sampling error as well as separate absolute sampling errors for radar and gauge measurements of rainfall from the spectral model.
An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics
NASA Astrophysics Data System (ADS)
Turkington, Bruce
2013-08-01
A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
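The lack-of-fit cost described in this abstract can be written compactly. The notation below is an assumption made for illustration (the paper's own symbols may differ): ρ(t) is a path of quasi-equilibrium densities parametrized by the mean resolved vector, H the Hamiltonian, {·,·} the Poisson bracket, W the weight operator, and ⟨·⟩ the ensemble average.

```latex
% Sketch of the lack-of-fit optimization principle (notation assumed):
% the residual is the failure of the model path to satisfy the
% Liouville equation \partial_t \rho = \{H, \rho\}.
\[
  \sigma(t) \;=\; \tfrac{1}{2}\,
  \Big\langle \big\| W^{1/2}\big( \partial_t \rho - \{H, \rho\} \big) \big\|^{2} \Big\rangle ,
  \qquad
  \min_{\rho(\cdot)} \int_{t_0}^{t_1} \sigma(t)\, dt ,
\]
```

The minimizing path then yields, via Hamilton-Jacobi theory, the closed reduced equations for the mean resolved vector, with the value function playing the role of the dissipation potential.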
SERVIR's Contributions and Benefits to Belize thru Spatial Data Infrastructure (SDI) Development
NASA Technical Reports Server (NTRS)
Irwin, Daniel E.
2006-01-01
Dan Irwin, the SERVIR Project Manager, is being honored with the privilege of delivering the opening remarks at Belize's second celebration of GIS Day, a weeklong event to be held at the University of Belize's campus in the nation's capital, Belmopan. The request has been extended by the GIS Day Planning Committee, which operates under the auspices of Belize's Ministry of Natural Resources & the Environment, the focal ministry for SERVIR. In the 20-30 minutes allotted for the opening remarks, the SERVIR Project Manager will expound on how SERVIR, operating under the auspices of NASA's Ecological Forecasting Program, contributes to spatial data infrastructure (SDI) development in Belize. NASA's contributions to the region, particularly work under the Mesoamerican Biological Corridor, will be highlighted. Continuing, the remarks will discuss SERVIR's role in Belize's steadily expanding SDI, particularly in the context of delivering integrated decision support products via web-based infrastructure. The remarks will close with a call to the parties assembled to work together in the application of Earth Observation Systems technologies for the benefit of Belizean society as a whole. NASA's strong presence in Belize's GIS Day celebrations will be highlighted as sustained goodwill of the American people, in partial fulfillment of goals set forth under the Global Earth Observation System of Systems (GEOSS).
NASA Astrophysics Data System (ADS)
Laugel, Amélie; Menendez, Melisa; Benoit, Michel; Mattarolo, Giovanni; Mendez, Fernando
2013-04-01
Wave climate forecasting is a major issue for numerous marine and coastal related activities, such as offshore industries, flooding risk assessment and wave energy resource evaluation, among others. Generally, there are two main ways to predict the impacts of climate change on the wave climate at regional scale: the dynamical and the statistical downscaling of a GCM (Global Climate Model). In this study, both methods have been applied to the French coast (Atlantic, English Channel and North Sea shorelines) under three climate change scenarios (A1B, A2, B1) simulated with the GCM ARPEGE-CLIMAT, from Météo-France (AR4, IPCC). The aim of the work is to characterise the wave climatology of the 21st century and compare the statistical and dynamical methods, pointing out advantages and disadvantages of each approach. The statistical downscaling method proposed by the Environmental Hydraulics Institute of Cantabria (Spain) has been applied (Menendez et al., 2011). At a particular location, the sea-state climate (predictand Y) is defined as a function, Y=f(X), of several atmospheric circulation patterns (predictor X). Assuming these climate associations between predictor and predictand are stationary, the statistical approach has been used to project the future wave conditions with reference to the GCM. The statistical relations between predictor and predictand have been established over 31 years, from 1979 to 2009. The predictor is built as the 3-day-averaged squared sea level pressure gradient from the hourly CFSR database (Climate Forecast System Reanalysis, http://cfs.ncep.noaa.gov/cfsr/). The predictand has been extracted from the 31-year hindcast sea-state database ANEMOC-2 performed with the 3G spectral wave model TOMAWAC (Benoit et al., 1996), developed at EDF R&D LNHE and Saint-Venant Laboratory for Hydraulics and forced by the CFSR 10m wind field.
Significant wave height, peak period and mean wave direction have been extracted with an hourly resolution at 110 coastal locations along the French coast. The model, based on the BAJ parameterization of the source terms (Bidlot et al., 2007), was calibrated against ten years of GlobWave altimeter observations (2000-2009) and validated against deep and shallow water buoy observations. The dynamical downscaling method has been performed with the same numerical wave model TOMAWAC used for building ANEMOC-2. Forecast simulations are forced by the 10m wind fields of ARPEGE-CLIMAT (A1B, A2, B1) from 2010 to 2100. The model covers the Atlantic Ocean and uses a spatial resolution along the French and European coasts of 10 and 20 km respectively. The results of the model are stored with a time resolution of one hour. References: Benoit, M., Marcos, F., and Becq, F. (1996). Development of a third generation shallow-water wave model with unstructured spatial meshing. Proc. 25th Int. Conf. on Coastal Eng. (ICCE'1996), Orlando (Florida, USA), pp 465-478. Bidlot, J.-R., Janssen, P., and Abdalla, S. (2007). A revised formulation of ocean wave dissipation and its model impact, Technical Memorandum ECMWF No. 509. Menendez, M., Mendez, F.J., Izaguirre, C., Camus, P., Espejo, A., Canovas, V., Minguez, R., Losada, I.J., Medina, R. (2011). Statistical Downscaling of Multivariate Wave Climate Using a Weather Type Approach, 12th International Workshop on Wave Hindcasting and Forecasting and 3rd Coastal Hazard Symposium, Kona (Hawaii).
Statistical analysis of a new European Cloud Dynamics and Radiation Database
NASA Astrophysics Data System (ADS)
Casella, D.; Formenton, M.; Leung, W.-Y.; Mugnai, A.; Sanò, P.; Smith, E. A.; Tripoli, G. J.
2009-04-01
Physically based algorithms for the retrieval of precipitation from satellite-borne microwave (MW) radiometers make use of Cloud Radiation Databases (CRDs), composed of thousands of detailed microphysical cloud profiles obtained from Cloud Resolving Model (CRM) simulations, coupled with the corresponding brightness temperatures (TBs) calculated by applying Radiative Transfer (RT) schemes to the CRM outputs. Usually, CRDs are generated on the basis of CRM simulations of past precipitation events and then utilized for the analysis of satellite observations of new events. Notably, retrieval precision and accuracy are related more to the appropriate generation of cloud profile datasets associated with the typologies of the observed precipitation events than to an a posteriori statistical treatment of uncertainties. In essence, retrieval performance can be improved by generating a statistically significant CRD by means of a large number of different CRM simulations representing all precipitation regimes of interest for the zone(s) and season(s) under investigation. In addition, despite some reasonable successes with the CRD and Bayesian approach, there is a considerable reservoir of potential information that has not yet been tapped. This ancillary information exists in the knowledge of the "synoptic situation" of the considered event and its geographical and temporal location. This knowledge renders some entries of the CRD more relevant than others, by virtue of how similar the circumstances of the simulated events are to those of the event for which the database is applied. We can capture this information in the form of "dynamical tags" which link a satellite-observed event to a subset of the entire CRD using an independent estimate of these tags.
To accomplish this, we have expanded the CRD approach to include these "dynamical tags" and have developed a new passive MW precipitation retrieval algorithm which employs these tags in addition to the upwelling TBs. We call these the Cloud Dynamics and Radiation Database (CDRD) approach and the CDRD algorithm, respectively. Recently, we have generated a CDRD database for Europe from a large number of CRM simulations of precipitating systems over this area, performed with the University of Wisconsin Non-hydrostatic Modeling System (UW-NMS). In our presentation, we will briefly review the main design features of the CDRD approach and show an analysis of the statistical properties of this highly populated European CDRD database. Finally, we will compare its radiative characteristics with an equivalent set of MW radiometric measurements from polar satellites.
Collisional statistics and dynamics of two-dimensional hard-disk systems: From fluid to solid.
Taloni, Alessandro; Meroz, Yasmine; Huerta, Adrián
2015-08-01
We perform extensive MD simulations of two-dimensional systems of hard disks, focusing on the collisional statistical properties. We analyze the distribution functions of velocity, free flight time, and free path length for packing fractions ranging from the fluid to the solid phase. The behaviors of the mean free flight time and path length between subsequent collisions are found to drastically change in the coexistence phase. We show that single-particle dynamical properties behave analogously in collisional and continuous-time representations, exhibiting apparent crossovers between the fluid and the solid phases. We find that, both in collisional and continuous-time representation, the mean-squared displacement, velocity autocorrelation functions, intermediate scattering functions, and self-part of the van Hove function (propagator) closely reproduce the same behavior exhibited by the corresponding quantities in granular media, colloids, and supercooled liquids close to the glass or jamming transition. PMID:26382368
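The collisional statistics discussed above rest on event-driven hard-disk dynamics, whose elementary step is the pair collision time: the earliest time at which two disks' centers reach one diameter of separation. A minimal sketch follows; the positions, velocities and diameter are illustrative, not taken from the simulations.

```python
import math

def pair_collision_time(r1, v1, r2, v2, sigma):
    """Time until two hard disks of diameter sigma collide, or None if they never do."""
    drx, dry = r2[0] - r1[0], r2[1] - r1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    b = drx * dvx + dry * dvy          # disks are approaching iff b < 0
    dv2 = dvx * dvx + dvy * dvy
    dr2 = drx * drx + dry * dry
    disc = b * b - dv2 * (dr2 - sigma * sigma)
    if b >= 0 or disc < 0:
        return None                    # receding, or closest approach > sigma
    return (-b - math.sqrt(disc)) / dv2

# Head-on pair: centers 3 apart, diameter 1, closing speed 2 -> collision at t = 1.0
print(pair_collision_time((0, 0), (1, 0), (3, 0), (-1, 0), 1.0))  # 1.0
```

Histogramming such times over many collisions yields the free flight time distribution analyzed in the paper.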
Extended Dynamic Subgraph Statistics Using h-Index Parameterized Data Structures
NASA Astrophysics Data System (ADS)
Eppstein, David; Goodrich, Michael T.; Strash, Darren; Trott, Lowell
We present techniques for maintaining subgraph frequencies in a dynamic graph, using data structures that are parameterized in terms of h, the h-index of the graph. Our methods extend previous results of Eppstein and Spiro for maintaining statistics for undirected subgraphs of size three to directed subgraphs and to subgraphs of size four. For the directed case, we provide a data structure to maintain counts for all 3-vertex induced subgraphs in O(h) amortized time per update. For the undirected case, we maintain the counts of size-four subgraphs in O(h²) amortized time per update. These extensions enable a number of new applications in Bioinformatics and Social Networking research.
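The h-index of a graph, the parameter these data structures are built around, is the largest h such that the graph contains h vertices of degree at least h. A minimal sketch of computing it (this illustrates only the parameter, not the paper's dynamic data structures):

```python
from collections import Counter

def graph_h_index(edges):
    """Largest h such that at least h vertices have degree >= h."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    degrees = sorted(deg.values(), reverse=True)
    h = 0
    for i, d in enumerate(degrees, start=1):  # i-th largest degree
        if d >= i:
            h = i
    return h

# Star K_{1,3}: one vertex of degree 3, three of degree 1 -> h = 1
print(graph_h_index([(0, 1), (0, 2), (0, 3)]))  # 1
# Triangle: three vertices of degree 2 -> h = 2
print(graph_h_index([(0, 1), (1, 2), (2, 0)]))  # 2
```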
A copula approach on the dynamics of statistical dependencies in the US stock market
NASA Astrophysics Data System (ADS)
Münnix, Michael C.; Schäfer, Rudi
2011-11-01
We analyze the statistical dependence structure of the S&P 500 constituents in the 4-year period from 2007 to 2010 using intraday data from the New York Stock Exchange's TAQ database. Instead of using a given parametric copula with a predetermined shape, we study the empirical pairwise copula directly. We find that the shape of this copula resembles the Gaussian copula to some degree, but exhibits stronger tail dependence, for both correlated and anti-correlated extreme events. By dynamically comparing the tail dependence to the market's average correlation level, a commonly used quantity, we quantify the average error of the Gaussian copula that is implicit in the calculation of many correlation coefficients.
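Studying the empirical pairwise copula directly, as above, starts from rank-transformed pseudo-observations on the unit square. A minimal sketch with synthetic correlated data (the series, correlation strength and sample size are illustrative, not the TAQ data):

```python
import numpy as np

def empirical_copula_sample(x, y):
    """Map paired observations to the unit square via normalized ranks
    (pseudo-observations of the empirical copula)."""
    n = len(x)
    u = (np.argsort(np.argsort(x)) + 1) / (n + 1)
    v = (np.argsort(np.argsort(y)) + 1) / (n + 1)
    return u, v

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
y = 0.5 * x + rng.standard_normal(1000)   # synthetic correlated pair of "returns"
u, v = empirical_copula_sample(x, y)
print(u.min() > 0 and u.max() < 1)  # True
```

A 2-D histogram of (u, v) then estimates the copula density, whose corners reveal the tail dependence discussed in the abstract.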
Defect-phase-dynamics approach to statistical domain-growth problem of clock models
NASA Technical Reports Server (NTRS)
Kawasaki, K.
1985-01-01
The growth of statistical domains in quenched Ising-like p-state clock models with p = 3 or more is investigated theoretically, reformulating the analysis of Ohta et al. (1982) in terms of a phase variable and studying the dynamics of defects introduced into the phase field when the phase variable becomes multivalued. The resulting defect/phase domain-growth equation is applied to the interpretation of Monte Carlo simulations in two dimensions (Kaski and Gunton, 1983; Grest and Srolovitz, 1984), and problems encountered in the analysis of related Potts models are discussed. In the two-dimensional case, the problem is essentially that of a purely dissipative Coulomb gas, with a √t growth law complicated by vertex-pinning effects at small t.
NASA Astrophysics Data System (ADS)
Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.
2015-08-01
We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
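The statistical optimization step described above combines background and observed bending angles weighted by their error covariances. A heavily simplified sketch with diagonal (per-level variance) covariances and illustrative values follows; the actual algorithm uses geographically varying uncertainty profiles and full correlation matrices:

```python
import numpy as np

def optimize_profile(a_obs, a_bg, sig_obs, sig_bg):
    """Inverse-variance combination of observed and background bending-angle
    profiles, level by level (diagonal-covariance special case)."""
    w_obs = 1.0 / sig_obs**2
    w_bg = 1.0 / sig_bg**2
    return (w_obs * a_obs + w_bg * a_bg) / (w_obs + w_bg)

a_bg = np.array([1.0, 1.0])    # background bending angles (arbitrary units)
a_obs = np.array([2.0, 2.0])   # observed bending angles
# Where the observation is noisy (large sig_obs) the result stays near the background.
out = optimize_profile(a_obs, a_bg, sig_obs=np.array([1.0, 10.0]), sig_bg=np.array([1.0, 1.0]))
print(out)
```

At the first level (equal errors) the result is the midpoint 1.5; at the second (noisy observation) it stays close to the background value 1.0.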
NASA Astrophysics Data System (ADS)
Kruschke, T.; Lorenz, P.; Osinski, R.; Voigt, M.; Leckebusch, G. C.; Ulbrich, U.
2012-04-01
Extreme winter wind storms are major natural catastrophes leading to enormous socio-economic impacts in Europe. The impact of a single event depends not only on the severity and extent of the event itself but also on the region hit by the storm, combined with that region's specific exposure of values and vulnerability. The spatial distribution of exposed values and their vulnerability is highly heterogeneous. Therefore, it is necessary to analyze extremes of surface wind speeds within winter wind storms at high spatial resolution. This study analyzes whether rather simple linear regression methods are suitable for estimating extreme surface wind gusts at high spatial resolution, using different coarse-resolution predictors. The statistical relationships between coarse-resolution predictors from ECMWF reanalysis data and high-resolution (~7 km x 7 km) predictands, i.e., the maximum gusts, are derived from dynamical simulations of extreme historical events performed with the German Weather Service (DWD) model chain GME-COSMO-EU. Validation of the results of the statistical downscaling confirms the high skill of linear regressions for different European sub-regions. Hence, the application of these methods to more extensive datasets in order to estimate extreme wind gusts and their exceedance probabilities or return periods is justified.
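The linear regression downscaling described above can be sketched per fine-scale grid point as an ordinary least-squares fit of the maximum gust against a coarse-resolution predictor. The predictor values and the synthetic gust relation below are illustrative, not the study's data:

```python
import numpy as np

def fit_downscaling(coarse, fine):
    """Least-squares fit: fine-scale gust = a * coarse predictor + b."""
    A = np.column_stack([coarse, np.ones_like(coarse)])
    coef, *_ = np.linalg.lstsq(A, fine, rcond=None)
    return coef  # (slope, intercept)

coarse = np.array([10.0, 15.0, 20.0, 25.0])   # e.g. coarse-grid wind speed (m/s)
fine = 1.4 * coarse + 2.0                      # synthetic fine-scale "gust" target
a, b = fit_downscaling(coarse, fine)
print(round(a, 3), round(b, 3))  # 1.4 2.0
```

In practice one such regression would be fitted at each of the ~7 km x 7 km grid points, trained on the GME-COSMO-EU simulations of historical storms.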
Cai Jing; Read, Paul W.; Larner, James M.; Jones, David R.; Benedict, Stanley H.; Sheng Ke
2008-11-15
Purpose: To investigate the statistical reproducibility of the craniocaudal probability distribution function (PDF) of interfraction lung motion using dynamic magnetic resonance imaging. Methods and Materials: A total of 17 subjects, 9 healthy volunteers and 8 lung tumor patients, underwent two to three continuous 300-s magnetic resonance imaging scans in the sagittal plane, repeated 2 weeks apart. Three pulmonary vessels from different lung regions (upper, middle, and lower) in the healthy subjects and lung tumor patients were selected for tracking, and the displacement PDF reproducibility was evaluated as a function of scan time and frame rate. Results: For both healthy subjects and patients, the PDF reproducibility improved with increased scan time and converged to an equilibrium state during the 300-s scan. The PDF reproducibility at 300 s (mean, 0.86; range, 0.70-0.96) was significantly (p < 0.001) higher than at 5 s (mean, 0.65; range, 0.25-0.79). PDF reproducibility was less sensitive to imaging frame rates above 2 frames/s. Conclusion: A statistically significant improvement in PDF reproducibility was observed with prolonged scan time among the 17 participants. The confirmation of PDF reproducibility over times much shorter than the stereotactic body radiotherapy delivery duration is a vital part of the initial validation process of probability-based treatment planning for stereotactic body radiotherapy of lung cancer.
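A reproducibility score between two sessions' displacement PDFs can be illustrated with a histogram overlap coefficient (a value of 1 means identical PDFs). The abstract does not specify the exact metric used, so this overlap coefficient and the synthetic displacement data are stand-ins:

```python
import numpy as np

def pdf_overlap(x1, x2, bins=20, limits=(-20.0, 20.0)):
    """Overlap coefficient (0..1) between two displacement histograms,
    a simple stand-in for a PDF reproducibility score."""
    p, _ = np.histogram(x1, bins=bins, range=limits, density=True)
    q, _ = np.histogram(x2, bins=bins, range=limits, density=True)
    bw = (limits[1] - limits[0]) / bins
    return float(np.minimum(p, q).sum() * bw)

g = np.random.default_rng(2)
scan1 = g.normal(0, 5, 5000)   # synthetic craniocaudal displacements, session 1 (mm)
scan2 = g.normal(0, 5, 5000)   # session 2, same underlying motion statistics
print(pdf_overlap(scan1, scan2) > 0.9)  # True
```

With identical underlying motion statistics the score approaches 1; tracking the score as a function of the amount of data mirrors the scan-time convergence reported above.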
Yang, Jaw-Yen; Yan, Chih-Yuan; Diaz, Manuel; Huang, Juan-Chen; Li, Zhihui; Zhang, Hanxin
2014-01-01
The ideal quantum gas dynamics as manifested by the semiclassical ellipsoidal-statistical (ES) equilibrium distribution derived in Wu et al. (Wu et al. 2012 Proc. R. Soc. A 468, 1799–1823 (doi:10.1098/rspa.2011.0673)) is numerically studied for particles of three statistics. This anisotropic ES equilibrium distribution was derived using the maximum entropy principle and conserves the mass, momentum and energy, but differs from the standard Fermi–Dirac or Bose–Einstein distribution. The present numerical method combines the discrete velocity (or momentum) ordinate method in momentum space and the high-resolution shock-capturing method in physical space. A decoding procedure to obtain the necessary parameters for determining the ES distribution is also devised. Computations of two-dimensional Riemann problems are presented, and various contours of the quantities unique to this ES model are illustrated. The main flow features, such as shock waves, expansion waves and slip lines and their complex nonlinear interactions, are depicted and found to be consistent with existing calculations for a classical gas. PMID:24399919
Drought episodes over Greece as simulated by dynamical and statistical downscaling approaches
NASA Astrophysics Data System (ADS)
Anagnostopoulou, Christina
2016-04-01
Drought over the Greek region is characterized by a strong seasonal cycle and large spatial variability. Dry spells longer than 10 consecutive days mainly determine the duration and intensity of Greek droughts, and an increasing trend in the frequency of drought episodes has been observed, especially during the last 20 years of the 20th century. However, the most recent regional climate models (RCMs) present discrepancies compared to observed precipitation, even though they are able to reproduce the main patterns of atmospheric circulation. In this study, both a statistical and a dynamical downscaling approach are used to quantify drought episodes over Greece by simulating the Standardized Precipitation Index (SPI) for different time steps (3, 6, and 12 months). A statistical downscaling technique based on artificial neural networks (ANNs) is employed for the estimation of SPI over Greece, while the drought index is also estimated using the RCM precipitation for the period 1961-1990. Overall, it was found that the drought characteristics (intensity, duration, and spatial extent) were well reproduced by the regional climate models for the long-term drought index (SPI12), while the ANN simulations are better for the short-term drought index (SPI3).
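The SPI used above aggregates precipitation over a window (3, 6 or 12 months) and expresses it in standardized units. A deliberately simplified sketch follows: the operational SPI fits a gamma distribution and maps through the normal quantile function, whereas this version standardizes directly, and the monthly series is synthetic:

```python
import numpy as np

def spi_like(precip, window=3):
    """Simplified SPI: aggregate precipitation over `window` months, then
    standardize. (The operational SPI fits a gamma distribution instead.)"""
    kernel = np.ones(window)
    agg = np.convolve(precip, kernel, mode="valid")
    return (agg - agg.mean()) / agg.std()

rng = np.random.default_rng(3)
monthly = rng.gamma(shape=2.0, scale=30.0, size=120)   # 10 synthetic years, mm/month
spi3 = spi_like(monthly, window=3)
print(abs(spi3.mean()) < 1e-9, round(spi3.std(), 3))  # True 1.0
```

Values below about -1 would then flag drought months, with persistence defining drought episodes.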
An Embedded Statistical Method for Coupling Molecular Dynamics and Finite Element Analyses
NASA Technical Reports Server (NTRS)
Saether, E.; Glaessgen, E.H.; Yamakov, V.
2008-01-01
The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
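The core of the statistical coupling described above is replacing one-to-one atom-node linkage with an average of atomistic displacements over a local atomic volume around each interface FE node. A minimal sketch, with a spherical averaging volume and toy coordinates that are purely illustrative:

```python
import numpy as np

def nodal_average_displacements(atom_pos, atom_disp, node_pos, radius):
    """Average atomistic displacements inside a sphere of given radius around
    each interface FE node (the statistical averaging step of ESCM-style coupling)."""
    averages = []
    for xn in node_pos:
        mask = np.linalg.norm(atom_pos - xn, axis=1) <= radius
        averages.append(atom_disp[mask].mean(axis=0))
    return np.array(averages)

# Toy single-node example: three atoms near the node share displacement (0.1, 0, 0);
# a fourth atom lies far outside the averaging sphere and is ignored.
atoms = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.0, 0.5, 0.0], [5.0, 5.0, 5.0]])
disp = np.array([[0.1, 0.0, 0.0]] * 3 + [[9.0, 9.0, 9.0]])
print(nodal_average_displacements(atoms, disp, np.array([[0.0, 0.0, 0.0]]), 1.0))
```

These nodal averages would then be exchanged iteratively as boundary conditions between the otherwise independent MD and FEM solvers.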
Dislocation dynamics, plasticity and avalanche statistics using the phase-field crystal model
NASA Astrophysics Data System (ADS)
Angheluta, Luiza
2013-03-01
The plastic deformation of stressed crystalline materials is characterized by intermittency and scaling behavior. The sudden strain bursts arise from collective interactions between depinned crystal defects such as dislocations. Recent experiments on sheared nanocrystals provide insights into the connection between the crystal plasticity and the mean field theory of the depinning transition, based on the similar power-law statistics of avalanche events. However, a complete theoretical formulation of this connection is still lacking, as are high quality numerical data. Phase field crystal modelling provides an efficient numerical approach to simulating the dynamics of dislocations in plastic flows at finite temperature. Dislocations are naturally created as defects in a periodic ground state that is being sheared, without any ad hoc creation and annihilation rules. These crystal defects interact and annihilate with one another, generating a collective effect of avalanches in the global plastic strain rate. We examine the statistics of plastic avalanches both at finite and zero temperatures, and find good agreement with the predictions of the mean field interface depinning theory. Moreover, we predict universal scaling forms for the extreme statistics of avalanches and universal relations between the power-law exponents of avalanche duration, size and extreme value. These results account for the observed power-law distribution of the maximum amplitudes in acoustic emission experiments of crystal plasticity, but are also broadly applicable to other systems in the mean-field interface depinning universality class, ranging from magnets to earthquakes. The work reported here was performed in collaboration with: Georgios Tsekenis, Michael LeBlanc, Patrick Y Chan, Jon Dantzig, Karin Dahmen, and Nigel Goldenfeld. 
The work was supported by the Center for Physics of Geological Processes (Norway) through a post-doctoral grant, the National Science Foundation through grant NSF-DMR-03-25939, NSF_DMR-1005209 and NSF-DMS-1069224 and DOE Subcontract No. 4000076535 (J.D.)
NASA Astrophysics Data System (ADS)
Tessarotto, Massimo; Cremaschini, Claudio
2014-07-01
In this investigation, exact particular realizations are sought for the microscopic statistical description associated with the classical dynamical system (CDS) formed by N identical smooth hard spheres subject to elastic collisions (S_N-CDS). The problem is posed in the framework of the recently developed ab initio statistical description of S_N-CDS. It is shown that the Liouville equation associated with S_N-CDS admits an exact particular solution for the N-body probability density function (PDF). This is factorized in terms of the i-th particle 1-body PDF (for all i = 1, ..., N) via suitable weighting factors, denoted here as particle occupation coefficients. The latter are found to depend functionally only on the 1-body PDFs associated with each of the remaining particles belonging to S_N-CDS. Furthermore, the 1-body PDF is proved to obey a well-defined statistical equation, referred to here as the Master kinetic equation. This is an exact kinetic equation which takes into account the occurrence of configuration-space correlations due to the finite size of the extended particles, while depending functionally on the same 1-body PDF only. The asymptotic approximation of the Master equation, which holds in the Boltzmann-Grad limit, is shown to recover in a suitable asymptotic sense the customary Boltzmann equation. Finally, a critical analysis is presented of the original and modified versions of the Enskog kinetic equation, as well as of some of the non-linear kinetic approaches formulated in the past for dense granular gases. Their conditions of validity and main differences with respect to the present theory are pointed out.
NASA Astrophysics Data System (ADS)
Funk, C. C.; Shukla, S.; Hoerling, M. P.; Robertson, F. R.; Hoell, A.; Liebmann, B.
2013-12-01
During boreal spring, eastern portions of Kenya and Somalia have experienced more frequent droughts since 1999. Given the region's high levels of food insecurity, better predictions of these droughts could provide substantial humanitarian benefits. We show that dynamical-statistical seasonal climate forecasts, based on the latest generation of coupled atmosphere-ocean and uncoupled atmospheric models, effectively predict boreal spring rainfall in this area. Skill sources are assessed by comparing ensembles driven with full-ocean forcing with ensembles driven with ENSO-only sea surface temperatures (SSTs). Our analysis suggests that both ENSO and non-ENSO Indo-Pacific SST forcing have played an important role in the increase in drought frequencies. Over the past 30 years, La Niña drought teleconnections have strengthened, while non-ENSO Indo-Pacific convection patterns have also supported increased (decreased) Western Pacific (East African) rainfall. To further examine the relative contributions of ENSO, low-frequency warming and the Pacific Decadal Oscillation, we present decompositions of ECHAM5, GFS, CAM4 and GMAO AMIP simulations. These decompositions suggest that rapid warming in the western Pacific and steeper western-to-central Pacific SST gradients have likely played an important role in the recent intensification of the Walker circulation and the associated increase in East African aridity. A linear combination of time series describing the Pacific Decadal Oscillation and the strength of Indo-Pacific warming is shown to track East African rainfall reasonably well. The talk concludes with a few thoughts on the potentially important interplay of attribution and prediction. At least for recent East African droughts, it appears that a characteristic Indo-Pacific SST and precipitation anomaly pattern can be linked statistically to support forecasts and attribution analyses.
The combination of traditional AGCM attribution analyses with simple yet physically plausible statistical estimation procedures may help us better untangle some climate mysteries.
ERIC Educational Resources Information Center
BIVONA, WILLIAM A.
A set of guidelines for implementing and operating a replica of a prototype Selective Dissemination of Information (SDI) system, tested at U.S. Army Natick Laboratories and reported in LI 000 273, is given in this manual. Information is supplied which is useful in the initial stages of implementation. The application of specific criteria for…
Technology Transfer Automated Retrieval System (TEKTRAN)
Subsurface drip irrigation (SDI) as with all microirrigation systems is typically only used on crops with greater value. In the U.S. Great Plains region, the typical irrigated crops are the cereal and oil seed crops and cotton. These crops have less economic revenue than typical microirrigated cro...
Technology Transfer Automated Retrieval System (TEKTRAN)
An experimental field moisture controlled subsurface drip irrigation (SDI) system was designed and installed as a field trial in a Vertisol in the Alabama Black Belt region for two years. The system was designed to start hydraulic dosing only when field moisture was below field capacity. Results sho...
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei
2011-03-01
The idea that quantum randomness can be reduced to randomness of classical fields (fluctuating at time and space scales which are essentially finer than scales approachable in modern quantum experiments) is rather old. Various models have been proposed, e.g., stochastic electrodynamics or the semiclassical model. Recently a new model, so-called prequantum classical statistical field theory (PCSFT), was developed. In this model a "quantum system" is just a label for a (so to say "prequantum") classical random field. Quantum averages can be represented as classical field averages. Correlations between observables on subsystems of a composite system can likewise be represented as classical correlations; in particular, this can be done for entangled systems. The creation of such a classical field representation demystifies quantum entanglement. In this paper we show that the quantum dynamics (given by Schrödinger's equation) of entangled systems can be represented as the stochastic dynamics of classical random fields. The "effect of entanglement" is produced by classical correlations which were present at the initial moment of time, cf. the views of Albert Einstein.
Ni, Bo; He, Fazhi; Yuan, ZhiYong
2015-12-01
Segmenting the lesion areas from ultrasound (US) images is an important step in the intra-operative planning of high-intensity focused ultrasound (HIFU). However, accurate segmentation remains a challenge due to intensity inhomogeneity and blurry boundaries in HIFU US images and the deformation of uterine fibroids caused by the patient's breathing or external force. This paper presents a novel dynamic statistical shape model (SSM)-based segmentation method to accurately and efficiently segment the target region in HIFU US images of uterine fibroids. To accurately learn the prior shape information of lesion boundary fluctuations in the training set, the dynamic properties of the stochastic differential equation and the Fokker-Planck equation are incorporated into the SSM (referred to as SF-SSM). Then, a new observation model of lesion areas (referred to as RPFM) in HIFU US images is developed to describe the features of the lesion areas and provide a likelihood probability for the prior shape given by SF-SSM. SF-SSM and RPFM are integrated into an active contour model to improve the accuracy and robustness of segmentation in HIFU US images. We compare the proposed method with four well-known US segmentation methods to demonstrate its superiority. The experimental results on clinical HIFU US images validate the high accuracy and robustness of our approach, even when the quality of the images is unsatisfactory, indicating its potential for practical application in HIFU therapy. PMID:26459767
NASA Astrophysics Data System (ADS)
Fitzgerald, J.; Farrell, B.
2013-12-01
Equatorial deep jets (EDJs) are persistent, zonally-coherent jets found within one degree of the equator in all ocean basins (Luyten and Swallow, 1976). The jets are characterized by a vertically oscillating ('stacked') structure between ~500-2000m depth, with jet amplitudes on the order of 10 cm/s superimposed upon a large-scale background shear flow. EDJs are a striking feature of the equatorial climate system and play an important role in equatorial ocean transport. However, the physical mechanism responsible for the presence of EDJs remains uncertain. Previous theoretical models for EDJs have suggested mechanisms involving the reflection and constructive interference of equatorially trapped waves (Wunsch 1977, McCreary 1984) and the instability of mixed Rossby-gravity waves with EDJs as the fastest-growing eigenfunction (Hua et al. 2008, Eden et al. 2008). In this work we explore the jet formation mechanism and the parameter dependence of EDJ structure in the idealized theoretical model of the stochastically-driven equatorial beta plane. The model is formulated in three ways: 1) Fully nonlinear equations of motion 2) Quasilinear (or mean-field) dynamics 3) Statistical state dynamics employing a second order closure method (stochastic structural stability theory). Results from the three models are compared, and the implications for both the jet formation and equilibration mechanisms, as well as the role of eddy-eddy nonlinearity in the EDJ system, are discussed.
NASA Astrophysics Data System (ADS)
Ahn, J.; Lee, J.; Shim, K.; Kim, Y.
2013-12-01
In spite of the dense meteorological observations conducted over South Korea (average distance between stations: ~12.7 km), the detailed topographical effect is not reflected properly, due to the mountainous terrain and observation sites mostly situated at low altitudes. A model represents such topographical effects well, but due to systematic biases in the model, the simulated temperature distribution is sometimes far from actual observations. This study attempts to produce a detailed mean temperature distribution for South Korea through a method combining dynamical downscaling and statistical correction. For the dynamical downscaling, a multi-nesting technique is applied to obtain 3-km resolution data over the domain for a period of 10 years (1999-2008). For the correction of systematic biases, a perturbation method dividing the field into a mean and a perturbation part was used, with a different correction method applied to each part. The mean was corrected by a weighting function, while the perturbation was corrected by the self-organizing maps method. The results with correction agree well with the observed pattern compared to those without correction, improving the spatial and temporal correlations as well as the RMSE. In addition, they represent detailed spatial features of temperature, including topographic signals, which cannot be expressed properly by gridded observations. Through comparison of in-situ observations with gridded values after objective analysis, it was found that the detailed structure correctly reflects topographically diverse signals that could not be derived from limited observation data. We expect that the correction method developed in this study can be effectively used for the analyses and projections of climate downscaled using regional climate models.
Acknowledgements This work was carried out with the support of Korea Meteorological Administration Research and Development Program under Grant CATER 2012-3083 and Rural Development Administration Cooperative Research Program for Agriculture Science and Technology Development under Grant Project No. PJ009353, Republic of Korea. Reference Ahn, J.-B., Lee, J.-L., and Im, E.-S., 2012: The reproducibility of surface air temperature over South Korea using dynamical downscaling and statistical correction, J. Meteor. Soc. Japan, 90, 493-507, doi: 10.2151/jmsj.2012-404
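The mean-plus-perturbation correction described above can be sketched in its simplest form: split the model field into its mean and the deviations around it, correct only the mean toward observations, and recombine. The weighting below is a scalar stand-in for the study's weighting function and self-organizing-map steps, and the temperatures are illustrative:

```python
import numpy as np

def correct_field(model, obs_mean, weight=0.5):
    """Split a model temperature field into mean + perturbation, then nudge the
    mean toward the observed mean with a simple weight; the perturbation
    (spatial detail) is kept unchanged in this sketch."""
    m_mean = model.mean()
    pert = model - m_mean                       # topographic / fine-scale signal
    corrected_mean = (1 - weight) * m_mean + weight * obs_mean
    return corrected_mean + pert

model = np.array([10.0, 12.0, 14.0])   # biased-warm model temperatures (deg C)
out = correct_field(model, obs_mean=10.0, weight=1.0)
print(out)  # [ 8. 10. 12.]
```

With weight=1.0 the bias in the mean is removed entirely while the spatial perturbation pattern, which carries the topographic detail, survives intact.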
Dynamical and statistical effects of the intrinsic curvature of internal space of molecules.
Teramoto, Hiroshi; Takatsuka, Kazuo
2005-02-15
The Hamiltonian dynamics of a molecule in a translationally and/or rotationally symmetric field is rigorously constrained in its phase space. The relevant dynamical laws should therefore be extracted from these constrained motions. An internal space that is induced by a projection of such a limited phase space onto configuration space is an intrinsically curved space, even for a system of zero total angular momentum. In this paper we discuss the general effects of this curvedness on the dynamics and structures of molecules in a manner that is invariant with respect to the selection of coordinates. It is shown that the regular coordinate originally defined by Riemann is particularly useful for exposing the curvature correction to the dynamical and statistical properties of molecules. These effects are significant both qualitatively and quantitatively and are studied in two aspects. One is the direct effect on dynamics: a trajectory receives a Lorentz-like force from the curved space, as though it were placed in a magnetic field. The well-known problem of the trapping phenomenon at the transition state is analyzed from this point of view. By showing that the trapping force is explicitly described in terms of the curvature of the internal space, we clarify that the trapped motion indeed originates from the curvature of the internal space and hence does not depend on the selection of the coordinate system. The other aspect is the effect on the phase-space volume arising from the curvedness: we formulate a general expression for the curvature correction to the classical density of states and extract its physical significance for molecular geometry and reaction rates in terms of the scalar curvature and the volume loss (or gain) due to the curvature. The transition state theory is reformulated from this point of view and applied to the structural transition of linear chain molecules in the so-called dihedral angle model. 
It is shown that the curvature effect grows roughly linearly with the size of the molecule. PMID:15743215
NASA Astrophysics Data System (ADS)
Hastings, Whitney Allen
This dissertation combines rigid body motion kinematics and statistical analysis techniques to extract information from detailed dynamic simulations and large databases of biomolecular structures. This information is then used to quantify and elucidate structural patterns that could be used to design functional nano-structures or provide new targets for ligand-based drug design. In this regard, three particular classes of problems are examined. First, we propose new methods for estimating the stiffness of continuum filament models of helical nucleic acid structures. In this work, molecular dynamics is used to sample RNA helices consisting of several base-pairs fluctuating about an equilibrium position. At equilibrium, each base-pair has a tightly clustered probability distribution and so we can describe the rigid body motion of the helix as the convolution of highly concentrated probability densities on SE(3). Second, the structure and dynamics of a common RNA non-helical motif is classified. We examine several RNA bulges with varying sequences and helix curvature, and establish degrees of similarity (and dissimilarity) in the bulge motif according to the nucleic acid type of the bulge and surrounding base-pairs. Both the "static" X-ray-crystal and NMR structures and the dynamics generated from molecular dynamics simulations are used to quantify the flexibility and conservative aspects of the motif. The resulting classification scheme provides bulge motifs that could be included in a toolbox of "nanostructures" where one could pick the pieces to design a structure that has the needed shape and desired behavior. Finally, we analyze a large collection of adenosine binding sites, focusing on the functional region of the binding site. We provide a new analysis tool that finds spatial patterns in adenosine binding pockets by examining the relative pose (position and orientation) between the adenosine ligand and the amino acids at each binding site. 
The similarities of the numerous adenosine binding pockets are calculated according to the pose similarity and homogeny of the structures. We show that correlations between the binding pockets are multifaceted and illustrate our findings using similarity plots and multiple correlation calculations for a comprehensive analysis.
NASA Astrophysics Data System (ADS)
Balasis, G.
2012-04-01
Dynamical complexity detection for output time series of complex systems is one of the foremost problems in physics, biology, engineering, and the economic sciences. In geomagnetism and magnetospheric physics especially, accurate detection of the dissimilarity between normal and abnormal states (e.g. pre-storm activity and magnetic storms) can vastly improve geomagnetic field modelling as well as space weather forecasting. Nonextensive statistical mechanics, through the Tsallis entropy, provides a solid theoretical basis for describing and analyzing complex systems out of equilibrium, particularly systems exhibiting long-range correlations or fractal properties. Entropy measures (e.g., Tsallis entropy, Shannon entropy, block entropy, Kolmogorov entropy, T-complexity, and approximate entropy) have proven effective for investigating the dynamical complexity of Dst time series. It has been demonstrated that as a magnetic storm approaches, there is clear evidence of significantly lower complexity in the magnetosphere. The observed higher degree of organization of the system agrees with results previously inferred from fractal analysis via estimates of the Hurst exponent based on the wavelet transform. This convergence between the entropy measures and the linear analyses provides a more reliable detection of the transition from the quiet-time to the storm-time magnetosphere, thus giving evidence that the occurrence of an intense magnetic storm is imminent. Moreover, based on the general behavior of complex system dynamics, it has recently been found that Dst time series exhibit discrete scale invariance, which in turn leads to log-periodic corrections to scaling that decorate the pure power law. The latter can be used to determine the time of occurrence of an approaching magnetic storm.
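The Tsallis entropy mentioned above, S_q = (1 - Σ p_i^q)/(q - 1), reduces to the Shannon entropy as q → 1. A minimal sketch on a symbolized time series (the value of q and the binning below are illustrative choices, not those of the study):

```python
# Tsallis entropy S_q = (1 - sum(p_i**q)) / (q - 1) of a binned time series.
# Lower S_q indicates a more organized (less complex) state, as reported
# for the pre-storm magnetosphere.
from collections import Counter
import random

def tsallis_entropy(series, q=1.8, bins=10):
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0                 # guard constant series
    symbols = [min(int((x - lo) / width), bins - 1) for x in series]
    counts = Counter(symbols)
    n = len(series)
    return (1.0 - sum((c / n) ** q for c in counts.values())) / (q - 1.0)

random.seed(1)
noisy   = [random.gauss(0, 1) for _ in range(2000)]   # disordered, quiet-time proxy
ordered = [0.0] * 2000                                # fully organized signal
print(tsallis_entropy(noisy) > tsallis_entropy(ordered))  # True
```

A perfectly ordered signal occupies a single bin and gives S_q = 0; the noisy signal spreads over many bins and gives S_q > 0.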
Groundwater dynamic in a coastal aquifer using statistical analysis and geochemical modeling
NASA Astrophysics Data System (ADS)
Garone, A.; Battistel, M.; Barbieri, M.; Parisse, B.
2012-04-01
Coastal aquifers are natural environments that are particularly vulnerable and seriously threatened. Coastal areas are densely populated, and this leads to a massive withdrawal of groundwater. These conditions may induce a salinization process in the groundwater, owing to changes in the balances that govern the coexistence of fresh and salt water. Hydrochemical data from well-water samples collected during a year of monitoring in the Palo Laziale area are used to evaluate water quality and to determine the processes that control water chemistry. Geochemical ratios (EC vs rCl; Na vs Cl; rCl/rBr vs Cl) give information about mixing, cation exchange and salinity acquisition processes. A multivariate statistical approach and geochemical modeling, in particular cluster analysis and Principal Component Analysis, are adopted to assist the interpretation of the geochemical data. The software PHREEQC was used for the geochemical modeling and data processing. In addition to determining the chemical components and some physico-chemical properties (T, pH, electrical conductivity, TDS), the study provided the following: a statistical analysis of the data, and the thermodynamic equilibrium of the aquifer with a quantitative analysis of the saturation indices and the speciation of trace and minor elements. The statistical analysis makes it possible to identify two different groups of water: that typical of the domestic wells in the Palo SIC area, and that of the external wells. The PCA analysis suggests that the Palo Laziale SIC is located in an interface zone between fresh and salt water and that there is a significant amount of water recharge (the monitoring occurred in a particularly rainy year). Saturation indices (SI) for the primary minerals (aragonite, calcite and dolomite) were calculated to obtain a quantitative estimate of the instability of these phases. The calculated saturation indices define a sequence of mineral instability: calcite > dolomite > aragonite. 
The order of solubility indicates that, during the leaching process, calcite and aragonite dissolve with faster kinetics than dolomite. The combination of statistical and geochemical techniques proved to be a reliable tool for interpreting the hydrogeochemical dynamics of a coastal area.
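The saturation index used above is defined as SI = log10(IAP/Ksp): SI > 0 means the water is supersaturated with respect to the mineral, SI < 0 that the mineral can dissolve. A hypothetical worked example (the activity product below is a made-up round number; the study derived activities with PHREEQC, and the log Ksp values are common 25 °C handbook figures):

```python
# Saturation index SI = log10(IAP / Ksp) for carbonate minerals.
import math

def saturation_index(activity_product, ksp):
    return math.log10(activity_product / ksp)

iap_carbonate = 1.0e-8                  # a(Ca2+) * a(CO3^2-), hypothetical sample
log_ksp = {"calcite": -8.48, "aragonite": -8.34, "dolomite": -17.09}

si_calcite = saturation_index(iap_carbonate, 10 ** log_ksp["calcite"])
print(round(si_calcite, 2))             # positive: calcite is supersaturated
```

For this sample SI(calcite) = -8 - (-8.48) = 0.48, so calcite would tend to precipitate rather than dissolve.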
The scientists' opposition to SDI: How political views affect technical analysis
Tait, G.E.
1989-01-01
This study examines the scientists' opposition to President Reagan's Strategic Defense Initiative (1983-1989), with a focus on the relationship between the scientists' political and strategic opposition to ballistic missile defenses (BMD) and their technical doubts about BMD technologies. The study begins with a review of the scientists' increased influence in United States national security decision making following the development of atomic weapons. The study then examines the scientists' role in developing and promoting a theory of arms control based upon mutual societal vulnerability. Because of this theory, a large segment of the American scientific community came to believe that the development of ballistic missile defenses would destabilize the strategic balance, and it therefore took the lead in arguing against BMD deployments. These background chapters conclude with an analysis of the scientists' involvement in the political campaign to stop the proposed Sentinel and Safeguard Anti-Ballistic Missile defenses. The study then turns to the contemporary scientific opposition to BMD deployments and the SDI research program. After examining the polls and petitions that identify the scientists opposed to SDI, the study analyzes the tactics that three scientists used in their political effort to prevent BMD deployments. Next, an examination of the political and strategic assumptions behind the scientists' opposition to BMD reveals that a belief in the arms control process and in deterrence by punishment, especially Assured Destruction deterrence, together with a fear of an action-reaction arms race, inspires much of the contemporary opposition to BMD. Finally, the scientists' technical doubts about BMD technologies are analyzed through the prism of peer critique. These critiques show that the scientists opposed to BMD deployments use pessimistic and unrealistic assumptions to skew their technical analysis of BMD technologies.
Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method
Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.; Raval, Amish N.; Van Lysel, Michael S.; Speidel, Michael A.
2014-07-15
Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, the radiation dose can be high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to the data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR, and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. 
The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937.9% for the 500 mA FBP, 25 mA SIR, and 25 mA FBP, respectively. In numerical simulations, SIR mitigated streak artifacts in the low dose data and yielded flow maps with mean error <7% and standard deviation <9% of mean, for 30×30 pixel ROIs (12.9 × 12.9 mm{sup 2}). In comparison, low dose FBP flow errors were −38% to +258%, and standard deviation was 6%–93%. Additionally, low dose SIR achieved 4.6 times improvement in flow map CNR{sup 2} per unit input dose compared to low dose FBP. Conclusions: SIR reconstruction can reduce image noise and mitigate streaking artifacts caused by photon starvation in dynamic CT myocardial perfusion data sets acquired at low dose (low tube current), and improve perfusion map quality in comparison to FBP reconstruction at the same dose.
Physical insight into superdiffusive dynamics of Sinai billiard through collision statistics
NASA Astrophysics Data System (ADS)
Kokshenev, Valery B.; Vicentini, Eduardo
2006-02-01
We report on distinct steady-motion dynamic regimes in the chaotic Sinai billiard (SB). A numerical study of elastic reflections from the SB boundary (a square wall of side L and a circular obstacle of radius R) is carried out for different R/L. The analysis is based on the generalized diffusion equation and on the wall-collision and circle-collision distributions observed at late times. The asymptotes for the diffusion coefficient D_R and the corresponding diffusion exponent z_R are established for all geometries. The universal (R-independent) diffusion with D_1 ∝ t and z_1 = 1.5 replaces the ballistic motion regime (z_0 = 1) attributed to the square billiard (R = 0). Geometrically, this superdiffusive regime is bounded by small radii 0
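A diffusion exponent like the one above is typically read off numerically from the slope of log⟨x²(t)⟩ against log t. The toy sketch below does this for a plain 1-D random walk (which gives slope ≈ 1, i.e. normal diffusion; slope > 1 would signal superdiffusion); it is not a Sinai-billiard simulation, and all parameters are arbitrary.

```python
# Estimate a diffusion exponent as the least-squares slope of
# log(MSD) vs log(t) over an ensemble of independent random walks.
import math, random

def diffusion_exponent(n_walkers=500, n_steps=400, seed=42):
    rng = random.Random(seed)
    positions = [0.0] * n_walkers
    log_t, log_msd = [], []
    for t in range(1, n_steps + 1):
        positions = [x + rng.choice((-1.0, 1.0)) for x in positions]
        if t % 40 == 0:                   # sample the MSD on a sparse grid
            msd = sum(x * x for x in positions) / n_walkers
            log_t.append(math.log(t))
            log_msd.append(math.log(msd))
    n = len(log_t)
    mt, mm = sum(log_t) / n, sum(log_msd) / n
    num = sum((a - mt) * (b - mm) for a, b in zip(log_t, log_msd))
    den = sum((a - mt) ** 2 for a in log_t)
    return num / den

print(round(diffusion_exponent(), 1))
```

For the ballistic regime of the abstract one would instead find ⟨x²⟩ ∝ t², i.e. a slope of 2 on the same plot.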
Dynamical and statistical modelling of many body collisions Part II: Energy exchange
NASA Astrophysics Data System (ADS)
Agbormbai, A. A.
2001-08-01
While rarefied gas dynamics has traditionally assumed a dilute gas, whose densities are so low that only binary collisions and single-body gas-surface interactions occur, expressions for many-body collision rates and for many-body gas-surface interaction (GSI) rates suggest that at lower altitudes the dilute gas assumption is not valid. In particular, in the pure rarefied regime, two-body GSIs and some three-body interactions occur, whereas in the transition regime into continuum flow, four-body collisions and four-body GSIs occur. In this paper I formulate the problem of many-body energy exchange as a complement to the problem of many-body scattering formulated in Part I. I showed there that a many-body collision can be formulated in terms of the motion of a number of reduced particles, which scatter in space as well as exchange energy with one another and with the center of mass. In this paper I concentrate on the energy exchange and present a statistical theory based on reciprocity or detailed balance. I derive statistical transformation models for three- and four-body encounters. The same techniques can be applied to any number of interacting bodies; the final transformation model simply becomes more complex. The determination of the post-collision energies of the reduced particles makes it possible to calculate their post-collision speeds. This is needed, in combination with the scattering results, to calculate the laboratory velocities of the interacting particles. The approach produces results for use in the Direct Simulation Monte Carlo (DSMC) method, which is the standard method for computing rarefied gas phenomena.
NASA Astrophysics Data System (ADS)
Kim, Ok-Yeon; Kim, Hye-Mi; Lee, Myong-In; Min, Young-Mi
2016-03-01
This study aims at predicting the seasonal number of typhoons (TY) over the western North Pacific with an Asia-Pacific Climate Center (APCC) multi-model ensemble (MME)-based dynamical-statistical hybrid model. The hybrid model uses the statistical relationship between the number of TY during the typhoon season (July-October) and large-scale key predictors forecast by the APCC MME for the same season. The cross-validation result from the MME hybrid model demonstrates high prediction skill, with a correlation of 0.67 between the hindcasts and observations for 1982-2008. Cross-validation of the hybrid model with the individual models participating in the MME indicates that no single model consistently outperforms the others in predicting typhoon number. Although the forecast skill of the MME is not always the highest compared with that of each individual model, the MME presents a higher average correlation and a smaller variance of correlations. Given the large set of ensemble members from the multiple models, a relative operating characteristic score reveals 82% (above-normal) and 78% (below-normal) skill for the probabilistic prediction of the number of TY; that is, there is an 82% (78%) probability that the forecasts can successfully discriminate above-normal (below-normal) years from other years. The forecast of the hybrid model for the past 7 years (2002-2008) is more skillful than the forecast from the Tropical Storm Risk consortium. Using the large set of ensemble members from the multiple models, the APCC MME can thus provide useful deterministic and probabilistic seasonal typhoon forecasts to end-users, in particular the residents of tropical-cyclone-prone areas in the Asia-Pacific region.
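The hybrid scheme and its cross-validated skill score can be sketched in miniature: regress observed seasonal counts on a model-forecast predictor, and score the regression by leave-one-out cross-validation, as in the 1982-2008 hindcast. Everything below is synthetic; the real predictors come from the APCC MME.

```python
# Leave-one-out cross-validated correlation of a one-predictor
# dynamical-statistical hybrid regression (toy data).
def leave_one_out_corr(x, y):
    preds = []
    for k in range(len(x)):
        xs = [v for i, v in enumerate(x) if i != k]
        ys = [v for i, v in enumerate(y) if i != k]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        b = sum((a - mx) * (c - my) for a, c in zip(xs, ys)) / \
            sum((a - mx) ** 2 for a in xs)
        preds.append(my + b * (x[k] - mx))        # predict the held-out year
    mp, mo = sum(preds) / len(preds), sum(y) / len(y)
    num = sum((p - mp) * (o - mo) for p, o in zip(preds, y))
    den = (sum((p - mp) ** 2 for p in preds) *
           sum((o - mo) ** 2 for o in y)) ** 0.5
    return num / den

predictor = [0.1, -0.4, 0.9, 1.3, -0.8, 0.2, -1.1, 0.6]   # synthetic MME index
ty_counts = [17, 14, 21, 23, 12, 18, 11, 19]              # synthetic TY counts
print(round(leave_one_out_corr(predictor, ty_counts), 2))
```

Holding each year out of the fit before predicting it is what keeps the quoted 0.67 hindcast correlation honest; an in-sample correlation would overstate the skill.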
Zhang, Yu J; Harte, John
2015-11-01
Model predictions of species competition outcomes depend strongly on the assumed form of the population growth function. In this paper we apply an alternative inferential method based on statistical mechanics, the maximization of Boltzmann entropy, to predict resource-constrained population dynamics and coexistence. Within this framework, population dynamics and competition outcomes can be determined without assuming any particular form of the population growth function. The dynamics of each species is determined by two parameters: the mean resource requirement θ (related to the mean metabolic rate) and the individual distinguishability Dr (related to intra- compared to interspecific functional variation). Our theory clarifies the conditions for the energetic equivalence rule (EER) to hold, and provides a statistical explanation for the importance of species functional variation in determining population dynamics and coexistence patterns. PMID:26226230
Dynamical and statistical modelling of many body collisions Part I: Scattering
NASA Astrophysics Data System (ADS)
Agbormbai, A. A.
2001-08-01
Although rarefied gas dynamics has traditionally rested on the dilute gas assumption, which presupposes that only binary collisions and single-body gas-surface interactions occur, expressions for many-body collision rates and for many-body gas-surface interaction (GSI) rates suggest that at lower altitudes the dilute gas assumption is not valid. In particular, in the pure rarefied regime, two-body GSIs and some three-body interactions occur, whereas in the transition regime into continuum flow, four-body collisions and four-body GSIs occur. In this paper I formulate many-body collisions using dynamical and statistical modelling. I show that the results of binary collisions can be useful in formulating many-body collisions. In particular, I show that the equations of motion of a many-body system can be recast into the equations of motion of a number of reduced particles, which scatter in space as well as exchange energy with one another and with the center of mass of the system. For N interacting bodies there are N-1 reduced particles. In this paper I focus on the scattering of the reduced particles. For two-body collisions the scattering of the single reduced particle is confined to a single plane. However, for many-body collisions the equations of motion show that the scattering of each reduced particle occurs in space; hence, there is an in-plane deflection as well as an out-of-plane deflection. I calculate each deflection using the two-body formula, but with important modifications to account for the resolution of the intermolecular potential into in-plane and out-of-plane components. The magnitude of the potential is also adjusted to take account of coefficients derived from the equations of motion. I also propose an approximate model, which computes the in-plane deflection as in binary collisions but computes the out-of-plane deflection using a statistical model based on reciprocity or detailed balance. 
The models are designed for Direct Simulation Monte Carlo computations of rarefied gas phenomena.
NASA Astrophysics Data System (ADS)
Demaria, E. M.; Troch, P. A.; Durcik, M.; Dominguez, F.; Rajagopal, S.
2010-12-01
The Phoenix, AZ metro area is the twelfth largest in the US, and the Phoenix valley is experiencing rapid population growth. Water resource availability in the 21st century for the region is of great concern to water managers. The City of Phoenix sources its surface water supplies from two watersheds: the Colorado River and the Salt/Verde Rivers. In this research we simulated the potential impacts of climate change on the hydrology of the Salt and Verde River basins in Arizona, using statistically and dynamically downscaled climate scenarios from the Hadley Centre Coupled Model, version 3 (HadCM3). Statistically (STA) and Dynamically (DYN) downscaled precipitation and temperature data were used to force the Variable Infiltration Capacity (VIC) hydrological model. DYN streamflow simulations for the winter season showed no significant changes throughout the century whereas STA streamflow simulations decreased in the first three decades before increasing in the final five decades of the century. Simulated streamflows in the summer season were larger than streamflows in the historical record for the DYN data; these increases were strongly tied to increased precipitation. The STA data showed simulated streamflows systematically below the historical period for the Salt River basin and a similar pattern to the simulated winter flows for the Verde River basin. An analysis of the frequency of maximum monthly volumes indicated a slight increase in the magnitude of events in the future whereas streamflow deficits are more extreme in the 21st century for the STA simulated flows. DYN simulated maximum monthly streamflows will become slightly smaller than in the present and the severity of streamflow deficits will be reduced, particularly in the Salt River basin. 
The discrepancies between the STA and DYN simulations might be explained by differences in the temporal and spatial distribution of rainfall events, by the temporal disaggregation of monthly precipitation and temperature applied to the STA data, and by a better representation of intra- and interannual variability in the DYN data. These results suggest that the choice of downscaling method plays an important role in the magnitude of simulated streamflows. This research can be used as a planning tool by Phoenix-area water managers: the simulated streamflows could force their reservoir and water-resource management models.
NASA Astrophysics Data System (ADS)
Nesterenko, O. I.; Kabakov, D. V.
An algorithm is proposed for estimating the input actions (angular body vibrations) of a dynamically tunable gyroscope in the presence of drifts associated with the instability of thermal fields. The algorithm is based on the separation of the useful input signal and the thermal drift with allowance for their statistical properties. The efficiency of the estimation algorithm and its accuracy have been demonstrated experimentally using a real dynamically tunable gyroscope.
NASA Astrophysics Data System (ADS)
Molini, A.
2012-12-01
Precipitation is one of the major drivers of ecosystem dynamics. Such control is the result of complex dynamical interactions, seldom linear, exerted over a wide range of space and time scales. For this reason, even though precipitation variability and intermittency, for example, are known to be among the main drivers of plant production, with a consequent influence on the carbon and nitrogen cycles, the complete pathway of such forcing often remains unclear. Traditional time series analysis bases the study of these interconnections on linear correlation statistics. However, the possible presence of causal dynamical connections, as well as non-linear couplings and non-stationarity, can affect the performance of these tools. Additionally, dynamical drivers can act simultaneously over different space and time scales. Given this premise, this talk explores the linear and non-linear correlation patterns, information flows and directional couplings characterizing the control of precipitation on ecosystem dynamics, using an ensemble of statistics borrowed from information theory, non-linear dynamical systems analysis and multi-resolution spectral decomposition. In particular, we focus on the development of an extension to the frequency domain of the delayed correlation and conditional mutual information functions, and on the implementation of directional coupling measures such as conditional spectral causality, the phase-slope index, and transfer entropy in the wavelet domain. Several examples from different climatic regimes are discussed with the goal of highlighting the strengths and weaknesses of these statistics.
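Transfer entropy, one of the directional-coupling measures listed above, can be estimated with a simple histogram on symbolized series: TE(X→Y) = Σ p(y₁, y₀, x₀) log₂[p(y₁|y₀, x₀)/p(y₁|y₀)]. The sketch below uses binary symbols and omits the wavelet decomposition discussed in the talk; the lagged-copy example is synthetic.

```python
# Histogram estimate of transfer entropy TE(X -> Y) on symbolic series.
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y):
    triples  = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_now, x_now)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles  = Counter(y[:-1])
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint     = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]             # p(y1 | y0, x0)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]   # p(y1 | y0)
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te

rng = random.Random(0)
x = [rng.randint(0, 1) for _ in range(5000)]   # driver
y = [0] + x[:-1]                               # response: copies x with lag 1
print(round(transfer_entropy(x, y), 2))        # strong X -> Y flow (~1 bit)
```

The reverse direction, transfer_entropy(y, x), is near zero: knowing the lagged copy adds nothing to predicting the independent driver, which is exactly the asymmetry that distinguishes causal coupling from mere correlation.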
Sensitivity properties of a biosphere model based on BATS and a statistical-dynamical climate model
NASA Technical Reports Server (NTRS)
Zhang, Taiping
1994-01-01
A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations, particularly the model sensitivity to fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation, were conducted as a first step. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environment conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and temperature and vapor pressure of air within the foliage are practically insensitive to variation of soil surface water availability and even solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of environment accompanying gradual degradation of a tropical forest to grasslands. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations.
Dynamical and statistical behavior of discrete combustion waves: a theoretical and numerical study.
Bharath, Naine Tarun; Rashkovskiy, Sergey A; Tewari, Surya P; Gundawar, Manoj Kumar
2013-04-01
We present a detailed theoretical and numerical study of combustion waves in a discrete one-dimensional disordered system. The distances between neighboring reaction cells were modeled with a gamma distribution. The results show that the random structure of the microheterogeneous system plays a crucial role in the dynamical and statistical behavior of the system. This is a consequence of the nonlinear interaction of the random structure of the system with the thermal wave. An analysis of the experimental data on the combustion of a gasless system (Ti + xSi) and a wide range of thermite systems was performed in view of the developed model. We have shown that the burning rate of the powder system depends sensitively on its internal structure. The present model reproduces the experimental data for a wide range of pyrotechnic mixtures. We show that Arrhenius macrokinetics in the combustion of disperse systems can take place even in the absence of Arrhenius microkinetics; it can have a purely thermal nature, related to the heterogeneity of the system and to the existence of a threshold temperature. It is also observed that the combustion of disperse systems always occurs in the microheterogeneous mode, according to the relay-race mechanism. PMID:23679470
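The relay-race mechanism lends itself to a toy calculation. In this hypothetical sketch (not the paper's model), gaps between reaction cells are drawn from a gamma distribution, each ignition hop takes a diffusive time proportional to the squared gap, and the disordered chain is compared with an ordered one of equal mean spacing; alpha and the gamma parameters are arbitrary.

```python
# Toy relay-race front: cells separated by gamma-distributed gaps d_i,
# each ignition hop taking a diffusive time ~ d_i**2 / alpha. Because
# <d**2> > <d>**2, a disordered chain burns slower on average than a
# uniform chain with the same mean spacing - a purely structural effect.
import random

def front_speed(gaps, alpha=1.0):
    total_time = sum(d * d / alpha for d in gaps)   # sum of hop times
    return sum(gaps) / total_time

rng = random.Random(7)
shape, scale, n = 2.0, 0.5, 20000          # gamma gaps with mean shape*scale = 1
random_gaps  = [rng.gammavariate(shape, scale) for _ in range(n)]
uniform_gaps = [1.0] * n                   # ordered chain, same mean spacing
print(front_speed(random_gaps) < front_speed(uniform_gaps))  # True
```

This is one way a burning rate can depend on internal structure alone, with the chemistry (here, the hop-time law) held fixed.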
Statistical characteristics of dynamics for population migration driven by the economic interests
NASA Astrophysics Data System (ADS)
Huo, Jie; Wang, Xu-Ming; Zhao, Ning; Hao, Rui
2016-06-01
Population migration typically occurs under constraints, which can deeply affect the structure of a society and other related aspects. It is therefore critical to investigate the characteristics of population migration. Data from the China Statistical Yearbook indicate that the regional gross domestic product per capita relates to the population size via a linear or power-law relation. In addition, the distribution of population migration sizes, or of the relative migration strength introduced here, is dominated by a shifted power-law relation. To reveal the mechanism that creates these distributions, a dynamic model is proposed based on the migration rule that migration is facilitated by higher financial gains and abated by fewer employment opportunities at the destination, with the migration cost treated as a function of the migration distance. The calculated results indicate that the distribution of the relative migration strength is governed by a shifted power-law relation, and that the distribution of migration distances is dominated by a truncated power-law relation. These results suggest that using a pure power law to fit a distribution may not always be suitable. Additionally, from the modeling framework one can infer that randomness and determinacy jointly create the scaling characteristics of the distributions. The calculation also demonstrates that the network formed by the active nodes, representing the immigration and emigration regions, usually evolves from an ordered state with a non-uniform structure to a disordered state with a uniform structure, as evidenced by the increasing structural entropy.
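The point about shifted power laws, P(x) ∝ (x + s)^(-a), can be illustrated numerically: samples generated by inverse-transform sampling recover the true exponent only when the histogram is fit against log(x + s) with the correct shift. All parameters below are arbitrary illustration values, unrelated to the migration data.

```python
# Generate shifted-power-law samples and recover the exponent by a
# log-log least-squares fit with the correct shift applied.
import math, random

def shifted_power_samples(n, a=2.5, s=10.0, seed=3):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u = 1.0 - rng.random()          # u in (0, 1], avoids u == 0
        # inverse CDF of P(x) ~ (x+s)**(-a) on x >= 0
        out.append(s * (u ** (1.0 / (1.0 - a)) - 1.0))
    return out

def loglog_slope(xs, ys):
    lx, ly = [math.log(v) for v in xs], [math.log(v) for v in ys]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return sum((p - mx) * (q - my) for p, q in zip(lx, ly)) / \
           sum((p - mx) ** 2 for p in lx)

data = shifted_power_samples(200000)
counts = [0] * 40                       # unit-width histogram on [0, 40)
for v in data:
    if v < 40:
        counts[int(v)] += 1
centers = [i + 0.5 for i in range(40)]
slope = loglog_slope([c + 10.0 for c in centers], counts)  # correct shift s = 10
print(round(-slope, 1))
```

Fitting the same histogram against log(x) without the shift bends the low-x end of the log-log plot and biases the exponent, which is the practical sense in which a pure power-law fit "may not always be suitable".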
Exploring the String Landscape: The Dynamics, Statistics, and Cosmology of Parallel Worlds
NASA Astrophysics Data System (ADS)
Ahlqvist, Stein Pontus
This dissertation explores various facets of the low-energy solutions in string theory known as the string landscape. Three separate questions are addressed - the tunneling dynamics between these vacua, the statistics of their location in moduli space, and the potential realization of slow-roll inflation in the flux potentials generated in string theory. We find that the tunneling transitions that occur between a certain class of supersymmetric vacua related to each other via monodromies around the conifold point are sensitive to the details of warping in the near-conifold regime. We also study the impact of warping on the distribution of vacua near the conifold and determine that while previous work has concluded that the conifold point acts as an accumulation point for vacua, warping highly dilutes the distribution in precisely this regime. Finally we investigate a novel form of inflation dubbed spiral inflation to see if it can be realized near the conifold point. We conclude that for our particular models, spiral inflation seems to rely on a de Sitter-like vacuum energy. As a result, whenever spiral inflation is realized, the inflation is actually driven by a vacuum energy.
Passage Time Statistics in Exponential Distributed Time-Delay Models: Noisy Asymptotic Dynamics
NASA Astrophysics Data System (ADS)
Cáceres, Manuel O.
2014-07-01
The stochastic dynamics toward the final attractor in exponentially distributed time-delay nonlinear models is presented, and the passage-time statistics are then studied analytically in the small-noise approximation. The problem is worked out by going to the associated two-dimensional system. The mean first-passage time from the unstable state for this non-Markovian type of system is obtained using two different approaches: first, a rigorous adiabatic Markovian approximation (valid for small mean delay time); second, the stochastic path perturbation approach, which yields a non-adiabatic theory for arbitrary mean delay time. The first-passage-time distribution can be written in terms of the important parameters of the model. Both approaches are compared and found to be in excellent agreement in the adiabatic limit. In addition, the non-adiabatic approach predicts a crossover and a novel scaling behavior of the relaxation time as a function of the delay parameter.
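The reduction of an exponentially distributed delay to an associated two-dimensional Markovian system can be sketched with an auxiliary variable y that relaxes toward x on the delay timescale τ (the "linear chain trick"). The bistable drift f(y) = y − y³ and all parameter values below are illustrative choices, not the specific model of the paper.

```python
import math
import random

def simulate_delayed(x0, tau, dt, n_steps, noise=0.0, seed=2):
    """Euler-Maruyama integration of a scalar model with an exponentially
    distributed delay, reduced to the associated 2-D Markovian system via
    an auxiliary variable y:
        dx/dt = f(y) + sqrt(2*noise) * xi(t),   dy/dt = (x - y) / tau
    The bistable drift f(y) = y - y**3 is illustrative, not the paper's model.
    """
    rng = random.Random(seed)
    x = y = x0
    for _ in range(n_steps):
        drift = y - y ** 3
        x += drift * dt + math.sqrt(2.0 * noise * dt) * rng.gauss(0.0, 1.0)
        y += (x - y) / tau * dt
    return x, y
```

Starting near the unstable state x = 0 and timing when |x| first exceeds a threshold, over many noise realizations, gives the first-passage-time statistics studied in the paper.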
Dynamical and statistical behavior of discrete combustion waves: A theoretical and numerical study
NASA Astrophysics Data System (ADS)
Bharath, Naine Tarun; Rashkovskiy, Sergey A.; Tewari, Surya P.; Gundawar, Manoj Kumar
2013-04-01
We present a detailed theoretical and numerical study of combustion waves in a discrete one-dimensional disordered system. The distances between neighboring reaction cells were modeled with a gamma distribution. The results show that the random structure of the microheterogeneous system plays a crucial role in the dynamical and statistical behavior of the system, as a consequence of the nonlinear interaction between the random structure and the thermal wave. An analysis of experimental data on the combustion of a gasless system (Ti + xSi) and a wide range of thermite systems was performed in view of the developed model. We show that the burning rate of a powder system depends sensitively on its internal structure, and that the model reproduces the experimental data for a wide range of pyrotechnic mixtures. We also show that Arrhenius macrokinetics in the combustion of disperse systems can arise even in the absence of Arrhenius microkinetics; it can have a purely thermal nature, related to the heterogeneity of the system and the existence of a threshold temperature. Finally, the combustion of disperse systems is observed to always occur in the microheterogeneous mode, according to the relay-race mechanism.
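A minimal sketch of the discrete, relay-race picture: reaction-cell spacings are drawn from a gamma distribution, and the front crosses each gap with a thermal delay. The quadratic (diffusive) delay law and the parameter values are assumptions for illustration, not the paper's kinetics.

```python
import random

def build_chain(n_cells, shape_k, scale_theta, seed=1):
    """Distances between neighboring reaction cells, drawn from a gamma
    distribution as in the disordered-system model (parameters illustrative)."""
    rng = random.Random(seed)
    return [rng.gammavariate(shape_k, scale_theta) for _ in range(n_cells - 1)]

def relay_race_time(gaps, kappa=1.0):
    """Total front propagation time in a simple relay-race picture: each
    gap d is crossed by thermal diffusion with delay d**2 / kappa.
    (The quadratic delay law is an assumption, not the paper's kinetics.)"""
    return sum(d * d / kappa for d in gaps)
```

The mean burning rate is then the chain length divided by the total time, and its run-to-run scatter across random chains reflects the structural disorder of the mixture.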
ERIC Educational Resources Information Center
Callamaras, Peter
1983-01-01
This buyer's guide to seven major types of statistics software packages for microcomputers reviews Edu-Ware Statistics 3.0; Financial Planning; Speed Stat; Statistics with DAISY; Human Systems Dynamics package of Stats Plus, ANOVA II, and REGRESS II; Maxistat; and Moore-Barnes' MBC Test Construction and MBC Correlation. (MBR)
OneGeology Web Services and Portal as a global geological SDI - latest standards and technology
NASA Astrophysics Data System (ADS)
Duffy, Tim; Tellez-Arenas, Agnes
2014-05-01
The global coverage of OneGeology Web Services (www.onegeology.org and portal.onegeology.org) achieved since 2007 by the 120 participating geological surveys is reviewed and issues arising are discussed. Recent enhancements to the OneGeology Web Services capabilities are covered, including a new up-to-5-star service accreditation scheme utilising version 1.3 of the ISO/OGC Web Map Service standard, core ISO 19115 metadata additions, and version 2.0 Web Feature Services (WFS) serving the new IUGS-CGI GeoSciML V3.2 geological web data exchange language standard (http://www.geosciml.org/) with its associated 30+ IUGS-CGI vocabularies (http://resource.geosciml.org/ and http://srvgeosciml.brgm.fr/eXist2010/brgm/client.html). Use of the CGI simplelithology and timescale dictionaries now allows those who wish to do so to offer data harmonisation queries against their GeoSciML 3.2-based Web Feature Services and their GeoSciML_Portrayal V2.0.1 (http://www.geosciml.org/) Web Map Services in the OneGeology portal (http://portal.onegeology.org). Contributing to OneGeology involves offering to serve geological data, ideally at 1:1,000,000 scale (in practice any scale is now warmly welcomed), as an OGC (Open Geospatial Consortium) standards-based WMS (Web Map Service) from an available web server. This server may be hosted within the geological survey itself or by a neighbouring, regional or other institution that offers to serve the data on its behalf, i.e. acts as a 'buddy' providing the web-serving IT infrastructure. OneGeology is a standards-focussed Spatial Data Infrastructure (SDI) and works to ensure that these standards work together; it is now possible for European geological surveys to register their INSPIRE web services within the OneGeology SDI (e.g. see http://www.geosciml.org/geosciml/3.2/documentation/cookbook/INSPIRE_GeoSciML_Cookbook%20_1.0.pdf).
The OneGeology portal (http://portal.onegeology.org) is the first port of call for anyone wishing to discover the availability of global geological web services, and has new functionality to view and use such services, including multiple projection support. KEYWORDS: OneGeology; GeoSciML V3.2; Data exchange; Portal; INSPIRE; Standards; OGC; Interoperability; GeoScience information; WMS; WFS; Cookbook.
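A OneGeology-style layer is fetched through a standard OGC WMS 1.3.0 GetMap request; the helper below assembles such a request. The base URL and layer name are placeholders, not an actual OneGeology endpoint.

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, width=800, height=600,
                   crs="EPSG:4326", fmt="image/png"):
    """Assemble an OGC WMS 1.3.0 GetMap request of the kind served by
    OneGeology participants (base URL and layer name are placeholders)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": crs,                      # WMS 1.3.0 uses CRS, not SRS
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,
    }
    return base + "?" + urlencode(params)
```

The same pattern with REQUEST=GetCapabilities (and no layer-specific parameters) returns the service metadata that the portal harvests for its accreditation checks.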
Gaffney, Inez M.
1973-01-01
CAN/SDI is Canada's national Selective Dissemination of Information Service offering a choice of nine data bases to its scientific and technical community. The system is based on central processing at the National Science Library combined with the utilization of decentralized expertise and resources for profile formulation and user education. Its greatest strength lies in its wide interdisciplinary quality. The major advantage of centralized processing of many data bases is that Canadians need learn only one method of profile formulation to access many files. A breakdown of services used confirms that a single tape service does not cover all the information requirements of most users. On the average each profile accesses approximately 1.5 data bases. Constant subscriber growth and a low cancellation rate indicate that CAN/SDI is and will continue to be an important element in Canada's information system. PMID:4740714
Statistical Properties and Pre-Hit Dynamics of Price Limit Hits in the Chinese Stock Markets
Wan, Yu-Lei; Xie, Wen-Jie; Gu, Gao-Feng; Jiang, Zhi-Qiang; Chen, Wei; Xiong, Xiong; Zhang, Wei; Zhou, Wei-Xing
2015-01-01
Price limit trading rules are adopted in some stock markets (especially emerging markets) in an attempt to cool off traders' short-term trading mania on individual stocks and increase market efficiency. Under such a microstructure, stocks may hit their up-limits and down-limits from time to time. However, the behavior of price limit hits is not well studied, partly because major stock markets such as the US and most European markets do not set price limits. Here, we perform detailed analyses of the high-frequency data of all A-share common stocks traded on the Shanghai Stock Exchange and the Shenzhen Stock Exchange from 2000 to 2011 to investigate the statistical properties of price limit hits and the dynamical evolution of several important financial variables before a stock price hits its limits. We compare the properties of up-limit and down-limit hits, and divide the whole period into three bullish and three bearish periods to unveil possible differences between bullish and bearish market states. To uncover the impact of stock capitalization on price limit hits, we partition all stocks into six portfolios according to their capitalizations on different trading days. We find that the price limit trading rule has a cooling-off effect (as opposed to the magnet effect), indicating that the rule takes effect in the Chinese stock markets. We also find that price continuation is much more likely than price reversal on the trading day following a limit-hitting day, especially for down-limit hits, which has potential practical value for market practitioners. PMID:25874716
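Limit-hit days can be flagged from daily data against the previous close; Chinese A-shares use a ±10% daily price limit (5% for special-treatment stocks). The sketch below ignores the exchange's rounding of limit prices to 0.01 CNY, so it is an approximation rather than the study's exact detection rule.

```python
def find_limit_hits(closes, highs, lows, limit=0.10, tol=1e-4):
    """Flag up-/down-limit hits in daily data: day i hits the limit when
    its high (low) reaches the limit price computed from close[i-1].
    Exchange rounding of limit prices to 0.01 CNY is ignored here."""
    hits = []
    for i in range(1, len(closes)):
        up = closes[i - 1] * (1.0 + limit)
        down = closes[i - 1] * (1.0 - limit)
        if highs[i] >= up - tol:
            hits.append((i, "up"))
        elif lows[i] <= down + tol:
            hits.append((i, "down"))
    return hits
```

Comparing the close on a limit-hitting day with the next day's close then distinguishes the continuation and reversal cases discussed above.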
Sensitivity Properties of a Biosphere Model Based on BATS and a Statistical-Dynamical Climate Model.
NASA Astrophysics Data System (ADS)
Zhang, Taiping
1994-06-01
A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations, particularly the model sensitivity to fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation, were conducted as a first step. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environment conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and temperature and vapor pressure of air within the foliage are practically insensitive to variation of soil surface water availability and even solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of environment accompanying gradual degradation of a tropical forest to grasslands. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations.
NASA Astrophysics Data System (ADS)
McCauley, P. I.; Su, Y. N.; Schanche, N.; Evans, K. E.; Su, C.; McKillop, S.; Reeves, K. K.
2015-06-01
We present a statistical study of prominence and filament eruptions observed by the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO). Several properties are recorded for 904 events that were culled from the Heliophysics Event Knowledgebase (HEK) and incorporated into an online catalog for general use. These characteristics include the filament and eruption type, eruption symmetry and direction, apparent twisting and writhing motions, and the presence of vertical threads and coronal cavities. Associated flares and white-light coronal mass ejections (CMEs) are also recorded. Total rates are given for each property along with how they differ among filament types. We also examine the kinematics of 106 limb events to characterize the distinct slow- and fast-rise phases often exhibited by filament eruptions. The average fast-rise onset height, slow-rise duration, slow-rise velocity, maximum field-of-view (FOV) velocity, and maximum FOV acceleration are 83 Mm, 4.4 hours, 2.1 km s-1, 106 km s-1, and 111 m s-2, respectively. All parameters exhibit lognormal probability distributions similar to that of CME speeds. A positive correlation between latitude and fast-rise onset height is found, which we attribute to a corresponding negative correlation in the average vertical magnetic field gradient, or decay index, estimated from potential field source surface (PFSS) extrapolations. We also find the decay index at the fast-rise onset point to be 1.1 on average, consistent with the critical instability threshold theorized for straight current channels. Finally, we explore relationships between the derived kinematic properties and apparent twisting motions. We find that events with evident twist have significantly faster CME speeds and significantly lower fast-rise onset heights, suggesting relationships between these values and flux rope helicity.
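The lognormal fits mentioned above amount to fitting a Gaussian to the logarithms of the measured values; a minimal maximum-likelihood version:

```python
import math

def fit_lognormal(xs):
    """Maximum-likelihood lognormal fit: mu and sigma are the mean and
    (population) standard deviation of log(x)."""
    logs = [math.log(x) for x in xs]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((l - mu) ** 2 for l in logs) / n
    return mu, math.sqrt(var)
```

Applied to, say, the 106 maximum FOV velocities, the fitted (mu, sigma) fully specify the lognormal curve overplotted on the histogram.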
Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs
Wood, Andrew W; Leung, Lai R; Sridhar, V; Lettenmaier, D P
2004-01-01
Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce the precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches comprised three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied both to PCM output directly (at T42 spatial resolution) and to output first dynamically downscaled via a Regional Climate Model (RCM, at ½-degree spatial resolution), in order to downscale the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and to gridded hydrologic variables obtained by forcing the hydrologic model with observations. The most significant finding is that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) led to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the interpolated RCM output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step.
For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
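The core of the BCSD bias-correction step can be sketched as empirical quantile mapping: a model value is replaced by the observed value at the same quantile of the training climatology. This toy version omits the monthly stratification and interpolation details of the full method.

```python
def quantile_map(model_clim, obs_clim, value):
    """Empirical quantile mapping, the core of BCSD bias correction:
    find value's quantile in the model climatology and return the
    observed value at the same quantile (nearest-rank, no interpolation)."""
    m = sorted(model_clim)
    o = sorted(obs_clim)
    q = sum(1 for x in m if x <= value) / len(m)
    idx = min(int(q * len(o)), len(o) - 1)
    return o[idx]
```

Because the mapping is built from the retrospective period and then applied to the future scenario, systematic model biases are removed while the scenario's change signal is preserved in quantile space.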
He, Jiajie; Dougherty, Mark; Shaw, Joey; Fulton, John; Arriaga, Francisco
2011-10-01
Rural areas represent approximately 95% of the 14,000 km² Alabama Black Belt, an area of widespread Vertisols dominated by clayey, smectitic, shrink-swell soils. These soils are unsuitable for conventional onsite wastewater treatment systems (OWTS), which are nevertheless widely used in this region. To provide an alternative wastewater dosing system, an experimental soil-moisture-controlled subsurface drip irrigation (SDI) system was designed and installed as a field trial. The experimental system, which integrates a seasonal cropping system, was evaluated for two years on a 500-m² Houston clay site in west central Alabama from August 2006 to June 2008. The SDI system was designed to start hydraulic dosing only when field moisture was below field capacity. Hydraulic dosing rates fluctuated as expected, with higher rates during warm seasons and near-zero or zero rates during cold seasons. Lower hydraulic dosing in winter creates the need for at least two months of waste storage, an insurmountable challenge for rural homeowners. An estimated 30% of dosed water percolated below 45-cm depth during the first summer, which included a 30-year historic drought. This large volume of percolation was presumably the result of preferential flow stimulated by dry-weather clay soil cracking. Although water percolation is necessary for OWTS, the magnitude of the loss indicated that the experimental system was not able to effectively control soil moisture within its monitoring zone as designed. Overall, the findings of this study indicate that soil-moisture-controlled SDI wastewater dosing is not suitable as a standalone system in these Vertisols. However, the soil moisture control system itself functioned as designed, demonstrating that soil-moisture-controlled SDI wastewater dosing may find application as a supplement to other wastewater disposal methods that can function during cold seasons. PMID:21621905
Yano, Ayaka; Nicol, Barbara; Jouanno, Elodie; Quillet, Edwige; Fostier, Alexis; Guyomard, René; Guiguen, Yann
2013-01-01
All salmonid species investigated to date have been characterized with a male heterogametic sex-determination system. However, as these species do not share any Y-chromosome conserved synteny, there remains a debate on whether they share a common master sex-determining gene. In this study, we investigated the extent of conservation and evolution of the rainbow trout (Oncorhynchus mykiss) master sex-determining gene, sdY (sexually dimorphic on the Y-chromosome), in 15 different species of salmonids. We found that the sdY sequence is highly conserved in all salmonids and that sdY is a male-specific Y-chromosome gene in the majority of these species. These findings demonstrate that most salmonids share a conserved sex-determining locus and also strongly suggest that sdY may be this conserved master sex-determining gene. However, in two whitefish species (subfamily Coregoninae), sdY was found both in males and females, suggesting that alternative sex-determination systems may have also evolved in this family. Based on the wide conservation of sdY as a male-specific Y-chromosome gene, efficient and easy molecular sexing techniques can now be developed that will be of great interest for studying these economically and environmentally important species. PMID:23745140
NASA Astrophysics Data System (ADS)
Karakatsanis, L. P.; Pavlos, G. P.; Xenakis, M. N.
2013-09-01
In this study, which is the continuation of the first part (Pavlos et al. 2012) [1], the nonlinear analysis of the solar flares index is embedded in the non-extensive statistical theory of Tsallis (1988) [3]. The q-triplet of Tsallis, as well as the correlation dimension and the Lyapunov exponent spectrum, were estimated for the singular value decomposition (SVD) components of the solar flares time series. The multifractal scaling exponent spectrum f(a), the generalized Renyi dimension spectrum D(q), and the spectrum J(p) of the structure function exponents were also estimated experimentally and theoretically by using the q-entropy principle of Tsallis non-extensive statistical theory, following Arimitsu and Arimitsu (2000) [25]. Our analysis showed clearly: (a) a phase-transition process in the solar flare dynamics from a high-dimensional non-Gaussian self-organized critical (SOC) state to a low-dimensional, also non-Gaussian, chaotic state; (b) strong intermittent solar corona turbulence and an anomalous (multifractal) solar corona diffusion process, which is strengthened as the solar corona dynamics makes a phase transition to low-dimensional chaos; (c) faithful agreement of Tsallis non-equilibrium statistical theory with the experimental estimations of (i) the non-Gaussian probability distribution function P(x), (ii) f(a) and D(q), and (iii) J(p) for the solar flares time series and its underlying non-equilibrium solar dynamics; and (d) a solar flare dynamical profile similar to that of the solar corona zone as far as the phase-transition process from self-organized criticality (SOC) to chaos is concerned. However, the solar low corona (solar flare) dynamical characteristics can be clearly discriminated from those of the solar convection zone.
NASA Astrophysics Data System (ADS)
Yoon, Jin-Ho; Ruby Leung, L.; Correia, James, Jr.
2012-11-01
This study compares two approaches, dynamical and statistical downscaling, for their potential to improve regional seasonal forecasts for the United States (U.S.) during the cold season. In the MultiRCM Ensemble Downscaling (MRED) project, seven regional climate models (RCMs) are used to dynamically downscale the Climate Forecast System (CFS) seasonal prediction over the conterminous U.S. out to 5 months for the period 1982-2003. The simulations cover December to April of the following year, with 10 ensemble members from each RCM driven by initial and boundary conditions from the corresponding CFS ensemble members. These dynamically downscaled forecasts are compared with statistically downscaled forecasts produced by two bias-correction methods applied to both the CFS and RCM forecasts. Results of the comparison suggest that the RCMs add value in seasonal prediction applications, but the improvements largely depend on location, forecast lead time, variables, and the skill metrics used for evaluation. Generally, more improvement is found over the Northwest and North Central U.S. at shorter lead times. The results also suggest that a hybrid forecast system combining both dynamical and statistical downscaling methods has the potential to maximize prediction skill.
Development of a current collection loss management system for SDI homopolar power supplies
Brown, D.W.
1989-01-01
High speed, high power density current collection systems have been identified as an enabling technology required to construct homopolar power supplies to meet SDI missions. This work is part of a three-year effort directed towards the analysis, experimental verification, and prototype construction of a current collection system designed to operate continuously at 2 kA/cm{sup 2}, at a rubbing speed of 200 m/s, and with acceptable losses in a space environment. To date, no system has achieved these conditions simultaneously. This is the annual report covering the second year period of performance on DOE contract DE-AC03-86SF16518. Major areas covered include design, construction and operation of a cryogenically cooled brush test rig, design and construction of a high speed brush test rig, an optimization study for homopolar machines, loss analysis of the current collection system, and an application study which defines the air-core homopolar construction necessary to achieve the goal of 80--90 kW/kg generator power density. 17 figs., 2 tabs.
Development of a current collection loss management system for SDI homopolar power supplies
Hannan, W.F. III.
1987-01-01
High speed, high power density current collection systems have been identified as an enabling technology required to construct homopolar power supplies to meet SDI missions. This work is part of a three-year effort directed towards the analysis, experimental verification, and prototype construction of a current collection system designed to operate continuously at 2 kA/cm{sup 2}, at a rubbing speed of 200 m/s, and with acceptable losses in a space environment. To date, no system has achieved these conditions simultaneously. This is the annual report covering the first year period of performance on DOE contract DE-AC03-86SF16518. Major areas covered include design and construction of a cryogenically-cooled brush test rig, design of a high speed brush test rig, loss analysis of the current collection system, and an application study which defines the air-core homopolar construction necessary to achieve the goal of 80--90 kW/kg generator power density. 15 figs.
A review of gas-cooled reactor concepts for SDI (Strategic Defense Initiative) applications
Marshall, A.C.
1989-08-01
We have completed a review of multimegawatt gas-cooled reactor concepts proposed for SDI applications. Our study concluded that the principal reason for considering gas-cooled reactors for burst-mode operation was the potential for significant system mass savings over closed-cycle systems if open-cycle gas-cooled operation (effluent exhausted to space) is acceptable. The principal reason for considering gas-cooled reactors for steady-state operation is that they may represent a lower technology risk than other approaches. In the review, nine gas-cooled reactor concepts were compared to identify the most promising. For burst-mode operation, the NERVA (Nuclear Engine for Rocket Vehicle Application) derivative reactor concept emerged as a strong first choice since its performance exceeds the anticipated operational requirements and the technology has been demonstrated and is retrievable. Although the NERVA derivative concepts were determined to be the lead candidates for the Multimegawatt Steady-State (MMWSS) mode as well, their lead over the other candidates is not as great as for the burst mode. 90 refs., 2 figs., 10 tabs.
ERIC Educational Resources Information Center
Koparan, Timur
2016-01-01
In this study, the effect on the achievement and attitudes of prospective teachers is examined. To this end, an achievement test, an attitude scale for statistics, and interviews were used as data collection tools. The achievement test comprises 8 problems based on statistical data, and the attitude scale comprises 13 Likert-type items. The study…
NASA Astrophysics Data System (ADS)
Wen, Haohua; Woo, C. H.
2016-03-01
In conventional studies, the vibrational thermodynamics of phonons and magnons enters dynamic simulations of thermally activated atomic processes in crystalline materials within the framework of classical statistics. The neglect of quantum effects produces the wrong lattice and spin dynamics and erroneous activation characteristics, sometimes leading to incorrect results. In this paper, we consider the formation and migration of a mono-vacancy in BCC iron over a large temperature range from 10 K to 1400 K, across the ferro/paramagnetic phase boundary. Entropies and enthalpies of migration and formation are calculated using quantum heat baths based on a Bose-Einstein statistical description of thermal excitations in terms of phonons and magnons. Corrections due to the use of classical heat baths are evaluated and discussed.
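The difference between the quantum and classical heat baths can be seen in the mean energy of a single vibrational mode: Bose-Einstein statistics give E = ħω(n̄ + 1/2) with n̄ = 1/(e^{ħω/k_BT} − 1), versus the classical equipartition value k_BT. A sketch in natural units (ħ = k_B = 1):

```python
import math

def mode_energy_quantum(omega, T):
    """Mean energy of one vibrational mode under Bose-Einstein statistics,
    in natural units (hbar = kB = 1): E = omega * (n + 1/2),
    with occupation n = 1 / (exp(omega / T) - 1)."""
    n = 1.0 / math.expm1(omega / T)
    return omega * (n + 0.5)

def mode_energy_classical(T):
    """Classical equipartition energy of one mode: E = kB * T = T."""
    return T
```

At high temperature the quantum result approaches equipartition, while at low temperature it freezes out to the zero-point energy ω/2; the gap between the two curves is the origin of the classical-bath errors discussed above.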
Liang, Shiuan-Ni; Lan, Boon Leong
2012-01-01
The Newtonian and special-relativistic statistical predictions for the mean, standard deviation, and probability density function of the position and momentum are compared for the periodically delta-kicked particle at low speed. Contrary to expectation, we find that the statistical predictions, which are calculated from the same parameters and the same initial Gaussian ensemble of trajectories, do not always agree if the initial ensemble is sufficiently well localized in phase space. Moreover, the breakdown of agreement is very fast if the trajectories in the ensemble are chaotic, but very slow if they are non-chaotic. The breakdown of agreement implies that special-relativistic mechanics must be used, instead of the standard practice of using Newtonian mechanics, to correctly calculate the statistical predictions for the dynamics of a low-speed system. PMID:22606259
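The low-speed comparison can be sketched with one period of a delta-kicked map: an impulsive kick followed by free drift, with the drift velocity taken either as Newtonian (v = p/m) or special-relativistic. The kick strength and units below are illustrative, not the ensemble parameters of the study.

```python
import math

def kick_step(x, p, K, c=None):
    """One period of the delta-kicked particle (unit mass, unit period):
    impulsive kick p -> p + K*sin(x), then free drift for one period.
    Newtonian drift if c is None; special-relativistic drift otherwise.
    K and c are illustrative parameters."""
    p = p + K * math.sin(x)
    if c is None:
        v = p                                   # Newtonian: v = p / m
    else:
        v = p / math.sqrt(1.0 + (p / c) ** 2)   # relativistic velocity
    return x + v, p
```

Iterating both maps from the same Gaussian ensemble of (x, p) pairs and tracking the ensemble means reproduces the kind of Newtonian-versus-relativistic comparison described above.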
The SdiA-Regulated Gene srgE Encodes a Type III Secreted Effector
Habyarimana, Fabien; Sabag-Daigle, Anice
2014-01-01
Salmonella enterica serovar Typhimurium is a food-borne pathogen that causes severe gastroenteritis. The ability of Salmonella to cause disease depends on two type III secretion systems (T3SSs) encoded in two distinct Salmonella pathogenicity islands, 1 and 2 (SPI1 and SPI2, respectively). S. Typhimurium encodes a solo LuxR homolog, SdiA, which can detect the acyl-homoserine lactones (AHLs) produced by other bacteria and upregulate the rck operon and the srgE gene. SrgE is predicted to encode a protein of 488 residues with a coiled-coil domain between residues 345 and 382. In silico studies have provided conflicting predictions as to whether SrgE is a T3SS substrate. Therefore, in this work, we tested the hypothesis that SrgE is a T3SS effector by two methods, a β-lactamase activity assay and a split green fluorescent protein (GFP) complementation assay. SrgE with β-lactamase fused to residue 40, 100, 150, or 300 was indeed expressed and translocated into host cells, but SrgE with β-lactamase fused to residue 400 or 488 was not expressed, suggesting interference by the coiled-coil domain. Similarly, SrgE with GFP S11 fused to residue 300, but not to residue 488, was expressed and translocated into host cells. With both systems, translocation into host cells was dependent upon SPI2. A phylogenetic analysis indicated that srgE is found only within Salmonella enterica subspecies. It is found sporadically within both typhoidal and nontyphoidal serovars, although the SrgE protein sequences found within typhoidal serovars tend to cluster separately from those found in nontyphoidal serovars, suggesting functional diversification. PMID:24727228
Argonne CW Linac (ACWL)—legacy from SDI and opportunities for the future
NASA Astrophysics Data System (ADS)
McMichael, G. E.; Yule, T. J.
1995-09-01
The former Strategic Defense Initiative Organization (SDIO) invested significant resources over a 6-year period to develop and build an accelerator to demonstrate the launching of a cw beam with characteristics suitable for a space-based Neutral Particle Beam (NPB) system. This accelerator, the CWDD (Continuous Wave Deuterium Demonstrator) accelerator, was designed to accelerate 80 mA cw of D- to 7.5 MeV. A considerable amount of hardware was constructed and installed in the Argonne-based facility, and major performance milestones were achieved before program funding from the Department of Defense ended in October 1993. Existing assets have been turned over to Argonne. Assets include a fully functional 200 kV cw D- injector, a cw RFQ that has been tuned, leak checked and aligned, beam lines and a high-power beam stop, all installed in a shielded vault with appropriate safety and interlock systems. In addition, there are two high power (1 MW) cw rf amplifiers and all the ancillary power, cooling and control systems required for a high-power accelerator system. The SDI mission required that the CWDD accelerator structures operate at cryogenic temperatures (26K), a requirement that placed severe limitations on operating period (CWDD would have provided 20 seconds of cw beam every 90 minutes). However, the accelerator structures were designed for full-power rf operation with water cooling and ACWL (Argonne Continuous Wave Linac), the new name for CWDD in its water-cooled, positive-ion configuration, will be able to operate continuously. Project status and achievements will be reviewed. Preliminary design of a proton conversion for the RFQ, and other proposals for turning ACWL into a testbed for cw-linac engineering, will be discussed.
NASA Astrophysics Data System (ADS)
Perdigão, Rui A. P.; Blöschl, Günter
2015-04-01
Emerging Processes in Flood Regime Dynamics are evaluated on the basis of symmetry breaks in the spatiotemporal sensitivity of flood regimes to changes in annual precipitation and a new dynamical model of flood regime change under nonlinearly interacting landscape-climate dynamics. The spatiotemporal sensitivity analysis is performed at regional scale using data from 804 catchments in Austria from 1976 to 2008. Results show that flood peaks change in a more responsive manner with spatial (regional) than with temporal (decadal) variability. Space-wise, a 10% increase in precipitation leads to a 23% increase in flood peaks in Austria, whereas time-wise a 10% increase in precipitation leads to an increase of just 6% in flood peaks. Looking at hydroclimatic regions in particular, catchments from stable dry lowlands and high wetlands exhibit similarity between the spatial and temporal flood responses to changes in precipitation (spatiotemporal symmetry) and low landscape-climate codependence. This suggests that these regions are not coevolving significantly. However, intermediate regions show differences between those responses (symmetry breaks) and higher landscape-climate codependence, suggesting that coevolution is under way. The break of symmetry is an emergent behaviour of the coupled system, stemming from the nonlinear interactions in the coevolving hydroclimate system. A dynamic coevolution index is then proposed relating spatiotemporal symmetry with relative characteristic celerities, which need to be taken into account in hydrological space-time trading. Coevolution is expressed here by the scale interaction between slow and fast dynamics, represented respectively by spatial and temporal characteristics. 
The diagnostic assessment of coevolution is complemented by a stylised nonlinear dynamical model of landscape-climate coevolution, in which landform evolution processes take place at the millennial scale (slow dynamics), and climate adjusts in years to decades (fast dynamics). Coevolution is expressed by the interplay between slow and fast dynamics, represented, respectively, by spatial and temporal characteristics of the hydroclimate system. The model captures key features of the joint landscape-climate distribution and associated flood regime changes, supporting the diagnostic assessment. This paper ultimately brings to light signatures of emergence in flood regime dynamics that arise from the nonlinear coupling of the landscape-climate system at slow and fast time scales. The present work builds on Perdigão and Blöschl (2014). Perdigão, R. A. P., and G. Blöschl (2014), Spatiotemporal flood sensitivity to annual precipitation: Evidence for landscape-climate coevolution, Water Resour. Res., 50, doi:10.1002/2014WR015365.
NASA Astrophysics Data System (ADS)
Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Mike; Gershunov, Alexander; Gutowski, William J.; Gyakum, John R.; Katz, Richard W.; Lee, Yun-Young; Lim, Young-Kwon; Prabhat
2016-02-01
The objective of this paper is to review statistical methods, dynamics, modeling efforts, and trends related to temperature extremes, with a focus upon extreme events of short duration that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). The statistics, dynamics, and modeling sections of this paper are written to be autonomous and so can be read separately. Methods to define extreme event statistics and to identify and connect LSMPs to extreme temperature events are presented. Recent advances in statistical techniques connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. Various LSMPs, ranging from synoptic to planetary scale structures, are associated with extreme temperature events. Current knowledge about the synoptics and the dynamical mechanisms leading to the associated LSMPs is incomplete. Systematic studies of the physics of LSMP life cycles, comprehensive model assessment of LSMP-extreme temperature event linkages, and LSMP properties are needed. Generally, climate models capture observed properties of heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency and underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Modeling studies have identified the impact of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs to more specifically understand the role of LSMPs in past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated. The paper concludes with unresolved issues and research questions.
NASA Astrophysics Data System (ADS)
Lode, Axel U. J.; Chakrabarti, Barnali; Kota, Venkata K. B.
2015-09-01
We study the quantum many-body dynamics and the entropy production triggered by an interaction quench in a system of N = 10 interacting identical bosons in an external one-dimensional harmonic trap. The multiconfigurational time-dependent Hartree method for bosons (MCTDHB) is used for solving the time-dependent Schrödinger equation at a high level of accuracy. We consider many-body entropy measures such as the Shannon information entropy, number of principal components, and occupation entropy that are computed from the time-dependent many-body basis set used in MCTDHB. These measures quantify relevant physical features such as irregular or chaotic dynamics, statistical relaxation, and thermalization. We monitor the entropy measures as a function of time and assess how they depend on the interaction strength. For larger interaction strengths, the many-body information and occupation entropies approach the value predicted for the Gaussian orthogonal ensemble of random matrices. This implies statistical relaxation. The basis states of MCTDHB are explicitly time-dependent and optimized by the variational principle in a way that minimizes the number of significantly contributing ones. It is therefore a nontrivial fact that statistical relaxation prevails in MCTDHB computations. Moreover, we demonstrate a fundamental connection between the production of entropy, the buildup of correlations, and the loss of coherence in the system. Our findings imply that mean-field approaches such as the time-dependent Gross-Pitaevskii equation cannot capture statistical relaxation and thermalization because they neglect correlations. Since coherence and correlations are experimentally accessible, their present connection to many-body entropies can be scrutinized to detect statistical relaxation. In this work we use the recent recursive software implementation of the MCTDHB (R-MCTDHB).
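The occupation-entropy measure mentioned above has a simple Shannon form over normalized natural occupations. A minimal sketch (the occupation numbers below are invented for illustration, not MCTDHB output):

```python
import math

def occupation_entropy(occupations):
    """Shannon entropy -sum p ln p of normalized occupation numbers."""
    total = sum(occupations)
    probs = [n / total for n in occupations if n > 0]
    return -sum(p * math.log(p) for p in probs)

# condensed state: one orbital dominates -> entropy near zero
s_condensed = occupation_entropy([9.99, 0.01] + [0.0] * 8)
# fully fragmented over M = 10 orbitals -> maximal entropy ln(10)
s_uniform = occupation_entropy([1.0] * 10)
```

Growth of this quantity after a quench is the kind of signal the abstract uses to diagnose statistical relaxation.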
NASA Astrophysics Data System (ADS)
Frossard, L.; Rieder, H. E.; Ribatet, M.; Staehelin, J.; Maeder, J. A.; Di Rocco, S.; Davison, A. C.; Peter, T.
2012-05-01
We use models for mean and extreme values of total column ozone on spatial scales to analyze "fingerprints" of atmospheric dynamics and chemistry on long-term ozone changes at northern and southern mid-latitudes. The r-largest order statistics method is used for pointwise analysis of extreme events in low and high total ozone (termed ELOs and EHOs, respectively). For the corresponding mean value analysis a pointwise autoregressive moving average (ARMA) model is used. The statistical models include important atmospheric covariates to describe the dynamical and chemical state of the atmosphere: the solar cycle, the Quasi-Biennial Oscillation (QBO), ozone depleting substances (ODS) in terms of equivalent effective stratospheric chlorine (EESC), the North Atlantic Oscillation (NAO), the Antarctic Oscillation (AAO), the El Niño/Southern Oscillation (ENSO), and aerosol load after the volcanic eruptions of El Chichón and Mt. Pinatubo. The influence of the individual covariates on mean and extreme levels in total column ozone is derived on a grid cell basis. The results show that "fingerprints", i.e., significant influence, of dynamical and chemical features are captured in both the "bulk" and the tails of the ozone distribution, respectively described by means and EHOs/ELOs. While results for the solar cycle, QBO and EESC are in good agreement with findings of earlier studies, unprecedented spatial fingerprints are retrieved for the dynamical covariates.
NASA Astrophysics Data System (ADS)
Li, W.; Ma, Q.; Thorne, R. M.; Bortnik, J.; Kletzing, C. A.; Kurth, W. S.; Hospodarsky, G. B.; Nishimura, Y.
2015-05-01
Plasmaspheric hiss is known to play an important role in controlling the overall structure and dynamics of radiation belt electrons inside the plasmasphere. Using newly available Van Allen Probes wave data, which provide excellent coverage in the entire inner magnetosphere, we evaluate the global distribution of the hiss wave frequency spectrum and wave intensity for different levels of substorm activity. Our statistical results show that observed hiss peak frequencies are generally lower than the commonly adopted value (~550 Hz), and that the hiss wave power frequently extends below 100 Hz, particularly at larger L shells (> ~3) on the dayside during enhanced levels of substorm activity. We also compare electron pitch angle scattering rates caused by hiss using the new statistical frequency spectrum and the previously adopted Gaussian spectrum and find that the differences are up to a factor of ~5 and are dependent on energy and L shell. Moreover, the new statistical hiss wave frequency spectrum including wave power below 100 Hz leads to increased pitch angle scattering rates by a factor of ~1.5 for electrons above ~100 keV at L~5, although the effect is negligible at L ≤ 3. Consequently, we suggest that the new realistic hiss wave frequency spectrum should be incorporated into future modeling of radiation belt electron dynamics.
NASA Astrophysics Data System (ADS)
Tang, Jianping; Niu, Xiaorui; Wang, Shuyu; Gao, Hongxia; Wang, Xueyuan; Wu, Jian
2016-03-01
Statistical downscaling and dynamical downscaling are two approaches to generating high-resolution regional climate information from the large-scale output of either reanalysis data or global climate models. In this study, these two downscaling methods are used to simulate the surface climate of China and compared. The Statistical Downscaling Model (SDSM) is cross-validated and used to downscale the regional climate of China. Then, the downscaled historical climate of 1981-2000 and future climate of 2041-2060 are compared with those from the Weather Research and Forecasting (WRF) model driven by the European Center-Hamburg atmosphere model and the Max Planck Institute Ocean Model (ECHAM5/MPI-OM) and the L'Institut Pierre-Simon Laplace Coupled Model, version 5, coupled with the Nucleus for European Modelling of the Ocean, low resolution (IPSL-CM5A-LR). The SDSM can reproduce the surface temperature characteristics of the present climate in China, whereas the WRF tends to underestimate the surface temperature over most of China. Both the SDSM and WRF require further work to improve their ability to downscale precipitation. Both statistical and dynamical downscaling methods produce future surface temperatures for 2041-2060 that are markedly different from the historical climatology. However, the changes in projected precipitation differ between the two downscaling methods. Indeed, large uncertainties remain in terms of the direction and magnitude of future precipitation changes over China.
Static Numbers to Dynamic Statistics: Designing a Policy-Friendly Social Policy Indicator Framework
ERIC Educational Resources Information Center
Ahn, Sang-Hoon; Choi, Young Jun; Kim, Young-Mi
2012-01-01
In line with the economic crisis and rapid socio-demographic changes, the interest in "social" and "well-being" indicators has been revived. Social indicator movements of the 1960s resulted in the establishment of social indicator statistical frameworks; that legacy has remained intact in many national governments and international organisations.…
NASA Astrophysics Data System (ADS)
De Bacco, Caterina; Guggiola, Alberto; Kühn, Reimer; Paga, Pierre
2016-05-01
Rare event statistics for random walks on complex networks are investigated using the large deviation formalism. Within this formalism, rare events are realised as typical events in a suitably deformed path-ensemble, and their statistics can be studied in terms of spectral properties of a deformed Markov transition matrix. We observe two different types of phase transition in such systems: (i) rare events which are singled out for sufficiently large values of the deformation parameter may correspond to localised modes of the deformed transition matrix; (ii) ‘mode-switching transitions’ may occur as the deformation parameter is varied. Details depend on the nature of the observable for which the rare event statistics is studied, as well as on the underlying graph ensemble. In the present paper we report results on rare event statistics for path averages of random walks in Erdős–Rényi and scale-free networks. Large deviation rate functions and localisation properties are studied numerically. For observables of the type considered here, we also derive an analytical approximation for the Legendre transform of the large deviation rate function, which is valid in the large connectivity limit. It is found to agree well with simulations.
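The deformed-matrix construction can be illustrated in a few lines. This sketch (not the authors' code) uses an unbiased walk on a 3-cycle with a node-value observable; the scaled cumulant generating function λ(s) is the log of the leading eigenvalue of the tilted matrix P_xy e^{s f(y)}, found here by power iteration:

```python
import math

def scgf(P, f, s, iters=2000):
    """Scaled cumulant generating function lambda(s) for path averages of f:
    log of the largest eigenvalue of the tilted matrix P_xy * exp(s f(y))."""
    n = len(P)
    Pt = [[P[x][y] * math.exp(s * f[y]) for y in range(n)] for x in range(n)]
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):              # power iteration (Perron eigenvalue)
        w = [sum(Pt[x][y] * v[y] for y in range(n)) for x in range(n)]
        lam = max(w)
        v = [wx / lam for wx in w]
    return math.log(lam)

# unbiased random walk on a 3-cycle; observable f = value at visited node
P = [[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]]
f = [0.0, 1.0, 2.0]
```

The large deviation rate function would then follow by a numerical Legendre transform of λ(s); λ(0) = 0 because the undeformed matrix is stochastic.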
The dynamics of patient visits to a public hospital ED: a statistical model.
Rotstein, Z; Wilf-Miron, R; Lavi, B; Shahar, A; Gabbay, U; Noy, S
1997-10-01
Using a public hospital's computerized database, we formulated a statistical model to explain emergency department (ED) patient volume for better staffing and resource allocation. All patients visiting the ED over a 3-year period were included in this retrospective study. Each observation described the total daily number of referrals and was defined by the following variables: day of the week, month of the year, holiday/weekday, relative order in a 3-year sequence, and number of visits to the ED on that day. The statistical method used to build the model was analysis of covariance. Periodicity in average number of daily visits existed for each of the seasonal factors that were examined, repeating every year during the study period. Based on a graphic analysis, the model was defined and explained 65% of the variance during the 3-year study, with a relatively low standard deviation of error. A statistically significant correlation existed between time-related factors and the number of visits to the ED. This statistical model may prove to be of value for planning emergency services, which operate under stressful, unpredictable situations. PMID:9337370
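The factor-plus-trend structure of such a model can be sketched as follows. This is not a full analysis of covariance, just a balanced-design shortcut; the weekday effects, trend slope, and visit counts are synthetic, not the hospital's data (noise is omitted to keep the sketch deterministic):

```python
# synthetic 52 weeks of daily ED visits: grand mean + day-of-week effect
# + slow linear trend
true_effect = [-2.0, -1.0, 0.0, 1.0, 2.0, 1.0, -1.0]   # Mon..Sun, sums to 0
visits = [100.0 + true_effect[t % 7] + 0.01 * t for t in range(7 * 52)]

n = len(visits)
t_mean = (n - 1) / 2.0
y_mean = sum(visits) / n
# least-squares trend; with a balanced design the day-of-week factor is
# nearly orthogonal to the trend, so a one-shot fit is adequate here
beta = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(visits)) \
     / sum((t - t_mean) ** 2 for t in range(n))
resid = [y - beta * t for t, y in enumerate(visits)]
r_mean = sum(resid) / n
# day-of-week effects: mean detrended residual per weekday, centered
effect = [sum(resid[t] for t in range(n) if t % 7 == d) / 52 - r_mean
          for d in range(7)]
```

A real ANCOVA would fit all factors (month, holiday, trend) jointly and report the explained variance, as the paper does.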
NASA Astrophysics Data System (ADS)
Rosa, Bogdan; Parishani, Hossein; Ayala, Orlando; Wang, Lian-Ping; Grabowski, Wojciech W.
2011-12-01
In recent years, the direct numerical simulation (DNS) approach has become a reliable tool for studying turbulent collision-coalescence of cloud droplets relevant to warm rain development. It has been shown that small-scale turbulent motion can enhance the collision rate of droplets by either enhancing the relative velocity and collision efficiency or by inertia-induced droplet clustering. A hybrid DNS approach incorporating DNS of air turbulence, disturbance flows due to droplets, and droplet equation of motion has been developed to quantify these effects of air turbulence. Due to the computational complexity of the approach, a major challenge is to increase the range of scales or size of the computation domain so that all scales affecting droplet pair statistics are simulated. Here we discuss our on-going work in this direction by improving the parallel scalability of the code, and by studying the effect of large-scale forcing on pair statistics relevant to turbulent collision. New results at higher grid resolutions show a saturation of pair and collision statistics with increasing flow Reynolds number, for given Kolmogorov scales and small droplet sizes. Furthermore, we examine the orientation dependence of pair statistics which reflects an interesting coupling of gravity and droplet clustering.
Notaro, Michael; Wang, Yi; Liu, Zhengyu; Gallimore, Robert; Levis, Samuel
2008-01-05
A negative feedback of vegetation cover on subsequent annual precipitation is simulated for the mid-Holocene over North Africa using a fully coupled general circulation model with dynamic vegetation, FOAM-LPJ (Fast Ocean Atmosphere Model-Lund Potsdam Jena Model). By computing a vegetation feedback parameter based on lagged autocovariances, the simulated impact of North African vegetation on precipitation is statistically quantified. The feedback is also dynamically assessed through initial value ensemble experiments, in which North African grass cover is initially reduced and the climatic response analyzed. The statistical and dynamical assessments of the negative vegetation feedback agree in sign and relative magnitude for FOAM-LPJ. The negative feedback on annual precipitation largely results from a competition between bare soil evaporation and plant transpiration, with increases in the former outweighing reductions in the latter given reduced grass cover. This negative feedback weakens and eventually reverses sign over time during a transient simulation from the mid-Holocene to present. A similar, but weaker, negative feedback is identified in Community Climate System Model Version 2 (CCSM2) over North Africa for the mid-Holocene.
NASA Astrophysics Data System (ADS)
Burkholder, Michael B.; Litster, Shawn
2016-05-01
In this study, we analyze the stability of two-phase flow regimes and their transitions using chaotic and fractal statistics, and we report new measurements of dynamic two-phase pressure drop hysteresis that is related to flow regime stability and channel water content. Two-phase flow dynamics are relevant to a variety of real-world systems, and quantifying transient two-phase flow phenomena is important for efficient design. We recorded two-phase (air and water) pressure drops and flow images in a microchannel under both steady and transient conditions. Using Lyapunov exponents and Hurst exponents to characterize the steady-state pressure fluctuations, we develop a new, measurable regime identification criteria based on the dynamic stability of the two-phase pressure signal. We also applied a new experimental technique by continuously cycling the air flow rate to study dynamic hysteresis in two-phase pressure drops, which is separate from steady-state hysteresis and can be used to understand two-phase flow development time scales. Using recorded images of the two-phase flow, we show that the capacitive dynamic hysteresis is related to channel water content and flow regime stability. The mixed-wettability microchannel and in-channel water introduction used in this study simulate a polymer electrolyte fuel cell cathode air flow channel.
Muir, Ryan D.; Kissick, David J.; Simpson, Garth J.
2012-01-01
Data from photomultiplier tubes are typically analyzed using either counting or averaging techniques, which are most accurate in the dim and bright signal limits, respectively. A statistical means of adjoining these two techniques is presented by recovering the Poisson parameter from averaged data and relating it to the statistics of binomial counting from Kissick et al. [Anal. Chem. 82, 10129 (2010)]. The point at which binomial photon counting and averaging have equal signal-to-noise ratios is derived. Adjoining these two techniques yields signal-to-noise ratios ranging from 87% to nearly 100% of the theoretical maximum across the full dynamic range of the photomultiplier tube used. The technique is demonstrated in a second harmonic generation microscope. PMID:22535131
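The bridge between the two techniques rests on the Poisson relation p = 1 - exp(-λ) between the per-shot probability of registering at least one photon and the mean photon number λ. A sketch with simulated shots (λ and the shot count are illustrative values, not the paper's):

```python
import math, random

random.seed(1)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the small rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lam_true = 0.8          # mean photons per laser shot (hypothetical)
shots = [poisson(lam_true) for _ in range(200000)]

# thresholded (binomial) counting: each shot registers 0 or >=1 photons,
# so the hit fraction estimates p = 1 - exp(-lambda)
hit_frac = sum(1 for s in shots if s >= 1) / len(shots)
lam_counted = -math.log(1.0 - hit_frac)

# averaging: the mean of the record estimates lambda directly
lam_averaged = sum(shots) / len(shots)
```

Counting saturates as λ grows (p → 1), which is why averaging takes over in the bright limit.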
NASA Astrophysics Data System (ADS)
Frossard, L.; Rieder, H. E.; Ribatet, M.; Staehelin, J.; Maeder, J. A.; Di Rocco, S.; Davison, A. C.; Peter, T.
2013-01-01
We use statistical models for mean and extreme values of total column ozone to analyze "fingerprints" of atmospheric dynamics and chemistry on long-term ozone changes at northern and southern mid-latitudes on grid cell basis. At each grid cell, the r-largest order statistics method is used for the analysis of extreme events in low and high total ozone (termed ELOs and EHOs, respectively), and an autoregressive moving average (ARMA) model is used for the corresponding mean value analysis. In order to describe the dynamical and chemical state of the atmosphere, the statistical models include important atmospheric covariates: the solar cycle, the Quasi-Biennial Oscillation (QBO), ozone depleting substances (ODS) in terms of equivalent effective stratospheric chlorine (EESC), the North Atlantic Oscillation (NAO), the Antarctic Oscillation (AAO), the El Niño/Southern Oscillation (ENSO), and aerosol load after the volcanic eruptions of El Chichón and Mt. Pinatubo. The influence of the individual covariates on mean and extreme levels in total column ozone is derived on a grid cell basis. The results show that "fingerprints", i.e., significant influence, of dynamical and chemical features are captured in both the "bulk" and the tails of the statistical distribution of ozone, respectively described by mean values and EHOs/ELOs. While results for the solar cycle, QBO, and EESC are in good agreement with findings of earlier studies, unprecedented spatial fingerprints are retrieved for the dynamical covariates. Column ozone is enhanced over Labrador/Greenland, the North Atlantic sector and over the Norwegian Sea, but is reduced over Europe, Russia and the Eastern United States during the positive NAO phase, and vice-versa during the negative phase. The NAO's southern counterpart, the AAO, strongly influences column ozone at lower southern mid-latitudes, including the southern parts of South America and the Antarctic Peninsula, and the central southern mid-latitudes. 
Results for both NAO and AAO confirm the importance of atmospheric dynamics for ozone variability and changes from local/regional to global scales.
NASA Astrophysics Data System (ADS)
Koukas, Ioannis; Koukoravas, Vasilis; Mantesi, Konstantina; Sakellari, Katerina; Xanthopoulou, Themis-Demetra; Zarkadoulas, Akis; Markonis, Yannis; Papalexiou, Simon Michael; Koutsoyiannis, Demetris
2014-05-01
The statistical properties of over 300 different proxy records of the last two thousand years derived from the PAGES 2k database are stochastically analysed. Analyses include estimation of their first four moments and their autocorrelation functions (ACF), as well as the determination of the presence of Hurst-Kolmogorov behaviour (also known as long-term persistence). The data are investigated in groups according to their proxy type and location, while their statistical properties are also compared to those of the final temperature reconstructions. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
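Hurst-Kolmogorov behaviour is commonly screened for by estimating the Hurst exponent, for instance from the rescaled-range (R/S) statistic. A sketch on synthetic white noise, for which the exponent should be near 0.5 (the series and window sizes are illustrative, not the PAGES 2k data):

```python
import math, random

def hurst_rs(series, window_sizes):
    """Estimate the Hurst exponent as the slope of log(R/S) vs log(window)."""
    pts = []
    for w in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - w + 1, w):
            chunk = series[start:start + w]
            m = sum(chunk) / w
            dev, cum = [], 0.0
            for x in chunk:                 # cumulative deviations from mean
                cum += x - m
                dev.append(cum)
            r = max(dev) - min(dev)         # range of cumulative deviations
            s = math.sqrt(sum((x - m) ** 2 for x in chunk) / w)
            if s > 0:
                rs_vals.append(r / s)
        pts.append((math.log(w), math.log(sum(rs_vals) / len(rs_vals))))
    # least-squares slope through the (log w, log R/S) points
    k = len(pts)
    mx = sum(x for x, _ in pts) / k
    my = sum(y for _, y in pts) / k
    return sum((x - mx) * (y - my) for x, y in pts) / \
           sum((x - mx) ** 2 for x, _ in pts)

random.seed(42)
white = [random.gauss(0.0, 1.0) for _ in range(4096)]
h = hurst_rs(white, [16, 32, 64, 128, 256])
```

The raw R/S estimator is biased high for short windows (Anis-Lloyd corrections exist); persistent proxy records would give slopes well above 0.5.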
Choi, Ok Ran; Lim, In Kyoung
2011-04-08
Highlights: Reduced p21 expression in senescent cells treated with DNA damaging agents. Increased [³H]thymidine and BrdU incorporation in DNA-damaged senescent cells. Upregulation of miR-93 expression in senescent cells in response to DSB. Failure of p53 binding to the p21 promoter in senescent cells in response to DSB. Molecular mechanism of increased cancer development in aged versus young individuals. -- Abstract: To answer what is the critical event behind the higher incidence of tumor development in old than in young individuals, primary cultures of human diploid fibroblasts were employed and DNA damage was induced by doxorubicin or X-ray irradiation. The response to damage differed between young and old cells: old cells lost p21^Sdi1 expression in spite of p53^S15 activation and showed [³H]thymidine and BrdU incorporation, whereas young cells did not. The phenomenon was confirmed with other tissue fibroblasts obtained from donors of different ages. Induction of miR-93 expression and reduced p53 binding to the p21 gene promoter account for the loss of p21^Sdi1 expression in senescent cells after DNA damage, suggesting a mechanism of in vivo carcinogenesis in aged tissue without repair arrest.
NASA Astrophysics Data System (ADS)
Hong, Mei; Zhang, Ren; Wang, Dong; Feng, Mang; Wang, Zhengxin; Singh, Vijay P.
2015-07-01
To address the inaccuracy of long-term El Niño-Southern Oscillation (ENSO) forecasts, a new dynamical-statistical forecasting model of the ENSO index was developed based on dynamical model reconstruction and improved self-memorization. To overcome the problem of single initial prediction values, the largest Lyapunov exponent was introduced to improve the traditional self-memorization function, thereby making it more effective for describing chaotic systems, such as ENSO. Equation reconstruction, based on actual data, was used as a dynamical core to overcome the problem of using a simple core. The developed dynamical-statistical forecasting model of the ENSO index is used to predict the sea surface temperature anomaly in the equatorial eastern Pacific and El Niño/La Niña events. The real-time predictive skills of the improved model were tested. The results show that our model predicts well within lead times of 12 months. Compared with six mature models, both the temporal correlation and root mean square error of the improved model are slightly worse than those of the European Centre for Medium-Range Weather Forecasts model, but better than those of the other five models. Additionally, the gap between forecast skill in summer and in winter is small, which means that the improved model can, to some extent, overcome the "spring predictability barrier". Finally, a real-time prediction experiment was carried out beginning in September 2014. Our model is a new exploration of the ENSO forecasting method.
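The largest Lyapunov exponent invoked above is, for a one-dimensional map, the orbit average of ln|f'(x)|. A sketch on the logistic map, a stand-in chaotic system (not the ENSO model), where the analytic value at r = 4 is ln 2:

```python
import math

def lyapunov_logistic(r, x0=0.3, n=100000, burn=1000):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated as the long-run average of ln|f'(x)| along the orbit."""
    x = x0
    for _ in range(burn):                 # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))   # ln|f'(x)|
        x = r * x * (1.0 - x)
    return acc / n

lam = lyapunov_logistic(4.0)   # analytic value is ln 2 for r = 4
```

A positive exponent quantifies how fast nearby initial conditions diverge, which is what limits single-initial-value ENSO predictions.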
2010-01-01
Chemical communication mediates signaling between cells. Bacteria also engage in chemical signaling, termed quorum sensing (QS), to coordinate population-wide behavior. The bacterial pathogen enterohemorrhagic E. coli (EHEC), responsible for outbreaks of bloody diarrhea worldwide, exploits QS to promote expression of virulence factors in humans. Although EHEC is a human pathogen, it is a member of the gastrointestinal (GI) flora in cattle, the main reservoir for this bacterium. EHEC cattle colonization depends on SdiA, a QS transcription factor that requires acyl-homoserine lactones (AHLs) for proper folding and function. EHEC harbors SdiA but does not produce AHLs, and consequently has to sense AHLs produced by other bacterial species. We recently showed that SdiA is necessary for efficient EHEC passage through the bovine GI tract, and that AHLs are prominent within the cattle rumen but absent from the other sections of the GI tract. EHEC utilizes the locus of enterocyte effacement (LEE) to colonize the recto-anal junction of cattle, and the glutamate decarboxylase (gad) system to colonize cows. Transcription of the LEE genes is decreased by rumen AHLs through SdiA, while transcription of the gad acid resistance system is increased. It would be expensive for EHEC to express the LEE genes in the rumen, where they are not necessary; however, in preparation for the acidic distal stomachs, the gad system is activated in the rumen. Hence AHL signaling through SdiA aids EHEC in gauging these environments and modulates gene expression towards adaptation to a commensal lifestyle in cattle. Inasmuch as EHEC is largely prevalent in cattle herds, interference with SdiA-mediated QS inhibition of cattle colonization could be an attractive approach to diminish contamination of food products due to cattle shedding of this pathogen. PMID:21468228
Statistical structuring theory in parametrically excitable dynamical systems with a Gaussian pump
NASA Astrophysics Data System (ADS)
Klyatskin, V. I.; Koshel, K. V.
2016-03-01
Based on the idea of statistical topography, we analyze the problem of emergence of stochastic structure formation in linear and quasilinear problems described by first-order partial differential equations. The appearance of parametric excitation on the background of a Gaussian pump is a specific feature of these problems. We obtain equations for the probability density of the solutions of these equations, whence it follows that stochastic structure formation emerges with probability one, i.e., for almost every realization of the random parameters of the medium.
Ohnuki, Shinsuke; Enomoto, Kenichi; Yoshimoto, Hiroyuki; Ohya, Yoshikazu
2014-03-01
The vitality of brewing yeasts has been used to monitor their physiological state during fermentation. To investigate the fermentation process, we used the image processing software, CalMorph, which generates morphological data on yeast mother cells and bud shape, nuclear shape and location, and actin distribution. We found that 248 parameters changed significantly during fermentation. Successive use of principal component analysis (PCA) revealed several important features of yeast, providing insight into the dynamic changes in the yeast population. First, PCA indicated that much of the observed variability in the experiment was summarized in just two components: a change with a peak and a change over time. Second, PCA indicated the independent and important morphological features responsible for dynamic changes: budding ratio, nucleus position, neck position, and actin organization. Thus, the large amount of data provided by imaging analysis can be used to monitor the fermentation processes involved in beer and bioethanol production. PMID:24012106
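The successive-PCA step described above can be sketched with a hand-rolled power iteration for the leading component. The two-parameter synthetic data below stand in for the 248 morphological parameters (this is an illustration, not CalMorph output):

```python
import random

random.seed(7)

def first_pc(data):
    """Leading principal component of mean-centered data via power
    iteration on the sample covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(500):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

# synthetic "morphological parameters": strong variation along parameter 0
data = [[random.gauss(0.0, 3.0), random.gauss(0.0, 0.3)] for _ in range(400)]
pc1 = first_pc(data)
```

Repeating the iteration on data deflated by each found component yields the successive components; in practice a library eigensolver (or scikit-learn's PCA) would be used.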
Dynamics and Statistical Mechanics of Rotating and non-Rotating Vortical Flows
Lim, Chjan
2013-12-18
Three projects were analyzed with the overall aim of developing a computational/analytical model for estimating values of the energy, angular momentum, enstrophy and total variation of fluid height at phase transitions between disordered and self-organized flow states in planetary atmospheres. It is believed that these transitions in equilibrium statistical mechanics models play a role in the construction of large-scale, stable structures, including super-rotation in the Venusian atmosphere and the formation of the Great Red Spot on Jupiter. Exact solutions of the spherical energy-enstrophy models for rotating planetary atmospheres by Kac's method of steepest descent predicted phase transitions to super-rotating solid-body flows at high energy-to-enstrophy ratio for all planetary spins, and to sub-rotating modes if the planetary spin is large enough. These canonical statistical ensembles are well defined for the long-range energy interactions that arise from 2D fluid flows on compact oriented manifolds such as the surface of the sphere and the torus. This is because, in the Fourier space made available through Hodge theory, the energy terms are exactly diagonalizable and hence have zero range, leading to well-defined heat baths.
Bahlmann, Claus; Burkhardt, Hans
2004-03-01
In this paper, we give a comprehensive description of our writer-independent online handwriting recognition system frog on hand. The focus of this work concerns the presentation of the classification/training approach, which we call cluster generative statistical dynamic time warping (CSDTW). CSDTW is a general, scalable, HMM-based method for variable-sized, sequential data that holistically combines cluster analysis and statistical sequence modeling. It can handle general classification problems that rely on this sequential type of data, e.g., speech recognition, genome processing, robotics, etc. Contrary to previous attempts, clustering and statistical sequence modeling are embedded in a single feature space and use a closely related distance measure. We show character recognition experiments of frog on hand using CSDTW on the UNIPEN online handwriting database. The recognition accuracy is significantly higher than reported results of other handwriting recognition systems. Finally, we describe the real-time implementation of frog on hand on a Linux Compaq iPAQ embedded device. PMID:15376878
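The dynamic-programming core shared by all DTW-based recognizers can be sketched in a few lines. This is only the classical DTW distance on one-dimensional sequences, shown for illustration; it is not the authors' CSDTW method, which embeds clustering and statistical sequence modeling in a common feature space.

```python
def dtw_distance(a, b):
    """Classical dynamic time warping distance between two sequences
    of numbers, via the standard O(len(a)*len(b)) dynamic program."""
    INF = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = best cumulative cost aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local distance
            cost[i][j] = d + min(cost[i - 1][j],       # insertion
                                 cost[i][j - 1],       # deletion
                                 cost[i - 1][j - 1])   # match
    return cost[n][m]
```

Unlike the Euclidean distance, DTW tolerates local time shifts: a sequence compared with a time-warped copy of itself still scores zero.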
Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael; Gershunov, Alexander; Gutowski, Jr., William J.; Gyakum, John R.; Katz, Richard W.; Lee, Yun-Young; Lim, Young-Kwon; Prabhat
2015-05-22
This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is upon extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large-scale meteorological patterns (LSMPs). Methods used to define extreme event statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency and underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms such as the effects of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs more specifically to understand the role of LSMPs on past and future extreme temperature changes. 
Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated, so more research is needed to understand the limitations of climate models and improve model skill in simulating extreme temperatures and their associated LSMPs. The paper concludes with unresolved issues and research questions.
Double precision errors in the logistic map: Statistical study and dynamical interpretation
NASA Astrophysics Data System (ADS)
Oteo, J. A.; Ros, J.
2007-09-01
The nature of the round-off errors that occur in the usual double precision computation of the logistic map is studied in detail. Different iterative regimes from the whole panoply of behaviors exhibited in the bifurcation diagram are examined, histograms of errors in trajectories are given, and for the case of fully developed chaos an explicit formula is found. It is shown that the statistics of the largest double precision error as a function of the map parameter is characterized by jumps whose location is determined by certain boundary crossings in the bifurcation diagram. Both jumps and locations seem to present geometric convergence characterized by the first two Feigenbaum constants. Finally, a comparison with Benford’s law for the distribution of leading digits in compilations of numbers is discussed.
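A minimal way to observe such round-off effects is to iterate the map in ordinary binary64 floats alongside a high-precision decimal reference started from the same nominal seed. This sketch is illustrative only (it is not the authors' code, and the parameter choices r = 4, x0 = 0.1 are assumptions); at r = 4 the chaotic dynamics amplifies the tiny representation and rounding discrepancies by roughly a factor 2 per step until the two trajectories decorrelate.

```python
from decimal import Decimal, getcontext

def logistic_error(r, x0, n, digits=60):
    """Iterate x -> r*x*(1-x) in float (binary64) arithmetic and, in
    parallel, in `digits`-digit decimal arithmetic as a quasi-exact
    reference; return the absolute difference after n steps."""
    getcontext().prec = digits
    xf = x0                       # double precision trajectory
    xd = Decimal(repr(x0))        # high-precision trajectory
    rd = Decimal(repr(r))
    for _ in range(n):
        xf = r * xf * (1.0 - xf)
        xd = rd * xd * (Decimal(1) - xd)
    return abs(Decimal(repr(xf)) - xd)

# Fully developed chaos (r = 4): the error is still tiny after 10 steps
# but has grown by many orders of magnitude after 60.
err_10 = logistic_error(4.0, 0.1, 10)
err_60 = logistic_error(4.0, 0.1, 60)
```

Histogramming such errors over many seeds and map parameters reproduces the kind of statistics the abstract describes.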
Dynamics and statistics of noise-like pulses in modelocked lasers
NASA Astrophysics Data System (ADS)
Donovan, Graham M.
2015-08-01
Noise-like pulses and optical rogue waves are connected nonlinear phenomena which can occur in passively modelocked laser systems. Here we consider a range of model systems to explore the conditions under which noise-like pulses can be expected to occur, and further when the resulting statistics meet the optical rogue wave criteria. We show, via a series of careful simulations, that noise-like pulses and optical rogue waves can arise either separately or together, and that they may emerge from standard soliton-like solutions via different mechanisms. We also propose a quantitative definition of noise-like pulses, and carefully explore the issues involved in convergence testing of numerical methods for such systems.
DYNAMIC STABILITY OF THE SOLAR SYSTEM: STATISTICALLY INCONCLUSIVE RESULTS FROM ENSEMBLE INTEGRATIONS
Zeebe, Richard E.
2015-01-01
Due to the chaotic nature of the solar system, the question of its long-term stability can only be answered in a statistical sense, for instance, based on numerical ensemble integrations of nearby orbits. Destabilization of the inner planets, leading to close encounters and/or collisions can be initiated through a large increase in Mercury's eccentricity, with a currently assumed likelihood of ∼1%. However, little is known at present about the robustness of this number. Here I report ensemble integrations of the full equations of motion of the eight planets and Pluto over 5 Gyr, including contributions from general relativity. The results show that different numerical algorithms lead to statistically different results for the evolution of Mercury's eccentricity (e{sub M}). For instance, starting at present initial conditions (e{sub M}≃0.21), Mercury's maximum eccentricity achieved over 5 Gyr is, on average, significantly higher in symplectic ensemble integrations using heliocentric rather than Jacobi coordinates and stricter error control. In contrast, starting at a possible future configuration (e{sub M}≃0.53), Mercury's maximum eccentricity achieved over the subsequent 500 Myr is, on average, significantly lower using heliocentric rather than Jacobi coordinates. For example, the probability for e{sub M} to increase beyond 0.53 over 500 Myr is >90% (Jacobi) versus only 40%-55% (heliocentric). This poses a dilemma because the physical evolution of the real system—and its probabilistic behavior—cannot depend on the coordinate system or the numerical algorithm chosen to describe it. Some tests of the numerical algorithms suggest that symplectic integrators using heliocentric coordinates underestimate the odds for destabilization of Mercury's orbit at high initial e{sub M}.
Temporal Dynamics and Nonclassical Photon Statistics of Quadratically Coupled Optomechanical Systems
NASA Astrophysics Data System (ADS)
Singh, Shailendra Kumar; Muniandy, S. V.
2016-01-01
A quantum optomechanical system serves as an interface for coupling between photons and phonons due to mechanical oscillations. We use the Heisenberg-Langevin approach under the Markovian white noise approximation to study a quadratically coupled optomechanical system in which a thin dielectric membrane is quadratically coupled to the cavity field. A decorrelation method is employed to solve the large number of coupled equations. Transient mean numbers of cavity photons and phonons, which characterize the dynamical behaviour, are computed for different coupling regimes. We also obtain the two-boson second-order correlation functions for the cavity field, the membrane oscillator, and their cross-correlations, which reveal the nonclassical properties governed by the quadratic optomechanical coupling.
SU-E-J-261: Statistical Analysis and Chaotic Dynamics of Respiratory Signal of Patients in BodyFix
Michalski, D; Huq, M; Bednarz, G; Lalonde, R; Yang, Y; Heron, D
2014-06-01
Purpose: To quantify the respiratory signal of patients in BodyFix undergoing 4DCT scans with and without the immobilization cover. Methods: 20 pairs of respiratory tracks recorded with the RPM system during 4DCT scans were analyzed. Descriptive statistics were applied to selected parameters of the exhale-inhale decomposition. Standardized signals were used with the delay method to build orbits in embedded space. Nonlinear behavior was tested with surrogate data. Sample entropy (SE), Lempel-Ziv complexity (LZC) and the largest Lyapunov exponents (LLE) were compared. Results: Statistical tests show a difference between scans for inspiration time and its variability, which is larger for scans without the cover. The same holds for the variability of the end of exhalation and inhalation. Other parameters fail to show a difference. For both scans, the respiratory signals show determinism and nonlinear stationarity. Statistical tests on surrogate data reveal their nonlinearity. The LLEs show the signals' chaotic nature and its correlation with the breathing period and its embedding delay time. SE, LZC and LLE measure respiratory signal complexity. Nonlinear characteristics do not differ between scans. Conclusion: Contrary to expectation, the cover applied to patients in BodyFix appears to have limited effect on signal parameters. Analysis based on trajectories of delay vectors shows the respiratory system's nonlinear character and its sensitive dependence on initial conditions. Reproducibility of the respiratory signal can be evaluated with measures of signal complexity and its predictability window. A longer respiratory period is conducive to signal reproducibility, as shown by these gauges. Statistical independence of the exhale and inhale times is also supported by the magnitude of the LLE. The nonlinear parameters seem more appropriate for gauging respiratory signal complexity, given its deterministic chaotic nature. This contrasts with measures based on harmonic analysis, which are blind to nonlinear features. 
Dynamics of breathing, so crucial for 4D-based clinical technologies, can be better controlled if a nonlinear-based methodology, which reflects respiration characteristics, is applied. Funding provided by Varian Medical Systems via an Investigator Initiated Research Project.
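The delay-embedding and sample-entropy steps mentioned above can be sketched with generic textbook implementations (this is not the study's analysis code, and the template length m = 2 and tolerance r = 0.2 are conventional assumptions, not the paper's settings):

```python
import math

def delay_embed(x, dim, tau):
    """Delay-coordinate vectors (x[i], x[i+tau], ..., x[i+(dim-1)*tau])
    used to reconstruct orbits in embedded space (delay method)."""
    n = len(x) - (dim - 1) * tau
    return [tuple(x[i + k * tau] for k in range(dim)) for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    template vectors close for m points (Chebyshev distance <= r*std)
    remain close for m + 1 points.  Higher values = more irregularity."""
    mu = sum(x) / len(x)
    tol = r * (sum((v - mu) ** 2 for v in x) / len(x)) ** 0.5

    def matches(dim):
        t = delay_embed(x, dim, 1)
        return sum(1
                   for i in range(len(t))
                   for j in range(i + 1, len(t))
                   if max(abs(a - b) for a, b in zip(t[i], t[j])) <= tol)

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

# A strictly periodic signal is highly predictable (low SampEn), while a
# chaotic one (logistic map at r = 4) is not.
periodic = [0.0, 1.0, 0.0, -1.0] * 30
chaotic, v = [], 0.3
for _ in range(120):
    v = 4.0 * v * (1.0 - v)
    chaotic.append(v)
se_p = sample_entropy(periodic)
se_c = sample_entropy(chaotic)
```

The same embedded orbits would be the starting point for surrogate-data tests and largest-Lyapunov-exponent estimates.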
NASA Astrophysics Data System (ADS)
Jacquelin, E.; Adhikari, S.; Sinou, J.-J.; Friswell, M. I.
2015-11-01
The polynomial chaos solution for the frequency response of linear non-proportionally damped dynamic systems is considered. It has been observed that for lightly damped systems the convergence of the solution can be very poor in the vicinity of the deterministic resonance frequencies. To address this, Aitken's transformation and its generalizations are suggested. The proposed approach is successfully applied to the sequences defined by the first two moments of the responses, and this process significantly accelerates the polynomial chaos convergence. In particular, a 2-dof system with 1 and 2 parameter uncertainties, respectively, has been studied. The first two moments of the frequency response were calculated by Monte Carlo simulation, by polynomial chaos expansion, and by Aitken's transformation of the polynomial chaos expansion. Whereas 200 polynomials are required for good agreement with Monte Carlo results around the deterministic eigenfrequencies, fewer than 50 polynomials transformed by Aitken's method are enough. This result is further improved if a generalization of Aitken's method (the recursive Aitken transformation, Shanks' transformation) is applied. With the proposed convergence acceleration, polynomial chaos may be reconsidered as an efficient method to estimate the first two moments of a random dynamic response.
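Aitken's delta-squared transformation itself is only a few lines. The sketch below is detached from the polynomial chaos setting: for illustration it accelerates the slowly convergent alternating series for ln 2, and shows that the transformation can be applied recursively, as the abstract's generalizations suggest.

```python
import math

def aitken(seq):
    """Aitken delta-squared transformation:
    t_n = s_n - (s_{n+1}-s_n)^2 / (s_{n+2} - 2 s_{n+1} + s_n),
    which converges faster for linearly convergent sequences."""
    out = []
    for s0, s1, s2 in zip(seq, seq[1:], seq[2:]):
        denom = s2 - 2.0 * s1 + s0
        out.append(s2 if denom == 0 else s0 - (s1 - s0) ** 2 / denom)
    return out

# Partial sums of the slowly convergent series ln 2 = 1 - 1/2 + 1/3 - ...
partial, s = [], 0.0
for k in range(1, 21):
    s += (-1) ** (k + 1) / k
    partial.append(s)

accel = aitken(partial)    # one sweep of delta-squared
accel2 = aitken(accel)     # recursive application (second sweep)
```

Each sweep shortens the sequence by two terms but gains several digits of accuracy, which is the same trade that makes fewer transformed polynomial chaos terms sufficient near resonance.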
Financial price dynamics and pedestrian counterflows: a comparison of statistical stylized facts.
Parisi, Daniel R; Sornette, Didier; Helbing, Dirk
2013-01-01
We propose and document the evidence for an analogy between the dynamics of granular counterflows in the presence of bottlenecks or restrictions and financial price formation processes. Using extensive simulations, we find that the counterflows of simulated pedestrians through a door display eight stylized facts observed in financial markets when the density around the door is compared with the logarithm of the price. Finding so many stylized facts is very rare indeed among all agent-based models of financial markets. The stylized properties are present when the agents in the pedestrian model are assumed to display a zero-intelligent behavior. If agents are given decision-making capacity and adapt to partially follow the majority, periods of herding behavior may additionally occur. This generates the very slow decay of the autocorrelation of absolute return due to an intermittent dynamics. Our findings suggest that the stylized facts in the fluctuations of the financial prices result from a competition of two groups with opposite interests in the presence of a constraint funneling the flow of transactions to a narrow band of prices with limited liquidity. PMID:23410385
How electronic dynamics with Pauli exclusion produces Fermi-Dirac statistics
Nguyen, Triet S.; Nanguneri, Ravindra; Parkhill, John
2015-04-07
It is important that any dynamics method approaches the correct population distribution at long times. In this paper, we derive a one-body reduced density matrix dynamics for electrons in energetic contact with a bath. We obtain a remarkable equation of motion which shows that in order to reach equilibrium properly, rates of electron transitions depend on the density matrix. Even though the bath drives the electrons towards a Boltzmann distribution, hole blocking factors in our equation of motion cause the electronic populations to relax to a Fermi-Dirac distribution. These factors are an old concept, but we show how they can be derived with a combination of time-dependent perturbation theory and the extended normal ordering of Mukherjee and Kutzelnigg for a general electronic state. The resulting non-equilibrium kinetic equations generalize the usual Redfield theory to many-electron systems, while ensuring that the orbital occupations remain between zero and one. In numerical applications of our equations, we show that relaxation rates of molecules are not constant because of the blocking effect. Other applications to model atomic chains are also presented which highlight the importance of treating both dephasing and relaxation. Finally, we show how the bath localizes the electron density matrix.
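The role of the hole-blocking factors can be illustrated with a toy Pauli master equation. The rates below are hypothetical, chosen only to satisfy detailed balance with a thermal bath; this is a sketch of the blocking mechanism, not the Redfield-type many-electron equations derived in the paper.

```python
import math

def relax_to_fermi_dirac(eps, f0, beta, dt=0.002, steps=10000):
    """Toy Pauli master equation
        df_i/dt = sum_j [ W[j][i] f_j (1 - f_i) - W[i][j] f_i (1 - f_j) ]
    with bath rates satisfying detailed balance,
        W[i][j] / W[j][i] = exp(-beta (eps[j] - eps[i])).
    The hole-blocking factors (1 - f) make the stationary occupations
    Fermi-Dirac rather than Boltzmann, while conserving particle number."""
    n = len(eps)
    W = [[math.exp(-0.5 * beta * (eps[j] - eps[i])) for j in range(n)]
         for i in range(n)]
    f = list(f0)
    for _ in range(steps):          # forward-Euler integration
        df = [0.0] * n
        for i in range(n):
            for j in range(n):
                if i != j:
                    df[i] += (W[j][i] * f[j] * (1.0 - f[i])
                              - W[i][j] * f[i] * (1.0 - f[j]))
        f = [fi + dt * dfi for fi, dfi in zip(f, df)]
    return f

beta = 2.0
eps = [0.0, 0.5, 1.0]               # three illustrative orbital energies
f = relax_to_fermi_dirac(eps, [0.5, 0.5, 0.5], beta)
# At stationarity, log[(f_i/(1-f_i)) / (f_j/(1-f_j))] = -beta (eps_i - eps_j),
# i.e. the occupations lie on a Fermi-Dirac curve with a common chemical
# potential fixed by the conserved particle number.
```

Dropping the (1 - f) factors from the gain and loss terms turns the same fixed point into a Boltzmann distribution, which is exactly the distinction the abstract emphasizes.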
Nichols, J.M.; Moniz, L.; Nichols, J.D.; Pecora, L.M.; Cooch, E.
2005-01-01
A number of important questions in ecology involve the possibility of interactions or "coupling" among potential components of ecological systems. The basic question of whether two components are coupled (exhibit dynamical interdependence) is relevant to investigations of movement of animals over space, population regulation, food webs and trophic interactions, and is also useful in the design of monitoring programs. For example, in spatially extended systems, coupling among populations in different locations implies the existence of redundant information in the system and the possibility of exploiting this redundancy in the development of spatial sampling designs. One approach to the identification of coupling involves study of the purported mechanisms linking system components. Another approach is based on time series of two potential components of the same system and, in previous ecological work, has relied on linear cross-correlation analysis. Here we present two different attractor-based approaches, continuity and mutual prediction, for determining the degree to which two population time series (e.g., at different spatial locations) are coupled. Both approaches are demonstrated on a one-dimensional predator-prey model system exhibiting complex dynamics. Of particular interest is the spatial asymmetry introduced into the model as a linearly declining resource for the prey over the domain of the spatial coordinate. Results from these approaches are then compared to the more standard cross-correlation analysis. In contrast to cross-correlation, both continuity and mutual prediction are clearly able to discern the asymmetry in the flow of information through this system.
NASA Astrophysics Data System (ADS)
Robledo, A.; Moyano, L. G.
2008-03-01
We demonstrate that the dynamics toward and within the Feigenbaum attractor combine to form a q-deformed statistical-mechanical construction. The rate at which ensemble trajectories converge to the attractor (and to the repellor) is described by a q-entropy obtained from a partition function generated by summing distances between neighboring positions of the attractor. The values of the q indices involved are given by the unimodal map universal constants, while the thermodynamic structure is closely related to that formerly developed for multifractals. As an essential component in our demonstration we expose, in great detail, the features of the dynamics of trajectories that either evolve toward the Feigenbaum attractor or are captured by its matching repellor. The dynamical properties of the family of periodic superstable cycles in unimodal maps are seen to be key ingredients for the comprehension of the discrete scale invariance features present at the period-doubling transition to chaos. Elements in our analysis are the following. (i) The preimages of the attractor and repellor of each of the supercycles appear entrenched into a fractal hierarchical structure of increasing complexity as period doubling develops. (ii) The limiting form of this rank structure results in an infinite number of families of well-defined phase-space gaps in the positions of the Feigenbaum attractor or of its repellor. (iii) The gaps in each of these families can be ordered with decreasing width in accordance with power laws and are seen to appear sequentially in the dynamics generated by uniform distributions of initial conditions. (iv) The power law with log-periodic modulation associated with the rate of approach of trajectories toward the attractor (and to the repellor) is explained in terms of the progression of gap formation. 
(v) The relationship between the law of rate of convergence to the attractor and the inexhaustible hierarchy feature of the preimage structure is elucidated. (vi) A "mean field" evaluation of the atypical partition function, a thermodynamic interpretation of the time evolution process, and a crossover to ordinary exponential statistics are given. We make clear the dynamical origin of the anomalous thermodynamic framework existing at the Feigenbaum attractor.
NASA Astrophysics Data System (ADS)
Eckert, Nicolas; Schläppy, Romain; Jomelli, Vincent; Naaim, Mohamed
2013-04-01
A crucial step in proposing relevant long-term mitigation measures in long-term avalanche forecasting is the accurate definition of high return period avalanches. Recently, "statistical-dynamical" approaches combining a numerical model with stochastic operators describing the variability of its inputs and outputs have emerged. Their main interest is to take into account the topographic dependency of snow avalanche runout distances, and to constrain the correlation structure between the model's variables by physical rules, so as to simulate the different marginal distributions of interest (pressure, flow depth, etc.) with reasonable realism. Bayesian methods have been shown to be well adapted to achieving model inference, getting rid of identifiability problems thanks to prior information. An important problem which has virtually never been considered before is the validation of the predictions resulting from a statistical-dynamical approach (or from any other engineering method for computing extreme avalanches). In hydrology, independent "fossil" data such as flood deposits in caves are sometimes compared with design discharges corresponding to high return periods. Hence, the aim of this work is to implement a similar comparison between high return period avalanches obtained with a statistical-dynamical approach and independent validation data resulting from careful dendrogeomorphological reconstructions. To do so, an up-to-date statistical model based on the depth-averaged equations and the classical Voellmy friction law is used on a well-documented case study. First, parameter values resulting from another avalanche path are applied, and the dendrological validation sample shows that this approach fails to provide realistic predictions for the case study. This may be due to the strongly bounded behaviour of runouts in this case (the extreme of their distribution is identified as belonging to the Weibull attraction domain). 
Second, local calibration on the available avalanche chronicle is performed with various prior distributions resulting from expert knowledge and/or other paths. For all calibrations, very successful convergence is obtained, which confirms the robustness of the Metropolis-Hastings estimation algorithm used. This also demonstrates the interest of the Bayesian framework for aggregating information by sequential assimilation in the frequently encountered case of limited data quantity. Comparison with the dendrological sample stresses the predominant role of the variance of the Coulombian friction coefficient distribution on predicted high-magnitude runouts. The optimal fit is obtained for a strong prior reflecting the local bounded behavior, and results in a 10-40 m difference for return periods ranging between 10 and 300 years. Implementing predictive simulations shows that this is largely within the range of magnitude of the uncertainties to be taken into account. On the other hand, the different priors tested for the turbulent friction coefficient influence predictive performance only slightly, but have a large influence on predicted velocity and flow depth distributions. All this may be of high interest for refining the calibration and predictive use of the statistical-dynamical model in any engineering application.
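The random-walk Metropolis-Hastings scheme at the heart of such Bayesian calibration can be sketched generically. The "friction-like" coefficient, the Gaussian prior, and the observations below are illustrative stand-ins, not the avalanche model's actual quantities; the point is only how a prior and a likelihood are aggregated into posterior samples.

```python
import math, random

def metropolis_hastings(log_post, x0, n_samples, step=0.1, burn=2000):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2) and
    accept with probability min(1, post(x')/post(x))."""
    x, lp = x0, log_post(x0)
    samples = []
    for it in range(n_samples + burn):
        xp = x + random.gauss(0.0, step)
        lpp = log_post(xp)
        if lpp >= lp or random.random() < math.exp(lpp - lp):
            x, lp = xp, lpp            # accept the proposal
        if it >= burn:
            samples.append(x)
    return samples

# Toy calibration: a hypothetical friction-like coefficient mu with a
# Gaussian prior N(0.40, 0.10^2) and 20 observations near 0.45.
random.seed(7)
obs = [0.45] * 20
sigma_obs = 0.05

def log_post(mu):
    lp = -0.5 * ((mu - 0.40) / 0.10) ** 2                        # prior
    lp += sum(-0.5 * ((y - mu) / sigma_obs) ** 2 for y in obs)   # likelihood
    return lp

samples = metropolis_hastings(log_post, 0.40, 20000, step=0.02)
# Conjugate Gaussian algebra gives posterior N(0.4494, 0.0111^2) here,
# so the sample mean should land close to 0.449.
post_mean = sum(samples) / len(samples)
```

Because the posterior pools prior precision with data precision, adding observations pulls the estimate from the prior mean toward the data, which is the "sequential assimilation" behaviour the abstract highlights.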
A stochastic-dynamic model for global atmospheric mass field statistics
NASA Technical Reports Server (NTRS)
Ghil, M.; Balgovind, R.; Kalnay-Rivas, E.
1981-01-01
A model that yields the spatial correlation structure of atmospheric mass field forecast errors was developed. The model is governed by the potential vorticity equation forced by random noise. Three solution methods were compared. In the first, the equation was solved by expansion in spherical harmonics, and the correlation function was computed analytically from the expansion coefficients. In the second, the finite-difference equivalent was solved using a fast Poisson solver, and the correlation function was computed by stratified sampling of the individual realizations of F(omega) and hence of phi(omega). In the third, a higher-order equation for gamma was derived and solved directly in finite differences by two successive applications of the fast Poisson solver. The methods were compared for accuracy and efficiency, and the third method was chosen as clearly superior. The results agree well with the latitude dependence of observed atmospheric correlation data. The value of the parameter c sub o which gives the best fit to the data is close to the value expected from dynamical considerations.
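As a minimal stand-in for the fast Poisson solvers mentioned above, here is a one-dimensional finite-difference Poisson solve using the O(n) Thomas (tridiagonal) algorithm. The paper's solver operates on global atmospheric grids; this sketch only illustrates the direct-solve idea on a toy problem with a known answer.

```python
import math

def solve_poisson_1d(f, h):
    """Solve u'' = f on an interior grid with spacing h and u = 0 at
    both boundaries, discretized as (u[i-1] - 2u[i] + u[i+1])/h^2 = f[i]
    and solved in O(n) with the Thomas (tridiagonal) algorithm."""
    n = len(f)
    c = [0.0] * n          # modified super-diagonal coefficients
    d = [0.0] * n          # modified right-hand side
    b = -2.0               # main diagonal; both off-diagonals are 1
    c[0] = 1.0 / b
    d[0] = h * h * f[0] / b
    for i in range(1, n):  # forward elimination
        m = b - c[i - 1]
        c[i] = 1.0 / m
        d[i] = (h * h * f[i] - d[i - 1]) / m
    u = [0.0] * n          # back substitution
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u

# Manufactured solution: u'' = -pi^2 sin(pi x) on (0, 1) has u = sin(pi x).
n = 99
h = 1.0 / (n + 1)
rhs = [-math.pi ** 2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
u = solve_poisson_1d(rhs, h)
max_err = max(abs(u[i] - math.sin(math.pi * (i + 1) * h)) for i in range(n))
```

The second-order stencil gives an O(h^2) error, so halving the grid spacing should reduce max_err by roughly a factor of four.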
The influence of lexical statistics on temporal lobe cortical dynamics during spoken word listening.
Cibelli, Emily S; Leonard, Matthew K; Johnson, Keith; Chang, Edward F
2015-08-01
Neural representations of words are thought to have a complex spatio-temporal cortical basis. It has been suggested that spoken word recognition is not a process of feed-forward computations from phonetic to lexical forms, but rather involves the online integration of bottom-up input with stored lexical knowledge. Using direct neural recordings from the temporal lobe, we examined cortical responses to words and pseudowords. We found that neural populations were not only sensitive to lexical status (real vs. pseudo), but also to cohort size (number of words matching the phonetic input at each time point) and cohort frequency (lexical frequency of those words). These lexical variables modulated neural activity from the posterior to anterior temporal lobe, and also dynamically as the stimuli unfolded on a millisecond time scale. Our findings indicate that word recognition is not purely modular, but relies on rapid and online integration of multiple sources of lexical knowledge. PMID:26072003
Research on rail surface non-touch statistic and dynamic measuring technique
NASA Astrophysics Data System (ADS)
Bi, Zong-qi; Wang, Lian-fen; Wang, Ai-fang
2014-02-01
On the basis of the laser displacement sensing principle, a non-contact rail surface measuring scheme is designed and a surface measuring instrument is built. Inspection and measurement of single rail surface sections are carried out, and data-based image formation is provided. Precise measurement is realized for the rail surface, especially the rail parts. By using stepper-motor driving and programmed image formation techniques, combined with MATLAB programming, the single-surface measurement data are transformed into images, and dynamic measurement of rail vertical smoothness is achieved. By comparison with standard data, the rail wear state and surface parameters are obtained. This technique meets the needs of non-contact automatic measurement of rail surfaces.
NASA Astrophysics Data System (ADS)
Liu, Zhichao; Zhao, Yunjie; Zeng, Chen; Computational Biophysics Lab Team
As the main protein of the bacterial flagellum, flagellin plays an important role in perception and defense responses. The newly discovered locus FLS2 is ubiquitously expressed. FLS2 encodes a putative receptor kinase and shares many homologies with some plant resistance genes and even with some components of the immune systems of mammals and insects. In Arabidopsis, FLS2 perception is achieved by recognition of the epitope flg22, which induces FLS2 heteromerization with BAK1 and finally plant immunity. Here we use both analytical methods, such as Direct Coupling Analysis (DCA), and Molecular Dynamics (MD) simulations to gain a better understanding of the defense mechanism of FLS2. This may facilitate a redesign of flg22, or de novo design, for desired specificity and potency, extending the immune properties of FLS2 to other important crops and vegetables.
Statistical techniques for modeling extreme price dynamics in the energy market
NASA Astrophysics Data System (ADS)
Mbugua, L. N.; Mwita, P. N.
2013-02-01
Extreme events have a large impact across engineering, science and economics, because they often lead to failures and losses owing to the unobservable nature of extraordinary occurrences. In this context, this paper focuses on appropriate statistical methods that combine a quantile regression approach with extreme value theory to model the excesses; this plays a vital role in risk management. Locally, nonparametric quantile regression is used, a method that is flexible and best suited when little is known about the functional form of the object being estimated. Conditions are derived for estimating the extreme value distribution function. The threshold model of extreme values is used to circumvent the problem of scarce observations in the tail of the distribution function. The application of a selection of these techniques is demonstrated on the volatile fuel market. The results indicate that the method used can extract the maximum possible reliable information from the data. Its key attraction is that it offers a set of ready-made approaches to the most difficult problem of risk modeling.
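The peaks-over-threshold idea behind the threshold model can be sketched as follows; the heavy-tailed synthetic series, the 95% threshold choice and the use of `scipy.stats.genpareto` are illustrative assumptions, not details from the paper.

```python
# Peaks-over-threshold sketch: fit a generalized Pareto distribution (GPD)
# to exceedances of a heavy-tailed series over a high threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=5000)   # heavy-tailed stand-in for price changes

u = np.quantile(returns, 0.95)              # high threshold (95% quantile)
excesses = returns[returns > u] - u

# Fit the GPD to the excesses, with location fixed at 0 as POT requires
shape, _, scale = stats.genpareto.fit(excesses, floc=0)

p_u = (returns > u).mean()                  # empirical exceedance probability

def tail_prob(x):
    """Estimated P(X > x) for x above the threshold u."""
    return p_u * stats.genpareto.sf(x - u, shape, loc=0, scale=scale)

print(f"shape={shape:.3f}, scale={scale:.3f}, P(X>5) ~ {tail_prob(5.0):.2e}")
```

The fitted shape parameter controls the tail heaviness; a positive value indicates a Pareto-type tail, consistent with the heavy-tailed input.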
The Tectonics Model of Coronal Heating: Unsteady Dynamics and Scaling in Statistical Steady State
NASA Astrophysics Data System (ADS)
Ng, C. S.; Lin, L.; Bhattacharjee, A.
2009-11-01
The tectonics model of coronal heating, proposed by Priest et al. [Astrophys. J., 576, 533 (2002)] envisions coronal heating caused by a hierarchy of current sheets produced by the movement of a myriad of flux elements in the magnetic carpet covering the Sun. We have recently obtained new scaling results in two dimensions (2D) suggesting that the heating rate becomes independent of resistivity in a statistical steady state [C. S. Ng and A. Bhattacharjee, Astrophys. J., 675, 899 (2008)]. Our numerical work has now been extended to 3D. Random photospheric footpoint motion is applied to obtain converged average coronal heating rates. In the large Lundquist number limit, we find that the heating rate is independent of the Lundquist number, with average magnetic energy saturating at a constant level due to the formation of strong current layers and subsequent disruptions. In this talk, we will present our latest numerical results from large-scale 3D simulations, and discuss differences with previous scaling laws.
Thompson, Keiran C.; Crittenden, Deborah L.; Kable, Scott H.; Jordan, Meredith J.T.
2006-01-28
Previous experimental and theoretical studies of the radical dissociation channel of T{sub 1} acetaldehyde show conflicting behavior in the HCO and CH{sub 3} product distributions. To resolve these conflicts, a full-dimensional potential-energy surface for the dissociation of CH{sub 3}CHO into HCO and CH{sub 3} fragments over the barrier on the T{sub 1} surface is developed based on RO-CCSD(T)/cc-pVTZ(DZ) ab initio calculations. 20 000 classical trajectories are calculated on this surface at each of five initial excess energies, spanning the excitation energies used in previous experimental studies, and translational, vibrational, and rotational distributions of the radical products are determined. For excess energies near the dissociation threshold, both the HCO and CH{sub 3} products are vibrationally cold; there is a small amount of HCO rotational excitation and little CH{sub 3} rotational excitation, and the reaction energy is partitioned dominantly (>90% at threshold) into relative translational motion. Close to threshold the HCO and CH{sub 3} rotational distributions are symmetrically shaped, resembling a Gaussian function, in agreement with observed experimental HCO rotational distributions. As the excess energy increases the calculated HCO and CH{sub 3} rotational distributions are observed to change from a Gaussian shape at threshold to one more resembling a Boltzmann distribution, a behavior also seen by various experimental groups. Thus the distribution of energy in these rotational degrees of freedom is observed to change from nonstatistical to apparently statistical, as excess energy increases. As the energy above threshold increases all the internal and external degrees of freedom are observed to gain population at a similar rate, broadly consistent with equipartitioning of the available energy at the transition state. 
These observations generally support the practice of separating the reaction dynamics into two reservoirs: an impulsive reservoir, fed by the exit channel dynamics, and a statistical reservoir, supported by the random distribution of excess energy above the barrier. The HCO rotation, however, is favored by approximately a factor of 3 over the statistical prediction. Thus, at sufficiently high excess energies, although the HCO rotational distribution may be considered statistical, the partitioning of energy into HCO rotation is not.
The non-statistical dynamics of the 18O + 32O2 isotope exchange reaction at two energies
NASA Astrophysics Data System (ADS)
Van Wyngarden, Annalise L.; Mar, Kathleen A.; Quach, Jim; Nguyen, Anh P. Q.; Wiegel, Aaron A.; Lin, Shi-Ying; Lendvay, Gyorgy; Guo, Hua; Lin, Jim J.; Lee, Yuan T.; Boering, Kristie A.
2014-08-01
The dynamics of the 18O(3P) + 32O2 isotope exchange reaction were studied using crossed atomic and molecular beams at collision energies (Ecoll) of 5.7 and 7.3 kcal/mol, and experimental results were compared with quantum statistical (QS) and quasi-classical trajectory (QCT) calculations on the O3(X1A') potential energy surface (PES) of Babikov et al. [D. Babikov, B. K. Kendrick, R. B. Walker, R. T. Pack, P. Fleurat-Lesard, and R. Schinke, J. Chem. Phys. 118, 6298 (2003)]. In both QS and QCT calculations, agreement with experiment was markedly improved by performing calculations with the experimental distribution of collision energies instead of fixed at the average collision energy. At both collision energies, the scattering displayed a forward bias, with a smaller bias at the lower Ecoll. Comparisons with the QS calculations suggest that 34O2 is produced with a non-statistical rovibrational distribution that is hotter than predicted, and the discrepancy is larger at the lower Ecoll. If this underprediction of rovibrational excitation by the QS method is not due to PES errors and/or to non-adiabatic effects not included in the calculations, then this collision energy dependence is opposite to what might be expected based on collision complex lifetime arguments and opposite to that measured for the forward bias. While the QCT calculations captured the experimental product vibrational energy distribution better than the QS method, the QCT results underpredicted rotationally excited products, overpredicted forward-bias and predicted a trend in the strength of forward-bias with collision energy opposite to that measured, indicating that it does not completely capture the dynamic behavior measured in the experiment. 
Thus, these results further underscore the need for improvement in theoretical treatments of dynamics on the O3(X1A') PES and perhaps of the PES itself in order to better understand and predict non-statistical effects in this reaction and in the formation of ozone (in which the intermediate O3* complex is collisionally stabilized by a third body). The scattering data presented here at two different collision energies provide important benchmarks to guide these improvements.
Statistical and dynamical downscaling in CORDEX-Africa: differing views on the regional climate
NASA Astrophysics Data System (ADS)
Hewitson, Bruce; Lennard, Christopher; Jack, Christopher; Coop, Lisa
2013-04-01
The need for credible regional climate change projections for use in adaptation actions and decision making is well recognised. The CORDEX activity has evolved in large part as a response to this need. For the most part, CORDEX has so far been dominated by regional climate modelling (RCM) activities. However, implicit in CORDEX is the use of statistical downscaling (SD) as a complement to RCMs, although the SD activities lag those of the RCMs. For Africa, the CORDEX RCM work is well advanced, with the control climate simulations completed and a number of RCM-based projections also available. The early results indicate that the RCMs produce a credible representation of the regional climate when aggregated in time and/or space, and provide an initial multi-model suite of regional climate change projections for Africa. The SD activities are catching up with this process, and the emerging challenge is how to integrate and compare the results from the two downscaling methods. The two approaches, SD and RCMs, have respective strengths and weaknesses, but are considered in the literature to be of comparable overall skill. Where climate change stationarity is not considered a major issue, such as on timescales out to perhaps 2050, it is arguable that SD (comprehensively undertaken) may be more skillful. From the perspective of users of regional scale projections, decision makers and policy developers, it is critical to compare and assess the relative strengths of the methods on a regional basis. To avoid confusion, the contradictions and/or robust messages emerging from the two methods need to be clearly understood and articulated. The inter-comparison between the RCMs is already the subject of a number of papers, and here we present an initial comparison of early results between the SD and the envelope of RCM downscaling for CORDEX-Africa. Using the available SD results, we consider where the overlaps and/or marked differences lie between the two methods.
The focus is primarily on the control climate, where the downscaling is forced by the ERA reanalysis data set, to avoid complicating factors possibly arising from non-stationarity issues with both SD and the RCMs. Following this we consider some early results of future climate projections based on the boundary conditions from CMIP5 GCM data. The primary consideration is how the statistical downscaling results fall within the envelope of the regional climate models. We consider the bias of the regional climate models, the seasonal cycle, the shorter time scales of weather events, and the histogram distribution of daily events including extremes. Of particular concern is how the downscaling methods handle both the high- and low-frequency variance of the regional climate systems. The SD method uses daily data to derive the deterministic response to the large-scale forcing and adds the high-frequency variability as a stochastic component. From this, time and space aggregates comparable to the RCM data may be compiled. The primary difference between SD and RCMs lies in the fact that SD is inherently bias corrected by virtue of the method. Thus the first major difference is accounted for by the RCM bias. Beyond this, the differences are regionally and seasonally dependent, and examples of these are presented from which preliminary conclusions about the two methods are drawn.
Rodríguez, Begoña; Blas, Juan; Lorenzo, Rubén M; Fernández, Patricia; Abril, Evaristo J
2011-04-01
Personal exposure meters (PEM) are routinely used for assessing exposure to radio frequency electric or magnetic fields. However, their readings are subject to errors associated with perturbations of the fields caused by the presence of the human body. This paper presents a novel analysis method for the characterization of this effect. Using ray-tracing techniques, PEM measurements have been emulated, with and without an approximation of this shadowing effect. In particular, the Global System for Mobile Communication mobile phone frequency band was chosen for its ubiquity and, specifically, we considered the case where the subject is walking outdoors in a relatively open area. These simulations have been contrasted with real PEM measurements in a 35-min walk. Results show a good agreement in terms of root mean square error and E-field cumulative distribution function (CDF), with a significant improvement when the shadowing effect is taken into account. In particular, the Kolmogorov-Smirnov (KS) test provides a P-value of 0.05 when considering the shadowing effect, versus a P-value of 10⁻¹⁴ when this effect is ignored. In addition, although the E-field levels in the absence of a human body have been found to follow a Nakagami distribution, a lognormal distribution fits the statistics of the PEM values better than the Nakagami distribution. In conclusion, although the mean could be adjusted by using correction factors, the shadowing effect also produces other changes in the CDF that require particular attention because they might lead to a systematic error. PMID:21365665
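The distribution comparison reported above can be illustrated on synthetic data: fit both candidate distributions and score each with a KS statistic. The lognormal data-generating choice and all parameters below are our own assumptions, not the measurements of the study.

```python
# Compare lognormal vs Nakagami fits to synthetic field magnitudes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for PEM readings: body shadowing acts multiplicatively,
# which motivates a lognormal model for the measured field magnitude.
efield = rng.lognormal(mean=-1.0, sigma=0.7, size=2000)

ln_params = stats.lognorm.fit(efield, floc=0)
nk_params = stats.nakagami.fit(efield, floc=0)

# KS statistic: maximum distance between empirical and fitted CDFs (smaller = better)
ks_ln = stats.kstest(efield, stats.lognorm(*ln_params).cdf).statistic
ks_nk = stats.kstest(efield, stats.nakagami(*nk_params).cdf).statistic

print(f"KS lognormal = {ks_ln:.4f}, KS Nakagami = {ks_nk:.4f}")
```

With lognormal-generated data the lognormal fit yields the smaller KS statistic, mirroring the qualitative finding of the paper for shadowed PEM readings.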
ERIC Educational Resources Information Center
Olive, G.; And Others
A selective dissemination of information service based on computer scanning of Nuclear Science Abstracts tapes has operated at the Atomic Energy Research Establishment, Harwell, England since October, 1968. The performance of the mechanized SDI service has been compared with that of the pre-existing current awareness service which is based on…
ERIC Educational Resources Information Center
BIVONA, WILLIAM A.
This volume presents the results of a nine-month test of a prototype selective dissemination of information (SDI) system developed for the Army technical libraries. During the pilot test one thousand documents were cataloged, indexed, and disseminated to twenty-five scientific and technical personnel. Matching of the interest profiles of these…
Technology Transfer Automated Retrieval System (TEKTRAN)
QseA and SdiA are two of several transcriptional regulators that regulate virulence gene expression of enterohemorrhagic Escherichia coli (EHEC) O157:H7 via quorum sensing (QS). QseA regulates the expression of the locus of enterocyte effacement (LEE). LEE encodes for a type III secretion (T3S) sys...
Technology Transfer Automated Retrieval System (TEKTRAN)
Subsurface drip irrigation (SDI) wets the soil at the depth of the drip line and in a volume around each emitter, but the soil wetted often does not include the soil surface. Because of this, the soil surface remains completely or at least partially dry and evaporative losses of irrigation water are...
Roques, Lionel; Bonnefon, Olivier
2016-01-01
We propose and develop a general approach based on reaction-diffusion equations for modelling a species dynamics in a realistic two-dimensional (2D) landscape crossed by linear one-dimensional (1D) corridors, such as roads, hedgerows or rivers. Our approach is based on a hybrid "2D/1D model", i.e, a system of 2D and 1D reaction-diffusion equations with homogeneous coefficients, in which each equation describes the population dynamics in a given 2D or 1D element of the landscape. Using the example of the range expansion of the tiger mosquito Aedes albopictus in France and its main highways as 1D corridors, we show that the model can be fitted to realistic observation data. We develop a mechanistic-statistical approach, based on the coupling between a model of population dynamics and a probabilistic model of the observation process. This allows us to bridge the gap between the data (3 levels of infestation, at the scale of a French department) and the output of the model (population densities at each point of the landscape), and to estimate the model parameter values using a maximum-likelihood approach. Using classical model comparison criteria, we obtain a better fit and a better predictive power with the 2D/1D model than with a standard homogeneous reaction-diffusion model. This shows the potential importance of taking into account the effect of the corridors (highways in the present case) on species dynamics. With regard to the particular case of A. albopictus, the conclusion that highways played an important role in species range expansion in mainland France is consistent with recent findings from the literature. PMID:26986201
Dynamical flows through dark matter haloes - II. One- and two-point statistics at the virial radius
NASA Astrophysics Data System (ADS)
Aubert, Dominique; Pichon, Christophe
2007-01-01
In a series of three papers, the dynamical interplay between environments and dark matter haloes is investigated, focusing on the dynamical flows through the virtual virial sphere. It relies both on cosmological simulations, to constrain the environments, and on an extension of the classical matrix method to derive the responses of the halo. A companion paper (Paper I) showed how perturbation theory allows us to propagate the statistical properties of the environment to an ensemble description of the dynamical response of the embedded halo. The current paper focuses on the statistical characterization of the environments surrounding haloes, using a set of large-scale simulations; the large ensemble of environments presented here allows us to put quantitative and statistically significant constraints on the properties of flows accreted by haloes. The description chosen in this paper relies on a `fluid' halocentric representation. The interactions between the halo and its environment are investigated in terms of a time-dependent external tidal field and a source term characterizing the infall. The former accounts for fly-bys and interlopers. The latter stands for the distribution function of the matter accreted through the virial sphere. The method of separation of variables is used to decouple the temporal evolution of these two quantities from their angular and velocity dependence by means of projection on a 5D basis. It is shown how the flux densities of mass, momentum and energy can provide an alternative description to the 5D projection of the source. Such a description is well suited to regenerate synthetic time lines of accretion which are consistent with environments found in simulations, as discussed in the Appendix. The method leading to the measurement of these quantities in simulations is presented in detail and applied to 15000 haloes with masses between 5 × 10^12 and 10^14 solar masses, evolving between z = 1 and 0.
The influence of resolution, mass class, and selection biases is investigated with higher resolution simulations. The emphasis is put on the one- and two-point statistics of the tidal field and of the flux density of mass, while the full characterization of the other fields is postponed to Paper III. The net accretion at the virial radius is found to decrease with time. This decline results from both an absolute decrease of infall and a growing contribution of outflows. Infall is found to be mainly radial and occurring at velocities ~0.75 times the virial velocity. Outflows are also detected through the virial sphere and occur at lower velocities, ~0.6 Vc, on more circular orbits. The external tidal field is found to be strongly quadrupolar and mostly stationary, possibly reflecting the distribution of matter in the halo's near environment. The coherence time of the small-scale fluctuations of the potential hints at a possible anisotropic distribution of accreted satellites. The flux density of mass on the virial sphere appears to be more clustered than the potential, while the shape of its angular power spectrum seems stationary. Most of these results are tabulated with simple fitting laws and are found to be consistent with published work which relies on a description of accretion in terms of satellites.
NASA Astrophysics Data System (ADS)
Ou, Li; Li, Zhuxia; Wu, Xizhen; Tian, Junlong; Sun, Weili
2009-12-01
The recent GSI data on proton-induced spallation reactions, obtained using inverse kinematics, are analyzed with the improved quantum molecular dynamics model (ImQMD05) merged with the generalized evaporation model (GEM2) and with the GEMINI model. We find that ImQMD05+GEM2 reproduces well the experimental mass and charge distributions for proton-induced spallation reactions on heavy targets (208Pb, 238U and 197Au), while ImQMD05+GEMINI reproduces well those on the light target (56Fe). The experimental double differential cross sections of neutrons and protons emitted in intermediate-energy proton-induced spallation reactions are also reproduced well with the same models, showing that they are not very sensitive to the choice of merged statistical model.
NASA Astrophysics Data System (ADS)
Jacobitz, Frank G.; Schneider, Kai; Bos, Wouter J. T.; Farge, Marie
2016-01-01
The acceleration statistics of sheared and rotating homogeneous turbulence are studied using direct numerical simulation results. The statistical properties of Lagrangian and Eulerian accelerations are considered together with the influence of the rotation to shear ratio, as well as the scale dependence of their statistics. The probability density functions (pdfs) of both Lagrangian and Eulerian accelerations show a strong and similar dependence on the rotation to shear ratio. The variance and flatness of both accelerations are analyzed and the extreme values of the Eulerian acceleration are observed to be above those of the Lagrangian acceleration. For strong rotation it is observed that flatness yields values close to three, corresponding to Gaussian-like behavior, and for moderate and vanishing rotation the flatness increases. Furthermore, the Lagrangian and Eulerian accelerations are shown to be strongly correlated for strong rotation due to a reduced nonlinear term in this case. A wavelet-based scale-dependent analysis shows that the flatness of both Eulerian and Lagrangian accelerations increases as scale decreases, which provides evidence for intermittent behavior. For strong rotation the Eulerian acceleration is even more intermittent than the Lagrangian acceleration, while the opposite result is obtained for moderate rotation. Moreover, the dynamics of a passive scalar with gradient production in the direction of the mean velocity gradient is analyzed and the influence of the rotation to shear ratio is studied. Concerning the concentration of a passive scalar spread by the flow, the pdf of its Eulerian time rate of change presents higher extreme values than those of its Lagrangian time rate of change. This suggests that the Eulerian time rate of change of scalar concentration is mainly due to advection, while its Lagrangian counterpart is only due to gradient production and viscous dissipation.
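The flatness diagnostic used in the study is the normalized fourth moment, F = ⟨a⁴⟩/⟨a²⟩², which equals 3 for Gaussian statistics and exceeds 3 for intermittent signals. A minimal illustration on synthetic data (not the DNS fields of the study):

```python
# Flatness (kurtosis) diagnostic for acceleration-like signals: a Gaussian
# signal gives a flatness of 3, an intermittent signal gives larger values.
import numpy as np

def flatness(a):
    a = a - a.mean()
    return np.mean(a**4) / np.mean(a**2) ** 2

rng = np.random.default_rng(2)
gaussian = rng.normal(size=200000)        # stand-in for the strong-rotation case
# Lognormally modulated amplitude mimics intermittent, heavy-tailed statistics
intermittent = rng.normal(size=200000) * np.exp(rng.normal(scale=0.5, size=200000))

print(f"Gaussian flatness ~ {flatness(gaussian):.2f}")        # close to 3
print(f"Intermittent flatness ~ {flatness(intermittent):.2f}")  # well above 3
```

In the paper, this statistic is evaluated scale by scale via a wavelet decomposition; its growth at small scales is the quoted evidence for intermittency.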
Coupled flow-polymer dynamics via statistical field theory: Modeling and computation
NASA Astrophysics Data System (ADS)
Ceniceros, Hector D.; Fredrickson, Glenn H.; Mohler, George O.
2009-03-01
Field-theoretic models, which replace interactions between polymers with interactions between polymers and one or more conjugate fields, offer a systematic framework for coarse-graining of complex fluid systems. While this approach has been used successfully to investigate a wide range of polymer formulations at equilibrium, field-theoretic models often fail to accurately capture the non-equilibrium behavior of polymers, especially in the early stages of phase separation. Here the "two-fluid" approach serves as a useful alternative, treating the motions of fluid components separately in order to incorporate asymmetries between polymer molecules. In this work we focus on the connection of these two theories, drawing upon the strengths of each of the approaches in order to couple polymer microstructure with the dynamics of the flow in a systematic way. For illustrative purposes we work with an inhomogeneous melt of elastic dumbbell polymers, though our methodology applies more generally to a wide variety of inhomogeneous systems. First we derive the model, incorporating thermodynamic forces into a two-fluid model for the flow through the introduction of conjugate chemical potential and elastic strain fields for the polymer density and stress. The resulting equations are composed of a system of fourth order PDEs coupled with a non-linear, non-local optimization problem to determine the conjugate fields. The coupled system is severely stiff and of high computational complexity. Next, we overcome the formidable numerical challenges posed by the model by designing a robust semi-implicit method based on the linear asymptotic behavior of the leading order terms at small scales, by exploiting the exponential structure of global (integral) operators, and by parallelizing the non-linear optimization problem.
The semi-implicit method effectively removes the fourth order stability constraint associated with explicit methods and we observe only a first order time-step restriction. The algorithm for solving the non-linear optimization problem, which takes advantage of the form of the operators being optimized, reduces the overall simulation time by several orders of magnitude. We illustrate the methodology with several examples of phase separation in an initially quiescent flow.
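The authors' coupled model is far more involved; the following toy sketch only illustrates the general semi-implicit idea on a hypothetical stand-in: a 1D Cahn-Hilliard-type equation whose stiff fourth-order linear term is treated implicitly in Fourier space while the nonlinearity remains explicit, removing the fourth-order stability constraint.

```python
# Semi-implicit step for u_t = Laplacian(u^3 - u) - kappa * u_xxxx (1D, periodic):
# the stiff linear term is implicit in Fourier space, the nonlinearity explicit.
import numpy as np

nx, Lx, kappa, dt = 128, 2 * np.pi, 1e-2, 1e-3
x = np.linspace(0, Lx, nx, endpoint=False)
k = np.fft.fftfreq(nx, d=Lx / nx) * 2 * np.pi    # wavenumbers

u = 0.1 * np.cos(3 * x)                          # small initial perturbation
u_hat = np.fft.fft(u)
for _ in range(2000):
    nl_hat = np.fft.fft(u**3 - u)                # nonlinear term, explicit
    # Implicit treatment of the fourth-order term removes the dt ~ dx^4
    # stability constraint of fully explicit schemes
    u_hat = (u_hat - dt * k**2 * nl_hat) / (1 + dt * kappa * k**4)
    u = np.real(np.fft.ifft(u_hat))

print(f"max |u| ~ {np.max(np.abs(u)):.2f}")      # phases saturate near +/-1
```

A fully explicit scheme here would need a time step roughly a hundred times smaller; the denominator damps the highest wavenumbers unconditionally, which is the essence of the first-order time-step restriction the authors report.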
NASA Astrophysics Data System (ADS)
Sugiyama, K.; Nakajima, K.; Odaka, M.; Kuramoto, K.; Hayashi, Y.-Y.
2014-02-01
A series of long-term numerical simulations of moist convection in Jupiter’s atmosphere is performed in order to investigate the idealized characteristics of the vertical structure of multi-composition clouds and the convective motions associated with them, varying the deep abundances of condensable gases and the autoconversion time scale, the latter being one of the most questionable parameters in cloud microphysical parameterization. The simulations are conducted using a two-dimensional cloud resolving model that explicitly represents the convective motion and microphysics of the three cloud components, H2O, NH3, and NH4SH imposing a body cooling that substitutes the net radiative cooling. The results are qualitatively similar to those reported in Sugiyama et al. (Sugiyama, K. et al. [2011]. Intermittent cumulonimbus activity breaking the three-layer cloud structure of Jupiter. Geophys. Res. Lett. 38, L13201. doi:10.1029/2011GL047878): stable layers associated with condensation and chemical reaction act as effective dynamical and compositional boundaries, intense cumulonimbus clouds develop with distinct temporal intermittency, and the active transport associated with these clouds results in the establishment of mean vertical profiles of condensates and condensable gases that are distinctly different from the hitherto accepted three-layered structure (e.g., Atreya, S.K., Romani, P.N. [1985]. Photochemistry and clouds of Jupiter, Saturn and Uranus. In: Recent Advances in Planetary Meteorology. Cambridge Univ. Press, London, pp. 17-68). Our results also demonstrate that the period of intermittent cloud activity is roughly proportional to the deep abundance of H2O gas. The autoconversion time scale does not strongly affect the results, except for the vertical profiles of the condensates. 
Changing the autoconversion time scale by a factor of 100 changes the intermittency period by a factor of less than two, although it causes a dramatic increase in the amount of condensates in the upper troposphere. The moist convection layer becomes potentially unstable with respect to an air parcel rising from below the H2O lifting condensation level (LCL) well before the development of cumulonimbus clouds. The instability accumulates until an appropriate trigger is provided by the H2O condensate that falls down through the H2O LCL; the H2O condensate drives a downward flow below the H2O LCL as a result of the latent cooling associated with the re-evaporation of the condensate, and the returning updrafts carry moist air from below to the moist convection layer. Active cloud development is terminated when the instability is completely exhausted. The period of intermittency is roughly equal to the time obtained by dividing the mean temperature increase, which is caused by active cumulonimbus development, by the body cooling rate.
ERIC Educational Resources Information Center
Chicot, Katie; Holmes, Hilary
2012-01-01
The use, and misuse, of statistics is commonplace, yet in the printed format data representations can be either oversimplified, supposedly for impact, or so complex as to lead to boredom, supposedly for completeness and accuracy. In this article the link to the video clip shows how dynamic visual representations can enliven and enhance the…
NASA Astrophysics Data System (ADS)
Reyers, Mark; Pinto, Joaquim G.; Moemken, Julia
2015-04-01
A statistical-dynamical downscaling (SDD) approach for the regionalisation of wind energy output (Eout) over Europe, with special focus on Germany, is proposed. SDD uses an extended circulation weather type (CWT) analysis on global daily MSLP fields, with the central point located over Germany. 77 weather classes, based on the associated circulation weather type and the intensity of the geostrophic flow, are identified. Representatives of these classes are dynamically downscaled with the regional climate model COSMO-CLM. By using weather class frequencies of different datasets, the simulated representatives are recombined into probability density functions (PDFs) of near-surface wind speed and finally into Eout of a sample wind turbine for present and future climate. This is performed for reanalysis, decadal hindcasts and long-term future projections. For evaluation purposes, the results of SDD are compared to wind observations and to the simulated Eout of purely dynamical downscaling (DD) methods. For the present climate, SDD is able to simulate realistic PDFs of 10m-wind speed for most stations in Germany. The resulting spatial Eout patterns are similar to the DD-simulated Eout. For decadal hindcasts, the results of SDD are similar to DD-simulated Eout over Germany, Poland, the Czech Republic, and Benelux, for which high correlations between annual Eout timeseries of SDD and DD are detected for selected hindcasts. Lower correlations are found for other European countries. It is demonstrated that SDD can be used to downscale the full ensemble of the MPI-ESM decadal prediction system. Long-term climate change projections in SRES scenarios of ECHAM5/MPI-OM as obtained by SDD agree well with the results of other studies using DD methods, with increasing Eout over Northern Europe and a negative trend over Southern Europe. Despite some biases, it is concluded that SDD is an adequate tool to assess regional wind energy changes in large model ensembles.
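The recombination step described above (weighting each weather class's downscaled wind-speed PDF by the class frequency, then converting the mixture to Eout through a turbine power curve) might look roughly like this. The class frequencies, wind bins, and power curve below are invented for illustration, and `recombine_eout` is a hypothetical helper, not code from the study:

```python
import numpy as np

def recombine_eout(class_freq, class_wind_pdfs, wind_bins, power_curve):
    """Statistical-dynamical downscaling recombination step (sketch):
    weight each class's downscaled wind-speed PDF by the frequency of
    that class in the driving dataset, then integrate the mixture PDF
    against a turbine power curve to get an expected power output."""
    class_freq = np.asarray(class_freq, dtype=float)
    class_freq = class_freq / class_freq.sum()          # normalise frequencies
    mixture = class_freq @ np.asarray(class_wind_pdfs)  # weighted mixture PDF
    return float(np.sum(mixture * power_curve(wind_bins)))

# toy example: 2 weather classes, 3 wind-speed bins (m/s)
bins = np.array([4.0, 8.0, 12.0])
pdfs = [[0.7, 0.2, 0.1],   # weak-flow class
        [0.1, 0.4, 0.5]]   # strong-flow class
power = lambda v: np.minimum(v**3 * 0.5, 600.0)  # hypothetical turbine curve (kW)
eout = recombine_eout([60, 40], pdfs, bins, power)
```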
NASA Astrophysics Data System (ADS)
Vaittinada Ayar, Pradeebane; Vrac, Mathieu; Bastin, Sophie; Carreau, Julie; Déqué, Michel; Gallardo, Clemente
2016-02-01
Given the coarse spatial resolution of General Circulation Models, finer-scale projections of variables affected by local-scale processes, such as precipitation, are often needed to drive impact models, for example in hydrology or ecology among other fields. This need for high-resolution data leads to the application of projection techniques called downscaling. Downscaling can be performed according to two approaches: dynamical and statistical models. The latter approach comprises various, conceptually different, statistical families. While several studies have made intercomparisons of existing downscaling models, none of them included all those families and approaches in a manner in which all the models are equally considered. To this end, the present study conducts an intercomparison exercise under the EURO- and MED-CORDEX initiative hindcast framework. Six Statistical Downscaling Models (SDMs) and five Regional Climate Models (RCMs) are compared in terms of precipitation outputs. The downscaled simulations are driven by the ERA-Interim reanalyses over the 1989-2008 period over a common area at 0.44° resolution. The 11 models are evaluated according to four aspects of precipitation: occurrence, intensity, as well as spatial and temporal properties. For each aspect, one or several indicators are computed to discriminate the models. The results indicate that marginal properties of rain occurrence and intensity are better modelled by stochastic and resampling-based SDMs, while spatial and temporal variability are better modelled by RCMs and resampling-based SDMs. These general conclusions have to be considered with caution because they rely on the chosen indicators and could change when considering other specific criteria. Each indicator suits a specific purpose, and therefore the model evaluation results depend on the end-user's point of view and how they intend to use the model outputs.
Nevertheless, building on previous intercomparison exercises, this study provides a consistent intercomparison framework, including both SDMs and RCMs, which is designed to be flexible, i.e., other models and indicators can easily be added. More generally, this framework provides a tool to select the downscaling model to be used according to the statistical properties of the local-scale climate data, in order to properly drive specific impact models.
NASA Astrophysics Data System (ADS)
Chatzopoulos, S.; Fritz, T. K.; Gerhard, O.; Gillessen, S.; Wegg, C.; Genzel, R.; Pfuhl, O.
2015-02-01
We derive new constraints on the mass, rotation, orbit structure, and statistical parallax of the Galactic old nuclear star cluster and the mass of the supermassive black hole. We combine star counts and kinematic data from Fritz et al., including 2500 line-of-sight velocities and 10 000 proper motions obtained with VLT instruments. We show that the difference between the proper motion dispersions σl and σb cannot be explained by rotation, but is a consequence of the flattening of the nuclear cluster. We fit the surface density distribution of stars in the central 1000 arcsec by a superposition of a spheroidal cluster with scale ˜100 arcsec and a much larger nuclear disc component. We compute the self-consistent two-integral distribution function f(E, Lz) for this density model, and add rotation self-consistently. We find that (i) the orbit structure of the f(E, Lz) gives an excellent match to the observed velocity dispersion profiles as well as the proper motion and line-of-sight velocity histograms, including the double-peak in the vl-histograms. (ii) This requires an axial ratio near q1 = 0.7 consistent with our determination from star counts, q1 = 0.73 ± 0.04 for r < 70 arcsec. (iii) The nuclear star cluster is approximately described by an isotropic rotator model. (iv) Using the corresponding Jeans equations to fit the proper motion and line-of-sight velocity dispersions, we obtain best estimates for the nuclear star cluster mass, black hole mass, and distance M*(r < 100 arcsec) = (8.94 ± 0.31|stat ± 0.9|syst) × 106 M⊙, M• = (3.86 ± 0.14|stat ± 0.4|syst) × 106 M⊙, and R0 = 8.27 ± 0.09|stat ± 0.1|syst kpc, where the estimated systematic errors account for additional uncertainties in the dynamical modelling. (v) The combination of the cluster dynamics with the S-star orbits around Sgr A* strongly reduces the degeneracy between black hole mass and Galactic Centre distance present in previous S-star studies. 
A joint statistical analysis with the results of Gillessen et al., gives M• = (4.23 ± 0.14) × 106 M⊙ and R0 = 8.33 ± 0.11 kpc.
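As an illustration of how combining two independent constraints shrinks the joint error bar, here is a standard inverse-variance combination. This is a simplified stand-in for the paper's joint statistical analysis (which fits both datasets simultaneously), and the numbers below are hypothetical:

```python
def combine_estimates(values, sigmas):
    """Inverse-variance weighted combination of independent
    measurements of the same quantity: the combined mean weights
    each value by 1/sigma^2, and the combined sigma is always
    smaller than the smallest individual sigma."""
    weights = [1.0 / s**2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    sigma = wsum ** -0.5
    return mean, sigma

# two hypothetical black-hole mass estimates (units of 1e6 Msun)
m, s = combine_estimates([3.86, 4.30], [0.44, 0.20])
```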
NASA Astrophysics Data System (ADS)
Chae, Kyu-Hyun; Gong, In-Taek
2015-08-01
Modified Newtonian dynamics (MOND), proposed by Milgrom, provides a paradigm alternative to dark matter (DM) that has been successful in fitting and predicting the rich phenomenology of rotating disc galaxies. There have also been attempts to test MOND in dispersion-supported spheroidal early-type galaxies, but it remains unclear whether MOND can fit the various empirical properties of early-type galaxies over the full ranges of mass and radius. As a way of rigorously testing MOND in elliptical galaxies, we calculate the MOND-predicted velocity dispersion profiles (VDPs) in the inner regions of ˜2000 nearly round Sloan Digital Sky Survey elliptical galaxies under a variety of assumptions on velocity dispersion (VD) anisotropy, and then compare the predicted distribution of VDP slopes with the observed distribution in 11 ATLAS3D galaxies selected with essentially the same criteria. We find that the MOND model, parametrized with an interpolating function that works well for rotating galaxies, can also reproduce the observed distribution of VDP slopes based only on the observed stellar mass distribution, without DM or any other galaxy-to-galaxy varying factor. This is remarkable given that Newtonian dynamics with DM requires a specific amount and/or profile of DM for each galaxy in order to reproduce the observed distribution of VDP slopes. When we analyse non-round galaxy samples using the MOND-based spherical Jeans equation, we do not find any systematic difference in the mean property of the VDP slope distribution compared with the nearly round sample. However, in line with previous studies of MOND through individual analyses of elliptical galaxies, varying the MOND interpolating function or VD anisotropy can lead to systematic changes in the VDP slope distribution, indicating that a statistical analysis of VDPs can be used to constrain specific MOND models with an accurate measurement of VDP slopes or a prior constraint on VD anisotropy.
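For readers unfamiliar with MOND interpolating functions, the sketch below solves mu(g/a0) * g = gN for the "simple" function mu(x) = x/(1+x), which has a closed-form solution. This is one common textbook choice, not necessarily the function tested in the paper:

```python
import math

A0 = 1.2e-10  # MOND acceleration scale, m/s^2 (commonly adopted value)

def mond_acceleration(g_newton, a0=A0):
    """True acceleration g solving mu(g/a0)*g = gN with the 'simple'
    interpolating function mu(x) = x/(1+x); the quadratic in g gives
    g = (gN + sqrt(gN^2 + 4*gN*a0)) / 2."""
    return 0.5 * (g_newton + math.sqrt(g_newton**2 + 4.0 * g_newton * a0))

# limits: gN >> a0 recovers Newton (g ~ gN);
# gN << a0 gives the deep-MOND regime (g ~ sqrt(gN * a0))
```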
NASA Astrophysics Data System (ADS)
Warrier, M.; Bhardwaj, U.; Hemani, H.; Schneider, R.; Mutzke, A.; Valsakumar, M. C.
2015-12-01
We report on Molecular Dynamics (MD) simulations carried out in fcc Cu and bcc W using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code to study (i) the statistical variations in the number of interstitials and vacancies produced by energetic primary knock-on atoms (PKA) (0.1-5 keV) directed in random directions and (ii) the in-cascade cluster size distributions. It is seen that around 60-80 random directions have to be explored for the average number of displaced atoms to become steady in the case of fcc Cu, whereas for bcc W around 50-60 random directions need to be explored. The number of Frenkel pairs produced in the MD simulations is compared with that from the Binary Collision Approximation Monte Carlo (BCA-MC) code SDTRIM-SP and the results from the NRT model. It is seen that a proper choice of the damage energy, i.e. the energy required to create a stable interstitial, is essential for the BCA-MC results to match the MD results. On the computational front, it is seen that in-situ processing avoids the need to input/output (I/O) several terabytes of atomic position data when exploring a large number of random directions, and there is no difference in run-time because the extra run-time spent processing data is offset by the time saved in I/O.
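The convergence test described above (exploring random PKA directions until the running mean of displaced atoms becomes steady) can be sketched as follows. The synthetic `displaced_atoms` below stands in for an actual LAMMPS cascade run, and the tolerance is an illustrative choice:

```python
import random

def displaced_atoms(direction_seed):
    """Stand-in for one MD cascade: returns a defect count for a PKA
    launched in one random direction (here just a seeded random draw;
    in reality each value would come from a full LAMMPS run)."""
    rng = random.Random(direction_seed)
    return 30.0 + rng.gauss(0.0, 5.0)

def directions_to_converge(tol=0.01, max_dirs=500):
    """Accumulate random directions until the running mean changes by
    less than a relative `tol` over the last 10 additions."""
    total, means = 0.0, []
    for n in range(1, max_dirs + 1):
        total += displaced_atoms(n)
        means.append(total / n)
        if n > 10 and abs(means[-1] - means[-11]) < tol * abs(means[-11]):
            return n, means[-1]
    return max_dirs, means[-1]

n_dirs, mean_defects = directions_to_converge()
```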
NASA Astrophysics Data System (ADS)
Krantz, Richard; Douthett, Jack; Cartwright, Julyan; Gonzalez, Diego; Piro, Oreste
2010-10-01
Some time ago two apparently dissimilar presentations were given at the 2007 Helmholtz Workshop in Berlin. One, by J. Douthett and R. Krantz, focused on the commonality between the mathematical descriptions of musical scales and the long-range, one-dimensional, antiferromagnetic Ising model of statistical physics. The other, by J. Cartwright, D. Gonzalez, and O. Piro, articulated a nonlinear dynamical model of pitch perception. Both approaches lead to a Farey series devil's staircase structure. In the first case, the ground-state magnetic phase diagram of the Ising model is a Farey series devil's staircase. In the second case, the ear is modeled as a nonlinear system, leading to a three-frequency resonant pitch perception model of the auditory system that exhibits a devil's staircase phase-locked structure. In this poster we present a summary of each of these works side by side to illuminate the link between these two seemingly disparate systems. Adapted from JMM Vol. 4, No. 1, 57, Mar. 2010.
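Both systems organise the steps of their staircases on Farey fractions. A Farey sequence F_n (all reduced fractions in [0, 1] with denominator at most n, in increasing order) can be generated with the standard next-term recurrence:

```python
from fractions import Fraction

def farey(n):
    """Farey sequence F_n via the next-term recurrence: given
    consecutive terms a/b and c/d, the following term is
    (k*c - a)/(k*d - b) with k = floor((n + b) / d)."""
    a, b, c, d = 0, 1, 1, n
    seq = [Fraction(a, b)]
    while c <= n:
        k = (n + b) // d
        a, b, c, d = c, d, k * c - a, k * d - b
        seq.append(Fraction(a, b))
    return seq

# farey(3) gives 0, 1/3, 1/2, 2/3, 1
```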
NASA Astrophysics Data System (ADS)
Mezghani, Abdelkader; Benestad, Rasmus E.
2014-05-01
The global climate community has produced a wide range of results from atmosphere-ocean general circulation models, which are considered the primary source of information on future climate change. However, there are still gaps between the spatial resolution of climate model outputs and the point-scale requirements of most climate change impact studies. Thus, empirical-statistical downscaling (ESD) and dynamical downscaling (DD) techniques continue to be used as alternatives, and various models have been made available by the scientific community. Several comparative studies have been done during the last decade, dealing with downscaling local weather variables such as temperature and precipitation over a region of interest. Accordingly, in this work, new methods and strategies based on merging ESD and DD results are proposed in order to increase the quality of local climate projections, with a special focus on seasonal and decadal precipitation and temperature based on CMIP3/5 experiments. A new freely available ESD R-package developed by MET Norway is used and will also be presented.
Dodds, Peter Sheridan; Mitchell, Lewis; Reagan, Andrew J.; Danforth, Christopher M.
2016-01-01
Instabilities and long-term shifts in seasons, whether induced by natural drivers or human activities, pose great disruptive threats to ecological, agricultural, and social systems. Here, we propose, measure, and explore two fundamental markers of location-sensitive seasonal variations: the Summer and Winter Teletherms—the on-average annual dates of the hottest and coldest days of the year. We analyse daily temperature extremes recorded at 1218 stations across the contiguous United States from 1853–2012, and observe large regional variation, with the Summer Teletherm falling up to 90 days after the Summer Solstice and the Winter Teletherm up to 50 days after the Winter Solstice. We show that Teletherm temporal dynamics are substantive, with clear and in some cases dramatic shifts reflective of system bifurcations. We also compare recorded daily temperature extremes with output from two regional climate models, finding considerable though relatively unbiased error. Our work demonstrates that Teletherms are an intuitive, powerful, and statistically sound measure of local climate change, and that they pose detailed, stringent challenges for future theoretical and computational models. PMID:27167740
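A Teletherm is the on-average annual date of the hottest (or coldest) day. A minimal sketch of extracting the Summer Teletherm from a multi-year daily record follows; the paper applies additional smoothing to the climatology, which is omitted here, and the toy record is invented:

```python
from collections import defaultdict

def summer_teletherm(day_temp_pairs):
    """Day-of-year of the on-average hottest day: average the daily
    maximum temperature over all years for each calendar day, then
    pick the day with the highest mean."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for doy, t in day_temp_pairs:       # (day-of-year, Tmax) pairs
        sums[doy] += t
        counts[doy] += 1
    return max(sums, key=lambda d: sums[d] / counts[d])

# toy two-year record where day 200 is hottest on average
record = [(199, 30.0), (200, 33.0), (201, 31.0),
          (199, 31.0), (200, 32.0), (201, 30.0)]
```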
NASA Astrophysics Data System (ADS)
Goodess, C. M.; Haylock, M. R.; Jones, P. D.; Bardossy, A.; Frei, C.; Schmith, T.
2003-04-01
STARDEX will provide a rigorous and systematic inter-comparison and evaluation of statistical and dynamical downscaling methods for the construction of scenarios of extremes. The more robust techniques will be identified and used to produce scenarios for European case-study regions for the end of the 21st century. During the first year of the project, work has focused on the following objectives, for which preliminary results will be presented: 1. To focus on an agreed, standard set of daily temperature extremes (e.g., percentiles of daily max./min. temperature, frost severity and duration indices and a heatwave duration index) and daily precipitation extremes (e.g., max. length of dry/wet spells, magnitude of the 90th percentile, percentage of rain falling on days with amounts above the 90th percentile) together with derived indices/parameters (e.g., thermal discomfort and fire hazard indices). 2. To focus on specific regions of Europe, ensuring that the case-study regions reflect the range of European climatic regimes and that the size/location of each region is appropriate for the extreme being studied (the selected regions encompass the British Isles, the Alps, the Mediterranean, Scandinavia and Germany). 3. To use a consistent approach (in terms of regions, data inputs, variables and statistics studied and time periods) for all analyses and case studies in order to allow rigorous and systematic evaluation and direct inter-comparison of the results. 4. 
To analyse observed data series for the second half of the 20th century from specific regions of Europe and for Europe as a whole in order to identify trends in the magnitude and frequency of occurrence of extremes (and, for specific events, their losses in life and financial costs) and to investigate whether these changes are related to changes in other climatic variables (i.e., potential predictor variables derived primarily from NCEP Reanalysis data, such as large-scale and regional objective circulation indices and patterns, including the North Atlantic Oscillation, measures of atmospheric humidity and stability and sea surface temperatures). 5. To analyse output from GCMs and RCMs, focusing on their ability to simulate extremes (including their magnitude, frequency of occurrence and trends) and potential predictor variables (including their relationships with surface climate). Acknowledgements: STARDEX (http://www.cru.uea.ac.uk/projects/stardex/) is supported by the European Commission under the Framework V Thematic Programme 'Energy, Environment and Sustainable Development', 2002-2005.
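One of the STARDEX-style precipitation indices listed above, the fraction of rain falling on days with amounts above the 90th percentile, might be computed as follows. The wet-day threshold and the simple empirical percentile method are illustrative choices, not the project's prescribed definitions:

```python
def r90p_fraction(daily_precip, wet_threshold=1.0):
    """Fraction of total precipitation that falls on days whose
    amount exceeds the 90th percentile of wet-day amounts."""
    wet = sorted(p for p in daily_precip if p >= wet_threshold)
    if not wet:
        return 0.0
    p90 = wet[int(0.9 * (len(wet) - 1))]   # simple sorted-rank percentile
    heavy = sum(p for p in wet if p > p90)
    return heavy / sum(wet)
```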
NASA Astrophysics Data System (ADS)
Mackay, R. M.; Khalil, M. A. K.
1995-10-01
The zonally averaged response of the Global Change Research Center two-dimensional (2-D) statistical dynamical climate model (GCRC 2-D SDCM) to a doubling of atmospheric carbon dioxide (350 parts per million by volume (ppmv) to 700 ppmv) is reported. The model solves the two-dimensional primitive equations in finite difference form (mass continuity, Newton's second law, and the first law of thermodynamics) for the prognostic variables: zonal mean density, zonal mean zonal velocity, zonal mean meridional velocity, and zonal mean temperature on a grid that has 18 nodes in latitude and 9 vertical nodes (plus the surface). The equation of state, p = ρRT, and an assumed hydrostatic atmosphere, Δp = -ρgΔz, are used to diagnostically calculate the zonal mean pressure and vertical velocity for each grid node, and the moisture balance equation is used to estimate the precipitation rate. The model includes seasonal variations in solar intensity, including the effects of eccentricity, and has observed land and ocean fractions set for each zone. Seasonally varying values of cloud amounts, relative humidity profiles, ozone, and sea ice are all prescribed in the model. Equator-to-pole ocean heat transport is simulated in the model by turbulent diffusion. The change in global mean annual surface air temperature due to a doubling of atmospheric CO2 in the 2-D model is 1.61 K, which is close to that simulated by the one-dimensional (1-D) radiative convective model (RCM) which is at the heart of the 2-D model radiation code (1.67 K for the moist adiabatic lapse rate assumption in the 1-D RCM). We find that the change in temperature structure of the model atmosphere has many of the characteristics common to General Circulation Models, including amplified warming at the poles and the upper tropical troposphere, and stratospheric cooling.
Because of the potential importance of atmospheric circulation feedbacks on climate change, we have also investigated the response of the zonal wind field to a doubling of CO2 and have found distinct patterns of change that are related to the change in temperature structure. In addition, we find that both the global mean kinetic energy and simulated Hadley circulation increase when CO2 is doubled. The increase in mean kinetic energy is a result of the increase in upper level meridional temperature gradients simulated by the model. It is stressed that changes in atmospheric dynamics associated with increased carbon dioxide may also be very important to the final steady state distribution of such greenhouse gases as ozone and water vapor. Hence further research in this regard is warranted.
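The model's diagnostic pressure calculation combines the two relations quoted above, p = ρRT and Δp = -ρgΔz. A minimal upward-marching sketch with illustrative layer temperatures and thickness (not the model's actual grid or values):

```python
R_DRY = 287.0   # J/(kg K), gas constant for dry air
G = 9.81        # m/s^2, gravitational acceleration

def pressure_profile(p_surface, layer_temps, dz):
    """March upward one layer at a time: density from the equation of
    state at the current level, then the hydrostatic pressure drop
    Δp = -ρ g Δz across the layer."""
    pressures = [p_surface]
    for t in layer_temps:
        rho = pressures[-1] / (R_DRY * t)
        pressures.append(pressures[-1] - rho * G * dz)
    return pressures

profile = pressure_profile(101325.0, [280.0, 270.0, 260.0], 1000.0)
```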
NASA Astrophysics Data System (ADS)
Sherman, James P.; She, Chiao-Yao
2006-06-01
One thousand three hundred and eleven 15-min profiles of nocturnal mesopause region (80-105 km) temperature and horizontal wind, observed by the Colorado State University sodium lidar over Fort Collins, CO (41°N, 105°W), between May 2002 and April 2003, were analyzed. From these profiles, taken over 390 h and each possessing a vertical resolution of 2 km, a statistical analysis of seasonal variations in wind shears and convective and dynamical instabilities was performed. Large wind shears were most often observed near 100 km and during winter months. Thirty-five percent of the winter profiles contained wind shears exceeding 40 m/s per km at some altitude. In spite of large winds and shears, the mesopause region (at a resolution of 2 km and 15 min) is a very stable region. At a given altitude, the probability for convective instability is less than 1.4% for all seasons, and the probability for dynamic instability (in the sense of the Richardson number) ranges from 2.7% to 6.0%. Wind shear measurements are compared with four decades of chemical release measurements, compiled in a study by Larson [2002. Winds and shears in the mesosphere and lower thermosphere: results from four decades of chemical release wind measurements. Journal of Geophysical Research 107(A8), 1215]. Instability results are compared with those deduced from an annual lidar study conducted with higher spatial and temporal resolution at the Starfire Optical Range (SOR) in Albuquerque, NM, by Zhao et al. [2003. Measurements of atmospheric stability in the mesopause region at Starfire Optical Range, NM. Journal of Atmospheric and Solar-Terrestrial Physics 65, 219-232], and from a study by Li et al. [2005b. Characteristics of instabilities in the mesopause region over Maui, Hawaii. Journal of Geophysical Research 110, D09S12] with 19 days of data acquired from the Maui Mesosphere and Lower Thermosphere (Maui MALT) Campaign.
The Fort Collins lidar profiles were also analyzed using 1-h temporal resolution to compare instances of instabilities observed on different time scales.
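The dynamic-instability criterion used in such lidar studies is the gradient Richardson number Ri = N²/S², where N² is the squared buoyancy frequency and S the vertical shear of the horizontal wind; Ri < 0.25 indicates dynamic instability and N² < 0 convective instability. A sketch of the classification (threshold values are the standard textbook ones):

```python
def richardson_number(n_squared, du_dz, dv_dz):
    """Gradient Richardson number Ri = N^2 / S^2 for squared buoyancy
    frequency N^2 (1/s^2) and vertical shear components (1/s)."""
    shear_sq = du_dz**2 + dv_dz**2
    return float("inf") if shear_sq == 0.0 else n_squared / shear_sq

def classify(ri):
    """Convective instability: Ri < 0 (i.e. N^2 < 0);
    dynamic instability: 0 <= Ri < 0.25; otherwise stable."""
    if ri < 0.0:
        return "convectively unstable"
    if ri < 0.25:
        return "dynamically unstable"
    return "stable"

# e.g. a 40 m/s per km shear (0.04 1/s) with N^2 = 3e-4 1/s^2
state = classify(richardson_number(3e-4, 0.04, 0.0))
```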
Healey, R.D.
1987-08-15
Software Built-In Test (BIT) is a design technique for collecting information from operational software that will assist in identifying differences between the real Operating Environment and either the Design or Test Environments. The BIT senses and indicates where the software is operating in new or overload environmental conditions and may, therefore, be more likely to fail. (This anomalous situation may be the result of either hardware failure or software design error.) The technical challenge is to incorporate the large number of relatively simple BIT tests into the fault-tolerant and continuously operating environment likely to characterize a solution to the battle management portion of the SDI mission. The management challenge is to provide these technical assists in such a way that they can be implemented in operational software with a minimal increase in software development time; it is then reasonable to expect that BIT will not shift from a hard requirement to a nice-to-have feature as schedule pressures potentially impact development. This approach overcomes the management problem by providing a standard set of tools for use within the software development environment which will implement BIT with a minimum amount of programmer action.
NASA Astrophysics Data System (ADS)
Eslamizadeh, H.
2015-09-01
The fission probability, pre-scission neutron, proton and alpha multiplicities, anisotropy of the fission fragment angular distribution and the fission time have been calculated for the compound nuclei 200Pb and 197Tl based on the modified statistical model and a four-dimensional dynamical model. In the dynamical calculations, dissipation was generated through the chaos-weighted wall and window friction formula. The projection of the total spin of the compound nucleus onto the symmetry axis, K, was considered as the fourth dimension in the Langevin dynamical calculations. In our dynamical calculations, we have used a constant dissipation coefficient of K, γ_K = 0.077 (MeV zs)^(-1/2), and a non-constant dissipation coefficient to reproduce the above-mentioned experimental data. Comparison of the theoretical results for the fission probability and pre-scission particle multiplicities with the experimental data showed that the difference between the results of the two dynamical models is small, whereas for the anisotropy of the fission fragment angular distribution it is somewhat larger. Furthermore, comparison of the results of the modified statistical model with the above-mentioned experimental data showed that, by choosing appropriate values of the temperature coefficient of the effective potential, λ, and the scaling factor of the fission-barrier height, r_s, the experimental data were satisfactorily reproduced.
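The Langevin approach referred to above integrates stochastic equations of motion for the collective coordinates. As a much-reduced illustration (one dimension, overdamped, hypothetical parameters; the paper's calculation is four-dimensional with shape-dependent friction):

```python
import math, random

def langevin_trajectory(force, gamma, temperature, x0, dt, n_steps, seed=0):
    """Euler-Maruyama integration of the overdamped Langevin equation
    dx = (F(x)/gamma) dt + sqrt(2*T*dt/gamma) * xi, with kB = 1 and
    xi a standard normal deviate."""
    rng = random.Random(seed)
    x = x0
    traj = [x]
    noise_amp = math.sqrt(2.0 * temperature * dt / gamma)
    for _ in range(n_steps):
        x += force(x) / gamma * dt + noise_amp * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

# relaxation in a harmonic potential V = x^2 (so F = -2x),
# starting away from the minimum
traj = langevin_trajectory(lambda x: -2.0 * x, gamma=1.0,
                           temperature=0.5, x0=3.0, dt=0.01, n_steps=5000)
```

At long times the trajectory samples the Boltzmann distribution of the potential, which is the statistical limit the dynamical model must respect.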
NASA Astrophysics Data System (ADS)
Sun, F.; Hall, A. D.; Walton, D.; Capps, S. B.; Qu, X.; Huang, H. J.; Berg, N.; Jousse, A.; Schwartz, M.; Nakamura, M.; Cerezo-Mota, R.
2012-12-01
Using a combination of dynamical and statistical downscaling techniques, we projected mid-21st century warming in the Los Angeles region at 2-km resolution. To account for uncertainty associated with the trajectory of future greenhouse gas emissions, we examined projections for both "business-as-usual" (RCP8.5) and "mitigation" (RCP2.6) emissions scenarios from the Fifth Coupled Model Intercomparison Project (CMIP5). To account for the considerable uncertainty associated with the choice of global climate model, we downscaled results for all available global climate models in CMIP5. For the business-as-usual scenario, we find that by the mid-21st century, the most likely warming is roughly 2.6°C averaged over the region's land areas, with 95% confidence that the warming lies between 0.9 and 4.2°C. The high resolution of the projections reveals a pronounced spatial pattern in the warming: high elevations and inland areas separated from the coast by at least one mountain complex warm 20 to 50% more than the areas near the coast or within the Los Angeles basin. This warming pattern is especially apparent in summertime. The summertime warming contrast between the inland and coastal zones has a large effect on the most likely expected number of extremely hot days per year. Coastal locations and areas within the Los Angeles basin see roughly two to three times the number of extremely hot days, while high elevations and inland areas typically experience approximately three to five times the number of extremely hot days. Under the mitigation emissions scenario, the most likely warming and increase in heat extremes are somewhat smaller. However, the majority of the warming seen in the business-as-usual scenario still occurs at all locations in the most likely case under the mitigation scenario, and heat extremes still increase significantly. This warming study is the first part of a series of studies in our project.
More climate change impacts on the Santa Ana wind, rainfall, snowfall and snowmelt, cloud and surface hydrology are forthcoming and can be found at www.atmos.ucla.edu/csrl. [Figure: the ensemble-mean, annual-mean surface air temperature change and its uncertainty from the available CMIP5 GCMs under the RCP8.5 (left) and RCP2.6 (right) emissions scenarios; unit: °C.]
NASA Technical Reports Server (NTRS)
Ramirez, Daniel Perez; Lyamani, H.; Olmo, F. J.; Whiteman, D. N.; Alados-Arboledas, L.
2012-01-01
This work presents the first analysis of long-term correlative day-to-night columnar aerosol optical properties. The aim is to better understand columnar aerosol dynamics from ground-based observations, which have been poorly studied until now. To this end we have used a combination of sun- and star-photometry measurements acquired in the city of Granada (37.16 N, 3.60 W, 680 m a.s.l.; South-East of Spain) from 2007 to 2010. For the whole study period, the mean aerosol optical depth (AOD) around 440 nm (+/- standard deviation) is 0.18 +/- 0.10 and 0.19 +/- 0.11 for daytime and nighttime, respectively, while the mean Ångström exponent (α) is 1.0 +/- 0.4 and 0.9 +/- 0.4 for daytime and nighttime. ANOVA statistical tests reveal that there are no significant differences between the AOD and α obtained at daytime and those at nighttime. Additionally, the mean daytime values of AOD and α obtained during this study period are coherent with the values obtained at the surrounding AERONET stations. On the other hand, AOD around 440 nm presents evident seasonal patterns, characterised by large values in summer (mean value of 0.20 +/- 0.10 both at daytime and nighttime) and low values in winter (mean value of 0.15 +/- 0.09 at daytime and 0.17 +/- 0.10 at nighttime). The Ångström exponents also present seasonal patterns, but with low values in summer (mean values of 0.8 +/- 0.4 and 0.9 +/- 0.4 at day- and night-time) and relatively large values in winter (mean values of 1.2 +/- 0.4 and 1.0 +/- 0.3 at daytime and nighttime). These seasonal patterns are explained by the differences in meteorological conditions and by the differences in the strength of the aerosol sources. To gain more insight into the changes in aerosol particles between day and night, the spectral differences of the Ångström exponent as a function of the Ångström exponent are also studied.
These analyses reveal increases in the fine-mode radius and in the fine-mode contribution to AOD during nighttime, which are more pronounced in the summer seasons. These variations are explained by changes in local aerosol sources and meteorological conditions between daytime and nighttime, as well as by aerosol aging processes. Case studies during summer and winter for different aerosol loads and types are also presented to clearly illustrate these findings.
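The Ångström exponent used throughout this record follows from AOD measured at two wavelengths via α = -ln(AOD₁/AOD₂) / ln(λ₁/λ₂). A sketch with illustrative values (not measurements from the study):

```python
import math

def angstrom_exponent(aod1, lam1, aod2, lam2):
    """Angstrom exponent from AOD at two wavelengths (same length
    units for lam1 and lam2, e.g. nm); larger alpha indicates a
    larger relative contribution of fine-mode particles."""
    return -math.log(aod1 / aod2) / math.log(lam1 / lam2)

# e.g. AOD 0.18 at 440 nm and 0.10 at 870 nm (illustrative values)
alpha = angstrom_exponent(0.18, 440.0, 0.10, 870.0)
```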
Developing a Web-based system by integrating VGI and SDI for real estate management and marketing
NASA Astrophysics Data System (ADS)
Salajegheh, J.; Hakimpour, F.; Esmaeily, A.
2014-10-01
The importance of real property in its various aspects, especially its impact on different sectors of the economy and on national macroeconomics, is clear. Because of the real, multi-dimensional and heterogeneous nature of housing as a commodity, several problems arise for the people involved in this field: the lack of an integrated system that includes comprehensive property information, the lack of awareness among some actors in this field of such comprehensive information, and the lack of clear and comprehensive rules and regulations for trading and pricing. In this research, the implementation of a crowd-sourced Web-based real estate support system is pursued. Creating a Spatial Data Infrastructure (SDI) within this system for collecting, updating and integrating all official data about property is also a goal of this study. The system employs a Web 2.0 broker and technologies such as Web services and service composition. This work aims to provide comprehensive and diverse information about property from different sources. For this purpose, a five-level real estate support system architecture is used. The PostgreSQL DBMS is used to implement the system, Geoserver serves as the map server and reference implementation of OGC (Open Geospatial Consortium) standards, and an Apache server hosts the web pages and user interfaces. Integrating the introduced methods and technologies provides a proper environment for various users to use the system and share their information. This goal can only be achieved through cooperation among all organizations involved in real estate, each implementing its required infrastructure as interoperable Web services.
p21{sup WAF1/Cip1/Sdi1} knockout mice respond to doxorubicin with reduced cardiotoxicity
Terrand, Jerome; Xu, Beibei; Morrissy, Steve; Dinh, Thai Nho; Williams, Stuart; Chen, Qin M.
2011-11-15
Doxorubicin (Dox) is an antineoplastic agent that can cause cardiomyopathy in humans and experimental animals. As an inducer of reactive oxygen species and a DNA damaging agent, Dox causes elevated expression of the p21{sup WAF1/Cip1/Sdi1} (p21) gene. Elevated levels of p21 mRNA and p21 protein have been detected in the myocardium of mice following Dox treatment. With chronic Dox treatment, wild type (WT) animals develop cardiomyopathy evidenced by elongated nuclei, mitochondrial swelling, myofilamental disarray, reduced cardiac output, reduced ejection fraction, reduced left ventricular contractility, and elevated expression of the ANF gene. In contrast, p21 knockout (p21KO) mice did not show significant changes in the same parameters in response to Dox treatment. In an effort to understand the mechanism of the resistance against Dox induced cardiomyopathy, we measured levels of antioxidant enzymes and found that p21KO mice did not contain elevated basal or inducible levels of glutathione peroxidase and catalase. Measurements of 6 circulating cytokines indicated elevation of IL-6, IL-12, IFN{gamma} and TNF{alpha} in Dox treated WT mice but not p21KO mice. Dox induced elevation of IL-6 mRNA was detected in the myocardium of WT mice but not p21KO mice. While the mechanism of the resistance against Dox induced cardiomyopathy remains unclear, lack of inflammatory response may contribute to the observed cardiac protection in p21KO mice. -- Highlights: ► Doxorubicin induces p21 elevation in the myocardium. ► Doxorubicin causes dilated cardiomyopathy in wild type mice. ► p21 knockout mice are resistant against doxorubicin induced cardiomyopathy. ► Lack of inflammatory response correlates with the resistance in p21 knockout mice.
NASA Astrophysics Data System (ADS)
Shreeman, Paul K.
The statistical dynamical diffraction theory, initially developed by the late Kato, remained in obscurity for many years due to an intense and difficult mathematical treatment that proved quite challenging to implement and apply. With the assistance of many authors in the past (including Bushuev, Pavlov, Punegov, and others), it became possible to implement this unique x-ray diffraction theory, which combines kinematical (ideally imperfect) and dynamical (ideally perfect) diffraction into a single system of equations controlled by two factors determined by the long-range order and the correlation function within the structure. The first stage was completed with the publication (Shreeman and Matyi, J. Appl. Cryst., 43, 550 (2010)) demonstrating the functionality of this theory with new modifications, hence called the modified statistical dynamical diffraction theory (mSDDT). The foundation of the theory is also incorporated into this dissertation, and the next stage, testing the model against several ion-implanted SiGe materials, has been published (Shreeman and Matyi, Physica Status Solidi (a), 208(11), 2533-2538, 2011). The dissertation, with all previous results summarized, then presents a comprehensive HRXRD analysis covering several different types of reflections (symmetrical, asymmetrical and skewed geometries). The dynamical results (with almost no defects) are compared with well-known commercial software. The defective materials, for which commercially available modeling software falls short, are then characterized and discussed in depth. The results exemplify the power of the novel approach in the modified statistical dynamical diffraction theory: the ability to detect and measure defective structures qualitatively and quantitatively. The analysis is compared alongside TEM data analysis for verification and confirmation.
The application of this theory will enable rapid, non-destructive characterization of relaxed, partially relaxed and fully strained semiconductors using HRXRD metrology.
NASA Astrophysics Data System (ADS)
Gauer, P.; Lied, K.; Bakkehoi, S.; Kronholm, K.; Rammer, L.; Hoeller, P.
2009-04-01
Hazard and risk assessment in avalanche-prone areas involves the estimation of the runout of potential avalanches. Methods for determining the runout may be divided into two groups: 1) statistical methods such as the well-known α-β model, and 2) numerical avalanche models such as the PCM model or Voellmy-Salm type models (to name only the more traditional ones). The latter approach has the advantage that, besides the runout, information on velocity and impact pressure distributions along the avalanche track can also be obtained. However, the success of dynamical models depends on the knowledge of suitable rheological models and their parameters. The statistical α-β model was developed at NGI and gives the maximum runout distance solely as a function of topography. The runout distance equations are found by regression analysis, correlating the longest registered runout distance from several hundred avalanche paths with a selection of topographic parameters. Similar regression analyses were also performed for different regions of the United States, Canada and Austria. We re-evaluate the Norwegian and Austrian avalanche data on which the α-β model was based with respect to dynamical measures. As all of those avalanche data belong more or less to extreme events (i.e. avalanches with return periods of 100 years or more), the dynamical measures can give hints towards rheological models suitable for simulating extreme avalanche events.
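The α-β regression described above amounts to an ordinary least-squares fit of the runout angle α against the topographic β angle. The sketch below uses hypothetical path data constructed to lie on the classic Norwegian relation α ≈ 0.96β − 1.4° (coefficients quoted here only as an illustration, not the actual NGI data set):

```python
def fit_alpha_beta(betas, alphas):
    """Ordinary least-squares fit alpha = c1*beta + c0 (angles in degrees),
    the form used in topographic runout regressions of the alpha-beta type."""
    n = len(betas)
    mb = sum(betas) / n
    ma = sum(alphas) / n
    sxx = sum((b - mb) ** 2 for b in betas)
    sxy = sum((b - mb) * (a - ma) for b, a in zip(betas, alphas))
    c1 = sxy / sxx
    c0 = ma - c1 * mb
    return c1, c0

# Hypothetical paths lying exactly on alpha = 0.96*beta - 1.4 (degrees)
betas = [25.0, 30.0, 35.0, 40.0, 45.0]
alphas = [0.96 * b - 1.4 for b in betas]
c1, c0 = fit_alpha_beta(betas, alphas)
print(round(c1, 2), round(c0, 1))  # recovers the assumed 0.96 and -1.4
```

In practice the regression is fitted to the longest registered runouts of hundreds of paths, and the residual standard deviation is used to attach return periods to predicted runout distances.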
Rundle, John B.; Klein, William
2015-09-29
We have carried out research to determine the dynamics of failure in complex geomaterials, specifically focusing on the role of defects, damage and asperities in the catastrophic failure processes (now popularly termed “Black Swan events”). We have examined fracture branching and flow processes using models for invasion percolation, focusing particularly on the dynamics of bursts in the branching process. We have achieved a fundamental understanding of the dynamics of nucleation in complex geomaterials, specifically in the presence of inhomogeneous structures.
Dayou, Fabrice; Larrégaray, Pascal; Bonnet, Laurent; Rayez, Jean-Claude; Arenas, Pedro Nilo; González-Lezana, Tomás
2008-05-01
The dynamics of the singlet channel of the Si + O2 → SiO + O reaction is investigated by means of quasiclassical trajectory (QCT) calculations and two statistically based methods, the statistical quantum method (SQM) and a semiclassical version of phase space theory (PST). The dynamics calculations have been performed on the ground ¹A′ potential energy surface of Dayou and Spielfiedel [J. Chem. Phys. 119, 4237 (2003)] for a wide range of collision energies (Ec = 5-400 meV) and initial O2 rotational states (j = 1-13). The overall dynamics is found to be highly sensitive to the selected initial conditions of the reaction, with an increase in either the collision energy or the O2 rotational excitation giving rise to a continuous transition from a direct abstraction mechanism to an indirect insertion mechanism. The product state properties associated with a collision energy of 135 meV and low rotational excitation of O2 are found to be consistent with the inverted SiO vibrational state distribution observed in a recent experiment. The SQM and PST statistical approaches, especially designed to deal with complex-forming reactions, provide an accurate description of the QCT total integral cross sections and opacity functions for all cases studied. The ability of such statistical treatments to provide reliable product state properties for a reaction dominated by a competition between abstraction and insertion pathways is carefully examined, and it is shown that valuable information can be extracted over a wide range of selected initial conditions. PMID:18465922
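The opacity functions and integral cross sections mentioned above come from counting reactive trajectories as a function of impact parameter b. A toy Monte Carlo sketch with an assumed analytic opacity function P(b) = exp(−b²), which has nothing to do with the actual Si + O2 surface, shows the standard b-weighted sampling:

```python
import math
import random

def qct_cross_section(p_react, b_max, n_traj, seed=0):
    """Monte Carlo estimate of sigma = 2*pi * integral_0^bmax P(b) b db.
    Sampling b = b_max*sqrt(u) with u uniform gives the correct b-weighting,
    so sigma = pi*b_max**2 * <P>.  In a real QCT study each trajectory
    contributes 0 or 1 (reactive or not); here we sum an assumed smooth
    reaction probability instead."""
    rng = random.Random(seed)
    total = sum(p_react(b_max * math.sqrt(rng.random())) for _ in range(n_traj))
    return math.pi * b_max ** 2 * total / n_traj

# Toy opacity function: exact answer is pi*(1 - exp(-b_max**2))
sigma = qct_cross_section(lambda b: math.exp(-b * b), 3.0, 200_000)
print(round(sigma, 2))  # close to pi*(1 - exp(-9)) ≈ 3.14
```

The same trajectory counts, binned in b, yield the opacity function P(b) that the SQM and PST treatments are benchmarked against.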
NASA Technical Reports Server (NTRS)
Mcmillan, S. L. W.; Lightman, A. P.
1984-01-01
A unified N-body and statistical treatment of stellar dynamics is developed and applied to the late stages of core collapse and early stages of post-collapse evolution in globular clusters. A 'hybrid' computer code joins a direct N-body code, used to calculate exactly the behavior of particles in the inner spatial region, with a statistical treatment used to follow particles in the outer spatial region. A transition zone allows the exchange of particles and energy between the two regions. The main application results include: formation of a hard central binary system, reversal of core collapse and expansion due to the heat input from this binary, ejection of the binary from the core, and recollapse of the core; density profiles that form a one-parameter sequence during the core oscillations; and indications that these oscillations will eventually cease.
Technology Transfer Automated Retrieval System (TEKTRAN)
It is known that irrigation application method can impact crop water use and water use efficiency, but the mechanisms involved are incompletely understood, particularly in terms of the water and energy balances during the growing season from pre-irrigation through planting, early growth and yield de...
Gessner, Manuel; Breuer, Heinz-Peter
2013-04-01
We obtain exact analytic expressions for a class of functions expressed as integrals over the Haar measure of the unitary group in d dimensions. Based on these general mathematical results, we investigate generic dynamical properties of complex open quantum systems, employing arguments from ensemble theory. We further generalize these results to arbitrary eigenvalue distributions, allowing a detailed comparison of typical regular and chaotic systems with the help of concepts from random matrix theory. To illustrate the physical relevance and the general applicability of our results we present a series of examples related to the fields of open quantum systems and nonequilibrium quantum thermodynamics. These include the effect of initial correlations, the average quantum dynamical maps, the generic dynamics of system-environment pure state entanglement and, finally, the equilibration of generic open and closed quantum systems. PMID:23679393
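The Haar-measure averages underlying these results can be checked numerically for small d. The sketch below samples Haar-random 2×2 unitaries by Gram-Schmidt orthonormalization of complex Gaussian vectors (a Ginibre matrix) and verifies the elementary identity that the average of |U₀₀|² over U(d) equals 1/d; it illustrates the setting only, not the paper's analytic expressions:

```python
import math
import random

def haar_unitary_2x2(rng):
    """Draw a Haar-random 2x2 unitary: Gram-Schmidt orthonormalization
    of two independent complex Gaussian vectors."""
    def gvec():
        return [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(2)]
    v1 = gvec()
    n1 = math.sqrt(sum(abs(z) ** 2 for z in v1))
    e1 = [z / n1 for z in v1]
    v2 = gvec()
    ov = sum(a.conjugate() * b for a, b in zip(e1, v2))  # <e1, v2>
    w = [b - ov * a for a, b in zip(e1, v2)]             # project out e1
    n2 = math.sqrt(sum(abs(z) ** 2 for z in w))
    e2 = [z / n2 for z in w]
    return [e1, e2]  # rows are orthonormal, so the matrix is unitary

# Haar identity: average of |U_00|^2 over U(d) is 1/d (here d = 2)
rng = random.Random(42)
mean = sum(abs(haar_unitary_2x2(rng)[0][0]) ** 2 for _ in range(20000)) / 20000
print(round(mean, 2))  # ≈ 0.50
```

The paper's results replace such sampling with exact integrals over the unitary group, which is what makes statements about *generic* open-system dynamics possible.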
NASA Astrophysics Data System (ADS)
Pahlavani, M. R.; Firoozi, B.
2015-11-01
Within a developed particle-hole approach, a systematic study of the β⁻ transition from the ground state of the 16N nucleus to the ground and some excited states of the 16O nucleus has been carried out. The energy spectrum and the wave functions of pure configurations of the 16N and 16O nuclei are obtained numerically using the mean-field shell model with a Woods-Saxon nuclear potential including spin-orbit and Coulomb interactions. Considering the SDI residual interaction, mixed configurations of ground and excited pnTDA and TDA states are extracted for the aforementioned nuclei. These energy spectra and the corresponding eigenstates agree closely with the experimental energy spectrum and eigenstates after adjusting the residual potential parameters using the Nelder-Mead (NM) algorithm. In this approach, the endpoint energy, log ft and the partial half-lives of some possible transitions are calculated. The results obtained using the optimized SDI approach are reasonably close to the available experimental data.
Ravichandiran, Vinothkannan; Shanmugam, Karthi; Solomon, Adline Princy
2013-09-01
Plants have always been a supreme source of drugs, and India is endowed with a wide variety of them with high medicinal value. The quorum sensing (QS) quenching efficiency of various solvent extracts of Melia dubia seeds was investigated against uropathogenic Escherichia coli (UPEC) to screen for a competitive inhibitor of SdiA, a transcriptional activator of quorum sensing in E. coli. In this study, the potential of five different extracts of Melia dubia seeds for quorum sensing inhibitory activity was investigated against UPEC. Assays of cell density, swarming motility, protein, protease, hemolysis, hemagglutination, hydrophobicity and biofilm inhibition were performed. Biofilm formation, hemolysis and swarming motility were inhibited by 92.1%, 20.9% and 48.52%, respectively, when the medium was supplemented with 30 mg/ml of the ethanolic extract. The GC-MS spectrum of the ethanolic extract showed an array of 27 structurally unlinked compounds with the natural ligand C8HSL. Docking against the QS transcriptional regulator SdiA was predicted by in silico studies, and the ligand C6 showed significant activity with a GScore of -10.8. The in vitro and in silico docking analyses showed a fairly good correlation, suggesting that the ethanolic extract has the potency to attenuate quorum sensing of uropathogenic E. coli. Further in vitro and in vivo studies are necessary to foresee the quorum quenching effect of the ligands. PMID:23210902
Soofer, R.M.
1987-01-01
This dissertation has four distinctive aspects. 1. By outlining the position of France, Britain, and West Germany on SDI and BMD, it hopes to elucidate the nature and extent of official and private European criticism and support for research into BMD as well as actual deployment of missile defenses - both in the US and Western Europe. 2. By examining European strategic thought as it pertains to deterrence, NATO strategy, and arms control, it attempts to explain the basis for the various views of SDI held by European governments and opposition groups, while affording the reader a better understanding of the Western European security predicament as well. 3. By analyzing the impact of various BMD deployment schemes in the continental US, Western Europe, and Soviet Union - on NATO strategy and European security, it hopes to contribute to the ongoing search for ways to strengthen NATO defense, and hence, deterrence capabilities. 4. Finally, this study seeks to examine the relationship between generally held security paradigms and specific strategic force initiatives. It is concluded that missile defenses of US strategic nuclear forces and command structure, as well as limited area defense of the continental US, would contribute to western European security by strengthening the credibility of the US strategic nuclear guarantee - the bedrock of NATO strategy.
Terai, Asuka; Nakagawa, Masanori
2007-08-01
The purpose of this paper is to construct a model that represents the human process of understanding metaphors, focusing specifically on similes of the form "an A like B". Generally speaking, human beings are able to generate and understand many sorts of metaphors. This study constructs the model based on a probabilistic knowledge structure for concepts, computed from a statistical analysis of a large-scale corpus. Consequently, this model is able to cover the many kinds of metaphors that human beings can generate. Moreover, the model implements the dynamic process of metaphor understanding by using a neural network with dynamic interactions. Finally, the validity of the model is confirmed by comparing model simulations with the results from a psychological experiment. PMID:17696291
NASA Astrophysics Data System (ADS)
Kochendorfer, J. P.; Ramírez, J. A.
2010-10-01
The statistical-dynamical annual water balance model of Eagleson (1978) is a pioneering work in the analysis of climate, soil and vegetation interactions. This paper describes several enhancements and modifications to the model that improve its physical realism at the expense of its mathematical elegance and analytical tractability. In particular, the analytical solutions for the root zone fluxes are re-derived using separate potential rates of transpiration and bare-soil evaporation. Those potential rates, along with the rate of evaporation from canopy interception, are calculated using the two-component Shuttleworth-Wallace (1985) canopy model. In addition, the soil column is divided into two layers, with the upper layer representing the dynamic root zone. The resulting ability to account for changes in root-zone water storage allows for implementation at the monthly timescale. This new version of the Eagleson model is coined the Statistical-Dynamical Ecohydrology Model (SDEM). The ability of the SDEM to capture the seasonal dynamics of the local-scale soil-water balance is demonstrated for two grassland sites in the US Great Plains. Sensitivity of the results to variations in peak green leaf area index (LAI) suggests that the mean peak green LAI is determined by some minimum in root zone soil moisture during the growing season. That minimum appears to be close to the soil matric potential at which the dominant grass species begins to experience water stress and well above the wilting point, thereby suggesting an ecological optimality hypothesis in which the need to avoid water-stress-induced leaf abscission is balanced by the maximization of carbon assimilation (and associated transpiration). 
Finally, analysis of the sensitivity of model-determined peak green LAI to soil texture shows that the coupled model is able to reproduce the so-called "inverse texture effect", which consists of the observation that natural vegetation in dry climates tends to be most productive in sandier soils despite their lower water holding capacity. Although the determination of LAI based on complete or near-complete utilization of soil moisture is not a new approach in ecohydrology, this paper demonstrates its use for the first time with a new monthly statistical-dynamical model of the water balance. Accordingly, the SDEM provides a new framework for studying the controls of soil texture and climate on vegetation density and evapotranspiration.
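The two-layer structure with a dynamic root zone can be caricatured as a monthly bucket model. The sketch below is emphatically not the SDEM (no Shuttleworth-Wallace canopy, no statistical precipitation model, no Eagleson-style analytical fluxes); it merely illustrates why tracking root-zone storage permits implementation at the monthly timescale. All parameters and forcing values are hypothetical:

```python
def step_month(s_root, s_deep, precip, pet, pars):
    """One month of a toy two-layer bucket (all fluxes in mm/month).
    pars: root-zone capacity 'root_cap' and deep drainage fraction
    'drain_frac'.  Illustrative only; not the SDEM equations."""
    s_root += precip
    # Evapotranspiration limited by atmospheric demand and root-zone supply
    et = min(pet, s_root)
    s_root -= et
    # Excess above root-zone capacity percolates to the deep layer
    if s_root > pars["root_cap"]:
        s_deep += s_root - pars["root_cap"]
        s_root = pars["root_cap"]
    # A fixed fraction of deep storage drains as baseflow
    runoff = pars["drain_frac"] * s_deep
    s_deep -= runoff
    return s_root, s_deep, et, runoff

pars = {"root_cap": 150.0, "drain_frac": 0.3}
s_root, s_deep = 100.0, 50.0
for p, pet in [(80, 60), (20, 120), (60, 90)]:  # hypothetical monthly forcing
    s_root, s_deep, et, runoff = step_month(s_root, s_deep, p, pet, pars)
print(round(s_root, 1), round(s_deep, 1))
```

The point of the illustration: when demand exceeds supply (months 2 and 3 above), the carried-over root-zone storage, not just that month's rainfall, determines actual evapotranspiration, which is the seasonal memory the annual Eagleson model could not represent.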
NASA Astrophysics Data System (ADS)
Kochendorfer, J. P.; Ramírez, J. A.
2008-03-01
The statistical-dynamical annual water balance model of Eagleson (1978) is a pioneering work in the analysis of climate, soil and vegetation interactions. This paper describes several enhancements and modifications to the model that improve its physical realism at the expense of its mathematical elegance and analytical tractability. In particular, the analytical solutions for the root zone fluxes are re-derived using separate potential rates of transpiration and bare-soil evaporation. Those potential rates, along with the rate of evaporation from canopy interception, are calculated using the two-component Shuttleworth-Wallace (1985) canopy model. In addition, the soil column is divided into two layers, with the upper layer representing the dynamic root zone. The resulting ability to account for changes in root-zone water storage allows for implementation at the monthly timescale. This new version of the Eagleson model is coined the Statistical-Dynamical Ecohydrology Model (SDEM). The ability of the SDEM to capture the seasonal dynamics of the local-scale soil-water balance is demonstrated for two grassland sites in the US Great Plains. Sensitivity of the results to variations in peak green Leaf Area Index (LAI) suggests that the mean peak green LAI is determined by some minimum in root zone soil moisture during the growing season. That minimum appears to be close to the soil matric potential at which the dominant grass species begins to experience water stress and well above the wilting point, thereby suggesting an ecological optimality hypothesis in which the need to avoid water-stress-induced leaf abscission is balanced by the maximization of carbon assimilation (and associated transpiration). 
Finally, analysis of the sensitivity of model-determined peak green LAI to soil texture shows that the coupled model is able to reproduce the so-called "inverse texture effect", which consists of the observation that natural vegetation in dry climates tends to be most productive in sandier soils despite their lower water holding capacity. Although the determination of LAI based on near-complete utilization of soil moisture is not a new approach in ecohydrology, this paper demonstrates its use for the first time with a new monthly statistical-dynamical model of the water balance. Accordingly, the SDEM provides a new framework for studying the controls of soil texture and climate on vegetation density and evapotranspiration.
NASA Astrophysics Data System (ADS)
Hong, Mei; Zhang, Ren; Wang, Dong; Chen, Xi; Shi, Jian; Singh, Vijay
2014-12-01
The western Pacific subtropical high (WPSH) is closely correlated with the East Asian climate. To date, the underlying mechanisms and sustaining factors have not been fully elucidated. Based on the concept of dynamical system model reconstruction, this paper presents a nonlinear statistical-dynamical model of the subtropical high ridge line (SHRL) together with four summer monsoon factors. SHRL variations from 1990 to 2011 are subdivided into three categories, and the parameter differences among the three corresponding models are examined. The dynamical characteristics of the SHRL are analyzed and a mechanism for its aberrance subsequently developed. Modeling suggests that different parameters may lead to significant variance in the monsoon variables corresponding to various WPSH activities. Dynamical system bifurcation and mutation analysis indicates that the South China Sea monsoon trough is a significant factor in the occurrence and maintenance of the 'double-ridge' phenomenon. Moreover, the occurrence of the Mascarene cold high is predicted to cause an abnormally northward location of the WPSH, resulting in the 'empty plum' phenomenon.
NASA Astrophysics Data System (ADS)
Zhao, Jun-Hu; Yang, Liu; Hou, Wei; Liu, Gang; Zeng, Yu-Xing
2015-05-01
The cold vortex is a major high-impact weather system in northeast China during the warm season, and its frequent activity also affects the short-term climate throughout eastern China. How to objectively and quantitatively predict the intensity trend of the cold vortex is an urgent and difficult problem for current short-term climate prediction. Based on the principle of dynamical-statistical combination, the predicted results of the Beijing Climate Center's global atmosphere-ocean coupled model and rich historical data are used for dynamical-statistical extra-seasonal prediction testing and actual prediction of the summer 500-hPa geopotential height over the cold vortex activity area. The results show that this method can significantly reduce the model's prediction error over the cold vortex activity area and improve the prediction skill. Furthermore, the results of a sensitivity test reveal that the predicted results are highly dependent on the quantity of similar factors and the number of similar years. Project supported by the National Natural Science Foundation of China (Grant No. 41375078), the National Basic Research Program of China (Grant Nos. 2012CB955902 and 2013CB430204), and the Special Scientific Research Fund of Public Welfare Profession of China (Grant No. GYHY201306021).
NASA Astrophysics Data System (ADS)
Verma, M.; Denker, C.
2014-03-01
Context. Solar pores are penumbra-lacking magnetic features that mark two important transitions in the spectrum of magnetohydrodynamic processes: (1) the magnetic field becomes sufficiently strong to suppress the convective energy transport, and (2) at some critical point some pores develop a penumbra and become sunspots. Aims: The purpose of this statistical study is to comprehensively describe solar pores in terms of their size, perimeter, shape, photometric properties, and horizontal proper motions. The seeing-free and uniform data of the Japanese Hinode mission provide an opportunity to compare flow fields in the vicinity of pores in different environments and at various stages of their evolution. Methods: The extensive database of high-resolution G-band images observed with the Hinode Solar Optical Telescope (SOT) is a unique resource for deriving statistical properties of pores using advanced digital image processing techniques. The study is based on two data sets: (1) photometric and morphological properties inferred from single G-band images covering almost seven years from 2006 October 25 to 2013 August 31; and (2) horizontal flow fields derived from 356 one-hour sequences of G-band images using local correlation tracking (LCT) for a shorter period from 2006 November 3 to 2008 January 6 comprising 13 active regions. Results: A total of 7643/2863 (single/time-averaged) pores forms the foundation of the statistical analysis. Pores are preferentially observed at low latitudes in the southern hemisphere during the deep minimum of solar cycle No. 23. This imbalance reverses during the rise of cycle No. 24, when the pores migrate from high to low latitudes. Pores are rarely encountered in quiet-Sun G-band images, and only about 10% of pores exist in isolation. In general, pores do not exhibit a circular shape. Typical aspect ratios of the semi-major and -minor axes are 3:2 when ellipses are fitted to pores.
Smaller pores (more than two-thirds are smaller than 5 Mm²) tend to be more circular, and their boundaries are less corrugated. Both the area and perimeter length of pores obey log-normal frequency distributions. The frequency distribution of the intensity can be reproduced by two Gaussians representing dark and bright components. Bright features resembling umbral dots and even light bridges cover about 20% of the pores' area. Averaged radial profiles show a peak in the intensity at normalized radius RN = r/Rpore = 2.1, followed by maxima of the divergence at RN = 2.3 and the radial component of the horizontal velocity at RN = 4.6. The divergence is negative within pores strongly suggesting converging flows towards the center of pores, whereas exterior flows are directed towards neighboring supergranular boundaries. The photometric radius of pores, where the intensity reaches quiet-Sun levels at RN = 1.4, corresponds to the position where the divergence is zero at RN = 1.6. Conclusions: Morphological and photometric properties as well as horizontal flow fields have been obtained for a statistically meaningful sample of pores. This provides critical boundary conditions for MHD simulations of magnetic flux concentrations, which eventually evolve into sunspots or just simply erode and fade away. Numerical models of pores (and sunspots) have to fit within these confines, and more importantly ensembles of pores have to agree with the frequency distributions of observed parameters.
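A log-normal frequency distribution, as reported here for pore areas and perimeter lengths, is conventionally fitted by taking the mean and standard deviation of the logarithms of the data. A sketch on synthetic data drawn from a known log-normal (not the Hinode sample):

```python
import math
import random
import statistics

def fit_lognormal(samples):
    """Maximum-likelihood fit of a log-normal distribution:
    mu and sigma are the mean and standard deviation of log(x)."""
    logs = [math.log(x) for x in samples]
    return statistics.fmean(logs), statistics.pstdev(logs)

# Synthetic 'pore areas' drawn from a log-normal with mu=0.0, sigma=0.5
rng = random.Random(7)
areas = [rng.lognormvariate(0.0, 0.5) for _ in range(5000)]
mu, sigma = fit_lognormal(areas)
print(round(mu, 2), round(sigma, 2))
```

Plotting a histogram of log(area) then gives the familiar Gaussian shape, which is how log-normality of size distributions is usually judged by eye before formal testing.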
Quantum statistical determinism
Bitsakis, E.
1988-03-01
This paper attempts to analyze the concept of quantum statistical determinism. This is done after we have clarified the epistemic difference between causality and determinism and discussed the content of classical forms of determinism-mechanical and dynamical. Quantum statistical determinism transcends the classical forms, for it expresses the multiple potentialities of quantum systems. The whole argument is consistent with a statistical interpretation of quantum mechanics.
NASA Astrophysics Data System (ADS)
Perdigão, R. A. P.; Bloeschl, G.
2014-12-01
Emergent features of landscape-climate coevolution are evaluated on the basis of the sensitivity of floods to annual precipitation in space and time. For that purpose, a spatiotemporal sensitivity analysis is performed at the regional scale using data from 804 catchments in Austria from 1976 to 2008. Results show that flood peaks are more responsive to spatial (regional) than to temporal (decadal) variability. Space-wise, a 10% increase in precipitation leads to a 23% increase in flood peaks in Austria, whereas time-wise a 10% increase in precipitation leads to an increase of just 6% in flood peaks. Catchments in dry lowlands and high wetlands exhibit similarity between the spatial and temporal sensitivities (spatiotemporal symmetry) and low landscape-climate codependence. This suggests that such regions are not coevolving significantly. However, intermediate regions show differences between those sensitivities (symmetry breaking) and higher landscape-climate codependence, suggesting that coevolution is underway. The breaking of symmetry is considered an emergent behavior of the coupled system. A new coevolution index is then proposed, relating spatiotemporal symmetry to relative characteristic celerities. The descriptive assessment of coevolution is complemented by a simple nonlinear dynamical model of landscape-climate coevolution, in which landform evolution processes take place at the millennial scale (slow dynamics) and climate adjusts over years to decades (fast dynamics). Coevolution is expressed by the interplay between slow and fast dynamics, represented, respectively, by spatial and temporal characteristics. The model captures key features of the joint landscape-climate distribution, supporting the descriptive assessment. This paper ultimately brings to light signatures of coevolution that arise from the nonlinear coupling of the landscape-climate system at slow and fast time scales. The presented work builds on Perdigão, R. A. P., and G. Blöschl (2014), Spatiotemporal flood sensitivity to annual precipitation: Evidence for landscape-climate coevolution, Water Resour. Res., 50, doi:10.1002/2014WR015365.
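The quoted sensitivities (a 10% precipitation increase mapping to a 23% flood-peak increase space-wise) correspond to an elasticity of roughly 2.3, which can be estimated as the slope of a log-log regression. A sketch with hypothetical catchment data constructed to have exactly that elasticity:

```python
import math

def elasticity(x, y):
    """Elasticity of y with respect to x, estimated as the OLS slope of
    ln(y) on ln(x): a 10% change in x maps to roughly slope*10% in y."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    mx = sum(lx) / len(lx)
    my = sum(ly) / len(ly)
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Hypothetical catchments whose flood peaks scale as Q ~ P**2.3
precip = [600.0, 800.0, 1000.0, 1400.0, 2000.0]
floods = [0.01 * p ** 2.3 for p in precip]
print(round(elasticity(precip, floods), 2))  # → 2.3
```

Computing this slope across catchments (space) and separately across years within each catchment (time) gives the two sensitivities whose agreement or mismatch defines the spatiotemporal symmetry discussed above.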
Technology Transfer Automated Retrieval System (TEKTRAN)
Quorum sensing transcriptional regulator SdiA has been shown to enhance the survival of Escherichia coli O157:H7 (O157) in the acidic compartment of bovine rumen in response to N-acyl-L-homoserine lactones (AHLs) produced by the rumen bacteria. Bacteria that survive the rumen environment subsequentl...
NASA Astrophysics Data System (ADS)
Mutz, Sebastian; Paeth, Heiko; Winkler, Stefan
2016-03-01
The long-term behaviour of Norwegian glaciers is reflected by the long mass-balance records provided by the Norwegian Water Resources and Energy Directorate. These show positive annual mass balances at maritime glaciers in the 1980s and 1990s, followed by rapid mass loss since 2000. This study assesses the influence of various atmospheric variables on mass changes of selected Norwegian glaciers by correlation and cross-validated stepwise multiple regression analyses. The atmospheric variables are constructed from reanalyses by the National Centers for Environmental Prediction and the European Centre for Medium-Range Weather Forecasts. Transfer functions determined by the multiple regression are applied to predictors derived from a multi-model ensemble of climate projections to estimate future mass-balance changes until 2100. The statistical relationship to the North Atlantic Oscillation (NAO), the strongest predictor, is highest for maritime glaciers and lower for more continental ones. The mass surplus in the 1980s and 1990s can be attributed to a strong NAO phase and lower air temperatures during the ablation season. The mass loss since 2000 can be explained by an increase in summer air temperatures and a slight weakening of the NAO. From 2000 to 2100 the statistical model predicts changes for glaciers in more continental settings of c. -20 m w.e. (water equivalent), i.e. -0.2 m w.e./a. The corresponding range for their more maritime counterparts is -0.5 to +0.2 m w.e./a. Results from Bayesian classification of observed atmospheric states associated with high melt or high accumulation in the past into different simulated future climates suggest that climatic conditions towards the end of the twenty-first century favour less winter accumulation and more ablation in summer.
The posterior probabilities for high accumulation at the end of the twenty-first century are typically 1.5-3 times lower than in the twentieth century while the posterior probabilities for high melt are often 1.5-3 times higher at the end of the twenty-first century than in the twentieth and early twenty-first century.
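The correlation screening used to identify the strongest predictor (the NAO in the study above) can be illustrated with a short sketch on synthetic data; the predictor roles, coefficients, and noise levels below are invented for illustration and are not the study's data:

```python
import math
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
# Synthetic 40-year record in which predictor 0 (an NAO-like index) dominates
# the "mass balance" signal, predictor 1 (a temperature-like series) enters
# weakly, and predictor 2 is irrelevant.
n = 40
predictors = [[random.gauss(0, 1) for _ in range(n)] for _ in range(3)]
balance = [1.5 * predictors[0][t] - 0.4 * predictors[1][t] + random.gauss(0, 0.3)
           for t in range(n)]

# Screening step: rank candidate predictors by |r| with the target series.
corrs = [abs(pearson(p, balance)) for p in predictors]
best = max(range(3), key=corrs.__getitem__)
print("strongest predictor:", best, "|r| =", round(corrs[best], 2))
```

In the real analysis this screening is followed by cross-validated stepwise multiple regression; the sketch shows only the correlation ranking that singles out the dominant predictor.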
NASA Astrophysics Data System (ADS)
Vannitsem, Stéphane; Lucarini, Valerio
2016-06-01
We study a simplified coupled atmosphere-ocean model using the formalism of covariant Lyapunov vectors (CLVs), which link physically-based directions of perturbations to growth/decay rates. The model is obtained via a severe truncation of quasi-geostrophic equations for the two fluids, and includes a simple yet physically meaningful representation of their dynamical/thermodynamical coupling. The model has 36 degrees of freedom, and the parameters are chosen so that a chaotic behaviour is observed. There are two positive Lyapunov exponents (LEs), sixteen negative LEs, and eighteen near-zero LEs. The presence of many near-zero LEs results from the vast time-scale separation between the characteristic time scales of the two fluids, and leads to nontrivial error growth properties in the tangent space spanned by the corresponding CLVs, which are geometrically very degenerate. Such CLVs correspond to two different classes of ocean/atmosphere coupled modes. The tangent space spanned by the CLVs corresponding to the positive and negative LEs has, instead, a non-pathological behaviour, and one can construct robust large deviations laws for the finite time LEs, thus providing a universal model for assessing predictability on long to ultra-long scales along such directions. Interestingly, the tangent space of the unstable manifold has substantial projection on both atmospheric and oceanic components. The results show the difficulties in using hyperbolicity as a conceptual framework for multiscale chaotic dynamical systems, whereas the framework of partial hyperbolicity seems better suited, possibly indicating an alternative definition for the chaotic hypothesis. They also suggest the need for an accurate analysis of error dynamics on different time scales and domains and for a careful set-up of assimilation schemes when looking at coupled atmosphere-ocean models.
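The Lyapunov-exponent machinery underlying this analysis can be illustrated on a much smaller system. The sketch below applies the standard Benettin tangent-space method with Gram-Schmidt reorthonormalization to the 2-D Hénon map, a stand-in for (not a reduction of) the 36-variable coupled model, recovering one positive and one negative exponent whose sum equals log|det J| = log 0.3:

```python
import math

# Hénon map: x' = 1 - a*x^2 + y, y' = b*x, with Jacobian [[-2ax, 1], [b, 0]].
a, b = 1.4, 0.3
x, y = 0.1, 0.1
for _ in range(200):                      # discard transient onto the attractor
    x, y = 1 - a * x * x + y, b * x

v = [[1.0, 0.0], [0.0, 1.0]]              # two tangent vectors
sums = [0.0, 0.0]
steps = 20000
for _ in range(steps):
    jac = [[-2 * a * x, 1.0], [b, 0.0]]   # Jacobian at the current state
    x, y = 1 - a * x * x + y, b * x
    v = [[jac[0][0] * w[0] + jac[0][1] * w[1],
          jac[1][0] * w[0] + jac[1][1] * w[1]] for w in v]
    # Gram-Schmidt reorthonormalization, accumulating the log growth factors
    n1 = math.hypot(v[0][0], v[0][1])
    v[0] = [v[0][0] / n1, v[0][1] / n1]
    dot = v[1][0] * v[0][0] + v[1][1] * v[0][1]
    v[1] = [v[1][0] - dot * v[0][0], v[1][1] - dot * v[0][1]]
    n2 = math.hypot(v[1][0], v[1][1])
    v[1] = [v[1][0] / n2, v[1][1] / n2]
    sums[0] += math.log(n1)
    sums[1] += math.log(n2)

les = [s / steps for s in sums]
print("Lyapunov exponents:", [round(l, 3) for l in les])
```

Covariant Lyapunov vectors require a further backward pass (e.g. the Ginelli et al. algorithm); the forward Benettin iteration above yields only the exponents and the backward Lyapunov vectors.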
Diegert, Carl F.
2006-12-01
We define a new diagnostic method where computationally-intensive numerical solutions are used as an integral part of making difficult, non-contact, nanometer-scale measurements. The limited scope of this report comprises most of a due diligence investigation into implementing the new diagnostic for measuring dynamic operation of Sandia's RF Ohmic Switch. Our results are all positive, providing insight into how this switch deforms during normal operation. Future work should contribute important measurements on a variety of operating MEMS devices, with insights that are complementary to those from measurements made using interferometry and laser Doppler methods. More generally, the work opens up a broad front of possibilities where exploiting massive high-performance computers enables new measurements.
NASA Technical Reports Server (NTRS)
Mcmillan, S. L. W.
1986-01-01
The period immediately following the core collapse phase in the evolution of a globular cluster is studied using a hybrid N-body/Fokker-Planck stellar dynamical code. Several core oscillations of the type predicted in earlier work are seen. The oscillations are driven by the formation, hardening, and ejection of binaries by three-body processes, and appear to decay on a timescale of about 10 to the 7th yr, for the choice of 'typical' cluster parameters made here. There is no evidence that they are gravothermal in nature. The mechanisms responsible for the decay are discussed in some detail. The distribution of hard binaries produced by the oscillations is compared with theoretical expectations and the longer term evolution of the system is considered.
NASA Astrophysics Data System (ADS)
Franchito, Sergio H.; Brahmananda Rao, V.; Moraes, E. C.
2011-11-01
In this study, a zonally-averaged statistical-dynamical climate model (SDM) is used to investigate the impact of global warming on the distribution of the geobotanic zones over the globe. The model includes a parameterization of the biogeophysical feedback mechanism that links the state of the surface to the atmosphere (a bidirectional interaction between vegetation and climate). In the control experiment (simulation of the present-day climate) the geobotanic state is well simulated by the model, and the distribution of the geobotanic zones over the globe shows very good agreement with observations. The impact of global warming on the distribution of the geobotanic zones is investigated considering the increase of CO2 concentration for the B1, A2 and A1FI scenarios. The results show that the geobotanic zones over the entire earth can be modified in the future due to global warming. Expansion of subtropical desert and semi-desert zones in the Northern and Southern Hemispheres, retreat of glaciers and sea-ice (with the Arctic region being particularly affected) and a reduction of the tropical rainforest and boreal forest can occur due to the increase of greenhouse gas concentrations. The effects are more pronounced in the A1FI and A2 scenarios compared with the B1 scenario. The SDM results confirm IPCC AR4 projections of future climate and are consistent with simulations of more complex GCMs, reinforcing the necessity of mitigating the climate change associated with global warming.
The non-statistical dynamics of the ¹⁸O + ³²O₂ isotope exchange reaction at two energies.
Van Wyngarden, Annalise L; Mar, Kathleen A; Quach, Jim; Nguyen, Anh P Q; Wiegel, Aaron A; Lin, Shi-Ying; Lendvay, Gyorgy; Guo, Hua; Lin, Jim J; Lee, Yuan T; Boering, Kristie A
2014-08-14
The dynamics of the (18)O((3)P) + (32)O2 isotope exchange reaction were studied using crossed atomic and molecular beams at collision energies (E(coll)) of 5.7 and 7.3 kcal/mol, and experimental results were compared with quantum statistical (QS) and quasi-classical trajectory (QCT) calculations on the O3(X(1)A') potential energy surface (PES) of Babikov et al. [D. Babikov, B. K. Kendrick, R. B. Walker, R. T. Pack, P. Fleurat-Lesard, and R. Schinke, J. Chem. Phys. 118, 6298 (2003)]. In both QS and QCT calculations, agreement with experiment was markedly improved by performing calculations with the experimental distribution of collision energies rather than at the fixed average collision energy. At both collision energies, the scattering displayed a forward bias, with a smaller bias at the lower E(coll). Comparisons with the QS calculations suggest that (34)O2 is produced with a non-statistical rovibrational distribution that is hotter than predicted, and the discrepancy is larger at the lower E(coll). If this underprediction of rovibrational excitation by the QS method is not due to PES errors and/or to non-adiabatic effects not included in the calculations, then this collision energy dependence is opposite to what might be expected based on collision complex lifetime arguments and opposite to that measured for the forward bias. While the QCT calculations captured the experimental product vibrational energy distribution better than the QS method, the QCT results underpredicted rotationally excited products, overpredicted the forward bias and predicted a trend in the strength of the forward bias with collision energy opposite to that measured, indicating that QCT does not completely capture the dynamic behavior measured in the experiment. 
Thus, these results further underscore the need for improvement in theoretical treatments of dynamics on the O3(X(1)A') PES and perhaps of the PES itself in order to better understand and predict non-statistical effects in this reaction and in the formation of ozone (in which the intermediate O3* complex is collisionally stabilized by a third body). The scattering data presented here at two different collision energies provide important benchmarks to guide these improvements. PMID:25134575
NASA Technical Reports Server (NTRS)
Zheng, Quanan; Yan, Xiao-Hai; Klemas, Vic
1993-01-01
The internal waves on the continental shelf of the Middle Atlantic Bight seen on Space Shuttle photographs taken during the STS-40 mission in June 1991 are measured and analyzed. The internal wave field in the sample area has a three-level structure which consists of packet groups, packets, and solitons. An average packet group wavelength of 17.5 km and an average soliton wavelength of 0.6 km are measured. Finite-depth theory is used to derive the dynamic parameters of the internal solitons: the maximum amplitude of 5.6 m, the characteristic phase speed of 0.42 m/s, the characteristic period of 23.8 min, the velocity amplitudes of the water particles in the upper and lower layers of 0.13 m/s and 0.030 m/s respectively, and the theoretical energy per unit crest line of 6.8 x 10^4 J/m. The frequency distribution of solitons is triple-peaked rather than continuous. The major generation source is at 160 m water depth, and a second is at 1800 m depth, corresponding to the upper and lower edges of the shelf break.
Demkin, V. P.; Mel'nichuk, S. V.
2014-09-15
In the present work, we present results of investigations into the dynamics of secondary electrons interacting with helium atoms in the presence of the reverse electric field that arises in the flare of a high-voltage pulsed beam-type discharge and leads to degradation of the primary electron beam. The electric field in a discharge of this type at moderate pressures can reach several hundred V/cm and leads to considerable changes in the kinetics of the secondary electrons created as the electron beam generated in the accelerating gap with a grid anode propagates. Moving in the accelerating electric field toward the anode, secondary electrons create the so-called compensating current to the anode. The character of electron motion and the compensating current itself are determined by the ratio of the field strength to the concentration of atoms (E/n). The energy and angular spectra of secondary electrons are calculated by the Monte Carlo method for different ratios E/n of the electric field strength to the helium atom concentration. The motion of secondary electrons with energies near the thresholds of inelastic collisions with helium atoms is studied, and a differential analysis is carried out of the collisional processes causing energy losses of electrons in helium for different E/n values. The mechanism of creation and accumulation of slow electrons as a result of inelastic collisions of secondary electrons with helium atoms and selective population of metastable states of helium atoms is considered. It is demonstrated that over a wide range of E/n values the motion of secondary electrons in the beam-type discharge flare has the character of drift. At E/n values characteristic of this type of discharge, the drift velocity of these electrons is calculated and compared with the available experimental data.
Pasquaretta, Cristian; Klenschi, Elizabeth; Pansanel, Jérôme; Battesti, Marine; Mery, Frederic; Sueur, Cédric
2016-01-01
Social learning - the transmission of behaviors through observation or interaction with conspecifics - can be viewed as a decision-making process driven by interactions among individuals. Animal group structures change over time and interactions among individuals occur in particular orders that may be repeated following specific patterns, change in their nature, or disappear completely. Here we used a stochastic actor-oriented model built using the RSiena package in R to estimate individual behaviors and their changes through time, by analyzing the dynamics of the interaction network of the fruit fly Drosophila melanogaster during social learning experiments. In particular, we re-analyzed an experimental dataset where uninformed flies, left free to interact with informed ones, acquired and later used information about oviposition site choice obtained by social interactions. We estimated the degree to which the uninformed flies had successfully acquired the information carried by informed individuals using the proportion of eggs laid by uninformed flies on the medium their conspecifics had been trained to favor. Regardless of the degree of information acquisition measured in uninformed individuals, they always received and started interactions more frequently than informed ones did. However, information was efficiently transmitted (i.e., uninformed flies predominantly laid eggs on the same medium informed ones had learned to prefer) only when the difference in contacts sent between the two fly types was small. Interestingly, we found that the degree of reciprocation, the tendency of individuals to form mutual connections between each other, strongly affected oviposition site choice in uninformed flies. This work highlights the great potential of RSiena and its utility in the studies of interaction networks among non-human animals. PMID:27148146
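As a toy illustration of the reciprocation measure discussed (not the RSiena stochastic actor-oriented model itself), the fraction of directed ties that are mutual can be computed directly from an edge list; the fly labels and interactions below are invented:

```python
# Directed interaction edges (sender, receiver); duplicates collapse to one tie.
interactions = [("fly1", "fly2"), ("fly2", "fly1"), ("fly1", "fly3"),
                ("fly3", "fly4"), ("fly4", "fly3"), ("fly2", "fly3")]

edges = set(interactions)
# A directed tie (u, v) is reciprocated when (v, u) is also present.
mutual = sum(1 for (u, v) in edges if (v, u) in edges)
reciprocity = mutual / len(edges)      # fraction of directed ties reciprocated
print("reciprocity:", round(reciprocity, 2))
```

RSiena estimates how such effects (reciprocity, activity, popularity) drive network change over observation waves; the static fraction above is only the descriptive starting point.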
NASA Astrophysics Data System (ADS)
Kissick, David J.; Muir, Ryan D.; Sullivan, Shane Z.; Oglesbee, Robert A.; Simpson, Garth J.
2013-02-01
Despite the ubiquitous use of multi-photon and confocal microscopy measurements in biology, the core techniques typically suffer from fundamental compromises between signal to noise (S/N) and linear dynamic range (LDR). In this study, direct synchronous digitization of voltage transients coupled with statistical analysis is shown to allow S/N approaching the theoretical maximum throughout an LDR spanning more than 8 decades, limited only by the dark counts of the detector on the low end and by the intrinsic nonlinearities of the photomultiplier tube (PMT) detector on the high end. Synchronous digitization of each voltage transient represents a fundamental departure from established methods in confocal/multi-photon imaging, which are currently based on either photon counting or signal averaging. High information-density data acquisition (up to 3.2 GB/s of raw data) enables the smooth transition between the two modalities on a pixel-by-pixel basis and the ultimate writing of much smaller files (few kB/s). Modeling of the PMT response allows extraction of key sensor parameters from the histogram of voltage peak-heights. Applications in second harmonic generation (SHG) microscopy are described demonstrating S/N approaching the shot-noise limit of the detector over large dynamic ranges.
Shi, Runhua; McLarty, Jerry W
2009-10-01
In this article, we introduced basic concepts of statistics, type of distributions, and descriptive statistics. A few examples were also provided. The basic concepts presented herein are only a fraction of the concepts related to descriptive statistics. Also, there are many commonly used distributions not presented herein, such as Poisson distributions for rare events and exponential distributions, F distributions, and logistic distributions. More information can be found in many statistics books and publications. PMID:19891281
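A minimal worked example of the descriptive statistics discussed, using Python's standard `statistics` module on a small skewed sample, where a single outlier pulls the mean well above the median:

```python
import statistics

# One large outlier (21) skews the distribution to the right.
sample = [2, 3, 3, 4, 5, 5, 5, 21]

mean = statistics.mean(sample)      # sensitive to the outlier
median = statistics.median(sample)  # robust to the outlier
sd = statistics.stdev(sample)       # sample standard deviation (n - 1 divisor)
print(f"mean={mean:.2f} median={median} sd={sd:.2f}")
```

For skewed data such as this, the median is the more representative measure of central tendency, which is one of the practical points such introductions typically make.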
ERIC Educational Resources Information Center
Petocz, Peter; Sowey, Eric
2008-01-01
As a branch of knowledge, Statistics is ubiquitous and its applications can be found in (almost) every field of human endeavour. In this article, the authors track down the possible source of the link between the "Siren song" and applications of Statistics. Answers to their previous five questions and five new questions on Statistics are presented.
NASA Astrophysics Data System (ADS)
Zekri, Nouredine; Clerc, Jean Pierre
We study numerically in this work the statistical and dynamical properties of the clusters in a one-dimensional small-world model. The parameters chosen correspond to a realistic network of children of school age in which a disease like measles can propagate. Extensive results on the statistical behavior of the clusters around the percolation threshold, as well as their evolution with time, are discussed. To cite this article: N. Zekri, J.P. Clerc, C. R. Physique 3 (2002) 741-747.
Guyonvarch, Estelle; Ramin, Elham; Kulahci, Murat; Plósz, Benedek Gy
2015-10-15
The present study aims at using statistically designed computational fluid dynamics (CFD) simulations as numerical experiments for the identification of one-dimensional (1-D) advection-dispersion models - computationally light tools, used e.g., as sub-models in systems analysis. The objective is to develop a new 1-D framework, referred to as interpreted CFD (iCFD) models, in which statistical meta-models are used to calculate the pseudo-dispersion coefficient (D) as a function of design and flow boundary conditions. The method - presented in a straightforward and transparent way - is illustrated using the example of a circular secondary settling tank (SST). First, the significant design and flow factors are screened out by applying the statistical method of two-level fractional factorial design of experiments. Second, based on the number of significant factors identified through the factor screening study and system understanding, 50 different sets of design and flow conditions are selected using Latin Hypercube Sampling (LHS). The boundary condition sets are imposed on a 2-D axi-symmetrical CFD simulation model of the SST. In the framework, to degenerate the 2-D model structure, CFD model outputs are approximated by the 1-D model through the calibration of three different model structures for D. Correlation equations for the D parameter then are identified as a function of the selected design and flow boundary conditions (meta-models), and their accuracy is evaluated against D values estimated in each numerical experiment. The evaluation and validation of the iCFD model structure is carried out using scenario simulation results obtained with parameters sampled from the corners of the LHS experimental region. 
For the studied SST, additional iCFD model development was carried out in terms of (i) assessing different density current sub-models; (ii) implementation of a combined flocculation, hindered, transient and compression settling velocity function; and (iii) assessment of modelling the onset of transient and compression settling. Furthermore, an assessment of the optimal level of model discretization in both 2-D and 1-D was undertaken. Results suggest that the iCFD model developed for the SST through the proposed methodology is able to predict solids distribution with high accuracy - taking a reasonable computational effort - when compared to multi-dimensional numerical experiments, under a wide range of flow and design conditions. iCFD tools could play a crucial role in reliably predicting systems' performance under normal and shock events. PMID:26248321
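The Latin Hypercube Sampling step, one stratified draw per equal-probability interval per factor with the columns shuffled to randomize the pairing, can be sketched as follows. The sample count mirrors the 50-point design, but this is a generic illustration on unit-scaled factors, not the authors' tool:

```python
import random

def lhs(n_samples, n_factors, rng):
    """Latin Hypercube Sample on [0, 1]^n_factors: each factor gets exactly
    one point in each of n_samples equal strata, paired at random."""
    columns = []
    for _ in range(n_factors):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)               # randomize pairing across factors
        columns.append(col)
    return [tuple(col[i] for col in columns) for i in range(n_samples)]

rng = random.Random(42)
design = lhs(50, 4, rng)               # 50 boundary-condition sets, 4 factors
print(len(design), "design points, e.g.", tuple(round(v, 3) for v in design[0]))
```

Each unit-interval coordinate would then be rescaled to the physical range of its design or flow factor before being imposed on the CFD model.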
NASA Astrophysics Data System (ADS)
Sun, F.; Hall, A. D.; Walton, D.; Capps, S. B.; Reich, K.
2013-12-01
Using a combination of dynamical and statistical downscaling techniques, we produced 2-km-resolution regional climate reconstructions and future projections of surface warming and snowfall changes in the Los Angeles region at the middle and end of the 21st century. Projections for both time periods were compared to a validated simulation of a baseline period (1981-2000). We examined outcomes associated with two IPCC-AR5 greenhouse gas emissions scenarios: a "business-as-usual" scenario (RCP8.5) and a "mitigation" scenario (RCP2.6). Output from all available global climate models in the CMIP5 archive was downscaled. We first statistically downscaled surface warming and then applied a statistical model between the surface temperature and snowfall to project the snowfall change. By mid-century, the mountainous areas in the Los Angeles region are likely to receive substantially less snowfall than in the baseline period. In RCP8.5, about 60% of the snowfall is most likely to persist, while in RCP2.6, the likely amount remaining is somewhat higher (about 70%). By end-of-century, however, the two scenarios diverge significantly. In RCP8.5, snowfall sees a dramatic further reduction, with only about a third of baseline snowfall persisting. For RCP2.6, snowfall sees only a negligible further reduction from mid-century. Due to significant differences in climate change outcomes across the global models, we estimate the uncertainty associated with these numbers to be in the range of 15-30 percentage points. For both scenarios and both time slices, the snowfall loss is consistently greatest at low elevations, and the lower-lying mountain ranges are somewhat more vulnerable to snowfall loss. The similarity in the two scenarios' most likely snowfall outcomes at mid-century illustrates the inevitability of climate change in the coming decades, no matter what mitigation measures are taken. 
Their stark contrast at century's end reveals that reduction of greenhouse gas emissions will help avoid a dramatic loss of snowfall by the end of the century. In addition to snowfall projections, the warming-accelerated snow melting of the already reduced snowfall will be discussed as well.
NASA Astrophysics Data System (ADS)
Li, R.; Wang, S.-Y.; Gillies, R. R.
2016-04-01
Large biases associated with climate projections are problematic when it comes to their regional application in the assessment of water resources and ecosystems. Here, we demonstrate a method that can reduce systematic biases in regional climate projections. The global and regional climate models employed to demonstrate the technique are the Community Climate System Model (CCSM) and the Weather Research and Forecasting (WRF) model. The method first utilized a statistical regression technique and a global reanalysis dataset to correct biases in the CCSM-simulated variables (e.g., temperature, geopotential height, specific humidity, and winds) that are subsequently used to drive the WRF model. The WRF simulations were conducted for the western United States and were driven with (a) global reanalysis, (b) original CCSM, and (c) bias-corrected CCSM data. The bias-corrected CCSM data led to a more realistic regional climate simulation of precipitation and associated atmospheric dynamics, as well as snow water equivalent (SWE), in comparison to the original CCSM-driven WRF simulation. Since most climate applications rely on existing global model output as the forcing data (i.e., they cannot re-run or change the global model), which often contain large biases, this method provides an effective and economical tool to reduce biases in regional climate downscaling simulations of water resource variables.
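The core of the bias-correction step, fitting a statistical regression between model output and reanalysis over a training period and applying it as a transfer function, can be sketched on a single synthetic series (a one-variable stand-in for the multi-variable 3-D correction actually performed; all numbers below are invented):

```python
import random

random.seed(0)
# Synthetic training data: "reanalysis" temperatures and a warm-biased,
# slightly over-sensitive "GCM" version of the same series.
truth = [random.gauss(15, 5) for _ in range(200)]
gcm = [1.1 * t + 3.0 + random.gauss(0, 0.5) for t in truth]

# Ordinary least squares: truth ≈ slope * gcm + intercept.
mg = sum(gcm) / len(gcm)
mt = sum(truth) / len(truth)
slope = (sum((g - mg) * (t - mt) for g, t in zip(gcm, truth))
         / sum((g - mg) ** 2 for g in gcm))
intercept = mt - slope * mg

corrected = [slope * g + intercept for g in gcm]
bias_before = mg - mt
bias_after = sum(corrected) / len(corrected) - mt
print(f"mean bias before={bias_before:.2f} K, after={bias_after:.2f} K")
```

In the study this kind of correction is applied to each CCSM driving variable (temperature, geopotential height, humidity, winds) before the fields are handed to WRF.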
ERIC Educational Resources Information Center
Petocz, Peter; Sowey, Eric
2008-01-01
In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…
ERIC Educational Resources Information Center
Petocz, Peter; Sowey, Eric
2012-01-01
The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the statistical…
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1987-01-01
A dynamic rain attenuation prediction model is developed for use in obtaining the temporal characteristics, on time scales of minutes or hours, of satellite communication link availability. Analogous to the associated static rain attenuation model, which yields yearly attenuation predictions, this dynamic model is applicable at any location in the world that is characterized by the static rain attenuation statistics peculiar to the geometry of the satellite link and the rain statistics of the location. Such statistics are calculated by employing the formalism of Part I of this report. In fact, the dynamic model presented here is an extension of the static model and reduces to the static model in the appropriate limit. By assuming that rain attenuation is dynamically described by a first-order stochastic differential equation in time and that this random attenuation process is a Markov process, an expression for the associated transition probability is obtained by solving the related forward Kolmogorov equation. This transition probability is then used to obtain such temporal rain attenuation statistics as attenuation durations and allowable attenuation margins versus control system delay.
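As an illustration of the kind of first-order Markov model described above, the sketch below simulates a discrete-time Ornstein-Uhlenbeck-like process for a dimensionless (log-)attenuation variable and tallies fade durations above a fixed margin. All parameter values are invented for illustration and are not from the report:

```python
import math
import random

random.seed(7)
beta, sigma = 0.05, 0.8      # relaxation rate (1/min) and noise strength
dt, n = 1.0, 20000           # 1-minute steps, ~2 weeks of samples
x = 0.0                      # log-attenuation state
margin = 1.0                 # fade threshold (allowable margin)

durations, run = [], 0
for _ in range(n):
    # Euler step of dx = -beta*x*dt + sigma*dW: a first-order Markov process.
    x += -beta * x * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    if x > margin:
        run += 1             # currently inside a fade
    elif run:
        durations.append(run)
        run = 0

print("fades:", len(durations),
      "mean duration (min):", round(sum(durations) / len(durations), 1))
```

The report instead works with the transition probability obtained analytically from the forward Kolmogorov equation; the simulation shows the same fade-duration statistics emerging by direct sampling of the process.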
Wang, Hai-long; Zhang, Zhou-long
2014-09-01
Ultrasonic light scattering tomography is a new functional imaging technique for the breast that combines diffuse optical tomography (DOT) with ultrasonic examination. It locates breast neoplasms with ultrasonic examination and measures the total hemoglobin concentration inside the tumor with DOT to reflect the metabolic state of the tumor, and then derives a synthesis diagnostic index (SDI) to judge whether tumors are benign or malignant. This diagnostic method has had a significant effect on the diagnosis of benign and malignant tumors at home and abroad. In the development of breast cancer, local tissue hypoxia leads to a large number of new blood vessels when the tumor grows faster than the rate of angiogenesis. A recent study found that microvessel density (MVD), vascular endothelial growth factor (VEGF) and hypoxia-inducible factor-1 alpha (HIF-1α) play a major role in the angiogenesis of breast cancer. This study analyzes the relationship between the breast cancer ultrasound synthesis diagnostic index (SDI) and the expression of MVD, VEGF and HIF-1α by measuring the expression levels of MVD, VEGF and HIF-1α in breast cancer tissue. PMID:24659092
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James
2014-01-01
Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.
NASA Astrophysics Data System (ADS)
Li, Zheng; Borner, Arnaud; Levin, Deborah A.
2014-06-01
Homogeneous water condensation and ice formation in supersonic expansions to vacuum for stagnation pressures from 12 to 1000 mbar are studied using the particle-based Ellipsoidal-Statistical Bhatnagar-Gross-Krook (ES-BGK) method. We find that when condensation starts to occur, at a stagnation pressure of 96 mbar, the increase in the degree of condensation causes an increase in the rotational temperature due to the latent heat of vaporization. The simulated rotational temperature profiles along the plume expansion agree well with measurements, confirming the kinetic homogeneous condensation models and the method of simulation. Comparisons of the simulated gas and cluster number densities and cluster sizes for different stagnation pressures along the plume centerline were made, and it is found that the cluster size increases linearly with stagnation pressure, consistent with classical nucleation theory. The sensitivity of our results to the cluster nucleation model and to latent heat values based on bulk water, specific cluster size, or bulk ice is examined. In particular, the ES-BGK simulations are found to be too coarse-grained to provide information on the phase or structure of the clusters formed. For this reason, molecular dynamics simulations of water condensation in a one-dimensional free expansion are performed to simulate the conditions in the core of a plume. We find that the internal structure of the clusters formed depends on the stagnation temperature. A larger cluster of average size 21 was tracked down the expansion, and a calculation of its average internal temperature as well as a comparison of its radial distribution functions (RDFs) with values measured for solid amorphous ice clusters lead us to conclude that this cluster is in a solid-like rather than liquid form. 
In another molecular-dynamics simulation at a much lower stagnation temperature, a larger cluster of size 324 and internal temperature 200 K was extracted from an expansion plume and equilibrated to determine its RDF and self-diffusion coefficient. The value of the latter shows that this cluster is formed in a supercooled liquid state rather than in an amorphous solid state.
Manos, Thanos; Robnik, Marko
2013-06-01
We study the kicked rotator in the classically fully chaotic regime using Izrailev's N-dimensional model for various N≤4000, which in the limit N→∞ tends to the quantized kicked rotator. We treat not only the case K=5, studied previously, but also many other values of the classical kick parameter 5≤K≤35 and of the quantum parameter k∈[5,60]. We describe the features of dynamical localization of chaotic eigenstates as a paradigm for other fully chaotic and mixed-type Hamiltonian systems, both time-periodic and time-independent (autonomous). We generalize the scaling variable Λ=l(∞)/N to the case of anomalous diffusion in the classical phase space by deriving the localization length l(∞) for the case of generalized classical diffusion. We greatly improve the accuracy and statistical significance of the numerical calculations, giving rise to the following conclusions: (1) The level-spacing distribution of the eigenphases (or quasienergies) is very well described by the Brody distribution, systematically better than by other proposed models, for various Brody exponents β(BR). (2) We study the eigenfunctions of the Floquet operator and characterize their localization properties using the information entropy measure, which after normalization is given by β(loc) in the interval [0,1]. The level repulsion parameters β(BR) and β(loc) are almost linearly related, close to the identity line. (3) We show the existence of a scaling law between β(loc) and the relative localization length Λ, now including the regimes of anomalous diffusion. These findings are also important for chaotic eigenstates in time-independent systems [Batistić and Robnik, J. Phys. A: Math. Gen. 43, 215101 (2010); arXiv:1302.7174 (2013)], where the Brody distribution is confirmed to a very high degree of precision for dynamically localized chaotic eigenstates, even in mixed-type systems (after separation of regular and chaotic eigenstates). PMID:23848746
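The information-entropy localization measure mentioned in conclusion (2) can be illustrated numerically. Below is a minimal sketch (an illustrative parametrization of the quantized kicked rotator on a finite grid, not Izrailev's exact finite-N model) that builds the one-period Floquet operator and computes the Shannon information entropy of its eigenvectors, the quantity that, after normalization, yields β(loc):

```python
import numpy as np

def kicked_rotator_floquet(N=256, K=5.0, hbar=1.0):
    """One-period Floquet operator of the quantized kicked rotator on an
    N-point angle grid: free rotation (diagonal in momentum) followed by
    the kick (diagonal in angle).  Illustrative parametrization only."""
    n = np.arange(N)
    theta = 2.0 * np.pi * n / N
    kick = np.exp(-1j * (K / hbar) * np.cos(theta))       # kick, angle basis
    free = np.exp(-1j * hbar * (n - N // 2) ** 2 / 2.0)   # free, momentum basis
    F = np.fft.fft(np.eye(N), axis=0) / np.sqrt(N)        # angle -> momentum
    return F.conj().T @ np.diag(free) @ F @ np.diag(kick)

def info_entropy(vec):
    """Shannon information entropy of a normalized eigenvector."""
    p = np.abs(vec) ** 2
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

U = kicked_rotator_floquet()
_, evecs = np.linalg.eig(U)
S = np.array([info_entropy(evecs[:, j]) for j in range(evecs.shape[1])])
# Dynamically localized eigenstates have mean entropy below the ergodic
# benchmark ln(N); beta_loc is essentially a rescaling of exp(S)/N to [0,1].
print(S.mean(), np.log(U.shape[0]))
```

Sweeping K and the effective Planck constant in such a sketch would trace out the scaling law between β(loc) and Λ described in conclusion (3).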
The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute works to provide information on cancer statistics in an effort to reduce the burden of cancer among the U.S. population.
... population, or about 25 million Americans, has experienced tinnitus lasting at least five minutes in the past ... by NIDCD Epidemiology and Statistics Program staff: (1) tinnitus prevalence was obtained from the 2008 National Health ...
NASA Astrophysics Data System (ADS)
Su, Haifeng; Xiong, Zhe; Yan, Xiaodong; Dai, Xingang; Wei, Wenguang
2016-04-01
Monthly rainfall in the Heihe River Basin (HRB) was simulated by a dynamical downscaling model (DDM) and a statistical downscaling model (SDM). The rainy-season rainfall in the HRB obtained by SDM and DDM was compared with observed datasets (OBS) over the period 2003-2012. The results showed the following: (1) Both methods reasonably reproduced the spatial pattern of rainy-season rainfall in the HRB with a high level of skill. Rainfall simulated by DDM was better than that by SDM in the upstream, with biases of -12.09 and -13.59 %, respectively; rainfall simulated by SDM was better than that by DDM in the midstream, with biases of 3.91 and -23.22 %, respectively; there was little difference between the rainfall simulated by SDM and DDM in the downstream, with biases of -10.89 and -9.50 %, respectively. (2) Both methods reasonably reproduced monthly rainfall in the rainy season in the different subregions. Rainfall simulated by DDM was better than that by SDM in May and July in the upstream, whereas rainfall simulated by SDM was closer to OBS in all months except August in the midstream and except August and September in the downstream. (3) For multi-year mean rainy-season rainfall at individual stations, there was little difference between the rainfall simulated by DDM and SDM at Tuole station in the upstream, with biases of -13.16 and -12.40 %, respectively; rainfall at Zhangye station simulated by SDM was overestimated with a bias of 14.02 %, whereas rainfall simulated by DDM was underestimated with a bias of -14.60 %; rainfall at Dingxin station was reproduced better by DDM than by SDM, with biases of -19.34 and -32.75 %, respectively.
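The percentage biases quoted above can be reproduced with a one-line relative-bias formula. A minimal sketch, assuming the common convention bias = 100 × (mean(sim) − mean(obs)) / mean(obs) (the abstract does not spell out its exact definition) and using entirely hypothetical rainfall values:

```python
import numpy as np

def relative_bias_percent(sim, obs):
    """Relative bias of simulated vs. observed mean rainfall, in percent.
    Assumed convention; the paper's exact formula is not stated."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * (sim.mean() - obs.mean()) / obs.mean()

# Hypothetical monthly rainy-season rainfall (mm) at one station:
obs = np.array([20.0, 35.0, 60.0, 55.0, 30.0])
ddm = np.array([18.0, 30.0, 50.0, 45.0, 25.0])  # dynamical downscaling output
sdm = np.array([21.0, 36.0, 62.0, 57.0, 31.0])  # statistical downscaling output
print(relative_bias_percent(ddm, obs))  # negative: DDM underestimates here
print(relative_bias_percent(sdm, obs))  # positive: SDM overestimates here
```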
NASA Technical Reports Server (NTRS)
Yao, Mao-Sung; Stone, Peter H.
1987-01-01
The moist convection parameterization used in the GISS 3-D GCM is adapted for use in a two-dimensional (2-D) zonally averaged statistical-dynamical model. Experiments with different versions of the parameterization show that its impact on the general circulation in the 2-D model does not parallel its impact in the 3-D model unless the effect of zonal variations is parameterized in the moist convection calculations. A parameterization of the variations in moist static energy is introduced in which the temperature variations are calculated from baroclinic stability theory, and the relative humidity is assumed to be constant. Inclusion of the zonal variations of moist static energy in the 2-D moist convection parameterization allows just a fraction of a latitude circle to be unstable and enhances the amount of deep convection. This leads to a 2-D simulation of the general circulation very similar to that in the 3-D model. The experiments show that the general circulation is sensitive to the parameterized amount of deep convection in the subsident branch of the Hadley cell. The more there is, the weaker are the Hadley cell circulations and the westerly jets. The experiments also confirm the effects of momentum mixing associated with moist convection found by earlier investigators and, in addition, show that the momentum mixing weakens the Ferrel cell. An experiment in which the moist convection was removed while the hydrological cycle was retained and the eddy forcing was held fixed shows that moist convection by itself stabilizes the tropics, reduces the Hadley circulation, and reduces the maximum speeds in the westerly jets.
ERIC Educational Resources Information Center
Penfield, Douglas A.
The 30 papers in the area of educational statistics that were presented at the 1972 AERA Conference are reviewed. The papers are categorized into five broad areas of interest: (1) theory of univariate analysis, (2) nonparametric methods, (3) regression-prediction theory, (4) multivariable methods, and (5) factor analysis. A list of the papers…
NASA Technical Reports Server (NTRS)
Giles, B. L.; Chappell, C. R.; Moore, T. E.; Comfort, R. H.; Waite, J. H., Jr.
1994-01-01
Core (0-50 eV) ion pitch angle measurements from the retarding ion mass spectrometer on Dynamics Explorer 1 are examined with respect to magnetic disturbance, invariant latitude, magnetic local time, and altitude for ions H(+), He(+), O(+), M/Z = 2 (D(+) or He(++)), and O(++). Included are outflow events in the auroral zone, polar cap, and cusp, separated into altitude regions below and above 3 R(sub E). In addition to the customary division into beam, conic, and upwelling distributions, the high-latitude observations fall into three categories corresponding to ion bulk speeds that are (1) less than, (2) comparable to, or (3) faster than that of the spacecraft. This separation, along with the altitude partition, serves to identify conditions under which ionospheric source ions are gravitationally bound and when they are more energetic and able to escape to the outer magnetosphere. Features of the cleft ion fountain inferred from single event studies are clearly identifiable in the statistical results. In addition, it is found that the dayside pre-noon cleft is a persistent source, while the dayside afternoon cleft, or auroral zone, becomes an additional source during increased activity. The auroral oval as a whole appears to be a steady source of escape-velocity H(+), a steady source of escape-velocity He(+) ions for the dusk sector, and a source of escape-velocity heavy ions for dusk local times primarily during increased activity. The polar cap above the auroral zone is a consistent source of low-energy ions, although only the lighter mass particles appear to have sufficient velocity, on average, to escape to higher altitudes. The observations support two concepts for outflow: (1) The cleft ion fountain consists of ionospheric plasma of 1-20 eV energy streaming upward into the magnetosphere where high-latitude convection electric fields cause poleward dispersion. 
(2) The auroral ion fountain involves field-aligned beams which flow out along auroral latitude field lines; and, in addition, for late afternoon local times, they experience additional acceleration such that the ion energy distribution tends to exceed the detection range of the instrument (greater than 50-60 eV).
NASA Astrophysics Data System (ADS)
Kardar, Mehran
2006-06-01
While many scientists are familiar with fractals, fewer are familiar with the concepts of scale invariance and universality which underlie the ubiquity of their shapes. These properties may emerge from the collective behaviour of simple fundamental constituents, and are studied using statistical field theories. Based on lectures for a course in statistical mechanics taught by Professor Kardar at the Massachusetts Institute of Technology, this textbook demonstrates how such theories are formulated and studied. Perturbation theory, exact solutions, renormalization groups, and other tools are employed to demonstrate the emergence of scale invariance and universality, and the non-equilibrium dynamics of interfaces and directed paths in random media are discussed. Ideal for advanced graduate courses in statistical physics, it contains 65 exercises, with solutions to selected problems at the end of the book. A complete set of solutions is available to lecturers on a password-protected website at www.cambridge.org/9780521873413.
Bogomolny, E; Gerland, U; Schmit, C
2001-03-01
We consider the statistical distribution of zeros of random meromorphic functions whose poles are independent random variables. It is demonstrated that correlation functions of these zeros can be computed analytically, and explicit calculations are performed for the two-point correlation function. This problem naturally appears in, e.g., rank-1 perturbation of an integrable Hamiltonian and, in particular, when a delta-function potential is added to an integrable billiard. PMID:11308740
NASA Astrophysics Data System (ADS)
Goodman, Joseph W.
2000-07-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson, The Statistical Analysis of Time Series; T. S. Arthanari & Yadolah Dodge, Mathematical Programming in Statistics; Emil Artin, Geometric Algebra; Norman T. J. Bailey, The Elements of Stochastic Processes with Applications to the Natural Sciences; Robert G. Bartle, The Elements of Integration and Lebesgue Measure; George E. P. Box & Norman R. Draper, Evolutionary Operation: A Statistical Method for Process Improvement; George E. P. Box & George C. Tiao, Bayesian Inference in Statistical Analysis; R. W. Carter, Finite Groups of Lie Type: Conjugacy Classes and Complex Characters; R. W. Carter, Simple Groups of Lie Type; William G. Cochran & Gertrude M. Cox, Experimental Designs, Second Edition; Richard Courant, Differential and Integral Calculus, Volume I; Richard Courant, Differential and Integral Calculus, Volume II; Richard Courant & D. Hilbert, Methods of Mathematical Physics, Volume I; Richard Courant & D. Hilbert, Methods of Mathematical Physics, Volume II; D. R. Cox, Planning of Experiments; Harold S. M. Coxeter, Introduction to Geometry, Second Edition; Charles W. Curtis & Irving Reiner, Representation Theory of Finite Groups and Associative Algebras; Charles W. Curtis & Irving Reiner, Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I; Charles W. Curtis & Irving Reiner, Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II; Cuthbert Daniel, Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition; Bruno de Finetti, Theory of Probability, Volume I; Bruno de Finetti, Theory of Probability, Volume II; W. Edwards Deming, Sample Design in Business Research
Kaji, Takahiro; Ito, Syoji; Iwai, Shigenori; Miyasaka, Hiroshi
2009-10-22
Single-molecule and ensemble time-resolved fluorescence measurements were applied for the investigation of the conformational dynamics of single-stranded DNA, ssDNA, connected with a fluorescein dye by a C6 linker, where the motions both of DNA and the C6 linker affect the geometry of the system. From the ensemble measurement of the fluorescence quenching via photoinduced electron transfer with a guanine base in the DNA sequence, three main conformations were found in aqueous solution: a conformation unaffected by the guanine base in the excited state lifetime of fluorescein, a conformation in which the fluorescence is dynamically quenched in the excited-state lifetime, and a conformation leading to rapid quenching via nonfluorescent complex. The analysis by using the parameters acquired from the ensemble measurements for interphoton time distribution histograms and FCS autocorrelations by the single-molecule measurement revealed that interconversion in these three conformations took place with two characteristic time constants of several hundreds of nanoseconds and tens of microseconds. The advantage of the combination use of the ensemble measurements with the single-molecule detections for rather complex dynamic motions is discussed by integrating the experimental results with those obtained by molecular dynamics simulation. PMID:19780517
Statistical characterization of dislocation ensembles
El-Azab, A; Deng, J; Tang, M
2006-05-17
We outline a method to study the spatial and orientation statistics of dynamical dislocation systems by modeling the dislocations as a stochastic fiber process. Statistical measures have been introduced for the density, velocity, and flux of dislocations, and the connection between these measures and the dislocation state and plastic distortion rate in the crystal is explained. A dislocation dynamics simulation model has been used to extract numerical data to study the evolution of these statistical measures numerically in a body-centered cubic crystal under deformation. The orientation distribution of the dislocation density, velocity and dislocation flux, as well as the dislocation correlations have been computed. The importance of the statistical measures introduced here in building continuum models of dislocation systems is highlighted.
NASA Astrophysics Data System (ADS)
Graham, D. B.; Cairns, Iver H.; Skjaeraasen, O.; Robinson, P. A.
2012-02-01
The temperature ratio Ti/Te of ions to electrons affects both the ion-damping rate and the ion-acoustic speed in plasmas. The effects of changing the ion-damping rate and ion-acoustic speed are investigated for electrostatic strong turbulence and electromagnetic strong turbulence in three dimensions. When ion damping is strong, density wells relax in place and act as nucleation sites for the formation of new wave packets. In this case, the density perturbations are primarily density wells supported by the ponderomotive force. For weak ion damping, corresponding to low Ti/Te, ion-acoustic waves are launched radially outwards when wave packets dissipate at burnout, thereby increasing the level of density perturbations in the system and thus raising the level of scattering of Langmuir waves off density perturbations. Density wells no longer relax in place, so renucleation at recent collapse sites no longer occurs; instead, wave packets form in background low-density regions, such as superpositions of troughs of propagating ion-acoustic waves. This transition is found to occur at Ti/Te ≈ 0.1. The change in behavior with Ti/Te is shown to change the bulk statistical properties, scaling behavior, spectra, and field statistics of strong turbulence. For Ti/Te ≳ 0.1, the electrostatic results approach the predictions of the two-component model of Robinson and Newman, and good agreement is found for Ti/Te ≳ 0.15.
NASA Astrophysics Data System (ADS)
Mueschke, Nicholas J.; Schilling, Oleg
2009-01-01
A 1152×760×1280 direct numerical simulation (DNS) using initial conditions, geometry, and physical parameters chosen to approximate those of a transitional, small Atwood number, nonreacting Rayleigh-Taylor mixing experiment was presented in Paper I [Mueschke and Schilling, Phys. Fluids 21, 014106 (2009)]. In addition, the DNS model of the experiment was validated by comparing quantities from the simulation to experimental measurements, including large-scale quantities, higher-order statistics, and vertical velocity and density variance spectra. In Paper II of this study, other quantities not measured in the experiment are obtained from the DNS and discussed, such as the integral- and Taylor-scale Reynolds numbers, Reynolds stress and dissipation anisotropy, two-dimensional density and velocity variance spectra, hypothetical chemical product formation measures (similar to those used in reacting shear flow experiments), other local and global mixing parameters, and the statistical composition of mixed fluid. The integral- and Taylor-scale Reynolds numbers, together with visualizations of vertical and center plane slices of the density and vorticity fields, are used to elucidate the various evolutionary stages of the flow. It is shown that the early-time evolution retains a primarily two-dimensional character until the flow begins to transition to a more three-dimensional state at later times, as also observed in the experiment. The evolution of the three diagonal components of the anisotropy tensors showed that anisotropy persists to the latest times in the simulation. Compensated spectra at the latest time in the DNS suggest very short k-5/3 and k-5/4 inertial subrange scalings of the vertical velocity and density variance spectra, respectively. 
By interpreting the mixing between the two fluids as a hypothetical, infinitely fast, reversible chemical reaction between the species, the local formation of chemical product, equivalent product thickness, and other standard measures of mixing used in shear-driven turbulence are obtained from the DNS and discussed. Other measures of molecular mixing are shown to be qualitatively similar to the molecular mixing parameter θ on the center plane. Finally, the statistical composition of the mixed fluid is examined using the probability distribution function of the heavy-fluid volume fraction and the averaged composition of mixed fluid. Thus, DNS modeled closely after a physical Rayleigh-Taylor instability and mixing experiment can provide additional insights into the flow physics complementary to the experiment.
Cosmetic Plastic Surgery Statistics
2014 Cosmetic Plastic Surgery Statistics Cosmetic Procedure Trends 2014 Plastic Surgery Statistics Report Please credit the AMERICAN SOCIETY OF PLASTIC SURGEONS when citing statistical data or using ...
NASA Technical Reports Server (NTRS)
Vangelder, B. H. W.
1978-01-01
Non-Bayesian statistics were used in simulation studies centered around laser range observations to LAGEOS. The capabilities of satellite laser ranging, especially in connection with relative station positioning, are evaluated. The satellite measurement system under investigation may fall short in precise determinations of the earth's orientation (precession and nutation) and rotation, as opposed to systems such as very long baseline interferometry (VLBI) and lunar laser ranging (LLR). Relative station positioning, determination of (differential) polar motion, positioning of stations with respect to the earth's center of mass, and determination of the earth's gravity field should be easily realized by satellite laser ranging (SLR). The last two should be considered best (or solely) determinable by SLR, in contrast to VLBI and LLR.
NASA Astrophysics Data System (ADS)
Tsai, C. S.; Lin, Yung-Chang; Chen, Wen-Shin; Su, H. C.
2010-03-01
Recently, the high-tech industry has become a key industry for economic development in many countries. However, vibration-sensitive equipment located in these industrial buildings is vulnerable during earthquakes, which may cause huge economic losses. In this study, an innovative isolator for safeguarding vibration-sensitive equipment, namely the static dynamics interchangeable-ball pendulum system (SDI-BPS), is proposed, and its protective capability is examined through a series of tri-directional shaking table tests. The experimental results illustrate that the SDI-BPS isolator can provide significant damping to rolling-type base isolation systems, reducing the bearing displacement and size, and can avoid the stress concentration that causes damage or scratches on the rolling surface of the isolator, prolonging its service life. The SDI-BPS isolator also provides excellent capability in protecting vibration-sensitive equipment and exhibits stable behavior under long-term service loading and earthquakes.
NASA Technical Reports Server (NTRS)
Balcer-Kubiczek, E. K.; Zhang, X. F.; Harrison, G. H.; Zhou, X. J.; Vigneulle, R. M.; Ove, R.; McCready, W. A.; Xu, J. F.
1999-01-01
PURPOSE: Differences in gene expression underlie the phenotypic differences between irradiated and unirradiated cells. The goal was to identify late-transcribed genes following irradiations differing in quality, and to determine the RBE of 1 GeV/n Fe ions. MATERIALS AND METHODS: Clonogenic assay was used to determine the RBE of Fe ions. Differential hybridization to cDNA target clones was used to detect differences in expression of corresponding genes in mRNA samples isolated from MCF7 cells irradiated with iso-survival doses of Fe ions (0 or 2.5 Gy) or fission neutrons (0 or 1.2 Gy) 7 days earlier. Northern analysis was used to confirm differential expression of cDNA-specific mRNA and to examine expression kinetics up to 2 weeks after irradiation. RESULTS: Fe ion RBE values were between 2.2 and 2.6 in the lines examined. Two of 17 differentially expressed cDNA clones were characterized. hpS2 mRNA was elevated from 1 to 14 days after irradiation, whereas CIP1/WAF1/SDI1 remained elevated from 3 h to 14 days after irradiation. Induction of hpS2 mRNA by irradiation was independent of p53, whereas induction of CIP1/WAF1/SDI1 was observed only in wild-type p53 lines. CONCLUSIONS: A set of coordinately regulated genes, some of which are independent of p53, is associated with change in gene expression during the first 2 weeks post-irradiation.
NASA Astrophysics Data System (ADS)
Schläppy, Romain; Eckert, Nicolas; Jomelli, Vincent; Grancher, Delphine; Brunstein, Daniel; Stoffel, Markus; Naaim, Mohamed
2013-04-01
Documenting past avalanche activity represents an indispensable step in avalanche hazard assessment. Nevertheless, (i) archival records of past avalanche events do not normally yield data with satisfying spatial and temporal resolution, and (ii) precision concerning runout distance is generally poorly defined. In addition, historic documentation is most often (iii) biased toward events that caused damage to structures or loss of life on the one hand and (iv) undersampled in unpopulated areas on the other hand. On forested paths, dendrogeomorphology has been demonstrated to be a powerful tool to reconstruct past avalanche activity with annual resolution and for periods covering the past decades to centuries. This method is based on the fact that living trees may be affected by snow avalanches during their flow and deposition phases. Affected trees react to these disturbances with a certain growth response. An analysis of the responses recorded in tree rings, coupled with an evaluation of the position of reacting trees within the path, allows the dendrogeomorphic expert to identify past snow avalanche events and deduce their minimum runout distances. The objective of the work presented here is first to dendrochronologically reconstruct snow avalanche activity in the Château Jouan path located near Montgenèvre in the French Alps. Minimal runout distances are then determined for each reconstructed event by considering the point of furthest reach along the topographic profile. Related empirical return intervals are evaluated by combining the extent of each event with the average local frequency of the dendrological record. In a second step, the runout distance distribution derived from dendrochronological reconstruction is compared to the one derived from historical archives and to high return period avalanches predicted by an up-to-date, locally calibrated statistical-numerical model. 
It appears that the dendrochronological reconstructions correspond mostly to rare events, i.e. to the tail of the local runout distance distribution. Furthermore, a good agreement exists with the statistical-numerical model's predictions, i.e. a 10-40 m difference for return periods ranging between 10 and 300 years, which is rather small with regard to the uncertainty levels to be considered in avalanche probabilistic modeling and dendrochronological reconstructions. It is important to note that such a cross validation on independent extreme predictions has never been undertaken before. It suggests that (i) dendrochronological reconstruction can provide valuable information for anticipating future extreme avalanche events in the context of risk management and, in turn, that (ii) the statistical-numerical model, when properly calibrated, can be used with reasonable confidence to refine these predictions, with, for instance, evaluation of pressure and flow depth distributions at each position of the runout zone. A strong sensitivity to the determination of local avalanche and dendrological record frequencies is, however, highlighted, indicating that this is an essential step for an accurate probabilistic characterization of large-extent events.
NASA Astrophysics Data System (ADS)
Grinberg, Horacio
The interaction of a two-level XY n-spin system with a two-mode cavity field is investigated through a generalized Jaynes-Cummings model in the rotating wave approximation. The spontaneous decay of a spin level was treated by considering the interaction of the two-level spin system with the modes of the universe in the vacuum state. The different cases of interest, characterized in terms of a detuning parameter for each mode, which emerge from the nonvanishing of certain commutation relations between interaction picture Hamiltonians associated with each mode, were analytically implemented and numerically discussed for various values of the initial mean photon number and spin-photon coupling constants. The photon distribution, the time evolution of the spin population inversion, and the statistical properties of the field leading to the possible production of nonclassical states, such as antibunched light and violations of the Cauchy-Schwarz inequality, are examined for an excited initial state. It was assumed that the two modes are initially in coherent states and have the same photon distribution. The case of zero detuning of both modes was treated in terms of a linearization of the expansion of the time evolution operator, while in the other three cases the computations were conducted via second- and third-order Dyson perturbation expansions of the time evolution operator matrix elements for the excited and ground states, respectively.
Krommes, J.A. (Plasma Physics Lab.); Kim, Chang-Bae (Inst. for Fusion Studies)
1990-06-01
The fundamental problem in the theory of turbulent transport is to find the flux {Gamma} of a quantity such as heat. Methods based on statistical closures are mired in conceptual controversies and practical difficulties. However, it is possible to bound {Gamma} by employing constraints derived rigorously from the equations of motion. Brief reviews of the general theory and its application to passive advection are given. Then, a detailed application is made to anomalous resistivity generated by self-consistent turbulence in a reversed-field pinch. A nonlinear variational principle for an upper bound on the turbulent electromotive force for fixed current is formulated from the magnetohydrodynamic equations in cylindrical geometry. Numerical solution of a case constrained solely by energy balance leads to a reasonable bound and nonlinear eigenfunctions that share intriguing features with experimental data: the dominant mode numbers appear to be correct, and field reversal is predicted at reasonable values of the pinch parameter. Although open questions remain, upon considering all bounding calculations to date one can conclude, remarkably, that global energy balance constrains transport sufficiently that the bounds derived from it are not unreasonable, and that bounding calculations are feasible even for involved practical problems. The potential of the method has hardly been tapped; it provides a fertile area for future research. 29 refs.
Statistical Mechanics of Motorized Molecules
NASA Astrophysics Data System (ADS)
Prentis, Jeffrey
2002-03-01
We have designed a set of experiments that illustrate the basic principles of statistical mechanics, including the fundamental postulate, the ergodic hypothesis, and the canonical statistics. The experimental system is a granular fluid of "motorized molecules" (self-propelled balls). Mechanical properties are measured using motion sensors, force probes, and digital video. Statistical properties are determined by a dynamical probability - the fraction of time that the system spends in each state. Thermal properties are represented by time averages. The process by which statistical patterns appear in the mechanical data vividly illustrates how thermal order emerges from molecular chaos. The pV diagram of a gas of motorized molecules is obtained by monitoring the random force exerted by the molecules beating against a piston. Brownian motion is studied by monitoring the random walk of a Brownian cube in a fluid of self-propelled spheres. Canonical statistics is illustrated using a "Boltzmann machine" - a working dynamical model of a two-level quantum system in a temperature bath. Polymer statistics is illustrated using a granular polymer solution - a chain of ping-pong balls immersed in a solvent of motorized molecules.
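The "dynamical probability" central to these experiments, the fraction of time the system spends in each state, can be contrasted with the canonical prediction in a toy computation. Below is a minimal sketch using Metropolis dynamics for a two-level system (the classroom apparatus is mechanical, not simulated, and the energies here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-level system in a temperature bath, in units where kT = 1.
# The dynamical probability of a state is the fraction of time a long
# trajectory spends there; at equilibrium it should match exp(-E)/Z.
E = np.array([0.0, 1.0])          # illustrative level energies
state, counts, steps = 0, np.zeros(2), 200_000
for _ in range(steps):
    trial = 1 - state
    if rng.random() < np.exp(-(E[trial] - E[state])):  # Metropolis rule
        state = trial
    counts[state] += 1

p_dyn = counts / steps                  # time-average (dynamical) probability
p_can = np.exp(-E) / np.exp(-E).sum()   # canonical (Boltzmann) prediction
print(p_dyn, p_can)                     # the two should nearly coincide
```

The agreement between the time-average occupation and the Boltzmann weights is exactly the emergence of thermal order from molecular chaos that the "Boltzmann machine" demonstrates mechanically.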
NASA Astrophysics Data System (ADS)
Tanoh, K. S.; Adohi, B. J.-P.; Coulibaly, I. S.; Amory-Mazaudier, C.; Kobea, A. T.; Assamoi, P.
2015-01-01
In this paper, we report on the night-time equatorial F-layer height behaviour at Korhogo (9.2° N, 5° W; 2.4° S dip lat), Ivory Coast, in the West African sector during the solar minimum period 1995-1997. The data were collected from quarter-hourly ionograms of an Ionospheric Prediction Service (IPS) 42-type vertical sounder. The main focus of this work was to study the seasonal changes in the F-layer height and to clarify the equinox transition process recently evidenced at Korhogo during 1995, the year of declining solar flux activity. The F-layer height was found to vary strongly with time, with up to three main phases. The night-to-night variability of these morphological phases was then analysed. The early post-sunset slow rise, commonly associated with rapid chemical recombination processes in the bottom part of the F layer, remained featureless and was observed regardless of the date. By contrast, the following event either appeared as the post-sunset height peak associated with the evening E × B drift or was delayed to the midnight sector, thus involving another mechanism. The statistical analysis of the occurrence of these events throughout the solar minimum period 1995-1997 revealed two main F-layer height patterns, each characteristic of a specific season. The one with the post-sunset height peak was associated with the northern winter period, whereas the other, with the midnight height peak, characterized the northern summer period. The transition process from one pattern to the other took place during the equinox periods and was found to last only a few weeks. We discuss these results in the light of earlier works.
NASA Astrophysics Data System (ADS)
Yeung, Chi Ho
In this thesis, we study two interdisciplinary problems in the framework of statistical physics, which show the broad applicability of physics to problems of various origins. The first problem is an optimization problem: allocating resources on random regular networks, where frustration arises from competition for resources. When the initial resources are uniform, different regimes with discrete fractions of satisfied nodes are observed, resembling the Devil's staircase. We apply spin glass theory in our analysis and demonstrate how functional recursions are converted to simple recursions of probabilities. Equilibrium properties such as the average energy and the fraction of free nodes are derived. When the initial resources are bimodally distributed, an increase in the fraction of rich nodes induces a glassy transition into a glassy phase characterized by multiple metastable states, which we analyse with the replica symmetry breaking ansatz. The second problem is the study of multi-agent systems modeling financial markets. Agents in the system trade among themselves and self-organize to produce macroscopic trading behaviors resembling real financial markets, including arbitraging activities and the setting-up and following of price trends. A phase diagram of these behaviors is obtained as a function of the sensitivity of price and the market impact factor. We finally test the applicability of the models on real financial data, including the Hang Seng Index, the Nasdaq Composite and the Dow Jones Industrial Average. A substantial fraction of agents gains faster than the inflation rate of the indices, suggesting the possibility of using multi-agent systems as a tool for real trading.
NASA Astrophysics Data System (ADS)
Liu, Wenjia; Schmittmann, B.; Zia, R. K. P.
2014-05-01
In a recent work (Liu et al, 2013 J. Stat. Mech. P08001), we introduced dynamic networks with preferred degrees and presented simulation and analytic studies of a single, homogeneous system as well as two interacting networks. Here, we extend these studies to a wider range of parameter space, in a more systematic fashion. Though the interaction we introduced seems simple and intuitive, it produced dramatically different behavior in the single- and two-network systems. Specifically, partitioning the single network into two identical sectors, we find the cross-link distribution to be a sharply peaked Gaussian. In stark contrast, we find a very broad and flat plateau in the case of two interacting identical networks. A sound understanding of this phenomenon remains elusive. Exploring more asymmetric interacting networks, we discover a kind of ‘universal behavior’ for systems in which the ‘introverts’ (nodes with smaller preferred degree) are far outnumbered. Remarkably, an approximation scheme for their degree distribution can be formulated, leading to very successful predictions.
NASA Astrophysics Data System (ADS)
Rabbel, Hauke; Frey, Holger; Schmid, Friederike
2015-12-01
The reaction of AB_m monomers (m = 2, 3) with a multifunctional B_f-type polymer chain ("hypergrafting") is studied by coarse-grained molecular dynamics simulations. The AB_m monomers are hypergrafted using the slow monomer addition strategy. Fully dendronized, i.e., perfectly branched polymers are also simulated for comparison. The degree of branching of the molecules obtained with the "hypergrafting" process critically depends on the rate with which monomers attach to inner monomers compared to terminal monomers. This ratio is more favorable if the AB_m monomers have lower reactivity, since the free monomers then have time to diffuse inside the chain. Configurational chain properties are also determined, showing that the stretching of the polymer backbone as a consequence of the "hypergrafting" procedure is much less pronounced than for perfectly dendronized chains. Furthermore, we analyze the scaling of various quantities with molecular weight M for large M (M > 100). The Wiener index scales as M^2.3, which is intermediate between linear chains (M^3) and perfectly branched polymers (M^2 ln(M)). The polymer size, characterized by the radius of gyration R_g or the hydrodynamic radius R_h, is found to scale as R_g,h ∝ M^ν with ν ≈ 0.38, which lies between the exponent of diffusion limited aggregation (ν = 0.4) and the mean-field exponent predicted by Konkolewicz and co-workers [Phys. Rev. Lett. 98, 238301 (2007)] (ν = 0.33).
Statistical ecology comes of age.
Gimenez, Olivier; Buckland, Stephen T; Morgan, Byron J T; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric
2014-12-01
The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1-4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data. PMID:25540151
Predict! Teaching Statistics Using Informational Statistical Inference
ERIC Educational Resources Information Center
Makar, Katie
2013-01-01
Statistics is one of the most widely used topics for everyday life in the school mathematics curriculum. Unfortunately, the statistics taught in schools focuses on calculations and procedures before students have a chance to see it as a useful and powerful tool. Researchers have found that a dominant view of statistics is as an assortment of tools…
Statistics Poker: Reinforcing Basic Statistical Concepts
ERIC Educational Resources Information Center
Leech, Nancy L.
2008-01-01
Learning basic statistical concepts does not need to be tedious or dry; it can be fun and interesting through cooperative learning in the small-group activity of Statistics Poker. This article describes a teaching approach for reinforcing basic statistical concepts that can help students who have high anxiety and makes learning and reinforcing…
Statistical Thermodynamics of Biomembranes
Devireddy, Ram V.
2010-01-01
An overview of the major issues involved in the statistical thermodynamic treatment of phospholipid membranes at the atomistic level is summarized: thermodynamic ensembles, initial configuration (or the physical system being modeled), force field representation, as well as the representation of long-range interactions. This is followed by a description of the various ways that the simulated ensembles can be analyzed: area of the lipid, mass density profiles, radial distribution functions (RDFs), water orientation profile, deuterium order parameter, free energy profiles and void (pore) formation; with particular focus on the results obtained from our recent molecular dynamics (MD) simulations of phospholipids interacting with dimethylsulfoxide (Me2SO), a commonly used cryoprotective agent (CPA). PMID:19460363
Statistical physics "Beyond equilibrium"
Ecke, Robert E
2009-01-01
The scientific challenges of the 21st century will increasingly involve competing interactions, geometric frustration, spatial and temporal intrinsic inhomogeneity, nanoscale structures, and interactions spanning many scales. We will focus on a broad class of emerging problems that will require new tools in non-equilibrium statistical physics and that will find application in new material functionality, in predicting complex spatial dynamics, and in understanding novel states of matter. Our work will encompass materials under extreme conditions involving elastic/plastic deformation, competing interactions, intrinsic inhomogeneity, frustration in condensed matter systems, scaling phenomena in disordered materials from glasses to granular matter, quantum chemistry applied to nano-scale materials, soft-matter materials, and spatio-temporal properties of both ordinary and complex fluids.
Elements of Statistical Mechanics
NASA Astrophysics Data System (ADS)
Sachs, Ivo; Sen, Siddhartha; Sexton, James
2006-05-01
This textbook provides a concise introduction to the key concepts and tools of modern statistical mechanics. It also covers advanced topics such as non-relativistic quantum field theory and numerical methods. After introducing classical analytical techniques, such as cluster expansion and Landau theory, the authors present important numerical methods with applications to magnetic systems, Lennard-Jones fluids and biophysics. Quantum statistical mechanics is discussed in detail and applied to Bose-Einstein condensation and topics in astrophysics and cosmology. In order to describe emergent phenomena in interacting quantum systems, canonical non-relativistic quantum field theory is introduced and then reformulated in terms of Feynman integrals. Combining the authors' many years' experience of teaching courses in this area, this textbook is ideal for advanced undergraduate and graduate students in physics, chemistry and mathematics. The book presents analytical and numerical techniques in one text, including sample codes and solved problems on the web at www.cambridge.org/0521841984. It covers a wide range of applications, including magnetic systems, turbulence, astrophysics, and biology, and contains a concise introduction to Markov processes and molecular dynamics.
Statistical Reference Datasets
National Institute of Standards and Technology Data Gateway
Statistical Reference Datasets (Web, free access) The Statistical Reference Datasets project is also supported by the Standard Reference Data Program. The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of statistical software.
Statistical dynamics of early river networks
NASA Astrophysics Data System (ADS)
Wang, Xu-Ming; Wang, Peng; Zhang, Ping; Hao, Rui; Huo, Jie
2012-10-01
Based on a local erosion rule and fluctuations in rainfall, geology and the parameters of a river channel, a generalized Langevin equation is proposed to describe the random prolongation of a river channel. This equation is transformed into the Fokker-Planck equation to follow the early evolution of a river network and the variation of the probability distribution of channel lengths. The general solution of the equation is the product of two terms: one in power form and the other in exponential form. This distribution shows a complete history of a river network evolving from its infancy to "adulthood". The infancy is characterized by a Gaussian distribution of the channel lengths, while adulthood is marked by a power-law distribution of the channel lengths. The evolution of the distribution from Gaussian to power law displays the gradual development of the river network. The distribution of basin areas is obtained by means of Hack's law. These results provide new understanding of river networks.
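The random-prolongation idea can be illustrated with a toy stochastic integration. The sketch below is an illustrative Euler-Maruyama scheme with a guessed drift-plus-multiplicative-noise form; the paper's generalized Langevin equation and its Fokker-Planck analysis are more elaborate, so treat every parameter here as an assumption.

```python
import random

def grow_channels(n_channels=500, steps=400, dt=0.01,
                  drift=1.0, noise=0.5, seed=1):
    """Euler-Maruyama integration of a toy channel-prolongation equation

        dL = drift * dt + noise * L * dW,

    i.e. deterministic erosion-driven growth plus a length-dependent
    random term standing in for rainfall/geology fluctuations.
    Returns the final ensemble of channel lengths."""
    rng = random.Random(seed)
    lengths = [1.0] * n_channels
    sqrt_dt = dt ** 0.5
    for _ in range(steps):
        for i in range(n_channels):
            dw = rng.gauss(0.0, sqrt_dt)
            lengths[i] = max(lengths[i] + drift * dt + noise * lengths[i] * dw,
                             1e-9)   # a channel cannot shrink below zero length
    return lengths

final = grow_channels()
mean_length = sum(final) / len(final)
```

Histogramming `final` at early versus late times is the numerical analogue of watching the length distribution evolve from its infant to its adult form.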
Statistical Ensemble of Large Eddy Simulations
NASA Technical Reports Server (NTRS)
Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)
2001-01-01
A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large-scale velocity fields is used to propose an ensemble-averaged version of the dynamic model. This produces local model parameters that depend only on the statistical properties of the flow. An important property of the ensemble-averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LESs provides statistics of the large-scale velocity that can be used to build new models for the subgrid-scale stress tensor. The ensemble-averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time-developing plane wake. It is found that the results are almost independent of the number of LESs in the statistical ensemble provided that the ensemble contains at least 16 realizations.
NASA Astrophysics Data System (ADS)
Kryza, Maciej; Wałaszek, Kinga; Ojrzyńska, Hanna; Szymanowski, Mariusz; Werner, Małgorzata; Dore, Anthony J.
2016-03-01
In this work, we present the results of high-resolution dynamical downscaling of air temperature, relative humidity, wind speed and direction for the area of Poland with the Weather Research and Forecasting (WRF) model. The model is configured using three nested domains, with spatial resolutions of 45 km × 45 km, 15 km × 15 km and 5 km × 5 km. The ERA-Interim database is used for boundary conditions. The results are evaluated by comparison with station measurements for the period 1981-2010. The model is capable of reproducing the main climatological features of the study area. The results are in very close agreement with the measurements, especially for the air temperature. For all four meteorological variables, the model captures the seasonal and daily cycles. For the air temperature, the model underestimates the measurements in winter and overestimates them in summer; the opposite is the case for relative humidity. There is a strong diurnal pattern in the mean error, which changes seasonally. The agreement with the measurements is worse for the seashore and mountain areas, which suggests that the 5 km × 5 km grid might still have insufficient spatial resolution. There is no statistically significant temporal trend in the model performance. The larger year-to-year changes in model performance, e.g. for the years 1982 and 2010 for the air temperature, should therefore be linked with the natural variability of meteorological conditions.
The standard map: From Boltzmann-Gibbs statistics to Tsallis statistics
NASA Astrophysics Data System (ADS)
Tirnakli, Ugur; Borges, Ernesto P.
2016-03-01
As is well known, Boltzmann-Gibbs statistics is the correct way of thermostatistically approaching ergodic systems. On the other hand, nontrivial ergodicity breakdown and strong correlations typically drag the system into out-of-equilibrium states where Boltzmann-Gibbs statistics fails. For a wide class of such systems, it has been shown in recent years that the correct approach is to use Tsallis statistics instead. Here we show how the dynamics of the paradigmatic conservative (area-preserving) standard map exhibits, in an exceptionally clear manner, the crossover from one statistics to the other. Our results unambiguously illustrate the domains of validity of both the Boltzmann-Gibbs and the Tsallis statistical distributions. Since various important physical systems, from particle confinement in magnetic traps to autoionization of molecular Rydberg states, through particle dynamics in accelerators and comet dynamics, can be reduced to the standard map, our results are expected to enlighten and enable an improved interpretation of diverse experimental and observational results.
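The standard map has a compact explicit form, so its dynamics are easy to reproduce. A minimal sketch follows (parameter values are illustrative; the statistical crossover study itself requires long-time averages far beyond this snippet):

```python
import math

TWO_PI = 2.0 * math.pi

def standard_map(theta, p, K):
    """One iteration of the Chirikov standard map (area-preserving):
        p'     = p + K * sin(theta)   (mod 2*pi)
        theta' = theta + p'           (mod 2*pi)
    """
    p_new = (p + K * math.sin(theta)) % TWO_PI
    theta_new = (theta + p_new) % TWO_PI
    return theta_new, p_new

def orbit(theta0, p0, K, n):
    """Iterate the map n times and return the visited points."""
    points = [(theta0, p0)]
    for _ in range(n):
        points.append(standard_map(*points[-1], K))
    return points

# For K = 0 the momentum is exactly conserved (integrable regime);
# for large K the phase space is dominated by chaos.
regular = orbit(1.0, 0.5, 0.0, 100)
```

Plotting orbits for small and large K shows the regular and chaotic regions whose coexistence underlies the Boltzmann-Gibbs/Tsallis crossover discussed in the abstract.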
A statistical mechanical problem?
Costa, Tommaso; Ferraro, Mario
2014-01-01
The problem of deriving the processes of perception and cognition, or the modes of behavior, from states of the brain appears to be unsolvable in view of the huge numbers of elements involved. However, neural activities are not random, nor independent, but constrained to form spatio-temporal patterns, and thanks to these restrictions, which in turn are due to connections among neurons, the problem can at least be approached. The situation is similar to what happens in large physical ensembles, where global behaviors are derived from microscopic properties. Despite the obvious differences between neural and physical systems, a statistical mechanics approach is almost inescapable, since the dynamics of the brain as a whole are clearly determined by the outputs of single neurons. In this paper it will be shown how, starting from very simple systems, connectivity engenders levels of increasing complexity in the functions of the brain, depending on specific constraints. Correspondingly, levels of explanation must take into account the fundamental role of constraints and assign to each level proper model structures and variables that, on the one hand, emerge from the outputs of the lower levels, and yet are specific, in that they ignore irrelevant details. PMID:25228891
Overweight and Obesity Statistics
This publication describes the prevalence of overweight and ...
Minnesota Health Statistics 1988.
ERIC Educational Resources Information Center
Minnesota State Dept. of Health, St. Paul.
This document comprises the 1988 annual statistical report of the Minnesota Center for Health Statistics. After introductory technical notes on changes in format, sources of data, and geographic allocation of vital events, an overview is provided of vital health statistics in all areas. Thereafter, separate sections of the report provide tables…
ERIC Educational Resources Information Center
Lenard, Christopher; McCarthy, Sally; Mills, Terence
2014-01-01
There are many different aspects of statistics. Statistics involves mathematics, computing, and applications to almost every field of endeavour. Each aspect provides an opportunity to spark someone's interest in the subject. In this paper we discuss some ethical aspects of statistics, and describe how an introduction to ethics has been…
Thorslund, J; Misfeldt, J
1989-07-01
The classical methodological problem of suicidology is the reliability of official statistics. In this article, some recent contributions to the debate, particularly concerning the increased problem of suicide among Inuit, are reviewed. Secondly, the suicide statistics of Greenland are analyzed, with the conclusion that the official statistics, as published by the Danish Board of Health, are generally reliable concerning Greenland. PMID:2789569
ERIC Educational Resources Information Center
Strasser, Nora
2007-01-01
Avoiding statistical mistakes is important for educators at all levels. Basic concepts will help you to avoid making mistakes using statistics and to look at data with a critical eye. Statistical data is used at educational institutions for many purposes. It can be used to support budget requests, changes in educational philosophy, changes to…
Explorations in statistics: statistical facets of reproducibility.
Curran-Everett, Douglas
2016-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eleventh installment of Explorations in Statistics explores statistical facets of reproducibility. If we obtain an experimental result that is scientifically meaningful and statistically unusual, we would like to know that our result reflects a general biological phenomenon that another researcher could reproduce if (s)he repeated our experiment. But more often than not, we may learn this researcher cannot replicate our result. The National Institutes of Health and the Federation of American Societies for Experimental Biology have created training modules and outlined strategies to help improve the reproducibility of research. These particular approaches are necessary, but they are not sufficient. The principles of hypothesis testing and estimation are inherent to the notion of reproducibility in science. If we want to improve the reproducibility of our research, then we need to rethink how we apply fundamental concepts of statistics to our science. PMID:27231259
Detection statistics in the micromaser
NASA Astrophysics Data System (ADS)
Johnson, David B.; Schieve, W. C.
2001-03-01
We present a general method for the derivation of various statistical quantities describing the detection of a beam of atoms emerging from a micromaser. The use of non-normalized conditioned density operators and a linear master equation for the dynamics between detection events is discussed as are the counting statistics, sequence statistics, and waiting time statistics. In particular, we derive expressions for the mean number of successive detections of atoms in one of any two orthogonal states of the two-level atom. We also derive expressions for the mean waiting times between detections. We show that the mean waiting times between detections of atoms in like states are equivalent to the mean waiting times calculated from the uncorrelated steady state detection rates, although like atoms are indeed correlated. The mean waiting times between detections of atoms in unlike states exhibit correlations. We evaluate the expressions for various detector efficiencies using numerical integration, reporting results for the standard micromaser arrangement in which the cavity is pumped by excited atoms and the excitation levels of the emerging atoms are measured. In addition, the atomic inversion and the Fano-Mandel function for the detection of deexcited atoms are calculated for comparison to the experimental results of Weidinger et al. [Phys. Rev. Lett. 82, 3795 (1999)], who report the observation of trapping states.
Nonlinear Statistical Modeling of Speech
NASA Astrophysics Data System (ADS)
Srinivasan, S.; Ma, T.; May, D.; Lazarou, G.; Picone, J.
2009-12-01
Contemporary approaches to speech and speaker recognition decompose the problem into four components: feature extraction, acoustic modeling, language modeling and search. Statistical signal processing is an integral part of each of these components, and Bayes Rule is used to merge these components into a single optimal choice. Acoustic models typically use hidden Markov models based on Gaussian mixture models for state output probabilities. This popular approach suffers from an inherent assumption of linearity in speech signal dynamics. Language models often employ a variety of maximum entropy techniques, but can employ many of the same statistical techniques used for acoustic models. In this paper, we focus on introducing nonlinear statistical models to the feature extraction and acoustic modeling problems as a first step towards speech and speaker recognition systems based on notions of chaos and strange attractors. Our goal in this work is to improve the generalization and robustness properties of a speech recognition system. Three nonlinear invariants are proposed for feature extraction: Lyapunov exponents, correlation fractal dimension, and correlation entropy. We demonstrate an 11% relative improvement on speech recorded under noise-free conditions, but show a comparable degradation occurs for mismatched training conditions on noisy speech. We conjecture that the degradation is due to difficulties in estimating invariants reliably from noisy data. To circumvent these problems, we introduce two dynamic models to the acoustic modeling problem: (1) a linear dynamic model (LDM) that uses a state space-like formulation to explicitly model the evolution of hidden states using an autoregressive process, and (2) a data-dependent mixture of autoregressive (MixAR) models. Results show that LDM and MixAR models can achieve comparable performance with HMM systems while using significantly fewer parameters. Currently we are developing Bayesian parameter estimation and discriminative training algorithms for these new models to improve noise robustness.
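Of the three nonlinear invariants named above, the Lyapunov exponent is the easiest to sketch. The toy below estimates it for the logistic map rather than for speech signals (which would require attractor reconstruction from data), purely to illustrate what the quantity measures; the map and parameters are illustrative choices, not the paper's.

```python
import math

def lyapunov_logistic(r, n=100_000, x0=0.2, transient=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the time average of log|f'(x)| = log|r*(1 - 2*x)|.
    Positive values signal sensitive dependence on initial conditions."""
    x = x0
    for _ in range(transient):       # discard transient behaviour
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        # Guard against the (measure-zero) case x == 0.5 exactly.
        total += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
    return total / n

chaotic = lyapunov_logistic(4.0)     # positive: chaotic regime
stable = lyapunov_logistic(2.5)      # negative: attracting fixed point
```

For speech, the same time-average-of-local-stretching idea is applied to a state space reconstructed from the signal by delay embedding.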
Statistical Properties of Online Auctions
NASA Astrophysics Data System (ADS)
Namazi, Alireza; Schadschneider, Andreas
We characterize the statistical properties of a large number of online auctions run on eBay. Both stationary and dynamic properties, like distributions of prices, number of bids etc., as well as relations between these quantities are studied. The analysis of the data reveals surprisingly simple distributions and relations, typically of power-law form. Based on these findings we introduce a simple method to identify suspicious auctions that could be influenced by a form of fraud known as shill bidding. Furthermore the influence of bidding strategies is discussed. The results indicate that the observed behavior is related to a mixture of agents using a variety of strategies.
Statistical mechanics of violent relaxation
NASA Technical Reports Server (NTRS)
Spergel, David N.; Hernquist, Lars
1992-01-01
We propose a functional that is extremized through violent relaxation. It is based on the Ansatz that the wave-particle scattering during violent dynamical processes can be approximated as a sequence of discrete scattering events that occur near a particle's perigalacticon. This functional has an extremum whose structure closely resembles that of spheroidal stellar systems such as elliptical galaxies. The results described here, therefore, provide a simple framework for understanding the physical nature of violent relaxation and support the view that galaxies are structured in accord with fundamental statistical principles.
Nonstationary statistical theory for multipactor
Anza, S.; Vicente, C.; Gil, J.
2010-06-15
This work presents a new and general approach to the real dynamics of the multipactor process: the nonstationary statistical multipactor theory. The nonstationary theory removes the stationarity assumption of the classical theory and, as a consequence, it is able to adequately model electron exponential growth as well as absorption processes, above and below the multipactor breakdown level. In addition, it considers both double-surface and single-surface interactions constituting a full framework for nonresonant polyphase multipactor analysis. This work formulates the new theory and validates it with numerical and experimental results with excellent agreement.
The Statistical Mechanics of Zombies
NASA Astrophysics Data System (ADS)
Alemi, Alexander A.; Bierbaum, Matthew; Myers, Christopher R.; Sethna, James P.
2015-03-01
We present results and analysis from a large scale exact stochastic dynamical simulation of a zombie outbreak. Zombies have attracted some attention lately as a novel and interesting twist on classic disease models. While most of the initial investigations have focused on the continuous, fully mixed dynamics of a differential equation model, we have explored stochastic, discrete simulations on lattices. We explore some of the basic statistical mechanical properties of the zombie model, including its phase diagram and critical exponents. We report on several variant models, including both homogeneous and inhomogeneous lattices, as well as allowing diffusive motion of infected hosts. We build up to a full scale simulation of an outbreak in the United States, and discover that for `realistic' parameters, we are largely doomed.
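A lattice outbreak of this kind is straightforward to sketch. The toy below uses synchronous updates and guessed bite/removal probabilities, whereas the paper runs an exact (Gillespie-style) stochastic simulation, so this is an illustration of the model class only.

```python
import random

S, Z, R = 0, 1, 2  # susceptible, zombie, removed

def zombie_outbreak(L=40, bite=0.3, kill=0.1, steps=150, seed=7):
    """Toy discrete-time SZR model on an L x L periodic lattice.

    Each step, every zombie tries to bite each susceptible 4-neighbour
    with probability `bite`, and is itself removed with probability
    `kill`. Returns the final (susceptible, zombie, removed) counts."""
    rng = random.Random(seed)
    grid = [[S] * L for _ in range(L)]
    grid[L // 2][L // 2] = Z          # patient zero
    for _ in range(steps):
        bites, removals = [], []
        for i in range(L):
            for j in range(L):
                if grid[i][j] == Z:
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = (i + di) % L, (j + dj) % L
                        if grid[ni][nj] == S and rng.random() < bite:
                            bites.append((ni, nj))
                    if rng.random() < kill:
                        removals.append((i, j))
        for i, j in bites:
            grid[i][j] = Z
        for i, j in removals:
            grid[i][j] = R
    flat = [c for row in grid for c in row]
    return flat.count(S), flat.count(Z), flat.count(R)

counts = zombie_outbreak()
```

Sweeping the bite/removal ratio and recording whether the infection dies out or spans the lattice is the route to the phase diagram and critical exponents mentioned in the abstract.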
Statistical theory of cubic Langmuir turbulence
NASA Technical Reports Server (NTRS)
Sun, G.-Z.; Nicholson, D. R.; Rose, H. A.
1985-01-01
The cubic direct interaction approximation is applied to a truncated (in Fourier space) version of the cubically nonlinear Schroedinger equation model of Langmuir physics. The results are compared (in the three-mode case) to those for an ensemble of numerical solutions of the dynamical equations with 10,000 different sets of Gaussianly distributed initial conditions. In the undriven, undamped case, the statistical theory (but not the ensemble) evolves to a state of thermal equilibrium. In the driven, damped case, the statistical theory appears to evolve to a state close to that corresponding to one of the limit cycles of the dynamical equations.
Clustering statistics in cosmology
NASA Astrophysics Data System (ADS)
Martinez, Vicent; Saar, Enn
2002-12-01
The main tools in cosmology for comparing theoretical models with the observations of the galaxy distribution are statistical. We will review the applications of spatial statistics to the description of the large-scale structure of the universe. Special topics discussed in this talk will be: description of the galaxy samples, selection effects and biases, correlation functions, Fourier analysis, nearest neighbor statistics, Minkowski functionals and structure statistics. Special attention will be devoted to scaling laws and the use of the lacunarity measures in the description of the cosmic texture.
Statistical distribution sampling
NASA Technical Reports Server (NTRS)
Johnson, E. S.
1975-01-01
Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.
Statistical regimes of random laser fluctuations
Lepri, Stefano; Cavalieri, Stefano; Oppo, Gian-Luca; Wiersma, Diederik S.
2007-06-15
Statistical fluctuations of the light emitted from amplifying random media are studied theoretically and numerically. The characteristic scales of the diffusive motion of light lead to Gaussian or power-law (Levy) distributed fluctuations depending on external control parameters. In the Levy regime, the output pulse is highly irregular, leading to huge deviations from a mean-field description. Monte Carlo simulations of a simplified model which includes the population of the medium demonstrate the two statistical regimes and provide a comparison with dynamical rate equations. The different statistics of the fluctuations help to explain recent experimental observations reported in the literature.
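The Gaussian-to-Levy crossover mentioned above can be demonstrated with a simple, well-known mechanism: exponentially distributed path lengths in the gain medium are exponentially amplified, producing power-law distributed intensities. A hedged stdlib sketch (the parameter values are illustrative assumptions, not taken from the paper):

```python
import math
import random

def sample_intensities(gain, lbar, n=20000, seed=0):
    """Draw emitted intensities I = exp(gain * l), where the path length l in
    the amplifying medium is exponential with mean lbar.  The result is
    power-law (Levy) distributed, P(I) ~ I**-(1 + alpha) with
    alpha = 1/(gain * lbar); for gain * lbar >= 1 the mean intensity diverges
    and the fluctuations become huge."""
    random.seed(seed)
    return [math.exp(gain * random.expovariate(1.0 / lbar)) for _ in range(n)]

def tail_exponent(samples):
    """Estimate the tail exponent alpha from the samples: alpha = 1 / <ln I>."""
    return len(samples) / sum(math.log(s) for s in samples)
```

With `gain * lbar < 1` the estimated exponent exceeds 1 and moments exist; pushing `gain * lbar` past 1 drives the sketch into the Levy regime where mean-field descriptions fail.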
Introductory statistical mechanics for electron storage rings
NASA Astrophysics Data System (ADS)
Jowett, John M.
1987-02-01
These lectures concentrate on statistical phenomena in electron storage rings. A stored electron beam is a dissipative, fluctuating system far from equilibrium whose mathematical description can be based upon non-equilibrium statistical mechanics. Stochastic differential equations are used to describe the quantum fluctuations of synchrotron radiation which is the main cause of randomness in electron dynamics. Fluctuating radiation reaction forces can be described via stochastic terms in Hamilton's equations of motion. Normal modes of particle motion, radiation damping effects, quantum diffusion in single-particle phase space are all discussed in this statistical formalism. (AIP)
STAZ: interactive software for undergraduate statistics.
Hatchette, V; Zivian, A R; Zivian, M T; Okada, R
1999-02-01
STAZ is an interactive computer program that demonstrates statistical concepts, many of which cannot be readily demonstrated using conventional methods. Use of dynamic graphics encourages active engagement with challenging statistical concepts. The program consists of 13 graphical demonstrations, most of which allow for interactive participation by students. A detailed Help file with guided explanations accompanies each demonstration. STAZ is a multiple document interface program that makes full use of Windows features, such as tiling, links, and multitasking. Designed to be used as a supplement for any undergraduate statistics course, STAZ may be used by either instructors in classroom settings or students working independently. PMID:10495829
Introductory statistical mechanics for electron storage rings
Jowett, J.M.
1986-07-01
These lectures introduce the beam dynamics of electron-positron storage rings with particular emphasis on the effects due to synchrotron radiation. They differ from most other introductions in their systematic use of the physical principles and mathematical techniques of the non-equilibrium statistical mechanics of fluctuating dynamical systems. A self-contained exposition of the necessary topics from this field is included. Throughout the development, a Hamiltonian description of the effects of the externally applied fields is maintained in order to preserve the links with other lectures on beam dynamics and to show clearly the extent to which electron dynamics is non-Hamiltonian. The statistical mechanical framework is extended to a discussion of the conceptual foundations of the treatment of collective effects through the Vlasov equation.
Statistical theory of nonadiabatic transitions.
Neufeld, A A
2005-04-22
Based on results of the preceding paper, and assuming fast equilibration in phase space to the temperature of the surrounding media compared to the time scale of a reaction, we formulate a statistical theory of intramolecular nonadiabatic transitions. A classical mechanics description of phase space dynamics allows for an ab initio treatment of multidimensional reaction coordinates and easy combination with any standard molecular dynamics (MD) method. The presented approach has several features that distinguish it from existing methodologies. First, the applicability limits of the approach are well defined. Second, the nonadiabatic transitions are treated dynamically, with full account of detailed balance, including zero-point energy, quantum coherence effects, arbitrarily long memory, and changes in the free energy of the bath. Compared to popular trajectory surface hopping schemes, our MD-based algorithm is more efficient computationally and does not rely on artificial ad hoc constructions such as a "fewest switching" algorithm or rescaling of velocities to conserve total energy. The enhanced capabilities of the new method are demonstrated on a model of two coupled harmonic oscillators. We show that in the rate regime and at moderate friction the approach precisely reproduces the free-energy-gap law. It also predicts a general trend of the reaction dynamics in the low friction limit, and is valid beyond the rate regime. PMID:15945676
Symbolic sequence statistical analysis for free liquid jets
Godelle; Letellier
2000-12-01
Free liquid jets are investigated here as nonlinear dynamical systems. A scalar time series corresponding to the time evolution of the jet diameter is then used to investigate the underlying dynamics in terms of reconstructed phase portraits, Poincare sections, and first-return maps. Particular attention is paid to characterizing the behavior using symbolic sequence statistics that enable different atomization regimes to be distinguished. Such statistics are first applied to theoretical maps to support the results obtained on the jet dynamics. PMID:11138081
ERIC Educational Resources Information Center
Hodgson, Ted; Andersen, Lyle; Robison-Cox, Jim; Jones, Clain
2004-01-01
Water quality experiments, especially the use of macroinvertebrates as indicators of water quality, offer an ideal context for connecting statistics and science. In the STAR program for secondary students and teachers, water quality experiments were also used as a context for teaching statistics. In this article, we trace one activity that uses…
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. PMID:26466186
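The test-efficacy quantities this review walks through (sensitivity, specificity, accuracy, likelihood ratios) all derive from a single 2x2 contingency table. A minimal sketch in Python (the function name and example counts are hypothetical, for illustration only):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Basic diagnostic-test statistics from a 2x2 contingency table:
    tp/fp/fn/tn = true positives, false positives, false negatives,
    true negatives."""
    sens = tp / (tp + fn)          # P(test+ | disease present)
    spec = tn / (tn + fp)          # P(test- | disease absent)
    acc = (tp + tn) / (tp + fp + fn + tn)
    lr_pos = sens / (1 - spec)     # positive likelihood ratio
    lr_neg = (1 - sens) / spec     # negative likelihood ratio
    return {'sensitivity': sens, 'specificity': spec,
            'accuracy': acc, 'LR+': lr_pos, 'LR-': lr_neg}
```

For example, a table with 90 true positives, 10 false positives, 10 false negatives, and 90 true negatives gives sensitivity and specificity of 0.9 each and a positive likelihood ratio of 9.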
ERIC Educational Resources Information Center
Council of Ontario Universities, Toronto.
Summary statistics on application and registration patterns of applicants wishing to pursue full-time study in first-year places in Ontario universities (for the fall of 1987) are given. Data on registrations were received indirectly from the universities as part of their annual submission of USIS/UAR enrollment data to Statistics Canada and MCU.…
Explorations in Statistics: Regression
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2011-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive connection.…
Deconstructing Statistical Analysis
ERIC Educational Resources Information Center
Snell, Joel
2014-01-01
Using a very complex statistical analysis and research method for the sake of enhancing the prestige of an article, or of making a new product or service appear legitimate, needs to be monitored and questioned for accuracy. 1) The more complicated the statistical analysis and research, the fewer learned readers can understand it. This adds a…
The purpose of the Disability Statistics Center is to produce and disseminate statistical information on disability and the status of people with disabilities in American society and to establish and monitor indicators of how conditions are changing over time to meet their health...
Statistical Mapping by Computer.
ERIC Educational Resources Information Center
Utano, Jack J.
The function of a statistical map is to provide readers with a visual impression of the data so that they may be able to identify any geographic characteristics of the displayed phenomena. The increasingly important role played by the computer in the production of statistical maps is manifested by the varied examples of computer maps in recent…
Vijayaraj, Veeraraghavan; Cheriyadat, Anil M; Bhaduri, Budhendra L; Vatsavai, Raju; Bright, Eddie A
2008-01-01
Statistical properties of high-resolution overhead images representing different land use categories are analyzed using various local and global statistical image properties based on the shape of the power spectrum, image gradient distributions, edge co-occurrence, and inter-scale wavelet coefficient distributions. The analysis was performed on a database of high-resolution (1 meter) overhead images representing a multitude of different downtown, suburban, commercial, agricultural and wooded exemplars. Various statistical properties relating to these image categories and their relationships are discussed. The categorical variations in power spectrum contour shapes, the unique gradient distribution characteristics of wooded categories, the similarity in edge co-occurrence statistics for overhead and natural images, and the unique edge co-occurrence statistics of downtown categories are presented in this work. Though previous work on natural image statistics has shown some of the unique characteristics for different categories, the relationships for overhead images are not well understood. The statistical properties of natural images were used in previous studies to develop prior image models, to predict and index objects in a scene and to improve computer vision models. The results from our research findings can be used to augment and adapt computer vision algorithms that rely on prior image statistics to process overhead images, calibrate the performance of overhead image analysis algorithms, and derive features for better discrimination of overhead image categories.
Explorations in Statistics: Correlation
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2010-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This sixth installment of "Explorations in Statistics" explores correlation, a familiar technique that estimates the magnitude of a straight-line relationship between two variables. Correlation is meaningful only when the…
Explorations in Statistics: Power
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2010-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fifth installment of "Explorations in Statistics" revisits power, a concept fundamental to the test of a null hypothesis. Power is the probability that we reject the null hypothesis when it is false. Four things affect…
Multidimensional Visual Statistical Learning
ERIC Educational Resources Information Center
Turk-Browne, Nicholas B.; Isola, Phillip J.; Scholl, Brian J.; Treat, Teresa A.
2008-01-01
Recent studies of visual statistical learning (VSL) have demonstrated that statistical regularities in sequences of visual stimuli can be automatically extracted, even without intent or awareness. Despite much work on this topic, however, several fundamental questions remain about the nature of VSL. In particular, previous experiments have not…
ERIC Educational Resources Information Center
Huizingh, Eelko K. R. E.
2007-01-01
Accessibly written and easy to use, "Applied Statistics Using SPSS" is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. What is unique about Eelko Huizingh's approach is that this book is based around the needs of undergraduate students embarking on their own research project, and its self-help style is designed to…
NASA Technical Reports Server (NTRS)
Laird, Philip
1992-01-01
We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.
Ector, Hugo
2010-12-01
I still remember my first book on statistics: "Elementary statistics with applications in medicine and the biological sciences" by Frederick E. Croxton. For me, it was the start of my pursuit of understanding statistics in daily life and in medical practice. It was the first volume in a long row of books. In his introduction, Croxton asserts that "nearly everyone involved in any aspect of medicine needs to have some knowledge of statistics". The reality is that for many clinicians, statistics are limited to a "P < 0.05 = ok". I do not blame my colleagues who omit the paragraph on statistical methods. They have never had the opportunity to learn concise and clear descriptions of the key features. I have experienced how some authors can describe difficult methods in well understandable language. Others fail completely. As a teacher, I tell my students that life is impossible without a basic knowledge of statistics. This feeling has resulted in an annual seminar of 90 minutes. This tutorial is the summary of that seminar. It is a summary and a transcription of the best pages I have detected. PMID:21302664
STATWIZ - AN ELECTRONIC STATISTICAL TOOL (ABSTRACT)
StatWiz is a web-based, interactive, and dynamic statistical tool for researchers. It will allow researchers to input information and/or data and then receive experimental design options, or outputs from data analysis. StatWiz is envisioned as an expert system that will walk rese...
LED champing: statistically blessed?
Wang, Zhuo
2015-06-10
LED champing (smart mixing of individual LEDs to match the desired color and lumens) and color mixing strategies have been widely used to maintain the color consistency of light engines. Light engines with champed LEDs can easily achieve color consistency within a couple of MacAdam steps, even starting from widely distributed LEDs. From a statistical point of view, the distributions of the color coordinates and the flux after champing are studied. The related statistical parameters are derived, which facilitate process improvements such as Six Sigma and are instrumental for statistical quality control in mass production. PMID:26192863
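The variance-reduction idea behind mixing LEDs can be illustrated with a stdlib Monte Carlo: averaging the flux of k randomly drawn LEDs shrinks the engine-to-engine spread by a factor of sqrt(k). A sketch with hypothetical flux parameters (not values from the paper, and flux only; chromaticity mixing works analogously per color coordinate):

```python
import random
import statistics

def champ_flux(mu, sigma, k, n=5000, seed=0):
    """Monte Carlo of n light engines, each built from k randomly drawn LEDs
    with flux ~ Normal(mu, sigma).  Returns the mean and standard deviation of
    the engine-average flux; by the central limit theorem the spread is about
    sigma / sqrt(k)."""
    random.seed(seed)
    engines = [statistics.fmean(random.gauss(mu, sigma) for _ in range(k))
               for _ in range(n)]
    return statistics.fmean(engines), statistics.stdev(engines)
```

With mu = 100 lm, sigma = 5 lm and k = 25 LEDs per engine, the engine-to-engine flux spread drops from about 5 lm to about 1 lm, which is what makes tight consistency targets statistically affordable even from widely distributed LED bins.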
Hemophilia Data and Statistics
… at a very young age. Based on CDC data, the median age at diagnosis is 36 months …
… with sickle cell disease (SCD) by matching up data from studies that monitor all people with SCD …
Cooperative Learning in Statistics.
ERIC Educational Resources Information Center
Keeler, Carolyn M.; And Others
1994-01-01
Formal use of cooperative learning techniques proved effective in improving student performance and retention in a freshman level statistics course. Lectures interspersed with group activities proved effective in increasing conceptual understanding and overall class performance. (11 references) (Author)
Titanic: A Statistical Exploration.
ERIC Educational Resources Information Center
Takis, Sandra L.
1999-01-01
Uses the available data about the Titanic's passengers to interest students in exploring categorical data and the chi-square distribution. Describes activities incorporated into a statistics class and gives additional resources for collecting information about the Titanic. (ASK)
... Working Group. United States Cancer Statistics: 1999–2012 Incidence and Mortality Web-based Report. Atlanta (GA): Department of Health and Human Services, Centers for Disease Control and Prevention, and National Cancer Institute; 2015. ... ...
Teaching Statistics with Minitab.
ERIC Educational Resources Information Center
Hubbard, Ruth
1992-01-01
Discusses the use of the computer software MINITAB in teaching statistics to explore concepts, simulate games of chance, transform the normal variable into a z-score, and stimulate small and large group discussions. (MDH)
Understanding Solar Flare Statistics
NASA Astrophysics Data System (ADS)
Wheatland, M. S.
2005-12-01
A review is presented of work aimed at understanding solar flare statistics, with emphasis on the well known flare power-law size distribution. Although avalanche models are perhaps the favoured model to describe flare statistics, their physical basis is unclear, and they are divorced from developing ideas in large-scale reconnection theory. An alternative model, aimed at reconciling large-scale reconnection models with solar flare statistics, is revisited. The solar flare waiting-time distribution has also attracted recent attention. Observed waiting-time distributions are described, together with what they might tell us about the flare phenomenon. Finally, a practical application of flare statistics to flare prediction is described in detail, including the results of a year of automated (web-based) predictions from the method.
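The flare power-law size distribution discussed above is easy to explore numerically: inverse-transform sampling generates power-law "flare sizes", and a Hill-type maximum-likelihood estimator recovers the exponent. A sketch (the exponent 1.8 is only a typical literature figure, used here as an assumption):

```python
import math
import random

def sample_flare_sizes(gamma, smin, n=50000, seed=0):
    """Inverse-transform sampling of the power law p(s) ~ s**-gamma for
    s >= smin: with u uniform on (0, 1], s = smin * u**(-1/(gamma - 1))."""
    random.seed(seed)
    return [smin * (1.0 - random.random()) ** (-1.0 / (gamma - 1.0))
            for _ in range(n)]

def mle_exponent(samples, smin):
    """Hill / maximum-likelihood estimate of the power-law exponent gamma."""
    return 1.0 + len(samples) / sum(math.log(s / smin) for s in samples)
```

The same estimator applied to observed flare peak fluxes above a completeness threshold is the standard way the size-distribution index quoted in the flare-statistics literature is measured.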
Purposeful Statistical Investigations
ERIC Educational Resources Information Center
Day, Lorraine
2014-01-01
Lorraine Day provides us with a great range of statistical investigations using various resources such as maths300 and TinkerPlots. Each of the investigations links mathematics to students' lives and provides engaging and meaningful contexts for mathematical inquiry.
Presenting the statistical results.
Ng, K H; Peh, W C G
2009-01-01
Statistical methods are reported in a scientific paper to summarise the data that has been collected for a study and to enable its analysis. These methods should be described with enough detail to allow a knowledgeable reader who has access to the original data to verify the reported results. This article provides basic guidelines to aid authors in reporting the statistical aspects of the results of their studies clearly and accurately. PMID:19224078
Corn production with Spray, LEPA, and SDI
Technology Transfer Automated Retrieval System (TEKTRAN)
Corn, a major irrigated crop in the U.S. Great Plains, has a large irrigation requirement making efficient, effective irrigation technology important. The objective of this paper was to compare corn productivity for different irrigation methods and irrigation rates in 2009 and 2010 at Bushland, Texa...
SDI satellite autonomy using AI and Ada
NASA Technical Reports Server (NTRS)
Fiala, Harvey E.
1990-01-01
The use of Artificial Intelligence (AI) and the programming language Ada to help a satellite recover from selected failures that could lead to mission failure are described. An unmanned satellite will have a separate AI subsystem running in parallel with the normal satellite subsystems. A satellite monitoring subsystem (SMS), under the control of a blackboard system, will continuously monitor selected satellite subsystems to become alert to any actual or potential problems. In the case of loss of communications with the earth or the home base, the satellite will go into a survival mode to reestablish communications with the earth. The use of an AI subsystem in this manner would have avoided the tragic loss of the two recent Soviet probes that were sent to investigate the planet Mars and its moons. The blackboard system works in conjunction with an SMS and a reconfiguration control subsystem (RCS). It can be shown to be an effective way for one central control subsystem to monitor and coordinate the activities and loads of many interacting subsystems that may or may not contain redundant and/or fault-tolerant elements. The blackboard system will be coded in Ada using tools such as the ABLE development system and the Ada Production system.
Accelerated molecular dynamics methods
Perez, Danny
2011-01-04
The molecular dynamics method, although extremely powerful for materials simulations, is limited to time scales of roughly one microsecond or less. On longer time scales, dynamical evolution typically consists of infrequent events, which are usually activated processes. This course is focused on understanding infrequent-event dynamics, on methods for characterizing infrequent-event mechanisms and rate constants, and on methods for simulating long time scales in infrequent-event systems, emphasizing the recently developed accelerated molecular dynamics methods (hyperdynamics, parallel replica dynamics, and temperature accelerated dynamics). Some familiarity with basic statistical mechanics and molecular dynamics methods will be assumed.
Statistical Mechanics of Turbulent Dynamos
NASA Technical Reports Server (NTRS)
Shebalin, John V.
2014-01-01
Incompressible magnetohydrodynamic (MHD) turbulence and magnetic dynamos, which occur in magnetofluids with large fluid and magnetic Reynolds numbers, will be discussed. When Reynolds numbers are large and energy decays slowly, the distribution of energy with respect to length scale becomes quasi-stationary and MHD turbulence can be described statistically. In the limit of infinite Reynolds numbers, viscosity and resistivity become zero and if these values are used in the MHD equations ab initio, a model system called ideal MHD turbulence results. This model system is typically confined in simple geometries with some form of homogeneous boundary conditions, allowing for velocity and magnetic field to be represented by orthogonal function expansions. One advantage to this is that the coefficients of the expansions form a set of nonlinearly interacting variables whose behavior can be described by equilibrium statistical mechanics, i.e., by a canonical ensemble theory based on the global invariants (energy, cross helicity and magnetic helicity) of ideal MHD turbulence. Another advantage is that truncated expansions provide a finite dynamical system whose time evolution can be numerically simulated to test the predictions of the associated statistical mechanics. If ensemble predictions are the same as time averages, then the system is said to be ergodic; if not, the system is nonergodic. Although it had been implicitly assumed in the early days of ideal MHD statistical theory development that these finite dynamical systems were ergodic, numerical simulations provided sufficient evidence that they were, in fact, nonergodic. Specifically, while canonical ensemble theory predicted that expansion coefficients would be (i) zero-mean random variables with (ii) energy that decreased with length scale, it was found that although (ii) was correct, (i) was not and the expected ergodicity was broken. 
The exact cause of this broken ergodicity was explained, after much investigation, by greatly extending the statistical theory of ideal MHD turbulence. The mathematical details of broken ergodicity, in fact, give a quantitative explanation of how coherent structure, dynamic alignment and force-free states appear in turbulent magnetofluids. The relevance of these ideal results to real MHD turbulence occurs because broken ergodicity is most manifest in the ideal case at the largest length scales and it is in these largest scales that a real magnetofluid has the least dissipation, i.e., most closely approaches the behavior of an ideal magnetofluid. Furthermore, the effects grow stronger when cross and magnetic helicities grow large with respect to energy, and this is exactly what occurs with time in a real magnetofluid, where it is called selective decay. The relevance of these results found in ideal MHD turbulence theory to the real world is that they provide at least a qualitative explanation of why confined turbulent magnetofluids, such as the liquid iron that fills the Earth's outer core, produce stationary, large-scale magnetic fields, i.e., the geomagnetic field. These results should also apply to other planets as well as to plasma confinement devices on Earth and in space, and the effects should be manifest if Reynolds numbers are high enough and there is enough time for stationarity to occur, at least approximately. In the presentation, details will be given for both theoretical and numerical results, and references will be provided.
Nonlinear statistical coupling
NASA Astrophysics Data System (ADS)
Nelson, Kenric P.; Umarov, Sabir
2010-06-01
By considering a nonlinear combination of the probabilities of a system, a physical interpretation of Tsallis statistics as representing the nonlinear coupling or decoupling of statistical states is proposed. The escort probability is interpreted as the coupled probability, with Q=1-q defined as the degree of nonlinear coupling between the statistical states. Positive values of Q have coupled statistical states, a larger entropy metric, and a maximum coupled-entropy distribution of compact-support coupled-Gaussians. Negative values of Q have decoupled statistical states and for -2
Statistical Physics of Fracture
Alava, Mikko; Nukala, Phani K; Zapperi, Stefano
2006-05-01
Disorder and long-range interactions are two of the key components that make material failure an interesting playground for the application of statistical mechanics. The cornerstone in this respect has been lattice models of fracture in which a network of elastic beams, bonds, or electrical fuses with random failure thresholds is subject to an increasing external load. These models describe on a qualitative level the failure processes of real, brittle, or quasi-brittle materials. This has been particularly important in solving the classical engineering problems of material strength: the size dependence of maximum stress and its sample-to-sample statistical fluctuations. At the same time, lattice models pose many new fundamental questions in statistical physics, such as the relation between fracture and phase transitions. Experimental results point to the existence of an intriguing crackling noise in the acoustic emission and of self-affine fractals in the crack surface morphology. Recent advances in computer power have enabled considerable progress in the understanding of such models. Among these partly still controversial issues are the scaling and size effects in material strength and accumulated damage, the statistics of avalanches or bursts of microfailures, and the morphology of the crack surface. Here we present an overview of the results obtained with lattice models for fracture, highlighting the relations with statistical physics theories and more conventional fracture mechanics approaches.
Statistical benchmark for BosonSampling
NASA Astrophysics Data System (ADS)
Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas
2016-03-01
Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects.
Random paths and current fluctuations in nonequilibrium statistical mechanics
Gaspard, Pierre
2014-07-15
An overview is given of recent advances in nonequilibrium statistical mechanics about the statistics of random paths and current fluctuations. Although statistics is carried out in space for equilibrium statistical mechanics, statistics is considered in time or spacetime for nonequilibrium systems. In this approach, relationships have been established between nonequilibrium properties such as the transport coefficients, the thermodynamic entropy production, or the affinities, and quantities characterizing the microscopic Hamiltonian dynamics and the chaos or fluctuations it may generate. This overview presents results for classical systems in the escape-rate formalism, stochastic processes, and open quantum systems.
Candidate Assembly Statistical Evaluation
Energy Science and Technology Software Center (ESTSC)
1998-07-15
The Savannah River Site (SRS) receives aluminum clad spent Material Test Reactor (MTR) fuel from all over the world for storage and eventual reprocessing. There are hundreds of different kinds of MTR fuels and these fuels will continue to be received at SRS for approximately ten more years. SRS's current criticality evaluation methodology requires the modeling of all MTR fuels utilizing Monte Carlo codes, which is extremely time consuming and resource intensive. Now that a significant number of MTR calculations have been conducted it is feasible to consider building statistical models that will provide reasonable estimations of MTR behavior. These statistical models can be incorporated into a standardized model homogenization spreadsheet package to provide analysts with a means of performing routine MTR fuel analyses with a minimal commitment of time and resources. This became the purpose for development of the Candidate Assembly Statistical Evaluation (CASE) program at SRS.
Candidate Assembly Statistical Evaluation
Cude, B. W.
1998-07-15
The Savannah River Site (SRS) receives aluminum clad spent Material Test Reactor (MTR) fuel from all over the world for storage and eventual reprocessing. There are hundreds of different kinds of MTR fuels and these fuels will continue to be received at SRS for approximately ten more years. SRS's current criticality evaluation methodology requires the modeling of all MTR fuels utilizing Monte Carlo codes, which is extremely time consuming and resource intensive. Now that a significant number of MTR calculations have been conducted it is feasible to consider building statistical models that will provide reasonable estimations of MTR behavior. These statistical models can be incorporated into a standardized model homogenization spreadsheet package to provide analysts with a means of performing routine MTR fuel analyses with a minimal commitment of time and resources. This became the purpose for development of the Candidate Assembly Statistical Evaluation (CASE) program at SRS.
Statistical electron densities
Pipek, J.; Varga, I.
1996-12-31
It is known that in numerous interesting systems one-electron states appear with multifractal internal structure. Physical intuition suggests, however, that electron densities should be smooth both at atomic distances and close to the macroscopic limit. Multifractal behavior is expected at intermediate length scales, with observable non-trivial statistical properties in considerably large, but far from macroscopically sized, clusters. We have demonstrated that differences of generalized Rényi entropies serve as relevant quantities for the global characterization of the statistical nature of such electron densities. Asymptotic expansion formulas are elaborated for these values as functions of the length scale of observation. The transition from deterministic electron densities to statistical ones across various lengths of resolution is traced both theoretically and by numerical calculations.
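The Rényi-entropy differences used above are easy to state concretely. A minimal sketch (the example densities are illustrative, not from the paper): for a normalized density p, S_q = ln(Σ p_i^q)/(1−q), and differences such as S_1 − S_2 vanish for a uniform density but grow with structure:

```python
import math

def renyi_entropy(p, q):
    # Rényi entropy S_q = ln(sum_i p_i^q) / (1 - q); q -> 1 gives Shannon.
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(sum(pi ** q for pi in p)) / (1 - q)

# A uniform density vs a localized one over N = 8 sites:
uniform = [1 / 8] * 8
peaked = [0.65] + [0.05] * 7

# The entropy difference S_1 - S_2 is zero for the uniform density
# and positive for the structured one:
d_uniform = renyi_entropy(uniform, 1) - renyi_entropy(uniform, 2)
d_peaked = renyi_entropy(peaked, 1) - renyi_entropy(peaked, 2)
```

For multifractal states one studies how such differences scale with the resolution length, which is what the asymptotic expansions in the abstract provide.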
NASA Astrophysics Data System (ADS)
Joyner, Christopher H.; Müller, Sebastian; Sieber, Martin
2014-09-01
Energy level statistics following the Gaussian Symplectic Ensemble (GSE) of Random Matrix Theory have been predicted theoretically and observed numerically in numerous quantum chaotic systems. However, in all these systems there has been one unifying feature: the combination of half-integer spin and time-reversal invariance. Here we provide an alternative mechanism for obtaining GSE statistics that is derived from geometric symmetries of a quantum system which alleviates the need for spin. As an example, we construct a quantum graph with a discrete symmetry given by the quaternion group Q8 and observe GSE statistics within one of its subspectra. We then show how to isolate this subspectrum and construct a quantum graph with a scalar valued wave function and a pure GSE spectrum.
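The GSE signature referred to above is usually diagnosed through the nearest-neighbor spacing distribution. A sketch of the standard Wigner surmise for the symplectic ensemble (β = 4), with numerical sanity checks that it is normalized with unit mean spacing (the surmise is a textbook formula, not code from the paper):

```python
import math

def wigner_surmise_gse(s):
    # Wigner surmise for the Gaussian Symplectic Ensemble (beta = 4):
    #   P(s) = (2^18 / (3^6 pi^3)) s^4 exp(-64 s^2 / (9 pi))
    a = 2 ** 18 / (3 ** 6 * math.pi ** 3)
    return a * s ** 4 * math.exp(-64 * s ** 2 / (9 * math.pi))

# Sanity checks by numerical integration: the spacing distribution is
# normalized and has unit mean spacing.
ds = 1e-4
grid = [i * ds for i in range(80_000)]  # [0, 8) covers the support
norm = sum(wigner_surmise_gse(s) for s in grid) * ds
mean = sum(s * wigner_surmise_gse(s) for s in grid) * ds
```

The characteristic s^4 level repulsion of this distribution is what one would look for in the quaternionic-symmetry subspectrum of the quantum graph.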
Suite versus composite statistics
Balsillie, J.H.; Tanner, W.F.
1999-01-01
Suite and composite methodologies, two statistically valid approaches for producing statistical descriptive measures, are investigated for sample groups representing a probability distribution where, in addition, each sample is itself a probability distribution. Suite and composite means (first moment measures) are always equivalent. Composite standard deviations (second moment measures) are always larger than suite standard deviations. Suite and composite values for higher moment measures have more complex relationships. Very seldom, however, are they equivalent, and they normally yield statistically significant but different results. Multiple samples are preferable to single samples (including composites) because they permit the investigator to examine sample-to-sample variability. These and other relationships for suite and composite probability distribution analyses are investigated and reported using granulometric data.
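The suite/composite distinction above can be made concrete: a suite statistic averages a per-sample measure across samples, while a composite statistic pools all observations first. A minimal sketch with made-up, equal-sized samples (not the paper's granulometric data):

```python
import statistics

# Three equal-sized samples (e.g., grain-size measurements from three sites):
samples = [
    [2.1, 2.3, 2.2, 2.4],
    [3.0, 3.2, 3.1, 2.9],
    [1.0, 1.1, 0.9, 1.2],
]

# Suite statistics: compute the measure per sample, then average the measures.
suite_mean = statistics.mean(statistics.mean(s) for s in samples)
suite_sd = statistics.mean(statistics.pstdev(s) for s in samples)

# Composite statistics: pool all observations and compute the measure once.
pooled = [x for s in samples for x in s]
composite_mean = statistics.mean(pooled)
composite_sd = statistics.pstdev(pooled)
```

With equal-sized samples the two means coincide, while the composite standard deviation exceeds the suite value because pooling adds the between-sample variance to the within-sample variance.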
Perception in statistical graphics
NASA Astrophysics Data System (ADS)
VanderPlas, Susan Ruth
There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, yet there is substantially less research on the interplay between graph, eye, brain, and mind than is needed to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.
Computing statistics for Hamiltonian systems
NASA Astrophysics Data System (ADS)
Tupper, P. F.
2007-08-01
We present the results of a set of numerical experiments designed to investigate the appropriateness of various integration schemes for molecular dynamics simulations. In particular, we wish to identify which numerical methods, when applied to an ergodic Hamiltonian system, sample the state-space in an unbiased manner. We do this by describing two Hamiltonian systems for which we can analytically compute some of the important statistical features of their trajectories, and then applying various numerical integration schemes to them. We can then compare the results from the numerical simulation against the exact results for the system and see how closely they agree. The statistic we study is the empirical distribution of particle velocity over long trajectories of the systems. We apply four methods: one symplectic method (Störmer-Verlet) and three energy-conserving step-and-project methods. The symplectic method performs better on both test problems, accurately computing empirical distributions for all step-lengths consistent with stability. Depending on the test system and the method, the step-and-project methods either are no longer ergodic for any step length (thus giving the wrong empirical distribution) or give the correct distribution only in the limit of step-size going to zero.
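The key property that lets the symplectic method compute long-time empirical distributions reliably is its bounded energy error. A minimal sketch of Störmer-Verlet (velocity Verlet form) on a 1-D harmonic oscillator, a toy stand-in for the paper's test systems:

```python
# Velocity (Stormer-)Verlet for a 1-D harmonic oscillator, H = (p^2 + q^2)/2.
def verlet_trajectory(q, p, h, steps):
    qs, ps = [], []
    for _ in range(steps):
        p += 0.5 * h * (-q)   # half kick, force = -q
        q += h * p            # drift
        p += 0.5 * h * (-q)   # half kick
        qs.append(q)
        ps.append(p)
    return qs, ps

qs, ps = verlet_trajectory(1.0, 0.0, 0.05, 200_000)

# A symplectic method keeps the energy error bounded (no secular drift),
# so long-trajectory empirical statistics such as the velocity
# distribution remain unbiased.
energies = [0.5 * (q * q + p * p) for q, p in zip(qs, ps)]
drift = max(energies) - min(energies)
mean_velocity = sum(ps) / len(ps)
```

Over 200,000 steps the energy stays within a narrow band and the time-averaged velocity is near zero, as the exact dynamics requires; a non-symplectic scheme of the same order would typically show a systematic energy drift that biases these averages.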
Statistical complexity measure of pseudorandom bit generators
NASA Astrophysics Data System (ADS)
González, C. M.; Larrondo, H. A.; Rosso, O. A.
2005-08-01
Pseudorandom number generators (PRNG) are extensively used in Monte Carlo simulations, gambling machines and cryptography as substitutes for ideal random number generators (RNG). Each application imposes different statistical requirements on PRNGs. As L’Ecuyer clearly states, “the main goal for Monte Carlo methods is to reproduce the statistical properties on which these methods are based whereas for gambling machines and cryptology, observing the sequence of output values for some time should provide no practical advantage for predicting the forthcoming numbers better than by just guessing at random”. In accordance with these different applications, several statistical test suites have been developed to analyze the sequences generated by PRNGs. In a recent paper a new statistical complexity measure [Phys. Lett. A 311 (2003) 126] was defined. Here we propose this measure as a randomness quantifier for PRNGs. The test is applied to three very well known and widely tested PRNGs available in the literature, all of them based on mathematical algorithms. Another PRNG, based on the Lorenz 3D chaotic dynamical system, is also analyzed. PRNGs based on chaos may be considered as a model for physical noise sources, and important new results have recently been reported. All the design steps of this PRNG are described, each stage increasing the PRNG randomness using different strategies. It is shown that the MPR statistical complexity measure is capable of quantifying this randomness improvement. The PRNG based on the chaotic 3D Lorenz dynamical system is also evaluated using traditional digital signal processing tools for comparison.
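The family of statistical complexity measures referenced above shares a common form C = H·D: a normalized entropy times a disequilibrium from the uniform distribution, so that both fully ordered and fully random sequences get zero complexity. A simplified LMC-style sketch (the MPR measure in the paper uses a Jensen-Shannon disequilibrium instead of the euclidean one used here; the sequences are illustrative):

```python
import math
from collections import Counter

def lmc_complexity(symbols, alphabet_size):
    # Simplified LMC-style statistical complexity C = H_norm * D:
    # normalized Shannon entropy times euclidean disequilibrium from
    # the uniform distribution. (The MPR quantifier replaces D with a
    # Jensen-Shannon divergence.)
    n = len(symbols)
    counts = Counter(symbols)
    p = [counts.get(a, 0) / n for a in range(alphabet_size)]
    h = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(alphabet_size)
    d = sum((pi - 1 / alphabet_size) ** 2 for pi in p)
    return h * d

# A constant sequence has H = 0; a perfectly balanced one has D = 0;
# complexity vanishes at both extremes and is positive in between.
c_const = lmc_complexity([0] * 64, 2)
c_uniform = lmc_complexity([0, 1] * 32, 2)
c_biased = lmc_complexity([0] * 48 + [1] * 16, 2)
```

Applied to PRNG output (typically on blocks of symbols rather than single bits), a good generator should drive such a quantifier toward zero.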
Lifetime statistics in chaotic dielectric microresonators
Schomerus, Henning; Wiersig, Jan; Main, Joerg
2009-05-15
We discuss the statistical properties of lifetimes of electromagnetic quasibound states in dielectric microresonators with fully chaotic ray dynamics. Using the example of a resonator of stadium geometry, we find that a recently proposed random-matrix model very well describes the lifetime statistics of long-lived resonances, provided that two effective parameters are appropriately renormalized. This renormalization is linked to the formation of short-lived resonances, a mechanism also known from the fractal Weyl law and the resonance-trapping phenomenon.
Statistical instability of barrier microdischarges operating in Townsend regime
Nagorny, V. P.
2007-01-15
The dynamics of barrier microdischarges operating in a Townsend regime is studied analytically and via kinetic particle-in-cell/Monte Carlo simulations. It is shown that statistical fluctuations of the number of charged particles in the discharge gap strongly influence the dynamics of natural oscillations of the discharge current and may even lead to a disruption of the discharge. Analysis of the statistical effects based on a simple model is suggested. The role of external sources in stabilizing microdischarges is clarified.
ERIC Educational Resources Information Center
Akram, Muhammad; Siddiqui, Asim Jamal; Yasmeen, Farah
2004-01-01
In order to learn the concept of statistical techniques one needs to run real experiments that generate reliable data. In practice, the data from some well-defined process or system is very costly and time consuming. It is difficult to run real experiments during the teaching period in the university. To overcome these difficulties, statisticians…
Statistics for Learning Genetics
ERIC Educational Resources Information Center
Charles, Abigail Sheena
2012-01-01
This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in,…
Education Statistics Quarterly, 2003.
ERIC Educational Resources Information Center
Marenus, Barbara; Burns, Shelley; Fowler, William; Greene, Wilma; Knepper, Paula; Kolstad, Andrew; McMillen Seastrom, Marilyn; Scott, Leslie
2003-01-01
This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…
Revising Educational Statistics.
ERIC Educational Resources Information Center
Banner, James M., Jr.
When gathering and presenting educational statistics, five principles should be considered. (1) The data must be accurate, valid, and complete. Limitations, weaknesses, margins of error, and levels of confidence should be clearly stated. (2) The data must include comparable information, sought in comparable ways, in comparable forms, from…
ERIC Educational Resources Information Center
Alaska Univ., Fairbanks.
The first annual Statistical Abstract for the University of Alaska System provides factual information for use by the Board of Regents, college administrators, and public officials in the development of university plans and programs. Topics cover: enrollments, programs and awards, faculty and staff, facilities and space, fiscal analysis,…
Statistical Tables on Manpower.
ERIC Educational Resources Information Center
Manpower Administration (DOL), Washington, DC.
The President sends to the Congress each year a report on the Nation's manpower, as required by the Manpower Development and Training Act of 1962, which includes a comprehensive report by the Department of Labor on manpower requirements, resources, utilization, and training. This statistical appendix to the Department of Labor report presents data…
Juvenile Court Statistics 1985.
ERIC Educational Resources Information Center
Snyder, Howard N.; And Others
This report is the 59th in the "Juvenile Court Statistics" series, a series begun in 1929 which serves as the primary source of information on the activities of juvenile courts. It describes the number and characteristics of delinquency and status offense cases disposed during 1985 by courts with juvenile jurisdiction and addresses some important…
Quartiles in Elementary Statistics
ERIC Educational Resources Information Center
Langford, Eric
2006-01-01
The calculation of the upper and lower quartile values of a data set in an elementary statistics course is done in at least a dozen different ways, depending on the text or computer/calculator package being used (such as SAS, JMP, MINITAB, "Excel," and the TI-83 Plus). In this paper, we examine the various methods and offer a suggestion for a new…
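The "dozen different ways" above is easy to demonstrate: even two of the most common textbook conventions disagree on the same data. A minimal sketch of the exclusive method (drop the median before splitting, as on the TI-83) versus Tukey's inclusive hinges (the dataset is illustrative):

```python
def median(xs):
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

def quartiles_exclusive(xs):
    # "Moore & McCabe" / TI-83 convention: for odd n, drop the median
    # point before splitting into halves.
    xs = sorted(xs)
    n = len(xs)
    lower, upper = xs[: n // 2], xs[(n + 1) // 2:]
    return median(lower), median(upper)

def quartiles_inclusive(xs):
    # Tukey "hinges": for odd n, the median belongs to both halves.
    xs = sorted(xs)
    n = len(xs)
    lower, upper = xs[: (n + 1) // 2], xs[n // 2:]
    return median(lower), median(upper)

data = [1, 2, 3, 4, 5]
# exclusive -> (1.5, 4.5); inclusive -> (2, 4): same data, different quartiles.
```

Interpolation-based methods (as in spreadsheet PERCENTILE functions) give yet other values, which is precisely the inconsistency the article examines.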
Graduate Statistics: Student Attitudes
ERIC Educational Resources Information Center
Kennedy, Robert L.; Broadston, Pamela M.
2004-01-01
This study investigated the attitudes toward statistics of graduate students who used a computer program as part of the instruction, which allowed for an individualized, self-paced, student-centered, activity-based course. The twelve sections involved in this study were offered in the spring and fall 2001, spring and fall 2002, spring and fall…
Pharmacokinetics: statistical moment calculations.
TOXLINE Toxicology Bibliographic Information
Gouyette A
1983-01-01
A program for the HP-41C calculator allows the determination of the first three statistical moments, area under the curve (moment zero), mean residence time (first moment) and variance of residence time (second moment) of drug concentration-time curves which are used for noncompartmental pharmacokinetic analyses. Applications of this theory are given.
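The HP-41C program itself is not reproduced in the abstract, but the three moments it computes follow directly from numerical integration of the concentration-time curve. A hedged modern sketch (trapezoidal rule, no extrapolation of the terminal phase; the sampled curve is illustrative):

```python
import math

def trapz(ys, ts):
    # Trapezoidal rule over (possibly unevenly spaced) sample times.
    return sum((ys[i] + ys[i + 1]) * (ts[i + 1] - ts[i]) / 2
               for i in range(len(ts) - 1))

def pk_moments(times, conc):
    # Noncompartmental statistical moments of a concentration-time curve.
    auc = trapz(conc, times)                                    # zeroth moment
    aumc = trapz([t * c for t, c in zip(times, conc)], times)   # first moment
    aumc2 = trapz([t * t * c for t, c in zip(times, conc)], times)
    mrt = aumc / auc               # mean residence time
    vrt = aumc2 / auc - mrt ** 2   # variance of residence time
    return auc, mrt, vrt

# One-compartment decay C(t) = exp(-t): AUC -> 1, MRT -> 1, VRT -> 1.
ts = [i * 0.01 for i in range(2001)]
cs = [math.exp(-t) for t in ts]
auc, mrt, vrt = pk_moments(ts, cs)
```

For real data the tail beyond the last sample is usually extrapolated from the terminal log-linear slope before the moments are reported; that step is omitted here.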
Pharmacokinetics: statistical moment calculations.
Gouyette, A
1983-01-01
A program for the HP-41C calculator allows the determination of the first three statistical moments, area under the curve (moment zero), mean residence time (first moment) and variance of residence time (second moment) of drug concentration-time curves which are used for noncompartmental pharmacokinetic analyses. Applications of this theory are given. PMID:6681972
Juvenile Court Statistics - 1972.
ERIC Educational Resources Information Center
Office of Youth Development (DHEW), Washington, DC.
This report is a statistical study of juvenile court cases in 1972. The data demonstrates how the court is frequently utilized in dealing with juvenile delinquency by the police as well as by other community agencies and parents. Excluded from this report are the ordinary traffic cases handled by juvenile court. The data indicate that: (1) in…
Library Research and Statistics.
ERIC Educational Resources Information Center
Lynch, Mary Jo; St. Lifer, Evan; Halstead, Kent; Fox, Bette-Lee; Miller, Marilyn L.; Shontz, Marilyn L.
2001-01-01
These nine articles discuss research and statistics on libraries and librarianship, including libraries in the United States, Canada, and Mexico; acquisition expenditures in public, academic, special, and government libraries; price indexes; state rankings of public library data; library buildings; expenditures in school library media centers; and…
ERIC Educational Resources Information Center
Office of the Assistant Secretary of Defense -- Comptroller (DOD), Washington, DC.
This document contains summaries of basic manpower statistical data for the Department of Defense, with the Army, Navy, Marine Corps, and Air Force totals shown separately and collectively. Included are figures for active duty military personnel, civilian personnel, reserve components, and retired military personnel. Some of the data show…
Statistical insight: a review.
Vardell, Emily; Garcia-Barcena, Yanira
2012-01-01
Statistical Insight is a database that offers the ability to search across multiple sources of data, including the federal government, private organizations, research centers, and international intergovernmental organizations in one search. Two sample searches on the same topic, a basic and an advanced, were conducted to evaluate the database. PMID:22853304
Statistics for Learning Genetics
NASA Astrophysics Data System (ADS)
Charles, Abigail Sheena
This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in, doing statistically-based genetics problems. This issue is at the emerging edge of modern college-level genetics instruction, and this study attempts to identify key theoretical components for creating a specialized biological statistics curriculum. The goal of this curriculum will be to prepare biology students with the skills for assimilating quantitatively-based genetic processes, increasingly at the forefront of modern genetics. To this end, two college-level classes at two universities were surveyed, one located in the northeastern US and the other in the West Indies. The sample comprised 42 students, and a supplementary interview was administered to 9 selected students. Interviews were also conducted with professors in the field in order to gain insight into the teaching of statistics in genetics. Key findings indicated that students had very little to no background in statistics (55%). Although students did perform well on exams, with 60% of the population receiving an A or B grade, 77% of them did not offer good explanations on a probability question associated with the normal distribution provided in the survey. The scope and presentation of the applicable statistics/mathematics in some of the most-used genetics textbooks, as well as in the genetics syllabi used by instructors, do not help the issue: the textbooks often either did not give effective explanations for students or completely left out certain topics, and the same omission of statistically/mathematically oriented topics was found in the genetics syllabi reviewed for this study.
Nonetheless, although the necessity of infusing these quantitative subjects into genetics and, more broadly, the biological sciences is growing (in topics such as synthetic biology, molecular systems biology, and phylogenetics), there remains little time in the semester to dedicate to the consolidation of learning and understanding.
Statistics of the galaxy distribution
NASA Astrophysics Data System (ADS)
Lachièze-Rey, Marc
1989-09-01
The universe appears from recent observational results to be a highly structured but also highly disordered medium. This accounts for the difficulties with a conventional statistical approach. Since the statistics of disordered media is an increasingly well-studied field in physics, it is tempting to try to adapt its methods for the study of the universe (the use of correlation functions also resulted from the adaptation of techniques from a very different field to astrophysics). This is already the case for fractal analysis, which, mainly developed in microscopic statistics, is increasingly used in astrophysics. I suggest a new approach, also derived from the study of disordered media, both from the study of percolation clusters and from the dynamics of so-called “cluster aggregation” gelification models. This approach is briefly presented. Its main interest lies in two points. First, it suggests an analysis able to characterize features of unconventional statistics (those that seem to be present in the galaxy distribution and which conventional indicators are unable to take into account). Second, it appears a priori very convenient for a synthetic approach, since it can be related to the other indicators used up to now: the link with the void probability function is very straightforward. The connection with fractals can be said to be contained in the method, since the objects defined during this analysis are themselves fractal: different kinds of fractal dimensions are very easy to extract from the analysis. The link with percolation studies is also very natural, since the method is adapted from the study of percolation clusters. It is also expected that information concerning the topology is contained in this approach; this seems natural since the method is very sensitive to the topology of the distribution and possesses some common characteristics with the topology analysis already developed by Gott et al. (1986).
The quantitative relations, however, remain to be calculated. Additionally, this approach addresses the variation of the clustering properties of galaxy groups and clusters with their richness. Although such studies have been made for various cases (such as comparisons of the correlation functions between galaxies and clusters, or between clusters of different richness classes), the analysis presented here deals with this in a more systematic and synthetic way.
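The void probability function mentioned above has a simple uncorrelated baseline: for a Poisson distribution of number density n, the probability that a randomly placed volume V is empty is P0(V) = exp(-nV), and clustering shows up as deviations from it. A minimal Monte Carlo sketch in a 2-D unit box (parameters are illustrative):

```python
import math
import random

def void_probability(points, r, trials, rng):
    # Fraction of randomly thrown disks of radius r (kept away from the
    # box boundary) that contain no point of the distribution.
    empty = 0
    for _ in range(trials):
        cx = rng.uniform(r, 1 - r)
        cy = rng.uniform(r, 1 - r)
        if all((x - cx) ** 2 + (y - cy) ** 2 > r * r for x, y in points):
            empty += 1
    return empty / trials

rng = random.Random(0)
n_points, r = 300, 0.04
estimates = []
for _ in range(20):  # average over independent Poisson-like realizations
    pts = [(rng.random(), rng.random()) for _ in range(n_points)]
    estimates.append(void_probability(pts, r, 300, rng))

p0_measured = sum(estimates) / len(estimates)
p0_poisson = math.exp(-n_points * math.pi * r * r)  # uncorrelated baseline
```

For an unclustered point set the measured value tracks the exponential baseline; for a real galaxy sample, an excess of empty regions over exp(-nV) signals clustering, which is the link the abstract exploits.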
SHARE: Statistical hadronization with resonances
NASA Astrophysics Data System (ADS)
Torrieri, G.; Steinke, S.; Broniowski, W.; Florkowski, W.; Letessier, J.; Rafelski, J.
2005-05-01
SHARE is a collection of programs designed for the statistical analysis of particle production in relativistic heavy-ion collisions. With the physical input of intensive statistical parameters, it generates the ratios of particle abundances. The program includes cascade decays of all confirmed resonances from the Particle Data Tables. The complete treatment of these resonances has been known to be a crucial factor behind the success of the statistical approach. An optional feature implemented is the Breit-Wigner distribution for strong resonances. An interface for fitting the parameters of the model to the experimental data is provided.
Program summary
Title of the program: SHARE, October 2004, version 1.2
Catalogue identifier: ADVD
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVD
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC, Pentium III, 512 MB RAM (not hardware dependent)
Operating system: Linux: RedHat 6.1, 7.2, FEDORA, etc. (not system dependent)
Programming language: FORTRAN77 (g77, f77), as well as Mathematica, ver. 4 or 5, for the case of full chemical equilibrium and particle widths set to zero
Size of the package: 645 KB directory including example programs (87 KB compressed distribution archive)
External routines: KERNLIB, MATHLIB and PACKLIB from the CERN Program Library (see http://cernlib.web.cern.ch for download and installation instructions)
Distribution format: tar.gz
Number of lines in distributed program, including test data, etc.: 15 277
Number of bytes in distributed program, including test data, etc.: 88 522
Computer: Any computer with an f77 compiler
Nature of the physical problem: Statistical analysis of particle production in relativistic heavy-ion collisions involves the formation and the subsequent decays of a large number of resonances.
With the physical input of thermal parameters, such as the temperature and fugacities, and considering cascading decays, along with weak interaction feed-down corrections, the observed hadron abundances are obtained. SHARE incorporates diverse physical approaches, with a flexibility of choice of the details of the statistical hadronization model, including the selection of a chemical (non-)equilibrium condition. SHARE also offers evaluation of the extensive properties of the source of particles, such as energy, entropy, baryon number, and strangeness, as well as the determination of the best intensive input parameters fitting a set of experimental yields. This allows exploration of a proposed physical hypothesis about hadron production mechanisms and the determination of the properties of their source. Method of solving the problem: Distributions at freeze-out of both the stable particles and the hadronic resonances are set according to a statistical prescription, technically calculated via a series of Bessel functions, using CERN library programs. We also have the option of including finite particle widths of the resonances. While this is computationally expensive, it is necessary to fully implement the essence of the strong interaction dynamics within the statistical hadronization picture. In fact, including finite width has a considerable effect when modeling directly detectable short-lived resonances (Λ(1520), K*, etc.), and is noticeable in fits to experimentally measured yields of stable particles. After production, all hadronic resonances decay. Resonance decays are accomplished by addition of the parent abundances to the daughter, normalized by the branching ratio. Weak interaction decays receive a special treatment, where we introduce daughter particle acceptance factors for the strongly interacting decay products. An interface for fitting the statistical model parameters to experimental particle ratios with the help of MINUIT [1] is provided.
The χ² function is defined in the standard way: for an investigated quantity f with experimental error Δf, χ² = Σ_f (N_f^theory − N_f^experiment)² / (Δf)² (note that systematic and statistical errors are treated as independent, since the systematic error is not a random variable). Aside from χ², the program also calculates the statistical significance [2], defined as the probability that, given a "true" theory and a statistical (Gaussian) experimental error, the fitted χ² assumes values at or above the considered value. In the case that the best fit has statistical significance significantly below unity, the model under consideration is very likely inappropriate. In the limit of many degrees of freedom (N), the statistical significance function depends only on χ²/N, with 90% statistical significance at χ²/N ≈ 1, and falling steeply at χ²/N > 1. However, the degrees of freedom in fits involving ratios are generally not sufficient to reach the asymptotic limit. Hence, statistical significance depends strongly on χ² and N separately. In particular, if N < 20, a χ²/N significantly less than 1 is often required for a fit to have an acceptable statistical significance. The fit routine does not always find the true lowest χ² minimum. Specifically, multi-parameter fits with too few degrees of freedom generally exhibit a non-trivial structure in parameter space, with several secondary minima, saddle points, valleys, etc. To help the user perform the minimization effectively, we have added tools to compute the χ² contours and profiles. In addition, our program's flexibility allows for many strategies in performing the fit. It is therefore possible, by following the techniques described in Section 3.7, to scan the parameter space and ensure that the minimum found is the true one. Further systematic deviations between the model and experiment can be recognized via the program's output, which includes a particle-by-particle comparison between experiment and theory.
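The χ² sum described here is straightforward to compute outside the program. A minimal sketch with purely illustrative numbers (not yields or ratios from any fit in the paper):

```python
def chi_squared(yields_exp, errors, yields_model):
    # chi^2 = sum over fitted quantities of (N_model - N_exp)^2 / (dN)^2
    return sum((m - e) ** 2 / d ** 2
               for e, d, m in zip(yields_exp, errors, yields_model))

# Hypothetical particle-ratio data (illustrative numbers only):
exp_ratios = [0.18, 1.02, 0.145]
exp_errors = [0.02, 0.10, 0.015]
model_ratios = [0.16, 1.10, 0.150]

chi2 = chi_squared(exp_ratios, exp_errors, model_ratios)
ndf = len(exp_ratios)  # degrees of freedom (no fitted parameters here)
```

As the text notes, with so few degrees of freedom the statistical significance must be evaluated from χ² and N separately rather than from the asymptotic χ²/N ≈ 1 rule of thumb.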
Additional comments: In consideration of the wide stream of new data coming out of RHIC, there is on-going activity, with several groups performing analyses of particle yields. It is our hope that SHARE will help to create an analysis standard within the community. It can be useful in analyzing the experimental data, verifying simple physical assumptions, and evaluating expected yields, as well as in comparing the various similar models and programs currently in use. Typical running time: For the Fortran code, the computation time with the provided default input files is about 10 minutes on a 1 GHz processor. The time may rise significantly (by a factor of 300) if the full-fledged optimization and finite widths are included. In Mathematica, the typical running times are of the order of minutes. Accessibility: The program is available from the CPC Program Library; from the following websites: http://www.ifj.edu.pl/Dept4/share.html or http://www.physics.arizona.edu/~torrieri/SHARE/share.html; or from the authors upon request.
NASA Astrophysics Data System (ADS)
Maccone, C.
This paper provides the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable Planets for Man (1964). The statistical generalization of the original, and by now too simplistic, Dole equation is obtained by replacing a product of ten positive numbers with a product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of statistics, stating that the sum of any number of independent random variables, each of which may be arbitrarily distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary; the CLT in the so-called Lyapunov or Lindeberg forms (both of which do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH it is shown that the (average) distance between any two nearby habitable planets in the Galaxy is inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D.
The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964), and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization per 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an average ET distance about 20 times larger than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations. A new random variable Tcol, representing the time needed to colonize a new planet, is introduced; it follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.
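The inverse-cube-root scaling of the average inter-planet distance can be checked with a short numerical sketch. The galactic-disk dimensions below are rough illustrative assumptions, not values from the paper; only the proportionality D ∝ NHab^(-1/3) is the point.

```python
import math

def average_distance(n_hab, disk_radius_ly=50_000.0, disk_height_ly=1_000.0):
    """Average distance between neighboring habitable planets, assuming
    n_hab planets spread uniformly through a cylindrical galactic disk.
    The disk dimensions are rough illustrative assumptions."""
    volume = math.pi * disk_radius_ly**2 * disk_height_ly  # ly^3
    return (volume / n_hab) ** (1.0 / 3.0)  # one planet per cube of side D

d1 = average_distance(1e8)   # 100 million habitable planets
d8 = average_distance(8e8)   # 8x as many planets
# Eight times the planets halves the average distance: D scales as N^(-1/3)
```

With these assumed dimensions the result comes out at a few tens of light years, the same order of magnitude as the paper's 88 ly.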
The Statistical Drake Equation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2010-12-01
We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. 
We derive the relevant probability density function, apparently previously unknown and dubbed the "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future, as more refined scientific knowledge about each factor becomes available. This capability to make room for more future factors in the statistical Drake equation we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billion with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
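A Monte Carlo sketch makes the two central claims concrete: the product of seven independent positive factors has mean equal to the product of the factor means (the classical Drake value), and its logarithm is a sum of independent terms, hence approximately Gaussian by the CLT. The factor means below are placeholders for illustration, not actual Drake-equation inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Seven independent positive factors, each uniform around an illustrative
# mean value (placeholder numbers, not real astrophysical estimates)
means = np.array([350e9, 0.5, 0.2, 0.1, 0.05, 0.01, 1e-4])
half_width = 0.1 * means          # +/-10% spread around each mean
samples = rng.uniform(means - half_width, means + half_width,
                      size=(100_000, 7))

n = samples.prod(axis=1)          # the statistical Drake variable N
log_n = np.log(n)                 # sum of 7 independent terms -> ~Gaussian,
                                  # so N itself is approximately lognormal

# Independence implies E[N] equals the product of the factor means,
# i.e. the classical Drake value
classical_n = means.prod()
rel_err = abs(n.mean() - classical_n) / classical_n
```

The relative error between the simulated mean of N and the classical product stays well below a percent at this sample size, matching the abstract's statement that the lognormal mean reproduces the ordinary Drake value.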
The large deviation approach to statistical mechanics
NASA Astrophysics Data System (ADS)
Touchette, Hugo
2009-07-01
The theory of large deviations is concerned with the exponential decay of probabilities of large fluctuations in random systems. These probabilities are important in many fields of study, including statistics, finance, and engineering, as they often yield valuable information about the large fluctuations of a random system around its most probable state or trajectory. In the context of equilibrium statistical mechanics, the theory of large deviations provides exponential-order estimates of probabilities that refine and generalize Einstein’s theory of fluctuations. This review explores this and other connections between large deviation theory and statistical mechanics, in an effort to show that the mathematical language of statistical mechanics is the language of large deviation theory. The first part of the review presents the basics of large deviation theory, and works out many of its classical applications related to sums of random variables and Markov processes. The second part goes through many problems and results of statistical mechanics, and shows how these can be formulated and derived within the context of large deviation theory. The problems and results treated cover a wide range of physical systems, including equilibrium many-particle systems, noise-perturbed dynamics, nonequilibrium systems, as well as multifractals, disordered systems, and chaotic systems. This review also covers many fundamental aspects of statistical mechanics, such as the derivation of variational principles characterizing equilibrium and nonequilibrium states, the breaking of the Legendre transform for nonconcave entropies, and the characterization of nonequilibrium fluctuations through fluctuation relations.
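The exponential decay of tail probabilities can be illustrated with the classical Cramér example: for the mean of fair coin flips, -(1/n) log P(S_n/n ≥ a) converges to the relative-entropy rate function I(a). The sketch below uses exact binomial tails; parameters are illustrative.

```python
from math import ceil, comb, log

def rate_function(a, p=0.5):
    """Cramer rate function for the mean of Bernoulli(p) variables:
    I(a) = a*log(a/p) + (1-a)*log((1-a)/(1-p))  (relative entropy)."""
    return a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))

def tail_prob(n, a, p=0.5):
    """Exact P(S_n / n >= a) for S_n ~ Binomial(n, p)."""
    k0 = ceil(n * a)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k0, n + 1))

a = 0.7
target = rate_function(a)          # ≈ 0.0823
# Large deviation principle: -(1/n) log P(S_n/n >= a) -> I(a) as n grows
estimates = {n: -log(tail_prob(n, a)) / n for n in (50, 200, 800)}
```

The estimates approach the rate function from above as n increases, the subexponential prefactor (of order 1/sqrt(n)) being exactly the refinement that large deviation theory controls only to exponential order.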
The standard map: From Boltzmann-Gibbs statistics to Tsallis statistics
Tirnakli, Ugur; Borges, Ernesto P.
2016-01-01
As is well known, Boltzmann-Gibbs statistics is the correct way of thermostatistically approaching ergodic systems. On the other hand, nontrivial ergodicity breakdown and strong correlations typically drag the system into out-of-equilibrium states where Boltzmann-Gibbs statistics fails. For a wide class of such systems, it has been shown in recent years that the correct approach is to use Tsallis statistics instead. Here we show how the dynamics of the paradigmatic conservative (area-preserving) standard map exhibits, in an exceptionally clear manner, the crossover from one statistics to the other. Our results unambiguously illustrate the domains of validity of both the Boltzmann-Gibbs and Tsallis statistical distributions. Since various important physical systems, from particle confinement in magnetic traps to autoionization of molecular Rydberg states, through particle dynamics in accelerators and comet dynamics, can be reduced to the standard map, our results are expected to enlighten and enable an improved interpretation of diverse experimental and observational results. PMID:27004989
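The Chirikov standard map itself is a two-line iteration, which a minimal sketch makes concrete. The initial condition and nonlinearity parameter K below are illustrative; the abstract's statistical analysis concerns ensembles of such orbits, not a single trajectory.

```python
import math

def standard_map(theta, p, K, steps):
    """Iterate the Chirikov standard map (area-preserving):
        p'     = p + K * sin(theta)   (mod 2*pi)
        theta' = theta + p'           (mod 2*pi)
    Small K gives mostly regular orbits; larger K gives widespread
    chaos, the regime where the statistical crossover is studied."""
    traj = []
    for _ in range(steps):
        p = (p + K * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
        traj.append((theta, p))
    return traj

orbit = standard_map(theta=2.0, p=0.0, K=1.5, steps=1000)
```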
This article presents a general and versatile methodology for assessing sustainability with Fisher Information as a function of dynamic changes in urban systems. Using robust statistical methods, six Metropolitan Statistical Areas (MSAs) in Ohio were evaluated comparatively as...
Statistical Inference at Work: Statistical Process Control as an Example
ERIC Educational Resources Information Center
Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia
2008-01-01
To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
Explorations in statistics: power.
Curran-Everett, Douglas
2010-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fifth installment of Explorations in Statistics revisits power, a concept fundamental to the test of a null hypothesis. Power is the probability that we reject the null hypothesis when it is false. Four things affect power: the probability with which we are willing to reject a true null hypothesis by mistake, the magnitude of the difference we want to be able to detect, the variability of the underlying population, and the number of observations in our sample. In an application to an Institutional Animal Care and Use Committee or to the National Institutes of Health, we define power to justify the sample size we propose. PMID:20522895
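The four ingredients of power can be made concrete with a one-sample two-sided z-test under a normal approximation, checked by simulation. This is a generic textbook sketch with illustrative parameters, not the article's own examples.

```python
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def z_test_power(delta, sigma, n, z_crit=1.959964):
    """Approximate power of a two-sided one-sample z-test of H0: mu = 0
    when the true mean is delta. The four ingredients appear explicitly:
    the rejection threshold z_crit (set by alpha = 0.05 here), the effect
    size delta, the population variability sigma, and the sample size n."""
    shift = delta * math.sqrt(n) / sigma
    return phi(-z_crit + shift) + phi(-z_crit - shift)

# Monte Carlo check of the formula (illustrative parameters)
random.seed(1)
delta, sigma, n, z_crit = 0.5, 1.0, 30, 1.959964
trials, rejections = 20_000, 0
for _ in range(trials):
    xbar = sum(random.gauss(delta, sigma) for _ in range(n)) / n
    if abs(xbar) * math.sqrt(n) / sigma > z_crit:
        rejections += 1
mc_power = rejections / trials
analytic = z_test_power(delta, sigma, n)
```

Increasing n, increasing delta, or decreasing sigma all raise the computed power, which is exactly the sample-size justification described in the abstract.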
Nock, Richard; Nielsen, Frank
2004-11-01
This paper explores a statistical basis for a process often described in computer vision: image segmentation by region merging following a particular order in the choice of regions. We exhibit a particular blend of algorithmics and statistics whose segmentation error is, as we show, limited from both the qualitative and quantitative standpoints. This approach can be efficiently approximated in linear time/space, leading to a fast segmentation algorithm tailored to processing images described using most common numerical pixel attribute spaces. The conceptual simplicity of the approach makes it simple to modify and cope with hard noise corruption, handle occlusion, authorize the control of the segmentation scale, and process unconventional data such as spherical images. Experiments on gray-level and color images, obtained with a short readily available C-code, display the quality of the segmentations obtained. PMID:15521493
Statistical challenges of AIDS.
Becker, N G
1992-08-01
"This paper considers questions concerning the incubation period [of HIV infections], the effects of treatments, prediction of AIDS cases, the choice of surrogate end points for the assessment of treatments and design of strategies for screening blood samples. These issues give rise to a broad range of intriguing problems for statisticians. We describe some of these problems, how they have been tackled so far and what remains to be done. The discussion touches on topical statistical methods such as smoothing, bootstrapping, interval censoring and the ill-posed inverse problem, as well as asking fundamental questions for frequentist statistics." The geographical scope is worldwide, with some data for selected developed countries used to illustrate the models. PMID:12285674
Statistical evaluation of forecasts.
Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J; Timmer, Jens; Schelter, Björn
2014-08-01
Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which are intended to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast. PMID:25215714
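The random-predictor baseline can be sketched as follows: alarms are raised independently with probability q at each time step, and an event counts as "predicted" if any alarm fell in the preceding window of w steps, giving an analytic sensitivity of 1 - (1-q)^w that a bootstrap-style simulation reproduces. This is a schematic illustration of the idea, not the paper's exact framework.

```python
import random

def simulate_random_predictor(q, w, n_events, seed=0):
    """Fraction of events preceded by at least one alarm within a window
    of w time steps, when alarms are raised independently with
    probability q per step (a schematic random-predictor baseline)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_events):
        # Did any of the w steps before the event carry an alarm?
        if any(rng.random() < q for _ in range(w)):
            hits += 1
    return hits / n_events

q, w = 0.05, 10
analytic_sensitivity = 1.0 - (1.0 - q) ** w       # ≈ 0.401
simulated = simulate_random_predictor(q, w, n_events=50_000)
```

Any real forecasting method must beat this baseline at matched specificity before its alarms can be called informative.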
Explorations in statistics: correlation.
Curran-Everett, Douglas
2010-12-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This sixth installment of Explorations in Statistics explores correlation, a familiar technique that estimates the magnitude of a straight-line relationship between two variables. Correlation is meaningful only when the two variables are true random variables: for example, if we restrict in some way the variability of one variable, then the magnitude of the correlation will decrease. Correlation cannot help us decide if changes in one variable result in changes in the second variable, if changes in the second variable result in changes in the first variable, or if changes in a third variable result in concurrent changes in the first two variables. Correlation can help provide us with evidence that study of the nature of the relationship between x and y may be warranted in an actual experiment in which one of them is controlled. PMID:21098385
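The range-restriction effect described above is easy to demonstrate numerically: restricting the variability of one variable shrinks the sample correlation even though the underlying relationship is unchanged. The simulated model and cutoff below are illustrative choices.

```python
import random

random.seed(2)

# Simulate y = x + noise; the true correlation is 1/sqrt(2) ≈ 0.71
n = 20_000
x = [random.gauss(0, 1) for _ in range(n)]
y = [xi + random.gauss(0, 1) for xi in x]

def corr(a, b):
    """Pearson correlation coefficient of two equal-length lists."""
    na = len(a)
    ma, mb = sum(a) / na, sum(b) / na
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r_full = corr(x, y)

# Restrict the range of x to |x| < 0.5 and recompute: the correlation
# drops sharply although the relationship between x and y is the same
pairs = [(xi, yi) for xi, yi in zip(x, y) if abs(xi) < 0.5]
xr = [p[0] for p in pairs]
yr = [p[1] for p in pairs]
r_restricted = corr(xr, yr)
```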
Computer intensive statistical methods
NASA Astrophysics Data System (ADS)
Yakowitz, S.
The special session “Computer-Intensive Statistical Methods” was held in morning and afternoon parts at the 1985 AGU Fall Meeting in San Francisco, Calif. Its mission was to provide a forum for hydrologists and statisticians who are active in bringing unconventional, algorithmic-oriented statistical techniques to bear on problems of hydrology. Statistician Emanuel Parzen (Texas A&M University, College Station, Tex.) opened the session by relating recent developments in quantile estimation methods and showing how properties of such methods can be used to advantage to categorize runoff data previously analyzed by I. Rodriguez-Iturbe (Universidad Simon Bolivar, Caracas, Venezuela). Statistician Eugene Schuster (University of Texas, El Paso) discussed recent developments in nonparametric density estimation which enlarge the framework for convenient incorporation of prior and ancillary information. These extensions were motivated by peak annual flow analysis. Mathematician D. Myers (University of Arizona, Tucson) gave a brief overview of “kriging” and outlined some recently developed methodology.
Slinker, B K
1998-04-01
Biological scientists often want to determine whether two agents or events, for example, extracellular stimuli and/or intracellular signaling pathways, act synergistically when eliciting a biological response. When setting out to study whether two experimental treatments act synergistically, most biologists design the correct experiment--they administer four treatment combinations consisting of (1) the first treatment alone, (2) the second treatment alone, (3) both treatments together, and (4) neither treatment (i.e. the control). Many biologists are less clear about the correct statistical approach to determining whether the data collected in such an experimental design support a conclusion regarding synergism, or lack thereof. The non-additivity of two experimental treatments that is central to the definition of synergism leads to an algebraic formulation corresponding to the statistical null hypothesis appropriate for testing whether or not there is synergism. The resulting complex contrast among the four treatment group means is identical to the interaction effect tested in a two-way analysis of variance (ANOVA). This should not be surprising, because synergism, by definition, occurs when two treatments interact, rather than act independently, to influence a biological response. Hence, in the most readily implemented approach, the correct statistical analysis of a question of synergism is based on testing the interaction effect in a two-way ANOVA. This review presents the rationale for this correct approach to analysing data when the question is one of synergism, and applies this approach to a recently published example. In addition, a common incorrect approach to analysing data with regard to synergism is presented. Finally, several associated statistical issues with regard to correctly implementing a two-way ANOVA are discussed. PMID:9602421
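For a balanced 2x2 design, the interaction effect reduces to the contrast (AB - B) - (A - control), which is zero under additivity; testing it with a pooled-variance t statistic is equivalent to the interaction F test in a two-way ANOVA. The sketch below simulates a deliberately synergistic data set with illustrative group means and sizes.

```python
import math
import random

random.seed(3)
n = 25  # observations per group

def group(mean):
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Four treatment combinations; the combined treatment responds more than
# additively (10 + 3 + 3 predicts 16 under additivity; we simulate 20)
control  = group(10.0)
treat_a  = group(13.0)
treat_b  = group(13.0)
treat_ab = group(20.0)   # synergistic: +4 beyond the additive prediction

def mean(v):
    return sum(v) / len(v)

def ss(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v)

# Interaction contrast: (AB - B) - (A - control); zero under additivity
contrast = (mean(treat_ab) - mean(treat_b)) - (mean(treat_a) - mean(control))

# Pooled within-group variance across the four groups, df = 4*(n-1)
pooled_var = (ss(control) + ss(treat_a) + ss(treat_b) + ss(treat_ab)) / (4 * (n - 1))
se = math.sqrt(pooled_var * 4 / n)   # Var(contrast) = 4 * sigma^2 / n
t = contrast / se
# |t| well above ~2 here, so the additivity (no-synergism) null is rejected
```

The incorrect approach the review warns against, such as comparing the combined group only to each single-treatment group, never estimates this contrast and so cannot test synergism.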
1979 DOE statistical symposium
Gardiner, D.A.; Truett, T.
1980-09-01
The 1979 DOE Statistical Symposium was the fifth in the series of annual symposia designed to bring together statisticians and other interested parties who are actively engaged in helping to solve the nation's energy problems. The program included presentations of technical papers centered around exploration and disposal of nuclear fuel, general energy-related topics, and health-related issues, and workshops on model evaluation, risk analysis, analysis of large data sets, and resource estimation.
Overdispersion in nuclear statistics
NASA Astrophysics Data System (ADS)
Semkow, Thomas M.
1999-02-01
Modern statistical distribution theory is applied to the development of the overdispersion theory in ionizing-radiation statistics for the first time. The physical nuclear system is treated as a sequence of binomial processes, each depending on a characteristic probability, such as probability of decay, detection, etc. The probabilities fluctuate in the course of a measurement, and the physical reasons for that are discussed. If the average values of the probabilities change from measurement to measurement, which originates from the random Lexis binomial sampling scheme, then the resulting distribution is overdispersed. The generating functions and probability distribution functions are derived, followed by a moment analysis. The Poisson and Gaussian limits are also given. The distribution functions belong to a family of generalized hypergeometric factorial moment distributions by Kemp and Kemp, and can serve as likelihood functions for statistical estimations. An application to radioactive decay with detection is described and working formulae are given, including a procedure for testing the counting data for overdispersion. More complex experiments in nuclear physics (such as solar neutrino experiments) can be handled by this model, as can distinguishing between source and background.
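The Lexis mechanism can be illustrated by simulation: if the detection probability itself fluctuates from measurement to measurement (here beta-distributed, giving beta-binomial counts), the variance of the counts exceeds the plain binomial value. All parameters below are illustrative, not the paper's working formulae.

```python
import numpy as np

rng = np.random.default_rng(4)

N = 1_000            # decays per measurement (illustrative)
p_mean = 0.3         # mean detection probability
m = 50_000           # number of repeated measurements

# Ordinary counting: fixed detection probability -> binomial variance
fixed = rng.binomial(N, p_mean, size=m)

# Lexis scheme: the detection probability fluctuates between measurements
# (Beta(30, 70) has mean 0.3); the resulting counts are beta-binomial
p = rng.beta(30.0, 70.0, size=m)
fluct = rng.binomial(N, p)

var_binomial = N * p_mean * (1 - p_mean)          # = 210
overdispersion_ratio = fluct.var() / var_binomial
# ratio >> 1: the fluctuating-probability counts are overdispersed,
# while the fixed-probability counts match the binomial variance
```

Comparing the sample variance of repeated counts against the nominal binomial (or Poisson) value is exactly the kind of overdispersion test the abstract describes.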
Guta, Madalin; Butucea, Cristina
2010-10-15
The notion of a U-statistic for an n-tuple of identical quantum systems is introduced in analogy to the classical (commutative) case: given a self-adjoint 'kernel' K acting on (C{sup d}){sup ⊗r} with r
Bradley, Robert K; Roberts, Adam; Smoot, Michael; Juvekar, Sudeep; Do, Jaeyoung; Dewey, Colin; Holmes, Ian; Pachter, Lior
2009-05-01
We describe a new program for the alignment of multiple biological sequences that is both statistically motivated and fast enough for problem sizes that arise in practice. Our Fast Statistical Alignment program is based on pair hidden Markov models which approximate an insertion/deletion process on a tree, and uses a sequence annealing algorithm to combine the posterior probabilities estimated from these models into a multiple alignment. FSA uses its explicit statistical model to produce multiple alignments which are accompanied by estimates of the alignment accuracy and uncertainty for every column and character of the alignment--previously available only with alignment programs which use computationally expensive Markov chain Monte Carlo approaches--yet can align thousands of long sequences. Moreover, FSA utilizes an unsupervised query-specific learning procedure for parameter estimation which leads to improved accuracy on benchmark reference alignments in comparison to existing programs. The centroid alignment approach taken by FSA, in combination with its learning procedure, drastically reduces the amount of false-positive alignment on biological data in comparison to that given by other methods. The FSA program and a companion visualization tool for exploring uncertainty in alignments can be used via a web interface at http://orangutan.math.berkeley.edu/fsa/, and the source code is available at http://fsa.sourceforge.net/. PMID:19478997
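The pair-HMM machinery underlying FSA can be illustrated with a minimal three-state forward algorithm (match M, insert-in-x X, insert-in-y Y) that computes the total probability of a pair of sequences summed over all alignments. The transition and emission parameters below are simplified assumptions for illustration, not FSA's trained values, and end-state transitions are omitted.

```python
# Transition probabilities of a simple 3-state pair HMM (assumed values)
tMM, tMX, tMY = 0.9, 0.05, 0.05
tXM, tXX = 0.8, 0.2
tYM, tYY = 0.8, 0.2

def p_match(a, b):
    """Joint emission of an aligned pair over {A,C,G,T}: favors identical
    characters; sums to 1 over all 16 ordered pairs."""
    return 0.2 if a == b else 0.05 / 3.0

def p_gap(_):
    """Emission of an unaligned character (uniform over 4 letters)."""
    return 0.25

def forward(x, y):
    """Forward algorithm: total probability of (x, y) summed over all
    alignments of the pair HMM."""
    n, m = len(x), len(y)
    fM = [[0.0] * (m + 1) for _ in range(n + 1)]
    fX = [[0.0] * (m + 1) for _ in range(n + 1)]
    fY = [[0.0] * (m + 1) for _ in range(n + 1)]
    fM[0][0] = 1.0  # start in the match state by convention
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue
            if i > 0 and j > 0:
                fM[i][j] = p_match(x[i - 1], y[j - 1]) * (
                    tMM * fM[i - 1][j - 1]
                    + tXM * fX[i - 1][j - 1]
                    + tYM * fY[i - 1][j - 1])
            if i > 0:
                fX[i][j] = p_gap(x[i - 1]) * (tMX * fM[i - 1][j] + tXX * fX[i - 1][j])
            if j > 0:
                fY[i][j] = p_gap(y[j - 1]) * (tMY * fM[i][j - 1] + tYY * fY[i][j - 1])
    return fM[n][m] + fX[n][m] + fY[n][m]

p_related = forward("GATTACA", "GATTTACA")    # similar pair, one indel
p_unrelated = forward("GATTACA", "CCCGGGG")   # dissimilar pair
```

Similar sequences receive a far larger joint probability, and the same dynamic-programming tables yield the per-column posterior probabilities that FSA's sequence annealing combines into a multiple alignment.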
Escape time statistics for mushroom billiards
NASA Astrophysics Data System (ADS)
Miyaguchi, Tomoshige
2007-06-01
Chaotic orbits of mushroom billiards display intermittent behavior. We investigate statistical properties of this system by constructing an infinite partition on the chaotic part of a Poincaré surface, which illustrates details of the chaotic dynamics. Each piece of the infinite partition has a unique escape time from the half-disk region, and from this result it is shown that, for fixed values of the system parameters, the escape time distribution obeys a power law 1/t{sub esc}{sup 3}.
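A power-law escape-time tail like this can be recovered from sampled data with the Hill estimator. The sketch below draws samples from an assumed Pareto law whose density falls off as 1/t^3, mirroring the reported exponent; it is a generic illustration, not a simulation of the billiard itself.

```python
import math
import random

random.seed(5)

# Draw escape-time-like samples from a Pareto law with survival function
# P(T > t) = (t_min / t)^2, i.e. density ~ 1/t^3 (inverse-transform sampling)
t_min = 1.0
samples = [t_min / math.sqrt(random.random()) for _ in range(100_000)]

# Hill estimator of the survival exponent alpha from the full sample
# (every sample exceeds t_min here): alpha_hat = 1 / mean(log(t / t_min))
alpha_hat = len(samples) / sum(math.log(t / t_min) for t in samples)

# The density exponent is alpha + 1, expected close to 3
density_exponent = alpha_hat + 1.0
```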
Simulating Fibre Suspensions: Lagrangian versus Statistical Approach
NASA Astrophysics Data System (ADS)
Zhao, L. H.; Andersson, H. I.; Gillissen, J. J. J.; Boersma, B. J.
Fibre suspensions exhibit complex dynamical flow phenomena and are at the same time of immense practical importance, notably in the pulp and paper industries. NTNU and TU Delft have in a collaborative research project adopted two alternative strategies in the simulation of dilute fibre suspensions, namely a statistical approach [2] and a Lagrangian particle treatment [4]. The two approaches have their own advantages and disadvantages. In this paper we aim for the first time to compare the performance of the two.
Experimental Mathematics and Computational Statistics
Bailey, David H.; Borwein, Jonathan M.
2009-04-30
The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.
Statistical block compression of images
NASA Astrophysics Data System (ADS)
Petukh, Anatoliy M.; Kojemiako, Volodymyr P.; Maidanuk, Volodymyr P.; Rudyi, Oleh V.
2001-06-01
Good image-compression ratios are typically achieved only when the compressor is specialized and takes into account the structural features of the image. The proposed method of statistical block compression exploits the fact that images contain many areas of nearly identical brightness. By itself the method is of limited value, since its only advantage is that it accounts for image structure: it analyzes the image to detect areas of close brightness and reduces the size of the initial file, encoding the information in a form convenient for subsequent compression by the LZW method. The method partitions the input image into blocks of equal size, applies affine transformations to them, and treats as identical those blocks that satisfy an element-wise control criterion. The basic task is to find the largest number of acceptable blocks. An exhaustive search would be prohibitively slow (for a 100 X 100 image the number of blocks of different sizes can reach 10,000), so the blocks are restricted to squares. The method has two variants: one uses blocks of a fixed 8 X 8 size (determined experimentally), the other uses blocks of various sizes obtained by dynamically subdividing unacceptable blocks. After the block analysis, the output file records the number of blocks, the blocks themselves, and coded information specifying each block's position and the identifier of its affine transformation. The two variants are practically identical in efficiency: dynamic subdivision of blocks improves the compression ratio by 15 - 20%, but increases the running time.
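The fixed 8 X 8 variant can be sketched as grouping tiles whose pixel values agree within a tolerance with an already-stored representative, keeping one copy plus references. This simplified sketch omits the affine transformations and works on a grayscale image given as a list of equal-length rows; block size and tolerance are illustrative.

```python
def compress_blocks(image, block=8, tol=4):
    """Statistical block compression sketch: partition a grayscale image
    (dimensions divisible by `block`) into block x block tiles, and
    replace each tile that matches an earlier representative (every
    pixel within `tol`) with a reference to it. The paper's
    affine-transformed matches are omitted for brevity."""
    h = len(image)
    reps, refs = [], []   # stored tiles, and per-tile representative index
    for by in range(0, h, block):
        for bx in range(0, len(image[0]), block):
            tile = [row[bx:bx + block] for row in image[by:by + block]]
            for k, rep in enumerate(reps):
                if all(abs(a - b) <= tol
                       for ra, rb in zip(tile, rep)
                       for a, b in zip(ra, rb)):
                    refs.append(k)   # reuse representative k
                    break
            else:
                refs.append(len(reps))
                reps.append(tile)    # tile becomes a new representative
    return reps, refs

# A 16x16 image with two flat regions: only two representatives survive
img = [[10 if x < 8 else 200 for x in range(16)] for y in range(16)]
reps, refs = compress_blocks(img)
```

The stored representatives and the reference list are then what gets handed to a general-purpose coder such as LZW, which is where the actual bit savings come from.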
NASA Technical Reports Server (NTRS)
1995-01-01
NASA Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
Who Needs Statistics? | Poster
You may know the feeling. You have collected a lot of new data on an important experiment. Now you are faced with multiple groups of data, a sea of numbers, and a deadline for submitting your paper to a peer-reviewed journal. And you are not sure which data are relevant, or even the best way to present them. The statisticians at Data Management Services (DMS) know how to help. This small group of experts provides a wide array of statistical and mathematical consulting services to the scientific community at NCI at Frederick and NCI-Bethesda.