SDI: Statistical dynamic interactions
Blann, M.; Mustafa, M.G.; Peilert, G.; Stoecker, H.; Greiner, W. (Inst. fuer Theoretische Physik)
1991-04-01
We focus on the combined statistical and dynamical aspects of heavy ion induced reactions. The overall picture is illustrated by considering the reaction {sup 36}Ar + {sup 238}U at a projectile energy of 35 MeV/nucleon. We illustrate the time-dependent bound excitation energy due to the fusion/relaxation dynamics as calculated with the Boltzmann master equation. An estimate of the mass, charge and excitation of an equilibrated nucleus surviving the fast (dynamic) fusion-relaxation process is used as input into an evaporation calculation which includes 20 heavy fragment exit channels. The distribution of excitations between residue and clusters is explicitly calculated, as is the further deexcitation of clusters to bound nuclei. These results are compared with the exclusive cluster multiplicity measurements of Kim et al. and are found to give excellent agreement. We also consider an equilibrated residue system at 25% lower initial excitation, which gives an unsatisfactory exclusive multiplicity distribution. This illustrates that exclusive fragment multiplicity may provide a thermometer for system excitation. This analysis of data involves successive binary decay with no compressional effects or phase transitions. Several examples of primary versus final (stable) cluster decay probabilities for an A = 100 nucleus at excitations of 100 to 800 MeV are presented. From these results a large change in multifragmentation patterns may be understood as a simple phase-space consequence, invoking neither phase transitions nor equation-of-state information. These results are used to illustrate physical quantities which are ambiguous to deduce from experimental fragment measurements. 14 refs., 4 figs.
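The successive-binary-decay picture the abstract invokes can be caricatured in a few lines. This is a hypothetical phase-space toy (constant separation energy `b_sep`, cluster masses 1-4, excitation shared by mass ratio), not the authors' Boltzmann-master-equation or evaporation code:

```python
import random

def decay_cascade(a=100, e_star=400, b_sep=8.0, rng=random.Random(1)):
    """Toy successive binary decay: a nucleus of mass `a` with excitation
    `e_star` (MeV) repeatedly emits a light cluster while enough excitation
    remains to pay the separation energy `b_sep`; the leftover excitation is
    shared in proportion to mass (a pure phase-space caricature)."""
    clusters = []
    while e_star > b_sep and a > 4:
        a_frag = rng.randint(1, 4)                # emitted cluster mass (n ... alpha)
        a -= a_frag
        e_star -= b_sep                           # pay the separation energy
        e_frag = e_star * a_frag / (a + a_frag)   # cluster's share of excitation
        e_star -= e_frag                          # residue keeps the rest
        clusters.append(a_frag)
    return a, clusters

residue, clusters = decay_cascade()
```

Even this toy reproduces the qualitative point: the cluster multiplicity rises steeply with initial excitation, purely as a phase-space consequence, with no phase transition anywhere in the model.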
SDI: setting the record straight
Adelman, K.L.
1985-01-01
After a few introductory remarks, Mr. Adelman first discusses Soviet propaganda against SDI. He then poses and answers questions regarding the following: SDI and the ABM Treaty; SDI and US arms control objectives; and the ethics of SDI. The final portion of the address reviews US nonproliferation efforts.
DeWolf, H.G.
1989-11-01
President Reagan's Strategic Defense Initiative, or SDI, and the pursuit of defenses to protect against ballistic missile attack are issues of significant debate. Some praise the proposal, first made in a presidential address to the nation on 23 March 1983, as a grand vision that will abolish nuclear blackmail by adopting a totally defensive posture. Others condemn it as being destabilizing, a Pandora's box of strategic transition that could precipitate armed conflict.
Lee, S.
2011-05-05
The Savannah River Remediation (SRR) Organization requested that Savannah River National Laboratory (SRNL) develop a Computational Fluid Dynamics (CFD) method to mix and blend the miscible contents of the blend tanks, to ensure the contents are properly blended before they are transferred from a blend tank, such as Tank 50H, to the Salt Waste Processing Facility (SWPF) feed tank. The work described here consists of two modeling areas: the mixing modeling analysis during the miscible liquid blending operation, and the flow pattern analysis during the transfer operation of the blended liquid. The transient CFD governing equations, consisting of three momentum equations, one mass balance, two turbulence transport equations for kinetic energy and dissipation rate, and one species transport equation, were solved by an iterative technique until the species concentrations of the tank fluid were in equilibrium. The steady-state flow solutions for the entire tank fluid were used for flow pattern analysis, for velocity scaling analysis, and as the initial conditions for the transient blending calculations. A series of modeling calculations was performed to estimate the blending times for various jet flow conditions, and to investigate the impact of the cooling coils on the blending time of the tank contents. The modeling results were benchmarked against the pilot-scale test results. All of the flow and mixing models were run with the nozzles installed at the mid-elevation and parallel to the tank wall. From the CFD modeling calculations, the main results are summarized as follows: (1) The benchmark analyses for the CFD flow velocity and blending models demonstrate their consistency with Engineering Development Laboratory (EDL) and literature test results in terms of local velocity measurements and experimental observations.
Thus, an application of the established criterion to the SRS full-scale tank will provide a better, physically based estimate of the required mixing time, and of the elevation of the transfer pump for minimum sludge disturbance. (2) An empirical equation for a tank with no cooling coils agrees reasonably with the current modeling results for the dual jet. (3) From the sensitivity study of the cooling coils, it was found that the mixing time for the coiled tank was about two times longer than that of the tank fluid with no coils at 1/10th scale, while the coiled tank required only 50% longer than the one without coils for the full-scale Tank 50H. In addition, the time difference is reduced when the pumping U{sub o}d{sub o} value is increased for a given tank. (4) The blending time for the T-shape dual jet pump is about 20% longer than that of the 15{sup o} upward V-shape pump in the 1/10th pilot-scale tank, while the time difference between the two pumps is about 12% for the full-scale Tank 50H. These results are consistent with the literature information. (5) A transfer pump with a solid-plate suction screen operating at 130 gpm can be located 9.5 inches above settled sludge for a 2 in screen height in an 85 ft waste tank without disturbing any sludge. Detailed results are summarized in Table 13. Final pump performance calculations were made by using the established CW pump design and operating conditions to satisfy the two requirements of minimum sludge disturbance and adequate blending of tank contents. The final calculation results show that the blending times for the coiled and uncoiled tanks coupled with the CW pump design are 159 and 83 minutes, respectively. All the results are provided in Table 16.
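A blending-time criterion of the kind benchmarked above typically scales with tank volume and the jet parameter U{sub o}d{sub o}. The sketch below illustrates that scaling and the coil penalty; the functional form, the constant `c_mix`, and the 1.5 coil factor are hypothetical placeholders (the 1.5 factor merely echoes the ~50% full-scale penalty quoted in the report), not values taken from it:

```python
def blending_time(volume_ft3, u0d0_ft2_s, coiled=False, c_mix=9.0):
    """Hedged sketch of a jet-mixing blending-time criterion:
    t_95 = c_mix * V**(2/3) / (U0*d0), with a longer time for a
    coiled tank.  `c_mix` is a hypothetical dimensionless constant,
    NOT a value from the SRNL report."""
    t95 = c_mix * volume_ft3 ** (2.0 / 3.0) / u0d0_ft2_s
    return t95 * 1.5 if coiled else t95
```

The sketch captures the two trends the report states: increasing U{sub o}d{sub o} shortens the blending time, and coils lengthen it by a fixed fraction at a given scale.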
NASP and SDI Spearhead CFD Developments
NASA Technical Reports Server (NTRS)
Mehta, Unmeel B.
1992-01-01
The National Aerospace Plane (NASP) program's purpose, as stated by the National Space Council, is to "develop and demonstrate hypersonic technologies with the ultimate goal of single stage to orbit." The council has also directed that "performance of the experimental flight vehicle will be constrained to the minimum necessary to meet the highest priority research, as opposed to operational objectives .... The program will be conducted in such a way as to minimize technical and cost uncertainty associated with the experimental vehicle." The purpose of the Strategic Defense Initiative (SDI), as defined by President Bush, is "...protection from limited ballistic missile strikes, whatever their source." Computational fluid dynamics (CFD) plays a vital role in both endeavors.
Functional integral approach to classical statistical dynamics
Jensen, R.V.
1980-04-01
A functional integral method is developed for the statistical solution of nonlinear stochastic differential equations which arise in classical dynamics. The functional integral approach provides a very natural and elegant derivation of the statistical dynamical equations that have been derived using the operator formalism of Martin, Siggia, and Rose.
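In one common sign convention (a hedged sketch, not the paper's exact notation), the Martin-Siggia-Rose generating functional for a Langevin equation reads:

```latex
% Hedged sketch, one common convention (Ito, Gaussian white noise):
% for the Langevin equation \dot q = f(q) + \eta(t) with
% \langle \eta(t)\eta(t') \rangle = 2D\,\delta(t-t'),
% the MSR generating functional is
Z[J] = \int \mathcal{D}q \, \mathcal{D}\hat{q}\;
  \exp\!\left( -\int dt \left[ \hat{q}\,\bigl(\dot q - f(q)\bigr)
  - D\,\hat{q}^{2} - J q \right] \right).
% Statistical correlation and response functions follow by
% functional differentiation of Z[J] with respect to J.
```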
MEDLINE SDI services: how do they compare?*
Shultz, Mary; De Groote, Sandra L.
2003-01-01
Introduction: Selective dissemination of information (SDI) services regularly alert users to new information on their chosen topics. This type of service can increase a user's ability to keep current and may have a positive impact on efficiency and productivity. Currently, there are many venues available where users can establish, store, and automatically run MEDLINE searches. Purpose: To describe, evaluate, and compare SDI services for MEDLINE. Resources: The following SDI services were selected for this study: PubMed Cubby, BioMail, JADE, PubCrawler, OVID, and ScienceDirect. Methodology: Identical searches were established in four of the six selected SDI services and were run on a weekly basis over a period of two months. Eight search strategies were used in each system to test performance under various search conditions. The PubMed Cubby system was used as the baseline against which the other systems were compared. Other aspects were evaluated in all six services and include ease of use, frequency of results, ability to use MeSH, ability to access and edit existing search strategies, and ability to download to a bibliographic management program. Results: Not all MEDLINE SDI services retrieve identical results, even when identical search strategies are used. This study also showed that the services vary in terms of features and functions offered. PMID:14566377
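The study's baseline-comparison methodology amounts to set arithmetic on the citation identifiers each service retrieves. A minimal sketch (service names and PMIDs are hypothetical):

```python
def compare_to_baseline(baseline_ids, service_results):
    """Compare each SDI service's weekly MEDLINE retrieval (a set of
    PMIDs) against a baseline service, in the spirit of the study's
    design: what the service missed, what it added, and its overlap
    fraction with the baseline."""
    report = {}
    for service, ids in service_results.items():
        report[service] = {
            "missed": sorted(baseline_ids - ids),   # in baseline, not retrieved
            "extra": sorted(ids - baseline_ids),    # retrieved, not in baseline
            "overlap": len(ids & baseline_ids) / len(baseline_ids),
        }
    return report

# Hypothetical weekly result sets for two services versus the baseline.
report = compare_to_baseline(
    {"14566377", "12345678", "11111111"},
    {"BioMail": {"14566377", "12345678"},
     "PubCrawler": {"14566377", "99999999"}},
)
```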
SDI spinoffs: research now, standards later
Smith, T.K. Jr.
1986-04-01
A major benefit of the Strategic Defense Initiative (SDI) is its potential for technological spinoffs. The lack of a consistent answer on the feasibility of developing an effective ballistic missile defense system may force Congress to look at the possible spinoffs in order to make a funding decision on SDI. Spinoffs have historically played an important role in providing industry with commercial applications, but there are also a number of unattractive aspects: unpredictability and possible suppression for national security reasons. Edward Teller is among those who promote X-ray lasers, while others support gamma-ray laser research. The possibility of SDI technology and spinoffs gives scientists and engineers a chance to participate in the development of new standards. 7 references.
Multifragmentation at intermediate energy: Dynamics or statistics
Beaulieu, L.; Phair, L.; Moretto, L.G.; Wozniak, G.J.
1998-01-01
In this report the authors consider two contradictory claims that have been advanced recently: (1) the claim for a predominantly dynamical fragment production mechanism; and (2) the claim for a dominant statistical and thermal process. They present a new analysis in terms of Poissonian reducibility and thermal scaling, which addresses some of the criticisms of the binomial analysis.
Automated SDI Services. (Selective Dissemination of Information).
ERIC Educational Resources Information Center
Altmann, Berthold
An automated SDI service based on tapes supplied by DDC, Science Abstracts, and Engineering Index is evaluated as a component element of the entire HDL information system. Current studies for improving the efficiency are briefly described,--in particular, the establishment of a parameter reference service that should shorten the lead-time for the…
SDI (Strategic Defense Initiative): a policy analysis
Fought, S.O.
1987-01-01
Contents include -- Foundations of Deterrence; A Model for Stability; Analysis of SDI/Stability; Related Issues; Treatment of Implementation Factors; Historical Evolution and Trends; The Strategic Choices and Flexible Response; The Planners' Perspective; The Impact of Strategic Defense on a Strategy of Flexible Response; Synthesis.
Toward Statistical Descriptions of Convective Cloud Dynamics
NASA Astrophysics Data System (ADS)
Yano, Jun-Ichi; Quaas, Johannes; Wagner, Till M.; Plant, Robert S.
2008-06-01
Workshop on Concepts for Convective Parameterizations in Large-Scale Models; Hamburg, Germany, 12-14 February 2008; An accurate representation (parameterization) of deep convective clouds is essential for the successful simulation of precipitation in climate models. However, the question of closure (i.e., how to find a closed system of equations) for convective parameterizations remains unsettled. Because the parameterization is conceptually a description of an ensemble of convective clouds, the development of ``statistical cumulus dynamics'' (SCD) would be the ultimate way to provide the closure, just as the statistical mechanics of a microphysical system provides the ultimate basis for its macrophysical thermodynamics.
Takatsuka, Kazuo; Matsumoto, Kentaro
2016-01-21
We present a basic theory to study real-time dynamics embedded in a large environment that is treated using a statistical method. In light of great progress in the molecular-level studies on time-resolved spectroscopies, chemical reaction dynamics, and so on, not only in the gas phase but also in condensed phases like liquid solvents and even in crowded environments in living cells, we need to bridge over a gap between statistical mechanics and microscopic real-time dynamics. For instance, an analogy to gas-phase dynamics in which molecules are driven by the gradient of the potential energy hyper-surfaces (PESs) suggests that particles in condensed phases should run on the free energy surface instead. The question is whether this anticipation is correct. To answer it, we here propose a mixed dynamics and statistical representation to treat chemical dynamics embedded in a statistical ensemble. We first define the entropy functional, which is a function of the phase-space position of the dynamical subsystem, being dressed with statistical weights from the statistical counterpart. We then consider the functionals of temperature, free energy, and chemical potential as their extensions in statistical mechanics, through which one can clarify the relationship between real-time microscopic dynamics and statistical quantities. As an illustrative example we show that molecules in the dynamical subsystem should run on the free-energy functional surface, if and only if the spatial gradients of the temperature functional are all zero. Otherwise, additional forces emerge from the gradient of the temperature functional. Numerical demonstrations are presented at the very basic level of this theory of molecular dissociation in atomic cluster solvents. PMID:26674298
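The abstract's central statement can be summarized schematically (a hedged sketch in simplified notation, not the paper's full functional formalism):

```latex
% With entropy and temperature functionals S(q) and T(q) of the
% subsystem phase-space position q, the free-energy functional is
F(q) = E(q) - T(q)\,S(q),
% so the free-energy gradient decomposes as
-\nabla_q F = -\nabla_q E + T\,\nabla_q S + S\,\nabla_q T .
% The subsystem runs on the free-energy surface if and only if
% \nabla_q T = 0; otherwise the term S\,\nabla_q T contributes
% the additional force noted in the abstract.
```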
The origins of SDI, 1944--1983
Baucom, D.R.
1992-01-01
The most distinctive and important contribution of this new book on the Strategic Defense Initiative is that it ends where most other studies begin, with President Ronald Reagan's famous (or infamous, depending on one's perspective) March 1983 speech that introduced the Star Wars concept. In taking this approach, Donald R. Baucom - a former Air Force historian who has been the official historian of the Strategic Defense Initiative Organization since May 1987 - helps to correct the common misperception that US efforts in strategic defense began and ended with the SDI. Although Baucom tells us that The Origins of SDI is a significantly revised version of an SDIO study he completed in 1989, representing his own views and not those of the SDIO, the reader should be warned that the book reads like an official history. It is often dry or too episodic and offers little that is new in the way of analysis or interpretation.
SDI (Strategic Defense Initiative): Shield or sword. Study Project
Butler, C.S.; Spiczak, G.R.
1989-05-15
The paper attempts to answer the fundamental question: Is SDI an adjunct to a first-strike strategy? As its criteria, it discusses Soviet and U.S. opposing views on SDI, an historical application of Mutual Assured Destruction strategy, and a discussion of Soviet and U.S. thinking on first-strike capability. President Reagan's March 1983 address on SDI is used as the backdrop to set the stage for the discussion. It is the objective of the authors to evaluate and analyze the potential impact of SDI on first-strike strategy.
Using SDI-12 with ST microelectronics MCU's
Saari, Alexandra; Hinzey, Shawn Adrian; Frigo, Janette Rose; Proicou, Michael Chris; Borges, Louis
2015-09-03
ST Microelectronics microcontrollers and processors are readily available, capable, and economical. Unfortunately, they lack a broad user base like similar offerings from Texas Instruments, Atmel, or Microchip. All of these devices could be useful in economical devices for remote sensing applications used with environmental sensing. With the increased need for environmental studies, and limited budgets, flexibility in hardware is very important. To that end, and in an effort to increase open support of ST devices, I am sharing my team's experience in interfacing a common environmental sensor communication protocol (SDI-12) with ST devices.
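For a flavor of the protocol the team interfaced: per the published SDI-12 specification, a sensor answers the measurement command 'aM!' with 'atttn\r\n' (one-character address, three-digit seconds until data are ready, one-digit number of values). Below is a minimal host-side parser for that one response type; it is an illustrative sketch, not the team's MCU firmware:

```python
def parse_measure_response(resp: bytes):
    """Parse the reply to an SDI-12 'aM!' measurement command.
    Per the SDI-12 spec the sensor answers b'atttn\r\n': address a,
    ttt = seconds until data are ready, n = number of values that
    the subsequent 'aD0!' command will return."""
    text = resp.rstrip(b"\r\n").decode("ascii")
    address = text[0]            # addresses may be alphanumeric, keep as str
    seconds = int(text[1:4])
    n_values = int(text[4])
    return address, seconds, n_values

# Sensor at address '0' will have 3 values ready in 1 second.
addr, secs, n = parse_measure_response(b"00013\r\n")
```

The physical layer (1200 baud, 7 data bits, even parity, plus the break signal that wakes the sensor) is what the ST UART has to be configured for; the framing above is independent of the MCU.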
Statistical Physics Applied to Human Heartbeat Dynamics
NASA Astrophysics Data System (ADS)
Stanley, H. Eugene
2000-03-01
A major problem in biology is the quantitative analysis of nonstationary time series. A central question is whether such noisy fluctuating signals contain information useful for understanding underlying physiological mechanisms. This review talk summarizes recent work that analyzes physiological signals--principally lengthy time series of interbeat heart intervals--using a range of approaches adapted from modern statistical mechanics. These approaches include (i) detrended fluctuation analysis of long-range anticorrelations, (ii) wavelet analysis, and (iii) multifractal analysis. The work reported here was carried out primarily by L. A. Nunes Amaral, A. L. Goldberger, S. Havlin, P. Ch. Ivanov, C.-K. Peng, M. G. Rosenblum, and Z. Struzik; see [1-5] and references therein for details. [1] For an overview, see H. E. Stanley, L. A. N. Amaral, A. L. Goldberger, S. Havlin, P. Ch. Ivanov, and C.-K. Peng, ``Statistical Physics and Physiology: Monofractal and Multifractal Approaches,'' Physica A 270 (1999) 309. [2] C.-K. Peng, S. Havlin, H. E. Stanley, and A. L. Goldberger, ``Quantification of Scaling Exponents and Crossover Phenomena in Nonstationary Heartbeat Time Series,'' Chaos 5 (1995) 82. [3] L. A. N. Amaral, A. L. Goldberger, P. Ch. Ivanov, and H. E. Stanley, ``Scale-Independent Measures and Pathologic Cardiac Dynamics,'' Phys. Rev. Lett. 81 (1998) 2388. [4] P. Ch. Ivanov, A. L. Goldberger, S. Havlin, C.-K. Peng, and H. E. Stanley, ``Wavelets in Medicine and Physiology,'' in Wavelets, edited by H. C. van den Berg (Cambridge University Press, Cambridge, 1999). [5] P. Ch. Ivanov, L. A. N. Amaral, A. L. Goldberger, S. Havlin, M. G. Rosenblum, Z. Struzik, and H. E. Stanley, ``Multifractality in Human Heartbeat Dynamics,'' Nature 399 (1999) 461.
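Approach (i), detrended fluctuation analysis, can be sketched in a few lines. This is a minimal DFA-1 (linear detrending, dyadic window sizes), not the authors' implementation:

```python
import numpy as np

def dfa_alpha(x, windows=(4, 8, 16, 32, 64)):
    """Minimal detrended fluctuation analysis (DFA-1): integrate the
    mean-removed series, remove a linear trend in windows of size n,
    and fit the scaling F(n) ~ n**alpha of the RMS fluctuation."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    fs = []
    for n in windows:
        m = len(y) // n
        f2 = 0.0
        for i in range(m):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)       # local linear trend
            f2 += np.mean((seg - np.polyval(coef, t)) ** 2)
        fs.append(np.sqrt(f2 / m))
    alpha, _ = np.polyfit(np.log(windows), np.log(fs), 1)
    return alpha
```

For uncorrelated noise the exponent is near 0.5; long-range anticorrelated interbeat series of the kind discussed in the talk give alpha below 0.5.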
A Statistical Description of Neural Ensemble Dynamics
Long, John D.; Carmena, Jose M.
2011-01-01
The growing use of multi-channel neural recording techniques in behaving animals has produced rich datasets that hold immense potential for advancing our understanding of how the brain mediates behavior. One limitation of these techniques is they do not provide important information about the underlying anatomical connections among the recorded neurons within an ensemble. Inferring these connections is often intractable because the set of possible interactions grows exponentially with ensemble size. This is a fundamental challenge one confronts when interpreting these data. Unfortunately, the combination of expert knowledge and ensemble data is often insufficient for selecting a unique model of these interactions. Our approach shifts away from modeling the network diagram of the ensemble toward analyzing changes in the dynamics of the ensemble as they relate to behavior. Our contribution consists of adapting techniques from signal processing and Bayesian statistics to track the dynamics of ensemble data on time-scales comparable with behavior. We employ a Bayesian estimator to weigh prior information against the available ensemble data, and use an adaptive quantization technique to aggregate poorly estimated regions of the ensemble data space. Importantly, our method is capable of detecting changes in both the magnitude and structure of correlations among neurons missed by firing rate metrics. We show that this method is scalable across a wide range of time-scales and ensemble sizes. Lastly, the performance of this method on both simulated and real ensemble data is used to demonstrate its utility. PMID:22319486
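The idea of weighing prior information against limited ensemble data can be illustrated with the simplest conjugate estimator. This Beta-Bernoulli sketch is hypothetical and far simpler than the paper's adaptive method; it only shows the prior-versus-data trade-off:

```python
def posterior_rate(spikes, bins, a_prior=1.0, b_prior=1.0):
    """Beta-Bernoulli posterior mean for a neuron's per-bin spike
    probability (hypothetical, NOT the paper's estimator).  With few
    bins the estimate stays near the prior mean a/(a+b); with many
    bins it approaches the empirical rate spikes/bins."""
    return (a_prior + spikes) / (a_prior + b_prior + bins)
```

With no data the estimate is the prior mean 0.5; as bins accumulate, the data dominate, which is exactly the behavior one wants when tracking ensemble dynamics on behavioral time-scales.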
Investigating strategies to improve crop germination when using SDI
Technology Transfer Automated Retrieval System (TEKTRAN)
As the nation's population increases and available irrigation water decreases, new technologies are being developed to maintain or increase production on fewer acres. One of these advancements has been the use of subsurface drip irrigation (SDI) on field crops. Research has shown that SDI is the m...
Lost in space: SDI struggles through its sixth year
MacDonald, B.W.
1989-09-01
After six years of debate, it is clear that Congress is willing to support a robust research program for SDI, but it is also clear that Congress will not support SDI annual outlays on the order of $10 billion. Thus the policy choice is between a good research program that meshes with fiscal reality, or an inadequate and wasteful development program that continues to focus on preparing for a Phase I deployment for which the funds simply will not be available. The Bush administration so far seems trapped by its own rhetoric from coming to grips with the implications of the new SDI reality. The responsibility for getting SDI on a steadier course toward more realistic research objectives thus seems to lie with Congress in the near term. Since Congress has been reluctant to earmark SDI research funds for specific objectives, it will take a change in administration perceptions before SDI program goals can be changed away from Phase I deployment. The only likely way this could happen in the near term would be as a result of a Congress-executive branch summit agreement on SDI objectives and funding levels. In the absence of such an agreement, SDI will be sailing under ever weaker fiscal and political winds and runs the risk of finding itself becalmed, working ceaselessly toward goals that will never be fulfilled.
Subsurface drip irrigation (SDI): Status of the technology in 2010
Technology Transfer Automated Retrieval System (TEKTRAN)
Subsurface drip irrigation (SDI), although a much smaller fraction of the microirrigated land area than surface drip irrigation, is growing at a much faster rate and is the subject of considerable research and educational efforts in the United States. This paper will discuss the growth in SDI, highl...
SDI Use and Productivity in the Corporate Research Environment.
ERIC Educational Resources Information Center
Mondschein, Lawrence G.
1990-01-01
Use of selective dissemination of information (SDI) by 156 research scientists at 6 corporate research and development facilities was surveyed to assess its relationship to research productivity as measured by the number of papers authored. Findings showed that 70 percent of the scientists use SDI, and regular users are more productive than either…
Teachers' Use of Transnumeration in Solving Statistical Tasks with Dynamic Statistical Software
ERIC Educational Resources Information Center
Lee, Hollylynne S.; Kersaint, Gladis; Harper, Suzanne R.; Driskell, Shannon O.; Jones, Dusty L.; Leatham, Keith R.; Angotti, Robin L.; Adu-Gyamfi, Kwaku
2014-01-01
This study examined a random stratified sample (n = 62) of teachers' work across eight institutions on three tasks that utilized dynamic statistical software. We considered how teachers may utilize and develop their statistical knowledge and technological statistical knowledge when investigating a statistical task. We examined how teachers…
SDI Small Business Innovation Research Program (video). Audio-Visual
Not Available
1991-06-01
The video tape details opportunities for small high tech companies to participate in the Strategic Defense Initiative's (SDI's) Small Business Innovation Research (SBIR) program. The package also includes a paper copy of the story board.
Artificial intelligence applications in space and SDI: A survey
NASA Technical Reports Server (NTRS)
Fiala, Harvey E.
1988-01-01
The purpose of this paper is to survey existing and planned Artificial Intelligence (AI) applications to show that they are sufficiently advanced for 32 percent of all space applications and SDI (Strategic Defense Initiative) software to be AI-based software. To best define the needs that AI can fill in space and SDI programs, this paper enumerates primary areas of research and lists generic application areas. Current and planned NASA and military space projects in AI will be reviewed. This review will be largely in the selected area of expert systems. Finally, direct applications of AI to SDI will be treated. The conclusion covers the importance of AI to space and SDI applications, and conversely, their importance to AI.
Multifragmentation: New dynamics or old statistics?
Moretto, L.G.; Delis, D.N.; Wozniak, G.J.
1993-10-01
The understanding of the fission process as it has developed over the last fifty years has been applied to multifragmentation. Two salient aspects have been discovered: 1) a strong decoupling of the entrance and exit channels, with the formation of well-characterized sources; and 2) a statistical competition between two-, three-, four-, five-, ... n-body decays.
Segmenting Dynamic Human Action via Statistical Structure
ERIC Educational Resources Information Center
Baldwin, Dare; Andersson, Annika; Saffran, Jenny; Meyer, Meredith
2008-01-01
Human social, cognitive, and linguistic functioning depends on skills for rapidly processing action. Identifying distinct acts within the dynamic motion flow is one basic component of action processing; for example, skill at segmenting action is foundational to action categorization, verb learning, and comprehension of novel action sequences. Yet…
Photon Counts Statistics in Leukocyte Cell Dynamics
NASA Astrophysics Data System (ADS)
van Wijk, Eduard; van der Greef, Jan; van Wijk, Roeland
2011-12-01
In the present experiment, ultra-weak photon emission/chemiluminescence from isolated neutrophils was recorded. It is associated with the production of reactive oxygen species (ROS) in the "respiratory burst" process, which can be activated by PMA (Phorbol 12-Myristate 13-Acetate). Commonly, the reaction is demonstrated using the enhancer luminol; however, with highly sensitive photomultiplier equipment it can also be recorded without an enhancer. In that case, it can be hypothesized that photon count statistics may assist in understanding the underlying metabolic activity and cooperation of these cells. To study this hypothesis, leukocytes were stimulated with PMA, and the increased photon signals recorded in the quasi-stable period were analyzed with the Fano factor at different window sizes. The Fano factor is defined as the variance over the mean of the number of photons within the observation time. The analysis demonstrated that the Fano factor of the true signal, but not of surrogate signals obtained by random shuffling, increases as the window size increases. It is concluded that photon count statistics, in particular Fano factor analysis, provides information regarding leukocyte interactions. This opens the perspective of utilizing this analytical procedure in (in vivo) inflammation research; however, this needs further validation.
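The Fano factor computation itself is compact; a minimal sketch (not the authors' analysis code):

```python
import numpy as np

def fano_factor(counts, window):
    """Fano factor (variance over mean) of photon counts aggregated
    into non-overlapping windows of `window` consecutive bins."""
    m = len(counts) // window
    w = counts[:m * window].reshape(m, window).sum(axis=1)
    return w.var() / w.mean()
```

For a Poisson (uncorrelated) signal the Fano factor stays near 1 at every window size; correlated photon emission of the kind the abstract describes makes it grow with the window size, while shuffled surrogates destroy the correlations and pull it back toward 1.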
Protein electron transfer: Dynamics and statistics.
Matyushov, Dmitry V
2013-07-14
Electron transfer between redox proteins participating in energy chains of biology is required to proceed with high energetic efficiency, minimizing losses of redox energy to heat. Within the standard models of electron transfer, this requirement, combined with the need for unidirectional (preferably activationless) transitions, is translated into the need to minimize the reorganization energy of electron transfer. This design program is, however, unrealistic for proteins whose active sites are typically positioned close to the polar and flexible protein-water interface to allow inter-protein electron tunneling. The high flexibility of the interfacial region makes both the hydration water and the surface protein layer act as highly polar solvents. The reorganization energy, as measured by fluctuations, is not minimized, but rather maximized in this region. Natural systems in fact utilize the broad breadth of interfacial electrostatic fluctuations, but in the ways not anticipated by the standard models based on equilibrium thermodynamics. The combination of the broad spectrum of static fluctuations with their dispersive dynamics offers the mechanism of dynamical freezing (ergodicity breaking) of subsets of nuclear modes on the time of reaction/residence of the electron at a redox cofactor. The separation of time-scales of nuclear modes coupled to electron transfer allows dynamical freezing. In particular, the separation between the relaxation time of electro-elastic fluctuations of the interface and the time of conformational transitions of the protein caused by changing redox state results in dynamical freezing of the latter for sufficiently fast electron transfer. The observable consequence of this dynamical freezing is significantly different reorganization energies describing the curvature at the bottom of electron-transfer free energy surfaces (large) and the distance between their minima (Stokes shift, small). 
The ratio of the two reorganization energies establishes the parameter by which the energetic efficiency of protein electron transfer is increased relative to the standard expectations, thus minimizing losses of energy to heat. Energetically efficient electron transfer occurs in a chain of conformationally quenched cofactors and is characterized by flattened free energy surfaces, reminiscent of the flat and rugged landscape at the stability basin of a folded protein. PMID:23862967
Protein electron transfer: Dynamics and statistics
NASA Astrophysics Data System (ADS)
Matyushov, Dmitry V.
2013-07-01
Electron transfer between redox proteins participating in energy chains of biology is required to proceed with high energetic efficiency, minimizing losses of redox energy to heat. Within the standard models of electron transfer, this requirement, combined with the need for unidirectional (preferably activationless) transitions, is translated into the need to minimize the reorganization energy of electron transfer. This design program is, however, unrealistic for proteins whose active sites are typically positioned close to the polar and flexible protein-water interface to allow inter-protein electron tunneling. The high flexibility of the interfacial region makes both the hydration water and the surface protein layer act as highly polar solvents. The reorganization energy, as measured by fluctuations, is not minimized, but rather maximized in this region. Natural systems in fact utilize the broad breadth of interfacial electrostatic fluctuations, but in the ways not anticipated by the standard models based on equilibrium thermodynamics. The combination of the broad spectrum of static fluctuations with their dispersive dynamics offers the mechanism of dynamical freezing (ergodicity breaking) of subsets of nuclear modes on the time of reaction/residence of the electron at a redox cofactor. The separation of time-scales of nuclear modes coupled to electron transfer allows dynamical freezing. In particular, the separation between the relaxation time of electro-elastic fluctuations of the interface and the time of conformational transitions of the protein caused by changing redox state results in dynamical freezing of the latter for sufficiently fast electron transfer. The observable consequence of this dynamical freezing is significantly different reorganization energies describing the curvature at the bottom of electron-transfer free energy surfaces (large) and the distance between their minima (Stokes shift, small). 
The ratio of the two reorganization energies establishes the parameter by which the energetic efficiency of protein electron transfer is increased relative to the standard expectations, thus minimizing losses of energy to heat. Energetically efficient electron transfer occurs in a chain of conformationally quenched cofactors and is characterized by flattened free energy surfaces, reminiscent of the flat and rugged landscape at the stability basin of a folded protein.
Extreme events: dynamics, statistics and prediction
NASA Astrophysics Data System (ADS)
Ghil, M.; Yiou, P.; Hallegatte, S.; Malamud, B. D.; Naveau, P.; Soloviev, A.; Friederichs, P.; Keilis-Borok, V.; Kondrashov, D.; Kossobokov, V.; Mestre, O.; Nicolis, C.; Rust, H. W.; Shebalin, P.; Vrac, M.; Witt, A.; Zaliapin, I.
2011-05-01
We review work on extreme events, their causes and consequences, by a group of European and American researchers involved in a three-year project on these topics. The review covers theoretical aspects of time series analysis and of extreme value theory, as well as of the deterministic modeling of extreme events, via continuous and discrete dynamic models. The applications include climatic, seismic and socio-economic events, along with their prediction. Two important results refer to (i) the complementarity of spectral analysis of a time series in terms of the continuous and the discrete part of its power spectrum; and (ii) the need for coupled modeling of natural and socio-economic systems. Both these results have implications for the study and prediction of natural hazards and their human impacts.
SDI (Strategic Defense Initiative) and national security policy. Research report
Davis, R.W.
1988-04-01
The paper attempts to answer the fundamental question: Can SDI make a significant contribution to US national security? It uses as its evaluation criteria historical arms-control measurements of stability, reduction in the probability of war, reduction in the consequences of war, economic benefits, and political benefits. A historical discussion of US nuclear strategy development along with Soviet thinking is provided as a backdrop to set the stage for an analysis of the reasons for President Reagan's March 1983 speech. The objectives of SDI are discussed along with the major concerns expressed by the program's critics. Using the evaluation criteria defined above, the author analyzes SDI's potential position in a long-term integrated national strategy that includes arms control and competitive strategies.
Statistical coarse-graining of molecular dynamics into peridynamics.
Silling, Stewart Andrew; Lehoucq, Richard B.
2007-10-01
This paper describes an elegant statistical coarse-graining of molecular dynamics at finite temperature into peridynamics, a continuum theory. Peridynamics is an efficient alternative to molecular dynamics enabling dynamics at larger length and time scales. In direct analogy with molecular dynamics, peridynamics uses a nonlocal model of force and does not employ stress/strain relationships germane to classical continuum mechanics. In contrast with classical continuum mechanics, the peridynamic representation of a system of linear springs and masses is shown to have the same dispersion relation as the original spring-mass system.
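The closing claim about dispersion relations can be checked against the textbook relation for a 1-D chain of masses coupled by linear springs. The sketch below is our own illustration with made-up parameter values, not the paper's peridynamic formulation: it evaluates the spring-mass dispersion ω(κ) = sqrt(4k/m)·|sin(κa/2)|, which a matching peridynamic representation would have to reproduce.

```python
import math

def spring_mass_dispersion(k_spring, mass, a, wavenumbers):
    """Dispersion relation omega(kappa) = sqrt(4*k/m) * |sin(kappa*a/2)|
    for an infinite 1-D chain of masses m coupled by springs k, spacing a."""
    w0 = math.sqrt(4.0 * k_spring / mass)
    return [w0 * abs(math.sin(kappa * a / 2.0)) for kappa in wavenumbers]

# In the long-wavelength limit omega ~ c*kappa, with sound speed c = a*sqrt(k/m);
# the band edge at kappa = pi/a gives the maximum frequency sqrt(4k/m).
freqs = spring_mass_dispersion(k_spring=1.0, mass=1.0, a=1.0,
                               wavenumbers=[0.0, 0.1, math.pi])
```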
Soviet SDI Rhetoric: The "Evil Empire" Vision of Mikhail Gorbachev.
ERIC Educational Resources Information Center
Kelley, Colleen E.
The symbolic presence of Ronald Reagan's Strategic Defense Initiative (SDI) has been and continues to be the pivot point in all summitry rhetoric between the American President and Soviet General Secretary Mikhail Gorbachev. To examine some of the rhetorical choices made by Gorbachev to dramatize his vision of why Ronald Reagan refuses to…
SDI Investigation, 1967-1969. Volumes 1-5.
ERIC Educational Resources Information Center
Clague, P.
The investigation of the performance, economics, and acceptability to users of the selective dissemination of information (SDI) computer system that supported the International Information Service in Physics, Electrotechnology, Computers, and Control (INSPEC) during its initial testing phase is described. The initial design of the study, involving…
Optimum Degree of User Participation in SDI Profile Generation.
ERIC Educational Resources Information Center
Evans, L.
A study was conducted of the International Information Service in Physics, Electrotechnology, Computers, and Control (INSPEC) selective dissemination of information (SDI) user profile generation. Five degrees of user involvement in the generation of profiles were investigated ranging from having the user provide only a statement of subject…
INSPEC SDI Investigation, 1967-1969. Volume IV.
ERIC Educational Resources Information Center
Clague, P.
Volume IV of this five volume study of the INSPEC SDI system consists of the following appendices to the study: A proposal to investigate the selective dissemination of information; Covering letter to questionnaire; Questionnaire: survey of information use; Chasing letter; Letter of invitation to participate; Chasing letter; Statement of…
Design and Installation of SDI Systems in North Carolina
Technology Transfer Automated Retrieval System (TEKTRAN)
As a part of the humid Southeast, North Carolina's climate, topography, soils, cropping systems, and water sources require special consideration when considering and implementing a subsurface drip irrigation (SDI) system. This publication is not a step-by-step design manual, but it will help you in ...
INSPEC SDI Investigation, 1967-1969. Volume V.
ERIC Educational Resources Information Center
Clague, P.
The appendices included in Volume V of this five volume study of the INSPEC SDI system are a continuation of those in Volume IV and consist of: Communications with users etc.; Specimen sheets from descriptor file; Time on profile modification and analysis, and user assessment; Profiles compiled by compilers 1, 2, and 3; Proposed comparison of…
SDI Considerations for North Carolina Growers and Producers
Technology Transfer Automated Retrieval System (TEKTRAN)
Humid areas, such as the southeastern and midsouthern United States, have particular climate, topography, soils, cropping systems, and water sources that require special consideration when implementing a subsurface drip irrigation (SDI) system. These factors are normally different enough in value or...
Exploring Foundation Concepts in Introductory Statistics Using Dynamic Data Points
ERIC Educational Resources Information Center
Ekol, George
2015-01-01
This paper analyses introductory statistics students' verbal and gestural expressions as they interacted with a dynamic sketch (DS) designed using "Sketchpad" software. The DS involved numeric data points built on the number line whose values changed as the points were dragged along the number line. The study is framed on aggregate…
Statistical determination of space shuttle component dynamic magnification factors
NASA Technical Reports Server (NTRS)
Lehner, F.
1973-01-01
A method is presented of obtaining vibration design loads for components and brackets. Dynamic Magnification Factors from applicable Saturn/Apollo qualification, reliability, and vibroacoustic tests have been statistically formulated into design nomographs. These design nomographs have been developed for different component and bracket types, mounted on backup structure or rigidly mounted and excited by sinusoidal or random inputs. Typical nomographs are shown.
Spatial statistics and attentional dynamics in scene viewing.
Engbert, Ralf; Trukenbrod, Hans A; Barthelmé, Simon; Wichmann, Felix A
2015-01-01
In humans and in foveated animals visual acuity is highly concentrated at the center of gaze, so that choosing where to look next is an important example of online, rapid decision-making. Computational neuroscientists have developed biologically-inspired models of visual attention, termed saliency maps, which successfully predict where people fixate on average. Using point process theory for spatial statistics, we show that scanpaths contain, however, important statistical structure, such as spatial clustering on top of distributions of gaze positions. Here, we develop a dynamical model of saccadic selection that accurately predicts the distribution of gaze positions as well as spatial clustering along individual scanpaths. Our model relies on activation dynamics via spatially-limited (foveated) access to saliency information, and, second, a leaky memory process controlling the re-inspection of target regions. This theoretical framework models a form of context-dependent decision-making, linking neural dynamics of attention to behavioral gaze data. PMID:25589298
Dynamic computation of network statistics via updating schema
NASA Astrophysics Data System (ADS)
Sun, Jie; Bagrow, James P.; Bollt, Erik M.; Skufca, Joseph D.
2009-03-01
Computing statistics of a large network, such as the clustering coefficient or modularity, is costly, and traditional methods require that the full (expensive) computation be redone whenever one more edge or vertex is added to the graph. Alternatively, we introduce here a new approach: under modification to the graph, we update the statistics instead of computing them from scratch. In this paper we provide update schemes for a number of popular statistics, including the degree distribution, clustering coefficient, assortativity, and modularity. Our primary aim is to reduce the computational complexity needed to track the evolving behavior of large networks. As an important consequence, this approach provides efficient methods which may support modeling the evolution of dynamic networks to identify and understand critical transitions. Using the updating scheme, the network statistics can be computed much faster than by recalculating each time the network evolves. We also note that the update formula can be used to determine which edge or node will lead to the extremal change of a network statistic, providing a way of predicting or designing network evolution rules that would optimize some chosen statistic. We present our evolution methods in terms of a network statistics differential notation.
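As a toy illustration of the updating idea (our own minimal sketch; the paper's schemes also cover clustering coefficient, assortativity, and modularity), the class below maintains a degree distribution under edge insertions in O(1) per edge instead of recomputing it over the whole graph each time:

```python
from collections import Counter

class DegreeStats:
    """Track the degree distribution of a growing graph incrementally
    (class and method names are ours, for illustration only)."""
    def __init__(self):
        self.degree = Counter()      # node -> current degree
        self.histogram = Counter()   # degree -> number of nodes with that degree

    def add_edge(self, u, v):
        # Update only the two endpoints touched by the new edge.
        for node in (u, v):
            d = self.degree[node]
            if d > 0:
                self.histogram[d] -= 1
                if self.histogram[d] == 0:
                    del self.histogram[d]
            self.degree[node] = d + 1
            self.histogram[d + 1] += 1

stats = DegreeStats()
for edge in [(0, 1), (1, 2), (2, 0)]:   # build a triangle
    stats.add_edge(*edge)
# every node in a triangle ends with degree 2
```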
Seasonal statistical-dynamical forecasts of droughts over Western Iberia
NASA Astrophysics Data System (ADS)
Ribeiro, Andreia; Pires, Carlos
2015-04-01
The Standardized Precipitation Index (SPI) is used here as a drought predictand in order to assess seasonal drought predictability over western Iberia. Hybrid (statistical-dynamical) long-range forecasts of the drought index SPI are estimated with lead times up to 6 months, over the period 1987-2008. Operational forecasts of geopotential height and total precipitation from the UK Met Office operational forecasting system are considered. Past ERA-Interim reanalysis data, prior to the forecast launching, are used to build a set of SPI predictors integrating recent past observations. Then, a two-step hybridization procedure is adopted: in the first step, both forecasted and observational large-scale fields are subjected to a Principal Component Analysis (PCA), and forecasted PCs and persistent PCs are used as predictors. The second hybridization step consists of a statistical/hybrid downscaling to the regional scale based on regression techniques, after the selection of the statistically significant predictors. The large-scale predictors from past observations and operational forecasts are used to downscale SPI, and the advantage of combining predictors with both dynamical and statistical background in the prediction of drought conditions at different lags is evaluated. The SPI estimations and the added value of combining dynamical and statistical methods are evaluated in cross-validation mode. Results show that winter is the most predictable season, and most of the predictive power is in the large-scale fields and at the shorter lead times. The hybridization improves drought forecasting skill in comparison to purely dynamical forecasts, since the persistence of large-scale patterns plays the main role in the long-range predictability of precipitation.
These findings provide clues about the predictability of the SPI, particularly in Portugal, and may contribute to the predictability of crop yields and to guidance for the decision-making processes of users such as farmers.
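A hedged sketch of the two-step hybridization on synthetic stand-in data (array shapes, predictor counts, and the toy SPI series below are all our assumptions, not the authors' setup): PCA of each large-scale field, then a regression of the drought index on forecasted plus persistent PCs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: rows = months, columns = grid points of a large-scale field.
forecast_field = rng.standard_normal((120, 50))   # dynamical forecasts
observed_field = rng.standard_normal((120, 50))   # recent-past observations
spi = (forecast_field[:, 0] + 0.5 * observed_field[:, 0]
       + 0.1 * rng.standard_normal(120))          # invented predictand

def leading_pcs(field, n_pc):
    """First step: PCA of a large-scale field; keep the leading PCs."""
    anomalies = field - field.mean(axis=0)
    _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
    return anomalies @ vt[:n_pc].T

# Second step: regress the drought index on forecasted + persistent PCs.
X = np.column_stack([leading_pcs(forecast_field, 3),
                     leading_pcs(observed_field, 3),
                     np.ones(120)])
coef, *_ = np.linalg.lstsq(X, spi, rcond=None)
spi_hat = X @ coef
skill = np.corrcoef(spi, spi_hat)[0, 1]   # evaluate correlation of the fit
```

In the paper this second step is done in cross-validation mode with predictor screening; the in-sample fit above only shows the mechanics.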
Fractional-Power-Law Level Statistics Due to Dynamical Tunneling
NASA Astrophysics Data System (ADS)
Bäcker, Arnd; Ketzmerick, Roland; Löck, Steffen; Mertig, Normann
2011-01-01
For systems with a mixed phase space we demonstrate that dynamical tunneling universally leads to a fractional power law of the level-spacing distribution P(s) over a wide range of small spacings s. Going beyond Berry-Robnik statistics, we take into account that dynamical tunneling rates between the regular and the chaotic region vary over many orders of magnitude. This results in a prediction of P(s) which excellently describes the spectral data of the standard map. Moreover, we show that the power-law exponent is proportional to the effective Planck constant h_eff.
Statistical Computations Underlying the Dynamics of Memory Updating
Gershman, Samuel J.; Radulescu, Angela; Norman, Kenneth A.; Niv, Yael
2014-01-01
Psychophysical and neurophysiological studies have suggested that memory is not simply a carbon copy of our experience: Memories are modified or new memories are formed depending on the dynamic structure of our experience, and specifically, on how gradually or abruptly the world changes. We present a statistical theory of memory formation in a dynamic environment, based on a nonparametric generalization of the switching Kalman filter. We show that this theory can qualitatively account for several psychophysical and neural phenomena, and present results of a new visual memory experiment aimed at testing the theory directly. Our experimental findings suggest that humans can use temporal discontinuities in the structure of the environment to determine when to form new memory traces. The statistical perspective we offer provides a coherent account of the conditions under which new experience is integrated into an old memory versus forming a new memory, and shows that memory formation depends on inferences about the underlying structure of our experience. PMID:25375816
Dynamics, stability, and statistics on lattices and networks
Livi, Roberto
2014-07-15
These lectures aim at surveying some dynamical models that have been widely explored in the recent scientific literature as case studies of complex dynamical evolution, emerging from the spatio-temporal organization of several coupled dynamical variables. The first message is that a suitable mathematical description of such models needs tools and concepts borrowed from the general theory of dynamical systems and from out-of-equilibrium statistical mechanics. The second message is that the overall scenario is definitely richer than the standard problems in these fields. For instance, systems exhibiting complex unpredictable evolution do not necessarily exhibit deterministic chaotic behavior (i.e., Lyapunov chaos) as happens for dynamical models made of a few degrees of freedom. In fact, a very large number of spatially organized dynamical variables may yield unpredictable evolution even in the absence of Lyapunov instability. Such a mechanism may emerge from the combination of spatial extension and nonlinearity. Moreover, spatial extension allows one to naturally introduce disorder, or heterogeneity of the interactions, as important ingredients for complex evolution. It is worth pointing out that the models discussed in these lectures share these features, even though they were inspired by quite different physical and biological problems. Along the way we also describe some of the technical tools employed for the study of such models, e.g., Lyapunov stability analysis, unpredictability indicators for "stable chaos," hydrodynamic description of transport in low spatial dimension, spectral decomposition of stochastic dynamics on directed networks, etc.
Statistical energy conservation principle for inhomogeneous turbulent dynamical systems.
Majda, Andrew J
2015-07-21
Understanding the complexity of anisotropic turbulent processes over a wide range of spatiotemporal scales in engineering shear turbulence as well as climate atmosphere ocean science is a grand challenge of contemporary science with important societal impact. In such inhomogeneous turbulent dynamical systems there is a large dimensional phase space with a large dimension of unstable directions where a large-scale ensemble mean and the turbulent fluctuations exchange energy and strongly influence each other. These complex features strongly impact practical prediction and uncertainty quantification. A systematic energy conservation principle is developed here in a Theorem that precisely accounts for the statistical energy exchange between the mean flow and the related turbulent fluctuations. This statistical energy is a sum of the energy in the mean and the trace of the covariance of the fluctuating turbulence. This result applies to general inhomogeneous turbulent dynamical systems including the above applications. The Theorem involves an assessment of statistical symmetries for the nonlinear interactions and a self-contained treatment is presented below. Corollary 1 and Corollary 2 illustrate the power of the method with general closed differential equalities for the statistical energy in time either exactly or with upper and lower bounds, provided that the negative symmetric dissipation matrix is diagonal in a suitable basis. Implications of the energy principle for low-order closure modeling and automatic estimates for the single point variance are discussed below. PMID:26150510
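The statistical energy named in the abstract, the energy of the ensemble mean plus the trace of the fluctuation covariance, is easy to estimate from an ensemble of state vectors. A minimal sketch (the 1/2 normalization and the synthetic ensemble are our conventions, not the paper's):

```python
import numpy as np

def statistical_energy(ensemble):
    """Statistical energy of an ensemble of state vectors:
    energy in the ensemble mean plus the trace of the covariance
    of the fluctuations about that mean."""
    mean = ensemble.mean(axis=0)
    fluct = ensemble - mean
    cov = fluct.T @ fluct / (ensemble.shape[0] - 1)
    return 0.5 * mean @ mean + 0.5 * np.trace(cov)

# Synthetic ensemble: unit-variance fluctuations about a mean of norm 1,
# so mean energy ~ 0.5 and half the covariance trace ~ 2.5.
rng = np.random.default_rng(1)
states = rng.standard_normal((1000, 5)) + np.array([1.0, 0, 0, 0, 0])
E = statistical_energy(states)
```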
Soviet military on SDI (Strategic Defense Initiative). Professional paper
Fitzgerald, M.C.
1987-08-01
Numerous Western analysts have suggested that all American assessments of SDI should proceed not only from a consideration of American intentions, but also from the outlook of Soviet perceptions. Since 23 March 1983, the prevailing tone of Soviet military writings on SDI has been overwhelmingly negative. Myron Hedlin has concluded that this harsh reaction to a U.S. initiative still years from realization suggests both a strong concern about the ultimate impact of these plans on the strategic balance, and a perceived opportunity for scoring propaganda points. Indeed, the present review of Soviet writings since President Reagan's so-called Star Wars speech has yielded both objective Soviet concerns and regressions to psychological warfare. This, in turn, has necessitated a careful effort to separate rhetoric from more official assessments of SDI. While there has long been dispute in the West over the validity of Soviet statements, they have time and again been subsequently confirmed in Soviet hardware, exercises, and operational behavior. Some Western analysts will nonetheless contend that the Soviet statements under examination in this study are merely a commodity for export.
A Stochastic Fractional Dynamics Model of Rainfall Statistics
NASA Astrophysics Data System (ADS)
Kundu, Prasun; Travis, James
2013-04-01
Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate that allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the precipitation field is statistically homogeneous and isotropic in space and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment. Some data sets, containing periods of non-stationary behavior with occasional anomalously correlated rain events, present a challenge for the model.
Classifying cardiac biosignals using ordinal pattern statistics and symbolic dynamics.
Parlitz, U; Berg, S; Luther, S; Schirdewan, A; Kurths, J; Wessel, N
2012-03-01
The performance of (bio-)signal classification strongly depends on the choice of suitable features (also called parameters or biomarkers). In this article we evaluate the discriminative power of ordinal pattern statistics and symbolic dynamics in comparison with established heart rate variability parameters applied to beat-to-beat intervals. As an illustrative example we distinguish patients suffering from congestive heart failure from a (healthy) control group using beat-to-beat time series. We assess the discriminative power of individual features as well as pairs of features. These comparisons show that ordinal patterns sampled with an additional time lag are promising features for efficient classification. PMID:21511252
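The ordinal-pattern features the authors evaluate can be computed with a few lines of code. The sketch below is a generic Bandt-Pompe symbolization, not the authors' exact pipeline: each window of `order` values (spaced by `lag`, the "additional time lag" of the abstract) is mapped to the permutation that sorts it, and the pattern frequencies serve as classification features.

```python
from collections import Counter

def ordinal_pattern_distribution(series, order=3, lag=1):
    """Relative frequencies of ordinal patterns of a time series:
    each window of `order` values spaced by `lag` is replaced by the
    permutation of indices that sorts the window (Bandt-Pompe symbols)."""
    counts = Counter()
    for i in range(len(series) - (order - 1) * lag):
        window = [series[i + j * lag] for j in range(order)]
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] += 1
    total = sum(counts.values())
    return {p: c / total for p, c in counts.items()}

# A strictly increasing series contains only the identity pattern (0, 1, 2);
# beat-to-beat interval series would yield a richer, class-dependent mixture.
dist = ordinal_pattern_distribution([1, 2, 3, 4, 5], order=3)
```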
Solar wind dynamic pressure variations: Quantifying the statistical magnetospheric response
NASA Technical Reports Server (NTRS)
Sibeck, D. G.
1990-01-01
Solar wind dynamic pressure variations are common and have large amplitudes. Existing models for the transient magnetospheric and ionospheric response to the solar wind dynamic pressure variation are quantified. The variations drive large amplitude (approx. 1 R_E) magnetopause motion with velocities of approx. 60 km/s and transient dayside ionospheric flows of 2 km/s which are organized into double convection vortices. Ground magnetometer signatures are more pronounced under the auroral ionosphere, where they reach 60 to 300 nT, and under the equatorial electrojet. A statistical comparison of transient ground magnetometer events seen at a South Pole station and geosynchronous orbit indicates that all but the weakest ground events are associated with clear compressional signatures at dayside geosynchronous orbit.
A whirlwind tour of statistical methods in structural dynamics.
Booker, J. M.
2004-01-01
Several statistical methods and their corresponding principles of application to structural dynamics problems will be presented. This set was chosen based upon the projects and their corresponding challenges in the Engineering Sciences & Applications (ESA) Division at Los Alamos National Laboratory and focuses on variance-based uncertainty quantification. Our structural dynamics applications are heavily involved in modeling and simulation, often with sparse data availability. In addition to models, heavy reliance is placed upon the use of expertise and experience. Beginning with principles of inference and prediction, some statistical tools for verification and validation are introduced. Among these are the principles of good experimental design for test and model computation planning, and the combination of data, models and knowledge through the use of Bayes Theorem. A brief introduction to multivariate methods and exploratory data analysis will be presented as part of understanding relationships and variation among important parameters, physical quantities of interest, measurements, inputs and outputs. Finally, the use of these methods and principles will be discussed in drawing conclusions from the validation assessment process under uncertainty.
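The combination of data, models, and expert knowledge through Bayes' theorem, one of the tools named above, can be illustrated on a discrete parameter grid. This is a generic sketch of the principle, not the Los Alamos methodology; the hypotheses and numbers are invented:

```python
def bayes_update(prior, likelihood):
    """Combine prior knowledge with test evidence via Bayes' theorem on a
    discrete grid of hypotheses: posterior proportional to likelihood * prior."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)                      # normalizing constant (evidence)
    return [u / z for u in unnorm]

# Expert prior over three (hypothetical) damping-ratio hypotheses,
# updated by how well each hypothesis fits the vibration test data.
prior = [0.5, 0.3, 0.2]
likelihood = [0.1, 0.6, 0.3]
posterior = bayes_update(prior, likelihood)
```

Sparse test data shifts, but does not overturn, the expert prior; with more data the likelihood dominates, which is the usual motivation for this combination in validation under uncertainty.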
Extreme event statistics of daily rainfall: dynamical systems approach
NASA Astrophysics Data System (ADS)
Cigdem Yalcin, G.; Rabassa, Pau; Beck, Christian
2016-04-01
We analyse the probability densities of daily rainfall amounts at a variety of locations on Earth. The observed distributions of the amount of rainfall fit well to a q-exponential distribution with exponent q close to q ≈ 1.3. We discuss possible reasons for the emergence of this power law. In contrast, the waiting time distribution between rainy days is observed to follow a near-exponential distribution. A careful investigation shows that a q-exponential with q ≈ 1.05 yields the best fit of the data. A Poisson process where the rate fluctuates slightly in a superstatistical way is discussed as a possible model for this. We discuss the extreme value statistics for extreme daily rainfall, which can potentially lead to flooding. This is described by Fréchet distributions as the corresponding distributions of the amount of daily rainfall decay with a power law. Looking at extreme event statistics of waiting times between rainy days (leading to droughts for very long dry periods) we obtain from the observed near-exponential decay of waiting times extreme event statistics close to Gumbel distributions. We discuss superstatistical dynamical systems as simple models in this context.
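For reference, a common normalized form of the q-exponential density used in such fits (one standard convention; the authors' parametrization may differ) is e_q(x) = (2 − q) λ [1 − (1 − q) λ x]^(1/(1−q)) for 1 < q < 2, which reduces to the ordinary exponential λ exp(−λx) as q → 1 and decays as the power law x^(−1/(q−1)) in the tail:

```python
import math

def q_exponential(x, q, lam):
    """Normalized q-exponential density for 1 < q < 2 and x >= 0:
    (2 - q) * lam * [1 - (1 - q) * lam * x] ** (1 / (1 - q)).
    For q = 1.3 the tail decays like x ** (-10/3)."""
    base = 1.0 - (1.0 - q) * lam * x
    return (2.0 - q) * lam * base ** (1.0 / (1.0 - q))
```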
Not Available
1986-07-01
The same Federal budget cuts which are constraining in-space testing of SDI components and systems are slowing the development of environmental facilities to simulate space conditions for testing the components on earth. At Arnold Engineering Development Center (AEDC), an attempt is being made to obtain funds for construction of facilities as national assets, rather than as military appropriations. AEDC is involved in studies of plume signature measurement, vacuum chamber testing, kinetic energy projectile testing, high endoatmospheric interceptor development, and toxic propellant facility support. Some development is devoted to scene-generation capabilities, large optics for collimating signals and the isolation of vacuum chambers from vibration, as well as efforts to produce numerical simulations for computational fluid dynamics and complex geometries. Tests are proceeding on components to be projected with a rail gun operated in corrosive environments.
ERIC Educational Resources Information Center
Scheffler, F. L.; March, J. F.
The Aerospace Materials Information Center (AMIC) Selective Dissemination of Information (SDI) program was evaluated by an interview technique after one year of operation. The data base for the SDI consists of the periodic document index records input to the AMIC system. The users are 63 engineers, scientists, and technical administrators at the…
ERIC Educational Resources Information Center
Mondschein, Lawrence G.
1990-01-01
Discussion of the use of selective dissemination of information (SDI) focuses on a study that proposed a model for evaluating the relationship between SDI and the number of publications authored by basic research scientists working in a corporate research and development (R&D) environment. Hypotheses and their findings are presented. (Seven…
Subsurface drip irrigation (SDI) research at USDA-ARS in Bushland, TX
Technology Transfer Automated Retrieval System (TEKTRAN)
Producers in the Texas High Plains have recently adopted subsurface drip irrigation (SDI) at unprecedented rates in response to drought, declining water resources from the Ogallala Aquifer, and increasing energy costs to pump groundwater. However, SDI has much greater capital and maintenance require...
Forecasting: it is not about statistics, it is about dynamics.
Judd, Kevin; Stemler, Thomas
2010-01-13
In 1963, the mathematician and meteorologist Edward Lorenz published a paper (Lorenz 1963 J. Atmos. Sci. 20, 130-141) that changed the way scientists think about the prediction of geophysical systems, by introducing the ideas of chaos, attractors, sensitivity to initial conditions and the limitations to forecasting nonlinear systems. Three years earlier, the mathematician and engineer Rudolf Kalman had published a paper (Kalman 1960 Trans. ASME Ser. D, J. Basic Eng. 82, 35-45) that changed the way engineers thought about prediction of electronic and mechanical systems. Ironically, in recent years, geophysicists have become increasingly interested in Kalman filters, whereas engineers have become increasingly interested in chaos. It is argued that more often than not the tracking and forecasting of nonlinear systems has more to do with the nonlinear dynamics that Lorenz considered than with the statistics that Kalman considered, a position with which both Lorenz and Kalman would appear to agree. PMID:19948555
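The sensitivity to initial conditions that Lorenz introduced is easy to reproduce numerically. A self-contained sketch (step size, horizon, and perturbation size are our choices) integrates the Lorenz 1963 system for two initial states differing by 10^-8 and watches the trajectories separate:

```python
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One fourth-order Runge-Kutta step of the Lorenz 1963 system."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(add(state, k1, dt / 2))
    k3 = f(add(state, k2, dt / 2))
    k4 = f(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Two trajectories initially 1e-8 apart in x; track their separation
# over the final 5 time units of a 25-time-unit integration.
a, b = (1.0, 1.0, 1.0), (1.0 + 1e-8, 1.0, 1.0)
gap = 0.0
for i in range(2500):
    a, b = lorenz_step(a, 0.01), lorenz_step(b, 0.01)
    if i >= 2000:
        gap = max(gap, max(abs(x - y) for x, y in zip(a, b)))
# gap grows to order-one separation: the perturbation saturates at attractor size
```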
Multiscale dynamics of C60 from attosecond to statistical physics
NASA Astrophysics Data System (ADS)
Lépine, F.
2015-06-01
C60 is a fascinating object that has become remarkably useful for experimentalists and theoreticians studying photo-induced processes in large many-particle systems. In this review article, we discuss how the knowledge accumulated over the past 30 years on this molecule provides a large panel of mechanisms that occur from the intrinsic time scale of electronic motion, which is attosecond, to long 'macroscopic' time scales (seconds). This illustrates the multiscale aspect of dynamics in photo-excited systems, which connects coherent quantum processes to classical and statistical mechanisms. It also sheds light on future experiments and theoretical work required to complete the global picture of light-induced mechanisms in fullerenes.
Statistical characteristics of topographic surfaces and dynamic smoothing of landscapes
NASA Astrophysics Data System (ADS)
Bartlett, M. S.; Laio, F.; Ridolfi, L.; Vico, G.; Porporato, A. M.
2011-12-01
We analyze the local statistics of topographic surfaces, including slope and aspect, as a function of scale, and explore their relations with landscape features, such as age, vegetation, and geology. These results build upon the previous work of Vico and Porporato (JGR 114, F01011, 2009), which characterized slope using generalized t-Student distributions. We find that the number of degrees of freedom of such distributions, which determines the heaviness of their tails, is linked to the age of the topographic relief of the considered regions, tending to normal distributions for very old mountain ranges. Based on these findings, and inspired by models of critical phenomena, we develop physically-based, space-time stochastic differential equations that reproduce this dynamic smoothing of rough landscapes.
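The link between the degrees of freedom of a t-Student slope distribution and tail heaviness can be made concrete with a standard property of that distribution: its excess kurtosis is 6/(ν − 4) for ν > 4, vanishing in the Gaussian limit. The sketch below only illustrates this textbook relation; the mapping of ν to landscape age is the paper's empirical finding, not encoded here.

```python
def t_excess_kurtosis(nu):
    """Excess kurtosis of a Student-t distribution (finite only for nu > 4):
    heavy tails for small nu, approaching the Gaussian value 0 as nu grows."""
    return 6.0 / (nu - 4.0)

# Young, rough relief ~ few degrees of freedom (heavy-tailed slope statistics);
# very old relief ~ many degrees of freedom (near-Gaussian slope statistics).
for nu in (5, 10, 30, 1000):
    print(nu, t_excess_kurtosis(nu))
```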
Stability criteria for the LSDM (Livermore Statistical Dynamic Model)
Taylor, K.E.
1986-10-01
One criterion for stable integration of zonally averaged primitive equation models is normally given by the Courant-Friedrichs-Lewy condition applied to the fastest (horizontally) propagating waves. In practice this means that the horizontally propagating acoustic wave (Lamb wave) limits the length of the time step that can be used in the integration: Δt < Δx/c, where c is the phase speed of the Lamb wave. In the Livermore Statistical Dynamic Model (LSDM) there are other simulated processes that may place more stringent constraints on the length of the time step. In the attached notes, I have derived the stability criteria for the LSDM. I have found that, to a good approximation (within 15%), the length of the time step must be less than Δt_c, where 1/Δt_c = (1 + γ/2)(1/τ_s + 1/τ_r) + (γ/2)(1/τ_e).
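The combined bound is simple enough to evaluate numerically. The sketch below assumes the stated form of the criterion; the relaxation times and γ are hypothetical placeholder values, not taken from the LSDM itself.

```python
def critical_time_step(tau_s, tau_r, tau_e, gamma):
    """Return dt_c where 1/dt_c = (1 + g/2)(1/tau_s + 1/tau_r) + (g/2)(1/tau_e)."""
    inv_dt = (1.0 + gamma / 2.0) * (1.0 / tau_s + 1.0 / tau_r) \
             + (gamma / 2.0) * (1.0 / tau_e)
    return 1.0 / inv_dt

# Illustrative values only (seconds); the usable step must satisfy dt < dt_c.
dt_c = critical_time_step(tau_s=3600.0, tau_r=7200.0, tau_e=86400.0, gamma=1.4)
print(dt_c)
```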
OPEN PROBLEM: Orbits' statistics in chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Arnold, V.
2008-07-01
This paper shows how the measurement of the stochasticity degree of a finite sequence of real numbers, published by Kolmogorov in Italian in a journal of insurance statistics, can be usefully applied to measure the objective stochasticity degree of sequences originating from dynamical systems theory and from number theory. Namely, whenever the value of Kolmogorov's stochasticity parameter of a given sequence of numbers is too small (or too big), one may conclude that the conjecture describing this sequence as a sample of independent values of a random variable is highly improbable. Kolmogorov used this strategy fighting (in a paper in 'Doklady', 1940) against Lysenko, who had tried to disprove the classical genetic laws of Mendel experimentally. Calculating the value of his stochasticity parameter for the numbers from Lysenko's experiment reports, Kolmogorov deduced that, while these numbers differed from the exact fulfilment of Mendel's 3 : 1 law, any smaller deviation would be a manifestation of falsification of the report's numbers. The calculation of the values of the stochasticity parameter would be useful for many other generators of pseudorandom numbers and for many other chaotically looking statistics, including even the prime numbers distribution (discussed in this paper as an example).
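Kolmogorov's stochasticity parameter is λ_n = √n · sup|F_n(x) − F(x)|, the scaled maximum gap between the empirical and the hypothesized distribution function. The sketch below computes it against a uniform target; the too-regular test sequence is a made-up illustration of the "suspiciously small λ" case the abstract describes.

```python
import math

def kolmogorov_lambda(sample, cdf):
    """Kolmogorov's stochasticity parameter lambda_n = sqrt(n) * sup|F_n - F|."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        fx = cdf(x)
        d = max(d, abs((i + 1) / n - fx), abs(i / n - fx))
    return math.sqrt(n) * d

# A low-discrepancy (too regular) sequence yields a suspiciously small lambda,
# hinting that the data are not an i.i.d. random sample.
regular = [(i + 0.5) / 100 for i in range(100)]
print(kolmogorov_lambda(regular, lambda x: x))
```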
Vegetation patchiness: Pareto statistics, cluster dynamics and desertification.
NASA Astrophysics Data System (ADS)
Shnerb, N. M.
2009-04-01
Recent studies [1-4] of the cluster distribution of vegetation in drylands revealed Pareto statistics for the size of spatial colonies. These results were supported by cellular automata simulations that yield robust criticality for endogenous pattern formation based on positive feedback. We show that this self-organized criticality is a manifestation of the law of proportionate effect: mapping the stochastic model to a Markov birth-death process, the transition rates are shown to scale linearly with cluster size. This mapping provides a connection between patch statistics and the dynamics of the ecosystem; the "first passage time" for different colonies emerges as a powerful tool that discriminates between endogenous and exogenous clustering mechanisms. Imminent catastrophic shifts (like desertification) manifest themselves in a drastic change of the stability properties of spatial colonies, as the chance of a cluster disappearing depends logarithmically, rather than linearly, on its size. [1] Scanlon et al., Nature 449, 209-212 (2007). [2] Kefi et al., Nature 449, 213-217 (2007). [3] Sole R., Nature 449, p. 151 (2007). [4] Vandermeer et al., Nature 451, p. 457 (2008).
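The birth-death mapping with size-proportional rates can be sketched directly: a cluster grows or shrinks by one at rates scaling linearly with its size, and its "first passage time" to extinction is the simulated quantity. The rates, initial size, and horizon below are hypothetical illustrative values, not the paper's calibration.

```python
import random

random.seed(5)

def first_passage_to_zero(n0, birth=1.0, death=1.05, max_events=100000):
    """Events until a cluster of initial size n0 vanishes in a Markov
    birth-death chain whose rates scale linearly with cluster size
    (proportionate effect), or None if it survives the whole run."""
    n = n0
    for event in range(max_events):
        if n == 0:
            return event
        total = (birth + death) * n          # linear size dependence
        if random.random() < birth * n / total:
            n += 1
        else:
            n -= 1
    return None

# With a slight death bias, extinction is near-certain; the distribution of
# lifetimes is the "first passage time" statistic discussed above.
lifetimes = [first_passage_to_zero(10) for _ in range(200)]
```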
A statistical model for interpreting computerized dynamic posturography data
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.
2002-01-01
Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
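The mixed discrete-continuous response can be illustrated with a toy simulation of the latent-variable idea: a latent equilibrium score always exists, and a fall (observed ES = 0) occurs with a probability depending on its realized value. The Gaussian latent distribution, the logistic fall model, and every parameter below are assumptions for illustration, not the paper's fitted model.

```python
import math
import random

random.seed(1)

def simulate_trial(mu=75.0, sigma=12.0, k=0.15):
    """Toy latent-ES trial: the latent score always exists, but a fall
    (observed ES = 0) occurs with probability rising as the score drops.
    All parameter values are hypothetical."""
    latent = min(100.0, max(0.0, random.gauss(mu, sigma)))
    p_fall = 1.0 / (1.0 + math.exp(k * (latent - 40.0)))  # logistic in latent ES
    return 0.0 if random.random() < p_fall else latent

scores = [simulate_trial() for _ in range(1000)]
falls = sum(1 for s in scores if s == 0.0)
```

The resulting sample mixes a point mass at zero with a skewed continuous part, which is exactly why standard regression on raw ESs misleads.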
Statistical Characterization of Multi-Phase Flow by Dynamic Tomography
NASA Astrophysics Data System (ADS)
Tillack, Gerd-Rüdiger; Samadurau, Ulazimir; Artemiev, Valentin; Naumov, Alexander
2003-03-01
The paper presents a special reconstruction algorithm that is capable of monitoring density differences in multi-phase flows. The flow cross section is represented as a discrete dynamic random field. A fixed gray value is assigned to each flow phase, characterizing the material property of the phase. The image model is given by a set of non-linear stochastic difference equations. The corresponding inversion task is not accessible to common tomographic techniques applying reconstruction algorithms like filtered backprojection or the algebraic reconstruction technique (ART). The developed algorithm is based on the Kalman filter technique adapted to non-linear phenomena. The average velocity distribution, together with the corresponding covariance matrix of the liquid flow through a pipe, serves as prior information in the statistical sense. To overcome the non-linearity in the process model as well as in the measurement model, the statistical linearization technique is applied. Moreover, the Riccati equation, giving the error covariance matrix, and the equation for the optimal gain coefficients can be solved in advance and later used in the filter equation. It turns out that the resulting reconstruction or filter algorithm is recursive, i.e. it yields the quasi-optimal solution to the formulated inverse problem at every reconstruction step by successively accounting for the new information collected in the projections. The applicability of the developed algorithm is discussed in terms of characterizing or monitoring a multi-phase flow in a pipe.
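After statistical linearization, such a filter reduces at each step to the standard Kalman predict/update cycle. The scalar sketch below shows that generic core only; the state model, measurement model, and noise values are textbook placeholders, not the authors' flow-specific formulation.

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.1):
    """One predict/update cycle of a scalar linear Kalman filter:
    x is the state estimate, P its error variance, z the new measurement."""
    # Predict
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update with measurement z
    K = P_pred * H / (H * P_pred * H + R)   # optimal gain
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95]:            # illustrative measurements
    x, P = kalman_step(x, P, z)
```

Because the gain and covariance recursions do not depend on the measurements themselves, they can be precomputed, which is the speedup the abstract notes.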
The role of the medical librarian in SDI systems.
Garfield, E
1969-10-01
Many designers of ongoing selective dissemination systems assume that the librarian can be omitted from active participation in the execution of the master plan. ISI's four years of experience with the ASCA(R) service have shown that librarians must be an integral part of the system and engage in an active dialogue between users and the machine. Specific examples of how librarians can best serve the information needs of scientists using SDI systems are examined. It is the basic contention of this paper that the librarian should serve as an intermediary between users and the numerous new information media. In this manner the librarian can filter and translate the requirements of individual scientists to conform with the inherent limitations of all machine systems while exploiting their capabilities to the fullest. PMID:5823506
Statistical and dynamical remastering of classic exoplanet systems
NASA Astrophysics Data System (ADS)
Nelson, Benjamin Earl
The most powerful constraints on planet formation will come from characterizing the dynamical state of complex multi-planet systems. Unfortunately, with that complexity comes a number of factors that make analyzing these systems a computationally challenging endeavor: the sheer number of model parameters, a wonky-shaped posterior distribution, and hundreds to thousands of time series measurements. In this dissertation, I will review our efforts to improve the statistical analyses of radial velocity (RV) data and their applications to some renowned, dynamically complex exoplanet systems. In the first project (Chapters 2 and 4), we develop a differential evolution Markov chain Monte Carlo (RUN DMC) algorithm to tackle the aforementioned difficult aspects of data analysis. We test the robustness of the algorithm with regard to the number of modeled planets (model dimensionality) and increasing dynamical strength. We apply RUN DMC to a couple of classic multi-planet systems and one highly debated system from radial velocity surveys. In the second project (Chapter 5), we analyze RV data of 55 Cancri, a wide binary system known to harbor five planets orbiting the primary. We find the inner-most planet "e" must be coplanar to within 40 degrees of the outer planets, otherwise Kozai-like perturbations will cause the planet to enter the stellar photosphere through its periastron passage. We find the orbits of planets "b" and "c" are apsidally aligned and librating with low to moderate amplitude (50 +6/-10 degrees), but they are not orbiting in a mean-motion resonance. In the third project (Chapters 3, 4, 6), we analyze RV data of Gliese 876, a four-planet system with three planets participating in a multi-body resonance, i.e. a Laplace resonance. From a combined observational and statistical analysis computing Bayes factors, we find a four-planet model is favored over one with three planets.
Conditioned on this preferred model, we meaningfully constrain the three-dimensional orbital architecture of all the planets orbiting Gliese 876 based on the radial velocity data alone. By demanding orbital stability, we find the resonant planets have low mutual inclinations phi, so they must be roughly coplanar (phi_cb = 1.41 (+0.62/-0.57) degrees and phi_be = 3.87 (+1.99/-1.86) degrees). The three-dimensional Laplace argument librates chaotically with an amplitude of 50.5 (+7.9/-10.0) degrees, indicating significant past disk migration and ensuring long-term stability. In the final project (Chapter 7), we analyze the RV data for nu Octantis, a closely separated binary with an alleged planet orbiting interior and retrograde to the binary. Preliminary results place very tight constraints on the planet-binary mutual inclination, but no model is dynamically stable beyond 10^5 years. These empirically derived models motivate the need for more sophisticated algorithms to analyze exoplanet data and will provide new challenges for planet formation models.
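The core move of a differential-evolution MCMC sampler is to propose each chain's step along the difference of two other randomly chosen chains, so the proposal scale adapts to the posterior's shape. The sketch below is the generic algorithm applied to a one-dimensional standard-normal target; it is an illustration of the idea behind samplers like RUN DMC, not the dissertation's implementation.

```python
import math
import random

random.seed(6)

def log_target(x):
    """Toy target: log-density of a standard normal (up to a constant)."""
    return -0.5 * x * x

def de_mc_sweep(chains, gamma=0.8, eps=1e-4):
    """One differential-evolution MCMC sweep: propose chain i's move along
    the difference of two other chains, accept by the Metropolis rule."""
    for i in range(len(chains)):
        a, b = random.sample([j for j in range(len(chains)) if j != i], 2)
        proposal = chains[i] + gamma * (chains[a] - chains[b]) \
                   + random.gauss(0.0, eps)
        if math.log(random.random()) < log_target(proposal) - log_target(chains[i]):
            chains[i] = proposal
    return chains

chains = [random.uniform(-5, 5) for _ in range(10)]   # over-dispersed start
for _ in range(2000):
    de_mc_sweep(chains)
```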
Characterizing Uncertainties in Hydrologic Extremes: Statistical vs. Dynamical Downscaling
NASA Astrophysics Data System (ADS)
Mauger, G. S.; Salathe, E. P., Jr.
2013-12-01
Numerous agencies are now charged with considering the impacts of climate change in management decisions, both from the standpoint of adapting to changing conditions and minimizing emissions of greenhouse gases. These decisions require robust projections of change and defensible estimates of their uncertainty. We present work that is specifically focused on characterizing the uncertainty in projections of hydrologic extremes. Much recent work has been devoted to characterizing the uncertainty in hydrologic projections due to differences in downscaling methodology (e.g., Abatzoglou and Brown, 2012; Bürger et al., 2012; Rasmussen et al., 2011; Wetterhall et al., 2012) and among hydrologic models (e.g., Bennett et al., 2012; Clark et al., 2008; Fenicia et al., 2008; Smith and Marshall, 2010; Vano et al., 2012). These have established a basis for such analyses, but have generally focused on the implications for monthly and annual flows rather than flow extremes. In addition, few of these have focused on the Pacific Northwest. In this work we assess the uncertainty in projected changes to hydrologic extremes associated with dynamical vs. statistical downscaling. The analysis is focused on 3 distinct watersheds within the Pacific Northwest - the Skagit, Green, and Willamette river basins. Results highlight the sensitivity of flood projections to downscaling approach and hydrologic model assumptions. Sensitivities are characterized as a function of geographic location, hydrologic regime, and climate - identifying circumstances under which projections are reliable and others in which answers differ markedly based on methodology. For example, one notable result is that dynamically downscaled projections appear to refute the assumed relationship between watershed type (snow-dominant vs. rain-dominant) and projected changes to flood risk - currently considered a key indicator of future flood risk.
Results presented here provide key information for decision-making as well as for prioritizing future impacts research.
Modeling Insurgent Dynamics Including Heterogeneity. A Statistical Physics Approach
NASA Astrophysics Data System (ADS)
Johnson, Neil F.; Manrique, Pedro; Hui, Pak Ming
2013-05-01
Despite the myriad complexities inherent in human conflict, a common pattern has been identified across a wide range of modern insurgencies and terrorist campaigns involving the severity of individual events, namely an approximate power-law x^(-α) with exponent α ≈ 2.5. We recently proposed a simple toy model to explain this finding, built around the reported loose and transient nature of operational cells of insurgents or terrorists. Although it reproduces the 2.5 power-law, this toy model assumes every actor is identical. Here we generalize this toy model to incorporate individual heterogeneity while retaining the model's analytic solvability. In the case of kinship or team rules guiding the cell dynamics, we find that this 2.5 analytic result persists; however, an interesting new phase transition emerges whereby this cell distribution undergoes a transition to a phase in which the individuals become isolated and hence all the cells have spontaneously disintegrated. Apart from extending our understanding of the empirical 2.5 result for insurgencies and terrorism, this work illustrates how other statistical physics models of human grouping might usefully be generalized in order to explore the effect of diverse human social, cultural or behavioral traits.
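The empirical exponent in such studies is usually recovered from event-severity data with the continuous maximum-likelihood estimator α̂ = 1 + n / Σ ln(x_i/x_min). The sketch below checks the estimator on synthetic power-law data with the abstract's α = 2.5; the sampler and sample size are illustrative choices, not the paper's data.

```python
import math
import random

random.seed(0)

def sample_power_law(alpha, x_min, n):
    """Inverse-transform sampling from p(x) ~ x^(-alpha) for x >= x_min."""
    return [x_min * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

def mle_exponent(xs, x_min):
    """Continuous maximum-likelihood estimate of the power-law exponent."""
    return 1.0 + len(xs) / sum(math.log(x / x_min) for x in xs)

events = sample_power_law(2.5, 1.0, 50000)   # synthetic event severities
alpha_hat = mle_exponent(events, 1.0)        # should recover ~2.5
```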
A Statistical Model for In Vivo Neuronal Dynamics
Surace, Simone Carlo; Pfister, Jean-Pascal
2015-01-01
Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model as well as simplified neuron models such as the class of integrate-and-fire models relate the input current to the membrane potential of the neuron. Those types of models have been extensively fitted to in vitro data where the input current is controlled. Those models are however of little use when it comes to characterize intracellular in vivo recordings since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as the spiking history. We first show that the model has a rich dynamical repertoire since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptations as well as arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. We finally show that this model can be used to characterize and therefore precisely compare various intracellular in vivo recordings from different animals and experimental conditions. PMID:26571371
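The model class described above can be caricatured in a few lines: a Gaussian (here Ornstein-Uhlenbeck) subthreshold potential, plus spikes emitted with an intensity that depends nonlinearly on the potential. Everything below (the OU choice, the exponential intensity, the crude reset, all parameter values) is a hypothetical toy, not the paper's fitted model.

```python
import math
import random

random.seed(2)

# Toy sketch: OU process stands in for the Gaussian subthreshold potential;
# spikes are emitted with intensity growing exponentially with the potential.
dt, tau, mu, sigma = 0.001, 0.02, -60.0, 2.0   # seconds, seconds, mV, mV
lam0, beta, v0 = 5.0, 0.5, -55.0               # Hz, 1/mV, mV
v, spikes = mu, []
for step in range(20000):                      # 20 s of simulated time
    v += dt / tau * (mu - v) + sigma * math.sqrt(dt / tau) * random.gauss(0, 1)
    lam = lam0 * math.exp(beta * (v - v0))     # spike intensity (Hz)
    if random.random() < lam * dt:
        spikes.append(step * dt)
        v = mu                                 # crude reset after a spike
```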
Statistical mechanics and dynamics of two supported stacked lipid bilayers.
Manghi, Manoel; Destainville, Nicolas
2010-03-16
The statistical physics and dynamics of double supported bilayers are studied theoretically. The main goal in designing double supported lipid bilayers is to obtain model systems of biomembranes: the upper bilayer is meant to be almost freely floating, the substrate being screened by the lower bilayer. The fluctuation-induced repulsion between membranes and between the lower membrane and the wall are explicitly taken into account using a Gaussian variational approach. It is shown that the variational parameters, the "effective" adsorption strength, and the average distance to the substrate, depend strongly on temperature and membrane elastic moduli, the bending rigidity, and the microscopic surface tension, which is a signature of the crucial role played by membrane fluctuations. The range of stability of these supported membranes is studied, showing a complex dependence on bare adsorption strengths. In particular, the experimental conditions of having an upper membrane slightly perturbed by the lower one and still bound to the surface are found. Included in the theoretical calculation of the damping rates associated with membrane normal modes are hydrodynamic friction by the wall and hydrodynamic interactions between both membranes. PMID:20000797
NASA Technical Reports Server (NTRS)
Schweikhard, W. G.; Chen, Y. S.
1986-01-01
The Melick method of inlet flow dynamic distortion prediction by statistical means is outlined. A hypothetical vortex model is used as the basis for the mathematical formulations. The main variables are identified by matching the theoretical total pressure rms ratio with the measured total pressure rms ratio. Data comparisons, using the HiMAT inlet test data set, indicate satisfactory prediction of the dynamic peak distortion for cases with boundary layer control device vortex generators. A method for dynamic probe selection was developed. The validity of the probe selection criteria is demonstrated by comparing the reduced-probe predictions with the 40-probe predictions. It is indicated that the number of dynamic probes can be reduced to as few as two and still retain good accuracy.
NASA Astrophysics Data System (ADS)
Potirakis, Stelios M.; Zitis, Pavlos I.; Eftaxias, Konstantinos
2013-07-01
The field of study of complex systems considers that the dynamics of complex systems are founded on universal principles that may be used to describe a great variety of scientific and technological approaches of different types of natural, artificial, and social systems. Several authors have suggested that earthquake dynamics and the dynamics of economic (financial) systems can be analyzed within similar mathematical frameworks. We apply concepts of the nonextensive statistical physics, on time-series data of observable manifestations of the underlying complex processes ending up with these different extreme events, in order to support the suggestion that a dynamical analogy exists between a financial crisis (in the form of share or index price collapse) and a single earthquake. We also investigate the existence of such an analogy by means of scale-free statistics (the Gutenberg-Richter distribution of event sizes). We show that the populations of: (i) fracto-electromagnetic events rooted in the activation of a single fault, emerging prior to a significant earthquake, (ii) the trade volume events of different shares/economic indices, prior to a collapse, and (iii) the price fluctuation (considered as the difference of maximum minus minimum price within a day) events of different shares/economic indices, prior to a collapse, follow both the traditional Gutenberg-Richter law as well as a nonextensive model for earthquake dynamics, with similar parameter values. The obtained results imply the existence of a dynamic analogy between earthquakes and economic crises, which moreover follow the dynamics of seizures, magnetic storms and solar flares.
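The Gutenberg-Richter law used in such comparisons, log10 N(≥M) = a − bM, is typically fitted with Aki's maximum-likelihood estimator b = log10(e) / (⟨M⟩ − M_min). The sketch below checks that estimator on synthetic magnitudes; the catalog is simulated, and nothing here reproduces the paper's financial or electromagnetic datasets.

```python
import math
import random

random.seed(3)

def gr_b_value(mags, m_min):
    """Aki's maximum-likelihood estimate of the Gutenberg-Richter b-value
    from magnitudes at or above the completeness magnitude m_min."""
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m_min)

# Synthetic catalog: G-R with b = 1 means magnitudes above m_min are
# exponentially distributed with rate b * ln(10).
mags = [1.0 + random.expovariate(math.log(10)) for _ in range(20000)]
b_hat = gr_b_value(mags, 1.0)   # should recover ~1.0
```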
Statistical predictability in the atmosphere and other dynamical systems
NASA Astrophysics Data System (ADS)
Kleeman, Richard
2007-06-01
Ensemble predictions are an integral part of routine weather and climate prediction because of the sensitivity of such projections to the specification of the initial state. In many discussions it is tacitly assumed that ensembles are equivalent to probability distribution functions (p.d.f.s) of the random variables of interest. In general for vector valued random variables this is not the case (not even approximately) since practical ensembles do not adequately sample the high dimensional state spaces of dynamical systems of practical relevance. In this contribution we place these ideas on a rigorous footing using concepts derived from Bayesian analysis and information theory. In particular we show that ensembles must imply a coarse graining of state space and that this coarse graining implies loss of information relative to the converged p.d.f. To cope with the needed coarse graining in the context of practical applications, we introduce a hierarchy of entropic functionals. These measure the information content of multivariate marginal distributions of increasing order. For fully converged distributions (i.e. p.d.f.s) these functionals form a strictly ordered hierarchy. As one proceeds up the hierarchy with ensembles instead however, increasingly coarser partitions are required by the functionals which implies that the strict ordering of the p.d.f. based functionals breaks down. This breakdown is symptomatic of the necessarily limited sampling by practical ensembles of high dimensional state spaces and is unavoidable for most practical applications. In the second part of the paper the theoretical machinery developed above is applied to the practical problem of mid-latitude weather prediction. 
We show that the functionals derived in the first part all decline essentially linearly with time and there appears in fact to be a fairly well defined cut off time (roughly 45 days for the model analyzed) beyond which initial condition information is unimportant to statistical prediction.
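The coarse-graining point can be illustrated with plain Shannon entropy: a finite ensemble can populate a coarse partition of state space, but on a fine partition its entropy saturates near log(ensemble size) rather than approaching the converged p.d.f.'s value. This toy calculation is only an illustration of that information loss, not the paper's hierarchy of entropic functionals.

```python
import math
import random

random.seed(4)

def partition_entropy(samples, bins):
    """Shannon entropy (nats) of an ensemble coarse-grained onto
    equal-width bins of [0, 1)."""
    counts = [0] * bins
    for s in samples:
        counts[min(bins - 1, int(s * bins))] += 1
    n = len(samples)
    return -sum(c / n * math.log(c / n) for c in counts if c > 0)

ensemble = [random.random() for _ in range(200)]   # a small practical ensemble
coarse = partition_entropy(ensemble, 8)            # bounded by log(8)
fine = partition_entropy(ensemble, 10000)          # saturates near log(200)
```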
Comparison of grain sorghum, soybean, and cotton production under spray, LEPA, and SDI
Technology Transfer Automated Retrieval System (TEKTRAN)
Crop production was compared under subsurface drip irrigation (SDI), low energy precision applicators (LEPA), low elevation spray applicators (LESA), and mid elevation spray applicators (MESA) at the USDA-Agricultural Research Service Conservation and Production Research Laboratory, Bushland, Tex., ...
Crystallization and preliminary X-ray studies of SdiA from Escherichia coli
Wu, Chunai; Lokanath, Neratur K.; Kim, Dong Young; Nguyen, Lan Dao Ngoc; Kim, Kyeong Kyu
2008-01-01
E. coli SdiA was overexpressed, purified and crystallized. The crystals belonged to the hexagonal space group P6₁22 or P6₅22 and diffracted to 2.7 Å resolution. SdiA enhances cell division by regulating the ftsQAZ operon in Escherichia coli as a transcription activator. In addition, SdiA is suggested to play a role in detecting quorum signals that emanate from other species. It is therefore a homologue of LuxR, a cognate quorum-sensing receptor that recognizes a quorum signal and activates the quorum responses. To elucidate the role of SdiA and its functional and structural relationship to LuxR, structural studies were performed on E. coli SdiA. Recombinant SdiA was overexpressed, purified and crystallized at 287 K using the hanging-drop vapour-diffusion method. X-ray diffraction data from a native crystal were collected with 99.7% completeness to 2.7 Å resolution with an R_merge of 6.0%. The crystals belong to the hexagonal space group P6₁22 or P6₅22, with unit-cell parameters a = b = 130.47, c = 125.23 Å.
ERIC Educational Resources Information Center
Lee, Hollylynne Stohl; Kersaint, Gladis; Harper, Suzanne; Driskell, Shannon O.; Leatham, Keith R.
2012-01-01
This study examined a random stratified sample (n = 62) of prospective teachers' work across eight institutions on three tasks that utilized dynamic statistical software. The authors considered how teachers utilized their statistical knowledge and technological statistical knowledge to engage in cycles of investigation. This paper characterizes…
Examining rainfall and cholera dynamics in Haiti using statistical and dynamic modeling approaches.
Eisenberg, Marisa C; Kujbida, Gregory; Tuite, Ashleigh R; Fisman, David N; Tien, Joseph H
2013-12-01
Haiti has been in the midst of a cholera epidemic since October 2010. Rainfall is thought to be associated with cholera here, but this relationship has only begun to be quantitatively examined. In this paper, we quantitatively examine the link between rainfall and cholera in Haiti for several different settings (including urban, rural, and displaced person camps) and spatial scales, using a combination of statistical and dynamic models. Statistical analysis of the lagged relationship between rainfall and cholera incidence was conducted using case crossover analysis and distributed lag nonlinear models. Dynamic models consisted of compartmental differential equation models including direct (fast) and indirect (delayed) disease transmission, where indirect transmission was forced by empirical rainfall data. Data sources include cholera case and hospitalization time series from the Haitian Ministry of Public Health, the United Nations Water, Sanitation and Health Cluster, International Organization for Migration, and Hôpital Albert Schweitzer. Rainfall data was obtained from rain gauges from the U.S. Geological Survey and Haiti Regeneration Initiative, and remote sensing rainfall data from the National Aeronautics and Space Administration Tropical Rainfall Measuring Mission. A strong relationship between rainfall and cholera was found for all spatial scales and locations examined. Increased rainfall was significantly correlated with increased cholera incidence 4-7 days later. Forcing the dynamic models with rainfall data resulted in good fits to the cholera case data, and rainfall-based predictions from the dynamic models closely matched observed cholera cases. These models provide a tool for planning and managing the epidemic as it continues. PMID:24267876
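The "dynamic model" side of such an analysis can be sketched as a compartmental SIWR-type system in which rainfall amplifies the indirect (water-borne) transmission term. The Euler integration below is a toy in the spirit described; all compartments, rates, and the rain series are hypothetical, not the fitted Haiti model.

```python
def run_siwr(rain, days, dt=0.1):
    """Euler integration of a rainfall-forced SIWR-type model: susceptible S,
    infectious I, recovered R, plus a water compartment W whose transmission
    term is amplified by rainfall. All rates are hypothetical."""
    S, I, R, W = 0.999, 0.001, 0.0, 0.0
    beta_i, beta_w, gamma, xi, delta = 0.25, 0.5, 0.2, 0.1, 0.3
    cases = []
    for step in range(int(days / dt)):
        forcing = 1.0 + rain[int(step * dt)]   # rainfall boosts indirect transmission
        new_inf = (beta_i * I + forcing * beta_w * W) * S
        dS = -new_inf
        dI = new_inf - gamma * I
        dR = gamma * I
        dW = xi * I - delta * W
        S += dt * dS
        I += dt * dI
        R += dt * dR
        W += dt * dW
        cases.append(new_inf)
    return cases, S + I + R

rain = [0.0] * 10 + [2.0] * 5 + [0.0] * 15    # a 5-day rain pulse from day 10
incidence, total = run_siwr(rain, days=30)    # total population is conserved
```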
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
Material Phase Causality or a Dynamics-Statistical Interpretation of Quantum Mechanics
Koprinkov, I. G.
2010-11-25
The internal phase dynamics of a quantum system interacting with an electromagnetic field is revealed in detail. Theoretical and experimental evidence of a causal relation of the phase of the wave function to the dynamics of the quantum system is presented systematically for the first time. A dynamics-statistical interpretation of quantum mechanics is introduced.
SdiA Aids Enterohemorrhagic Escherichia coli Carriage by Cattle Fed a Forage or Grain Diet
Sheng, Haiqing; Nguyen, Y. N.
2013-01-01
Enterohemorrhagic Escherichia coli (EHEC) causes hemorrhagic colitis and life-threatening complications. The main reservoirs for EHEC are healthy ruminants. We reported that SdiA senses acyl homoserine lactones (AHLs) in the bovine rumen to activate expression of the glutamate-dependent acid resistance (gad) genes, priming EHEC's acid resistance before the bacteria pass into the acidic abomasum. Conversely, SdiA represses expression of the locus of enterocyte effacement (LEE) genes, whose expression is not required for bacterial survival in the rumen but is necessary for efficient colonization at the rectoanal junction (RAJ) mucosa. Our previous studies show that SdiA-dependent regulation was necessary for efficient EHEC colonization of cattle fed a grain diet. Here, we compared the SdiA role in EHEC colonization of cattle fed a forage hay diet. We detected AHLs in the rumen of cattle fed a hay diet, and these AHLs activated gad gene expression in an SdiA-dependent manner. The rumen fluid and fecal samples from hay-fed cattle were near neutrality, while the same digesta samples from grain-fed animals were acidic. Cattle fed either grain or hay and challenged with EHEC orally carried the bacteria similarly. EHEC was cleared from the rumen within days and from the RAJ mucosa after approximately one month. In competition trials, where animals were challenged with both wild-type and SdiA deletion mutant bacteria, diet did not affect the outcome that the wild-type strain was better able to persist and colonize. However, the wild-type strain had a greater advantage over the SdiA deletion mutant at the RAJ mucosa among cattle fed the grain diet. PMID:23836826
Statistical Anomaly Detection for Monitoring of Human Dynamics
NASA Astrophysics Data System (ADS)
Kamiya, K.; Fuse, T.
2015-05-01
Understanding of human dynamics has drawn attention in various areas. Due to the widespread adoption of positioning technologies that use GPS or public Wi-Fi, location information can be obtained with high spatio-temporal resolution as well as at low cost. By collecting sets of individual location information in real time, monitoring of human dynamics has recently come to be considered possible and is expected to lead to dynamic traffic control in the future. Although this monitoring focuses on detecting anomalous states of human dynamics, anomaly detection methods have been developed ad hoc and not fully systematized. This research aims to define the anomaly detection problem of human dynamics monitoring with gridded population data and to develop an anomaly detection method based on that definition. Based on a comprehensive review, we discuss the characteristics of anomaly detection for human dynamics monitoring and categorize our problem as a semi-supervised anomaly detection problem of detecting contextual anomalies behind time-series data. We developed an anomaly detection method based on a sticky HDP-HMM, which is able to estimate the number of hidden states according to the input data. Results of an experiment with synthetic data showed that our proposed method has good fundamental performance with respect to the detection rate. In an experiment with real gridded population data, an anomaly was detected when and where an actual social event had occurred.
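A contextual anomaly in a population time series is one that is extreme only relative to its context (e.g. this hour of day at this grid cell). A far simpler stand-in for the sticky HDP-HMM used in the study is a per-phase z-score against the cell's own history, sketched below with made-up counts; it only illustrates what "contextual" means here.

```python
import statistics

def contextual_anomalies(series, period, z_thresh=3.0):
    """Flag contextual anomalies in a periodic count series: each observation
    is compared only against past values sharing its phase of the cycle."""
    flags = []
    for i, x in enumerate(series):
        history = series[i % period:i:period]   # earlier same-phase values
        if len(history) < 3:
            flags.append(False)                  # not enough context yet
            continue
        mu = statistics.mean(history)
        sd = statistics.pstdev(history) or 1.0
        flags.append(abs(x - mu) / sd > z_thresh)
    return flags

# Hypothetical counts alternating high (daytime) and low (nighttime);
# the 300 is anomalous for a daytime slot even though a raw threshold
# tuned to daytime values might miss it.
counts = [100, 20, 95, 22, 105, 18, 98, 21, 300, 19]
print(contextual_anomalies(counts, period=2))
```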
Measures of trajectory ensemble disparity in nonequilibrium statistical dynamics
Crooks, Gavin; Sivak, David
2011-06-03
Many interesting divergence measures between conjugate ensembles of nonequilibrium trajectories can be experimentally determined from the work distribution of the process. Herein, we review the statistical and physical significance of several of these measures, in particular the relative entropy (dissipation), Jeffreys divergence (hysteresis), Jensen-Shannon divergence (time-asymmetry), Chernoff divergence (work cumulant generating function), and Rényi divergence.
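The first three measures named above can be estimated directly from binned forward and reverse work samples. A minimal sketch with toy histograms (the binning and the example distributions are illustrative assumptions, not the paper's formalism):

```python
import math

def kl(p, q):
    """Discrete relative entropy D(p||q); assumes q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jeffreys(p, q):
    """Symmetrized KL divergence -- the 'hysteresis' measure."""
    return kl(p, q) + kl(q, p)

def jensen_shannon(p, q):
    """Divergence bounded by ln 2 -- the 'time-asymmetry' measure."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy forward and reverse work histograms on a shared set of bins.
p_fwd = [0.1, 0.2, 0.4, 0.2, 0.1]
p_rev = [0.3, 0.3, 0.2, 0.1, 0.1]

print(kl(p_fwd, p_rev), jeffreys(p_fwd, p_rev), jensen_shannon(p_fwd, p_rev))
```

Unlike the relative entropy, the Jensen-Shannon divergence stays finite even when the two histograms have disjoint support, which is why it is convenient for empirical work distributions.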
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
Statistical analysis of nanoparticle dosing in a dynamic cellular system
NASA Astrophysics Data System (ADS)
Summers, Huw D.; Rees, Paul; Holton, Mark D.; Rowan Brown, M.; Chappell, Sally C.; Smith, Paul J.; Errington, Rachel J.
2011-03-01
The delivery of nanoparticles into cells is important in therapeutic applications and in nanotoxicology. Nanoparticles are generally targeted to receptors on the surfaces of cells and internalized into endosomes by endocytosis, but the kinetics of the process and the way in which cell division redistributes the particles remain unclear. Here we show that the chance of success or failure of nanoparticle uptake and inheritance is random. Statistical analysis of nanoparticle-loaded endosomes indicates that particle capture is described by an over-dispersed Poisson probability distribution that is consistent with heterogeneous adsorption and internalization. Partitioning of nanoparticles in cell division is random and asymmetric, following a binomial distribution with mean probability of 0.52-0.72. These results show that cellular targeting of nanoparticles is inherently imprecise due to the randomness of nature at the molecular scale, and the statistical framework offers a way to predict nanoparticle dosage for therapy and for the study of nanotoxins.
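The two distributional claims above can be reproduced in a toy simulation: a mixed Poisson (each cell draws its own uptake rate) is over-dispersed, and division splits a load binomially. All parameters here are illustrative, not the paper's fitted values:

```python
import math, random, statistics

random.seed(1)

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the modest rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Heterogeneous uptake: each cell draws its own rate, so the pooled counts
# form an over-dispersed (mixed) Poisson with variance exceeding the mean.
loads = [poisson(random.uniform(5, 35)) for _ in range(5000)]
mean_load = statistics.mean(loads)
var_load = statistics.variance(loads)

def divide(n, p=0.6):
    """Binomial partitioning of n particles at division, mean probability p."""
    d1 = sum(1 for _ in range(n) if random.random() < p)
    return d1, n - d1

d1, d2 = divide(1000)
print(mean_load, var_load, d1, d2)
```

For a pure Poisson the variance equals the mean; the cell-to-cell rate heterogeneity is what pushes the variance above it, matching the over-dispersion reported in the abstract.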
Identification of sdiA-regulated genes in a mouse commensal strain of Enterobacter cloacae
Sabag-Daigle, Anice; Dyszel, Jessica L.; Gonzalez, Juan F.; Ali, Mohamed M.; Ahmer, Brian M. M.
2015-01-01
Many bacteria determine their population density using quorum sensing. The most intensively studied mechanism of quorum sensing utilizes proteins of the LuxI family to synthesize a signaling molecule of the acylhomoserine lactone (AHL) type, and a protein of the LuxR family to bind AHL and regulate transcription. Genes regulated by quorum sensing often encode functions that are most effective when a group of bacteria are working cooperatively (e.g., luminescence, biofilm formation, host interactions). Bacteria in the Escherichia, Salmonella, Klebsiella, and Enterobacter genera do not encode an AHL synthase but they do encode an AHL receptor of the LuxR family, SdiA. Instead of detecting their own AHL synthesis, these organisms use SdiA to detect the AHLs synthesized by other bacterial species. In this study, we used a genetic screen to identify AHL-responsive genes in a commensal Enterobacter cloacae strain that was isolated from a laboratory mouse. The genes include a putative type VI secretion system, copA (a copper transporter), and fepE (extends O-antigen chain length). A new transposon mutagenesis strategy and suicide vectors were used to construct an sdiA mutant of E. cloacae. The AHL-responsiveness of all fusions was entirely sdiA-dependent, although some genes were regulated by sdiA in the absence of AHL. PMID:26075189
Intensity dynamics and statistical properties of random distributed feedback fiber laser.
Gorbunov, Oleg A; Sugavanam, Srikanth; Churkin, Dmitry V
2015-04-15
We present the first experimental investigation of the fast intensity dynamics of random distributed feedback (DFB) fiber lasers. We found that the laser dynamics are stochastic on short time scales and exhibit pronounced fluctuations, including the generation of extreme events. We also experimentally characterize the statistical properties of the radiation of random DFB fiber lasers, finding that they deviate from Gaussian and depend on the pump power. PMID:25872073
Viscoelastic effects in avalanche dynamics: a key to earthquake statistics.
Jagla, E A; Landes, François P; Rosso, Alberto
2014-05-01
In many complex systems a continuous input of energy over time can be suddenly relaxed in the form of avalanches. Conventional avalanche models disregard the possibility of internal dynamical effects in the interavalanche periods, and thus miss basic features observed in some real systems. We address this issue by studying a model with viscoelastic relaxation, showing how coherent oscillations of the stress field can emerge spontaneously. Remarkably, these oscillations generate avalanche patterns that are similar to those observed in seismic phenomena. PMID:24836251
Eddies in the Red Sea: A statistical and dynamical study
NASA Astrophysics Data System (ADS)
Zhan, Peng; Subramanian, Aneesh C.; Yao, Fengchao; Hoteit, Ibrahim
2014-06-01
Sea level anomaly (SLA) data spanning 1992-2012 were analyzed to study the statistical properties of eddies in the Red Sea. An algorithm that identifies winding angles was employed to detect 4998 eddies propagating along 938 unique eddy tracks. Statistics suggest that eddies are generated across the entire Red Sea but that they are prevalent in certain regions. A high number of eddies is found in the central basin between 18°N and 24°N. More than 87% of the detected eddies have a radius ranging from 50 to 135 km. Both the intensity and relative vorticity scale of these eddies decrease as the eddy radii increase. The averaged eddy lifespan is approximately 6 weeks. Anticyclonic eddies (AEs) and cyclonic eddies (CEs) have different deformation features, and those with stronger intensities are less deformed and more circular. Analysis of long-lived eddies suggests that they are likely to appear in the central basin, with AEs tending to move northward. In addition, their eddy kinetic energy (EKE) increases gradually throughout their lifespans. The annual cycles of CEs and AEs differ, although both exhibit significant seasonal cycles of intensity with the winter and summer peaks appearing in February and August, respectively. The seasonal cycle of EKE is negatively correlated with stratification but positively correlated with vertical shear of horizontal velocity and eddy growth rate, suggesting that the generation of baroclinic instability is responsible for the activities of eddies in the Red Sea.
An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models
ERIC Educational Resources Information Center
Prindle, John J.; McArdle, John J.
2012-01-01
This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…
NASA Astrophysics Data System (ADS)
Liang, Miaoling; Xie, Zhenghui
2008-07-01
Canopy interception of incident precipitation, as a critical component of a forest's water budget, can affect the amount of water available to the soil, and ultimately vegetation distribution and function. In this paper, a statistical-dynamic approach based on leaf area index and statistical canopy interception is used to parameterize the canopy interception process. The statistical-dynamic canopy interception scheme is implemented into the Community Land Model with dynamic global vegetation model (CLM-DGVM) to improve its dynamic vegetation simulation. The simulation for continental China by the land surface model with the new canopy interception scheme shows that the new scheme reasonably represents the precipitation intercepted by the canopy. Moreover, the new scheme enhances the water availability in the root zone for vegetation growth, especially in the densely vegetated and semi-arid areas, and improves the model's simulation of potential vegetation.
Dynamic Modelling and Statistical Analysis of Event Times
Peña, Edsel A.
2006-01-01
This review article provides an overview of recent work in the modelling and analysis of recurrent events arising in engineering, reliability, public health, biomedical, and other areas. Recurrent event modelling possesses unique facets making it different and more difficult to handle than single event settings. For instance, the impact of an increasing number of event occurrences needs to be taken into account, the effects of covariates should be considered, potential association among the inter-event times within a unit cannot be ignored, and the effects of performed interventions after each event occurrence need to be factored in. A recent general class of models for recurrent events which simultaneously accommodates these aspects is described. Statistical inference methods for this class of models are presented and illustrated through applications to real data sets. Some existing open research problems are described. PMID:17906740
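One facet of recurrent-event modelling mentioned above, the impact of an increasing number of event occurrences, can be illustrated with a toy counting process whose rate rises (here, capped) after each event. This is a sketch under assumed parameters, not the general model class of the review:

```python
import random, statistics

random.seed(2)

def simulate_unit(horizon=100.0, base_rate=0.05, boost=0.3, max_rate=0.5):
    """Recurrent events for one unit; each occurrence raises the event rate.

    The rate cap keeps the process from exploding within the horizon.
    """
    t, rate, times = 0.0, base_rate, []
    while True:
        t += random.expovariate(rate)  # exponential waiting time at current rate
        if t > horizon:
            return times
        times.append(t)
        rate = min(rate * (1 + boost), max_rate)

units = [simulate_unit() for _ in range(500)]
gaps = [b - a for ts in units for a, b in zip(ts, ts[1:])]
first = [ts[0] for ts in units if ts]
print(statistics.mean(gaps), statistics.mean(first))
```

Because every occurrence raises the rate, the inter-event times shrink relative to the time to the first event, which is exactly the kind of within-unit association a single-event analysis would miss.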
Statistical methodologies for the control of dynamic remapping
NASA Technical Reports Server (NTRS)
Saltz, J. H.; Nicol, D. M.
1986-01-01
Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping two qualitatively different kinds of problems. One problem assumes that performance deteriorates gradually; the other assumes that performance deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.
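The cost-benefit trade-off described above can be sketched with a toy threshold policy for the gradual-deterioration case: remap when the accumulated degradation justifies the one-off remapping cost over the remaining work. All parameters are hypothetical, and the policy is a simple heuristic, not one of the paper's optimal decision policies:

```python
import math

def run(steps=200, base_step_time=1.0, drift=0.02, remap_cost=10.0, policy=True):
    """Total runtime for a gradually deteriorating mapping.

    Per-step time grows by `drift` since the last (re)mapping; a remap costs
    `remap_cost` and resets the degradation.  All numbers are illustrative.
    """
    threshold = math.sqrt(2 * remap_cost * drift)  # optimal-interval heuristic
    total, degradation = 0.0, 0.0
    for step in range(steps):
        remaining = steps - step
        # Remap only if degradation warrants it AND enough work remains
        # for the one-off cost to pay off.
        if policy and degradation >= threshold and degradation * remaining > remap_cost:
            total += remap_cost
            degradation = 0.0
        total += base_step_time + degradation
        degradation += drift
    return total

print(run(policy=False), run(policy=True))
```

The threshold comes from minimizing cost per step over a fixed remap interval; remapping too eagerly pays the fixed cost too often, while never remapping lets the per-step penalty accumulate.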
NASA Astrophysics Data System (ADS)
Kinugawa, Kenichi; Nagao, Hidemi; Ohta, Koji
1999-07-01
A method is proposed for path integral centroid molecular dynamics (CMD) extended to Bose/Fermi statistics. It is based on the 'pseudo-Boltzmann' canonical partition function of quantum statistical mechanics. An extended technique of path integral molecular dynamics (PIMD) is further presented for the calculation of thermodynamic properties and centroid mean force of Bose/Fermi systems. Bosonic PIMD and CMD simulations have been performed for 4He and the ideal Bose gas, respectively. The remnant of the λ transition is observed for 4He, while Bose statistics causes a decay of the centroid velocity autocorrelation function of the ideal Bose gas on a nanosecond scale.
Human turnover dynamics during sleep: Statistical behavior and its modeling
NASA Astrophysics Data System (ADS)
Yoneyama, Mitsuru; Okuma, Yasuyuki; Utsumi, Hiroya; Terashi, Hiroo; Mitoma, Hiroshi
2014-03-01
Turnover is a typical intermittent body movement while asleep. Exploring its behavior may provide insights into the mechanisms and management of sleep. However, little is understood about the dynamic nature of turnover in healthy humans and how it can be modified in disease. Here we present a detailed analysis of turnover signals that are collected by accelerometry from healthy elderly subjects and age-matched patients with neurodegenerative disorders such as Parkinson's disease. In healthy subjects, the time intervals between consecutive turnover events exhibit a well-separated bimodal distribution with one mode at ⩽10 s and the other at ⩾100 s, whereas such bimodality tends to disappear in neurodegenerative patients. The discovery of bimodality and fine temporal structures (⩽10 s) is a contribution that is not revealed by conventional sleep recordings with less time resolution (≈30 s). Moreover, we estimate the scaling exponent of the interval fluctuations, which also shows a clear difference between healthy subjects and patients. We incorporate these experimental results into a computational model of human decision making. A decision is to be made at each simulation step between two choices: to keep on sleeping or to make a turnover, the selection of which is determined dynamically by comparing a pair of random numbers assigned to each choice. This decision is weighted by a single parameter that reflects the depth of sleep. The resulting simulated behavior accurately replicates many aspects of observed turnover patterns, including the appearance or disappearance of bimodality, and leads to several predictions, suggesting that the depth parameter may be useful as a quantitative measure for differentiating between normal and pathological sleep. These findings have significant clinical implications and may pave the way for the development of practical sleep assessment technologies.
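The decision model described above admits a very small sketch: at each step two random numbers are drawn, one weighted by a depth-of-sleep parameter, and a turnover occurs when the turnover draw wins. This is a simplified reading of the abstract, with hypothetical parameter values:

```python
import random, statistics

random.seed(3)

def turnover_intervals(depth, steps=100000):
    """Inter-turnover intervals from a two-choice decision model.

    At every step one random number is drawn for each choice ('keep sleeping'
    vs. 'turn over'); the sleep draw is weighted by `depth`, so deeper sleep
    makes turnovers rarer.
    """
    out, last = [], 0
    for t in range(steps):
        if random.random() > depth * random.random():  # turnover draw wins
            out.append(t - last)
            last = t
    return out

deep = turnover_intervals(depth=50.0)
shallow = turnover_intervals(depth=5.0)
print(statistics.mean(deep), statistics.mean(shallow))
```

With this weighting the per-step turnover probability is roughly 1/(2·depth), so the mean interval scales with the depth parameter, consistent with depth acting as a quantitative knob on turnover frequency.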
Statistical treatment of dynamical electron diffraction from growing surfaces
NASA Astrophysics Data System (ADS)
Dudarev, S. L.; Vvedensky, D. D.; Whelan, M. J.
1994-11-01
Statistical methods developed previously for the evaluation of the electrical conductivity of metals and the description of the propagation of waves through random media are applied to the problem of scattering of high-energy electrons from a rough growing surface of a crystal where the roughness is caused by local fluctuations of site occupation numbers occurring during the growth. We derive the relevant Dyson and Bethe-Salpeter equations and define the short-range order correlation functions that determine the behavior of the reflection high-energy electron diffraction (RHEED) intensities. To analyze the temporal evolution of these correlation functions, we employ an exactly solvable model of the local perfect layer growth [A. K. Myers-Beaghton and D. D. Vvedensky, J. Phys. A 22, L467 (1989)]. Our approach makes it possible to separate individual contributions of various processes that give rise to oscillations of the RHEED reflections. We found that provided that the Bragg conditions of incidence are satisfied, it is the diffuse scattering by the disordered surface layer which is largely responsible for oscillations of the RHEED intensities. The temporal evolution of the angular distribution of the diffusely scattered electrons exhibits the effect of enhancement of the intensity of the Kikuchi lines with increasing surface disorder, as was observed experimentally [J. Zhang et al., Appl. Phys. A 42, 317 (1987)]. An explanation of the origin of this phenomenon is given using the concept of the final-state standing wave pattern.
Algebraic Statistical Model for Biochemical Network Dynamics Inference
Linder, Daniel F.; Rempala, Grzegorz A.
2014-01-01
With modern molecular quantification methods, such as high-throughput sequencing, biologists may perform multiple complex experiments and collect longitudinal data on RNA and DNA concentrations. Such data may then be used to infer cellular-level interactions between the molecular entities of interest. One method which formalizes such inference is the stoichiometric algebraic statistical model (SASM) of [2], which allows one to analyze the so-called conic (or single-source) networks. Despite its intuitive appeal, up until now the SASM has been studied only heuristically on a few simple examples. The current paper provides a more formal mathematical treatment of the SASM, expanding the original model to a wider class of reaction systems decomposable into multiple conic subnetworks. In particular, it is proved here that on such networks the SASM enjoys the so-called sparsistency property, that is, it asymptotically (with the number of observed network trajectories) discards the false interactions by setting their reaction rates to zero. For illustration, we apply the extended SASM to in silico data from a generic decomposable network as well as to biological data from an experimental search for a possible transcription factor for the heat shock protein 70 (Hsp70) in the zebrafish retina. PMID:25525612
Statistical Physics Approaches to Respiratory Dynamics and Lung Structure
NASA Astrophysics Data System (ADS)
Suki, Bela
2004-03-01
The lung consists of a branching airway tree embedded in viscoelastic tissue and provides life-sustaining gas exchange to the body. In diseases, its structure is damaged and its function is compromised. We review two recent works about lung structure and dynamics and how they change in disease. 1) We introduced a new acoustic imaging approach to study airway structure. When airways in a collapsed lung are inflated, they pop open in avalanches. A single opening emits a sound package called crackle consisting of an initial spike (s) followed by ringing. The distribution n(s) of s follows a power law and the exponent of n(s) can be used to calculate the diameter ratio d defined as the ratio of the diameters of an airway to that of its parent averaged over all bifurcations. To test this method, we measured crackles in dogs, rabbits, rats and mice by inflating collapsed isolated lungs with air or helium while recording crackles with a microphone. In each species, n(s) follows a power law with an exponent that depends on species, but not on gas in agreement with theory. Values of d from crackles compare well with those calculated from morphometric data suggesting that this approach is suitable to study airway structure in disease. 2) Using novel experiments and computer models, we studied pulmonary emphysema which is caused by cigarette smoking. In emphysema, the elastic protein fibers of the tissue are actively remodeled by lung cells due to the chemicals present in smoke. We measured the mechanical properties of tissue sheets from normal and emphysematous lungs and imaged their structure, which appears as a heterogeneous hexagonal network of fibers. We found evidence that during uniaxial stretching, the collagen and elastin fibers in emphysematous tissue can fail at a critical stress generating holes of various sizes (h). We developed network models of the failure process. When the failure is governed by mechanical forces, the distribution n(h) of h is a power law which compares well with Computed Tomographic images of patients. These results suggest that the progressive nature of emphysema may be due to a complex breakdown process initiated by chemicals in the smoke and maintained by mechanical failure of the remodeled fiber network.
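The crackle analysis above rests on fitting a power-law exponent to spike sizes s. A hedged sketch of the standard continuous maximum-likelihood (Hill) estimator on synthetic power-law samples (a generic estimator, not the authors' pipeline):

```python
import math, random

random.seed(4)

def sample_power_law(alpha, s_min, n):
    """Draw n samples from p(s) ~ s^-alpha for s >= s_min, via inverse CDF."""
    return [s_min * (1 - random.random()) ** (-1 / (alpha - 1)) for _ in range(n)]

def hill_exponent(samples, s_min):
    """Continuous power-law MLE: alpha_hat = 1 + n / sum(ln(s_i / s_min))."""
    return 1 + len(samples) / sum(math.log(s / s_min) for s in samples)

spikes = sample_power_law(alpha=2.5, s_min=1.0, n=20000)
alpha_hat = hill_exponent(spikes, 1.0)
print(alpha_hat)
```

The MLE is generally preferred over fitting a straight line to a log-log histogram, whose slope estimate is biased by binning; either way, the recovered exponent is the quantity mapped to the diameter ratio d in the abstract.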
Kazumba, Shija; Gillerman, Leonid; DeMalach, Yoel; Oron, Gideon
2010-01-01
Scarcity of fresh high-quality water has heightened the importance of wastewater reuse, primarily in dry regions, together with improving its efficient use by implementing the Subsurface Drip Irrigation (SDI) method. Sustainable effluent reuse combines soil and plant aspects, along with the maintainability of the application system. In this study, field experiments were conducted for two years on the commercial Revivim and Mashabay-Sade farm (RMF) southeast of the City of Beer-Sheva, Israel. The purpose was to examine the response of alfalfa (Medicago sativa) as a perennial model crop to secondary domestic effluent application by means of an SDI system as compared with conventional overhead sprinkler irrigation. Emitters were installed at different depths and spacing. Similar amounts of effluent were applied to all plots during the experimental period. The results indicated that in all SDI treatments, the alfalfa yields were 11% to 25% higher than those obtained in sprinkler-irrigated plots, except the treatment in which the drip laterals were 200 cm apart. The average Water Use Efficiency (WUE) was better in all SDI treatments than in the sprinkler-irrigated plots. An economic assessment reveals the dependence of the net profit on the emitters' installation geometry, combined with the market return for alfalfa. PMID:20150698
Social Development in Hong Kong: Development Issues Identified by Social Development Index (SDI)
ERIC Educational Resources Information Center
Chua, Hoi-wai; Wong, Anthony K. W.; Shek, Daniel T. L.
2010-01-01
Surviving the aftermath of the Asian Financial Crisis and SARS in 2003, Hong Kong's economy has regained its momentum, and its economic growth in recent years has been quite remarkable. Nevertheless, as reflected by the Social Development Index (SDI), economic growth in Hong Kong does not seem to have benefited the people of the city at…
NASA Astrophysics Data System (ADS)
Eftaxias, Konstantinos; Minadakis, George; Potirakis, Stelios M.; Balasis, Georgios
2013-02-01
The field of complex systems holds that the dynamics of complex systems are founded on universal principles that may be used to describe a great variety of scientific and technological approaches to different types of natural, artificial, and social systems. Several authors have suggested that earthquake dynamics and neurodynamics can be analyzed within similar mathematical frameworks. Recent work has shown that a dynamical analogy supported by scale-free statistics exists between seizures and earthquakes, based on analyses of populations of different seizures and earthquakes, respectively. The purpose of this paper is to suggest a shift in emphasis from the large to the small scale: our analyses focus on the generation of a single epileptic seizure and the activation of a single fault (earthquake), and not on the statistics of sequences of different seizures and earthquakes. We apply the concepts of nonextensive statistical physics to support the suggestion that a dynamical analogy exists between the two different extreme events, seizures and earthquakes. We also investigate the existence of such an analogy by means of scale-free statistics (the Gutenberg-Richter distribution of event sizes and the distribution of the waiting time until the next event). The performed analysis confirms the existence of a dynamic analogy between earthquakes and seizures, which moreover follow the dynamics of magnetic storms and solar flares.
Dynamics of statistical distance: Quantum limits for two-level clocks
Braunstein, S.L.; Milburn, G.J.
1995-03-01
We study the evolution of statistical distance on the Bloch sphere under unitary and nonunitary dynamics. This corresponds to studying the limits to clock precision for a clock constructed from a two-state system. We find that the initial motion away from pure states under nonunitary dynamics yields the greatest accuracy for a 'one-tick' clock; in this case the clock's precision is not limited by the largest frequency of the system.
Nguyen, Y.; Nguyen, Nam X.; Rogers, Jamie L.; Liao, Jun; MacMillan, John B.; Jiang, Youxing; Sperandio, Vanessa
2015-05-19
Bacteria engage in chemical signaling, termed quorum sensing (QS), to mediate intercellular communication, mimicking multicellular organisms. The LuxR family of QS transcription factors regulates gene expression, coordinating population behavior by sensing endogenous acyl homoserine lactones (AHLs). However, some bacteria (such as Escherichia coli) do not produce AHLs. These LuxR orphans sense exogenous AHLs but also regulate transcription in the absence of AHLs. Importantly, this AHL-independent regulatory mechanism is still largely unknown. Here we present several structures of one such orphan LuxR-type protein, SdiA, from enterohemorrhagic E. coli (EHEC), in the presence and absence of AHL. SdiA is actually not in an apo state without AHL but is regulated by a previously unknown endogenous ligand, 1-octanoyl-rac-glycerol (OCL), which is ubiquitously found throughout the tree of life and serves as an energy source, signaling molecule, and substrate for membrane biogenesis. While exogenous AHL gives SdiA higher stability and DNA binding affinity, OCL may function as a chemical chaperone placeholder that stabilizes SdiA, allowing for basal activity. Structural comparison between SdiA-AHL and SdiA-OCL complexes provides crucial mechanistic insights into the ligand regulation of AHL-dependent and -independent function of LuxR-type proteins. Importantly, in addition to its contribution to basic science, this work has implications for public health, inasmuch as the SdiA signaling system aids the deadly human pathogen EHEC to adapt to a commensal lifestyle in the gastrointestinal (GI) tract of cattle, its main reservoir. These studies open exciting and novel avenues to control shedding of this human pathogen in the environment. IMPORTANCE Quorum sensing refers to bacterial chemical signaling. The QS acyl homoserine lactone (AHL) signals are recognized by LuxR-type receptors that regulate gene transcription. However, some bacteria have orphan LuxR-type receptors and do not produce AHLs, sensing them from other bacteria. We solved three structures of the E. coli SdiA orphan, in the presence and absence of AHL. SdiA with no AHL is not in an apo state but is regulated by a previously unknown endogenous ligand, 1-octanoyl-rac-glycerol (OCL). OCL is ubiquitously found in prokaryotes and eukaryotes and is a phospholipid precursor for membrane biogenesis and a signaling molecule. While AHL gives SdiA higher stability and DNA-binding affinity, OCL functions as a chemical chaperone placeholder, stabilizing SdiA and allowing for basal activity. Our studies provide crucial mechanistic insights into the ligand regulation of SdiA activity.
Sapsis, Themistoklis P.; Majda, Andrew J.
2013-01-01
A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra. PMID:23918398
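The 40-mode Lorenz 96 model used above as a stringent test case can be sketched in a few lines. This is a minimal illustration, not the ROMQG algorithm itself; the standard forcing F = 8 and RK4 time stepping are conventional assumptions, not details taken from the abstract.

```python
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    """Lorenz 96 tendency: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate_rk4(x0, dt=0.01, steps=2000, forcing=8.0):
    """Fixed-step fourth-order Runge-Kutta integration."""
    x = x0.copy()
    for _ in range(steps):
        k1 = lorenz96_rhs(x, forcing)
        k2 = lorenz96_rhs(x + 0.5 * dt * k1, forcing)
        k3 = lorenz96_rhs(x + 0.5 * dt * k2, forcing)
        k4 = lorenz96_rhs(x + dt * k3, forcing)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

rng = np.random.default_rng(0)
x0 = 8.0 + 0.01 * rng.standard_normal(40)  # 40 modes, slightly perturbed from the unstable fixed point
x_final = integrate_rk4(x0)
```

Long trajectories of this system supply the mean and covariance statistics that a reduced-order model such as ROMQG is calibrated against.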
Enriching Spatial Data Infrastructure (sdi) by User Generated Contents for Transportation
NASA Astrophysics Data System (ADS)
Shakeri, M.; Alimohammadi, A.; Sadeghi-Niaraki, A.; Alesheikh, A. A.
2013-09-01
Spatial data is one of the most critical elements underpinning decision making in many disciplines. Accessing and sharing spatial data have always been a great struggle for researchers. Spatial data infrastructure (SDI) plays a key role in spatial data sharing by building a suitable platform for collaboration and cooperation among different data-producing organizations. In recent years, the SDI vision has moved toward a user-centric platform, leading to the development of a new and enriched generation of SDI (the third generation). This vision is to provide an environment where users can cooperate to handle spatial data in an effective and satisfactory way. User-centric SDI concentrates on users, their requirements and their preferences, whereas past SDI initiatives mainly concentrated on technological issues such as data harmonization, standardized metadata models, and standardized web services for data discovery, visualization and download. Meanwhile, new technologies such as GPS-equipped smartphones, navigation devices and Web 2.0 have enabled citizens to actively participate in the production and sharing of spatial information. This has led to the emergence of a new phenomenon called Volunteered Geographic Information (VGI). VGI describes any type of voluntarily collected content that has a geographic element. Its distinctive feature is geographic information that can be collected and produced by citizens with varying degrees of formal expertise and knowledge of spatial or geographical concepts. Ordinary citizens can thus cooperate in providing massive sources of information that cannot be ignored. These can be considered valuable spatial information sources in SDI, usable for completing, improving and updating existing databases. Spatial information and technologies are an important part of transportation systems. 
Planning, design and operation of transportation systems require the exchange of large volumes of spatial data and often close cooperation among various organizations. However, no technical and organizational process yet exists to build a data infrastructure that addresses the diverse needs of transportation. Hence, common standards and a simple data exchange mechanism are strongly needed in the transportation field for decision support. Since one of the main purposes of transportation projects is to improve the quality of services provided to users, it is necessary to involve the users themselves in the decision-making process. This should be done through public participation and involvement in all stages of transportation projects. In other words, using public knowledge as another source of information is very important for making better and more efficient decisions. Public participation in transportation projects can also help organizations enhance public support, because lack of public support can lead to failure of technically valid projects. However, due to the complexity of transportation tasks and the lack of appropriate environments and methods for facilitating public participation and for collecting and analysing public information and opinions, public participation in this field has not been well considered so far. This paper reviews previous research on enriched SDI development and its movement toward VGI, focusing on public participation in transportation projects. To this end, the methods and models used in previous research are first studied and classified. Then, previous work on VGI and transportation is conceptualized within SDI. Finally, a method for transportation projects is suggested. Results indicate the success of the new generation of SDI, integrated with public participation, for transportation projects.
NASA Astrophysics Data System (ADS)
Schepen, Andrew; Wang, Q. J.
2015-03-01
The Australian Bureau of Meteorology produces statistical and dynamic seasonal streamflow forecasts. The statistical and dynamic forecasts are similarly reliable in ensemble spread; however, skill varies by catchment and season. Therefore, it may be possible to optimize forecasting skill by weighting and merging statistical and dynamic forecasts. Two model averaging methods are evaluated for merging forecasts for 12 locations. The first method, Bayesian model averaging (BMA), applies averaging to forecast probability densities (and thus cumulative probabilities) for a given forecast variable value. The second method, quantile model averaging (QMA), applies averaging to forecast variable values (quantiles) for a given cumulative probability (quantile fraction). BMA and QMA are found to perform similarly in terms of overall skill scores and reliability in ensemble spread. Both methods improve forecast skill across catchments and seasons. However, when both the statistical and dynamical forecasting approaches are skillful but produce, on special occasions, very different event forecasts, the BMA merged forecasts for these events can have unusually wide and bimodal distributions. In contrast, the distributions of the QMA merged forecasts for these events are narrower, unimodal and generally more smoothly shaped, and are potentially more easily communicated to and interpreted by the forecast users. Such special occasions are found to be rare. However, every forecast counts in an operational service, and therefore the occasional contrast in merged forecasts between the two methods may be more significant than the indifference shown by the overall skill and reliability performance.
From statistics of avalanches to microscopic dynamics parameters in a toy model of earthquakes
NASA Astrophysics Data System (ADS)
Białecki, Mariusz
2013-12-01
A toy model of earthquakes, the Random Domino Automaton, is investigated in its finite version. A procedure for reconstructing the intrinsic dynamical parameters of the model from the statistics of the avalanches it produces is presented. Examples of exponential, inverse-power and M-shaped avalanche distributions illustrate the remarkable flexibility of the model as well as the efficiency of the proposed reconstruction procedure.
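The avalanche statistics that the reconstruction procedure works from can be illustrated with a deliberately simplified one-dimensional sketch. The toppling rule below (any particle landing on an occupied site relaxes the whole contiguous occupied cluster) is our own minimal reading of the automaton; the full model includes additional rebound parameters not reproduced here.

```python
import random
from collections import Counter

def rda_avalanches(n_sites=100, n_hits=20000, seed=1):
    """Minimal 1-D toy automaton: particles land on random sites; a particle
    landing on an occupied site topples the whole contiguous occupied cluster,
    and the cluster length is recorded as the avalanche size."""
    random.seed(seed)
    occupied = [False] * n_sites
    sizes = Counter()
    for _ in range(n_hits):
        i = random.randrange(n_sites)
        if not occupied[i]:
            occupied[i] = True
        else:
            # relax the contiguous occupied cluster around site i
            left = i
            while left > 0 and occupied[left - 1]:
                left -= 1
            right = i
            while right < n_sites - 1 and occupied[right + 1]:
                right += 1
            for j in range(left, right + 1):
                occupied[j] = False
            sizes[right - left + 1] += 1
    return sizes

sizes = rda_avalanches()
```

The histogram in `sizes` is the kind of avalanche-size statistic from which the paper's procedure reconstructs the model's intrinsic parameters.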
Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function
ERIC Educational Resources Information Center
Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.
2011-01-01
In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
A new statistical dynamic analysis of ecological niches for China’s financial centres
NASA Astrophysics Data System (ADS)
Du, Huibin; Xia, Qiongqiong; Ma, Xuan; Chai, Lihe
2014-02-01
This study, undertaken from the perspective of statistical dynamics, proposes the treatment of financial centres as an ecosystem, creates a multidimensional financial centre niche (FC-niche) under given generalised entropy and constraints, and interprets the evolutionary process of an FC-niche with dynamic equations obtained from the maximum generalised entropy principle (MGEP). To solve these dynamic equations, a self-organised feature map (SOM) is designed. Finally, the values and evolutionary rules of FC-niches in China’s 29 major cities are simulated as a case study.
Yakhnin, Helen; Baker, Carol S.; Berezin, Igor; Evangelista, Michael A.; Rassin, Alisa; Romeo, Tony; Babitzke, Paul
2011-01-01
The RNA binding protein CsrA is the central component of a conserved global regulatory system that activates or represses gene expression posttranscriptionally. In every known example of CsrA-mediated translational control, CsrA binds to the 5′ untranslated region of target transcripts, thereby repressing translation initiation and/or altering the stability of the RNA. Furthermore, with few exceptions, repression by CsrA involves binding directly to the Shine-Dalgarno sequence and blocking ribosome binding. sdiA encodes the quorum-sensing receptor for N-acyl-L-homoserine lactone in Escherichia coli. Because sdiA indirectly stimulates transcription of csrB, which encodes a small RNA (sRNA) antagonist of CsrA, we further explored the relationship between sdiA and the Csr system. Primer extension analysis revealed four putative transcription start sites within 85 nucleotides of the sdiA initiation codon. Potential σ70-dependent promoters were identified for each of these primer extension products. In addition, two CsrA binding sites were predicted in the initially translated region of sdiA. Expression of chromosomally integrated sdiA′-′lacZ translational fusions containing the entire promoter and CsrA binding site regions indicates that CsrA represses sdiA expression. The results from gel shift and footprint studies demonstrate that tight binding of CsrA requires both of these sites. Furthermore, the results from toeprint and in vitro translation experiments indicate that CsrA represses translation of sdiA by directly competing with 30S ribosomal subunit binding. Thus, this represents the first example of CsrA preventing translation by interacting solely within the coding region of an mRNA target. PMID:21908661
Pseudo-dynamic source modeling with 1-point and 2-point statistics of earthquake source parameters
NASA Astrophysics Data System (ADS)
Song, S.; Dalguer, L. A.; Mai, P. M.
2013-12-01
Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modeling has been successfully adopted in physics-based source and ground motion modeling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modeling keeps the form of kinematic modeling with its computational efficiency, but also tries to emulate the physics of source process. In this paper we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modeling with the covariance matrix constructed from target 2-point statistics, i.e., auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger ground motions in general. The proposed new source modeling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
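The core construction named in the abstract, a covariance matrix built from target 2-point statistics and used to draw stochastic source realizations, can be sketched as follows. The exponential autocorrelation model and all parameter values here are illustrative assumptions, not the paper's calibrated choices.

```python
import numpy as np

def correlated_field(n, dx, corr_len, mean, std, seed=0):
    """Draw a 1-D random field with prescribed 1-point statistics (mean, std)
    and 2-point statistics (exponential autocorrelation with length corr_len),
    via a Cholesky factor of the covariance matrix."""
    x = np.arange(n) * dx
    # covariance from the target autocorrelation: C_ij = std^2 * exp(-|x_i - x_j| / corr_len)
    C = std**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(n))  # small jitter for numerical stability
    rng = np.random.default_rng(seed)
    return mean + L @ rng.standard_normal(n)

# hypothetical slip profile along a 100 km fault trace, sampled every 0.5 km
slip = correlated_field(n=200, dx=0.5, corr_len=5.0, mean=1.0, std=0.3)
```

Repeated draws with different seeds give an ensemble of rupture scenarios sharing the same target statistics, which is the sense in which the generator quantifies source variability.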
NASA Astrophysics Data System (ADS)
Chaikov, Leonid L.; Kirichenko, Marina N.; Krivokhizha, Svetlana V.; Zaritskiy, Alexander R.
2015-05-01
This work is devoted to the study of the sizes and concentrations of proteins and their aggregates in blood plasma samples, using static and dynamic light scattering methods. A new approach is proposed, based on multiple repetitions of intensity size distribution measurements and on counting the number of registrations of different sizes, which makes it possible to obtain statistically confident particle sizes and concentrations in blood plasma. Statistically confident particle sizes in the blood plasma were found to be stable over 30 h of observation, whereas the concentrations of particles of different sizes varied as a result of redistribution of material between them owing to protein degradation processes.
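The counting-of-registrations idea can be sketched as a simple vote across repeated runs: a size bin is deemed statistically confident if it registers in a sufficient fraction of the measurements. The bin edges, run data, and 50% threshold below are hypothetical illustrations, not values from the study.

```python
import numpy as np

def confident_sizes(runs, bin_edges, min_fraction=0.5):
    """Across repeated measurement runs, count how often each size bin
    registers any particles; flag bins seen in at least min_fraction
    of the runs as statistically confident sizes."""
    counts = np.zeros(len(bin_edges) - 1, dtype=int)
    for sizes in runs:
        hist, _ = np.histogram(sizes, bins=bin_edges)
        counts += (hist > 0).astype(int)
    fraction = counts / len(runs)
    return fraction >= min_fraction, fraction

# hypothetical runs: a ~10 nm species appears in every run, a ~100 nm aggregate in only 2 of 8
runs = [[10.0, 100.0] if i % 4 == 0 else [10.0] for i in range(8)]
confident, fraction = confident_sizes(runs, bin_edges=[1.0, 50.0, 500.0])
```

Here the persistent 10 nm bin is flagged as confident while the sporadic 100 nm bin is not, mirroring the separation of stable sizes from fluctuating concentrations.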
SDI-based business processes: A territorial analysis web information system in Spain
NASA Astrophysics Data System (ADS)
Béjar, Rubén; Latre, Miguel Á.; Lopez-Pellicer, Francisco J.; Nogueras-Iso, Javier; Zarazaga-Soria, F. J.; Muro-Medrano, Pedro R.
2012-09-01
Spatial Data Infrastructures (SDIs) provide access to geospatial data and operations through interoperable Web services. These data and operations can be chained to set up specialized geospatial business processes, and these processes can give support to different applications. End users can benefit from these applications, while experts can integrate the Web services in their own business processes and developments. This paper presents an SDI-based territorial analysis Web information system for Spain, which gives access to land cover, topography and elevation data, as well as to a number of interoperable geospatial operations by means of a Web Processing Service (WPS). Several examples illustrate how different territorial analysis business processes are supported. The system has been established by the Spanish National SDI (Infraestructura de Datos Espaciales de España, IDEE) both as an experimental platform for geoscientists and geoinformation system developers, and as a mechanism to contribute to the Spanish citizens knowledge about their territory.
Dynamical role of anyonic excitation statistics in rapidly rotating Bose gases.
Fischer, Uwe R
2004-10-15
We show that for rotating harmonically trapped Bose gases in a fractional quantum Hall state, the anyonic excitation statistics in the rotating gas can effectively play a dynamical role. For particular values of the two-dimensional coupling constant g = -2πℏ²(2k-1)/m, where k is a positive integer, the system becomes a noninteracting gas of anyons, with exactly obtainable solutions satisfying Bogomol'nyi self-dual order parameter equations. Attractive Bose gases under rapid rotation thus can be stabilized in the thermodynamic limit due to the anyonic statistics of their quasiparticle excitations. PMID:15524959
A Statistical Dynamic Approach to Structural Evolution of Complex Capital Market Systems
NASA Astrophysics Data System (ADS)
Shao, Xiao; Chai, Li H.
As an important part of modern financial systems, the capital market has played a crucial role in diverse social resource allocations and economic exchanges. Moving beyond traditional models and theories based on neoclassical economics, and treating capital markets as typical complex open systems, this paper attempts to develop a new approach that overcomes some shortcomings of existing research. By defining the generalized entropy of capital market systems, a theoretical model and a nonlinear dynamic equation for capital market operations are proposed from a statistical dynamic perspective. The US securities market from 1995 to 2001 is then simulated and analyzed as a typical case. Some instructive results are discussed and summarized.
The applications of Complexity Theory and Tsallis Non-extensive Statistics at Solar Plasma Dynamics
NASA Astrophysics Data System (ADS)
Pavlos, George
2015-04-01
As the solar plasma lives far from equilibrium it is an excellent laboratory for testing complexity theory and non-equilibrium statistical mechanics. In this study, we present the highlights of complexity theory and Tsallis non extensive statistical mechanics as concerns their applications at solar plasma dynamics, especially at sunspot, solar flare and solar wind phenomena. Generally, when a physical system is driven far from equilibrium states some novel characteristics can be observed related to the nonlinear character of dynamics. Generally, the nonlinearity in space plasma dynamics can generate intermittent turbulence with the typical characteristics of the anomalous diffusion process and strange topologies of stochastic space plasma fields (velocity and magnetic fields) caused by the strange dynamics and strange kinetics (Zaslavsky, 2002). In addition, according to Zelenyi and Milovanov (2004) the complex character of the space plasma system includes the existence of non-equilibrium (quasi)-stationary states (NESS) having the topology of a percolating fractal set. The stabilization of a system near the NESS is perceived as a transition into a turbulent state determined by self-organization processes. The long-range correlation effects manifest themselves as a strange non-Gaussian behavior of kinetic processes near the NESS plasma state. The complex character of space plasma can also be described by the non-extensive statistical thermodynamics pioneered by Tsallis, which offers a consistent and effective theoretical framework, based on a generalization of Boltzmann - Gibbs (BG) entropy, to describe far from equilibrium nonlinear complex dynamics (Tsallis, 2009). In a series of recent papers, the hypothesis of Tsallis non-extensive statistics in magnetosphere, sunspot dynamics, solar flares, solar wind and space plasma in general, was tested and verified (Karakatsanis et al., 2013; Pavlos et al., 2014; 2015). 
Our study includes the analysis of solar plasma time series in three cases: sunspot index, solar flare and solar wind data. The non-linear analysis of the sunspot index is embedded in the non-extensive statistical theory of Tsallis (1988; 2004; 2009). The q-triplet of Tsallis, as well as the correlation dimension and the Lyapunov exponent spectrum, were estimated for the SVD components of the sunspot index time series. Also, the multifractal scaling exponent spectrum f(a), the generalized Renyi dimension spectrum D(q) and the spectrum J(p) of the structure function exponents were estimated experimentally and theoretically by using the q-entropy principle included in Tsallis non-extensive statistical theory, following Arimitsu and Arimitsu (2000, 2001). Our analysis showed clearly the following: (a) a phase transition process in the solar dynamics from a high dimensional non-Gaussian SOC state to a low dimensional non-Gaussian chaotic state, (b) strong intermittent solar turbulence and an anomalous (multifractal) diffusion solar process, which is strengthened as the solar dynamics makes a phase transition to low dimensional chaos, in accordance with the studies of Ruzmaikin, Zelenyi and Milovanov (Zelenyi and Milovanov, 1991; Milovanov and Zelenyi, 1993; Ruzmaikin et al., 1996), (c) faithful agreement of Tsallis non-equilibrium statistical theory with the experimental estimations of: (i) the non-Gaussian probability distribution function P(x), (ii) the multifractal scaling exponent spectrum f(a) and the generalized Renyi dimension spectrum Dq, (iii) the exponent spectrum J(p) of the structure functions estimated for the sunspot index and its underlying non-equilibrium solar dynamics. Also, the q-triplet of Tsallis, as well as the correlation dimension and the Lyapunov exponent spectrum, were estimated for the singular value decomposition (SVD) components of the solar flare time series. 
Also, the multifractal scaling exponent spectrum f(a), the generalized Renyi dimension spectrum D(q) and the spectrum J(p) of the structure function exponents were estimated experimentally and theoretically by using the q-entropy principle included in Tsallis non-extensive statistical theory, following Arimitsu and Arimitsu (2000). Our analysis showed clearly the following: (a) a phase transition process in the solar flare dynamics from a high dimensional non-Gaussian self-organized critical (SOC) state to a low dimensional, also non-Gaussian, chaotic state, (b) strong intermittent solar corona turbulence and an anomalous (multifractal) diffusion solar corona process, which is strengthened as the solar corona dynamics makes a phase transition to low dimensional chaos, (c) faithful agreement of Tsallis non-equilibrium statistical theory with the experimental estimations of the functions: (i) the non-Gaussian probability distribution function P(x), (ii) f(a) and D(q), and (iii) J(p) for the solar flare time series and its underlying non-equilibrium solar dynamics, and (d) the dynamical profile of solar flares is revealed to be similar to that of the solar corona zone as far as the phase transition process from self-organized criticality (SOC) to chaos is concerned. However, the dynamical characteristics of the solar low corona (solar flares) can be clearly discriminated from those of the solar convection zone. Finally, we present novel results revealing non-equilibrium phase transition processes in the solar wind plasma during a strong shock event. The solar wind plasma, like the entire solar plasma system, is a typical case of stochastic spatiotemporal distribution of physical state variables such as force fields and matter fields (particle and current densities or bulk plasma distributions). 
This study shows clearly the non-extensive and non-Gaussian character of the solar wind plasma and the existence of multi-scale strong correlations from the microscopic to the macroscopic level. It also underlines the inefficiency of classical magneto-hydro-dynamic (MHD) or plasma statistical theories, based on the classical central limit theorem (CLT), to explain the complexity of the solar wind dynamics, since these theories include smooth and differentiable spatial-temporal functions (MHD theory) or Gaussian statistics (Boltzmann-Maxwell statistical mechanics). On the contrary, the results of this study indicate the presence of non-Gaussian non-extensive statistics with heavy-tailed probability distribution functions, which are related to the q-extension of the CLT. Finally, the results of this study can be understood in the framework of modern theoretical concepts such as non-extensive statistical mechanics (Tsallis, 2009), fractal topology (Zelenyi and Milovanov, 2004), turbulence theory (Frisch, 1996), strange dynamics (Zaslavsky, 2002), percolation theory (Milovanov, 1997), anomalous diffusion theory and anomalous transport theory (Milovanov, 2001), fractional dynamics (Tarasov, 2013) and non-equilibrium phase transition theory (Chang, 1992). References 1. T. Arimitsu, N. Arimitsu, Tsallis statistics and fully developed turbulence, J. Phys. A: Math. Gen. 33 (2000) L235. 2. T. Arimitsu, N. Arimitsu, Analysis of turbulence by statistics based on generalized entropies, Physica A 295 (2001) 177-194. 3. T. Chang, Low-dimensional behavior and symmetry breaking of stochastic systems near criticality: can these effects be observed in space and in the laboratory?, IEEE Trans. Plasma Sci. 20 (6) (1992) 691-694. 4. U. Frisch, Turbulence, Cambridge University Press, Cambridge, UK, 1996, p. 310. 5. L.P. Karakatsanis, G.P. Pavlos, M.N. Xenakis, Tsallis non-extensive statistics, intermittent turbulence, SOC and chaos in the solar plasma. 
Part two: Solar flares dynamics, Physica A 392 (2013) 3920-3944. 6. A.V. Milovanov, Topological proof for the Alexander-Orbach conjecture, Phys. Rev. E 56 (3) (1997) 2437-2446. 7. A.V. Milovanov, L.M. Zelenyi, Fracton excitations as a driving mechanism for the self-organized dynamical structuring in the solar wind, Astrophys. Space Sci. 264 (1-4) (1999) 317-345. 8. A.V. Milovanov, Stochastic dynamics from the fractional Fokker-Planck-Kolmogorov equation: large-scale behavior of the turbulent transport coefficient, Phys. Rev. E 63 (2001) 047301. 9. G.P. Pavlos, et al., Universality of non-extensive Tsallis statistics and time series analysis: Theory and applications, Physica A 395 (2014) 58-95. 10. G.P. Pavlos, et al., Tsallis non-extensive statistics and solar wind plasma complexity, Physica A 422 (2015) 113-135. 11. A.A. Ruzmaikin, et al., Spectral properties of solar convection and diffusion, ApJ 471 (1996) 1022. 12. V.E. Tarasov, Review of some promising fractional physical models, Internat. J. Modern Phys. B 27 (9) (2013) 1330005. 13. C. Tsallis, Possible generalization of Boltzmann-Gibbs statistics, J. Stat. Phys. 52 (1-2) (1988) 479-487. 14. C. Tsallis, Nonextensive statistical mechanics: construction and physical interpretation, in: M. Gell-Mann, C. Tsallis (Eds.), Nonextensive Entropy-Interdisciplinary Applications, Oxford Univ. Press, 2004, pp. 1-53. 15. C. Tsallis, Introduction to Non-Extensive Statistical Mechanics, Springer, 2009. 16. G.M. Zaslavsky, Chaos, fractional kinetics, and anomalous transport, Physics Reports 371 (2002) 461-580. 17. L.M. Zelenyi, A.V. Milovanov, Fractal properties of sunspots, Sov. Astron. Lett. 17 (6) (1991) 425. 18. L.M. Zelenyi, A.V. Milovanov, Fractal topology and strange kinetics: from percolation theory to problems in cosmic electrodynamics, Phys.-Usp. 47 (8) (2004) 749-788.
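The generalization of Boltzmann-Gibbs entropy at the heart of Tsallis non-extensive statistics is compact enough to state in code. A minimal sketch (with the Boltzmann constant set to 1):

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1), with k_B = 1.
    Recovers the Boltzmann-Gibbs entropy -sum_i p_i ln p_i in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # zero-probability states contribute nothing
    if abs(q - 1.0) < 1e-12:
        return -np.sum(p * np.log(p))
    return (1.0 - np.sum(p**q)) / (q - 1.0)

uniform = np.ones(8) / 8.0  # uniform distribution over 8 states
```

For the uniform distribution, S_1 = ln 8, while S_2 = 1 - 1/8 = 0.875; the deviation of the fitted q from 1 is what the q-triplet analyses above use to quantify departure from Boltzmann-Gibbs statistics.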
NASA Astrophysics Data System (ADS)
Laugel, Amélie; Menendez, Melisa; Benoit, Michel; Mattarolo, Giovanni; Méndez, Fernando
2014-12-01
The estimation of possible impacts related to climate change on the wave climate is subject to several levels of uncertainty. In this work, we focus on the uncertainties inherent in the method applied to project the wave climate using atmospheric simulations. Two approaches are commonly used to obtain the regional wave climate: dynamical and statistical downscaling from atmospheric data. We apply both approaches based on the outputs of a global climate model (GCM), ARPEGE-CLIMAT, under three possible future scenarios (B1, A1B and A2) of the Fourth Assessment Report, AR4 (IPCC, 2007), along the French coast and evaluate their results for the wave climate with a high level of precision. The performance of the dynamical and the statistical methods is determined through a comparative analysis of the estimated means, standard deviations and monthly quantile distributions of significant wave heights, the joint probability distributions of wave parameters and seasonal and interannual variability. Analysis of the results shows that the statistical projections are able to reproduce the wave climatology as well as the dynamical projections, with some deficiencies being observed in the summer and for the upper tail of the significant wave height. In addition, with its low computational time requirements, the statistical downscaling method allows an ensemble of simulations to be calculated faster than the dynamical method. It then becomes possible to quantify the uncertainties associated with the choice of the GCM or the socio-economic scenarios, which will improve estimates of the impact of wave climate change along the French coast.
Model averaging methods to merge statistical and dynamic seasonal streamflow forecasts in Australia
NASA Astrophysics Data System (ADS)
Schepen, A.; Wang, Q. J.
2014-12-01
The Australian Bureau of Meteorology operates a statistical seasonal streamflow forecasting service. It has also developed a dynamic seasonal streamflow forecasting approach. The two approaches produce similarly reliable forecasts in terms of ensemble spread but can differ in forecast skill depending on catchment and season. Therefore, it may be possible to augment the skill of the existing service by objectively weighting and merging the forecasts. Bayesian model averaging (BMA) is first applied to merge statistical and dynamic forecasts for 12 locations using leave-five-years-out cross-validation. It is seen that the BMA merged forecasts can sometimes be too uncertain, as shown by ensemble spreads that are unrealistically wide and even bi-modal. The BMA method applies averaging to forecast probability densities (and thus cumulative probabilities) for a given forecast variable value. An alternative approach is quantile model averaging (QMA), whereby forecast variable values (quantiles) are averaged for a given cumulative probability (quantile fraction). For the 12 locations, QMA is compared to BMA. BMA and QMA perform similarly in terms of forecast accuracy skill scores and reliability in terms of ensemble spread. Both methods improve forecast skill across catchments and seasons by combining the different strengths of the statistical and dynamic approaches. A major advantage of QMA over BMA is that it always produces reasonably well defined forecast distributions, even in the special cases where BMA does not. Optimally estimated QMA weights and BMA weights are similar; however, BMA weights are more efficiently estimated.
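The contrast between the two averaging rules can be sketched for two hypothetical Gaussian forecasts; all numbers are illustrative, not from the study. BMA averages the forecast densities, so two skillful but divergent forecasts yield a bimodal mixture, while QMA averages the model quantiles at each cumulative probability, yielding a single unimodal merged quantile function.

```python
import numpy as np
from statistics import NormalDist

def bma_sample(means, sds, weights, n=50000, seed=0):
    """Bayesian model averaging: the merged forecast is a mixture of the
    model densities; sample by picking a model per draw, then drawing from it."""
    rng = np.random.default_rng(seed)
    pick = rng.choice(len(means), size=n, p=weights)
    return rng.normal(np.asarray(means)[pick], np.asarray(sds)[pick])

def qma_quantiles(means, sds, weights, probs):
    """Quantile model averaging: average the model quantiles at each
    cumulative probability (quantile fraction)."""
    return [sum(w * NormalDist(m, s).inv_cdf(p)
                for m, s, w in zip(means, sds, weights)) for p in probs]

# two equally weighted but divergent event forecasts (hypothetical numbers)
means, sds, weights = [10.0, 50.0], [5.0, 5.0], [0.5, 0.5]
bma = bma_sample(means, sds, weights)
qma = qma_quantiles(means, sds, weights, probs=[0.25, 0.5, 0.75])
```

The QMA median lands at 30, between the two forecasts, while the BMA sample places almost no mass there, which is the bimodality the abstract describes for such special occasions.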
NASA Astrophysics Data System (ADS)
Moradkhani, Hamid
2015-04-01
Drought forecasting is vital for resource management and planning. Both societal and agricultural demands weigh heavily on natural water supplies, which may become scarce in the event of drought. Although drought forecasts are an important tool for managing water in hydrologic systems, these forecasts are plagued by uncertainties, owing to the complexities of water dynamics and the spatial heterogeneity of the pertinent variables. Because of these uncertainties, it is necessary to frame forecasts in a probabilistic manner. Here we present a statistical-dynamical probabilistic drought forecast framework within Bayesian networks. The statistical forecast model applies a family of multivariate distribution functions to forecast future drought conditions given past drought status. The advantage of the statistical forecast model is that it develops conditional probabilities of a given forecast variable and returns the most probable forecast along with an assessment of the uncertainty around that value. The dynamical model relies on data assimilation to characterize the uncertainty in the initial land surface conditions, which is correspondingly reflected in the drought forecast. In addition, the recovery from drought is examined. From these forecasts, it is found that drought recovery is a longer process than suggested in the recent literature. Drought in land surface variables (snow, soil moisture) is shown to persist for up to a year in certain locations, depending on the intensity of the drought. Location within the basin appears to be a driving factor in the ability of the land surface to recover from drought, allowing for differentiation between drought-prone and drought-resistant regions.
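As an illustration of the statistical component, the sketch below forecasts a future drought index from a past one with a bivariate-normal model; the index, lag correlation and drought threshold are assumed for illustration, not taken from the study:

```python
from math import erf, sqrt

# Assumed bivariate-normal model linking a standardized past drought index
# (e.g. a 3-month SPI) to the index one season ahead.
mu_past, mu_future = 0.0, 0.0
rho = 0.7   # assumed lag correlation between past and future index

def conditional_forecast(past_index):
    # Conditional normal: mean and variance of the future index given the past.
    mean = mu_future + rho * (past_index - mu_past)
    var = 1.0 - rho**2
    return mean, var

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Probability of remaining in drought (index below -1.0, an assumed
# threshold) one season ahead, given a current index of -1.5.
mean, var = conditional_forecast(-1.5)
p_drought = normal_cdf((-1.0 - mean) / sqrt(var))
print(mean, p_drought)
```

The conditional mean gives the most probable forecast, while the conditional variance quantifies the uncertainty around it, exactly the two outputs the abstract describes.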
NASA Astrophysics Data System (ADS)
Xu, Hao; Lu, Bo; Su, Zhongqing; Cheng, Li
2015-09-01
A previously developed damage identification strategy, named Pseudo-Excitation (PE), was enhanced using a statistical processing approach. Based on the local dynamic equilibrium of the structural component under inspection, the distribution of its vibration displacements, which is needed to construct the damage index in the PE, was re-defined from dynamic strains alone using the statistical method. In addition to the advantages inherited from the original PE over traditional vibration-based damage detection, including independence of baseline signals and pre-developed benchmark structures, the enhanced PE (EPE) possesses improved immunity to the interference of measurement noise. Moreover, the EPE can facilitate practical implementation of online structural health monitoring, benefiting from the use of strain information alone. A proof-of-concept numerical study was conducted to examine the feasibility and accuracy of the EPE, and the effectiveness of the proposed statistical enhancement in reconstructing the vibration displacements was evaluated under the influence of noise; experimental validation followed, characterizing multiple cracks in a beam-like structure in which the dynamic strains were measured using lead zirconate titanate (PZT) sensors. For comparison, the original PE, the Gapped Smoothing Method (GSM) and the EPE were each used to evaluate the cracks. The damage identification results showed that both the GSM and the EPE achieved higher identification accuracy than the original PE, and the robustness of the EPE in damage identification proved superior to that of the GSM.
The non-equilibrium statistical mechanics of a simple geophysical fluid dynamics model
NASA Astrophysics Data System (ADS)
Verkley, Wim; Severijns, Camiel
2014-05-01
Lorenz [1] has devised a dynamical system that has proved to be very useful as a benchmark system in geophysical fluid dynamics. The system in its simplest form consists of a periodic array of variables that can be associated with an atmospheric field on a latitude circle. The system is driven by a constant forcing, is damped by linear friction and has a simple advection term that causes the model to behave chaotically if the forcing is large enough. Our aim is to predict the statistics of Lorenz' model on the basis of a given average value of its total energy - obtained from a numerical integration - and the assumption of statistical stationarity. Our method is the principle of maximum entropy [2] which in this case reads: the information entropy of the system's probability density function shall be maximal under the constraints of normalization, a given value of the average total energy and statistical stationarity. Statistical stationarity is incorporated approximately by using `stationarity constraints', i.e., by requiring that the average first and possibly higher-order time-derivatives of the energy are zero in the maximization of entropy. The analysis [3] reveals that, if the first stationarity constraint is used, the resulting probability density function rather accurately reproduces the statistics of the individual variables. If the second stationarity constraint is used as well, the correlations between the variables are also reproduced quite adequately. The method can be generalized straightforwardly and holds the promise of a viable non-equilibrium statistical mechanics of the forced-dissipative systems of geophysical fluid dynamics. [1] E.N. Lorenz, 1996: Predictability - A problem partly solved, in Proc. Seminar on Predictability (ECMWF, Reading, Berkshire, UK), Vol. 1, pp. 1-18. [2] E.T. Jaynes, 2003: Probability Theory - The Logic of Science (Cambridge University Press, Cambridge). [3] W.T.M. Verkley and C.A. 
Severijns, 2014: The maximum entropy principle applied to a dynamical system proposed by Lorenz, Eur. Phys. J. B, 87:7, http://dx.doi.org/10.1140/epjb/e2013-40681-2 (open access).
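The benchmark system of [1] (often called the Lorenz-96 model) and the average-energy constraint used in the maximum-entropy analysis can be sketched as follows; the forcing, dimension and integration settings are conventional choices, not necessarily those of the paper:

```python
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    # Lorenz (1996): dx_j/dt = (x_{j+1} - x_{j-2}) * x_{j-1} - x_j + F,
    # with periodic indexing around the "latitude circle".
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate(x, dt=0.01, steps=5000, forcing=8.0):
    # Classical fourth-order Runge-Kutta time stepping.
    for _ in range(steps):
        k1 = lorenz96_tendency(x, forcing)
        k2 = lorenz96_tendency(x + 0.5 * dt * k1, forcing)
        k3 = lorenz96_tendency(x + 0.5 * dt * k2, forcing)
        k4 = lorenz96_tendency(x + dt * k3, forcing)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

rng = np.random.default_rng(0)
x = 8.0 + 0.1 * rng.standard_normal(36)   # 36 variables, perturbed rest state
x = integrate(x)                           # spin up onto the chaotic attractor
energy = 0.5 * np.sum(x**2)                # total energy, the maximum-entropy constraint
print(energy)
```

The time average of this energy over a long run supplies the constraint value under which the entropy of the probability density function is maximized.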
Nguyen, Y; Nguyen, Nam X.; Rogers, Jamie L.; Liao, Jun; MacMillan, John B.
2015-01-01
Bacteria engage in chemical signaling, termed quorum sensing (QS), to mediate intercellular communication, mimicking multicellular organisms. The LuxR family of QS transcription factors regulates gene expression, coordinating population behavior by sensing endogenous acyl homoserine lactones (AHLs). However, some bacteria (such as Escherichia coli) do not produce AHLs. These LuxR orphans sense exogenous AHLs but also regulate transcription in the absence of AHLs. Importantly, this AHL-independent regulatory mechanism is still largely unknown. Here we present several structures of one such orphan LuxR-type protein, SdiA, from enterohemorrhagic E. coli (EHEC), in the presence and absence of AHL. SdiA is actually not in an apo state without AHL but is regulated by a previously unknown endogenous ligand, 1-octanoyl-rac-glycerol (OCL), which is found ubiquitously throughout the tree of life and serves as an energy source, signaling molecule and substrate for membrane biogenesis. While exogenous AHL confers greater stability and DNA binding affinity on SdiA, OCL may function as a chemical chaperone placeholder that stabilizes SdiA, allowing for basal activity. Structural comparison between the SdiA-AHL and SdiA-OCL complexes provides crucial mechanistic insights into the ligand regulation of the AHL-dependent and -independent functions of LuxR-type proteins. In addition to its contribution to basic science, this work has implications for public health, inasmuch as the SdiA signaling system helps the deadly human pathogen EHEC adapt to a commensal lifestyle in the gastrointestinal (GI) tract of cattle, its main reservoir. These studies open exciting and novel avenues to control shedding of this human pathogen in the environment. PMID:25827420
NASA Astrophysics Data System (ADS)
Alfi, V.; Cristelli, M.; Pietronero, L.; Zaccaria, A.
2009-02-01
We present a detailed study of the statistical properties of the Agent Based Model introduced in paper I [Eur. Phys. J. B, DOI: 10.1140/epjb/e2009-00028-4] and of its generalization to multiplicative dynamics. The aim of the model is to identify the minimal elements needed to understand the origin of the stylized facts and their self-organization. The key elements are fundamentalist agents, chartist agents, herding dynamics and price behavior. The first two elements correspond to the competition between stability and instability tendencies in the market. The herding behavior governs the possibility for agents to change strategy and is a crucial element of this class of models. We consider a linear approximation for the price dynamics, which permits a simple interpretation of the model dynamics and, for many properties, allows analytical results to be derived. The generalized nonlinear dynamics turns out to be far more sensitive to the choice of parameters and much more difficult to analyze and control. The main results for the nature and self-organization of the stylized facts are, however, very similar in the two cases. The main peculiarity of the nonlinear dynamics is an enhancement of the fluctuations and a more marked evidence of the stylized facts. We also discuss some modifications of the model that introduce more realistic elements with respect to real markets.
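A minimal sketch in the spirit of the model, with assumed (not the authors') parameters: fundamentalists pull the price toward its fundamental value, chartists follow the last price move, and a crude bounded random walk in the chartist population stands in for the herding dynamics:

```python
import numpy as np

rng = np.random.default_rng(42)

N = 100          # total number of agents
steps = 5000
gamma = 0.005    # strength of the fundamentalist pull toward p_fund
beta = 0.008     # strength of the chartist trend-following
sigma = 0.05     # amplitude of exogenous noise
p_fund = 0.0     # fundamental (log-)price

p = np.zeros(steps)
n_chart = N // 2
for t in range(2, steps):
    # Herding proxy: the chartist population performs a bounded random walk,
    # standing in for the agents' strategy-switching dynamics.
    n_chart = int(np.clip(n_chart + rng.integers(-1, 2), 0, N))
    n_fund = N - n_chart
    trend = p[t - 1] - p[t - 2]
    demand = n_fund * gamma * (p_fund - p[t - 1]) + n_chart * beta * trend
    p[t] = p[t - 1] + demand + sigma * rng.standard_normal()

returns = np.diff(p[2:])
r = returns - returns.mean()
# Excess kurtosis as a crude stylized-fact diagnostic (0 for a Gaussian).
excess_kurtosis = np.mean(r**4) / np.mean(r**2) ** 2 - 3.0
print(excess_kurtosis)
```

Because the composition of the population drifts between fundamentalist-dominated and chartist-dominated phases, the return distribution is a mixture of regimes rather than a single Gaussian, which is the mechanism behind the fat tails discussed in the abstract.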
NASA Astrophysics Data System (ADS)
Nielsen, Eric L.; Close, Laird M.; Biller, Beth A.; Masciadri, Elena; Lenzen, Rainer
2008-02-01
We examine the implications for the distribution of extrasolar planets based on the null results from two of the largest direct imaging surveys published to date. Combining the measured contrast curves from 22 of the stars observed with the VLT NACO adaptive optics system by Masciadri and coworkers and 48 of the stars observed with the VLT NACO SDI and MMT SDI devices by Biller and coworkers (for a total of 60 unique stars), we consider what distributions of planet masses and semimajor axes can be ruled out by these data, based on Monte Carlo simulations of planet populations. We can set the following upper limit with 95% confidence: the fraction of stars with planets with semimajor axis between 20 and 100 AU, and mass above 4 MJup, is 20% or less. Also, with a distribution of planet mass of dN/dM ∝ M^(-1.16) in the range of 0.5-13 MJup, we can rule out a power-law distribution for semimajor axis (dN/da ∝ a^α) with index 0 and upper cutoff of 18 AU, and index -0.5 with an upper cutoff of 48 AU. For the distribution suggested by Cumming et al., a power law of index -0.61, we can place an upper limit of 75 AU on the semimajor axis distribution. In general, we find that even null results from direct imaging surveys are very powerful in constraining the distributions of giant planets (0.5-13 MJup) at large separations, but more work needs to be done to close the gap between planets that can be detected by direct imaging, and those to which the radial velocity method is sensitive.
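The Monte Carlo logic behind such constraints can be sketched as follows; the step-function detectability cut and the flat semimajor-axis distribution are simplifying assumptions standing in for the per-star contrast curves:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_power_law(n, exponent, lo, hi):
    # Inverse-CDF sampling of p(x) ∝ x**exponent on [lo, hi].
    u = rng.random(n)
    if exponent == -1.0:
        return lo * (hi / lo) ** u
    k = exponent + 1.0
    return (lo**k + u * (hi**k - lo**k)) ** (1.0 / k)

# Hypothetical survey sensitivity: a planet counts as detectable if it is
# massive and wide enough; a real analysis would apply each star's
# measured contrast curve instead.
def detectable(mass_mjup, sma_au):
    return (mass_mjup > 4.0) & (sma_au > 20.0) & (sma_au < 100.0)

n_trials = 100_000
mass = sample_power_law(n_trials, -1.16, 0.5, 13.0)   # dN/dM ∝ M^(-1.16)
sma = sample_power_law(n_trials, 0.0, 5.0, 100.0)     # flat dN/da, for illustration
p_detect = detectable(mass, sma).mean()

# If a fraction f of the 60 stars hosts such a planet, the chance that the
# survey detects none of them is (1 - f * p_detect)**60.
f = 0.20
p_null = (1.0 - f * p_detect) ** 60
print(p_detect, p_null)
```

With these toy assumptions the probability of an all-null survey given f = 20% already drops below 5%, which is the sense in which a null result excludes that population at 95% confidence.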
Displaying R spatial statistics on Google dynamic maps with web applications created by Rwui
2012-01-01
Background The R project includes a large variety of packages designed for spatial statistics. Google dynamic maps provide web based access to global maps and satellite imagery. We describe a method for displaying directly the spatial output from an R script on to a Google dynamic map. Methods This is achieved by creating a Java based web application which runs the R script and then displays the results on the dynamic map. In order to make this method easy to implement by those unfamiliar with programming Java based web applications, we have added the method to the options available in the R Web User Interface (Rwui) application. Rwui is an established web application for creating web applications for running R scripts. A feature of Rwui is that all the code for the web application being created is generated automatically so that someone with no knowledge of web programming can make a fully functional web application for running an R script in a matter of minutes. Results Rwui can now be used to create web applications that will display the results from an R script on a Google dynamic map. Results may be displayed as discrete markers and/or as continuous overlays. In addition, users of the web application may select regions of interest on the dynamic map with mouse clicks and the coordinates of the region of interest will automatically be made available for use by the R script. Conclusions This method of displaying R output on dynamic maps is designed to be of use in a number of areas. Firstly it allows statisticians, working in R and developing methods in spatial statistics, to easily visualise the results of applying their methods to real world data. Secondly, it allows researchers who are using R to study health geographics data, to display their results directly onto dynamic maps. 
Thirdly, by creating a web application for running an R script, a statistician can enable users entirely unfamiliar with R to run R coded statistical analyses of health geographics data. Fourthly, we envisage an educational role for such applications. PMID:22998945
Mechanical-statistical modeling in ecology: from outbreak detections to pest dynamics.
Soubeyrand, S; Neuvonen, S; Penttinen, A
2009-02-01
Knowledge about the large-scale and long-term dynamics of (natural) populations is required to assess the efficiency of control strategies, the potential for long-term persistence, and the adaptability to global changes such as habitat fragmentation and global warming. For most natural populations, such as pest populations, large-scale and long-term surveys cannot be carried out at high resolution. For instance, for population dynamics characterized by irregular abundance explosions, i.e., outbreaks, it is common to report detected outbreaks rather than measuring the population density at every location and time. Here, we propose a mechanical-statistical model for analyzing such outbreak occurrence data and making inference about population dynamics. This spatio-temporal model contains the main mechanisms of the dynamics and describes the observation process. This construction enables us to account for the discrepancy between the phenomenon scale and the sampling scale. We propose a Bayesian method to estimate model parameters, pest densities and hidden factors, i.e., variables involved in the dynamics but not observed. The model was specified and used to learn about the dynamics of the European pine sawfly (Neodiprion sertifer Geoffr., an insect causing major defoliation of pines in northern Europe) based on Finnish sawfly data covering the years 1961-1990. In this application, a dynamical Beverton-Holt model including a hidden regime variable was incorporated into the model to deal with large variations in the population densities. Our results support the idea that pine sawfly dynamics should be studied as metapopulations with alternative equilibria. The results confirmed the importance of extreme minimum winter temperatures for the occurrence of European pine sawfly outbreaks. The strong positive connection between the ratio of lake area to total area and outbreaks was quantified for the first time. PMID:18843520
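A regime-switching Beverton-Holt model of the kind incorporated in the analysis can be sketched as below; the growth rates, switching probability and noise level are hypothetical placeholders, not the fitted Finnish values:

```python
import numpy as np

rng = np.random.default_rng(7)

def beverton_holt(n, growth, capacity):
    # Beverton-Holt map: N_{t+1} = R * N_t / (1 + N_t / K).
    return growth * n / (1.0 + n / capacity)

# Hypothetical two-regime dynamics: a hidden Markov regime (e.g. triggered
# by extreme minimum winter temperatures) switches the growth rate.
growth_by_regime = {0: 0.8, 1: 3.0}   # 0 = endemic decline, 1 = outbreak
stay_prob = 0.9                       # probability of remaining in a regime

steps = 200
regime = 0
n = 10.0
trajectory = []
for _ in range(steps):
    if rng.random() > stay_prob:
        regime = 1 - regime
    n = beverton_holt(n, growth_by_regime[regime], capacity=100.0)
    n *= np.exp(0.1 * rng.standard_normal())   # multiplicative environmental noise
    trajectory.append(n)

trajectory = np.array(trajectory)
print(trajectory.min(), trajectory.max())
```

The two regimes pull the density toward very different levels (decline toward zero versus an outbreak equilibrium near K(R-1)), which is the "alternative equilibria" behavior the abstract refers to.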
NASA Astrophysics Data System (ADS)
Miksovsky, J.; Huth, R.; Halenka, T.; Belda, M.; Farda, A.; Skalak, P.; Stepanek, P.
2009-12-01
To bridge the resolution gap between the outputs of global climate models (GCMs) and finer-scale data needed for studies of the climate change impacts, two approaches are widely used: dynamical downscaling, based on application of regional climate models (RCMs) embedded into the domain of the GCM simulation, and statistical downscaling (SDS), using empirical transfer functions between the large-scale data generated by the GCM and local measurements. In our contribution, we compare the performance of different variants of both techniques for the region of Central Europe. The dynamical downscaling is represented by the outputs of two regional models run in the 10 km horizontal grid, ALADIN-CLIMATE/CZ (co-developed by the Czech Hydrometeorological Institute and Meteo-France) and RegCM3 (developed by the Abdus Salam Centre for Theoretical Physics). The applied statistical methods were based on multiple linear regression, as well as on several of its nonlinear alternatives, including techniques employing artificial neural networks. Validation of the downscaling outputs was carried out using measured data, gathered from weather stations in the Czech Republic, Slovakia, Austria and Hungary for the end of the 20th century; series of daily values of maximum and minimum temperature, precipitation and relative humidity were analyzed. None of the regional models or statistical downscaling techniques could be identified as the universally best one. For instance, while most statistical methods misrepresented the shape of the statistical distribution of the target variables (especially in the more challenging cases such as estimation of daily precipitation), RCM-generated data often suffered from severe biases. It is also shown that further enhancement of the simulated fields of climate variables can be achieved through a combination of dynamical downscaling and statistical postprocessing. 
This can not only be used to reduce biases and other systematic flaws in the generated time series, but also to further localize the RCM outputs beyond the resolution of their original grid. The resulting data then provide a suitable input for subsequent studies of the local climate and its change in the target region.
Exit from G0 and entry into the cell cycle of cells expressing p21Sdi1 antisense RNA.
Nakanishi, M; Adami, G R; Robetorye, R S; Noda, A; Venable, S F; Dimitrov, D; Pereira-Smith, O M; Smith, J R
1995-01-01
p21Sdi1 (also known as Cip1 and Waf1), an inhibitor of DNA synthesis cloned from senescent human fibroblasts, is an inhibitor of G1 cyclin-dependent kinases (Cdks) in vitro and is transcriptionally regulated by wild-type p53. In addition, p21Sdi1 has been found to inhibit DNA replication by direct interaction with proliferating cell nuclear antigen. In this study we analyzed normal human fibroblast cells arrested in G0 and determined that an excess of p21Sdi1 was present after immunodepletion of various cyclins and Cdks, in contrast to mitogen-stimulated cells in early S phase. Expression of antisense p21Sdi1 RNA in G0-arrested cells resulted in induction of DNA synthesis as well as entry into mitosis. These results suggest that p21Sdi1 functions in G0 and early G1 and that decreased expression of the gene is necessary for cell cycle progression. PMID:7753810
Shimada, Tomohiro; Shimada, Kaori; Matsui, Makoto; Kitai, Yuichi; Igarashi, Jun; Suga, Hiroaki; Ishihama, Akira
2014-05-01
In Gram-negative bacteria, N-acylhomoserine lactone (HSL) is used as a signal in cell-cell communication and quorum sensing (QS). The model prokaryote Escherichia coli lacks the system for HSL synthesis but is capable of monitoring HSL signals in the environment. The transcription factor SdiA, involved in cell division control, is believed to play a role as an HSL sensor. Using a collection of 477 species of chemically synthesized HSL analogues, we identified three synthetic signal molecules (SSMs) that bind in vitro to purified SdiA. After SELEX-chip screening of SdiA-binding DNA sequences, a striking difference was found between these SSMs in the pattern of regulation target genes on the E. coli genome. Based on Northern blot analysis in vivo, one set of target genes was found to be repressed by SdiA in the absence of effectors and derepressed by the addition of SSMs. Another set of genes was, however, expressed in the absence of effector ligands but repressed by the addition of SSMs. Taken together, we propose that the spectrum of target gene selection by SdiA is modulated in multiple modes depending on the interacting HSL-like signal molecules. PMID:24645791
Specificity of mathematical description of statistical and dynamical properties of CELSS
NASA Astrophysics Data System (ADS)
Bartsev, Sergey
A CELSS for long-term space missions has to possess a high level of closure of matter turnover. Designing, studying and maintaining such systems does not seem possible without accounting for their defining specificity: high closure. To measure this specific property, a potentially universal coefficient of closure is suggested and discussed. It can be shown that standard statistical formulas are incorrect for estimating mean values of the biomass of CELSS components. Accounting for closure as a specific constraint of closed ecological systems allows correct formulas to be obtained for calculating mean values of biomass and of the composition of chemical compounds in a CELSS. Errors due to using standard statistical evaluations are discussed. The organisms composing a biological LSS consume and produce a spectrum of different substances. Providing a high level of closure (the absence of deadlocks) depends on the accuracy with which the inputs and outputs of all organisms are adjusted to one another. This is a practical objective of high importance. Adequate mathematical models ought to describe the ability of organisms to vary their consumption and production spectra (stoichiometric ratios). Traditional ecological models describing the dynamics of a limiting element cannot be adequately applied to describing CELSS dynamics over all possible operating regimes. The possible use of adaptive metabolism models to provide a correct description of CELSS dynamics is considered.
An open-source wireless sensor stack: from Arduino to SDI-12 to Water One Flow
NASA Astrophysics Data System (ADS)
Hicks, S.; Damiano, S. G.; Smith, K. M.; Olexy, J.; Horsburgh, J. S.; Mayorga, E.; Aufdenkampe, A. K.
2013-12-01
Implementing a large-scale streaming environmental sensor network has previously been limited by the high cost of the datalogging and data communication infrastructure. The Christina River Basin Critical Zone Observatory (CRB-CZO) is overcoming the obstacles to large near-real-time data collection networks by using Arduino, an open source electronics platform, in combination with XBee ZigBee wireless radio modules. These extremely low-cost and easy-to-use open source electronics are at the heart of the new DIY movement and have provided solutions to countless projects by over half a million users worldwide. However, their use in environmental sensing is in its infancy. At present a primary limitation to widespread deployment of open-source electronics for environmental sensing is the lack of a simple, open-source software stack to manage streaming data from heterogeneous sensor networks. Here we present a functioning prototype software stack that receives sensor data over a self-meshing ZigBee wireless network from over a hundred sensors, stores the data locally and serves it on demand as a CUAHSI Water One Flow (WOF) web service. We highlight a few new, innovative components, including: (1) a versatile open data logger design based on the Arduino electronics platform and ZigBee radios; (2) a software library implementing the SDI-12 communication protocol between any Arduino platform and SDI-12-enabled sensors without the need for additional hardware (https://github.com/StroudCenter/Arduino-SDI-12); and (3) 'midStream', a light-weight set of Python code that receives streaming sensor data, appends it with metadata on the fly by querying a relational database structured on an early version of the Observations Data Model version 2.0 (ODM2), and uses the WOFpy library to serve the data as WaterML via SOAP and REST web services.
Quantum particle statistics on the holographic screen leads to modified Newtonian dynamics
NASA Astrophysics Data System (ADS)
Pazy, E.; Argaman, N.
2012-05-01
Employing a thermodynamic interpretation of gravity based on the holographic principle and assuming underlying particle statistics, fermionic or bosonic, for the excitations of the holographic screen leads to modified Newtonian dynamics (MOND). A connection between the acceleration scale a0 appearing in MOND and the Fermi energy of the holographic fermionic degrees of freedom is obtained. In this formulation the physics of MOND results from the quantum-classical crossover in the fermionic specific heat. However, due to the dimensionality of the screen, the formalism is general and applies to two-dimensional bosonic excitations as well. It is shown that replacing the assumption of the equipartition of energy on the holographic screen by a standard quantum-statistical-mechanics description, wherein some of the degrees of freedom are frozen out at low temperatures, is the physical basis for the MOND interpolating function μ̃. The interpolating function μ̃ is calculated within the statistical mechanical formalism and compared to the leading phenomenological interpolating functions most commonly used. Based on the statistical mechanical view of MOND, its cosmological implications are reinterpreted: the connection between a0 and the Hubble constant is described as a quantum uncertainty relation; and the relationship between a0 and the cosmological constant is better understood physically.
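The MOND relation itself, a μ̃(a/a0) = a_N, can be illustrated numerically with two common phenomenological interpolating functions; this is a generic sketch of the relation, not the statistical-mechanical μ̃ derived in the paper:

```python
import numpy as np

a0 = 1.2e-10  # MOND acceleration scale in m/s^2 (commonly quoted value)

def mu_simple(x):
    return x / (1.0 + x)          # the "simple" interpolating function

def mu_standard(x):
    return x / np.sqrt(1.0 + x * x)  # the "standard" interpolating function

def mond_acceleration(a_newton, mu, a_lo=1e-16, a_hi=1.0):
    # Solve a * mu(a / a0) = a_N for the true acceleration a by bisection;
    # the left-hand side increases monotonically with a.
    for _ in range(200):
        mid = 0.5 * (a_lo + a_hi)
        if mid * mu(mid / a0) < a_newton:
            a_lo = mid
        else:
            a_hi = mid
    return 0.5 * (a_lo + a_hi)

# Deep-MOND regime: for a_N << a0 both solutions approach sqrt(a_N * a0).
a_n = 1e-12
a = mond_acceleration(a_n, mu_simple)
a_std = mond_acceleration(a_n, mu_standard)
print(a, a_std, np.sqrt(a_n * a0))
```

Since μ̃ → 1 at large arguments, Newtonian dynamics is recovered for a ≫ a0, while the deep-MOND limit reproduces the a = sqrt(a_N a0) scaling behind flat rotation curves.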
Dynamic whole-body PET parametric imaging: II. Task-oriented statistical estimation
NASA Astrophysics Data System (ADS)
Karakatsanis, Nicolas A.; Lodge, Martin A.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman
2013-10-01
In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15-20 cm) of a single-bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole-body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value.
The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical 18F-deoxyglucose patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30 min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole-body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection.
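The Patlak OLS step described in the companion study can be sketched on synthetic data; the input function and kinetic parameter values below are illustrative, not clinical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic plasma input function and an irreversibly trapped tracer,
# with illustrative (not clinical) parameter values.
t = np.linspace(1.0, 60.0, 30)                 # minutes
cp = 10.0 * np.exp(-0.1 * t) + 1.0             # plasma concentration
cp_int = np.cumsum(cp) * (t[1] - t[0])         # crude running integral of cp
ki_true, v_true = 0.05, 0.4
ct = ki_true * cp_int + v_true * cp            # Patlak-model tissue curve
ct *= 1.0 + 0.02 * rng.standard_normal(t.size) # measurement noise

# Patlak transformation: y = Ct/Cp versus x = (integral of Cp)/Cp becomes a
# straight line with slope Ki (uptake rate) and intercept V (blood
# distribution volume), estimated here by ordinary least squares.
x = cp_int / cp
y = ct / cp
ki_hat, v_hat = np.polyfit(x, y, 1)
print(ki_hat, v_hat)
```

The hybrid framework of the paper replaces this plain OLS fit voxel-wise, switching estimators according to the Patlak correlation coefficient, but the underlying linearization is the same.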
Statistics of voltage drop in distribution circuits: a dynamic programming approach
Turitsyn, Konstantin S
2010-01-01
We analyze a power distribution line with high penetration of distributed generation and strong variations of power consumption and generation levels. In the presence of uncertainty the statistical description of the system is required to assess the risks of power outages. In order to find the probability of exceeding the constraints for voltage levels we introduce the probability distribution of maximal voltage drop and propose an algorithm for finding this distribution. The algorithm is based on the assumption of random but statistically independent distribution of loads on buses. Linear complexity in the number of buses is achieved through the dynamic programming technique. We illustrate the performance of the algorithm by analyzing a simple 4-bus system with high variations of load levels.
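A toy version of the dynamic-programming idea, with integer-valued per-segment drops (a deliberate simplification of the circuit physics), tracks the joint distribution of the cumulative drop and its running maximum bus by bus:

```python
from collections import defaultdict

# Toy feeder: per-segment voltage-drop increments are independent and
# integer-valued (-1: net distributed generation, 0, +1: net load).
n_bus = 4
drops = [(-1, 0.3), (0, 0.4), (1, 0.3)]   # (increment, probability)

# Bus-by-bus update of the joint distribution of (cumulative drop,
# running maximum drop), in the spirit of the paper's dynamic program.
state = {(0, 0): 1.0}
for _ in range(n_bus):
    nxt = defaultdict(float)
    for (cum, mx), prob in state.items():
        for d, p in drops:
            c = cum + d
            nxt[(c, max(mx, c))] += prob * p
    state = dict(nxt)

# Marginalize out the final cumulative drop to obtain the distribution of
# the maximal voltage drop along the line.
max_drop_dist = defaultdict(float)
for (cum, mx), prob in state.items():
    max_drop_dist[mx] += prob
print(sorted(max_drop_dist.items()))
```

From this distribution the probability of exceeding a voltage constraint is a single tail sum; the paper's algorithm achieves linear complexity in the number of buses by working with a fixed discretization of the state.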
A CATLINE SDI for the reference department: collection development and current awareness tool.
McKinin, E J
1987-01-01
A method for using a CATLINE SDI (selective dissemination of information) as a current awareness and collection development tool for the health sciences reference department is described. This paper reports three years of experience with this service in an academic health sciences library. It emphasizes the exploitation of four data elements in the CATLINE file: the Abstracting and Indexing Tag (AI) Data Element; the Shelving Location (SL) Data Element; the MeSH Heading (MH) Data Element; the Subheading Qualifier (SH) Data Element. PMID:3329923
NASA Astrophysics Data System (ADS)
Shaklan, Stuart B.; Marchen, Luis; Peterson, Lee; Levine, Marie B.
2014-08-01
We have combined our Excel-based coronagraph dynamics error budget spreadsheets with DAKOTA scripts to perform statistical analyses of the predicted dark-hole contrast. Whereas in the past we have reported the expected contrast level for an input set of allocated parameters, we now generate confidence intervals for the predicted contrast. Further, we explore the sensitivity to individual or groups of parameters and model uncertainty factors through aleatory-epistemic simulations based on a surrogate model fitted to the error budget. We show example results for a generic high-contrast coronagraph.
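The confidence-interval idea can be illustrated by Monte Carlo sampling of a surrogate model. Everything in the sketch below (a generic callable surrogate, independent normal parameter uncertainties) is an illustrative assumption, not the DAKOTA/Excel workflow itself:

```python
import numpy as np

def contrast_confidence_interval(surrogate, param_dists, n=20000,
                                 level=0.95, seed=0):
    """Monte Carlo confidence interval for a scalar contrast metric.

    surrogate: callable mapping a parameter vector to contrast (stand-in
    for a surrogate fitted to the error budget).
    param_dists: list of (mean, sigma) for independent normal uncertainties
    on each error-budget parameter.
    """
    rng = np.random.default_rng(seed)
    # sample all parameters, one column per error-budget term
    samples = np.column_stack([rng.normal(m, s, n) for m, s in param_dists])
    contrasts = np.array([surrogate(p) for p in samples])
    lo, hi = np.quantile(contrasts, [(1 - level) / 2, 1 - (1 - level) / 2])
    return lo, hi
```

Replacing the single-point contrast prediction with such an interval is what distinguishes this statistical treatment from reporting one contrast value for one allocated parameter set.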
Dynamics and statistics of wave-particle interactions in a confined geometry
NASA Astrophysics Data System (ADS)
Gilet, Tristan
2014-11-01
A walker is a droplet bouncing on a liquid surface and propelled by the waves that it generates. This macroscopic wave-particle association exhibits behaviors reminiscent of quantum particles. This article presents a toy model of the coupling between a particle and a confined standing wave. The resulting two-dimensional iterated map captures many features of the walker dynamics observed in different configurations of confinement. These features include the time decomposition of the chaotic trajectory in quantized eigenstates and the particle statistics being shaped by the wave. It shows that deterministic wave-particle coupling expressed in its simplest form can account for some quantumlike behaviors.
NASA Astrophysics Data System (ADS)
Haas, Rabea; Pinto, Joaquim G.
2013-04-01
The occurrence of mid-latitude windstorms is related to strong socio-economic effects. For detailed and reliable regional impact studies, large datasets of high-resolution wind fields are required. In this study, a statistical downscaling approach in combination with dynamical downscaling is introduced to derive storm related gust speeds on a high-resolution grid over Europe. Multiple linear regression models are trained using reanalysis data and wind gusts from regional climate model simulations for a sample of 100 top ranking windstorm events. The method is computationally inexpensive and reproduces individual windstorm footprints adequately. Compared to observations, the results for Germany are at least as good as pure dynamical downscaling. This new tool can be easily applied to large ensembles of general circulation model simulations and thus contribute to a better understanding of the regional impact of windstorms based on decadal and climate change projections.
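The core of such a statistical-dynamical approach is one multiple linear regression per high-resolution grid point, trained on large-scale predictors for the storm sample. A schematic sketch follows; the array shapes and names are assumptions, not the authors' setup:

```python
import numpy as np

def fit_gust_model(predictors, gusts):
    """Fit a multiple linear regression for one target grid point.

    predictors: (n_events, n_predictors) large-scale fields (e.g. reanalysis
    circulation quantities) for the training storm events.
    gusts: (n_events,) RCM-simulated gust speed at this grid point.
    Returns the regression coefficients, intercept last.
    """
    A = np.column_stack([predictors, np.ones(len(predictors))])
    coef, *_ = np.linalg.lstsq(A, gusts, rcond=None)
    return coef

def predict_gust(coef, predictors):
    """Apply a trained model to new large-scale predictors (e.g. GCM output)."""
    A = np.column_stack([predictors, np.ones(len(predictors))])
    return A @ coef
```

Once trained, such models are cheap to apply to large GCM ensembles, which is the computational advantage the abstract emphasizes over rerunning the dynamical downscaling.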
Hotspots of boundary accumulation: dynamics and statistics of micro-swimmers in flowing films.
Mathijssen, Arnold J T M; Doostmohammadi, Amin; Yeomans, Julia M; Shendruk, Tyler N
2016-02-01
Biological flows over surfaces and interfaces can result in accumulation hotspots or depleted voids of microorganisms in natural environments. Understanding the mechanisms that lead to such distributions is essential for understanding biofilm initiation. Using a systematic framework, we resolve the dynamics and statistics of swimming microbes within flowing films, considering the impact of confinement through steric and hydrodynamic interactions, flow and motility, along with Brownian and run-tumble fluctuations. Micro-swimmers can be peeled off the solid wall above a critical flow strength. However, the interplay of flow and fluctuations causes organisms to migrate back towards the wall above a secondary critical value. Hence, faster flows may not always be the most efficacious strategy to discourage biofilm initiation. Moreover, we find run-tumble dynamics commonly used by flagellated microbes to be an intrinsically more successful strategy to escape from boundaries than equivalent levels of enhanced Brownian noise in ciliated organisms. PMID:26841796
NASA Astrophysics Data System (ADS)
Arbona, A.; Bona, C.; Miñano, B.; Plastino, A.
2014-09-01
The definition of complexity through Statistical Complexity Measures (SCM) has recently seen major improvements. Mostly, the effort is concentrated in measures on time series. We propose a SCM definition for spatial dynamical systems. Our definition is in line with the trend to combine entropy with measures of structure (such as disequilibrium). We study the behaviour of our definition against the vectorial noise model of Collective Motion. From a global perspective, we show how our SCM is minimal at both the microscale and macroscale, while it reaches a maximum at the ranges that define the mesoscale in this model. From a local perspective, the SCM is minimum both in highly ordered and disordered areas, while it reaches a maximum at the edges between such areas. These characteristics suggest this is a good candidate for detecting the mesoscale of arbitrary dynamical systems as well as regions where the complexity is maximal in such systems.
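The "entropy times disequilibrium" construction can be illustrated with the classic LMC-style statistical complexity measure on a discrete distribution, which is likewise minimal for both perfectly ordered and perfectly disordered states. This generic form is an illustration of the idea, not the authors' spatial SCM:

```python
import math

def lmc_complexity(p):
    """LMC-style statistical complexity of a discrete distribution p.

    C = H * D, where H is the normalized Shannon entropy and D is the
    disequilibrium (squared distance from the uniform distribution).
    C vanishes both for a pure state (H = 0) and for the uniform
    distribution (D = 0), peaking in between.
    """
    n = len(p)
    h = -sum(q * math.log(q) for q in p if q > 0) / math.log(n)
    d = sum((q - 1.0 / n) ** 2 for q in p)
    return h * d
```

This is exactly the "minimal at both extremes, maximal in between" behaviour the abstract reports for the mesoscale of the vectorial noise model.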
Neutral dynamics with environmental noise: Age-size statistics and species lifetimes
NASA Astrophysics Data System (ADS)
Kessler, David; Suweis, Samir; Formentin, Marco; Shnerb, Nadav M.
2015-08-01
Neutral dynamics, where taxa are assumed to be demographically equivalent and their abundance is governed solely by the stochasticity of the underlying birth-death process, has proved itself as an important minimal model that accounts for many empirical datasets in genetics and ecology. However, the restriction of the model to demographic [O(√N)] noise yields relatively slow dynamics that appears to be in conflict with both short-term and long-term characteristics of the observed systems. Here we analyze two of these problems, age-size relationships and species extinction time, in the framework of a neutral theory with both demographic and environmental stochasticity. It turns out that environmentally induced variations of the demographic rates control the long-term dynamics and modify dramatically the predictions of the neutral theory with demographic noise only, yielding much better agreement with empirical data. We consider two prototypes of "zero mean" environmental noise, one which is balanced with regard to the arithmetic abundance, another balanced in the logarithmic (fitness) space, study their species lifetime statistics, and discuss their relevance to realistic models of community dynamics.
SERVIR's Contributions and Benefits to Belize thru Spatial Data Infrastructure (SDI) Development
NASA Technical Reports Server (NTRS)
Irwin, Daniel E.
2006-01-01
Dan Irwin, the SERVIR Project Manager, is being honored with the privilege of delivering the opening remarks at Belize's second celebration of GIS Day, a weeklong event to be held at the University of Belize's campus in the nation's capital, Belmopan. The request has been extended by the GIS Day Planning Committee, which operates under the auspices of Belize's Ministry of Natural Resources & the Environment, which is the focal ministry for SERVIR. In the 20-30 min. allotted for the opening remarks, the SERVIR Project Manager will expound on how SERVIR, operating under the auspices of NASA's Ecological Forecasting Program, contributes to spatial data infrastructure (SDI) development in Belize. NASA's contributions to the region - particularly work under the Mesoamerican Biological Corridor - will be highlighted. Continuing, the remarks will discuss SERVIR's role in Belize's steadily expanding SDI, particularly in the context of delivering integrated decision support products via web-based infrastructure. The remarks will close with a call to the parties assembled to work together in the application of Earth Observation Systems technologies for the benefit of Belizean society as a whole. NASA's strong presence in Belize's GIS Day celebrations will be highlighted as sustained goodwill of the American people - in partial fulfillment of goals set forth under the Global Earth Observation System of Systems (GEOSS).
NASA Astrophysics Data System (ADS)
Huth, Radan; Mikšovský, Jiří; Štěpánek, Petr; Belda, Michal; Farda, Aleš; Chládová, Zuzana; Pišoft, Petr
2015-05-01
Minimum and maximum temperature in two regional climate models and five statistical downscaling models are validated according to a unified set of criteria that have a potential relevance for impact assessments: persistence (temporal autocorrelations), spatial autocorrelations, extreme quantiles, skewness, kurtosis, and the degree of fit to observed data on both short and long time scales. The validation is conducted on two dense grids in central Europe as follows: (1) a station network and (2) a grid with a resolution of 10 km. The gridded dataset is not contaminated by artifacts of the interpolation procedure; therefore, we claim that using a gridded dataset as a validation base is a valid approach. The fit to observations on short time scales is equally good for the statistical downscaling (SDS) models and regional climate models (RCMs) in winter, while it is much better for the SDS models in summer. The reproduction of variability on long time scales, expressed as linear trends, is similarly successful by both SDS models and RCMs. Results for other criteria suggest that there is no justification for preferring dynamical models at the expense of statistical models, and vice versa. The non-linear SDS models do not outperform the linear ones.
Sivasamy, Aneetha Avalappampatty; Sundan, Bose
2015-01-01
The ever expanding communication requirements in today's world demand extensive and efficient network systems with equally efficient and reliable security features integrated for safe, confident, and secured communication and data transfer. Providing effective security protocols for any network environment, therefore, assumes paramount importance. Attempts are made continuously for designing more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling's T(2) method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling's T(2) statistical model and necessary profiles have been generated based on the T-square distance metrics. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified either as normal or attack types. Performance of the model, as evaluated through validation and testing using KDD Cup'99 dataset, has shown very high detection rates for all classes with low false alarm rates. Accuracy of the model presented in this work, in comparison with the existing models, has been found to be much better. PMID:26357668
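The detection statistic at the heart of this approach is the Hotelling T² (squared Mahalanobis) distance of an observed traffic profile from the baseline profile built from normal traffic. A minimal sketch follows; the feature layout and thresholding comment are assumptions, not the authors' pipeline:

```python
import numpy as np

def hotelling_t2(train, x):
    """Hotelling's T^2 distance of observation x from a baseline profile.

    train: (n, p) matrix of normal-traffic feature vectors used to build
    the profile (sample mean and covariance); x: (p,) observed vector.
    An observation is flagged as an attack when its T^2 value exceeds a
    threshold chosen from the baseline distribution (e.g. via the central
    limit theorem, as in the abstract).
    """
    mu = train.mean(axis=0)
    cov = np.cov(train, rowvar=False)
    diff = x - mu
    # T^2 = diff' * cov^{-1} * diff, computed via a linear solve
    return float(diff @ np.linalg.solve(cov, diff))
```

Preprocessing (feature extraction from raw traffic) precedes this step in the described model; here the features are assumed to be given.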
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Marks, David A.; Travis, James E.
2014-01-01
A statistical method is developed for comparing precipitation data from measurements performed by (hypothetical) perfect instruments using a recently developed stochastic model of rainfall. The stochastic dynamical equation that describes the underlying random process naturally leads to a consistent spectrum and incorporates the subtle interdependence of the length and time scales governing the statistical fluctuations of the rain rate field. The main attraction of such a model is that the complete set of second-moment statistics embodied in the space-time covariance of both the area-averaged instantaneous rain rate (represented by radar or passive microwave data near the ground) and the time-averaged point rain rate (represented by rain gauge data) can be expressed as suitable integrals over the spectrum. With the help of this framework, the model allows one to carry out a faithful intercomparison of precipitation estimates derived from radar or passive microwave remote sensing over an area with direct observations by rain gauges or disdrometers, assuming all the measuring instruments to be ideal. A standard linear regression analysis approach to the intercomparison of radar and gauge rain rate estimates is formulated in terms of the appropriate observed and model-derived quantities. We also estimate the relative sampling error as well as separate absolute sampling errors for radar and gauge measurements of rainfall from the spectral model.
A Statistical Approach for the Concurrent Coupling of Molecular Dynamics and Finite Element Methods
NASA Technical Reports Server (NTRS)
Saether, E.; Yamakov, V.; Glaessgen, E.
2007-01-01
Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, increasing the size of the MD domain quickly presents intractable computational demands. A robust approach to surmount this computational limitation has been to unite continuum modeling procedures such as the finite element method (FEM) with MD analyses thereby reducing the region of atomic scale refinement. The challenging problem is to seamlessly connect the two inherently different simulation techniques at their interface. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the typical boundary value problem used to define a coupled domain. The method uses statistical averaging of the atomistic MD domain to provide displacement interface boundary conditions to the surrounding continuum FEM region, which, in return, generates interface reaction forces applied as piecewise constant traction boundary conditions to the MD domain. The two systems are computationally disconnected and communicate only through a continuous update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM) as opposed to a direct coupling method where interface atoms and FEM nodes are individually related. The methodology is inherently applicable to three-dimensional domains, avoids discretization of the continuum model down to atomic scales, and permits arbitrary temperatures to be applied.
NASA Astrophysics Data System (ADS)
Hellström, Cecilia; Chen, Deliang
2003-11-01
A prerequisite of a successful statistical downscaling is that large-scale predictors simulated by the General Circulation Model (GCM) must be realistic. It is assumed here that features smaller than the GCM resolution are important in determining the realism of the large-scale predictors. It is tested whether a three-step method can improve conventional one-step statistical downscaling. The method uses predictors that are upscaled from a dynamical downscaling instead of predictors taken directly from a GCM simulation. The method is applied to downscaling of monthly precipitation in Sweden. The statistical model used is a multiple regression model that uses indices of large-scale atmospheric circulation and 850-hPa specific humidity as predictors. Data from two GCMs (HadCM2 and ECHAM4) and two RCM experiments of the Rossby Centre model (RCA1) driven by the GCMs are used. It is found that upscaled RCA1 predictors capture the seasonal cycle better than those from the GCMs, and hence increase the reliability of the downscaled precipitation. However, there are only slight improvements in the simulation of the seasonal cycle of downscaled precipitation. Due to the cost of the method and the limited improvements in the downscaling results, the three-step method is not justified to replace the one-step method for downscaling of Swedish precipitation.
Statistical analysis of nonlinear dynamical systems using differential geometric sampling methods.
Calderhead, Ben; Girolami, Mark
2011-12-01
Mechanistic models based on systems of nonlinear differential equations can help provide a quantitative understanding of complex physical or biological phenomena. The use of such models to describe nonlinear interactions in molecular biology has a long history; however, it is only recently that advances in computing have allowed these models to be set within a statistical framework, further increasing their usefulness and binding modelling and experimental approaches more tightly together. A probabilistic approach to modelling allows us to quantify uncertainty in both the model parameters and the model predictions, as well as in the model hypotheses themselves. In this paper, the Bayesian approach to statistical inference is adopted and we examine the significant challenges that arise when performing inference over nonlinear ordinary differential equation models describing cell signalling pathways and enzymatic circadian control; in particular, we address the difficulties arising owing to strong nonlinear correlation structures, high dimensionality and non-identifiability of parameters. We demonstrate how recently introduced differential geometric Markov chain Monte Carlo methodology alleviates many of these issues by making proposals based on local sensitivity information, which ultimately allows us to perform effective statistical analysis. Along the way, we highlight the deep link between the sensitivity analysis of such dynamic system models and the underlying Riemannian geometry of the induced posterior probability distributions. PMID:23226584
An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics
NASA Astrophysics Data System (ADS)
Turkington, Bruce
2013-08-01
A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.
Statistical dynamic image reconstruction in state-of-the-art high-resolution PET
NASA Astrophysics Data System (ADS)
Rahmim, Arman; Cheng, Ju-Chieh; Blinder, Stephan; Camborde, Maurie-Laure; Sossi, Vesna
2005-10-01
Modern high-resolution PET is now more than ever in need of scrutiny into the nature and limitations of the imaging modality itself as well as image reconstruction techniques. In this work, we have reviewed, analysed and addressed the following three considerations within the particular context of state-of-the-art dynamic PET imaging: (i) the typical average numbers of events per line-of-response (LOR) are now (much) less than unity, (ii) due to the physical and biological decay of the activity distribution, one requires robust and efficient reconstruction algorithms applicable to a wide range of statistics and (iii) the computational considerations in dynamic imaging are much enhanced (i.e., more frames to be stored and reconstructed). Within the framework of statistical image reconstruction, we have argued theoretically and shown experimentally that the sinogram non-negativity constraint (when using the delayed-coincidence and/or scatter-subtraction techniques) is especially expected to result in an overestimation bias. Subsequently, two schemes are considered: (a) subtraction techniques in which an image non-negativity constraint has been imposed and (b) implementation of random and scatter estimates inside the reconstruction algorithms, thus enabling direct processing of Poisson-distributed prompts. Both techniques are able to remove the aforementioned bias, while the latter, being better conditioned theoretically, is able to exhibit superior noise characteristics. We have also elaborated upon and verified the applicability of the accelerated list-mode image reconstruction method as a powerful solution for accurate, robust and efficient dynamic reconstructions of high-resolution data (as well as a number of additional benefits in the context of state-of-the-art PET).
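Scheme (b), keeping the random and scatter estimates inside the algorithm, amounts to adding the estimated background to the forward projection in a Poisson ML-EM loop rather than subtracting it from the measured data. A minimal dense-matrix sketch is given below; a real system matrix is sparse or handled in list mode, and the iteration count here is arbitrary:

```python
import numpy as np

def mlem_with_background(A, prompts, background, n_iter=200):
    """Poisson ML-EM reconstruction with randoms/scatter kept in the model.

    Instead of subtracting delayed coincidences and scatter from the
    sinogram (which breaks Poisson statistics and, with a non-negativity
    constraint, biases the result), the background estimate b enters the
    forward model: prompts ~ Poisson(A x + b).

    A: (n_lors, n_voxels) system matrix; prompts, background: (n_lors,).
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)  # sensitivity image (back-projection of ones)
    for _ in range(n_iter):
        expected = A @ x + background
        x *= (A.T @ (prompts / expected)) / sens
    return x
```

The multiplicative update keeps the image non-negative by construction, which is why processing the prompts directly avoids the overestimation bias discussed for the subtraction schemes.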
Statistical Testing of Dynamically Downscaled Rainfall Data for the East Coast of Australia
NASA Astrophysics Data System (ADS)
Parana Manage, Nadeeka; Lockart, Natalie; Willgoose, Garry; Kuczera, George
2015-04-01
This study performs a validation of statistical properties of downscaled climate data, concentrating on the rainfall which is required for hydrology predictions used in reservoir simulations. The data sets used in this study have been produced by the NARCliM (NSW/ACT Regional Climate Modelling) project, which provides a dynamically downscaled climate dataset for South-East Australia at 10 km resolution. NARCliM has used three configurations of the Weather Research and Forecasting Regional Climate Model and four different GCMs (MIROC-medres 3.2, ECHAM5, CCCMA 3.1 and CSIRO mk3.0) from CMIP3 to perform twelve ensembles of simulations for current and future climates. In addition to the GCM-driven simulations, three control simulations driven by the NCEP/NCAR reanalysis for the entire period 1950-2009 have also been performed by the project. The validation has been performed in the Upper Hunter region of Australia, which is a semi-arid to arid region 200 kilometres north-west of Sydney. The analysis used the time series of downscaled rainfall data and ground-based measurements for selected Bureau of Meteorology rainfall stations within the study area. The initial testing of the gridded rainfall was focused on the autoregressive characteristics of the time series because the reservoir performance depends on long-term average runoffs. A correlation analysis was performed for fortnightly, monthly and annual averaged time resolutions, showing a good statistical match between reanalysis and ground truth. The spatial variation of the statistics of the gridded rainfall series was calculated and plotted at the catchment scale. The spatial correlation analysis shows a poor agreement between NARCliM data and ground truth at each time resolution. However, the spatial variability plots show a strong link between the statistics and orography at the catchment scale.
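The correlation checks at several time resolutions reduce to correlating window-averaged series. A small helper of the kind that could be used is sketched below (illustrative, not the study's code; window lengths such as 14 days for fortnightly averages are assumptions):

```python
import numpy as np

def aggregated_correlation(model, obs, window):
    """Pearson correlation between modelled and observed rainfall after
    averaging both series over non-overlapping windows (e.g. window=14
    for fortnightly, ~30 for monthly, 365 for annual aggregation of
    daily data)."""
    n = (len(model) // window) * window  # drop the incomplete last window
    m = np.asarray(model)[:n].reshape(-1, window).mean(axis=1)
    o = np.asarray(obs)[:n].reshape(-1, window).mean(axis=1)
    return float(np.corrcoef(m, o)[0, 1])
```

Running this for each station/grid-cell pair at the three resolutions reproduces the kind of comparison described in the abstract.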
NASA Astrophysics Data System (ADS)
Laugel, Amélie; Menendez, Melisa; Benoit, Michel; Mattarolo, Giovanni; Mendez, Fernando
2013-04-01
Wave climate forecasting is a major issue for numerous marine and coastal related activities, such as offshore industries, flooding risk assessment and wave energy resource evaluation, among others. Generally, there are two main ways to predict the impacts of climate change on the wave climate at regional scale: the dynamical and the statistical downscaling of a GCM (Global Climate Model). In this study, both methods have been applied on the French coast (Atlantic, English Channel and North Sea shorelines) under three climate change scenarios (A1B, A2, B1) simulated with the GCM ARPEGE-CLIMAT, from Météo-France (AR4, IPCC). The aim of the work is to characterise the wave climatology of the 21st century and compare the statistical and dynamical methods, pointing out the advantages and disadvantages of each approach. The statistical downscaling method proposed by the Environmental Hydraulics Institute of Cantabria (Spain) has been applied (Menendez et al., 2011). At a particular location, the sea-state climate (predictand Y) is defined as a function, Y=f(X), of several atmospheric circulation patterns (predictor X). Assuming these climate associations between predictor and predictand are stationary, the statistical approach has been used to project the future wave conditions with reference to the GCM. The statistical relations between predictor and predictand have been established over 31 years, from 1979 to 2009. The predictor is built as the 3-days-averaged squared sea level pressure gradient from the hourly CFSR database (Climate Forecast System Reanalysis, http://cfs.ncep.noaa.gov/cfsr/). The predictand has been extracted from the 31-year hindcast sea-state database ANEMOC-2 performed with the 3G spectral wave model TOMAWAC (Benoit et al., 1996), developed at EDF R&D LNHE and Saint-Venant Laboratory for Hydraulics and forced by the CFSR 10m wind field.
Significant wave height, peak period and mean wave direction have been extracted at hourly resolution at 110 coastal locations along the French coast. The model, based on the BAJ parameterization of the source terms (Bidlot et al., 2007), was calibrated against ten years of GlobWave altimeter observations (2000-2009) and validated against deep and shallow water buoy observations. The dynamical downscaling method has been performed with the same numerical wave model TOMAWAC used for building ANEMOC-2. Forecast simulations are forced by the 10m wind fields of ARPEGE-CLIMAT (A1B, A2, B1) from 2010 to 2100. The model covers the Atlantic Ocean and uses a spatial resolution along the French and European coasts of 10 and 20 km respectively. The results of the model are stored with a time resolution of one hour. References: Benoit, M., Marcos, F., and Becq, F. (1996). Development of a third generation shallow-water wave model with unstructured spatial meshing. Proc. 25th Int. Conf. on Coastal Eng. (ICCE'1996), Orlando (Florida, USA), pp 465-478. Bidlot, J.-R., Janssen, P. and Abdalla, S. (2007). A revised formulation of ocean wave dissipation and its model impact, technical memorandum ECMWF n°509. Menendez, M., Mendez, F.J., Izaguirre, C., Camus, P., Espejo, A., Canovas, V., Minguez, R., Losada, I.J., Medina, R. (2011). Statistical Downscaling of Multivariate Wave Climate Using a Weather Type Approach, 12th International Workshop on Wave Hindcasting and Forecasting and 3rd Coastal Hazard Symposium, Kona (Hawaii).
Enhancing dynamic graphical analysis with the Lisp-Stat language and the ViSta statistical program.
Ledesma, Rubén; Molina, J Gabriel; Young, Forrest W
2005-11-01
Presented is a sample of computerized methods aimed at multidimensional scaling and psychometric item analysis that offer a dynamic graphical interface to execute analyses and help visualize the results. These methods show how the Lisp-Stat programming language and the ViSta statistical program can be jointly applied to develop powerful computer applications that enhance dynamic graphical analysis methods. The feasibility of this combined strategy relies on two main features: (1) The programming architecture of ViSta enables users to add new statistical methods as plug-ins, which are integrated into the program environment and can make use of all the functions already available in ViSta (e.g., data manipulation, editing, printing); and (2) the set of powerful statistical and graphical functions integrated into the Lisp-Stat programming language provides the means for developing statistical methods with dynamic graphical visualizations, which can be implemented as ViSta plug-ins. PMID:16629303
NASA Astrophysics Data System (ADS)
Busuioc, A.; Dumitrescu, A.; Baciu, M.; Cazacioc, L.
2012-04-01
Changes in monthly temperature and precipitation at stations in two small areas placed in western (Banat Plain) and southwestern (Oltenia Plain) part of Romania for the periods 2021-2050 and 2071-2100 (compared to 1961-1990), under the IPCC A1B scenario, are estimated through two downscaling techniques (statistical-SDM and dynamical-RCM). These results were obtained within the SEE project CC-WaterS (www.ccwaters.eu). The statistical downscaling technique uses a model based on canonical correlation analysis (CCA). New improvement is achieved in this paper comparing to other previous studies, mainly referring to the combination of the local standardized temperature and precipitation anomalies (11 stations) in a single spatial vector considered as predictand, giving more physical consistence to the results. Various predictors were tested to find the optimum statistical downscaling model (SDM): the temperature at 850 hPa (T850), sea level pressure (SLP) and specific humidity at 700 hPa (SH700), either used individually or together. The observed predictand data are based on homogenized dataset. It was found that the T850 is good predictor for all seasons but the combination between the three predictors gives higher skill (in terms of explained variance) for winter and similar skill for other seasons. From physical reasons both versions were retained in order to analyse the uncertainty (similar skill should give similar future climate change signal if the statistical relationship will be also valid in the future and all predictors capture the entire climate change signal). 
The model was fitted to the data set for the period 1961-1990 and validated on the independent data set 1991-2007. The optimum statistical downscaling model, established over the independent data set for each season, was then applied to predictors from the A1B scenario simulations of the ENSEMBLES RCMs (http://ensemblesrt3.dmi.dk), RegCM3 and CNRM, driven by the global models ECHAM5 (run 3) and ARPEGE, respectively. To estimate the uncertainty related to the downscaling technique (dynamical or statistical), the results achieved with the statistical downscaling model (SDM) applied to the global model ECHAM5 were compared to those derived directly from 5 RCMs (including RegCM3) with the same driver, as well as to those derived from the SDM applied to the two RCMs mentioned above. The final ensemble, built from 8 ENSEMBLES RCM outputs and the SDM outputs, was used to estimate the uncertainty associated with the climate change signal at the 11 stations. The optimum (most plausible) climate change signal (represented by the ensemble average) and the model spread (represented by the standard deviation of the 10 values) were computed. The uncertainties related to the skill of the RCMs/GCM in reproducing the predictor variability are analysed in detail for the pair RegCM3-ECHAM5.
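The core of the combined predictand described above, standardized station anomalies of two variables stacked into a single spatial vector, can be sketched as follows. This is an illustrative reconstruction with synthetic data, not the study's code; the function names, station values, and period lengths are assumptions made for the example.

```python
import numpy as np

def standardized_anomalies(series, ref):
    """Standardize a (time, station) array against a reference period mean/std."""
    mu = ref.mean(axis=0)
    sd = ref.std(axis=0, ddof=1)
    return (series - mu) / sd

# Hypothetical data: 30 years x 11 stations of temperature and precipitation
rng = np.random.default_rng(0)
temp = rng.normal(10.0, 2.0, size=(30, 11))
prec = rng.gamma(2.0, 30.0, size=(30, 11))

t_anom = standardized_anomalies(temp, temp)   # reference period = full period here
p_anom = standardized_anomalies(prec, prec)

# Stack both variables into a single spatial predictand vector per time step,
# so that the CCA sees one physically consistent predictand field
predictand = np.hstack([t_anom, p_anom])      # shape (30, 22)
print(predictand.shape)                       # (30, 22)
```

The stacked vector is what a CCA-based model would then pair with large-scale predictor fields such as T850, SLP and SH700.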
Collisional statistics and dynamics of two-dimensional hard-disk systems: From fluid to solid.
Taloni, Alessandro; Meroz, Yasmine; Huerta, Adrián
2015-08-01
We perform extensive MD simulations of two-dimensional systems of hard disks, focusing on the collisional statistical properties. We analyze the distribution functions of velocity, free flight time, and free path length for packing fractions ranging from the fluid to the solid phase. The behaviors of the mean free flight time and path length between subsequent collisions are found to drastically change in the coexistence phase. We show that single-particle dynamical properties behave analogously in collisional and continuous-time representations, exhibiting apparent crossovers between the fluid and the solid phases. We find that, both in collisional and continuous-time representation, the mean-squared displacement, velocity autocorrelation functions, intermediate scattering functions, and self-part of the van Hove function (propagator) closely reproduce the same behavior exhibited by the corresponding quantities in granular media, colloids, and supercooled liquids close to the glass or jamming transition. PMID:26382368
Defect-phase-dynamics approach to statistical domain-growth problem of clock models
NASA Technical Reports Server (NTRS)
Kawasaki, K.
1985-01-01
The growth of statistical domains in quenched Ising-like p-state clock models with p = 3 or more is investigated theoretically, reformulating the analysis of Ohta et al. (1982) in terms of a phase variable and studying the dynamics of defects introduced into the phase field when the phase variable becomes multivalued. The resulting defect/phase domain-growth equation is applied to the interpretation of Monte Carlo simulations in two dimensions (Kaski and Gunton, 1983; Grest and Srolovitz, 1984), and problems encountered in the analysis of related Potts models are discussed. In the two-dimensional case, the problem is essentially that of a purely dissipative Coulomb gas, with a √t growth law complicated by vertex-pinning effects at small t.
Extended Dynamic Subgraph Statistics Using h-Index Parameterized Data Structures
NASA Astrophysics Data System (ADS)
Eppstein, David; Goodrich, Michael T.; Strash, Darren; Trott, Lowell
We present techniques for maintaining subgraph frequencies in a dynamic graph, using data structures that are parameterized in terms of h, the h-index of the graph. Our methods extend previous results of Eppstein and Spiro for maintaining statistics for undirected subgraphs of size three to directed subgraphs and to subgraphs of size four. For the directed case, we provide a data structure to maintain counts for all 3-vertex induced subgraphs in O(h) amortized time per update. For the undirected case, we maintain the counts of size-four subgraphs in O(h²) amortized time per update. These extensions enable a number of new applications in Bioinformatics and Social Networking research.
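The parameter h here is the graph's h-index: the largest h such that the graph contains at least h vertices of degree at least h. A minimal sketch of computing it from a degree sequence (illustrative only; the paper's data structures maintain such counts incrementally under edge updates):

```python
def h_index(degrees):
    """Largest h such that at least h values in `degrees` are >= h."""
    ds = sorted(degrees, reverse=True)
    h = 0
    for i, d in enumerate(ds):
        if d >= i + 1:   # the (i+1)-th largest degree still supports h = i+1
            h = i + 1
        else:
            break
    return h

print(h_index([4, 1, 1, 1, 1]))  # star K_{1,4}: only one high-degree vertex -> 1
print(h_index([3, 3, 3, 3]))     # complete graph K_4 -> 3
```

Sparse graphs have small h-index, which is what makes the O(h) and O(h²) update bounds attractive in practice.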
Statistical and dynamical study of disease propagation in a small world network
NASA Astrophysics Data System (ADS)
Zekri, Nouredine; Clerc, Jean Pierre
2001-11-01
The statistical properties and dynamics of disease propagation have been studied numerically using a percolation model in a one-dimensional small world network. The parameters chosen correspond to a realistic network of school-age children. It has been found that the percolation threshold decreases as a power law as the shortcut fluctuations increase. It has also been found that the number of infected sites grows exponentially with time, with a rate that depends logarithmically on the density of susceptibles. This behavior provides an interesting way to estimate the serology of a given population from the measured disease growth rate during an epidemic phase. The case in which the infection probability of nearest neighbors differs from that of shortcuts has also been examined; a double diffusion behavior, with slower diffusion between the characteristic times, has been found.
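The growth of infected sites described above can be reproduced qualitatively with a toy susceptible-infected simulation on a ring lattice with random shortcuts. All parameters (network size, shortcut count, infection probability) are invented for illustration and do not correspond to the paper's model:

```python
import random

def si_spread(n=500, shortcuts=50, p_infect=1.0, steps=20, seed=1):
    """SI epidemic on a ring lattice with random shortcut edges.

    Returns the cumulative number of infected nodes after each step.
    """
    rng = random.Random(seed)
    # Nearest-neighbour ring edges plus random shortcuts
    edges = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
    for _ in range(shortcuts):
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            edges[a].add(b)
            edges[b].add(a)
    infected = {0}
    counts = []
    for _ in range(steps):
        new = set()
        for i in infected:
            for j in edges[i]:
                if j not in infected and rng.random() < p_infect:
                    new.add(j)
        infected |= new
        counts.append(len(infected))
    return counts

counts = si_spread()
print(counts)  # monotone growth, accelerated by the shortcut edges
```

With shortcuts the infected count grows much faster than the linear front expansion of a pure ring, which is the small-world effect the study quantifies.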
Technology Transfer Automated Retrieval System (TEKTRAN)
Subsurface drip irrigation (SDI) as with all microirrigation systems is typically only used on crops with greater value. In the U.S. Great Plains region, the typical irrigated crops are the cereal and oil seed crops and cotton. These crops have less economic revenue than typical microirrigated cro...
Technology Transfer Automated Retrieval System (TEKTRAN)
An experimental field moisture controlled subsurface drip irrigation (SDI) system was designed and installed as a field trial in a Vertisol in the Alabama Black Belt region for two years. The system was designed to start hydraulic dosing only when field moisture was below field capacity. Results sho...
NASA Astrophysics Data System (ADS)
Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.
2015-08-01
We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
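Statistical optimization at high altitudes amounts to an error-covariance-weighted combination of a background bending angle profile and the observed profile. The sketch below shows the simplified diagonal-covariance (inverse-variance) version with synthetic numbers; the actual algorithms use full covariance matrices with correlations, so this is an assumed simplification, not the OPSv5.6 or new-method code:

```python
import numpy as np

def optimize_profile(b, o, sig_b, sig_o):
    """Optimal linear combination of background b and observation o,
    assuming uncorrelated (diagonal) errors with std devs sig_b, sig_o."""
    w = sig_b**2 / (sig_b**2 + sig_o**2)   # weight given to the observation
    return b + w * (o - b)

# Synthetic profile: observation noisy at high altitude, background smooth
z = np.linspace(40, 80, 5)                 # impact heights (km), illustrative
b = np.exp(-z / 7.0)                       # background bending angle (arb. units)
o = b * (1 + np.array([0.01, 0.02, -0.05, 0.2, -0.3]))   # noisy observation
sig_b = 0.2 * b                            # 20% background uncertainty
sig_o = np.array([0.01, 0.02, 0.1, 0.5, 1.0]) * b        # obs error grows aloft

x = optimize_profile(b, o, sig_b, sig_o)
# Low altitude: x follows the accurate observation.
# High altitude: x relaxes toward the background.
```

Realistic, geographically varying `sig_b` and `sig_o` profiles are precisely what the new dynamic approach estimates.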
Yang, Jaw-Yen; Yan, Chih-Yuan; Diaz, Manuel; Huang, Juan-Chen; Li, Zhihui; Zhang, Hanxin
2014-01-01
The ideal quantum gas dynamics as manifested by the semiclassical ellipsoidal-statistical (ES) equilibrium distribution derived in Wu et al. (Wu et al. 2012 Proc. R. Soc. A 468, 1799-1823 (doi:10.1098/rspa.2011.0673)) is numerically studied for particles of three statistics. This anisotropic ES equilibrium distribution was derived using the maximum entropy principle and conserves the mass, momentum and energy, but differs from the standard Fermi-Dirac or Bose-Einstein distribution. The present numerical method combines the discrete velocity (or momentum) ordinate method in momentum space and the high-resolution shock-capturing method in physical space. A decoding procedure to obtain the necessary parameters for determining the ES distribution is also devised. Computations of two-dimensional Riemann problems are presented, and various contours of the quantities unique to this ES model are illustrated. The main flow features, such as shock waves, expansion waves and slip lines and their complex nonlinear interactions, are depicted and found to be consistent with existing calculations for a classical gas. PMID:24399919
[D-value estimation of dynamic spectrum based on the statistical methods].
Lin, Ling; Li, Yong-Cheng; Wang, Meng-Jun; Zhou, Mei; Li, Gang; Zhang, Bao-Ju
2012-11-01
To realize noninvasive concentration detection of blood components and overcome the drawbacks of the time-domain single-trial estimation method for the "dynamic spectrum" (DS), a D-value estimation method based on statistical properties is proposed. We extracted the absolute difference between the two corresponding values at each wavelength to form a DS, selected the valid DSs from the DSs of different times by a statistical method, and superimposed and averaged the valid DSs as the final output DS. Data collected from 48 volunteers were processed by both the D-value estimation and the single-trial estimation, and the two methods were compared. Compared with the single-trial estimation, the valid DSs extracted by the D-value estimation were slightly better denoised; the average number of remaining valid DSs improved from 48 to 130; the average mean square error among the valid DSs improved from 0.39 to 0.006; and the speed of data processing increased by nearly 20 times. The new method can significantly improve the quality of DS extraction. PMID:23387187
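The pipeline described (per-wavelength absolute differences forming a DS, statistical selection of valid DSs, then averaging) can be sketched as follows. The specific outlier criterion (DS norm within k standard deviations of the mean norm) and all data are invented for illustration; the paper's exact selection statistic is not given in the abstract:

```python
import numpy as np

def d_value_ds(frame_a, frame_b):
    """Dynamic spectrum of one pulse: per-wavelength absolute difference."""
    return np.abs(frame_a - frame_b)

def average_valid(ds_list, k=2.0):
    """Keep DSs whose norm lies within k std devs of the mean norm, then average."""
    ds = np.asarray(ds_list)
    norms = np.linalg.norm(ds, axis=1)
    mu, sd = norms.mean(), norms.std()
    valid = ds[np.abs(norms - mu) <= k * sd] if sd > 0 else ds
    return valid.mean(axis=0), len(valid)

# Synthetic 8-wavelength spectra: 20 clean pulses plus one motion artifact
rng = np.random.default_rng(0)
pulses = [d_value_ds(rng.normal(1.0, 0.01, 8), rng.normal(0.9, 0.01, 8))
          for _ in range(20)]
pulses.append(np.full(8, 10.0))          # artifact: grossly inflated DS
ds_mean, n_valid = average_valid(pulses)  # artifact rejected before averaging
```

Averaging only the statistically consistent DSs is what suppresses the noise that single-trial estimation passes through.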
NASA Astrophysics Data System (ADS)
Sultan, B.; Oettli, P.; Vrac, M.; Baron, C.
2010-12-01
Global circulation models (GCMs) are increasingly capable of making relevant predictions of seasonal and long-term climate variability, thus improving prospects of predicting impacts on crop yields. This is particularly important for semi-arid West Africa, where climate variability and drought threaten food security. Translating GCM outputs into attainable crop yields is difficult because GCM grid boxes are of larger scale than the processes governing yield, which involve the partitioning of rain among runoff, evaporation, transpiration, drainage and storage at the plot scale. It therefore requires the use of downscaling methods. This study analyzes the performance of both dynamical and statistical downscaling techniques in simulating crop yields at the local scale. A detailed case study is conducted using historical weather data for Senegal, applied to the crop model SARRAH for simulating several tropical cereals (sorghum, millet, maize) at the local scale. This control simulation is used as a benchmark to evaluate a set of Regional Climate Model (RCM) simulations, forced by ERA-Interim, from the ENSEMBLES project, and a statistical downscaling method, the CDF-Transform, used to correct biases in RCM outputs. We first evaluate each climate variable that drives the simulated yield in the control simulation (radiation, rainfall, temperatures). We then simulate crop yields with RCM outputs (with and without applying the CDF-Transform) and evaluate the performance of each RCM with regard to crop yield simulations.
NASA Astrophysics Data System (ADS)
Kruschke, T.; Lorenz, P.; Osinski, R.; Voigt, M.; Leckebusch, G. C.; Ulbrich, U.
2012-04-01
Extreme winter wind storms are major natural catastrophes leading to enormous socio-economic impacts in Europe. The impact of a single event depends on the severity and extent of the event itself, but also on the region hit by the storm, combined with that region's specific exposure of values and vulnerability. The spatial distribution of exposed values and their vulnerability is highly heterogeneous. It is therefore necessary to analyze extremes of surface wind speeds within winter wind storms at high spatial resolution. This study analyzes whether rather simple linear regression methods are suitable for estimating extreme surface wind gusts at high spatial resolution, using different coarse-resolution predictors. The statistical relationships between coarse-resolution predictors from ECMWF reanalysis data and high-resolution (~7 km x 7 km) predictands, i.e. the maximum gusts, are derived from dynamical simulations of extreme historical events performed with the German Weather Service (DWD) model chain GME-COSMO-EU. Validation of the results of the statistical downscaling confirms the high skill of linear regressions for different European sub-regions. Hence, the application of these methods to more extensive datasets in order to estimate extreme wind gusts and their exceedance probabilities or return periods is justified.
Phase statistics of multiply scattered ultrasonic waves in dynamic mesoscopic systems
NASA Astrophysics Data System (ADS)
Page, John H.; Cowan, Michael L.; van Tiggelen, Bart
2005-04-01
In weakly scattering materials, detecting motion by measuring the change in phase of reflected ultrasonic waves forms the basis of the well-known technique of Doppler ultrasound. In strongly scattering media, these methods break down and the technique of diffusing acoustic wave spectroscopy (DAWS) was developed [Cowan et al., Phys. Rev. Lett. 85, 453 (2000)]. To explore the use of phase information to investigate the dynamics of multiply scattering media, the temporal fluctuations in the phase of ultrasonic waves transmitted through a time-varying mesoscopic sample have been measured. We have compared phase statistics and correlations to detailed theoretical predictions based on circular Gaussian (C1) statistics [Genack et al., Phys. Rev. Lett. 82, 412 (1999)]. So far, excellent agreement is found. The cumulative phase is found to undergo a Brownian type process, described by a phase diffusion coefficient. A fundamental relationship between the variance in the phase of the transmitted waves and the fluctuations in the phase of individual scattering paths is predicted theoretically and verified experimentally. This relationship not only gives deeper insight into the physics of the phase of multiply scattered waves, but also provides a new, mesoscopic way of probing the motion of the scatterers in the sample. a) Currently at the Department of Physics, University of Toronto, Toronto, ON, Canada M5S 3E3.
Cai Jing; Read, Paul W.; Larner, James M.; Jones, David R.; Benedict, Stanley H.; Sheng Ke
2008-11-15
Purpose: To investigate the statistical reproducibility of the craniocaudal probability distribution function (PDF) of interfraction lung motion using dynamic magnetic resonance imaging. Methods and Materials: A total of 17 subjects, 9 healthy volunteers and 8 lung tumor patients, underwent two to three continuous 300-s magnetic resonance imaging scans in the sagittal plane, repeated 2 weeks apart. Three pulmonary vessels from different lung regions (upper, middle, and lower) in the healthy subjects and lung tumor patients were selected for tracking, and the displacement PDF reproducibility was evaluated as a function of scan time and frame rate. Results: For both healthy subjects and patients, the PDF reproducibility improved with increased scan time and converged to an equilibrium state during the 300-s scan. The PDF reproducibility at 300 s (mean, 0.86; range, 0.70-0.96) was significantly (p < 0.001) higher than at 5 s (mean, 0.65; range, 0.25-0.79). PDF reproducibility was less sensitive to imaging frame rates above 2 frames/s. Conclusion: A statistically significant improvement in PDF reproducibility was observed with prolonged scan time among the 17 participants. Confirming PDF reproducibility over times much shorter than the stereotactic body radiotherapy delivery duration is a vital part of the initial validation of probability-based treatment planning for stereotactic body radiotherapy for lung cancer.
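The abstract reports reproducibility scores in [0, 1] without specifying the metric; one common choice for comparing two displacement PDFs is the histogram overlap (intersection) coefficient, sketched here with synthetic displacement samples as an assumed illustration, not the paper's actual measure:

```python
import numpy as np

def pdf_overlap(samples_a, samples_b, bins=20, lo=-15.0, hi=15.0):
    """Overlap coefficient of two displacement PDFs: sum of bin-wise minima.

    1.0 means identical binned distributions, 0.0 means disjoint support.
    """
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(samples_a, bins=edges)
    q, _ = np.histogram(samples_b, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.minimum(p, q).sum())

# Synthetic craniocaudal displacements (mm): two scans two weeks apart,
# with a small baseline shift between sessions
rng = np.random.default_rng(0)
week1 = rng.normal(0.0, 5.0, 3000)
week2 = rng.normal(0.5, 5.0, 3000)
print(pdf_overlap(week1, week2))   # high but below 1.0 due to the shift
```

Longer scans draw more samples from the underlying motion PDF, which is why reproducibility converges with scan time.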
An Embedded Statistical Method for Coupling Molecular Dynamics and Finite Element Analyses
NASA Technical Reports Server (NTRS)
Saether, E.; Glaessgen, E.H.; Yamakov, V.
2008-01-01
The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
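The central ESCM step, replacing per-atom kinematic linkage with a statistical average of atomistic displacements in a local volume around each FE interface node, can be sketched as follows. This is schematic only: the positions, displacement field, and averaging radius are invented, and the real method iterates this exchange of boundary conditions between the MD and FEM solvers:

```python
import numpy as np

def nodal_displacement(node_pos, atom_pos, atom_disp, radius):
    """Average the displacements of all atoms within `radius` of an FE node.

    The mean displacement vector serves as that node's interface boundary
    condition, so the FE mesh need not be refined to atomic resolution.
    """
    d = np.linalg.norm(atom_pos - node_pos, axis=1)
    mask = d <= radius
    if not mask.any():
        raise ValueError("no atoms in the averaging volume")
    return atom_disp[mask].mean(axis=0)

rng = np.random.default_rng(0)
atoms = rng.uniform(0.0, 10.0, size=(2000, 3))    # atom positions (arb. units)
disp = np.tile([0.1, 0.0, 0.0], (2000, 1))        # uniform x-displacement field
disp += rng.normal(0.0, 0.01, size=(2000, 3))     # thermal fluctuations

node = np.array([5.0, 5.0, 5.0])                  # one FE interface node
u = nodal_displacement(node, atoms, disp, radius=2.0)
# u recovers ~[0.1, 0, 0]: averaging filters the thermal noise,
# which is what permits finite-temperature coupling.
```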
NASA Astrophysics Data System (ADS)
Funk, C. C.; Shukla, S.; Hoerling, M. P.; Robertson, F. R.; Hoell, A.; Liebmann, B.
2013-12-01
During boreal spring, eastern portions of Kenya and Somalia have experienced more frequent droughts since 1999. Given the region's high levels of food insecurity, better predictions of these droughts could provide substantial humanitarian benefits. We show that dynamical-statistical seasonal climate forecasts, based on the latest generation of coupled atmosphere-ocean and uncoupled atmospheric models, effectively predict boreal spring rainfall in this area. Skill sources are assessed by comparing ensembles driven with full ocean forcing to ensembles driven with ENSO-only sea surface temperatures (SSTs). Our analysis suggests that both ENSO and non-ENSO Indo-Pacific SST forcing have played an important role in the increase in drought frequency. Over the past 30 years, La Niña drought teleconnections have strengthened, while non-ENSO Indo-Pacific convection patterns have also supported increased Western Pacific and decreased East African rainfall. To further examine the relative contributions of ENSO, low-frequency warming and the Pacific Decadal Oscillation, we present decompositions of ECHAM5, GFS, CAM4 and GMAO AMIP simulations. These decompositions suggest that rapid warming in the western Pacific and steeper western-to-central Pacific SST gradients have likely played an important role in the recent intensification of the Walker circulation and the associated increase in East African aridity. A linear combination of time series describing the Pacific Decadal Oscillation and the strength of Indo-Pacific warming is shown to track East African rainfall reasonably well. The talk concludes with a few thoughts on the potentially important interplay of attribution and prediction. At least for recent East African droughts, it appears that a characteristic Indo-Pacific SST and precipitation anomaly pattern can be linked statistically to support forecasts and attribution analyses.
The combination of traditional AGCM attribution analyses with simple yet physically plausible statistical estimation procedures may help us better untangle some climate mysteries.
Dislocation dynamics, plasticity and avalanche statistics using the phase-field crystal model
NASA Astrophysics Data System (ADS)
Angheluta, Luiza
2013-03-01
The plastic deformation of stressed crystalline materials is characterized by intermittency and scaling behavior. The sudden strain bursts arise from collective interactions between depinned crystal defects such as dislocations. Recent experiments on sheared nanocrystals provide insights into the connection between the crystal plasticity and the mean field theory of the depinning transition, based on the similar power-law statistics of avalanche events. However, a complete theoretical formulation of this connection is still lacking, as are high quality numerical data. Phase field crystal modelling provides an efficient numerical approach to simulating the dynamics of dislocations in plastic flows at finite temperature. Dislocations are naturally created as defects in a periodic ground state that is being sheared, without any ad hoc creation and annihilation rules. These crystal defects interact and annihilate with one another, generating a collective effect of avalanches in the global plastic strain rate. We examine the statistics of plastic avalanches both at finite and zero temperatures, and find good agreement with the predictions of the mean field interface depinning theory. Moreover, we predict universal scaling forms for the extreme statistics of avalanches and universal relations between the power-law exponents of avalanche duration, size and extreme value. These results account for the observed power-law distribution of the maximum amplitudes in acoustic emission experiments of crystal plasticity, but are also broadly applicable to other systems in the mean-field interface depinning universality class, ranging from magnets to earthquakes. The work reported here was performed in collaboration with: Georgios Tsekenis, Michael LeBlanc, Patrick Y Chan, Jon Dantzig, Karin Dahmen, and Nigel Goldenfeld. 
The work was supported by the Center for Physics of Geological Processes (Norway) through a post-doctoral grant, the National Science Foundation through grant NSF-DMR-03-25939, NSF_DMR-1005209 and NSF-DMS-1069224 and DOE Subcontract No. 4000076535 (J.D.)
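Avalanche sizes near mean-field depinning follow a power law P(s) ~ s^(-tau); a standard way to extract such an exponent from event data is the continuous maximum-likelihood estimator tau_hat = 1 + n / sum(ln(s_i / s_min)). The sketch below applies this generic estimator to synthetic data drawn from a known power law; it is not the analysis pipeline of the work above:

```python
import random
import math

def powerlaw_mle(sizes, s_min):
    """Continuous MLE for the exponent of P(s) ~ s^(-tau), s >= s_min."""
    tail = [s for s in sizes if s >= s_min]
    return 1.0 + len(tail) / sum(math.log(s / s_min) for s in tail)

# Draw synthetic avalanche sizes from a pure power law with tau = 1.5
# (the mean-field depinning value) via inverse-transform sampling:
# s = s_min * (1 - u)^(-1/(tau - 1)) for uniform u in [0, 1).
rng = random.Random(42)
tau, s_min = 1.5, 1.0
sizes = [s_min * (1.0 - rng.random()) ** (-1.0 / (tau - 1.0))
         for _ in range(20000)]

tau_hat = powerlaw_mle(sizes, s_min)
print(tau_hat)   # close to the true exponent 1.5
```

In practice a cutoff s_min must be chosen carefully, since small avalanches are contaminated by noise and detection thresholds.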
Ni, Bo; He, Fazhi; Yuan, ZhiYong
2015-12-01
Segmenting the lesion areas from ultrasound (US) images is an important step in the intra-operative planning of high-intensity focused ultrasound (HIFU). However, accurate segmentation remains a challenge due to intensity inhomogeneity, blurry boundaries in HIFU US images and the deformation of uterine fibroids caused by patient's breathing or external force. This paper presents a novel dynamic statistical shape model (SSM)-based segmentation method to accurately and efficiently segment the target region in HIFU US images of uterine fibroids. For accurately learning the prior shape information of lesion boundary fluctuations in the training set, the dynamic properties of stochastic differential equation and Fokker-Planck equation are incorporated into SSM (referred to as SF-SSM). Then, a new observation model of lesion areas (named to RPFM) in HIFU US images is developed to describe the features of the lesion areas and provide a likelihood probability to the prior shape given by SF-SSM. SF-SSM and RPFM are integrated into active contour model to improve the accuracy and robustness of segmentation in HIFU US images. We compare the proposed method with four well-known US segmentation methods to demonstrate its superiority. The experimental results in clinical HIFU US images validate the high accuracy and robustness of our approach, even when the quality of the images is unsatisfactory, indicating its potential for practical application in HIFU therapy. PMID:26459767
NASA Astrophysics Data System (ADS)
Khrennikov, Andrei
2011-03-01
The idea that quantum randomness can be reduced to the randomness of classical fields (fluctuating at time and space scales essentially finer than those approachable in modern quantum experiments) is rather old. Various models have been proposed, e.g., stochastic electrodynamics or the semiclassical model. Recently a new model, so-called prequantum classical statistical field theory (PCSFT), was developed. In this model a "quantum system" is just a label for a (so to say "prequantum") classical random field. Quantum averages can be represented as classical field averages. Correlations between observables on subsystems of a composite system can likewise be represented as classical correlations; in particular, this can be done for entangled systems. The creation of such a classical field representation demystifies quantum entanglement. In this paper we show that the quantum dynamics (given by Schrödinger's equation) of entangled systems can be represented as the stochastic dynamics of classical random fields. The "effect of entanglement" is produced by classical correlations that were present at the initial moment of time, cf. the views of Albert Einstein.
NASA Astrophysics Data System (ADS)
Fitzgerald, J.; Farrell, B.
2013-12-01
Equatorial deep jets (EDJs) are persistent, zonally-coherent jets found within one degree of the equator in all ocean basins (Luyten and Swallow, 1976). The jets are characterized by a vertically oscillating ('stacked') structure between ~500-2000m depth, with jet amplitudes on the order of 10 cm/s superimposed upon a large-scale background shear flow. EDJs are a striking feature of the equatorial climate system and play an important role in equatorial ocean transport. However, the physical mechanism responsible for the presence of EDJs remains uncertain. Previous theoretical models for EDJs have suggested mechanisms involving the reflection and constructive interference of equatorially trapped waves (Wunsch 1977, McCreary 1984) and the instability of mixed Rossby-gravity waves with EDJs as the fastest-growing eigenfunction (Hua et al. 2008, Eden et al. 2008). In this work we explore the jet formation mechanism and the parameter dependence of EDJ structure in the idealized theoretical model of the stochastically-driven equatorial beta plane. The model is formulated in three ways: 1) Fully nonlinear equations of motion 2) Quasilinear (or mean-field) dynamics 3) Statistical state dynamics employing a second order closure method (stochastic structural stability theory). Results from the three models are compared, and the implications for both the jet formation and equilibration mechanisms, as well as the role of eddy-eddy nonlinearity in the EDJ system, are discussed.
The scientists' opposition to SDI: How political views affect technical analysis
Tait, G.E.
1989-01-01
This study examines the scientists' opposition to President Reagan's Strategic Defense Initiative (1983-1989), with a focus on the relationship between the scientists' political and strategic opposition to ballistic missile defenses (BMD) and their technical doubts about BMD technologies. The study begins with a review of the scientists' increased influence in United States national security decision making following the development of atomic weapons. It then examines the scientists' role in developing and promoting a theory of arms control based upon mutual societal vulnerability. Because of this theory, a large segment of the American scientific community came to believe that the development of ballistic missile defenses would destabilize the strategic balance, and therefore took the lead in arguing against BMD deployments. These background chapters conclude with an analysis of the scientists' involvement in the political campaign to stop the proposed Sentinel and Safeguard Anti-Ballistic Missile defenses. The study then turns to the contemporary scientific opposition to BMD deployments and the SDI research program. After examining the polls and petitions that identify the scientists opposed to SDI, the study analyzes the tactics that three scientists use in their political effort to prevent BMD deployments. Next, an examination of the political and strategic assumptions behind the scientists' opposition to BMD reveals that a belief in the arms control process and in deterrence by punishment, especially Assured Destruction deterrence, together with a fear of an action-reaction arms race, inspires much of the contemporary opposition. Finally, the scientists' technical doubts about BMD technologies are analyzed through the prism of peer critique. These critiques show that the scientists opposed to BMD deployments use pessimistic and unrealistic assumptions to skew their technical analyses of BMD technologies.
Dynamical and statistical effects of the intrinsic curvature of internal space of molecules.
Teramoto, Hiroshi; Takatsuka, Kazuo
2005-02-15
The Hamiltonian dynamics of a molecule in a translationally and/or rotationally symmetric field is rigorously constrained in its phase space. The relevant dynamical laws should therefore be extracted from these constrained motions. An internal space induced by projecting such a limited phase space onto configuration space is intrinsically curved, even for a system of zero total angular momentum. In this paper we discuss the general effects of this curvedness on the dynamics and structures of molecules in a manner that is invariant with respect to the choice of coordinates. It is shown that the regular coordinate originally defined by Riemann is particularly useful for exposing the curvature correction to the dynamics and statistical properties of molecules. These effects are significant both qualitatively and quantitatively and are studied in two aspects. One is the direct effect on dynamics: a trajectory receives a Lorentz-like force from the curved space, as though it were placed in a magnetic field. The well-known trapping phenomenon at the transition state is analyzed from this point of view. By showing that the trapping force is explicitly described in terms of the curvature of the internal space, we clarify that the trapped motion indeed originates from the curvature of the internal space and hence does not depend on the choice of coordinate system. The other aspect is the effect of phase space volume arising from the curvedness: we formulate a general expression for the curvature correction to the classical density of states and extract its physical significance for molecular geometry and reaction rates in terms of the scalar curvature and the volume loss (or gain) due to the curvature. The transition state theory is reformulated from this point of view and applied to the structural transition of linear chain molecules in the so-called dihedral angle model. It is shown that the curvature effect grows roughly linearly with the size of the molecule. PMID:15743215
NASA Astrophysics Data System (ADS)
Ahn, J.; Lee, J.; Shim, K.; Kim, Y.
2013-12-01
In spite of the dense meteorological observation network over South Korea (average distance between stations: ~12.7 km), the detailed topographical effect is not reflected properly, owing to the mountainous terrain and the fact that observation sites are mostly situated at low altitudes. A model represents such topographical effects well, but due to systematic biases in the model, the general temperature distribution is sometimes far from actual observations. This study attempts to produce a detailed mean temperature distribution for South Korea through a method combining dynamical downscaling and statistical correction. For the dynamical downscaling, a multi-nesting technique is applied to obtain 3-km resolution data over the domain for a period of 10 years (1999-2008). For the correction of systematic biases, the data were divided into a mean part and a perturbation part, with a different correction method applied to each. The mean was corrected by a weighting function while the perturbation was corrected by the self-organizing maps method. The results with correction agree well with the observed pattern compared to those without correction, improving the spatial and temporal correlations as well as the RMSE. In addition, they represent detailed spatial features of temperature, including topographic signals, which cannot be expressed properly by gridded observation. Through comparison of in-situ observations with gridded values after objective analysis, it was found that the detailed structure correctly reflected topographically diverse signals that could not be derived from limited observation data. We expect that the correction method developed in this study can be effectively used for the analyses and projections of climate downscaled by using regional climate models. 
Acknowledgements This work was carried out with the support of Korea Meteorological Administration Research and Development Program under Grant CATER 2012-3083 and Rural Development Administration Cooperative Research Program for Agriculture Science and Technology Development under Grant Project No. PJ009353, Republic of Korea. Reference Ahn, J.-B., Lee, J.-L., and Im, E.-S., 2012: The reproducibility of surface air temperature over South Korea using dynamical downscaling and statistical correction, J. Meteor. Soc. Japan, 90, 493-507, doi: 10.2151/jmsj.2012-404
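The mean/perturbation bias-correction split described above can be sketched as follows. This is a minimal illustration: the weighting value is hypothetical, and the SOM-based correction of the perturbation part is not reproduced (the perturbation passes through unchanged).

```python
import numpy as np

def correct_bias(model_daily, obs_clim_mean, weight=0.8):
    """Split a model temperature series into mean + perturbation and
    correct the mean part by weighting it toward the observed
    climatological mean. (Hypothetical weight; the SOM correction of
    the perturbation part is omitted here.)"""
    model_mean = model_daily.mean()
    perturbation = model_daily - model_mean
    corrected_mean = weight * obs_clim_mean + (1.0 - weight) * model_mean
    return corrected_mean + perturbation

model = np.array([10.0, 12.0, 14.0, 16.0])  # biased model temperatures (deg C)
obs_clim_mean = 15.0                        # observed climatological mean
corrected = correct_bias(model, obs_clim_mean)
```

The corrected series keeps the model's day-to-day variability but its mean is pulled most of the way toward the observed climatology.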
NASA Astrophysics Data System (ADS)
Balasis, G.
2012-04-01
Dynamical complexity detection for output time series of complex systems is one of the foremost problems in physics, biology, engineering, and economic sciences. Especially in geomagnetism and magnetospheric physics, accurate detection of the dissimilarity between normal and abnormal states (e.g. pre-storm activity and magnetic storms) can vastly improve geomagnetic field modelling as well as space weather forecasting. Nonextensive statistical mechanics through Tsallis entropy provides a solid theoretical basis for describing and analyzing complex systems out of equilibrium, particularly systems exhibiting long-range correlations or fractal properties. Entropy measures (e.g., Tsallis entropy, Shannon entropy, block entropy, Kolmogorov entropy, T-complexity, and approximate entropy) have proven effectively applicable for the investigation of dynamical complexity in Dst time series. It has been demonstrated that as a magnetic storm approaches, there is clear evidence of significantly lower complexity in the magnetosphere. The observed higher degree of organization of the system agrees with results previously inferred from fractal analysis via estimates of the Hurst exponent based on the wavelet transform. This convergence between entropy and linear analyses provides a more reliable detection of the transition from the quiet-time to the storm-time magnetosphere, thus giving evidence that the occurrence of an intense magnetic storm is imminent. Moreover, based on the general behavior of complex system dynamics, it has recently been found that Dst time series exhibit discrete scale invariance, which in turn leads to log-periodic corrections to scaling that decorate the pure power law. The latter can be used to determine the time of occurrence of an approaching magnetic storm.
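The Tsallis entropy used in such analyses can be estimated from a histogram of the time series. A minimal sketch follows; the entropic index q, bin count, and test signal are illustrative choices, not the study's values.

```python
import numpy as np

def tsallis_entropy(series, q=1.8, bins=16):
    """Histogram estimate of the Tsallis entropy
    S_q = (1 - sum_i p_i**q) / (q - 1).
    The entropic index q and the bin count are illustrative choices."""
    counts, _ = np.histogram(series, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

rng = np.random.default_rng(0)
dst_like = rng.normal(0.0, 1.0, 5000)        # stand-in for a Dst-like segment
s_signal = tsallis_entropy(dst_like)         # positive for a spread-out signal
s_constant = tsallis_entropy(np.zeros(100))  # fully ordered series gives zero
```

A perfectly ordered (constant) series concentrates all probability in one bin and yields zero entropy; lower values thus indicate a more organized state, consistent with the pre-storm behavior described above.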
Soares, Jitesh A.; Ellermeier, Craig D.; Altier, Craig; Lawhon, Sara D.; Adams, L. Garry; Konjufca, Vjollca; Curtiss, Roy; Slauch, James M.; Ahmer, Brian M. M.
2008-01-01
Background LuxR-type transcription factors are typically used by bacteria to determine the population density of their own species by detecting N-acylhomoserine lactones (AHLs). However, while Escherichia and Salmonella encode a LuxR-type AHL receptor, SdiA, they cannot synthesize AHLs. In vitro, it is known that SdiA can detect AHLs produced by other bacterial species. Methodology/Principal Findings In this report, we tested the hypothesis that SdiA detects the AHL-production of other bacterial species within the animal host. SdiA did not detect AHLs during the transit of Salmonella through the gastrointestinal tract of a guinea pig, a rabbit, a cow, 5 mice, 6 pigs, or 12 chickens. However, SdiA was activated during the transit of Salmonella through turtles. All turtles examined were colonized by the AHL-producing species Aeromonas hydrophila. Conclusions/Significance We conclude that the normal gastrointestinal microbiota of most animal species do not produce AHLs of the correct type, in an appropriate location, or in sufficient quantities to activate SdiA. However, the results obtained with turtles represent the first demonstration of SdiA activity in animals. PMID:18665275
Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method
Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.; Raval, Amish N.; Van Lysel, Michael S.; Speidel, Michael A.
2014-01-01
Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. 
The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937.9% for the 500 mA FBP, 25 mA SIR, and 25 mA FBP, respectively. In numerical simulations, SIR mitigated streak artifacts in the low dose data and yielded flow maps with mean error <7% and standard deviation <9% of mean, for 30×30 pixel ROIs (12.9 × 12.9 mm²). In comparison, low dose FBP flow errors were −38% to +258%, and standard deviation was 6%–93%. Additionally, low dose SIR achieved 4.6 times improvement in flow map CNR² per unit input dose compared to low dose FBP. Conclusions: SIR reconstruction can reduce image noise and mitigate streaking artifacts caused by photon starvation in dynamic CT myocardial perfusion data sets acquired at low dose (low tube current), and improve perfusion map quality in comparison to FBP reconstruction at the same dose. PMID:24989392
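As a hedged illustration of the percent-change metric quoted above, the following sketch uses hypothetical uniform flow maps chosen so the result reproduces the first animal's reported −88.4% figure; the values are not the study's data.

```python
import numpy as np

def percent_flow_change(occluded_roi, baseline_roi):
    """Percent change in mean perfusion flow within a region of interest,
    occluded scan relative to the open-coronary baseline."""
    return 100.0 * (occluded_roi.mean() - baseline_roi.mean()) / baseline_roi.mean()

# Hypothetical uniform flow maps (mL/min/100 g), chosen to reproduce
# the reported -88.4% change for the first animal.
baseline = np.full((30, 30), 100.0)
occluded = np.full((30, 30), 11.6)
change = percent_flow_change(occluded, baseline)  # about -88.4
```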
Zhang, Yu J; Harte, John
2015-11-01
Model predictions for species competition outcomes depend strongly on the assumed form of the population growth function. In this paper we apply an alternative inferential method based on statistical mechanics, maximizing Boltzmann entropy, to predict resource-constrained population dynamics and coexistence. Within this framework, population dynamics and competition outcome can be determined without assuming any particular form of the population growth function. The dynamics of each species is determined by two parameters: the mean resource requirement ? (related to the mean metabolic rate) and the individual distinguishability Dr (related to intra- compared to interspecific functional variation). Our theory clarifies the condition for the energetic equivalence rule (EER) to hold, and provides a statistical explanation for the importance of species functional variation in determining population dynamics and coexistence patterns. PMID:26226230
NASA Astrophysics Data System (ADS)
Demaria, E. M.; Troch, P. A.; Durcik, M.; Dominguez, F.; Rajagopal, S.
2010-12-01
The Phoenix, AZ metro area is the twelfth largest in the US, and the Phoenix valley is experiencing rapid population growth. Water resource availability in the 21st century for the region is of great concern to water managers. The City of Phoenix sources its surface water supplies from two watersheds: the Colorado River and the Salt/Verde Rivers. In this research we simulated the potential impacts of climate change on the hydrology of the Salt and Verde River basins in Arizona, using statistically and dynamically downscaled climate scenarios from the Hadley Centre Coupled Model, version 3 (HadCM3). Statistically (STA) and Dynamically (DYN) downscaled precipitation and temperature data were used to force the Variable Infiltration Capacity (VIC) hydrological model. DYN streamflow simulations for the winter season showed no significant changes throughout the century whereas STA streamflow simulations decreased in the first three decades before increasing in the final five decades of the century. Simulated streamflows in the summer season were larger than streamflows in the historical record for the DYN data; these increases were strongly tied to increased precipitation. The STA data showed simulated streamflows systematically below the historical period for the Salt River basin and a similar pattern to the simulated winter flows for the Verde River basin. An analysis of the frequency of maximum monthly volumes indicated a slight increase in the magnitude of events in the future whereas streamflow deficits are more extreme in the 21st century for the STA simulated flows. DYN simulated maximum monthly streamflows will become slightly smaller than in the present and the severity of streamflow deficits will be reduced, particularly in the Salt River basin. 
Potential reasons for the discrepancies between the STA and DYN simulations include differences in the temporal and spatial distribution of rainfall events, the temporal disaggregation of monthly precipitation and temperature performed on the STA data, and a better representation of intra- and interannual variability in the DYN data. These results suggest that the downscaling method used plays an important role in the magnitude of simulated streamflows. This research can be used as a planning tool by Phoenix area water managers: the simulated streamflow could force their reservoir and water resource management models.
NASA Astrophysics Data System (ADS)
Molini, A.
2012-12-01
Precipitation is one of the major drivers of ecosystem dynamics. Such control is the result of complex dynamical interactions, often nonlinear, exerted over a wide range of space and time scales. For this reason, while precipitation variability and intermittency are known to be among the main drivers of plant production, with a consequent influence on the carbon and nitrogen cycles, the complete pathway of such forcing often remains unclear. Traditional time series analysis bases the study of these inter-connections on linear correlation statistics. However, the possible presence of causal dynamical connections, as well as non-linear couplings and non-stationarity, can affect the performance of these tools. Additionally, dynamical drivers can act simultaneously over different space and time scales. Given this premise, this talk explores the linear and non-linear correlation patterns, information flows and directional couplings characterizing the control of precipitation on ecosystem dynamics by using an ensemble of statistics borrowed from information theory, non-linear dynamical systems analysis and multi-resolution spectral decomposition. In particular, we focus on the development of an extension to the frequency domain of delayed correlation and conditional mutual information functions, and on the implementation of directional coupling measures such as conditional spectral causality, the phase-slope index, and transfer entropy in the wavelet domain. Several examples, from different climatic regimes, are discussed with the goal of highlighting the strengths and weaknesses of these statistics.
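One of the simpler statistics in this family, the delayed (lagged) mutual information between a driver and a response series, can be sketched with a plain histogram estimator. The synthetic series, lag, and bin count below are illustrative, not from the talk.

```python
import numpy as np

def lagged_mutual_info(x, y, lag, bins=8):
    """Mutual information (in nats) between x(t) and y(t + lag),
    using a simple 2D-histogram estimator."""
    xs, ys = x[:-lag], y[lag:]
    pxy, _, _ = np.histogram2d(xs, ys, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Synthetic driver/response pair: the "ecosystem" variable responds to
# the "precipitation" driver five steps later.
rng = np.random.default_rng(1)
rain = rng.normal(size=3000)
response = 0.9 * np.roll(rain, 5) + 0.1 * rng.normal(size=3000)
mi_at_true_lag = lagged_mutual_info(rain, response, lag=5)
mi_at_wrong_lag = lagged_mutual_info(rain, response, lag=1)  # much smaller
```

Scanning the lag and locating the peak of this curve is the time-domain analogue of the frequency-domain extensions discussed in the abstract.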
The influence of isospin on both statistical and dynamical aspects of HI reactions
NASA Astrophysics Data System (ADS)
Sobotka, Lee
2003-04-01
Several aspects of how isospin can influence reactions will be discussed. From the statistical side, I will review how isospin influences the level density and, for example, residue production in fusion reactions. The most interesting aspect here is how the continuum, many-body effects and isospin conspire to make the general question of nuclear level densities at the limits of stability interesting. I will discuss what is already known about the isospin dependence of nuclear level densities and what can be experimentally studied with the facilities presently available and those on the drawing board. The status of our knowledge of how isospin influences heavy-ion reaction dynamics at intermediate energy will be presented. The theoretical argument for isospin fractionation will be reviewed, as well as some of the pitfalls in searching for this effect experimentally; the present ambiguous status of this search is reviewed. Finally, I will address the largest issue in this subfield, the possibility that flow (and other) measurements might contribute to our knowledge of the isospin dependence of the EoS. Our present uncertainty about this aspect of the EoS, its significance, and the specific measurements that can be done to address it will be presented.
Dynamical and statistical behavior of discrete combustion waves: a theoretical and numerical study.
Bharath, Naine Tarun; Rashkovskiy, Sergey A; Tewari, Surya P; Gundawar, Manoj Kumar
2013-04-01
We present a detailed theoretical and numerical study of combustion waves in a discrete one-dimensional disordered system. The distances between neighboring reaction cells were modeled with a gamma distribution. The results show that the random structure of the microheterogeneous system plays a crucial role in the dynamical and statistical behavior of the system. This is a consequence of the nonlinear interaction of the random structure of the system with the thermal wave. An analysis of the experimental data on the combustion of a gasless system (Ti + xSi) and a wide range of thermite systems was performed in view of the developed model. We have shown that the burning rate of the powder system sensitively depends on its internal structure. The present model allows for reproducing theoretically the experimental data for a wide range of pyrotechnic mixtures. We show that Arrhenius' macrokinetics at combustion of disperse systems can take place even in the absence of Arrhenius' microkinetics; it can have a purely thermal nature and be related to their heterogeneity and to the existence of threshold temperature. It is also observed that the combustion of disperse systems always occurs in the microheterogeneous mode according to the relay-race mechanism. PMID:23679470
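The relay-race mechanism can be sketched numerically. The following minimal model assumes gap transit times scale diffusively as x²/κ; the gamma parameters and the scaling law are illustrative stand-ins, not the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Gaps between neighboring reaction cells drawn from a gamma distribution
# (shape/scale are illustrative; mean gap = 1 in reduced units).
gaps = rng.gamma(shape=4.0, scale=0.25, size=10_000)

# Relay-race mechanism: the wave crosses each gap by heat conduction, so a
# gap of width x is assumed to take a time ~ x**2 / kappa to ignite the
# next cell (a simple diffusive estimate).
kappa = 1.0
transit_times = gaps**2 / kappa

disordered_speed = gaps.sum() / transit_times.sum()
# A homogeneous system with the same mean gap propagates faster, because
# the x**2 penalty makes wide gaps disproportionately slow.
homogeneous_speed = kappa / gaps.mean()
```

Even this crude sketch reproduces the qualitative point that the burning rate depends sensitively on the internal structure, not only on the mean spacing.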
NASA Astrophysics Data System (ADS)
Roth, A. E.; Jones, C. D.; Durian, D. J.
2013-04-01
We report on the statistics of bubble size, topology, and shape and on their role in the coarsening dynamics for foams consisting of bubbles compressed between two parallel plates. The design of the sample cell permits control of the liquid content, through a constant pressure condition set by the height of the foam above a liquid reservoir. We find that in the scaling regime, all bubble distributions are independent not only of time, but also of liquid content. For coarsening, the average rate decreases with liquid content due to the blocking of gas diffusion by Plateau borders inflated with liquid; we achieve a factor of 4 reduction from the dry limit. By observing the growth rate of individual bubbles, we find that von Neumann's law becomes progressively violated with increasing wetness and decreasing bubble size. We successfully model this behavior by explicitly incorporating the border-blocking effect into the von Neumann argument. Two dimensionless bubble shape parameters naturally arise, one of which is primarily responsible for the violation of von Neumann's law for foams that are not perfectly dry.
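The border-blocking modification to von Neumann's law (dA/dt = K(n − 6) in the dry limit) can be sketched as follows; the multiplicative blocking factor is a simplified stand-in for the paper's explicit model.

```python
def growth_rate(n_sides, k0=1.0, blocked_fraction=0.0):
    """von Neumann's law dA/dt = k0 * (n - 6), with gas flux reduced by
    the fraction of the bubble perimeter blocked by liquid-inflated
    Plateau borders. The multiplicative blocking factor is a simplified
    stand-in for the border-blocking model in the paper."""
    return k0 * (n_sides - 6) * (1.0 - blocked_fraction)

dry_rate = growth_rate(5)                        # dry 5-sided bubble shrinks
wet_rate = growth_rate(5, blocked_fraction=0.5)  # wetness halves the rate
```

In this picture the coarsening rate falls with liquid content because inflated borders cover a growing fraction of each bubble's perimeter, consistent with the factor-of-4 reduction reported above.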
Sensitivity properties of a biosphere model based on BATS and a statistical-dynamical climate model
NASA Technical Reports Server (NTRS)
Zhang, Taiping
1994-01-01
A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations, particularly the model sensitivity to fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation, were conducted as a first step. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environment conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and temperature and vapor pressure of air within the foliage are practically insensitive to variation of soil surface water availability and even solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of environment accompanying gradual degradation of a tropical forest to grasslands. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations.
Exploring the String Landscape: The Dynamics, Statistics, and Cosmology of Parallel Worlds
NASA Astrophysics Data System (ADS)
Ahlqvist, Stein Pontus
This dissertation explores various facets of the low-energy solutions in string theory known as the string landscape. Three separate questions are addressed - the tunneling dynamics between these vacua, the statistics of their location in moduli space, and the potential realization of slow-roll inflation in the flux potentials generated in string theory. We find that the tunneling transitions that occur between a certain class of supersymmetric vacua related to each other via monodromies around the conifold point are sensitive to the details of warping in the near-conifold regime. We also study the impact of warping on the distribution of vacua near the conifold and determine that while previous work has concluded that the conifold point acts as an accumulation point for vacua, warping highly dilutes the distribution in precisely this regime. Finally we investigate a novel form of inflation dubbed spiral inflation to see if it can be realized near the conifold point. We conclude that for our particular models, spiral inflation seems to rely on a de Sitter-like vacuum energy. As a result, whenever spiral inflation is realized, the inflation is actually driven by a vacuum energy.
OneGeology Web Services and Portal as a global geological SDI - latest standards and technology
NASA Astrophysics Data System (ADS)
Duffy, Tim; Tellez-Arenas, Agnes
2014-05-01
The global coverage of OneGeology Web Services (www.onegeology.org and portal.onegeology.org) achieved since 2007 from the 120 participating geological surveys will be reviewed and issues arising discussed. Recent enhancements to the OneGeology Web Services capabilities will be covered, including a new up-to-5-star service accreditation scheme utilising the ISO/OGC Web Map Service standard version 1.3, core ISO 19115 metadata additions, and Version 2.0 Web Feature Services (WFS) serving the new IUGS-CGI GeoSciML V3.2 geological web data exchange language standard (http://www.geosciml.org/) with its associated 30+ IUGS-CGI available vocabularies (http://resource.geosciml.org/ and http://srvgeosciml.brgm.fr/eXist2010/brgm/client.html). Use of the CGI SimpleLithology and timescale dictionaries now allows those who wish to do so to offer data harmonisation to query their GeoSciML 3.2 based Web Feature Services and their GeoSciML_Portrayal V2.0.1 (http://www.geosciml.org/) Web Map Services in the OneGeology portal (http://portal.onegeology.org). Contributing to OneGeology involves offering to serve ideally 1:1,000,000 scale geological data (in practice any scale is now warmly welcomed) as an OGC (Open Geospatial Consortium) standard based WMS (Web Map Service) service from an available WWW server. This may either be hosted within the geological survey or a neighbouring, regional or other institution that offers to serve that data for them, i.e. offers to help technically by providing the web serving IT infrastructure as a 'buddy'. OneGeology is a standards focussed Spatial Data Infrastructure (SDI) and works to ensure that these standards work together; it is now possible for European geological surveys to register their INSPIRE web services within the OneGeology SDI (e.g. see http://www.geosciml.org/geosciml/3.2/documentation/cookbook/INSPIRE_GeoSciML_Cookbook%20_1.0.pdf). 
The OneGeology portal (http://portal.onegeology.org) is the first port of call for anyone wishing to discover the availability of global geological web services and has new functionality to view and use such services, including multiple projection support. KEYWORDS: OneGeology; GeoSciML V3.2; Data exchange; Portal; INSPIRE; Standards; OGC; Interoperability; GeoScience information; WMS; WFS; Cookbook.
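A minimal sketch of building the kind of OGC WMS 1.3.0 GetMap request that the portal issues against registered services. The endpoint and layer name below are placeholders, not a real OneGeology service.

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=800, height=600,
                   crs="EPSG:4326", version="1.3.0"):
    """Build an OGC WMS 1.3.0 GetMap request URL. Note that in WMS 1.3.0
    with EPSG:4326 the BBOX axis order is latitude/longitude."""
    params = {
        "SERVICE": "WMS",
        "VERSION": version,
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

# Placeholder endpoint and layer name, not a real OneGeology service.
url = wms_getmap_url("https://example.org/geoserver/wms",
                     "EXAMPLE_GEOLOGY_1M", (49.0, -8.0, 61.0, 2.0))
```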
ERIC Educational Resources Information Center
Callamaras, Peter
1983-01-01
This buyer's guide to seven major types of statistics software packages for microcomputers reviews Edu-Ware Statistics 3.0; Financial Planning; Speed Stat; Statistics with DAISY; Human Systems Dynamics package of Stats Plus, ANOVA II, and REGRESS II; Maxistat; and Moore-Barnes' MBC Test Construction and MBC Correlation. (MBR)
He, Jiajie; Dougherty, Mark; Shaw, Joey; Fulton, John; Arriaga, Francisco
2011-10-01
Rural areas represent approximately 95% of the 14,000 km² Alabama Black Belt, an area of widespread Vertisols dominated by clayey, smectitic, shrink-swell soils. These soils are unsuitable for conventional onsite wastewater treatment systems (OWTS), which are nevertheless widely used in this region. In order to provide an alternative wastewater dosing system, an experimental field-moisture-controlled subsurface drip irrigation (SDI) system was designed and installed as a field trial. The experimental system, which integrates a seasonal cropping system, was evaluated for two years on a 500-m² Houston clay site in west central Alabama from August 2006 to June 2008. The SDI system was designed to start hydraulic dosing only when field moisture was below field capacity. Hydraulic dosing rates fluctuated as expected, with higher dosing rates during warm seasons and near-zero or zero dosing rates during cold seasons. Lower hydraulic dosing in winter creates the need for at least a two-month waste storage structure, which is an insurmountable challenge for rural homeowners. An estimated 30% of dosed water percolated below 45-cm depth during the first summer, which included a 30-year historic drought. This massive volume of percolation was presumably the result of preferential flow stimulated by dry-weather clay soil cracking. Although water percolation is necessary for OWTS, this massive water percolation loss indicated that this experimental system is not able to effectively control soil moisture within its monitoring zone as designed. Overall findings of this study indicated that soil moisture controlled SDI wastewater dosing is not suitable as a standalone system in these Vertisols. However, the experimental soil moisture control system functioned as designed, demonstrating that soil moisture controlled SDI wastewater dosing may find application as a supplement to other wastewater disposal methods that can function during cold seasons. PMID:21621905
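The dosing rule described above (dose only when field moisture is below field capacity) can be sketched as a simple control function. The thresholds and dose cap are hypothetical, since the abstract does not publish the controller's settings.

```python
def daily_dose_mm(field_moisture_mm, field_capacity_mm, max_dose_mm=5.0):
    """Dose wastewater only when stored soil moisture is below field
    capacity, capped at the day's maximum dose. All values are
    hypothetical illustrations of the control rule."""
    deficit = field_capacity_mm - field_moisture_mm
    if deficit <= 0.0:
        return 0.0
    return min(max_dose_mm, deficit)

# Dry summer soil takes a full dose; wet winter soil takes none,
# which is why winter storage becomes the limiting problem.
summer_dose = daily_dose_mm(field_moisture_mm=95.0, field_capacity_mm=120.0)
winter_dose = daily_dose_mm(field_moisture_mm=130.0, field_capacity_mm=120.0)
```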
Yano, Ayaka; Nicol, Barbara; Jouanno, Elodie; Quillet, Edwige; Fostier, Alexis; Guyomard, René; Guiguen, Yann
2013-01-01
All salmonid species investigated to date have been characterized with a male heterogametic sex-determination system. However, as these species do not share any Y-chromosome conserved synteny, there remains a debate on whether they share a common master sex-determining gene. In this study, we investigated the extent of conservation and evolution of the rainbow trout (Oncorhynchus mykiss) master sex-determining gene, sdY (sexually dimorphic on the Y-chromosome), in 15 different species of salmonids. We found that the sdY sequence is highly conserved in all salmonids and that sdY is a male-specific Y-chromosome gene in the majority of these species. These findings demonstrate that most salmonids share a conserved sex-determining locus and also strongly suggest that sdY may be this conserved master sex-determining gene. However, in two whitefish species (subfamily Coregoninae), sdY was found both in males and females, suggesting that alternative sex-determination systems may have also evolved in this family. Based on the wide conservation of sdY as a male-specific Y-chromosome gene, efficient and easy molecular sexing techniques can now be developed that will be of great interest for studying these economically and environmentally important species. PMID:23745140
Statistical properties and pre-hit dynamics of price limit hits in the Chinese stock markets.
Wan, Yu-Lei; Xie, Wen-Jie; Gu, Gao-Feng; Jiang, Zhi-Qiang; Chen, Wei; Xiong, Xiong; Zhang, Wei; Zhou, Wei-Xing
2015-01-01
Price limit trading rules are adopted in some stock markets (especially emerging markets) in an attempt to cool off traders' short-term trading mania on individual stocks and increase market efficiency. Under such a microstructure, stocks may hit their up-limits and down-limits from time to time. However, the behaviors of price limit hits are not well studied, partly because major stock markets such as the US markets and most European markets do not set price limits. Here, we perform detailed analyses of the high-frequency data of all A-share common stocks traded on the Shanghai Stock Exchange and the Shenzhen Stock Exchange from 2000 to 2011 to investigate the statistical properties of price limit hits and the dynamical evolution of several important financial variables before a stock price hits its limits. We compare the properties of up-limit hits and down-limit hits. We also divide the whole period into three bullish periods and three bearish periods to unveil possible differences during bullish and bearish market states. To uncover the impacts of stock capitalization on price limit hits, we partition all stocks into six portfolios according to their capitalizations on different trading days. We find that the price limit trading rule has a cooling-off effect (as opposed to the magnet effect), indicating that the rule takes effect in the Chinese stock markets. We find that price continuation is much more likely to occur than price reversal on the next trading day after a limit-hitting day, especially for down-limit hits, which has potential practical value for market practitioners. PMID:25874716
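The limit-hit bookkeeping described above can be sketched on synthetic data. The ±10% daily limit is the actual A-share rule, but the price series, the rounding tolerance on the threshold, and the continuation measure here are illustrative assumptions, not the authors' code:

```python
import numpy as np
import pandas as pd

# Illustrative sketch (not the authors' analysis): flag limit-hit days in a
# daily close series under the Chinese A-share +/-10% price-limit rule, then
# measure next-day continuation after up-limit hits.  The synthetic series
# and the 0.0995 rounding tolerance are assumptions for illustration.
rng = np.random.default_rng(0)
close = pd.Series(10 * np.exp(np.cumsum(rng.normal(0, 0.08, 500))))

prev = close.shift(1)
ret = close / prev - 1
up_hit = ret >= 0.0995      # up-limit hit (tolerance for price rounding)
down_hit = ret <= -0.0995   # down-limit hit

next_ret = ret.shift(-1)
# fraction of up-limit hits followed by a further rise the next day
continuation = (next_ret[up_hit] > 0).mean()
print(f"up hits: {int(up_hit.sum())}, down hits: {int(down_hit.sum())}, "
      f"next-day continuation after up hits: {continuation:.2f}")
```

On the real exchange data, the abstract reports that continuation dominates reversal, especially after down-limit hits; a random-walk toy like this one has no such asymmetry, which is exactly what makes the empirical finding informative.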
NASA Astrophysics Data System (ADS)
Gai, Lili; Iacovella, Christopher R.; Wan, Li; McCabe, Clare; Cummings, Peter T.
2015-08-01
The fluid-solid phase transition behavior of nano-confined Lennard-Jones fluids as a function of temperature and degree of nanoconfinement has been studied via statistical temperature molecular dynamics (STMD). The STMD method allows the direct calculation of the density of states and thus the heat capacity with high efficiency. The fluids are simulated between parallel solid surfaces with varying pore sizes, wall-fluid interaction energies, and registry of the walls. The fluid-solid phase transition behavior has been characterized through determination of the heat capacity. The results show that for pores of ideal spacing, the order-disorder transition temperature (TODT) is reduced as the pore size increases, until it reaches values consistent with those seen in a bulk system. Also, as the interaction between the wall and the fluid is reduced, TODT is lowered due to the weaker constraint from the wall. However, for non-ideally spaced pores, quite different behavior is obtained: TODT is generally much lower and decreases as the wall constraint becomes larger. For unaligned walls (i.e., whose lattices are not in registry), the fluid-solid transition is also detected as T is reduced, indicating that non-ideality in the orientation of the walls does not prevent the formation of a solid, but results in a slight change in TODT compared to the perfectly aligned systems. The STMD method is demonstrated to be a robust way of probing the phase transitions of nanoconfined fluids systematically, enabling the future examination of the phase transition behavior of more complex fluids.
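The step from a density of states to a heat-capacity peak, which the abstract relies on to locate TODT, can be illustrated with a toy model; the two-level g(E) below is a stand-in assumption, not the STMD output for the confined fluid:

```python
import numpy as np

# Toy sketch (not the STMD code itself): once a density of states g(E) is
# in hand, the canonical heat capacity follows from energy fluctuations,
#   C(T) = (<E^2> - <E>^2) / (kB * T^2),
# and a peak in C(T) locates a transition temperature.  Here g(E) is a
# two-level model (a Schottky anomaly), standing in for the g(E) that STMD
# would estimate.
kB = 1.0
E = np.array([0.0, 1.0])           # energy levels (arbitrary units)
ln_g = np.array([0.0, np.log(2)])  # log degeneracies

def heat_capacity(T):
    w = ln_g - E / (kB * T)
    w -= w.max()                   # avoid overflow in exp
    p = np.exp(w)
    p /= p.sum()                   # canonical probabilities
    e1 = (p * E).sum()
    e2 = (p * E * E).sum()
    return (e2 - e1 * e1) / (kB * T * T)

Ts = np.linspace(0.05, 5.0, 200)
C = np.array([heat_capacity(T) for T in Ts])
print(f"C(T) peaks near T = {Ts[C.argmax()]:.2f}")
```

The interior maximum of C(T) is the analogue of the TODT signature the study reads off from the STMD density of states.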
Sensitivity properties of a biosphere model based on BATS and a statistical-dynamical climate model
Zhang, T.
1994-06-01
A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations, particularly the model sensitivity to fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation, were conducted as a first step. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environment conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and temperature and vapor pressure of air within the foliage are practically insensitive to variation of soil surface water availability and even solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of environment accompanying gradual degradation of a tropical forest to grasslands. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations. 46 refs., 10 figs., 6 tabs.
Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs
Wood, Andrew W.; Leung, Lai R.; Sridhar, V.; Lettenmaier, D. P.
2004-01-01
Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution), and after dynamical downscaling via a Regional Climate Model (RCM, at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) led to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step.
For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
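The bias-correction step that the comparison shows to be essential can be sketched as empirical quantile mapping; the gamma-distributed toy climatologies and the mapping details below are illustrative assumptions, not the exact BCSD implementation (which also includes the spatial disaggregation step, omitted here):

```python
import numpy as np

# Minimal sketch of the bias-correction idea at the heart of BCSD: replace
# each simulated value with the observed value at the same quantile of the
# training climatology (empirical quantile mapping).  The toy gamma
# "precipitation" climatologies are assumptions for illustration.
rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 3.0, 1000)        # observed monthly precipitation (toy)
sim = rng.gamma(2.0, 3.0, 1000) * 1.5  # biased climate-model output (toy)

def bias_correct(x, sim_clim, obs_clim):
    # empirical quantile of each value of x under the simulated climatology
    q = np.searchsorted(np.sort(sim_clim), x) / len(sim_clim)
    q = np.clip(q, 0.0, 1.0)
    # map that quantile onto the observed climatology
    return np.quantile(obs_clim, q)

corrected = bias_correct(sim, sim, obs)
print(f"mean obs {obs.mean():.1f}, raw sim {sim.mean():.1f}, "
      f"corrected {corrected.mean():.1f}")
```

After mapping, the simulated distribution matches the observed training climatology, which is why BCSD output could drive the hydrology model without the unacceptable biases seen with LI and SD.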
Development of a current collection loss management system for SDI homopolar power supplies
Brown, D.W.
1989-01-01
High speed, high power density current collection systems have been identified as an enabling technology required to construct homopolar power supplies to meet SDI missions. This work is part of a three-year effort directed towards the analysis, experimental verification, and prototype construction of a current collection system designed to operate continuously at 2 kA/cm{sup 2}, at a rubbing speed of 200 m/s, and with acceptable losses in a space environment. To date, no system has achieved these conditions simultaneously. This is the annual report covering the second year period of performance on DOE contract DE-AC03-86SF16518. Major areas covered include design, construction and operation of a cryogenically cooled brush test rig, design and construction of a high speed brush test rig, an optimization study for homopolar machines, loss analysis of the current collection system, and an application study which defines the air-core homopolar construction necessary to achieve the goal of 80--90 kW/kg generator power density. 17 figs., 2 tabs.
Development of a current collection loss management system for SDI homopolar power supplies
Hannan, W.F. III.
1987-01-01
High speed, high power density current collection systems have been identified as an enabling technology required to construct homopolar power supplies to meet SDI missions. This work is part of a three-year effort directed towards the analysis, experimental verification, and prototype construction of a current collection system designed to operate continuously at 2 kA/cm{sup 2}, at a rubbing speed of 200 m/s, and with acceptable losses in a space environment. To date, no system has achieved these conditions simultaneously. This is the annual report covering the first year period of performance on DOE contract DE-AC03-86SF16518. Major areas covered include design and construction of a cryogenically-cooled brush test rig, design of a high speed brush test rig, loss analysis of the current collection system, and an application study which defines the air-core homopolar construction necessary to achieve the goal of 80--90 kW/kg generator power density. 15 figs.
A review of gas-cooled reactor concepts for SDI (Strategic Defense Initiative) applications
Marshall, A.C.
1989-08-01
We have completed a review of multimegawatt gas-cooled reactor concepts proposed for SDI applications. Our study concluded that the principal reason for considering gas-cooled reactors for burst-mode operation was the potential for significant system mass savings over closed-cycle systems if open-cycle gas-cooled operation (effluent exhausted to space) is acceptable. The principal reason for considering gas-cooled reactors for steady-state operation is that they may represent a lower technology risk than other approaches. In the review, nine gas-cooled reactor concepts were compared to identify the most promising. For burst-mode operation, the NERVA (Nuclear Engine for Rocket Vehicle Application) derivative reactor concept emerged as a strong first choice since its performance exceeds the anticipated operational requirements and the technology has been demonstrated and is retrievable. Although the NERVA derivative concepts were determined to be the lead candidates for the Multimegawatt Steady-State (MMWSS) mode as well, their lead over the other candidates is not as great as for the burst mode. 90 refs., 2 figs., 10 tabs.
Zilany, Muhammad S. A.; Carney, Laurel H.
2010-01-01
Neurons in the auditory system respond to recent stimulus-level history by adapting their response functions according to the statistics of the stimulus, partially alleviating the so-called “dynamic-range problem.” However, the mechanism and source of this adaptation along the auditory pathway remain unknown. Inclusion of power-law dynamics in a phenomenological model of the inner hair cell (IHC)-auditory nerve (AN) synapse successfully explained neural adaptation to sound-level statistics, including the time course of adaptation of the mean firing rate and changes in the dynamic range observed in AN responses. A direct comparison between model responses to a dynamic stimulus and to an “inversely-gated” static background suggested that AN dynamic-range adaptation largely results from the adaptation produced by the response history. These results support the hypothesis that the potential mechanism underlying the dynamic-range adaptation observed at the level of the auditory nerve is located peripheral to the spike generation mechanism and central to the IHC receptor potential. PMID:20685981
NASA Astrophysics Data System (ADS)
Karakatsanis, L. P.; Pavlos, G. P.; Xenakis, M. N.
2013-09-01
In this study, which is the continuation of the first part (Pavlos et al. 2012) [1], the nonlinear analysis of the solar flares index is embedded in the non-extensive statistical theory of Tsallis (1988) [3]. The q-triplet of Tsallis, as well as the correlation dimension and the Lyapunov exponent spectrum, were estimated for the singular value decomposition (SVD) components of the solar flares timeseries. Also the multifractal scaling exponent spectrum f(a), the generalized Renyi dimension spectrum D(q) and the spectrum J(p) of the structure function exponents were estimated experimentally and theoretically by using the q-entropy principle included in Tsallis non-extensive statistical theory, following Arimitsu and Arimitsu (2000) [25]. Our analysis showed clearly the following: (a) a phase transition process in the solar flare dynamics from a high dimensional non-Gaussian self-organized critical (SOC) state to a low dimensional, also non-Gaussian, chaotic state, (b) strong intermittent solar corona turbulence and an anomalous (multifractal) diffusion solar corona process, which is strengthened as the solar corona dynamics makes a phase transition to low dimensional chaos, (c) faithful agreement of Tsallis non-equilibrium statistical theory with the experimental estimations of the functions: (i) non-Gaussian probability distribution function P(x), (ii) f(a) and D(q), and (iii) J(p) for the solar flares timeseries and its underlying non-equilibrium solar dynamics, and (d) the solar flare dynamical profile is revealed to be similar to the dynamical profile of the solar corona zone as regards the phase transition process from self-organized criticality (SOC) to a chaos state. However, the solar low corona (solar flare) dynamical characteristics can be clearly discriminated from the dynamical characteristics of the solar convection zone.
ERIC Educational Resources Information Center
Koparan, Timur
2016-01-01
In this study, the effect on the achievement and attitudes of prospective teachers is examined. To this end, an achievement test, an attitude scale for statistics, and interviews were used as data collection tools. The achievement test comprises 8 problems based on statistical data, and the attitude scale comprises 13 Likert-type items. The study…
NASA Astrophysics Data System (ADS)
Wen, Haohua; Woo, C. H.
2016-03-01
In conventional studies, contributions from the vibrational thermodynamics of phonons and magnons to dynamic simulations of thermally activated atomic processes in crystalline materials have been treated within the framework of classical statistics. The neglect of quantum effects produces the wrong lattice and spin dynamics and erroneous activation characteristics, sometimes leading to incorrect results. In this paper, we consider the formation and migration of a mono-vacancy in BCC iron over a large temperature range from 10 K to 1400 K, across the ferro/paramagnetic phase boundary. Entropies and enthalpies of migration and formation are calculated using quantum heat baths based on a Bose-Einstein statistical description of thermal excitations in terms of phonons and magnons. Corrections due to the use of classical heat baths are evaluated and discussed.
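The quantum-versus-classical distinction drawn above can be made concrete for a single harmonic (phonon) mode; the units and mode frequency below are arbitrary choices for illustration:

```python
import numpy as np

# Sketch of the quantum-vs-classical heat-bath distinction: the mean energy
# of a harmonic mode of frequency omega under Bose-Einstein statistics,
#   <E> = hbar*omega * (1/2 + 1/(exp(hbar*omega/(kB*T)) - 1)),
# versus the classical equipartition value kB*T.  The quantum mode retains
# its zero-point energy at low T, where a classical bath (wrongly) gives 0.
hbar_omega = 1.0   # mode energy quantum, arbitrary units
kB = 1.0

def quantum_energy(T):
    x = hbar_omega / (kB * T)
    return hbar_omega * (0.5 + 1.0 / np.expm1(x))

def classical_energy(T):
    return kB * T

for T in (0.1, 1.0, 10.0):
    print(f"T={T:5.1f}  quantum={quantum_energy(T):.3f}  "
          f"classical={classical_energy(T):.3f}")
```

The gap between the two curves at low temperature is the origin of the activation-characteristic corrections the paper evaluates for the classical heat bath.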
Liang, Shiuan-Ni; Lan, Boon Leong
2012-01-01
The Newtonian and special-relativistic statistical predictions for the mean, standard deviation and probability density function of the position and momentum are compared for the periodically-delta-kicked particle at low speed. Contrary to expectation, we find that the statistical predictions, which are calculated from the same parameters and initial Gaussian ensemble of trajectories, do not always agree if the initial ensemble is sufficiently well-localized in phase space. Moreover, the breakdown of agreement is very fast if the trajectories in the ensemble are chaotic, but very slow if the trajectories in the ensemble are non-chaotic. The breakdown of agreement implies that special-relativistic mechanics must be used, instead of the standard practice of using Newtonian mechanics, to correctly calculate the statistical predictions for the dynamics of a low-speed system. PMID:22606259
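The two dynamics being compared can be sketched as iterated maps acting on an ensemble; the sinusoidal kick form, kick strength, period, and ensemble parameters below are assumptions for illustration, not the authors' exact setup:

```python
import numpy as np

# Illustrative sketch (not the authors' model): an initial Gaussian
# ensemble of periodically delta-kicked particles evolved with a Newtonian
# map and with its special-relativistic counterpart (free flight uses
# v = p/(m*gamma) instead of v = p/m).  K, tau, and the kick form sin(x)
# are assumptions for illustration.
c, m, K, tau = 1.0, 1.0, 1.5, 1.0

def step(x, p, relativistic):
    if relativistic:
        v = p / (m * np.sqrt(1.0 + (p / (m * c)) ** 2))  # v = p/(m*gamma)
    else:
        v = p / m
    x = x + v * tau          # free flight between kicks
    p = p + K * np.sin(x)    # instantaneous kick
    return x, p

rng = np.random.default_rng(3)
x0 = rng.normal(2.0, 1e-3, 10_000)  # well-localized initial Gaussian ensemble
p0 = rng.normal(0.1, 1e-3, 10_000)  # mean momentum 0.1*m*c: "low speed"
xn, pn = x0.copy(), p0.copy()
xr, pr = x0.copy(), p0.copy()
for _ in range(50):
    xn, pn = step(xn, pn, relativistic=False)
    xr, pr = step(xr, pr, relativistic=True)

# statistical predictions (ensemble means) of the two dynamics can diverge
print(f"|<p>_Newton - <p>_SR| after 50 kicks = {abs(pn.mean() - pr.mean()):.3f}")
```

For chaotic parameter choices the two ensembles' statistics separate quickly despite the low initial speed, which is the breakdown of agreement the abstract reports.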
NASA Astrophysics Data System (ADS)
Ulyanov, Sergey S.; Tuchin, Valery V.; Bednov, Andrey A.
1995-01-01
A theoretical investigation of the diffraction of strongly focused Gaussian beams in blood capillaries with diameters slightly greater than the erythrocyte size has been carried out. Spatial-temporal correlation functions of intensity fluctuations in dynamic, statistically inhomogeneous speckles have been studied. A modified speckle-interferometric method using strongly focused Gaussian beam scattering is suggested for blood flow measurements. The possibility of applying this method to blood and lymph flow velocity monitoring in narrow vessels has been analyzed.
Argonne CW Linac (ACWL): legacy from SDI and opportunities for the future
NASA Astrophysics Data System (ADS)
McMichael, G. E.; Yule, T. J.
1995-09-01
The former Strategic Defense Initiative Organization (SDIO) invested significant resources over a 6-year period to develop and build an accelerator to demonstrate the launching of a cw beam with characteristics suitable for a space-based Neutral Particle Beam (NPB) system. This accelerator, the CWDD (Continuous Wave Deuterium Demonstrator) accelerator, was designed to accelerate 80 mA cw of D- to 7.5 MeV. A considerable amount of hardware was constructed and installed in the Argonne-based facility, and major performance milestones were achieved before program funding from the Department of Defense ended in October 1993. Existing assets have been turned over to Argonne. Assets include a fully functional 200 kV cw D- injector, a cw RFQ that has been tuned, leak checked and aligned, beam lines and a high-power beam stop, all installed in a shielded vault with appropriate safety and interlock systems. In addition, there are two high power (1 MW) cw rf amplifiers and all the ancillary power, cooling and control systems required for a high-power accelerator system. The SDI mission required that the CWDD accelerator structures operate at cryogenic temperatures (26K), a requirement that placed severe limitations on operating period (CWDD would have provided 20 seconds of cw beam every 90 minutes). However, the accelerator structures were designed for full-power rf operation with water cooling and ACWL (Argonne Continuous Wave Linac), the new name for CWDD in its water-cooled, positive-ion configuration, will be able to operate continuously. Project status and achievements will be reviewed. Preliminary design of a proton conversion for the RFQ, and other proposals for turning ACWL into a testbed for cw-linac engineering, will be discussed.
NASA Astrophysics Data System (ADS)
Hong, Mei; Zhang, Ren; Wang, Dong; Feng, Mang; Wang, Zhengxin; Singh, Vijay P.
2015-07-01
To address the inaccuracy of long-term El Niño-Southern Oscillation (ENSO) forecasts, a new dynamical-statistical forecasting model of the ENSO index was developed based on dynamical model reconstruction and improved self-memorization. To overcome the problem of single initial prediction values, the largest Lyapunov exponent was introduced to improve the traditional self-memorization function, thereby making it more effective for describing chaotic systems, such as ENSO. Equation reconstruction, based on actual data, was used as a dynamical core to overcome the problem of using a simple core. The developed dynamical-statistical forecasting model of the ENSO index is used to predict the sea surface temperature anomaly in the equatorial eastern Pacific and El Niño/La Niña events. The real-time predictive skills of the improved model were tested. The results show that our model predicted well within lead times of 12 months. Compared with six mature models, both temporal correlation and root mean square error of the improved model are slightly worse than those of the European Centre for Medium-Range Weather Forecasts model, but better than those of the other five models. Additionally, the margin between the forecast results in summer and those in winter is not great, which means that the improved model can overcome the "spring predictability barrier", to some extent. Finally, a real-time prediction experiment is carried out beginning in September 2014. Our model is a new exploration of the ENSO forecasting method.
Statistical dynamics of classical systems: A self-consistent field approach
Grzetic, Douglas J.; Wickham, Robert A.; Shi, An-Chang
2014-06-28
We develop a self-consistent field theory for particle dynamics by extremizing the functional integral representation of a microscopic Langevin equation with respect to the collective fields. Although our approach is general, here we formulate it in the context of polymer dynamics to highlight satisfying formal analogies with equilibrium self-consistent field theory. An exact treatment of the dynamics of a single chain in a mean force field emerges naturally via a functional Smoluchowski equation, while the time-dependent monomer density and mean force field are determined self-consistently. As a simple initial demonstration of the theory, leaving an application to polymer dynamics for future work, we examine the dynamics of trapped interacting Brownian particles. For binary particle mixtures, we observe the kinetics of phase separation.
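The demonstration system above, trapped interacting Brownian particles, can be mimicked directly with an overdamped Langevin simulation; this particle-based sketch is a toy counterpart, not the self-consistent field calculation itself, and all parameter values are illustrative assumptions:

```python
import numpy as np

# Toy counterpart (not the field-theoretic calculation) of the
# demonstration system in the abstract: overdamped Brownian particles in a
# harmonic trap with a soft Gaussian pair repulsion, integrated with the
# Euler-Maruyama scheme.  All parameters are illustrative assumptions.
rng = np.random.default_rng(4)
n, dim = 64, 1
k_trap, eps, sigma = 1.0, 0.5, 0.3   # trap stiffness; repulsion strength/range
dt, kT, steps = 1e-3, 1.0, 2000

x = rng.normal(0.0, 1.0, (n, dim))
for _ in range(steps):
    # repulsive force from the Gaussian pair potential eps*exp(-r^2/2sigma^2)
    dxs = x[:, None, :] - x[None, :, :]
    r2 = (dxs ** 2).sum(-1, keepdims=True)
    f_pair = (eps / sigma**2) * (dxs * np.exp(-r2 / (2 * sigma**2))).sum(axis=1)
    f = -k_trap * x + f_pair
    x += f * dt + np.sqrt(2 * kT * dt) * rng.normal(size=x.shape)

# without interactions the stationary variance would be kT/k_trap = 1;
# the repulsion broadens the trapped density profile
print(f"sample variance of positions: {x.var():.2f}")
```

The field theory in the paper computes the same kind of time-dependent density self-consistently rather than by sampling trajectories, which is what makes it tractable for polymer dynamics.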
NASA Astrophysics Data System (ADS)
Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Mike; Gershunov, Alexander; Gutowski, William J.; Gyakum, John R.; Katz, Richard W.; Lee, Yun-Young; Lim, Young-Kwon; Prabhat
2015-05-01
The objective of this paper is to review statistical methods, dynamics, modeling efforts, and trends related to temperature extremes, with a focus upon extreme events of short duration that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). The statistics, dynamics, and modeling sections of this paper are written to be autonomous and so can be read separately. Methods to define extreme events statistics and to identify and connect LSMPs to extreme temperature events are presented. Recent advances in statistical techniques connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. Various LSMPs, ranging from synoptic to planetary scale structures, are associated with extreme temperature events. Current knowledge about the synoptics and the dynamical mechanisms leading to the associated LSMPs is incomplete. Systematic studies of the physics of LSMP life cycles, comprehensive model assessment of LSMP-extreme temperature event linkages, and LSMP properties are needed. Generally, climate models capture observed properties of heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency and underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Modeling studies have identified the impact of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs to more specifically understand the role of LSMPs on past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated. The paper concludes with unresolved issues and research questions.
NASA Astrophysics Data System (ADS)
Lode, Axel U. J.; Chakrabarti, Barnali; Kota, Venkata K. B.
2015-09-01
We study the quantum many-body dynamics and the entropy production triggered by an interaction quench in a system of N =10 interacting identical bosons in an external one-dimensional harmonic trap. The multiconfigurational time-dependent Hartree method for bosons (MCTDHB) is used for solving the time-dependent Schrödinger equation at a high level of accuracy. We consider many-body entropy measures such as the Shannon information entropy, number of principal components, and occupation entropy that are computed from the time-dependent many-body basis set used in MCTDHB. These measures quantify relevant physical features such as irregular or chaotic dynamics, statistical relaxation, and thermalization. We monitor the entropy measures as a function of time and assess how they depend on the interaction strength. For larger interaction strength, the many-body information and occupation entropies approach the value predicted for the Gaussian orthogonal ensemble of random matrices. This implies statistical relaxation. The basis states of MCTDHB are explicitly time-dependent and optimized by the variational principle in a way that minimizes the number of significantly contributing ones. It is therefore a nontrivial fact that statistical relaxation prevails in MCTDHB computations. Moreover, we demonstrate a fundamental connection between the production of entropy, the buildup of correlations and loss of coherence in the system. Our findings imply that mean-field approaches such as the time-dependent Gross-Pitaevskii equation cannot capture statistical relaxation and thermalization because they neglect correlations. Since the coherence and correlations are experimentally accessible, their present connection to many-body entropies can be scrutinized to detect statistical relaxation. In this work we use the recent recursive software implementation of the MCTDHB (R-MCTDHB).
NASA Astrophysics Data System (ADS)
Frossard, L.; Rieder, H. E.; Ribatet, M.; Staehelin, J.; Maeder, J. A.; Di Rocco, S.; Davison, A. C.; Peter, T.
2012-05-01
We use models for mean and extreme values of total column ozone on spatial scales to analyze "fingerprints" of atmospheric dynamics and chemistry on long-term ozone changes at northern and southern mid-latitudes. The r-largest order statistics method is used for pointwise analysis of extreme events in low and high total ozone (termed ELOs and EHOs, respectively). For the corresponding mean value analysis a pointwise autoregressive moving average model (ARMA) is used. The statistical models include important atmospheric covariates to describe the dynamical and chemical state of the atmosphere: the solar cycle, the Quasi-Biennial Oscillation (QBO), ozone depleting substances (ODS) in terms of equivalent effective stratospheric chlorine (EESC), the North Atlantic Oscillation (NAO), the Antarctic Oscillation (AAO), the El Niño/Southern Oscillation (ENSO), and aerosol load after the volcanic eruptions of El Chichón and Mt. Pinatubo. The influence of the individual covariates on mean and extreme levels in total column ozone is derived on a grid cell basis. The results show that "fingerprints", i.e., significant influence, of dynamical and chemical features are captured in both the "bulk" and the tails of the ozone distribution, respectively described by means and EHOs/ELOs. While results for the solar cycle, QBO and EESC are in good agreement with findings of earlier studies, unprecedented spatial fingerprints are retrieved for the dynamical covariates.
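The data-reduction step behind the r-largest order statistics method can be sketched as follows; the toy ozone series, block structure, and choice of r below are illustrative assumptions (the subsequent r-largest GEV likelihood fit is omitted):

```python
import numpy as np

# Sketch of the data reduction behind the r-largest order statistics
# method: from each block (e.g. one year of daily total-ozone values at a
# grid cell), keep only the r largest observations for the extreme-value
# fit.  The synthetic series in Dobson units is a toy assumption.
rng = np.random.default_rng(2)
years = 30
daily = rng.normal(300, 25, (years, 365))  # toy column-ozone series (DU)

r = 5
# per-year top-r values in descending order
r_largest = -np.sort(-daily, axis=1)[:, :r]

print(r_largest.shape)  # r order statistics per block: (30, 5)
```

Using r > 1 values per block retains more information about the tail than the classical block-maximum approach, which is the point of the method for grid-cell-level EHO/ELO analysis.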
Investigation of statistical properties of lymph-flow dynamics using speckle microscopy
NASA Astrophysics Data System (ADS)
Bednov, Andrey A.; Galanzha, Ekateryna I.; Tuchin, Valery V.; Ulyanov, Sergey S.; Brill, Gregory E.
1997-05-01
At different pathological stages, changes in both blood and lymph microcirculation parameters are observed. These parameters are of great importance in diagnostics. The type of these changes may indicate both the kind and the degree of disease. Investigation of the behavior of dynamic characteristics of these flows at different stages is of special interest. In this paper the peculiarities of both blood and lymph motion have been considered. The speckle-interferometric method has been further developed for the investigation of the dynamic characteristics of blood and lymph flows in microvessels. Two dynamic parameters introduced in previous papers on this problem are analyzed. The influence of a lymphotropic agent on lymph flow and its dynamic characteristics is also discussed.
NASA Astrophysics Data System (ADS)
Li, W.; Ma, Q.; Thorne, R. M.; Bortnik, J.; Kletzing, C. A.; Kurth, W. S.; Hospodarsky, G. B.; Nishimura, Y.
2015-05-01
Plasmaspheric hiss is known to play an important role in controlling the overall structure and dynamics of radiation belt electrons inside the plasmasphere. Using newly available Van Allen Probes wave data, which provide excellent coverage in the entire inner magnetosphere, we evaluate the global distribution of the hiss wave frequency spectrum and wave intensity for different levels of substorm activity. Our statistical results show that observed hiss peak frequencies are generally lower than the commonly adopted value (~550 Hz) and that the hiss wave power frequently extends below 100 Hz, particularly at larger L shells (> ~3) on the dayside during enhanced levels of substorm activity. We also compare electron pitch angle scattering rates caused by hiss using the new statistical frequency spectrum and the previously adopted Gaussian spectrum and find that the differences are up to a factor of ~5 and are dependent on energy and L shell. Moreover, the new statistical hiss wave frequency spectrum including wave power below 100 Hz leads to increased pitch angle scattering rates by a factor of ~1.5 for electrons above ~100 keV at L~5, although their effect is negligible at L ≤ 3. Consequently, we suggest that the new realistic hiss wave frequency spectrum should be incorporated into future modeling of radiation belt electron dynamics.
Static Numbers to Dynamic Statistics: Designing a Policy-Friendly Social Policy Indicator Framework
ERIC Educational Resources Information Center
Ahn, Sang-Hoon; Choi, Young Jun; Kim, Young-Mi
2012-01-01
In line with the economic crisis and rapid socio-demographic changes, the interest in "social" and "well-being" indicators has been revived. Social indicator movements of the 1960s resulted in the establishment of social indicator statistical frameworks; that legacy has remained intact in many national governments and international organisations.…
NASA Astrophysics Data System (ADS)
Rosa, Bogdan; Parishani, Hossein; Ayala, Orlando; Wang, Lian-Ping; Grabowski, Wojciech W.
2011-12-01
In recent years, the direct numerical simulation (DNS) approach has become a reliable tool for studying turbulent collision-coalescence of cloud droplets relevant to warm rain development. It has been shown that small-scale turbulent motion can enhance the collision rate of droplets by either enhancing the relative velocity and collision efficiency or by inertia-induced droplet clustering. A hybrid DNS approach incorporating DNS of air turbulence, disturbance flows due to droplets, and droplet equation of motion has been developed to quantify these effects of air turbulence. Due to the computational complexity of the approach, a major challenge is to increase the range of scales or the size of the computation domain so that all scales affecting droplet pair statistics are simulated. Here we discuss our on-going work in this direction by improving the parallel scalability of the code, and by studying the effect of large-scale forcing on pair statistics relevant to turbulent collision. New results at higher grid resolutions show a saturation of pair and collision statistics with increasing flow Reynolds number, for given Kolmogorov scales and small droplet sizes. Furthermore, we examine the orientation dependence of pair statistics which reflects an interesting coupling of gravity and droplet clustering.
Notaro, Michael; Wang, Yi; Liu, Zhengyu; Gallimore, Robert; Levis, Samuel
2008-01-05
A negative feedback of vegetation cover on subsequent annual precipitation is simulated for the mid-Holocene over North Africa using a fully coupled general circulation model with dynamic vegetation, FOAM-LPJ (Fast Ocean Atmosphere Model-Lund Potsdam Jena Model). By computing a vegetation feedback parameter based on lagged autocovariances, the simulated impact of North African vegetation on precipitation is statistically quantified. The feedback is also dynamically assessed through initial value ensemble experiments, in which North African grass cover is initially reduced and the climatic response analyzed. The statistical and dynamical assessments of the negative vegetation feedback agree in sign and relative magnitude for FOAM-LPJ. The negative feedback on annual precipitation largely results from a competition between bare soil evaporation and plant transpiration, with increases in the former outweighing reductions in the latter given reduced grass cover. This negative feedback weakens and eventually reverses sign over time during a transient simulation from the mid-Holocene to present. A similar, but weaker, negative feedback is identified in Community Climate System Model Version 2 (CCSM2) over North Africa for the mid-Holocene.
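The statistical assessment described above can be illustrated with a toy calculation. The following is a generic sketch of a lagged-covariance feedback estimator applied to made-up series with a prescribed feedback of -0.5; it is not the paper's exact formulation.

```python
import numpy as np

def feedback_parameter(precip, veg, lag=1):
    # Simple lagged-covariance estimator of the vegetation feedback:
    # lambda_hat = cov(P_t, V_{t-lag}) / var(V_{t-lag}).
    # A negative value indicates a negative vegetation feedback.
    p = np.asarray(precip, float)
    v = np.asarray(veg, float)
    p_t, v_lag = p[lag:], v[:-lag]
    cov_pv = np.mean((p_t - p_t.mean()) * (v_lag - v_lag.mean()))
    return cov_pv / v_lag.var()

# Synthetic test case: red-noise vegetation cover, and precipitation that
# responds negatively (coefficient -0.5) to last year's vegetation.
rng = np.random.default_rng(0)
n = 5000
v = np.zeros(n)
for t in range(1, n):
    v[t] = 0.8 * v[t - 1] + rng.standard_normal()
p = np.empty(n)
p[0] = 0.0
p[1:] = -0.5 * v[:-1] + 0.2 * rng.standard_normal(n - 1)
lam = feedback_parameter(p, v)  # should recover roughly -0.5
```

On this constructed system the estimator recovers the prescribed feedback strength; in the paper the same idea is applied to simulated vegetation and precipitation fields.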
Choi, Ok Ran; Lim, In Kyoung
2011-04-08
Highlights: {yields} Reduced p21 expression in senescent cells treated with DNA damaging agents. {yields} Increase of [{sup 3}H]thymidine and BrdU incorporations in DNA damaged-senescent cells. {yields} Upregulation of miR-93 expression in senescent cells in response to DSB. {yields} Failure of p53 binding to p21 promoter in senescent cells in response to DSB. {yields} Molecular mechanism of increased cancer development in aged compared with young individuals. -- Abstract: To answer what is a critical event for the higher incidence of tumor development in old than in young individuals, primary cultures of human diploid fibroblasts were employed and DNA damage was induced by doxorubicin or X-ray irradiation. Response to the damage differed between young and old cells: loss of p21{sup sdi1} expression in spite of p53{sup S15} activation in old cells, along with [{sup 3}H]thymidine and BrdU incorporation, but not in young cells. The phenomenon was confirmed in fibroblasts obtained from other tissues and donors of different ages. Induction of miR-93 expression and reduced p53 binding to the p21 gene promoter account for the loss of p21{sup sdi1} expression in senescent cells after DNA damage, suggesting a mechanism of in vivo carcinogenesis in aged tissue without repair arrest.
Muir, Ryan D.; Kissick, David J.; Simpson, Garth J.
2012-01-01
Data from photomultiplier tubes are typically analyzed using either counting or averaging techniques, which are most accurate in the dim and bright signal limits, respectively. A statistical means of adjoining these two techniques is presented by recovering the Poisson parameter from averaged data and relating it to the statistics of binomial counting from Kissick et al. [Anal. Chem. 82, 10129 (2010)]. The point at which binomial photon counting and averaging have equal signal to noise ratios is derived. Adjoining these two techniques generates signal to noise ratios ranging from 87% to nearly 100% of the theoretical maximum across the full dynamic range of the photomultiplier tube used. The technique is demonstrated in a second harmonic generation microscope. PMID:22535131
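The counting/averaging relationship can be illustrated with a toy Monte Carlo (a generic sketch, not the authors' analysis): thresholded "counting" recovers the Poisson parameter from the fraction of windows containing at least one photon, while "averaging" uses the window mean directly.

```python
import math, random

def poisson_sample(lam, rng):
    # Knuth's multiplication method; fine for small means.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(1)
lam_true = 0.8           # mean photons per detection window (made-up value)
N = 200_000
windows = [poisson_sample(lam_true, rng) for _ in range(N)]

# Counting: a window with >= 1 photon yields a single count, so the hit
# fraction estimates P(k >= 1) = 1 - exp(-lam); invert to recover lam.
hit_frac = sum(1 for k in windows if k >= 1) / N
lam_count = -math.log(1.0 - hit_frac)

# Averaging: the sample mean estimates lam directly.
lam_avg = sum(windows) / N
```

Both estimators agree in this intermediate regime; counting saturates as lam grows (hit fraction approaches 1), which is where averaging takes over.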
NASA Astrophysics Data System (ADS)
Frossard, L.; Rieder, H. E.; Ribatet, M.; Staehelin, J.; Maeder, J. A.; Di Rocco, S.; Davison, A. C.; Peter, T.
2013-01-01
We use statistical models for mean and extreme values of total column ozone to analyze "fingerprints" of atmospheric dynamics and chemistry on long-term ozone changes at northern and southern mid-latitudes on a grid cell basis. At each grid cell, the r-largest order statistics method is used for the analysis of extreme events in low and high total ozone (termed ELOs and EHOs, respectively), and an autoregressive moving average (ARMA) model is used for the corresponding mean value analysis. In order to describe the dynamical and chemical state of the atmosphere, the statistical models include important atmospheric covariates: the solar cycle, the Quasi-Biennial Oscillation (QBO), ozone depleting substances (ODS) in terms of equivalent effective stratospheric chlorine (EESC), the North Atlantic Oscillation (NAO), the Antarctic Oscillation (AAO), the El Niño/Southern Oscillation (ENSO), and aerosol load after the volcanic eruptions of El Chichón and Mt. Pinatubo. The influence of the individual covariates on mean and extreme levels in total column ozone is derived on a grid cell basis. The results show that "fingerprints", i.e., significant influence, of dynamical and chemical features are captured in both the "bulk" and the tails of the statistical distribution of ozone, respectively described by mean values and EHOs/ELOs. While results for the solar cycle, QBO, and EESC are in good agreement with findings of earlier studies, unprecedented spatial fingerprints are retrieved for the dynamical covariates. Column ozone is enhanced over Labrador/Greenland, the North Atlantic sector and over the Norwegian Sea, but is reduced over Europe, Russia and the Eastern United States during the positive NAO phase, and vice versa during the negative phase. The NAO's southern counterpart, the AAO, strongly influences column ozone at lower southern mid-latitudes, including the southern parts of South America and the Antarctic Peninsula, and the central southern mid-latitudes.
Results for both NAO and AAO confirm the importance of atmospheric dynamics for ozone variability and changes from local/regional to global scales.
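The block-extraction step underlying the EHO/ELO analysis can be sketched as follows; the block size and r below are illustrative choices, not the study's settings.

```python
def r_largest_blocks(series, block_size, r=3):
    # Split a record into blocks (e.g. years of daily observations) and
    # keep the r largest values of each block -- the input to the
    # r-largest order-statistics likelihood used for EHOs. Sorting
    # ascending instead would give the r smallest values (ELOs).
    blocks = [series[i:i + block_size]
              for i in range(0, len(series) - block_size + 1, block_size)]
    return [sorted(block, reverse=True)[:r] for block in blocks]

two_years = list(range(20))                  # two toy "years" of 10 values
top2 = r_largest_blocks(two_years, 10, r=2)  # [[9, 8], [19, 18]]
```

Using r values per block instead of only the block maximum tightens the extreme-value fit at the cost of a stronger asymptotic assumption.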
NASA Astrophysics Data System (ADS)
Koukas, Ioannis; Koukoravas, Vasilis; Mantesi, Konstantina; Sakellari, Katerina; Xanthopoulou, Themis-Demetra; Zarkadoulas, Akis; Markonis, Yannis; Papalexiou, Simon Michael; Koutsoyiannis, Demetris
2014-05-01
The statistical properties of over 300 different proxy records of the last two thousand years derived from the PAGES 2k database are stochastically analysed. Analyses include estimation of their first four moments and their autocorrelation functions (ACF), as well as determination of the presence of Hurst-Kolmogorov behaviour (also known as long-term persistence). The data are investigated in groups according to their proxy type and location, while their statistical properties are also compared to those of the final temperature reconstructions. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
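The moment, ACF, and Hurst-Kolmogorov estimates mentioned above can be sketched as follows; the aggregated-variance Hurst estimator is one common choice, not necessarily the one used in the study.

```python
import numpy as np

def first_four_moments(x):
    # Mean, variance, skewness and kurtosis of a proxy record.
    x = np.asarray(x, float)
    m, s = x.mean(), x.std()
    z = (x - m) / s
    return m, s**2, np.mean(z**3), np.mean(z**4)

def acf(x, max_lag):
    # Sample autocorrelation function up to max_lag.
    x = np.asarray(x, float) - np.mean(x)
    c0 = np.dot(x, x)
    return [np.dot(x[:-k], x[k:]) / c0 for k in range(1, max_lag + 1)]

def hurst_aggregated_variance(x, scales=(1, 2, 4, 8, 16, 32)):
    # Aggregated-variance estimate of the Hurst exponent H: the variance
    # of k-averaged blocks scales like k^(2H - 2).
    x = np.asarray(x, float)
    logk, logv = [], []
    for k in scales:
        n = len(x) // k
        block_means = x[:n * k].reshape(n, k).mean(axis=1)
        logk.append(np.log(k))
        logv.append(np.log(block_means.var()))
    slope = np.polyfit(logk, logv, 1)[0]
    return 1.0 + slope / 2.0

rng = np.random.default_rng(2)
white = rng.standard_normal(2**14)
h = hurst_aggregated_variance(white)  # ~0.5 for uncorrelated noise
```

A record with H significantly above 0.5 exhibits long-term persistence; proxies in the database can be screened group by group with exactly this kind of estimator.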
Nonequilibrium statistical mechanics of a two-temperature Ising ring with conserved dynamics.
Borchers, Nicholas; Pleimling, Michel; Zia, R K P
2014-12-01
The statistical mechanics of a one-dimensional Ising model in thermal equilibrium is well-established, textbook material. Yet, when driven far from equilibrium by coupling two sectors to two baths at different temperatures, it exhibits remarkable phenomena, including an unexpected "freezing by heating." These phenomena are explored through systematic numerical simulations. Our study reveals complicated relaxation processes as well as a crossover between two very different steady-state regimes. PMID:25615050
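A minimal sketch of such a driven ring, assuming Kawasaki (spin-exchange) dynamics with a Metropolis rule, J = 1, and arbitrary sector temperatures; spin exchange conserves the total magnetization, which is what "conserved dynamics" refers to in the title.

```python
import math, random

def kawasaki_step(spins, Th, Tc, rng):
    # One attempted nearest-neighbour spin exchange on a ring whose two
    # halves are coupled to baths at temperatures Th (hot) and Tc (cold).
    N = len(spins)
    i = rng.randrange(N)
    j = (i + 1) % N
    if spins[i] == spins[j]:
        return  # exchanging equal spins changes nothing
    # Energy change of the swap (J = 1 ferromagnet): only the two outer
    # bonds (i-1, i) and (j, j+1) are affected.
    left, right = spins[(i - 1) % N], spins[(j + 1) % N]
    dE = (spins[i] - spins[j]) * (left - right)
    T = Th if i < N // 2 else Tc  # two temperature sectors
    if dE <= 0 or rng.random() < math.exp(-dE / T):
        spins[i], spins[j] = spins[j], spins[i]

rng = random.Random(42)
N = 64
spins = [1] * (N // 2) + [-1] * (N // 2)  # half up, half down: M = 0
for _ in range(20000):
    kawasaki_step(spins, Th=5.0, Tc=0.5, rng=rng)
```

Because each sector obeys detailed balance at its own temperature while exchanging spins with the other, the composite system settles into a genuinely non-equilibrium steady state rather than a Gibbs distribution.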
2010-01-01
Chemical communication mediates signaling between cells. Bacteria also engage in chemical signaling, termed quorum sensing (QS), to coordinate population-wide behavior. The bacterial pathogen enterohemorrhagic E. coli (EHEC), responsible for outbreaks of bloody diarrhea worldwide, exploits QS to promote expression of virulence factors in humans. Although EHEC is a human pathogen, it is a member of the gastrointestinal (GI) flora in cattle, the main reservoir for this bacterium. EHEC cattle colonization requires SdiA, a QS transcription factor that uses acyl-homoserine lactones (AHLs) for proper folding and function. EHEC harbors SdiA but does not produce AHLs, and consequently has to sense AHLs produced by other bacterial species. We recently showed that SdiA is necessary for efficient EHEC passage through the bovine GI tract, and that AHLs are prominent within the cattle rumen but absent from the other sections of the GI tract. EHEC utilizes the locus of enterocyte effacement (LEE) to colonize the recto-anal junction of cattle, and the glutamate decarboxylase (gad) system to colonize cows. Transcription of the LEE genes is decreased by rumen AHLs through SdiA, while transcription of the gad acid resistance system is increased. It would be expensive for EHEC to express the LEE genes in the rumen, where they are not necessary. However, in preparation for the acidic distal stomachs, the EHEC gad system is activated in the rumen. Hence AHL signaling through SdiA aids EHEC in gauging these environments and modulates gene expression towards adaptation to a commensal lifestyle in cattle.1 Inasmuch as EHEC is largely prevalent in cattle herds, interference with SdiA-mediated QS inhibition of cattle colonization could be an attractive approach to diminish contamination of food products due to cattle shedding of this pathogen. PMID:21468228
NASA Technical Reports Server (NTRS)
Kozyra, J. U.; Cravens, T. E.; Nagy, A. F.; Brace, L. H.
1986-01-01
A statistical study of the subauroral electron temperature enhancement was undertaken using Langmuir probe observations during 488 traversals of the midlatitude plasmapause region by the DE-2 satellite. The subauroral electron temperature enhancement on the nightside is a quasi-permanent feature at all altitudes between 350 and 1000 km with an occurrence frequency that depends on altitude. The occurrence frequency of the subauroral electron temperature peak has a strong altitude dependence on the dayside. The position of the subauroral Te peak decreases with increasing magnetic activity in a manner similar to that of the equatorial plasmapause and other midlatitude plasmapause signatures.
Ohnuki, Shinsuke; Enomoto, Kenichi; Yoshimoto, Hiroyuki; Ohya, Yoshikazu
2014-03-01
The vitality of brewing yeasts has been used to monitor their physiological state during fermentation. To investigate the fermentation process, we used the image processing software, CalMorph, which generates morphological data on yeast mother cells and bud shape, nuclear shape and location, and actin distribution. We found that 248 parameters changed significantly during fermentation. Successive use of principal component analysis (PCA) revealed several important features of yeast, providing insight into the dynamic changes in the yeast population. First, PCA indicated that much of the observed variability in the experiment was summarized in just two components: a change with a peak and a change over time. Second, PCA indicated the independent and important morphological features responsible for dynamic changes: budding ratio, nucleus position, neck position, and actin organization. Thus, the large amount of data provided by imaging analysis can be used to monitor the fermentation processes involved in beer and bioethanol production. PMID:24012106
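The PCA step can be sketched with a small SVD-based implementation on synthetic data standing in for the CalMorph parameters; the loadings and noise level below are made up.

```python
import numpy as np

def pca(X, n_components=2):
    # PCA of an (observations x parameters) matrix via SVD of the
    # centred data; returns component scores and the fraction of total
    # variance carried by each component.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    explained = S**2 / np.sum(S**2)
    return scores, explained[:n_components]

# Toy stand-in for morphological data: 100 time points of 5 parameters
# dominated by one latent trend (e.g. a budding-ratio drift during
# fermentation) plus measurement noise.
rng = np.random.default_rng(3)
t = np.linspace(-1.0, 1.0, 100)
weights = np.array([3.0, 2.0, 1.5, 1.0, 0.5])  # hypothetical loadings
X = np.outer(t, weights) + 0.1 * rng.standard_normal((100, 5))
scores, frac = pca(X, n_components=2)
```

When, as here, one latent factor drives many correlated parameters, the first component absorbs most of the variance, which is the effect the authors exploit to summarize 248 parameters in two components.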
Dynamics and Statistical Mechanics of Rotating and non-Rotating Vortical Flows
Lim, Chjan
2013-12-18
Three projects were analyzed with the overall aim of developing a computational/analytical model for estimating values of the energy, angular momentum, enstrophy and total variation of fluid height at phase transitions between disordered and self-organized flow states in planetary atmospheres. It is believed that these transitions in equilibrium statistical mechanics models play a role in the construction of large-scale, stable structures including super-rotation in the Venusian atmosphere and the formation of the Great Red Spot on Jupiter. Exact solutions of the spherical energy-enstrophy models for rotating planetary atmospheres by Kac's method of steepest descent predicted phase transitions to super-rotating solid-body flows at high energy to enstrophy ratio for all planetary spins and to sub-rotating modes if the planetary spin is large enough. These canonical statistical ensembles are well-defined for the long-range energy interactions that arise from 2D fluid flows on compact oriented manifolds such as the surface of the sphere and torus. This is because, in the Fourier space available through Hodge theory, the energy terms are exactly diagonalizable and hence have zero range, leading to well-defined heat baths.
Bahlmann, Claus; Burkhardt, Hans
2004-03-01
In this paper, we give a comprehensive description of our writer-independent online handwriting recognition system frog on hand. The focus of this work concerns the presentation of the classification/training approach, which we call cluster generative statistical dynamic time warping (CSDTW). CSDTW is a general, scalable, HMM-based method for variable-sized, sequential data that holistically combines cluster analysis and statistical sequence modeling. It can handle general classification problems that rely on this sequential type of data, e.g., speech recognition, genome processing, robotics, etc. Contrary to previous attempts, clustering and statistical sequence modeling are embedded in a single feature space and use a closely related distance measure. We show character recognition experiments of frog on hand using CSDTW on the UNIPEN online handwriting database. The recognition accuracy is significantly higher than reported results of other handwriting recognition systems. Finally, we describe the real-time implementation of frog on hand on a Linux Compaq iPAQ embedded device. PMID:15376878
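The dynamic time warping backbone of CSDTW can be sketched with the classic dynamic-programming recursion; this is a generic DTW distance, without the cluster-generative statistical modeling layered on top of it in the paper.

```python
import math

def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    # Dynamic time warping distance between two variable-length
    # sequences: the minimum cumulative local distance over all
    # monotone alignments, computed by dynamic programming.
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(a[i - 1], b[j - 1])
            D[i][j] = c + min(D[i - 1][j],      # insertion
                              D[i][j - 1],      # deletion
                              D[i - 1][j - 1])  # match
    return D[n][m]

# A pen stroke sampled at different rates aligns with zero cost:
d = dtw_distance([0, 1, 2, 1, 0], [0, 1, 1, 2, 1, 0])  # -> 0.0
```

In handwriting recognition the scalar samples would be feature vectors and `dist` a vector norm; CSDTW additionally replaces the fixed reference sequence with cluster-dependent statistical models.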
Anisotropy and shear-layer edge dynamics of statistically unsteady, stratified turbulence
NASA Astrophysics Data System (ADS)
Wingstedt, E. M. M.; Fossum, H. E.; Pettersson Reif, B. A.; Werne, J.
2015-06-01
Direct numerical simulation data of an evolving Kelvin-Helmholtz instability have been analyzed in order to characterize the dynamic and kinematic response of shear-generated turbulent flow to imposed stable stratification. Particular emphasis was put on anisotropy and shear-layer edge dynamics in the net kinetic energy decay phase of the Kelvin-Helmholtz evolution. Results indicate a faster increase of small-scale anisotropy compared to large-scale anisotropy. Also, the anisotropy of thermal dissipation differs significantly from that of viscous dissipation. It is found that the Reynolds stress anisotropy increases up to a stratification level roughly corresponding to Rig ≈ 0.4, but subsequently decreases for higher levels of stratification, most likely due to relaminarization. Coherent large-scale turbulence structures are cylindrical in the center of the shear layer, whereas they become ellipsoidal in the strongly stratified edge-layer region. The structures of the Reynolds stresses are highly one-componental in the center and turn two-componental as stratification increases. Stratification affects all scales, but it seems to affect larger scales to a higher degree than smaller scales and thermal scales more strongly than momentum scales. The effect of strong stable stratification at the edge of the shear layer is highly reminiscent of the non-local pressure effects of solid walls. However, the kinematic blocking inherently associated with impermeable walls is not observed in the edge layer. Vertical momentum flux reversal is found in part of the shear layer. The roles of shear and buoyant production of turbulence kinetic energy are exchanged, and shear production is transferring energy into the mean flow field, which contributes to relaminarization. The change in dynamics near the edge of the shear layer has important implications for predictive turbulence model formulations.
Temporal Dynamics and Nonclassical Photon Statistics of Quadratically Coupled Optomechanical Systems
NASA Astrophysics Data System (ADS)
Singh, Shailendra Kumar; Muniandy, S. V.
2016-01-01
A quantum optomechanical system serves as an interface for coupling between photons and phonons due to mechanical oscillations. We used the Heisenberg-Langevin approach under the Markovian white noise approximation to study a quadratically coupled optomechanical system which contains a thin dielectric membrane quadratically coupled to the cavity field. A decorrelation method is employed to solve for a larger number of coupled equations. Transient mean numbers of cavity photons and phonons that provide dynamical behaviour are computed for different coupling regimes. We have also obtained the two-boson second-order correlation functions for the cavity field, membrane oscillator and their cross correlations that provide nonclassical properties governed by the quadratically coupled optomechanical system.
DYNAMIC STABILITY OF THE SOLAR SYSTEM: STATISTICALLY INCONCLUSIVE RESULTS FROM ENSEMBLE INTEGRATIONS
Zeebe, Richard E.
2015-01-01
Due to the chaotic nature of the solar system, the question of its long-term stability can only be answered in a statistical sense, for instance, based on numerical ensemble integrations of nearby orbits. Destabilization of the inner planets, leading to close encounters and/or collisions can be initiated through a large increase in Mercury's eccentricity, with a currently assumed likelihood of ~1%. However, little is known at present about the robustness of this number. Here I report ensemble integrations of the full equations of motion of the eight planets and Pluto over 5 Gyr, including contributions from general relativity. The results show that different numerical algorithms lead to statistically different results for the evolution of Mercury's eccentricity (e{sub M}). For instance, starting at present initial conditions (e{sub M} ≃ 0.21), Mercury's maximum eccentricity achieved over 5 Gyr is, on average, significantly higher in symplectic ensemble integrations using heliocentric rather than Jacobi coordinates and stricter error control. In contrast, starting at a possible future configuration (e{sub M} ≃ 0.53), Mercury's maximum eccentricity achieved over the subsequent 500 Myr is, on average, significantly lower using heliocentric rather than Jacobi coordinates. For example, the probability for e{sub M} to increase beyond 0.53 over 500 Myr is >90% (Jacobi) versus only 40%-55% (heliocentric). This poses a dilemma because the physical evolution of the real system—and its probabilistic behavior—cannot depend on the coordinate system or the numerical algorithm chosen to describe it. Some tests of the numerical algorithms suggest that symplectic integrators using heliocentric coordinates underestimate the odds for destabilization of Mercury's orbit at high initial e{sub M}.
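How fragile a ~1% probability estimated from a finite ensemble is can be quantified with a binomial (Wilson score) confidence interval; this is a generic statistical sketch, not a calculation from the paper.

```python
import math

def wilson_interval(k, n, z=1.96):
    # 95% Wilson score interval for a probability estimated from k
    # "destabilized" runs out of n ensemble members. Better behaved
    # than the naive p +/- z*sqrt(p(1-p)/n) when k is small.
    p = k / n
    denom = 1.0 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_interval(1, 100)  # one destabilization in 100 runs
```

With a single destabilizing run out of 100, the 95% interval spans roughly half an order of magnitude around 1%, which is why the abstract stresses that little is known about the robustness of that number.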
NASA Astrophysics Data System (ADS)
Avissar, Roni
1991-03-01
Land surface interacts strongly with the atmosphere at all scales. This has a considerable impact on the hydrologic cycle and the climate. Therefore, in order to produce realistic simulations with climate models, their land-surface processes must be parameterized accurately. Because continental surfaces are usually extremely heterogeneous over the resolvable scales considered in these models, surface parameterizations based on the ‘big leaf-big stoma’ approach (which assume grid-scale homogeneity) fail to represent the land-atmosphere interactions that occur at much smaller scales. A parameterization based on a statistical-dynamical approach is suggested here. With this approach, each surface grid element of the numerical model is divided into homogeneous land patches (i.e., patches with similar internal heterogeneity). Assuming that horizontal fluxes between the different patches within a grid element are small as compared to the vertical fluxes, patches of the same type located at different places in the grid can be regrouped into one subgrid surface class. Then, for each one of the subgrid surface classes, probability density functions (pdfs) are used to characterize the variability of the different parameters of the soil-plant-atmosphere system. These pdfs are combined with the equations of the model that describe the dynamics and the conservation of energy and mass in the atmosphere. The potential application of this statistical-dynamical parameterization is illustrated by simulating (i) the development of an agricultural area in an arid region and (ii) the process of deforestation in a tropical region. Both cases emphasize the importance of land-atmosphere interactions on regional hydrologic processes and climate.
Financial price dynamics and pedestrian counterflows: A comparison of statistical stylized facts
NASA Astrophysics Data System (ADS)
Parisi, Daniel R.; Sornette, Didier; Helbing, Dirk
2013-01-01
We propose and document the evidence for an analogy between the dynamics of granular counterflows in the presence of bottlenecks or restrictions and financial price formation processes. Using extensive simulations, we find that the counterflows of simulated pedestrians through a door display eight stylized facts observed in financial markets when the density around the door is compared with the logarithm of the price. Finding so many stylized facts is very rare indeed among all agent-based models of financial markets. The stylized properties are present when the agents in the pedestrian model are assumed to display a zero-intelligent behavior. If agents are given decision-making capacity and adapt to partially follow the majority, periods of herding behavior may additionally occur. This generates the very slow decay of the autocorrelation of absolute return due to an intermittent dynamics. Our findings suggest that the stylized facts in the fluctuations of the financial prices result from a competition of two groups with opposite interests in the presence of a constraint funneling the flow of transactions to a narrow band of prices with limited liquidity.
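Two of the stylized facts alluded to above, uncorrelated returns combined with slowly decaying correlation of their magnitudes (volatility clustering), can be reproduced with a toy stochastic-volatility series; all parameters below are illustrative.

```python
import numpy as np

def acf1(x):
    # Lag-1 sample autocorrelation.
    x = np.asarray(x, float) - np.mean(x)
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# Toy volatility-clustering process: returns are white noise, but their
# magnitude is modulated by a slowly varying (persistent) volatility.
rng = np.random.default_rng(7)
n = 20000
logvol = np.zeros(n)
for t in range(1, n):
    logvol[t] = 0.98 * logvol[t - 1] + 0.1 * rng.standard_normal()
r = np.exp(logvol) * rng.standard_normal(n)

raw = acf1(r)           # ~0: the returns themselves are uncorrelated
absr = acf1(np.abs(r))  # clearly positive: volatility clustering
```

In the paper the role of the log price is played by the density around the door, and the same pair of diagnostics (vanishing return autocorrelation, persistent absolute-return autocorrelation) is among the eight stylized facts checked.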
How electronic dynamics with Pauli exclusion produces Fermi-Dirac statistics
Nguyen, Triet S.; Nanguneri, Ravindra; Parkhill, John
2015-04-07
It is important that any dynamics method approaches the correct population distribution at long times. In this paper, we derive a one-body reduced density matrix dynamics for electrons in energetic contact with a bath. We obtain a remarkable equation of motion which shows that in order to reach equilibrium properly, rates of electron transitions depend on the density matrix. Even though the bath drives the electrons towards a Boltzmann distribution, hole blocking factors in our equation of motion cause the electronic populations to relax to a Fermi-Dirac distribution. These factors are an old concept, but we show how they can be derived with a combination of time-dependent perturbation theory and the extended normal ordering of Mukherjee and Kutzelnigg for a general electronic state. The resulting non-equilibrium kinetic equations generalize the usual Redfield theory to many-electron systems, while ensuring that the orbital occupations remain between zero and one. In numerical applications of our equations, we show that relaxation rates of molecules are not constant because of the blocking effect. Other applications to model atomic chains are also presented which highlight the importance of treating both dephasing and relaxation. Finally, we show how the bath localizes the electron density matrix.
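The hole-blocking mechanism can be sketched with a small rate-equation integration; this is an illustrative model with made-up level energies, not the authors' derivation. The bath rates obey detailed balance at temperature T, yet the (1 - n) blocking factors drive the occupations to a Fermi-Dirac fixed point.

```python
import math

def relax(energies, n0, T=1.0, dt=0.01, steps=20000):
    # Euler integration of the Pauli-blocked master equation
    #   dn_i/dt = sum_j [ W_ji n_j (1 - n_i) - W_ij n_i (1 - n_j) ],
    # with W_ij / W_ji = exp(-(e_j - e_i)/T) (detailed balance).
    # The antisymmetric form conserves the total particle number.
    K = len(energies)
    W = [[0.0 if i == j else
          math.exp(-max(energies[j] - energies[i], 0.0) / T)
          for j in range(K)] for i in range(K)]
    n = list(n0)
    for _ in range(steps):
        dn = [sum(W[j][i] * n[j] * (1.0 - n[i])
                  - W[i][j] * n[i] * (1.0 - n[j]) for j in range(K))
              for i in range(K)]
        n = [ni + dt * di for ni, di in zip(n, dn)]
    return n

levels = [0.0, 0.5, 1.0, 1.5]        # hypothetical level energies
occ = relax(levels, [0.5, 0.5, 0.5, 0.5])
# At the Fermi-Dirac fixed point, log(n/(1-n)) + e/T is the same for
# every level (it equals mu/T).
mu_over_T = [math.log(ni / (1.0 - ni)) + e for ni, e in zip(occ, levels)]
```

Without the (1 - n) factors the same rates would relax the populations to a Boltzmann distribution, which is exactly the contrast the abstract draws.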
NASA Astrophysics Data System (ADS)
Jacquelin, E.; Adhikari, S.; Sinou, J.-J.; Friswell, M. I.
2015-11-01
Polynomial chaos solution for the frequency response of linear non-proportionally damped dynamic systems has been considered. It has been observed that for lightly damped systems the convergence of the solution can be very poor in the vicinity of the deterministic resonance frequencies. To address this, Aitken's transformation and its generalizations are suggested. The proposed approach is successfully applied to the sequences defined by the first two moments of the responses, and this process significantly accelerates the polynomial chaos convergence. In particular, a 2-dof system with one and then two uncertain parameters has been studied. The first two moments of the frequency response were calculated by Monte Carlo simulation, polynomial chaos expansion and Aitken's transformation of the polynomial chaos expansion. Whereas 200 polynomials are required to have a good agreement with Monte Carlo results around the deterministic eigenfrequencies, fewer than 50 polynomials transformed by Aitken's method are enough. This result is further improved if a generalization of Aitken's method (recursive Aitken's transformation, Shanks transformation) is applied. With the proposed convergence acceleration, polynomial chaos may be reconsidered as an efficient method to estimate the first two moments of a random dynamic response.
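Aitken's delta-squared transformation itself is compact; below it is demonstrated on the alternating harmonic series, an illustrative slowly converging sequence rather than the paper's polynomial chaos moment sequences.

```python
import math

def aitken(seq):
    # Aitken's delta-squared transformation: from three consecutive
    # terms, extrapolate the limit of a linearly converging sequence
    #   A_n = s_n - (s_{n+1} - s_n)^2 / (s_{n+2} - 2 s_{n+1} + s_n).
    return [s0 - (s1 - s0) ** 2 / (s2 - 2 * s1 + s0)
            for s0, s1, s2 in zip(seq, seq[1:], seq[2:])]

# Example: partial sums of the alternating harmonic series, which
# converge to ln 2 only at rate O(1/n).
partial, s = [], 0.0
for k in range(1, 12):
    s += (-1) ** (k + 1) / k
    partial.append(s)

err_raw = abs(partial[-1] - math.log(2))
err_acc = abs(aitken(partial)[-1] - math.log(2))  # much smaller
```

A single pass already gains several digits; applying the transformation recursively, or using the more general Shanks transformation, accelerates further, which is the generalization the abstract mentions.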
Nichols, J.M.; Moniz, L.; Nichols, J.D.; Pecora, L.M.; Cooch, E.
2005-01-01
A number of important questions in ecology involve the possibility of interactions or "coupling" among potential components of ecological systems. The basic question of whether two components are coupled (exhibit dynamical interdependence) is relevant to investigations of movement of animals over space, population regulation, food webs and trophic interactions, and is also useful in the design of monitoring programs. For example, in spatially extended systems, coupling among populations in different locations implies the existence of redundant information in the system and the possibility of exploiting this redundancy in the development of spatial sampling designs. One approach to the identification of coupling involves study of the purported mechanisms linking system components. Another approach is based on time series of two potential components of the same system and, in previous ecological work, has relied on linear cross-correlation analysis. Here we present two different attractor-based approaches, continuity and mutual prediction, for determining the degree to which two population time series (e.g., at different spatial locations) are coupled. Both approaches are demonstrated on a one-dimensional predator-prey model system exhibiting complex dynamics. Of particular interest is the spatial asymmetry introduced into the model as a linearly declining resource for the prey over the domain of the spatial coordinate. Results from these approaches are then compared to the more standard cross-correlation analysis. In contrast to cross-correlation, both continuity and mutual prediction are clearly able to discern the asymmetry in the flow of information through this system.
How electronic dynamics with Pauli exclusion produces Fermi-Dirac statistics.
Nguyen, Triet S; Nanguneri, Ravindra; Parkhill, John
2015-04-01
It is important that any dynamics method approaches the correct population distribution at long times. In this paper, we derive a one-body reduced density matrix dynamics for electrons in energetic contact with a bath. We obtain a remarkable equation of motion which shows that in order to reach equilibrium properly, rates of electron transitions depend on the density matrix. Even though the bath drives the electrons towards a Boltzmann distribution, hole blocking factors in our equation of motion cause the electronic populations to relax to a Fermi-Dirac distribution. These factors are an old concept, but we show how they can be derived with a combination of time-dependent perturbation theory and the extended normal ordering of Mukherjee and Kutzelnigg for a general electronic state. The resulting non-equilibrium kinetic equations generalize the usual Redfield theory to many-electron systems, while ensuring that the orbital occupations remain between zero and one. In numerical applications of our equations, we show that relaxation rates of molecules are not constant because of the blocking effect. Other applications to model atomic chains are also presented which highlight the importance of treating both dephasing and relaxation. Finally, we show how the bath localizes the electron density matrix. PMID:25854234
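The role of the hole-blocking (Pauli) factors can be illustrated with a minimal occupation-number master equation coupled to a thermal bath: detailed-balance rates alone would drive the populations toward a Boltzmann distribution, but the (1 - n) blocking factors force the fixed point to be Fermi-Dirac. This is a schematic classical-rate sketch, not the authors' density-matrix theory; the energies, temperature, and rates are arbitrary assumptions.

```python
import numpy as np

E = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # level energies (arbitrary units)
kT = 0.5

# Metropolis-type bath rates W[i, j] for the transition j -> i, obeying
# detailed balance W[i, j] / W[j, i] = exp(-(E_i - E_j) / kT).
W = np.minimum(1.0, np.exp(-(E[:, None] - E[None, :]) / kT))
np.fill_diagonal(W, 0.0)

n = np.array([1.0, 1.0, 0.0, 0.0, 0.0])   # two particles, lowest levels filled
for _ in range(40000):
    # gain into level i is blocked by (1 - n_i); loss is blocked by holes elsewhere
    dn = (1 - n) * (W @ n) - n * (W.T @ (1 - n))
    n += 0.01 * dn                         # forward-Euler step; conserves sum(n)

# The steady state should be Fermi-Dirac, n_i = 1 / (exp((E_i - mu)/kT) + 1),
# with mu fixed by particle-number conservation (sum n = 2 here): bisection.
lo, hi = -5.0, 5.0
for _ in range(60):
    mu = 0.5 * (lo + hi)
    if (1.0 / (np.exp((E - mu) / kT) + 1.0)).sum() > 2.0:
        hi = mu
    else:
        lo = mu
fd = 1.0 / (np.exp((E - mu) / kT) + 1.0)
print(np.abs(n - fd).max())
```

Dropping the two (1 - n) factors in `dn` makes the same code relax to a Boltzmann distribution instead, which is the contrast the abstract emphasizes.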
SU-E-J-261: Statistical Analysis and Chaotic Dynamics of Respiratory Signal of Patients in BodyFix
Michalski, D; Huq, M; Bednarz, G; Lalonde, R; Yang, Y; Heron, D
2014-06-01
Purpose: To quantify the respiratory signal of patients in BodyFix undergoing 4DCT scans with and without the immobilization cover. Methods: 20 pairs of respiratory tracks recorded with the RPM system during 4DCT scans were analyzed. Descriptive statistics were applied to selected parameters of the exhale-inhale decomposition. Standardized signals were used with the delay method to build orbits in embedded space. Nonlinear behavior was tested with surrogate data. Sample entropy (SE), Lempel-Ziv complexity (LZC) and the largest Lyapunov exponents (LLE) were compared. Results: Statistical tests show a difference between scans for inspiration time and its variability, which is larger for scans without the cover. The same holds for the variability of the end of exhalation and inhalation. Other parameters fail to show a difference. For both scans the respiratory signals show determinism and nonlinear stationarity. Statistical tests on surrogate data reveal their nonlinearity. The LLEs show the signals' chaotic nature and its correlation with the breathing period and its embedding delay time. SE, LZC and LLE measure respiratory signal complexity. Nonlinear characteristics do not differ between scans. Conclusion: Contrary to expectation, the cover applied to patients in BodyFix appears to have a limited effect on signal parameters. Analysis based on trajectories of delay vectors shows the respiratory system's nonlinear character and its sensitive dependence on initial conditions. Reproducibility of the respiratory signal can be evaluated with measures of signal complexity and its predictability window. A longer respiratory period is conducive to signal reproducibility, as shown by these gauges. Statistical independence of the exhale and inhale times is also supported by the magnitude of the LLE. The nonlinear parameters seem more appropriate for gauging respiratory signal complexity, given its deterministic chaotic nature; this contrasts with measures based on harmonic analysis, which are blind to nonlinear features.
Dynamics of breathing, so crucial for 4D-based clinical technologies, can be better controlled if a nonlinear-based methodology, which reflects the character of respiration, is applied. Funding provided by Varian Medical Systems via an Investigator Initiated Research Project.
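Sample entropy, one of the complexity gauges compared here, can be sketched in a few lines. SampEn(m, r) is the negative log of the conditional probability that templates matching for m points (within tolerance r) also match for m + 1 points; lower values mean a more regular, predictable signal. The toy signals below are illustrative stand-ins, not patient respiratory traces.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) with tolerance r given as a fraction of the signal SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count(mm):
        # number of template pairs of length mm within Chebyshev distance tol
        templ = np.array([x[i:i + mm] for i in range(len(x) - m)])
        c = 0
        for i in range(len(templ)):
            d = np.abs(templ[i + 1:] - templ[i]).max(axis=1)
            c += int((d <= tol).sum())
        return c
    B, A = count(m), count(m + 1)
    return -np.log(A / B)

t = np.arange(3000)
periodic = np.sin(2 * np.pi * t / 25)                   # regular, breathing-like
noisy = np.random.default_rng(1).standard_normal(3000)  # white noise
se_p = sample_entropy(periodic)
se_n = sample_entropy(noisy)
print(se_p, se_n)
```

A regular signal produces a much lower SampEn than noise, which is how such gauges rank the reproducibility of respiratory traces.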
NASA Astrophysics Data System (ADS)
Calzavarini, Enrico; Volk, Romain; Lévêque, Emmanuel; Pinton, Jean-François; Toschi, Federico
2012-02-01
We study by means of an Eulerian-Lagrangian model the statistical properties of the velocity and acceleration of a neutrally buoyant finite-sized particle in a statistically homogeneous and isotropic turbulent flow. The particle equation of motion, besides added mass and steady Stokes drag, takes into account the unsteady Stokes drag force, known as the Basset-Boussinesq history force, and the non-Stokesian drag based on the Schiller-Naumann parametrization, together with the finite-size Faxén corrections. We focus on the case of flow at low Taylor-Reynolds number, Reλ ≈ 31, for which fully resolved numerical data that can be taken as a reference are available [Homann H., Bec J. Finite-size effects in the dynamics of neutrally buoyant particles in turbulent flow. J Fluid Mech 651 (2010) 81-91]. Remarkably, we show that while drag forces always have minor effects on the acceleration statistics, they play an important role in the velocity behavior. We also propose that the scaling relation for the particle velocity variance as a function of its size, first detected in fully resolved simulations, does not originate from inertial-scale properties of the background turbulent flow but likely arises from the non-Stokesian component of the drag produced by the wake behind the particle. Furthermore, by means of comparison with fully resolved simulations, we show that the Faxén correction to the added mass has a dominant role in the particle acceleration statistics even for particles whose size attains the integral scale.
Po river plume patterns variability and dynamics: a numerical modeling and statistical approach.
NASA Astrophysics Data System (ADS)
Falcieri, Francesco M.; Benetazzo, Alvise; Bergamasco, Andrea; Bonaldo, Davide; Carniel, Sandro; Sclavo, Mauro; Russo, Aniello
2013-04-01
Processes and dynamics of estuarine-shelf environments are defined by many drivers, some of the most important being riverine inputs, winds (and wind-driven currents) and tides. Two of these are directly involved in the formation and spatial evolution of a coastal river plume: on the one hand the amount of fresh water entering the sea through river discharge, on the other the direction and intensity of the winds blowing over the domain. The Adriatic Sea is generally considered a dilution basin due to the large amount of freshwater inputs it receives. These inputs have a significant influence on the basin, both from a physical point of view (by affecting buoyancy) and on its biogeochemical characteristics (by introducing large quantities of nutrients, which sustain primary production in the areas affected by the rivers' plumes). The Po River (mean daily discharge between 275 and 11600 m3/s, yearly mean of 1500 m3/s) is the single largest freshwater source of the Adriatic; its discharge results in a plume that directly influences the characteristics of the coastal areas of the whole northern sub-basin and as far south as Ancona. The development of strong lateral gradients in salinity is an all-year-round driver (particularly in spring and autumn) of the general and coastal circulation, and influences the vertical structure of the water column and important processes such as the formation of the Northern Adriatic Dense Water. The Po plume generally follows two major patterns of evolution: southward along the Italian coasts in a ribbon that can fill the whole water column, or across the northern part of the basin toward the Istrian coasts in a generally more stratified condition. A model-based assessment, albeit semi-quantitative, of the dynamics and variability of the Po plume has not yet been reported in the literature. In this work we investigated its dynamics by means of an 8-year (2003-2010) numerical simulation with the Regional Ocean Modelling System (ROMS).
The model has been implemented on a 2 km regular grid, with surface fluxes from a high-resolution meteorological model (COSMO I7), open boundary conditions at the Otranto Strait from an existing operational Mediterranean model (MFSTEP), the main diurnal and semidiurnal tidal components imposed at the open boundary, and the discharges of the main rivers (including the Po) introduced as freshwater mass fluxes as measured by the river gauges closest to the rivers' mouths.
NASA Astrophysics Data System (ADS)
Eckert, Nicolas; Schläppy, Romain; Jomelli, Vincent; Naaim, Mohamed
2013-04-01
A crucial step for proposing relevant long-term mitigation measures in long-term avalanche forecasting is the accurate definition of high-return-period avalanches. Recently, "statistical-dynamical" approaches combining a numerical model with stochastic operators describing the variability of its inputs and outputs have emerged. Their main interest is to take into account the topographic dependency of snow avalanche runout distances, and to constrain the correlation structure between the model's variables by physical rules, so as to simulate the different marginal distributions of interest (pressure, flow depth, etc.) with reasonable realism. Bayesian methods have been shown to be well adapted to achieve model inference, getting rid of identifiability problems thanks to prior information. An important problem which has virtually never been considered before is the validation of the predictions resulting from a statistical-dynamical approach (or from any other engineering method for computing extreme avalanches). In hydrology, independent "fossil" data such as flood deposits in caves are sometimes confronted with design discharges corresponding to high return periods. Hence, the aim of this work is to implement a similar comparison between high-return-period avalanches obtained with a statistical-dynamical approach and independent validation data resulting from careful dendrogeomorphological reconstructions. To do so, an up-to-date statistical model based on the depth-averaged equations and the classical Voellmy friction law is used on a well-documented case study. First, parameter values resulting from another path are applied, and the dendrological validation sample shows that this approach fails to provide realistic predictions for the case study. This may be due to the strongly bounded behaviour of runouts in this case (the extreme of their distribution is identified as belonging to the Weibull attraction domain).
Second, local calibration on the available avalanche chronicle is performed with various prior distributions resulting from expert knowledge and/or other paths. For all calibrations, a very successful convergence is obtained, which confirms the robustness of the Metropolis-Hastings estimation algorithm used. This also demonstrates the interest of the Bayesian framework for aggregating information by sequential assimilation in the frequently encountered case of limited data quantity. Confrontation with the dendrological sample stresses the predominant role of the variance of the Coulombian friction coefficient distribution on predicted high-magnitude runouts. The optimal fit is obtained for a strong prior reflecting the local bounded behavior, and results in a 10-40 m difference for return periods ranging between 10 and 300 years. Implementing predictive simulations shows that this is largely within the range of magnitude of the uncertainties to be taken into account. On the other hand, the different priors tested for the turbulent friction coefficient influence predictive performance only slightly, but have a large influence on predicted velocity and flow depth distributions. All this may be of high interest for refining the calibration and predictive use of the statistical-dynamical model in any engineering application.
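The Metropolis-Hastings machinery behind such Bayesian calibrations can be sketched on a toy problem: sampling the posterior of a single parameter with a Gaussian likelihood and prior, where the exact posterior is known and can be checked. This is a generic random-walk illustration, not the avalanche model itself; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y_i ~ N(theta, sigma^2) with unknown theta; prior theta ~ N(0, 10^2)
sigma, n = 1.0, 50
y = rng.normal(1.5, sigma, n)

def log_post(theta):
    # log posterior up to a constant: Gaussian likelihood + Gaussian prior
    return -0.5 * np.sum((y - theta) ** 2) / sigma**2 - 0.5 * theta**2 / 100.0

# Random-walk Metropolis-Hastings
theta, lp = 0.0, log_post(0.0)
chain = []
for it in range(20000):
    prop = theta + 0.5 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:    # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[2000:])                  # discard burn-in

# Conjugate-normal posterior for comparison
post_var = 1.0 / (n / sigma**2 + 1.0 / 100.0)
post_mean = post_var * y.sum() / sigma**2
print(chain.mean() - post_mean, chain.std() - np.sqrt(post_var))
```

The chain reproduces the analytic posterior mean and spread, which is the kind of convergence check the calibration relies on before the model is used predictively.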
A statistical and dynamical analysis of some Winter and Summer temperature extremes in Europe
NASA Astrophysics Data System (ADS)
Andrade, Cristina; Santos, João
2013-04-01
Over the last decades Europe has been facing strong extreme events, particularly temperature extremes, with foremost influence on the economy, agriculture, water management and society in general. The study of the large-scale atmospheric mechanisms linked to their occurrence is thus significant, and is discussed here for the winter and summer seasons in this region over 50 years (1961-2010). Additionally, a canonical correlation analysis, coupled with a principal component analysis (BPCCA), is applied between the monthly mean sea level pressure fields and the monthly occurrences of four temperature extreme indices (TN10p - cold nights, TN90p - warm nights, TX90p - warm days, TX10p - cold days) within a large Euro-Atlantic sector. Each co-variability mode represents a large-scale forcing on the occurrence of those extremes. North Atlantic Oscillation-like patterns and strong anomalies in the atmospheric flow westwards of the British Isles are the leading couplings between large-scale atmospheric circulation and wintertime occurrences of both cold (warm) nights and warm (cold) days in Europe. Although the summer couplings show lower coherence between warm and cold events, their key driving mechanisms remain significant in explaining the atmospheric anomalies. To gain better insight into these extremes in both seasons, the main features of the statistical distributions of the minima (TNn and TXn) and maxima (TXx and TNx) are also analyzed. Moreover, statistically significant downward (upward) trends are detected in the occurrences of cold nights and days (warm nights and days) over the period 1961-2010 throughout Europe in winter. These tendencies are also found in summer for the cold nights and warm days, in clear agreement with the overall warming. For the summer warm nights and cold days these tendencies are weaker and their signal is geographically dependent.
This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692.
A stochastic-dynamic model for global atmospheric mass field statistics
NASA Technical Reports Server (NTRS)
Ghil, M.; Balgovind, R.; Kalnay-Rivas, E.
1981-01-01
A model that yields the spatial correlation structure of atmospheric mass-field forecast errors was developed. The model is governed by the potential vorticity equation forced by random noise. Three methods of solution were employed: (1) the solution was expanded in spherical harmonics and the correlation function was computed analytically from the expansion coefficients; (2) the finite-difference equivalent was solved using a fast Poisson solver, and the correlation function was computed by stratified sampling of the individual realizations of F(omega) and hence of phi(omega); (3) a higher-order equation for gamma was derived and solved directly in finite differences by two successive applications of the fast Poisson solver. The methods were compared for accuracy and efficiency, and the third method was chosen as clearly superior. The results agree well with the latitude dependence of observed atmospheric correlation data. The value of the parameter c sub o which gives the best fit to the data is close to the value expected from dynamical considerations.
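A fast Poisson solver of the kind mentioned can be sketched for the doubly periodic planar case, where the FFT diagonalizes the Laplacian (the paper's solver works in finite differences on the sphere; this spectral version is only an illustration of the idea). A manufactured solution checks the result.

```python
import numpy as np

n = 64
L = 2 * np.pi
x = np.arange(n) * L / n
X, Y = np.meshgrid(x, x, indexing="ij")

phi_exact = np.sin(X) * np.cos(2 * Y)   # zero-mean manufactured solution
f = -5.0 * phi_exact                    # since lap(phi) = -(1^2 + 2^2) * phi

# Solve lap(phi) = f with periodic BCs: divide by -(kx^2 + ky^2) in Fourier space
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
fhat = np.fft.fft2(f)
phihat = np.zeros_like(fhat)
nz = k2 > 0
phihat[nz] = fhat[nz] / (-k2[nz])       # zero mode pinned to 0 (mean-free gauge)
phi = np.real(np.fft.ifft2(phihat))

err = np.abs(phi - phi_exact).max()
print(err)
```

For a right-hand side made of resolved Fourier modes the recovery is exact to machine precision, which is why such solvers are attractive inside a sampling loop over many noise realizations.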
Research on non-contact static and dynamic measuring techniques for rail surfaces
NASA Astrophysics Data System (ADS)
Bi, Zong-qi; Wang, Lian-fen; Wang, Ai-fang
2014-02-01
On the basis of the laser displacement sensing principle, a non-contact rail-surface measuring scheme is designed and a surface measuring instrument is built. Inspection and measurement of a single section of the rail surface are carried out, and an image is formed from the data. Precise measurement is realized for the rail surface, especially the rail-head parts. By using stepper-motor driving and programmed image-formation techniques, combined with MATLAB programming, the single-surface measurement data are transformed into an image and dynamic measurement of the rail's vertical smoothness is achieved. By comparison with standard data, the rail wear state and surface parameters are obtained. This technique meets the needs of non-contact automatic measurement of rail surfaces.
The influence of lexical statistics on temporal lobe cortical dynamics during spoken word listening.
Cibelli, Emily S; Leonard, Matthew K; Johnson, Keith; Chang, Edward F
2015-08-01
Neural representations of words are thought to have a complex spatio-temporal cortical basis. It has been suggested that spoken word recognition is not a process of feed-forward computations from phonetic to lexical forms, but rather involves the online integration of bottom-up input with stored lexical knowledge. Using direct neural recordings from the temporal lobe, we examined cortical responses to words and pseudowords. We found that neural populations were not only sensitive to lexical status (real vs. pseudo), but also to cohort size (number of words matching the phonetic input at each time point) and cohort frequency (lexical frequency of those words). These lexical variables modulated neural activity from the posterior to anterior temporal lobe, and also dynamically as the stimuli unfolded on a millisecond time scale. Our findings indicate that word recognition is not purely modular, but relies on rapid and online integration of multiple sources of lexical knowledge. PMID:26072003
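The cohort variables discussed (cohort size and cohort frequency at each time point of the unfolding word) can be computed from a lexicon with a few lines. The mini-lexicon below is a made-up illustration with invented frequencies, not the study's stimulus set.

```python
# Hypothetical mini-lexicon: word -> occurrences per million (invented numbers)
lexicon = {"cat": 120, "cap": 40, "can": 900, "cab": 15,
           "dog": 800, "dot": 150, "castle": 30}

def cohort_stats(word, lexicon):
    """Cohort size and summed lexical frequency after each segment of `word`.

    The cohort at position i is the set of words consistent with the
    input heard so far, i.e. sharing the first i segments.
    """
    stats = []
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        cohort = [w for w in lexicon if w.startswith(prefix)]
        stats.append((prefix, len(cohort), sum(lexicon[w] for w in cohort)))
    return stats

stats = cohort_stats("cat", lexicon)
for prefix, size, freq in stats:
    print(prefix, size, freq)
```

As the input unfolds the cohort shrinks toward the target word, and it is exactly this time-varying size and frequency that the abstract reports modulating temporal-lobe activity.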
Statistical techniques for modeling extreme price dynamics in the energy market
NASA Astrophysics Data System (ADS)
Mbugua, L. N.; Mwita, P. N.
2013-02-01
Extreme events have a large impact across engineering, science and economics, because they often lead to failures and losses owing to the unobservable nature of extraordinary occurrences. In this context, this paper focuses on appropriate statistical methods combining a quantile regression approach with extreme value theory to model the excesses; this plays a vital role in risk management. Locally, nonparametric quantile regression is used, a method that is flexible and best suited when little is known about the functional form of the object being estimated. Conditions are derived for estimating the extreme value distribution function. The threshold model of extreme values is used to circumvent the problem of scarce observations in the tail of the distribution. The application of a selection of these techniques is demonstrated on the volatile fuel market. The results indicate that the method used can extract the maximum possible reliable information from the data. The key attraction of this method is that it offers a set of ready-made approaches to the most difficult problem of risk modeling.
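The peaks-over-threshold idea behind such models can be sketched with a moment-based fit of the generalized Pareto distribution (GPD) to threshold exceedances. This numpy-only illustration uses simulated GPD data with known parameters rather than real fuel-price returns, and simple moment estimators rather than the paper's estimation machinery.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate exceedances over a threshold from a GPD(xi, sigma) by inverse
# transform sampling: X = (sigma / xi) * ((1 - U)^(-xi) - 1)
xi_true, sigma_true = 0.1, 1.0
u = rng.random(100_000)
exc = (sigma_true / xi_true) * ((1 - u) ** (-xi_true) - 1)

# Moment estimators (valid for xi < 1/4), derived from
#   mean = sigma / (1 - xi),  var = sigma^2 / ((1 - xi)^2 (1 - 2 xi))
m, v = exc.mean(), exc.var()
xi_hat = 0.5 * (1 - m * m / v)
sigma_hat = 0.5 * m * (m * m / v + 1)

# A high quantile of the exceedance distribution (return-level style)
p = 0.999
q_hat = (sigma_hat / xi_hat) * ((1 - p) ** (-xi_hat) - 1)
print(xi_hat, sigma_hat, q_hat)
```

Recovering the shape parameter xi is the crux: it controls tail heaviness and hence every extreme quantile used for risk estimates.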
Technology Transfer Automated Retrieval System (TEKTRAN)
Subsurface drip irrigation (SDI) wets the soil at the depth of the drip line and in a volume around each emitter, but the soil wetted often does not include the soil surface. Because of this, the soil surface remains completely or at least partially dry and evaporative losses of irrigation water are...
ERIC Educational Resources Information Center
Olive, G.; And Others
A selective dissemination of information service based on computer scanning of Nuclear Science Abstracts tapes has operated at the Atomic Energy Research Establishment, Harwell, England since October, 1968. The performance of the mechanized SDI service has been compared with that of the pre-existing current awareness service which is based on…
ERIC Educational Resources Information Center
Leggate, P.; And Others
During a 2-year period (1970, 1971) SDI (Selective Dissemination of Information) search profiles were written for 353 biologists and other research workers with a need for biological information in academic, industrial, and government research institutions. At the beginning of the experiment a questionnaire and interview survey was made of the…
Technology Transfer Automated Retrieval System (TEKTRAN)
QseA and SdiA are two of several transcriptional regulators that regulate virulence gene expression of enterohemorrhagic Escherichia coli (EHEC) O157:H7 via quorum sensing (QS). QseA regulates the expression of the locus of enterocyte effacement (LEE). LEE encodes for a type III secretion (T3S) sys...
Thompson, Keiran C.; Crittenden, Deborah L.; Kable, Scott H.; Jordan, Meredith J.T.
2006-01-28
Previous experimental and theoretical studies of the radical dissociation channel of T{sub 1} acetaldehyde show conflicting behavior in the HCO and CH{sub 3} product distributions. To resolve these conflicts, a full-dimensional potential-energy surface for the dissociation of CH{sub 3}CHO into HCO and CH{sub 3} fragments over the barrier on the T{sub 1} surface is developed based on RO-CCSD(T)/cc-pVTZ(DZ) ab initio calculations. 20 000 classical trajectories are calculated on this surface at each of five initial excess energies, spanning the excitation energies used in previous experimental studies, and translational, vibrational, and rotational distributions of the radical products are determined. For excess energies near the dissociation threshold, both the HCO and CH{sub 3} products are vibrationally cold; there is a small amount of HCO rotational excitation and little CH{sub 3} rotational excitation, and the reaction energy is partitioned dominantly (>90% at threshold) into relative translational motion. Close to threshold the HCO and CH{sub 3} rotational distributions are symmetrically shaped, resembling a Gaussian function, in agreement with observed experimental HCO rotational distributions. As the excess energy increases the calculated HCO and CH{sub 3} rotational distributions are observed to change from a Gaussian shape at threshold to one more resembling a Boltzmann distribution, a behavior also seen by various experimental groups. Thus the distribution of energy in these rotational degrees of freedom is observed to change from nonstatistical to apparently statistical, as excess energy increases. As the energy above threshold increases all the internal and external degrees of freedom are observed to gain population at a similar rate, broadly consistent with equipartitioning of the available energy at the transition state. 
These observations generally support the practice of separating the reaction dynamics into two reservoirs: an impulsive reservoir, fed by the exit channel dynamics, and a statistical reservoir, supported by the random distribution of excess energy above the barrier. The HCO rotation, however, is favored by approximately a factor of 3 over the statistical prediction. Thus, at sufficiently high excess energies, although the HCO rotational distribution may be considered statistical, the partitioning of energy into HCO rotation is not.
Ly, Cheng; Tranchina, Daniel
2009-02-01
In the probability density function (PDF) approach to neural network modeling, a common simplifying assumption is that the arrival times of elementary postsynaptic events are governed by a Poisson process. This assumption ignores temporal correlations in the input that sometimes have important physiological consequences. We extend PDF methods to models with synaptic event times governed by any modulated renewal process. We focus on the integrate-and-fire neuron with instantaneous synaptic kinetics and a random elementary excitatory postsynaptic potential (EPSP), A. Between presynaptic events, the membrane voltage, v, decays exponentially toward rest, while s, the time since the last synaptic input event, evolves with unit velocity. When a synaptic event arrives, v jumps by A, and s is reset to zero. If v crosses the threshold voltage, an action potential occurs, and v is reset to v_reset. The probability per unit time of a synaptic event at time t, given the elapsed time s since the last event, h(s, t), depends on specifics of the renewal process. We study how regularity of the train of synaptic input events affects the output spike rate, the PDF and coefficient of variation (CV) of the interspike interval, and the autocorrelation function of the output spike train. In the limit of a deterministic, clocklike train of input events, the PDF of the interspike interval converges to a sum of delta functions, with coefficients determined by the PDF for A. The limiting autocorrelation function of the output spike train is a sum of delta functions whose coefficients fall under a damped oscillatory envelope. When the EPSP CV, sigma_A/mu_A, equals 0.45, an intersynaptic-event-interval CV of sigma_T/mu_T = 0.35 is functionally equivalent to a deterministic periodic train of synaptic input events (CV = 0) with respect to spike statistics. We discuss the relevance to neural network simulations. PMID:19431264
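The effect of input regularity on output spike statistics can be reproduced with a minimal event-driven simulation of the model described: exponential voltage decay between synaptic events, a jump A at each event, threshold crossing, and reset. Gamma-distributed inter-event intervals set the input CV (the renewal process here is a convenient choice, not necessarily the authors'); all parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_if(input_cv, n_events=100_000, tau=20.0, A=0.12,
                mean_isi=2.0, v_th=1.0, v_reset=0.0, seed=0):
    """Event-driven integrate-and-fire neuron with renewal-process input.

    Inter-event intervals are gamma-distributed with CV = input_cv
    (shape k = 1 / CV^2); input_cv = 1 recovers a Poisson process.
    Returns the CV of the output interspike intervals.
    """
    rng = np.random.default_rng(seed)
    k = 1.0 / input_cv**2
    intervals = rng.gamma(k, mean_isi / k, n_events)
    v, t, last_spike, isis = 0.0, 0.0, None, []
    for dt in intervals:
        t += dt
        v = v * np.exp(-dt / tau) + A      # leak between events, then EPSP jump
        if v >= v_th:                      # threshold crossing: spike and reset
            v = v_reset
            if last_spike is not None:
                isis.append(t - last_spike)
            last_spike = t
    isis = np.array(isis)
    return isis.std() / isis.mean()

cv_poisson = simulate_if(input_cv=1.0)     # irregular (Poisson) input
cv_regular = simulate_if(input_cv=0.25)    # clock-like input
print(cv_poisson, cv_regular)
```

More regular input trains produce more regular output spiking, the qualitative dependence the PDF analysis quantifies.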
Roques, Lionel; Bonnefon, Olivier
2016-01-01
We propose and develop a general approach based on reaction-diffusion equations for modelling the dynamics of a species in a realistic two-dimensional (2D) landscape crossed by linear one-dimensional (1D) corridors, such as roads, hedgerows or rivers. Our approach is based on a hybrid "2D/1D model", i.e., a system of 2D and 1D reaction-diffusion equations with homogeneous coefficients, in which each equation describes the population dynamics in a given 2D or 1D element of the landscape. Using the example of the range expansion of the tiger mosquito Aedes albopictus in France and its main highways as 1D corridors, we show that the model can be fitted to realistic observation data. We develop a mechanistic-statistical approach, based on the coupling between a model of population dynamics and a probabilistic model of the observation process. This allows us to bridge the gap between the data (3 levels of infestation, at the scale of a French department) and the output of the model (population densities at each point of the landscape), and to estimate the model parameter values using a maximum-likelihood approach. Using classical model comparison criteria, we obtain a better fit and a better predictive power with the 2D/1D model than with a standard homogeneous reaction-diffusion model. This shows the potential importance of taking into account the effect of the corridors (highways in the present case) on species dynamics. With regard to the particular case of A. albopictus, the conclusion that highways played an important role in species range expansion in mainland France is consistent with recent findings from the literature. PMID:26986201
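A minimal homogeneous reaction-diffusion model of the type underlying the 2D/1D system is the 1D Fisher-KPP equation u_t = D u_xx + r u(1 - u), whose invasion front travels at speed 2*sqrt(r*D). The explicit finite-difference sketch below is a generic illustration, not the authors' code, and checks the front speed numerically.

```python
import numpy as np

D, r = 1.0, 1.0
dx, dt = 0.5, 0.05               # explicit scheme: stable, D*dt/dx^2 = 0.2 < 0.5
x = np.arange(0, 200, dx)
u = np.where(x < 10, 1.0, 0.0)   # initially occupied region on the left

def front_position(u):
    return x[np.argmax(u < 0.5)]  # first point where density drops below 0.5

def step(u):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    lap[0] = lap[-1] = 0.0        # crude fixed ends; the front never reaches them
    return u + dt * (D * lap + r * u * (1 - u))

for _ in range(int(30 / dt)):
    u = step(u)
pos1 = front_position(u)
for _ in range(int(30 / dt)):
    u = step(u)
pos2 = front_position(u)
speed = (pos2 - pos1) / 30.0
print(speed)                      # theory predicts 2 * sqrt(r * D) = 2
```

In the hybrid model, a corridor is essentially a 1D element of this kind whose own D and r can let the front outrun the surrounding 2D landscape, which is how highways accelerate range expansion.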
Global nannoplankton dynamics across the Paleocene-Eocene Thermal Maximum: A statistical approach
NASA Astrophysics Data System (ADS)
Schneider, L. J.; Bralower, T. J.; Patzkowsky, M.; Kump, L.
2012-12-01
Global warming during the Paleocene-Eocene Thermal Maximum (PETM; 55.8 Ma) had a profound effect on life on land and in the ocean. With global temperatures rising 5°C over ~20 kyr, the PETM is considered to be the most analogous interval to modern day climate change. Calcareous nannoplankton, a group of calcifying marine phytoplankton, have been extensively studied across this event. Results from these studies indicate nannoplankton assemblages responded to changing surface water temperatures and nutrient availability. Together, these records can provide a global picture of nannofossil assemblage dynamics during this critical interval. Issues such as the timing and nature of assemblage change on a global scale, the rate of assemblage change, and how assemblage shifts differ regionally can be further resolved. Here we use an ordination technique (detrended correspondence analysis; DCA), which condenses complex assemblage data and displays it in a simple, interpretable way. We applied the DCA to previously published nannofossil abundance data from 7 globally distributed sites and compared these results to published benthic and bulk δ13C records across the PETM. Our initial results show that changes in the nannofossil assemblage, as displayed through DCA 1, closely follow the trends of the δ13C curves at each site. This suggests that the organisms are closely linked to the carbon cycle in some way during this time period. From this study we will have a better understanding of how global nannoplankton populations responded to rapid climate change and when environmental alterations began to take place.
Ramalingam, Shivaji G; Hamon, Lomig; Pré, Pascaline; Giraudet, Sylvain; Le Coq, Laurence; Le Cloirec, Pierre
2012-07-01
Adsorption of Volatile Organic Compounds (VOCs) is one of the best remediation techniques for controlling industrial air pollution. In this paper, a quantitative predictor model for the characteristic adsorption energy (E) of the Dubinin-Radushkevich (DR) isotherm model has been established, with an R^2 value of 0.94. The predictor model for E was established using Multiple Linear Regression (MLR) analysis in the statistical package MINITAB. The experimental value of the characteristic adsorption energy was computed by modeling the isotherm equilibrium data (120 isotherms involving five VOCs and eight activated carbons at 293, 313, 333, and 353 K) with the Gauss-Newton method in the statistical package R-STAT. The MLR model has been validated against the experimental equilibrium isotherm data points, and it will be implemented in the dynamic adsorption simulation model PROSIM. Once implemented, the model predicts a large set of 1200 DR isotherm equilibrium coefficients at 293, 313, 333, and 353 K (each isotherm comprising 10 equilibrium points obtained by varying the concentration) from a simple MLR characteristic-energy model, without any further experiments. PMID:22503987
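A multiple linear regression of the kind used for the characteristic-energy predictor can be sketched with numpy's least-squares solver on synthetic data. The 0.94 R^2 and the real descriptors come from the paper's dataset; everything below (feature count, coefficients, noise level) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "descriptors" (stand-ins for VOC and activated-carbon properties)
n = 120
Xf = rng.normal(size=(n, 3))
beta_true = np.array([2.0, -1.0, 0.5])
yv = 4.0 + Xf @ beta_true + 0.1 * rng.normal(size=n)   # response with noise

# MLR via ordinary least squares: prepend an intercept column and solve
X1 = np.column_stack([np.ones(n), Xf])
coef, *_ = np.linalg.lstsq(X1, yv, rcond=None)

# Coefficient of determination R^2
resid = yv - X1 @ coef
r2 = 1 - (resid @ resid) / ((yv - yv.mean()) @ (yv - yv.mean()))
print(coef, r2)
```

Once such a regression is fitted, new characteristic energies (and hence whole DR isotherms) can be generated from descriptor values alone, which is the time-saving point the abstract makes.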
Jacobitz, Frank G; Schneider, Kai; Bos, Wouter J T; Farge, Marie
2016-01-01
The acceleration statistics of sheared and rotating homogeneous turbulence are studied using direct numerical simulation results. The statistical properties of Lagrangian and Eulerian accelerations are considered together with the influence of the rotation to shear ratio, as well as the scale dependence of their statistics. The probability density functions (pdfs) of both Lagrangian and Eulerian accelerations show a strong and similar dependence on the rotation to shear ratio. The variance and flatness of both accelerations are analyzed and the extreme values of the Eulerian acceleration are observed to be above those of the Lagrangian acceleration. For strong rotation it is observed that flatness yields values close to three, corresponding to Gaussian-like behavior, and for moderate and vanishing rotation the flatness increases. Furthermore, the Lagrangian and Eulerian accelerations are shown to be strongly correlated for strong rotation due to a reduced nonlinear term in this case. A wavelet-based scale-dependent analysis shows that the flatness of both Eulerian and Lagrangian accelerations increases as scale decreases, which provides evidence for intermittent behavior. For strong rotation the Eulerian acceleration is even more intermittent than the Lagrangian acceleration, while the opposite result is obtained for moderate rotation. Moreover, the dynamics of a passive scalar with gradient production in the direction of the mean velocity gradient is analyzed and the influence of the rotation to shear ratio is studied. Concerning the concentration of a passive scalar spread by the flow, the pdf of its Eulerian time rate of change presents higher extreme values than those of its Lagrangian time rate of change. This suggests that the Eulerian time rate of change of scalar concentration is mainly due to advection, while its Lagrangian counterpart is only due to gradient production and viscous dissipation. PMID:26871161
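The flatness referred to above is the normalized fourth moment of the acceleration pdf; a quick numerical check confirms that it equals 3 for Gaussian samples and exceeds 3 for fat-tailed, intermittent-like samples (here a Student-t distribution, chosen purely for illustration).

```python
import numpy as np

def flatness(x):
    """Normalized fourth moment <x'^4>/<x'^2>^2; equals 3 for a Gaussian."""
    x = np.asarray(x, dtype=float)
    xp = x - x.mean()
    return np.mean(xp**4) / np.mean(xp**2) ** 2

rng = np.random.default_rng(0)
gaussian = rng.normal(size=1_000_000)
heavy = rng.standard_t(df=5, size=1_000_000)  # fat tails mimic intermittency

f_gauss = flatness(gaussian)   # close to 3 (Gaussian-like behavior)
f_heavy = flatness(heavy)      # well above 3
```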
Coupled flow-polymer dynamics via statistical field theory: Modeling and computation
NASA Astrophysics Data System (ADS)
Ceniceros, Hector D.; Fredrickson, Glenn H.; Mohler, George O.
2009-03-01
Field-theoretic models, which replace interactions between polymers with interactions between polymers and one or more conjugate fields, offer a systematic framework for coarse-graining of complex fluid systems. While this approach has been used successfully to investigate a wide range of polymer formulations at equilibrium, field-theoretic models often fail to accurately capture the non-equilibrium behavior of polymers, especially in the early stages of phase separation. Here the "two-fluid" approach serves as a useful alternative, treating the motions of fluid components separately in order to incorporate asymmetries between polymer molecules. In this work we focus on the connection of these two theories, drawing upon the strengths of each of the approaches in order to couple polymer microstructure with the dynamics of the flow in a systematic way. For illustrative purposes we work with an inhomogeneous melt of elastic dumbbell polymers, though our methodology will apply more generally to a wide variety of inhomogeneous systems. First we derive the model, incorporating thermodynamic forces into a two-fluid model for the flow through the introduction of conjugate chemical potential and elastic strain fields for the polymer density and stress. The resulting equations are composed of a system of fourth order PDEs coupled with a non-linear, non-local optimization problem to determine the conjugate fields. The coupled system is severely stiff and has a high degree of computational complexity. Next, we overcome the formidable numerical challenges posed by the model by designing a robust semi-implicit method based on linear asymptotic behavior of the leading order terms at small scales, by exploiting the exponential structure of global (integral) operators, and by parallelizing the non-linear optimization problem.
The semi-implicit method effectively removes the fourth order stability constraint associated with explicit methods and we observe only a first order time-step restriction. The algorithm for solving the non-linear optimization problem, which takes advantage of the form of the operators being optimized, reduces the overall simulation time by several orders of magnitude. We illustrate the methodology with several examples of phase separation in an initially quiescent flow.
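As an illustration of the semi-implicit idea described above (a model problem, not the authors' actual scheme), consider u_t = -kappa*u_xxxx + N(u) on a periodic domain: treating the stiff fourth-order linear term implicitly in Fourier space removes the dt ~ dx^4 constraint of fully explicit methods.

```python
import numpy as np

# Semi-implicit step for u_t = -kappa*u_xxxx + N(u): the stiff linear term
# is treated implicitly in Fourier space, the nonlinearity explicitly, so
# the fourth-order stability constraint of explicit schemes disappears.
def semi_implicit_step(u, dt, kappa, nonlinear):
    n = u.size
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers on [0, 2*pi)
    u_hat = np.fft.fft(u)
    N_hat = np.fft.fft(nonlinear(u))
    u_hat_new = (u_hat + dt * N_hat) / (1.0 + dt * kappa * k**4)
    return np.real(np.fft.ifft(u_hat_new))

# pure hyperdiffusion (N = 0): modes decay and the scheme stays stable
# even at a time step far beyond the explicit limit
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(3 * x)
for _ in range(100):
    u = semi_implicit_step(u, dt=0.1, kappa=1.0, nonlinear=lambda v: 0 * v)
```

For mode k the amplification factor is 1/(1 + dt*kappa*k^4) < 1 for any dt, which is why only the explicit nonlinear term restricts the time step (the first-order restriction noted above).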
NASA Astrophysics Data System (ADS)
Habasaki, J.; Ngai, K. L.
2013-08-01
Molecular dynamics simulations have been performed to study the structures along the pressure-volume diagram of network glasses and melts exemplified by the lithium disilicate system. Experimentally, densification of the disilicate glass by elevated pressure is known, and this feature is reasonably reproduced by the simulations. During the process of densification or decompression of the system, the statistics of Qn (i.e., the SiO4 tetrahedron unit with n bridging oxygens linked to the silicon atom, where n = 0, 1, 2, 3, or 4) change, and the percentage of Q3 structures shows a maximum near atmospheric pressure at around Tg. Changes of the Qn distribution are driven by the changes of volume (or pressure) and are explained by the different volumes of structural units. Furthermore, some pairs of network structures with equi-volume, but having different distributions of Qn (or different heterogeneity), are found. Therefore, for molecular dynamics simulations of the Qn distributions, it is important to take into account the complex phase behavior including poly-structures with different heterogeneities as well as the position of the system in the P-V-T diagram.
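The Qn bookkeeping can be sketched directly from a Si-O bond list: an oxygen bonded to two silicons is bridging, and each silicon's n is its count of bridging oxygens. The toy two-tetrahedron network below is invented for illustration.

```python
from collections import Counter

def qn_statistics(si_o_bonds):
    """Classify each SiO4 unit by its number n of bridging oxygens (Qn).

    si_o_bonds: iterable of (si_id, o_id) pairs. An oxygen bonded to two
    silicons is bridging; each silicon's n counts its bridging oxygens.
    """
    o_coordination = Counter(o for _, o in si_o_bonds)
    n_per_si = Counter()
    for si, o in si_o_bonds:
        if o_coordination[o] == 2:       # bridging oxygen
            n_per_si[si] += 1
    all_si = {si for si, _ in si_o_bonds}
    # silicons with no bridging oxygen at all are Q0
    return Counter(n_per_si.get(si, 0) for si in all_si)

# toy network: Si1-Ob-Si2, each Si also has 3 non-bridging oxygens -> both Q1
bonds = [("Si1", "Ob"), ("Si2", "Ob"),
         ("Si1", "O1"), ("Si1", "O2"), ("Si1", "O3"),
         ("Si2", "O4"), ("Si2", "O5"), ("Si2", "O6")]
qn = qn_statistics(bonds)  # -> {1: 2}, i.e. two Q1 units
```

In an actual MD analysis the bond list would come from a Si-O cutoff distance applied to the simulated configurations.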
NASA Astrophysics Data System (ADS)
Sugiyama, K.; Nakajima, K.; Odaka, M.; Kuramoto, K.; Hayashi, Y.-Y.
2014-02-01
A series of long-term numerical simulations of moist convection in Jupiter's atmosphere is performed in order to investigate the idealized characteristics of the vertical structure of multi-composition clouds and the convective motions associated with them, varying the deep abundances of condensable gases and the autoconversion time scale, the latter being one of the most questionable parameters in cloud microphysical parameterization. The simulations are conducted using a two-dimensional cloud-resolving model that explicitly represents the convective motion and microphysics of the three cloud components (H2O, NH3, and NH4SH), imposing a body cooling that substitutes for the net radiative cooling. The results are qualitatively similar to those reported in Sugiyama et al. (Sugiyama, K. et al. [2011]. Intermittent cumulonimbus activity breaking the three-layer cloud structure of Jupiter. Geophys. Res. Lett. 38, L13201. doi:10.1029/2011GL047878): stable layers associated with condensation and chemical reaction act as effective dynamical and compositional boundaries, intense cumulonimbus clouds develop with distinct temporal intermittency, and the active transport associated with these clouds results in the establishment of mean vertical profiles of condensates and condensable gases that are distinctly different from the hitherto accepted three-layered structure (e.g., Atreya, S.K., Romani, P.N. [1985]. Photochemistry and clouds of Jupiter, Saturn and Uranus. In: Recent Advances in Planetary Meteorology. Cambridge Univ. Press, London, pp. 17-68). Our results also demonstrate that the period of intermittent cloud activity is roughly proportional to the deep abundance of H2O gas. The autoconversion time scale does not strongly affect the results, except for the vertical profiles of the condensates.
Changing the autoconversion time scale by a factor of 100 changes the intermittency period by a factor of less than two, although it causes a dramatic increase in the amount of condensates in the upper troposphere. The moist convection layer becomes potentially unstable with respect to an air parcel rising from below the H2O lifting condensation level (LCL) well before the development of cumulonimbus clouds. The instability accumulates until an appropriate trigger is provided by the H2O condensate that falls down through the H2O LCL; the H2O condensate drives a downward flow below the H2O LCL as a result of the latent cooling associated with the re-evaporation of the condensate, and the returning updrafts carry moist air from below to the moist convection layer. Active cloud development is terminated when the instability is completely exhausted. The period of intermittency is roughly equal to the time obtained by dividing the mean temperature increase, which is caused by active cumulonimbus development, by the body cooling rate.
NASA Astrophysics Data System (ADS)
Reyers, Mark; Pinto, Joaquim G.; Moemken, Julia
2015-04-01
A statistical-dynamical downscaling (SDD) approach for the regionalisation of wind energy output (Eout) over Europe, with special focus on Germany, is proposed. SDD uses an extended circulation weather type (CWT) analysis on global daily MSLP fields with the central point located over Germany. In total, 77 weather classes based on the associated circulation weather type and the intensity of the geostrophic flow are identified. Representatives of these classes are dynamically downscaled with the regional climate model COSMO-CLM. Using the weather class frequencies of different datasets, the simulated representatives are recombined into probability density functions (PDFs) of near-surface wind speed and finally into Eout of a sample wind turbine for present and future climate. This is performed for reanalysis, decadal hindcasts and long-term future projections. For evaluation purposes, results of SDD are compared to wind observations and to simulated Eout of purely dynamical downscaling (DD) methods. For the present climate, SDD is able to simulate realistic PDFs of 10 m wind speed for most stations in Germany. The resulting spatial Eout patterns are similar to DD-simulated Eout. For decadal hindcasts, results of SDD are similar to DD-simulated Eout over Germany, Poland, the Czech Republic, and Benelux, for which high correlations between annual Eout time series of SDD and DD are detected for selected hindcasts. Lower correlation is found for other European countries. It is demonstrated that SDD can be used to downscale the full ensemble of the MPI-ESM decadal prediction system. Long-term climate change projections in SRES scenarios of ECHAM5/MPI-OM as obtained by SDD agree well with the results of other studies using DD methods, with increasing Eout over Northern Europe and a negative trend over Southern Europe. Despite some biases, it is concluded that SDD is an adequate tool to assess regional wind energy changes in large model ensembles.
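The recombination step of SDD can be sketched as a frequency-weighted sum: each weather class contributes its frequency times the mean power obtained from its downscaled wind-speed distribution. The class frequencies, Weibull wind samples, and turbine parameters below are all invented for illustration.

```python
import numpy as np

def power_curve(v, cut_in=3.0, rated_v=12.0, rated_p=2.0, cut_out=25.0):
    """Idealized turbine power curve in MW; parameters are illustrative."""
    v = np.asarray(v, dtype=float)
    p = np.where((v >= cut_in) & (v < rated_v),
                 rated_p * ((v - cut_in) / (rated_v - cut_in)) ** 3, 0.0)
    p = np.where((v >= rated_v) & (v <= cut_out), rated_p, p)
    return p

def recombine_eout(class_freq, class_wind_samples):
    """Frequency-weighted mean power: sum_k f_k * <P(v)>_k.

    class_freq: weather-class relative frequencies (summing to 1)
    class_wind_samples: per-class wind speeds from the downscaled representative
    """
    return sum(f * power_curve(v).mean()
               for f, v in zip(class_freq, class_wind_samples))

rng = np.random.default_rng(1)
freqs = [0.6, 0.3, 0.1]                            # hypothetical class frequencies
winds = [rng.weibull(2.0, 10_000) * s for s in (6.0, 9.0, 14.0)]
eout = recombine_eout(freqs, winds)                # climatological mean power, MW
```

Climate change enters through shifted class frequencies: reusing the same downscaled representatives with future-scenario frequencies gives the projected Eout without rerunning the RCM.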
ERIC Educational Resources Information Center
Chicot, Katie; Holmes, Hilary
2012-01-01
The use, and misuse, of statistics is commonplace, yet in the printed format data representations can be either over simplified, supposedly for impact, or so complex as to lead to boredom, supposedly for completeness and accuracy. In this article the link to the video clip shows how dynamic visual representations can enliven and enhance the…
NASA Astrophysics Data System (ADS)
Vaittinada Ayar, Pradeebane; Vrac, Mathieu; Bastin, Sophie; Carreau, Julie; Déqué, Michel; Gallardo, Clemente
2015-05-01
Given the coarse spatial resolution of General Circulation Models, finer-scale projections of variables affected by local-scale processes such as precipitation are often needed to drive impact models, for example in hydrology or ecology among other fields. This need for high-resolution data leads to the application of projection techniques known as downscaling. Downscaling can be performed according to two approaches: dynamical and statistical models. The latter approach comprises various, conceptually different statistical families. Although several studies have made intercomparisons of existing downscaling models, none of them included all those families and approaches in a manner in which all the models are equally considered. To this end, the present study conducts an intercomparison exercise under the EURO- and MED-CORDEX initiative hindcast framework. Six Statistical Downscaling Models (SDMs) and five Regional Climate Models (RCMs) are compared in terms of precipitation outputs. The downscaled simulations are driven by the ERA-Interim reanalyses over the 1989-2008 period over a common area at 0.44° resolution. The 11 models are evaluated according to four aspects of precipitation: occurrence, intensity, and spatial and temporal properties. For each aspect, one or several indicators are computed to discriminate the models. The results indicate that marginal properties of rain occurrence and intensity are better modelled by stochastic and resampling-based SDMs, while spatial and temporal variability are better modelled by RCMs and resampling-based SDMs. These general conclusions have to be considered with caution because they rely on the chosen indicators and could change when considering other specific criteria. Each indicator suits a specific purpose, and therefore the model evaluation results depend on the end-user's point of view and how they intend to use the model outputs. 
Nevertheless, building on previous intercomparison exercises, this study provides a consistent intercomparison framework, including both SDMs and RCMs, which is designed to be flexible, i.e., other models and indicators can easily be added. More generally, this framework provides a tool to select the downscaling model to be used according to the statistical properties of the local-scale climate data needed to properly drive specific impact models.
NASA Astrophysics Data System (ADS)
Chatzopoulos, S.; Fritz, T. K.; Gerhard, O.; Gillessen, S.; Wegg, C.; Genzel, R.; Pfuhl, O.
2015-02-01
We derive new constraints on the mass, rotation, orbit structure, and statistical parallax of the Galactic old nuclear star cluster and the mass of the supermassive black hole. We combine star counts and kinematic data from Fritz et al., including 2500 line-of-sight velocities and 10 000 proper motions obtained with VLT instruments. We show that the difference between the proper motion dispersions σl and σb cannot be explained by rotation, but is a consequence of the flattening of the nuclear cluster. We fit the surface density distribution of stars in the central 1000 arcsec by a superposition of a spheroidal cluster with scale ~100 arcsec and a much larger nuclear disc component. We compute the self-consistent two-integral distribution function f(E, Lz) for this density model, and add rotation self-consistently. We find that (i) the orbit structure of the f(E, Lz) gives an excellent match to the observed velocity dispersion profiles as well as the proper motion and line-of-sight velocity histograms, including the double-peak in the v_l histograms. (ii) This requires an axial ratio near q1 = 0.7, consistent with our determination from star counts, q1 = 0.73 ± 0.04 for r < 70 arcsec. (iii) The nuclear star cluster is approximately described by an isotropic rotator model. (iv) Using the corresponding Jeans equations to fit the proper motion and line-of-sight velocity dispersions, we obtain best estimates for the nuclear star cluster mass, black hole mass, and distance M*(r < 100 arcsec) = (8.94 ± 0.31|stat ± 0.9|syst) × 10^6 M☉, M• = (3.86 ± 0.14|stat ± 0.4|syst) × 10^6 M☉, and R0 = 8.27 ± 0.09|stat ± 0.1|syst kpc, where the estimated systematic errors account for additional uncertainties in the dynamical modelling. (v) The combination of the cluster dynamics with the S-star orbits around Sgr A* strongly reduces the degeneracy between black hole mass and Galactic Centre distance present in previous S-star studies. 
A joint statistical analysis with the results of Gillessen et al. gives M• = (4.23 ± 0.14) × 10^6 M☉ and R0 = 8.33 ± 0.11 kpc.
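As a much-simplified, one-dimensional illustration of why a joint analysis tightens the constraint: independent Gaussian estimates combine by inverse-variance weighting, and the combined uncertainty is smaller than either input. The numbers below are purely illustrative; the paper's actual analysis is a correlated two-parameter fit in (M•, R0).

```python
import math

def combine(estimates):
    """Inverse-variance (maximum-likelihood) combination of independent
    Gaussian estimates (value, sigma); returns (mean, sigma)."""
    weights = [1.0 / s**2 for _, s in estimates]
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / sum(weights)
    sigma = math.sqrt(1.0 / sum(weights))
    return mean, sigma

# two hypothetical black hole mass estimates, in units of 1e6 Msun
m_joint, s_joint = combine([(3.86, 0.43), (4.35, 0.15)])
```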
NASA Astrophysics Data System (ADS)
Chae, Kyu-Hyun; Gong, In-Taek
2015-08-01
Modified Newtonian dynamics (MOND) proposed by Milgrom provides a paradigm alternative to dark matter (DM) that has been successful in fitting and predicting the rich phenomenology of rotating disc galaxies. There have also been attempts to test MOND in dispersion-supported spheroidal early-type galaxies, but it remains unclear whether MOND can fit the various empirical properties of early-type galaxies for the whole ranges of mass and radius. As a way of rigorously testing MOND in elliptical galaxies we calculate the MOND-predicted velocity dispersion profiles (VDPs) in the inner regions of ~2000 nearly round Sloan Digital Sky Survey elliptical galaxies under a variety of assumptions on velocity dispersion (VD) anisotropy, and then compare the predicted distribution of VDP slopes with the observed distribution in 11 ATLAS3D galaxies selected with essentially the same criteria. We find that the MOND model parametrized with an interpolating function that works well for rotating galaxies can also reproduce the observed distribution of VDP slopes based only on the observed stellar mass distribution, without DM or any other galaxy-to-galaxy varying factor. This is remarkable in view of the fact that Newtonian dynamics with DM requires a specific amount and/or profile of DM for each galaxy in order to reproduce the observed distribution of VDP slopes. When we analyse non-round galaxy samples using the MOND-based spherical Jeans equation, we do not find any systematic difference in the mean property of the VDP slope distribution compared with the nearly round sample. However, in line with previous studies of MOND through individual analyses of elliptical galaxies, varying the MOND interpolating function or VD anisotropy can lead to systematic change in the VDP slope distribution, indicating that a statistical analysis of VDPs can be used to constrain specific MOND models with an accurate measurement of VDP slopes or a prior constraint on VD anisotropy.
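For orientation, one commonly used "simple" MOND interpolating function, mu(x) = x/(1+x), yields a closed form for the true acceleration given the Newtonian one; this is one of the interpolating-function choices such studies vary, not necessarily the one adopted in the paper.

```python
import math

A0 = 1.2e-10  # m/s^2, the MOND acceleration scale

def mond_acceleration(g_newton):
    """True acceleration for the 'simple' interpolating function
    mu(x) = x/(1+x): solving mu(g/a0)*g = gN gives
    g = gN * (1/2 + sqrt(1/4 + a0/gN))."""
    return g_newton * (0.5 + math.sqrt(0.25 + A0 / g_newton))

strong = mond_acceleration(1.2e-8)   # gN >> a0: nearly Newtonian
weak = mond_acceleration(1.2e-12)    # gN << a0: g approaches sqrt(gN * a0)
```

In the deep-MOND limit g ~ sqrt(gN * a0), which is what flattens the predicted dispersion profiles in the outer, low-acceleration regions.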
NASA Astrophysics Data System (ADS)
Krantz, Richard; Douthett, Jack; Cartwright, Julyan; Gonzalez, Diego; Piro, Oreste
2010-10-01
Some time ago two apparently dissimilar presentations were given at the 2007 Helmholtz Workshop in Berlin. One by J. Douthett and R. Krantz focused on the commonality between the mathematical descriptions of musical scales and the long-ranged, one-dimensional, anti-ferromagnetic Ising model of statistical physics. The other by J. Cartwright, D. Gonzalez, and O. Piro articulated a nonlinear dynamical model of pitch perception. Both approaches lead to a Farey series devil's staircase structure. In the first case, the ground state magnetic phase diagram of the Ising model is a Farey series devil's staircase. In the second case, the ear is modeled as a nonlinear system leading to a three-frequency resonant pitch perception model of the auditory system that exhibits a devil's staircase phase-locked structure. In this poster we present a summary of each of these works side-by-side to illuminate the link between these two seemingly disparate systems. Adapted from JMM Vol. 4, No. 1, 57, Mar. 2010.
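The Farey structure mentioned above is easy to generate and check: neighbouring fractions a/b < c/d in a Farey sequence satisfy bc - ad = 1, the unimodularity underlying the ordering of the devil's-staircase plateaus.

```python
from fractions import Fraction

def farey(n):
    """Farey sequence F_n: all reduced fractions in [0, 1] with
    denominator <= n, in increasing order. Plateau widths of a
    devil's staircase are organized by these rationals."""
    return sorted({Fraction(p, q) for q in range(1, n + 1)
                   for p in range(q + 1)})

f5 = farey(5)  # 0, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1
# neighbours a/b < c/d in a Farey sequence satisfy b*c - a*d == 1
unimodular = all(y.numerator * x.denominator - x.numerator * y.denominator == 1
                 for x, y in zip(f5, f5[1:]))
```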
NASA Astrophysics Data System (ADS)
Warrier, M.; Bhardwaj, U.; Hemani, H.; Schneider, R.; Mutzke, A.; Valsakumar, M. C.
2015-12-01
We report on molecular dynamics (MD) simulations carried out in fcc Cu and bcc W using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) code to study (i) the statistical variations in the number of interstitials and vacancies produced by energetic primary knock-on atoms (PKA) (0.1-5 keV) directed in random directions and (ii) the in-cascade cluster size distributions. It is seen that around 60-80 random directions have to be explored for the average number of displaced atoms to become steady in the case of fcc Cu, whereas for bcc W around 50-60 random directions need to be explored. The numbers of Frenkel pairs produced in the MD simulations are compared with those from the Binary Collision Approximation Monte Carlo (BCA-MC) code SDTRIM-SP and with the results from the NRT model. It is seen that a proper choice of the damage energy, i.e. the energy required to create a stable interstitial, is essential for the BCA-MC results to match the MD results. On the computational front, it is seen that in-situ processing avoids the input/output (I/O) of several terabytes of atomic position data when exploring a large number of random directions, and there is no difference in run-time because the extra run-time spent processing data is offset by the time saved in I/O.
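The NRT benchmark used for comparison has a simple closed form, N = 0.8 T_dam / (2 E_d). The sketch below assumes a typical displacement threshold of E_d = 40 eV; this value is illustrative, not the element-specific threshold used in the paper for Cu or W.

```python
def nrt_frenkel_pairs(damage_energy_ev, threshold_ev=40.0):
    """NRT estimate of stable Frenkel pairs: N = 0.8 * T_dam / (2 * E_d).

    Below the threshold no stable defect forms; between E_d and
    2*E_d/0.8 exactly one pair is produced; above that, the linear law.
    """
    if damage_energy_ev < threshold_ev:
        return 0
    if damage_energy_ev < 2 * threshold_ev / 0.8:
        return 1
    return int(0.8 * damage_energy_ev / (2 * threshold_ev))

n_pairs = nrt_frenkel_pairs(5_000.0)  # a 5 keV damage-energy cascade
```

The MD-vs-NRT comparison in the text amounts to checking whether the steady direction-averaged MD count approaches this value when a consistent damage energy is used.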
NASA Astrophysics Data System (ADS)
Mezghani, Abdelkader; Benestad, Rasmus E.
2014-05-01
The global climate community has produced a wide range of results from atmosphere-ocean general circulation models, which are considered the primary source of information on future climate change. However, there are still gaps between the spatial resolution of climate model outputs and the point-scale requirements of most climate change impact studies. Thus, empirical-statistical downscaling (ESD) and dynamical downscaling (DD) techniques continue to be used as alternatives, and various models have been made available by the scientific community. Several comparative studies have been carried out during the last decade, dealing with downscaling local weather variables such as temperature and precipitation over a region of interest. Accordingly, in this work, new methods and strategies based on merging ESD and DD results will be proposed in order to increase the quality of local climate projections, with a special focus on seasonal and decadal precipitation and temperature based on CMIP3/5 experiments. A new freely available ESD R-package developed by MET Norway is used and will also be presented.
Healey, R.D.
1987-08-15
Software Built-In Test (BIT) is a design technique for collecting information from operational software that will assist in identifying differences between the real Operating Environment and either the Design or Test Environments. The BIT senses and indicates where the software is operating in new or overload environmental conditions and may, therefore, be more likely to fail. (This anomalous situation may be the result of either hardware failure or software design error.) The technical challenge is to incorporate the large number of relatively simple BIT tests into the fault-tolerant and continuously operating environment likely to characterize a solution to the battle management portion of the SDI mission. The management challenge is to provide these technical assists in such a way that they can be implemented in operational software with a minimal increase in software development time; it is then reasonable to expect that BIT will not shift from a hard requirement to a nice-to-have feature as schedule pressures potentially impact development. This approach overcomes the management problem by providing a standard set of tools for use within the software development environment which will implement BIT with a minimum amount of programmer action.
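A minimal sketch of the envelope-monitoring flavour of BIT described above: record the range of each input actually exercised during the Test Environment, then flag operational values outside that envelope as "new environment" conditions. All names and values are invented.

```python
# Sketch of a software built-in-test (BIT) envelope monitor: BIT senses
# when the software is operating outside the conditions it was tested in.
class EnvelopeBIT:
    def __init__(self):
        self.lo, self.hi = {}, {}
        self.alarms = []

    def record(self, name, value):
        """Call during testing: widen the known envelope for this input."""
        self.lo[name] = min(self.lo.get(name, value), value)
        self.hi[name] = max(self.hi.get(name, value), value)

    def check(self, name, value):
        """Call during operation: flag values outside the tested envelope."""
        if name not in self.lo or not (self.lo[name] <= value <= self.hi[name]):
            self.alarms.append((name, value))
            return False
        return True

bit = EnvelopeBIT()
for v in (0.0, 5.0, 10.0):                  # values exercised in test
    bit.record("sensor_rate", v)
ok = bit.check("sensor_rate", 7.0)          # inside the envelope
bad = bit.check("sensor_rate", 42.0)        # outside -> alarm recorded
```

In the standard-tooling spirit of the text, such a monitor would be generated automatically by the development environment rather than hand-coded per module.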
NASA Astrophysics Data System (ADS)
Sherman, James P.; She, Chiao-Yao
2006-06-01
One thousand three hundred and eleven 15-min profiles of nocturnal mesopause region (80-105 km) temperature and horizontal wind, observed by the Colorado State University sodium lidar over Fort Collins, CO (41°N, 105°W), between May 2002 and April 2003, were analyzed. From these profiles, taken over 390 h and each possessing a vertical resolution of 2 km, a statistical analysis of seasonal variations in wind shears and convective and dynamical instabilities was performed. Large wind shears were most often observed near 100 km and during winter months. Thirty-five percent of the winter profiles contained wind shears exceeding 40 m/s per km at some altitude. In spite of large winds and shears, the mesopause region (at a resolution of 2 km and 15 min) is a very stable region. At a given altitude, the probability for convective instability is less than 1.4% for all seasons and the probability for dynamic instability (in the sense of the Richardson number) ranges from 2.7% to 6.0%. Wind shear measurements are compared with four decades of chemical release measurements, compiled in a study by Larson [2002. Winds and shears in the mesosphere and lower thermosphere: results from four decades of chemical release wind measurements. Journal of Geophysical Research 107(A8), 1215]. Instability results are compared with those deduced from an annual lidar study conducted with higher spatial and temporal resolution at the Starfire Optical Range (SOR) in Albuquerque, NM, by Zhao et al. [2003. Measurements of atmospheric stability in the mesopause region at Starfire Optical Range, NM. Journal of Atmospheric and Solar-Terrestrial Physics 65, 219-232], and from a study by Li et al. [2005b. Characteristics of instabilities in the mesopause region over Maui, Hawaii. Journal of Geophysical Research 110, D09S12] with 19 days of data acquired from the Maui Mesosphere and Lower Thermosphere (Maui MALT) Campaign. 
The Fort Collins lidar profiles were also analyzed using 1-h temporal resolution to compare instances of instabilities observed on different time scales.
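The two instability criteria used in the analysis can be sketched directly from temperature and wind profiles: convective instability where the squared Brunt-Väisälä frequency N² < 0, and dynamic instability where the Richardson number Ri = N²/S² drops below 1/4. The idealized profiles below (a constant 2 K/km lapse rate and a constant 40 m/s-per-km shear) are invented; the winds grow unrealistically large with height, but only the shear enters Ri.

```python
import numpy as np

G = 9.5          # m/s^2, approximate gravity near 90 km
CP = 1004.0      # J/(kg K), dry-air specific heat at constant pressure

def stability(T, u, v, z):
    """Classify layers as convectively unstable (N^2 < 0) or dynamically
    unstable (0 <= Ri < 1/4) from temperature and wind profiles."""
    dTdz = np.gradient(T, z)
    N2 = (G / T) * (dTdz + G / CP)                 # Brunt-Vaisala frequency^2
    S2 = np.gradient(u, z) ** 2 + np.gradient(v, z) ** 2
    Ri = np.where(S2 > 0, N2 / S2, np.inf)         # Richardson number
    return N2 < 0, (N2 >= 0) & (Ri < 0.25)

z = np.arange(80e3, 106e3, 2e3)                    # 80-105 km at 2 km bins
T = 190.0 - 2.0e-3 * (z - 80e3)                    # 2 K/km lapse rate: statically stable
u = 40e-3 * (z - 80e3)                             # constant shear, 40 m/s per km
v = np.zeros_like(z)
convective, dynamic = stability(T, u, v, z)
```

With these numbers Ri sits near 1/4, illustrating why the observed 40 m/s-per-km shears mark the edge of dynamic instability while convective instability remains rare.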
NASA Astrophysics Data System (ADS)
Sun, F.; Hall, A. D.; Walton, D.; Capps, S. B.; Qu, X.; Huang, H. J.; Berg, N.; Jousse, A.; Schwartz, M.; Nakamura, M.; Cerezo-Mota, R.
2012-12-01
Using a combination of dynamical and statistical downscaling techniques, we projected mid-21st century warming in the Los Angeles region at 2-km resolution. To account for uncertainty associated with the trajectory of future greenhouse gas emissions, we examined projections for both "business-as-usual" (RCP8.5) and "mitigation" (RCP2.6) emissions scenarios from the Fifth Coupled Model Intercomparison Project (CMIP5). To account for the considerable uncertainty associated with the choice of global climate model, we downscaled results for all available global climate models in CMIP5. For the business-as-usual scenario, we find that by the mid-21st century, the most likely warming is roughly 2.6°C averaged over the region's land areas, with a 95% confidence that the warming lies between 0.9 and 4.2°C. The high resolution of the projections reveals a pronounced spatial pattern in the warming: high elevations and inland areas separated from the coast by at least one mountain complex warm 20 to 50% more than the areas near the coast or within the Los Angeles basin. This warming pattern is especially apparent in summertime. The summertime warming contrast between the inland and coastal zones has a large effect on the most likely expected number of extremely hot days per year. Coastal locations and areas within the Los Angeles basin see roughly two to three times the number of extremely hot days, while high elevations and inland areas typically experience approximately three to five times the number of extremely hot days. Under the mitigation emissions scenario, the most likely warming and increase in heat extremes are somewhat smaller. However, the majority of the warming seen in the business-as-usual scenario still occurs at all locations in the most likely case under the mitigation scenario, and heat extremes still increase significantly. This warming study is the first part of a series of studies in our project. 
More climate change impacts on the Santa Ana winds, rainfall, snowfall and snowmelt, clouds and surface hydrology are forthcoming and can be found at www.atmos.ucla.edu/csrl. [Figure: the ensemble-mean, annual-mean surface air temperature change and its uncertainty from the available CMIP5 GCMs under the RCP8.5 (left) and RCP2.6 (right) emissions scenarios; unit: °C.]
NASA Technical Reports Server (NTRS)
Ramirez, Daniel Perez; Lyamani, H.; Olmo, F. J.; Whiteman, D. N.; Alados-Arboledas, L.
2012-01-01
This work presents the first analysis of long-term correlative day-to-night columnar aerosol optical properties. The aim is to better understand columnar aerosol dynamics from ground-based observations, which have been poorly studied until now. To this end we have used a combination of sun- and star-photometry measurements acquired in the city of Granada (37.16 N, 3.60 W, 680 m a.s.l.; south-east Spain) from 2007 to 2010. For the whole study period, the mean aerosol optical depth (AOD) around 440 nm (+/- standard deviation) is 0.18 +/- 0.10 and 0.19 +/- 0.11 for daytime and nighttime, respectively, while the mean Ångström exponent (alpha) is 1.0 +/- 0.4 and 0.9 +/- 0.4 for daytime and nighttime. ANOVA statistical tests reveal that there are no significant differences between the AOD and alpha values obtained at daytime and those at nighttime. Additionally, the mean daytime values of AOD and alpha obtained during this study period are consistent with the values obtained at the surrounding AERONET stations. On the other hand, AOD around 440 nm presents an evident seasonal pattern characterised by large values in summer (mean value of 0.20 +/- 0.10 both at daytime and nighttime) and low values in winter (mean values of 0.15 +/- 0.09 at daytime and 0.17 +/- 0.10 at nighttime). The Ångström exponents also present seasonal patterns, but with low values in summer (mean values of 0.8 +/- 0.4 and 0.9 +/- 0.4 at day- and night-time) and relatively large values in winter (mean values of 1.2 +/- 0.4 and 1.0 +/- 0.3 at daytime and nighttime). These seasonal patterns are explained by differences in the meteorological conditions and in the strength of the aerosol sources. To gain more insight into the changes in aerosol particles between day and night, the spectral differences of the Ångström exponent as a function of the Ångström exponent are also studied.
These analyses reveal increases of the fine-mode radius and of the fine-mode contribution to AOD during nighttime, which are more pronounced in summer. These variations are explained by day-to-night changes in the local aerosol sources and meteorological conditions, as well as by aerosol aging processes. Case studies during summer and winter for different aerosol loads and types are also presented to clearly illustrate these findings.
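The day/night comparison above rests on a one-way ANOVA. A minimal sketch of the F statistic such a test computes is below; the AOD samples are hypothetical, and a real analysis would also convert F to a p-value via the F distribution.

```python
from statistics import mean

def one_way_anova_F(*groups):
    """One-way ANOVA F statistic: between-group mean square divided by
    within-group mean square.  F near (or below) the critical value
    means the group means are statistically indistinguishable."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)   # between
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)     # within
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    return (ssb / df_b) / (ssw / df_w)

# Hypothetical daytime vs nighttime 440-nm AOD samples:
F = one_way_anova_F([0.18, 0.19, 0.17, 0.20], [0.19, 0.18, 0.20, 0.19])
```

With overlapping samples like these, F comes out well below the 5% critical value of F(1, 6) ≈ 5.99, the "no significant day/night difference" outcome reported above.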
NASA Astrophysics Data System (ADS)
Eslamizadeh, H.
2015-09-01
The fission probability; the pre-scission neutron, proton and alpha multiplicities; the anisotropy of the fission-fragment angular distribution; and the fission time have been calculated for the compound nuclei 200Pb and 197Tl based on a modified statistical model and a four-dimensional dynamical model. In the dynamical calculations, dissipation was generated through the chaos-weighted wall and window friction formula. The projection of the total spin of the compound nucleus onto the symmetry axis, K, was considered as the fourth dimension in the Langevin dynamical calculations. In our dynamical calculations, we have used a constant dissipation coefficient of K, γ_K = 0.077 (MeV zs)^(-1/2), and a non-constant dissipation coefficient to reproduce the above-mentioned experimental data. Comparison of the theoretical results for the fission probability and pre-scission particle multiplicities with the experimental data showed that the difference between the results of the two dynamical models is small, whereas for the anisotropy of the fission-fragment angular distribution it is somewhat larger. Furthermore, comparison of the results of the modified statistical model with the above-mentioned experimental data showed that, by choosing appropriate values of the temperature coefficient of the effective potential, λ, and the scaling factor of the fission-barrier height, r_s, the experimental data were satisfactorily reproduced.
Developing a Web-based system by integrating VGI and SDI for real estate management and marketing
NASA Astrophysics Data System (ADS)
Salajegheh, J.; Hakimpour, F.; Esmaeily, A.
2014-10-01
The importance of property in its various aspects, especially its impact on different sectors of the economy and on the country's macroeconomy, is clear. Because of the real, multi-dimensional and heterogeneous nature of housing as a commodity, the lack of an integrated system containing comprehensive property information, the lack of awareness among some actors in this field of such information, and the lack of clear and comprehensive rules and regulations for trading and pricing, several problems arise for the people involved in this field. In this research, the implementation of a crowd-sourced Web-based real estate support system is pursued. Creating a Spatial Data Infrastructure (SDI) within this system for collecting, updating and integrating all official data about property is also an aim of this study. The system uses a Web 2.0 broker and technologies such as Web services and service composition. This work aims to provide comprehensive and diverse information about property from different sources. For this purpose, a five-level real estate support system architecture is used. The PostgreSQL DBMS is used to implement the system. Geoserver is used as the map server and as a reference implementation of OGC (Open Geospatial Consortium) standards, and the Apache server is used to serve web pages and user interfaces. Integrating the introduced methods and technologies provides a proper environment for various users to use the system and share their information. This goal can only be achieved through cooperation among all organizations involved in real estate, with each implementing its required infrastructure as interoperable Web services.
p21(WAF1/Cip1/Sdi1) knockout mice respond to doxorubicin with reduced cardiotoxicity
Terrand, Jerome; Xu, Beibei; Morrissy, Steve; Dinh, Thai Nho; Williams, Stuart; Chen, Qin M.
2011-11-15
Doxorubicin (Dox) is an antineoplastic agent that can cause cardiomyopathy in humans and experimental animals. As an inducer of reactive oxygen species and a DNA-damaging agent, Dox causes elevated expression of the p21(WAF1/Cip1/Sdi1) (p21) gene. Elevated levels of p21 mRNA and p21 protein have been detected in the myocardium of mice following Dox treatment. With chronic Dox treatment, wild-type (WT) animals develop cardiomyopathy evidenced by elongated nuclei, mitochondrial swelling, myofilamental disarray, reduced cardiac output, reduced ejection fraction, reduced left ventricular contractility, and elevated expression of the ANF gene. In contrast, p21 knockout (p21KO) mice did not show significant changes in the same parameters in response to Dox treatment. In an effort to understand the mechanism of the resistance to Dox-induced cardiomyopathy, we measured levels of antioxidant enzymes and found that p21KO mice did not contain elevated basal or inducible levels of glutathione peroxidase and catalase. Measurements of 6 circulating cytokines indicated elevation of IL-6, IL-12, IFNγ and TNFα in Dox-treated WT mice but not p21KO mice. Dox-induced elevation of IL-6 mRNA was detected in the myocardium of WT mice but not p21KO mice. While the mechanism of the resistance to Dox-induced cardiomyopathy remains unclear, the lack of an inflammatory response may contribute to the observed cardiac protection in p21KO mice. Highlights: (1) Doxorubicin induces p21 elevation in the myocardium. (2) Doxorubicin causes dilated cardiomyopathy in wild-type mice. (3) p21 knockout mice are resistant to doxorubicin-induced cardiomyopathy. (4) Lack of inflammatory response correlates with the resistance in p21 knockout mice.
Rundle, John B.; Klein, William
2015-09-29
We have carried out research to determine the dynamics of failure in complex geomaterials, specifically focusing on the role of defects, damage and asperities in catastrophic failure processes (now popularly termed "Black Swan events"). We have examined fracture branching and flow processes using models for invasion percolation, focusing particularly on the dynamics of bursts in the branching process. We have achieved a fundamental understanding of the dynamics of nucleation in complex geomaterials, specifically in the presence of inhomogeneous structures.
Dayou, Fabrice; Larrégaray, Pascal; Bonnet, Laurent; Rayez, Jean-Claude; Arenas, Pedro Nilo; González-Lezana, Tomás
2008-05-01
The dynamics of the singlet channel of the Si + O2 → SiO + O reaction is investigated by means of quasiclassical trajectory (QCT) calculations and two statistics-based methods, the statistical quantum method (SQM) and a semiclassical version of phase space theory (PST). The dynamics calculations have been performed on the ground (1)A' potential energy surface of Dayou and Spielfiedel [J. Chem. Phys. 119, 4237 (2003)] for a wide range of collision energies (Ec = 5-400 meV) and initial O2 rotational states (j = 1-13). The overall dynamics is found to be highly sensitive to the selected initial conditions of the reaction, with an increase in either the collision energy or the O2 rotational excitation giving rise to a continuous transition from a direct abstraction mechanism to an indirect insertion mechanism. The product state properties associated with a collision energy of 135 meV and low rotational excitation of O2 are found to be consistent with the inverted SiO vibrational state distribution observed in a recent experiment. The SQM and PST statistical approaches, especially designed to deal with complex-forming reactions, provide an accurate description of the QCT total integral cross sections and opacity functions for all cases studied. The ability of such statistical treatments to provide reliable product state properties for a reaction dominated by competition between abstraction and insertion pathways is carefully examined, and it is shown that valuable information can be extracted over a wide range of selected initial conditions. PMID:18465922
Li, Kun; Emani, Prashant S; Ash, Jason; Groves, Michael; Drobny, Gary P
2014-08-13
Extracellular matrix proteins adsorbed onto mineral surfaces exist in a unique environment where the structure and dynamics of the protein can be altered profoundly. To further elucidate how the mineral surface impacts molecular properties, we perform a comparative study of the dynamics of nonpolar side chains within the mineral-recognition domain of the biomineralization protein salivary statherin adsorbed onto its native hydroxyapatite (HAP) mineral surface versus the dynamics displayed by the native protein in the hydrated solid state. Specifically, the dynamics of phenylalanine side chains (viz., F7 and F14) located in the surface-adsorbed 15-amino acid HAP-recognition fragment (SN15: DpSpSEEKFLRRIGRFG) are studied using deuterium magic angle spinning (²H MAS) line shape and spin-lattice relaxation measurements. ²H NMR MAS spectra and T1 relaxation times obtained from the deuterated phenylalanine side chains in free and HAP-adsorbed SN15 are fitted to models where the side chains are assumed to exchange between rotameric states and where the exchange rates and a priori rotameric state populations are varied iteratively. In condensed proteins, phenylalanine side-chain dynamics are dominated by 180° flips of the phenyl ring, i.e., the "π flip". However, for both F7 and F14, the number of exchanging side-chain rotameric states increases in the HAP-bound complex relative to the unbound solid sample, indicating that increased dynamic freedom accompanies introduction of the protein into the biofilm state. The observed rotameric exchange dynamics in the HAP-bound complex are on the order of 5-6 × 10^6 s^-1, as determined from the deuterium MAS line shapes. The dynamics in the HAP-bound complex are also shown to have some solution-like behavioral characteristics, with some interesting deviations from rotameric library statistics. PMID:25054469
Madkour, Tarek M; Salem, Sarah A; Miller, Stephen A
2013-04-28
To fully understand the thermodynamic nature of polymer blends and accurately predict their miscibility on a microscopic level, a hybrid model employing both statistical mechanics and molecular dynamics techniques was developed to effectively predict the total free energy of mixing. The statistical mechanics principles were used to derive an expression for the deformational entropy of the chains in the polymeric blends that could be evaluated from molecular dynamics trajectories. Evaluation of the entropy loss due to the deformation of the polymer chains in the case of coiling as a result of the repulsive interactions between the blend components or in the case of swelling due to the attractive interactions between the polymeric segments predicted a negative value for the deformational entropy resulting in a decrease in the overall entropy change upon mixing. Molecular dynamics methods were then used to evaluate the enthalpy of mixing, entropy of mixing, the loss in entropy due to the deformation of the polymeric chains upon mixing and the total free energy change for a series of polar and non-polar, poly(glycolic acid), PGA, polymer blends. PMID:23493907
Technology Transfer Automated Retrieval System (TEKTRAN)
It is known that irrigation application method can impact crop water use and water use efficiency, but the mechanisms involved are incompletely understood, particularly in terms of the water and energy balances during the growing season from pre-irrigation through planting, early growth and yield de...
NASA Astrophysics Data System (ADS)
Pahlavani, M. R.; Firoozi, B.
2015-11-01
Within a developed particle-hole approach, a systematic study of the β⁻ transition from the ground state of the 16N nucleus to the ground and some excited states of the 16O nucleus has been carried out. The energy spectrum and the wave functions of the pure configurations of the 16N and 16O nuclei are numerically obtained using the mean-field shell model with the Woods-Saxon nuclear potential accompanied by spin-orbit and Coulomb interactions. Considering the SDI residual interaction, mixed configurations of the ground and excited pnTDA and TDA states are extracted for the aforementioned nuclei. These energy spectra and corresponding eigenstates correspond closely to the experimental energy spectrum and eigenstates after adjusting the residual potential parameters using the Nelder-Mead (NM) algorithm. In this approach, the endpoint energy, log ft and the partial half-lives of some possible transitions are calculated. The results obtained using the optimized SDI approach are reasonably close to the available experimental data.
NASA Technical Reports Server (NTRS)
Mcmillan, S. L. W.; Lightman, A. P.
1984-01-01
A unified N-body and statistical treatment of stellar dynamics is developed and applied to the late stages of core collapse and early stages of post-collapse evolution in globular clusters. A 'hybrid' computer code is constructed in which a direct N-body code calculates exactly the behavior of particles in the inner spatial region, while particles in the outer spatial region are followed statistically. A transition zone allows the exchange of particles and energy between the two regions. The main results include: formation of a hard central binary system; reversal of core collapse and expansion due to the heat input from this binary; ejection of the binary from the core and recollapse of the core; density profiles that form a one-parameter sequence during the core oscillations; and indications that these oscillations will eventually cease.
NASA Astrophysics Data System (ADS)
Volchenkov, D.; Blanchard, Ph.
2007-02-01
Different models of random walks on the dual graphs of compact urban structures are considered. Analysis of access times between streets helps to detect city modularity. A statistical mechanics approach to ensembles of lazy random walkers is developed. The complexity of city modularity can be measured by an information-like parameter which plays the role of an individual fingerprint of the Genius loci. Global structural properties of a city can be characterized by thermodynamic parameters calculated in the random walk problem.
NASA Astrophysics Data System (ADS)
Hong, Mei; Zhang, Ren; Wang, Dong; Chen, Xi; Shi, Jian; Singh, Vijay
2014-12-01
The western Pacific subtropical high (WPSH) is closely correlated with the East Asian climate. To date, however, the underlying mechanisms and sustaining factors have not been fully elucidated. Based on the concept of dynamical system model reconstruction, this paper presents a nonlinear statistical-dynamical model of the subtropical high ridge line (SHRL) in concurrence with four summer monsoon factors. SHRL variations from 1990 to 2011 are subdivided into three categories, and the parameter differences among the three corresponding models are examined. Dynamical characteristics of the SHRL are analyzed and an aberrance mechanism subsequently developed. Modeling suggests that different parameters may lead to significant variance in the monsoon variables corresponding to numerous WPSH activities. Dynamical system bifurcation and mutation analysis indicates that the South China Sea monsoon trough is a significant factor in the occurrence and maintenance of the 'double-ridge' phenomenon. Moreover, the occurrence of the Mascarene cold high is predicted to cause an abnormally northward location of the WPSH, resulting in the "empty plum" phenomenon.
NASA Astrophysics Data System (ADS)
Albers, D. J.; Hripcsak, George
2010-02-01
Statistical physics and information theory are applied to the clinical chemistry measurements present in a patient database containing 2.5 million patients' data over a 20-year period. Despite the seemingly naive approach of aggregating all patients over all times (with respect to particular clinical chemistry measurements), both a diurnal signal in the decay of the time-delayed mutual information and the presence of two sub-populations with differing health are detected. This provides a proof in principle that the highly fragmented data in electronic health records have the potential to be useful in defining disease and human phenotypes.
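The time-delayed mutual information whose decay carries the diurnal signal can be estimated from a discretised series. A minimal sketch follows; the equal-width binning scheme and the sample series are illustrative assumptions, not the study's estimator.

```python
import math
from collections import Counter

def delayed_mutual_information(series, lag, bins=4):
    """Estimate I(x_t; x_{t+lag}) in bits from a numeric series by
    equal-width binning and plug-in probability estimates."""
    lo, hi = min(series), max(series)
    width = (hi - lo) / bins or 1.0            # guard against a constant series
    d = [min(int((v - lo) / width), bins - 1) for v in series]
    pairs = list(zip(d, d[lag:]))              # (x_t, x_{t+lag}) samples
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(a for a, _ in pairs)
    py = Counter(b for _, b in pairs)
    return sum(c / n * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# A perfectly periodic toy series: lagging by the period gives maximal MI.
mi = delayed_mutual_information([0.0, 1.0] * 4, lag=2, bins=2)
```

Plotting this quantity as a function of `lag` over 24-hour lab-draw data is how a diurnal signal would show up as a periodic modulation of the decay curve.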
NASA Astrophysics Data System (ADS)
Zhao, Jun-Hu; Yang, Liu; Hou, Wei; Liu, Gang; Zeng, Yu-Xing
2015-05-01
The cold vortex is a major high-impact weather system in northeast China during the warm season, and its frequent activity also affects the short-term climate throughout eastern China. How to objectively and quantitatively predict the intensity trend of the cold vortex is an urgent and difficult problem for current short-term climate prediction. Based on the dynamical-statistical combining principle, the predicted results of the Beijing Climate Center's global atmosphere-ocean coupled model and rich historical data are used for dynamical-statistical extra-seasonal prediction testing and actual prediction of the summer 500-hPa geopotential height over the cold vortex activity area. The results show that this method can significantly reduce the model's prediction error over the cold vortex activity area and improve prediction skill. Furthermore, the results of a sensitivity test reveal that the predicted results are highly dependent on the quantity of similar factors and the number of similar years. Project supported by the National Natural Science Foundation of China (Grant No. 41375078), the National Basic Research Program of China (Grant Nos. 2012CB955902 and 2013CB430204), and the Special Scientific Research Fund of Public Welfare Profession of China (Grant No. GYHY201306021).
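The dynamical-statistical correction described above, combining model output with information from similar historical years, can be sketched as an analogue error correction: subtract the model's mean hindcast error over the selected similar years from the new prediction. How the similar years are chosen is taken as given here, and all values are hypothetical.

```python
from statistics import mean

def analog_correction(model_pred, hindcasts, observations, analog_years):
    """Correct a model prediction by its mean error (obs - hindcast)
    over historically 'similar' (analog) years."""
    errors = [observations[y] - hindcasts[y] for y in analog_years]
    return model_pred + mean(errors)

# Hypothetical summer 500-hPa geopotential heights (gpm) over the
# cold-vortex activity area:
corrected = analog_correction(
    model_pred=5860.0,
    hindcasts={1990: 5840.0, 1991: 5850.0},
    observations={1990: 5850.0, 1991: 5856.0},
    analog_years=[1990, 1991],
)
```

The sensitivity noted in the abstract corresponds to how `corrected` changes as `analog_years` (and the factors used to pick them) is varied.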
NASA Astrophysics Data System (ADS)
Verma, M.; Denker, C.
2014-03-01
Context. Solar pores are penumbra-lacking magnetic features that mark two important transitions in the spectrum of magnetohydrodynamic processes: (1) the magnetic field becomes sufficiently strong to suppress the convective energy transport, and (2) at some critical point some pores develop a penumbra and become sunspots. Aims: The purpose of this statistical study is to comprehensively describe solar pores in terms of their size, perimeter, shape, photometric properties, and horizontal proper motions. The seeing-free and uniform data of the Japanese Hinode mission provide an opportunity to compare flow fields in the vicinity of pores in different environments and at various stages of their evolution. Methods: The extensive database of high-resolution G-band images observed with the Hinode Solar Optical Telescope (SOT) is a unique resource for deriving statistical properties of pores using advanced digital image processing techniques. The study is based on two data sets: (1) photometric and morphological properties inferred from single G-band images covering almost seven years from 2006 October 25 to 2013 August 31; and (2) horizontal flow fields derived from 356 one-hour sequences of G-band images using local correlation tracking (LCT) for a shorter period from 2006 November 3 to 2008 January 6, comprising 13 active regions. Results: A total of 7643/2863 (single/time-averaged) pores forms the foundation of the statistical analysis. Pores are preferentially observed at low latitudes in the southern hemisphere during the deep minimum of solar cycle No. 23. This imbalance reverses during the rise of cycle No. 24, when the pores migrate from high to low latitudes. Pores are rarely encountered in quiet-Sun G-band images, and only about 10% of pores exist in isolation. In general, pores do not exhibit a circular shape. Typical aspect ratios of the semi-major and -minor axes are 3:2 when ellipses are fitted to pores.
Smaller pores (more than two-thirds are smaller than 5 Mm²) tend to be more circular, and their boundaries are less corrugated. Both the area and perimeter length of pores obey log-normal frequency distributions. The frequency distribution of the intensity can be reproduced by two Gaussians representing dark and bright components. Bright features resembling umbral dots and even light bridges cover about 20% of the pores' area. Averaged radial profiles show a peak in the intensity at normalized radius RN = r/Rpore = 2.1, followed by maxima of the divergence at RN = 2.3 and of the radial component of the horizontal velocity at RN = 4.6. The divergence is negative within pores, strongly suggesting converging flows towards the center of pores, whereas exterior flows are directed towards neighboring supergranular boundaries. The photometric radius of pores, where the intensity reaches quiet-Sun levels at RN = 1.4, corresponds to the position where the divergence is zero at RN = 1.6. Conclusions: Morphological and photometric properties as well as horizontal flow fields have been obtained for a statistically meaningful sample of pores. This provides critical boundary conditions for MHD simulations of magnetic flux concentrations, which eventually evolve into sunspots or simply erode and fade away. Numerical models of pores (and sunspots) have to fit within these confines, and more importantly, ensembles of pores have to agree with the frequency distributions of the observed parameters.
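The log-normal frequency distributions reported for pore area and perimeter can be fitted simply from the moments of ln(x). The sketch below does this by the method of moments; the sample areas are hypothetical, not Hinode measurements.

```python
import math
from statistics import mean, stdev

def lognormal_fit(values):
    """Method-of-moments log-normal fit: mu and sigma of ln(x)."""
    logs = [math.log(v) for v in values]
    return mean(logs), stdev(logs)

def lognormal_pdf(x, mu, sigma):
    """Probability density of the fitted log-normal at x > 0."""
    return math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        x * sigma * math.sqrt(2 * math.pi))

# Hypothetical pore areas (Mm^2); a geometric sequence gives mu = sigma = ln 2.
mu, sigma = lognormal_fit([1.0, 2.0, 4.0])
```

Comparing a histogram of measured areas against `lognormal_pdf` with the fitted `(mu, sigma)` is the standard check that the distribution is indeed log-normal.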
Technology Transfer Automated Retrieval System (TEKTRAN)
Quorum sensing transcriptional regulator SdiA has been shown to enhance the survival of Escherichia coli O157:H7 (O157) in the acidic compartment of bovine rumen in response to N-acyl-L-homoserine lactones (AHLs) produced by the rumen bacteria. Bacteria that survive the rumen environment subsequentl...
NASA Astrophysics Data System (ADS)
Mutz, Sebastian; Paeth, Heiko; Winkler, Stefan
2016-03-01
The long-term behaviour of Norwegian glaciers is reflected in the long mass-balance records provided by the Norwegian Water Resources and Energy Directorate. These show positive annual mass balances at maritime glaciers in the 1980s and 1990s, followed by rapid mass loss since 2000. This study assesses the influence of various atmospheric variables on mass changes of selected Norwegian glaciers by correlation and cross-validated stepwise multiple regression analyses. The atmospheric variables are constructed from reanalyses by the National Centers for Environmental Prediction and the European Centre for Medium-Range Weather Forecasts. Transfer functions determined by the multiple regression are applied to predictors derived from a multi-model ensemble of climate projections to estimate future mass-balance changes until 2100. The statistical relationship to the North Atlantic Oscillation (NAO), the strongest predictor, is highest for maritime glaciers and weaker for more continental ones. The mass surplus in the 1980s and 1990s can be attributed to a strong NAO phase and lower air temperatures during the ablation season. The mass loss since 2000 can be explained by an increase in summer air temperatures and a slight weakening of the NAO. From 2000 to 2100 the statistical model predicts changes for glaciers in more continental settings of c. -20 m w.e. (water equivalent), i.e. -0.2 m w.e./a. The corresponding range for their more maritime counterparts is -0.5 to +0.2 m w.e./a. Results from Bayesian classification of observed atmospheric states associated with high melt or high accumulation in the past into different simulated future climates suggest that climatic conditions towards the end of the twenty-first century favour less winter accumulation and more ablation in summer.
The posterior probabilities for high accumulation at the end of the twenty-first century are typically 1.5-3 times lower than in the twentieth century while the posterior probabilities for high melt are often 1.5-3 times higher at the end of the twenty-first century than in the twentieth and early twenty-first century.
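A transfer function of the kind used above, linking atmospheric predictors to annual mass balance, can be sketched as an ordinary least-squares fit. The predictor names and values below are hypothetical, and the stepwise predictor selection and cross-validation steps of the study are omitted.

```python
import numpy as np

def fit_transfer_function(X, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + ... , returning the
    coefficient vector [b0, b1, b2, ...]."""
    A = np.column_stack([np.ones(len(X)), X])    # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    """Apply a fitted transfer function to new predictor rows."""
    A = np.column_stack([np.ones(len(X)), X])
    return A @ coef

# Hypothetical rows of (NAO index, summer temperature anomaly) and the
# corresponding mass balances (m w.e./a), generated from b = [0.1, 0.5, -0.3]:
X = [[1, 0], [0, 1], [1, 1], [2, 0], [0, 2], [2, 1]]
y = [0.6, -0.2, 0.3, 1.1, -0.5, 0.8]
coef = fit_transfer_function(X, y)
```

Applied to predictors from a projection ensemble instead of the training reanalysis, `predict` is what turns the fitted statistical relationship into a future mass-balance estimate.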
NASA Technical Reports Server (NTRS)
Mcmillan, S. L. W.
1986-01-01
The period immediately following the core collapse phase in the evolution of a globular cluster is studied using a hybrid N-body/Fokker-Planck stellar dynamical code. Several core oscillations of the type predicted in earlier work are seen. The oscillations are driven by the formation, hardening, and ejection of binaries by three-body processes, and appear to decay on a timescale of about 10 to the 7th yr, for the choice of 'typical' cluster parameters made here. There is no evidence that they are gravothermal in nature. The mechanisms responsible for the decay are discussed in some detail. The distribution of hard binaries produced by the oscillations is compared with theoretical expectations and the longer term evolution of the system is considered.
Diegert, Carl F.
2006-12-01
We define a new diagnostic method in which computationally-intensive numerical solutions are used as an integral part of making difficult, non-contact, nanometer-scale measurements. The limited scope of this report comprises most of a due-diligence investigation into implementing the new diagnostic for measuring dynamic operation of Sandia's RF Ohmic Switch. Our results are all positive, providing insight into how this switch deforms during normal operation. Future work should contribute important measurements on a variety of operating MEMS devices, with insights that are complementary to those from measurements made using interferometry and laser Doppler methods. More generally, the work opens up a broad front of possibility where exploiting massive high-performance computers enables new measurements.
Carter, S.; Mizell, D.
1989-07-01
The design of ISI's SDI architecture simulator was intended to minimize the software development necessary to add new simulation models to the system or to refine the detail of existing ones. The key software design approach used to accomplish this goal was the modeling of each simulated defense system component by a software object called a 'technology module.' Each technology module provided a carefully defined abstract interface between the component model and the rest of the simulation system, particularly the simulation models of battle managers. This report documents the first test of the validity of this software design approach. A new technology module modeling a 'kinetic kill vehicle' (KKV) was added to the simulator. Although this technology module had an impact on several parts of the simulation system in the form of new data structures and functions that had to be created, the integration of the new module was accomplished without the necessity of replacing any existing code.
NASA Astrophysics Data System (ADS)
Franchito, Sergio H.; Brahmananda Rao, V.; Moraes, E. C.
2011-11-01
In this study, a zonally-averaged statistical-dynamical climate model (SDM) is used to investigate the impact of global warming on the distribution of the geobotanic zones over the globe. The model includes a parameterization of the biogeophysical feedback mechanism that links the state of the surface to the atmosphere (a bidirectional interaction between vegetation and climate). In the control experiment (simulation of the present-day climate) the geobotanic state is well simulated by the model, and the distribution of the geobotanic zones over the globe shows very good agreement with the observed one. The impact of global warming on the distribution of the geobotanic zones is investigated considering the increase of CO2 concentration in the B1, A2 and A1FI scenarios. The results showed that the geobotanic zones over the entire Earth can be modified in the future by global warming. Expansion of the subtropical desert and semi-desert zones in the Northern and Southern Hemispheres, retreat of glaciers and sea ice (with the Arctic region particularly affected) and a reduction of the tropical rainforest and boreal forest can occur due to the increase in greenhouse gas concentrations. The effects were more pronounced in the A1FI and A2 scenarios than in the B1 scenario. The SDM results confirm IPCC AR4 projections of future climate and are consistent with simulations by more complex GCMs, reinforcing the necessity of mitigating the climate change associated with global warming.
NASA Technical Reports Server (NTRS)
Zheng, Quanan; Yan, Xiao-Hai; Klemas, Vic
1993-01-01
The internal waves on the continental shelf of the Middle Atlantic Bight seen on Space Shuttle photographs taken during the STS-40 mission in June 1991 are measured and analyzed. The internal wave field in the sample area has a three-level structure which consists of packet groups, packets, and solitons. An average packet-group wavelength of 17.5 km and an average soliton wavelength of 0.6 km are measured. Finite-depth theory is used to derive the dynamic parameters of the internal solitons: the maximum amplitude of 5.6 m, the characteristic phase speed of 0.42 m/s, the characteristic period of 23.8 min, the velocity amplitudes of the water particles in the upper and lower layers of 0.13 m/s and 0.030 m/s, respectively, and the theoretical energy per unit crest line of 6.8 × 10^4 J/m. The frequency distribution of solitons is triple-peaked rather than continuous. The major generation source is at 160 m water depth, and a second is at 1800 m depth, corresponding to the upper and lower edges of the shelf break.
NASA Astrophysics Data System (ADS)
Minvielle, Marie; Cassou, Christophe; Bourdallé-Badie, Romain; Terray, Laurent; Najac, Julien
2011-02-01
A novel statistical-dynamical scheme has been developed to reconstruct the sea surface atmospheric variables necessary to force an ocean model. Multiple linear regressions are first built, over a so-called learning period and over the entire Atlantic basin, from the observed relationship between the surface wind conditions, or predictands, and the anomalous large-scale atmospheric circulations, or predictors. The latter are estimated in the extratropics by 500-hPa geopotential height weather regimes and in the tropics by low-level wind classes. The transfer function, further combined with an analog step, is then used to reconstruct all the surface variable fields over 1958-2002. We show that the proposed hybrid scheme is very skillful in reproducing the mean state, the seasonal cycle and the temporal evolution of all the surface ocean variables at interannual timescales. Deficiencies are found in the level of variance, especially in the tropics: it is underestimated for 2-m temperature and humidity as well as for surface radiative fluxes in the interannual frequency band, while it is slightly overestimated at higher frequency. Decomposition in empirical orthogonal functions (EOFs) shows that the spatial and temporal coherence of the forcing fields is nevertheless very well captured by the reconstruction method. For dynamical downscaling purposes, the reconstructed fields are then interpolated and used to carry out a high-resolution oceanic simulation using the NATL4 (1/4°) model integrated over 1979-2001. This simulation is compared to a reference experiment in which the original observed forcing fields are prescribed instead. Mean states between the two experiments are virtually indistinguishable, both in terms of surface fluxes and of ocean dynamics as estimated by the barotropic and meridional overturning streamfunctions.
The 3-dimensional variance of the simulated ocean is well preserved at interannual timescales both for temperature and salinity, except in the tropics where it is underestimated. The main modes of interannual variability assessed through EOF are correctly reproduced for sea surface temperature, barotropic streamfunction and mixed layer depth, both in terms of spatial structure and temporal evolution. Collectively, our results provide evidence that the statistical-dynamical scheme presented in this two-part study is an efficient and promising tool to infer oceanic changes (in particular those related to the wind-driven circulation) due to modifications in the large-scale atmospheric circulation. As a prerequisite, we have here validated the method for present-day climate; we encourage its use, with some adaptations, for climate change studies.
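The regression step of such a statistical-dynamical scheme can be sketched in a few lines. This is a toy illustration on synthetic data (not the authors' predictors, dataset, or code): ordinary least squares maps two large-scale circulation indices onto a local surface variable over a "learning period".

```python
import random

random.seed(0)

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) for col in Bt] for row in A]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Synthetic "learning period": intercept plus two circulation predictors,
# one surface predictand generated from known coefficients plus noise.
X = [[1.0, random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
true_beta = [2.0, 0.8, -0.5]
y = [sum(b * x for b, x in zip(true_beta, row)) + random.gauss(0, 0.1) for row in X]

# beta = (X^T X)^{-1} X^T y  (the "transfer function" coefficients)
Xt = transpose(X)
beta = solve(matmul(Xt, X), [sum(xi * yi for xi, yi in zip(col, y)) for col in Xt])
print([round(b, 2) for b in beta])  # close to [2.0, 0.8, -0.5]
```

In the paper's scheme the fitted transfer function is then combined with an analog step before reconstructing the forcing fields; that step is omitted here.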
Demkin, V. P.; Mel'nichuk, S. V.
2014-09-15
In the present work, we report results of investigations into the dynamics of secondary electrons interacting with helium atoms in the presence of the reverse electric field that arises in the flare of a high-voltage pulsed beam-type discharge and leads to degradation of the primary electron beam. The electric field in a discharge of this type at moderate pressures can reach several hundred V/cm and leads to considerable changes in the kinetics of the secondary electrons created as the electron beam generated in the accelerating gap with a grid anode propagates. Moving in the accelerating electric field toward the anode, secondary electrons create the so-called compensating current to the anode. The character of electron motion and the compensating current itself are determined by the ratio of the field strength to the concentration of atoms (E/n). The energy and angular spectra of secondary electrons are calculated by the Monte Carlo method for different ratios E/n. The motion of secondary electrons with energies near the thresholds for inelastic collisions with helium atoms is studied, and a differential analysis is carried out of the collisional processes causing energy losses of electrons in helium for different E/n values. The mechanism of creation and accumulation of slow electrons as a result of inelastic collisions of secondary electrons with helium atoms and selective population of metastable states of helium atoms is considered. It is demonstrated that over a wide range of E/n values the motion of secondary electrons in the beam-type discharge flare has the character of drift. At E/n values characteristic of this type of discharge, the drift velocity of these electrons is calculated and compared with the available experimental data.
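The slow-electron accumulation mechanism described above can be caricatured in a few lines. Every numeric value here is an invented placeholder except the ~19.8 eV helium excitation threshold; the sketch only shows how field heating plus threshold losses pile electrons up in the slow part of the spectrum, not the paper's Monte Carlo.

```python
import random

random.seed(1)

# Toy Monte Carlo: an electron gains a fixed energy from the field between
# collisions (placeholder value) and, once above the helium excitation
# threshold (~19.8 eV), may lose that threshold energy in an inelastic
# collision, dropping back into the slow part of the spectrum.
E_GAIN = 2.0      # eV gained per free flight (invented)
E_THRESH = 19.8   # eV, lowest helium excitation threshold
P_INEL = 0.3      # chance a collision is inelastic above threshold (invented)

def simulate(n_flights=500):
    E, n_slow = 0.0, 0
    for _ in range(n_flights):
        E += E_GAIN
        if E >= E_THRESH and random.random() < P_INEL:
            E -= E_THRESH   # inelastic loss: one more slow electron accumulates
            n_slow += 1
    return E, n_slow

E_final, n_slow = simulate()
print(E_final, n_slow)
```

Because the inelastic channel repeatedly resets electrons to low energy, the electron energy stays far below the total energy gained from the field, which is the qualitative point of the abstract's mechanism.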
NASA Astrophysics Data System (ADS)
Kissick, David J.; Muir, Ryan D.; Sullivan, Shane Z.; Oglesbee, Robert A.; Simpson, Garth J.
2013-02-01
Despite the ubiquitous use of multi-photon and confocal microscopy measurements in biology, the core techniques typically suffer from fundamental compromises between signal to noise (S/N) and linear dynamic range (LDR). In this study, direct synchronous digitization of voltage transients coupled with statistical analysis is shown to allow S/N approaching the theoretical maximum throughout an LDR spanning more than 8 decades, limited only by the dark counts of the detector on the low end and by the intrinsic nonlinearities of the photomultiplier tube (PMT) detector on the high end. Synchronous digitization of each voltage transient represents a fundamental departure from established methods in confocal/multi-photon imaging, which are currently based on either photon counting or signal averaging. High information-density data acquisition (up to 3.2 GB/s of raw data) enables the smooth transition between the two modalities on a pixel-by-pixel basis and the ultimate writing of much smaller files (few kB/s). Modeling of the PMT response allows extraction of key sensor parameters from the histogram of voltage peak-heights. Applications in second harmonic generation (SHG) microscopy are described demonstrating S/N approaching the shot-noise limit of the detector over large dynamic ranges.
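The pixel-by-pixel switch between photon counting and signal averaging can be illustrated with a toy model (not the authors' acquisition code; thresholds and amplitudes are invented, and a binomial draw stands in for Poisson photon statistics). In the sparse regime, counting pulses with at least one photon and inverting Poisson statistics recovers the flux; in the pile-up regime, averaging the digitized peak heights does.

```python
import math
import random

random.seed(2)

THRESHOLD = 0.5   # discriminator level above dark noise (arbitrary units)

def acquire(mean_photons, n_pulses=2000):
    """Digitized peak height per laser pulse; each photon adds ~1.0 units."""
    heights = []
    for _ in range(n_pulses):
        # binomial(10, p) as a stand-in for Poisson(mean_photons)
        n = sum(1 for _ in range(10) if random.random() < mean_photons / 10)
        heights.append(sum(random.gauss(1.0, 0.2) for _ in range(n)))
    return heights

def estimate_flux(heights):
    frac_hit = sum(1 for h in heights if h > THRESHOLD) / len(heights)
    if frac_hit < 1 - math.exp(-1):          # sparse: photon counting
        return -math.log(1.0 - frac_hit)     # invert Poisson P(0) = exp(-mu)
    return sum(heights) / len(heights)       # pile-up: signal averaging

low = estimate_flux(acquire(0.2))   # counting regime
high = estimate_flux(acquire(5.0))  # averaging regime
print(round(low, 2), round(high, 2))
```

Both branches return the mean photon number per pulse, so the estimator transitions smoothly between modalities, which is the point the abstract makes about extending linear dynamic range.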
... Cancer Statistics Cancer has a major impact on society in the United States and across the world. ... makers, health professionals, and researchers to understand the impact of ... poses to the society at large. Statistical trends are also important for ...
ERIC Educational Resources Information Center
Petocz, Peter; Sowey, Eric
2008-01-01
As a branch of knowledge, Statistics is ubiquitous and its applications can be found in (almost) every field of human endeavour. In this article, the authors track down the possible source of the link between the "Siren song" and applications of Statistics. Answers to their previous five questions and five new questions on Statistics are presented.
NASA Astrophysics Data System (ADS)
Zekri, Nouredine; Clerc, Jean Pierre
In this work we numerically study the statistical and dynamical properties of the clusters in a one-dimensional small-world model. The parameters chosen correspond to a realistic network of children of school age in which a disease like measles can propagate. Extensive results on the statistical behavior of the clusters around the percolation threshold, as well as their evolution with time, are discussed. To cite this article: N. Zekri, J.P. Clerc, C. R. Physique 3 (2002) 741-747.
Hay, L.E.; Clark, M.P.
2003-01-01
This paper compares hydrologic model performance in three snowmelt-dominated basins in the western United States using dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis (NCEP). Runoff produced using a distributed hydrologic model is compared using daily precipitation and maximum and minimum temperature timeseries derived from the following sources: (1) NCEP output (horizontal grid spacing of approximately 210 km); (2) dynamically downscaled (DDS) NCEP output using a Regional Climate Model (RegCM2, horizontal grid spacing of approximately 52 km); (3) statistically downscaled (SDS) NCEP output; (4) spatially averaged measured data used to calibrate the hydrologic model (Best-Sta); and (5) spatially averaged measured data derived from stations located within the area of the RegCM2 model output used for each basin, but excluding the Best-Sta set (All-Sta). In all three basins the SDS-based simulations of daily runoff were as good as runoff produced using the Best-Sta timeseries. The NCEP, DDS, and All-Sta timeseries were able to capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all three basins, the NCEP-, DDS-, and All-Sta-based simulations of runoff showed little skill on a daily basis. When the precipitation and temperature biases were corrected in the NCEP, DDS, and All-Sta timeseries, the accuracy of the daily runoff simulations improved dramatically, but, with the exception of the bias-corrected All-Sta data set, these simulations were never as accurate as the SDS-based simulations. This need for a bias correction may be somewhat troubling, but in the case of the large station timeseries (All-Sta), the bias correction did indeed 'correct' for the change in scale. It is unknown whether bias corrections to model output will remain valid in a future climate.
Future work is warranted to identify the causes for (and removal of) systematic biases in DDS simulations, and improve DDS simulations of daily variability in local climate. Until then, SDS based simulations of runoff appear to be the safer downscaling choice.
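The simplest form of the bias correction discussed above is a mean-offset adjustment over a common period (the study's exact procedure may differ, and quantile mapping is a common refinement); the data below are toy values.

```python
from statistics import mean

# Minimal mean-bias correction sketch: remove the mean difference between a
# model timeseries and observations over a common period, then use the
# corrected series to drive the hydrologic model.
obs   = [2.0, 3.1, 2.5, 4.0, 3.3]   # observed daily temperature, deg C (toy)
model = [4.1, 5.0, 4.4, 6.2, 5.1]   # e.g. DDS output over the same days (toy)

bias = mean(model) - mean(obs)
corrected = [m - bias for m in model]
print(round(bias, 2), [round(c, 2) for c in corrected])
```

By construction the corrected series has the observed mean while keeping the model's day-to-day variability, which is why daily skill can improve dramatically after correction.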
Guyonvarch, Estelle; Ramin, Elham; Kulahci, Murat; Plósz, Benedek Gy
2015-10-15
The present study aims at using statistically designed computational fluid dynamics (CFD) simulations as numerical experiments for the identification of one-dimensional (1-D) advection-dispersion models - computationally light tools, used, e.g., as sub-models in systems analysis. The objective is to develop a new 1-D framework, referred to as interpreted CFD (iCFD) models, in which statistical meta-models are used to calculate the pseudo-dispersion coefficient (D) as a function of design and flow boundary conditions. The method - presented in a straightforward and transparent way - is illustrated using the example of a circular secondary settling tank (SST). First, the significant design and flow factors are screened out by applying the statistical method of two-level fractional factorial design of experiments. Second, based on the number of significant factors identified through the factor screening study and system understanding, 50 different sets of design and flow conditions are selected using Latin Hypercube Sampling (LHS). The boundary condition sets are imposed on a 2-D axi-symmetrical CFD simulation model of the SST. In the framework, to degenerate the 2-D model structure, CFD model outputs are approximated by the 1-D model through the calibration of three different model structures for D. Correlation equations for the D parameter are then identified as a function of the selected design and flow boundary conditions (meta-models), and their accuracy is evaluated against D values estimated in each numerical experiment. The evaluation and validation of the iCFD model structure is carried out using scenario simulation results obtained with parameters sampled from the corners of the LHS experimental region.
For the studied SST, additional iCFD model development was carried out in terms of (i) assessing different density current sub-models; (ii) implementation of a combined flocculation, hindered, transient and compression settling velocity function; and (iii) assessment of modelling the onset of transient and compression settling. Furthermore, the optimal level of model discretization both in 2-D and 1-D was undertaken. Results suggest that the iCFD model developed for the SST through the proposed methodology is able to predict solid distribution with high accuracy - taking a reasonable computational effort - when compared to multi-dimensional numerical experiments, under a wide range of flow and design conditions. iCFD tools could play a crucial role in reliably predicting systems' performance under normal and shock events. PMID:26248321
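The LHS step used above is simple to sketch: each factor's range is split into n equal strata, every stratum of every factor is sampled exactly once, and the pairings across factors are randomized. This is a generic illustration (the factor names and bounds are hypothetical, not the study's).

```python
import random

random.seed(3)

def lhs(n, bounds):
    """Minimal Latin Hypercube Sample: n points over len(bounds) factors."""
    cols = []
    for lo, hi in bounds:
        # one uniform draw inside each of the n strata, then shuffle the order
        col = [lo + (hi - lo) * (i + random.random()) / n for i in range(n)]
        random.shuffle(col)
        cols.append(col)
    return list(zip(*cols))  # n design points, one value per factor

# e.g. 50 boundary-condition sets over two hypothetical factors
# (inlet velocity in m/h, solids load in kg/h -- illustrative only).
design = lhs(50, [(0.5, 3.0), (10.0, 400.0)])
print(len(design), design[0])
```

Unlike plain random sampling, every marginal range is covered evenly, which is why LHS is popular for selecting a modest number of expensive CFD runs.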
NASA Astrophysics Data System (ADS)
Sun, F.; Hall, A. D.; Walton, D.; Capps, S. B.; Reich, K.
2013-12-01
Using a combination of dynamical and statistical downscaling techniques, we produced 2-km-resolution regional climate reconstructions and future projections of surface warming and snowfall changes in the Los Angeles region at the middle and end of the 21st century. Projections for both time periods were compared to a validated simulation of a baseline period (1981-2000). We examined outcomes associated with two IPCC-AR5 greenhouse gas emissions scenarios: a "business-as-usual" scenario (RCP8.5) and a "mitigation" scenario (RCP2.6). Output from all available global climate models in the CMIP5 archive was downscaled. We first statistically downscaled surface warming and then applied a statistical model between the surface temperature and snowfall to project the snowfall change. By mid-century, the mountainous areas in the Los Angeles region are likely to receive substantially less snowfall than in the baseline period. In RCP8.5, about 60% of the snowfall is most likely to persist, while in RCP2.6, the likely amount remaining is somewhat higher (about 70%). By end-of-century, however, the two scenarios diverge significantly. In RCP8.5, snowfall sees a dramatic further reduction, with only about a third of baseline snowfall persisting. For RCP2.6, snowfall sees only a negligible further reduction from mid-century. Due to significant differences in climate change outcomes across the global models, we estimate the uncertainty associated with these numbers to be in the range of 15-30 percentage points. For both scenarios and both time slices, the snowfall loss is consistently greatest at low elevations, and the lower-lying mountain ranges are somewhat more vulnerable to snowfall loss. The similarity in the two scenarios' most likely snowfall outcomes at mid-century illustrates the inevitability of climate change in the coming decades, no matter what mitigation measures are taken.
Their stark contrast at century's end reveals that reduction of greenhouse gas emissions will help avoid a dramatic loss of snowfall by the end of the century. In addition to snowfall projections, we also discuss how warming accelerates the melting of the already reduced snowfall.
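The logic of projecting snowfall from warming can be shown with a deliberately crude toy (not the study's statistical model, and the temperature distribution is synthetic): count precipitation as snow when the surface temperature is below freezing, shift the whole distribution by a warming amount, and recompute the surviving fraction.

```python
import random

random.seed(4)

def snow_fraction(temps, warming):
    """Fraction of baseline snow days that remain snow days after warming."""
    baseline = sum(1 for t in temps if t < 0.0)
    future = sum(1 for t in temps if t + warming < 0.0)
    return future / baseline

# Synthetic wet-day temperatures at a mountain site, deg C (toy values).
temps = [random.gauss(-1.0, 3.0) for _ in range(10000)]

f_mild = snow_fraction(temps, 1.0)    # moderate warming
f_strong = snow_fraction(temps, 3.0)  # strong warming keeps less snow
print(round(f_mild, 2), round(f_strong, 2))
```

The nonlinearity is the key point: because wet-day temperatures cluster near freezing, each additional degree of warming removes a disproportionate share of snowfall, so the two emissions scenarios diverge sharply by end-of-century.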
Chang, Bey-Dih; Watanabe, Keiko; Broude, Eugenia V.; Fang, Jing; Poole, Jason C.; Kalinichenko, Tatiana V.; Roninson, Igor B.
2000-01-01
Induction of cyclin-dependent kinase inhibitor p21Waf1/Cip1/Sdi1 triggers cell growth arrest associated with senescence and damage response. Overexpression of p21 from an inducible promoter in a human cell line induces growth arrest and phenotypic features of senescence. cDNA array hybridization showed that p21 expression selectively inhibits a set of genes involved in mitosis, DNA replication, segregation, and repair. The kinetics of inhibition of these genes on p21 induction parallels the onset of growth arrest, and their reexpression on release from p21 precedes the reentry of cells into cell cycle, indicating that inhibition of cell-cycle progression genes is a mechanism of p21-induced growth arrest. p21 also up-regulates multiple genes that have been associated with senescence or implicated in age-related diseases, including atherosclerosis, Alzheimer's disease, amyloidosis, and arthritis. Most of the tested p21-induced genes were not activated in cells that had been growth arrested by serum starvation, but some genes were induced in both forms of growth arrest. Several p21-induced genes encode secreted proteins with paracrine effects on cell growth and apoptosis. In agreement with the overexpression of such proteins, conditioned media from p21-induced cells were found to have antiapoptotic and mitogenic activity. These results suggest that the effects of p21 induction on gene expression in senescent cells may contribute to the pathogenesis of cancer and age-related diseases. PMID:10760295
... them all the time in the news - the number of people who were in the hospital last year, the ... all types of health statistics. Health statistics are numbers about some ... of diseases in groups of people. This can help in figuring out who is ...
ERIC Educational Resources Information Center
Petocz, Peter; Sowey, Eric
2012-01-01
The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the statistical…
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1987-01-01
A dynamic rain attenuation prediction model is developed for use in obtaining the temporal characteristics, on time scales of minutes or hours, of satellite communication link availability. Analogous to the associated static rain attenuation model, which yields yearly attenuation predictions, this dynamic model is applicable at any location in the world that is characterized by the static rain attenuation statistics peculiar to the geometry of the satellite link and the rain statistics of the location. Such statistics are calculated by employing the formalism of Part I of this report. In fact, the dynamic model presented here is an extension of the static model and reduces to the static model in the appropriate limit. By assuming that rain attenuation is dynamically described by a first-order stochastic differential equation in time and that this random attenuation process is a Markov process, an expression for the associated transition probability is obtained by solving the related forward Kolmogorov equation. This transition probability is then used to obtain such temporal rain attenuation statistics as attenuation durations and allowable attenuation margins versus control system delay.
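A first-order Markov attenuation process of the kind described can be sketched by treating log-attenuation as an Ornstein-Uhlenbeck process and simulating it with Euler-Maruyama (this is a generic illustration in the spirit of the abstract, not Manning's model; all parameter values are invented). Fade durations are then just the lengths of excursions above a margin.

```python
import math
import random

random.seed(5)

BETA = 1e-3      # 1/s, relaxation rate of log-attenuation (invented)
SIGMA = 0.05     # volatility of log-attenuation (invented)
DT = 1.0         # s, time step
MARGIN_DB = 3.0  # link margin
MEDIAN_DB = 2.0  # median attenuation (invented)

x = 0.0          # log of attenuation relative to its median
durations, run = [], 0
for _ in range(200000):
    # Euler-Maruyama step of dx = -BETA*x dt + SIGMA dW
    x += -BETA * x * DT + SIGMA * math.sqrt(DT) * random.gauss(0, 1)
    if MEDIAN_DB * math.exp(x) > MARGIN_DB:   # link is in a fade
        run += 1
    elif run:
        durations.append(run * DT)            # fade ended: record its duration
        run = 0

print(len(durations), "fades; mean duration",
      round(sum(durations) / len(durations), 1), "s")
```

The empirical distribution of `durations` plays the role of the attenuation-duration statistics that the paper derives analytically from the transition probability.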
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James
2014-01-01
Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.
Manos, Thanos; Robnik, Marko
2013-06-01
We study the kicked rotator in the classically fully chaotic regime using Izrailev's N-dimensional model for various N ≤ 4000, which in the limit N → ∞ tends to the quantized kicked rotator. We treat not only the case K=5, as studied previously, but also many different values of the classical kick parameter 5 ≤ K ≤ 35 and many different values of the quantum parameter k ∈ [5,60]. We describe the features of dynamical localization of chaotic eigenstates as a paradigm for other fully chaotic and/or mixed-type Hamiltonian systems, both time-periodic and time-independent (autonomous). We generalize the scaling variable Λ = l(∞)/N to the case of anomalous diffusion in the classical phase space by deriving the localization length l(∞) for the case of generalized classical diffusion. We greatly improve the accuracy and statistical significance of the numerical calculations, giving rise to the following conclusions: (1) The level-spacing distribution of the eigenphases (or quasienergies) is very well described by the Brody distribution, systematically better than by other proposed models, for various Brody exponents β(BR). (2) We study the eigenfunctions of the Floquet operator and characterize their localization properties using the information entropy measure, which after normalization is given by β(loc) in the interval [0,1]. The level repulsion parameters β(BR) and β(loc) are almost linearly related, close to the identity line. (3) We show the existence of a scaling law between β(loc) and the relative localization length Λ, now including the regimes of anomalous diffusion. The above findings are important also for chaotic eigenstates in time-independent systems [Batistić and Robnik, J. Phys. A: Math. Gen. 43, 215101 (2010); arXiv:1302.7174 (2013)], where the Brody distribution is confirmed to a very high degree of precision for dynamically localized chaotic eigenstates, even in mixed-type systems (after separation of regular and chaotic eigenstates).
PMID:23848746
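The Brody level-spacing distribution named above has a standard closed form, P_β(s) = (β+1) b s^β exp(-b s^(β+1)) with b = Γ((β+2)/(β+1))^(β+1); a short check (standard formulas, not code from the paper) confirms it interpolates between the Poisson spacing law at β = 0 and the Wigner surmise at β = 1.

```python
import math

def brody(s, beta):
    """Brody level-spacing distribution P_beta(s), normalized with <s> = 1."""
    b = math.gamma((beta + 2.0) / (beta + 1.0)) ** (beta + 1.0)
    return (beta + 1.0) * b * s ** beta * math.exp(-b * s ** (beta + 1.0))

# beta = 0 reduces to the Poisson spacing law exp(-s)...
assert abs(brody(1.0, 0.0) - math.exp(-1.0)) < 1e-12
# ...and beta = 1 to the Wigner surmise (pi/2) s exp(-pi s^2 / 4).
s = 0.7
assert abs(brody(s, 1.0) - (math.pi / 2) * s * math.exp(-math.pi * s * s / 4)) < 1e-12
print("Brody endpoint checks pass")
```

Fitting β to an empirical spacing histogram is how the level-repulsion parameter β(BR) in the abstract is extracted.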
The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute works to provide information on cancer statistics in an effort to reduce the burden of cancer among the U.S. population.
... population, or about 25 million Americans, has experienced tinnitus lasting at least five minutes in the past ... by NIDCD Epidemiology and Statistics Program staff: (1) tinnitus prevalence was obtained from the 2008 National Health ...
NASA Astrophysics Data System (ADS)
Hermann, Claudine
Statistical physics bridges the properties of a macroscopic system and the microscopic behavior of its constituent particles, a connection otherwise impossible to establish owing to the giant magnitude of Avogadro's number. Numerous systems of today's key technologies - such as semiconductors or lasers - are macroscopic quantum objects; only statistical physics allows for understanding their fundamentals. This graduate text therefore also focuses on particular applications, such as the properties of electrons in solids, and on radiation thermodynamics and the greenhouse effect.
NASA Astrophysics Data System (ADS)
Xiang, J.; Liao, Q. F.; Huang, S. X.; Lan, W. R.; Feng, Q.; Zhou, F. X.
2006-01-01
In the first paper in this series, a variational data assimilation of ideal tropical cyclone (TC) tracks was performed for the statistical-dynamical prediction model SD-90 by the adjoint method, and a prediction of TC tracks was made with good accuracy for tracks containing no sharp turns. In the present paper, the cases of real TC tracks are studied. Due to the complexity of TC motion, attention is paid to the diagnostic study of TC motion. First, five TC tracks are studied. Using the data of each entire TC track, the five tracks are fitted well by the adjoint method, and the forces acting on the TCs are retrieved. For a given TC, the distribution of the resultant of the retrieved force and the Coriolis force matches the corresponding TC track well, i.e., when a TC turns, the resultant of the retrieved force and the Coriolis force acts as a centripetal force, which means that the TC indeed moves like a particle; in particular, for TC 9911, the clockwise looping motion is also fitted well. The distribution of the resultant appears to be periodic in some cases. The method is then applied to a portion of the track data for TC 9804, which indicates that when the amount of data for a TC track is sufficient, the algorithm is stable. Finally, the same algorithm is implemented for TCs with a double-eyewall structure, namely Bilis (2000) and Winnie (1997), and the results prove the applicability of the algorithm to TCs with complicated mesoscale structures if the TC track data are obtained every three hours.
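The "TC moves like a particle" diagnostic can be illustrated on a synthetic looping track (this is a toy check of the kinematics, not the SD-90 adjoint code, and the loop radius and angular rate are invented): finite-difference the track positions to get the acceleration and verify it points at the loop's center, i.e. the net force per unit mass is centripetal.

```python
import math

R, OMEGA, DT = 100e3, 2e-5, 3600.0   # loop radius (m), rad/s, 1-hour fixes (toy)
track = [(R * math.cos(OMEGA * n * DT), R * math.sin(OMEGA * n * DT))
         for n in range(10)]

def accel(p0, p1, p2, dt):
    """Central-difference acceleration from three consecutive position fixes."""
    return tuple((a - 2.0 * b + c) / dt ** 2 for a, b, c in zip(p0, p1, p2))

ax, ay = accel(track[3], track[4], track[5], DT)
x, y = track[4]
# For uniform circular motion the acceleration equals -OMEGA**2 * position,
# i.e. the net force per unit mass is directed at the loop centre (centripetal).
print(ax / x, ay / y)   # both ratios ~ -OMEGA**2
```

In the paper, the counterpart of this acceleration is the resultant of the retrieved force and the Coriolis force, which is what makes the centripetal match on real looping tracks like TC 9911 meaningful.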
Knowles, L Lacey; Maddison, Wayne P
2002-12-01
While studies of phylogeography and speciation in the past have largely focused on the documentation or detection of significant patterns of population genetic structure, the emerging field of statistical phylogeography aims to infer the history and processes underlying that structure, and to provide objective, rather than ad hoc, explanations. Methods for parameter estimation are now commonly used to make inferences about the demographic past. Although these approaches are well developed statistically, they typically pay little attention to geographical history. In contrast, methods that seek to reconstruct phylogeographic history are able to consider many alternative geographical scenarios, but are primarily nonstatistical, making inferences about particular biological processes without explicit reference to stochastically derived expectations. We advocate the merging of these two traditions so that statistical phylogeographic methods can provide an accurate representation of the past, consider a diverse array of processes, and yet yield a statistical estimate of that history. We discuss various conceptual issues associated with statistical phylogeographic inferences, considering especially the stochasticity of population genetic processes and assessing the confidence of phylogeographic conclusions. To this end, we present some empirical examples that utilize a statistical phylogeographic approach, and then by contrasting results from a coalescent-based approach to those from Templeton's nested cladistic analysis (NCA), we illustrate the importance of assessing error. Because NCA does not assess error in its inferences about historical processes or contemporary gene flow, we performed a small-scale study using simulated data to examine how our conclusions might be affected by such unconsidered errors. NCA did not identify the processes used to simulate the data, confounding deterministic processes with the stochastic sorting of gene lineages.
There is as yet insufficient justification of NCA's ability to accurately infer or distinguish among alternative processes. We close with a discussion of some unresolved problems of current statistical phylogeographic methods and propose areas in need of future development. PMID:12453245
Statistical Downscaling: Lessons Learned
NASA Astrophysics Data System (ADS)
Walton, D.; Hall, A. D.; Sun, F.
2013-12-01
In this study, we examine ways to improve statistical downscaling of general circulation model (GCM) output. Why do we downscale GCM output? GCMs have low resolution, so they cannot represent local dynamics and topographic effects that cause spatial heterogeneity in the regional climate change signal. Statistical downscaling recovers fine-scale information by utilizing relationships between the large-scale and fine-scale signals to bridge this gap. In theory, the downscaled climate change signal is more credible and accurate than its GCM counterpart, but in practice, there may be little improvement. Here, we tackle the practical problems that arise in statistical downscaling, using temperature change over the Los Angeles region as a test case. This region is an ideal place to apply downscaling since its complex topography and shoreline are poorly simulated by GCMs. By comparing two popular statistical downscaling methods and one dynamical downscaling method, we identify issues with statistically downscaled climate change signals and develop ways to fix them. We focus on scale mismatch, domain of influence, and other problems - many of which users may be unaware of - and discuss practical solutions.
Avalanche statistics of sand heaps
NASA Astrophysics Data System (ADS)
Buchholtz, Volkhard; Pöschel, Thorsten
1996-09-01
Large-scale computer simulations are presented to investigate the avalanche statistics of sandpiles using molecular dynamics. We show that different methods of measurement lead to contradictory conclusions, presumably due to avalanches not reaching the end of the experimental table.
NASA Technical Reports Server (NTRS)
Giles, B. L.; Chappell, C. R.; Moore, T. E.; Comfort, R. H.; Waite, J. H., Jr.
1994-01-01
Core (0-50 eV) ion pitch angle measurements from the retarding ion mass spectrometer on Dynamics Explorer 1 are examined with respect to magnetic disturbance, invariant latitude, magnetic local time, and altitude for ions H(+), He(+), O(+), M/Z = 2 (D(+) or He(++)), and O(++). Included are outflow events in the auroral zone, polar cap, and cusp, separated into altitude regions below and above 3 R(sub E). In addition to the customary division into beam, conic, and upwelling distributions, the high-latitude observations fall into three categories corresponding to ion bulk speeds that are (1) less than, (2) comparable to, or (3) faster than that of the spacecraft. This separation, along with the altitude partition, serves to identify conditions under which ionospheric source ions are gravitationally bound and when they are more energetic and able to escape to the outer magnetosphere. Features of the cleft ion fountain inferred from single-event studies are clearly identifiable in the statistical results. In addition, it is found that the dayside pre-noon cleft is a steady source, while the dayside afternoon cleft, or auroral zone, becomes an additional source during increased activity. The auroral oval as a whole appears to be a steady source of escape-velocity H(+), a steady source of escape-velocity He(+) ions for the dusk sector, and a source of escape-velocity heavy ions for dusk local times primarily during increased activity. The polar cap above the auroral zone is a consistent source of low-energy ions, although only the lighter mass particles appear to have sufficient velocity, on average, to escape to higher altitudes. The observations support two concepts for outflow: (1) The cleft ion fountain consists of ionospheric plasma of 1-20 eV energy streaming upward into the magnetosphere where high-latitude convection electric fields cause poleward dispersion.
(2) The auroral ion fountain involves field-aligned beams which flow out along auroral latitude field lines; and, in addition, for late afternoon local times, they experience additional acceleration such that the ion energy distribution tends to exceed the detection range of the instrument (greater than 50-60 eV).
ERIC Educational Resources Information Center
Catley, Alan
2007-01-01
Following the announcement last year that there will be no more math coursework assessment at General Certificate of Secondary Education (GCSE), teachers will in the future be able to devote more time to preparing learners for formal examinations. One of the key things that the author has learned when teaching statistics is that it makes for far…
ERIC Educational Resources Information Center
Penfield, Douglas A.
The 30 papers in the area of educational statistics that were presented at the 1972 AERA Conference are reviewed. The papers are categorized into five broad areas of interest: (1) theory of univariate analysis, (2) nonparametric methods, (3) regression-prediction theory, (4) multivariable methods, and (5) factor analysis. A list of the papers…
Conventional statistics and useful statistics.
Rahlfs, V W
1995-02-01
Differences between conventional statistical methods and more useful, modern methods are demonstrated using a statistical analysis of data from therapeutic research in rheumatology. The conventional methods - the t-test, graphs of mean values, and the boxplot - detect almost no differences between treatment groups. A more recent procedure for analysing group differences is the Wilcoxon-Mann-Whitney test. The associated graphs are based on the cumulative distribution functions of the two treatment groups and the synthetic Receiver Operating Characteristic (ROC). Special differences, namely baseline dependencies, can be visualized in this way. PMID:7710451
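The link between the Wilcoxon-Mann-Whitney test and the ROC curve can be made concrete: the normalized U statistic, U/(n·m), equals the area under the empirical ROC curve, i.e. an estimate of P(control value < treatment value). The data below are toy values, not from the rheumatology study; ties get the usual half credit.

```python
# Toy two-group data (invented), e.g. a clinical score in two treatment arms.
control   = [3, 5, 4, 6, 2]
treatment = [7, 5, 8, 6, 9]

def u_statistic(xs, ys):
    """Mann-Whitney U: count of pairs with x < y, ties counted as 1/2."""
    return sum(1.0 if x < y else 0.5 if x == y else 0.0
               for x in xs for y in ys)

auc = u_statistic(control, treatment) / (len(control) * len(treatment))
print(auc)  # 0.92: strong, but not perfect, separation of the groups
```

An AUC of 0.5 would mean the groups are indistinguishable and 1.0 perfect separation, which is exactly the group-difference information the ROC graphs in the paper visualize.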
Bogomolny, E; Gerland, U; Schmit, C
2001-03-01
We consider the statistical distribution of zeros of random meromorphic functions whose poles are independent random variables. It is demonstrated that correlation functions of these zeros can be computed analytically, and explicit calculations are performed for the two-point correlation function. This problem naturally appears in, e.g., rank-1 perturbation of an integrable Hamiltonian and, in particular, when a delta-function potential is added to an integrable billiard. PMID:11308740
NASA Technical Reports Server (NTRS)
Druzhinin, I. P.; Khamyanova, N. V.; Yagodinskiy, V. N.
1974-01-01
Statistical evaluations of the significance of the relationship between abrupt changes in solar activity and discontinuities in the multi-year pattern of an epidemic process are reported. They reliably (with probability of more than 99.9%) show the real nature of this relationship and its large contribution (about half) to the formation of discontinuities in the multi-year pattern of the processes in question.
Kaji, Takahiro; Ito, Syoji; Iwai, Shigenori; Miyasaka, Hiroshi
2009-10-22
Single-molecule and ensemble time-resolved fluorescence measurements were applied to investigate the conformational dynamics of single-stranded DNA (ssDNA) connected to a fluorescein dye by a C6 linker, where the motions of both the DNA and the C6 linker affect the geometry of the system. From the ensemble measurement of the fluorescence quenching via photoinduced electron transfer with a guanine base in the DNA sequence, three main conformations were found in aqueous solution: a conformation unaffected by the guanine base within the excited-state lifetime of fluorescein, a conformation in which the fluorescence is dynamically quenched within the excited-state lifetime, and a conformation leading to rapid quenching via a nonfluorescent complex. Analysis of the interphoton-time distribution histograms and FCS autocorrelations from the single-molecule measurements, using parameters acquired from the ensemble measurements, revealed that interconversion among these three conformations took place with two characteristic time constants of several hundreds of nanoseconds and tens of microseconds. The advantage of combining ensemble measurements with single-molecule detection for rather complex dynamic motions is discussed by integrating the experimental results with those obtained by molecular dynamics simulation. PMID:19780517
Mitrikas, V G
2014-01-01
The on-going 24th solar cycle (SC) is distinguished from the previous ones by low activity. By contrast, levels of proton fluxes from galactic cosmic rays (GCR) are high, which increases the proton flow striking the Earth's radiation belts (ERB). Therefore, at present the absorbed dose from ERB protons should be calculated with consideration of the tangible increase in proton intensity built into the model descriptions based on experimental measurements during the minimum between cycles 19 and 20 and the maximum of cycle 21. The absorbed dose from GCR and ERB protons follows the dynamics of galactic protons, while the ERB electron dose follows the SC dynamics. The major factors that determine the absorbed dose are the SC phase, the ISS orbital altitude, and the shielding of the dosimeters whose readings are used in the analysis. The paper presents the results of a dynamic analysis of absorbed doses measured by a variety of dosimeters, namely R-16 (2 ionization chambers), DB8-1, DB8-2, DB8-3 and DB8-4, as a function of ISS orbit altitude and SC phase. The existence of an annual variation in the absorbed dose dynamics has been confirmed; several additional variations with periods of 17 and 52 months have been detected. Modulation of the absorbed dose variations by the SC and GCR amplitudes has been demonstrated. PMID:25035897
Statistical characterization of dislocation ensembles
El-Azab, A; Deng, J; Tang, M
2006-05-17
We outline a method to study the spatial and orientation statistics of dynamical dislocation systems by modeling the dislocations as a stochastic fiber process. Statistical measures have been introduced for the density, velocity, and flux of dislocations, and the connection between these measures and the dislocation state and plastic distortion rate in the crystal is explained. A dislocation dynamics simulation model has been used to extract numerical data to study the evolution of these statistical measures numerically in a body-centered cubic crystal under deformation. The orientation distribution of the dislocation density, velocity and dislocation flux, as well as the dislocation correlations have been computed. The importance of the statistical measures introduced here in building continuum models of dislocation systems is highlighted.
NASA Astrophysics Data System (ADS)
Graham, D. B.; Cairns, Iver H.; Skjaeraasen, O.; Robinson, P. A.
2012-02-01
The temperature ratio Ti/Te of ions to electrons affects both the ion-damping rate and the ion-acoustic speed in plasmas. The effects of changing the ion-damping rate and ion-acoustic speed are investigated for electrostatic strong turbulence and electromagnetic strong turbulence in three dimensions. When ion damping is strong, density wells relax in place and act as nucleation sites for the formation of new wave packets. In this case, the density perturbations are primarily density wells supported by the ponderomotive force. For weak ion damping, corresponding to low Ti/Te, ion-acoustic waves are launched radially outwards when wave packets dissipate at burnout, thereby increasing the level of density perturbations in the system and thus raising the level of scattering of Langmuir waves off density perturbations. Density wells no longer relax in place, so renucleation at recent collapse sites no longer occurs; instead, wave packets form in background low-density regions, such as superpositions of troughs of propagating ion-acoustic waves. This transition is found to occur at Ti/Te ≈ 0.1. The change in behavior with Ti/Te is shown to change the bulk statistical properties, scaling behavior, spectra, and field statistics of strong turbulence. For Ti/Te ≳ 0.1, the electrostatic results approach the predictions of the two-component model of Robinson and Newman, and good agreement is found for Ti/Te ≳ 0.15.
Chaos and Coarse Graining in Statistical Mechanics
NASA Astrophysics Data System (ADS)
Castiglione, Patrizia; Falcioni, Massimo; Lesne, Annick; Vulpiani, Angelo
2008-08-01
1. Basic concepts of dynamical systems theory; 2. Dynamical indicators for chaotic systems: Lyapunov exponents, entropies and beyond; 3. Coarse graining, entropies and Lyapunov exponents at work; 4. Foundation of the statistical mechanics and dynamical systems; 5. On the origin of irreversibility; 6. The role of chaos in non-equilibrium statistical mechanics; 7. Coarse-graining equations in complex systems; 8. Renormalization-group approaches; Index.
NASA Technical Reports Server (NTRS)
Vangelder, B. H. W.
1978-01-01
Non-Bayesian statistics were used in simulation studies centered around laser range observations to LAGEOS. The capabilities of satellite laser ranging, especially in connection with relative station positioning, are evaluated. The satellite measurement system under investigation may fall short in precise determination of the earth's orientation (precession and nutation) and the earth's rotation, as opposed to systems such as very long baseline interferometry (VLBI) and lunar laser ranging (LLR). Relative station positioning, determination of (differential) polar motion, positioning of stations with respect to the earth's center of mass, and determination of the earth's gravity field should be easily realized by satellite laser ranging (SLR). The last two features should be considered as best (or solely) determinable by SLR in contrast to VLBI and LLR.
Cosmetic Plastic Surgery Statistics
2014 Cosmetic Plastic Surgery Statistics Cosmetic Procedure Trends 2014 Plastic Surgery Statistics Report Please credit the AMERICAN SOCIETY OF PLASTIC SURGEONS when citing statistical data or using ...
NASA Technical Reports Server (NTRS)
Balcer-Kubiczek, E. K.; Zhang, X. F.; Harrison, G. H.; Zhou, X. J.; Vigneulle, R. M.; Ove, R.; McCready, W. A.; Xu, J. F.
1999-01-01
PURPOSE: Differences in gene expression underlie the phenotypic differences between irradiated and unirradiated cells. The goal was to identify late-transcribed genes following irradiations differing in quality, and to determine the RBE of 1 GeV/n Fe ions. MATERIALS AND METHODS: Clonogenic assay was used to determine the RBE of Fe ions. Differential hybridization to cDNA target clones was used to detect differences in expression of corresponding genes in mRNA samples isolated from MCF7 cells irradiated with iso-survival doses of Fe ions (0 or 2.5 Gy) or fission neutrons (0 or 1.2 Gy) 7 days earlier. Northern analysis was used to confirm differential expression of cDNA-specific mRNA and to examine expression kinetics up to 2 weeks after irradiation. RESULTS: Fe ion RBE values were between 2.2 and 2.6 in the lines examined. Two of 17 differentially expressed cDNA clones were characterized. hpS2 mRNA was elevated from 1 to 14 days after irradiation, whereas CIP1/WAF1/SDI1 remained elevated from 3 h to 14 days after irradiation. Induction of hpS2 mRNA by irradiation was independent of p53, whereas induction of CIP1/WAF1/SDI1 was observed only in wild-type p53 lines. CONCLUSIONS: A set of coordinately regulated genes, some of which are independent of p53, is associated with change in gene expression during the first 2 weeks post-irradiation.
NASA Astrophysics Data System (ADS)
Cassou, Christophe; Minvielle, Marie; Terray, Laurent; Périgaud, Claire
2011-01-01
The links between the observed variability of the surface ocean variables estimated from reanalysis and the overlying atmosphere decomposed in classes of large-scale atmospheric circulation via clustering are investigated over the Atlantic from 1958 to 2002. Daily 500 hPa geopotential height and 1,000 hPa wind anomaly maps are classified following a weather-typing approach to describe the North Atlantic and tropical Atlantic atmospheric dynamics, respectively. The algorithm yields patterns that correspond in the extratropics to the well-known North Atlantic-Europe weather regimes (NAE-WR) accounting for the barotropic dynamics, and in the tropics to wind classes (T-WC) representing the alteration of the trades. 10-m wind and 2-m temperature (T2) anomaly composites derived from regime/wind class occurrence are indicative of strong relationships between daily large-scale atmospheric circulation and ocean surface over the entire Atlantic basin. High temporal correlation values are obtained basin-wide at low frequency between the observed fields and their reconstruction by multiple linear regressions with the frequencies of occurrence of both NAE-WR and T-WC used as sole predictors. Additional multiple linear regressions also emphasize the importance of accounting for the strength of the daily anomalous atmospheric circulation estimated by the combined distances to all regimes centroids in order to reproduce the daily to interannual variability of the Atlantic ocean. We show that for most of the North Atlantic basin the occurrence of NAE-WR generally sets the sign of the ocean surface anomaly for a given day, and that the inter-regime distances are valuable predictors for the magnitude of that anomaly. Finally, we provide evidence that a large fraction of the low-frequency trends in the Atlantic observed at the surface over the last 50 years can be traced back, except for T2, to changes in occurrence of tropical and extratropical weather classes. 
Altogether, our findings are encouraging for the prospects of basin-scale ocean dynamical downscaling using a weather-typing approach to reconstruct forcing fields for high-resolution ocean models (Part II) from coarse-resolution climate models.
Krommes, J.A. (Plasma Physics Lab.); Kim, Chang-Bae (Inst. for Fusion Studies)
1990-06-01
The fundamental problem in the theory of turbulent transport is to find the flux {Gamma} of a quantity such as heat. Methods based on statistical closures are mired in conceptual controversies and practical difficulties. However, it is possible to bound {Gamma} by employing constraints derived rigorously from the equations of motion. Brief reviews of the general theory and its application to passive advection are given. Then, a detailed application is made to anomalous resistivity generated by self-consistent turbulence in a reversed-field pinch. A nonlinear variational principle for an upper bound on the turbulent electromotive force for fixed current is formulated from the magnetohydrodynamic equations in cylindrical geometry. Numerical solution of a case constrained solely by energy balance leads to a reasonable bound and nonlinear eigenfunctions that share intriguing features with experimental data: the dominant mode numbers appear to be correct, and field reversal is predicted at reasonable values of the pinch parameter. Although open questions remain, upon considering all bounding calculations to date one can conclude, remarkably, that global energy balance constrains transport sufficiently that bounds derived therefrom are not unreasonable, and that bounding calculations are feasible even for involved practical problems. The potential of the method has hardly been tapped; it provides a fertile area for future research. 29 refs.
Statistical Mechanics of Motorized Molecules
NASA Astrophysics Data System (ADS)
Prentis, Jeffrey
2002-03-01
We have designed a set of experiments that illustrate the basic principles of statistical mechanics, including the fundamental postulate, the ergodic hypothesis, and the canonical statistics. The experimental system is a granular fluid of "motorized molecules" (self-propelled balls). Mechanical properties are measured using motion sensors, force probes, and digital video. Statistical properties are determined by a dynamical probability - the fraction of time that the system spends in each state. Thermal properties are represented by time averages. The process by which statistical patterns appear in the mechanical data vividly illustrates how thermal order emerges from molecular chaos. The pV diagram of a gas of motorized molecules is obtained by monitoring the random force exerted by the molecules beating against a piston. Brownian motion is studied by monitoring the random walk of a Brownian cube in a fluid of self-propelled spheres. Canonical statistics is illustrated using a "Boltzmann machine" - a working dynamical model of a two-level quantum system in a temperature bath. Polymer statistics is illustrated using a granular polymer solution - a chain of ping-pong balls immersed in a solvent of motorized molecules.
NASA Astrophysics Data System (ADS)
Tanoh, K. S.; Adohi, B. J.-P.; Coulibaly, I. S.; Amory-Mazaudier, C.; Kobea, A. T.; Assamoi, P.
2015-01-01
In this paper, we report on the night-time equatorial F-layer height behaviour at Korhogo (9.2° N, 5° W; 2.4° S dip lat), Ivory Coast, in the West African sector during the solar minimum period 1995-1997. The data were collected from quarter-hourly ionograms of an Ionospheric Prediction Service (IPS) 42-type vertical sounder. The main focus of this work was to study the seasonal changes in the F-layer height and to clarify the equinox transition process recently evidenced at Korhogo during 1995, the year of declining solar flux activity. The F-layer height was found to vary strongly with time, with up to three main phases. The night-to-night variability of these morphological phases was then analysed. The early post-sunset slow rise, commonly associated with rapid chemical recombination processes in the bottom part of the F layer, remained featureless and was observed regardless of the date. By contrast, the following event either appeared as the post-sunset height peak associated with the evening E × B drift, or was delayed to the midnight sector, thus involving another mechanism. The statistical analysis of the occurrence of these events throughout the solar minimum period 1995-1997 revealed two main F-layer height patterns, each characteristic of a specific season. The one with the post-sunset height peak was associated with the northern winter period, whereas the other, with the midnight height peak, characterized the northern summer period. The transition process from one pattern to the other took place during the equinox periods and was found to last only a few weeks. We discuss these results in the light of earlier works.
NASA Astrophysics Data System (ADS)
Yeung, Chi Ho
In this thesis, we study two interdisciplinary problems in the framework of statistical physics, which show the broad applicability of physics on problems with various origins. The first problem corresponds to an optimization problem in allocating resources on random regular networks. Frustrations arise from competition for resources. When the initial resources are uniform, different regimes with discrete fractions of satisfied nodes are observed, resembling the Devil's staircase. We apply the spin glass theory in analyses and demonstrate how functional recursions are converted to simple recursions of probabilities. Equilibrium properties such as the average energy and the fraction of free nodes are derived. When the initial resources are bimodally distributed, increases in the fraction of rich nodes induce a glassy transition, entering a glassy phase described by the existence of multiple metastable states, in which we employ the replica symmetry breaking ansatz for analysis. The second problem corresponds to the study of multi-agent systems modeling financial markets. Agents in the system trade among themselves, and self-organize to produce macroscopic trading behaviors resembling the real financial markets. These behaviors include the arbitraging activities, the setting up and the following of price trends. A phase diagram of these behaviors is obtained, as a function of the sensitivity of price and the market impact factor. We finally test the applicability of the models with real financial data including the Hang Seng Index, the Nasdaq Composite and the Dow Jones Industrial Average. A substantial fraction of agents gains faster than the inflation rate of the indices, suggesting the possibility of using multi-agent systems as a tool for real trading.
Rabbel, Hauke; Frey, Holger; Schmid, Friederike
2015-12-28
The reaction of ABm monomers (m = 2, 3) with a multifunctional Bf-type polymer chain ("hypergrafting") is studied by coarse-grained molecular dynamics simulations. The ABm monomers are hypergrafted using the slow monomer addition strategy. Fully dendronized, i.e., perfectly branched polymers are also simulated for comparison. The degree of branching of the molecules obtained with the "hypergrafting" process critically depends on the rate with which monomers attach to inner monomers compared to terminal monomers. This ratio is more favorable if the ABm monomers have lower reactivity, since the free monomers then have time to diffuse inside the chain. Configurational chain properties are also determined, showing that the stretching of the polymer backbone as a consequence of the "hypergrafting" procedure is much less pronounced than for perfectly dendronized chains. Furthermore, we analyze the scaling of various quantities with molecular weight M for large M (M > 100). The Wiener index scales as M^2.3, which is intermediate between linear chains (M^3) and perfectly branched polymers (M^2 ln M). The polymer size, characterized by the radius of gyration Rg or the hydrodynamic radius Rh, is found to scale as Rg,h ∝ M^ν with ν ≈ 0.38, which lies between the exponent of diffusion-limited aggregation (ν = 0.4) and the mean-field exponent predicted by Konkolewicz and co-workers [Phys. Rev. Lett. 98, 238301 (2007)] (ν = 0.33). PMID:26723610
Statistical ecology comes of age.
Gimenez, Olivier; Buckland, Stephen T; Morgan, Byron J T; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric
2014-12-01
The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1-4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data. PMID:25540151
Statistics Poker: Reinforcing Basic Statistical Concepts
ERIC Educational Resources Information Center
Leech, Nancy L.
2008-01-01
Learning basic statistical concepts does not need to be tedious or dry; it can be fun and interesting through cooperative learning in the small-group activity of Statistics Poker. This article describes a teaching approach for reinforcing basic statistical concepts that can help students who have high anxiety and makes learning and reinforcing…
Statistical physics "Beyond equilibrium"
Ecke, Robert E
2009-01-01
The scientific challenges of the 21st century will increasingly involve competing interactions, geometric frustration, spatial and temporal intrinsic inhomogeneity, nanoscale structures, and interactions spanning many scales. We will focus on a broad class of emerging problems that will require new tools in non-equilibrium statistical physics and that will find application in new material functionality, in predicting complex spatial dynamics, and in understanding novel states of matter. Our work will encompass materials under extreme conditions involving elastic/plastic deformation, competing interactions, intrinsic inhomogeneity, frustration in condensed matter systems, scaling phenomena in disordered materials from glasses to granular matter, quantum chemistry applied to nano-scale materials, soft-matter materials, and spatio-temporal properties of both ordinary and complex fluids.
Elements of Statistical Mechanics
NASA Astrophysics Data System (ADS)
Sachs, Ivo; Sen, Siddhartha; Sexton, James
2006-05-01
This textbook provides a concise introduction to the key concepts and tools of modern statistical mechanics. It also covers advanced topics such as non-relativistic quantum field theory and numerical methods. After introducing classical analytical techniques, such as cluster expansion and Landau theory, the authors present important numerical methods with applications to magnetic systems, Lennard-Jones fluids and biophysics. Quantum statistical mechanics is discussed in detail and applied to Bose-Einstein condensation and topics in astrophysics and cosmology. In order to describe emergent phenomena in interacting quantum systems, canonical non-relativistic quantum field theory is introduced and then reformulated in terms of Feynman integrals. Combining the authors' many years' experience of teaching courses in this area, this textbook is ideal for advanced undergraduate and graduate students in physics, chemistry and mathematics. Analytical and numerical techniques appear in one text, including sample codes and solved problems on the web at www.cambridge.org/0521841984. The book covers a wide range of applications including magnetic systems, turbulence, astrophysics, and biology, and contains a concise introduction to Markov processes and molecular dynamics.
Statistical dependency in visual scanning
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Stark, Lawrence
1986-01-01
A method to identify statistical dependencies in the positions of eye fixations is developed and applied to eye movement data from subjects who viewed dynamic displays of air traffic and judged future relative position of aircraft. Analysis of approximately 23,000 fixations on points of interest on the display identified statistical dependencies in scanning that were independent of the physical placement of the points of interest. Identification of these dependencies is inconsistent with random-sampling-based theories used to model visual search and information seeking.
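The kind of scanning-dependence analysis summarized above can be sketched as a first-order transition test. This is an illustrative stand-in, not the authors' actual procedure: observed transition counts between points of interest are compared with the counts expected if successive fixations were independent draws from the marginal fixation distribution.

```python
import numpy as np

def transition_dependence(seq, k):
    """Compare observed first-order transition counts between k regions
    with the counts expected if successive fixations were independent."""
    seq = np.asarray(seq)
    counts = np.zeros((k, k))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    n = counts.sum()
    marginal = np.bincount(seq, minlength=k) / len(seq)
    expected = n * np.outer(marginal, marginal)
    # Pearson chi-square statistic: large values flag sequential dependence
    chi2 = ((counts - expected) ** 2 / np.maximum(expected, 1e-12)).sum()
    return chi2, counts, expected

rng = np.random.default_rng(0)

# A scan that alternates strictly between regions 0 and 1 is maximally dependent
alternating = [i % 2 for i in range(1000)]
chi2_dep, _, _ = transition_dependence(alternating, 2)

# An i.i.d. random scan over the same two regions shows little dependence
random_scan = rng.integers(0, 2, size=1000).tolist()
chi2_ind, _, _ = transition_dependence(random_scan, 2)
```

A large statistic for the alternating scan and a small one for the i.i.d. scan is the signature the abstract describes: dependency in fixation order that is not explained by where the points of interest sit on the display.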
ERIC Educational Resources Information Center
Caine, Robert; And Others
1978-01-01
Presents arguments for offering introductory statistics courses to undergraduate sociology majors taught within departments of sociology rather than using statistics courses taught by other departments. (Author)
NASA Astrophysics Data System (ADS)
Spera, F. J.; Martin, B.; Creamer, J. B.; Nevins, D.; Cutler, I.; Ghiorso, M. S.; Tikunoff, D.
2010-12-01
Empirical Potential Molecular Dynamics (EPMD) simulations have been carried out for molten MgSiO3, Mg2SiO4, CaMgSi2O6, CaAl2Si2O8 and the 1-bar eutectic liquid in the binary system CaMgSi2O6-CaAl2Si2O8 using a Coulomb-Born-Mayer-van der Waals pair potential form and the potential parameters from Matsui (1996, GRL 23:395) for the system CaO-MgO-Al2O3-SiO2. Simulations were performed in the microcanonical ensemble (NEV) with 8000 atoms, a 1 fs time step, and simulation durations up to 2 ns. Computations were carried out every 500 K over a temperature range of 2500 - 5000 K along 10-20 isochores for each composition to ensure good coverage in P-T space. During each run, the T and P fluctuations, which give the uncertainty of the state-point coordinates, were typically ± 30 K and ± 0.5 GPa, respectively. Coordination statistics are determined by counting nearest-neighbor configurations up to a cutoff defined by the first minimum of the pair correlation function. A complete set of coordination statistics was collected at each state point for each composition. At each state point the self-diffusivity of each atom was determined from the Einstein relation between mean square displacement and time. Shear viscosity was computed for a subset of state points using Green-Kubo linear response theory, by studying the autocorrelated regressions of spontaneous fluctuations of appropriate components of the stress tensor. Thermodynamic models (and EOS) for each liquid, previously developed from these simulations by combining the Rosenfeld-Tarazona (1998, Mol Phys 95:141) potential energy-temperature scaling law with the Universal EOS (1986, J Phys C, 19:L467), enable self-consistent computation of liquid sound speeds and isochoric heat capacity used to develop phonon thermal conductivity values at high T and P. Self-diffusivity, shear viscosity and phonon thermal conductivity values from the MD simulations vary systematically with composition, temperature and pressure. 
These systematic relations correlate with and can be modeled from average first nearest neighbor mean coordination numbers especially for Si and Al around oxygen, oxygen around oxygen, and Ca and Mg around oxygen. Generalized versions of the Stokes-Einstein and Eyring relationships connecting self-diffusivity of oxygen to liquid shear viscosity, T and a characteristic length scale based on coordination statistics can be constructed from MD generated transport properties to capture laboratory data reasonably well in many instances.
Statistical Reference Datasets
National Institute of Standards and Technology Data Gateway
Statistical Reference Datasets (Web, free access). The Statistical Reference Datasets project is also supported by the Standard Reference Data Program. The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of statistical software.
Statistical dynamics of early river networks
NASA Astrophysics Data System (ADS)
Wang, Xu-Ming; Wang, Peng; Zhang, Ping; Hao, Rui; Huo, Jie
2012-10-01
Based on a local erosion rule and fluctuations in rainfall, geology and the parameters of a river channel, a generalized Langevin equation is proposed to describe the random prolongation of a river channel. This equation is transformed into the Fokker-Planck equation to follow the early evolution of a river network and the variation of the probability distribution of channel lengths. The general solution of the equation is the product of two terms, one in power form and the other in exponent form. This distribution shows a complete history of a river network evolving from its infancy to "adulthood". The infancy is characterized by a Gaussian distribution of the channel lengths, while the adulthood is marked by a power-law distribution of the channel lengths. The variation of the distribution from the Gaussian to the power law displays the gradual development of the river network. The distribution of basin areas is obtained by means of Hack's law. These results provide new understanding of river networks.
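How a Langevin description of channel growth can broaden a narrow initial length distribution into a heavy-tailed one can be illustrated with a minimal sketch. The specific drift and noise terms below (dL = a dt + b L dW) are assumptions for demonstration, not the generalized equation derived in the paper:

```python
import numpy as np

# Euler-Maruyama integration of an illustrative Langevin growth rule,
# dL = a*dt + b*L*dW; this drift/noise form is a demonstration assumption,
# not the equation proposed in the paper.
rng = np.random.default_rng(42)
n_channels, dt, steps = 5000, 0.01, 1000
a, b = 1.0, 0.6                   # deterministic erosion rate, noise strength
L = np.full(n_channels, 1.0)      # all channels start at the same length
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_channels)
    L += a * dt + b * L * dW
    L = np.maximum(L, 1e-6)       # channel lengths stay positive

# Multiplicative noise skews the ensemble: the mean is pulled well above
# the median, the signature of a developing heavy (power-law-like) tail.
mean_over_median = L.mean() / np.median(L)
```

Starting from identical lengths (a degenerate "Gaussian-like" infancy), the multiplicative noise term progressively generates the long tail that marks the network's adulthood in the abstract's description.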
Statistical Ensemble of Large Eddy Simulations
NASA Technical Reports Server (NTRS)
Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)
2001-01-01
A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.
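The key design choice of the ensemble-averaged dynamic procedure, forming the model coefficient pointwise as a ratio of ensemble averages rather than as an average of single-realization ratios, can be sketched with synthetic fields. The arrays below are stand-ins for the Germano-identity terms, not output of an actual LES:

```python
import numpy as np

# Pointwise ensemble-averaged dynamic coefficient, C(x) = <L M> / <M M>,
# illustrated with synthetic stand-ins for the Germano-identity terms.
rng = np.random.default_rng(1)
n_points, n_real = 256, 16         # grid points, LES realizations in the ensemble
C_true = 0.17                      # coefficient the estimate should recover
M = rng.normal(0.0, 1.0, (n_real, n_points))
L = C_true * M + 0.3 * rng.normal(0.0, 1.0, (n_real, n_points))

# Ratio of ensemble averages: stable even where one realization's M is near
# zero, and requires no spatial averaging, so it works in inhomogeneous flows.
C_ens = (L * M).mean(axis=0) / (M * M).mean(axis=0)

# Naive alternative (average of per-realization ratios) is dominated by
# near-zero denominators and is far noisier.
C_naive = (L / M).mean(axis=0)

err_ens = np.abs(C_ens - C_true).mean()
err_naive = np.abs(C_naive - C_true).mean()
```

Averaging numerator and denominator separately over the realizations before dividing is what makes the coefficient local yet well behaved, which is why the procedure needs no spatial averaging.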
Library Statistics Cooperative Program.
ERIC Educational Resources Information Center
National Commission on Libraries and Information Science, Washington, DC.
The Library Statistics Cooperative Program collects statistics about all types of libraries--academic libraries, public libraries, school library media centers, state library agencies, federal libraries and information centers, and library cooperatives. The Library Statistics Cooperative Program depends on collaboration with all types of libraries…
Minnesota Health Statistics 1988.
ERIC Educational Resources Information Center
Minnesota State Dept. of Health, St. Paul.
This document comprises the 1988 annual statistical report of the Minnesota Center for Health Statistics. After introductory technical notes on changes in format, sources of data, and geographic allocation of vital events, an overview is provided of vital health statistics in all areas. Thereafter, separate sections of the report provide tables…
ERIC Educational Resources Information Center
Strasser, Nora
2007-01-01
Avoiding statistical mistakes is important for educators at all levels. Basic concepts will help you to avoid making mistakes using statistics and to look at data with a critical eye. Statistical data is used at educational institutions for many purposes. It can be used to support budget requests, changes in educational philosophy, changes to…
Thorslund, J; Misfeldt, J
1989-07-01
The classical methodological problem of suicidology is the reliability of official statistics. In this article, some recent contributions to the debate, particularly concerning the increased problem of suicide among Inuit, are reviewed. Secondly, the suicide statistics of Greenland are analyzed, with the conclusion that the official statistics, as published by the Danish Board of Health, are generally reliable concerning Greenland. PMID:2789569
Statistical quality management
NASA Astrophysics Data System (ADS)
Vanderlaan, Paul
1992-10-01
Some aspects of statistical quality management are discussed. Quality has to be defined as a concrete, measurable quantity. The concepts of Total Quality Management (TQM), Statistical Process Control (SPC), and inspection are explained. In most cases SPC is better than inspection. It can be concluded that statistics has great possibilities in the field of TQM.
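A central SPC tool mentioned above is the control chart. As a minimal sketch (with assumed data layout, not taken from the lecture), the 3-sigma limits of an X-bar chart can be computed from subgroup means:

```python
import numpy as np

def xbar_limits(samples):
    """3-sigma control limits for an X-bar chart (illustrative sketch).

    samples: 2-D array, one row per subgroup of measurements. The limits
    are the overall mean plus/minus three standard deviations of the
    subgroup means; points outside them signal an out-of-control process.
    """
    means = samples.mean(axis=1)     # one mean per subgroup
    center = means.mean()            # center line
    se = means.std(ddof=1)           # spread of the subgroup means
    return center - 3 * se, center, center + 3 * se
```

In practice the limits are usually estimated from within-subgroup ranges via tabulated constants; this simplified version only illustrates the 3-sigma idea.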
Artificial intelligence and statistics
Gale, W.A.
1987-01-01
This book explores the possible applications of artificial intelligence in statistics and conversely, statistics in artificial intelligence. It is a collection of seventeen papers written by leaders in the field. Most of the papers were prepared for the Workshop on Artificial Intelligence and Statistics held in April 1985 and sponsored by AT&T Bell Laboratories. The book is divided into six parts: uncertainty propagation, clustering and learning, expert systems, environments for supporting statistical strategy, knowledge acquisition, and strategy. The editor ties the collection together in the first chapter by providing an overview of AI and statistics, discussing the Workshop, and exploring future research in the field.
Nonlinear Statistical Modeling of Speech
NASA Astrophysics Data System (ADS)
Srinivasan, S.; Ma, T.; May, D.; Lazarou, G.; Picone, J.
2009-12-01
Contemporary approaches to speech and speaker recognition decompose the problem into four components: feature extraction, acoustic modeling, language modeling and search. Statistical signal processing is an integral part of each of these components, and Bayes Rule is used to merge these components into a single optimal choice. Acoustic models typically use hidden Markov models based on Gaussian mixture models for state output probabilities. This popular approach suffers from an inherent assumption of linearity in speech signal dynamics. Language models often employ a variety of maximum entropy techniques, but can employ many of the same statistical techniques used for acoustic models. In this paper, we focus on introducing nonlinear statistical models to the feature extraction and acoustic modeling problems as a first step towards speech and speaker recognition systems based on notions of chaos and strange attractors. Our goal in this work is to improve the generalization and robustness properties of a speech recognition system. Three nonlinear invariants are proposed for feature extraction: Lyapunov exponents, correlation fractal dimension, and correlation entropy. We demonstrate an 11% relative improvement on speech recorded under noise-free conditions, but show a comparable degradation occurs for mismatched training conditions on noisy speech. We conjecture that the degradation is due to difficulties in estimating invariants reliably from noisy data. To circumvent these problems, we introduce two dynamic models to the acoustic modeling problem: (1) a linear dynamic model (LDM) that uses a state space-like formulation to explicitly model the evolution of hidden states using an autoregressive process, and (2) a data-dependent mixture of autoregressive (MixAR) models. Results show that LDM and MixAR models can achieve comparable performance with HMM systems while using significantly fewer parameters. 
Currently we are developing Bayesian parameter estimation and discriminative training algorithms for these new models to improve noise robustness.
Nonstationary statistical theory for multipactor
Anza, S.; Vicente, C.; Gil, J.
2010-06-15
This work presents a new and general approach to the real dynamics of the multipactor process: the nonstationary statistical multipactor theory. The nonstationary theory removes the stationarity assumption of the classical theory and, as a consequence, it is able to adequately model electron exponential growth as well as absorption processes, above and below the multipactor breakdown level. In addition, it considers both double-surface and single-surface interactions constituting a full framework for nonresonant polyphase multipactor analysis. This work formulates the new theory and validates it with numerical and experimental results with excellent agreement.
The Statistical Mechanics of Zombies
NASA Astrophysics Data System (ADS)
Alemi, Alexander A.; Bierbaum, Matthew; Myers, Christopher R.; Sethna, James P.
2015-03-01
We present results and analysis from a large scale exact stochastic dynamical simulation of a zombie outbreak. Zombies have attracted some attention lately as a novel and interesting twist on classic disease models. While most of the initial investigations have focused on the continuous, fully mixed dynamics of a differential equation model, we have explored stochastic, discrete simulations on lattices. We explore some of the basic statistical mechanical properties of the zombie model, including its phase diagram and critical exponents. We report on several variant models, including both homogeneous and inhomogeneous lattices, as well as allowing diffusive motion of infected hosts. We build up to a full scale simulation of an outbreak in the United States, and discover that for `realistic' parameters, we are largely doomed.
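The stochastic lattice dynamics described above can be illustrated with a toy SZR (susceptible-zombie-removed) simulation. This is a hedged sketch, not the authors' large-scale simulation; the lattice size, interaction rule, and bite probability are assumptions chosen only to show the discrete, stochastic character of the model.

```python
import random

def zombie_lattice(L=20, steps=2000, p_bite=0.6, seed=0):
    """Toy stochastic SZR outbreak on an L x L lattice (illustrative only).

    States: 'S' human, 'Z' zombie, 'R' removed. Each step, a random
    zombie interacts with a random nearest neighbor: a human neighbor is
    bitten (becomes a zombie) with probability p_bite, otherwise the
    human destroys the zombie. Boundaries are periodic.
    """
    rng = random.Random(seed)
    grid = [['S'] * L for _ in range(L)]
    grid[L // 2][L // 2] = 'Z'   # patient zero in the middle
    for _ in range(steps):
        zombies = [(i, j) for i in range(L) for j in range(L)
                   if grid[i][j] == 'Z']
        if not zombies:
            break                 # outbreak extinguished
        i, j = rng.choice(zombies)
        di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        ni, nj = (i + di) % L, (j + dj) % L
        if grid[ni][nj] == 'S':
            if rng.random() < p_bite:
                grid[ni][nj] = 'Z'   # human is bitten
            else:
                grid[i][j] = 'R'     # human destroys the zombie
    return {s: sum(row.count(s) for row in grid) for s in 'SZR'}
```

Sweeping p_bite and measuring the final removed fraction over many seeds is the kind of experiment that exposes the phase transition and critical exponents discussed in the abstract.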
Statistical theory of cubic Langmuir turbulence
NASA Technical Reports Server (NTRS)
Sun, G.-Z.; Nicholson, D. R.; Rose, H. A.
1985-01-01
The cubic direct interaction approximation is applied to a truncated (in Fourier space) version of the cubically nonlinear Schroedinger equation model of Langmuir physics. The results are compared (in the three-mode case) to those for an ensemble of numerical solutions of the dynamical equations with 10,000 different sets of Gaussianly distributed initial conditions. In the undriven, undamped case, the statistical theory (but not the ensemble) evolves to a state of thermal equilibrium. In the driven, damped case, the statistical theory appears to evolve to a state close to that corresponding to one of the limit cycles of the dynamical equations.
Clustering statistics in cosmology
NASA Astrophysics Data System (ADS)
Martinez, Vicent; Saar, Enn
2002-12-01
The main tools in cosmology for comparing theoretical models with the observations of the galaxy distribution are statistical. We will review the applications of spatial statistics to the description of the large-scale structure of the universe. Special topics discussed in this talk will be: description of the galaxy samples, selection effects and biases, correlation functions, Fourier analysis, nearest neighbor statistics, Minkowski functionals and structure statistics. Special attention will be devoted to scaling laws and the use of the lacunarity measures in the description of the cosmic texture.
Statistical regimes of random laser fluctuations
Lepri, Stefano; Cavalieri, Stefano; Oppo, Gian-Luca; Wiersma, Diederik S.
2007-06-15
Statistical fluctuations of the light emitted from amplifying random media are studied theoretically and numerically. The characteristic scales of the diffusive motion of light lead to Gaussian or power-law (Levy) distributed fluctuations depending on external control parameters. In the Levy regime, the output pulse is highly irregular, leading to huge deviations from a mean-field description. Monte Carlo simulations of a simplified model which includes the population of the medium demonstrate the two statistical regimes and provide a comparison with dynamical rate equations. The different statistics of the fluctuations help to explain recent experimental observations reported in the literature.
Introductory statistical mechanics for electron storage rings
Jowett, J.M.
1986-07-01
These lectures introduce the beam dynamics of electron-positron storage rings with particular emphasis on the effects due to synchrotron radiation. They differ from most other introductions in their systematic use of the physical principles and mathematical techniques of the non-equilibrium statistical mechanics of fluctuating dynamical systems. A self-contained exposition of the necessary topics from this field is included. Throughout the development, a Hamiltonian description of the effects of the externally applied fields is maintained in order to preserve the links with other lectures on beam dynamics and to show clearly the extent to which electron dynamics is non-Hamiltonian. The statistical mechanical framework is extended to a discussion of the conceptual foundations of the treatment of collective effects through the Vlasov equation.
NASA Astrophysics Data System (ADS)
Isliker, H.
By 'statistical' models of flares we denote global stochastic models of the dynamics of the energy-release process and its associated phenomena which consider flares to consist of a large number of constituent small-scale processes. The observations strongly support such models: a) Radio- and HXR-emission of flares are highly fragmented in space and time, suggesting that the flare process itself is spatially and temporally fragmented (De Jager and De Jonge 1978, Benz 1985, Aschwanden et al. 1990). b) The temporal dynamics of flares has been shown to be 'complex' (relatively high-dimensional chaotic or stochastic) through time-series analysis of radio-emission (dimension estimates and power spectra: Isliker and Benz (1994), Isliker (1996), Ryabov et al. (1997); wavelet transform: Aschwanden et al. 1998, Schwarz et al. 1998). c) Spatially, there are only weak and local correlations between neighbouring burst sites, reminiscent of a chain reaction (analysis of nb-spike spectrograms with symbolic dynamics: Schwarz et al. 1993). The most prominent global dynamical models of the energy-release process which comprise entire flares are Cellular Automata (CA) models (Lu and Hamilton 1991, Lu et al. 1993; extended to model nano-flares: Vlahos et al. 1995, Georgoulis and Vlahos 1996; including non-local communications: MacKinnon et al. 1996; an analytic approach: MacKinnon and MacPherson 1997). In these models, the local processes (reconnection) are represented in a strongly simplified way, by simple evolution rules, so that inhomogeneous active regions can be modeled in their entirety. Alternatively, Isliker (1996) proposed a shot-noise model for flares. This model is able to explain the temporal characteristics of the flare process; however, it is formal and has not yet been tied to the underlying physics.
A different class of stochastic models has been proposed to explain the dynamics of the corona as a whole, with randomly occurring flares (Rosner and Vaiana 1978, criticized in Lu 1995b; Litvinenko 1996; a new approach (a master equation for the flare occurrence probability): Wheatland and Glukhov 1998). In this approach, structures within a flare are not resolved; the aim is to explain the occurrence rate and total sizes of flares. The CA models are successful in explaining the distributions of the peak fluxes, total fluxes, and durations of HXR-emission, which are all power laws (see references in Aschwanden et al. 1998). In the radio range, peak-flux distributions of generalized power-law and exponential shape are observed, which generally are steeper than in the HXR (type I: Mercier and Trottet (1997); type III, decim. pulsations, nb-spikes: Aschwanden et al. 1998; type III: Isliker and Vlahos 1998; nb-spikes: Isliker and Benz 1998). Since radio waves can be emitted in low-energy events, the steep distributions might be a hint that small flares (micro-flares) also have a steep distribution, and might thereby contribute substantially to coronal heating. It must be noted, however, that poor time or frequency resolution can lead to a steepening of the peak-flux distributions (Isliker and Benz 1998), an effect whose influence on the published events still has to be assessed. Originally, the evolution rules of the CAs were only loosely motivated by physical considerations and were basically taken from the 'sand-pile' paradigm; above all, the connection between CA models and MHD (the local theory of magnetic reconnection) was missing. Recently, Isliker et al. (1998) have shown that the evolution rules of the CAs correspond to localized, threshold-dependent diffusion, directly implementing the solution of a diffusion equation with unknown diffusivity and scales.
Thus, CAs can be interpreted as an implementation of the (simplified) induction equation in a large, inhomogeneous medium. A complete flare model needs to incorporate not just the energy-release process, but also the acceleration and transport of particles, as well as the generation of EM-emission. First steps in this direction have been taken: Anastasiadis et al. 1997 studied acce
Multidimensional Visual Statistical Learning
ERIC Educational Resources Information Center
Turk-Browne, Nicholas B.; Isola, Phillip J.; Scholl, Brian J.; Treat, Teresa A.
2008-01-01
Recent studies of visual statistical learning (VSL) have demonstrated that statistical regularities in sequences of visual stimuli can be automatically extracted, even without intent or awareness. Despite much work on this topic, however, several fundamental questions remain about the nature of VSL. In particular, previous experiments have not…
Reform in Statistical Education
ERIC Educational Resources Information Center
Huck, Schuyler W.
2007-01-01
Two questions are considered in this article: (a) What should professionals in school psychology do in an effort to stay current with developments in applied statistics? (b) What should they do with their existing knowledge to move from surface understanding of statistics to deep understanding? Written for school psychologists who have completed…
The purpose of the Disability Statistics Center is to produce and disseminate statistical information on disability and the status of people with disabilities in American society and to establish and monitor indicators of how conditions are changing over time to meet their health...
ERIC Educational Resources Information Center
Hodgson, Ted; Andersen, Lyle; Robison-Cox, Jim; Jones, Clain
2004-01-01
Water quality experiments, especially the use of macroinvertebrates as indicators of water quality, offer an ideal context for connecting statistics and science. In the STAR program for secondary students and teachers, water quality experiments were also used as a context for teaching statistics. In this article, we trace one activity that uses…
Adaptive training class statistics.
NASA Technical Reports Server (NTRS)
Kan, E. P. F.
1973-01-01
Formulas are derived for updating the mean vector and covariance matrix of a training class as new training fields are included and old training fields deleted from the class. These statistics of the class are expressed in terms of the already available statistics of the fields.
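The update formulas described in this abstract follow from the standard decomposition of a merged scatter matrix. A minimal sketch (the function names and the scatter-matrix representation are choices made here, not taken from the paper):

```python
import numpy as np

def add_field(n1, m1, S1, n2, m2, S2):
    """Merge a training field's statistics into a class.

    n: sample count, m: mean vector, S: scatter matrix (sum of outer
    products of deviations about the mean). Covariance is S / (n - 1).
    """
    n = n1 + n2
    d = m2 - m1
    m = m1 + (n2 / n) * d                      # updated mean
    S = S1 + S2 + (n1 * n2 / n) * np.outer(d, d)  # updated scatter
    return n, m, S

def remove_field(n, m, S, n2, m2, S2):
    """Invert add_field: delete a field's statistics from the class."""
    n1 = n - n2
    m1 = (n * m - n2 * m2) / n1                # recovered class mean
    d = m2 - m1
    S1 = S - S2 - (n1 * n2 / n) * np.outer(d, d)
    return n1, m1, S1
```

The point of the paper is exactly this: the class statistics can be maintained from the already available per-field statistics, without revisiting the raw pixels.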
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. ©RSNA, 2015. PMID:26466186
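The diagnostic-test quantities named in this review all follow mechanically from a 2x2 table. A minimal sketch (the function name and example counts are made up for illustration):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Basic diagnostic-test statistics from a 2x2 confusion table.

    tp/fn: diseased patients with positive/negative tests;
    fp/tn: healthy patients with positive/negative tests.
    """
    sens = tp / (tp + fn)          # sensitivity: true-positive rate
    spec = tn / (tn + fp)          # specificity: true-negative rate
    lr_pos = sens / (1 - spec)     # positive likelihood ratio
    lr_neg = (1 - sens) / spec     # negative likelihood ratio
    acc = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sens, "specificity": spec,
            "LR+": lr_pos, "LR-": lr_neg, "accuracy": acc}
```

For example, a test with 90 true positives, 10 false negatives, 80 true negatives, and 20 false positives has sensitivity 0.90, specificity 0.80, and a positive likelihood ratio of 4.5.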
ERIC Educational Resources Information Center
Huberty, Carl J.
An approach to statistical testing, which combines Neyman-Pearson hypothesis testing and Fisher significance testing, is recommended. The use of P-values in this approach is discussed in some detail. The author also discusses some problems which are often found in introductory statistics textbooks. The problems involve the definitions of…
Statistical Mapping by Computer.
ERIC Educational Resources Information Center
Utano, Jack J.
The function of a statistical map is to provide readers with a visual impression of the data so that they may be able to identify any geographic characteristics of the displayed phenomena. The increasingly important role played by the computer in the production of statistical maps is manifested by the varied examples of computer maps in recent…
Pestana, Dinis
2013-01-01
Statistics is a privileged tool in building knowledge from information, since the purpose is to extract from a sample limited information conclusions to the whole population. The pervasive use of statistical software (that always provides an answer, the question being adequate or not), and the absence of statistics to confer a scientific flavour to so much bad science, has had a pernicious effect on some disbelief on statistical research. Would Lord Rutherford be alive today, it is almost certain that he would not condemn the use of statistics in research, as he did in the dawn of the 20th century. But he would indeed urge everyone to use statistics quantum satis, since to use bad data, too many data, and statistics to enquire on irrelevant questions, is a source of bad science, namely because with too many data we can establish statistical significance of irrelevant results. This is an important point that addicts of evidence based medicine should be aware of, since the meta analysis of two many data will inevitably establish senseless results. PMID:24192087
ERIC Educational Resources Information Center
Council of Ontario Universities, Toronto.
Summary statistics on application and registration patterns of applicants wishing to pursue full-time study in first-year places in Ontario universities (for the fall of 1987) are given. Data on registrations were received indirectly from the universities as part of their annual submission of USIS/UAR enrollment data to Statistics Canada and MCU…
Deconstructing Statistical Analysis
ERIC Educational Resources Information Center
Snell, Joel
2014-01-01
Using a very complex statistical analysis and research method for the sake of enhancing the prestige of an article, or of making a new product or service look legitimate, needs to be monitored and questioned for accuracy. (1) The more complicated the statistical analysis and research, the fewer learned readers can understand it. This adds a…
ERIC Educational Resources Information Center
Huizingh, Eelko K. R. E.
2007-01-01
Accessibly written and easy to use, "Applied Statistics Using SPSS" is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. What is unique about Eelko Huizingh's approach is that this book is based around the needs of undergraduate students embarking on their own research project, and its self-help style is designed to…
Explorations in Statistics: Power
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2010-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fifth installment of "Explorations in Statistics" revisits power, a concept fundamental to the test of a null hypothesis. Power is the probability that we reject the null hypothesis when it is false. Four things affect…
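Power can be explored numerically. As a hedged sketch (a textbook normal approximation for a two-sided one-sample z-test, not the article's own exploration), power grows with both the effect size and the sample size:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ztest_power(d, n, z_crit=1.96):
    """Approximate power of a two-sided one-sample z-test.

    d: standardized effect size; n: sample size; z_crit: critical value
    for the chosen alpha (1.96 for alpha = 0.05). The tiny probability
    of rejecting in the wrong tail is ignored.
    """
    return normal_cdf(d * math.sqrt(n) - z_crit)
```

With a medium effect size d = 0.5, about n = 34 observations give roughly 80% power at alpha = 0.05, which is the kind of relationship the installment invites readers to explore.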
Vijayaraj, Veeraraghavan; Cheriyadat, Anil M; Bhaduri, Budhendra L; Vatsavai, Raju; Bright, Eddie A
2008-01-01
Statistical properties of high-resolution overhead images representing different land use categories are analyzed using various local and global statistical image properties based on the shape of the power spectrum, image gradient distributions, edge co-occurrence, and inter-scale wavelet coefficient distributions. The analysis was performed on a database of high-resolution (1 meter) overhead images representing a multitude of different downtown, suburban, commercial, agricultural and wooded exemplars. Various statistical properties relating to these image categories and their relationship are discussed. The categorical variations in power spectrum contour shapes, the unique gradient distribution characteristics of wooded categories, the similarity in edge co-occurrence statistics for overhead and natural images, and the unique edge co-occurrence statistics of downtown categories are presented in this work. Though previous work on natural image statistics has shown some of the unique characteristics for different categories, the relationships for overhead images are not well understood. The statistical properties of natural images were used in previous studies to develop prior image models, to predict and index objects in a scene and to improve computer vision models. The results from our research findings can be used to augment and adapt computer vision algorithms that rely on prior image statistics to process overhead images, calibrate the performance of overhead image analysis algorithms, and derive features for better discrimination of overhead image categories.
Explorations in Statistics: Regression
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2011-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive connection.…
Explorations in Statistics: Correlation
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2010-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This sixth installment of "Explorations in Statistics" explores correlation, a familiar technique that estimates the magnitude of a straight-line relationship between two variables. Correlation is meaningful only when the…
NASA Technical Reports Server (NTRS)
Laird, Philip
1992-01-01
We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.
Environmental statistics and optimal regulation
NASA Astrophysics Data System (ADS)
Sivak, David; Thomson, Matt
2015-03-01
The precision with which an organism can detect its environment, and the timescale for and statistics of environmental change, will affect the suitability of different strategies for regulating protein levels in response to environmental inputs. We propose a general framework--here applied to the enzymatic regulation of metabolism in response to changing nutrient concentrations--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, and the costs associated with enzyme production. We find: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
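The "sophisticated Bayesian decision rule" for intermediate measurement uncertainty can be illustrated with a two-state toy version of the problem. This sketch is not the authors' model; the prior, state means, and noise level are invented numbers chosen only to show how a noisy measurement is converted into a posterior belief that can then drive a thresholded or graded response.

```python
import math

def posterior_high(m, prior=0.5, mu_low=0.0, mu_high=1.0, sigma=0.5):
    """Posterior probability that the nutrient level is 'high' given a
    noisy measurement m, for a two-state environment with Gaussian
    measurement noise (hypothetical parameters)."""
    def lik(mu):
        # Gaussian likelihood up to a common normalization constant
        return math.exp(-((m - mu) ** 2) / (2 * sigma ** 2))
    num = prior * lik(mu_high)
    den = num + (1 - prior) * lik(mu_low)
    return num / den
```

A thresholding strategy would express enzyme only when this posterior exceeds some cutoff, while a graded strategy would scale expression continuously with it; which is optimal depends on the convexity of the cost and benefit functions, as the abstract notes.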
STATWIZ - AN ELECTRONIC STATISTICAL TOOL (ABSTRACT)
StatWiz is a web-based, interactive, and dynamic statistical tool for researchers. It will allow researchers to input information and/or data and then receive experimental design options, or outputs from data analysis. StatWiz is envisioned as an expert system that will walk rese...
LED champing: statistically blessed?
Wang, Zhuo
2015-06-10
LED champing (smart mixing of individual LEDs to match the desired color and lumens) and color mixing strategies have been widely used to maintain the color consistency of light engines. Light engines with champed LEDs can easily achieve color consistency within a couple of MacAdam steps, even starting from widely distributed LEDs. From a statistical point of view, the distributions of the color coordinates and the flux after champing are studied. The related statistical parameters are derived, which facilitate process improvements such as Six Sigma and are instrumental to statistical quality control in mass production. PMID:26192863
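The statistical benefit of mixing comes from how means and variances combine. As a minimal sketch (standard propagation for independent sources, not the paper's derivation), the flux of a cluster of independently binned LEDs has the sum of the means and the root-sum-square of the standard deviations:

```python
import math

def cluster_flux_stats(means, sigmas):
    """Mean and standard deviation of the total flux of a cluster of
    independently binned LEDs: means add, variances add."""
    total_mean = sum(means)
    total_sigma = math.sqrt(sum(s * s for s in sigmas))
    return total_mean, total_sigma
```

Because variances add while means add, the relative spread of the cluster flux shrinks as more LEDs are mixed, which is why champing can tighten color and flux distributions enough for Six Sigma style process control.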
Winters, Ryan; Winters, Andrew; Amedee, Ronald G.
2010-01-01
The Accreditation Council for Graduate Medical Education sets forth a number of required educational topics that must be addressed in residency and fellowship programs. We sought to provide a primer on some of the important basic statistical concepts to consider when examining the medical literature. It is not essential to understand the exact workings and methodology of every statistical test encountered, but it is necessary to understand selected concepts such as parametric and nonparametric tests, correlation, and numerical versus categorical data. This working knowledge will allow you to spot obvious irregularities in statistical analyses that you encounter. PMID:21603381
Corn production with Spray, LEPA, and SDI
Technology Transfer Automated Retrieval System (TEKTRAN)
Corn, a major irrigated crop in the U.S. Great Plains, has a large irrigation requirement making efficient, effective irrigation technology important. The objective of this paper was to compare corn productivity for different irrigation methods and irrigation rates in 2009 and 2010 at Bushland, Texa...
Understanding Solar Flare Statistics
NASA Astrophysics Data System (ADS)
Wheatland, M. S.
2005-12-01
A review is presented of work aimed at understanding solar flare statistics, with emphasis on the well known flare power-law size distribution. Although avalanche models are perhaps the favoured model to describe flare statistics, their physical basis is unclear, and they are divorced from developing ideas in large-scale reconnection theory. An alternative model, aimed at reconciling large-scale reconnection models with solar flare statistics, is revisited. The solar flare waiting-time distribution has also attracted recent attention. Observed waiting-time distributions are described, together with what they might tell us about the flare phenomenon. Finally, a practical application of flare statistics to flare prediction is described in detail, including the results of a year of automated (web-based) predictions from the method.
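As a toy illustration of the power-law size distribution the review centers on, the sketch below draws flare-like sizes from a Pareto law by inverse-transform sampling and recovers the index with the standard maximum-likelihood (Hill) estimator. The index value of 2.0 and the size cutoff are illustrative choices, not values taken from the review:

```python
import math
import random

def sample_power_law(alpha, s_min, n, seed=0):
    """Draw n sizes from p(s) proportional to s**(-alpha), s >= s_min (inverse transform)."""
    rng = random.Random(seed)
    return [s_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def ml_index(sizes, s_min):
    """Maximum-likelihood (Hill) estimate of the power-law index."""
    n = len(sizes)
    return 1.0 + n / sum(math.log(s / s_min) for s in sizes)

sizes = sample_power_law(alpha=2.0, s_min=1.0, n=20000)
print(round(ml_index(sizes, 1.0), 2))  # close to the true index 2.0
```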
Teaching Statistics with Minitab.
ERIC Educational Resources Information Center
Hubbard, Ruth
1992-01-01
Discusses the use of the computer software MINITAB in teaching statistics to explore concepts, simulate games of chance, transform the normal variable into a z-score, and stimulate small and large group discussions. (MDH)
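One of the listed classroom exercises, transforming a normal variable into a z-score, is simple enough to sketch outside MINITAB; the data values below are made up purely for illustration:

```python
import statistics

def z_scores(data):
    """Standardize observations: subtract the mean, divide by the standard deviation."""
    mu = statistics.mean(data)
    sigma = statistics.pstdev(data)
    return [(x - mu) / sigma for x in data]

scores = [52.0, 60.0, 68.0, 76.0, 84.0]
z = z_scores(scores)
print([round(v, 2) for v in z])  # mean 0, unit variance: [-1.41, -0.71, 0.0, 0.71, 1.41]
```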
Titanic: A Statistical Exploration.
ERIC Educational Resources Information Center
Takis, Sandra L.
1999-01-01
Uses the available data about the Titanic's passengers to interest students in exploring categorical data and the chi-square distribution. Describes activities incorporated into a statistics class and gives additional resources for collecting information about the Titanic. (ASK)
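The kind of categorical analysis described, a chi-square test on passenger data, can be sketched by hand. The survived/died counts by sex below are the rounded figures found in commonly used Titanic datasets and should be checked against the source data before classroom use:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table (no continuity correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # Expected counts from the row and column totals
    expected = [[(a + b) * (a + c) / n, (a + b) * (b + d) / n],
                [(c + d) * (a + c) / n, (c + d) * (b + d) / n]]
    observed = [[a, b], [c, d]]
    return sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(2) for j in range(2))

# Illustrative survived/died counts by sex (rounded figures, for demonstration only)
table = [[344, 126],   # female: survived, died
         [367, 1364]]  # male:   survived, died
stat = chi_square_2x2(table)
print(round(stat, 1))  # far above the 3.84 critical value at p = 0.05, df = 1
```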
Playing at Statistical Mechanics
ERIC Educational Resources Information Center
Clark, Paul M.; And Others
1974-01-01
Discussed are the applications of counting techniques of a sorting game to distributions and concepts in statistical mechanics. Included are the following distributions: Fermi-Dirac, Bose-Einstein, and most probable. (RH)
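The distributions named in the abstract reduce to simple mean-occupation formulas, sketched here with energies and temperature in arbitrary units (the specific values are illustrative only):

```python
import math

def occupancy(e, mu, kT, kind):
    """Mean occupation number of a level at energy e for each statistics family."""
    x = (e - mu) / kT
    if kind == "fermi-dirac":
        return 1.0 / (math.exp(x) + 1.0)
    if kind == "bose-einstein":
        return 1.0 / (math.exp(x) - 1.0)   # requires e > mu
    if kind == "maxwell-boltzmann":
        return math.exp(-x)
    raise ValueError(kind)

# In the dilute/high-energy limit (e - mu >> kT) all three statistics agree
for kind in ("fermi-dirac", "bose-einstein", "maxwell-boltzmann"):
    print(kind, occupancy(e=10.0, mu=0.0, kT=1.0, kind=kind))
```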
Cooperative Learning in Statistics.
ERIC Educational Resources Information Center
Keeler, Carolyn M.; And Others
1994-01-01
Formal use of cooperative learning techniques proved effective in improving student performance and retention in a freshman level statistics course. Lectures interspersed with group activities proved effective in increasing conceptual understanding and overall class performance. (11 references) (Author)
Presenting the statistical results.
Ng, K H; Peh, W C G
2009-01-01
Statistical methods are reported in a scientific paper to summarise the data that has been collected for a study and to enable its analysis. These methods should be described with enough detail to allow a knowledgeable reader who has access to the original data to verify the reported results. This article provides basic guidelines to aid authors in reporting the statistical aspects of the results of their studies clearly and accurately. PMID:19224078
Accelerated molecular dynamics methods
Perez, Danny
2011-01-04
The molecular dynamics method, although extremely powerful for materials simulations, is limited to time scales of roughly one microsecond or less. On longer time scales, dynamical evolution typically consists of infrequent events, which are usually activated processes. This course is focused on understanding infrequent-event dynamics, on methods for characterizing infrequent-event mechanisms and rate constants, and on methods for simulating long time scales in infrequent-event systems, emphasizing the recently developed accelerated molecular dynamics methods (hyperdynamics, parallel replica dynamics, and temperature accelerated dynamics). Some familiarity with basic statistical mechanics and molecular dynamics methods will be assumed.
Statistical Mechanics of Turbulent Dynamos
NASA Technical Reports Server (NTRS)
Shebalin, John V.
2014-01-01
Incompressible magnetohydrodynamic (MHD) turbulence and magnetic dynamos, which occur in magnetofluids with large fluid and magnetic Reynolds numbers, will be discussed. When Reynolds numbers are large and energy decays slowly, the distribution of energy with respect to length scale becomes quasi-stationary and MHD turbulence can be described statistically. In the limit of infinite Reynolds numbers, viscosity and resistivity become zero and if these values are used in the MHD equations ab initio, a model system called ideal MHD turbulence results. This model system is typically confined in simple geometries with some form of homogeneous boundary conditions, allowing for velocity and magnetic field to be represented by orthogonal function expansions. One advantage to this is that the coefficients of the expansions form a set of nonlinearly interacting variables whose behavior can be described by equilibrium statistical mechanics, i.e., by a canonical ensemble theory based on the global invariants (energy, cross helicity and magnetic helicity) of ideal MHD turbulence. Another advantage is that truncated expansions provide a finite dynamical system whose time evolution can be numerically simulated to test the predictions of the associated statistical mechanics. If ensemble predictions are the same as time averages, then the system is said to be ergodic; if not, the system is nonergodic. Although it had been implicitly assumed in the early days of ideal MHD statistical theory development that these finite dynamical systems were ergodic, numerical simulations provided sufficient evidence that they were, in fact, nonergodic. Specifically, while canonical ensemble theory predicted that expansion coefficients would be (i) zero-mean random variables with (ii) energy that decreased with length scale, it was found that although (ii) was correct, (i) was not and the expected ergodicity was broken. 
The exact cause of this broken ergodicity was explained, after much investigation, by greatly extending the statistical theory of ideal MHD turbulence. The mathematical details of broken ergodicity, in fact, give a quantitative explanation of how coherent structure, dynamic alignment and force-free states appear in turbulent magnetofluids. The relevance of these ideal results to real MHD turbulence occurs because broken ergodicity is most manifest in the ideal case at the largest length scales and it is in these largest scales that a real magnetofluid has the least dissipation, i.e., most closely approaches the behavior of an ideal magnetofluid. Furthermore, the effects grow stronger when cross and magnetic helicities grow large with respect to energy, and this is exactly what occurs with time in a real magnetofluid, where it is called selective decay. The relevance of these results found in ideal MHD turbulence theory to the real world is that they provide at least a qualitative explanation of why confined turbulent magnetofluids, such as the liquid iron that fills the Earth's outer core, produce stationary, large-scale magnetic fields, i.e., the geomagnetic field. These results should also apply to other planets as well as to plasma confinement devices on Earth and in space, and the effects should be manifest if Reynolds numbers are high enough and there is enough time for stationarity to occur, at least approximately. In the presentation, details will be given for both theoretical and numerical results, and references will be provided.
Statistical Physics of Fracture
Alava, Mikko; Nukala, Phani K; Zapperi, Stefano
2006-05-01
Disorder and long-range interactions are two of the key components that make material failure an interesting playfield for the application of statistical mechanics. The cornerstone in this respect has been lattice models of the fracture in which a network of elastic beams, bonds, or electrical fuses with random failure thresholds are subject to an increasing external load. These models describe on a qualitative level the failure processes of real, brittle, or quasi-brittle materials. This has been particularly important in solving the classical engineering problems of material strength: the size dependence of maximum stress and its sample-to-sample statistical fluctuations. At the same time, lattice models pose many new fundamental questions in statistical physics, such as the relation between fracture and phase transitions. Experimental results point to the existence of an intriguing crackling noise in the acoustic emission and of self-affine fractals in the crack surface morphology. Recent advances in computer power have enabled considerable progress in the understanding of such models. Among these partly still controversial issues are the scaling and size-effects in material strength and accumulated damage, the statistics of avalanches or bursts of microfailures, and the morphology of the crack surface. Here we present an overview of the results obtained with lattice models for fracture, highlighting the relations with statistical physics theories and more conventional fracture mechanics approaches.
Statistical benchmark for BosonSampling
NASA Astrophysics Data System (ADS)
Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas
2016-03-01
Boson samplers (set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes) promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects.
Statistical electron densities
Pipek, J.; Varga, I.
1996-12-31
It is known that in numerous interesting systems one-electron states appear with multifractal internal structure. Physical intuition suggests, however, that electron densities should be smooth both at atomic distances and close to the macroscopic limit. Multifractal behavior is expected at intermediate length scales, with observable non-trivial statistical properties in clusters of considerable, yet far from macroscopic, size. We have demonstrated that differences of generalized Renyi entropies serve as relevant quantities for the global characterization of the statistical nature of such electron densities. Asymptotic expansion formulas are elaborated for these values as functions of the length scale of observation. The transition from deterministic electron densities to statistical ones across various lengths of resolution is traced both theoretically and by numerical calculations.
Suite versus composite statistics
Balsillie, J.H.; Tanner, W.F.
1999-01-01
Suite and composite methodologies, two statistically valid approaches for producing statistical descriptive measures, are investigated for sample groups representing a probability distribution where, in addition, each sample is itself a probability distribution. Suite and composite means (first moment measures) are always equivalent. Composite standard deviations (second moment measures) are always larger than suite standard deviations. Suite and composite values for higher moment measures have more complex relationships. Very seldom, however, are they equivalent, and they normally yield statistically significant but different results. Multiple samples are preferable to single samples (including composites) because they permit the investigator to examine sample-to-sample variability. These and other relationships for suite and composite probability distribution analyses are investigated and reported using granulometric data.
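The two stated moment relations (suite and composite means coincide; composite standard deviations exceed suite standard deviations) can be checked numerically. The equal-sized Gaussian sample groups below are synthetic stand-ins for granulometric data, not data from the study:

```python
import random
import statistics

rng = random.Random(1)
# Several equal-sized samples, each itself a distribution of grain-size-like values
samples = [[rng.gauss(mu, 1.0) for _ in range(500)] for mu in (0.0, 1.0, 2.0)]

# Suite statistics: compute the measure per sample, then average across samples
suite_mean = statistics.mean(statistics.mean(s) for s in samples)
suite_sd = statistics.mean(statistics.pstdev(s) for s in samples)

# Composite statistics: pool all observations, then compute the measure once
pooled = [x for s in samples for x in s]
composite_mean = statistics.mean(pooled)
composite_sd = statistics.pstdev(pooled)

print(abs(suite_mean - composite_mean) < 1e-9)  # True: first moments coincide
print(composite_sd > suite_sd)                  # True: pooling adds between-sample spread
```

The second moment differs because the composite variance picks up the spread between sample means on top of the average within-sample variance.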
Candidate Assembly Statistical Evaluation
Energy Science and Technology Software Center (ESTSC)
1998-07-15
The Savannah River Site (SRS) receives aluminum clad spent Material Test Reactor (MTR) fuel from all over the world for storage and eventual reprocessing. There are hundreds of different kinds of MTR fuels and these fuels will continue to be received at SRS for approximately ten more years. SRS's current criticality evaluation methodology requires the modeling of all MTR fuels utilizing Monte Carlo codes, which is extremely time consuming and resource intensive. Now that a significant number of MTR calculations have been conducted it is feasible to consider building statistical models that will provide reasonable estimations of MTR behavior. These statistical models can be incorporated into a standardized model homogenization spreadsheet package to provide analysts with a means of performing routine MTR fuel analyses with a minimal commitment of time and resources. This became the purpose for development of the Candidate Assembly Statistical Evaluation (CASE) program at SRS.
Statistical Learning without Attention.
Yang, Feitong; Flombaum, Jonathan
2015-09-01
We sought to investigate the role of attention in statistical learning, an area where current results conflict. Given a stream of shapes including two different colors, and instructed to attend one of the colors (via a cover task), will observers learn statistical regularities associated with the unattended color? Following previous studies, we employed a reaction time (RT) test following encoding. Speeded responses are made to a target shape (on each trial) embedded within an RSVP stream, with learning demonstrated as an RT benefit for second and third triplet items. However, typical procedures repeat the same triplets as fillers and test items, making learning during testing possible. We therefore conducted an experiment with only a test phase (i.e. no incidental exposure), and we found significant RT benefits consistent with statistical learning, even in the first 48 of 96 test trials. These results demonstrate that statistical learning can take place rapidly during the course of procedures that are at times employed to diagnose prior learning. We thus returned to the question of attention with a modified test procedure. In addition to a set of eight triplets shown during incidental exposure, we generated a place-holder set of 12 additional shapes. Each test trial then included a target shape from one of the learning triplets. It appeared embedded appropriately within its triplet, but with that triplet embedded within a larger set including nine of the 12 place-holder items. After confirming a lack of statistical learning during the test phase (i.e. without pre-exposure), we used it as the test component for the attended and unattended color experiment described initially. We found significant learning effects for attended and unattended shapes. In addition to furnishing an updated RT test, these results demonstrate the robustness of statistical learning, which arose rapidly and for unattended stimuli. Meeting abstract presented at VSS 2015. PMID:26326580
Perception in statistical graphics
NASA Astrophysics Data System (ADS)
VanderPlas, Susan Ruth
There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.
Random paths and current fluctuations in nonequilibrium statistical mechanics
Gaspard, Pierre
2014-07-15
An overview is given of recent advances in nonequilibrium statistical mechanics about the statistics of random paths and current fluctuations. Although statistics is carried out in space for equilibrium statistical mechanics, statistics is considered in time or spacetime for nonequilibrium systems. In this approach, relationships have been established between nonequilibrium properties such as the transport coefficients, the thermodynamic entropy production, or the affinities, and quantities characterizing the microscopic Hamiltonian dynamics and the chaos or fluctuations it may generate. This overview presents results for classical systems in the escape-rate formalism, stochastic processes, and open quantum systems.
Environmental statistics and optimal regulation.
Sivak, David A; Thomson, Matt
2014-09-01
Any organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies, such as constitutive expression or graded response, for regulating protein levels in response to environmental inputs. We propose a general framework, here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient, to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, respectively, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones. PMID:25254493
Statistical complexity measure of pseudorandom bit generators
NASA Astrophysics Data System (ADS)
González, C. M.; Larrondo, H. A.; Rosso, O. A.
2005-08-01
Pseudorandom number generators (PRNGs) are extensively used in Monte Carlo simulations, gambling machines and cryptography as substitutes for ideal random number generators (RNGs). Each application imposes different statistical requirements on PRNGs. As L'Ecuyer clearly states, “the main goal for Monte Carlo methods is to reproduce the statistical properties on which these methods are based whereas for gambling machines and cryptology, observing the sequence of output values for some time should provide no practical advantage for predicting the forthcoming numbers better than by just guessing at random”. In accordance with these different applications, several statistical test suites have been developed to analyze the sequences generated by PRNGs. In a recent paper a new statistical complexity measure [Phys. Lett. A 311 (2003) 126] was defined. Here we propose this measure as a randomness quantifier for PRNGs. The test is applied to three very well known and widely tested PRNGs available in the literature, all of them based on mathematical algorithms. A further PRNG, based on the Lorenz 3D chaotic dynamical system, is also analyzed. PRNGs based on chaos may be considered as a model for physical noise sources, and important new results have recently been reported. All the design steps of this PRNG are described, and each stage increases the PRNG randomness using different strategies. It is shown that the MPR statistical complexity measure is capable of quantifying this randomness improvement. The PRNG based on the chaotic 3D Lorenz dynamical system is also evaluated using traditional digital signal processing tools for comparison.
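The MPR statistical complexity measure is built on the Bandt-Pompe distribution of ordinal patterns. The sketch below computes only the normalized permutation entropy over those patterns, a related but simpler randomness quantifier, not the full MPR complexity; the embedding dimension and test series are illustrative choices:

```python
import math
import random
from collections import Counter

def permutation_entropy(series, d=3):
    """Normalized Shannon entropy of ordinal (Bandt-Pompe) patterns of length d."""
    counts = Counter()
    for i in range(len(series) - d + 1):
        window = series[i:i + d]
        # The ordinal pattern is the argsort of the window
        counts[tuple(sorted(range(d), key=window.__getitem__))] += 1
    total = sum(counts.values())
    h = sum((c / total) * math.log(total / c) for c in counts.values())
    return h / math.log(math.factorial(d))  # 1.0 for a fully random ordering

rng = random.Random(42)
noisy = [rng.random() for _ in range(5000)]
ramp = list(range(5000))  # monotone series: a single ordinal pattern
print(permutation_entropy(ramp))             # 0.0
print(round(permutation_entropy(noisy), 2))  # near 1.0
```

A good PRNG should score near the maximum, while structured (e.g. periodic or monotone) output scores well below it; the MPR measure sharpens this by combining the entropy with a disequilibrium term.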
Lifetime statistics in chaotic dielectric microresonators
Schomerus, Henning; Wiersig, Jan; Main, Joerg
2009-05-15
We discuss the statistical properties of lifetimes of electromagnetic quasibound states in dielectric microresonators with fully chaotic ray dynamics. Using the example of a resonator of stadium geometry, we find that a recently proposed random-matrix model very well describes the lifetime statistics of long-lived resonances, provided that two effective parameters are appropriately renormalized. This renormalization is linked to the formation of short-lived resonances, a mechanism also known from the fractal Weyl law and the resonance-trapping phenomenon.
ERIC Educational Resources Information Center
Alaska Univ., Fairbanks.
The first annual Statistical Abstract for the University of Alaska System provides factual information for use by the Board of Regents, college administrators, and public officials in the development of university plans and programs. Topics cover: enrollments, programs and awards, faculty and staff, facilities and space, fiscal analysis,…
Pharmacokinetics: statistical moment calculations.
Gouyette, A
1983-01-01
A program for the HP-41C calculator allows the determination of the first three statistical moments, area under the curve (moment zero), mean residence time (first moment) and variance of residence time (second moment) of drug concentration-time curves which are used for noncompartmental pharmacokinetic analyses. Applications of this theory are given. PMID:6681972
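The three moments the HP-41C program computes can be sketched with a plain trapezoidal rule. The sampling times and concentrations below are hypothetical, and the extrapolation beyond the last sample that a full noncompartmental analysis would include is deliberately omitted:

```python
def statistical_moments(times, conc):
    """Noncompartmental moments of a concentration-time curve (trapezoidal rule).

    Returns AUC (zeroth moment), MRT (mean residence time, first moment) and
    VRT (variance of residence time, second moment), ignoring extrapolation
    beyond the last sampling point.
    """
    def trapz(ys):
        return sum((ys[i] + ys[i + 1]) / 2.0 * (times[i + 1] - times[i])
                   for i in range(len(times) - 1))
    auc = trapz(conc)                                    # area under C(t)
    aumc = trapz([t * c for t, c in zip(times, conc)])   # area under t*C(t)
    aumc2 = trapz([t * t * c for t, c in zip(times, conc)])
    mrt = aumc / auc
    vrt = aumc2 / auc - mrt ** 2
    return auc, mrt, vrt

# Hypothetical sampling schedule (h) and plasma concentrations (mg/L)
times = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0]
conc = [0.0, 4.2, 3.6, 2.5, 1.2, 0.3, 0.1]
auc, mrt, vrt = statistical_moments(times, conc)
print(round(auc, 2), round(mrt, 2), round(vrt, 2))
```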
Statistics for Learning Genetics
ERIC Educational Resources Information Center
Charles, Abigail Sheena
2012-01-01
This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in,…
Lack of Statistical Significance
ERIC Educational Resources Information Center
Kehle, Thomas J.; Bray, Melissa A.; Chafouleas, Sandra M.; Kawano, Takuji
2007-01-01
Criticism has been leveled against the use of statistical significance testing (SST) in many disciplines. However, the field of school psychology has been largely devoid of critiques of SST. Inspection of the primary journals in school psychology indicated numerous examples of SST with nonrandom samples and/or samples of convenience. In this…
Graduate Statistics: Student Attitudes
ERIC Educational Resources Information Center
Kennedy, Robert L.; Broadston, Pamela M.
2004-01-01
This study investigated the attitudes toward statistics of graduate students who used a computer program as part of the instruction, which allowed for an individualized, self-paced, student-centered, activity-based course. The twelve sections involved in this study were offered in the spring and fall 2001, spring and fall 2002, spring and fall…
Data & Statistics: In the United States, hemophilia affects 1 in 5,000 ...
Statistical Energy Analysis Program
NASA Technical Reports Server (NTRS)
Ferebee, R. C.; Trudell, R. W.; Yano, L. I.; Nygaard, S. I.
1985-01-01
Statistical Energy Analysis (SEA) is a powerful tool for estimating high-frequency vibration spectra of complex structural systems and has been incorporated into a computer program. The basic SEA analysis procedure is divided into three steps: idealization, parameter generation, and problem solution. The SEA computer program is written in FORTRAN V for batch execution.
Foundations of Statistical Seismology
NASA Astrophysics Data System (ADS)
Vere-Jones, David
2010-06-01
A brief account is given of the principles of stochastic modelling in seismology, with special regard to the role and development of stochastic models for seismicity. Stochastic models are seen as arising in a hierarchy of roles in seismology, as in other scientific disciplines. At their simplest, they provide a convenient descriptive tool for summarizing data patterns; in engineering and other applications, they provide a practical way of bridging the gap between the detailed modelling of a complex system, and the need to fit models to limited data; at the most fundamental level they arise as a basic component in the modelling of earthquake phenomena, analogous to that of stochastic models in statistical mechanics or turbulence theory. As an emerging subdiscipline, statistical seismology includes elements of all of these. The scope for the development of stochastic models depends crucially on the quantity and quality of the available data. The availability of extensive, high-quality catalogues and other relevant data lies behind the recent explosion of interest in statistical seismology. At just such a stage, it seems important to review the underlying principles on which statistical modelling is based, and that is the main purpose of the present paper.
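At the "convenient descriptive tool" end of the hierarchy described, seismicity is often summarized by the simplest stochastic model of all, a homogeneous Poisson process. The minimal catalog simulator below uses an arbitrary rate and duration, chosen only for illustration:

```python
import random

def poisson_catalog(rate, t_max, seed=3):
    """Homogeneous Poisson process: the simplest stochastic seismicity model.

    Event times are generated by accumulating exponential inter-event waits.
    """
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t > t_max:
            return times
        times.append(t)

events = poisson_catalog(rate=2.0, t_max=1000.0)
waits = [b - a for a, b in zip(events, events[1:])]
mean_wait = sum(waits) / len(waits)
print(round(mean_wait, 1))  # near 1/rate = 0.5
```

Departures of real catalogs from this memoryless baseline (clustering, aftershock sequences) are exactly what the more fundamental stochastic models in statistical seismology are built to capture.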
ERIC Educational Resources Information Center
Akram, Muhammad; Siddiqui, Asim Jamal; Yasmeen, Farah
2004-01-01
In order to learn the concept of statistical techniques one needs to run real experiments that generate reliable data. In practice, the data from some well-defined process or system is very costly and time consuming. It is difficult to run real experiments during the teaching period in the university. To overcome these difficulties, statisticians…
Revising Educational Statistics.
ERIC Educational Resources Information Center
Banner, James M., Jr.
When gathering and presenting educational statistics, five principles should be considered. (1) The data must be accurate, valid, and complete. Limitations, weaknesses, margins of error, and levels of confidence should be clearly stated. (2) The data must include comparable information, sought in comparable ways, in comparable forms, from…
Quartiles in Elementary Statistics
ERIC Educational Resources Information Center
Langford, Eric
2006-01-01
The calculation of the upper and lower quartile values of a data set in an elementary statistics course is done in at least a dozen different ways, depending on the text or computer/calculator package being used (such as SAS, JMP, MINITAB, "Excel," and the TI-83 Plus). In this paper, we examine the various methods and offer a suggestion for a new…
NACME Statistical Report 1986.
ERIC Educational Resources Information Center
Miranda, Luis A.; Ruiz, Esther
This statistical report summarizes data on enrollment and graduation of minority students in engineering degree programs from 1974 to 1985. First, an introduction identifies major trends and briefly describes the Incentive Grants Program (IGP), the nation's largest privately supported source of scholarship funds available to minority engineering…
Statistical Significance Testing.
ERIC Educational Resources Information Center
McLean, James E., Ed.; Kaufman, Alan S., Ed.
1998-01-01
The controversy about the use or misuse of statistical significance testing has become the major methodological issue in educational research. This special issue contains three articles that explore the controversy, three commentaries on these articles, an overall response, and three rejoinders by the first three authors. They are: (1)…
Women in Medicine: Statistics.
ERIC Educational Resources Information Center
Bickel, Janet; Quinnie, Renne
This publication consists solely of statistical data about women in medicine. Eight tables and three figures are presented. The tables are organized as follows: (1) Women Applicants, Enrollees and Graduates--Selected Years 1949-50 through 1991-92; (2) Comparative Acceptance Data for Men and Women Applicants 1973-74 through 1990-91; (3) Acceptance…
Women in Medicine. Statistics.
ERIC Educational Resources Information Center
Bickel, Janet; Quinnie, Renee
This publication consists solely of statistical data with respect to women in medicine. Seven tables and three figures are presented. The tables are organized as follows: (1) Women Applicants, Enrollees and Graduates, Selected Years 1949-50 through 1990-91; (2) Comparative Acceptance Data for Men and Women Applicants, 1973-74 through 1990-91; (3)…
Library Research and Statistics.
ERIC Educational Resources Information Center
Lynch, Mary Jo; St. Lifer, Evan; Halstead, Kent; Fox, Bette-Lee; Miller, Marilyn L.; Shontz, Marilyn L.
2001-01-01
These nine articles discuss research and statistics on libraries and librarianship, including libraries in the United States, Canada, and Mexico; acquisition expenditures in public, academic, special, and government libraries; price indexes; state rankings of public library data; library buildings; expenditures in school library media centers; and…
Statistical Reasoning over Lunch
ERIC Educational Resources Information Center
Selmer, Sarah J.; Bolyard, Johnna J.; Rye, James A.
2011-01-01
Students in the 21st century are exposed daily to a staggering amount of numerically infused media. In this era of abundant numeric data, students must be able to engage in sound statistical reasoning when making life decisions after exposure to varied information. The context of nutrition can be used to engage upper elementary and middle school…
ERIC Educational Resources Information Center
Office of the Assistant Secretary of Defense -- Comptroller (DOD), Washington, DC.
This document contains summaries of basic manpower statistical data for the Department of Defense, with the Army, Navy, Marine Corps, and Air Force totals shown separately and collectively. Included are figures for active duty military personnel, civilian personnel, reserve components, and retired military personnel. Some of the data show…
Statistics for Learning Genetics
ERIC Educational Resources Information Center
Charles, Abigail Sheena
2012-01-01
This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in,…
Banerjee, Rabin; Majhi, Bibhas Ranjan
2010-06-15
Starting from the definition of entropy used in statistical mechanics we show that it is proportional to the gravity action. For a stationary black hole this entropy is expressed as S=E/2T, where T is the Hawking temperature and E is shown to be the Komar energy. This relation is also compatible with the generalized Smarr formula for mass.
ASURV: Astronomical SURVival Statistics
NASA Astrophysics Data System (ADS)
Feigelson, E. D.; Nelson, P. I.; Isobe, T.; LaValley, M.
2014-06-01
ASURV (Astronomical SURVival Statistics) provides astronomy survival analysis for right- and left-censored data including the maximum-likelihood Kaplan-Meier estimator and several univariate two-sample tests, bivariate correlation measures, and linear regressions. ASURV is written in FORTRAN 77, and is stand-alone and does not call any specialized libraries.
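ASURV's univariate workhorse, the Kaplan-Meier product-limit estimator, is compact enough to sketch outside FORTRAN. The following Python sketch (not ASURV code; the function name and toy data are illustrative) shows the estimator for right-censored data: the survival estimate drops only at uncensored event times.

```python
import numpy as np

def kaplan_meier(times, observed):
    """Product-limit (Kaplan-Meier) survival estimate for right-censored
    data: `observed[i]` is True for a detection, False for a censored
    point (an upper limit). Returns (t, S(t)) at each detected event."""
    times = np.asarray(times, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    order = np.argsort(times)
    times, observed = times[order], observed[order]
    n = len(times)
    surv = 1.0
    steps = []
    for i, (t, d) in enumerate(zip(times, observed)):
        at_risk = n - i              # points with time >= t
        if d:                        # S drops only at uncensored events
            surv *= (at_risk - 1) / at_risk
            steps.append((t, surv))
    return steps

# toy sample: three detections and two right-censored points
print(kaplan_meier([1, 2, 2.5, 3, 4], [True, True, False, True, False]))
```

Censored points still count in the at-risk totals, which is what distinguishes this from simply discarding the upper limits.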
Neuroendocrine Tumor: Statistics
... 5 years after the cancer is found. Percent means how many out of 100. The 5-year survival rate of people with neuroendocrine tumors varies and ... with a neuroendocrine tumor. Also, experts measure the survival statistics every 5 years. This means that the estimate may not show the results ...
Statistical instability of barrier microdischarges operating in townsend regime
Nagorny, V. P.
2007-01-15
The dynamics of barrier microdischarges operating in a Townsend regime is studied analytically and via kinetic particle-in-cell/Monte Carlo simulations. It is shown that statistical fluctuations of the number of charged particles in the discharge gap strongly influence the dynamics of natural oscillations of the discharge current and may even lead to a disruption of the discharge. Analysis of the statistical effects based on a simple model is suggested. The role of external sources in stabilizing microdischarges is clarified.
Statistics for Learning Genetics
NASA Astrophysics Data System (ADS)
Charles, Abigail Sheena
This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and, more directly, students' perceptions of, and performance in, doing statistically-based genetics problems. This issue is at the emerging edge of modern college-level genetics instruction, and this study attempts to identify key theoretical components for creating a specialized biological statistics curriculum. The goal of this curriculum will be to prepare biology students with the skills for assimilating quantitatively-based genetic processes, increasingly at the forefront of modern genetics. To this end, two college-level classes at two universities were surveyed, one located in the northeastern US and the other in the West Indies. The sample comprised 42 students, and a supplementary interview was administered to nine of them. Interviews were also administered to professors in the field in order to gain insight into the teaching of statistics in genetics. Key findings indicated that most students (55%) had little to no background in statistics. Although students performed well on exams, with 60% receiving an A or B grade, 77% did not offer good explanations on a probability question, associated with the normal distribution, provided in the survey. The scope and presentation of the applicable statistics/mathematics in some of the most widely used genetics textbooks, as well as in the genetics syllabi used by instructors, do not help the issue. The textbooks often either did not give effective explanations for students or left out certain topics entirely; the same omission of statistically/mathematically oriented topics was true of the genetics syllabi reviewed for this study.
Nonetheless, although the need to infuse genetics, and the biological sciences generally, with these quantitative subjects is growing (in topics including synthetic biology, molecular systems biology, and phylogenetics), there remains little time in the semester to dedicate to consolidating this learning and understanding.
The Statistical Drake Equation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2010-12-01
We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. 
We derive the relevant probability density function, apparently previously unknown and dubbed "Maccone distribution" by Paul Davies. DATA ENRICHMENT PRINCIPLE. It should be noticed that ANY positive number of random variables in the Statistical Drake Equation is compatible with the CLT. So, our generalization allows for many more factors to be added in the future as more refined scientific knowledge about each factor becomes available to scientists. This capability to make room for more future factors in the statistical Drake equation, we call the "Data Enrichment Principle," and we regard it as the key to more profound future results in the fields of Astrobiology and SETI. Finally, a practical example is given of how our statistical Drake equation works numerically. We work out in detail the case where each of the seven random variables is uniformly distributed around its own mean value and has a given standard deviation. For instance, the number of stars in the Galaxy is assumed to be uniformly distributed around (say) 350 billion with a standard deviation of (say) 1 billion. Then, the resulting lognormal distribution of N is computed numerically by virtue of a MathCad file that the author has written. This shows that the mean value of the lognormal random variable N is actually of the same order as the classical N given by the ordinary Drake equation, as one might expect from a good statistical generalization.
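The core of the argument above, that the log of a product of independent positive random variables is a sum to which the CLT applies, so the product is approximately lognormal, is easy to verify by Monte Carlo. The sketch below uses placeholder uniform ranges for the seven factors (illustrative values only, not the paper's inputs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder uniform ranges for the seven Drake factors (illustrative
# values only, not the inputs used in the paper).
lows  = np.array([1e11, 0.2, 1.0, 0.05, 0.05, 0.05, 1e-8])
highs = np.array([5e11, 0.8, 3.0, 0.50, 0.50, 0.50, 1e-6])

samples = rng.uniform(lows, highs, size=(50_000, 7))
N = samples.prod(axis=1)   # the Drake product, now a random variable

# log N is a sum of seven independent terms, so by the CLT it is close
# to Gaussian, i.e. N itself is close to lognormal; independence also
# means the variance of log N is the sum of the per-factor variances.
logN = np.log(N)
print("var(log N)        :", logN.var())
print("sum of factor vars:", np.log(samples).var(axis=0).sum())
```

The two printed variances agree to within sampling error, which is the additivity that makes the lognormal limit work.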
NASA Astrophysics Data System (ADS)
Maccone, C.
In this paper, the statistical generalization of the Fermi paradox is provided. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable planets for man (1964). The statistical generalization of the original and by now too simplistic Dole equation is provided by replacing a product of ten positive numbers by the product of ten positive random variables. This is denoted the SEH, an acronym standing for “Statistical Equation for Habitables”. The proof in this paper is based on the Central Limit Theorem (CLT) of Statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH, the (average) distance between any two nearby habitable planets in the Galaxy is shown to be inversely proportional to the cubic root of NHab. This distance is denoted by a new random variable D. 
The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in 2008. 4. A practical example is then given of how the SEH works numerically. Each of the ten random variables is uniformly distributed around its own mean value as given by Dole (1964) and a standard deviation of 10% is assumed. The conclusion is that the average number of habitable planets in the Galaxy should be around 100 million ±200 million, and the average distance between any two nearby habitable planets should be about 88 light years ±40 light years. 5. The SEH results are matched against the results of the Statistical Drake Equation from reference 4. As expected, the number of currently communicating ET civilizations in the Galaxy turns out to be much smaller than the number of habitable planets (about 10,000 against 100 million, i.e. one ET civilization out of 10,000 habitable planets). The average distance between any two nearby habitable planets is much smaller than the average distance between any two neighbouring ET civilizations: 88 light years vs. 2000 light years, respectively. This means an average ET distance about 20 times larger than the average distance between any pair of adjacent habitable planets. 6. Finally, a statistical model of the Fermi Paradox is derived by applying the above results to the coral expansion model of Galactic colonization. The symbolic manipulator "Macsyma" is used to solve these difficult equations. A new random variable Tcol, representing the time needed to colonize a new planet, is introduced; it follows the lognormal distribution. Then the new quotient random variable Tcol/D is studied and its probability density function is derived by Macsyma. Finally, a linear transformation of random variables yields the overall time TGalaxy needed to colonize the whole Galaxy. We believe that our mathematical work in deriving this STATISTICAL Fermi Paradox is highly innovative and fruitful for the future.
SHARE: Statistical hadronization with resonances
NASA Astrophysics Data System (ADS)
Torrieri, G.; Steinke, S.; Broniowski, W.; Florkowski, W.; Letessier, J.; Rafelski, J.
2005-05-01
SHARE is a collection of programs designed for the statistical analysis of particle production in relativistic heavy-ion collisions. With the physical input of intensive statistical parameters, it generates the ratios of particle abundances. The program includes cascade decays of all confirmed resonances from the Particle Data Tables. The complete treatment of these resonances has been known to be a crucial factor behind the success of the statistical approach. An optional feature implemented is the Breit-Wigner distribution for strong resonances. An interface for fitting the parameters of the model to the experimental data is provided.
Program summary
Title of program: SHARE, October 2004, version 1.2
Catalogue identifier: ADVD
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVD
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC, Pentium III, 512 MB RAM (not hardware dependent)
Operating system: Linux: RedHat 6.1, 7.2, FEDORA, etc. (not system dependent)
Programming language: FORTRAN77 (g77, f77), as well as Mathematica, ver. 4 or 5, for the case of full chemical equilibrium and particle widths set to zero
Size of the package: 645 KB directory including example programs (87 KB compressed distribution archive)
External routines: KERNLIB, MATHLIB and PACKLIB from the CERN Program Library (see http://cernlib.web.cern.ch for download and installation instructions)
Distribution format: tar.gz
Number of lines in distributed program, including test data, etc.: 15 277
Number of bytes in distributed program, including test data, etc.: 88 522
Computer: Any computer with an f77 compiler
Nature of the physical problem: Statistical analysis of particle production in relativistic heavy-ion collisions involves the formation and the subsequent decays of a large number of resonances. 
With the physical input of thermal parameters, such as the temperature and fugacities, and considering cascading decays, along with weak interaction feed-down corrections, the observed hadron abundances are obtained. SHARE incorporates diverse physical approaches, with a flexibility of choice of the details of the statistical hadronization model, including the selection of a chemical (non-)equilibrium condition. SHARE also offers evaluation of the extensive properties of the source of particles, such as energy, entropy, baryon number, strangeness, as well as the determination of the best intensive input parameters fitting a set of experimental yields. This allows exploration of a proposed physical hypothesis about hadron production mechanisms and the determination of the properties of their source. Method of solving the problem: Distributions at freeze-out of both the stable particles and the hadronic resonances are set according to a statistical prescription, technically calculated via a series of Bessel functions, using CERN library programs. We also have the option of including finite particle widths of the resonances. While this is computationally expensive, it is necessary to fully implement the essence of the strong interaction dynamics within the statistical hadronization picture. In fact, including finite width has a considerable effect when modeling directly detectable short-lived resonances (Λ(1520), K*, etc.), and is noticeable in fits to experimentally measured yields of stable particles. After production, all hadronic resonances decay. Resonance decays are accomplished by addition of the parent abundances to the daughter, normalized by the branching ratio. Weak interaction decays receive a special treatment, where we introduce daughter particle acceptance factors for both strongly interacting decay products. An interface for fitting the statistical model parameters to experimental particle ratios with the help of MINUIT [1] is provided. The χ² 
function is defined in the standard way: for an investigated quantity f with statistical error Δf_stat and systematic error Δf_sys, χ² = Σ (f_theory − f_exp)² / (Δf_stat² + Δf_sys²) (note that systematic and statistical errors are independent, since the systematic error is not a random variable). Aside from χ², the program also calculates the statistical significance [2], defined as the probability that, given a "true" theory and a statistical (Gaussian) experimental error, the fitted χ² assumes values at or above the considered value. In the case that the best fit has a statistical significance well below unity, the model under consideration is very likely inappropriate. In the limit of many degrees of freedom (large N), the statistical significance function depends only on χ²/N, with 90% statistical significance at χ²/N ≈ 1, and falling steeply at χ²/N > 1. However, the degrees of freedom in fits involving ratios are generally not sufficient to reach the asymptotic limit. Hence, statistical significance depends strongly on χ² and N separately. In particular, if N < 20, a χ²/N significantly less than 1 is often required for a fit to have an acceptable statistical significance. The fit routine does not always find the true lowest χ² minimum. Specifically, multi-parameter fits with too few degrees of freedom generally exhibit a non-trivial structure in parameter space, with several secondary minima, saddle points, valleys, etc. To help the user perform the minimization effectively, we have added tools to compute the χ² contours and profiles. In addition, our program's flexibility allows for many strategies in performing the fit. It is therefore possible, by following the techniques described in Section 3.7, to scan the parameter space and ensure that the minimum found is the true one. Further systematic deviations between the model and experiment can be recognized via the program's output, which includes a particle-by-particle comparison between experiment and theory. 
Additional comments: In consideration of the wide stream of new data coming out of RHIC, there is on-going activity, with several groups performing analyses of particle yields. It is our hope that SHARE will help to create an analysis standard within the community. It can be useful for analyzing the experimental data, verifying simple physical assumptions, and evaluating expected yields, as well as for comparing the various similar models and programs currently in use. Typical running time: For the Fortran code, the computation time with the provided default input files is about 10 minutes on a 1 GHz processor. The time may rise significantly (by a factor of 300) if full-fledged optimization and finite widths are included. In Mathematica, typical running times are of the order of minutes. Accessibility: The program is available from the CPC Program Library; from the following websites: http://www.ifj.edu.pl/Dept4/share.html or http://www.physics.arizona.edu/~torrieri/SHARE/share.html; or from the authors upon request.
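The fitting step described above, minimizing a χ² over measured yields, can be illustrated with a toy one-parameter model. The sketch below stands in for SHARE's Bessel-function yield expressions with a simple Boltzmann factor exp(-m/T), and replaces MINUIT with a grid scan; all numbers are illustrative, not SHARE's actual inputs.

```python
import numpy as np

def chi2(theory, experiment, sigma):
    """Standard chi-squared: squared deviations of model yields from
    measured yields, weighted by the experimental errors."""
    theory, experiment, sigma = map(np.asarray, (theory, experiment, sigma))
    return float((((experiment - theory) / sigma) ** 2).sum())

# Toy one-parameter model: yields ~ exp(-m/T), an illustrative stand-in
# for the full Bessel-function expressions used by SHARE.
masses = np.array([0.140, 0.494, 0.938])     # GeV (pion, kaon, proton)
measured = np.exp(-masses / 0.160)           # synthetic data at T = 160 MeV
errors = 0.05 * measured                     # 5% errors

# Grid scan in place of MINUIT minimization.
grid = np.linspace(0.100, 0.220, 601)
chi2_vals = [chi2(np.exp(-masses / T), measured, errors) for T in grid]
best_T = grid[int(np.argmin(chi2_vals))]
print(f"best-fit T = {best_T * 1000:.0f} MeV")
```

Since the synthetic data were generated at T = 160 MeV, the scan recovers that value with χ² at (numerically) zero; with real yields one would inspect χ²/N and the contours around the minimum, as the record describes.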
The standard map: From Boltzmann-Gibbs statistics to Tsallis statistics.
Tirnakli, Ugur; Borges, Ernesto P
2016-01-01
As is well known, Boltzmann-Gibbs statistics is the correct way of thermostatistically approaching ergodic systems. On the other hand, nontrivial ergodicity breakdown and strong correlations typically drag the system into out-of-equilibrium states where Boltzmann-Gibbs statistics fails. For a wide class of such systems, it has been shown in recent years that the correct approach is to use Tsallis statistics instead. Here we show how the dynamics of the paradigmatic conservative (area-preserving) standard map exhibits, in an exceptionally clear manner, the crossing from one statistics to the other. Our results unambiguously illustrate the domains of validity of both Boltzmann-Gibbs and Tsallis statistical distributions. Since various important physical systems, from particle confinement in magnetic traps to autoionization of molecular Rydberg states, through particle dynamics in accelerators and comet dynamics, can be reduced to the standard map, our results are expected to enlighten and enable an improved interpretation of diverse experimental and observational results. PMID:27004989
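The conservative standard map referred to above is simple to iterate. A minimal sketch follows (the kick strength K and initial conditions are illustrative; large K gives the strongly chaotic, Boltzmann-Gibbs-like regime, small K the island-dominated regime where Tsallis statistics becomes relevant):

```python
import math

def standard_map(p, x, K, n_steps):
    """Iterate the area-preserving (Chirikov) standard map:
        p' = p + K*sin(x)  (mod 2*pi)
        x' = x + p'        (mod 2*pi)
    """
    orbit = []
    two_pi = 2 * math.pi
    for _ in range(n_steps):
        p = (p + K * math.sin(x)) % two_pi
        x = (x + p) % two_pi
        orbit.append((x, p))
    return orbit

# K = 10: strongly chaotic, essentially ergodic phase space; for small K
# regular islands dominate and ergodicity breaks down.
orbit = standard_map(p=1.0, x=2.0, K=10.0, n_steps=1000)
print("final (x, p):", orbit[-1])
```

Histogramming sums of p over many such orbits is the kind of numerical experiment the paper uses to expose the Gaussian-versus-q-Gaussian crossover.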
Statistical Inference at Work: Statistical Process Control as an Example
ERIC Educational Resources Information Center
Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia
2008-01-01
To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
This article presents a general and versatile methodology for assessing sustainability with Fisher Information as a function of dynamic changes in urban systems. Using robust statistical methods, six Metropolitan Statistical Areas (MSAs) in Ohio were evaluated to comparatively as...
Pitfalls in computer statistics
Kinnison, R.R.
1983-03-01
The monograph discusses the computation of sums of squares, one of the most common statistical computations, and also one of the quantities most sensitive to finite numerical representation errors. Examples show that the computational device, as well as the form of the mathematical formulas, is an important consideration. Statistical computations such as sums of squares are easy (but tedious) to perform accurately with paper and pencil. However, the difficulty in accurately calculating these results on computers is demonstrated. The problem is especially acute on microcomputers, where the user does not have access to double precision variables. Calculating sums of squares with a single-precision, one-pass algorithm can give highly inaccurate results. If double precision is not available (such as in BASIC), the update algorithms presented should be used.
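The monograph's point can be reproduced directly: in single precision, the one-pass "calculator" formula sum(x²) − (sum x)²/n cancels catastrophically when the mean is large relative to the spread, while a stable one-pass update (Welford's algorithm, one form of the update algorithms the monograph recommends) does not. A sketch using NumPy float32 to emulate single precision:

```python
import numpy as np

def naive_ss(x):
    """One-pass 'calculator' formula sum(x^2) - (sum x)^2 / n in single
    precision: subtracts two huge, nearly equal numbers."""
    s = np.float32(0.0)
    ss = np.float32(0.0)
    for v in x:
        s += v
        ss += v * v
    return float(ss - s * s / np.float32(len(x)))

def welford_ss(x):
    """Numerically stable one-pass update (Welford's algorithm), also
    in single precision: accumulates small deviations instead."""
    mean = np.float32(0.0)
    m2 = np.float32(0.0)
    for n, v in enumerate(x, start=1):
        delta = v - mean
        mean += delta / np.float32(n)
        m2 += delta * (v - mean)
    return float(m2)

x = np.float32(10000.0) + np.arange(10, dtype=np.float32)  # 10000..10009
# exact sum of squared deviations is 82.5
print("naive  :", naive_ss(x))
print("welford:", welford_ss(x))
```

The naive result is off by tens because the intermediate sums, near 10⁹, have a float32 resolution of 64; Welford's running-deviation update never leaves the well-conditioned scale of the data.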
Explorations in statistics: power.
Curran-Everett, Douglas
2010-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fifth installment of Explorations in Statistics revisits power, a concept fundamental to the test of a null hypothesis. Power is the probability that we reject the null hypothesis when it is false. Four things affect power: the probability with which we are willing to reject, by mistake, a true null hypothesis; the magnitude of the difference we want to be able to detect; the variability of the underlying population; and the number of observations in our sample. In an application to an Institutional Animal Care and Use Committee or to the National Institutes of Health, we define power to justify the sample size we propose. PMID:20522895
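Power, and its dependence on the four factors listed above, can be estimated by straightforward simulation. The sketch below (a two-sided z-test with known σ, purely illustrative) counts how often a true effect is detected at level α; raising the sample size raises the power.

```python
import math
import random
from statistics import NormalDist

def simulated_power(effect, sigma, n, alpha=0.05, n_sim=5000, seed=1):
    """Estimate the power of a two-sided z-test (sigma known) by
    simulation: the fraction of simulated experiments in which a true
    mean shift of `effect` is detected at level `alpha`."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(n_sim):
        xbar = sum(rng.gauss(effect, sigma) for _ in range(n)) / n
        z = xbar / (sigma / math.sqrt(n))
        if abs(z) > z_crit:
            hits += 1
    return hits / n_sim

# power grows with n, one of the four factors named in the abstract
print("n = 16:", simulated_power(0.5, 1.0, 16))
print("n = 64:", simulated_power(0.5, 1.0, 64))
```

Varying `effect`, `sigma`, or `alpha` instead of `n` demonstrates the other three factors in the same way.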
Medical activities and statistics.
Weber, Patrick; Eriksson, Jan; Seewer, Stephan
2004-01-01
The evaluation of medical activity is a major concern for hospitals and public health services. With the introduction of coding (ICD-10 and CHOP classifications), hospitals are now able to analyze their medical activity. A way to improve physicians' acceptance of analyzing their work is to give them valuable feedback information. Building statistics tools is costly and time consuming; therefore, introducing data warehouse tools is helpful. Nice Code is easy-to-use software that assists medical encoding while immediately offering understandable statistics. In Switzerland, physicians demand real feedback based on the data transmitted at the cantonal or federal levels, and more transparency in third-party payers' decisions. In this respect, several cantons decided to equip public health services and hospitals with this tool. The goal is to give physicians and economists powerful tools for analyzing medical activity. PMID:15137212
Nock, Richard; Nielsen, Frank
2004-11-01
This paper explores a statistical basis for a process often described in computer vision: image segmentation by region merging following a particular order in the choice of regions. We exhibit a particular blend of algorithmics and statistics whose segmentation error is, as we show, limited from both the qualitative and quantitative standpoints. This approach can be efficiently approximated in linear time/space, leading to a fast segmentation algorithm tailored to processing images described using most common numerical pixel attribute spaces. The conceptual simplicity of the approach makes it simple to modify and cope with hard noise corruption, handle occlusion, authorize the control of the segmentation scale, and process unconventional data such as spherical images. Experiments on gray-level and color images, obtained with a short readily available C-code, display the quality of the segmentations obtained. PMID:15521493
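The order-driven region merging described above can be caricatured in one dimension: process adjacent pairs in increasing order of intensity difference and merge two regions whenever their means are statistically indistinguishable. The merging threshold below follows the spirit of the paper's predicate, but its constants (Q, g, the ln(2n) term) are illustrative choices, not the authors' exact formula.

```python
import math

def srm_1d(signal, Q=32.0, g=256.0):
    """Toy 1-D analogue of statistical region merging with union-find:
    edges between neighbours are visited in increasing order of
    intensity difference; two regions merge when their mean difference
    is below a size-dependent statistical bound b(R)."""
    n = len(signal)
    parent = list(range(n))
    size = [1] * n
    total = list(map(float, signal))   # per-region intensity sums

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def b(region_size):
        # shrinks as regions grow: large regions must match closely
        return g * math.sqrt(math.log(2.0 * n) / (2.0 * Q * region_size))

    edges = sorted(range(n - 1), key=lambda i: abs(signal[i + 1] - signal[i]))
    for i in edges:
        a, c = find(i), find(i + 1)
        if a == c:
            continue
        mean_a, mean_c = total[a] / size[a], total[c] / size[c]
        if abs(mean_a - mean_c) <= math.hypot(b(size[a]), b(size[c])):
            parent[c] = a
            total[a] += total[c]
            size[a] += size[c]
    return [find(i) for i in range(n)]

labels = srm_1d([10, 11, 10, 12, 200, 201, 199, 200])
print(labels)
```

The noisy plateau around 10 and the one around 200 each collapse into a single region, while the large jump between them survives; sorting the edges once and merging with union-find is what keeps the full 2-D algorithm near linear time.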
Statistical challenges of AIDS.
Becker, N G
1992-08-01
"This paper considers questions concerning the incubation period [of HIV infections], the effects of treatments, prediction of AIDS cases, the choice of surrogate end points for the assessment of treatments and design of strategies for screening blood samples. These issues give rise to a broad range of intriguing problems for statisticians. We describe some of these problems, how they have been tackled so far and what remains to be done. The discussion touches on topical statistical methods such as smoothing, bootstrapping, interval censoring and the ill-posed inverse problem, as well as asking fundamental questions for frequentist statistics." The geographical scope is worldwide, with some data for selected developed countries used to illustrate the models. PMID:12285674
Statistical evaluation of forecasts
NASA Astrophysics Data System (ADS)
Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn
2014-08-01
Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which intend to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.
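The random-predictor baseline described above can be realized both by bootstrap and analytically: for an alarm raised independently at each time step with probability r, the chance that an event has an alarm within the preceding h steps is 1 − (1 − r)^h. A sketch comparing the bootstrapped null sensitivity to that closed form (all parameters illustrative):

```python
import random

def random_predictor_sensitivity(event_times, horizon, alarm_rate,
                                 n_steps, n_boot=2000, seed=0):
    """Bootstrap the null distribution of sensitivity for a random
    predictor that raises an alarm at each time step with probability
    `alarm_rate`, independently of the events. An event counts as
    predicted if any alarm fell within `horizon` steps before it."""
    rng = random.Random(seed)
    sens = []
    for _ in range(n_boot):
        alarms = [rng.random() < alarm_rate for _ in range(n_steps)]
        hit = sum(any(alarms[max(0, t - horizon):t]) for t in event_times)
        sens.append(hit / len(event_times))
    return sens

null = random_predictor_sensitivity(
    event_times=[120, 340, 650, 900], horizon=10,
    alarm_rate=0.05, n_steps=1000)
mean_sens = sum(null) / len(null)
analytic = 1 - (1 - 0.05) ** 10   # P(at least one alarm in the window)
print(mean_sens, analytic)
```

A real forecasting method is only validated if its sensitivity, at the same alarm rate, exceeds a high quantile of this null distribution.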
Relativistic statistical arbitrage.
Wissner-Gross, A D; Freer, C E
2010-11-01
Recent advances in high-frequency financial trading have made light propagation delays between geographically separated exchanges relevant. Here we show that there exist optimal locations from which to coordinate the statistical arbitrage of pairs of spacelike separated securities, and calculate a representative map of such locations on Earth. Furthermore, trading local securities along chains of such intermediate locations results in a novel econophysical effect, in which the relativistic propagation of tradable information is effectively slowed or stopped by arbitrage. PMID:21230542
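A toy version of the optimality condition, not the paper's spherical-geometry calculation: for two exchanges on a line, the coordination point that equalizes the one-way light propagation delay to both is simply the midpoint. The distances below are invented.

```python
# Illustrative 1-D sketch: the node with equal light-travel delay to two
# exchanges lies at their midpoint. The real calculation in the paper is done
# on the Earth's surface; this flat model only shows the equal-delay condition.

C = 299_792_458.0  # speed of light in vacuum, m/s

def optimal_node(x_a, x_b):
    """Point with equal one-way light delay to both exchanges (1-D model)."""
    return (x_a + x_b) / 2.0

def one_way_delay(x_from, x_to):
    return abs(x_to - x_from) / C

x = optimal_node(0.0, 6_000_000.0)  # exchanges 6000 km apart
d_a = one_way_delay(x, 0.0)
d_b = one_way_delay(x, 6_000_000.0)
```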
International petroleum statistics report
1995-04-01
The International Petroleum Statistics Report, a monthly publication, presents data on international oil production, demand, imports, exports, and stocks. The four sections of this April 1995 report are as follows: time series data on world oil production and oil demand and stocks in the Organization for Economic Cooperation and Development (OECD); oil supply/demand balance for the world; data on oil imports by OECD countries; annual time series data on world oil production and oil stocks, demand and trade in OECD countries.
1979 DOE statistical symposium
Gardiner, D.A.; Truett, T.
1980-09-01
The 1979 DOE Statistical Symposium was the fifth in the series of annual symposia designed to bring together statisticians and other interested parties who are actively engaged in helping to solve the nation's energy problems. The program included presentations of technical papers centered around exploration and disposal of nuclear fuel, general energy-related topics, and health-related issues, and workshops on model evaluation, risk analysis, analysis of large data sets, and resource estimation.
Statistical interior tomography
NASA Astrophysics Data System (ADS)
Xu, Qiong; Yu, Hengyong; Mou, Xuanqin; Wang, Ge
2010-09-01
The long-standing interior problem has been recently revisited, leading to promising results on exact local reconstruction also referred to as interior tomography. To date, there are two key computational ingredients of interior tomography. The first ingredient is inversion of the truncated Hilbert transform with prior sub-region knowledge. The second is compressed sensing (CS) assuming a piecewise constant or polynomial region of interest (ROI). Here we propose a statistical approach for interior tomography incorporating both of the aforementioned ingredients. In our approach, projection data follows the Poisson model, and an image is reconstructed in the maximum a posteriori (MAP) framework subject to other interior tomography constraints including a known sub-region and minimized total variation (TV). A deterministic interior reconstruction based on the inversion of the truncated Hilbert transform is used as the initial image for the statistical interior reconstruction. This algorithm has been extensively evaluated in numerical and animal studies in terms of major image quality indices, radiation dose and machine time. In particular, our encouraging results from a low-contrast Shepp-Logan phantom and a real sheep scan demonstrate the feasibility and merits of our proposed statistical interior tomography approach.
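A deliberately tiny 1-D sketch of the MAP iteration's structure, under strong simplifying assumptions: the forward operator is the identity (real CT uses projections), the counts, penalty weight, and step size are invented, and the TV term uses a plain subgradient. Only the structure (Poisson data term, TV penalty, hard known-sub-region constraint, deterministic initializer) mirrors the abstract.

```python
# Simplified MAP sketch: Poisson log-likelihood for counts y (identity forward
# operator, an illustrative assumption), a TV penalty, and a hard constraint
# fixing the known sub-region. Not the authors' implementation.
import math

def tv(x):
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def poisson_nll(x, y):
    return sum(xi - yi * math.log(xi) for xi, yi in zip(x, y))

def map_step(x, y, known, lam=0.1, step=0.05):
    n = len(x)
    g = [1.0 - y[i] / x[i] for i in range(n)]            # Poisson data term
    for i in range(n):                                   # TV subgradient
        if i > 0:
            g[i] += lam * (1 if x[i] > x[i - 1] else -1 if x[i] < x[i - 1] else 0)
        if i < n - 1:
            g[i] += lam * (1 if x[i] > x[i + 1] else -1 if x[i] < x[i + 1] else 0)
    x2 = [max(x[i] - step * g[i], 1e-6) for i in range(n)]
    for i, v in known.items():                           # known sub-region constraint
        x2[i] = v
    return x2

y = [4.0, 5.0, 4.0, 20.0, 22.0]        # invented measured counts
x = [float(v) for v in y]              # deterministic reconstruction as initializer
known = {0: 4.0}                       # prior knowledge of one sub-region value
for _ in range(100):
    x = map_step(x, y, known)
```

After iterating, the known pixel stays fixed while the TV penalty smooths the rest of the image toward a lower total variation than the raw data.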
Bradley, Robert K.; Roberts, Adam; Smoot, Michael; Juvekar, Sudeep; Do, Jaeyoung; Dewey, Colin; Holmes, Ian; Pachter, Lior
2009-01-01
We describe a new program for the alignment of multiple biological sequences that is both statistically motivated and fast enough for problem sizes that arise in practice. Our Fast Statistical Alignment program is based on pair hidden Markov models which approximate an insertion/deletion process on a tree and uses a sequence annealing algorithm to combine the posterior probabilities estimated from these models into a multiple alignment. FSA uses its explicit statistical model to produce multiple alignments which are accompanied by estimates of the alignment accuracy and uncertainty for every column and character of the alignment—previously available only with alignment programs which use computationally-expensive Markov Chain Monte Carlo approaches—yet can align thousands of long sequences. Moreover, FSA utilizes an unsupervised query-specific learning procedure for parameter estimation which leads to improved accuracy on benchmark reference alignments in comparison to existing programs. The centroid alignment approach taken by FSA, in combination with its learning procedure, drastically reduces the amount of false-positive alignment on biological data in comparison to that given by other methods. The FSA program and a companion visualization tool for exploring uncertainty in alignments can be used via a web interface at http://orangutan.math.berkeley.edu/fsa/, and the source code is available at http://fsa.sourceforge.net/. PMID:19478997
Guta, Madalin; Butucea, Cristina
2010-10-15
The notion of a U-statistic for an n-tuple of identical quantum systems is introduced in analogy to the classical (commutative) case: given a self-adjoint 'kernel' K acting on (C{sup d}){sup ⊗r} with r
Overdispersion in nuclear statistics
NASA Astrophysics Data System (ADS)
Semkow, Thomas M.
1999-02-01
Modern statistical distribution theory is applied to the development of the overdispersion theory in ionizing-radiation statistics for the first time. The physical nuclear system is treated as a sequence of binomial processes, each depending on a characteristic probability, such as probability of decay, detection, etc. The probabilities fluctuate in the course of a measurement, and the physical reasons for that are discussed. If the average values of the probabilities change from measurement to measurement, which originates from the random Lexis binomial sampling scheme, then the resulting distribution is overdispersed. The generating functions and probability distribution functions are derived, followed by a moment analysis. The Poisson and Gaussian limits are also given. The distribution functions belong to a family of generalized hypergeometric factorial moment distributions by Kemp and Kemp, and can serve as likelihood functions for the statistical estimations. An application to radioactive decay with detection is described and working formulae are given, including a procedure for testing the counting data for overdispersion. More complex experiments in nuclear physics (such as solar neutrino experiments) can be handled by this model, as well as distinguishing between the source and background.
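One standard way to test counting data for overdispersion, sketched here as an illustration rather than the paper's own procedure, is Fisher's index-of-dispersion test: for Poisson counts the statistic D = (n-1)s²/mean is approximately chi-square with n-1 degrees of freedom, and a large normal-approximation z-score flags overdispersion. The counts and z-threshold below are invented.

```python
# Sketch: Fisher's index-of-dispersion test for Poisson counts.
# D = (n-1) * s^2 / mean ~ chi-square(n-1) under the Poisson null;
# the z-threshold of 3 is a conventional choice, not the paper's.
from math import sqrt

def dispersion_test(counts, z_crit=3.0):
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)   # sample variance
    D = (n - 1) * var / mean
    z = (D - (n - 1)) / sqrt(2 * (n - 1))                  # chi-square -> normal approx
    return D, z, z > z_crit

# Poisson-like counts vs. clearly overdispersed counts (both invented)
_, z_ok, over_ok = dispersion_test([98, 102, 101, 99, 100, 103, 97, 100])
_, z_bad, over_bad = dispersion_test([50, 150, 40, 160, 55, 145, 60, 140])
```

The first series has variance close to its mean (Poisson-like); the second has variance far above its mean and is flagged as overdispersed.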
NASA Astrophysics Data System (ADS)
William, Peter
In this dissertation several two dimensional statistical systems exhibiting discrete Z(n) symmetries are studied. For this purpose a newly developed algorithm to compute the partition function of these models exactly is utilized. The zeros of the partition function are examined in order to obtain information about the observable quantities at the critical point. This occurs in the form of critical exponents of the order parameters which characterize phenomena at the critical point. The correlation length exponent is found to agree very well with those computed from strong coupling expansions for the mass gap and with Monte Carlo results. In Feynman's path integral formalism the partition function of a statistical system can be related to the vacuum expectation value of the time ordered product of the observable quantities of the corresponding field theoretic model. Hence a generalization of ordinary scale invariance, in the form of conformal invariance, is focused upon. This principle is very suitably applicable in the case of two dimensional statistical models undergoing second order phase transitions at criticality. The conformal anomaly specifies the universality class to which these models belong. From an evaluation of the partition function, the free energy at criticality is computed, to determine the conformal anomaly of these models. The conformal anomalies for all the models considered here are in good agreement with the predicted values.
Spike train statistics and Gibbs distributions.
Cessac, B; Cofré, R
2013-11-01
This paper is based on a lecture given in the LACONEU summer school, Valparaiso, January 2012. We introduce Gibbs distributions in a general setting, including non-stationary dynamics, and then present three examples of such Gibbs distributions in the context of neural network spike train statistics: (i) maximum entropy model with spatio-temporal constraints; (ii) generalized linear models; and (iii) conductance based integrate and fire model with chemical synapses and gap junctions. PMID:23501168
Finite statistical complexity for sofic systems
NASA Astrophysics Data System (ADS)
Perry, Nicolás; Binder, P.-M.
1999-07-01
We propose a measure of complexity for symbolic sequences, which is based on conditional probabilities, and captures computational aspects of complexity without the explicit construction of minimal deterministic finite automata (DFA). Moreover, if the sequence is obtained from a dynamical system through a suitable encoding and its equations of motion are known, we show how to estimate the regions of phase space that correspond to computational states with statistically equivalent futures (causal states).
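The grouping of computational states with "statistically equivalent futures" can be sketched directly from data: estimate the empirical next-symbol distribution for each length-k history and merge histories whose distributions agree. The tolerance rule below is a simplification of statistical equivalence, and the golden-mean sequence is an invented example.

```python
# Sketch of causal-state grouping for a symbolic sequence: histories are
# merged into one state when their empirical next-symbol distributions agree
# within a tolerance (a simplified stand-in for a proper statistical test).
from collections import defaultdict

def causal_states(seq, k=1, tol=0.05):
    futures = defaultdict(lambda: defaultdict(int))
    for i in range(len(seq) - k):
        futures[seq[i:i + k]][seq[i + k]] += 1
    symbols = sorted(set(seq))
    dists = {h: tuple(c[s] / sum(c.values()) for s in symbols)
             for h, c in futures.items()}
    states = []                      # list of (representative distribution, histories)
    for h, d in sorted(dists.items()):
        for rep, members in states:
            if all(abs(a - b) <= tol for a, b in zip(d, rep)):
                members.append(h)
                break
        else:
            states.append((d, [h]))
    return states

# Golden-mean-like process: "1" may follow anything, "0" never follows "0".
seq = "110111011011110111101101"
states = causal_states(seq, k=1)
```

For this sequence the histories "0" and "1" have distinct futures ("0" is always followed by "1"), so two causal states emerge, matching the two-state automaton of the golden-mean process.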
Experimental Mathematics and Computational Statistics
Bailey, David H.; Borwein, Jonathan M.
2009-04-30
The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics, as well as statistical methods applied to computational mathematics.
Truth, Damn Truth, and Statistics
ERIC Educational Resources Information Center
Velleman, Paul F.
2008-01-01
Statisticians and Statistics teachers often have to push back against the popular impression that Statistics teaches how to lie with data. Those who believe incorrectly that Statistics is solely a branch of Mathematics (and thus algorithmic) often see the use of judgment in Statistics as evidence that we do indeed manipulate our results. In the…
Quality management through statistics.
Bush, D L
1991-01-01
One of the responsibilities of a quality assurance professional is to assess patient care data in an effort to meet "outcome" requirements as established by the institution, the patient, the patient's family, the Joint Commission, the Health Care Financing Administration, the state PRO, and other regulatory agencies. A major goal of effective quality management is to assure that patients treated in a similar diagnostic category achieve similar outcomes. To accomplish this, the QA manager must gain and maintain control over patient care processes. To obtain this control, the conditions of patient care processes must be measured. Over one hundred years ago, the English scientist Lord Kelvin said, "When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is...unsatisfactory." This article will describe some of the statistical quality control techniques that will assist in measuring the performance of specific patient care processes and outcomes and expressing them in numbers. A review of the numbers will show whether a particular patient care process is running smoothly or needs further investigation or adjustments. Using statistical quality control techniques will enable the QA manager to predict how well patient care processes and outcomes will run in the future. The simple statistical techniques discussed in this article can be used to measure the performance of patient care processes and outcomes both before and after corrective actions have been taken. They apply both to attempts to bring a specific process or outcome into "control" or to break through to a new, improved level of quality performance. PMID:10112986
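One of the statistical quality control techniques the article alludes to can be sketched as a p-chart: plot the monthly fraction of patients meeting an outcome target, and flag months outside the p-bar ± 3-sigma control limits as processes needing further investigation. The monthly fractions and sample size below are invented for illustration.

```python
# Hedged sketch of a Shewhart p-chart for patient-outcome fractions.
# Points outside p_bar +/- 3*sigma signal a process that is out of control;
# the data are invented, not from the article.
from math import sqrt

def p_chart(fractions, n):
    """Return (center, lcl, ucl, out_of_control_indices) for samples of size n."""
    p_bar = sum(fractions) / len(fractions)
    sigma = sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(0.0, p_bar - 3 * sigma)
    ucl = min(1.0, p_bar + 3 * sigma)
    flagged = [i for i, p in enumerate(fractions) if p < lcl or p > ucl]
    return p_bar, lcl, ucl, flagged

monthly = [0.90, 0.92, 0.89, 0.91, 0.90, 0.70, 0.91, 0.92]   # month 5 dips sharply
center, lcl, ucl, flagged = p_chart(monthly, n=100)
```

Reviewing the flagged indices shows at a glance which months "need further investigation or adjustments", in the article's terms, rather than eyeballing a sea of numbers.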
NASA Technical Reports Server (NTRS)
1996-01-01
This booklet of pocket statistics includes the 1996 NASA Major Launch Record, NASA Procurement, Financial, and Workforce data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
International petroleum statistics report
1995-10-01
The International Petroleum Statistics Report is a monthly publication that provides current international oil data. This report presents data on international oil production, demand, imports, exports and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). Section 2 presents an oil supply/demand balance for the world, in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries.
[USA] highway statistics, 1997
1999-11-01
This is an annual report containing analyzed statistical data on motor fuel; motor vehicles; driver licensing; highway-user taxation; state highway finance; highway mileage; federal aid for highways; highway finance data for municipalities, counties, townships, and other units of local government; select tables/charts from the 1995 Nationwide Personal Transportation Survey; and international data. This report has been published since 1945. These and other State-by-State tabulations are all available in electronic form on the Internet at http://www.fhwa.dot.gov/pubstats.html. The data tables can be viewed in PDF and downloaded in spreadsheet format.
Statistical signals in bioinformatics
Karlin, Samuel
2005-01-01
The Arthur M. Sackler Colloquium of the National Academy of Sciences, "Frontiers in Bioinformatics: Unsolved Problems and Challenges," organized by David Eisenberg, Russ Altman, and myself, was held October 15-17, 2004, to provide a forum for discussing concepts and methods in bioinformatics serving the biological and medical sciences. The deluge of genomic and proteomic data in the last two decades has driven the creation of tools that search and analyze biomolecular sequences and structures. Bioinformatics is highly interdisciplinary, using knowledge from mathematics, statistics, computer science, biology, medicine, physics, chemistry, and engineering. PMID:16157888
Who Needs Statistics? | Poster
You may know the feeling. You have collected a lot of new data on an important experiment. Now you are faced with multiple groups of data, a sea of numbers, and a deadline for submitting your paper to a peer-reviewed journal. And you are not sure which data are relevant, or even the best way to present them. The statisticians at Data Management Services (DMS) know how to help. This small group of experts provides a wide array of statistical and mathematical consulting services to the scientific community at NCI at Frederick and NCI-Bethesda.
NASA Astrophysics Data System (ADS)
de Gouvêa, André; Murayama, Hitoshi
2003-10-01
“Anarchy” is the hypothesis that there is no fundamental distinction among the three flavors of neutrinos. It describes the mixing angles as random variables, drawn from well-defined probability distributions dictated by the group Haar measure. We perform a Kolmogorov-Smirnov (KS) statistical test to verify whether anarchy is consistent with all neutrino data, including the new result presented by KamLAND. We find a KS probability for Nature's choice of mixing angles equal to 64%, quite consistent with the anarchical hypothesis. In turn, assuming that anarchy is indeed correct, we compute lower bounds on |Ue3|2, the remaining unknown “angle” of the leptonic mixing matrix.
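The KS machinery behind such a test can be sketched generically: compute the statistic D_n = sup_x |F_n(x) - F(x)| between an empirical sample and a hypothesized CDF. The true Haar-measure distributions for the mixing variables are more involved than the uniform null used here, which is purely illustrative, as are both samples.

```python
# Sketch of the one-sample Kolmogorov-Smirnov statistic only; the uniform
# null CDF is an illustrative stand-in for the actual Haar-measure
# distributions of the neutrino mixing variables.

def ks_statistic(sample, cdf):
    """D_n = sup_x |F_n(x) - F(x)|, evaluated at the sorted sample points."""
    xs = sorted(sample)
    n = len(xs)
    return max(max(cdf(x) - i / n, (i + 1) / n - cdf(x))
               for i, x in enumerate(xs))

uniform_cdf = lambda x: min(max(x, 0.0), 1.0)
d_close = ks_statistic([0.1, 0.3, 0.5, 0.7, 0.9], uniform_cdf)   # consistent sample
d_far = ks_statistic([0.01, 0.02, 0.03, 0.04, 0.05], uniform_cdf)  # clustered sample
```

A small statistic (the first sample) is consistent with the hypothesized distribution; a large one (the second) rejects it, which is the logic applied to the mixing angles in the paper.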
NASA Technical Reports Server (NTRS)
1994-01-01
Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
NASA Technical Reports Server (NTRS)
1995-01-01
NASA Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
ERIC Educational Resources Information Center
Perepiczka, Michelle; Chandler, Nichelle; Becerra, Michael
2011-01-01
Statistics plays an integral role in graduate programs. However, numerous intra- and interpersonal factors may lead to successful completion of needed coursework in this area. The authors examined the extent of the relationship between self-efficacy to learn statistics and statistics anxiety, attitude towards statistics, and social support of 166…