SDI: setting the record straight
Adelman, K.L.
1985-01-01
After a few introductory remarks, Mr. Adelman first discusses Soviet propaganda against SDI. He then poses and answers questions regarding the following: SDI and the ABM Treaty; SDI and US arms control objectives; and the ethics of SDI. The final portion of the address reviews US nonproliferation efforts.
NASA Astrophysics Data System (ADS)
Anderson, C.; Kosch, M.; Nicolls, M. J.; Conde, M. G.
2011-12-01
Interactions between the plasma and neutral components of the upper atmosphere result in a diverse set of phenomena that occur over a wide range of spatial and temporal scales. Investigating these interactions requires essentially simultaneous measurements of (spatially resolved) ion and neutral parameters at a time resolution comparable to the time-scales of the underlying driving forces. Two instruments that are ideally suited to such investigations are the Scanning Doppler Imager (SDI) and the Advanced Modular Incoherent Scatter Radar (AMISR). The SDI is capable of resolving small-scale neutral horizontal flow structures and temperature fields across a wide field-of-view, with a temporal resolution of around 4 minutes. The AMISR allows for 'volumetric' ionospheric imaging by sampling along multiple range resolved beams simultaneously, with integration times at least comparable to the SDI. Here we present initial results from a campaign of coordinated observations between an AMISR and SDI located at Poker Flat Research Range in Alaska. This study focuses on the observed signatures of ion-neutral coupling at E and F-region altitudes, in particular the directly measured local ion-neutral velocity difference (required for calculating frictional heating rates) and estimates of the ion-neutral collision frequency from measurements taken along the local geomagnetic field-aligned direction. These observations are placed in the context of the large-scale neutral and ion flows.
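The frictional heating rate mentioned above follows directly from the measured ion-neutral velocity difference. A minimal sketch, assuming the common volumetric form Q = rho_i * nu_in * |v_i - v_n|^2; the function name and the sample numbers are illustrative, not campaign values:

```python
import numpy as np

def frictional_heating(rho_i, nu_in, v_ion, v_neutral):
    """Frictional (Joule) heating rate per unit volume, W/m^3.

    Uses the common form Q = rho_i * nu_in * |v_i - v_n|^2, where
    rho_i is the ion mass density (kg/m^3), nu_in the ion-neutral
    collision frequency (1/s), and v_ion / v_neutral are horizontal
    velocity vectors (m/s)."""
    dv = np.asarray(v_ion) - np.asarray(v_neutral)
    return rho_i * nu_in * np.sum(dv * dv, axis=-1)

# Illustrative F-region numbers (not from the campaign):
rho_i = 5e-15                                   # ion mass density, kg/m^3
nu_in = 0.5                                     # collision frequency, 1/s
q = frictional_heating(rho_i, nu_in, [600.0, 0.0], [100.0, 50.0])
```

The same routine applies per beam/altitude bin once the SDI neutral winds are interpolated onto the AMISR sampling volumes.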
Lee, S.
2011-05-05
The Savannah River Remediation (SRR) Organization requested that Savannah River National Laboratory (SRNL) develop a Computational Fluid Dynamics (CFD) method to mix and blend the miscible contents of the blend tanks to ensure the contents are properly blended before they are transferred from the blend tank, such as Tank 50H, to the Salt Waste Processing Facility (SWPF) feed tank. The work described here consists of two modeling areas: the mixing modeling analysis during miscible liquid blending operation, and the flow pattern analysis during transfer operation of the blended liquid. The transient CFD governing equations, consisting of three momentum equations, one mass balance, two turbulence transport equations for kinetic energy and dissipation rate, and one species transport equation, were solved by an iterative technique until the species concentrations of the tank fluid were in equilibrium. The steady-state flow solutions for the entire tank fluid were used for flow pattern analysis, for velocity scaling analysis, and as the initial conditions for transient blending calculations. A series of modeling calculations were performed to estimate the blending times for various jet flow conditions, and to investigate the impact of the cooling coils on the blending time of the tank contents. The modeling results were benchmarked against the pilot-scale test results. All of the flow and mixing models were performed with the nozzles installed at the mid-elevation and parallel to the tank wall. From the CFD modeling calculations, the main results are summarized as follows: (1) The benchmark analyses for the CFD flow velocity and blending models demonstrate their consistency with Engineering Development Laboratory (EDL) and literature test results in terms of local velocity measurements and experimental observations. Thus, an application of the established criterion to the SRS full-scale tank will provide a better, physically based estimate of the required mixing time, and
A Framework for Comparing SDI Systems.
ERIC Educational Resources Information Center
Jordan, John R.
The process of comparing the many Selective Dissemination of Information (SDI) systems is a complicated task, requiring proper test procedures and the control of critical variables. The necessity for controlling the interest level as well as the number of documents disseminated when evaluating systems is demonstrated. The process of SDI is…
The Evaluation of SISMAKOM (Computerized SDI Project).
ERIC Educational Resources Information Center
University of Science, Penang (Malaysia).
A survey of 88 users of SISMAKOM, a computerized selective dissemination of information (SDI) and document delivery service provided by the Universiti Sains Malaysia and four other Malaysian universities, was conducted in August 1982 in order to collect data about SISMAKOM and to assess the value of a computerized SDI service in a developing…
SDI spinoffs: research now, standards later
Smith, T.K. Jr.
1986-04-01
A major benefit of the Strategic Defense Initiative (SDI) is its potential for technological spinoffs. The lack of a consistent answer on the feasibility of developing an effective ballistic missile defense system may force Congress to look at the possible spinoffs in order to make a funding decision on SDI. Spinoffs have historically played an important role in providing industry with commercial applications, but there are also a number of unattractive aspects: unpredictability and possible suppression for national security reasons. Edward Teller is among those who promote X-ray lasers, while others support gamma-ray laser research. The possibility of SDI technology and spinoffs gives scientists and engineers a chance to participate in the development of new standards. 7 references.
SDI (Strategic Defense Initiative): a policy analysis
Fought, S.O.
1987-01-01
Contents include -- Foundations of Deterrence; A Model for Stability; Analysis of SDI/Stability; Related Issues; Treatment of Implementation Factors; Historical Evolution and Trends; The Strategic Choices and Flexible Response; The Planners' Perspective; The Impact of Strategic Defense on a Strategy of Flexible Response; Synthesis.
Deriving statistical closure from dynamical optimization
NASA Astrophysics Data System (ADS)
Turkington, Bruce
2015-11-01
Turbulence theorists have traditionally deduced statistical models by generating a hierarchy of moment equations and invoking some closure rules to truncate the hierarchy. In this talk a conceptually different approach to model reduction and statistical closure will be presented, and its implications for coarse-graining fluid turbulence will be indicated. The author has developed this method in the context of nonequilibrium statistical descriptions of Hamiltonian systems with many degrees of freedom. With respect to a chosen parametric statistical model, the lack-of-fit of model paths to the full dynamics is minimized in a time-integrated, mean-squared sense. This optimal closure method is applied to coarse-grain spectrally-truncated inviscid dynamics, including the Burgers-Hopf equation and incompressible two-dimensional flow, using the means and/or variances of low modes as resolved variables. The derived reduced dynamics for these test cases contain (1) scale-dependent dissipation which is not a local eddy viscosity, (2) modified nonlinear interactions between resolved modes, and (3) coupling between the mean and variance of each resolved mode. These predictions are validated against direct numerical simulations of ensembles for the fully resolved dynamics.
Takatsuka, Kazuo; Matsumoto, Kentaro
2016-01-21
We present a basic theory to study real-time dynamics embedded in a large environment that is treated using a statistical method. In light of great progress in molecular-level studies on time-resolved spectroscopies, chemical reaction dynamics, and so on, not only in the gas phase but also in condensed phases like liquid solvents and even in crowded environments in living cells, we need to bridge the gap between statistical mechanics and microscopic real-time dynamics. For instance, an analogy to gas-phase dynamics, in which molecules are driven by the gradient of the potential energy hypersurfaces (PESs), suggests that particles in condensed phases should run on the free energy surface instead. The question is whether this anticipation is correct. To answer it, we here propose a mixed dynamics and statistical representation to treat chemical dynamics embedded in a statistical ensemble. We first define the entropy functional, which is a function of the phase-space position of the dynamical subsystem, dressed with statistical weights from the statistical counterpart. We then consider the functionals of temperature, free energy, and chemical potential as their extensions in statistical mechanics, through which one can clarify the relationship between real-time microscopic dynamics and statistical quantities. As an illustrative example, we show that molecules in the dynamical subsystem should run on the free-energy functional surface if and only if the spatial gradients of the temperature functional are all zero. Otherwise, additional forces emerge from the gradient of the temperature functional. Numerical demonstrations of this theory, at a very basic level, are presented for molecular dissociation in atomic cluster solvents. PMID:26674298
The origins of SDI, 1944--1983
Baucom, D.R.
1992-01-01
The most distinctive and important contribution of this new book on the Strategic Defense Initiative is that it ends where most other studies begin, with President Ronald Reagan's famous (or infamous, depending on one's perspective) March 1983 speech that introduced the Star Wars concept. In taking this approach, Donald R. Baucom - a former Air Force historian who has been the official historian of the Strategic Defense Initiative Organization since May 1987 - helps to correct the common misperception that US efforts in strategic defense began and ended with the SDI. Although Baucom tells us that The Origins of SDI is a significantly revised version of an SDIO study he completed in 1989, representing his own views and not those of the SDIO, the reader should be warned that the book reads like an official history. It is often dry or too episodic and offers little that is new in the way of analysis or interpretation.
Using SDI-12 with ST microelectronics MCU's
Saari, Alexandra; Hinzey, Shawn Adrian; Frigo, Janette Rose; Proicou, Michael Chris; Borges, Louis
2015-09-03
ST Microelectronics microcontrollers and processors are readily available, capable, and economical. Unfortunately, they lack the broad user base of similar offerings from Texas Instruments, Atmel, or Microchip. All of these devices could be useful in economical hardware for remote environmental-sensing applications. With the increased need for environmental studies, and limited budgets, flexibility in hardware is very important. To that end, and in an effort to increase open support of ST devices, I am sharing my team's experience in interfacing a common environmental sensor communication protocol (SDI-12) with ST devices.
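SDI-12 itself is MCU-agnostic: a 1200-baud, 7-data-bit, even-parity serial protocol on a single data line, in which each sensor answers address-prefixed commands such as aM! (start measurement) and aD0! (send data). The UART and break-signal handling are ST-specific, but response parsing is portable. A hedged sketch: the function names are ours, the frame formats are those of the SDI-12 specification:

```python
import re

def parse_m_response(resp: bytes):
    """Parse the reply to an SDI-12 'aM!' command, b'atttn\\r\\n':
    a = sensor address, ttt = seconds until the data are ready,
    n = number of values the sensor will return."""
    body = resp.rstrip(b"\r\n").decode("ascii")
    addr, secs, nvals = body[0], int(body[1:4]), int(body[4:])
    return addr, secs, nvals

def parse_d_response(resp: bytes):
    """Parse an 'aD0!' data reply: the address followed by
    sign-prefixed values, e.g. b'0+21.4-0.37\\r\\n'."""
    body = resp.rstrip(b"\r\n").decode("ascii")
    values = [float(v) for v in re.findall(r"[+-][0-9.]+", body[1:])]
    return body[0], values

# Sensor at address '0' will have 3 values ready in 1 second:
addr, secs, n = parse_m_response(b"00013\r\n")
```

On the MCU side, only the byte transport changes; these parsers can be exercised on a host against logged sensor traffic before porting.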
Grumman and SDI-related technology
Lewis, B.
1985-01-01
The application of Grumman Corporation's aerospace and nuclear fusion technology to the Strategic Defense Initiative (SDI) program has taken place in at least five major areas. These include infrared boost surveillance and tracking to detect intercontinental ballistic missiles just after launch, space-based radar, neutral particle beam platforms, nuclear electric power and propulsion units in space, and battle management systems. The author summarizes developments in each of these areas to illustrate how Grumman has responded to the request that the scientific and industrial communities pursue innovative, high-risk concepts involving materials, structures, space power, space physics, and kinetic energy weapon concepts. 3 figures.
Controlling statistical moments of stochastic dynamical networks
NASA Astrophysics Data System (ADS)
Bielievtsov, Dmytro; Ladenbauer, Josef; Obermayer, Klaus
2016-07-01
We consider a general class of stochastic networks and ask which network nodes need to be controlled, and how, to stabilize and switch between desired metastable (target) states in terms of the first and second statistical moments of the system. We first show that it is sufficient to directly interfere with a subset of nodes which can be identified using information about the graph of the network only. Then we develop a suitable method for feedback control which acts on that subset of nodes and preserves the covariance structure of the desired target state. Finally, we demonstrate our theoretical results using a stochastic Hopfield network and a global brain model. Our results are applicable to a variety of (model) networks and further our understanding of the relationship between network structure and collective dynamics for the benefit of effective control. PMID:27575147
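For a linear (or linearized) stochastic network, the first two moments evolve in closed form, and the stationary covariance that a covariance-preserving feedback law must respect solves a Lyapunov equation. A minimal numpy sketch; the coupling matrix and noise amplitudes are illustrative, not from the paper:

```python
import numpy as np

# Linearized stochastic network dynamics: dx = A x dt + S dW.
# The stationary covariance C of the fluctuations solves the
# Lyapunov equation  A C + C A^T + S S^T = 0.
A = np.array([[-1.0,  0.3,  0.0],
              [ 0.2, -1.5,  0.4],
              [ 0.0,  0.1, -0.8]])   # stable coupling matrix (illustrative)
S = 0.2 * np.eye(3)                  # noise amplitudes
Q = S @ S.T

# Solve by vectorization: (I kron A + A kron I) vec(C) = -vec(Q),
# using column-major vec so vec(AC) = (I kron A) vec(C).
n = A.shape[0]
K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
C = np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")

residual = A @ C + C @ A.T + Q       # should vanish at stationarity
```

Switching between target states then amounts to choosing feedback gains on the controlled subset so that the closed-loop A reproduces the target mean and this C.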
Lost in space: SDI struggles through its sixth year
MacDonald, B.W.
1989-09-01
After six years of debate, it is clear that Congress is willing to support a robust research program for SDI, but it is also clear that Congress will not support SDI annual outlays on the order of $10 billion. Thus the policy choice is between a good research program that meshes with fiscal reality and an inadequate and wasteful development program that continues to focus on preparing for a Phase I deployment for which the funds simply will not be available. The Bush administration so far seems trapped by its own rhetoric from coming to grips with the implications of the new SDI reality. The responsibility for getting SDI on a steadier course toward more realistic research objectives thus seems to lie with Congress in the near term. Since Congress has been reluctant to earmark SDI research funds for specific objectives, it will take a change in administration perceptions before SDI program goals can be changed away from Phase I deployment. The only likely way this could happen in the near term would be as a result of a Congress-executive branch summit agreement on SDI objectives and funding levels. In the absence of such an agreement, SDI will be sailing under ever weaker fiscal and political winds and runs the risk of finding itself becalmed, working ceaselessly toward goals that will never be fulfilled.
Teachers' Use of Transnumeration in Solving Statistical Tasks with Dynamic Statistical Software
ERIC Educational Resources Information Center
Lee, Hollylynne S.; Kersaint, Gladis; Harper, Suzanne R.; Driskell, Shannon O.; Jones, Dusty L.; Leatham, Keith R.; Angotti, Robin L.; Adu-Gyamfi, Kwaku
2014-01-01
This study examined a random stratified sample (n = 62) of teachers' work across eight institutions on three tasks that utilized dynamic statistical software. We considered how teachers may utilize and develop their statistical knowledge and technological statistical knowledge when investigating a statistical task. We examined how teachers…
Artificial intelligence applications in space and SDI: A survey
NASA Technical Reports Server (NTRS)
Fiala, Harvey E.
1988-01-01
The purpose of this paper is to survey existing and planned Artificial Intelligence (AI) applications to show that they are sufficiently advanced for 32 percent of all space applications and SDI (Strategic Defense Initiative) software to be AI-based software. To best define the needs that AI can fill in space and SDI programs, this paper enumerates primary areas of research and lists generic application areas. Current and planned NASA and military space projects in AI are reviewed, largely in the selected area of expert systems. Finally, direct applications of AI to SDI are treated. The conclusion covers the importance of AI to space and SDI applications, and conversely, their importance to AI.
Multifragmentation: New dynamics or old statistics?
Moretto, L.G.; Delis, D.N.; Wozniak, G.J.
1993-10-01
The understanding of the fission process as it has developed over the last fifty years has been applied to multifragmentation. Two salient aspects have been discovered: (1) a strong decoupling of the entrance and exit channels, with the formation of well-characterized sources; (2) a statistical competition between two-, three-, four-, five-, ... n-body decays.
Photon Counts Statistics in Leukocyte Cell Dynamics
NASA Astrophysics Data System (ADS)
van Wijk, Eduard; van der Greef, Jan; van Wijk, Roeland
2011-12-01
In the present experiment, ultra-weak photon emission (chemiluminescence) from isolated neutrophils was recorded. It is associated with the production of reactive oxygen species (ROS) in the "respiratory burst" process, which can be activated by PMA (phorbol 12-myristate 13-acetate). Commonly, the reaction is demonstrated utilizing the enhancer luminol; however, with highly sensitive photomultiplier equipment it can also be recorded without an enhancer. In that case, it can be hypothesized that photon count statistics may assist in understanding the underlying metabolic activity and cooperation of these cells. To study this hypothesis, leukocytes were stimulated with PMA and the increased photon signals were recorded in the quasi-stable period, utilizing Fano factor analysis at different window sizes. The Fano factor is defined as the variance over the mean of the number of photons within the observation time. The analysis demonstrated that the Fano factor of the true signal, but not of surrogate signals obtained by random shuffling, increases as the window size increases. It is concluded that photon count statistics, in particular Fano factor analysis, provides information regarding leukocyte interactions. It opens the perspective of utilizing this analytical procedure in (in vivo) inflammation research. However, this needs further validation.
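The window-size-dependent Fano factor analysis described here is easy to reproduce on synthetic data. A minimal sketch: a slowly modulated Poisson train stands in for the correlated photon signal, and shuffling plays the role of the surrogate; all rates and window sizes are illustrative, not the experiment's:

```python
import numpy as np

rng = np.random.default_rng(0)

def fano_factor(counts, window):
    """Fano factor (variance / mean) of photon counts summed over
    non-overlapping windows of `window` bins."""
    m = len(counts) // window
    summed = counts[:m * window].reshape(m, window).sum(axis=1)
    return summed.var() / summed.mean()

# A homogeneous Poisson train has F ~ 1 at every window size;
# a correlated (doubly stochastic) signal shows F rising with window size.
poisson = rng.poisson(5.0, size=100_000)
rate = 5.0 + 2.0 * np.sin(np.arange(100_000) / 500.0)  # slow rate modulation
modulated = rng.poisson(rate)

f_small = fano_factor(modulated, 10)
f_large = fano_factor(modulated, 1000)

# Random shuffling destroys the correlations, as for the surrogate
# signals in the experiment, flattening the growth of F with window size:
shuffled = rng.permutation(modulated)
```

The signature reported in the abstract corresponds to f_large exceeding f_small for the true signal but not for the shuffled surrogate.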
Protein electron transfer: Dynamics and statistics
NASA Astrophysics Data System (ADS)
Matyushov, Dmitry V.
2013-07-01
Electron transfer between redox proteins participating in energy chains of biology is required to proceed with high energetic efficiency, minimizing losses of redox energy to heat. Within the standard models of electron transfer, this requirement, combined with the need for unidirectional (preferably activationless) transitions, is translated into the need to minimize the reorganization energy of electron transfer. This design program is, however, unrealistic for proteins whose active sites are typically positioned close to the polar and flexible protein-water interface to allow inter-protein electron tunneling. The high flexibility of the interfacial region makes both the hydration water and the surface protein layer act as highly polar solvents. The reorganization energy, as measured by fluctuations, is not minimized, but rather maximized in this region. Natural systems in fact utilize the broad breadth of interfacial electrostatic fluctuations, but in the ways not anticipated by the standard models based on equilibrium thermodynamics. The combination of the broad spectrum of static fluctuations with their dispersive dynamics offers the mechanism of dynamical freezing (ergodicity breaking) of subsets of nuclear modes on the time of reaction/residence of the electron at a redox cofactor. The separation of time-scales of nuclear modes coupled to electron transfer allows dynamical freezing. In particular, the separation between the relaxation time of electro-elastic fluctuations of the interface and the time of conformational transitions of the protein caused by changing redox state results in dynamical freezing of the latter for sufficiently fast electron transfer. The observable consequence of this dynamical freezing is significantly different reorganization energies describing the curvature at the bottom of electron-transfer free energy surfaces (large) and the distance between their minima (Stokes shift, small). The ratio of the two reorganization energies
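In the standard Marcus picture a single reorganization energy sets both the curvature of the free energy parabolas and the separation of their minima (the Stokes shift); the abstract's point is that dynamical freezing lets these two differ. A sketch of the crossing-point barrier when they are decoupled, i.e. parabolas of curvature 1/(4*lam_var) whose minima are 2*lam_st apart. This construction is illustrative, not the paper's model:

```python
def barrier(dG, lam_st, lam_var):
    """Activation energy from the crossing of two parabolic
    free-energy curves F1(X) = X^2 / (4*lam_var) and
    F2(X) = (X - 2*lam_st)^2 / (4*lam_var) + dG.
    lam_st sets the minima separation (Stokes shift); lam_var sets
    the curvature (breadth of the energy-gap fluctuations)."""
    x_cross = lam_st + lam_var * dG / lam_st
    return x_cross**2 / (4.0 * lam_var)

# When the two reorganization energies coincide, this reduces to the
# classical Marcus result (dG + lam)^2 / (4*lam):
classic = barrier(-0.2, 1.0, 1.0)

# Large "curvature" reorganization energy with a small Stokes shift,
# the regime the abstract describes, lowers the barrier:
frozen = barrier(-0.2, 0.5, 2.0)
```

All energies are in the same (arbitrary) units; only the ratio of the two reorganization energies matters for the comparison.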
SDI (Strategic Defense Initiative) and national security policy. Research report
Davis, R.W.
1988-04-01
The paper attempts to answer the fundamental question: Can SDI make a significant contribution to US national security? It uses as its evaluation criteria historical arms-control measurements of stability, reduction in the probability of war, reduction in the consequences of war, economic benefits, and political benefits. A historical discussion of US nuclear strategy development, along with Soviet thinking, is provided as a backdrop to set the stage for an analysis of the reasons for President Reagan's March 1983 speech. The objectives of SDI are discussed along with the major concerns expressed by the program's critics. Using the evaluation criteria defined above, the author analyzes SDI's potential position in a long-term integrated national strategy that includes arms control and competitive strategies.
Future SDI (Strategic Defense Initiative) decision making. Study project
Holman, B.W.
1988-03-29
Nearly five years have elapsed since President Reagan made his now-famous speech launching his Strategic Defense Initiative. Yet polarized debate continues over the program's feasibility, desirability, affordability, goals, and direction. Some claim the program's goals have changed over time, and that today the primary goal is enhanced deterrence rather than the population defense originally envisioned. Although the Congress has provided continuing and expanded funding for SDI, a consensus does not exist between the Congress and the Administration over the program's direction and goals. Some concerns exist within the Congress that the Administration is rushing too quickly to reach a decision on initial system development. Others would like to see initial development of a more limited defensive capability than that envisioned by the SDI program. This paper examines the evolution of SDI from a policy standpoint and addresses a series of questions that, taken together, may suggest parameters for future decision making.
Statistics and dynamics of the perturbed universe
NASA Astrophysics Data System (ADS)
Lemson, G.
1995-09-01
Penzias and Wilson discovered the corresponding radiation field, at a temperature of roughly 3K (Penzias & Wilson, 1965). It soon appeared that this microwave background radiation was isotropic to a high degree, which confirmed the assumptions made about the homogeneity of the early Universe. At present, however, we see that the Universe is no longer featureless and smooth. Starting from the smallest scales we see matter organized in structures up to very large scales: from planets to stars to stellar systems to galaxies to groups and clusters of galaxies, up to super-clusters, where clusters and galaxies are organized in the largest structures known. Somewhere during the evolution of the Universe, these structures must have developed out of the featureless, uniform sea of matter and radiation. Various different theories have been developed to explain the emergence of structure, but in this thesis I will concentrate exclusively on the most generally accepted theory, that of gravitational instability. In this theory it is assumed that in the early Universe, small fluctuations in the density were present, and these would grow under the influence of gravity towards the presently observed structures. There is actually a rather complete theory of the early stages of this process, that regime where these deviations from homogeneity are small. In that case, the inhomogeneous field may be seen as a small disturbance to the uniform model, and the standard apparatus of perturbation theory may be applied. In this thesis I investigate the later stages of this process of structure formation, where the fluctuations have grown to such a size that this 'linear' perturbation approach breaks down. There is as yet no comprehensive model describing this 'nonlinear' regime as successfully as the linear theory describes the early stages of structure formation. Instead, the problem is approached from many different directions, using different, approximate models for describing the dynamics and other
Modeling Statistical and Dynamic Features of Earthquakes
NASA Astrophysics Data System (ADS)
Rydelek, P. A.; Suyehiro, K.; Sacks, S. I.; Smith, D. E.; Takanami, T.; Hatano, T.
2015-12-01
The cellular automaton earthquake model by Sacks and Rydelek (1995) is extended to explain spatio-temporal change in seismicity with the regional tectonic stress buildup. Our approach is to apply a simple Coulomb failure law to our model space of discrete cells, which successfully reproduces empirical laws (e.g. the Gutenberg-Richter law) and dynamic failure characteristics (e.g. stress drop vs. magnitude and asperities) of earthquakes. Once the stress condition exceeds the Coulomb threshold on a discrete cell, its accumulated stress is transferred to only neighboring cells, which cascades to more neighboring cells to create various size ruptures. A fundamental point here is the cellular view of the continuous earth. We suggest the cell size varies regionally with the maturity of the faults of the region. Seismic gaps (e.g. Mogi, 1979) and changes in seismicity such as indicated by b-values have been known but poorly understood. There have been reports of magnitude-dependent seismic quiescence before large events at plate boundaries and intraplate (Smith et al., 2013). Recently, decreases in b-value before large earthquakes have been reported (Nanjo et al., 2012), as anticipated from lab experiments (Mogi, 1963). Our model reproduces the b-value decrease towards an eventual large earthquake (increasing tectonic stress and its heterogeneous distribution). We succeeded in reproducing the cut-off of larger events above some threshold magnitude (M3-4) by slightly increasing the Coulomb failure level for only 2% or more of the highly stressed cells. This is equivalent to reducing the pore pressure in these distributed cells. We are working on the model to introduce the recovery of pore pressure, incorporating the observed orders-of-magnitude higher permeability of fault zones relative to the surrounding rock (Lockner, 2009), allowing for a large earthquake to be generated. Our interpretation requires interactions of pores and fluids. We suggest heterogeneously distributed patches hardened
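The toppling rule described above (load uniformly, fail past a Coulomb threshold, transfer stress to neighbors) can be sketched in a few lines. This is an illustrative OFC-style automaton, not the Sacks-Rydelek model itself; the transfer fraction, grid size, and loading rate are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def run(steps=5000, n=32, threshold=1.0, load=0.001, alpha=0.2):
    """Minimal Coulomb-threshold cellular automaton.  Stress builds
    uniformly; a cell at or above threshold fails, dropping its stress
    and passing a fraction alpha of it to each of its 4 neighbors,
    which can cascade.  Returns the list of cascade (event) sizes."""
    stress = rng.uniform(0.0, threshold, (n, n))
    sizes = []
    for _ in range(steps):
        stress += load                          # tectonic loading
        failing = np.argwhere(stress >= threshold)
        size = 0
        while len(failing):                     # cascade until quiescent
            i, j = failing[0]
            s = stress[i, j]
            stress[i, j] = 0.0
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n:
                    stress[a, b] += alpha * s   # partial (dissipative) transfer
            failing = np.argwhere(stress >= threshold)
        if size:
            sizes.append(size)
    return sizes

sizes = run()
```

Even this stripped-down version yields many more small events than large ones, the qualitative Gutenberg-Richter behavior the full model reproduces quantitatively.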
Computerized Information Service--SDI. Annual Report 1974-75.
ERIC Educational Resources Information Center
Hjerppe, Roland
The Information and Documentation Centre of the Royal Institute of Technology Library performs research and development in information science. The two main areas of this continuing research and development programme are (1) development of a comprehensive SDI service and (2) investigations in interactive retrieval services. This annual report…
Deterrence enhanced by SDI (Strategic Defense Initiative). Student essay
Sowa, P.T.
1987-03-23
The Strategic Defense Initiative (SDI) is an investigation by scientists, military leaders, and technologists of the feasibility of strategic defenses against ballistic missiles. Whether strategic defense is the way to go in the future remains to be seen. Although the capability of the envisioned layered, ground-, and space-based defensive systems is not specifically addressed, the SDI research program's effect on national-security strategy and its formulation is examined. If found to be feasible and cost-effective, SDI will require a drastic change in our military strategy. This essay reviews how that strategy was formulated in the past, in terms of a model, and how strategic-defense strategy would be used in the future. More importantly, a discussion of how SDI impacts present US national strategy and enhances deterrence is presented. Observations are offered on this evolving strategic concept's influence on technology, conventional defense, and arms control, and on the renewed vigor and interest it has provided in strategy formulation.
SDI: O, what a tangled web we weave
Keeny, S.M. Jr.
1993-11-01
The ghost of the Strategic Defense Initiative (SDI) still haunts the Pentagon. The recent revelation that the highly publicized 1984 intercept of a mock Soviet reentry vehicle (RV) was rigged - as part of a highly secret deception plan to mislead the Soviet Union - has raised questions about the integrity and wisdom of defense development and policy processes.
Statistical properties of chaotic dynamical systems which exhibit strange attractors
Jensen, R.V.; Oberman, C.R.
1981-07-01
A path integral method is developed for the calculation of the statistical properties of turbulent dynamical systems. The method is applicable to conservative systems which exhibit a transition to stochasticity as well as dissipative systems which exhibit strange attractors. A specific dissipative mapping is considered in detail which models the dynamics of a Brownian particle in a wave field with a broad frequency spectrum. Results are presented for the low order statistical moments for three turbulent regimes which exhibit strange attractors corresponding to strong, intermediate, and weak collisional damping.
Statistical determination of space shuttle component dynamic magnification factors
NASA Technical Reports Server (NTRS)
Lehner, F.
1973-01-01
A method is presented of obtaining vibration design loads for components and brackets. Dynamic Magnification Factors from applicable Saturn/Apollo qualification, reliability, and vibroacoustic tests have been statistically formulated into design nomographs. These design nomographs have been developed for different component and bracket types, mounted on backup structure or rigidly mounted and excited by sinusoidal or random inputs. Typical nomographs are shown.
Exploring Foundation Concepts in Introductory Statistics Using Dynamic Data Points
ERIC Educational Resources Information Center
Ekol, George
2015-01-01
This paper analyses introductory statistics students' verbal and gestural expressions as they interacted with a dynamic sketch (DS) designed using "Sketchpad" software. The DS involved numeric data points built on the number line whose values changed as the points were dragged along the number line. The study is framed on aggregate…
Soviet military on SDI (Strategic Defense Initiative). Professional paper
Fitzgerald, M.C.
1987-08-01
Numerous Western analysts have suggested that all American assessments of SDI should proceed not only from a consideration of American intentions, but also from the outlook of Soviet perceptions. Since 23 March 1983, the prevailing tone of Soviet military writings on SDI has been overwhelmingly negative. Myron Hedlin has concluded that this harsh reaction to a U.S. initiative still years from realization suggests both a strong concern about the ultimate impact of these plans on the strategic balance, and a perceived opportunity for scoring propaganda points. Indeed, the present review of Soviet writings since President Reagan's so-called Star Wars speech has yielded both objective Soviet concerns and regressions to psychological warfare. This, in turn, has necessitated a careful effort to separate rhetoric from more official assessments of SDI. While there has long been dispute in the West over the validity of Soviet statements, they have time and again been subsequently confirmed in Soviet hardware, exercises, and operational behavior. Some Western analysts will nonetheless contend that the Soviet statements under examination in this study are merely a commodity for export.
Statistical Computations Underlying the Dynamics of Memory Updating
Gershman, Samuel J.; Radulescu, Angela; Norman, Kenneth A.; Niv, Yael
2014-01-01
Psychophysical and neurophysiological studies have suggested that memory is not simply a carbon copy of our experience: Memories are modified or new memories are formed depending on the dynamic structure of our experience, and specifically, on how gradually or abruptly the world changes. We present a statistical theory of memory formation in a dynamic environment, based on a nonparametric generalization of the switching Kalman filter. We show that this theory can qualitatively account for several psychophysical and neural phenomena, and present results of a new visual memory experiment aimed at testing the theory directly. Our experimental findings suggest that humans can use temporal discontinuities in the structure of the environment to determine when to form new memory traces. The statistical perspective we offer provides a coherent account of the conditions under which new experience is integrated into an old memory versus forming a new memory, and shows that memory formation depends on inferences about the underlying structure of our experience. PMID:25375816
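The switching Kalman filter at the core of this theory builds on the ordinary Kalman update. As a minimal, self-contained illustration (a scalar sketch, not the authors' nonparametric model), the code below shows how a filter's prediction error spikes at an abrupt change in the environment, the kind of discontinuity the theory proposes as a trigger for forming a new memory trace.

```python
import numpy as np

def kalman_step(mean, var, obs, process_var, obs_var):
    """One predict/update cycle of a scalar Kalman filter."""
    # Predict: the latent state diffuses with process noise.
    pred_mean, pred_var = mean, var + process_var
    # Update: blend prediction and observation by their precisions.
    gain = pred_var / (pred_var + obs_var)
    new_mean = pred_mean + gain * (obs - pred_mean)
    new_var = (1.0 - gain) * pred_var
    return new_mean, new_var

# Track a signal that jumps abruptly at t = 50.
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(50), 5.0 * np.ones(50)])
obs = signal + rng.normal(0.0, 0.5, size=100)

mean, var = 0.0, 1.0
surprise = []  # squared prediction error, a crude change-point signal
for y in obs:
    surprise.append((y - mean) ** 2)
    mean, var = kalman_step(mean, var, y, process_var=0.01, obs_var=0.25)

# The prediction error typically spikes right at the jump (index near 50).
print(int(np.argmax(surprise)))
```

In a switching variant, such a spike would be evidence for instantiating a new mode (here, a new memory trace) rather than updating the old one.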
Seasonal drought predictability in Portugal using statistical-dynamical techniques
NASA Astrophysics Data System (ADS)
Ribeiro, A. F. S.; Pires, C. A. L.
2016-08-01
Atmospheric forecasting and predictability are important for promoting adaptation and mitigation measures that minimize drought impacts. This study estimates hybrid (statistical-dynamical) long-range forecasts of the regional drought index SPI (3-months) over homogeneous regions of mainland Portugal, based on forecasts from the UKMO operational forecasting system with lead-times up to 6 months. ERA-Interim reanalysis data are used to build a set of SPI predictors integrating recent past information prior to the forecast launch. The advantage of combining predictors with both dynamical and statistical backgrounds in predicting drought conditions at different lags is then evaluated. A two-step hybridization procedure is performed: first, both forecasted and observed 500 hPa geopotential height fields are subjected to a PCA so that forecasted PCs and persistent PCs can be used as predictors; second, a statistical/hybrid downscaling to the regional SPI is carried out using regression techniques, after pre-selection of the statistically significant predictors. The SPI forecasts and the added value of combining dynamical and statistical methods are evaluated in cross-validation mode, using R2 and binary event scores. Results are obtained for the four seasons; winter is found to be the most predictable season, and most of the predictive power lies in the large-scale fields from past observations. The hybridization improves the downscaling based on the forecasted PCs, since they provide complementary (though modest) information beyond that of persistent PCs. These findings provide clues about the predictability of the SPI, particularly in Portugal, and may contribute to the predictability of crop yields and offer some guidance for users (such as farmers) in their decision-making processes.
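The two-step pipeline (PCA of a large-scale field, then regression of a regional index on the leading PCs) can be sketched compactly. The snippet below uses synthetic stand-ins for the geopotential field and the SPI; the field size, noise levels, and the single driving pattern are all invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 120 months of a gridded field (50 grid points)
# and a regional drought index partly driven by its leading pattern.
n_time, n_grid = 120, 50
pattern = rng.normal(size=n_grid)
pc_true = rng.normal(size=n_time)
field = np.outer(pc_true, pattern) + 0.3 * rng.normal(size=(n_time, n_grid))
spi = 0.8 * pc_true + 0.2 * rng.normal(size=n_time)

# Step 1: PCA of the large-scale field (EOF analysis via SVD of anomalies).
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pcs = u[:, :3] * s[:3]          # first three principal components

# Step 2: regress the regional index on the retained PCs (training half),
# then predict on the held-out half -- a crude cross-validation split.
half = n_time // 2
X_train = np.column_stack([np.ones(half), pcs[:half]])
X_test = np.column_stack([np.ones(n_time - half), pcs[half:]])
beta, *_ = np.linalg.lstsq(X_train, spi[:half], rcond=None)
pred = X_test @ beta

r2 = np.corrcoef(pred, spi[half:])[0, 1] ** 2
print(f"out-of-sample R^2 = {r2:.2f}")
```

The study's hybridization additionally mixes forecasted PCs with persistent PCs as predictors; the sketch shows only the shared PCA-plus-regression skeleton.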
Dynamics, stability, and statistics on lattices and networks
NASA Astrophysics Data System (ADS)
Livi, Roberto
2014-07-01
These lectures aim at surveying some dynamical models that have been widely explored in the recent scientific literature as case studies of complex dynamical evolution, emerging from the spatio-temporal organization of several coupled dynamical variables. The first message is that a suitable mathematical description of such models needs tools and concepts borrowed from the general theory of dynamical systems and from out-of-equilibrium statistical mechanics. The second message is that the overall scenario is definitely richer than the standard problems in these fields. For instance, systems exhibiting complex unpredictable evolution do not necessarily exhibit deterministic chaotic behavior (i.e., Lyapunov chaos) as happens for dynamical models made of a few degrees of freedom. In fact, a very large number of spatially organized dynamical variables may yield unpredictable evolution even in the absence of Lyapunov instability. Such a mechanism may emerge from the combination of spatial extension and nonlinearity. Moreover, spatial extension allows one to introduce naturally disorder, or heterogeneity of the interactions, as important ingredients for complex evolution. It is worth pointing out that the models discussed in these lectures share these features, even though they were inspired by quite different physical and biological problems. Throughout these lectures we also describe some of the technical tools employed for the study of such models, e.g., Lyapunov stability analysis, unpredictability indicators for "stable chaos," hydrodynamic description of transport in low spatial dimension, spectral decomposition of stochastic dynamics on directed networks, etc.
A study on modeling the dynamics of statistically dependent returns
NASA Astrophysics Data System (ADS)
Davari-Ardakani, Hamed; Aminnayeri, Majid; Seifi, Abbas
2014-07-01
This paper develops a method to characterize the dynamic behavior of statistically dependent returns of assets via a scenario set. The proposed method uses heteroskedastic time series to model serial correlations of returns, as well as Cholesky decomposition to generate the set of scenarios such that the statistical dependence of different asset returns is preserved. In addition, this scenario generation method preserves marginal distributions of returns. To demonstrate the performance of the proposed method, a multi-period portfolio optimization model is presented. Then, the method is implemented through a number of stocks selected from New York Stock Exchange (NYSE). Computational results show a high performance of the proposed method from the statistical point of view. Also, results confirm sufficiency and in-sample stability of the generated scenario set. Besides, out-of-sample simulations, for both risk and return, illustrate a good performance of the proposed method.
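The Cholesky step of this scenario-generation method is standard and easy to demonstrate: a lower-triangular factor L of the target correlation matrix maps independent draws into correlated ones. The sketch below shows only this step with an invented three-asset correlation matrix; the paper's method additionally layers heteroskedastic time-series dynamics and marginal-distribution preservation on top.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical target correlation structure for three asset returns.
target = np.array([[1.0, 0.6, 0.3],
                   [0.6, 1.0, 0.5],
                   [0.3, 0.5, 1.0]])

# Cholesky factor L with target = L @ L.T
L = np.linalg.cholesky(target)

# Transform independent standard-normal draws into correlated scenarios.
n_scenarios = 200_000
z = rng.standard_normal((n_scenarios, 3))
scenarios = z @ L.T

empirical = np.corrcoef(scenarios, rowvar=False)
print(np.round(empirical, 2))  # close to the target matrix
```

Because cov(zL.T) = L cov(z) L.T = L L.T, the empirical correlations of the generated scenarios converge to the target as the scenario count grows.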
Statistical energy conservation principle for inhomogeneous turbulent dynamical systems.
Majda, Andrew J
2015-07-21
Understanding the complexity of anisotropic turbulent processes over a wide range of spatiotemporal scales in engineering shear turbulence as well as climate atmosphere ocean science is a grand challenge of contemporary science with important societal impact. In such inhomogeneous turbulent dynamical systems there is a large dimensional phase space with a large dimension of unstable directions where a large-scale ensemble mean and the turbulent fluctuations exchange energy and strongly influence each other. These complex features strongly impact practical prediction and uncertainty quantification. A systematic energy conservation principle is developed here in a Theorem that precisely accounts for the statistical energy exchange between the mean flow and the related turbulent fluctuations. This statistical energy is a sum of the energy in the mean and the trace of the covariance of the fluctuating turbulence. This result applies to general inhomogeneous turbulent dynamical systems including the above applications. The Theorem involves an assessment of statistical symmetries for the nonlinear interactions and a self-contained treatment is presented below. Corollary 1 and Corollary 2 illustrate the power of the method with general closed differential equalities for the statistical energy in time either exactly or with upper and lower bounds, provided that the negative symmetric dissipation matrix is diagonal in a suitable basis. Implications of the energy principle for low-order closure modeling and automatic estimates for the single point variance are discussed below.
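The decomposition at the heart of this principle (statistical energy = energy in the mean plus the trace of the covariance of the fluctuations) follows from the identity E[|u|^2] = |E[u]|^2 + tr(Cov(u)). A quick numerical check on a synthetic ensemble (the factor 1/2 and the Gaussian ensemble here are illustrative conventions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

# An ensemble of 10,000 states of a 5-dimensional system.
ensemble = rng.normal(loc=[1, -2, 0, 3, 0.5], scale=2.0, size=(10_000, 5))

mean = ensemble.mean(axis=0)
cov = np.cov(ensemble, rowvar=False)

# Statistical energy: energy in the mean plus trace of the covariance.
stat_energy = 0.5 * mean @ mean + 0.5 * np.trace(cov)

# Equivalently, half the ensemble-average squared norm of the state.
direct = 0.5 * np.mean(np.sum(ensemble**2, axis=1))
print(stat_energy, direct)  # agree up to sampling error
```

The theorem tracks how the nonlinear interactions exchange energy between the two terms of this sum while conserving (or bounding) the total.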
Dynamical topology and statistical properties of spatiotemporal chaos.
Zhuang, Quntao; Gao, Xun; Ouyang, Qi; Wang, Hongli
2012-12-01
For spatiotemporal chaos described by partial differential equations, there are generally locations where the dynamical variable achieves a local extremum or where the time partial derivative of the variable vanishes instantaneously. To a large extent, the location and movement of these topologically special points determine the qualitative structure of the disordered states. We analyze numerically the statistical properties of the topologically special points in one-dimensional spatiotemporal chaos. The probability distribution functions for the number of points, their lifespan, and the distance covered during their lifetime are obtained from numerical simulations. Mathematically, we establish a probabilistic model to describe the dynamics of these topologically special points. Despite their different definitions in different spatiotemporal chaotic systems, the dynamics of these special points can be described within a unified approach.
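As a toy version of the first step in such an analysis, the snippet below locates the local extrema of a one-dimensional field by finding sign changes of its spatial derivative. The field here is an invented sum of incommensurate waves, not a PDE solution from the paper; a full analysis would track these points frame by frame to build the lifespan and displacement statistics.

```python
import numpy as np

# A toy disordered 1-D field: a sum of incommensurate waves.
x = np.linspace(0, 20, 2001)
u = np.sin(x) + 0.7 * np.sin(np.sqrt(2) * x + 1.0) + 0.4 * np.sin(np.e * x)

# Local extrema: sign changes of the spatial derivative du/dx.
du = np.gradient(u, x)
extrema = np.where(np.sign(du[:-1]) != np.sign(du[1:]))[0]

print(len(extrema), x[extrema][:3])
```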
A Stochastic Fractional Dynamics Model of Rainfall Statistics
NASA Astrophysics Data System (ADS)
Kundu, Prasun; Travis, James
2013-04-01
Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second-moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted, together with other model parameters, to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain, but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field are spatially homogeneous, isotropic, and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida, and in Kwajalein Atoll, Marshall Islands, in the tropical Pacific. We estimate the parameters by tuning them to the second-moment statistics of the radar data. The model predictions are then found to fit the second-moment statistics of the gauge data reasonably well without any further adjustment. Some data sets containing periods of non-stationary behavior, involving occasional anomalously correlated rain events, present a challenge for the model.
Dynamical Scheme for Interferometric Measurements of Full-Counting Statistics
NASA Astrophysics Data System (ADS)
Dasenbrook, David; Flindt, Christian
2016-09-01
We propose a dynamical scheme for measuring the full-counting statistics in a mesoscopic conductor using an electronic Mach-Zehnder interferometer. The conductor couples capacitively to one arm of the interferometer and causes a phase shift which is proportional to the number of transferred charges. Importantly, the full-counting statistics can be obtained from average current measurements at the outputs of the interferometer. The counting field can be controlled by varying the time delay between two separate voltage signals applied to the conductor and the interferometer, respectively. As a specific application, we consider measuring the entanglement entropy generated by partitioning electrons on a quantum point contact. Our scheme is robust against moderate environmental dephasing and may be realized thanks to recent advances in gigahertz quantum electronics.
Statistical mechanics of the Toda lattice based on soliton dynamics
NASA Astrophysics Data System (ADS)
Yoshida, Fumio; Sakurma, Tetsuro
1982-05-01
A classical theory of statistical mechanics of the Toda lattice is presented on the basis of soliton dynamics. Following the inverse spectral theory, the partition function of the Toda lattice is reconstructed from one-particle partition functions of soliton and ripple modes. Discussions are made on the contribution of these modes to the thermodynamic properties of the Toda lattice. At low temperatures, it is shown that the average number of excited solitons has the temperature dependence T^{1/3}. By comparing our results with those from the exact theory, several problems to be worked out in our soliton-ripple gas-mixture model are pointed out.
Dynamic statistical models of biological cognition: insights from communications theory
NASA Astrophysics Data System (ADS)
Wallace, Rodrick
2014-10-01
Maturana's cognitive perspective on the living state, Dretske's insight on how information theory constrains cognition, the Atlan/Cohen cognitive paradigm, and models of intelligence without representation, permit construction of a spectrum of dynamic necessary conditions statistical models of signal transduction, regulation, and metabolism at and across the many scales and levels of organisation of an organism and its context. Nonequilibrium critical phenomena analogous to physical phase transitions, driven by crosstalk, will be ubiquitous, representing not only signal switching, but the recruitment of underlying cognitive modules into tunable dynamic coalitions that address changing patterns of need and opportunity at all scales and levels of organisation. The models proposed here, while certainly providing much conceptual insight, should be most useful in the analysis of empirical data, much as are fitted regression equations.
Role of quantum statistics in multi-particle decay dynamics
Marchewka, Avi; Granot, Er’el
2015-04-15
The role of quantum statistics in the decay dynamics of a multi-particle state, which is suddenly released from a confining potential, is investigated. For an initially confined two-particle state, the exact dynamics is presented for both bosons and fermions. The time evolution of the probability of measuring the two particles is evaluated and some counterintuitive features are discussed. For instance, it is shown that although there is a higher chance of finding the two bosons (as opposed to fermions, or even distinguishable particles) at the initial trap region, there is a higher chance (higher than for fermions) of finding them on two opposite sides of the trap, as if the repulsion between bosons were stronger than the repulsion between fermions. The results are demonstrated by numerical simulations and are calculated analytically in the short-time approximation. Furthermore, experimental validation is suggested.
Not Available
1986-07-01
The same Federal budget cuts which are constraining in-space testing of SDI components and systems are slowing the development of environmental facilities to simulate space conditions for testing the components on earth. At Arnold Engineering Development Center (AEDC), an attempt is being made to obtain funds for construction of facilities as national assets, rather than as military appropriations. AEDC is involved in studies of plume signature measurement, vacuum chamber testing, kinetic energy projectile testing, high endoatmospheric interceptor development, and toxic propellant facility support. Some development is devoted to scene-generation capabilities, large optics for collimating signals and the isolation of vacuum chambers from vibration, as well as efforts to produce numerical simulations for computational fluid dynamics and complex geometries. Tests are proceeding on components to be projected with a rail gun operated in corrosive environments.
Flow Equation Approach to the Statistics of Nonlinear Dynamical Systems
NASA Astrophysics Data System (ADS)
Marston, J. B.; Hastings, M. B.
2005-03-01
The probability distribution function of non-linear dynamical systems is governed by a linear framework that resembles quantum many-body theory, in which stochastic forcing and/or averaging over initial conditions play the role of non-zero ℏ. Besides the well-known Fokker-Planck approach, there is a related Hopf functional method [Uriel Frisch, Turbulence: The Legacy of A. N. Kolmogorov (Cambridge University Press, 1995), chapter 9.5]; in both formalisms, zero modes of linear operators describe the stationary non-equilibrium statistics. To access the statistics, we investigate the method of continuous unitary transformations [S. D. Glazek and K. G. Wilson, Phys. Rev. D 48, 5863 (1993); Phys. Rev. D 49, 4214 (1994)] (also known as the flow equation approach [F. Wegner, Ann. Phys. 3, 77 (1994)]), suitably generalized to the diagonalization of non-Hermitian matrices. Comparison to the more traditional cumulant expansion method is illustrated with low-dimensional attractors. The treatment of high-dimensional dynamical systems is also discussed.
Spectral statistics and dynamics of Lévy matrices.
Araujo, M; Medina, E; Aponte, E
1999-10-01
We study the spectral statistics and dynamics of a random matrix model where matrix elements are taken from power-law tailed distributions. Such distributions, labeled by a parameter μ, converge on the Lévy basin, giving the matrix model the label "Lévy matrix" [P. Cizeau and J. P. Bouchaud, Phys. Rev. E 50, 1810 (1994)]. Such matrices are interesting because their properties go beyond the Gaussian universality class and they model many physically relevant systems such as spin glasses with dipolar or Ruderman-Kittel-Kasuya-Yosida interactions, electronic systems with power-law decaying interactions, and the spectral behavior at the metal insulator transition. Regarding the density of states we extend previous work to reveal the sparse matrix limit as μ → 0. Furthermore, we find for 2 x 2 Lévy matrices that geometrical level repulsion is not affected by the distribution's broadness. Nevertheless, essential singularities particular to Lévy distributions for small arguments break geometrical repulsion and make it μ dependent. Level dynamics as a function of a symmetry breaking parameter gives new insight into the phases found by Cizeau and Bouchaud (CB). We map the phase diagram drawn qualitatively by CB by using the Δ3 statistic. Finally we compute the conductance of each phase by using the Thouless formula, and find that the mixed phase separating conducting and insulating phases has a unique character.
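A quick numerical illustration of why power-law-tailed entries change the spectral picture (a toy comparison, not the paper's level-statistics analysis): for Gaussian entries the spectrum is confined to a Wigner semicircle, while Pareto-tailed entries (here with an invented tail index of 1.5) produce eigenvalues far outside the bulk, dominated by the largest individual matrix elements.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400

def symmetric(entries):
    """Symmetrize a matrix of i.i.d. entries, keeping one diagonal copy."""
    m = np.triu(entries)
    return m + m.T - np.diag(np.diag(m))

# Gaussian (Wigner) ensemble: spectrum confined to a semicircle.
g = symmetric(rng.normal(size=(n, n)))
# Heavy-tailed ensemble: Pareto-tailed magnitudes with random signs.
p = symmetric(rng.pareto(1.5, size=(n, n)) * rng.choice([-1, 1], size=(n, n)))

eg = np.linalg.eigvalsh(g)
ep = np.linalg.eigvalsh(p)

# Ratio of spectral edge to the median scale: far larger for heavy tails.
ratio_g = np.max(np.abs(eg)) / np.median(np.abs(eg))
ratio_p = np.max(np.abs(ep)) / np.median(np.abs(ep))
print(ratio_g, ratio_p)
```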
Nonextensive Statistical Mechanics: Introduction, Dynamical Foundations and Applications
NASA Astrophysics Data System (ADS)
Tsallis, Constantino
2003-03-01
Many natural and artificial systems exist whose thermostatistical properties appear to be hardly tractable, or simply intractable, within Boltzmann-Gibbs statistical mechanics. Nonextensive statistical mechanics is a generalization of the standard formalism which addresses such systems, typically characterized by long-range interactions, long-range memory, (multi)fractal structures and similar anomalies. This formalism is based on the entropic form S_q = k (1 - Σ_i p_i^q)/(q - 1), which recovers S_1 = -k Σ_i p_i ln p_i in the limit q → 1. A brief review of the formalism as well as some illustrative applications will be presented. Finally, the a priori calculation of the entropic index q to be associated with specific systems will be exhibited, starting from the knowledge of the corresponding microscopic or mesoscopic dynamics. For nonequilibrium stationary states (e.g., metastable states) and relaxation properties of many ubiquitous systems, this formalism yields asymptotic power laws, just as Boltzmann-Gibbs statistical mechanics yields exponential laws for the thermal equilibrium and relaxation properties of standard systems. Bibliography: http://tsallis.cat.cbpf.br/biblio.htm
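The entropic form is compact enough to verify directly. The didactic sketch below implements S_q = k (1 - Σ_i p_i^q)/(q - 1) and checks that as q → 1 it approaches the Boltzmann-Gibbs (Shannon) entropy:

```python
import numpy as np

def tsallis_entropy(p, q, k=1.0):
    """S_q = k (1 - sum_i p_i^q) / (q - 1); q -> 1 recovers -k sum p ln p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # by convention, zero-probability states contribute nothing
    if abs(q - 1.0) < 1e-12:
        return -k * float(np.sum(p * np.log(p)))  # Boltzmann-Gibbs limit
    return k * (1.0 - float(np.sum(p**q))) / (q - 1.0)

p = [0.5, 0.25, 0.125, 0.125]

shannon = tsallis_entropy(p, 1.0)
near_one = tsallis_entropy(p, 1.0 + 1e-6)
print(shannon, near_one)  # the q -> 1 value approaches the Shannon entropy
```

For this distribution the Shannon entropy is 1.75 ln 2 ≈ 1.213 nats, and S_q with q just above 1 matches it to several decimal places.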
ERIC Educational Resources Information Center
Scheffler, F. L.; March, J. F.
The Aerospace Materials Information Center (AMIC) Selective Dissemination of Information (SDI) program was evaluated by an interview technique after one year of operation. The data base for the SDI consists of the periodic document index records input to the AMIC system. The users are 63 engineers, scientists, and technical administrators at the…
Microform Informing: Use of DIALOG SDI to Produce a Microfiche Announcement Bulletin.
ERIC Educational Resources Information Center
Rowe, Gladys E.
1984-01-01
Describes use of selective dissemination of information (SDI) feature on DIALOG at Sandia Technical Library to produce bulletin announcing library acquisitions of technical reports in microfiche. Microfiche acquisition, developing profile, assembling profile output, costs, and suggestions for improvement are highlighted. Examples of SDI profiles…
Moments of probable seas: statistical dynamics of Planet Ocean
NASA Astrophysics Data System (ADS)
Holloway, Greg
The ocean is too big. From the scale of planetary radius to scales of turbulent microstructure, the range of length scales is 10^9. Likewise for time scales. Classical geophysical fluid dynamics does not have an apparatus for dealing with such complexity, while `brute force' computing on the most powerful supercomputers, extant or presently foreseen, barely scratches this complexity. Yet the everywhere-swirling-churning ocean interacts unpredictably in climate history and climate future - against which we attempt to devise planetary stewardship. Can we better take into account the unpredictability of oceans to improve upon present ocean/climate forecasting? What to do? First, recognize that our goal is to comprehend probabilities of possible oceans. Questions we would ask are posed as moments (expectations). Then the dynamical goal is clear: we seek equations of motion of moments of probable oceans. Classical fluid mechanics offers part of the answer but fails to recognize statistical dynamical aspects (missing the arrow of time as past → future). At probabilities of oceans, the missing physics emerges: moments are forced by gradients of entropy with respect to moments. Time regains its arrow, and first (simplest) approximations to entropy-gradient forces enhance the fidelity of ocean theories and practical models.
Forecasting: it is not about statistics, it is about dynamics.
Judd, Kevin; Stemler, Thomas
2010-01-13
In 1963, the mathematician and meteorologist Edward Lorenz published a paper (Lorenz 1963 J. Atmos. Sci. 20, 130-141) that changed the way scientists think about the prediction of geophysical systems, by introducing the ideas of chaos, attractors, sensitivity to initial conditions and the limitations to forecasting nonlinear systems. Three years earlier, the mathematician and engineer Rudolf Kalman had published a paper (Kalman 1960 Trans. ASME Ser. D, J. Basic Eng. 82, 35-45) that changed the way engineers thought about prediction of electronic and mechanical systems. Ironically, in recent years, geophysicists have become increasingly interested in Kalman filters, whereas engineers have become increasingly interested in chaos. It is argued that more often than not the tracking and forecasting of nonlinear systems has more to do with the nonlinear dynamics that Lorenz considered than it has to do with statistics that Kalman considered. A position with which both Lorenz and Kalman would appear to agree. PMID:19948555
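Lorenz's point about sensitivity to initial conditions is easy to reproduce numerically. The sketch below integrates the Lorenz 1963 system from two initial conditions differing by 10^-8, using a coarse forward-Euler scheme that is crude but adequate for illustrating exponential divergence:

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz 1963 system."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])  # perturbed initial condition

for _ in range(4000):  # integrate both trajectories to t = 20
    a, b = lorenz_step(a), lorenz_step(b)

separation = np.linalg.norm(a - b)
print(separation)  # grows by several orders of magnitude from 1e-8
```

This divergence is precisely why tracking such a system is a nonlinear-dynamics problem, not merely a statistical one: no fixed linear-Gaussian filter assumption captures the state-dependent error growth.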
A statistical model for interpreting computerized dynamic posturography data
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.
2002-01-01
Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
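A small simulation makes the censoring problem concrete. Below, hypothetical latent equilibrium scores always exist, but trials flagged as falls are recorded as zero, producing the mixed discrete-continuous observations the paper describes. The fall rule and all parameters here are invented for illustration; the paper's actual model is fitted by a form of quasi-maximum-likelihood rather than the naive averaging shown.

```python
import numpy as np

rng = np.random.default_rng(5)

# Latent equilibrium scores: always defined, but a trial ending in a
# fall is recorded as zero, giving a mixed discrete-continuous variable.
n = 50_000
latent = rng.normal(loc=70, scale=15, size=n)

# Stylized fall rule: the lower the latent ES, the likelier a fall.
fall_prob = 1.0 / (1.0 + np.exp((latent - 30) / 5.0))
fell = rng.random(n) < fall_prob
observed = np.where(fell, 0.0, np.clip(latent, 0, 100))

naive_mean = observed.mean()
latent_mean = latent.mean()
print(naive_mean, latent_mean)  # the naive mean understates the latent mean
```

Standard regression or ANOVA applied to `observed` inherits exactly this bias, which motivates modeling the latent score and the conditional fall probability jointly.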
Statistical light-mode dynamics of multipulse passive mode locking.
Weill, Rafi; Vodonos, Boris; Gordon, Ariel; Gat, Omri; Fischer, Baruch
2007-09-01
We study the multipulse formation in passive mode locking in the framework of the statistical light-mode dynamics theory. It is a many-body theory that treats the complex many-mode laser system by statistical mechanics. We give a detailed theory and experimental verification for the important case of multiple-pulse formation in the laser cavity. We follow and extend our former work on the subject. We give a detailed analysis with a rigorous calculation of the partition function, the free energy, and the order parameter in the coarse-graining method within the mean-field theory that is exact in the light-mode system. The outcome is a comprehensive picture of multipulse formation and annihilation, pulse after pulse, in an almost quantized manner, as the noise ("temperature") or the light power is varied. We obtain the phase diagram of the system, showing a series of first-order phase transitions, each belonging to a different number of pulses. We also study the hysteresis behavior, typical for such thermodynamic systems. We elaborate on the role of the saturable absorber structure in determining the multipulse formation. The theoretical results are compared to experimental measurements that we obtained with mode-locked fiber lasers, and we find an excellent agreement. PMID:17930204
Pasta nucleosynthesis: Molecular dynamics simulations of nuclear statistical equilibrium
NASA Astrophysics Data System (ADS)
Caplan, M. E.; Schneider, A. S.; Horowitz, C. J.; Berry, D. K.
2015-06-01
Background: Exotic nonspherical nuclear pasta shapes are expected in nuclear matter just below saturation density because of competition between short-range nuclear attraction and long-range Coulomb repulsion. Purpose: We explore the impact nuclear pasta may have on nucleosynthesis during neutron star mergers, when cold dense nuclear matter is ejected and decompressed. Methods: We use a hybrid CPU/GPU molecular dynamics (MD) code to perform decompression simulations of cold dense matter with 51 200 and 409 600 nucleons, from 0.080 fm-3 down to 0.00125 fm-3. Simulations are run for proton fractions YP = 0.05, 0.10, 0.20, 0.30, and 0.40 at temperatures T = 0.5, 0.75, and 1.0 MeV. The final composition of each simulation is obtained using a cluster algorithm and compared to a constant-density run. Results: The sizes of nuclei in the final state of the decompression runs are in good agreement with nuclear statistical equilibrium (NSE) models at temperatures of 1 MeV, while the constant-density runs produce nuclei smaller than those obtained with NSE. Our MD simulations produce unphysical results, with large rod-like nuclei, in the final state of the T = 0.5 MeV runs. Conclusions: Our MD model is valid at higher densities than simple nuclear statistical equilibrium models and may help determine the initial temperatures and proton fractions of matter ejected in mergers.
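The "cluster algorithm" step can be sketched generically as a distance-cutoff union-find that groups nucleons into nuclei; the coordinates and cutoff below are made-up test data, and this is an assumed, simplified form of such an algorithm, not the authors' code.

```python
# Distance-cutoff clustering with union-find: nucleons closer than r_cut
# are joined; connected components are the "nuclei".

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path compression
        i = parent[i]
    return i

def cluster(positions, r_cut):
    n = len(positions)
    parent = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            dist2 = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j]))
            if dist2 < r_cut ** 2:
                ri, rj = find(parent, i), find(parent, j)
                parent[ri] = rj
    sizes = {}
    for i in range(n):
        root = find(parent, i)
        sizes[root] = sizes.get(root, 0) + 1
    return sorted(sizes.values(), reverse=True)

# two well-separated "nuclei" of 3 and 2 nucleons plus one free nucleon
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (10, 10, 10), (11, 10, 10), (30, 30, 30)]
sizes = cluster(pts, r_cut=2.0)
```

A production MD analysis would use a neighbour list instead of the O(n²) pair loop, but the component-counting logic is the same.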
OPEN PROBLEM: Orbits' statistics in chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Arnold, V.
2008-07-01
This paper shows how the measurement of the stochasticity degree of a finite sequence of real numbers, published by Kolmogorov in Italian in a journal of insurance statistics, can be usefully applied to measure the objective stochasticity degree of sequences originating from dynamical systems theory and from number theory. Namely, whenever the value of Kolmogorov's stochasticity parameter for a given sequence of numbers is too small (or too big), one may conclude that the conjecture describing this sequence as a sample of independent values of a random variable is highly improbable. Kolmogorov used this strategy fighting (in a paper in 'Doklady', 1940) against Lysenko, who had tried to disprove the classical genetic laws of Mendel experimentally. Calculating his stochasticity parameter for the numbers in Lysenko's experiment reports, Kolmogorov deduced that, while these numbers differed from the exact fulfilment of Mendel's 3 : 1 law, any smaller deviation would be a manifestation of falsification of the reported numbers. The calculation of the values of the stochasticity parameter would be useful for many other generators of pseudorandom numbers and for many other chaotically looking statistics, including even the distribution of the prime numbers (discussed in this paper as an example).
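Kolmogorov's stochasticity parameter is essentially the scaled Kolmogorov-Smirnov statistic, λ_n = √n · sup_x |F_n(x) − F(x)|. A minimal sketch, assuming for illustration a sequence meant to be uniform on [0, 1]:

```python
# Kolmogorov's stochasticity parameter lambda_n = sqrt(n) * D_n, where D_n is
# the sup-distance between the empirical CDF and the theoretical CDF
# (taken here to be uniform on [0, 1]).

def kolmogorov_parameter(xs):
    n = len(xs)
    xs = sorted(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # the empirical CDF jumps from i/n to (i+1)/n at x; theoretical CDF is x
        d = max(d, abs((i + 1) / n - x), abs(x - i / n))
    return n ** 0.5 * d

lam = kolmogorov_parameter([0.1, 0.3, 0.5, 0.7, 0.9])
```

As the abstract notes, a value of λ far below the typical range of the Kolmogorov distribution is just as suspicious as one far above it, which is the logic Kolmogorov applied to Lysenko's too-perfect 3 : 1 ratios.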
Statistical mechanics and dynamics of two supported stacked lipid bilayers.
Manghi, Manoel; Destainville, Nicolas
2010-03-16
The statistical physics and dynamics of double supported bilayers are studied theoretically. The main goal in designing double supported lipid bilayers is to obtain model systems of biomembranes: the upper bilayer is meant to be almost freely floating, the substrate being screened by the lower bilayer. The fluctuation-induced repulsion between the membranes, and between the lower membrane and the wall, is explicitly taken into account using a Gaussian variational approach. It is shown that the variational parameters, the "effective" adsorption strength and the average distance to the substrate, depend strongly on temperature and on the membrane elastic moduli, the bending rigidity and the microscopic surface tension, which is a signature of the crucial role played by membrane fluctuations. The range of stability of these supported membranes is studied, showing a complex dependence on the bare adsorption strengths. In particular, the experimental conditions for having an upper membrane slightly perturbed by the lower one and still bound to the surface are found. The theoretical calculation of the damping rates associated with membrane normal modes includes hydrodynamic friction by the wall and hydrodynamic interactions between the two membranes. PMID:20000797
A Statistical Model for In Vivo Neuronal Dynamics
Surace, Simone Carlo; Pfister, Jean-Pascal
2015-01-01
Single-neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. These types of models have been extensively fitted to in vitro data, where the input current is controlled. However, they are of little use when it comes to characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single-neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process in which the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as on the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptation, as well as arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. Finally, we show that this model can be used to characterize, and therefore precisely compare, various intracellular in vivo recordings from different animals and experimental conditions. PMID:26571371
Modeling Insurgent Dynamics Including Heterogeneity. A Statistical Physics Approach
NASA Astrophysics Data System (ADS)
Johnson, Neil F.; Manrique, Pedro; Hui, Pak Ming
2013-05-01
Despite the myriad complexities inherent in human conflict, a common pattern has been identified across a wide range of modern insurgencies and terrorist campaigns involving the severity of individual events, namely an approximate power law x^(-α) with exponent α ≈ 2.5. We recently proposed a simple toy model to explain this finding, built around the reported loose and transient nature of operational cells of insurgents or terrorists. Although it reproduces the 2.5 power law, this toy model assumes every actor is identical. Here we generalize this toy model to incorporate individual heterogeneity while retaining the model's analytic solvability. In the case of kinship or team rules guiding the cell dynamics, we find that the 2.5 analytic result persists; however, an interesting new phase transition emerges, whereby the cell distribution moves to a phase in which the individuals become isolated and hence all the cells have spontaneously disintegrated. Apart from extending our understanding of the empirical 2.5 result for insurgencies and terrorism, this work illustrates how other statistical physics models of human grouping might usefully be generalized in order to explore the effect of diverse human social, cultural or behavioral traits.
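An exponent like the 2.5 above is conventionally estimated with the continuous maximum-likelihood estimator α̂ = 1 + n / Σ ln(x_i/x_min); the sketch below is a generic check of that estimator on synthetic data generated with α = 2.5, not the authors' fitting procedure.

```python
import math
import random

# MLE for a continuous power law p(x) ~ x^(-alpha), x >= x_min:
# alpha_hat = 1 + n / sum(ln(x_i / x_min)).

def powerlaw_mle(xs, x_min):
    tail = [x for x in xs if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

def sample_powerlaw(alpha, x_min, n, seed=0):
    rng = random.Random(seed)
    # inverse-CDF sampling: x = x_min * (1 - u)^(-1/(alpha - 1))
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

xs = sample_powerlaw(2.5, 1.0, 20000)
alpha_hat = powerlaw_mle(xs, 1.0)
```

With 20 000 samples the estimator's standard error is roughly (α − 1)/√n ≈ 0.01, so a recovered value near 2.5 is expected.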
NASA Technical Reports Server (NTRS)
Schweikhard, W. G.; Chen, Y. S.
1986-01-01
The Melick method of inlet flow dynamic distortion prediction by statistical means is outlined. A hypothetical vortex model is used as the basis for the mathematical formulations. The main variables are identified by matching the theoretical total-pressure rms ratio with the measured total-pressure rms ratio. Data comparisons using the HiMAT inlet test data set indicate satisfactory prediction of the dynamic peak distortion for cases with boundary-layer-control-device vortex generators. A method for dynamic probe selection was developed. The validity of the probe selection criteria is demonstrated by comparing the reduced-probe predictions with the 40-probe predictions. It is indicated that the number of dynamic probes can be reduced to as few as two and still retain good accuracy.
NASA Astrophysics Data System (ADS)
Potirakis, Stelios M.; Zitis, Pavlos I.; Eftaxias, Konstantinos
2013-07-01
The study of complex systems holds that the dynamics of complex systems are founded on universal principles that may be used to describe a great variety of natural, artificial, and social systems. Several authors have suggested that earthquake dynamics and the dynamics of economic (financial) systems can be analyzed within similar mathematical frameworks. We apply concepts of nonextensive statistical physics to time-series data of observable manifestations of the underlying complex processes that end in these different extreme events, in order to support the suggestion that a dynamical analogy exists between a financial crisis (in the form of a share or index price collapse) and a single earthquake. We also investigate the existence of such an analogy by means of scale-free statistics (the Gutenberg-Richter distribution of event sizes). We show that the populations of (i) fracto-electromagnetic events rooted in the activation of a single fault, emerging prior to a significant earthquake, (ii) trade-volume events of different shares and economic indices prior to a collapse, and (iii) price-fluctuation events (the difference between the maximum and minimum price within a day) of different shares and economic indices prior to a collapse follow both the traditional Gutenberg-Richter law and a nonextensive model for earthquake dynamics, with similar parameter values. The obtained results imply the existence of a dynamical analogy between earthquakes and economic crises, which moreover follow the dynamics of seizures, magnetic storms and solar flares.
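The Gutenberg-Richter law invoked here, log10 N(≥M) = a − bM, has a standard maximum-likelihood b-value estimator (Aki's formula). The sketch below checks it on synthetic magnitudes and is a generic illustration, not the nonextensive model used in the paper.

```python
import math
import random

# Aki's maximum-likelihood b-value: b_hat = log10(e) / (mean(M) - M_min).
# Synthetic magnitudes below are generated with b = 1, purely for illustration.

def b_value(mags, m_min):
    mags = [m for m in mags if m >= m_min]
    return math.log10(math.e) / (sum(mags) / len(mags) - m_min)

def sample_gr(b, m_min, n, seed=42):
    rng = random.Random(seed)
    beta = b * math.log(10.0)
    # magnitudes above m_min are exponentially distributed with rate beta
    return [m_min + rng.expovariate(beta) for _ in range(n)]

mags = sample_gr(1.0, 2.0, 50000)
b_hat = b_value(mags, 2.0)
```

The same exceedance-counting logic applies whether the "events" are earthquakes, trade volumes, or daily price ranges, which is what makes the cross-domain comparison possible.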
Crystallization and preliminary X-ray studies of SdiA from Escherichia coli
Wu, Chunai; Lokanath, Neratur K.; Kim, Dong Young; Nguyen, Lan Dao Ngoc; Kim, Kyeong Kyu
2008-01-01
E. coli SdiA was overexpressed, purified and crystallized. The crystals belonged to the hexagonal space group P6₁22 or P6₅22 and diffracted to 2.7 Å resolution. SdiA enhances cell division by regulating the ftsQAZ operon in Escherichia coli as a transcription activator. In addition, SdiA is suggested to play a role in detecting quorum signals that emanate from other species. It is therefore a homologue of LuxR, a cognate quorum-sensing receptor that recognizes a quorum signal and activates the quorum responses. To elucidate the role of SdiA and its functional and structural relationship to LuxR, structural studies were performed on E. coli SdiA. Recombinant SdiA was overexpressed, purified and crystallized at 287 K using the hanging-drop vapour-diffusion method. X-ray diffraction data from a native crystal were collected with 99.7% completeness to 2.7 Å resolution with an R_merge of 6.0%. The crystals belong to the hexagonal space group P6₁22 or P6₅22, with unit-cell parameters a = b = 130.47, c = 125.23 Å.
Examining rainfall and cholera dynamics in Haiti using statistical and dynamic modeling approaches.
Eisenberg, Marisa C; Kujbida, Gregory; Tuite, Ashleigh R; Fisman, David N; Tien, Joseph H
2013-12-01
Haiti has been in the midst of a cholera epidemic since October 2010. Rainfall is thought to be associated with cholera here, but this relationship has only begun to be quantitatively examined. In this paper, we quantitatively examine the link between rainfall and cholera in Haiti for several different settings (including urban, rural, and displaced person camps) and spatial scales, using a combination of statistical and dynamic models. Statistical analysis of the lagged relationship between rainfall and cholera incidence was conducted using case crossover analysis and distributed lag nonlinear models. Dynamic models consisted of compartmental differential equation models including direct (fast) and indirect (delayed) disease transmission, where indirect transmission was forced by empirical rainfall data. Data sources include cholera case and hospitalization time series from the Haitian Ministry of Public Health, the United Nations Water, Sanitation and Health Cluster, the International Organization for Migration, and Hôpital Albert Schweitzer. Rainfall data were obtained from rain gauges operated by the U.S. Geological Survey and the Haiti Regeneration Initiative, and from remote-sensing rainfall estimates of the National Aeronautics and Space Administration Tropical Rainfall Measuring Mission. A strong relationship between rainfall and cholera was found for all spatial scales and locations examined. Increased rainfall was significantly correlated with increased cholera incidence 4-7 days later. Forcing the dynamic models with rainfall data resulted in good fits to the cholera case data, and rainfall-based predictions from the dynamic models closely matched observed cholera cases. These models provide a tool for planning and managing the epidemic as it continues. PMID:24267876
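The reported 4-7-day lag can be illustrated with a simple lagged cross-correlation scan. The series below are synthetic, with cases built as a noisy 5-day-lagged copy of rainfall; the actual study used case-crossover analysis and distributed lag nonlinear models, which are far more elaborate than this sketch.

```python
import random

# Scan Pearson correlation between rainfall and cases over candidate lags.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def best_lag(rain, cases, max_lag=10):
    corrs = {lag: pearson(rain[:len(rain) - lag], cases[lag:])
             for lag in range(max_lag + 1)}
    return max(corrs, key=corrs.get), corrs

rng = random.Random(0)
rain = [rng.random() for _ in range(200)]
# cases follow rainfall with a 5-day delay plus noise (synthetic test data)
cases = [rain[t - 5] + 0.1 * rng.random() if t >= 5 else rng.random()
         for t in range(200)]
lag, corrs = best_lag(rain, cases)
```

On real surveillance data one would also need to handle reporting delays and seasonality before reading anything into the peak lag.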
Detection of Other Microbial Species by Salmonella: Expression of the SdiA Regulon
Smith, Jenée N.; Ahmer, Brian M. M.
2003-01-01
Salmonella, Escherichia, and Klebsiella do not encode any recognized type of N-acylhomoserine lactone (AHL) synthase, and consistent with this, they do not synthesize AHLs under any conditions tested. However, they do encode an AHL receptor of the LuxR family, named SdiA. MudJ fusions in four loci are known to respond to plasmid-encoded sdiA in Salmonella, but only the rck locus has been described. Here we report the location and sequence analysis of the remaining three loci. The srg-6::MudJ fusion is within gtgA of the Gifsy-2 prophage, and the srg-7::MudJ fusion is within PSLT61 of the virulence plasmid. Both fusions are in the antisense orientation. The third fusion, srgE5::MudJ, is within a horizontally acquired gene of unknown function at 33.6 centisomes that we have named srgE. Previously, sdiA expressed from its natural position in the chromosome was demonstrated to activate a plasmid-based transcriptional fusion to the rck promoter in response to AHL production by other bacterial species. However, the MudJ fusions did not respond to chromosomal sdiA. Here we report that MudJ fusions to three of the four loci (not srg-6) are activated by AHL in an sdiA-dependent manner during growth in motility agar (0.25% agar) but not during growth in top agar (0.7% agar) or on agar plates (1.2% agar). In motility agar, the srgE promoter responds to sdiA at 30°C and higher, while the rck and srg-7 promoters respond only at 37 or 42°C. Substantial AHL-independent SdiA activity was observed at 30°C but not at 37°C. PMID:12562806
SdiA aids enterohemorrhagic Escherichia coli carriage by cattle fed a forage or grain diet.
Sheng, Haiqing; Nguyen, Y N; Hovde, Carolyn J; Sperandio, Vanessa
2013-09-01
Enterohemorrhagic Escherichia coli (EHEC) causes hemorrhagic colitis and life-threatening complications. The main reservoirs for EHEC are healthy ruminants. We previously reported that SdiA senses acyl homoserine lactones (AHLs) in the bovine rumen and activates expression of the glutamate-dependent acid resistance (gad) genes, priming EHEC's acid resistance before the bacteria pass into the acidic abomasum. Conversely, SdiA represses expression of the locus of enterocyte effacement (LEE) genes, whose expression is not required for bacterial survival in the rumen but is necessary for efficient colonization of the rectoanal junction (RAJ) mucosa. Our previous studies showed that SdiA-dependent regulation was necessary for efficient EHEC colonization of cattle fed a grain diet. Here, we compared the role of SdiA in EHEC colonization of cattle fed a forage hay diet. We detected AHLs in the rumen of cattle fed a hay diet, and these AHLs activated gad gene expression in an SdiA-dependent manner. The rumen fluid and fecal samples from hay-fed cattle were near neutrality, while the same digesta samples from grain-fed animals were acidic. Cattle fed either grain or hay and challenged orally with EHEC carried the bacteria similarly: EHEC was cleared from the rumen within days and from the RAJ mucosa after approximately one month. In competition trials, in which animals were challenged with both wild-type and sdiA deletion mutant bacteria, diet did not affect the outcome: the wild-type strain was better able to persist and colonize. However, the wild-type strain had a greater advantage over the sdiA deletion mutant at the RAJ mucosa among cattle fed the grain diet.
Statistical Anomaly Detection for Monitoring of Human Dynamics
NASA Astrophysics Data System (ADS)
Kamiya, K.; Fuse, T.
2015-05-01
Understanding of human dynamics has drawn attention in various areas. Owing to the wide spread of positioning technologies that use GPS or public Wi-Fi, location information can be obtained with high spatio-temporal resolution and at low cost. By collecting sets of individual location information in real time, monitoring of human dynamics has recently become feasible and is expected to lead to dynamic traffic control in the future. Although such monitoring focuses on detecting anomalous states of human dynamics, anomaly detection methods have been developed ad hoc and are not fully systematized. This research aims to define an anomaly detection problem for human dynamics monitoring with gridded population data and to develop an anomaly detection method based on that definition. Based on a comprehensive review, we discuss the characteristics of anomaly detection for human dynamics monitoring and categorize our problem as a semi-supervised anomaly detection problem that detects contextual anomalies behind time-series data. We developed an anomaly detection method based on a sticky HDP-HMM, which is able to estimate the number of hidden states from the input data. Results of an experiment with synthetic data show that the proposed method has good fundamental performance with respect to the detection rate. In an experiment with real gridded population data, an anomaly was detected when and where an actual social event had occurred.
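The sticky HDP-HMM used in the paper is well beyond a few lines, but the notion of a contextual anomaly in periodic population data can be illustrated with a much simpler baseline: flag counts that deviate strongly from the mean of the same phase (e.g. hour of day). Everything below, data included, is an invented illustration, not the authors' method.

```python
# Baseline contextual anomaly detector for a periodic count series:
# compare each count to the mean/std of its phase (t mod period).

def contextual_anomalies(counts, period=24, z_thresh=2.5):
    by_phase = {}
    for t, c in enumerate(counts):
        by_phase.setdefault(t % period, []).append(c)
    stats = {}
    for phase, vals in by_phase.items():
        mu = sum(vals) / len(vals)
        sd = (sum((v - mu) ** 2 for v in vals) / len(vals)) ** 0.5
        stats[phase] = (mu, sd if sd > 0 else 1.0)  # guard constant phases
    return [t for t, c in enumerate(counts)
            if abs(c - stats[t % period][0]) / stats[t % period][1] > z_thresh]

# ten "days" of perfectly regular hourly counts, with one injected spike
counts = [100.0] * 240
counts[117] = 1000.0
flagged = contextual_anomalies(counts)
```

Unlike the HDP-HMM, this baseline needs the period supplied in advance and cannot discover regimes on its own; that gap is exactly what motivates the hidden-state approach.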
Measures of trajectory ensemble disparity in nonequilibrium statistical dynamics
Crooks, Gavin; Sivak, David
2011-06-03
Many interesting divergence measures between conjugate ensembles of nonequilibrium trajectories can be experimentally determined from the work distribution of the process. Herein, we review the statistical and physical significance of several of these measures, in particular the relative entropy (dissipation), Jeffreys divergence (hysteresis), Jensen-Shannon divergence (time asymmetry), Chernoff divergence (work cumulant generating function), and Rényi divergence.
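For discrete distributions these divergences have short closed forms; a minimal sketch in nats, with toy distributions p and q chosen purely for illustration:

```python
import math

# Relative entropy (KL), Jeffreys (symmetrized KL), and Jensen-Shannon
# divergences between two discrete probability distributions.

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jeffreys(p, q):
    return kl(p, q) + kl(q, p)

def jensen_shannon(p, q):
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
```

In the trajectory-ensemble setting, p and q would be the forward and reverse work distributions, so the KL term reads off the average dissipation and the Jeffreys term the hysteresis.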
Material Phase Causality or a Dynamics-Statistical Interpretation of Quantum Mechanics
Koprinkov, I. G.
2010-11-25
The internal phase dynamics of a quantum system interacting with an electromagnetic field is revealed in detail. Theoretical and experimental evidence of a causal relation between the phase of the wave function and the dynamics of the quantum system is presented systematically for the first time. A dynamics-statistical interpretation of quantum mechanics is introduced.
Identification of sdiA-regulated genes in a mouse commensal strain of Enterobacter cloacae
Sabag-Daigle, Anice; Dyszel, Jessica L.; Gonzalez, Juan F.; Ali, Mohamed M.; Ahmer, Brian M. M.
2015-01-01
Many bacteria determine their population density using quorum sensing. The most intensively studied mechanism of quorum sensing utilizes proteins of the LuxI family to synthesize a signaling molecule of the acylhomoserine lactone (AHL) type, and a protein of the LuxR family to bind AHL and regulate transcription. Genes regulated by quorum sensing often encode functions that are most effective when a group of bacteria are working cooperatively (e.g., luminescence, biofilm formation, host interactions). Bacteria in the Escherichia, Salmonella, Klebsiella, and Enterobacter genera do not encode an AHL synthase but they do encode an AHL receptor of the LuxR family, SdiA. Instead of detecting their own AHL synthesis, these organisms use SdiA to detect the AHLs synthesized by other bacterial species. In this study, we used a genetic screen to identify AHL-responsive genes in a commensal Enterobacter cloacae strain that was isolated from a laboratory mouse. The genes include a putative type VI secretion system, copA (a copper transporter), and fepE (extends O-antigen chain length). A new transposon mutagenesis strategy and suicide vectors were used to construct an sdiA mutant of E. cloacae. The AHL-responsiveness of all fusions was entirely sdiA-dependent, although some genes were regulated by sdiA in the absence of AHL. PMID:26075189
Statistical analysis of modeling error in structural dynamic systems
NASA Technical Reports Server (NTRS)
Hasselman, T. K.; Chrostowski, J. D.
1990-01-01
The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
New Dynamical-Statistical Techniques for Wind Power Prediction
NASA Astrophysics Data System (ADS)
Stathopoulos, C.; Kaperoni, A.; Galanis, G.; Kallos, G.
2012-04-01
The increased use of renewable energy sources, and especially of wind power, has revealed the significance of accurate environmental and wind power predictions over wind farms, which critically affect the integration of the produced power into the general grid. This issue is studied in the present paper by means of high-resolution physical and statistical models. Two numerical weather prediction (NWP) systems, namely SKIRON and RAMS, are used to simulate the flow characteristics in selected wind farms in Greece. The NWP model output is post-processed using Kalman and Kolmogorov statistics in order to remove systematic errors. Modeled wind predictions, in combination with available on-site observations, are used to estimate the wind power potential using a variety of statistical power prediction models based on nonlinear and hyperbolic functions. The obtained results reveal the strong dependence of forecast uncertainty on wind variation, the limited influence of previously recorded power values, and the advantages that nonlinear, non-polynomial functions can have in the successful control of power curve characteristics. This methodology is developed within the framework of the FP7 projects WAUDIT and MARINA PLATFORM.
Lars Onsager Prize Lecture: Statistical Dynamics of Disordered Systems
NASA Astrophysics Data System (ADS)
Fisher, Daniel S.
2013-03-01
The properties of many systems are strongly affected by quenched disorder that arose from their past history but is frozen on the time scales of interest. Although equilibrium phases and phase transitions in disordered materials can be very different from their counterparts in pure systems, the most striking phenomena involve non-equilibrium dynamics. The state of understanding of some of these will be reviewed, including the approach to equilibrium in spin glasses and the onset of motion in driven systems such as vortices in superconductors or earthquakes on geological faults. The potential for developing an understanding of the short-term evolutionary dynamics of microbial populations, by taking advantage of the randomness of their past histories and of biological complexities, will be discussed briefly.
Eddies in the Red Sea: A statistical and dynamical study
NASA Astrophysics Data System (ADS)
Zhan, Peng; Subramanian, Aneesh C.; Yao, Fengchao; Hoteit, Ibrahim
2014-06-01
Sea level anomaly (SLA) data spanning 1992-2012 were analyzed to study the statistical properties of eddies in the Red Sea. An algorithm that identifies winding angles was employed to detect 4998 eddies propagating along 938 unique eddy tracks. Statistics suggest that eddies are generated across the entire Red Sea but are prevalent in certain regions. A high number of eddies is found in the central basin between 18°N and 24°N. More than 87% of the detected eddies have a radius ranging from 50 to 135 km. Both the intensity and the relative vorticity scale of these eddies decrease as the eddy radii increase. The averaged eddy lifespan is approximately 6 weeks. Anticyclonic eddies (AEs) and cyclonic eddies (CEs) have different deformation features, and those with stronger intensities are less deformed and more circular. Analysis of long-lived eddies suggests that they are likely to appear in the central basin, with AEs tending to move northward. In addition, their eddy kinetic energy (EKE) increases gradually throughout their lifespans. The annual cycles of CEs and AEs differ, although both exhibit significant seasonal cycles of intensity, with winter and summer peaks appearing in February and August, respectively. The seasonal cycle of EKE is negatively correlated with stratification but positively correlated with the vertical shear of horizontal velocity and the eddy growth rate, suggesting that baroclinic instability is responsible for eddy activity in the Red Sea.
Statistical dynamics of internal gravity waves-turbulence
NASA Astrophysics Data System (ADS)
Frederiksen, J. S.; Bell, R. C.
Numerical simulations of internal gravity wave turbulence are carried out for the inviscid, viscous and forced-dissipative two-dimensional primitive equations using the spectral method. Some of the results are compared with the predictions of the eddy-damped quasi-normal Markovian (EDQNM) closure for internal waves of Carnevale and Frederiksen, generalized for periodic boundary conditions and possible random forcing and dissipation. The EDQNM reduces to the Boltzmann equation of resonant interaction theory in the continuum space limit and as the forcing and dissipation vanish. However, the limit is singular in the sense that, as well as conservation of total energy, E, and of the total cross-correlation between the vorticity and buoyancy fields, C, an additional conservation law, viz. z-momentum, Pz, holds in the limit. This means that the resonant interaction equilibrium (RIE) solution of the Boltzmann equation differs from the statistical mechanical equilibrium (SME) solution of the EDQNM closure. The statistical stability of the SME and RIE spectra for the primitive equations is tested by integrating the inviscid equations using initial realizations of these spectra with random phases. It is found that E and C are accurately conserved while Pz undergoes large-amplitude variations. The approach to equilibrium of initially disequilibrated spectra is monitored by examining the evolution of the entropy. The increase and asymptotic approach to a constant value corresponding to complete chaos is consistent with the behaviour predicted by the EDQNM closure. For the viscous decay and forced-dissipative experiments, the behaviour of the entropy is also consistent with that predicted by the EDQNM closure. There is approximate equipartition of potential and total kinetic energies throughout the integrations from initial conditions having equal potential and total kinetic energies, as well as equal vertical and horizontal energies, but as expected, the ratio of horizontal to vertical
Statistical characterization of spatiotemporal sediment dynamics in the Venice lagoon
NASA Astrophysics Data System (ADS)
Carniello, Luca; D'Alpaos, Andrea; Botter, Gianluca; Rinaldo, Andrea
2016-05-01
Characterizing the dynamics of suspended sediment is crucial when investigating the long-term evolution of tidal landscapes. Here we apply a widely tested mathematical model which describes the dynamics of cohesive and noncohesive sediments, driven by the combined effect of tidal currents and wind waves, using a 1-year-long time series of observed water levels and wind data from the Venice lagoon. The spatiotemporal evolution of the computed suspended sediment concentration (SSC) is analyzed on the basis of the "peak over threshold" theory. Our analysis suggests that events characterized by high SSC can be modeled as a marked Poisson process over most of the lagoon. The interarrival time between two consecutive over-threshold events, the intensity of peak excesses, and the duration are found to be exponentially distributed random variables over most of the tidal flats. Our study suggests that the intensity and duration of over-threshold events are temporally correlated, while almost no correlation exists between interarrival times and either durations or intensities. The benthic vegetation colonizing the central-southern part of the Venice lagoon is found to play a crucial role in sediment dynamics: vegetation locally decreases the frequency of significant resuspension events while also affecting spatiotemporal patterns of SSC in adjacent areas. Spatial patterns of the mean interarrival time of over-threshold SSC events are found to be less heterogeneous than the corresponding patterns for over-threshold bottom shear stress events, because of the role of advection/dispersion processes in mixing suspended sediments within the lagoon. Implications for long-term morphodynamic modeling of tidal environments are discussed.
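The peak-over-threshold extraction underlying this analysis can be sketched as follows; the function name, threshold handling and return values are illustrative assumptions, not the authors' code:

```python
import numpy as np

def extract_pot_events(series, threshold, dt=1.0):
    """Extract over-threshold events from a time series.

    Returns interarrival times (between event starts), peak excesses
    (maximum exceedance of the threshold) and event durations, the
    three quantities whose distributions the study examines.
    """
    above = series > threshold
    # Locate contiguous runs of above-threshold samples via edge detection.
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.insert(starts, 0, 0)
    if above[-1]:
        ends = np.append(ends, len(series))
    durations = (ends - starts) * dt
    peaks = np.array([series[s:e].max() - threshold
                      for s, e in zip(starts, ends)])
    interarrivals = np.diff(starts) * dt
    return interarrivals, peaks, durations
```

Fitting exponential distributions to each returned array (and checking their mutual correlations) would then reproduce the marked-Poisson diagnostics described in the abstract.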
Noisy inverted pendulums with time-delayed feedback: Statistical Dynamics
NASA Astrophysics Data System (ADS)
Milton, John G.
2001-03-01
The question of how an inverted pendulum can be stabilized has puzzled scientists for over 300 years. Studies of postural sway and stick balancing at the fingertip provide insights into how the human nervous system solves this problem. Time delays and noise are intrinsic features of neural control, and thus models take the form of stochastic delay-differential equations. Examples are presented to show that the statistical properties of the fluctuations in posture and stick balancing are dominated by noise-dependent, nonlinear phenomena: noise-induced switching between limit cycle attractors (postural sway) and "on-off intermittency" arising from the stochastic forcing of a control parameter across a stability boundary (stick balancing). The existence of these phenomena is difficult to reconcile with classical concepts of neural feedback control.
An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models
ERIC Educational Resources Information Center
Prindle, John J.; McArdle, John J.
2012-01-01
This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…
Human turnover dynamics during sleep: Statistical behavior and its modeling
NASA Astrophysics Data System (ADS)
Yoneyama, Mitsuru; Okuma, Yasuyuki; Utsumi, Hiroya; Terashi, Hiroo; Mitoma, Hiroshi
2014-03-01
Turnover is a typical intermittent body movement during sleep. Exploring its behavior may provide insights into the mechanisms and management of sleep. However, little is understood about the dynamic nature of turnover in healthy humans and how it can be modified in disease. Here we present a detailed analysis of turnover signals collected by accelerometry from healthy elderly subjects and age-matched patients with neurodegenerative disorders such as Parkinson's disease. In healthy subjects, the time intervals between consecutive turnover events exhibit a well-separated bimodal distribution with one mode at ⩽10 s and the other at ⩾100 s, whereas such bimodality tends to disappear in neurodegenerative patients. The bimodality and fine temporal structure (⩽10 s) are not revealed by conventional sleep recordings, which have coarser time resolution (≈30 s). Moreover, we estimate the scaling exponent of the interval fluctuations, which also shows a clear difference between healthy subjects and patients. We incorporate these experimental results into a computational model of human decision making. A decision is to be made at each simulation step between two choices: to keep on sleeping or to make a turnover, the selection of which is determined dynamically by comparing a pair of random numbers assigned to each choice. This decision is weighted by a single parameter that reflects the depth of sleep. The resulting simulated behavior accurately replicates many aspects of observed turnover patterns, including the appearance or disappearance of bimodality, and leads to several predictions, suggesting that the depth parameter may be useful as a quantitative measure for differentiating between normal and pathological sleep. These findings have significant clinical implications and may pave the way for the development of practical sleep assessment technologies.
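The two-choice decision rule described in the abstract can be sketched as a toy simulation; the multiplicative weighting of the "keep sleeping" draw by the depth parameter is an assumed form, not the paper's published rule:

```python
import random

def simulate_turnovers(n_steps, depth, seed=0):
    """Toy two-choice decision model of sleep turnovers.

    At each step, one random number is drawn per choice; the 'keep
    sleeping' draw is weighted by the sleep-depth parameter, and a
    turnover is recorded when the 'turnover' draw wins. Returns the
    intervals between consecutive turnover events.
    """
    rng = random.Random(seed)
    events = []
    for t in range(n_steps):
        sleep_score = rng.random() * depth  # deeper sleep -> harder to win
        turn_score = rng.random()
        if turn_score > sleep_score:
            events.append(t)
    return [b - a for a, b in zip(events, events[1:])]
```

Even this crude rule reproduces the qualitative prediction that a larger depth parameter yields sparser turnover events.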
Introduction to Focus Issue: Statistical mechanics and billiard-type dynamical systems
NASA Astrophysics Data System (ADS)
Leonel, Edson D.; Beims, Marcus W.; Bunimovich, Leonid A.
2012-06-01
Dynamical systems of the billiard type are of fundamental importance for the description of numerous phenomena observed in many different fields of research, including statistical mechanics, Hamiltonian dynamics, nonlinear physics, and many others. This Focus Issue presents the recent progress in this area, with contributions from both the mathematical and the physical standpoint.
Statistical precision and sensitivity of measures of dynamic gait stability.
Bruijn, Sjoerd M; van Dieën, Jaap H; Meijer, Onno G; Beek, Peter J
2009-04-15
Recently, two methods for quantifying a system's dynamic stability have been applied to human locomotion: local stability (quantified by finite time maximum Lyapunov exponents, lambda(S-stride) and lambda(L-stride)) and orbital stability (quantified as maximum Floquet multipliers, MaxFm). Thus far, however, it has remained unclear how many data points are required to obtain precise estimates of these measures during walking, and to what extent these estimates are sensitive to changes in walking behaviour. To resolve these issues, we collected long data series of healthy subjects (n=9) walking on a treadmill in three conditions (normal walking at 0.83 m/s (3 km/h) and 1.38 m/s (5 km/h), and walking at 1.38 m/s (5 km/h) while performing a Stroop dual task). Data series from 0.83 and 1.38 m/s trials were submitted to a bootstrap procedure and paired t-tests for samples of different data series lengths were performed between 0.83 and 1.38 m/s and between 1.38 m/s with and without Stroop task. Longer data series led to more precise estimates for lambda(S-stride), lambda(L-stride), and MaxFm. All variables showed an effect of data series length. Thus, when estimating and comparing these variables across conditions, data series covering an equal number of strides should be analysed. lambda(S-stride), lambda(L-stride), and MaxFm were sensitive to the change in walking speed while only lambda(S-stride) and MaxFm were sensitive enough to capture the modulations of walking induced by the Stroop task. Still, these modulations could only be detected when using a substantial number of strides (>150). PMID:19135478
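The bootstrap logic behind the precision-versus-data-length finding can be illustrated with a simplified sketch; here the spread of a resampled mean stands in for the Lyapunov exponents and Floquet multipliers actually bootstrapped in the study:

```python
import numpy as np

def bootstrap_precision(series, n_strides, n_boot=200, seed=0):
    """Bootstrap estimate of the precision of a stride-level statistic.

    Repeatedly resample n_strides values with replacement and report
    the standard deviation of the resulting estimates: a smaller value
    means a more precise estimate for that data-series length.
    """
    rng = np.random.default_rng(seed)
    estimates = [rng.choice(series, size=n_strides, replace=True).mean()
                 for _ in range(n_boot)]
    return float(np.std(estimates))
```

Comparing the returned spread at different `n_strides` mirrors the paper's conclusion that longer data series yield more precise stability estimates.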
ERIC Educational Resources Information Center
DOWNIE, CURRIE S.; HOSHOVSKY, ALEXANDER G.
An overview of the operational and experimental systems established for the selective dissemination of scientific and technical information (SDI) is presented. An attempt has also been made to identify the trends which may shape the future development of the selective dissemination procedures. The report is based in part on the existing SDI…
Social Development in Hong Kong: Development Issues Identified by Social Development Index (SDI)
ERIC Educational Resources Information Center
Chua, Hoi-wai; Wong, Anthony K. W.; Shek, Daniel T. L.
2010-01-01
Surviving the aftermath of the Asian Financial Crisis and of SARS in 2003, Hong Kong's economy has regained its momentum, and its growth in recent years has been remarkable. Nevertheless, as reflected by the Social Development Index (SDI), economic growth in Hong Kong does not seem to have benefited the people of the city at…
Kazumba, Shija; Gillerman, Leonid; DeMalach, Yoel; Oron, Gideon
2010-01-01
Scarcity of fresh, high-quality water has heightened the importance of wastewater reuse, primarily in dry regions, together with improving its efficient use by implementing the Subsurface Drip Irrigation (SDI) method. Sustainable effluent reuse combines soil and plant aspects, along with the maintainability of the application system. In this study, field experiments were conducted for two years on the commercial farm of Revivim and Mashabay-Sade farm (RMF) southeast of the City of Beer-Sheva, Israel. The purpose was to examine the response of alfalfa (Medicago sativa), as a perennial model crop, to secondary domestic effluent application by means of an SDI system as compared with conventional overhead sprinkler irrigation. Emitters were installed at different depths and spacings. Similar amounts of effluent were applied to all plots during the experimental period. The results indicated that in all SDI treatments the alfalfa yields were 11% to 25% higher than those obtained under sprinkler-irrigated plots, except for the treatment in which the drip laterals were 200 cm apart. The average Water Use Efficiency (WUE) was better in all SDI treatments in comparison with the sprinkler-irrigated plots. An economic assessment reveals the dependence of the net profit on the emitters' installation geometry, combined with the market return for alfalfa. PMID:20150698
Monthly to seasonal low flow prediction: statistical versus dynamical models
NASA Astrophysics Data System (ADS)
Ionita-Scholz, Monica; Klein, Bastian; Meissner, Dennis; Rademacher, Silke
2016-04-01
the Alfred Wegener Institute a purely statistical scheme to generate streamflow forecasts for several months ahead. Instead of directly using teleconnection indices (e.g. NAO, AO) the idea is to identify regions with stable teleconnections between different global climate information (e.g. sea surface temperature, geopotential height etc.) and streamflow at different gauges relevant for inland waterway transport. So-called stability (correlation) maps are generated showing regions where streamflow and climate variable from previous months are significantly correlated in a 21 (31) years moving window. Finally, the optimal forecast model is established based on a multiple regression analysis of the stable predictors. We will present current results of the aforementioned approaches with focus on the River Rhine (being one of the world's most frequented waterways and the backbone of the European inland waterway network) and the Elbe River. Overall, our analysis reveals the existence of a valuable predictability of the low flows at monthly and seasonal time scales, a result that may be useful to water resources management. Given that all predictors used in the models are available at the end of each month, the forecast scheme can be used operationally to predict extreme events and to provide early warnings for upcoming low flows.
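The moving-window "stability map" screening described above can be sketched roughly as follows; the function name, correlation threshold and stability criterion are illustrative assumptions, not the operational scheme:

```python
import numpy as np

def stability_map(climate_field, streamflow, window=21, r_crit=0.4):
    """Flag grid cells with stable climate-streamflow teleconnections.

    For each grid cell, correlate the climate variable with streamflow
    in every moving window of `window` years; the cell is flagged as a
    stable predictor region only if the correlation keeps the same sign
    and exceeds the (illustrative) threshold in all windows.
    """
    n_years, ny, nx = climate_field.shape
    stable = np.zeros((ny, nx), dtype=bool)
    n_win = n_years - window + 1
    for j in range(ny):
        for i in range(nx):
            rs = np.array([
                np.corrcoef(climate_field[w:w + window, j, i],
                            streamflow[w:w + window])[0, 1]
                for w in range(n_win)
            ])
            stable[j, i] = (np.all(np.abs(rs) > r_crit)
                            and (np.all(rs > 0) or np.all(rs < 0)))
    return stable
```

The flagged cells would then feed the multiple-regression step that selects the stable predictors for the final forecast model.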
NASA Astrophysics Data System (ADS)
Madadgar, Shahrbanou; AghaKouchak, Amir; Shukla, Shraddhanand; Wood, Andrew W.; Cheng, Linyin; Hsu, Kou-Lin; Svoboda, Mark
2016-07-01
Improving water management in water-stressed regions requires reliable seasonal precipitation prediction, which remains a grand challenge. Numerous statistical and dynamical model simulations have been developed for predicting precipitation. However, both types of models offer limited seasonal predictability. This study outlines a hybrid statistical-dynamical modeling framework for predicting seasonal precipitation. The dynamical component relies on the physically based North American Multi-Model Ensemble (NMME) model simulations (99 ensemble members). The statistical component relies on a multivariate Bayesian model that relates precipitation to atmosphere-ocean teleconnections (also known as an analog-year statistical model). Here the Pacific Decadal Oscillation (PDO), Multivariate ENSO Index (MEI), and Atlantic Multidecadal Oscillation (AMO) are used in the statistical component. The dynamical and statistical predictions are linked using the so-called Expert Advice algorithm, which offers an ensemble response (as an alternative to the ensemble mean) and selects the best precipitation prediction from the contributing statistical and dynamical ensembles. It combines the strength of physically based dynamical simulations and the capability of an analog-year model. An application of the framework in the southwestern United States, which has suffered from major droughts over the past decade, improves seasonal precipitation predictions (3-5 month lead time) by 5-60% relative to the NMME simulations. Overall, the hybrid framework performs better in predicting negative precipitation anomalies (10-60% improvement over NMME) than positive precipitation anomalies (5-25% improvement over NMME). The results indicate that the framework would likely improve our ability to predict droughts such as the 2012-2014 event in the western United States that resulted in significant socioeconomic impacts.
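The "Expert Advice" combination step can be illustrated with a minimal exponentially weighted forecaster; the learning rate and squared-error loss are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def expert_advice_weights(member_preds, observations, eta=0.5):
    """Exponentially weighted average forecaster over ensemble members.

    Each column of member_preds is one member's series of past
    predictions; weights are updated multiplicatively from squared
    errors against the observations, so members that track the
    observations gain influence. Returns the final member weights.
    """
    n_steps, n_members = member_preds.shape
    w = np.ones(n_members) / n_members
    for t in range(n_steps):
        losses = (member_preds[t] - observations[t]) ** 2
        w = w * np.exp(-eta * losses)  # penalize large errors
        w = w / w.sum()                # renormalize to a distribution
    return w
```

The weighted combination of member forecasts with these weights plays the role of the "ensemble response" that the framework uses in place of a plain ensemble mean.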
Nguyen, Y.; Nguyen, Nam X.; Rogers, Jamie L.; Liao, Jun; MacMillan, John B.; Jiang, Youxing; Sperandio, Vanessa
2015-05-19
Bacteria engage in chemical signaling, termed quorum sensing (QS), to mediate intercellular communication, mimicking multicellular organisms. The LuxR family of QS transcription factors regulates gene expression, coordinating population behavior by sensing endogenous acyl homoserine lactones (AHLs). However, some bacteria (such as Escherichia coli) do not produce AHLs. These LuxR orphans sense exogenous AHLs but also regulate transcription in the absence of AHLs. Importantly, this AHL-independent regulatory mechanism is still largely unknown. Here we present several structures of one such orphan LuxR-type protein, SdiA, from enterohemorrhagic E. coli (EHEC), in the presence and absence of AHL. SdiA is actually not in an apo state without AHL but is regulated by a previously unknown endogenous ligand, 1-octanoyl-rac-glycerol (OCL), which is ubiquitously found throughout the tree of life and serves as an energy source, signaling molecule, and substrate for membrane biogenesis. While exogenous AHL confers higher stability and DNA binding affinity on SdiA, OCL may function as a chemical chaperone placeholder that stabilizes SdiA, allowing for basal activity. Structural comparison between SdiA-AHL and SdiA-OCL complexes provides crucial mechanistic insights into the ligand regulation of AHL-dependent and -independent function of LuxR-type proteins. Importantly, in addition to its contribution to basic science, this work has implications for public health, inasmuch as the SdiA signaling system aids the deadly human pathogen EHEC to adapt to a commensal lifestyle in the gastrointestinal (GI) tract of cattle, its main reservoir. These studies open exciting and novel avenues to control shedding of this human pathogen in the environment. IMPORTANCE Quorum sensing refers to bacterial chemical signaling. The QS acyl homoserine lactone (AHL) signals are recognized by LuxR-type receptors that regulate gene transcription. However, some bacteria have orphan Lux
Enriching Spatial Data Infrastructure (sdi) by User Generated Contents for Transportation
NASA Astrophysics Data System (ADS)
Shakeri, M.; Alimohammadi, A.; Sadeghi-Niaraki, A.; Alesheikh, A. A.
2013-09-01
Spatial data is one of the most critical elements underpinning decision making for many disciplines. Accessing and sharing spatial data have always been a great struggle for researchers. Spatial data infrastructure (SDI) plays a key role in spatial data sharing by building a suitable platform for collaboration and cooperation among the different data-producing organizations. In recent years, the SDI vision has moved toward a user-centric platform, which has led to the development of a new, enriched generation of SDI (the third generation). This vision is to provide an environment where users can cooperate to handle spatial data in an effective and satisfactory way. User-centric SDI concentrates on users, their requirements and their preferences, whereas earlier SDI initiatives concentrated mainly on technological issues such as data harmonization, standardized metadata models, and standardized web services for data discovery, visualization and download. On the other hand, new technologies such as GPS-equipped smart phones, navigation devices and Web 2.0 technologies have enabled citizens to actively participate in the production and sharing of spatial information. This has led to the emergence of a new phenomenon called Volunteered Geographic Information (VGI). VGI describes any type of voluntarily collected content that has a geographic element. Its distinctive feature is geographic information collected and produced by citizens with varying levels of formal expertise and knowledge of spatial or geographic concepts. Therefore, ordinary citizens can cooperate in providing massive sources of information that cannot be ignored. These can be considered valuable spatial information sources in SDI, and they can be used for completing, improving and updating existing databases. Spatial information and technologies are an important part of transportation systems. Planning, design and operation of the
Stochastic dynamics of N correlated binary variables and non-extensive statistical mechanics
NASA Astrophysics Data System (ADS)
Kononovicius, A.; Ruseckas, J.
2016-04-01
Non-extensive statistical mechanics has been applied to describe a variety of complex systems with inherent correlations and feedback loops. Here we present a dynamical model, based on a previously proposed static model, exhibiting in the thermodynamic limit the extensivity of the Tsallis entropy with q < 1 as well as a q-Gaussian distribution. The dynamical model consists of a one-dimensional ring of particles characterized by correlated binary random variables, which are allowed to flip according to a simple random walk rule. The proposed dynamical model provides insight into how a mesoscopic dynamics characterized by non-extensive statistical mechanics could emerge from a microscopic description of the system.
Dynamics of statistical distance: Quantum limits for two-level clocks
Braunstein, S.L.; Milburn, G.J.
1995-03-01
We study the evolution of statistical distance on the Bloch sphere under unitary and nonunitary dynamics. This corresponds to studying the limits to clock precision for a clock constructed from a two-state system. We find that the initial motion away from pure states under nonunitary dynamics yields the greatest accuracy for a "one-tick" clock; in this case the clock's precision is not limited by the largest frequency of the system.
Sapsis, Themistoklis P.; Majda, Andrew J.
2013-01-01
A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra. PMID:23918398
McKie, F.
1987-03-23
The Strategic Defense Initiative (SDI) is an intensive research program aimed at determining whether there are cost-effective defensive technologies that could enhance deterrence, strengthen stability, and increase the security of the United States and its allies against ballistic missile nuclear attack. Since its inception in March 1983, opposition to the SDI program has been widely publicized by the media. The most prominent sources of such opposition have been members of the scientific, political, and Allied communities. The sources of and rationale for this opposition, along with their effects on the calibre of support for SDI research efforts, on congressional funding, and on program changes in the SDI research schedule, were examined. Information was gathered through a review of the literature.
SDI-based business processes: A territorial analysis web information system in Spain
NASA Astrophysics Data System (ADS)
Béjar, Rubén; Latre, Miguel Á.; Lopez-Pellicer, Francisco J.; Nogueras-Iso, Javier; Zarazaga-Soria, F. J.; Muro-Medrano, Pedro R.
2012-09-01
Spatial Data Infrastructures (SDIs) provide access to geospatial data and operations through interoperable Web services. These data and operations can be chained to set up specialized geospatial business processes, and these processes can give support to different applications. End users can benefit from these applications, while experts can integrate the Web services in their own business processes and developments. This paper presents an SDI-based territorial analysis Web information system for Spain, which gives access to land cover, topography and elevation data, as well as to a number of interoperable geospatial operations by means of a Web Processing Service (WPS). Several examples illustrate how different territorial analysis business processes are supported. The system has been established by the Spanish National SDI (Infraestructura de Datos Espaciales de España, IDEE) both as an experimental platform for geoscientists and geoinformation system developers, and as a mechanism to contribute to Spanish citizens' knowledge about their territory.
Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function
ERIC Educational Resources Information Center
Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.
2011-01-01
In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
Ferro, F; Quarati, P
2005-02-01
We show that in stellar core plasmas, the one-body momentum distribution function is strongly dependent, at least in the high-velocity regime, on the microscopic dynamics of ion elastic collisions and therefore on the effective collisional cross sections if a random force field is present. We take into account two cross sections describing ion-dipole and ion-ion screened interactions. Furthermore, we introduce a third, unusual cross section to link statistical distributions with a quantum effect originating from the energy-momentum uncertainty owing to many-body collisions. We also propose a possible physical interpretation in terms of a tidal-like force. We show that each collisional cross section gives rise to a slight but distinctive correction to the Maxwellian momentum distribution function in a well-defined velocity interval. We also find a possible link between the microscopic dynamics of ions and statistical mechanics by interpreting our results in the framework of nonextensive statistical mechanics.
NASA Astrophysics Data System (ADS)
Koparan, Timur
2016-02-01
In this study, the effect of dynamic statistics software on the achievement and attitudes of prospective teachers is examined. To this end, an achievement test, an attitude scale for statistics, and interviews were used as data collection tools. The achievement test comprises 8 problems based on statistical data, and the attitude scale comprises 13 Likert-type items. The study was carried out in the fall semester of the 2014-2015 academic year at a university in Turkey. The study, which employed the pre-test/post-test control group design of the quasi-experimental research method, was carried out on a group of 80 prospective teachers, 40 in the control group and 40 in the experimental group. Both groups had four-hour classes about descriptive statistics. The classes with the control group were carried out through traditional methods, while dynamic statistics software was used in the experimental group. Five prospective teachers from the experimental group were interviewed clinically after the application for a deeper examination of their views about it. The qualitative data gained are presented under various themes. At the end of the study, it was found that there is a significant difference in favour of the experimental group in terms of achievement and attitudes, and that the prospective teachers have an affirmative approach to the use of dynamic software and see it as an effective tool to enrich maths classes. In accordance with the findings of the study, it is suggested that dynamic software, which offers unique opportunities, be used in classes by teachers and students.
Statistical analysis of cellular detonation dynamics from numerical simulations: one-step chemistry
NASA Astrophysics Data System (ADS)
Sharpe, G. J.; Radulescu, M. I.
2011-10-01
In this paper, two methods are developed for statistically analysing the nonlinear cellular dynamics from numerical simulations of gaseous detonations, one use of which is the systematic determination of detonation cell sizes from such simulations. Both these methods rely on signed vorticity records in which the individual families of transverse waves are captured independently. The first method involves an automated extraction of the main triple-point tracks from the vorticity records, allowing statistical analysis of the spacings between neighbouring tracks. The second method uses the autocorrelation function to spectrally analyse the vorticity records. These methods are then employed for a preliminary analysis of the cellular dynamics of the standard, idealized one-step chemistry model. Evidence is found for 'cell size doubling' bifurcations in the one-step model as the cellular dynamics become more irregular (e.g. as the activation energy is increased). It is also shown that the statistical models converge slowly due to systematic 'shot-to-shot' variation in the cellular dynamics for fixed parameters with different initial perturbations. Instead, it appears that a range of equally probable cell sizes can be obtained for given parameters.
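The autocorrelation-based spacing estimate can be illustrated in one dimension; this is a simplified stand-in for the two-dimensional vorticity-record analysis described in the abstract:

```python
import numpy as np

def dominant_spacing(signal):
    """Estimate a dominant spatial period from a 1-D record.

    Computes the normalized autocorrelation function and returns the
    lag of its first positive local maximum after zero lag, which for
    a quasi-periodic record corresponds to the dominant spacing
    (e.g. a characteristic cell size).
    """
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode='full')[len(x) - 1:]
    ac = ac / ac[0]
    for lag in range(1, len(ac) - 1):
        if ac[lag] > ac[lag - 1] and ac[lag] > ac[lag + 1] and ac[lag] > 0:
            return lag
    return None
```

Applied to a transverse cut through a signed vorticity record, such an estimate would give the characteristic transverse-wave spacing without hand-picking triple-point tracks.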
A statistical physics viewpoint on the dynamics of the bouncing ball
NASA Astrophysics Data System (ADS)
Chastaing, Jean-Yonnel; Géminard, Jean-Christophe; Bertin, Eric
2016-06-01
We study, from a statistical physics perspective, the dynamics of a bouncing ball maintained in a chaotic regime through collisions with a plate undergoing aperiodic vibration. We analyze in detail the energy exchanges between the bead and the vibrating plate, and show that the coupling between the bead and the plate can be modeled in terms of both a dissipative process and an injection mechanism by an energy reservoir. An analysis of the injection statistics in terms of a fluctuation relation is also provided.
Modified statistical dynamical diffraction theory: analysis of model SiGe heterostructures.
Shreeman, P K; Dunn, K A; Novak, S W; Matyi, R J
2013-08-01
A modified version of the statistical dynamical diffraction theory (mSDDT) permits full-pattern fitting of high-resolution X-ray diffraction scans from thin-film systems across the entire range from fully dynamic to fully kinematic scattering. The mSDDT analysis has been applied to a set of model SiGe/Si thin-film samples in order to define the capabilities of this approach. For defect-free materials that diffract at the dynamic limit, mSDDT analyses return structural information that is consistent with commercial dynamical diffraction simulation software. As defect levels increase and the diffraction characteristics shift towards the kinematic limit, the mSDDT provides new insights into the structural characteristics of these materials. PMID:24046498
A Statistical Dynamic Approach to Structural Evolution of Complex Capital Market Systems
NASA Astrophysics Data System (ADS)
Shao, Xiao; Chai, Li H.
As an important part of modern financial systems, the capital market plays a crucial role in diverse social resource allocations and economic exchanges. Going beyond traditional models and theories based on neoclassical economics, and considering capital markets as typical complex open systems, this paper attempts to develop a new approach that overcomes some shortcomings of existing research. By defining the generalized entropy of capital market systems, a theoretical model and a nonlinear dynamic equation for the operation of capital markets are proposed from a statistical dynamics perspective. The US security market from 1995 to 2001 is then simulated and analyzed as a typical case. Some instructive results are discussed and summarized.
Providing Geographic Datasets as Linked Data in Sdi
NASA Astrophysics Data System (ADS)
Hietanen, E.; Lehto, L.; Latvala, P.
2016-06-01
In this study, a prototype service providing data from a Web Feature Service (WFS) as linked data is implemented. First, persistent and unique Uniform Resource Identifiers (URIs) are created for all spatial objects in the dataset. The objects are available from those URIs in the Resource Description Framework (RDF) data format. Next, a Web Ontology Language (OWL) ontology is created to describe the dataset's information content using the Open Geospatial Consortium's (OGC) GeoSPARQL vocabulary. The existing data model is modified to take the linked data principles into account. The implemented service produces an HTTP response dynamically. The data for the response are first fetched from the existing WFS. The Geography Markup Language (GML) output of the WFS is then transformed on the fly to the RDF format. Content negotiation is used to serve the data in different RDF serialization formats. This solution facilitates the use of a dataset in different applications without replicating the whole dataset. In addition, individual spatial objects in the dataset can be referred to by their URIs. Furthermore, the needed information content of the objects can be easily extracted from the RDF serializations available from those URIs. A solution for linking data objects to the dataset URI is also introduced using the Vocabulary of Interlinked Datasets (VoID). The dataset is divided into subsets and each subset is given its own persistent and unique URI. This enables the whole dataset to be explored with a web browser and all individual objects to be indexed by search engines.
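The core step of such a service, minting a persistent URI per feature, describing it with GeoSPARQL terms, and choosing the serialization from the client's Accept header, can be sketched as follows. This is an illustration under assumptions, not the project's implementation: the base URI and feature data are invented, the GeoSPARQL term URIs are real, and only one serializer (N-Triples) is shown.

```python
BASE = "http://data.example.org/topo/"            # hypothetical URI base
GEO = "http://www.opengis.net/ont/geosparql#"     # real GeoSPARQL namespace

def feature_to_ntriples(feature_id, wkt):
    """Describe one spatial object as RDF N-Triples with a persistent URI."""
    uri = f"<{BASE}feature/{feature_id}>"
    geom = f"<{BASE}feature/{feature_id}/geometry>"
    return "\n".join([
        f"{uri} <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <{GEO}Feature> .",
        f"{uri} <{GEO}hasGeometry> {geom} .",
        f'{geom} <{GEO}asWKT> "{wkt}"^^<{GEO}wktLiteral> .',
    ])

# Content negotiation: map Accept headers to serializers. Turtle or RDF/XML
# serializers would plug into the same table.
SERIALIZERS = {"application/n-triples": feature_to_ntriples}

def respond(accept, feature_id, wkt):
    serialize = SERIALIZERS.get(accept, feature_to_ntriples)
    return serialize(feature_id, wkt)

triples = respond("application/n-triples", 42, "POINT(24.94 60.17)")
print(triples)
```

In the real service the feature id and geometry would come from the GML returned by the WFS rather than function arguments.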
The applications of Complexity Theory and Tsallis Non-extensive Statistics at Solar Plasma Dynamics
NASA Astrophysics Data System (ADS)
Pavlos, George
2015-04-01
Because solar plasma lives far from equilibrium, it is an excellent laboratory for testing complexity theory and non-equilibrium statistical mechanics. In this study, we present the highlights of complexity theory and Tsallis non-extensive statistical mechanics as they apply to solar plasma dynamics, especially sunspot, solar flare and solar wind phenomena. Generally, when a physical system is driven far from equilibrium, novel characteristics can be observed related to the nonlinear character of the dynamics. In particular, the nonlinearity in space plasma dynamics can generate intermittent turbulence with the typical characteristics of anomalous diffusion and strange topologies of stochastic space plasma fields (velocity and magnetic fields) caused by the strange dynamics and strange kinetics (Zaslavsky, 2002). In addition, according to Zelenyi and Milovanov (2004), the complex character of the space plasma system includes the existence of non-equilibrium (quasi-)stationary states (NESS) having the topology of a percolating fractal set. The stabilization of a system near the NESS is perceived as a transition into a turbulent state determined by self-organization processes. The long-range correlation effects manifest themselves as strange non-Gaussian behavior of kinetic processes near the NESS plasma state. The complex character of space plasma can also be described by the non-extensive statistical thermodynamics pioneered by Tsallis, which offers a consistent and effective theoretical framework, based on a generalization of Boltzmann-Gibbs (BG) entropy, to describe far-from-equilibrium nonlinear complex dynamics (Tsallis, 2009). In a series of recent papers, the hypothesis of Tsallis non-extensive statistics in the magnetosphere, sunspot dynamics, solar flares, solar wind and space plasma in general was tested and verified (Karakatsanis et al., 2013; Pavlos et al., 2014; 2015). Our study includes the analysis of solar plasma time
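For reference, the Tsallis entropy invoked above generalizes the Boltzmann-Gibbs form with a single index q; this is the standard textbook formula (Tsallis, 2009), not specific to this abstract:

```latex
S_q \;=\; k\,\frac{1-\sum_{i=1}^{W} p_i^{\,q}}{q-1},
\qquad
\lim_{q\to 1} S_q \;=\; -k\sum_{i=1}^{W} p_i \ln p_i \;=\; S_{\mathrm{BG}} .
```

Values q ≠ 1 weight rare events differently from BG statistics, which is why the index is used as a diagnostic of long-range correlations in the plasma time series.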
NASA Astrophysics Data System (ADS)
Nielsen, Eric L.; Close, Laird M.; Biller, Beth A.; Masciadri, Elena; Lenzen, Rainer
2008-02-01
We examine the implications for the distribution of extrasolar planets based on the null results from two of the largest direct imaging surveys published to date. Combining the measured contrast curves from 22 of the stars observed with the VLT NACO adaptive optics system by Masciadri and coworkers and 48 of the stars observed with the VLT NACO SDI and MMT SDI devices by Biller and coworkers (for a total of 60 unique stars), we consider what distributions of planet masses and semimajor axes can be ruled out by these data, based on Monte Carlo simulations of planet populations. We can set the following upper limit with 95% confidence: the fraction of stars with planets with semimajor axis between 20 and 100 AU, and mass above 4 M_Jup, is 20% or less. Also, with a distribution of planet mass of dN/dM ∝ M^-1.16 in the range 0.5-13 M_Jup, we can rule out a power-law distribution for semimajor axis (dN/da ∝ a^α) with index 0 and an upper cutoff of 18 AU, and index -0.5 with an upper cutoff of 48 AU. For the distribution suggested by Cumming et al., a power law of index -0.61, we can place an upper limit of 75 AU on the semimajor axis distribution. In general, we find that even null results from direct imaging surveys are very powerful in constraining the distributions of giant planets (0.5-13 M_Jup) at large separations, but more work needs to be done to close the gap between planets that can be detected by direct imaging and those to which the radial velocity method is sensitive.
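The Monte Carlo logic described above can be sketched compactly. This is a hedged illustration, not the authors' code: masses and semimajor axes are drawn from the quoted power laws by inverse-CDF sampling, and the "detectable" criterion is a toy threshold standing in for the survey contrast curves.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_power_law(alpha, lo, hi, n):
    """Inverse-CDF sampling of p(x) ∝ x^alpha on [lo, hi] (alpha != -1)."""
    u = rng.random(n)
    g = alpha + 1.0
    return (lo**g + u * (hi**g - lo**g)) ** (1.0 / g)

n = 100_000
mass = sample_power_law(-1.16, 0.5, 13.0, n)   # M_Jup, dN/dM ∝ M^-1.16
a = sample_power_law(-0.61, 0.03, 75.0, n)     # AU, Cumming-like index

# Toy detectability window (an assumption, not the surveys' contrast limits)
detected = (mass > 4.0) & (a > 20.0) & (a < 100.0)
print(f"detectable fraction: {detected.mean():.3f}")
```

Comparing such a simulated detectable fraction with the observed null result is what yields the quoted upper limits on the population parameters.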
NASA Astrophysics Data System (ADS)
Cortez, Vasco; Medina, Pablo; Goles, Eric; Zarama, Roberto; Rica, Sergio
2015-01-01
Statistical properties, fluctuations and probabilistic arguments are shown to explain the robust dynamics of the Schelling's social segregation model. With the aid of probability density functions we characterize the attractors for multiple external parameters and conditions. We discuss the role of the initial states and we show that, indeed, the system evolves towards well defined attractors. Finally, we provide probabilistic arguments to explain quantitatively the observed behavior.
Dynamic heterogeneity and non-Gaussian statistics for acetylcholine receptors on live cell membrane
He, W.; Song, H.; Su, Y.; Geng, L.; Ackerson, B. J.; Peng, H. B.; Tong, P.
2016-01-01
The Brownian motion of molecules at thermal equilibrium usually has a finite correlation time and will eventually be randomized after a long delay time, so that their displacement follows the Gaussian statistics. This is true even when the molecules have experienced a complex environment with a finite correlation time. Here, we report that the lateral motion of the acetylcholine receptors on live muscle cell membranes does not follow the Gaussian statistics for normal Brownian diffusion. From a careful analysis of a large volume of the protein trajectories obtained over a wide range of sampling rates and long durations, we find that the normalized histogram of the protein displacements shows an exponential tail, which is robust and universal for cells under different conditions. The experiment indicates that the observed non-Gaussian statistics and dynamic heterogeneity are inherently linked to the slow-active remodelling of the underlying cortical actin network. PMID:27226072
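The contrast between Gaussian and exponential displacement statistics reported above can be illustrated numerically. The following is a toy sketch (not the paper's analysis): Gaussian steps are compared with Laplace (two-sided exponential) steps of the same variance, and the probability of a large excursion is shown to be far higher for the exponential tail.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

gauss = rng.standard_normal(n)                      # Gaussian, unit variance
laplace = rng.laplace(0.0, 1.0 / np.sqrt(2.0), n)   # Laplace, unit variance

# Probability of a displacement beyond 4 standard deviations
tail_gauss = np.mean(np.abs(gauss) > 4.0)
tail_laplace = np.mean(np.abs(laplace) > 4.0)
print(f"P(|dx|>4): gaussian {tail_gauss:.1e}, laplace {tail_laplace:.1e}")
```

On a log-linear histogram the Gaussian tail falls off quadratically while the Laplace tail is a straight line, which is the signature the experiment detects in the receptor trajectories.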
NASA Astrophysics Data System (ADS)
Xu, Hao; Lu, Bo; Su, Zhongqing; Cheng, Li
2015-09-01
A previously developed damage identification strategy, named Pseudo-Excitation (PE), was enhanced using a statistical processing approach. Based on the local dynamic equilibrium of the structural component under inspection, the distribution of its vibration displacements, which is necessary to construct the damage index in the PE, was re-defined using dynamic strains alone via the statistical method. In addition to the advantages inherited from the original PE over traditional vibration-based damage detection, including independence from baseline signals and pre-developed benchmark structures, the enhanced PE (EPE) possesses improved immunity to the interference of measurement noise. Moreover, the EPE can facilitate practical implementation of online structural health monitoring, benefiting from the use of strain information alone. A proof-of-concept numerical study was conducted to examine the feasibility and accuracy of the EPE, and the effectiveness of the proposed statistical enhancement in re-constructing the vibration displacements was evaluated under noise influence; this was followed by experimental validation, characterizing multiple cracks in a beam-like structure in which the dynamic strains were measured using lead zirconate titanate (PZT) sensors. For comparison, the original PE, the Gapped Smoothing Method (GSM), and the EPE were each used to evaluate the cracks. The damage identification results showed that both the GSM and the EPE achieved higher identification accuracy than the original PE, and the robustness of the EPE in damage identification proved superior to that of the GSM.
New Methods for Applying Statistical State Dynamics to Problems in Atmospheric Turbulence
NASA Astrophysics Data System (ADS)
Farrell, B.; Ioannou, P. J.
2015-12-01
Adopting the perspective of statistical state dynamics (SSD) has led to a number of recent advances in understanding and simulating atmospheric turbulence at both boundary layer and planetary scales. Traditionally, realizations have been used to study turbulence, and if a statistical quantity was needed it was obtained by averaging. However, it is now becoming more widely appreciated that there are important advantages to studying the statistical state dynamics directly. In turbulent systems statistical quantities are often the most useful, and the advantage of obtaining these quantities directly as state variables is obvious. Moreover, quantities such as the probability density function (pdf) are often difficult to obtain accurately by sampling state trajectories. In the event that the pdf is itself time dependent or even chaotic, as is the case in the turbulence of the planetary boundary layer, the pdf can only be obtained as a state variable. Perhaps the greatest advantage of the SSD approach, however, is that it directly reveals the essential cooperative mechanisms of interaction among spatial and temporal scales that underlie the turbulent state. In order to exploit these advantages of the SSD approach to geophysical turbulence, new analytical and computational methods are being developed. Example problems in atmospheric turbulence are presented in which these new SSD analysis and computational methods are used.
The non-equilibrium statistical mechanics of a simple geophysical fluid dynamics model
NASA Astrophysics Data System (ADS)
Verkley, Wim; Severijns, Camiel
2014-05-01
Lorenz [1] has devised a dynamical system that has proved to be very useful as a benchmark system in geophysical fluid dynamics. The system in its simplest form consists of a periodic array of variables that can be associated with an atmospheric field on a latitude circle. The system is driven by a constant forcing, is damped by linear friction and has a simple advection term that causes the model to behave chaotically if the forcing is large enough. Our aim is to predict the statistics of Lorenz' model on the basis of a given average value of its total energy - obtained from a numerical integration - and the assumption of statistical stationarity. Our method is the principle of maximum entropy [2] which in this case reads: the information entropy of the system's probability density function shall be maximal under the constraints of normalization, a given value of the average total energy and statistical stationarity. Statistical stationarity is incorporated approximately by using `stationarity constraints', i.e., by requiring that the average first and possibly higher-order time-derivatives of the energy are zero in the maximization of entropy. The analysis [3] reveals that, if the first stationarity constraint is used, the resulting probability density function rather accurately reproduces the statistics of the individual variables. If the second stationarity constraint is used as well, the correlations between the variables are also reproduced quite adequately. The method can be generalized straightforwardly and holds the promise of a viable non-equilibrium statistical mechanics of the forced-dissipative systems of geophysical fluid dynamics. [1] E.N. Lorenz, 1996: Predictability - A problem partly solved, in Proc. Seminar on Predictability (ECMWF, Reading, Berkshire, UK), Vol. 1, pp. 1-18. [2] E.T. Jaynes, 2003: Probability Theory - The Logic of Science (Cambridge University Press, Cambridge). [3] W.T.M. Verkley and C.A. Severijns, 2014: The maximum entropy
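The constrained maximization described above can be written compactly. The following is a schematic rendering under the abstract's notation (E the total energy, Ė its time derivative, λ's the Lagrange multipliers), using only the first stationarity constraint:

```latex
\max_{p}\; S[p] = -\!\int\! p(\mathbf{x})\ln p(\mathbf{x})\,d\mathbf{x}
\quad\text{s.t.}\quad
\int\! p\,d\mathbf{x}=1,\;\;
\int\! p\,E\,d\mathbf{x}=\bar{E},\;\;
\int\! p\,\dot{E}\,d\mathbf{x}=0
\;\;\Longrightarrow\;\;
p(\mathbf{x}) \;\propto\; e^{-\lambda_1 E(\mathbf{x})-\lambda_2 \dot{E}(\mathbf{x})} .
```

Adding the second stationarity constraint contributes a further term −λ₃Ë in the exponent, which is what improves the reproduction of inter-variable correlations.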
Dynamical and Statistical Aspects in Nucleus--Nucleus Collisions Around the Fermi Energy
NASA Astrophysics Data System (ADS)
Tamain, B.; Assenard, M.; Auger, G.; Bacri, C. O.; Benlliure, J.; Bisquer, E.; Bocage, F.; Borderie, B.; Bougault, R.; Buchet, P.; Charvet, J. L.; Chbihi, A.; Colin, J.; Cussol, D.; Dayras, R.; Demeyer, A.; Dore, D.; Durand, D.; Eudes, P.; Frankland, J.; Galichet, E.; Genouin-Duhamel, E.; Gerlic, E.; Germain, M.; Gourio, D.; Guinet, D.; Gulminelli, F.; Lautesse, P.; Laville, J. L.; Lebrun, C.; Lecolley, J. F.; Lefevre, A.; Lefort, T.; Legrain, R.; Le Neindre, N.; Lopez, O.; Louvel, M.; Lukasik, J.; Marie, N.; Maskay, M.; Metivier, V.; Nalpas, L.; Nguyen, A.; Parlog, M.; Peter, J.; Plagnol, E.; Rahmani, A.; Reposeur, T.; Rivet, M. F.; Rosato, E.; Saint-Laurent, F.; Salou, S.; Squalli, M.; Steckmeyer, J. C.; Stern, M.; Tabacaru, T.; Tassan-Got, L.; Tirel, O.; Vient, E.; Volan, C.; Wieleczko, J. P.
1998-01-01
This contribution is devoted to two important aspects of intermediate-energy nucleus-nucleus collisions: the competition between dynamical and statistical features, and the origin of the multifragmentation process. These two questions are discussed with a focus on INDRA data. It turns out that most collisions are binary and reminiscent of the deep inelastic collisions observed at low energy. However, intermediate-velocity emission is a clear signature of dynamical emission and establishes a link with the participant-spectator picture that applies at high bombarding energies. Multifragmentation is observed when the dissipated energy is large, and it turns out that expansion occurs at least for central collisions, as expected if this phenomenon has a dynamical origin.
Quench dynamics and statistics of measurements for a line of quantum spins in two dimensions
NASA Astrophysics Data System (ADS)
Lux, Jonathan; Rosch, Achim
2015-02-01
Motivated by recent experiments, we investigate the dynamics of a line of spin-down spins embedded in the ferromagnetic spin-up ground state of a two-dimensional XXZ model close to the Ising limit. In a situation where the couplings in the x and y directions are different, the quench dynamics of this system is governed by the interplay of one-dimensional excitations (kinks and holes) moving along the line and single-spin excitations evaporating into the two-dimensional background. A semiclassical approximation can be used to calculate the dynamics of this complex quantum system. Recently, it became possible to perform projective quantum measurements on such spin systems, allowing us to determine, e.g., the z component of each individual spin. We predict the statistical properties of such measurements which contain much more information than correlation functions.
NASA Astrophysics Data System (ADS)
Alfi, V.; Cristelli, M.; Pietronero, L.; Zaccaria, A.
2009-02-01
We present a detailed study of the statistical properties of the Agent Based Model introduced in paper I [Eur. Phys. J. B, DOI: 10.1140/epjb/e2009-00028-4] and of its generalization to multiplicative dynamics. The aim of the model is to identify the minimal elements needed to understand the origin of the stylized facts and their self-organization. The key elements are fundamentalist agents, chartist agents, herding dynamics and price behavior. The first two elements correspond to the competition between stability and instability tendencies in the market. The herding behavior governs the possibility of the agents changing strategy and is a crucial element of this class of models. We consider a linear approximation for the price dynamics which permits a simple interpretation of the model dynamics and, for many properties, makes it possible to derive analytical results. The generalized nonlinear dynamics turns out to be far more sensitive to the parameter space and much more difficult to analyze and control. The main results for the nature and self-organization of the stylized facts are, however, very similar in the two cases. The main peculiarity of the nonlinear dynamics is an enhancement of the fluctuations and a more marked evidence of the stylized facts. We also discuss some modifications of the model that introduce more realistic elements with respect to real markets.
An open-source wireless sensor stack: from Arduino to SDI-12 to WaterOneFlow
NASA Astrophysics Data System (ADS)
Hicks, S.; Damiano, S. G.; Smith, K. M.; Olexy, J.; Horsburgh, J. S.; Mayorga, E.; Aufdenkampe, A. K.
2013-12-01
Implementing a large-scale streaming environmental sensor network has previously been limited by the high cost of the datalogging and data communication infrastructure. The Christina River Basin Critical Zone Observatory (CRB-CZO) is overcoming the obstacles to large near-real-time data collection networks by using Arduino, an open-source electronics platform, in combination with XBee ZigBee wireless radio modules. These extremely low-cost and easy-to-use open-source electronics are at the heart of the DIY movement and have provided solutions to countless projects by over half a million users worldwide. However, their use in environmental sensing is in its infancy. At present, a primary limitation to widespread deployment of open-source electronics for environmental sensing is the lack of a simple, open-source software stack to manage streaming data from heterogeneous sensor networks. Here we present a functioning prototype software stack that receives sensor data over a self-meshing ZigBee wireless network from over a hundred sensors, stores the data locally and serves it on demand as a CUAHSI WaterOneFlow (WOF) web service. We highlight a few new, innovative components, including: (1) a versatile open data logger design based on the Arduino electronics platform and ZigBee radios; (2) a software library implementing the SDI-12 communication protocol between any Arduino platform and SDI-12-enabled sensors without the need for additional hardware (https://github.com/StroudCenter/Arduino-SDI-12); and (3) 'midStream', a lightweight set of Python code that receives streaming sensor data, appends metadata on the fly by querying a relational database structured on an early version of the Observations Data Model version 2.0 (ODM2), and uses the WOFpy library to serve the data as WaterML via SOAP and REST web services.
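The 'midStream' idea, annotating a raw sensor reading with metadata on the fly, can be sketched as below. This is a hypothetical illustration: the metadata table, sensor code and field names are invented, and a dictionary stands in for the ODM2 relational database that the real project queries.

```python
from datetime import datetime, timezone

SENSOR_METADATA = {  # stands in for an ODM2 database query (invented values)
    "CTD01-temp": {"site": "White Clay Creek", "variable": "water_temperature",
                   "units": "degC"},
}

def annotate(sensor_code, timestamp, value):
    """Turn a raw (code, time, value) record into an annotated observation."""
    meta = SENSOR_METADATA[sensor_code]
    return {"sensor": sensor_code, "datetime": timestamp.isoformat(),
            "value": value, **meta}

obs = annotate("CTD01-temp", datetime(2013, 7, 1, tzinfo=timezone.utc), 21.4)
print(obs)
```

The annotated records would then be handed to WOFpy for serialization as WaterML.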
NASA Astrophysics Data System (ADS)
Farmer, Dean A.
1992-08-01
The various core technologies developed from the SDI programs are described and the cost and weight reductions that have resulted from the systematic exploitation of today's aerospace expertise are characterized. Avionics, sensors, and on-orbit propulsion systems can be utilized in developing small, low-cost devices for space exploration with significant performance capabilities. It is shown how the resulting core technologies can be employed in constructing three specific types of miniaturized spacecraft: a 16 kg planetary rover, a 200 kg lunar lander, and a 45 kg space vehicle repair and rescue craft.
NASA Astrophysics Data System (ADS)
Miksovsky, J.; Huth, R.; Halenka, T.; Belda, M.; Farda, A.; Skalak, P.; Stepanek, P.
2009-12-01
To bridge the resolution gap between the outputs of global climate models (GCMs) and the finer-scale data needed for studies of climate change impacts, two approaches are widely used: dynamical downscaling, based on regional climate models (RCMs) embedded in the domain of the GCM simulation, and statistical downscaling (SDS), using empirical transfer functions between the large-scale data generated by the GCM and local measurements. In our contribution, we compare the performance of different variants of both techniques for the region of Central Europe. The dynamical downscaling is represented by the outputs of two regional models run on a 10 km horizontal grid, ALADIN-CLIMATE/CZ (co-developed by the Czech Hydrometeorological Institute and Meteo-France) and RegCM3 (developed by the Abdus Salam International Centre for Theoretical Physics). The applied statistical methods were based on multiple linear regression, as well as on several of its nonlinear alternatives, including techniques employing artificial neural networks. Validation of the downscaling outputs was carried out using measured data, gathered from weather stations in the Czech Republic, Slovakia, Austria and Hungary for the end of the 20th century; series of daily values of maximum and minimum temperature, precipitation and relative humidity were analyzed. None of the regional models or statistical downscaling techniques could be identified as universally best. For instance, while most statistical methods misrepresented the shape of the statistical distribution of the target variables (especially in the more challenging cases such as estimation of daily precipitation), RCM-generated data often suffered from severe biases. It is also shown that further enhancement of the simulated fields of climate variables can be achieved through a combination of dynamical downscaling and statistical postprocessing. This can not only be used to reduce biases and other systematic flaws in the generated time
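The regression-based statistical downscaling mentioned above reduces, in its simplest form, to fitting a multiple linear regression from large-scale predictors to a station series. The following toy sketch (synthetic data, not the study's setup) shows that core step with an ordinary least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500

# Synthetic large-scale predictors (e.g. gridded temperature, SLP, humidity)
predictors = rng.standard_normal((n, 3))
true_coefs = np.array([2.0, -1.0, 0.5])          # invented transfer function
station_t = predictors @ true_coefs + 10.0 + 0.3 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), predictors])    # add intercept column
coefs, *_ = np.linalg.lstsq(X, station_t, rcond=None)

print(np.round(coefs, 2))                        # [intercept, b1, b2, b3]
```

The fitted transfer function would then be applied to GCM-simulated predictor fields to produce local-scale estimates; the nonlinear SDS variants in the study replace the linear map with, e.g., a neural network.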
Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.
Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E
2016-01-01
Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414
Heterogeneous Structure of Stem Cells Dynamics: Statistical Models and Quantitative Predictions
Bogdan, Paul; Deasy, Bridget M.; Gharaibeh, Burhan; Roehrs, Timo; Marculescu, Radu
2014-01-01
Understanding stem cell (SC) population dynamics is essential for developing models that can be used in basic science and medicine to aid in predicting cell fate. These models can be used as tools, e.g., in studying patho-physiological events at the cellular and tissue level, predicting (mal)functions along the developmental course, and in personalized regenerative medicine. Using time-lapse imaging and statistical tools, we show that the dynamics of SC populations involve a heterogeneous structure consisting of multiple sub-population behaviors. Using non-Gaussian statistical approaches, we identify the co-existence of fast and slow dividing subpopulations, and of quiescent cells, in stem cells from three species. The mathematical analysis also shows that, instead of developing independently, SCs exhibit a time-dependent fractal behavior as they interact with each other through molecular and tactile signals. These findings suggest that more sophisticated models of SC dynamics should view SC populations as a collective, and avoid the simplifying homogeneity assumption, by accounting for the presence of more than one dividing sub-population and for their multi-fractal characteristics. PMID:24769917
Specificity of mathematical description of statistical and dynamical properties of CELSS
NASA Astrophysics Data System (ADS)
Bartsev, Sergey
A CELSS for long-term space missions has to possess a high level of matter-turnover closure. Designing, studying, and maintaining such systems does not seem possible without accounting for their defining property: high closure. For measuring this specific property, a potentially universal coefficient of closure is suggested and discussed. It can be shown that standard statistical formulas are incorrect for estimating the mean values of the biomass of CELSS components. Accounting for closure as a specific constraint of closed ecological systems allows correct formulas to be obtained for calculating the mean values of biomass and of the composition of chemical compounds in a CELSS. Errors due to using standard statistical evaluations are discussed. The organisms composing a biological LSS consume and produce a spectrum of different substances. Providing a high level of closure (the absence of deadlocks) depends on the accuracy with which the inputs and outputs of all organisms are adjusted to each other; this is a practical objective of high importance. Adequate mathematical models ought to describe the ability of organisms to vary their consumption and production spectra (stoichiometric ratios). Traditional ecological models describing the dynamics of a limiting element cannot be adequately applied to describing CELSS dynamics over all possible operating regimes. The possible use of adaptive metabolism models to provide a correct description of CELSS dynamics is considered.
Cluster statistics and quasisoliton dynamics in microscopic optimal-velocity models
NASA Astrophysics Data System (ADS)
Yang, Bo; Xu, Xihua; Pang, John Z. F.; Monterola, Christopher
2016-04-01
Using the non-linear optimal velocity models as an example, we show that there exists an emergent intrinsic scale that characterizes the interaction strength between multiple clusters appearing in the solutions of such models. The interaction characterizes the dynamics of the localized quasisoliton structures given by the time derivative of the headways, and the intrinsic scale is analogous to the "charge" of the quasisolitons, leading to non-trivial cluster statistics from the random perturbations to the initial steady states of uniform headways. The cluster statistics depend both on the quasisoliton charge and the density of the traffic. The intrinsic scale is also related to an emergent quantity that gives the extremum headways in the cluster formation, as well as the coexistence curve separating the absolute stable phase from the metastable phase. The relationship is qualitatively universal for general optimal velocity models.
Statistical characterization of the dynamic human body communication channel at 45 MHz.
Nie, Zedong; Ma, Jingjing; Chen, Hong; Wang, Lei
2013-01-01
The dynamic human body communication (HBC) propagation channel at 45 MHz is statistically characterized in this paper. A large amount of measurement data was gathered in a practical environment during real activities (treadmill running at different speeds in a lab room). The received power between the two lower legs was acquired from three volunteers, with more than 60,000 snapshots of data in total. The statistical analyses confirmed that the HBC propagation channel at 45 MHz follows Gamma and Lognormal distributions for the slower (2 km/h and 4 km/h) and faster (6 km/h and 8 km/h) running activities, respectively. The channel is insensitive to body motion: the maximum average fade duration is 0.0413 s, the average bad-channel duration is mostly less than 60 ms, and the percentage of bad-channel duration time is less than 4.35%.
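As a rough illustration of the kind of distribution fitting this abstract describes, the sketch below fits a Gamma distribution to synthetic "received power" samples by the method of moments. The synthetic data, sample size, and fitting method are illustrative assumptions, not details from the paper.

```python
# Hedged sketch: method-of-moments Gamma fit to synthetic received-power data.
import random
import statistics

random.seed(1)
# Synthetic samples drawn from a known Gamma(shape=2, scale=3) distribution
samples = [random.gammavariate(2.0, 3.0) for _ in range(50000)]

mean = statistics.fmean(samples)
var = statistics.variance(samples)

# Method of moments for Gamma: mean = k*theta, variance = k*theta^2
k_hat = mean ** 2 / var      # shape estimate, should be near 2
theta_hat = var / mean       # scale estimate, should be near 3
print(round(k_hat, 2), round(theta_hat, 2))
```

A goodness-of-fit test (e.g. Kolmogorov-Smirnov) would normally follow to confirm the Gamma hypothesis, as the paper's analysis presumably did.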
SERVIR's Contributions and Benefits to Belize thru Spatial Data Infrastructure (SDI) Development
NASA Technical Reports Server (NTRS)
Irwin, Daniel E.
2006-01-01
Dan Irwin, the SERVIR Project Manager, is being honored with the privilege of delivering the opening remarks at Belize's second celebration of GIS Day, a weeklong event to be held at the University of Belize's campus in the nation's capital, Belmopan. The request has been extended by the GIS Day Planning Committee, which operates under the auspices of Belize's Ministry of Natural Resources & the Environment, the focal ministry for SERVIR. In the 20-30 min. allotted for the opening remarks, the SERVIR Project Manager will expound on how SERVIR, operating under the auspices of NASA's Ecological Forecasting Program, contributes to spatial data infrastructure (SDI) development in Belize. NASA's contributions to the region - particularly work under the Mesoamerican Biological Corridor - will be highlighted. Continuing, the remarks will discuss SERVIR's role in Belize's steadily expanding SDI, particularly in the context of delivering integrated decision support products via web-based infrastructure. The remarks will close with a call to the parties assembled to work together in the application of Earth Observation Systems technologies for the benefit of Belizean society as a whole. NASA's strong presence in Belize's GIS Day celebrations will be highlighted as sustained goodwill of the American people - in partial fulfillment of goals set forth under the Global Earth Observation System of Systems (GEOSS).
A unified n-body and statistical treatment of stellar dynamics
NASA Technical Reports Server (NTRS)
Lightman, A. P.; Mcmillan, S. L. W.
1985-01-01
The methods of a new 'hybrid' computer code for stellar dynamics are summarized. All particles in the inner spatial region are followed exactly via a direct N-body code, while all particles in the outer spatial region are treated statistically via a distribution function and Fokker-Planck-type methods. An intermediate region, with features of both, allows exchange of particles and energy between the outer and inner regions. The code is applied to the periods just before and just after core collapse, and the results are summarized.
Dynamics and statistics of wave-particle interactions in a confined geometry.
Gilet, Tristan
2014-11-01
A walker is a droplet bouncing on a liquid surface and propelled by the waves that it generates. This macroscopic wave-particle association exhibits behaviors reminiscent of quantum particles. This article presents a toy model of the coupling between a particle and a confined standing wave. The resulting two-dimensional iterated map captures many features of the walker dynamics observed in different configurations of confinement. These features include the time decomposition of the chaotic trajectory in quantized eigenstates and the particle statistics being shaped by the wave. It shows that deterministic wave-particle coupling expressed in its simplest form can account for some quantumlike behaviors.
Hotspots of boundary accumulation: dynamics and statistics of micro-swimmers in flowing films.
Mathijssen, Arnold J T M; Doostmohammadi, Amin; Yeomans, Julia M; Shendruk, Tyler N
2016-02-01
Biological flows over surfaces and interfaces can result in accumulation hotspots or depleted voids of microorganisms in natural environments. Apprehending the mechanisms that lead to such distributions is essential for understanding biofilm initiation. Using a systematic framework, we resolve the dynamics and statistics of swimming microbes within flowing films, considering the impact of confinement through steric and hydrodynamic interactions, flow and motility, along with Brownian and run-tumble fluctuations. Micro-swimmers can be peeled off the solid wall above a critical flow strength. However, the interplay of flow and fluctuations causes organisms to migrate back towards the wall above a secondary critical value. Hence, faster flows may not always be the most efficacious strategy to discourage biofilm initiation. Moreover, we find run-tumble dynamics commonly used by flagellated microbes to be an intrinsically more successful strategy to escape from boundaries than equivalent levels of enhanced Brownian noise in ciliated organisms.
NASA Astrophysics Data System (ADS)
Haas, R.; Pinto, J. G.
2012-12-01
The occurrence of mid-latitude windstorms is related to strong socio-economic effects. For detailed and reliable regional impact studies, large datasets of high-resolution wind fields are required. In this study, a statistical downscaling approach in combination with dynamical downscaling is introduced to derive storm related gust speeds on a high-resolution grid over Europe. Multiple linear regression models are trained using reanalysis data and wind gusts from regional climate model simulations for a sample of 100 top ranking windstorm events. The method is computationally inexpensive and reproduces individual windstorm footprints adequately. Compared to observations, the results for Germany are at least as good as pure dynamical downscaling. This new tool can be easily applied to large ensembles of general circulation model simulations and thus contribute to a better understanding of the regional impact of windstorms based on decadal and climate change projections.
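A minimal sketch of the statistical step described above: a multiple linear regression "transfer function" fitted by ordinary least squares from large-scale predictors to a local gust speed. The predictor names and synthetic training data are assumptions for illustration; the study's actual predictors and training sets are not specified here.

```python
# Illustrative only: per-gridpoint multiple linear regression from two assumed
# large-scale predictors (e.g. 850 hPa wind speed, pressure gradient) to gust.
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_mlr(X, y):
    """Ordinary least squares via the normal equations X^T X beta = X^T y."""
    Xi = [[1.0] + list(row) for row in X]  # prepend intercept column
    n, p = len(Xi), len(Xi[0])
    XtX = [[sum(Xi[r][i] * Xi[r][j] for r in range(n)) for j in range(p)]
           for i in range(p)]
    Xty = [sum(Xi[r][i] * y[r] for r in range(n)) for i in range(p)]
    return solve(XtX, Xty)

# Synthetic training sample: gust = 2 + 1.5*wind850 + 0.8*gradp, no noise
X = [(10, 3), (15, 5), (20, 2), (12, 6), (18, 4)]
y = [2 + 1.5 * w + 0.8 * g for w, g in X]
beta = fit_mlr(X, y)
print([round(b, 3) for b in beta])  # recovers [2.0, 1.5, 0.8] (up to rounding)
```

In practice one such regression would be trained per station or grid cell against reanalysis predictors, which is what makes the approach cheap compared to rerunning an RCM.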
Conn, Paul B.; Johnson, Devin S.; Ver Hoef, Jay M.; Hooten, Mevin B.; London, Joshua M.; Boveng, Peter L.
2015-01-01
Ecologists often fit models to survey data to estimate and explain variation in animal abundance. Such models typically require that animal density remains constant across the landscape where sampling is being conducted, a potentially problematic assumption for animals inhabiting dynamic landscapes or otherwise exhibiting considerable spatiotemporal variation in density. We review several concepts from the burgeoning literature on spatiotemporal statistical models, including the nature of the temporal structure (i.e., descriptive or dynamical) and strategies for dimension reduction to promote computational tractability. We also review several features as they specifically relate to abundance estimation, including boundary conditions, population closure, choice of link function, and extrapolation of predicted relationships to unsampled areas. We then compare a suite of novel and existing spatiotemporal hierarchical models for animal count data that permit animal density to vary over space and time, including formulations motivated by resource selection and allowing for closed populations. We gauge the relative performance (bias, precision, computational demands) of alternative spatiotemporal models when confronted with simulated and real data sets from dynamic animal populations. For the latter, we analyze spotted seal (Phoca largha) counts from an aerial survey of the Bering Sea where the quantity and quality of suitable habitat (sea ice) changed dramatically while surveys were being conducted. Simulation analyses suggested that multiple types of spatiotemporal models provide reasonable inference (low positive bias, high precision) about animal abundance, but have potential for overestimating precision. Analysis of spotted seal data indicated that several model formulations, including those based on a log-Gaussian Cox process, had a tendency to overestimate abundance. By contrast, a model that included a population closure assumption and a scale prior on total
Dynamic Statistical Characterization of Variation in Source Processes of Microseismic Events
NASA Astrophysics Data System (ADS)
Smith-Boughner, L.; Viegas, G. F.; Urbancic, T.; Baig, A. M.
2015-12-01
During a hydraulic fracture, water is pumped at high pressure into a formation. A proppant, typically sand, is later injected in the hope that it will make its way into a fracture, keep it open, and provide a path for the hydrocarbon to enter the well. This injection can create micro-earthquakes, generated by deformation within the reservoir during treatment. When these injections are monitored, thousands of microseismic events are recorded within several hundred cubic meters. For each well-located event, many source parameters are estimated, e.g., stress drop, Savage-Wood efficiency, and apparent stress. However, because we are evaluating outputs from a power-law process, the extent to which the failure is impacted by fluid injection or stress triggering is not immediately clear. To better detect differences in source processes, we use a set of dynamic statistical parameters which characterize various force-balance assumptions using the average distance to the nearest event, the event rate, the volume enclosed by the events, and the cumulative moment and energy from a group of events. One parameter, the Fracability index, approximates the ratio of viscous to elastic forcing and highlights differences in the response time of a rock to changes in stress. These dynamic parameters are applied to a database of more than 90,000 events in a shale-gas play in the Horn River Basin to characterize spatial-temporal variations in the source processes. In order to resolve these differences, a moving-window, nearest-neighbour approach was used. First, the center of mass of the local distribution was estimated for several source parameters. Then, a set of dynamic parameters characterizing the response of the rock was estimated. These techniques reveal changes in seismic efficiency and apparent stress that often coincide with marked changes in the Fracability index and other dynamic statistical parameters. Utilizing these approaches allowed for the characterization of fluid injection related
Statistical state dynamics of jet/wave coexistence in beta-plane turbulence
NASA Astrophysics Data System (ADS)
Constantinou, Navid; Farrell, Brian; Ioannou, Petros
Jets are commonly observed to coexist in the turbulence of planetary atmospheres with planetary scale waves and embedded vortices. These large-scale coherent structures arise and are maintained in the turbulence on time scales long compared to dissipation or advective time scales. The emergence, equilibration at finite amplitude, maintenance and stability of these structures pose fundamental theoretical problems. The emergence of jets and vortices from turbulence is not associated with an instability of the mean flow and their equilibration and stability at finite amplitude does not arise solely from the linear or nonlinear dynamics of these structures in isolation from the turbulence surrounding them. Rather the dynamics of these large-scale structures arises essentially from their cooperative interaction with the small-scale turbulence in which they are embedded. It follows that fundamental theoretical understanding of the dynamics of jets and vortices in turbulence requires adopting the perspective of the statistical state dynamics (SSD) of the entire turbulent state. In this work a theory for the jet/wave coexistence regime is developed using the SSD perspective.
Neutral dynamics with environmental noise: Age-size statistics and species lifetimes
NASA Astrophysics Data System (ADS)
Kessler, David; Suweis, Samir; Formentin, Marco; Shnerb, Nadav M.
2015-08-01
Neutral dynamics, where taxa are assumed to be demographically equivalent and their abundance is governed solely by the stochasticity of the underlying birth-death process, has proved itself an important minimal model that accounts for many empirical datasets in genetics and ecology. However, the restriction of the model to demographic, O(√N), noise yields relatively slow dynamics that appear to be in conflict with both short-term and long-term characteristics of the observed systems. Here we analyze two of these problems, age-size relationships and species extinction time, in the framework of a neutral theory with both demographic and environmental stochasticity. It turns out that environmentally induced variations of the demographic rates control the long-term dynamics and dramatically modify the predictions of the neutral theory with demographic noise only, yielding much better agreement with empirical data. We consider two prototypes of "zero mean" environmental noise, one balanced with regard to the arithmetic abundance, the other balanced in the logarithmic (fitness) space, study their species lifetime statistics, and discuss their relevance to realistic models of community dynamics.
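The contrast between demographic-only noise and added environmental stochasticity can be illustrated with a toy birth-death simulation. All parameter values, the epoch structure, and the two-state environment below are my own illustrative assumptions, not the authors' model.

```python
# Toy simulation: lifetime of one taxon under a demographic birth-death walk,
# with optional environmental epochs biasing the per-step birth probability.
import random

def lifetime(n0, env_bias, epoch_len, rng, t_max=50000):
    """Steps until abundance hits 0 (capped at t_max)."""
    n, t, p = n0, 0, 0.5
    while n > 0 and t < t_max:
        if t % epoch_len == 0:                  # redraw the environment state
            p = 0.5 + rng.choice([-1, 1]) * env_bias
        n += 1 if rng.random() < p else -1      # demographic noise: +-1 step
        t += 1
    return t

rng = random.Random(7)
demo_only = [lifetime(50, 0.0, 200, rng) for _ in range(100)]
with_env = [lifetime(50, 0.1, 200, rng) for _ in range(100)]
print(sum(demo_only) / 100, sum(with_env) / 100)
```

With these assumed parameters the environmental variant typically goes extinct much sooner, echoing the abstract's point that environmental noise controls long-term lifetimes.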
Statistical and dynamical climate predictions to guide water resources in Ethiopia
NASA Astrophysics Data System (ADS)
Block, P. J.; Goddard, L. M.
2010-12-01
Climate predictions with lead times of one season or more often provide prospects for exploiting climate-related risks and opportunities. This motivates the evaluation of precipitation prediction techniques from statistical and dynamical models, and their combination (multi-model), to potentially augment prediction skill over the Blue Nile basin in Ethiopia. Subsequently, this work considers to what degree greater skill or reliability in a particular prediction technique translates through hydropower management models, given their nonlinear response. One hundred precipitation series from the period 1981-2000 are generated to compare prediction techniques. The linked multi-model ensemble forecast - hydropower system proves superior to the systems linked to the statistical and dynamical prediction techniques across a range of metrics; an increase in annual benefits of $2-5 million on average, with equivalent or better reliability, is also evident. The forecast - hydropower system is sufficiently flexible to allow water managers to attain an optimal balance between benefits and reliability by varying exceedance probability and target energy thresholds, with the added benefit of forecast guidance. Ideally this provides decision-makers with incentives to integrate improved prediction techniques into sectoral management models, and further justifies expanding efforts into climate forecast improvement.
A Statistical Approach for the Concurrent Coupling of Molecular Dynamics and Finite Element Methods
NASA Technical Reports Server (NTRS)
Saether, E.; Yamakov, V.; Glaessgen, E.
2007-01-01
Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, increasing the size of the MD domain quickly presents intractable computational demands. A robust approach to surmount this computational limitation has been to unite continuum modeling procedures such as the finite element method (FEM) with MD analyses thereby reducing the region of atomic scale refinement. The challenging problem is to seamlessly connect the two inherently different simulation techniques at their interface. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the typical boundary value problem used to define a coupled domain. The method uses statistical averaging of the atomistic MD domain to provide displacement interface boundary conditions to the surrounding continuum FEM region, which, in return, generates interface reaction forces applied as piecewise constant traction boundary conditions to the MD domain. The two systems are computationally disconnected and communicate only through a continuous update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM) as opposed to a direct coupling method where interface atoms and FEM nodes are individually related. The methodology is inherently applicable to three-dimensional domains, avoids discretization of the continuum model down to atomic scales, and permits arbitrary temperatures to be applied.
Avalappampatty Sivasamy, Aneetha; Sundan, Bose
2015-01-01
The ever-expanding communication requirements of today's world demand extensive and efficient network systems, with equally efficient and reliable security features integrated for safe, confident, and secure communication and data transfer. Providing effective security protocols for any network environment therefore assumes paramount importance. Attempts are continuously being made to design more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling's T² method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling's T² statistical model, and the necessary profiles have been generated based on the T-square distance metric. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified as either normal or attack types. The performance of the model, as evaluated through validation and testing using the KDD Cup'99 dataset, has shown very high detection rates for all classes with low false alarm rates. The accuracy of the model presented in this work has been found to be much better than that of existing models. PMID:26357668
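The core of this approach, the Hotelling T² distance of an observed traffic profile from a baseline of normal profiles, can be sketched as follows. The two features, the toy data, and the informal decision rule are illustrative assumptions, not the paper's actual preprocessing or CLT-derived threshold.

```python
# Hedged sketch: Hotelling's T^2 anomaly score for two-feature traffic profiles.
import statistics

def mean_vector(rows):
    return [statistics.fmean(col) for col in zip(*rows)]

def covariance_2x2(rows, mu):
    n = len(rows)
    sxx = sum((r[0] - mu[0]) ** 2 for r in rows) / (n - 1)
    syy = sum((r[1] - mu[1]) ** 2 for r in rows) / (n - 1)
    sxy = sum((r[0] - mu[0]) * (r[1] - mu[1]) for r in rows) / (n - 1)
    return [[sxx, sxy], [sxy, syy]]

def t_squared(x, mu, S):
    """T^2 = (x - mu)^T S^{-1} (x - mu), using a direct 2x2 inverse."""
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    inv = [[S[1][1] / det, -S[0][1] / det],
           [-S[1][0] / det, S[0][0] / det]]
    d = [x[0] - mu[0], x[1] - mu[1]]
    return (d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1])
            + d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1]))

# Toy "normal" training profiles: (packets/s, mean packet size) -- assumed names
normal = [(100, 500), (110, 520), (95, 480), (105, 510), (98, 495)]
mu = mean_vector(normal)
S = covariance_2x2(normal, mu)
print(t_squared((102, 505), mu, S))  # near-normal observation: small score
print(t_squared((400, 60), mu, S))   # flood-like observation: large score
```

A profile whose T² score exceeds a threshold (set from the score distribution of normal traffic) would be flagged as an attack.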
Statistical analysis of nonlinear dynamical systems using differential geometric sampling methods.
Calderhead, Ben; Girolami, Mark
2011-12-01
Mechanistic models based on systems of nonlinear differential equations can help provide a quantitative understanding of complex physical or biological phenomena. The use of such models to describe nonlinear interactions in molecular biology has a long history; however, it is only recently that advances in computing have allowed these models to be set within a statistical framework, further increasing their usefulness and binding modelling and experimental approaches more tightly together. A probabilistic approach to modelling allows us to quantify uncertainty in both the model parameters and the model predictions, as well as in the model hypotheses themselves. In this paper, the Bayesian approach to statistical inference is adopted and we examine the significant challenges that arise when performing inference over nonlinear ordinary differential equation models describing cell signalling pathways and enzymatic circadian control; in particular, we address the difficulties arising owing to strong nonlinear correlation structures, high dimensionality and non-identifiability of parameters. We demonstrate how recently introduced differential geometric Markov chain Monte Carlo methodology alleviates many of these issues by making proposals based on local sensitivity information, which ultimately allows us to perform effective statistical analysis. Along the way, we highlight the deep link between the sensitivity analysis of such dynamic system models and the underlying Riemannian geometry of the induced posterior probability distributions. PMID:23226584
Pavlov; Punegov
2000-05-01
The statistical dynamical theory of X-ray diffraction is developed for a crystal containing statistically distributed microdefects. Fourier-component equations for coherent and diffuse (incoherent) scattered waves have been obtained for the case of so-called triple-crystal diffractometry. New correlation lengths and areas are introduced for the characterization of the scattering volume. PMID:10851584
Kurzyński, M
1998-01-01
An increasing body of experimental evidence indicates the slow character of the internal dynamics of native proteins. The important consequence of this is that the theories of chemical reactions used hitherto appear inadequate for the description of most biochemical reactions. Construction of a contemporary, truly advanced statistical theory of biochemical processes will need simple but realistic models of the microscopic dynamics of biomolecules. In this review, intended to be a contribution in this direction, three topics are considered. First, an intentionally simplified picture of the dynamics of native proteins, which emerges from recent investigations, is presented. Fast vibrational modes of motion, with periods varying from 10(-14) to 10(-11) s, are contrasted with purely stochastic conformational transitions. Significant evidence is adduced that the relaxation time spectrum of the latter spreads over the whole range from 10(-11) to 10(5) s or longer, and up to 10(-7) s it is practically quasi-continuous. Next, the essential ideas of the theory of reaction rates based on stochastic models of intramolecular dynamics are outlined. Special attention is paid to reactions involving molecules in initial conformational substates confined to the transition state, which is realized in actual experimental situations. And finally, the two best experimentally justified classes of models of conformational transition dynamics, symbolically referred to as "protein glass" and "protein machine", are described and applied to the interpretation of a few simple biochemical processes. Perhaps the most important result reported is the demonstration of the possibility of predominance of the short, initial-condition-dependent stage of protein-involved reactions over the main stage described by standard kinetics. This initial stage, and not the latter, is expected to be responsible for the coupling of component reactions in complete enzymatic cycles as well as in more complex processes of
Dynamic Events in the Jovian Magnetotail: A Statistical Study From Magnetic Field Measurements
NASA Astrophysics Data System (ADS)
Vogt, M. F.; Kivelson, M. G.; Khurana, K. K.; Joy, S. P.; Walker, R. J.
2007-12-01
The Galileo mission to Jupiter provided magnetic field and energetic particle measurements of the Jovian plasma sheet in the near and distant tail. Dynamic events associated with reconnection have been observed in the Jovian magnetotail [Russell et al., 1998]. These events are thought to imply the presence of a nearby neutral point. In the magnetic field data, they are characterized by an intensification of Bθ, the component of the magnetic field perpendicular to the equator. We have analyzed Galileo magnetometer data from all intervals with time resolution of 24 s per vector or better, in regions of the magnetotail between 1800 and 0600 local time and at radial distances beyond 30 RJ, to identify dynamic events occurring in the Jovian magnetotail. Our criteria require that Bθ increase by at least a factor of 2 over background levels and that the bend-back of the field lines with respect to the radial direction change concurrently. A change of the bend-back angle indicates that the rotational speed of the plasma is either increasing or decreasing to conserve angular momentum as plasma flows toward Jupiter or down the tail, respectively. Our analysis has identified nearly 1,000 dynamic events in 28 orbits comprising 7,500 hours of data. These events occur as far down the tail as 142 RJ. There is no evidence of a significant dawn-dusk asymmetry in the frequency of events, though we do find that the frequency of events decreases near midnight. A previous statistical study of particle bursts in the Jovian magnetotail [Woch, Krupp, and Lagg, 2002] found that the events were concentrated in the dawn sector and noted the location of a statistical separatrix dividing predominantly inward and outward flow. Our results also suggest a similar contour separating the flow patterns.
NASA Astrophysics Data System (ADS)
Laugel, Amélie; Menendez, Melisa; Benoit, Michel; Mattarolo, Giovanni; Mendez, Fernando
2013-04-01
Wave climate forecasting is a major issue for numerous marine and coastal activities, such as offshore industries, flooding risk assessment and wave energy resource evaluation, among others. Generally, there are two main ways to predict the impacts of climate change on the wave climate at regional scale: dynamical and statistical downscaling of a GCM (Global Climate Model). In this study, both methods have been applied to the French coast (Atlantic, English Channel and North Sea shorelines) under three climate change scenarios (A1B, A2, B1) simulated with the GCM ARPEGE-CLIMAT from Météo-France (AR4, IPCC). The aim of the work is to characterise the wave climatology of the 21st century and to compare the statistical and dynamical methods, pointing out the advantages and disadvantages of each approach. The statistical downscaling method proposed by the Environmental Hydraulics Institute of Cantabria (Spain) has been applied (Menendez et al., 2011). At a particular location, the sea-state climate (predictand Y) is defined as a function, Y=f(X), of several atmospheric circulation patterns (predictor X). Assuming these climate associations between predictor and predictand are stationary, the statistical approach has been used to project future wave conditions with reference to the GCM. The statistical relations between predictor and predictand have been established over 31 years, from 1979 to 2009. The predictor is built as the 3-day-averaged squared sea level pressure gradient from the hourly CFSR database (Climate Forecast System Reanalysis, http://cfs.ncep.noaa.gov/cfsr/). The predictand has been extracted from the 31-year hindcast sea-state database ANEMOC-2 performed with the 3G spectral wave model TOMAWAC (Benoit et al., 1996), developed at EDF R&D LNHE and the Saint-Venant Laboratory for Hydraulics and forced by the CFSR 10 m wind field. Significant wave height, peak period and mean wave direction have been extracted with hourly resolution at
Technology Transfer Automated Retrieval System (TEKTRAN)
An experimental field moisture controlled subsurface drip irrigation (SDI) system was designed and installed as a field trial in a Vertisol in the Alabama Black Belt region for two years. The system was designed to start hydraulic dosing only when field moisture was below field capacity. Results sho...
Technology Transfer Automated Retrieval System (TEKTRAN)
Subsurface drip irrigation (SDI) as with all microirrigation systems is typically only used on crops with greater value. In the U.S. Great Plains region, the typical irrigated crops are the cereal and oil seed crops and cotton. These crops have less economic revenue than typical microirrigated cro...
ERIC Educational Resources Information Center
Wixon, D. W.; Housman, E. M.
The report describes a large-scale computerized system for Selective Dissemination of Information (SDI) developed over the past five years at the U.S. Army Electronics Command to serve its technical personnel. The system, which uses as its document base the current accessions of the Defense Documentation Center, was developed in three phases: (1)…
Enhancing dynamic graphical analysis with the Lisp-Stat language and the ViSta statistical program.
Ledesma, Rubén; Molina, J Gabriel; Young, Forrest W
2005-11-01
Presented is a sample of computerized methods aimed at multidimensional scaling and psychometric item analysis that offer a dynamic graphical interface to execute analyses and help visualize the results. These methods show how the Lisp-Stat programming language and the ViSta statistical program can be jointly applied to develop powerful computer applications that enhance dynamic graphical analysis methods. The feasibility of this combined strategy relies on two main features: (1) The programming architecture of ViSta enables users to add new statistical methods as plug-ins, which are integrated into the program environment and can make use of all the functions already available in ViSta (e.g., data manipulation, editing, printing); and (2) the set of powerful statistical and graphical functions integrated into the Lisp-Stat programming language provides the means for developing statistical methods with dynamic graphical visualizations, which can be implemented as ViSta plug-ins. PMID:16629303
A copula approach on the dynamics of statistical dependencies in the US stock market
NASA Astrophysics Data System (ADS)
Münnix, Michael C.; Schäfer, Rudi
2011-11-01
We analyze the statistical dependence structure of the S&P 500 constituents in the 4-year period from 2007 to 2010 using intraday data from the New York Stock Exchange’s TAQ database. Instead of using a given parametric copula with a predetermined shape, we study the empirical pairwise copula directly. We find that the shape of this copula resembles the Gaussian copula to some degree, but exhibits a stronger tail dependence, for both correlated and anti-correlated extreme events. By comparing the tail dependence dynamically to the market’s average correlation level as a commonly used quantity we disclose the average level of error of the Gaussian copula, which is implied in the calculation of many correlation coefficients.
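At its core, the empirical pairwise copula analysis described above reduces to rank-transforming each return series (mapping observations to copula scale) and counting joint exceedances in the tails. A minimal sketch in Python; the function names and the q = 0.1 tail cutoff are illustrative, not the paper's exact estimator:

```python
def ranks(xs):
    """Normalized ranks in (0, 1): the empirical marginal CDF values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank / (len(xs) + 1)
    return r

def lower_tail_dependence(x, y, q=0.1):
    """Empirical lower-tail dependence: P(V <= q | U <= q), where U and V
    are the rank-transformed (copula-scale) observations."""
    u, v = ranks(x), ranks(y)
    below_u = [i for i in range(len(x)) if u[i] <= q]
    if not below_u:
        return 0.0
    joint = sum(1 for i in below_u if v[i] <= q)
    return joint / len(below_u)
```

Comparing such an estimate against the value implied by a Gaussian copula at the same quantile level is one way to quantify the average error of the Gaussian assumption mentioned in the abstract.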
Song, Dong; Chan, Rosa H M; Marmarelis, Vasilis Z; Hampson, Robert E; Deadwyler, Sam A; Berger, Theodore W
2007-01-01
A multiple-input multiple-output nonlinear dynamic model of spike-train to spike-train transformations was previously formulated for hippocampal-cortical prostheses. This paper further describes the statistical methods for selecting significant inputs (self-terms) and interactions between inputs (cross-terms) in this Volterra kernel-based model. In our approach, model structure was determined by progressively adding self-terms and cross-terms using a forward stepwise model selection technique. Model coefficients were then pruned based on the Wald test. Results showed that the reduced kernel models, which contained far fewer coefficients than the full Volterra kernel model, gave good fits to novel data. These models can be used to analyze the functional interactions between neurons during behavior.
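The forward stepwise selection described above can be illustrated with a simplified greedy procedure: at each step, add the candidate term that most reduces the residual sum of squares. The sketch below fits one coefficient per term against the current residual (a stagewise simplification for illustration; the actual Volterra-kernel fitting and Wald-test pruning are more involved):

```python
def fit_coeff(feature, target):
    """Least-squares coefficient for a single feature fit to the target."""
    num = sum(f * t for f, t in zip(feature, target))
    den = sum(f * f for f in feature) or 1.0
    return num / den

def forward_stepwise(candidates, target, n_terms):
    """Greedily add the candidate term (self- or cross-term) that gives the
    largest reduction in residual sum of squares at each step."""
    residual = list(target)
    selected = []
    for _ in range(n_terms):
        best = None
        for name, feat in candidates.items():
            if name in dict(selected):
                continue
            b = fit_coeff(feat, residual)
            rss = sum((r - b * f) ** 2 for r, f in zip(residual, feat))
            if best is None or rss < best[2]:
                best = (name, b, rss)
        name, b, _ = best
        selected.append((name, b))
        residual = [r - b * f for r, f in zip(residual, candidates[name])]
    return selected
```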
Collisional statistics and dynamics of two-dimensional hard-disk systems: From fluid to solid.
Taloni, Alessandro; Meroz, Yasmine; Huerta, Adrián
2015-08-01
We perform extensive MD simulations of two-dimensional systems of hard disks, focusing on the collisional statistical properties. We analyze the distribution functions of velocity, free flight time, and free path length for packing fractions ranging from the fluid to the solid phase. The behaviors of the mean free flight time and path length between subsequent collisions are found to drastically change in the coexistence phase. We show that single-particle dynamical properties behave analogously in collisional and continuous-time representations, exhibiting apparent crossovers between the fluid and the solid phases. We find that, both in collisional and continuous-time representation, the mean-squared displacement, velocity autocorrelation functions, intermediate scattering functions, and self-part of the van Hove function (propagator) closely reproduce the same behavior exhibited by the corresponding quantities in granular media, colloids, and supercooled liquids close to the glass or jamming transition. PMID:26382368
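Event-driven hard-disk simulations of the kind described rest on an analytic collision-time computation: the time at which the centre-to-centre distance of two disks first equals the disk diameter. A minimal 2-D sketch (function and variable names are illustrative):

```python
import math

def collision_time(r1, v1, r2, v2, sigma):
    """Time until two hard disks of diameter sigma collide, or None.
    Solves |dr + dv*t| = sigma for the smallest positive root."""
    dr = [r2[0] - r1[0], r2[1] - r1[1]]
    dv = [v2[0] - v1[0], v2[1] - v1[1]]
    b = dr[0] * dv[0] + dr[1] * dv[1]   # dr . dv
    if b >= 0:                          # disks are moving apart
        return None
    a = dv[0] ** 2 + dv[1] ** 2
    c = dr[0] ** 2 + dr[1] ** 2 - sigma ** 2
    disc = b * b - a * c
    if disc < 0:                        # glancing miss
        return None
    return (-b - math.sqrt(disc)) / a
```

The free flight times and free path lengths whose distributions the study analyzes are exactly the gaps between successive such collision events.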
Development of a Dynamics-Based Statistical Prediction Model for the Changma Onset
NASA Astrophysics Data System (ADS)
Park, H. L.; Seo, K. H.; Son, J. H.
2015-12-01
The timing of the changma onset has high impacts on the Korean Peninsula, yet its seasonal prediction remains a great challenge because the changma undergoes various influences from the tropics, subtropics, and midlatitudes. In this study, a dynamics-based statistical prediction model for the changma onset is proposed. This model utilizes three predictors of slowly varying sea surface temperature anomalies (SSTAs) over the northern tropical central Pacific, the North Atlantic, and the North Pacific occurring in the preceding spring season. SSTAs associated with each predictor persist until June and have an effect on the changma onset by inducing an anticyclonic anomaly to the southeast of the Korean Peninsula earlier than the climatological changma onset date. The persisting negative SSTAs over the northern tropical central Pacific and accompanying anomalous trade winds induce enhanced convection over the far-western tropical Pacific; in turn, these induce a cyclonic anomaly over the South China Sea and an anticyclonic anomaly southeast of the Korean Peninsula. Diabatic heating and cooling tendency related to the North Atlantic dipolar SSTAs induces downstream Rossby wave propagation in the upper troposphere, developing a barotropic anticyclonic anomaly to the south of the Korean Peninsula. A westerly wind anomaly at around 45°N resulting from the developing positive SSTAs over the North Pacific directly reduces the strength of the Okhotsk high and gives rise to an anticyclonic anomaly southeast of the Korean Peninsula. With the dynamics-based statistical prediction model, it is demonstrated that the changma onset has considerable predictability of r = 0.73 for the period from 1982 to 2014.
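The quoted skill (r = 0.73) is a Pearson correlation between predicted and observed onset dates, with predictions formed as a linear combination of the three spring SSTA predictors. A schematic version (the weights would be fit by regression over the 1982-2014 training period; names are illustrative):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def predict_onset(predictors, weights, bias):
    """Linear combination of the predictor series (e.g. the three spring
    SSTA indices), one prediction per year."""
    return [sum(w * p for w, p in zip(weights, row)) + bias
            for row in zip(*predictors)]
```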
Drought episodes over Greece as simulated by dynamical and statistical downscaling approaches
NASA Astrophysics Data System (ADS)
Anagnostopoulou, Christina
2016-04-01
Drought over the Greek region is characterized by a strong seasonal cycle and large spatial variability. Dry spells longer than 10 consecutive days largely determine the duration and intensity of Greek droughts. An increasing trend in the frequency of drought episodes has been observed, especially during the last 20 years of the 20th century. At the same time, the most recent regional climate models (RCMs) show discrepancies with respect to observed precipitation, although they are able to reproduce the main patterns of atmospheric circulation. In this study, both a statistical and a dynamical downscaling approach are used to quantify drought episodes over Greece by simulating the Standardized Precipitation Index (SPI) at different time steps (3, 6, and 12 months). A statistical downscaling technique based on artificial neural networks is employed for the estimation of SPI over Greece, while the drought index is also estimated using RCM precipitation for the period 1961-1990. Overall, it was found that drought characteristics (intensity, duration, and spatial extent) are well reproduced by the regional climate models for the long-term drought index (SPI12), while the ANN simulations are better for the short-term drought index (SPI3).
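The SPI standardizes accumulated precipitation over a chosen window (3, 6, or 12 months). The operational index fits a gamma distribution to the accumulations and maps the cumulative probabilities through the inverse normal CDF; the sketch below substitutes a plain z-score as a simplified stand-in:

```python
import math

def rolling_sums(precip, window):
    """Accumulated precipitation over each `window`-month span."""
    return [sum(precip[i - window + 1:i + 1])
            for i in range(window - 1, len(precip))]

def simplified_spi(precip, window=3):
    """Z-score of the windowed precipitation sums. The operational SPI
    instead fits a gamma distribution to the sums and maps probabilities
    through the inverse normal CDF; this z-score is a rough stand-in."""
    sums = rolling_sums(precip, window)
    mean = sum(sums) / len(sums)
    var = sum((s - mean) ** 2 for s in sums) / len(sums)
    std = math.sqrt(var) or 1.0   # guard against a constant series
    return [(s - mean) / std for s in sums]
```

Negative values of the index then mark drier-than-normal spells; persistent values below about -1 are commonly read as drought episodes.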
Yang, Jaw-Yen; Yan, Chih-Yuan; Diaz, Manuel; Huang, Juan-Chen; Li, Zhihui; Zhang, Hanxin
2014-01-01
The ideal quantum gas dynamics as manifested by the semiclassical ellipsoidal-statistical (ES) equilibrium distribution derived in Wu et al. (Wu et al. 2012 Proc. R. Soc. A 468, 1799-1823 (doi:10.1098/rspa.2011.0673)) is numerically studied for particles of three statistics. This anisotropic ES equilibrium distribution was derived using the maximum entropy principle and conserves the mass, momentum and energy, but differs from the standard Fermi-Dirac or Bose-Einstein distribution. The present numerical method combines the discrete velocity (or momentum) ordinate method in momentum space and the high-resolution shock-capturing method in physical space. A decoding procedure to obtain the necessary parameters for determining the ES distribution is also devised. Computations of two-dimensional Riemann problems are presented, and various contours of the quantities unique to this ES model are illustrated. The main flow features, such as shock waves, expansion waves and slip lines and their complex nonlinear interactions, are depicted and found to be consistent with existing calculations for a classical gas. PMID:24399919
NASA Astrophysics Data System (ADS)
Casanueva, A.; Herrera, S.; Fernández, J.; Frías, M. D.; Gutiérrez, J. M.
2013-08-01
The study of extreme events has become of great interest in recent years due to their direct impact on society. Extremes are usually evaluated by using extreme indicators, based on order statistics on the tail of the probability distribution function (typically percentiles). In this study, we focus on the tail of the distribution of daily maximum and minimum temperatures. For this purpose, we analyse high (95th) and low (5th) percentiles in daily maximum and minimum temperatures on the Iberian Peninsula, respectively, derived from different downscaling methods (statistical and dynamical). First, we analyse the performance of reanalysis-driven downscaling methods in present climate conditions. The comparison among the different methods is performed in terms of the bias of seasonal percentiles, considering as observations the public gridded data sets E-OBS and Spain02, and obtaining an estimation of both the mean and spatial percentile errors. Secondly, we analyse the increments of future percentile projections under the SRES A1B scenario and compare them with those corresponding to the mean temperature, showing that their relative importance depends on the method, and stressing the need to consider an ensemble of methodologies.
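The percentile-bias evaluation described above amounts to computing a tail percentile of the downscaled series and of the observed series and differencing them. A minimal sketch using a nearest-rank percentile convention (one of several common conventions; the choice matters little for long seasonal series):

```python
def percentile(sample, p):
    """Nearest-rank percentile of a sample, p in [0, 100]."""
    s = sorted(sample)
    k = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[k]

def percentile_bias(model, obs, p):
    """Seasonal percentile bias: model percentile minus observed percentile,
    e.g. p=95 for daily maximum or p=5 for daily minimum temperatures."""
    return percentile(model, p) - percentile(obs, p)
```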
NASA Astrophysics Data System (ADS)
Funk, C. C.; Shukla, S.; Hoerling, M. P.; Robertson, F. R.; Hoell, A.; Liebmann, B.
2013-12-01
During boreal spring, eastern portions of Kenya and Somalia have experienced more frequent droughts since 1999. Given the region's high levels of food insecurity, better predictions of these droughts could provide substantial humanitarian benefits. We show that dynamical-statistical seasonal climate forecasts, based on the latest generation of coupled atmosphere-ocean and uncoupled atmospheric models, effectively predict boreal spring rainfall in this area. Skill sources are assessed by comparing ensembles driven with full-ocean forcing with ensembles driven with ENSO-only sea surface temperatures (SSTs). Our analysis suggests that both ENSO and non-ENSO Indo-Pacific SST forcing have played an important role in the increase in drought frequencies. Over the past 30 years, La Niña drought teleconnections have strengthened, while non-ENSO Indo-Pacific convection patterns have also supported increased (decreased) Western Pacific (East African) rainfall. To further examine the relative contributions of ENSO, low-frequency warming, and the Pacific Decadal Oscillation, we present decompositions of ECHAM5, GFS, CAM4 and GMAO AMIP simulations. These decompositions suggest that rapid warming in the western Pacific and steeper western-to-central Pacific SST gradients have likely played an important role in the recent intensification of the Walker circulation and the associated increase in East African aridity. A linear combination of time series describing the Pacific Decadal Oscillation and the strength of Indo-Pacific warming is shown to track East African rainfall reasonably well. The talk concludes with a few thoughts on the potentially important interplay of attribution and prediction. At least for recent East African droughts, it appears that a characteristic Indo-Pacific SST and precipitation anomaly pattern can be linked statistically to support forecasts and attribution analyses. The combination of traditional AGCM attribution analyses with simple yet
An Embedded Statistical Method for Coupling Molecular Dynamics and Finite Element Analyses
NASA Technical Reports Server (NTRS)
Saether, E.; Glaessgen, E.H.; Yamakov, V.
2008-01-01
The coupling of molecular dynamics (MD) simulations with finite element methods (FEM) yields computationally efficient models that link fundamental material processes at the atomistic level with continuum field responses at higher length scales. The theoretical challenge involves developing a seamless connection along an interface between two inherently different simulation frameworks. Various specialized methods have been developed to solve particular classes of problems. Many of these methods link the kinematics of individual MD atoms with FEM nodes at their common interface, necessarily requiring that the finite element mesh be refined to atomic resolution. Some of these coupling approaches also require simulations to be carried out at 0 K and restrict modeling to two-dimensional material domains due to difficulties in simulating full three-dimensional material processes. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the standard boundary value problem used to define a coupled domain. The method replaces a direct linkage of individual MD atoms and finite element (FE) nodes with a statistical averaging of atomistic displacements in local atomic volumes associated with each FE node in an interface region. The FEM and MD computational systems are effectively independent and communicate only through an iterative update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM). ESCM provides an enhanced coupling methodology that is inherently applicable to three-dimensional domains, avoids discretization of the continuum model to atomic scale resolution, and permits finite temperature states to be applied.
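The key ESCM step, replacing per-atom kinematic linkage with statistical averages over local atomic volumes, can be sketched as follows: each FE interface node is assigned the mean displacement of the atoms in its local volume, here taken as a sphere of a chosen radius (the paper's exact partition of the interface region may differ):

```python
def nodal_average_displacements(atom_positions, atom_displacements,
                                node_positions, radius):
    """For each FE interface node, average the displacements of all atoms
    within `radius` of the node; nodes with no atoms get zero displacement."""
    averages = []
    for nx, ny, nz in node_positions:
        acc = [0.0, 0.0, 0.0]
        count = 0
        for (ax, ay, az), disp in zip(atom_positions, atom_displacements):
            if (ax - nx) ** 2 + (ay - ny) ** 2 + (az - nz) ** 2 <= radius ** 2:
                for k in range(3):
                    acc[k] += disp[k]
                count += 1
        averages.append([a / count for a in acc] if count else [0.0, 0.0, 0.0])
    return averages
```

In the iterative scheme described above, these nodal averages would be passed to the FEM side as boundary data, and the FEM solution would in turn update the boundary conditions applied to the MD region.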
Soares, Jitesh A.; Ellermeier, Craig D.; Altier, Craig; Lawhon, Sara D.; Adams, L. Garry; Konjufca, Vjollca; Curtiss, Roy; Slauch, James M.; Ahmer, Brian M. M.
2008-01-01
Background LuxR-type transcription factors are typically used by bacteria to determine the population density of their own species by detecting N-acylhomoserine lactones (AHLs). However, while Escherichia and Salmonella encode a LuxR-type AHL receptor, SdiA, they cannot synthesize AHLs. In vitro, it is known that SdiA can detect AHLs produced by other bacterial species. Methodology/Principal Findings In this report, we tested the hypothesis that SdiA detects the AHL-production of other bacterial species within the animal host. SdiA did not detect AHLs during the transit of Salmonella through the gastrointestinal tract of a guinea pig, a rabbit, a cow, 5 mice, 6 pigs, or 12 chickens. However, SdiA was activated during the transit of Salmonella through turtles. All turtles examined were colonized by the AHL-producing species Aeromonas hydrophila. Conclusions/Significance We conclude that the normal gastrointestinal microbiota of most animal species do not produce AHLs of the correct type, in an appropriate location, or in sufficient quantities to activate SdiA. However, the results obtained with turtles represent the first demonstration of SdiA activity in animals. PMID:18665275
Ni, Bo; He, Fazhi; Yuan, ZhiYong
2015-12-01
Segmenting the lesion areas from ultrasound (US) images is an important step in the intra-operative planning of high-intensity focused ultrasound (HIFU). However, accurate segmentation remains a challenge due to intensity inhomogeneity, blurry boundaries in HIFU US images and the deformation of uterine fibroids caused by patient's breathing or external force. This paper presents a novel dynamic statistical shape model (SSM)-based segmentation method to accurately and efficiently segment the target region in HIFU US images of uterine fibroids. For accurately learning the prior shape information of lesion boundary fluctuations in the training set, the dynamic properties of stochastic differential equation and Fokker-Planck equation are incorporated into SSM (referred to as SF-SSM). Then, a new observation model of lesion areas (named to RPFM) in HIFU US images is developed to describe the features of the lesion areas and provide a likelihood probability to the prior shape given by SF-SSM. SF-SSM and RPFM are integrated into active contour model to improve the accuracy and robustness of segmentation in HIFU US images. We compare the proposed method with four well-known US segmentation methods to demonstrate its superiority. The experimental results in clinical HIFU US images validate the high accuracy and robustness of our approach, even when the quality of the images is unsatisfactory, indicating its potential for practical application in HIFU therapy.
NASA Astrophysics Data System (ADS)
Fitzgerald, J.; Farrell, B.
2013-12-01
Equatorial deep jets (EDJs) are persistent, zonally-coherent jets found within one degree of the equator in all ocean basins (Luyten and Swallow, 1976). The jets are characterized by a vertically oscillating ('stacked') structure between ~500-2000m depth, with jet amplitudes on the order of 10 cm/s superimposed upon a large-scale background shear flow. EDJs are a striking feature of the equatorial climate system and play an important role in equatorial ocean transport. However, the physical mechanism responsible for the presence of EDJs remains uncertain. Previous theoretical models for EDJs have suggested mechanisms involving the reflection and constructive interference of equatorially trapped waves (Wunsch 1977, McCreary 1984) and the instability of mixed Rossby-gravity waves with EDJs as the fastest-growing eigenfunction (Hua et al. 2008, Eden et al. 2008). In this work we explore the jet formation mechanism and the parameter dependence of EDJ structure in the idealized theoretical model of the stochastically-driven equatorial beta plane. The model is formulated in three ways: 1) Fully nonlinear equations of motion 2) Quasilinear (or mean-field) dynamics 3) Statistical state dynamics employing a second order closure method (stochastic structural stability theory). Results from the three models are compared, and the implications for both the jet formation and equilibration mechanisms, as well as the role of eddy-eddy nonlinearity in the EDJ system, are discussed.
Four-level atom dynamics and emission statistics using a quantum jump approach
NASA Astrophysics Data System (ADS)
Sandhya, S. N.
2007-01-01
Four-level atom dynamics is studied in a ladder system in the nine-parameter space of driving field strengths, detunings and decay constants, {Ω1,Ω2,Ω3,Δ1,Δ2,Δ3,Γ2,Γ3,Γ4}. One can selectively excite, or induce two-level behavior between, particular levels of one's choice by appropriately tuning the driving field strengths at three-photon resonance. The dynamics may be classified into two main regions of interest: (i) small Ω2, coupling the ∣2⟩-∣3⟩ transition, and (ii) large Ω2. In case (i) one sees two-level behavior involving adjacent levels and, in a particular region of the parameter space, intermittent shelving of the electron in one of the two subsystems. In case (ii) the relevant levels are the ground state and the uppermost level. Emission statistics is studied using the delay-function approach in both cases. In case (i), the behavior of the second-order correlation function g2(t) is similar to that of two-level emission for low Ω1, coupling the ∣1⟩-∣2⟩ transition, and the correlation increases with Ω1 for smaller time delays. In case (ii), when in addition Ω3, coupling the ∣3⟩-∣4⟩ transition, is kept low, g2(t) shows a super-Poissonian distribution, which may be attributed to three-photon processes.
Ingber, Lester; Nunez, Paul L
2011-02-01
The dynamic behavior of scalp potentials (EEG) is apparently due to some combination of global and local processes with important top-down and bottom-up interactions across spatial scales. In treating global mechanisms, we stress the importance of myelinated axon propagation delays and periodic boundary conditions in the cortical-white matter system, which is topologically close to a spherical shell. By contrast, the proposed local mechanisms are multiscale interactions between cortical columns via short-ranged non-myelinated fibers. A mechanical model consisting of a stretched string with attached nonlinear springs demonstrates the general idea. The string produces standing waves analogous to large-scale coherent EEG observed in some brain states. The attached springs are analogous to the smaller (mesoscopic) scale columnar dynamics. Generally, we expect string displacement and EEG at all scales to result from both global and local phenomena. A statistical mechanics of neocortical interactions (SMNI) calculates oscillatory behavior consistent with typical EEG, within columns, between neighboring columns via short-ranged non-myelinated fibers, across cortical regions via myelinated fibers, and also derives a string equation consistent with the global EEG model.
A STATISTICAL SURVEY OF DYNAMIC PRESSURE PULSES IN THE SOLAR WIND BASED ON WIND OBSERVATIONS
Zuo, Pingbing; Feng, Xueshang; Wang, Yi; Xie, Yanqiong; Xu, Xiaojun E-mail: fengx@spaceweather.ac.cn
2015-07-20
Solar wind dynamic pressure pulse (DPP) structures, across which the dynamic pressure changes abruptly over timescales from a few seconds to several minutes, are often observed in the near-Earth space environment. The space weather effects of DPPs on the magnetosphere–ionosphere coupling system have been widely investigated in the last two decades. In this study, we perform a statistical survey on the properties of DPPs near 1 AU based on nearly 20 years of observations from the WIND spacecraft. It is found that only a tiny fraction of DPPs (around 4.2%) can be regarded as interplanetary shocks. For most DPPs, the total pressure (the sum of the thermal pressure and magnetic pressure) remains in equilibrium, but there also exists a small fraction of DPPs that are not pressure-balanced. The overwhelming majority of DPPs are associated with solar wind disturbances, including coronal mass ejection-related flows, corotating interaction regions, as well as complex ejecta. The annual variations of the averaged occurrence rate of DPPs are roughly in phase with the solar activity during solar cycle 23, and during the rising phase of solar cycle 24.
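Identifying DPPs from plasma data involves computing the dynamic pressure P = ρv² and flagging abrupt changes relative to the background. A simplified sketch (the 1.5 jump ratio and sample-to-sample comparison are illustrative; survey selection criteria also involve timescale windows and background averaging):

```python
def dynamic_pressure(n_cm3, v_kms):
    """Solar wind dynamic pressure in nPa: P = rho * v^2, assuming a pure
    proton plasma, with density n in cm^-3 and speed v in km/s."""
    m_p = 1.6726e-27            # proton mass, kg
    rho = n_cm3 * 1e6 * m_p     # mass density, kg/m^3
    return rho * (v_kms * 1e3) ** 2 * 1e9   # Pa -> nPa

def find_dpps(pressure, min_ratio=1.5):
    """Flag indices where dynamic pressure jumps abruptly between
    consecutive samples, in either direction."""
    events = []
    for i in range(1, len(pressure)):
        lo, hi = sorted((pressure[i - 1], pressure[i]))
        if lo > 0 and hi / lo >= min_ratio:
            events.append(i)
    return events
```

Typical quiet solar wind near 1 AU (n ≈ 5 cm⁻³, v ≈ 400 km/s) gives a dynamic pressure of order 1 nPa, so jumps of tens of percent are readily detectable against that background.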
Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.
2001-04-09
The estimation of time-activity curves and kinetic model parameters directly from projection data is potentially useful for clinical dynamic single photon emission computed tomography (SPECT) studies, particularly in clinics that have only single-detector systems and thus cannot perform rapid tomographic acquisitions. Because the radiopharmaceutical distribution changes while the SPECT gantry rotates, projections at different angles come from different tracer distributions. A dynamic image sequence reconstructed from the inconsistent projections acquired by a slowly rotating gantry can contain artifacts that lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying regions of interest on the images. If cone beam collimators are used and the focal point of the collimators always remains in a particular transaxial plane, additional artifacts can arise in other planes reconstructed using insufficient projection samples [1]. If the projection samples truncate the patient's body, additional image artifacts can result. To overcome these sources of bias in conventional image-based dynamic data analysis, we and others have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view [2-8]. In our previous work we developed a computationally efficient method for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions from dynamic SPECT projection data [5], which extended Formiconi's least squares algorithm for reconstructing temporally static distributions [9]. In addition, we studied the biases that result from modeling various orders of temporal continuity and from using various time samplings [5]. In the present work, we address computational issues associated with evaluating the statistical uncertainty of
NASA Astrophysics Data System (ADS)
Ahn, J.; Lee, J.; Shim, K.; Kim, Y.
2013-12-01
In spite of the dense meteorological observation network over South Korea (average distance between stations: ~12.7 km), detailed topographical effects are not captured properly, owing to the mountainous terrain and to observation sites mostly situated at low altitudes. A model represents such topographical effects well, but because of systematic model biases the simulated temperature distribution is sometimes far from observations. This study attempts to produce a detailed mean temperature distribution for South Korea through a method combining dynamical downscaling and statistical correction. For the dynamical downscaling, a multi-nesting technique is applied to obtain 3-km resolution data over the domain for a period of 10 years (1999-2008). For the correction of systematic biases, the field is divided into a mean part and a perturbation part, with a different correction method applied to each. The mean is corrected by a weighting function, while the perturbation is corrected by the self-organizing maps method. The corrected results agree well with the observed pattern compared to those without correction, improving the spatial and temporal correlations as well as the RMSE. In addition, they represent detailed spatial features of temperature, including topographic signals, which cannot be expressed properly by gridded observation. Through comparison of in-situ observations with gridded values after objective analysis, it was found that the detailed structure correctly reflects topographically diverse signals that could not be derived from limited observation data. We expect that the correction method developed in this study can be used effectively for the analysis and projection of climate downscaled with regional climate models. Acknowledgements This work was carried out with the support of Korea Meteorological Administration Research and Development Program under Grant CATER 2012-3083 and
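The mean/perturbation split used in the correction can be sketched simply: decompose the model series into its mean and deviations, nudge the mean toward observations with a weighting factor, and recombine. (The study corrects the perturbation part separately with self-organizing maps; the single scalar weight here is illustrative.)

```python
def split_mean_perturbation(series):
    """Decompose a series into its mean and the deviations about it."""
    mean = sum(series) / len(series)
    return mean, [x - mean for x in series]

def correct_field(model_series, obs_mean, weight):
    """Correct the mean part toward the observed mean with a weighting
    factor in [0, 1], leaving the perturbation part unchanged."""
    mean, pert = split_mean_perturbation(model_series)
    corrected_mean = mean + weight * (obs_mean - mean)
    return [corrected_mean + p for p in pert]
```

With weight = 1 the model mean is replaced outright by the observed mean; intermediate weights blend the two, which is useful where observations are sparse or unrepresentative of the grid cell.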
NASA Astrophysics Data System (ADS)
Zhang, Jin Z.; Kreger, Melissa A.; Klaerner, Gerrit; Kreyenschmidt, M.; Miller, Robert D.; Scott, J. Campbell
1997-12-01
The formation and decay dynamics of photogenerated excitons in polyfluorene statistical co-polymers in solutions and in thin films have been studied using femtosecond transient absorption spectroscopy. In solution, photoexcitation of the polymer generates primarily intrachain singlet excitons, which are initially hot and then relax quickly (< 200 fs) towards the equilibrium position in the excited state. The exciton subsequently decays following a double exponential with time constants of 30 ps and 330 ps in toluene. The fast decay is attributable to vibrational relaxation, spectral diffusion, or internal conversion (recombination) of the exciton from the excited to the ground electronic state through tunneling or thermally activated barrier crossing before thermalization. The slow decay is assigned to conversion of the thermalized exciton to the ground state through both radiative and non-radiative pathways. In films, the exciton dynamics are found to depend strongly on excitation intensity. At low intensity, the dynamics are similar to those in solution, with a double-exponential decay with time constants of 15 ps and 300 ps. At high intensities, a fast decay component with a time constant of 0.8 ps appears, which becomes more dominant at higher intensities. This fast decay is attributed to exciton-exciton annihilation due to the high density of excitons created. The signal in films at both low and high excitation intensities is attributable to intrachain singlet excitons, as in solution. There is no evidence for formation of interchain bound polaron pairs in films at low intensities. At high intensities, the possibility cannot be ruled out completely, especially in relation to the fast decay. If bound polaron pairs are formed, as indicated by the fast decay, they must be generated through interaction between excitons on different chains, since they are absent at low power, and they must be created and then decay within about 1 ps.
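The double-exponential decay reported above can be recovered from transient-absorption data with a standard nonlinear fit. The data below are synthetic: the amplitudes, noise level, and time grid are illustrative assumptions, with only the 30 ps / 330 ps time constants taken from the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

# Double-exponential decay model of the transient-absorption signal,
# with fast (~30 ps) and slow (~330 ps) components as in the abstract.
def double_exp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic data with the solution-phase time constants (amplitudes assumed).
rng = np.random.default_rng(0)
t = np.linspace(0, 1500, 300)            # delay time, ps
signal = double_exp(t, 0.6, 30.0, 0.4, 330.0) + rng.normal(0, 0.005, t.size)

popt, _ = curve_fit(double_exp, t, signal, p0=(0.5, 20.0, 0.5, 300.0))
a1, tau1, a2, tau2 = popt
print(f"tau_fast = {tau1:.0f} ps, tau_slow = {tau2:.0f} ps")
```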
A Lagrangian dynamical theory for the mass function of cosmic structures - II. Statistics
NASA Astrophysics Data System (ADS)
Monaco, Pierluigi
1997-09-01
The statistical tools needed to obtain a mass function from realistic collapse-time estimates are presented. Collapse dynamics has been dealt with in Paper I of this series by means of the powerful Lagrangian perturbation theory and the simple ellipsoidal collapse model. The basic quantity considered here is the inverse collapse time F; it is a non-linear functional of the initial potential, with a non-Gaussian distribution. In the case of sharp k-space smoothing, it is demonstrated that the fraction of collapsed mass can be determined by extending to the F process the diffusion formalism introduced by Bond et al. The problem is then reduced to that of a random walk with a moving absorbing barrier, and numerically solved; an accurate analytical fit, valid for small and moderate resolutions, is found. For Gaussian smoothing, the F trajectories are strongly correlated in resolution. In this case, an approximation proposed by Peacock & Heavens can be used to determine the mass functions. Gaussian smoothing is preferred, as it optimizes the performances of dynamical predictions and stabilizes the F trajectories. The relation between resolution and mass is treated at a heuristic level, and the consequences of this approximation are discussed. The resulting mass functions, compared with the classical Press & Schechter one, are shifted toward large masses (confirming the findings of Monaco), and tend to give more intermediate-mass objects at the expense of small-mass objects. However, the small-mass part of the mass function, which depends on uncertain dynamics and is likely to be affected by uncertainties in the resolution-mass relation, is not considered a robust prediction of this theory.
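The random walk with a moving absorbing barrier that the collapse problem reduces to is straightforward to explore by Monte Carlo. The linear barrier B(S) = B0 + beta·S below is an illustrative stand-in for the paper's ellipsoidal-collapse barrier, and all parameter values are assumptions.

```python
import numpy as np

# Monte Carlo estimate of the first-crossing (collapsed-mass) fraction for
# random walks absorbed by a moving barrier, the construction the abstract
# reduces the mass-function problem to. With sharp k-space smoothing the
# walk steps are independent in the variance S; the barrier shape here is an
# assumed linear drift, not the fit derived in the paper.
rng = np.random.default_rng(42)
n_walk, n_step = 20000, 400
dS = 0.01                                  # increment of the variance S
S = np.arange(1, n_step + 1) * dS
B0, beta = 1.0, 0.2                        # assumed barrier parameters

steps = rng.normal(0.0, np.sqrt(dS), size=(n_walk, n_step))
F = np.cumsum(steps, axis=1)               # sharp-k walks: independent steps
absorbed = (F >= B0 + beta * S).any(axis=1)
frac = absorbed.mean()
print(f"collapsed-mass fraction up to S={S[-1]:.1f}: {frac:.3f}")
```

With beta = 0 this reduces to the constant-barrier (Press & Schechter-like) case; a rising barrier absorbs fewer trajectories and so lowers the collapsed fraction.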
Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method
Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.; Raval, Amish N.; Van Lysel, Michael S.; Speidel, Michael A.
2014-07-15
Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR, and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For the in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937
NASA Astrophysics Data System (ADS)
Murphy, Kyle R.; Mann, Ian R.; Rae, I. Jonathan; Sibeck, David G.; Watt, Clare E. J.
2016-08-01
Wave-particle interactions play a crucial role in energetic particle dynamics in the Earth's radiation belts. However, the relative importance of different wave modes in these dynamics is poorly understood. Typically, this is assessed during geomagnetic storms using statistically averaged empirical wave models as a function of geomagnetic activity in advanced radiation belt simulations. However, statistical averages poorly characterize extreme events such as geomagnetic storms in that storm-time ultralow frequency wave power is typically larger than that derived over a solar cycle and Kp is a poor proxy for storm-time wave power.
OneGeology Web Services and Portal as a global geological SDI - latest standards and technology
NASA Astrophysics Data System (ADS)
Duffy, Tim; Tellez-Arenas, Agnes
2014-05-01
The global coverage of OneGeology Web Services (www.onegeology.org and portal.onegeology.org) achieved since 2007 from the 120 participating geological surveys will be reviewed and issues arising discussed. Recent enhancements to the OneGeology Web Services capabilities will be covered, including a new up-to-five-star service accreditation scheme utilising the ISO/OGC Web Map Service (WMS) standard version 1.3, core ISO 19115 metadata additions, and version 2.0 Web Feature Services (WFS) serving the new IUGS-CGI GeoSciML V3.2 geological web data exchange language standard (http://www.geosciml.org/) with its associated 30+ IUGS-CGI vocabularies (http://resource.geosciml.org/ and http://srvgeosciml.brgm.fr/eXist2010/brgm/client.html). Use of the CGI SimpleLithology and timescale dictionaries now allows those who wish to do so to offer data harmonisation queries against their GeoSciML 3.2 based Web Feature Services and their GeoSciML_Portrayal V2.0.1 (http://www.geosciml.org/) Web Map Services in the OneGeology portal (http://portal.onegeology.org). Contributing to OneGeology involves offering to serve, ideally, 1:1,000,000 scale geological data (in practice any scale is now warmly welcomed) as an OGC (Open Geospatial Consortium) standards-based WMS service from an available WWW server. This may be hosted either within the geological survey itself or by a neighbouring, regional, or other institution that offers to serve that data for them, i.e. offers to help technically by providing the web-serving IT infrastructure as a 'buddy'. OneGeology is a standards-focussed Spatial Data Infrastructure (SDI) and works to ensure that these standards work together; it is now possible for European geological surveys to register their INSPIRE web services within the OneGeology SDI (e.g. see http://www.geosciml.org/geosciml/3.2/documentation/cookbook/INSPIRE_GeoSciML_Cookbook%20_1.0.pdf). The OneGeology portal (http://portal.onegeology.org) is the first port of call for anyone
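A OneGeology-style WMS 1.3.0 GetMap request can be assembled as follows. The endpoint and layer name are hypothetical placeholders, not a real registered service.

```python
from urllib.parse import urlencode

# A minimal WMS 1.3.0 GetMap request of the kind a OneGeology portal client
# issues to a participating survey's service. Endpoint and LAYERS value are
# hypothetical placeholders.
endpoint = "https://example.org/geoserver/wms"   # hypothetical endpoint
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "EXAMPLE_1M_BedrockLithology",     # hypothetical layer name
    "STYLES": "",
    "CRS": "EPSG:4326",          # note: WMS 1.3 uses lat,lon axis order here
    "BBOX": "49.0,-8.0,61.0,2.0",
    "WIDTH": "800",
    "HEIGHT": "600",
    "FORMAT": "image/png",
}
url = endpoint + "?" + urlencode(params)
print(url)
```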
Sensitivity properties of a biosphere model based on BATS and a statistical-dynamical climate model
NASA Technical Reports Server (NTRS)
Zhang, Taiping
1994-01-01
A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations, particularly the model sensitivity to fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation, were conducted as a first step. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environment conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and temperature and vapor pressure of air within the foliage are practically insensitive to variation of soil surface water availability and even solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of environment accompanying gradual degradation of a tropical forest to grasslands. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations.
Statistical characteristics of dynamics for population migration driven by the economic interests
NASA Astrophysics Data System (ADS)
Huo, Jie; Wang, Xu-Ming; Zhao, Ning; Hao, Rui
2016-06-01
Population migration typically occurs under some constraints, which can deeply affect the structure of a society and some other related aspects. Therefore, it is critical to investigate the characteristics of population migration. Data from the China Statistical Yearbook indicate that the regional gross domestic product per capita relates to the population size via a linear or power-law relation. In addition, the distribution of population migration sizes, or of the relative migration strength introduced here, is dominated by a shifted power-law relation. To reveal the mechanism that creates the aforementioned distributions, a dynamic model is proposed based on the population migration rule that migration is facilitated by higher financial gains and abated by fewer employment opportunities at the destination, considering the migration cost as a function of the migration distance. The calculated results indicate that the distribution of the relative migration strength is governed by a shifted power-law relation, and that the distribution of migration distances is dominated by a truncated power-law relation. These results suggest that using a pure power law to fit a distribution may not always be suitable. Additionally, from the modeling framework, one can infer that randomness and determinacy jointly create the scaling characteristics of the distributions. The calculation also demonstrates that the network formed by active nodes, representing the immigration and emigration regions, usually evolves from an ordered state with a non-uniform structure to a disordered state with a uniform structure, as evidenced by the increasing structural entropy.
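A shifted power law P(x) ∝ (x + x0)^(−α) of the kind reported above can be sampled and checked numerically. The values of α, x0, and the binning below are assumed illustrative values, not fits from the paper.

```python
import numpy as np

# Illustrative check of a shifted power law P(x) ~ (x + x0)^(-alpha), the
# form the abstract reports for relative migration strength: the shifted
# variable y = x + x0 is a plain Pareto-type variable, so a log-log fit of
# its histogram should recover alpha.
rng = np.random.default_rng(1)
alpha, x0, n = 2.5, 10.0, 200000          # assumed parameters

# Inverse-transform sampling for y >= x0 with density ~ y^(-alpha).
u = rng.random(n)
x = x0 * (1.0 - u) ** (-1.0 / (alpha - 1.0)) - x0   # x >= 0

# Fit the exponent by log-log least squares on the shifted variable.
counts, edges = np.histogram(x + x0, bins=np.logspace(1, 3, 40), density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = counts > 0
slope, intercept = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)
print(f"fitted exponent: {-slope:.2f} (true alpha = {alpha})")
```

Fitting the unshifted variable x directly on a log-log scale would show curvature at small x, which is the practical point behind the abstract's caution about plain power-law fits.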
Roth, A E; Jones, C D; Durian, D J
2013-04-01
We report on the statistics of bubble size, topology, and shape and on their role in the coarsening dynamics for foams consisting of bubbles compressed between two parallel plates. The design of the sample cell permits control of the liquid content, through a constant pressure condition set by the height of the foam above a liquid reservoir. We find that in the scaling regime, all bubble distributions are independent not only of time, but also of liquid content. For coarsening, the average rate decreases with liquid content due to the blocking of gas diffusion by Plateau borders inflated with liquid; we achieve a factor of 4 reduction from the dry limit. By observing the growth rate of individual bubbles, we find that von Neumann's law becomes progressively violated with increasing wetness and decreasing bubble size. We successfully model this behavior by explicitly incorporating the border-blocking effect into the von Neumann argument. Two dimensionless bubble shape parameters naturally arise, one of which is primarily responsible for the violation of von Neumann's law for foams that are not perfectly dry.
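Von Neumann's law for an ideally dry 2D foam, dA/dt = K0(n − 6), and the qualitative effect of border blocking can be sketched as follows. The single blocking factor `f_blocked` is a crude stand-in for the shape-dependent correction developed in the paper.

```python
import numpy as np

# Sketch of von Neumann's law for a dry 2D foam: the area growth rate of a
# bubble depends only on its number of sides n, dA/dt = K0*(n - 6). The
# wet-foam border-blocking effect from the abstract is mimicked here by a
# single assumed factor f_blocked in [0, 1]; the paper's actual correction
# is shape-dependent.
def growth_rate(n_sides, K0=1.0, f_blocked=0.0):
    """Area growth rate; f_blocked = 0 recovers the dry von Neumann limit."""
    return K0 * (1.0 - f_blocked) * (n_sides - 6)

n = np.arange(3, 10)
dry = growth_rate(n)
wet = growth_rate(n, f_blocked=0.75)      # borders block 75% of gas flux
print("dry rates:", dry.tolist())
print("wet rates:", wet.tolist())
```

Six-sided bubbles neither grow nor shrink in either case; blocking only rescales the rate, whereas the paper's point is that wetness makes the rate deviate from a pure function of n.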
Dynamical and statistical behavior of discrete combustion waves: A theoretical and numerical study
NASA Astrophysics Data System (ADS)
Bharath, Naine Tarun; Rashkovskiy, Sergey A.; Tewari, Surya P.; Gundawar, Manoj Kumar
2013-04-01
We present a detailed theoretical and numerical study of combustion waves in a discrete one-dimensional disordered system. The distances between neighboring reaction cells were modeled with a gamma distribution. The results show that the random structure of the microheterogeneous system plays a crucial role in the dynamical and statistical behavior of the system. This is a consequence of the nonlinear interaction of the random structure of the system with the thermal wave. An analysis of the experimental data on the combustion of a gasless system (Ti + xSi) and a wide range of thermite systems was performed in view of the developed model. We have shown that the burning rate of the powder system depends sensitively on its internal structure. The present model reproduces the experimental data for a wide range of pyrotechnic mixtures. We show that Arrhenius macrokinetics in the combustion of disperse systems can arise even in the absence of Arrhenius microkinetics; it can have a purely thermal nature, related to the heterogeneity of the system and to the existence of a threshold temperature. It is also observed that the combustion of disperse systems always occurs in the microheterogeneous mode, according to the relay-race mechanism.
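The relay-race picture with gamma-distributed gaps between reaction cells can be illustrated with a toy calculation. The diffusive (gap-squared) ignition-delay scaling and all parameter values are assumptions for illustration, not the paper's model.

```python
import numpy as np

# Relay-race picture of discrete combustion: reaction cells separated by
# gamma-distributed gaps, each gap crossed by heat conduction in a time
# assumed proportional to the gap squared (1-D diffusive scaling). The
# average front speed then depends on the gap statistics, not just the mean
# gap, which is the structural sensitivity the abstract describes.
rng = np.random.default_rng(7)
shape, scale, n_cells = 4.0, 0.25, 5000   # assumed gamma parameters
gaps = rng.gamma(shape, scale, n_cells)

t_cross = gaps ** 2                        # assumed ignition-delay scaling
mean_speed = gaps.sum() / t_cross.sum()    # average front (burning) rate
print(f"mean gap = {gaps.mean():.3f}, front speed = {mean_speed:.3f}")
```

Because the crossing time is nonlinear in the gap, two mixtures with the same mean spacing but different dispersion propagate at different speeds, mirroring the paper's conclusion that internal structure controls the burning rate.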
Gaffney, Inez M.
1973-01-01
CAN/SDI is Canada's national Selective Dissemination of Information Service offering a choice of nine data bases to its scientific and technical community. The system is based on central processing at the National Science Library combined with the utilization of decentralized expertise and resources for profile formulation and user education. Its greatest strength lies in its wide interdisciplinary quality. The major advantage of centralized processing of many data bases is that Canadians need learn only one method of profile formulation to access many files. A breakdown of services used confirms that a single tape service does not cover all the information requirements of most users. On the average each profile accesses approximately 1.5 data bases. Constant subscriber growth and a low cancellation rate indicate that CAN/SDI is and will continue to be an important element in Canada's information system. PMID:4740714
Takao, Keizo; Toyama, Keiko; Nakanishi, Kazuo; Hattori, Satoko; Takamura, Hironori; Takeda, Masatoshi; Miyakawa, Tsuyoshi; Hashimoto, Ryota
2008-01-01
Background Schizophrenia is a complex genetic disorder caused by multiple genetic and environmental factors. The dystrobrevin-binding protein 1 (DTNBP1: dysbindin-1) gene is a major susceptibility gene for schizophrenia. Genetic variations in DTNBP1 are associated with cognitive functions, general cognitive ability and memory function, and clinical features of patients with schizophrenia including negative symptoms and cognitive decline. Since reduced expression of dysbindin-1 has been observed in postmortem brains of patients with schizophrenia, the sandy (sdy) mouse, which has a deletion in the Dtnbp1 gene and expresses no dysbindin-1 protein, could be an animal model of schizophrenia. To address this issue, we have carried out a comprehensive behavioral analysis of the sdy mouse in this study. Results In a rotarod test, sdy mice did not exhibit motor learning whilst the wild type mice did. In a Barnes circular maze test, both sdy mice and wild type mice learned to selectively locate the escape hole during the course of the training period and in the probe trial conducted 24 hours after the last training. However, sdy mice did not locate the correct hole in the retention probe tests 7 days after the last training trial, whereas wild type mice did, indicating impaired long-term memory retention. A T-maze forced alternation task, a test of working memory, revealed no effect of training in sdy mice despite the obvious effect of training in wild type mice, suggesting a working memory deficit. Conclusion The sdy mouse showed impaired long-term memory retention and working memory. Since genetic variation in DTNBP1 is associated with both schizophrenia and memory function, and memory function is compromised in patients with schizophrenia, the sdy mouse may represent a useful animal model to investigate the mechanisms of memory dysfunction in the disorder. PMID:18945333
He, Jiajie; Dougherty, Mark; Shaw, Joey; Fulton, John; Arriaga, Francisco
2011-10-01
Rural areas represent approximately 95% of the 14,000 km² Alabama Black Belt, an area of widespread Vertisols dominated by clayey, smectitic, shrink-swell soils. These soils are unsuitable for conventional onsite wastewater treatment systems (OWTS), which are nevertheless widely used in this region. In order to provide an alternative wastewater dosing system, an experimental field-moisture-controlled subsurface drip irrigation (SDI) system was designed and installed as a field trial. The experimental system, which integrates a seasonal cropping system, was evaluated for two years on a 500-m² Houston clay site in west central Alabama from August 2006 to June 2008. The SDI system was designed to start hydraulic dosing only when field moisture was below field capacity. Hydraulic dosing rates fluctuated as expected, with higher dosing rates during warm seasons and near-zero or zero dosing rates during cold seasons. Lower hydraulic dosing in winter creates the need for at least a two-month waste storage structure, which is an insurmountable challenge for rural homeowners. An estimated 30% of dosed water percolated below 45-cm depth during the first summer, which included a 30-year historic drought. This massive volume of percolation was presumably the result of preferential flow stimulated by dry-weather clay soil cracking. Although water percolation is necessary for OWTS, this massive percolation loss indicated that the experimental system is not able to effectively control soil moisture within its monitoring zone as designed. Overall findings of this study indicated that soil moisture controlled SDI wastewater dosing is not suitable as a standalone system in these Vertisols. However, the experimental soil moisture control system functioned as designed, demonstrating that soil moisture controlled SDI wastewater dosing may find application as a supplement to other wastewater disposal methods that can function during cold seasons. PMID:21621905
A review of gas-cooled reactor concepts for SDI (Strategic Defense Initiative)
NASA Astrophysics Data System (ADS)
Marshall, A. C.
1989-08-01
A review was completed of multimegawatt gas cooled reactor concepts proposed for SDI applications. The study concluded that the principal reason for considering gas cooled reactors for burst mode operation was the potential for significant system mass savings over closed cycle systems if open cycle gas cooled operation (effluent exhausted to space) is acceptable. The principal reason for considering gas cooled reactors for steady state operation is that they may represent a lower technology risk than other approaches. In the review, nine gas cooled reactor concepts were compared to identify the most promising. For burst mode operation, the NERVA (Nuclear Engine for Rocket Vehicle Application) derivative reactor concept emerged as a strong first choice since its performance exceeds the anticipated operational requirements and the technology was demonstrated and is retrievable. Although the NERVA derivative concepts were determined to be the lead candidates for the Multimegawatt Steady State (MMWSS) mode as well, their lead over the other candidates is not as great as for the burst mode.
A review of gas-cooled reactor concepts for SDI (Strategic Defense Initiative) applications
Marshall, A.C.
1989-08-01
We have completed a review of multimegawatt gas-cooled reactor concepts proposed for SDI applications. Our study concluded that the principal reason for considering gas-cooled reactors for burst-mode operation was the potential for significant system mass savings over closed-cycle systems if open-cycle gas-cooled operation (effluent exhausted to space) is acceptable. The principal reason for considering gas-cooled reactors for steady-state operation is that they may represent a lower technology risk than other approaches. In the review, nine gas-cooled reactor concepts were compared to identify the most promising. For burst-mode operation, the NERVA (Nuclear Engine for Rocket Vehicle Application) derivative reactor concept emerged as a strong first choice since its performance exceeds the anticipated operational requirements and the technology has been demonstrated and is retrievable. Although the NERVA derivative concepts were determined to be the lead candidates for the Multimegawatt Steady-State (MMWSS) mode as well, their lead over the other candidates is not as great as for the burst mode. 90 refs., 2 figs., 10 tabs.
Development of a current collection loss management system for SDI homopolar power supplies
Brown, D.W.
1989-01-01
High speed, high power density current collection systems have been identified as an enabling technology required to construct homopolar power supplies to meet SDI missions. This work is part of a three-year effort directed towards the analysis, experimental verification, and prototype construction of a current collection system designed to operate continuously at 2 kA/cm², at a rubbing speed of 200 m/s, and with acceptable losses in a space environment. To date, no system has achieved these conditions simultaneously. This is the annual report covering the second-year period of performance on DOE contract DE-AC03-86SF16518. Major areas covered include design, construction, and operation of a cryogenically cooled brush test rig; design and construction of a high speed brush test rig; an optimization study for homopolar machines; loss analysis of the current collection system; and an application study which defines the air-core homopolar construction necessary to achieve the goal of 80--90 kW/kg generator power density. 17 figs., 2 tabs.
NASA Astrophysics Data System (ADS)
Schubert, David; Reyers, Mark; Pinto, Joaquim; Fink, Andreas; Massmeyer, Klaus
2016-04-01
Southeast Asia has been identified as one of the hot-spots of climate change. While the projected changes in annual precipitation are comparatively small, there is a clear tendency towards more rainfall in the dry season and an increase in extreme precipitation events. In this study, a statistical-dynamical downscaling (SDD) approach is applied to obtain higher resolution and more robust regional climate change projections for tropical Southeast Asia, with a focus on Vietnam. First, a recent climate (RC) simulation with the regional climate model COSMO-CLM at a spatial resolution of ~50 km, driven by ERA-Interim (1979-2008), is performed for the tropical region of Southeast Asia. For the SDD, six weather types (WTs) are selected for Vietnam during the wet season (April - October) using a k-means cluster analysis of the daily zonal wind component at 850 hPa and 200 hPa from the RC run. For each weather type, simulated representatives are selected from the RC run and are then further dynamically downscaled to a resolution of 0.0625° (7 km). By using historical WT frequencies, the simulated representatives are recombined into a high resolution rainfall climatology for the recent climate. It is shown that the SDD is generally able to capture the present-day climatology and that the employment of the higher resolved simulated representatives enhances the performance of the SDD. However, an overestimation of rainfall at higher altitudes is found. To obtain future climate projections, an ensemble of eight CMIP5 model members is selected to study precipitation changes. For these projections, WT frequencies of future scenarios under two Representative Concentration Pathways (RCP4.5 and RCP8.5) are taken into account for the mid-term scenario (2046-2065) and the long-term scenario (2081-2100). The strongest precipitation changes are found for the RCP8.5 scenario. Most of the models indicate a general increase in precipitation amount in the wet period over Southeast
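The weather-typing step, k-means clustering of daily zonal-wind fields into six WTs, can be sketched like this. Random arrays stand in for the COSMO-CLM output; the grid size and day count are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Minimal sketch of the weather-typing step: k-means clustering of daily
# 850 hPa / 200 hPa zonal-wind fields into six weather types (WTs), as in
# the abstract. The wind fields here are random stand-ins for model output.
rng = np.random.default_rng(0)
n_days, n_grid = 1000, 50
u850 = rng.normal(size=(n_days, n_grid))
u200 = rng.normal(size=(n_days, n_grid))
features = np.hstack([u850, u200])         # one feature row per day

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(features)
wt = km.labels_                            # weather type of each day
freq = np.bincount(wt) / n_days            # historical WT frequencies
print("WT frequencies:", np.round(freq, 3))
```

These frequencies are exactly the weights used in the recombination step: each WT's downscaled representative contributes to the climatology in proportion to how often that WT occurs.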
Sensitivity properties of a biosphere model based on BATS and a statistical-dynamical climate model
Zhang, T.
1994-06-01
A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations, particularly the model sensitivity to fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation, were conducted as a first step. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environment conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and temperature and vapor pressure of air within the foliage are practically insensitive to variation of soil surface water availability and even solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of environment accompanying gradual degradation of a tropical forest to grasslands. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations. 46 refs., 10 figs., 6 tabs.
Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs
Wood, Andrew W; Leung, Lai R; Sridhar, V; Lettenmaier, D P
2004-01-01
Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution), and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant finding is that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) led to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step. For the future climate scenario, only the BCSD method (using PCM or RCM) was able to
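The bias-correction step of BCSD is commonly implemented as empirical quantile mapping; a minimal sketch with synthetic data (not the PCM/RCM fields of the study) is:

```python
import numpy as np

# Sketch of the bias-correction step of BCSD via empirical quantile mapping:
# each simulated value is replaced by the observed value at the same quantile
# of the training-period distribution. The gamma-distributed data below are
# synthetic stand-ins for model precipitation and an observed climatology.
rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 2.0, 5000)            # "observed" precipitation
sim = rng.gamma(2.0, 3.0, 5000) + 1.0      # biased "model" precipitation

def quantile_map(x, train_sim, train_obs):
    """Map values x through the empirical sim -> obs quantile relation."""
    q = np.searchsorted(np.sort(train_sim), x) / len(train_sim)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(train_obs, q)

corrected = quantile_map(sim, sim, obs)
print(f"obs mean {obs.mean():.2f}  sim mean {sim.mean():.2f}  "
      f"corrected mean {corrected.mean():.2f}")
```

After mapping, the corrected series matches the observed distribution by construction, which is why BCSD removes the systematic bias that defeats plain interpolation in the study above.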
Statistical properties and pre-hit dynamics of price limit hits in the Chinese stock markets.
Wan, Yu-Lei; Xie, Wen-Jie; Gu, Gao-Feng; Jiang, Zhi-Qiang; Chen, Wei; Xiong, Xiong; Zhang, Wei; Zhou, Wei-Xing
2015-01-01
Price limit trading rules are adopted in some stock markets (especially emerging markets) in an attempt to cool off traders' short-term trading mania on individual stocks and increase market efficiency. Under such a microstructure, stocks may hit their up-limits and down-limits from time to time. However, the behaviors of price limit hits are not well studied, partially because main stock markets such as the US markets and most European markets do not set price limits. Here, we perform detailed analyses of the high-frequency data of all A-share common stocks traded on the Shanghai Stock Exchange and the Shenzhen Stock Exchange from 2000 to 2011 to investigate the statistical properties of price limit hits and the dynamical evolution of several important financial variables before a stock price hits its limits. We compare the properties of up-limit hits and down-limit hits. We also divide the whole period into three bullish periods and three bearish periods to unveil possible differences during bullish and bearish market states. To uncover the impacts of stock capitalization on price limit hits, we partition all stocks into six portfolios according to their capitalizations on different trading days. We find that the price limit trading rule has a cooling-off effect (as opposed to the magnet effect), indicating that the rule takes effect in the Chinese stock markets. We find that price continuation is much more likely to occur than price reversal on the next trading day after a limit-hitting day, especially for down-limit hits, which has potential practical value for market practitioners.
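Identifying limit-hitting days from a daily close series under a 10% price-limit rule (the rule applying to regular A-shares) can be done as follows; the price series and the round-to-0.01 convention are illustrative assumptions.

```python
import numpy as np

# Illustrative detection of up-limit / down-limit hits under a 10% daily
# price-limit rule: a hit occurs when the close reaches the limit price
# computed from the previous close (assumed rounded to 0.01 here).
def limit_hits(close, limit=0.10):
    prev = np.asarray(close[:-1], dtype=float)
    cur = np.asarray(close[1:], dtype=float)
    up = np.round(prev * (1 + limit), 2)    # up-limit prices
    down = np.round(prev * (1 - limit), 2)  # down-limit prices
    return (cur >= up), (cur <= down)

close = [10.00, 11.00, 12.10, 11.50, 10.35]
up_hits, down_hits = limit_hits(close)
print("up-limit hits:", up_hits.tolist())
print("down-limit hits:", down_hits.tolist())
```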
NASA Astrophysics Data System (ADS)
Mondescu, Radu Paul
1999-08-01
In this dissertation we report new theoretical results, both analytical and numerical, concerning a variety of polymeric systems. Applying path-integral and differentiable-manifold techniques, we have obtained original results concerning the statistics of a Gaussian polymer embedded on a sphere, a cylinder, a cone and a torus. Generally, we found that the curvature of the surfaces induces a geometrical localization area. Next we employ field-theoretical (instanton calculus) and differential-equation techniques (Darboux method) to obtain approximate and exact new results regarding the average size and the Green function of a Gaussian, one-dimensional polymer chain subjected to a multi-stable potential (the tunnel effect in polymer physics). Extending the multiple-scattering formalism, we have investigated the steady-state dynamics of suspensions of spheres and Gaussian polymer chains without excluded-volume interactions. We have calculated the self-diffusion and friction coefficients for probe objects (sphere and polymer chain) and the shear viscosity of the suspensions. At certain values of the concentration of the ambient medium, the motion of probe objects freezes. Deviation from the Stokes-Einstein behavior is observed and interpreted. Next, we have calculated the diffusion coefficient and the change in the viscosity of a dilute solution of freely translating and rotating diblock Gaussian copolymers. Regimes that increase the efficiency of separation processes have been identified. The parallel between the Navier-Stokes and Lamé equations was exploited to extend the effective-medium formalism to the computation of the effective shear and Young moduli and the Poisson ratio of a composite material containing rigid, monodispersed, penetrable spheres. Our approach deals efficiently with the high-concentration regime of inclusions.
ERIC Educational Resources Information Center
Koparan, Timur
2016-01-01
In this study, the effect on the achievement and attitudes of prospective teachers is examined. To this end, an achievement test, an attitude scale for statistics and interviews were used as data collection tools. The achievement test comprises 8 problems based on statistical data, and the attitude scale comprises 13 Likert-type items. The study…
Argonne CW Linac (ACWL)—legacy from SDI and opportunities for the future
NASA Astrophysics Data System (ADS)
McMichael, G. E.; Yule, T. J.
1995-09-01
The former Strategic Defense Initiative Organization (SDIO) invested significant resources over a 6-year period to develop and build an accelerator to demonstrate the launching of a cw beam with characteristics suitable for a space-based Neutral Particle Beam (NPB) system. This accelerator, the CWDD (Continuous Wave Deuterium Demonstrator) accelerator, was designed to accelerate 80 mA cw of D- to 7.5 MeV. A considerable amount of hardware was constructed and installed in the Argonne-based facility, and major performance milestones were achieved before program funding from the Department of Defense ended in October 1993. Existing assets have been turned over to Argonne. Assets include a fully functional 200 kV cw D- injector, a cw RFQ that has been tuned, leak checked and aligned, beam lines and a high-power beam stop, all installed in a shielded vault with appropriate safety and interlock systems. In addition, there are two high power (1 MW) cw rf amplifiers and all the ancillary power, cooling and control systems required for a high-power accelerator system. The SDI mission required that the CWDD accelerator structures operate at cryogenic temperatures (26K), a requirement that placed severe limitations on operating period (CWDD would have provided 20 seconds of cw beam every 90 minutes). However, the accelerator structures were designed for full-power rf operation with water cooling and ACWL (Argonne Continuous Wave Linac), the new name for CWDD in its water-cooled, positive-ion configuration, will be able to operate continuously. Project status and achievements will be reviewed. Preliminary design of a proton conversion for the RFQ, and other proposals for turning ACWL into a testbed for cw-linac engineering, will be discussed.
Chládek, J; Brázdil, M; Halámek, J; Plešinger, F; Jurák, P
2013-01-01
We present an off-line analysis procedure for exploring brain activity recorded from intra-cerebral electroencephalographic data (SEEG). The objective is to determine the statistical differences between different types of stimulations in the time-frequency domain. The procedure is based on computing relative signal power change and subsequent statistical analysis. An example of characteristic statistically significant event-related de/synchronization (ERD/ERS) detected across different frequency bands following different oddball stimuli is presented. The method is used for off-line functional classification of different brain areas. PMID:24109865
NASA Astrophysics Data System (ADS)
Wen, Haohua; Woo, C. H.
2016-03-01
In conventional studies, contributions from the vibrational thermodynamics of phonons and magnons to dynamic simulations of thermally activated atomic processes in crystalline materials have been treated within the framework of classical statistics. The neglect of quantum effects produces the wrong lattice and spin dynamics and erroneous activation characteristics, sometimes leading to incorrect results. In this paper, we consider the formation and migration of mono-vacancies in BCC iron over a large temperature range from 10 K to 1400 K, across the ferro/paramagnetic phase boundary. Entropies and enthalpies of migration and formation are calculated using quantum heat baths based on a Bose-Einstein statistical description of thermal excitations in terms of phonons and magnons. Corrections due to the use of classical heat baths are evaluated and discussed.
NASA Astrophysics Data System (ADS)
Holland, M. P.; Rabassa, P.; Sterk, A. E.
2016-08-01
For non-uniformly hyperbolic dynamical systems we consider the time series of maxima along typical orbits. Using ideas based upon quantitative recurrence time statistics we prove convergence of the maxima (under suitable normalization) to an extreme value distribution, and obtain estimates on the rate of convergence. We show that our results are applicable to a range of examples, and include new results for Lorenz maps, certain partially hyperbolic systems, and non-uniformly expanding systems with sub-exponential decay of correlations. For applications where analytic results are not readily available we show how to estimate the rate of convergence to an extreme value distribution based upon numerical information of the quantitative recurrence statistics. We envisage that such information will lead to more efficient statistical parameter estimation schemes based upon the block-maxima method.
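The block-maxima method mentioned in the closing sentence has a compact numerical core: slice the series into blocks, take each block's maximum, and fit an extreme value distribution. The sketch below is an illustrative assumption, not the authors' scheme; it uses a method-of-moments Gumbel fit, appropriate for series with exponential tails.

```python
import numpy as np

def block_maxima(x, block):
    """Split a series into consecutive blocks and return each block's maximum."""
    n = len(x) // block
    return x[:n * block].reshape(n, block).max(axis=1)

def fit_gumbel(m):
    """Method-of-moments Gumbel fit: scale = s*sqrt(6)/pi, loc = mean - gamma*scale."""
    euler_gamma = 0.5772156649
    scale = m.std(ddof=1) * np.sqrt(6.0) / np.pi
    return m.mean() - euler_gamma * scale, scale

rng = np.random.default_rng(1)
x = rng.exponential(1.0, 200_000)   # exponential tails -> Gumbel domain of attraction
loc, scale = fit_gumbel(block_maxima(x, 1000))
print(round(float(loc), 1), round(float(scale), 1))
```

For unit-exponential data the maxima of blocks of size n are approximately Gumbel with location ln n and unit scale, which the fit recovers.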
Statistical Tools for the Interpretation of Enzootic West Nile virus Transmission Dynamics.
Caillouët, Kevin A; Robertson, Suzanne
2016-01-01
Interpretation of enzootic West Nile virus (WNV) surveillance indicators requires little advanced mathematical skill, but greatly enhances the ability of public health officials to prescribe effective WNV management tactics. Stepwise procedures for the calculation of mosquito infection rates (IR) and vector index (VI) are presented alongside statistical tools that require additional computation. A brief review of advantages and important considerations for each statistic's use is provided. PMID:27188561
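The two surveillance indicators named in the abstract have simple standard definitions: the minimum infection rate per 1,000 tested mosquitoes (assuming one infected mosquito per positive pool), and the vector index, which sums abundance per trap night times the estimated infection proportion over vector species. A minimal sketch with illustrative function names and numbers:

```python
def minimum_infection_rate(n_positive_pools, n_tested):
    """MIR per 1,000 tested mosquitoes, assuming one infected mosquito
    per positive pool (a lower bound on the true infection rate)."""
    return 1000.0 * n_positive_pools / n_tested

def vector_index(avg_per_trap_night, infection_rate_per_1000):
    """VI = average mosquitoes per trap night x estimated infection
    proportion, summed over the vector species being monitored."""
    return sum(m * ir / 1000.0
               for m, ir in zip(avg_per_trap_night, infection_rate_per_1000))

ir = minimum_infection_rate(3, 1500)        # 3 positive pools, 1,500 mosquitoes
vi = vector_index([25.0, 10.0], [ir, 4.0])  # two species, per-species IRs
print(ir, round(vi, 3))                     # → 2.0 0.09
```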
Statistical dynamics of classical systems: A self-consistent field approach
Grzetic, Douglas J.; Wickham, Robert A.; Shi, An-Chang
2014-06-28
We develop a self-consistent field theory for particle dynamics by extremizing the functional integral representation of a microscopic Langevin equation with respect to the collective fields. Although our approach is general, here we formulate it in the context of polymer dynamics to highlight satisfying formal analogies with equilibrium self-consistent field theory. An exact treatment of the dynamics of a single chain in a mean force field emerges naturally via a functional Smoluchowski equation, while the time-dependent monomer density and mean force field are determined self-consistently. As a simple initial demonstration of the theory, leaving an application to polymer dynamics for future work, we examine the dynamics of trapped interacting Brownian particles. For binary particle mixtures, we observe the kinetics of phase separation.
Reutter, Bryan W.; Gullberg, Grant T.; Huesman, Ronald H.
2001-04-30
Artifacts can result when reconstructing a dynamic image sequence from inconsistent single photon emission computed tomography (SPECT) projections acquired by a slowly rotating gantry. The artifacts can lead to biases in kinetic parameters estimated from time-activity curves generated by overlaying volumes of interest on the images. To overcome these biases in conventional image based dynamic data analysis, we have been investigating the estimation of time-activity curves and kinetic model parameters directly from dynamic SPECT projection data by modeling the spatial and temporal distribution of the radiopharmaceutical throughout the projected field of view. In previous work we developed computationally efficient methods for fully four-dimensional (4-D) direct estimation of spatiotemporal distributions [1] and their statistical uncertainties [2] from dynamic SPECT projection data, using a spatial segmentation and temporal B-splines. In addition, we studied the bias that results from modeling various orders of temporal continuity and using various time samplings [1]. In the present work, we use the methods developed in [1, 2] and Monte Carlo simulations to study the effects of the temporal modeling on the statistical variability of the reconstructed distributions.
NASA Astrophysics Data System (ADS)
Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Mike; Gershunov, Alexander; Gutowski, William J.; Gyakum, John R.; Katz, Richard W.; Lee, Yun-Young; Lim, Young-Kwon; Prabhat
2016-02-01
The objective of this paper is to review statistical methods, dynamics, modeling efforts, and trends related to temperature extremes, with a focus upon extreme events of short duration that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). The statistics, dynamics, and modeling sections of this paper are written to be autonomous and so can be read separately. Methods to define extreme event statistics and to identify and connect LSMPs to extreme temperature events are presented. Recent advances in statistical techniques connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. Various LSMPs, ranging from synoptic to planetary scale structures, are associated with extreme temperature events. Current knowledge about the synoptics and the dynamical mechanisms leading to the associated LSMPs is incomplete. Systematic studies of the physics of LSMP life cycles, comprehensive model assessment of LSMP-extreme temperature event linkages, and LSMP properties are needed. Generally, climate models capture observed properties of heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency and underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Modeling studies have identified the impact of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs to more specifically understand the role of LSMPs in past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated. The paper concludes with unresolved issues and research questions.
Choi, Ok Ran; Lim, In Kyoung
2011-04-08
Highlights: • Reduced p21 expression in senescent cells treated with DNA-damaging agents. • Increased [3H]thymidine and BrdU incorporation in DNA-damaged senescent cells. • Upregulation of miR-93 expression in senescent cells in response to DSB. • Failure of p53 binding to the p21 promoter in senescent cells in response to DSB. • Molecular mechanism of increased cancer development in aged rather than young individuals. -- Abstract: To determine the critical event underlying the higher incidence of tumor development in old than in young individuals, primary cultures of human diploid fibroblasts were employed and DNA damage was induced by doxorubicin or X-ray irradiation. The response to damage differed between young and old cells: loss of p21(sdi1) expression despite p53(S15) activation in old cells, along with [3H]thymidine and BrdU incorporation, but not in young cells. The phenomenon was confirmed in fibroblasts from other tissues obtained from donors of different ages. Induction of miR-93 expression and reduced p53 binding to the p21 gene promoter account for the loss of p21(sdi1) expression in senescent cells after DNA damage, suggesting a mechanism of in vivo carcinogenesis in aged tissue without repair arrest.
Characterizing molecular motion in H2O and H3O+ with dynamical instability statistics
NASA Astrophysics Data System (ADS)
Green, Jason R.; Hofer, Thomas S.; Berry, R. Stephen; Wales, David J.
2011-11-01
Sets of finite-time Lyapunov exponents characterize the stability and instability of classically chaotic dynamical trajectories. Here we show that their sample distributions can contain subpopulations identifying different types of dynamics. In small isolated molecules these dynamics correspond to distinct elementary motions, such as isomerizations. Exponents are calculated from constant total energy molecular dynamics simulations of H2O and H3O+, modelled with a classical, reactive, all-atom potential. Over a range of total energy, exponent distributions for these systems reveal that phase space exploration is more chaotic near saddles corresponding to isomerization and less chaotic near potential energy minima. This finding contrasts with previous results for Lennard-Jones clusters, and is explained in terms of the potential energy landscape.
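Finite-time Lyapunov exponents of the kind sampled here are straightforward to compute for any system with a known derivative along the trajectory. The reactive molecular potentials of this work are beyond a short sketch, so the example below (an illustrative assumption) uses the one-dimensional logistic map at full chaos, whose infinite-time exponent is known to be ln 2:

```python
import numpy as np

def ftle_logistic(x0, r, t):
    """Finite-time Lyapunov exponent of x -> r x (1 - x): the time average
    of log |f'(x)| = log |r (1 - 2x)| along a trajectory of length t."""
    x, total = x0, 0.0
    for _ in range(t):
        total += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / t

rng = np.random.default_rng(2)
samples = np.array([ftle_logistic(x0, 4.0, 200)
                    for x0 in rng.uniform(0.01, 0.99, 500)])
print(round(float(samples.mean()), 2))   # infinite-time value is ln 2 ~ 0.69
```

The spread of the sample distribution around ln 2 is what carries the dynamical information: as in the abstract, subpopulations of exponents flag trajectory segments near saddles versus minima.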
NASA Astrophysics Data System (ADS)
Frossard, L.; Rieder, H. E.; Ribatet, M.; Staehelin, J.; Maeder, J. A.; Di Rocco, S.; Davison, A. C.; Peter, T.
2012-05-01
We use models for mean and extreme values of total column ozone on spatial scales to analyze "fingerprints" of atmospheric dynamics and chemistry on long-term ozone changes at northern and southern mid-latitudes. The r-largest order statistics method is used for pointwise analysis of extreme events in low and high total ozone (termed ELOs and EHOs, respectively). For the corresponding mean value analysis a pointwise autoregressive moving average model (ARMA) is used. The statistical models include important atmospheric covariates to describe the dynamical and chemical state of the atmosphere: the solar cycle, the Quasi-Biennial Oscillation (QBO), ozone depleting substances (ODS) in terms of equivalent effective stratospheric chlorine (EESC), the North Atlantic Oscillation (NAO), the Antarctic Oscillation (AAO), the El Niño/Southern Oscillation (ENSO), and aerosol load after the volcanic eruptions of El Chichón and Mt. Pinatubo. The influence of the individual covariates on mean and extreme levels in total column ozone is derived on a grid cell basis. The results show that "fingerprints", i.e., significant influence, of dynamical and chemical features are captured in both the "bulk" and the tails of the ozone distribution, respectively described by means and EHOs/ELOs. While results for the solar cycle, QBO and EESC are in good agreement with findings of earlier studies, unprecedented spatial fingerprints are retrieved for the dynamical covariates.
Static Numbers to Dynamic Statistics: Designing a Policy-Friendly Social Policy Indicator Framework
ERIC Educational Resources Information Center
Ahn, Sang-Hoon; Choi, Young Jun; Kim, Young-Mi
2012-01-01
In line with the economic crisis and rapid socio-demographic changes, the interest in "social" and "well-being" indicators has been revived. Social indicator movements of the 1960s resulted in the establishment of social indicator statistical frameworks; that legacy has remained intact in many national governments and international organisations.…
NASA Astrophysics Data System (ADS)
De Bacco, Caterina; Guggiola, Alberto; Kühn, Reimer; Paga, Pierre
2016-05-01
Rare event statistics for random walks on complex networks are investigated using the large deviation formalism. Within this formalism, rare events are realised as typical events in a suitably deformed path-ensemble, and their statistics can be studied in terms of spectral properties of a deformed Markov transition matrix. We observe two different types of phase transition in such systems: (i) rare events which are singled out for sufficiently large values of the deformation parameter may correspond to localised modes of the deformed transition matrix; (ii) ‘mode-switching transitions’ may occur as the deformation parameter is varied. Details depend on the nature of the observable for which the rare event statistics is studied, as well as on the underlying graph ensemble. In the present paper we report results on rare events statistics for path averages of random walks in Erdős-Rényi and scale free networks. Large deviation rate functions and localisation properties are studied numerically. For observables of the type considered here, we also derive an analytical approximation for the Legendre transform of the large deviation rate function, which is valid in the large connectivity limit. It is found to agree well with simulations.
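The "suitably deformed path-ensemble" described here has a compact numerical core: for an additive observable h accumulated along the walk, tilt the transition matrix by e^{s h} and take the log of its leading eigenvalue to obtain the scaled cumulant generating function, whose Legendre transform gives the large deviation rate function. A minimal two-state sketch; the chain and observable are illustrative assumptions, not the graph ensembles of the paper:

```python
import numpy as np

def scgf(P, h, s):
    """Scaled cumulant generating function of the path average of h(X_t):
    log of the leading eigenvalue of the tilted matrix P_s[i,j] = P[i,j] e^{s h(j)}."""
    tilted = P * np.exp(s * h)[None, :]
    return float(np.log(np.max(np.abs(np.linalg.eigvals(tilted)))))

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])      # two-state Markov chain
h = np.array([0.0, 1.0])       # observable: fraction of time spent in state 1

eps = 1e-6
mean_h = (scgf(P, h, eps) - scgf(P, h, -eps)) / (2 * eps)
print(round(scgf(P, h, 0.0), 6), round(mean_h, 3))
```

At s = 0 the tilted matrix is just P, so the SCGF vanishes, and its derivative at the origin recovers the stationary mean of h (1/3 for this chain); larger |s| probes progressively rarer path averages, including the localised modes discussed in the abstract.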
NASA Astrophysics Data System (ADS)
Tang, Jianping; Niu, Xiaorui; Wang, Shuyu; Gao, Hongxia; Wang, Xueyuan; Wu, Jian
2016-03-01
Statistical downscaling and dynamical downscaling are two approaches to generate high-resolution regional climate models based on the large-scale information from either reanalysis data or global climate models. In this study, these two downscaling methods are used to simulate the surface climate of China and compared. The Statistical Downscaling Model (SDSM) is cross validated and used to downscale the regional climate of China. Then, the downscaled historical climate of 1981-2000 and future climate of 2041-2060 are compared with that from the Weather Research and Forecasting (WRF) model driven by the European Center-Hamburg atmosphere model and the Max Planck Institute Ocean Model (ECHAM5/MPI-OM) and the L'Institut Pierre-Simon Laplace Coupled Model, version 5, coupled with the Nucleus for European Modelling of the ocean, low resolution (IPSL-CM5A-LR). The SDSM can reproduce the surface temperature characteristics of the present climate in China, whereas the WRF tends to underestimate the surface temperature over most of China. Both the SDSM and WRF require further work to improve their ability to downscale precipitation. Both statistical and dynamical downscaling methods produce future surface temperatures for 2041-2060 that are markedly different from the historical climatology. However, the changes in projected precipitation differ between the two downscaling methods. Indeed, large uncertainties remain in terms of the direction and magnitude of future precipitation changes over China.
NASA Astrophysics Data System (ADS)
Feldhoff, Jan H.; Lange, Stefan; Volkholz, Jan; Donges, Jonathan F.; Kurths, Jürgen; Gerstengarbe, Friedrich-Wilhelm
2015-03-01
In this study we introduce two new node-weighted difference measures on complex networks as a tool for climate model evaluation. The approach facilitates the quantification of a model's ability to reproduce the spatial covariability structure of climatological time series. We apply our methodology to compare the performance of a statistical and a dynamical regional climate model simulating the South American climate, as represented by the variables 2 m temperature, precipitation, sea level pressure, and geopotential height field at 500 hPa. For each variable, networks are constructed from the model outputs and evaluated against a reference network, derived from the ERA-Interim reanalysis, which also drives the models. We compare two network characteristics, the (linear) adjacency structure and the (nonlinear) clustering structure, and relate our findings to conventional methods of model evaluation. To set a benchmark, we construct different types of random networks and compare them alongside the climate model networks. Our main findings are: (1) The linear network structure is better reproduced by the statistical model, the statistical analogue resampling scheme (STARS), in summer and winter for all variables except the geopotential height field, where the dynamical model CCLM prevails. (2) For the nonlinear comparison, the seasonal differences are more pronounced and CCLM performs almost as well as STARS in summer (except for sea level pressure), while STARS performs better in winter for all variables.
NASA Astrophysics Data System (ADS)
Burkholder, Michael B.; Litster, Shawn
2016-05-01
In this study, we analyze the stability of two-phase flow regimes and their transitions using chaotic and fractal statistics, and we report new measurements of dynamic two-phase pressure drop hysteresis that is related to flow regime stability and channel water content. Two-phase flow dynamics are relevant to a variety of real-world systems, and quantifying transient two-phase flow phenomena is important for efficient design. We recorded two-phase (air and water) pressure drops and flow images in a microchannel under both steady and transient conditions. Using Lyapunov exponents and Hurst exponents to characterize the steady-state pressure fluctuations, we develop a new, measurable regime identification criteria based on the dynamic stability of the two-phase pressure signal. We also applied a new experimental technique by continuously cycling the air flow rate to study dynamic hysteresis in two-phase pressure drops, which is separate from steady-state hysteresis and can be used to understand two-phase flow development time scales. Using recorded images of the two-phase flow, we show that the capacitive dynamic hysteresis is related to channel water content and flow regime stability. The mixed-wettability microchannel and in-channel water introduction used in this study simulate a polymer electrolyte fuel cell cathode air flow channel.
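Hurst exponents like those used here for regime identification are classically estimated with rescaled-range (R/S) analysis: compute the range of the cumulative mean-adjusted series over windows of increasing size, normalize by the window standard deviation, and take the log-log slope. The sketch below is an illustrative implementation, not the authors' code, checked on uncorrelated noise, for which H ≈ 0.5:

```python
import numpy as np

def hurst_rs(x, window_sizes):
    """Hurst exponent from rescaled-range (R/S) analysis: the slope of
    log(mean R/S) against log(window size)."""
    log_rs, log_n = [], []
    for n in window_sizes:
        rs = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            dev = np.cumsum(w - w.mean())        # cumulative deviations
            r = dev.max() - dev.min()            # range of the profile
            s = w.std(ddof=1)
            if s > 0:
                rs.append(r / s)
        log_rs.append(np.log(np.mean(rs)))
        log_n.append(np.log(n))
    return float(np.polyfit(log_n, log_rs, 1)[0])

rng = np.random.default_rng(3)
H = hurst_rs(rng.standard_normal(10_000), [16, 32, 64, 128, 256])
print(round(H, 2))   # uncorrelated noise: near 0.5 (small-sample bias pushes it up)
```

Persistent signals (H > 0.5) and anti-persistent ones (H < 0.5) separate cleanly with the same estimator, which is how the pressure-fluctuation regimes in the abstract can be distinguished.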
Notaro, Michael; Wang, Yi; Liu, Zhengyu; Gallimore, Robert; Levis, Samuel
2008-01-05
A negative feedback of vegetation cover on subsequent annual precipitation is simulated for the mid-Holocene over North Africa using a fully coupled general circulation model with dynamic vegetation, FOAM-LPJ (Fast Ocean Atmosphere Model-Lund Potsdam Jena Model). By computing a vegetation feedback parameter based on lagged autocovariances, the simulated impact of North African vegetation on precipitation is statistically quantified. The feedback is also dynamically assessed through initial value ensemble experiments, in which North African grass cover is initially reduced and the climatic response analyzed. The statistical and dynamical assessments of the negative vegetation feedback agree in sign and relative magnitude for FOAM-LPJ. The negative feedback on annual precipitation largely results from a competition between bare soil evaporation and plant transpiration, with increases in the former outweighing reductions in the latter given reduced grass cover. This negative feedback weakens and eventually reverses sign over time during a transient simulation from the mid-Holocene to present. A similar, but weaker, negative feedback is identified in Community Climate System Model Version 2 (CCSM2) over North Africa for the mid-Holocene.
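The "vegetation feedback parameter based on lagged autocovariances" follows the standard lagged-covariance idea: because the slow variable (vegetation) carries memory while atmospheric noise does not, λ = cov(P_t, V_{t−lag}) / cov(V_t, V_{t−lag}) isolates the vegetation influence on precipitation. A synthetic sketch; the AR(1) vegetation series and the −0.4 coefficient are assumptions for illustration, not the FOAM-LPJ diagnostics:

```python
import numpy as np

def feedback_parameter(atmos, veg, lag=1):
    """Lagged-covariance feedback estimator:
    lambda = cov(A_t, V_{t-lag}) / cov(V_t, V_{t-lag})."""
    a, v, vlag = atmos[lag:], veg[lag:], veg[:-lag]
    cov_av = np.mean((a - a.mean()) * (vlag - vlag.mean()))
    cov_vv = np.mean((v - v.mean()) * (vlag - vlag.mean()))
    return cov_av / cov_vv

rng = np.random.default_rng(4)
n = 5000
veg = np.zeros(n)
for t in range(1, n):                    # red (AR(1)) vegetation anomaly
    veg[t] = 0.7 * veg[t - 1] + rng.standard_normal()
precip = -0.4 * veg + 0.5 * rng.standard_normal(n)   # true feedback: -0.4
lam = feedback_parameter(precip, veg)
print(round(float(lam), 2))
```

Because the atmospheric noise is uncorrelated with earlier vegetation, the estimator recovers the imposed negative feedback despite the contemporaneous coupling.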
NASA Astrophysics Data System (ADS)
Koukas, Ioannis; Koukoravas, Vasilis; Mantesi, Konstantina; Sakellari, Katerina; Xanthopoulou, Themis-Demetra; Zarkadoulas, Akis; Markonis, Yannis; Papalexiou, Simon Michael; Koutsoyiannis, Demetris
2014-05-01
The statistical properties of over 300 different proxy records of the last two thousand years, derived from the PAGES 2k database, are stochastically analysed. Analyses include estimation of their first four moments and their autocorrelation functions (ACF), as well as the determination of the presence of Hurst-Kolmogorov behaviour (also known as long-term persistence). The data are investigated in groups according to their proxy type and location, while their statistical properties are also compared to those of the final temperature reconstructions. Acknowledgement: This research is conducted within the frame of the undergraduate course "Stochastic Methods in Water Resources" of the National Technical University of Athens (NTUA). The School of Civil Engineering of NTUA provided moral support for the participation of the students in the Assembly.
Muir, Ryan D.; Kissick, David J.; Simpson, Garth J.
2012-01-01
Data from photomultiplier tubes are typically analyzed using either counting or averaging techniques, which are most accurate in the dim and bright signal limits, respectively. A statistical means of adjoining these two techniques is presented by recovering the Poisson parameter from averaged data and relating it to the statistics of binomial counting from Kissick et al. [Anal. Chem. 82, 10129 (2010)]. The point at which binomial photon counting and averaging have equal signal-to-noise ratios is derived. Adjoining these two techniques yields signal-to-noise ratios from 87% to nearly 100% of the theoretical maximum across the full dynamic range of the photomultiplier tube used. The technique is demonstrated in a second harmonic generation microscope. PMID:22535131
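The counting side of this crossover can be made concrete with the Poisson statistics the paper builds on: under threshold (binomial) counting a bin fires whenever at least one photon arrives, so the firing fraction p = 1 − e^{−λ} can be inverted to recover the true rate λ. A minimal sketch; the simulated rate and function name are illustrative assumptions:

```python
import numpy as np

def poisson_rate_from_counting(fraction_triggered):
    """Recover the mean photons per bin from binomial (threshold) counting:
    a bin fires when >= 1 photon arrives, so p = 1 - exp(-lam), hence
    lam = -ln(1 - p). Breaks down as p -> 1, the bright/averaging regime."""
    return -np.log(1.0 - fraction_triggered)

rng = np.random.default_rng(5)
lam_true = 0.8
counts = rng.poisson(lam_true, 200_000)   # photons per bin
p_hat = np.mean(counts >= 1)              # fraction of bins that "fire"
est = poisson_rate_from_counting(p_hat)
print(round(float(est), 2))
```

As λ grows, nearly every bin fires and the inversion loses sensitivity, which is why the bright limit is handled by averaging instead.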
NASA Astrophysics Data System (ADS)
Pavlos, G. P.; Karakatsanis, L. P.; Xenakis, M. N.
2012-12-01
In this study, the non-linear analysis of the sunspot index is embedded in the non-extensive statistical theory of Tsallis (1988, 2004, 2009) [7,9,10]. The q-triplet of Tsallis, as well as the correlation dimension and the Lyapunov exponent spectrum, were estimated for the SVD components of the sunspot index time series. Also the multifractal scaling exponent spectrum f(a), the generalized Renyi dimension spectrum D(q) and the spectrum J(p) of the structure function exponents were estimated experimentally and theoretically by using the q-entropy principle included in Tsallis non-extensive statistical theory, following Arimitsu and Arimitsu (2001, 2000) [76,77]. Our analysis showed clearly the following: (a) a phase transition process in the solar dynamics from a high dimensional non-Gaussian SOC state to a low dimensional non-Gaussian chaotic state, (b) strong intermittent solar turbulence and an anomalous (multifractal) solar diffusion process, which is strengthened as the solar dynamics makes a phase transition to low dimensional chaos, in accordance with the studies of Ruzmaikin, Zelenyi and Milovanov (Zelenyi and Milovanov (1991) [21]; Milovanov and Zelenyi (1993) [22]; Ruzmaikin et al. (1996) [26]), (c) faithful agreement of Tsallis non-equilibrium statistical theory with the experimental estimations of (i) the non-Gaussian probability distribution function P(x), (ii) the multifractal scaling exponent spectrum f(a) and generalized Renyi dimension spectrum Dq, and (iii) the exponent spectrum J(p) of the structure functions estimated for the sunspot index and its underlying non-equilibrium solar dynamics.
NASA Astrophysics Data System (ADS)
Frossard, L.; Rieder, H. E.; Ribatet, M.; Staehelin, J.; Maeder, J. A.; Di Rocco, S.; Davison, A. C.; Peter, T.
2013-01-01
We use statistical models for mean and extreme values of total column ozone to analyze "fingerprints" of atmospheric dynamics and chemistry on long-term ozone changes at northern and southern mid-latitudes on grid cell basis. At each grid cell, the r-largest order statistics method is used for the analysis of extreme events in low and high total ozone (termed ELOs and EHOs, respectively), and an autoregressive moving average (ARMA) model is used for the corresponding mean value analysis. In order to describe the dynamical and chemical state of the atmosphere, the statistical models include important atmospheric covariates: the solar cycle, the Quasi-Biennial Oscillation (QBO), ozone depleting substances (ODS) in terms of equivalent effective stratospheric chlorine (EESC), the North Atlantic Oscillation (NAO), the Antarctic Oscillation (AAO), the El Niño/Southern Oscillation (ENSO), and aerosol load after the volcanic eruptions of El Chichón and Mt. Pinatubo. The influence of the individual covariates on mean and extreme levels in total column ozone is derived on a grid cell basis. The results show that "fingerprints", i.e., significant influence, of dynamical and chemical features are captured in both the "bulk" and the tails of the statistical distribution of ozone, respectively described by mean values and EHOs/ELOs. While results for the solar cycle, QBO, and EESC are in good agreement with findings of earlier studies, unprecedented spatial fingerprints are retrieved for the dynamical covariates. Column ozone is enhanced over Labrador/Greenland, the North Atlantic sector and over the Norwegian Sea, but is reduced over Europe, Russia and the Eastern United States during the positive NAO phase, and vice-versa during the negative phase. The NAO's southern counterpart, the AAO, strongly influences column ozone at lower southern mid-latitudes, including the southern parts of South America and the Antarctic Peninsula, and the central southern mid-latitudes. Results
NASA Astrophysics Data System (ADS)
Baldovin, F.; Robledo, A.
2002-10-01
We uncover the dynamics at the chaos threshold μ∞ of the logistic map and find that it consists of trajectories made of intertwined power laws that reproduce the entire period-doubling cascade that occurs for μ<μ∞. We corroborate this structure analytically via the Feigenbaum renormalization-group (RG) transformation and find that the sensitivity to initial conditions has precisely the form of a q exponential, of which we determine the q index and the q-generalized Lyapunov coefficient λq. Our results are an unequivocal validation of the applicability of the nonextensive generalization of Boltzmann-Gibbs statistical mechanics to critical points of nonlinear maps.
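The q-exponential form of the sensitivity found here is ξ(t) = [1 + (1 − q) λ_q t]^{1/(1−q)}, which reduces to the ordinary exponential as q → 1. A minimal sketch of the function itself (scalar arguments only, for illustration):

```python
import numpy as np

def exp_q(x, q):
    """Tsallis q-exponential: [1 + (1-q) x]^(1/(1-q)), with the convention
    that it vanishes when the base is non-positive; reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return float(np.exp(x))
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

print(round(exp_q(1.0, 1.0), 3), round(exp_q(1.0, 0.5), 3))   # → 2.718 2.25
```

For q < 1 the growth is power-law rather than exponential, which is exactly the weak, sub-exponential sensitivity to initial conditions characterizing the chaos threshold.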
NASA Technical Reports Server (NTRS)
Kozyra, J. U.; Cravens, T. E.; Nagy, A. F.; Brace, L. H.
1986-01-01
A statistical study of the subauroral electron temperature enhancement was undertaken using Langmuir probe observations during 488 traversals of the midlatitude plasmapause region by the DE-2 satellite. The subauroral electron temperature enhancement on the nightside is a quasi-permanent feature at all altitudes between 350 and 1000 km, with an occurrence frequency that depends on altitude. The occurrence frequency of the subauroral electron temperature peak has a strong altitude dependence on the dayside. The position of the subauroral Te peak moves to lower latitudes with increasing magnetic activity, in a manner similar to that of the equatorial plasmapause and other midlatitude plasmapause signatures.
Minimalist Model for the Dynamics of Helical Polypeptides: A Statistic-Based Parametrization.
Spampinato, Giulia Lia Beatrice; Maccari, Giuseppe; Tozzini, Valentina
2014-09-01
Low-resolution models are often used to address macroscopic time and size scales in molecular dynamics simulations of biomolecular systems. Coarse graining is often coupled to knowledge-based parametrization to obtain empirical potentials able to reproduce the system's thermodynamic behavior. Here, a minimalist coarse-grained (CG) model for the helical structures of proteins is reported. A knowledge-based parametrization strategy is coupled to the explicit inclusion of hydrogen-bonding-related terms, resulting in an accurate reproduction of the structure and dynamics of each single helical type, as well as the correlations among the internal conformational variables. The proposed strategy of basing the force field terms on real physicochemical interactions is transferable to different secondary structures. Thus, this work, though conclusive for helices, is to be considered the first of a series devoted to the application of the knowledge-based, physicochemical model to extended secondary structures and unstructured proteins.
Retrieval dynamics and retention in cross-situational statistical word learning
Vlach, Haley A.; Sandhofer, Catherine M.
2013-01-01
Previous research on cross-situational word learning has demonstrated that learners are able to reduce ambiguity in mapping words to referents by tracking co-occurrence probabilities across learning events. In the current experiments, we examined whether learners are able to retain mappings over time. The results revealed that learners are able to retain mappings for up to one week later. However, there were interactions between the amount of retention and the different learning conditions. Interestingly, the strongest retention was associated with a learning condition that engendered retrieval dynamics that initially challenged the learner but eventually led to more successful retrieval toward the end of learning. The ease/difficulty of retrieval is a critical process underlying cross-situational word learning and is a powerful example of how learning dynamics affect long-term learning outcomes. PMID:24117698
Dynamics and Statistical Mechanics of Rotating and non-Rotating Vortical Flows
Lim, Chjan
2013-12-18
Three projects were analyzed with the overall aim of developing a computational/analytical model for estimating values of the energy, angular momentum, enstrophy, and total variation of fluid height at phase transitions between disordered and self-organized flow states in planetary atmospheres. It is believed that these transitions in equilibrium statistical mechanics models play a role in the construction of large-scale, stable structures, including super-rotation in the Venusian atmosphere and the formation of the Great Red Spot on Jupiter. Exact solutions of the spherical energy-enstrophy models for rotating planetary atmospheres by Kac's method of steepest descent predicted phase transitions to super-rotating solid-body flows at high energy-to-enstrophy ratio for all planetary spins, and to sub-rotating modes if the planetary spin is large enough. These canonical statistical ensembles are well-defined for the long-range energy interactions that arise from 2D fluid flows on compact oriented manifolds such as the surface of the sphere and the torus. This is because, in the Fourier space made available through Hodge theory, the energy terms are exactly diagonalizable and hence have zero range, leading to well-defined heat baths.
Statistical analysis of global wind dynamics in vigorous Rayleigh-Bénard convection
NASA Astrophysics Data System (ADS)
Petschel, K.; Wilczek, M.; Breuer, M.; Friedrich, R.; Hansen, U.
2011-08-01
Experimental and numerical studies of thermal convection have shown that sufficiently vigorous convective flows exhibit a large-scale thermal wind component sweeping along small-scale thermal boundary layer instabilities. A characteristic feature of these flows is an intermittent behavior in the form of irregular reversals in the orientation of the large-scale circulation. There have been several attempts toward a better understanding and description of the phenomenon of flow reversals, but so far most of these models are based on a statistical analysis of few-point measurements or on simplified theoretical assumptions. The analysis of long-term data sets (>5×10^5 turnover times τ_t = d/u_rms) obtained by numerical simulations of turbulent two-dimensional Rayleigh-Bénard convection allows us to get a more comprehensive view of the spatio-temporal flow behavior. By means of a global statistical analysis of the characteristic spatial modes of the flow we extract information about the stability of dominant large-scale modes as well as the reversal paths in state subspace. We examine probability density functions and drift vector fields of two-dimensional state subspaces spanned by different large-scale spatial modes. This also provides information about the coexistence of dominant modes.
Liu, Jianbo; Chambreau, Steven D; Vaghjiani, Ghanshyam L
2011-07-21
A large set of quasi-classical, direct dynamics trajectory simulations were performed for decomposition of 1,5-dinitrobiuret (DNB) over a temperature range from 4000 to 6000 K, aimed at providing insight into DNB decomposition mechanisms. The trajectories revealed various decomposition paths and reproduced the products (including HNCO, N(2)O, NO(2), NO, and water) observed in DNB pyrolysis experiments. Using trajectory results as a guide, structures of intermediate complexes and transition states that might be important for decomposition were determined using density functional theory calculations. Rice-Ramsperger-Kassel-Marcus (RRKM) theory was then utilized to examine behaviors of the energized reactant and intermediates and to determine unimolecular rates for crossing various transition states. According to RRKM predictions, the dominant initial decomposition path of energized DNB corresponds to elimination of HNNO(2)H via a concerted mechanism in which the molecular decomposition is accompanied by intramolecular H-atom transfer from the central nitrogen to the terminal nitro oxygen. Other important paths correspond to elimination of NO(2) and H(2)NNO(2). NO(2) elimination is a simple N-N bond scission process. Formation and elimination of nitramide is, however, dynamically complicated, requiring twisting a -NHNO(2) group out of the molecular plane, followed by an intramolecular reaction to form nitramide before its elimination. These two paths become significant at temperatures above 1500 K, accounting for >17% of DNB decomposition at 2000 K. This work demonstrates that quasi-classical trajectory simulations, in conjunction with electronic structure and RRKM calculations, are able to extract mechanisms, kinetics, dynamics and product branching ratios for the decomposition of complex energetic molecules and to predict how they vary with decomposition temperature. PMID:21648953
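The RRKM step hinges on counting vibrational states; a minimal sketch of the standard Beyer-Swinehart direct count is shown below (the grain size and frequencies are illustrative choices of ours, not values from the study):

```python
def beyer_swinehart(freqs_cm, e_max_cm, grain_cm=10.0):
    """Direct count of harmonic vibrational states on an energy grid.
    counts[i] = number of states in the grain at energy i*grain_cm; a
    running sum gives the sum of states N(E), which enters the RRKM rate
    k(E) = sigma * N_ts(E - E0) / (h * rho(E))."""
    n_bins = int(e_max_cm / grain_cm) + 1
    counts = [0.0] * n_bins
    counts[0] = 1.0                      # zero-point level
    for f in freqs_cm:
        step = int(round(f / grain_cm))
        for i in range(step, n_bins):    # in-place forward convolution
            counts[i] += counts[i - step]
    return counts

# Two 100 cm^-1 oscillators: 3 states at 200 cm^-1 -> (2,0), (1,1), (0,2)
counts = beyer_swinehart([100.0, 100.0], e_max_cm=1000.0)
```

The in-place forward pass is what makes the count O(modes × bins) rather than a full combinatorial enumeration.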
Dynamics and statistics of noise-like pulses in modelocked lasers
NASA Astrophysics Data System (ADS)
Donovan, Graham M.
2015-08-01
Noise-like pulses and optical rogue waves are connected nonlinear phenomena which can occur in passively modelocked laser systems. Here we consider a range of model systems to explore the conditions under which noise-like pulses can be expected to occur, and further when the resulting statistics meet the optical rogue wave criteria. We show, via a series of careful simulations, that noise-like pulses and optical rogue waves can arise either separately or together, and that they may emerge from standard soliton-like solutions via different mechanisms. We also propose a quantitative definition of noise-like pulses, and carefully explore the issues involved in convergence testing of numerical methods for such systems.
Delayed ionization and fragmentation en route to thermionic emission: statistics and dynamics.
Campbell, E E; Levine, R D
2000-01-01
Thermionic emission is discussed as a long time (microseconds) decay mode of energy-rich large molecules, metallic and metcar clusters, and fullerenes. We review what is known and consider the many experiments, systems, and theoretical and computational studies that still need to be done. We conclude with a wish list for future work. Particular attention is given to the experimental signatures, such as the dependence on the mode of energy acquisition, and theoretical indications of a not-quite-statistical delayed ionization and to the competition of electron emission with other decay modes, such as fragmentation or radiative cooling. Coupling of the electronic and nuclear modes can be a bottleneck and quite long time-delayed ionization can be observed, as in the decay of high Rydberg states probed by ZEKE spectroscopy, before the onset of complete energy partitioning.
Statistical Properties and Multifractal Behaviors of Market Returns by Ising Dynamic Systems
NASA Astrophysics Data System (ADS)
Fang, Wen; Wang, Jun
An interacting-agent model of speculative activity explaining price formation in financial markets is considered in the present paper, based on the stochastic Ising model and mean-field theory. The model describes the interaction strength among the agents as well as an external field, and the corresponding random logarithmic price return process is investigated. According to the empirical research on the model, the time series generated by this Ising model exhibits the bursting typical of volatility clustering, the fat-tail phenomenon, power-law distribution tails, and long-time memory. The statistical properties of the returns of the Hushen 300 Index, the Shanghai Stock Exchange (SSE) Composite Index, and the Shenzhen Stock Exchange (SZSE) Component Index are also studied for comparison between the real time series and the simulated ones. Further, multifractal detrended fluctuation analysis is applied to show that the return series simulated by the Ising model exhibit both distribution multifractality and correlation multifractality.
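A minimal sketch of this kind of spin-agent dynamics (Metropolis updates on a 1-D ring, with the magnetization standing in for the log-return; all parameter values are ours and purely illustrative, not the model specification of the paper):

```python
import math
import random

def simulate_returns(n_agents=200, n_steps=500, beta=1.2, h=0.0, seed=1):
    """Metropolis dynamics on a 1-D ring of +/-1 agent spins; the
    magnetization after each sweep stands in for the log-price return."""
    rng = random.Random(seed)
    s = [rng.choice((-1, 1)) for _ in range(n_agents)]
    returns = []
    for _ in range(n_steps):
        for _ in range(n_agents):              # one Monte Carlo sweep
            i = rng.randrange(n_agents)
            nb = s[i - 1] + s[(i + 1) % n_agents]
            d_e = 2.0 * s[i] * (nb + h)        # energy cost of flipping s[i]
            if d_e <= 0.0 or rng.random() < math.exp(-beta * d_e):
                s[i] = -s[i]
        returns.append(sum(s) / n_agents)      # magnetization as log-return
    return returns

rets = simulate_returns()
```

Near the critical coupling the magnetization series develops the clustered bursts of activity that motivate the volatility-clustering analogy.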
A dynamical and statistical investigation of the shape of splashform tektites
NASA Astrophysics Data System (ADS)
Butler, S. L.; Stauffer, M.
2008-12-01
Splash-form tektites are believed to be molten rock that was ejected by a large impact and solidified while in flight. These glassy rocks are found in a number of strewn fields around the Earth and occur in a number of intriguing shapes, including near-spheres and axisymmetric biconcave shapes as well as rods and "dumb-bells". In this contribution, we will present the results of a statistical study of the shapes of over 1000 tektites from South-East Asia and the results of a numerical study of the evolution of fluid droplets under the influence of a centrifugal force and surface tension. As we will show, the numerical simulations first evolve to an oblate, axisymmetric form before becoming subject to a non-axisymmetric instability that results in a prolate shape. The numerical model results are consistent with the measurements of real tektites, which show that there is a dearth of weakly deformed, highly prolate tektites.
DYNAMIC STABILITY OF THE SOLAR SYSTEM: STATISTICALLY INCONCLUSIVE RESULTS FROM ENSEMBLE INTEGRATIONS
Zeebe, Richard E.
2015-01-01
Due to the chaotic nature of the solar system, the question of its long-term stability can only be answered in a statistical sense, for instance, based on numerical ensemble integrations of nearby orbits. Destabilization of the inner planets, leading to close encounters and/or collisions, can be initiated through a large increase in Mercury's eccentricity, with a currently assumed likelihood of ∼1%. However, little is known at present about the robustness of this number. Here I report ensemble integrations of the full equations of motion of the eight planets and Pluto over 5 Gyr, including contributions from general relativity. The results show that different numerical algorithms lead to statistically different results for the evolution of Mercury's eccentricity (e_M). For instance, starting at present initial conditions (e_M ≃ 0.21), Mercury's maximum eccentricity achieved over 5 Gyr is, on average, significantly higher in symplectic ensemble integrations using heliocentric rather than Jacobi coordinates and stricter error control. In contrast, starting at a possible future configuration (e_M ≃ 0.53), Mercury's maximum eccentricity achieved over the subsequent 500 Myr is, on average, significantly lower using heliocentric rather than Jacobi coordinates. For example, the probability for e_M to increase beyond 0.53 over 500 Myr is >90% (Jacobi) versus only 40%-55% (heliocentric). This poses a dilemma because the physical evolution of the real system, and its probabilistic behavior, cannot depend on the coordinate system or the numerical algorithm chosen to describe it. Some tests of the numerical algorithms suggest that symplectic integrators using heliocentric coordinates underestimate the odds for destabilization of Mercury's orbit at high initial e_M.
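As a toy illustration of the symplectic integrators at issue, a kick-drift-kick leapfrog keeps the energy error of a Kepler orbit bounded rather than drifting (a two-body sketch of ours in arbitrary units, not the heliocentric/Jacobi N-body schemes used in the study):

```python
import math

def accel(x, y, gm=1.0):
    """Point-mass gravitational acceleration."""
    r3 = (x * x + y * y) ** 1.5
    return -gm * x / r3, -gm * y / r3

def leapfrog_orbit(n_steps=20000, dt=0.01, gm=1.0):
    """Kick-drift-kick leapfrog for a circular Kepler orbit; returns the
    maximum energy error, which stays bounded for symplectic schemes."""
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0   # circular orbit for gm = 1
    e0 = 0.5 * (vx * vx + vy * vy) - gm / math.hypot(x, y)
    max_err = 0.0
    for _ in range(n_steps):
        ax, ay = accel(x, y, gm)
        vx += 0.5 * dt * ax             # half kick
        vy += 0.5 * dt * ay
        x += dt * vx                    # drift
        y += dt * vy
        ax, ay = accel(x, y, gm)
        vx += 0.5 * dt * ax             # half kick
        vy += 0.5 * dt * ay
        e = 0.5 * (vx * vx + vy * vy) - gm / math.hypot(x, y)
        max_err = max(max_err, abs(e - e0))
    return max_err
```

Boundedness of the energy error is exactly the property that makes such schemes attractive for Gyr-scale integrations, yet, as the abstract stresses, it does not by itself guarantee statistically consistent chaotic ensembles.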
Lee, Kwang-Min; Gilmore, David F
2006-11-01
The statistical design of experiments (DOE) is a collection of predetermined settings of the process variables of interest, which provides an efficient procedure for planning experiments. Experiments on biological processes typically produce long sequences of successive observations on each experimental unit (plant, animal, bioreactor, fermenter, or flask) in response to several treatments (combinations of factors). Cell culture and other biotech-related experiments used to be performed by the repeated-measures method of experimental design, coupled with different levels of several process factors, to investigate dynamic biological processes. Data collected from this design can be analyzed by several kinds of general linear model (GLM) statistical methods, such as multivariate analysis of variance (MANOVA), univariate ANOVA (time split-plot analysis with randomization restriction), and analysis of orthogonal polynomial contrasts of the repeated factor (linear coefficient analysis). Lastly, a regression model is introduced to describe responses over time to the different treatments, along with model residual analysis. Statistical analysis of bioprocesses with repeated measurements can help investigate environmental factors and effects affecting physiological processes and bioprocesses in analyzing and optimizing biotechnology production. PMID:17159235
Financial price dynamics and pedestrian counterflows: A comparison of statistical stylized facts
NASA Astrophysics Data System (ADS)
Parisi, Daniel R.; Sornette, Didier; Helbing, Dirk
2013-01-01
We propose and document the evidence for an analogy between the dynamics of granular counterflows in the presence of bottlenecks or restrictions and financial price formation processes. Using extensive simulations, we find that the counterflows of simulated pedestrians through a door display eight stylized facts observed in financial markets when the density around the door is compared with the logarithm of the price. Finding so many stylized facts is very rare indeed among agent-based models of financial markets. The stylized properties are present when the agents in the pedestrian model are assumed to display zero-intelligence behavior. If agents are given decision-making capacity and adapt to partially follow the majority, periods of herding behavior may additionally occur. This generates the very slow decay of the autocorrelation of absolute returns due to intermittent dynamics. Our findings suggest that the stylized facts in the fluctuations of financial prices result from a competition of two groups with opposite interests in the presence of a constraint funneling the flow of transactions to a narrow band of prices with limited liquidity.
Nichols, J.M.; Moniz, L.; Nichols, J.D.; Pecora, L.M.; Cooch, E.
2005-01-01
A number of important questions in ecology involve the possibility of interactions or "coupling" among potential components of ecological systems. The basic question of whether two components are coupled (exhibit dynamical interdependence) is relevant to investigations of movement of animals over space, population regulation, food webs and trophic interactions, and is also useful in the design of monitoring programs. For example, in spatially extended systems, coupling among populations in different locations implies the existence of redundant information in the system and the possibility of exploiting this redundancy in the development of spatial sampling designs. One approach to the identification of coupling involves study of the purported mechanisms linking system components. Another approach is based on time series of two potential components of the same system and, in previous ecological work, has relied on linear cross-correlation analysis. Here we present two different attractor-based approaches, continuity and mutual prediction, for determining the degree to which two population time series (e.g., at different spatial locations) are coupled. Both approaches are demonstrated on a one-dimensional predator-prey model system exhibiting complex dynamics. Of particular interest is the spatial asymmetry introduced into the model as linearly declining resource for the prey over the domain of the spatial coordinate. Results from these approaches are then compared to the more standard cross-correlation analysis. In contrast to cross-correlation, both continuity and mutual prediction are clearly able to discern the asymmetry in the flow of information through this system.
Dynamics of the instantaneous firing rate in response to changes in input statistics.
Fourcaud-Trocmé, Nicolas; Brunel, Nicolas
2005-06-01
We review and extend recent results on the instantaneous firing rate dynamics of simplified models of spiking neurons in response to noisy current inputs. It has been shown recently that the response of the instantaneous firing rate to small-amplitude oscillations in the mean inputs depends, in the high-frequency limit f → ∞, on the spike initiation dynamics. A particular simplified model, the exponential integrate-and-fire (EIF) model, has a response that decays as 1/f in the high-frequency limit and describes very well the response of conductance-based models with a Hodgkin-Huxley-type fast sodium current. Here, we show that the response of the EIF instantaneous firing rate also decays as 1/f in the case of an oscillation in the variance of the inputs, for both white and colored noise. We then compute the initial transient response of the firing rate of the EIF model to a step change in its mean inputs and/or in the variance of its inputs. We show that in both cases the response speed is proportional to the neuron's stationary firing rate and inversely proportional to a 'spike slope factor' Δ_T that controls the sharpness of spike initiation: as 1/Δ_T for a step change in mean inputs, and as 1/Δ_T^2 for a step change in the variance of the inputs.
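A minimal Euler-Maruyama sketch of the EIF model driven by noisy current (all parameter values below are illustrative choices of ours, not those of the paper):

```python
import math
import random

def simulate_eif(mu, sigma, t_max=500.0, dt=0.05, seed=0):
    """Euler-Maruyama integration of the EIF model
    tau dV/dt = -(V - E_L) + Delta_T exp((V - V_T)/Delta_T) + mu + noise,
    returning spike times in ms."""
    tau, e_l, v_t, delta_t = 10.0, -65.0, -50.0, 2.0   # ms, mV
    v_reset, v_spike = -65.0, 0.0
    rng = random.Random(seed)
    v, spikes = e_l, []
    for step in range(int(t_max / dt)):
        drift = (-(v - e_l) + delta_t * math.exp((v - v_t) / delta_t) + mu) / tau
        v += dt * drift + sigma * math.sqrt(dt / tau) * rng.gauss(0.0, 1.0)
        if v >= v_spike:        # the upswing diverges; cut it off and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

spikes = simulate_eif(mu=20.0, sigma=2.0)
```

Shrinking Δ_T sharpens spike initiation toward the standard integrate-and-fire limit, which is what makes it the natural control parameter for the response speed discussed above.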
How electronic dynamics with Pauli exclusion produces Fermi-Dirac statistics
Nguyen, Triet S.; Nanguneri, Ravindra; Parkhill, John
2015-04-07
It is important that any dynamics method approaches the correct population distribution at long times. In this paper, we derive a one-body reduced density matrix dynamics for electrons in energetic contact with a bath. We obtain a remarkable equation of motion which shows that in order to reach equilibrium properly, rates of electron transitions depend on the density matrix. Even though the bath drives the electrons towards a Boltzmann distribution, hole blocking factors in our equation of motion cause the electronic populations to relax to a Fermi-Dirac distribution. These factors are an old concept, but we show how they can be derived with a combination of time-dependent perturbation theory and the extended normal ordering of Mukherjee and Kutzelnigg for a general electronic state. The resulting non-equilibrium kinetic equations generalize the usual Redfield theory to many-electron systems, while ensuring that the orbital occupations remain between zero and one. In numerical applications of our equations, we show that relaxation rates of molecules are not constant because of the blocking effect. Other applications to model atomic chains are also presented which highlight the importance of treating both dephasing and relaxation. Finally, we show how the bath localizes the electron density matrix.
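The role of the hole-blocking factors can be illustrated with a toy Pauli master equation (a sketch of ours, not the paper's many-electron Redfield equations): with detailed-balance bath rates and (1 − n) factors, the level occupations relax to a Fermi-Dirac rather than Boltzmann distribution.

```python
import math

def relax_with_blocking(energies, n0, temp=1.0, dt=0.01, n_steps=20000):
    """Euler integration of
    dn_i/dt = sum_j [W_ji n_j (1 - n_i) - W_ij n_i (1 - n_j)],
    with bath rates obeying detailed balance W_ij/W_ji = exp(-(e_j-e_i)/T).
    The (1 - n) hole-blocking factors pin the steady state to Fermi-Dirac."""
    m = len(energies)
    w = [[0.0 if i == j else math.exp(-(energies[j] - energies[i]) / (2.0 * temp))
          for j in range(m)] for i in range(m)]
    n = list(n0)
    for _ in range(n_steps):
        dn = [0.0] * m
        for i in range(m):
            for j in range(m):
                if i != j:
                    dn[i] += w[j][i] * n[j] * (1.0 - n[i]) \
                             - w[i][j] * n[i] * (1.0 - n[j])
        n = [ni + dt * di for ni, di in zip(n, dn)]
    return n

# Four equally spaced levels, two electrons: the steady state matches
# f(e) = 1/(1 + exp((e - mu)/T)) with mu fixed by particle number.
n_final = relax_with_blocking([0.0, 1.0, 2.0, 3.0], [0.9, 0.6, 0.3, 0.2])
```

Dropping the (1 − n) factors from the same equation recovers a Boltzmann steady state, which is the contrast the abstract emphasizes.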
Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael; Gershunov, Alexander; Gutowski, Jr., William J.; Gyakum, John R.; Katz, Richard W.; Lee, Yun -Young; Lim, Young -Kwon; Prabhat, -
2015-05-22
This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is upon extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large-scale meteorological patterns (LSMPs). Methods used to define extreme event statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency, underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms such as the effects of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs more specifically to understand the role of LSMPs on past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated so more
ERIC Educational Resources Information Center
Olive, G.; And Others
A selective dissemination of information service based on computer scanning of Nuclear Science Abstracts tapes has operated at the Atomic Energy Research Establishment, Harwell, England since October, 1968. The performance of the mechanized SDI service has been compared with that of the pre-existing current awareness service which is based on…
Technology Transfer Automated Retrieval System (TEKTRAN)
Subsurface drip irrigation (SDI) wets the soil at the depth of the drip line and in a volume around each emitter, but the soil wetted often does not include the soil surface. Because of this, the soil surface remains completely or at least partially dry and evaporative losses of irrigation water are...
Technology Transfer Automated Retrieval System (TEKTRAN)
QseA and SdiA are two of several transcriptional regulators that regulate virulence gene expression of enterohemorrhagic Escherichia coli (EHEC) O157:H7 via quorum sensing (QS). QseA regulates the expression of the locus of enterocyte effacement (LEE). LEE encodes for a type III secretion (T3S) sys...
NASA Astrophysics Data System (ADS)
Avissar, Roni
1991-03-01
Land surface interacts strongly with the atmosphere at all scales. This has a considerable impact on the hydrologic cycle and the climate. Therefore, in order to produce realistic simulations with climate models, their land-surface processes must be parameterized accurately. Because continental surfaces are usually extremely heterogeneous over the resolvable scales considered in these models, surface parameterizations based on the 'big leaf-big stoma' approach (which assume grid-scale homogeneity) fail to represent the land-atmosphere interactions that occur at much smaller scales. A parameterization based on a statistical-dynamical approach is suggested here. With this approach, each surface grid element of the numerical model is divided into homogeneous land patches (i.e., patches with similar internal heterogeneity). Assuming that horizontal fluxes between the different patches within a grid element are small as compared to the vertical fluxes, patches of the same type located at different places in the grid can be regrouped into one subgrid surface class. Then, for each one of the subgrid surface classes, probability density functions (pdfs) are used to characterize the variability of the different parameters of the soil-plant-atmosphere system. These pdfs are combined with the equations of the model that describe the dynamics and the conservation of energy and mass in the atmosphere. The potential application of this statistical-dynamical parameterization is illustrated by simulating (i) the development of an agricultural area in an arid region and (ii) the process of deforestation in a tropical region. Both cases emphasize the importance of land-atmosphere interactions on regional hydrologic processes and climate.
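The core of the statistical-dynamical idea is that, for a nonlinear flux law, the pdf-weighted average of subgrid patch fluxes differs from the flux evaluated at mean parameters. A minimal sketch (the flux law, names, and values below are hypothetical, purely for illustration):

```python
def patch_flux(conductance, forcing=1.0):
    """Toy saturating surface-flux law for one homogeneous patch
    (hypothetical functional form, chosen only to be nonlinear)."""
    return forcing * conductance / (1.0 + conductance)

def gridcell_flux(weights, conductances):
    """Statistical-dynamical aggregation: weight each subgrid class's
    flux by its probability instead of evaluating one 'big leaf' at the
    grid-mean parameter."""
    return sum(w * patch_flux(g) for w, g in zip(weights, conductances))

weights = [0.5, 0.5]
conductances = [0.2, 1.8]           # heterogeneous patches; mean = 1.0
aggregated = gridcell_flux(weights, conductances)
big_leaf = patch_flux(1.0)          # mean-parameter ('big leaf') estimate
# For this concave flux law the pdf-weighted flux falls below the
# big-leaf value, the kind of aggregation bias the parameterization fixes.
```

With a linear flux law the two estimates would coincide; the discrepancy is entirely a Jensen-inequality effect of subgrid heterogeneity.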
Cascade statistics in the binary collision approximation and in full molecular dynamics
NASA Astrophysics Data System (ADS)
Hou, M.; Pan, Z.-Y.
1995-08-01
The Binary Collision Approximation (BCA) and Molecular Dynamics (MD) are used to simulate low-energy atomic collision cascades in solids. Results are compared and discussed on the example of copper and gold self-irradiation. For MD, long-range N-body potentials are built, similar to those deduced from the second-moment semi-empirical tight-binding model. The pair interaction contribution is splined to a Molière screened Coulomb potential at small separation distances. This hybrid potential is checked for consistency with the already assessed N-body potential by means of thermal dynamics calculations in both the canonical (NVT) and the microcanonical (NVE) ensembles. Its use for long-time enhanced diffusion simulations is discussed. The BCA calculations are performed with the MARLOWE program, using the same Molière potential as for MD, and modelling the N-body contribution by a binding of the atoms to their equilibrium lattice sites. The scattering integrals are estimated by means of a 4-point Gauss-Mehler quadrature. In MD, the NVT equations of motion are integrated with a constant time step of 2 fs. For the NVE cascade simulations, the Newton equations of motion are solved with a dynamically adjusted time step, kept lower than 2 fs. The influence of the time step on the simulated trajectories is discussed. The mean number of moving atoms with total energy above threshold values ranging from 1 to 100 eV is estimated as a function of time over 300 fs both with MARLOWE and by MD. This estimate is repeated for external primary energies ranging from 250 eV to 1 keV. In the case of copper, the BCA results are found to be in remarkable agreement with MD over about 200 fs of cascade development, provided the size of the crystallite used in MD is sufficiently large in order to account for the early mechanical response of the close environment. This agreement between the two methods is found to be the best when the binding energy of the target atoms as modelled in the BCA
NASA Astrophysics Data System (ADS)
Liu, Zhichao; Zhao, Yunjie; Zeng, Chen; Computational Biophysics Lab Team
As the main protein of the bacterial flagellum, flagellin plays an important role in perception and defense response. The newly discovered locus FLS2 is ubiquitously expressed. FLS2 encodes a putative receptor kinase and shares many homologies with some plant resistance genes, and even with components of the immune systems of mammals and insects. In Arabidopsis, FLS2 perception is achieved through recognition of the epitope flg22, which induces FLS2 heteromerization with BAK1 and finally plant immunity. Here we use both Direct Coupling Analysis (DCA) and Molecular Dynamics (MD) simulations to better understand the defense mechanism of FLS2. This may facilitate a redesign of flg22, or de novo design, for desired specificity and potency, extending the immune properties of FLS2 to other important crops and vegetables.
NASA Astrophysics Data System (ADS)
Calzavarini, Enrico; Volk, Romain; Lévêque, Emmanuel; Pinton, Jean-François; Toschi, Federico
2012-02-01
We study, by means of an Eulerian-Lagrangian model, the statistical properties of velocity and acceleration of a neutrally buoyant finite-sized particle in a statistically homogeneous and isotropic turbulent flow. The particle equation of motion, besides added mass and steady Stokes drag, takes into account the unsteady Stokes drag force, known as the Basset-Boussinesq history force, and the non-Stokesian drag based on the Schiller-Naumann parametrization, together with the finite-size Faxén corrections. We focus on the case of flow at low Taylor-Reynolds number, Reλ≃31, for which fully resolved numerical data that can be taken as a reference are available [Homann H., Bec J. Finite-size effects in the dynamics of neutrally buoyant particles in turbulent flow. J Fluid Mech 651 (2010) 81-91]. Remarkably, we show that while drag forces always have minor effects on the acceleration statistics, they play an important role in the velocity behavior. We also propose that the scaling relation for the particle velocity variance as a function of its size, first detected in fully resolved simulations, does not originate from inertial-scale properties of the background turbulent flow but likely arises from the non-Stokesian component of the drag produced by the wake behind the particle. Furthermore, by means of comparison with fully resolved simulations, we show that the Faxén correction to the added mass has a dominant role in the particle acceleration statistics even for particles whose size attains the integral scale.
SU-E-J-261: Statistical Analysis and Chaotic Dynamics of Respiratory Signal of Patients in BodyFix
Michalski, D; Huq, M; Bednarz, G; Lalonde, R; Yang, Y; Heron, D
2014-06-01
Purpose: To quantify the respiratory signal of patients in BodyFix undergoing 4DCT scans with and without the immobilization cover. Methods: 20 pairs of respiratory tracks recorded with the RPM system during 4DCT scans were analyzed. Descriptive statistics were applied to selected parameters of the exhale-inhale decomposition. Standardized signals were used with the delay method to build orbits in embedded space. Nonlinear behavior was tested with surrogate data. Sample entropy (SE), Lempel-Ziv complexity (LZC) and the largest Lyapunov exponents (LLE) were compared. Results: Statistical tests show a difference between scans for inspiration time and its variability, which is larger for scans without the cover. The same holds for the variability of the end of exhalation and inhalation. Other parameters fail to show a difference. For both scans, respiratory signals show determinism and nonlinear stationarity. Statistical tests on surrogate data reveal their nonlinearity. LLEs show the signals' chaotic nature and its correlation with the breathing period and its embedding delay time. SE, LZC and LLE measure respiratory signal complexity. Nonlinear characteristics do not differ between scans. Conclusion: Contrary to expectation, the cover applied to patients in BodyFix appears to have a limited effect on signal parameters. Analysis based on trajectories of delay vectors shows the respiratory system's nonlinear character and its sensitive dependence on initial conditions. Reproducibility of the respiratory signal can be evaluated with measures of signal complexity and its predictability window. A longer respiratory period is conducive to signal reproducibility, as shown by these gauges. Statistical independence of the exhale and inhale times is also supported by the magnitude of the LLE. The nonlinear parameters seem more appropriate for gauging respiratory signal complexity, given its deterministic chaotic nature. This contrasts with measures based on harmonic analysis, which are blind to nonlinear features. Dynamics of breathing, so crucial for
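The delay method used above to build orbits in embedded space is standard Takens-style delay-coordinate embedding. A minimal sketch follows; the signal, embedding dimension, and delay are illustrative, not the values used in the study:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Build delay-coordinate vectors (Takens embedding) from a 1-D
    signal: row i is [x[i], x[i+tau], ..., x[i+(dim-1)*tau]]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# Example: embed a noiseless sine "respiratory" trace in 3-D.
t = np.linspace(0, 20 * np.pi, 2000)
orbit = delay_embed(np.sin(t), dim=3, tau=25)
```

Nonlinear statistics such as sample entropy or the largest Lyapunov exponent are then estimated from the geometry of these reconstructed orbits.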
Statistical techniques for modeling extreme price dynamics in the energy market
NASA Astrophysics Data System (ADS)
Mbugua, L. N.; Mwita, P. N.
2013-02-01
Extreme events have a large impact throughout engineering, science and economics, because they often lead to failure and losses owing to the unobservable nature of extraordinary occurrences. In this context, this paper focuses on appropriate statistical methods combining a quantile regression approach with extreme value theory to model the excesses; this plays a vital role in risk management. Locally, nonparametric quantile regression is used, a method that is flexible and best suited when one knows little about the functional form of the object being estimated. Conditions are derived under which the extreme value distribution function can be estimated. The threshold model of extreme values is used to circumvent the problem of sparse observations in the tail of the distribution function. The application of a selection of these techniques is demonstrated on the volatile fuel market. The results indicate that the method can extract the maximum possible reliable information from the data. Its key attraction is that it offers a set of ready-made approaches to the most difficult problem of risk modeling.
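The threshold model of extreme values referred to above is conventionally fitted with a Generalized Pareto distribution for the excesses over a high threshold (peaks over threshold). A minimal sketch on synthetic heavy-tailed "returns"; all data and parameter choices here are illustrative, not the paper's fuel-market data:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=20000)   # heavy-tailed toy "price returns"

u = np.quantile(returns, 0.95)               # high threshold
excesses = returns[returns > u] - u

# Fit a Generalized Pareto distribution to the threshold excesses
# (location fixed at 0, as in the standard POT formulation).
shape, loc, scale = genpareto.fit(excesses, floc=0)

def tail_prob(x):
    """Tail probability P(X > x) = P(X > u) * P(X - u > x - u | X > u)."""
    return (len(excesses) / len(returns)) * genpareto.sf(x - u, shape, loc=0, scale=scale)
```

The fitted tail can then be extrapolated beyond the largest observation, which is what makes the threshold model useful where direct empirical estimation fails.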
NASA Astrophysics Data System (ADS)
Eckert, Nicolas; Schläppy, Romain; Jomelli, Vincent; Naaim, Mohamed
2013-04-01
A crucial step for proposing relevant mitigation measures in long-term avalanche forecasting is the accurate definition of high-return-period avalanches. Recently, "statistical-dynamical" approaches combining a numerical model with stochastic operators describing the variability of its inputs and outputs have emerged. Their main interest is that they take into account the topographic dependency of snow avalanche runout distances, and constrain the correlation structure between the model's variables by physical rules, so as to simulate the different marginal distributions of interest (pressure, flow depth, etc.) with reasonable realism. Bayesian methods have been shown to be well adapted to model inference, avoiding identifiability problems thanks to prior information. An important problem which has virtually never been considered before is the validation of the predictions resulting from a statistical-dynamical approach (or from any other engineering method for computing extreme avalanches). In hydrology, independent "fossil" data such as flood deposits in caves are sometimes compared with design discharges corresponding to high return periods. Hence, the aim of this work is to implement a similar comparison between high-return-period avalanches obtained with a statistical-dynamical approach and independent validation data resulting from careful dendrogeomorphological reconstructions. To do so, an up-to-date statistical model based on the depth-averaged equations and the classical Voellmy friction law is used on a well-documented case study. First, parameter values resulting from another path are applied, and the dendrological validation sample shows that this approach fails to provide realistic predictions for the case study. This may be due to the strongly bounded behaviour of runouts in this case (the extreme of their distribution is identified as belonging to the Weibull attraction domain). Second, local calibration on the available avalanche
Ly, Cheng; Tranchina, Daniel
2009-02-01
In the probability density function (PDF) approach to neural network modeling, a common simplifying assumption is that the arrival times of elementary postsynaptic events are governed by a Poisson process. This assumption ignores temporal correlations in the input that sometimes have important physiological consequences. We extend PDF methods to models with synaptic event times governed by any modulated renewal process. We focus on the integrate-and-fire neuron with instantaneous synaptic kinetics and a random elementary excitatory postsynaptic potential (EPSP), A. Between presynaptic events, the membrane voltage, v, decays exponentially toward rest, while s, the time since the last synaptic input event, evolves with unit velocity. When a synaptic event arrives, v jumps by A, and s is reset to zero. If v crosses the threshold voltage, an action potential occurs, and v is reset to v(reset). The probability per unit time of a synaptic event at time t, given the elapsed time s since the last event, h(s, t), depends on specifics of the renewal process. We study how regularity of the train of synaptic input events affects output spike rate, PDF and coefficient of variation (CV) of the interspike interval, and the autocorrelation function of the output spike train. In the limit of a deterministic, clocklike train of input events, the PDF of the interspike interval converges to a sum of delta functions, with coefficients determined by the PDF for A. The limiting autocorrelation function of the output spike train is a sum of delta functions whose coefficients fall under a damped oscillatory envelope. When the EPSP CV, σ_A/μ_A, is equal to 0.45, a CV for the intersynaptic event interval, σ_T/μ_T = 0.35, is functionally equivalent to a deterministic periodic train of synaptic input events (CV = 0) with respect to spike statistics. We discuss the relevance to neural network simulations. PMID:19431264
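The model described above — exponential decay of v between synaptic events, a jump of random size A at each event, and reset on threshold crossing — is straightforward to simulate directly. A minimal Monte Carlo sketch with gamma-distributed renewal intervals follows; all parameter values are illustrative, and the paper's PDF approach solves for the densities rather than sampling:

```python
import numpy as np

rng = np.random.default_rng(1)

def lif_renewal(n_events=50000, tau_m=20.0, mu_T=5.0, cv_T=0.35,
                mu_A=1.0, cv_A=0.45, v_th=4.0, v_reset=0.0):
    """Integrate-and-fire neuron driven by a renewal train of instantaneous
    EPSPs. Intersynaptic intervals T are gamma-distributed with mean mu_T
    and coefficient of variation cv_T; EPSP amplitudes A are gamma with
    mean mu_A and CV cv_A. Returns the output interspike intervals."""
    k_T = 1.0 / cv_T ** 2                     # gamma shape parameter from CV
    k_A = 1.0 / cv_A ** 2
    T = rng.gamma(k_T, mu_T / k_T, n_events)  # input event intervals
    A = rng.gamma(k_A, mu_A / k_A, n_events)  # EPSP sizes
    v, t, last_spike, isis = v_reset, 0.0, 0.0, []
    for dt, a in zip(T, A):
        t += dt
        v = v * np.exp(-dt / tau_m) + a       # decay toward rest, then jump
        if v >= v_th:                         # threshold crossing: spike + reset
            isis.append(t - last_spike)
            last_spike, v = t, v_reset
    return np.array(isis)

isis = lif_renewal()
cv_out = isis.std() / isis.mean()
```

Varying cv_T between 0 (clocklike input) and larger values reproduces, in Monte Carlo form, the dependence of output interspike-interval statistics on input regularity that the paper analyzes via the PDF equations.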
Rodríguez, Begoña; Blas, Juan; Lorenzo, Rubén M; Fernández, Patricia; Abril, Evaristo J
2011-04-01
Personal exposure meters (PEM) are routinely used for the exposure assessment to radio frequency electric or magnetic fields. However, their readings are subject to errors associated with perturbations of the fields caused by the presence of the human body. This paper presents a novel analysis method for the characterization of this effect. Using ray-tracing techniques, PEM measurements have been emulated, with and without an approximation of this shadowing effect. In particular, the Global System for Mobile Communication mobile phone frequency band was chosen for its ubiquity and, specifically, we considered the case where the subject is walking outdoors in a relatively open area. These simulations have been contrasted with real PEM measurements in a 35-min walk. Results show a good agreement in terms of root mean square error and E-field cumulative distribution function (CDF), with a significant improvement when the shadowing effect is taken into account. In particular, the Kolmogorov-Smirnov (KS) test provides a P-value of 0.05 when considering the shadowing effect, versus a P-value of 10⁻¹⁴ when this effect is ignored. In addition, although the E-field levels in the absence of a human body have been found to follow a Nakagami distribution, a lognormal distribution fits the statistics of the PEM values better than the Nakagami distribution. In conclusion, although the mean could be adjusted by using correction factors, the shadowing effect also produces other changes in the CDF that require particular attention because they might lead to a systematic error.
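The distribution comparison reported above can be reproduced in outline with scipy: fit lognormal and Nakagami models to a sample and compare Kolmogorov-Smirnov p-values. A sketch on synthetic lognormal data standing in for the PEM readings (the generating parameters are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Toy "PEM" E-field samples: lognormal by construction, standing in
# for body-shadowed personal exposure readings.
efield = rng.lognormal(mean=-1.0, sigma=0.8, size=2000)

# Fit both candidate distributions with the location fixed at zero.
logn = stats.lognorm.fit(efield, floc=0)
naka = stats.nakagami.fit(efield, floc=0)

# One-sample KS tests against each fitted model.
ks_logn = stats.kstest(efield, 'lognorm', args=logn).pvalue
ks_naka = stats.kstest(efield, 'nakagami', args=naka).pvalue
```

On such data the lognormal fit yields the higher KS p-value, mirroring the paper's finding that the lognormal describes the body-perturbed PEM statistics better than the Nakagami.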
The non-statistical dynamics of the 18O + 32O2 isotope exchange reaction at two energies
NASA Astrophysics Data System (ADS)
Van Wyngarden, Annalise L.; Mar, Kathleen A.; Quach, Jim; Nguyen, Anh P. Q.; Wiegel, Aaron A.; Lin, Shi-Ying; Lendvay, Gyorgy; Guo, Hua; Lin, Jim J.; Lee, Yuan T.; Boering, Kristie A.
2014-08-01
The dynamics of the 18O(3P) + 32O2 isotope exchange reaction were studied using crossed atomic and molecular beams at collision energies (Ecoll) of 5.7 and 7.3 kcal/mol, and experimental results were compared with quantum statistical (QS) and quasi-classical trajectory (QCT) calculations on the O3(X1A') potential energy surface (PES) of Babikov et al. [D. Babikov, B. K. Kendrick, R. B. Walker, R. T. Pack, P. Fleurat-Lesard, and R. Schinke, J. Chem. Phys. 118, 6298 (2003)]. In both QS and QCT calculations, agreement with experiment was markedly improved by performing calculations with the experimental distribution of collision energies rather than at the fixed average collision energy. At both collision energies, the scattering displayed a forward bias, with a smaller bias at the lower Ecoll. Comparisons with the QS calculations suggest that 34O2 is produced with a non-statistical rovibrational distribution that is hotter than predicted, and the discrepancy is larger at the lower Ecoll. If this underprediction of rovibrational excitation by the QS method is not due to PES errors and/or to non-adiabatic effects not included in the calculations, then this collision energy dependence is opposite to what might be expected based on collision complex lifetime arguments and opposite to that measured for the forward bias. While the QCT calculations captured the experimental product vibrational energy distribution better than the QS method, the QCT results underpredicted rotationally excited products, overpredicted the forward bias and predicted a trend in the strength of the forward bias with collision energy opposite to that measured, indicating that QCT does not completely capture the dynamic behavior measured in the experiment. Thus, these results further underscore the need for improvement in theoretical treatments of dynamics on the O3(X1A') PES, and perhaps of the PES itself, in order to better understand and predict non-statistical effects in this reaction and in the formation
Thompson, Keiran C.; Crittenden, Deborah L.; Kable, Scott H.; Jordan, Meredith J.T.
2006-01-28
Previous experimental and theoretical studies of the radical dissociation channel of T1 acetaldehyde show conflicting behavior in the HCO and CH3 product distributions. To resolve these conflicts, a full-dimensional potential-energy surface for the dissociation of CH3CHO into HCO and CH3 fragments over the barrier on the T1 surface is developed based on RO-CCSD(T)/cc-pVTZ(DZ) ab initio calculations. 20 000 classical trajectories are calculated on this surface at each of five initial excess energies, spanning the excitation energies used in previous experimental studies, and translational, vibrational, and rotational distributions of the radical products are determined. For excess energies near the dissociation threshold, both the HCO and CH3 products are vibrationally cold; there is a small amount of HCO rotational excitation and little CH3 rotational excitation, and the reaction energy is partitioned dominantly (>90% at threshold) into relative translational motion. Close to threshold the HCO and CH3 rotational distributions are symmetrically shaped, resembling a Gaussian function, in agreement with observed experimental HCO rotational distributions. As the excess energy increases the calculated HCO and CH3 rotational distributions are observed to change from a Gaussian shape at threshold to one more resembling a Boltzmann distribution, a behavior also seen by various experimental groups. Thus the distribution of energy in these rotational degrees of freedom is observed to change from nonstatistical to apparently statistical, as excess energy increases. As the energy above threshold increases all the internal and external degrees of freedom are observed to gain population at a similar rate, broadly consistent with equipartitioning of the available energy at the transition state. These observations generally support the practice of separating the reaction dynamics into two reservoirs: an impulsive reservoir, fed
Roques, Lionel; Bonnefon, Olivier
2016-01-01
We propose and develop a general approach based on reaction-diffusion equations for modelling a species dynamics in a realistic two-dimensional (2D) landscape crossed by linear one-dimensional (1D) corridors, such as roads, hedgerows or rivers. Our approach is based on a hybrid "2D/1D model", i.e., a system of 2D and 1D reaction-diffusion equations with homogeneous coefficients, in which each equation describes the population dynamics in a given 2D or 1D element of the landscape. Using the example of the range expansion of the tiger mosquito Aedes albopictus in France and its main highways as 1D corridors, we show that the model can be fitted to realistic observation data. We develop a mechanistic-statistical approach, based on the coupling between a model of population dynamics and a probabilistic model of the observation process. This allows us to bridge the gap between the data (3 levels of infestation, at the scale of a French department) and the output of the model (population densities at each point of the landscape), and to estimate the model parameter values using a maximum-likelihood approach. Using classical model comparison criteria, we obtain a better fit and a better predictive power with the 2D/1D model than with a standard homogeneous reaction-diffusion model. This shows the potential importance of taking into account the effect of the corridors (highways in the present case) on species dynamics. With regard to the particular case of A. albopictus, the conclusion that highways played an important role in species range expansion in mainland France is consistent with recent findings from the literature. PMID:26986201
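A single 1D "corridor" equation of the kind coupled in the 2D/1D model above is a reaction-diffusion equation with logistic growth (Fisher-KPP). A minimal explicit finite-difference sketch with homogeneous coefficients; all numerical values are illustrative, not the fitted A. albopictus parameters:

```python
import numpy as np

def fisher_kpp_1d(n=200, L=100.0, D=1.0, r=0.5, dt=0.05, steps=1000):
    """Explicit finite-difference solve of u_t = D u_xx + r u (1 - u)
    on [0, L] with zero-flux (Neumann) boundaries, seeded at the left
    edge. A minimal 1-D stand-in for one corridor equation."""
    dx = L / (n - 1)
    assert D * dt / dx ** 2 < 0.5            # explicit stability limit
    u = np.zeros(n)
    u[:5] = 1.0                              # initial infestation at one end
    for _ in range(steps):
        lap = np.empty_like(u)
        lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
        lap[0] = 2 * (u[1] - u[0]) / dx ** 2     # ghost-node Neumann BC
        lap[-1] = 2 * (u[-2] - u[-1]) / dx ** 2
        u = u + dt * (D * lap + r * u * (1 - u))
    return u

u = fisher_kpp_1d()
```

The solution develops a travelling invasion front (speed 2*sqrt(r*D) in the classical theory), which is the mechanism by which a fast corridor can outpace range expansion in the surrounding 2D landscape.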
Coupled flow-polymer dynamics via statistical field theory: Modeling and computation
NASA Astrophysics Data System (ADS)
Ceniceros, Hector D.; Fredrickson, Glenn H.; Mohler, George O.
2009-03-01
Field-theoretic models, which replace interactions between polymers with interactions between polymers and one or more conjugate fields, offer a systematic framework for coarse-graining of complex fluid systems. While this approach has been used successfully to investigate a wide range of polymer formulations at equilibrium, field-theoretic models often fail to accurately capture the non-equilibrium behavior of polymers, especially in the early stages of phase separation. Here the "two-fluid" approach serves as a useful alternative, treating the motions of fluid components separately in order to incorporate asymmetries between polymer molecules. In this work we focus on the connection of these two theories, drawing upon the strengths of each approach in order to couple polymer microstructure with the dynamics of the flow in a systematic way. For illustrative purposes we work with an inhomogeneous melt of elastic dumbbell polymers, though our methodology applies more generally to a wide variety of inhomogeneous systems. First we derive the model, incorporating thermodynamic forces into a two-fluid model for the flow through the introduction of conjugate chemical potential and elastic strain fields for the polymer density and stress. The resulting equations are composed of a system of fourth-order PDEs coupled with a non-linear, non-local optimization problem to determine the conjugate fields. The coupled system is severely stiff and computationally complex. Next, we overcome the formidable numerical challenges posed by the model by designing a robust semi-implicit method based on the linear asymptotic behavior of the leading-order terms at small scales, by exploiting the exponential structure of global (integral) operators, and by parallelizing the non-linear optimization problem. The semi-implicit method effectively removes the fourth-order stability constraint associated with explicit methods and we observe only a first order time
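The key numerical idea above — treating the stiff high-order linear term implicitly and the nonlinearity explicitly — can be shown on a toy fourth-order problem u_t = -u_xxxx + N(u) in Fourier space. This is a generic IMEX sketch, far simpler than the authors' coupled system:

```python
import numpy as np

def semi_implicit_step(u, nonlinear, dt, k4):
    """One semi-implicit (IMEX) Fourier step for u_t = -u_xxxx + N(u):
    the stiff fourth-order term is treated implicitly (exactly invertible
    mode-by-mode in Fourier space), the nonlinear term N explicitly.
    This removes the dt ~ dx^4 stability limit of fully explicit schemes."""
    u_hat = np.fft.fft(u)
    n_hat = np.fft.fft(nonlinear(u))
    return np.real(np.fft.ifft((u_hat + dt * n_hat) / (1.0 + dt * k4)))

n, L = 128, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # angular wavenumbers
u = np.cos(x)
for _ in range(100):
    # With N = 0 the exact solution is cos(x) decaying in time; the
    # scheme stays stable even at a time step far above the explicit limit.
    u = semi_implicit_step(u, lambda v: 0.0 * v, dt=0.1, k4=k ** 4)
```

An explicit scheme at this resolution would need dt of order dx^4 ≈ 6e-6 for stability; the IMEX step runs stably at dt = 0.1, which is the practical payoff the paper exploits.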
NASA Astrophysics Data System (ADS)
Fang, N. Z.; Gao, S.
2015-12-01
Fully accounting for the complexity of spatially and temporally varying rainfall remains a challenge in flood frequency analysis. Conventional approaches that simplify the complexity of spatiotemporal interactions generally underestimate their impacts on flood risk. A previously developed stochastic storm generator called Dynamic Moving Storms (DMS) aims to address the highly dependent nature of the precipitation field: spatial variability, temporal variability, and movement of the storm. The authors utilize a multivariate statistical approach based on DMS to estimate the occurrence probability or frequency of extreme storm events. Fifteen years of radar rainfall data are used to generate a large number of synthetic storms as the basis for statistical assessment. Two parametric retrieval algorithms are developed to recognize rain cells and track storm motions, respectively. The resulting parameters are then used to establish probability density functions (PDFs), which are fitted to parametric distribution functions for further Monte Carlo simulations. Consequently, over 1,000,000 synthetic storms are generated based on twelve retrieved parameters for integrated risk assessment and ensemble forecasts. Furthermore, the parameter PDFs are used to calculate joint probabilities based on 2-dimensional Archimedean copula functions to determine the occurrence probabilities of extreme events. The approach is validated on the Upper Trinity River watershed, and the generated results are compared with those from traditional rainfall frequency studies (i.e., Intensity-Duration-Frequency curves and Areal Reduction Factors).
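Joint occurrence probabilities from a 2-dimensional Archimedean copula, as used above, reduce to a closed form for the Gumbel family. A minimal sketch; the dependence parameter theta and the marginal probabilities are illustrative:

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel (Archimedean) copula
    C(u, v) = exp(-[(-ln u)^theta + (-ln v)^theta]^(1/theta)), theta >= 1.
    theta = 1 is independence; larger theta gives stronger upper-tail
    dependence (e.g. a storm that is both intense AND long-lasting)."""
    s = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-s ** (1.0 / theta))

def joint_exceedance(u, v, theta):
    """P(U > u, V > v) via the survival copula: 1 - u - v + C(u, v)."""
    return 1.0 - u - v + gumbel_copula(u, v, theta)

# Probability that both variables exceed their 99th percentiles:
p_indep = joint_exceedance(0.99, 0.99, theta=1.0)   # independence: (0.01)^2
p_dep = joint_exceedance(0.99, 0.99, theta=3.0)     # positively dependent
```

The comparison shows why the copula matters for risk: positive dependence between, say, storm intensity and duration makes the joint extreme far more likely than the independence assumption would suggest.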
Jacobitz, Frank G; Schneider, Kai; Bos, Wouter J T; Farge, Marie
2016-01-01
The acceleration statistics of sheared and rotating homogeneous turbulence are studied using direct numerical simulation results. The statistical properties of Lagrangian and Eulerian accelerations are considered together with the influence of the rotation to shear ratio, as well as the scale dependence of their statistics. The probability density functions (pdfs) of both Lagrangian and Eulerian accelerations show a strong and similar dependence on the rotation to shear ratio. The variance and flatness of both accelerations are analyzed and the extreme values of the Eulerian acceleration are observed to be above those of the Lagrangian acceleration. For strong rotation it is observed that flatness yields values close to three, corresponding to Gaussian-like behavior, and for moderate and vanishing rotation the flatness increases. Furthermore, the Lagrangian and Eulerian accelerations are shown to be strongly correlated for strong rotation due to a reduced nonlinear term in this case. A wavelet-based scale-dependent analysis shows that the flatness of both Eulerian and Lagrangian accelerations increases as scale decreases, which provides evidence for intermittent behavior. For strong rotation the Eulerian acceleration is even more intermittent than the Lagrangian acceleration, while the opposite result is obtained for moderate rotation. Moreover, the dynamics of a passive scalar with gradient production in the direction of the mean velocity gradient is analyzed and the influence of the rotation to shear ratio is studied. Concerning the concentration of a passive scalar spread by the flow, the pdf of its Eulerian time rate of change presents higher extreme values than those of its Lagrangian time rate of change. This suggests that the Eulerian time rate of change of scalar concentration is mainly due to advection, while its Lagrangian counterpart is only due to gradient production and viscous dissipation. PMID:26871161
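The flatness statistic central to the analysis above is the normalized fourth moment of a signal. A short sketch showing its Gaussian baseline of 3 on synthetic signals (not the DNS data):

```python
import numpy as np

def flatness(a):
    """Flatness (normalized fourth moment) F = <a^4> / <a^2>^2 of a
    zero-mean signal. F = 3 for Gaussian statistics; larger values
    indicate heavy tails, i.e. intermittency."""
    a = np.asarray(a, dtype=float)
    a = a - a.mean()
    return np.mean(a ** 4) / np.mean(a ** 2) ** 2

rng = np.random.default_rng(3)
f_gauss = flatness(rng.standard_normal(10 ** 6))    # close to 3
f_laplace = flatness(rng.laplace(size=10 ** 6))     # heavier tails, near 6
```

Applied scale-by-scale to wavelet coefficients of the acceleration, this same statistic yields the scale-dependent flatness whose growth toward small scales is the evidence for intermittency cited above.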
NASA Astrophysics Data System (ADS)
Sugiyama, K.; Nakajima, K.; Odaka, M.; Kuramoto, K.; Hayashi, Y.-Y.
2014-02-01
A series of long-term numerical simulations of moist convection in Jupiter's atmosphere is performed in order to investigate the idealized characteristics of the vertical structure of multi-composition clouds and the convective motions associated with them, varying the deep abundances of condensable gases and the autoconversion time scale, the latter being one of the most questionable parameters in cloud microphysical parameterization. The simulations are conducted using a two-dimensional cloud-resolving model that explicitly represents the convective motion and microphysics of the three cloud components, H2O, NH3, and NH4SH, imposing a body cooling that substitutes for the net radiative cooling. The results are qualitatively similar to those reported in Sugiyama et al. (Sugiyama, K. et al. [2011]. Intermittent cumulonimbus activity breaking the three-layer cloud structure of Jupiter. Geophys. Res. Lett. 38, L13201. doi:10.1029/2011GL047878): stable layers associated with condensation and chemical reaction act as effective dynamical and compositional boundaries, intense cumulonimbus clouds develop with distinct temporal intermittency, and the active transport associated with these clouds results in the establishment of mean vertical profiles of condensates and condensable gases that are distinctly different from the hitherto accepted three-layered structure (e.g., Atreya, S.K., Romani, P.N. [1985]. Photochemistry and clouds of Jupiter, Saturn and Uranus. In: Recent Advances in Planetary Meteorology. Cambridge Univ. Press, London, pp. 17-68). Our results also demonstrate that the period of intermittent cloud activity is roughly proportional to the deep abundance of H2O gas. The autoconversion time scale does not strongly affect the results, except for the vertical profiles of the condensates. Changing the autoconversion time scale by a factor of 100 changes the intermittency period by a factor of less than two, although it causes a dramatic increase in the amount of
NASA Astrophysics Data System (ADS)
Iizumi, Toshichika; Nishimori, Motoki; Dairaku, Koji; Adachi, Sachiho A.; Yokozawa, Masayuki
2011-01-01
In this study, we evaluate the accuracy of four regional climate models (NHRCM, NRAMS, TRAMS, and TWRF) and one bias correction-type statistical model (CDFDM) for daily precipitation indices under the present-day climate (1985-2004) over Japan on a 20 km grid interval. The evaluated indices are (1) mean precipitation, (2) number of days with precipitation ≥1 mm/d (corresponding to the number of wet days), (3) mean amount per wet day, (4) 90th percentile of daily precipitation, and (5) number of days with precipitation ≥ the 90th percentile of daily precipitation. The boundary conditions of the dynamical models and the predictors of the statistical model are taken from a single reanalysis dataset, JRA25. Both types of models successfully improved the accuracy of the indices relative to the reanalysis data in terms of bias, seasonal cycle, geographical pattern, cumulative distribution function of wet-day amount, and interannual variation pattern. In most aspects, NHRCM is the best model for all indices. Through the intercomparison between the dynamical and statistical models, respective strengths and weaknesses emerged. Briefly, (1) many dynamical models simulate too many wet days with a small amount of precipitation in humid climate zones, such as summer in Japan, relative to the statistical model, unless a cumulus convection scheme improved for such conditions is incorporated; (2) a few dynamical models can derive better high-order percentiles of daily precipitation (e.g., the 90th percentile) than the statistical model; (3) both the dynamical and statistical models are still insufficient in representing the interannual variation pattern of the number of days with precipitation ≥ the 90th percentile; (4) the statistical model is comparable to the dynamical models in the long-term mean geographical pattern of the indices, even on a 20 km grid interval, if a dense observation network is available; (5) the statistical model is less accurate
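Bias correction-type statistical models of the CDFDM kind are built on CDF matching (quantile mapping). A minimal empirical sketch on synthetic precipitation-like data; the distributions and the imposed dry bias are invented for illustration:

```python
import numpy as np

def quantile_map(model, obs, target):
    """Empirical quantile mapping: evaluate the model's empirical CDF at
    each target value, then read off the observed quantile at that
    probability. A minimal sketch of CDF-matching bias correction;
    ties and extrapolation beyond the sample range are handled naively."""
    model_sorted = np.sort(model)
    probs = (np.arange(len(model_sorted)) + 0.5) / len(model_sorted)
    p = np.interp(target, model_sorted, probs)   # model CDF at target values
    return np.quantile(obs, p)                   # observed inverse CDF

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 5.0, 5000)           # "observed" wet-day precipitation
model = 0.5 * rng.gamma(2.0, 5.0, 5000)   # model output with a dry bias
corrected = quantile_map(model, obs, model)
```

By construction the corrected series inherits the observed distribution (mean, wet-day amounts, high percentiles), which is why CDF-matching models compete well on distribution-based indices while remaining tied to the driving data for interannual variability.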
Bouchard, Kristofer E.; Brainard, Michael S.
2016-01-01
Predicting future events is a critical computation for both perception and behavior. Despite the essential nature of this computation, there are few studies demonstrating neural activity that predicts specific events in learned, probabilistic sequences. Here, we test the hypotheses that the dynamics of internally generated neural activity are predictive of future events and are structured by the learned temporal–sequential statistics of those events. We recorded neural activity in Bengalese finch sensory-motor area HVC in response to playback of sequences from individuals’ songs, and examined the neural activity that continued after stimulus offset. We found that the strength of response to a syllable in the sequence depended on the delay at which that syllable was played, with a maximal response when the delay matched the intersyllable gap normally present for that specific syllable during song production. Furthermore, poststimulus neural activity induced by sequence playback resembled the neural response to the next syllable in the sequence when that syllable was predictable, but not when the next syllable was uncertain. Our results demonstrate that the dynamics of internally generated HVC neural activity are predictive of the learned temporal–sequential structure of produced song and that the strength of this prediction is modulated by uncertainty. PMID:27506786
NASA Astrophysics Data System (ADS)
Reyers, Mark; Pinto, Joaquim G.; Moemken, Julia
2015-04-01
A statistical-dynamical downscaling (SDD) approach for the regionalisation of wind energy output (Eout) over Europe, with special focus on Germany, is proposed. SDD uses an extended circulation weather type (CWT) analysis on global daily MSLP fields, with the central point located over Germany. Seventy-seven weather classes are identified based on the associated circulation weather type and the intensity of the geostrophic flow. Representatives of these classes are dynamically downscaled with the regional climate model COSMO-CLM. Using the weather-class frequencies of different datasets, the simulated representatives are recombined into probability density functions (PDFs) of near-surface wind speed and finally into Eout of a sample wind turbine for present and future climate. This is performed for reanalysis, decadal hindcasts and long-term future projections. For evaluation purposes, the results of SDD are compared to wind observations and to the Eout simulated by purely dynamical downscaling (DD) methods. For the present climate, SDD is able to simulate realistic PDFs of 10 m wind speed for most stations in Germany. The resulting spatial Eout patterns are similar to the DD-simulated Eout. For decadal hindcasts, the results of SDD are similar to DD-simulated Eout over Germany, Poland, the Czech Republic and the Benelux countries, for which high correlations between the annual Eout time series of SDD and DD are detected for selected hindcasts. Lower correlations are found for other European countries. It is demonstrated that SDD can be used to downscale the full ensemble of the MPI-ESM decadal prediction system. Long-term climate change projections under the SRES scenarios of ECHAM5/MPI-OM obtained by SDD agree well with the results of other studies using DD methods, with increasing Eout over Northern Europe and a negative trend over Southern Europe. Despite some biases, it is concluded that SDD is an adequate tool to assess regional wind energy changes in large model ensembles.
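The recombination step described above, turning per-class downscaled results into a regional wind-speed PDF and then into Eout, amounts to a frequency-weighted mixture followed by integration against a turbine power curve. A schematic sketch under assumed inputs (per-class PDFs tabulated on a common wind-speed grid, a generic power curve), not the COSMO-CLM workflow itself:

```python
import numpy as np

def recombine_pdf(class_pdfs, class_freqs):
    """Frequency-weighted mixture of per-weather-class wind-speed PDFs,
    each tabulated on the same wind-speed grid."""
    w = np.asarray(class_freqs, dtype=float)
    w = w / w.sum()                        # normalise class frequencies
    return np.tensordot(w, np.asarray(class_pdfs, dtype=float), axes=1)

def expected_power(v_grid, pdf, power_curve):
    """Expected turbine output: trapezoidal integration of the power
    curve against the recombined wind-speed PDF."""
    f = power_curve(np.asarray(v_grid, dtype=float)) * np.asarray(pdf, dtype=float)
    dv = np.diff(v_grid)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * dv))
```

Swapping in the class frequencies of a different dataset (reanalysis, hindcast, projection) while reusing the same downscaled representatives is what makes the SDD approach cheap for large ensembles.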
NASA Astrophysics Data System (ADS)
Vaittinada Ayar, Pradeebane; Vrac, Mathieu; Bastin, Sophie; Carreau, Julie; Déqué, Michel; Gallardo, Clemente
2016-02-01
Given the coarse spatial resolution of General Circulation Models, finer-scale projections of variables affected by local-scale processes, such as precipitation, are often needed to drive impact models, for example in hydrology or ecology among other fields. This need for high-resolution data leads to the application of projection techniques called downscaling. Downscaling can be performed according to two approaches: dynamical and statistical modelling. The latter comprises various, conceptually different statistical families. Although several studies have carried out intercomparisons of existing downscaling models, none of them has included all of these families and approaches in a manner that considers all the models equally. To this end, the present study conducts an intercomparison exercise within the EURO- and MED-CORDEX initiative hindcast framework. Six Statistical Downscaling Models (SDMs) and five Regional Climate Models (RCMs) are compared in terms of precipitation outputs. The downscaled simulations are driven by the ERA-Interim reanalyses over the 1989-2008 period over a common area at 0.44° resolution. The 11 models are evaluated according to four aspects of precipitation: occurrence, intensity, and spatial and temporal properties. For each aspect, one or several indicators are computed to discriminate the models. The results indicate that the marginal properties of rain occurrence and intensity are better modelled by stochastic and resampling-based SDMs, while spatial and temporal variability are better modelled by RCMs and resampling-based SDMs. These general conclusions have to be considered with caution because they rely on the chosen indicators and could change when considering other specific criteria. The indicators suit specific purposes, and therefore the model evaluation results depend on the end-users' point of view and how they intend to use the model outputs. Nevertheless, building on previous intercomparison exercises, this study provides a
NASA Astrophysics Data System (ADS)
Bocaniov, Serghei A.; Scavia, Donald
2016-06-01
Hypoxia or low bottom water dissolved oxygen (DO) is a world-wide problem of management concern requiring an understanding and ability to monitor and predict its spatial and temporal dynamics. However, this is often made difficult in large lakes and coastal oceans because of limited spatial and temporal coverage of field observations. We used a calibrated and validated three-dimensional ecological model of Lake Erie to extend a statistical relationship between hypoxic extent and bottom water DO concentrations to explore implications of the broader temporal and spatial development and dissipation of hypoxia. We provide the first numerical demonstration that hypoxia initiates in the nearshore, not the deep portion of the basin, and that the threshold used to define hypoxia matters in both spatial and temporal dynamics and in its sensitivity to climate. We show that existing monitoring programs likely underestimate both maximum hypoxic extent and the importance of low oxygen in the nearshore, discuss implications for ecosystem and drinking water protection, and recommend how these results could be used to efficiently and economically extend monitoring programs.
NASA Astrophysics Data System (ADS)
Zuo, Pingbing; Feng, Xueshang
2016-07-01
Solar wind dynamic pressure pulse (DPP) structures, across which the dynamic pressure changes abruptly over timescales from a few seconds to several minutes, are often observed in the near-Earth space environment. Recently we have developed a novel procedure that rapidly identifies DPPs in the plasma data stream, simultaneously defines the transition region, and automatically selects the upstream and downstream regions for analysis. The high-time-resolution plasma data from the 3DP instrument on board the WIND spacecraft are inspected with this automatic DPP-searching code, and a complete list of solar wind DPPs from the historical WIND observations is built up. We perform a statistical survey of the properties of DPPs near 1 AU based on this event list. It is found that the overwhelming majority of DPPs are associated with solar wind disturbances, including CME-related flows, corotating interaction regions, and complex ejecta. The annual variations of the averaged occurrence rate of DPPs are roughly in phase with solar activity. Although the variability of geosynchronous magnetic fields (GMFs) due to the impact of positive DPPs has been well established, there have been no systematic investigations of the response of GMFs to negative DPPs. Here we also study the decompression/compression effects of very strong negative/positive DPPs on GMFs under northward IMF. In response to the decompression of strong negative DPPs, GMFs on the dayside and near dawn and dusk on the nightside are generally depressed. Near the midnight region, however, the responses of the GMF are very diverse, being either positive or negative. For some events in which GOES is located in the midnight sector, the GMF is found to increase anomalously as a result of the magnetospheric decompression caused by negative DPPs. It is known that under certain conditions magnetic depression of nightside GMFs can be caused by the impact of positive DPPs. Statistically, both the decompression effect of
NASA Astrophysics Data System (ADS)
Chae, Kyu-Hyun; Gong, In-Taek
2015-08-01
Modified Newtonian dynamics (MOND), proposed by Milgrom, provides a paradigm alternative to dark matter (DM) that has been successful in fitting and predicting the rich phenomenology of rotating disc galaxies. There have also been attempts to test MOND in dispersion-supported spheroidal early-type galaxies, but it remains unclear whether MOND can fit the various empirical properties of early-type galaxies over the whole ranges of mass and radius. As a way of rigorously testing MOND in elliptical galaxies, we calculate the MOND-predicted velocity dispersion profiles (VDPs) in the inner regions of ~2000 nearly round Sloan Digital Sky Survey elliptical galaxies under a variety of assumptions on velocity dispersion (VD) anisotropy, and then compare the predicted distribution of VDP slopes with the observed distribution in 11 ATLAS3D galaxies selected with essentially the same criteria. We find that the MOND model, parametrized with an interpolating function that works well for rotating galaxies, can also reproduce the observed distribution of VDP slopes based only on the observed stellar mass distribution, without DM or any other galaxy-to-galaxy varying factor. This is remarkable given that Newtonian dynamics with DM requires a specific amount and/or profile of DM for each galaxy in order to reproduce the observed distribution of VDP slopes. When we analyse non-round galaxy samples using the MOND-based spherical Jeans equation, we do not find any systematic difference in the mean properties of the VDP slope distribution compared with the nearly round sample. However, in line with previous studies of MOND through individual analyses of elliptical galaxies, varying the MOND interpolating function or the VD anisotropy can lead to a systematic change in the VDP slope distribution, indicating that a statistical analysis of VDPs can be used to constrain specific MOND models given an accurate measurement of VDP slopes or a prior constraint on VD anisotropy.
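For readers unfamiliar with MOND predictions: for the widely used "simple" interpolating function μ(x) = x/(1+x), the mapping from Newtonian to true acceleration has a closed form. Whether this is the exact function adopted in the study is not stated here, so treat the sketch as an illustrative assumption:

```python
import math

A0 = 1.2e-10  # canonical MOND acceleration scale, m/s^2

def mond_accel(g_newton, a0=A0):
    """True acceleration g solving mu(g/a0) * g = g_N for the 'simple'
    interpolating function mu(x) = x/(1+x); the quadratic
    g^2 - g_N*g - g_N*a0 = 0 gives the closed form below."""
    return 0.5 * (g_newton + math.sqrt(g_newton**2 + 4.0 * g_newton * a0))
```

The two limits behave as expected: g → g_N when g_N ≫ a0 (Newtonian regime) and g → √(g_N a0) when g_N ≪ a0 (deep-MOND regime).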
NASA Astrophysics Data System (ADS)
Krantz, Richard; Douthett, Jack; Cartwright, Julyan; Gonzalez, Diego; Piro, Oreste
2010-10-01
Some time ago, two apparently dissimilar presentations were given at the 2007 Helmholtz Workshop in Berlin. One, by J. Douthett and R. Krantz, focused on the commonality between the mathematical descriptions of musical scales and the long-range, one-dimensional, anti-ferromagnetic Ising model of statistical physics. The other, by J. Cartwright, D. Gonzalez, and O. Piro, articulated a nonlinear dynamical model of pitch perception. Both approaches lead to a Farey series devil's staircase structure. In the first case, the ground-state magnetic phase diagram of the Ising model is a Farey series devil's staircase. In the second case, the ear is modeled as a nonlinear system, leading to a three-frequency resonant pitch perception model of the auditory system that exhibits a devil's staircase phase-locked structure. In this poster we present a summary of each of these works side by side to illuminate the link between these two seemingly disparate systems. Adapted from JMM Vol. 4, No. 1, 57, Mar. 2010.
Dodds, Peter Sheridan; Mitchell, Lewis; Reagan, Andrew J.; Danforth, Christopher M.
2016-01-01
Instabilities and long term shifts in seasons, whether induced by natural drivers or human activities, pose great disruptive threats to ecological, agricultural, and social systems. Here, we propose, measure, and explore two fundamental markers of location-sensitive seasonal variations: the Summer and Winter Teletherms—the on-average annual dates of the hottest and coldest days of the year. We analyse daily temperature extremes recorded at 1218 stations across the contiguous United States from 1853–2012, and observe large regional variation with the Summer Teletherm falling up to 90 days after the Summer Solstice, and 50 days for the Winter Teletherm after the Winter Solstice. We show that Teletherm temporal dynamics are substantive with clear and in some cases dramatic shifts reflective of system bifurcations. We also compare recorded daily temperature extremes with output from two regional climate models finding considerable though relatively unbiased error. Our work demonstrates that Teletherms are an intuitive, powerful, and statistically sound measure of local climate change, and that they pose detailed, stringent challenges for future theoretical and computational models. PMID:27167740
NASA Astrophysics Data System (ADS)
Sherman, James P.; She, Chiao-Yao
2006-06-01
One thousand three hundred and eleven 15-min profiles of nocturnal mesopause-region (80-105 km) temperature and horizontal wind, observed by the Colorado State University sodium lidar over Fort Collins, CO (41°N, 105°W), between May 2002 and April 2003, were analyzed. From these profiles, taken over 390 h and each possessing a vertical resolution of 2 km, a statistical analysis of seasonal variations in wind shears and in convective and dynamical instabilities was performed. Large wind shears were most often observed near 100 km and during winter months. Thirty-five percent of the winter profiles contained wind shears exceeding 40 m/s per km at some altitude. In spite of large winds and shears, the mesopause region (at a resolution of 2 km and 15 min) is a very stable region. At a given altitude, the probability of convective instability is less than 1.4% for all seasons, and the probability of dynamic instability (in the sense of the Richardson number) ranges from 2.7% to 6.0%. Wind shear measurements are compared with four decades of chemical release measurements, compiled in a study by Larson [2002. Winds and shears in the mesosphere and lower thermosphere: results from four decades of chemical release wind measurements. Journal of Geophysical Research 107(A8), 1215]. Instability results are compared with those deduced from an annual lidar study conducted with higher spatial and temporal resolution at the Starfire Optical Range (SOR) in Albuquerque, NM, by Zhao et al. [2003. Measurements of atmospheric stability in the mesopause region at Starfire Optical Range, NM. Journal of Atmospheric and Solar-Terrestrial Physics 65, 219-232], and from a study by Li et al. [2005b. Characteristics of instabilities in the mesopause region over Maui, Hawaii. Journal of Geophysical Research 110, D09S12] with 19 days of data acquired from the Maui Mesosphere and Lower Thermosphere (Maui MALT) campaign. The Fort Collins lidar profiles were also analyzed using 1-h temporal resolution to compare
Xiang, T X; Anderson, B D
1994-01-01
A mean-field statistical mechanical theory has been developed to describe molecular distributions in interphases. The excluded-volume interaction has been modeled in terms of the reversible work required to create a cavity of the solute's size against a pressure tensor exerted by the surrounding interphase molecules. The free energy change associated with this compression process includes the configurational entropy as well as the change in conformational energy of the surrounding chain molecules. The lateral pressure profile in a model lipid bilayer (30.5 Å²/chain molecule) has been calculated as a function of depth in the bilayer interior by molecular dynamics simulation. The lateral pressure has a plateau value of 309 ± 48 bar in the highly ordered region and decreases abruptly in the center of the bilayer. Model calculations have shown that for solute molecules with ellipsoidal symmetry, the orientational order increases with the ratio of the long to short molecular axes at a given solute volume, and increases with solute volume at a given axial ratio, in accordance with recent experimental data. Increased lateral pressure (p⊥) results in higher local order and exclusion of solute from the interphase, in parallel with the effect of surface density on partitioning and local order. The logarithm of the interphase/water partition coefficient for spherical solutes decreases linearly with solute volume. This is also an excellent approximation for elongated solutes because of the relatively weak dependence of solute partitioning on molecular shape. The slope is equal to (2p⊥ − p∥)/3kBT, where p∥ is the normal pressure component; this differs from the slope predicted by the mean-field lattice theory. Finally, the lattice theory has been extended herein to incorporate an additional constraint on chain packing in the interphase and to account for the effect of solute size on partitioning. PMID:8011890
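As a rough numerical illustration of the quoted slope formula, one can evaluate (2p⊥ − p∥)/3kBT using the reported plateau lateral pressure of 309 bar. The normal pressure component and temperature below are assumed values for illustration only, not taken from the paper:

```python
KB = 1.380649e-23          # Boltzmann constant, J/K
BAR_A3_TO_J = 1e5 * 1e-30  # 1 bar * 1 Angstrom^3 expressed in joules

def lnK_slope(p_perp_bar, p_par_bar, temp_k=300.0):
    """Magnitude of the slope of ln(K) versus solute volume,
    (2*p_perp - p_par) / (3*kB*T), returned per Angstrom^3."""
    return (2.0 * p_perp_bar - p_par_bar) * BAR_A3_TO_J / (3.0 * KB * temp_k)
```

With p⊥ = 309 bar, an assumed p∥ of ~1 bar, and T = 300 K, ln K drops by roughly 0.005 per Å³ of solute volume, i.e. partitioning falls by about a factor of e for every ~200 Å³ of added solute volume.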
NASA Astrophysics Data System (ADS)
Larsen, L.; Watts, D.; Khurana, A.; Anderson, J. L.; Xu, C.; Merritts, D. J.
2015-12-01
The classic signal of self-organization in nature is pattern formation. However, the interactions and feedbacks that organize depositional landscapes do not always result in regular or fractal patterns. How might we detect their existence and effects in these "irregular" landscapes? Emergent landscapes such as newly forming deltaic marshes or some restoration sites provide opportunities to study the autogenic processes that organize landscapes and their physical signatures. Here we describe a quest to understand autogenic vs. allogenic controls on landscape evolution in Big Spring Run, PA, a landscape undergoing restoration from bare-soil conditions to a target wet meadow landscape. The contemporary motivation for asking questions about autogenic vs. allogenic controls is to evaluate how important initial conditions or environmental controls may be for the attainment of management objectives. However, these questions can also inform interpretation of the sedimentary record by enabling researchers to separate signals that may have arisen through self-organization processes from those resulting from environmental perturbations. Over three years at Big Spring Run, we mapped the dynamic evolution of floodplain vegetation communities and distributions of abiotic variables and topography. We used principal component analysis and transition probability analysis to detect associative interactions between vegetation and geomorphic variables and convergent cross-mapping on lidar data to detect causal interactions between biomass and topography. Exploratory statistics revealed that plant communities with distinct morphologies exerted control on landscape evolution through stress divergence (i.e., channel initiation) and promoting the accumulation of fine sediment in channels. Together, these communities participated in a negative feedback that maintains low energy and multiple channels. Because of the spatially explicit nature of this feedback, causal interactions could not
p21(WAF1/Cip1/Sdi1) knockout mice respond to doxorubicin with reduced cardiotoxicity
Terrand, Jerome; Xu, Beibei; Morrissy, Steve; Dinh, Thai Nho; Williams, Stuart; Chen, Qin M.
2011-11-15
Doxorubicin (Dox) is an antineoplastic agent that can cause cardiomyopathy in humans and experimental animals. As an inducer of reactive oxygen species and a DNA-damaging agent, Dox causes elevated expression of the p21(WAF1/Cip1/Sdi1) (p21) gene. Elevated levels of p21 mRNA and p21 protein have been detected in the myocardium of mice following Dox treatment. With chronic treatment of Dox, wild-type (WT) animals develop cardiomyopathy evidenced by elongated nuclei, mitochondrial swelling, myofilamental disarray, reduced cardiac output, reduced ejection fraction, reduced left ventricular contractility, and elevated expression of the ANF gene. In contrast, p21 knockout (p21KO) mice did not show significant changes in the same parameters in response to Dox treatment. In an effort to understand the mechanism of the resistance against Dox-induced cardiomyopathy, we measured levels of antioxidant enzymes and found that p21KO mice did not contain elevated basal or inducible levels of glutathione peroxidase and catalase. Measurements of six circulating cytokines indicated elevation of IL-6, IL-12, IFNγ and TNFα in Dox-treated WT mice but not p21KO mice. Dox-induced elevation of IL-6 mRNA was detected in the myocardium of WT mice but not p21KO mice. While the mechanism of the resistance against Dox-induced cardiomyopathy remains unclear, lack of inflammatory response may contribute to the observed cardiac protection in p21KO mice.
Highlights:
- Doxorubicin induces p21 elevation in the myocardium.
- Doxorubicin causes dilated cardiomyopathy in wild-type mice.
- p21 knockout mice are resistant against doxorubicin-induced cardiomyopathy.
- Lack of inflammatory response correlates with the resistance in p21 knockout mice.
Developing a Web-based system by integrating VGI and SDI for real estate management and marketing
NASA Astrophysics Data System (ADS)
Salajegheh, J.; Hakimpour, F.; Esmaeily, A.
2014-10-01
The importance of property in various respects, especially its impact on different sectors of the economy and on the country's macroeconomy, is clear. Because of the real, multi-dimensional and heterogeneous nature of housing as a commodity, the lack of an integrated system containing comprehensive property information, the lack of awareness among some actors in this field of such comprehensive information, and the lack of clear and comprehensive rules and regulations for trading and pricing, several problems arise for the people involved in this field. This research aims at the implementation of a crowd-sourced, Web-based real estate support system. Creating a Spatial Data Infrastructure (SDI) within this system for collecting, updating and integrating all official property data is also intended. The system uses a Web 2.0 broker and technologies such as Web services and service composition. This work aims to provide comprehensive and diverse information about property from different sources. For this purpose, a five-level real estate support system architecture is used. The PostgreSQL DBMS is used to implement the system. Geoserver is used as the map server and as a reference implementation of OGC (Open Geospatial Consortium) standards, and an Apache server is used to serve the web pages and user interfaces. Integrating the introduced methods and technologies provides a proper environment for various users to use the system and share their information. This goal can only be achieved through cooperation among all organizations involved in real estate, each implementing its required infrastructure as interoperable Web services.
NASA Astrophysics Data System (ADS)
Sun, F.; Hall, A. D.; Walton, D.; Capps, S. B.; Qu, X.; Huang, H. J.; Berg, N.; Jousse, A.; Schwartz, M.; Nakamura, M.; Cerezo-Mota, R.
2012-12-01
Using a combination of dynamical and statistical downscaling techniques, we projected mid-21st-century warming in the Los Angeles region at 2-km resolution. To account for uncertainty associated with the trajectory of future greenhouse gas emissions, we examined projections for both "business-as-usual" (RCP8.5) and "mitigation" (RCP2.6) emissions scenarios from the Fifth Coupled Model Intercomparison Project (CMIP5). To account for the considerable uncertainty associated with the choice of global climate model, we downscaled results for all available global climate models in CMIP5. For the business-as-usual scenario, we find that by the mid-21st century the most likely warming is roughly 2.6°C averaged over the region's land areas, with 95% confidence that the warming lies between 0.9 and 4.2°C. The high resolution of the projections reveals a pronounced spatial pattern in the warming: high elevations and inland areas separated from the coast by at least one mountain complex warm 20 to 50% more than areas near the coast or within the Los Angeles basin. This warming pattern is especially apparent in summertime. The summertime warming contrast between the inland and coastal zones has a large effect on the most likely expected number of extremely hot days per year. Coastal locations and areas within the Los Angeles basin see roughly two to three times the number of extremely hot days, while high elevations and inland areas typically experience approximately three to five times the number of extremely hot days. Under the mitigation emissions scenario, the most likely warming and increase in heat extremes are somewhat smaller. However, the majority of the warming seen in the business-as-usual scenario still occurs at all locations in the most likely case under the mitigation scenario, and heat extremes still increase significantly. This warming study is the first part of a series of studies in our project. More climate change impacts on the Santa Ana wind, rainfall
NASA Technical Reports Server (NTRS)
Ramirez, Daniel Perez; Lyamani, H.; Olmo, F. J.; Whiteman, D. N.; Alados-Arboledas, L.
2012-01-01
This work presents the first analysis of long-term correlative day-to-night columnar aerosol optical properties. The aim is to better understand columnar aerosol dynamics from ground-based observations, which have been poorly studied until now. To this end we have used a combination of sun- and star-photometry measurements acquired in the city of Granada (37.16°N, 3.60°W, 680 m a.s.l.; south-east Spain) from 2007 to 2010. For the whole study period, the mean aerosol optical depth (AOD) around 440 nm (± standard deviation) is 0.18 ± 0.10 and 0.19 ± 0.11 for daytime and nighttime, respectively, while the mean Ångström exponent (α) is 1.0 ± 0.4 and 0.9 ± 0.4 for daytime and nighttime. ANOVA statistical tests reveal that there are no significant differences between the AOD and α obtained at daytime and those at nighttime. Additionally, the mean daytime values of AOD and α obtained during this study period are coherent with the values obtained at the surrounding AERONET stations. On the other hand, AOD around 440 nm presents evident seasonal patterns characterised by large values in summer (mean value of 0.20 ± 0.10 both at daytime and nighttime) and low values in winter (mean value of 0.15 ± 0.09 at daytime and 0.17 ± 0.10 at nighttime). The Ångström exponents also present seasonal patterns, but with low values in summer (mean values of 0.8 ± 0.4 and 0.9 ± 0.4 at day- and night-time) and relatively large values in winter (mean values of 1.2 ± 0.4 and 1.0 ± 0.3 at daytime and nighttime). These seasonal patterns are explained by the differences in meteorological conditions and in the strength of the aerosol sources. To gain further insight into the changes in aerosol particles between day and night, the spectral differences of the Ångström exponent as a function of the Ångström exponent are also studied. These analyses reveal increases of the fine-mode radius and of the fine-mode contribution to AOD during nighttime, being more
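The Ångström exponent used throughout is defined from AOD measured at two wavelengths via α = −ln(τ₁/τ₂)/ln(λ₁/λ₂). A minimal sketch (the wavelength pair and AOD values in the usage note are hypothetical, not the paper's):

```python
import math

def angstrom_exponent(aod1, lam1_nm, aod2, lam2_nm):
    """Angstrom exponent from AOD at two wavelengths (nm):
    alpha = -ln(aod1/aod2) / ln(lam1/lam2). Larger alpha indicates
    smaller (fine-mode) particles."""
    return -math.log(aod1 / aod2) / math.log(lam1_nm / lam2_nm)
```

For example, an aerosol with AOD falling as λ⁻¹ (e.g. 0.20 at 440 nm and 0.10 at 880 nm) gives α = 1, comparable to the annual means reported above.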
NASA Astrophysics Data System (ADS)
Eslamizadeh, H.
2015-09-01
The fission probability, pre-scission neutron, proton and alpha multiplicities, anisotropy of the fission fragment angular distribution, and fission time have been calculated for the compound nuclei 200Pb and 197Tl based on a modified statistical model and a four-dimensional dynamical model. In the dynamical calculations, dissipation was generated through the chaos-weighted wall and window friction formula. The projection of the total spin of the compound nucleus onto the symmetry axis, K, was considered as the fourth dimension in the Langevin dynamical calculations. In our dynamical calculations, we have used a constant dissipation coefficient of K, γ_K = 0.077 (MeV zs)^(-1/2), and a non-constant dissipation coefficient to reproduce the above-mentioned experimental data. Comparison of the theoretical results for the fission probability and pre-scission particle multiplicities with the experimental data showed that the difference between the results of the two dynamical models is small, whereas for the anisotropy of the fission fragment angular distribution it is slightly larger. Furthermore, comparison of the results of the modified statistical model with the above-mentioned experimental data showed that, by choosing appropriate values of the temperature coefficient of the effective potential, λ, and the scaling factor of the fission-barrier height, r_s, the experimental data were satisfactorily reproduced.
NASA Astrophysics Data System (ADS)
Shreeman, Paul K.
The statistical dynamical diffraction theory, initially developed by the late Kato, remained in obscurity for many years owing to its intense and difficult mathematical treatment, which proved quite challenging to implement and apply. With the assistance of many authors in the past (including Bushuev, Pavlov, Pungeov, among others), it became possible to implement this unique x-ray diffraction theory, which combines the kinematical (ideally imperfect) and the dynamical (characteristically perfect) diffraction into a single system of equations controlled by two factors determined by the long-range order and the correlation function within the structure. The first stage was completed with the publication (Shreeman and Matyi, J. Appl. Cryst. 43, 550 (2010)) demonstrating the functionality of this theory with new modifications, hence called the modified statistical dynamical diffraction theory (mSDDT). The foundation of the theory is also incorporated into this dissertation, and the next stage of testing the model against several ion-implanted SiGe materials has been published (Shreeman and Matyi, Physica Status Solidi (a) 208(11), 2533-2538, 2011). The dissertation, with all the previous results summarized, dives into a comprehensive HRXRD analysis complete with several different types of reflections (symmetrical, asymmetrical and skewed geometry). The dynamical results (with almost no defects) are compared with well-known commercial software. The defective materials, for which commercially available modeling software falls short, are then characterized and discussed in depth. The results exemplify the power of the novel approach in the modified statistical dynamical diffraction theory: the ability to detect and measure defective structures qualitatively and quantitatively. The analysis is compared alongside TEM data analysis for verification and confirmation. The application of this theory will accelerate the ability to quickly characterize the relaxed
Rundle, John B.; Klein, William
2015-09-29
We have carried out research to determine the dynamics of failure in complex geomaterials, specifically focusing on the role of defects, damage and asperities in the catastrophic failure processes (now popularly termed “Black Swan events”). We have examined fracture branching and flow processes using models for invasion percolation, focusing particularly on the dynamics of bursts in the branching process. We have achieved a fundamental understanding of the dynamics of nucleation in complex geomaterials, specifically in the presence of inhomogeneous structures.
Technology Transfer Automated Retrieval System (TEKTRAN)
It is known that irrigation application method can impact crop water use and water use efficiency, but the mechanisms involved are incompletely understood, particularly in terms of the water and energy balances during the growing season from pre-irrigation through planting, early growth and yield de...
NASA Astrophysics Data System (ADS)
Quiroz-Martinez, B.; Schmitt, F. G.; Dauvin, J.-C.
2012-01-01
We consider here the dynamics of two polychaete populations based on a 20 yr temporal benthic survey of two muddy fine sand communities in the Bay of Morlaix, Western English Channel. These populations display high temporal variability, which is analyzed here using scaling approaches. We find that population densities have heavy tailed probability density functions. We analyze the dynamics of relative species abundance in two different communities of polychaetes by estimating in a novel way a "mean square drift" coefficient which characterizes their fluctuations in relative abundance over time. We show the usefulness of using new tools to approach and model such highly variable population dynamics in marine ecosystems.
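The "mean square drift" coefficient described above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: the binning scheme and the estimator form D(x) ≈ ⟨(Δx)²⟩/(2Δt) conditioned on the current abundance level are assumptions, and the random-walk series is a synthetic surrogate.

```python
import numpy as np

def mean_square_drift(x, dt=1.0, n_bins=10):
    """Estimate a "mean square drift" coefficient D(x) ~ <(dx)^2> / (2*dt),
    binned by the current abundance level x (hypothetical estimator
    sketched after the abstract; not the authors' exact definition)."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    bins = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.clip(np.digitize(x[:-1], bins) - 1, 0, n_bins - 1)
    drift = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            drift[b] = np.mean(dx[sel] ** 2) / (2.0 * dt)
    centers = 0.5 * (bins[:-1] + bins[1:])
    return centers, drift

# usage on a synthetic abundance series
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0, 0.05, 2000))  # random-walk surrogate
centers, D = mean_square_drift(series)
```

For a pure random walk the estimated drift is flat in x; a state-dependent profile would signal the kind of abundance-dependent fluctuation dynamics the authors analyze.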
Howarth, Jonathan R; Parmar, Saroj; Barraclough, Peter B; Hawkesford, Malcolm J
2009-02-01
A sulphate deficiency-induced gene, sdi1, has been identified by cDNA-amplified fragment length polymorphism (AFLP) analysis utilizing field-grown, nutrient-deficient wheat (Triticum aestivum var. Hereward). The expression of sdi1 was specifically induced in leaf and root tissues in response to sulphate deficiency, but was not induced by nitrogen, phosphorus, potassium or magnesium deficiency. Expression was also shown to increase in plant tissues as the external sulphate concentration in hydroponically grown plants was reduced from 1.0 to 0.0 mM. On this basis, sdi1 gene expression has potential as a sensitive indicator of sulphur nutritional status in wheat. Genome-walking techniques were used to clone the 2.7-kb region upstream of sdi1 from genomic DNA, revealing several cis-element motifs previously identified as being associated with sulphur responses in plants. The Arabidopsis thaliana gene most highly homologous to sdi1 is At5g48850, which was also demonstrated to be induced by sulphur deficiency, an observation confirmed by the analysis of microarray data available in the public domain. The expression of Atsdi1 was induced more rapidly than previously characterized sulphur-responsive genes in the period immediately following the transfer of plants to sulphur-deficient medium. Atsdi1 T-DNA 'knockout' mutants were shown to maintain higher tissue sulphate concentrations than wild-type plants under sulphur-limiting conditions, indicating a role in the utilization of stored sulphate under sulphur-deficient conditions. The structural features of the sdi1 gene and its application in the genetic determination of the sulphur nutritional status of wheat crops are discussed.
Soofer, R.M.
1987-01-01
This dissertation has four distinctive aspects. 1. By outlining the position of France, Britain, and West Germany on SDI and BMD, it hopes to elucidate the nature and extent of official and private European criticism and support for research into BMD as well as actual deployment of missile defenses - both in the US and Western Europe. 2. By examining European strategic thought as it pertains to deterrence, NATO strategy, and arms control, it attempts to explain the basis for the various views of SDI held by European governments and opposition groups, while affording the reader a better understanding of the Western European security predicament as well. 3. By analyzing the impact of various BMD deployment schemes in the continental US, Western Europe, and Soviet Union - on NATO strategy and European security, it hopes to contribute to the ongoing search for ways to strengthen NATO defense, and hence, deterrence capabilities. 4. Finally, this study seeks to examine the relationship between generally held security paradigms and specific strategic force initiatives. It is concluded that missile defenses of US strategic nuclear forces and command structure, as well as limited area defense of the continental US, would contribute to western European security by strengthening the credibility of the US strategic nuclear guarantee - the bedrock of NATO strategy.
NASA Astrophysics Data System (ADS)
Pahlavani, M. R.; Firoozi, B.
2015-11-01
Within a developed particle-hole approach, a systematic study of the β⁻ transition from the ground state of the ¹⁶N nucleus to the ground and some excited states of the ¹⁶O nucleus has been carried out. The energy spectrum and the wave functions of the pure configurations of the ¹⁶N and ¹⁶O nuclei are obtained numerically using the mean-field shell model with a Woods-Saxon nuclear potential including spin-orbit and Coulomb interactions. Considering the SDI residual interaction, mixed configurations of the ground and excited pnTDA and TDA states are extracted for these nuclei. The resulting energy spectra and eigenstates agree closely with the experimental energy spectrum and eigenstates after the residual-potential parameters are adjusted using the Nelder-Mead (NM) algorithm. In this approach, the endpoint energies, log ft values, and partial half-lives of some possible transitions are calculated. The results obtained using the optimized SDI approach are reasonably close to the available experimental data.
Li, Kun; Emani, Prashant S; Ash, Jason; Groves, Michael; Drobny, Gary P
2014-08-13
Extracellular matrix proteins adsorbed onto mineral surfaces exist in a unique environment where the structure and dynamics of the protein can be altered profoundly. To further elucidate how the mineral surface impacts molecular properties, we perform a comparative study of the dynamics of nonpolar side chains within the mineral-recognition domain of the biomineralization protein salivary statherin adsorbed onto its native hydroxyapatite (HAP) mineral surface versus the dynamics displayed by the native protein in the hydrated solid state. Specifically, the dynamics of phenylalanine side chains (viz., F7 and F14) located in the surface-adsorbed 15-amino acid HAP-recognition fragment (SN15: DpSpSEEKFLRRIGRFG) are studied using deuterium magic angle spinning (²H MAS) line shape and spin-lattice relaxation measurements. ²H NMR MAS spectra and T1 relaxation times obtained from the deuterated phenylalanine side chains in free and HAP-adsorbed SN15 are fitted to models where the side chains are assumed to exchange between rotameric states and where the exchange rates and a priori rotameric state populations are varied iteratively. In condensed proteins, phenylalanine side-chain dynamics are dominated by 180° flips of the phenyl ring, i.e., the "π flip". However, for both F7 and F14, the number of exchanging side-chain rotameric states increases in the HAP-bound complex relative to the unbound solid sample, indicating that increased dynamic freedom accompanies introduction of the protein into the biofilm state. The observed rotameric exchange dynamics in the HAP-bound complex are on the order of 5-6 × 10⁶ s⁻¹, as determined from the deuterium MAS line shapes. The dynamics in the HAP-bound complex are also shown to have some solution-like behavioral characteristics, with some interesting deviations from rotameric library statistics.
Madkour, Tarek M; Salem, Sarah A; Miller, Stephen A
2013-04-28
To fully understand the thermodynamic nature of polymer blends and accurately predict their miscibility on a microscopic level, a hybrid model employing both statistical mechanics and molecular dynamics techniques was developed to effectively predict the total free energy of mixing. The statistical mechanics principles were used to derive an expression for the deformational entropy of the chains in the polymeric blends that could be evaluated from molecular dynamics trajectories. Evaluation of the entropy loss due to chain deformation (coiling as a result of repulsive interactions between the blend components, or swelling due to attractive interactions between the polymeric segments) predicted a negative value for the deformational entropy, resulting in a decrease in the overall entropy change upon mixing. Molecular dynamics methods were then used to evaluate the enthalpy of mixing, the entropy of mixing, the loss in entropy due to the deformation of the polymeric chains upon mixing, and the total free energy change for a series of polar and non-polar poly(glycolic acid) (PGA) polymer blends.
NASA Astrophysics Data System (ADS)
Massei, N.; Duran, L. P.; Fournier, M.; Jardani, A.; Lecoq, N.
2015-12-01
In this research we study the capability of time series analysis approaches to extract meaningful components of karst spring hydrographs. To this end, we compare these statistical components to the internal components of a conceptual precipitation/discharge model based on physical knowledge of the studied site. We used the conceptual modeling software KARSTMOD, developed by the INSU/CNRS National Karst Observatory, to model discharge at a small karst spring in Normandy (France). The model comprises four reservoirs, E, L, M and C (interpreted as epikarst, high-inertia/highly capacitive matrix, fissure network and conduits), consistent with previous work showing the existence of a triple porosity in the chalk of Normandy. KARSTMOD internal flow components were analyzed with correlation and Fourier spectral analysis, and compared to statistical components extracted from spring discharge by wavelet multiresolution analysis and Ensemble Empirical Mode Decomposition (EEMD). We could also analyze how the hydrological signal acquires its red-noise statistical characteristics as water flow propagates through the conceptual model. The trend of the discharge signal, given by the residue of the EEMD, appeared quite similar to the variation in reservoir L and well correlated with the variation of the water level within the aquifer. Exchanges between the fissured matrix and conduits (reservoirs M and C) could also be investigated: a high-frequency, pressure-pulse-controlled flow from C to M (intermittent recharge from the conduits) was identified, as well as fissured-matrix flow likely to take place in the surroundings of the conduit network. Flow from reservoir M to reservoir C could be recovered by recombining wavelet components of the spring discharge. This study demonstrated that statistical components extracted from the discharge signal of a karst spring can provide meaningful hydrological information. Comparison with a physics-based model would however be required in order to complement this
Lee, Kwang-Min; Rhee, Chang-Hoon; Kang, Choong-Kyung; Kim, Jung-Hoe
2006-10-01
The production of recombinant anti-HIV peptide, T-20, in Escherichia coli was optimized by statistical experimental designs (successive designs with multiple factors): a 2^(4-1) fractional factorial, a 2^3 full factorial, and a 2^2 rotational central composite design, in that order. The effects of media composition (glucose, NPK sources, MgSO4, and trace elements), induction level, induction timing (optical density at induction), and induction duration (culture time after induction) on T-20 production were studied using a statistical response surface method. A series of iterative experimental designs was employed to determine optimal fermentation conditions (media and process factors). Optimal ranges characterized by %T-20 (proportion of peptide to total cell protein) were observed, narrowed down, and further investigated to determine the optimal combination of culture conditions, which was as follows: 9, 6, 10, and 1 mL of glucose, NPK sources, MgSO4, and trace elements, respectively, in a total of 100 mL of medium, induced at an OD of 0.55-0.75 with 0.7 mM isopropyl-beta-D-thiogalactopyranoside for an induction duration of 4 h. Under these conditions, up to 14% T-20 was obtained. This statistical optimization allowed the production of T-20 to be increased more than twofold (from 6 to 14%) within a shorter induction duration (from 6 to 4 h) at the shake-flask scale.
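The 8-run 2^(4-1) fractional factorial named above can be constructed from a full 2^3 design with the fourth factor aliased to the three-way interaction. A minimal sketch: the coded -1/+1 levels are standard, but the generator choice D = ABC is an assumption, not taken from the paper.

```python
import itertools
import numpy as np

def fractional_factorial_2_4_1():
    """Build an 8-run 2^(4-1) screening design with generator D = ABC
    (coded -1/+1 levels); a standard construction used here only to
    illustrate the kind of design in the abstract."""
    runs = []
    for a, b, c in itertools.product([-1, 1], repeat=3):
        runs.append((a, b, c, a * b * c))  # D aliased with the ABC interaction
    return np.array(runs)

design = fractional_factorial_2_4_1()  # shape (8, 4), one row per run
```

Each column is balanced (equal numbers of high and low settings), which is what lets main effects be estimated from only 8 of the 16 possible runs.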
NASA Technical Reports Server (NTRS)
Mcmillan, S. L. W.; Lightman, A. P.
1984-01-01
A unified N-body and statistical treatment of stellar dynamics is developed and applied to the late stages of core collapse and early stages of post collapse evolution in globular clusters. A 'hybrid' computer code is joined to a direct N-body code which is used to calculate exactly the behavior of particles in the inner spatial region, and the combination is used to follow particles statistically in the outer spatial region. A transition zone allows the exchange of particles and energy between the two regions. The main application results include: formation of a hard central binary system, reversal of core collapse and expansion due to the heat input from this binary, ejection of the binary from the core, and recollapse of the core; density profiles that form a one-parameter sequence during the core oscillations; and indications that these oscillations will eventually cease.
Terai, Asuka; Nakagawa, Masanori
2007-08-01
The purpose of this paper is to construct a model that represents the human process of understanding metaphors, focusing specifically on similes of the form "an A like B". Generally speaking, human beings are able to generate and understand many sorts of metaphors. This study constructs the model on the basis of a probabilistic knowledge structure for concepts, computed from a statistical analysis of a large-scale corpus. Consequently, the model is able to cover the many kinds of metaphors that human beings can generate. Moreover, the model implements the dynamic process of metaphor understanding using a neural network with dynamic interactions. Finally, the validity of the model is confirmed by comparing model simulations with the results of a psychological experiment.
Karpen, M E; Tobias, D J; Brooks, C L
1993-01-19
The microscopic interactions and mechanisms leading to nascent protein folding events are generally unknown. While such short time-scale events are difficult to study experimentally, molecular dynamics simulations of peptides can provide a useful model for studying events related to protein folding initiation. Recently, two extremely long molecular dynamics simulations (2.2 ns each) were carried out on the pentapeptide Tyr-Pro-Gly-Asp-Val [Tobias, D. J., Mertz, J. E., & Brooks, C. L., III (1991) Biochemistry 30, 6054-6058] that forms stable reverse turns in solution. Tobias et al. examined folding events in this large system (approximately 30,000 conformations) using traditional methods of trajectory analysis. The sheer magnitude of this problem prompted us to develop an automated approach, based on self-organizing neural nets, to extract the key features of the molecular dynamics trajectory. The neural net is used to perform conformational clustering, which reduces the complexity of a system while minimizing the loss of information. The conformations were grouped together using distances in dihedral angle space as a measure of conformational similarity. The resulting clusters represent "conformational states", and transitions between these states were examined to identify mechanisms of conformational change. Many conformational changes involved the rotation of only a single dihedral angle, but concerted angle changes were also found. Most of the conformational information in the 30,000 samples from the full trajectories was retained in the relatively few resultant clusters, providing a powerful tool for analysis of an expanding base of large molecular simulations.
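Conformational clustering of the kind described, grouping trajectory frames by dihedral-angle similarity, can be sketched with a plain k-means on a periodic embedding. This is a hypothetical stand-in for the self-organizing net the authors used, and the toy two-state data are invented; the (cos, sin) embedding is one common way to respect the 0°/360° wrap-around.

```python
import numpy as np

def cluster_dihedrals(angles_deg, k=4, n_iter=50, seed=0):
    """Group conformations by dihedral similarity. Each angle is embedded
    as a (cos, sin) pair so that 0 and 360 degrees coincide, then a plain
    k-means is run on the embedded coordinates."""
    rng = np.random.default_rng(seed)
    rad = np.deg2rad(np.asarray(angles_deg, float))
    X = np.concatenate([np.cos(rad), np.sin(rad)], axis=1)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# toy trajectory: two dihedrals per conformation, two well-separated states
rng = np.random.default_rng(1)
state_a = rng.normal([60, -60], 5, (100, 2))
state_b = rng.normal([180, 60], 5, (100, 2))
labels = cluster_dihedrals(np.vstack([state_a, state_b]), k=2)
```

Transitions between cluster labels along the time axis would then play the role of the conformational-state transitions analyzed in the paper.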
NASA Astrophysics Data System (ADS)
Hong, Mei; Zhang, Ren; Wang, Dong; Chen, Xi; Shi, Jian; Singh, Vijay
2014-12-01
The western Pacific subtropical high (WPSH) is closely correlated with the East Asian climate. To date, the underlying mechanisms and sustaining factors have not been fully elucidated. Based on the concept of dynamical system model reconstruction, this paper presents a nonlinear statistical-dynamical model of the subtropical high ridge line (SHRL) together with four summer monsoon factors. SHRL variations from 1990 to 2011 are subdivided into three categories, and the parameter differences among the three corresponding models are examined. The dynamical characteristics of the SHRL are analyzed and an aberrance mechanism subsequently developed. Modeling suggests that different parameters may lead to significant variations in the monsoon variables corresponding to different WPSH activities. Dynamical system bifurcation and mutation analysis indicates that the South China Sea monsoon trough is a significant factor in the occurrence and maintenance of the "double-ridge" phenomenon. Moreover, the occurrence of the Mascarene cold high is predicted to cause an abnormally northward location of the WPSH, resulting in the "empty plum" phenomenon.
Technology Transfer Automated Retrieval System (TEKTRAN)
Quorum sensing transcriptional regulator SdiA has been shown to enhance the survival of Escherichia coli O157:H7 (O157) in the acidic compartment of bovine rumen in response to N-acyl-L-homoserine lactones (AHLs) produced by the rumen bacteria. Bacteria that survive the rumen environment subsequentl...
Cosmic statistics of statistics
NASA Astrophysics Data System (ADS)
Szapudi, István; Colombi, Stéphane; Bernardeau, Francis
1999-12-01
The errors on statistics measured in finite galaxy catalogues are exhaustively investigated. The theory of errors on factorial moments by Szapudi & Colombi is applied to cumulants via a series expansion method. All results are subsequently extended to the weakly non-linear regime. Together with previous investigations this yields an analytic theory of the errors for moments and connected moments of counts in cells from highly non-linear to weakly non-linear scales. For non-linear functions of unbiased estimators, such as the cumulants, the phenomenon of cosmic bias is identified and computed. Since it is subdued by the cosmic errors in the range of applicability of the theory, correction for it is inconsequential. In addition, the method of Colombi, Szapudi & Szalay concerning sampling effects is generalized, adapting the theory for inhomogeneous galaxy catalogues. While previous work focused on the variance only, the present article calculates the cross-correlations between moments and connected moments as well, for a statistically complete description. The final analytic formulae representing the full theory are explicit but somewhat complicated. Therefore we have made available a Fortran program capable of calculating the described quantities numerically (for further details e-mail SC at colombi@iap.fr). An important special case is the evaluation of the errors on the two-point correlation function, for which this should be more accurate than any method put forward previously. This tool will be immensely useful in the future for assessing the precision of measurements from existing catalogues, as well as aiding the design of new galaxy surveys. To illustrate the applicability of the results and to explore the numerical aspects of the theory qualitatively and quantitatively, the errors and cross-correlations are predicted under a wide range of assumptions for the future Sloan Digital Sky Survey. The principal results concerning the cumulants ξ, Q3 and Q4 are that
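The factorial moments that underpin this error theory are directly estimable from counts-in-cells. The following is a minimal sketch of the plain estimator F_k = ⟨N(N-1)···(N-k+1)⟩ only, without the cosmic-error and cosmic-bias terms that are the subject of the paper; the Poisson check exploits the fact that Poisson counts with mean λ have F_k = λ^k.

```python
import numpy as np

def factorial_moments(counts, kmax=4):
    """Estimate factorial moments F_k = <N(N-1)...(N-k+1)> from
    counts-in-cells; the building blocks of the error theory in the
    abstract (estimator only, no error terms)."""
    counts = np.asarray(counts, dtype=float)
    F = []
    for k in range(1, kmax + 1):
        falling = np.ones_like(counts)
        for j in range(k):
            falling *= counts - j  # falling factorial N(N-1)...(N-k+1)
        F.append(falling.mean())
    return np.array(F)

# Poisson counts with mean 3 have F_k = 3**k in expectation
rng = np.random.default_rng(2)
F = factorial_moments(rng.poisson(3.0, 200000))
```

Deviations of the measured F_k from the Poisson expectation are what carry the clustering signal; the paper's contribution is the error bars on such measurements.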
NASA Astrophysics Data System (ADS)
Zhao, Jun-Hu; Yang, Liu; Hou, Wei; Liu, Gang; Zeng, Yu-Xing
2015-05-01
The cold vortex is a major high impact weather system in northeast China during the warm season, its frequent activities also affect the short-term climate throughout eastern China. How to objectively and quantitatively predict the intensity trend of the cold vortex is an urgent and difficult problem for current short-term climate prediction. Based on the dynamical-statistical combining principle, the predicted results of the Beijing Climate Center’s global atmosphere-ocean coupled model and rich historical data are used for dynamic-statistical extra-seasonal prediction testing and actual prediction of the summer 500-hPa geopotential height over the cold vortex activity area. The results show that this method can significantly reduce the model’s prediction error over the cold vortex activity area, and improve the prediction skills. Furthermore, the results of the sensitivity test reveal that the predicted results are highly dependent on the quantity of similar factors and the number of similar years. Project supported by the National Natural Science Foundation of China (Grant No. 41375078), the National Basic Research Program of China (Grant Nos. 2012CB955902 and 2013CB430204), and the Special Scientific Research Fund of Public Welfare Profession of China (Grant No. GYHY201306021).
Jambrina, P G; Aoiz, F J; Bulut, N; Smith, Sean C; Balint-Kurti, G G; Hankel, M
2010-02-01
A detailed study of the proton exchange reaction H⁺ + D₂(v = 0, j = 0) → HD + D⁺ on its ground 1¹A′ potential energy surface has been carried out using 'exact' close-coupled quantum mechanical wavepacket (WP-EQM), quasi-classical trajectory (QCT), and statistical quasi-classical trajectory (SQCT) calculations for a range of collision energies from the reaction threshold to 1.3 eV. The WP-EQM calculations include all total angular momenta up to J_max = 50, and therefore the various dynamical observables are converged up to 0.6 eV. It has been found that it is necessary to include all Coriolis couplings to obtain reliably converged results. Reaction probabilities obtained using the different methods are thoroughly compared as a function of the total energy for a series of J values. Comparisons are also made of total reaction cross sections as a function of the collision energy, and of rate constants. In addition, opacity functions, integral cross sections (ICS) and differential cross sections (DCS) are presented at collision energies of 102 meV, 201.3 meV and 524.6 meV. The agreement between the three sets of results is only qualitative. The QCT calculations fail to describe the overall reactivity and most of the dynamical observables correctly. At low collision energies, the QCT method is plagued by the lack of conservation of zero-point energy, whilst at higher collision energies and/or total angular momenta, the appearance of an effective repulsive potential associated with the centrifugal motion "over" the well causes a substantial decrease in reactivity. In turn, the statistical models overestimate the reactivity over the whole range of collision energies as compared with the WP-EQM method. Specifically, at sufficiently high collision energies the reaction cannot be deemed statistical, and important dynamical effects appear to be present. In general the WP-EQM results lie in between those obtained using the QCT and SQCT methods. One of the main
NASA Astrophysics Data System (ADS)
Stosic, Tatijana; Telesca, Luciano; de Souza Ferreira, Diego Vicente; Stosic, Borko
2016-09-01
In this paper we investigated the influence of the construction of the Sobradinho dam on the daily streamflow of the São Francisco river, Brazil, using the permutation entropy method. We analyzed a long daily streamflow time series recorded during the period 1929-2010, encompassing the construction of the Sobradinho dam between 1973 and 1979. We found that the original and deseasonalized streamflow time series are characterized by clearly different complexity and entropy patterns before the construction of the dam, while after it their degrees of randomness and complexity are nearly identical. Furthermore, investigating the oscillatory behavior of the entropy and complexity time variation, a periodicity of 3.67 years was identified, identical to one of the main periodicities revealed in the Multivariate ENSO Index (MEI). This finding confirms the close relationship between streamflow dynamics and the ENSO phenomenon. After the construction of the dam, the time variation of entropy and complexity changes almost abruptly toward a stochastic regime characterized by higher entropy and lower complexity. Although the dam operations could be considered responsible for such an abrupt dynamical change in the streamflow, we cannot exclude the presence of a co-induced ENSO effect; in fact, the analysis of the MEI shows a strikingly similar and concomitant change in the long-term trend, identified by using singular spectrum analysis.
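The permutation entropy statistic used here is straightforward to compute: it is the Shannon entropy of the ordinal patterns formed by short windows of the series (the Bandt-Pompe construction). A minimal sketch; the order and delay values are illustrative, not those of the paper.

```python
import math
import numpy as np

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy of a 1-D series: the Shannon
    entropy of the distribution of ordinal (rank) patterns of length
    `order`, optionally normalized by log(order!)."""
    x = np.asarray(x, float)
    n = len(x) - (order - 1) * delay
    patterns = {}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        key = tuple(np.argsort(window))  # ordinal pattern of the window
        patterns[key] = patterns.get(key, 0) + 1
    probs = np.array(list(patterns.values()), float) / n
    H = -np.sum(probs * np.log(probs))
    if normalize:
        H /= math.log(math.factorial(order))
    return H

rng = np.random.default_rng(3)
H_noise = permutation_entropy(rng.normal(size=5000))  # near 1: all patterns equally likely
H_trend = permutation_entropy(np.arange(5000.0))      # 0: a single ordinal pattern
```

A shift of a streamflow series toward the white-noise end of this scale is exactly the "higher entropy, more stochastic" regime change the abstract reports after dam construction.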
NASA Astrophysics Data System (ADS)
Mutz, Sebastian; Paeth, Heiko; Winkler, Stefan
2016-03-01
The long-term behaviour of Norwegian glaciers is reflected by the long mass-balance records provided by the Norwegian Water Resources and Energy Directorate. These show positive annual mass balances at maritime glaciers in the 1980s and 1990s, followed by rapid mass loss since 2000. This study assesses the influence of various atmospheric variables on mass changes of selected Norwegian glaciers by correlation and cross-validated stepwise multiple regression analyses. The atmospheric variables are constructed from reanalyses by the National Centers for Environmental Prediction and the European Centre for Medium-Range Weather Forecasts. Transfer functions determined by the multiple regression are applied to predictors derived from a multi-model ensemble of climate projections to estimate future mass-balance changes until 2100. The statistical relationship to the North Atlantic Oscillation (NAO), the strongest predictor, is highest for maritime glaciers and weaker for more continental ones. The mass surplus in the 1980s and 1990s can be attributed to a strong NAO phase and lower air temperatures during the ablation season. The mass loss since 2000 can be explained by an increase in summer air temperatures and a slight weakening of the NAO. From 2000 to 2100, the statistical model predicts changes for glaciers in more continental settings of c. -20 m w.e. (water equivalent), or -0.2 m w.e./a. The corresponding range for their more maritime counterparts is -0.5 to +0.2 m w.e./a. Results from Bayesian classification of observed atmospheric states associated with high melt or high accumulation in the past into different simulated future climates suggest that climatic conditions towards the end of the twenty-first century favour less winter accumulation and more ablation in summer. The posterior probabilities for high accumulation at the end of the twenty-first century are typically 1.5-3 times lower than in the twentieth century while the posterior
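Cross-validated stepwise regression of the kind used to build the transfer functions can be sketched as forward selection driven by leave-one-out error. This is a generic stand-in, not the authors' exact scheme, and the toy "mass-balance" data are invented.

```python
import numpy as np

def loocv_rmse(X, y):
    """Leave-one-out RMSE of an ordinary least-squares fit (with intercept)."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])
    errs = []
    for i in range(n):
        keep = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        errs.append(y[i] - A[i] @ beta)
    return float(np.sqrt(np.mean(np.square(errs))))

def forward_stepwise(X, y):
    """Repeatedly add the predictor that most lowers the cross-validated
    RMSE; stop when no remaining predictor helps."""
    selected = []
    best = loocv_rmse(np.empty((len(y), 0)), y)  # intercept-only baseline
    improved = True
    while improved:
        improved = False
        for j in range(X.shape[1]):
            if j in selected:
                continue
            rmse = loocv_rmse(X[:, selected + [j]], y)
            if rmse < best:
                best, best_j, improved = rmse, j, True
        if improved:
            selected.append(best_j)
    return selected, best

# toy example: the target depends only on predictor 0 (e.g. an NAO-like index)
rng = np.random.default_rng(4)
X = rng.normal(size=(60, 4))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=60)
sel, rmse = forward_stepwise(X, y)
```

Because selection is scored by held-out error rather than in-sample fit, uninformative predictors are much less likely to enter the model, which is the point of the cross-validated variant.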
Diegert, Carl F.
2006-12-01
We define a new diagnostic method in which computationally intensive numerical solutions are used as an integral part of making difficult, non-contact, nanometer-scale measurements. The limited scope of this report comprises most of a due-diligence investigation into implementing the new diagnostic for measuring dynamic operation of Sandia's RF Ohmic Switch. Our results are all positive, providing insight into how this switch deforms during normal operation. Future work should contribute important measurements on a variety of operating MEMS devices, with insights that are complementary to those from measurements made using interferometry and laser Doppler methods. More generally, the work opens up a broad front of possibilities where exploiting massive high-performance computers enables new measurements.
NASA Technical Reports Server (NTRS)
Mcmillan, S. L. W.
1986-01-01
The period immediately following the core collapse phase in the evolution of a globular cluster is studied using a hybrid N-body/Fokker-Planck stellar dynamical code. Several core oscillations of the type predicted in earlier work are seen. The oscillations are driven by the formation, hardening, and ejection of binaries by three-body processes, and appear to decay on a timescale of about 10 to the 7th yr, for the choice of 'typical' cluster parameters made here. There is no evidence that they are gravothermal in nature. The mechanisms responsible for the decay are discussed in some detail. The distribution of hard binaries produced by the oscillations is compared with theoretical expectations and the longer term evolution of the system is considered.
NASA Astrophysics Data System (ADS)
Vannitsem, Stéphane; Lucarini, Valerio
2016-06-01
We study a simplified coupled atmosphere-ocean model using the formalism of covariant Lyapunov vectors (CLVs), which link physically-based directions of perturbations to growth/decay rates. The model is obtained via a severe truncation of quasi-geostrophic equations for the two fluids, and includes a simple yet physically meaningful representation of their dynamical/thermodynamical coupling. The model has 36 degrees of freedom, and the parameters are chosen so that a chaotic behaviour is observed. There are two positive Lyapunov exponents (LEs), sixteen negative LEs, and eighteen near-zero LEs. The presence of many near-zero LEs results from the vast time-scale separation between the characteristic time scales of the two fluids, and leads to nontrivial error growth properties in the tangent space spanned by the corresponding CLVs, which are geometrically very degenerate. Such CLVs correspond to two different classes of ocean/atmosphere coupled modes. The tangent space spanned by the CLVs corresponding to the positive and negative LEs has, instead, a non-pathological behaviour, and one can construct robust large deviations laws for the finite time LEs, thus providing a universal model for assessing predictability on long to ultra-long scales along such directions. Interestingly, the tangent space of the unstable manifold has substantial projection on both atmospheric and oceanic components. The results show the difficulties in using hyperbolicity as a conceptual framework for multiscale chaotic dynamical systems, whereas the framework of partial hyperbolicity seems better suited, possibly indicating an alternative definition for the chaotic hypothesis. They also suggest the need for an accurate analysis of error dynamics on different time scales and domains and for a careful set-up of assimilation schemes when looking at coupled atmosphere-ocean models.
Van Wyngarden, Annalise L.; Mar, Kathleen A.; Wiegel, Aaron A.; Quach, Jim; Nguyen, Anh P. Q.; Lin, Shi-Ying; Lendvay, Gyorgy; Guo, Hua; Lin, Jim J.; Lee, Yuan T.; Boering, Kristie A.
2014-08-14
The dynamics of the ¹⁸O(³P) + ³²O₂ isotope exchange reaction were studied using crossed atomic and molecular beams at collision energies (E_coll) of 5.7 and 7.3 kcal/mol, and experimental results were compared with quantum statistical (QS) and quasi-classical trajectory (QCT) calculations on the O₃(X¹A′) potential energy surface (PES) of Babikov et al. [D. Babikov, B. K. Kendrick, R. B. Walker, R. T. Pack, P. Fleurat-Lesard, and R. Schinke, J. Chem. Phys. 118, 6298 (2003)]. In both QS and QCT calculations, agreement with experiment was markedly improved by performing calculations with the experimental distribution of collision energies rather than at the fixed average collision energy. At both collision energies, the scattering displayed a forward bias, with a smaller bias at the lower E_coll. Comparisons with the QS calculations suggest that ³⁴O₂ is produced with a non-statistical rovibrational distribution that is hotter than predicted, and the discrepancy is larger at the lower E_coll. If this underprediction of rovibrational excitation by the QS method is not due to PES errors and/or to non-adiabatic effects not included in the calculations, then this collision energy dependence is opposite to what might be expected from collision-complex lifetime arguments and opposite to that measured for the forward bias. While the QCT calculations captured the experimental product vibrational energy distribution better than the QS method, the QCT results underpredicted rotationally excited products, overpredicted the forward bias, and predicted a trend in the strength of the forward bias with collision energy opposite to that measured, indicating that the method does not completely capture the dynamical behavior measured in the experiment. Thus, these results further underscore the need for improvement in theoretical treatments of dynamics on the O₃(X¹A′) PES, and perhaps of the PES itself, in order to better
NASA Technical Reports Server (NTRS)
Zheng, Quanan; Yan, Xiao-Hai; Klemas, Vic
1993-01-01
Internal waves on the continental shelf of the Middle Atlantic Bight, seen in Space Shuttle photographs taken during the STS-40 mission in June 1991, are measured and analyzed. The internal wave field in the sample area has a three-level structure consisting of packet groups, packets, and solitons. An average packet-group wavelength of 17.5 km and an average soliton wavelength of 0.6 km are measured. Finite-depth theory is used to derive the dynamic parameters of the internal solitons: a maximum amplitude of 5.6 m, a characteristic phase speed of 0.42 m/s, a characteristic period of 23.8 min, water-particle velocity amplitudes in the upper and lower layers of 0.13 m/s and 0.030 m/s respectively, and a theoretical energy per unit crest line of 6.8 x 10{sup 4} J/m. The frequency distribution of solitons is triple-peaked rather than continuous. The major generation source is at 160 m water depth and a secondary source is at 1800 m depth, corresponding to the upper and lower edges of the shelf break.
Kissick, David J; Muir, Ryan D; Simpson, Garth J
2010-12-15
An experimentally simple photon counting method is demonstrated that provides 7 orders of magnitude of linear dynamic range (LDR) for a single photomultiplier tube (PMT) detector. In conventional photon/electron counting methods, the linear range is dictated by the agreement between the binomially distributed measurement of counted events and the underlying Poisson distribution of photons/electrons. By explicitly considering the log-normal probability distribution of voltage transients as a function of the number of photons present and the Poisson distribution of photons, observed counts for a given threshold can be related to the mean number of photons well beyond the conventional limit. Analytical expressions are derived relating counts and photons that extend the linear range to an average of ∼11 photons arriving simultaneously with a single threshold. These expressions can be evaluated numerically for multiple thresholds, extending the linear range to the saturation point of the PMT. The peak voltage distributions are experimentally shown to follow a Poisson-weighted sum of log-normal distributions that can all be derived from the single-photoelectron voltage peak-height distribution. The LDR that results from this method is compared to conventional single photon counting (SPC) and to signal averaging by analog-to-digital conversion (ADC).
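In the conventional single-photon-counting regime that this abstract uses as its baseline, the mean photon number per trial can be recovered from the observed count fraction by inverting the Poisson zero-class. A minimal sketch of that standard inversion (not the authors' extended multi-threshold method; function name and numbers are illustrative):

```python
import math

def mean_photons_from_counts(counts, trials):
    """Estimate the mean photon number per trial from the fraction of
    trials registering at least one count, assuming Poisson statistics:
    P(>= 1 photon) = 1 - exp(-lam)  =>  lam = -ln(1 - counts/trials)."""
    f = counts / trials
    if f >= 1.0:
        raise ValueError("count fraction saturated; linear range exceeded")
    return -math.log(1.0 - f)

# Example: 6321 counting events in 10000 trials implies lam close to 1
lam = mean_photons_from_counts(6321, 10000)
```

As the count fraction approaches 1 the inversion diverges, which is exactly the saturation that the multi-threshold treatment in the abstract is designed to push back.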
Demkin, V. P.; Mel'nichuk, S. V.
2014-09-15
This work presents results of investigations into the dynamics of secondary electrons interacting with helium atoms in the presence of the reverse electric field that arises in the flare of a high-voltage pulsed beam-type discharge and leads to degradation of the primary electron beam. The electric field in a discharge of this type at moderate pressures can reach several hundred V/cm and leads to considerable changes in the kinetics of the secondary electrons created during propagation of the electron beam generated in the accelerating gap with a grid anode. Moving in the accelerating electric field toward the anode, secondary electrons create the so-called compensating current to the anode. The character of the electron motion and the compensating current itself are determined by the ratio of the field strength to the concentration of atoms (E/n). The energy and angular spectra of secondary electrons are calculated by the Monte Carlo method for different ratios E/n of the electric field strength to the helium atom concentration. The motion of secondary electrons with energies near the thresholds for inelastic collisions with helium atoms is studied, and a differential analysis is carried out of the collisional processes causing energy losses of electrons in helium for different E/n values. The mechanism of creation and accumulation of slow electrons as a result of inelastic collisions of secondary electrons with helium atoms and selective population of metastable states of helium atoms is considered. It is demonstrated that over a wide range of E/n values the motion of secondary electrons in the beam-type discharge flare has the character of drift. At E/n values characteristic of this type of discharge, the drift velocity of these electrons is calculated and compared with the available experimental data.
Pasquaretta, Cristian; Klenschi, Elizabeth; Pansanel, Jérôme; Battesti, Marine; Mery, Frederic; Sueur, Cédric
2016-01-01
Social learning – the transmission of behaviors through observation of or interaction with conspecifics – can be viewed as a decision-making process driven by interactions among individuals. Animal group structures change over time, and interactions among individuals occur in particular orders that may be repeated following specific patterns, change in their nature, or disappear completely. Here we used a stochastic actor-oriented model built with the RSiena package in R to estimate individual behaviors and their changes through time, by analyzing the dynamics of the interaction network of the fruit fly Drosophila melanogaster during social learning experiments. In particular, we re-analyzed an experimental dataset in which uninformed flies, left free to interact with informed ones, acquired and later used information about oviposition site choice obtained through social interactions. We estimated the degree to which the uninformed flies had successfully acquired the information carried by informed individuals using the proportion of eggs laid by uninformed flies on the medium their conspecifics had been trained to favor. Regardless of the degree of information acquisition measured in uninformed individuals, they always received and initiated interactions more frequently than informed ones did. However, information was efficiently transmitted (i.e., uninformed flies predominantly laid eggs on the same medium informed ones had learned to prefer) only when the difference in contacts sent between the two fly types was small. Interestingly, we found that the degree of reciprocation, the tendency of individuals to form mutual connections with each other, strongly affected oviposition site choice in uninformed flies. This work highlights the great potential of RSiena and its utility in studies of interaction networks among non-human animals. PMID:27148146
Kissick, David J.; Muir, Ryan D.; Sullivan, Shane Z.; Oglesbee, Robert A.; Simpson, Garth J.
2014-01-01
Despite the ubiquitous use of multi-photon and confocal microscopy measurements in biology, the core techniques typically suffer from fundamental compromises between signal to noise (S/N) and linear dynamic range (LDR). In this study, direct synchronous digitization of voltage transients coupled with statistical analysis is shown to allow S/N approaching the theoretical maximum throughout an LDR spanning more than 8 decades, limited only by the dark counts of the detector on the low end and by the intrinsic nonlinearities of the photomultiplier tube (PMT) detector on the high end. Synchronous digitization of each voltage transient represents a fundamental departure from established methods in confocal/multi-photon imaging, which are currently based on either photon counting or signal averaging. High information-density data acquisition (up to 3.2 GB/s of raw data) enables the smooth transition between the two modalities on a pixel-by-pixel basis and the ultimate writing of much smaller files (few kB/s). Modeling of the PMT response allows extraction of key sensor parameters from the histogram of voltage peak-heights. Applications in second harmonic generation (SHG) microscopy are described demonstrating S/N approaching the shot-noise limit of the detector over large dynamic ranges. PMID:24817799
Larson, Wesley A; McKinney, Garrett J; Seeb, James E; Seeb, Lisa W
2016-11-01
Loci that can be used to screen for sex in salmon can provide important information for study of both wild and cultured populations. Here, we tested for associations between sex and genotypes at thousands of loci available from a genotyping-by-sequencing (GBS) dataset to discover sex-associated loci in sockeye salmon (Oncorhynchus nerka). We discovered 7 sex-associated loci, developed high-throughput assays for 2 loci, and tested the utility of these 2 assays in 8 collections of sockeye salmon sampled throughout North America. We also screened an existing assay based on the master sex-determining gene in salmon (sdY) in these collections. The ability of GBS-derived loci to assign fish to their phenotypic sex varied substantially among collections suggesting that recombination between the loci that we discovered and the sex-determining gene has occurred. Assignment accuracy to phenotypic sex was much higher with the sdY assay but was still less than 100%. Alignment of sequences from GBS-derived loci to draft genomes for 2 salmonids provided strong evidence that many of these loci are found on chromosomes orthologous to the known sex chromosome in sockeye salmon. Our study is the first to describe the approximate location of the sex-determining region in sockeye salmon and indicates that sdY is also the master sex-determining gene in this species. However, discordances between sdY genotypes and phenotypic sex and the variable performance of GBS-derived loci warrant more research. PMID:27417855
Smith, Alwyn
1969-01-01
This paper is based on an analysis of questionnaires sent to the health ministries of Member States of WHO asking for information about the extent, nature, and scope of morbidity statistical information. It is clear that most countries collect some statistics of morbidity and many countries collect extensive data. However, few countries relate their collection to the needs of health administrators for information, and many countries collect statistics principally for publication in annual volumes which may appear anything up to 3 years after the year to which they refer. The desiderata of morbidity statistics may be summarized as reliability, representativeness, and relevance to current health problems. PMID:5306722
Hay, L.E.; Clark, M.P.
2003-01-01
This paper compares hydrologic model performance in three snowmelt-dominated basins in the western United States when driven by dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis. Runoff produced using a distributed hydrologic model is compared using daily precipitation and maximum and minimum temperature timeseries derived from the following sources: (1) NCEP output (horizontal grid spacing of approximately 210 km); (2) dynamically downscaled (DDS) NCEP output using a Regional Climate Model (RegCM2, horizontal grid spacing of approximately 52 km); (3) statistically downscaled (SDS) NCEP output; (4) spatially averaged measured data used to calibrate the hydrologic model (Best-Sta); and (5) spatially averaged measured data derived from stations located within the area of the RegCM2 model output used for each basin, but excluding the Best-Sta set (All-Sta). In all three basins, the SDS-based simulations of daily runoff were as good as runoff produced using the Best-Sta timeseries. The NCEP, DDS, and All-Sta timeseries were able to capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all three basins, the NCEP-, DDS-, and All-Sta-based simulations of runoff showed little skill on a daily basis. When the precipitation and temperature biases were corrected in the NCEP, DDS, and All-Sta timeseries, the accuracy of the daily runoff simulations improved dramatically; but, with the exception of the bias-corrected All-Sta data set, these simulations were never as accurate as the SDS-based simulations. This need for a bias correction may be somewhat troubling, but in the case of the large station timeseries (All-Sta), the bias correction did indeed 'correct' for the change in scale. It is unknown whether bias corrections to model output will be valid in a future climate. Future work is warranted to identify the causes for (and removal of
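The bias-correction step described above can be sketched generically as a mean adjustment of the forcing timeseries before they drive the hydrologic model. Additive correction for temperature and multiplicative correction for precipitation is a common convention assumed here, not necessarily the authors' exact scheme, and all values are illustrative:

```python
import statistics

def additive_bias_correction(model, observed):
    """Shift a model temperature series so its mean matches observations."""
    bias = statistics.mean(model) - statistics.mean(observed)
    return [x - bias for x in model]

def multiplicative_bias_correction(model, observed):
    """Scale a model precipitation series so its mean matches observations."""
    factor = statistics.mean(observed) / statistics.mean(model)
    return [x * factor for x in model]

# Toy example: a model that runs 2 degrees warm
obs_t = [10.0, 12.0, 14.0]
mod_t = [12.0, 14.0, 16.0]
corrected_t = additive_bias_correction(mod_t, obs_t)
```

In practice such corrections are usually applied month by month; as the abstract cautions, their validity under a changed future climate is an open question.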
Guyonvarch, Estelle; Ramin, Elham; Kulahci, Murat; Plósz, Benedek Gy
2015-10-15
The present study aims to use statistically designed computational fluid dynamics (CFD) simulations as numerical experiments for the identification of one-dimensional (1-D) advection-dispersion models - computationally light tools used, e.g., as sub-models in systems analysis. The objective is to develop a new 1-D framework, referred to as interpreted CFD (iCFD) models, in which statistical meta-models are used to calculate the pseudo-dispersion coefficient (D) as a function of design and flow boundary conditions. The method - presented in a straightforward and transparent way - is illustrated using the example of a circular secondary settling tank (SST). First, the significant design and flow factors are screened out by applying the statistical method of two-level fractional factorial design of experiments. Second, based on the number of significant factors identified through the factor-screening study and on system understanding, 50 different sets of design and flow conditions are selected using Latin Hypercube Sampling (LHS). The boundary-condition sets are imposed on a 2-D axisymmetric CFD simulation model of the SST. In the framework, to degenerate the 2-D model structure, CFD model outputs are approximated by the 1-D model through the calibration of three different model structures for D. Correlation equations for the D parameter are then identified as a function of the selected design and flow boundary conditions (meta-models), and their accuracy is evaluated against D values estimated in each numerical experiment. The evaluation and validation of the iCFD model structure is carried out using scenario simulation results obtained with parameters sampled from the corners of the LHS experimental region. For the studied SST, additional iCFD model development was carried out in terms of (i) assessing different density-current sub-models; (ii) implementing a combined flocculation, hindered, transient and compression settling velocity function; and (iii
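Latin Hypercube Sampling, used above to pick the 50 boundary-condition sets, stratifies each factor's range so that every stratum is sampled exactly once per factor. A minimal stand-alone sketch on the unit hypercube (the factor count, sample size, and seed are illustrative, not taken from the paper):

```python
import random

def latin_hypercube(n_samples, n_factors, seed=0):
    """Minimal Latin Hypercube Sampling on [0, 1)^n_factors: each factor's
    range is split into n_samples equal strata, and each stratum is
    sampled exactly once, with stratum order shuffled per factor."""
    rng = random.Random(seed)
    samples = [[0.0] * n_factors for _ in range(n_samples)]
    for j in range(n_factors):
        # One random point inside each stratum, then shuffle the order.
        points = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(points)
        for i in range(n_samples):
            samples[i][j] = points[i]
    return samples

# 50 boundary-condition sets over, e.g., 4 significant design/flow factors
design = latin_hypercube(50, 4)
```

Each unit-interval coordinate would then be rescaled to the physical range of its design or flow factor before being imposed on the CFD model.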
ERIC Educational Resources Information Center
Petocz, Peter; Sowey, Eric
2012-01-01
The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the statistical…
Gai, Lili; Vogel, Thomas; Maerzke, Katie A; Iacovella, Christopher R; Landau, David P; Cummings, Peter T; McCabe, Clare
2013-08-01
Two different techniques - replica-exchange Wang-Landau (REWL) and statistical temperature molecular dynamics (STMD) - were applied to systematically study the phase transition behavior of self-assembling lipids as a function of temperature using an off-lattice lipid model. Both methods allow the direct calculation of the density of states with improved efficiency compared to the original Wang-Landau method. A 3-segment model of amphiphilic lipids solvated in water has been studied with varied particle interaction energies (ε) and lipid concentrations. The phase behavior of the lipid molecules with respect to bilayer formation has been characterized through the calculation of the heat capacity as a function of temperature, in addition to various order parameters and general visual inspection. The simulations conducted by both methods can go to very low temperatures with the whole system exhibiting well-ordered structures. With optimized parameters, several bilayer phases are observed within the temperature range studied, including gel phase bilayers with frozen water, mixed water (i.e., frozen and liquid water), and liquid water, and a more fluid bilayer with liquid water. The results obtained from both methods, STMD and REWL, are consistently in excellent agreement with each other, thereby validating both the methods and the results.
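Both REWL and STMD build on the Wang-Landau idea of estimating the density of states g(E) with a flat-histogram random walk. The toy sketch below applies the original Wang-Landau scheme to a periodic 1-D Ising chain as a stand-in for the lipid model (all parameters and the test system are illustrative, not from the paper):

```python
import math
import random

def wang_landau_ising(n=8, flatness=0.8, f_final=1e-4, seed=1):
    """Wang-Landau estimate of log g(E) for a periodic 1-D Ising chain:
    accept flips with min(1, g(E)/g(E_new)), add ln(f) to log g at the
    current energy, and halve ln(f) whenever the histogram is flat."""
    rng = random.Random(seed)
    spins = [1] * n
    E = -sum(spins[i] * spins[(i + 1) % n] for i in range(n))
    log_g, hist = {}, {}
    log_f = 1.0
    while log_f > f_final:
        for _ in range(10000):
            i = rng.randrange(n)
            # Energy change from flipping spin i (periodic neighbors).
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
            E_new = E + dE
            if math.log(rng.random() + 1e-300) < log_g.get(E, 0.0) - log_g.get(E_new, 0.0):
                spins[i] *= -1
                E = E_new
            log_g[E] = log_g.get(E, 0.0) + log_f
            hist[E] = hist.get(E, 0) + 1
        if min(hist.values()) > flatness * (sum(hist.values()) / len(hist)):
            log_f /= 2.0
            hist = {}
    return log_g

log_g = wang_landau_ising()
```

For n = 8 the exact density of states is g(E) = 2*C(8, k) with E = -8 + 2k over the even domain-wall counts k, so ratios such as g(0)/g(-8) = 70 provide a direct accuracy check; REWL and STMD refine this basic scheme for efficiency.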
NASA Astrophysics Data System (ADS)
Li, R.; Wang, S.-Y.; Gillies, R. R.
2016-04-01
Large biases associated with climate projections are problematic when it comes to their regional application in the assessment of water resources and ecosystems. Here, we demonstrate a method that can reduce systematic biases in regional climate projections. The global and regional climate models employed to demonstrate the technique are the Community Climate System Model (CCSM) and the Weather Research and Forecasting (WRF) model. The method first utilized a statistical regression technique and a global reanalysis dataset to correct biases in the CCSM-simulated variables (e.g., temperature, geopotential height, specific humidity, and winds) that are subsequently used to drive the WRF model. The WRF simulations were conducted for the western United States and were driven with (a) global reanalysis, (b) original CCSM, and (c) bias-corrected CCSM data. The bias-corrected CCSM data led to a more realistic regional climate simulation of precipitation and associated atmospheric dynamics, as well as snow water equivalent (SWE), in comparison to the original CCSM-driven WRF simulation. Since most climate applications rely on existing global model output as the forcing data (i.e., they cannot re-run or change the global model), which often contain large biases, this method provides an effective and economical tool to reduce biases in regional climate downscaling simulations of water resource variables.
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James
2014-01-01
Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1987-01-01
A dynamic rain attenuation prediction model is developed for use in obtaining the temporal characteristics, on time scales of minutes or hours, of satellite communication link availability. Analogous to the associated static rain attenuation model, which yields yearly attenuation predictions, this dynamic model is applicable at any location in the world that is characterized by the static rain attenuation statistics peculiar to the geometry of the satellite link and the rain statistics of the location. Such statistics are calculated by employing the formalism of Part I of this report. In fact, the dynamic model presented here is an extension of the static model and reduces to the static model in the appropriate limit. By assuming that rain attenuation is dynamically described by a first-order stochastic differential equation in time and that this random attenuation process is a Markov process, an expression for the associated transition probability is obtained by solving the related forward Kolmogorov equation. This transition probability is then used to obtain such temporal rain attenuation statistics as attenuation durations and allowable attenuation margins versus control system delay.
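The modeling chain in this abstract can be written out generically; the drift f and diffusion g below are placeholder coefficients standing in for the report's calibrated expressions, which are not reproduced here:

```latex
% First-order stochastic model for the attenuation A(t),
% with n(t) a white-noise driving term:
\frac{dA}{dt} = f(A,t) + g(A,t)\,n(t).

% The Markov assumption makes the transition probability
% p(A, t \mid A_0, t_0) obey the forward Kolmogorov
% (Fokker--Planck) equation:
\frac{\partial p}{\partial t}
  = -\frac{\partial}{\partial A}\bigl[f(A,t)\,p\bigr]
  + \frac{1}{2}\,\frac{\partial^2}{\partial A^2}\bigl[g^2(A,t)\,p\bigr].
```

Solving this equation for p, subject to boundary conditions set by the static attenuation statistics, is what yields the attenuation-duration and margin statistics mentioned at the end of the abstract.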
The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute works to provide information on cancer statistics in an effort to reduce the burden of cancer among the U.S. population.
NASA Astrophysics Data System (ADS)
Hermann, Claudine
Statistical physics bridges the properties of a macroscopic system and the microscopic behavior of its constituent particles, a connection otherwise impossible to establish because of the enormous magnitude of Avogadro's number. Numerous systems of today's key technologies - such as semiconductors or lasers - are macroscopic quantum objects; only statistical physics allows their fundamentals to be understood. Accordingly, this graduate text also focuses on particular applications, such as the properties of electrons in solids, and on radiation thermodynamics and the greenhouse effect.
Li, Zheng; Borner, Arnaud; Levin, Deborah A
2014-06-14
Homogeneous water condensation and ice formation in supersonic expansions to vacuum for stagnation pressures from 12 to 1000 mbar are studied using the particle-based Ellipsoidal-Statistical Bhatnagar-Gross-Krook (ES-BGK) method. We find that when condensation starts to occur, at a stagnation pressure of 96 mbar, the increase in the degree of condensation causes an increase in the rotational temperature due to the latent heat of vaporization. The simulated rotational temperature profiles along the plume expansion agree well with measurements, confirming the kinetic homogeneous condensation models and the method of simulation. Comparisons of the simulated gas and cluster number densities and cluster sizes for different stagnation pressures along the plume centerline were made, and it is found that the cluster size increases linearly with stagnation pressure, consistent with classical nucleation theory. The sensitivity of our results to the cluster nucleation model and to latent heat values based on bulk water, specific cluster size, or bulk ice is examined. In particular, the ES-BGK simulations are found to be too coarse-grained to provide information on the phase or structure of the clusters formed. For this reason, molecular dynamics simulations of water condensation in a one-dimensional free expansion are performed to simulate the conditions in the core of a plume. We find that the internal structure of the clusters formed depends on the stagnation temperature. A larger cluster of average size 21 was tracked down the expansion, and a calculation of its average internal temperature as well as a comparison of its radial distribution functions (RDFs) with values measured for solid amorphous ice clusters lead us to conclude that this cluster is in a solid-like rather than liquid form. In another molecular dynamics simulation at a much lower stagnation temperature, a larger cluster of size 324 and internal temperature 200 K was extracted from an expansion plume and
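The RDF comparison used above to classify a cluster as solid-like can be sketched as a normalized histogram of pairwise distances. For a finite cluster (no periodic box), a simple shell normalization like the following is one reasonable choice; the coordinates and bin parameters are illustrative:

```python
import math

def radial_distribution(positions, r_max, n_bins):
    """Pair-distance histogram for a finite (non-periodic) cluster,
    normalized by each spherical shell's volume and the pair density
    inside the sphere of radius r_max."""
    n = len(positions)
    dr = r_max / n_bins
    hist = [0] * n_bins
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(positions[i], positions[j])
            if r < r_max:
                hist[int(r / dr)] += 1
    volume = (4.0 / 3.0) * math.pi * r_max ** 3
    rho_pairs = n * (n - 1) / 2 / volume  # pairs per unit volume
    g = []
    for k, count in enumerate(hist):
        shell = (4.0 / 3.0) * math.pi * ((k + 1) ** 3 - k ** 3) * dr ** 3
        g.append(count / (rho_pairs * shell))
    return g
```

For a liquid-like cluster the peaks beyond the first coordination shell wash out, whereas a solid-like (amorphous-ice) cluster retains structured second and third shells - the signature the abstract's RDF comparison relies on.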
Li, Zheng; Borner, Arnaud; Levin, Deborah A
2014-06-14
Homogeneous water condensation and ice formation in supersonic expansions to vacuum for stagnation pressures from 12 to 1000 mbar are studied using the particle-based Ellipsoidal-Statistical Bhatnagar-Gross-Krook (ES-BGK) method. We find that when condensation starts to occur, at a stagnation pressure of 96 mbar, the increase in the degree of condensation causes an increase in the rotational temperature due to the latent heat of vaporization. The simulated rotational temperature profiles along the plume expansion agree well with measurements, confirming the kinetic homogeneous condensation models and the method of simulation. Comparisons of the simulated gas and cluster number densities and cluster sizes for different stagnation pressures along the plume centerline were made, and it was found that the cluster size increases linearly with stagnation pressure, consistent with classical nucleation theory. The sensitivity of our results to the cluster nucleation model and to latent heat values based on bulk water, a specific cluster size, or bulk ice is examined. In particular, the ES-BGK simulations are found to be too coarse-grained to provide information on the phase or structure of the clusters formed. For this reason, molecular dynamics simulations of water condensation in a one-dimensional free expansion, chosen to simulate the conditions in the core of a plume, are performed. We find that the internal structure of the clusters formed depends on the stagnation temperature. A cluster of average size 21 was tracked down the expansion, and a calculation of its average internal temperature, together with a comparison of its radial distribution functions (RDFs) with values measured for solid amorphous ice clusters, leads us to conclude that this cluster is in a solid-like rather than liquid form. In another molecular-dynamics simulation at a much lower stagnation temperature, a larger cluster of size 324 and internal temperature 200 K was extracted from an expansion plume and
Manos, Thanos; Robnik, Marko
2013-06-01
We study the kicked rotator in the classically fully chaotic regime using Izrailev's N-dimensional model for various N≤4000, which in the limit N→∞ tends to the quantized kicked rotator. We treat not only the case K=5, as studied previously, but also many different values of the classical kick parameter 5≤K≤35 and many different values of the quantum parameter k ∈ [5,60]. We describe the features of dynamical localization of chaotic eigenstates as a paradigm for other fully chaotic and mixed-type Hamiltonian systems, both time-periodic and time-independent (autonomous). We generalize the scaling variable Λ=l_∞/N to the case of anomalous diffusion in the classical phase space by deriving the localization length l_∞ for the case of generalized classical diffusion. We greatly improve the accuracy and statistical significance of the numerical calculations, giving rise to the following conclusions: (1) The level-spacing distribution of the eigenphases (or quasienergies) is very well described by the Brody distribution, systematically better than by other proposed models, for various Brody exponents β_BR. (2) We study the eigenfunctions of the Floquet operator and characterize their localization properties using the information entropy measure, which after normalization is given by β_loc in the interval [0,1]. The level repulsion parameters β_BR and β_loc are almost linearly related, close to the identity line. (3) We show the existence of a scaling law between β_loc and the relative localization length Λ, now including the regimes of anomalous diffusion. The above findings are important also for chaotic eigenstates in time-independent systems [Batistić and Robnik, J. Phys. A: Math. Gen. 43, 215101 (2010); arXiv:1302.7174 (2013)], where the Brody distribution is confirmed to a very high degree of precision for dynamically localized chaotic eigenstates, even in mixed-type systems (after separation of regular and chaotic eigenstates).
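The Brody distribution referred to above interpolates between Poisson statistics (β = 0) and Wigner-Dyson (GOE) statistics (β = 1). A minimal sketch of its probability density, in the standard unit-mean-spacing normalization (the function name is illustrative, not from the paper):

```python
import math

def brody_pdf(s, beta):
    """Brody level-spacing distribution P(s) = A s^beta exp(-B s^(beta+1)),
    normalized to unit area and unit mean spacing. beta = 0 gives the
    Poisson distribution exp(-s); beta = 1 gives the Wigner surmise."""
    B = math.gamma((beta + 2.0) / (beta + 1.0)) ** (beta + 1.0)
    A = (beta + 1.0) * B
    return A * s**beta * math.exp(-B * s**(beta + 1.0))
```

For β = 0 this reduces to exp(-s), and for intermediate β (as fitted in the study) it describes partially localized chaotic eigenstates.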
NASA Astrophysics Data System (ADS)
Su, Haifeng; Xiong, Zhe; Yan, Xiaodong; Dai, Xingang; Wei, Wenguang
2016-04-01
Monthly rainfall in the Heihe River Basin (HRB) was simulated by a dynamical downscaling model (DDM) and a statistical downscaling model (SDM). The rainy-season rainfall in the HRB obtained by SDM and DDM was compared with the observed datasets (OBS) over the period 2003-2012. The results showed the following: (1) Both methods reasonably reproduced the spatial pattern of rainy-season rainfall in the HRB with a high level of skill. Rainfall simulated by DDM was better than that by SDM in the upstream, with biases of -12.09 and -13.59 %, respectively; rainfall simulated by SDM was better than that by DDM in the midstream, with biases of 3.91 and -23.22 %, respectively; there was little difference between the rainfall simulated by SDM and DDM in the downstream, with biases of -10.89 and -9.50 %, respectively. (2) Both methods reasonably reproduced monthly rainfall in the rainy season in the different subregions. Rainfall simulated by DDM was better than that by SDM in May and July in the upstream, whereas rainfall simulated by SDM was closer to OBS except in August in the midstream and except in August and September in the downstream. (3) For multi-year mean rainy-season rainfall at individual stations, there was little difference between the rainfall simulated by DDM and SDM at Tuole station in the upstream, with biases of -13.16 and -12.40 %, respectively; rainfall at Zhangye station simulated by SDM was overestimated with a bias of 14.02 %, and rainfall simulated by DDM was underestimated with a bias of -14.60 %; rainfall at Dingxin station was reproduced better by DDM than by SDM, with biases of -19.34 and -32.75 %, respectively.
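The percentage biases quoted above are presumably relative biases of the simulated mean against the observed mean; a minimal sketch under that assumption (function name illustrative):

```python
def relative_bias(simulated_mean, observed_mean):
    """Relative bias (%) of a simulated rainfall mean versus the
    observed mean: negative values indicate underestimation."""
    return 100.0 * (simulated_mean - observed_mean) / observed_mean
```

For example, a simulated mean of 86.41 against an observed 100.0 gives a bias of about -13.59 %, matching the magnitude of the upstream SDM figure reported above.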
NASA Astrophysics Data System (ADS)
Frazin, Richard A.
2016-04-01
A new generation of telescopes with mirror diameters of 20 m or more, called extremely large telescopes (ELTs), has the potential to provide unprecedented imaging and spectroscopy of exo-planetary systems, if the difficulties in achieving the extremely high dynamic range required to differentiate the planetary signal from the star can be overcome to a sufficient degree. Fully utilizing the potential of ELTs for exoplanet imaging will likely require simultaneous and self-consistent determination of both the planetary image and the unknown aberrations in multiple planes of the optical system, using statistical inference based on the wavefront sensor and science camera data streams. This approach promises to overcome the most important systematic errors inherent in the various schemes based on differential imaging, such as ADI and SDI. This paper is the first in a series on this subject, in which a formalism is established for the exoplanet imaging problem, setting the stage for the statistical inference methods to follow in the future. Every effort has been made to be rigorous and complete, so that the validity of approximations to be made later can be assessed. Here, the polarimetric image is expressed in terms of aberrations in the various planes of a polarizing telescope with an adaptive optics system. Further, it is shown that current methods that utilize focal-plane sensing to correct the speckle field, e.g., electric field conjugation, rely on the tacit assumption that aberrations on multiple optical surfaces can be represented as an aberration on a single optical surface, ultimately limiting their potential effectiveness for ground-based astronomy.
Carnley, Mark V.
2016-09-30
The Design Analysis Associates (DAA) DAA H-3613i radar water-level sensor (DAA H-3613i), manufactured by Xylem Incorporated, was evaluated by the U.S. Geological Survey (USGS) Hydrologic Instrumentation Facility (HIF) for conformance to manufacturer’s accuracy specifications for measuring a distance throughout the sensor’s operating temperature range, for measuring distances from 3 to 15 feet at ambient temperatures, and for compliance with the SDI-12 serial-to-digital interface at 1200-baud communication standard. The DAA H-3613i is a noncontact water-level sensor that uses pulsed radar to measure the distance between the radar and the water surface from 0.75 to 131 feet over a temperature range of −40 to 60 degrees Celsius (°C). Manufacturer accuracy specifications that were evaluated, the test procedures that followed, and the results obtained are described in this report. The sensor’s accuracy specification of ±0.01 feet (±3 millimeters) meets USGS requirements for a primary water-stage sensor used in the operation of a streamgage. The sensor met the manufacturer’s stated accuracy specifications for water-level measurements during temperature testing at a distance of 8 feet from the target over its temperature-compensated operating range of −40 to 60 °C, except at 60 °C. At 60 °C, about half the measurements exceeded the manufacturer’s accuracy specification by not more than 0.005 feet. The sensor met the manufacturer’s stated accuracy specifications for water-level measurements during distance-accuracy testing at the tested distances from 3 to 15 feet above the water surface at the HIF.
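The pass/fail logic implied by the report (fraction of readings within the ±0.01 ft specification at a known target distance) can be sketched as follows; the function name and sample data are illustrative, not from the USGS protocol:

```python
def spec_compliance(measured_ft, reference_ft, tolerance_ft=0.01):
    """Fraction of distance measurements whose absolute error from the
    known reference distance is within the accuracy tolerance
    (default +/-0.01 ft, the manufacturer's specification above)."""
    errors = [abs(m - reference_ft) for m in measured_ft]
    within = sum(1 for e in errors if e <= tolerance_ft)
    return within / len(measured_ft)
```

At 60 °C, for instance, a compliance fraction near 0.5 would correspond to the "about half the measurements" result reported above.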
NASA Astrophysics Data System (ADS)
Goodman, Joseph W.
2000-07-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I Richard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume II W. Edwards Deming Sample Design in Business Research
NASA Astrophysics Data System (ADS)
Kardar, Mehran
2006-06-01
While many scientists are familiar with fractals, fewer are familiar with the concepts of scale-invariance and universality which underlie the ubiquity of their shapes. These properties may emerge from the collective behaviour of simple fundamental constituents, and are studied using statistical field theories. Based on lectures for a course in statistical mechanics taught by Professor Kardar at the Massachusetts Institute of Technology, this textbook demonstrates how such theories are formulated and studied. Perturbation theory, exact solutions, renormalization groups, and other tools are employed to demonstrate the emergence of scale invariance and universality, and the non-equilibrium dynamics of interfaces and directed paths in random media are discussed. Ideal for advanced graduate courses in statistical physics, it contains an integrated set of problems, with solutions to selected problems at the end of the book. A complete set of solutions is available to lecturers on a password-protected website at www.cambridge.org/9780521873413. Based on lecture notes from a course on Statistical Mechanics taught by the author at MIT. Contains 65 exercises, with solutions to selected problems. Features a thorough introduction to the methods of Statistical Field theory. Ideal for graduate courses in Statistical Physics.
1986-01-01
Official population data for the USSR are presented for 1985 and 1986. Part 1 (pp. 65-72) contains data on capitals of union republics and cities with over one million inhabitants, including population estimates for 1986 and vital statistics for 1985. Part 2 (p. 72) presents population estimates by sex and union republic, 1986. Part 3 (pp. 73-6) presents data on population growth, including birth, death, and natural increase rates, 1984-1985; seasonal distribution of births and deaths; birth order; age-specific birth rates in urban and rural areas and by union republic; marriages; age at marriage; and divorces. PMID:12178831
NASA Technical Reports Server (NTRS)
Druzhinin, I. P.; Khamyanova, N. V.; Yagodinskiy, V. N.
1974-01-01
Statistical evaluations of the significance of the relationship between abrupt changes in solar activity and discontinuities in the multi-year pattern of an epidemic process are reported. They reliably (with a probability of more than 99.9%) show the real nature of this relationship and its large contribution (about one half) to the formation of discontinuities in the multi-year pattern of the processes in question.
Mitrikas, V G
2014-01-01
The on-going 24th solar cycle (SC) is distinguished from the previous ones by low activity. In contrast, levels of proton fluxes from galactic cosmic rays (GCR) are high, which increases the proton flow striking the Earth's radiation belts (ERB). Therefore, at present the absorbed dose from ERB protons should be calculated with consideration of the tangible increase of proton intensity built into the model descriptions based on experimental measurements during the minimum between cycles 19 and 20, and the cycle 21 maximum. The absorbed dose from GCR and ERB protons follows the galactic proton dynamics, while the ERB electron dose follows the SC dynamics. The major factors that determine the absorbed dose value are the SC phase, the ISS orbital altitude, and the shielding of the dosimeters whose readings are used in the analysis. The paper presents the results of a dynamic analysis of absorbed doses measured by a variety of dosimeters, namely, R-16 (2 ionization chambers), DB8-1, DB8-2, DB8-3, and DB8-4, as a function of ISS orbit altitude and SC phase. The existence of an annual variation in the absorbed dose dynamics has been confirmed; several additional variations with periods of 17 and 52 months have been detected. Modulation of absorbed dose variations by the SC and GCR amplitudes has been demonstrated.
NASA Astrophysics Data System (ADS)
González-Lezana, Tomás; Honvault, Pascal; Scribano, Yohann
2013-08-01
The D⁺ + H2(v = 0, j = 0, 1) → HD + H⁺ reaction has been investigated in the low-energy regime by means of a statistical quantum mechanical (SQM) method. Reaction probabilities and integral cross sections (ICSs) between a collisional energy of 10⁻⁴ eV and 0.1 eV have been calculated and compared with previously reported results of a time-independent quantum mechanical (TIQM) approach. The TIQM results exhibit a dense profile with numerous narrow resonances down to Ec ∼ 10⁻² eV, and for the case of H2(v = 0, j = 0) a prominent peak is found at ∼2.5 × 10⁻⁴ eV. The analysis at the state-to-state level reveals that this feature originates in those processes which yield the formation of rotationally excited HD(v' = 0, j' > 0). The statistical predictions reproduce reasonably well the overall behaviour of the TIQM ICSs in the higher energy range (Ec ⩾ 10⁻³ eV). Thermal rate constants are in qualitative agreement for the whole range of temperatures investigated in this work, 10-100 K, although the SQM values remain above the TIQM results for both initial H2 rotational states, j = 0 and 1. The enlargement of the asymptotic region for the statistical approach is crucial for a proper description at low energies. In particular, we find that the SQM method leads to rate coefficients in terms of the energy in perfect agreement with previously reported measurements if the maximum distance at which the calculation is performed increases noticeably with respect to the value employed to reproduce the TIQM results.
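The thermal rate constants mentioned above are conventionally obtained by Boltzmann-averaging the integral cross section over collision energies; the abstract does not state the formula explicitly, but the standard expression (with μ the reduced mass of the colliding pair and σ the ICS) is:

```latex
k(T) \;=\; \left( \frac{8}{\pi \, \mu \, (k_B T)^3} \right)^{1/2}
\int_0^{\infty} \sigma(E_c)\, E_c \, e^{-E_c / k_B T} \, dE_c
```

This makes clear why the low-energy (sub-10⁻³ eV) behaviour of σ(Ec) dominates k(T) at the 10-100 K temperatures studied.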
NASA Astrophysics Data System (ADS)
Graham, D. B.; Cairns, Iver H.; Skjaeraasen, O.; Robinson, P. A.
2012-02-01
The temperature ratio Ti/Te of ions to electrons affects both the ion-damping rate and the ion-acoustic speed in plasmas. The effects of changing the ion-damping rate and ion-acoustic speed are investigated for electrostatic strong turbulence and electromagnetic strong turbulence in three dimensions. When ion damping is strong, density wells relax in place and act as nucleation sites for the formation of new wave packets. In this case, the density perturbations are primarily density wells supported by the ponderomotive force. For weak ion damping, corresponding to low Ti/Te, ion-acoustic waves are launched radially outwards when wave packets dissipate at burnout, thereby increasing the level of density perturbations in the system and thus raising the level of scattering of Langmuir waves off density perturbations. Density wells no longer relax in place so renucleation at recent collapse sites no longer occurs; instead wave packets form in background low density regions, such as superpositions of troughs of propagating ion-acoustic waves. This transition is found to occur at Ti/Te ≈ 0.1. The change in behavior with Ti/Te is shown to change the bulk statistical properties, scaling behavior, spectra, and field statistics of strong turbulence. For Ti/Te ≳ 0.1, the electrostatic results approach the predictions of the two-component model of Robinson and Newman, and good agreement is found for Ti/Te ≳ 0.15.
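For orientation, the ion-acoustic speed that the abstract refers to is, in the standard textbook form (with adiabatic 1D ions), c_s = sqrt(k_B(Te + 3 Ti)/mi); this expression is a general plasma-physics result, not taken from the paper itself:

```python
import math

def ion_acoustic_speed(Te_eV, Ti_eV, mi_kg):
    """Ion-acoustic speed c_s = sqrt(kB*(Te + 3*Ti)/mi) in m/s,
    with temperatures given in eV and the ion mass in kg. Raising
    Ti/Te at fixed Te both raises c_s and strengthens ion Landau
    damping, the two effects studied in the work above."""
    eV_to_J = 1.602176634e-19
    return math.sqrt(eV_to_J * (Te_eV + 3.0 * Ti_eV) / mi_kg)
```

For a hydrogen plasma with Te = 10 eV and Ti = 1 eV (Ti/Te = 0.1, the transition value found above), this gives a speed of a few tens of km/s.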
NASA Technical Reports Server (NTRS)
Balcer-Kubiczek, E. K.; Zhang, X. F.; Harrison, G. H.; Zhou, X. J.; Vigneulle, R. M.; Ove, R.; McCready, W. A.; Xu, J. F.
1999-01-01
PURPOSE: Differences in gene expression underlie the phenotypic differences between irradiated and unirradiated cells. The goal was to identify late-transcribed genes following irradiations differing in quality, and to determine the RBE of 1 GeV/n Fe ions. MATERIALS AND METHODS: Clonogenic assay was used to determine the RBE of Fe ions. Differential hybridization to cDNA target clones was used to detect differences in expression of corresponding genes in mRNA samples isolated from MCF7 cells irradiated with iso-survival doses of Fe ions (0 or 2.5 Gy) or fission neutrons (0 or 1.2 Gy) 7 days earlier. Northern analysis was used to confirm differential expression of cDNA-specific mRNA and to examine expression kinetics up to 2 weeks after irradiation. RESULTS: Fe ion RBE values were between 2.2 and 2.6 in the lines examined. Two of 17 differentially expressed cDNA clones were characterized. hpS2 mRNA was elevated from 1 to 14 days after irradiation, whereas CIP1/WAF1/SDI1 remained elevated from 3 h to 14 days after irradiation. Induction of hpS2 mRNA by irradiation was independent of p53, whereas induction of CIP1/WAF1/SDI1 was observed only in wild-type p53 lines. CONCLUSIONS: A set of coordinately regulated genes, some of which are independent of p53, is associated with change in gene expression during the first 2 weeks post-irradiation.
NASA Technical Reports Server (NTRS)
Vangelder, B. H. W.
1978-01-01
Non-Bayesian statistics were used in simulation studies centered around laser range observations to LAGEOS. The capabilities of satellite laser ranging, especially in connection with relative station positioning, are evaluated. The satellite measurement system under investigation may fall short in precise determinations of the earth's orientation (precession and nutation) and earth's rotation, as opposed to systems such as very long baseline interferometry (VLBI) and lunar laser ranging (LLR). Relative station positioning, determination of (differential) polar motion, positioning of stations with respect to the earth's center of mass, and determination of the earth's gravity field should be easily realized by satellite laser ranging (SLR). The last two features should be considered as best (or solely) determinable by SLR in contrast to VLBI and LLR.
Cosmetic Plastic Surgery Statistics
2014 Cosmetic Plastic Surgery Statistics Cosmetic Procedure Trends 2014 Plastic Surgery Statistics Report Please credit the AMERICAN SOCIETY OF PLASTIC SURGEONS when citing statistical data or using ...
NASA Astrophysics Data System (ADS)
Jin, Yaqiu; Yan, Fenghua
2003-04-01
A massive sandstorm enveloped most of northern China during the spring season of 2002. Monitoring the evolution of sandstorms and desertification has become one of the most serious problems for China's environment. Since 1989, one of the most advanced operational passive microwave sensors has been the DMSP SSM/I (special sensor microwave imager), operated at seven channels (19, 37, and 85 GHz with vertical and horizontal polarization, and 22 GHz with vertical polarization only). In this paper, the sandstorm and desertification indexes, SDI and DI, are derived from the radiative transfer equation and are employed with multi-channel measurements of the DMSP SSM/I for monitoring sandstorms and desertification in northern China. SSM/I data from 1997 and 2001 are employed. An algorithm based on the Getis statistic is developed to categorize the spatial correlation and its evolution during these days. It is demonstrated that the SSM/I indexes, SDI and DI, and their Getis statistics are well suited for monitoring sandstorms and desertification.
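The Getis statistic referred to above is a local spatial-association measure; in its simplest (unstandardized) form, G_i(d) is the share of the attribute total contributed by sites within distance d of site i. A minimal sketch with a binary contiguity matrix (names and data illustrative, not from the paper):

```python
def getis_g(values, weights, i):
    """Local Getis G_i statistic for site i: the sum of neighbouring
    attribute values (weights[i][j] = 1 if site j is within the
    chosen distance of site i, else 0; j != i) divided by the
    attribute total over all other sites. High values flag local
    clusters of high attribute values (e.g. a sandstorm index)."""
    n = len(values)
    num = sum(weights[i][j] * values[j] for j in range(n) if j != i)
    den = sum(values[j] for j in range(n) if j != i)
    return num / den
```

In practice G_i is standardized against its expectation under spatial randomness before hot spots are declared; the sketch shows only the core ratio.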
NASA Astrophysics Data System (ADS)
Schläppy, Romain; Eckert, Nicolas; Jomelli, Vincent; Grancher, Delphine; Brunstein, Daniel; Stoffel, Markus; Naaim, Mohamed
2013-04-01
Documenting past avalanche activity represents an indispensable step in avalanche hazard assessment. Nevertheless, (i) archival records of past avalanche events do not normally yield data with satisfying spatial and temporal resolution and (ii) precision concerning runout distance is generally poorly defined. In addition, historic documentation is most often (iii) biased toward events that caused damage to structures or loss of life on the one hand and (iv) undersampled in unpopulated areas on the other hand. On forested paths, dendrogeomorphology has been demonstrated to be a powerful tool to reconstruct past avalanche activity with annual resolution and for periods covering the past decades to centuries. This method is based on the fact that living trees may be affected by snow avalanches during their flow and deposition phases. Affected trees will react to these disturbances with a certain growth response. An analysis of the responses recorded in tree rings, coupled with an evaluation of the position of reacting trees within the path, allows the dendrogeomorphic expert to identify past snow avalanche events and deduce their minimum runout distances. The objective of the work presented here is first to dendrochronologically reconstruct snow avalanche activity in the Château Jouan path located near Montgenèvre in the French Alps. Minimal runout distances are then determined for each reconstructed event by considering the point of furthest reach along the topographic profile. Related empirical return intervals are evaluated, combining the extent of each event with the average local frequency of the dendrological record. In a second step, the runout distance distribution derived from the dendrochronological reconstruction is compared to the one derived from historical archives and to high-return-period avalanches predicted by an up-to-date, locally calibrated statistical-numerical model. It appears that dendrochronological reconstructions correspond mostly to
Faugeras, Blaise; Maury, Olivier
2005-10-01
We develop an advection-diffusion size-structured fish population dynamics model and apply it to simulate the skipjack tuna population in the Indian Ocean. The model is fully spatialized, and movements are parameterized with oceanographic and biological data; thus it naturally reacts to environmental changes. We first formulate an initial-boundary value problem and prove the existence of a unique positive solution. We then discuss the numerical scheme chosen for the integration of the simulation model. In a second step we address the parameter estimation problem for such a model. With the help of automatic differentiation, we derive the adjoint code which is used to compute the exact gradient of a Bayesian cost function measuring the distance between the outputs of the model and catch and length frequency data. A sensitivity analysis shows that not all parameters can be estimated from the data. Finally, twin experiments in which perturbed parameters are recovered from simulated data are successfully conducted.
NASA Astrophysics Data System (ADS)
Tanoh, K. S.; Adohi, B. J.-P.; Coulibaly, I. S.; Amory-Mazaudier, C.; Kobea, A. T.; Assamoi, P.
2015-01-01
In this paper, we report on the night-time equatorial F-layer height behaviour at Korhogo (9.2° N, 5° W; 2.4° S dip lat), Ivory Coast, in the West African sector during the solar minimum period 1995-1997. The data were collected from quarter-hourly ionograms of an Ionospheric Prediction Service (IPS) 42-type vertical sounder. The main focus of this work was to study the seasonal changes in the F-layer height and to clarify the equinox transition process recently evidenced at Korhogo during 1995, the year of declining solar flux activity. The F-layer height was found to vary strongly with time, with up to three main phases. The night-to-night variability of these morphological phases was then analysed. The early post-sunset slow rise, commonly associated with rapid chemical recombination processes in the bottom part of the F layer, remained featureless and was observed regardless of the date. By contrast, the following event either presented as the post-sunset height peak associated with the evening E × B drift or was delayed to the midnight sector, thus involving another mechanism. The statistical analysis of the occurrence of these events throughout the solar minimum period 1995-1997 revealed two main F-layer height patterns, each characteristic of a specific season. The one with the post-sunset height peak was associated with the northern winter period, whereas the other, with the midnight height peak, characterized the northern summer period. The transition process from one pattern to the other took place during the equinox periods and was found to last only a few weeks. We discuss these results in the light of earlier works.
NASA Astrophysics Data System (ADS)
Yeung, Chi Ho
In this thesis, we study two interdisciplinary problems in the framework of statistical physics, which show the broad applicability of physics on problems with various origins. The first problem corresponds to an optimization problem in allocating resources on random regular networks. Frustrations arise from competition for resources. When the initial resources are uniform, different regimes with discrete fractions of satisfied nodes are observed, resembling the Devil's staircase. We apply the spin glass theory in analyses and demonstrate how functional recursions are converted to simple recursions of probabilities. Equilibrium properties such as the average energy and the fraction of free nodes are derived. When the initial resources are bimodally distributed, increases in the fraction of rich nodes induce a glassy transition, entering a glassy phase described by the existence of multiple metastable states, in which we employ the replica symmetry breaking ansatz for analysis. The second problem corresponds to the study of multi-agent systems modeling financial markets. Agents in the system trade among themselves, and self-organize to produce macroscopic trading behaviors resembling the real financial markets. These behaviors include the arbitraging activities, the setting up and the following of price trends. A phase diagram of these behaviors is obtained, as a function of the sensitivity of price and the market impact factor. We finally test the applicability of the models with real financial data including the Hang Seng Index, the Nasdaq Composite and the Dow Jones Industrial Average. A substantial fraction of agents gains faster than the inflation rate of the indices, suggesting the possibility of using multi-agent systems as a tool for real trading.
Rabbel, Hauke; Frey, Holger; Schmid, Friederike
2015-12-28
The reaction of AB_m monomers (m = 2, 3) with a multifunctional B_f-type polymer chain ("hypergrafting") is studied by coarse-grained molecular dynamics simulations. The AB_m monomers are hypergrafted using the slow monomer addition strategy. Fully dendronized, i.e., perfectly branched polymers are also simulated for comparison. The degree of branching of the molecules obtained with the "hypergrafting" process critically depends on the rate with which monomers attach to inner monomers compared to terminal monomers. This ratio is more favorable if the AB_m monomers have lower reactivity, since the free monomers then have time to diffuse inside the chain. Configurational chain properties are also determined, showing that the stretching of the polymer backbone as a consequence of the "hypergrafting" procedure is much less pronounced than for perfectly dendronized chains. Furthermore, we analyze the scaling of various quantities with molecular weight M for large M (M > 100). The Wiener index scales as M^2.3, which is intermediate between linear chains (M^3) and perfectly branched polymers (M^2 ln M). The polymer size, characterized by the radius of gyration R_g or the hydrodynamic radius R_h, is found to scale as R_{g,h} ∝ M^ν with ν ≈ 0.38, which lies between the exponent of diffusion-limited aggregation (ν = 0.4) and the mean-field exponent predicted by Konkolewicz and co-workers [Phys. Rev. Lett. 98, 238301 (2007)] (ν = 0.33).
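Scaling exponents such as ν are typically extracted by linear regression in log-log space. A minimal sketch of this standard procedure, using synthetic (M, R_g) data rather than the simulation output:

```python
import numpy as np

# Synthetic power-law data R_g ~ M^0.38 with small multiplicative noise.
# The exponent 0.38 here mirrors the abstract's value but the data are invented.
M = np.array([100, 200, 400, 800, 1600], dtype=float)
rng = np.random.default_rng(0)
Rg = M**0.38 * np.exp(rng.normal(0.0, 0.01, M.size))

# Least-squares fit of log Rg vs log M; the slope estimates the exponent nu.
slope, intercept = np.polyfit(np.log(M), np.log(Rg), 1)
print(f"estimated nu = {slope:.2f}")  # close to the true exponent 0.38
```

The same fit applied to the Wiener index would recover an exponent near 2.3 for the hypergrafted chains.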
NASA Astrophysics Data System (ADS)
Moreira, Antonio Jose De Araujo
Soybean, Glycine max (L.) Merr., is an important source of oil and protein worldwide, and soybean cyst nematode (SCN), Heterodera glycines, is among the most important yield-limiting factors in soybean production worldwide. Early detection of SCN is difficult because soybean plants infected by SCN often do not exhibit visible symptoms. It was hypothesized, however, that reflectance data obtained by remote sensing from soybean canopies may be used to detect plant stress caused by SCN infection. Moreover, reflectance measurements may be related to soybean growth and yield. Two field experiments were conducted from 2000 to 2002 to study the relationships among reflectance data, quantity and quality of soybean yield, and SCN population densities. The best relationships between reflectance and the quantity of soybean grain yield occurred when reflectance data were obtained late August to early September. Similarly, reflectance was best related to seed oil and seed protein content and seed size when measured during late August/early September. Grain quality-reflectance relationships varied spatially and temporally. Reflectance measured early or late in the season had the best relationships with SCN population densities measured at planting. Soil properties likely affected reflectance measurements obtained at the beginning of the season and somehow may have been related to SCN population densities at planting. Reflectance data obtained at the end of the growing season likely was affected by early senescence of SCN-infected soybeans. Spatio-temporal aspects of SCN population densities in both experiments were assessed using spatial statistics and regression analyses. In the 2000 and 2001 growing seasons, spring-to-fall changes in SCN population densities were best related to SCN population densities at planting for both experiments. However, within-season changes in SCN population densities were best related to SCN population densities at harvest for both experiments in
Statistical ecology comes of age
Gimenez, Olivier; Buckland, Stephen T.; Morgan, Byron J. T.; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M.; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M.; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric
2014-01-01
The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1–4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data. PMID:25540151
Predict! Teaching Statistics Using Informational Statistical Inference
ERIC Educational Resources Information Center
Makar, Katie
2013-01-01
Statistics is one of the most widely used topics for everyday life in the school mathematics curriculum. Unfortunately, the statistics taught in schools focuses on calculations and procedures before students have a chance to see it as a useful and powerful tool. Researchers have found that a dominant view of statistics is as an assortment of tools…
Rossell, David
2016-01-01
Big Data brings unprecedented power to address scientific, economic and societal issues, but also amplifies the possibility of certain pitfalls. These include using purely data-driven approaches that disregard understanding the phenomenon under study, aiming at a dynamically moving target, ignoring critical data collection issues, summarizing or preprocessing the data inadequately and mistaking noise for signal. We review some success stories and illustrate how statistical principles can help obtain more reliable information from data. We also touch upon current challenges that require active methodological research, such as strategies for efficient computation, integration of heterogeneous data, extending the underlying theory to increasingly complex questions and, perhaps most importantly, training a new generation of scientists to develop and deploy these strategies. PMID:27722040
Statistical physics "Beyond equilibrium"
Ecke, Robert E
2009-01-01
The scientific challenges of the 21st century will increasingly involve competing interactions, geometric frustration, spatial and temporal intrinsic inhomogeneity, nanoscale structures, and interactions spanning many scales. We will focus on a broad class of emerging problems that will require new tools in non-equilibrium statistical physics and that will find application in new material functionality, in predicting complex spatial dynamics, and in understanding novel states of matter. Our work will encompass materials under extreme conditions involving elastic/plastic deformation, competing interactions, intrinsic inhomogeneity, frustration in condensed matter systems, scaling phenomena in disordered materials from glasses to granular matter, quantum chemistry applied to nano-scale materials, soft-matter materials, and spatio-temporal properties of both ordinary and complex fluids.
NASA Astrophysics Data System (ADS)
Barnston, Anthony G.; He, Yuxiang; Glantz, Michael H.
1999-02-01
Critical reviews of forecasts of ENSO conditions, based on a set of 15 dynamical and statistical models, are given for the 1997-98 El Niño event and the initial stages of the 1998-99 La Niña. While many of the models forecasted some degree of warming one to two seasons prior to the onset of the El Niño in boreal spring of 1997, none predicted its strength until the event was already becoming very strong in late spring. Neither the dynamical nor the statistical models, as groups, performed significantly better than the other during this episode. The best performing statistical models and dynamical models forecast SST anomalies of about +1°C (vs 2.5°-3° observed) in the Niño 3.4 region prior to any observed positive anomalies. The most comprehensive dynamical models performed better than the simple dynamical models. Once the El Niño had developed in mid-1997, a larger set of models was able to forecast its peak in late 1997 and dissipation and reversal to cold conditions in late spring/early summer 1998. Overall, however, skill for these recent two years does not appear greater than that found over an earlier (1982-93) period. In both cases, median model correlation skill averaged over lead times of one to three seasons is near or just above 0.6. Because ENSO extremes usually develop in boreal spring or early summer and persist through the following winter, forecasting impact tendencies in extratropical North America for winter (when impacts are most pronounced) at 5 months of lead time is not difficult, requiring only good observations of the summer ENSO state and knowledge of the winter teleconnections. Because of the strength of the 1997-98 El Niño and the consequent skill of 5-month lead forecasts of U.S. winter 1997-98 impacts, the success of these forecasts was noticed to an unprecedented extent by the general public. However, forecasting impacts in austral winter that occur simultaneously with the initial appearance of an ENSO extreme (e.g., in Chile
Tsallis statistics and neurodegenerative disorders
NASA Astrophysics Data System (ADS)
Iliopoulos, Aggelos C.; Tsolaki, Magdalini; Aifantis, Elias C.
2016-08-01
In this paper, we perform statistical analysis of time series deriving from four neurodegenerative disorders, namely epilepsy, amyotrophic lateral sclerosis (ALS), Parkinson's disease (PD), and Huntington's disease (HD). The time series are concerned with electroencephalograms (EEGs) of healthy and epileptic states, as well as gait dynamics (in particular stride intervals) in ALS, PD and HD. We study data concerning one subject for each neurodegenerative disorder and one healthy control. The analysis is based on Tsallis non-extensive statistical mechanics and in particular on the estimation of the Tsallis q-triplet, namely {q_stat, q_sen, q_rel}. The deviation of the Tsallis q-triplet from unity indicates non-Gaussian statistics and long-range dependencies for all time series considered. In addition, the results reveal the efficiency of Tsallis statistics in capturing differences in brain dynamics between healthy and epileptic states, as well as differences between ALS, PD and HD patients and healthy control subjects. The results indicate that estimations of Tsallis q-indices could be used as possible biomarkers, along with others, for improving classification and prediction of epileptic seizures, as well as for studying the complex gait dynamics of various diseases, providing new insights into severity, medication and fall risk, and improving therapeutic interventions.
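For reference, Tsallis statistics rests on the non-extensive entropy S_q = (1 − Σ p_i^q)/(q − 1), which recovers the Boltzmann-Gibbs/Shannon entropy in the limit q → 1. A minimal illustration of this definition (not the q-triplet estimation procedure used in the paper):

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q of a discrete distribution p (k_B = 1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return -np.sum(p * np.log(p))          # Shannon limit as q -> 1
    return (1.0 - np.sum(p**q)) / (q - 1.0)

p = np.full(4, 0.25)                           # uniform distribution over 4 states
print(tsallis_entropy(p, 1.0))                 # ln 4, about 1.386
print(tsallis_entropy(p, 2.0))                 # (1 - 4*0.25^2) / 1 = 0.75
```

For a uniform distribution over n states the general value is S_q = (1 − n^(1−q))/(q − 1), which the q = 2, n = 4 case above satisfies.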
Statistical Reference Datasets
National Institute of Standards and Technology Data Gateway
Statistical Reference Datasets (Web, free access). The Statistical Reference Datasets project is also supported by the Standard Reference Data Program. The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of statistical software.
Explorations in statistics: statistical facets of reproducibility.
Curran-Everett, Douglas
2016-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eleventh installment of Explorations in Statistics explores statistical facets of reproducibility. If we obtain an experimental result that is scientifically meaningful and statistically unusual, we would like to know that our result reflects a general biological phenomenon that another researcher could reproduce if (s)he repeated our experiment. But more often than not, we may learn this researcher cannot replicate our result. The National Institutes of Health and the Federation of American Societies for Experimental Biology have created training modules and outlined strategies to help improve the reproducibility of research. These particular approaches are necessary, but they are not sufficient. The principles of hypothesis testing and estimation are inherent to the notion of reproducibility in science. If we want to improve the reproducibility of our research, then we need to rethink how we apply fundamental concepts of statistics to our science.
Statistical dynamics of early river networks
NASA Astrophysics Data System (ADS)
Wang, Xu-Ming; Wang, Peng; Zhang, Ping; Hao, Rui; Huo, Jie
2012-10-01
Based on a local erosion rule and fluctuations in rainfall, geology and parameters of a river channel, a generalized Langevin equation is proposed to describe the random prolongation of a river channel. This equation is transformed into the Fokker-Planck equation to follow the early evolution of a river network and the variation of the probability distribution of channel lengths. The general solution of the equation is in the product form of two terms: one term is a power law and the other is exponential. This distribution shows a complete history of a river network evolving from its infancy to "adulthood". The infancy is characterized by a Gaussian distribution of the channel lengths, while the adulthood is marked by a power law distribution of the channel lengths. The variation of the distribution from the Gaussian to the power law displays the gradual development of the river network. The distribution of basin areas is obtained by means of Hack's law. These results provide new understanding of river networks.
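A Langevin equation of this kind is commonly integrated with the Euler-Maruyama scheme, so that an ensemble of channels yields an evolving length distribution analogous to the one the Fokker-Planck equation describes. A minimal sketch with a generic constant drift and noise (hypothetical parameters, not the authors' erosion model):

```python
import numpy as np

# Euler-Maruyama integration of dL = v dt + sigma dW for an ensemble of
# channel lengths L. Drift v and noise sigma are illustrative placeholders.
rng = np.random.default_rng(1)
n_channels, n_steps, dt = 5000, 1000, 0.01
v, sigma = 1.0, 0.5

lengths = np.zeros(n_channels)
for _ in range(n_steps):
    lengths += v * dt + sigma * np.sqrt(dt) * rng.normal(size=n_channels)

print(lengths.mean())  # close to v * n_steps * dt = 10
```

With a state-dependent drift and noise (as in the paper's generalized equation), the same scheme applies; only the increment expression changes.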
NASA Astrophysics Data System (ADS)
Spera, F. J.; Martin, B.; Creamer, J. B.; Nevins, D.; Cutler, I.; Ghiorso, M. S.; Tikunoff, D.
2010-12-01
Empirical Potential Molecular Dynamics (EPMD) simulations have been carried out for molten MgSiO3, Mg2SiO4, CaMgSi2O6, CaAl2Si2O8 and 1-bar eutectic liquid in the binary system CaMgSi2O6-CaAl2Si2O8 using a Coulomb-Born-Mayer-van der Waals pair potential form and the potential parameters from Matsui (1996, GRL 23:395) for the system CaO-MgO-Al2O3-SiO2. Simulations were performed in the microcanonical ensemble (NEV) with 8000 atoms, a 1 fs time step, and simulation durations up to 2 ns. Computations were carried out every 500 K over a temperature range of 2500 - 5000 K along 10-20 isochores for each composition to insure good coverage in P-T space. During run T and P fluctuations, giving the uncertainty of state point coordinates was typically ± 30 K and ± 0.5 GPa, respectively. Coordination statistics are determined by counting nearest neighbor configurations up to a cutoff defined by the first minima of the pair correlation function. A complete set of coordination statistics was collected at each state point for each composition. At each state point self-diffusivity of each atom was determined from the Einstein relation between Mean Square Displacement and time. Shear viscosity was computed for a subset of state points using Green-Kubo linear response theory, by studying the autocorrelated regressions of spontaneous fluctuations of appropriate components of the stress tensor. Thermodynamic models (and EOS) for each liquid previously developed from these simulations based on combining the Rosenfeld-Tarazona (1998, Mol Phys 95:141) potential energy-temperature scaling law with the Universal EOS (1986, J Phys C, 19:L467) enable self-consistent computation of liquid sound speeds and isochoric heat capacity used to develop phonon thermal conductivity values at high T and P. Self-diffusivity, shear viscosity and phonon thermal conductivity values from the MD simulations vary systematically with composition, temperature and pressure. These systematic relations correlate
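The Einstein relation invoked above extracts the self-diffusivity D from the long-time slope of the mean square displacement, MSD(t) = 6 D t in three dimensions. A minimal sketch using a synthetic random walk with known D, rather than the EPMD trajectories:

```python
import numpy as np

# Synthetic 3D random walk for n_atoms particles with a known diffusivity:
# each Cartesian displacement per step has variance 2 * D * dt.
rng = np.random.default_rng(2)
n_atoms, n_steps, dt, D_true = 2000, 500, 1.0, 0.1
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), (n_steps, n_atoms, 3))
positions = np.cumsum(steps, axis=0)           # trajectories from the origin

# MSD(t): squared displacement summed over x,y,z, averaged over atoms.
t = dt * np.arange(1, n_steps + 1)
msd = np.mean(np.sum(positions**2, axis=2), axis=1)

# Einstein relation: D = slope of MSD(t) divided by 6 (3D).
D_est = np.polyfit(t, msd, 1)[0] / 6.0
print(D_est)  # close to D_true = 0.1
```

In production analyses the fit is restricted to the diffusive (linear) regime of the MSD and time-origin averaging is used; both refinements are omitted here.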
Statistical Ensemble of Large Eddy Simulations
NASA Technical Reports Server (NTRS)
Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)
2001-01-01
A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LESs provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time-developing plane wake. It is found that the results are almost independent of the number of LESs in the statistical ensemble provided that the ensemble contains at least 16 realizations.
A statistical mechanical problem?
Costa, Tommaso; Ferraro, Mario
2014-01-01
The problem of deriving the processes of perception and cognition or the modes of behavior from states of the brain appears to be unsolvable in view of the huge numbers of elements involved. However, neural activities are not random, nor independent, but constrained to form spatio-temporal patterns, and thanks to these restrictions, which in turn are due to connections among neurons, the problem can at least be approached. The situation is similar to what happens in large physical ensembles, where global behaviors are derived by microscopic properties. Despite the obvious differences between neural and physical systems a statistical mechanics approach is almost inescapable, since dynamics of the brain as a whole are clearly determined by the outputs of single neurons. In this paper it will be shown how, starting from very simple systems, connectivity engenders levels of increasing complexity in the functions of the brain depending on specific constraints. Correspondingly levels of explanations must take into account the fundamental role of constraints and assign at each level proper model structures and variables, that, on one hand, emerge from outputs of the lower levels, and yet are specific, in that they ignore irrelevant details. PMID:25228891
Developments in Statistical Education.
ERIC Educational Resources Information Center
Kapadia, Ramesh
1980-01-01
The current status of statistics education at the secondary level is reviewed, with particular attention focused on the various instructional programs in England. A description and preliminary evaluation of the Schools Council Project on Statistical Education is included. (MP)
Mathematical and statistical analysis
NASA Technical Reports Server (NTRS)
Houston, A. Glen
1988-01-01
The goal of the mathematical and statistical analysis component of RICIS is to research, develop, and evaluate mathematical and statistical techniques for aerospace technology applications. Specific research areas of interest include modeling, simulation, experiment design, reliability assessment, and numerical analysis.
The standard map: From Boltzmann-Gibbs statistics to Tsallis statistics
NASA Astrophysics Data System (ADS)
Tirnakli, Ugur; Borges, Ernesto P.
2016-03-01
As well known, Boltzmann-Gibbs statistics is the correct way of thermostatistically approaching ergodic systems. On the other hand, nontrivial ergodicity breakdown and strong correlations typically drag the system into out-of-equilibrium states where Boltzmann-Gibbs statistics fails. For a wide class of such systems, it has been shown in recent years that the correct approach is to use Tsallis statistics instead. Here we show how the dynamics of the paradigmatic conservative (area-preserving) standard map exhibits, in an exceptionally clear manner, the crossing from one statistics to the other. Our results unambiguously illustrate the domains of validity of both Boltzmann-Gibbs and Tsallis statistical distributions. Since various important physical systems from particle confinement in magnetic traps to autoionization of molecular Rydberg states, through particle dynamics in accelerators and comet dynamics, can be reduced to the standard map, our results are expected to enlighten and enable an improved interpretation of diverse experimental and observational results.
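For readers unfamiliar with it, the Chirikov standard map is the area-preserving map p_{n+1} = p_n + K sin θ_n, θ_{n+1} = θ_n + p_{n+1} (mod 2π), where the parameter K controls the transition from regular to chaotic dynamics. A minimal iteration routine (the statistical analysis of the paper is beyond this sketch):

```python
import numpy as np

def standard_map_orbit(theta0, p0, K, n):
    """Iterate the Chirikov standard map n times on the 2*pi torus."""
    theta, p = theta0, p0
    orbit = np.empty((n, 2))
    for i in range(n):
        p = (p + K * np.sin(theta)) % (2 * np.pi)   # kick
        theta = (theta + p) % (2 * np.pi)           # rotation
        orbit[i] = theta, p
    return orbit

# One orbit at K = 1.0, where chaotic and regular regions coexist.
orbit = standard_map_orbit(0.5, 0.1, K=1.0, n=10000)
print(orbit[-1])   # final (theta, p) iterate
```

Scanning K and plotting many orbits reproduces the familiar mixed phase space; the crossover statistics discussed in the abstract emerge from long-time averages over such orbits.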
ERIC Educational Resources Information Center
Bopp, Richard E.; Van Der Laan, Sharon J.
1985-01-01
Presents a search strategy for locating time-series or cross-sectional statistical data in published sources which was designed for undergraduate students who require 30 units of data for five separate variables in a statistical model. Instructional context and the broader applicability of the search strategy for general statistical research is…
ERIC Educational Resources Information Center
Strasser, Nora
2007-01-01
Avoiding statistical mistakes is important for educators at all levels. Basic concepts will help you to avoid making mistakes using statistics and to look at data with a critical eye. Statistical data is used at educational institutions for many purposes. It can be used to support budget requests, changes in educational philosophy, changes to…
ERIC Educational Resources Information Center
Lenard, Christopher; McCarthy, Sally; Mills, Terence
2014-01-01
There are many different aspects of statistics. Statistics involves mathematics, computing, and applications to almost every field of endeavour. Each aspect provides an opportunity to spark someone's interest in the subject. In this paper we discuss some ethical aspects of statistics, and describe how an introduction to ethics has been…
Statistical quality management
NASA Astrophysics Data System (ADS)
Vanderlaan, Paul
1992-10-01
Some aspects of statistical quality management are discussed. Quality has to be defined as a concrete, measurable quantity. The concepts of Total Quality Management (TQM), Statistical Process Control (SPC), and inspection are explained. In most cases SPC is better than inspection. It can be concluded that statistics has great possibilities in the field of TQM.
Nonlinear Statistical Modeling of Speech
NASA Astrophysics Data System (ADS)
Srinivasan, S.; Ma, T.; May, D.; Lazarou, G.; Picone, J.
2009-12-01
Contemporary approaches to speech and speaker recognition decompose the problem into four components: feature extraction, acoustic modeling, language modeling and search. Statistical signal processing is an integral part of each of these components, and Bayes Rule is used to merge these components into a single optimal choice. Acoustic models typically use hidden Markov models based on Gaussian mixture models for state output probabilities. This popular approach suffers from an inherent assumption of linearity in speech signal dynamics. Language models often employ a variety of maximum entropy techniques, but can employ many of the same statistical techniques used for acoustic models. In this paper, we focus on introducing nonlinear statistical models to the feature extraction and acoustic modeling problems as a first step towards speech and speaker recognition systems based on notions of chaos and strange attractors. Our goal in this work is to improve the generalization and robustness properties of a speech recognition system. Three nonlinear invariants are proposed for feature extraction: Lyapunov exponents, correlation fractal dimension, and correlation entropy. We demonstrate an 11% relative improvement on speech recorded under noise-free conditions, but show a comparable degradation occurs for mismatched training conditions on noisy speech. We conjecture that the degradation is due to difficulties in estimating invariants reliably from noisy data. To circumvent these problems, we introduce two dynamic models to the acoustic modeling problem: (1) a linear dynamic model (LDM) that uses a state space-like formulation to explicitly model the evolution of hidden states using an autoregressive process, and (2) a data-dependent mixture of autoregressive (MixAR) models. Results show that LDM and MixAR models can achieve comparable performance with HMM systems while using significantly fewer parameters. Currently we are developing Bayesian parameter estimation and
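The linear dynamic model referred to above has the generic state-space form x_t = A x_{t−1} + w_t, y_t = C x_t + v_t, with hidden states evolving autoregressively. A minimal scalar simulation of that structure (assumed parameters; not the paper's trained model or estimation procedure):

```python
import numpy as np

# Scalar linear dynamic model: AR(1) hidden state observed through noise.
rng = np.random.default_rng(3)
A, C = 0.9, 1.0        # hypothetical state-transition and observation gains
q, r = 0.1, 0.2        # process and observation noise standard deviations

x, states, obs = 0.0, [], []
for _ in range(200):
    x = A * x + rng.normal(0.0, q)   # hidden-state evolution (autoregressive)
    y = C * x + rng.normal(0.0, r)   # noisy observation (e.g., a speech feature)
    states.append(x)
    obs.append(y)

print(len(obs))  # 200 observations generated
```

Parameter estimation for such models typically proceeds via Kalman filtering and expectation-maximization, which is where the parameter savings over large Gaussian-mixture HMMs arise.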
Nonstationary statistical theory for multipactor
Anza, S.; Vicente, C.; Gil, J.
2010-06-15
This work presents a new and general approach to the real dynamics of the multipactor process: the nonstationary statistical multipactor theory. The nonstationary theory removes the stationarity assumption of the classical theory and, as a consequence, it is able to adequately model electron exponential growth as well as absorption processes, above and below the multipactor breakdown level. In addition, it considers both double-surface and single-surface interactions constituting a full framework for nonresonant polyphase multipactor analysis. This work formulates the new theory and validates it with numerical and experimental results with excellent agreement.
Statistical Properties of Online Auctions
NASA Astrophysics Data System (ADS)
Namazi, Alireza; Schadschneider, Andreas
We characterize the statistical properties of a large number of online auctions run on eBay. Both stationary and dynamic properties, such as distributions of prices and numbers of bids, as well as relations between these quantities, are studied. The analysis of the data reveals surprisingly simple distributions and relations, typically of power-law form. Based on these findings we introduce a simple method to identify suspicious auctions that could be influenced by a form of fraud known as shill bidding. Furthermore the influence of bidding strategies is discussed. The results indicate that the observed behavior is related to a mixture of agents using a variety of strategies.
Tannery, Nancy Hrinya; Silverman, Deborah L; Epstein, Barbara A
2002-01-01
Online use statistics can provide libraries with a tool to be used when developing an online collection of resources. Statistics can provide information on overall use of a collection, individual print and electronic journal use, and collection use by specific user populations. They can also be used to determine the number of user licenses to purchase. This paper focuses on the issue of use statistics made available for one collection of online resources.
The Statistical Mechanics of Zombies
NASA Astrophysics Data System (ADS)
Alemi, Alexander A.; Bierbaum, Matthew; Myers, Christopher R.; Sethna, James P.
2015-03-01
We present results and analysis from a large scale exact stochastic dynamical simulation of a zombie outbreak. Zombies have attracted some attention lately as a novel and interesting twist on classic disease models. While most of the initial investigations have focused on the continuous, fully mixed dynamics of a differential equation model, we have explored stochastic, discrete simulations on lattices. We explore some of the basic statistical mechanical properties of the zombie model, including its phase diagram and critical exponents. We report on several variant models, including both homogeneous and inhomogeneous lattices, as well as allowing diffusive motion of infected hosts. We build up to a full scale simulation of an outbreak in the United States, and discover that for `realistic' parameters, we are largely doomed.
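The stochastic lattice dynamics described above can be sketched in a few lines. This is an illustrative S/Z/R update rule with made-up bite and kill probabilities, not the authors' actual model or parameters:

```python
import random

def szr_step(grid, beta=0.4, kappa=0.2, rng=random):
    """One synchronous update of a stochastic S/Z/R lattice model.

    'S' = susceptible, 'Z' = zombie, 'R' = removed (destroyed zombie).
    Each zombie picks a random susceptible neighbor; the neighbor is
    bitten (S -> Z) with probability beta, otherwise it may destroy
    the zombie (Z -> R) with probability kappa.
    """
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j] != 'Z':
                continue
            nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < n and 0 <= j + dj < n]
            targets = [(a, b) for a, b in nbrs if grid[a][b] == 'S']
            if targets:
                a, b = rng.choice(targets)
                if rng.random() < beta:
                    new[a][b] = 'Z'   # bite: S -> Z
                elif rng.random() < kappa:
                    new[i][j] = 'R'   # counter-attack: Z -> R
    return new

# Seed a single zombie in a lattice of susceptibles and iterate.
rng = random.Random(0)
grid = [['S'] * 20 for _ in range(20)]
grid[10][10] = 'Z'
for _ in range(50):
    grid = szr_step(grid, rng=rng)
zombies = sum(row.count('Z') for row in grid)
```

Varying beta and kappa and measuring the final infected fraction is the simplest way to probe the phase diagram such models exhibit.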
Statistical distribution sampling
NASA Technical Reports Server (NTRS)
Johnson, E. S.
1975-01-01
Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.
Laws and chances in statistical mechanics
NASA Astrophysics Data System (ADS)
Winsberg, Eric
Statistical mechanics involves probabilities. At the same time, most approaches to the foundations of statistical mechanics (programs whose goal is to understand the macroscopic laws of thermal physics from the point of view of microphysics) are classical; they begin with the assumption that the underlying dynamical laws that govern the microscopic furniture of the world are (or can without loss of generality be treated as) deterministic. This raises some potential puzzles about the proper interpretation of these probabilities.
Introductory statistical mechanics for electron storage rings
Jowett, J.M.
1986-07-01
These lectures introduce the beam dynamics of electron-positron storage rings with particular emphasis on the effects due to synchrotron radiation. They differ from most other introductions in their systematic use of the physical principles and mathematical techniques of the non-equilibrium statistical mechanics of fluctuating dynamical systems. A self-contained exposition of the necessary topics from this field is included. Throughout the development, a Hamiltonian description of the effects of the externally applied fields is maintained in order to preserve the links with other lectures on beam dynamics and to show clearly the extent to which electron dynamics is non-Hamiltonian. The statistical mechanical framework is extended to a discussion of the conceptual foundations of the treatment of collective effects through the Vlasov equation.
Multidimensional Visual Statistical Learning
ERIC Educational Resources Information Center
Turk-Browne, Nicholas B.; Isola, Phillip J.; Scholl, Brian J.; Treat, Teresa A.
2008-01-01
Recent studies of visual statistical learning (VSL) have demonstrated that statistical regularities in sequences of visual stimuli can be automatically extracted, even without intent or awareness. Despite much work on this topic, however, several fundamental questions remain about the nature of VSL. In particular, previous experiments have not…
Croarkin, M. Carroll
2001-01-01
For more than 50 years, the Statistical Engineering Division (SED) has been instrumental in the success of a broad spectrum of metrology projects at NBS/NIST. This paper highlights fundamental contributions of NBS/NIST statisticians to statistics and to measurement science and technology. Published methods developed by SED staff, especially during the early years, endure as cornerstones of statistics not only in metrology and standards applications, but as data-analytic resources used across all disciplines. The history of statistics at NBS/NIST began with the formation of what is now the SED. Examples from the first five decades of the SED illustrate the critical role of the division in the successful resolution of a few of the highly visible, and sometimes controversial, statistical studies of national importance. A review of the history of major early publications of the division on statistical methods, design of experiments, and error analysis and uncertainty is followed by a survey of several thematic areas. The accompanying examples illustrate the importance of SED in the history of statistics, measurements and standards: calibration and measurement assurance, interlaboratory tests, development of measurement methods, Standard Reference Materials, statistical computing, and dissemination of measurement technology. A brief look forward sketches the expanding opportunity and demand for SED statisticians created by current trends in research and development at NIST. PMID:27500023
Explorations in Statistics: Regression
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2011-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This seventh installment of "Explorations in Statistics" explores regression, a technique that estimates the nature of the relationship between two things for which we may only surmise a mechanistic or predictive connection.…
ERIC Educational Resources Information Center
Huberty, Carl J.
An approach to statistical testing, which combines Neyman-Pearson hypothesis testing and Fisher significance testing, is recommended. The use of P-values in this approach is discussed in some detail. The author also discusses some problems which are often found in introductory statistics textbooks. The problems involve the definitions of…
Reform in Statistical Education
ERIC Educational Resources Information Center
Huck, Schuyler W.
2007-01-01
Two questions are considered in this article: (a) What should professionals in school psychology do in an effort to stay current with developments in applied statistics? (b) What should they do with their existing knowledge to move from surface understanding of statistics to deep understanding? Written for school psychologists who have completed…
Demonstrating Poisson Statistics.
ERIC Educational Resources Information Center
Vetterling, William T.
1980-01-01
Describes an apparatus that offers a very lucid demonstration of Poisson statistics as applied to electrical currents, and the manner in which such statistics account for shot noise when applied to macroscopic currents. The experiment described is intended for undergraduate physics students. (HM)
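The shot-noise behavior the apparatus demonstrates can also be checked numerically. The standard-library sketch below (Knuth's sampling algorithm) verifies the Poisson hallmark that the variance of the counts equals their mean, which is why relative shot noise scales as one over the square root of the mean count:

```python
import math
import random
import statistics

def poisson_sample(lam, rng):
    """Knuth's method: multiply uniforms until the product drops below e^-lam."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
lam = 9.0
counts = [poisson_sample(lam, rng) for _ in range(20000)]
mean = statistics.fmean(counts)
var = statistics.pvariance(counts)
# Hallmark of Poisson statistics: variance equals the mean.
print(mean, var)
```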
Statistical Summaries: Public Institutions.
ERIC Educational Resources Information Center
Virginia State Council of Higher Education, Richmond.
This document presents a statistical portrait of Virginia's 17 public higher education institutions. Data provided include: enrollment figures (broken down in categories such as sex, residency, full- and part-time status, residence, ethnicity, age, and level of postsecondary education); FTE figures; admissions statistics (such as number…
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. PMID:26466186
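The test-efficacy measures defined in the review can be computed directly from a 2x2 confusion matrix. The counts below are hypothetical, chosen only to make the arithmetic concrete:

```python
# Hypothetical 2x2 confusion matrix for a diagnostic test:
# rows = test result, columns = true disease status.
tp, fp = 90, 30    # test positive: true positives, false positives
fn, tn = 10, 170   # test negative: false negatives, true negatives

sensitivity = tp / (tp + fn)              # P(test+ | disease present)
specificity = tn / (tn + fp)              # P(test- | disease absent)
accuracy = (tp + tn) / (tp + fp + fn + tn)
lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio
print(sensitivity, specificity, accuracy, lr_pos, lr_neg)
```

A positive likelihood ratio well above 1 (here 6) indicates a positive result substantially raises the odds of disease.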
Explorations in Statistics: Power
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2010-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fifth installment of "Explorations in Statistics" revisits power, a concept fundamental to the test of a null hypothesis. Power is the probability that we reject the null hypothesis when it is false. Four things affect…
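Power can also be estimated by simulation, which makes the concept tangible: generate data under a specified alternative many times and count how often the null is rejected. This is a sketch under simplifying assumptions (unit-variance normal data, a normal approximation to the two-sample test), not a replacement for a proper power analysis:

```python
import random
import statistics

def estimated_power(delta, n, z_crit=1.96, reps=2000, rng=None):
    """Monte Carlo estimate of power for a two-sample comparison of means
    (unit-variance normal data, normal approximation to the test)."""
    rng = rng or random.Random(1)
    rejections = 0
    for _ in range(reps):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        y = [rng.gauss(delta, 1.0) for _ in range(n)]
        se = (statistics.pvariance(x) / n + statistics.pvariance(y) / n) ** 0.5
        z = (statistics.fmean(y) - statistics.fmean(x)) / se
        if abs(z) > z_crit:
            rejections += 1
    return rejections / reps

# Power grows with effect size and with sample size:
print(estimated_power(0.5, 20), estimated_power(0.5, 80))
```

Raising the effect size, the sample size, or the significance level each increases the estimated power, as the installment discusses.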
ERIC Educational Resources Information Center
Huizingh, Eelko K. R. E.
2007-01-01
Accessibly written and easy to use, "Applied Statistics Using SPSS" is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. What is unique about Eelko Huizingh's approach is that this book is based around the needs of undergraduate students embarking on their own research project, and its self-help style is designed to…
ERIC Educational Resources Information Center
Council of Ontario Universities, Toronto.
Summary statistics on application and registration patterns of applicants wishing to pursue full-time study in first-year places in Ontario universities (for the fall of 1987) are given. Data on registrations were received indirectly from the universities as part of their annual submission of USIS/UAR enrollment data to Statistics Canada and MCU.…
Introduction to Statistical Physics
NASA Astrophysics Data System (ADS)
Casquilho, João Paulo; Ivo Cortez Teixeira, Paulo
2014-12-01
Preface; 1. Random walks; 2. Review of thermodynamics; 3. The postulates of statistical physics. Thermodynamic equilibrium; 4. Statistical thermodynamics – developments and applications; 5. The classical ideal gas; 6. The quantum ideal gas; 7. Magnetism; 8. The Ising model; 9. Liquid crystals; 10. Phase transitions and critical phenomena; 11. Irreversible processes; Appendixes; Index.
Deconstructing Statistical Analysis
ERIC Educational Resources Information Center
Snell, Joel
2014-01-01
Using a very complex statistical analysis and research method merely to enhance the prestige of an article, or to make a new product or service appear legitimate, needs to be monitored and questioned for accuracy. (1) The more complicated the statistical analysis and research, the fewer learned readers can understand it. This adds a…
ERIC Educational Resources Information Center
Hodgson, Ted; Andersen, Lyle; Robison-Cox, Jim; Jones, Clain
2004-01-01
Water quality experiments, especially the use of macroinvertebrates as indicators of water quality, offer an ideal context for connecting statistics and science. In the STAR program for secondary students and teachers, water quality experiments were also used as a context for teaching statistics. In this article, we trace one activity that uses…
Understanding Undergraduate Statistical Anxiety
ERIC Educational Resources Information Center
McKim, Courtney
2014-01-01
The purpose of this study was to understand undergraduate students' views of statistics. Results reveal that students with less anxiety have a higher interest in statistics and also believe in their ability to perform well in the course. Also students who have a more positive attitude about the class tend to have a higher belief in their…
Explorations in Statistics: Correlation
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2010-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This sixth installment of "Explorations in Statistics" explores correlation, a familiar technique that estimates the magnitude of a straight-line relationship between two variables. Correlation is meaningful only when the…
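The straight-line relationship that correlation measures is easy to see at the extremes. A minimal sketch of the sample Pearson r, computed from first principles:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation: strength of straight-line association."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

xs = [1, 2, 3, 4, 5]
print(pearson_r(xs, [2 * x + 1 for x in xs]))  # exact increasing line: r = 1
print(pearson_r(xs, [6 - x for x in xs]))      # exact decreasing line: r = -1
```

Intermediate values of r quantify how tightly the points cluster around the best straight line; r near zero means no linear association, though a nonlinear one may still exist.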
Statistical regimes of random laser fluctuations
Lepri, Stefano; Cavalieri, Stefano; Oppo, Gian-Luca; Wiersma, Diederik S.
2007-06-15
Statistical fluctuations of the light emitted from amplifying random media are studied theoretically and numerically. The characteristic scales of the diffusive motion of light lead to Gaussian or power-law (Lévy) distributed fluctuations depending on external control parameters. In the Lévy regime, the output pulse is highly irregular, leading to huge deviations from a mean-field description. Monte Carlo simulations of a simplified model which includes the population of the medium demonstrate the two statistical regimes and provide a comparison with dynamical rate equations. The different statistics of the fluctuations help to explain recent experimental observations reported in the literature.
Statistical mechanics of polymer systems. Final
Kovac, J.
1993-06-01
Work on computer simulation of polymer dynamics and the statistical mechanics of quenched systems, carried out over seven years with the support of this grant, is reviewed. The computer simulation work has focused on elucidating the roles of the excluded volume and the nearest-neighbor attractive interactions in the dynamics of polymers. To study quenched systems we have applied the formalism suggested long ago by Mazo to two model systems and found qualitative agreement with the properties of real glasses.
Environmental statistics and optimal regulation
NASA Astrophysics Data System (ADS)
Sivak, David; Thomson, Matt
2015-03-01
The precision with which an organism can detect its environment, and the timescale for and statistics of environmental change, will affect the suitability of different strategies for regulating protein levels in response to environmental inputs. We propose a general framework--here applied to the enzymatic regulation of metabolism in response to changing nutrient concentrations--to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, and the costs associated with enzyme production. We find: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones.
SDI satellite autonomy using AI and Ada
NASA Technical Reports Server (NTRS)
Fiala, Harvey E.
1990-01-01
The use of Artificial Intelligence (AI) and the programming language Ada to help a satellite recover from selected failures that could lead to mission failure is described. An unmanned satellite will have a separate AI subsystem running in parallel with the normal satellite subsystems. A satellite monitoring subsystem (SMS), under the control of a blackboard system, will continuously monitor selected satellite subsystems to become alert to any actual or potential problems. In the case of loss of communications with the earth or the home base, the satellite will go into a survival mode to reestablish communications with the earth. The use of an AI subsystem in this manner would have avoided the tragic loss of the two recent Soviet probes that were sent to investigate the planet Mars and its moons. The blackboard system works in conjunction with an SMS and a reconfiguration control subsystem (RCS). It can be shown to be an effective way for one central control subsystem to monitor and coordinate the activities and loads of many interacting subsystems that may or may not contain redundant and/or fault-tolerant elements. The blackboard system will be coded in Ada using tools such as the ABLE development system and the Ada Production system.
LED binning: statistically blessed?
Wang, Zhuo
2015-06-10
LED binning (smart mixing of individual LEDs to match the desired color and lumens) and color mixing strategies have been widely used to maintain the color consistency of light engines. Light engines with binned LEDs can easily achieve color consistency within a couple of MacAdam steps, even starting from widely distributed LEDs. From a statistical point of view, the distributions of the color coordinates and the flux after binning are studied. The related statistical parameters are derived, which facilitate process improvements such as Six Sigma and are instrumental to statistical quality control for mass production. PMID:26192863
Winters, Ryan; Winters, Andrew; Amedee, Ronald G.
2010-01-01
The Accreditation Council for Graduate Medical Education sets forth a number of required educational topics that must be addressed in residency and fellowship programs. We sought to provide a primer on some of the important basic statistical concepts to consider when examining the medical literature. It is not essential to understand the exact workings and methodology of every statistical test encountered, but it is necessary to understand selected concepts such as parametric and nonparametric tests, correlation, and numerical versus categorical data. This working knowledge will allow you to spot obvious irregularities in statistical analyses that you encounter. PMID:21603381
NASA Technical Reports Server (NTRS)
Young, M.; Koslovsky, M.; Schaefer, Caroline M.; Feiveson, A. H.
2017-01-01
Back by popular demand, the JSC Biostatistics Laboratory and LSAH statisticians are offering an opportunity to discuss your statistical challenges and needs. Take the opportunity to meet the individuals offering expert statistical support to the JSC community. Join us for an informal conversation about any questions you may have encountered with issues of experimental design, analysis, or data visualization. Get answers to common questions about sample size, repeated measures, statistical assumptions, missing data, multiple testing, time-to-event data, and when to trust the results of your analyses.
NASA Astrophysics Data System (ADS)
Richfield, Jon; bookfeller
2016-07-01
In reply to Ralph Kenna and Pádraig Mac Carron's feature article “Maths meets myths” in which they describe how they are using techniques from statistical physics to characterize the societies depicted in ancient Icelandic sagas.
... facts and statistics here include brain and central nervous system tumors (including spinal cord, pituitary and pineal gland ... U.S. living with a primary brain and central nervous system tumor. This year, nearly 17,000 people will ...
Titanic: A Statistical Exploration.
ERIC Educational Resources Information Center
Takis, Sandra L.
1999-01-01
Uses the available data about the Titanic's passengers to interest students in exploring categorical data and the chi-square distribution. Describes activities incorporated into a statistics class and gives additional resources for collecting information about the Titanic. (ASK)
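The chi-square activity can be sketched as follows. The survival counts used here are illustrative stand-ins, not the actual Titanic passenger data:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], comparing observed counts to those expected
    under independence of rows and columns."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Illustrative (not actual) counts: rows = female/male, cols = survived/died.
table = [[300, 130], [340, 1360]]
chi2 = chi_square_2x2(table)
print(chi2)  # far beyond the ~3.84 critical value at alpha = 0.05, df = 1
```

With real passenger data the exercise works the same way: a large chi-square statistic leads students to reject independence of sex and survival.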
NASA Astrophysics Data System (ADS)
Grégoire, G.
2016-05-01
This chapter is devoted to two objectives. The first is to answer the request expressed by attendees of the first Astrostatistics School (Annecy, October 2013) to be provided with an elementary vademecum of statistics that would facilitate understanding of the given courses. In this spirit we recall very basic notions, that is, definitions and properties that we think sufficient to benefit from the courses given in the Astrostatistics School. Thus we briefly give definitions and elementary properties of random variables and vectors, distributions, estimation and tests, and maximum likelihood methodology. We intend to present basic ideas in a hopefully comprehensible way. We do not try to give a rigorous presentation and, given the space devoted to this chapter, can cover only a rather limited field of statistics. The second aim is to focus on some statistical tools that are useful in classification: a basic introduction to Bayesian statistics, maximum likelihood methodology, Gaussian vectors, and Gaussian mixture models.
... Plague in the United States: Plague was first introduced ... per year in the United States: 1900-2012. Plague Worldwide: Plague epidemics have occurred in Africa, Asia, ...
Cooperative Learning in Statistics.
ERIC Educational Resources Information Center
Keeler, Carolyn M.; And Others
1994-01-01
Formal use of cooperative learning techniques proved effective in improving student performance and retention in a freshman level statistics course. Lectures interspersed with group activities proved effective in increasing conceptual understanding and overall class performance. (11 references) (Author)
Purposeful Statistical Investigations
ERIC Educational Resources Information Center
Day, Lorraine
2014-01-01
Lorraine Day provides us with a great range of statistical investigations using various resources such as maths300 and TinkerPlots. Each of the investigations link mathematics to students' lives and provide engaging and meaningful contexts for mathematical inquiry.
Tuberculosis Data and Statistics
... Data and Statistics ... United States publication. PDF [6 MB] Interactive TB Data Tool: Online Tuberculosis Information System (OTIS). OTIS is ...
Understanding Solar Flare Statistics
NASA Astrophysics Data System (ADS)
Wheatland, M. S.
2005-12-01
A review is presented of work aimed at understanding solar flare statistics, with emphasis on the well known flare power-law size distribution. Although avalanche models are perhaps the favoured model to describe flare statistics, their physical basis is unclear, and they are divorced from developing ideas in large-scale reconnection theory. An alternative model, aimed at reconciling large-scale reconnection models with solar flare statistics, is revisited. The solar flare waiting-time distribution has also attracted recent attention. Observed waiting-time distributions are described, together with what they might tell us about the flare phenomenon. Finally, a practical application of flare statistics to flare prediction is described in detail, including the results of a year of automated (web-based) predictions from the method.
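The power-law size distribution central to this review can be simulated and fit in a few lines: inverse-transform sampling generates power-law distributed event sizes, and the continuous maximum-likelihood ("Hill") estimator recovers the index. The index 1.8 below is an illustrative choice, not a measured flare exponent:

```python
import math
import random

def sample_power_law(alpha, xmin, n, rng):
    """Inverse-transform samples from p(x) ~ x^-alpha for x >= xmin (alpha > 1)."""
    return [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def mle_exponent(xs, xmin):
    """Continuous maximum-likelihood estimate of the power-law index."""
    n = len(xs)
    return 1.0 + n / sum(math.log(x / xmin) for x in xs)

rng = random.Random(7)
events = sample_power_law(alpha=1.8, xmin=1.0, n=50000, rng=rng)
print(mle_exponent(events, 1.0))  # close to the true index of 1.8
```

The same machinery applies to waiting-time distributions, where comparing the empirical tail against exponential and power-law alternatives is a standard diagnostic.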
Oakland, J.S.
1986-01-01
Addressing the increasing importance for firms to have a thorough knowledge of statistically based quality control procedures, this book presents the fundamentals of statistical process control (SPC) in a non-mathematical, practical way. It provides real-life examples and data drawn from a wide variety of industries. The foundations of good quality management and process control, and control of conformance and consistency during production are given. Offers clear guidance to those who wish to understand and implement modern SPC techniques.
STATWIZ - AN ELECTRONIC STATISTICAL TOOL (ABSTRACT)
StatWiz is a web-based, interactive, and dynamic statistical tool for researchers. It will allow researchers to input information and/or data and then receive experimental design options, or outputs from data analysis. StatWiz is envisioned as an expert system that will walk rese...
Statistical Physics of Particles
NASA Astrophysics Data System (ADS)
Kardar, Mehran
2006-06-01
Statistical physics has its origins in attempts to describe the thermal properties of matter in terms of its constituent particles, and has played a fundamental role in the development of quantum mechanics. Based on lectures for a course in statistical mechanics taught by Professor Kardar at Massachusetts Institute of Technology, this textbook introduces the central concepts and tools of statistical physics. It contains a chapter on probability and related issues such as the central limit theorem and information theory, and covers interacting particles, with an extensive description of the van der Waals equation and its derivation by mean field approximation. It also contains an integrated set of problems, with solutions to selected problems at the end of the book. It will be invaluable for graduate and advanced undergraduate courses in statistical physics. A complete set of solutions is available to lecturers on a password protected website at www.cambridge.org/9780521873420. Based on lecture notes from a course on Statistical Mechanics taught by the author at MIT. Contains 89 exercises, with solutions to selected problems. Contains chapters on probability and interacting particles. Ideal for graduate courses in Statistical Mechanics.
NASA Technical Reports Server (NTRS)
Laird, Philip
1992-01-01
We distinguish static and dynamic optimization of programs: whereas static optimization modifies a program before runtime and is based only on its syntactical structure, dynamic optimization is based on the statistical properties of the input source and examples of program execution. Explanation-based generalization is a commonly used dynamic optimization method, but its effectiveness as a speedup-learning method is limited, in part because it fails to separate the learning process from the program transformation process. This paper describes a dynamic optimization technique called a learn-optimize cycle that first uses a learning element to uncover predictable patterns in the program execution and then uses an optimization algorithm to map these patterns into beneficial transformations. The technique has been used successfully for dynamic optimization of pure Prolog.
Statistical Physics of Fracture
Alava, Mikko; Nukala, Phani K; Zapperi, Stefano
2006-05-01
Disorder and long-range interactions are two of the key components that make material failure an interesting playfield for the application of statistical mechanics. The cornerstone in this respect has been lattice models of fracture, in which a network of elastic beams, bonds, or electrical fuses with random failure thresholds is subject to an increasing external load. These models describe on a qualitative level the failure processes of real, brittle, or quasi-brittle materials. This has been particularly important in solving the classical engineering problems of material strength: the size dependence of maximum stress and its sample-to-sample statistical fluctuations. At the same time, lattice models pose many new fundamental questions in statistical physics, such as the relation between fracture and phase transitions. Experimental results point to the existence of an intriguing crackling noise in the acoustic emission and of self-affine fractals in the crack surface morphology. Recent advances in computer power have enabled considerable progress in the understanding of such models. Among the partly still controversial issues are the scaling and size effects in material strength and accumulated damage, the statistics of avalanches or bursts of microfailures, and the morphology of the crack surface. Here we present an overview of the results obtained with lattice models for fracture, highlighting the relations with statistical physics theories and more conventional fracture mechanics approaches.
Statistical characterization of real-world illumination.
Dror, Ron O; Willsky, Alan S; Adelson, Edward H
2004-09-28
Although studies of vision and graphics often assume simple illumination models, real-world illumination is highly complex, with reflected light incident on a surface from almost every direction. One can capture the illumination from every direction at one point photographically using a spherical illumination map. This work illustrates, through analysis of photographically acquired, high dynamic range illumination maps, that real-world illumination possesses a high degree of statistical regularity. The marginal and joint wavelet coefficient distributions and harmonic spectra of illumination maps resemble those documented in the natural image statistics literature. However, illumination maps differ from typical photographs in that illumination maps are statistically nonstationary and may contain localized light sources that dominate their power spectra. Our work provides a foundation for statistical models of real-world illumination, thereby facilitating the understanding of human material perception, the design of robust computer vision systems, and the rendering of realistic computer graphics imagery. PMID:15493972
Helping Alleviate Statistical Anxiety with Computer Aided Statistical Classes
ERIC Educational Resources Information Center
Stickels, John W.; Dobbs, Rhonda R.
2007-01-01
This study, Helping Alleviate Statistical Anxiety with Computer Aided Statistics Classes, investigated whether undergraduate students' anxiety about statistics changed when statistics is taught using computers compared to the traditional method. Two groups of students were questioned concerning their anxiety about statistics. One group was taught…
Statistical Mechanics of Turbulent Dynamos
NASA Technical Reports Server (NTRS)
Shebalin, John V.
2014-01-01
Incompressible magnetohydrodynamic (MHD) turbulence and magnetic dynamos, which occur in magnetofluids with large fluid and magnetic Reynolds numbers, will be discussed. When Reynolds numbers are large and energy decays slowly, the distribution of energy with respect to length scale becomes quasi-stationary and MHD turbulence can be described statistically. In the limit of infinite Reynolds numbers, viscosity and resistivity become zero and if these values are used in the MHD equations ab initio, a model system called ideal MHD turbulence results. This model system is typically confined in simple geometries with some form of homogeneous boundary conditions, allowing for velocity and magnetic field to be represented by orthogonal function expansions. One advantage to this is that the coefficients of the expansions form a set of nonlinearly interacting variables whose behavior can be described by equilibrium statistical mechanics, i.e., by a canonical ensemble theory based on the global invariants (energy, cross helicity and magnetic helicity) of ideal MHD turbulence. Another advantage is that truncated expansions provide a finite dynamical system whose time evolution can be numerically simulated to test the predictions of the associated statistical mechanics. If ensemble predictions are the same as time averages, then the system is said to be ergodic; if not, the system is nonergodic. Although it had been implicitly assumed in the early days of ideal MHD statistical theory development that these finite dynamical systems were ergodic, numerical simulations provided sufficient evidence that they were, in fact, nonergodic. Specifically, while canonical ensemble theory predicted that expansion coefficients would be (i) zero-mean random variables with (ii) energy that decreased with length scale, it was found that although (ii) was correct, (i) was not and the expected ergodicity was broken. The exact cause of this broken ergodicity was explained, after much
Suite versus composite statistics
Balsillie, J.H.; Tanner, W.F.
1999-01-01
Suite and composite methodologies, two statistically valid approaches for producing statistical descriptive measures, are investigated for sample groups representing a probability distribution where, in addition, each sample is itself a probability distribution. Suite and composite means (first moment measures) are always equivalent. Composite standard deviations (second moment measures) are always larger than suite standard deviations. Suite and composite values for higher moment measures have more complex relationships. They are very seldom equivalent, however, and normally yield statistically significant but different results. Multiple samples are preferable to single samples (including composites) because they permit the investigator to examine sample-to-sample variability. These and other relationships for suite and composite probability distribution analyses are investigated and reported using granulometric data.
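A minimal numeric sketch (invented sample values; equal sample sizes assumed so that the two means coincide) contrasts the two methodologies: suite moments average the per-sample moments, while composite moments are computed on the pooled observations.

```python
import statistics

# Hypothetical sample group: each sample is itself a set of measurements,
# i.e., an empirical probability distribution (values are illustrative).
samples = [
    [2.0, 2.5, 3.0],
    [4.0, 4.5, 5.0],
    [1.0, 1.5, 2.0],
]

# Suite statistics: compute a moment for each sample, then average those values.
suite_mean = statistics.mean(statistics.mean(s) for s in samples)
suite_sd = statistics.mean(statistics.pstdev(s) for s in samples)

# Composite statistics: pool all observations first, then compute the moment.
pooled = [x for s in samples for x in s]
composite_mean = statistics.mean(pooled)
composite_sd = statistics.pstdev(pooled)

# With equal sample sizes the first moments are identical, while the
# composite standard deviation also absorbs sample-to-sample spread,
# so it is the larger of the two.
assert abs(suite_mean - composite_mean) < 1e-9
assert composite_sd > suite_sd
```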
Candidate Assembly Statistical Evaluation
1998-07-15
The Savannah River Site (SRS) receives aluminum clad spent Material Test Reactor (MTR) fuel from all over the world for storage and eventual reprocessing. There are hundreds of different kinds of MTR fuels and these fuels will continue to be received at SRS for approximately ten more years. SRS's current criticality evaluation methodology requires the modeling of all MTR fuels utilizing Monte Carlo codes, which is extremely time consuming and resource intensive. Now that a significant number of MTR calculations have been conducted it is feasible to consider building statistical models that will provide reasonable estimations of MTR behavior. These statistical models can be incorporated into a standardized model homogenization spreadsheet package to provide analysts with a means of performing routine MTR fuel analyses with a minimal commitment of time and resources. This became the purpose for development of the Candidate Assembly Statistical Evaluation (CASE) program at SRS.
Statistical mechanics of complex networks
NASA Astrophysics Data System (ADS)
Albert, Réka; Barabási, Albert-László
2002-01-01
Complex networks describe a wide range of systems in nature and society. Frequently cited examples include the cell, a network of chemicals linked by chemical reactions, and the Internet, a network of routers and computers connected by physical links. While traditionally these systems have been modeled as random graphs, it is increasingly recognized that the topology and evolution of real networks are governed by robust organizing principles. This article reviews the recent advances in the field of complex networks, focusing on the statistical mechanics of network topology and dynamics. After reviewing the empirical data that motivated the recent interest in networks, the authors discuss the main models and analytical tools, covering random graphs, small-world and scale-free networks, the emerging theory of evolving networks, and the interplay between topology and the network's robustness against failures and attacks.
Perception in statistical graphics
NASA Astrophysics Data System (ADS)
VanderPlas, Susan Ruth
There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, yet research on the interplay between graph, eye, brain, and mind remains too sparse to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.
NASA Astrophysics Data System (ADS)
Inomata, Akira
1997-03-01
To understand possible physical consequences of quantum deformation, we investigate statistical behaviors of a quon gas. The quon is an object which obeys the minimally deformed commutator (or q-mutator): a a† - q a†a = 1 with -1 ≤ q ≤ 1. Although q = 1 and q = -1 appear to correspond respectively to boson and fermion statistics, it is not easy to create a gas which unifies the boson gas and the fermion gas. We present a model which interpolates between the two limits. The quon gas exhibits Bose-Einstein condensation near the boson limit in two dimensions.
Environmental Statistics and Optimal Regulation
2014-01-01
Any organism is embedded in an environment that changes over time. The timescale for and statistics of environmental change, the precision with which the organism can detect its environment, and the costs and benefits of particular protein expression levels all will affect the suitability of different strategies, such as constitutive expression or graded response, for regulating protein levels in response to environmental inputs. We propose a general framework, here specifically applied to the enzymatic regulation of metabolism in response to changing concentrations of a basic nutrient, to predict the optimal regulatory strategy given the statistics of fluctuations in the environment and measurement apparatus, respectively, and the costs associated with enzyme production. We use this framework to address three fundamental questions: (i) when a cell should prefer thresholding to a graded response; (ii) when there is a fitness advantage to implementing a Bayesian decision rule; and (iii) when retaining memory of the past provides a selective advantage. We specifically find that: (i) relative convexity of enzyme expression cost and benefit influences the fitness of thresholding or graded responses; (ii) intermediate levels of measurement uncertainty call for a sophisticated Bayesian decision rule; and (iii) in dynamic contexts, intermediate levels of uncertainty call for retaining memory of the past. Statistical properties of the environment, such as variability and correlation times, set optimal biochemical parameters, such as thresholds and decay rates in signaling pathways. Our framework provides a theoretical basis for interpreting molecular signal processing algorithms and a classification scheme that organizes known regulatory strategies and may help conceptualize heretofore unknown ones. PMID:25254493
Statistical benchmark for BosonSampling
NASA Astrophysics Data System (ADS)
Walschaers, Mattia; Kuipers, Jack; Urbina, Juan-Diego; Mayer, Klaus; Tichy, Malte Christopher; Richter, Klaus; Buchleitner, Andreas
2016-03-01
Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church-Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects.
Statistical insight: a review.
Vardell, Emily; Garcia-Barcena, Yanira
2012-01-01
Statistical Insight is a database that offers the ability to search across multiple sources of data, including the federal government, private organizations, research centers, and international intergovernmental organizations in one search. Two sample searches on the same topic, a basic and an advanced, were conducted to evaluate the database.
Pilot Class Testing: Statistics.
ERIC Educational Resources Information Center
Washington Univ., Seattle. Washington Foreign Language Program.
Statistics derived from test score data from the pilot classes participating in the Washington Foreign Language Program are presented in tables in this report. An index accompanies the tables, itemizing the classes by level (FLES, middle, and high school), grade test, language skill, and school. MLA-Coop test performances for each class were…
Statistical Reasoning over Lunch
ERIC Educational Resources Information Center
Selmer, Sarah J.; Bolyard, Johnna J.; Rye, James A.
2011-01-01
Students in the 21st century are exposed daily to a staggering amount of numerically infused media. In this era of abundant numeric data, students must be able to engage in sound statistical reasoning when making life decisions after exposure to varied information. The context of nutrition can be used to engage upper elementary and middle school…
Selected Outdoor Recreation Statistics.
ERIC Educational Resources Information Center
Bureau of Outdoor Recreation (Dept. of Interior), Washington, DC.
In this recreational information report, 96 tables are compiled from Bureau of Outdoor Recreation programs and surveys, other governmental agencies, and private sources. Eight sections comprise the document: (1) The Bureau of Outdoor Recreation, (2) Federal Assistance to Recreation, (3) Recreation Surveys for Planning, (4) Selected Statistics of…
ASURV: Astronomical SURVival Statistics
NASA Astrophysics Data System (ADS)
Feigelson, E. D.; Nelson, P. I.; Isobe, T.; LaValley, M.
2014-06-01
ASURV (Astronomical SURVival Statistics) provides astronomy survival analysis for right- and left-censored data including the maximum-likelihood Kaplan-Meier estimator and several univariate two-sample tests, bivariate correlation measures, and linear regressions. ASURV is written in FORTRAN 77, and is stand-alone and does not call any specialized libraries.
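ASURV itself is FORTRAN 77; as an illustration of the core product-limit idea it implements, a minimal Kaplan-Meier sketch in Python (function name and data invented, not ASURV's interface) might look like:

```python
def kaplan_meier(times, censored):
    """Product-limit (Kaplan-Meier) survival estimate for right-censored data.

    times: observation times; censored[i] is True when the i-th observation
    is censored (event not observed), False when the event occurred.
    Returns (event_times, survival_probabilities).
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    s, out_t, out_s = 1.0, [], []
    for i in order:
        if not censored[i]:          # an observed event reduces S(t)
            s *= (n_at_risk - 1) / n_at_risk
            out_t.append(times[i])
            out_s.append(s)
        n_at_risk -= 1               # events and censorings both leave the risk set
    return out_t, out_s

# Five observations; the second and fifth are right-censored.
t, s = kaplan_meier([1, 2, 3, 4, 5], [False, True, False, False, True])
```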
Statistics for Learning Genetics
ERIC Educational Resources Information Center
Charles, Abigail Sheena
2012-01-01
This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in,…
Spitball Scatterplots in Statistics
ERIC Educational Resources Information Center
Wagaman, John C.
2012-01-01
This paper describes an active learning idea that I have used in my applied statistics class as a first lesson in correlation and regression. Students propel spitballs from various standing distances from the target and use the recorded data to determine if the spitball accuracy is associated with standing distance and review the algebra of lines…
Geopositional Statistical Methods
NASA Technical Reports Server (NTRS)
Ross, Kenton
2006-01-01
RMSE-based methods distort circular error estimates (up to 50% overestimation). The empirical approach is the only statistically unbiased estimator offered. The Ager modification to the Shultz approach is nearly unbiased, but cumbersome. All methods hover around 20% uncertainty (at 95% confidence) for low geopositional bias error estimates. This requires careful consideration in the assessment of higher-accuracy products.
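The contrast between the empirical and RMSE-based approaches can be sketched under the simplifying assumption of circular Gaussian errors (all numbers invented for illustration): the empirical CE95 is simply the 95th percentile of observed radial errors, while a common RMSE-based rule scales an estimated sigma by 2.4477, the circular-normal factor. The two agree in this idealized case; bias enters when the real error distribution departs from it.

```python
import math
import random

random.seed(7)

# Radial errors from hypothetical 2-D geopositional offsets (meters),
# drawn from a circular Gaussian with 3 m per-axis sigma.
errs = [math.hypot(random.gauss(0, 3), random.gauss(0, 3)) for _ in range(5000)]

# Empirical CE95: the 95th percentile of the observed radial errors,
# unbiased by construction.
ranked = sorted(errs)
ce95_empirical = ranked[int(0.95 * len(ranked))]

# RMSE-based approximation: estimate the per-axis sigma from the RMSE of the
# radial errors, then apply the circular-normal factor 2.4477 = sqrt(-2 ln 0.05).
rmse = math.sqrt(sum(e * e for e in errs) / len(errs) / 2)
ce95_rmse = 2.4477 * rmse

# With truly circular Gaussian errors the two estimates agree closely.
assert abs(ce95_empirical - ce95_rmse) / ce95_rmse < 0.1
```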
ERIC Educational Resources Information Center
Akram, Muhammad; Siddiqui, Asim Jamal; Yasmeen, Farah
2004-01-01
In order to learn the concepts of statistical techniques one needs to run real experiments that generate reliable data. In practice, collecting data from some well-defined process or system is very costly and time consuming. It is difficult to run real experiments during the teaching period in the university. To overcome these difficulties, statisticians…
Education Statistics Quarterly, 2003.
ERIC Educational Resources Information Center
Marenus, Barbara; Burns, Shelley; Fowler, William; Greene, Wilma; Knepper, Paula; Kolstad, Andrew; McMillen Seastrom, Marilyn; Scott, Leslie
2003-01-01
This publication provides a comprehensive overview of work done across all parts of the National Center for Education Statistics (NCES). Each issue contains short publications, summaries, and descriptions that cover all NCES publications and data products released in a 3-month period. Each issue also contains a message from the NCES on a timely…
Analogies for Understanding Statistics
ERIC Educational Resources Information Center
Hocquette, Jean-Francois
2004-01-01
This article describes a simple way to explain the limitations of statistics to scientists and students to avoid the publication of misleading conclusions. Biologists examine their results extremely critically and carefully choose the appropriate analytic methods depending on their scientific objectives. However, no such close attention is usually…
Statistical Significance Testing.
ERIC Educational Resources Information Center
McLean, James E., Ed.; Kaufman, Alan S., Ed.
1998-01-01
The controversy about the use or misuse of statistical significance testing has become the major methodological issue in educational research. This special issue contains three articles that explore the controversy, three commentaries on these articles, an overall response, and three rejoinders by the first three authors. They are: (1)…
Nonextensive statistical mechanics: A brief introduction
NASA Astrophysics Data System (ADS)
Tsallis, C.; Brigatti, E.
Boltzmann-Gibbs statistical mechanics is based on the entropy $S_{BG} = -k \sum_{i=1}^{W} p_i \ln p_i$. It enables a successful thermal approach to ubiquitous systems, such as those involving short-range interactions, Markovian processes, and, generally speaking, those systems whose dynamical occupancy of phase space tends to be ergodic. For systems whose microscopic dynamics is more complex, it is natural to expect that the dynamical occupancy of phase space will have a less trivial structure, for example a (multi)fractal or hierarchical geometry. The question naturally arises whether it is possible to study such systems with concepts and methods similar to those of standard statistical mechanics. The answer appears to be yes for ubiquitous systems, but the concept of entropy needs to be adequately generalized. Some classes of such systems can be satisfactorily approached with the entropy $S_q = k\,\frac{1 - \sum_{i=1}^{W} p_i^q}{q-1}$ (with $q \in \mathbb{R}$, and $S_1 = S_{BG}$). This theory is sometimes referred to in the literature as nonextensive statistical mechanics. We provide here a brief introduction to the formalism, its dynamical foundations, and some illustrative applications. In addition to these, we illustrate with a few examples the concept of stability (or experimental robustness) introduced by B. Lesche in 1982 and recently revisited by S. Abe.
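A numeric sketch of the two entropies (k = 1 and an arbitrary three-state distribution assumed purely for illustration), checking that S_q recovers S_BG as q approaches 1:

```python
import math

def boltzmann_gibbs_entropy(p, k=1.0):
    # S_BG = -k * sum_i p_i ln p_i  (terms with p_i = 0 contribute nothing)
    return -k * sum(pi * math.log(pi) for pi in p if pi > 0)

def tsallis_entropy(p, q, k=1.0):
    # S_q = k * (1 - sum_i p_i^q) / (q - 1); S_1 is defined as the BG limit
    if abs(q - 1.0) < 1e-12:
        return boltzmann_gibbs_entropy(p, k)
    return k * (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# An arbitrary normalized distribution over W = 3 states.
p = [0.5, 0.3, 0.2]

# As q -> 1, S_q approaches S_BG (here with k = 1).
assert abs(tsallis_entropy(p, 1.000001) - boltzmann_gibbs_entropy(p)) < 1e-4
```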
Accelerated molecular dynamics methods
Perez, Danny
2011-01-04
The molecular dynamics method, although extremely powerful for materials simulations, is limited to times scales of roughly one microsecond or less. On longer time scales, dynamical evolution typically consists of infrequent events, which are usually activated processes. This course is focused on understanding infrequent-event dynamics, on methods for characterizing infrequent-event mechanisms and rate constants, and on methods for simulating long time scales in infrequent-event systems, emphasizing the recently developed accelerated molecular dynamics methods (hyperdynamics, parallel replica dynamics, and temperature accelerated dynamics). Some familiarity with basic statistical mechanics and molecular dynamics methods will be assumed.
Random paths and current fluctuations in nonequilibrium statistical mechanics
Gaspard, Pierre
2014-07-15
An overview is given of recent advances in nonequilibrium statistical mechanics concerning the statistics of random paths and current fluctuations. Whereas equilibrium statistical mechanics carries out statistics over space, for nonequilibrium systems statistics is considered over time or spacetime. In this approach, relationships have been established between nonequilibrium properties such as the transport coefficients, the thermodynamic entropy production, or the affinities, and quantities characterizing the microscopic Hamiltonian dynamics and the chaos or fluctuations it may generate. This overview presents results for classical systems in the escape-rate formalism, stochastic processes, and open quantum systems.
NASA Astrophysics Data System (ADS)
Maccone, C.
This paper provides the statistical generalization of the Fermi paradox. The statistics of habitable planets may be based on a set of ten (and possibly more) astrobiological requirements first pointed out by Stephen H. Dole in his book Habitable Planets for Man (1964). The statistical generalization of the original and by now too simplistic Dole equation is obtained by replacing a product of ten positive numbers with the product of ten positive random variables. This is denoted the SEH, an acronym standing for "Statistical Equation for Habitables". The proof in this paper is based on the Central Limit Theorem (CLT) of statistics, stating that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable (Lyapunov form of the CLT). It is then shown that: 1. The new random variable NHab, yielding the number of habitables (i.e. habitable planets) in the Galaxy, follows the log-normal distribution. By construction, the mean value of this log-normal distribution is the total number of habitable planets as given by the statistical Dole equation. 2. The ten (or more) astrobiological factors are now positive random variables. The probability distribution of each random variable may be arbitrary. The CLT in the so-called Lyapunov or Lindeberg forms (which do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into the SEH by allowing an arbitrary probability distribution for each factor. This is both astrobiologically realistic and useful for any further investigations. 3. By applying the SEH, the (average) distance between any two nearby habitable planets in the Galaxy is shown to be inversely proportional to the cubic root of NHab. This distance is denoted by the new random variable D. The relevant probability density function is derived, which was named the "Maccone distribution" by Paul Davies in
The Statistical Drake Equation
NASA Astrophysics Data System (ADS)
Maccone, Claudio
2010-12-01
We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density
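The lognormal conclusion can be illustrated with a quick Monte Carlo sketch (the factor distributions below are invented, not Drake-equation values): the log of a product of seven independent positive factors is a sum of seven independent terms, hence approximately Gaussian by the CLT, so the product itself is approximately lognormal.

```python
import math
import random
import statistics

random.seed(42)

def draw_n():
    # Seven independent positive factors, each with an arbitrary distribution
    # (uniform on [0.5, 1.5] here, purely for illustration).
    n = 1.0
    for _ in range(7):
        n *= random.uniform(0.5, 1.5)
    return n

samples = [draw_n() for _ in range(20000)]
logs = [math.log(x) for x in samples]

# By the CLT, ln N is a sum of seven independent terms and is therefore
# approximately Gaussian: roughly 68% of ln N values should fall within
# one standard deviation of the mean, as for a normal distribution.
mu, sd = statistics.mean(logs), statistics.pstdev(logs)
frac = sum(1 for x in logs if abs(x - mu) <= sd) / len(logs)
assert 0.62 < frac < 0.74
```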
SHARE: Statistical hadronization with resonances
NASA Astrophysics Data System (ADS)
Torrieri, G.; Steinke, S.; Broniowski, W.; Florkowski, W.; Letessier, J.; Rafelski, J.
2005-05-01
interaction feed-down corrections, the observed hadron abundances are obtained. SHARE incorporates diverse physical approaches, with a flexibility of choice of the details of the statistical hadronization model, including the selection of a chemical (non-)equilibrium condition. SHARE also offers evaluation of the extensive properties of the source of particles, such as energy, entropy, baryon number, strangeness, as well as the determination of the best intensive input parameters fitting a set of experimental yields. This allows exploration of a proposed physical hypothesis about hadron production mechanisms and the determination of the properties of their source. Method of solving the problem: Distributions at freeze-out of both the stable particles and the hadronic resonances are set according to a statistical prescription, technically calculated via a series of Bessel functions, using CERN library programs. We also have the option of including finite particle widths of the resonances. While this is computationally expensive, it is necessary to fully implement the essence of the strong interaction dynamics within the statistical hadronization picture. In fact, including finite width has a considerable effect when modeling directly detectable short-lived resonances (Λ(1520), K, etc.), and is noticeable in fits to experimentally measured yields of stable particles. After production, all hadronic resonances decay. Resonance decays are accomplished by addition of the parent abundances to the daughter, normalized by the branching ratio. Weak interaction decays receive a special treatment, where we introduce daughter particle acceptance factors for both strongly interacting decay products. An interface for fitting the statistical model parameters to experimental particle ratios with the help of MINUIT[1] is provided. The χ² function is defined in the standard way: for an investigated quantity f with experimental error Δf, each term takes the form (f_theory - f_exp)²/(Δf)².
(note that systematic and statistical
Statistical instability of barrier microdischarges operating in Townsend regime
Nagorny, V. P.
2007-01-15
The dynamics of barrier microdischarges operating in a Townsend regime is studied analytically and via kinetic particle-in-cell/Monte Carlo simulations. It is shown that statistical fluctuations of the number of charged particles in the discharge gap strongly influence the dynamics of natural oscillations of the discharge current and may even lead to a disruption of the discharge. An analysis of the statistical effects, based on a simple model, is suggested. The role of external sources in stabilizing microdischarges is clarified.
Nock, Richard; Nielsen, Frank
2004-11-01
This paper explores a statistical basis for a process often described in computer vision: image segmentation by region merging following a particular order in the choice of regions. We exhibit a particular blend of algorithmics and statistics whose segmentation error is, as we show, limited from both the qualitative and quantitative standpoints. This approach can be efficiently approximated in linear time/space, leading to a fast segmentation algorithm tailored to processing images described using most common numerical pixel attribute spaces. The conceptual simplicity of the approach makes it simple to modify and cope with hard noise corruption, handle occlusion, authorize the control of the segmentation scale, and process unconventional data such as spherical images. Experiments on gray-level and color images, obtained with a short readily available C-code, display the quality of the segmentations obtained.
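A stripped-down sketch of region merging in a fixed order can make the procedure concrete. Here a union-find structure and a plain gray-level gap threshold stand in for the paper's statistical merging predicate, which is more refined; the image and threshold are invented for illustration.

```python
class UnionFind:
    """Disjoint-set structure tracking which pixels belong to the same region."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i
    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def count_regions(image, threshold):
    """Merge 4-adjacent pixels whose gray-level gap is below threshold,
    visiting pixel pairs in order of increasing gap (the merge order matters).
    Returns the number of resulting regions."""
    h, w = len(image), len(image[0])
    uf = UnionFind(h * w)
    pairs = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                pairs.append((abs(image[y][x] - image[y][x + 1]),
                              y * w + x, y * w + x + 1))
            if y + 1 < h:
                pairs.append((abs(image[y][x] - image[y + 1][x]),
                              y * w + x, (y + 1) * w + x))
    for gap, a, b in sorted(pairs):
        if gap < threshold:
            uf.union(a, b)
    return len({uf.find(i) for i in range(h * w)})

# Two visually distinct areas: dark left columns, bright right column.
img = [[10, 12, 200],
       [11, 13, 205],
       [12, 14, 210]]
n_regions = count_regions(img, threshold=20)
```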
Modeling cosmic void statistics
NASA Astrophysics Data System (ADS)
Hamaus, Nico; Sutter, P. M.; Wandelt, Benjamin D.
2016-10-01
Understanding the internal structure and spatial distribution of cosmic voids is crucial when considering them as probes of cosmology. We present recent advances in modeling void density- and velocity-profiles in real space, as well as void two-point statistics in redshift space, by examining voids identified via the watershed transform in state-of-the-art ΛCDM n-body simulations and mock galaxy catalogs. The simple and universal characteristics that emerge from these statistics indicate the self-similarity of large-scale structure and suggest cosmic voids to be among the most pristine objects to consider for future studies on the nature of dark energy, dark matter and modified gravity.
Statistical evaluation of forecasts
NASA Astrophysics Data System (ADS)
Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn
2014-08-01
Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which intend to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.
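The random-predictor baseline can be sketched directly (the time-bin framing and all rates below are invented for illustration): alarms raised independently of events with probability p have an expected sensitivity of p, giving an analytic reference that a real forecaster must beat.

```python
import random

random.seed(1)

# A random predictor raises an alarm in each time bin with probability p,
# independent of the events; its expected sensitivity is therefore p itself.
p = 0.3
n_bins = 50000
events = [random.random() < 0.05 for _ in range(n_bins)]  # rare events
alarms = [random.random() < p for _ in range(n_bins)]     # independent alarms

hits = sum(1 for e, a in zip(events, alarms) if e and a)
n_events = sum(events)
sensitivity = hits / n_events

# The simulated sensitivity of the random predictor clusters around p.
assert abs(sensitivity - p) < 0.05
```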
Journey Through Statistical Mechanics
NASA Astrophysics Data System (ADS)
Yang, C. N.
2013-05-01
My first involvement with statistical mechanics and the many body problem was when I was a student at The National Southwest Associated University in Kunming during the war. At that time Professor Wang Zhu-Xi had just come back from Cambridge, England, where he was a student of Fowler, and his thesis was on phase transitions, a hot topic at that time, and still a very hot topic today...
Statistical Methods in Cosmology
NASA Astrophysics Data System (ADS)
Verde, L.
2010-03-01
The advent of large data sets in cosmology has meant that in the past 10 or 20 years our knowledge and understanding of the Universe has changed not only quantitatively but also, and most importantly, qualitatively. Cosmologists rely on data in which a host of useful information is enclosed, but encoded in a non-trivial way. The challenges in extracting this information must be overcome to make the most of a large experimental effort. Even after having converged to a standard cosmological model (the LCDM model) we should keep in mind that this model is described by 10 or more physical parameters, and if we want to study deviations from it, the number of parameters is even larger. Dealing with such a high-dimensional parameter space and finding parameter constraints is a challenge in itself. Cosmologists want to be able to compare and combine different data sets, both to test for possible disagreements (which could indicate new physics) and to improve parameter determinations. Finally, cosmologists in many cases want to find out, before actually doing the experiment, how much one would be able to learn from it. For all these reasons, sophisticated statistical techniques are being employed in cosmology, and it has become crucial to know some statistical background to understand recent literature in the field. I will introduce some statistical tools that any cosmologist should know about in order to be able to understand recently published results from the analysis of cosmological data sets. I will not present a complete and rigorous introduction to statistics, as there are several good books which are reported in the references. The reader should refer to those.
NASA Astrophysics Data System (ADS)
Talkner, Peter
2003-07-01
The statistical properties of the transitions of a discrete Markov process are investigated in terms of entrance times. A simple formula for their density is given and used to measure the synchronization of a process with a periodic driving force. For the McNamara-Wiesenfeld model of stochastic resonance we find parameter regions in which the transition frequency of the process is locked with the frequency of the external driving.
1979 DOE statistical symposium
Gardiner, D.A.; Truett, T.
1980-09-01
The 1979 DOE Statistical Symposium was the fifth in the series of annual symposia designed to bring together statisticians and other interested parties who are actively engaged in helping to solve the nation's energy problems. The program included presentations of technical papers centered around exploration and disposal of nuclear fuel, general energy-related topics, and health-related issues, and workshops on model evaluation, risk analysis, analysis of large data sets, and resource estimation.
Guta, Madalin; Butucea, Cristina
2010-10-15
The notion of a U-statistic for an n-tuple of identical quantum systems is introduced in analogy to the classical (commutative) case: given a self-adjoint 'kernel' K acting on (ℂ^d)^{⊗r} with r
Statistical Inference at Work: Statistical Process Control as an Example
ERIC Educational Resources Information Center
Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia
2008-01-01
To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
The standard map: From Boltzmann-Gibbs statistics to Tsallis statistics.
Tirnakli, Ugur; Borges, Ernesto P
2016-01-01
As is well known, Boltzmann-Gibbs statistics is the correct way of thermostatistically approaching ergodic systems. On the other hand, nontrivial ergodicity breakdown and strong correlations typically drag the system into out-of-equilibrium states where Boltzmann-Gibbs statistics fails. For a wide class of such systems, it has been shown in recent years that the correct approach is to use Tsallis statistics instead. Here we show how the dynamics of the paradigmatic conservative (area-preserving) standard map exhibits, in an exceptionally clear manner, the crossing from one statistics to the other. Our results unambiguously illustrate the domains of validity of both Boltzmann-Gibbs and Tsallis statistical distributions. Since various important physical systems, from particle confinement in magnetic traps to autoionization of molecular Rydberg states, through particle dynamics in accelerators and comet dynamics, can be reduced to the standard map, our results are expected to enlighten and enable an improved interpretation of diverse experimental and observational results. PMID:27004989
Statistical design for microwave systems
NASA Technical Reports Server (NTRS)
Cooke, Roland; Purviance, John
1991-01-01
This paper presents an introduction to statistical system design. Basic ideas needed to understand statistical design and a method for implementing statistical design are presented. The nonlinear characteristics of the system amplifiers and mixers are accounted for in the given examples. The specification of group delay, signal-to-noise ratio and output power are considered in these statistical designs.
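A minimal Monte Carlo yield estimate in the spirit of statistical system design, with all component values and the output-power spec invented for illustration: each stage gain is drawn from a tolerance distribution, and the yield is the fraction of simulated units meeting the spec.

```python
import numpy as np

# Hedged sketch (gains, spreads, and spec are invented, not from the paper):
# cascade of two amplifiers and a mixer, each with a toleranced gain in dB,
# checked against a hypothetical output-power specification.
rng = np.random.default_rng(1)
n = 100_000

g_amp1 = rng.normal(20.0, 1.0, n)    # dB, amplifier 1 gain and spread
g_amp2 = rng.normal(15.0, 1.5, n)    # dB, amplifier 2
l_mix = rng.normal(-7.0, 0.8, n)     # dB, mixer conversion loss

p_in = -30.0                          # dBm input level
p_out = p_in + g_amp1 + g_amp2 + l_mix
yield_frac = np.mean(p_out >= -5.0)   # fraction of units meeting the spec
print(yield_frac)
```

Linear dB addition ignores the nonlinear amplifier and mixer characteristics the paper accounts for; a fuller statistical design would replace the sum with a nonlinear stage model inside the same Monte Carlo loop.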
Experimental Mathematics and Computational Statistics
Bailey, David H.; Borwein, Jonathan M.
2009-04-30
The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.
Focus on stochastic flows and climate statistics
NASA Astrophysics Data System (ADS)
Marston, JB; Williams, Paul D.
2016-09-01
The atmosphere and ocean are examples of dynamical systems that evolve in accordance with the laws of physics. Therefore, climate science is a branch of physics that is just as valid and important as the more traditional branches, which include particle physics, condensed-matter physics, and statistical mechanics. This ‘focus on’ collection of New Journal of Physics brings together original research articles from leading groups that advance our understanding of the physics of climate. Areas of climate science that can particularly benefit from input by physicists are emphasised. The collection brings together articles on stochastic models, turbulence, quasi-linear approximations, climate statistics, statistical mechanics of atmospheres and oceans, jet formation, and reduced-form climate models. The hope is that the issue will encourage more physicists to think about the climate problem.
NASA Technical Reports Server (NTRS)
1995-01-01
NASA Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
NASA Technical Reports Server (NTRS)
1994-01-01
Pocket Statistics is published for the use of NASA managers and their staff. Included herein is Administrative and Organizational information, summaries of Space Flight Activity including the NASA Major Launch Record, and NASA Procurement, Financial, and Manpower data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
NASA Technical Reports Server (NTRS)
1996-01-01
This booklet of pocket statistics includes the 1996 NASA Major Launch Record, NASA Procurement, Financial, and Workforce data. The NASA Major Launch Record includes all launches of Scout class and larger vehicles. Vehicle and spacecraft development flights are also included in the Major Launch Record. Shuttle missions are counted as one launch and one payload, where free flying payloads are not involved. Satellites deployed from the cargo bay of the Shuttle and placed in a separate orbit or trajectory are counted as an additional payload.
Who Needs Statistics? | Poster
You may know the feeling. You have collected a lot of new data on an important experiment. Now you are faced with multiple groups of data, a sea of numbers, and a deadline for submitting your paper to a peer-reviewed journal. And you are not sure which data are relevant, or even the best way to present them. The statisticians at Data Management Services (DMS) know how to help. This small group of experts provides a wide array of statistical and mathematical consulting services to the scientific community at NCI at Frederick and NCI-Bethesda.
Statistical physics and ecology
NASA Astrophysics Data System (ADS)
Volkov, Igor
This work addresses the applications of the methods of statistical physics to problems in population ecology. A theoretical framework based on stochastic Markov processes for the unified neutral theory of biodiversity is presented, and an analytical solution for the relative species abundance distribution, both in the large meta-community and in the small local community, is obtained. It is shown that the framework of the current neutral theory in ecology can be easily generalized to incorporate symmetric density dependence. An analytically tractable model is studied that provides an accurate description of beta-diversity and exhibits novel scaling behavior that leads to links between ecological measures such as relative species abundance and the species area relationship. We develop a simple framework that incorporates the Janzen-Connell, dispersal and immigration effects and leads to a description of the distribution of relative species abundance, the equilibrium species richness, beta-diversity and the species area relationship, in good accord with data. It is also shown that an ecosystem can be mapped into an unconventional statistical ensemble and is quite generally tuned in the vicinity of a phase transition where bio-diversity and the use of resources are optimized. We also perform a detailed study of the unconventional statistical ensemble, in which, unlike in physics, the total number of particles and the energy are not fixed but bounded. We show that the temperature and the chemical potential play a dual role: they determine the average energy and the population of the levels in the system, and at the same time they act as an imbalance between the energy and population ceilings and the corresponding average values. Different types of statistics (Boltzmann, Bose-Einstein, Fermi-Dirac and one corresponding to the description of a simple ecosystem) are considered. In all cases, we show that the systems may undergo a first or a second order
International petroleum statistics report
1995-10-01
The International Petroleum Statistics Report is a monthly publication that provides current international oil data. This report presents data on international oil production, demand, imports, exports and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). Section 2 presents an oil supply/demand balance for the world, in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries.
NASA Technical Reports Server (NTRS)
Freilich, M. H.; Pawka, S. S.
1987-01-01
The statistics of Sxy estimates derived from orthogonal-component measurements are examined. Based on results of Goodman (1957), the probability density function (pdf) for Sxy(f) estimates is derived, and a closed-form solution for arbitrary moments of the distribution is obtained. Characteristic functions are used to derive the exact pdf of Sxy(tot). In practice, a simple Gaussian approximation is found to be highly accurate even for relatively few degrees of freedom. Implications for experiment design are discussed, and a maximum-likelihood estimator for a posteriori estimation is outlined.
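A sketch with synthetic data (our own construction, not the paper's measurements) of how an Sxy estimate accumulates degrees of freedom: FFT cross-products are averaged over independent segments, and the averaged estimate recovers the phase relation between the two signals.

```python
import numpy as np

# Hedged sketch (signals invented): estimate the cross-spectrum Sxy of two
# noisy signals sharing a common tone by averaging periodogram cross-products
# over independent segments; each segment adds degrees of freedom.
rng = np.random.default_rng(2)
fs, n_seg, seg_len = 100.0, 64, 256
f0 = 12.5                          # shared tone; y lags x by 90 degrees
t = np.arange(seg_len) / fs

acc = np.zeros(seg_len // 2 + 1, dtype=complex)
for _ in range(n_seg):
    x = np.sin(2 * np.pi * f0 * t) + rng.normal(0, 1, seg_len)
    y = np.sin(2 * np.pi * f0 * t - np.pi / 2) + rng.normal(0, 1, seg_len)
    acc += np.conj(np.fft.rfft(x)) * np.fft.rfft(y)
sxy = acc / n_seg

k = int(f0 * seg_len / fs)         # FFT bin of the shared tone
print(np.angle(sxy[k]))            # recovers the imposed -pi/2 phase lag
```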
This article presents a general and versatile methodology for assessing sustainability with Fisher Information as a function of dynamic changes in urban systems. Using robust statistical methods, six Metropolitan Statistical Areas (MSAs) in Ohio were evaluated to comparatively as...
Fragile entanglement statistics
NASA Astrophysics Data System (ADS)
Brody, Dorje C.; Hughston, Lane P.; Meier, David M.
2015-10-01
If X and Y are independent, Y and Z are independent, and so are X and Z, one might be tempted to conclude that X, Y, and Z are independent. But it has long been known in classical probability theory that, intuitive as it may seem, this is not true in general. In quantum mechanics one can ask whether analogous statistics can emerge for configurations of particles in certain types of entangled states. The explicit construction of such states, along with the specification of suitable sets of observables that have the purported statistical properties, is not entirely straightforward. We show that an example of such a configuration arises in the case of an N-particle GHZ state, and we are able to identify a family of observables with the property that the associated measurement outcomes are independent for any choice of 2, 3, …, N-1 of the particles, even though the measurement outcomes for all N particles are not independent. Although such states are highly entangled, the entanglement turns out to be ‘fragile’, i.e. the associated density matrix has the property that if one traces out the freedom associated with even a single particle, the resulting reduced density matrix is separable.
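The classical fact alluded to above can be checked exhaustively with the standard textbook construction: with X and Y independent fair bits and Z = X XOR Y, every pair of variables is independent, yet the three are not jointly independent.

```python
import itertools

# Exhaustive check of the classical counterexample: X, Y independent fair
# bits, Z = X XOR Y. Pairwise independent, but not mutually independent.
outcomes = [(x, y, x ^ y) for x, y in itertools.product((0, 1), repeat=2)]
p = 1.0 / len(outcomes)                    # uniform over the 4 outcomes

def prob(event):
    return sum(p for o in outcomes if event(o))

# every pair: P(a=1, b=1) = P(a=1) * P(b=1) = 1/4
pairwise_ok = all(
    prob(lambda o: o[i] == 1 and o[j] == 1)
    == prob(lambda o: o[i] == 1) * prob(lambda o: o[j] == 1)
    for i, j in [(0, 1), (0, 2), (1, 2)]
)
# but jointly: P(X=1, Y=1, Z=1) = 0, not the 1/8 full independence predicts
triple = prob(lambda o: o == (1, 1, 1))
print(pairwise_ok, triple)
```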
Statistical clumped isotope signatures.
Röckmann, T; Popa, M E; Krol, M C; Hofmann, M E G
2016-08-18
High precision measurements of molecules containing more than one heavy isotope may provide novel constraints on element cycles in nature. These so-called clumped isotope signatures are reported relative to the random (stochastic) distribution of heavy isotopes over all available isotopocules of a molecule, which is the conventional reference. When multiple indistinguishable atoms of the same element are present in a molecule, this reference is calculated from the bulk (≈average) isotopic composition of the involved atoms. We show here that this referencing convention leads to apparent negative clumped isotope anomalies (anti-clumping) when the indistinguishable atoms originate from isotopically different populations. Such statistical clumped isotope anomalies must occur in any system where two or more indistinguishable atoms of the same element, but with different isotopic composition, combine in a molecule. The size of the anti-clumping signal is closely related to the difference of the initial isotope ratios of the indistinguishable atoms that have combined. Therefore, a measured statistical clumped isotope anomaly, relative to an expected (e.g. thermodynamical) clumped isotope composition, may allow assessment of the heterogeneity of the isotopic pools of atoms that are the substrate for formation of molecules.
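A numerical illustration of the statistical anti-clumping described above, with invented pool compositions: a molecule combines two indistinguishable atoms drawn from pools with different heavy-isotope fractions, and the stochastic reference computed from the bulk (average) composition overestimates the doubly-substituted abundance.

```python
# Toy numbers (invented): a molecule takes one atom from pool A and one from
# pool B, with different heavy-isotope fractions. The stochastic reference
# built from the bulk composition gives an apparent negative anomaly.
p_a, p_b = 0.010, 0.020            # heavy-atom fraction in pools A and B
p_bulk = 0.5 * (p_a + p_b)         # bulk composition of the combined atoms

actual = p_a * p_b                 # true probability both atoms are heavy
reference = p_bulk ** 2            # stochastic reference from the bulk
delta = actual / reference - 1.0   # clumped-isotope anomaly

print(delta)  # negative: apparent anti-clumping, here about -11%
```

The sign is guaranteed: by the AM-GM inequality, p_a·p_b ≤ ((p_a+p_b)/2)², with equality only when the two pools have identical composition, matching the text's statement that the anomaly tracks the difference between the pools.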
International petroleum statistics report
1997-05-01
The International Petroleum Statistics Report is a monthly publication that provides current international oil data. This report is published for the use of Members of Congress, Federal agencies, State agencies, industry, and the general public. Publication of this report is in keeping with responsibilities given the Energy Information Administration in Public Law 95-91. The International Petroleum Statistics Report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1985 through 1995.
[Comment on] Statistical discrimination
NASA Astrophysics Data System (ADS)
Chinn, Douglas
In the December 8, 1981, issue of Eos, a news item reported the conclusion of a National Research Council study that sexual discrimination against women with Ph.D.'s exists in the field of geophysics. Basically, the item reported that even when allowances are made for motherhood the percentage of female Ph.D.'s holding high university and corporate positions is significantly lower than the percentage of male Ph.D.'s holding the same types of positions. The sexual discrimination conclusion, based only on these statistics, assumes that there are no basic psychological differences between men and women that might cause different populations in the employment group studied. Therefore, the reasoning goes, after taking into account possible effects from differences related to anatomy, such as women stopping their careers in order to bear and raise children, the statistical distributions of positions held by male and female Ph.D.'s ought to be very similar to one another. Any significant differences between the distributions must be caused primarily by sexual discrimination.
Statistical clumped isotope signatures
NASA Astrophysics Data System (ADS)
Röckmann, T.; Popa, M. E.; Krol, M. C.; Hofmann, M. E. G.
2016-08-01
High precision measurements of molecules containing more than one heavy isotope may provide novel constraints on element cycles in nature. These so-called clumped isotope signatures are reported relative to the random (stochastic) distribution of heavy isotopes over all available isotopocules of a molecule, which is the conventional reference. When multiple indistinguishable atoms of the same element are present in a molecule, this reference is calculated from the bulk (≈average) isotopic composition of the involved atoms. We show here that this referencing convention leads to apparent negative clumped isotope anomalies (anti-clumping) when the indistinguishable atoms originate from isotopically different populations. Such statistical clumped isotope anomalies must occur in any system where two or more indistinguishable atoms of the same element, but with different isotopic composition, combine in a molecule. The size of the anti-clumping signal is closely related to the difference of the initial isotope ratios of the indistinguishable atoms that have combined. Therefore, a measured statistical clumped isotope anomaly, relative to an expected (e.g. thermodynamical) clumped isotope composition, may allow assessment of the heterogeneity of the isotopic pools of atoms that are the substrate for formation of molecules.
Sufficient Statistics: an Example
NASA Technical Reports Server (NTRS)
Quirein, J.
1973-01-01
The feature selection problem resulting from the transformation x = Bz, where B is a k by n matrix of rank k with k ≤ n, is considered. Such a transformation reduces the dimension of each observation vector z and, in general, entails a loss of information. In terms of the divergence, this information loss is expressed by the fact that the average divergence D_B computed using variable x is less than or equal to the average divergence D computed using variable z. If D_B = D, then B is said to be a sufficient statistic for the average divergence D. If B is a sufficient statistic for the average divergence, then it can be shown that the probability of misclassification computed using variable x (of dimension k ≤ n) is equal to the probability of misclassification computed using variable z. Also included is what is believed to be a new proof of the well-known fact that D ≥ D_B. Using the techniques needed to prove this fact, it is shown that the Bhattacharyya distance as measured by variable x is less than or equal to the Bhattacharyya distance as measured by variable z.
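A numerical check of the inequality discussed above, using the closed-form Bhattacharyya distance between two Gaussian class distributions (the class parameters and the reduction matrix B are invented): the distance measured after a rank-k linear reduction x = Bz never exceeds the distance measured on the original variable z.

```python
import numpy as np

# Hedged sketch (class distributions invented): for Gaussian classes,
# compare the Bhattacharyya distance before and after the linear
# reduction x = Bz with a k-by-n matrix B of rank k, k <= n.
def bhattacharyya(m1, c1, m2, c2):
    c = 0.5 * (c1 + c2)
    d = m1 - m2
    term1 = 0.125 * d @ np.linalg.solve(c, d)
    term2 = 0.5 * np.log(
        np.linalg.det(c) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2))
    )
    return term1 + term2

m1, m2 = np.array([0.0, 0.0]), np.array([1.0, 2.0])
c1 = np.array([[1.0, 0.2], [0.2, 1.0]])
c2 = np.array([[1.5, -0.3], [-0.3, 0.8]])

B = np.array([[1.0, 1.0]])                 # k = 1, n = 2 reduction
d_full = bhattacharyya(m1, c1, m2, c2)
d_reduced = bhattacharyya(B @ m1, B @ c1 @ B.T, B @ m2, B @ c2 @ B.T)
print(d_reduced, d_full)  # reduced distance does not exceed the full one
```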
ERIC Educational Resources Information Center
Perepiczka, Michelle; Chandler, Nichelle; Becerra, Michael
2011-01-01
Statistics plays an integral role in graduate programs. However, numerous intra- and interpersonal factors may lead to successful completion of needed coursework in this area. The authors examined the extent of the relationship between self-efficacy to learn statistics and statistics anxiety, attitude towards statistics, and social support of 166…
NASA Astrophysics Data System (ADS)
Talkner, Peter
2003-03-01
The statistical properties of discrete Markov processes are investigated in terms of entrance times. Simple relations are given for their density and higher-order distributions. These quantities are used to introduce a generalized Rice phase and to characterize the synchronization of a process with an external driving force. For the McNamara-Wiesenfeld model of stochastic resonance, parameter regions (spanned by the noise strength and the driving frequency and strength) are identified in which the process is locked to the frequency of the external driving and in which the diffusion of the Rice phase becomes minimal. At the same time, the Fano factor of the number of entrances per period of the driving force has a minimum.
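The Fano factor mentioned above, computed for invented per-period entrance counts: F = Var(N)/Mean(N) equals 1 for a Poisson process, while a locked process (nearly the same count every period) drives F toward zero.

```python
import numpy as np

# Toy counts (invented, not simulated from the model): Fano factor of the
# number of entrances per driving period, F = Var(N) / Mean(N).
rng = np.random.default_rng(3)
poisson_counts = rng.poisson(5.0, 10_000)            # unlocked, Poisson-like
locked_counts = rng.choice([1, 1, 1, 1, 2], 10_000)  # almost always 1 entrance

def fano(n):
    return n.var() / n.mean()

print(fano(poisson_counts), fano(locked_counts))  # near 1 vs well below 1
```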
Dienes, J.K.
1983-01-01
An alternative to the use of plasticity theory to characterize the inelastic behavior of solids is to represent the flaws by statistical methods. We have taken such an approach to study fragmentation because it offers a number of advantages. Foremost among these is that, by considering the effects of flaws, it becomes possible to address the underlying physics directly. For example, we have been able to explain why rocks exhibit large strain-rate effects (a consequence of the finite growth rate of cracks), why a spherical explosive imbedded in oil shale produces a cavity with a nearly square section (opening of bedding cracks) and why propellants may detonate following low-speed impact (a consequence of frictional hot spots).
Conditional statistical model building
NASA Astrophysics Data System (ADS)
Hansen, Mads Fogtmann; Hansen, Michael Sass; Larsen, Rasmus
2008-03-01
We present a new statistical deformation model suited for parameterized grids with different resolutions. Our method models the covariances between multiple grid levels explicitly and allows for very efficient fitting of the model to data on multiple scales. The model is validated on a data set consisting of 62 annotated MR images of the Corpus Callosum. One fifth of the data set was used as a training set, whose images were non-rigidly registered to each other without a shape prior. From the non-rigidly registered training set, a shape prior was constructed by performing principal component analysis on each grid level and using the results to construct a conditional shape model, conditioning the finer parameters on the coarser grid levels. The remaining shapes were registered with the constructed shape prior. The Dice measures for the registration without a prior and the registration with a prior were 0.875 +/- 0.042 and 0.8615 +/- 0.051, respectively.
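The Dice measure used above to score registration quality, in its usual set-overlap form on toy binary masks (not the Corpus Callosum data): Dice(A, B) = 2|A ∩ B| / (|A| + |B|), which is 1 for perfect overlap and 0 for none.

```python
import numpy as np

# Dice coefficient on invented toy masks: 2|A ∩ B| / (|A| + |B|).
def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True    # 36 pixels
b = np.zeros((10, 10), dtype=bool); b[4:10, 4:10] = True  # 36 pixels, shifted
print(dice(a, b))  # 2*16 / (36+36) ≈ 0.444
```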
Statistical design controversy
Evans, L.S.; Hendrey, G.R.; Thompson, K.H.
1985-02-01
This article responds to criticisms that the earlier article by Evans, Hendrey, and Thompson was biased by omissions and misrepresentations. The authors' contention that experimental designs with only one plot per treatment "were, from the outset, not capable of differentiating between treatment effects and field-position effects" remains valid and is supported by decades of agronomic research. Irving, Troiano, and McCune treated the article as a review of all studies of acidic rain effects on soybeans. It was not. It was written out of concern over comparisons being made among studies that purport to evaluate the effects of acid deposition on field-grown crops, comparisons that implicitly assume all of the studies are of equal scientific value. They are not. Only experimental approaches that are well focused and designed with appropriate agronomic and statistical procedures should be used for credible regional and national assessments of crop inventories. 12 references.
On the Statistics of Macrospicules
NASA Astrophysics Data System (ADS)
Bennett, S. M.; Erdélyi, R.
2015-08-01
A new generation of solar telescopes has led to an increase in the resolution of localized features seen on the Sun spatially, temporally, and spectrally, enabling a detailed study of macrospicules. Macrospicules are members of a wide variety of solar ejecta, and ascertaining where they belong in this family is vitally important, particularly given that they are chromospheric events which penetrate the transition region and lower corona. We examine the overall properties of macrospicules, both temporal and spatial. We also investigate possible relationships between the macrospicule properties and the sample time period itself, which is selected as a proxy for the ramp from solar minimum to solar maximum. Measurements are taken using the Solar Dynamics Observatory to provide the necessary temporal resolution and coverage. At each point in time, the length of the macrospicule is measured from base to tip and the width is recorded at half the length at each step. The measurements were then applied to determine the statistical properties and relationships between them. It is evident that the properties of maximum velocity, maximum length, and lifetime are all related in specific, established terms. We provide appropriate scaling in terms of the physical properties, which would be a useful test bed for modeling. Also, we note that the maximum lengths and lifetimes of the features show some correlation with the sample epoch and, therefore, by proxy the solar minimum to maximum ramp.
Wide Wide World of Statistics: International Statistics on the Internet.
ERIC Educational Resources Information Center
Foudy, Geraldine
2000-01-01
Explains how to find statistics on the Internet, especially international statistics. Discusses advantages over print sources, including convenience, currency of information, cost effectiveness, and value-added formatting; sources of international statistics; United Nations agencies; search engines and power searching; and evaluating sources. (LRW)
Understanding Statistics and Statistics Education: A Chinese Perspective
ERIC Educational Resources Information Center
Shi, Ning-Zhong; He, Xuming; Tao, Jian
2009-01-01
In recent years, statistics education in China has made great strides. However, there still exists a fairly large gap with the advanced levels of statistics education in more developed countries. In this paper, we identify some existing problems in statistics education in Chinese schools and make some proposals as to how they may be overcome. We…
Statistical Literacy: Developing a Youth and Adult Education Statistical Project
ERIC Educational Resources Information Center
Conti, Keli Cristina; Lucchesi de Carvalho, Dione
2014-01-01
This article focuses on the notion of literacy--general and statistical--in the analysis of data from a fieldwork research project carried out as part of a master's degree that investigated the teaching and learning of statistics in adult education mathematics classes. We describe the statistical context of the project that involved the…
Statistical physical models of cellular motility
NASA Astrophysics Data System (ADS)
Banigan, Edward J.
Cellular motility is required for a wide range of biological behaviors and functions, and the topic poses a number of interesting physical questions. In this work, we construct and analyze models of various aspects of cellular motility using tools and ideas from statistical physics. We begin with a Brownian dynamics model for actin-polymerization-driven motility, which is responsible for cell crawling and "rocketing" motility of pathogens. Within this model, we explore the robustness of self-diffusiophoresis, which is a general mechanism of motility. Using this mechanism, an object such as a cell catalyzes a reaction that generates a steady-state concentration gradient that propels the object in a particular direction. We then apply these ideas to a model for depolymerization-driven motility during bacterial chromosome segregation. We find that depolymerization and protein-protein binding interactions alone are sufficient to robustly pull a chromosome, even against large loads. Next, we investigate how forces and kinetics interact during eukaryotic mitosis with a many-microtubule model. Microtubules exert forces on chromosomes, but since individual microtubules grow and shrink in a force-dependent way, these forces lead to bistable collective microtubule dynamics, which provides a mechanism for chromosome oscillations and microtubule-based tension sensing. Finally, we explore kinematic aspects of cell motility in the context of the immune system. We develop quantitative methods for analyzing cell migration statistics collected during imaging experiments. We find that during chronic infection in the brain, T cells run and pause stochastically, following the statistics of a generalized Levy walk. These statistics may contribute to immune function by mimicking an evolutionarily conserved efficient search strategy. Additionally, we find that naive T cells migrating in lymph nodes also obey non-Gaussian statistics. Altogether, our work demonstrates how physical
Analysis and modeling of resistive switching statistics
NASA Astrophysics Data System (ADS)
Long, Shibing; Cagli, Carlo; Ielmini, Daniele; Liu, Ming; Suñé, Jordi
2012-04-01
The resistive random access memory (RRAM), based on the reversible switching between different resistance states, is a promising candidate for next-generation nonvolatile memories. One of the most important challenges to foster the practical application of RRAM is the control of the statistical variation of switching parameters to gain low variability and high reliability. In this work, starting from the well-known percolation model of dielectric breakdown (BD), we establish a framework of analysis and modeling of the resistive switching statistics in RRAM devices, which are based on the formation and disconnection of a conducting filament (CF). One key aspect of our proposal is the relation between the CF resistance and the switching statistics. Hence, establishing the correlation between SET and RESET switching variables and the initial resistance of the device in the OFF and ON states, respectively, is a fundamental issue. Our modeling approach to the switching statistics is fully analytical and contains two main elements: (i) a geometrical cell-based description of the CF and (ii) a deterministic model for the switching dynamics. Both ingredients might be slightly different for the SET and RESET processes, for the type of switching (bipolar or unipolar), and for the kind of considered resistive structure (oxide-based, conductive bridge, etc.). However, the basic structure of our approach is thought to be useful for all the cases and should provide a framework for the physics-based understanding of the switching mechanisms and the associated statistics, for the trustful estimation of RRAM performance, and for the successful forecast of reliability. As a first application example, we start by considering the case of the RESET statistics of NiO-based RRAM structures. In particular, we statistically analyze the RESET transitions of a statistically significant number of switching cycles of Pt/NiO/W devices. In the RESET transition, the ON-state resistance (RON) is a
Heart Disease and Stroke Statistics
Fact sheets are available on topics including nutrition, obesity, and peripheral artery disease. For more statistics, please contact the American Heart Association National Center, Office of Science & Medicine at statistics@heart.org.
Muscular Dystrophy: Data and Statistics
MD STAR net Data and Statistics presents data drawn from MD STAR net research. For more information on MD STAR net, see Research and Tracking.
Thoughts About Theories and Statistics.
Fawcett, Jacqueline
2015-07-01
The purpose of this essay is to share my ideas about the connection between theories and statistics. The essay content reflects my concerns about some researchers' and readers' apparent lack of clarity about what constitutes appropriate statistical testing and conclusions about the empirical adequacy of theories. The reciprocal relation between theories and statistics is emphasized and the conclusion is that statistics without direction from theory is no more than a hobby.
Springer Handbook of Engineering Statistics
NASA Astrophysics Data System (ADS)
Pham, Hoang
The Springer Handbook of Engineering Statistics gathers together the full range of statistical techniques required by engineers from all fields to gain sensible statistical feedback on how their processes or products are functioning and to give them realistic predictions of how these could be improved.
Statistical log analysis made practical
Mitchell, W.K.; Nelson, R.J.
1991-06-01
This paper discusses the advantages of a statistical approach to log analysis. Statistical techniques use inverse methods to calculate formation parameters. The use of statistical techniques has been limited, however, by the complexity of the mathematics and lengthy computer time required to minimize traditionally used nonlinear equations.
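The inverse approach the paper advocates can be illustrated with a weighted least-squares inversion of log readings for formation volume fractions. The end-member response values, log names, and measurement uncertainties below are hypothetical numbers chosen only to make the example run; they are not taken from the paper.

```python
import numpy as np

# Each log reading is modeled as a linear mix of end-member responses
# weighted by the volume fractions of the formation components.
# Rows: density (g/cc), neutron porosity (v/v), sonic (us/ft), unity.
# Columns: quartz, water-filled porosity, shale (illustrative values).
R = np.array([
    [2.65,   1.00,  2.45],   # density response of each component
    [0.00,   1.00,  0.30],   # neutron response
    [55.0, 189.0,  90.0],    # sonic response
    [1.00,   1.00,  1.00],   # volume fractions must sum to 1
])

measured = np.array([2.40, 0.18, 76.0, 1.00])  # readings at one depth level

# Inverse problem: find the fractions that best reproduce the measurements
# in the weighted least-squares sense. Weights ~ 1/sigma per log; the unity
# constraint is weighted heavily so the fractions close to 100%.
weights = np.array([1 / 0.02, 1 / 0.01, 1 / 2.0, 1 / 0.001])
W = np.diag(weights)
fractions, *_ = np.linalg.lstsq(W @ R, W @ measured, rcond=None)

for name, v in zip(["quartz", "porosity", "shale"], fractions):
    print(f"{name}: {v:.3f}")
```

Because the system is overdetermined, the solver distributes the misfit across the logs according to their assumed uncertainties instead of forcing an exact fit, which is precisely what distinguishes the statistical approach from fixed closed-form crossplot formulas.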
Invention Activities Support Statistical Reasoning
ERIC Educational Resources Information Center
Smith, Carmen Petrick; Kenlan, Kris
2016-01-01
Students' experiences with statistics and data analysis in middle school are often limited to little more than making and interpreting graphs. Although students may develop fluency in statistical procedures and vocabulary, they frequently lack the skills necessary to apply statistical reasoning in situations other than clear-cut textbook examples.…
Explorations in Statistics: the Bootstrap
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2009-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fourth installment of Explorations in Statistics explores the bootstrap. The bootstrap gives us an empirical approach to estimate the theoretical variability among possible values of a sample statistic such as the…
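The resampling idea can be made concrete with a minimal sketch: resample the data with replacement many times, compute the statistic of interest on each resample, and read the variability off the resulting collection. The data values and resample count below are arbitrary choices for illustration.

```python
import random
import statistics

def bootstrap_means(sample, n_resamples=10_000, seed=0):
    """Resample with replacement and collect the mean of each resample."""
    rng = random.Random(seed)
    n = len(sample)
    return [statistics.mean(rng.choices(sample, k=n))
            for _ in range(n_resamples)]

data = [4.1, 5.3, 2.8, 6.0, 4.7, 5.9, 3.3, 4.4]
means = bootstrap_means(data)

# The spread of the bootstrap means is an empirical estimate of the
# standard error of the sample mean -- no formula required.
se = statistics.stdev(means)
print(f"sample mean = {statistics.mean(data):.2f}, bootstrap SE = {se:.2f}")
```

The same loop works for any statistic (median, trimmed mean, correlation) simply by swapping out `statistics.mean`, which is what makes the bootstrap a general-purpose tool for exploring sampling variability.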
Teaching Statistics Online Using "Excel"
ERIC Educational Resources Information Center
Jerome, Lawrence
2011-01-01
As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…
Statistics Anxiety and Instructor Immediacy
ERIC Educational Resources Information Center
Williams, Amanda S.
2010-01-01
The purpose of this study was to investigate the relationship between instructor immediacy and statistics anxiety. It was predicted that students receiving immediacy would report lower levels of statistics anxiety. Using a pretest-posttest-control group design, immediacy was measured using the Instructor Immediacy scale. Statistics anxiety was…
Statistics: It's in the Numbers!
ERIC Educational Resources Information Center
Deal, Mary M.; Deal, Walter F., III
2007-01-01
Mathematics and statistics play important roles in peoples' lives today. A day hardly passes that they are not bombarded with many different kinds of statistics. As consumers they see statistical information as they surf the web, watch television, listen to their satellite radios, or even read the nutrition facts panel on a cereal box in the…
Statistics of indistinguishable particles.
Wittig, Curt
2009-07-01
The wave function of a system containing identical particles takes into account the relationship between a particle's intrinsic spin and its statistical property. Specifically, the exchange of two identical particles having odd-half-integer spin results in the wave function changing sign, whereas the exchange of two identical particles having integer spin is accompanied by no such sign change. This is embodied in a term (-1)^(2s), which has the value +1 for integer s (bosons), and -1 for odd-half-integer s (fermions), where s is the particle spin. All of this is well-known. In the nonrelativistic limit, a detailed consideration of the exchange of two identical particles shows that exchange is accompanied by a 2π reorientation that yields the (-1)^(2s) term. The same bookkeeping is applicable to the relativistic case described by the proper orthochronous Lorentz group, because any proper orthochronous Lorentz transformation can be expressed as the product of spatial rotations and a boost along the direction of motion. PMID:19552474
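The bookkeeping of the exchange factor is simple enough to spell out directly: the sign picked up under exchange is (-1)^(2s), so integer spins give +1 (bosons) and odd-half-integer spins give -1 (fermions). The helper below is purely illustrative.

```python
from fractions import Fraction

def exchange_sign(s):
    """Sign picked up by the wave function when two identical particles
    of spin s are exchanged: (-1)**(2s), i.e. +1 for bosons (integer s)
    and -1 for fermions (odd-half-integer s)."""
    two_s = Fraction(s) * 2
    if two_s.denominator != 1:
        raise ValueError("spin must be an integer or half-integer")
    return (-1) ** int(two_s)

print(exchange_sign(0))               # spin-0 boson      -> 1
print(exchange_sign(Fraction(1, 2)))  # spin-1/2 fermion  -> -1
print(exchange_sign(1))               # spin-1 boson      -> 1
print(exchange_sign(Fraction(3, 2)))  # spin-3/2 fermion  -> -1
```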
International petroleum statistics report
1996-05-01
The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1984 through 1994.
International petroleum statistics report
1995-11-01
The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1994; OECD stocks from 1973 through 1994; and OECD trade from 1984 through 1994.
International petroleum statistics report
1995-07-27
The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1994; OECD stocks from 1973 through 1994; and OECD trade from 1984 through 1994.
Topics in statistical mechanics
Elser, V.
1984-05-01
This thesis deals with four independent topics in statistical mechanics: (1) the dimer problem is solved exactly for a hexagonal lattice with general boundary using a known generating function from the theory of partitions. It is shown that the leading term in the entropy depends on the shape of the boundary; (2) continuum models of percolation and self-avoiding walks are introduced with the property that their series expansions are sums over linear graphs with intrinsic combinatorial weights and explicit dimension dependence; (3) a constrained SOS model is used to describe the edge of a simple cubic crystal. Low and high temperature results are derived as well as the detailed behavior near the crystal facet; (4) the microscopic model of the lambda-transition involving atomic permutation cycles is reexamined. In particular, a new derivation of the two-component field theory model of the critical behavior is presented. Results for a lattice model originally proposed by Kikuchi are extended with a high temperature series expansion and Monte Carlo simulation. 30 references.
Statistical mechanics of nucleosomes
NASA Astrophysics Data System (ADS)
Chereji, Razvan V.
Eukaryotic cells contain long DNA molecules (about two meters for a human cell) which are tightly packed inside the micrometric nuclei. Nucleosomes are the basic packaging unit of the DNA which allows this millionfold compactification. A longstanding puzzle is to understand the principles which allow cells to both organize their genomes into chromatin fibers in the crowded space of their nuclei, and also to keep the DNA accessible to many factors and enzymes. With the nucleosomes covering about three quarters of the DNA, their positions are essential because these influence which genes can be regulated by the transcription factors and which cannot. We study physical models which predict the genome-wide organization of the nucleosomes and also the relevant energies which dictate this organization. In the last five years, the study of chromatin has seen many important advances. In particular, in the field of nucleosome positioning, new techniques for identifying nucleosomes and the competing DNA-binding factors have appeared, such as chemical mapping with hydroxyl radicals and ChIP-exo; the resolution of nucleosome maps has increased through paired-end sequencing; and the price of sequencing an entire genome has decreased. We present a rigorous statistical mechanics model which is able to explain the recent experimental results by taking into account nucleosome unwrapping, competition between different DNA-binding proteins, and both the interaction between histones and DNA, and between neighboring histones. We show a series of predictions of our new model, all in agreement with the experimental observations.