M≥7 Earthquake rupture forecast and time-dependent probability for the Sea of Marmara region, Turkey
Murru, Maura; Akinci, Aybige; Falcone, Guiseppe; Pucci, Stefano; Console, Rodolfo; Parsons, Thomas E.
2016-01-01
We forecast time-independent and time-dependent earthquake ruptures in the Marmara region of Turkey for the next 30 years using a new fault-segmentation model. We also augment time-dependent Brownian Passage Time (BPT) probability with static Coulomb stress changes (ΔCFF) from interacting faults. We calculate Mw > 6.5 probability from 26 individual fault sources in the Marmara region. We also consider a multisegment rupture model that allows higher-magnitude ruptures over some segments of the Northern branch of the North Anatolian Fault Zone (NNAF) beneath the Marmara Sea. A total of 10 different Mw=7.0 to Mw=8.0 multisegment ruptures are combined with the other regional faults at rates that balance the overall moment accumulation. We use Gaussian random distributions to treat parameter uncertainties (e.g., aperiodicity, maximum expected magnitude, slip rate, and consequently mean recurrence time) of the statistical distributions associated with each fault source. We then estimate uncertainties of the 30-year probability values for the next characteristic event obtained from three different models (Poisson, BPT, and BPT+ΔCFF) using a Monte Carlo procedure. The Gerede fault segment located at the eastern end of the Marmara region shows the highest 30-yr probability, with a Poisson value of 29%, and a time-dependent interaction probability of 48%. We find an aggregated 30-yr Poisson probability of M >7.3 earthquakes at Istanbul of 35%, which increases to 47% if time dependence and stress transfer are considered. We calculate a 2-fold probability gain (ratio time-dependent to time-independent) on the southern strands of the North Anatolian Fault Zone.
[Socio-demographic and health factors associated with the institutionalization of dependent people].
Ayuso Gutiérrez, Mercedes; Pozo Rubio, Raúl Del; Escribano Sotos, Francisco
2010-01-01
The analysis of the effect that different variables have in the probability that dependent people are institutionalized is a topic scantily studied in Spain. The aim of the work is to analyze as certain socio-demographic and health factors can influence probability of dependent person living in a residence. A cross-section study has been conducted from a representative sample of the dependent population in Cuenca (Spain) in February, 2009. We have obtained information for people with level II and III of dependence. A binary logit regression model has been estimated to identify those factors related to the institutionalization of dependent people. People with ages between 65-74 years old are six times more likely to be institutionalized than younger people (< 65 years old); this probability increases sixteen times for those individuals with ages equal or higher than 95 years. The probability of institutionalization of people who live in an urban area is three times the probability of people who live in a rural area. People who need pharmacological, psychotherapy or rehabilitation treatments have between two and four times more probability of being institutionalized that those who do not need those. Age, marital status, place of residence, cardiovascular and musculoskeletal diseases and four times of medical treatment are the principal variables associated with the institutionalization of dependent people.
Local regularity for time-dependent tug-of-war games with varying probabilities
NASA Astrophysics Data System (ADS)
Parviainen, Mikko; Ruosteenoja, Eero
2016-07-01
We study local regularity properties of value functions of time-dependent tug-of-war games. For games with constant probabilities we get local Lipschitz continuity. For more general games with probabilities depending on space and time we obtain Hölder and Harnack estimates. The games have a connection to the normalized p (x , t)-parabolic equation ut = Δu + (p (x , t) - 2) Δ∞N u.
Recalculated probability of M ≥ 7 earthquakes beneath the Sea of Marmara, Turkey
Parsons, T.
2004-01-01
New earthquake probability calculations are made for the Sea of Marmara region and the city of Istanbul, providing a revised forecast and an evaluation of time-dependent interaction techniques. Calculations incorporate newly obtained bathymetric images of the North Anatolian fault beneath the Sea of Marmara [Le Pichon et al., 2001; Armijo et al., 2002]. Newly interpreted fault segmentation enables an improved regional A.D. 1500-2000 earthquake catalog and interevent model, which form the basis for time-dependent probability estimates. Calculations presented here also employ detailed models of coseismic and postseismic slip associated with the 17 August 1999 M = 7.4 Izmit earthquake to investigate effects of stress transfer on seismic hazard. Probability changes caused by the 1999 shock depend on Marmara Sea fault-stressing rates, which are calculated with a new finite element model. The combined 2004-2034 regional Poisson probability of M≥7 earthquakes is ~38%, the regional time-dependent probability is 44 ± 18%, and incorporation of stress transfer raises it to 53 ± 18%. The most important effect of adding time dependence and stress transfer to the calculations is an increase in the 30 year probability of a M ??? 7 earthquake affecting Istanbul. The 30 year Poisson probability at Istanbul is 21%, and the addition of time dependence and stress transfer raises it to 41 ± 14%. The ranges given on probability values are sensitivities of the calculations to input parameters determined by Monte Carlo analysis; 1000 calculations are made using parameters drawn at random from distributions. Sensitivities are large relative to mean probability values and enhancements caused by stress transfer, reflecting a poor understanding of large-earthquake aperiodicity.
Time-dependent landslide probability mapping
Campbell, Russell H.; Bernknopf, Richard L.; ,
1993-01-01
Case studies where time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geomorphic information system ) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.
Conditional, Time-Dependent Probabilities for Segmented Type-A Faults in the WGCEP UCERF 2
Field, Edward H.; Gupta, Vipin
2008-01-01
This appendix presents elastic-rebound-theory (ERT) motivated time-dependent probabilities, conditioned on the date of last earthquake, for the segmented type-A fault models of the 2007 Working Group on California Earthquake Probabilities (WGCEP). These probabilities are included as one option in the WGCEP?s Uniform California Earthquake Rupture Forecast 2 (UCERF 2), with the other options being time-independent Poisson probabilities and an ?Empirical? model based on observed seismicity rate changes. A more general discussion of the pros and cons of all methods for computing time-dependent probabilities, as well as the justification of those chosen for UCERF 2, are given in the main body of this report (and the 'Empirical' model is also discussed in Appendix M). What this appendix addresses is the computation of conditional, time-dependent probabilities when both single- and multi-segment ruptures are included in the model. Computing conditional probabilities is relatively straightforward when a fault is assumed to obey strict segmentation in the sense that no multi-segment ruptures occur (e.g., WGCEP (1988, 1990) or see Field (2007) for a review of all previous WGCEPs; from here we assume basic familiarity with conditional probability calculations). However, and as we?ll see below, the calculation is not straightforward when multi-segment ruptures are included, in essence because we are attempting to apply a point-process model to a non point process. The next section gives a review and evaluation of the single- and multi-segment rupture probability-calculation methods used in the most recent statewide forecast for California (WGCEP UCERF 1; Petersen et al., 2007). We then present results for the methodology adopted here for UCERF 2. We finish with a discussion of issues and possible alternative approaches that could be explored and perhaps applied in the future. A fault-by-fault comparison of UCERF 2 probabilities with those of previous studies is given in the main part of this report.
Generalized Success-Breeds-Success Principle Leading to Time-Dependent Informetric Distributions.
ERIC Educational Resources Information Center
Egghe, Leo; Rousseau, Ronald
1995-01-01
Reformulates the success-breeds-success (SBS) principle in informetrics in order to generate a general theory of source-item relationships. Topics include a time-dependent probability, a new model for the expected probability that is compared with the SBS principle with exact combinatorial calculations, classical frequency distributions, and…
Quantitative assessment of building fire risk to life safety.
Guanquan, Chu; Jinhua, Sun
2008-06-01
This article presents a quantitative risk assessment framework for evaluating fire risk to life safety. Fire risk is divided into two parts: probability and corresponding consequence of every fire scenario. The time-dependent event tree technique is used to analyze probable fire scenarios based on the effect of fire protection systems on fire spread and smoke movement. To obtain the variation of occurrence probability with time, Markov chain is combined with a time-dependent event tree for stochastic analysis on the occurrence probability of fire scenarios. To obtain consequences of every fire scenario, some uncertainties are considered in the risk analysis process. When calculating the onset time to untenable conditions, a range of fires are designed based on different fire growth rates, after which uncertainty of onset time to untenable conditions can be characterized by probability distribution. When calculating occupant evacuation time, occupant premovement time is considered as a probability distribution. Consequences of a fire scenario can be evaluated according to probability distribution of evacuation time and onset time of untenable conditions. Then, fire risk to life safety can be evaluated based on occurrence probability and consequences of every fire scenario. To express the risk assessment method in detail, a commercial building is presented as a case study. A discussion compares the assessment result of the case study with fire statistics.
Anomalous Growth of Aging Populations
NASA Astrophysics Data System (ADS)
Grebenkov, Denis S.
2016-04-01
We consider a discrete-time population dynamics with age-dependent structure. At every time step, one of the alive individuals from the population is chosen randomly and removed with probability q_k depending on its age, whereas a new individual of age 1 is born with probability r. The model can also describe a single queue in which the service order is random while the service efficiency depends on a customer's "age" in the queue. We propose a mean field approximation to investigate the long-time asymptotic behavior of the mean population size. The age dependence is shown to lead to anomalous power-law growth of the population at the critical regime. The scaling exponent is determined by the asymptotic behavior of the probabilities q_k at large k. The mean field approximation is validated by Monte Carlo simulations.
Uniform California earthquake rupture forecast, version 2 (UCERF 2)
Field, E.H.; Dawson, T.E.; Felzer, K.R.; Frankel, A.D.; Gupta, V.; Jordan, T.H.; Parsons, T.; Petersen, M.D.; Stein, R.S.; Weldon, R.J.; Wills, C.J.
2009-01-01
The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ??? M ???7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ???6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ???7.5 and to 4.5% for M ???8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ???8.0 time-dependent probability is 10%. The M ???6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.
Burst wait time simulation of CALIBAN reactor at delayed super-critical state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humbert, P.; Authier, N.; Richard, B.
2012-07-01
In the past, the super prompt critical wait time probability distribution was measured on CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with a very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. However, in the delayed super-critical case the non-extinction probability does not give access to the wait time distribution. In this case it is necessary to compute the time dependent evolution of the full neutron count number probability distribution. In this paper we present themore » point model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level taking into account prompt neutrons and delayed neutron precursors. This method is based on the solution of the time dependent adjoint Kolmogorov master equations for the number of detections using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The obtained results are then compared to the measurements and Monte-Carlo calculations based on the algorithm presented in [7]. (authors)« less
Time‐dependent renewal‐model probabilities when date of last earthquake is unknown
Field, Edward H.; Jordan, Thomas H.
2015-01-01
We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models.We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
Ryabov, Artem; Berestneva, Ekaterina; Holubec, Viktor
2015-09-21
The paper addresses Brownian motion in the logarithmic potential with time-dependent strength, U(x, t) = g(t)log(x), subject to the absorbing boundary at the origin of coordinates. Such model can represent kinetics of diffusion-controlled reactions of charged molecules or escape of Brownian particles over a time-dependent entropic barrier at the end of a biological pore. We present a simple asymptotic theory which yields the long-time behavior of both the survival probability (first-passage properties) and the moments of the particle position (dynamics). The asymptotic survival probability, i.e., the probability that the particle will not hit the origin before a given time, is a functional of the potential strength. As such, it exhibits a rather varied behavior for different functions g(t). The latter can be grouped into three classes according to the regime of the asymptotic decay of the survival probability. We distinguish 1. the regular (power-law decay), 2. the marginal (power law times a slow function of time), and 3. the regime of enhanced absorption (decay faster than the power law, e.g., exponential). Results of the asymptotic theory show good agreement with numerical simulations.
Exploration properties of biased evanescent random walkers on a one-dimensional lattice
NASA Astrophysics Data System (ADS)
Esguerra, Jose Perico; Reyes, Jelian
2017-08-01
We investigate the combined effects of bias and evanescence on the characteristics of random walks on a one-dimensional lattice. We calculate the time-dependent return probability, eventual return probability, conditional mean return time, and the time-dependent mean number of visited sites of biased immortal and evanescent discrete-time random walkers on a one-dimensional lattice. We then extend the calculations to the case of a continuous-time step-coupled biased evanescent random walk on a one-dimensional lattice with an exponential waiting time distribution.
Scale Dependence of Spatiotemporal Intermittence of Rain
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Siddani, Ravi K.
2011-01-01
It is a common experience that rainfall is intermittent in space and time. This is reflected by the fact that the statistics of area- and/or time-averaged rain rate is described by a mixed distribution with a nonzero probability of having a sharp value zero. In this paper we have explored the dependence of the probability of zero rain on the averaging space and time scales in large multiyear data sets based on radar and rain gauge observations. A stretched exponential fannula fits the observed scale dependence of the zero-rain probability. The proposed formula makes it apparent that the space-time support of the rain field is not quite a set of measure zero as is sometimes supposed. We also give an ex.planation of the observed behavior in tenus of a simple probabilistic model based on the premise that rainfall process has an intrinsic memory.
A statistical analysis of the daily streamflow hydrograph
NASA Astrophysics Data System (ADS)
Kavvas, M. L.; Delleur, J. W.
1984-03-01
In this study a periodic statistical analysis of daily streamflow data in Indiana, U.S.A., was performed to gain some new insight into the stochastic structure which describes the daily streamflow process. This analysis was performed by the periodic mean and covariance functions of the daily streamflows, by the time and peak discharge -dependent recession limb of the daily streamflow hydrograph, by the time and discharge exceedance level (DEL) -dependent probability distribution of the hydrograph peak interarrival time, and by the time-dependent probability distribution of the time to peak discharge. Some new statistical estimators were developed and used in this study. In general features, this study has shown that: (a) the persistence properties of daily flows depend on the storage state of the basin at the specified time origin of the flow process; (b) the daily streamflow process is time irreversible; (c) the probability distribution of the daily hydrograph peak interarrival time depends both on the occurrence time of the peak from which the inter-arrival time originates and on the discharge exceedance level; and (d) if the daily streamflow process is modeled as the release from a linear watershed storage, this release should depend on the state of the storage and on the time of the release as the persistence properties and the recession limb decay rates were observed to change with the state of the watershed storage and time. Therefore, a time-varying reservoir system needs to be considered if the daily streamflow process is to be modeled as the release from a linear watershed storage.
Anusavice, Kenneth J; Jadaan, Osama M; Esquivel-Upshaw, Josephine F
2013-11-01
Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. The aims of this study were to test the hypotheses that: (1) an increase in core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2), and ceramic veneer (Empress 2 Veneer Ceramic). Predicted fracture probabilities (Pf) for centrally loaded 1.6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8mm/0.8mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4mm/1.2mm). CARES/Life results support the proposed crown design and load orientation hypotheses. The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represent an optimal approach to optimize fixed dental prosthesis designs produced from dental ceramics and to predict time-dependent fracture probabilities of ceramic-based fixed dental prostheses that can minimize the risk for clinical failures. Copyright © 2013 Academy of Dental Materials. All rights reserved.
Anusavice, Kenneth J.; Jadaan, Osama M.; Esquivel–Upshaw, Josephine
2013-01-01
Recent reports on bilayer ceramic crown prostheses suggest that fractures of the veneering ceramic represent the most common reason for prosthesis failure. Objective The aims of this study were to test the hypotheses that: (1) an increase in core ceramic/veneer ceramic thickness ratio for a crown thickness of 1.6 mm reduces the time-dependent fracture probability (Pf) of bilayer crowns with a lithium-disilicate-based glass-ceramic core, and (2) oblique loading, within the central fossa, increases Pf for 1.6-mm-thick crowns compared with vertical loading. Materials and methods Time-dependent fracture probabilities were calculated for 1.6-mm-thick, veneered lithium-disilicate-based glass-ceramic molar crowns as a function of core/veneer thickness ratio and load orientation in the central fossa area. Time-dependent fracture probability analyses were computed by CARES/Life software and finite element analysis, using dynamic fatigue strength data for monolithic discs of a lithium-disilicate glass-ceramic core (Empress 2), and ceramic veneer (Empress 2 Veneer Ceramic). Results Predicted fracture probabilities (Pf) for centrally-loaded 1,6-mm-thick bilayer crowns over periods of 1, 5, and 10 years are 1.2%, 2.7%, and 3.5%, respectively, for a core/veneer thickness ratio of 1.0 (0.8 mm/0.8 mm), and 2.5%, 5.1%, and 7.0%, respectively, for a core/veneer thickness ratio of 0.33 (0.4 mm/1.2 mm). Conclusion CARES/Life results support the proposed crown design and load orientation hypotheses. Significance The application of dynamic fatigue data, finite element stress analysis, and CARES/Life analysis represent an optimal approach to optimize fixed dental prosthesis designs produced from dental ceramics and to predict time-dependent fracture probabilities of ceramic-based fixed dental prostheses that can minimize the risk for clinical failures. PMID:24060349
NASA Astrophysics Data System (ADS)
Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.; Campbell, M. R.
2017-12-01
A challenge for earthquake hazard assessment is that geologic records often show large earthquakes occurring in temporal clusters separated by periods of quiescence. For example, in Cascadia, a paleoseismic record going back 10,000 years shows four to five clusters separated by approximately 1,000 year gaps. If we are still in the cluster that began 1700 years ago, a large earthquake is likely to happen soon. If the cluster has ended, a great earthquake is less likely. For a Gaussian distribution of recurrence times, the probability of an earthquake in the next 50 years is six times larger if we are still in the most recent cluster. Earthquake hazard assessments typically employ one of two recurrence models, neither of which directly incorporate clustering. In one, earthquake probability is time-independent and modeled as Poissonian, so an earthquake is equally likely at any time. The fault has no "memory" because when a prior earthquake occurred has no bearing on when the next will occur. The other common model is a time-dependent earthquake cycle in which the probability of an earthquake increases with time until one happens, after which the probability resets to zero. Because the probability is reset after each earthquake, the fault "remembers" only the last earthquake. This approach can be used with any assumed probability density function for recurrence times. We propose an alternative, Long-Term Fault Memory (LTFM), a modified earthquake cycle model where the probability of an earthquake increases with time until one happens, after which it decreases, but not necessarily to zero. Hence the probability of the next earthquake depends on the fault's history over multiple cycles, giving "long-term memory". Physically, this reflects an earthquake releasing only part of the elastic strain stored on the fault. We use the LTFM to simulate earthquake clustering along the San Andreas Fault and Cascadia. In some portions of the simulated earthquake history, events would appear quasiperiodic, while at other times, the events can appear more Poissonian. Hence a given paleoseismic or instrumental record may not reflect the long-term seismicity of a fault, which has important implications for hazard assessment.
Probabilities for time-dependent properties in classical and quantum mechanics
NASA Astrophysics Data System (ADS)
Losada, Marcelo; Vanni, Leonardo; Laura, Roberto
2013-05-01
We present a formalism which allows one to define probabilities for expressions that involve properties at different times for classical and quantum systems and we study its lattice structure. The formalism is based on the notion of time translation of properties. In the quantum case, the properties involved should satisfy compatibility conditions in order to obtain well-defined probabilities. The formalism is applied to describe the double-slit experiment.
Willems, Sjw; Schat, A; van Noorden, M S; Fiocco, M
2018-02-01
Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients' withdrawal from a study is independent of the event of interest. However, in practice, some covariates might be associated to both lifetime and censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, like Kaplan-Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time consuming and requires programming and mathematical skills, we propose a user friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulation process, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan-Meier approach where dependent censoring is ignored.
Elapsed decision time affects the weighting of prior probability in a perceptual decision task
Hanks, Timothy D.; Mazurek, Mark E.; Kiani, Roozbeh; Hopp, Elizabeth; Shadlen, Michael N.
2012-01-01
Decisions are often based on a combination of new evidence with prior knowledge of the probable best choice. Optimal combination requires knowledge about the reliability of evidence, but in many realistic situations, this is unknown. Here we propose and test a novel theory: the brain exploits elapsed time during decision formation to combine sensory evidence with prior probability. Elapsed time is useful because (i) decisions that linger tend to arise from less reliable evidence, and (ii) the expected accuracy at a given decision time depends on the reliability of the evidence gathered up to that point. These regularities allow the brain to combine prior information with sensory evidence by weighting the latter in accordance with reliability. To test this theory, we manipulated the prior probability of the rewarded choice while subjects performed a reaction-time discrimination of motion direction using a range of stimulus reliabilities that varied from trial to trial. The theory explains the effect of prior probability on choice and reaction time over a wide range of stimulus strengths. We found that prior probability was incorporated into the decision process as a dynamic bias signal that increases as a function of decision time. This bias signal depends on the speed-accuracy setting of human subjects, and it is reflected in the firing rates of neurons in the lateral intraparietal cortex (LIP) of rhesus monkeys performing this task. PMID:21525274
Elapsed decision time affects the weighting of prior probability in a perceptual decision task.
Hanks, Timothy D; Mazurek, Mark E; Kiani, Roozbeh; Hopp, Elisabeth; Shadlen, Michael N
2011-04-27
Decisions are often based on a combination of new evidence with prior knowledge of the probable best choice. Optimal combination requires knowledge about the reliability of evidence, but in many realistic situations, this is unknown. Here we propose and test a novel theory: the brain exploits elapsed time during decision formation to combine sensory evidence with prior probability. Elapsed time is useful because (1) decisions that linger tend to arise from less reliable evidence, and (2) the expected accuracy at a given decision time depends on the reliability of the evidence gathered up to that point. These regularities allow the brain to combine prior information with sensory evidence by weighting the latter in accordance with reliability. To test this theory, we manipulated the prior probability of the rewarded choice while subjects performed a reaction-time discrimination of motion direction using a range of stimulus reliabilities that varied from trial to trial. The theory explains the effect of prior probability on choice and reaction time over a wide range of stimulus strengths. We found that prior probability was incorporated into the decision process as a dynamic bias signal that increases as a function of decision time. This bias signal depends on the speed-accuracy setting of human subjects, and it is reflected in the firing rates of neurons in the lateral intraparietal area (LIP) of rhesus monkeys performing this task.
Pollitz, F.F.; Schwartz, D.P.
2008-01-01
We construct a viscoelastic cycle model of plate boundary deformation that includes the effect of time-dependent interseismic strain accumulation, coseismic strain release, and viscoelastic relaxation of the substrate beneath the seismogenic crust. For a given fault system, time-averaged stress changes at any point (not on a fault) are constrained to zero; that is, kinematic consistency is enforced for the fault system. The dates of last rupture, mean recurrence times, and the slip distributions of the (assumed) repeating ruptures are key inputs into the viscoelastic cycle model. This simple formulation allows construction of stress evolution at all points in the plate boundary zone for purposes of probabilistic seismic hazard analysis (PSHA). Stress evolution is combined with a Coulomb failure stress threshold at representative points on the fault segments to estimate the times of their respective future ruptures. In our PSHA we consider uncertainties in a four-dimensional parameter space: the rupture peridocities, slip distributions, time of last earthquake (for prehistoric ruptures) and Coulomb failure stress thresholds. We apply this methodology to the San Francisco Bay region using a recently determined fault chronology of area faults. Assuming single-segment rupture scenarios, we find that fature rupture probabilities of area faults in the coming decades are the highest for the southern Hayward, Rodgers Creek, and northern Calaveras faults. This conclusion is qualitatively similar to that of Working Group on California Earthquake Probabilities, but the probabilities derived here are significantly higher. Given that fault rupture probabilities are highly model-dependent, no single model should be used to assess to time-dependent rupture probabilities. We suggest that several models, including the present one, be used in a comprehensive PSHA methodology, as was done by Working Group on California Earthquake Probabilities.
A Time-Dependent Quantum Dynamics Study of the H2 + CH3 yields H + CH4 Reaction
NASA Technical Reports Server (NTRS)
Wang, Dunyou; Kwak, Dochan (Technical Monitor)
2002-01-01
We present a time-dependent wave-packet propagation calculation for the H2 + CH3 yields H + CH4 reaction in six degrees of freedom and for zero total angular momentum. Initial state selected reaction probability for different initial rotational-vibrational states are presented in this study. The cumulative reaction probability (CRP) is obtained by summing over initial-state-selected reaction probability. The energy-shift approximation to account for the contribution of degrees of freedom missing in the 6D calculation is employed to obtain an approximate full-dimensional CRP. Thermal rate constant is compared with different experiment results.
KINETICS OF LOW SOURCE REACTOR STARTUPS. PART II
DOE Office of Scientific and Technical Information (OSTI.GOV)
hurwitz, H. Jr.; MacMillan, D.B.; Smith, J.H.
1962-06-01
A computational technique is described for computation of the probability distribution of power level for a low source reactor startup. The technique uses a mathematical model, for the time-dependent probability distribution of neutron and precursor concentration, having finite neutron lifetime, one group of delayed neutron precursors, and no spatial dependence. Results obtained by the technique are given. (auth)
Anomalous transport in fluid field with random waiting time depending on the preceding jump length
NASA Astrophysics Data System (ADS)
Zhang, Hong; Li, Guo-Hua
2016-11-01
Anomalous (or non-Fickian) transport behaviors of particles have been widely observed in complex porous media. To capture the energy-dependent characteristics of non-Fickian transport of a particle in flow fields, in the present paper a generalized continuous time random walk model whose waiting time probability distribution depends on the preceding jump length is introduced, and the corresponding master equation in Fourier-Laplace space for the distribution of particles is derived. As examples, two generalized advection-dispersion equations for Gaussian distribution and lévy flight with the probability density function of waiting time being quadratic dependent on the preceding jump length are obtained by applying the derived master equation. Project supported by the Foundation for Young Key Teachers of Chengdu University of Technology, China (Grant No. KYGG201414) and the Opening Foundation of Geomathematics Key Laboratory of Sichuan Province, China (Grant No. scsxdz2013009).
Kalus, Stefanie; Kneib, Thomas; Steiger, Axel; Holsboer, Florian; Yassouridis, Alexander
2009-04-01
The human sleep process shows dynamic alterations during the night. Methods are needed to examine whether and to what extent such alterations are affected by internal, possibly time-dependent, factors, such as endocrine activity. In an observational study, we examined simultaneously sleep EEG and nocturnal levels of renin, growth hormone (GH), and cortisol (between 2300 and 0700) in 47 healthy volunteers comprising 24 women (41.67 +/- 2.93 yr of age) and 23 men (37.26 +/- 2.85 yr of age). Hormone concentrations were measured every 20 min. Conventional sleep stage scoring at 30-s intervals was applied. Semiparametric multinomial logit models are used to study and quantify possible time-dependent hormone effects on sleep stage transition courses. Results show that increased cortisol levels decrease the probability of transition from rapid-eye-movement (REM) sleep to wakefulness (WAKE) and increase the probability of transition from REM to non-REM (NREM) sleep, irrespective of the time in the night. Via the model selection criterion Akaike's information criterion, it was found that all considered hormone effects on transition probabilities with the initial state WAKE change with time. Similarly, transition from slow-wave sleep (SWS) to light sleep (LS) is affected by a "hormone-time" interaction for cortisol and renin, but not GH. For example, there is a considerable increase in the probability of SWS-LS transition toward the end of the night, when cortisol concentrations are very high. In summary, alterations in human sleep possess dynamic forms and are partially influenced by the endocrine activity of certain hormones. Statistical methods, such as semiparametric multinomial and time-dependent logit regression, can offer ambitious ways to investigate and estimate the association intensities between the nonstationary sleep changes and the time-dependent endocrine activities.
Variation of Time Domain Failure Probabilities of Jack-up with Wave Return Periods
NASA Astrophysics Data System (ADS)
Idris, Ahmad; Harahap, Indra S. H.; Ali, Montassir Osman Ahmed
2018-04-01
This study evaluated failure probabilities of jack up units on the framework of time dependent reliability analysis using uncertainty from different sea states representing different return period of the design wave. Surface elevation for each sea state was represented by Karhunen-Loeve expansion method using the eigenfunctions of prolate spheroidal wave functions in order to obtain the wave load. The stochastic wave load was propagated on a simplified jack up model developed in commercial software to obtain the structural response due to the wave loading. Analysis of the stochastic response to determine the failure probability in excessive deck displacement in the framework of time dependent reliability analysis was performed by developing Matlab codes in a personal computer. Results from the study indicated that the failure probability increases with increase in the severity of the sea state representing a longer return period. Although the results obtained are in agreement with the results of a study of similar jack up model using time independent method at higher values of maximum allowable deck displacement, it is in contrast at lower values of the criteria where the study reported that failure probability decreases with increase in the severity of the sea state.
Mohammadkhani, Parvaneh; Khanipour, Hamid; Azadmehr, Hedieh; Mobramm, Ardeshir; Naseri, Esmaeil
2015-01-01
The aim of this study was to evaluate suicide probability in Iranian males with substance abuse or dependence disorder and to investigate the predictors of suicide probability based on trait mindfulness, reasons for living and severity of general psychiatric symptoms. Participants were 324 individuals with substance abuse or dependence in an outpatient setting and prison. Reasons for living questionnaire, Mindfulness Attention Awareness Scale and Suicide probability Scale were used as instruments. Sample was selected based on convenience sampling method. Data were analyzed using SPSS and AMOS. The life-time prevalence of suicide attempt in the outpatient setting was35% and it was 42% in the prison setting. Suicide probability in the prison setting was significantly higher than in the outpatient setting (p<0.001). The severity of general symptom strongly correlated with suicide probability. Trait mindfulness, not reasons for living beliefs, had a mediating effect in the relationship between the severity of general symptoms and suicide probability. Fear of social disapproval, survival and coping beliefs and child-related concerns significantly predicted suicide probability (p<0.001). It could be suggested that trait mindfulness was more effective in preventing suicide probability than beliefs about reasons for living in individuals with substance abuse or dependence disorders. The severity of general symptom should be regarded as an important risk factor of suicide probability.
Are Earthquake Clusters/Supercycles Real or Random?
NASA Astrophysics Data System (ADS)
Salditch, L.; Brooks, E. M.; Stein, S.; Spencer, B. D.
2016-12-01
Long records of earthquakes at plate boundaries such as the San Andreas or Cascadia often show that large earthquakes occur in temporal clusters, also termed supercycles, separated by less active intervals. These are intriguing because the boundary is presumably being loaded by steady plate motion. If so, earthquakes resulting from seismic cycles - in which their probability is small shortly after the past one, and then increases with time - should occur quasi-periodically rather than be more frequent in some intervals than others. We are exploring this issue with two approaches. One is to assess whether the clusters result purely by chance from a time-independent process that has no "memory." Thus a future earthquake is equally likely immediately after the past one and much later, so earthquakes can cluster in time. We analyze the agreement between such a model and inter-event times for Parkfield, Pallet Creek, and other records. A useful tool is transformation by the inverse cumulative distribution function, so the inter-event times have a uniform distribution when the memorylessness property holds. The second is via a time-variable model in which earthquake probability increases with time between earthquakes and decreases after an earthquake. The probability of an event increases with time until one happens, after which it decreases, but not to zero. Hence after a long period of quiescence, the probability of an earthquake can remain higher than the long-term average for several cycles. Thus the probability of another earthquake is path dependent, i.e. depends on the prior earthquake history over multiple cycles. Time histories resulting from simulations give clusters with properties similar to those observed. The sequences of earthquakes result from both the model parameters and chance, so two runs with the same parameters look different. The model parameters control the average time between events and the variation of the actual times around this average, so models can be strongly or weakly time-dependent.
Gravity and count probabilities in an expanding universe
NASA Technical Reports Server (NTRS)
Bouchet, Francois R.; Hernquist, Lars
1992-01-01
The time evolution of nonlinear clustering on large scales in cold dark matter, hot dark matter, and white noise models of the universe is investigated using N-body simulations performed with a tree code. Count probabilities in cubic cells are determined as functions of the cell size and the clustering state (redshift), and comparisons are made with various theoretical models. We isolate the features that appear to be the result of gravitational instability, those that depend on the initial conditions, and those that are likely a consequence of numerical limitations. More specifically, we study the development of skewness, kurtosis, and the fifth moment in relation to variance, the dependence of the void probability on time as well as on sparseness of sampling, and the overall shape of the count probability distribution. Implications of our results for theoretical and observational studies are discussed.
NASA Astrophysics Data System (ADS)
Rodak, C. M.; Silliman, S. E.; Bolster, D.
2012-12-01
A hypothetical case study of groundwater contaminant protection was carried out using time-dependent health risk calculations. The case study focuses on a hypothetical zoning project for parcels of land around a well field in northern Indiana, where the control of cancer risk relative to a mandated cancer risk threshold is of concern in the management strategy. Within our analysis, we include both uncertainty in the subsurface transport and variability in population behavior in the calculation of time-dependent health risks. From these results we introduce risk maps, a visual representation of the probability of an unacceptable health risk as a function of population behavior and the time at which exposure to the contaminant begins. We also evaluate the time-dependent risks with three criteria from water resource literature: reliability, resilience, and vulnerability (RRV). With respect to health risk from a groundwater well, the three criteria determine: the probability that a well produces safe water (reliability), the probability that a contaminated well returns to an uncontaminated state within a specified time interval (resilience), and the overall severity in terms of health impact of the contamination at a well head (vulnerability). The results demonstrate that the distributions of RRV values for each parcel of land are linked to the time-dependent concentration profile of the contaminant at the well, and the toxicological characteristics of the contaminant. The proposed time-dependent risk calculation expands on current techniques to include a continuous exposure start time, capable of reproducing the maximum risk while providing information on the severity and duration of health risks. Overall this study suggests that, especially in light of the inherent complexity of health-groundwater systems, RRV are viable criteria for relatively simple and effective evaluation of time-dependent health risk. It is argued that the RRV approach, as applied to consideration of potential health impact, allows for more informed, health-based decisions regarding zoning for wellhead protection.
Toda, S.; Stein, R.S.; Reasenberg, P.A.; Dieterich, J.H.; Yoshida, A.
1998-01-01
The Kobe earthquake struck at the edge of the densely populated Osaka-Kyoto corridor in southwest Japan. We investigate how the earthquake transferred stress to nearby faults, altering their proximity to failure and thus changing earthquake probabilities. We find that relative to the pre-Kobe seismicity, Kobe aftershocks were concentrated in regions of calculated Coulomb stress increase and less common in regions of stress decrease. We quantify this relationship by forming the spatial correlation between the seismicity rate change and the Coulomb stress change. The correlation is significant for stress changes greater than 0.2-1.0 bars (0.02-0.1 MPa), and the nonlinear dependence of seismicity rate change on stress change is compatible with a state- and rate-dependent formulation for earthquake occurrence. We extend this analysis to future mainshocks by resolving the stress changes on major faults within 100 km of Kobe and calculating the change in probability caused by these stress changes. Transient effects of the stress changes are incorporated by the state-dependent constitutive relation, which amplifies the permanent stress changes during the aftershock period. Earthquake probability framed in this manner is highly time-dependent, much more so than is assumed in current practice. Because the probabilities depend on several poorly known parameters of the major faults, we estimate uncertainties of the probabilities by Monte Carlo simulation. This enables us to include uncertainties on the elapsed time since the last earthquake, the repeat time and its variability, and the period of aftershock decay. We estimate that a calculated 3-bar (0.3-MPa) stress increase on the eastern section of the Arima-Takatsuki Tectonic Line (ATTL) near Kyoto causes fivefold increase in the 30-year probability of a subsequent large earthquake near Kyoto; a 2-bar (0.2-MPa) stress decrease on the western section of the ATTL results in a reduction in probability by a factor of 140 to 2000. The probability of a Mw = 6.9 earthquake within 50 km of Osaka during 1997-2007 is estimated to have risen from 5-6% before the Kobe earthquake to 7-11% afterward; during 1997-2027, it is estimated to have risen from 14-16% before Kobe to 16-22%.
Coherent exciton transport in dendrimers and continuous-time quantum walks
NASA Astrophysics Data System (ADS)
Mülken, Oliver; Bierbaum, Veronika; Blumen, Alexander
2006-03-01
We model coherent exciton transport in dendrimers by continuous-time quantum walks. For dendrimers up to the second generation the coherent transport shows perfect recurrences when the initial excitation starts at the central node. For larger dendrimers, the recurrence ceases to be perfect, a fact which resembles results for discrete quantum carpets. Moreover, depending on the initial excitation site, we find that the coherent transport to certain nodes of the dendrimer has a very low probability. When the initial excitation starts from the central node, the problem can be mapped onto a line which simplifies the computational effort. Furthermore, the long time average of the quantum mechanical transition probabilities between pairs of nodes shows characteristic patterns and allows us to classify the nodes into clusters with identical limiting probabilities. For the (space) average of the quantum mechanical probability to be still or to be again at the initial site, we obtain, based on the Cauchy-Schwarz inequality, a simple lower bound which depends only on the eigenvalue spectrum of the Hamiltonian.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diwaker, E-mail: diwakerphysics@gmail.com; Chakraborty, Aniruddha
The Smoluchowski equation with a time-dependent sink term is solved exactly. In this method, knowing the probability distribution P(0, s) at the origin, allows deriving the probability distribution P(x, s) at all positions. Exact solutions of the Smoluchowski equation are also provided in different cases where the sink term has linear, constant, inverse, and exponential variation in time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Elicson; Bentley Harwood; Jim Bouchard
Over a 12 month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. Themore » fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for: • Development of time-dependent fire heat release rate profiles (required as input to CFAST), • Calculation of fire severity factors based on CFAST detailed fire modeling, and • Calculation of fire non-suppression probabilities.« less
Pötschger, Ulrike; Heinzl, Harald; Valsecchi, Maria Grazia; Mittlböck, Martina
2018-01-19
Investigating the impact of a time-dependent intervention on the probability of long-term survival is statistically challenging. A typical example is stem-cell transplantation performed after successful donor identification from registered donors. Here, a suggested simple analysis based on the exogenous donor availability status according to registered donors would allow the estimation and comparison of survival probabilities. As donor search is usually ceased after a patient's event, donor availability status is incompletely observed, so that this simple comparison is not possible and the waiting time to donor identification needs to be addressed in the analysis to avoid bias. It is methodologically unclear, how to directly address cumulative long-term treatment effects without relying on proportional hazards while avoiding waiting time bias. The pseudo-value regression technique is able to handle the first two issues; a novel generalisation of this technique also avoids waiting time bias. Inverse-probability-of-censoring weighting is used to account for the partly unobserved exogenous covariate donor availability. Simulation studies demonstrate unbiasedness and satisfying coverage probabilities of the new method. A real data example demonstrates that study results based on generalised pseudo-values have a clear medical interpretation which supports the clinical decision making process. The proposed generalisation of the pseudo-value regression technique enables to compare survival probabilities between two independent groups where group membership becomes known over time and remains partly unknown. Hence, cumulative long-term treatment effects are directly addressed without relying on proportional hazards while avoiding waiting time bias.
Exact Time-Dependent Exchange-Correlation Potential in Electron Scattering Processes
NASA Astrophysics Data System (ADS)
Suzuki, Yasumitsu; Lacombe, Lionel; Watanabe, Kazuyuki; Maitra, Neepa T.
2017-12-01
We identify peak and valley structures in the exact exchange-correlation potential of time-dependent density functional theory that are crucial for time-resolved electron scattering in a model one-dimensional system. These structures are completely missed by adiabatic approximations that, consequently, significantly underestimate the scattering probability. A recently proposed nonadiabatic approximation is shown to correctly capture the approach of the electron to the target when the initial Kohn-Sham state is chosen judiciously, and it is more accurate than standard adiabatic functionals but ultimately fails to accurately capture reflection. These results may explain the underestimation of scattering probabilities in some recent studies on molecules and surfaces.
Hayashi, Yusuke; Wirth, Oliver
2015-01-01
Four rats responded under a choice reaction-time procedure. At the beginning of each trial, the rats were required to hold down a center lever for a variable duration, release it following a high- or low-pitched tone, and press either a left or right lever, conditionally on the tone. Correct choices were reinforced with a probability of .95 or .05 under blinking or static houselights, respectively. After performance stabilized, disruptive effects of free access to food pellets prior to sessions (prefeeding) and intraperitoneal injection of haloperidol were examined on multiple behavioral measures (i.e., the number of trials completed, percent of correct responses, and reaction time). Resistance to prefeeding depended on the probability of food delivery for the number of trials completed and reaction time. Resistance to haloperidol, on the other hand, was not systematically affected by the probability of food delivery for all dependent measures. PMID:22209910
Schaubel, Douglas E; Wei, Guanghui
2011-03-01
In medical studies of time-to-event data, nonproportional hazards and dependent censoring are very common issues when estimating the treatment effect. A traditional method for dealing with time-dependent treatment effects is to model the time-dependence parametrically. Limitations of this approach include the difficulty to verify the correctness of the specified functional form and the fact that, in the presence of a treatment effect that varies over time, investigators are usually interested in the cumulative as opposed to instantaneous treatment effect. In many applications, censoring time is not independent of event time. Therefore, we propose methods for estimating the cumulative treatment effect in the presence of nonproportional hazards and dependent censoring. Three measures are proposed, including the ratio of cumulative hazards, relative risk, and difference in restricted mean lifetime. For each measure, we propose a double inverse-weighted estimator, constructed by first using inverse probability of treatment weighting (IPTW) to balance the treatment-specific covariate distributions, then using inverse probability of censoring weighting (IPCW) to overcome the dependent censoring. The proposed estimators are shown to be consistent and asymptotically normal. We study their finite-sample properties through simulation. The proposed methods are used to compare kidney wait-list mortality by race. © 2010, The International Biometric Society.
McCalpin, J.P.; Nishenko, S.P.
1996-01-01
The chronology of M>7 paleoearthquakes on the central five segments of the Wasatch fault zone (WFZ) is one of the best dated in the world and contains 16 earthquakes in the past 5600 years with an average repeat time of 350 years. Repeat times for individual segments vary by a factor of 2, and range from about 1200 to 2600 years. Four of the central five segments ruptured between ??? 620??30 and 1230??60 calendar years B.P. The remaining segment (Brigham City segment) has not ruptured in the past 2120??100 years. Comparison of the WFZ space-time diagram of paleoearthquakes with synthetic paleoseismic histories indicates that the observed temporal clusters and gaps have about an equal probability (depending on model assumptions) of reflecting random coincidence as opposed to intersegment contagion. Regional seismicity suggests that for exposure times of 50 and 100 years, the probability for an earthquake of M>7 anywhere within the Wasatch Front region, based on a Poisson model, is 0.16 and 0.30, respectively. A fault-specific WFZ model predicts 50 and 100 year probabilities for a M>7 earthquake on the WFZ itself, based on a Poisson model, as 0.13 and 0.25, respectively. In contrast, segment-specific earthquake probabilities that assume quasi-periodic recurrence behavior on the Weber, Provo, and Nephi segments are less (0.01-0.07 in 100 years) than the regional or fault-specific estimates (0.25-0.30 in 100 years), due to the short elapsed times compared to average recurrence intervals on those segments. The Brigham City and Salt Lake City segments, however, have time-dependent probabilities that approach or exceed the regional and fault specific probabilities. For the Salt Lake City segment, these elevated probabilities are due to the elapsed time being approximately equal to the average late Holocene recurrence time. For the Brigham City segment, the elapsed time is significantly longer than the segment-specific late Holocene recurrence time.
Rapidity window dependences of higher order cumulants and diffusion master equation
NASA Astrophysics Data System (ADS)
Kitazawa, Masakiyo
2015-10-01
We study the rapidity window dependences of higher order cumulants of conserved charges observed in relativistic heavy ion collisions. The time evolution and the rapidity window dependence of the non-Gaussian fluctuations are described by the diffusion master equation. Analytic formulas for the time evolution of cumulants in a rapidity window are obtained for arbitrary initial conditions. We discuss that the rapidity window dependences of the non-Gaussian cumulants have characteristic structures reflecting the non-equilibrium property of fluctuations, which can be observed in relativistic heavy ion collisions with the present detectors. It is argued that various information on the thermal and transport properties of the hot medium can be revealed experimentally by the study of the rapidity window dependences, especially by the combined use, of the higher order cumulants. Formulas of higher order cumulants for a probability distribution composed of sub-probabilities, which are useful for various studies of non-Gaussian cumulants, are also presented.
Kinetic Monte Carlo simulations of nucleation and growth in electrodeposition.
Guo, Lian; Radisic, Aleksandar; Searson, Peter C
2005-12-22
Nucleation and growth during bulk electrodeposition is studied using kinetic Monte Carlo (KMC) simulations. Ion transport in solution is modeled using Brownian dynamics, and the kinetics of nucleation and growth are dependent on the probabilities of metal-on-substrate and metal-on-metal deposition. Using this approach, we make no assumptions about the nucleation rate, island density, or island distribution. The influence of the attachment probabilities and concentration on the time-dependent island density and current transients is reported. Various models have been assessed by recovering the nucleation rate and island density from the current-time transients.
Quantum return probability of a system of N non-interacting lattice fermions
NASA Astrophysics Data System (ADS)
Krapivsky, P. L.; Luck, J. M.; Mallick, K.
2018-02-01
We consider N non-interacting fermions performing continuous-time quantum walks on a one-dimensional lattice. The system is launched from a most compact configuration where the fermions occupy neighboring sites. We calculate exactly the quantum return probability (sometimes referred to as the Loschmidt echo) of observing the very same compact state at a later time t. Remarkably, this probability depends on the parity of the fermion number—it decays as a power of time for even N, while for odd N it exhibits periodic oscillations modulated by a decaying power law. The exponent also slightly depends on the parity of N, and is roughly twice smaller than what it would be in the continuum limit. We also consider the same problem, and obtain similar results, in the presence of an impenetrable wall at the origin constraining the particles to remain on the positive half-line. We derive closed-form expressions for the amplitudes of the power-law decay of the return probability in all cases. The key point in the derivation is the use of Mehta integrals, which are limiting cases of the Selberg integral.
Modeling Finite-Time Failure Probabilities in Risk Analysis Applications.
Dimitrova, Dimitrina S; Kaishev, Vladimir K; Zhao, Shouqi
2015-10-01
In this article, we introduce a framework for analyzing the risk of systems failure based on estimating the failure probability. The latter is defined as the probability that a certain risk process, characterizing the operations of a system, reaches a possibly time-dependent critical risk level within a finite-time interval. Under general assumptions, we define two dually connected models for the risk process and derive explicit expressions for the failure probability and also the joint probability of the time of the occurrence of failure and the excess of the risk process over the risk level. We illustrate how these probabilistic models and results can be successfully applied in several important areas of risk analysis, among which are systems reliability, inventory management, flood control via dam management, infectious disease spread, and financial insolvency. Numerical illustrations are also presented. © 2015 Society for Risk Analysis.
Dantan, Etienne; Combescure, Christophe; Lorent, Marine; Ashton-Chess, Joanna; Daguin, Pascal; Classe, Jean-Marc; Giral, Magali; Foucher, Yohann
2014-04-01
Predicting chronic disease evolution from a prognostic marker is a key field of research in clinical epidemiology. However, the prognostic capacity of a marker is not systematically evaluated using the appropriate methodology. We proposed the use of simple equations to calculate time-dependent sensitivity and specificity based on published survival curves and other time-dependent indicators as predictive values, likelihood ratios, and posttest probability ratios to reappraise prognostic marker accuracy. The methodology is illustrated by back calculating time-dependent indicators from published articles presenting a marker as highly correlated with the time to event, concluding on the high prognostic capacity of the marker, and presenting the Kaplan-Meier survival curves. The tools necessary to run these direct and simple computations are available online at http://www.divat.fr/en/online-calculators/evalbiom. Our examples illustrate that published conclusions about prognostic marker accuracy may be overoptimistic, thus giving potential for major mistakes in therapeutic decisions. Our approach should help readers better evaluate clinical articles reporting on prognostic markers. Time-dependent sensitivity and specificity inform on the inherent prognostic capacity of a marker for a defined prognostic time. Time-dependent predictive values, likelihood ratios, and posttest probability ratios may additionally contribute to interpret the marker's prognostic capacity. Copyright © 2014 Elsevier Inc. All rights reserved.
Field, Edward; Biasi, Glenn P.; Bird, Peter; Dawson, Timothy E.; Felzer, Karen R.; Jackson, David A.; Johnson, Kaj M.; Jordan, Thomas H.; Madden, Christopher; Michael, Andrew J.; Milner, Kevin; Page, Morgan T.; Parsons, Thomas E.; Powers, Peter; Shaw, Bruce E.; Thatcher, Wayne R.; Weldon, Ray J.; Zeng, Yuehua
2015-01-01
The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model, published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for un-segmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5,760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30-year M≥6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault slip rates), with relaxation of segmentation and inclusion of multi-fault ruptures being particularly influential. In fact, some UCERF2 faults were simply too long to produce M 6.7 sized events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region, and depend on the evaluation metric of interest. For example, M≥6.7 probabilities may not be a good proxy for other hazard or loss measures. This sensitivity, coupled with the approximate nature of the model and known limitations, means the applicability of UCERF3 should be evaluated on a case-by-case basis.
The coalescent of a sample from a binary branching process.
Lambert, Amaury
2018-04-25
At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.
On the extinction probability in models of within-host infection: the role of latency and immunity.
Yan, Ada W C; Cao, Pengxing; McCaw, James M
2016-10-01
Not every exposure to virus establishes infection in the host; instead, the small amount of initial virus could become extinct due to stochastic events. Different diseases and routes of transmission have a different average number of exposures required to establish an infection. Furthermore, the host immune response and antiviral treatment affect not only the time course of the viral load provided infection occurs, but can prevent infection altogether by increasing the extinction probability. We show that the extinction probability when there is a time-dependent immune response depends on the chosen form of the model-specifically, on the presence or absence of a delay between infection of a cell and production of virus, and the distribution of latent and infectious periods of an infected cell. We hypothesise that experimentally measuring the extinction probability when the virus is introduced at different stages of the immune response, alongside the viral load which is usually measured, will improve parameter estimates and determine the most suitable mathematical form of the model.
Lessons Learned from Dependency Usage in HERA: Implications for THERP-Related HRA Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
April M. Whaley; Ronald L. Boring; Harold S. Blackman
Dependency occurs when the probability of success or failure on one action changes the probability of success or failure on a subsequent action. Dependency may serve as a modifier on the human error probabilities (HEPs) for successive actions in human reliability analysis (HRA) models. Discretion should be employed when determining whether or not a dependency calculation is warranted: dependency should not be assigned without strongly grounded reasons. Human reliability analysts may sometimes assign dependency in cases where it is unwarranted. This inappropriate assignment is attributed to a lack of clear guidance to encompass the range of scenarios human reliability analystsmore » are addressing. Inappropriate assignment of dependency produces inappropriately elevated HEP values. Lessons learned about dependency usage in the Human Event Repository and Analysis (HERA) system may provide clarification and guidance for analysts using first-generation HRA methods. This paper presents the HERA approach to dependency assessment and discusses considerations for dependency usage in HRA, including the cognitive basis for dependency, direction for determining when dependency should be assessed, considerations for determining the dependency level, temporal issues to consider when assessing dependency, (e.g., considering task sequence versus overall event sequence, and dependency over long periods of time), and diagnosis and action influences on dependency.« less
Complete Numerical Solution of the Diffusion Equation of Random Genetic Drift
Zhao, Lei; Yue, Xingye; Waxman, David
2013-01-01
A numerical method is presented to solve the diffusion equation for the random genetic drift that occurs at a single unlinked locus with two alleles. The method was designed to conserve probability, and the resulting numerical solution represents a probability distribution whose total probability is unity. We describe solutions of the diffusion equation whose total probability is unity as complete. Thus the numerical method introduced in this work produces complete solutions, and such solutions have the property that whenever fixation and loss can occur, they are automatically included within the solution. This feature demonstrates that the diffusion approximation can describe not only internal allele frequencies, but also the boundary frequencies zero and one. The numerical approach presented here constitutes a single inclusive framework from which to perform calculations for random genetic drift. It has a straightforward implementation, allowing it to be applied to a wide variety of problems, including those with time-dependent parameters, such as changing population sizes. As tests and illustrations of the numerical method, it is used to determine: (i) the probability density and time-dependent probability of fixation for a neutral locus in a population of constant size; (ii) the probability of fixation in the presence of selection; and (iii) the probability of fixation in the presence of selection and demographic change, the latter in the form of a changing population size. PMID:23749318
Molecular vibrational states during a collision
NASA Technical Reports Server (NTRS)
Recamier, Jose A.; Jauregui, Rocio
1995-01-01
Alternative algebraic techniques to approximate a given Hamiltonian by a harmonic oscillator are described both for time-independent and time-dependent systems. We apply them to the description of a one dimensional atom-diatom collision. From the resulting evolution operator, we evaluate vibrational transition probabilities as well as other time-dependent properties. As expected, the ground vibrational state becomes a squeezed state during the collision.
Diffusion of active chiral particles
NASA Astrophysics Data System (ADS)
Sevilla, Francisco J.
2016-12-01
The diffusion of chiral active Brownian particles in three-dimensional space is studied analytically, by consideration of the corresponding Fokker-Planck equation for the probability density of finding a particle at position x and moving along the direction v ̂ at time t , and numerically, by the use of Langevin dynamics simulations. The analysis is focused on the marginal probability density of finding a particle at a given location and at a given time (independently of its direction of motion), which is found from an infinite hierarchy of differential-recurrence relations for the coefficients that appear in the multipole expansion of the probability distribution, which contains the whole kinematic information. This approach allows the explicit calculation of the time dependence of the mean-squared displacement and the time dependence of the kurtosis of the marginal probability distribution, quantities from which the effective diffusion coefficient and the "shape" of the positions distribution are examined. Oscillations between two characteristic values were found in the time evolution of the kurtosis, namely, between the value that corresponds to a Gaussian and the one that corresponds to a distribution of spherical shell shape. In the case of an ensemble of particles, each one rotating around a uniformly distributed random axis, evidence is found of the so-called effect "anomalous, yet Brownian, diffusion," for which particles follow a non-Gaussian distribution for the positions yet the mean-squared displacement is a linear function of time.
Possibilities of forecasting hypercholesterinemia in pilots
NASA Technical Reports Server (NTRS)
Vivilov, P.
1980-01-01
The dependence of the frequency of hypercholesterinemia on the age, average annual flying time, functional category, qualification class, and flying specialty of 300 pilots was investigated. The risk probability coefficient of hypercholesterinemia was computed. An evaluation table was developed which gives an 84% probability of forcasting risk of hypercholesterinemia.
Exact probability distribution functions for Parrondo's games
NASA Astrophysics Data System (ADS)
Zadourian, Rubina; Saakian, David B.; Klümper, Andreas
2016-12-01
We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
Exact probability distribution functions for Parrondo's games.
Zadourian, Rubina; Saakian, David B; Klümper, Andreas
2016-12-01
We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
Stabilization and Structure of wave packets in Rydberg atoms ionized by a strong light field.
Fedorov, M; Fedorov, S
1998-09-28
New features of the phenomenon of interference stabilization of Rydberg atoms are found to exist. The main of them are: (i) dynamical stabilization, which means that in case of pulses with a smooth envelope the time-dependent residual probability for an atom to survive in bound states remains almost constant in the middle part of a pulse (at the strongest fields); (ii) existence of the strong-field stabilization of the after-pulse residual probability in case of pulses longer than the classical Kepler period; and (iii) pulsation of the time-dependent Rydberg wave packet formed in the process of photoionization.
Probability and predictors of treatment-seeking for substance use disorders in the U.S.
Blanco, Carlos; Iza, Miren; Rodríguez-Fernández, Jorge Mario; Baca-García, Enrique; Wang, Shuai; Olfson, Mark
2015-04-01
Little is known about to what extent treatment-seeking behavior varies across individuals with alcohol abuse, alcohol dependence, drug abuse, and drug dependence. The sample included respondents from the Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) who reported a lifetime diagnosis alcohol abuse, alcohol dependence, drug abuse, or drug dependence. Unadjusted and adjusted hazard ratios are presented for time to first treatment contact by sociodemographic characteristics and comorbid psychiatric disorders. Individuals were censored from the analyses if their condition remitted prior to seeking treatment. In the first year after disorder onset, rates of treatment-seeking were 13% for drug dependence, 5% for alcohol dependence, 2% for drug abuse, and 1% for alcohol abuse. The lifetime probability of seeking treatment among individuals who did not remit was also highest for drug dependence (90%), followed by drug abuse (60%), alcohol dependence (54%), and alcohol abuse (16%). Having had previous treatment contact for a substance use disorder (SUD) increased the probability of seeking treatment for another SUD. By contrast, an early age of SUD onset, belonging to an older cohort, and a higher level of education decreased the lifetime probability of treatment contact for SUD. The role of comorbid mental disorders was more complex, with some disorders increasing and other decreasing the probability of seeking treatment. Given high rates of SUD and their substantial health and economic burden, these patterns suggest the need for innovative approaches to increase treatment access for individuals with SUD. Copyright © 2015. Published by Elsevier Ireland Ltd.
Probability and predictors of treatment-seeking for substance use disorders in the U.S
Blanco, Carlos; Iza, Miren; Rodríguez-Fernández, Jorge Mario; Baca-García, Enrique; Wang, Shuai; Olfson, Mark
2016-01-01
Background Little is known about to what extent treatment-seeking behavior varies across individuals with alcohol abuse, alcohol dependence, drug abuse, and drug dependence. Methods The sample included respondents from the Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) who reported a lifetime diagnosis alcohol abuse, alcohol dependence, drug abuse, or drug dependence. Unadjusted and adjusted hazard ratios are presented for time to first treatment contact by sociodemographic characteristics and comorbid psychiatric disorders. Individuals were censored from the analyses if their condition remitted prior to seeking treatment. Results In the first year after disorder onset, rates of treatment-seeking were 13% for drug dependence, 5% for alcohol dependence, 2% for drug abuse, and 1% for alcohol abuse. The lifetime probability of seeking treatment among individuals who did not remit was also highest for drug dependence (90%), followed by drug abuse (60%), alcohol dependence (54%), and alcohol abuse (16%). Having had previous treatment contact for a substance use disorder (SUD) increased the probability of seeking treatment for another SUD. By contrast, an early age of SUD onset, belonging to an older cohort, and a higher level of education decreased the lifetime probability of treatment contact for SUD. The role of comorbid mental disorders was more complex, with some disorders increasing and other decreasing the probability of seeking treatment. Conclusions Given high rates of SUD and their substantial health and economic burden, these patterns suggest the need for innovative approaches to increase treatment access for individuals with SUD. PMID:25725934
Monolayer phosphorene under time-dependent magnetic field
NASA Astrophysics Data System (ADS)
Nascimento, J. P. G.; Aguiar, V.; Guedes, I.
2018-02-01
We obtain the exact wave function of a monolayer phosphorene under a low-intensity time-dependent magnetic field using the dynamical invariant method. We calculate the quantum-mechanical energy expectation value and the transition probability for a constant and an oscillatory magnetic field. For the former we observe that the Landau level energy varies linearly with the quantum numbers n and m and the magnetic field intensity B0. No transition takes place. For the latter, we observe that the energy oscillates in time, increasing linearly with the Landau level n and m and nonlinearly with the magnetic field. The (k , l) →(n , m) transitions take place only for l = m. We investigate the (0,0) →(n , 0) and (1 , l) and (2 , l) probability transitions.
Switching probability of all-perpendicular spin valve nanopillars
NASA Astrophysics Data System (ADS)
Tzoufras, M.
2018-05-01
In all-perpendicular spin valve nanopillars the probability density of the free-layer magnetization is independent of the azimuthal angle and its evolution equation simplifies considerably compared to the general, nonaxisymmetric geometry. Expansion of the time-dependent probability density to Legendre polynomials enables analytical integration of the evolution equation and yields a compact expression for the practically relevant switching probability. This approach is valid when the free layer behaves as a single-domain magnetic particle and it can be readily applied to fitting experimental data.
Seismic hazard assessment over time: Modelling earthquakes in Taiwan
NASA Astrophysics Data System (ADS)
Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting
2017-04-01
To assess the seismic hazard with temporal change in Taiwan, we develop a new approach, combining both the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters by the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault-rupture to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed time since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased rupture probabilities of several neighbouring seismogenic sources in Southwestern Taiwan and raised hazard level in the near future. Our approach draws on the advantage of incorporating long- and short-term models, to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than any other published models. It thus offers decision-makers and public officials an adequate basis for rapid evaluations of and response to future emergency scenarios such as victim relocation and sheltering.
Shi, Wei; Xia, Jun
2017-02-01
Water quality risk management is a global hot research linkage with the sustainable water resource development. Ammonium nitrogen (NH 3 -N) and permanganate index (COD Mn ) as the focus indicators in Huai River Basin, are selected to reveal their joint transition laws based on Markov theory. The time-varying moments model with either time or land cover index as explanatory variables is applied to build the time-varying marginal distributions of water quality time series. Time-varying copula model, which takes the non-stationarity in the marginal distribution and/or the time variation in dependence structure between water quality series into consideration, is constructed to describe a bivariate frequency analysis for NH 3 -N and COD Mn series at the same monitoring gauge. The larger first-order Markov joint transition probability indicates water quality state Class V w , Class IV and Class III will occur easily in the water body of Bengbu Sluice. Both marginal distribution and copula models are nonstationary, and the explanatory variable time yields better performance than land cover index in describing the non-stationarities in the marginal distributions. In modelling the dependence structure changes, time-varying copula has a better fitting performance than the copula with the constant or the time-trend dependence parameter. The largest synchronous encounter risk probability of NH 3 -N and COD Mn simultaneously reaching Class V is 50.61%, while the asynchronous encounter risk probability is largest when NH 3 -N and COD Mn is inferior to class V and class IV water quality standards, respectively.
78 FR 33010 - Airworthiness Directives; Saab AB, Saab Aerosystems Airplanes
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-03
... the probability of a negative effect on the handling quality during stall, which could result in... the probability of a negative effect on the handling quality during stall. For the reasons described above, this [EASA] AD requires a one- time inspection of the stick pusher rigging and, depending on...
On time-dependent diffusion coefficients arising from stochastic processes with memory
NASA Astrophysics Data System (ADS)
Carpio-Bernido, M. Victoria; Barredo, Wilson I.; Bernido, Christopher C.
2017-08-01
Time-dependent diffusion coefficients arise from anomalous diffusion encountered in many physical systems such as protein transport in cells. We compare these coefficients with those arising from analysis of stochastic processes with memory that go beyond fractional Brownian motion. Facilitated by the Hida white noise functional integral approach, diffusion propagators or probability density functions (pdf) are obtained and shown to be solutions of modified diffusion equations with time-dependent diffusion coefficients. This should be useful in the study of complex transport processes.
NASA Astrophysics Data System (ADS)
Papanikolaοu, Ioannis D.; Roberts, Gerald P.; Deligiannakis, Georgios; Sakellariou, Athina; Vassilakis, Emmanuel
2013-06-01
The Sparta Fault system is a major structure approximately 64 km long that bounds the eastern flank of the Taygetos Mountain front (2407 m) and shapes the present-day Sparta basin. It was activated in 464 B.C., devastating the city of Sparta. This fault is examined and described in terms of its geometry, segmentation, drainage pattern and post-glacial throw, emphasising how these parameters vary along strike. Qualitative analysis of long profile catchments shows a significant difference in longitudinal convexity between the central and both the south and north parts of the fault system, leading to the conclusion of varying uplift rate along strike. Catchments are sensitive in differential uplift as it is observed by the calculated differences of the steepness index ksn between the outer (ksn < 83) and central parts (121 < ksn < 138) of the Sparta Fault along strike the fault system. Based on fault throw-rates and the bedrock geology a seismic hazard map has been constructed that extracts a locality specific long-term earthquake recurrence record. Based on this map the town of Sparta would experience a destructive event similar to that in 464 B.C. approximately every 1792 ± 458 years. Since no other major earthquake M ~ 7.0 has been generated by this system since 464 B.C., a future event could be imminent. As a result, not only time-independent but also time-dependent probabilities, which incorporate the concept of the seismic cycle, have been calculated for the town of Sparta, showing a considerably higher time-dependent probability of 3.0 ± 1.5% over the next 30 years compared to the time-independent probability of 1.66%. Half of the hanging wall area of the Sparta Fault can experience intensities ≥ IX, but belongs to the lowest category of seismic risk of the national seismic building code. On view of these relatively high calculated probabilities, a reassessment of the building code might be necessary.
NASA Astrophysics Data System (ADS)
Papanikolaou, Ioannis; Roberts, Gerald; Deligiannakis, Georgios; Sakellariou, Athina; Vassilakis, Emmanuel
2013-04-01
The Sparta Fault system is a major structure approximately 64 km long that bounds the eastern flank of the Taygetos Mountain front (2.407 m) and shapes the present-day Sparta basin. It was activated in 464 B.C., devastating the city of Sparta. This fault is examined and described in terms of its geometry, segmentation, drainage pattern and postglacial throw, emphasizing how these parameters vary along strike. Qualitative analysis of long profile catchments shows a significant difference in longitudinal convexity between the central and both the south and north parts of the fault system, leading to the conclusion of varying uplift rate along strike. Catchments are sensitive in differential uplift as it is observed by the calculated differences of the steepness index ksn between the outer (ksn<83) and central parts (121
NASA Astrophysics Data System (ADS)
Vaccaro, S. R.
2011-09-01
The voltage dependence of the ionic and gating currents of a K channel is dependent on the activation barriers of a voltage sensor with a potential function which may be derived from the principal electrostatic forces on an S4 segment in an inhomogeneous dielectric medium. By variation of the parameters of a voltage-sensing domain model, consistent with x-ray structures and biophysical data, the lowest frequency of the survival probability of each stationary state derived from a solution of the Smoluchowski equation provides a good fit to the voltage dependence of the slowest time constant of the ionic current in a depolarized membrane, and the gating current exhibits a rising phase that precedes an exponential relaxation. For each depolarizing potential, the calculated time dependence of the survival probabilities of the closed states of an alpha helical S4 sensor are in accord with an empirical model of the ionic and gating currents recorded during the activation process.
Large-deviation probabilities for correlated Gaussian processes and intermittent dynamical systems
NASA Astrophysics Data System (ADS)
Massah, Mozhdeh; Nicol, Matthew; Kantz, Holger
2018-05-01
In its classical version, the theory of large deviations makes quantitative statements about the probability of outliers when estimating time averages, if time series data are identically independently distributed. We study large-deviation probabilities (LDPs) for time averages in short- and long-range correlated Gaussian processes and show that long-range correlations lead to subexponential decay of LDPs. A particular deterministic intermittent map can, depending on a control parameter, also generate long-range correlated time series. We illustrate numerically, in agreement with the mathematical literature, that this type of intermittency leads to a power law decay of LDPs. The power law decay holds irrespective of whether the correlation time is finite or infinite, and hence irrespective of whether the central limit theorem applies or not.
Gravitational lensing, time delay, and gamma-ray bursts
NASA Technical Reports Server (NTRS)
Mao, Shude
1992-01-01
The probability distributions of time delay in gravitational lensing by point masses and isolated galaxies (modeled as singular isothermal spheres) are studied. For point lenses (all with the same mass) the probability distribution is broad, and with a peak at delta(t) of about 50 S; for singular isothermal spheres, the probability distribution is a rapidly decreasing function with increasing time delay, with a median delta(t) equals about 1/h month, and its behavior depends sensitively on the luminosity function of galaxies. The present simplified calculation is particularly relevant to the gamma-ray bursts if they are of cosmological origin. The frequency of 'recurrent' bursts due to gravitational lensing by galaxies is probably between 0.05 and 0.4 percent. Gravitational lensing can be used as a test of the cosmological origin of gamma-ray bursts.
Modeling Spatial Dependence of Rainfall Extremes Across Multiple Durations
NASA Astrophysics Data System (ADS)
Le, Phuong Dong; Leonard, Michael; Westra, Seth
2018-03-01
Determining the probability of a flood event in a catchment given that another flood has occurred in a nearby catchment is useful in the design of infrastructure such as road networks that have multiple river crossings. These conditional flood probabilities can be estimated by calculating conditional probabilities of extreme rainfall and then transforming rainfall to runoff through a hydrologic model. Each catchment's hydrological response times are unlikely to be the same, so in order to estimate these conditional probabilities one must consider the dependence of extreme rainfall both across space and across critical storm durations. To represent these types of dependence, this study proposes a new approach for combining extreme rainfall across different durations within a spatial extreme value model using max-stable process theory. This is achieved in a stepwise manner. The first step defines a set of common parameters for the marginal distributions across multiple durations. The parameters are then spatially interpolated to develop a spatial field. Storm-level dependence is represented through the max-stable process for rainfall extremes across different durations. The dependence model shows a reasonable fit between the observed pairwise extremal coefficients and the theoretical pairwise extremal coefficient function across all durations. The study demonstrates how the approach can be applied to develop conditional maps of the return period and return level across different durations.
Should I Stay or Should I Go? A Habitat-Dependent Dispersal Kernel Improves Prediction of Movement
Vinatier, Fabrice; Lescourret, Françoise; Duyck, Pierre-François; Martin, Olivier; Senoussi, Rachid; Tixier, Philippe
2011-01-01
The analysis of animal movement within different landscapes may increase our understanding of how landscape features affect the perceptual range of animals. Perceptual range is linked to movement probability of an animal via a dispersal kernel, the latter being generally considered as spatially invariant but could be spatially affected. We hypothesize that spatial plasticity of an animal's dispersal kernel could greatly modify its distribution in time and space. After radio tracking the movements of walking insects (Cosmopolites sordidus) in banana plantations, we considered the movements of individuals as states of a Markov chain whose transition probabilities depended on the habitat characteristics of current and target locations. Combining a likelihood procedure and pattern-oriented modelling, we tested the hypothesis that dispersal kernel depended on habitat features. Our results were consistent with the concept that animal dispersal kernel depends on habitat features. Recognizing the plasticity of animal movement probabilities will provide insight into landscape-level ecological processes. PMID:21765890
Should I stay or should I go? A habitat-dependent dispersal kernel improves prediction of movement.
Vinatier, Fabrice; Lescourret, Françoise; Duyck, Pierre-François; Martin, Olivier; Senoussi, Rachid; Tixier, Philippe
2011-01-01
The analysis of animal movement within different landscapes may increase our understanding of how landscape features affect the perceptual range of animals. Perceptual range is linked to movement probability of an animal via a dispersal kernel, the latter being generally considered as spatially invariant but could be spatially affected. We hypothesize that spatial plasticity of an animal's dispersal kernel could greatly modify its distribution in time and space. After radio tracking the movements of walking insects (Cosmopolites sordidus) in banana plantations, we considered the movements of individuals as states of a Markov chain whose transition probabilities depended on the habitat characteristics of current and target locations. Combining a likelihood procedure and pattern-oriented modelling, we tested the hypothesis that dispersal kernel depended on habitat features. Our results were consistent with the concept that animal dispersal kernel depends on habitat features. Recognizing the plasticity of animal movement probabilities will provide insight into landscape-level ecological processes.
NASA Astrophysics Data System (ADS)
Dasenbrock-Gammon, Nathan; Zacate, Matthew O.
2017-05-01
Baker et al. derived time-dependent expressions for calculating average number of jumps per encounter and displacement probabilities for vacancy diffusion in crystal lattice systems with infinitesimal vacancy concentrations. As shown in this work, their formulation is readily expanded to include finite vacancy concentration, which allows calculation of concentration-dependent, time-averaged quantities. This is useful because it provides a computationally efficient method to express lineshapes of nuclear spectroscopic techniques through the use of stochastic fluctuation models.
Time-dependent quantum wave packet calculation for nonadiabatic F(2P3/2,2P1/2)+H2 reaction
NASA Astrophysics Data System (ADS)
Zhang, Yan; Xie, Ting-Xian; Han, Ke-Li; Zhang, John Z. H.
2003-12-01
In this paper we present a time-dependent quantum wave packet calculation for the reaction of F(2P3/2,2P1/2)+H2 on the Alexander-Stark-Werner potential energy surface. The reaction probabilities and the integral cross sections for the reaction of F(2P3/2,2P1/2)+H2 (v=j=0) are computed using time-dependent quantum methods with the centrifugal sudden approximate. The results are compared with recent time-independent quantum calculations. The two-surface reaction probability for the initial ground spin-orbit state of J=0.5 is similar to the time-independent result obtained by Alexander et al. [J. Chem. Phys. 113, 11084 (2000)]. Our calculation also shows that electronic coupling has a relatively minor effect on the reactivity from the 2P3/2 state but a non-negligible one from the 2P1/2 state. By comparison with exact time-independent calculations, it is found that the Coriolis coupling plays a relatively minor role. In addition, most of the reactivity of the excited state of fluorine atom results from the spin-orbit coupling.
Scaling and clustering effects of extreme precipitation distributions
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Zhou, Yu; Singh, Vijay P.; Li, Jianfeng
2012-08-01
SummaryOne of the impacts of climate change and human activities on the hydrological cycle is the change in the precipitation structure. Closely related to the precipitation structure are two characteristics: the volume (m) of wet periods (WPs) and the time interval between WPs or waiting time (t). Using daily precipitation data for a period of 1960-2005 from 590 rain gauge stations in China, these two characteristics are analyzed, involving scaling and clustering of precipitation episodes. Our findings indicate that m and t follow similar probability distribution curves, implying that precipitation processes are controlled by similar underlying thermo-dynamics. Analysis of conditional probability distributions shows a significant dependence of m and t on their previous values of similar volumes, and the dependence tends to be stronger when m is larger or t is longer. It indicates that a higher probability can be expected when high-intensity precipitation is followed by precipitation episodes with similar precipitation intensity and longer waiting time between WPs is followed by the waiting time of similar duration. This result indicates the clustering of extreme precipitation episodes and severe droughts or floods are apt to occur in groups.
A double hit model for the distribution of time to AIDS onset
NASA Astrophysics Data System (ADS)
Chillale, Nagaraja Rao
2013-09-01
Incubation time is a key epidemiologic descriptor of an infectious disease. In the case of HIV infection this is a random variable and is probably the longest one. The probability distribution of incubation time is the major determinant of the relation between the incidences of HIV infection and its manifestation to Aids. This is also one of the key factors used for accurate estimation of AIDS incidence in a region. The present article i) briefly reviews the work done, points out uncertainties in estimation of AIDS onset time and stresses the need for its precise estimation, ii) highlights some of the modelling features of onset distribution including immune failure mechanism, and iii) proposes a 'Double Hit' model for the distribution of time to AIDS onset in the cases of (a) independent and (b) dependent time variables of the two markers and examined the applicability of a few standard probability models.
NASA Astrophysics Data System (ADS)
Tumakov, Dmitry A.; Telnov, Dmitry A.; Maltsev, Ilia A.; Plunien, Günter; Shabaev, Vladimir M.
2017-10-01
We develop an efficient numerical implementation of the relativistic time-dependent density functional theory (RTDDFT) to study multielectron highly-charged ions subject to intense linearly-polarized laser fields. The interaction with the electromagnetic field is described within the electric dipole approximation. The resulting time-dependent relativistic Kohn-Sham (RKS) equations possess an axial symmetry and are solved accurately and efficiently with the help of the time-dependent generalized pseudospectral method. As a case study, we calculate multiphoton ionization probabilities of the neutral argon atom and argon-like xenon ion. Relativistic effects are assessed by comparison of our present results with existing non-relativistic data.
NASA Astrophysics Data System (ADS)
Telnov, Dmitry A.; Heslar, John T.; Chu, Shih-I.
2011-11-01
In the framework of the time-dependent density functional theory, we have performed 3D calculations of multiphoton ionization of Li and Be atoms by strong near-infrared laser fields. The results for the intensity-dependent probabilities of single and double ionization are presented. We make use of the time-dependent Krieger-Li-Iafrate exchange-correlation potential with self-interaction correction (TD-KLI-SIC). Such a potential possesses an integer discontinuity which improves description of the ionization process. However, we have found that the discontinuity of the TD-KLI-SIC potential is not sufficient to reproduce characteristic feature of double ionization.
NASA Technical Reports Server (NTRS)
English, Thomas
2005-01-01
A standard tool of reliability analysis used at NASA-JSC is the event tree. An event tree is simply a probability tree, with the probabilities determining the next step through the tree specified at each node. The nodal probabilities are determined by a reliability study of the physical system at work for a particular node. The reliability study performed at a node is typically referred to as a fault tree analysis, with the potential of a fault tree existing.for each node on the event tree. When examining an event tree it is obvious why the event tree/fault tree approach has been adopted. Typical event trees are quite complex in nature, and the event tree/fault tree approach provides a systematic and organized approach to reliability analysis. The purpose of this study was two fold. Firstly, we wanted to explore the possibility that a semi-Markov process can create dependencies between sojourn times (the times it takes to transition from one state to the next) that can decrease the uncertainty when estimating time to failures. Using a generalized semi-Markov model, we studied a four element reliability model and were able to demonstrate such sojourn time dependencies. Secondly, we wanted to study the use of semi-Markov processes to introduce a time variable into the event tree diagrams that are commonly developed in PRA (Probabilistic Risk Assessment) analyses. Event tree end states which change with time are more representative of failure scenarios than are the usual static probability-derived end states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K. S.; Nakae, L. F.; Prasad, M. K.
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutronsmore » in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. There are explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities for time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting and from these data the counting distributions.« less
NASA Astrophysics Data System (ADS)
Magdziarz, Marcin; Zorawik, Tomasz
2017-02-01
Aging can be observed for numerous physical systems. In such systems statistical properties [like probability distribution, mean square displacement (MSD), first-passage time] depend on a time span ta between the initialization and the beginning of observations. In this paper we study aging properties of ballistic Lévy walks and two closely related jump models: wait-first and jump-first. We calculate explicitly their probability distributions and MSDs. It turns out that despite similarities these models react very differently to the delay ta. Aging weakly affects the shape of probability density function and MSD of standard Lévy walks. For the jump models the shape of the probability density function is changed drastically. Moreover for the wait-first jump model we observe a different behavior of MSD when ta≪t and ta≫t .
Dynamics of a Landau-Zener non-dissipative system with fluctuating energy levels
NASA Astrophysics Data System (ADS)
Fai, L. C.; Diffo, J. T.; Ateuafack, M. E.; Tchoffo, M.; Fouokeng, G. C.
2014-12-01
This paper considers a Landau-Zener (two-level) system influenced by a three-dimensional Gaussian and non-Gaussian coloured noise and finds a general form of the time dependent diabatic quantum bit (qubit) flip transition probabilities in the fast, intermediate and slow noise limits. The qubit flip probability is observed to mimic (for low-frequencies noise) that of the standard LZ problem. The qubit flip probability is also observed to be the measure of quantum coherence of states. The transition probability is observed to be tailored by non-Gaussian low-frequency noise and otherwise by Gaussian low-frequency coloured noise. Intermediate and fast noise limits are observed to alter the memory of the system in time and found to improve and control quantum information processing.
ON CONTINUOUS-REVIEW (S-1,S) INVENTORY POLICIES WITH STATE-DEPENDENT LEADTIMES,
INVENTORY CONTROL, *REPLACEMENT THEORY), MATHEMATICAL MODELS, LEAD TIME , MANAGEMENT ENGINEERING, DISTRIBUTION FUNCTIONS, PROBABILITY, QUEUEING THEORY, COSTS, OPTIMIZATION, STATISTICAL PROCESSES, DIFFERENCE EQUATIONS
NASA Technical Reports Server (NTRS)
Mark, W. D.
1977-01-01
Mathematical expressions were derived for the exceedance rates and probability density functions of aircraft response variables using a turbulence model that consists of a low frequency component plus a variance modulated Gaussian turbulence component. The functional form of experimentally observed concave exceedance curves was predicted theoretically, the strength of the concave contribution being governed by the coefficient of variation of the time fluctuating variance of the turbulence. Differences in the functional forms of response exceedance curves and probability densities also were shown to depend primarily on this same coefficient of variation. Criteria were established for the validity of the local stationary assumption that is required in the derivations of the exceedance curves and probability density functions. These criteria are shown to depend on the relative time scale of the fluctuations in the variance, the fluctuations in the turbulence itself, and on the nominal duration of the relevant aircraft impulse response function. Metrics that can be generated from turbulence recordings for testing the validity of the local stationary assumption were developed.
The Everett-Wheeler interpretation and the open future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sudbery, Anthony
2011-03-28
I discuss the meaning of probability in the Everett-Wheeler interpretation of quantum mechanics, together with the problem of defining histories. To resolve these, I propose an understanding of probability arising from a form of temporal logic: the probability of a future-tense proposition is identified with its truth value in a many-valued and context-dependent logic. In short, probability is degree of truth. These ideas relate to traditional naive ideas of time and chance. Indeed, I argue that Everettian quantum mechanics is the only form of scientific theory that truly incorporates the perception that the future is open.
Navy Recruit Attrition Prediction Modeling
2014-09-01
service married but separated, divorced, or widowed (after DMDC, 2014). ....................................................27 Figure 13. Number of...or had a Marriage Annulment) had a lower probability of success. Recruits who were married at the time of accession had a higher probability of...survival rates differed among male and female service members. Male recruits who were married with dependents had a greater chance of survival than
Sensitivity study on durability variables of marine concrete structures
NASA Astrophysics Data System (ADS)
Zhou, Xin'gang; Li, Kefei
2013-06-01
In order to study the influence of parameters on durability of marine concrete structures, the parameter's sensitivity analysis was studied in this paper. With the Fick's 2nd law of diffusion and the deterministic sensitivity analysis method (DSA), the sensitivity factors of apparent surface chloride content, apparent chloride diffusion coefficient and its time dependent attenuation factor were analyzed. The results of the analysis show that the impact of design variables on concrete durability was different. The values of sensitivity factor of chloride diffusion coefficient and its time dependent attenuation factor were higher than others. Relative less error in chloride diffusion coefficient and its time dependent attenuation coefficient induces a bigger error in concrete durability design and life prediction. According to probability sensitivity analysis (PSA), the influence of mean value and variance of concrete durability design variables on the durability failure probability was studied. The results of the study provide quantitative measures of the importance of concrete durability design and life prediction variables. It was concluded that the chloride diffusion coefficient and its time dependent attenuation factor have more influence on the reliability of marine concrete structural durability. In durability design and life prediction of marine concrete structures, it was very important to reduce the measure and statistic error of durability design variables.
Reliability-based management of buried pipelines considering external corrosion defects
NASA Astrophysics Data System (ADS)
Miran, Seyedeh Azadeh
Corrosion is one of the main deteriorating mechanisms that degrade the energy pipeline integrity, due to transferring corrosive fluid or gas and interacting with corrosive environment. Corrosion defects are usually detected by periodical inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed study is applied to a cathodic protected buried steel pipeline located in Mexico. First, time-dependent power-law formulation is adopted to probabilistically characterize growth of the maximum depth and length of the external corrosion defects. Dependency between defect depth and length are considered in the model development and generation of the corrosion defects over time is characterized by the homogenous Poisson process. The growth models unknown parameters are evaluated based on the ILI data through the Bayesian updating method with Markov Chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and have ability to consider newly generated defects since last inspection. Results of this part of study show that both depth and length growth models can predict damage quantities reasonably well and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using developed corrosion growth models considering prevailing uncertainties where three failure modes, namely small leak, large leak and rupture are considered. Performance of the pipeline is evaluated through failure probability per km (or called a sub-system) where each subsystem is considered as a series system of detected and newly generated defects within that sub-system. Sensitivity analysis is also performed to determine to which incorporated parameter(s) in the growth models reliability of the studied pipeline is most sensitive. The reliability analysis results suggest that newly generated defects should be considered in calculating failure probability, especially for prediction of long-term performance of the pipeline and also, impact of the statistical uncertainty in the model parameters is significant that should be considered in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life cycle-cost analysis is conducted to determine optimal inspection interval of studied pipeline. The expected total life-cycle costs consists construction cost and expected costs of inspections, repair, and failure. The repair is conducted when failure probability from any described failure mode exceeds pre-defined probability threshold after each inspection. Moreover, this study also investigates impact of repair threshold values and unit costs of inspection and failure on the expected total life-cycle cost and optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs, but can lower failure cost and also repair cost is less significant compared to inspection and failure costs.
A simple model for DSS-14 outage times
NASA Technical Reports Server (NTRS)
Rumsey, H. C.; Stevens, R.; Posner, E. C.
1989-01-01
A model is proposed to describe DSS-14 outage times. Discrepancy Reporting System outage data for the period from January 1986 through September 1988 are used to estimate the parameters of the model. The model provides a probability distribution for the duration of outages, which agrees well with observed data. The model depends only on a small number of parameters, and has some heuristic justification. This shows that the Discrepancy Reporting System in the Deep Space Network (DSN) can be used to estimate the probability of extended outages in spite of the discrepancy reports ending when the pass ends. The probability of an outage extending beyond the end of a pass is estimated as around 5 percent.
Assessing Aircraft Supply Air to Recommend Compounds for Timely Warning of Contamination
NASA Astrophysics Data System (ADS)
Fox, Richard B.
Taking aircraft out of service for even one day to correct fume-in-cabin events can cost the industry roughly $630 million per year in lost revenue. The quantitative correlation study investigated quantitative relationships between measured concentrations of contaminants in bleed air and probability of odor detectability. Data were collected from 94 aircraft engine and auxiliary power unit (APU) bleed air tests from an archival data set between 1997 and 2011, and no relationships were found. Pearson correlation was followed by regression analysis for individual contaminants. Significant relationships of concentrations of compounds in bleed air to probability of odor detectability were found (p<0.05), as well as between compound concentration and probability of sensory irritancy detectability. Study results may be useful to establish early warning levels. Predictive trend monitoring, a method to identify potential pending failure modes within a mechanical system, may influence scheduled down-time for maintenance as a planned event, rather than repair after a mechanical failure and thereby reduce operational costs associated with odor-in-cabin events. Twenty compounds (independent variables) were found statistically significant as related to probability of odor detectability (dependent variable 1). Seventeen compounds (independent variables) were found statistically significant as related to probability of sensory irritancy detectability (dependent variable 2). Additional research was recommended to further investigate relationships between concentrations of contaminants and probability of odor detectability or probability of sensory irritancy detectability for all turbine oil brands. Further research on implementation of predictive trend monitoring may be warranted to demonstrate how the monitoring process might be applied to in-flight application.
Zago, Myrka; Bosco, Gianfranco; Maffei, Vincenzo; Iosa, Marco; Ivanenko, Yuri P; Lacquaniti, Francesco
2005-02-01
We studied how subjects learn to deal with two conflicting sensory environments as a function of the probability of each environment and the temporal distance between repeated events. Subjects were asked to intercept a visual target moving downward on a screen with randomized laws of motion. We compared five protocols that differed in the probability of constant speed (0g) targets and accelerated (1g) targets. Probability ranged from 9 to 100%, and the time interval between consecutive repetitions of the same target ranged from about 1 to 20 min. We found that subjects systematically timed their responses consistent with the assumption of gravity effects, for both 1 and 0g trials. With training, subjects rapidly adapted to 0g targets by shifting the time of motor activation. Surprisingly, the adaptation rate was independent of both the probability of 0g targets and their temporal distance. Very few 0g trials sporadically interspersed as catch trials during immersive practice with 1g trials were sufficient for learning and consolidation in long-term memory, as verified by retesting after 24 h. We argue that the memory store for adapted states of the internal gravity model is triggered by individual events and can be sustained for prolonged periods of time separating sporadic repetitions. This form of event-related learning could depend on multiple-stage memory, with exponential rise and decay in the initial stages followed by a sample-and-hold module.
Bai, Shirong; Skodje, Rex T
2017-08-17
A new approach is presented for simulating the time-evolution of chemically reactive systems. This method provides an alternative to conventional modeling of mass-action kinetics that involves solving differential equations for the species concentrations. The method presented here avoids the need to solve the rate equations by switching to a representation based on chemical pathways. In the Sum Over Histories Representation (or SOHR) method, any time-dependent kinetic observable, such as concentration, is written as a linear combination of probabilities for chemical pathways leading to a desired outcome. In this work, an iterative method is introduced that allows the time-dependent pathway probabilities to be generated from a knowledge of the elementary rate coefficients, thus avoiding the pitfalls involved in solving the differential equations of kinetics. The method is successfully applied to the model Lotka-Volterra system and to a realistic H 2 combustion model.
Time Evolving Fission Chain Theory and Fast Neutron and Gamma-Ray Counting Distributions
Kim, K. S.; Nakae, L. F.; Prasad, M. K.; ...
2015-11-01
Here, we solve a simple theoretical model of time evolving fission chains due to Feynman that generalizes and asymptotically approaches the point model theory. The point model theory has been used to analyze thermal neutron counting data. This extension of the theory underlies fast counting data for both neutrons and gamma rays from metal systems. Fast neutron and gamma-ray counting is now possible using liquid scintillator arrays with nanosecond time resolution. For individual fission chains, the differential equations describing three correlated probability distributions are solved: the time-dependent internal neutron population, accumulation of fissions in time, and accumulation of leaked neutronsmore » in time. Explicit analytic formulas are given for correlated moments of the time evolving chain populations. The equations for random time gate fast neutron and gamma-ray counting distributions, due to randomly initiated chains, are presented. Correlated moment equations are given for both random time gate and triggered time gate counting. There are explicit formulas for all correlated moments are given up to triple order, for all combinations of correlated fast neutrons and gamma rays. The nonlinear differential equations for probabilities for time dependent fission chain populations have a remarkably simple Monte Carlo realization. A Monte Carlo code was developed for this theory and is shown to statistically realize the solutions to the fission chain theory probability distributions. Combined with random initiation of chains and detection of external quanta, the Monte Carlo code generates time tagged data for neutron and gamma-ray counting and from these data the counting distributions.« less
Causality in time-neutral cosmologies
NASA Astrophysics Data System (ADS)
Kent, Adrian
1999-02-01
Gell-Mann and Hartle (GMH) have recently considered time-neutral cosmological models in which the initial and final conditions are independently specified, and several authors have investigated experimental tests of such models. We point out here that GMH time-neutral models can allow superluminal signaling, in the sense that it can be possible for observers in those cosmologies, by detecting and exploiting regularities in the final state, to construct devices which send and receive signals between space-like separated points. In suitable cosmologies, any single superluminal message can be transmitted with probability arbitrarily close to one by the use of redundant signals. However, the outcome probabilities of quantum measurements generally depend on precisely which past and future measurements take place. As the transmission of any signal relies on quantum measurements, its transmission probability is similarly context dependent. As a result, the standard superluminal signaling paradoxes do not apply. Despite their unusual features, the models are internally consistent. These results illustrate an interesting conceptual point. The standard view of Minkowski causality is not an absolutely indispensable part of the mathematical formalism of relativistic quantum theory. It is contingent on the empirical observation that naturally occurring ensembles can be naturally pre-selected but not post-selected.
FORESHOCKS AND TIME-DEPENDENT EARTHQUAKE HAZARD ASSESSMENT IN SOUTHERN CALIFORNIA.
Jones, Lucile M.
1985-01-01
The probability that an earthquake in southern California (M greater than equivalent to 3. 0) will be followed by an earthquake of larger magnitude within 5 days and 10 km (i. e. , will be a foreshock) is 6 plus or minus 0. 5 per cent (1 S. D. ), and is not significantly dependent on the magnitude of the possible foreshock between M equals 3 and M equals 5. The probability that an earthquake will be followed by an M greater than equivalent to 5. 0 main shock, however, increases with magnitude of the foreshock from less than 1 per cent at M greater than equivalent to 3 to 6. 5 plus or minus 2. 5 per cent (1 S. D. ) at M greater than equivalent to 5. The main shock will most likely occur in the first hour after the foreshock, and the probability that a main shock will occur in the first hour decreases with elapsed time from the occurrence of the possible foreshock by approximately the inverse of time. Thus, the occurrence of an earthquake of M greater than equivalent to 3. 0 in southern California increases the earthquake hazard within a small space-time window several orders of magnitude above the normal background level.
NASA Astrophysics Data System (ADS)
Tenkès, Lucille-Marie; Hollerbach, Rainer; Kim, Eun-jin
2017-12-01
A probabilistic description is essential for understanding growth processes in non-stationary states. In this paper, we compute time-dependent probability density functions (PDFs) in order to investigate stochastic logistic and Gompertz models, which are two of the most popular growth models. We consider different types of short-correlated multiplicative and additive noise sources and compare the time-dependent PDFs in the two models, elucidating the effects of the additive and multiplicative noises on the form of PDFs. We demonstrate an interesting transition from a unimodal to a bimodal PDF as the multiplicative noise increases for a fixed value of the additive noise. A much weaker (leaky) attractor in the Gompertz model leads to a significant (singular) growth of the population of a very small size. We point out the limitation of using stationary PDFs, mean value and variance in understanding statistical properties of the growth in non-stationary states, highlighting the importance of time-dependent PDFs. We further compare these two models from the perspective of information change that occurs during the growth process. Specifically, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart and sum these distances in time. The total distance along the trajectory quantifies the total number of different states that the system undergoes in time, and is called the information length. We show that the time-evolution of the two models become more similar when measured in units of the information length and point out the merit of using the information length in unifying and understanding the dynamic evolution of different growth processes.
Barani, Simone; Mascandola, Claudia; Riccomagno, Eva; Spallarossa, Daniele; Albarello, Dario; Ferretti, Gabriele; Scafidi, Davide; Augliera, Paolo; Massa, Marco
2018-03-28
Since the beginning of the 1980s, when Mandelbrot observed that earthquakes occur on 'fractal' self-similar sets, many studies have investigated the dynamical mechanisms that lead to self-similarities in the earthquake process. Interpreting seismicity as a self-similar process is undoubtedly convenient to bypass the physical complexities related to the actual process. Self-similar processes are indeed invariant under suitable scaling of space and time. In this study, we show that long-range dependence is an inherent feature of the seismic process, and is universal. Examination of series of cumulative seismic moment both in Italy and worldwide through Hurst's rescaled range analysis shows that seismicity is a memory process with a Hurst exponent H ≈ 0.87. We observe that H is substantially space- and time-invariant, except in cases of catalog incompleteness. This has implications for earthquake forecasting. Hence, we have developed a probability model for earthquake occurrence that allows for long-range dependence in the seismic process. Unlike the Poisson model, dependent events are allowed. This model can be easily transferred to other disciplines that deal with self-similar processes.
NASA Astrophysics Data System (ADS)
Petrukhin, Venedict F.; Poddubnaya, I. V.; Ponomarev, Evgenij A.; Sutyrin, Nicolaj A.
2004-12-01
The analysis of the ionospheric data on Irkutsk obtained from 1960 to 1996 was made. Was shown, that the link of probabilities of observation of the sporadic derivations in E-region of ionosphere with solar activity (SA) essentially depends on time of day, season and correlation between solar and geophysical parameters. For different types of sporadic derivations this link has different character and with a different image varies with current of time. It is necessary to mark, that the link of night sporadic derivations (Es such as "f") with solar activity in the summer very high and practically does not vary in time (r=-0.897-/+0.04). The temporary course of coefficients of correlation between solar activity and probability of observation of sporadic stratums (r) of a different type varies depending on the season. Further, for stratums of a type "cl" and "l" the change r happens within increase of SA. At the same time, there is an abnormal behavior of height so f sporadic stratums such as "cl". There is an impression that in a considered time frame there is competition of two factors. One of them - solar activity, which in the norm supports the negative correlation link with frequency of observation and second - a factor of an unknown nature, which has caused evocative anomaly of altitude, becomes dominant above natural negative link.
NASA Technical Reports Server (NTRS)
Sullivan, Roy M.
2016-01-01
The stress rupture strength of silicon carbide fiber-reinforced silicon carbide composites with a boron nitride fiber coating decreases with time within the intermediate temperature range of 700 to 950 degree Celsius. Various theories have been proposed to explain the cause of the time-dependent stress rupture strength. The objective of this paper is to investigate the relative significance of the various theories for the time-dependent strength of silicon carbide fiber-reinforced silicon carbide composites. This is achieved through the development of a numerically based progressive failure analysis routine and through the application of the routine to simulate the composite stress rupture tests. The progressive failure routine is a time-marching routine with an iterative loop between a probability of fiber survival equation and a force equilibrium equation within each time step. Failure of the composite is assumed to initiate near a matrix crack and the progression of fiber failures occurs by global load sharing. The probability of survival equation is derived from consideration of the strength of ceramic fibers with randomly occurring and slow growing flaws as well as the mechanical interaction between the fibers and matrix near a matrix crack. The force equilibrium equation follows from the global load sharing presumption. The results of progressive failure analyses of the composite tests suggest that the relationship between time and stress-rupture strength is attributed almost entirely to the slow flaw growth within the fibers. Although other mechanisms may be present, they appear to have only a minor influence on the observed time-dependent behavior.
Some Exact Results for the Schroedinger Wave Equation with a Time Dependent Potential
NASA Technical Reports Server (NTRS)
Campbell, Joel
2009-01-01
The time dependent Schroedinger equation with a time dependent delta function potential is solved exactly for many special cases. In all other cases the problem can be reduced to an integral equation of the Volterra type. It is shown that by knowing the wave function at the origin, one may derive the wave function everywhere. Thus, the problem is reduced from a PDE in two variables to an integral equation in one. These results are used to compare adiabatic versus sudden changes in the potential. It is shown that adiabatic changes in the p otential lead to conservation of the normalization of the probability density.
NASA Astrophysics Data System (ADS)
Grimaudo, R.; Belousov, Yu.; Nakazato, H.; Messina, A.
2018-05-01
The quantum dynamics of a Jˆ2 =(jˆ1 +jˆ2) 2-conserving Hamiltonian model describing two coupled spins jˆ1 and jˆ2 under controllable and fluctuating time-dependent magnetic fields is investigated. Each eigenspace of Jˆ2 is dynamically invariant and the Hamiltonian of the total system restricted to any one of such (j1 +j2) - |j1 -j2 | + 1 eigenspaces, possesses the SU(2) structure of the Hamiltonian of a single fictitious spin acted upon by the total magnetic field. We show that such a reducibility holds regardless of the time dependence of the externally applied field as well as of the statistical properties of the noise, here represented as a classical fluctuating magnetic field. The time evolution of the joint transition probabilities of the two spins jˆ1 and jˆ2 between two prefixed factorized states is examined, bringing to light peculiar dynamical properties of the system under scrutiny. When the noise-induced non-unitary dynamics of the two coupled spins is properly taken into account, analytical expressions for the joint Landau-Zener transition probabilities are reported. The possibility of extending the applicability of our results to other time-dependent spin models is pointed out.
Restoration of isospin symmetry in highly excited nuclei
NASA Astrophysics Data System (ADS)
Sagawa, H.; Bortignon, P. F.; Colò, G.
1998-12-01
Explicit relations between the isospin mixing probability, the spreading width ΓIAS↓ of the Isobaric Analog State (IAS) and the statistical decay width Γc of the compound nucleus at finite excitation energy, are derived by using the Feshbach projection formalism. The temperature dependence of the isospin mixing probability is discussed quantitatively for the first time by using the values of ΓIAS↓ and of Γc calculated by means of microscopic models. It is shown that the mixing probability remains essentially constant up to a temperature of the order of 1 MeV and then decreases to about 1/4 of its zero temperature value, at higher temperature than ~3 MeV, due to the short decay time of the compound system.
Extinction times of epidemic outbreaks in networks.
Holme, Petter
2013-01-01
In the Susceptible-Infectious-Recovered (SIR) model of disease spreading, the time to extinction of the epidemics happens at an intermediate value of the per-contact transmission probability. Too contagious infections burn out fast in the population. Infections that are not contagious enough die out before they spread to a large fraction of people. We characterize how the maximal extinction time in SIR simulations on networks depend on the network structure. For example we find that the average distances in isolated components, weighted by the component size, is a good predictor of the maximal time to extinction. Furthermore, the transmission probability giving the longest outbreaks is larger than, but otherwise seemingly independent of, the epidemic threshold.
Recent progresses in outcome-dependent sampling with failure time data.
Ding, Jieli; Lu, Tsui-Shan; Cai, Jianwen; Zhou, Haibo
2017-01-01
An outcome-dependent sampling (ODS) design is a retrospective sampling scheme where one observes the primary exposure variables with a probability that depends on the observed value of the outcome variable. When the outcome of interest is failure time, the observed data are often censored. By allowing the selection of the supplemental samples depends on whether the event of interest happens or not and oversampling subjects from the most informative regions, ODS design for the time-to-event data can reduce the cost of the study and improve the efficiency. We review recent progresses and advances in research on ODS designs with failure time data. This includes researches on ODS related designs like case-cohort design, generalized case-cohort design, stratified case-cohort design, general failure-time ODS design, length-biased sampling design and interval sampling design.
Recent progresses in outcome-dependent sampling with failure time data
Ding, Jieli; Lu, Tsui-Shan; Cai, Jianwen; Zhou, Haibo
2016-01-01
An outcome-dependent sampling (ODS) design is a retrospective sampling scheme where one observes the primary exposure variables with a probability that depends on the observed value of the outcome variable. When the outcome of interest is failure time, the observed data are often censored. By allowing the selection of the supplemental samples depends on whether the event of interest happens or not and oversampling subjects from the most informative regions, ODS design for the time-to-event data can reduce the cost of the study and improve the efficiency. We review recent progresses and advances in research on ODS designs with failure time data. This includes researches on ODS related designs like case–cohort design, generalized case–cohort design, stratified case–cohort design, general failure-time ODS design, length-biased sampling design and interval sampling design. PMID:26759313
Los Angeles Beach Harbors, Los Angeles County, California.
1974-10-01
predicted at this time. The presently proposed project is not dependent upon nor contributory to further navigation development in the V" Los Angeles...as Long Beach and Compton. The Los Angeles Harbor probably exhibited similar intensities ranging from VII to IX depending on the soil conditions...the harbor. The water quality in these aquifers is dependent upon the rates of recharge and extraction (natural and otherwise). The Dominguez Gap
Benoit, Julia S; Chan, Wenyaw; Doody, Rachelle S
2015-01-01
Parameter dependency within data sets in simulation studies is common, especially in models such as Continuous-Time Markov Chains (CTMC). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: 1) to develop a multivariate approach for assessing accuracy and precision for simulation studies 2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance including bias, component-wise coverage probabilities, and joint coverage probabilities are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance and choice of inference should properly reflect the purpose of the simulation.
A Brownian model for recurrent earthquakes
Matthews, M.V.; Ellsworth, W.L.; Reasenberg, P.A.
2002-01-01
We construct a probability model for rupture times on a recurrent earthquake source. Adding Brownian perturbations to steady tectonic loading produces a stochastic load-state process. Rupture is assumed to occur when this process reaches a critical-failure threshold. An earthquake relaxes the load state to a characteristic ground level and begins a new failure cycle. The load-state process is a Brownian relaxation oscillator. Intervals between events have a Brownian passage-time distribution that may serve as a temporal model for time-dependent, long-term seismic forecasting. This distribution has the following noteworthy properties: (1) the probability of immediate rerupture is zero; (2) the hazard rate increases steadily from zero at t = 0 to a finite maximum near the mean recurrence time and then decreases asymptotically to a quasi-stationary level, in which the conditional probability of an event becomes time independent; and (3) the quasi-stationary failure rate is greater than, equal to, or less than the mean failure rate because the coefficient of variation is less than, equal to, or greater than 1/???2 ??? 0.707. In addition, the model provides expressions for the hazard rate and probability of rupture on faults for which only a bound can be placed on the time of the last rupture. The Brownian relaxation oscillator provides a connection between observable event times and a formal state variable that reflects the macromechanics of stress and strain accumulation. Analysis of this process reveals that the quasi-stationary distance to failure has a gamma distribution, and residual life has a related exponential distribution. It also enables calculation of "interaction" effects due to external perturbations to the state, such as stress-transfer effects from earthquakes outside the target source. The influence of interaction effects on recurrence times is transient and strongly dependent on when in the loading cycle step pertubations occur. Transient effects may be much stronger than would be predicted by the "clock change" method and characteristically decay inversely with elapsed time after the perturbation.
Intertime jump statistics of state-dependent Poisson processes.
Daly, Edoardo; Porporato, Amilcare
2007-01-01
A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. Such a method uses the survivor function obtained by a modified version of the master equation associated to the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.
Theory of Stochastic Laplacian Growth
NASA Astrophysics Data System (ADS)
Alekseev, Oleg; Mineev-Weinstein, Mark
2017-07-01
We generalize the diffusion-limited aggregation by issuing many randomly-walking particles, which stick to a cluster at the discrete time unit providing its growth. Using simple combinatorial arguments we determine probabilities of different growth scenarios and prove that the most probable evolution is governed by the deterministic Laplacian growth equation. A potential-theoretical analysis of the growth probabilities reveals connections with the tau-function of the integrable dispersionless limit of the two-dimensional Toda hierarchy, normal matrix ensembles, and the two-dimensional Dyson gas confined in a non-uniform magnetic field. We introduce the time-dependent Hamiltonian, which generates transitions between different classes of equivalence of closed curves, and prove the Hamiltonian structure of the interface dynamics. Finally, we propose a relation between probabilities of growth scenarios and the semi-classical limit of certain correlation functions of "light" exponential operators in the Liouville conformal field theory on a pseudosphere.
Transcriptional dynamics with time-dependent reaction rates
NASA Astrophysics Data System (ADS)
Nandi, Shubhendu; Ghosh, Anandamohan
2015-02-01
Transcription is the first step in the process of gene regulation that controls cell response to varying environmental conditions. Transcription is a stochastic process, involving synthesis and degradation of mRNAs, that can be modeled as a birth-death process. We consider a generic stochastic model, where the fluctuating environment is encoded in the time-dependent reaction rates. We obtain an exact analytical expression for the mRNA probability distribution and are able to analyze the response for arbitrary time-dependent protocols. Our analytical results and stochastic simulations confirm that the transcriptional machinery primarily act as a low-pass filter. We also show that depending on the system parameters, the mRNA levels in a cell population can show synchronous/asynchronous fluctuations and can deviate from Poisson statistics.
NASA Astrophysics Data System (ADS)
Schneider, E. A.; Deinert, M. R.; Cady, K. B.
2006-10-01
The balance of isotopes in a nuclear reactor core is key to understanding the overall performance of a given fuel cycle. This balance is in turn most strongly affected by the time and energy-dependent neutron flux. While many large and involved computer packages exist for determining this spectrum, a simplified approach amenable to rapid computation is missing from the literature. We present such a model, which accepts as inputs the fuel element/moderator geometry and composition, reactor geometry, fuel residence time and target burnup and we compare it to OECD/NEA benchmarks for homogeneous MOX and UOX LWR cores. Collision probability approximations to the neutron transport equation are used to decouple the spatial and energy variables. The lethargy dependent neutron flux, governed by coupled integral equations for the fuel and moderator/coolant regions is treated by multigroup thermalization methods, and the transport of neutrons through space is modeled by fuel to moderator transport and escape probabilities. Reactivity control is achieved through use of a burnable poison or adjustable control medium. The model calculates the buildup of 24 actinides, as well as fission products, along with the lethargy dependent neutron flux and the results of several simulations are compared with benchmarked standards.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jadaan, O.M.; Powers, L.M.; Nemeth, N.N.
1995-08-01
A probabilistic design methodology which predicts the fast fracture and time-dependent failure behavior of thermomechanically loaded ceramic components is discussed using the CARES/LIFE integrated design computer program. Slow crack growth (SCG) is assumed to be the mechanism responsible for delayed failure behavior. Inert strength and dynamic fatigue data obtained from testing coupon specimens (O-ring and C-ring specimens) are initially used to calculate the fast fracture and SCG material parameters as a function of temperature using the parameter estimation techniques available with the CARES/LIFE code. Finite element analysis (FEA) is used to compute the stress distributions for the tube as amore » function of applied pressure. Knowing the stress and temperature distributions and the fast fracture and SCG material parameters, the life time for a given tube can be computed. A stress-failure probability-time to failure (SPT) diagram is subsequently constructed for these tubes. Such a diagram can be used by design engineers to estimate the time to failure at a given failure probability level for a component subjected to a given thermomechanical load.« less
The Active Fault Parameters for Time-Dependent Earthquake Hazard Assessment in Taiwan
NASA Astrophysics Data System (ADS)
Lee, Y.; Cheng, C.; Lin, P.; Shao, K.; Wu, Y.; Shih, C.
2011-12-01
Taiwan is located at the boundary between the Philippine Sea Plate and the Eurasian Plate, with a convergence rate of ~ 80 mm/yr in a ~N118E direction. The plate motion is so active that earthquake is very frequent. In the Taiwan area, disaster-inducing earthquakes often result from active faults. For this reason, it's an important subject to understand the activity and hazard of active faults. The active faults in Taiwan are mainly located in the Western Foothills and the Eastern longitudinal valley. Active fault distribution map published by the Central Geological Survey (CGS) in 2010 shows that there are 31 active faults in the island of Taiwan and some of which are related to earthquake. Many researchers have investigated these active faults and continuously update new data and results, but few people have integrated them for time-dependent earthquake hazard assessment. In this study, we want to gather previous researches and field work results and then integrate these data as an active fault parameters table for time-dependent earthquake hazard assessment. We are going to gather the seismic profiles or earthquake relocation of a fault and then combine the fault trace on land to establish the 3D fault geometry model in GIS system. We collect the researches of fault source scaling in Taiwan and estimate the maximum magnitude from fault length or fault area. We use the characteristic earthquake model to evaluate the active fault earthquake recurrence interval. In the other parameters, we will collect previous studies or historical references and complete our parameter table of active faults in Taiwan. The WG08 have done the time-dependent earthquake hazard assessment of active faults in California. They established the fault models, deformation models, earthquake rate models, and probability models and then compute the probability of faults in California. Following these steps, we have the preliminary evaluated probability of earthquake-related hazards in certain faults in Taiwan. By accomplishing active fault parameters table in Taiwan, we would apply it in time-dependent earthquake hazard assessment. The result can also give engineers a reference for design. Furthermore, it can be applied in the seismic hazard map to mitigate disasters.
Lotka-Volterra competition models for sessile organisms.
Spencer, Matthew; Tanner, Jason E
2008-04-01
Markov models are widely used to describe the dynamics of communities of sessile organisms, because they are easily fitted to field data and provide a rich set of analytical tools. In typical ecological applications, at any point in time, each point in space is in one of a finite set of states (e.g., species, empty space). The models aim to describe the probabilities of transitions between states. In most Markov models for communities, these transition probabilities are assumed to be independent of state abundances. This assumption is often suspected to be false and is rarely justified explicitly. Here, we start with simple assumptions about the interactions among sessile organisms and derive a model in which transition probabilities depend on the abundance of destination states. This model is formulated in continuous time and is equivalent to a Lotka-Volterra competition model. We fit this model and a variety of alternatives in which transition probabilities do not depend on state abundances to a long-term coral reef data set. The Lotka-Volterra model describes the data much better than all models we consider other than a saturated model (a model with a separate parameter for each transition at each time interval, which by definition fits the data perfectly). Our approach provides a basis for further development of stochastic models of sessile communities, and many of the methods we use are relevant to other types of community. We discuss possible extensions to spatially explicit models.
NASA Astrophysics Data System (ADS)
Kosov, Daniel S.
2017-09-01
Quantum transport of electrons through a molecule is a series of individual electron tunneling events separated by stochastic waiting time intervals. We study the emergence of temporal correlations between successive waiting times for the electron transport in a vibrating molecular junction. Using the master equation approach, we compute the joint probability distribution for waiting times of two successive tunneling events. We show that the probability distribution is completely reset after each tunneling event if molecular vibrations are thermally equilibrated. If we treat vibrational dynamics exactly without imposing the equilibration constraint, the statistics of electron tunneling events become non-renewal. Non-renewal statistics between two waiting times τ1 and τ2 means that the density matrix of the molecule is not fully renewed after time τ1 and the probability of observing waiting time τ2 for the second electron transfer depends on the previous electron waiting time τ1. The strong electron-vibration coupling is required for the emergence of the non-renewal statistics. We show that in the Franck-Condon blockade regime, extremely rare tunneling events become positively correlated.
Exposing extinction risk analysis to pathogens: Is disease just another form of density dependence?
Gerber, L.R.; McCallum, H.; Lafferty, K.D.; Sabo, J.L.; Dobson, A.
2005-01-01
In the United States and several other countries, the development of population viability analyses (PVA) is a legal requirement of any species survival plan developed for threatened and endangered species. Despite the importance of pathogens in natural populations, little attention has been given to host-pathogen dynamics in PVA. To study the effect of infectious pathogens on extinction risk estimates generated from PVA, we review and synthesize the relevance of host-pathogen dynamics in analyses of extinction risk. We then develop a stochastic, density-dependent host-parasite model to investigate the effects of disease on the persistence of endangered populations. We show that this model converges on a Ricker model of density dependence under a suite of limiting assumptions, including a high probability that epidemics will arrive and occur. Using this modeling framework, we then quantify: (1) dynamic differences between time series generated by disease and Ricker processes with the same parameters; (2) observed probabilities of quasi-extinction for populations exposed to disease or self-limitation; and (3) bias in probabilities of quasi-extinction estimated by density-independent PVAs when populations experience either form of density dependence. Our results suggest two generalities about the relationships among disease, PVA, and the management of endangered species. First, disease more strongly increases variability in host abundance and, thus, the probability of quasi-extinction, than does self-limitation. This result stems from the fact that the effects and the probability of occurrence of disease are both density dependent. Second, estimates of quasi-extinction are more often overly optimistic for populations experiencing disease than for those subject to self-limitation. Thus, although the results of density-independent PVAs may be relatively robust to some particular assumptions about density dependence, they are less robust when endangered populations are known to be susceptible to disease. If potential management actions involve manipulating pathogens, then it may be useful to model disease explicitly. ?? 2005 by the Ecological Society of America.
Method and device for landing aircraft dependent on runway occupancy time
NASA Technical Reports Server (NTRS)
Ghalebsaz Jeddi, Babak (Inventor)
2012-01-01
A technique for landing aircraft using an aircraft landing accident avoidance device is disclosed. The technique includes determining at least two probability distribution functions; determining a safe lower limit on a separation between a lead aircraft and a trail aircraft on a glide slope to the runway; determining a maximum sustainable safe attempt-to-land rate on the runway based on the safe lower limit and the probability distribution functions; directing the trail aircraft to enter the glide slope with a target separation from the lead aircraft corresponding to the maximum sustainable safe attempt-to-land rate; while the trail aircraft is in the glide slope, determining an actual separation between the lead aircraft and the trail aircraft; and directing the trail aircraft to execute a go-around maneuver if the actual separation approaches the safe lower limit. Probability distribution functions include runway occupancy time, and landing time interval and/or inter-arrival distance.
Voter model with arbitrary degree dependence: clout, confidence and irreversibility
NASA Astrophysics Data System (ADS)
Fotouhi, Babak; Rabbat, Michael G.
2014-03-01
The voter model is widely used to model opinion dynamics in society. In this paper, we propose three modifications to incorporate heterogeneity into the model. We address the corresponding oversimplifications of the conventional voter model which are unrealistic. We first consider the voter model with popularity bias. The influence of each node on its neighbors depends on its degree. We find the consensus probabilities and expected consensus times for each of the states. We also find the fixation probability, which is the probability that a single node whose state differs from every other node imposes its state on the entire system. In addition, we find the expected fixation time. Then two other extensions to the model are proposed and the motivations behind them are discussed. The first one is confidence, where in addition to the states of neighbors, nodes take their own state into account at each update. We repeat the calculations for the augmented model and investigate the effects of adding confidence to the model. The second proposed extension is irreversibility, where one of the states is given the property that once nodes adopt it, they cannot switch back. This is motivated by applications where, agents take an irreversible action such as seeing a movie, purchasing a music album online, or buying a new product. The dynamics of densities, fixation times and consensus times are obtained.
Experimental Investigation of Muon-Catalyzed d-t Fusion
NASA Astrophysics Data System (ADS)
Jones, S. E.; Anderson, A. N.; Caffrey, A. J.; Walter, J. B.; Watts, K. D.; Bradbury, J. N.; Gram, P. A. M.; Leon, M.; Maltrud, H. R.; Paciotti, M. A.
1983-11-01
Measurements of the absolute neutron yield and the time dependence of the appearance of neutrons resulting from muon-catalyzed fusion have been carried out in high-density deuterium-tritium mixtures. The temperature dependence of the resonant dtμ-molecular formation process has been determined in the range 100 to 540 K. Mesomolecular formation is found to be resonant for DT as well as D2 target molecules. The sticking probability and other fundamental parameters have been measured for the first time.
Einhäuser, Wolfgang; Nuthmann, Antje
2016-09-01
During natural scene viewing, humans typically attend and fixate selected locations for about 200-400 ms. Two variables characterize such "overt" attention: the probability of a location being fixated, and the fixation's duration. Both variables have been widely researched, but little is known about their relation. We use a two-step approach to investigate the relation between fixation probability and duration. In the first step, we use a large corpus of fixation data. We demonstrate that fixation probability (empirical salience) predicts fixation duration across different observers and tasks. Linear mixed-effects modeling shows that this relation is explained neither by joint dependencies on simple image features (luminance, contrast, edge density) nor by spatial biases (central bias). In the second step, we experimentally manipulate some of these features. We find that fixation probability from the corpus data still predicts fixation duration for this new set of experimental data. This holds even if stimuli are deprived of low-level images features, as long as higher level scene structure remains intact. Together, this shows a robust relation between fixation duration and probability, which does not depend on simple image features. Moreover, the study exemplifies the combination of empirical research on a large corpus of data with targeted experimental manipulations.
Ding, Aidong Adam; Hsieh, Jin-Jian; Wang, Weijing
2015-01-01
Bivariate survival analysis has wide applications. In the presence of covariates, most literature focuses on studying their effects on the marginal distributions. However covariates can also affect the association between the two variables. In this article we consider the latter issue by proposing a nonstandard local linear estimator for the concordance probability as a function of covariates. Under the Clayton copula, the conditional concordance probability has a simple one-to-one correspondence with the copula parameter for different data structures including those subject to independent or dependent censoring and dependent truncation. The proposed method can be used to study how covariates affect the Clayton association parameter without specifying marginal regression models. Asymptotic properties of the proposed estimators are derived and their finite-sample performances are examined via simulations. Finally, for illustration, we apply the proposed method to analyze a bone marrow transplant data set.
NASA Astrophysics Data System (ADS)
Dufty, J. W.
1984-09-01
Diffusion of a tagged particle in a fluid with uniform shear flow is described. The continuity equation for the probability density describing the position of the tagged particle is considered. The diffusion tensor is identified by expanding the irreversible part of the probability current to first order in the gradient of the probability density, but with no restriction on the shear rate. The tensor is expressed as the time integral of a nonequilibrium autocorrelation function for the velocity of the tagged particle in its local fluid rest frame, generalizing the Green-Kubo expression to the nonequilibrium state. The tensor is evaluated from results obtained previously for the velocity autocorrelation function that are exact for Maxwell molecules in the Boltzmann limit. The effects of viscous heating are included and the dependence on frequency and shear rate is displayed explicitly. The mode-coupling contributions to the frequency and shear-rate dependent diffusion tensor are calculated.
Quantum dynamics study of H+NH3-->H2+NH2 reaction.
Zhang, Xu Qiang; Cui, Qian; Zhang, John Z H; Han, Ke Li
2007-06-21
We report in this paper a quantum dynamics study for the reaction H+NH3-->NH2+H2 on the potential energy surface of Corchado and Espinosa-Garcia [J. Chem. Phys. 106, 4013 (1997)]. The quantum dynamics calculation employs the semirigid vibrating rotor target model [J. Z. H. Zhang, J. Chem. Phys. 111, 3929 (1999)] and time-dependent wave packet method to propagate the wave function. Initial state-specific reaction probabilities are obtained, and an energy correction scheme is employed to account for zero point energy changes for the neglected degrees of freedom in the dynamics treatment. Tunneling effect is observed in the energy dependency of reaction probability, similar to those found in H+CH4 reaction. The influence of rovibrational excitation on reaction probability and stereodynamical effect are investigated. Reaction rate constants from the initial ground state are calculated and are compared to those from the transition state theory and experimental measurement.
Coevolution of patch-type dependent emigration and patch-type dependent immigration.
Weigang, Helene C
2017-08-07
The three phases of dispersal - emigration, transfer and immigration - are affecting each other and the former and latter decisions may depend on patch types. Despite the inevitable fact of the complexity of the dispersal process, patch-type dependencies of dispersal decisions modelled as emigration and immigration are usually missing in theoretical dispersal models. Here, I investigate the coevolution of patch-type dependent emigration and patch-type dependent immigration in an extended Hamilton-May model. The dispersing population inhabits a landscape structured into many patches of two types and disperses during a continuous-time season. The trait under consideration is a four dimensional vector consisting of two values for emigration probability from the patches and two values for immigration probability into the patches of each type. Using the adaptive dynamics approach I show that four qualitatively different dispersal strategies may evolve in different parameter regions, including a counterintuitive strategy, where patches of one type are fully dispersed from (emigration probability is one) but individuals nevertheless always immigrate into them during the dispersal season (immigration probability is one). I present examples of evolutionary branching in a wide parameter range, when the patches with high local death rate during the dispersal season guarantee a high expected disperser output. I find that two dispersal strategies can coexist after evolutionary branching: a strategy with full immigration only into the patches with high expected disperser output coexists with a strategy that immigrates into any patch. Stochastic simulations agree with the numerical predictions. Since evolutionary branching is also found when immigration evolves alone, the present study is adding coevolutionary constraints on the emigration traits and hence finds that the coevolution of a higher dimensional trait sometimes hinders evolutionary diversification. Copyright © 2017 Elsevier Ltd. All rights reserved.
Fixation of strategies with the Moran and Fermi processes in evolutionary games
NASA Astrophysics Data System (ADS)
Liu, Xuesong; He, Mingfeng; Kang, Yibin; Pan, Qiuhui
2017-10-01
A model of stochastic evolutionary game dynamics with finite population was built. It combines the standard Moran and Fermi rules with two strategies cooperation and defection. We obtain the expressions of fixation probabilities and fixation times. The one-third rule which has been found in the frequency dependent Moran process also holds for our model. We obtain the conditions of strategy being an evolutionarily stable strategy in our model, and then make a comparison with the standard Moran process. Besides, the analytical results show that compared with the standard Moran process, fixation occurs with higher probabilities under a prisoner's dilemma game and coordination game, but with lower probabilities under a coexistence game. The simulation result shows that the fixation time in our mixed process is lower than that in the standard Fermi process. In comparison with the standard Moran process, fixation always takes more time on average in spatial populations, regardless of the game. In addition, the fixation time decreases with the growth of the number of neighbors.
NASA Astrophysics Data System (ADS)
Massah, Mozhdeh; Kantz, Holger
2016-04-01
As we have one and only one earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDP) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables for which there exists an analytical expression for the rate function, the correlated variables such as auto-regressive (short memory) and auto-regressive fractionally integrated moving average (long memory) processes, have not an analytical LDP. We study LDP for these processes, in order to see how correlation affects this probability in comparison to iid data. Although short range correlations lead to a simple correction of sample size, long range correlations lead to a sub-exponential decay of LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120 year long time series of daily temperature anomalies measured in Potsdam (Germany).
Maximum Entropy Principle for Transportation
NASA Astrophysics Data System (ADS)
Bilich, F.; DaSilva, R.
2008-11-01
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
Sensing Surveillance & Navigation
2012-03-07
Removing Atmospheric Turbulence Goal: to restore a single high quality image from the observed sequence Prof. Peyman...Computer Sciences – Higher wavelet studies , time-scale, time-frequency transformations, Reduced Signature Targets, Low Probability of Intercept...Range Dependent Beam -patterns •Electronic Steering with Frequency Offsets •Inherent Countermeasure Capability Why? W1(t) W2(t) W3
2013-08-01
cost due to potential warranty costs, repairs and loss of market share. Reliability is the probability that the system will perform its intended...MCMC and splitting sampling schemes. Our proposed SS/ STP method is presented in Section 4, including accuracy bounds and computational effort
Toda, Shinji; Stein, Ross S.
2013-01-01
1] The Kanto seismic corridor surrounding Tokyo has hosted four to five M ≥ 7 earthquakes in the past 400 years. Immediately after the Tohoku earthquake, the seismicity rate in the corridor jumped 10-fold, while the rate of normal focal mechanisms dropped in half. The seismicity rate decayed for 6–12 months, after which it steadied at three times the pre-Tohoku rate. The seismicity rate jump and decay to a new rate, as well as the focal mechanism change, can be explained by the static stress imparted by the Tohoku rupture and postseismic creep to Kanto faults. We therefore fit the seismicity observations to a rate/state Coulomb model, which we use to forecast the time-dependent probability of large earthquakes in the Kanto seismic corridor. We estimate a 17% probability of a M ≥ 7.0 shock over the 5 year prospective period 11 March 2013 to 10 March 2018, two-and-a-half times the probability had the Tohoku earthquake not struck
First Detected Arrival of a Quantum Walker on an Infinite Line
NASA Astrophysics Data System (ADS)
Thiel, Felix; Barkai, Eli; Kessler, David A.
2018-01-01
The first detection of a quantum particle on a graph is shown to depend sensitively on the distance ξ between the detector and initial location of the particle, and on the sampling time τ . Here, we use the recently introduced quantum renewal equation to investigate the statistics of first detection on an infinite line, using a tight-binding lattice Hamiltonian with nearest-neighbor hops. Universal features of the first detection probability are uncovered and simple limiting cases are analyzed. These include the large ξ limit, the small τ limit, and the power law decay with the attempt number of the detection probability over which quantum oscillations are superimposed. For large ξ the first detection probability assumes a scaling form and when the sampling time is equal to the inverse of the energy band width nonanalytical behaviors arise, accompanied by a transition in the statistics. The maximum total detection probability is found to occur for τ close to this transition point. When the initial location of the particle is far from the detection node we find that the total detection probability attains a finite value that is distance independent.
Option volatility and the acceleration Lagrangian
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Cao, Yang
2014-01-01
This paper develops a volatility formula for option on an asset from an acceleration Lagrangian model and the formula is calibrated with market data. The Black-Scholes model is a simpler case that has a velocity dependent Lagrangian. The acceleration Lagrangian is defined, and the classical solution of the system in Euclidean time is solved by choosing proper boundary conditions. The conditional probability distribution of final position given the initial position is obtained from the transition amplitude. The volatility is the standard deviation of the conditional probability distribution. Using the conditional probability and the path integral method, the martingale condition is applied, and one of the parameters in the Lagrangian is fixed. The call option price is obtained using the conditional probability and the path integral method.
Encoding dependence in Bayesian causal networks
USDA-ARS?s Scientific Manuscript database
Bayesian networks (BNs) represent complex, uncertain spatio-temporal dynamics by propagation of conditional probabilities between identifiable states with a testable causal interaction model. Typically, they assume random variables are discrete in time and space with a static network structure that ...
NASA Astrophysics Data System (ADS)
Nanjo, K. Z.; Sakai, S.; Kato, A.; Tsuruoka, H.; Hirata, N.
2013-05-01
Seismicity in southern Kanto activated with the 2011 March 11 Tohoku earthquake of magnitude M9.0, but does this cause a significant difference in the probability of more earthquakes at the present or in the To? future answer this question, we examine the effect of a change in the seismicity rate on the probability of earthquakes. Our data set is from the Japan Meteorological Agency earthquake catalogue, downloaded on 2012 May 30. Our approach is based on time-dependent earthquake probabilistic calculations, often used for aftershock hazard assessment, and are based on two statistical laws: the Gutenberg-Richter (GR) frequency-magnitude law and the Omori-Utsu (OU) aftershock-decay law. We first confirm that the seismicity following a quake of M4 or larger is well modelled by the GR law with b ˜ 1. Then, there is good agreement with the OU law with p ˜ 0.5, which indicates that the slow decay was notably significant. Based on these results, we then calculate the most probable estimates of future M6-7-class events for various periods, all with a starting date of 2012 May 30. The estimates are higher than pre-quake levels if we consider a period of 3-yr duration or shorter. However, for statistics-based forecasting such as this, errors that arise from parameter estimation must be considered. Taking into account the contribution of these errors to the probability calculations, we conclude that any increase in the probability of earthquakes is insignificant. Although we try to avoid overstating the change in probability, our observations combined with results from previous studies support the likelihood that afterslip (fault creep) in southern Kanto will slowly relax a stress step caused by the Tohoku earthquake. This afterslip in turn reminds us of the potential for stress redistribution to the surrounding regions. We note the importance of varying hazards not only in time but also in space to improve the probabilistic seismic hazard assessment for southern Kanto.
NASA Astrophysics Data System (ADS)
Griffin, J.; Clark, D.; Allen, T.; Ghasemi, H.; Leonard, M.
2017-12-01
Standard probabilistic seismic hazard assessment (PSHA) simulates earthquake occurrence as a time-independent process. However paleoseismic studies in slowly deforming regions such as Australia show compelling evidence that large earthquakes on individual faults cluster within active periods, followed by long periods of quiescence. Therefore the instrumental earthquake catalog, which forms the basis of PSHA earthquake recurrence calculations, may only capture the state of the system over the period of the catalog. Together this means that data informing our PSHA may not be truly time-independent. This poses challenges in developing PSHAs for typical design probabilities (such as 10% in 50 years probability of exceedance): Is the present state observed through the instrumental catalog useful for estimating the next 50 years of earthquake hazard? Can paleo-earthquake data, that shows variations in earthquake frequency over time-scales of 10,000s of years or more, be robustly included in such PSHA models? Can a single PSHA logic tree be useful over a range of different probabilities of exceedance? In developing an updated PSHA for Australia, decadal-scale data based on instrumental earthquake catalogs (i.e. alternative area based source models and smoothed seismicity models) is integrated with paleo-earthquake data through inclusion of a fault source model. Use of time-dependent non-homogeneous Poisson models allows earthquake clustering to be modeled on fault sources with sufficient paleo-earthquake data. This study assesses the performance of alternative models by extracting decade-long segments of the instrumental catalog, developing earthquake probability models based on the remaining catalog, and testing performance against the extracted component of the catalog. Although this provides insights into model performance over the short-term, for longer timescales it is recognised that model choice is subject to considerable epistemic uncertainty. Therefore a formal expert elicitation process has been used to assign weights to alternative models for the 2018 update to Australia's national PSHA.
Colonius, Hans; Diederich, Adele
2011-07-01
The concept of a "time window of integration" holds that information from different sensory modalities must not be perceived too far apart in time in order to be integrated into a multisensory perceptual event. Empirical estimates of window width differ widely, however, ranging from 40 to 600 ms depending on context and experimental paradigm. Searching for theoretical derivation of window width, Colonius and Diederich (Front Integr Neurosci 2010) developed a decision-theoretic framework using a decision rule that is based on the prior probability of a common source, the likelihood of temporal disparities between the unimodal signals, and the payoff for making right or wrong decisions. Here, this framework is extended to the focused attention task where subjects are asked to respond to signals from a target modality only. Evoking the framework of the time-window-of-integration (TWIN) model, an explicit expression for optimal window width is obtained. The approach is probed on two published focused attention studies. The first is a saccadic reaction time study assessing the efficiency with which multisensory integration varies as a function of aging. Although the window widths for young and older adults differ by nearly 200 ms, presumably due to their different peripheral processing speeds, neither of them deviates significantly from the optimal values. In the second study, head saccadic reactions times to a perfectly aligned audiovisual stimulus pair had been shown to depend on the prior probability of spatial alignment. Intriguingly, they reflected the magnitude of the time-window widths predicted by our decision-theoretic framework, i.e., a larger time window is associated with a higher prior probability.
Six-dimensional quantum dynamics study for the dissociative adsorption of HCl on Au(111) surface
NASA Astrophysics Data System (ADS)
Liu, Tianhui; Fu, Bina; Zhang, Dong H.
2013-11-01
The six-dimensional quantum dynamics calculations for the dissociative chemisorption of HCl on Au(111) are carried out using the time-dependent wave-packet approach, based on an accurate PES which was recently developed by neural network fitting to density functional theory energy points. The influence of vibrational excitation and rotational orientation of HCl on the reactivity is investigated by calculating the exact six-dimensional dissociation probabilities, as well as the four-dimensional fixed-site dissociation probabilities. The vibrational excitation of HCl enhances the reactivity and the helicopter orientation yields higher dissociation probability than the cartwheel orientation. A new interesting site-averaged effect is found for the title molecule-surface system that one can essentially reproduce the six-dimensional dissociation probability by averaging the four-dimensional dissociation probabilities over 25 fixed sites.
H theorem for generalized entropic forms within a master-equation framework
NASA Astrophysics Data System (ADS)
Casas, Gabriela A.; Nobre, Fernando D.; Curado, Evaldo M. F.
2016-03-01
The H theorem is proven for generalized entropic forms, in the case of a discrete set of states. The associated probability distributions evolve in time according to a master equation, for which the corresponding transition rates depend on these entropic forms. An important equation describing the time evolution of the transition rates and probabilities in such a way as to drive the system towards an equilibrium state is found. In the particular case of Boltzmann-Gibbs entropy, it is shown that this equation is satisfied in the microcanonical ensemble only for symmetric probability transition rates, characterizing a single path to the equilibrium state. This equation fulfils the proof of the H theorem for generalized entropic forms, associated with systems characterized by complex dynamics, e.g., presenting nonsymmetric probability transition rates and more than one path towards the same equilibrium state. Some examples considering generalized entropies of the literature are discussed, showing that they should be applicable to a wide range of natural phenomena, mainly those within the realm of complex systems.
Dietz, Dennis C.
2014-01-01
A cogent method is presented for computing the expected cost of an appointment schedule where customers are statistically identical, the service time distribution has known mean and variance, and customer no-shows occur with time-dependent probability. The approach is computationally efficient and can be easily implemented to evaluate candidate schedules within a schedule optimization algorithm. PMID:24605070
Recovery time in quantum dynamics of wave packets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strekalov, M. L., E-mail: strekalov@kinetics.nsc.ru
2017-01-15
A wave packet formed by a linear superposition of bound states with an arbitrary energy spectrum returns arbitrarily close to the initial state after a quite long time. A method in which quantum recovery times are calculated exactly is developed. In particular, an exact analytic expression is derived for the recovery time in the limiting case of a two-level system. In the general case, the reciprocal recovery time is proportional to the Gauss distribution that depends on two parameters (mean value and variance of the return probability). The dependence of the recovery time on the mean excitation level of themore » system is established. The recovery time is the longest for the maximal excitation level.« less
Operational earthquake forecasting can enhance earthquake preparedness
Jordan, T.H.; Marzocchi, W.; Michael, A.J.; Gerstenberger, M.C.
2014-01-01
We cannot yet predict large earthquakes in the short term with much reliability and skill, but the strong clustering exhibited in seismic sequences tells us that earthquake probabilities are not constant in time; they generally rise and fall over periods of days to years in correlation with nearby seismic activity. Operational earthquake forecasting (OEF) is the dissemination of authoritative information about these time‐dependent probabilities to help communities prepare for potentially destructive earthquakes. The goal of OEF is to inform the decisions that people and organizations must continually make to mitigate seismic risk and prepare for potentially destructive earthquakes on time scales from days to decades. To fulfill this role, OEF must provide a complete description of the seismic hazard—ground‐motion exceedance probabilities as well as short‐term rupture probabilities—in concert with the long‐term forecasts of probabilistic seismic‐hazard analysis (PSHA).
Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation
Parsons, T.; Toda, S.; Stein, R.S.; Barka, A.; Dieterich, J.H.
2000-01-01
We calculate the probability of strong shaking in Istanbul, an urban center of 10 million people, from the description of earthquakes on the North Anatolian fault system in the Marmara Sea during the past 500 years and test the resulting catalog against the frequency of damage in Istanbul during the preceding millennium, departing from current practice, we include the time-dependent effect of stress transferred by the 1999 moment magnitude M = 7.4 Izmit earthquake to faults nearer to Istanbul. We find a 62 ± 15% probability (one standard deviation) of strong shaking during the next 30 years and 32 ± 12% during the next decade.
ERIC Educational Resources Information Center
Jung, Juergen; Hall, Diane M. Harnek; Rhoads, Thomas
2013-01-01
The present study examines whether the college enrollment decision of young individuals (student full-time, student part-time, and non-student) depends on health insurance coverage via a parent's family health plan. Our findings indicate that the availability of parental health insurance can have significant effects on the probability that a young…
NASA Astrophysics Data System (ADS)
Fedorov, M. V.; Sysoeva, A. A.; Vintskevich, S. V.; Grigoriev, D. A.
2018-03-01
The well-known Hong-Ou-Mandel effect is revisited. Two physical reasons are discussed for the effect to be less pronounced or even to disappear: differing polarizations of photons coming to the beamsplitter and delay time of photons in one of two channels. For the latter we use the concepts of biphoton frequency and temporal wave functions depending, correspondingly, on two frequency continuous variables of photons and on two time variables t 1 and t 2 interpreted as the arrival times of photons to the beamsplitter. Explicit expressions are found for the probability densities and total probabilities for photon pairs to be split between two channels after the beamsplitter and to be unsplit, when two photons appear together in one of two channels.
Uncertainty analysis in fault tree models with dependent basic events.
Pedroni, Nicola; Zio, Enrico
2013-06-01
In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects on the TE probability of objective and epistemic dependences. The well-known Frèchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present). © 2012 Society for Risk Analysis.
A theoretically consistent stochastic cascade for temporal disaggregation of intermittent rainfall
NASA Astrophysics Data System (ADS)
Lombardo, F.; Volpi, E.; Koutsoyiannis, D.; Serinaldi, F.
2017-06-01
Generating fine-scale time series of intermittent rainfall that are fully consistent with any given coarse-scale totals is a key and open issue in many hydrological problems. We propose a stationary disaggregation method that simulates rainfall time series with given dependence structure, wet/dry probability, and marginal distribution at a target finer (lower-level) time scale, preserving full consistency with variables at a parent coarser (higher-level) time scale. We account for the intermittent character of rainfall at fine time scales by merging a discrete stochastic representation of intermittency and a continuous one of rainfall depths. This approach yields a unique and parsimonious mathematical framework providing general analytical formulations of mean, variance, and autocorrelation function (ACF) for a mixed-type stochastic process in terms of mean, variance, and ACFs of both continuous and discrete components, respectively. To achieve the full consistency between variables at finer and coarser time scales in terms of marginal distribution and coarse-scale totals, the generated lower-level series are adjusted according to a procedure that does not affect the stochastic structure implied by the original model. To assess model performance, we study rainfall process as intermittent with both independent and dependent occurrences, where dependence is quantified by the probability that two consecutive time intervals are dry. In either case, we provide analytical formulations of main statistics of our mixed-type disaggregation model and show their clear accordance with Monte Carlo simulations. An application to rainfall time series from real world is shown as a proof of concept.
Reliable gain-scheduled control of discrete-time systems and its application to CSTR model
NASA Astrophysics Data System (ADS)
Sakthivel, R.; Selvi, S.; Mathiyalagan, K.; Shi, Y.
2016-10-01
This paper is focused on reliable gain-scheduled controller design for a class of discrete-time systems with randomly occurring nonlinearities and actuator fault. Further, the nonlinearity in the system model is assumed to occur randomly according to a Bernoulli distribution with measurable time-varying probability in real time. The main purpose of this paper is to design a gain-scheduled controller by implementing a probability-dependent Lyapunov function and linear matrix inequality (LMI) approach such that the closed-loop discrete-time system is stochastically stable for all admissible randomly occurring nonlinearities. The existence conditions for the reliable controller is formulated in terms of LMI constraints. Finally, the proposed reliable gain-scheduled control scheme is applied on continuously stirred tank reactor model to demonstrate the effectiveness and applicability of the proposed design technique.
Teleportation of Three-Qubit State via Six-qubit Cluster State
NASA Astrophysics Data System (ADS)
Yu, Li-zhi; Sun, Shao-xin
2015-05-01
A scheme of probabilistic teleportation was proposed. In this scheme, we took a six-qubit nonmaximally cluster state as the quantum channel to teleport an unknown three-qubit entangled state. Based on Bob's three times Bell state measurement (BSM) results, the receiver Bob can by introducing an auxiliary particle and the appropriate transformation to reconstruct the initial state with a certain probability. We found that, the successful transmission probability depend on the absolute value of coefficients of two of six particle cluster state minimum.
Benzodiazepine dependence in subjects with alcohol use disorders: what prevalence?
Morel, A; Grall-Bronnec, M; Bulteau, S; Chauvin-Grelier, P; Gailledrat, L; Pinot, M L; Jolliet, P; Victorri-Vigneau, C
2016-10-01
To our knowledge, no studies have been conducted in France on benzodiazepine (BZD) dependence among outpatients with alcohol use disorders (AUD). Some international studies have been conducted on the consumption of BZD in this specific population, but the comparisons among them are difficult. We aimed to assess the current prevalence of probable benzodiazepine and BZD-like hypnotics (Z-drugs) dependence among outpatients seeking treatment for AUD. Participants were patients seeking treatment for AUD for the first time or repeating treatment after more than twelve months. Recruitment took place in seven addiction centres between January and December 2013 in the Nantes region (France). BZD/Z-drug dependence was assessed according to the DSM-IV diagnostic criteria for dependence. This information was gathered through a self-report questionnaire. Among the 1005 patients included in this study, 413 were BZD/Z-drug users (41.1%). Among the 413 patients, 217 were probably dependent on at least one substance, which represents 21.6% of the total population and 52.5% of BZD/Z-drug users. BZD/Z-drug dependence represents a public health concern. Prescribers should take the risks into account and keep treatment courses to a minimum.
Yoder, Paul J.; Bottema-Beutel, Kristen; Woynaroski, Tiffany; Chandrasekhar, Rameela; Sandbank, Michael
2014-01-01
Individuals with autism spectrum disorders (ASDs) have difficulty communicating in ways that are primarily for initiating and maintaining social relatedness (i.e., social communication). We hypothesized that the way researchers measured social communication would affect whether treatment effects were found. Using a best evidence review method, we found that treatments were shown to improve social communication outcomes approximately 54% of the time. The probability that a treatment affected social communication varied greatly depending on whether social communication was directly targeted (63%) or not (39%). Finally, the probability that a treatment affected social communication also varied greatly depending on whether social communication as measured in (a) contexts very similar to treatment sessions (82%) or (b) contexts that differed from treatment on at least setting, materials, and communication partner (33%). This paper also provides several methodological contributions. PMID:25346776
Cardiac sodium channel Markov model with temperature dependence and recovery from inactivation.
Irvine, L A; Jafri, M S; Winslow, R L
1999-01-01
A Markov model of the cardiac sodium channel is presented. The model is similar to the CA1 hippocampal neuron sodium channel model developed by Kuo and Bean (1994. Neuron. 12:819-829) with the following modifications: 1) an additional open state is added; 2) open-inactivated transitions are made voltage-dependent; and 3) channel rate constants are exponential functions of enthalpy, entropy, and voltage and have explicit temperature dependence. Model parameters are determined using a simulated annealing algorithm to minimize the error between model responses and various experimental data sets. The model reproduces a wide range of experimental data including ionic currents, gating currents, tail currents, steady-state inactivation, recovery from inactivation, and open time distributions over a temperature range of 10 degrees C to 25 degrees C. The model also predicts measures of single channel activity such as first latency, probability of a null sweep, and probability of reopening. PMID:10096885
Target annihilation by diffusing particles in inhomogeneous geometries
NASA Astrophysics Data System (ADS)
Cassi, Davide
2009-09-01
The survival probability of immobile targets annihilated by a population of random walkers on inhomogeneous discrete structures, such as disordered solids, glasses, fractals, polymer networks, and gels, is analytically investigated. It is shown that, while it cannot in general be related to the number of distinct visited points as in the case of homogeneous lattices, in the case of bounded coordination numbers its asymptotic behavior at large times can still be expressed in terms of the spectral dimension d˜ and its exact analytical expression is given. The results show that the asymptotic survival probability is site-independent of recurrent structures (d˜≤2) , while on transient structures (d˜>2) it can strongly depend on the target position, and such dependence is explicitly calculated.
Operational foreshock forecasting: Fifteen years after
NASA Astrophysics Data System (ADS)
Ogata, Y.
2010-12-01
We are concerned with operational forecasting of the probability that events are foreshocks of a forthcoming earthquake that is significantly larger (mainshock). Specifically, we define foreshocks as the preshocks substantially smaller than the mainshock by a magnitude gap of 0.5 or larger. The probability gain of foreshock forecast is extremely high compare to long-term forecast by renewal processes or various alarm-based intermediate-term forecasts because of a large event’s low occurrence rate in a short period and a narrow target region. Thus, it is desired to establish operational foreshock probability forecasting as seismologists have done for aftershocks. When a series of earthquakes occurs in a region, we attempt to discriminate foreshocks from a swarm or mainshock-aftershock sequence. Namely, after real time identification of an earthquake cluster using methods such as the single-link algorithm, the probability is calculated by applying statistical features that discriminate foreshocks from other types of clusters, by considering the events' stronger proximity in time and space and tendency towards chronologically increasing magnitudes. These features were modeled for probability forecasting and the coefficients of the model were estimated in Ogata et al. (1996) for the JMA hypocenter data (M≧4, 1926-1993). Currently, fifteen years has passed since the publication of the above-stated work so that we are able to present the performance and validation of the forecasts (1994-2009) by using the same model. Taking isolated events into consideration, the probability of the first events in a potential cluster being a foreshock vary in a range between 0+% and 10+% depending on their locations. This conditional forecasting performs significantly better than the unconditional (average) foreshock probability of 3.7% throughout Japan region. Furthermore, when we have the additional events in a cluster, the forecast probabilities range more widely from nearly 0% to about 40% depending on the discrimination features among the events in the cluster. This conditional forecasting further performs significantly better than the unconditional foreshock probability of 7.3%, which is the average probability of the plural events in the earthquake clusters. Indeed, the frequency ratios of the actual foreshocks are consistent with the forecasted probabilities. Reference: Ogata, Y., Utsu, T. and Katsura, K. (1996). Statistical discrimination of foreshocks from other earthquake clusters, Geophys. J. Int. 127, 17-30.
Maximum entropy principal for transportation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bilich, F.; Da Silva, R.
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utilitymore » concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.« less
NASA Astrophysics Data System (ADS)
Kozulin, E. M.; Knyazheva, G. N.; Novikov, K. V.; Itkis, I. M.; Itkis, M. G.; Dmitriev, S. N.; Oganessian, Yu. Ts.; Bogachev, A. A.; Kozulina, N. I.; Harca, I.; Trzaska, W. H.; Ghosh, T. K.
2016-11-01
Background: Suppression of compound nucleus formation in the reactions with heavy ions by a quasifission process in dependence on the reaction entrance channel. Purpose: Investigation of fission and quasifission processes in the reactions 36S,48Ca,48Ti , and 64Ni+238U at energies around the Coulomb barrier. Methods: Mass-energy distributions of fissionlike fragments formed in the reaction 48Ti+238U at energies of 247, 258, and 271 MeV have been measured using the double-arm time-of-flight spectrometer CORSET at the U400 cyclotron of the Flerov Laboratory of Nuclear Reactions and compared with mass-energy distributions for the reactions 36S,48Ca,64Ni+238U . Results: The most probable fragment masses as well as total kinetic energies and their dispersions in dependence on the interaction energies have been investigated for asymmetric and symmetric fragments for the studied reactions. The fusion probabilities have been deduced from the analysis of mass-energy distributions. Conclusion: The estimated fusion probability for the reactions S, Ca, Ti, and Ni ions with actinide nuclei shows that it depends exponentially on the mean fissility parameter of the system. For the reactions with actinide nuclei leading to the formation of superheavy elements the fusion probabilities are of several orders of magnitude higher than in the case of cold fusion reactions.
Dependency of high coastal water level and river discharge at the global scale
NASA Astrophysics Data System (ADS)
Ward, P.; Couasnon, A.; Haigh, I. D.; Muis, S.; Veldkamp, T.; Winsemius, H.; Wahl, T.
2017-12-01
It is widely recognized that floods cause huge socioeconomic impacts. From 1980-2013, global flood losses exceeded $1 trillion, with 220,000 fatalities. These impacts are particularly hard felt in low-lying densely populated deltas and estuaries, whose location at the coast-land interface makes them naturally prone to flooding. When river and coastal floods coincide, their impacts in these deltas and estuaries are often worse than when they occur in isolation. Such floods are examples of so-called `compound events'. In this contribution, we present the first global scale analysis of the statistical dependency of high coastal water levels (and the storm surge component alone) and river discharge. We show that there is statistical dependency between these components at more than half of the stations examined. We also show time-lags in the highest correlation between peak discharges and coastal water levels. Finally, we assess the probability of the simultaneous occurrence of design discharge and design coastal water levels, assuming both independence and statistical dependence. For those stations where we identified statistical dependency, the probability is between 1 and 5 times greater, when the dependence structure is accounted for. This information is essential for understanding the likelihood of compound flood events occurring at locations around the world as well as for accurate flood risk assessments and effective flood risk management. The research was carried out by analysing the statistical dependency between observed coastal water levels (and the storm surge component) from GESLA-2 and river discharge using gauged data from GRDC stations all around the world. The dependence structure was examined using copula functions.
Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast
Geist, E.L.; Parsons, T.
2009-01-01
Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
Moro, Marilyn; Westover, M. Brandon; Kelly, Jessica; Bianchi, Matt T.
2016-01-01
Study Objectives: Obstructive sleep apnea (OSA) is associated with increased morbidity and mortality, and treatment with positive airway pressure (PAP) is cost-effective. However, the optimal diagnostic strategy remains a subject of debate. Prior modeling studies have not consistently supported the widely held assumption that home sleep testing (HST) is cost-effective. Methods: We modeled four strategies: (1) treat no one; (2) treat everyone empirically; (3) treat those testing positive during in-laboratory polysomnography (PSG) via in-laboratory titration; and (4) treat those testing positive during HST with auto-PAP. The population was assumed to lack independent reasons for in-laboratory PSG (such as insomnia, periodic limb movements in sleep, complex apnea). We considered the third-party payer perspective, via both standard (quality-adjusted) and pure cost methods. Results: The preferred strategy depended on three key factors: pretest probability of OSA, cost of untreated OSA, and time horizon. At low prevalence and low cost of untreated OSA, the treat no one strategy was favored, whereas empiric treatment was favored for high prevalence and high cost of untreated OSA. In-laboratory backup for failures in the at-home strategy increased the preference for the at-home strategy. Without laboratory backup in the at-home arm, the in-laboratory strategy was increasingly preferred at longer time horizons. Conclusion: Using a model framework that captures a broad range of clinical possibilities, the optimal diagnostic approach to uncomplicated OSA depends on pretest probability, cost of untreated OSA, and time horizon. Estimating each of these critical factors remains a challenge warranting further investigation. Citation: Moro M, Westover MB, Kelly J, Bianchi MT. Decision modeling in sleep apnea: the critical roles of pretest probability, cost of untreated obstructive sleep apnea, and time horizon. J Clin Sleep Med 2016;12(3):409–418. PMID:26518699
Dictionary of Missile and Artillery Terms
1982-05-20
arrangement and layout of missile nodes. KOMPRESSORNYY VOZDUShNO-REAKTIVNYY DVICATEL’ LAir -Breathing Compressor Rocket Engine] - variety of air-breathing...law. It will depend on dispersion of trajectories and nonconformity of fuse action due to nonuniform combustion of the time composition. Burst probable
Ali, S. M.; Mehmood, C. A; Khan, B.; Jawad, M.; Farid, U; Jadoon, J. K.; Ali, M.; Tareen, N. K.; Usman, S.; Majid, M.; Anwar, S. M.
2016-01-01
In smart grid paradigm, the consumer demands are random and time-dependent, owning towards stochastic probabilities. The stochastically varying consumer demands have put the policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the consumer deterministic stochastic demand models. The sudden drifts in weather parameters effects the living standards of the consumers that in turn influence the power demands. Considering above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of the electrical utilities. Our work presented the Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent consumer random demands. Moreover, the Gaussian probabilities outcome of the utility revenues is based on the varying consumer n demands data-pattern. Furthermore, Standard Monte Carlo (SMC) simulations are performed that validated the factor of accuracy in the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provided a relationship between dependent (demand) and independent variables (weather data) for utility load management, generation control, and network expansion. PMID:27314229
Ali, S M; Mehmood, C A; Khan, B; Jawad, M; Farid, U; Jadoon, J K; Ali, M; Tareen, N K; Usman, S; Majid, M; Anwar, S M
2016-01-01
In smart grid paradigm, the consumer demands are random and time-dependent, owning towards stochastic probabilities. The stochastically varying consumer demands have put the policy makers and supplying agencies in a demanding position for optimal generation management. The utility revenue functions are highly dependent on the consumer deterministic stochastic demand models. The sudden drifts in weather parameters effects the living standards of the consumers that in turn influence the power demands. Considering above, we analyzed stochastically and statistically the effect of random consumer demands on the fixed and variable revenues of the electrical utilities. Our work presented the Multi-Variate Gaussian Distribution Function (MVGDF) probabilistic model of the utility revenues with time-dependent consumer random demands. Moreover, the Gaussian probabilities outcome of the utility revenues is based on the varying consumer n demands data-pattern. Furthermore, Standard Monte Carlo (SMC) simulations are performed that validated the factor of accuracy in the aforesaid probabilistic demand-revenue model. We critically analyzed the effect of weather data parameters on consumer demands using correlation and multi-linear regression schemes. The statistical analysis of consumer demands provided a relationship between dependent (demand) and independent variables (weather data) for utility load management, generation control, and network expansion.
Six-dimensional quantum dynamics study for the dissociative adsorption of HCl on Au(111) surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Tianhui; Fu, Bina; Zhang, Dong H., E-mail: zhangdh@dicp.ac.cn
The six-dimensional quantum dynamics calculations for the dissociative chemisorption of HCl on Au(111) are carried out using the time-dependent wave-packet approach, based on an accurate PES which was recently developed by neural network fitting to density functional theory energy points. The influence of vibrational excitation and rotational orientation of HCl on the reactivity is investigated by calculating the exact six-dimensional dissociation probabilities, as well as the four-dimensional fixed-site dissociation probabilities. The vibrational excitation of HCl enhances the reactivity and the helicopter orientation yields higher dissociation probability than the cartwheel orientation. A new interesting site-averaged effect is found for the titlemore » molecule-surface system that one can essentially reproduce the six-dimensional dissociation probability by averaging the four-dimensional dissociation probabilities over 25 fixed sites.« less
Two-Dimensional Modelling of the Hall Thruster Discharge: Final Report
2007-09-10
performing a number Nprob,jk of probability tests to determine the real number of macroions to be created, Njk, in a particular cell and time step. The...hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and...temperature-dependent yield expression is proposed, which avoids integrals expressions at the same time that it recovers approximately the reduction of that
Probability Forecasting Using Monte Carlo Simulation
NASA Astrophysics Data System (ADS)
Duncan, M.; Frisbee, J.; Wysack, J.
2014-09-01
Space Situational Awareness (SSA) is defined as the knowledge and characterization of all aspects of space. SSA is now a fundamental and critical component of space operations. Increased dependence on our space assets has in turn lead to a greater need for accurate, near real-time knowledge of all space activities. With the growth of the orbital debris population, satellite operators are performing collision avoidance maneuvers more frequently. Frequent maneuver execution expends fuel and reduces the operational lifetime of the spacecraft. Thus the need for new, more sophisticated collision threat characterization methods must be implemented. The collision probability metric is used operationally to quantify the collision risk. The collision probability is typically calculated days into the future, so that high risk and potential high risk conjunction events are identified early enough to develop an appropriate course of action. As the time horizon to the conjunction event is reduced, the collision probability changes. A significant change in the collision probability will change the satellite mission stakeholder's course of action. So constructing a method for estimating how the collision probability will evolve improves operations by providing satellite operators with a new piece of information, namely an estimate or 'forecast' of how the risk will change as time to the event is reduced. Collision probability forecasting is a predictive process where the future risk of a conjunction event is estimated. The method utilizes a Monte Carlo simulation that produces a likelihood distribution for a given collision threshold. Using known state and state uncertainty information, the simulation generates a set possible trajectories for a given space object pair. Each new trajectory produces a unique event geometry at the time of close approach. Given state uncertainty information for both objects, a collision probability value can be computed for every trail. This yields a collision probability distribution given known, predicted uncertainty. This paper presents the details of the collision probability forecasting method. We examine various conjunction event scenarios and numerically demonstrate the utility of this approach in typical event scenarios. We explore the utility of a probability-based track scenario simulation that models expected tracking data frequency as the tasking levels are increased. The resulting orbital uncertainty is subsequently used in the forecasting algorithm.
Statistical time-dependent model for the interstellar gas
NASA Technical Reports Server (NTRS)
Gerola, H.; Kafatos, M.; Mccray, R.
1974-01-01
We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.
Calvó-Perxas, Laia; López-Pousa, Secundino; Vilalta-Franch, Joan; Turró-Garriga, Oriol; Blankenburg, Michael; Febrer, Laia; Flaqué, Margarida; Vallmajó, Natàlia; Aguirregomozcorta, Maria; Genís, David; Casas, Isabel; Perkal, Héctor; Coromina, Joan; Garre-Olmo, Josep
2012-01-01
To describe central nervous system (CNS) drug consumption patterns depending on the time to diagnosis of Alzheimer's disease (AD), and to check whether the cases diagnosed later are associated with greater severity and consuming more CNS drugs. Cross-sectional study using 952 cases of the Registry of Dementias of Girona. A binary logistic regression was used to detect variables associated with the use of CNS drugs depending on the time to diagnosis. CNS drugs were consumed by 95.8% of the AD patients. Only antipsychotics presented a statistically significant increase in the frequency of prescription to patients with longer time elapsed from symptom onset to AD diagnosis. Longer time elapsed from the onset of symptoms to the diagnosis resulted in increased probability of antipsychotic consumption. Copyright © 2012 S. Karger AG, Basel.
Computer models of social processes: the case of migration.
Beshers, J M
1967-06-01
The demographic model is a program for representing births, deaths, migration, and social mobility as social processes in a non-stationary stochastic process (Markovian). Transition probabilities for each age group are stored and then retrieved at the next appearance of that age cohort. In this way new transition probabilities can be calculated as a function of the old transition probabilities and of two successive distribution vectors.Transition probabilities can be calculated to represent effects of the whole age-by-state distribution at any given time period, too. Such effects as saturation or queuing may be represented by a market mechanism; for example, migration between metropolitan areas can be represented as depending upon job supplies and labor markets. Within metropolitan areas, migration can be represented as invasion and succession processes with tipping points (acceleration curves), and the market device has been extended to represent this phenomenon.Thus, the demographic model makes possible the representation of alternative classes of models of demographic processes. With each class of model one can deduce implied time series (varying parame-terswithin the class) and the output of the several classes can be compared to each other and to outside criteria, such as empirical time series.
Markovian Anderson Model: Bounds for the Rate of Propagation
NASA Astrophysics Data System (ADS)
Tcheremchantsev, Serguei
We consider the Anderson model in with potentials whose values at any site of the lattice are Markovian independent random functions of time. For solutions to the time-dependent Schrödinger equation we show under some conditions that with probability 1
A spatio-temporal model for probabilistic seismic hazard zonation of Tehran
NASA Astrophysics Data System (ADS)
Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza
2013-08-01
A precondition for all disaster management steps, building damage prediction, and construction code developments is a hazard assessment that shows the exceedance probabilities of different ground motion levels at a site considering different near- and far-field earthquake sources. The seismic sources are usually categorized as time-independent area sources and time-dependent fault sources. While the earlier incorporates the small and medium events, the later takes into account only the large characteristic earthquakes. In this article, a probabilistic approach is proposed to aggregate the effects of time-dependent and time-independent sources on seismic hazard. The methodology is then applied to generate three probabilistic seismic hazard maps of Tehran for 10%, 5%, and 2% exceedance probabilities in 50 years. The results indicate an increase in peak ground acceleration (PGA) values toward the southeastern part of the study area and the PGA variations are mostly controlled by the shear wave velocities across the city. In addition, the implementation of the methodology takes advantage of GIS capabilities especially raster-based analyses and representations. During the estimation of the PGA exceedance rates, the emphasis has been placed on incorporating the effects of different attenuation relationships and seismic source models by using a logic tree.
The effect of zealots on the rate of consensus achievement in complex networks
NASA Astrophysics Data System (ADS)
Kashisaz, Hadi; Hosseini, S. Samira; Darooneh, Amir H.
2014-05-01
In this study, we investigate the role of zealots on the result of voting process on both scale-free and Watts-Strogatz networks. We observe that inflexible individuals are very effective in consensus achievement and also in the rate of ordering process in complex networks. Zealots make the magnetization of the system to vary exponentially with time. We obtain that on SF networks, increasing the zealots' population, Z, exponentially increases the rate of consensus achievement. The time needed for the system to reach a desired magnetization, shows a power-law dependence on Z. As well, we obtain that the decay time of the order parameter shows a power-law dependence on Z. We also investigate the role of zealots' degree on the rate of ordering process and finally, we analyze the effect of network's randomness on the efficiency of zealots. Moving from a regular to a random network, the re-wiring probability P increases. We show that with increasing P, the efficiency of zealots for reducing the consensus achievement time increases. The rate of consensus is compared with the rate of ordering for different re-wiring probabilities of WS networks.
NASA Astrophysics Data System (ADS)
Christos, Kourouklas; Eleftheria, Papadimitriou; George, Tsaklidis; Vassilios, Karakostas
2018-06-01
The determination of strong earthquakes' recurrence time above a predefined magnitude, associated with specific fault segments, is an important component of seismic hazard assessment. The occurrence of these earthquakes is neither periodic nor completely random but often clustered in time. This fact in connection with their limited number, due to shortage of the available catalogs, inhibits a deterministic approach for recurrence time calculation, and for this reason, application of stochastic processes is required. In this study, recurrence time determination in the area of North Aegean Trough (NAT) is developed by the application of time-dependent stochastic models, introducing an elastic rebound motivated concept for individual fault segments located in the study area. For this purpose, all the available information on strong earthquakes (historical and instrumental) with M w ≥ 6.5 is compiled and examined for magnitude completeness. Two possible starting dates of the catalog are assumed with the same magnitude threshold, M w ≥ 6.5 and divided into five data sets, according to a new segmentation model for the study area. Three Brownian Passage Time (BPT) models with different levels of aperiodicity are applied and evaluated with the Anderson-Darling test for each segment in both catalog data where possible. The preferable models are then used in order to estimate the occurrence probabilities of M w ≥ 6.5 shocks on each segment of NAT for the next 10, 20, and 30 years since 01/01/2016. Uncertainties in probability calculations are also estimated using a Monte Carlo procedure. It must be mentioned that the provided results should be treated carefully because of their dependence to the initial assumptions. Such assumptions exhibit large variability and alternative means of these may return different final results.
A new approach to estimate time-to-cure from cancer registries data.
Boussari, Olayidé; Romain, Gaëlle; Remontet, Laurent; Bossard, Nadine; Mounier, Morgane; Bouvier, Anne-Marie; Binquet, Christine; Colonna, Marc; Jooste, Valérie
2018-04-01
Cure models have been adapted to net survival context to provide important indicators from population-based cancer data, such as the cure fraction and the time-to-cure. However existing methods for computing time-to-cure suffer from some limitations. Cure models in net survival framework were briefly overviewed and a new definition of time-to-cure was introduced as the time TTC at which P(t), the estimated covariate-specific probability of being cured at a given time t after diagnosis, reaches 0.95. We applied flexible parametric cure models to data of four cancer sites provided by the French network of cancer registries (FRANCIM). Then estimates of the time-to-cure by TTC and by two existing methods were derived and compared. Cure fractions and probabilities P(t) were also computed. Depending on the age group, TTC ranged from to 8 to 10 years for colorectal and pancreatic cancer and was nearly 12 years for breast cancer. In thyroid cancer patients under 55 years at diagnosis, TTC was strikingly 0: the probability of being cured was >0.95 just after diagnosis. This is an interesting result regarding the health insurance premiums of these patients. The estimated values of time-to-cure from the three approaches were close for colorectal cancer only. We propose a new approach, based on estimated covariate-specific probability of being cured, to estimate time-to-cure. Compared to two existing methods, the new approach seems to be more intuitive and natural and less sensitive to the survival time distribution. Copyright © 2018 Elsevier Ltd. All rights reserved.
Theoretical information measurement in nonrelativistic time-dependent approach
NASA Astrophysics Data System (ADS)
Najafizade, S. A.; Hassanabadi, H.; Zarrinkamar, S.
2018-02-01
The information-theoretic measures of time-dependent Schrödinger equation are investigated via the Shannon information entropy, variance and local Fisher quantities. In our calculations, we consider the two first states n = 0,1 and obtain the position Sx (t) and momentum Sp (t) Shannon entropies as well as Fisher information Ix (t) in position and momentum Ip (t) spaces. Using the Fourier transformed wave function, we obtain the results in momentum space. Some interesting features of the information entropy densities ρs (x,t) and γs (p,t), as well as the probability densities ρ (x,t) and γ (p,t) for time-dependent states are demonstrated. We establish a general relation between variance and Fisher's information. The Bialynicki-Birula-Mycielski inequality is tested and verified for the states n = 0,1.
Decision making in noisy bistable systems with time-dependent asymmetry
NASA Astrophysics Data System (ADS)
Nené, Nuno R.; Zaikin, Alexey
2013-01-01
Our work draws special attention to the importance of the effects of time-dependent parameters on decision making in bistable systems. Here, we extend previous studies of the mechanism known as speed-dependent cellular decision making in genetic circuits by performing an analytical treatment of the canonical supercritical pitchfork bifurcation problem with an additional time-dependent asymmetry and control parameter. This model has an analogous behavior to the genetic switch. In the presence of transient asymmetries and fluctuations, slow passage through the critical region in both systems increases substantially the probability of specific decision outcomes. We also study the relevance for attractor selection of reaching maximum values for the external asymmetry before and after the critical region. Overall, maximum asymmetries should be reached at an instant where the position of the critical point allows for compensation of the detrimental effects of noise in retaining memory of the transient asymmetries.
The Theory of Quantized Fields. III
DOE R&D Accomplishments Database
Schwinger, J.
1953-05-01
In this paper we discuss the electromagnetic field, as perturbed by a prescribed current. All quantities of physical interest in various situations, eigenvalues, eigenfunctions, and transformation probabilities, are derived from a general transformation function which is expressed in a non-Hermitian representation. The problems treated are: the determination of the energy-momentum eigenvalues and eigenfunctions for the isolated electromagnetic field, and the energy eigenvalues and eigenfunctions for the field perturbed by a time-independent current that departs from zero only within a finite time interval, and for a time-dependent current that assumes non-vanishing time-independent values initially and finally. The results are applied in a discussion of the intra-red catastrophe and of the adiabatic theorem. It is shown how the latter can be exploited to give a uniform formulation for all problems requiring the evaluation of transition probabilities or eigenvalue displacements.
Fingerprints of exceptional points in the survival probability of resonances in atomic spectra
NASA Astrophysics Data System (ADS)
Cartarius, Holger; Moiseyev, Nimrod
2011-07-01
The unique time signature of the survival probability exactly at the exceptional point parameters is studied here for the hydrogen atom in strong static magnetic and electric fields. We show that indeed the survival probability S(t)=|<ψ(0)|ψ(t)>|2 decays exactly as |1-at|2e-ΓEPt/ℏ, where ΓEP is associated with the decay rate at the exceptional point and a is a complex constant depending solely on the initial wave packet that populates exclusively the two almost degenerate states of the non-Hermitian Hamiltonian. This may open the possibility for a first experimental detection of exceptional points in a quantum system.
Transitionless driving on adiabatic search algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oh, Sangchul, E-mail: soh@qf.org.qa; Kais, Sabre, E-mail: kais@purdue.edu; Department of Chemistry, Department of Physics and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907
We study quantum dynamics of the adiabatic search algorithm with the equivalent two-level system. Its adiabatic and non-adiabatic evolution is studied and visualized as trajectories of Bloch vectors on a Bloch sphere. We find the change in the non-adiabatic transition probability from exponential decay for the short running time to inverse-square decay in asymptotic running time. The scaling of the critical running time is expressed in terms of the Lambert W function. We derive the transitionless driving Hamiltonian for the adiabatic search algorithm, which makes a quantum state follow the adiabatic path. We demonstrate that a uniform transitionless driving Hamiltonian,more » approximate to the exact time-dependent driving Hamiltonian, can alter the non-adiabatic transition probability from the inverse square decay to the inverse fourth power decay with the running time. This may open up a new but simple way of speeding up adiabatic quantum dynamics.« less
Delay-induced stochastic bifurcations in a bistable system under white noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Zhongkui, E-mail: sunzk@nwpu.edu.cn; Fu, Jin; Xu, Wei
2015-08-15
In this paper, the effects of noise and time delay on stochastic bifurcations are investigated theoretically and numerically in a time-delayed Duffing-Van der Pol oscillator subjected to white noise. Due to the time delay, the random response is not Markovian. Thereby, approximate methods have been adopted to obtain the Fokker-Planck-Kolmogorov equation and the stationary probability density function for amplitude of the response. Based on the knowledge that stochastic bifurcation is characterized by the qualitative properties of the steady-state probability distribution, it is found that time delay and feedback intensity as well as noise intensity will induce the appearance of stochasticmore » P-bifurcation. Besides, results demonstrated that the effects of the strength of the delayed displacement feedback on stochastic bifurcation are accompanied by the sensitive dependence on time delay. Furthermore, the results from numerical simulations best confirm the effectiveness of the theoretical analyses.« less
Controlling dynamical entanglement in a Josephson tunneling junction
NASA Astrophysics Data System (ADS)
Ziegler, K.
2017-12-01
We analyze the evolution of an entangled many-body state in a Josephson tunneling junction and its dependence on the number of bosons and interaction strength. A N00N state, which is a superposition of two complementary Fock states, appears in the evolution with sufficient probability only for a moderate many-body interaction on an intermediate time scale. This time scale is inversely proportional to the tunneling rate. Many-body interaction strongly supports entanglement: The probability for creating an entangled state decays exponentially with the number of particles without many-body interaction, whereas it decays only like the inverse square root of the number of particles in the presence of many-body interaction.
Helble, Tyler A; D'Spain, Gerald L; Campbell, Greg S; Hildebrand, John A
2013-11-01
This paper demonstrates the importance of accounting for environmental effects on passive underwater acoustic monitoring results. The situation considered is the reduction in shipping off the California coast between 2008-2010 due to the recession and environmental legislation. The resulting variations in ocean noise change the probability of detecting marine mammal vocalizations. An acoustic model was used to calculate the time-varying probability of detecting humpback whale vocalizations under best-guess environmental conditions and varying noise. The uncorrected call counts suggest a diel pattern and an increase in calling over a two-year period; the corrected call counts show minimal evidence of these features.
Gkigkitzis, Ioannis
2013-01-01
The aim of this report is to provide a mathematical model of the mechanism for making binary fate decisions about cell death or survival, during and after Photodynamic Therapy (PDT) treatment, and to supply the logical design for this decision mechanism as an application of rate distortion theory to the biochemical processing of information by the physical system of a cell. Based on system biology models of the molecular interactions involved in the PDT processes previously established, and regarding a cellular decision-making system as a noisy communication channel, we use rate distortion theory to design a time dependent Blahut-Arimoto algorithm where the input is a stimulus vector composed of the time dependent concentrations of three PDT related cell death signaling molecules and the output is a cell fate decision. The molecular concentrations are determined by a group of rate equations. The basic steps are: initialize the probability of the cell fate decision, compute the conditional probability distribution that minimizes the mutual information between input and output, compute the cell probability of cell fate decision that minimizes the mutual information and repeat the last two steps until the probabilities converge. Advance to the next discrete time point and repeat the process. Based on the model from communication theory described in this work, and assuming that the activation of the death signal processing occurs when any of the molecular stimulants increases higher than a predefined threshold (50% of the maximum concentrations), for 1800s of treatment, the cell undergoes necrosis within the first 30 minutes with probability range 90.0%-99.99% and in the case of repair/survival, it goes through apoptosis within 3-4 hours with probability range 90.00%-99.00%. Although, there is no experimental validation of the model at this moment, it reproduces some patterns of survival ratios of predicted experimental data. Analytical modeling based on cell death signaling molecules has been shown to be an independent and useful tool for prediction of cell surviving response to PDT. The model can be adjusted to provide important insights for cellular response to other treatments such as hyperthermia, and diseases such as neurodegeneration.
A time-dependent probabilistic seismic-hazard model for California
Cramer, C.H.; Petersen, M.D.; Cao, T.; Toppozada, Tousson R.; Reichle, M.
2000-01-01
For the purpose of sensitivity testing and illuminating nonconsensus components of time-dependent models, the California Department of Conservation, Division of Mines and Geology (CDMG) has assembled a time-dependent version of its statewide probabilistic seismic hazard (PSH) model for California. The model incorporates available consensus information from within the earth-science community, except for a few faults or fault segments where consensus information is not available. For these latter faults, published information has been incorporated into the model. As in the 1996 CDMG/U.S. Geological Survey (USGS) model, the time-dependent models incorporate three multisegment ruptures: a 1906, an 1857, and a southern San Andreas earthquake. Sensitivity tests are presented to show the effect on hazard and expected damage estimates of (1) intrinsic (aleatory) sigma, (2) multisegment (cascade) vs. independent segment (no cascade) ruptures, and (3) time-dependence vs. time-independence. Results indicate that (1) differences in hazard and expected damage estimates between time-dependent and independent models increase with decreasing intrinsic sigma, (2) differences in hazard and expected damage estimates between full cascading and not cascading are insensitive to intrinsic sigma, (3) differences in hazard increase with increasing return period (decreasing probability of occurrence), and (4) differences in moment-rate budgets increase with decreasing intrinsic sigma and with the degree of cascading, but are within the expected uncertainty in PSH time-dependent modeling and do not always significantly affect hazard and expected damage estimates.
NASA Technical Reports Server (NTRS)
Sullivan, Roy M.
2015-01-01
The stress rupture strength of silicon carbide fiber-reinforced silicon carbide (SiCSiC) composites with a boron nitride (BN) fiber coating decreases with time within the intermediate temperature range of 700-950 C. Various theories have been proposed to explain the cause of the time dependent stress rupture strength. Some previous authors have suggested that the observed composite strength behavior is due to the inherent time dependent strength of the fibers, which is caused by the slow growth of flaws within the fibers. Flaw growth is supposedly enabled by oxidation of free carbon at the grain boundaries. The objective of this paper is to investigate the relative significance of the various theories for the time-dependent strength of SiCSiC composites. This is achieved through the development of a numerically-based progressive failure analysis routine and through the application of the routine to simulate the composite stress rupture tests. The progressive failure routine is a time marching routine with an iterative loop between a probability of fiber survival equation and a force equilibrium equation within each time step. Failure of the composite is assumed to initiate near a matrix crack and the progression of fiber failures occurs by global load sharing. The probability of survival equation is derived from consideration of the strength of ceramic fibers with randomly occurring and slow growing flaws as well as the mechanical interaction between the fibers and matrix near a matrix crack. The force equilibrium equation follows from the global load sharing presumption. The results of progressive failure analyses of the composite tests suggest that the relationship between time and stress-rupture strength is attributed almost entirely to the slow flaw growth within the fibers. Although other mechanisms may be present, they appear to have only a minor influence on the observed time dependent behavior.
A stochastic model for the probability of malaria extinction by mass drug administration.
Pemberton-Ross, Peter; Chitnis, Nakul; Pothin, Emilie; Smith, Thomas A
2017-09-18
Mass drug administration (MDA) has been proposed as an intervention to achieve local extinction of malaria. Although its effect on the reproduction number is short lived, extinction may subsequently occur in a small population due to stochastic fluctuations. This paper examines how the probability of stochastic extinction depends on population size, MDA coverage and the reproduction number under control, R c . A simple compartmental model is developed which is used to compute the probability of extinction using probability generating functions. The expected time to extinction in small populations after MDA for various scenarios in this model is calculated analytically. The results indicate that mass drug administration (Firstly, R c must be sustained at R c < 1.2 to avoid the rapid re-establishment of infections in the population. Secondly, the MDA must produce effective cure rates of >95% to have a non-negligible probability of successful elimination. Stochastic fluctuations only significantly affect the probability of extinction in populations of about 1000 individuals or less. The expected time to extinction via stochastic fluctuation is less than 10 years only in populations less than about 150 individuals. Clustering of secondary infections and of MDA distribution both contribute positively to the potential probability of success, indicating that MDA would most effectively be administered at the household level. There are very limited circumstances in which MDA will lead to local malaria elimination with a substantial probability.
Open-loop frequency acquisition for suppressed-carrier biphase signals using one-pole arm filters
NASA Technical Reports Server (NTRS)
Shah, B.; Holmes, J. K.
1991-01-01
Open loop frequency acquisition performance is discussed for suppressed carrier binary phase shift keyed signals in terms of the probability of detecting the carrier frequency offset when the arms of the Costas loop detector have one pole filters. The approach, which does not require symbol timing, uses fast Fourier transforms (FFTs) to detect the carrier frequency offset. The detection probability, which depends on both the 3 dB arm filter bandwidth and the received symbol signal to noise ratio, is derived and is shown to be independent of symbol timing. It is shown that the performance of this technique is slightly better that other open loop acquisition techniques which use integrators in the arms and whose detection performance varies with symbol timing.
Double ionization of neon in elliptically polarized femtosecond laser fields
NASA Astrophysics Data System (ADS)
Kang, HuiPeng; Henrichs, Kevin; Wang, YanLan; Hao, XiaoLei; Eckart, Sebastian; Kunitski, Maksim; Schöffler, Markus; Jahnke, Till; Liu, XiaoJun; Dörner, Reinhard
2018-06-01
We present a joint experimental and theoretical investigation of the correlated electron momentum spectra from strong-field double ionization of neon induced by elliptically polarized laser pulses. A significant asymmetry of the electron momentum distributions along the major polarization axis is reported. This asymmetry depends sensitively on the laser ellipticity. Using a three-dimensional semiclassical model, we attribute this asymmetry pattern to the ellipticity-dependent probability distributions of recollision time. Our work demonstrates that, by simply varying the ellipticity, the correlated electron emission can be two-dimensionally controlled and the recolliding electron trajectories can be steered on a subcycle time scale.
Sheng, Li; Wang, Zidong; Tian, Engang; Alsaadi, Fuad E
2016-12-01
This paper deals with the H ∞ state estimation problem for a class of discrete-time neural networks with stochastic delays subject to state- and disturbance-dependent noises (also called (x,v)-dependent noises) and fading channels. The time-varying stochastic delay takes values on certain intervals with known probability distributions. The system measurement is transmitted through fading channels described by the Rice fading model. The aim of the addressed problem is to design a state estimator such that the estimation performance is guaranteed in the mean-square sense against admissible stochastic time-delays, stochastic noises as well as stochastic fading signals. By employing the stochastic analysis approach combined with the Kronecker product, several delay-distribution-dependent conditions are derived to ensure that the error dynamics of the neuron states is stochastically stable with prescribed H ∞ performance. Finally, a numerical example is provided to illustrate the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Maximum caliber inference of nonequilibrium processes
NASA Astrophysics Data System (ADS)
Otten, Moritz; Stock, Gerhard
2010-07-01
Thirty years ago, Jaynes suggested a general theoretical approach to nonequilibrium statistical mechanics, called maximum caliber (MaxCal) [Annu. Rev. Phys. Chem. 31, 579 (1980)]. MaxCal is a variational principle for dynamics in the same spirit that maximum entropy is a variational principle for equilibrium statistical mechanics. Motivated by the success of maximum entropy inference methods for equilibrium problems, in this work the MaxCal formulation is applied to the inference of nonequilibrium processes. That is, given some time-dependent observables of a dynamical process, one constructs a model that reproduces these input data and moreover, predicts the underlying dynamics of the system. For example, the observables could be some time-resolved measurements of the folding of a protein, which are described by a few-state model of the free energy landscape of the system. MaxCal then calculates the probabilities of an ensemble of trajectories such that on average the data are reproduced. From this probability distribution, any dynamical quantity of the system can be calculated, including population probabilities, fluxes, or waiting time distributions. After briefly reviewing the formalism, the practical numerical implementation of MaxCal in the case of an inference problem is discussed. Adopting various few-state models of increasing complexity, it is demonstrated that the MaxCal principle indeed works as a practical method of inference: The scheme is fairly robust and yields correct results as long as the input data are sufficient. As the method is unbiased and general, it can deal with any kind of time dependency such as oscillatory transients and multitime decays.
Minimum Action Path Theory Reveals the Details of Stochastic Transitions Out of Oscillatory States
NASA Astrophysics Data System (ADS)
de la Cruz, Roberto; Perez-Carrasco, Ruben; Guerrero, Pilar; Alarcon, Tomas; Page, Karen M.
2018-03-01
Cell state determination is the outcome of intrinsically stochastic biochemical reactions. Transitions between such states are studied as noise-driven escape problems in the chemical species space. Escape can occur via multiple possible multidimensional paths, with probabilities depending nonlocally on the noise. Here we characterize the escape from an oscillatory biochemical state by minimizing the Freidlin-Wentzell action, deriving from it the stochastic spiral exit path from the limit cycle. We also use the minimized action to infer the escape time probability density function.
Structure and morphology of submarine slab slides: clues to origin and behavior
O'Leary, Dennis W.
1991-01-01
Geologic features suggest that some slab slides probably result from long-term strength degradation of weak layers deep in the homoclinal section. Time-dependent strain in clay-rich layers can create potential slide surfaces of low frictional strength. Competent layers are weak in tension and probably fragment in the first instance of, or even prior to, translation, and the allochthonous mass is readily transformed into a high-momentum debris flow. The structure and geomorphology of slab slides provide important clues to their origin and behavior. -from Author
Minimum Action Path Theory Reveals the Details of Stochastic Transitions Out of Oscillatory States.
de la Cruz, Roberto; Perez-Carrasco, Ruben; Guerrero, Pilar; Alarcon, Tomas; Page, Karen M
2018-03-23
Cell state determination is the outcome of intrinsically stochastic biochemical reactions. Transitions between such states are studied as noise-driven escape problems in the chemical species space. Escape can occur via multiple possible multidimensional paths, with probabilities depending nonlocally on the noise. Here we characterize the escape from an oscillatory biochemical state by minimizing the Freidlin-Wentzell action, deriving from it the stochastic spiral exit path from the limit cycle. We also use the minimized action to infer the escape time probability density function.
Integrable time-dependent Hamiltonians, solvable Landau-Zener models and Gaudin magnets
NASA Astrophysics Data System (ADS)
Yuzbashyan, Emil A.
2018-05-01
We solve the non-stationary Schrödinger equation for several time-dependent Hamiltonians, such as the BCS Hamiltonian with an interaction strength inversely proportional to time, periodically driven BCS and linearly driven inhomogeneous Dicke models as well as various multi-level Landau-Zener tunneling models. The latter are Demkov-Osherov, bow-tie, and generalized bow-tie models. We show that these Landau-Zener problems and their certain interacting many-body generalizations map to Gaudin magnets in a magnetic field. Moreover, we demonstrate that the time-dependent Schrödinger equation for the above models has a similar structure and is integrable with a similar technique as Knizhnik-Zamolodchikov equations. We also discuss applications of our results to the problem of molecular production in an atomic Fermi gas swept through a Feshbach resonance and to the evaluation of the Landau-Zener transition probabilities.
Tosun, Tuğçe; Gür, Ezgi; Balcı, Fuat
2016-01-01
Animals can shape their timed behaviors based on experienced probabilistic relations in a nearly optimal fashion. On the other hand, it is not clear if they adopt these timed decisions by making computations based on previously learnt task parameters (time intervals, locations, and probabilities) or if they gradually develop their decisions based on trial and error. To address this question, we tested mice in the timed-switching task, which required them to anticipate when (after a short or long delay) and at which of the two delay locations a reward would be presented. The probability of short trials differed between test groups in two experiments. Critically, we first trained mice on relevant task parameters by signaling the active trial with a discriminative stimulus and delivered the corresponding reward after the associated delay without any response requirement (without inducing switching behavior). During the test phase, both options were presented simultaneously to characterize the emergence and temporal characteristics of the switching behavior. Mice exhibited timed-switching behavior starting from the first few test trials, and their performance remained stable throughout testing in the majority of the conditions. Furthermore, as the probability of the short trial increased, mice waited longer before switching from the short to long location (experiment 1). These behavioral adjustments were in directions predicted by reward maximization. These results suggest that rather than gradually adjusting their time-dependent choice behavior, mice abruptly adopted temporal decision strategies by directly integrating their previous knowledge of task parameters into their timed behavior, supporting the model-based representational account of temporal risk assessment. PMID:26733674
Time-dependent earthquake probabilities
Gomberg, J.; Belardinelli, M.E.; Cocco, M.; Reasenberg, P.
2005-01-01
We have attempted to provide a careful examination of a class of approaches for estimating the conditional probability of failure of a single large earthquake, particularly approaches that account for static stress perturbations to tectonic loading as in the approaches of Stein et al. (1997) and Hardebeck (2004). We have loading as in the framework based on a simple, generalized rate change formulation and applied it to these two approaches to show how they relate to one another. We also have attempted to show the connection between models of seismicity rate changes applied to (1) populations of independent faults as in background and aftershock seismicity and (2) changes in estimates of the conditional probability of failures of different members of a the notion of failure rate corresponds to successive failures of different members of a population of faults. The latter application requires specification of some probability distribution (density function of PDF) that describes some population of potential recurrence times. This PDF may reflect our imperfect knowledge of when past earthquakes have occurred on a fault (epistemic uncertainty), the true natural variability in failure times, or some combination of both. We suggest two end-member conceptual single-fault models that may explain natural variability in recurrence times and suggest how they might be distinguished observationally. When viewed deterministically, these single-fault patch models differ significantly in their physical attributes, and when faults are immature, they differ in their responses to stress perturbations. Estimates of conditional failure probabilities effectively integrate over a range of possible deterministic fault models, usually with ranges that correspond to mature faults. Thus conditional failure probability estimates usually should not differ significantly for these models. Copyright 2005 by the American Geophysical Union.
An Army-Centric System of Systems Analysis (SoSA) Definition
2011-02-01
1994, 19, 49–74. 34. Suzuki, K.; Ikegami , T . Homeodynamics in the Game of Life. In Artificial Life XI: Proceedings of the Eleventh International...insect displacement as a function of sampling time. (b) The same dataset displaying displacement at time t versus displacement at time t + t ...probability distribution of x(ti), x( t i + 1), x( t i + 2), …, x( t i + m - 1) is dependent upon the value of ti (21). Similarly, a discrete time series
Neural Signatures of Intransitive Preferences
Kalenscher, Tobias; Tobler, Philippe N.; Huijbers, Willem; Daselaar, Sander M.; Pennartz, Cyriel M.A.
2010-01-01
It is often assumed that decisions are made by rank-ordering and thus comparing the available choice options based on their subjective values. Rank-ordering requires that the alternatives’ subjective values are mentally represented at least on an ordinal scale. Because one alternative cannot be at the same time better and worse than another alternative, choices should satisfy transitivity (if alternative A is preferred over B, and B is preferred over C, A should be preferred over C). Yet, individuals often demonstrate striking violations of transitivity (preferring C over A). We used functional magnetic resonance imaging to study the neural correlates of intransitive choices between gambles varying in magnitude and probability of financial gains. Behavioral intransitivities were common. They occurred because participants did not evaluate the gambles independently, but in comparison with the alternative gamble presented. Neural value signals in prefrontal and parietal cortex were not ordinal-scaled and transitive, but reflected fluctuations in the gambles’ local, pairing-dependent preference-ranks. Detailed behavioral analysis of gamble preferences showed that, depending on the difference in the offered gambles’ attributes, participants gave variable priority to magnitude or probability and thus shifted between preferring richer or safer gambles. The variable, context-dependent priority given to magnitude and probability was tracked by insula (magnitude) and posterior cingulate (probability). Their activation-balance may reflect the individual decision rules leading to intransitivities. Thus, the phenomenon of intransitivity is reflected in the organization of the neural systems involved in risky decision-making. PMID:20814565
Akinci, A.; Galadini, F.; Pantosti, D.; Petersen, M.; Malagnini, L.; Perkins, D.
2009-01-01
We produce probabilistic seismic-hazard assessments for the central Apennines, Italy, using time-dependent models that are characterized using a Brownian passage time recurrence model. Using aperiodicity parameters, ?? of 0.3, 0.5, and 0.7, we examine the sensitivity of the probabilistic ground motion and its deaggregation to these parameters. For the seismic source model we incorporate both smoothed historical seismicity over the area and geological information on faults. We use the maximum magnitude model for the fault sources together with a uniform probability of rupture along the fault (floating fault model) to model fictitious faults to account for earthquakes that cannot be correlated with known geologic structural segmentation.
NASA Technical Reports Server (NTRS)
Ogallagher, J. J.
1973-01-01
A simple one-dimensional time-dependent diffusion-convection model for the modulation of cosmic rays is presented. This model predicts that the observed intensity at a given time is approximately equal to the intensity given by the time independent diffusion convection solution under interplanetary conditions which existed a time iota in the past, (U(t sub o) = U sub s(t sub o - tau)) where iota is the average time spent by a particle inside the modulating cavity. Delay times in excess of several hundred days are possible with reasonable modulation parameters. Interpretation of phase lags observed during the 1969 to 1970 solar maximum in terms of this model suggests that the modulating region is probably not less than 10 a.u. and maybe as much as 35 a.u. in extent.
Roberson, A.M.; Andersen, D.E.; Kennedy, P.L.
2005-01-01
Broadcast surveys using conspecific calls are currently the most effective method for detecting northern goshawks (Accipiter gentilis) during the breeding season. These surveys typically use alarm calls during the nestling phase and juvenile food-begging calls during the fledgling-dependency phase. Because goshawks are most vocal during the courtship phase, we hypothesized that this phase would be an effective time to detect goshawks. Our objective was to improve current survey methodology by evaluating the probability of detecting goshawks at active nests in northern Minnesota in 3 breeding phases and at 4 broadcast distances and to determine the effective area surveyed per broadcast station. Unlike previous studies, we broadcast calls at only 1 distance per trial. This approach better quantifies (1) the relationship between distance and probability of detection, and (2) the effective area surveyed (EAS) per broadcast station. We conducted 99 broadcast trials at 14 active breeding areas. When pooled over all distances, detection rates were highest during the courtship (70%) and fledgling-dependency phases (68%). Detection rates were lowest during the nestling phase (28%), when there appeared to be higher variation in likelihood of detecting individuals. EAS per broadcast station was 39.8 ha during courtship and 24.8 ha during fledgling-dependency. Consequently, in northern Minnesota, broadcast stations may be spaced 712m and 562 m apart when conducting systematic surveys during courtship and fledgling-dependency, respectively. We could not calculate EAS for the nestling phase because probability of detection was not a simple function of distance from nest. Calculation of EAS could be applied to other areas where the probability of detection is a known function of distance.
Thouand, Gérald; Durand, Marie-José; Maul, Armand; Gancet, Christian; Blok, Han
2011-01-01
The European REACH Regulation (Registration, Evaluation, Authorization of CHemical substances) implies, among other things, the evaluation of the biodegradability of chemical substances produced by industry. A large set of test methods is available including detailed information on the appropriate conditions for testing. However, the inoculum used for these tests constitutes a “black box.” If biodegradation is achievable from the growth of a small group of specific microbial species with the substance as the only carbon source, the result of the test depends largely on the cell density of this group at “time zero.” If these species are relatively rare in an inoculum that is normally used, the likelihood of inoculating a test with sufficient specific cells becomes a matter of probability. Normally this probability increases with total cell density and with the diversity of species in the inoculum. Furthermore the history of the inoculum, e.g., a possible pre-exposure to the test substance or similar substances will have a significant influence on the probability. A high probability can be expected for substances that are widely used and regularly released into the environment, whereas a low probability can be expected for new xenobiotic substances that have not yet been released into the environment. Be that as it may, once the inoculum sample contains sufficient specific degraders, the performance of the biodegradation will follow a typical S shaped growth curve which depends on the specific growth rate under laboratory conditions, the so called F/M ratio (ratio between food and biomass) and the more or less toxic recalcitrant, but possible, metabolites. Normally regulators require the evaluation of the growth curve using a simple approach such as half-time. Unfortunately probability and biodegradation half-time are very often confused. As the half-time values reflect laboratory conditions which are quite different from environmental conditions (after a substance is released), these values should not be used to quantify and predict environmental behavior. The probability value could be of much greater benefit for predictions under realistic conditions. The main issue in the evaluation of probability is that the result is not based on a single inoculum from an environmental sample, but on a variety of samples. These samples can be representative of regional or local areas, climate regions, water types, and history, e.g., pristine or polluted. The above concept has provided us with a new approach, namely “Probabio.” With this approach, persistence is not only regarded as a simple intrinsic property of a substance, but also as the capability of various environmental samples to degrade a substance under realistic exposure conditions and F/M ratio. PMID:21863143
A prediction model for colon cancer surveillance data.
Good, Norm M; Suresh, Krithika; Young, Graeme P; Lockett, Trevor J; Macrae, Finlay A; Taylor, Jeremy M G
2015-08-15
Dynamic prediction models make use of patient-specific longitudinal data to update individualized survival probability predictions based on current and past information. Colonoscopy (COL) and fecal occult blood test (FOBT) results were collected from two Australian surveillance studies on individuals characterized as high-risk based on a personal or family history of colorectal cancer. Motivated by a Poisson process, this paper proposes a generalized nonlinear model with a complementary log-log link as a dynamic prediction tool that produces individualized probabilities for the risk of developing advanced adenoma or colorectal cancer (AAC). This model allows predicted risk to depend on a patient's baseline characteristics and time-dependent covariates. Information on the dates and results of COLs and FOBTs were incorporated using time-dependent covariates that contributed to patient risk of AAC for a specified period following the test result. These covariates serve to update a person's risk as additional COL, and FOBT test information becomes available. Model selection was conducted systematically through the comparison of Akaike information criterion. Goodness-of-fit was assessed with the use of calibration plots to compare the predicted probability of event occurrence with the proportion of events observed. Abnormal COL results were found to significantly increase risk of AAC for 1 year following the test. Positive FOBTs were found to significantly increase the risk of AAC for 3 months following the result. The covariates that incorporated the updated test results were of greater significance and had a larger effect on risk than the baseline variables. Copyright © 2015 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curchod, Basile F. E.; Agostini, Federica, E-mail: agostini@mpi-halle.mpg.de; Gross, E. K. U.
Nonadiabatic quantum interferences emerge whenever nuclear wavefunctions in different electronic states meet and interact in a nonadiabatic region. In this work, we analyze how nonadiabatic quantum interferences translate in the context of the exact factorization of the molecular wavefunction. In particular, we focus our attention on the shape of the time-dependent potential energy surface—the exact surface on which the nuclear dynamics takes place. We use a one-dimensional exactly solvable model to reproduce different conditions for quantum interferences, whose characteristic features already appear in one-dimension. The time-dependent potential energy surface develops complex features when strong interferences are present, in clear contrastmore » to the observed behavior in simple nonadiabatic crossing cases. Nevertheless, independent classical trajectories propagated on the exact time-dependent potential energy surface reasonably conserve a distribution in configuration space that mimics one of the exact nuclear probability densities.« less
Dependency of outbreaks distribution from insects - defoliators' seasonal development
Valentina Meshkova
2003-01-01
Analysis of data on the population dynamics of foliage browsing insects in time and space was conducted in the Ukraine. For each of the main species, correlation indices were calculated between outbreak characteristics (mean and specific foci area, outbreak probability), weather elements (air temperature, precipitation), indices (hydrothermal coefficient, winter...
A Composite Likelihood Inference in Latent Variable Models for Ordinal Longitudinal Responses
ERIC Educational Resources Information Center
Vasdekis, Vassilis G. S.; Cagnone, Silvia; Moustaki, Irini
2012-01-01
The paper proposes a composite likelihood estimation approach that uses bivariate instead of multivariate marginal probabilities for ordinal longitudinal responses using a latent variable model. The model considers time-dependent latent variables and item-specific random effects to be accountable for the interdependencies of the multivariate…
Exploration and Trapping of Mortal Random Walkers
NASA Astrophysics Data System (ADS)
Yuste, S. B.; Abad, E.; Lindenberg, Katja
2013-05-01
Exploration and trapping properties of random walkers that may evanesce at any time as they walk have seen very little treatment in the literature, and yet a finite lifetime is a frequent occurrence, and its effects on a number of random walk properties may be profound. For instance, whereas the average number of distinct sites visited by an immortal walker grows with time without bound, that of a mortal walker may, depending on dimensionality and rate of evanescence, remain finite or keep growing with the passage of time. This number can in turn be used to calculate other classic quantities such as the survival probability of a target surrounded by diffusing traps. If the traps are immortal, the survival probability will vanish with increasing time. However, if the traps are evanescent, the target may be spared a certain death. We analytically calculate a number of basic and broadly used quantities for evanescent random walkers.
A non-stationary cost-benefit based bivariate extreme flood estimation approach
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo
2018-02-01
Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost effective design values. However, previous cost-benefit based extreme flood estimation is based on stationary assumptions and analyze dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate influence of non-stationarities in both the dependence of flood variables and the marginal distributions on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore time changing dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54-year observed data is utilized to illustrate the application of NSCOBE. Results show NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between maximum probability of exceedance calculated from copula functions and marginal distributions. This study for the first time provides a new approach towards a better understanding of influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
Some limitations of frequency as a component of risk: an expository note.
Cox, Louis Anthony
2009-02-01
Students of risk analysis are often taught that "risk is frequency times consequence" or, more generally, that risk is determined by the frequency and severity of adverse consequences. But is it? This expository note reviews the concepts of frequency as average annual occurrence rate and as the reciprocal of mean time to failure (MTTF) or mean time between failures (MTBF) in a renewal process. It points out that if two risks (represented as two (frequency, severity) pairs for adverse consequences) have identical values for severity but different values of frequency, then it is not necessarily true that the one with the smaller value of frequency is preferable-and this is true no matter how frequency is defined. In general, there is not necessarily an increasing relation between the reciprocal of the mean time until an event occurs, its long-run average occurrences per year, and other criteria, such as the probability or expected number of times that it will happen over a specific interval of interest, such as the design life of a system. Risk depends on more than frequency and severity of consequences. It also depends on other information about the probability distribution for the time of a risk event that can become lost in simple measures of event "frequency." More flexible descriptions of risky processes, such as point process models can avoid these limitations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herberger, Sarah M.; Boring, Ronald L.
Abstract Objectives: This paper discusses the differences between classical human reliability analysis (HRA) dependence and the full spectrum of probabilistic dependence. Positive influence suggests an error increases the likelihood of subsequent errors or success increases the likelihood of subsequent success. Currently the typical method for dependence in HRA implements the Technique for Human Error Rate Prediction (THERP) positive dependence equations. This assumes that the dependence between two human failure events varies at discrete levels between zero and complete dependence (as defined by THERP). Dependence in THERP does not consistently span dependence values between 0 and 1. In contrast, probabilistic dependencemore » employs Bayes Law, and addresses a continuous range of dependence. Methods: Using the laws of probability, complete dependence and maximum positive dependence do not always agree. Maximum dependence is when two events overlap to their fullest amount. Maximum negative dependence is the smallest amount that two events can overlap. When the minimum probability of two events overlapping is less than independence, negative dependence occurs. For example, negative dependence is when an operator fails to actuate Pump A, thereby increasing his or her chance of actuating Pump B. The initial error actually increases the chance of subsequent success. Results: Comparing THERP and probability theory yields different results in certain scenarios; with the latter addressing negative dependence. Given that most human failure events are rare, the minimum overlap is typically 0. And when the second event is smaller than the first event the max dependence is less than 1, as defined by Bayes Law. As such alternative dependence equations are provided along with a look-up table defining the maximum and maximum negative dependence given the probability of two events. Conclusions: THERP dependence has been used ubiquitously for decades, and has provided approximations of the dependencies between two events. Since its inception, computational abilities have increased exponentially, and alternative approaches that follow the laws of probability dependence need to be implemented. These new approaches need to consider negative dependence and identify when THERP output is not appropriate.« less
System statistical reliability model and analysis
NASA Technical Reports Server (NTRS)
Lekach, V. S.; Rood, H.
1973-01-01
A digital computer code was developed to simulate the time-dependent behavior of the 5-kwe reactor thermoelectric system. The code was used to determine lifetime sensitivity coefficients for a number of system design parameters, such as thermoelectric module efficiency and degradation rate, radiator absorptivity and emissivity, fuel element barrier defect constant, beginning-of-life reactivity, etc. A probability distribution (mean and standard deviation) was estimated for each of these design parameters. Then, error analysis was used to obtain a probability distribution for the system lifetime (mean = 7.7 years, standard deviation = 1.1 years). From this, the probability that the system will achieve the design goal of 5 years lifetime is 0.993. This value represents an estimate of the degradation reliability of the system.
Architecture for Integrated Medical Model Dynamic Probabilistic Risk Assessment
NASA Technical Reports Server (NTRS)
Jaworske, D. A.; Myers, J. G.; Goodenow, D.; Young, M.; Arellano, J. D.
2016-01-01
Probabilistic Risk Assessment (PRA) is a modeling tool used to predict potential outcomes of a complex system based on a statistical understanding of many initiating events. Utilizing a Monte Carlo method, thousands of instances of the model are considered and outcomes are collected. PRA is considered static, utilizing probabilities alone to calculate outcomes. Dynamic Probabilistic Risk Assessment (dPRA) is an advanced concept where modeling predicts the outcomes of a complex system based not only on the probabilities of many initiating events, but also on a progression of dependencies brought about by progressing down a time line. Events are placed in a single time line, adding each event to a queue, as managed by a planner. Progression down the time line is guided by rules, as managed by a scheduler. The recently developed Integrated Medical Model (IMM) summarizes astronaut health as governed by the probabilities of medical events and mitigation strategies. Managing the software architecture process provides a systematic means of creating, documenting, and communicating a software design early in the development process. The software architecture process begins with establishing requirements and the design is then derived from the requirements.
A possible loophole in the theorem of Bell.
Hess, K; Philipp, W
2001-12-04
The celebrated inequalities of Bell are based on the assumption that local hidden parameters exist. When combined with conflicting experimental results, these inequalities appear to prove that local hidden parameters cannot exist. This contradiction suggests to many that only instantaneous action at a distance can explain the Einstein, Podolsky, and Rosen type of experiments. We show that, in addition to the assumption that hidden parameters exist, Bell tacitly makes a variety of other assumptions that contribute to his being able to obtain the desired contradiction. For instance, Bell assumes that the hidden parameters do not depend on time and are governed by a single probability measure independent of the analyzer settings. We argue that the exclusion of time has neither a physical nor a mathematical basis but is based on Bell's translation of the concept of Einstein locality into the language of probability theory. Our additional set of local hidden variables includes time-like correlated parameters and a generalized probability density. We prove that our extended space of local hidden variables does not permit Bell-type proofs to go forward.
A statistical analysis of flank eruptions on Etna volcano
NASA Astrophysics Data System (ADS)
Mulargia, Francesco; Tinti, Stefano; Boschi, Enzo
1985-02-01
A singularly complete record exists for the eruptive activity of Etna volcano. The time series of occurrence of flank eruptions in the period 1600-1980, in which the record is presumably complete, is found to follow a stationary Poisson process. A revision of the available data shows that eruption durations are rather well correlated with the estimates of the volume of lava flows. This implies that the magnitude of an eruption can be defined directly by its duration. Extreme value statistics are then applied to the time series, using duration as a dependent variable. The probability of occurrence of a very long (300 days) eruption is greater than 50% only in time intervals of the order of 50 years. The correlation found between duration and total output also allows estimation of the probability of occurrence of a major event which exceeds a given duration and total flow of lava. The composite probabilities do not differ considerably from the pure ones. Paralleling a well established application to seismic events, extreme value theory can be profitably used in volcanic risk estimates, provided that appropriate account is also taken of all other variables.
A Monte Carlo study of fluorescence generation probability in a two-layered tissue model
NASA Astrophysics Data System (ADS)
Milej, Daniel; Gerega, Anna; Wabnitz, Heidrun; Liebert, Adam
2014-03-01
It was recently reported that the time-resolved measurement of diffuse reflectance and/or fluorescence during injection of an optical contrast agent may constitute a basis for a technique to assess cerebral perfusion. In this paper, we present results of Monte Carlo simulations of the propagation of excitation photons and tracking of fluorescence photons in a two-layered tissue model mimicking intra- and extracerebral tissue compartments. Spatial 3D distributions of the probability that the photons were converted from excitation to emission wavelength in a defined voxel of the medium (generation probability) during their travel between source and detector were obtained for different optical properties in intra- and extracerebral tissue compartments. It was noted that the spatial distribution of the generation probability depends on the distribution of the fluorophore in the medium and is influenced by the absorption of the medium and of the fluorophore at excitation and emission wavelengths. Simulations were also carried out for realistic time courses of the dye concentration in both layers. The results of the study show that the knowledge of the absorption properties of the medium at excitation and emission wavelengths is essential for the interpretation of the time-resolved fluorescence signals measured on the surface of the head.
NASA Astrophysics Data System (ADS)
Lacombe, Lionel; Dinh, P. Huong Mai; Reinhard, Paul-Gerhard; Suraud, Eric; Sanche, Leon
2015-08-01
We present an extension of standard time-dependent density functional theory (TDDFT) to include the evaluation of rare reaction channels, taking as an example of application the theoretical modelling of electron attachment to molecules. The latter process is of great importance in radiation-induced damage of biological tissue for which dissociative electron attachment plays a decisive role. As the attachment probability is very low, it cannot be extracted from the TDDFT propagation whose mean field provides an average over various reaction channels. To extract rare events, we augment TDDFT by a perturbative treatment to account for the occasional jumps, namely electron capture in our test case. We apply the modelling to electron attachment to H2O, H3O+, and (H2O)2. Dynamical calculations have been done at low energy (3-16 eV). We explore, in particular, how core-excited states of the targets show up as resonances in the attachment probability. Contribution to the Topical Issue "COST Action Nano-IBCT: Nano-scale Processes Behind Ion-Beam Cancer Therapy", edited by Andrey Solov'yov, Nigel Mason, Gustavo García, Eugene Surdutovich.
A cross-diffusion system derived from a Fokker-Planck equation with partial averaging
NASA Astrophysics Data System (ADS)
Jüngel, Ansgar; Zamponi, Nicola
2017-02-01
A cross-diffusion system for two components with a Laplacian structure is analyzed on the multi-dimensional torus. This system, which was recently suggested by P.-L. Lions, is formally derived from a Fokker-Planck equation for the probability density associated with a multi-dimensional Itō process, assuming that the diffusion coefficients depend on partial averages of the probability density with exponential weights. A main feature is that the diffusion matrix of the limiting cross-diffusion system is generally neither symmetric nor positive definite, but its structure allows for the use of entropy methods. The global-in-time existence of positive weak solutions is proved and, under a simplifying assumption, the large-time asymptotics is investigated.
Schwartz, L M; Bergman, D J; Dunn, K J; Mitra, P P
1996-01-01
Random walk computer simulations are an important tool in understanding magnetic resonance measurements in porous media. In this paper we focus on the description of pulsed field gradient spin echo (PGSE) experiments that measure the probability, P(R,t), that a diffusing water molecule will travel a distance R in a time t. Because PGSE simulations are often limited by statistical considerations, we will see that valuable insight can be gained by working with simple periodic geometries and comparing simulation data to the results of exact eigenvalue expansions. In this connection, our attention will be focused on (1) the wavevector, k, and time dependent magnetization, M(k, t); and (2) the normalized probability, Ps(delta R, t), that a diffusing particle will return to within delta R of the origin after time t.
N -tag probability law of the symmetric exclusion process
NASA Astrophysics Data System (ADS)
Poncet, Alexis; Bénichou, Olivier; Démery, Vincent; Oshanin, Gleb
2018-06-01
The symmetric exclusion process (SEP), in which particles hop symmetrically on a discrete line with hard-core constraints, is a paradigmatic model of subdiffusion in confined systems. This anomalous behavior is a direct consequence of strong spatial correlations induced by the requirement that the particles cannot overtake each other. Even if this fact has been recognized qualitatively for a long time, up to now there has been no full quantitative determination of these correlations. Here we study the joint probability distribution of an arbitrary number of tagged particles in the SEP. We determine analytically its large-time limit for an arbitrary density of particles, and its full dynamics in the high-density limit. In this limit, we obtain the time-dependent large deviation function of the problem and unveil a universal scaling form shared by the cumulants.
Barlow, Jay; Tyack, Peter L; Johnson, Mark P; Baird, Robin W; Schorr, Gregory S; Andrews, Russel D; Aguilar de Soto, Natacha
2013-09-01
Acoustic survey methods can be used to estimate density and abundance using sounds produced by cetaceans and detected using hydrophones if the probability of detection can be estimated. For passive acoustic surveys, probability of detection at zero horizontal distance from a sensor, commonly called g(0), depends on the temporal patterns of vocalizations. Methods to estimate g(0) are developed based on the assumption that a beaked whale will be detected if it is producing regular echolocation clicks directly under or above a hydrophone. Data from acoustic recording tags placed on two species of beaked whales (Cuvier's beaked whale-Ziphius cavirostris and Blainville's beaked whale-Mesoplodon densirostris) are used to directly estimate the percentage of time they produce echolocation clicks. A model of vocal behavior for these species as a function of their diving behavior is applied to other types of dive data (from time-depth recorders and time-depth-transmitting satellite tags) to indirectly determine g(0) in other locations for low ambient noise conditions. Estimates of g(0) for a single instant in time are 0.28 [standard deviation (s.d.) = 0.05] for Cuvier's beaked whale and 0.19 (s.d. = 0.01) for Blainville's beaked whale.
Evolutionary games on cycles with strong selection
NASA Astrophysics Data System (ADS)
Altrock, P. M.; Traulsen, A.; Nowak, M. A.
2017-02-01
Evolutionary games on graphs describe how strategic interactions and population structure determine evolutionary success, quantified by the probability that a single mutant takes over a population. Graph structures, compared to the well-mixed case, can act as amplifiers or suppressors of selection by increasing or decreasing the fixation probability of a beneficial mutant. Properties of the associated mean fixation times can be more intricate, especially when selection is strong. The intuition is that fixation of a beneficial mutant happens fast in a dominance game, that fixation takes very long in a coexistence game, and that strong selection eliminates demographic noise. Here we show that these intuitions can be misleading in structured populations. We analyze mean fixation times on the cycle graph under strong frequency-dependent selection for two different microscopic evolutionary update rules (death-birth and birth-death). We establish exact analytical results for fixation times under strong selection and show that there are coexistence games in which fixation occurs in time polynomial in population size. Depending on the underlying game, we observe inherence of demographic noise even under strong selection if the process is driven by random death before selection for birth of an offspring (death-birth update). In contrast, if selection for an offspring occurs before random removal (birth-death update), then strong selection can remove demographic noise almost entirely.
Xie, Ping; Wu, Zi Yi; Zhao, Jiang Yan; Sang, Yan Fang; Chen, Jie
2018-04-01
A stochastic hydrological process is influenced by both stochastic and deterministic factors. A hydrological time series contains not only pure random components reflecting its inheri-tance characteristics, but also deterministic components reflecting variability characteristics, such as jump, trend, period, and stochastic dependence. As a result, the stochastic hydrological process presents complicated evolution phenomena and rules. To better understand these complicated phenomena and rules, this study described the inheritance and variability characteristics of an inconsistent hydrological series from two aspects: stochastic process simulation and time series analysis. In addition, several frequency analysis approaches for inconsistent time series were compared to reveal the main problems in inconsistency study. Then, we proposed a new concept of hydrological genes origined from biological genes to describe the inconsistent hydrolocal processes. The hydrologi-cal genes were constructed using moments methods, such as general moments, weight function moments, probability weight moments and L-moments. Meanwhile, the five components, including jump, trend, periodic, dependence and pure random components, of a stochastic hydrological process were defined as five hydrological bases. With this method, the inheritance and variability of inconsistent hydrological time series were synthetically considered and the inheritance, variability and evolution principles were fully described. Our study would contribute to reveal the inheritance, variability and evolution principles in probability distribution of hydrological elements.
Optimal search strategies of space-time coupled random walkers with finite lifetimes
NASA Astrophysics Data System (ADS)
Campos, D.; Abad, E.; Méndez, V.; Yuste, S. B.; Lindenberg, K.
2015-05-01
We present a simple paradigm for detection of an immobile target by a space-time coupled random walker with a finite lifetime. The motion of the walker is characterized by linear displacements at a fixed speed and exponentially distributed duration, interrupted by random changes in the direction of motion and resumption of motion in the new direction with the same speed. We call these walkers "mortal creepers." A mortal creeper may die at any time during its motion according to an exponential decay law characterized by a finite mean death rate ωm. While still alive, the creeper has a finite mean frequency ω of change of the direction of motion. In particular, we consider the efficiency of the target search process, characterized by the probability that the creeper will eventually detect the target. Analytic results confirmed by numerical results show that there is an ωm-dependent optimal frequency ω =ωopt that maximizes the probability of eventual target detection. We work primarily in one-dimensional (d =1 ) domains and examine the role of initial conditions and of finite domain sizes. Numerical results in d =2 domains confirm the existence of an optimal frequency of change of direction, thereby suggesting that the observed effects are robust to changes in dimensionality. In the d =1 case, explicit expressions for the probability of target detection in the long time limit are given. In the case of an infinite domain, we compute the detection probability for arbitrary times and study its early- and late-time behavior. We further consider the survival probability of the target in the presence of many independent creepers beginning their motion at the same location and at the same time. We also consider a version of the standard "target problem" in which many creepers start at random locations at the same time.
Criticality and Phase Transition in Stock-Price Fluctuations
NASA Astrophysics Data System (ADS)
Kiyono, Ken; Struzik, Zbigniew R.; Yamamoto, Yoshiharu
2006-02-01
We analyze the behavior of the U.S. S&P 500 index from 1984 to 1995, and characterize the non-Gaussian probability density functions (PDF) of the log returns. The temporal dependence of fat tails in the PDF of a ten-minute log return shows a gradual, systematic increase in the probability of the appearance of large increments on approaching black Monday in October 1987, reminiscent of parameter tuning towards criticality. On the occurrence of the black Monday crash, this culminates in an abrupt transition of the scale dependence of the non-Gaussian PDF towards scale-invariance characteristic of critical behavior. These facts suggest the need for revisiting the turbulent cascade paradigm recently proposed for modeling the underlying dynamics of the financial index, to account for time varying—phase transitionlike and scale invariant-critical-like behavior.
Effect of feedback mode and task difficulty on quality of timing decisions in a zero-sum game.
Tikuisis, Peter; Vartanian, Oshin; Mandel, David R
2014-09-01
The objective was to investigate the interaction between the mode of performance outcome feedback and task difficulty on timing decisions (i.e., when to act). Feedback is widely acknowledged to affect task performance. However, the extent to which feedback display mode and its impact on timing decisions is moderated by task difficulty remains largely unknown. Participants repeatedly engaged a zero-sum game involving silent duels with a computerized opponent and were given visual performance feedback after each engagement. They were sequentially tested on three different levels of task difficulty (low, intermediate, and high) in counterbalanced order. Half received relatively simple "inside view" binary outcome feedback, and the other half received complex "outside view" hit rate probability feedback. The key dependent variables were response time (i.e., time taken to make a decision) and survival outcome. When task difficulty was low to moderate, participants were more likely to learn and perform better from hit rate probability feedback than binary outcome feedback. However, better performance with hit rate feedback exacted a higher cognitive cost manifested by higher decision response time. The beneficial effect of hit rate probability feedback on timing decisions is partially moderated by task difficulty. Performance feedback mode should be judiciously chosen in relation to task difficulty for optimal performance in tasks involving timing decisions.
Six-dimensional quantum dynamics study for the dissociative adsorption of DCl on Au(111) surface
NASA Astrophysics Data System (ADS)
Liu, Tianhui; Fu, Bina; Zhang, Dong H.
2014-04-01
We carried out six-dimensional quantum dynamics calculations for the dissociative adsorption of deuterium chloride (DCl) on Au(111) surface using the initial state-selected time-dependent wave packet approach. The four-dimensional dissociation probabilities are also obtained with the center of mass of DCl fixed at various sites. These calculations were all performed based on an accurate potential energy surface recently constructed by neural network fitting to density function theory energy points. The origin of the extremely small dissociation probability for DCl/HCl (v = 0, j = 0) fixed at the top site compared to other fixed sites is elucidated in this study. The influence of vibrational excitation and rotational orientation of DCl on the reactivity was investigated by calculating six-dimensional dissociation probabilities. The vibrational excitation of DCl enhances the reactivity substantially and the helicopter orientation yields higher dissociation probability than the cartwheel orientation. The site-averaged dissociation probability over 25 fixed sites obtained from four-dimensional quantum dynamics calculations can accurately reproduce the six-dimensional dissociation probability.
Six-dimensional quantum dynamics study for the dissociative adsorption of DCl on Au(111) surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Tianhui; Fu, Bina, E-mail: bina@dicp.ac.cn, E-mail: zhangdh@dicp.ac.cn; Zhang, Dong H., E-mail: bina@dicp.ac.cn, E-mail: zhangdh@dicp.ac.cn
We carried out six-dimensional quantum dynamics calculations for the dissociative adsorption of deuterium chloride (DCl) on Au(111) surface using the initial state-selected time-dependent wave packet approach. The four-dimensional dissociation probabilities are also obtained with the center of mass of DCl fixed at various sites. These calculations were all performed based on an accurate potential energy surface recently constructed by neural network fitting to density function theory energy points. The origin of the extremely small dissociation probability for DCl/HCl (v = 0, j = 0) fixed at the top site compared to other fixed sites is elucidated in this study. The influence of vibrational excitationmore » and rotational orientation of DCl on the reactivity was investigated by calculating six-dimensional dissociation probabilities. The vibrational excitation of DCl enhances the reactivity substantially and the helicopter orientation yields higher dissociation probability than the cartwheel orientation. The site-averaged dissociation probability over 25 fixed sites obtained from four-dimensional quantum dynamics calculations can accurately reproduce the six-dimensional dissociation probability.« less
Breininger, David R; Breininger, Robert D; Hall, Carlton R
2017-02-01
Seagrasses are the foundation of many coastal ecosystems and are in global decline because of anthropogenic impacts. For the Indian River Lagoon (Florida, U.S.A.), we developed competing multistate statistical models to quantify how environmental factors (surrounding land use, water depth, and time [year]) influenced the variability of seagrass state dynamics from 2003 to 2014 while accounting for time-specific detection probabilities that quantified our ability to determine seagrass state at particular locations and times. We classified seagrass states (presence or absence) at 764 points with geographic information system maps for years when seagrass maps were available and with aerial photographs when seagrass maps were not available. We used 4 categories (all conservation, mostly conservation, mostly urban, urban) to describe surrounding land use within sections of lagoonal waters, usually demarcated by land features that constricted these waters. The best models predicted that surrounding land use, depth, and year would affect transition and detection probabilities. Sections of the lagoon bordered by urban areas had the least stable seagrass beds and lowest detection probabilities, especially after a catastrophic seagrass die-off linked to an algal bloom. Sections of the lagoon bordered by conservation lands had the most stable seagrass beds, which supports watershed conservation efforts. Our results show that a multistate approach can empirically estimate state-transition probabilities as functions of environmental factors while accounting for state-dependent differences in seagrass detection probabilities as part of the overall statistical inference procedure. © 2016 Society for Conservation Biology.
Weak Measurement and Quantum Smoothing of a Superconducting Qubit
NASA Astrophysics Data System (ADS)
Tan, Dian
In quantum mechanics, the measurement outcome of an observable in a quantum system is intrinsically random, yielding a probability distribution. The state of the quantum system can be described by a density matrix rho(t), which depends on the information accumulated until time t, and represents our knowledge about the system. The density matrix rho(t) gives probabilities for the outcomes of measurements at time t. Further probing of the quantum system allows us to refine our prediction in hindsight. In this thesis, we experimentally examine a quantum smoothing theory in a superconducting qubit by introducing an auxiliary matrix E(t) which is conditioned on information obtained from time t to a final time T. With the complete information before and after time t, the pair of matrices [rho(t), E(t)] can be used to make smoothed predictions for the measurement outcome at time t. We apply the quantum smoothing theory in the case of continuous weak measurement unveiling the retrodicted quantum trajectories and weak values. In the case of strong projective measurement, while the density matrix rho(t) with only diagonal elements in a given basis |n〉 may be treated as a classical mixture, we demonstrate a failure of this classical mixture description in determining the smoothed probabilities for the measurement outcome at time t with both diagonal rho(t) and diagonal E(t). We study the correlations between quantum states and weak measurement signals and examine aspects of the time symmetry of continuous quantum measurement. We also extend our study of quantum smoothing theory to the case of resonance fluorescence of a superconducting qubit with homodyne measurement and observe some interesting effects such as the modification of the excited state probabilities, weak values, and evolution of the predicted and retrodicted trajectories.
Religion in SETI Communications
NASA Astrophysics Data System (ADS)
Pay, R.
The prospect of millions of civilizations in the Galaxy raises the probability of receiving communications in the Search for Extraterrestrial Intelligence (SETI). However, much depends on the average lifetime of planetary civilizations. For a lifetime of 500 years, an optimistic forecast would predict about 65 civilizations in the Galaxy at any one time, separated by 5,000 light years. No prospect of communication. For a lifetime of 10 million years, over a million civilizations would be spaced 180 light years apart. Communication among them is feasible. This indicates that extraterrestrial communications depend on civilizations achieving long term stability, probably by evolving a global religion that removes sources of religious strife. Stability also requires an ethic supporting universal rights, nonviolence, empathy and cooperation. As this ethic will be expressed in the planet-wide religion, it will lead to offers of support to other civilizations struggling to gain stability. As stable civilizations will be much advanced scientifically, understanding the religious concepts that appear in their communications will depend on how quantum mechanics, biological evolution, and the creation of the universe at a point in time are incorporated into their religion. Such a religion will view creation as intentional rather than accidental (the atheistic alternative) and will find the basis for its natural theology in the intention revealed by the physical laws of the universe.
NASA Astrophysics Data System (ADS)
Sidorova, Mariia; Semenov, Alexej; Hübers, Heinz-Wilhelm; Charaev, Ilya; Kuzmin, Artem; Doerner, Steffen; Siegel, Michael
2017-11-01
We studied timing jitter in the appearance of photon counts in meandering nanowires with different fractional amount of bends. Intrinsic timing jitter, which is the probability density function of the random time delay between photon absorption in current-carrying superconducting nanowire and appearance of the normal domain, reveals two different underlying physical mechanisms. In the deterministic regime, which is realized at large photon energies and large currents, jitter is controlled by position-dependent detection threshold in straight parts of meanders. It decreases with the increase in the current. At small photon energies, jitter increases and its current dependence disappears. In this probabilistic regime jitter is controlled by Poisson process in that magnetic vortices jump randomly across the wire in areas adjacent to the bends.
Simple and flexible SAS and SPSS programs for analyzing lag-sequential categorical data.
O'Connor, B P
1999-11-01
This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
Germanium-Source Tunnel Field Effect Transistors for Ultra-Low Power Digital Logic
2012-05-10
carrier injection via band-to-band tunneling (BTBT) and the absence of thermal (kT) dependence allows for the subthreshold swing to be steeper than... tunneling probability was derived by Kane using time- dependent perturbation theory and Fermi’s Golden Rule [8-9]. This section will instead employ a...be based on tunneling across a reverse- biased p-n junction as shown in Fig. 2.2. In order to obtain a closed form solution of the BTBT
On statistical properties of traded volume in financial markets
NASA Astrophysics Data System (ADS)
de Souza, J.; Moyano, L. G.; Duarte Queirós, S. M.
2006-03-01
In this article we study the dependence degree of the traded volume of the Dow Jones 30 constituent equities by using a nonextensive generalised form of the Kullback-Leibler information measure. Our results show a slow decay of the dependence degree as a function of the lag. This feature is compatible with the existence of non-linearities in this type time series. In addition, we introduce a dynamical mechanism whose associated stationary probability density function (PDF) presents a good agreement with the empirical results.
Moro, Marilyn; Westover, M Brandon; Kelly, Jessica; Bianchi, Matt T
2016-03-01
Obstructive sleep apnea (OSA) is associated with increased morbidity and mortality, and treatment with positive airway pressure (PAP) is cost-effective. However, the optimal diagnostic strategy remains a subject of debate. Prior modeling studies have not consistently supported the widely held assumption that home sleep testing (HST) is cost-effective. We modeled four strategies: (1) treat no one; (2) treat everyone empirically; (3) treat those testing positive during in-laboratory polysomnography (PSG) via in-laboratory titration; and (4) treat those testing positive during HST with auto-PAP. The population was assumed to lack independent reasons for in-laboratory PSG (such as insomnia, periodic limb movements in sleep, complex apnea). We considered the third-party payer perspective, via both standard (quality-adjusted) and pure cost methods. The preferred strategy depended on three key factors: pretest probability of OSA, cost of untreated OSA, and time horizon. At low prevalence and low cost of untreated OSA, the treat no one strategy was favored, whereas empiric treatment was favored for high prevalence and high cost of untreated OSA. In-laboratory backup for failures in the at-home strategy increased the preference for the at-home strategy. Without laboratory backup in the at-home arm, the in-laboratory strategy was increasingly preferred at longer time horizons. Using a model framework that captures a broad range of clinical possibilities, the optimal diagnostic approach to uncomplicated OSA depends on pretest probability, cost of untreated OSA, and time horizon. Estimating each of these critical factors remains a challenge warranting further investigation. © 2016 American Academy of Sleep Medicine.
On the mixing time in the Wang-Landau algorithm
NASA Astrophysics Data System (ADS)
Fadeeva, Marina; Shchur, Lev
2018-01-01
We present preliminary results of the investigation of the properties of the Markov random walk in the energy space generated by the Wang-Landau probability. We build transition matrix in the energy space (TMES) using the exact density of states for one-dimensional and two-dimensional Ising models. The spectral gap of TMES is inversely proportional to the mixing time of the Markov chain. We estimate numerically the dependence of the mixing time on the lattice size, and extract the mixing exponent.
Networked Ising-Sznajd AR-β Model
NASA Astrophysics Data System (ADS)
Nagao, Tomonori; Ohmiya, Mayumi
2009-09-01
The modified Ising-Sznajd model is studied to clarify the machanism of price formation in the stock market. The conventional Ising-Sznajd model is improved as a small world network with the rewireing probability β(t) which depends on the time. Numerical experiments show that phase transition, regarded as a economical crisis, is inevitable in this model.
Mixed and Mixture Regression Models for Continuous Bounded Responses Using the Beta Distribution
ERIC Educational Resources Information Center
Verkuilen, Jay; Smithson, Michael
2012-01-01
Doubly bounded continuous data are common in the social and behavioral sciences. Examples include judged probabilities, confidence ratings, derived proportions such as percent time on task, and bounded scale scores. Dependent variables of this kind are often difficult to analyze using normal theory models because their distributions may be quite…
Discrimination of Variable Schedules Is Controlled by Interresponse Times Proximal to Reinforcement
ERIC Educational Resources Information Center
Tanno, Takayuki; Silberberg, Alan; Sakagami, Takayuki
2012-01-01
In Experiment 1, food-deprived rats responded to one of two schedules that were, with equal probability, associated with a sample lever. One schedule was always variable ratio, while the other schedule, depending on the trial within a session, was: (a) a variable-interval schedule; (b) a tandem variable-interval,…
Influence of Image Interactivity on Approach Responses towards an Online Retailer.
ERIC Educational Resources Information Center
Fiore, Ann Marie; Jin, Hyun-Jeong
2003-01-01
Measured the effect of exposure to an image interactivity function from an apparel retailer's Web site on approach responses towards the retailer. Dependent variables included attitude towards the online store, willingness to purchase, probability of spending more time than planned shopping, and likelihood of patronizing the online retailer's…
The stochastic energy-Casimir method
NASA Astrophysics Data System (ADS)
Arnaudon, Alexis; Ganaba, Nader; Holm, Darryl D.
2018-04-01
In this paper, we extend the energy-Casimir stability method for deterministic Lie-Poisson Hamiltonian systems to provide sufficient conditions for stability in probability of stochastic dynamical systems with symmetries. We illustrate this theory with classical examples of coadjoint motion, including the rigid body, the heavy top, and the compressible Euler equation in two dimensions. The main result is that stable deterministic equilibria remain stable in probability up to a certain stopping time that depends on the amplitude of the noise for finite-dimensional systems and on the amplitude of the spatial derivative of the noise for infinite-dimensional systems. xml:lang="fr"
NASA Technical Reports Server (NTRS)
Neuhauser, Daniel; Baer, Michael; Judson, Richard S.; Kouri, Donald J.
1989-01-01
The first successful application of the three-dimensional quantum body frame wave packet approach to reactive scattering is reported for the H + H2 exchange reaction on the LSTH potential surface. The method used is based on a procedure for calculating total reaction probabilities from wave packets. It is found that converged, vibrationally resolved reactive probabilities can be calculated with a grid that is not much larger than required for the pure inelastic calculation. Tabular results are presented for several energies.
Zangarini, Sara; Trombino, Luca; Cattaneo, Cristina
2016-06-01
A buried body not only determines an environmental response at the deposition site but it is also affected by the soil. The experiment was performed using eleven swine carcasses buried in an open site (Northern Italy). Changes occurring in bone tissue at different post-burial intervals were evaluated observing thin sections of bones through micromorphological and ultramicroscopic (SEM-EDS) techniques. These methods allowed the identification of: (a) magnesium phosphate (Mg3(PO4)2) crystallizations, probably linked to decomposition of bones and soft tissues; (b) significant sulphur levels which seem to be related to hydrogen sulphide (H2S) fixation in bone tissue; (c) metal oxide concentrations in the form of unusual violet-blue colorations, which probably are evidence of the soil's action and penetration in bones, also testified by (d) the presence of mineral grains enclosed in the osseous tissue. The results underline the possibility of identifying both time-dependent markers of decomposition and indicators of permanence in soil in buried bones. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Maximum predictive power and the superposition principle
NASA Technical Reports Server (NTRS)
Summhammer, Johann
1994-01-01
In quantum physics the direct observables are probabilities of events. We ask how observed probabilities must be combined to achieve what we call maximum predictive power. According to this concept the accuracy of a prediction must only depend on the number of runs whose data serve as input for the prediction. We transform each probability to an associated variable whose uncertainty interval depends only on the amount of data and strictly decreases with it. We find that for a probability which is a function of two other probabilities maximum predictive power is achieved when linearly summing their associated variables and transforming back to a probability. This recovers the quantum mechanical superposition principle.
Information transport in classical statistical systems
NASA Astrophysics Data System (ADS)
Wetterich, C.
2018-02-01
For "static memory materials" the bulk properties depend on boundary conditions. Such materials can be realized by classical statistical systems which admit no unique equilibrium state. We describe the propagation of information from the boundary to the bulk by classical wave functions. The dependence of wave functions on the location of hypersurfaces in the bulk is governed by a linear evolution equation that can be viewed as a generalized Schrödinger equation. Classical wave functions obey the superposition principle, with local probabilities realized as bilinears of wave functions. For static memory materials the evolution within a subsector is unitary, as characteristic for the time evolution in quantum mechanics. The space-dependence in static memory materials can be used as an analogue representation of the time evolution in quantum mechanics - such materials are "quantum simulators". For example, an asymmetric Ising model on a Euclidean two-dimensional lattice represents the time evolution of free relativistic fermions in two-dimensional Minkowski space.
Hagihara, Rie; Jones, Rhondda E; Sobtzick, Susan; Cleguer, Christophe; Garrigue, Claire; Marsh, Helene
2018-01-01
The probability of an aquatic animal being available for detection is typically <1. Accounting for covariates that reduce the probability of detection is important for obtaining robust estimates of the population abundance and determining its status and trends. The dugong (Dugong dugon) is a bottom-feeding marine mammal and a seagrass community specialist. We hypothesized that the probability of a dugong being available for detection is dependent on water depth and that dugongs spend more time underwater in deep-water seagrass habitats than in shallow-water seagrass habitats. We tested this hypothesis by quantifying the depth use of 28 wild dugongs fitted with GPS satellite transmitters and time-depth recorders (TDRs) at three sites with distinct seagrass depth distributions: 1) open waters supporting extensive seagrass meadows to 40 m deep (Torres Strait, 6 dugongs, 2015); 2) a protected bay (average water depth 6.8 m) with extensive shallow seagrass beds (Moreton Bay, 13 dugongs, 2011 and 2012); and 3) a mixture of lagoon, coral and seagrass habitats to 60 m deep (New Caledonia, 9 dugongs, 2013). The fitted instruments were used to measure the times the dugongs spent in the experimentally determined detection zones under various environmental conditions. The estimated probability of detection was applied to aerial survey data previously collected at each location. In general, dugongs were least available for detection in Torres Strait, and the population estimates increased 6-7 fold using depth-specific availability correction factors compared with earlier estimates that assumed homogeneous detection probability across water depth and location. Detection probabilities were higher in Moreton Bay and New Caledonia than Torres Strait because the water transparency in these two locations was much greater than in Torres Strait and the effect of correcting for depth-specific detection probability much less. The methodology has application to visual survey of coastal megafauna including surveys using Unmanned Aerial Vehicles.
Unambiguous discrimination between linearly dependent equidistant states with multiple copies
NASA Astrophysics Data System (ADS)
Zhang, Wen-Hai; Ren, Gang
2018-07-01
Linearly independent quantum states can be unambiguously discriminated, but linearly dependent ones cannot. For linearly dependent quantum states, however, if C copies of the single states are available, then they may form linearly independent states, and can be unambiguously discriminated. We consider unambiguous discrimination among N = D + 1 linearly dependent states given that C copies are available and that the single copies span a D-dimensional space with equal inner products. The maximum unambiguous discrimination probability is derived for all C with equal a priori probabilities. For this classification of the linearly dependent equidistant states, our result shows that if C is even then adding a further copy fails to increase the maximum discrimination probability.
NASA Astrophysics Data System (ADS)
Schneider, Kai; Kadoch, Benjamin; Bos, Wouter
2017-11-01
The angle between two subsequent particle displacement increments is evaluated as a function of the time lag. The directional change of particles can thus be quantified at different scales and multiscale statistics can be performed. Flow dependent and geometry dependent features can be distinguished. The mean angle satisfies scaling behaviors for short time lags based on the smoothness of the trajectories. For intermediate time lags a power law behavior can be observed for some turbulent flows, which can be related to Kolmogorov scaling. The long time behavior depends on the confinement geometry of the flow. We show that the shape of the probability distribution function of the directional change can be well described by a Fischer distribution. Results for two-dimensional (direct and inverse cascade) and three-dimensional turbulence with and without confinement, illustrate the properties of the proposed multiscale statistics. The presented Monte-Carlo simulations allow disentangling geometry dependent and flow independent features. Finally, we also analyze trajectories of football players, which are, in general, not randomly spaced on a field.
Predicting the cosmological constant with the scale-factor cutoff measure
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Simone, Andrea; Guth, Alan H.; Salem, Michael P.
2008-09-15
It is well known that anthropic selection from a landscape with a flat prior distribution of cosmological constant {lambda} gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of {lambda} depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes'more » (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of {lambda}, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of {lambda} that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter {omega}, indicating that with this measure there is a possibility of detectable negative curvature.« less
Latimer, Nicholas R; Abrams, K R; Lambert, P C; Crowther, M J; Wailoo, A J; Morden, J P; Akehurst, R L; Campbell, M J
2017-04-01
Estimates of the overall survival benefit of new cancer treatments are often confounded by treatment switching in randomised controlled trials (RCTs) - whereby patients randomised to the control group are permitted to switch onto the experimental treatment upon disease progression. In health technology assessment, estimates of the unconfounded overall survival benefit associated with the new treatment are needed. Several switching adjustment methods have been advocated in the literature, some of which have been used in health technology assessment. However, it is unclear which methods are likely to produce least bias in realistic RCT-based scenarios. We simulated RCTs in which switching, associated with patient prognosis, was permitted. Treatment effect size and time dependency, switching proportions and disease severity were varied across scenarios. We assessed the performance of alternative adjustment methods based upon bias, coverage and mean squared error, related to the estimation of true restricted mean survival in the absence of switching in the control group. We found that when the treatment effect was not time-dependent, rank preserving structural failure time models (RPSFTM) and iterative parameter estimation methods produced low levels of bias. However, in the presence of a time-dependent treatment effect, these methods produced higher levels of bias, similar to those produced by an inverse probability of censoring weights method. The inverse probability of censoring weights and structural nested models produced high levels of bias when switching proportions exceeded 85%. A simplified two-stage Weibull method produced low bias across all scenarios and provided the treatment switching mechanism is suitable, represents an appropriate adjustment method.
Brownian Motion with Active Fluctuations
NASA Astrophysics Data System (ADS)
Romanczuk, Pawel; Schimansky-Geier, Lutz
2011-06-01
We study the effect of different types of fluctuation on the motion of self-propelled particles in two spatial dimensions. We distinguish between passive and active fluctuations. Passive fluctuations (e.g., thermal fluctuations) are independent of the orientation of the particle. In contrast, active ones point parallel or perpendicular to the time dependent orientation of the particle. We derive analytical expressions for the speed and velocity probability density for a generic model of active Brownian particles, which yields an increased probability of low speeds in the presence of active fluctuations in comparison to the case of purely passive fluctuations. As a consequence, we predict sharply peaked Cartesian velocity probability densities at the origin. Finally, we show that such a behavior may also occur in non-Gaussian active fluctuations and discuss briefly correlations of the fluctuating stochastic forces.
Ciofu, Oana; Yang, Liang; Wu, Hong; Song, Zhijun; Oliver, Antonio; Høiby, Niels
2013-01-01
Resistance to β-lactam antibiotics is a frequent problem in Pseudomonas aeruginosa lung infection of cystic fibrosis (CF) patients. This resistance is mainly due to the hyperproduction of chromosomally encoded β-lactamase and biofilm formation. The purpose of this study was to investigate the role of β-lactamase in the pharmacokinetics (PK) and pharmacodynamics (PD) of ceftazidime and imipenem on P. aeruginosa biofilms. P. aeruginosa PAO1 and its corresponding β-lactamase-overproducing mutant, PAΔDDh2Dh3, were used in this study. Biofilms of these two strains in flow chambers, microtiter plates, and on alginate beads were treated with different concentrations of ceftazidime and imipenem. The kinetics of antibiotics on the biofilms was investigated in vitro by time-kill methods. Time-dependent killing of ceftazidime was observed in PAO1 biofilms, but concentration-dependent killing activity of ceftazidime was observed for β-lactamase-overproducing biofilms of P. aeruginosa in all three models. Ceftazidime showed time-dependent killing on planktonic PAO1 and PAΔDDh2Dh3. This difference is probably due to the special distribution and accumulation in the biofilm matrix of β-lactamase, which can hydrolyze the β-lactam antibiotics. The PK/PD indices of the AUC/MBIC and Cmax/MBIC (AUC is the area under concentration-time curve, MBIC is the minimal biofilm-inhibitory concentration, and Cmax is the maximum concentration of drug in serum) are probably the best parameters to describe the effect of ceftazidime in β-lactamase-overproducing P. aeruginosa biofilms. Meanwhile, imipenem showed time-dependent killing on both PAO1 and PAΔDDh2Dh3 biofilms. An inoculum effect of β-lactams was found for both planktonic and biofilm P. aeruginosa cells. The inoculum effect of ceftazidime for the β-lactamase-overproducing mutant PAΔDDh2Dh3 biofilms was more obvious than for PAO1 biofilms, with a requirement of higher antibiotic concentration and a longer period of treatment. PMID:23089750
Gamallo, Pablo; Akpinar, Sinan; Defazio, Paolo; Petrongolo, Carlo
2014-08-21
We present the adiabatic quantum dynamics of the proton-transfer reaction H((2)S) + HeH(+)(X(1)Σ(+)) → H2(+)(X(2)Σg(+)) + He((1)S) on the HeH2(+) X̃(2)Σ(+) RMRCI6 (M = 6) PES of C. N. Ramachandran et al. ( Chem. Phys. Lett. 2009, 469, 26). We consider the HeH(+) molecule in the ground vibrational–rotational state and obtain initial-state-resolved reaction probabilities and the ground-state cross section σ0 and rate constant k0 by propagating time-dependent, coupled-channel, real wavepackets (RWPs) and performing a flux analysis. Three different wavepackets are propagated to describe the wide range of energies explored, from cold (0.0001 meV) to hyperthermal (1000 meV) collision energies, and in a temperature range from 0.01 to 2000 K. We compare our time-dependent results with the time-independent ones by D. De Fazio and S. Bovino et al., where De Fazio carried out benchmark coupled-channel calculations whereas Bovino et al. employed the negative imaginary potential and the centrifugal-sudden approximations. The RWP cross section is in good agreement with that by De Fazio, except at the lowest collision energies below ∼0.01 meV, where the former is larger than the latter. However, neither the RWP and De Fazio results possess the huge resonance in probability and cross section at 0.01 meV, found by Bovino et al., who also obtained a too low σ0 at high energies. Therefore, the RWP and De Fazio rate constants compare quite well, whereas that by Bovino et al. is in general lower.
A methodology for stochastic analysis of share prices as Markov chains with finite states.
Mettle, Felix Okoe; Quaye, Enoch Nii Boi; Laryea, Ravenhill Adjetey
2014-01-01
Price volatilities make stock investments risky, leaving investors in critical position when uncertain decision is made. To improve investor evaluation confidence on exchange markets, while not using time series methodology, we specify equity price change as a stochastic process assumed to possess Markov dependency with respective state transition probabilities matrices following the identified state pace (i.e. decrease, stable or increase). We established that identified states communicate, and that the chains are aperiodic and ergodic thus possessing limiting distributions. We developed a methodology for determining expected mean return time for stock price increases and also establish criteria for improving investment decision based on highest transition probabilities, lowest mean return time and highest limiting distributions. We further developed an R algorithm for running the methodology introduced. The established methodology is applied to selected equities from Ghana Stock Exchange weekly trading data.
NASA Astrophysics Data System (ADS)
Pavlov, A. V.; Pavlova, N. M.
2007-11-01
We present a study of anomalous night-time NmF2 peaks, ANNPs, observed by the La Paz, Natal, Djibouti, Kodaikanal, Madras, Manila, Talara, and Huancayo Jicamarca ionosonde stations close to the geomagnetic equator. It is shown for the first time that the probabilities of occurrence of the first and second ANNPs depend on the geomagnetic longitude, and there is a longitude sector close to 110° geomagnetic longitude where the first and second ANNPs occur less frequently in comparison with the longitude regions located close to and below about 34° geomagnetic longitude and close to and above about 144° geomagnetic longitude. The found frequencies of occurrence of the ANNPs increase with increasing solar activity, except of the Djibouti and Kodaikanal ionosonde stations, where the probability of the first ANNP occurrence is found to decrease with increasing solar activity from low to moderate solar activity, and except of the Natal ionosonde station, where the frequencies of occurrence of the first and second ANNPs decrease with increasing solar activity from moderate to high solar activity. We found that the occurrence probabilities of ANNPs during geomagnetically disturbed conditions are greater than those during geomagnetically quiet conditions. The ANNP probabilities are largest in summer and are lowest in winter for the La-Paz, Talara, and Huancayo Jicamarca sounders. These probabilities are lowest in summer for the Djibouti, Madras, and Manila ionosonde stations, and in spring for the Kodaikanal sounder. The maximums in the probabilities are found to be in autumn for the Djibouti, Madras, and Manila ionosonde stations, and in winter for the Kodaikanal sounder.
Non-Kolmogorovian Approach to the Context-Dependent Systems Breaking the Classical Probability Law
NASA Astrophysics Data System (ADS)
Asano, Masanari; Basieva, Irina; Khrennikov, Andrei; Ohya, Masanori; Yamato, Ichiro
2013-07-01
There exist several phenomena breaking the classical probability laws. The systems related to such phenomena are context-dependent, so that they are adaptive to other systems. In this paper, we present a new mathematical formalism to compute the joint probability distribution for two event-systems by using concepts of the adaptive dynamics and quantum information theory, e.g., quantum channels and liftings. In physics the basic example of the context-dependent phenomena is the famous double-slit experiment. Recently similar examples have been found in biological and psychological sciences. Our approach is an extension of traditional quantum probability theory, and it is general enough to describe aforementioned contextual phenomena outside of quantum physics.
The gravitational law of social interaction
NASA Astrophysics Data System (ADS)
Levy, Moshe; Goldenberg, Jacob
2014-01-01
While a great deal is known about the topology of social networks, there is much less agreement about the geographical structure of these networks. The fundamental question in this context is: how does the probability of a social link between two individuals depend on the physical distance between them? While it is clear that the probability decreases with the distance, various studies have found different functional forms for this dependence. The exact form of the distance dependence has crucial implications for network searchability and dynamics: Kleinberg (2000) [15] shows that the small-world property holds if the probability of a social link is a power-law function of the distance with power -2, but not with any other power. We investigate the distance dependence of link probability empirically by analyzing four very different sets of data: Facebook links, data from the electronic version of the Small-World experiment, email messages, and data from detailed personal interviews. All four datasets reveal the same empirical regularity: the probability of a social link is proportional to the inverse of the square of the distance between the two individuals, analogously to the distance dependence of the gravitational force. Thus, it seems that social networks spontaneously converge to the exact unique distance dependence that ensures the Small-World property.
NASA Astrophysics Data System (ADS)
Tang, Yinan; Chen, Ping
2014-06-01
The sub-prime crisis in the U.S. reveals the limitation of diversification strategy based on mean-variance analysis. A regime switch and a turning point can be observed using a high moment representation and time-dependent transition probability. Up-down price movements are induced by interactions among agents, which can be described by the birth-death (BD) process. Financial instability is visible by dramatically increasing 3rd to 5th moments one-quarter before and during the crisis. The sudden rising high moments provide effective warning signals of a regime-switch or a coming crisis. The critical condition of a market breakdown can be identified from nonlinear stochastic dynamics. The master equation approach of population dynamics provides a unified theory of a calm and turbulent market.
Survival probability of diffusion with trapping in cellular neurobiology
NASA Astrophysics Data System (ADS)
Holcman, David; Marchewka, Avi; Schuss, Zeev
2005-09-01
The problem of diffusion with absorption and trapping sites arises in the theory of molecular signaling inside and on the membranes of biological cells. In particular, this problem arises in the case of spine-dendrite communication, where the number of calcium ions, modeled as random particles, is regulated across the spine microstructure by pumps, which play the role of killing sites, while the end of the dendritic shaft is an absorbing boundary. We develop a general mathematical framework for diffusion in the presence of absorption and killing sites and apply it to the computation of the time-dependent survival probability of ions. We also compute the ratio of the number of absorbed particles at a specific location to the number of killed particles. We show that the ratio depends on the distribution of killing sites. The biological consequence is that the position of the pumps regulates the fraction of calcium ions that reach the dendrite.
A Markovian event-based framework for stochastic spiking neural networks.
Touboul, Jonathan D; Faugeras, Olivier D
2011-11-01
In spiking neural networks, the information is conveyed by the spike times, that depend on the intrinsic dynamics of each neuron, the input they receive and on the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce from a spike train the next spike time, and therefore produce a description of the network activity only based on the spike times regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e. that is based on the computation of the spike times. We show that the firing times of the neurons in the networks constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike interval of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is explicitly derived in such classical cases of neural networks as the linear integrate-and-fire neuron models with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays and absolute and relative refractory period. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.
Joint distribution of temperature and precipitation in the Mediterranean, using the Copula method
NASA Astrophysics Data System (ADS)
Lazoglou, Georgia; Anagnostopoulou, Christina
2018-03-01
This study analyses the temperature and precipitation dependence among stations in the Mediterranean. The first station group is located in the eastern Mediterranean (EM) and includes two stations, Athens and Thessaloniki, while the western (WM) one includes Malaga and Barcelona. The data was organized in two time periods, the hot-dry period and the cold-wet one, composed of 5 months, respectively. The analysis is based on a new statistical technique in climatology: the Copula method. Firstly, the calculation of the Kendall tau correlation index showed that temperatures among stations are dependant during both time periods whereas precipitation presents dependency only between the stations located in EM or WM and only during the cold-wet period. Accordingly, the marginal distributions were calculated for each studied station, as they are further used by the copula method. Finally, several copula families, both Archimedean and Elliptical, were tested in order to choose the most appropriate one to model the relation of the studied data sets. Consequently, this study achieves to model the dependence of the main climate parameters (temperature and precipitation) with the Copula method. The Frank copula was identified as the best family to describe the joint distribution of temperature, for the majority of station groups. For precipitation, the best copula families are BB1 and Survival Gumbel. Using the probability distribution diagrams, the probability of a combination of temperature and precipitation values between stations is estimated.
Revision of Time-Independent Probabilistic Seismic Hazard Maps for Alaska
Wesson, Robert L.; Boyd, Oliver S.; Mueller, Charles S.; Bufe, Charles G.; Frankel, Arthur D.; Petersen, Mark D.
2007-01-01
We present here time-independent probabilistic seismic hazard maps of Alaska and the Aleutians for peak ground acceleration (PGA) and 0.1, 0.2, 0.3, 0.5, 1.0 and 2.0 second spectral acceleration at probability levels of 2 percent in 50 years (annual probability of 0.000404), 5 percent in 50 years (annual probability of 0.001026) and 10 percent in 50 years (annual probability of 0.0021). These maps represent a revision of existing maps based on newly obtained data and assumptions reflecting best current judgments about methodology and approach. These maps have been prepared following the procedures and assumptions made in the preparation of the 2002 National Seismic Hazard Maps for the lower 48 States. A significant improvement relative to the 2002 methodology is the ability to include variable slip rate along a fault where appropriate. These maps incorporate new data, the responses to comments received at workshops held in Fairbanks and Anchorage, Alaska, in May, 2005, and comments received after draft maps were posted on the National Seismic Hazard Mapping Web Site. These maps will be proposed for adoption in future revisions to the International Building Code. In this documentation we describe the maps and in particular explain and justify changes that have been made relative to the 1999 maps. We are also preparing a series of experimental maps of time-dependent hazard that will be described in future documents.
Unimolecular diffusion-mediated reactions with a nonrandom time-modulated absorbing barrier
NASA Technical Reports Server (NTRS)
Bashford, D.; Weaver, D. L.
1986-01-01
A diffusion-reaction model with time-dependent reactivity is formulated and applied to unimolecular reactions. The model is solved exactly numerically and approximately analytically for the unreacted fraction as a function of time. It is shown that the approximate analytical solution is valid even when the system is far from equilibrium, and when the reactivity probability is more complicated than a square-wave function of time. A discussion is also given of an approach to problems of this type using a stochastically fluctuating reactivity, and the first-passage time for a particular example is derived.
Pirozzi, Enrica
2018-04-01
High variability in the neuronal response to stimulations and the adaptation phenomenon cannot be explained by the standard stochastic leaky integrate-and-fire model. The main reason is that the uncorrelated inputs involved in the model are not realistic. There exists some form of dependency between the inputs, and it can be interpreted as memory effects. In order to include these physiological features in the standard model, we reconsider it with time-dependent coefficients and correlated inputs. Due to its hard mathematical tractability, we perform simulations of it for a wide investigation of its output. A Gauss-Markov process is constructed for approximating its non-Markovian dynamics. The first passage time probability density of such a process can be numerically evaluated, and it can be used to fit the histograms of simulated firing times. Some estimates of the moments of firing times are also provided. The effect of the correlation time of the inputs on firing densities and on firing rates is shown. An exponential probability density of the first firing time is estimated for low values of input current and high values of correlation time. For comparison, a simulation-based investigation is also carried out for a fractional stochastic model that allows to preserve the memory of the time evolution of the neuronal membrane potential. In this case, the memory parameter that affects the firing activity is the fractional derivative order. In both models an adaptation level of spike frequency is attained, even if along different modalities. Comparisons and discussion of the obtained results are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.
Probability of loss of assured safety (PLOAS) is modeled for weak link (WL)/strong link (SL) systems in which one or more WLs or SLs could potentially degrade into a precursor condition to link failure that will be followed by an actual failure after some amount of elapsed time. The following topics are considered: (i) Definition of precursor occurrence time cumulative distribution functions (CDFs) for individual WLs and SLs, (ii) Formal representation of PLOAS with constant delay times, (iii) Approximation and illustration of PLOAS with constant delay times, (iv) Formal representation of PLOAS with aleatory uncertainty in delay times, (v) Approximationmore » and illustration of PLOAS with aleatory uncertainty in delay times, (vi) Formal representation of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, (vii) Approximation and illustration of PLOAS with delay times defined by functions of link properties at occurrence times for failure precursors, and (viii) Procedures for the verification of PLOAS calculations for the three indicated definitions of delayed link failure.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Jay Dean; Oberkampf, William Louis; Helton, Jon Craig
2004-12-01
Relationships to determine the probability that a weak link (WL)/strong link (SL) safety system will fail to function as intended in a fire environment are investigated. In the systems under study, failure of the WL system before failure of the SL system is intended to render the overall system inoperational and thus prevent the possible occurrence of accidents with potentially serious consequences. Formal developments of the probability that the WL system fails to deactivate the overall system before failure of the SL system (i.e., the probability of loss of assured safety, PLOAS) are presented for several WWSL configurations: (i) onemore » WL, one SL, (ii) multiple WLs, multiple SLs with failure of any SL before any WL constituting failure of the safety system, (iii) multiple WLs, multiple SLs with failure of all SLs before any WL constituting failure of the safety system, and (iv) multiple WLs, multiple SLs and multiple sublinks in each SL with failure of any sublink constituting failure of the associated SL and failure of all SLs before failure of any WL constituting failure of the safety system. The indicated probabilities derive from time-dependent temperatures in the WL/SL system and variability (i.e., aleatory uncertainty) in the temperatures at which the individual components of this system fail and are formally defined as multidimensional integrals. Numerical procedures based on quadrature (i.e., trapezoidal rule, Simpson's rule) and also on Monte Carlo techniques (i.e., simple random sampling, importance sampling) are described and illustrated for the evaluation of these integrals. Example uncertainty and sensitivity analyses for PLOAS involving the representation of uncertainty (i.e., epistemic uncertainty) with probability theory and also with evidence theory are presented.« less
A coupled hidden Markov model for disease interactions
Sherlock, Chris; Xifara, Tatiana; Telfer, Sandra; Begon, Mike
2013-01-01
To investigate interactions between parasite species in a host, a population of field voles was studied longitudinally, with presence or absence of six different parasites measured repeatedly. Although trapping sessions were regular, a different set of voles was caught at each session, leading to incomplete profiles for all subjects. We use a discrete time hidden Markov model for each disease with transition probabilities dependent on covariates via a set of logistic regressions. For each disease the hidden states for each of the other diseases at a given time point form part of the covariate set for the Markov transition probabilities from that time point. This allows us to gauge the influence of each parasite species on the transition probabilities for each of the other parasite species. Inference is performed via a Gibbs sampler, which cycles through each of the diseases, first using an adaptive Metropolis–Hastings step to sample from the conditional posterior of the covariate parameters for that particular disease given the hidden states for all other diseases and then sampling from the hidden states for that disease given the parameters. We find evidence for interactions between several pairs of parasites and of an acquired immune response for two of the parasites. PMID:24223436
Beeler, Nicholas M.; Roeloffs, Evelyn A.; McCausland, Wendy
2013-01-01
Mazzotti and Adams (2004) estimated that rapid deep slip during typically two week long episodes beneath northern Washington and southern British Columbia increases the probability of a great Cascadia earthquake by 30–100 times relative to the probability during the ∼58 weeks between slip events. Because the corresponding absolute probability remains very low at ∼0.03% per week, their conclusion is that though it is more likely that a great earthquake will occur during a rapid slip event than during other times, a great earthquake is unlikely to occur during any particular rapid slip event. This previous estimate used a failure model in which great earthquakes initiate instantaneously at a stress threshold. We refine the estimate, assuming a delayed failure model that is based on laboratory‐observed earthquake initiation. Laboratory tests show that failure of intact rock in shear and the onset of rapid slip on pre‐existing faults do not occur at a threshold stress. Instead, slip onset is gradual and shows a damped response to stress and loading rate changes. The characteristic time of failure depends on loading rate and effective normal stress. Using this model, the probability enhancement during the period of rapid slip in Cascadia is negligible (<10%) for effective normal stresses of 10 MPa or more and only increases by 1.5 times for an effective normal stress of 1 MPa. We present arguments that the hypocentral effective normal stress exceeds 1 MPa. In addition, the probability enhancement due to rapid slip extends into the interevent period. With this delayed failure model for effective normal stresses greater than or equal to 50 kPa, it is more likely that a great earthquake will occur between the periods of rapid deep slip than during them. Our conclusion is that great earthquake occurrence is not significantly enhanced by episodic deep slip events.
Observations of the auroral hectometric radio emission onboard the INTERBALL-1 satellite
NASA Astrophysics Data System (ADS)
Kuril'Chik, V. N.
2007-06-01
The results of five-year (1995 2000) continuous observations of the auroral radio emission (ARE) in the hectometric wavelength range on the high-apogee INTERBALL-1 satellite are presented. Short intense bursts of the auroral hectometric radio emission (AHR) were observed at frequencies of 1463 and 1501 kHz. The bursts were observed predominantly at times when the terrestrial magnetosphere was undisturbed (in the quiet Sun period), and their number decreased rapidly with increasing solar activity. The bursts demonstrated seasonal dependence in the Northern and Southern hemispheres (dominating in the autumn-winter period). Their appearance probably depends on the observation time (UT). A qualitative explanation of the AHR peculiarities is given.
Minimum time search in uncertain dynamic domains with complex sensorial platforms.
Lanillos, Pablo; Besada-Portas, Eva; Lopez-Orozco, Jose Antonio; de la Cruz, Jesus Manuel
2014-08-04
The minimum time search in uncertain domains is a searching task, which appears in real world problems such as natural disasters and sea rescue operations, where a target has to be found, as soon as possible, by a set of sensor-equipped searchers. The automation of this task, where the time to detect the target is critical, can be achieved by new probabilistic techniques that directly minimize the Expected Time (ET) to detect a dynamic target using the observation probability models and actual observations collected by the sensors on board the searchers. The selected technique, described in algorithmic form in this paper for completeness, has only been previously partially tested with an ideal binary detection model, in spite of being designed to deal with complex non-linear/non-differential sensorial models. This paper covers the gap, testing its performance and applicability over different searching tasks with searchers equipped with different complex sensors. The sensorial models under test vary from stepped detection probabilities to continuous/discontinuous differentiable/non-differentiable detection probabilities dependent on distance, orientation, and structured maps. The analysis of the simulated results of several static and dynamic scenarios performed in this paper validates the applicability of the technique with different types of sensor models.
Minimum Time Search in Uncertain Dynamic Domains with Complex Sensorial Platforms
Lanillos, Pablo; Besada-Portas, Eva; Lopez-Orozco, Jose Antonio; de la Cruz, Jesus Manuel
2014-01-01
The minimum time search in uncertain domains is a searching task, which appears in real world problems such as natural disasters and sea rescue operations, where a target has to be found, as soon as possible, by a set of sensor-equipped searchers. The automation of this task, where the time to detect the target is critical, can be achieved by new probabilistic techniques that directly minimize the Expected Time (ET) to detect a dynamic target using the observation probability models and actual observations collected by the sensors on board the searchers. The selected technique, described in algorithmic form in this paper for completeness, has only been previously partially tested with an ideal binary detection model, in spite of being designed to deal with complex non-linear/non-differential sensorial models. This paper covers the gap, testing its performance and applicability over different searching tasks with searchers equipped with different complex sensors. The sensorial models under test vary from stepped detection probabilities to continuous/discontinuous differentiable/non-differentiable detection probabilities dependent on distance, orientation, and structured maps. The analysis of the simulated results of several static and dynamic scenarios performed in this paper validates the applicability of the technique with different types of sensor models. PMID:25093345
NASA Astrophysics Data System (ADS)
Polotto, Franciele; Drigo Filho, Elso; Chahine, Jorge; Oliveira, Ronaldo Junio de
2018-03-01
This work developed analytical methods to explore the kinetics of the time-dependent probability distributions over thermodynamic free energy profiles of protein folding and compared the results with simulation. The Fokker-Planck equation is mapped onto a Schrödinger-type equation due to the well-known solutions of the latter. Through a semi-analytical description, the supersymmetric quantum mechanics formalism is invoked and the time-dependent probability distributions are obtained with numerical calculations by using the variational method. A coarse-grained structure-based model of the two-state protein Tm CSP was simulated at a Cα level of resolution and the thermodynamics and kinetics were fully characterized. Analytical solutions from non-equilibrium conditions were obtained with the simulated double-well free energy potential and kinetic folding times were calculated. It was found that analytical folding time as a function of temperature agrees, quantitatively, with simulations and experiments from the literature of Tm CSP having the well-known 'U' shape of the Chevron Plots. The simple analytical model developed in this study has a potential to be used by theoreticians and experimentalists willing to explore, quantitatively, rates and the kinetic behavior of their system by informing the thermally activated barrier. The theory developed describes a stochastic process and, therefore, can be applied to a variety of biological as well as condensed-phase two-state systems.
A case of oral spelling behavior: another environmental dependency syndrome.
Ragno Paquier, Claire; Assal, Frédéric
2007-12-01
To present the case of a woman with a probable vascular dementia and a unique oral spelling behavior (OSB). Our aim was to better characterize the mechanisms underlying OSB in this patient with advanced cognitive and noncognitive deficits. Different kind of language deficits have been described in dementias and other related diseases. Other nonaphasic acquired communication disorders include a heterogeneous group of verbal impairment that are mainly due to attentional and executive impairments. We describe the neurolinguistic and neuropsychologic analysis of an 88-year-old woman with spontaneous OSB. Computed tomography-scan at the time of the clinical analysis is also presented. OSB occurred preferentially in spontaneous discourse, manifested in other language tasks, concerned all lexical categories, and was not accompanied by spelling deficit although language was not intact and disclosed foreign accent syndrome. Environmental dependency syndrome or utilization behavior was prominent and concerned various sensorimotor activities. We postulate that OSB was not secondary to a linguistic or more specifically to a spelling deficit but to another variant of environmental dependency syndrome and probably associated with a cortico-subcortical lesion including the left prefrontal region.
Hosokawa, T; Tanaka, Y; Aoike, A; Kawai, K; Muramatsu, S
1984-09-01
The time course of B-cell memory development to a dinitrophenyl (DNP) T-independent type-2 (TI-2) antigen was investigated by adoptive cell transfer. Strong IgM and IgG memory developed in BALB/c mice after immunization with DNP-dextran, to be recalled by challenge with either T-dependent (TD) antigen or TI-2 antigen. However, only weak IgM memory and very feeble IgG memory were detected in athymic nude mice receiving the same immunization as euthymic mice. Once memory was established under probable T cell influence, its recall by TI-2 antigen challenge seemed independent of T cell help and did not require sharing of carriers between priming and challenge antigens. The following may be concluded. (i) Long-term IgM and IgG memory is induced by TI-2 antigen priming in the presence of functional T cells. (ii) The class switch from IgM to IgG in the memory B cell pool is driven effectively by TI-2 antigen and is probably T cell-dependent.
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
NASA Astrophysics Data System (ADS)
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution system network today is facing the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as season of the year, day of the week or time of the day. For deterministic radial distribution load flow studies load is taken as constant. But, load varies continually with a high degree of uncertainty. So, there is a need to model probable realistic load. Monte-Carlo Simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load and for solving a Deterministic Radial Load Flow with these values. The probabilistic solution is reconstructed from deterministic data obtained for each simulation. The main contribution of the work is: Finding impact of probable realistic ZIP load modeling on balanced radial distribution load flow. Finding impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow. Compare the voltage profile and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
NASA Astrophysics Data System (ADS)
Villanueva, Anthony Allan D.
2018-02-01
We discuss a class of solutions of the time-dependent Schrödinger equation such that the position uncertainty temporarily decreases. This self-focusing or contractive behavior is a consequence of the anti-correlation of the position and momentum observables. Since the associated position density satisfies a continuity equation, upon contraction the probability current at a given fixed point may flow in the opposite direction of the group velocity of the wave packet. For definiteness, we consider a free particle incident from the left of the origin, and establish a condition for the initial position-momentum correlation such that a negative probability current at the origin is possible. This implies a decrease in the particle's detection probability in the region x > 0, and we calculate how long this occurs. Analogous results are obtained for a particle subject to a uniform gravitational force if we consider the particle approaching the turning point. We show that position-momentum anti-correlation may cause a negative probability current at the turning point, leading to a temporary decrease in the particle's detection probability in the classically forbidden region.
Quantum transfer energy in the framework of time-dependent dipole-dipole interaction
NASA Astrophysics Data System (ADS)
El-Shishtawy, Reda M.; Haddon, Robert C.; Al-Heniti, Saleh H.; Raffah, Bahaaudin M.; Berrada, K.; Abdel-Khalek, S.; Al-Hadeethi, Yas F.
2018-03-01
In this work, we examine the process of the quantum transfer of energy considering time-dependent dipole-dipole interaction in a dimer system characterized by two-level atom systems. By taking into account the effect of the acceleration and speed of the atoms in the dimer coupling, we demonstrate that the improvement of the probability for a single-excitation transfer energy extremely benefits from the incorporation of atomic motion effectiveness and the energy detuning. We explore the relevance between the population and entanglement during the time-evolution and show that this kind of nonlocal correlation may be generated during the process of the transfer of energy. Our work may provide optimal conditions to implement realistic experimental scenario in the transfer of the quantum energy.
Time Dependence of Collision Probabilities During Satellite Conjunctions
NASA Technical Reports Server (NTRS)
Hall, Doyle T.; Hejduk, Matthew D.; Johnson, Lauren C.
2017-01-01
The NASA Conjunction Assessment Risk Analysis (CARA) team has recently implemented updated software to calculate the probability of collision (P (sub c)) for Earth-orbiting satellites. The algorithm can employ complex dynamical models for orbital motion, and account for the effects of non-linear trajectories as well as both position and velocity uncertainties. This “3D P (sub c)” method entails computing a 3-dimensional numerical integral for each estimated probability. Our analysis indicates that the 3D method provides several new insights over the traditional “2D P (sub c)” method, even when approximating the orbital motion using the relatively simple Keplerian two-body dynamical model. First, the formulation provides the means to estimate variations in the time derivative of the collision probability, or the probability rate, R (sub c). For close-proximity satellites, such as those orbiting in formations or clusters, R (sub c) variations can show multiple peaks that repeat or blend with one another, providing insight into the ongoing temporal distribution of risk. For single, isolated conjunctions, R (sub c) analysis provides the means to identify and bound the times of peak collision risk. Additionally, analysis of multiple actual archived conjunctions demonstrates that the commonly used “2D P (sub c)” approximation can occasionally provide inaccurate estimates. These include cases in which the 2D method yields negligibly small probabilities (e.g., P (sub c)) is greater than 10 (sup -10)), but the 3D estimates are sufficiently large to prompt increased monitoring or collision mitigation (e.g., P (sub c) is greater than or equal to 10 (sup -5)). Finally, the archive analysis indicates that a relatively efficient calculation can be used to identify which conjunctions will have negligibly small probabilities. This small-P (sub c) screening test can significantly speed the overall risk analysis computation for large numbers of conjunctions.
van Voorn, George A. K.; Ligtenberg, Arend; Molenaar, Jaap
2017-01-01
Adaptation of agents through learning or evolution is an important component of the resilience of Complex Adaptive Systems (CAS). Without adaptation, the flexibility of such systems to cope with outside pressures would be much lower. To study the capabilities of CAS to adapt, social simulations with agent-based models (ABMs) provide a helpful tool. However, the value of ABMs for studying adaptation depends on the availability of methodologies for sensitivity analysis that can quantify resilience and adaptation in ABMs. In this paper we propose a sensitivity analysis methodology that is based on comparing time-dependent probability density functions of output of ABMs with and without agent adaptation. The differences between the probability density functions are quantified by the so-called earth-mover’s distance. We use this sensitivity analysis methodology to quantify the probability of occurrence of critical transitions and other long-term effects of agent adaptation. To test the potential of this new approach, it is used to analyse the resilience of an ABM of adaptive agents competing for a common-pool resource. Adaptation is shown to contribute positively to the resilience of this ABM. If adaptation proceeds sufficiently fast, it may delay or avert the collapse of this system. PMID:28196372
Resilience through adaptation.
Ten Broeke, Guus A; van Voorn, George A K; Ligtenberg, Arend; Molenaar, Jaap
2017-01-01
Adaptation of agents through learning or evolution is an important component of the resilience of Complex Adaptive Systems (CAS). Without adaptation, the flexibility of such systems to cope with outside pressures would be much lower. To study the capabilities of CAS to adapt, social simulations with agent-based models (ABMs) provide a helpful tool. However, the value of ABMs for studying adaptation depends on the availability of methodologies for sensitivity analysis that can quantify resilience and adaptation in ABMs. In this paper we propose a sensitivity analysis methodology that is based on comparing time-dependent probability density functions of output of ABMs with and without agent adaptation. The differences between the probability density functions are quantified by the so-called earth-mover's distance. We use this sensitivity analysis methodology to quantify the probability of occurrence of critical transitions and other long-term effects of agent adaptation. To test the potential of this new approach, it is used to analyse the resilience of an ABM of adaptive agents competing for a common-pool resource. Adaptation is shown to contribute positively to the resilience of this ABM. If adaptation proceeds sufficiently fast, it may delay or avert the collapse of this system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wampler, William R.; Myers, Samuel M.; Modine, Normand A.
2017-09-01
The energy-dependent probability density of tunneled carrier states for arbitrarily specified longitudinal potential-energy profiles in planar bipolar devices is numerically computed using the scattering method. Results agree accurately with a previous treatment based on solution of the localized eigenvalue problem, where computation times are much greater. These developments enable quantitative treatment of tunneling-assisted recombination in irradiated heterojunction bipolar transistors, where band offsets may enhance the tunneling effect by orders of magnitude. The calculations also reveal the density of non-tunneled carrier states in spatially varying potentials, and thereby test the common approximation of uniform- bulk values for such densities.
NASA Astrophysics Data System (ADS)
Mang, Chao-Yong; Liu, Cai-Ping; Liu, Guang-Ming; Jiang, Bei; Lan, Hai; Wu, Ke-Chen; Yan, Ya; Li, Hai-Fei; Yang, Ming-Hui; Zhao, Yu
2015-02-01
A cyclic dipeptide often has the multiple configurations and the abundant conformations. The density functional theory (DFT) method is used to search the preferred conformation of the most probable configuration for cordycedipeptide A isolated from the culture liquid of Cordyceps sinensis. The time-dependent DFT approach is exploited to describe the profile of electronic circular dichroism (CD). The calculated results show that the most probable configuration is 3S6R7S, whose preferred conformation has a negative optical rotation and a positive lowest energy electronic CD band.
Self tuning system for industrial surveillance
Stephan, Wegerich W; Jarman, Kristin K.; Gross, Kenneth C.
2000-01-01
A method and system for automatically establishing operational parameters of a statistical surveillance system. The method and system performs a frequency domain transition on time dependent data, a first Fourier composite is formed, serial correlation is removed, a series of Gaussian whiteness tests are performed along with an autocorrelation test, Fourier coefficients are stored and a second Fourier composite is formed. Pseudorandom noise is added, a Monte Carlo simulation is performed to establish SPRT missed alarm probabilities and tested with a synthesized signal. A false alarm test is then emperically evaluated and if less than a desired target value, then SPRT probabilities are used for performing surveillance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cartarius, Holger; Moiseyev, Nimrod; Department of Physics and Minerva Center for Nonlinear Physics of Complex Systems, Technion-Israel Institute of Technology, Haifa, 32000
The unique time signature of the survival probability exactly at the exceptional point parameters is studied here for the hydrogen atom in strong static magnetic and electric fields. We show that indeed the survival probability S(t)=|<{psi}(0)|{psi}(t)>|{sup 2} decays exactly as |1-at|{sup 2}e{sup -{Gamma}{sub E}{sub P}t/({Dirac_h}/2{pi})}, where {Gamma}{sub EP} is associated with the decay rate at the exceptional point and a is a complex constant depending solely on the initial wave packet that populates exclusively the two almost degenerate states of the non-Hermitian Hamiltonian. This may open the possibility for a first experimental detection of exceptional points in a quantum system.
Reichert, Brian E.; Martin, J.; Kendall, William L.; Cattau, Christopher E.; Kitchens, Wiley M.
2010-01-01
Individuals in wild populations face risks associated with both intrinsic (i.e. aging) and external (i.e. environmental) sources of mortality. Condition-dependent mortality occurs when there is an interaction between such factors; however, few studies have clearly demonstrated condition-dependent mortality and some have even argued that condition-dependent mortality does not occur in wild avian populations. Using large sample sizes (2084 individuals, 3746 re-sights) of individual-based longitudinal data collected over a 33 year period (1976-2008) on multiple cohorts, we used a capture-mark-recapture framework to model age-dependent survival in the snail kite Rostrhamus sociabilis plumbeus population in Florida. Adding to the growing amount of evidence for actuarial senescence in wild populations, we found evidence of senescent declines in survival probabilities in adult kites. We also tested the hypothesis that older kites experienced condition-dependent mortality during a range-wide drought event (2000-2002). The results provide convincing evidence that the annual survival probability of senescent kites was disproportionately affected by the drought relative to the survival probability of prime-aged adults. To our knowledge, this is the first evidence of condition-dependent mortality to be demonstrated in a wild avian population, a finding which challenges recent conclusions drawn in the literature. Our study suggests that senescence and condition-dependent mortality can affect the demography of wild avian populations. Accounting for these sources of variation may be particularly important to appropriately compute estimates of population growth rate, and probabilities of quasi-extinctions.
Deviations in the Zipf and Heaps laws in natural languages
NASA Astrophysics Data System (ADS)
Bochkarev, Vladimir V.; Lerner, Eduard Yu; Shevlyakova, Anna V.
2014-03-01
This paper is devoted to verifying of the empirical Zipf and Hips laws in natural languages using Google Books Ngram corpus data. The connection between the Zipf and Heaps law which predicts the power dependence of the vocabulary size on the text size is discussed. In fact, the Heaps exponent in this dependence varies with the increasing of the text corpus. To explain it, the obtained results are compared with the probability model of text generation. Quasi-periodic variations with characteristic time periods of 60-100 years were also found.
2013-11-25
reaction time, the core of the flow was sampled through a small orifice in a rounded nose cone, while the bulk of the gas was pumped by an oil free...than rebound for the latter. Calculations were performed with the OW quantum number J equal to 0, 3, and 6 to investigate the effect of OH-rotational...have little dependence on J. Possible effects on the atomistic mechanisms were investigated for the S(N)2 pathway and the probability of the direct
Anisotropic Defect-Mediated Melting of Two-Dimensional Colloidal Crystals
NASA Astrophysics Data System (ADS)
Eisenmann, C.; Gasser, U.; Keim, P.; Maret, G.
2004-09-01
The melting transition of anisotropic two-dimensional (2D) crystals is studied in a model system of superparamagnetic colloids. The anisotropy of the induced dipole-dipole interaction is varied by tilting the external magnetic field off the normal to the particle plane. By analyzing the time-dependent Lindemann parameter as well as translational and orientational order we observe a 2D smecticlike phase. The Kosterlitz-Thouless-Halperin-Nelson-Young scenario of isotropic melting is modified: dislocation pairs and dislocations appear with different probabilities depending on their orientation with respect to the in-plane field.
A study of two statistical methods as applied to shuttle solid rocket booster expenditures
NASA Technical Reports Server (NTRS)
Perlmutter, M.; Huang, Y.; Graves, M.
1974-01-01
The state probability technique and the Monte Carlo technique are applied to finding shuttle solid rocket booster expenditure statistics. For a given attrition rate per launch, the probable number of boosters needed for a given mission of 440 launches is calculated. Several cases are considered, including the elimination of the booster after a maximum of 20 consecutive launches. Also considered is the case where the booster is composed of replaceable components with independent attrition rates. A simple cost analysis is carried out to indicate the number of boosters to build initially, depending on booster costs. Two statistical methods were applied in the analysis: (1) state probability method which consists of defining an appropriate state space for the outcome of the random trials, and (2) model simulation method or the Monte Carlo technique. It was found that the model simulation method was easier to formulate while the state probability method required less computing time and was more accurate.
Cai, Jing; Read, Paul W; Altes, Talissa A; Molloy, Janelle A; Brookeman, James R; Sheng, Ke
2007-01-21
Treatment planning based on probability distribution function (PDF) of patient geometries has been shown a potential off-line strategy to incorporate organ motion, but the application of such approach highly depends upon the reproducibility of the PDF. In this paper, we investigated the dependences of the PDF reproducibility on the imaging acquisition parameters, specifically the scan time and the frame rate. Three healthy subjects underwent a continuous 5 min magnetic resonance (MR) scan in the sagittal plane with a frame rate of approximately 10 f s-1, and the experiments were repeated with an interval of 2 to 3 weeks. A total of nine pulmonary vessels from different lung regions (upper, middle and lower) were tracked and the dependences of their displacement PDF reproducibility were evaluated as a function of scan time and frame rate. As results, the PDF reproducibility error decreased with prolonged scans and appeared to approach equilibrium state in subjects 2 and 3 within the 5 min scan. The PDF accuracy increased in the power function with the increase of frame rate; however, the PDF reproducibility showed less sensitivity to frame rate presumably due to the randomness of breathing which dominates the effects. As the key component of the PDF-based treatment planning, the reproducibility of the PDF affects the dosimetric accuracy substantially. This study provides a reference for acquiring MR-based PDF of structures in the lung.
Evaluation of Differential DependencY (EDDY) is a statistical test for the differential dependency relationship of a set of genes between two given conditions. For each condition, possible dependency network structures are enumerated and their likelihoods are computed to represent a probability distribution of dependency networks. The difference between the probability distributions of dependency networks is computed between conditions, and its statistical significance is evaluated with random permutations of condition labels on the samples.
Evaluation of Differential DependencY (EDDY) is a statistical test for the differential dependency relationship of a set of genes between two given conditions. For each condition, possible dependency network structures are enumerated and their likelihoods are computed to represent a probability distribution of dependency networks. The difference between the probability distributions of dependency networks is computed between conditions, and its statistical significance is evaluated with random permutations of condition labels on the samples.
Statistical context shapes stimulus-specific adaptation in human auditory cortex
Henry, Molly J.; Fromboluti, Elisa Kim; McAuley, J. Devin
2015-01-01
Stimulus-specific adaptation is the phenomenon whereby neural response magnitude decreases with repeated stimulation. Inconsistencies between recent nonhuman animal recordings and computational modeling suggest dynamic influences on stimulus-specific adaptation. The present human electroencephalography (EEG) study investigates the potential role of statistical context in dynamically modulating stimulus-specific adaptation by examining the auditory cortex-generated N1 and P2 components. As in previous studies of stimulus-specific adaptation, listeners were presented with oddball sequences in which the presentation of a repeated tone was infrequently interrupted by rare spectral changes taking on three different magnitudes. Critically, the statistical context varied with respect to the probability of small versus large spectral changes within oddball sequences (half of the time a small change was most probable; in the other half a large change was most probable). We observed larger N1 and P2 amplitudes (i.e., release from adaptation) for all spectral changes in the small-change compared with the large-change statistical context. The increase in response magnitude also held for responses to tones presented with high probability, indicating that statistical adaptation can overrule stimulus probability per se in its influence on neural responses. Computational modeling showed that the degree of coadaptation in auditory cortex changed depending on the statistical context, which in turn affected stimulus-specific adaptation. Thus the present data demonstrate that stimulus-specific adaptation in human auditory cortex critically depends on statistical context. Finally, the present results challenge the implicit assumption of stationarity of neural response magnitudes that governs the practice of isolating established deviant-detection responses such as the mismatch negativity. PMID:25652920
Vacuum Decay via Lorentzian Wormholes
NASA Astrophysics Data System (ADS)
Rosales, J. L.
We speculate about the space-time description due to the presence of Lorentzian worm-holes (handles in space-time joining two distant regions or other universes) in quantum gravity. The semiclassical rate of production of these Lorentzian wormholes in Reissner-Nordström space-times is calculated as a result of the spontaneous decay of vacuum due to a real tunneling configuration. In the magnetic case it only depends on the value of the field theoretical fine structure constant. We predict that the quantum probability corresponding to the nucleation of such geodesically complete space-times should be acutally negligible in our physical Universe.
Bonsu, Kwadwo Osei; Owusu, Isaac Kofi; Buabeng, Kwame Ohene; Reidpath, Daniel D; Kadirvelu, Amudha
2017-04-01
Randomized control trials of statins have not demonstrated significant benefits in outcomes of heart failure (HF). However, randomized control trials may not always be generalizable. The aim was to determine whether statin and statin type-lipophilic or -hydrophilic improve long-term outcomes in Africans with HF. This was a retrospective longitudinal study of HF patients aged ≥18 years hospitalized at a tertiary healthcare center between January 1, 2009 and December 31, 2013 in Ghana. Patients were eligible if they were discharged from first admission for HF (index admission) and followed up to time of all-cause, cardiovascular, and HF mortality or end of study. Multivariable time-dependent Cox model and inverse-probability-of-treatment weighting of marginal structural model were used to estimate associations between statin treatment and outcomes. Adjusted hazard ratios were also estimated for lipophilic and hydrophilic statin compared with no statin use. The study included 1488 patients (mean age 60.3±14.2 years) with 9306 person-years of observation. Using the time-dependent Cox model, the 5-year adjusted hazard ratios with 95% CI for statin treatment on all-cause, cardiovascular, and HF mortality were 0.68 (0.55-0.83), 0.67 (0.54-0.82), and 0.63 (0.51-0.79), respectively. Use of inverse-probability-of-treatment weighting resulted in estimates of 0.79 (0.65-0.96), 0.77 (0.63-0.96), and 0.77 (0.61-0.95) for statin treatment on all-cause, cardiovascular, and HF mortality, respectively, compared with no statin use. Among Africans with HF, statin treatment was associated with significant reduction in mortality. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
Impact constraints on the environment for chemical evolution and the continuity of life
NASA Technical Reports Server (NTRS)
Oberbeck, Verne R.; Fogleman, Guy
1990-01-01
The moon and the earth were bombarded heavily by planetesimals and asteroids that were capable of interfering with chemical evolution and the origin of life. This paper explores the frequency of giant terrestrial impacts able to stop prebiotic chemistry in the probable regions of chemical evolution. The limited time available between impacts disruptive to prebiotic chemistry at the time of the oldest evidence of life suggests the need for a rapid process for chemical evolution of life. On the other hand, rapid chemical evolution in cloud systems and lakes or other shallow evaporating water bodies would have been possible because reactants could have been concentrated and polymerized rapidly in this environment. Thus life probably could have originated near the surface between frequent surface-sterilizing impacts. There may not have been continuity of life depending on sunlight because there is evidence that life, existing as early as 3.8 Gyr ago, may have been destroyed by giant impacts. The first such organisms on earth were probably not the ancestors of present life.
Metocean design parameter estimation for fixed platform based on copula functions
NASA Astrophysics Data System (ADS)
Zhai, Jinjin; Yin, Qilin; Dong, Sheng
2017-08-01
Considering the dependent relationship among wave height, wind speed, and current velocity, we construct novel trivariate joint probability distributions via Archimedean copula functions. Total 30-year data of wave height, wind speed, and current velocity in the Bohai Sea are hindcast and sampled for case study. Four kinds of distributions, namely, Gumbel distribution, lognormal distribution, Weibull distribution, and Pearson Type III distribution, are candidate models for marginal distributions of wave height, wind speed, and current velocity. The Pearson Type III distribution is selected as the optimal model. Bivariate and trivariate probability distributions of these environmental conditions are established based on four bivariate and trivariate Archimedean copulas, namely, Clayton, Frank, Gumbel-Hougaard, and Ali-Mikhail-Haq copulas. These joint probability models can maximize marginal information and the dependence among the three variables. The design return values of these three variables can be obtained by three methods: univariate probability, conditional probability, and joint probability. The joint return periods of different load combinations are estimated by the proposed models. Platform responses (including base shear, overturning moment, and deck displacement) are further calculated. For the same return period, the design values of wave height, wind speed, and current velocity obtained by the conditional and joint probability models are much smaller than those by univariate probability. Considering the dependence among variables, the multivariate probability distributions provide close design parameters to actual sea state for ocean platform design.
Tiffan, Kenneth F.; Hatten, James R.; Trachtenbarg, David A
2015-01-01
Subyearling fall Chinook salmon (Oncorhynchus tshawytscha) in the Columbia River basin exhibit a transient rearing strategy and depend on connected shoreline habitats during freshwater rearing. Impoundment has greatly reduced the amount of shallow-water rearing habitat that is exacerbated by the steep topography of reservoirs. Periodic dredging creates opportunities to strategically place spoils to increase the amount of shallow-water habitat for subyearlings while at the same time reducing the amount of unsuitable area that is often preferred by predators. We assessed the amount and spatial arrangement of subyearling rearing habitat in Lower Granite Reservoir on the Snake River to guide future habitat improvement efforts. A spatially explicit habitat assessment was conducted using physical habitat data, two-dimensional hydrodynamic modelling and a statistical habitat model in a geographic information system framework. We used field collections of subyearlings and a common predator [smallmouth bass (Micropterus dolomieu)] to draw inferences about predation risk within specific habitat types. Most of the high-probability rearing habitat was located in the upper half of the reservoir where gently sloping landforms created low lateral bed slopes and shallow-water habitats. Only 29% of shorelines were predicted to be suitable (probability >0.5) for subyearlings, and the occurrence of these shorelines decreased in a downstream direction. The remaining, less suitable areas were composed of low-probability habitats in unmodified (25%) and riprapped shorelines (46%). As expected, most subyearlings were found in high-probability habitat, while most smallmouth bass were found in low-probability locations. However, some subyearlings were found in low-probability habitats, such as riprap, where predation risk could be high. Given their transient rearing strategy and dependence on shoreline habitats, subyearlings could benefit from habitat creation efforts in the lower reservoir where high-probability habitat is generally lacking. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
Saliency Detection via Absorbing Markov Chain With Learnt Transition Probability.
Lihe Zhang; Jianwu Ai; Bowen Jiang; Huchuan Lu; Xiukui Li
2018-02-01
In this paper, we propose a bottom-up saliency model based on absorbing Markov chain (AMC). First, a sparsely connected graph is constructed to capture the local context information of each node. All image boundary nodes and other nodes are, respectively, treated as the absorbing nodes and transient nodes in the absorbing Markov chain. Then, the expected number of times from each transient node to all other transient nodes can be used to represent the saliency value of this node. The absorbed time depends on the weights on the path and their spatial coordinates, which are completely encoded in the transition probability matrix. Considering the importance of this matrix, we adopt different hierarchies of deep features extracted from fully convolutional networks and learn a transition probability matrix, which is called learnt transition probability matrix. Although the performance is significantly promoted, salient objects are not uniformly highlighted very well. To solve this problem, an angular embedding technique is investigated to refine the saliency results. Based on pairwise local orderings, which are produced by the saliency maps of AMC and boundary maps, we rearrange the global orderings (saliency value) of all nodes. Extensive experiments demonstrate that the proposed algorithm outperforms the state-of-the-art methods on six publicly available benchmark data sets.
NASA Astrophysics Data System (ADS)
Naine, Tarun Bharath; Gundawar, Manoj Kumar
2017-09-01
We demonstrate a very powerful correlation between the discrete probability of distances of neighboring cells and thermal wave propagation rate, for a system of cells spread on a one-dimensional chain. A gamma distribution is employed to model the distances of neighboring cells. In the absence of an analytical solution and the differences in ignition times of adjacent reaction cells following non-Markovian statistics, invariably the solution for thermal wave propagation rate for a one-dimensional system with randomly distributed cells is obtained by numerical simulations. However, such simulations which are based on Monte-Carlo methods require several iterations of calculations for different realizations of distribution of adjacent cells. For several one-dimensional systems, differing in the value of shaping parameter of the gamma distribution, we show that the average reaction front propagation rates obtained by a discrete probability between two limits, shows excellent agreement with those obtained numerically. With the upper limit at 1.3, the lower limit depends on the non-dimensional ignition temperature. Additionally, this approach also facilitates the prediction of burning limits of heterogeneous thermal mixtures. The proposed method completely eliminates the need for laborious, time intensive numerical calculations where the thermal wave propagation rates can now be calculated based only on macroscopic entity of discrete probability.
Schwager, Monika; Johst, Karin; Jeltsch, Florian
2006-06-01
Recent theoretical studies have shown contrasting effects of temporal correlation of environmental fluctuations (red noise) on the risk of population extinction. It is still debated whether and under which conditions red noise increases or decreases extinction risk compared with uncorrelated (white) noise. Here, we explain the opposing effects by introducing two features of red noise time series. On the one hand, positive autocorrelation increases the probability of series of poor environmental conditions, implying increasing extinction risk. On the other hand, for a given time period, the probability of at least one extremely bad year ("catastrophe") is reduced compared with white noise, implying decreasing extinction risk. Which of these two features determines extinction risk depends on the strength of environmental fluctuations and the sensitivity of population dynamics to these fluctuations. If extreme (catastrophic) events can occur (strong noise) or sensitivity is high (overcompensatory density dependence), then temporal correlation decreases extinction risk; otherwise, it increases it. Thus, our results provide a simple explanation for the contrasting previous findings and are a crucial step toward a general understanding of the effect of noise color on extinction risk.
NASA Astrophysics Data System (ADS)
Arce, Julio Cesar
1992-01-01
This work focuses on time-dependent quantum theory and methods for the study of the spectra and dynamics of atomic and molecular systems. Specifically, we have addressed the following two problems: (i) Development of a time-dependent spectral method for the construction of spectra of simple quantum systems--This includes the calculation of eigenenergies, the construction of bound and continuum eigenfunctions, and the calculation of photo cross-sections. Computational applications include the quadrupole photoabsorption spectra and dissociation cross-sections of molecular hydrogen from various vibrational states in its ground electronic potential -energy curve. This method is seen to provide an advantageous alternative, both from the computational and conceptual point of view, to existing standard methods. (ii) Explicit time-dependent formulation of photoabsorption processes --Analytical solutions of the time-dependent Schrodinger equation are constructed and employed for the calculation of probability densities, momentum distributions, fluxes, transition rates, expectation values and correlation functions. These quantities are seen to establish the link between the dynamics and the calculated, or measured, spectra and cross-sections, and to clarify the dynamical nature of the excitation, transition and ejection processes. Numerical calculations on atomic and molecular hydrogen corroborate and complement the previous results, allowing the identification of different regimes during the photoabsorption process.
The Coast Artillery Journal. Volume 75, Number 3, May-June 1932
1932-06-01
having a muzzle velocity of 1500 to 1800 f. s. and a maximumvertical range of 18,000feet. .AUfuzes were dependent upon a time powder train. ’rhe...Provincial Constabulary Commander as a guar - antee of good faith. Juan was educated by Captain H. Knauber, Provincial Commander at that time, and is now... gum ; have been decreased. The range probable errors for the 14-inch railway gun using 1460 pound projectile have been increased. These changes were
DOE Office of Scientific and Technical Information (OSTI.GOV)
Incorporation of real-time component information using equipment condition assessment (ECA) through the developmentof enhanced risk monitors (ERM) for active components in advanced reactor (AR) and advanced small modular reactor (SMR) designs. We incorporate time-dependent failure probabilities from prognostic health management (PHM) systems to dynamically update the risk metric of interest. This information is used to augment data used for supervisory control and plant-wide coordination of multiple modules by providing the incremental risk incurred due to aging and demands placed on components that support mission requirements.
A new variable interval schedule with constant hazard rate and finite time range.
Bugallo, Mehdi; Machado, Armando; Vasconcelos, Marco
2018-05-27
We propose a new variable interval (VI) schedule that achieves constant probability of reinforcement in time while using a bounded range of intervals. By sampling each trial duration from a uniform distribution ranging from 0 to 2 T seconds, and then applying a reinforcement rule that depends linearly on trial duration, the schedule alternates reinforced and unreinforced trials, each less than 2 T seconds, while preserving a constant hazard function. © 2018 Society for the Experimental Analysis of Behavior.
Growth of Synechococcus sp. immobilized in chitosan with different times of contact with NaOH
Aguilar-May, Bily; Lizardi, Jaime; Voltolina, Domenico
2006-01-01
The thickness of the walls of the capsules of chitosan-immobilized Synechococcus cultures was dependent on the time of contact with NaOH and was directly related to culture growth. After an initial lag phase, probably caused by cell damage, the capsules obtained after 80 s in a 0.1 N NaOH solution showed better growth than that of free cell cultures (6.9 and 5.2 divisions in 10 days, respectively). PMID:19396351
Magazzù, Luca; Hänggi, Peter; Spagnolo, Bernardo; Valenti, Davide
2017-04-01
Quantum resonant activation is investigated for the archetype setup of an externally driven two-state (spin-boson) system subjected to strong dissipation by means of both analytical and extensive numerical calculations. The phenomenon of resonant activation emerges in the presence of either randomly fluctuating or deterministic periodically varying driving fields. Addressing the incoherent regime, a characteristic minimum emerges in the mean first passage time to reach an absorbing neighboring state whenever the intrinsic time scale of the modulation matches the characteristic time scale of the system dynamics. For the case of deterministic periodic driving, the first passage time probability density function (pdf) displays a complex, multipeaked behavior, which depends crucially on the details of initial phase, frequency, and strength of the driving. As an interesting feature we find that the mean first passage time enters the resonant activation regime at a critical frequency ν^{*} which depends very weakly on the strength of the driving. Moreover, we provide the relation between the first passage time pdf and the statistics of residence times.
NASA Astrophysics Data System (ADS)
Magazzó, Luca; Hänggi, Peter; Spagnolo, Bernardo; Valenti, Davide
2017-04-01
Quantum resonant activation is investigated for the archetype setup of an externally driven two-state (spin-boson) system subjected to strong dissipation by means of both analytical and extensive numerical calculations. The phenomenon of resonant activation emerges in the presence of either randomly fluctuating or deterministic periodically varying driving fields. Addressing the incoherent regime, a characteristic minimum emerges in the mean first passage time to reach an absorbing neighboring state whenever the intrinsic time scale of the modulation matches the characteristic time scale of the system dynamics. For the case of deterministic periodic driving, the first passage time probability density function (pdf) displays a complex, multipeaked behavior, which depends crucially on the details of initial phase, frequency, and strength of the driving. As an interesting feature we find that the mean first passage time enters the resonant activation regime at a critical frequency ν* which depends very weakly on the strength of the driving. Moreover, we provide the relation between the first passage time pdf and the statistics of residence times.
Lum, Kirsten J.; Sundaram, Rajeshwari; Louis, Thomas A.
2015-01-01
Prospective pregnancy studies are a valuable source of longitudinal data on menstrual cycle length. However, care is needed when making inferences of such renewal processes. For example, accounting for the sampling plan is necessary for unbiased estimation of the menstrual cycle length distribution for the study population. If couples can enroll when they learn of the study as opposed to waiting for the start of a new menstrual cycle, then due to length-bias, the enrollment cycle will be stochastically larger than the general run of cycles, a typical property of prevalent cohort studies. Furthermore, the probability of enrollment can depend on the length of time since a woman’s last menstrual period (a backward recurrence time), resulting in selection effects. We focus on accounting for length-bias and selection effects in the likelihood for enrollment menstrual cycle length, using a recursive two-stage approach wherein we first estimate the probability of enrollment as a function of the backward recurrence time and then use it in a likelihood with sampling weights that account for length-bias and selection effects. To broaden the applicability of our methods, we augment our model to incorporate a couple-specific random effect and time-independent covariate. A simulation study quantifies performance for two scenarios of enrollment probability when proper account is taken of sampling plan features. In addition, we estimate the probability of enrollment and the distribution of menstrual cycle length for the study population of the Longitudinal Investigation of Fertility and the Environment Study. PMID:25027273
Knee point search using cascading top-k sorting with minimized time complexity.
Wang, Zheng; Tseng, Shian-Shyong
2013-01-01
Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using the cascading top-k sorting when a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps. And in each step an optimization problem of the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step is dependent on that of the afterwards steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection of DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
Inverse kinematic problem for a random gradient medium in geometric optics approximation
NASA Astrophysics Data System (ADS)
Petersen, N. V.
1990-03-01
Scattering at random inhomogeneities in a gradient medium results in systematic deviations of the rays and travel times of refracted body waves from those corresponding to the deterministic velocity component. The character of the difference depends on the parameters of the deterministic and random velocity component. However, at great distances to the source, independently of the velocity parameters (weakly or strongly inhomogeneous medium), the most probable depth of the ray turning point is smaller than that corresponding to the deterministic velocity component, the most probable travel times also being lower. The relative uncertainty in the deterministic velocity component, derived from the mean travel times using methods developed for laterally homogeneous media (for instance, the Herglotz-Wiechert method), is systematic in character, but does not exceed the contrast of velocity inhomogeneities by magnitude. The gradient of the deterministic velocity component has a significant effect on the travel-time fluctuations. The variance at great distances to the source is mainly controlled by shallow inhomogeneities. The travel-time flucutations are studied only for weakly inhomogeneous media.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikitin, N. V., E-mail: nnikit@mail.cern.ch; Sotnikov, V.P., E-mail: sotnikov@physics.msu.ru; Toms, K. S., E-mail: ktoms@mail.cern.ch
A radically new class of Bell inequalities in Wigner’s form was obtained on the basis of Kolmorov’s axiomatization of probability theory and the hypothesis of locality. These inequalities take explicitly into account the dependence on time (time-dependent Bell inequalities in Wigner’s form). By using these inequalities, one can propose a means for experimentally testing Bohr’ complementarity principle in the relativistic region. The inequalities in question open broad possibilities for studying correlations of nonrelativistic and relativistic quantum systems in external fields. The violation of the time-dependent inequalities in quantum mechanics was studied by considering the behavior of a pair of anticorrelatedmore » spins in a constant external magnetic field and oscillations of neutral pseudoscalar mesons. The decay of a pseudoscalar particle to a fermion–antifermion pair is considered within quantum field theory. In order to test experimentally the inequalities proposed in the present study, it is not necessary to perform dedicated noninvasive measurements required in the Leggett–Garg approach, for example.« less
Lambert, Amaury; Stadler, Tanja
2013-12-01
Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
Interplanetary magnetic field dependency of stable Sun-aligned polar cap arcs
NASA Technical Reports Server (NTRS)
Valladares, C. E.; Carlson, H. C., Jr.; Fukui, K.
1994-01-01
This is the first analysis, using a statistically significant data set, of the morphological dependence of the presence, orientation, and motion of stable sun-aligned polar cap arcs upon the vector interplanetary magnetic field (IMF). For the one winter season analyzed we had 1392 all-sky 630.0-nm images of 2-min resolution containing a total of 150 polar cap arcs, all with corresponding values of the IMF as measured by International Monitoring Platform (IMP) 8 or International Sun Earth Explorer (ISEE) 2. After demonstrating an unbiased data set with smooth normal distributions of events versus the dimensions of time, space, and IMF component, we examine IMF dependencies of the properties of the optical arcs. A well-defined dependence for B(sub z) is found for the presence/absence of stable Sun-aligned polar cap arcs. Consistent with previous statistical studies, the probability of observing polar cap aurora steadily increases for larger positive values of B(sub z), and linearly decreases when B(sub z) becomes more negative. The probability of observing Sun-aligned arcs within the polar cap is determined to vary sharply as a function of the arc location; arcs were observed 40% of the time on the dawnside and only 10% on the duskside. This implies an overall probability of at least 40% for the whole polar cap. 20% of the arcs were observed during 'southward IMF conditions,' but in fact under closer inspection were found to have been formed under northward IMF conditions; these 'residual' positive B(sub z) arcs ha d a delayed residence time in the polar cap of about what would be expected after a north to south transition of B(sub z). A firm dependence on B(sub y) is also found for both the orientation and the dawn-dusk direction of motion of the arcs. All the arcs are Sun-aligned to a first approximation, but present deviations from this orientation, depending primarily upon the location of the arc in corrected geomagnetic (CG) coordinates. The arcs populating the 06-12 and the 12-18 quadrants of the CG coordinate system point toward the cusp. The B(sub y) dependency of the arc alignment is consistent with a cusp displacement in local time according to the sign of B(sub y). We found that the arc direction of motion depended both on B(sub y) and the arc location within the polar cap. For a given value of B(sub y) two well-defined regions (or cells) exist. Within each cell the arcs move in the same direction toward the boundary between the cells. The arcs located in the duskside move dawnward; those in the dawnside move duskward. The relative size of these dusk and dawn regions (or cells) are controlled by the magnitude of B(sub y). This persistent dusk-dawn motion fo the polar cap arcs is interpreted in terms of newly open flux tubes entering the polar cap and exerting a displacement of the convective cells and the polar cap arcs that are embedded within them.
NASA Astrophysics Data System (ADS)
Cocco, M.
2001-12-01
Earthquake stress changes can promote failures on favorably oriented faults and modify the seismicity pattern over broad regions around the causative faults. Because the induced stress perturbations modify the rate of production of earthquakes, they alter the probability of seismic events in a specified time window. Comparing the Coulomb stress changes with the seismicity rate changes and aftershock patterns can statistically test the role of stress transfer in earthquake occurrence. The interaction probability may represent a further tool to test the stress trigger or shadow model. The probability model, which incorporate stress transfer, has the main advantage to include the contributions of the induced stress perturbation (a static step in its present formulation), the loading rate and the fault constitutive properties. Because the mechanical conditions of the secondary faults at the time of application of the induced load are largely unkown, stress triggering can only be tested on fault populations and not on single earthquake pairs with a specified time delay. The interaction probability can represent the most suitable tool to test the interaction between large magnitude earthquakes. Despite these important implications and the stimulating perspectives, there exist problems in understanding earthquake interaction that should motivate future research but at the same time limit its immediate social applications. One major limitation is that we are unable to predict how and if the induced stress perturbations modify the ratio between small versus large magnitude earthquakes. In other words, we cannot distinguish between a change in this ratio in favor of small events or of large magnitude earthquakes, because the interaction probability is independent of magnitude. Another problem concerns the reconstruction of the stressing history. The interaction probability model is based on the response to a static step; however, we know that other processes contribute to the stressing history perturbing the faults (such as dynamic stress changes, post-seismic stress changes caused by viscolelastic relaxation or fluid flow). If, for instance, we believe that dynamic stress changes can trigger aftershocks or earthquakes years after the passing of the seismic waves through the fault, the perspective of calculating interaction probability is untenable. It is therefore clear we have learned a lot on earthquake interaction incorporating fault constitutive properties, allowing to solve existing controversy, but leaving open questions for future research.
The Dependence Structure of Conditional Probabilities in a Contingency Table
ERIC Educational Resources Information Center
Joarder, Anwar H.; Al-Sabah, Walid S.
2002-01-01
Conditional probability and statistical independence can be better explained with contingency tables. In this note some special cases of 2 x 2 contingency tables are considered. In turn an interesting insight into statistical dependence as well as independence of events is obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Yiduo; Zheng, Qipeng P.; Wang, Jianhui
Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined future uncertainties (i.e., fixed random outcomes with given probabilities). In several recent studies of generation assets' planning (e.g., thermal versus renewable), new findings show that the investment decisions could affect the future uncertainties as well. To this end, this paper proposes a multistage decision-dependent stochastic optimization model for long-term large-scale generation expansion planning, where large amounts of windmore » power are involved. In the decision-dependent model, the future uncertainties are not only affecting but also affected by the current decisions. In particular, the probability distribution function is determined by not only input parameters but also decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model to a mixed-integer linear programming model. The wind penetration, investment decisions, and the optimality of the decision-dependent model are evaluated in a series of multistage case studies. The results show that the proposed decision-dependent model provides effective optimization solutions for long-term generation expansion planning.« less
Ultraviolet absorption spectra of shock-heated carbon dioxide and water between 900 and 3050 K
NASA Astrophysics Data System (ADS)
Schulz, C.; Koch, J. D.; Davidson, D. F.; Jeffries, J. B.; Hanson, R. K.
2002-03-01
Spectrally resolved UV absorption cross-sections between 190 and 320 nm were measured in shock-heated CO 2 between 880 and 3050 K and H 2O between 1230 and 2860 K. Absorption spectra were acquired with 10 μs time resolution using a unique kinetic spectrograph, thereby enabling comparisons with time-dependent chemical kinetic modeling of post-shock thermal decomposition and chemical reactions. Although room temperature CO 2 is transparent (σ<10 -22 cm2) at wavelengths longer than 200 nm, hot CO 2 has significant absorption (σ>10 -20 cm2) extending to wavelengths longer than 300 nm. The temperature dependence of CO 2 absorption strongly suggests sharply increased transition probabilities from excited vibrational levels.
Tite, J P; Morrison, C A; Taylor, R B
1981-01-01
The photosensitive affinity label NAP (4-azido-2-nitrophenyl) was used to make a stable covalent-bonded monomeric immune complex (Ag2Ab) between rabbit anti-NAP antibody and a bihaptenic compound containing NAP linked to fluorescein (NAP-aminocaproyl-lysyl-Fl). This complex injected into mice had marked effects on their subsequent response to fluorescein coupled to a thymus-independent carrier (Fl-ficoll). Depending on the time at which the complex was administered relative to challenge, it was possible to obtain either enhancing or suppressive effects. The enhancing but not the suppressive effect of complex was dependent on immune recognition of the rabbit IgG carrier. While the suppressive effect probably results from complex-mediated inactivation of T-independent B cells, it is suggested that the enhancing effect results from priming of the T-dependent B cells by Fl-Ficoll followed by their triggering into antibody production by rabbit IgG-specific helper cells. PMID:7007223
Including foreshocks and aftershocks in time-independent probabilistic seismic hazard analyses
Boyd, Oliver S.
2012-01-01
Time‐independent probabilistic seismic‐hazard analysis treats each source as being temporally and spatially independent; hence foreshocks and aftershocks, which are both spatially and temporally dependent on the mainshock, are removed from earthquake catalogs. Yet, intuitively, these earthquakes should be considered part of the seismic hazard, capable of producing damaging ground motions. In this study, I consider the mainshock and its dependents as a time‐independent cluster, each cluster being temporally and spatially independent from any other. The cluster has a recurrence time of the mainshock; and, by considering the earthquakes in the cluster as a union of events, dependent events have an opportunity to contribute to seismic ground motions and hazard. Based on the methods of the U.S. Geological Survey for a high‐hazard site, the inclusion of dependent events causes ground motions that are exceeded at probability levels of engineering interest to increase by about 10% but could be as high as 20% if variations in aftershock productivity can be accounted for reliably.
Idealized models of the joint probability distribution of wind speeds
NASA Astrophysics Data System (ADS)
Monahan, Adam H.
2018-05-01
The joint probability distribution of wind speeds at two separate locations in space or points in time completely characterizes the statistical dependence of these two quantities, providing more information than linear measures such as correlation. In this study, we consider two models of the joint distribution of wind speeds obtained from idealized models of the dependence structure of the horizontal wind velocity components. The bivariate Rice distribution follows from assuming that the wind components have Gaussian and isotropic fluctuations. The bivariate Weibull distribution arises from power law transformations of wind speeds corresponding to vector components with Gaussian, isotropic, mean-zero variability. Maximum likelihood estimates of these distributions are compared using wind speed data from the mid-troposphere, from different altitudes at the Cabauw tower in the Netherlands, and from scatterometer observations over the sea surface. While the bivariate Rice distribution is more flexible and can represent a broader class of dependence structures, the bivariate Weibull distribution is mathematically simpler and may be more convenient in many applications. The complexity of the mathematical expressions obtained for the joint distributions suggests that the development of explicit functional forms for multivariate speed distributions from distributions of the components will not be practical for more complicated dependence structure or more than two speed variables.
Fa, John E.; Sharples, Colin M.; Bell, Diana J.; DeAngelis, Donald L.
2001-01-01
We developed an individual-based model of Rabbit Viral Hemorrhagic Disease (RVHD) for European wild rabbits (Oryctolagus cuniculus L.), representing up to 1000 rabbits in four hectares. Model output for productivity and recruitment matched published values. The disease was density-dependent and virulence affected outcome. Strains that caused death after several days produced greater overall mortality than strains in which rabbits either died or recovered very quickly. Disease effect also depended on time of year. We also elaborated a larger scale model representing 25 km2 and 100,000+ rabbits, split into a number of grid-squares. This was a more traditional model that did not represent individual rabbits, but employed a system of dynamic equations for each grid-square. Disease spread depended on probability of transmission between neighboring grid-squares. Potential recovery from a major population crash caused by the disease relied on disease virulence and frequency of recurrence. The model's dependence on probability of disease transmission between grid-squares suggests the way that the model represents the spatial distribution of the population affects simulation. Although data on RVHD in Europe are lacking, our models provide a basis for describing the disease in realistic detail and for assessing influence of various social and spatial factors on spread.
Li, Yuan; Jalil, Mansoor B. A.; Tan, S. G.; Zhao, W.; Bai, R.; Zhou, G. H.
2014-01-01
Time-periodic perturbation can be used to modify the transport properties of the surface states of topological insulators, specifically their chiral tunneling property. Using the scattering matrix method, we study the tunneling transmission of the surface states of a topological insulator under the influence of a time-dependent potential and finite gate bias voltage. It is found that perfect transmission is obtained for electrons which are injected normally into the time-periodic potential region in the absence of any bias voltage. However, this signature of Klein tunneling is destroyed when a bias voltage is applied, with the transmission probability of normally incident electrons decreasing with increasing gate bias voltage. Likewise, the overall conductance of the system decreases significantly when a gate bias voltage is applied. The characteristic left-handed helicity of the transmitted spin polarization is also broken by the finite gate bias voltage. In addition, the time-dependent potential modifies the large-angle transmission profile, which exhibits an oscillatory or resonance-like behavior. Finally, time-dependent transport modes (with oscillating potential in the THz frequency) can result in enhanced overall conductance, irrespective of the presence or absence of the gate bias voltage. PMID:24713634
Dependence in probabilistic modeling Dempster-Shafer theory and probability bounds analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferson, Scott; Nelsen, Roger B.; Hajagos, Janos
2015-05-01
This report summarizes methods to incorporate information (or lack of information) about inter-variable dependence into risk assessments that use Dempster-Shafer theory or probability bounds analysis to address epistemic and aleatory uncertainty. The report reviews techniques for simulating correlated variates for a given correlation measure and dependence model, computation of bounds on distribution functions under a specified dependence model, formulation of parametric and empirical dependence models, and bounding approaches that can be used when information about the intervariable dependence is incomplete. The report also reviews several of the most pervasive and dangerous myths among risk analysts about dependence in probabilistic models.
Stationary properties of maximum-entropy random walks.
Dixit, Purushottam D
2015-10-01
Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
First passage Brownian functional properties of snowmelt dynamics
NASA Astrophysics Data System (ADS)
Dubey, Ashutosh; Bandyopadhyay, Malay
2018-04-01
In this paper, we model snow-melt dynamics in terms of a Brownian motion (BM) with purely time dependent drift and difusion and examine its first passage properties by suggesting and examining several Brownian functionals which characterize the lifetime and reactivity of such stochastic processes. We introduce several probability distribution functions (PDFs) associated with such time dependent BMs. For instance, for a BM with initial starting point x0, we derive analytical expressions for : (i) the PDF P(tf|x0) of the first passage time tf which specify the lifetime of such stochastic process, (ii) the PDF P(A|x0) of the area A till the first passage time and it provides us numerous valuable information about the total fresh water availability during melting, (iii) the PDF P(M) associated with the maximum size M of the BM process before the first passage time, and (iv) the joint PDF P(M; tm) of the maximum size M and its occurrence time tm before the first passage time. These P(M) and P(M; tm) are useful in determining the time of maximum fresh water availability and in calculating the total maximum amount of available fresh water. These PDFs are examined for the power law time dependent drift and diffusion which matches quite well with the available data of snowmelt dynamics.
Fluctuation relation for heat exchange in Markovian open quantum systems
NASA Astrophysics Data System (ADS)
Ramezani, M.; Golshani, M.; Rezakhani, A. T.
2018-04-01
A fluctuation relation for the heat exchange of an open quantum system under a thermalizing Markovian dynamics is derived. We show that the probability that the system absorbs an amount of heat from its bath, at a given time interval, divided by the probability of the reverse process (releasing the same amount of heat to the bath) is given by an exponential factor which depends on the amount of heat and the difference between the temperatures of the system and the bath. Interestingly, this relation is akin to the standard form of the fluctuation relation (for forward-backward dynamics). We also argue that the probability of the violation of the second law of thermodynamics in the form of the Clausius statement (i.e., net heat transfer from a cold system to its hot bath) drops exponentially with both the amount of heat and the temperature differences of the baths.
Fluctuation relation for heat exchange in Markovian open quantum systems.
Ramezani, M; Golshani, M; Rezakhani, A T
2018-04-01
A fluctuation relation for the heat exchange of an open quantum system under a thermalizing Markovian dynamics is derived. We show that the probability that the system absorbs an amount of heat from its bath, at a given time interval, divided by the probability of the reverse process (releasing the same amount of heat to the bath) is given by an exponential factor which depends on the amount of heat and the difference between the temperatures of the system and the bath. Interestingly, this relation is akin to the standard form of the fluctuation relation (for forward-backward dynamics). We also argue that the probability of the violation of the second law of thermodynamics in the form of the Clausius statement (i.e., net heat transfer from a cold system to its hot bath) drops exponentially with both the amount of heat and the temperature differences of the baths.
Analysis and selection of magnitude relations for the Working Group on Utah Earthquake Probabilities
Duross, Christopher; Olig, Susan; Schwartz, David
2015-01-01
Prior to calculating time-independent and -dependent earthquake probabilities for faults in the Wasatch Front region, the Working Group on Utah Earthquake Probabilities (WGUEP) updated a seismic-source model for the region (Wong and others, 2014) and evaluated 19 historical regressions on earthquake magnitude (M). These regressions relate M to fault parameters for historical surface-faulting earthquakes, including linear fault length (e.g., surface-rupture length [SRL] or segment length), average displacement, maximum displacement, rupture area, seismic moment (Mo ), and slip rate. These regressions show that significant epistemic uncertainties complicate the determination of characteristic magnitude for fault sources in the Basin and Range Province (BRP). For example, we found that M estimates (as a function of SRL) span about 0.3–0.4 units (figure 1) owing to differences in the fault parameter used; age, quality, and size of historical earthquake databases; and fault type and region considered.
Communication: Reactivity borrowing in the mode selective chemistry of H + CHD3 → H2 + CD3
NASA Astrophysics Data System (ADS)
Ellerbrock, Roman; Manthe, Uwe
2017-12-01
Quantum state-resolved reaction probabilities for the H + CHD3 → H2 + CD3 reaction are calculated by accurate full-dimensional quantum dynamics calculations using the multi-layer multi-configurational time-dependent Hartree approach and the quantum transition state concept. Reaction probabilities of various ro-vibrational states of the CHD3 reactant are investigated for vanishing total angular momentum. While the reactivity of the different vibrational states of CHD3 mostly follows intuitive patterns, an unusually large reaction probability is found for CHD3 molecules triply excited in the CD3 umbrella-bending vibration. This surprising reactivity can be explained by a Fermi resonance-type mixing of the single CH-stretch excited and the triple CD3 umbrella-bend excited vibrational states of CHD3. These findings show that resonant energy transfer can significantly affect the mode-selective chemistry of CHD3 and result in counter-intuitive reactivity patterns.
Exact combinatorial approach to finite coagulating systems
NASA Astrophysics Data System (ADS)
Fronczak, Agata; Chmiel, Anna; Fronczak, Piotr
2018-02-01
This paper outlines an exact combinatorial approach to finite coagulating systems. In this approach, cluster sizes and time are discrete and the binary aggregation alone governs the time evolution of the systems. By considering the growth histories of all possible clusters, an exact expression is derived for the probability of a coagulating system with an arbitrary kernel being found in a given cluster configuration when monodisperse initial conditions are applied. Then this probability is used to calculate the time-dependent distribution for the number of clusters of a given size, the average number of such clusters, and that average's standard deviation. The correctness of our general expressions is proved based on the (analytical and numerical) results obtained for systems with the constant kernel. In addition, the results obtained are compared with the results arising from the solutions to the mean-field Smoluchowski coagulation equation, indicating its weak points. The paper closes with a brief discussion on the extensibility to other systems of the approach presented herein, emphasizing the issue of arbitrary initial conditions.
Laser-Driven Recollisions under the Coulomb Barrier.
Keil, Th; Popruzhenko, S V; Bauer, D
2016-12-09
Photoelectron spectra obtained from the ab initio solution of the time-dependent Schrödinger equation can be in striking disagreement with predictions by the strong-field approximation (SFA), not only at low energy but also around twice the ponderomotive energy where the transition from the direct to the rescattered electrons is expected. In fact, the relative enhancement of the ionization probability compared to the SFA in this regime can be several orders of magnitude. We show for which laser and target parameters such an enhancement occurs and for which the SFA prediction is qualitatively good. The enhancement is analyzed in terms of the Coulomb-corrected action along analytic quantum orbits in the complex-time plane, taking soft recollisions under the Coulomb barrier into account. These recollisions in complex time and space prevent a separation into sub-barrier motion up to the "tunnel exit" and subsequent classical dynamics. Instead, the entire quantum path up to the detector determines the ionization probability.
Analysis of noise-induced temporal correlations in neuronal spike sequences
NASA Astrophysics Data System (ADS)
Reinoso, José A.; Torrent, M. C.; Masoller, Cristina
2016-11-01
We investigate temporal correlations in sequences of noise-induced neuronal spikes, using a symbolic method of time-series analysis. We focus on the sequence of time-intervals between consecutive spikes (inter-spike-intervals, ISIs). The analysis method, known as ordinal analysis, transforms the ISI sequence into a sequence of ordinal patterns (OPs), which are defined in terms of the relative ordering of consecutive ISIs. The ISI sequences are obtained from extensive simulations of two neuron models (FitzHugh-Nagumo, FHN, and integrate-and-fire, IF), with correlated noise. We find that, as the noise strength increases, temporal order gradually emerges, revealed by the existence of more frequent ordinal patterns in the ISI sequence. While in the FHN model the most frequent OP depends on the noise strength, in the IF model it is independent of the noise strength. In both models, the correlation time of the noise affects the OP probabilities but does not modify the most probable pattern.
NASA Astrophysics Data System (ADS)
Kenfack, S. C.; Fotue, A. J.; Fobasso, M. F. C.; Djomou, J.-R. D.; Tiotsop, M.; Ngouana, K. S. L.; Fai, L. C.
2017-12-01
We have studied the transition probability and decoherence time of levitating polaron in helium film thickness. By using a variational method of Pekar type, the ground and the first excited states of polaron are calculated above the liquid-helium film placed on the polar substrate. It is shown that the polaron transits from the ground to the excited state in the presence of an external electromagnetic field in the plane. We have seen that, in the helium film, the effects of the magnetic and electric fields on the polaron are opposite. It is also shown that the energy, transition probability and decoherence time of the polaron depend sensitively on the helium film thickness. We found that decoherence time decreases as a function of increasing electron-phonon coupling strength and the helium film thickness. It is seen that the film thickness can be considered as a new confinement in our system and can be adjusted in order to reduce decoherence.
The short time Fourier transform and local signals
NASA Astrophysics Data System (ADS)
Okumura, Shuhei
In this thesis, I examine the theoretical properties of the short time discrete Fourier transform (STFT). The STFT is obtained by applying the Fourier transform by a fixed-sized, moving window to input series. We move the window by one time point at a time, so we have overlapping windows. I present several theoretical properties of the STFT, applied to various types of complex-valued, univariate time series inputs, and their outputs in closed forms. In particular, just like the discrete Fourier transform, the STFT's modulus time series takes large positive values when the input is a periodic signal. One main point is that a white noise time series input results in the STFT output being a complex-valued stationary time series and we can derive the time and time-frequency dependency structure such as the cross-covariance functions. Our primary focus is the detection of local periodic signals. I present a method to detect local signals by computing the probability that the squared modulus STFT time series has consecutive large values exceeding some threshold after one exceeding observation following one observation less than the threshold. We discuss a method to reduce the computation of such probabilities by the Box-Cox transformation and the delta method, and show that it works well in comparison to the Monte Carlo simulation method.
Operational Earthquake Forecasting and Decision-Making in a Low-Probability Environment
NASA Astrophysics Data System (ADS)
Jordan, T. H.; the International Commission on Earthquake ForecastingCivil Protection
2011-12-01
Operational earthquake forecasting (OEF) is the dissemination of authoritative information about the time dependence of seismic hazards to help communities prepare for potentially destructive earthquakes. Most previous work on the public utility of OEF has anticipated that forecasts would deliver high probabilities of large earthquakes; i.e., deterministic predictions with low error rates (false alarms and failures-to-predict) would be possible. This expectation has not been realized. An alternative to deterministic prediction is probabilistic forecasting based on empirical statistical models of aftershock triggering and seismic clustering. During periods of high seismic activity, short-term earthquake forecasts can attain prospective probability gains in excess of 100 relative to long-term forecasts. The utility of such information is by no means clear, however, because even with hundredfold increases, the probabilities of large earthquakes typically remain small, rarely exceeding a few percent over forecasting intervals of days or weeks. Civil protection agencies have been understandably cautious in implementing OEF in this sort of "low-probability environment." The need to move more quickly has been underscored by recent seismic crises, such as the 2009 L'Aquila earthquake sequence, in which an anxious public was confused by informal and inaccurate earthquake predictions. After the L'Aquila earthquake, the Italian Department of Civil Protection appointed an International Commission on Earthquake Forecasting (ICEF), which I chaired, to recommend guidelines for OEF utilization. Our report (Ann. Geophys., 54, 4, 2011; doi: 10.4401/ag-5350) concludes: (a) Public sources of information on short-term probabilities should be authoritative, scientific, open, and timely, and need to convey epistemic uncertainties. (b) Earthquake probabilities should be based on operationally qualified, regularly updated forecasting systems. (c) All operational models should be evaluated for reliability and skill by retrospective testing, and the models should be under continuous prospective testing against long-term forecasts and alternative time-dependent models. (d) Short-term models used in operational forecasting should be consistent with the long-term forecasts used in probabilistic seismic hazard analysis. (e) Alert procedures should be standardized to facilitate decisions at different levels of government, based in part on objective analysis of costs and benefits. (f) In establishing alert protocols, consideration should also be given to the less tangible aspects of value-of-information, such as gains in psychological preparedness and resilience. Authoritative statements of increased risk, even when the absolute probability is low, can provide a psychological benefit to the public by filling information vacuums that lead to informal predictions and misinformation. Formal OEF procedures based on probabilistic forecasting appropriately separate hazard estimation by scientists from the decision-making role of civil protection authorities. The prosecution of seven Italian scientists on manslaughter charges stemming from their actions before the L'Aquila earthquake makes clear why this separation should be explicit in defining OEF protocols.
NASA Astrophysics Data System (ADS)
Silva, Antonio
2005-03-01
It is well-known that the mathematical theory of Brownian motion was first developed in the Ph. D. thesis of Louis Bachelier for the French stock market before Einstein [1]. In Ref. [2] we studied the so-called Heston model, where the stock-price dynamics is governed by multiplicative Brownian motion with stochastic diffusion coefficient. We solved the corresponding Fokker-Planck equation exactly and found an analytic formula for the time-dependent probability distribution of stock price changes (returns). The formula interpolates between the exponential (tent-shaped) distribution for short time lags and the Gaussian (parabolic) distribution for long time lags. The theoretical formula agrees very well with the actual stock-market data ranging from the Dow-Jones index [2] to individual companies [3], such as Microsoft, Intel, etc. [] [1] Louis Bachelier, ``Th'eorie de la sp'eculation,'' Annales Scientifiques de l''Ecole Normale Sup'erieure, III-17:21-86 (1900).[] [2] A. A. Dragulescu and V. M. Yakovenko, ``Probability distribution of returns in the Heston model with stochastic volatility,'' Quantitative Finance 2, 443--453 (2002); Erratum 3, C15 (2003). [cond-mat/0203046] [] [3] A. C. Silva, R. E. Prange, and V. M. Yakovenko, ``Exponential distribution of financial returns at mesoscopic time lags: a new stylized fact,'' Physica A 344, 227--235 (2004). [cond-mat/0401225
Oldehinkel, Albertine J; Hartman, Catharina A; Van Oort, Floor V A; Nederhof, Esther
2015-01-01
Background Some adolescents function poorly in apparently benign environments, while others thrive despite hassles and difficulties. The aim of this study was to examine if adolescents with specialized skills in the recognition of either positive or negative emotions have a context-dependent risk of developing an anxiety or depressive disorder during adolescence, depending on exposure to positive or harsh parenting. Methods Data came from a large prospective Dutch population study (N = 1539). At age 11, perceived parental rejection and emotional warmth were measured by questionnaire, and emotion recognition skills by means of a reaction-time task. Lifetime diagnoses of anxiety and depressive disorders were assessed at about age 19, using a standardized diagnostic interview. Results Adolescents who were specialized in the recognition of positive emotions had a relatively high probability to develop an anxiety disorder when exposed to parental rejection (Bspecialization*rejection = 0.23, P < 0.01) and a relatively low probability in response to parental emotional warmth (Bspecialization*warmth = −0.24, P = 0.01), while the opposite pattern was found for specialists in negative emotions. The effect of parental emotional warmth on depression onset was likewise modified by emotion recognition specialization (B = −0.13, P = 0.03), but the effect of parental rejection was not (B = 0.02, P = 0.72). In general, the relative advantage of specialists in negative emotions was restricted to fairly uncommon negative conditions. Conclusions Our results suggest that there is no unequivocal relation between parenting behaviors and the probability to develop an anxiety or depressive disorder in adolescence, and that emotion recognition specialization may be a promising way to distinguish between various types of context-dependent reaction patterns. PMID:25642389
Oldehinkel, Albertine J; Hartman, Catharina A; Van Oort, Floor V A; Nederhof, Esther
2015-02-01
Some adolescents function poorly in apparently benign environments, while others thrive despite hassles and difficulties. The aim of this study was to examine if adolescents with specialized skills in the recognition of either positive or negative emotions have a context-dependent risk of developing an anxiety or depressive disorder during adolescence, depending on exposure to positive or harsh parenting. Data came from a large prospective Dutch population study (N = 1539). At age 11, perceived parental rejection and emotional warmth were measured by questionnaire, and emotion recognition skills by means of a reaction-time task. Lifetime diagnoses of anxiety and depressive disorders were assessed at about age 19, using a standardized diagnostic interview. Adolescents who were specialized in the recognition of positive emotions had a relatively high probability to develop an anxiety disorder when exposed to parental rejection (Bspecialization*rejection = 0.23, P < 0.01) and a relatively low probability in response to parental emotional warmth (Bspecialization*warmth = -0.24, P = 0.01), while the opposite pattern was found for specialists in negative emotions. The effect of parental emotional warmth on depression onset was likewise modified by emotion recognition specialization (B = -0.13, P = 0.03), but the effect of parental rejection was not (B = 0.02, P = 0.72). In general, the relative advantage of specialists in negative emotions was restricted to fairly uncommon negative conditions. Our results suggest that there is no unequivocal relation between parenting behaviors and the probability to develop an anxiety or depressive disorder in adolescence, and that emotion recognition specialization may be a promising way to distinguish between various types of context-dependent reaction patterns.
Hedged Monte-Carlo: low variance derivative pricing with objective probabilities
NASA Astrophysics Data System (ADS)
Potters, Marc; Bouchaud, Jean-Philippe; Sestovic, Dragan
2001-01-01
We propose a new ‘hedged’ Monte-Carlo ( HMC) method to price financial derivatives, which allows to determine simultaneously the optimal hedge. The inclusion of the optimal hedging strategy allows one to reduce the financial risk associated with option trading, and for the very same reason reduces considerably the variance of our HMC scheme as compared to previous methods. The explicit accounting of the hedging cost naturally converts the objective probability into the ‘risk-neutral’ one. This allows a consistent use of purely historical time series to price derivatives and obtain their residual risk. The method can be used to price a large class of exotic options, including those with path dependent and early exercise features.
Rare events in networks with internal and external noise
NASA Astrophysics Data System (ADS)
Hindes, J.; Schwartz, I. B.
2017-12-01
We study rare events in networks with both internal and external noise, and develop a general formalism for analyzing rare events that combines pair-quenched techniques and large-deviation theory. The probability distribution, shape, and time scale of rare events are considered in detail for extinction in the Susceptible-Infected-Susceptible model as an illustration. We find that when both types of noise are present, there is a crossover region as the network size is increased, where the probability exponent for large deviations no longer increases linearly with the network size. We demonstrate that the form of the crossover depends on whether the endemic state is localized near the epidemic threshold or not.
Mang, Chao-Yong; Liu, Cai-Ping; Liu, Guang-Ming; Jiang, Bei; Lan, Hai; Wu, Ke-Chen; Yan, Ya; Li, Hai-Fei; Yang, Ming-Hui; Zhao, Yu
2015-02-05
A cyclic dipeptide often has the multiple configurations and the abundant conformations. The density functional theory (DFT) method is used to search the preferred conformation of the most probable configuration for cordycedipeptide A isolated from the culture liquid of Cordyceps sinensis. The time-dependent DFT approach is exploited to describe the profile of electronic circular dichroism (CD). The calculated results show that the most probable configuration is 3S6R7S, whose preferred conformation has a negative optical rotation and a positive lowest energy electronic CD band. Copyright © 2014 Elsevier B.V. All rights reserved.
TimeXNet Web: Identifying cellular response networks from diverse omics time-course data.
Tan, Phit Ling; López, Yosvany; Nakai, Kenta; Patil, Ashwini
2018-05-14
Condition-specific time-course omics profiles are frequently used to study cellular response to stimuli and identify associated signaling pathways. However, few online tools allow users to analyze multiple types of high-throughput time-course data. TimeXNet Web is a web server that extracts a time-dependent gene/protein response network from time-course transcriptomic, proteomic or phospho-proteomic data, and an input interaction network. It classifies the given genes/proteins into time-dependent groups based on the time of their highest activity and identifies the most probable paths connecting genes/proteins in consecutive groups. The response sub-network is enriched in activated genes/proteins and contains novel regulators that do not show any observable change in the input data. Users can view the resultant response network and analyze it for functional enrichment. TimeXNet Web supports the analysis of high-throughput data from multiple species by providing high quality, weighted protein-protein interaction networks for 12 model organisms. http://txnet.hgc.jp/. ashwini@hgc.jp. Supplementary data are available at Bioinformatics online.
Thoreson, Wallace B.; Van Hook, Matthew J.; Parmelee, Caitlyn; Curto, Carina
2015-01-01
Post-synaptic responses are a product of quantal amplitude (Q), size of the releasable vesicle pool (N), and release probability (P). Voltage-dependent changes in presynaptic Ca2+ entry alter post-synaptic responses primarily by changing P but have also been shown to influence N. With simultaneous whole cell recordings from cone photoreceptors and horizontal cells in tiger salamander retinal slices, we measured N and P at cone ribbon synapses by using a train of depolarizing pulses to stimulate release and deplete the pool. We developed an analytical model that calculates the total pool size contributing to release under different stimulus conditions by taking into account the prior history of release and empirically-determined properties of replenishment. The model provided a formula that calculates vesicle pool size from measurements of the initial post-synaptic response and limiting rate of release evoked by a train of pulses, the fraction of release sites available for replenishment, and the time constant for replenishment. Results of the model showed that weak and strong depolarizing stimuli evoked release with differing probabilities but the same size vesicle pool. Enhancing intraterminal Ca2+ spread by lowering Ca2+ buffering or applying BayK8644 did not increase PSCs evoked with strong test steps showing there is a fixed upper limit to pool size. Together, these results suggest that light-evoked changes in cone membrane potential alter synaptic release solely by changing release probability. PMID:26541100
Memory-induced resonancelike suppression of spike generation in a resonate-and-fire neuron model
NASA Astrophysics Data System (ADS)
Mankin, Romi; Paekivi, Sander
2018-01-01
The behavior of a stochastic resonate-and-fire neuron model based on a reduction of a fractional noise-driven generalized Langevin equation (GLE) with a power-law memory kernel is considered. The effect of temporally correlated random activity of synaptic inputs, which arise from other neurons forming local and distant networks, is modeled as an additive fractional Gaussian noise in the GLE. Using a first-passage-time formulation, in certain system parameter domains exact expressions for the output interspike interval (ISI) density and for the survival probability (the probability that a spike is not generated) are derived and their dependence on input parameters, especially on the memory exponent, is analyzed. In the case of external white noise, it is shown that at intermediate values of the memory exponent the survival probability is significantly enhanced in comparison with the cases of strong and weak memory, which causes a resonancelike suppression of the probability of spike generation as a function of the memory exponent. Moreover, an examination of the dependence of multimodality in the ISI distribution on input parameters shows that there exists a critical memory exponent αc≈0.402 , which marks a dynamical transition in the behavior of the system. That phenomenon is illustrated by a phase diagram describing the emergence of three qualitatively different structures of the ISI distribution. Similarities and differences between the behavior of the model at internal and external noises are also discussed.
Influence of item distribution pattern and abundance on efficiency of benthic core sampling
Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.
2014-01-01
ore sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small diameter core samples was always more time-efficient than taking fewer large diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
Implications of fault constitutive properties for earthquake prediction
Dieterich, J.H.; Kilgore, B.
1996-01-01
The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance D(c), apparent fracture energy at a rupture front, time- dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of D, apply to faults in nature. However, scaling of D(c) is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
Implications of fault constitutive properties for earthquake prediction.
Dieterich, J H; Kilgore, B
1996-04-30
The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
Emergent mechanics, quantum and un-quantum
NASA Astrophysics Data System (ADS)
Ralston, John P.
2013-10-01
There is great interest in quantum mechanics as an "emergent" phenomenon. The program holds that nonobvious patterns and laws can emerge from complicated physical systems operating by more fundamental rules. We find a new approach where quantum mechanics itself should be viewed as an information management tool not derived from physics nor depending on physics. The main accomplishment of quantum-style theory comes in expanding the notion of probability. We construct a map from macroscopic information as data" to quantum probability. The map allows a hidden variable description for quantum states, and efficient use of the helpful tools of quantum mechanics in unlimited circumstances. Quantum dynamics via the time-dependent Shroedinger equation or operator methods actually represents a restricted class of classical Hamiltonian or Lagrangian dynamics, albeit with different numbers of degrees of freedom. We show that under wide circumstances such dynamics emerges from structureless dynamical systems. The uses of the quantum information management tools are illustrated by numerical experiments and practical applications
1985-11-26
etc.).., Major decisions involving reliability ptudies, based on competing risk methodology , have been made in the past and will continue to be made...censoring mechanism. In such instances, the methodology for estimating relevant reliabili- ty probabilities has received considerable attention (cf. David...proposal for a discussion of the general methodology . .,4..% . - ’ -. - ’ . ’ , . * I - " . . - - - - . . ,_ . . . . . . . . .4
Dynamic Blowout Risk Analysis Using Loss Functions.
Abimbola, Majeed; Khan, Faisal
2018-02-01
Most risk analysis approaches are static; failing to capture evolving conditions. Blowout, the most feared accident during a drilling operation, is a complex and dynamic event. The traditional risk analysis methods are useful in the early design stage of drilling operation while falling short during evolving operational decision making. A new dynamic risk analysis approach is presented to capture evolving situations through dynamic probability and consequence models. The dynamic consequence models, the focus of this study, are developed in terms of loss functions. These models are subsequently integrated with the probability to estimate operational risk, providing a real-time risk analysis. The real-time evolving situation is considered dependent on the changing bottom-hole pressure as drilling progresses. The application of the methodology and models are demonstrated with a case study of an offshore drilling operation evolving to a blowout. © 2017 Society for Risk Analysis.
Survival curve estimation with dependent left truncated data using Cox's model.
Mackenzie, Todd
2012-10-19
The Kaplan-Meier and closely related Lynden-Bell estimators are used to provide nonparametric estimation of the distribution of a left-truncated random variable. These estimators assume that the left-truncation variable is independent of the time-to-event. This paper proposes a semiparametric method for estimating the marginal distribution of the time-to-event that does not require independence. It models the conditional distribution of the time-to-event given the truncation variable using Cox's model for left truncated data, and uses inverse probability weighting. We report the results of simulations and illustrate the method using a survival study.
Zone clearance in an infinite TASEP with a step initial condition
NASA Astrophysics Data System (ADS)
Cividini, Julien; Appert-Rolland, Cécile
2017-06-01
The TASEP is a paradigmatic model of out-of-equilibrium statistical physics, for which many quantities have been computed, either exactly or by approximate methods. In this work we study two new kinds of observables that have some relevance in biological or traffic models. They represent the probability for a given clearance zone of the lattice to be empty (for the first time) at a given time, starting from a step density profile. Exact expressions are obtained for single-time quantities, while more involved history-dependent observables are studied by Monte Carlo simulation, and partially predicted by a phenomenological approach.
Transient Reliability of Ceramic Structures For Heat Engine Applications
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Jadaan, Osama M.
2002-01-01
The objectives of this report was to develop a methodology to predict the time-dependent reliability (probability of failure) of brittle material components subjected to transient thermomechanical loading, taking into account the change in material response with time. This methodology for computing the transient reliability in ceramic components subjected to fluctuation thermomechanical loading was developed, assuming SCG (Slow Crack Growth) as the delayed mode of failure. It takes into account the effect of varying Weibull modulus and materials with time. It was also coded into a beta version of NASA's CARES/Life code, and an example demonstrating its viability was presented.
Measures for a multidimensional multiverse
NASA Astrophysics Data System (ADS)
Chung, Hyeyoun
2015-04-01
We explore the phenomenological implications of generalizing the causal patch and fat geodesic measures to a multidimensional multiverse, where the vacua can have differing numbers of large dimensions. We consider a simple model in which the vacua are nucleated from a D -dimensional parent spacetime through dynamical compactification of the extra dimensions, and compute the geometric contribution to the probability distribution of observations within the multiverse for each measure. We then study how the shape of this probability distribution depends on the time scales for the existence of observers, for vacuum domination, and for curvature domination (tobs,tΛ , and tc, respectively.) In this work we restrict ourselves to bubbles with positive cosmological constant, Λ . We find that in the case of the causal patch cutoff, when the bubble universes have p +1 large spatial dimensions with p ≥2 , the shape of the probability distribution is such that we obtain the coincidence of time scales tobs˜tΛ˜tc . Moreover, the size of the cosmological constant is related to the size of the landscape. However, the exact shape of the probability distribution is different in the case p =2 , compared to p ≥3 . In the case of the fat geodesic measure, the result is even more robust: the shape of the probability distribution is the same for all p ≥2 , and we once again obtain the coincidence tobs˜tΛ˜tc . These results require only very mild conditions on the prior probability of the distribution of vacua in the landscape. Our work shows that the observed double coincidence of time scales is a robust prediction even when the multiverse is generalized to be multidimensional; that this coincidence is not a consequence of our particular Universe being (3 +1 )-dimensional; and that this observable cannot be used to preferentially select one measure over another in a multidimensional multiverse.
Schmidt, Benedikt R; Hödl, Walter; Schaub, Michael
2012-03-01
Performance in one stage of a complex life cycle may affect performance in the subsequent stage. Animals that start a new stage at a smaller size than conspecifics may either always remain smaller or they may be able to "catch up" through plasticity, usually elevated growth rates. We study how size at and date of metamorphosis affected subsequent performance in the terrestrial juvenile stage and lifetime fitness of spadefoot toads (Pelobates fuscus). We analyzed capture-recapture data of > 3000 individuals sampled during nine years with mark-recapture models to estimate first-year juvenile survival probabilities and age-specific first-time breeding probabilities of toads, followed by model selection to assess whether these probabilities were correlated with size at and date of metamorphosis. Males attained maturity after two years, whereas females reached maturity 2-4 years after metamorphosis. Age at maturity was weakly correlated with metamorphic traits. In both sexes, first-year juvenile survival depended positively on date of metamorphosis and, in males, also negatively on size at metamorphosis. In males, toads that metamorphosed early at a small size had the highest probability to reach maturity. However, because very few toadlets metamorphosed early, the vast majority of male metamorphs had a very similar probability to reach maturity. A matrix projection model constructed for females showed that different juvenile life history pathways resulted in similar lifetime fitness. We found that the effects of date of and size at metamorphosis on different juvenile traits cancelled each other out such that toads that were small or large at metamorphosis had equal performance. Because the costs and benefits of juvenile life history pathways may also depend on population fluctuations, ample phenotypic variation in life history traits may be maintained.
Statistical context shapes stimulus-specific adaptation in human auditory cortex.
Herrmann, Björn; Henry, Molly J; Fromboluti, Elisa Kim; McAuley, J Devin; Obleser, Jonas
2015-04-01
Stimulus-specific adaptation is the phenomenon whereby neural response magnitude decreases with repeated stimulation. Inconsistencies between recent nonhuman animal recordings and computational modeling suggest dynamic influences on stimulus-specific adaptation. The present human electroencephalography (EEG) study investigates the potential role of statistical context in dynamically modulating stimulus-specific adaptation by examining the auditory cortex-generated N1 and P2 components. As in previous studies of stimulus-specific adaptation, listeners were presented with oddball sequences in which the presentation of a repeated tone was infrequently interrupted by rare spectral changes taking on three different magnitudes. Critically, the statistical context varied with respect to the probability of small versus large spectral changes within oddball sequences (half of the time a small change was most probable; in the other half a large change was most probable). We observed larger N1 and P2 amplitudes (i.e., release from adaptation) for all spectral changes in the small-change compared with the large-change statistical context. The increase in response magnitude also held for responses to tones presented with high probability, indicating that statistical adaptation can overrule stimulus probability per se in its influence on neural responses. Computational modeling showed that the degree of coadaptation in auditory cortex changed depending on the statistical context, which in turn affected stimulus-specific adaptation. Thus the present data demonstrate that stimulus-specific adaptation in human auditory cortex critically depends on statistical context. Finally, the present results challenge the implicit assumption of stationarity of neural response magnitudes that governs the practice of isolating established deviant-detection responses such as the mismatch negativity. Copyright © 2015 the American Physiological Society.
Relative frequencies of seismic main shocks after strong shocks in Italy
NASA Astrophysics Data System (ADS)
Gasperini, Paolo; Lolli, Barbara; Vannucci, Gianfranco
2016-10-01
We analysed a catalogue of Italian earthquakes, covering 55 yr of data from 1960 to 2014 with magnitudes homogeneously converted to Mw, to compute the time-dependent relative frequencies with which strong seismic shocks (4.0 ≤ Mw < 5.0), widely felt by the population, have been followed by main shocks (Mw ≥ 5.0) that threatened the health and the properties of the persons living in the epicentral area. Assuming the stationarity of the seismic release properties, such frequencies are estimates of the probabilities of potentially destructive shocks after the occurrence of future strong shocks. We compared them with the time-independent relative frequencies of random occurrence in terms of the frequency gain that is the ratio between the time-dependent and time-independent relative frequencies. The time-dependent relative frequencies vary from less than 1 per cent to about 20 per cent, depending on the magnitudes of the shocks and the time windows considered (ranging from minutes to years). They remain almost constant for a few hours after the strong shock and then decrease with time logarithmically. Strong earthquakes (with Mw ≥ 6.0) mainly occurred within two or three months of the strong shock. The frequency gains vary from about 10 000 for very short time intervals to less than 10 for a time interval of 2 yr. Only about 1/3 of main shocks were preceded by at least a strong shock in the previous day and about 1/2 in the previous month.
Load and Pi control flux through the branched kinetic cycle of myosin V.
Kad, Neil M; Trybus, Kathleen M; Warshaw, David M
2008-06-20
Myosin V is a processive actin-based motor protein that takes multiple 36-nm steps to deliver intracellular cargo to its destination. In the laser trap, applied load slows myosin V heavy meromyosin stepping and increases the probability of backsteps. In the presence of 40 mm phosphate (P(i)), both forward and backward steps become less load-dependent. From these data, we infer that P(i) release commits myosin V to undergo a highly load-dependent transition from a state in which ADP is bound to both heads and its lead head trapped in a pre-powerstroke conformation. Increasing the residence time in this state by applying load increases the probability of backstepping or detachment. The kinetics of detachment indicate that myosin V can detach from actin at two distinct points in the cycle, one of which is turned off by the presence of P(i). We propose a branched kinetic model to explain these data. Our model includes P(i) release prior to the most load-dependent step in the cycle, implying that P(i) release and load both act as checkpoints that control the flux through two parallel pathways.
Singh, A S; Shah, A; Brockmann, A
2018-02-01
In honey bees, continuous foraging at an artificial feeder induced a sustained upregulation of the immediate early genes early growth response protein 1 (Egr-1) and hormone receptor 38 (Hr38). This gene expression response was accompanied by an upregulation of several Egr-1 candidate downstream genes: ecdysone receptor (EcR), dopamine/ecdysteroid receptor (DopEcR), dopamine decarboxylase and dopamine receptor 2. Hr38, EcR and DopEcR are components of the ecdysteroid signalling pathway, which is highly probably involved in learning and memory processes in honey bees and other insects. Time-trained foragers still showed an upregulation of Egr-1 when the feeder was presented at an earlier time of the day, suggesting that the genomic response is more dependent on the food reward than training time. However, presentation of the feeder at the training time without food was still capable of inducing a transient increase in Egr-1 expression. Thus, learnt feeder cues, or even training time, probably affect Egr-1 expression. In contrast, whole brain Egr-1 expression changes did not differ between dancing and nondancing foragers. On the basis of our results we propose that food reward induced continuous foraging ultimately elicits a genomic response involving Egr-1 and Hr38 and their downstream genes. Furthermore this genomic response is highly probably involved in foraging-related learning and memory responses. © 2017 The Royal Entomological Society.
Prediction of future asset prices
NASA Astrophysics Data System (ADS)
Seong, Ng Yew; Hin, Pooi Ah; Ching, Soo Huei
2014-12-01
This paper attempts to incorporate trading volumes as an additional predictor for predicting asset prices. Denoting r(t) as the vector consisting of the time-t values of the trading volume and price of a given asset, we model the time-(t+1) asset price to be dependent on the present and l-1 past values r(t), r(t-1), ....., r(t-1+1) via a conditional distribution which is derived from a (2l+1)-dimensional power-normal distribution. A prediction interval based on the 100(α/2)% and 100(1-α/2)% points of the conditional distribution is then obtained. By examining the average lengths of the prediction intervals found by using the composite indices of the Malaysia stock market for the period 2008 to 2013, we found that the value 2 appears to be a good choice for l. With the omission of the trading volume in the vector r(t), the corresponding prediction interval exhibits a slightly longer average length, showing that it might be desirable to keep trading volume as a predictor. From the above conditional distribution, the probability that the time-(t+1) asset price will be larger than the time-t asset price is next computed. When the probability differs from 0 (or 1) by less than 0.03, the observed time-(t+1) increase in price tends to be negative (or positive). Thus the above probability has a good potential of being used as a market indicator in technical analysis.
Seismicity alert probabilities at Parkfield, California, revisited
Michael, A.J.; Jones, L.M.
1998-01-01
For a decade, the US Geological Survey has used the Parkfield Earthquake Prediction Experiment scenario document to estimate the probability that earthquakes observed on the San Andreas fault near Parkfield will turn out to be foreshocks followed by the expected magnitude six mainshock. During this time, we have learned much about the seismogenic process at Parkfield, about the long-term probability of the Parkfield mainshock, and about the estimation of these types of probabilities. The probabilities for potential foreshocks at Parkfield are reexamined and revised in light of these advances. As part of this process, we have confirmed both the rate of foreshocks before strike-slip earthquakes in the San Andreas physiographic province and the uniform distribution of foreshocks with magnitude proposed by earlier studies. Compared to the earlier assessment, these new estimates of the long-term probability of the Parkfield mainshock are lower, our estimate of the rate of background seismicity is higher, and we find that the assumption that foreshocks at Parkfield occur in a unique way is not statistically significant at the 95% confidence level. While the exact numbers vary depending on the assumptions that are made, the new alert probabilities are lower than previously estimated. Considering the various assumptions and the statistical uncertainties in the input parameters, we also compute a plausible range for the probabilities. The range is large, partly due to the extra knowledge that exists for the Parkfield segment, making us question the usefulness of these numbers.
Ghamari-Langroudi, M; Glavinovíc, M I
1998-01-01
Spontaneous miniature excitatory postsynaptic currents (mEPSCs) in rat hippocampal pyramidal neurones in slices (CA1 region) were recorded at 35-37 degrees C using the whole-cell patch-clamp technique before and after addition of aniracetam (1 mM) to determine how a partial blockade of desensitization alters the relationship between the amplitude (A) and kinetics of mEPSCs, and to evaluate the factors that determine their variability. The rise time (taur) and the time constant of decay of mEPSCs (taud) are essentially amplitude independent in control conditions, but become clearly amplitude dependent in the presence of aniracetam. The slopes of the best fitting lines to taud:A and taur:A data pairs were (+/- SD; ms/pA; n = 5): (1) (control) 0.07 +/- 0.02 and 0.008 +/- 0.003; (2) (aniracetam) 0.40 +/- 0.19 and 0.22 +/- 0.22. The amplitude-dependent prolongation of taud is explained by the concentration dependence of two related processes, the buffering of glutamate molecules by AMPA receptor channels, and the occupancy of the double-bound activatable states. A slower deactivation makes an amplitude-independent contribution. Desensitization reduces the amplitude dependence of taud by minimizing repeated openings of alpha-amino-3-hydroxy-methyl-isoxazole (AMPA) receptor channels. A greater amplitude dependence of taur probably involves both pre- and postsynaptic factors. The variability of A and taud values did not change significantly, but the factors underlying the variability of taud values were much affected. The greater amplitude dependence and the greater scatter about the best fitting lines to taud:A data pairs are approximately balanced by the greater mean values. The greater scatter of taud about the best fitting lines probably occurs because the saturation of AMPA receptors is not the same at different synapses with different numbers of AMPA receptors.
Functional mechanisms of probabilistic inference in feature- and space-based attentional systems.
Dombert, Pascasie L; Kuhns, Anna; Mengotti, Paola; Fink, Gereon R; Vossel, Simone
2016-11-15
Humans flexibly attend to features or locations and these processes are influenced by the probability of sensory events. We combined computational modeling of response times with fMRI to compare the functional correlates of (re-)orienting, and the modulation by probabilistic inference in spatial and feature-based attention systems. Twenty-four volunteers performed two task versions with spatial or color cues. Percentage of cue validity changed unpredictably. A hierarchical Bayesian model was used to derive trial-wise estimates of probability-dependent attention, entering the fMRI analysis as parametric regressors. Attentional orienting activated a dorsal frontoparietal network in both tasks, without significant parametric modulation. Spatially invalid trials activated a bilateral frontoparietal network and the precuneus, while invalid feature trials activated the left intraparietal sulcus (IPS). Probability-dependent attention modulated activity in the precuneus, left posterior IPS, middle occipital gyrus, and right temporoparietal junction for spatial attention, and in the left anterior IPS for feature-based and spatial attention. These findings provide novel insights into the generality and specificity of the functional basis of attentional control. They suggest that probabilistic inference can distinctively affect each attentional subsystem, but that there is an overlap in the left IPS, which responds to both spatial and feature-based expectancy violations. Copyright © 2016 Elsevier Inc. All rights reserved.
Test for age-specificity in survival of the common tern
Nisbet, I.C.T.; Cam, E.
2002-01-01
Much effort in life-history theory has been addressed to the dependence of life-history traits on age, especially the phenomenon of senescence and its evolution. Although senescent declines in survival are well documented in humans and in domestic and laboratory animals, evidence for their occurrence and importance in wild animal species remains limited and equivocal. Several recent papers have suggested that methodological issues may contribute to this problem, and have encouraged investigators to improve sampling designs and to analyse their data using recently developed approaches to modelling of capture-mark-recapture data. Here we report on a three-year, two-site, mark-recapture study of known-aged common terns (Sterna hirundo) in the north-eastern USA. The study was nested within a long-term ecological study in which large numbers of chicks had been banded in each year for > 25 years. We used a range of models to test the hypothesis of an influence of age on survival probability. We also tested for a possible influence of sex on survival. The cross-sectional design of the study (one year's parameter estimates) avoided the possible confounding of effects of age and time. The study was conducted at a time when one of the study sites was being colonized and numbers were increasing rapidly. We detected two-way movements between the sites and estimated movement probabilities in the year for which they could be modelled. We also obtained limited data on emigration from our study area to more distant sites. We found no evidence that survival depended on either sex or age, except that survival was lower among the youngest birds (ages 2-3 years). Despite the large number of birds included in the study (1599 known-aged birds, 2367 total), confidence limits on estimates of survival probability were wide, especially for the oldest age-classes, so that a slight decline in survival late in life could not have been detected. In addition, the cross-sectional design of this study meant that a decline in survival probability within individuals (actuarial senescence) could have been masked by heterogeneity in survival probability among individuals (mortality selection). This emphasizes the need for the development of modelling tools permitting separation of these two phenomena, valid under field conditions in which the recapture probabilities are less than one.
Bayesian explorations of fault slip evolution over the earthquake cycle
NASA Astrophysics Data System (ADS)
Duputel, Z.; Jolivet, R.; Benoit, A.; Gombert, B.
2017-12-01
The ever-increasing amount of geophysical data continuously opens new perspectives on fundamental aspects of the seismogenic behavior of active faults. In this context, the recent fleet of SAR satellites including Sentinel-1 and COSMO-SkyMED permits the use of InSAR for time-dependent slip modeling with unprecedented resolution in time and space. However, existing time-dependent slip models rely on spatial smoothing regularization schemes, which can produce unrealistically smooth slip distributions. In addition, these models usually do not include uncertainty estimates thereby reducing the utility of such estimates. Here, we develop an entirely new approach to derive probabilistic time-dependent slip models. This Markov-Chain Monte Carlo method involves a series of transitional steps to predict and update posterior Probability Density Functions (PDFs) of slip as a function of time. We assess the viability of our approach using various slow-slip event scenarios. Using a dense set of SAR images, we also use this method to quantify the spatial distribution and temporal evolution of slip along a creeping segment of the North Anatolian Fault. This allows us to track a shallow aseismic slip transient lasting for about a month with a maximum slip of about 2 cm.
Extreme river flow dependence in Northern Scotland
NASA Astrophysics Data System (ADS)
Villoria, M. Franco; Scott, M.; Hoey, T.; Fischbacher-Smith, D.
2012-04-01
Various methods for the spatial analysis of hydrologic data have been developed recently. Here we present results using the conditional probability approach proposed by Keef et al. [Appl. Stat. (2009): 58,601-18] to investigate spatial interdependence in extreme river flows in Scotland. This approach does not require the specification of a correlation function, being mostly suitable for relatively small geographical areas. The work is motivated by the Flood Risk Management Act (Scotland (2009)) which requires maps of flood risk that take account of spatial dependence in extreme river flow. The method is based on two conditional measures of spatial flood risk: firstly the conditional probability PC(p) that a set of sites Y = (Y 1,...,Y d) within a region C of interest exceed a flow threshold Qp at time t (or any lag of t), given that in the specified conditioning site X > Qp; and, secondly the expected number of sites within C that will exceed a flow Qp on average (given that X > Qp). The conditional probabilities are estimated using the conditional distribution of Y |X = x (for large x), which can be modeled using a semi-parametric approach (Heffernan and Tawn [Roy. Statist. Soc. Ser. B (2004): 66,497-546]). Once the model is fitted, pseudo-samples can be generated to estimate functionals of the joint tails of the distribution of (Y,X). Conditional return level plots were directly compared to traditional return level plots thus improving our understanding of the dependence structure of extreme river flow events. Confidence intervals were calculated using block bootstrapping methods (100 replicates). We report results from applying this approach to a set of four rivers (Dulnain, Lossie, Ewe and Ness) in Northern Scotland. These sites were chosen based on data quality, spatial location and catchment characteristics. The river Ness, being the largest (catchment size 1839.1km2) was chosen as the conditioning river. Both the Ewe (441.1km2) and Ness catchments have predominantly impermeable bedrock, with the Ewe's one being very wet. The Lossie(216km2) and Dulnain (272.2km2) both contain significant areas of glacial deposits. River flow in the Dulnain is usually affected by snowmelt. In all cases, the conditional probability of each of the three rivers (Dulnain, Lossie, Ewe) decreases as the event in the conditioning river (Ness) becomes more extreme. The Ewe, despite being the furthest of the three sites from the Ness shows the strongest dependence, with relatively high (>0.4) conditional probabilities even for very extreme events (>0.995). Although the Lossie is closer geographically to the Ness than the Ewe, it shows relatively low conditional probabilities and can be considered independent of the Ness for very extreme events (> 0.990). The conditional probabilities seem to reflect the different catchment characteristics and dominant precipitation generating events, with the Ewe being more similar to the Ness than the other two rivers. This interpretation suggests that the conditional method may yield improved estimates of extreme events, but the approach is time consuming. An alternative model that is easier to implement, using a spatial quantile regression, is currently being investigated, which would also allow the introduction of further covariates, essential as the effects of climate change are incorporated into estimation procedures.
Lum, Kirsten J; Sundaram, Rajeshwari; Louis, Thomas A
2015-01-01
Prospective pregnancy studies are a valuable source of longitudinal data on menstrual cycle length. However, care is needed when making inferences of such renewal processes. For example, accounting for the sampling plan is necessary for unbiased estimation of the menstrual cycle length distribution for the study population. If couples can enroll when they learn of the study as opposed to waiting for the start of a new menstrual cycle, then due to length-bias, the enrollment cycle will be stochastically larger than the general run of cycles, a typical property of prevalent cohort studies. Furthermore, the probability of enrollment can depend on the length of time since a woman's last menstrual period (a backward recurrence time), resulting in selection effects. We focus on accounting for length-bias and selection effects in the likelihood for enrollment menstrual cycle length, using a recursive two-stage approach wherein we first estimate the probability of enrollment as a function of the backward recurrence time and then use it in a likelihood with sampling weights that account for length-bias and selection effects. To broaden the applicability of our methods, we augment our model to incorporate a couple-specific random effect and time-independent covariate. A simulation study quantifies performance for two scenarios of enrollment probability when proper account is taken of sampling plan features. In addition, we estimate the probability of enrollment and the distribution of menstrual cycle length for the study population of the Longitudinal Investigation of Fertility and the Environment Study. Published by Oxford University Press 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
A generic multi-hazard and multi-risk framework and its application illustrated in a virtual city
NASA Astrophysics Data System (ADS)
Mignan, Arnaud; Euchner, Fabian; Wiemer, Stefan
2013-04-01
We present a generic framework to implement hazard correlations in multi-risk assessment strategies. We consider hazard interactions (process I), time-dependent vulnerability (process II) and time-dependent exposure (process III). Our approach is based on the Monte Carlo method to simulate a complex system, which is defined from assets exposed to a hazardous region. We generate 1-year time series, sampling from a stochastic set of events. Each time series corresponds to one risk scenario and the analysis of multiple time series allows for the probabilistic assessment of losses and for the recognition of more or less probable risk paths. Each sampled event is associated to a time of occurrence, a damage footprint and a loss footprint. The occurrence of an event depends on its rate, which is conditional on the occurrence of past events (process I, concept of correlation matrix). Damage depends on the hazard intensity and on the vulnerability of the asset, which is conditional on previous damage on that asset (process II). Losses are the product of damage and exposure value, this value being the original exposure minus previous losses (process III, no reconstruction considered). The Monte Carlo method allows for a straightforward implementation of uncertainties and for implementation of numerous interactions, which is otherwise challenging in an analytical multi-risk approach. We apply our framework to a synthetic data set, defined by a virtual city within a virtual region. This approach gives the opportunity to perform multi-risk analyses in a controlled environment while not requiring real data, which may be difficultly accessible or simply unavailable to the public. Based on the heuristic approach, we define a 100 by 100 km region where earthquakes, volcanic eruptions, fluvial floods, hurricanes and coastal floods can occur. All hazards are harmonized to a common format. We define a 20 by 20 km city, composed of 50,000 identical buildings with a fixed economic value. Vulnerability curves are defined in terms of mean damage ratio as a function of hazard intensity. All data are based on simple equations found in the literature and on other simplifications. We show the impact of earthquake-earthquake interaction and hurricane-storm surge coupling, as well as of time-dependent vulnerability and exposure, on aggregated loss curves. One main result is the emergence of low probability-high consequences (extreme) events when correlations are implemented. While the concept of virtual city can suggest the theoretical benefits of multi-risk assessment for decision support, identifying their real-world practicality will require the study of real test sites.
Jabar, Syaheed B; Filipowicz, Alex; Anderson, Britt
2017-11-01
When a location is cued, targets appearing at that location are detected more quickly. When a target feature is cued, targets bearing that feature are detected more quickly. These attentional cueing effects are only superficially similar. More detailed analyses find distinct temporal and accuracy profiles for the two different types of cues. This pattern parallels work with probability manipulations, where both feature and spatial probability are known to affect detection accuracy and reaction times. However, little has been done by way of comparing these effects. Are probability manipulations on space and features distinct? In a series of five experiments, we systematically varied spatial probability and feature probability along two dimensions (orientation or color). In addition, we decomposed response times into initiation and movement components. Targets appearing at the probable location were reported more quickly and more accurately regardless of whether the report was based on orientation or color. On the other hand, when either color probability or orientation probability was manipulated, response time and accuracy improvements were specific for that probable feature dimension. Decomposition of the response time benefits demonstrated that spatial probability only affected initiation times, whereas manipulations of feature probability affected both initiation and movement times. As detection was made more difficult, the two effects further diverged, with spatial probability disproportionally affecting initiation times and feature probability disproportionately affecting accuracy. In conclusion, all manipulations of probability, whether spatial or featural, affect detection. However, only feature probability affects perceptual precision, and precision effects are specific to the probable attribute.
Temporal scaling in information propagation.
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-18
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
Quantum Dynamics Study of the Isotopic Effect on Capture Reactions: HD, D2 + CH3
NASA Technical Reports Server (NTRS)
Wang, Dunyou; Kwak, Dochan (Technical Monitor)
2002-01-01
Time-dependent wave-packet-propagation calculations are reported for the isotopic reactions, HD + CH3 and D2 + CH3, in six degrees of freedom and for zero total angular momentum. Initial state selected reaction probabilities for different initial rotational-vibrational states are presented in this study. This study shows that excitations of the HD(D2) enhances the reactivities; whereas the excitations of the CH3 umbrella mode have the opposite effects. This is consistent with the reaction of H2 + CH3. The comparison of these three isotopic reactions also shows the isotopic effects in the initial-state-selected reaction probabilities. The cumulative reaction probabilities (CRP) are obtained by summing over initial-state-selected reaction probabilities. The energy-shift approximation to account for the contribution of degrees of freedom missing in the six dimensionality calculation is employed to obtain approximate full-dimensional CRPs. The rate constant comparison shows H2 + CH3 reaction has the biggest reactivity, then HD + CH3, and D2 + CH3 has the smallest.
Temporal scaling in information propagation
NASA Astrophysics Data System (ADS)
Huang, Junming; Li, Chao; Wang, Wen-Qiang; Shen, Hua-Wei; Li, Guojie; Cheng, Xue-Qi
2014-06-01
For the study of information propagation, one fundamental problem is uncovering universal laws governing the dynamics of information propagation. This problem, from the microscopic perspective, is formulated as estimating the propagation probability that a piece of information propagates from one individual to another. Such a propagation probability generally depends on two major classes of factors: the intrinsic attractiveness of information and the interactions between individuals. Despite the fact that the temporal effect of attractiveness is widely studied, temporal laws underlying individual interactions remain unclear, causing inaccurate prediction of information propagation on evolving social networks. In this report, we empirically study the dynamics of information propagation, using the dataset from a population-scale social media website. We discover a temporal scaling in information propagation: the probability a message propagates between two individuals decays with the length of time latency since their latest interaction, obeying a power-law rule. Leveraging the scaling law, we further propose a temporal model to estimate future propagation probabilities between individuals, reducing the error rate of information propagation prediction from 6.7% to 2.6% and improving viral marketing with 9.7% incremental customers.
Complex growing networks with intrinsic vertex fitness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bedogne, C.; Rodgers, G. J.
2006-10-15
One of the major questions in complex network research is to identify the range of mechanisms by which a complex network can self organize into a scale-free state. In this paper we investigate the interplay between a fitness linking mechanism and both random and preferential attachment. In our models, each vertex is assigned a fitness x, drawn from a probability distribution {rho}(x). In Model A, at each time step a vertex is added and joined to an existing vertex, selected at random, with probability p and an edge is introduced between vertices with fitnesses x and y, with a ratemore » f(x,y), with probability 1-p. Model B differs from Model A in that, with probability p, edges are added with preferential attachment rather than randomly. The analysis of Model A shows that, for every fixed fitness x, the network's degree distribution decays exponentially. In Model B we recover instead a power-law degree distribution whose exponent depends only on p, and we show how this result can be generalized. The properties of a number of particular networks are examined.« less
Single electron relativistic clock interferometer
NASA Astrophysics Data System (ADS)
Bushev, P. A.; Cole, J. H.; Sholokhov, D.; Kukharchyk, N.; Zych, M.
2016-09-01
Although time is one of the fundamental notions in physics, it does not have a unique description. In quantum theory time is a parameter ordering the succession of the probability amplitudes of a quantum system, while according to relativity theory each system experiences in general a different proper time, depending on the system's world line, due to time dilation. It is therefore of fundamental interest to test the notion of time in the regime where both quantum and relativistic effects play a role, for example, when different amplitudes of a single quantum clock experience different magnitudes of time dilation. Here we propose a realization of such an experiment with a single electron in a Penning trap. The clock can be implemented in the electronic spin precession and its time dilation then depends on the radial (cyclotron) state of the electron. We show that coherent manipulation and detection of the electron can be achieved already with present day technology. A single electron in a Penning trap is a technologically ready platform where the notion of time can be probed in a hitherto untested regime, where it requires a relativistic as well as quantum description.
Using multilevel spatial models to understand salamander site occupancy patterns after wildfire
Chelgren, Nathan; Adams, Michael J.; Bailey, Larissa L.; Bury, R. Bruce
2011-01-01
Studies of the distribution of elusive forest wildlife have suffered from the confounding of true presence with the uncertainty of detection. Occupancy modeling, which incorporates probabilities of species detection conditional on presence, is an emerging approach for reducing observation bias. However, the current likelihood modeling framework is restrictive for handling unexplained sources of variation in the response that may occur when there are dependence structures such as smaller sampling units that are nested within larger sampling units. We used multilevel Bayesian occupancy modeling to handle dependence structures and to partition sources of variation in occupancy of sites by terrestrial salamanders (family Plethodontidae) within and surrounding an earlier wildfire in western Oregon, USA. Comparison of model fit favored a spatial N-mixture model that accounted for variation in salamander abundance over models that were based on binary detection/non-detection data. Though catch per unit effort was higher in burned areas than unburned, there was strong support that this pattern was due to a higher probability of capture for individuals in burned plots. Within the burn, the odds of capturing an individual given it was present were 2.06 times the odds outside the burn, reflecting reduced complexity of ground cover in the burn. There was weak support that true occupancy was lower within the burned area. While the odds of occupancy in the burn were 0.49 times the odds outside the burn among the five species, the magnitude of variation attributed to the burn was small in comparison to variation attributed to other landscape variables and to unexplained, spatially autocorrelated random variation. While ordinary occupancy models may separate the biological pattern of interest from variation in detection probability when all sources of variation are known, the addition of random effects structures for unexplained sources of variation in occupancy and detection probability may often more appropriately represent levels of uncertainty. ?? 2011 by the Ecological Society of America.
NASA Astrophysics Data System (ADS)
Yamada, Yuhei; Yamazaki, Yoshihiro
2018-04-01
This study considered a stochastic model for cluster growth in a Markov process with a cluster size dependent additive noise. According to this model, the probability distribution of the cluster size transiently becomes an exponential or a log-normal distribution depending on the initial condition of the growth. In this letter, a master equation is obtained for this model, and derivation of the distributions is discussed.
Winter movement dynamics of Black Brant
Lindberg, Mark S.; Ward, David H.; Tibbitts, T. Lee; Roser, John
2007-01-01
Although North American geese are managed based on their breeding distributions, the dynamics of those breeding populations may be affected by events that occur during the winter. Birth rates of capital breeding geese may be influenced by wintering conditions, mortality may be influenced by timing of migration and wintering distribution, and immigration and emigration among breeding populations may depend on winter movement and timing of pair formation. We examined factors affecting movements of black brant (Branta bernicla nigricans) among their primary wintering sites in Mexico and southern California, USA, (Mar 1998-Mar 2000) using capture-recapture models. Although brant exhibited high probability (>0.85) of monthly and annual fidelity to the wintering sites we sampled, we observed movements among all wintering sites. Movement probabilities both within and among winters were negatively related to distance between sites. We observed a higher probability both of southward movement between winters (Mar to Dec) and northward movement between months within winters. Between-winter movements were probably most strongly affected by spatial and temporal variation in habitat quality as we saw movement patterns consistent with contrasting environmental conditions (e.g., La Niña and El Niño southern oscillation cycles). Month-to-month movements were related to migration patterns and may also have been affected by differences in habitat conditions among sites. Patterns of winter movements indicate that a network of wintering sites may be necessary for effective conservation of brant.
Winter movement dynamics of black brant
Lindberg, Mark S.; Ward, David H.; Tibbitts, T. Lee; Roser, John
2007-01-01
Although North American geese are managed based on their breeding distributions, the dynamics of those breeding populations may be affected by events that occur during the winter. Birth rates of capital breeding geese may be influenced by wintering conditions, mortality may be influenced by timing of migration and wintering distribution, and immigration and emigration among breeding populations may depend on winter movement and timing of pair formation. We examined factors affecting movements of black brant (Branta bernicla nigricans) among their primary wintering sites in Mexico and southern California, USA, (Mar 1998–Mar 2000) using capture–recapture models. Although brant exhibited high probability (>0.85) of monthly and annual fidelity to the wintering sites we sampled, we observed movements among all wintering sites. Movement probabilities both within and among winters were negatively related to distance between sites. We observed a higher probability both of southward movement between winters (Mar to Dec) and northward movement between months within winters. Between-winter movements were probably most strongly affected by spatial and temporal variation in habitat quality as we saw movement patterns consistent with contrasting environmental conditions (e.g., La Niña and El Niño southern oscillation cycles). Month-to-month movements were related to migration patterns and may also have been affected by differences in habitat conditions among sites. Patterns of winter movements indicate that a network of wintering sites may be necessary for effective conservation of brant.
NASA Astrophysics Data System (ADS)
Sinkin, Oleg V.; Grigoryan, Vladimir S.; Menyuk, Curtis R.
2006-12-01
We introduce a fully deterministic, computationally efficient method for characterizing the effect of nonlinearity in optical fiber transmission systems that utilize wavelength-division multiplexing and return-to-zero modulation. The method accurately accounts for bit-pattern-dependent nonlinear distortion due to collision-induced timing jitter and for amplifier noise. We apply this method to calculate the error probability as a function of channel spacing in a prototypical multichannel return-to-zero undersea system.
ERIC Educational Resources Information Center
Kuzmak, Sylvia
2016-01-01
Teaching probability and statistics is more than teaching the mathematics itself. Historically, the mathematics of probability and statistics was first developed through analyzing games of chance such as the rolling of dice. This article makes the case that the understanding of probability and statistics is dependent upon building a…
Yang, Xinsong; Feng, Zhiguo; Feng, Jianwen; Cao, Jinde
2017-01-01
In this paper, synchronization in an array of discrete-time neural networks (DTNNs) with time-varying delays coupled by Markov jump topologies is considered. It is assumed that the switching information can be collected by a tracker with a certain probability and transmitted from the tracker to controller precisely. Then the controller selects suitable control gains based on the received switching information to synchronize the network. This new control scheme makes full use of received information and overcomes the shortcomings of mode-dependent and mode-independent control schemes. Moreover, the proposed control method includes both the mode-dependent and mode-independent control techniques as special cases. By using linear matrix inequality (LMI) method and designing new Lyapunov functionals, delay-dependent conditions are derived to guarantee that the DTNNs with Markov jump topologies to be asymptotically synchronized. Compared with existing results on Markov systems which are obtained by separately using mode-dependent and mode-independent methods, our result has great flexibility in practical applications. Numerical simulations are finally given to demonstrate the effectiveness of the theoretical results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Reduced-order dynamic output feedback control of uncertain discrete-time Markov jump linear systems
NASA Astrophysics Data System (ADS)
Morais, Cecília F.; Braga, Márcio F.; Oliveira, Ricardo C. L. F.; Peres, Pedro L. D.
2017-11-01
This paper deals with the problem of designing reduced-order robust dynamic output feedback controllers for discrete-time Markov jump linear systems (MJLS) with polytopic state space matrices and uncertain transition probabilities. Starting from a full order, mode-dependent and polynomially parameter-dependent dynamic output feedback controller, sufficient linear matrix inequality based conditions are provided for the existence of a robust reduced-order dynamic output feedback stabilising controller with complete, partial or none mode dependency assuring an upper bound to the ? or the ? norm of the closed-loop system. The main advantage of the proposed method when compared to the existing approaches is the fact that the dynamic controllers are exclusively expressed in terms of the decision variables of the problem. In other words, the matrices that define the controller realisation do not depend explicitly on the state space matrices associated with the modes of the MJLS. As a consequence, the method is specially suitable to handle order reduction or cluster availability constraints in the context of ? or ? dynamic output feedback control of discrete-time MJLS. Additionally, as illustrated by means of numerical examples, the proposed approach can provide less conservative results than other conditions in the literature.
Langevin equation with time dependent linear force and periodic load force: stochastic resonance
NASA Astrophysics Data System (ADS)
Sau Fa, Kwok
2017-11-01
The motion of a particle described by the Langevin equation with constant diffusion coefficient, time dependent linear force (ω (1+α \\cos ({ω }1t))x) and periodic load force ({A}0\\cos ({{Ω }}t)) is investigated. Analytical solutions for the probability density function (PDF) and n-moment are obtained and analysed. For {ω }1\\gg α ω the influence of the periodic term α \\cos ({ω }1t) is negligible to the PDF and n-moment for any time; this result shows that the statistical averages such as n-moments and the PDF have no access to some information of the system. For small and intermediate values of {ω }1 the influence of the periodic term α \\cos ({ω }1t) to the system is also analysed; in particular the system may present multiresonance. The solutions are obtained in a direct and pedagogical manner readily understandable by graduate students.
Non-Maximal Tripartite Entanglement Degradation of Dirac and Scalar Fields in Non-Inertial Frames
NASA Astrophysics Data System (ADS)
Salman, Khan; Niaz, Ali Khan; M. K., Khan
2014-03-01
The π-tangle is used to study the behavior of entanglement of a nonmaximal tripartite state of both Dirac and scalar fields in accelerated frame. For Dirac fields, the degree of degradation with acceleration of both one-tangle of accelerated observer and π-tangle, for the same initial entanglement, is different by just interchanging the values of probability amplitudes. A fraction of both one-tangles and the π-tangle always survives for any choice of acceleration and the degree of initial entanglement. For scalar field, the one-tangle of accelerated observer depends on the choice of values of probability amplitudes and it vanishes in the range of infinite acceleration, whereas for π-tangle this is not always true. The dependence of π-tangle on probability amplitudes varies with acceleration. In the lower range of acceleration, its behavior changes by switching between the values of probability amplitudes and for larger values of acceleration this dependence on probability amplitudes vanishes. Interestingly, unlike bipartite entanglement, the degradation of π-tangle against acceleration in the case of scalar fields is slower than for Dirac fields.
Barrett, Jeffrey S; Jayaraman, Bhuvana; Patel, Dimple; Skolnik, Jeffrey M
2008-06-01
Previous exploration of oncology study design efficiency has focused on Markov processes alone (probability-based events) without consideration for time dependencies. Barriers to study completion include time delays associated with patient accrual, inevaluability (IE), time to dose limiting toxicities (DLT) and administrative and review time. Discrete event simulation (DES) can incorporate probability-based assignment of DLT and IE frequency, correlated with cohort in the case of DLT, with time-based events defined by stochastic relationships. A SAS-based solution to examine study efficiency metrics and evaluate design modifications that would improve study efficiency is presented. Virtual patients are simulated with attributes defined from prior distributions of relevant patient characteristics. Study population datasets are read into SAS macros which select patients and enroll them into a study based on the specific design criteria if the study is open to enrollment. Waiting times, arrival times and time to study events are also sampled from prior distributions; post-processing of study simulations is provided within the decision macros and compared across designs in a separate post-processing algorithm. This solution is examined via comparison of the standard 3+3 decision rule relative to the "rolling 6" design, a newly proposed enrollment strategy for the phase I pediatric oncology setting.
Crimaldi, John P.; Thompson, Janet K.; Rosman, Johanna H.; Lowe, Ryan J.; Koseff, Jeffrey R.
2002-01-01
We describe a laboratory investigation into the effect of turbulent hydrodynamic stresses on clam larvae in the settlement phase of the recruitment process. A two-component laser-Doppler anemometer (LDA) was used to measure time histories of the instantaneous turbulence structure at potential recruitment sites within reconstructed beds of the adult Asian clam, Potamocorbula amurensis. Measurements were made for two flow speeds over beds with three different clam densities and two different clam heights. We analyze the statistical effect of the turbulence on the larval flux to the bed and on the probability of successful anchoring to the substrate. It is shown that the anchoring probability depends on the nature of the instantaneous stress events rather than on mean stresses. The instantaneous turbulence structure near the bed is altered by the flow rate and the spacing and height of adult clams living in the substrate. The ability to anchor quickly is therefore extremely important, since the time sequence of episodic turbulent stress events influences larval settlement success. The probability of successful larval settlement is predicted to decrease as the spacing between adults decreases, implying that the hydrodynamics impose negative feedback on clam bed aggregation dynamics.
Quantum Inference on Bayesian Networks
NASA Astrophysics Data System (ADS)
Yoder, Theodore; Low, Guang Hao; Chuang, Isaac
2014-03-01
Because quantum physics is naturally probabilistic, it seems reasonable to expect physical systems to describe probabilities and their evolution in a natural fashion. Here, we use quantum computation to speedup sampling from a graphical probability model, the Bayesian network. A specialization of this sampling problem is approximate Bayesian inference, where the distribution on query variables is sampled given the values e of evidence variables. Inference is a key part of modern machine learning and artificial intelligence tasks, but is known to be NP-hard. Classically, a single unbiased sample is obtained from a Bayesian network on n variables with at most m parents per node in time (nmP(e) - 1 / 2) , depending critically on P(e) , the probability the evidence might occur in the first place. However, by implementing a quantum version of rejection sampling, we obtain a square-root speedup, taking (n2m P(e) -1/2) time per sample. The speedup is the result of amplitude amplification, which is proving to be broadly applicable in sampling and machine learning tasks. In particular, we provide an explicit and efficient circuit construction that implements the algorithm without the need for oracle access.
Detailed Analysis of the Interoccurrence Time Statistics in Seismic Activity
NASA Astrophysics Data System (ADS)
Tanaka, Hiroki; Aizawa, Yoji
2017-02-01
The interoccurrence time statistics of seismiciry is studied theoretically as well as numerically by taking into account the conditional probability and the correlations among many earthquakes in different magnitude levels. It is known so far that the interoccurrence time statistics is well approximated by the Weibull distribution, but the more detailed information about the interoccurrence times can be obtained from the analysis of the conditional probability. Firstly, we propose the Embedding Equation Theory (EET), where the conditional probability is described by two kinds of correlation coefficients; one is the magnitude correlation and the other is the inter-event time correlation. Furthermore, the scaling law of each correlation coefficient is clearly determined from the numerical data-analysis carrying out with the Preliminary Determination of Epicenter (PDE) Catalog and the Japan Meteorological Agency (JMA) Catalog. Secondly, the EET is examined to derive the magnitude dependence of the interoccurrence time statistics and the multi-fractal relation is successfully formulated. Theoretically we cannot prove the universality of the multi-fractal relation in seismic activity; nevertheless, the theoretical results well reproduce all numerical data in our analysis, where several common features or the invariant aspects are clearly observed. Especially in the case of stationary ensembles the multi-fractal relation seems to obey an invariant curve, furthermore in the case of non-stationary (moving time) ensembles for the aftershock regime the multi-fractal relation seems to satisfy a certain invariant curve at any moving times. It is emphasized that the multi-fractal relation plays an important role to unify the statistical laws of seismicity: actually the Gutenberg-Richter law and the Weibull distribution are unified in the multi-fractal relation, and some universality conjectures regarding the seismicity are briefly discussed.
Guest Editor's Introduction: Special section on dependable distributed systems
NASA Astrophysics Data System (ADS)
Fetzer, Christof
1999-09-01
We rely more and more on computers. For example, the Internet reshapes the way we do business. A `computer outage' can cost a company a substantial amount of money. Not only with respect to the business lost during an outage, but also with respect to the negative publicity the company receives. This is especially true for Internet companies. After recent computer outages of Internet companies, we have seen a drastic fall of the shares of the affected companies. There are multiple causes for computer outages. Although computer hardware becomes more reliable, hardware related outages remain an important issue. For example, some of the recent computer outages of companies were caused by failed memory and system boards, and even by crashed disks - a failure type which can easily be masked using disk mirroring. Transient hardware failures might also look like software failures and, hence, might be incorrectly classified as such. However, many outages are software related. Faulty system software, middleware, and application software can crash a system. Dependable computing systems are systems we can rely on. Dependable systems are, by definition, reliable, available, safe and secure [3]. This special section focuses on issues related to dependable distributed systems. Distributed systems have the potential to be more dependable than a single computer because the probability that all computers in a distributed system fail is smaller than the probability that a single computer fails. However, if a distributed system is not built well, it is potentially less dependable than a single computer since the probability that at least one computer in a distributed system fails is higher than the probability that one computer fails. For example, if the crash of any computer in a distributed system can bring the complete system to a halt, the system is less dependable than a single-computer system. Building dependable distributed systems is an extremely difficult task. There is no silver bullet solution. Instead one has to apply a variety of engineering techniques [2]: fault-avoidance (minimize the occurrence of faults, e.g. by using a proper design process), fault-removal (remove faults before they occur, e.g. by testing), fault-evasion (predict faults by monitoring and reconfigure the system before failures occur), and fault-tolerance (mask and/or contain failures). Building a system from scratch is an expensive and time consuming effort. To reduce the cost of building dependable distributed systems, one would choose to use commercial off-the-shelf (COTS) components whenever possible. The usage of COTS components has several potential advantages beyond minimizing costs. For example, through the widespread usage of a COTS component, design failures might be detected and fixed before the component is used in a dependable system. Custom-designed components have to mature without the widespread in-field testing of COTS components. COTS components have various potential disadvantages when used in dependable systems. For example, minimizing the time to market might lead to the release of components with inherent design faults (e.g. use of `shortcuts' that only work most of the time). In addition, the components might be more complex than needed and, hence, potentially have more design faults than simpler components. However, given economic constraints and the ability to cope with some of the problems using fault-evasion and fault-tolerance, only for a small percentage of systems can one justify not using COTS components. Distributed systems built from current COTS components are asynchronous systems in the sense that there exists no a priori known bound on the transmission delay of messages or the execution time of processes. When designing a distributed algorithm, one would like to make sure (e.g. by testing or verification) that it is correct, i.e. satisfies its specification. Many distributed algorithms make use of consensus (eventually all non-crashed processes have to agree on a value), leader election (a crashed leader is eventually replaced by a new leader, but at any time there is at most one leader) or a group membership detection service (a crashed process is eventually suspected to have crashed but only crashed processes are suspected). From a theoretical point of view, the service specifications given for such services are not implementable in asynchronous systems. In particular, for each implementation one can derive a counter example in which the service violates its specification. From a practical point of view, the consensus, the leader election, and the membership detection problem are solvable in asynchronous distributed systems. In this special section, Raynal and Tronel show how to bridge this difference by showing how to implement the group membership detection problem with a negligible probability [1] to fail in an asynchronous system. The group membership detection problem is specified by a liveness condition (L) and a safety property (S): (L) if a process p crashes, then eventually every non-crashed process q has to suspect that p has crashed; and (S) if a process q suspects p, then p has indeed crashed. One can show that either (L) or (S) is implementable, but one cannot implement both (L) and (S) at the same time in an asynchronous system. In practice, one only needs to implement (L) and (S) such that the probability that (L) or (S) is violated becomes negligible. Raynal and Tronel propose and analyse a protocol that implements (L) with certainty and that can be tuned such that the probability that (S) is violated becomes negligible. Designing and implementing distributed fault-tolerant protocols for asynchronous systems is a difficult but not an impossible task. A fault-tolerant protocol has to detect and mask certain failure classes, e.g. crash failures and message omission failures. There is a trade-off between the performance of a fault-tolerant protocol and the failure classes the protocol can tolerate. One wants to tolerate as many failure classes as needed to satisfy the stochastic requirements of the protocol [1] while still maintaining a sufficient performance. Since clients of a protocol have different requirements with respect to the performance/fault-tolerance trade-off, one would like to be able to customize protocols such that one can select an appropriate performance/fault-tolerance trade-off. In this special section Hiltunen et al describe how one can compose protocols from micro-protocols in their Cactus system. They show how a group RPC system can be tailored to the needs of a client. In particular, they show how considering additional failure classes affects the performance of a group RPC system. References [1] Cristian F 1991 Understanding fault-tolerant distributed systems Communications of ACM 34 (2) 56-78 [2] Heimerdinger W L and Weinstock C B 1992 A conceptual framework for system fault tolerance Technical Report 92-TR-33, CMU/SEI [3] Laprie J C (ed) 1992 Dependability: Basic Concepts and Terminology (Vienna: Springer)
NASA Astrophysics Data System (ADS)
Tomas, A.; Menendez, M.; Mendez, F. J.; Coco, G.; Losada, I. J.
2012-04-01
In the last decades, freak or rogue waves have become an important topic in engineering and science. Forecasting the occurrence probability of freak waves is a challenge for oceanographers, engineers, physicists and statisticians. There are several mechanisms responsible for the formation of freak waves, and different theoretical formulations (primarily based on numerical models with simplifying assumption) have been proposed to predict the occurrence probability of freak wave in a sea state as a function of N (number of individual waves) and kurtosis (k). On the other hand, different attempts to parameterize k as a function of spectral parameters such as the Benjamin-Feir Index (BFI) and the directional spreading (Mori et al., 2011) have been proposed. The objective of this work is twofold: (1) develop a statistical model to describe the uncertainty of maxima individual wave height, Hmax, considering N and k as covariates; (2) obtain a predictive formulation to estimate k as a function of aggregated sea state spectral parameters. For both purposes, we use free surface measurements (more than 300,000 20-minutes sea states) from the Spanish deep water buoy network (Puertos del Estado, Spanish Ministry of Public Works). Non-stationary extreme value models are nowadays widely used to analyze the time-dependent or directional-dependent behavior of extreme values of geophysical variables such as significant wave height (Izaguirre et al., 2010). In this work, a Generalized Extreme Value (GEV) statistical model for the dimensionless maximum wave height (x=Hmax/Hs) in every sea state is used to assess the probability of freak waves. We allow the location, scale and shape parameters of the GEV distribution to vary as a function of k and N. The kurtosis-dependency is parameterized using third-order polynomials and the model is fitted using standard log-likelihood theory, obtaining a very good behavior to predict the occurrence probability of freak waves (x>2). Regarding the second objective of this work, we apply different algorithms using three spectral parameters (wave steepness, directional dispersion, frequential dispersion) as predictors, to estimate the probability density function of the kurtosis for a given sea state. ACKNOWLEDGMENTS The authors thank to Puertos del Estado (Spanish Ministry of Public Works) for providing the free surface measurement database.
Pathology Report for Intraperitoneal Sodium Dichromate Exposure in Rats, Protocol No. 15-002-3
2015-12-08
neither was dose-dependent), subcapsular single cell necrosis was evident only at the highest dose, and minimal capsular fibroplasia in exposed...number per group of rats to exhibit any subcapsular granulocytic, mononuclear infiltrates, or single cell hepatocellular necrosis was greater, the higher...hepatocellular necrosis . Capsular fibrin appeared within 24 hours of injection but disappeared over time. Probably associated was the presence of
A computational model for telomere-dependent cell-replicative aging.
Portugal, R D; Land, M G P; Svaiter, B F
2008-01-01
Telomere shortening provides a molecular basis for the Hayflick limit. Recent data suggest that telomere shortening also influence mitotic rate. We propose a stochastic growth model of this phenomena, assuming that cell division in each time interval is a random process which probability decreases linearly with telomere shortening. Computer simulations of the proposed stochastic telomere-regulated model provides good approximation of the qualitative growth of cultured human mesenchymal stem cells.
Effects of trawl selectivity and genetic parameters on fish body length under long-term trawling
NASA Astrophysics Data System (ADS)
Yu, Yang; Sun, Peng; Cui, He; Sheng, Huaxiang; Zhao, Fenfang; Tang, Yanli; Chen, Zelin
2015-10-01
Long-term fishing pressure affects the biological characteristics of exploited fish stocks. The biological characteristics of hairtail ( Trichiurus lepturus) in the East China Sea are unable to recover because of long-term trawling. Fishing induces evolutionary effects on the fish's biological characteristics. Evidence of these changes includes small size at age, a shift to earlier age structure, and early maturation. Natural and artificial selection usually affect the fish's life history. Selection can induce different chances of reproduction, and individual fish can give a different genetic contribution to the next generation. In this study, analysis of time-dependent probability of significance and test of sensitivity were used to explore the effects of fish exploitation rate, mesh size, and heritability with long-term trawling. Results showed that fishing parameters were important drivers to exploited fish population. However, genetic traits altered by fishing were slow, and the changes in biological characteristics were weaker than those caused by fishing selection. Exploitation rate and mesh size exhibited similar evolutionary trend tendency under long-term fishing. The time-dependent probability of significance trend showed a gradual growth and tended to be stable. Therefore, the direction of fishing-induced evolution and successful management of fish species require considerable attention to contribute to sustainable fisheries in China.
Letcher, B.H.; Horton, G.E.
2008-01-01
We estimated the magnitude and shape of size-dependent survival (SDS) across multiple sampling intervals for two cohorts of stream-dwelling Atlantic salmon (Salmo salar) juveniles using multistate capture-mark-recapture (CMR) models. Simulations designed to test the effectiveness of multistate models for detecting SDS in our system indicated that error in SDS estimates was low and that both time-invariant and time-varying SDS could be detected with sample sizes of >250, average survival of >0.6, and average probability of capture of >0.6, except for cases of very strong SDS. In the field (N ??? 750, survival 0.6-0.8 among sampling intervals, probability of capture 0.6-0.8 among sampling occasions), about one-third of the sampling intervals showed evidence of SDS, with poorer survival of larger fish during the age-2+ autumn and quadratic survival (opposite direction between cohorts) during age-1+ spring. The varying magnitude and shape of SDS among sampling intervals suggest a potential mechanism for the maintenance of the very wide observed size distributions. Estimating SDS using multistate CMR models appears complementary to established approaches, can provide estimates with low error, and can be used to detect intermittent SDS. ?? 2008 NRC Canada.
Methods for estimating dispersal probabilities and related parameters using marked animals
Bennetts, R.E.; Nichols, J.D.; Pradel, R.; Lebreton, J.D.; Kitchens, W.M.; Clobert, Jean; Danchin, Etienne; Dhondt, Andre A.; Nichols, James D.
2001-01-01
Deriving valid inferences about the causes and consequences of dispersal from empirical studies depends largely on our ability reliably to estimate parameters associated with dispersal. Here, we present a review of the methods available for estimating dispersal and related parameters using marked individuals. We emphasize methods that place dispersal in a probabilistic framework. In this context, we define a dispersal event as a movement of a specified distance or from one predefined patch to another, the magnitude of the distance or the definition of a `patch? depending on the ecological or evolutionary question(s) being addressed. We have organized the chapter based on four general classes of data for animals that are captured, marked, and released alive: (1) recovery data, in which animals are recovered dead at a subsequent time, (2) recapture/resighting data, in which animals are either recaptured or resighted alive on subsequent sampling occasions, (3) known-status data, in which marked animals are reobserved alive or dead at specified times with probability 1.0, and (4) combined data, in which data are of more than one type (e.g., live recapture and ring recovery). For each data type, we discuss the data required, the estimation techniques, and the types of questions that might be addressed from studies conducted at single and multiple sites.
Queues on a Dynamically Evolving Graph
NASA Astrophysics Data System (ADS)
Mandjes, Michel; Starreveld, Nicos J.; Bekker, René
2018-04-01
This paper considers a population process on a dynamically evolving graph, which can be alternatively interpreted as a queueing network. The queues are of infinite-server type, entailing that at each node all customers present are served in parallel. The links that connect the queues have the special feature that they are unreliable, in the sense that their status alternates between `up' and `down'. If a link between two nodes is down, with a fixed probability each of the clients attempting to use that link is lost; otherwise the client remains at the origin node and reattempts using the link (and jumps to the destination node when it finds the link restored). For these networks we present the following results: (a) a system of coupled partial differential equations that describes the joint probability generating function corresponding to the queues' time-dependent behavior (and a system of ordinary differential equations for its stationary counterpart), (b) an algorithm to evaluate the (time-dependent and stationary) moments, and procedures to compute user-perceived performance measures which facilitate the quantification of the impact of the links' outages, (c) a diffusion limit for the joint queue length process. We include explicit results for a series relevant special cases, such as tandem networks and symmetric fully connected networks.
Stochastic Model for the Vocabulary Growth in Natural Languages
NASA Astrophysics Data System (ADS)
Gerlach, Martin; Altmann, Eduardo G.
2013-04-01
We propose a stochastic model for the number of different words in a given database which incorporates the dependence on the database size and historical changes. The main feature of our model is the existence of two different classes of words: (i) a finite number of core words, which have higher frequency and do not affect the probability of a new word to be used, and (ii) the remaining virtually infinite number of noncore words, which have lower frequency and, once used, reduce the probability of a new word to be used in the future. Our model relies on a careful analysis of the Google Ngram database of books published in the last centuries, and its main consequence is the generalization of Zipf’s and Heaps’ law to two-scaling regimes. We confirm that these generalizations yield the best simple description of the data among generic descriptive models and that the two free parameters depend only on the language but not on the database. From the point of view of our model, the main change on historical time scales is the composition of the specific words included in the finite list of core words, which we observe to decay exponentially in time with a rate of approximately 30 words per year for English.
Fuzzy rationality and parameter elicitation in decision analysis
NASA Astrophysics Data System (ADS)
Nikolova, Natalia D.; Tenekedjiev, Kiril I.
2010-07-01
It is widely recognised by decision analysts that real decision-makers always make estimates in an interval form. An overview of techniques to find an optimal alternative among such with imprecise and interval probabilities is presented. Scalarisation methods are outlined as most appropriate. A proper continuation of such techniques is fuzzy rational (FR) decision analysis. A detailed representation of the elicitation process influenced by fuzzy rationality is given. The interval character of probabilities leads to the introduction of ribbon functions, whose general form and special cases are compared with the p-boxes. As demonstrated, approximation of utilities in FR decision analysis does not depend on the probabilities, but the approximation of probabilities is dependent on preferences.
On the dependence on inclination of capture probability of short-period comets
NASA Astrophysics Data System (ADS)
Yabushita, S.; Tsujii, T.
1990-06-01
Calculation is made of probability of capture whereby a nearly parabolic comet with perihelion near the Jovian orbit comes to have a perihelion distance less than 2.5 AU and a period less than 200 yr. The probability is found to depend strongly on the inclination, in accordance with earlier results of Everhart and of Stagg and Bailey. It is large for orbits close to the ecliptic but decreases drastically for large inclinations. The overall probability of capture from randomly distributed orbits is 0.00044, which shows that either the presently observed short-period comets are not in a steady state or the source flux may be in the Uranus-Neptune zone.
Dembo, M; De Penfold, J B; Ruiz, R; Casalta, H
1985-03-01
Four pigeons were trained to peck a key under different values of a temporally defined independent variable (T) and different probabilities of reinforcement (p). Parameter T is a fixed repeating time cycle and p the probability of reinforcement for the first response of each cycle T. Two dependent variables were used: mean response rate and mean postreinforcement pause. For all values of p a critical value for the independent variable T was found (T=1 sec) in which marked changes took place in response rate and postreinforcement pauses. Behavior typical of random ratio schedules was obtained at T 1 sec and behavior typical of random interval schedules at T 1 sec. Copyright © 1985. Published by Elsevier B.V.
Deuteron Coulomb Excitation in Peripheral Collisions with a Heavy Ion
NASA Astrophysics Data System (ADS)
Du, Weijie; Yin, Peng; Li, Yang; Chen, Guangyao; Zuo, Wei; Zhao, Xingbo; Vary, James P.
2017-09-01
We develop an ab initio time-dependent Basis Function (tBF) method to solve non-perturbative and time-dependent problems in non-relativistic quantum mechanics. As a test problem, we apply this method to the Coulomb excitation of a deuteron by an impinging heavy ion. We employ wave functions for the bound and excited states of the deuterium system based on a realistic nucleon-nucleon interaction and study the evolution of the transition probability, the r.m.s. radius and the r.m.s. momentum of the system during the scattering process. The dependencies of these quantities on the external field strength and the bombarding energy are also analyzed and compared to corresponding results obtained from first-order perturbation theory. The time evolution of both the charge and the momentum distributions is shown. This work was supported in part by the U. S. Department of Energy (DOE) under Grants No. DESC0008485 (SciDAC/NUCLEI) and DE-FG02-87ER40371. W. Zuo and P. Yin are supported by the National Natural Science Foundation of China (11435014).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herman, Michael F.
2015-10-28
The time independent semiclassical treatment of barrier tunneling has been understood for a very long time. Several semiclassical approaches to time dependent tunneling through barriers have also been presented. These typically involve trajectories for which the position variable is a complex function of time. In this paper, a method is presented that uses only real valued trajectories, thus avoiding the complications that can arise when complex trajectories are employed. This is accomplished by expressing the time dependent wave packet as an integration over momentum. The action function in the exponent in this expression is expanded to second order in themore » momentum. The expansion is around the momentum, p{sub 0{sup *}}, at which the derivative of the real part of the action is zero. The resulting Gaussian integral is then taken. The stationary phase approximation requires that the derivative of the full action is zero at the expansion point, and this leads to a complex initial momentum and complex tunneling trajectories. The “pseudo-stationary phase” approximation employed in this work results in real values for the initial momentum and real valued trajectories. The transmission probabilities obtained are found to be in good agreement with exact quantum results.« less
An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations
Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.
2016-01-01
We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach. PMID:28316360
Sean A. Parks; Marc-Andre Parisien; Carol Miller
2011-01-01
We examined the scale-dependent relationship between spatial fire likelihood or burn probability (BP) and some key environmental controls in the southern Sierra Nevada, California, USA. Continuous BP estimates were generated using a fire simulation model. The correspondence between BP (dependent variable) and elevation, ignition density, fuels and aspect was evaluated...
Cuthbertson, Carmen C; Kucharska-Newton, Anna; Faurot, Keturah R; Stürmer, Til; Jonsson Funk, Michele; Palta, Priya; Windham, B Gwen; Thai, Sydney; Lund, Jennifer L
2018-07-01
Frailty is a geriatric syndrome characterized by weakness and weight loss and is associated with adverse health outcomes. It is often an unmeasured confounder in pharmacoepidemiologic and comparative effectiveness studies using administrative claims data. Among the Atherosclerosis Risk in Communities (ARIC) Study Visit 5 participants (2011-2013; n = 3,146), we conducted a validation study to compare a Medicare claims-based algorithm of dependency in activities of daily living (or dependency) developed as a proxy for frailty with a reference standard measure of phenotypic frailty. We applied the algorithm to the ARIC participants' claims data to generate a predicted probability of dependency. Using the claims-based algorithm, we estimated the C-statistic for predicting phenotypic frailty. We further categorized participants by their predicted probability of dependency (<5%, 5% to <20%, and ≥20%) and estimated associations with difficulties in physical abilities, falls, and mortality. The claims-based algorithm showed good discrimination of phenotypic frailty (C-statistic = 0.71; 95% confidence interval [CI] = 0.67, 0.74). Participants classified with a high predicted probability of dependency (≥20%) had higher prevalence of falls and difficulty in physical ability, and a greater risk of 1-year all-cause mortality (hazard ratio = 5.7 [95% CI = 2.5, 13]) than participants classified with a low predicted probability (<5%). Sensitivity and specificity varied across predicted probability of dependency thresholds. The Medicare claims-based algorithm showed good discrimination of phenotypic frailty and high predictive ability with adverse health outcomes. This algorithm can be used in future Medicare claims analyses to reduce confounding by frailty and improve study validity.
Effects of variability in probable maximum precipitation patterns on flood losses
NASA Astrophysics Data System (ADS)
Zischg, Andreas Paul; Felder, Guido; Weingartner, Rolf; Quinn, Niall; Coxon, Gemma; Neal, Jeffrey; Freer, Jim; Bates, Paul
2018-05-01
The assessment of the impacts of extreme floods is important for dealing with residual risk, particularly for critical infrastructure management and for insurance purposes. Thus, modelling of the probable maximum flood (PMF) from probable maximum precipitation (PMP) by coupling hydrological and hydraulic models has gained interest in recent years. Herein, we examine whether variability in precipitation patterns exceeds or is below selected uncertainty factors in flood loss estimation and if the flood losses within a river basin are related to the probable maximum discharge at the basin outlet. We developed a model experiment with an ensemble of probable maximum precipitation scenarios created by Monte Carlo simulations. For each rainfall pattern, we computed the flood losses with a model chain and benchmarked the effects of variability in rainfall distribution with other model uncertainties. The results show that flood losses vary considerably within the river basin and depend on the timing and superimposition of the flood peaks from the basin's sub-catchments. In addition to the flood hazard component, the other components of flood risk, exposure, and vulnerability contribute remarkably to the overall variability. This leads to the conclusion that the estimation of the probable maximum expectable flood losses in a river basin should not be based exclusively on the PMF. Consequently, the basin-specific sensitivities to different precipitation patterns and the spatial organization of the settlements within the river basin need to be considered in the analyses of probable maximum flood losses.
Simonov, P V
1997-01-01
At the 23rd International Congress of Physiology Sciences (Tokyo, 1965) the results of experiment led us to the conclusion that emotions were determined by the actual need and estimation of probability (possibility) of its satisfaction. Low probability of need satisfaction leads to negative emotions actively minimized by the subject. Increased probability of satisfaction, as compared to the earlier forecast, generates positive emotions which the subject tries to maximize, that is, to enhance, to prolong, to repeat. We named our concept the Need-Informational Theory of Emotions. According to this theory, motivation, emotion, and estimation of probability have different neuromorphological substrates. Activation through the hypothalamic motivatiogenic structures of the frontal parts of the neocortex orients the behavior to signals with a high probability of their reinforcement. At the same time the hippocampus is necessary for reactions to signals of low probability events, which are typical for the emotionally excited brain. By comparison of motivational excitation with available stimuli or their engrams, the amygdala selects a dominant motivation, destined to be satisfied in the first instance. In the cases of classical conditioning and escape reaction the reinforcement was related to involvement of the negative emotion's hypothalamic neurons, while in the course of avoidance reaction the positive emotion's neurons were involved. The role of the left and right frontal neocortex in the appearance or positive or negative emotions depends on these informational (cognitive) functions.
[The brain mechanisms of emotions].
Simonov, P V
1997-01-01
At the 23rd International Congress of Physiological Sciences (Tokyo, 1965) the results of experiment brought us to a conclusion that emotions were determined by the actual need and estimation of probability (possibility) of its satisfaction. Low probability of need satisfaction leads to negative emotions actively minimized by the subject. Increased probability of satisfaction, as compared to the earlier forecast, generates positive emotions which the subject tries to maximize, that is to enhance, to prolong, to repeat. We named our concept the Need-Informational Theory of Emotions. According to this theory, motivation, emotion and estimation of probability have different neuromorphological substrate. Activating by motivatiogenic structures of the hypothalamus the frontal parts of neocortex orients the behavior to signals with a high probability of their reinforcement. At the same time the hippocampus is necessary for reactions to signals of low probability events, which is typical for emotionally excited brain. By comparison of motivational excitation with available stimuli or their engrams the amygdala selects a dominant motivation, destined to be satisfied in the first instance. In the cases of classical conditioning and escape reaction the reinforcement was related to involvement of the negative emotion's hypothalamic neurons while in the course of avoidance reaction the positive emotion's neurons being involved. The role of the left and right frontal neocortex in the appearance of positive or negative emotions depends on this informational (cognitive) functions.
A Comprehensive Breath Plume Model for Disease Transmission via Expiratory Aerosols
Halloran, Siobhan K.; Wexler, Anthony S.; Ristenpart, William D.
2012-01-01
The peak in influenza incidence during wintertime in temperate regions represents a longstanding, unresolved scientific question. One hypothesis is that the efficacy of airborne transmission via aerosols is increased at lower humidities and temperatures, conditions that prevail in wintertime. Recent work with a guinea pig model by Lowen et al. indicated that humidity and temperature do modulate airborne influenza virus transmission, and several investigators have interpreted the observed humidity dependence in terms of airborne virus survivability. This interpretation, however, neglects two key observations: the effect of ambient temperature on the viral growth kinetics within the animals, and the strong influence of the background airflow on transmission. Here we provide a comprehensive theoretical framework for assessing the probability of disease transmission via expiratory aerosols between test animals in laboratory conditions. The spread of aerosols emitted from an infected animal is modeled using dispersion theory for a homogeneous turbulent airflow. The concentration and size distribution of the evaporating droplets in the resulting “Gaussian breath plume” are calculated as functions of position, humidity, and temperature. The overall transmission probability is modeled with a combination of the time-dependent viral concentration in the infected animal and the probability of droplet inhalation by the exposed animal downstream. We demonstrate that the breath plume model is broadly consistent with the results of Lowen et al., without invoking airborne virus survivability. The results also suggest that, at least for guinea pigs, variation in viral kinetics within the infected animals is the dominant factor explaining the increased transmission probability observed at lower temperatures. PMID:22615902
Estimating time-dependent ROC curves using data under prevalent sampling.
Li, Shanshan
2017-04-15
Prevalent sampling is frequently a convenient and economical sampling technique for the collection of time-to-event data and thus is commonly used in studies of the natural history of a disease. However, it is biased by design because it tends to recruit individuals with longer survival times. This paper considers estimation of time-dependent receiver operating characteristic curves when data are collected under prevalent sampling. To correct the sampling bias, we develop both nonparametric and semiparametric estimators using extended risk sets and the inverse probability weighting techniques. The proposed estimators are consistent and converge to Gaussian processes, while substantial bias may arise if standard estimators for right-censored data are used. To illustrate our method, we analyze data from an ovarian cancer study and estimate receiver operating characteristic curves that assess the accuracy of the composite markers in distinguishing subjects who died within 3-5 years from subjects who remained alive. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Wu, Cai; Li, Liang
2018-05-15
This paper focuses on quantifying and estimating the predictive accuracy of prognostic models for time-to-event outcomes with competing events. We consider the time-dependent discrimination and calibration metrics, including the receiver operating characteristics curve and the Brier score, in the context of competing risks. To address censoring, we propose a unified nonparametric estimation framework for both discrimination and calibration measures, by weighting the censored subjects with the conditional probability of the event of interest given the observed data. The proposed method can be extended to time-dependent predictive accuracy metrics constructed from a general class of loss functions. We apply the methodology to a data set from the African American Study of Kidney Disease and Hypertension to evaluate the predictive accuracy of a prognostic risk score in predicting end-stage renal disease, accounting for the competing risk of pre-end-stage renal disease death, and evaluate its numerical performance in extensive simulation studies. Copyright © 2018 John Wiley & Sons, Ltd.
Kim, Yeonho; Nabili, Marjan; Acharya, Priyanka; Lopez, Asis; Myers, Matthew R
2017-01-01
Safety analyses of transcranial therapeutic ultrasound procedures require knowledge of the dependence of the rupture probability and rupture time upon sonication parameters. As previous vessel-rupture studies have concentrated on a specific set of exposure conditions, there is a need for more comprehensive parametric studies. Probability of rupture and rupture times were measured by exposing the large blood vessel of a live earthworm to high-intensity focused ultrasound pulse trains of various characteristics. Pressures generated by the ultrasound transducers were estimated through numerical solutions to the KZK (Khokhlov-Zabolotskaya-Kuznetsov) equation. Three ultrasound frequencies (1.1, 2.5, and 3.3 MHz) were considered, as were three pulse repetition frequencies (1, 3, and 10 Hz), and two duty factors (0.0001, 0.001). The pressures produced ranged from 4 to 18 MPa. Exposures of up to 10 min in duration were employed. Trials were repeated an average of 11 times. No trends as a function of pulse repetition rate were identifiable, for either probability of rupture or rupture time. Rupture time was found to be a strong function of duty factor at the lower pressures; at 1.1 MHz the rupture time was an order of magnitude lower for the 0.001 duty factor than the 0.0001. At moderate pressures, the difference between the duty factors was less, and there was essentially no difference between duty factors at the highest pressure. Probability of rupture was not found to be a strong function of duty factor. Rupture thresholds were about 4 MPa for the 1.1 MHz frequency, 7 MPa at 3.3 MHz, and 11 MPa for the 2.5 MHz, though the pressure value at 2.5 MHz frequency will likely be reduced when steep-angle corrections are accounted for in the KZK model used to estimate pressures. Mechanical index provided a better collapse of the data (less separation of the curves pertaining to the different frequencies) than peak negative pressure, for both probability of rupture and rupture time. The results provide a database with which investigations in more complex animal models can be compared, potentially establishing trends by which bioeffects in human vessels can be estimated.
Discrete time modelization of human pilot behavior
NASA Technical Reports Server (NTRS)
Cavalli, D.; Soulatges, D.
1975-01-01
This modelization starts from the following hypotheses: pilot's behavior is a time discrete process, he can perform only one task at a time and his operating mode depends on the considered flight subphase. Pilot's behavior was observed using an electro oculometer and a simulator cockpit. A FORTRAN program has been elaborated using two strategies. The first one is a Markovian process in which the successive instrument readings are governed by a matrix of conditional probabilities. In the second one, strategy is an heuristic process and the concepts of mental load and performance are described. The results of the two aspects have been compared with simulation data.
Implications of fault constitutive properties for earthquake prediction.
Dieterich, J H; Kilgore, B
1996-01-01
The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks. Images Fig. 3 PMID:11607666
Aggregation increases prey survival time in group chase and escape
NASA Astrophysics Data System (ADS)
Yang, Sicong; Jiang, Shijie; Jiang, Li; Li, Geng; Han, Zhangang
2014-08-01
Recently developed chase-and-escape models have addressed a fascinating pursuit-and-evasion problem that may have both theoretical significance and potential applications. We introduce three aggregation strategies for the prey in a group chase model on a lattice. Simulation results show that aggregation dramatically increases the group survival time, even allowing immortal prey. The average survival time τ and the aggregation probability P have a power-law dependence of \\tau \\sim {{(1-P)}^{-1}} for P\\in [0.9,0.997]. With increasing numbers of predators, there is still a phase transition. When the number of predators is less than the critical point value, the prey group survival time increases significantly.
Significance of stress transfer in time-dependent earthquake probability calculations
Parsons, T.
2005-01-01
A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? Data used to derive input parameters permits an array of forecasts; so how large a static stress change is require to cause a statistically significant earthquake probability change? To answer that question, effects of parameter and philosophical choices are examined through all phases of sample calculations, Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability density funtions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point process renewal model. Consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction are, tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum stress change to stressing rate ratio of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus revision to earthquake probability is achievable when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.
Sur la vitesse d'extinction d'une population dans un environnement aléatoire.
Bacaër, Nicolas
2017-05-01
This study focuses on the speed of extinction of a population living in a random environment that follows a continuous-time Markov chain. Each individual dies or reproduces at a rate that depends on the environment. The number of offspring during reproduction follows a given probability law that also depends on the environment. In the so-called subcritical case where the population goes for sure to extinction, there is an explicit formula for the speed of extinction. In some sense, environmental stochasticity slows down population extinction. Copyright © 2017 Académie des sciences. Published by Elsevier Masson SAS. All rights reserved.
EFFECTS OF LASER RADIATION ON MATTER: Photoinduced absorption in chalcogenide glasses
NASA Astrophysics Data System (ADS)
Ponomar', V. V.
1990-08-01
A dependence of the absorption coefficient on the optical radiation intensity in the range 10 - 5 - 1 W/cm2 was observed for chalcogenide glasses at a photon energy less than the band gap of the material. The absorption coefficient depended on the irradiation time. In the case of arsenic sulfide in the range 1.6-1.7 eV an absorption peak was observed at intensities of the order of 10 - 3 W/cm2. In this part of the spectrum the absorption probably involved metastable As-As, S-Se, and Se-Se "defect" bonds and was similar to the photoinduced degradation of hydrogenated amorphous silicon.
Survival behavior in the cyclic Lotka-Volterra model with a randomly switching reaction rate
NASA Astrophysics Data System (ADS)
West, Robert; Mobilia, Mauro; Rucklidge, Alastair M.
2018-02-01
We study the influence of a randomly switching reproduction-predation rate on the survival behavior of the nonspatial cyclic Lotka-Volterra model, also known as the zero-sum rock-paper-scissors game, used to metaphorically describe the cyclic competition between three species. In large and finite populations, demographic fluctuations (internal noise) drive two species to extinction in a finite time, while the species with the smallest reproduction-predation rate is the most likely to be the surviving one (law of the weakest). Here we model environmental (external) noise by assuming that the reproduction-predation rate of the strongest species (the fastest to reproduce and predate) in a given static environment randomly switches between two values corresponding to more and less favorable external conditions. We study the joint effect of environmental and demographic noise on the species survival probabilities and on the mean extinction time. In particular, we investigate whether the survival probabilities follow the law of the weakest and analyze their dependence on the external noise intensity and switching rate. Remarkably, when, on average, there is a finite number of switches prior to extinction, the survival probability of the predator of the species whose reaction rate switches typically varies nonmonotonically with the external noise intensity (with optimal survival about a critical noise strength). We also outline the relationship with the case where all reaction rates switch on markedly different time scales.
Nadeau, C.P.; Conway, C.J.; Smith, B.S.; Lewis, T.E.
2008-01-01
We conducted 262 call-broadcast point-count surveys (1-6 replicate surveys on each of 62 points) using standardized North American Marsh Bird Monitoring Protocols between 31 May and 7 July 2006 on St. Vincent National Wildlife Refuge, an island off the northwest coast of Florida. We conducted double-blind multiple-observer surveys, paired morning and evening surveys, and paired morning and night surveys to examine the influence of call-broadcast and time of day on detection probability. Observer detection probability for all species pooled was 75% and was similar between passive (69%) and call-broadcast (65%) periods. Detection probability was higher on morning than evening (t = 3.0, P = 0.030) or night (t = 3.4, P = 0.042) surveys when we pooled all species. Detection probability was higher (but not significant for all species) on morning compared to evening or night surveys for all five focal species detected on surveys: Least Bittern (Ixobrychus exilis), Clapper Rail (Rallus longirostris), Purple Gallinule (Porphyrula martinica), Common Moorhen (Gallinula chloropus), and American Coot (Fulica americana). We detected more Least Bitterns (t = 2.4, P = 0.064) and Common Moorhens (t = 2.8, P = 0.026) on morning than evening surveys, and more Clapper Rails (t = 5.1, P = 0.014) on morning than night surveys.
Scalar decay in two-dimensional chaotic advection and Batchelor-regime turbulence
NASA Astrophysics Data System (ADS)
Fereday, D. R.; Haynes, P. H.
2004-12-01
This paper considers the decay in time of an advected passive scalar in a large-scale flow. The relation between the decay predicted by "Lagrangian stretching theories," which consider evolution of the scalar field within a small fluid element and then average over many such elements, and that observed at large times in numerical simulations, associated with emergence of a "strange eigenmode" is discussed. Qualitative arguments are supported by results from numerical simulations of scalar evolution in two-dimensional spatially periodic, time aperiodic flows, which highlight the differences between the actual behavior and that predicted by the Lagrangian stretching theories. In some cases the decay rate of the scalar variance is different from the theoretical prediction and determined globally and in other cases it apparently matches the theoretical prediction. An updated theory for the wavenumber spectrum of the scalar field and a theory for the probability distribution of the scalar concentration are presented. The wavenumber spectrum and the probability density function both depend on the decay rate of the variance, but can otherwise be calculated from the statistics of the Lagrangian stretching history. In cases where the variance decay rate is not determined by the Lagrangian stretching theory, the wavenumber spectrum for scales that are much smaller than the length scale of the flow but much larger than the diffusive scale is argued to vary as k-1+ρ, where k is wavenumber, and ρ is a positive number which depends on the decay rate of the variance γ2 and on the Lagrangian stretching statistics. The probability density function for the scalar concentration is argued to have algebraic tails, with exponent roughly -3 and with a cutoff that is determined by diffusivity κ and scales roughly as κ-1/2 and these predictions are shown to be in good agreement with numerical simulations.
The Binomial Model in Fluctuation Analysis of Quantal Neurotransmitter Release
Quastel, D. M. J.
1997-01-01
The mathematics of the binomial model for quantal neurotransmitter release is considered in general terms, to explore what information might be extractable from statistical aspects of data. For an array of N statistically independent release sites, each with a release probability p, the compound binomial always pertains, with , p′ ≡ 1 - var(m)/ (1 + cvp2) and n′ ≡ 2. Unless n′ is invariant with ambient conditions or stimulation paradigms, the simple binomial (cvp = 0) is untenable and n′ is neither N nor the number of “active” sites or sites with a quantum available. At each site p = popA, where po is the output probability if a site is “eligible” or “filled” despite previous quantal discharge, and pA (eligibility probability) depends at least on the replenishment rate, po, and interstimulus time. Assuming stochastic replenishment, a simple algorithm allows calculation of the full statistical composition of outputs for any hypothetical combinations of po's and refill rates, for any stimulation paradigm and spontaneous release. A rise in n′ (reduced cvp) tends to occur whenever po varies widely between sites, with a raised stimulation frequency or factors tending to increase po's. Unlike
An energy dependent earthquake frequency-magnitude distribution
NASA Astrophysics Data System (ADS)
Spassiani, I.; Marzocchi, W.
2017-12-01
The most popular description of the frequency-magnitude distribution of seismic events is the exponential Gutenberg-Richter (G-R) law, which is widely used in earthquake forecasting and seismic hazard models. Although it has been experimentally well validated in many catalogs worldwide, it is not yet clear at which space-time scales the G-R law still holds. For instance, in a small area where a large earthquake has just happened, the probability that another very large earthquake nucleates in a short time window should diminish because it takes time to recover the same level of elastic energy just released. In short, the frequency-magnitude distribution before and after a large earthquake in a small area should be different because of the different amount of available energy.Our study is then aimed to explore a possible modification of the classical G-R distribution by including the dependence on an energy parameter. In a nutshell, this more general version of the G-R law should be such that a higher release of energy corresponds to a lower probability of strong aftershocks. In addition, this new frequency-magnitude distribution has to satisfy an invariance condition: when integrating over large areas, that is when integrating over infinite energy available, the G-R law must be recovered.Finally we apply a proposed generalization of the G-R law to different seismic catalogs to show how it works and the differences with the classical G-R law.
NASA Astrophysics Data System (ADS)
Vysotskii, V. I.; Vysotskyy, M. V.
2017-08-01
We consider a method for optimizing the tunnel effect for low-energy particles by using coherent correlated states formed under controllable pulsed action on these particles. Typical examples of such actions are the effect of a pulsed magnetic field on charged particles in a gas or plasma. Coherent correlated states are characterized most comprehensively by the correlation coefficient r( t); an increase of this factor elevates the probability of particle tunneling through a high potential barrier by several orders of magnitude without an appreciable increase in their energy. It is shown for the first time that the formation of coherent correlated states, as well as maximal | r( t)|max and time-averaged 〈| r( t)|〉 amplitudes of the correlation coefficient and the corresponding tunneling probability are characterized by a nonmonotonic (oscillating) dependence on the forming pulse duration and amplitude. This result makes it possible to optimize experiments on the realization of low-energy nuclear fusion and demonstrates the incorrectness of the intuitive idea that the tunneling probability always increases with the amplitude of an external action on a particle. Our conclusions can be used, in particular, for explaining random (unpredictable and low-repeatability) experimental results on optimization of energy release from nuclear reactions occurring under a pulsed action with fluctuations of the amplitude and duration. We also consider physical premises for the observed dependences and obtain optimal relations between the aforementioned parameters, which ensure the formation of an optimal coherent correlated state and optimal low-energy tunneling in various physical systems with allowance for the dephasing action of a random force. The results of theoretical analysis are compared with the data of successful experiments on the generation of neutrons and alpha particles in an electric discharge in air and gaseous deuterium.
NASA Astrophysics Data System (ADS)
Henri, Christopher; Fernàndez-Garcia, Daniel
2015-04-01
Modeling multi-species reactive transport in natural systems with strong heterogeneities and complex biochemical reactions is a major challenge for assessing groundwater polluted sites with organic and inorganic contaminants. A large variety of these contaminants react according to serial-parallel reaction networks commonly simplified by a combination of first-order kinetic reactions. In this context, a random-walk particle tracking method is presented. This method is capable of efficiently simulating the motion of particles affected by first-order network reactions in three-dimensional systems, which are represented by spatially variable physical and biochemical coefficients described at high resolution. The approach is based on the development of transition probabilities that describe the likelihood that particles belonging to a given species and location at a given time will be transformed into and moved to another species and location afterwards. These probabilities are derived from the solution matrix of the spatial moments governing equations. The method is fully coupled with reactions, free of numerical dispersion and overcomes the inherent numerical problems stemming from the incorporation of heterogeneities to reactive transport codes. In doing this, we demonstrate that the motion of particles follows a standard random walk with time-dependent effective retardation and dispersion parameters that depend on the initial and final chemical state of the particle. The behavior of effective parameters develops as a result of differential retardation effects among species. Moreover, explicit analytic solutions of the transition probability matrix and related particle motions are provided for serial reactions. An example of the effect of heterogeneity on the dechlorination of organic solvents in a three-dimensional random porous media shows that the power-law behavior typically observed in conservative tracers breakthrough curves can be largely compromised by the effect of biochemical reactions.
NASA Astrophysics Data System (ADS)
Henri, Christopher V.; Fernàndez-Garcia, Daniel
2014-09-01
Modeling multispecies reactive transport in natural systems with strong heterogeneities and complex biochemical reactions is a major challenge for assessing groundwater polluted sites with organic and inorganic contaminants. A large variety of these contaminants react according to serial-parallel reaction networks commonly simplified by a combination of first-order kinetic reactions. In this context, a random-walk particle tracking method is presented. This method is capable of efficiently simulating the motion of particles affected by first-order network reactions in three-dimensional systems, which are represented by spatially variable physical and biochemical coefficients described at high resolution. The approach is based on the development of transition probabilities that describe the likelihood that particles belonging to a given species and location at a given time will be transformed into and moved to another species and location afterward. These probabilities are derived from the solution matrix of the spatial moments governing equations. The method is fully coupled with reactions, free of numerical dispersion and overcomes the inherent numerical problems stemming from the incorporation of heterogeneities to reactive transport codes. In doing this, we demonstrate that the motion of particles follows a standard random walk with time-dependent effective retardation and dispersion parameters that depend on the initial and final chemical state of the particle. The behavior of effective parameters develops as a result of differential retardation effects among species. Moreover, explicit analytic solutions of the transition probability matrix and related particle motions are provided for serial reactions. An example of the effect of heterogeneity on the dechlorination of organic solvents in a three-dimensional random porous media shows that the power-law behavior typically observed in conservative tracers breakthrough curves can be largely compromised by the effect of biochemical reactions.
NASA Astrophysics Data System (ADS)
Velásquez-Rojas, Fátima; Vazquez, Federico
2017-05-01
Opinion formation and disease spreading are among the most studied dynamical processes on complex networks. In real societies, it is expected that these two processes depend on and affect each other. However, little is known about the effects of opinion dynamics over disease dynamics and vice versa, since most studies treat them separately. In this work we study the dynamics of the voter model for opinion formation intertwined with that of the contact process for disease spreading, in a population of agents that interact via two types of connections, social and contact. These two interacting dynamics take place on two layers of networks, coupled through a fraction q of links present in both networks. The probability that an agent updates its state depends on both the opinion and disease states of the interacting partner. We find that the opinion dynamics has striking consequences on the statistical properties of disease spreading. The most important is that the smooth (continuous) transition from a healthy to an endemic phase observed in the contact process, as the infection probability increases beyond a threshold, becomes abrupt (discontinuous) in the two-layer system. Therefore, disregarding the effects of social dynamics on epidemics propagation may lead to a misestimation of the real magnitude of the spreading. Also, an endemic-healthy discontinuous transition is found when the coupling q overcomes a threshold value. Furthermore, we show that the disease dynamics delays the opinion consensus, leading to a consensus time that varies nonmonotonically with q in a large range of the model's parameters. A mean-field approach reveals that the coupled dynamics of opinions and disease can be approximately described by the dynamics of the voter model decoupled from that of the contact process, with effective probabilities of opinion and disease transmission.
Probability of coincidental similarity among the orbits of small bodies - I. Pairing
NASA Astrophysics Data System (ADS)
Jopek, Tadeusz Jan; Bronikowska, Małgorzata
2017-09-01
Probability of coincidental clustering among orbits of comets, asteroids and meteoroids depends on many factors like: the size of the orbital sample searched for clusters or the size of the identified group, it is different for groups of 2,3,4,… members. Probability of coincidental clustering is assessed by the numerical simulation, therefore, it depends also on the method used for the synthetic orbits generation. We have tested the impact of some of these factors. For a given size of the orbital sample we have assessed probability of random pairing among several orbital populations of different sizes. We have found how these probabilities vary with the size of the orbital samples. Finally, keeping fixed size of the orbital sample we have shown that the probability of random pairing can be significantly different for the orbital samples obtained by different observation techniques. Also for the user convenience we have obtained several formulae which, for given size of the orbital sample can be used to calculate the similarity threshold corresponding to the small value of the probability of coincidental similarity among two orbits.
An energy-dependent numerical model for the condensation probability, γ j
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerby, Leslie Marie
The “condensation” probability, γ j, is an important variable in the preequilibrium stage of nuclear spallation reactions. It represents the probability that p j excited nucleons (excitons) will “condense” to form complex particle type j in the excited residual nucleus. In addition, it has a significant impact on the emission width, or probability of emitting fragment type j from the residual nucleus. Previous formulations for γ j were energy-independent and valid for fragments up to 4He only. This paper explores the formulation of a new model for γ j, one which is energy-dependent and valid for up to 28Mg, andmore » which provides improved fits compared to experimental fragment spectra.« less
An energy-dependent numerical model for the condensation probability, γ j
Kerby, Leslie Marie
2016-12-09
The “condensation” probability, γ j, is an important variable in the preequilibrium stage of nuclear spallation reactions. It represents the probability that p j excited nucleons (excitons) will “condense” to form complex particle type j in the excited residual nucleus. In addition, it has a significant impact on the emission width, or probability of emitting fragment type j from the residual nucleus. Previous formulations for γ j were energy-independent and valid for fragments up to 4He only. This paper explores the formulation of a new model for γ j, one which is energy-dependent and valid for up to 28Mg, andmore » which provides improved fits compared to experimental fragment spectra.« less
Average receiving scaling of the weighted polygon Koch networks with the weight-dependent walk
NASA Astrophysics Data System (ADS)
Ye, Dandan; Dai, Meifeng; Sun, Yanqiu; Shao, Shuxiang; Xie, Qi
2016-09-01
Based on the weighted Koch networks and the self-similarity of fractals, we present a family of weighted polygon Koch networks with a weight factor r(0 < r ≤ 1) . We study the average receiving time (ART) on weight-dependent walk (i.e., the walker moves to any of its neighbors with probability proportional to the weight of edge linking them), whose key step is to calculate the sum of mean first-passage times (MFPTs) for all nodes absorpt at a hub node. We use a recursive division method to divide the weighted polygon Koch networks in order to calculate the ART scaling more conveniently. We show that the ART scaling exhibits a sublinear or linear dependence on network order. Thus, the weighted polygon Koch networks are more efficient than expended Koch networks in receiving information. Finally, compared with other previous studies' results (i.e., Koch networks, weighted Koch networks), we find out that our models are more general.
Single and double multiphoton ionization of Li and Be atoms by strong laser fields
NASA Astrophysics Data System (ADS)
Telnov, Dmitry; Heslar, John; Chu, Shih-I.
2011-05-01
The time-dependent density functional theory with self-interaction correction and proper asymptotic long-range potential is extended for nonperturbative treatment of multiphoton single and double ionization of Li and Be atoms by strong 800 nm laser fields. We make use of the time-dependent Krieger-Li-Iafrate (TDKLI) exchange-correlation potential with the integer discontinuity which improves the description of the double ionization process. However, we have found that the discontinuity of the TDKLI potential is not sufficient to reproduce the characteristic feature of double ionization. This may happen because the discontinuity of the TDKLI potential is related to the spin particle numbers only and not to the total particle number. Introducing a discontinuity with respect to the total particle number to the exchange-correlation potential, we were able to obtain the knee structure in the intensity dependence of the double ionization probability of Be. This work was partially supported by DOE and NSF and by NSC-Taiwan.
NASA Astrophysics Data System (ADS)
Heslar, John; Telnov, Dmitry; Chu, Shih-I.
2012-06-01
We present a self-interaction-free (SIC) time-dependent density-functional theory (TDDFT) for the treatment of double ionization processes of many-electron systems. The method is based on the Krieger-Li-Iafrate (KLI) treatment of the optimized effective potential (OEP) theory and the incorporation of an explicit self-interaction correction (SIC) term. In the framework of the time-dependent density functional theory, we have performed 3D calculations of double ionization of He and Be atoms by strong near-infrared laser fields. We make use of the exchange-correlation potential with the integer discontinuity which improves the description of the double ionization process. We found that proper description of the double ionization requires the TDDFT exchange-correlation potential with the discontinuity with respect to the variation of the spin particle numbers (SPN) only. The results for the intensity-dependent probabilities of single and double ionization are presented and reproduce the famous ``knee'' structure.
Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno
2016-01-01
Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision. PMID:27303323
Probabilistic Estimation of Rare Random Collisions in 3 Space
2009-03-01
extended Poisson process as a feature of probability theory. With the bulk of research in extended Poisson processes going into parame- ter estimation, the...application of extended Poisson processes to spatial processes is largely untouched. Faddy performed a short study of spatial data, but overtly...the theory of extended Poisson processes . To date, the processes are limited in that the rates only depend on the number of arrivals at some time
Gravimetric method for in vitro calibration of skin hydration measurements.
Martinsen, Ørjan G; Grimnes, Sverre; Nilsen, Jon K; Tronstad, Christian; Jang, Wooyoung; Kim, Hongsig; Shin, Kunsoo; Naderi, Majid; Thielmann, Frank
2008-02-01
A novel method for in vitro calibration of skin hydration measurements is presented. The method combines gravimetric and electrical measurements and reveals an exponential dependency of measured electrical susceptance to absolute water content in the epidermal stratum corneum. The results also show that absorption of water into the stratum corneum exhibits three different phases with significant differences in absorption time constant. These phases probably correspond to bound, loosely bound, and bulk water.
Broadcasting but not receiving: density dependence considerations for SETI signals
NASA Astrophysics Data System (ADS)
Smith, Reginald D.
2009-04-01
This paper develops a detailed quantitative model which uses the Drake equation and an assumption of an average maximum radio broadcasting distance by an communicative civilization. Using this basis, it estimates the minimum civilization density for contact between two civilizations to be probable in a given volume of space under certain conditions, the amount of time it would take for a first contact, and the question of whether reciprocal contact is possible.
InAs/GaSb Broken-Gap Heterostructure Laser for Terahertz Spectroscopic Sensing Application
2010-09-01
from interband tunneling from the emitter is insignificant when forward biasing is applied. This means that HHs will accumulate in the right VB well... dependent on in-plane momentum. An important observation from Figs. 3 and 4 is that the interband tunneling probability is significantly less than the CB...leverages resonant electron injection and interband tunneling electron depletion to realize electron population inversion, while at the same time mitigating
Surveying Europe's Only Cave-Dwelling Chordate Species (Proteus anguinus) Using Environmental DNA.
Vörös, Judit; Márton, Orsolya; Schmidt, Benedikt R; Gál, Júlia Tünde; Jelić, Dušan
2017-01-01
In surveillance of subterranean fauna, especially in the case of rare or elusive aquatic species, traditional techniques used for epigean species are often not feasible. We developed a non-invasive survey method based on environmental DNA (eDNA) to detect the presence of the red-listed cave-dwelling amphibian, Proteus anguinus, in the caves of the Dinaric Karst. We tested the method in fifteen caves in Croatia, from which the species was previously recorded or expected to occur. We successfully confirmed the presence of P. anguinus from ten caves and detected the species for the first time in five others. Using a hierarchical occupancy model we compared the availability and detection probability of eDNA of two water sampling methods, filtration and precipitation. The statistical analysis showed that both availability and detection probability depended on the method and estimates for both probabilities were higher using filter samples than for precipitation samples. Combining reliable field and laboratory methods with robust statistical modeling will give the best estimates of species occurrence.
Alani, Amir M.; Faramarzi, Asaad
2015-01-01
In this paper, a stochastic finite element method (SFEM) is employed to investigate the probability of failure of cementitious buried sewer pipes subjected to combined effect of corrosion and stresses. A non-linear time-dependant model is used to determine the extent of concrete corrosion. Using the SFEM, the effects of different random variables, including loads, pipe material, and corrosion on the remaining safe life of the cementitious sewer pipes are explored. A numerical example is presented to demonstrate the merit of the proposed SFEM in evaluating the effects of the contributing parameters upon the probability of failure of cementitious sewer pipes. The developed SFEM offers many advantages over traditional probabilistic techniques since it does not use any empirical equations in order to determine failure of pipes. The results of the SFEM can help the concerning industry (e.g., water companies) to better plan their resources by providing accurate prediction for the remaining safe life of cementitious sewer pipes. PMID:26068092
Density profiles of the exclusive queuing process
NASA Astrophysics Data System (ADS)
Arita, Chikashi; Schadschneider, Andreas
2012-12-01
The exclusive queuing process (EQP) incorporates the exclusion principle into classic queuing models. It is characterized by, in addition to the entrance probability α and exit probability β, a third parameter: the hopping probability p. The EQP can be interpreted as an exclusion process of variable system length. Its phase diagram in the parameter space (α,β) is divided into a convergent phase and a divergent phase by a critical line which consists of a curved part and a straight part. Here we extend previous studies of this phase diagram. We identify subphases in the divergent phase, which can be distinguished by means of the shape of the density profile, and determine the velocity of the system length growth. This is done for EQPs with different update rules (parallel, backward sequential and continuous time). We also investigate the dynamics of the system length and the number of customers on the critical line. They are diffusive or subdiffusive with non-universal exponents that also depend on the update rules.
NASA Astrophysics Data System (ADS)
Klimenko, V. V.
2017-12-01
We obtain expressions for the probabilities of the normal-noise spikes with the Gaussian correlation function and for the probability density of the inter-spike intervals. As distinct from the delta-correlated noise, in which the intervals are distributed by the exponential law, the probability of the subsequent spike depends on the previous spike and the interval-distribution law deviates from the exponential one for a finite noise-correlation time (frequency-bandwidth restriction). This deviation is the most pronounced for a low detection threshold. Similarity of the behaviors of the distributions of the inter-discharge intervals in a thundercloud and the noise spikes for the varying repetition rate of the discharges/spikes, which is determined by the ratio of the detection threshold to the root-mean-square value of noise, is observed. The results of this work can be useful for the quantitative description of the statistical characteristics of the noise spikes and studying the role of fluctuations for the discharge emergence in a thundercloud.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Shirong; Davis, Michael J.; Skodje, Rex T.
2015-11-12
The sensitivity of kinetic observables is analyzed using a newly developed sum over histories representation of chemical kinetics. In the sum over histories representation, the concentrations of the chemical species are decomposed into the sum of probabilities for chemical pathways that follow molecules from reactants to products or intermediates. Unlike static flux methods for reaction path analysis, the sum over histories approach includes the explicit time dependence of the pathway probabilities. Using the sum over histories representation, the sensitivity of an observable with respect to a kinetic parameter such as a rate coefficient is then analyzed in terms of howmore » that parameter affects the chemical pathway probabilities. The method is illustrated for species concentration target functions in H-2 combustion where the rate coefficients are allowed to vary over their associated uncertainty ranges. It is found that large sensitivities are often associated with rate limiting steps along important chemical pathways or by reactions that control the branching of reactive flux« less
Probabilistic confidence for decisions based on uncertain reliability estimates
NASA Astrophysics Data System (ADS)
Reid, Stuart G.
2013-05-01
Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.
Leue, Anja; Cano Rodilla, Carmen; Beauducel, André
2015-01-01
Individuals typically evaluate whether their performance and the obtained feedback match. Previous research has shown that feedback negativity (FN) depends on outcome probability and feedback valence. It is, however, less clear to what extent previous effects of outcome probability on FN depend on self-evaluations of response correctness. Therefore, we investigated the effects of outcome probability on FN amplitude in a simple go/no-go task that allowed for the self-evaluation of response correctness. We also investigated effects of performance incompatibility and feedback valence. In a sample of N = 22 participants, outcome probability was manipulated by means of precues, feedback valence by means of monetary feedback, and performance incompatibility by means of feedback that induced a match versus mismatch with individuals' performance. We found that the 100% outcome probability condition induced a more negative FN following no-loss than the 50% outcome probability condition. The FN following loss was more negative in the 50% compared to the 100% outcome probability condition. Performance-incompatible loss resulted in a more negative FN than performance-compatible loss. Our results indicate that the self-evaluation of the correctness of responses should be taken into account when the effects of outcome probability and expectation mismatch on FN are investigated. PMID:26783525
Leue, Anja; Cano Rodilla, Carmen; Beauducel, André
2015-01-01
Individuals typically evaluate whether their performance and the obtained feedback match. Previous research has shown that feedback negativity (FN) depends on outcome probability and feedback valence. It is, however, less clear to what extent previous effects of outcome probability on FN depend on self-evaluations of response correctness. Therefore, we investigated the effects of outcome probability on FN amplitude in a simple go/no-go task that allowed for the self-evaluation of response correctness. We also investigated effects of performance incompatibility and feedback valence. In a sample of N = 22 participants, outcome probability was manipulated by means of precues, feedback valence by means of monetary feedback, and performance incompatibility by means of feedback that induced a match versus mismatch with individuals' performance. We found that the 100% outcome probability condition induced a more negative FN following no-loss than the 50% outcome probability condition. The FN following loss was more negative in the 50% compared to the 100% outcome probability condition. Performance-incompatible loss resulted in a more negative FN than performance-compatible loss. Our results indicate that the self-evaluation of the correctness of responses should be taken into account when the effects of outcome probability and expectation mismatch on FN are investigated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Tianhui; Fu, Bina, E-mail: bina@dicp.ac.cn, E-mail: zhangdh@dicp.ac.cn; Zhang, Dong H., E-mail: bina@dicp.ac.cn, E-mail: zhangdh@dicp.ac.cn
A new finding of the site-averaging approximation was recently reported on the dissociative chemisorption of the HCl/DCl+Au(111) surface reaction [T. Liu, B. Fu, and D. H. Zhang, J. Chem. Phys. 139, 184705 (2013); T. Liu, B. Fu, and D. H. Zhang, J. Chem. Phys. 140, 144701 (2014)]. Here, in order to investigate the dependence of new site-averaging approximation on the initial vibrational state of H{sub 2} as well as the PES for the dissociative chemisorption of H{sub 2} on Cu(111) surface at normal incidence, we carried out six-dimensional quantum dynamics calculations using the initial state-selected time-dependent wave packet approach, withmore » H{sub 2} initially in its ground vibrational state and the first vibrational excited state. The corresponding four-dimensional site-specific dissociation probabilities are also calculated with H{sub 2} fixed at bridge, center, and top sites. These calculations are all performed based on two different potential energy surfaces (PESs). It is found that the site-averaging dissociation probability over 15 fixed sites obtained from four-dimensional quantum dynamics calculations can accurately reproduce the six-dimensional dissociation probability for H{sub 2} (v = 0) and (v = 1) on the two PESs.« less
NASA Astrophysics Data System (ADS)
Niranjan, S. P.; Chandrasekaran, V. M.; Indhira, K.
2017-11-01
The objective of this paper is to analyse state dependent arrival in bulk retrial queueing system with immediate Bernoulli feedback, multiple vacations, threshold and constant retrial policy. Primary customers are arriving into the system in bulk with different arrival rates λ a and λ b . If arriving customers find the server is busy then the entire batch will join to orbit. Customer from orbit request service one by one with constant retrial rate γ. On the other hand if an arrival of customers finds the server is idle then customers will be served in batches according to general bulk service rule. After service completion, customers may request service again with probability δ as feedback or leave from the system with probability 1 - δ. In the service completion epoch, if the orbit size is zero then the server leaves for multiple vacations. The server continues the vacation until the orbit size reaches the value ‘N’ (N > b). At the vacation completion, if the orbit size is ‘N’ then the server becomes ready to provide service for customers from the main pool or from the orbit. For the designed queueing model, probability generating function of the queue size at an arbitrary time will be obtained by using supplementary variable technique. Various performance measures will be derived with suitable numerical illustrations.
Modulation of inward rectifier potassium channel by toosendanin, a presynaptic blocker.
Wang, Z F; Shi, Y L
2001-07-01
The effect of toosendanin, a presynaptic blocker, on the inward rectifier potassium channel (K(Kir)) of hippocampal CA1 pyramidal neurons of rats was studied by the single-channel patch-clamp technique. The results showed that toosendanin had an inhibitory effect on K(Kir) in an excised inside-out patch of the neuron under a symmetrical 150 mM K(+) condition. By decreasing the slower open time constant and increasing the slower close time constant, toosendanin (1x10(-6)-1x10(-4) g/ml) significantly reduced the open probability of the channel in a concentration-dependent manner. Meanwhile, a dose-dependent reduction in unitary conductance of the channel was also detected after toosendanin application. These data offer an explanation for toosendanin-induced facilitation of neurotransmitter release and antibotulismic effect of the drug.
Dynamics of a Landau-Zener transitions in a two-level system driven by a dissipative environment
NASA Astrophysics Data System (ADS)
Ateuafack, M. E.; Diffo, J. T.; Fai, L. C.
2016-02-01
The paper investigates the effects of a two-level quantum system coupled to transversal and longitudinal dissipative environment. The time-dependent phase accumulation, LZ transition probability and entropy in the presence of fast-ohmic, sub-ohmic and super-ohmic quantum noise are derived. Analytical results are obtained in terms of temperature, dissipation strength, LZ parameter and bath cutoff frequency. The bath is observed to modify the standard occupation difference by a decaying random phase factor and also produces dephasing during the transfer of population. The dephasing characteristics or the initial non-zero decoherence rate are observed to increase in time with the bath temperature and depend on the system-bath coupling strength and cutoff frequency. These parameters are found to strongly affect the memory and thus tailor the coherence process of the system.
Stochastic modeling of stock price process induced from the conjugate heat equation
NASA Astrophysics Data System (ADS)
Paeng, Seong-Hun
2015-02-01
Currency can be considered as a ruler for values of commodities. Then the price is the measured value by the ruler. We can suppose that inflation and variation of exchange rate are caused by variation of the scale of the ruler. In geometry, variation of the scale means that the metric is time-dependent. The conjugate heat equation is the modified heat equation which satisfies the heat conservation law for the time-dependent metric space. We propose a new model of stock prices by using the stochastic process whose transition probability is determined by the kernel of the conjugate heat equation. Our model of stock prices shows how the volatility term is affected by inflation and exchange rate. This model modifies the Black-Scholes equation in light of inflation and exchange rate.
Forster, Jeri E.; MaWhinney, Samantha; Ball, Erika L.; Fairclough, Diane
2011-01-01
Dropout is common in longitudinal clinical trials and when the probability of dropout depends on unobserved outcomes even after conditioning on available data, it is considered missing not at random and therefore nonignorable. To address this problem, mixture models can be used to account for the relationship between a longitudinal outcome and dropout. We propose a Natural Spline Varying-coefficient mixture model (NSV), which is a straightforward extension of the parametric Conditional Linear Model (CLM). We assume that the outcome follows a varying-coefficient model conditional on a continuous dropout distribution. Natural cubic B-splines are used to allow the regression coefficients to semiparametrically depend on dropout and inference is therefore more robust. Additionally, this method is computationally stable and relatively simple to implement. We conduct simulation studies to evaluate performance and compare methodologies in settings where the longitudinal trajectories are linear and dropout time is observed for all individuals. Performance is assessed under conditions where model assumptions are both met and violated. In addition, we compare the NSV to the CLM and a standard random-effects model using an HIV/AIDS clinical trial with probable nonignorable dropout. The simulation studies suggest that the NSV is an improvement over the CLM when dropout has a nonlinear dependence on the outcome. PMID:22101223
Class dependency of fuzzy relational database using relational calculus and conditional probability
NASA Astrophysics Data System (ADS)
Deni Akbar, Mohammad; Mizoguchi, Yoshihiro; Adiwijaya
2018-03-01
In this paper, we propose a design of fuzzy relational database to deal with a conditional probability relation using fuzzy relational calculus. In the previous, there are several researches about equivalence class in fuzzy database using similarity or approximate relation. It is an interesting topic to investigate the fuzzy dependency using equivalence classes. Our goal is to introduce a formulation of a fuzzy relational database model using the relational calculus on the category of fuzzy relations. We also introduce general formulas of the relational calculus for the notion of database operations such as ’projection’, ’selection’, ’injection’ and ’natural join’. Using the fuzzy relational calculus and conditional probabilities, we introduce notions of equivalence class, redundant, and dependency in the theory fuzzy relational database.
A Method for Evaluating Tuning Functions of Single Neurons based on Mutual Information Maximization
NASA Astrophysics Data System (ADS)
Brostek, Lukas; Eggert, Thomas; Ono, Seiji; Mustari, Michael J.; Büttner, Ulrich; Glasauer, Stefan
2011-03-01
We introduce a novel approach for evaluation of neuronal tuning functions, which can be expressed by the conditional probability of observing a spike given any combination of independent variables. This probability can be estimated out of experimentally available data. By maximizing the mutual information between the probability distribution of the spike occurrence and that of the variables, the dependence of the spike on the input variables is maximized as well. We used this method to analyze the dependence of neuronal activity in cortical area MSTd on signals related to movement of the eye and retinal image movement.
Role of initial correlation in coarsening of a ferromagnet
NASA Astrophysics Data System (ADS)
Chakraborty, Saikat; Das, Subir K.
2015-06-01
We study the dynamics of ordering in ferromagnets via Monte Carlo simulations of the Ising model, employing the Glauber spin-flip mechanism, in space dimensions d = 2 and 3, on square and simple cubic lattices. Results for the persistence probability and the domain growth are discussed for quenches to various temperatures (Tf) below the critical one (Tc), from different initial temperatures Ti ≥ Tc. In long time limit, for Ti>Tc, the persistence probability exhibits power-law decay with exponents θ ≃ 0.22 and ≃ 0.18 in d = 2 and 3, respectively. For finite Ti, the early time behavior is a different power-law whose life-time diverges and exponent decreases as Ti → Tc. The two steps are connected via power-law as a function of domain length and the crossover to the second step occurs when this characteristic length exceeds the equilibrium correlation length at T = Ti. Ti = Tc is expected to provide a new universality class for which we obtain θ ≡ θc ≃ 0.035 in d = 2 and ≃0.105 in d = 3. The time dependence of the average domain size ℓ, however, is observed to be rather insensitive to the choice of Ti.
SU-F-T-683: Cancer Stem Cell Hypothesis and Radiation Treatments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fourkal, E
Purpose: The tumor control probability in radiation therapy allows comparing different radiation treatments to each other by means of calculating the probability that a prescribed dose of radiation eradicates or controls the tumor. In the conventional approach, all cancer cells can divide unlimited number of times and the tumor control often means eradicating every malignant cell by the radiation. In recent years however, there is a mounting consensus that in a given tumor volume there is a sub-population of cells, known as cancer stem cells (CSCs) that are responsible for tumor initiation and growth. Other or progenitor cancer cells canmore » only divide limited number of times. This entails that only cancer stem cells may nned to be eliminated in order to control the tumor. Thus one may define TCP as the probability of eliminating CSCs for the given dose of radiation. Methods: Using stochastic methods, specifically the birth-and-death Markov processes, an infinite system of equations is set for probabilities of having m cancer stem cells at time t after the start of radiation. The TCP is calculated as the probability of no cancer stem cells surviving the radiation. Two scenarios are studied. In the first situation, the TCP is calculated for a unidirectional case when CSC gives birth to another CSC or a progenitor cell. In the second scenario, a bidirectional model is studied where the progenitor cell gives rise to CSC. Results: The proposed calculations show that the calculated TCP for CSC depends on whether one adopts unidirectional or bidirectional conversion models. The bidirectional model shows significantly lower TCP values for the given dose delivered to the tumor. Conclusion: Incorporating CSC hypothesis into the TCP modeling may notably influence the dose prescription as well as the concept of the expected TCP after the radiation treatments.« less
Quantum Probability -- A New Direction for Modeling in Cognitive Science
NASA Astrophysics Data System (ADS)
Roy, Sisir
2014-07-01
Human cognition is still a puzzling issue in research and its appropriate modeling. It depends on how the brain behaves at that particular instance and identifies and responds to a signal among myriads of noises that are present in the surroundings (called external noise) as well as in the neurons themselves (called internal noise). Thus it is not surprising to assume that the functionality consists of various uncertainties, possibly a mixture of aleatory and epistemic uncertainties. It is also possible that a complicated pathway consisting of both types of uncertainties in continuum play a major role in human cognition. For more than 200 years mathematicians and philosophers have been using probability theory to describe human cognition. Recently in several experiments with human subjects, violation of traditional probability theory has been clearly revealed in plenty of cases. Literature survey clearly suggests that classical probability theory fails to model human cognition beyond a certain limit. While the Bayesian approach may seem to be a promising candidate to this problem, the complete success story of Bayesian methodology is yet to be written. The major problem seems to be the presence of epistemic uncertainty and its effect on cognition at any given time. Moreover the stochasticity in the model arises due to the unknown path or trajectory (definite state of mind at each time point), a person is following. To this end a generalized version of probability theory borrowing ideas from quantum mechanics may be a plausible approach. A superposition state in quantum theory permits a person to be in an indefinite state at each point of time. Such an indefinite state allows all the states to have the potential to be expressed at each moment. Thus a superposition state appears to be able to represent better, the uncertainty, ambiguity or conflict experienced by a person at any moment demonstrating that mental states follow quantum mechanics during perception and cognition of ambiguous figures.
Reasenberg, P.A.; Hanks, T.C.; Bakun, W.H.
2003-01-01
The moment magnitude M 7.8 earthquake in 1906 profoundly changed the rate of seismic activity over much of northern California. The low rate of seismic activity in the San Francisco Bay region (SFBR) since 1906, relative to that of the preceding 55 yr, is often explained as a stress-shadow effect of the 1906 earthquake. However, existing elastic and visco-elastic models of stress change fail to fully account for the duration of the lowered rate of earthquake activity. We use variations in the rate of earthquakes as a basis for a simple empirical model for estimating the probability of M ≥6.7 earthquakes in the SFBR. The model preserves the relative magnitude distribution of sources predicted by the Working Group on California Earthquake Probabilities' (WGCEP, 1999; WGCEP, 2002) model of characterized ruptures on SFBR faults and is consistent with the occurrence of the four M ≥6.7 earthquakes in the region since 1838. When the empirical model is extrapolated 30 yr forward from 2002, it gives a probability of 0.42 for one or more M ≥6.7 in the SFBR. This result is lower than the probability of 0.5 estimated by WGCEP (1988), lower than the 30-yr Poisson probability of 0.60 obtained by WGCEP (1999) and WGCEP (2002), and lower than the 30-yr time-dependent probabilities of 0.67, 0.70, and 0.63 obtained by WGCEP (1990), WGCEP (1999), and WGCEP (2002), respectively, for the occurrence of one or more large earthquakes. This lower probability is consistent with the lack of adequate accounting for the 1906 stress-shadow in these earlier reports. The empirical model represents one possible approach toward accounting for the stress-shadow effect of the 1906 earthquake. However, the discrepancy between our result and those obtained with other modeling methods underscores the fact that the physics controlling the timing of earthquakes is not well understood. Hence, we advise against using the empirical model alone (or any other single probability model) for estimating the earthquake hazard and endorse the use of all credible earthquake probability models for the region, including the empirical model, with appropriate weighting, as was done in WGCEP (2002).
Pan, Wei; Chen, Yi-Shin
2018-01-01
Conventional decision theory suggests that under risk, people choose option(s) by maximizing the expected utility. However, theories deal ambiguously with different options that have the same expected utility. A network approach is proposed by introducing 'goal' and 'time' factors to reduce the ambiguity in strategies for calculating the time-dependent probability of reaching a goal. As such, a mathematical foundation that explains the irrational behavior of choosing an option with a lower expected utility is revealed, which could imply that humans possess rationality in foresight.
Rare event computation in deterministic chaotic systems using genealogical particle analysis
NASA Astrophysics Data System (ADS)
Wouters, J.; Bouchet, F.
2016-09-01
In this paper we address the use of rare event computation techniques to estimate small over-threshold probabilities of observables in deterministic dynamical systems. We demonstrate that genealogical particle analysis algorithms can be successfully applied to a toy model of atmospheric dynamics, the Lorenz ’96 model. We furthermore use the Ornstein-Uhlenbeck system to illustrate a number of implementation issues. We also show how a time-dependent objective function based on the fluctuation path to a high threshold can greatly improve the performance of the estimator compared to a fixed-in-time objective function.
Photoconductivity response time in amorphous semiconductors
NASA Astrophysics Data System (ADS)
Adriaenssens, G. J.; Baranovskii, S. D.; Fuhs, W.; Jansen, J.; Öktü, Ö.
1995-04-01
The photoconductivity response time of amorphous semiconductors is examined theoretically on the basis of standard definitions for free- and trapped-carrier lifetimes, and experimentally for a series of a-Si1-xCx:H alloys with x<0.1. Particular attention is paid to its dependence on carrier generation rate and temperature. As no satisfactory agreement between models and experiments emerges, a simple theory is developed that can account for the experimental observations on the basis of the usual multiple-trappping ideas, provided a small probability of direct free-carrier recombination is included. The theory leads to a stretched-exponential photocurrent decay.
A model for the transfer of perceptual-motor skill learning in human behaviors.
Rosalie, Simon M; Müller, Sean
2012-09-01
This paper presents a preliminary model that outlines the mechanisms underlying the transfer of perceptual-motor skill learning in sport and everyday tasks. Perceptual-motor behavior is motivated by performance demands and evolves over time to increase the probability of success through adaptation. Performance demands at the time of an event create a unique transfer domain that specifies a range of potentially successful actions. Transfer comprises anticipatory subconscious and conscious mechanisms. The model also outlines how transfer occurs across a continuum, which depends on the individual's expertise and contextual variables occurring at the incidence of transfer
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn; Steinsland, Ingelin
2014-05-01
This study introduces a methodology for the construction of probabilistic inflow forecasts for multiple catchments and lead times, and investigates criterions for evaluation of multi-variate forecasts. A post-processing approach is used, and a Gaussian model is applied for transformed variables. The post processing model has two main components, the mean model and the dependency model. The mean model is used to estimate the marginal distributions for forecasted inflow for each catchment and lead time, whereas the dependency models was used to estimate the full multivariate distribution of forecasts, i.e. co-variances between catchments and lead times. In operational situations, it is a straightforward task to use the models to sample inflow ensembles which inherit the dependencies between catchments and lead times. The methodology was tested and demonstrated in the river systems linked to the Ulla-Førre hydropower complex in southern Norway, where simultaneous probabilistic forecasts for five catchments and ten lead times were constructed. The methodology exhibits sufficient flexibility to utilize deterministic flow forecasts from a numerical hydrological model as well as statistical forecasts such as persistent forecasts and sliding window climatology forecasts. It also deals with variation in the relative weights of these forecasts with both catchment and lead time. When evaluating predictive performance in original space using cross validation, the case study found that it is important to include the persistent forecast for the initial lead times and the hydrological forecast for medium-term lead times. Sliding window climatology forecasts become more important for the latest lead times. Furthermore, operationally important features in this case study such as heteroscedasticity, lead time varying between lead time dependency and lead time varying between catchment dependency are captured. Two criterions were used for evaluating the added value of the dependency model. The first one was the Energy score (ES) that is a multi-dimensional generalization of continuous rank probability score (CRPS). ES was calculated for all lead-times and catchments together, for each catchment across all lead times and for each lead time across all catchments. The second criterion was to use CRPS for forecasted inflows accumulated over several lead times and catchments. The results showed that ES was not very sensitive to correct covariance structure, whereas CRPS for accumulated flows where more suitable for evaluating the dependency model. This indicates that it is more appropriate to evaluate relevant univariate variables that depends on the dependency structure then to evaluate the multivariate forecast directly.
Thermodynamically controlled crystallization of glucose pentaacetates from amorphous phase
NASA Astrophysics Data System (ADS)
Wlodarczyk, P.; Hawelek, L.; Hudecki, A.; Wlodarczyk, A.; Kolano-Burian, A.
2016-08-01
The α and β glucose pentaacetates are known sugar derivatives, which can be potentially used as stabilizers of amorphous phase of active ingredients of drugs (API). In the present work, crystallization behavior of equimolar mixture of α and β form in comparison to both pure anomers is revealed. It was shown that despite the same molecular interactions and similar molecular dynamics, crystallization from amorphous phase is significantly suppressed in equimolar mixture. Time dependent X-ray diffraction studies confirmed higher stability of the quenched amorphous equimolar mixture. Its tendency to crystallization is about 10 times lower than for pure anomers. Calorimetric studies revealed that the α and β anomers don't form solid solutions and have eutectic point for xα = 0.625. Suppressed crystallization tendency in the mixture is probably caused by the altered thermodynamics of the system. The factors such as difference of free energy between crystalline and amorphous state or altered configurational entropy are probably responsible for the inhibitory effect.
NASA Astrophysics Data System (ADS)
da Silva, Roberto
2018-06-01
This work explores the features of a graph generated by agents that hop from one node to another node, where the nodes have evolutionary attractiveness. The jumps are governed by Boltzmann-like transition probabilities that depend both on the euclidean distance between the nodes and on the ratio (β) of the attractiveness between them. It is shown that persistent nodes, i.e., nodes that never been reached by this special random walk are possible in the stationary limit differently from the case where the attractiveness is fixed and equal to one for all nodes (β = 1). Simultaneously, one also investigates the spectral properties and statistics related to the attractiveness and degree distribution of the evolutionary network. Finally, a study of the crossover between persistent phase and no persistent phase was performed and it was also observed the existence of a special type of transition probability which leads to a power law behaviour for the time evolution of the persistence.
Thermodynamically controlled crystallization of glucose pentaacetates from amorphous phase
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wlodarczyk, P., E-mail: patrykw@imn.gliwice.pl; Hawelek, L.; Hudecki, A.
The α and β glucose pentaacetates are known sugar derivatives, which can be potentially used as stabilizers of amorphous phase of active ingredients of drugs (API). In the present work, crystallization behavior of equimolar mixture of α and β form in comparison to both pure anomers is revealed. It was shown that despite the same molecular interactions and similar molecular dynamics, crystallization from amorphous phase is significantly suppressed in equimolar mixture. Time dependent X-ray diffraction studies confirmed higher stability of the quenched amorphous equimolar mixture. Its tendency to crystallization is about 10 times lower than for pure anomers. Calorimetric studiesmore » revealed that the α and β anomers don’t form solid solutions and have eutectic point for x{sub α} = 0.625. Suppressed crystallization tendency in the mixture is probably caused by the altered thermodynamics of the system. The factors such as difference of free energy between crystalline and amorphous state or altered configurational entropy are probably responsible for the inhibitory effect.« less
NASA Astrophysics Data System (ADS)
Livorati, André L. P.; Palmero, Matheus S.; Díaz-I, Gabriel; Dettmann, Carl P.; Caldas, Iberê L.; Leonel, Edson D.
2018-02-01
We study the dynamics of an ensemble of non interacting particles constrained by two infinitely heavy walls, where one of them is moving periodically in time, while the other is fixed. The system presents mixed dynamics, where the accessible region for the particle to diffuse chaotically is bordered by an invariant spanning curve. Statistical analysis for the root mean square velocity, considering high and low velocity ensembles, leads the dynamics to the same steady state plateau for long times. A transport investigation of the dynamics via escape basins reveals that depending of the initial velocity ensemble, the decay rates of the survival probability present different shapes and bumps, in a mix of exponential, power law and stretched exponential decays. After an analysis of step-size averages, we found that the stable manifolds play the role of a preferential path for faster escape, being responsible for the bumps and different shapes of the survival probability.
Observation of Bs(0)-Bs(0) oscillations.
Abulencia, A; Adelman, J; Affolder, T; Akimoto, T; Albrow, M G; Ambrose, D; Amerio, S; Amidei, D; Anastassov, A; Anikeev, K; Annovi, A; Antos, J; Aoki, M; Apollinari, G; Arguin, J-F; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Azfar, F; Azzi-Bacchetta, P; Azzurri, P; Bacchetta, N; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Bartsch, V; Bauer, G; Bedeschi, F; Behari, S; Belforte, S; Bellettini, G; Bellinger, J; Belloni, A; Benjamin, D; Beretvas, A; Beringer, J; Berry, T; Bhatti, A; Binkley, M; Bisello, D; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bolshov, A; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Budroni, S; Burkett, K; Busetto, G; Bussey, P; Byrum, K L; Cabrera, S; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carrillo, S; Carlsmith, D; Carosi, R; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, I; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciljak, M; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Coca, M; Compostella, G; Convery, M E; Conway, J; Cooper, B; Copic, K; Cordelli, M; Cortiana, G; Crescioli, F; Almenar, C Cuenca; Cuevas, J; Culbertson, R; Cully, J C; Cyr, D; DaRonco, S; D'Auria, S; Davies, T; D'Onofrio, M; Dagenhart, D; de Barbaro, P; De Cecco, S; Deisher, A; De Lentdecker, G; Dell'Orso, M; Delli Paoli, F; Demortier, L; Deng, J; Deninno, M; De Pedis, D; Derwent, P F; Di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; DiTuro, P; Dörr, C; Donati, S; Donega, M; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, I; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Field, R; Flanagan, G; Foland, A; Forrester, S; Foster, G W; Franklin, M; Freeman, J C; Frisch, H J; Furic, I; Gallinaro, M; Galyardt, J; Garcia, J E; Garberson, F; Garfinkel, A F; Gay, C; Gerberich, H; Gerdes, D; Giagu, S; Giannetti, P; Gibson, A; Gibson, K; Gimmell, J L; Ginsburg, C; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glenzinski, D; Gold, M; Goldschmidt, N; Goldstein, J; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Griffiths, M; Grinstein, S; Grosso-Pilcher, C; Group, R C; Grundler, U; da Costa, J Guimaraes; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Hamilton, A; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Holloway, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ishizawa, Y; Ivanov, A; Iyutin, B; James, E; Jang, D; Jayatilaka, B; Jeans, D; Jensen, H; Jeon, E J; Jindariani, S; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Karchin, P E; Kato, Y; Kemp, Y; Kephart, R; Kerzel, U; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Klute, M; Knuteson, B; Ko, B R; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kovalev, A; Kraan, A C; Kraus, J; Kravchenko, I; Kreps, M; Kroll, J; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhlmann, S E; Kuhr, T; Kusakabe, Y; Kwang, S; Laasanen, A T; Lai, S; Lami, S; Lammel, S; Lancaster, M; Lander, R L; Lannon, K; Lath, A; Latino, G; Lazzizzera, I; LeCompte, T; Lee, J; Lee, J; Lee, Y J; Lee, S W; Lefèvre, R; Leonardo, N; Leone, S; Levy, S; Lewis, J D; Lin, C; Lin, C S; Lindgren, M; Lipeles, E; Liss, T M; Lister, A; Litvintsev, D O; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Loverre, P; Lu, R-S; Lucchesi, D; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; MacQueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Manca, G; Margaroli, F; Marginean, R; Marino, C; Marino, C P; Martin, A; Martin, M; Martin, V; Martínez, M; Maruyama, T; Mastrandrea, P; Masubuchi, T; Matsunaga, H; Mattson, M E; Mazini, R; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzemer, S; Menzione, A; Merkel, P; Mesropian, C; Messina, A; Miao, T; Miladinovic, N; Miles, J; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyamoto, A; Moed, S; Moggi, N; Mohr, B; Moore, R; Morello, M; Fernandez, P Movilla; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Nachtman, J; Nagano, A; Naganoma, J; Nahn, S; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nigmanov, T; Nodulman, L; Norniella, O; Nurse, E; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Oldeman, R; Orava, R; Osterberg, K; Pagliarone, C; Palencia, E; Papadimitriou, V; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Piedra, J; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Portell, X; Poukhov, O; Pounder, N; Prokoshin, F; Pronko, A; Proudfoot, J; Ptochos, F; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ranjan, N; Rappoccio, S; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Sabik, S; Safonov, A; Sakumoto, W K; Salamanna, G; Saltó, O; Saltzberg, D; Sánchez, C; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savard, P; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shapiro, M D; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Sjolin, J; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soderberg, M; Soha, A; Somalwar, S; Sorin, V; Spalding, J; Spinella, F; Spreitzer, T; Squillacioti, P; Stanitzki, M; Staveris-Polykalas, A; Denis, R St; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Sun, H; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Takikawa, K; Tanaka, M; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Tourneur, S; Trischuk, W; Tsuchiya, R; Tsuno, S; Turini, N; Ukegawa, F; Unverhau, T; Uozumi, S; Usynin, D; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Veramendi, G; Veszpremi, V; Vidal, R; Vila, I; Vilar, R; Vine, T; Vollrath, I; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner, J; Wagner, W; Wallny, R; Wang, S M; Warburton, A; Waschke, S; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Yagil, A; Yamamoto, K; Yamaoka, J; Yamashita, T; Yang, C; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zhou, J; Zucchelli, S
2006-12-15
We report the observation of Bs(0)-Bs(0) oscillations from a time-dependent measurement of the Bs(0)-Bs(0) oscillation frequency Deltams. Using a data sample of 1 fb(-1) of pp collisions at square root of s=1.96 TeV collected with the CDF II detector at the Fermilab Tevatron, we find signals of 5600 fully reconstructed hadronic Bs decays, 3100 partially reconstructed hadronic Bs decays, and 61,500 partially reconstructed semileptonic Bs decays. We measure the probability as a function of proper decay time that the Bs decays with the same, or opposite, flavor as the flavor at production, and we find a signal for Bs(0)-Bs(0) oscillations. The probability that random fluctuations could produce a comparable signal is 8 x 10(-8), which exceeds 5sigma significance. We measure Deltams=17.77 +/- 0.10(stat) +/- 0.07(syst) ps(-1) and extract /V(td)/V(ts)/=0.2060+/-0.0007(Deltams)(-0.0060)(+0.008)(Deltamd+theor).
Stationary echo canceling in velocity estimation by time-domain cross-correlation.
Jensen, J A
1993-01-01
The application of stationary echo canceling to ultrasonic estimation of blood velocities using time-domain cross-correlation is investigated. Expressions are derived that show the influence from the echo canceler on the signals that enter the cross-correlation estimator. It is demonstrated that the filtration results in a velocity-dependent degradation of the signal-to-noise ratio. An analytic expression is given for the degradation for a realistic pulse. The probability of correct detection at low signal-to-noise ratios is influenced by signal-to-noise ratio, transducer bandwidth, center frequency, number of samples in the range gate, and number of A-lines employed in the estimation. Quantitative results calculated by a simple simulation program are given for the variation in probability from these parameters. An index reflecting the reliability of the estimate at hand can be calculated from the actual cross-correlation estimate by a simple formula and used in rejecting poor estimates or in displaying the reliability of the velocity estimated.
Effects of heterogeneous convergence rate on consensus in opinion dynamics
NASA Astrophysics Data System (ADS)
Huang, Changwei; Dai, Qionglin; Han, Wenchen; Feng, Yuee; Cheng, Hongyan; Li, Haihong
2018-06-01
The Deffuant model has attracted much attention in the study of opinion dynamics. Here, we propose a modified version by introducing into the model a heterogeneous convergence rate which is dependent on the opinion difference between interacting agents and a tunable parameter κ. We study the effects of heterogeneous convergence rate on consensus by investigating the probability of complete consensus, the size of the largest opinion cluster, the number of opinion clusters, and the relaxation time. We find that the decrease of the convergence rate is favorable to decreasing the confidence threshold for the population to always reach complete consensus, and there exists optimal κ resulting in the minimal bounded confidence threshold. Moreover, we find that there exists a window before the threshold of confidence in which complete consensus may be reached with a nonzero probability when κ is not too large. We also find that, within a certain confidence range, decreasing the convergence rate will reduce the relaxation time, which is somewhat counterintuitive.
NASA Astrophysics Data System (ADS)
Bevilacqua, Andrea; Neri, Augusto; Bisson, Marina; Esposti Ongaro, Tomaso; Flandoli, Franco; Isaia, Roberto; Rosi, Mauro; Vitale, Stefano
2017-09-01
This study presents a new method for producing long-term hazard maps for pyroclastic density currents (PDC) originating at Campi Flegrei caldera. Such method is based on a doubly stochastic approach and is able to combine the uncertainty assessments on the spatial location of the volcanic vent, the size of the flow and the expected time of such an event. The results are obtained by using a Monte Carlo approach and adopting a simplified invasion model based on the box model integral approximation. Temporal assessments are modelled through a Cox-type process including self-excitement effects, based on the eruptive record of the last 15 kyr. Mean and percentile maps of PDC invasion probability are produced, exploring their sensitivity to some sources of uncertainty and to the effects of the dependence between PDC scales and the caldera sector where they originated. Conditional maps representative of PDC originating inside limited zones of the caldera, or of PDC with a limited range of scales are also produced. Finally, the effect of assuming different time windows for the hazard estimates is explored, also including the potential occurrence of a sequence of multiple events. Assuming that the last eruption of Monte Nuovo (A.D. 1538) marked the beginning of a new epoch of activity similar to the previous ones, results of the statistical analysis indicate a mean probability of PDC invasion above 5% in the next 50 years on almost the entire caldera (with a probability peak of 25% in the central part of the caldera). In contrast, probability values reduce by a factor of about 3 if the entire eruptive record is considered over the last 15 kyr, i.e. including both eruptive epochs and quiescent periods.
Schlenker, Lela S.; Latour, Robert J.; Brill, Richard W.; Graves, John E.
2016-01-01
White marlin, a highly migratory pelagic marine fish, support important commercial and recreational fisheries throughout their range in the tropical and subtropical Atlantic Ocean. More than 10 000 individuals can be caught annually in the United States recreational fishery, of which the vast majority are captured on circle hooks and released alive. The probability of post-release mortality of white marlin released from circle hooks has been documented to be <0.02, but the associated physiological stress resulting from capture and handling techniques has not been characterized despite its importance for understanding the health of released fish. We examined the physiological response of 68 white marlin caught on circle hooks in the recreational fishery and followed the fate of 22 of these fish with pop-up satellite archival tags programmed to release after 30 days. Measures of plasma sodium, chloride, glucose and lactate concentrations taken from fish that were briefly and consistently (mean = 120 s, standard deviation = 40 s) removed from the water increased with angling time, but post-release mortality was inversely related to angling time. The probability of post-release mortality was predicted by elevated plasma potassium concentrations and was more than 10 times greater than has been previously reported for white marlin caught on circle hooks that were not removed from the water. This disparity in estimates of post-release mortality suggests that removal of fish from the water for physiological sampling greatly heightens stress, disrupts homeostasis and thus increases the probability of post-release mortality. Our results demonstrate that elevated concentrations of plasma potassium predict mortality in white marlin and that the probability of post-release mortality is highly dependent on post-capture handling procedures. PMID:27293745
Schlenker, Lela S; Latour, Robert J; Brill, Richard W; Graves, John E
2016-01-01
White marlin, a highly migratory pelagic marine fish, support important commercial and recreational fisheries throughout their range in the tropical and subtropical Atlantic Ocean. More than 10 000 individuals can be caught annually in the United States recreational fishery, of which the vast majority are captured on circle hooks and released alive. The probability of post-release mortality of white marlin released from circle hooks has been documented to be <0.02, but the associated physiological stress resulting from capture and handling techniques has not been characterized despite its importance for understanding the health of released fish. We examined the physiological response of 68 white marlin caught on circle hooks in the recreational fishery and followed the fate of 22 of these fish with pop-up satellite archival tags programmed to release after 30 days. Measures of plasma sodium, chloride, glucose and lactate concentrations taken from fish that were briefly and consistently (mean = 120 s, standard deviation = 40 s) removed from the water increased with angling time, but post-release mortality was inversely related to angling time. The probability of post-release mortality was predicted by elevated plasma potassium concentrations and was more than 10 times greater than has been previously reported for white marlin caught on circle hooks that were not removed from the water. This disparity in estimates of post-release mortality suggests that removal of fish from the water for physiological sampling greatly heightens stress, disrupts homeostasis and thus increases the probability of post-release mortality. Our results demonstrate that elevated concentrations of plasma potassium predict mortality in white marlin and that the probability of post-release mortality is highly dependent on post-capture handling procedures.
NASA Astrophysics Data System (ADS)
Bock, Y.; Fang, P.; Moore, A. W.; Kedar, S.; Liu, Z.; Owen, S. E.; Glasscoe, M. T.
2016-12-01
Detection of time-dependent crustal deformation relies on the availability of accurate surface displacements, proper time series analysis to correct for secular motion, coseismic and non-tectonic instrument offsets, periodic signatures at different frequencies, and a realistic estimate of uncertainties for the parameters of interest. As part of the NASA Solid Earth Science ESDR System (SESES) project, daily displacement time series are estimated for about 2500 stations, focused on tectonic plate boundaries and having a global distribution for accessing the terrestrial reference frame. The "combined" time series are optimally estimated from independent JPL GIPSY and SIO GAMIT solutions, using a consistent set of input epoch-date coordinates and metadata. The longest time series began in 1992; more than 30% of the stations have experienced one or more of 35 major earthquakes with significant postseismic deformation. Here we present three examples of time-dependent deformation that have been detected in the SESES displacement time series. (1) Postseismic deformation is a fundamental time-dependent signal that indicates a viscoelastic response of the crust/mantle lithosphere, afterslip, or poroelastic effects at different spatial and temporal scales. It is critical to identify and estimate the extent of postseismic deformation in both space and time not only for insight into the crustal deformation and earthquake cycles and their underlying physical processes, but also to reveal other time-dependent signals. We report on our database of characterized postseismic motions using a principal component analysis to isolate different postseismic processes. (2) Starting with the SESES combined time series and applying a time-dependent Kalman filter, we examine episodic tremor and slow slip (ETS) in the Cascadia subduction zone. We report on subtle slip details, allowing investigation of the spatiotemporal relationship between slow slip transients and tremor and their underlying physical mechanisms. (3) We present evolving strain dilatation and shear rates based on the SESES velocities for regional subnetworks as a metric for assigning earthquake probabilities and detection of possible time-dependent deformation related to underlying physical processes.
Bayesian network representing system dynamics in risk analysis of nuclear systems
NASA Astrophysics Data System (ADS)
Varuttamaseni, Athi
2011-12-01
A dynamic Bayesian network (DBN) model is used in conjunction with the alternating conditional expectation (ACE) regression method to analyze the risk associated with the loss of feedwater accident coupled with a subsequent initiation of the feed and bleed operation in the Zion-1 nuclear power plant. The use of the DBN allows the joint probability distribution to be factorized, enabling the analysis to be done on many simpler network structures rather than on one complicated structure. The construction of the DBN model assumes conditional independence relations among certain key reactor parameters. The choice of parameter to model is based on considerations of the macroscopic balance statements governing the behavior of the reactor under a quasi-static assumption. The DBN is used to relate the peak clad temperature to a set of independent variables that are known to be important in determining the success of the feed and bleed operation. A simple linear relationship is then used to relate the clad temperature to the core damage probability. To obtain a quantitative relationship among different nodes in the DBN, surrogates of the RELAP5 reactor transient analysis code are used. These surrogates are generated by applying the ACE algorithm to output data obtained from about 50 RELAP5 cases covering a wide range of the selected independent variables. These surrogates allow important safety parameters such as the fuel clad temperature to be expressed as a function of key reactor parameters such as the coolant temperature and pressure together with important independent variables such as the scram delay time. The time-dependent core damage probability is calculated by sampling the independent variables from their probability distributions and propagate the information up through the Bayesian network to give the clad temperature. With the knowledge of the clad temperature and the assumption that the core damage probability has a one-to-one relationship to it, we have calculated the core damage probably as a function of transient time. The use of the DBN model in combination with ACE allows risk analysis to be performed with much less effort than if the analysis were done using the standard techniques.
Phase synchronization of bursting neurons in clustered small-world networks
NASA Astrophysics Data System (ADS)
Batista, C. A. S.; Lameu, E. L.; Batista, A. M.; Lopes, S. R.; Pereira, T.; Zamora-López, G.; Kurths, J.; Viana, R. L.
2012-07-01
We investigate the collective dynamics of bursting neurons on clustered networks. The clustered network model is composed of subnetworks, each of them presenting the so-called small-world property. This model can also be regarded as a network of networks. In each subnetwork a neuron is connected to other ones with regular as well as random connections, the latter with a given intracluster probability. Moreover, in a given subnetwork each neuron has an intercluster probability to be connected to the other subnetworks. The local neuron dynamics has two time scales (fast and slow) and is modeled by a two-dimensional map. In such small-world network the neuron parameters are chosen to be slightly different such that, if the coupling strength is large enough, there may be synchronization of the bursting (slow) activity. We give bounds for the critical coupling strength to obtain global burst synchronization in terms of the network structure, that is, the probabilities of intracluster and intercluster connections. We find that, as the heterogeneity in the network is reduced, the network global synchronizability is improved. We show that the transitions to global synchrony may be abrupt or smooth depending on the intercluster probability.
Crossing trend analysis methodology and application for Turkish rainfall records
NASA Astrophysics Data System (ADS)
Şen, Zekâi
2018-01-01
Trend analyses are the necessary tools for depicting possible general increase or decrease in a given time series. There are many versions of trend identification methodologies such as the Mann-Kendall trend test, Spearman's tau, Sen's slope, regression line, and Şen's innovative trend analysis. The literature has many papers about the use, cons and pros, and comparisons of these methodologies. In this paper, a completely new approach is proposed based on the crossing properties of a time series. It is suggested that the suitable trend from the centroid of the given time series should have the maximum number of crossings (total number of up-crossings or down-crossings). This approach is applicable whether the time series has dependent or independent structure and also without any dependence on the type of the probability distribution function. The validity of this method is presented through extensive Monte Carlo simulation technique and its comparison with other existing trend identification methodologies. The application of the methodology is presented for a set of annual daily extreme rainfall time series from different parts of Turkey and they have physically independent structure.
Rainfall disaggregation for urban hydrology: Effects of spatial consistence
NASA Astrophysics Data System (ADS)
Müller, Hannes; Haberlandt, Uwe
2015-04-01
For urban hydrology rainfall time series with a high temporal resolution are crucial. Observed time series of this kind are very short in most cases, so they cannot be used. On the contrary, time series with lower temporal resolution (daily measurements) exist for much longer periods. The objective is to derive time series with a long duration and a high resolution by disaggregating time series of the non-recording stations with information of time series of the recording stations. The multiplicative random cascade model is a well-known disaggregation model for daily time series. For urban hydrology it is often assumed, that a day consists of only 1280 minutes in total as starting point for the disaggregation process. We introduce a new variant for the cascade model, which is functional without this assumption and also outperforms the existing approach regarding time series characteristics like wet and dry spell duration, average intensity, fraction of dry intervals and extreme value representation. However, in both approaches rainfall time series of different stations are disaggregated without consideration of surrounding stations. This yields in unrealistic spatial patterns of rainfall. We apply a simulated annealing algorithm that has been used successfully for hourly values before. Relative diurnal cycles of the disaggregated time series are resampled to reproduce the spatial dependence of rainfall. To describe spatial dependence we use bivariate characteristics like probability of occurrence, continuity ratio and coefficient of correlation. Investigation area is a sewage system in Northern Germany. We show that the algorithm has the capability to improve spatial dependence. The influence of the chosen disaggregation routine and the spatial dependence on overflow occurrences and volumes of the sewage system will be analyzed.
A theory of the Io phase asymmetry of the Jovian decametric radiation
NASA Technical Reports Server (NTRS)
Hashimoto, K.; Goldstein, M. L.
1982-01-01
An explanation of an asymmetry in the occurrence probability of the Io-dependent Jovian decametric radiation is proposed. Io generates stronger Alfven waves toward the south when it is in the northern part of the torus. This wave then generates decametric radiation in the northern ionosphere after it reflects in the southern ionosphere. The asymmetry then results from computing the propagation time of the alfven wave along this trajectory. The ray paths of the decameter radiation are calculated using a three dimensional ray tracing program in the Jovian ionosphere. Variations in the expected probability plots are computer for two models of the Jovian ionosphere and global magnetic field, as well as for several choices of the ratio of the radiated frequency to the X-mode cutoff frequency.
Gerasimov, A V; Kostyuchenko, V P; Solovieva, A S; Olovnikov, A M
2014-10-01
We found that some morphological properties of the pineal gland and submandibular salivary gland of mice are significantly distinct at the new and full moon. We suppose that the differences are initiated by the displacements of the electron-dense concretions in the secretory vesicles of pinealocytes. This presumably occurs under the influence of the gravitational field, which periodically changes during different phases of the moon. It seems that the pinealocyte is both an endocrine and gravisensory cell. A periodic secretion of the pineal gland probably stimulates, in a lunaphasic mode, the neuroendocrine system that, in turn, periodically exerts influence on different organs of the body. The observed effect probably serves, within the lifelong clock of a brain, to control development and aging in time.
Quantum tunneling with friction
NASA Astrophysics Data System (ADS)
Tokieda, M.; Hagino, K.
2017-05-01
Using the phenomenological quantum friction models introduced by P. Caldirola [Nuovo Cimento 18, 393 (1941), 10.1007/BF02960144] and E. Kanai [Prog. Theor. Phys. 3, 440 (1948), 10.1143/ptp/3.4.440], M. D. Kostin [J. Chem. Phys. 57, 3589 (1972), 10.1063/1.1678812], and K. Albrecht [Phys. Lett. B 56, 127 (1975), 10.1016/0370-2693(75)90283-X], we study quantum tunneling of a one-dimensional potential in the presence of energy dissipation. To this end, we calculate the tunneling probability using a time-dependent wave-packet method. The friction reduces the tunneling probability. We show that the three models provide similar penetrabilities to each other, among which the Caldirola-Kanai model requires the least numerical effort. We also discuss the effect of energy dissipation on quantum tunneling in terms of barrier distributions.
NASA Astrophysics Data System (ADS)
Wable, Pawan S.; Jha, Madan K.
2018-02-01
The effects of rainfall and the El Niño Southern Oscillation (ENSO) on groundwater in a semi-arid basin of India were analyzed using Archimedean copulas considering 17 years of data for monsoon rainfall, post-monsoon groundwater level (PMGL) and ENSO Index. The evaluated dependence among these hydro-climatic variables revealed that PMGL-Rainfall and PMGL-ENSO Index pairs have significant dependence. Hence, these pairs were used for modeling dependence by employing four types of Archimedean copulas: Ali-Mikhail-Haq, Clayton, Gumbel-Hougaard, and Frank. For the copula modeling, the results of probability distributions fitting to these hydro-climatic variables indicated that the PMGL and rainfall time series are best represented by Weibull and lognormal distributions, respectively, while the non-parametric kernel-based normal distribution is the most suitable for the ENSO Index. Further, the PMGL-Rainfall pair is best modeled by the Clayton copula, and the PMGL-ENSO Index pair is best modeled by the Frank copula. The Clayton copula-based conditional probability of PMGL being less than or equal to its average value at a given mean rainfall is above 70% for 33% of the study area. In contrast, the spatial variation of the Frank copula-based probability of PMGL being less than or equal to its average value is 35-40% in 23% of the study area during El Niño phase, while it is below 15% in 35% of the area during the La Niña phase. This copula-based methodology can be applied under data-scarce conditions for exploring the impacts of rainfall and ENSO on groundwater at basin scales.
Mental disorder in limb reconstruction: Prevalence, associations and impact on work disability.
Rayner, L; Simpson, A; Matcham, F; Shetty, S; Lahoti, O; Groom, G; Hotopf, M
2016-10-01
This cross-sectional survey aimed to assess the prevalence of depression, anxiety, post-traumatic stress disorder (PTSD), and drug and alcohol dependence in a limb reconstruction population and examine associations with demographic and functional variables. As part of routine clinical care, data were collected from 566 patients attending a tertiary referral centre for limb reconstruction between April 2012 and February 2016. Depression, anxiety, post-traumatic stress disorder (PTSD), and alcohol and drug dependence were measured using standardised self-report screening tools. 173 patients (30.6% CI 26.7-34.4) screened positive for at least one of the mental disorders assessed. 110 (19.4% CI 16.2-22.7) met criteria for probable major depression; 112 (19.9% CI 16.6-23.2) patients met criteria for probable generalised anxiety disorder; and 41 (7.6% CI 5.3-9.8) patients met criteria for probable PTSD. The prevalence of probable alcohol dependence and probable drug dependence was 1.6% (CI 0.6-2.7) and 4.5% (CI 2.7-6.3), respectively. Patients who screened positive for depression, anxiety and PTSD reported significantly higher levels of pain, fatigue, and functional impairment. Depression and anxiety were independently associated with work disability after adjustment for covariates (OR 1.98 (CI 1.08-3.62) and OR 1.83 (CI 1.04-3.23), respectively). The high prevalence and adverse associations of probable mental disorder in limb reconstruction attest to the need for routine psychological assessment and support. Integrated screening and management of mental disorder in this population may have a positive impact on patients' emotional, physical and occupational rehabilitation. A randomised controlled trial is needed to test this hypothesis. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Budke, Carsten; Dreischmeier, Katharina; Koop, Thomas
2014-05-01
Homogeneous ice nucleation is a stochastic process, implying that it is not only temperature but also time dependent. For heterogeneous ice nucleation it is still under debate whether there is a significant time dependence or not. In case of minor time dependence it is probably sufficient to use a singular or slightly modified singular approach, which mainly supposes temperature dependence and just small stochastic variations. We contribute to this discussion using a novel optical freezing array termed BINARY (Bielefeld Ice Nucleation ARraY). The setup consists of an array of microliter-sized droplets on a Peltier cooling stage. The droplets are separated from each other with a polydimethylsiloxane (PDMS) spacer to prevent a Bergeron-Findeisen process, in which the first freezing droplets grow at the expense of the remaining liquid ones due to their vapor pressure differences. An automatic detection of nucleation events is realized optically by the change in brightness during freezing. Different types of ice nucleating agents were tested with the presented setup, e. g. pollen and clay mineral dust. Exemplarily, cooling rate dependent measurements are shown for the heterogeneous ice nucleation induced by Snomax®. The authors gratefully acknowledge funding by the German Research Foundation (DFG) through the project BIOCLOUDS (KO 2944/1-1) and through the research unit INUIT (FOR 1525) under KO 2944/2-1. We particularly thank our INUIT partners for fruitful collaboration and sharing of ideas and IN samples.
Rose, Johnie; Hawthorn, Rachael L; Watts, Brook; Singer, Mendel E
2009-09-25
To examine the public health impact of mass vaccination with live attenuated human rotavirus vaccine (RIX4414) in a birth cohort in India, and to estimate the cost effectiveness and affordability of such a programme. Decision analytical Markov model encompassing all direct medical costs. Infection risk and severity depended on age, number of previous infections, and vaccination history; probabilities of use of inpatient and outpatient health services depended on symptom severity. Published clinical, epidemiological, and economic data. When possible, parameter estimates were based on data specific for India. Population Simulated Indian birth cohort followed for five years. Decrease in rotavirus gastroenteritis episodes (non-severe and severe), deaths, outpatient visits, and admission to hospital; incremental cost effectiveness ratio of vaccination expressed as net cost in 2007 rupees per life year saved. In the base case, vaccination prevented 28,943 (29.7%) symptomatic episodes, 6981 (38.2%) severe episodes, 164 deaths (41.0%), 7178 (33.3%) outpatient visits, and 812 (34.3%) admissions to hospital per 100,000 children. Vaccination cost 8023 rupees (about pound100, euro113, $165) per life year saved, less than India's per capita gross domestic product, a common criterion for cost effectiveness. The net programme cost would be equivalent to 11.6% of the 2006-7 budget of the Indian Department of Health and Family Welfare. Model results were most sensitive to variations in access to outpatient care for those with severe symptoms. If this parameter was increased to its upper limit, the incremental cost effectiveness ratio for vaccination still fell between one and three times the per capita gross domestic product, meeting the World Health Organization's criterion for "cost effective" interventions. Uncertainty analysis indicated a 94.7% probability that vaccination would be cost effective according to a criterion of one times per capita gross domestic product per life year saved, and a 97.8% probability that it would be cost effective according to a criterion of three times per capita gross domestic product. Across a wide range of assumptions, mass RIX4414 vaccination in India would probably prevent substantial morbidity and mortality at a cost per life year saved below typical thresholds of cost effectiveness. The opportunity costs of such a programme in this or similar settings, however, should be weighed up carefully.
Xu, Wenwu; Zhang, Peiyu
2013-02-21
A time-dependent quantum wave packet method is used to investigate the dynamics of the He + HeH(+)(X(1)Σ(+)) reaction based on a new potential energy surface [Liang et al., J. Chem. Phys.2012, 136, 094307]. The coupled channel (CC) and centrifugal-sudden (CS) reaction probabilities as well as the total integral cross sections are calculated. A comparison of the results with and without Coriolis coupling revealed that the number of K states N(K) (K is the projection of the total angular momentum J on the body-fixed z axis) significantly influences the reaction threshold. The effective potential energy profiles of each N(K) for the He + HeH(+) reaction in a collinear geometry indicate that the barrier height gradually decreased with increased N(K). The calculated time evolution of CC and CS probability density distribution over the collision energy of 0.27-0.36 eV at total angular momentum J = 50 clearly suggests a lower reaction threshold of CC probabilities. The CC cross sections are larger than the CS results within the entire energy range, demonstrating that the Coriolis coupling effect can effectively promote the He + HeH(+) reaction.
Generating intrinsically disordered protein conformational ensembles from a Markov chain
NASA Astrophysics Data System (ADS)
Cukier, Robert I.
2018-03-01
Intrinsically disordered proteins (IDPs) sample a diverse conformational space. They are important to signaling and regulatory pathways in cells. An entropy penalty must be payed when an IDP becomes ordered upon interaction with another protein or a ligand. Thus, the degree of conformational disorder of an IDP is of interest. We create a dichotomic Markov model that can explore entropic features of an IDP. The Markov condition introduces local (neighbor residues in a protein sequence) rotamer dependences that arise from van der Waals and other chemical constraints. A protein sequence of length N is characterized by its (information) entropy and mutual information, MIMC, the latter providing a measure of the dependence among the random variables describing the rotamer probabilities of the residues that comprise the sequence. For a Markov chain, the MIMC is proportional to the pair mutual information MI which depends on the singlet and pair probabilities of neighbor residue rotamer sampling. All 2N sequence states are generated, along with their probabilities, and contrasted with the probabilities under the assumption of independent residues. An efficient method to generate realizations of the chain is also provided. The chain entropy, MIMC, and state probabilities provide the ingredients to distinguish different scenarios using the terminologies: MoRF (molecular recognition feature), not-MoRF, and not-IDP. A MoRF corresponds to large entropy and large MIMC (strong dependence among the residues' rotamer sampling), a not-MoRF corresponds to large entropy but small MIMC, and not-IDP corresponds to low entropy irrespective of the MIMC. We show that MorFs are most appropriate as descriptors of IDPs. They provide a reasonable number of high-population states that reflect the dependences between neighbor residues, thus classifying them as IDPs, yet without very large entropy that might lead to a too high entropy penalty.
Three Dimensional Time Dependent Stochastic Method for Cosmic-ray Modulation
NASA Astrophysics Data System (ADS)
Pei, C.; Bieber, J. W.; Burger, R. A.; Clem, J. M.
2009-12-01
A proper understanding of the different behavior of intensities of galactic cosmic rays in different solar cycle phases requires solving the modulation equation with time dependence. We present a detailed description of our newly developed stochastic approach for cosmic ray modulation which we believe is the first attempt to solve the time dependent Parker equation in 3D evolving from our 3D steady state stochastic approach, which has been benchmarked extensively by using the finite difference method. Our 3D stochastic method is different from other stochastic approaches in literature (Ball et al 2005, Miyake et al 2005, and Florinski 2008) in several ways. For example, we employ spherical coordinates which makes the code much more efficient by reducing coordinate transformations. What's more, our stochastic differential equations are different from others because our map from Parker's original equation to the Fokker-Planck equation extends the method used by Jokipii and Levy 1977 while others don't although all 3D stochastic methods are essentially based on Ito formula. The advantage of the stochastic approach is that it also gives the probability information of travel times and path lengths of cosmic rays besides the intensities. We show that excellent agreement exists between solutions obtained by our steady state stochastic method and by the traditional finite difference method. We also show time dependent solutions for an idealized heliosphere which has a Parker magnetic field, a planar current sheet, and a simple initial condition.
Wellicome, Troy I.; Bayne, Erin M.
2017-01-01
The expansion of humans and their related infrastructure is increasing the likelihood that wildlife will interact with humans. When disturbed by humans, animals often change their behaviour, which can result in time and energetic costs to that animal. An animal's decision to change behaviour is likely related to the type of disturbance, the individual's past experience with disturbance, and the landscape in which the disturbance occurs. In southern Alberta and Saskatchewan, we quantified probability of flight initiation from the nest by Ferruginous Hawks (Buteo regalis) during approaches to nests by investigators. We tested if probability of flight was related to different disturbance types, previous experience, and the anthropogenic landscape in which individual Ferruginous Hawks nested. Probability of flight was related to the type of approach by the investigator, the number of previous visits by investigators, and the vehicular traffic around the nest. Approaches by humans on foot resulted in a greater probability of flight than those in a vehicle. Approaches in a vehicle via low traffic volume access roads were related to increased probability of flight relative to other road types. The number of previous investigator approaches to the nest increased the probability of flight. Overall, we found support that Ferruginous Hawks show habituation to vehicles and the positive reinforcement hypotheses as probability of flight was negatively related to an index of traffic activity near the nest. Our work emphasizes that complex, dynamic processes drive the decision to initiate flight from the nest, and contributes to the growing body of work explaining how responses to humans vary within species. PMID:28542334
Nordell, Cameron J; Wellicome, Troy I; Bayne, Erin M
2017-01-01
The expansion of humans and their related infrastructure is increasing the likelihood that wildlife will interact with humans. When disturbed by humans, animals often change their behaviour, which can result in time and energetic costs to that animal. An animal's decision to change behaviour is likely related to the type of disturbance, the individual's past experience with disturbance, and the landscape in which the disturbance occurs. In southern Alberta and Saskatchewan, we quantified probability of flight initiation from the nest by Ferruginous Hawks (Buteo regalis) during approaches to nests by investigators. We tested if probability of flight was related to different disturbance types, previous experience, and the anthropogenic landscape in which individual Ferruginous Hawks nested. Probability of flight was related to the type of approach by the investigator, the number of previous visits by investigators, and the vehicular traffic around the nest. Approaches by humans on foot resulted in a greater probability of flight than those in a vehicle. Approaches in a vehicle via low traffic volume access roads were related to increased probability of flight relative to other road types. The number of previous investigator approaches to the nest increased the probability of flight. Overall, we found support that Ferruginous Hawks show habituation to vehicles and the positive reinforcement hypotheses as probability of flight was negatively related to an index of traffic activity near the nest. Our work emphasizes that complex, dynamic processes drive the decision to initiate flight from the nest, and contributes to the growing body of work explaining how responses to humans vary within species.
Energetics and Dynamics of GaAs Epitaxial Growth via Quantum Wave Packet Studies
NASA Technical Reports Server (NTRS)
Dzegilenko, Fedor N.; Saini, Subhash (Technical Monitor)
1998-01-01
The dynamics of As(sub 2) molecule incorporation into the flat Ga-terminated GaAs(100) surface is studied computationally. The time-dependent Schrodinger equation is solved on a two-dimensional potential energy surface obtained using density functional theory calculations. The probabilities of trapping and subsequent dissociation of the molecular As(sub 2) bond are calculated as a function of beam translational energy and vibrational quantum number of As(sub 2).
Trisomy 13 (Patau syndrome) with an 11-year survival.
Zoll, B; Wolf, J; Lensing-Hebben, D; Pruggmayer, M; Thorpe, B
1993-01-01
Trisomy 13 is very rare in live-born children. Only a small number of these children survive the first year and very few cases are reported to live longer. Survival time depends partly on the cytogenetic findings--full trisomy 13 or trisomy 13 mosaicism--and partly on the existence of serious somatic malformations. We report on a 11-year-old girl with full trisomy 13. In this case, missing cerebral and cardiovascular malformations probably allowed the long survival.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vongehr, Sascha, E-mail: vongehr@usc.edu
There are increasingly suggestions for computer simulations of quantum statistics which try to violate Bell type inequalities via classical, common cause correlations. The Clauser–Horne–Shimony–Holt (CHSH) inequality is very robust. However, we argue that with the Einstein–Podolsky–Rosen setup, the CHSH is inferior to the Bell inequality, although and because the latter must assume anti-correlation of entangled photon singlet states. We simulate how often quantum behavior violates both inequalities, depending on the number of photons. Violating Bell 99% of the time is argued to be an ideal benchmark. We present hidden variables that violate the Bell and CHSH inequalities with 50% probability,more » and ones which violate Bell 85% of the time when missing 13% anti-correlation. We discuss how to present the quantum correlations to a wide audience and conclude that, when defending against claims of hidden classicality, one should demand numerical simulations and insist on anti-correlation and the full amount of Bell violation. -- Highlights: •The widely assumed superiority of the CHSH fails in the EPR problem. •We simulate Bell type inequalities behavior depending on the number of photons. •The core of Bell’s theorem in the EPR setup is introduced in a simple way understandable to a wide audience. •We present hidden variables that violate both inequalities with 50% probability. •Algorithms have been supplied in form of Mathematica programs.« less
Probability of loss of assured safety in systems with multiple time-dependent failure modes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helton, Jon Craig; Pilch, Martin.; Sallaberry, Cedric Jean-Marie.
2012-09-01
Weak link (WL)/strong link (SL) systems are important parts of the overall operational design of high-consequence systems. In such designs, the SL system is very robust and is intended to permit operation of the entire system under, and only under, intended conditions. In contrast, the WL system is intended to fail in a predictable and irreversible manner under accident conditions and render the entire system inoperable before an accidental operation of the SL system. The likelihood that the WL system will fail to deactivate the entire system before the SL system fails (i.e., degrades into a configuration that could allowmore » an accidental operation of the entire system) is referred to as probability of loss of assured safety (PLOAS). Representations for PLOAS for situations in which both link physical properties and link failure properties are time-dependent are derived and numerically evaluated for a variety of WL/SL configurations, including PLOAS defined by (i) failure of all SLs before failure of any WL, (ii) failure of any SL before failure of any WL, (iii) failure of all SLs before failure of all WLs, and (iv) failure of any SL before failure of all WLs. The effects of aleatory uncertainty and epistemic uncertainty in the definition and numerical evaluation of PLOAS are considered.« less
Pierce, Jordan E; McDowell, Jennifer E
2016-02-01
Cognitive control supports flexible behavior adapted to meet current goals and can be modeled through investigation of saccade tasks with varying cognitive demands. Basic prosaccades (rapid glances toward a newly appearing stimulus) are supported by neural circuitry, including occipital and posterior parietal cortex, frontal and supplementary eye fields, and basal ganglia. These trials can be contrasted with complex antisaccades (glances toward the mirror image location of a stimulus), which are characterized by greater functional magnetic resonance imaging (MRI) blood oxygenation level-dependent (BOLD) signal in the aforementioned regions and recruitment of additional regions such as dorsolateral prefrontal cortex. The current study manipulated the cognitive demands of these saccade tasks by presenting three rapid event-related runs of mixed saccades with a varying probability of antisaccade vs. prosaccade trials (25, 50, or 75%). Behavioral results showed an effect of trial-type probability on reaction time, with slower responses in runs with a high antisaccade probability. Imaging results exhibited an effect of probability in bilateral pre- and postcentral gyrus, bilateral superior temporal gyrus, and medial frontal gyrus. Additionally, the interaction between saccade trial type and probability revealed a strong probability effect for prosaccade trials, showing a linear increase in activation parallel to antisaccade probability in bilateral temporal/occipital, posterior parietal, medial frontal, and lateral prefrontal cortex. In contrast, antisaccade trials showed elevated activation across all runs. Overall, this study demonstrated that improbable performance of a typically simple prosaccade task led to augmented BOLD signal to support changing cognitive control demands, resulting in activation levels similar to the more complex antisaccade task. Copyright © 2016 the American Physiological Society.
Rare targets are less susceptible to attention capture once detection has begun.
Hon, Nicholas; Ng, Gavin; Chan, Gerald
2016-04-01
Rare or low probability targets are detected more slowly and/ or less accurately than higher probability counterparts. Various proposals have implicated perceptual and response-based processes in this deficit. Recent evidence, however, suggests that it is attentional in nature, with low probability targets requiring more attentional resources than high probability ones to detect. This difference in attentional requirements, in turn, suggests the possibility that low and high probability targets may have different susceptibilities to attention capture, which is also known to be resource-dependent. Supporting this hypothesis, we found that, once attentional resources have begun to be engaged by detection processes, low, but not high, probability targets have a reduced susceptibility to capture. Our findings speak to several issues. First, they indicate that the likelihood of attention capture occurring when a given task-relevant stimulus is being processed is dependent, to some extent, on how said stimulus is represented within mental task sets. Second, they provide added support for the idea that the behavioural deficit associated with low probability targets is attention-based. Finally, the current data point to reduced top-down biasing of target templates as a likely mechanism underlying the attentional locus of the deficit in question.
Singular solution of the Feller diffusion equation via a spectral decomposition.
Gan, Xinjun; Waxman, David
2015-01-01
Feller studied a branching process and found that the distribution for this process approximately obeys a diffusion equation [W. Feller, in Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability (University of California Press, Berkeley and Los Angeles, 1951), pp. 227-246]. This diffusion equation and its generalizations play an important role in many scientific problems, including, physics, biology, finance, and probability theory. We work under the assumption that the fundamental solution represents a probability density and should account for all of the probability in the problem. Thus, under the circumstances where the random process can be irreversibly absorbed at the boundary, this should lead to the presence of a Dirac delta function in the fundamental solution at the boundary. However, such a feature is not present in the standard approach (Laplace transformation). Here we require that the total integrated probability is conserved. This yields a fundamental solution which, when appropriate, contains a term proportional to a Dirac delta function at the boundary. We determine the fundamental solution directly from the diffusion equation via spectral decomposition. We obtain exact expressions for the eigenfunctions, and when the fundamental solution contains a Dirac delta function at the boundary, every eigenfunction of the forward diffusion operator contains a delta function. We show how these combine to produce a weight of the delta function at the boundary which ensures the total integrated probability is conserved. The solution we present covers cases where parameters are time dependent, thereby greatly extending its applicability.
Wein, Lawrence M.; Baveja, Manas
2005-01-01
Motivated by the difficulty of biometric systems to correctly match fingerprints with poor image quality, we formulate and solve a game-theoretic formulation of the identification problem in two settings: U.S. visa applicants are checked against a list of visa holders to detect visa fraud, and visitors entering the U.S. are checked against a watchlist of criminals and suspected terrorists. For three types of biometric strategies, we solve the game in which the U.S. Government chooses the strategy's optimal parameter values to maximize the detection probability subject to a constraint on the mean biometric processing time per legal visitor, and then the terrorist chooses the image quality to minimize the detection probability. At current inspector staffing levels at ports of entry, our model predicts that a quality-dependent two-finger strategy achieves a detection probability of 0.733, compared to 0.526 under the quality-independent two-finger strategy that is currently implemented at the U.S. border. Increasing the staffing level of inspectors offers only minor increases in the detection probability for these two strategies. Using more than two fingers to match visitors with poor image quality allows a detection probability of 0.949 under current staffing levels, but may require major changes to the current U.S. biometric program. The detection probabilities during visa application are ≈11–22% smaller than at ports of entry for all three strategies, but the same qualitative conclusions hold. PMID:15894628