Spatial scaling patterns and functional redundancies in a changing boreal lake landscape
Angeler, David G.; Allen, Craig R.; Uden, Daniel R.; Johnson, Richard K.
2015-01-01
Global transformations extend beyond local habitats; therefore, larger-scale approaches are needed to assess community-level responses and resilience to unfolding environmental changes. Using long-term data (1996–2011), we evaluated spatial patterns and functional redundancies in the littoral invertebrate communities of 85 Swedish lakes, with the objective of assessing their potential resilience to environmental change at regional scales (that is, spatial resilience). Multivariate spatial modeling was used to differentiate groups of invertebrate species exhibiting spatial patterns in composition and abundance (that is, deterministic species) from those lacking spatial patterns (that is, stochastic species). We then determined the functional feeding attributes of the deterministic and stochastic invertebrate species, to infer resilience. Between one and three distinct spatial patterns in invertebrate composition and abundance were identified in approximately one-third of the species; the remainder were stochastic. We observed substantial differences in metrics between deterministic and stochastic species. Functional richness and diversity decreased over time in the deterministic group, suggesting a loss of resilience in regional invertebrate communities. However, taxon richness and redundancy increased monotonically in the stochastic group, indicating the capacity of regional invertebrate communities to adapt to change. Our results suggest that a refined picture of spatial resilience emerges if patterns of both the deterministic and stochastic species are accounted for. Spatially extensive monitoring may help increase our mechanistic understanding of community-level responses and resilience to regional environmental change, insights that are critical for developing management and conservation agendas in this current period of rapid environmental transformation.
Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis
2018-01-01
Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) were initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but the early stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. In conclusion, stochastic and individualistic species responses early in succession give way to more niche-driven dynamics in later successional stages. Grazing reduces predictability in both successional trends and species-level dynamics, especially in plant functional groups that are not well adapted to disturbance. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
The objective of this report is to summarize the deterministic and probabilistic structural evaluation results of two structures made with advanced ceramic composites (CMC): an internally pressurized tube and a uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures that are made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on the material property and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) predicted the failure load and the buckling load, (2) performed coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstrated that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.
Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System
ERIC Educational Resources Information Center
Maiti, Alakes; Samanta, G. P.
2005-01-01
This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…
Benedetti-Cecchi, Lisandro; Canepa, Antonio; Fuentes, Veronica; Tamburello, Laura; Purcell, Jennifer E; Piraino, Stefano; Roberts, Jason; Boero, Ferdinando; Halpin, Patrick
2015-01-01
Jellyfish outbreaks are increasingly viewed as a deterministic response to escalating levels of environmental degradation and climate extremes. However, a comprehensive understanding of the influence of deterministic drivers and stochastic environmental variations favouring population renewal processes has remained elusive. This study quantifies the deterministic and stochastic components of environmental change that lead to outbreaks of the jellyfish Pelagia noctiluca in the Mediterranean Sea. Using data on jellyfish abundance collected at 241 sites along the Catalan coast from 2007 to 2010, we: (1) tested hypotheses about the influence of time-varying and spatial predictors of jellyfish outbreaks; (2) evaluated the relative importance of stochastic vs. deterministic forcing of outbreaks through the environmental bootstrap method; and (3) quantified return times of extreme events. Outbreaks were common in May and June and less likely in other summer months, which resulted in a negative relationship between outbreaks and SST. Cross- and along-shore advection by geostrophic flow were important concentrating forces of jellyfish, but most outbreaks occurred in the proximity of two canyons in the northern part of the study area. This result supported the recent hypothesis that canyons can funnel P. noctiluca blooms towards shore during upwelling. This can be a general, yet unappreciated, mechanism leading to outbreaks of holoplanktonic jellyfish species. The environmental bootstrap indicated that stochastic environmental fluctuations have negligible effects on return times of outbreaks. Our analysis emphasized the importance of deterministic processes leading to jellyfish outbreaks compared to the stochastic component of environmental variation. A better understanding of how environmental drivers affect demographic and population processes in jellyfish species will increase the ability to anticipate jellyfish outbreaks in the future.
Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones
NASA Astrophysics Data System (ADS)
Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto
2015-04-01
Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code development, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main positive aspect of this approach stems from the fact that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at the site, such as site effects and source characteristics like strong-motion duration and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation, and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for a magnitude less than 6. We focus on sites prone to liquefaction because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately, a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billion euros, shows that the geological and geophysical investigations necessary for a reliable deterministic hazard evaluation are largely justified.
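For orientation, the Poissonian occurrence assumption named above implies the standard PSHA exceedance relation; the form below is general background rather than anything specific to this study:

```latex
% Standard Poissonian hazard relation assumed in PSHA (general background):
% \lambda(a) is the mean annual rate of ground motions exceeding level a,
% t is the exposure time in years.
P(\text{at least one exceedance in } t) = 1 - e^{-\lambda(a)\,t}
```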
Probabilistic dose-response modeling: case study using dichloromethane PBPK model results.
Marino, Dale J; Starr, Thomas B
2007-12-01
A revised assessment of dichloromethane (DCM) has recently been reported that examines the influence of human genetic polymorphisms on cancer risks using deterministic PBPK and dose-response modeling in the mouse combined with probabilistic PBPK modeling in humans. This assessment utilized Bayesian techniques to optimize kinetic variables in mice and humans, with mean values from posterior distributions used in the deterministic modeling in the mouse. To supplement this research, a case study was undertaken to examine the potential impact of probabilistic rather than deterministic PBPK and dose-response modeling in mice on subsequent unit risk factor (URF) determinations. Four separate PBPK cases were examined based on the exposure regimen of the NTP DCM bioassay. These were (a) Same Mouse (single draw of all PBPK inputs for both treatment groups); (b) Correlated BW-Same Inputs (single draw of all PBPK inputs for both treatment groups except for bodyweights (BWs), which were entered as correlated variables); (c) Correlated BW-Different Inputs (separate draws of all PBPK inputs for both treatment groups except that BWs were entered as correlated variables); and (d) Different Mouse (separate draws of all PBPK inputs for both treatment groups). Monte Carlo PBPK inputs reflect posterior distributions from Bayesian calibration in the mouse that had been previously reported. A minimum of 12,500 PBPK iterations were undertaken, in which dose metrics, i.e., mg DCM metabolized by the GST pathway/L tissue/day for lung and liver, were determined. For dose-response modeling, these metrics were combined with NTP tumor incidence data that were randomly selected from binomial distributions. Resultant potency factors (0.1/ED(10)) were coupled with probabilistic PBPK modeling in humans that incorporated genetic polymorphisms to derive URFs. Results show that there was relatively little difference, i.e., <10%, in central tendency and upper percentile URFs, regardless of the case evaluated. Independent draws of PBPK inputs resulted in slightly higher URFs. Results were also comparable to corresponding values from the previously reported deterministic mouse PBPK and dose-response modeling approach that used LED(10)s to derive potency factors. This finding indicated that the adjustment from ED(10) to LED(10) in the deterministic approach for DCM compensated for variability resulting from probabilistic PBPK and dose-response modeling in the mouse. Finally, results show a similar degree of variability in DCM risk estimates from a number of different sources, including the current effort, even though these estimates were developed using very different techniques. Given the variety of different approaches involved, 95th percentile-to-mean risk estimate ratios of 2.1-4.1 represent reasonable bounds on variability estimates regarding probabilistic assessments of DCM.
NASA Astrophysics Data System (ADS)
Turnbull, Heather; Omenzetter, Piotr
2018-03-01
Difficulties associated with current health monitoring and inspection practices, combined with the harsh, often remote, operational environments of wind turbines, highlight the requirement for a non-destructive evaluation system capable of remotely monitoring the current structural state of turbine blades. This research adopted a physics-based structural health monitoring methodology through calibration of a finite element model using inverse techniques. A 2.36 m blade from a 5 kW turbine was used as an experimental specimen, with operational modal analysis techniques utilised to obtain the modal properties of the system. Modelling the experimental responses as fuzzy numbers using the sub-level technique, uncertainty in the response parameters was propagated back through the model and into the updating parameters. Initially, experimental responses of the blade were obtained, with a numerical model of the blade created and updated. Deterministic updating was carried out through formulation and minimisation of a deterministic objective function using both the firefly algorithm and the virus optimisation algorithm. Uncertainty in the experimental responses was modelled using triangular membership functions, allowing membership functions of the updating parameters (Young's modulus and shear modulus) to be obtained. The firefly algorithm and virus optimisation algorithm were again utilised, however, this time in the solution of fuzzy objective functions. This enabled uncertainty associated with the updating parameters to be quantified. Varying damage location and severity were simulated experimentally through the addition of small masses to the structure intended to cause a structural alteration. A damaged model was created, modelling four variable-magnitude nonstructural masses at predefined points, and updated to provide a deterministic damage prediction and information on the parameters' uncertainty via fuzzy updating.
NASA Astrophysics Data System (ADS)
Guymon, Gary L.; Yen, Chung-Cheng
1990-07-01
The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water table temporal position in most of the basin are small, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
Field evaluation of an avian risk assessment model
Vyas, N.B.; Spann, J.W.; Hulse, C.S.; Borges, S.L.; Bennett, R.S.; Torrez, M.; Williams, B.I.; Leffel, R.
2006-01-01
We conducted two laboratory subacute dietary toxicity tests and one outdoor subacute dietary toxicity test to determine the effectiveness of the U.S. Environmental Protection Agency's deterministic risk assessment model for evaluating the potential of adverse effects to birds in the field. We tested technical-grade diazinon and its DZN 50W (50% diazinon active ingredient, wettable powder) formulation on Canada goose (Branta canadensis) goslings. Brain acetylcholinesterase activity was measured, and the feathers and skin, feet, and gastrointestinal contents were analyzed for diazinon residues. The dose-response curves showed that diazinon was significantly more toxic to goslings in the outdoor test than in the laboratory tests. The deterministic risk assessment method identified the potential for risk to birds in general, but the factors associated with extrapolating from the laboratory to the field, and from the laboratory test species to other species, resulted in the underestimation of risk to the goslings. The present study indicates that laboratory-based risk quotients should be interpreted with caution.
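For context, the deterministic screening step referred to here is conventionally expressed as a dietary risk quotient; the sketch below is a generic, hedged illustration with placeholder numbers, not data or code from the study.

```python
# Generic deterministic risk-quotient screen (placeholder values; not results
# from the gosling study or an official regulatory implementation).
def risk_quotient(estimated_dietary_conc_ppm: float, dietary_lc50_ppm: float) -> float:
    """Risk quotient = estimated environmental concentration / toxicity endpoint."""
    return estimated_dietary_conc_ppm / dietary_lc50_ppm

rq = risk_quotient(estimated_dietary_conc_ppm=120.0, dietary_lc50_ppm=300.0)
print(f"RQ = {rq:.2f}; risk is flagged when RQ exceeds the applicable level of concern")
```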
NASA Astrophysics Data System (ADS)
Tang, J.; Riley, W. J.
2017-12-01
Most existing soil carbon cycle models have modeled the moisture and temperature dependence of soil respiration using deterministic response functions. However, empirical data suggest abundant variability in both of these dependencies. Here we use the recently developed SUPECA (Synthesizing Unit and Equilibrium Chemistry Approximation) theory and a published dynamic energy budget based microbial model to investigate how soil carbon decomposition responds to changes in soil moisture and temperature under the influence of organo-mineral interactions. We found that both the temperature and moisture responses are hysteretic and cannot be represented by deterministic functions. We then evaluate how multi-scale variability in temperature and moisture forcing affects soil carbon decomposition. Our results indicate that when the model is run in scenarios mimicking laboratory incubation experiments, the often-observed temperature and moisture response functions can be well reproduced. However, when such response functions are used for model extrapolation involving more transient variability in temperature and moisture forcing (as found in real ecosystems), the dynamic model that explicitly accounts for hysteresis in temperature and moisture dependency produces significantly different estimations of soil carbon decomposition, suggesting there are large biases in models that do not resolve such hysteresis. We call for more studies on organo-mineral interactions to improve modeling of such hysteresis.
Effect of Uncertainty on Deterministic Runway Scheduling
NASA Technical Reports Server (NTRS)
Gupta, Gautam; Malik, Waqar; Jung, Yoon C.
2012-01-01
Active runway scheduling involves scheduling departures for takeoffs and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-serve scheme. In particular, the sequence from a deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison is done for both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. The modeling of uncertainty is done in two ways: as equal uncertainty in runway availability for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
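The "freeze the sequence, then re-time it" step described above can be sketched as follows; the ready times and the single separation value are hypothetical, and the actual scheduler in the paper handles richer separation criteria.

```python
# Minimal sketch of freezing a runway sequence and adjusting times to satisfy
# a pairwise separation constraint (hypothetical values, single separation only).
def retime(ready_times_in_sequence, min_separation_s=90.0):
    """Return feasible runway times for aircraft taken in frozen sequence order."""
    times = []
    for ready in ready_times_in_sequence:
        earliest = ready if not times else max(ready, times[-1] + min_separation_s)
        times.append(earliest)
    return times

print(retime([0.0, 30.0, 200.0, 210.0]))  # -> [0.0, 90.0, 200.0, 290.0]
```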
NASA Astrophysics Data System (ADS)
Itoh, Kosuke; Nakada, Tsutomu
2013-04-01
Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
Study on the evaluation method for fault displacement based on characterized source model
NASA Astrophysics Data System (ADS)
Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.
2016-12-01
IAEA Specific Safety Guide (SSG) 9 states that probabilistic methods for evaluating fault displacement should be used if the deterministic methodology does not provide a sufficient basis to decide conclusively that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX on realizing the seismic hazard for nuclear facilities described in SSG-9, which shows the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, important nuclear facilities are required to be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these circumstances, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance the safety of nuclear facilities. We are studying deterministic and probabilistic methods with tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of evaluation methods for fault displacement. We then show parts of the tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method that combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on a characterized source model. This research was part of the 2015 research project `Development of evaluating method for fault displacement` by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.
First-passage problems: A probabilistic dynamic analysis for degraded structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1990-01-01
Structures subjected to random excitations with uncertain system parameters degraded by surrounding environments (a random time history) are studied. Methods are developed to determine the statistics of dynamic responses, such as the time-varying mean, the standard deviation, the autocorrelation functions, and the joint probability density function of any response and its derivative. Moreover, the first-passage problems with deterministic and stationary/evolutionary random barriers are evaluated. The time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.
Kucza, Witold
2013-07-25
Stochastic and deterministic simulations of dispersion in cylindrical channels in Poiseuille flow have been presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization methods, respectively, have been applied for the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses that are available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
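As a rough illustration of the random-walk (stochastic) dispersion model named above, here is a minimal sketch assuming a parabolic Poiseuille velocity profile and Gaussian diffusive steps; all parameter values are arbitrary and this is not the authors' implementation.

```python
import numpy as np

# Random-walk sketch of solute dispersion in Poiseuille flow (arbitrary parameters).
rng = np.random.default_rng(0)
R, u_max, D, dt, n_steps = 2.5e-4, 1.0e-2, 5.0e-10, 1.0e-3, 10_000

r = R * np.sqrt(rng.random(5_000))   # radial positions of tracer particles (m)
x = np.zeros_like(r)                 # axial positions (m)
for _ in range(n_steps):
    u = u_max * (1.0 - (r / R) ** 2)                            # parabolic velocity profile
    x += u * dt + rng.normal(0.0, np.sqrt(2 * D * dt), r.size)  # advection + axial diffusion
    r += rng.normal(0.0, np.sqrt(2 * D * dt), r.size)           # radial diffusion
    r = np.abs(r)                      # reflect at the channel axis
    r = np.where(r > R, 2 * R - r, r)  # reflect at the wall
print(f"mean axial displacement = {x.mean():.3e} m, axial spread = {x.std():.3e} m")
```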
Operating health analysis of electric power systems
NASA Astrophysics Data System (ADS)
Fotuhi-Firuzabad, Mahmud
The required level of operating reserve to be maintained by an electric power system can be determined using both deterministic and probabilistic techniques. Despite the obvious disadvantages of deterministic approaches, there is still considerable reluctance to apply probabilistic techniques due to the difficulty of interpreting a single numerical risk index and the lack of sufficient information provided by a single index. A practical way to overcome these difficulties is to embed deterministic considerations in the probabilistic indices in order to monitor the system well-being. The system well-being can be designated as healthy, marginal and at risk. The concept of system well-being is examined and extended in this thesis to cover the overall area of operating reserve assessment. Operating reserve evaluation involves the two distinctly different aspects of unit commitment and the dispatch of the committed units. Unit commitment health analysis involves the determination of which units should be committed to satisfy the operating criteria. The concepts developed for unit commitment health, margin and risk are extended in this thesis to evaluate the response well-being of a generating system. A procedure is presented to determine the optimum dispatch of the committed units to satisfy the response criteria. The impact on the response well-being of variations in the margin time, required regulating margin and load forecast uncertainty is illustrated. The effects on the response well-being of rapid start units, interruptible loads and postponable outages are also illustrated. System well-being is, in general, greatly improved by interconnection with other power systems. The well-being concepts are extended to evaluate the spinning reserve requirements in interconnected systems. The interconnected system unit commitment problem is decomposed into two subproblems in which unit scheduling is performed in each isolated system followed by interconnected system evaluation. A procedure is illustrated to determine the well-being indices of the overall interconnected system. Under normal operating conditions, the system may also be able to carry a limited amount of interruptible load on top of its firm load without violating the operating criterion. An energy-based approach is presented to determine the optimum interruptible load carrying capability in both isolated and interconnected systems. Composite system spinning reserve assessment and composite system well-being are also examined in this research work. The impacts on the composite well-being of operating reserve considerations such as stand-by units, interruptible loads and the physical locations of these resources are illustrated. It is expected that the well-being framework and the concepts developed in this research work will prove extremely useful in the new competitive utility environment.
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
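The "systematic reintroduction of residuals" can be sketched very simply; bootstrapping the calibration residuals, as below, is one possible choice and an assumption in this sketch rather than necessarily the authors' exact procedure.

```python
import numpy as np

# Turn a deterministic simulation into stochastic output by reintroducing
# calibration residuals (bootstrap resampling is an assumption in this sketch).
def stochastic_output(simulated, residuals, n_realizations=100, seed=0):
    rng = np.random.default_rng(seed)
    draws = rng.choice(residuals, size=(n_realizations, simulated.size), replace=True)
    return simulated + draws  # each row is one stochastic realization

observed  = np.array([10.0, 12.0, 9.0, 15.0, 11.0])
simulated = np.array([ 9.5, 12.5, 8.0, 14.0, 12.0])
realizations = stochastic_output(simulated, observed - simulated)
print(realizations.std(axis=0))  # distributional spread restored to the simulated responses
```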
Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrisson, G.; Marleau, G.
2012-07-01
The Canadian SCWR has the potential to achieve the goals that the generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the most consistent results with those of SERPENT.
NASA Astrophysics Data System (ADS)
Reynders, Edwin P. B.; Langley, Robin S.
2018-08-01
The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.
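For orientation, the single-frequency diffuse-field reciprocity relation that this paper generalizes across frequencies is usually written as below; this is standard hybrid FE-SEA background, not a result of the paper.

```latex
% Single-frequency diffuse-field reciprocity (standard hybrid FE--SEA background):
% S_{ff} -- cross-spectrum of the blocked reverberant force at the boundary of
% diffuse subsystem k; E_k -- its total energy; n_k(\omega) -- its modal density;
% D_{\mathrm{dir}} -- direct-field dynamic stiffness at that boundary.
S_{ff}(\omega) \;=\; \frac{4\,E_k}{\pi\,\omega\,n_k(\omega)}\,
\operatorname{Im}\!\left\{ D_{\mathrm{dir}}(\omega) \right\}
```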
Seed availability constrains plant species sorting along a soil fertility gradient
Bryan L. Foster; Erin J. Questad; Cathy D. Collins; Cheryl A. Murphy; Timothy L. Dickson; Val H. Smith
2011-01-01
1. Spatial variation in species composition within and among communities may be caused by deterministic, niche-based species sorting in response to underlying environmental heterogeneity as well as by stochastic factors such as dispersal limitation and variable species pools. An important goal in ecology is to reconcile deterministic and stochastic perspectives of...
NASA Astrophysics Data System (ADS)
Liu, Xiangdong; Li, Qingze; Pan, Jianxin
2018-06-01
Modern medical studies show that chemotherapy can help most cancer patients, especially those diagnosed early, to stabilize their disease conditions from months to years, which means the population of tumor cells remains nearly unchanged for quite a long time after fighting against the immune system and drugs. In order to better understand the dynamics of tumor-immune responses under chemotherapy, deterministic and stochastic differential equation models are constructed in this paper to characterize the dynamical change of tumor cells and immune cells. The basic dynamical properties, such as boundedness, existence and stability of equilibrium points, are investigated for the deterministic model. The extended stochastic models include a stochastic differential equation (SDE) model and a continuous-time Markov chain (CTMC) model, which account for the variability in cellular reproduction, growth and death, interspecific competition, and immune response to chemotherapy. The CTMC model is harnessed to estimate the extinction probability of tumor cells. Numerical simulations are performed, which confirm the obtained theoretical results.
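To make the deterministic-versus-stochastic contrast concrete, here is a generic single-population sketch (logistic growth with an Euler-Maruyama stochastic counterpart); this is not the tumor-immune model of the paper, and all parameter values are arbitrary.

```python
import numpy as np

# Generic illustration: deterministic logistic growth vs. an Euler-Maruyama SDE path.
r, K, sigma, dt, T = 0.3, 1e6, 0.1, 0.01, 50.0
rng = np.random.default_rng(1)

x_det = x_sde = 1e3  # initial cell count
for _ in range(int(T / dt)):
    x_det += r * x_det * (1 - x_det / K) * dt
    x_sde += (r * x_sde * (1 - x_sde / K) * dt
              + sigma * x_sde * np.sqrt(dt) * rng.standard_normal())
    x_sde = max(x_sde, 0.0)  # a population cannot be negative

print(f"deterministic: {x_det:.0f} cells, one stochastic path: {x_sde:.0f} cells")
```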
Scott, B R; Lyzlov, A F; Osovets, S V
1998-05-01
During a Phase-I effort, studies were planned to evaluate deterministic (nonstochastic) effects of chronic exposure of nuclear workers at the Mayak atomic complex in the former Soviet Union to relatively high levels (> 0.25 Gy) of ionizing radiation. The Mayak complex has been used, since the late 1940's, to produce plutonium for nuclear weapons. Workers at Site A of the complex were involved in plutonium breeding using nuclear reactors, and some were exposed to relatively large doses of gamma rays plus relatively small neutron doses. The Weibull normalized-dose model, which has been set up to evaluate the risk of specific deterministic effects of combined, continuous exposure of humans to alpha, beta, and gamma radiations, is here adapted for chronic exposure to gamma rays and neutrons during repeated 6-h work shifts--as occurred for some nuclear workers at Site A. Using the adapted model, key conclusions were reached that will facilitate a Phase-II study of deterministic effects among Mayak workers. These conclusions include the following: (1) neutron doses may be more important for Mayak workers than for Japanese A-bomb victims in Hiroshima and can be accounted for using an adjusted dose (which accounts for neutron relative biological effectiveness); (2) to account for dose-rate effects, normalized dose X (a dimensionless fraction of an LD50 or ED50) can be evaluated in terms of an adjusted dose; (3) nonlinear dose-response curves for the risk of death via the hematopoietic mode can be converted to linear dose-response curves (for low levels of risk) using a newly proposed dimensionless dose, D = X^V, in units of Oklad (where D is pronounced "deh", and V is the shape parameter in the Weibull model); (4) for X <= X0, where X0 is the threshold normalized dose, D = 0; (5) unlike absorbed dose, the dose D can be averaged over different Mayak workers in order to calculate the average risk of death via the hematopoietic mode for the population exposed at Site A; and (6) the expected cases of death via the hematopoietic syndrome mode for Mayak workers chronically exposed during work shifts at Site A to gamma rays and neutrons can be predicted using ln(2)·B·M[D], where B (pronounced "beh") is the number of workers at risk (criticality accident victims excluded) and M[D] is the average (mean) value of D (averaged over the worker population at risk, for Site A, for the time period considered). These results can be used to facilitate a Phase II study of deterministic radiation effects among Mayak workers chronically exposed to gamma rays and neutrons.
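A hedged reconstruction of the quantities in conclusions (3)-(6), assuming the standard hazard-function parameterization of the Weibull risk model (the exact form used in the Phase-I report may differ):

```latex
% Assumed Weibull hazard-function form (reconstruction, not quoted from the report):
R = 1 - \exp\!\left(-\ln 2\, X^{V}\right), \qquad
D = X^{V}\ (\text{Oklad}) \;\Rightarrow\;
R = 1 - e^{-\ln 2\, D} \approx \ln 2\, D \quad (\text{small } R)
```

so that the expected number of deaths among B workers at risk is approximately ln(2)·B·M[D], consistent with conclusion (6).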
Mass sensing based on deterministic and stochastic responses of elastically coupled nanocantilevers.
Gil-Santos, Eduardo; Ramos, Daniel; Jana, Anirban; Calleja, Montserrat; Raman, Arvind; Tamayo, Javier
2009-12-01
Coupled nanomechanical systems and their entangled eigenstates offer unique opportunities for the detection of ultrasmall masses. In this paper we show theoretically and experimentally that the stochastic and deterministic responses of a pair of coupled nanocantilevers provide different and complementary information about the added mass of an analyte and its location. This method allows the sensitive detection of minute quantities of mass even in the presence of large initial differences in the active masses of the two cantilevers. Finally, we show the fundamental limits in mass detection of this sensing paradigm.
Ordinal optimization and its application to complex deterministic problems
NASA Astrophysics Data System (ADS)
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective to approach a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve high levels of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model into a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization. Ordinal Optimization utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process for the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in the computing cost.
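The goal-softening and ordinal-comparison idea can be shown with a small numerical sketch: a cheap, noisy evaluator is used only to rank designs, the observed top-s set is kept, and its overlap with the true top-g set is checked. This is a generic illustration, not the turbine-blade application.

```python
import numpy as np

# Generic ordinal-optimization sketch: noisy ranking plus goal softening.
rng = np.random.default_rng(2)
n_designs, g, s, noise = 1000, 20, 50, 1.0

true_perf = rng.standard_normal(n_designs)                       # unknown true performance
noisy_est = true_perf + noise * rng.standard_normal(n_designs)   # cheap noisy evaluation

true_top = set(np.argsort(true_perf)[-g:])   # truly good designs (unknown in practice)
selected = set(np.argsort(noisy_est)[-s:])   # softened goal: keep the observed top s

print(f"{len(true_top & selected)} of the true top {g} designs "
      f"fall in the selected set of {s}")
```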
Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; ...
2015-03-17
Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about mechanisms shifting their relative importance. To better understand underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.
Comparison of probabilistic and deterministic fiber tracking of cranial nerves.
Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H
2017-09-01
OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, a comparison is provided with this work between the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects, which were obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p < 0.001; Wilcoxon signed-rank test). The false-positive error of the last obtained depiction was also significantly lower in probabilistic than in deterministic tracking (p < 0.001). The HCP data yielded significantly better results in terms of the Dice coefficient in probabilistic tracking (p < 0.001, Mann-Whitney U-test) and in deterministic tracking (p = 0.02). The false-positive errors were smaller in HCP data in deterministic tracking (p < 0.001) and showed a strong trend toward significance in probabilistic tracking (p = 0.06). In the clinical cases, the probabilistic method visualized 7 of 10 attempted CNs accurately, compared with 3 correct depictions with deterministic tracking. CONCLUSIONS High angular resolution DTI scans are preferable for the DTI-based depiction of the cranial nerves. Probabilistic tracking with a gradual PICo threshold increase is more effective for this task than the previously described deterministic tracking with a gradual FA threshold increase and might represent a method that is useful for depicting cranial nerves with DTI since it eliminates the erroneous fibers without manual intervention.
An alternative approach to measure similarity between two deterministic transient signals
NASA Astrophysics Data System (ADS)
Shin, Kihong
2016-06-01
In many practical engineering applications, it is often required to measure the similarity of two signals to gain insight into the conditions of a system. For example, an application that monitors machinery can regularly measure the signal of the vibration and compare it to a healthy reference signal in order to monitor whether or not any fault symptom is developing. Also in modal analysis, a frequency response function (FRF) from a finite element model (FEM) is often compared with an FRF from experimental modal analysis. Many different similarity measures are applicable in such cases, and correlation-based similarity measures may be most frequently used among these such as in the case where the correlation coefficient in the time domain and the frequency response assurance criterion (FRAC) in the frequency domain are used. Although correlation-based similarity measures may be particularly useful for random signals because they are based on probability and statistics, we frequently deal with signals that are largely deterministic and transient. Thus, it may be useful to develop another similarity measure that takes the characteristics of the deterministic transient signal properly into account. In this paper, an alternative approach to measure the similarity between two deterministic transient signals is proposed. This newly proposed similarity measure is based on the fictitious system frequency response function, and it consists of the magnitude similarity and the shape similarity. Finally, a few examples are presented to demonstrate the use of the proposed similarity measure.
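As background for the correlation-based measures the abstract compares against, the frequency response assurance criterion (FRAC) can be computed as below; this is the standard definition, not the paper's proposed fictitious-FRF measure.

```python
import numpy as np

# Frequency Response Assurance Criterion between two FRFs sampled on the same grid.
def frac(h1: np.ndarray, h2: np.ndarray) -> float:
    num = abs(np.vdot(h1, h2)) ** 2                      # |H1^H H2|^2
    den = np.vdot(h1, h1).real * np.vdot(h2, h2).real    # (H1^H H1)(H2^H H2)
    return float(num / den)

f = np.linspace(0.0, 100.0, 512)
h_ref  = 1.0 / (100.0 - (f ** 2) / 50.0 + 1j * f)   # illustrative single-mode FRFs
h_test = 1.0 / (105.0 - (f ** 2) / 50.0 + 1j * f)
print(f"FRAC = {frac(h_ref, h_test):.3f}")          # 1.0 would mean identical shape
```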
NASA Technical Reports Server (NTRS)
Bogdanoff, J. L.; Kayser, K.; Krieger, W.
1977-01-01
The paper describes convergence and response studies in the low-frequency range of complex systems, particularly with low values of damping of different distributions, and reports on the modification of the relaxation procedure required under these conditions. A new method is presented for response estimation in complex lumped-parameter linear systems under random or deterministic steady-state excitation. The essence of the method is the use of relaxation procedures with a suitable error function to find the estimated response; natural frequencies and normal modes are not computed. For a 45-degree-of-freedom system and two relaxation procedures, convergence studies and frequency response estimates were performed. The low-frequency studies are considered in the framework of earlier studies (Kayser and Bogdanoff, 1975) involving the mid to high frequency range.
Validation of a Deterministic Vibroacoustic Response Prediction Model
NASA Technical Reports Server (NTRS)
Caimi, Raoul E.; Margasahayam, Ravi
1997-01-01
This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Statistical Energy Analysis (SEA) methods are not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.
Nonlinear response of the immune system to power-frequency magnetic fields.
Marino, A A; Wolcott, R M; Chervenak, R; Jourd'Heuil, F; Nilsen, E; Frilot, C
2000-09-01
Studies of the effects of power-frequency electromagnetic fields (EMFs) on the immune and other body systems produced positive and negative results, and this pattern was usually interpreted to indicate the absence of real effects. However, if the biological effects of EMFs were governed by nonlinear laws, deterministic responses to fields could occur that were both real and inconsistent, thereby leading to both types of results. The hypothesis of real inconsistent effects due to EMFs was tested by exposing mice to 1 G, 60 Hz for 1-105 days and observing the effect on 20 immune parameters, using flow cytometry and functional assays. The data were evaluated by means of a novel statistical procedure that avoided averaging away oppositely directed changes in different animals, which we perceived to be the problem in some of the earlier EMF studies. The reliability of the procedure was shown using appropriate controls. In three independent experiments involving exposure for 21 or more days, the field altered lymphoid phenotype even though the changes in individual immune parameters were inconsistent. When the data were evaluated using traditional linear statistical methods, no significant difference in any immune parameter was found. We were able to mimic the results by sampling from known chaotic systems, suggesting that deterministic chaos could explain the effect of fields on the immune system. We conclude that exposure to power-frequency fields produced changes in the immune system that were both real and inconsistent.
Guidelines 13 and 14—Prediction uncertainty
Hill, Mary C.; Tiedeman, Claire
2005-01-01
An advantage of using optimization for model development and calibration is that optimization provides methods for evaluating and quantifying prediction uncertainty. Both deterministic and statistical methods can be used. Guideline 13 discusses using regression and post-audits, which we classify as deterministic methods. Guideline 14 discusses inferential statistics and Monte Carlo methods, which we classify as statistical methods.
Coupled Multi-Disciplinary Optimization for Structural Reliability and Affordability
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
A computational simulation method is presented for Non-Deterministic Multidisciplinary Optimization of engine composite materials and structures. A hypothetical engine duct made with ceramic matrix composites (CMC) is evaluated probabilistically in the presence of combined thermo-mechanical loading. The structure is tailored by quantifying the uncertainties in all relevant design variables such as fabrication, material, and loading parameters. The probabilistic sensitivities are used to select critical design variables for optimization. In this paper, two approaches for non-deterministic optimization are presented. The non-deterministic minimization of combined failure stress criterion is carried out by: (1) performing probabilistic evaluation first and then optimization and (2) performing optimization first and then probabilistic evaluation. The first approach shows that the optimization feasible region can be bounded by a set of prescribed probability limits and that the optimization follows the cumulative distribution function between those limits. The second approach shows that the optimization feasible region is bounded by 0.50 and 0.999 probabilities.
NASA Astrophysics Data System (ADS)
Roberts, Sean; Eykholt, R.; Thaut, Michael H.
2000-08-01
We investigate rhythmic finger tapping in both the presence and the absence of a metronome. We examine both the time intervals between taps and the time lags between the stimulus tones from the metronome and the response taps by the subject. We analyze the correlations in these data sets, and we search for evidence of deterministic chaos, as opposed to randomness, in the fluctuations.
Schrag, Yann; Tremea, Alessandro; Lagger, Cyril; Ohana, Noé; Mohr, Christine
2016-01-01
Studies indicated that people behave less responsibly after exposure to information containing deterministic statements as compared to free will statements or neutral statements. Thus, deterministic primes should lead to enhanced risk-taking behavior. We tested this prediction in two studies with healthy participants. In experiment 1, we tested 144 students (24 men) in the laboratory using the Iowa Gambling Task. In experiment 2, we tested 274 participants (104 men) online using the Balloon Analogue Risk Task. In the Iowa Gambling Task, the free will priming condition resulted in more risky decisions than both the deterministic and neutral priming conditions. We observed no priming effects on risk-taking behavior in the Balloon Analogue Risk Task. To explain these unpredicted findings, we consider the somatic marker hypothesis, a gain frequency approach as well as attention to gains and / or inattention to losses. In addition, we highlight the necessity to consider both pro free will and deterministic priming conditions in future studies. Importantly, our and previous results indicate that the effects of pro free will and deterministic priming do not oppose each other on a frequently assumed continuum. PMID:27018854
NASA Astrophysics Data System (ADS)
Degtyar, V. G.; Kalashnikov, S. T.; Mokin, Yu. A.
2017-10-01
The paper considers problems of analyzing the aerodynamic properties (ADP) of reentry vehicles (RV) treated as blunted rotary bodies with small random surface distortions. The interrelated tasks of mathematically simulating surface distortions, selecting tools for predicting the ADPs of shaped bodies, evaluating different types of ADP variations, and adapting them for dynamic problems are analyzed. The possibilities of deterministic and probabilistic approaches to evaluating ADP variations are considered, and the practical value of the probabilistic approach is demonstrated. Examples of extremal deterministic evaluations of ADP variations for a sphere and a sharp cone are given.
Cognitive diagnosis modelling incorporating item response times.
Zhan, Peida; Jiao, Hong; Liao, Dandan
2018-05-01
To provide more refined diagnostic feedback with collateral information in item response times (RTs), this study proposed joint modelling of attributes and response speed using item responses and RTs simultaneously for cognitive diagnosis. For illustration, an extended deterministic input, noisy 'and' gate (DINA) model was proposed for joint modelling of responses and RTs. Model parameter estimation was explored using the Bayesian Markov chain Monte Carlo (MCMC) method. The PISA 2012 computer-based mathematics data were analysed first. These real data estimates were treated as true values in a subsequent simulation study. A follow-up simulation study with ideal testing conditions was conducted as well to further evaluate model parameter recovery. The results indicated that model parameters could be well recovered using the MCMC approach. Further, incorporating RTs into the DINA model would improve attribute and profile correct classification rates and result in more accurate and precise estimation of the model parameters. © 2017 The British Psychological Society.
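To make the item-response side of such a model concrete, the following minimal sketch evaluates the standard DINA correct-response probability for one examinee and one item; the slip and guessing values are illustrative assumptions, and the response-time component and MCMC estimation of the paper are not reproduced here.

```python
import numpy as np

def dina_prob(alpha, q, slip, guess):
    """
    P(correct) under the DINA model for one examinee and one item.
    alpha: examinee attribute-mastery vector (0/1)
    q:     item Q-matrix row (0/1), the attributes the item requires
    slip, guess: item slip and guessing parameters
    """
    eta = int(np.all(alpha[q == 1] == 1))        # 1 only if all required attributes are mastered
    return (1.0 - slip) ** eta * guess ** (1 - eta)

alpha = np.array([1, 0, 1])
q = np.array([1, 0, 1])
print(dina_prob(alpha, q, slip=0.1, guess=0.2))  # 0.9: all required attributes mastered
```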
A Comparison of Probabilistic and Deterministic Campaign Analysis for Human Space Exploration
NASA Technical Reports Server (NTRS)
Merrill, R. Gabe; Andraschko, Mark; Stromgren, Chel; Cirillo, Bill; Earle, Kevin; Goodliff, Kandyce
2008-01-01
Human space exploration is by its very nature an uncertain endeavor. Vehicle reliability, technology development risk, budgetary uncertainty, and launch uncertainty all contribute to stochasticity in an exploration scenario. However, traditional strategic analysis has been done in a deterministic manner, analyzing and optimizing the performance of a series of planned missions. History has shown that exploration scenarios rarely follow such a planned schedule. This paper describes a methodology to integrate deterministic and probabilistic analysis of scenarios in support of human space exploration. Probabilistic strategic analysis is used to simulate "possible" scenario outcomes, based upon the likelihood of occurrence of certain events and a set of pre-determined contingency rules. The results of the probabilistic analysis are compared to the nominal results from the deterministic analysis to evaluate the robustness of the scenario to adverse events and to test and optimize contingency planning.
Watershed scale response to climate change--Trout Lake Basin, Wisconsin
Walker, John F.; Hunt, Randall J.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Trout River Basin at Trout Lake in northern Wisconsin.
Watershed scale response to climate change--Clear Creek Basin, Iowa
Christiansen, Daniel E.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Clear Creek Basin, near Coralville, Iowa.
Watershed scale response to climate change--Feather River Basin, California
Koczot, Kathryn M.; Markstrom, Steven L.; Hay, Lauren E.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Feather River Basin, California.
Watershed scale response to climate change--South Fork Flathead River Basin, Montana
Chase, Katherine J.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the South Fork Flathead River Basin, Montana.
Watershed scale response to climate change--Cathance Stream Basin, Maine
Dudley, Robert W.; Hay, Lauren E.; Markstrom, Steven L.; Hodgkins, Glenn A.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Cathance Stream Basin, Maine.
Watershed scale response to climate change--Pomperaug River Watershed, Connecticut
Bjerklie, David M.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Pomperaug River Basin at Southbury, Connecticut.
Watershed scale response to climate change--Starkweather Coulee Basin, North Dakota
Vining, Kevin C.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Starkweather Coulee Basin near Webster, North Dakota.
Watershed scale response to climate change--Sagehen Creek Basin, California
Markstrom, Steven L.; Hay, Lauren E.; Regan, R. Steven
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Sagehen Creek Basin near Truckee, California.
Watershed scale response to climate change--Sprague River Basin, Oregon
Risley, John; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Sprague River Basin near Chiloquin, Oregon.
Watershed scale response to climate change--Black Earth Creek Basin, Wisconsin
Hunt, Randall J.; Walker, John F.; Westenbroek, Steven M.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Black Earth Creek Basin, Wisconsin.
Watershed scale response to climate change--East River Basin, Colorado
Battaglin, William A.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the East River Basin, Colorado.
Watershed scale response to climate change--Naches River Basin, Washington
Mastin, Mark C.; Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Naches River Basin below Tieton River in Washington.
Watershed scale response to climate change--Flint River Basin, Georgia
Hay, Lauren E.; Markstrom, Steven L.
2012-01-01
Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Flint River Basin at Montezuma, Georgia.
Deterministic or Probabilistic - Robustness or Resilience: How to Respond to Climate Change?
NASA Astrophysics Data System (ADS)
Plag, H.; Earnest, D.; Jules-Plag, S.
2013-12-01
Our response to climate change is dominated by a deterministic approach that emphasizes the interaction between only the natural and the built environment. But in the non-ergodic world of unprecedented climate change, social factors drive recovery from unforeseen Black Swans much more than natural or built ones. The sea level rise discussion, in particular, focuses on deterministic predictions, accounting for uncertainties in major driving processes with a set of forcing scenarios and public deliberations on which of the plausible trajectories is most likely. Science focuses on the prediction of future climate change, and policies focus on mitigation of both climate change itself and its impacts. The deterministic approach is based on two basic assumptions: 1) Climate change is an ergodic process; 2) The urban coast is a robust system. Evidence suggests that these assumptions may not hold. Anthropogenic changes are pushing key parameters of the climate system outside of the natural range of variability of the last 1 million years, creating the potential for environmental Black Swans. A probabilistic approach allows for non-ergodic processes and focuses more on resilience, hence does not depend on the two assumptions. Recent experience with hurricanes revealed threshold limitations of the built environment of the urban coast, which, once exceeded, brought to the forefront the importance of the social fabric and social networking in evaluating resilience. Resilience strongly depends on social capital, and building social capital that can create resilience must be a key element in our response to climate change. Although social capital cannot mitigate hazards, social scientists have found that communities rich in strong norms of cooperation recover more quickly than communities without social capital. There is growing evidence that the built environment can affect the social capital of a community, for example through public health and perceptions of public safety. This suggests an intriguing hypothesis: disaster risk reduction programs need to account for whether they also facilitate the public trust, cooperation, and communication needed to recover from a disaster. Our work in the Hampton Roads area, where the probability of hazardous flooding and inundation events exceeding the thresholds of the infrastructure is high, suggests that to facilitate the paradigm shift from the deterministic to a probabilistic approach, natural sciences have to focus on hazard probabilities, while engineering and social sciences have to work together to understand how interactions of the built and social environments impact robustness and resilience. The current science-policy relationship needs to be augmented by social structures that can learn from previous unexpected events. In this response to climate change, the primary goal of science is not to reduce uncertainties and prediction errors, but rather to develop processes that can utilize uncertainties and surprises to increase robustness, strengthen resilience, and reduce fragility of the social systems during times when infrastructure fails.
Integrated Arrival and Departure Schedule Optimization Under Uncertainty
NASA Technical Reports Server (NTRS)
Xue, Min; Zelinski, Shannon
2014-01-01
In terminal airspace, integrating arrivals and departures with shared waypoints offers the potential to improve operational efficiency by allowing direct routes when possible. Incorporating stochastic evaluation as a post-analysis process of deterministic optimization, and imposing a safety buffer in deterministic optimization, are two ways to learn and alleviate the impact of uncertainty and to avoid unexpected outcomes. This work presents a third and direct way to take uncertainty into consideration during the optimization. The impact of uncertainty was incorporated into cost evaluations when searching for the optimal solutions. The controller intervention count was computed using a heuristic model and served as another stochastic cost besides total delay. Costs under uncertainty were evaluated using Monte Carlo simulations. The Pareto fronts that contain a set of solutions were identified and the trade-off between delays and controller intervention count was shown. Solutions that shared similar delays but had different intervention counts were investigated. The results showed that optimization under uncertainty could identify compromise solutions on Pareto fronts, which is better than deterministic optimization with extra safety buffers. It helps decision-makers reduce controller intervention while achieving low delays.
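As a hedged sketch of the general idea of evaluating candidate schedules under uncertainty with Monte Carlo sampling and keeping only non-dominated solutions, the fragment below runs a toy two-objective cost model and filters a Pareto front; the cost model, variable names, and numbers are illustrative stand-ins, not the scheduler or models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_costs(schedule, n_samples=1000):
    """Monte Carlo estimate of (mean delay, mean intervention count) for one candidate.
    The cost model below is a stand-in, not the paper's heuristic model."""
    sep = schedule["separation_buffer"]
    jitter = rng.normal(0.0, 30.0, n_samples)       # arrival-time uncertainty (s)
    delay = np.maximum(0.0, sep + jitter).mean()    # larger buffers -> more delay
    interventions = (jitter > sep).mean() * 10.0    # violations needing controller action
    return delay, interventions

def pareto_front(points):
    """Keep points not dominated in both objectives (minimization)."""
    pts = np.asarray(points)
    keep = [i for i, p in enumerate(pts)
            if not np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))]
    return pts[keep]

candidates = [{"separation_buffer": b} for b in np.linspace(0, 120, 13)]
costs = [expected_costs(c) for c in candidates]
print(pareto_front(costs))
```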
NASA Astrophysics Data System (ADS)
Wang, Fengyu
Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine adequate reserves in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict the transfer capabilities and the network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserves by explicitly modelling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. The three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserves while maintaining the deterministic unit commitment and economic dispatch structure, especially with the consideration of renewables, 2) to develop a market settlement scheme for the proposed dynamic reserve policies such that market efficiency is improved, and 3) to evaluate the market impacts and price signal of the proposed dynamic reserve policies.
Precipitation-runoff modeling system; user's manual
Leavesley, G.H.; Lichty, R.W.; Troutman, B.M.; Saindon, L.G.
1983-01-01
The concepts, structure, theoretical development, and data requirements of the precipitation-runoff modeling system (PRMS) are described. The precipitation-runoff modeling system is a modular-design, deterministic, distributed-parameter modeling system developed to evaluate the impacts of various combinations of precipitation, climate, and land use on streamflow, sediment yields, and general basin hydrology. Basin response to normal and extreme rainfall and snowmelt can be simulated to evaluate changes in water balance relationships, flow regimes, flood peaks and volumes, soil-water relationships, sediment yields, and groundwater recharge. Parameter-optimization and sensitivity analysis capabilities are provided to fit selected model parameters and evaluate their individual and joint effects on model output. The modular design provides a flexible framework for continued model system enhancement and hydrologic modeling research and development. (Author's abstract)
NASA Astrophysics Data System (ADS)
Kostrzewa, Daniel; Josiński, Henryk
2016-06-01
The expanded Invasive Weed Optimization algorithm (exIWO) is an optimization metaheuristic modelled on the original IWO version, which is inspired by the dynamic growth of a weed colony. The authors of the present paper have modified the exIWO algorithm by introducing a set of both deterministic and non-deterministic strategies for selecting individuals. The goal of the project was to evaluate the modified exIWO by testing its usefulness for the optimization of multidimensional numerical functions. The optimized functions (Griewank, Rastrigin, and Rosenbrock) are frequently used as benchmarks because of their characteristics.
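For reference, the three benchmark functions named above have standard closed forms; the sketch below implements them as commonly defined (each with a global minimum value of 0), independently of the exIWO algorithm itself.

```python
import numpy as np

def rastrigin(x):
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def griewank(x):
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def rosenbrock(x):
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

# Global minima: the origin for Rastrigin and Griewank, the all-ones vector for Rosenbrock.
print(rastrigin(np.zeros(5)), griewank(np.zeros(5)), rosenbrock(np.ones(5)))
```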
Evaluation of performance of distributed delay model for chemotherapy-induced myelosuppression.
Krzyzanski, Wojciech; Hu, Shuhua; Dunlavey, Michael
2018-04-01
A distributed delay model has been introduced that replaces the transit compartments in the classic model of chemotherapy-induced myelosuppression with a convolution integral. The maturation of granulocyte precursors in the bone marrow is described by the gamma probability density function with the shape parameter (ν). If ν is a positive integer, the distributed delay model coincides with the classic model with ν transit compartments. The purpose of this work was to evaluate the performance of the distributed delay model, with particular focus on the model's deterministic identifiability in the presence of the shape parameter. The classic model served as a reference for comparison. Previously published white blood cell (WBC) count data in rats receiving bolus doses of 5-fluorouracil were fitted by both models. The negative two log-likelihood objective function (-2LL) and running times were used as major markers of performance. Local sensitivity analysis was done to evaluate the impact of ν on the pharmacodynamic response (WBC count). The ν estimate was 1.46 (CV% = 16.1%), compared to ν = 3 for the classic model. The difference of 6.78 in -2LL between the classic model and the distributed delay model implied that the latter performed significantly better than the former according to the log-likelihood ratio test (P = 0.009), although the overall performance was only modestly better. The running times were 1 s and 66.2 min, respectively. The long running time of the distributed delay model was attributed to computationally intensive evaluation of the convolution integral. The sensitivity analysis revealed that ν strongly influences the WBC response by controlling cell proliferation and elimination of WBCs from the circulation. In conclusion, the distributed delay model was deterministically identifiable from typical cytotoxic data. Its performance was modestly better than the classic model, with a significantly longer running time.
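A minimal sketch of the central construct, assuming an equally spaced time grid and an illustrative exponential input signal: the gamma-kernel convolution below reduces to the Erlang (transit-compartment) response when the shape parameter is an integer. All parameter values are illustrative assumptions, not the fitted estimates from the paper.

```python
import numpy as np
from scipy.stats import gamma

def delayed_signal(t, x, shape, mean_delay):
    """Convolve input x(t) with a gamma delay kernel (shape nu, mean transit time).
    Discrete approximation of the convolution integral on an equally spaced grid."""
    dt = t[1] - t[0]
    scale = mean_delay / shape                      # gamma mean = shape * scale
    kernel = gamma.pdf(t, a=shape, scale=scale)
    return np.convolve(x, kernel)[:len(t)] * dt

t = np.linspace(0.0, 200.0, 2001)                   # time grid (e.g. hours)
drug_effect = np.exp(-0.1 * t)                      # illustrative input signal
# With an integer shape (nu = 3) the gamma kernel is the Erlang density,
# i.e. the impulse response of 3 transit compartments in series.
y_integer = delayed_signal(t, drug_effect, shape=3.0, mean_delay=90.0)
y_fractional = delayed_signal(t, drug_effect, shape=1.46, mean_delay=90.0)
print(y_integer[:3], y_fractional[:3])
```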
Evaluation Seismicity west of block-lut for Deterministic Seismic Hazard Assessment of Shahdad ,Iran
NASA Astrophysics Data System (ADS)
Ney, B.; Askari, M.
2009-04-01
Seismic hazard assessment has been carried out for the city of Shahdad in this study, and four maps (Kerman, Bam, Nakhil Ab, and Allah Abad) have been prepared to indicate the deterministic estimate of peak ground acceleration (PGA) in this area. Deterministic seismic hazard assessment has been performed for a region in eastern Iran (Shahdad) based on the available geological, seismological, and geophysical information, and a seismic zoning map of the region has been constructed. For this assessment, a seismotectonic map of the study region within a radius of 100 km was first prepared using geological maps, the distribution of historical and instrumental earthquake data, and focal mechanism solutions; it was used as the base map for delineating potential seismic sources. The minimum distance from each seismic source to the site (Shahdad) and the maximum magnitude of each source were then determined. According to the results, the peak ground acceleration at Shahdad, estimated with the Fukushima and Tanaka (1990) attenuation relationship, is 0.58 g; this value corresponds to movement on the Nayband fault, located 2.4 km from the site, with a maximum magnitude of Ms = 7.5.
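A hedged sketch of the deterministic PGA step, namely evaluating a magnitude-distance attenuation relation at the controlling source's maximum magnitude and closest distance. The functional form and coefficients below follow one commonly quoted version of the Fukushima and Tanaka (1990) relation, but they are assumptions that should be verified against the original publication before any real hazard calculation.

```python
import numpy as np

def pga_attenuation(magnitude, distance_km, c1=0.41, c2=0.0034, c3=1.30):
    """
    Illustrative magnitude-distance attenuation relation of the general form
    log10(PGA) = c1*M - log10(R + d(M)) - c2*R + c3, with PGA in cm/s^2.
    The coefficients are assumed values approximating the published regression;
    consult the original relationship before real use.
    """
    d = 0.032 * 10.0 ** (c1 * magnitude)            # near-source saturation term
    log_pga = c1 * magnitude - np.log10(distance_km + d) - c2 * distance_km + c3
    return 10.0 ** log_pga / 981.0                  # convert cm/s^2 to g

# Deterministic scenario: maximum magnitude on the closest source to the site.
print(round(pga_attenuation(magnitude=7.5, distance_km=2.4), 2), "g")
```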
Patients' understanding of and responses to multiplex genetic susceptibility test results.
Kaphingst, Kimberly A; McBride, Colleen M; Wade, Christopher; Alford, Sharon Hensley; Reid, Robert; Larson, Eric; Baxevanis, Andreas D; Brody, Lawrence C
2012-07-01
Examination of patients' responses to direct-to-consumer genetic susceptibility tests is needed to inform clinical practice. This study examined patients' recall and interpretation of, and responses to, genetic susceptibility test results provided directly by mail. This observational study had three prospective assessments (before testing, 10 days after receiving results, and 3 months later). Participants were 199 patients aged 25-40 years who received free genetic susceptibility testing for eight common health conditions. More than 80% of the patients correctly recalled their results for the eight health conditions. Patients were unlikely to interpret genetic results as deterministic of health outcomes (mean = 6.0, s.d. = 0.8 on a scale of 1-7, 1 indicating strongly deterministic). In multivariate analysis, patients with the least deterministic interpretations were white (P = 0.0098), more educated (P = 0.0093), and least confused by results (P = 0.001). Only 1% talked about their results with a provider. Findings suggest that most patients will correctly recall their results and will not interpret genetics as the sole cause of diseases. The subset of those confused by results could benefit from consultation with a health-care provider, which could emphasize that health habits currently are the best predictors of risk. Providers could leverage patients' interest in genetic tests to encourage behavior changes to reduce disease risk.
Non-Deterministic Dynamic Instability of Composite Shells
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2004-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, fiber volume ratio, fiber longitudinal modulus, dynamic load, and loading rate are the dominant uncertainties, in that order.
Stochastic Analysis and Probabilistic Downscaling of Soil Moisture
NASA Astrophysics Data System (ADS)
Deshon, J. P.; Niemann, J. D.; Green, T. R.; Jones, A. S.
2017-12-01
Soil moisture is a key variable for rainfall-runoff response estimation, ecological and biogeochemical flux estimation, and biodiversity characterization, each of which is useful for watershed condition assessment. These applications require not only accurate, fine-resolution soil-moisture estimates but also confidence limits on those estimates and soil-moisture patterns that exhibit realistic statistical properties (e.g., variance and spatial correlation structure). The Equilibrium Moisture from Topography, Vegetation, and Soil (EMT+VS) model downscales coarse-resolution (9-40 km) soil moisture from satellite remote sensing or land-surface models to produce fine-resolution (10-30 m) estimates. The model was designed to produce accurate deterministic soil-moisture estimates at multiple points, but the resulting patterns do not reproduce the variance or spatial correlation of observed soil-moisture patterns. The primary objective of this research is to generalize the EMT+VS model to produce a probability density function (pdf) for soil moisture at each fine-resolution location and time. Each pdf has a mean that is equal to the deterministic soil-moisture estimate, and the pdf can be used to quantify the uncertainty in the soil-moisture estimates and to simulate soil-moisture patterns. Different versions of the generalized model are hypothesized based on how uncertainty enters the model, whether the uncertainty is additive or multiplicative, and which distributions describe the uncertainty. These versions are then tested by application to four catchments with detailed soil-moisture observations (Tarrawarra, Satellite Station, Cache la Poudre, and Nerrigundah). The performance of the generalized models is evaluated by comparing the statistical properties of the simulated soil-moisture patterns to those of the observations and the deterministic EMT+VS model. The versions of the generalized EMT+VS model with normally distributed stochastic components produce soil-moisture patterns with more realistic statistical properties than the deterministic model. Additionally, the results suggest that the variance and spatial correlation of the stochastic soil-moisture variations do not vary consistently with the spatial-average soil moisture.
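As a generic, hedged illustration of attaching a pdf to each deterministic fine-resolution estimate (not the EMT+VS formulation itself), the sketch below draws mean-preserving multiplicative lognormal realizations around a deterministic soil-moisture value; the coefficient of variation is an assumed placeholder.

```python
import numpy as np

rng = np.random.default_rng(2)

def soil_moisture_pdf_samples(theta_det, cv=0.15, n=1000):
    """
    Draw fine-resolution soil-moisture realizations around a deterministic
    downscaled estimate theta_det, using multiplicative lognormal noise whose
    mean equals the deterministic value. Generic sketch of the idea of
    attaching a pdf to each fine-resolution estimate.
    """
    sigma = np.sqrt(np.log(1.0 + cv**2))            # lognormal shape from coefficient of variation
    mu = np.log(theta_det) - 0.5 * sigma**2         # preserves the mean
    return rng.lognormal(mean=mu, sigma=sigma, size=n)

samples = soil_moisture_pdf_samples(theta_det=0.25)
print(samples.mean(), np.percentile(samples, [2.5, 97.5]))   # mean near 0.25, plus confidence limits
```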
Random yet deterministic: convergent immunoglobulin responses to influenza.
Martins, Andrew J; Tsang, John S
2014-09-01
B cell clonal expansion is a hallmark of host-defense and vaccination responses. Given the vast immunoglobulin repertoire, individuals may expand B cells carrying largely distinct immunoglobulin genes following antigenic challenge. Using immunoglobulin-repertoire sequencing to dynamically track responses to influenza vaccination, Jackson et al. find evidence of convergent immunoglobulin responses across individuals. Published by Elsevier Ltd.
The Simplest Complete Model of Choice Response Time: Linear Ballistic Accumulation
ERIC Educational Resources Information Center
Brown, Scott D.; Heathcote, Andrew
2008-01-01
We propose a linear ballistic accumulator (LBA) model of decision making and reaction time. The LBA is simpler than other models of choice response time, with independent accumulators that race towards a common response threshold. Activity in the accumulators increases in a linear and deterministic manner. The simplicity of the model allows…
Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise
NASA Astrophysics Data System (ADS)
Meyers, Stephen R.; Hinnov, Linda A.
2010-08-01
Deterministic orbital controls on climate variability are commonly inferred to dominate across timescales of 10^4-10^6 years, although some studies have suggested that stochastic processes may be of equal or greater importance. Here we explicitly quantify changes in deterministic orbital processes (forcing and/or pacing) versus stochastic climate processes during the Plio-Pleistocene, via time-frequency analysis of two prominent foraminifera oxygen isotopic stacks. Our results indicate that development of the Northern Hemisphere ice sheet is paralleled by an overall amplification of both deterministic and stochastic climate energy, but their relative dominance is variable. The progression from a more stochastic early Pliocene to a strongly deterministic late Pleistocene is primarily accommodated during two transitory phases of Northern Hemisphere ice sheet growth. This long-term trend is punctuated by “stochastic events,” which we interpret as evidence for abrupt reorganization of the climate system at the initiation and termination of the mid-Pleistocene transition and at the onset of Northern Hemisphere glaciation. In addition to highlighting a complex interplay between deterministic and stochastic climate change during the Plio-Pleistocene, our results support an early onset for Northern Hemisphere glaciation (between 3.5 and 3.7 Ma) and reveal some new characteristics of the orbital signal response, such as the puzzling emergence of 100 ka and 400 ka cyclic climate variability during theoretical eccentricity nodes.
Quantitative analysis of random ameboid motion
NASA Astrophysics Data System (ADS)
Bödeker, H. U.; Beta, C.; Frank, T. D.; Bodenschatz, E.
2010-04-01
We quantify random migration of the social ameba Dictyostelium discoideum. We demonstrate that the statistics of cell motion can be described by an underlying Langevin-type stochastic differential equation. An analytic expression for the velocity distribution function is derived. The separation into deterministic and stochastic parts of the movement shows that the cells undergo a damped motion with multiplicative noise. Both contributions to the dynamics display a distinct response to external physiological stimuli. The deterministic component depends on the developmental state and ambient levels of signaling substances, while the stochastic part does not.
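A minimal sketch of the kind of dynamics described, damped drift plus multiplicative noise, integrated with the Euler-Maruyama scheme; the equation form and parameter values below are illustrative assumptions, not the model fitted to the cell-tracking data.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_langevin(v0, gamma, sigma, dt=0.01, n_steps=10000):
    """
    Euler-Maruyama integration of a Langevin-type equation with a damping
    (deterministic) term and multiplicative noise:
        dv = -gamma * v dt + sigma * |v| dW
    Generic illustration of damped motion with multiplicative noise.
    """
    v = np.empty(n_steps)
    v[0] = v0
    for i in range(1, n_steps):
        dW = np.sqrt(dt) * rng.standard_normal()
        v[i] = v[i - 1] - gamma * v[i - 1] * dt + sigma * abs(v[i - 1]) * dW
    return v

v = simulate_langevin(v0=5.0, gamma=0.5, sigma=0.3)
print(v.mean(), v.std())
```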
Diffusion in Deterministic Interacting Lattice Systems
NASA Astrophysics Data System (ADS)
Medenjak, Marko; Klobas, Katja; Prosen, Tomaž
2017-09-01
We study reversible deterministic dynamics of classical charged particles on a lattice with hard-core interaction. It is rigorously shown that the system exhibits three types of transport phenomena, ranging from ballistic, through diffusive, to insulating. By obtaining an exact expression for the current time-autocorrelation function, we are able to calculate the linear response transport coefficients, such as the diffusion constant and the Drude weight. Additionally, we calculate the long-time charge profile after an inhomogeneous quench and obtain a diffusive profile with the Green-Kubo diffusion constant. Exact analytical results are corroborated by Monte Carlo simulations.
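As a generic numerical illustration of the Green-Kubo relation used above (not the exact lattice calculation of the paper), the sketch below estimates a diffusion constant by integrating an empirical velocity autocorrelation function, checked against an Ornstein-Uhlenbeck process with a known answer; all parameter values are illustrative.

```python
import numpy as np

def green_kubo_diffusion(velocity, dt, max_lag=2000):
    """
    Green-Kubo estimate of a tagged particle's diffusion constant from its
    velocity time series: D = integral_0^inf <v(0)v(t)> dt, approximated by
    summing the empirical velocity autocorrelation function up to max_lag.
    """
    v = np.asarray(velocity, dtype=float)
    n = len(v)
    acf = np.array([np.mean(v[:n - k] * v[k:]) for k in range(min(max_lag, n // 2))])
    return np.sum(acf) * dt

# Check against an Ornstein-Uhlenbeck velocity, for which D = sigma^2 / (2 * gamma^2).
rng = np.random.default_rng(4)
gamma_, sigma, dt = 1.0, 1.0, 0.01
v = np.zeros(100000)
for i in range(1, len(v)):
    v[i] = v[i - 1] - gamma_ * v[i - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
print(green_kubo_diffusion(v, dt), "approx", sigma**2 / (2 * gamma_**2))
```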
Universal first-order reliability concept applied to semistatic structures
NASA Technical Reports Server (NTRS)
Verderaime, V.
1994-01-01
A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and a reliability selection criterion. The method provides a reliability design factor, derived from the reliability criterion, which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to the semistatic structures of air and surface vehicles.
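A minimal sketch of the first-order reliability calculation underlying such a design factor, assuming a linear limit state with independent normal capacity and load; the numbers are illustrative only, not values from the report.

```python
from math import erf, sqrt

def form_reliability(mean_capacity, std_capacity, mean_load, std_load):
    """
    First-order reliability for a linear limit state g = capacity - load with
    independent normal capacity and load. Returns the reliability index beta
    and the corresponding probability of failure Pf = Phi(-beta).
    """
    beta = (mean_capacity - mean_load) / sqrt(std_capacity**2 + std_load**2)
    phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal CDF
    return beta, phi(-beta)

# Illustrative numbers only: stress capacity 60 +/- 5 ksi vs. applied stress 40 +/- 4 ksi.
beta, pf = form_reliability(60.0, 5.0, 40.0, 4.0)
print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
```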
Earthquake Hazard Mitigation Using a Systems Analysis Approach to Risk Assessment
NASA Astrophysics Data System (ADS)
Legg, M.; Eguchi, R. T.
2015-12-01
The earthquake hazard mitigation goal is to reduce losses due to severe natural events. The first step is to conduct a Seismic Risk Assessment consisting of 1) hazard estimation, 2) vulnerability analysis, 3) exposure compilation. Seismic hazards include ground deformation, shaking, and inundation. The hazard estimation may be probabilistic or deterministic. Probabilistic Seismic Hazard Assessment (PSHA) is generally applied to site-specific Risk assessments, but may involve large areas as in a National Seismic Hazard Mapping program. Deterministic hazard assessments are needed for geographically distributed exposure such as lifelines (infrastructure), but may be important for large communities. Vulnerability evaluation includes quantification of fragility for construction or components including personnel. Exposure represents the existing or planned construction, facilities, infrastructure, and population in the affected area. Risk (expected loss) is the product of the quantified hazard, vulnerability (damage algorithm), and exposure which may be used to prepare emergency response plans, retrofit existing construction, or use community planning to avoid hazards. The risk estimate provides data needed to acquire earthquake insurance to assist with effective recovery following a severe event. Earthquake Scenarios used in Deterministic Risk Assessments provide detailed information on where hazards may be most severe, what system components are most susceptible to failure, and to evaluate the combined effects of a severe earthquake to the whole system or community. Casualties (injuries and death) have been the primary factor in defining building codes for seismic-resistant construction. Economic losses may be equally significant factors that can influence proactive hazard mitigation. Large urban earthquakes may produce catastrophic losses due to a cascading of effects often missed in PSHA. Economic collapse may ensue if damaged workplaces, disruption of utilities, and resultant loss of income produces widespread default on payments. With increased computational power and more complete inventories of exposure, Monte Carlo methods may provide more accurate estimation of severe losses and the opportunity to increase resilience of vulnerable systems and communities.
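The risk product described above can be illustrated with a one-line calculation; the hazard probability, damage ratio, and exposure value below are illustrative placeholders, not figures from any assessment.

```python
def expected_annual_loss(hazard_prob, damage_ratio, exposure_value):
    """
    Risk (expected loss) as the product of hazard, vulnerability, and exposure:
    annual probability of the hazard level, fraction of value damaged given
    that hazard, and value exposed.
    """
    return hazard_prob * damage_ratio * exposure_value

# A building worth $2M, a 2% annual chance of the scenario shaking level,
# and a fragility model predicting 30% damage at that level:
print(expected_annual_loss(0.02, 0.30, 2_000_000))   # $12,000 per year
```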
Mi, Xiangcheng; Swenson, Nathan G; Jia, Qi; Rao, Mide; Feng, Gang; Ren, Haibao; Bebber, Daniel P; Ma, Keping
2016-09-07
Deterministic and stochastic processes jointly determine the community dynamics of forest succession. However, it has been widely held in previous studies that deterministic processes dominate forest succession. Furthermore, inference of mechanisms for community assembly may be misleading if based on a single axis of diversity alone. In this study, we evaluated the relative roles of deterministic and stochastic processes along a disturbance gradient by integrating species, functional, and phylogenetic beta diversity in a subtropical forest chronosequence in Southeastern China. We found a general pattern of increasing species turnover, but little-to-no change in phylogenetic and functional turnover over succession at two spatial scales. Meanwhile, the phylogenetic and functional beta diversity were not significantly different from random expectation. This result suggested a dominance of stochastic assembly, contrary to the general expectation that deterministic processes dominate forest succession. On the other hand, we found significant interactions of environment and disturbance and limited evidence for significant deviations of phylogenetic or functional turnover from random expectations for different size classes. This result provided weak evidence of deterministic processes over succession. Stochastic assembly of forest succession suggests that post-disturbance restoration may be largely unpredictable and difficult to control in subtropical forests.
Evolution of the Climate Continuum from the Mid-Miocene Climatic Optimum to the Present
NASA Astrophysics Data System (ADS)
Aswasereelert, W.; Meyers, S. R.; Hinnov, L. A.; Kelly, D.
2011-12-01
The recognition of orbital rhythms in paleoclimate data has led to a rich understanding of climate evolution during the Neogene and Quaternary. In contrast, changes in stochastic variability associated with the transition from unipolar to bipolar glaciation have received less attention, although the stochastic component likely preserves key insights about climate. In this study, we seek to evaluate the dominance and character of stochastic climate energy since the Middle Miocene Climatic Optimum (~17 Ma). These analyses extend a previous study that suggested diagnostic stochastic responses associated with Northern Hemisphere ice sheet development during the Plio-Pleistocene (Meyers and Hinnov, 2010). A critical and challenging step necessary to conduct the work is the conversion of depth data to time data. We investigate climate proxy datasets using multiple time scale hypotheses, including depth-derived time scales, sedimentologic/geochemical "tuning", minimal orbital tuning, and comprehensive orbital tuning. To extract the stochastic component of climate, and also explore potential relationships between the orbital parameters and paleoclimate response, a number of approaches rooted in Thomson's (1982) multi-taper spectral method (MTM) are applied. Importantly, the MTM technique is capable of separating the spectral "continuum" - a measure of stochastic variability - from the deterministic periodic orbital signals (spectral "lines") preserved in proxy data. Time series analysis of the proxy records using different chronologic approaches allows us to evaluate the sensitivity of our conclusion about stochastic and deterministic orbital processes during the Middle Miocene to present. Moreover, comparison of individual records permits examination of the spatial dependence of the identified climate responses. Meyers, S.R., and Hinnov, L.A. (2010), Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise: Paleoceanography, 25, PA3207, doi:10.1029/2009PA001834. Thomson, D.J. (1982), Spectrum estimation and harmonic analysis: IEEE Proceedings, v. 70, p. 1055-1096.
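A hedged sketch of the core MTM ingredient, averaging periodograms computed with orthogonal DPSS (Slepian) tapers so that a periodic "line" can be seen against a red-noise "continuum"; the synthetic series, taper settings, and normalization are illustrative assumptions and do not reproduce the study's analysis.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_psd(x, dt, nw=3, k=5):
    """
    Simple multitaper (Thomson) spectral estimate: average of periodograms
    computed with k orthogonal DPSS tapers of time-bandwidth product nw.
    Returns frequencies and the averaged power spectral density (approximate scale).
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    tapers = dpss(n, nw, k)                       # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    psd = spectra.mean(axis=0) * dt
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, psd

# Synthetic proxy series: a deterministic 41 kyr "line" plus AR(1) red-noise "continuum".
rng = np.random.default_rng(5)
dt = 1.0                                          # kyr sampling interval
t = np.arange(0, 1000, dt)
noise = np.zeros(t.size)
for i in range(1, t.size):
    noise[i] = 0.9 * noise[i - 1] + rng.standard_normal()
series = 2.0 * np.sin(2 * np.pi * t / 41.0) + noise
f, p = multitaper_psd(series, dt)
print(f[np.argmax(p[1:]) + 1])                    # dominant frequency, expected near 1/41 cycles/kyr
```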
Probabilistic structural analysis methods for space transportation propulsion systems
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Moore, N.; Anis, C.; Newell, J.; Nagpal, V.; Singhal, S.
1991-01-01
Information on probabilistic structural analysis methods for space propulsion systems is given in viewgraph form. Information is given on deterministic certification methods, probability of failure, component response analysis, stress responses for 2nd stage turbine blades, Space Shuttle Main Engine (SSME) structural durability, and program plans. .
An Estimation Procedure for the Structural Parameters of the Unified Cognitive/IRT Model.
ERIC Educational Resources Information Center
Jiang, Hai; And Others
L. V. DiBello, W. F. Stout, and L. A. Roussos (1993) have developed a new item response model, the Unified Model, which brings together the discrete, deterministic aspects of cognition favored by cognitive scientists, and the continuous, stochastic aspects of test response behavior that underlie item response theory (IRT). The Unified Model blends…
The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)
Smith, Philip L.; Ratcliff, Roger; McKoon, Gail
2015-01-01
Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314
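For concreteness, the sketch below simulates the standard two-boundary diffusion process in which within-trial noise, rather than across-trial rate variability, produces the spread in RT and accuracy; the parameter values are illustrative assumptions, not fitted estimates.

```python
import numpy as np

rng = np.random.default_rng(6)

def diffusion_trial(drift, boundary, non_decision=0.3, dt=0.001, sigma=1.0):
    """
    Simulate one trial of a simple two-boundary diffusion (Wiener) decision
    process. Evidence starts midway between 0 and `boundary` and accumulates
    with mean rate `drift` plus within-trial Gaussian noise; the first boundary
    crossed gives the response, and the crossing time plus a non-decision time
    gives the RT.
    """
    x, t = boundary / 2.0, 0.0
    while 0.0 < x < boundary:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("upper" if x >= boundary else "lower"), t + non_decision

trials = [diffusion_trial(drift=1.5, boundary=1.0) for _ in range(500)]
rts = np.array([t for _, t in trials])
accuracy = np.mean([resp == "upper" for resp, _ in trials])
print(f"accuracy = {accuracy:.2f}, mean RT = {rts.mean():.3f} s")
```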
In Search of Determinism-Sensitive Region to Avoid Artefacts in Recurrence Plots
NASA Astrophysics Data System (ADS)
Wendi, Dadiyorto; Marwan, Norbert; Merz, Bruno
In an effort to reduce parameter uncertainties in constructing recurrence plots, and in particular to avoid potential artefacts, this paper presents a technique to derive an artefact-safe region of parameter sets. This technique exploits both deterministic (including chaotic) and stochastic signal characteristics of recurrence quantification (i.e., diagonal structures). It is useful when the evaluated signal is known to be deterministic. This study focuses on the recurrence plot generated from the reconstructed phase space in order to represent many real application scenarios when not all variables needed to describe a system are available (data scarcity). The technique involves random shuffling of the original signal to destroy its original deterministic characteristics. Its purpose is to evaluate whether the determinism values of the original and the shuffled signal remain close together, which would suggest that the recurrence plot might comprise artefacts. The use of such a determinism-sensitive region should be accompanied by standard embedding optimization approaches, e.g. using indices like false nearest neighbor and mutual information, to yield a more reliable recurrence plot parameterization.
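A minimal, non-embedded sketch of the shuffle test described above: compute the determinism (DET) of a recurrence plot for the original signal and for a shuffled surrogate, then compare the two values. The threshold, minimum line length, and test signal are illustrative assumptions, and the paper's phase-space embedding is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)

def determinism(x, threshold, l_min=2):
    """
    DET of a (non-embedded, for brevity) recurrence plot: the fraction of
    recurrence points lying on diagonal lines of length >= l_min.
    The main diagonal is included for simplicity.
    """
    x = np.asarray(x, dtype=float)
    rp = (np.abs(x[:, None] - x[None, :]) <= threshold).astype(int)
    n = len(x)
    total = rp.sum()
    in_lines = 0
    for offset in range(-(n - 1), n):                   # walk every diagonal
        diag = np.diagonal(rp, offset=offset)
        run = 0
        for value in np.append(diag, 0):                # sentinel closes the last run
            if value:
                run += 1
            else:
                if run >= l_min:
                    in_lines += run
                run = 0
    return in_lines / total if total else 0.0

# Compare a deterministic signal with its shuffled surrogate: if the two DET
# values stay close, the chosen parameters may produce artefacts.
t = np.linspace(0, 20 * np.pi, 600)
signal = np.sin(t)
shuffled = rng.permutation(signal)
eps = 0.1 * signal.std()
print(determinism(signal, eps), determinism(shuffled, eps))
```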
Precision production: enabling deterministic throughput for precision aspheres with MRF
NASA Astrophysics Data System (ADS)
Maloney, Chris; Entezarian, Navid; Dumas, Paul
2017-10-01
Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.
Accurate measurement of RF exposure from emerging wireless communication systems
NASA Astrophysics Data System (ADS)
Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno
2013-04-01
Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper, this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), or for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.
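As one plausible numeric illustration of the systematic errors discussed above: two signals with the same full-window RMS power can be reported differently by an instrument that effectively averages only over the active bursts of a duty-cycled emission. The signals and numbers below are illustrative assumptions, not measurements from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

def rms(x):
    return np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2))

# A bursty (duty-cycled) emission and a continuous emission with the same RMS
# over the full observation window.
n, duty_cycle = 100000, 0.25
continuous = rng.standard_normal(n)
burst = np.zeros(n)
active = np.arange(n) % 1000 < duty_cycle * 1000                 # 25% on-time frames
burst[active] = rng.standard_normal(active.sum()) / np.sqrt(duty_cycle)

print("full-window RMS:", rms(continuous), rms(burst))           # approximately equal
print("on-time-only RMS of burst:", rms(burst[active]))          # higher by 1/sqrt(duty cycle)
```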
Performance evaluation of a distance learning program.
Dailey, D J; Eno, K R; Brinkley, J F
1994-01-01
This paper presents a performance metric which uses a single number to characterize the response time for a non-deterministic client-server application operating over the Internet. When applied to a Macintosh-based distance learning application called the Digital Anatomist Browser, the metric allowed us to observe that "A typical student doing a typical mix of Browser commands on a typical data set will experience the same delay if they use a slow Macintosh on a local network or a fast Macintosh on the other side of the country accessing the data over the Internet." The methodology presented is applicable to other client-server applications that are rapidly appearing on the Internet.
NASA Astrophysics Data System (ADS)
Contreras, Arturo Javier
This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generate typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results. By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.
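As a hedged sketch of the fluid-substitution step mentioned above (Gassmann's relation, with illustrative sand and fluid properties rather than the study's values), the fragment below shows how replacing brine with gas lowers the saturated bulk modulus, bulk density, and P-wave velocity.

```python
import numpy as np

def gassmann_ksat(k_dry, k_mineral, k_fluid, porosity):
    """
    Gassmann fluid substitution: saturated-rock bulk modulus from the dry-rock,
    mineral, and fluid bulk moduli and porosity. Shear modulus is unaffected.
    """
    num = (1.0 - k_dry / k_mineral) ** 2
    den = porosity / k_fluid + (1.0 - porosity) / k_mineral - k_dry / k_mineral ** 2
    return k_dry + num / den

def vp(k_sat, mu, rho):
    """P-wave velocity from saturated bulk modulus, shear modulus, and bulk density."""
    return np.sqrt((k_sat + 4.0 * mu / 3.0) / rho)

# Illustrative sand: moduli in Pa, densities in kg/m^3, porosity 0.28.
k_dry, mu, k_min, phi = 6.0e9, 5.0e9, 36.0e9, 0.28
rho_grain = 2650.0
for name, k_fl, rho_fl in [("brine", 2.8e9, 1050.0), ("gas", 0.05e9, 200.0)]:
    k_sat = gassmann_ksat(k_dry, k_min, k_fl, phi)
    rho = (1.0 - phi) * rho_grain + phi * rho_fl
    print(name, round(vp(k_sat, mu, rho)), "m/s")
```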
Hartmann, Christoph; Lazar, Andreea; Nessler, Bernhard; Triesch, Jochen
2015-01-01
Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network’s spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network’s behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms. PMID:26714277
Modeling heterogeneous responsiveness of intrinsic apoptosis pathway
2013-01-01
Background Apoptosis is a cell suicide mechanism that enables multicellular organisms to maintain homeostasis and to eliminate individual cells that threaten the organism’s survival. Depending on the type of stimulus, apoptosis can be propagated by the extrinsic pathway or the intrinsic pathway. The comprehensive understanding of the molecular mechanism of apoptotic signaling allows for development of mathematical models, aiming to elucidate dynamical and systems properties of apoptotic signaling networks. There have been extensive efforts in deterministic modeling of the apoptosis network, accounting for the average behavior of a population of cells. Cellular networks, however, are inherently stochastic and significant cell-to-cell variability in apoptosis response has been observed at the single-cell level. Results To address the inevitable randomness in the intrinsic apoptosis mechanism, we develop a theoretical and computational modeling framework of the intrinsic apoptosis pathway at the single-cell level, accounting for both deterministic and stochastic behavior. Our deterministic model, adapted from the well-accepted Fussenegger model, shows that an additional positive feedback between the executioner caspase and the initiator caspase plays a fundamental role in yielding the desired property of bistability. We then examine the impact of intrinsic fluctuations of biochemical reactions, viewed as intrinsic noise, and natural variation of protein concentrations, viewed as extrinsic noise, on the behavior of the intrinsic apoptosis network. Histograms of the steady-state output at varying input levels show that the intrinsic noise could elicit a wider region of bistability over that of the deterministic model. However, the system stochasticity due to intrinsic fluctuations, such as the noise of steady-state response and the randomness of response delay, shows that the intrinsic noise in general is insufficient to produce significant cell-to-cell variations at physiologically relevant levels of molecular numbers. Furthermore, the extrinsic noise represented by random variations of two key apoptotic proteins, namely Cytochrome C and inhibitor of apoptosis proteins (IAP), is modeled separately or in combination with intrinsic noise. The resultant stochasticity in the timing of intrinsic apoptosis response shows that the fluctuating protein variations can induce cell-to-cell stochastic variability at a quantitative level agreeing with experiments. Finally, simulations illustrate that the mean abundance of fluctuating IAP protein is positively correlated with the degree of cellular stochasticity of the intrinsic apoptosis pathway. Conclusions Our theoretical and computational study shows that the pronounced non-genetic heterogeneity in intrinsic apoptosis responses among individual cells plausibly arises from extrinsic rather than intrinsic origins of fluctuations. In addition, it predicts that the IAP protein could serve as a potential therapeutic target for suppression of the cell-to-cell variation in the intrinsic apoptosis responsiveness. PMID:23875784
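A minimal sketch, assuming a toy two-caspase model with hypothetical rate constants (not the Fussenegger parameters), of how a positive feedback between executioner and initiator caspases can yield bistability: integrating from a low and a high initial state settles at two different steady levels.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical rate constants, chosen only so that the system is bistable.
s, k1, k2, d1, d2, K, n = 0.01, 1.0, 1.0, 1.0, 0.5, 1.0, 4

def rhs(t, y):
    c8, c3 = y
    dc8 = s + k2 * c3**n / (K**n + c3**n) - d2 * c8   # positive feedback from executioner
    dc3 = k1 * c8 - d1 * c3                           # activation by initiator
    return [dc8, dc3]

for y0, label in [([0.05, 0.05], "low start "), ([3.0, 3.0], "high start")]:
    sol = solve_ivp(rhs, (0.0, 200.0), y0, rtol=1e-8)
    print(label, "-> steady (initiator, executioner) ~",
          np.round(sol.y[:, -1], 3))
```

The low start relaxes to a near-zero "living" state while the high start relaxes to an elevated "apoptotic" state, which is the bistable behavior the abstract attributes to the added feedback loop.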
Comparison of space radiation calculations for deterministic and Monte Carlo transport codes
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo
For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit and HETC-HEDS. The deterministic codes and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore, it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry and material. Typical space radiation environments such as the 1977 solar minimum galactic cosmic ray environment are used as the well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.
Soil erosion assessment - Mind the gap
NASA Astrophysics Data System (ADS)
Kim, Jongho; Ivanov, Valeriy Y.; Fatichi, Simone
2016-12-01
Accurate assessment of erosion rates remains an elusive problem because soil loss is strongly nonunique with respect to the main drivers. In addressing the mechanistic causes of erosion responses, we discriminate between macroscale effects of external factors - long studied and referred to as "geomorphic external variability", and microscale effects, introduced as "geomorphic internal variability." The latter source of erosion variations represents the knowledge gap, an overlooked but vital element of geomorphic response, significantly impacting the low predictability skill of deterministic models at field-catchment scales. This is corroborated with experiments using a comprehensive physical model that dynamically updates the soil mass and particle composition. As complete knowledge of microscale conditions for arbitrary location and time is infeasible, we propose that new predictive frameworks of soil erosion should embed stochastic components in deterministic assessments of external and internal types of geomorphic variability.
Probabilistic Design and Analysis Framework
NASA Technical Reports Server (NTRS)
Strack, William C.; Nagpal, Vinod K.
2010-01-01
PRODAF is a software package designed to aid analysts and designers in conducting probabilistic analysis of components and systems. PRODAF can integrate multiple analysis programs to ease the tedious process of conducting a complex analysis that requires the use of multiple software packages. The work uses a commercial finite element analysis (FEA) program with modules from NESSUS to conduct a probabilistic analysis of a hypothetical turbine blade, disk, and shaft model. PRODAF applies the response surface method at the component level and extrapolates the component-level responses to the system level. Hypothetical components of a gas turbine engine are first deterministically modeled using FEA. Variations in selected geometrical dimensions and loading conditions are analyzed to determine their effects on the stress state within each component. Geometric variations include the chord length and height for the blade, and the inner radius, outer radius, and thickness for the disk. Probabilistic analysis is carried out using software packages such as System Uncertainty Analysis (SUA) and PRODAF. PRODAF was used with a commercial deterministic FEA program in conjunction with modules from the probabilistic analysis program, NESTEM, to perturb loads and geometries to provide a reliability and sensitivity analysis. PRODAF simplified the handling of data among the various programs involved, and will work with many commercial and open-source deterministic programs, probabilistic programs, or modules.
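The following sketch illustrates the response surface idea that PRODAF applies: a handful of deterministic runs (here a hypothetical closed-form stand-in for an FEA solver) are fit with a quadratic surface, and the cheap surrogate is then sampled by Monte Carlo. All function names, dimensions, and numbers are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def fea_stress(chord, load):
    # Stand-in for a deterministic FEA run (hypothetical closed form, MPa).
    return 120.0 * load / chord + 3.0 * load**2

# Design of experiments: a small grid of deterministic runs.
chords = np.linspace(0.09, 0.11, 5)
loads = np.linspace(0.9, 1.1, 5)
X = np.array([(c, p) for c in chords for p in loads])
y = np.array([fea_stress(c, p) for c, p in X])

# Quadratic response surface in the basis [1, c, p, c^2, p^2, c*p].
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Monte Carlo on the cheap surrogate instead of the expensive solver.
c_mc = rng.normal(0.10, 0.002, 100_000)
p_mc = rng.normal(1.00, 0.05, 100_000)
A_mc = np.column_stack([np.ones_like(c_mc), c_mc, p_mc,
                        c_mc**2, p_mc**2, c_mc * p_mc])
s_mc = A_mc @ coef
print(f"mean stress {s_mc.mean():.1f} MPa, P(stress > 1350 MPa) = {(s_mc > 1350).mean():.4f}")
```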
Field Evaluation of an Avian Risk Assessment Model
We conducted two laboratory subacute dietary toxicity tests and one outdoor subacute dietary toxicity test to determine the effectiveness of the U.S. Environmental Protection Agency's deterministic risk assessment model for evaluating the potential of adverse effects to birds in ...
Soil pH mediates the balance between stochastic and deterministic assembly of bacteria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tripathi, Binu M.; Stegen, James C.; Kim, Mincheol
Little is known about the factors affecting the relative influence of stochastic and deterministic processes that govern the assembly of microbial communities in successional soils. Here, we conducted a meta-analysis of bacterial communities using six different successional soil data sets, scattered across different regions, with different pH conditions in early and late successional soils. We found that soil pH was the best predictor of bacterial community assembly and the relative importance of stochastic and deterministic processes along successional soils. Extreme acidic or alkaline pH conditions lead to assembly of phylogenetically more clustered bacterial communities through deterministic processes, whereas pH conditions close to neutral lead to phylogenetically less clustered bacterial communities with more stochasticity. We suggest that the influence of pH, rather than successional age, is the main driving force in producing trends in phylogenetic assembly of bacteria, and that pH also influences the relative balance of stochastic and deterministic processes along successional soils. Given that pH had a much stronger association with community assembly than did successional age, we evaluated whether the inferred influence of pH was maintained when studying globally-distributed samples collected without regard for successional age. This dataset confirmed the strong influence of pH, suggesting that the influence of soil pH on community assembly processes occurs globally. Extreme pH conditions likely exert more stringent limits on survival and fitness, imposing strong selective pressures through ecological and evolutionary time. Taken together, these findings suggest that the degree to which stochastic vs. deterministic processes shape soil bacterial community assembly is a consequence of soil pH rather than successional age.
NASA Astrophysics Data System (ADS)
Fischer, P.; Jardani, A.; Lecoq, N.
2018-02-01
In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to numerically simulate water flow in a model and a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and geometry and equivalent transmissivity of the conduits) that are considered unknown. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties, until it converges to a global network geometry in a solution model able to reproduce the set of data. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. This method has been successfully tested for three different theoretical and simplified study cases with hydraulic response data generated from hypothetical karstic models with increasing complexity of the network geometry and of the matrix heterogeneity.
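A minimal sketch of the deterministic optimization step, assuming a hypothetical stand-in forward model rather than the DNDI coupled discrete-continuum flow solver: unknown log-transmissivities are iteratively corrected with scipy.optimize.least_squares until the model reproduces synthetic piezometric data.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

def forward_heads(log_T, pumping_rates):
    """Hypothetical stand-in for the coupled discrete-continuum flow solver:
    drawdown at each observation point scales as Q / T for two model subspaces
    (matrix and conduit)."""
    T = 10.0 ** log_T
    return np.concatenate([q / T for q in pumping_rates])

# Synthetic "observed" piezometric data from a known truth, plus measurement noise.
true_logT = np.array([-3.0, -2.0])                                   # log10 transmissivities
pumping = [np.array([1.0e-3, 2.0e-3]), np.array([5.0e-4, 1.5e-3])]   # two pumping tests
observed = forward_heads(true_logT, pumping) + rng.normal(0.0, 0.02, 4)

# Deterministic optimization: iteratively correct the parameters to fit the data.
fit = least_squares(lambda p: forward_heads(p, pumping) - observed,
                    x0=np.array([-2.5, -2.5]))
print("recovered log10 T:", np.round(fit.x, 2), "(truth:", true_logT, ")")
```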
NASA Technical Reports Server (NTRS)
Duffy, S. F.; Hu, J.; Hopkins, D. A.
1995-01-01
The article begins by examining the fundamentals of traditional deterministic design philosophy. The initial section outlines the concepts of failure criteria and limit state functions, two traditional notions that are embedded in deterministic design philosophy. This is followed by a discussion regarding safety factors (a possible limit state function) and the common utilization of statistical concepts in deterministic engineering design approaches. Next, the fundamental aspects of a probabilistic failure analysis are explored and it is shown that the deterministic design concepts mentioned in the initial portion of the article are embedded in probabilistic design methods. For components fabricated from ceramic materials (and other similarly brittle materials) the probabilistic design approach yields the widely used Weibull analysis after suitable assumptions are incorporated. The authors point out that Weibull analysis provides the rare instance where closed-form solutions are available for a probabilistic failure analysis. Since numerical methods are usually required to evaluate component reliabilities, a section on Monte Carlo methods is included to introduce the concept. The article concludes with a presentation of the technical aspects that support the numerical method known as fast probability integration (FPI). This includes a discussion of the Hasofer-Lind and Rackwitz-Fiessler approximations.
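A hedged worked example of the closed-form Weibull result the article refers to, with a Monte Carlo cross-check; the Weibull modulus, characteristic strength, and applied stress are hypothetical values, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-parameter Weibull strength distribution for a ceramic part.
m, sigma_0 = 10.0, 400.0      # Weibull modulus, characteristic strength (MPa)
applied = 250.0               # deterministic applied stress (MPa)

# Closed-form probability of failure: Pf = 1 - exp(-(sigma / sigma_0)^m)
pf_closed = 1.0 - np.exp(-((applied / sigma_0) ** m))

# Monte Carlo cross-check: sample strengths, count those below the applied stress.
strengths = sigma_0 * rng.weibull(m, size=1_000_000)
pf_mc = np.mean(strengths < applied)

print(f"closed form Pf = {pf_closed:.5f}, Monte Carlo Pf = {pf_mc:.5f}")
```

The agreement between the two numbers is the point: Weibull analysis is the rare case where the reliability integral has a closed form, so Monte Carlo is only needed when those simplifying assumptions break down.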
Topology optimization under stochastic stiffness
NASA Astrophysics Data System (ADS)
Asadpoure, Alireza
Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty into an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations for the response quantities allow for efficient and accurate calculation of sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. It is also shown that the proposed methods lead to significant computational savings when compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and that results obtained from the proposed method are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
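A minimal sketch, with entirely hypothetical patient-level data rather than the Helicobacter pylori model, of a bootstrap-based probabilistic sensitivity analysis of the kind described: resample the data with replacement, recompute the incremental cost-effectiveness ratio each time, and summarize the resulting distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical patient-level data (costs in $, effects in QALYs) for two strategies.
cost_a, eff_a = rng.normal(900, 300, 200), rng.normal(0.70, 0.10, 200)
cost_b, eff_b = rng.normal(1200, 350, 200), rng.normal(0.74, 0.10, 200)

def icer(ca, ea, cb, eb):
    """Incremental cost-effectiveness ratio of strategy B versus A."""
    return (cb.mean() - ca.mean()) / (eb.mean() - ea.mean())

# Bootstrap: resample patients with replacement and recompute the ICER each time.
boot = []
for _ in range(5000):
    ia = rng.integers(0, len(cost_a), len(cost_a))
    ib = rng.integers(0, len(cost_b), len(cost_b))
    boot.append(icer(cost_a[ia], eff_a[ia], cost_b[ib], eff_b[ib]))
boot = np.array(boot)

print("base-case ICER ($/QALY):", round(float(icer(cost_a, eff_a, cost_b, eff_b))))
print("bootstrap 2.5-97.5 percentiles:", np.percentile(boot, [2.5, 97.5]).round())
```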
Vibroacoustic Response of Pad Structures to Space Shuttle Launch Acoustic Loads
NASA Technical Reports Server (NTRS)
Margasahayam, R. N.; Caimi, Raoul E.
1995-01-01
This paper presents a deterministic theory for the random vibration problem for predicting the response of structures in the low-frequency range (0 to 20 hertz) of launch transients. Also presented are some innovative ways to characterize noise and highlights of ongoing test-analysis correlation efforts titled the Verification Test Article (VETA) project.
Hybrid deterministic/stochastic simulation of complex biochemical systems.
Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina
2017-11-21
In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explored with wet lab experiments. To serve their purpose, computational models, however, should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented by efficient algorithms requiring the shortest possible execution time, to avoid enlarging excessively the time elapsing between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which is often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundance fluctuates by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally time-expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reactions' set into fast reactions, moderate reactions, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations. Moderate reactions are those whose reaction waiting time is greater than the fast reaction waiting time but smaller than the slow reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crosses the threshold of low (high) waiting time. A Gillespie First Reaction Method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics, namely DNA transcription regulation. The simulated dynamic profile of the reagents' abundance and the estimate of the error introduced by the fully deterministic approach were used to evaluate the consistency of the computational model and that of the software tool.
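A minimal sketch of the waiting-time partitioning with hysteresis described above, assuming illustrative propensities and thresholds; it is not the MoBioS implementation, only an outline of the classification rule.

```python
# Sketch of hysteresis-based reaction partitioning with hypothetical thresholds.
FAST_T, SLOW_T = 1e-3, 1e-1   # waiting-time thresholds (seconds)

def classify(waiting_time, previous_label):
    """Fast/slow outside the band; inside the band, keep the previous label."""
    if waiting_time < FAST_T:
        return "deterministic"          # fast reactions -> rate equations
    if waiting_time > SLOW_T:
        return "stochastic"             # slow reactions -> Gillespie step
    return previous_label               # moderate: hysteresis, no switching

labels = {"binding": "stochastic", "transcription": "stochastic", "dimerization": "deterministic"}
propensities = {"binding": 2.0e4, "transcription": 5.0, "dimerization": 60.0}  # events per second

for name, a in propensities.items():
    labels[name] = classify(1.0 / a, labels[name])   # expected waiting time = 1/propensity
print(labels)
```

In this toy run the fast binding reaction is promoted to the deterministic set, the slow transcription reaction stays stochastic, and the moderate dimerization reaction keeps its previous label, which is the hysteresis behavior intended to avoid rapid switching at the thresholds.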
Development of a First-of-a-Kind Deterministic Decision-Making Tool for Supervisory Control System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cetiner, Sacit M; Kisner, Roger A; Muhlheim, Michael David
2015-07-01
Decision-making is the process of identifying and choosing alternatives where each alternative offers a different approach or path to move from a given state or condition to a desired state or condition. The generation of consistent decisions requires that a structured, coherent process be defined, immediately leading to a decision-making framework. The overall objective of the generalized framework is for it to be adopted into an autonomous decision-making framework and tailored to specific requirements for various applications. In this context, automation is the use of computing resources to make decisions and implement a structured decision-making process with limited or no human intervention. The overriding goal of automation is to replace or supplement human decision makers with reconfigurable decision-making modules that can perform a given set of tasks reliably. Risk-informed decision making requires a probabilistic assessment of the likelihood of success given the status of the plant/systems and component health, and a deterministic assessment between plant operating parameters and reactor protection parameters to prevent unnecessary trips and challenges to plant safety systems. The implementation of the probabilistic portion of the decision-making engine of the proposed supervisory control system was detailed in previous milestone reports. Once the control options are identified and ranked based on the likelihood of success, the supervisory control system transmits the options to the deterministic portion of the platform. The deterministic multi-attribute decision-making framework uses variable sensor data (e.g., outlet temperature) and calculates where it is within the challenge state, its trajectory, and margin within the controllable domain using utility functions to evaluate current and projected plant state space for different control decisions. Metrics to be evaluated include stability, cost, time to complete (action), power level, etc. The integration of deterministic calculations using multi-physics analyses (i.e., neutronics, thermal, and thermal-hydraulics) and probabilistic safety calculations allows for the examination and quantification of margin recovery strategies. This also provides validation of the control options identified from the probabilistic assessment. Thus, the thermal-hydraulics analyses are used to validate the control options identified from the probabilistic assessment. Future work includes evaluating other possible metrics and computational efficiencies.
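A hedged sketch of a multi-attribute utility evaluation of the kind described: each candidate control action is scored through per-attribute utility functions and a weighted sum. The attributes, weights, utility shapes, and candidate actions below are hypothetical and are not drawn from the supervisory control system itself.

```python
import numpy as np

# Hypothetical attribute weights (sum to 1) for the multi-attribute evaluation.
weights = {"margin": 0.4, "stability": 0.3, "cost": 0.2, "time": 0.1}

def utility(attr, x):
    """Map raw attribute values to [0, 1] utilities; shapes are illustrative."""
    if attr == "margin":
        return float(np.clip(x / 50.0, 0, 1))          # degC of margin to a trip setpoint
    if attr == "stability":
        return float(x)                                 # already a 0..1 score
    if attr == "cost":
        return float(np.clip(1.0 - x / 1e6, 0, 1))      # dollars, cheaper is better
    if attr == "time":
        return float(np.clip(1.0 - x / 3600.0, 0, 1))   # seconds, faster is better

actions = {
    "reduce power 10%": {"margin": 35.0, "stability": 0.9, "cost": 4e5, "time": 600.0},
    "trip one pump":    {"margin": 20.0, "stability": 0.6, "cost": 1e5, "time": 120.0},
}

scores = {name: sum(weights[a] * utility(a, v) for a, v in attrs.items())
          for name, attrs in actions.items()}
print("preferred action:", max(scores, key=scores.get), scores)
```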
Analytic uncertainty and sensitivity analysis of models with input correlations
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters are often encountered in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of the analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
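A worked example of the standard first-order (delta-method) variance propagation, Var(y) ~ g^T Sigma g with g the gradient at the mean, comparing the result with and without the input correlation term. The toy model and statistics are hypothetical, and this is the generic first-order formula rather than the paper's specific analytic development.

```python
import numpy as np

# Hypothetical model y = f(x1, x2) and input statistics.
def f(x):
    return x[0] ** 2 + 3.0 * x[0] * x[1]

mu = np.array([1.0, 2.0])
sd = np.array([0.1, 0.2])
rho = 0.6                                   # input correlation coefficient

# First-order propagation: Var(y) ~ g^T Sigma g, with g = grad f evaluated at mu.
g = np.array([2.0 * mu[0] + 3.0 * mu[1], 3.0 * mu[0]])
cov = np.array([[sd[0] ** 2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1] ** 2]])
var_corr = g @ cov @ g
var_indep = g @ np.diag(sd ** 2) @ g        # same formula with the correlation ignored

print(f"std(y) with correlation {np.sqrt(var_corr):.3f}, "
      f"assuming independence {np.sqrt(var_indep):.3f}")
```

The gap between the two standard deviations is exactly the kind of effect that decides whether input correlations can safely be ignored in practice.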
Application of Wavelet Filters in an Evaluation of Photochemical Model Performance
Air quality model evaluation can be enhanced with time-scale specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time scale information in observed ozone is not well captured by deterministic models and its incorporation into model pe...
NASA Astrophysics Data System (ADS)
Modgil, Girish A.
Gas turbine engines for aerospace applications have evolved dramatically over the last 50 years through the constant pursuit of better specific fuel consumption, higher thrust-to-weight ratio, and lower noise and emissions, all while maintaining reliability and affordability. An important step in enabling these improvements is a forced response aeromechanics analysis involving structural dynamics and aerodynamics of the turbine. It is well documented that forced response vibration is a very critical problem in aircraft engine design, causing High Cycle Fatigue (HCF). Pushing the envelope on engine design has led to increased forced response problems and subsequently an increased risk of HCF failure. Forced response analysis is used to assess design feasibility of turbine blades for HCF using a material limit boundary set by the Goodman Diagram envelope that combines the effects of steady and vibratory stresses. Forced response analysis is computationally expensive, time consuming and requires multi-domain experts to finalize a result. As a consequence, high-fidelity aeromechanics analysis is performed deterministically and is usually done at the end of the blade design process when it is very costly to make significant changes to geometry or aerodynamic design. To address uncertainties in the system (engine operating point, temperature distribution, mistuning, etc.) and variability in material properties, designers apply conservative safety factors in the traditional deterministic approach, which leads to bulky designs. Moreover, using a deterministic approach does not provide a calculated risk of HCF failure. This thesis describes a process that begins with the optimal aerodynamic design of a turbomachinery blade developed using surrogate models of high-fidelity analyses. The resulting optimal blade undergoes probabilistic evaluation to generate aeromechanics results that provide a calculated likelihood of failure from HCF. An existing Rolls-Royce High Work Single Stage (HWSS) turbine blisk provides a baseline to demonstrate the process. The generalized polynomial chaos (gPC) toolbox that was developed includes sampling methods and constructs polynomial approximations. The toolbox provides not only the means for uncertainty quantification of the final blade design, but also facilitates construction of the surrogate models used for the blade optimization. This work shows that gPC, with a small number of samples, achieves very fast rates of convergence and high accuracy in describing probability distributions without loss of detail in the tails. First, an optimization problem maximizes stage efficiency using turbine aerodynamic design rules as constraints; the function evaluations for this optimization are surrogate models from detailed 3D steady Computational Fluid Dynamics (CFD) analyses. The resulting optimal shape provides a starting point for the 3D high-fidelity aeromechanics (unsteady CFD and 3D Finite Element Analysis (FEA)) UQ study assuming three uncertain input parameters. This investigation seeks to find the steady and vibratory stresses associated with the first torsion mode for the HWSS turbine blisk near the maximum operating speed of the engine. Using gPC to provide uncertainty estimates of the steady and vibratory stresses enables the creation of a Probabilistic Goodman Diagram, which - to the authors' best knowledge - is the first of its kind using high fidelity aeromechanics for turbomachinery blades.
The Probabilistic Goodman Diagram enables turbine blade designers to make more informed design decisions and it allows the aeromechanics expert to assess quantitatively the risk associated with HCF for any mode crossing based on high fidelity simulations.
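A minimal sketch of the gPC idea for a single standard normal input: project a toy stand-in model onto probabilists' Hermite polynomials using Gauss-Hermite quadrature and read off the mean and variance from the expansion coefficients. The model function and expansion order are assumptions; this is not the thesis toolbox.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def model(xi):
    """Toy stand-in for an expensive aeromechanics evaluation; xi ~ N(0, 1)."""
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

order = 6
nodes, weights = hermegauss(order + 1)       # probabilists' Gauss-Hermite rule
weights = weights / np.sqrt(2.0 * np.pi)     # normalize to the standard normal density

# Spectral projection onto probabilists' Hermite polynomials He_k (E[He_k^2] = k!).
coeffs = np.empty(order + 1)
for k in range(order + 1):
    basis = np.zeros(order + 1)
    basis[k] = 1.0
    coeffs[k] = np.sum(weights * model(nodes) * hermeval(nodes, basis)) / factorial(k)

mean = coeffs[0]
var = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, order + 1))
print(f"gPC mean {mean:.4f}, gPC std {var ** 0.5:.4f}")
```

With only a handful of model evaluations at the quadrature nodes, the expansion gives the output statistics that would otherwise require a large Monte Carlo sample, which is the efficiency argument made above.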
NASA Astrophysics Data System (ADS)
Mozaffarilegha, Marjan; Esteki, Ali; Ahadi, Mohsen; Nazeri, Ahmadreza
The speech-evoked auditory brainstem response (sABR) shows how complex sounds such as speech and music are processed in the auditory system. Speech-ABR could be used to evaluate particular impairments and improvements in auditory processing system. Many researchers used linear approaches for characterizing different components of sABR signal, whereas nonlinear techniques are not applied so commonly. The primary aim of the present study is to examine the underlying dynamics of normal sABR signals. The secondary goal is to evaluate whether some chaotic features exist in this signal. We have presented a methodology for determining various components of sABR signals, by performing Ensemble Empirical Mode Decomposition (EEMD) to get the intrinsic mode functions (IMFs). Then, composite multiscale entropy (CMSE), the largest Lyapunov exponent (LLE) and deterministic nonlinear prediction are computed for each extracted IMF. EEMD decomposes sABR signal into five modes and a residue. The CMSE results of sABR signals obtained from 40 healthy people showed that 1st, and 2nd IMFs were similar to the white noise, IMF-3 with synthetic chaotic time series and 4th, and 5th IMFs with sine waveform. LLE analysis showed positive values for 3rd IMFs. Moreover, 1st, and 2nd IMFs showed overlaps with surrogate data and 3rd, 4th and 5th IMFs showed no overlap with corresponding surrogate data. Results showed the presence of noisy, chaotic and deterministic components in the signal which respectively corresponded to 1st, and 2nd IMFs, IMF-3, and 4th and 5th IMFs. While these findings provide supportive evidence of the chaos conjecture for the 3rd IMF, they do not confirm any such claims. However, they provide a first step towards an understanding of nonlinear behavior of auditory system dynamics in brainstem level.
Phenotypic switching of populations of cells in a stochastic environment
NASA Astrophysics Data System (ADS)
Hufton, Peter G.; Lin, Yen Ting; Galla, Tobias
2018-02-01
In biology, phenotypic switching is a common bet-hedging strategy in the face of uncertain environmental conditions. Existing mathematical models often focus on periodically changing environments to determine the optimal phenotypic response. We focus on the case in which the environment switches randomly between discrete states. Starting from an individual-based model we derive stochastic differential equations to describe the dynamics, and obtain analytical expressions for the mean instantaneous growth rates based on the theory of piecewise-deterministic Markov processes. We show that optimal phenotypic responses are non-trivial for slow and intermediate environmental processes, and systematically compare the cases of periodic and random environments. The best response to random switching is more likely to be heterogeneity than in the case of deterministic periodic environments, and net growth rates tend to be higher under stochastic environmental dynamics. The combined system of environment and population of cells can be interpreted as a host-pathogen interaction, in which the host tries to choose environmental switching so as to minimise growth of the pathogen, and in which the pathogen employs a phenotypic switching optimised to increase its growth rate. We discuss the existence of Nash-like mutual best-response scenarios for such host-pathogen games.
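A minimal sketch of a piecewise-deterministic Markov process of the type invoked above: the environment jumps between two states at exponentially distributed times, growth is deterministic between jumps, and the long-run growth rates of a specialist and a bet-hedging strategy are compared. All rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical per-capita growth rates: rows = environment state, cols = strategy.
# Strategy 0 is a specialist, strategy 1 is a bet-hedging generalist.
growth = np.array([[1.0, 0.4],
                   [-0.8, 0.2]])
switch_rate = 0.5                 # environment flips at rate 0.5 per unit time
T = 10_000.0                      # total simulated time

def long_run_growth(strategy):
    """Time-averaged log-growth under random environmental switching (PDMP)."""
    t, env, log_n = 0.0, 0, 0.0
    while t < T:
        dwell = min(rng.exponential(1.0 / switch_rate), T - t)
        log_n += growth[env, strategy] * dwell   # deterministic growth between jumps
        t += dwell
        env = 1 - env                            # Markov jump of the environment
    return log_n / T

for s, name in enumerate(["specialist", "bet-hedger"]):
    print(f"{name}: long-run growth rate ~ {long_run_growth(s):.3f}")
```

With these illustrative numbers the bet-hedging strategy outgrows the specialist over long times even though it is slower in the favorable environment, which is the qualitative point about heterogeneity under random switching.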
DOT National Transportation Integrated Search
1999-09-01
A deterministic algorithm was developed which allowed data from Department of Transportation motor vehicle crash records, state mortality registry records, and hospital admission and emergency department records to be linked for analysis of the impac...
Parallel Stochastic discrete event simulation of calcium dynamics in neuron.
Ishlam Patoary, Mohammad Nazrul; Tropper, Carl; McDougal, Robert A; Zhongwei, Lin; Lytton, William W
2017-09-26
The intracellular calcium signaling pathways of a neuron depend on both biochemical reactions and diffusion. Some quasi-isolated compartments (e.g. spines) are so small and calcium concentrations are so low that one extra molecule diffusing in by chance can make a nontrivial difference in its concentration (percentage-wise). These rare events can affect dynamics discretely in such a way that they cannot be evaluated by a deterministic simulation. Stochastic models of such a system provide a more detailed understanding of these systems than existing deterministic models because they capture their behavior at a molecular level. Our research focuses on the development of a high-performance parallel discrete event simulation environment, Neuron Time Warp (NTW), which is intended for use in the parallel simulation of stochastic reaction-diffusion systems such as intracellular calcium signaling. NTW is integrated with NEURON, a simulator which is widely used within the neuroscience community. We simulate two models, a calcium buffer and a calcium wave model. The calcium buffer model is employed in order to verify the correctness and performance of NTW by comparing it to a serial deterministic simulation in NEURON. We also derived a discrete event calcium wave model from a deterministic model using the stochastic IP3R structure.
NASA Technical Reports Server (NTRS)
Davis, M. W.
1984-01-01
A Real-Time Self-Adaptive (RTSA) active vibration controller was used as the framework in developing a computer program for a generic controller that can be used to alleviate helicopter vibration. Based upon on-line identification of system parameters, the generic controller minimizes vibration in the fuselage by closed-loop implementation of higher harmonic control in the main rotor system. The new generic controller incorporates a set of improved algorithms that give the capability to readily define many different configurations by selecting one of three different controller types (deterministic, cautious, and dual), one of two linear system models (local and global), and one or more of several methods of applying limits on control inputs (external and/or internal limits on higher harmonic pitch amplitude and rate). A helicopter rotor simulation analysis was used to evaluate the algorithms associated with the alternative controller types as applied to the four-bladed H-34 rotor mounted on the NASA Ames Rotor Test Apparatus (RTA) which represents the fuselage. After proper tuning, all three controllers provide more effective vibration reduction and converge more quickly and smoothly with smaller control inputs than the initial RTSA controller (deterministic with external pitch-rate limiting). It is demonstrated that internal limiting of the control inputs significantly improves the overall performance of the deterministic controller.
NASA Astrophysics Data System (ADS)
Kaláb, Zdeněk; Šílený, Jan; Lednická, Markéta
2017-07-01
This paper deals with the seismic stability of the survey areas of potential sites for the deep geological repository for spent nuclear fuel in the Czech Republic. The basic source of data for historical earthquakes up to 1990 was the seismic website [1-]. The most intense earthquake described in the historical period occurred on September 15, 1590, in the Niederroesterreich region (Austria); its reported intensity is Io = 8-9. The source of the contemporary seismic data for the period from 1991 to the end of 2014 was the website [11]. It may be stated, based on the databases and literature review, that in the period from 1900 no earthquake exceeding magnitude 5.1 originated in the territory of the Czech Republic. In order to evaluate seismicity and to assess the impact of seismic effects at the depths of a hypothetical deep geological repository for the next time period, the neo-deterministic method was selected as an extension of the probabilistic method. Each of the seven survey areas was assessed by the neo-deterministic evaluation of the seismic wave-field excited by selected individual events and by determining the maximum loading. Results of the seismological database studies and the neo-deterministic analysis of the Čihadlo locality are presented.
Stochastic Multi-Timescale Power System Operations With Variable Wind Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hongyu; Krad, Ibrahim; Florita, Anthony
This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated together such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models to maintain the computational tractability of the proposed models. Comparative case studies with deterministic approaches are conducted in low wind and high wind penetration scenarios to highlight the advantages of the proposed methodology, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.
NASA Astrophysics Data System (ADS)
Dixon, Kenneth
A lightning data assimilation technique is developed for use with observations from the World Wide Lightning Location Network (WWLLN). The technique nudges the water vapor mixing ratio toward saturation within 10 km of a lightning observation. This technique is applied to deterministic forecasts of convective events on 29 June 2012, 17 November 2013, and 19 April 2011 as well as an ensemble forecast of the 29 June 2012 event using the Weather Research and Forecasting (WRF) model. Lightning data are assimilated over the first 3 hours of the forecasts, and the subsequent impact on forecast quality is evaluated. The nudged deterministic simulations for all events produce composite reflectivity fields that are closer to observations. For the ensemble forecasts of the 29 June 2012 event, the improvement in forecast quality from lightning assimilation is more subtle than for the deterministic forecasts, suggesting that the lightning assimilation may improve ensemble convective forecasts where conventional observations (e.g., aircraft, surface, radiosonde, satellite) are less dense or unavailable.
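A hedged sketch of the nudging step described: within 10 km of a lightning observation, the water vapor mixing ratio is pushed toward saturation. The grid, moisture fields, strike locations, and nudging weight below are hypothetical stand-ins and not the WRF implementation.

```python
import numpy as np

# Hypothetical 2-D model grid (km) with water vapor and saturation mixing ratios (g/kg).
nx, ny, dx = 100, 100, 4.0
x, y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx, indexing="ij")
qv = np.full((nx, ny), 8.0)
qv_sat = np.full((nx, ny), 14.0)

# Observed lightning strike locations (km) from a network such as WWLLN.
strikes = np.array([[120.0, 200.0], [260.0, 60.0]])
radius_km, alpha = 10.0, 0.8          # influence radius and nudging weight (assumed)

near = np.zeros((nx, ny), dtype=bool)
for sx, sy in strikes:
    near |= (x - sx) ** 2 + (y - sy) ** 2 <= radius_km ** 2

# Nudge qv toward saturation only where lightning indicates active convection.
qv[near] += alpha * (qv_sat[near] - qv[near])
print(f"nudged {near.sum()} grid cells; max qv now {qv.max():.1f} g/kg")
```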
On the usage of ultrasound computational models for decision making under ambiguity
NASA Astrophysics Data System (ADS)
Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron
2018-04-01
Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantify the differences using maximum amplitude and power spectrum density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.
Space Radiation Transport Methods Development
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.
2002-01-01
Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 milliseconds and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing and could be utilized in the final design as verification of the deterministic method optimized design.
Calculating the Responses of Self-Powered Radiation Detectors.
NASA Astrophysics Data System (ADS)
Thornton, D. A.
Available from UMI in association with The British Library. The aim of this research is to review and develop the theoretical understanding of the responses of Self -Powered Radiation Detectors (SPDs) in Pressurized Water Reactors (PWRs). Two very different models are considered. A simple analytic model of the responses of SPDs to neutrons and gamma radiation is presented. It is a development of the work of several previous authors and has been incorporated into a computer program (called GENSPD), the predictions of which have been compared with experimental and theoretical results reported in the literature. Generally, the comparisons show reasonable consistency; where there is poor agreement explanations have been sought and presented. Two major limitations of analytic models have been identified; neglect of current generation in insulators and over-simplified electron transport treatments. Both of these are developed in the current work. A second model based on the Explicit Representation of Radiation Sources and Transport (ERRST) is presented and evaluated for several SPDs in a PWR at beginning of life. The model incorporates simulation of the production and subsequent transport of neutrons, gamma rays and electrons, both internal and external to the detector. Neutron fluxes and fuel power ratings have been evaluated with core physics calculations. Neutron interaction rates in assembly and detector materials have been evaluated in lattice calculations employing deterministic transport and diffusion methods. The transport of the reactor gamma radiation has been calculated with Monte Carlo, adjusted diffusion and point-kernel methods. The electron flux associated with the reactor gamma field as well as the internal charge deposition effects of the transport of photons and electrons have been calculated with coupled Monte Carlo calculations of photon and electron transport. The predicted response of a SPD is evaluated as the sum of contributions from individual response mechanisms.
A deterministic global optimization using smooth diagonal auxiliary functions
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.
2015-04-01
In many practical decision-making problems, the functions involved in the optimization process are black-box, with unknown analytical representations, and hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version; its convergence conditions are studied, and numerical experiments executed on eight hundred test functions are presented.
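A hedged sketch of the simpler one-dimensional ancestor of such methods, a Piyavskii-Shubert style search that uses a known Lipschitz constant to build a sawtooth lower bound and refines the interval with the smallest bound; it is not the diagonal 'Divide-the-Best' algorithm itself, and the test function and constant are assumptions.

```python
import numpy as np

def f(x):
    """Black-box-style test function on [0, 10] (illustrative)."""
    return np.sin(x) + 0.3 * np.cos(3.0 * x) + 0.05 * x

L = 2.0                       # valid Lipschitz constant: |f'| <= 1 + 0.9 + 0.05
a, b, n_iter = 0.0, 10.0, 60

# Keep evaluated points; repeatedly refine where the piecewise-linear
# lower bound f(xi) - L|x - xi| is smallest.
xs, fs = [a, b], [f(a), f(b)]
for _ in range(n_iter):
    order = np.argsort(xs)
    xs_s, fs_s = np.array(xs)[order], np.array(fs)[order]
    # Minimum of the sawtooth bound inside each subinterval [x_i, x_{i+1}]:
    x_new = 0.5 * (xs_s[:-1] + xs_s[1:]) + (fs_s[:-1] - fs_s[1:]) / (2.0 * L)
    bound = 0.5 * (fs_s[:-1] + fs_s[1:]) - 0.5 * L * (xs_s[1:] - xs_s[:-1])
    i = int(np.argmin(bound))
    xs.append(float(x_new[i]))
    fs.append(f(xs[-1]))

best = int(np.argmin(fs))
print(f"approximate global minimum f({xs[best]:.3f}) = {fs[best]:.4f}")
```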
Who's flying the plane: serotonin levels, aggression and free will.
Siegel, Allan; Douard, John
2011-01-01
The present paper addresses the philosophical problem raised by current causal neurochemical models of impulsive violence and aggression: to what extent can we hold violent criminal offenders responsible for their conduct if that conduct is the result of deterministic biochemical processes in the brain. This question is currently receiving a great deal of attention among neuroscientists, legal scholars and philosophers. We examine our current knowledge of neuroscience to assess the possible roles of deterministic factors which induce impulsive aggression, and the extent to which this behavior can be controlled by neural conditioning mechanisms. Neural conditioning mechanisms, we suggest, may underlie what we consider the basis of responsible (though not necessarily moral) behavior: the capacity to give and take reasons. The models we first examine are based in part upon the role played by the neurotransmitter, serotonin, in the regulation of violence and aggression. Collectively, these results would appear to argue in favor of the view that low brain serotonin levels induce impulsive aggression which overrides mechanisms related to rational decision making processes. We next present an account of responsibility as based on the capacity to exercise a certain kind of reason-responsive control over one's conduct. The problem with such accounts of responsibility, however, is that they fail to specify a neurobiological realization of such mechanisms of control. We present a neurobiological, and weakly determinist, framework for understanding how persons can exercise guidance control over their conduct. This framework is based upon classical conditioning of neurons in the prefrontal cortex that allow for a decision making mechanism that provides for prefrontal cortical control of the sites in the brain which express aggressive behavior that include the hypothalamus and midbrain periaqueductal gray. The authors support the view that, in many circumstances, neural conditioning mechanisms provide the basis for the control of human aggression in spite of the presence of brain serotonin levels that might otherwise favor the expression of impulsive aggressive behavior. Indeed if those neural conditioning mechanisms underlie the human capacity to exercise control, they may be the neural realization of reason-responsiveness generally. Copyright © 2010 Elsevier Ltd. All rights reserved.
Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)
NASA Astrophysics Data System (ADS)
Kędra, Mariola
2014-02-01
Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools for studying daily river flow data gives consistent, reliable and clear-cut results to the question. The outcomes point out that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of the mountain river in southern Poland, the Raba River.
DOT National Transportation Integrated Search
1999-09-01
A deterministic algorithm was developed which allowed data from Department of Transportation motor vehicle crash records, state mortality registry records, and hospital admission and emergency department records to be linked for analysis of the finan...
Soft network composite materials with deterministic and bio-inspired designs
Jang, Kyung-In; Chung, Ha Uk; Xu, Sheng; Lee, Chi Hwan; Luan, Haiwen; Jeong, Jaewoong; Cheng, Huanyu; Kim, Gwang-Tae; Han, Sang Youn; Lee, Jung Woo; Kim, Jeonghyun; Cho, Moongee; Miao, Fuxing; Yang, Yiyuan; Jung, Han Na; Flavin, Matthew; Liu, Howard; Kong, Gil Woo; Yu, Ki Jun; Rhee, Sang Il; Chung, Jeahoon; Kim, Byunggik; Kwak, Jean Won; Yun, Myoung Hee; Kim, Jin Young; Song, Young Min; Paik, Ungyu; Zhang, Yihui; Huang, Yonggang; Rogers, John A.
2015-01-01
Hard and soft structural composites found in biology provide inspiration for the design of advanced synthetic materials. Many examples of bio-inspired hard materials can be found in the literature; far less attention has been devoted to soft systems. Here we introduce deterministic routes to low-modulus thin film materials with stress/strain responses that can be tailored precisely to match the non-linear properties of biological tissues, with application opportunities that range from soft biomedical devices to constructs for tissue engineering. The approach combines a low-modulus matrix with an open, stretchable network as a structural reinforcement that can yield classes of composites with a wide range of desired mechanical responses, including anisotropic, spatially heterogeneous, hierarchical and self-similar designs. Demonstrative application examples in thin, skin-mounted electrophysiological sensors with mechanics precisely matched to the human epidermis and in soft, hydrogel-based vehicles for triggered drug release suggest their broad potential uses in biomedical devices. PMID:25782446
On the Development of a Deterministic Three-Dimensional Radiation Transport Code
NASA Technical Reports Server (NTRS)
Rockell, Candice; Tweed, John
2011-01-01
Since astronauts on future deep space missions will be exposed to dangerous radiation, there is a need to accurately model the transport of radiation through shielding materials and to estimate the received radiation dose. In response to this need, a three-dimensional deterministic code for space radiation transport is now under development. The new code GRNTRN is based on a Green's function solution of the Boltzmann transport equation that is constructed in the form of a Neumann series. Analytical approximations will be obtained for the first three terms of the Neumann series and the remainder will be estimated by a non-perturbative technique. This work discusses progress made to date and exhibits some computations based on the first two Neumann series terms.
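As a hedged illustration of the Neumann-series idea behind such a solution, the sketch below sums phi = s + K s + K^2 s + ... for a toy discretized operator with small norm and checks the truncated series against a direct solve; the matrix here is a generic stand-in, not the GRNTRN Green's function kernel.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy discretized kernel K (spectral radius well below 1) and source s for phi = s + K phi.
n = 50
K = 0.004 * rng.random((n, n))         # small enough that the series converges quickly
s = rng.random(n)

# Neumann series: phi = s + K s + K^2 s + ...
phi, term = s.copy(), s.copy()
for n_terms in range(1, 20):
    term = K @ term
    phi += term
    if np.linalg.norm(term) < 1e-12:   # stop once the next term is negligible
        break

phi_direct = np.linalg.solve(np.eye(n) - K, s)
print(f"terms used: {n_terms}, max error vs direct solve: {np.abs(phi - phi_direct).max():.2e}")
```

The rapid decay of the terms is the property that lets a transport code keep only the first few terms analytically and estimate the remainder separately.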
Comparative study on neutronics characteristics of a 1500 MWe metal fuel sodium-cooled fast reactor
Ohgama, Kazuya; Aliberti, Gerardo; Stauff, Nicolas E.; ...
2017-02-28
Under the cooperative effort of the Civil Nuclear Energy R&D Working Group within the framework of the U.S.-Japan bilateral, Argonne National Laboratory (ANL) and Japan Atomic Energy Agency (JAEA) have been performing a benchmark study using the Japan Sodium-cooled Fast Reactor (JSFR) design with metal fuel. In this benchmark study, core characteristic parameters at the beginning of cycle were evaluated by the best estimate deterministic and stochastic methodologies of ANL and JAEA. The results obtained by both institutions show a good agreement with less than 200 pcm of discrepancy on the neutron multiplication factor, and less than 3% of discrepancy on the sodium void reactivity, Doppler reactivity, and control rod worth. The results by the stochastic and deterministic approaches were compared by each party to investigate impacts of the deterministic approximation and to understand potential variations in the results due to different calculation methodologies employed. From the detailed analysis of methodologies, it was found that the good agreement in multiplication factor from the deterministic calculations comes from the cancellation of the differences on the methodology (0.4%) and nuclear data (0.6%). The different treatment in reflector cross section generation was estimated as the major cause of the discrepancy between the multiplication factors by the JAEA and ANL deterministic methodologies. Impacts of the nuclear data libraries were also investigated using a sensitivity analysis methodology. Furthermore, the differences on the inelastic scattering cross sections of U-238, ν values and fission cross sections of Pu-239 and µ-average of Na-23 are the major contributors to the difference on the multiplication factors.
Integrated Risk-Informed Decision-Making for an ALMR PRISM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muhlheim, Michael David; Belles, Randy; Denning, Richard S.
Decision-making is the process of identifying decision alternatives, assessing those alternatives based on predefined metrics, selecting an alternative (i.e., making a decision), and then implementing that alternative. The generation of decisions requires a structured, coherent process, or a decision-making process. The overall objective of this work is that the generalized framework be adopted into an autonomous decision-making framework and tailored to specific requirements for various applications. In this context, automation is the use of computing resources to make decisions and implement a structured decision-making process with limited or no human intervention. The overriding goal of automation is to replace or supplement human decision makers with reconfigurable decision-making modules that can perform a given set of tasks rationally, consistently, and reliably. Risk-informed decision-making requires a probabilistic assessment of the likelihood of success given the status of the plant/systems and component health, and a deterministic assessment of the relationship between plant operating parameters and reactor protection parameters to prevent unnecessary trips and challenges to plant safety systems. The probabilistic portion of the decision-making engine of the supervisory control system is based on the control actions associated with an ALMR PRISM. Newly incorporated into the probabilistic models are the prognostic/diagnostic models developed by Pacific Northwest National Laboratory. These allow decisions to incorporate the health of components into the decision-making process. Once the control options are identified and ranked based on the likelihood of success, the supervisory control system transmits the options to the deterministic portion of the platform. The deterministic portion of the decision-making engine uses thermal-hydraulic modeling and components for an advanced liquid-metal reactor Power Reactor Inherently Safe Module. The deterministic multi-attribute decision-making framework uses various sensor data (e.g., reactor outlet temperature, steam generator drum level) and calculates its position within the challenge state, its trajectory, and its margin within the controllable domain, using utility functions to evaluate current and projected plant state space for different control decisions. The metrics that are evaluated are based on reactor trip set points. The integration of the deterministic calculations using multi-physics analyses and probabilistic safety calculations allows for the examination and quantification of margin recovery strategies. This also provides validation of the control options identified from the probabilistic assessment; that is, the thermal-hydraulics analyses are used to validate the control options identified from the probabilistic assessment. Future work includes evaluating other possible metrics and computational efficiencies, and developing a user interface to mimic display panels at a modern nuclear power plant.
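A minimal sketch of the kind of ranking step described above, combining a probabilistic likelihood of success with a deterministic margin-to-trip utility; the variable names, setpoints, and scores are hypothetical and do not represent the supervisory-control implementation itself.

```python
# Minimal sketch (not the actual supervisory-control code): rank control options
# by combining a probabilistic likelihood of success with a deterministic
# utility based on margin to reactor-trip setpoints. All numbers are hypothetical.
TRIP_SETPOINTS = {"reactor_outlet_T_C": 550.0, "sg_drum_level_pct": 90.0}

def margin_utility(projected_state):
    """Average normalized margin to trip setpoints (1 = far from trip, 0 = at trip)."""
    margins = []
    for var, limit in TRIP_SETPOINTS.items():
        margins.append(max(0.0, 1.0 - projected_state[var] / limit))
    return sum(margins) / len(margins)

def rank_options(options):
    """options: list of (name, p_success, projected_state). Higher score is better."""
    scored = [(name, p * margin_utility(state), p) for name, p, state in options]
    return sorted(scored, key=lambda s: s[1], reverse=True)

options = [
    ("reduce_pump_speed", 0.95, {"reactor_outlet_T_C": 520.0, "sg_drum_level_pct": 70.0}),
    ("isolate_feed_train", 0.80, {"reactor_outlet_T_C": 540.0, "sg_drum_level_pct": 85.0}),
]
for name, score, p in rank_options(options):
    print(f"{name}: combined score {score:.3f} (P_success={p:.2f})")
```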
Stochasticity, succession, and environmental perturbations in a fluidic ecosystem.
Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A; Hazen, Terry C; Tiedje, James M; Arkin, Adam P
2014-03-04
Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that groundwater microbial community diverged substantially away from the initial community after EVO amendment and eventually converged to a new community state, which was closely clustered with its initial state. However, their composition and structure were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contradictory to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the roles of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, there are limited successional studies available to support different cases in the conceptual framework, but further well-replicated explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession.
New Criterion and Tool for Caltrans Seismic Hazard Characterization
NASA Astrophysics Data System (ADS)
Shantz, T.; Merriam, M.; Turner, L.; Chiou, B.; Liu, X.
2008-12-01
Caltrans recently adopted new procedures for the development of response spectra for structure design. These procedures incorporate both deterministic and probabilistic criteria. The Next Generation Attenuation (NGA) models (2008) are used for deterministic assessment (using a revised late-Quaternary age fault database), and the USGS 2008 5% in 50-year hazard maps are used for probabilistic assessment. A minimum deterministic spectrum based on a M6.5 earthquake at 12 km is also included. These spectra are enveloped and the largest values used. A new publicly available web-based design tool for calculating the design spectrum will be used for calculations. The tool is built on a Windows-Apache-MySQL-PHP (WAMP) platform and integrates GoogleMaps for increased flexibility in the tool's use. Links to Caltrans data such as pre-construction logs of test borings assist in the estimation of Vs30 values used in the new procedures. Basin effects based on new models developed for the CFM, for the San Francisco Bay area by the USGS, and by Thurber (2008) are also incorporated. It is anticipated that additional layers such as CGS Seismic Hazard Zone maps will be added in the future. Application of the new criterion will result in expected higher levels of ground motion at many bridges west of the Coast Ranges. In eastern California, use of the NGA relationships for strike-slip faulting (the dominant sense of motion in California) will often result in slightly lower expected values for bridges. The expected result is a more realistic prediction of ground motions at bridges, in keeping with those motions developed for other large-scale and important structures. The tool is based on a simplified fault map of California, so it will not be used for more detailed evaluations such as surface rupture determination. Announcements regarding tool availability (expected to be in early 2009) are at http://www.dot.ca.gov/research/index.htm
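The enveloping step described above can be sketched as a point-wise maximum over the candidate spectra; the spectral ordinates below are hypothetical, and the real procedure involves additional adjustments (e.g., for Vs30 and basin effects) not shown here.

```python
import numpy as np

# Hedged sketch of the enveloping step: the design spectrum is taken as the
# point-wise maximum of the deterministic (NGA-based), probabilistic (USGS), and
# minimum-deterministic (M6.5 at 12 km) spectra. Spectral values are hypothetical.
periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])                  # s
sa_deterministic = np.array([0.80, 0.95, 0.70, 0.40, 0.20])    # g
sa_probabilistic = np.array([0.75, 1.00, 0.65, 0.45, 0.18])    # g
sa_minimum       = np.array([0.50, 0.60, 0.45, 0.25, 0.12])    # g

sa_design = np.maximum.reduce([sa_deterministic, sa_probabilistic, sa_minimum])
for t, sa in zip(periods, sa_design):
    print(f"T = {t:4.1f} s  Sa = {sa:.2f} g")
```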
A MODEL OF ESTUARY RESPONSE TO NITROGEN LOADING AND FRESHWATER RESIDENCE TIME
We have developed a deterministic model that relates average annual nitrogen loading rate and water residence time in an estuary to in-estuary nitrogen concentrations and loss rates (e.g. denitrification and incorporation in sediments), and to rates of nitrogen export across the ...
Probabilistic evaluation of uncertainties and risks in aerospace components
NASA Technical Reports Server (NTRS)
Shah, A. R.; Shiao, M. C.; Nagpal, V. K.; Chamis, C. C.
1992-01-01
This paper summarizes a methodology developed at NASA Lewis Research Center which computationally simulates the structural, material, and load uncertainties associated with Space Shuttle Main Engine (SSME) components. The methodology was applied to evaluate the scatter in static, buckling, dynamic, fatigue, and damage behavior of the SSME turbopump blade. Also calculated are the probability densities of typical critical blade responses, such as effective stress, natural frequency, damage initiation, most probable damage path, etc. Risk assessments were performed for different failure modes, and the effect of material degradation on the fatigue and damage behaviors of a blade was calculated using a multi-factor interaction equation. Failure probabilities for different fatigue cycles were computed, and the uncertainties associated with damage initiation and damage propagation due to different load cycles were quantified. Evaluations of the effects of mistuned blades on a rotor were made; uncertainties in the excitation frequency were found to significantly amplify the blade responses of a mistuned rotor. The effects of the number of blades on a rotor were studied. The autocorrelation function of displacements and the probability density function of the first passage time for deterministic and random barriers for structures subjected to random processes also were computed. A brief discussion was included on the future direction of probabilistic structural analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yishen; Zhou, Zhi; Liu, Cong
2016-08-01
As more wind power and other renewable resources are being integrated into the electric power grid, the forecast uncertainty brings operational challenges for power system operators. In this report, different operational strategies for uncertainty management are presented and evaluated. A comprehensive and consistent simulation framework is developed to analyze the performance of different reserve policies and scheduling techniques under uncertainty in wind power. Numerical simulations are conducted on a modified version of the IEEE 118-bus system with a 20% wind penetration level, comparing deterministic, interval, and stochastic unit commitment strategies. The results show that stochastic unit commitment provides a reliable schedule without large increases in operational costs. Moreover, decomposition techniques, such as load shift factor and Benders decomposition, can help in overcoming the computational obstacles to stochastic unit commitment and enable the use of a larger scenario set to represent forecast uncertainty. In contrast, deterministic and interval unit commitment tend to give higher system costs as more reserves are scheduled to address forecast uncertainty. However, these approaches require a much lower computational effort. Choosing a proper lower bound for the forecast uncertainty is important for balancing reliability and system operational cost in deterministic and interval unit commitment. Finally, we find that the introduction of zonal reserve requirements improves reliability, but at the expense of higher operational costs.
Zamora-Chimal, Criseida; Santillán, Moisés; Rodríguez-González, Jesús
2012-10-07
In this paper we introduce a mathematical model for the tryptophan operon regulatory pathway in Bacillus subtilis. This model considers the transcription-attenuation, and the enzyme-inhibition regulatory mechanisms. Special attention is paid to the estimation of all the model parameters from reported experimental data. With the aid of this model we investigate, from a mathematical-modeling point of view, whether the existing multiplicity of regulatory feedback loops is advantageous in some sense, regarding the dynamic response and the biochemical noise in the system. The tryptophan operon dynamic behavior is studied by means of deterministic numeric simulations, while the biochemical noise is analyzed with the aid of stochastic simulations. The model feasibility is tested comparing its stochastic and deterministic results with experimental reports. Our results for the wildtype and for a couple of mutant bacterial strains suggest that the enzyme-inhibition feedback loop, dynamically accelerates the operon response, and plays a major role in the reduction of biochemical noise. Also, the transcription-attenuation feedback loop makes the trp operon sensitive to changes in the endogenous tryptophan level, and increases the amplitude of the biochemical noise. Copyright © 2012 Elsevier Ltd. All rights reserved.
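As a hedged illustration of the deterministic-versus-stochastic comparison described above (not the trp operon model itself), the toy production/degradation process below contrasts an ODE solution with a Gillespie stochastic simulation; all parameters are arbitrary.

```python
import numpy as np

# Toy illustration: compare a deterministic ODE with a Gillespie stochastic
# simulation for a simple production/degradation process, dx/dt = k_prod - k_deg * x.
# This is the kind of comparison described in the abstract, not the operon model.
k_prod, k_deg, T = 20.0, 0.1, 100.0

def ode_trajectory(dt=0.01):
    t, x, ts, xs = 0.0, 0.0, [0.0], [0.0]
    while t < T:
        x += (k_prod - k_deg * x) * dt
        t += dt
        ts.append(t); xs.append(x)
    return np.array(ts), np.array(xs)

def gillespie(seed=0):
    rng = np.random.default_rng(seed)
    t, x, ts, xs = 0.0, 0, [0.0], [0]
    while t < T:
        rates = np.array([k_prod, k_deg * x])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        x += 1 if rng.random() < rates[0] / total else -1
        ts.append(t); xs.append(x)
    return np.array(ts), np.array(xs)

_, x_det = ode_trajectory()
finals = [gillespie(s)[1][-1] for s in range(20)]
print(f"deterministic steady state ~ {x_det[-1]:.1f}")            # ~ k_prod/k_deg = 200
print(f"stochastic mean {np.mean(finals):.1f} +/- {np.std(finals):.1f}")
```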
Stochastic von Bertalanffy models, with applications to fish recruitment.
Lv, Qiming; Pitchford, Jonathan W
2007-02-21
We consider three individual-based models describing growth in stochastic environments. Stochastic differential equations (SDEs) with identical von Bertalanffy deterministic parts are formulated, with a stochastic term which decreases, remains constant, or increases with organism size, respectively. Probability density functions for hitting times are evaluated in the context of fish growth and mortality. Solving the hitting time problem analytically or numerically shows that stochasticity can have a large positive impact on fish recruitment probability. It is also demonstrated that the observed mean growth rate of surviving individuals always exceeds the mean population growth rate, which itself exceeds the growth rate of the equivalent deterministic model. The consequences of these results in more general biological situations are discussed.
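A minimal sketch, under assumed parameter values, of one of the model classes described: a von Bertalanffy deterministic part with an additive stochastic term, integrated by Euler-Maruyama and compared against the deterministic solution.

```python
import numpy as np

# Minimal sketch (assumed parameters): a von Bertalanffy drift dL = k (L_inf - L) dt
# plus an additive noise term sigma dW, integrated by Euler-Maruyama.
k, L_inf, L0, sigma = 0.5, 100.0, 5.0, 4.0
T, dt, n_paths = 10.0, 0.01, 500
n_steps = int(T / dt)

rng = np.random.default_rng(1)
L = np.full(n_paths, L0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    L += k * (L_inf - L) * dt + sigma * dW

L_det = L_inf - (L_inf - L0) * np.exp(-k * T)   # deterministic von Bertalanffy length
print(f"deterministic L(T) = {L_det:.1f}")
print(f"stochastic mean   = {L.mean():.1f}, sd = {L.std():.1f}")
```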
Rapid isolation of cancer cells using microfluidic deterministic lateral displacement structure.
Liu, Zongbin; Huang, Fei; Du, Jinghui; Shu, Weiliang; Feng, Hongtao; Xu, Xiaoping; Chen, Yan
2013-01-01
This work reports a microfluidic device with deterministic lateral displacement (DLD) arrays allowing rapid and label-free cancer cell separation and enrichment from diluted peripheral whole blood, by exploiting size-dependent hydrodynamic forces. Experimental data and theoretical simulations are presented to evaluate the isolation efficiency of various types of cancer cells in the microfluidic DLD structure. We also demonstrated the use of both circular and triangular post arrays for cancer cell separation in cell solution and blood samples. The device was able to achieve high cancer cell isolation efficiency and a high enrichment factor with our optimized design. Therefore, this platform with the DLD structure shows great potential for fundamental and clinical studies of circulating tumor cells.
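For orientation, a widely cited empirical relation for the DLD critical diameter (attributed to Davis) is sketched below; the devices in these studies may rely on different design correlations, so treat it as illustrative only.

```python
# A commonly cited empirical relation for the DLD critical diameter (Davis 2006),
# included as a hedged illustration of how gap size and row-shift period set the
# size cutoff; actual device designs may use different correlations.
def dld_critical_diameter(gap_um, period_N):
    """Critical diameter (um) ~ 1.4 * gap * N**-0.48, with N = 1 / row-shift fraction."""
    return 1.4 * gap_um * period_N ** -0.48

for N in (5, 10, 20, 40):
    print(f"N = {N:3d}:  Dc ~ {dld_critical_diameter(10.0, N):.2f} um (10 um gap)")
```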
A summary of wind power prediction methods
NASA Astrophysics Data System (ADS)
Wang, Yuqi
2018-06-01
The deterministic prediction of wind power, probability prediction, and the prediction of wind power ramp events are introduced in this paper. Deterministic prediction includes prediction by statistical learning based on historical data and prediction with physical models based on NWP data. Because of the great impact of wind power ramp events on the power system, this paper also introduces the prediction of wind power ramp events. Finally, the evaluation indicators for all kinds of prediction are given. Wind power prediction can be a good solution to the adverse effects of wind power on the power system caused by its abrupt, intermittent, and fluctuating nature.
Tiedeman's Approach to Career Development.
ERIC Educational Resources Information Center
Harren, Vincent A.
Basic to Tiedeman's approach to career development and decision making is the assumption that one is responsible for one's own behavior because one has the capacity for choice and lives in a world which is not deterministic. Tiedeman, a cognitive-developmental theorist, views continuity of development as internal or psychological while…
Non-Lipschitzian dynamics for neural net modelling
NASA Technical Reports Server (NTRS)
Zak, Michail
1989-01-01
Failure of the Lipschitz condition in unstable equilibrium points of dynamical systems leads to a multiple-choice response to an initial deterministic input. The evolution of such systems is characterized by a special type of unpredictability measured by unbounded Liapunov exponents. Possible relation of these systems to future neural networks is discussed.
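The role of a failed Lipschitz condition can be made concrete with a standard textbook example (not taken from the paper itself), in which one initial condition admits more than one solution:

```latex
% A standard textbook illustration of how failure of the Lipschitz condition at an
% equilibrium admits multiple responses to a single deterministic input:
\[
  \dot{x} = x^{1/3}, \qquad x(0) = 0,
\]
\[
  x(t) \equiv 0 \quad\text{and}\quad x(t) = \left(\tfrac{2t}{3}\right)^{3/2}
  \ (t \ge 0) \ \text{both satisfy the equation,}
\]
% so the trajectory leaving the equilibrium is not uniquely determined by the
% initial condition.
```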
Deterministic Demographic Characteristics in Tertiary Education: An Exploratory Study
ERIC Educational Resources Information Center
Morton, Lisa; Lamb, Charles
2006-01-01
This paper reports the responses of 235 tertiary commerce students to a questionnaire in relation to their learning and assessment experiences. Significant correlations between measures were used to identify underlying constructs within the overall set of variable measures. Logistic regression incorporating the factors was then used to further…
Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peplow, Douglas E.; Miller, Thomas Martin; Patton, Bruce W
2013-01-01
The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.
Copper benchmark experiment for the testing of JEFF-3.2 nuclear data for fusion applications
NASA Astrophysics Data System (ADS)
Angelone, M.; Flammini, D.; Loreti, S.; Moro, F.; Pillon, M.; Villar, R.; Klix, A.; Fischer, U.; Kodeli, I.; Perel, R. L.; Pohorecky, W.
2017-09-01
A neutronics benchmark experiment on a pure copper block (dimensions 60 × 70 × 70 cm³), aimed at testing and validating recent nuclear data libraries for fusion applications, was performed in the frame of the European Fusion Program at the 14 MeV ENEA Frascati Neutron Generator (FNG). Reaction rates, neutron flux spectra, and doses were measured using different experimental techniques (e.g., activation foil techniques, an NE213 scintillator, and thermoluminescent detectors). This paper first summarizes the analyses of the experiment carried out using the MCNP5 Monte Carlo code and the European JEFF-3.2 library. Large discrepancies between calculation (C) and experiment (E) were found for the reaction rates in both the high and low neutron energy ranges. The analysis was complemented by sensitivity/uncertainty (S/U) analyses using the deterministic SUSD3D and Monte Carlo MCSEN codes, respectively. The S/U analyses made it possible to identify the cross sections and energy ranges that most affect the calculated responses. The largest discrepancy among the C/E values was observed for the thermal (capture) reactions, indicating severe deficiencies in the 63,65Cu capture and elastic cross sections at lower rather than at high energy. Deterministic and MC codes produced similar results. The 14 MeV copper experiment and its analysis thus call for a revision of the JEFF-3.2 copper cross section and covariance data evaluation. A new analysis of the experiment was performed with the MCNP5 code using the revised JEFF-3.3-T2 library released by NEA and a new, not yet distributed, revised JEFF-3.2 Cu evaluation produced by KIT. A noticeable improvement of the C/E results was obtained with both new libraries.
Watershed scale response to climate change--Yampa River Basin, Colorado
Hay, Lauren E.; Battaglin, William A.; Markstrom, Steven L.
2012-01-01
General Circulation Model simulations of future climate through 2099 project a wide range of possible scenarios. To determine the sensitivity and potential effect of long-term climate change on the freshwater resources of the United States, the U.S. Geological Survey Global Change study, "An integrated watershed scale response to global change in selected basins across the United States" was started in 2008. The long-term goal of this national study is to provide the foundation for hydrologically based climate change studies across the nation. Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Yampa River Basin at Steamboat Springs, Colorado.
NASA Astrophysics Data System (ADS)
Dadashzadeh, N.; Duzgun, H. S. B.; Yesiloglu-Gultekin, N.
2017-08-01
While advanced numerical techniques in slope stability analysis are successfully used in deterministic studies, they have so far found limited use in probabilistic analyses due to their high computation cost. The first-order reliability method (FORM) is one of the most efficient probabilistic techniques for performing probabilistic stability analyses while considering the associated uncertainties in the analysis parameters. However, it is not possible to use FORM directly in numerical slope stability evaluations, as it requires the definition of a limit state performance function. In this study, an integrated methodology for probabilistic numerical modeling of rock slope stability is proposed. The methodology is based on the response surface method, where FORM is used to develop an explicit performance function from the results of numerical simulations. The implementation of the proposed methodology is demonstrated by considering a large potential rock wedge in Sumela Monastery, Turkey. The accuracy with which the developed performance function represents the limit state surface is evaluated by monitoring the slope behavior. The calculated probability of failure is compared with the Monte Carlo simulation (MCS) method. The proposed methodology is found to be 72% more efficient than MCS, while accuracy is reduced, with an error of 24%.
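As a hedged illustration of the probabilistic machinery referred to above, the sketch below evaluates a linear limit state g = R - S with normal variables, for which the reliability index and failure probability are exact and can be checked against Monte Carlo simulation; the study's rock-wedge performance function is more complex and built from numerical response surfaces, and the numbers here are arbitrary.

```python
import numpy as np
from scipy.stats import norm

# Hedged illustration: for a linear limit state g = R - S with independent normal
# resistance R and load S, the reliability index beta and Pf = Phi(-beta) are exact
# and can be checked against plain Monte Carlo simulation.
mu_R, sd_R = 1.8, 0.25       # e.g. resisting-to-driving force ratio statistics (assumed)
mu_S, sd_S = 1.0, 0.20

beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
pf_form = norm.cdf(-beta)

rng = np.random.default_rng(0)
n = 1_000_000
g = rng.normal(mu_R, sd_R, n) - rng.normal(mu_S, sd_S, n)
pf_mcs = np.mean(g < 0.0)

print(f"beta = {beta:.2f},  Pf (FORM) = {pf_form:.4f},  Pf (MCS) = {pf_mcs:.4f}")
```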
Dijkshoorn, J P; Schutyser, M A I; Sebris, M; Boom, R M; Wagterveld, R M
2017-10-26
Deterministic lateral displacement technology was originally developed in the realm of microfluidics, but it has potential for larger-scale separation as well. In our previous studies, we proposed a sieve-based lateral displacement device inspired by the principle of deterministic lateral displacement. The advantages of this new device are a lower pressure drop, a lower risk of particle accumulation, higher throughput, and simpler manufacture. However, until now this device has only been investigated for the separation of large particles of around 785 µm diameter. To separate smaller particles, we investigate several design parameters for their influence on the critical particle diameter. In a dimensionless evaluation, device designs with different geometries and dimensions were compared. It was found that sieve-based lateral displacement devices are able to displace particles despite their unusual and asymmetric design, owing to the crucial role of the flow profile. These results demonstrate the possibility of actively steering the velocity profile in order to reduce the critical diameter in deterministic lateral displacement devices, which makes this separation principle more accessible for large-scale, high-throughput applications.
A space radiation transport method development
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.
2004-01-01
Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard finite element method (FEM) geometry common to engineering design practice, enabling the development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 ms, which severely limits the application of Monte Carlo methods to such engineering models. A potential means of improving Monte Carlo efficiency in coupling to spacecraft geometry, based on re-configurable computing, is described and could be utilized in the final design as verification of the design optimized with the deterministic method. Published by Elsevier Ltd on behalf of COSPAR.
A Review of Intra- and Extracellular Antigen Delivery Systems for Virus Vaccines of Finfish
Munang'andu, Hetron Mweemba; Evensen, Øystein
2015-01-01
Vaccine efficacy in aquaculture has for a long time depended on evaluating relative percent survival and antibody responses after vaccination. However, current advances in vaccine immunology show that the route by which antigens are delivered into cells determines the type of adaptive immune response evoked by vaccination. Antigens delivered by the intracellular route induce MHC-I restricted CD8+ responses, while antigens presented through the extracellular route activate MHC-II restricted CD4+ responses, implying that the route of antigen delivery is a conduit to the induction of B- or T-cell immune responses. In finfish, different antigen delivery systems have been explored that include live, DNA, inactivated whole virus, fusion protein, virus-like particle, and subunit vaccines, although the mechanisms linking these delivery systems to protective immunity have not been studied in detail. Hence, in this review we provide a synopsis of different strategies used to administer viral antigens via the intra- or extracellular compartments. Further, we highlight the differences in immune responses induced by antigens processed by the endogenous route compared to exogenously processed antigens. Overall, we anticipate that the synopsis put together in this review will shed insights into the limitations and successes of the current vaccination strategies used in finfish vaccinology. PMID:26065009
First principles pulse pile-up balance equation and fast deterministic solution
NASA Astrophysics Data System (ADS)
Sabbatucci, Lorenzo; Fernández, Jorge E.
2017-08-01
Pulse pile-up (PPU) is an ever-present effect that introduces a distortion into the spectrum measured with radiation detectors and that worsens with increasing emission rate of the radiation source. It is fully ascribable to the pulse-handling circuitry of the detector and is not comprised in the detector response function, which is well explained by a physical model. PPU changes both the number and the height of the recorded pulses, which are related, respectively, to the number of detected particles and their energy. In the present work, a first-principles balance equation for second-order PPU is derived to obtain a post-processing correction to apply to X-ray measurements. The balance equation is solved for the particular case of a rectangular pulse shape using a deterministic iterative procedure whose convergence is demonstrated. The proposed method, deterministic rectangular PPU (DRPPU), requires a minimal amount of information and, as an example, is applied to a solid-state Si detector with active or off-line PPU suppression circuitry. A comparison shows that the results obtained with this fast and simple approach are comparable to those from the more sophisticated procedure using precise detector pulse shapes.
Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments.
Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter
2017-01-01
Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments - one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line model. The derived time domain channel model takes into account major propagation controlling factors including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both channel delay spread and angular spread for tunnel environments are relatively smaller compared to that of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments.
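A minimal sketch of the delay-spread statistics mentioned above, computed from a tapped-delay-line impulse response h(t) = sum_k a_k * delta(t - tau_k); the tap values are hypothetical, not the tunnel measurements or model outputs from the paper.

```python
import numpy as np

# Minimal sketch: mean excess delay and RMS delay spread from a tapped-delay-line
# CIR. Tap delays and amplitudes below are hypothetical.
tau = np.array([0.0, 15.0, 40.0, 80.0])       # tap delays, ns
a   = np.array([1.0, 0.5, 0.25, 0.1])         # tap amplitudes (linear)

p = a ** 2                                    # tap powers
mean_excess_delay = np.sum(p * tau) / np.sum(p)
rms_delay_spread = np.sqrt(np.sum(p * (tau - mean_excess_delay) ** 2) / np.sum(p))

print(f"mean excess delay = {mean_excess_delay:.1f} ns")
print(f"RMS delay spread  = {rms_delay_spread:.1f} ns")
```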
Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments
Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter
2018-01-01
Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments — one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line model. The derived time domain channel model takes into account major propagation controlling factors including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both channel delay spread and angular spread for tunnel environments are relatively smaller compared to that of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments.† PMID:29457801
Large-Amplitude Forced Response of Dynamic Systems
1992-11-01
[Abstract not available; the record contains only fragmentary reference citations, including conference papers by A. Abou-Rayan, A. H. Nayfeh, D. T. Mook, and M. A. Nayfeh on the nonlinear analysis of parametrically excited systems (1990-1991) and a Ph.D. dissertation by A. Abou-Rayan, "Deterministic and Stochastic Responses…," Virginia Polytechnic Institute and State University, Blacksburg, VA, 1991.]
Stochastic Stability of Sampled Data Systems with a Jump Linear Controller
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven
2004-01-01
In this paper an equivalence between the stochastic stability of a sampled-data system and its associated discrete-time representation is established. The sampled-data system consists of a deterministic, linear, time-invariant, continuous-time plant and a stochastic, linear, time-invariant, discrete-time, jump linear controller. The jump linear controller models computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. This paper shows that the known equivalence between the stability of a deterministic sampled-data system and the associated discrete-time representation holds even in a stochastic framework.
NASA Technical Reports Server (NTRS)
Smialek, James L.
2002-01-01
An equation has been developed to model the iterative scale growth and spalling process that occurs during cyclic oxidation of high-temperature materials. Parabolic scale growth and spalling of a constant surface area fraction have been assumed. Interfacial spallation of only the thickest segments was also postulated. This simplicity allowed for representation by a simple deterministic summation series. Inputs are the parabolic growth rate constant, the spall area fraction, the oxide stoichiometry, and the cycle duration. Outputs include the net weight change behavior, as well as the total amount of oxygen and metal consumed, the total amount of oxide spalled, and the mass fraction of oxide spalled. The outputs all follow typical well-behaved trends with the inputs and are in good agreement with previous interfacial models.
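A minimal sketch of an iterative model of this type (parabolic growth each cycle, spallation of a constant area fraction of the thickest segments) is given below; it is an illustrative reconstruction, not the author's exact summation series, and all parameter values are assumed.

```python
import numpy as np

# Illustrative reconstruction of a cyclic-oxidation spalling model: parabolic scale
# growth each cycle, followed by spallation of a constant area fraction of the
# thickest scale segments. Parameters and the metal mass fraction are assumed.
kp = 0.01           # parabolic rate constant, (mg/cm^2)^2 / h
dt = 1.0            # cycle duration, h
Fa = 0.05           # area fraction spalled per cycle
f_metal = 0.529     # metal mass fraction of the oxide (e.g. ~Al in Al2O3)
n_seg, n_cycles = 200, 300

scale = np.zeros(n_seg)     # retained oxide mass per unit area, by surface segment
spalled = 0.0               # cumulative spalled oxide (area-averaged)

for cycle in range(n_cycles):
    scale = np.sqrt(scale ** 2 + kp * dt)          # parabolic growth on each segment
    thickest = np.argsort(scale)[-int(Fa * n_seg):]
    spalled += scale[thickest].sum() / n_seg       # oxide lost from the specimen
    scale[thickest] = 0.0                          # those segments re-oxidize from bare metal

retained = scale.mean()
# specimen weight change = oxygen retained in scale minus metal lost in spalled oxide
net_weight_change = (1.0 - f_metal) * retained - f_metal * spalled
print(f"retained oxide {retained:.2f}, spalled {spalled:.2f}, "
      f"net weight change {net_weight_change:.2f} mg/cm^2")
```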
Gutiérrez, Simón; Fernandez, Carlos; Barata, Carlos; Tarazona, José Vicente
2009-12-20
This work presents a computer model for Risk Assessment of Basins by Ecotoxicological Evaluation (RABETOX). The model is based on whole-effluent toxicity testing and water flows along a specific river basin. It is capable of estimating the risk along a river segment using deterministic and probabilistic approaches. The Henares River Basin was selected as a case study to demonstrate the importance of seasonal hydrological variations in Mediterranean regions. As model inputs, two different ecotoxicity tests (the miniaturized Daphnia magna acute test and the D. magna feeding test) were performed on grab samples from five wastewater treatment plant effluents. Also used as model inputs were flow data from the past 25 years, water velocity measurements, and precise distance measurements using Geographical Information Systems (GIS). The model was implemented in a spreadsheet and the results were interpreted and represented using GIS in order to facilitate risk communication. To better understand the bioassay results, the effluents were screened through SPME-GC/MS analysis. The deterministic model, run for each month during one calendar year, showed a significant seasonal variation of risk while revealing that September represents the worst-case scenario, with values up to 950 Risk Units. This classifies the entire area of study for the month of September as "sublethal significant risk for standard species". The probabilistic approach using Monte Carlo analysis was performed on 7 different forecast points distributed along the Henares River. A 0% probability of finding "low risk" was found at all forecast points, with a more than 50% probability of finding "potential risk for sensitive species". The values obtained through both the deterministic and probabilistic approximations reveal the presence of certain substances which might be causing sublethal effects in the aquatic species present in the Henares River.
Adaptation to Temporally Fluctuating Environments by the Evolution of Maternal Effects.
Dey, Snigdhadip; Proulx, Stephen R; Teotónio, Henrique
2016-02-01
All organisms live in temporally fluctuating environments. Theory predicts that the evolution of deterministic maternal effects (i.e., anticipatory maternal effects or transgenerational phenotypic plasticity) underlies adaptation to environments that fluctuate in a predictably alternating fashion over maternal-offspring generations. In contrast, randomizing maternal effects (i.e., diversifying and conservative bet-hedging), are expected to evolve in response to unpredictably fluctuating environments. Although maternal effects are common, evidence for their adaptive significance is equivocal since they can easily evolve as a correlated response to maternal selection and may or may not increase the future fitness of offspring. Using the hermaphroditic nematode Caenorhabditis elegans, we here show that the experimental evolution of maternal glycogen provisioning underlies adaptation to a fluctuating normoxia-anoxia hatching environment by increasing embryo survival under anoxia. In strictly alternating environments, we found that hermaphrodites evolved the ability to increase embryo glycogen provisioning when they experienced normoxia and to decrease embryo glycogen provisioning when they experienced anoxia. At odds with existing theory, however, populations facing irregularly fluctuating normoxia-anoxia hatching environments failed to evolve randomizing maternal effects. Instead, adaptation in these populations may have occurred through the evolution of fitness effects that percolate over multiple generations, as they maintained considerably high expected growth rates during experimental evolution despite evolving reduced fecundity and reduced embryo survival under one or two generations of anoxia. We develop theoretical models that explain why adaptation to a wide range of patterns of environmental fluctuations hinges on the existence of deterministic maternal effects, and that such deterministic maternal effects are more likely to contribute to adaptation than randomizing maternal effects.
Adaptation to Temporally Fluctuating Environments by the Evolution of Maternal Effects
Dey, Snigdhadip; Proulx, Stephen R.; Teotónio, Henrique
2016-01-01
All organisms live in temporally fluctuating environments. Theory predicts that the evolution of deterministic maternal effects (i.e., anticipatory maternal effects or transgenerational phenotypic plasticity) underlies adaptation to environments that fluctuate in a predictably alternating fashion over maternal-offspring generations. In contrast, randomizing maternal effects (i.e., diversifying and conservative bet-hedging), are expected to evolve in response to unpredictably fluctuating environments. Although maternal effects are common, evidence for their adaptive significance is equivocal since they can easily evolve as a correlated response to maternal selection and may or may not increase the future fitness of offspring. Using the hermaphroditic nematode Caenorhabditis elegans, we here show that the experimental evolution of maternal glycogen provisioning underlies adaptation to a fluctuating normoxia–anoxia hatching environment by increasing embryo survival under anoxia. In strictly alternating environments, we found that hermaphrodites evolved the ability to increase embryo glycogen provisioning when they experienced normoxia and to decrease embryo glycogen provisioning when they experienced anoxia. At odds with existing theory, however, populations facing irregularly fluctuating normoxia–anoxia hatching environments failed to evolve randomizing maternal effects. Instead, adaptation in these populations may have occurred through the evolution of fitness effects that percolate over multiple generations, as they maintained considerably high expected growth rates during experimental evolution despite evolving reduced fecundity and reduced embryo survival under one or two generations of anoxia. We develop theoretical models that explain why adaptation to a wide range of patterns of environmental fluctuations hinges on the existence of deterministic maternal effects, and that such deterministic maternal effects are more likely to contribute to adaptation than randomizing maternal effects. PMID:26910440
DOT National Transportation Integrated Search
1999-09-01
A deterministic algorithm was developed which allowed data from Department of Transportation motor vehicle crash records, state mortality registry records, and hospital admission and emergency department records to be linked for analysis of the types...
Structural Deterministic Safety Factors Selection Criteria and Verification
NASA Technical Reports Server (NTRS)
Verderaime, V.
1992-01-01
Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratio is rooted in the probability distributions of resistive and applied stresses. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability showed that the deterministic method is not reliability-sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
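For context, the standard relation between a central safety factor and a reliability (safety) index for normally distributed resistive and applied stresses is reproduced below; this is a textbook relation, not necessarily the exact safety index derived in the study.

```latex
% Standard relation (textbook form, not necessarily the paper's derivation) linking a
% central safety factor SF = mu_R / mu_S to a reliability index for independent,
% normally distributed resistive (R) and applied (S) stresses:
\[
  \beta \;=\; \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}}
  \;=\; \frac{\mathrm{SF} - 1}{\sqrt{\mathrm{SF}^2\, V_R^2 + V_S^2}},
  \qquad P_f = \Phi(-\beta),
\]
% where V_R and V_S are the coefficients of variation of the resistive and applied
% stress distributions and \Phi is the standard normal CDF.
```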
Stochasticity, succession, and environmental perturbations in a fluidic ecosystem
Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D.; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A.; Hazen, Terry C.; Tiedje, James M.; Arkin, Adam P.
2014-01-01
Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that groundwater microbial community diverged substantially away from the initial community after EVO amendment and eventually converged to a new community state, which was closely clustered with its initial state. However, their composition and structure were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contradictory to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the roles of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, there are limited successional studies available to support different cases in the conceptual framework, but further well-replicated explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession. PMID:24550501
NASA Astrophysics Data System (ADS)
Yan, Y.; Barth, A.; Beckers, J. M.; Candille, G.; Brankart, J. M.; Brasseur, P.
2015-07-01
Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated into a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms, before the assimilation experiments. An incremental analysis update scheme is applied in order to reduce spurious oscillations due to the model state correction. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system. The improvement due to the assimilation is demonstrated using these validation metrics. Finally, the deterministic validation and the probabilistic validation are analyzed jointly. The consistency and complementarity between both validations are highlighted.
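A hedged illustration of the CRPS used above, for a single verification case: an ensemble estimator compared with the closed-form expression for a Gaussian forecast; the values are synthetic, not ocean data.

```python
import numpy as np
from scipy.stats import norm

# Hedged illustration of the CRPS for one verification case: the ensemble estimator
# CRPS = E|X - y| - 0.5 E|X - X'| compared with the closed-form Gaussian expression.
rng = np.random.default_rng(2)
mu, sigma, y_obs = 0.3, 0.5, 0.1           # forecast mean/spread and observation (synthetic)
ens = rng.normal(mu, sigma, 60)            # a 60-member ensemble, as in the study

crps_ens = (np.mean(np.abs(ens - y_obs))
            - 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :])))

z = (y_obs - mu) / sigma
crps_gauss = sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

print(f"ensemble CRPS = {crps_ens:.3f}")
print(f"Gaussian CRPS = {crps_gauss:.3f}")
```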
Determining the bias and variance of a deterministic finger-tracking algorithm.
Morash, Valerie S; van der Velden, Bas H M
2016-06-01
Finger tracking has the potential to expand haptic research and applications, as eye tracking has done in vision research. In research applications, it is desirable to know the bias and variance associated with a finger-tracking method. However, assessing the bias and variance of a deterministic method is not straightforward. Multiple measurements of the same finger position data will not produce different results, implying zero variance. Here, we present a method of assessing deterministic finger-tracking variance and bias through comparison to a non-deterministic measure. A proof-of-concept is presented using a video-based finger-tracking algorithm developed for the specific purpose of tracking participant fingers during a psychological research study. The algorithm uses ridge detection on videos of the participant's hand, and estimates the location of the right index fingertip. The algorithm was evaluated using data from four participants, who explored tactile maps using only their right index finger and all right-hand fingers. The algorithm identified the index fingertip in 99.78% of one-finger video frames and 97.55% of five-finger video frames. Although the algorithm produced slightly biased and more dispersed estimates relative to a human coder, these differences (x = 0.08 cm, y = 0.04 cm) and standard deviations (σx = 0.16 cm, σy = 0.21 cm) were small compared to the size of a fingertip (1.5–2.0 cm). Some example finger-tracking results are provided where corrections are made using the bias and variance estimates.
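The bias-and-dispersion estimate described above can be sketched as a simple differencing against the human-coded reference; the coordinates below are synthetic stand-ins, not the study's data.

```python
import numpy as np

# Minimal sketch: estimate the bias and dispersion of a deterministic tracker by
# differencing its output against a human-coded reference for the same frames.
rng = np.random.default_rng(3)
human = rng.uniform(0, 20, size=(200, 2))                    # reference fingertip (x, y), cm
tracker = human + [0.08, 0.04] + rng.normal(0, [0.16, 0.21], size=(200, 2))

diff = tracker - human
bias_x, bias_y = diff.mean(axis=0)
sd_x, sd_y = diff.std(axis=0, ddof=1)
print(f"bias: x = {bias_x:.2f} cm, y = {bias_y:.2f} cm")
print(f"sd:   x = {sd_x:.2f} cm, y = {sd_y:.2f} cm")
```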
Convergence studies of deterministic methods for LWR explicit reflector methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canepa, S.; Hursin, M.; Ferroukhi, H.
2013-07-01
The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are a priori produced with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified to potentially constitute one of the main sources of error for core analyses of the Swiss operating LWRs, which all belong to the GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is to first recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)
NASA Astrophysics Data System (ADS)
Song, Yiliao; Qin, Shanshan; Qu, Jiansheng; Liu, Feng
2015-10-01
The issue of air quality regarding PM pollution levels in China is a focus of public attention. To address that issue, a series of studies is in progress, including PM monitoring programs, PM source apportionment, and the enactment of new ambient air quality index standards. However, related research concerning computer modeling for estimating future PM trends is rare, despite its significance for forecasting and early warning systems. Accordingly, a study of deterministic and interval forecasts of PM is performed. In this study, data on hourly and 12 h-averaged air pollutants are applied to forecast PM concentrations within the Yangtze River Delta (YRD) region of China. The characteristics of PM emissions are first examined and analyzed using different distribution functions. To improve the distribution fitting that is crucial for estimating PM levels, an artificial intelligence algorithm is incorporated to select the optimal parameters. Following that step, an ANF model is used to conduct deterministic forecasts of PM. With the identified distributions and deterministic forecasts, different levels of PM intervals are estimated. The results indicate that the lognormal or gamma distributions are highly representative of the recorded PM data, with a goodness-of-fit R² of approximately 0.998. Furthermore, the results of the evaluation metrics (MSE, MAPE and CP, AW) also show high accuracy of the deterministic and interval forecasts of PM, indicating that this method enables informative and effective quantification of future PM trends.
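A hedged sketch of the two-step idea above: fit candidate distributions to PM concentrations and derive central prediction intervals from the fitted quantiles; the synthetic data stand in for the hourly PM record, and scipy's default maximum-likelihood fitting is assumed rather than the paper's AI-assisted parameter selection.

```python
import numpy as np
from scipy import stats

# Hedged sketch: fit lognormal and gamma distributions to (synthetic) PM data and
# derive 90% central prediction intervals from the fitted quantiles.
rng = np.random.default_rng(4)
pm = rng.lognormal(mean=3.5, sigma=0.5, size=2000)     # synthetic PM concentrations, ug/m^3

shape, loc, scale = stats.lognorm.fit(pm, floc=0)
a, loc_g, scale_g = stats.gamma.fit(pm, floc=0)

for name, dist, params in [("lognormal", stats.lognorm, (shape, loc, scale)),
                           ("gamma", stats.gamma, (a, loc_g, scale_g))]:
    lo, hi = dist.ppf([0.05, 0.95], *params)
    ks = stats.kstest(pm, dist.cdf, args=params).statistic
    print(f"{name:9s}: 90% interval [{lo:6.1f}, {hi:6.1f}] ug/m^3, KS = {ks:.3f}")
```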
Use of recurrence plots in the analysis of pupil diameter dynamics in narcoleptics
NASA Astrophysics Data System (ADS)
Keegan, Andrew P.; Zbilut, J. P.; Merritt, S. L.; Mercer, P. J.
1993-11-01
Recurrence plots were used to evaluate pupil dynamics of subjects with narcolepsy. Preliminary data indicate that this nonlinear method of analysis may be more useful in revealing underlying deterministic differences than traditional methods such as the FFT and counting statistics.
Schlaier, Juergen R; Beer, Anton L; Faltermeier, Rupert; Fellner, Claudia; Steib, Kathrin; Lange, Max; Greenlee, Mark W; Brawanski, Alexander T; Anthofer, Judith M
2017-06-01
This study compared tractography approaches for identifying cerebellar-thalamic fiber bundles relevant to planning target sites for deep brain stimulation (DBS). In particular, probabilistic and deterministic tracking of the dentate-rubro-thalamic tract (DRTT) and differences between the spatial courses of the DRTT and the cerebello-thalamo-cortical (CTC) tract were compared. Six patients with movement disorders were examined by magnetic resonance imaging (MRI), including two sets of diffusion-weighted images (12 and 64 directions). Probabilistic and deterministic tractography was applied to each diffusion-weighted dataset to delineate the DRTT. Results were compared with regard to their sensitivity in revealing the DRTT, the detection of additional fiber tracts, and processing time. Two sets of regions of interest (ROIs) guided deterministic tractography of the DRTT or the CTC, respectively. Tract distances to an atlas-based reference target were compared. Probabilistic fiber tracking with 64 orientations detected the DRTT in all twelve hemispheres. Deterministic tracking detected the DRTT in nine (12 directions) and in only two (64 directions) hemispheres. Probabilistic tracking was more sensitive in detecting additional fibers (e.g., ansa lenticularis and medial forebrain bundle) than deterministic tracking. Probabilistic tracking took substantially longer than deterministic tracking. Deterministic tracking was more sensitive in detecting the CTC than the DRTT. CTC tracts were located adjacent, but consistently more posterior, to DRTT tracts. These results suggest that probabilistic tracking is more sensitive and robust in detecting the DRTT but harder to implement than deterministic approaches. Although the sensitivity of deterministic tracking is higher for the CTC than the DRTT, targets for DBS based on these tracts likely differ. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
The general situation, (but exemplified in urban areas), where a significant degree of sub-grid variability (SGV) exists in grid models poses problems when comparing gridbased air quality modeling results with observations. Typically, grid models ignore or parameterize processes ...
76 FR 28102 - Notice of Issuance of Regulatory Guide
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-13
..., Probabilistic Risk Assessment Branch, Division of Risk Analysis, Office of Nuclear Regulatory Research, U.S... approaches and methods (whether quantitative or qualitative, deterministic or probabilistic), data, and... uses in evaluating specific problems or postulated accidents, and data that the staff needs in its...
Hardware-software face detection system based on multi-block local binary patterns
NASA Astrophysics Data System (ADS)
Acasandrei, Laurentiu; Barriga, Angel
2015-03-01
Face detection is an important aspect of biometrics, video surveillance, and human-computer interaction. Due to the complexity of the detection algorithms, any face detection system requires a large amount of computational and memory resources. In this communication, an accelerated implementation of the MB-LBP face detection algorithm targeting low-frequency, low-memory, and low-power embedded systems is presented. The resulting implementation is time-deterministic and uses a customizable AMBA IP hardware accelerator. The IP implements the kernel operations of the MB-LBP algorithm and can be used as a universal accelerator for MB-LBP based applications. The IP employs 8 parallel MB-LBP feature evaluator cores, uses a deterministic bandwidth, has a low area profile, and its power consumption is ~95 mW on a Virtex5 XC5VLX50T. The resulting acceleration gain of the implementation is between 5 and 8 times, while the hardware MB-LBP feature evaluation gain is between 69 and 139 times.
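A software reference of the MB-LBP feature the accelerator evaluates (3x3 block means compared with the central block, packed into one byte) is sketched below; it is not the AMBA IP implementation, and the neighbour bit ordering is an assumption.

```python
import numpy as np

# Generic software reference for an MB-LBP feature: the patch is split into a 3x3
# grid of blocks, block mean intensities are compared with the central block, and
# the 8 comparisons form one byte. Not the hardware IP; bit ordering is assumed.
def mb_lbp(patch):
    """patch: 2-D array whose sides are divisible by 3. Returns an 8-bit MB-LBP code."""
    h, w = patch.shape
    bh, bw = h // 3, w // 3
    means = patch.reshape(3, bh, 3, bw).mean(axis=(1, 3))   # 3x3 block means
    center = means[1, 1]
    # clockwise neighbours starting at the top-left block
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        code |= int(means[i, j] >= center) << bit
    return code

patch = np.arange(81, dtype=float).reshape(9, 9)
print(f"MB-LBP code: {mb_lbp(patch):08b}")
```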
Mayo, Christie; Shelley, Courtney; MacLachlan, N. James; Gardner, Ian; Hartley, David; Barker, Christopher
2016-01-01
The global distribution of bluetongue virus (BTV) has been changing recently, perhaps as a result of climate change. To evaluate the risk of BTV infection and transmission in a BTV-endemic region of California, sentinel dairy cows were evaluated for BTV infection, and populations of Culicoides vectors were collected at different sites using carbon dioxide. A deterministic model was developed to quantify risk and guide future mitigation strategies to reduce BTV infection in California dairy cattle. The greatest risk of BTV transmission was predicted within the warm Central Valley of California that contains the highest density of dairy cattle in the United States. Temperature and parameters associated with Culicoides vectors (transmission probabilities, carrying capacity, and survivorship) had the greatest effect on BTV’s basic reproduction number, R0. Based on these analyses, optimal control strategies for reducing BTV infection risk in dairy cattle will be highly reliant upon early efforts to reduce vector abundance during the months prior to peak transmission. PMID:27812161
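The abstract does not give the model equations; the sketch below therefore uses a generic Ross-Macdonald-style expression for a vector-borne basic reproduction number purely to illustrate how R0 responds to vector survivorship and transmission parameters. All parameter names and values are hypothetical and are not taken from the study (Python):

    import numpy as np

    def r0_vector_borne(m, a, b, c, mu, eip, r):
        # Generic Ross-Macdonald-style basic reproduction number for a vector-borne
        # pathogen (illustrative stand-in, not the study's deterministic model).
        # m: vectors per host, a: bites per vector per day, b/c: transmission
        # probabilities, mu: vector mortality rate, eip: extrinsic incubation
        # period (days), r: host recovery rate.
        return (m * a**2 * b * c * np.exp(-mu * eip)) / (mu * r)

    # crude sensitivity scan over vector mortality (all values hypothetical)
    for mu in (0.05, 0.1, 0.2):
        print(mu, r0_vector_borne(m=50, a=0.25, b=0.9, c=0.1, mu=mu, eip=10, r=0.1))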
Dynamic Probabilistic Instability of Composite Structures
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
2009-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics and probabilistic structural analysis. The solution method is an incrementally updated Lagrangian formulation. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the loading rate, the higher the buckling load and the shorter the time to buckling. The lower the probability, the lower the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio, the fiber longitudinal modulus, the dynamic load and the loading rate are the dominant uncertainties, in that order.
Scattering effects of machined optical surfaces
NASA Astrophysics Data System (ADS)
Thompson, Anita Kotha
1998-09-01
Optical fabrication is one of the most labor-intensive industries in existence. Lensmakers use pitch to affix glass blanks to metal chucks that hold the glass as they grind it with tools that have not changed much in fifty years. Recent demands placed on traditional optical fabrication processes in terms of surface accuracy, smoothness, and cost effectiveness have resulted in the exploitation of precision machining technology to develop a new generation of computer numerically controlled (CNC) optical fabrication equipment. This new kind of precision machining process is called deterministic microgrinding. The most conspicuous feature of optical surfaces manufactured by precision machining processes (such as single-point diamond turning or deterministic microgrinding) is the presence of residual cutting tool marks. These residual tool marks exhibit a highly structured topography of periodic azimuthal or radial deterministic marks in addition to random microroughness. These distinct topographic features give rise to surface scattering effects that can significantly degrade optical performance. In this dissertation project we investigate the scattering behavior of machined optical surfaces and their imaging characteristics. In particular, we characterize the residual optical fabrication errors and relate the resulting scattering behavior to the tool and machine parameters in order to evaluate and improve the deterministic microgrinding process. Another desired outcome of the investigation of scattering behavior is the set of optical fabrication tolerances necessary to satisfy specific image quality requirements. Optical fabrication tolerances are a major cost driver for any precision optical manufacturing technology. The derivation and control of the optical fabrication tolerances necessary for different applications and operating wavelength regimes will play a unique and central role in establishing deterministic microgrinding as a preferred and cost-effective optical fabrication process. Other well-understood optical fabrication processes will also be reviewed, and a performance comparison with the conventional grinding and polishing technique will be made to determine any inherent advantages in the optical quality of surfaces produced by other techniques.
Temporal assessment of microbial communities in soils of two contrasting mangroves.
Rigonato, Janaina; Kent, Angela D; Gumiere, Thiago; Branco, Luiz Henrique Zanini; Andreote, Fernando Dini; Fiore, Marli Fátima
Variations in microbial communities promoted by alterations in environmental conditions are reflected in similarities/differences at both taxonomic and functional levels. Here we used a natural gradient within mangroves, from seashore to upland, to contrast the natural variability in bacterial, cyanobacterial and diazotroph assemblages in a pristine area with that in an oil-polluted area over a timespan of three years, based on ARISA (bacteria and cyanobacteria) and nifH T-RFLP (diazotrophs) fingerprinting. The data presented herein indicate that changes in all the communities evaluated were mainly driven by the temporal effect in the contaminated area, while local effects were dominant in the pristine mangrove. A positive correlation in community structure between diazotrophs and cyanobacteria was observed, suggesting the functional importance of this phylum as nitrogen fixers in mangrove soils. Different ecological patterns explained the microbial behavior in the pristine and polluted mangroves. Stochastic models in the pristine mangrove indicate that no single environmental factor determines the bacterial distribution, whereas cyanobacteria and diazotrophs were better fitted by deterministic models in the same area. For the contaminated mangrove site, deterministic models better represented the variations in the communities, suggesting that the presence of oil might change the microbial ecological structures over time. Mangroves represent a unique environment threatened by global change, and this study contributes to the knowledge of microbial distribution in such areas and its response to historic events of persistent contamination. Copyright © 2017 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1995-01-01
Standard methods of structural dynamic analysis assume that the structural characteristics are deterministic. Recognizing that these characteristics are actually statistical in nature, researchers have recently developed a variety of methods that use this information to determine probabilities of a desired response characteristic, such as natural frequency, without using expensive Monte Carlo simulations. One of the problems in these methods is correctly identifying the statistical properties of primitive variables such as geometry, stiffness, and mass. This paper presents a method where the measured dynamic properties of substructures are used instead as the random variables. The residual flexibility method of component mode synthesis is combined with the probabilistic methods to determine the cumulative distribution function of the system eigenvalues. A simple cantilever beam test problem is presented that illustrates the theory.
Deterministic quantum dense coding networks
NASA Astrophysics Data System (ADS)
Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal
2018-07-01
We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state - an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters, as well as the four-qubit Dicke states, can provide a quantum advantage for sending information in deterministic dense coding. Interestingly, however, numerical simulations in the three-qubit scenario reveal that the percentage of states from the GHZ class that are deterministically dense codeable is higher than that of states from the W class.
NASA Astrophysics Data System (ADS)
Lucarini, Valerio; Wouters, Jeroen
2017-09-01
Predicting the response of a system to perturbations is a key challenge in the mathematical and natural sciences. Under suitable conditions on the nature of the system, of the perturbation, and of the observables of interest, response theories allow one to construct operators describing the smooth change of the invariant measure of the system of interest as a function of the small parameter controlling the intensity of the perturbation. In particular, response theories can be developed both for stochastic and for chaotic deterministic dynamical systems, where in the latter case stricter conditions imposing some degree of structural stability are required. In this paper we extend previous findings and derive general response formulae describing how n-point correlations are affected by perturbations to the vector flow. We also show how to compute the response of the spectral properties of the system to perturbations. We then apply our results to the seemingly unrelated problem of coarse graining in multiscale systems: we find explicit formulae describing the change in the terms that parameterise the neglected degrees of freedom resulting from applying perturbations to the full system. All the terms envisioned by the Mori-Zwanzig theory (the deterministic, stochastic, and non-Markovian terms) are affected at first order in the perturbation. The obtained results provide a more comprehensive understanding of the response of statistical mechanical systems to perturbations. They also contribute to the goal of constructing accurate and robust parameterisations and are of potential relevance for fields like molecular dynamics, condensed matter, and geophysical fluid dynamics. We envision possible applications of our general results to the study of the response of climate variability to anthropogenic and natural forcing and to the study of the equivalence of thermostatted statistical mechanical systems.
NASA Astrophysics Data System (ADS)
Gottwald, Georg A.; Wormell, J. P.; Wouters, Jeroen
2016-09-01
Using a sensitive statistical test we determine whether or not one can detect the breakdown of linear response given observations of deterministic dynamical systems. A goodness-of-fit statistic is developed for a linear statistical model of the observations, based on results on central limit theorems for deterministic dynamical systems, and used to detect linear response breakdown. We apply the method to discrete maps which do not obey linear response and show that the successful detection of breakdown depends on the length of the time series, the magnitude of the perturbation and the choice of the observable. We find that, in order to reliably reject the assumption of linear response for typical observables, sufficiently large data sets are needed. Even for simple systems such as the logistic map, one needs on the order of 10^6 observations to reliably detect the breakdown with a confidence level of 95%; if fewer observations are available, one may be falsely led to conclude that linear response theory is valid. The smaller the applied perturbation, the more data are required. For judiciously chosen observables the necessary amount of data can be drastically reduced, but this requires detailed a priori knowledge about the invariant measure, which is typically not available for complex dynamical systems. Furthermore, we explore the use of the fluctuation-dissipation theorem (FDT) in cases with limited data length or coarse-graining of observations. The FDT, if applied naively to a system without linear response, is shown to be very sensitive to the details of the sampling method, resulting in erroneous predictions of the response.
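The sketch below illustrates the basic idea in the simplest possible way: estimate a time-averaged observable of the logistic map for several perturbation sizes, fit a straight line, and inspect the residuals. It is only a crude proxy for the goodness-of-fit statistic developed in the paper, and the observable, the perturbation range and the (deliberately short) sample length are arbitrary choices (Python):

    import numpy as np

    def time_average(eps, n=10**5, burn=1000, x0=0.3):
        # Time average of the observable phi(x) = x for the logistic map
        # x -> (4 - eps) x (1 - x); eps perturbs the map parameter.
        # The paper suggests of the order of 10^6 samples are needed in practice.
        a = 4.0 - eps
        x = x0
        for _ in range(burn):
            x = a * x * (1.0 - x)
        s = 0.0
        for _ in range(n):
            x = a * x * (1.0 - x)
            s += x
        return s / n

    eps = np.linspace(0.0, 0.02, 9)
    resp = np.array([time_average(e) for e in eps])
    # least-squares straight line; large structured residuals hint at a breakdown
    # of linear response for this observable (a crude proxy, not the paper's test)
    coef = np.polyfit(eps, resp, 1)
    residuals = resp - np.polyval(coef, eps)
    print(coef, residuals)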
A class of optimum digital phase locked loops for the DSN advanced receiver
NASA Technical Reports Server (NTRS)
Hurd, W. J.; Kumar, R.
1985-01-01
A class of optimum digital filters for the digital phase locked loop of the Deep Space Network advanced receiver is discussed. The filter minimizes a weighted combination of the variance of the random component of the phase error and the sum square of the deterministic dynamic component of phase error at the output of the numerically controlled oscillator (NCO). By varying the weighting coefficient over a suitable range of values, a wide set of filters is obtained such that, for any specified value of the equivalent loop-noise bandwidth, there corresponds a unique filter in this class. This filter thus has the best transient response among all possible filters of the same bandwidth and type. The optimum filters are also evaluated in terms of their gain margin for stability and their steady-state error performance.
Markstrom, Steven L.
2012-01-01
A software program, called P2S, has been developed that couples the daily stream temperature simulation capabilities of the U.S. Geological Survey Stream Network Temperature model with the watershed hydrology simulation capabilities of the U.S. Geological Survey Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a modular, deterministic, distributed-parameter, physical-process watershed model that simulates hydrologic response to various combinations of climate and land use. Stream Network Temperature was developed to help aquatic biologists and engineers predict the effects of changes in hydrology and energy on water temperatures. P2S will allow scientists and watershed managers to evaluate the effects of historical climate and projected climate change, landscape evolution, and resource management scenarios on watershed hydrology and in-stream water temperature.
Seismic, high wind, tornado, and probabilistic risk assessments of the High Flux Isotope Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, S.P.; Stover, R.L.; Hashimoto, P.S.
1989-01-01
Natural phenomena analyses were performed for the High Flux Isotope Reactor (HFIR). Deterministic and probabilistic evaluations were made to determine the risks resulting from earthquakes, high winds, and tornadoes. Analytic methods, in conjunction with field evaluations and earthquake experience database evaluation methods, were used to provide more realistic results in a shorter amount of time. Plant modifications completed in preparation for HFIR restart and potential future enhancements are discussed.
NASA Astrophysics Data System (ADS)
Vukicevic, T.; Uhlhorn, E.; Reasor, P.; Klotz, B.
2012-12-01
A significant potential for improving numerical model forecast skill of tropical cyclone (TC) intensity by assimilation of airborne inner-core observations in high-resolution models has been demonstrated in recent studies. Although encouraging, the results so far have not provided clear guidance on the critical information added by the inner-core data assimilation with respect to the intensity forecast skill. Better understanding of the relationship between the intensity forecast and the value added by the assimilation is required to further this progress, including the assimilation of satellite observations. One of the major difficulties in evaluating such a relationship is the forecast verification metric of TC intensity: the maximum one-minute sustained wind speed at 10 m above the surface. The difficulty results from two issues: 1) the metric refers to a practically unobservable quantity, since it is an extreme value in a highly turbulent and spatially extensive wind field, and 2) model- and observation-based estimates of this measure are not compatible in terms of spatial and temporal scales, even in high-resolution models. Although the need for predicting the extreme value of near-surface wind is well justified, and the observation-based estimates used in practice are well thought of, a revised metric for the intensity is proposed for the purpose of evaluating numerical forecasts and the impacts of assimilation on them. The metric should enable a robust, observation- and model-resolvable, phenomenologically based evaluation of the impacts. It is shown that the maximum intensity can be represented in terms of a decomposition into deterministic and stochastic components of the wind field. Using the vortex-centric cylindrical reference frame, the deterministic component is defined as the sum of the amplitudes of azimuthal wavenumbers 0 and 1 at the radius of maximum wind, whereas the stochastic component is represented by a non-Gaussian PDF. This decomposition is exact and fully independent of individual TC properties. The decomposition of the maximum wind intensity was first evaluated using several sources of data, including Stepped Frequency Microwave Radiometer surface wind speeds from NOAA and Air Force reconnaissance flights, NOAA P-3 Tail Doppler Radar measurements, and best-track maximum intensity estimates, as well as simulations from Hurricane WRF Ensemble Data Assimilation System (HEDAS) experiments for 83 real data cases. The results confirmed the validity of the method: the stochastic component of the maximum exhibited a non-Gaussian PDF with small mean amplitude and a variance comparable to the known best-track error estimates. The results of the decomposition were then used to evaluate the impact of improved initial conditions on the forecast. It was shown that errors in the deterministic component of the intensity had the dominant effect on forecast skill for the studied cases. This result suggests that data assimilation of inner-core observations could focus primarily on improving the analysis of the wavenumber 0 and 1 initial structure and on the mechanisms responsible for forcing the evolution of this low-wavenumber structure. For the latter analysis, the assimilation of airborne and satellite remote sensing observations could play a significant role.
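A minimal reading of the proposed deterministic component, assuming the wind speed has already been sampled on an evenly spaced azimuthal ring at the radius of maximum wind; the toy input and the exact amplitude convention are illustrative assumptions, not the authors' processing chain (Python):

    import numpy as np

    def deterministic_max_wind(wind_ring):
        # Wavenumber-0 plus wavenumber-1 amplitude of the wind speed sampled on a
        # ring at the radius of maximum wind (azimuth assumed evenly spaced).
        n = len(wind_ring)
        c = np.fft.rfft(wind_ring) / n
        wn0 = np.abs(c[0])          # azimuthal mean
        wn1 = 2.0 * np.abs(c[1])    # amplitude of the wavenumber-1 asymmetry
        return wn0 + wn1

    # toy ring: mean flow + wavenumber-1 asymmetry + "stochastic" residual
    theta = np.linspace(0, 2 * np.pi, 72, endpoint=False)
    ring = 45.0 + 8.0 * np.cos(theta - 0.5) + np.random.randn(72)
    print(deterministic_max_wind(ring))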
NASA Astrophysics Data System (ADS)
Gao, Yi
The development and utilization of wind energy for satisfying electrical demand has received considerable attention in recent years due to its tremendous environmental, social and economic benefits, together with public support and government incentives. Electric power generation from wind energy behaves quite differently from that of conventional sources. The fundamentally different operating characteristics of wind energy facilities therefore affect power system reliability in a different manner than those of conventional systems. The reliability impact of such a highly variable energy source is an important aspect that must be assessed when the wind power penetration is significant. The focus of the research described in this thesis is on the utilization of state sampling Monte Carlo simulation in wind-integrated bulk electric system reliability analysis and the application of these concepts in system planning and decision making. Load forecast uncertainty is an important factor in long-range planning and system development. This thesis describes two approximate approaches developed to reduce the number of steps in a load duration curve that includes load forecast uncertainty, and to provide reasonably accurate generating and bulk system reliability index predictions. The developed approaches are illustrated by application to two composite test systems. This thesis also proposes a method of generating correlated random numbers with uniform distributions and a specified correlation coefficient for use in the state sampling method, and applies it to adequacy assessment of generating systems and of bulk electric systems containing correlated wind farms. The studies described show that it is possible to use the state sampling Monte Carlo simulation technique to quantitatively assess the reliability implications associated with adding wind power to a composite generation and transmission system, including the effects of multiple correlated wind sites. This is an important development, as it permits correlated wind farms to be incorporated in large practical system studies without requiring excessive increases in computer solution time. The procedures described in this thesis for creating monthly and seasonal wind farm models should prove useful in situations where time-period models are required to incorporate scheduled maintenance of generation and transmission facilities. There is growing interest in combining deterministic considerations with probabilistic assessment in order to evaluate quantitative system risk and conduct bulk power system planning. A relatively new approach that incorporates deterministic and probabilistic considerations in a single risk assessment framework has been designated as the joint deterministic-probabilistic approach. The research work described in this thesis illustrates that the joint deterministic-probabilistic approach can be effectively used to integrate wind power in bulk electric system planning. The studies described in this thesis show that the application of the joint deterministic-probabilistic method provides more stringent results for a system with wind power than the traditional deterministic N-1 method, because the joint deterministic-probabilistic technique is driven by the deterministic N-1 criterion with an added probabilistic perspective that recognizes the power output characteristics of a wind turbine generator.
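For the correlated state sampling idea, a generic Gaussian-copula construction is sketched below; the thesis does not necessarily use this construction, and the target correlation and sample size are arbitrary (Python, requires NumPy and SciPy):

    import numpy as np
    from scipy.stats import norm

    def correlated_uniforms(rho, n, rng=np.random.default_rng(0)):
        # Draw n pairs of uniform(0,1) samples with (approximately) the requested
        # Pearson correlation via a Gaussian copula. A generic sketch of correlated
        # state sampling, not the thesis' specific algorithm.
        # Adjust the normal correlation so the uniforms end up near rho
        # (exact relation: rho_uniform = (6/pi) * asin(rho_normal / 2)).
        rho_n = 2.0 * np.sin(np.pi * rho / 6.0)
        cov = np.array([[1.0, rho_n], [rho_n, 1.0]])
        z = rng.multivariate_normal(np.zeros(2), cov, size=n)
        return norm.cdf(z)

    u = correlated_uniforms(0.8, 100000)
    print(np.corrcoef(u.T)[0, 1])   # close to 0.8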
Population density equations for stochastic processes with memory kernels
NASA Astrophysics Data System (ADS)
Lai, Yi Ming; de Kamps, Marc
2017-06-01
We present a method for solving population density equations (PDEs), a mean-field technique describing homogeneous populations of uncoupled neurons, where the populations can be subject to non-Markov noise for arbitrary distributions of jump sizes. The method combines recent developments in two different disciplines that traditionally have had limited interaction: computational neuroscience and the theory of random networks. The method uses a geometric binning scheme, based on the method of characteristics, to capture the deterministic neurodynamics of the population, separating the deterministic and stochastic processes cleanly. We can independently vary the choice of the deterministic model and the model for the stochastic process, leading to a highly modular numerical solution strategy. We demonstrate this by replacing the master equation implicit in many formulations of the PDE formalism by a generalization called the generalized Montroll-Weiss equation, a recent result from random network theory that describes a random walker subject to transitions realized by a non-Markovian process. We demonstrate the method for leaky and quadratic integrate-and-fire neurons subject to spike trains with Poisson and gamma-distributed interspike intervals. We are able to model jump responses of both models accurately to both excitatory and inhibitory input under the assumption that all inputs are generated by one renewal process.
Distribution and regulation of stochasticity and plasticity in Saccharomyces cerevisiae
Dar, R. D.; Karig, D. K.; Cooke, J. F.; ...
2010-09-01
Stochasticity is an inherent feature of complex systems with nanoscale structure. In such systems information is represented by small collections of elements (e.g. a few electrons on a quantum dot), and small variations in the populations of these elements may lead to big uncertainties in the information. Unfortunately, little is known about how to work within this inherently noisy environment to design robust functionality into complex nanoscale systems. Here, we look to the biological cell as an intriguing model system where evolution has mediated the trade-offs between fluctuations and function, and in particular we look at the relationships and trade-offs between stochastic and deterministic responses in the gene expression of budding yeast (Saccharomyces cerevisiae). We find gene regulatory arrangements that control the stochastic and deterministic components of expression, and show that genes that have evolved to respond to stimuli (stress) in the most strongly deterministic way exhibit the most noise in the absence of the stimuli. We show that this relationship is consistent with a bursty 2-state model of gene expression, and demonstrate that this regulatory motif generates the most uncertainty in gene expression when there is the greatest uncertainty in the optimal level of gene expression.
Stochastic population dynamic models as probability networks
M.E. Borsuk and D.C. Lee
2009-01-01
The dynamics of a population and its response to environmental change depend on the balance of birth, death and age-at-maturity, and there have been many attempts to mathematically model populations based on these characteristics. Historically, most of these models were deterministic, meaning that the results were strictly determined by the equations of the model and...
ERIC Educational Resources Information Center
Huang, Hung-Yu; Wang, Wen-Chung
2014-01-01
The DINA (deterministic inputs, noisy 'and' gate) model has been widely used in cognitive diagnosis tests and in the process of test development. The outcomes known as slip and guess are included in the DINA model function representing the responses to the items. This study aimed to extend the DINA model by using the random-effect approach to allow…
Solar cosmic rays as a specific source of radiation risk during piloted space flight.
Petrov, V M
2004-01-01
Solar cosmic rays present one of several radiation sources that are unique to space flight. Under ground-based conditions, the exposure of individuals has a controlled form and radiation risk arises from stochastic radiobiological effects. In space, the presence of solar cosmic rays makes the radiation environment itself stochastic, so that any radiobiological consequences of exposure to solar cosmic rays during the flight will be probabilistic quantities. In this case, the hazard of deterministic effects should also be expressed in radiation risk values. The main deterministic effect under space conditions is radiation sickness. The best dosimetric functional for its analysis is the blood-forming organs dose equivalent rather than an effective dose. In addition, the repair processes in red bone marrow strongly affect the manifestation of this pathology and must be taken into account for radiation risk assessment. A method for taking the above-mentioned peculiarities into account in assessing solar cosmic ray radiation risk during interplanetary flights is given in this report. It is shown that the radiation risk of deterministic effects, defined as the probability of death caused by radiation sickness due to acute solar cosmic ray exposure, can be comparable to the risk of stochastic effects. Its value decreases strongly because of the fractionated mode of exposure during the orbital movement of the spacecraft. On the contrary, during an interplanetary flight, the radiation risk of deterministic effects increases significantly because of the residual component of the blood-forming organs dose from previous solar proton events. These features of radiation responses must be taken into account when estimating radiation hazard in space. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.
Cao, Pengxing; Tan, Xiahui; Donovan, Graham; Sanderson, Michael J; Sneyd, James
2014-08-01
The inositol trisphosphate receptor (IP3R) is one of the most important cellular components responsible for oscillations in the cytoplasmic calcium concentration. Over the past decade, two major questions about the IP3R have arisen. Firstly, how best should the IP3R be modeled? In other words, what fundamental properties of the IP3R allow it to perform its function, and what are their quantitative properties? Secondly, although calcium oscillations are caused by the stochastic opening and closing of small numbers of IP3Rs, is it possible for a deterministic model to be a reliable predictor of calcium behavior? Here, we answer these two questions, using airway smooth muscle cells (ASMC) as a specific example. Firstly, we show that periodic calcium waves in ASMC, as well as the statistics of calcium puffs in other cell types, can be quantitatively reproduced by a two-state model of the IP3R, and thus the behavior of the IP3R is essentially determined by its modal structure. The structure within each mode is irrelevant for function. Secondly, we show that, although calcium waves in ASMC are generated by a stochastic mechanism, IP3R stochasticity is not essential for a qualitative prediction of how oscillation frequency depends on model parameters, and thus deterministic IP3R models demonstrate the same level of predictive capability as do stochastic models. We conclude that, firstly, calcium dynamics can be accurately modeled using simplified IP3R models, and, secondly, to obtain qualitative predictions of how oscillation frequency depends on parameters it is sufficient to use a deterministic model.
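To make the deterministic/stochastic contrast concrete, the sketch below compares a deterministic fraction-open ODE with a Gillespie simulation for a hypothetical two-state (closed/open) channel; the rates, the channel count and the simple two-state structure are illustrative stand-ins and do not reproduce the modal IP3R model of the paper (Python):

    import numpy as np

    # Hypothetical two-state (closed <-> open) channel; rates are made up.
    k_open, k_close, N, T, dt = 0.5, 2.0, 20, 10.0, 0.001

    # deterministic: do/dt = k_open*(1 - o) - k_close*o
    o = 0.0
    det = []
    for _ in range(int(T / dt)):
        o += dt * (k_open * (1.0 - o) - k_close * o)
        det.append(o)

    # stochastic: Gillespie simulation of n_open out of N channels
    rng = np.random.default_rng(1)
    t, n_open, traj = 0.0, 0, [(0.0, 0)]
    while t < T:
        a1, a2 = k_open * (N - n_open), k_close * n_open
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)
        n_open += 1 if rng.random() < a1 / a0 else -1
        traj.append((t, n_open))

    # both fluctuate around the steady state k_open / (k_open + k_close) = 0.2
    print(det[-1], n_open / N)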
A stochastic electricity market clearing formulation with consistent pricing properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavala, Victor M.; Kim, Kibaek; Anitescu, Mihai
We argue that deterministic market clearing formulations introduce arbitrary distortions between day-ahead and expected real-time prices that bias economic incentives. We extend and analyze a previously proposed stochastic clearing formulation in which the social surplus function induces penalties between day-ahead and real-time quantities. We prove that the formulation yields bounded price distortions, and we show that adding a similar penalty term to transmission flows and phase angles ensures boundedness throughout the network. We prove that when the price distortions are zero, day-ahead quantities equal a quantile of their real-time counterparts. The undesired effects of price distortions suggest that stochastic settings provide significant benefits over deterministic ones that go beyond social surplus improvements. Finally, we propose additional metrics to evaluate these benefits.
Zbilut, Joseph P.; Colosimo, Alfredo; Conti, Filippo; Colafranceschi, Mauro; Manetti, Cesare; Valerio, MariaCristina; Webber, Charles L.; Giuliani, Alessandro
2003-01-01
The problem of protein folding vs. aggregation was investigated in acylphosphatase and the amyloid protein Aβ(1–40) by means of nonlinear signal analysis of their chain hydrophobicity. Numerical descriptors of recurrence patterns provided the basis for statistical evaluation of folding/aggregation distinctive features. Static and dynamic approaches were used to elucidate conditions coincident with folding vs. aggregation using comparisons with known protein secondary structure classifications, site-directed mutagenesis studies of acylphosphatase, and molecular dynamics simulations of amyloid protein, Aβ(1–40). The results suggest that a feature derived from principal component space characterized by the smoothness of singular, deterministic hydrophobicity patches plays a significant role in the conditions governing protein aggregation. PMID:14645049
The relationship between stochastic and deterministic quasi-steady state approximations.
Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R
2015-11-23
The quasi steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady-state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of QSSA, and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using their deterministic counterparts providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
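As a minimal illustration of the deterministic QSSA that underlies such reductions, the sketch below compares a full mass-action Michaelis-Menten model with its reduced, non-elementary-rate form; reusing the reduced rate as a stochastic propensity is exactly the heuristic step whose validity the paper examines. Parameter values are arbitrary and not taken from the paper (Python, requires SciPy):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Elementary reactions: S + E <-> C -> P + E, with total enzyme E0.
    k1, km1, k2, E0 = 10.0, 1.0, 1.0, 0.1

    def full(t, y):
        s, c = y
        return [-k1 * s * (E0 - c) + km1 * c,
                k1 * s * (E0 - c) - (km1 + k2) * c]

    def reduced(t, y):
        # deterministic QSSA: dS/dt = -Vmax * S / (Km + S)
        s = y[0]
        Km = (km1 + k2) / k1
        return [-k2 * E0 * s / (Km + s)]

    t_eval = np.linspace(0, 20, 50)
    sol_full = solve_ivp(full, (0, 20), [5.0, 0.0], t_eval=t_eval)
    sol_red = solve_ivp(reduced, (0, 20), [5.0], t_eval=t_eval)
    # small discrepancy when the deterministic QSSA is valid; in a heuristic
    # stochastic model the same Vmax*S/(Km+S) expression would become a propensity
    print(np.max(np.abs(sol_full.y[0] - sol_red.y[0])))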
Abstract: Two physically based, deterministic models, CASC2-D and KINEROS, are evaluated and compared for their performance in modeling sediment movement on a small agricultural watershed over several events. Each model has a different conceptualization of a watershed. CASC...
An analytical framework to assist decision makers in the use of forest ecosystem model predictions
USDA-ARS?s Scientific Manuscript database
The predictions of most terrestrial ecosystem models originate from deterministic simulations. Relatively few uncertainty evaluation exercises in model outputs are performed by either model developers or users. This issue has important consequences for decision makers who rely on models to develop n...
The goal of achieving verisimilitude of air quality simulations to observations is problematic. Chemical transport models such as the Community Multi-Scale Air Quality (CMAQ) modeling system produce volume averages of pollutant concentration fields. When grid sizes are such tha...
Deterministic and fuzzy-based methods to evaluate community resilience
NASA Astrophysics Data System (ADS)
Kammouh, Omar; Noori, Ali Zamani; Taurino, Veronica; Mahin, Stephen A.; Cimellaro, Gian Paolo
2018-04-01
Community resilience is becoming a growing concern for authorities and decision makers. This paper introduces two indicator-based methods to evaluate the resilience of communities based on the PEOPLES framework. PEOPLES is a multi-layered framework that defines community resilience using seven dimensions. Each of the dimensions is described through a set of resilience indicators collected from the literature, and each indicator is linked to a measure allowing the analytical computation of the indicator's performance. The first method proposed in this paper requires data on previous disasters as input and returns as output a performance function for each indicator and a performance function for the whole community. The second method exploits knowledge-based fuzzy modeling for its implementation. This method allows a quantitative evaluation of the PEOPLES indicators using descriptive knowledge rather than deterministic data, while accounting for the uncertainty involved in the analysis. The output of the fuzzy-based method is a resilience index for each indicator as well as a resilience index for the community. The paper also introduces an open-source online tool in which the first method is implemented. A case study illustrating the application of the first method and the usage of the tool is also provided in the paper.
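The sketch below shows one very reduced way of turning linguistic indicator ratings into a community-level index with triangular fuzzy numbers and centroid defuzzification; the membership functions, weights and aggregation rule are illustrative assumptions and do not reproduce the PEOPLES implementation (Python):

    import numpy as np

    # linguistic ratings mapped to triangular fuzzy numbers (a, b, c), all hypothetical
    FUZZY = {"low": (0.0, 0.0, 0.5), "medium": (0.25, 0.5, 0.75), "high": (0.5, 1.0, 1.0)}

    def resilience_index(ratings, weights=None):
        tris = np.array([FUZZY[r] for r in ratings], dtype=float)
        w = np.ones(len(ratings)) if weights is None else np.asarray(weights, float)
        w = w / w.sum()
        a, b, c = w @ tris              # weighted-average triangular number
        return (a + b + c) / 3.0        # centroid of a triangular fuzzy number

    # four indicators rated by experts (toy example)
    print(resilience_index(["high", "medium", "low", "medium"]))   # 0.5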
Deterministic Walks with Choice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.
2014-01-10
This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
Mesoscopic and continuum modelling of angiogenesis
Spill, F.; Guerrero, P.; Alarcon, T.; Maini, P. K.; Byrne, H. M.
2016-01-01
Angiogenesis is the formation of new blood vessels from pre-existing ones in response to chemical signals secreted by, for example, a wound or a tumour. In this paper, we propose a mesoscopic lattice-based model of angiogenesis, in which processes that include proliferation and cell movement are considered as stochastic events. By studying the dependence of the model on the lattice spacing and the number of cells involved, we are able to derive the deterministic continuum limit of our equations and compare it to similar existing models of angiogenesis. We further identify conditions under which the use of continuum models is justified, and others for which stochastic or discrete effects dominate. We also compare different stochastic models for the movement of endothelial tip cells which have the same macroscopic, deterministic behaviour, but lead to markedly different behaviour in terms of production of new vessel cells. PMID:24615007
Lai, C.; Tsay, T.-K.; Chien, C.-H.; Wu, I.-L.
2009-01-01
Researchers at the Hydroinformatic Research and Development Team (HIRDT) of the National Taiwan University undertook a project to create a real-time flood forecasting model, with the aim of predicting the current in the Tamsui River Basin. The model was designed on a deterministic approach, with mathematical modeling of the complex phenomena and specific parameter values operated on to produce a discrete result. The project also devised a rainfall-stage model that relates the rate of rainfall upland directly to the change of state of the river, and is further related to another typhoon-rainfall model. The geographic information system (GIS) data, based on a precise contour model of the terrain, were used to estimate the regions perilous to flooding. The HIRDT, in response to the project's progress, also applied a deterministic model of unsteady flow to help river authorities issue timely warnings and take other emergency measures.
Stochastic inference with spiking neurons in the high-conductance state
NASA Astrophysics Data System (ADS)
Petrovici, Mihai A.; Bill, Johannes; Bytschok, Ilja; Schemmel, Johannes; Meier, Karlheinz
2016-10-01
The highly variable dynamics of neocortical circuits observed in vivo have been hypothesized to represent a signature of ongoing stochastic inference but stand in apparent contrast to the deterministic response of neurons measured in vitro. Based on a propagation of the membrane autocorrelation across spike bursts, we provide an analytical derivation of the neural activation function that holds for a large parameter space, including the high-conductance state. On this basis, we show how an ensemble of leaky integrate-and-fire neurons with conductance-based synapses embedded in a spiking environment can attain the correct firing statistics for sampling from a well-defined target distribution. For recurrent networks, we examine convergence toward stationarity in computer simulations and demonstrate sample-based Bayesian inference in a mixed graphical model. This points to a new computational role of high-conductance states and establishes a rigorous link between deterministic neuron models and functional stochastic dynamics on the network level.
Intelligent Manufacturing of Commercial Optics Final Report CRADA No. TC-0313-92
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J. S.; Pollicove, H.
The project combined the research and development efforts of LLNL and the University of Rochester Center for Optics Manufacturing (COM) to develop a new generation of flexible, computer-controlled optics grinding machines. COM's principal near-term development effort is to commercialize the OPTICAM-SM, a new prototype spherical grinding machine. A crucial requirement for commercializing the OPTICAM-SM is the development of a predictable and repeatable material removal process (deterministic micro-grinding) that yields high quality surfaces and minimizes non-deterministic polishing. OPTICAM machine tools and the fabrication process development studies are part of COM's response to the DOD (ARPA) request to implement a modernization strategy for revitalizing the U.S. optics manufacturing base. This project was entered into in order to develop a new generation of flexible, computer-controlled optics grinding machines.
Single-photon non-linear optics with a quantum dot in a waveguide
NASA Astrophysics Data System (ADS)
Javadi, A.; Söllner, I.; Arcari, M.; Hansen, S. Lindskov; Midolo, L.; Mahmoodian, S.; Kiršanskė, G.; Pregnolato, T.; Lee, E. H.; Song, J. D.; Stobbe, S.; Lodahl, P.
2015-10-01
Strong non-linear interactions between photons enable logic operations for both classical and quantum-information technology. Unfortunately, non-linear interactions are usually feeble and therefore all-optical logic gates tend to be inefficient. A quantum emitter deterministically coupled to a propagating mode fundamentally changes the situation, since each photon inevitably interacts with the emitter, and highly correlated many-photon states may be created. Here we show that a single quantum dot in a photonic-crystal waveguide can be used as a giant non-linearity sensitive at the single-photon level. The non-linear response is revealed from the intensity and quantum statistics of the scattered photons, and contains contributions from an entangled photon-photon bound state. The quantum non-linearity will find immediate applications for deterministic Bell-state measurements and single-photon transistors and paves the way to scalable waveguide-based photonic quantum-computing architectures.
NASA Astrophysics Data System (ADS)
Li, Fei; Subramanian, Kartik; Chen, Minghan; Tyson, John J.; Cao, Yang
2016-06-01
The asymmetric cell division cycle in Caulobacter crescentus is controlled by an elaborate molecular mechanism governing the production, activation and spatial localization of a host of interacting proteins. In previous work, we proposed a deterministic mathematical model for the spatiotemporal dynamics of six major regulatory proteins. In this paper, we study a stochastic version of the model, which takes into account molecular fluctuations of these regulatory proteins in space and time during early stages of the cell cycle of wild-type Caulobacter cells. We test the stochastic model against experimental observations of increased variability of cycle time in cells depleted of the divJ gene product. The deterministic model predicts that overexpression of the divK gene blocks cell cycle progression in the stalked stage; however, stochastic simulations suggest that a small fraction of the mutant cells do complete the cell cycle normally.
Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V. (Oak Ridge, TN); McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan (Ithaca, NY); Merkulov, Vladimir I. (Knoxville, TN); Doktycz, Mitchel J. (Knoxville, TN); Lowndes, Douglas H. (Knoxville, TN); Simpson, Michael L. (Knoxville, TN)
2011-05-17
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.
A deterministic particle method for one-dimensional reaction-diffusion equations
NASA Technical Reports Server (NTRS)
Mascagni, Michael
1995-01-01
We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider time-explicit and time-implicit methods for this system of ordinary differential equations, and we study Picard and Newton iterations for the solution of the implicit system. Next, we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.
Chance, destiny, and the inner workings of ClpXP.
Russell, Rick; Matouschek, Andreas
2014-07-31
AAA+ proteases are responsible for protein degradation in all branches of life. Using single-molecule and ensemble assays, Cordova et al. investigate how the bacterial protease ClpXP steps through a substrate's polypeptide chain and construct a quantitative kinetic model that recapitulates the interplay between stochastic and deterministic behaviors of ClpXP. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clayton, Daniel James; Lipinski, Ronald J.; Bechtel, Ryan D.
As compact and lightweight power sources with reliable, long lives, Radioisotope Power Systems (RPSs) have made space missions to explore the solar system possible. Because hazardous material can be released during a launch accident, the potential health risk of an accident must be quantified so that appropriate launch approval decisions can be made. One part of the risk estimation involves modeling the response of the RPS to potential accident environments. Due to the complexity of modeling the full RPS response deterministically over the dynamic variables, the evaluation is performed in a stochastic manner with a Monte Carlo simulation. The potential consequences can be determined by modeling the transport of the hazardous material in the environment and in human biological pathways. The consequence analysis results are summed and weighted by appropriate likelihood values to give a collection of probabilistic results for the estimation of the potential health risk. This information is used to guide RPS designs, spacecraft designs, mission architecture, or launch procedures to potentially reduce the risk, as well as to inform decision makers of the potential health risks resulting from the use of RPSs for space missions.
NASA Astrophysics Data System (ADS)
Huang, Duruo; Du, Wenqi; Zhu, Hong
2017-10-01
In performance-based seismic design, ground-motion time histories are needed for analyzing the dynamic responses of nonlinear structural systems. However, the number of ground-motion records at the design level is often limited. In order to analyze the seismic performance of structures, ground-motion time histories need to be either selected from recorded strong-motion databases or numerically simulated using stochastic approaches. In this paper, a detailed procedure to select proper acceleration time histories from the Next Generation Attenuation (NGA) database for several cities in Taiwan is presented. Target response spectra are initially determined based on a local ground-motion prediction equation under representative deterministic seismic hazard analyses. Then several suites of ground motions are selected for these cities using the Design Ground Motion Library (DGML), a recently proposed interactive ground-motion selection tool. The selected time histories are representative of the regional seismic hazard and should be beneficial to earthquake studies when comprehensive seismic hazard assessments and site investigations are unavailable. Note that this method is also applicable to site-specific motion selection with target spectra defined near the ground surface to account for site effects.
2011-01-01
Background Bacteria have evolved a rich set of mechanisms for sensing and adapting to adverse conditions in their environment. These are crucial for their survival, which requires them to react to extracellular stresses such as heat shock, ethanol treatment or phage infection. Here we focus on studying the phage shock protein (Psp) stress response in Escherichia coli induced by a phage infection or other damage to the bacterial membrane. This system has not yet been theoretically modelled or analysed in silico. Results We develop a model of the Psp response system, and illustrate how such models can be constructed and analyzed in light of available sparse and qualitative information in order to generate novel biological hypotheses about their dynamical behaviour. We analyze this model using tools from Petri-net theory and study its dynamical range that is consistent with currently available knowledge by conditioning model parameters on the available data in an approximate Bayesian computation (ABC) framework. Within this ABC approach we analyze stochastic and deterministic dynamics. This analysis allows us to identify different types of behaviour and these mechanistic insights can in turn be used to design new, more detailed and time-resolved experiments. Conclusions We have developed the first mechanistic model of the Psp response in E. coli. This model allows us to predict the possible qualitative stochastic and deterministic dynamic behaviours of key molecular players in the stress response. Our inferential approach can be applied to stress response and signalling systems more generally: in the ABC framework we can condition mathematical models on qualitative data in order to delimit e.g. parameter ranges or the qualitative system dynamics in light of available end-point or qualitative information. PMID:21569396
Hazledine, Saul; Sun, Jongho; Wysham, Derin; Downie, J. Allan; Oldroyd, Giles E. D.; Morris, Richard J.
2009-01-01
Legume plants form beneficial symbiotic interactions with nitrogen fixing bacteria (called rhizobia), with the rhizobia being accommodated in unique structures on the roots of the host plant. The legume/rhizobial symbiosis is responsible for a significant proportion of the global biologically available nitrogen. The initiation of this symbiosis is governed by a characteristic calcium oscillation within the plant root hair cells and this signal is activated by the rhizobia. Recent analyses on calcium time series data have suggested that stochastic effects have a large role to play in defining the nature of the oscillations. The use of multiple nonlinear time series techniques, however, suggests an alternative interpretation, namely deterministic chaos. We provide an extensive, nonlinear time series analysis on the nature of this calcium oscillation response. We build up evidence through a series of techniques that test for determinism, quantify linear and nonlinear components, and measure the local divergence of the system. Chaos is common in nature and it seems plausible that properties of chaotic dynamics might be exploited by biological systems to control processes within the cell. Systems possessing chaotic control mechanisms are more robust in the sense that the enhanced flexibility allows more rapid response to environmental changes with less energetic costs. The desired behaviour could be most efficiently targeted in this manner, supporting some intriguing speculations about nonlinear mechanisms in biological signaling. PMID:19675679
PRMS-IV, the precipitation-runoff modeling system, version 4
Markstrom, Steven L.; Regan, R. Steve; Hay, Lauren E.; Viger, Roland J.; Webb, Richard M.; Payn, Robert A.; LaFontaine, Jacob H.
2015-01-01
Computer models that simulate the hydrologic cycle at a watershed scale facilitate assessment of variability in climate, biota, geology, and human activities on water availability and flow. This report describes an updated version of the Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of various combinations of climate and land use on streamflow and general watershed hydrology. Several new model components were developed, and all existing components were updated, to enhance performance and supportability. This report describes the history, application, concepts, organization, and mathematical formulation of the Precipitation-Runoff Modeling System and its model components. This updated version provides improvements in (1) system flexibility for integrated science, (2) verification of conservation of water during simulation, (3) methods for spatial distribution of climate boundary conditions, and (4) methods for simulation of soil-water flow and storage.
Optimisation of lateral car dynamics taking into account parameter uncertainties
NASA Astrophysics Data System (ADS)
Busch, Jochen; Bestle, Dieter
2014-02-01
Simulation studies on an active all-wheel-steering car show that disturbances of vehicle parameters have a strong influence on lateral car dynamics. This motivates the need for robust design against such parameter uncertainties. A specific parametrisation is established combining deterministic, velocity-dependent steering control parameters with partly uncertain, velocity-independent vehicle parameters for simultaneous use in a numerical optimisation process. Model-based objectives are formulated and summarised in a multi-objective optimisation problem, where especially the lateral steady-state behaviour is improved by an adaptation strategy based on measurable uncertainties. The normally distributed uncertainties are generated by optimal Latin hypercube sampling, and a response-surface-based strategy helps to cut down time-consuming model evaluations, which makes it possible to use a genetic optimisation algorithm. Optimisation results are discussed in different criterion spaces, and the achieved improvements confirm the validity of the proposed procedure.
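A small sketch of the sampling step only: drawing Latin hypercube samples of normally distributed parameter uncertainties that would then feed the response-surface fit. The parameter names, means and standard deviations are hypothetical, and the paper's optimal Latin hypercube construction is replaced here by SciPy's standard one (Python, requires SciPy >= 1.7):

    import numpy as np
    from scipy.stats import norm, qmc

    # hypothetical uncertain vehicle parameters: mass [kg], wheelbase [m], tyre factor [-]
    means = np.array([1500.0, 2.7, 0.9])
    stds = np.array([75.0, 0.01, 0.05])

    sampler = qmc.LatinHypercube(d=3, seed=0)
    u = sampler.random(n=100)                        # stratified uniforms in [0,1)^3
    samples = means + stds * norm.ppf(np.clip(u, 1e-9, 1 - 1e-9))

    # these samples would drive the model evaluations used to fit a response surface,
    # which the genetic optimiser then queries instead of the full vehicle model
    print(samples.mean(axis=0), samples.std(axis=0))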
ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.
Morota, Gota
2017-12-20
Deterministic formulas for the accuracy of genomic predictions highlight the relationships between prediction accuracy and the potential factors influencing it, prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. The Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus the genetic factors impacting it, while requiring only mouse navigation in a web browser. ShinyGPAS is available at https://chikudaisei.shinyapps.io/shinygpas/. ShinyGPAS is a Shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open-source software, and it is hosted online as a freely available web-based resource with an intuitive graphical user interface.
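For intuition, here is a minimal Python sketch of one deterministic accuracy formula of the kind ShinyGPAS visualizes, assuming a Daetwyler-type expression r = sqrt(N h^2 / (N h^2 + Me)); the choice of formula, the parameter values, and the plotting layout below are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

def daetwyler_accuracy(n, h2, me):
    """Expected genomic prediction accuracy from a Daetwyler-type
    deterministic formula: r = sqrt(n*h2 / (n*h2 + Me)).
    n  : training population size
    h2 : trait heritability
    me : number of independently segregating chromosome segments
    """
    return np.sqrt(n * h2 / (n * h2 + me))

# Line plot of accuracy versus training-population size, loosely analogous
# to the interactive plots ShinyGPAS produces. Me = 1000 is an arbitrary value.
n = np.linspace(100, 20000, 200)
for h2 in (0.2, 0.5, 0.8):
    plt.plot(n, daetwyler_accuracy(n, h2, me=1000), label=f"h2 = {h2}")
plt.xlabel("training population size")
plt.ylabel("expected accuracy")
plt.legend()
plt.show()
```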
Stochastic and deterministic processes regulate spatio-temporal variation in seed bank diversity
Alejandro A. Royo; Todd E. Ristau
2013-01-01
Seed banks often serve as reservoirs of taxonomic and genetic diversity that buffer plant populations and influence post-disturbance vegetation trajectories; yet evaluating their importance requires understanding how their composition varies within and across spatial and temporal scales (α- and β-diversity). Shifts in seed bank diversity are strongly...
Use of Remote Sensing and Dust Modelling to Evaluate Ecosystem Phenology and Pollen Dispersal
NASA Technical Reports Server (NTRS)
Luvall, Jeffrey C.; Sprigg, William A.; Watts, Carol; Shaw, Patrick
2007-01-01
The impact of pollen release and downwind concentrations can be evaluated utilizing remote sensing. Previous NASA studies have addressed airborne dust prediction systems, such as PHAiRS (Public Health Applications in Remote Sensing), which have determined that pollen forecasts and simulations are possible. By adapting the deterministic dust model used in PHAiRS (run as an in-line system with the National Weather Service operational forecast model) to simulate downwind dispersal of pollen, initializing the model with pollen source regions from MODIS, and assessing the results, a rapid prototype concept can be produced. We will present the results of our effort to develop a deterministic model for predicting and simulating pollen emission and downwind concentration, to study details of phenology and meteorology and their dependencies, and to assess the promise of a credible real-time forecast system to support public health and agricultural science and service. Previous studies have addressed PHAiRS research, the use of NASA data, the dust model, and the potential of PHAiRS to improve public health and environmental services long into the future.
NASA Astrophysics Data System (ADS)
Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre
2015-04-01
Sea surface height, sea surface temperature and temperature profiles at depth collected between January and December 2005 are assimilated into a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms, before the assimilation experiments. An incremental analysis update scheme is applied in order to reduce spurious oscillations due to the model state correction. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with observations used in the assimilation experiments and independent observations, which goes further than most previous studies and constitutes one of the original points of this paper. Regarding the deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. Regarding the probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system. The improvement of the assimilation is demonstrated using these validation metrics. Finally, the deterministic validation and the probabilistic validation are analysed jointly. The consistency and complementarity between both validations are highlighted. Highly reliable situations, in which the RMS error and the CRPS give the same information, are identified for the first time in this paper.
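For concreteness, a minimal Python sketch of the sample-based CRPS estimator for a single forecast/observation pair is given below; the ensemble values are synthetic placeholders, not data from the study.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based CRPS estimator for one forecast/observation pair:
    CRPS = E|X - y| - 0.5 * E|X - X'|,
    where X, X' are independent draws from the ensemble and y is the
    observation. Lower values indicate a better probabilistic forecast."""
    x = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(x - obs))
    term2 = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term1 - term2

# Toy example: a 60-member ensemble of sea surface temperature (degC).
rng = np.random.default_rng(0)
ensemble = rng.normal(loc=18.2, scale=0.4, size=60)
print(crps_ensemble(ensemble, obs=18.5))
```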
A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments
NASA Astrophysics Data System (ADS)
Gokhale, Sharad; Khare, Mukesh
Several deterministic air quality models evaluate and predict the frequently occurring pollutant concentrations well but, in general, are incapable of predicting the 'extreme' concentrations. In contrast, the statistical distribution models overcome this limitation of the deterministic models and predict the 'extreme' concentrations. However, environmental damages are caused both by the extremes and by the sustained average concentrations of pollutants. Hence, a model should predict not only the 'extreme' ranges but also the 'middle' ranges of pollutant concentrations, i.e. the entire range. Hybrid modelling is one of the techniques that estimates/predicts the 'entire range' of the distribution of pollutant concentrations by combining deterministic models with suitable statistical distribution models (Jakeman et al., 1988). In the present paper, a hybrid model has been developed to predict the carbon monoxide (CO) concentration distributions at one of the traffic intersections, Income Tax Office (ITO), in Delhi, where the traffic is heterogeneous in nature and the meteorology is 'tropical'. The traffic consists of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two-wheelers (scooters, motorcycles, etc.). The model combines the general finite line source model (GFLSM) as its deterministic component, and the log-logistic distribution (LLD) model as its statistical component. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match the observed CO concentration data within the 5-99 percentile range. The model is further validated at a different street location, i.e. the Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well (d=0.91) in the 10-95 percentile range. A regulatory compliance analysis is also developed to estimate the probability of exceedance of hourly CO concentrations beyond the National Ambient Air Quality Standards (NAAQS) of India.
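The statistical half of such a hybrid scheme can be sketched in Python as fitting a log-logistic distribution to hourly CO concentrations and evaluating an exceedance probability against a regulatory limit; the data, fitted parameters, and the 4 mg/m3 limit below are purely illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical hourly CO concentrations (mg/m3) at an intersection; in the
# hybrid approach these would come from the GFLSM deterministic model plus
# observations, not from random numbers as done here for illustration.
rng = np.random.default_rng(1)
co_hourly = stats.fisk.rvs(c=3.0, scale=2.5, size=2000, random_state=rng)

# Fit the log-logistic (Fisk) distribution, the statistical component.
c, loc, scale = stats.fisk.fit(co_hourly, floc=0.0)

# Probability of exceeding a regulatory limit; 4 mg/m3 is a placeholder.
limit = 4.0
p_exceed = stats.fisk.sf(limit, c, loc=loc, scale=scale)
print(f"P(CO > {limit} mg/m3) = {p_exceed:.3f}")
```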
NASA Astrophysics Data System (ADS)
Sinner, K.; Teasley, R. L.
2016-12-01
Groundwater models serve as integral tools for understanding flow processes and informing stakeholders and policy makers in management decisions. Historically, these models tended towards a deterministic nature, relying on historical data to predict and inform future decisions based on model outputs. This research works towards developing a stochastic method of modeling recharge inputs from pipe main break predictions in an existing groundwater model, which subsequently generates desired outputs incorporating future uncertainty rather than deterministic data. The case study for this research is the Barton Springs segment of the Edwards Aquifer near Austin, Texas. Researchers and water resource professionals have modeled the Edwards Aquifer for decades due to its high water quality, fragile ecosystem, and stakeholder interest. The original case study and model that this research builds upon was developed as a co-design problem with regional stakeholders, and the model outcomes are generated specifically for communication with policy makers and managers. Recently, research in the Barton Springs segment demonstrated a significant contribution of urban, or anthropogenic, recharge to the aquifer, particularly during dry periods, using deterministic data sets. Due to the social and ecological importance of urban water loss to recharge, this study develops an evaluation method to help predict pipe breaks and their related recharge contributions within the Barton Springs segment of the Edwards Aquifer. To benefit groundwater management decision processes, the performance measures captured in the model results, such as springflow, head levels, storage, and others, were determined by previous work in elicitation of problem framing to determine stakeholder interests and concerns. The results of the previous deterministic model and the stochastic model are compared to determine gains to stakeholder knowledge through the additional modeling.
Current fluctuations in periodically driven systems
NASA Astrophysics Data System (ADS)
Barato, Andre C.; Chetrite, Raphael
2018-05-01
Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.
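As background, a hedged LaTeX sketch of the standard large-deviation objects referenced above, written in generic notation that is assumed here rather than taken from the paper; the paper's result is that, for time-periodic rates, this scaled cumulant generating function is given by a maximal Floquet exponent.

```latex
% Scaled cumulant generating function (SCGF) of a time-integrated current J_t:
\lambda(s) \;=\; \lim_{t\to\infty} \frac{1}{t}\,
  \ln \big\langle e^{\,s\,J_t} \big\rangle ,
\qquad
% the large-deviation rate function follows by Legendre transform:
I(j) \;=\; \sup_{s}\,\bigl[\, s\,j - \lambda(s) \,\bigr].
```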
Evaluation of the Plant-Craig stochastic convection scheme in an ensemble forecasting system
NASA Astrophysics Data System (ADS)
Keane, R. J.; Plant, R. S.; Tennant, W. J.
2015-12-01
The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme with a simple stochastic element only, from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered, over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.
Chen, Huipeng; Poulard, David; Forman, Jason; Crandall, Jeff; Panzer, Matthew B
2018-07-04
Evaluating the biofidelity of pedestrian finite element models (PFEM) using postmortem human subjects (PMHS) is a challenge because differences in anthropometry between PMHS and PFEM could limit a model's capability to accurately capture cadaveric responses. Geometrical personalization via morphing can modify the PFEM geometry to match the specific PMHS anthropometry, which could alleviate this issue. In this study, the Total Human Model for Safety (THUMS) PFEM (Ver 4.01) was compared to the cadaveric response in vehicle-pedestrian impacts using geometrically personalized models. The AM50 THUMS PFEM was used as the baseline model, and 2 morphed PFEM were created to the anthropometric specifications of 2 obese PMHS used in a previous pedestrian impact study with a mid-size sedan. The same measurements as those obtained during the PMHS tests were calculated from the simulations (kinematics, accelerations, strains), and biofidelity metrics based on signals correlation (correlation and analysis, CORA) were established to compare the response of the models to the experiments. Injury outcomes were predicted deterministically (through strain-based threshold) and probabilistically (with injury risk functions) and compared with the injuries reported in the necropsy. The baseline model could not accurately capture all aspects of the PMHS kinematics, strain, and injury risks, whereas the morphed models reproduced biofidelic response in terms of trajectory (CORA score = 0.927 ± 0.092), velocities (0.975 ± 0.027), accelerations (0.862 ± 0.072), and strains (0.707 ± 0.143). The personalized THUMS models also generally predicted injuries consistent with those identified during posttest autopsy. The study highlights the need to control for pedestrian anthropometry when validating pedestrian human body models against PMHS data. The information provided in the current study could be useful for improving model biofidelity for vehicle-pedestrian impact scenarios.
Massatti, Rob; Knowles, L Lacey
2016-08-01
Deterministic processes may uniquely affect codistributed species' phylogeographic patterns such that discordant genetic variation among taxa is predicted. Yet, explicitly testing expectations of genomic discordance in a statistical framework remains challenging. Here, we construct spatially and temporally dynamic models to investigate the hypothesized effect of microhabitat preferences on the permeability of glaciated regions to gene flow in two closely related montane species. Utilizing environmental niche models from the Last Glacial Maximum and the present to inform demographic models of changes in habitat suitability over time, we evaluate the relative probabilities of two alternative models using approximate Bayesian computation (ABC) in which glaciated regions are either (i) permeable or (ii) a barrier to gene flow. Results based on the fit of the empirical data to data sets simulated using a spatially explicit coalescent under alternative models indicate that genomic data are consistent with predictions about the hypothesized role of microhabitat in generating discordant patterns of genetic variation among the taxa. Specifically, a model in which glaciated areas acted as a barrier was much more probable based on patterns of genomic variation in Carex nova, a wet-adapted species. However, in the dry-adapted Carex chalciolepis, the permeable model was more probable, although the difference in the support of the models was small. This work highlights how statistical inferences can be used to distinguish deterministic processes that are expected to result in discordant genomic patterns among species, including species-specific responses to climate change. © 2016 John Wiley & Sons Ltd.
Weight Stigma Reduction and Genetic Determinism
Hilbert, Anja
2016-01-01
One major approach to weight stigma reduction consists of decreasing beliefs about the personal controllability of—and responsibility for—obesity by educating about its biogenetic causes. Evidence on the efficacy of this approach is mixed, and it remains unclear whether this would create a deterministic view, potentially leading to detrimental side-effects. Two independent studies from Germany using randomized designs with delayed-intervention control groups served to (1) develop and pilot a brief, interactive stigma reduction intervention to educate N = 128 university students on gene × environment interactions in the etiology of obesity; and to (2) evaluate this intervention in the general population (N = 128) and determine mechanisms of change. The results showed (1) decreased weight stigma and controllability beliefs two weeks post-intervention in a student sample; and (2) decreased internal attributions and increased genetic attributions, knowledge, and deterministic beliefs four weeks post-intervention in a population sample. Lower weight stigma was longitudinally predicted by a decrease in controllability beliefs and an increase in the belief in genetic determinism, especially in women. The results underline the usefulness of a brief, interactive intervention promoting an interactionist view of obesity to reduce weight stigma, at least in the short term, lending support to the mechanisms of change derived from attribution theory. The increase in genetic determinism that occurred despite the intervention’s gene × environment focus had no detrimental side-effect on weight stigma, but instead contributed to its reduction. Further research is warranted on the effects of how biogenetic causal information influences weight management behavior of individuals with obesity. PMID:27631384
Godt, J.W.; Baum, R.L.; Savage, W.Z.; Salciarini, D.; Schulz, W.H.; Harp, E.L.
2008-01-01
Application of transient deterministic shallow landslide models over broad regions for hazard and susceptibility assessments requires information on rainfall, topography and the distribution and properties of hillside materials. We survey techniques for generating the spatial and temporal input data for such models and present an example using a transient deterministic model that combines an analytic solution to assess the pore-pressure response to rainfall infiltration with an infinite-slope stability calculation. Pore-pressures and factors of safety are computed on a cell-by-cell basis and can be displayed or manipulated in a grid-based GIS. Input data are high-resolution (1.8 m) topographic information derived from LiDAR data and simple descriptions of initial pore-pressure distribution and boundary conditions for a study area north of Seattle, Washington. Rainfall information is taken from a previously defined empirical rainfall intensity-duration threshold and material strength and hydraulic properties were measured both in the field and laboratory. Results are tested by comparison with a shallow landslide inventory. Comparison of results with those from static infinite-slope stability analyses assuming fixed water-table heights shows that the spatial prediction of shallow landslide susceptibility is improved using the transient analyses; moreover, results can be depicted in terms of the rainfall intensity and duration known to trigger shallow landslides in the study area.
Analysis of randomly time varying systems by gaussian closure technique
NASA Astrophysics Data System (ADS)
Dash, P. K.; Iyengar, R. N.
1982-07-01
The Gaussian probability closure technique is applied to study the random response of multidegree of freedom stochastically time varying systems under non-Gaussian excitations. Under the assumption that the response, the coefficient and the excitation processes are jointly Gaussian, deterministic equations are derived for the first two response moments. It is further shown that this technique leads to the best Gaussian estimate in a minimum mean square error sense. An example problem is solved which demonstrates the capability of this technique for handling non-linearity, stochastic system parameters and amplitude limited responses in a unified manner. Numerical results obtained through the Gaussian closure technique compare well with the exact solutions.
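To make the closure step concrete, a hedged LaTeX sketch of generic Gaussian-closure moment equations follows; the notation and the white-noise form of the excitation are assumptions for illustration, not the paper's exact formulation (which treats non-Gaussian excitations and stochastic coefficients).

```latex
% Generic Gaussian closure for dx/dt = f(x,t) + w(t), with w(t) white noise
% of intensity Q. Assuming x(t) is Gaussian with mean m(t) and covariance P(t):
\dot{m}(t) = \mathbb{E}\!\left[f(x,t)\right],
\qquad
\dot{P}(t) = \mathbb{E}\!\left[(x-m)\,f^{\mathsf T}(x,t)\right]
           + \mathbb{E}\!\left[f(x,t)\,(x-m)^{\mathsf T}\right] + Q ,
% where the expectations are evaluated under the Gaussian assumption,
% closing the moment hierarchy at second order.
```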
NASA Astrophysics Data System (ADS)
Hattaf, Khalid; Mahrouf, Marouane; Adnani, Jihad; Yousfi, Noura
2018-01-01
In this paper, we propose a stochastic delayed epidemic model with a specific functional response. The time delay represents the temporary immunity period, i.e., the time from recovery to becoming susceptible again. We first show that the proposed model is mathematically and biologically well-posed. Moreover, the extinction of the disease and the persistence in the mean are established in terms of a threshold value R0S, which is smaller than the basic reproduction number R0 of the corresponding deterministic system.
NASA Astrophysics Data System (ADS)
Delorit, Justin; Cristian Gonzalez Ortuya, Edmundo; Block, Paul
2017-09-01
In many semi-arid regions, multisectoral demands often stress available water supplies. Such is the case in the Elqui River valley of northern Chile, which draws on a limited-capacity reservoir to allocate 25 000 water rights. Delayed infrastructure investment forces water managers to address demand-based allocation strategies, particularly in dry years, which are realized through reductions in the volume associated with each water right. Skillful season-ahead streamflow forecasts have the potential to inform managers with an indication of future conditions to guide reservoir allocations. This work evaluates season-ahead statistical prediction models of October-January (growing season) streamflow at multiple lead times associated with manager and user decision points, and links predictions with a reservoir allocation tool. Skillful results (streamflow forecasts outperform climatology) are produced for short lead times (1 September: ranked probability skill score (RPSS) of 0.31, categorical hit skill score of 61 %). At longer lead times, climatological skill exceeds forecast skill due to fewer observations of precipitation. However, coupling the 1 September statistical forecast model with a sea surface temperature phase and strength statistical model allows for equally skillful categorical streamflow forecasts to be produced for a 1 May lead, triggered for 60 % of years (1950-2015), suggesting forecasts need not be strictly deterministic to be useful for water rights holders. An early (1 May) categorical indication of expected conditions is reinforced with a deterministic forecast (1 September) as more observations of local variables become available. The reservoir allocation model is skillful at the 1 September lead (categorical hit skill score of 53 %); skill improves to 79 % when categorical allocation prediction certainty exceeds 80 %. This result implies that allocation efficiency may improve when forecasts are integrated into reservoir decision frameworks. The methods applied here advance the understanding of the mechanisms and timing responsible for moisture transport to the Elqui Valley and provide a unique application of streamflow forecasting in the prediction of water right allocations.
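For reference, a small Python sketch of the ranked probability skill score (RPSS) used above, computed for tercile-category forecasts; the example forecasts and observations are invented placeholders.

```python
import numpy as np

def rps(forecast_probs, obs_category):
    """Ranked probability score for one categorical forecast.
    forecast_probs : probabilities for ordered categories (e.g., dry/normal/wet)
    obs_category   : index of the observed category
    """
    f_cum = np.cumsum(forecast_probs)
    o_cum = np.cumsum(np.eye(len(forecast_probs))[obs_category])
    return np.sum((f_cum - o_cum) ** 2)

def rpss(forecasts, clim_probs, obs):
    """RPSS of a set of forecasts relative to climatology:
    1 is perfect, 0 matches climatology, negative is worse than climatology."""
    rps_f = np.mean([rps(f, o) for f, o in zip(forecasts, obs)])
    rps_c = np.mean([rps(clim_probs, o) for o in obs])
    return 1.0 - rps_f / rps_c

# Toy example with tercile categories (below/near/above normal flow).
forecasts = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.2, 0.7]]
obs = [0, 1, 2]
print(rpss(forecasts, clim_probs=[1/3, 1/3, 1/3], obs=obs))
```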
Probabilistic Finite Element Analysis & Design Optimization for Structural Designs
NASA Astrophysics Data System (ADS)
Deivanayagam, Arumugam
This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models. The solutions are then compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on results than a single deterministic simulation result. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost effective, it becomes highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analyses, followed by simulation techniques that are performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Finally, implementations of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem are presented and discussed.
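A minimal Python sketch of a Monte Carlo probability-of-failure estimate of the kind underlying MCS and RBDO workflows; the limit state, distributions, and parameter values below are illustrative assumptions, not the Kevlar 49 fabric model used in the study.

```python
import numpy as np

# Monte Carlo estimate of P(failure) for a limit state g(X) <= 0.
rng = np.random.default_rng(42)
n = 100_000

strength = rng.lognormal(mean=np.log(500.0), sigma=0.08, size=n)  # MPa (placeholder)
stress = rng.normal(loc=420.0, scale=30.0, size=n)                # MPa (placeholder)

g = strength - stress              # failure when g <= 0
pf = np.mean(g <= 0.0)             # Monte Carlo probability-of-failure estimate
se = np.sqrt(pf * (1 - pf) / n)    # sampling standard error of the estimate
print(f"Pf = {pf:.4f} +/- {se:.4f}")
```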
NASA Astrophysics Data System (ADS)
Zatarain-Salazar, J.; Reed, P. M.; Herman, J. D.; Giuliani, M.; Castelletti, A.
2014-12-01
Globally, reservoir operations provide fundamental services to water supply, energy generation, recreation, and ecosystems. The pressures of expanding populations, climate change, and increased energy demands are motivating a significant investment in re-operationalizing existing reservoirs or defining operations for new reservoirs. Recent work has highlighted the potential benefits of exploiting recent advances in many-objective optimization and direct policy search (DPS) to aid in addressing these systems' multi-sector demand tradeoffs. This study contributes a comprehensive diagnostic assessment of the efficiency, effectiveness, reliability, and controllability of multi-objective evolutionary optimization algorithms (MOEAs) when supporting DPS for the Conowingo dam in the Lower Susquehanna River Basin. The Lower Susquehanna River is an interstate water body that has been subject to intensive water management efforts due to the system's competing demands from urban water supply, atomic power plant cooling, hydropower production, and federally regulated environmental flows. Seven benchmark and state-of-the-art MOEAs are tested on deterministic and stochastic instances of the Susquehanna test case. In the deterministic formulation, the operating objectives are evaluated over the historical realization of the hydroclimatic variables (i.e., inflows and evaporation rates). In the stochastic formulation, the same objectives are instead evaluated over an ensemble of stochastic inflow and evaporation rate realizations. The algorithms are evaluated in their ability to support DPS in discovering reservoir operations that compose the tradeoffs for six multi-sector performance objectives with thirty-two decision variables. Our diagnostic results highlight that many-objective DPS is very challenging for modern MOEAs and that epsilon dominance is critical for attaining high levels of performance. The epsilon-dominance algorithms epsilon-MOEA, epsilon-NSGAII, and the auto-adaptive Borg MOEA are statistically superior for the six-objective Susquehanna instance of this important class of problems. Additionally, shifting from deterministic history-based DPS to stochastic DPS significantly increases the difficulty of the problem.
Probabilistic Structural Evaluation of Uncertainties in Radiator Sandwich Panel Design
NASA Technical Reports Server (NTRS)
Kuguoglu, Latife; Ludwiczak, Damian
2006-01-01
The Jupiter Icy Moons Orbiter (JIMO) Space System is part of NASA's Prometheus Program. As part of the JIMO engineering team at NASA Glenn Research Center, the structural design of the JIMO Heat Rejection Subsystem (HRS) is evaluated. An initial goal of this study was to perform sensitivity analyses to determine the relative importance of the input variables on the structural responses of the radiator panel. The desire was to let the sensitivity analysis information identify the important parameters. The probabilistic analysis methods illustrated here support this objective. A probabilistic structural performance evaluation of an HRS radiator sandwich panel was performed. The radiator panel structural performance was assessed in the presence of uncertainties in the loading, fabrication process variables, and material properties. A deterministic structural analysis at mean probability was performed, and the resulting stress and displacement contours are presented. It is followed by a probabilistic evaluation to determine the effect of the primitive variables on the radiator panel structural performance. Based on uncertainties in material properties, structural geometry and loading, the results of the displacement and stress analysis are used as an input file for the probabilistic analysis of the panel. The sensitivity of the structural responses, such as the maximum displacement, the maximum tensile and compressive stresses of the facesheet in the x and y directions, and the maximum von Mises stresses of the tube, to the loading and design variables is determined under the boundary condition where all edges of the radiator panel are pinned. Based on this study, design-critical material and geometric parameters of the considered sandwich panel are identified.
NASA Astrophysics Data System (ADS)
Ravishankar, Bharani
Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential, but complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS subjected to thermal and mechanical loads through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite-element-based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only for panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence, a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed towards target regions near constraint boundaries for accurate representation of the constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that using adaptive sampling, the number of designs required to find the optimum was reduced drastically, while improving the accuracy. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. A separable Monte Carlo method was employed that allowed separable sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel, and error in finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was also reduced by employing surrogate models. In order to estimate the error in the probability-of-failure estimate, the bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y M; Bush, K; Han, B
Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.
The past, present and future of cyber-physical systems: a focus on models.
Lee, Edward A
2015-02-26
This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical.
NASA Astrophysics Data System (ADS)
Xia, H.; Shen, X. M.; Yang, X. C.; Xiong, Y.; Jiang, G. L.
2018-01-01
Deterministic electroplating repair is a novel method for rapidly repairing attrited parts. By qualitative contrast and quantitative comparison, the influences of the current density on the performance of the chrome-plated layer were determined in this study. The chrome-plated layers were fabricated under different current densities while the other parameters were kept constant. The hardnesses, thicknesses and components, surface morphologies and roughnesses, and wearability of the chrome-plated layers were measured by a Vickers hardness tester, a scanning electron microscope / energy dispersive X-ray detector, a digital microscope in 3D imaging mode, and a ball-milling instrument with a profilograph, respectively. In order to evaluate each factor scientifically, the experimental data were normalized. A comprehensive evaluation model was established to quantitatively analyse the influence of the current density, based on the analytic hierarchy process (AHP) method and the weighted evaluation method. The calculated comprehensive evaluation indexes corresponding to current densities of 40A/dm2, 45A/dm2, 50A/dm2, 55A/dm2, 60A/dm2, and 65A/dm2 were 0.2246, 0.4850, 0.4799, 0.4922, 0.8672, and 0.1381, respectively. The experimental results indicate that the final optimal option was 60A/dm2, and the priority order was 60A/dm2, 55A/dm2, 45A/dm2, 50A/dm2, 40A/dm2, and 65A/dm2.
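A small Python sketch of the normalization and weighted-evaluation step described above; the criteria, weights, and raw scores are placeholders, since in the study the weights come from an AHP judgement matrix and the scores from the measured layer properties.

```python
import numpy as np

# Rows correspond to candidate current densities; columns to evaluation criteria.
criteria = ["hardness", "thickness", "roughness", "wear resistance"]
weights = np.array([0.35, 0.15, 0.20, 0.30])   # assumed AHP-derived weights
raw = np.array([
    [780, 42, 0.8, 0.90],
    [820, 45, 0.7, 0.95],
    [860, 47, 0.6, 1.00],
])

# Min-max normalization per criterion.
norm = (raw - raw.min(axis=0)) / (raw.max(axis=0) - raw.min(axis=0))
norm[:, 2] = 1.0 - norm[:, 2]   # roughness is a cost-type criterion: lower is better

index = norm @ weights   # comprehensive evaluation index per option
print(index)             # the option with the largest index is preferred
```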
NASA Astrophysics Data System (ADS)
Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre
2016-04-01
In this paper, four assimilation schemes, including an intermittent assimilation scheme (INT) and three incremental assimilation schemes (IAU 0, IAU 50 and IAU 100), are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The three IAU schemes differ from each other in the position of the increment update window, which has the same size as the assimilation window; 0, 50 and 100 correspond to the degree of superposition of the increment update window on the current assimilation window. Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms, before the assimilation experiments. The relevance of each assimilation scheme is evaluated through analyses of thermohaline variables and the current velocities. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system.
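A minimal Python sketch of the RCRV diagnostic mentioned above: for a reliable ensemble the RCRV has mean near 0 (no bias) and standard deviation near 1 (correct dispersion). The verification data here are synthetic placeholders.

```python
import numpy as np

def rcrv(obs, ens_mean, ens_spread, obs_error_std=0.0):
    """Reduced centred random variable over a set of verification points:
    y = (obs - ensemble mean) / sqrt(obs_error^2 + ensemble_spread^2).
    Returns (bias, dispersion) = (mean of y, standard deviation of y)."""
    y = (obs - ens_mean) / np.sqrt(obs_error_std**2 + ens_spread**2)
    return np.mean(y), np.std(y, ddof=1)

# Toy example with synthetic truth, ensemble mean and spread.
rng = np.random.default_rng(3)
truth = rng.normal(size=500)
ens_mean = truth + rng.normal(scale=0.5, size=500)
ens_spread = np.full(500, 0.5)
bias, dispersion = rcrv(truth, ens_mean, ens_spread)
print(bias, dispersion)   # ideally close to 0 and 1
```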
Linear response to nonstationary random excitation.
NASA Technical Reports Server (NTRS)
Hasselman, T.
1972-01-01
Development of a method for computing the mean-square response of linear systems to nonstationary random excitation of the form y(t) = f(t) x(t), in which x(t) is a stationary process and f(t) is deterministic. The method is suitable for application to multidegree-of-freedom systems when the mean-square response at a point due to excitation applied at another point is desired. Both the stationary process, x(t), and the modulating function, f(t), may be arbitrary. The method utilizes a fundamental component of transient response dependent only on x(t) and the system, and independent of f(t), to synthesize the total response. The role played by this component is analogous to that played by the Green's function or impulse response function in the convolution integral.
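A hedged LaTeX sketch of the standard mean-square response relation assumed here for a zero-mean stationary x(t), in generic notation that is not necessarily the paper's:

```latex
% Response z(t) of a linear system with impulse response h(t) to the
% modulated excitation y(t) = f(t) x(t), with R_{xx}(\tau) the
% autocorrelation of the stationary process x(t):
E\!\left[z^{2}(t)\right]
  = \int_{0}^{t}\!\!\int_{0}^{t}
    h(t-\tau_{1})\,h(t-\tau_{2})\,
    f(\tau_{1})\,f(\tau_{2})\,
    R_{xx}(\tau_{1}-\tau_{2})\,
    d\tau_{1}\, d\tau_{2}.
```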
Economic evaluation of DNA ploidy analysis vs liquid-based cytology for cervical screening.
Nghiem, V T; Davies, K R; Beck, J R; Follen, M; MacAulay, C; Guillaud, M; Cantor, S B
2015-06-09
DNA ploidy analysis involves automated quantification of chromosomal aneuploidy, a potential marker of progression toward cervical carcinoma. We evaluated the cost-effectiveness of this method for cervical screening, comparing five ploidy strategies (using different numbers of aneuploid cells as cut points) with liquid-based Papanicolaou smear and no screening. A state-transition Markov model simulated the natural history of HPV infection and possible progression into cervical neoplasia in a cohort of 12-year-old females. The analysis evaluated cost in 2012 US$ and effectiveness in quality-adjusted life-years (QALYs) from a health-system perspective throughout a lifetime horizon in the US setting. We calculated incremental cost-effectiveness ratios (ICERs) to determine the best strategy. The robustness of optimal choices was examined in deterministic and probabilistic sensitivity analyses. In the base-case analysis, the ploidy 4 cell strategy was cost-effective, yielding an increase of 0.032 QALY and an ICER of $18 264/QALY compared to no screening. For most scenarios in the deterministic sensitivity analysis, the ploidy 4 cell strategy was the only cost-effective strategy. Cost-effectiveness acceptability curves showed that this strategy was more likely to be cost-effective than the Papanicolaou smear. Compared to the liquid-based Papanicolaou smear, screening with a DNA ploidy strategy appeared less costly and comparably effective.
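For readers unfamiliar with the metric, a minimal Python sketch of an ICER calculation follows; the cost and QALY values are placeholders chosen only to land near the ratio reported above, not outputs of the Markov model.

```python
# ICER = incremental cost / incremental effectiveness (here, per QALY gained).
def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

no_screening = {"cost": 1000.0, "qaly": 26.000}       # placeholder values
ploidy_4_cells = {"cost": 1584.0, "qaly": 26.032}     # +0.032 QALY as in the abstract

value = icer(ploidy_4_cells["cost"], ploidy_4_cells["qaly"],
             no_screening["cost"], no_screening["qaly"])
print(f"ICER = ${value:,.0f}/QALY")   # compare against a willingness-to-pay threshold
```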
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
NASA Astrophysics Data System (ADS)
Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing
2014-09-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail, the infective persists, and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infective disappear, so the disease dies out. In addition, stochastic noises around the endemic equilibrium will be added to the deterministic MSIR model so that the deterministic model is extended to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. In addition, regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
Evaluation of TIGGE Ensemble Forecasts of Precipitation in Distinct Climate Regions in Iran
NASA Astrophysics Data System (ADS)
Aminyavari, Saleh; Saghafian, Bahram; Delavar, Majid
2018-04-01
The application of numerical weather prediction (NWP) products is increasing dramatically. Existing reports indicate that ensemble predictions have better skill than deterministic forecasts. In this study, numerical ensemble precipitation forecasts in the TIGGE database were evaluated using deterministic, dichotomous (yes/no), and probabilistic techniques over Iran for the period 2008-16. Thirteen rain gauges spread over eight homogeneous precipitation regimes were selected for evaluation. The Inverse Distance Weighting and Kriging methods were adopted for interpolation of the prediction values, downscaled to the stations at lead times of one to three days. To enhance the forecast quality, NWP values were post-processed via Bayesian Model Averaging. The results showed that ECMWF had better scores than other products. However, products of all centers underestimated precipitation in high precipitation regions while overestimating precipitation in other regions. This points to a systematic bias in forecasts and demands application of bias correction techniques. Based on dichotomous evaluation, NCEP did better at most stations, although all centers overpredicted the number of precipitation events. Compared to those of ECMWF and NCEP, UKMO yielded higher scores in mountainous regions, but performed poorly at other selected stations. Furthermore, the evaluations showed that all centers had better skill in wet than in dry seasons. The quality of post-processed predictions was better than those of the raw predictions. In conclusion, the accuracy of the NWP predictions made by the selected centers could be classified as medium over Iran, while post-processing of predictions is recommended to improve the quality.
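A small Python sketch of inverse distance weighting, one of the two interpolation methods named above, downscaling gridded forecast values to a station location; the grid coordinates and precipitation values are invented.

```python
import numpy as np

def idw(xy_known, values, xy_target, power=2.0):
    """Inverse distance weighting interpolation of gridded forecast values
    to a single target (station) location."""
    d = np.linalg.norm(xy_known - xy_target, axis=1)
    if np.any(d == 0.0):              # target coincides with a grid point
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

# Toy example: four surrounding grid points of forecast precipitation (mm).
grid_xy = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
precip = np.array([2.0, 3.5, 1.0, 4.0])
print(idw(grid_xy, precip, np.array([0.25, 0.75])))
```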
Weather is the main driver in both plant use of nutrients and fate and transport of nutrients in the environment. In previous work, we evaluated a green tax for control of agricultural nutrients in a bi-level optimization framework that linked deterministic models. In this study,...
Refinement of the Arc-Habcap model to predict habitat effectiveness for elk
Lakhdar Benkobi; Mark A. Rumble; Gary C. Brundige; Joshua J. Millspaugh
2004-01-01
Wildlife habitat modeling is increasingly important for managers who need to assess the effects of land management activities. We evaluated the performance of a spatially explicit deterministic habitat model (Arc-Habcap) that predicts habitat effectiveness for elk. We used five years of radio-telemetry locations of elk from Custer State Park (CSP), South Dakota, to...
NASA Astrophysics Data System (ADS)
Castaneda-Lopez, Homero
A methodology for detecting and locating defects or discontinuities on the outside covering of coated metal underground pipelines subjected to cathodic protection has been addressed. On the basis of wide range AC impedance signals for various frequencies applied to a steel-coated pipeline system and by measuring its corresponding transfer function under several laboratory simulation scenarios, a physical laboratory setup of an underground cathodic-protected, coated pipeline was built. This model included different variables and elements that exist under real conditions, such as soil resistivity, soil chemical composition, defect (holiday) location in the pipeline covering, defect area and geometry, and level of cathodic protection. The AC impedance data obtained under different working conditions were used to fit an electrical transmission line model. This model was then used as a tool to fit the impedance signal for different experimental conditions and to establish trends in the impedance behavior without the necessity of further experimental work. However, due to the chaotic nature of the transfer function response of this system under several conditions, it is believed that non-deterministic models based on pattern recognition algorithms are suitable for field condition analysis. A non-deterministic approach was used for experimental analysis by applying an artificial neural network (ANN) algorithm based on classification analysis capable of studying the pipeline system and differentiating the variables that can change impedance conditions. These variables include level of cathodic protection, location of discontinuities (holidays), and severity of corrosion. This work demonstrated a proof-of-concept for a well-known technique and a novel algorithm capable of classifying impedance data for experimental results to predict the exact location of the active holidays and defects on the buried pipelines. Laboratory findings from this procedure are promising, and efforts to develop it for field conditions should continue.
Comparison of three controllers applied to helicopter vibration
NASA Technical Reports Server (NTRS)
Leyland, Jane A.
1992-01-01
A comparison was made of the applicability and suitability of the deterministic controller, the cautious controller, and the dual controller for the reduction of helicopter vibration by using higher harmonic blade pitch control. A randomly generated linear plant model was assumed and the performance index was defined to be a quadratic output metric of this linear plant. A computer code, designed to check out and evaluate these controllers, was implemented and used to accomplish this comparison. The effects of random measurement noise, the initial estimate of the plant matrix, and the plant matrix propagation rate were determined for each of the controllers. With few exceptions, the deterministic controller yielded the greatest vibration reduction (as characterized by the quadratic output metric) and operated with the greatest reliability. Theoretical limitations of these controllers were defined and appropriate candidate alternative methods, including one method particularly suitable to the cockpit, were identified.
An Extended Deterministic Dendritic Cell Algorithm for Dynamic Job Shop Scheduling
NASA Astrophysics Data System (ADS)
Qiu, X. N.; Lau, H. Y. K.
The problem of job shop scheduling in a dynamic environment where random perturbations exist in the system is studied. In this paper, an extended deterministic Dendritic Cell Algorithm (dDCA) is proposed to solve such a dynamic Job Shop Scheduling Problem (JSSP), where unexpected events occur randomly. This algorithm is designed based on dDCA and makes improvements by considering all types of signals and the magnitude of the output values. To evaluate this algorithm, ten benchmark problems are chosen and different kinds of disturbances are injected randomly. The results show that the algorithm performs competitively, as it is capable of triggering the rescheduling process optimally with much less run time for deciding the rescheduling action. As such, the proposed algorithm is able to minimize the rescheduling times under the defined objective and to keep the scheduling process stable and efficient.
Lv, Qiming; Schneider, Manuel K; Pitchford, Jonathan W
2008-08-01
We study individual plant growth and size hierarchy formation in an experimental population of Arabidopsis thaliana, within an integrated analysis that explicitly accounts for size-dependent growth, size- and space-dependent competition, and environmental stochasticity. It is shown that a Gompertz-type stochastic differential equation (SDE) model, involving asymmetric competition kernels and a stochastic term which decreases with the logarithm of plant weight, efficiently describes individual plant growth, competition, and variability in the studied population. The model is evaluated within a Bayesian framework and compared to its deterministic counterpart, and to several simplified stochastic models, using distributional validation. We show that stochasticity is an important determinant of size hierarchy and that SDE models outperform the deterministic model if and only if structural components of competition (asymmetry; size- and space-dependence) are accounted for. Implications of these results are discussed in the context of plant ecology and in more general modelling situations.
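To illustrate the flavour of such a model, a short Python Euler-Maruyama sketch of a Gompertz-type growth SDE with noise that decreases with log plant weight follows; the functional forms and parameter values are assumptions for illustration and omit the size- and space-dependent competition kernels of the fitted model.

```python
import numpy as np

# Euler-Maruyama simulation of an assumed Gompertz-type growth SDE,
#   d(ln W) = (a - b * ln W) dt + sigma(ln W) dB_t,
# with a noise amplitude that shrinks as log plant weight grows.
rng = np.random.default_rng(7)
a, b = 0.8, 0.2
dt, n_steps = 0.05, 400

def sigma(logw):
    return 0.3 / (1.0 + np.maximum(logw, 0.0))   # placeholder noise function

logw = np.log(0.01)                               # initial weight 0.01 g
path = [logw]
for _ in range(n_steps):
    logw += (a - b * logw) * dt + sigma(logw) * np.sqrt(dt) * rng.normal()
    path.append(logw)

print(np.exp(path[-1]))   # final simulated plant weight
```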
Deterministic binary vectors for efficient automated indexing of MEDLINE/PubMed abstracts.
Wahle, Manuel; Widdows, Dominic; Herskovic, Jorge R; Bernstam, Elmer V; Cohen, Trevor
2012-01-01
The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI.
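A minimal Python sketch of the idea of deterministic binary vectors for Random Indexing: elemental term vectors are derived by hashing the term, so no store of random vectors needs to be retained, and document vectors are binary superpositions. The hashing scheme, dimensionality, and similarity measure below are illustrative assumptions, not the paper's exact construction.

```python
import hashlib
import numpy as np

DIM = 4096  # assumed dimensionality

def term_vector(term, dim=DIM):
    """Deterministic +/-1 elemental vector seeded from a hash of the term."""
    seed = int.from_bytes(hashlib.sha256(term.encode("utf-8")).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.choice([-1, 1], size=dim)

def document_vector(terms, dim=DIM):
    """Binary document vector: majority vote over the terms' elemental vectors."""
    total = np.sum([term_vector(t, dim) for t in terms], axis=0)
    return (total >= 0).astype(np.uint8)

doc_a = document_vector(["myocardial", "infarction", "troponin"])
doc_b = document_vector(["myocardial", "ischemia", "troponin"])
# Hamming similarity as a basis for nearest-neighbour search over indexed documents.
print(1.0 - np.mean(doc_a != doc_b))
```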
NASA Astrophysics Data System (ADS)
Keane, Richard J.; Plant, Robert S.; Tennant, Warren J.
2016-05-01
The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed against the standard convection scheme, whose only stochastic element comes from random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
Wu, Taotao; Kim, Taewung; Bollapragada, Varun; Poulard, David; Chen, Huipeng; Panzer, Matthew B; Forman, Jason L; Crandall, Jeff R; Pipkorn, Bengt
2017-05-29
The goal of this study was to evaluate the biofidelity of the Total Human Model for Safety (THUMS; Ver. 4.01) pedestrian finite element models (PFEM) in a whole-body pedestrian impact condition using a well-characterized generic pedestrian buck model. The biofidelity of the THUMS PFEM was evaluated against data from 3 full-scale postmortem human subject (PMHS) pedestrian impact tests, in which the subjects were struck laterally by a pedestrian buck at 40 km/h. The pedestrian model was scaled to match the anthropometry of the target subjects and then positioned to match the pre-impact postures of the target subjects based on the 3-dimensional motion tracking data obtained during the experiments. An objective rating method was employed to quantitatively evaluate the correlation between the responses of the models and the PMHS. Injuries in the models were predicted both probabilistically and deterministically using empirical injury risk functions and strain measures, respectively, and compared with those of the target PMHS. In general, the model exhibited biofidelic kinematic responses (in the Y-Z plane) regarding trajectories (International Organization for Standardization [ISO] ratings: Y = 0.90 ± 0.11, Z = 0.89 ± 0.09), linear resultant velocities (ISO ratings: 0.83 ± 0.07), accelerations (ISO ratings: Y = 0.58 ± 0.11, Z = 0.52 ± 0.12), and angular velocities (ISO ratings: X = 0.48 ± 0.13), but exhibited stiffer leg responses and delayed head responses compared to those of the PMHS. This indicates potential biofidelity issues with the PFEM for regions below the knee and in the neck. The model also demonstrated reaction forces at the buck front-end regions comparable to those from the PMHS tests. The PFEM generally predicted the injuries that the PMHS sustained but overestimated injuries in the ankle and leg regions. Based on the data considered, the THUMS PFEM was considered to be biofidelic for this pedestrian impact condition and vehicle. Given the capability of the model to reproduce biomechanical responses, it shows potential as a valuable tool for developing novel pedestrian safety systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
König, Dirk, E-mail: dirk.koenig@unsw.edu.au
2016-08-15
Semiconductor nanocrystals (NCs) experience stress and charge transfer by embedding materials or ligands and impurity atoms. In return, the environment of NCs experiences a NC stress response which may lead to matrix deformation and propagated strain. Up to now, there is no universal gauge to evaluate the stress impact on NCs and their response as a function of NC size d_NC. I deduce geometrical number series as analytical tools to obtain the number of NC atoms N_NC(d_NC[i]), bonds between NC atoms N_bnd(d_NC[i]) and interface bonds N_IF(d_NC[i]) for seven high symmetry zinc-blende (zb) NCs with low-index faceting: {001} cubes, {111} octahedra, {110} dodecahedra, {001}-{111} pyramids, {111} tetrahedra, {111}-{001} quatrodecahedra and {001}-{111} quadrodecahedra. The fundamental insights into NC structures revealed here allow for major advancements in data interpretation and understanding of zb- and diamond-lattice based nanomaterials. The analytical number series can serve as a standard procedure for stress evaluation in solid state spectroscopy due to their deterministic nature, easy use and general applicability over a wide range of spectroscopy methods as well as NC sizes, forms and materials.
Design and evaluation of a parametric model for cardiac sounds.
Ibarra-Hernández, Roilhi F; Alonso-Arévalo, Miguel A; Cruz-Gutiérrez, Alejandro; Licona-Chávez, Ana L; Villarreal-Reyes, Salvador
2017-10-01
Heart sound analysis plays an important role in the auscultative diagnosis process to detect the presence of cardiovascular diseases. In this paper we propose a novel parametric heart sound model that accurately represents normal and pathological cardiac audio signals, also known as phonocardiograms (PCG). The proposed model considers that the PCG signal is formed by the sum of two parts: one of them is deterministic and the other one is stochastic. The first part contains most of the acoustic energy. This part is modeled by the Matching Pursuit (MP) algorithm, which performs an analysis-synthesis procedure to represent the PCG signal as a linear combination of elementary waveforms. The second part, also called residual, is obtained after subtracting the deterministic signal from the original heart sound recording and can be accurately represented as an autoregressive process using the Linear Predictive Coding (LPC) technique. We evaluate the proposed heart sound model by performing subjective and objective tests using signals corresponding to different pathological cardiac sounds. The results of the objective evaluation show an average Percentage of Root-Mean-Square Difference of approximately 5% between the original heart sound and the reconstructed signal. For the subjective test, we followed a formal methodology for perceptual evaluation of audio quality with the assistance of medical experts. Statistical results of the subjective evaluation show that our model provides a highly accurate approximation of real heart sound signals. We are not aware of any previous heart sound model evaluated as rigorously as ours. Copyright © 2017 Elsevier Ltd. All rights reserved.
Deterministic and stochastic CTMC models from Zika disease transmission
NASA Astrophysics Data System (ADS)
Zevika, Mona; Soewono, Edy
2018-03-01
Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
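A minimal sketch of the two modelling approaches, using a simplified host-only SIR-type system rather than the full vector-host Zika model: the deterministic ODE gives a single outbreak trajectory, while repeated Gillespie simulations of the CTMC yield an estimate of the probability of early extinction. All rates below are illustrative.

```python
import numpy as np

beta, gamma, N = 0.4, 0.2, 1000         # illustrative rates and population size
I0, t_end = 2, 200.0

# Deterministic counterpart (simple Euler integration of the ODE system)
def ode_final_size(dt=0.01):
    S, I, t = N - I0, I0, 0.0
    while t < t_end and I > 1e-6:
        new_inf = beta * S * I / N
        S, I = S - new_inf * dt, I + (new_inf - gamma * I) * dt
        t += dt
    return N - S

# CTMC simulated with Gillespie's algorithm; repeated runs estimate the
# probability of early extinction, which the deterministic model cannot give.
def ctmc_run(rng):
    S, I, t = N - I0, I0, 0.0
    while I > 0 and t < t_end:
        rate_inf, rate_rec = beta * S * I / N, gamma * I
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)
        if rng.random() < rate_inf / total:
            S, I = S - 1, I + 1
        else:
            I -= 1
    return N - S

rng = np.random.default_rng(1)
sizes = np.array([ctmc_run(rng) for _ in range(500)])
print("deterministic final size:", round(ode_final_size()))
print("P(extinction, fewer than 10 cases):", np.mean(sizes < 10))
```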
Distinguishing between stochasticity and determinism: Examples from cell cycle duration variability.
Pearl Mizrahi, Sivan; Sandler, Oded; Lande-Diner, Laura; Balaban, Nathalie Q; Simon, Itamar
2016-01-01
We describe a recent approach for distinguishing between stochastic and deterministic sources of variability, focusing on the mammalian cell cycle. Variability between cells is often attributed to stochastic noise, although it may be generated by deterministic components. Interestingly, lineage information can be used to distinguish between variability and determinism. Analysis of correlations within a lineage of the mammalian cell cycle duration revealed its deterministic nature. Here, we discuss the sources of such variability and the possibility that the underlying deterministic process is due to the circadian clock. Finally, we discuss the "kicked cell cycle" model and its implication on the study of the cell cycle in healthy and cancerous tissues. © 2015 WILEY Periodicals, Inc.
White, L J; Mandl, J N; Gomes, M G M; Bodley-Tickell, A T; Cane, P A; Perez-Brena, P; Aguilar, J C; Siqueira, M M; Portes, S A; Straliotto, S M; Waris, M; Nokes, D J; Medley, G F
2007-09-01
The nature and role of re-infection and partial immunity are likely to be important determinants of the transmission dynamics of human respiratory syncytial virus (hRSV). We propose a single model structure that captures four possible host responses to infection and subsequent reinfection: partial susceptibility, altered infection duration, reduced infectiousness and temporary immunity (which might be partial). The magnitude of these responses is determined by four homotopy parameters, and by setting some of these parameters to extreme values we generate a set of eight nested, deterministic transmission models. In order to investigate hRSV transmission dynamics, we applied these models to incidence data from eight international locations. Seasonality is included as cyclic variation in transmission. Parameters associated with the natural history of the infection were assumed to be independent of geographic location, while others, such as those associated with seasonality, were assumed location specific. Models incorporating either of the two extreme assumptions for immunity (none or solid and lifelong) were unable to reproduce the observed dynamics. Model fits with either waning or partial immunity to disease or both were visually comparable. The best fitting structure was a lifelong partial immunity to both disease and infection. Observed patterns were reproduced by stochastic simulations using the parameter values estimated from the deterministic models.
Frankel, A.
2009-01-01
Broadband (0.1-20 Hz) synthetic seismograms for finite-fault sources were produced for a model where stress drop is constant with seismic moment to see if they can match the magnitude dependence and distance decay of response spectral amplitudes found in the Next Generation Attenuation (NGA) relations recently developed from strong-motion data of crustal earthquakes in tectonically active regions. The broadband synthetics were constructed for earthquakes of M 5.5, 6.5, and 7.5 by combining deterministic synthetics for plane-layered models at low frequencies with stochastic synthetics at high frequencies. The stochastic portion used a source model where the Brune stress drop of 100 bars is constant with seismic moment. The deterministic synthetics were calculated using an average slip velocity, and hence, dynamic stress drop, on the fault that is uniform with magnitude. One novel aspect of this procedure is that the transition frequency between the deterministic and stochastic portions varied with magnitude, so that the transition frequency is inversely related to the rise time of slip on the fault. The spectral accelerations at 0.2, 1.0, and 3.0 sec periods from the synthetics generally agreed with those from the set of NGA relations for M 5.5-7.5 for distances of 2-100 km. At distances of 100-200 km some of the NGA relations for 0.2 sec spectral acceleration were substantially larger than the values of the synthetics for M 7.5 and M 6.5 earthquakes because these relations do not have a term accounting for Q. At 3 and 5 sec periods, the synthetics for M 7.5 earthquakes generally had larger spectral accelerations than the NGA relations, although there was large scatter in the results from the synthetics. The synthetics showed a sag in response spectra at close-in distances for M 5.5 between 0.3 and 0.7 sec that is not predicted from the NGA relations.
Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh
1998-01-01
In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle, which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and the kriging models (using a constant underlying global model and a Gaussian correlation function) yield comparable results.
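The sketch below contrasts the two approximation types on a one-dimensional toy function: a second-order (quadratic) response surface versus ordinary kriging with a constant global model and a Gaussian correlation function. The test function and correlation parameter are assumptions for illustration, not the aerospike-nozzle analyses.

```python
import numpy as np

def f(x):                       # stand-in for a deterministic computer analysis
    return np.sin(3 * x) + 0.3 * x**2

x = np.linspace(0, 3, 7)        # design points
y = f(x)

# Second-order (quadratic) response surface
rs_coef = np.polyfit(x, y, 2)

# Ordinary kriging with a constant global model and Gaussian correlation
theta = 2.0                                           # correlation parameter (assumed)
R = np.exp(-theta * (x[:, None] - x[None, :]) ** 2)
ones = np.ones(len(x))
Rinv = np.linalg.inv(R + 1e-10 * np.eye(len(x)))
beta = (ones @ Rinv @ y) / (ones @ Rinv @ ones)       # generalized least-squares mean

def krige(x_new):
    r = np.exp(-theta * (x_new - x) ** 2)
    return beta + r @ Rinv @ (y - beta * ones)

x_test = np.linspace(0, 3, 50)
err_rs = np.max(np.abs(np.polyval(rs_coef, x_test) - f(x_test)))
err_kr = np.max(np.abs(np.array([krige(xt) for xt in x_test]) - f(x_test)))
print(f"max error  response surface: {err_rs:.3f}   kriging: {err_kr:.3f}")
```

The kriging predictor interpolates the design points exactly, while the quadratic surface smooths through them; which behaviour is preferable depends on the underlying analysis, which is the trade-off the paper examines.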
NASA Technical Reports Server (NTRS)
Bollman, W. E.; Chadwick, C.
1982-01-01
A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of random Delta v standard deviations using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of nonzero deterministic Delta v's. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for single TCM's consisting of both a deterministic and a random component. The plots provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
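A small Monte Carlo sketch of the idea: sample random per-axis Δv components around a deterministic bias vector and summarize the resulting |Δv| magnitude statistics. The bias vector, standard deviation, and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def dv_magnitude_stats(dv_det, sigma, n=200_000):
    """Monte Carlo statistics of |dv| for a TCM with a deterministic (bias)
    component dv_det (3-vector, m/s) plus an isotropic random component with
    per-axis standard deviation sigma (m/s)."""
    samples = np.asarray(dv_det) + sigma * rng.standard_normal((n, 3))
    mags = np.linalg.norm(samples, axis=1)
    return mags.mean(), np.percentile(mags, 99)

# No bias (the purely random case) versus a 5 m/s deterministic maneuver
for bias in ([0.0, 0.0, 0.0], [5.0, 0.0, 0.0]):
    mean, p99 = dv_magnitude_stats(bias, sigma=1.0)
    print(f"bias={bias}:  mean |dv| = {mean:.2f} m/s,  99th pct = {p99:.2f} m/s")
```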
Decision-making and evacuation planning for flood risk management in the Netherlands.
Kolen, Bas; Helsloot, Ira
2014-07-01
A traditional view of decision-making for evacuation planning is that, given an uncertain threat, there is a deterministic way of defining the best decision. In other words, there is a linear relation between threat, decision, and execution consequences. Alternatives and the impact of uncertainties are not taken into account. This study considers the 'top strategic decision-making' for mass evacuation owing to flooding in the Netherlands. It reveals that the top strategic decision-making process itself is probabilistic because of the decision-makers involved and their crisis managers (as advisers). The paper concludes that deterministic planning is not sufficient, and it recommends probabilistic planning that considers uncertainties in the decision-making process itself as well as other uncertainties, such as forecasts, citizens' responses, and the capacity of infrastructure. This results in less optimistic, but more realistic, strategies and a need to pay attention to alternative strategies. © 2014 The Author(s). Disasters © Overseas Development Institute, 2014.
NASA Astrophysics Data System (ADS)
Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing; Walker, Paul D.
2017-02-01
This paper proposes an uncertainty modelling and computational method to analyze dynamic responses of rigid-flexible multibody systems (or mechanisms) with random geometry and material properties. Firstly, the deterministic model for the rigid-flexible multibody system is built with the absolute nodal coordinate formulation (ANCF), in which the flexible parts are modeled using ANCF elements, while the rigid parts are described by ANCF reference nodes (ANCF-RNs). Secondly, uncertainty in the geometry of the rigid parts is expressed as uniform random variables, while uncertainty in the material properties of the flexible parts is modeled as a continuous random field, which is further discretized into Gaussian random variables using a series expansion method. Finally, a non-intrusive numerical method is developed to solve the dynamic equations of systems involving both types of random variables, which systematically integrates the deterministic generalized-α solver with Latin Hypercube Sampling (LHS) and Polynomial Chaos (PC) expansion. The benchmark slider-crank mechanism is used as a numerical example to demonstrate the characteristics of the proposed method.
Stochastic Stability of Nonlinear Sampled Data Systems with a Jump Linear Controller
NASA Technical Reports Server (NTRS)
Gonzalez, Oscar R.; Herencia-Zapana, Heber; Gray, W. Steven
2004-01-01
This paper analyzes the stability of a sampled-data system consisting of a deterministic, nonlinear, time-invariant, continuous-time plant and a stochastic, discrete-time, jump linear controller. The jump linear controller models, for example, computer systems and communication networks that are subject to stochastic upsets or disruptions. This sampled-data model has been used in the analysis and design of fault-tolerant systems and computer-control systems with random communication delays without taking into account the inter-sample response. To analyze stability, appropriate topologies are introduced for the signal spaces of the sampled-data system. With these topologies, the ideal sampling and zero-order-hold operators are shown to be measurable maps. This paper shows that the known equivalence between the stability of a deterministic, linear sampled-data system and its associated discrete-time representation as well as between a nonlinear sampled-data system and a linearized representation holds even in a stochastic framework.
Damage detection of structures identified with deterministic-stochastic models using seismic data.
Huang, Ming-Chih; Wang, Yen-Po; Chang, Ming-Lian
2014-01-01
A deterministic-stochastic subspace identification method is adopted and experimentally verified in this study to identify the equivalent single-input-multiple-output system parameters of the discrete-time state equation. The method of damage locating vector (DLV) is then considered for damage detection. A series of shaking table tests using a five-storey steel frame has been conducted. Both single and multiple damage conditions at various locations have been considered. In the system identification analysis, either full or partial observation conditions have been taken into account. It has been shown that the damaged stories can be identified from global responses of the structure to earthquakes if sufficiently observed. In addition to detecting damage(s) with respect to the intact structure, identification of new or extended damages of the as-damaged counterpart has also been studied. This study gives further insights into the scheme in terms of effectiveness, robustness, and limitation for damage localization of frame systems.
NASA Astrophysics Data System (ADS)
García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.
2018-07-01
In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics, but they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of the fractal processes in the wavelet domain. This method has been validated over simulated signals and over real signals with economical and biological origin. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems, and uncovering interesting patterns present in time series.
Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model
Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.
2018-01-01
A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
Discrete Deterministic and Stochastic Petri Nets
NASA Technical Reports Server (NTRS)
Zijal, Robert; Ciardo, Gianfranco
1996-01-01
Petri nets augmented with timing specifications gained a wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
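A brief numerical check of the approximation underlying the formalism: in discrete time, an exponentially distributed firing time with rate lambda is approximated by a geometric distribution with per-step firing probability 1 - exp(-lambda*dt). The rate and step size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
lam, dt = 2.0, 0.05                 # firing rate and discretization step (arbitrary)

# Geometric number of steps until firing, converted back to continuous time,
# compared against draws from the exponential distribution it approximates.
p = 1.0 - np.exp(-lam * dt)
geom_times = rng.geometric(p, size=100_000) * dt
expo_times = rng.exponential(1.0 / lam, size=100_000)

print("mean     geometric: %.3f   exponential: %.3f" % (geom_times.mean(), expo_times.mean()))
print("P(T > 1) geometric: %.3f   exponential: %.3f"
      % ((geom_times > 1).mean(), (expo_times > 1).mean()))
```

As dt shrinks, the two sets of statistics converge, which is the sense in which the DDSPN formalism approximates exponentially distributed firing times arbitrarily well.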
DCBRP: a deterministic chain-based routing protocol for wireless sensor networks.
Marhoon, Haydar Abdulameer; Mahmuddin, M; Nor, Shahrudin Awang
2016-01-01
Wireless sensor networks (WSNs) are a promising area for both researchers and industry because of their various applications. The sensor node expends the majority of its energy on communication with other nodes. Therefore, the routing protocol plays an important role in delivering network data while minimizing energy consumption as much as possible. The chain-based routing approach is superior to other approaches; however, chain-based routing protocols still expend substantial energy in the Chain Head (CH) node, and they also suffer from bottleneck issues. We propose a novel routing protocol, the Deterministic Chain-Based Routing Protocol (DCBRP), which consists of three mechanisms: a Backbone Construction Mechanism, Chain Head Selection (CHS), and a Next-Hop Connection Mechanism. The CHS mechanism is presented in detail and is evaluated through comparison with CCM and TSCP using the ns-3 simulator. The results show that DCBRP outperforms both CCM and TSCP in terms of end-to-end delay by 19.3 and 65%, respectively, CH energy consumption by 18.3 and 23.0%, overall energy consumption by 23.7 and 31.4%, network lifetime by 22 and 38%, and the energy*delay metric by 44.85 and 77.54%. DCBRP can be used in any deterministic node deployment application, such as smart cities or smart agriculture, to reduce energy depletion and prolong the lifetime of WSNs.
NASA Astrophysics Data System (ADS)
Wong, B.; Kilthau, W.; Knopf, D. A.
2017-12-01
Immersion freezing is recognized as the most important ice crystal formation process in mixed-phase cloud environments. It is well established that mineral dust species can act as efficient ice nucleating particles. Previous research has focused on determination of the ice nucleation propensity of individual mineral dust species. In this study, the focus is placed on how different mineral dust species such as illite, kaolinite and feldspar, initiate freezing of water droplets when present in internal and external mixtures. The frozen fraction data for single and multicomponent mineral dust droplet mixtures are recorded under identical cooling rates. Additionally, the time dependence of freezing is explored. Externally and internally mixed mineral dust droplet samples are exposed to constant temperatures (isothermal freezing experiments) and frozen fraction data is recorded based on time intervals. Analyses of single and multicomponent mineral dust droplet samples include different stochastic and deterministic models such as the derivation of the heterogeneous ice nucleation rate coefficient (Jhet), the single contact angle (α) description, the α-PDF model, active sites representation, and the deterministic model. Parameter sets derived from freezing data of single component mineral dust samples are evaluated for prediction of cooling rate dependent and isothermal freezing of multicomponent externally or internally mixed mineral dust samples. The atmospheric implications of our findings are discussed.
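The sketch below contrasts the two limiting descriptions for an isothermal freezing experiment: a stochastic description in which the frozen fraction grows with time as 1 - exp(-Jhet*A*t), and a deterministic (singular) description, 1 - exp(-ns*A), which is time-independent. The Jhet and ns parameterizations and the droplet surface area are hypothetical placeholders, not values derived from these measurements.

```python
import numpy as np

A = 1e-5            # ice-nucleating particle surface area per droplet, cm^2 (assumed)
T = 248.0           # isothermal hold temperature, K

# Illustrative parameterizations (not fitted values from this study)
def J_het(T):       # stochastic description: nucleation rate coefficient, cm^-2 s^-1
    return 10 ** (0.3 * (268.0 - T) - 4.0)

def n_s(T):         # deterministic (singular) description: active-site density, cm^-2
    return 10 ** (0.2 * (268.0 - T) + 0.5)

t = np.linspace(0.0, 600.0, 7)                     # seconds into the isothermal hold
frozen_stochastic = 1.0 - np.exp(-J_het(T) * A * t)
frozen_deterministic = np.full_like(t, 1.0 - np.exp(-n_s(T) * A))

for ti, fs, fd in zip(t, frozen_stochastic, frozen_deterministic):
    print(f"t={ti:5.0f} s   stochastic f={fs:.2f}   deterministic f={fd:.2f}")
```

In this idealization, continued freezing during the isothermal hold points to time-dependent (stochastic) behaviour, while a frozen fraction that stays flat after the temperature stops changing is consistent with the deterministic description, which is the distinction the isothermal experiments are designed to probe.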
Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.
2003-01-01
Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. © 2003 Elsevier B.V. All rights reserved.
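A minimal sketch of deterministic deconvolution by spectral division: a known source wavelet (here a synthetic damped sinusoid standing in for the wavelet recorded in air) is removed from a trace by dividing spectra, with a small water-level term for stability. The wavelet shape, sampling, and noise level are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic example: a short ringing source wavelet convolved with a sparse
# reflectivity series, plus a little noise.
dt = 0.1e-9                                   # 0.1 ns sample interval (illustrative)
t = np.arange(256) * dt
wavelet = np.exp(-t / 0.5e-9) * np.sin(2 * np.pi * 400e6 * t)   # "400 MHz" ringing

reflectivity = np.zeros(256)
reflectivity[[60, 80, 150]] = [1.0, -0.6, 0.4]
trace = np.convolve(reflectivity, wavelet)[:256] + 0.005 * rng.standard_normal(256)

# Deterministic deconvolution: divide by the known wavelet spectrum,
# stabilized with a small "water level" so weak frequencies do not blow up.
W = np.fft.rfft(wavelet)
D = np.fft.rfft(trace)
water = 0.02 * np.max(np.abs(W))
W_stab = np.where(np.abs(W) < water, water * np.exp(1j * np.angle(W)), W)
estimate = np.fft.irfft(D / W_stab, n=256)

print("recovered spike amplitudes at samples 60, 80, 150:",
      np.round(estimate[[60, 80, 150]], 2))
```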
Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.
Kang, Yun; Lanchier, Nicolas
2011-06-01
We investigate the impact of Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. The existence of a regime with exactly three attractors only appears when patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density populations can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of the global extinction and the basin of attraction of the global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the probability of extinction for the stochastic model.
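The deterministic side of the analysis can be illustrated with a toy two-patch system with cubic (strong Allee) local dynamics and symmetric dispersal; with weak dispersal and distinct Allee thresholds, a high-density and a low-density patch can coexist at equilibrium. The parameter values below are illustrative, not those of the study.

```python
# Illustrative two-patch model with strong Allee effects and symmetric dispersal:
#   du_i/dt = u_i (1 - u_i) (u_i - a_i) + D (u_j - u_i)
a1, a2 = 0.2, 0.35          # distinct Allee thresholds in the two patches (assumed)
D = 0.02                    # weak dispersal
dt, steps = 0.01, 100_000

def run(u1, u2):
    """Forward-Euler integration until (approximate) equilibrium."""
    for _ in range(steps):
        f1 = u1 * (1 - u1) * (u1 - a1) + D * (u2 - u1)
        f2 = u2 * (1 - u2) * (u2 - a2) + D * (u1 - u2)
        u1, u2 = u1 + f1 * dt, u2 + f2 * dt
    return round(u1, 3), round(u2, 3)

# One patch established near carrying capacity, the other nearly empty:
print("weak dispersal :", run(1.0, 0.01))   # high/low densities coexist
```

Raising D in this sketch synchronizes the patches toward a common outcome, mirroring the transition the abstract describes; the stochastic counterpart would treat the coexistence state as metastable rather than permanent.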
Ecological Succession Pattern of Fungal Community in Soil along a Retreating Glacier
Tian, Jianqing; Qiao, Yuchen; Wu, Bing; Chen, Huai; Li, Wei; Jiang, Na; Zhang, Xiaoling; Liu, Xingzhong
2017-01-01
Accelerated by global climate change, retreating glaciers leave behind soil chronosequences of primary succession. Current knowledge of primary succession comes mainly from studies of vegetation dynamics, whereas information about belowground microbes remains unclear. Here, we combined shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. We investigated fungal succession and community assembly via high-throughput sequencing along a well-established glacier forefront chronosequence that spans 2–188 years of deglaciation. Shannon diversity and evenness peaked at a distance of 370 m and declined afterwards. The response of fungal diversity to distance varied among phyla: Basidiomycota Shannon diversity significantly decreased with distance, while the pattern of Rozellomycota Shannon diversity was unimodal. The abundance of the most frequent OTU, OTU2 (Cryptococcus terricola), increased with successional distance, whereas that of OTU65 (Tolypocladium tundrense) decreased. Based on null deviation analyses, the composition of the fungal community was initially governed strongly by deterministic processes and later by less deterministic processes. Our results revealed that distance, altitude, soil microbial biomass carbon, soil microbial biomass nitrogen and NH4+–N significantly correlated with fungal community composition along the chronosequence. These results suggest that the drivers of fungal community composition are dynamic in a glacier chronosequence, which may relate to fungal ecophysiological traits and adaptation in an evolving ecosystem. This information will aid understanding of the mechanistic underpinnings of microbial community assembly during ecosystem succession under different scales and scenarios. PMID:28649234
Bucci, Monica; Mandelli, Maria Luisa; Berman, Jeffrey I.; Amirbekian, Bagrat; Nguyen, Christopher; Berger, Mitchel S.; Henry, Roland G.
2013-01-01
Introduction Diffusion MRI tractography has been increasingly used to delineate white matter pathways in vivo for which the leading clinical application is presurgical mapping of eloquent regions. However, there is rare opportunity to quantify the accuracy or sensitivity of these approaches to delineate white matter fiber pathways in vivo due to the lack of a gold standard. Intraoperative electrical stimulation (IES) provides a gold standard for the location and existence of functional motor pathways that can be used to determine the accuracy and sensitivity of fiber tracking algorithms. In this study we used intraoperative stimulation from brain tumor patients as a gold standard to estimate the sensitivity and accuracy of diffusion tensor MRI (DTI) and q-ball models of diffusion with deterministic and probabilistic fiber tracking algorithms for delineation of motor pathways. Methods We used preoperative high angular resolution diffusion MRI (HARDI) data (55 directions, b = 2000 s/mm2) acquired in a clinically feasible time frame from 12 patients who underwent a craniotomy for resection of a cerebral glioma. The corticospinal fiber tracts were delineated with DTI and q-ball models using deterministic and probabilistic algorithms. We used cortical and white matter IES sites as a gold standard for the presence and location of functional motor pathways. Sensitivity was defined as the true positive rate of delineating fiber pathways based on cortical IES stimulation sites. For accuracy and precision of the course of the fiber tracts, we measured the distance between the subcortical stimulation sites and the tractography result. Positive predictive rate of the delineated tracts was assessed by comparison of subcortical IES motor function (upper extremity, lower extremity, face) with the connection of the tractography pathway in the motor cortex. Results We obtained 21 cortical and 8 subcortical IES sites from intraoperative mapping of motor pathways. Probabilistic q-ball had the best sensitivity (79%) as determined from cortical IES compared to deterministic q-ball (50%), probabilistic DTI (36%), and deterministic DTI (10%). The sensitivity using the q-ball algorithm (65%) was significantly higher than using DTI (23%) (p < 0.001) and the probabilistic algorithms (58%) were more sensitive than deterministic approaches (30%) (p = 0.003). Probabilistic q-ball fiber tracks had the smallest offset to the subcortical stimulation sites. The offsets between diffusion fiber tracks and subcortical IES sites were increased significantly for those cases where the diffusion fiber tracks were visibly thinner than expected. There was perfect concordance between the subcortical IES function (e.g. hand stimulation) and the cortical connection of the nearest diffusion fiber track (e.g. upper extremity cortex). Discussion This study highlights the tremendous utility of intraoperative stimulation sites to provide a gold standard from which to evaluate diffusion MRI fiber tracking methods and has provided an object standard for evaluation of different diffusion models and approaches to fiber tracking. The probabilistic q-ball fiber tractography was significantly better than DTI methods in terms of sensitivity and accuracy of the course through the white matter. The commonly used DTI fiber tracking approach was shown to have very poor sensitivity (as low as 10% for deterministic DTI fiber tracking) for delineation of the lateral aspects of the corticospinal tract in our study. 
Effects of the tumor/edema resulted in significantly larger offsets between the subcortical IES sites and the preoperative fiber tracks. The data show that probabilistic HARDI tractography is the most objective and reproducible analysis, but given the small sample and the limited number of stimulation points, generalizations from our results should be made with caution. Indeed, our results inform the capabilities of preoperative diffusion fiber tracking and indicate that such data should be used carefully when making pre-surgical and intra-operative management decisions. PMID:24273719
ERIC Educational Resources Information Center
Thomas, Tami L.; DiClemente, Ralph; Snell, Samuel
2014-01-01
Objective: To discuss how the effects of culture, economy, and geographical location intersect to form a gestalt triad determining health-related disparities in rural areas. Methods: We critically profile each component of the deterministic triad in shaping current health-related disparities in rural areas; evaluate the uniquely composed…
A Wide Area Risk Assessment Framework for Underwater Military Munitions Response
NASA Astrophysics Data System (ADS)
Holland, K. T.; Calantoni, J.
2017-12-01
Our objective was to develop a prototype statistical framework supporting Wide Area Assessment and Remedial Investigation decisions relating to the risk of unexploded ordnance and other military munitions concentrated in underwater environments. Decision making involving underwater munitions is inherently complex due to the high degree of uncertainty in the environmental conditions that force munitions responses (burial, decay, migration, etc.) and associated risks to the public. The prototype framework provides a consistent approach to accurately delineating contaminated areas at underwater munitions sites through the estimation of most probable concentrations. We adapted existing deterministic models and environmental data services for use within statistical modules that allowed the estimation of munition concentration given historic site information and environmental attributes. Ultimately this risk surface can be used to evaluate costs associated with various remediation approaches (e.g. removal, monitoring, etc.). Unfortunately, evaluation of the assessment framework was limited due to the lack of end-user data services from munition site managers. Of the 450 U.S. sites identified as having potential contamination with underwater munitions, assessment of available munitions information (including historic firing or disposal records, and recent ground-truth munitions samples) indicated very limited information in the databases. Example data types include the most probable munition types, approximate firing / disposal dates and locations, and any supportive munition survey or sampling results. However, the overall technical goal to integrate trained statistical belief networks with detailed geophysical knowledge of sites, of sensors and of the underwater environment was demonstrated and should allow probabilistic estimates of the most likely outcomes and tradeoffs while managing uncertainty associated with military munitions response.
Dynamic load synthesis for shock numerical simulation in space structure design
NASA Astrophysics Data System (ADS)
Monti, Riccardo; Gasbarri, Paolo
2017-08-01
From a mechanical point of view, pyroshock loads are the most stressing environments that space equipment experiences during its operating life. In general, the mechanical designer considers pyroshock analysis a very demanding constraint. Unfortunately, due to the non-linear behaviour of the structure under such loads, only experimental tests can demonstrate whether it is able to withstand these dynamic loads. Taking these considerations into account, some preliminary information about design correctness can be obtained by performing "ad hoc" numerical simulations, for example via commercial finite element software (e.g. MSC Nastran). Usually these numerical tools approach the shock solution in two ways: 1) a direct mode, using a time-dependent enforcement and evaluating the time response and space response as well as the internal forces; 2) a modal basis approach, considering a frequency-dependent load and evaluating internal forces in the frequency domain. The main aim of this paper is to develop a numerical tool to synthesize the time-dependent enforcement based on deterministic and/or genetic algorithm optimisers. In particular, starting from a specified spectrum in terms of the SRS (Shock Response Spectrum), a time-dependent discrete function, typically an acceleration profile, is obtained to force the equipment, simulating the shock event. The synthesis time and the interface with standard numerical codes are two of the main topics dealt with in the paper. In addition, a congruity and consistency methodology is presented to ensure that the identified time-dependent loads fully match the specified spectrum.
Turnes, Juan; Domínguez-Hernández, Raquel; Casado, Miguel Ángel
To evaluate the cost-effectiveness of a strategy based on direct-acting antivirals (DAAs) following the marketing of simeprevir and sofosbuvir (post-DAA) versus a pre-direct-acting antiviral strategy (pre-DAA) in patients with chronic hepatitis C, from the perspective of the Spanish National Health System. A decision tree combined with a Markov model was used to estimate the direct health costs (€, 2016) and health outcomes (quality-adjusted life years, QALYs) throughout the patient's life, with an annual discount rate of 3%. The sustained virological response, percentage of patients treated or not treated in each strategy, clinical characteristics of the patients, annual likelihood of transition, costs of treating and managing the disease, and utilities were obtained from the literature. The cost-effectiveness analysis was expressed as an incremental cost-effectiveness ratio (incremental cost per QALY gained). A deterministic sensitivity analysis and a probabilistic sensitivity analysis were performed. The post-DAA strategy showed higher health costs per patient (€30,944 vs. €23,707) than the pre-DAA strategy. However, it was associated with an increase of QALYs gained (15.79 vs. 12.83), showing an incremental cost-effectiveness ratio of €2,439 per QALY. The deterministic sensitivity analysis and the probabilistic sensitivity analysis showed the robustness of the results, with the post-DAA strategy being cost-effective in 99% of cases compared to the pre-DAA strategy. Compared to the pre-DAA strategy, the post-DAA strategy is efficient for the treatment of chronic hepatitis C in Spain, resulting in a much lower cost per QALY than the efficiency threshold used in Spain (€30,000 per QALY). Copyright © 2017 Elsevier España, S.L.U., AEEH y AEG. All rights reserved.
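A schematic sketch of how an incremental cost-effectiveness ratio falls out of a cohort Markov model: two strategies differing in sustained virological response (SVR) rate are run through a hypothetical three-state model and the incremental cost per QALY gained is computed. All transition probabilities, costs, utilities, and SVR rates below are invented placeholders, not the published inputs.

```python
import numpy as np

def markov_ce(p_svr, treat_cost, years=40, disc=0.03):
    """Schematic cohort model (states: chronic HCV, advanced disease, dead).
    All transition probabilities, costs and utilities are hypothetical."""
    P = np.array([[0.93, 0.05, 0.02],     # annual transitions for non-SVR patients
                  [0.00, 0.88, 0.12],
                  [0.00, 0.00, 1.00]])
    cost = np.array([1500.0, 9000.0, 0.0])    # annual cost per state (EUR)
    util = np.array([0.75, 0.55, 0.0])        # QALY weight per state
    state = np.array([1.0, 0.0, 0.0])         # non-SVR cohort starts in chronic state
    c, q = treat_cost, 0.0
    for y in range(years):
        d = 1.0 / (1.0 + disc) ** y
        # cured patients: assumed small annual cost and a constant utility of 0.85
        c += d * (p_svr * 300.0 + (1 - p_svr) * state @ cost)
        q += d * (p_svr * 0.85 + (1 - p_svr) * state @ util)
        state = state @ P
    return c, q

c_pre,  q_pre  = markov_ce(p_svr=0.55, treat_cost=0.0)      # hypothetical pre-DAA
c_post, q_post = markov_ce(p_svr=0.95, treat_cost=25000.0)  # hypothetical post-DAA
icer = (c_post - c_pre) / (q_post - q_pre)
print(f"incremental cost per QALY gained: {icer:,.0f} EUR")
```

A deterministic sensitivity analysis would rerun this calculation with each input varied one at a time, and a probabilistic one would draw the inputs from distributions, which is the structure of the analyses the abstract reports.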
Estimating the epidemic threshold on networks by deterministic connections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu
2014-12-15
For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Nonetheless, in these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since these deterministic connections are easier to detect than those stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
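The flavor of the approach can be illustrated with a spectral-radius calculation: for SIS-type dynamics the epidemic threshold scales as the inverse of the spectral radius of the adjacency matrix, so the deterministic backbone alone already yields an estimate (bound) of the threshold for the full network. The ring backbone and connection probability below are arbitrary choices, not the constructed models of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100

# Deterministic backbone: a ring where each node is linked to its 2 nearest
# neighbours on either side (these links are always present).
A_det = np.zeros((n, n))
for i in range(n):
    for k in (1, 2):
        A_det[i, (i + k) % n] = A_det[(i + k) % n, i] = 1

# Stochastic part: every remaining pair is connected independently with
# probability p (non-uniform probabilities would enter the same way).
p = 0.01
A_rand = (rng.random((n, n)) < p).astype(float)
A_rand = np.triu(A_rand, 1)
A_rand += A_rand.T
A_full = np.clip(A_det + A_rand, 0, 1)

def spectral_radius(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

# For SIS-type dynamics the epidemic threshold scales as 1 / spectral radius,
# so the deterministic part alone already brackets the full-network threshold.
print("1/rho, deterministic part :", 1 / spectral_radius(A_det))
print("1/rho, full network       :", 1 / spectral_radius(A_full))
```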
Experimental demonstration on the deterministic quantum key distribution based on entangled photons.
Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu
2016-02-10
As an important resource, entanglement light source has been used in developing quantum information technologies, such as quantum key distribution (QKD). There are few experiments implementing entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified "Ping-Pong" (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entanglement light sources in real-life fiber-based quantum communications.
Peak Dose Assessment for Proposed DOE-PPPO Authorized Limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maldonado, Delis
2012-06-01
The Oak Ridge Institute for Science and Education (ORISE), a U.S. Department of Energy (DOE) prime contractor, was contracted by the DOE Portsmouth/Paducah Project Office (DOE-PPPO) to conduct a peak dose assessment in support of the Authorized Limits Request for Solid Waste Disposal at Landfill C-746-U at the Paducah Gaseous Diffusion Plant (DOE-PPPO 2011a). The peak doses were calculated based on the DOE-PPPO Proposed Single Radionuclides Soil Guidelines and the DOE-PPPO Proposed Authorized Limits (AL) Volumetric Concentrations available in DOE-PPPO 2011a. This work is provided as an appendix to the Dose Modeling Evaluations and Technical Support Document for the Authorized Limits Request for the C-746-U Landfill at the Paducah Gaseous Diffusion Plant, Paducah, Kentucky (ORISE 2012). The receptors evaluated in ORISE 2012 were selected by the DOE-PPPO for the additional peak dose evaluations. These receptors included a Landfill Worker, Trespasser, Resident Farmer (onsite), Resident Gardener, Recreational User, Outdoor Worker and an Offsite Resident Farmer. The RESRAD (Version 6.5) and RESRAD-OFFSITE (Version 2.5) computer codes were used for the peak dose assessments. Deterministic peak dose assessments were performed for all the receptors and a probabilistic dose assessment was performed only for the Offsite Resident Farmer at the request of the DOE-PPPO. In a deterministic analysis, a single input value results in a single output value. In other words, a deterministic analysis uses single parameter values for every variable in the code. By contrast, a probabilistic approach assigns parameter ranges to certain variables, and the code randomly selects the values for each variable from the parameter range each time it calculates the dose (NRC 2006). The receptor scenarios, computer codes and parameter input files were previously used in ORISE 2012. A few modifications were made to the parameter input files as appropriate for this effort. Some of these changes included increasing the time horizon beyond 1,050 years (yr), and using the radionuclide concentrations provided by the DOE-PPPO as inputs into the codes. The deterministic peak doses were evaluated within time horizons of 70 yr (for the Landfill Worker and Trespasser), 1,050 yr, 10,000 yr and 100,000 yr (for the Resident Farmer [onsite], Resident Gardener, Recreational User, Outdoor Worker and Offsite Resident Farmer) at the request of the DOE-PPPO. The time horizons of 10,000 yr and 100,000 yr were used at the request of the DOE-PPPO for informational purposes only. The probabilistic peak of the mean dose assessment was performed for the Offsite Resident Farmer using Technetium-99 (Tc-99) and a time horizon of 1,050 yr. The results of the deterministic analyses indicate that among all receptors and time horizons evaluated, the highest projected dose, 2,700 mrem/yr, occurred for the Resident Farmer (onsite) at 12,773 yr. The exposure pathways contributing to the peak dose are ingestion of plants, external gamma, and ingestion of milk, meat and soil. However, this receptor is considered an implausible receptor. The only receptors considered plausible are the Landfill Worker, Recreational User, Outdoor Worker and the Offsite Resident Farmer. The maximum projected dose among the plausible receptors is 220 mrem/yr for the Outdoor Worker and it occurs at 19,045 yr. The exposure pathways contributing to the dose for this receptor are external gamma and soil ingestion.
The results of the probabilistic peak of the mean dose analysis for the Offsite Resident Farmer indicate that the average (arithmetic mean) of the peak of the mean doses for this receptor is 0.98 mrem/yr and it occurs at 1,050 yr. This dose corresponds to Tc-99 within the time horizon of 1,050 yr.
Application of Wavelet Filters in an Evaluation of ...
Air quality model evaluation can be enhanced with time-scale specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time scale information in observed ozone is not well captured by deterministic models, and its incorporation into model performance metrics leads one to devote resources to stochastic variations in model outputs. In this analysis, observations are compared with model outputs at seasonal, weekly, diurnal and intra-day time scales. Filters provide frequency-specific information that can be used to compare the strength (amplitude) and timing (phase) of observations and model estimates. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollu
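A rough sketch of the kind of time-scale specific comparison described, using a plain FFT band-pass as a crude stand-in for the wavelet filters: synthetic "observed" and "model" ozone series are split into intra-day, diurnal, weekly and seasonal bands, and amplitude ratios and timing lags are reported per band. The synthetic series and band edges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
hours = np.arange(24 * 365)

# Synthetic "observed" ozone and a "model" that shifts the diurnal peak by 2 h
# and underestimates the high-frequency variability (illustrative only).
obs = 40 + 10*np.sin(2*np.pi*hours/(24*365)) + 8*np.sin(2*np.pi*hours/24) \
      + 3*rng.standard_normal(hours.size)
mod = 40 + 10*np.sin(2*np.pi*hours/(24*365)) + 8*np.sin(2*np.pi*(hours-2)/24) \
      + 1*rng.standard_normal(hours.size)

def band(x, lo_h, hi_h):
    """Keep only Fourier components with periods between lo_h and hi_h hours
    (a crude stand-in for the wavelet filters used in the evaluation)."""
    X = np.fft.rfft(x - x.mean())
    freq = np.fft.rfftfreq(x.size, d=1.0)           # cycles per hour
    keep = (freq >= 1.0 / hi_h) & (freq <= 1.0 / lo_h)
    return np.fft.irfft(np.where(keep, X, 0), n=x.size)

for name, lo, hi in [("intra-day", 2, 11), ("diurnal", 11, 36),
                     ("weekly", 36, 24*10), ("seasonal", 24*60, 24*400)]:
    o, m = band(obs, lo, hi), band(mod, lo, hi)
    amp_ratio = m.std() / o.std()
    lag = np.argmax(np.correlate(m, o, mode="full")) - (o.size - 1)
    print(f"{name:9s}  amplitude ratio = {amp_ratio:4.2f}   lag = {lag:+d} h")
```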
Agent-based modeling of endotoxin-induced acute inflammatory response in human blood leukocytes.
Dong, Xu; Foteinou, Panagiota T; Calvano, Steven E; Lowry, Stephen F; Androulakis, Ioannis P
2010-02-18
Inflammation is a highly complex biological response evoked by many stimuli. A persistent challenge in modeling this dynamic process has been the nonlinear nature of the response, which precludes the single-variable assumption. Systems-based approaches offer a promising possibility for understanding inflammation in its homeostatic context. In order to study the underlying complexity of the acute inflammatory response, an agent-based framework is developed that models the emerging host response as the outcome of orchestrated interactions associated with intricate signaling cascades and intercellular immune system interactions. An agent-based modeling (ABM) framework is proposed to study the nonlinear dynamics of acute human inflammation. The model is implemented using NetLogo software. Interacting agents involve either inflammation-specific molecules or cells essential for the propagation of the inflammatory reaction across the system. The spatial orientation of molecule interactions involved in signaling cascades, coupled with cellular heterogeneity, is further taken into account. The proposed in silico model is evaluated through its ability to successfully reproduce a self-limited inflammatory response as well as a series of scenarios indicative of the nonlinear dynamics of the response. Such scenarios involve either a persistent (non)infectious response or innate immune tolerance and potentiation effects followed by perturbations in intracellular signaling molecules and cascades. The ABM framework developed in this study provides insight on the stochastic interactions of the mediators involved in the propagation of endotoxin signaling at the cellular response level. The simulation results are in accordance with our prior research effort associated with the development of deterministic human inflammation models that include transcriptional dynamics, signaling, and physiological components. The hypothetical scenarios explored in this study would potentially improve our understanding of how manipulating the behavior of the molecular species could manifest into emergent behavior of the overall system.
Characterization of normality of chaotic systems including prediction and detection of anomalies
NASA Astrophysics Data System (ADS)
Engler, Joseph John
Accurate prediction and control pervade domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented by linear, periodic, or random data. It has been shown that these systems exhibit deterministic chaotic behavior. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear to be random or quasi-periodic. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions, manifested through rapid divergence of states initially close to one another. Due to this characterization, it has been deemed impossible to accurately predict future states of these systems for longer time scales. Fortunately, the deterministic nature of these systems allows for accurate short-term predictions, given that the dynamics of the system are well understood. This fact has been exploited in the research community and has resulted in various algorithms for short-term predictions. Detection of normality in deterministically chaotic systems is critical to understanding the system sufficiently to be able to predict future states. Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may result in bifurcation of the normal states, further complicates the problem. The detection of anomalies and prediction of future states of the chaotic system allows for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detecting anomalous behavior, and more accurately predicting future states of the system. Additionally, the ability to detect subtle system state changes is discussed. The dissertation addresses these goals by proposing new representational techniques and novel prediction methodologies. The value and efficiency of these methods are explored in various case studies. Presented is an overview of chaotic systems with examples taken from the real world. A representation schema for rapid understanding of the various states of deterministically chaotic systems is presented. This schema is then used to detect anomalies and system state changes. Additionally, a novel prediction methodology which utilizes Lyapunov exponents to facilitate longer-term prediction accuracy is presented and compared with other nonlinear prediction methodologies. These novel methodologies are then demonstrated on applications such as wind energy, cyber security and classification of social networks.
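One standard ingredient of such analyses, estimating the largest Lyapunov exponent from the divergence of initially close trajectories, can be sketched as follows. The example uses the logistic map at r = 4 (whose exponent is known to be ln 2) and is not the representational schema or prediction methodology of the dissertation itself.

    import numpy as np

    def logistic_orbit(x0, r=4.0, n=25):
        """Iterate the logistic map x -> r*x*(1-x)."""
        x = np.empty(n)
        x[0] = x0
        for i in range(1, n):
            x[i] = r * x[i - 1] * (1 - x[i - 1])
        return x

    # Average exponential divergence of two orbits started a tiny distance apart.
    eps, steps = 1e-9, 25
    rng = np.random.default_rng(1)
    rates = []
    for _ in range(200):
        x0 = rng.uniform(0.1, 0.9)
        a = logistic_orbit(x0, n=steps)
        b = logistic_orbit(x0 + eps, n=steps)
        sep = np.abs(a - b)
        sep[sep == 0] = 1e-16
        # slope of log(separation) vs. time approximates the largest Lyapunov exponent
        rates.append(np.polyfit(np.arange(steps), np.log(sep), 1)[0])

    print("estimated largest Lyapunov exponent: %.3f (ln 2 = 0.693 for r = 4)" % np.mean(rates))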
Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.
Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R
2018-05-01
A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.
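The basic idea of fitting a deterministic evolution model to time-resolved frequency data can be sketched as below, using a plain haploid selection recursion and least squares; this is a toy stand-in, not the delay-deterministic model of the paper, and the noise level and selection coefficient are invented.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def deterministic_trajectory(x0, s, generations):
        """Haploid selection: x_{t+1} = x_t (1 + s) / (1 + s x_t)."""
        x = np.empty(generations)
        x[0] = x0
        for t in range(1, generations):
            x[t] = x[t - 1] * (1 + s) / (1 + s * x[t - 1])
        return x

    # Simulated noisy observations of a variant under selection (true s = 0.08).
    gens = 60
    rng = np.random.default_rng(3)
    obs = deterministic_trajectory(0.05, 0.08, gens) + rng.normal(0, 0.02, gens)

    # Infer s by least squares against the deterministic model.
    fit = minimize_scalar(
        lambda s: np.sum((deterministic_trajectory(obs[0], s, gens) - obs) ** 2),
        bounds=(-0.5, 0.5), method="bounded")
    print("inferred selection coefficient: %.3f" % fit.x)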
Insights into the deterministic skill of air quality ensembles ...
Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10). Apart from the unconditional ensemble average, the approach behind the other three methods relies on adding optimum weights to members or constraining the ensemble to those members that meet certain conditions in time or frequency domain. The two different datasets were created for the first and second phase of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent we can extract predictable signals from an ensemble with superior skill over the single models and the ensemble mean. Verification statistics show that the deterministic models simulate better O3 than NO2 and PM10, linked to different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill compared to each stati
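One of the simpler weighting strategies mentioned above, adding optimum weights to ensemble members, can be illustrated by fitting least-squares weights against observations on a training period and comparing with the unconditional ensemble mean. The members, their error characteristics and the train/test split below are synthetic assumptions, not AQMEII data.

    import numpy as np

    rng = np.random.default_rng(7)
    n_times, n_models = 500, 4

    truth = 30 + 10 * np.sin(np.linspace(0, 20, n_times))
    # Synthetic members: each has its own bias, amplitude error and noise.
    members = np.stack([a * truth + b + rng.normal(0, sd, n_times)
                        for a, b, sd in ((0.9, 5, 3), (1.1, -2, 4), (0.8, 8, 2), (1.0, 1, 6))])
    obs = truth + rng.normal(0, 1, n_times)

    ens_mean = members.mean(axis=0)
    # Optimum (least-squares) weights plus intercept, fitted on the training portion.
    X = np.vstack([members, np.ones(n_times)]).T
    w, *_ = np.linalg.lstsq(X[:300], obs[:300], rcond=None)
    weighted = X[300:] @ w

    rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
    print("RMSE ensemble mean     : %.2f" % rmse(ens_mean[300:], obs[300:]))
    print("RMSE weighted ensemble : %.2f" % rmse(weighted, obs[300:]))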
NASA Astrophysics Data System (ADS)
Mouloud, Hamidatou
2016-04-01
The objective of this paper is to analyze the seismic activity and the statistical treatment of the seismicity catalog of the Constantine region between 1357 and 2014, comprising 7007 seismic events. Our research is a contribution to improving seismic risk management by evaluating the seismic hazard in North-East Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach by using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes, (ii) assessment of their geological, geophysical and geometric characteristics, (iii) identification of the attenuation pattern of seismic motion, (iv) calculation of the hazard at a site and finally (v) hazard mapping for a region. In this study, the procedure for earthquake hazard evaluation recently developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.
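A stripped-down version of the classical Cornell PSHA integration described above is sketched below: the annual exceedance rate of a ground-motion level is accumulated over magnitude and distance bins as recurrence rate times the probability of exceedance given a lognormal attenuation relation. The Gutenberg-Richter parameters, distance distribution and attenuation coefficients are placeholders, not the Constantine-region values.

    import numpy as np
    from scipy.stats import norm

    # Hypothetical area source with Gutenberg-Richter recurrence.
    mags = np.arange(5.0, 7.6, 0.1)
    dists = np.arange(5.0, 105.0, 5.0)                      # km
    a_gr, b_gr = 3.0, 1.0                                   # placeholder G-R parameters
    cum_rate = 10 ** (a_gr - b_gr * mags)                   # annual rate of events >= m
    rate_m = -np.diff(np.append(cum_rate, 0.0))             # incremental annual rates
    p_r = np.full(dists.size, 1.0 / dists.size)             # uniform distance distribution

    def ln_pga_median(m, r):
        """Generic attenuation form ln(PGA[g]) = c0 + c1*m - c2*ln(r + 10); placeholders."""
        return -3.5 + 0.9 * m - 1.2 * np.log(r + 10.0)

    sigma_ln = 0.6
    pga_grid = np.logspace(-2, 0, 30)                       # 0.01 g to 1 g

    hazard = np.zeros_like(pga_grid)
    for i, a in enumerate(pga_grid):
        for m, rm in zip(mags, rate_m):
            for r, pr in zip(dists, p_r):
                p_exceed = 1.0 - norm.cdf(np.log(a), ln_pga_median(m, r), sigma_ln)
                hazard[i] += rm * pr * p_exceed

    # PGA with a 10% probability of exceedance in 50 years (return period ~475 yr).
    target = -np.log(1 - 0.1) / 50.0
    print("475-yr PGA: %.3f g" % np.interp(target, hazard[::-1], pga_grid[::-1]))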
Comparison of the economic impact of different wind power forecast systems for producers
NASA Astrophysics Data System (ADS)
Alessandrini, S.; Davò, F.; Sperati, S.; Benini, M.; Delle Monache, L.
2014-05-01
Deterministic forecasts of wind production for the next 72 h at a single wind farm or at the regional level are among end-users' main requirements. However, for optimal management of wind power production and distribution it is important to provide, together with a deterministic prediction, a probabilistic one. A deterministic forecast consists of a single value for each time in the future for the variable to be predicted, while probabilistic forecasting informs on probabilities for potential future events. This means providing information about uncertainty (i.e. a forecast of the PDF of power) in addition to the commonly provided single-valued power prediction. A significant probabilistic application is related to the trading of energy in day-ahead electricity markets. It has been shown that, when trading future wind energy production, using probabilistic wind power predictions can lead to higher benefits than those obtained by using deterministic forecasts alone. In fact, by using probabilistic forecasting it is possible to solve economic model equations trying to optimize the revenue for the producer depending, for example, on the specific penalties for forecast errors valid in that market. In this work we have applied a probabilistic wind power forecast system based on the "analog ensemble" method for bidding wind energy during the day-ahead market in the case of a wind farm located in Italy. The actual hourly income for the plant is computed considering the actual selling energy prices and penalties proportional to the unbalancing, defined as the difference between the day-ahead offered energy and the actual production. The economic benefit of using a probabilistic approach for day-ahead energy bidding is evaluated, resulting in an increase of 23% of the annual income for a wind farm owner in the case of knowing "a priori" the future energy prices. The uncertainty on price forecasting partly reduces the economic benefit gained by using a probabilistic energy forecast system.
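The analog-ensemble idea used above can be sketched very simply: for a new deterministic forecast, find the k most similar past forecasts and take the corresponding observed productions as an empirical predictive distribution. The predictors, distance metric and weights below are assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(11)

    # Historical archive: deterministic NWP predictors and the observed wind power.
    n_hist = 2000
    wind_speed = rng.weibull(2.0, n_hist) * 8.0
    wind_dir = rng.uniform(0, 360, n_hist)
    power_obs = np.clip((wind_speed / 12.0) ** 3, 0, 1) + rng.normal(0, 0.05, n_hist)

    def analog_ensemble(speed_fc, dir_fc, k=20):
        """Return the k observed powers whose past forecasts best match the new one."""
        # simple weighted distance in normalized predictor space (weights are assumptions)
        d = (np.abs(wind_speed - speed_fc) / wind_speed.std()
             + 0.3 * np.abs((wind_dir - dir_fc + 180) % 360 - 180) / 180.0)
        return power_obs[np.argsort(d)[:k]]

    ens = analog_ensemble(speed_fc=9.5, dir_fc=220.0)
    print("deterministic-style point forecast (ensemble mean): %.2f" % ens.mean())
    print("10th-90th percentile band: %.2f - %.2f" % tuple(np.percentile(ens, [10, 90])))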
Controllability of Deterministic Networks with the Identical Degree Sequence
Ma, Xiujuan; Zhao, Haixing; Wang, Binghong
2015-01-01
Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network depends on the network's topology. Liu, Barabási et al. speculated that the degree distribution was one of the most important factors affecting controllability for arbitrary complex directed networks with random link weights. In this paper, we analysed the effect of the degree distribution on the controllability of unweighted, undirected deterministic networks. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analysed the controllability of two such deterministic networks ((1, 3)-flower and (2, 2)-flower) in detail by exact controllability theory and give exact results for the minimum number of driver nodes for the two networks. In simulation, we compare the controllability of (x,y)-flower networks. Our results show that the family of (x,y)-flower networks have the same degree sequence, but their controllability is totally different. Therefore, the degree distribution itself is not sufficient to characterize the controllability of unweighted, undirected deterministic networks. PMID:26020920
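For undirected, unweighted networks, exact controllability theory gives the minimum number of driver nodes as the largest eigenvalue multiplicity of the adjacency matrix. The sketch below applies that rule to a small star graph; it does not construct the (x,y)-flower networks of the paper.

    import numpy as np

    def min_driver_nodes(adjacency):
        """Exact controllability for undirected, unweighted networks:
        N_D = maximum multiplicity among the adjacency-matrix eigenvalues."""
        eigvals = np.linalg.eigvalsh(adjacency)
        eigvals = np.round(eigvals, 8)                 # group numerically equal eigenvalues
        _, counts = np.unique(eigvals, return_counts=True)
        return int(counts.max())

    # Example: a star graph on 6 nodes (centre plus 5 leaves).
    n = 6
    A = np.zeros((n, n))
    A[0, 1:] = A[1:, 0] = 1
    print("minimum number of driver nodes:", min_driver_nodes(A))   # leaves share eigenvalue 0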
Inverse kinematic problem for a random gradient medium in geometric optics approximation
NASA Astrophysics Data System (ADS)
Petersen, N. V.
1990-03-01
Scattering at random inhomogeneities in a gradient medium results in systematic deviations of the rays and travel times of refracted body waves from those corresponding to the deterministic velocity component. The character of the difference depends on the parameters of the deterministic and random velocity components. However, at great distances from the source, independently of the velocity parameters (weakly or strongly inhomogeneous medium), the most probable depth of the ray turning point is smaller than that corresponding to the deterministic velocity component, the most probable travel times also being lower. The relative uncertainty in the deterministic velocity component, derived from the mean travel times using methods developed for laterally homogeneous media (for instance, the Herglotz-Wiechert method), is systematic in character, but does not exceed the contrast of the velocity inhomogeneities in magnitude. The gradient of the deterministic velocity component has a significant effect on the travel-time fluctuations. The variance at great distances from the source is mainly controlled by shallow inhomogeneities. The travel-time fluctuations are studied only for weakly inhomogeneous media.
Quasi-Static Probabilistic Structural Analyses Process and Criteria
NASA Technical Reports Server (NTRS)
Goldberg, B.; Verderaime, V.
1999-01-01
Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insight and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout the prevailing deterministic stress processes leads to a safety factor in statistical format, which, integrated into the safety index, provides a safety factor and first-order reliability relationship. The embedded safety factor in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with the traditional safety factor methods and NASA Std. 5001 criteria.
Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.
Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M
2016-12-01
Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.
Predicting reasoning from memory.
Heit, Evan; Hayes, Brett K
2011-02-01
In an effort to assess the relations between reasoning and memory, in 8 experiments, the authors examined how well responses on an inductive reasoning task are predicted from responses on a recognition memory task for the same picture stimuli. Across several experimental manipulations, such as varying study time, presentation frequency, and the presence of stimuli from other categories, there was a high correlation between reasoning and memory responses (average r = .87), and these manipulations showed similar effects on the 2 tasks. The results point to common mechanisms underlying inductive reasoning and recognition memory abilities. A mathematical model, GEN-EX (generalization from examples), derived from exemplar models of categorization, is presented, which predicts both reasoning and memory responses from pairwise similarities among the stimuli, allowing for additional influences of subtyping and deterministic responding. (c) 2010 APA, all rights reserved.
Reactor Pressure Vessel Fracture Analysis Capabilities in Grizzly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, Benjamin; Backman, Marie; Chakraborty, Pritam
2015-03-01
Efforts have been underway to develop fracture mechanics capabilities in the Grizzly code to enable it to be used to perform deterministic fracture assessments of degraded reactor pressure vessels (RPVs). Development in prior years has resulted in a capability to calculate J-integrals. For this application, these are used to calculate stress intensity factors for cracks to be used in deterministic linear elastic fracture mechanics (LEFM) assessments of fracture in degraded RPVs. The J-integral can only be used to evaluate stress intensity factors for axis-aligned flaws because it can only be used to obtain the stress intensity factor for pure Mode I loading. Off-axis flaws will be subjected to mixed-mode loading. For this reason, work has continued to expand the set of fracture mechanics capabilities to permit it to evaluate off-axis flaws. This report documents the following work to enhance Grizzly’s engineering fracture mechanics capabilities for RPVs: • Interaction integral and T-stress: To obtain mixed-mode stress intensity factors, a capability to evaluate interaction integrals for 2D or 3D flaws has been developed. A T-stress evaluation capability has been developed to evaluate the constraint at crack tips in 2D or 3D. Initial verification testing of these capabilities is documented here. • Benchmarking for axis-aligned flaws: Grizzly’s capabilities to evaluate stress intensity factors for axis-aligned flaws have been benchmarked against calculations for the same conditions in FAVOR. • Off-axis flaw demonstration: The newly-developed interaction integral capabilities are demonstrated in an application to calculate the mixed-mode stress intensity factors for off-axis flaws. • Other code enhancements: Other enhancements to the thermomechanics capabilities that relate to the solution of the engineering RPV fracture problem are documented here.
Efficient room-temperature source of polarized single photons
Lukishova, Svetlana G.; Boyd, Robert W.; Stroud, Carlos R.
2007-08-07
An efficient technique for producing deterministically polarized single photons uses liquid-crystal hosts of either monomeric or oligomeric/polymeric form to preferentially align the single emitters for maximum excitation efficiency. Deterministic molecular alignment also provides deterministically polarized output photons. Planar-aligned cholesteric liquid crystal hosts are used as 1-D photonic-band-gap microcavities, tunable to the emitter fluorescence band, to increase source efficiency, and liquid crystal technology is used to prevent emitter bleaching. Emitters comprise soluble dyes, inorganic nanocrystals or trivalent rare-earth chelates.
Goeree, Ron; Chiva-Razavi, Sima; Gunda, Praveen; Graham, Christopher N; Miles, LaStella; Nikoglou, Efthalia; Jugl, Steffen M; Gladman, Dafna D
2018-02-01
The study evaluates the cost-effectiveness of secukinumab, a fully human monoclonal antibody that selectively neutralizes interleukin (IL)-17A, vs currently licensed biologic treatments in patients with active psoriatic arthritis (PsA) from a Canadian healthcare system perspective. A decision analytic semi-Markov model evaluated the cost-effectiveness of secukinumab 150 mg and 300 mg compared to subcutaneous biologics adalimumab, certolizumab pegol, etanercept, golimumab, and ustekinumab, and intravenous biologics infliximab and infliximab biosimilar in biologic-naive and biologic-experienced patients over a lifetime horizon. The response to treatments was evaluated after 12 weeks by PsA Response Criteria (PsARC) response rates. Non-responders or patients discontinuing the initial line of biologic treatment were allowed to switch to subsequent-line biologics. Model input parameters (Psoriasis Area Severity Index [PASI], Health Assessment Questionnaire [HAQ], withdrawal rates, costs, and resource use) were collected from clinical trials, published literature, and other Canadian sources. Benefits were expressed as quality-adjusted life years (QALYs). An annual discount rate of 5% was applied to costs and benefits. The robustness of the study findings was evaluated via sensitivity analyses. Biologic-naive patients treated with secukinumab achieved the highest number of QALYs (8.54) at the lowest cost (CAD 925,387) over a lifetime horizon vs all comparators. Secukinumab dominated all treatments, except for infliximab and its biosimilar, which achieved minimally more QALYs (8.58). However, infliximab and its biosimilar incurred more costs than secukinumab (infliximab: CAD 1,015,437; infliximab biosimilar: CAD 941,004), resulting in higher cost-effectiveness estimates relative to secukinumab. In the biologic-experienced population, secukinumab dominated all treatments as it generated more QALYs (8.89) at lower costs (CAD 954,692). Deterministic sensitivity analyses indicated the results were most sensitive to variation in PsARC response rates, change in HAQ, and utility values in both populations. Secukinumab is either dominant or cost-effective vs all licensed biologics for the treatment of active PsA in biologic-naive and biologic-experienced populations in Canada.
Zhu, Wei; Wang, Wei; Yuan, Gannan
2016-06-01
To improve the tracking accuracy, model estimation accuracy and response speed of multiple-model maneuvering target tracking, the interacting multiple model five-degree cubature Kalman filter (IMM5CKF) is proposed in this paper. In the proposed algorithm, the interacting multiple model (IMM) algorithm processes all the models through a Markov chain to simultaneously enhance the model tracking accuracy of target tracking. Then a five-degree cubature Kalman filter (5CKF) evaluates the surface integral by a higher-degree but deterministic odd-ordered spherical cubature rule to improve the tracking accuracy and the model-switch sensitivity of the IMM algorithm. Finally, the simulation results demonstrate that the proposed algorithm exhibits quick and smooth switching when disposing of different maneuver models, and it also performs better than the interacting multiple model cubature Kalman filter (IMMCKF), the interacting multiple model unscented Kalman filter (IMMUKF), the 5CKF and the optimal mode transition matrix IMM (OMTM-IMM).
Reliability assessment of slender concrete columns at the stability failure
NASA Astrophysics Data System (ADS)
Valašík, Adrián; Benko, Vladimír; Strauss, Alfred; Täubling, Benjamin
2018-01-01
The European Standard for the design of concrete columns using non-linear methods shows deficiencies in terms of global reliability in cases where the columns fail by loss of stability. Buckling failure is a brittle failure which occurs without warning, and the probability of its occurrence depends on the column's slenderness. Experiments with slender concrete columns were carried out in cooperation with STRABAG Bratislava LTD in the Central Laboratory of the Faculty of Civil Engineering SUT in Bratislava. The following article aims to compare the global reliability of slender concrete columns with slenderness of 90 and higher. The columns were designed according to methods offered by EN 1992-1-1 [1]. The experiments were used as the basis for deterministic non-linear modelling of the columns and the subsequent probabilistic evaluation of the variability of the structural response. The final results may be used as thresholds for loading of the produced structural elements, and they aim to present probabilistic design as less conservative than classic partial-safety-factor-based design and the alternative ECOV method.
Research on response spectrum of dam based on scenario earthquake
NASA Astrophysics Data System (ADS)
Zhang, Xiaoliang; Zhang, Yushan
2017-10-01
Taking a large hydropower station as an example, the response spectrum based on a scenario earthquake is determined. Firstly, the potential source with the greatest contribution to the site hazard is determined on the basis of the results of probabilistic seismic hazard analysis (PSHA). Secondly, the magnitude and epicentral distance of the scenario earthquake are calculated according to the main faults and the historical earthquakes of the potential seismic source zone. Finally, the response spectrum of the scenario earthquake is calculated using the Next Generation Attenuation (NGA) relations. The response spectrum obtained with the scenario-earthquake method is lower than the probability-consistent response spectrum obtained with the PSHA method. The empirical analysis shows that the scenario-earthquake response spectrum accounts for both the probability level and the structural factors, and combines the advantages of the deterministic and probabilistic seismic hazard analysis methods. It is easy to accept in practice and provides a basis for the seismic design of hydraulic engineering structures.
Genetic progress in multistage dairy cattle breeding schemes using genetic markers.
Schrooten, C; Bovenhuis, H; van Arendonk, J A M; Bijma, P
2005-04-01
The aim of this paper was to explore general characteristics of multistage breeding schemes and to evaluate multistage dairy cattle breeding schemes that use information on quantitative trait loci (QTL). Evaluation was either for additional genetic response or for reduction in number of progeny-tested bulls while maintaining the same response. The reduction in response in multistage breeding schemes relative to comparable single-stage breeding schemes (i.e., with the same overall selection intensity and the same amount of information in the final stage of selection) depended on the overall selection intensity, the selection intensity in the various stages of the breeding scheme, and the ratio of the accuracies of selection in the various stages of the breeding scheme. When overall selection intensity was constant, reduction in response increased with increasing selection intensity in the first stage. The decrease in response was highest in schemes with lower overall selection intensity. Reduction in response was limited in schemes with low to average emphasis on first-stage selection, especially if the accuracy of selection in the first stage was relatively high compared with the accuracy in the final stage. Closed nucleus breeding schemes in dairy cattle that use information on QTL were evaluated by deterministic simulation. In the base scheme, the selection index consisted of pedigree information and own performance (dams), or pedigree information and performance of 100 daughters (sires). In alternative breeding schemes, information on a QTL was accounted for by simulating an additional index trait. The fraction of the variance explained by the QTL determined the correlation between the additional index trait and the breeding goal trait. Response in progeny test schemes relative to a base breeding scheme without QTL information ranged from +4.5% (QTL explaining 5% of the additive genetic variance) to +21.2% (QTL explaining 50% of the additive genetic variance). A QTL explaining 5% of the additive genetic variance allowed a 35% reduction in the number of progeny tested bulls, while maintaining genetic response at the level of the base scheme. Genetic progress was up to 31.3% higher for schemes with increased embryo production and selection of embryos based on QTL information. The challenge for breeding organizations is to find the optimum breeding program with regard to additional genetic progress and additional (or reduced) cost.
NASA Technical Reports Server (NTRS)
Tilley, David G.
1987-01-01
NASA Space Shuttle Challenger SIR-B ocean scenes are used to derive directional wave spectra for which speckle noise is modeled as a function of Rayleigh random phase coherence downrange and Poisson random amplitude errors inherent in the Doppler measurement of along-track position. A Fourier filter that preserves SIR-B image phase relations is used to correct the stationary and dynamic response characteristics of the remote sensor and scene correlator, as well as to subtract an estimate of the speckle noise component. A two-dimensional map of sea surface elevation is obtained after the filtered image is corrected for both random and deterministic motions.
NASA Astrophysics Data System (ADS)
Fraldi, M.; Perrella, G.; Ciervo, M.; Bosia, F.; Pugno, N. M.
2017-09-01
Very recently, a Weibull-based probabilistic strategy has been successfully applied to bundles of wires to determine their overall stress-strain behaviour, also capturing previously unpredicted nonlinear and post-elastic features of hierarchical strands. This approach is based on the so-called "Equal Load Sharing (ELS)" hypothesis, by virtue of which, when a wire breaks, the load acting on the strand is homogeneously redistributed among the surviving wires. Despite the overall effectiveness of the method, some discrepancies between theoretical predictions and in silico Finite Element-based simulations or experimental findings might arise when more complex structures are analysed, e.g. helically arranged bundles. To overcome these limitations, an enhanced hybrid approach is proposed in which the probability of rupture is combined with a deterministic mechanical model of a strand constituted by helically-arranged and hierarchically-organized wires. The analytical model is validated by comparing its predictions with both Finite Element simulations and experimental tests. The results show that generalized stress-strain responses - incorporating tension/torsion coupling - are naturally found and, once one or more elements break, the competition between geometry and mechanics of the strand microstructure, i.e. the different cross sections and helical angles of the wires in the different hierarchical levels of the strand, determines the no longer homogeneous stress redistribution among the surviving wires, whose fate is hence governed by a "Hierarchical Load Sharing" criterion.
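The equal-load-sharing Weibull bundle mentioned above has a simple closed-form expected response, sigma(eps) = E*eps*exp(-(eps/eps0)^m), which the sketch below evaluates and compares with the analytical peak strain eps0*m^(-1/m). The elastic modulus and Weibull parameters are assumed values; the hierarchical, helical load-sharing model of the paper is not reproduced.

    import numpy as np

    E = 200e3            # MPa, wire Young's modulus (assumed)
    eps0, m = 0.02, 5.0  # Weibull scale strain and shape modulus (assumed)

    eps = np.linspace(0.0, 0.06, 300)
    surviving_fraction = np.exp(-(eps / eps0) ** m)          # 1 - F(eps), Weibull survival
    stress = E * eps * surviving_fraction                    # ELS bundle stress-strain curve

    peak = stress.argmax()
    print("bundle strength %.0f MPa at strain %.3f" % (stress[peak], eps[peak]))
    # Analytical ELS optimum: eps* = eps0 * m**(-1/m)
    print("analytical peak strain %.3f" % (eps0 * m ** (-1.0 / m)))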
Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN
2011-08-23
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoreplicant structure coupled to a surface of the substrate.
Stochasticity and determinism in models of hematopoiesis.
Kimmel, Marek
2014-01-01
This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.
Hofer, Florian; Achelrod, Dmitrij; Stargardt, Tom
2016-12-01
Chronic obstructive pulmonary disease (COPD) poses major challenges for health care systems. Previous studies suggest that telemonitoring could be effective in preventing hospitalisations and hence reduce costs. The aim was to evaluate whether telemonitoring interventions for COPD are cost-effective from the perspective of German statutory sickness funds. A cost-utility analysis was conducted using a combination of a Markov model and a decision tree. Telemonitoring as an add-on to standard treatment was compared with standard treatment alone. The model consisted of four transition stages to account for COPD severity, and a terminal stage for death. Within each cycle, the frequency of exacerbations as well as costs (in 2015 euros) and quality-adjusted life years (QALYs) for each stage were calculated. Values for input parameters were taken from the literature. Deterministic and probabilistic sensitivity analyses were conducted. In the base case, telemonitoring led to an increase in incremental costs (€866 per patient) but also in incremental QALYs (0.05 per patient). The incremental cost-effectiveness ratio (ICER) was thus €17,410 per QALY gained. A deterministic sensitivity analysis showed that the hospitalisation rate and the costs of telemonitoring equipment greatly affected the results. The probabilistic ICER averaged €34,432 per QALY (95 % confidence interval 12,161-56,703). We provide evidence that telemonitoring may be cost-effective in Germany from a payer's point of view. This holds even after deterministic and probabilistic sensitivity analyses.
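The comparison logic of such cost-utility models (discounted costs and QALYs accumulated over a Markov cohort trace, divided to give an ICER) can be sketched as below. The states, transition probabilities, costs and utilities are invented placeholders, not the German inputs used in the study.

    import numpy as np

    states = ["moderate", "severe", "very severe", "dead"]
    P_standard = np.array([[0.88, 0.08, 0.02, 0.02],
                           [0.00, 0.85, 0.10, 0.05],
                           [0.00, 0.00, 0.90, 0.10],
                           [0.00, 0.00, 0.00, 1.00]])
    P_telemon = P_standard.copy()
    P_telemon[0, :2] += (0.02, -0.02)             # assumed slower progression with telemonitoring
    P_telemon[1, 1:3] += (0.02, -0.02)

    cost = np.array([1500.0, 4000.0, 8000.0, 0.0])   # annual cost per state (placeholder)
    utility = np.array([0.75, 0.60, 0.45, 0.0])      # annual QALY weight per state (placeholder)
    telemon_cost = 500.0                             # annual equipment/service cost (placeholder)
    discount, cycles = 0.03, 20

    def run_cohort(P, extra_cost=0.0):
        x = np.array([1.0, 0.0, 0.0, 0.0])           # cohort starts in "moderate"
        total_cost = total_qaly = 0.0
        for t in range(cycles):
            d = 1.0 / (1.0 + discount) ** t
            total_cost += d * (x @ cost + extra_cost * (1 - x[-1]))
            total_qaly += d * (x @ utility)
            x = x @ P
        return total_cost, total_qaly

    c0, q0 = run_cohort(P_standard)
    c1, q1 = run_cohort(P_telemon, extra_cost=telemon_cost)
    print("ICER: %.0f EUR per QALY gained" % ((c1 - c0) / (q1 - q0)))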
Optimization Testbed Cometboards Extended into Stochastic Domain
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.; Patnaik, Surya N.
2010-01-01
COMparative Evaluation Testbed of Optimization and Analysis Routines for the Design of Structures (CometBoards) is a multidisciplinary design optimization software package. It was originally developed for deterministic calculation. It has now been extended into the stochastic domain for structural design problems. For deterministic problems, CometBoards is introduced through its subproblem solution strategy as well as the approximation concept in optimization. In the stochastic domain, a design is formulated as a function of the risk or reliability. The optimum solution, including the weight of the structure, is also obtained as a function of reliability. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to 50 percent probability of success, or one failure in two samples. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure that corresponded to unity for reliability. Weight can be reduced to a small value for the most failure-prone design with a compromised reliability approaching zero. The stochastic design optimization (SDO) capability for an industrial problem was obtained by combining three codes: the MSC/Nastran code was the deterministic analysis tool; the fast probabilistic integrator (FPI) module of the NESSUS software was the probabilistic calculator; and CometBoards became the optimizer. The SDO capability requires a finite element structural model, a material model, a load model, and a design model. The stochastic optimization concept is illustrated considering an academic example and a real-life airframe component made of metallic and composite materials.
Prediction of the Arctic Oscillation in Boreal Winter by Dynamical Seasonal Forecasting Systems
NASA Technical Reports Server (NTRS)
Kang, Daehyun; Lee, Myong-In; Im, Jungho; Kim, Daehyun; Kim, Hye-Mi; Kang, Hyun-Suk; Schubert, Siegfried D.; Arribas, Alberto; MacLachlan, Craig
2013-01-01
This study assesses the prediction skill of the boreal winter Arctic Oscillation (AO) in state-of-the-art dynamical ensemble prediction systems (EPSs): the UKMO GloSea4, the NCEP CFSv2, and the NASA GEOS-5. Long-term reforecasts made with the EPSs are used to evaluate representations of the AO, and to examine skill scores for the deterministic and probabilistic forecasts of the AO index. The reforecasts reproduce the observed changes in the large-scale patterns of Northern Hemispheric surface temperature, upper-level wind, and precipitation according to the AO phase. Results demonstrate that all EPSs have better prediction skill than the persistence prediction for lead times up to 3 months, suggesting a great potential for skillful prediction of the AO and the associated climate anomalies on seasonal time scales. It is also found that the deterministic and probabilistic forecast skill of the AO in the recent period (1997-2010) is higher than that in the earlier period (1983-1996).
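Two skill measures of the kind reported above, a correlation-based deterministic score for the ensemble-mean AO index and a Brier skill score for the probabilistic forecast of a positive-AO winter, can be computed as in the sketch below; the reforecast data are synthetic.

    import numpy as np

    rng = np.random.default_rng(5)
    n_winters, n_members = 28, 10

    ao_obs = rng.standard_normal(n_winters)
    # Synthetic ensemble: some common signal plus member-to-member noise.
    ens = 0.6 * ao_obs[:, None] + 0.8 * rng.standard_normal((n_winters, n_members))

    # Deterministic skill: correlation of the ensemble-mean AO index with observations.
    det_fc = ens.mean(axis=1)
    acc = np.corrcoef(det_fc, ao_obs)[0, 1]

    # Probabilistic skill: Brier skill score for the event "AO index > 0".
    p_fc = (ens > 0).mean(axis=1)
    event = (ao_obs > 0).astype(float)
    brier = np.mean((p_fc - event) ** 2)
    brier_clim = np.mean((event.mean() - event) ** 2)
    print("anomaly correlation %.2f, Brier skill score %.2f" % (acc, 1 - brier / brier_clim))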
Faster PET reconstruction with a stochastic primal-dual hybrid gradient method
NASA Astrophysics Data System (ADS)
Ehrhardt, Matthias J.; Markiewicz, Pawel; Chambolle, Antonin; Richtárik, Peter; Schott, Jonathan; Schönlieb, Carola-Bibiane
2017-08-01
Image reconstruction in positron emission tomography (PET) is computationally challenging due to Poisson noise, constraints and potentially non-smooth priors, let alone the sheer size of the problem. An algorithm that can cope well with the first three of the aforementioned challenges is the primal-dual hybrid gradient algorithm (PDHG) studied by Chambolle and Pock in 2011. However, PDHG updates all variables in parallel and is therefore computationally demanding on the large problem sizes encountered with modern PET scanners, where the number of dual variables easily exceeds 100 million. In this work, we numerically study the use of SPDHG, a stochastic extension of PDHG that is still guaranteed to converge to a solution of the deterministic optimization problem at rates similar to those of PDHG. Numerical results on a clinical data set show that by introducing randomization into PDHG, results similar to those of the deterministic algorithm can be achieved using only around 10% of the operator evaluations. This makes significant progress towards the feasibility of sophisticated mathematical models in a clinical setting.
de Miguel-Bilbao, Silvia; Aguirre, Erik; Lopez Iturri, Peio; Azpilicueta, Leire; Roldán, José; Falcone, Francisco; Ramos, Victoria
2015-01-01
In the last decade the number of wireless devices operating in the 2.4 GHz frequency band has increased in several settings, such as healthcare, occupational, and household environments. In this work, the emissions from Wi-Fi transceivers applicable to context-aware scenarios are analyzed in terms of potential interference and assessment of exposure guideline compliance. Near-field measurement results as well as deterministic simulation results for realistic indoor environments are presented, providing insight into the interaction between the Wi-Fi transceiver and implantable/body area network devices as well as other transceivers operating within an indoor environment exhibiting topological and morphological complexity. By combining the two approaches (near-field estimation and deterministic estimation), co-located on-body situations as well as large indoor emission scenarios can be assessed. The results show general compliance with exposure levels and illustrate the impact of the overall network deployment, which can be optimized in order to reduce overall interference levels while maximizing system performance.
Interrelation Between Safety Factors and Reliability
NASA Technical Reports Server (NTRS)
Elishakoff, Isaac; Chamis, Christos C. (Technical Monitor)
2001-01-01
An evaluation was performed to establish the relationship between safety factors and reliability. Results obtained show that the use of safety factors is not contradictory to the employment of probabilistic methods. In many cases the safety factors can be directly expressed in terms of the required reliability levels. However, there is a major difference that must be emphasized: whereas safety factors are allocated in an ad hoc manner, the probabilistic approach offers a unified mathematical framework. The establishment of the interrelation between the concepts opens an avenue to specify safety factors based on reliability. In cases where there are several forms of failure, the allocation of safety factors should be based on having the same reliability associated with each failure mode. This immediately suggests that by the probabilistic methods the existing over-design or under-design can be eliminated. The report includes three parts: Part 1-Random Actual Stress and Deterministic Yield Stress; Part 2-Deterministic Actual Stress and Random Yield Stress; Part 3-Both Actual Stress and Yield Stress Are Random.
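Part 1 of the report (random actual stress, deterministic yield stress) can be illustrated with a small sketch relating the central safety factor to reliability through the safety index; the normal-stress assumption and the numerical values are illustrative only.

    from scipy.stats import norm

    # Part 1: random actual stress, deterministic yield stress.
    mu_s, sigma_s = 200.0, 25.0     # MPa, mean and std of the actual stress (assumed)
    yield_stress = 300.0            # MPa, deterministic yield stress (assumed)

    # Reliability = P(actual stress < yield stress) for a normal stress distribution.
    reliability = norm.cdf((yield_stress - mu_s) / sigma_s)
    central_safety_factor = yield_stress / mu_s
    print("central safety factor %.2f -> reliability %.6f" % (central_safety_factor, reliability))

    # Inverse direction: choose the safety factor needed for a required reliability.
    required = 0.999999
    beta = norm.ppf(required)                        # safety index
    needed_sf = 1.0 + beta * sigma_s / mu_s          # yield/mean-stress ratio implied by beta
    print("safety factor required for reliability %.6f: %.2f" % (required, needed_sf))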
Aviation Safety: Modeling and Analyzing Complex Interactions between Humans and Automated Systems
NASA Technical Reports Server (NTRS)
Rungta, Neha; Brat, Guillaume; Clancey, William J.; Linde, Charlotte; Raimondi, Franco; Seah, Chin; Shafto, Michael
2013-01-01
The on-going transformation from the current US Air Traffic System (ATS) to the Next Generation Air Traffic System (NextGen) will force the introduction of new automated systems and most likely will cause automation to migrate from ground to air. This will yield new function allocations between humans and automation and therefore change the roles and responsibilities in the ATS. Yet, safety in NextGen is required to be at least as good as in the current system. We therefore need techniques to evaluate the safety of the interactions between humans and automation. We think that current human-factors studies and simulation-based techniques will fall short in the face of the complexity of the ATS, and that we need to add more automated techniques to simulations, such as model checking, which offers exhaustive coverage of the non-deterministic behaviors in nominal and off-nominal scenarios. In this work, we present a verification approach based both on simulations and on model checking for evaluating the roles and responsibilities of humans and automation. Models are created using Brahms (a multi-agent framework) and we show that the traditional Brahms simulations can be integrated with automated exploration techniques based on model checking, thus offering a complete exploration of the behavioral space of the scenario. Our formal analysis supports the notion of beliefs and probabilities to reason about human behavior. We demonstrate the technique with the Überlingen accident, since it exemplifies authority problems when receiving conflicting advice from human and automated systems.
Failed rib region prediction in a human body model during crash events with precrash braking.
Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S
2018-02-28
The objective of this study is twofold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed-region initiation were lower than those at deterministic failed-region initiation. The probabilistic method predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but 1 case, where they were equal. The failed-region patterns between the methods are similar; however, differences arise due to the stress reduction from element elimination, which causes probabilistic failed regions to continue to accumulate after no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed-region method is more spatially sensitive to failure and is more sensitive to belt loads. The probabilistic failed-region method allows for increased capability in postprocessing with respect to age. The probabilistic failed-region method predicted more failed regions than the deterministic failed-region method due to force distribution differences.
Bardach, Ariel Esteban; Garay, Osvaldo Ulises; Calderón, María; Pichón-Riviére, Andrés; Augustovski, Federico; Martí, Sebastián García; Cortiñas, Paula; Gonzalez, Marino; Naranjo, Laura T; Gomez, Jorge Alberto; Caporale, Joaquín Enzo
2017-02-02
Cervical cancer (CC) and genital warts (GW) are a significant public health issue in Venezuela. Our objective was to assess the cost-effectiveness of the two available vaccines against Human Papillomavirus (HPV), bivalent and quadrivalent, in Venezuelan girls in order to inform decision-makers. A previously published Markov cohort model, informed by the best available evidence, was adapted to the Venezuelan context to evaluate the effects of vaccination on health and healthcare costs from the perspective of the healthcare payer in a cohort of 264,489 11-year-old girls. Costs and quality-adjusted life years (QALYs) were discounted at 5%. Eight scenarios were analyzed to depict the cost-effectiveness under alternative vaccine prices, exchange rates and dosing schemes. Deterministic and probabilistic sensitivity analyses were performed. Compared to screening only, the bivalent and quadrivalent vaccines were cost-saving in all scenarios, avoiding 2,310 and 2,143 deaths and 4,781 and 4,431 CC cases, respectively, plus up to 18,459 GW cases for the quadrivalent vaccine, and gaining 4,486 and 4,395 discounted QALYs, respectively. For both vaccines, the main determinants of variation in the incremental cost-effectiveness ratio after running deterministic and probabilistic sensitivity analyses were the transition probabilities, vaccine and cancer-treatment costs, and the HPV 16 and 18 distribution in CC cases. When comparing the vaccines, neither was consistently more cost-effective than the other. In sensitivity analyses, for these comparisons, the main determinants were GW incidence, the level of cross-protection and, for some scenarios, vaccine costs. Immunization with the bivalent or quadrivalent HPV vaccine was shown to be cost-saving or cost-effective in Venezuela, falling below the threshold of one Gross Domestic Product (GDP) per capita (104,404 VEF) per QALY gained. Deterministic and probabilistic sensitivity analyses confirmed the robustness of these results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, L; Zhu, L; Vedantham, S
2016-06-15
Purpose: The image quality of dedicated cone-beam breast CT (CBBCT) is fundamentally limited by substantial x-ray scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose to suppress x-ray scatter in CBBCT images using a deterministic forward projection model. Method: We first use the 1st-pass FDK-reconstructed CBBCT images to segment fibroglandular and adipose tissue. Attenuation coefficients are assigned to the two tissues based on the x-ray spectrum used for image acquisition, and the segmented volume is forward projected to simulate scatter-free primary projections. We estimate the scatter by subtracting the simulated primary projection from the measured projection, and the resultant scatter map is further refined by a Fourier-domain fitting algorithm after discarding untrusted scatter information. The final scatter estimate is subtracted from the measured projection for effective scatter correction. In our implementation, the proposed scatter correction takes 0.5 seconds for each projection. The method was evaluated using the overall image spatial non-uniformity (SNU) metric and the contrast-to-noise ratio (CNR) with 5 clinical datasets of BI-RADS 4/5 subjects. Results: For the 5 clinical datasets, our method reduced the SNU from 7.79% to 1.68% in the coronal view and from 6.71% to 3.20% in the sagittal view. The average CNR is improved by a factor of 1.38 in the coronal view and 1.26 in the sagittal view. Conclusion: The proposed scatter correction approach requires no additional scans or prior images and uses a deterministic model for efficient calculation. Evaluation with clinical datasets demonstrates the feasibility and stability of the method. These features are attractive for clinical CBBCT and make our method distinct from other approaches. Supported partly by NIH R21EB019597, R21CA134128 and R01CA195512. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
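A one-dimensional toy version of the workflow above is sketched below: segment a first-pass image into two tissues, forward-project it with assumed attenuation coefficients to get a primary estimate, subtract it from the measurement, and smooth the residual before correction. For simplicity the smoothing here is a low-order polynomial fit rather than the Fourier-domain fit of the abstract, and all coefficients and geometry are placeholders.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 256

    # First-pass "reconstruction" (1-D toy slice): 0 = air, 1 = adipose, 2 = fibroglandular.
    labels = np.zeros(n, dtype=int)
    labels[60:200] = 1
    labels[110:150] = 2
    mu = np.array([0.0, 0.045, 0.060])       # assumed attenuation coefficients (1/mm)
    pixel_mm = 1.0

    # Simulated measured projection = primary + smooth scatter + noise.
    primary_true = mu[labels] * pixel_mm
    scatter_true = 0.02 * np.exp(-((np.arange(n) - 130) / 120.0) ** 2)
    measured = primary_true + scatter_true + rng.normal(0, 0.001, n)

    # Deterministic forward model from the segmented image gives the primary estimate.
    primary_est = mu[labels] * pixel_mm
    residual = measured - primary_est

    # Refine the raw scatter estimate with a smooth (low-order polynomial) fit.
    coef = np.polyfit(np.arange(n), residual, deg=4)
    scatter_est = np.polyval(coef, np.arange(n))

    corrected = measured - scatter_est
    print("RMS error before correction: %.4f" % np.sqrt(np.mean((measured - primary_true) ** 2)))
    print("RMS error after  correction: %.4f" % np.sqrt(np.mean((corrected - primary_true) ** 2)))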
Combining deterministic and stochastic velocity fields in the analysis of deep crustal seismic data
NASA Astrophysics Data System (ADS)
Larkin, Steven Paul
Standard crustal seismic modeling obtains deterministic velocity models which ignore the effects of wavelength-scale heterogeneity, known to exist within the Earth's crust. Stochastic velocity models are a means to include wavelength-scale heterogeneity in the modeling. These models are defined by statistical parameters obtained from geologic maps of exposed crystalline rock, and are thus tied to actual geologic structures. Combining both deterministic and stochastic velocity models into a single model allows a realistic full wavefield (2-D) to be computed. By comparing these simulations to recorded seismic data, the effects of wavelength-scale heterogeneity can be investigated. Combined deterministic and stochastic velocity models are created for two datasets, the 1992 RISC seismic experiment in southeastern California and the 1986 PASSCAL seismic experiment in northern Nevada. The RISC experiment was located in the transition zone between the Salton Trough and the southern Basin and Range province. A high-velocity body previously identified beneath the Salton Trough is constrained to pinch out beneath the Chocolate Mountains to the northeast. The lateral extent of this body is evidence for the ephemeral nature of rifting loci as a continent is initially rifted. Stochastic modeling of wavelength-scale structures above this body indicates that little more than 5% mafic intrusion into a more felsic continental crust is responsible for the observed reflectivity. Modeling of the wide-angle RISC data indicates that coda waves following PmP are initially dominated by diffusion of energy out of the near-surface basin as the wavefield reverberates within this low-velocity layer. At later times, this coda consists of scattered body waves and P to S conversions. Surface waves do not play a significant role in this coda. Modeling of the PASSCAL dataset indicates that a high-gradient crust-mantle transition zone or a rough Moho interface is necessary to reduce precritical PmP energy. Possibly related, inconsistencies in published velocity models are rectified by hypothesizing the existence of large, elongate, high-velocity bodies at the base of the crust oriented parallel to, and of similar scale as, the basins and ranges at the surface. This structure would result in an anisotropic lower crust.
Ion implantation for deterministic single atom devices
NASA Astrophysics Data System (ADS)
Pacheco, J. L.; Singh, M.; Perry, D. L.; Wendt, J. R.; Ten Eyck, G.; Manginell, R. P.; Pluym, T.; Luhman, D. R.; Lilly, M. P.; Carroll, M. S.; Bielejec, E.
2017-12-01
We demonstrate a capability of deterministic doping at the single atom level using a combination of direct write focused ion beam and solid-state ion detectors. The focused ion beam system can position a single ion to within 35 nm of a targeted location and the detection system is sensitive to single low energy heavy ions. This platform can be used to deterministically fabricate single atom devices in materials where the nanostructure and ion detectors can be integrated, including donor-based qubits in Si and color centers in diamond.
Counterfactual Quantum Deterministic Key Distribution
NASA Astrophysics Data System (ADS)
Zhang, Sheng; Wang, Jian; Tang, Chao-Jing
2013-01-01
We propose a new counterfactual quantum cryptography protocol for distributing a deterministic key. By adding a controlled blocking operation module to the original protocol [T.G. Noh, Phys. Rev. Lett. 103 (2009) 230501], the correlation between the polarizations of the two parties, Alice and Bob, is extended; therefore, one can distribute both deterministic and random keys using our protocol. We also give a simple proof of the security of our protocol using the technique we previously applied to the original protocol. Most importantly, our analysis produces a bound tighter than the existing ones.
Ion implantation for deterministic single atom devices
Pacheco, J. L.; Singh, M.; Perry, D. L.; ...
2017-12-04
Here, we demonstrate a capability of deterministic doping at the single atom level using a combination of direct write focused ion beam and solid-state ion detectors. The focused ion beam system can position a single ion to within 35 nm of a targeted location and the detection system is sensitive to single low energy heavy ions. This platform can be used to deterministically fabricate single atom devices in materials where the nanostructure and ion detectors can be integrated, including donor-based qubits in Si and color centers in diamond.
Deterministic quantum splitter based on time-reversed Hong-Ou-Mandel interference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Jun; Lee, Kim Fook; Kumar, Prem
2007-09-15
By utilizing a fiber-based indistinguishable photon-pair source in the 1.55 μm telecommunications band [J. Chen et al., Opt. Lett. 31, 2798 (2006)], we present the first, to the best of our knowledge, deterministic quantum splitter based on the principle of time-reversed Hong-Ou-Mandel quantum interference. The deterministically separated identical photons' indistinguishability is then verified by using a conventional Hong-Ou-Mandel quantum interference, which exhibits a near-unity dip visibility of 94±1%, making this quantum splitter useful for various quantum information processing applications.
NASA Astrophysics Data System (ADS)
Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.
2018-03-01
Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
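As an illustration of the parameter representation used above, the following sketch expands a spatially correlated random field (for example, bending rigidity along a beam axis) in a truncated Karhunen-Loève series built from an assumed squared-exponential covariance; the covariance model, correlation length and truncation order are placeholders rather than values from the paper.

import numpy as np

# discretize the beam axis and define an assumed covariance model for the random field
x = np.linspace(0.0, 1.0, 200)                  # normalized beam axis
corr_len, sigma = 0.2, 0.1                      # correlation length and std dev (illustrative)
C = sigma**2 * np.exp(-(x[:, None] - x[None, :])**2 / (2 * corr_len**2))

# Karhunen-Loeve decomposition: eigenpairs of the covariance matrix
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]                # sort by decreasing eigenvalue
eigval, eigvec = eigval[order], eigvec[:, order]

n_terms = 10                                    # truncation order of the KL series
xi = np.random.default_rng(0).standard_normal(n_terms)   # uncorrelated random coefficients

# field(x) = mean + sum_k sqrt(lambda_k) * xi_k * phi_k(x)
mean_field = 1.0                                # nominal (normalized) parameter value
field = mean_field + eigvec[:, :n_terms] @ (np.sqrt(eigval[:n_terms]) * xi)
print(field.min(), field.max())                 # one realization of the correlated parameter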
El-Kadi, A. I.; Torikai, J.D.
2001-01-01
The objective of this paper is to identify water-flow patterns in part of an active landslide, through the use of numerical simulations and data obtained during a field study. The approaches adopted include measuring rainfall events and pore-pressure responses in both saturated and unsaturated soils at the site. To account for soil variability, the Richards equation is solved within deterministic and stochastic frameworks. The deterministic simulations considered average water-retention data, adjusted retention data to account for stones or cobbles, retention functions for a heterogeneous pore structure, and continuous retention functions for preferential flow. The stochastic simulations applied the Monte Carlo approach which considers statistical distribution and autocorrelation of the saturated conductivity and its cross correlation with the retention function. Although none of the models is capable of accurately predicting field measurements, appreciable improvement in accuracy was attained using stochastic, preferential flow, and heterogeneous pore-structure models. For the current study, continuum-flow models provide reasonable accuracy for practical purposes, although they are expected to be less accurate than multi-domain preferential flow models.
Deterministic Intracellular Modeling
2003-03-01
[Abstract available only as fragments in the source record: the excerpt notes that eukaryotes (plants, animals, fungi and protists) have more defined cellular structures, and recommends further research into the construction and evaluation of intracellular models to benefit Air Force toxicology studies.]
A Bayesian model averaging method for the derivation of reservoir operating rules
NASA Astrophysics Data System (ADS)
Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai
2015-09-01
Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, functional forms of reservoir operating rules are always determined subjectively. As a result, the uncertainty involved in selecting the form and/or model of reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov chain Monte Carlo simulation. A case study of the Baise reservoir in China shows that: (1) the optimal deterministic reservoir operation, which is superior to any of the operating rules, is used as the sample set from which the rules are derived; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual operating-rule model based on the optimal trajectories. The results reveal that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.
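The BMA combination itself reduces to a weighted mixture of the member models' release decisions. A toy sketch with invented member outputs, weights and error variances (the paper estimates the weights and the 90% interval with Markov chain Monte Carlo rather than fixing them by hand):

import numpy as np

rng = np.random.default_rng(0)

# release decisions (m^3/s) from the three operating-rule models for three periods
members = np.array([
    [520.0, 480.0, 610.0],    # piecewise linear regression
    [540.0, 470.0, 600.0],    # surface fitting
    [530.0, 490.0, 590.0],    # least-squares support vector machine
])
weights = np.array([0.3, 0.3, 0.4])          # illustrative BMA weights (sum to 1)
member_sd = np.array([25.0, 30.0, 20.0])     # illustrative member error standard deviations

bma_mean = weights @ members                 # BMA point forecast for each period

# approximate a 90% release interval by sampling the BMA mixture distribution
n_draw = 5000
comp = rng.choice(len(weights), size=n_draw, p=weights)        # pick a member per draw
draws = rng.normal(members[comp], member_sd[comp][:, None])    # shape (n_draw, 3)
lo, hi = np.percentile(draws, [5, 95], axis=0)
print(bma_mean)
print(lo, hi)                                # approximate 90% release interval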
DOE Office of Scientific and Technical Information (OSTI.GOV)
Da Cruz, D. F.; Rochman, D.; Koning, A. J.
2012-07-01
This paper discusses the uncertainty analysis on reactivity and inventory for a typical PWR fuel element as a result of uncertainties in ²³⁵,²³⁸U nuclear data. A typical Westinghouse 3-loop fuel assembly fuelled with UO₂ fuel with 4.8% enrichment has been selected. The Total Monte-Carlo method has been applied using the deterministic transport code DRAGON. This code allows the generation of the few-groups nuclear data libraries by directly using data contained in the nuclear data evaluation files. The nuclear data used in this study is from the JEFF3.1 evaluation, and the nuclear data files for ²³⁸U and ²³⁵U (randomized for the generation of the various DRAGON libraries) are taken from the nuclear data library TENDL. The total uncertainty (obtained by randomizing all ²³⁸U and ²³⁵U nuclear data in the ENDF files) on the reactor parameters has been split into different components (different nuclear reaction channels). Results show that the TMC method in combination with a deterministic transport code constitutes a powerful tool for performing uncertainty and sensitivity analysis of reactor physics parameters. (authors)
CRAX/Cassandra Reliability Analysis Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, D.
1999-02-10
Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf components (COTS). In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment, etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CMX (Cassandra Exoskeleton) reliability analysis software.
Quantifying the uncertainties in life cycle greenhouse gas emissions for UK wheat ethanol
NASA Astrophysics Data System (ADS)
Yan, Xiaoyu; Boies, Adam M.
2013-03-01
Biofuels are increasingly promoted worldwide as a means for reducing greenhouse gas (GHG) emissions from transport. However, current regulatory frameworks and most academic life cycle analyses adopt a deterministic approach in determining the GHG intensities of biofuels and thus ignore the inherent risk associated with biofuel production. This study aims to develop a transparent stochastic method for evaluating UK biofuels that determines both the magnitude and uncertainty of GHG intensity on the basis of current industry practices. Using wheat ethanol as a case study, we show that the GHG intensity could span a range of 40-110 gCO2e MJ-1 when land use change (LUC) emissions and various sources of uncertainty are taken into account, as compared with a regulatory default value of 44 gCO2e MJ-1. This suggests that the current deterministic regulatory framework underestimates wheat ethanol GHG intensity and thus may not be effective in evaluating transport fuels. Uncertainties in determining the GHG intensity of UK wheat ethanol include limitations of available data at a localized scale, and significant scientific uncertainty of parameters such as soil N2O and LUC emissions. Biofuel polices should be robust enough to incorporate the currently irreducible uncertainties and flexible enough to be readily revised when better science is available.
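A toy Monte Carlo sketch of the kind of uncertainty propagation described above (the parameter list, distributions and numbers are invented for illustration and are not the study's inventory data):

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# illustrative uncertain inputs, all in gCO2e per MJ of ethanol
farming    = rng.normal(20.0, 3.0, n)            # cultivation inputs
soil_n2o   = rng.lognormal(np.log(8.0), 0.5, n)  # soil N2O, strongly skewed
processing = rng.normal(15.0, 2.0, n)            # conversion plant energy
luc        = rng.triangular(0.0, 10.0, 40.0, n)  # land use change emissions
credit     = rng.normal(-6.0, 1.5, n)            # co-product credit

ghg = farming + soil_n2o + processing + luc + credit
print(np.percentile(ghg, [5, 50, 95]))           # spread of the GHG intensity, not a point value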
NASA Astrophysics Data System (ADS)
Christensen, Hannah; Moroz, Irene; Palmer, Tim
2015-04-01
Forecast verification is important across scientific disciplines as it provides a framework for evaluating the performance of a forecasting system. In the atmospheric sciences, probabilistic skill scores are often used for verification as they provide a way of unambiguously ranking the performance of different probabilistic forecasts. In order to be useful, a skill score must be proper -- it must encourage honesty in the forecaster, and reward forecasts which are reliable and which have good resolution. A new score, the Error-spread Score (ES), is proposed which is particularly suitable for evaluation of ensemble forecasts. It is formulated with respect to the moments of the forecast. The ES is confirmed to be a proper score, and is therefore sensitive to both resolution and reliability. The ES is tested on forecasts made using the Lorenz '96 system, and found to be useful for summarising the skill of the forecasts. The European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system (EPS) is evaluated using the ES. Its performance is compared to a perfect statistical probabilistic forecast -- the ECMWF high resolution deterministic forecast dressed with the observed error distribution. This generates a forecast that is perfectly reliable if considered over all time, but which does not vary from day to day with the predictability of the atmospheric flow. The ES distinguishes between the dynamically reliable EPS forecasts and the statically reliable dressed deterministic forecasts. Other skill scores are tested and found to be comparatively insensitive to this desirable forecast quality. The ES is used to evaluate seasonal range ensemble forecasts made with the ECMWF System 4. The ensemble forecasts are found to be skilful when compared with climatological or persistence forecasts, though this skill is dependent on region and time of year.
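A sketch of a moment-based ensemble score in the spirit described above, assuming a form (s^2 - e^2 - e*s*g)^2 built from the ensemble spread s, the error of the ensemble mean e and the ensemble skewness g, averaged over forecast cases; the exact formulation and its propriety proof should be taken from the paper.

import numpy as np

def error_spread_score(ensembles, observations):
    # ensembles:    (n_cases, n_members) array of ensemble forecasts
    # observations: (n_cases,) array of verifying observations
    m = ensembles.mean(axis=1)                                 # ensemble mean
    s = ensembles.std(axis=1, ddof=1)                          # ensemble spread
    g = ((ensembles - m[:, None]) ** 3).mean(axis=1) / s**3    # ensemble skewness
    e = m - observations                                       # error of the ensemble mean
    return np.mean((s**2 - e**2 - e * s * g) ** 2)             # smaller is better

rng = np.random.default_rng(1)
center = rng.normal(size=2000)
truth = center + rng.normal(0.0, 1.0, 2000)
reliable = center[:, None] + rng.normal(0.0, 1.0, (2000, 20))        # spread matches error
overconfident = center[:, None] + rng.normal(0.0, 0.3, (2000, 20))   # spread too small
# the reliable ensemble should, on average, receive the lower (better) score
print(error_spread_score(reliable, truth), error_spread_score(overconfident, truth))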
NASA Astrophysics Data System (ADS)
Najafi, Ali; Acar, Erdem; Rais-Rohani, Masoud
2014-02-01
The stochastic uncertainties associated with the material, process and product are represented and propagated to process and performance responses. A finite element-based sequential coupled process-performance framework is used to simulate the forming and energy absorption responses of a thin-walled tube in a manner that both material properties and component geometry can evolve from one stage to the next for better prediction of the structural performance measures. Metamodelling techniques are used to develop surrogate models for manufacturing and performance responses. One set of metamodels relates the responses to the random variables whereas the other relates the mean and standard deviation of the responses to the selected design variables. A multi-objective robust design optimization problem is formulated and solved to illustrate the methodology and the influence of uncertainties on manufacturability and energy absorption of a metallic double-hat tube. The results are compared with those of deterministic and augmented robust optimization problems.
Yue, Meng; Wang, Xiaoyu
2015-07-01
It is well-known that responsive battery energy storage systems (BESSs) are an effective means to improve the grid inertial response to various disturbances, including the variability of the renewable generation. One of the major issues associated with its implementation is the difficulty in determining the required BESS capacity, mainly due to the large amount of inherent uncertainties that cannot be accounted for deterministically. In this study, a probabilistic approach is proposed to properly size the BESS from the perspective of the system inertial response, as an application of probabilistic risk assessment (PRA). The proposed approach enables a risk-informed decision-making process regarding (1) the acceptable level of solar penetration in a given system and (2) the desired BESS capacity (and minimum cost) to achieve an acceptable grid inertial response with a certain confidence level.
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.
1998-01-01
The use of response surface models and kriging models is compared for approximating non-random, deterministic computer analyses. After discussing the traditional response surface approach for constructing polynomial models for approximation, kriging is presented as an alternative statistical-based approximation method for the design and analysis of computer experiments. Both approximation methods are applied to the multidisciplinary design and analysis of an aerospike nozzle which consists of a computational fluid dynamics model and a finite element analysis model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations. Four optimization problems are formulated and solved using both approximation models. While neither approximation technique consistently outperforms the other in this example, the kriging models, using only a constant for the underlying global model and a Gaussian correlation function, perform as well as the second-order polynomial response surface models.
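For readers unfamiliar with the two approximation types being compared, the sketch below fits a second-order polynomial response surface and a simple kriging predictor (constant underlying global model, Gaussian correlation) to a one-dimensional test function; in a production implementation the correlation parameter would be estimated by maximum likelihood rather than fixed by hand.

import numpy as np

# training data from a deterministic "computer analysis" (illustrative 1-D function)
x = np.linspace(0.0, 1.0, 8)
y = np.sin(6 * x) + x**2

# second-order polynomial response surface (least-squares fit)
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
def rsm(xq):
    return beta[0] + beta[1] * xq + beta[2] * xq**2

# kriging with a constant global model and a Gaussian correlation function
theta = 20.0                                         # correlation parameter, fixed by hand
def corr(a, b):
    return np.exp(-theta * (a[:, None] - b[None, :])**2)

R = corr(x, x) + 1e-10 * np.eye(x.size)              # small nugget for numerical stability
ones = np.ones(x.size)
mu = (ones @ np.linalg.solve(R, y)) / (ones @ np.linalg.solve(R, ones))   # constant trend
w = np.linalg.solve(R, y - mu)                       # kriging weights for the residual
def krig(xq):
    return mu + corr(np.atleast_1d(xq), x) @ w

xq = np.linspace(0.0, 1.0, 5)
print(np.sin(6 * xq) + xq**2)    # truth
print(rsm(xq))                   # global polynomial trend only
print(krig(xq))                  # interpolates the training points exactly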
Deterministic multidimensional nonuniform gap sampling.
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
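To make the gap-based construction concrete, the sketch below builds a one-dimensional nonuniform schedule whose gap between consecutive grid points follows a sinusoidally weighted profile, either deterministically (using the profile directly) or stochastically (drawing Poisson deviates around it); this is a schematic illustration, not the gap equation published in the paper.

import numpy as np

def gap_schedule(grid_size, target_points, stochastic=False, seed=0):
    # build a 1-D nonuniform sampling schedule whose gaps grow sinusoidally toward the
    # end of the grid (late increments, which carry less signal, are sampled sparsely)
    rng = np.random.default_rng(seed)
    amp = grid_size / target_points          # crude calibration of the largest average gap
    points, k = [], 0.0
    while k < grid_size:
        points.append(int(round(k)))
        gap = 1.0 + (amp - 1.0) * np.sin(0.5 * np.pi * k / grid_size)
        if stochastic:                       # Poisson-gap-like random variant
            gap = 1.0 + rng.poisson(max(gap - 1.0, 0.0))
        k += gap
    return np.unique(points)

det = gap_schedule(256, 96)                  # deterministic gaps (reproducible schedule)
sto = gap_schedule(256, 96, stochastic=True) # random gaps with the same average profile
print(det.size, sto.size)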
First Order Reliability Application and Verification Methods for Semistatic Structures
NASA Technical Reports Server (NTRS)
Verderaime, Vincent
1994-01-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test technique to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, David R; Bartholomew, David B; Moon, Justin
2009-09-08
An apparatus for fixing computational latency within a deterministic region on a network comprises a network interface modem, a high priority module and at least one deterministic peripheral device. The network interface modem is in communication with the network. The high priority module is in communication with the network interface modem. The at least one deterministic peripheral device is connected to the high priority module. The high priority module comprises a packet assembler/disassembler, and hardware for performing at least one operation. Also disclosed is an apparatus for executing at least one instruction on a downhole device within a deterministic region, the apparatus comprising a control device, a downhole network, and a downhole device. The control device is near the surface of a downhole tool string. The downhole network is integrated into the tool string. The downhole device is in communication with the downhole network.
Stochastic Petri Net extension of a yeast cell cycle model.
Mura, Ivan; Csikász-Nagy, Attila
2008-10-21
This paper presents the definition, solution and validation of a stochastic model of the budding yeast cell cycle, based on Stochastic Petri Nets (SPN). A specific family of SPNs is selected for building a stochastic version of a well-established deterministic model. We describe the procedure followed in defining the SPN model from the deterministic ODE model, a procedure that can be largely automated. The validation of the SPN model is conducted with respect to both the results provided by the deterministic model and the experimental results available in the literature. The SPN model captures the behavior of wild-type budding yeast cells and a variety of mutants. We show that the stochastic model matches some characteristics of budding yeast cells that cannot be reproduced by the deterministic model. The SPN model fine-tunes the simulation results, enriching the breadth and the quality of its outcome.
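The step from a deterministic ODE model to a stochastic Petri net amounts to reinterpreting each mass-action rate term as the propensity of a discrete transition. The sketch below does this for a toy two-species production/conversion/degradation network using Gillespie's direct method; the network and rate constants are placeholders, not the yeast cell cycle model.

import numpy as np

# toy network: 0 -> A (rate k1), A -> B (rate k2*A), B -> 0 (rate k3*B)
k1, k2, k3 = 5.0, 0.2, 0.1
stoich = np.array([[+1, 0],     # production of A
                   [-1, +1],    # conversion A -> B
                   [0, -1]])    # degradation of B

def propensities(state):
    A, B = state
    return np.array([k1, k2 * A, k3 * B])

def gillespie(t_end=200.0, seed=0):
    rng = np.random.default_rng(seed)
    t, state = 0.0, np.array([0, 0])
    while t < t_end:
        a = propensities(state)
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)           # waiting time to the next firing
        r = rng.choice(len(a), p=a / a0)         # which transition fires
        state = state + stoich[r]
    return state

print(gillespie())          # one stochastic realization at t_end
print([k1 / k2, k1 / k3])   # steady state of the deterministic ODE counterpart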
Effect of sample volume on metastable zone width and induction time
NASA Astrophysics Data System (ADS)
Kubota, Noriaki
2012-04-01
The metastable zone width (MSZW) and the induction time measured for a large sample (say, >0.1 L) are reproducible and deterministic, while for a small sample (say, <1 mL) these values are irreproducible and stochastic. Such behaviors of the MSZW and induction time were discussed theoretically with both stochastic and deterministic models. Equations for the distribution of the stochastic MSZW and induction time were derived. The average values of the stochastic MSZW and induction time both decreased with an increase in sample volume, while the deterministic MSZW and induction time remained unchanged. These different behaviors with variation in sample volume were explained in terms of the detection sensitivity of crystallization events. The average values of the MSZW and induction time in the stochastic model were compared with the deterministic MSZW and induction time, respectively. Literature data reported for aqueous paracetamol solution were explained theoretically with the presented models.
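The volume dependence described above follows from treating primary nucleation as a Poisson process: the waiting time for the first nucleus scales as 1/(J·V), so small samples show broad, stochastic induction-time distributions while large samples converge toward a volume-independent deterministic value. A short numerical illustration with invented values for the nucleation rate and the detectable crystal number density:

import numpy as np

J = 100.0            # nucleation rate per (m^3 s), invented illustrative value
n_det = 1.0e4        # detectable crystal number density per m^3, invented
t_det_deterministic = n_det / J       # volume-independent deterministic induction time

rng = np.random.default_rng(0)
for label, V in [("0.1 mL", 1e-7), ("10 mL", 1e-5), ("10 L", 1e-2)]:
    # waiting time for the first nucleus is exponential with mean 1/(J*V);
    # afterwards nuclei accumulate essentially deterministically
    t_first = rng.exponential(1.0 / (J * V), size=20000)
    t_total = t_first + t_det_deterministic
    q5, q50, q95 = np.percentile(t_total, [5, 50, 95])
    print(f"{label:>7s}: median {q50:10.1f} s, 5-95% range {q95 - q5:10.1f} s")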
Blocksome, Michael A.; Mamidala, Amith R.
2015-07-07
Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.
Blocksome, Michael A.; Mamidala, Amith R.
2015-07-14
Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.
Realistic Simulation for Body Area and Body-To-Body Networks
Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D’Errico, Raffaele
2016-01-01
In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices. PMID:27104537
Stochastic switching in biology: from genotype to phenotype
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.
2017-03-01
There has been a resurgence of interest in non-equilibrium stochastic processes in recent years, driven in part by the observation that the number of molecules (genes, mRNA, proteins) involved in gene expression are often of order 1-1000. This means that deterministic mass-action kinetics tends to break down, and one needs to take into account the discrete, stochastic nature of biochemical reactions. One of the major consequences of molecular noise is the occurrence of stochastic biological switching at both the genotypic and phenotypic levels. For example, individual gene regulatory networks can switch between graded and binary responses, exhibit translational/transcriptional bursting, and support metastability (noise-induced switching between states that are stable in the deterministic limit). If random switching persists at the phenotypic level then this can confer certain advantages to cell populations growing in a changing environment, as exemplified by bacterial persistence in response to antibiotics. Gene expression at the single-cell level can also be regulated by changes in cell density at the population level, a process known as quorum sensing. In contrast to noise-driven phenotypic switching, the switching mechanism in quorum sensing is stimulus-driven and thus noise tends to have a detrimental effect. A common approach to modeling stochastic gene expression is to assume a large but finite system and to approximate the discrete processes by continuous processes using a system-size expansion. However, there is a growing need to have some familiarity with the theory of stochastic processes that goes beyond the standard topics of chemical master equations, the system-size expansion, Langevin equations and the Fokker-Planck equation. Examples include stochastic hybrid systems (piecewise deterministic Markov processes), large deviations and the Wentzel-Kramers-Brillouin (WKB) method, adiabatic reductions, and queuing/renewal theory. The major aim of this review is to provide a self-contained survey of these mathematical methods, mainly within the context of biological switching processes at both the genotypic and phenotypic levels. However, applications to other examples of biological switching are also discussed, including stochastic ion channels, diffusion in randomly switching environments, bacterial chemotaxis, and stochastic neural networks.
On higher order discrete phase-locked loops.
NASA Technical Reports Server (NTRS)
Gill, G. S.; Gupta, S. C.
1972-01-01
An exact mathematical model is developed for a discrete loop of a general order particularly suitable for digital computation. The deterministic response of the loop to the phase step and the frequency step is investigated. The design of the digital filter for the second-order loop is considered. Use is made of the incremental phase plane to study the phase error behavior of the loop. The model of the noisy loop is derived and the optimization of the loop filter for minimum mean-square error is considered.
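As a concrete illustration of the loop dynamics discussed above, the sketch below iterates a simple first-order discrete phase-locked loop and shows the phase-error transient after a frequency step; the loop gain and step size are arbitrary illustrative values, and the paper's exact loop equations and filter design are not reproduced.

import numpy as np

K = 0.4            # loop gain (illustrative)
d_omega = 0.3      # frequency step of the input, radians per sample
phi = 0.0          # initial phase error

history = []
for _ in range(40):
    # first-order discrete loop: the frequency step drives the error up,
    # the sinusoidal phase-detector output pulls it back toward lock
    phi = phi + d_omega - K * np.sin(phi)
    history.append(phi)

# the error settles at the static phase offset satisfying d_omega = K*sin(phi)
print(np.round(history[:5], 3), "...", round(history[-1], 3))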
The Solar System large planets' influence on a new Maunder Minimum
NASA Astrophysics Data System (ADS)
Yndestad, Harald; Solheim, Jan-Erik
2016-04-01
In the 1890s, G. Spörer and E. W. Maunder reported that solar activity stopped for a period of 70 years, from 1645 to 1715. Later reconstructions of solar activity confirm the grand minima of Maunder (1640-1720), Spörer (1390-1550) and Wolf (1270-1340), and the minima of Oort (1010-1070) and Dalton (1785-1810) since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with reduced irradiation from the Sun and cold climate periods on Earth. The identification of three Maunder-type grand minima and two Dalton-type minima within a thousand-year period indicates that, sooner or later, a new Maunder- or Dalton-type period will bring a colder climate on Earth. The cause of these minimum periods is not well understood. An estimate of the next expected Maunder-type period rests on the properties of solar variability: if the solar variability has a deterministic element, we can better estimate a new Maunder grand minimum, whereas a purely random solar variability can only explain the past. This investigation is based on the simple idea that if the solar variability has a deterministic property, it must have a deterministic source as a first cause; if this deterministic source is known, we can compute better estimates of the next expected Maunder grand minimum period. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., a sunspot data series from 1611 and a solar barycenter orbit data series from 1000. The analysis method is based on wavelet spectrum analysis, to identify stationary periods, coincident periods and their phase relations. The results show that the TSI variability and the sunspot variability have deterministic oscillations, controlled by the large planets Jupiter, Uranus and Neptune as the first cause. A deterministic model of TSI variability and sunspot variability confirms the known minimum and grand minimum periods since 1000. From this deterministic model we may expect a new Maunder-type sunspot minimum period from about 2018 to 2055. The deterministic model of the TSI ACRIM data series from 1700 computes a new Maunder-type grand minimum period from 2015 to 2071, and a model of the longer TSI ACRIM data series from 1000 computes a new Dalton- to Maunder-type minimum irradiation period from 2047 to 2068.
Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and the material removal rate extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.
Palmer, Tim N.; O’Shea, Michael
2015-01-01
How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173
Deterministic and efficient quantum cryptography based on Bell's theorem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen Zengbing; Pan Jianwei; Physikalisches Institut, Universitaet Heidelberg, Philosophenweg 12, 69120 Heidelberg
2006-05-15
We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology.
Heart rate variability as determinism with jump stochastic parameters.
Zheng, Jiongxuan; Skufca, Joseph D; Bollt, Erik M
2013-08-01
We use measured heart rate information (RR intervals) to develop a one-dimensional nonlinear map that describes short term deterministic behavior in the data. Our study suggests that there is a stochastic parameter with persistence which causes the heart rate and rhythm system to wander about a bifurcation point. We propose a modified circle map with a jump process noise term as a model which can qualitatively capture this behavior of low-dimensional transient determinism with occasional (stochastically defined) jumps from one deterministic system to another within a one-parameter family of deterministic systems.
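A minimal numerical sketch of the kind of model described, i.e. a circle map whose parameter is occasionally kicked by a jump process so that the trajectory wanders between nearby deterministic systems; the map form, jump rate and jump size are illustrative stand-ins for the model fitted to the RR-interval data.

import numpy as np

rng = np.random.default_rng(3)

K = 1.0                          # coupling strength of the sine circle map (illustrative)
omega = 0.35                     # bare rotation number; wanders via occasional jumps
jump_prob, jump_scale = 0.02, 0.05

theta, trace = 0.1, []
for _ in range(2000):
    # deterministic short-term dynamics: one step of the sine circle map
    theta = (theta + omega - (K / (2 * np.pi)) * np.sin(2 * np.pi * theta)) % 1.0
    trace.append(theta)
    # jump process on the parameter: an occasional switch to a nearby deterministic map
    if rng.random() < jump_prob:
        omega += jump_scale * rng.standard_normal()

print(np.round(trace[:5], 3), np.round(trace[-5:], 3))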
NASA Astrophysics Data System (ADS)
Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon
2017-04-01
This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead-encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin-biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results showed that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules.
Weinberg, Seth H.; Smith, Gregory D.
2012-01-01
Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted “domains” associated with calcium influx are small enough (e.g., 10⁻¹⁷ liters) that local signaling may involve 1-100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume is unrealistically large and/or the kinetics of the calcium binding are sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers. PMID:23509597
NASA Astrophysics Data System (ADS)
Mukherjee, L.; Zhai, P.; Hu, Y.; Winker, D. M.
2016-12-01
Among the primary factors which determine the polarized radiation field of a turbid medium are the single scattering properties of the medium. When multiple types of scatterers are present, the single scattering properties of the scatterers need to be properly mixed in order to find solutions to the vector radiative transfer theory (VRT). The VRT solvers can be divided into two types: deterministic and stochastic. A deterministic solver can only accept one set of single scattering properties in its smallest discretized spatial volume; when the medium contains more than one kind of scatterer, their single scattering properties are averaged and then used as input for the deterministic solver. A stochastic solver, in contrast, can work with different kinds of scatterers explicitly. In this work, two different mixing schemes are studied using the Successive Order of Scattering (SOS) method and Monte Carlo (MC) methods: one scheme is used for the deterministic solver and the other for the stochastic Monte Carlo method. It is found that the solutions from the two VRT solvers using the two different mixing schemes agree with each other extremely well. This confirms the equivalence of the two mixing schemes and also provides a benchmark for the VRT solution for the medium studied.
Signal detection with criterion noise: applications to recognition memory.
Benjamin, Aaron S; Diaz, Michael; Wee, Serena
2009-01-01
A tacit but fundamental assumption of the theory of signal detection is that criterion placement is a noise-free process. This article challenges that assumption on theoretical and empirical grounds and presents the noisy decision theory of signal detection (ND-TSD). Generalized equations for the isosensitivity function and for measures of discrimination incorporating criterion variability are derived, and the model's relationship with extant models of decision making in discrimination tasks is examined. An experiment evaluating recognition memory for ensembles of word stimuli revealed that criterion noise is not trivial in magnitude and contributes substantially to variance in the slope of the isosensitivity function. The authors discuss how ND-TSD can help explain a number of current and historical puzzles in recognition memory, including the inconsistent relationship between manipulations of learning and the isosensitivity function's slope, the lack of invariance of the slope with manipulations of bias or payoffs, the effects of aging on the decision-making process in recognition, and the nature of responding in remember-know decision tasks. ND-TSD poses novel, theoretically meaningful constraints on theories of recognition and decision making more generally, and provides a mechanism for rapprochement between theories of decision making that employ deterministic response rules and those that postulate probabilistic response rules.
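One consequence of a noisy criterion is easy to demonstrate numerically: if the placement of the decision criterion varies from trial to trial, conventional d' computed from hit and false-alarm rates underestimates the true separation of the evidence distributions. A small simulation sketch (all parameter values are illustrative, and this is not the ND-TSD measurement model itself):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 500_000
d_true = 1.5                                   # true sensitivity (illustrative)

def apparent_dprime(criterion_sd, c=0.75):
    old = rng.normal(d_true, 1.0, n)           # evidence for studied items
    new = rng.normal(0.0, 1.0, n)              # evidence for lures
    c_trial = c + rng.normal(0.0, criterion_sd, n)   # criterion drawn anew on every trial
    hit = np.mean(old > c_trial)
    fa = np.mean(new > c_trial)
    return norm.ppf(hit) - norm.ppf(fa)

print(apparent_dprime(0.0))   # recovers roughly 1.5 when the criterion is noise-free
print(apparent_dprime(1.0))   # criterion noise deflates the measured sensitivity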
Structure of Monopropellant Spray Flames at Elevated Pressures
1990-01-15
[Abstract available only as fragments in the source record: models of the process were developed, both ignoring and considering effects of separated flow, and evaluated using the new measurements; deterministic and stochastic separated flow models were developed, which yielded predictions that were similar to each other.]
How Do Marine Pelagic Species Respond to Climate Change? Theories and Observations
NASA Astrophysics Data System (ADS)
Beaugrand, Grégory; Kirby, Richard R.
2018-01-01
In this review, we show how climate affects species, communities, and ecosystems, and why many responses from the species to the biome level originate from the interaction between the species’ ecological niche and changes in the environmental regime in both space and time. We describe a theory that allows us to understand and predict how marine species react to climate-induced changes in ecological conditions, how communities form and are reconfigured, and so how biodiversity is arranged and may respond to climate change. Our study shows that the responses of species to climate change are therefore intelligible—that is, they have a strong deterministic component and can be predicted.
Lattice Truss Structural Response Using Energy Methods
NASA Technical Reports Server (NTRS)
Kenner, Winfred Scottson
1996-01-01
A deterministic methodology is presented for developing closed-form deflection equations for two-dimensional and three-dimensional lattice structures. Four types of lattice structures are studied: beams, plates, shells and soft lattices. Castigliano's second theorem, which entails the total strain energy of a structure, is utilized to generate highly accurate results. Derived deflection equations provide new insight into the bending and shear behavior of the four types of lattices, in contrast to classic solutions of similar structures. Lattice derivations utilizing kinetic energy are also presented, and used to examine the free vibration response of simple lattice structures. Derivations utilizing finite element theory for unique lattice behavior are also presented and validated using the finite element analysis code EAL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaznatcheev, K.; Bertwistle, D.; Cheng, C.
We have explored the capabilities of synchronous ('lock-in') point detection techniques to enhance the x-ray magnetic circular dichroism (XMCD) contrast in scanning x-ray transmission microscopy (STXM) of magnetic thin-film microstructures. Local absorption contrast, measured synchronously with low-amplitude (<10 Oe) and low-frequency (<200 Hz) longitudinal fields perturbing the near-remanent magnetization state, reveals a strong spatial dependence of the response, with a roll-off in frequency response above 200 Hz. In this context, synchronous measurement affords us a basis for imaging the relation between energy loss and the sweeping rate. We speculate that the lock-in approach will be uniquely suited for detailing stochastic and deterministic frequency-dependent events in the process of magnetization reversal.
Characterization of forced response of density stratified reacting wake
NASA Astrophysics Data System (ADS)
Pawar, Samadhan A.; Sujith, Raman I.; Emerson, Benjamin; Lieuwen, Tim
2018-02-01
The hydrodynamic stability of a reacting wake depends primarily on the density ratio [i.e., the ratio of unburnt gas density (ρu) to burnt gas density (ρb)] of the flow across the wake. The variation of the density ratio from high to low values, keeping ρu/ρb > 1, transitions the dynamical characteristics of the reacting wake from a linearly globally stable (or convectively unstable) to a globally unstable mode. In this paper, we propose a framework to analyze the effect of harmonic forcing on the deterministic and synchronization characteristics of reacting wakes. Using recurrence quantification analysis of the forced wake response, we show that the deterministic behaviour of the reacting wake increases as the amplitude of forcing is increased. Furthermore, for different density ratios, we found that the synchronization of the top and bottom branches of the wake with the forcing signal depends on whether the mean frequency of the natural oscillations of the wake (fn) is lesser or greater than the frequency of external forcing (ff). We notice that the response of the two branches (top and bottom) of the reacting wake to the external forcing is asymmetric for the low density ratios and symmetric for the high density ratios. Furthermore, we characterize the phase-locking behaviour between the top and bottom branches of the wake for different values of the density ratio. We observe that an increase in the density ratio results in a gradual decrease in the relative phase angle between the top and bottom branches of the wake, which leads to a change in the vortex shedding pattern from a sinuous (anti-phase) to a varicose (in-phase) mode of oscillation.
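The determinism measure used above comes from recurrence quantification analysis: build a recurrence matrix from the time series and compute the fraction of recurrent points that fall on diagonal lines of at least a minimum length. A compact sketch on toy signals (the threshold, the minimum line length and the absence of delay embedding are simplifying illustrative choices):

import numpy as np

def determinism(x, eps=0.2, lmin=2):
    # DET: fraction of recurrence points that lie on diagonal lines of length >= lmin
    n = x.size
    R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)   # recurrence matrix
    np.fill_diagonal(R, 0)                                    # drop the line of identity
    on_lines = 0
    for k in range(-(n - 1), n):
        run = 0
        for v in list(np.diagonal(R, k)) + [0]:
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    return on_lines / max(R.sum(), 1)

t = np.linspace(0.0, 20.0, 400)
periodic = np.sin(t)                                      # strongly deterministic signal
noise = np.random.default_rng(0).normal(size=400)         # uncorrelated noise
print(determinism(periodic), determinism(noise))          # high DET versus low DET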
Deterministic models for traffic jams
NASA Astrophysics Data System (ADS)
Nagel, Kai; Herrmann, Hans J.
1993-10-01
We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car.
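A minimal sketch of a deterministic single-lane traffic cellular automaton in the spirit of the integer-valued models discussed (acceleration, braking to the gap ahead, movement, and no randomization step); the road length and density are arbitrary and this is not the authors' exact model.

import numpy as np

def step(pos, vel, road_len, vmax=5):
    # one update of a deterministic CA: accelerate, brake to the gap, then move
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len    # empty cells ahead of each car
    vel = np.minimum(vel + 1, vmax)                   # accelerate toward vmax
    vel = np.minimum(vel, gaps)                       # brake to avoid collisions
    pos = (pos + vel) % road_len                      # move on the ring road
    return pos, vel

rng = np.random.default_rng(0)
road_len, n_cars = 100, 30
pos = np.sort(rng.choice(road_len, n_cars, replace=False))
vel = np.zeros(n_cars, dtype=int)

for _ in range(200):
    pos, vel = step(pos, vel, road_len)

print("mean speed:", vel.mean())   # below vmax: the high-density (jammed) phase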
360° deterministic magnetization rotation in a three-ellipse magnetoelectric heterostructure
NASA Astrophysics Data System (ADS)
Kundu, Auni A.; Chavez, Andres C.; Keller, Scott M.; Carman, Gregory P.; Lynch, Christopher S.
2018-03-01
A magnetic dipole-coupled magnetoelectric heterostructure comprised of three closely spaced ellipse shapes was designed and shown to be capable of achieving deterministic in-plane magnetization rotation. The design approach used a combination of conventional micromagnetic simulations to obtain preliminary configurations followed by simulations using a fully strain-coupled, time domain micromagnetic code for a detailed assessment of performance. The conventional micromagnetic code has short run times and was used to refine the ellipse shape and orientation, but it does not accurately capture the effects of the strain gradients present in the piezoelectric and magnetostrictive layers that contribute to magnetization reorientation. The fully coupled code was used to assess the effects of strain and magnetic field gradients on precessional switching in the side ellipses and on the resulting dipole-field driven magnetization reorientation in the center ellipse. The work led to a geometry with a CoFeB ellipse (125 nm × 95 nm × 4 nm) positioned between two smaller CoFeB ellipses (75 nm × 50 nm × 4 nm) on a 500 nm PZT-5H film substrate clamped at its bottom surface. The smaller ellipses were oriented at 45° and positioned at 70° and 250° about the central ellipse due to the film deposition on a thick substrate. A 7.3 V pulse applied to the PZT for 0.22 ns produced 180° switching of the magnetization in the outer ellipses that then drove switching in the center ellipse through dipole-dipole coupling. Full 360° deterministic rotation was achieved with a second pulse. The temporal response of the resulting design is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
Deterministic-random separation in nonstationary regime
NASA Astrophysics Data System (ADS)
Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.
2016-02-01
In rotating machinery vibration analysis, the synchronous average is perhaps the most widely used technique for extracting periodic components. Periodic components are typically related to gear vibrations, misalignments, unbalances, blade rotations, reciprocating forces, etc. Their separation from other random components is essential in vibration-based diagnosis in order to discriminate useful information from masking noise. However, synchronous averaging theoretically requires the machine to operate under a stationary regime (i.e. the related vibration signals are cyclostationary) and is otherwise jeopardized by the presence of amplitude and phase modulations. A first object of this paper is to investigate the nature of the nonstationarity induced by the response of a linear time-invariant system subjected to speed-varying excitation. For this purpose, the concept of a cyclo-non-stationary signal is introduced, which extends the class of cyclostationary signals to speed-varying regimes. Next, a "generalized synchronous average" (GSA) is designed to extract the deterministic part of a cyclo-non-stationary vibration signal, i.e. the analog of the periodic part of a cyclostationary signal. Two estimators of the GSA are proposed. The first one returns the synchronous average of the signal at predefined discrete operating speeds. A brief statistical study of it is performed, aiming to provide the user with confidence intervals that reflect the "quality" of the estimator according to the SNR and the estimated speed. The second estimator returns a smoothed version of the former by enforcing continuity over the speed axis. It helps to reconstruct the deterministic component by tracking a specific trajectory dictated by the speed profile (assumed to be known a priori). The proposed method is validated first on synthetic signals and then on actual industrial signals. The usefulness of the approach is demonstrated on envelope-based diagnosis of bearings in variable-speed operation.
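For reference, the sketch below implements the conventional synchronous average that the paper generalizes: the signal is cut into revolutions using the (here constant and known) shaft period and the segments are averaged to extract the periodic part; the generalized synchronous average additionally conditions this estimate on the instantaneous operating speed.

import numpy as np

fs = 1000.0                      # sampling rate, Hz
f_shaft = 10.0                   # shaft rotation frequency, Hz (constant-speed case)
t = np.arange(0.0, 5.0, 1 / fs)

rng = np.random.default_rng(0)
periodic = np.sin(2 * np.pi * f_shaft * t) + 0.4 * np.sin(2 * np.pi * 3 * f_shaft * t)
signal = periodic + rng.normal(0.0, 1.0, t.size)        # deterministic part plus noise

samples_per_rev = int(round(fs / f_shaft))
n_revs = signal.size // samples_per_rev
segments = signal[: n_revs * samples_per_rev].reshape(n_revs, samples_per_rev)

sync_avg = segments.mean(axis=0)      # estimate of the periodic (deterministic) part
residual = segments - sync_avg        # the random part, kept for further diagnosis

print(np.std(signal), np.std(sync_avg), np.std(residual))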
NASA Astrophysics Data System (ADS)
Quintero-Chavarria, E.; Ochoa Gutierrez, L. H.
2016-12-01
Applications of the Self-potential Method in the fields of Hydrogeology and Environmental Sciences have had significant developments during the last two decades, with strong use in the identification of groundwater flows. Although only a few authors deal with the forward problem's solution, especially in the geophysics literature, different inversion procedures are currently being developed; in most cases, however, they are compared with unconventional groundwater velocity fields and restricted to structured meshes. This research solves the forward problem based on the finite element method, using St. Venant's Principle to transform a point dipole, which is the field generated by a single vector, into a distribution of electrical monopoles. Then, two simple aquifer models were generated with specific boundary conditions, and head potentials, velocity fields and electric potentials in the medium were computed. With the model's surface electric potential, the inverse problem is solved to retrieve the source of electric potential (the vector field associated with groundwater flow) using deterministic and stochastic approaches. The first approach was carried out by implementing a Tikhonov regularization with a stabilizing operator adapted to the finite element mesh, while for the second a hierarchical Bayesian model based on Markov chain Monte Carlo (McMC) and Markov Random Fields (MRF) was constructed. For all implemented methods, the results of the direct and inverse models were contrasted in two ways: 1) shape and distribution of the vector field, and 2) histogram of magnitudes. Finally, it was concluded that inversion procedures are improved when the velocity field's behavior is considered; thus, the deterministic method is more suitable for unconfined aquifers than confined ones. McMC has restricted applications and requires a lot of information (particularly in potential fields), while MRF has a remarkable response, especially when dealing with confined aquifers.
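The deterministic branch described above amounts to a Tikhonov-regularized least-squares inversion. A generic sketch follows; the forward operator, stabilizing operator and regularization parameter are illustrative stand-ins, not the authors' finite-element implementation.

```python
import numpy as np

def tikhonov_inverse(A, b, L, lam):
    """Tikhonov-regularised least squares:
    minimise ||A x - b||^2 + lam * ||L x||^2,
    solved through the normal equations (A^T A + lam L^T L) x = A^T b."""
    lhs = A.T @ A + lam * (L.T @ L)
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)

# Example: recover a smooth source from noisy, under-determined data
rng = np.random.default_rng(0)
n = 50
A = rng.normal(size=(40, n))            # forward operator (fewer data than unknowns)
x_true = np.sin(np.linspace(0, 3, n))
b = A @ x_true + 0.01 * rng.normal(size=40)
L = np.eye(n) - np.eye(n, k=1)          # discrete-gradient stabilising operator
x_est = tikhonov_inverse(A, b, L, lam=0.1)
print(np.linalg.norm(x_est - x_true))
```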
NASA Astrophysics Data System (ADS)
González-Carrasco, J. F.
2016-12-01
Southern Peru and northern Chile coastal areas, extending between 12º and 24ºS, have been recognized as a mature seismic gap with a high seismogenic potential associated with the seismic moment deficit accumulated since 1877. An important scientific question is what the rupture pattern of a future megathrust earthquake will be, which is relevant from a hazard assessment perspective. During the last decade, the occurrence of three major subduction earthquakes has provided the opportunity to acquire outstanding geophysical and geological information on the behavior of these phenomena. An interesting result is the relationship between the maximum slip areas and the spatial distribution of asperities in subduction zones. In this contribution, we propose a methodology to identify a regional pattern of main asperities in order to construct reliable seismogenic scenarios in a seismic gap. We follow a deterministic approach to explore the segmentation and distribution of asperities using geophysical and geodetic data such as trench-parallel gravity anomaly (TPGA), interseismic coupling (ISC), b-value, historical moment release, and residual bathymetric and gravity anomalies. The combined information represents physical constraints on short- and long-term suitable regions for future mega earthquakes. To illuminate the asperity distribution, we construct profiles in fault coordinates, along-strike and down-dip, of all proxies to define the boundaries of major asperities (> 100 km). The geometry of a major asperity is useful to define a finite set of future deterministic seismogenic scenarios for evaluating tsunamigenic hazard in the main cities of the northern zone of Chile (18°S to 24°S).
Bhanji, Jamil P.; Beer, Jennifer S.; Bunge, Silvia A.
2014-01-01
A decision may be difficult because complex information processing is required to evaluate choices according to deterministic decision rules and/or because it is not certain which choice will lead to the best outcome in a probabilistic context. Factors that tax decision making such as decision rule complexity and low decision certainty should be disambiguated for a more complete understanding of the decision making process. Previous studies have examined the brain regions that are modulated by decision rule complexity or by decision certainty but have not examined these factors together in the context of a single task or study. In the present functional magnetic resonance imaging study, both decision rule complexity and decision certainty were varied in comparable decision tasks. Further, the level of certainty about which choice to make (choice certainty) was varied separately from certainty about the final outcome resulting from a choice (outcome certainty). Lateral prefrontal cortex, dorsal anterior cingulate cortex, and bilateral anterior insula were modulated by decision rule complexity. Anterior insula was engaged more strongly by low than high choice certainty decisions, whereas ventromedial prefrontal cortex showed the opposite pattern. These regions showed no effect of the independent manipulation of outcome certainty. The results disambiguate the influence of decision rule complexity, choice certainty, and outcome certainty on activity in diverse brain regions that have been implicated in decision making. Lateral prefrontal cortex plays a key role in implementing deterministic decision rules, ventromedial prefrontal cortex in probabilistic rules, and anterior insula in both. PMID:19781652
The meta-Gaussian Bayesian Processor of forecasts and associated preliminary experiments
NASA Astrophysics Data System (ADS)
Chen, Fajing; Jiao, Meiyan; Chen, Jing
2013-04-01
Public weather services are trending toward providing users with probabilistic weather forecasts in place of traditional deterministic forecasts. Probabilistic forecasting techniques are continually being improved to make optimal use of available forecasting information. The Bayesian Processor of Forecast (BPF), a new statistical method for probabilistic forecasting, can transform a deterministic forecast into a probabilistic forecast according to the historical statistical relationship between observations and forecasts generated by that forecasting system. This technique accounts for the typical forecasting performance of a deterministic forecasting system in quantifying the forecast uncertainty. The meta-Gaussian likelihood model is suitable for a variety of stochastic dependence structures with monotone likelihood ratios. The meta-Gaussian BPF adopting this kind of likelihood model can therefore be applied across many fields, including meteorology and hydrology. The Bayes theorem with two continuous random variables and the normal-linear BPF are briefly introduced. The meta-Gaussian BPF for a continuous predictand using a single predictor is then presented and discussed. The performance of the meta-Gaussian BPF is tested in a preliminary experiment. Control forecasts of daily surface temperature at 0000 UTC at Changsha and Wuhan stations are used as the deterministic forecast data. These control forecasts are taken from ensemble predictions with a 96-h lead time generated by the National Meteorological Center of the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the US National Centers for Environmental Prediction during January 2008. The results of the experiment show that the meta-Gaussian BPF can transform a deterministic control forecast of surface temperature from any one of the three ensemble predictions into a useful probabilistic forecast of surface temperature. These probabilistic forecasts quantify the uncertainty of the control forecast; accordingly, the performance of the probabilistic forecasts differs based on the source of the underlying deterministic control forecasts.
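The normal-linear BPF mentioned above admits a compact sketch: fit a Gaussian climatological prior from historical observations, a linear-Gaussian likelihood from historical forecast-observation pairs, and combine the two with Bayes' theorem. The meta-Gaussian version would first apply normal-quantile transforms, which is omitted here, and the synthetic data are illustrative.

```python
import numpy as np

def normal_linear_bpf(hist_fcst, hist_obs, new_fcst):
    """Normal-linear Bayesian Processor of Forecasts (illustrative sketch).

    Prior: climatological N(m, s2) fitted to historical observations.
    Likelihood: forecast f = a*w + b + eps, eps ~ N(0, r2), fitted by regressing
    forecasts on observations. The posterior of w given a new forecast is Gaussian.
    """
    m, s2 = np.mean(hist_obs), np.var(hist_obs)
    a, b = np.polyfit(hist_obs, hist_fcst, 1)          # likelihood slope and intercept
    r2 = np.var(hist_fcst - (a * hist_obs + b))        # residual variance
    post_var = 1.0 / (1.0 / s2 + a * a / r2)
    post_mean = post_var * (m / s2 + a * (new_fcst - b) / r2)
    return post_mean, np.sqrt(post_var)

# Example with synthetic temperature data
rng = np.random.default_rng(1)
obs = rng.normal(2.0, 4.0, 200)                        # "observed" temperatures
fcst = 0.9 * obs + 0.5 + rng.normal(0, 1.5, 200)       # biased, noisy control forecasts
print(normal_linear_bpf(fcst, obs, new_fcst=5.0))      # probabilistic forecast (mean, sd)
```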
Resilience and vulnerability to a natural hazard: A mathematical framework based on viability theory
NASA Astrophysics Data System (ADS)
Rougé, Charles; Mathias, Jean-Denis; Deffuant, Guillaume
2013-04-01
This paper deals with the response of a coupled human and natural system (CHANS) to a natural hazard, using the concepts of resilience and vulnerability within the mathematical framework of viability theory. This theory applies to time-evolving systems such as CHANS and assumes that their desirable properties can be defined as a subset of their state space. Policies can also be applied to influence the dynamics of such systems: viability theory aims at finding the policies which keep the properties of a controlled dynamical system for as long as no disturbance hits it. The states of the system for which the properties are guaranteed constitute what is called the viability kernel. This viability framework has been extended to describe the response to a perturbation such as a natural hazard. Resilience describes the capacity of the CHANS to recover by getting back into the viability kernel, where its properties are guaranteed until the onset of the next major event. Defined for a given controlled trajectory that the system may take after the event ends, resilience reflects (a) whether the system comes back to the viability kernel within a given budget, such as a time constraint, and (b) is a decreasing function of vulnerability. Computed for a given trajectory as well, vulnerability is a measure of the consequence of violating a property. We propose a family of functions from which cost functions and other vulnerability indicators can be derived for a given trajectory. There can be several vulnerability functions, representing for instance social, economic or ecological vulnerability, each associated with the violation of a corresponding property, but these functions need to be ultimately aggregated into a single indicator. Computing the resilience and vulnerability of a trajectory enables the viability framework to describe the response of both deterministic and stochastic systems to hazards. In the deterministic case, there is only one response trajectory for a given action policy, and methods exist to find the actions which yield the most resilient trajectory, namely the least vulnerable trajectory for which recovery is complete. In the stochastic case, however, there is a range of possible trajectories. Statistics can be derived from the probability distribution of the resilience and vulnerability of the trajectories. Dynamic programming methods can then yield either the policies that maximize the probability of being resilient by achieving recovery within a given time horizon, or those which minimize a given vulnerability statistic. These objectives are different and can be in contradiction, so trade-offs may have to be considered between them. The approach is illustrated in both the deterministic and stochastic cases through a simple model of lake eutrophication, for which the desirable ecological properties of the lake conflict with the economic interest of neighboring farmers.
Dynamic Stability of Uncertain Laminated Beams Under Subtangential Loads
NASA Technical Reports Server (NTRS)
Goyal, Vijay K.; Kapania, Rakesh K.; Adelman, Howard (Technical Monitor); Horta, Lucas (Technical Monitor)
2002-01-01
Because of the inherent complexity of fiber-reinforced laminated composites, it can be challenging to manufacture composite structures according to their exact design specifications, resulting in unwanted material and geometric uncertainties. In this research, we focus on the deterministic and probabilistic stability analysis of laminated structures subject to subtangential loading, a combination of conservative and nonconservative tangential loads, using the dynamic criterion. Thus, a shear-deformable laminated beam element, including warping effects, is derived to study the deterministic and probabilistic response of laminated beams. This twenty-one-degree-of-freedom element can be used for solving both static and dynamic problems. In the first-order shear-deformable model used here, we have employed a more accurate method to obtain the transverse shear correction factor. The dynamic version of the principle of virtual work for laminated composites is expressed in its nondimensional form, and the element tangent stiffness and mass matrices are obtained using analytical integration. The stability is studied by giving the structure a small disturbance about an equilibrium configuration and observing whether the resulting response remains small. In order to study the dynamic behavior when uncertainties are included in the problem, three models were developed: Exact Monte Carlo Simulation, Sensitivity-Based Monte Carlo Simulation, and Probabilistic FEA. These methods were integrated into the developed finite element analysis. Also, perturbation and sensitivity analyses have been used to study nonconservative problems, as well as the stability analysis, using the dynamic criterion.
NASA Astrophysics Data System (ADS)
Lambrou, George I.; Chatziioannou, Aristotelis; Vlahopoulos, Spiros; Moschovi, Maria; Chrousos, George P.
Biological systems are dynamic and possess properties that depend on two key elements: initial conditions and the response of the system over time. Conceptualizing this on tumor models will influence conclusions drawn with regard to disease initiation and progression. Alterations in initial conditions dynamically reshape the properties of proliferating tumor cells. The present work aims to test the hypothesis of Wolfrom et al. that proliferation shows evidence for deterministic chaos, in a manner such that subtle differences in the initial conditions give rise to non-linear response behavior of the system. Their hypothesis, tested on adherent Fao rat hepatoma cells, provides evidence that these cells manifest aperiodic oscillations in their proliferation rate. We have tested this hypothesis with some modifications to the proposed experimental setup. We have used the acute lymphoblastic leukemia cell line CCRF-CEM, as it provides an excellent substrate for modeling proliferation dynamics. Measurements were taken at time points varying from 24 h to 48 h, extending the assayed populations beyond those of previously published reports dealing with the complex dynamic behavior of animal cell populations. We conducted flow cytometry studies to examine the apoptotic and necrotic rate of the system, as well as DNA content changes of the cells over time. The cells exhibited a proliferation rate of nonlinear nature, as this rate presented oscillatory behavior. The obtained data have been fitted to known models of growth, such as logistic and Gompertzian growth.
Distinct succession patterns of abundant and rare bacteria in temporal microcosms with pollutants.
Jiao, Shuo; Luo, Yantao; Lu, Mingmei; Xiao, Xiao; Lin, Yanbing; Chen, Weimin; Wei, Gehong
2017-06-01
Elucidating the driving forces behind the temporal dynamics of abundant and rare microbes is essential for understanding the assembly and succession of microbial communities. Here, we explored the successional trajectories and mechanisms of abundant and rare bacteria via soil-enrichment subcultures in response to various pollutants (phenanthrene, n-octadecane, and CdCl2) using time-series Illumina sequencing datasets. The results reveal different successional patterns of abundant and rare sub-communities in eighty pollutant-degrading consortia and two original soil samples. A temporal decrease in α-diversity and high turnover rate for β-diversity indicate that deterministic processes are the main drivers of the succession of the abundant sub-community; however, the high cumulative species richness indicates that stochastic processes drive the succession of the rare sub-community. A functional prediction showed that abundant bacteria contribute primary functions to the pollutant-degrading consortia, such as amino acid metabolism, cellular responses to stress, and hydrocarbon degradation. Meanwhile, rare bacteria contribute a substantial fraction of auxiliary functions, such as carbohydrate-active enzymes, fermentation, and homoacetogenesis, which indicates their roles as a source of functional diversity. Our study suggests that the temporal succession of microbes in polluted microcosms is mainly associated with abundant bacteria rather than the high proportion of rare taxa. The major forces (i.e., stochastic or deterministic processes) driving microbial succession could be dependent on the low- or high-abundance community members in temporal microcosms with pollutants. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Probabilistic, Dynamic, and Attribute-wise Model of Intertemporal Choice
Dai, Junyi; Busemeyer, Jerome R.
2014-01-01
Most theoretical and empirical research on intertemporal choice assumes a deterministic and static perspective, leading to the widely adopted delay discounting models. As a form of preferential choice, however, intertemporal choice may be generated by a stochastic process that requires some deliberation time to reach a decision. We conducted three experiments to investigate how choice and decision time varied as a function of manipulations designed to examine the delay duration effect, the common difference effect, and the magnitude effect in intertemporal choice. The results, especially those associated with the delay duration effect, challenged the traditional deterministic and static view and called for alternative approaches. Consequently, various static or dynamic stochastic choice models were explored and fit to the choice data, including alternative-wise models derived from the traditional exponential or hyperbolic discount function and attribute-wise models built upon comparisons of direct or relative differences in money and delay. Furthermore, for the first time, dynamic diffusion models, such as those based on decision field theory, were also fit to the choice and response time data simultaneously. The results revealed that the attribute-wise diffusion model with direct differences, power transformations of objective value and time, and varied diffusion parameter performed the best and could account for all three intertemporal effects. In addition, the empirical relationship between choice proportions and response times was consistent with the prediction of diffusion models and thus favored a stochastic choice process for intertemporal choice that requires some deliberation time to make a decision. PMID:24635188
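The attribute-wise diffusion account favored by the results can be illustrated with a small simulation; the drift specification, weights, thresholds and the stimulus differences below are illustrative assumptions, not fitted values from the study.

```python
import numpy as np

def simulate_ddm_choice(money_diff, delay_diff, w_money=1.0, w_delay=1.0,
                        threshold=1.0, dt=0.001, sigma=1.0, max_t=10.0, rng=None):
    """Attribute-wise diffusion sketch for intertemporal choice: the drift is a
    weighted difference of the money and delay attribute comparisons; evidence
    accumulates with Gaussian noise until a boundary is crossed.
    Returns (choice, response_time): choice 1 = larger-later, 0 = smaller-sooner."""
    rng = rng or np.random.default_rng()
    drift = w_money * money_diff - w_delay * delay_diff
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return int(x >= threshold), t

# Example: a larger-later option that is slightly better on money than it is worse on delay
trials = [simulate_ddm_choice(money_diff=0.5, delay_diff=0.3) for _ in range(200)]
print(np.mean([c for c, _ in trials]), np.mean([t for _, t in trials]))  # P(LL), mean RT
```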
Cognitive Diagnostic Analysis Using Hierarchically Structured Skills
ERIC Educational Resources Information Center
Su, Yu-Lan
2013-01-01
This dissertation proposes two modified cognitive diagnostic models (CDMs), the deterministic, inputs, noisy, "and" gate with hierarchy (DINA-H) model and the deterministic, inputs, noisy, "or" gate with hierarchy (DINO-H) model. Both models incorporate the hierarchical structures of the cognitive skills in the model estimation…
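The DINA response rule, and the kind of skill-hierarchy constraint that the "-H" variants add, can be written compactly; the Q-matrix row, slip/guess values and prerequisite map below are illustrative examples, not items from the dissertation.

```python
import numpy as np

def dina_prob_correct(alpha, q, slip, guess):
    """DINA item response: eta = 1 if the examinee masters every skill the item
    requires (Q-matrix row q), else 0; P(correct) = (1 - slip)^eta * guess^(1 - eta)."""
    eta = int(np.all(alpha[q == 1] == 1))
    return (1 - slip) ** eta * guess ** (1 - eta)

def respects_hierarchy(alpha, parent_of):
    """Hierarchy constraint: a skill may be mastered only if its prerequisite is
    mastered; parent_of maps each skill index to its prerequisite (or -1 for none)."""
    return all(p < 0 or alpha[p] == 1 for k, p in enumerate(parent_of) if alpha[k] == 1)

alpha = np.array([1, 1, 0])          # mastery profile over 3 skills
q = np.array([1, 1, 0])              # the item requires skills 0 and 1
print(dina_prob_correct(alpha, q, slip=0.1, guess=0.2))   # 0.9
print(respects_hierarchy(alpha, parent_of=[-1, 0, 1]))    # skill 1 requires skill 0: True
```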
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
Active temporal multiplexing of indistinguishable heralded single photons
Xiong, C.; Zhang, X.; Liu, Z.; Collins, M. J.; Mahendra, A.; Helt, L. G.; Steel, M. J.; Choi, D. -Y.; Chae, C. J.; Leong, P. H. W.; Eggleton, B. J.
2016-01-01
It is a fundamental challenge in quantum optics to deterministically generate indistinguishable single photons through non-deterministic nonlinear optical processes, due to the intrinsic coupling of single- and multi-photon-generation probabilities in these processes. Actively multiplexing photons generated in many temporal modes can decouple these probabilities, but key issues are to minimize resource requirements to allow scalability, and to ensure indistinguishability of the generated photons. Here we demonstrate the multiplexing of photons from four temporal modes solely using fibre-integrated optics and off-the-shelf electronic components. We show a 100% enhancement to the single-photon output probability without introducing additional multi-photon noise. Photon indistinguishability is confirmed by a fourfold Hong–Ou–Mandel quantum interference with a 91±16% visibility after subtracting multi-photon noise due to high pump power. Our demonstration paves the way for scalable multiplexing of many non-deterministic photon sources to a single near-deterministic source, which will be of benefit to future quantum photonic technologies. PMID:26996317
Frisenda, Riccardo; Navarro-Moratalla, Efrén; Gant, Patricia; Pérez De Lara, David; Jarillo-Herrero, Pablo; Gorbachev, Roman V; Castellanos-Gomez, Andres
2018-01-02
Designer heterostructures can now be assembled layer-by-layer with unmatched precision thanks to the recently developed deterministic placement methods to transfer two-dimensional (2D) materials. This possibility constitutes the birth of a very active research field on the so-called van der Waals heterostructures. Moreover, these deterministic placement methods also open the door to fabricate complex devices, which would be otherwise very difficult to achieve by conventional bottom-up nanofabrication approaches, and to fabricate fully-encapsulated devices with exquisite electronic properties. The integration of 2D materials with existing technologies such as photonic and superconducting waveguides and fiber optics is another exciting possibility. Here, we review the state-of-the-art of the deterministic placement methods, describing and comparing the different alternative methods available in the literature, and we illustrate their potential to fabricate van der Waals heterostructures, to integrate 2D materials into complex devices and to fabricate artificial bilayer structures where the layers present a user-defined rotational twisting angle.
First-order reliability application and verification methods for semistatic structures
NASA Astrophysics Data System (ADS)
Verderaime, V.
1994-11-01
Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
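A minimal sketch of the first-order reliability (safety-index) calculation underlying this approach, and of solving for a design factor that meets a specified reliability; the distributions are assumed normal and the numbers are illustrative, not values from the paper.

```python
from math import sqrt
from scipy.stats import norm

def first_order_safety_index(mu_strength, sd_strength, mu_stress, sd_stress):
    """First-order safety index for the margin R - S with independent normal
    strength R and applied stress S; reliability = Phi(beta)."""
    beta = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return beta, norm.cdf(beta)

def design_factor_for_reliability(target_reliability, cov_strength, cov_stress):
    """Solve for the deterministic design factor k = mu_R / mu_S that achieves the
    specified reliability, given coefficients of variation of strength and stress:
    (1 - b^2 covR^2) k^2 - 2 k + (1 - b^2 covS^2) = 0, with b the safety index."""
    b = norm.ppf(target_reliability)
    a = 1.0 - b**2 * cov_strength**2
    c = 1.0 - b**2 * cov_stress**2
    return (1.0 + sqrt(1.0 - a * c)) / a

print(first_order_safety_index(60.0, 3.0, 40.0, 4.0))      # beta = 4.0
print(design_factor_for_reliability(0.999, 0.05, 0.10))    # design factor ~ 1.38
```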
Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay
2017-11-01
Data-driven fault detection plays an important role in industrial systems due to its applicability in cases where physical models are unknown. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high accuracy of fault detection. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
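The just-in-time learning idea, fit a local model around each query from its nearest historical samples and monitor the residual, can be sketched as follows. The process model, neighbourhood size and simulated fault are illustrative assumptions, not the authors' benchmark systems.

```python
import numpy as np

def jitl_residual(query_x, query_y, hist_X, hist_Y, k=20):
    """Just-in-time learning sketch: for each query, fit a local linear model on
    the k nearest historical samples and return the output residual, which can be
    compared against a detection threshold."""
    d = np.linalg.norm(hist_X - query_x, axis=1)
    idx = np.argsort(d)[:k]
    Xl = np.hstack([hist_X[idx], np.ones((k, 1))])       # local design matrix + bias
    theta, *_ = np.linalg.lstsq(Xl, hist_Y[idx], rcond=None)
    y_pred = np.append(query_x, 1.0) @ theta
    return query_y - y_pred

# Example: a nonlinear process; a +0.8 offset simulates a fault at the query point
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
Y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=500)
x_new = np.array([0.3, -1.0])
y_new_faulty = np.sin(0.3) + 0.5 * 1.0 + 0.8
print(jitl_residual(x_new, y_new_faulty, X, Y))          # a large residual flags the fault
```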
NASA Astrophysics Data System (ADS)
Han, Qun; Xu, Wei; Sun, Jian-Qiao
2016-09-01
The stochastic response of nonlinear oscillators under periodic and Gaussian white noise excitations is studied with the generalized cell mapping based on short-time Gaussian approximation (GCM/STGA) method. The solutions of the transition probability density functions over a small fraction of the period are constructed by the STGA scheme in order to construct the GCM over one complete period. Both the transient and steady-state probability density functions (PDFs) of a smooth and discontinuous (SD) oscillator are computed to illustrate the application of the method. The accuracy of the results is verified by direct Monte Carlo simulations. The transient responses show the evolution of the PDFs from being Gaussian to non-Gaussian. The effect of a chaotic saddle on the stochastic response is also studied. The stochastic P-bifurcation in terms of the steady-state PDFs occurs with the decrease of the smoothness parameter, which corresponds to the deterministic pitchfork bifurcation.
NASA Astrophysics Data System (ADS)
Tang, Zhongqian; Zhang, Hua; Yi, Shanzhen; Xiao, Yangfan
2018-03-01
GIS-based multi-criteria decision analysis (MCDA) is increasingly used to support flood risk assessment. However, conventional GIS-MCDA methods fail to adequately represent spatial variability and are accompanied by considerable uncertainty. It is, thus, important to incorporate spatial variability and uncertainty into GIS-based decision analysis procedures. This research develops a spatially explicit, probabilistic GIS-MCDA approach for the delineation of potentially flood-susceptible areas. The approach integrates the probabilistic and the local ordered weighted averaging (OWA) methods via Monte Carlo simulation, to take into account the uncertainty related to criteria weights, the spatial heterogeneity of preferences and the risk attitude of the analyst. The approach is applied to a pilot study for Gucheng County, central China, heavily affected by the hazardous 2012 flood. A GIS database of six geomorphological and hydrometeorological factors for the evaluation of susceptibility was created. Moreover, uncertainty and sensitivity analyses were performed to investigate the robustness of the model. The results indicate that the ensemble method improves the robustness of the model outcomes with respect to variation in criteria weights and identifies which criteria weights are most responsible for the variability of model outcomes. Therefore, the proposed approach is an improvement over the conventional deterministic method and can provide a more rational, objective and unbiased tool for flood susceptibility evaluation.
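The core of such an approach, sample uncertain weights, aggregate the standardised criteria of each cell with an OWA operator, and summarise the runs as exceedance probabilities, can be sketched as follows. For brevity the sketch collapses criterion and order weights into a single sampled vector; the Dirichlet sampling, threshold and data are illustrative.

```python
import numpy as np

def owa(values, order_weights):
    """Ordered weighted averaging: sort criterion values in descending order and
    apply positional (order) weights, which encode the analyst's risk attitude."""
    return float(np.dot(np.sort(values)[::-1], order_weights))

def flood_susceptibility_probability(criteria, n_sim=1000, threshold=0.6, seed=0):
    """Monte Carlo GIS-MCDA sketch: sample uncertain weights, aggregate every
    cell's standardised criteria with OWA, and return each cell's probability of
    exceeding a susceptibility threshold."""
    rng = np.random.default_rng(seed)
    n_cells, n_crit = criteria.shape
    exceed = np.zeros(n_cells)
    for _ in range(n_sim):
        w = np.sort(rng.dirichlet(np.ones(n_crit)))[::-1]   # sampled order weights
        scores = np.array([owa(row, w) for row in criteria])
        exceed += scores > threshold
    return exceed / n_sim

# Example: 4 raster cells x 6 standardised (0-1) criteria
criteria = np.random.default_rng(1).uniform(0, 1, size=(4, 6))
print(flood_susceptibility_probability(criteria))
```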
Parameter Estimation in Epidemiology: from Simple to Complex Dynamics
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico
2011-09-01
We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models like multi-strain dynamics to describe the virus-host interaction in dengue fever, even most recently developed parameter estimation techniques, like maximum likelihood iterated filtering, come to their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and deterministic skeleton. The deterministic system on its own already displays complex dynamics up to deterministic chaos and coexistence of multiple attractors.
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
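The error-propagation argument can be made concrete with a small worked example (the numbers are illustrative, not taken from the report): adding 3-sigma tolerance limits term by term inflates the combined applied stress relative to root-sum-square propagation at the same confidence level.

```python
import numpy as np

# Three stress contributions whose inputs carry 3-sigma (tolerance-limit) uncertainties.
means = np.array([100.0, 60.0, 40.0])       # mean stress contributions (MPa)
sigmas = np.array([5.0, 4.0, 3.0])          # standard deviations (MPa)

# Deterministic practice criticised in the abstract: add worst-case tolerance
# limits term by term, so the errors accumulate serially.
stacked = np.sum(means + 3.0 * sigmas)

# Error-propagation (root-sum-square) result at the same 3-sigma confidence.
propagated = np.sum(means) + 3.0 * np.sqrt(np.sum(sigmas ** 2))

print(stacked, propagated)   # 236.0 vs ~221.2 MPa: serial stacking is the more conservative
```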
Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.
Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen
In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled-data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses are explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
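A toy simulation of sampled-data consensus under Bernoulli packet losses helps illustrate the setting; this is a plain averaging protocol, not the controller synthesis via convex optimization described in the abstract, and the graph, gain and loss probability are illustrative.

```python
import numpy as np

def sampled_data_consensus(x0, adjacency, eps=0.1, p_loss=0.3, steps=200, seed=0):
    """Discrete-time (sampled-data) consensus with Bernoulli packet losses: at each
    sampling instant every directed link fails independently with probability
    p_loss, and each agent moves towards the neighbours it actually heard from."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    n = len(x)
    for _ in range(steps):
        received = adjacency * (rng.random((n, n)) > p_loss)   # surviving links
        update = np.zeros(n)
        for i in range(n):
            for j in range(n):
                if received[i, j]:
                    update[i] += x[j] - x[i]
        x = x + eps * update
    return x

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])           # undirected communication graph
print(sampled_data_consensus([1.0, 4.0, -2.0, 7.0], A))   # states approach a common value
```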
NASA Astrophysics Data System (ADS)
Delvecchio, S.; Antoni, J.
2012-02-01
This paper addresses the use of a cyclostationary blind source separation algorithm (namely RRCR) to extract angle-deterministic signals from mechanical rotating machines in the presence of stationary speed fluctuations. This means that only phase fluctuations while the machine is running in steady-state conditions are considered, while run-up or run-down speed variations are not taken into account. The machine is also assumed to run in idle conditions, so non-stationary phenomena due to the load are not considered. It is theoretically established that in such operating conditions the deterministic (periodic) signal in the angle domain becomes cyclostationary at first and second orders in the time domain. This fact justifies the use of the RRCR algorithm, which is able to directly extract the angle-deterministic signal from the time domain without performing any kind of interpolation. This is particularly valuable when angular resampling fails because of uncontrolled speed fluctuations. The capability of the proposed approach is verified by means of simulated and actual vibration signals captured on a pneumatic screwdriver handle. In this particular case, not only can the extraction of the angle-deterministic part be performed but also the separation of the main sources of excitation (i.e. motor shaft imbalance, epicycloidal gear meshing and air pressure forces) affecting the user's hand during operation.
Tag-mediated cooperation with non-deterministic genotype-phenotype mapping
NASA Astrophysics Data System (ADS)
Zhang, Hong; Chen, Shu
2016-01-01
Tag-mediated cooperation provides a helpful framework for resolving evolutionary social dilemmas. However, most previous studies have not taken into account the genotype-phenotype distinction in tags, which may play an important role in the process of evolution. To take this into consideration, we introduce non-deterministic genotype-phenotype mapping into a tag-based model with a spatial prisoner's dilemma. By our definition, the similarity between genotypic tags does not directly imply the similarity between phenotypic tags. We find that the non-deterministic mapping from genotypic tag to phenotypic tag has non-trivial effects on tag-mediated cooperation. Although we observe that high levels of cooperation can be established under a wide variety of conditions, especially when the decisiveness is moderate, the uncertainty in the determination of phenotypic tags may have a detrimental effect on the tag mechanism by disturbing the homophilic interaction structure which explains the promotion of cooperation in tag systems. Furthermore, the non-deterministic mapping may undermine the robustness of the tag mechanism with respect to various factors such as the structure of the tag space and the tag flexibility. This observation warns us about the danger of applying classical tag-based models to the analysis of empirical phenomena if the genotype-phenotype distinction is significant in the real world. Non-deterministic genotype-phenotype mapping thus provides a new perspective on the understanding of tag-mediated cooperation.
Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem
Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...
2016-12-12
In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema in non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, which combine global optimization and implicit filtering, address the difficulties associated with the non-smooth response, and their performance is shown to significantly decrease the computational time relative to the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
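The global-then-local strategy can be illustrated on a generic non-smooth test function. This is only a sketch: the objective is synthetic, and Nelder-Mead stands in for the implicit-filtering step, which is not available in SciPy.

```python
import numpy as np
from scipy.optimize import minimize

def objective(p):
    """Synthetic surrogate: non-smooth, with several local minima."""
    x, y = p
    return abs(x - 3.0) + abs(y + 1.0) + 0.5 * np.sin(5 * x) * np.sin(5 * y)

def simulated_annealing(f, x0, n_iter=500, step=0.5, t0=1.0, seed=0):
    """Plain simulated annealing, used only to locate a pseudo-optimum quickly."""
    rng = np.random.default_rng(seed)
    x, fx = np.array(x0, float), f(x0)
    best_x, best_f = x.copy(), fx
    for k in range(n_iter):
        t = t0 * (1.0 - k / n_iter) + 1e-6
        cand = x + step * rng.normal(size=x.size)
        fc = f(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / t):
            x, fx = cand, fc
            if fc < best_f:
                best_x, best_f = cand.copy(), fc
    return best_x

# Hybrid: stop the global search early, then refine with a derivative-free local method.
x_global = simulated_annealing(objective, x0=[0.0, 0.0])
x_local = minimize(objective, x_global, method="Nelder-Mead").x
print(x_global, x_local)
```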
Development of probabilistic multimedia multipathway computer codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; LePoire, D.; Gnanapragasam, E.
2002-01-01
The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of the RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
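The general pattern, wrap a deterministic dose calculation in Monte Carlo sampling of its input-parameter distributions, can be sketched as follows; the dose function and the distributions are placeholders, not RESRAD models or its default parameter distributions.

```python
import numpy as np

def deterministic_dose(thickness_m, erosion_rate, distribution_coeff):
    """Stand-in for a deterministic dose code (RESRAD itself is not reproduced);
    any function mapping input parameters to a dose could be wrapped the same way."""
    return (100.0 * np.exp(-2.0 * thickness_m) / (1.0 + distribution_coeff)
            + 5.0 * erosion_rate)

rng = np.random.default_rng(42)
n = 10_000
# Step (3) of the procedure: assign distributions to the ranked input parameters.
thickness = rng.triangular(0.1, 0.3, 0.6, size=n)          # cover thickness (m)
erosion = rng.lognormal(mean=-3.0, sigma=0.5, size=n)      # erosion rate
kd = rng.uniform(1.0, 10.0, size=n)                        # distribution coefficient

doses = deterministic_dose(thickness, erosion, kd)
print(np.percentile(doses, [5, 50, 95]))    # dose percentiles induced by input uncertainty
```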
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
Probabilistic flood extent estimates from social media flood observations
NASA Astrophysics Data System (ADS)
Brouwer, Tom; Eilander, Dirk; van Loenen, Arnejan; Booij, Martijn J.; Wijnberg, Kathelijne M.; Verkade, Jan S.; Wagemaker, Jurjen
2017-05-01
The increasing number and severity of floods, driven by phenomena such as urbanization, deforestation, subsidence and climate change, create a growing need for accurate and timely flood maps. In this paper we present and evaluate a method to create deterministic and probabilistic flood maps from Twitter messages that mention locations of flooding. A deterministic flood map created for the December 2015 flood in the city of York (UK) showed good performance (F(2) = 0.69; a statistic ranging from 0 to 1, with 1 expressing a perfect fit with validation data). The probabilistic flood maps we created showed that, in the York case study, the uncertainty in flood extent was mainly induced by errors in the precise locations of flood observations as derived from Twitter data. Errors in the terrain elevation data or in the parameters of the applied algorithm contributed less to flood extent uncertainty. Although these maps tended to overestimate the actual probability of flooding, they gave a reasonable representation of flood extent uncertainty in the area. This study illustrates that inherently uncertain data from social media can be used to derive information about flooding.
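The F(2) validation statistic quoted above is the F-beta score with beta = 2, which weights recall more heavily than precision; a minimal computation (with illustrative cell counts, not the study's confusion matrix) is:

```python
def f_beta(tp, fp, fn, beta=2.0):
    """F-beta score: beta = 2 weights recall (capturing truly flooded cells)
    more heavily than precision (avoiding false flood cells)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Example: counts from comparing a derived flood map with validation data
print(round(f_beta(tp=690, fp=250, fn=180), 2))
```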
Chaotic behavior in the locomotion of Amoeba proteus.
Miyoshi, H; Kagawa, Y; Tsuchiya, Y
2001-01-01
The locomotion of Amoeba proteus has been investigated using algorithms for evaluating the correlation dimension and Lyapunov spectrum developed in the field of nonlinear science. From these parameters it can be inferred whether the random behavior of the system is stochastic or deterministic. For the analysis of the nonlinear parameters, n-dimensional time-delayed vectors have been reconstructed from a time series of the periphery and area of A. proteus images captured with a charge-coupled-device camera, which characterize its random motion. The correlation dimension analysis has shown that the random motion of A. proteus is governed by only 3-4 macrovariables, even though the system is a complex system composed of many degrees of freedom. Furthermore, the analysis of the Lyapunov spectrum has shown that its largest exponent takes positive values. These results indicate that the random behavior of A. proteus is chaotic, deterministic motion on an attractor of low dimension. It may be important for the elucidation of cell locomotion to take account of nonlinear interactions among a small number of dynamics such as the sol-gel transformation, cytoplasmic streaming, and the related chemical reactions occurring in the cell.
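The delay-vector reconstruction and correlation-sum analysis used above can be sketched generically; a logistic-map series stands in for the measured periphery/area signals, and the embedding dimension, delay and radii are illustrative choices.

```python
import numpy as np
from scipy.spatial.distance import pdist

def delay_embed(x, dim, tau):
    """Reconstruct dim-dimensional time-delayed vectors from a scalar time series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def correlation_sum(emb, r):
    """Grassberger-Procaccia correlation sum C(r): fraction of vector pairs closer
    than r; the slope of log C(r) versus log r estimates the correlation dimension."""
    return np.mean(pdist(emb) < r)

# Example with a deterministic chaotic series (logistic map)
x = np.empty(2000)
x[0] = 0.4
for i in range(1999):
    x[i + 1] = 3.9 * x[i] * (1 - x[i])
emb = delay_embed(x, dim=3, tau=1)
for r in (0.05, 0.1, 0.2):
    print(r, correlation_sum(emb, r))
```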
Multimodel Ensemble Methods for Prediction of Wake-Vortex Transport and Decay Originating NASA
NASA Technical Reports Server (NTRS)
Korner, Stephan; Ahmad, Nashat N.; Holzapfel, Frank; VanValkenburg, Randal L.
2017-01-01
Several multimodel ensemble methods are selected and further developed to improve the deterministic and probabilistic prediction skills of individual wake-vortex transport and decay models. The different multimodel ensemble methods are introduced, and their suitability for wake applications is demonstrated. The selected methods include direct ensemble averaging, Bayesian model averaging, and Monte Carlo simulation. The different methodologies are evaluated employing data from wake-vortex field measurement campaigns conducted in the United States and Germany.
Relevance of deterministic chaos theory to studies in functioning of dynamical systems
NASA Astrophysics Data System (ADS)
Glagolev, S. N.; Bukhonova, S. M.; Chikina, E. D.
2018-03-01
The paper considers chaotic behavior of dynamical systems typical of social and economic processes. Approaches to the analysis and evaluation of system development processes are studied from the point of view of controllability and determinateness. Explanations are given for the necessity of applying non-standard mathematical tools to explain the states of dynamical social and economic systems on the basis of fractal theory. Features of fractal structures, such as non-regularity, self-similarity, dimensionality and fractionality, are considered.
Computer Models of Underwater Acoustic Propagation.
1980-01-02
deterministic propagation loss result. Development of a model for the more general problem is required, as evidenced by the trends in future sonar designs ... air. The water column itself is treated as an ideal fluid incapable of supporting shear stresses and having a uniform or, at most, piecewise constant ... evaluated at any depth (zs ≤ z ≤ zN). The layer in which the source is located will be designated by LS and the receiver layer by LR. The depth dependent
Stochastic Seismic Response of an Algiers Site with Random Depth to Bedrock
NASA Astrophysics Data System (ADS)
Badaoui, M.; Berrah, M. K.; Mébarki, A.
2010-05-01
Among the important effects of the Boumerdes earthquake (Algeria, May 21st, 2003) was that, within the same zone, the destruction in certain parts was more severe than in others. This phenomenon is due to site effects, which alter the characteristics of seismic motions and cause concentrations of damage during earthquakes. Local site conditions such as the thickness and mechanical properties of soil layers have important effects on the surface ground motions. This paper deals with the effect of the randomness of the depth to bedrock (soil layer thickness), which is assumed to be a random variable with a lognormal distribution. This distribution is suitable for strictly non-negative random variables with large values of the coefficient of variation. In this case, Monte Carlo simulations are combined with the stiffness matrix method, used herein as a deterministic method, for evaluating the effect of the depth-to-bedrock uncertainty on the seismic response of a multilayered soil. This study considers a P and SV wave propagation pattern using input accelerations recorded at Keddara station, located 20 km from the epicenter and directly on bedrock. A parametric study is conducted to derive the stochastic behavior of the peak ground acceleration and its response spectrum, the transfer function and the amplification factors. It is found that the soil height heterogeneity causes a widening of the frequency content and an increase in the fundamental frequency of the soil profile, indicating that the resonance phenomenon concerns a larger number of structures.
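A minimal illustration of propagating a lognormal depth-to-bedrock into a site-response quantity; here only the single-layer fundamental frequency f0 = Vs / (4 H) is used, the shear-wave velocity, median depth and coefficient of variation are assumed values, and the full stiffness-matrix computation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
vs = 300.0                       # shear-wave velocity of the layer (m/s), assumed
h_median, cov = 30.0, 0.4        # median depth to bedrock (m) and coefficient of variation

# Lognormal parameters from the median and coefficient of variation
sigma_ln = np.sqrt(np.log(1.0 + cov**2))
mu_ln = np.log(h_median)
depths = rng.lognormal(mu_ln, sigma_ln, 10_000)

f0 = vs / (4.0 * depths)                       # fundamental frequency of each realisation
print(np.percentile(f0, [5, 50, 95]))          # spread induced by the random depth
```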
Tributaries affect the thermal response of lakes to climate change
NASA Astrophysics Data System (ADS)
Råman Vinnå, Love; Wüest, Alfred; Zappa, Massimiliano; Fink, Gabriel; Bouffard, Damien
2018-01-01
Thermal responses of inland waters to climate change vary on global and regional scales. The extent of warming is determined by system-specific characteristics such as fluvial input. Here we examine the impact of ongoing climate change on two alpine tributaries, the Aare River and the Rhône River, and their respective downstream peri-alpine lakes, Lake Biel and Lake Geneva. We propagate regional atmospheric temperature effects into river discharge projections. These, together with anthropogenic heat sources, are in turn incorporated into simple and efficient deterministic models that predict future water temperatures, river-borne suspended sediment concentration (SSC), lake stratification and river intrusion depth/volume in the lakes. Climate-induced shifts in river discharge regimes, including seasonal flow variations, act as positive and negative feedbacks influencing river water temperature and SSC. Differences in temperature and heating regimes between rivers and lakes in turn result in large seasonal shifts in the warming of downstream lakes. The extent of this dampening effect on warming is controlled by the lakes' hydraulic residence times. Previous studies suggest that climate change will diminish deep-water oxygen renewal in lakes. We find that climate-related seasonal variations in river temperatures and SSC shift deep-penetrating river intrusions from summer towards winter, thus potentially counteracting the otherwise negative effects of climate change on deep-water oxygen content. Our findings provide a template for evaluating the response of similar hydrologic systems to ongoing climate change.
Weijers, Laure; Baerwald, Christoph; Mennini, Francesco S; Rodríguez-Heredia, José M; Bergman, Martin J; Choquette, Denis; Herrmann, Kirsten H; Attinà, Giulia; Nappi, Carmela; Merino, Silvia Jimenez; Patel, Chad; Mtibaa, Mondher; Foo, Jason
2017-07-01
Rheumatoid arthritis (RA) is a chronic inflammatory disorder leading to disability and reduced quality of life. Effective treatment with biologic DMARDs poses a significant economic burden. The Abatacept versus Adalimumab Comparison in Biologic-Naïve RA Subjects with Background Methotrexate (AMPLE) trial was a head-to-head, randomized study comparing abatacept in serum anti-citrullinated protein antibody (ACPA)-positive patients, with increasing efficacy across ACPA quartile levels. The aim of this study was to evaluate the cost per response accrued using abatacept versus adalimumab in ACPA-positive and ACPA-negative patients with RA from the health care perspective in Germany, Italy, Spain, the US and Canada. A cost-consequence analysis (CCA) was designed to compare the monthly costs per responding patient/patient in remission. Efficacy, safety and resource use inputs were based on the AMPLE trial. A one-way deterministic sensitivity analysis (OWSA) was also performed to assess the impact of model inputs on the results for total incremental costs. Cost per response in ACPA-positive patients favoured abatacept compared with adalimumab (ACR20, ACR90 and HAQ-DI). Subgroup analysis favoured abatacept with increasing stringency of response criteria and serum ACPA levels. Cost per remission (DAS28-CRP) favoured abatacept in ACPA-negative patients, while cost per CDAI and SDAI favoured abatacept in ACPA-positive patients. Abatacept was consistently favoured in ACPA-Q4 patients across all outcomes and countries. Cost savings were greater with abatacept when more stringent response criteria were applied and also with increasing ACPA levels, which could lead to a lower overall health care budget impact with abatacept compared with adalimumab.
A Unit on Deterministic Chaos for Student Teachers
ERIC Educational Resources Information Center
Stavrou, D.; Assimopoulos, S.; Skordoulis, C.
2013-01-01
A unit aiming to introduce pre-service teachers of primary education to the limited predictability of deterministic chaotic systems is presented. The unit is based on a commercial chaotic pendulum system connected with a data acquisition interface. The capabilities and difficulties in understanding the notion of limited predictability of 18…
A Deterministic Annealing Approach to Clustering AIRS Data
NASA Technical Reports Server (NTRS)
Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander
2012-01-01
We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method called the Deterministic Annealing technique.
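Deterministic annealing clustering, soft Gibbs assignments at a temperature that is gradually lowered, can be sketched as follows; the cluster count, temperature schedule and synthetic data are illustrative choices, not the AIRS configuration.

```python
import numpy as np

def deterministic_annealing(X, k=3, t_start=5.0, t_stop=0.05, cooling=0.9, iters=30):
    """Deterministic annealing clustering sketch: soft (Gibbs) assignments at
    temperature T, weighted centroid updates, and gradual cooling; as T decreases
    the assignments harden towards a k-means-like partition."""
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)  # random data points
    t = t_start
    while t > t_stop:
        for _ in range(iters):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            d2 -= d2.min(axis=1, keepdims=True)            # stabilise the exponentials
            p = np.exp(-d2 / t)
            p /= p.sum(axis=1, keepdims=True)              # soft memberships
            centers = (p.T @ X) / p.sum(axis=0)[:, None]   # weighted centroids
        t *= cooling
    return centers

# Example: three synthetic 2-D clusters
X = np.vstack([np.random.default_rng(i).normal(m, 0.3, size=(50, 2))
               for i, m in enumerate([(0.0, 0.0), (3.0, 0.0), (0.0, 3.0)])])
print(deterministic_annealing(X))
```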
The Total Exposure Model (TEM) uses deterministic and stochastic methods to estimate the exposure of a person performing daily activities of eating, drinking, showering, and bathing. There were 250 time histories generated, by subject with activities, for the three exposure ro...
Integrability and Chaos: The Classical Uncertainty
ERIC Educational Resources Information Center
Masoliver, Jaume; Ros, Ana
2011-01-01
In recent years there has been a considerable increase in the publishing of textbooks and monographs covering what was formerly known as random or irregular deterministic motion, now referred to as deterministic chaos. There is still substantial interest in a matter that is included in many graduate and even undergraduate courses on classical…
The development of the deterministic nonlinear PDEs in particle physics to stochastic case
NASA Astrophysics Data System (ADS)
Abdelrahman, Mahmoud A. E.; Sohaly, M. A.
2018-06-01
In the present work, an accurate method, the Riccati-Bernoulli sub-ODE technique, is used for solving the deterministic and stochastic cases of the Phi-4 equation and the nonlinear Foam Drainage equation. Also, control of the random input is studied with respect to the stability of the stochastic process solution.
Contemporary Genetics for Gender Researchers: Not Your Grandma's Genetics Anymore
ERIC Educational Resources Information Center
Salk, Rachel H.; Hyde, Janet S.
2012-01-01
Over the past century, much of genetics was deterministic, and feminist researchers framed justified criticisms of genetics research. However, over the past two decades, genetics research has evolved remarkably and has moved far from earlier deterministic approaches. Our article provides a brief primer on modern genetics, emphasizing contemporary…
ERIC Educational Resources Information Center
Rambe, Patient; Nel, Liezel
2015-01-01
The discourse of social media adoption in higher education has often been funnelled through utopian and dystopian perspectives, which are polarised but determinist theorisations of human engagement with educational technologies. Consequently, these determinist approaches have obscured a broadened grasp of the situated, socially constructed nature…
NASA Astrophysics Data System (ADS)
Burns, Kimberly Ann
The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples. In these applications, high-resolution gamma-ray spectrometers are used to preserve as much information as possible about the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used modeling tool for this type of problem, but computational times for many problems can be prohibitive. This work explores the use of coupled Monte Carlo-deterministic methods for the simulation of neutron-induced photons for high-resolution gamma-ray spectroscopy applications. RAdiation Detection Scenario Analysis Toolbox (RADSAT), a code which couples deterministic and Monte Carlo transport to perform radiation detection scenario analysis in three dimensions [1], was used as the building block for the methods derived in this work. RADSAT was capable of performing coupled deterministic-Monte Carlo simulations for gamma-only and neutron-only problems. The purpose of this work was to develop the methodology necessary to perform coupled neutron-photon calculations and add this capability to RADSAT. Performing coupled neutron-photon calculations requires four main steps: the deterministic neutron transport calculation, the neutron-induced photon spectrum calculation, the deterministic photon transport calculation, and the Monte Carlo detector response calculation. The necessary requirements for each of these steps were determined. A major challenge in utilizing multigroup deterministic transport methods for neutron-photon problems was maintaining the discrete neutron-induced photon signatures throughout the simulation. Existing coupled neutron-photon cross-section libraries and the methods used to produce neutron-induced photons were unsuitable for high-resolution gamma-ray spectroscopy applications. Central to this work was the development of a method for generating multigroup neutron-photon cross-sections in a way that separates the discrete and continuum photon emissions so the neutron-induced photon signatures were preserved. The RADSAT-NG cross-section library was developed as a specialized multigroup neutron-photon cross-section set for the simulation of high-resolution gamma-ray spectroscopy applications. The methodology and cross sections were tested using code-to-code comparison with MCNP5 [2] and NJOY [3]. A simple benchmark geometry was used for all cases compared with MCNP. The geometry consists of a cubical sample with a 252Cf neutron source on one side and a HPGe gamma-ray spectrometer on the opposing side. Different materials were examined in the cubical sample: polyethylene (C2H4), P, N, O, and Fe. The cross sections for each of the materials were compared to cross sections collapsed using NJOY. Comparisons of the volume-averaged neutron flux within the sample, volume-averaged photon flux within the detector, and high-purity gamma-ray spectrometer response (only for polyethylene) were completed using RADSAT and MCNP. The code-to-code comparisons show promising results for the coupled Monte Carlo-deterministic method. 
The RADSAT-NG cross-section production method showed good agreement with NJOY for all materials considered, although some additional work is needed in the resonance region and in the first and last energy bins. Some cross-section discrepancies existed in the lowest and highest energy bins, but the overall shape and magnitude from the two methods agreed. For the volume-averaged photon flux within the detector, the five most intense lines typically agree to within approximately 5% of the MCNP-calculated flux for all of the materials considered. The agreement in the code-to-code comparison cases demonstrates a proof of concept of the method for use in RADSAT for coupled neutron-photon problems in high-resolution gamma-ray spectroscopy applications. One of the primary motivators for using the coupled method over a pure Monte Carlo method is the potential for significantly lower computational times. For the code-to-code comparison cases, the run times for RADSAT were approximately 25--500 times shorter than for MCNP, as shown in Table 1. This assumed a 40 mCi 252Cf neutron source and 600 seconds of "real-world" measurement time. The only variance reduction technique implemented in the MCNP calculation was forward biasing of the source toward the sample target. Improved MCNP run times could be achieved with the addition of more advanced variance reduction techniques.
NASA Astrophysics Data System (ADS)
Cosgrove, B.; Gochis, D.; Clark, E. P.; Cui, Z.; Dugger, A. L.; Fall, G. M.; Feng, X.; Fresch, M. A.; Gourley, J. J.; Khan, S.; Kitzmiller, D.; Lee, H. S.; Liu, Y.; McCreight, J. L.; Newman, A. J.; Oubeidillah, A.; Pan, L.; Pham, C.; Salas, F.; Sampson, K. M.; Smith, M.; Sood, G.; Wood, A.; Yates, D. N.; Yu, W.; Zhang, Y.
2015-12-01
The National Weather Service (NWS) National Water Center (NWC) is collaborating with the NWS National Centers for Environmental Prediction (NCEP) and the National Center for Atmospheric Research (NCAR) to implement a first-of-its-kind operational instance of the Weather Research and Forecasting (WRF)-Hydro model over the Continental United States (CONUS) and contributing drainage areas on the NWS Weather and Climate Operational Supercomputing System (WCOSS) supercomputer. The system will provide seamless, high-resolution, continuously cycling forecasts of streamflow and other hydrologic outputs of value from both deterministic- and ensemble-type runs. WRF-Hydro will form the core of the NWC national water modeling strategy, supporting NWS hydrologic forecast operations along with emergency response and water management efforts of partner agencies. Input and output from the system will be comprehensively verified via the NWC Water Resource Evaluation Service. Hydrologic events occur on a wide range of temporal scales, from fast-acting flash floods to long-term flow events impacting water supply. In order to capture this range of events, the initial operational WRF-Hydro configuration will feature 1) hourly analysis runs, 2) short- and medium-range deterministic forecasts out to two-day and ten-day horizons and 3) long-range ensemble forecasts out to 30 days. All three of these configurations are underpinned by a 1 km execution of the NoahMP land surface model, with channel routing taking place on 2.67 million NHDPlusV2 catchments covering the CONUS and contributing areas. Additionally, the short- and medium-range forecast runs will feature surface and sub-surface routing on a 250 m grid, while the hourly analyses will feature this same 250 m routing in addition to nudging-based assimilation of US Geological Survey (USGS) streamflow observations. A limited number of major reservoirs will be configured within the model to begin to represent the first-order impacts of streamflow regulation.
Long-run evolution of the global economy - Part 2: Hindcasts of innovation and growth
NASA Astrophysics Data System (ADS)
Garrett, T. J.
2015-10-01
Long-range climate forecasts use integrated assessment models to link the global economy to greenhouse gas emissions. This paper evaluates an alternative economic framework outlined in part 1 of this study (Garrett, 2014) that approaches the global economy using purely physical principles rather than explicitly resolved societal dynamics. If this model is initialized with economic data from the 1950s, it yields hindcasts for how fast global economic production and energy consumption grew between 2000 and 2010 with skill scores > 90 % relative to a model of persistence in trends. The model appears to attain high skill partly because there was a strong impulse of discovery of fossil fuel energy reserves in the mid-twentieth century that helped civilization to grow rapidly as a deterministic physical response. Forecasting the coming century may prove more of a challenge because the effect of the energy impulse appears to have nearly run its course. Nonetheless, an understanding of the external forces that drive civilization may help development of constrained futures for the coupled evolution of civilization and climate during the Anthropocene.
The ARPAL operational high resolution Poor Man's Ensemble, description and validation
NASA Astrophysics Data System (ADS)
Corazza, Matteo; Sacchetti, Davide; Antonelli, Marta; Drofa, Oxana
2018-05-01
The Meteo Hydrological Functional Center for Civil Protection of the Environmental Protection Agency of the Liguria Region is responsible for issuing forecasts primarily aimed at Civil Protection needs. Several deterministic high resolution models, run every 6 or 12 h, are regularly used in the Center to elaborate weather forecasts at short to medium range. The Region is frequently affected by severe flash floods over its very small basins, characterized by steep orography close to the sea. These conditions have led the Center in past years to pay particular attention to the use and development of high resolution model chains for explicit simulation of convective phenomena. For years, the availability of several models has allowed the forecasters to perform subjective analyses of the potential evolution of the atmosphere and of its uncertainty. More recently, an Interactive Poor Man's Ensemble has been developed, aimed at providing statistical ensemble variables to support the forecasters' evaluations. In this paper the structure of this system is described and results are validated using the regional dense ground observational network.
Synthesis of feedback systems with large plant ignorance for prescribed time domain tolerances
NASA Technical Reports Server (NTRS)
Horowitz, I. M.; Sidi, M.
1971-01-01
A minimum-phase plant transfer function is given, with prescribed bounds on its parameter values. The plant is embedded in a two-degree-of-freedom feedback system, which is to be designed such that the system time response to a deterministic input lies within specified boundaries. Subject to the above, the design should be such as to minimize the effect of sensor noise at the input to the plant. This report presents a design procedure for this purpose, based on frequency response concepts. The time-domain tolerances are translated into equivalent frequency response tolerances. The latter lead to bounds on the loop transmission function in the form of continuous curves on the Nichols chart. The properties of the loop transmission function that satisfy these bounds with minimum effect of sensor noise are derived.
NASA Astrophysics Data System (ADS)
Fu, Chao; Ren, Xingmin; Yang, Yongfeng; Xia, Yebao; Deng, Wangqun
2018-07-01
A non-intrusive interval precise integration method (IPIM) is proposed in this paper to analyze the transient unbalance response of uncertain rotor systems. The transfer matrix method (TMM) is used to derive the deterministic equations of motion of a hollow-shaft overhung rotor. The uncertain transient dynamic problem is solved by combining the Chebyshev approximation theory with the modified precise integration method (PIM). Transient response bounds are calculated by interval arithmetic of the expansion coefficients. A theoretical error analysis of the proposed method is provided briefly, and its accuracy is further validated by comparison with the scanning method in simulations. Numerical results show that the IPIM maintains good accuracy in vibration prediction of the start-up transient process. Furthermore, the proposed method can also provide theoretical guidance for other transient dynamic mechanical systems with uncertainties.
NASA Astrophysics Data System (ADS)
Kadow, Christopher; Illing, Sebastian; Kunst, Oliver; Pohlmann, Holger; Müller, Wolfgang; Cubasch, Ulrich
2014-05-01
Decadal forecasting of climate variability is a growing need for different parts of society, industry and economy. The German initiative MiKlip (www.fona-miklip.de) focuses on the ongoing processes of medium-term climate prediction. This major scientific project, funded by the Federal Ministry of Education and Research in Germany (BMBF), develops a forecast system that aims for reliable predictions on decadal timescales. Using a single earth system model from the Max Planck Institute (MPI-ESM) and moving from uninitialized runs to the first initialized 'Coupled Model Intercomparison Project Phase 5' (CMIP5) hindcast experiments identified possibilities and open scientific tasks. The MiKlip decadal prediction system was improved in different aspects through new initialization techniques and datasets of the ocean and atmosphere. To accompany and assess such improvement of a forecast system, a standardized evaluation system designed by the MiKlip sub-project 'Integrated data and evaluation system for decadal scale prediction' (INTEGRATION) analyzes every step of its evolution. This study aims at combining deterministic and probabilistic skill scores of this prediction system from its uninitialized state to anomaly and then full-field oceanic initialization. The improved forecast skill in these different decadal hindcast experiments of surface air temperature and precipitation in the Pacific region and the complex area of the North Atlantic illustrates potential sources of skill. A standardized evaluation guides a prediction system, throughout its development, toward producing reliable forecasts. Different aspects of these research dependencies, e.g. ensemble size, resolution, initializations, etc., will be discussed.
Multivariate Probabilistic Analysis of an Hydrological Model
NASA Astrophysics Data System (ADS)
Franceschini, Samuela; Marani, Marco
2010-05-01
Model predictions derived from rainfall measurements and hydrological model results are often limited by the systematic error of measuring instruments, by the intrinsic variability of the natural processes and by the uncertainty of the mathematical representation. We propose a means to identify such sources of uncertainty and to quantify their effects based on point-estimate approaches, as a valid alternative to cumbersome Monte Carlo methods. We present uncertainty analyses on the hydrologic response to selected meteorological events, in the mountain streamflow-generating portion of the Brenta basin at Bassano del Grappa, Italy. The Brenta river catchment has a relatively uniform morphology and quite a heterogeneous rainfall pattern. In the present work, we evaluate two sources of uncertainty: data uncertainty (the uncertainty due to data handling and analysis) and model uncertainty (the uncertainty related to the formulation of the model). We thus evaluate the effects of the measurement error of tipping-bucket rain gauges, the uncertainty in estimating spatially-distributed rainfall through block kriging, and the uncertainty associated with estimated model parameters. To this end, we coupled a deterministic model based on the geomorphological theory of the hydrologic response to probabilistic methods. In particular we compare the results of Monte Carlo Simulations (MCS) to the results obtained, in the same conditions, using Li's Point Estimate Method (LiM). The LiM is a probabilistic technique that approximates the continuous probability distribution function of the considered stochastic variables by means of discrete points and associated weights. This allows results to be reproduced satisfactorily with only a few evaluations of the model function. The comparison between the LiM and MCS results highlights the pros and cons of using an approximating method. LiM is less computationally demanding than MCS, but has limited applicability, especially when the model response is highly nonlinear. Higher-order approximations can provide more accurate estimations, but reduce the numerical advantage of the LiM. The results of the uncertainty analysis identify the main sources of uncertainty in the computation of river discharge. In this particular case the spatial variability of rainfall and the model parameter uncertainty are shown to have the greatest impact on discharge evaluation. This, in turn, highlights the need to support any estimated hydrological response with probability information and risk analysis results in order to provide a robust, systematic framework for decision making.
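To make the point-estimate idea above concrete, here is a minimal sketch contrasting a Monte Carlo analysis with a Rosenblueth-style two-point estimate on a hypothetical rainfall-to-discharge relation; it is not Li's Point Estimate Method itself, and the function, means and standard deviations are invented for illustration.

```python
import numpy as np

# Hypothetical, purely illustrative "hydrologic response": peak discharge as a
# nonlinear function of areal rainfall depth P and a runoff parameter k.
def peak_discharge(P, k):
    return k * P ** 1.5

# Uncertain inputs described only by their mean and standard deviation.
P_mu, P_sd = 50.0, 10.0   # mm
k_mu, k_sd = 0.8, 0.1     # arbitrary units

# --- Monte Carlo reference ---
rng = np.random.default_rng(0)
Q_mc = peak_discharge(rng.normal(P_mu, P_sd, 100_000),
                      rng.normal(k_mu, k_sd, 100_000))
print("MC  mean %.1f  std %.1f" % (Q_mc.mean(), Q_mc.std()))

# --- Rosenblueth-style two-point estimate (2^n model runs, n = 2 inputs) ---
points, weights = [], []
for sp in (+1, -1):
    for sk in (+1, -1):
        points.append(peak_discharge(P_mu + sp * P_sd, k_mu + sk * k_sd))
        weights.append(0.25)          # equal weights for uncorrelated inputs
points, weights = np.array(points), np.array(weights)
pe_mean = np.sum(weights * points)
pe_std = np.sqrt(np.sum(weights * points ** 2) - pe_mean ** 2)
print("PE  mean %.1f  std %.1f  (only 4 model evaluations)" % (pe_mean, pe_std))
```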
On the Use of Statistics in Design and the Implications for Deterministic Computer Experiments
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.
1997-01-01
Perhaps the most prevalent use of statistics in engineering design is through Taguchi's parameter and robust design -- using orthogonal arrays to compute signal-to-noise ratios in a process of design improvement. In our view, however, there is an equally exciting use of statistics in design that could become just as prevalent: it is the concept of metamodeling whereby statistical models are built to approximate detailed computer analysis codes. Although computers continue to get faster, analysis codes always seem to keep pace so that their computational time remains non-trivial. Through metamodeling, approximations of these codes are built that are orders of magnitude cheaper to run. These metamodels can then be linked to optimization routines for fast analysis, or they can serve as a bridge for integrating analysis codes across different domains. In this paper we first review metamodeling techniques that encompass design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We discuss their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of metamodeling techniques in given situations and how common pitfalls can be avoided.
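As a rough illustration of the metamodeling idea described above, the sketch below fits a quadratic response surface to a few runs of a stand-in "analysis code" and then queries the cheap surrogate; the function and the design of experiments are hypothetical, not taken from the paper.

```python
import numpy as np

# Stand-in for an expensive deterministic analysis code (purely illustrative).
def analysis_code(x1, x2):
    return np.sin(x1) + 0.5 * x2 ** 2 + 0.1 * x1 * x2

# Small design of experiments: a 4 x 4 full factorial over the design space.
x1, x2 = np.meshgrid(np.linspace(-2, 2, 4), np.linspace(-2, 2, 4))
x1, x2 = x1.ravel(), x2.ravel()
y = analysis_code(x1, x2)

# Quadratic response-surface metamodel fitted by least squares.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def metamodel(a, b):
    return np.dot([1.0, a, b, a * b, a ** 2, b ** 2], beta)

# The surrogate is now far cheaper to evaluate than the original code.
print("code:", analysis_code(0.7, -1.2), " surrogate:", metamodel(0.7, -1.2))
```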
Natural sequence variants of yeast environmental sensors confer cell-to-cell expression variability
Fehrmann, Steffen; Bottin-Duplus, Hélène; Leonidou, Andri; Mollereau, Esther; Barthelaix, Audrey; Wei, Wu; Steinmetz, Lars M; Yvert, Gaël
2013-01-01
Living systems may have evolved probabilistic bet hedging strategies that generate cell-to-cell phenotypic diversity in anticipation of environmental catastrophes, as opposed to adaptation via a deterministic response to environmental changes. Evolution of bet hedging assumes that genotypes segregating in natural populations modulate the level of intraclonal diversity, which so far has largely remained hypothetical. Using a fluorescent Pmet17-GFP reporter, we mapped four genetic loci conferring to a wild yeast strain an elevated cell-to-cell variability in the expression of MET17, a gene regulated by the methionine pathway. A frameshift mutation in the Erc1p transmembrane transporter, probably resulting from a release of laboratory strains from negative selection, reduced Pmet17-GFP expression variability. At a second locus, cis-regulatory polymorphisms increased mean expression of the Mup1p methionine permease, causing increased expression variability in trans. These results demonstrate that an expression quantitative trait locus (eQTL) can simultaneously have a deterministic effect in cis and a probabilistic effect in trans. Our observations indicate that the evolution of transmembrane transporter genes can tune intraclonal variation and may therefore be implicated in both reactive and anticipatory strategies of adaptation. PMID:24104478
Fluidic Energy Harvester Optimization in Grid Turbulence
NASA Astrophysics Data System (ADS)
Danesh-Yazdi, Amir; Elvin, Niell; Andreopoulos, Yiannis
2017-11-01
Even though it is omnipresent in nature, there has not been a great deal of research in the literature involving turbulence as an energy source for piezoelectric fluidic harvesters. In the present work, a grid-generated turbulence forcing function model which we derived previously is employed in the single degree-of-freedom electromechanical equations to find the power output and tip displacement of piezoelectric cantilever beams. Additionally, we utilize simplified, deterministic models of the turbulence forcing function to obtain closed-form expressions for the power output. These theoretical models are studied using experiments that involve separately placing a hot-wire anemometer probe and a short PVDF beam in flows where turbulence is generated by means of passive and semi-passive grids. From a parametric study on the deterministic models, we show that the white noise forcing function best mimics the experimental data. Furthermore, our parametric study of the response spectrum of a generic fluidic harvester in grid-generated turbulent flow shows that optimum power output is attained for beams placed closer to the grid with a low natural frequency and damping ratio and a large electromechanical coupling coefficient. NSF Grant No. CBET 1033117.
Culpability and blame after pregnancy loss
Hale, B
2007-01-01
The problem of feeling guilty about a pregnancy loss is suggested to be primarily a moral matter and not a medical or psychological one. Two standard approaches to women who blame themselves for a loss are first introduced, characterised as either psychologistic or deterministic. Both these approaches are shown to underdetermine the autonomy of the mother by depending on the notion that the mother is not culpable for the loss if she “could not have acted otherwise”. The inability to act otherwise is explained as not being as strong a determinant of culpability as it may seem at first. Instead, people's culpability for a bad turn of events implies strongly that they have acted for the wrong reasons, which is probably not true in the case of women who have experienced a loss of pregnancy. The practical conclusion of this paper is that women who feel a sense of guilt in the wake of their loss have a good reason to reject both the psychologistic and the deterministic approaches to their guilt—that they are justified in feeling upset about what has gone wrong, even responsible for the life of the child, but are not culpable for the unfortunate turn of events. PMID:17209106
Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens
2009-11-01
In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units, as follows: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that, despite the very similar solutions obtained with both strategies, the problems found with the strategy using the deterministic model, such as lack of convergence and high computational time, make the optimization strategy with the statistical model, which proved to be robust and fast, more suitable for the flash fermentation process and recommended for real-time applications coupling optimization and control.
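The nonlinear programming formulation described above can be illustrated with a toy problem solved by SciPy's SLSQP routine, an SQP-type method; the objective, constraint and variable names below are invented stand-ins, not the authors' fermentation model.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical decision variables: x = [dilution_rate, recycle_ratio].
def neg_productivity(x):
    D, r = x
    # Toy productivity surface: rises with dilution, penalized at high recycle.
    return -(D * (1.0 + r) - 0.8 * D ** 2 - 0.3 * r ** 2)   # minimize negative

def conversion_margin(x):
    D, r = x
    # Toy substrate conversion; the inequality requires at least 95 %.
    return (0.6 + 0.5 * r - 0.2 * D) - 0.95

res = minimize(neg_productivity, x0=[0.3, 0.5], method="SLSQP",
               bounds=[(0.05, 1.0), (0.0, 1.0)],
               constraints=[{"type": "ineq", "fun": conversion_margin}])
print("optimal D, r:", res.x, " max productivity:", -res.fun)
```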
Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.
1992-01-01
The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.
Social autopoiesis: A concept in search of a theory
NASA Astrophysics Data System (ADS)
Broonen, Jean Paul
1998-07-01
This paper is a brief report on the issue of extending the concept of autopoiesis to social systems. The arguments developed by four groups of authors in response to that issue are summarized: Maturana and Varela, the fathers of the concept of autopoiesis; Zeleny & Hufford, who proposed a simple extension of the concept to social systems; Luhmann and Hejel, with two different transformations of the concept; and Morgan and his metaphorical perspective. The determinist vs. teleological conception of (social) autopoiesis explicitly or implicitly sustained by several authors is emphasized.
NASA Technical Reports Server (NTRS)
Norman, Ryan B.; Badavi, Francis F.; Blattnig, Steve R.; Atwell, William
2011-01-01
A deterministic suite of radiation transport codes, developed at NASA Langley Research Center (LaRC), which describe the transport of electrons, photons, protons, and heavy ions in condensed media, is used to simulate exposures from spectral distributions typical of electrons, protons and carbon-oxygen-sulfur (C-O-S) trapped heavy ions in the Jovian radiation environment. The particle transport suite consists of a coupled electron and photon deterministic transport algorithm (CEPTRN) and a coupled light particle and heavy ion deterministic transport algorithm (HZETRN). The primary purpose for the development of the transport suite is to provide a means for the spacecraft design community to rapidly perform numerous repetitive calculations essential for electron, proton and heavy ion radiation exposure assessments in complex space structures. In this paper, the radiation environment of the Galilean satellite Europa is used as a representative boundary condition to show the capabilities of the transport suite. While the transport suite can directly access the output electron spectra of the Jovian environment as generated by the Jet Propulsion Laboratory (JPL) Galileo Interim Radiation Electron (GIRE) model of 2003, for the sake of relevance to the upcoming Europa Jupiter System Mission (EJSM), the 105 days at Europa mission fluence energy spectra provided by JPL are used to produce the corresponding dose-depth curve in silicon behind an aluminum shield of 100 mils (0.7 g/sq cm). The transport suite can also accept ray-traced thickness files from a computer-aided design (CAD) package and calculate the total ionizing dose (TID) at a specific target point. In that regard, using a low-fidelity CAD model of the Galileo probe, the transport suite was verified by comparing with Monte Carlo (MC) simulations for orbits JOI--J35 of the Galileo extended mission (1996-2001). For the upcoming EJSM mission with a potential launch date of 2020, the transport suite is used to compute the traditional aluminum-silicon dose-depth calculation as a standard shield-target combination output, as well as the shielding response of high charge (Z) shields such as tantalum (Ta). Finally, a shield optimization algorithm is used to guide the instrument designer with the choice of graded-Z shield analysis.
Probabilistic evaluation of SSME structural components
NASA Astrophysics Data System (ADS)
Rajagopal, K. R.; Newell, J. F.; Ho, H.
1991-05-01
The application is described of Composite Load Spectra (CLS) and Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) family of computer codes to the probabilistic structural analysis of four Space Shuttle Main Engine (SSME) space propulsion system components. These components are subjected to environments that are influenced by many random variables. The applications consider a wide breadth of uncertainties encountered in practice, while simultaneously covering a wide area of structural mechanics. This has been done consistent with the primary design requirement for each component. The probabilistic application studies are discussed using finite element models that have been typically used in the past in deterministic analysis studies.
Deterministic chaos in an ytterbium-doped mode-locked fiber laser
NASA Astrophysics Data System (ADS)
Mélo, Lucas B. A.; Palacios, Guillermo F. R.; Carelli, Pedro V.; Acioli, Lúcio H.; Rios Leite, José R.; de Miranda, Marcio H. G.
2018-05-01
We experimentally study the nonlinear dynamics of a femtosecond ytterbium-doped mode-locked fiber laser. With the laser operating in the pulsed regime, a route to chaos is presented, starting from stable mode-locking and passing through period-two, period-four, chaotic and period-three regimes. Return maps and bifurcation diagrams were extracted from time series for each regime. The analysis of the time series with the laser operating in the quasi mode-locked regime reveals deterministic chaos described by a one-dimensional Rossler map. A positive Lyapunov exponent $\lambda = 0.14$ confirms the deterministic chaos of the system. We suggest an explanation of the observed map by relating gain saturation and intra-cavity loss.
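For readers unfamiliar with the diagnostic, the sketch below shows how a positive largest Lyapunov exponent is estimated for a one-dimensional map by averaging log|f'(x)| along an orbit; it uses the logistic map as a generic chaotic example, not the laser time series.

```python
import numpy as np

# Logistic map x_{n+1} = r x_n (1 - x_n); chaotic for r = 4.
r = 4.0
f = lambda x: r * x * (1.0 - x)
df = lambda x: r * (1.0 - 2.0 * x)           # analytic derivative

x = 0.2
for _ in range(1000):                        # discard the transient
    x = f(x)

lyap, N = 0.0, 100_000
for _ in range(N):
    lyap += np.log(abs(df(x)))               # accumulate local stretching rates
    x = f(x)
print("largest Lyapunov exponent ~", lyap / N)   # ~ ln 2 > 0 -> chaos
```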
The viability of ADVANTG deterministic method for synthetic radiography generation
NASA Astrophysics Data System (ADS)
Bingham, Andrew; Lee, Hyoung K.
2018-07-01
Fast simulation techniques to generate synthetic radiographic images of high resolution are helpful when new radiation imaging systems are designed. However, the standard stochastic approach requires lengthy run times, with poorer statistics at higher resolution. The viability of a deterministic approach to synthetic radiography image generation was explored, with the aim of quantifying the computational time decrease relative to the stochastic method. ADVANTG was compared to MCNP in multiple scenarios, including a small radiography system prototype, to simulate high resolution radiography images. By using the ADVANTG deterministic code to simulate radiography images, the computational time was found to decrease by a factor of 10 to 13 compared to the MCNP stochastic approach, while retaining image quality.
Distributed modelling of hydrologic regime at three subcatchments of Kopaninský tok catchment
NASA Astrophysics Data System (ADS)
Žlábek, Pavel; Tachecí, Pavel; Kaplická, Markéta; Bystřický, Václav
2010-05-01
Kopaninský tok catchment is situated in the crystalline area of the Bohemo-Moravian highland hilly region, with cambisol cover and prevailing agricultural land use. It has been a subject of long-term observation since the 1980s. Time series (discharge, precipitation, climatic parameters, etc.) are now available at a 10 min time step; average daily composite water quality samples, plus samples taken during events, are also available. A soil survey yielding reference soil hydraulic properties for the horizons and a vegetation cover survey, including LAI measurement, have been carried out. All parameters were analysed and used to establish distributed mathematical models of the P6, P52 and P53 subcatchments, using the MIKE SHE 2009 WM deterministic hydrologic modelling system. The aim is to simulate the long-term hydrologic regime as well as rainfall-runoff events, serving as the basis for modelling the nitrate regime and agricultural management influence in the next step. The subcatchments differ in the proportion of artificially drained area, soil types, land use and slope angle. The models are set up in a regular computational grid of 2 m size. The basic time step was set to 2 h, and the total simulated period covers 3 years. Runoff response and the moisture regime are compared using spatially distributed simulation results. A sensitivity analysis revealed the most important parameters influencing the model response. The importance of the spatial distribution of initial conditions was underlined. Further on, different runoff components in terms of their origin, flow paths and travel time were separated using a combination of two runoff separation techniques (a digital filter and a simple conceptual model, GROUND) in 12 subcatchments of Kopaninský tok catchment. These two methods were chosen after testing a number of methods. Ordination diagrams produced with the Canoco software were used to evaluate the influence of different catchment parameters on different runoff components. A canonical ordination method (RDA) was used to explain one data set (runoff components - either volumes of each runoff component or occurrence of baseflow) with another data set (catchment parameters - proportion of arable land, proportion of forest, proportion of vulnerable zones with high infiltration capacity, average slope, topographic index and runoff coefficient). The influence was analysed both for the long-term runoff balance and for selected rainfall-runoff events. Keywords: small catchment, water balance modelling, rainfall-runoff modelling, distributed deterministic model, runoff separation, sensitivity analysis
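The abstract does not state which digital filter was used for the runoff separation; as one common possibility, the sketch below implements the one-parameter Lyne-Hollick recursive filter on a synthetic hydrograph.

```python
import numpy as np

def lyne_hollick_baseflow(q, alpha=0.925):
    """One-parameter recursive digital filter separating baseflow from total
    streamflow q (single forward pass)."""
    qf = np.zeros_like(q, dtype=float)        # quickflow component
    for t in range(1, len(q)):
        qf[t] = alpha * qf[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        qf[t] = min(max(qf[t], 0.0), q[t])    # keep both components physical
    return q - qf                             # baseflow = total - quickflow

# Synthetic hydrograph: slow recession plus two storm peaks (illustrative only).
t = np.arange(200)
q = 2.0 * np.exp(-t / 150) + 5.0 * np.exp(-((t - 40) / 6.0) ** 2) \
    + 3.0 * np.exp(-((t - 120) / 8.0) ** 2)
qb = lyne_hollick_baseflow(q)
print("baseflow index =", qb.sum() / q.sum())
```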
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Fasanella, Edwin L.; Melis, Matthew; Carney, Kelly; Gabrys, Jonathan
2004-01-01
The Space Shuttle Columbia Accident Investigation Board (CAIB) made several recommendations for improving the NASA Space Shuttle Program. An extensive experimental and analytical program has been developed to address two recommendations related to structural impact analysis. The objective of the present work is to demonstrate the application of probabilistic analysis to assess the effect of uncertainties on debris impacts on Space Shuttle Reinforced Carbon-Carbon (RCC) panels. The probabilistic analysis is used to identify the material modeling parameters controlling the uncertainty. A comparison of the finite element results with limited experimental data provided confidence that the simulations were adequately representing the global response of the material. Five input parameters were identified as significantly controlling the response.
Strange nonchaotic attractors for computation
NASA Astrophysics Data System (ADS)
Sathish Aravindh, M.; Venkatesan, A.; Lakshmanan, M.
2018-05-01
We investigate the response of quasiperiodically driven nonlinear systems exhibiting strange nonchaotic attractors (SNAs) to deterministic input signals. We show that if one uses two square waves in an aperiodic manner as input to a quasiperiodically driven double-well Duffing oscillator system, the response of the system can produce logical output controlled by such a forcing. Changing the threshold or biasing of the system changes the output to another logic operation and memory latch. The interplay of nonlinearity and quasiperiodic forcing yields logical behavior, and the emergent outcome of such a system is a logic gate. It is further shown that the logical behaviors persist even for an experimental noise floor. Thus the SNA turns out to be an efficient tool for computation.
Evaluation of Fast-Time Wake Vortex Models using Wake Encounter Flight Test Data
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; VanValkenburg, Randal L.; Bowles, Roland L.; Limon Duparcmeur, Fanny M.; Gloudesman, Thijs; van Lochem, Sander; Ras, Eelco
2014-01-01
This paper describes a methodology for the integration and evaluation of fast-time wake models with flight data. The National Aeronautics and Space Administration conducted detailed flight tests in 1995 and 1997 under the Aircraft Vortex Spacing System Program to characterize wake vortex decay and wake encounter dynamics. In this study, data collected during Flight 705 were used to evaluate NASA's fast-time wake transport and decay models. Deterministic and Monte-Carlo simulations were conducted to define wake hazard bounds behind the wake generator. The methodology described in this paper can be used for further validation of fast-time wake models using en-route flight data, and for determining wake turbulence constraints in the design of air traffic management concepts.
NASA Astrophysics Data System (ADS)
Matte, Simon; Boucher, Marie-Amélie; Boucher, Vincent; Fortier Filion, Thomas-Charles
2017-06-01
A large effort has been made over the past 10 years to promote the operational use of probabilistic or ensemble streamflow forecasts. Numerous studies have shown that ensemble forecasts are of higher quality than deterministic ones. Many studies also conclude that basing decisions on ensemble rather than deterministic forecasts leads to better decisions in the context of flood mitigation. Hence, it is believed that ensemble forecasts possess a greater economic and social value for both decision makers and the general population. However, the vast majority of, if not all, existing hydro-economic studies rely on a cost-loss ratio framework that assumes a risk-neutral decision maker. To overcome this important flaw, this study borrows from economics and evaluates the economic value of early warning flood systems using the well-known Constant Absolute Risk Aversion (CARA) utility function, which explicitly accounts for the level of risk aversion of the decision maker. This new framework allows for the full exploitation of the information related to a forecast's uncertainty, making it especially suited for the economic assessment of ensemble or probabilistic forecasts. Rather than comparing deterministic and ensemble forecasts, this study focuses on comparing different types of ensemble forecasts. There are multiple ways of assessing and representing forecast uncertainty. Consequently, there exist many different means of building an ensemble forecasting system for future streamflow. One such possibility is to dress deterministic forecasts using the statistics of past forecast errors. Such dressing methods are popular among operational agencies because of their simplicity and intuitiveness. Another approach is the use of ensemble meteorological forecasts for precipitation and temperature, which are then provided as inputs to one or many hydrological model(s). In this study, three concurrent ensemble streamflow forecasting systems are compared: simple statistically dressed deterministic forecasts, forecasts based on meteorological ensembles, and a variant of the latter that also includes an estimation of state variable uncertainty. This comparison takes place for the Montmorency River, a small flood-prone watershed in southern central Quebec, Canada. The assessment of forecasts is performed for lead times of 1 to 5 days, both in terms of forecast quality (relative to the corresponding record of observations) and in terms of economic value, using the new proposed framework based on the CARA utility function. It is found that the economic value of a forecast for a risk-averse decision maker is closely linked to the forecast reliability in predicting the upper tail of the streamflow distribution. Hence, post-processing forecasts to avoid over-forecasting could help improve both the quality and the value of forecasts.
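A minimal sketch of how a CARA utility can be used to value decisions based on an ensemble forecast is given below; the utility form U(w) = -exp(-a w) is standard, but the costs, losses, flood threshold and ensemble are hypothetical and not the study's economic model.

```python
import numpy as np

def cara_utility(w, a=0.5):
    """Constant Absolute Risk Aversion utility; larger a = more risk averse."""
    return -np.exp(-a * w)

def expected_utility(ensemble, protect, a=0.5,
                     cost=1.0, loss=10.0, threshold=300.0, wealth=20.0):
    """Expected CARA utility of protecting (or not), given an ensemble forecast
    of peak discharge; all monetary figures are hypothetical."""
    flood = ensemble > threshold
    if protect:
        w = wealth - cost * np.ones_like(ensemble)   # pay the mitigation cost
    else:
        w = wealth - loss * flood                    # incur loss if flooded
    return cara_utility(w, a).mean()

rng = np.random.default_rng(1)
ens = rng.gamma(shape=8.0, scale=35.0, size=50)      # 50-member forecast (m3/s)
act = max((True, False), key=lambda p: expected_utility(ens, p))
print("protect?", act, " P(flood) =", (ens > 300.0).mean())
```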
In an earlier study, Puente and Obregón [Water Resour. Res. 32(1996)2825] reported on the usage of a deterministic fractal–multifractal (FM) methodology to faithfully describe an 8.3 h high-resolution rainfall time series in Boston, gathered every 15 s ...
The Role of Probability and Intentionality in Preschoolers' Causal Generalizations
ERIC Educational Resources Information Center
Sobel, David M.; Sommerville, Jessica A.; Travers, Lea V.; Blumenthal, Emily J.; Stoddard, Emily
2009-01-01
Three experiments examined whether preschoolers recognize that the causal properties of objects generalize to new members of the same set given either deterministic or probabilistic data. Experiment 1 found that 3- and 4-year-olds were able to make such a generalization given deterministic data but were at chance when they observed probabilistic…
ERIC Educational Resources Information Center
Moreland, James D., Jr
2013-01-01
This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…
CPT-based probabilistic and deterministic assessment of in situ seismic soil liquefaction potential
Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Der Kiureghian, A.; Cetin, K.O.
2006-01-01
This paper presents a complete methodology for both probabilistic and deterministic assessment of seismic soil liquefaction triggering potential based on the cone penetration test (CPT). A comprehensive worldwide set of CPT-based liquefaction field case histories were compiled and back analyzed, and the data then used to develop probabilistic triggering correlations. Issues investigated in this study include improved normalization of CPT resistance measurements for the influence of effective overburden stress, and adjustment to CPT tip resistance for the potential influence of "thin" liquefiable layers. The effects of soil type and soil character (i.e., "fines" adjustment) for the new correlations are based on a combination of CPT tip and sleeve resistance. To quantify probability for performance-based engineering applications, Bayesian "regression" methods were used, and the uncertainties of all variables comprising both the seismic demand and the liquefaction resistance were estimated and included in the analysis. The resulting correlations were developed using a Bayesian framework and are presented in both probabilistic and deterministic formats. The results are compared to previous probabilistic and deterministic correlations. © 2006 ASCE.
Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow
NASA Astrophysics Data System (ADS)
Gupta, Atma Ram; Kumar, Ashwani
2017-12-01
Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The pattern of load is highly dependent on consumer behavior and temporal factors such as the season of the year, day of the week or time of the day. For deterministic radial distribution load flow studies, the load is taken as constant. But load varies continually, with a high degree of uncertainty. So, there is a need to model probable realistic load. Monte Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power load from the mean and standard deviation of the load, and by solving a deterministic radial load flow with these values. The probabilistic solution is reconstructed from the deterministic data obtained for each simulation. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding the impact of probable realistic ZIP load modeling on unbalanced radial distribution load flow; and comparing the voltage profile and losses with probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
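A minimal sketch of the workflow described above, assuming a hypothetical two-bus feeder: sample active and reactive load from their mean and standard deviation, solve a deterministic backward/forward-sweep load flow for each sample, and reconstruct voltage statistics from the deterministic runs. A constant-power load is used here rather than the full ZIP model.

```python
import numpy as np

def radial_load_flow(s_load, z_line=0.02 + 0.06j, v_source=1.0 + 0j, iters=20):
    """Backward/forward sweep for a single-branch radial feeder (per unit)."""
    v = v_source
    for _ in range(iters):
        i_branch = np.conj(s_load / v)     # backward sweep: branch current
        v = v_source - z_line * i_branch   # forward sweep: receiving-end voltage
    return v

# Probable realistic load: random P, Q drawn from mean and standard deviation.
rng = np.random.default_rng(42)
p = rng.normal(0.8, 0.1, 5000)             # active power, p.u.
q = rng.normal(0.3, 0.05, 5000)            # reactive power, p.u.

v_mag = np.array([abs(radial_load_flow(pk + 1j * qk)) for pk, qk in zip(p, q)])
print("mean |V| = %.4f p.u., 5th percentile = %.4f p.u."
      % (v_mag.mean(), np.percentile(v_mag, 5)))
```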
NASA Technical Reports Server (NTRS)
Hathaway, Michael D.
1986-01-01
Measurements of the unsteady velocity field within the stator row of a transonic axial-flow fan were acquired using a laser anemometer. Measurements were obtained on axisymmetric surfaces located at 10 and 50 percent span from the shroud, with the fan operating at maximum efficiency at design speed. The ensemble-average and variance of the measured velocities are used to identify rotor-wake-generated (deterministic) unsteadiness and turbulence, respectively. Correlations of both deterministic and turbulent velocity fluctuations provide information on the characteristics of unsteady interactions within the stator row. These correlations are derived from the Navier-Stokes equation in a manner similar to deriving the Reynolds stress terms, whereby various averaging operators are used to average the aperiodic, deterministic, and turbulent velocity fluctuations which are known to be present in multistage turbomachines. The correlations of deterministic and turbulent velocity fluctuations throughout the axial fan stator row are presented. In particular, amplification and attenuation of both types of unsteadiness are shown to occur within the stator blade passage.
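The averaging operators described above correspond to the standard triple decomposition of the instantaneous velocity into time-mean, deterministic (rotor-wake-correlated) and random turbulent parts; written schematically below is a sketch of the standard formalism, not a transcription of the report's equations.

```latex
u_i(t) = \bar{u}_i + \tilde{u}_i(t) + u_i'(t),
\qquad
\overline{\tilde{u}_i} = \overline{u_i'} = 0,
\qquad
\underbrace{\overline{\tilde{u}_i\,\tilde{u}_j}}_{\text{deterministic stresses}}
\quad\text{and}\quad
\underbrace{\overline{u_i'\,u_j'}}_{\text{turbulent stresses}} .
```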
Down to the roughness scale assessment of piston-ring/liner contacts
NASA Astrophysics Data System (ADS)
Checo, H. M.; Jaramillo, A.; Ausas, R. F.; Jai, M.; Buscaglia, G. C.
2017-02-01
The effects of surface roughness in hydrodynamic bearings have been accounted for through several approaches, the most widely used being averaging or stochastic techniques. With these, the surface is not treated "as it is", but by means of an assumed probability distribution for the roughness. The so-called direct, deterministic, or measured-surface simulations solve the lubrication problem with realistic surfaces down to the roughness scale. This leads to expensive computational problems. Most researchers have tackled this problem by considering non-moving surfaces and neglecting the ring dynamics to reduce the computational burden. What is proposed here is to solve the fully deterministic simulation both in space and in time, so that the actual movement of the surfaces and the ring dynamics are taken into account. This simulation is much more complex than previous ones, as it is intrinsically transient. The feasibility of these fully deterministic simulations is illustrated in two cases: fully deterministic simulation of liner surfaces with diverse finishes (honed and coated bores) with constant piston velocity and load on the ring, and also in real engine conditions.
Discrete-Time Deterministic $Q$-Learning: A Novel Convergence Analysis.
Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo
2017-05-01
In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed Q-learning algorithm, the iterative Q function is updated for all the state and control spaces, instead of updating for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, where the convergence criterion of the learning rates for traditional Q-learning algorithms is simplified. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties for the undiscounted case of the deterministic Q-learning algorithm are first developed. Then, considering the discount factor, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and compute the iterative control law, respectively, to facilitate the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
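A minimal sketch of the "update all states and controls each iteration" idea on a toy deterministic chain is shown below; with a unit learning rate this reduces to a Q-style value iteration and is only meant to illustrate the sweep, not the paper's algorithm or its neural-network implementation.

```python
import numpy as np

# Toy deterministic chain: states 0..4, actions {0: left, 1: right};
# reaching state 4 yields reward 1, every other step yields 0.
n_states, n_actions, gamma = 5, 2, 0.9

def step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

Q = np.zeros((n_states, n_actions))
for _ in range(100):                       # each iteration sweeps ALL (s, a)
    Q_new = np.empty_like(Q)
    for s in range(n_states):
        for a in range(n_actions):
            s_next, r = step(s, a)
            Q_new[s, a] = r + gamma * Q[s_next].max()
    Q = Q_new

print("greedy policy:", Q.argmax(axis=1))  # expect all 1s (always move right)
```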
NASA Astrophysics Data System (ADS)
Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.
2017-09-01
The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering the whole of Australia. To evaluate the filters' performances and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), respectively, improving the model groundwater estimation errors by 34% and 31% compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
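As an illustration of one ingredient mentioned above, the sketch below implements the Systematic Resampling step of a particle filter on a toy weight vector; it is the generic resampling scheme, not the W3RA/GRACE assimilation itself.

```python
import numpy as np

def systematic_resample(weights, rng=np.random.default_rng()):
    """Return particle indices drawn by systematic resampling."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n      # one random offset
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                               # guard against round-off
    return np.searchsorted(cumulative, positions)

# Example: four particles whose normalized weights favour particle 2.
w = np.array([0.1, 0.2, 0.6, 0.1])
idx = systematic_resample(w)
print("resampled indices:", idx)       # particle 2 is typically duplicated
```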
Deterministic seismic hazard macrozonation of India
NASA Astrophysics Data System (ADS)
Kolathayar, Sreevalsa; Sitharam, T. G.; Vipin, K. S.
2012-10-01
Earthquakes are known to have occurred in the Indian subcontinent from ancient times. This paper presents the results of a seismic hazard analysis of India (6°-38°N and 68°-98°E) based on the deterministic approach using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well recognized attenuation relations considering the varied tectonic provinces in the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and the shear zones which are associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grids of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each of these grid cells by considering all the seismic sources within a radius of 300 to 400 km. Rock level peak horizontal acceleration (PHA) and spectral accelerations for periods of 0.1 and 1 s have been calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in the hazard definition has been tackled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. The hazard evaluation has also been done without the logic tree approach for comparison of the results. Contour maps showing the spatial variation of hazard values are presented in the paper.
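The grid-based deterministic step can be sketched as follows: for every cell, take the maximum ground motion over the sources within the cut-off distance using an attenuation relation. The relation and coefficients below are generic placeholders, not any of the 12 relations used in the study, and the source list is invented.

```python
import numpy as np

# Hypothetical point sources: (lat, lon, max magnitude). Placeholder values.
sources = np.array([[28.5, 77.2, 6.5],
                    [34.0, 74.8, 7.8],
                    [26.9, 92.6, 8.1]])

def attenuation(m, r_km):
    """Generic placeholder attenuation: ln(PHA[g]) = a + b*M - c*ln(R + d)."""
    a, b, c, d = -3.5, 0.9, 1.2, 10.0
    return np.exp(a + b * m - c * np.log(r_km + d))

def pha_at(lat, lon, max_dist_km=300.0):
    """Deterministic PHA: maximum over all sources within the cut-off distance."""
    dist = np.hypot(lat - sources[:, 0], lon - sources[:, 1]) * 111.0  # rough km
    ok = dist <= max_dist_km
    return attenuation(sources[ok, 2], dist[ok]).max() if ok.any() else 0.0

# Evaluate on a coarse 1 degree grid (the study used 0.1 degree cells).
lats, lons = np.arange(8, 38, 1.0), np.arange(68, 98, 1.0)
grid = np.array([[pha_at(la, lo) for lo in lons] for la in lats])
print("max PHA on grid = %.3f g" % grid.max())
```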
NASA Astrophysics Data System (ADS)
Liechti, K.; Panziera, L.; Germann, U.; Zappa, M.
2013-10-01
This study explores the limits of radar-based forecasting for hydrological runoff prediction. Two novel radar-based ensemble forecasting chains for flash-flood early warning are investigated in three catchments in the southern Swiss Alps and set in relation to deterministic discharge forecasts for the same catchments. The first radar-based ensemble forecasting chain is driven by NORA (Nowcasting of Orographic Rainfall by means of Analogues), an analogue-based heuristic nowcasting system to predict orographic rainfall for the following eight hours. The second ensemble forecasting system evaluated is REAL-C2, where the numerical weather prediction COSMO-2 is initialised with 25 different initial conditions derived from a four-day nowcast with the radar ensemble REAL. Additionally, three deterministic forecasting chains were analysed. The performance of these five flash-flood forecasting systems was analysed for 1389 h between June 2007 and December 2010 for which NORA forecasts were issued, due to the presence of orographic forcing. A clear preference was found for the ensemble approach. Discharge forecasts perform better when forced by NORA and REAL-C2 rather than by deterministic weather radar data. Moreover, it was observed that using an ensemble of initial conditions at the forecast initialisation, as in REAL-C2, significantly improved the forecast skill. These forecasts also perform better than forecasts forced by ensemble rainfall forecasts (NORA) initialised from a single initial condition of the hydrological model. Thus the best results were obtained with the REAL-C2 forecasting chain. However, for regions where REAL cannot be produced, NORA might be an option for forecasting events triggered by orographic precipitation.
NASA Astrophysics Data System (ADS)
Addor, N.; Jaun, S.; Fundel, F.; Zappa, M.
2012-04-01
The Sihl River flows through Zurich, Switzerland's most populated city, for which it represents the largest flood threat. To anticipate extreme discharge events and provide decision support in case of flood risk, a hydrometeorological ensemble prediction system (HEPS) was launched operationally in 2008. This model chain relies on deterministic (COSMO-7) and probabilistic (COSMO-LEPS) atmospheric forecasts, which are used to force a semi-distributed hydrological model (PREVAH) coupled to a hydraulic model (FLORIS). The resulting hydrological forecasts are eventually communicated to the stakeholders involved in the Sihl discharge management. This fully operational setting provides a real framework with which we assessed the potential of deterministic and probabilistic discharge forecasts for flood mitigation. To study the suitability of HEPS for small-scale basins and to quantify the added value conveyed by the probability information, a 31-month reforecast was produced for the Sihl catchment (336 km2). Several metrics support the conclusion that the performance gain is of up to 2 days lead time for the catchment considered. Brier skill scores show that probabilistic hydrological forecasts outperform their deterministic counterparts for all the lead times and event intensities considered. The small size of the Sihl catchment does not prevent skillful discharge forecasts, but makes them particularly dependent on correct precipitation forecasts. Our evaluation stresses that the capacity of the model to provide confident and reliable mid-term probability forecasts for high discharges is limited. We finally highlight challenges for making decisions on the basis of hydrological predictions, and discuss the need for a tool to be used in addition to forecasts to compare the different mitigation actions possible in the Sihl catchment.
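For reference, the Brier skill score used above can be computed as in the sketch below; the outcomes and forecast probabilities are synthetic, and the deterministic forecast is treated as hard 0/1 probabilities.

```python
import numpy as np

def brier_score(p, o):
    """Mean squared error of probability forecasts p against 0/1 outcomes o."""
    return np.mean((np.asarray(p) - np.asarray(o)) ** 2)

def brier_skill_score(p, p_ref, o):
    """BSS > 0 means the forecast beats the reference."""
    return 1.0 - brier_score(p, o) / brier_score(p_ref, o)

# Synthetic example: observed events, an ensemble-derived probability, and a
# deterministic forecast expressed as hard 0/1 probabilities.
o     = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
p_ens = [0.8, 0.2, 0.1, 0.7, 0.6, 0.3, 0.2, 0.1, 0.9, 0.4]
p_det = [1,   0,   0,   0,   1,   1,   0,   0,   1,   0  ]
print("BSS =", brier_skill_score(p_ens, p_det, o))
```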
Li, Yan
2017-05-25
The efficiency evaluation of an integrated energy system involves many influencing factors whose attribute values are heterogeneous and non-deterministic; specific numerical values or accurate probability distributions usually cannot be given, which biases the final evaluation result. According to the characteristics of the integrated energy system, a hybrid multi-attribute decision-making model is constructed. The evaluation model considers the decision maker's risk preference. In the evaluation of the efficiency of the integrated energy system, the evaluation value of some indexes is a linguistic value, or the evaluation values given by the experts are not consistent. These factors lead to ambiguity in the decision information, usually expressed as uncertain linguistic values and numerical interval values. In this paper, the risk preference of the decision maker is considered when constructing the evaluation model. An interval-valued multiple-attribute decision-making method and a fuzzy linguistic multiple-attribute decision-making model are proposed. Finally, the mathematical model for efficiency evaluation of the integrated energy system is constructed.
Are seismic hazard assessment errors and earthquake surprises unavoidable?
NASA Astrophysics Data System (ADS)
Kossobokov, Vladimir
2013-04-01
Why do earthquake occurrences bring us so many surprises? The answer seems evident if we review the relationships that are commonly used to assess seismic hazard. The time-span of physically reliable Seismic History is yet a small portion of a rupture recurrence cycle at an earthquake-prone site, which makes premature any kind of reliable probabilistic statements about narrowly localized seismic hazard. Moreover, seismic evidence accumulated to date demonstrates clearly that most of the empirical relations commonly accepted in the early history of instrumental seismology can be proved erroneous when testing for statistical significance is applied. Seismic events, including mega-earthquakes, cluster, displaying behaviors that are far from independent or periodic. Their distribution in space is possibly fractal and definitely far from uniform, even in a single segment of a fault zone. Such a situation contradicts the generally accepted assumptions used for analytically tractable or computer simulations and complicates the design of reliable methodologies for realistic earthquake hazard assessment, as well as the search for and definition of precursory behaviors to be used for forecast/prediction purposes. As a result, the conclusions drawn from such simulations and analyses can MISLEAD TO SCIENTIFICALLY GROUNDLESS APPLICATION, which is unwise and extremely dangerous in assessing expected societal risks and losses. For example, a systematic comparison of the GSHAP peak ground acceleration estimates with those related to actual strong earthquakes, unfortunately, discloses gross inadequacy of this "probabilistic" product, which appears UNACCEPTABLE FOR ANY KIND OF RESPONSIBLE SEISMIC RISK EVALUATION AND KNOWLEDGEABLE DISASTER PREVENTION. The self-evident shortcomings and failures of GSHAP appeal to all earthquake scientists and engineers for an urgent revision of the global seismic hazard maps from first principles, including the background methodologies involved, such that there becomes: (a) a demonstrated and sufficient justification of hazard assessment protocols; (b) a more complete learning of the actual range of earthquake hazards to local communities and populations, and (c) a more ethically responsible control over how seismic hazard and seismic risk is implemented to protect public safety. It follows that the international project GEM is on the wrong track if it continues to base seismic risk estimates on the standard method to assess seismic hazard. The situation is not hopeless and could be improved dramatically due to available geological, geomorphologic, seismic, and tectonic evidence and data combined with deterministic pattern recognition methodologies, specifically when intending to PREDICT PREDICTABLE, but not the exact size, site, date, and probability of a target event. Understanding the complexity of the non-linear dynamics of hierarchically organized systems of blocks-and-faults has already led to methodologies of neo-deterministic seismic hazard analysis and intermediate-term middle- to narrow-range earthquake prediction algorithms tested in real-time applications over the last decades. It proves that Contemporary Science can do a better job in disclosing Natural Hazards, assessing Risks, and delivering such information in advance of extreme catastrophes, which are LOW PROBABILITY EVENTS THAT HAPPEN WITH CERTAINTY. Geoscientists must initiate shifting the minds of the community from pessimistic disbelief to the optimistic challenge of neo-deterministic Hazard Predictability.
NASA Technical Reports Server (NTRS)
Gross, Bernard
1996-01-01
Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.
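A minimal sketch of such a least-squares fit is given below. It is not the NASA routine: the strength data are synthetic, the median-rank plotting positions are an assumed convention, and the threshold parameter is found by a simple scan rather than the report's procedure.

```python
# Minimal sketch (assumed data and plotting positions), not the original NASA routine:
# least-squares fit of a three-parameter Weibull model
#   F(s) = 1 - exp(-((s - s_u)/s_0)**m),  s > s_u
# by linearizing ln(-ln(1-F)) = m*ln(s - s_u) - m*ln(s_0)
# and scanning the threshold s_u for the best linear fit.
import numpy as np

def fit_weibull_3p(strengths, n_thresholds=200):
    s = np.sort(np.asarray(strengths, dtype=float))
    n = s.size
    # median-rank plotting positions for the empirical CDF (an assumption)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    y = np.log(-np.log(1.0 - F))

    best = None
    for s_u in np.linspace(0.0, 0.99 * s[0], n_thresholds):
        x = np.log(s - s_u)
        A = np.vstack([x, np.ones_like(x)]).T          # ordinary least squares y = m*x + b
        (m, b), res, *_ = np.linalg.lstsq(A, y, rcond=None)
        ssr = float(res[0]) if res.size else float(np.sum((A @ [m, b] - y) ** 2))
        if best is None or ssr < best[0]:
            best = (ssr, s_u, m, np.exp(-b / m))
    _, s_u, m, s_0 = best
    return m, s_0, s_u                                  # Weibull modulus, scale, threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = 120.0 + 80.0 * rng.weibull(8.0, size=30)     # synthetic strengths (MPa)
    print(fit_weibull_3p(data))
```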
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buechler, Elizabeth D.; Pallin, Simon B.; Boudreaux, Philip R.
The indoor air temperature and relative humidity in residential buildings significantly affect material moisture durability, HVAC system performance, and occupant comfort. Therefore, indoor climate data is generally required to define boundary conditions in numerical models that evaluate envelope durability and equipment performance. However, indoor climate data obtained from field studies is influenced by weather, occupant behavior and internal loads, and is generally unrepresentative of the residential building stock. Likewise, whole-building simulation models typically neglect stochastic variables and yield deterministic results that are applicable to only a single home in a specific climate. The
NASA Technical Reports Server (NTRS)
Bennett, A.
1973-01-01
A guidance algorithm that provides precise rendezvous in the deterministic case while requiring only relative state information is developed. A navigation scheme employing only onboard relative measurements is built around a Kalman filter set in measurement coordinates. The overall guidance and navigation procedure is evaluated in the face of measurement errors by a detailed numerical simulation. Results indicate that onboard guidance and navigation for the terminal phase of rendezvous is possible with reasonable limits on measurement errors.
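The navigation idea, a Kalman filter driven only by relative measurements, can be illustrated with a toy one-dimensional sketch. The dynamics, noise levels, and variable names below are illustrative assumptions, not the 1973 implementation:

```python
# Toy 1-D Kalman filter for relative range/range-rate estimation during rendezvous.
# All numbers (dynamics, noise levels) are illustrative assumptions.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity relative dynamics
H = np.array([[1.0, 0.0]])                     # only relative range is measured
Q = 1e-3 * np.eye(2)                           # process noise covariance
R = np.array([[25.0]])                         # measurement noise covariance (m^2)

x = np.array([1000.0, -2.0])                   # initial relative state estimate [m, m/s]
P = np.diag([100.0, 1.0])                      # initial estimate covariance

rng = np.random.default_rng(0)
true_x = np.array([1020.0, -2.5])
for k in range(50):
    # truth propagation and a noisy relative-range measurement
    true_x = F @ true_x
    z = H @ true_x + rng.normal(0.0, 5.0, size=1)

    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated relative range, range-rate:", x)
```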
Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara
2016-01-01
Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother’s old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother’s old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington’s genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation. PMID:26761487
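A toy sketch of the partitioning comparison is given below. It is not the paper's model (no damage accumulation, growth, or anchoring); it only contrasts how a deterministic asymmetric split and a symmetric but stochastic split of a mother's damage units spread damage, and hence fitness, across daughters. The asymmetry fraction and unit counts are arbitrary.

```python
# Toy illustration (not the paper's model): partition a mother's damage d between two
# daughters either deterministically asymmetrically (fractions a and 1-a) or symmetrically
# but stochastically (each unit assigned at random), and compare the spread of daughter damage.
import numpy as np

rng = np.random.default_rng(9)
d_units, n_mothers = 100, 50_000       # damage units per mother (assumed)

# deterministic asymmetry: the old daughter always receives fraction a of the damage
a = 0.65
det_old = np.full(n_mothers, a * d_units)
det_new = d_units - det_old

# symmetric but stochastic: each damage unit goes to either daughter with probability 1/2
sto_old = rng.binomial(d_units, 0.5, n_mothers).astype(float)
sto_new = d_units - sto_old

for name, old, new in (("deterministic asym.", det_old, det_new),
                       ("stochastic sym.   ", sto_old, sto_new)):
    daughters = np.concatenate([old, new])
    print(f"{name} mean damage = {daughters.mean():6.1f}  variance = {daughters.var():7.1f}")
```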
Impact of refining the assessment of dietary exposure to cadmium in the European adult population.
Ferrari, Pietro; Arcella, Davide; Heraud, Fanny; Cappé, Stefano; Fabiansson, Stefan
2013-01-01
Exposure assessment constitutes an important step in any risk assessment of potentially harmful substances present in food. The European Food Safety Authority (EFSA) first assessed dietary exposure to cadmium in Europe using a deterministic framework, resulting in mean values of exposure in the range of health-based guidance values. Since then, the characterisation of foods has been refined to better match occurrence and consumption data, and a new strategy to handle left-censoring in occurrence data was devised. A probabilistic assessment was performed and compared with deterministic estimates, using occurrence values at the European level and consumption data from 14 national dietary surveys. Mean estimates in the probabilistic assessment ranged from 1.38 (95% CI = 1.35-1.44) to 2.08 (1.99-2.23) µg kg⁻¹ bodyweight (bw) week⁻¹ across the different surveys, which were less than 10% lower than deterministic (middle bound) mean values that ranged from 1.50 to 2.20 µg kg⁻¹ bw week⁻¹. Probabilistic 95th percentile estimates of dietary exposure ranged from 2.65 (2.57-2.72) to 4.99 (4.62-5.38) µg kg⁻¹ bw week⁻¹, which were, with the exception of one survey, between 3% and 17% higher than middle-bound deterministic estimates. Overall, the proportion of subjects exceeding the tolerable weekly intake of 2.5 µg kg⁻¹ bw ranged from 14.8% (13.6-16.0%) to 31.2% (29.7-32.5%) according to the probabilistic assessment. The results of this work indicate that mean values of dietary exposure to cadmium in the European population were of similar magnitude using deterministic or probabilistic assessments. For higher exposure levels, probabilistic estimates were almost consistently larger than deterministic counterparts, thus reflecting the impact of using the full distribution of occurrence values to determine exposure levels. It is considered prudent to use probabilistic methodology should exposure estimates be close to or exceeding health-based guidance values.
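The contrast between deterministic and probabilistic exposure estimates can be illustrated with an entirely synthetic Monte Carlo sketch. The food groups, distributions, and the tolerable weekly intake cut-off below are assumptions for illustration, not the EFSA occurrence or consumption data:

```python
# Illustrative (entirely synthetic) comparison of deterministic vs. probabilistic dietary
# exposure: exposure = sum over foods of consumption * occurrence / bodyweight.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                                              # simulated individuals
bw = rng.normal(70.0, 12.0, n).clip(40, 120)             # body weight (kg), assumed
foods = {
    # weekly consumption (kg): lognormal (mu, sigma); occurrence (ug/kg): lognormal (mu, sigma)
    "cereals":    ((-0.3, 0.4), (3.0, 0.6)),
    "vegetables": ((0.2, 0.5),  (2.5, 0.7)),
    "offal":      ((-3.0, 1.0), (5.0, 0.8)),
}

exposure = np.zeros(n)                                   # ug per kg bw per week
det_exposure = 0.0                                       # deterministic: means only
for (c_mu, c_sd), (o_mu, o_sd) in foods.values():
    cons = rng.lognormal(c_mu, c_sd, n)
    occ = rng.lognormal(o_mu, o_sd, n)
    exposure += cons * occ / bw
    det_exposure += np.exp(c_mu + c_sd**2 / 2) * np.exp(o_mu + o_sd**2 / 2) / 70.0

print(f"deterministic mean estimate : {det_exposure:6.2f} ug/kg bw/week")
print(f"probabilistic mean          : {exposure.mean():6.2f}")
print(f"probabilistic 95th pct      : {np.percentile(exposure, 95):6.2f}")
print(f"share above assumed TWI 2.5 : {np.mean(exposure > 2.5):6.1%}")
```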
Martinez, Alexander S.; Faist, Akasha M.
2016-01-01
Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. Community structure of both communities were assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (Beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod communities and vegetation and soil properties, but no significant association among belowground arthropod communities and environmental factors. Discussion Our results suggest context-dependent influences of stochastic and deterministic community assembly processes across different fractions of a spatially co-occurring ground-dwelling arthropod community following disturbance. This variation in assembly may be linked to contrasting ecological strategies and dispersal rates within above- and below-ground communities. Our findings add to a growing body of evidence indicating concurrent influences of stochastic and deterministic processes in community assembly, and highlight the need to consider potential variation across different fractions of biotic communities when testing community ecology theory and considering conservation strategies. PMID:27761333
Macroinvertebrate community assembly in pools created during peatland restoration.
Brown, Lee E; Ramchunder, Sorain J; Beadle, Jeannie M; Holden, Joseph
2016-11-01
Many degraded ecosystems are subject to restoration attempts, providing new opportunities to unravel the processes of ecological community assembly. Restoration of previously drained northern peatlands, primarily to promote peat and carbon accumulation, has created hundreds of thousands of new open water pools. We assessed the potential benefits of this wetland restoration for aquatic biodiversity, and how communities reassemble, by comparing pool ecosystems in regions of the UK Pennines on intact (never drained) versus restored (blocked drainage-ditches) peatland. We also evaluated the conceptual idea that comparing reference ecosystems in terms of their compositional similarity to null assemblages (and thus the relative importance of stochastic versus deterministic assembly) can guide evaluations of restoration success better than analyses of community composition or diversity. Community composition data highlighted some differences in the macroinvertebrate composition of restored pools compared to undisturbed peatland pools, which could be used to suggest that alternative end-points to restoration were influenced by stochastic processes. However, widely used diversity metrics indicated no differences between undisturbed and restored pools. Novel evaluations of restoration using null models confirmed the similarity of deterministic assembly processes from the national species pool across all pools. Stochastic elements were important drivers of between-pool differences at the regional-scale but the scale of these effects was also similar across most of the pools studied. The amalgamation of assembly theory into ecosystem restoration monitoring allows us to conclude with more certainty that restoration has been successful from an ecological perspective in these systems. Evaluation of these UK findings compared to those from peatlands across Europe and North America further suggests that restoring peatland pools delivers significant benefits for aquatic fauna by providing extensive new habitat that is largely equivalent to natural pools. More generally, we suggest that assembly theory could provide new benchmarks for planning and evaluating ecological restoration success. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Sequence Learning Under Uncertainty in Children: Self-Reflection vs. Self-Assertion
Lange-Küttner, Christiane; Averbeck, Bruno B.; Hirsch, Silvia V.; Wießner, Isabel; Lamba, Nishtha
2012-01-01
We know that stochastic feedback impairs children’s associative stimulus–response (S–R) learning (Crone et al., 2004a; Eppinger et al., 2009), but the impact of stochastic feedback on sequence learning that involves deductive reasoning has not been tested so far. In the current study, 8- to 11-year-old children (N = 171) learned a sequence of four left and right button presses, LLRR, RRLL, LRLR, RLRL, LRRL, and RLLR, which needed to be deduced from feedback because no directional cues were given. One group of children experienced consistent feedback only (deterministic feedback, 100% correct). In this condition, green feedback on the screen indicated that the children had been right when they were right, and red feedback indicated that the children had been wrong when they were wrong. Another group of children experienced inconsistent feedback (stochastic feedback, 85% correct, 15% false), where in some trials, green feedback on the screen could signal that children were right when in fact they were wrong, and red feedback could indicate that they were wrong when in fact they had been right. Independently of age, children’s sequence learning in the stochastic condition was initially much lower than in the deterministic condition, but increased gradually and improved with practice. Responses toward positive vs. negative feedback varied with age. Children were increasingly able to understand that they could have been wrong when feedback indicated they were right (self-reflection), but they remained unable to understand that they could have been right when feedback indicated they were wrong (self-assertion). PMID:22563324
Rezaeian, Sanaz; Hartzell, Stephen; Sun, Xiaodan; Mendoza, Carlos
2017-01-01
Earthquake ground‐motion recordings are scarce in the central and eastern United States (CEUS) for large‐magnitude events and at close distances. We use two different simulation approaches, a deterministic physics‐based method and a site‐based stochastic method, to simulate ground motions over a wide range of magnitudes. Drawing on previous results for the modeling of recordings from the 2011 Mw 5.8 Mineral, Virginia, earthquake and using the 2001 Mw 7.6 Bhuj, India, earthquake as a tectonic analog for a large magnitude CEUS event, we are able to calibrate the two simulation methods over this magnitude range. Both models show a good fit to the Mineral and Bhuj observations from 0.1 to 10 Hz. Model parameters are then adjusted to obtain simulations for Mw 6.5, 7.0, and 7.6 events in the CEUS. Our simulations are compared with the 2014 U.S. Geological Survey weighted combination of existing ground‐motion prediction equations in the CEUS. The physics‐based simulations show comparable response spectral amplitudes and a fairly similar attenuation with distance. The site‐based stochastic simulations suggest a slightly faster attenuation of the response spectral amplitudes with distance for larger magnitude events and, as a result, slightly lower amplitudes at distances greater than 200 km. Both models are plausible alternatives and, given the few available data points in the CEUS, can be used to represent the epistemic uncertainty in modeling of postulated CEUS large‐magnitude events.
Aspen succession in the Intermountain West: A deterministic model
Dale L. Bartos; Frederick R. Ward; George S. Innis
1983-01-01
A deterministic model of succession in aspen forests was developed using existing data and intuition. The degree of uncertainty, which was determined by allowing the parameter values to vary at random within limits, was larger than desired. This report presents results of an analysis of model sensitivity to changes in parameter values. These results have indicated...
Using stochastic models to incorporate spatial and temporal variability [Exercise 14
Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke
2003-01-01
To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...
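A minimal sketch of the distinction is given below (invented parameters, not the orchid data): the same growth model is projected once with a fixed deterministic growth rate and once with the growth rate drawn at random each year, which yields a distribution of outcomes and a quasi-extinction probability instead of a single trajectory.

```python
# Minimal sketch (invented parameters, not the exercise data): deterministic geometric
# growth vs. the same model with environmental stochasticity in the annual growth rate.
import numpy as np

years, reps = 50, 1000
lam_mean, lam_sd = 1.02, 0.15           # mean and SD of annual growth rate (assumed)
N0 = 50.0

# deterministic projection
N_det = N0 * lam_mean ** years

# stochastic projections: growth rate drawn independently each year
rng = np.random.default_rng(0)
N = np.full(reps, N0)
extinct = np.zeros(reps, dtype=bool)
for t in range(years):
    lam = rng.normal(lam_mean, lam_sd, reps).clip(0.0, None)
    N = N * lam
    extinct |= N < 2.0                  # quasi-extinction threshold (assumed)

surviving = N[~extinct]
print(f"deterministic N after {years} yr : {N_det:.0f}")
print(f"stochastic median N (survivors)  : {np.median(surviving) if surviving.size else 0:.0f}")
print(f"quasi-extinction probability     : {extinct.mean():.2f}")
```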
Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System
ERIC Educational Resources Information Center
Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.
2016-01-01
Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…
Deterministic switching of hierarchy during wrinkling in quasi-planar bilayers
Saha, Sourabh K.; Culpepper, Martin L.
2016-04-25
Emergence of hierarchy during compression of quasi-planar bilayers is preceded by a mode-locked state during which the quasi-planar form persists. Transition to hierarchy is determined entirely by geometrically observable parameters. This results in a universal transition phase diagram that enables one to deterministically tune hierarchy even with limited knowledge about material properties.
Stochastic and deterministic models for agricultural production networks.
Bai, P; Banks, H T; Dediu, S; Govan, A Y; Last, M; Lloyd, A L; Nguyen, H K; Olufsen, M S; Rempala, G; Slenning, B D
2007-07-01
An approach to modeling the impact of disturbances in an agricultural production network is presented. A stochastic model and its approximate deterministic model for averages over sample paths of the stochastic system are developed. Simulations, sensitivity and generalized sensitivity analyses are given. Finally, it is shown how diseases may be introduced into the network and corresponding simulations are discussed.
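The relationship between a stochastic model and its deterministic approximation for sample-path averages can be illustrated with a toy birth-death process. The rates, population size, and horizon below are invented and are not the production-network model of the paper:

```python
# Toy sketch: a stochastic linear birth-death process (Gillespie simulation) and its
# deterministic approximation for the sample-path average, dN/dt = (b - d) * N.
# All rates and sizes are invented; this is not the paper's production-network model.
import numpy as np

b, d, N0, T = 0.9, 1.0, 100, 5.0       # birth rate, death rate, initial size, horizon

def gillespie(rng):
    t, N = 0.0, N0
    while N > 0:
        tau = rng.exponential(1.0 / ((b + d) * N))
        if t + tau > T:
            break
        t += tau
        N += 1 if rng.random() < b / (b + d) else -1
    return N

rng = np.random.default_rng(3)
finals = np.array([gillespie(rng) for _ in range(300)])

N_det = N0 * np.exp((b - d) * T)       # deterministic average at time T
print(f"deterministic N(T)              = {N_det:.1f}")
print(f"mean of stochastic sample paths = {finals.mean():.1f} (sd {finals.std():.1f})")
```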
Probabilistic direct counterfactual quantum communication
NASA Astrophysics Data System (ADS)
Zhang, Sheng
2017-02-01
It is striking that the quantum Zeno effect can be used to launch direct counterfactual communication between two spatially separated parties, Alice and Bob. So far, existing protocols of this type only provide a deterministic counterfactual communication service. However, this counterfactuality comes at a price. First, the transmission time is much longer than that of a classical transmission. Second, the chained-cycle structure makes these protocols more sensitive to channel noise. Here, we extend the idea of counterfactual communication and present a probabilistic-counterfactual quantum communication protocol, which is shown to have advantages over the deterministic ones. Moreover, the presented protocol can evolve into a deterministic one solely by adjusting the parameters of the beam splitters. Project supported by the National Natural Science Foundation of China (Grant No. 61300203).
Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun
2017-11-10
In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numeric control machines is crucial in guaranteeing a high-convergence ratio for the optical surface error. It is necessary to consider the machine dynamics limitations in the numerical dwell time algorithms. In this paper, these constraints on dwell time distribution are analyzed, and a model of the equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
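The underlying dwell-time problem, removal expressed as a convolution of the tool influence function with dwell time subject to non-negative dwell, can be sketched with a generic constrained least-squares solve. This is not the paper's equal-extra-removal algorithm; the influence function, target profile, and use of scipy's NNLS solver are assumptions for illustration:

```python
# Minimal sketch of the underlying dwell-time problem (not the paper's specific
# equal-extra-removal algorithm): removal(x) = sum_j influence(x - x_j) * dwell_j,
# solved with a non-negativity constraint on dwell time.
import numpy as np
from scipy.optimize import nnls

x = np.linspace(0.0, 100.0, 201)                    # surface positions (mm), assumed
target = 0.5 + 0.3 * np.sin(2 * np.pi * x / 50.0)   # desired removal depth (um), assumed

# Gaussian tool influence function (removal per unit dwell time), assumed
sigma, peak = 5.0, 0.02
A = peak * np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)

dwell, residual_norm = nnls(A, target)              # dwell >= 0 enforced by NNLS
achieved = A @ dwell

print(f"total dwell time : {dwell.sum():.1f} s")
print(f"rms figure error : {np.std(target - achieved):.4f} um")
print(f"minimum dwell    : {dwell.min():.4f} s (non-negativity enforced by NNLS)")
```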
Di Maio, Francesco; Zio, Enrico; Smith, Curtis; ...
2015-07-06
The present special issue contains an overview of the research in the field of Integrated Deterministic and Probabilistic Safety Assessment (IDPSA) of Nuclear Power Plants (NPPs). Traditionally, safety regulation for NPP design and operation has been based on Deterministic Safety Assessment (DSA) methods to verify criteria that assure plant safety in a number of postulated Design Basis Accident (DBA) scenarios. Referring to such criteria, it is also possible to identify those plant Structures, Systems, and Components (SSCs) and activities that are most important for safety within those postulated scenarios. Then, the design, operation, and maintenance of these “safety-related” SSCs and activities are controlled through regulatory requirements and supported by Probabilistic Safety Assessment (PSA).
Salciarini, D.; Godt, J.W.; Savage, W.Z.; Conversini, P.; Baum, R.L.; Michael, J.A.
2006-01-01
We model the rainfall-induced initiation of shallow landslides over a broad region using a deterministic approach, the Transient Rainfall Infiltration and Grid-based Slope-stability (TRIGRS) model that couples an infinite-slope stability analysis with a one-dimensional analytical solution for transient pore pressure response to rainfall infiltration. This model permits the evaluation of regional shallow landslide susceptibility in a Geographic Information System framework, and we use it to analyze susceptibility to shallow landslides in an area in the eastern Umbria Region of central Italy. As shown on a landslide inventory map produced by the Italian National Research Council, the area has been affected in the past by shallow landslides, many of which have transformed into debris flows. Input data for the TRIGRS model include time-varying rainfall, topographic slope, colluvial thickness, initial water table depth, and material strength and hydraulic properties. Because of a paucity of input data, we focus on parametric analyses to calibrate and test the model and show the effect of variation in material properties and initial water table conditions on the distribution of simulated instability in the study area in response to realistic rainfall. Comparing the results with the shallow landslide inventory map, we find more than 80% agreement between predicted shallow landslide susceptibility and the inventory, despite the paucity of input data.
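The infinite-slope factor of safety commonly used in models of this kind can be written as a one-line function; the sketch below uses assumed strength, unit-weight, and pressure-head values (it is not the TRIGRS code) to show how rising pore pressure pushes the factor of safety below one.

```python
# Sketch of the infinite-slope factor of safety commonly used in models of this type
# (parameter values are invented; this is not the TRIGRS code):
#   FS = tan(phi)/tan(beta) + (c - psi*gamma_w*tan(phi)) / (gamma_s*z*sin(beta)*cos(beta))
import numpy as np

def factor_of_safety(beta_deg, z, c, phi_deg, psi, gamma_s=20.0e3, gamma_w=9.81e3):
    """beta: slope angle (deg), z: depth (m), c: cohesion (Pa), phi: friction angle (deg),
    psi: pressure head (m) at depth z; unit weights in N/m^3 (assumed values)."""
    beta, phi = np.radians(beta_deg), np.radians(phi_deg)
    return (np.tan(phi) / np.tan(beta)
            + (c - psi * gamma_w * np.tan(phi)) / (gamma_s * z * np.sin(beta) * np.cos(beta)))

# dry vs. wet initial condition at a 35-degree slope, 1.5 m of colluvium (assumed values)
print("FS dry :", round(factor_of_safety(35.0, 1.5, 4000.0, 32.0, psi=0.0), 2))
print("FS wet :", round(factor_of_safety(35.0, 1.5, 4000.0, 32.0, psi=1.0), 2))
```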
Measurement of the acoustic response of a wind instrument with application to bore reconstruction
NASA Astrophysics Data System (ADS)
van Walstijn, Maarten; Campbell, Murray
2002-11-01
Reconstruction of a bore from measured acoustic response data has been shown to be very useful in studying wind instruments. Such data may be obtained in different ways; directly measuring the frequency-domain response of an acoustic bore has some distinct advantages over directly measuring time-domain data (for example, by pulse reflectometry), but so far has been unsuitable for producing input data for deterministic bore reconstruction algorithms, due to the limited accuracy at high frequencies. In this paper a method is presented for large-bandwidth measurement of the input impedance of a wind instrument using a cylindrical measurement head with multiple wall-mounted microphones. The influence of the number of microphones and the types of calibration impedance on the accuracy will be discussed, and bore reconstructions derived using this technique will be compared with reconstructions obtained using pulse reflectometry. [Work supported by EPSRC.]
Application of the Probabilistic Dynamic Synthesis Method to the Analysis of a Realistic Structure
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a new technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. A previous work verified the feasibility of the PDS method on a simple seven degree-of-freedom spring-mass system. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
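The Monte Carlo step of such an analysis can be illustrated generically: sample the uncertain (here, modal) quantities and collect statistics of a derived response measure. The sketch below is not the PDS method itself and all numbers are assumed:

```python
# Illustrative Monte Carlo step only (not the PDS method itself): treat measured modal
# frequency and damping ratio as random variables and collect statistics of the peak
# dynamic amplification of a single mode.  All numbers are assumed.
import numpy as np

rng = np.random.default_rng(7)
n_samples = 20_000
fn = rng.normal(120.0, 4.0, n_samples)                         # natural frequency (Hz)
zeta = rng.normal(0.02, 0.004, n_samples).clip(0.005, None)    # damping ratio

# peak dynamic amplification of a single-DOF mode at resonance: Q = 1/(2*zeta)
Q = 1.0 / (2.0 * zeta)

print(f"mean peak amplification : {Q.mean():7.1f}")
print(f"99th percentile         : {np.percentile(Q, 99):7.1f}")
print(f"P(fn < 110 Hz)          : {np.mean(fn < 110.0):7.3f}")
```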
Multiple-Input Multiple-Output (MIMO) Linear Systems Extreme Inputs/Outputs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smallwood, David O.
2007-01-01
A linear structure is excited at multiple points with a stationary normal random process. The response of the structure is measured at multiple outputs. If the autospectral densities of the inputs are specified, the phase relationships between the inputs are derived that will minimize or maximize the trace of the autospectral density matrix of the outputs. If the autospectral densities of the outputs are specified, the phase relationships between the outputs that will minimize or maximize the trace of the input autospectral density matrix are derived. It is shown that other phase relationships and ordinary coherence less than one will result in a trace intermediate between these extremes. Least favorable response and some classes of critical response are special cases of the development. It is shown that the derivation for stationary random waveforms can also be applied to nonstationary random, transients, and deterministic waveforms.
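As a numerical illustration of the effect described above, the sketch below uses the standard MIMO random-vibration relation Gyy = H Gxx H* (an assumption here, not quoted from the report) with an invented 2x2 frequency response matrix: holding the two input autospectral densities fixed and sweeping the relative phase of fully coherent inputs changes the trace of the output spectral density matrix between a minimum and a maximum, while incoherent inputs give an intermediate trace.

```python
# Numerical illustration (invented FRF and spectra): with the input autospectra fixed,
# the relative phase between two fully coherent inputs changes the trace of the output
# spectral density matrix  Gyy = H Gxx H^H.
import numpy as np

H = np.array([[1.0 + 0.5j, 0.8 - 0.2j],       # assumed 2x2 frequency response matrix
              [0.3 + 0.1j, 1.2 + 0.4j]])
g11, g22 = 1.0, 2.0                            # specified input autospectral densities

traces = []
for theta in np.linspace(0.0, 2.0 * np.pi, 361):
    g12 = np.sqrt(g11 * g22) * np.exp(1j * theta)      # fully coherent, relative phase theta
    Gxx = np.array([[g11, g12], [np.conj(g12), g22]])
    Gyy = H @ Gxx @ H.conj().T
    traces.append(np.real(np.trace(Gyy)))

traces = np.array(traces)
print(f"min trace = {traces.min():.3f}, max trace = {traces.max():.3f}")
print(f"trace with incoherent inputs = "
      f"{np.real(np.trace(H @ np.diag([g11, g22]) @ H.conj().T)):.3f}")
```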
Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
Review of probabilistic analysis of dynamic response of systems with random parameters
NASA Technical Reports Server (NTRS)
Kozin, F.; Klosner, J. M.
1989-01-01
The various methods that have been studied in the past to allow probabilistic analysis of dynamic response for systems with random parameters are reviewed. Dynamic response could be obtained deterministically if the variations about the nominal values were small; however, for space structures which require precise pointing, the variations about the nominal values of the structural details and of the environmental conditions are too large to be considered negligible. These uncertainties are accounted for in terms of probability distributions about their nominal values. The quantities of concern for describing the response of the structure include displacements, velocities, and the distributions of natural frequencies. The exact statistical characterization of the response would yield joint probability distributions for the response variables. Since the random quantities will appear as coefficients, determining the exact distributions will be difficult at best. Thus, certain approximations will have to be made. A number of available techniques are discussed, even for the nonlinear case. The methods described are: (1) Liouville's equation; (2) perturbation methods; (3) mean square approximate systems; and (4) nonlinear systems with approximation by linear systems.
Iannazzo, Sergio; Colombatto, Piero; Ricco, Gabriele; Oliveri, Filippo; Bonino, Ferruccio; Brunetto, Maurizia R
2015-03-01
Rapid virologic response is the best predictor of sustained virologic response with dual therapy in genotype-1 chronic hepatitis C, and its evaluation was proposed to tailor triple therapy in F0-F2 patients. Bio-mathematical modelling of viral dynamics during dual therapy has potentially higher accuracy than rapid virologic response in identifying patients who will eventually achieve sustained response. The study's objective was a cost-effectiveness analysis of a personalized therapy in naïve F0-F2 patients with chronic hepatitis C based on a bio-mathematical model (model-guided strategy) rather than on rapid virologic response (guideline-guided strategy). A deterministic bio-mathematical model of the infected cell dynamics was validated in a cohort of 135 patients treated with dual therapy. A decision-analytic economic model was then developed to compare model-guided and guideline-guided strategies in the Italian setting. The outcomes of the cost-effectiveness analysis with model-guided and guideline-guided strategy were 19.1-19.4 and 18.9-19.3 quality-adjusted life-years. Total per-patient lifetime costs were €25,200-€26,000 with the model-guided strategy and €28,800-€29,900 with the guideline-guided strategy. When comparing the model-guided with the guideline-guided strategy, the former was more effective and less costly. The adoption of the bio-mathematical predictive criterion has the potential to improve the cost-effectiveness of a personalized therapy for chronic hepatitis C, reserving triple therapy for those patients who really need it. Copyright © 2014 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.
Shukla, Shraddhanand; Roberts, Jason B.; Hoell, Andrew; Funk, Chris; Robertson, Franklin R.; Kirtmann, Benjamin
2016-01-01
The skill of North American multimodel ensemble (NMME) seasonal forecasts in East Africa (EA), which encompasses one of the most food and water insecure areas of the world, is evaluated using deterministic, categorical, and probabilistic evaluation methods. The skill is estimated for all three primary growing seasons: March–May (MAM), July–September (JAS), and October–December (OND). It is found that the precipitation forecast skill in this region is generally limited and statistically significant over only a small part of the domain. In the case of MAM (JAS) [OND] season it exceeds the skill of climatological forecasts in parts of equatorial EA (Northern Ethiopia) [equatorial EA] for up to 2 (5) [5] months lead. Temperature forecast skill is generally much higher than precipitation forecast skill (in terms of deterministic and probabilistic skill scores) and statistically significant over a majority of the region. Over the region as a whole, temperature forecasts also exhibit greater reliability than the precipitation forecasts. The NMME ensemble forecasts are found to be more skillful and reliable than the forecast from any individual model. The results also demonstrate that for some seasons (e.g. JAS), the predictability of precipitation signals varies and is higher during certain climate events (e.g. ENSO). Finally, potential room for improvement in forecast skill is identified in some models by comparing homogeneous predictability in individual NMME models with their respective forecast skill.
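The probabilistic part of such an evaluation can be illustrated with a Brier skill score against climatology. The sketch below is generic and uses invented forecast probabilities and outcomes, not NMME data; the climatological tercile probability of 1/3 is the usual convention.

```python
# Illustrative Brier skill score for a probabilistic seasonal forecast of an
# above-normal precipitation tercile (all forecast/observation values are invented).
import numpy as np

rng = np.random.default_rng(11)
n_years = 30
p_forecast = rng.beta(2.0, 4.0, n_years)                      # forecast prob. of above-normal season
observed = (rng.random(n_years) < p_forecast).astype(float)   # synthetic outcomes

bs = np.mean((p_forecast - observed) ** 2)        # Brier score of the forecast
p_clim = 1.0 / 3.0                                # climatological tercile probability
bs_clim = np.mean((p_clim - observed) ** 2)       # Brier score of climatology

bss = 1.0 - bs / bs_clim                          # skill relative to climatology
print(f"Brier score = {bs:.3f}, climatology = {bs_clim:.3f}, BSS = {bss:.3f}")
```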
Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; ...
2015-02-18
Here we review developments in the understanding of cycle-to-cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short-term predictability. In some cases, models of this type appear to describe experimental observations very well, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes and thus mitigate them. Some recent studies have demonstrated that under some conditions, cyclic combustion variations can have a relatively high degree of low-dimensional deterministic structure, which implies some degree of predictability and potential for real-time control. These deterministic effects are typically more pronounced near critical stability limits (e.g., near tipping points associated with ignition or flame propagation), such as during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in the experimental and analytical characterization of cyclic variability where low-dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. Taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.
Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão
2015-03-17
Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.
Efficient Algorithms for Handling Nondeterministic Automata
NASA Astrophysics Data System (ADS)
Vojnar, Tomáš
Finite (word, tree, or omega) automata play an important role in different areas of computer science, including, for instance, formal verification. Often, deterministic automata are used, for which traditional algorithms for important operations such as minimisation and inclusion checking are available. However, the use of deterministic automata implies a need to determinise nondeterministic automata, which often arise during various computations even when the computations start with deterministic automata. Unfortunately, determinisation is a very expensive step, since deterministic automata may be exponentially bigger than the original nondeterministic automata. That is why it appears advantageous to avoid determinisation and work directly with nondeterministic automata. This, however, brings a need to be able to implement operations traditionally done on deterministic automata on nondeterministic automata instead. In particular, this is the case for inclusion checking and minimisation (or rather reduction of the size of automata). In the talk, we review several recently proposed techniques for inclusion checking on nondeterministic finite word and tree automata as well as Büchi automata. These techniques are based on using so-called antichains, possibly combined with the use of suitable simulation relations (and, in the case of Büchi automata, the so-called Ramsey-based or rank-based approaches). Further, we discuss techniques for reducing the size of nondeterministic word and tree automata using quotienting based on the recently proposed notion of mediated equivalences. The talk is based on several common works with Parosh Aziz Abdulla, Ahmed Bouajjani, Yu-Fang Chen, Peter Habermehl, Lisa Kaati, Richard Mayr, Tayssir Touili, Lorenzo Clemente, Lukáš Holík, and Chih-Duo Hong.
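To make the antichain idea concrete, here is a small, self-contained sketch of an antichain-style language-inclusion check for nondeterministic finite word automata. It is a simplified illustration of the general principle (prune search states whose set component is subsumed by a smaller explored set), not a reimplementation of any specific tool mentioned in the talk; the automata at the bottom are invented examples.

```python
# Sketch of an antichain-style inclusion check L(A) ⊆ L(B) for NFAs over a finite alphabet.
# An NFA is (states, alphabet, delta, init, accepting) with delta[(q, a)] = set of successors.
from collections import deque

def nfa_included(A, B):
    _, alphabet, dA, initA, accA = A
    _, _,        dB, initB, accB = B

    def postB(S, a):
        return frozenset(q2 for q in S for q2 in dB.get((q, a), ()))

    queue = deque((p, frozenset(initB)) for p in initA)
    antichain = set()                       # processed pairs, kept minimal in the B-set component
    while queue:
        p, S = queue.popleft()
        if any(p == p2 and S2 <= S for (p2, S2) in antichain):
            continue                        # subsumed: a smaller B-set was already explored
        antichain = {(p2, S2) for (p2, S2) in antichain if not (p2 == p and S <= S2)}
        antichain.add((p, S))
        if p in accA and not (S & accB):
            return False                    # found a word accepted by A but rejected by B
        for a in alphabet:
            T = postB(S, a)
            for p2 in dA.get((p, a), ()):
                queue.append((p2, T))
    return True

# tiny example (hypothetical automata): A accepts only "ab", B accepts any word ending in "b"
A = ({0, 1, 2}, {"a", "b"}, {(0, "a"): {1}, (1, "b"): {2}}, {0}, {2})
B = ({0, 1},    {"a", "b"}, {(0, "a"): {0}, (0, "b"): {0, 1}}, {0}, {1})
print(nfa_included(A, B))   # True: every word of A ends in "b"
print(nfa_included(B, A))   # False: B accepts "b", which A does not
```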
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graham, Emily B.; Crump, Alex R.; Resch, Charles T.
2017-03-28
Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.
Theory and applications of a deterministic approximation to the coalescent model
Jewett, Ethan M.; Rosenberg, Noah A.
2014-01-01
Under the coalescent model, the random number n_t of lineages ancestral to a sample is nearly deterministic as a function of time when n_t is moderate to large in value, and it is well approximated by its expectation E[n_t]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[n_t] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation n_t ≈ E[n_t] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[n_t] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation n_t ≈ E[n_t] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
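The approximation n_t ≈ E[n_t] can be made concrete with a small numerical sketch. The code below does not reproduce the paper's formulas; it uses one common deterministic approximation, the ODE dn/dt = -n(n-1)/2 in coalescent time units, and compares its solution with the mean of stochastic coalescent simulations. The sample size, horizon, and step size are arbitrary choices.

```python
# Sketch (not the paper's formulas): approximate the number of ancestral lineages n_t
# by the deterministic ODE dn/dt = -n(n-1)/2 (time in coalescent units) and compare
# the result with the mean of stochastic coalescent simulations.
import numpy as np

n0, t_max, dt = 50, 3.0, 1e-3
times = np.arange(0.0, t_max + dt, dt)

# deterministic approximation by Euler integration
n_det = np.empty_like(times)
n_det[0] = n0
for i in range(1, times.size):
    n_det[i] = n_det[i - 1] - dt * n_det[i - 1] * (n_det[i - 1] - 1.0) / 2.0

def lineages_at(query_times, rng):
    """Lineage count at each query time for one realization of the coalescent."""
    t, k, coal_times = 0.0, n0, []
    while k > 1:
        t += rng.exponential(2.0 / (k * (k - 1)))   # coalescence rate k(k-1)/2
        coal_times.append(t)
        k -= 1
    return n0 - np.searchsorted(np.array(coal_times), query_times, side="right")

rng = np.random.default_rng(5)
sims = np.array([lineages_at(times, rng) for _ in range(1000)])

print("  t    deterministic n_t   mean simulated n_t")
for t_check in (0.1, 0.5, 1.0, 2.0):
    i = int(round(t_check / dt))
    print(f"{t_check:4.1f} {n_det[i]:18.2f} {sims[:, i].mean():20.2f}")
```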
An evaluation of consensus techniques for diagnostic interpretation
NASA Astrophysics Data System (ADS)
Sauter, Jake N.; LaBarre, Victoria M.; Furst, Jacob D.; Raicu, Daniela S.
2018-02-01
Learning diagnostic labels from image content has been the standard in computer-aided diagnosis. Most computer-aided diagnosis systems use low-level image features extracted directly from image content to train and test machine learning classifiers for diagnostic label prediction. When the ground truth for the diagnostic labels is not available, reference truth is generated from the experts' diagnostic interpretations of the image/region of interest. More specifically, when the label is uncertain, e.g., when multiple experts label an image and their interpretations are different, techniques to handle the label variability are necessary. In this paper, we compare three consensus techniques that are typically used to encode the variability in the experts' labeling of the medical data: mean, median and mode, and their effects on simple classifiers that can handle deterministic labels (decision trees) and probabilistic vectors of labels (belief decision trees). Given that the NIH/NCI Lung Image Database Consortium (LIDC) data provides interpretations for lung nodules by up to four radiologists, we leverage the LIDC data to evaluate and compare these consensus approaches when creating computer-aided diagnosis systems for lung nodules. First, low-level image features of nodules are extracted and paired with their radiologists' semantic ratings (1 = most likely benign, ..., 5 = most likely malignant); second, machine learning multi-class classifiers that handle deterministic labels (decision trees) and probabilistic vectors of labels (belief decision trees) are built to predict the lung nodules' semantic ratings. We show that the mean-based consensus generates the most robust classifier overall when compared to the median- and mode-based consensus. Lastly, the results of this study show that, when building CAD systems with uncertain diagnostic interpretation, it is important to evaluate different strategies for encoding and predicting the diagnostic label.
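The three consensus rules themselves are simple to state in code. The following sketch applies mean, median, and mode consensus to a few invented sets of four radiologist ratings (it does not use LIDC data, and rounding the mean to the nearest class is an assumed convention):

```python
# Minimal sketch of the three consensus rules applied to multiple radiologists'
# malignancy ratings (1-5).  The ratings below are invented examples, not LIDC data.
import numpy as np

ratings = np.array([
    [2, 3, 3, 4],     # nodule 1: four radiologists' ratings
    [1, 1, 2, 5],     # nodule 2
    [4, 4, 5, 5],     # nodule 3
])

mean_label = np.rint(ratings.mean(axis=1)).astype(int)             # mean, rounded to a class
median_label = np.median(ratings, axis=1)                          # may be a half-integer
mode_label = np.array([np.bincount(r).argmax() for r in ratings])  # most frequent rating

for i, r in enumerate(ratings):
    print(f"nodule {i + 1}: ratings={r.tolist()}  mean={mean_label[i]}"
          f"  median={median_label[i]}  mode={mode_label[i]}")
```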
Bladder Cancer Treatment Response Assessment in CT using Radiomics with Deep-Learning.
Cha, Kenny H; Hadjiiski, Lubomir; Chan, Heang-Ping; Weizer, Alon Z; Alva, Ajjai; Cohan, Richard H; Caoili, Elaine M; Paramagul, Chintana; Samala, Ravi K
2017-08-18
Cross-sectional X-ray imaging has become the standard for staging most solid organ malignancies. However, for some malignancies such as urinary bladder cancer, the ability to accurately assess local extent of the disease and understand response to systemic chemotherapy is limited with current imaging approaches. In this study, we explored the feasibility that radiomics-based predictive models using pre- and post-treatment computed tomography (CT) images might be able to distinguish between bladder cancers with and without complete chemotherapy responses. We assessed three unique radiomics-based predictive models, each of which employed different fundamental design principles ranging from a pattern recognition method via deep-learning convolution neural network (DL-CNN), to a more deterministic radiomics feature-based approach and then a bridging method between the two, utilizing a system which extracts radiomics features from the image patterns. Our study indicates that the computerized assessment using radiomics information from the pre- and post-treatment CT of bladder cancer patients has the potential to assist in assessment of treatment response.
A stochastic model for correlated protein motions
NASA Astrophysics Data System (ADS)
Karain, Wael I.; Qaraeen, Nael I.; Ajarmah, Basem
2006-06-01
A one-dimensional Langevin-type stochastic difference equation is used to find the deterministic and Gaussian contributions of time series representing the projections of a Bovine Pancreatic Trypsin Inhibitor (BPTI) protein molecular dynamics simulation along different eigenvector directions determined using principal component analysis. The deterministic part shows a distinct nonlinear behavior only for eigenvectors contributing significantly to the collective protein motion.
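The drift/noise decomposition can be illustrated with a short sketch on synthetic data (not the BPTI projections): generate a one-dimensional Langevin-type difference equation with known drift and noise amplitude, then recover the deterministic part from the binned conditional mean of the increments and the Gaussian part from their conditional spread. All parameters are invented.

```python
# Sketch of the idea (synthetic data, not the BPTI trajectories): estimate the deterministic
# drift and the Gaussian noise amplitude of a one-dimensional Langevin-type difference
# equation  x[t+1] - x[t] = f(x[t])*dt + g(x[t])*sqrt(dt)*xi  from binned conditional moments.
import numpy as np

rng = np.random.default_rng(2)
dt, n = 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):                          # generate data from a known OU-like process
    x[t + 1] = x[t] - 2.0 * x[t] * dt + 0.5 * np.sqrt(dt) * rng.normal()

dx = x[1:] - x[:-1]
bins = np.linspace(x.min(), x.max(), 30)
idx = np.digitize(x[:-1], bins)

print(" x-bin centre    drift f(x)    noise g(x)")
for b in range(1, bins.size):
    mask = idx == b
    if mask.sum() < 200:
        continue
    centre = 0.5 * (bins[b - 1] + bins[b])
    drift = dx[mask].mean() / dt                # conditional first moment -> deterministic part
    noise = dx[mask].std() / np.sqrt(dt)        # conditional spread -> Gaussian part
    print(f"{centre:12.3f} {drift:13.3f} {noise:13.3f}")
```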
ERIC Educational Resources Information Center
Raveendran, Aswathy; Chunawala, Sugra
2015-01-01
Several educators have emphasized that students need to understand science as a human endeavor that is not value free. In the exploratory study reported here, we investigated how doctoral students of biology understand the intersection of values and science in the context of genetic determinism. Deterministic research claims have been critiqued…
The dual reading of general conditionals: The influence of abstract versus concrete contexts.
Wang, Moyun; Yao, Xinyun
2018-04-01
A central current issue concerning conditionals is whether the meaning of general conditionals (e.g., If a card is red, then it is round) is deterministic (exceptionless) or probabilistic (exception-tolerating). To resolve the issue, two experiments examined the influence of conditional contexts (with vs. without frequency information about truth-table cases) on the reading of general conditionals. Experiment 1 examined the direct reading of general conditionals in a possibility judgment task. Experiment 2 examined the indirect reading of general conditionals in a truth judgment task. Both the direct and the indirect reading of general conditionals exhibited the same duality: a predominant deterministic semantic reading of conditionals without frequency information, and a predominant probabilistic pragmatic reading of conditionals with frequency information. The context of general conditionals determined their predominant reading. There were marked individual differences in the reading of general conditionals with frequency information. The meaning of general conditionals is thus relative, depending on conditional context, and their reading is flexible and complex enough that no simple deterministic or probabilistic account can explain it. The present findings go beyond the extant deterministic and probabilistic accounts of conditionals.
Huttunen, K-L; Mykrä, H; Oksanen, J; Astorga, A; Paavola, R; Muotka, T
2017-05-03
One of the key challenges to understanding patterns of β diversity is to disentangle deterministic patterns from stochastic ones. Stochastic processes may mask the influence of deterministic factors on community dynamics, hindering identification of the mechanisms causing variation in community composition. We studied temporal β diversity (among-year dissimilarity) of macroinvertebrate communities in near-pristine boreal streams across 14 years. To assess whether the observed β diversity deviates from that expected by chance, and to identify processes (deterministic vs. stochastic) through which different explanatory factors affect community variability, we used a null model approach. We observed that at the majority of sites temporal β diversity was low indicating high community stability. When stochastic variation was unaccounted for, connectivity was the only variable explaining temporal β diversity, with weakly connected sites exhibiting higher community variability through time. After accounting for stochastic effects, connectivity lost importance, suggesting that it was related to temporal β diversity via random colonization processes. Instead, β diversity was best explained by in-stream vegetation, community variability decreasing with increasing bryophyte cover. These results highlight the potential of stochastic factors to dampen the influence of deterministic processes, affecting our ability to understand and predict changes in biological communities through time.
Characterizing Uncertainty and Variability in PBPK Models ...
Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological impro
Deterministic generation of remote entanglement with active quantum feedback
Martin, Leigh; Motzoi, Felix; Li, Hanhan; ...
2015-12-10
We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.
2004-01-01
We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identification of any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features and more accurate representation of geologic features as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.
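A minimal sketch of the kind of deterministic (known-wavelet) deconvolution described above, assuming the air-wave source wavelet has been recorded separately; the water-level stabilization constant, synthetic arrays and variable names below are illustrative, not taken from the study:

    import numpy as np

    def deterministic_deconvolution(trace, wavelet, water_level=1e-2):
        """Frequency-domain deconvolution of a GPR trace by a known source wavelet.

        trace, wavelet : 1-D arrays sampled at the same rate.
        water_level    : fraction of the wavelet's peak spectral power used to
                         stabilize the division where the wavelet spectrum is weak.
        """
        n = len(trace)
        T = np.fft.rfft(trace, n)
        W = np.fft.rfft(wavelet, n)
        power = np.abs(W) ** 2
        eps = water_level * power.max()
        # Stabilized spectral division: T * conj(W) / (|W|^2 + eps)
        R = T * np.conj(W) / (power + eps)
        return np.fft.irfft(R, n)

    # Illustrative use with synthetic data
    rng = np.random.default_rng(0)
    wavelet = np.exp(-np.linspace(-2, 2, 32) ** 2) * np.cos(np.linspace(0, 8 * np.pi, 32))
    reflectivity = np.zeros(512)
    reflectivity[[100, 180, 300]] = [1.0, -0.6, 0.4]
    trace = np.convolve(reflectivity, wavelet, mode="same") + 0.01 * rng.standard_normal(512)
    estimate = deterministic_deconvolution(trace, wavelet)

The water-level term is one common way to keep the spectral division stable in low dielectric-loss (broadband) conditions; other regularizations would serve the same purpose.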
Implementation speed of deterministic population passages compared to that of Rabi pulses
NASA Astrophysics Data System (ADS)
Chen, Jingwei; Wei, L. F.
2015-02-01
The fast Rabi π-pulse technique has been widely applied to coherent quantum manipulations, although it requires precise design of the pulse areas. To relax this requirement, various rapid adiabatic passage (RAP) approaches have been used instead to implement population passages deterministically. However, the usual RAP protocols cannot be implemented arbitrarily fast, because the relevant adiabatic condition must remain robustly satisfied throughout the passage. Here, we propose a modified shortcut-to-adiabaticity (STA) technique that significantly accelerates the desired deterministic population passages. This transitionless technique goes beyond the usual rotating wave approximation (RWA) adopted in recent STA protocols and can therefore deliver fast quantum evolutions in which the relevant counter-rotating effects cannot be neglected. The proposal is demonstrated for driven two- and three-level systems. Numerical results show that, with the present beyond-RWA STA technique, the usual Stark-chirped RAPs and stimulated Raman adiabatic passages can be significantly sped up; the deterministic population passages can be implemented as fast as the widely used Rabi π pulses while remaining insensitive to the applied pulse areas.
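For reference, the standard transitionless (counterdiabatic) driving term that STA constructions of this kind build on, written here in its generic textbook form rather than the specific beyond-RWA Hamiltonian of this proposal (the symbols |n(t)> and H_0(t) are notation introduced for illustration):

    H_{\mathrm{CD}}(t) = i\hbar \sum_{n} \Big( |\partial_t n(t)\rangle\langle n(t)|
                         - \langle n(t)|\partial_t n(t)\rangle\, |n(t)\rangle\langle n(t)| \Big)

where |n(t)> are the instantaneous eigenstates of a reference Hamiltonian H_0(t); adding H_CD suppresses diabatic transitions, so the passage can in principle be driven arbitrarily fast.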
NASA Astrophysics Data System (ADS)
Adams, Mike; Smalian, Silva
2017-09-01
For nuclear waste packages, the expected dose rates and nuclide inventory are calculated in advance. Depending on the packaging of the waste, deterministic programs such as MicroShield® provide a range of results for each type of package. Stochastic programs such as the Monte Carlo N-Particle Transport Code System (MCNP®), on the other hand, provide reliable results for complex geometries; however, this type of program requires a fully trained operator and the calculations are time consuming. The problem is therefore to choose an appropriate program for a specific geometry. We compared the results of deterministic programs such as MicroShield® with those of stochastic programs such as MCNP®. These comparisons enable us to make statements about the applicability of the various programs to chosen types of containers. We found that for thin-walled geometries, deterministic programs such as MicroShield® are well suited to calculating the dose rate, whereas for cylindrical containers with inner shielding, deterministic programs reach their limits. Furthermore, we are investigating the effect of inhomogeneous material and activity distributions on the results. The calculations are still ongoing, and results will be presented in the final abstract.
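A point-kernel relation of the kind deterministic shielding codes evaluate, shown here in generic form as an illustration (the symbols are mine; actual codes integrate this kernel over the source volume and energy spectrum):

    \dot{D}(r) \;\propto\; \frac{S\, e^{-\mu t}}{4\pi r^{2}}\; B(\mu t, E)

where S is the source strength, r the source-to-detector distance, \mu the linear attenuation coefficient of a shield of thickness t, and B the buildup factor accounting for scattered photons. Monte Carlo codes such as MCNP® instead transport individual particle histories through the full geometry, which is why they remain reliable when the shield configuration becomes complex.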
Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel
2013-06-01
Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms that find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted in which, by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
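A schematic of the underlying idea, as a hedged sketch only (the paper's ripple-spreading algorithm and its theoretical optimality conditions are not reproduced here): pool candidate solutions, for instance the k best solutions of each single-objective problem, and keep the nondominated ones.

    from typing import List, Tuple

    def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
        """True if objective vector a dominates b (minimization: a <= b everywhere, < somewhere)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(candidates: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
        """Nondominated subset of a pool of candidate objective vectors."""
        return [c for c in candidates
                if not any(dominates(other, c) for other in candidates if other != c)]

    # Illustrative pool, e.g. objective vectors of the k best routes per single objective
    pool = [(3.0, 9.0), (4.0, 7.0), (5.0, 8.0), (6.0, 4.0), (8.0, 3.0), (9.0, 5.0)]
    print(pareto_front(pool))   # [(3.0, 9.0), (4.0, 7.0), (6.0, 4.0), (8.0, 3.0)]

The completeness guarantee in the paper rests on choosing k large enough under the stated conditions; the dominance filter itself is the generic final step.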
Tillotson, Michael D; Quinn, Thomas P
2016-01-01
Detecting the biological impacts of climate change is a current focus of ecological research and has important applications in conservation and resource management. Owing to a lack of suitable control systems, measuring correlations between time series of biological attributes and hypothesized environmental covariates is a common method for detecting such impacts. These correlative approaches are particularly common in studies of exploited fish species because rich biological time-series data are often available. However, the utility of species-environment relationships for identifying or predicting biological responses to climate change has been questioned because strong correlations often deteriorate as new data are collected. Specifically stating and critically evaluating the mechanistic relationship(s) linking an environmental driver to a biological response may help to address this problem. Using nearly 60 years of data on sockeye salmon from the Kvichak River, Alaska, we tested a mechanistic hypothesis linking water temperatures experienced during freshwater rearing to population productivity by modeling a series of intermediate, deterministic relationships and evaluating temporal trends in biological and environmental time-series. We found that warming waters during freshwater rearing have profoundly altered patterns of growth and life history in this population complex, yet there has been no significant correlation between water temperature and metrics of productivity commonly used in fisheries management. These findings demonstrate that pairing correlative approaches with careful consideration of the mechanistic links between populations and their environments can help both to avoid spurious correlations and to identify biologically important, but not statistically significant, relationships, ultimately producing more robust conclusions about the biological impacts of climate change.
NASA Astrophysics Data System (ADS)
Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.
2012-04-01
Data-driven modelling is most commonly used to develop predictive models that simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of pan evaporation estimation by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted to determine whether any substantial differences exist between the two options. This analysis addresses recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources, i.e. observed or calculated. Such differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines of evidence to evaluate model output and, in so doing, paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes; however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.
NASA Technical Reports Server (NTRS)
Dec, John A.; Braun, Robert D.
2011-01-01
A finite element ablation and thermal response program is presented for three-dimensional transient thermostructural simulation. The three-dimensional governing differential equations and finite element formulation are summarized. A novel probabilistic design methodology for thermal protection systems is presented. The design methodology is an eight-step process that begins with a parameter sensitivity study, followed by a deterministic analysis whereby an optimum design can be determined. The design process concludes with a Monte Carlo simulation in which the probabilities of exceeding design specifications are estimated. The methodology is demonstrated by applying it to the carbon phenolic compression pads of the Crew Exploration Vehicle; the maximum allowed values of bondline temperature and tensile stress are used as the design specifications in this study.
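A minimal sketch of the concluding Monte Carlo step described above, estimating the probability that a response exceeds a design allowable; the response model, input distributions and allowable value are placeholders, not the compression-pad model:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Placeholder uncertain inputs (e.g. normalized heat load and material conductivity)
    heat_load    = rng.normal(1.00, 0.08, n)
    conductivity = rng.lognormal(mean=0.0, sigma=0.10, size=n)

    # Placeholder deterministic response model mapping inputs to bondline temperature (K)
    bondline_temp = 430.0 + 120.0 * heat_load * conductivity

    allowable = 600.0                              # design specification (placeholder)
    p_exceed = np.mean(bondline_temp > allowable)  # Monte Carlo estimate of exceedance probability
    std_err  = np.sqrt(p_exceed * (1 - p_exceed) / n)
    print(f"P(exceed) ~ {p_exceed:.5f} +/- {std_err:.5f}")

In a full application the "response model" line would be replaced by calls to the deterministic ablation/thermostructural analysis, with the sensitivity study used beforehand to decide which inputs to treat as random.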
Real-time logic modelling on SpaceWire
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Ma, Yunpeng; Fei, Haidong; Wang, Xingyou
2017-04-01
SpaceWire is a standard for on-board satellite networks and a basis for future data-handling architectures. However, it cannot meet the deterministic requirements of safety- and time-critical applications in spacecraft, where the delay of real-time (RT) message streams must be guaranteed. SpaceWire-D was therefore developed to provide deterministic delivery over a SpaceWire network. Formal analysis and verification of real-time systems is critical to their development and safe implementation, and is a prerequisite for obtaining safety certification; failure to meet specified timing constraints such as deadlines in hard real-time systems may lead to catastrophic results. In this paper, a formal verification method, Real-Time Logic (RTL), is proposed to specify and verify the timing properties of a SpaceWire-D network. Based on the principles of the SpaceWire-D protocol, we first analyze the timing properties of fundamental transactions, such as RMAP WRITE and RMAP READ. The RMAP WRITE transaction structure is then modeled in Real-Time Logic and Presburger arithmetic representations, and the associated constraint graph and safety analysis are provided. Finally, it is suggested that the RTL method can be useful for protocol evaluation and for providing recommendations for further protocol evolution.
MT71x: Multi-Temperature Library Based on ENDF/B-VII.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conlin, Jeremy Lloyd; Parsons, Donald Kent; Gray, Mark Girard
The Nuclear Data Team has released a multitemperature transport library, MT71x, based upon ENDF/B-VII.1 with a few modifications as well as additional evaluations, for a total of 427 isotope tables. The library was processed using NJOY2012.39 into 23 temperatures. MT71x consists of two sub-libraries: MT71xMG for multigroup energy representation data and MT71xCE for continuous energy representation data. These sub-libraries are suitable for deterministic transport and Monte Carlo transport applications, respectively. The SZAs used are the same for the two sub-libraries; that is, the same SZA can be used for both libraries. This makes comparisons between the two libraries and between deterministic and Monte Carlo codes straightforward. Both the multigroup energy and continuous energy libraries were verified and validated with our checking codes checkmg and checkace (multigroup and continuous energy, respectively). An expanded suite of tests was then used for additional verification, and finally the library was verified using an extensive suite of critical benchmark models. We feel that this library is suitable for all calculations and is particularly useful for calculations sensitive to temperature effects.
Review of smoothing methods for enhancement of noisy data from heavy-duty LHD mining machines
NASA Astrophysics Data System (ADS)
Wodecki, Jacek; Michalak, Anna; Stefaniak, Paweł
2018-01-01
Appropriate analysis of data measured on heavy-duty mining machines is essential for process monitoring, management and optimization. Certain classes of machines, for example LHD (load-haul-dump) machines, hauling trucks and drilling/bolting machines, are characterized by cyclic operation. In those cases, identification of cycles and their segments, in other words data segmentation, is key to evaluating their performance, which is valuable from a management point of view, for example by enabling process optimization. However, such raw signals are often contaminated with various artifacts and are in general very noisy, which makes the segmentation task very difficult or even impossible. To deal with this problem, efficient smoothing methods are needed that retain the informative trends in the signals while suppressing noise and other undesired non-deterministic components. In this paper the authors present a review of various approaches to diagnostic data smoothing. The described methods are fast and efficient, cleaning the signals while preserving the informative deterministic behaviour that is crucial to precise segmentation and to other approaches to industrial data analysis.
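As a minimal sketch of the kind of smoothing surveyed here (the filter choices, window lengths and synthetic signal are illustrative, not the paper's recommended settings), a moving average and a Savitzky-Golay filter applied to a noisy cyclic signal:

    import numpy as np
    from scipy.signal import savgol_filter

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 2000)
    # Synthetic cyclic "machine" signal: square-like duty cycles plus heavy noise
    clean = (np.sin(2 * np.pi * 0.5 * t) > 0).astype(float)
    noisy = clean + 0.4 * rng.standard_normal(t.size)

    # Moving-average smoothing (51-sample window)
    window = 51
    moving_avg = np.convolve(noisy, np.ones(window) / window, mode="same")

    # Savitzky-Golay smoothing: local polynomial fit, better at preserving the edges of cycle segments
    savgol = savgol_filter(noisy, window_length=101, polyorder=3)

Preserving the sharp transitions between cycle segments is what matters for segmentation, which is why edge-preserving filters are usually preferred over plain averaging when the noise level allows.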
Noise-induced transitions in a double-well oscillator with nonlinear dissipation.
Semenov, Vladimir V; Neiman, Alexander B; Vadivasova, Tatyana E; Anishchenko, Vadim S
2016-05-01
We develop a model of a bistable oscillator with nonlinear dissipation. Using numerical simulation and an electronic circuit realization of this system, we study its response to additive noise excitation. We show that, depending on the noise intensity, the system undergoes multiple qualitative changes in the structure of its steady-state probability density function (PDF). In particular, the PDF exhibits two pitchfork bifurcations versus noise intensity, which we describe using an effective potential and the corresponding normal form of the bifurcation. These stochastic effects are explained by the partition of the phase space by the nullclines of the deterministic oscillator.
On some stochastic formulations and related statistical moments of pharmacokinetic models.
Matis, J H; Wehrly, T E; Metzler, C M
1983-02-01
This paper presents deterministic and stochastic models for a linear compartment system with constant coefficients, and it develops expressions for the mean residence times (MRT) and the variances of the residence times (VRT) for the stochastic model. The expressions are relatively simple computationally, involving primarily matrix inversion, and they are elegant mathematically in avoiding eigenvalue analysis and the complex domain. The MRT and VRT provide a set of new, meaningful response measures for pharmacokinetic analysis, and they give added insight into the system kinetics. The new analysis is illustrated with an example involving cholesterol turnover in rats.
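A small numerical sketch of the matrix-inversion idea, assuming the usual convention dx/dt = A x for a stable compartmental matrix A; the two-compartment rate constants below are invented for illustration, and the moment identities used are the standard phase-type expressions consistent with the approach described:

    import numpy as np

    # Illustrative 2-compartment matrix (column j = flows out of compartment j), units 1/h
    A = np.array([[-1.2,  0.3],
                  [ 0.5, -0.8]])

    ones = np.ones(2)
    A_inv = np.linalg.inv(A)

    # Occupancy matrix: entry (i, j) = expected time spent in compartment i per unit dosed into j
    occupancy = -A_inv

    # Total residence time in the system for a unit dose into compartment j:
    #   MRT_j  = 1' (-A^{-1}) e_j
    #   E[T^2] = 2 * 1' (A^{-2}) e_j,  so  VRT_j = E[T^2] - MRT_j^2
    MRT = ones @ (-A_inv)
    second_moment = 2.0 * ones @ (A_inv @ A_inv)
    VRT = second_moment - MRT ** 2

    print("mean residence times per input compartment:", MRT)
    print("variances of residence times:", VRT)

Only a matrix inverse is needed, with no eigenvalue decomposition, which is the computational simplicity the abstract refers to.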
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coleman, Justin; Slaughter, Andrew; Veeraraghavan, Swetha
Multi-hazard Analysis for STOchastic time-DOmaiN phenomena (MASTODON) is a finite element application that aims at analyzing the response of 3-D soil-structure systems to natural and man-made hazards such as earthquakes, floods and fire. MASTODON currently focuses on the simulation of seismic events and has the capability to perform extensive ‘source-to-site’ simulations including earthquake fault rupture, nonlinear wave propagation and nonlinear soil-structure interaction (NLSSI) analysis. MASTODON is being developed to be a dynamic probabilistic risk assessment framework that enables analysts to not only perform deterministic analyses, but also easily perform probabilistic or stochastic simulations for the purpose of risk assessment.
Stern, Noah; Ginder-Vogel, Matthew; Stegen, James C; Arntzen, Evan; Kennedy, David W; Larget, Bret R; Roden, Eric E
2017-08-15
Hydrologic exchange plays a critical role in biogeochemical cycling within the hyporheic zone (the interface between river water and groundwater) of riverine ecosystems. Such exchange may set limits on the rates of microbial metabolism and impose deterministic selection on microbial communities that adapt to dynamically changing dissolved organic carbon (DOC) sources. This study examined the response of attached microbial communities ( in situ colonized sand packs) from groundwater, hyporheic, and riverbed habitats within the Columbia River hyporheic corridor to "cross-feeding" with either groundwater, river water, or DOC-free artificial fluids. Our working hypothesis was that deterministic selection during in situ colonization would dictate the response to cross-feeding, with communities displaying maximal biomass and respiration when supplied with their native fluid source. In contrast to expectations, the major observation was that the riverbed colonized sand had much higher biomass and respiratory activity, as well as a distinct community structure, compared with those of the hyporheic and groundwater colonized sands. 16S rRNA gene amplicon sequencing revealed a much higher proportion of certain heterotrophic taxa as well as significant numbers of eukaryotic algal chloroplasts in the riverbed colonized sand. Significant quantities of DOC were released from riverbed sediment and colonized sand, and separate experiments showed that the released DOC stimulated respiration in the groundwater and piezometer colonized sand. These results suggest that the accumulation and degradation of labile particulate organic carbon (POC) within the riverbed are likely to release DOC, which may enter the hyporheic corridor during hydrologic exchange, thereby stimulating microbial activity and imposing deterministic selective pressure on the microbial community composition. IMPORTANCE The influence of river water-groundwater mixing on hyporheic zone microbial community structure and function is an important but poorly understood component of riverine biogeochemistry. This study employed an experimental approach to gain insight into how such mixing might be expected to influence the biomass, respiration, and composition of hyporheic zone microbial communities. Colonized sands from three different habitats (groundwater, river water, and hyporheic) were "cross-fed" with either groundwater, river water, or DOC-free artificial fluids. We expected that the colonization history would dictate the response to cross-feeding, with communities displaying maximal biomass and respiration when supplied with their native fluid source. By contrast, the major observation was that the riverbed communities had much higher biomass and respiration, as well as a distinct community structure compared with those of the hyporheic and groundwater colonized sands. These results highlight the importance of riverbed microbial metabolism in organic carbon processing in hyporheic corridors. Copyright © 2017 American Society for Microbiology.
Price-Dynamics of Shares and Bohmian Mechanics: Deterministic or Stochastic Model?
NASA Astrophysics Data System (ADS)
Choustova, Olga
2007-02-01
We apply the mathematical formalism of Bohmian mechanics to describe the dynamics of share prices. The main distinguishing feature of the financial Bohmian model is the possibility of taking market psychology into account by describing traders' expectations with the pilot wave. We also discuss some objections (coming from the conventional financial mathematics of stochastic processes) against the deterministic Bohmian model: in particular, the objection that such a model contradicts the efficient market hypothesis, which is the cornerstone of modern market ideology. Another objection is of a purely mathematical nature and relates to the quadratic variation of price trajectories. One possible reply to this critique is to consider the stochastic Bohm-Vigier model instead of the deterministic one; we do this in the present note.
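For orientation, the pilot-wave formalism being applied here, written in its generic one-dimensional form (the identification of q with price and m with a market analogue of mass follows the Bohmian-finance literature; the symbols are illustrative):

    \psi(q,t) = R(q,t)\, e^{iS(q,t)/\hbar}, \qquad
    \dot{q} = \frac{1}{m}\,\frac{\partial S}{\partial q}, \qquad
    Q(q,t) = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2} R/\partial q^{2}}{R}

so that the price trajectory obeys m\ddot{q} = -\partial_q (V + Q), with the quantum potential Q carrying the influence of the pilot wave (market expectations). The Bohm-Vigier variant adds a stochastic term to the guidance equation, which is the reply to the quadratic-variation objection mentioned above.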
Illustrated structural application of universal first-order reliability method
NASA Technical Reports Server (NTRS)
Verderaime, V.
1994-01-01
The general application of the proposed first-order reliability method was achieved through the universal normalization of engineering probability distribution data. The method superimposes prevailing deterministic techniques and practices on the first-order reliability method to surmount deficiencies of the deterministic method and to provide the benefits of reliability techniques and predictions. A reliability design factor is derived from the reliability criterion to satisfy a specified reliability and is analogous to the deterministic safety factor. Its application is numerically illustrated on several practical structural design and verification cases, with interesting results and insights. Two concepts of reliability selection criteria are suggested. Though the method was developed to support affordable structures for access to space, it should also be applicable to most high-performance air and surface transportation systems.
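The first-order reliability quantities underlying such a method, in their standard textbook form for independent, normally distributed strength R and stress (load effect) S (notation mine, not necessarily the report's):

    g = R - S, \qquad
    \beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^{2} + \sigma_S^{2}}}, \qquad
    P_f = \Phi(-\beta)

where g is the limit-state (safety margin), \beta the first-order reliability index, \Phi the standard normal CDF, and P_f the probability of failure. A reliability design factor plays the role of the deterministic safety factor by sizing the mean strength so that a specified \beta (or P_f) is met.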
Observations, theoretical ideas and modeling of turbulent flows: Past, present and future
NASA Technical Reports Server (NTRS)
Chapman, G. T.; Tobak, M.
1985-01-01
Turbulence was analyzed in a historical context featuring the interactions between observations, theoretical ideas, and modeling within three successive movements. These are identified as predominantly statistical, structural and deterministic. The statistical movement is criticized for its failure to deal with the structural elements observed in turbulent flows. The structural movement is criticized for its failure to embody observed structural elements within a formal theory. The deterministic movement is described as having the potential of overcoming these deficiencies by allowing structural elements to exhibit chaotic behavior that is nevertheless embodied within a theory. Four major ideas of this movement are described: bifurcation theory, strange attractors, fractals, and the renormalization group. A framework for the future study of turbulent flows is proposed, based on the premises of the deterministic movement.
Stochastic Seismic Response of an Algiers Site with Random Depth to Bedrock
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badaoui, M.; Mebarki, A.; Berrah, M. K.
2010-05-21
Among the important effects of the Boumerdes earthquake (Algeria, May 21st, 2003) was that, within the same zone, the destruction in certain areas was more severe than in others. This phenomenon is due to site effects, which alter the characteristics of seismic motions and cause concentrations of damage during earthquakes. Local site conditions such as the thickness and mechanical properties of soil layers have important effects on surface ground motions. This paper deals with the randomness of the depth to bedrock (soil layer thicknesses), which is assumed to be a random variable with a lognormal distribution. This distribution is suitable for strictly non-negative random variables with large coefficients of variation. Monte Carlo simulations are combined with the stiffness matrix method, used herein as the deterministic method, to evaluate the effect of depth-to-bedrock uncertainty on the seismic response of a multilayered soil. The study considers P and SV wave propagation using input accelerations recorded at the Keddara station, located 20 km from the epicenter directly on bedrock. A parametric study is conducted to derive the stochastic behavior of the peak ground acceleration and its response spectrum, the transfer function and the amplification factors. It is found that the soil height heterogeneity widens the frequency content and increases the fundamental frequency of the soil profile, indicating that the resonance phenomenon concerns a larger number of structures.
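A minimal sketch of the sampling step in such a Monte Carlo study, parameterizing the lognormal depth to bedrock by a target mean and coefficient of variation (the numerical values are placeholders, not the Algiers site's):

    import numpy as np

    rng = np.random.default_rng(7)

    mean_depth = 30.0   # target mean depth to bedrock, m (placeholder)
    cov        = 0.4    # coefficient of variation (placeholder)

    # Lognormal parameters reproducing the target mean and COV:
    #   sigma^2 = ln(1 + COV^2),  mu = ln(mean) - sigma^2 / 2
    sigma = np.sqrt(np.log(1.0 + cov ** 2))
    mu = np.log(mean_depth) - 0.5 * sigma ** 2

    depths = rng.lognormal(mean=mu, sigma=sigma, size=10_000)  # one depth per MC realization
    print(depths.mean(), depths.std() / depths.mean())         # ~ 30.0 and ~ 0.4

Each sampled depth would then be fed to the deterministic stiffness matrix (wave propagation) solver, and the resulting ensemble of surface motions summarized statistically.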
Ecological specialization and morphological diversification in Greater Antillean boas.
Reynolds, R Graham; Collar, David C; Pasachnik, Stesha A; Niemiller, Matthew L; Puente-Rolón, Alberto R; Revell, Liam J
2016-08-01
Colonization of islands can dramatically influence the evolutionary trajectories of organisms, with both deterministic and stochastic processes driving adaptation and diversification. Some island colonists evolve extremely large or small body sizes, presumably in response to unique ecological circumstances present on islands. One example of this phenomenon, the Greater Antillean boas, includes both small (<90 cm) and large (4 m) species occurring on the Greater Antilles and Bahamas, with some islands supporting pairs or trios of body-size divergent species. These boas have been shown to comprise a monophyletic radiation arising from a Miocene dispersal event to the Greater Antilles, though it is not known whether co-occurrence of small and large species is a result of dispersal or in situ evolution. Here, we provide the first comprehensive species phylogeny for this clade combined with morphometric and ecological data to show that small body size evolved repeatedly on separate islands in association with specialization in substrate use. Our results further suggest that microhabitat specialization is linked to increased rates of head shape diversification among specialists. Our findings show that ecological specialization following island colonization promotes morphological diversity through deterministic body size evolution and cranial morphological diversification that is contingent on island- and species-specific factors. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
Comparing reactive and memory-one strategies of direct reciprocity
NASA Astrophysics Data System (ADS)
Baek, Seung Ki; Jeong, Hyeong-Chai; Hilbe, Christian; Nowak, Martin A.
2016-05-01
Direct reciprocity is a mechanism for the evolution of cooperation based on repeated interactions. When individuals meet repeatedly, they can use conditional strategies to enforce cooperative outcomes that would not be feasible in one-shot social dilemmas. Direct reciprocity requires that individuals keep track of their past interactions and find the right response. However, there are natural bounds on strategic complexity: humans find it difficult to remember past interactions accurately, especially over long timespans. Given these limitations, it is natural to ask how complex strategies need to be for cooperation to evolve. Here, we study stochastic evolutionary game dynamics in finite populations to systematically compare the evolutionary performance of reactive strategies, which only respond to the co-player's previous move, and memory-one strategies, which take into account both their own and the co-player's previous move. In both cases, we compare deterministic and stochastic strategy spaces. For reactive strategies and small costs, we find that stochasticity benefits cooperation, because it allows for generous tit-for-tat. For memory-one strategies and small costs, we find that stochasticity does not increase the propensity for cooperation, because the deterministic rule of win-stay, lose-shift works best. For memory-one strategies and large costs, however, stochasticity can augment cooperation.
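A minimal sketch of the strategy classes being compared, assuming a standard donation-game payoff (benefit b, cost c; values illustrative): a memory-one strategy is a vector of cooperation probabilities conditioned on the previous round's outcome, with win-stay lose-shift as a deterministic example and generous tit-for-tat as a stochastic, reactive example.

    import numpy as np

    rng = np.random.default_rng(3)
    b, c = 3.0, 1.0   # illustrative benefit and cost of cooperation

    # Memory-one strategy: P(cooperate | last outcome), outcome keyed as own move + co-player's move
    WSLS = {"CC": 1.0, "CD": 0.0, "DC": 0.0, "DD": 1.0}   # win-stay, lose-shift (deterministic)
    GTFT = {"CC": 1.0, "CD": 0.3, "DC": 1.0, "DD": 0.3}   # generous tit-for-tat (stochastic, reactive)

    def play(p1, p2, rounds=10_000):
        """Average payoffs of two memory-one strategies in an iterated donation game."""
        last = ("C", "C")                      # assume a cooperative first move
        pay1 = pay2 = 0.0
        for _ in range(rounds):
            a1 = "C" if rng.random() < p1[last[0] + last[1]] else "D"
            a2 = "C" if rng.random() < p2[last[1] + last[0]] else "D"   # co-player keys the outcome from its side
            pay1 += (b if a2 == "C" else 0.0) - (c if a1 == "C" else 0.0)
            pay2 += (b if a1 == "C" else 0.0) - (c if a2 == "C" else 0.0)
            last = (a1, a2)
        return pay1 / rounds, pay2 / rounds

    print(play(WSLS, GTFT))   # both near b - c when mutual cooperation is sustained

A reactive strategy is the special case whose cooperation probability depends only on the co-player's last move (here, GTFT); the evolutionary comparison in the paper embeds such strategy spaces in finite-population dynamics rather than single pairwise matches.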
Zhang, Xiao; Liu, Shirong; Li, Xiangzhen; Wang, Jingxin; Ding, Qiong; Wang, Hui; Tian, Chao; Yao, Minjie; An, Jiaxing; Huang, Yongtao
2016-03-01
To understand the temporal responses of soil prokaryotic communities to clear-cutting disturbance, we examined the changes in soil bacterial and archaeal community composition, structure and diversity along a chronosequence of forest successional restoration using high-throughput 16S rRNA gene sequencing. Our results demonstrated that clear-cutting significantly altered soil bacterial community structure, while no significant shifts in soil archaeal communities were observed. The hypothesis that soil bacterial communities would become similar to those of the surrounding intact primary forest with natural regeneration was supported by the shifts in bacterial community composition and structure. Bacterial community diversity patterns induced by clear-cutting were consistent with the intermediate disturbance hypothesis. Dynamics of bacterial communities were mostly driven by soil properties, which collectively explained more than 70% of the variation in bacterial community composition. Community assembly data revealed that clear-cutting increased the importance of deterministic processes in shaping bacterial communities, coinciding with the resulting low-resource environments, but assembly processes in the secondary forest returned to a level similar to that in the intact primary forest. These findings suggest that bacterial community dynamics may be predictable during the natural recovery process. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Guo, H.; Karpov, M.; Lucas, E.; Kordts, A.; Pfeiffer, M. H. P.; Brasch, V.; Lihachev, G.; Lobanov, V. E.; Gorodetsky, M. L.; Kippenberg, T. J.
2017-01-01
Temporal dissipative Kerr solitons in optical microresonators enable the generation of ultrashort pulses and low-noise frequency combs at microwave repetition rates. They have been demonstrated in a growing number of microresonator platforms, enabling chip-scale frequency combs, optical synthesis of low-noise microwaves and multichannel coherent communications. In all these applications, accessing and maintaining a single-soliton state is a key requirement--one that remains an outstanding challenge. Here, we study the dynamics of multiple-soliton states and report the discovery of a simple mechanism that deterministically switches the soliton state by reducing the number of solitons one by one. We demonstrate this control in Si3N4 and MgF2 resonators and, moreover, we observe a secondary peak to emerge in the response of the system to a pump modulation, an effect uniquely associated with the soliton regime. Exploiting this feature, we map the multi-stability diagram of a microresonator experimentally. Our measurements show the physical mechanism of the soliton switching and provide insight into soliton dynamics in microresonators. The technique provides a method to sequentially reduce, monitor and stabilize an arbitrary state with solitons, in particular allowing for feedback stabilization of single-soliton states, which is necessary for practical applications.