Sample records for deterministic site characterization

  1. Development of DCGLs by using both probabilistic and deterministic analyses in RESRAD (onsite) and RESRAD-OFFSITE codes.

    PubMed

    Kamboj, Sunita; Yu, Charley; Johnson, Robert

    2013-05-01

    The Derived Concentration Guideline Levels for two building areas previously used in waste processing and storage at Argonne National Laboratory were developed using both probabilistic and deterministic radiological environmental pathway analysis. Four scenarios were considered. The two current uses considered were on-site industrial use and off-site residential use with farming. The two future uses (i.e., after an institutional control period of 100 y) were on-site recreational use and on-site residential use with farming. The RESRAD-OFFSITE code was used for the current-use off-site residential/farming scenario and RESRAD (onsite) was used for the other three scenarios. Contaminants of concern were identified from the past operations conducted in the buildings and the actual characterization done at the site. Derived Concentration Guideline Levels were developed for all four scenarios using deterministic and probabilistic approaches, which include both "peak-of-the-means" and "mean-of-the-peaks" analyses. The future-use on-site residential/farming scenario resulted in the most restrictive Derived Concentration Guideline Levels for most radionuclides.
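
The "peak-of-the-means" and "mean-of-the-peaks" statistics mentioned above differ only in the order of aggregation. A minimal sketch on synthetic Monte Carlo dose output (hypothetical values, not the Argonne results):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical probabilistic output: dose (mrem/yr) for 200 parameter
# realizations evaluated at 50 future time points.
doses = rng.lognormal(mean=0.0, sigma=0.5, size=(200, 50))

# Peak of the means: average over realizations at each time point,
# then take the maximum over time.
peak_of_means = doses.mean(axis=0).max()

# Mean of the peaks: per-realization maximum over time,
# then average over realizations.
mean_of_peaks = doses.max(axis=1).mean()
```

Because the maximum of an average never exceeds the average of the maxima, the mean-of-the-peaks dose is always at least the peak-of-the-means dose, so it generally yields the more restrictive guideline level.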

  2. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. 
The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. © 2003 Elsevier B.V. All rights reserved.
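
Deterministic deconvolution of this kind is spectral division by the known source wavelet; a common way to stabilize the division is a water-level floor on the wavelet spectrum. A sketch (the water-level parameter is an assumption, not taken from the paper):

```python
import numpy as np

def deterministic_deconvolution(trace, wavelet, water_level=0.01):
    """Divide the trace spectrum by the known source-wavelet spectrum,
    flooring small wavelet amplitudes at water_level * max|W| so the
    division stays stable near spectral notches."""
    n = len(trace)
    T = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    floor = water_level * np.abs(W).max()
    W_stab = np.where(np.abs(W) < floor,
                      floor * np.exp(1j * np.angle(W)), W)
    return np.fft.irfft(T / W_stab, n)
```

Applied to a synthetic trace (a spike reflectivity convolved with a Ricker-style wavelet), the output compresses the wavelet back toward a spike at the reflector's time sample.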

  3. Comparison of numerical weather prediction based deterministic and probabilistic wind resource assessment methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Draxl, Caroline; Hopson, Thomas

    Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low-resolution NWP model in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse-resolution (0.5° × 0.67°) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine-resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources.
Future efforts could focus on combining the analog ensemble with intermediate-resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden while providing accurate deterministic estimates and reliable probabilistic assessments.
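
The analog-ensemble idea can be sketched in a few lines: find the past coarse-NWP values most similar to the current one and collect the observations that verified them. In a real application the similarity metric spans several predictors and lead times; here a single scalar predictor stands in:

```python
import numpy as np

def analog_ensemble(nwp_history, obs_history, nwp_now, n_analogs=10):
    """Toy analog ensemble: the mean of the matched observations is a
    deterministic estimate; their spread gives a probabilistic one."""
    nwp_history = np.asarray(nwp_history, dtype=float)
    obs_history = np.asarray(obs_history, dtype=float)
    # Indices of the n_analogs past forecasts closest to the current one.
    idx = np.argsort(np.abs(nwp_history - nwp_now))[:n_analogs]
    analogs = obs_history[idx]
    return analogs.mean(), analogs
```

If the observations are a systematically biased function of the coarse model, the analogs implicitly learn that correction from history.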

  4. Application of deterministic deconvolution of ground-penetrating radar data in a study of carbonate strata

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.

    2004-01-01

    We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide groundtruth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features and more accurate representation of geologic features as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.

  5. Protein Aggregation/Folding: The Role of Deterministic Singularities of Sequence Hydrophobicity as Determined by Nonlinear Signal Analysis of Acylphosphatase and Aβ(1–40)

    PubMed Central

    Zbilut, Joseph P.; Colosimo, Alfredo; Conti, Filippo; Colafranceschi, Mauro; Manetti, Cesare; Valerio, MariaCristina; Webber, Charles L.; Giuliani, Alessandro

    2003-01-01

    The problem of protein folding vs. aggregation was investigated in acylphosphatase and the amyloid protein Aβ(1–40) by means of nonlinear signal analysis of their chain hydrophobicity. Numerical descriptors of recurrence patterns provided the basis for statistical evaluation of folding/aggregation distinctive features. Static and dynamic approaches were used to elucidate conditions coincident with folding vs. aggregation using comparisons with known protein secondary structure classifications, site-directed mutagenesis studies of acylphosphatase, and molecular dynamics simulations of amyloid protein, Aβ(1–40). The results suggest that a feature derived from principal component space characterized by the smoothness of singular, deterministic hydrophobicity patches plays a significant role in the conditions governing protein aggregation. PMID:14645049
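
The nonlinear signal analysis referred to above rests on recurrence quantification of the hydrophobicity sequence. A minimal descriptor, the recurrence rate of the embedded series, can be sketched as follows; the embedding dimension, delay, and radius are illustrative choices, not the paper's settings:

```python
import numpy as np

def recurrence_rate(series, dim=3, delay=1, radius=0.5):
    """Embed a 1-D sequence (e.g. residue hydrophobicity) with a
    time-delay embedding, then return the fraction of point pairs
    whose distance is below `radius` (the recurrence rate)."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * delay
    emb = np.column_stack([series[i * delay : i * delay + n]
                           for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < radius).mean()
```

A perfectly uniform hydrophobicity patch recurs everywhere (rate 1), while a varying sequence recurs less; descriptors like this feed the statistical comparisons in the abstract.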

  6. Successional dynamics in Neotropical forests are as uncertain as they are predictable

    PubMed Central

    Norden, Natalia; Angarita, Héctor A.; Bongers, Frans; Martínez-Ramos, Miguel; Granzow-de la Cerda, Iñigo; van Breugel, Michiel; Lebrija-Trejos, Edwin; Meave, Jorge A.; Vandermeer, John; Williamson, G. Bruce; Finegan, Bryan; Mesquita, Rita; Chazdon, Robin L.

    2015-01-01

    Although forest succession has traditionally been approached as a deterministic process, successional trajectories of vegetation change vary widely, even among nearby stands with similar environmental conditions and disturbance histories. Here, we provide the first attempt, to our knowledge, to quantify predictability and uncertainty during succession based on the most extensive long-term datasets ever assembled for Neotropical forests. We develop a novel approach that integrates deterministic and stochastic components into different candidate models describing the dynamical interactions among three widely used and interrelated forest attributes—stem density, basal area, and species density. Within each of the seven study sites, successional trajectories were highly idiosyncratic, even when controlling for prior land use, environment, and initial conditions in these attributes. Plot factors were far more important than stand age in explaining successional trajectories. For each site, the best-fit model was able to capture the complete set of time series in certain attributes only when both the deterministic and stochastic components were set to similar magnitudes. Surprisingly, predictability of stem density, basal area, and species density did not show consistent trends across attributes, study sites, or land use history, and was independent of plot size and time series length. The model developed here represents the best approach, to date, for characterizing autogenic successional dynamics and demonstrates the low predictability of successional trajectories. These high levels of uncertainty suggest that the impacts of allogenic factors on rates of change during tropical forest succession are far more pervasive than previously thought, challenging the way ecologists view and investigate forest regeneration. PMID:26080411
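
The interplay of deterministic and stochastic components described above can be illustrated with a toy state-space model for a single attribute: identical plots following the same deterministic recovery rule still diverge when a stochastic term of comparable magnitude is added. The model form and parameters below are illustrative, not the study's fitted equations:

```python
import numpy as np

rng = np.random.default_rng(1)

def succession(x0, r=0.2, K=30.0, sigma=1.0, steps=40, n_plots=100):
    """Toy successional dynamics for one attribute (e.g. basal area):
    deterministic logistic-style recovery toward K plus additive
    stochastic shocks, for n_plots plots with identical initial state."""
    x = np.full(n_plots, float(x0))
    traj = [x.copy()]
    for _ in range(steps):
        x = x + r * x * (1 - x / K) + sigma * rng.normal(size=n_plots)
        x = np.clip(x, 0.0, None)   # attribute cannot go negative
        traj.append(x.copy())
    return np.array(traj)
```

Even with identical initial conditions and one shared deterministic rule, the between-plot spread grows over time, which is the signature of low trajectory-level predictability.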

  8. CPT site characterization for seismic hazards in the New Madrid seismic zone

    USGS Publications Warehouse

    Liao, T.; Mayne, P.W.; Tuttle, M.P.; Schweig, E.S.; Van Arsdale, R.B.

    2002-01-01

    A series of cone penetration tests (CPTs) were conducted in the vicinity of the New Madrid seismic zone in central USA for quantifying seismic hazards, obtaining geotechnical soil properties, and conducting studies at liquefaction sites related to the 1811-1812 and prehistoric New Madrid earthquakes. The seismic piezocone provides four independent measurements for delineating the stratigraphy, liquefaction potential, and site amplification parameters. At the same location, two independent assessments of soil liquefaction susceptibility can be made using both the normalized tip resistance (qc1N) and shear wave velocity (Vs1). In lieu of traditional deterministic approaches, the CPT data can be processed using probability curves to assess the level and likelihood of future liquefaction occurrence. © 2002 Elsevier Science Ltd. All rights reserved.
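
The normalized tip resistance qc1N referred to above is an overburden-corrected cone resistance. A minimal sketch using the common clean-sand stress exponent n ≈ 0.5 (illustrative; not necessarily the correction used in the paper):

```python
def qc1n(qc_mpa, sigma_v0_eff_kpa, pa_kpa=101.325, n=0.5):
    """Overburden-normalized cone tip resistance:
    qc1N = (qc / Pa) * (Pa / sigma'_v0)^n,
    with qc in MPa, effective vertical stress in kPa, and Pa one
    atmosphere. n ~ 0.5 is a typical clean-sand stress exponent."""
    return (qc_mpa * 1000.0 / pa_kpa) * (pa_kpa / sigma_v0_eff_kpa) ** n
```

Deeper soils (higher effective overburden) get a lower normalized resistance, so the same raw tip resistance implies higher liquefaction susceptibility at depth.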

  9. Predicting coexistence of plants subject to a tolerance-competition trade-off.

    PubMed

    Haegeman, Bart; Sari, Tewfik; Etienne, Rampal S

    2014-06-01

    Ecological trade-offs between species are often invoked to explain species coexistence in ecological communities. However, few mathematical models have been proposed for which coexistence conditions can be characterized explicitly in terms of a trade-off. Here we present a model of a plant community which allows such a characterization. In the model plant species compete for sites where each site has a fixed stress condition. Species differ both in stress tolerance and competitive ability. Stress tolerance is quantified as the fraction of sites with stress conditions low enough to allow establishment. Competitive ability is quantified as the propensity to win the competition for empty sites. We derive the deterministic, discrete-time dynamical system for the species abundances. We prove the conditions under which plant species can coexist in a stable equilibrium. We show that the coexistence conditions can be characterized graphically, clearly illustrating the trade-off between stress tolerance and competitive ability. We compare our model with a recently proposed, continuous-time dynamical system for a tolerance-fecundity trade-off in plant communities, and we show that this model is a special case of the continuous-time version of our model.
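
A deterministic, discrete-time update of the kind described above can be sketched as a lottery-style model in which tolerance sets the habitable fraction of sites and competitive weight sets the share of recruits. This is an illustrative form, not the paper's exact equations:

```python
import numpy as np

def step(p, h, c, m=0.1):
    """One discrete-time update of a toy tolerance-competition model.
    p[i]: fraction of all sites occupied by species i
    h[i]: tolerance (fraction of sites habitable for species i)
    c[i]: competitive weight; m: per-step mortality."""
    p = np.asarray(p, dtype=float)
    empty = 1.0 - p.sum()            # currently vacant sites
    seeds = c * p * h                # competitive pressure on empty sites
    total = seeds.sum()
    recruit = empty * seeds / total if total > 0 else np.zeros_like(p)
    return p * (1 - m) + recruit
```

Iterating the map keeps abundances in [0, 1]; total occupancy converges to the fixed point 1/(1+m) of the aggregate recursion S' = 1 - mS, while the species shares sort out the tolerance-competition trade-off.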

  10. Aquatic bacterial assemblage structure in Pozas Azules, Cuatro Cienegas Basin, Mexico: Deterministic vs. stochastic processes.

    PubMed

    Espinosa-Asuar, Laura; Escalante, Ana Elena; Gasca-Pineda, Jaime; Blaz, Jazmín; Peña, Lorena; Eguiarte, Luis E; Souza, Valeria

    2015-06-01

    The aim of this study was to determine the contributions of stochastic vs. deterministic processes in the distribution of microbial diversity in four ponds (Pozas Azules) within a temporally stable aquatic system in the Cuatro Cienegas Basin, State of Coahuila, Mexico. A sampling strategy for sites that were geographically delimited and had low environmental variation was applied to avoid obscuring distance effects. Aquatic bacterial diversity was characterized following a culture-independent approach (16S sequencing of clone libraries). The results showed a correlation between bacterial beta diversity (1-Sorensen) and geographic distance (distance decay of similarity), which indicated the influence of stochastic processes related to dispersion in the assembly of the ponds' bacterial communities. Our findings are the first to show the influence of dispersal limitation in the prokaryotic diversity distribution of Cuatro Cienegas Basin. Copyright © by the Spanish Society for Microbiology and Institute for Catalan Studies.
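
The beta-diversity measure used above, 1 - Sørensen, is straightforward to compute from presence/absence data; a distance-decay analysis then correlates these pairwise dissimilarities with pairwise geographic distances:

```python
import numpy as np

def sorensen_dissimilarity(a, b):
    """1 - Sørensen index for two presence/absence vectors over the
    same set of taxa (e.g. OTUs from 16S clone libraries)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    shared = np.logical_and(a, b).sum()
    return 1.0 - 2.0 * shared / (a.sum() + b.sum())
```

Identical communities score 0, fully disjoint ones score 1; significance of the distance-dissimilarity correlation is usually assessed with a Mantel-type permutation test.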

  11. Characterizing Uncertainty and Variability in PBPK Models ...

    EPA Pesticide Factsheets

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretical and practical methodological improvements.
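
A local sensitivity analysis of the kind recommended above can be sketched on a one-compartment stand-in for a full PBPK model; the model form, time point, and parameter values are illustrative only:

```python
import numpy as np

def conc(t, dose=100.0, V=40.0, k=0.1):
    """One-compartment IV-bolus concentration: C(t) = (dose/V) * exp(-k*t).
    A toy stand-in for a PBPK model output."""
    return dose / V * np.exp(-k * t)

def local_sensitivity(param, base, t=5.0, eps=1e-4):
    """Normalized local sensitivity coefficient (dC/C)/(dp/p),
    estimated by central finite differences around `base`."""
    lo, hi = base * (1 - eps), base * (1 + eps)
    c_lo = conc(t, **{param: lo})
    c_hi = conc(t, **{param: hi})
    c0 = conc(t, **{param: base})
    return ((c_hi - c_lo) / c0) / (2 * eps)
```

For this model the coefficients are known analytically (-1 for volume V, -k*t for the elimination rate k), which makes the finite-difference sketch easy to verify before applying the same recipe to a model without closed-form sensitivities.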

  12. Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones

    NASA Astrophysics Data System (ADS)

    Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto

    2015-04-01

    Probabilistic seismic hazard assessment (PSHA), usually adopted in the drafting of seismic codes, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and attenuation relationships with a log-normal distribution of PGA or response spectrum. The main strength of this approach is that it is presently the standard in the majority of countries, but it has weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at a site, such as site effects and source characteristics like strong-motion duration and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation, and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction, unusual for magnitudes less than 6. We focus on sites prone to liquefaction because of their soil mechanical parameters and water-table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions: the looser the soil and the higher the liquefaction potential, the more suitable the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately, a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of approach.
The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billion euros, shows that the geological and geophysical investigations necessary for a reliable deterministic hazard evaluation are largely justified.
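
The Poissonian occurrence model underlying PSHA gives the standard exceedance-probability relations, e.g. the common 10%-in-50-years design level corresponding to a roughly 475-year return period:

```python
import math

def poisson_exceedance(annual_rate, years):
    """Probability of at least one exceedance in `years`, assuming
    Poissonian (memoryless) occurrence with the given annual rate."""
    return 1.0 - math.exp(-annual_rate * years)

def rate_for_return_period(p, years):
    """Annual exceedance rate implied by probability p in `years`;
    its reciprocal is the return period."""
    return -math.log(1.0 - p) / years
```

For p = 0.10 and 50 years this yields a return period of about 475 years, the value behind many code-level hazard maps.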

  13. Multiple scattering of waves in random media: Application to the study of the city-site effect in Mexico City area.

    NASA Astrophysics Data System (ADS)

    Ishizawa, O. A.; Clouteau, D.

    2007-12-01

    The long duration, amplification, and spatial variability of the seismic records registered in Mexico City during the September 1985 earthquake cannot be explained by the soil velocity model alone. We try to explain these phenomena by studying the extent of the effect of the buildings' diffracted wave fields during an earthquake. The main question is whether the presence of a large number of buildings can significantly modify the seismic wave field. We are interested in the interaction between the incident wave field propagating in a stratified half-space and a large number of structures at the free surface, i.e., the coupled city-site effect. We study and characterize the seismic wave propagation regimes in a city using the theory of wave propagation in random media. In the coupled city-site system, the buildings are modeled as resonant scatterers uniformly distributed at the surface of a deterministic, horizontally layered elastic half-space representing the soil. Based on the mean-field and field-correlation equations, we build a theoretical model that takes into account the multiple scattering of seismic waves and allows us to describe the coupled city-site system's behavior in a simple and rapid way. The results obtained for the configurationally averaged field quantities are validated against 3D results for the seismic response of a deterministic model. The numerical simulations of this model are computed with the MISS3D code, based on classical soil-structure interaction techniques and on a variational coupling between boundary integral equations for a layered soil and a modal finite element approach for the buildings. This work proposes a detailed numerical and theoretical analysis of the city-site interaction (CSI) in the Mexico City area.
The principal parameters in the study of the CSI are the buildings' resonant-frequency distribution, the soil characteristics of the site, the urban density and the positions of the buildings in the city, and the type of incident wave. The main results of the theoretical and numerical models allow us to characterize the seismic movement in urban areas.

  14. Lévy-like behaviour in deterministic models of intelligent agents exploring heterogeneous environments

    NASA Astrophysics Data System (ADS)

    Boyer, D.; Miramontes, O.; Larralde, H.

    2009-10-01

    Many studies on animal and human movement patterns report the existence of scaling laws and power-law distributions. Whereas a number of random walk models have been proposed to explain observations, in many situations individuals actually rely on mental maps to explore strongly heterogeneous environments. In this work, we study a model of a deterministic walker, visiting sites randomly distributed on the plane and with varying weight or attractiveness. At each step, the walker minimizes a function that depends on the distance to the next unvisited target (cost) and on the weight of that target (gain). If the target weight distribution is a power law, p(k) ~ k^(-β), in some range of the exponent β, the foraging medium induces movements that are similar to Lévy flights and are characterized by non-trivial exponents. We explore variations of the choice rule in order to test the robustness of the model and argue that the addition of noise has a limited impact on the dynamics in strongly disordered media.
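
A minimal version of the deterministic walker described above can be simulated directly. The gain/cost rule used here (maximize weight divided by distance) is one simple choice for illustration; the paper explores several:

```python
import numpy as np

rng = np.random.default_rng(2)

def deterministic_walk(n_sites=2000, beta=2.0, steps=200):
    """Sites scattered uniformly on a plane carry power-law-distributed
    weights p(k) ~ k^(-beta). At each step the walker deterministically
    moves to the unvisited site maximizing weight / distance, and the
    step length is recorded."""
    xy = rng.uniform(0.0, 100.0, size=(n_sites, 2))
    k = rng.pareto(beta - 1.0, n_sites) + 1.0   # heavy-tailed weights
    visited = np.zeros(n_sites, dtype=bool)
    visited[0] = True
    pos, lengths = xy[0], []
    for _ in range(steps):
        d = np.linalg.norm(xy - pos, axis=1)
        score = np.where(visited, -np.inf, k / np.maximum(d, 1e-9))
        j = int(np.argmax(score))
        lengths.append(d[j])
        pos, visited[j] = xy[j], True
    return np.array(lengths)
```

Plotting a histogram of the returned step lengths on log-log axes is the usual way to check for the Lévy-like broad distribution the abstract describes.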

  15. Estimate Tsunami Flow Conditions and Large-Debris Tracks for the Design of Coastal Infrastructures along Coastlines of the U.S. Pacific Northwest

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Thomas, S.; Zhou, H.; Arcas, D.; Titov, V. V.

    2017-12-01

    Potential tsunami hazards pose great challenges for infrastructure along the coastlines of the U.S. Pacific Northwest. Tsunami impact at a coastal site is usually assessed from deterministic scenarios based on 10,000 years of geological records in the Cascadia Subduction Zone (CSZ). Aside from these deterministic methods, the new ASCE 7-16 tsunami provisions provide engineering design criteria for tsunami loads on buildings based on a probabilistic approach. This work develops a site-specific model near Newport, OR using high-resolution grids, and computes tsunami inundation depths and velocities at the study site resulting from credible probabilistic and deterministic earthquake sources in the Cascadia Subduction Zone. Three Cascadia scenarios, two deterministic (XXL1 and L1) and a 2,500-yr probabilistic scenario compliant with the new ASCE 7-16 standard, are simulated using a combination of a depth-averaged shallow-water model for offshore propagation and a Boussinesq-type model for onshore inundation. We discuss the methods and procedure used to obtain the 2,500-year probabilistic scenario for Newport that is compliant with the ASCE 7-16 tsunami provisions. We provide details of the model results, particularly the inundation depth and flow speed for a new building at Newport, Oregon, which will also be designated as a tsunami vertical evacuation shelter. We show that the ASCE 7-16-consistent hazards lie between those obtained from the deterministic L1 and XXL1 scenarios, and that the greatest impact on the building may come from later waves. As a further step, we use the inundation model results to numerically compute the tracks of large vessels in the vicinity of the building site and estimate whether these vessels would impact the building site during the extreme XXL1 and ASCE 7-16 hazard-consistent scenarios.
A two-step study is carried out: first, tracks of massless particles are computed; then, large vessels with assigned mass are tracked, accounting for drag force, inertial force, ship grounding, and mooring. The simulation results show that none of the large vessels impacts the building site in any of the tested scenarios.
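
The vessel-tracking step can be illustrated with a drag-only point model in a uniform current; all parameter values below are placeholders, not the study's, and the real model also includes inertial forces, grounding, and mooring:

```python
def advect_debris(u_flow, v0=0.0, cd=1.0, rho=1025.0, area=50.0,
                  mass=5.0e5, dt=1.0, steps=600):
    """Drift of a vessel in a uniform current u_flow (m/s) under
    quadratic drag only: m dv/dt = 0.5 * rho * Cd * A * |u - v| * (u - v).
    Returns final position (m) and velocity (m/s) after explicit
    Euler integration."""
    v, x = float(v0), 0.0
    k = 0.5 * rho * cd * area / mass   # drag coefficient per unit mass
    for _ in range(steps):
        v += k * abs(u_flow - v) * (u_flow - v) * dt
        x += v * dt
    return x, v
```

The vessel velocity relaxes toward the flow speed; heavier or less bluff vessels (larger mass, smaller Cd*A) lag the flow longer, which changes where they can be carried.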

  16. Study on the evaluation method for fault displacement based on characterized source model

    NASA Astrophysics Data System (ADS)

    Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.

    2016-12-01

    IAEA Specific Safety Guide (SSG) 9 states that probabilistic methods for evaluating fault displacement should be used if the deterministic methodology provides no sufficient basis to conclude that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX to SSG-9 on seismic hazard for nuclear facilities, which shows the utility of deterministic and probabilistic evaluation methods for fault displacement. In Japan, it is required that important nuclear facilities be established on ground where fault displacement will not arise when earthquakes occur in the future. Given these requirements, we need to develop evaluation methods for fault displacement to enhance safety in nuclear facilities. We are studying deterministic and probabilistic methods with tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, from inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of evaluation methods for fault displacement. We then show parts of the tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) that can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method that combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on the characterized source model. This research was part of the 2015 research project 'Development of evaluation methods for fault displacement' by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  17. Habitat connectivity and in-stream vegetation control temporal variability of benthic invertebrate communities.

    PubMed

    Huttunen, K-L; Mykrä, H; Oksanen, J; Astorga, A; Paavola, R; Muotka, T

    2017-05-03

    One of the key challenges to understanding patterns of β diversity is to disentangle deterministic patterns from stochastic ones. Stochastic processes may mask the influence of deterministic factors on community dynamics, hindering identification of the mechanisms causing variation in community composition. We studied temporal β diversity (among-year dissimilarity) of macroinvertebrate communities in near-pristine boreal streams across 14 years. To assess whether the observed β diversity deviates from that expected by chance, and to identify processes (deterministic vs. stochastic) through which different explanatory factors affect community variability, we used a null model approach. We observed that at the majority of sites temporal β diversity was low indicating high community stability. When stochastic variation was unaccounted for, connectivity was the only variable explaining temporal β diversity, with weakly connected sites exhibiting higher community variability through time. After accounting for stochastic effects, connectivity lost importance, suggesting that it was related to temporal β diversity via random colonization processes. Instead, β diversity was best explained by in-stream vegetation, community variability decreasing with increasing bryophyte cover. These results highlight the potential of stochastic factors to dampen the influence of deterministic processes, affecting our ability to understand and predict changes in biological communities through time.
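
The null-model logic described above, comparing observed temporal dissimilarity against randomized communities, can be sketched as a standardized effect size. The richness-preserving shuffle below is one simple randomization scheme, not necessarily the authors':

```python
import numpy as np

rng = np.random.default_rng(3)

def beta_deviation(comm_t1, comm_t2, n_null=999):
    """Standardized effect size of temporal beta diversity: observed
    Sørensen dissimilarity between two years of one site, relative to
    a null distribution built by redrawing each year's species from
    the combined pool while keeping richness fixed."""
    comm_t1 = np.asarray(comm_t1)
    comm_t2 = np.asarray(comm_t2)
    pool = np.union1d(comm_t1, comm_t2)

    def sorensen(a, b):
        shared = len(np.intersect1d(a, b))
        return 1.0 - 2.0 * shared / (len(a) + len(b))

    obs = sorensen(comm_t1, comm_t2)
    null = np.array([
        sorensen(rng.choice(pool, len(comm_t1), replace=False),
                 rng.choice(pool, len(comm_t2), replace=False))
        for _ in range(n_null)
    ])
    return (obs - null.mean()) / null.std()
```

Values near zero indicate dissimilarity indistinguishable from random colonization from the pool; strongly positive or negative values point to deterministic structuring.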

  18. Allee effects and the spatial dynamics of a locally endangered butterfly, the high brown fritillary (Argynnis adippe).

    PubMed

    Bonsall, Michael B; Dooley, Claire A; Kasparson, Anna; Brereton, Tom; Roy, David B; Thomas, Jeremy A

    2014-01-01

    Conservation of endangered species necessitates a full appreciation of the ecological processes affecting the regulation, limitation, and persistence of populations. These processes are influenced by birth, death, and dispersal events, and characterizing them requires careful accounting of both the deterministic and stochastic processes operating at both local and regional population levels. We combined ecological theory and observations on Allee effects by linking mathematical analysis and the spatial and temporal population dynamics patterns of a highly endangered butterfly, the high brown fritillary, Argynnis adippe. Our theoretical analysis showed that the role of density-dependent feedbacks in the presence of local immigration can influence the strength of Allee effects. Linking this theory to the analysis of the population data revealed strong evidence for both negative density dependence and Allee effects at the landscape or regional scale. These regional dynamics are predicted to be highly influenced by immigration. Using a Bayesian state-space approach, we characterized the local-scale births, deaths, and dispersal effects together with measurement and process uncertainty in the metapopulation. Some form of an Allee effect influenced almost three-quarters of these local populations. Our joint analysis of the deterministic and stochastic dynamics suggests that a conservation priority for this species would be to increase resource availability in currently occupied and, more importantly, in unoccupied sites.
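
A strong Allee effect of the kind analyzed above can be illustrated with a minimal discrete-time model (parameters are illustrative): per-capita growth is negative below a threshold A and positive between A and the carrying capacity K, so small populations decline to extinction while larger ones recover.

```python
def allee_step(n, r=0.1, K=500.0, A=50.0):
    """One time step of logistic growth with a strong Allee effect:
    growth is negative for n < A and positive for A < n < K."""
    return n + r * n * (1 - n / K) * (n / A - 1)

def run(n0, steps=200):
    """Iterate the map from initial abundance n0."""
    n = float(n0)
    for _ in range(steps):
        n = max(allee_step(n), 0.0)
    return n
```

This bistability is why immigration matters so much in the abstract's analysis: even a modest influx can push a local population across the threshold A, flipping it from the extinction basin into the recovery basin.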

  19. Probabilistic vs. deterministic fiber tracking and the influence of different seed regions to delineate cerebellar-thalamic fibers in deep brain stimulation.

    PubMed

    Schlaier, Juergen R; Beer, Anton L; Faltermeier, Rupert; Fellner, Claudia; Steib, Kathrin; Lange, Max; Greenlee, Mark W; Brawanski, Alexander T; Anthofer, Judith M

    2017-06-01

    This study compared tractography approaches for identifying cerebellar-thalamic fiber bundles relevant to planning target sites for deep brain stimulation (DBS). In particular, probabilistic and deterministic tracking of the dentate-rubro-thalamic tract (DRTT) and differences between the spatial courses of the DRTT and the cerebello-thalamo-cortical (CTC) tract were compared. Six patients with movement disorders were examined by magnetic resonance imaging (MRI), including two sets of diffusion-weighted images (12 and 64 directions). Probabilistic and deterministic tractography was applied on each diffusion-weighted dataset to delineate the DRTT. Results were compared with regard to their sensitivity in revealing the DRTT and additional fiber tracts and processing time. Two sets of regions-of-interest (ROIs) guided deterministic tractography of the DRTT or the CTC, respectively. Tract distances to an atlas-based reference target were compared. Probabilistic fiber tracking with 64 orientations detected the DRTT in all twelve hemispheres. Deterministic tracking detected the DRTT in nine (12 directions) and in only two (64 directions) hemispheres. Probabilistic tracking was more sensitive in detecting additional fibers (e.g. ansa lenticularis and medial forebrain bundle) than deterministic tracking. Probabilistic tracking lasted substantially longer than deterministic tracking. Deterministic tracking was more sensitive in detecting the CTC than the DRTT. CTC tracts were located adjacent but consistently more posterior to DRTT tracts. These results suggest that probabilistic tracking is more sensitive and robust in detecting the DRTT but harder to implement than deterministic approaches. Although sensitivity of deterministic tracking is higher for the CTC than the DRTT, targets for DBS based on these tracts likely differ. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  20. Enhancing the photon-extraction efficiency of site-controlled quantum dots by deterministically fabricated microlenses

    NASA Astrophysics Data System (ADS)

    Kaganskiy, Arsenty; Fischbach, Sarah; Strittmatter, André; Rodt, Sven; Heindel, Tobias; Reitzenstein, Stephan

    2018-04-01

    We report on the realization of scalable single-photon sources (SPSs) based on single site-controlled quantum dots (SCQDs) and deterministically fabricated microlenses. The fabrication process comprises the buried-stressor growth technique complemented with low-temperature in-situ electron-beam lithography for the integration of SCQDs into microlens structures with high yield and high alignment accuracy. The microlens approach leads to a broadband enhancement of the photon-extraction efficiency of up to (21 ± 2)% and a high suppression of multi-photon events with g(2)(τ = 0) < 0.06 without background subtraction. The demonstrated combination of site-controlled growth of QDs and in-situ electron-beam lithography is relevant for arrays of efficient SPSs which can be applied in photonic quantum circuits and advanced quantum computation schemes.

  1. Evaluation of Seismicity West of the Lut Block for Deterministic Seismic Hazard Assessment of Shahdad, Iran

    NASA Astrophysics Data System (ADS)

    Ney, B.; Askari, M.

    2009-04-01

    Behnoosh Neyestani and Mina Askari, students at the Science and Research University, Iran. Deterministic seismic hazard assessment has been performed for the city of Shahdad in eastern Iran, based on the available geological, seismological, and geophysical information, and four maps (Kerman, Bam, Nakhil Ab, and Allah Abad) have been prepared to indicate the deterministic estimate of peak ground acceleration (PGA) in the area; a seismic zoning map of the region has also been constructed. First, a seismotectonic map of the study region within a radius of 100 km was prepared using geological maps, the distribution of historical and instrumental earthquake data, and focal mechanism solutions; this served as the base map for delineating potential seismic sources. The minimum distance from each seismic source to the site (Shahdad) and the maximum magnitude of each source were then determined. According to the results, the peak ground acceleration at Shahdad, estimated using the Fukushima and Tanaka (1990) attenuation relationship, is 0.58 g, associated with movement of the Nayband fault at a distance of 2.4 km from the site and a maximum magnitude of Ms = 7.5.
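The reported 0.58 g can be reproduced from the commonly cited mean form of the Fukushima and Tanaka (1990) attenuation relation. The exact coefficients below are the widely quoted version (PGA in cm/s², Ms the surface-wave magnitude, R the shortest distance to the fault in km) and should be treated as an assumption of this sketch rather than as the study's implementation.

```python
import math

def fukushima_tanaka_pga(ms, r_km):
    """Mean peak ground acceleration (cm/s^2) from the commonly cited form
    of the Fukushima & Tanaka (1990) attenuation relation:
        log10(A) = 0.41*Ms - log10(R + 0.032*10**(0.41*Ms)) - 0.0034*R + 1.30
    """
    log_a = (0.41 * ms
             - math.log10(r_km + 0.032 * 10 ** (0.41 * ms))
             - 0.0034 * r_km
             + 1.30)
    return 10 ** log_a

# Nayband fault scenario from the abstract: Ms = 7.5 at R = 2.4 km.
a_cms2 = fukushima_tanaka_pga(7.5, 2.4)
pga_g = a_cms2 / 981.0
print(round(pga_g, 2))  # ≈ 0.59 g, matching the reported 0.58 g within rounding
```

The distance-saturation term 0.032·10^(0.41·Ms) is what keeps the prediction finite for near-fault sites such as this one.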

  2. Combining Deterministic structures and stochastic heterogeneity for transport modeling

    NASA Astrophysics Data System (ADS)

    Zech, Alraune; Attinger, Sabine; Dietrich, Peter; Teutsch, Georg

    2017-04-01

    Contaminant transport in highly heterogeneous aquifers is extremely challenging and a subject of current scientific debate. Tracer plumes often show non-symmetric, highly skewed plume shapes. Predicting such transport behavior using the classical advection-dispersion-equation (ADE) in combination with a stochastic description of aquifer properties requires a dense measurement network. This is in contrast to the available information for most aquifers. A new conceptual aquifer structure model is presented which combines large-scale deterministic information and the stochastic approach for incorporating sub-scale heterogeneity. The conceptual model is designed to allow for a goal-oriented, site-specific transport analysis making use of as few data as possible. The basic idea is to reproduce highly skewed tracer plumes in heterogeneous media by incorporating deterministic contrasts and effects of connectivity instead of using unimodal heterogeneous models with high variances. The conceptual model consists of deterministic blocks of mean hydraulic conductivity, which might be measured by pumping tests indicating values differing by orders of magnitude. A sub-scale heterogeneity is introduced within every block. This heterogeneity can be modeled with a bimodal or log-normal distribution. The impact of input parameters, structure, and conductivity contrasts is investigated in a systematic manner. Furthermore, a first successful implementation of the model was achieved for the well-known MADE site.
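The block-plus-sub-scale structure described here is straightforward to prototype. A minimal sketch with assumed values: two deterministic blocks whose mean conductivities differ by two orders of magnitude, each overlaid with moderate-variance log-normal sub-scale heterogeneity (the paper's bimodal option would swap in a different noise model).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical grid: 40 x 100 cells, split into two deterministic blocks
# whose mean K values (m/s, assumed) might come from pumping tests.
nx, nz = 100, 40
k_field = np.empty((nz, nx))
blocks = [((0, 50), 1e-3), ((50, 100), 1e-5)]   # (column range, mean K)
sigma_lnk = 0.5  # sub-scale variance of ln(K), far below high-variance unimodal models

for (c0, c1), k_mean in blocks:
    # log-normal multiplier with unit median preserves each block's magnitude
    noise = rng.lognormal(mean=0.0, sigma=sigma_lnk, size=(nz, c1 - c0))
    k_field[:, c0:c1] = k_mean * noise

# The deterministic contrast between blocks dominates the total variability,
# while within-block spread stays moderate.
print(np.median(k_field[:, :50]) / np.median(k_field[:, 50:]))  # roughly the 100x block contrast
```

A transport code fed with such a field would see sharp, connected contrasts, which is the mechanism the abstract proposes for reproducing skewed plumes.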

  3. Characterization of normality of chaotic systems including prediction and detection of anomalies

    NASA Astrophysics Data System (ADS)

    Engler, Joseph John

    Accurate prediction and control pervade domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented by linear, periodic, or random models. It has been shown that these systems exhibit deterministic chaos behavior. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear random or quasi-periodic. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions, manifested through rapid divergence of states initially close to one another. Due to this characterization, it has been deemed impossible to accurately predict future states of these systems for longer time scales. Fortunately, the deterministic nature of these systems allows for accurate short-term predictions, given that the dynamics of the system are well understood. This fact has been exploited in the research community and has resulted in various algorithms for short-term predictions. Detection of normality in deterministically chaotic systems is critical to understanding the system sufficiently to be able to predict future states. Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may result in bifurcation of the normal states, further complicates the problem. The detection of anomalies and prediction of future states of the chaotic system allows for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detecting anomalous behavior, and more accurately predicting future states of the system. Additionally, the ability to detect subtle system state changes is discussed.
The dissertation addresses these goals by proposing new representational techniques and novel prediction methodologies. The value and efficiency of these methods are explored in various case studies. Presented is an overview of chaotic systems with examples taken from the real world. A representation schema for rapid understanding of the various states of deterministically chaotic systems is presented. This schema is then used to detect anomalies and system state changes. Additionally, a novel prediction methodology which utilizes Lyapunov exponents to facilitate longer term prediction accuracy is presented and compared with other nonlinear prediction methodologies. These novel methodologies are then demonstrated on applications such as wind energy, cyber security and classification of social networks.
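The role Lyapunov exponents play in the prediction methodology above can be made concrete on the simplest textbook case: for the logistic map at r = 4, the largest Lyapunov exponent is known analytically to be ln 2 ≈ 0.693, so a time-average estimator can be checked against it. This is a generic illustration, not the dissertation's algorithm.

```python
import math

def logistic_lyapunov(r=4.0, x0=0.3, n_transient=1000, n_iter=100_000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1-x) by averaging log|f'(x)| = log|r*(1 - 2x)| along an orbit."""
    x = x0
    for _ in range(n_transient):            # discard transient behavior
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        d = abs(r * (1.0 - 2.0 * x))
        # guard against the measure-zero case x = 0.5, where log diverges
        total += math.log(d if d > 1e-12 else 1e-12)
        x = r * x * (1.0 - x)
    return total / n_iter

lam = logistic_lyapunov()
print(lam)  # positive, close to ln(2) ≈ 0.693 for r = 4
```

A positive exponent quantifies the "rapid divergence of states initially close to one another" described in the abstract, and its reciprocal sets the horizon over which short-term prediction remains feasible.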

  4. Comment on: Supervisory Asymmetric Deterministic Secure Quantum Communication

    NASA Astrophysics Data System (ADS)

    Kao, Shih-Hung; Tsai, Chia-Wei; Hwang, Tzonelih

    2012-12-01

    In 2010, Xiu et al. (Optics Communications 284:2065-2069, 2011) proposed several applications based on a new secure four-site distribution scheme using χ-type entangled states. This paper points out that one of these applications, namely, supervisory asymmetric deterministic secure quantum communication, is subject to an information leakage problem, in which the receiver can extract two bits of a three-bit secret message without the supervisor's permission. An enhanced protocol is proposed to resolve this problem.

  5. Controllability of Deterministic Networks with the Identical Degree Sequence

    PubMed Central

    Ma, Xiujuan; Zhao, Haixing; Wang, Binghong

    2015-01-01

    Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network depends on the network's topology. Liu, Barabási, et al. speculated that the degree distribution was one of the most important factors affecting controllability for arbitrary complex directed networks with random link weights. In this paper, we analysed the effect of degree distribution on the controllability of unweighted, undirected deterministic networks. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analysed the controllability of two such deterministic networks (the (1, 3)-flower and the (2, 2)-flower) in detail using exact controllability theory and give exact results for the minimum number of driver nodes for the two networks. In simulation, we compare the controllability of (x,y)-flower networks. Our results show that the family of (x,y)-flower networks have the same degree sequence, but their controllability is totally different. The degree distribution itself is therefore not sufficient to characterize the controllability of unweighted, undirected deterministic networks. PMID:26020920
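Exact controllability theory, as used in this abstract, reduces for an undirected, unweighted network to computing the maximum multiplicity among the eigenvalues of the adjacency matrix (geometric and algebraic multiplicities coincide for symmetric matrices). A small sketch on toy graphs, not the (x,y)-flower construction itself:

```python
import numpy as np

def min_driver_nodes(adj, tol=1e-8):
    """Minimum number of driver nodes N_D for an undirected, unweighted
    network under exact controllability theory: the maximum multiplicity
    of the eigenvalues of the adjacency matrix."""
    eigvals = np.linalg.eigvalsh(np.asarray(adj, dtype=float))  # sorted ascending
    multiplicity, best = 1, 1
    for i in range(1, len(eigvals)):
        # group eigenvalues that coincide up to the tolerance
        multiplicity = multiplicity + 1 if eigvals[i] - eigvals[i - 1] < tol else 1
        best = max(best, multiplicity)
    return best

# Star graph on 4 nodes: eigenvalues are ±sqrt(3) and 0 (multiplicity 2),
# so two driver nodes are needed despite only one hub.
star = [[0, 1, 1, 1],
        [1, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 0, 0]]
print(min_driver_nodes(star))  # 2
```

A 3-node path, by contrast, has three distinct eigenvalues and needs only a single driver node, which illustrates how topology rather than degree counts alone governs N_D.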

  6. Quantifying diffusion MRI tractography of the corticospinal tract in brain tumors with deterministic and probabilistic methods☆

    PubMed Central

    Bucci, Monica; Mandelli, Maria Luisa; Berman, Jeffrey I.; Amirbekian, Bagrat; Nguyen, Christopher; Berger, Mitchel S.; Henry, Roland G.

    2013-01-01

    Introduction Diffusion MRI tractography has been increasingly used to delineate white matter pathways in vivo for which the leading clinical application is presurgical mapping of eloquent regions. However, there is rare opportunity to quantify the accuracy or sensitivity of these approaches to delineate white matter fiber pathways in vivo due to the lack of a gold standard. Intraoperative electrical stimulation (IES) provides a gold standard for the location and existence of functional motor pathways that can be used to determine the accuracy and sensitivity of fiber tracking algorithms. In this study we used intraoperative stimulation from brain tumor patients as a gold standard to estimate the sensitivity and accuracy of diffusion tensor MRI (DTI) and q-ball models of diffusion with deterministic and probabilistic fiber tracking algorithms for delineation of motor pathways. Methods We used preoperative high angular resolution diffusion MRI (HARDI) data (55 directions, b = 2000 s/mm2) acquired in a clinically feasible time frame from 12 patients who underwent a craniotomy for resection of a cerebral glioma. The corticospinal fiber tracts were delineated with DTI and q-ball models using deterministic and probabilistic algorithms. We used cortical and white matter IES sites as a gold standard for the presence and location of functional motor pathways. Sensitivity was defined as the true positive rate of delineating fiber pathways based on cortical IES stimulation sites. For accuracy and precision of the course of the fiber tracts, we measured the distance between the subcortical stimulation sites and the tractography result. Positive predictive rate of the delineated tracts was assessed by comparison of subcortical IES motor function (upper extremity, lower extremity, face) with the connection of the tractography pathway in the motor cortex. Results We obtained 21 cortical and 8 subcortical IES sites from intraoperative mapping of motor pathways. 
    Probabilistic q-ball had the best sensitivity (79%) as determined from cortical IES compared to deterministic q-ball (50%), probabilistic DTI (36%), and deterministic DTI (10%). The sensitivity using the q-ball algorithm (65%) was significantly higher than using DTI (23%) (p < 0.001) and the probabilistic algorithms (58%) were more sensitive than deterministic approaches (30%) (p = 0.003). Probabilistic q-ball fiber tracks had the smallest offset to the subcortical stimulation sites. The offsets between diffusion fiber tracks and subcortical IES sites were increased significantly for those cases where the diffusion fiber tracks were visibly thinner than expected. There was perfect concordance between the subcortical IES function (e.g. hand stimulation) and the cortical connection of the nearest diffusion fiber track (e.g. upper extremity cortex). Discussion This study highlights the tremendous utility of intraoperative stimulation sites to provide a gold standard from which to evaluate diffusion MRI fiber tracking methods and has provided an objective standard for evaluation of different diffusion models and approaches to fiber tracking. The probabilistic q-ball fiber tractography was significantly better than DTI methods in terms of sensitivity and accuracy of the course through the white matter. The commonly used DTI fiber tracking approach was shown to have very poor sensitivity (as low as 10% for deterministic DTI fiber tracking) for delineation of the lateral aspects of the corticospinal tract in our study. Effects of the tumor/edema resulted in significantly larger offsets between the subcortical IES and the preoperative fiber tracks. The provided data show that probabilistic HARDI tractography is the most objective and reproducible analysis but, given the small sample and number of stimulation points, generalization of our results should be made with caution.
Indeed our results inform the capabilities of preoperative diffusion fiber tracking and indicate that such data should be used carefully when making pre-surgical and intra-operative management decisions. PMID:24273719

  7. Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series

    NASA Astrophysics Data System (ADS)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2018-07-01

    In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics, but they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of the fractal processes in the wavelet domain. This method has been validated over simulated signals and over real signals with economic and biological origin. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems, and uncovering interesting patterns present in time series.

  8. Calibration of a distributed routing rainfall-runoff model at four urban sites near Miami, Florida

    USGS Publications Warehouse

    Doyle, W. Harry; Miller, Jeffrey E.

    1980-01-01

    Urban stormwater data from four Miami, Fla. catchments were collected and compiled by the U.S. Geological Survey and were used for testing the applicability of deterministic modeling for characterizing stormwater flows from small land-use areas. A description of model calibration and verification is presented for: (1) A 40.8 acre single-family residential area, (2) a 58.3-acre highway area, (3) a 20.4-acre commercial area, and (4) a 14.7-acre multifamily residential area. Rainfall-runoff data for 80, 108, 114, and 52 storms at sites, 1, 2, 3, and 4, respectively, were collected, analyzed, and stored on direct-access files. Rainfall and runoff data for these storms (at 1-minute time intervals) were used in flow-modeling simulation analyses. A distributed routing Geological Survey rainfall-runoff model was used to determine rainfall excess and route overland and channel flows at each site. Optimization of soil-moisture- accounting and infiltration parameters was performed during the calibration phases. The results of this study showed that, with qualifications, an acceptable verification of the Geological Survey model can be achieved. (Kosco-USGS)
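The USGS distributed routing model performs soil-moisture accounting, infiltration, and overland/channel routing; as a much-reduced illustration of the routing idea alone, a single linear reservoir (outflow proportional to storage) already turns a rainfall-excess pulse into an attenuated, delayed hydrograph. All parameter values below are assumed for illustration.

```python
def route_linear_reservoir(excess_in, k=10.0, dt=1.0):
    """Route a rainfall-excess series (units/min) through a linear
    reservoir with storage coefficient k (min) at time step dt (min).
    Explicit mass-balance update: dS/dt = I - Q, with Q = S / k."""
    storage, outflow = 0.0, []
    for inflow in excess_in:
        storage += (inflow - storage / k) * dt   # stable since dt/k < 1
        outflow.append(storage / k)
    return outflow

# A 5-minute burst of rainfall excess followed by 55 minutes of recession.
hyetograph = [2.0] * 5 + [0.0] * 55
hydrograph = route_linear_reservoir(hyetograph)
peak = max(hydrograph)
print(peak, hydrograph.index(peak))  # peak discharge occurs at the end of the burst
```

The exponential-looking recession after the burst is the signature behavior that calibration of storage and infiltration parameters (as in the abstract) tunes against observed storm hydrographs.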

  9. Dynamics of pollutant discharge in combined sewer systems during rain events: chance or determinism?

    PubMed

    Hannouche, A; Chebbo, G; Joannis, C

    2014-01-01

    A large database of continuous flow and turbidity measurements, cumulating data on hundreds of rain events and dry weather days from two sites in Paris (called Quais and Clichy) and one in Lyon (called Ecully), is presented. This database is used to characterize and compare the behaviour of the three sites at the inter-event scale. The analysis considers three variables: total volumes, total suspended solids (TSS) masses, and concentrations during both wet and dry weather periods, in addition to the contributions of diverse-origin sources to event flow volume and TSS load values. The results obtained confirm previous findings regarding the spatial consistency of TSS fluxes and concentrations between the two sites in Paris, which have similar land uses. Moreover, masses and concentrations are correlated between the Parisian sites, implying that some deterministic processes are reproducible from one catchment to another for a given rain event. The results also demonstrate the importance of the contribution of wastewater and sewer deposits to total event loads and show that such contributions are not specific to Paris sewer networks.

  10. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead-based applications

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-04-01

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead-encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin-biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results evidenced that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules.

  11. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead-based applications.

    PubMed

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-04-10

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead-encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin-biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results evidenced that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules.

  12. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead–based applications

    PubMed Central

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-01-01

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead–encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, a bead-based streptavidin–biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results evidenced that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules. PMID:28393911

  13. Sampled-Data Consensus of Linear Multi-agent Systems With Packet Losses.

    PubMed

    Zhang, Wenbing; Tang, Yang; Huang, Tingwen; Kurths, Jurgen

    In this paper, the consensus problem is studied for a class of multi-agent systems with sampled data and packet losses, where random and deterministic packet losses are considered, respectively. For random packet losses, a Bernoulli-distributed white sequence is used to describe packet dropouts among agents in a stochastic way. For deterministic packet losses, a switched system with stable and unstable subsystems is employed to model packet dropouts in a deterministic way. The purpose of this paper is to derive consensus criteria, such that linear multi-agent systems with sampled data and packet losses can reach consensus. By means of the Lyapunov function approach and the decomposition method, the design problem of a distributed controller is solved in terms of convex optimization. The interplay among the allowable bound of the sampling interval, the probability of random packet losses, and the rate of deterministic packet losses is explicitly derived to characterize consensus conditions. The obtained criteria are closely related to the maximum eigenvalue of the Laplacian matrix versus the second minimum eigenvalue of the Laplacian matrix, which reveals the intrinsic effect of communication topologies on consensus performance. Finally, simulations are given to show the effectiveness of the proposed results.
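The stochastic (Bernoulli) packet-loss model in this abstract is simple to simulate. Below is a toy sketch, not the paper's controller design: four agents on a ring run a sampled-data consensus update, and each directed link independently drops its packet with probability 0.3 at every sampling instant.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p_loss, eps, steps = 4, 0.3, 0.2, 400
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)   # ring topology
x = rng.normal(size=n)                       # initial agent states

for _ in range(steps):
    # Bernoulli losses: each directed link survives this sample with prob 1 - p
    delivered = A * (rng.random((n, n)) > p_loss)
    # consensus protocol on the links that got through:
    # x_i <- x_i + eps * sum_j a_ij (x_j - x_i)
    x = x + eps * (delivered @ x - delivered.sum(axis=1) * x)

print(x.max() - x.min())  # near machine precision: agents have reached consensus
```

Each sampled update is a row-stochastic map with positive diagonal (eps times the maximum degree stays below 1), so despite the random dropouts the state spread contracts and consensus is reached almost surely, which is the qualitative behavior the paper's criteria make precise.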

  14. Immersion freezing of supermicron mineral dust particles: freezing results, testing different schemes for describing ice nucleation, and ice nucleation active site densities.

    PubMed

    Wheeler, M J; Mason, R H; Steunenberg, K; Wagstaff, M; Chou, C; Bertram, A K

    2015-05-14

    Ice nucleation on mineral dust particles is known to be an important process in the atmosphere. To accurately implement ice nucleation on mineral dust particles in atmospheric simulations, a suitable theory or scheme is desirable to describe laboratory freezing data in atmospheric models. In the following, we investigated ice nucleation by supermicron mineral dust particles [kaolinite and Arizona Test Dust (ATD)] in the immersion mode. The median freezing temperature for ATD was measured to be approximately -30 °C compared with approximately -36 °C for kaolinite. The freezing results were then used to test four different schemes previously used to describe ice nucleation in atmospheric models. In terms of ability to fit the data (quantified by calculating the reduced chi-squared values), the following order was found for ATD (from best to worst): active site, pdf-α, deterministic, single-α. For kaolinite, the following order was found (from best to worst): active site, deterministic, pdf-α, single-α. The variation in the predicted median freezing temperature per decade change in the cooling rate for each of the schemes was also compared with experimental results from other studies. The deterministic model predicts the median freezing temperature to be independent of cooling rate, while experimental results show a weak dependence on cooling rate. The single-α, pdf-α, and active site schemes all agree with the experimental results within roughly a factor of 2. On the basis of our results and previous results where different schemes were tested, the active site scheme is recommended for describing the freezing of ATD and kaolinite particles. We also used our ice nucleation results to determine the ice nucleation active site (INAS) density for the supermicron dust particles tested. 
Using the data, we show that the INAS densities of supermicron kaolinite and ATD particles studied here are smaller than the INAS densities of submicron kaolinite and ATD particles previously reported in the literature.
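The INAS-density metric used in the final step has a standard time-independent definition relating the frozen fraction of droplets to the mineral-dust surface area per droplet. A minimal sketch with hypothetical numbers (not values from this study):

```python
import math

def inas_density(frozen_fraction, surface_area_cm2):
    """Ice nucleation active site (INAS) density n_s (sites per cm^2) from
    the standard time-independent relation n_s = -ln(1 - f_ice) / A, where
    f_ice is the frozen fraction of droplets and A the particle surface
    area per droplet."""
    return -math.log(1.0 - frozen_fraction) / surface_area_cm2

# Hypothetical illustration: half the droplets frozen at some temperature,
# with 1e-5 cm^2 of dust surface per droplet.
print(inas_density(0.5, 1e-5))  # ln(2) / 1e-5 ≈ 6.9e4 sites per cm^2
```

Because n_s normalizes by surface area, it is the natural quantity for the size comparison in the abstract: supermicron and submicron particles of the same dust can be compared on a per-cm² basis.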

  15. Boise Hydrogeophysical Research Site: Control Volume/Test Cell and Community Research Asset

    NASA Astrophysics Data System (ADS)

    Barrash, W.; Bradford, J.; Malama, B.

    2008-12-01

    The Boise Hydrogeophysical Research Site (BHRS) is a research wellfield or field-scale test facility developed in a shallow, coarse, fluvial aquifer with the objectives of supporting: (a) development of cost-effective, non- or minimally-invasive quantitative characterization and imaging methods in heterogeneous aquifers using hydrologic and geophysical techniques; (b) examination of fundamental relationships and processes at multiple scales; (c) testing theories and models for groundwater flow and solute transport; and (d) education and training of students in multidisciplinary subsurface science and engineering. The design of the wells and the wellfield support modular use and reoccupation of wells for a wide range of single-well, cross-hole, multiwell and multilevel hydrologic, geophysical, and combined hydrologic-geophysical experiments. Efforts to date by Boise State researchers and collaborators have been largely focused on: (a) establishing the 3D distributions of geologic, hydrologic, and geophysical parameters which can then be used as the basis for jointly inverting hard and soft data to return the 3D K distribution and (b) developing subsurface measurement and imaging methods including tomographic characterization and imaging methods. At this point the hydrostratigraphic framework of the BHRS is known to be a hierarchical multi-scale system which includes layers and lenses that are recognized with geologic, hydrologic, radar, seismic, and EM methods; details are now emerging which may allow 3D deterministic characterization of zones and/or material variations at the meter scale in the central wellfield. Also the site design and subsurface framework have supported a variety of testing configurations for joint hydrologic and geophysical experiments. 
Going forward we recognize the opportunity to increase the R&D returns from use of the BHRS with additional infrastructure (especially for monitoring the vadose zone and surface water-groundwater interactions), more collaborative activity, and greater access to site data. Our broader goal of becoming more available as a research asset for the scientific community also supports the long-term business plan of increasing funding opportunities to maintain and operate the site.

  16. Reliability analysis of hydrologic containment of liquefied petroleum gas within unlined rock caverns.

    NASA Astrophysics Data System (ADS)

    Gao, X.; Yan, E. C.; Yeh, T. C. J.; Wang, Y.; Liang, Y.; Hao, Y.

    2017-12-01

Most underground liquefied petroleum gas (LPG) storage caverns are constructed as unlined rock caverns (URCs), where the variability of hydraulic properties (in particular, hydraulic conductivity) has significant impacts on hydrologic containment performance. However, it is practically impossible to characterize the spatial distribution of these properties in detail at URC sites. This dilemma forces us to cope with uncertainty in our evaluations of gas containment. As a consequence, uncertainty-based analysis is deemed more appropriate than traditional deterministic analysis. The objectives of this paper are 1) to introduce a numerical first order method to calculate the gas containment reliability within a heterogeneous, two-dimensional unlined rock cavern, and 2) to suggest a strategy for improving the gas containment reliability. To achieve these goals, we first introduced the stochastic continuum representation of the saturated hydraulic conductivity (Ks) of fractured rock and analyzed the spatial variability of Ks at a field site. We then conducted deterministic simulations to demonstrate the importance of the heterogeneity of Ks in the analysis of the gas tightness performance of URCs. Considering the uncertainty of this heterogeneity in real-world situations, we subsequently developed a numerical first order method (NFOM) to determine the gas tightness reliability at crucial locations of URCs. Using the NFOM, the effect of the spatial variability of Ks on gas tightness reliability was investigated. Results show that as the variance or spatial-structure anisotropy of Ks increases, the gas tightness reliability at most crucial locations decreases. Meanwhile, we compared the results of the NFOM with those of Monte Carlo simulation and found that the accuracy of the NFOM is mainly affected by the magnitude of the variance of Ks. 
Finally, to improve gas containment reliability at crucial locations at this study site, we suggest installing vertical water-curtain holes in the pillar rather than increasing the density of horizontal water-curtain boreholes.
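The first-order propagation the abstract describes can be sketched as follows. This is an illustrative first-order second-moment calculation with a hypothetical linearized containment-margin function (the paper's actual performance function and site parameters are not given here), cross-checked against Monte Carlo as the authors do:

```python
import numpy as np
from math import erf, sqrt

def first_order_reliability(g, mu, cov, eps=1e-4):
    """First-order (FOSM-style) reliability estimate.
    g   : performance function; g(x) > 0 means gas is contained
    mu  : mean vector of the uncertain inputs (e.g. zonal log10 Ks)
    cov : covariance matrix of those inputs
    """
    mu = np.asarray(mu, float)
    cov = np.asarray(cov, float)
    g0 = g(mu)
    grad = np.empty_like(mu)
    for i in range(len(mu)):                      # central-difference sensitivities
        d = np.zeros_like(mu); d[i] = eps
        grad[i] = (g(mu + d) - g(mu - d)) / (2 * eps)
    var_g = grad @ cov @ grad                     # first-order variance propagation
    beta = g0 / sqrt(var_g)                       # reliability index
    p_fail = 0.5 * (1 - erf(beta / sqrt(2)))      # standard-normal tail probability
    return beta, p_fail

# Hypothetical linearized containment margin in two log-conductivity zones
g = lambda x: 1.5 - (0.8 * x[0] + 0.3 * x[1])
mu = [1.0, 0.5]
cov = [[0.2, 0.02], [0.02, 0.1]]
beta, p_fail = first_order_reliability(g, mu, cov)

# Monte Carlo cross-check (exact here because g is linear and inputs Gaussian)
rng = np.random.default_rng(0)
xs = rng.multivariate_normal(mu, cov, 200_000)
p_fail_mc = np.mean(xs @ np.array([0.8, 0.3]) >= 1.5)
```

For a linear performance function of Gaussian inputs the two estimates agree; the abstract's finding that NFOM accuracy degrades with increasing variance of Ks reflects the error of the first-order Taylor truncation for the nonlinear groundwater-flow response.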

  17. Tracing of paleo-shear zones by self-potential data inversion: case studies from the KTB, Rittsteig, and Grossensees graphite-bearing fault planes

    NASA Astrophysics Data System (ADS)

    Mehanee, Salah A.

    2015-01-01

This paper describes a new method for tracing paleo-shear zones of the continental crust by self-potential (SP) data inversion. The method falls within the deterministic inversion framework, and it is exclusively applicable for the interpretation of the SP anomalies measured along a profile over sheet-type structures such as conductive thin films of interconnected graphite precipitations formed on shear planes. The inverse method fits a residual SP anomaly by a single thin sheet and recovers the characteristic parameters (depth to the top h, extension in depth a, amplitude coefficient k, and amount and direction of dip θ) of the sheet. This method minimizes an objective functional in the space of the logarithmed and non-logarithmed model parameters (log(h), log(a), log(k), and θ) successively by the steepest descent (SD) and Gauss-Newton (GN) techniques in order to maintain the stability and convergence of the inverse method. Prior to applying the method to real data, its accuracy, convergence, and stability are successfully verified on numerical examples with and without noise. The method is then applied to SP profiles from the German Continental Deep Drilling Program (Kontinentales Tiefbohrprogramm der Bundesrepublik Deutschland - KTB), Rittsteig, and Grossensees sites in Germany for tracing paleo-shear planes coated with graphitic deposits. Comparisons of the geologic sections constructed in this paper (based on the proposed deterministic approach) against the existing published interpretations (obtained by trial-and-error modeling) for the SP data of the KTB and Rittsteig sites reveal that the deterministic approach suggests new details of geological significance. The findings of the proposed inverse scheme are supported by available drilling and other geophysical data. 
Furthermore, the real SP data of the Grossensees site have been interpreted (apparently for the first time) by the deterministic inverse scheme, from which interpretive geologic cross sections are suggested. The computational efficiency, analysis of the numerical examples investigated, and comparisons of the real data inverted here demonstrate that the developed deterministic approach is advantageous over existing interpretation methods and is suitable for meaningful interpretation of SP data acquired elsewhere over graphitic occurrences on fault planes.
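The Gauss-Newton stage in log-parameter space can be sketched with a widely used closed-form expression for the SP anomaly of an inclined thin polarized sheet (the difference of logarithmic potentials of its two edges). The forward model, starting values, and synthetic "observed" data below are illustrative assumptions, not the paper's actual formulation, and the preliminary SD stage is omitted:

```python
import numpy as np

def sp_sheet(x, h, a, k, theta):
    """SP anomaly of a 2-D inclined thin sheet: difference of logarithmic
    potentials of the top edge (at depth h below x = 0) and the bottom edge."""
    xb, zb = a * np.cos(theta), h + a * np.sin(theta)
    return k * np.log((x**2 + h**2) / ((x - xb)**2 + zb**2))

def invert(x, v_obs, p0, n_iter=60):
    """Damped Gauss-Newton in (log h, log a, log k, theta), mirroring the
    logarithmed-parameter strategy described in the abstract."""
    q = np.array([np.log(p0[0]), np.log(p0[1]), np.log(p0[2]), p0[3]])
    fwd = lambda q: sp_sheet(x, np.exp(q[0]), np.exp(q[1]), np.exp(q[2]), q[3])
    for _ in range(n_iter):
        r = v_obs - fwd(q)
        J = np.empty((x.size, 4))
        for i in range(4):                        # numerical Jacobian
            d = np.zeros(4); d[i] = 1e-6
            J[:, i] = (fwd(q + d) - fwd(q - d)) / 2e-6
        dq = np.linalg.lstsq(J, r, rcond=None)[0]
        for _ in range(25):                       # step halving for stability
            if np.linalg.norm(v_obs - fwd(q + dq)) <= np.linalg.norm(r):
                break
            dq /= 2
        q = q + dq
    return np.exp(q[0]), np.exp(q[1]), np.exp(q[2]), q[3]

x = np.linspace(-100.0, 100.0, 201)
v = sp_sheet(x, 10.0, 40.0, 25.0, np.deg2rad(60))   # synthetic 'observed' anomaly
h, a, k, theta = invert(x, v, p0=(15.0, 30.0, 10.0, np.deg2rad(45)))
```

Optimizing log(h), log(a), and log(k) rather than the raw parameters keeps the positivity constraints implicit and tends to even out the parameter sensitivities, which is the stability motivation the abstract cites.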

  18. Deterministic strain-induced arrays of quantum emitters in a two-dimensional semiconductor

    PubMed Central

    Branny, Artur; Kumar, Santosh; Proux, Raphaël; Gerardot, Brian D

    2017-01-01

An outstanding challenge in quantum photonics is scalability, which requires positioning single quantum emitters in a deterministic fashion. Progress in site positioning has been made in established platforms, including defects in diamond and self-assembled quantum dots, albeit often with compromised coherence and optical quality. The emergence of single quantum emitters in layered transition metal dichalcogenide semiconductors offers new opportunities to construct a scalable quantum architecture. Here, using nanoscale strain engineering, we deterministically achieve a two-dimensional lattice of quantum emitters in an atomically thin semiconductor. We create point-like strain perturbations in mono- and bi-layer WSe2 which locally modify the band-gap, leading to efficient funnelling of excitons towards isolated strain-tuned quantum emitters that exhibit high-purity single photon emission. We achieve near-unity emitter creation probability and a mean positioning accuracy of 120±32 nm, which may be improved with further optimization of the nanopillar dimensions. PMID:28530219

  19. Deterministic generation of multiparticle entanglement by quantum Zeno dynamics.

    PubMed

    Barontini, Giovanni; Hohmann, Leander; Haas, Florian; Estève, Jérôme; Reichel, Jakob

    2015-09-18

    Multiparticle entangled quantum states, a key resource in quantum-enhanced metrology and computing, are usually generated by coherent operations exclusively. However, unusual forms of quantum dynamics can be obtained when environment coupling is used as part of the state generation. In this work, we used quantum Zeno dynamics (QZD), based on nondestructive measurement with an optical microcavity, to deterministically generate different multiparticle entangled states in an ensemble of 36 qubit atoms in less than 5 microseconds. We characterized the resulting states by performing quantum tomography, yielding a time-resolved account of the entanglement generation. In addition, we studied the dependence of quantum states on measurement strength and quantified the depth of entanglement. Our results show that QZD is a versatile tool for fast and deterministic entanglement generation in quantum engineering applications. Copyright © 2015, American Association for the Advancement of Science.

  20. Elliptical quantum dots as on-demand single photons sources with deterministic polarization states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teng, Chu-Hsiang; Demory, Brandon; Ku, Pei-Cheng, E-mail: peicheng@umich.edu

    In quantum information, control of the single photon's polarization is essential. Here, we demonstrate single photon generation in a pre-programmed and deterministic polarization state, on a chip-scale platform, utilizing site-controlled elliptical quantum dots (QDs) synthesized by a top-down approach. The polarization from the QD emission is found to be linear with a high degree of linear polarization and parallel to the long axis of the ellipse. Single photon emission with orthogonal polarizations is achieved, and the dependence of the degree of linear polarization on the QD geometry is analyzed.

  1. Seismic stability of the survey areas of potential sites for the deep geological repository of the spent nuclear fuel

    NASA Astrophysics Data System (ADS)

    Kaláb, Zdeněk; Šílený, Jan; Lednická, Markéta

    2017-07-01

This paper deals with the seismic stability of the survey areas of potential sites for the deep geological repository of spent nuclear fuel in the Czech Republic. The basic source of data for historical earthquakes up to 1990 was the seismic website [1-]. The most intense earthquake described in the historical period occurred on September 15, 1590 in the Niederroesterreich region (Austria); its reported intensity is Io = 8-9. The source of contemporary seismic data for the period from 1991 to the end of 2014 was the website [11]. Based on the databases and literature review, it may be stated that in the period from 1900, no earthquake exceeding magnitude 5.1 originated in the territory of the Czech Republic. In order to evaluate seismicity and to assess the impact of seismic effects at the depths of a hypothetical deep geological repository over the next time period, the neo-deterministic method was selected as an extension of the probabilistic method. Each of the seven survey areas was assessed by neo-deterministic evaluation of the seismic wave field excited by selected individual events, determining the maximum loading. Results of the seismological database studies and the neo-deterministic analysis of the Čihadlo locality are presented.

  2. Predicting Lg Coda Using Synthetic Seismograms and Media With Stochastic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Tibuleac, I. M.; Stroujkova, A.; Bonner, J. L.; Mayeda, K.

    2005-12-01

Recent examinations of the characteristics of coda-derived Sn and Lg spectra for yield estimation have shown that the spectral peak of Nevada Test Site (NTS) explosion spectra is depth-of-burial dependent, and that this peak is shifted to higher frequencies for Lop Nor explosions at the same depths. To confidently use coda-based yield formulas, we need to understand and predict coda spectral shape variations with depth, source media, velocity structure, topography, and geological heterogeneity. We present results of a coda modeling study to predict Lg coda. During the initial stages of this research, we acquired and parameterized a deterministic 6 deg. x 6 deg. velocity and attenuation model centered on the Nevada Test Site. Near-source data are used to constrain density and attenuation profiles for the upper five km. The upper-crust velocity profiles are quilted into a background velocity profile at depths greater than five km. The model is parameterized for use in a modified version of the Generalized Fourier Method in two dimensions (GFM2D). We modify this model to include stochastic heterogeneities of varying correlation lengths within the crust. Correlation length, Hurst number, and fractional velocity perturbation of the heterogeneities are used to construct different realizations of the random media. We use nuclear explosion and earthquake cluster waveform analysis, as well as well-log and geological information, to constrain the stochastic parameters for a path between the NTS and the seismic stations near Mina, Nevada. Using multiple runs, we quantify the effects of variations in the stochastic parameters, of heterogeneity location in the crust, and of attenuation on coda amplitude and spectral characteristics. We calibrate these parameters by matching synthetic earthquake Lg coda envelopes to coda envelopes of local earthquakes with well-defined moments and mechanisms. 
We generate explosion synthetics for these calibrated deterministic and stochastic models. Secondary effects, including a compensated linear vector dipole source, are superposed on the synthetics in order to adequately characterize the Lg generation. We use this technique to characterize the effects of depth of burial on the coda spectral shapes.
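Random media parameterized by a correlation length and a Hurst number, as described above, are commonly realized by filtering white noise with the square root of a von Karman power spectrum. The sketch below assumes that construction in 2-D with hypothetical grid and medium parameters (the study's actual values and filtering scheme are not reproduced here):

```python
import numpy as np

def von_karman_field(n, dx, corr_len, hurst, sigma, seed=0):
    """2-D random medium with a von Karman-type spectrum: shape white noise
    in the wavenumber domain, then scale to a target fractional perturbation."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n, dx) * 2 * np.pi          # angular wavenumbers
    KX, KY = np.meshgrid(k, k, indexing="ij")
    k2 = KX**2 + KY**2
    psd = (1.0 + k2 * corr_len**2) ** (-(hurst + 1.0))  # 2-D von Karman PSD shape
    noise = rng.standard_normal((n, n))
    field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(psd)).real
    field *= sigma / field.std()                   # enforce target std deviation
    return field

# Hypothetical parameters: 100 m grid, 1 km correlation length, 5% perturbation
dv = von_karman_field(n=256, dx=100.0, corr_len=1000.0, hurst=0.3, sigma=0.05)
v = 3500.0 * (1.0 + dv)    # e.g. perturb a 3.5 km/s background crustal velocity
```

Varying `corr_len`, `hurst`, and `sigma` and regenerating with different seeds gives the "different realizations of the random media" used in the multiple-run comparisons.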

  3. Tularosa Basin Play Fairway Analysis

    DOE Data Explorer

    Adam Brandt

    2015-11-15

This submission contains several shapefiles used for a deterministic PFA, as well as a heat composite risk segment with union overlay, and training sites used for weights of evidence. More detailed metadata can be found in the individual files.

  4. A grammar inference approach for predicting kinase specific phosphorylation sites.

    PubMed

    Datta, Sutapa; Mukhopadhyay, Subhasis

    2015-01-01

Kinase-mediated phosphorylation is a key post-translational modification that plays an important role in regulating various cellular processes and phenotypes. Many diseases, such as cancer, are related to signaling defects associated with protein phosphorylation. Characterizing protein kinases and their substrates enhances our ability to understand the mechanism of protein phosphorylation and extends our knowledge of signaling networks, thereby helping us to treat such diseases. Experimental methods for identifying phosphorylation sites are labour intensive and expensive. Also, the manifold increase of protein sequences in the databanks over the years necessitates improved high-speed and accurate computational methods for predicting phosphorylation sites in protein sequences. To date, a number of computational methods have been proposed by various researchers for predicting phosphorylation sites, but there remains much scope for improvement. In this communication, we present a simple and novel method based on a Grammatical Inference (GI) approach to automate the prediction of kinase-specific phosphorylation sites. In this regard, we have used the popular GI algorithm Alergia to infer Deterministic Stochastic Finite State Automata (DSFA), which equivalently represent the regular grammar corresponding to the phosphorylation sites. Extensive experiments on several datasets generated by us reveal that our inferred grammar successfully predicts phosphorylation sites in a kinase-specific manner. It performs significantly better when compared with other existing phosphorylation site prediction methods. We have also compared our inferred DSFA with two other GI inference algorithms. The DSFA generated by our method performs better, which indicates that our method is robust and has potential for predicting phosphorylation sites in a kinase-specific manner.
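The core of Alergia is a state-merging procedure over a frequency prefix tree acceptor, where two states may be merged if their termination and outgoing-symbol frequencies are statistically indistinguishable under a Hoeffding bound. The sketch below shows just that compatibility test on toy residue windows (the alphabet, the data, and the simplifications are assumptions for illustration; the full algorithm also performs the recursive merging and determinization):

```python
from math import sqrt, log

def build_fpta(sequences):
    """Frequency prefix tree acceptor: each node records how many training
    strings reached it (n), how many ended there (end), and its children."""
    root = {"n": 0, "end": 0, "kids": {}}
    for seq in sequences:
        node = root
        node["n"] += 1
        for s in seq:
            node = node["kids"].setdefault(s, {"n": 0, "end": 0, "kids": {}})
            node["n"] += 1
        node["end"] += 1
    return root

def differ(c1, n1, c2, n2, alpha=0.05):
    """Alergia's Hoeffding-bound test: do two observed frequencies differ
    significantly at confidence parameter alpha?"""
    bound = (sqrt(1 / n1) + sqrt(1 / n2)) * sqrt(0.5 * log(2 / alpha))
    return abs(c1 / n1 - c2 / n2) > bound

def compatible(a, b, alpha=0.05):
    """Recursively decide whether two FPTA states may be merged."""
    if a["n"] == 0 or b["n"] == 0:
        return True
    if differ(a["end"], a["n"], b["end"], b["n"], alpha):
        return False
    for s in set(a["kids"]) | set(b["kids"]):
        ca, cb = a["kids"].get(s), b["kids"].get(s)
        if differ(ca["n"] if ca else 0, a["n"], cb["n"] if cb else 0, b["n"], alpha):
            return False
        if ca and cb and not compatible(ca, cb, alpha):
            return False
    return True

# Hypothetical toy windows around candidate sites (S/T acceptor residue,
# P = proline, K = lysine) -- not real training data from the paper
windows = ["SP", "SPK", "TP", "TPK"] * 25
fpta = build_fpta(windows)
# The states reached after 'S' and 'T' behave identically here, so the
# Alergia test allows merging them into a single automaton state
merge_ok = compatible(fpta["kids"]["S"], fpta["kids"]["T"])
```

Merging statistically equivalent states is what compresses the prefix tree into a compact DSFA whose transition probabilities define the inferred stochastic regular grammar.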

  5. Realistic Simulation for Body Area and Body-To-Body Networks

    PubMed Central

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D’Errico, Raffaele

    2016-01-01

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices. PMID:27104537

  6. Realistic Simulation for Body Area and Body-To-Body Networks.

    PubMed

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D'Errico, Raffaele

    2016-04-20

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices.

  7. Seismic hazard study for selected sites in New Mexico and Nevada

    NASA Astrophysics Data System (ADS)

    Johnston, J. C.

    1983-12-01

Seismic hazard evaluations were conducted for specific sites in New Mexico and Nevada. For New Mexico, a model of seismicity was developed from historical accounts of medium to large shocks and the current microactivity record from local networks. Ninety-percent-confidence ground motions at Albuquerque and Roswell were computed to be 56 gals for a 10-year period and 77 gals for a 20-year period. Computed ground motions for Clovis were below these values. Peak velocity and displacement were also computed for each site. Deterministic spectra based on the estimated maximum credible earthquake for the zones the sites occupy were also computed. For the sites in Nevada, the regionalizations used in Battis (1982) for the uniform seismicity model were slightly modified. For 10- and 20-year time periods, peak acceleration values were computed to be 94 gals and 123 gals for Indian Springs, and 206 gals and 268 gals for Hawthorne. Deterministic spectra were also computed. The input parameters were well determined for the analysis of the Nevada sites because of the abundance of data. The values computed for New Mexico, however, are likely upper limits. As more data are collected from the area of the Rio Grande rift zone, the pattern of seismicity will become better understood, and a more detailed, and thus more accurate, model may emerge.

  8. Deterministic Factors Overwhelm Stochastic Environmental Fluctuations as Drivers of Jellyfish Outbreaks.

    PubMed

    Benedetti-Cecchi, Lisandro; Canepa, Antonio; Fuentes, Veronica; Tamburello, Laura; Purcell, Jennifer E; Piraino, Stefano; Roberts, Jason; Boero, Ferdinando; Halpin, Patrick

    2015-01-01

Jellyfish outbreaks are increasingly viewed as a deterministic response to escalating levels of environmental degradation and climate extremes. However, a comprehensive understanding of the influence of deterministic drivers and stochastic environmental variations favouring population renewal processes has remained elusive. This study quantifies the deterministic and stochastic components of environmental change that lead to outbreaks of the jellyfish Pelagia noctiluca in the Mediterranean Sea. Using data on jellyfish abundance collected at 241 sites along the Catalan coast from 2007 to 2010 we: (1) tested hypotheses about the influence of time-varying and spatial predictors of jellyfish outbreaks; (2) evaluated the relative importance of stochastic vs. deterministic forcing of outbreaks through the environmental bootstrap method; and (3) quantified return times of extreme events. Outbreaks were common in May and June and less likely in other summer months, which resulted in a negative relationship between outbreaks and SST. Cross- and along-shore advection by geostrophic flow were important concentrating forces of jellyfish, but most outbreaks occurred in the proximity of two canyons in the northern part of the study area. This result supported the recent hypothesis that canyons can funnel P. noctiluca blooms towards shore during upwelling. This can be a general, yet unappreciated mechanism leading to outbreaks of holoplanktonic jellyfish species. The environmental bootstrap indicated that stochastic environmental fluctuations have negligible effects on return times of outbreaks. Our analysis emphasized the importance of deterministic processes leading to jellyfish outbreaks compared to the stochastic component of environmental variation. A better understanding of how environmental drivers affect demographic and population processes in jellyfish species will increase the ability to anticipate jellyfish outbreaks in the future.
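The environmental bootstrap idea above — separate a deterministic environmental component from its stochastic residuals, resample the residuals, and see how much the return time of extremes changes — can be sketched as follows. All series, the threshold, and the seasonal model are hypothetical stand-ins; in the real analysis the residuals come from detrending observed environmental data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Deterministic seasonal driver (hypothetical SST-like monthly cycle) and
# stand-in stochastic residuals around it
months = np.arange(120)
seasonal = 18.0 + 6.0 * np.sin(2 * np.pi * months / 12)
residuals = rng.normal(0.0, 1.0, months.size)

threshold = 25.0                                  # outbreak-favourable level

def bootstrap_return_time(n_boot=2000):
    """Environmental bootstrap: resample residuals with replacement onto the
    deterministic component, then summarize the return time of exceedances."""
    times = []
    for _ in range(n_boot):
        sim = seasonal + rng.choice(residuals, size=months.size, replace=True)
        n_exc = np.count_nonzero(sim > threshold)
        times.append(months.size / n_exc if n_exc else np.inf)
    return float(np.median(times))

rt = bootstrap_return_time()                      # median return time, in months
```

Comparing such bootstrap return times against those from the observed series is one way to judge whether the stochastic component materially alters the recurrence of extreme, outbreak-favourable conditions.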

  9. Integrated 3-D Ground-Penetrating Radar, Outcrop, and Borehole Data Applied to Reservoir Characterization and Flow Simulation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMechan et al.

    2001-08-31

Existing reservoir models are based on 2-D outcrops; 3-D aspects are inferred from correlation between wells, and so are inadequately constrained for reservoir simulations. To overcome these deficiencies, we initiated a multidimensional characterization of reservoir analogs in the Cretaceous Ferron Sandstone in Utah. The study was conducted at two sites (Corbula Gulch and Coyote Basin); results from both sites are contained in this report. Detailed sedimentary facies maps of cliff faces define the geometry and distribution of potential reservoir flow units, barriers, and baffles at the outcrop. High-resolution 2-D and 3-D ground-penetrating radar (GPR) images extend these reservoir characteristics into 3-D to allow development of realistic 3-D reservoir models. Models use geometric information from the mapping and the GPR data, petrophysical data from surface and cliff-face outcrops, lab analyses of outcrop and core samples, and petrography. The measurements are all integrated into a single coordinate system using GPS and laser mapping of the main sedimentologic features and boundaries. The final step is analysis of the results of 3-D fluid-flow modeling to demonstrate the applicability of our reservoir analog studies to well siting and reservoir engineering for maximization of hydrocarbon production. The main goals of this project were achieved: the construction of a deterministic 3-D reservoir analog model from a variety of geophysical and geologic measurements at the field sites, the integration of these into comprehensive petrophysical models, and flow simulation through these models. This unique approach represents a significant advance in the characterization and use of reservoir analogs. To date, the team has presented five papers at GSA and AAPG meetings, produced a technical manual, and completed 15 technical papers. The latter are the main content of this final report. 
In addition, the project contributed to 5 PhD dissertations, 3 MS theses, and two senior undergraduate research projects.

  10. Characterization of unsaturated zone hydrogeologic units using matrix properties and depositional history in a complex volcanic environment

    USGS Publications Warehouse

    Flint, Lorraine E.; Buesch, David C.; Flint, Alan L.

    2006-01-01

Characterization of the physical and unsaturated hydrologic properties of subsurface materials is necessary to calculate flow and transport for land use practices and to evaluate subsurface processes such as perched water or lateral diversion of water, which are influenced by features such as faults, fractures, and abrupt changes in lithology. Input for numerical flow models typically includes parameters that describe hydrologic properties and the initial and boundary conditions for all materials in the unsaturated zone, such as bulk density, porosity, particle density, saturated hydraulic conductivity, moisture-retention characteristics, and field water content. We describe an approach for systematically evaluating the site features that contribute to water flow, using physical and hydraulic data collected at the laboratory scale, to provide a representative set of physical and hydraulic parameters for numerically calculating flow of water through the materials at a site. An example case study from analyses done for the heterogeneous, layered volcanic rocks at Yucca Mountain is presented, but the general approach to parameterization could be applied at any site where depositional processes follow deterministic patterns. Hydrogeologic units at this site were defined using (i) a database developed from 5320 rock samples collected from the coring of 23 shallow (<100 m) and 10 deep (500–1000 m) boreholes, (ii) lithostratigraphic boundaries and corresponding relations to porosity, (iii) transition zones with pronounced changes in properties over short vertical distances, (iv) characterization of the influence of mineral alteration on hydrologic properties such as permeability and moisture-retention characteristics, and (v) a statistical analysis to evaluate where boundaries should be adjusted to minimize the variance within layers. 
Model parameters developed in this study, and the relation of flow properties to porosity, can be used to produce detailed and accurate representations of the core-scale hydrologic processes ongoing at Yucca Mountain.

  11. Deterministic radiative coupling of two semiconductor quantum dots to the optical mode of a photonic crystal nanocavity.

    PubMed

    Calic, M; Jarlov, C; Gallo, P; Dwir, B; Rudra, A; Kapon, E

    2017-06-22

    A system of two site-controlled semiconductor quantum dots (QDs) is deterministically integrated with a photonic crystal membrane nano-cavity. The two QDs are identified via their reproducible emission spectral features, and their coupling to the fundamental cavity mode is established by emission co-polarization and cavity feeding features. A theoretical model accounting for phonon interaction and pure dephasing reproduces the observed results and permits extraction of the light-matter coupling constant for this system. The demonstrated approach offers a platform for scaling up the integration of QD systems and nano-photonic elements for integrated quantum photonics applications.

  12. Challenges Ahead for Nuclear Facility Site-Specific Seismic Hazard Assessment in France: The Alternative Energies and the Atomic Energy Commission (CEA) Vision

    NASA Astrophysics Data System (ADS)

    Berge-Thierry, C.; Hollender, F.; Guyonnet-Benaize, C.; Baumont, D.; Ameri, G.; Bollinger, L.

    2017-09-01

Seismic analysis in the context of nuclear safety in France is currently guided by a pure deterministic approach based on Basic Safety Rule (Règle Fondamentale de Sûreté) RFS 2001-01 for seismic hazard assessment, and on the ASN/2/01 Guide that provides design rules for nuclear civil engineering structures. After the 2011 Tohoku earthquake, nuclear operators worldwide were asked to estimate the ability of their facilities to sustain extreme seismic loads. The French licensees then defined the `hard core seismic levels', which are higher than those considered for design or re-assessment of the safety of a facility. These were initially established on a deterministic basis, and they have been finally justified through state-of-the-art probabilistic seismic hazard assessments. The appreciation and propagation of uncertainties when assessing seismic hazard in France have changed considerably over the past 15 years. This evolution provided the motivation for the present article, the objectives of which are threefold: (1) to provide a description of the current practices in France to assess seismic hazard in terms of nuclear safety; (2) to discuss and highlight the sources of uncertainties and their treatment; and (3) to use a specific case study to illustrate how extended source modeling can help to constrain the key assumptions or parameters that impact upon seismic hazard assessment. This article discusses in particular seismic source characterization, strong ground motion prediction, and maximal magnitude constraints, according to the practice of the French Atomic Energy Commission. Due to the growth of strong-motion databases, in terms of the number and quality of records, their metadata, and their uncertainty characterization, several recently published empirical ground motion prediction models are eligible for seismic hazard assessment in France. 
We show that propagation of epistemic and aleatory uncertainties is feasible in a deterministic approach, as it is in a probabilistic one. Assessment of seismic hazard in France in the framework of the safety of nuclear facilities should consider these recent advances. In this sense, the opening of discussions with all of the stakeholders in France to update the reference documents (i.e., RFS 2001-01; ASN/2/01 Guide) appears appropriate in the short term.

  13. Spatial delineation, fluid-lithology characterization, and petrophysical modeling of deepwater Gulf of Mexico reservoirs through joint AVA deterministic and stochastic inversion of three-dimensional partially-stacked seismic amplitude data and well logs

    NASA Astrophysics Data System (ADS)

    Contreras, Arturo Javier

    This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generate typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results. 
By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.
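    As a concrete illustration of the layer-dependent Biot-Gassmann analysis mentioned above, the sketch below applies the Gassmann equation to a generic sand. All moduli, porosity, and densities are illustrative textbook-style values, not the reservoir properties used in the dissertation:

```python
import math

def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus via the Gassmann equation (moduli in GPa)."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

def vp(k_sat, mu, rho):
    """P-wave velocity in m/s from moduli (GPa) and bulk density (g/cc)."""
    return math.sqrt((k_sat + 4.0 * mu / 3.0) * 1e9 / (rho * 1000.0))

# Illustrative quartz sand: dry-frame, mineral, and shear moduli, porosity
k_dry, k_min, mu, phi = 6.0, 36.0, 7.0, 0.25
# Substitute brine (K_fl = 2.8 GPa) with a light fluid (K_fl = 0.1 GPa)
k_brine = gassmann_ksat(k_dry, k_min, 2.8, phi)
k_light = gassmann_ksat(k_dry, k_min, 0.1, phi)
vp_brine = vp(k_brine, mu, 2.20)   # assumed brine-sand bulk density, g/cc
vp_light = vp(k_light, mu, 2.05)   # assumed light-fluid-sand bulk density
```

    Replacing brine with a light fluid lowers both the saturated bulk modulus and Vp, which is exactly the sensitivity the fluid-substitution analysis exploits.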

  14. Construction of a Quantum Matter Synthesizer

    NASA Astrophysics Data System (ADS)

    Trisnadi, Jonathan; McDonald, Mickey; Chin, Cheng

    2017-04-01

    We report progress on the construction of a new platform to manipulate ultracold atoms. The "Quantum Matter Synthesizer" (QMS) will have the capability of deterministically preparing large 2D arrays of atoms with single-site addressability. Cesium atoms are first transferred into a science cell (specially textured to reduce reflectance to 0.1% across a wide range of wavelengths and incident angles) via a moving 1D lattice, where they are loaded into a magic-wavelength, far-detuned 2D optical lattice. Two NA = 0.8 microscope objectives surround the science cell from above and below. The lower objective will be used to project an array of optical tweezers, created via a digital micromirror device (DMD), onto the atom-trapping plane; these tweezers will be used to rearrange atoms into a desired configuration after first taking a site-resolved fluorescence image of the initial atomic distribution with the upper objective. We provide updates on our magneto-optical trap and Raman-sideband cooling performance, characterization of the resolution of our microscope objectives, and stability tests for the objective mounting structure.

  15. Efficient analysis of stochastic gene dynamics in the non-adiabatic regime using piecewise deterministic Markov processes

    PubMed Central

    2018-01-01

    Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. In addition to slow chromatin and/or DNA looping dynamics, one source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analysing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying chemical master equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Last, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site. PMID:29386401
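    The PDMP construction can be sketched for the simplest case: a two-state (telegraph) promoter whose protein level evolves deterministically between stochastic switching events. The rates and the linear production/decay flow below are illustrative choices, not the titration-oscillator model analyzed in the paper:

```python
import math, random

def simulate_pdmp(k_on, k_off, beta, gamma, t_end, seed=1):
    """PDMP for a two-state promoter: between exponentially distributed
    switching events, protein x follows dx/dt = beta*state - gamma*x,
    which integrates exactly as relaxation toward beta*state/gamma."""
    rng = random.Random(seed)
    t, x, state = 0.0, 0.0, 0              # state 0 = OFF, 1 = ON
    traj = [(t, x, state)]
    while t < t_end:
        rate = k_on if state == 0 else k_off
        tau = min(rng.expovariate(rate), t_end - t)
        x_inf = beta * state / gamma       # fixed point of the current flow
        x = x_inf + (x - x_inf) * math.exp(-gamma * tau)
        t += tau
        state = 1 - state                  # stochastic promoter switch
        traj.append((t, x, state))
    return traj

traj = simulate_pdmp(k_on=0.1, k_off=0.1, beta=5.0, gamma=1.0, t_end=200.0)
protein = [x for _, x, _ in traj]
```

    Because the flow is solved exactly between switches, no small-timestep integration of the fast protein dynamics is needed; only the slow, non-adiabatic promoter transitions are sampled.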

  16. Characteristics of Early Stages of Corrosion Fatigue in Aircraft Skin

    DOT National Transportation Integrated Search

    1996-02-01

    SRI International is conducting research to characterize and quantitatively describe the early stages of corrosion fatigue in the fuselage skin of commercial aircraft. Specific objectives are to gain an improved deterministic understanding of the tra...

  17. Deterministic Seismic Hazard Assessment of Center-East IRAN (55.5-58.5˚ E, 29-31˚ N)

    NASA Astrophysics Data System (ADS)

    Askari, Mina; Neyestani, Behnoosh

    2009-04-01

    Deterministic seismic hazard assessment has been performed for Center-East Iran (55.5-58.5˚E, 29-31˚N), covering Kerman and adjacent regions within a 100 km radius. A catalogue of earthquakes in the region, including both historical and instrumental events, was compiled. A total of 25 potential seismic source zones were delineated as area sources for seismic hazard assessment based on geological, seismological, and geophysical information; the minimum distance from each seismic source to the site (Kerman) and the maximum magnitude for each source were then determined. Finally, using the Abrahamson and Litehiser (1989) attenuation relationship, the maximum acceleration is estimated to be 0.38g, associated with movement of a blind fault whose maximum magnitude is Ms = 5.5.
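    A minimal sketch of the deterministic hazard calculation described above: for each source zone, evaluate an attenuation relationship at the zone's maximum magnitude and minimum site distance, then take the controlling (maximum) acceleration. The attenuation coefficients below are hypothetical placeholders, not those of Abrahamson and Litehiser (1989):

```python
import math

def pga_g(M, R_km):
    """Hypothetical attenuation relation (NOT Abrahamson & Litehiser 1989):
    log10(PGA[g]) = 0.28*M - log10(R + 10) - 0.0015*R - 0.8."""
    return 10 ** (0.28 * M - math.log10(R_km + 10.0) - 0.0015 * R_km - 0.8)

# Each source zone: (maximum magnitude Ms, minimum distance to site in km)
sources = [(5.5, 8.0), (6.8, 45.0), (7.2, 90.0)]   # illustrative zones
pgas = [pga_g(M, R) for M, R in sources]
pga_design = max(pgas)   # deterministic hazard keeps the controlling source
```

    With these illustrative zones, the nearby moderate-magnitude source controls the design acceleration, mirroring the study's finding that a close blind fault governs the estimate.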

  18. PROBABILISTIC SAFETY ASSESSMENT OF OPERATIONAL ACCIDENTS AT THE WASTE ISOLATION PILOT PLANT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rucker, D.F.

    2000-09-01

    This report presents a probabilistic safety assessment of radioactive doses as consequences from accident scenarios to complement the deterministic assessment presented in the Waste Isolation Pilot Plant (WIPP) Safety Analysis Report (SAR). The International Commission on Radiological Protection (ICRP) recommends that both assessments be conducted to ensure that "an adequate level of safety has been achieved and that no major contributors to risk are overlooked" (ICRP 1993). To that end, the probabilistic assessment for the WIPP accident scenarios addresses the wide range of assumptions, e.g. the range of values representing the radioactive source of an accident, that could possibly have been overlooked by the SAR. Routine releases of radionuclides from the WIPP repository to the environment during waste emplacement operations are expected to be essentially zero. In contrast, potential accidental releases from postulated accident scenarios during waste handling and emplacement could be substantial, which necessitates radiological air monitoring and confinement barriers (DOE 1999). The WIPP SAR calculated doses from accidental releases to the on-site (at 100 m from the source) and off-site (at the Exclusive Use Boundary and Site Boundary) public by a deterministic approach. This approach, as demonstrated in the SAR, uses single-point values of key parameters to assess the 50-year, whole-body committed effective dose equivalent (CEDE). The basic assumptions used in the SAR to formulate the CEDE are retained for this report's probabilistic assessment; however, single-point parameter values were replaced with probability density functions (PDFs) and sampled over an expected range. Monte Carlo simulations were run, in which 10,000 iterations were performed by randomly selecting one value for each parameter and calculating the dose.
Statistical information was then derived from the batch of 10,000 iterations, including the 5%, 50%, and 95% dose likelihoods and the sensitivity of the calculated doses to each assumption. As one would intuitively expect, the doses from the probabilistic assessment for most scenarios were found to be much lower than those from the deterministic assessment. The lower doses of the probabilistic assessment can be attributed to a "smearing" of values from the high and low ends of the PDF spectra of the various input parameters. The analysis also found a potential weakness in the deterministic analysis used in the SAR: a detail of drum loading was not taken into consideration. Waste emplacement operations thus far have handled drums from each shipment as a single unit, i.e. drums from each shipment are kept together. Shipments typically come from a single waste stream, and therefore the curie loading of each drum can be considered nearly identical to that of its neighbor. Calculations show that if large numbers of drums are used in the accident scenario assessment, e.g. 28 drums in the waste hoist failure scenario (CH5), then the probabilistic dose assessment calculations will diverge from the deterministically determined doses. As currently calculated, the deterministic dose assessment assumes one drum loaded to the maximum allowable (80 PE-Ci) and the remaining drums at 10% of the maximum. The effective average drum curie content is therefore lower in the deterministic assessment than in the probabilistic assessment for a large number of drums. EEG recommends that the WIPP SAR calculations be revisited and updated to include a probabilistic safety assessment.
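    The Monte Carlo procedure described above, replacing single-point parameters with probability density functions, sampling many times, and reading off dose percentiles, can be sketched as follows. Every distribution and coefficient here is a hypothetical stand-in, not a value from the WIPP SAR:

```python
import math, random

def sample_dose(rng):
    """One realization of a hypothetical accident dose model:
    dose ~ drum curie content * release fraction * dispersion * conversion."""
    curies = rng.triangular(8.0, 80.0, 12.0)              # assumed PDF, PE-Ci
    release_frac = rng.uniform(1e-4, 1e-3)                # airborne fraction
    dispersion = rng.lognormvariate(math.log(1e-3), 0.5)  # atmospheric factor
    conversion = 2.0e2                                    # assumed dose factor
    return curies * release_frac * dispersion * conversion

rng = random.Random(42)
doses = sorted(sample_dose(rng) for _ in range(10_000))
p5 = doses[int(0.05 * len(doses))]    # 5% dose likelihood
p50 = doses[len(doses) // 2]          # median
p95 = doses[int(0.95 * len(doses))]   # 95% dose likelihood
```

    The deterministic analogue would simply multiply the single-point (usually bounding) value of each factor, which is why it tends to exceed the probabilistic percentiles.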

  19. Fisher-Wright model with deterministic seed bank and selection.

    PubMed

    Koopmann, Bendix; Müller, Johannes; Tellier, Aurélien; Živković, Daniel

    2017-04-01

    Seed banks are common to many plant species and allow storage of genetic diversity in the soil as dormant seeds for various periods of time. We investigate an above-ground population following a Fisher-Wright model with selection, coupled with a deterministic seed bank, assuming that the length of the seed bank is kept constant and the number of seeds is large. To assess the combined impact of seed banks and selection on genetic diversity, we derive a general diffusion model. The applied techniques outline a path for approximating a stochastic delay differential equation by an appropriately rescaled stochastic differential equation. We compute the equilibrium solution of the site-frequency spectrum and derive the times to fixation of an allele with and without selection. Finally, we demonstrate that seed banks enhance the effect of selection on the site-frequency spectrum while slowing down the time until the mutation-selection equilibrium is reached. Copyright © 2016 Elsevier Inc. All rights reserved.
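    A minimal simulation sketch of a Wright-Fisher population with selection and a deterministic seed bank, in which offspring are drawn from the average allele frequency of the last b generations (reflecting the large-seed-number assumption); all parameter values are illustrative:

```python
import random

def wright_fisher_seedbank(N, s, b, x0, generations, seed=7):
    """Wright-Fisher with selection coefficient s and a deterministic seed
    bank of length b: offspring sample from the mean allele frequency of
    the last b generations (uniform germination, many seeds)."""
    rng = random.Random(seed)
    freqs = [x0] * b
    for _ in range(generations):
        x = sum(freqs[-b:]) / b                     # seed-bank average
        x_sel = x * (1 + s) / (1 + s * x)           # selection on germinants
        k = sum(rng.random() < x_sel for _ in range(N))  # binomial sampling
        freqs.append(k / N)
    return freqs

freqs = wright_fisher_seedbank(N=200, s=0.05, b=5, x0=0.3, generations=500)
```

    Increasing b averages drift over more generations, which is the mechanism behind the paper's finding that seed banks slow the approach to mutation-selection equilibrium.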

  20. A simple closed-form solution for assessing concentration uncertainty

    NASA Astrophysics Data System (ADS)

    de Barros, F. P. J.; Fiori, Aldo; Bellin, Alberto

    2011-12-01

    We propose closed-form approximate solutions for the moments of a nonreactive tracer that can be used in applications, such as risk analysis. This is in line with the tenet that analytical solutions provide useful information, with minimum cost, during initial site characterization efforts and can serve as a preliminary screening tool when used with prior knowledge. We show that with the help of a few assumptions, the first-order solutions of the concentration moments proposed by Fiori and Dagan (2000) can be further simplified to assume a form similar to well-known deterministic solutions, therefore facilitating their use in applications. A highly anisotropic formation is assumed, and we neglect the transverse components of the two-particle correlation trajectory. The proposed solution compares well with the work of Fiori and Dagan while presenting the same simplicity of use of existing solutions for homogeneous porous media.

  1. Recurrent patterns of atrial depolarization during atrial fibrillation assessed by recurrence plot quantification.

    PubMed

    Censi, F; Barbaro, V; Bartolini, P; Calcagnini, G; Michelucci, A; Gensini, G F; Cerutti, S

    2000-01-01

    The aim of this study was to determine the presence of organization of atrial activation processes during atrial fibrillation (AF) by assessing whether the activation sequences are wholly random or are governed by deterministic mechanisms. We performed both linear and nonlinear analyses based on the cross correlation function (CCF) and recurrence plot quantification (RPQ), respectively. Recurrence plots were quantified by three variables: percent recurrence (PR), percent determinism (PD), and entropy of recurrences (ER). We recorded bipolar intra-atrial electrograms in two atrial sites during chronic AF in 19 informed subjects, following two protocols. In one, both recording sites were in the right atrium; in the other protocol, one site was in the right atrium, the other one in the left atrium. We extracted 19 episodes of type I AF (Wells' classification). RPQ detected transient recurrent patterns in all the episodes, while CCF was significant only in ten episodes. Surrogate data analysis, based on a cross-phase randomization procedure, decreased PR, PD, and ER values. The detection of spatiotemporal recurrent patterns together with the surrogate data results indicate that during AF a certain degree of local organization exists, likely caused by deterministic mechanisms of activation.
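    Percent recurrence (PR) and percent determinism (PD) can be computed from a recurrence plot as sketched below. The threshold and test signal are illustrative; a perfectly periodic sequence serves as a sanity check, since all of its recurrent points fall on diagonal lines (PD = 100%):

```python
def recurrence_metrics(x, eps, lmin=2):
    """Percent recurrence (PR) and percent determinism (PD) of a scalar
    series: point (i, j) recurs when |x[i]-x[j]| < eps; PD is the share of
    recurrent points lying on diagonal lines of length >= lmin."""
    n = len(x)
    R = [[abs(x[i] - x[j]) < eps for j in range(n)] for i in range(n)]
    rec = sum(R[i][j] for i in range(n) for j in range(n) if i != j)
    pr = 100.0 * rec / (n * (n - 1))
    det_pts = 0
    for k in range(1, n):        # scan upper diagonals (matrix is symmetric)
        run = 0
        for i in range(n - k):
            if R[i][i + k]:
                run += 1
            else:
                if run >= lmin:
                    det_pts += run
                run = 0
        if run >= lmin:
            det_pts += run
    pd = 100.0 * (2 * det_pts) / rec if rec else 0.0
    return pr, pd

periodic = [float(i % 10) for i in range(200)]   # exactly periodic signal
pr, pd = recurrence_metrics(periodic, eps=0.5)
```

    On real intra-atrial electrograms one would first time-delay embed the signal and use a vector-norm threshold; the surrogate-data step in the study then checks whether PR, PD, and ER drop after phase randomization.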

  2. A procedure to select ground-motion time histories for deterministic seismic hazard analysis from the Next Generation Attenuation (NGA) database

    NASA Astrophysics Data System (ADS)

    Huang, Duruo; Du, Wenqi; Zhu, Hong

    2017-10-01

    In performance-based seismic design, ground-motion time histories are needed for analyzing the dynamic responses of nonlinear structural systems. However, the number of ground-motion records at the design level is often limited. In order to analyze the seismic performance of structures, ground-motion time histories need to be either selected from recorded strong-motion databases or numerically simulated using stochastic approaches. In this paper, a detailed procedure to select proper acceleration time histories from the Next Generation Attenuation (NGA) database for several cities in Taiwan is presented. Target response spectra are initially determined based on a local ground-motion prediction equation under representative deterministic seismic hazard analyses. Then several suites of ground motions are selected for these cities using the Design Ground Motion Library (DGML), a recently proposed interactive ground-motion selection tool. The selected time histories are representative of the regional seismic hazard and should be beneficial to earthquake studies when comprehensive seismic hazard assessments and site investigations are unavailable. Note that this method is also applicable to site-specific motion selection with target spectra near the ground surface that account for the site effect.
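    The spectrum-matching step of such a selection procedure can be sketched as ranking candidate records by their misfit to a target response spectrum. The spectra below are made-up numbers, and the DGML itself offers more elaborate weighting and amplitude-scaling options:

```python
import math

def select_records(target, candidates, n_select=3):
    """Rank candidate records by mean squared error between log spectral
    ordinates and the target spectrum (equal period weights)."""
    def mse(spec):
        return sum((math.log(s) - math.log(t)) ** 2
                   for s, t in zip(spec, target)) / len(target)
    ranked = sorted(candidates.items(), key=lambda kv: mse(kv[1]))
    return [name for name, _ in ranked[:n_select]]

# Hypothetical 5%-damped spectral accelerations (g) at a few periods
target = [0.8, 1.0, 0.6, 0.3]
candidates = {
    "rec_A": [0.82, 0.95, 0.62, 0.28],   # close match to the target
    "rec_B": [0.40, 0.50, 0.30, 0.15],   # half the target amplitude
    "rec_C": [2.00, 2.50, 1.50, 0.75],   # 2.5x the target amplitude
}
best = select_records(target, candidates, n_select=2)
```

    Working in log space treats over- and under-prediction symmetrically, which is the usual convention for response-spectrum misfit measures.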

  3. Persistent ecological shifts in marine molluscan assemblages across the end-Cretaceous mass extinction.

    PubMed

    Aberhan, Martin; Kiessling, Wolfgang

    2015-06-09

    Contemporary biodiversity loss and population declines threaten to push the biosphere toward a tipping point with irreversible effects on ecosystem composition and function. As a potential example of a global-scale regime shift in the geological past, we assessed ecological changes across the end-Cretaceous mass extinction based on molluscan assemblages at four well-studied sites. By contrasting preextinction and postextinction rank abundance and numerical abundance in 19 molluscan modes of life--each defined as a unique combination of mobility level, feeding mode, and position relative to the substrate--we find distinct shifts in ecospace utilization, which significantly exceed predictions from null models. The magnitude of change in functional traits relative to normal temporal fluctuations at far-flung sites indicates that molluscan assemblages shifted to differently structured systems and faunal response was global. The strengths of temporal ecological shifts, however, are mostly within the range of preextinction site-to-site variability, demonstrating that local ecological turnover was similar to geographic variation over a broad latitudinal range. In conjunction with varied site-specific temporal patterns of individual modes of life, these spatial and temporal heterogeneities argue against a concerted phase shift of molluscan assemblages from one well-defined regime to another. At a broader ecological level, by contrast, congruent tendencies emerge and suggest deterministic processes. These patterns comprise the well-known increase of deposit-feeding mollusks in postextinction assemblages and increases in predators and predator-resistant modes of life, i.e., those characterized by elevated mobility and infaunal life habits.

  4. A deterministic and stochastic model for the system dynamics of tumor-immune responses to chemotherapy

    NASA Astrophysics Data System (ADS)

    Liu, Xiangdong; Li, Qingze; Pan, Jianxin

    2018-06-01

    Modern medical studies show that chemotherapy can help most cancer patients, especially those diagnosed early, to stabilize their disease conditions for months to years, meaning that the population of tumor cells remains nearly unchanged for quite a long time while contending with the immune system and drugs. In order to better understand the dynamics of tumor-immune responses under chemotherapy, deterministic and stochastic differential equation models are constructed in this paper to characterize the dynamical changes of tumor cells and immune cells. The basic dynamical properties, such as boundedness and the existence and stability of equilibrium points, are investigated in the deterministic model. The extended stochastic models include a stochastic differential equation (SDE) model and a continuous-time Markov chain (CTMC) model, which account for the variability in cellular reproduction, growth and death, interspecific competition, and immune response to chemotherapy. The CTMC model is harnessed to estimate the extinction probability of tumor cells. Numerical simulations are performed, confirming the obtained theoretical results.
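    A minimal Euler-Maruyama sketch of the deterministic-plus-stochastic modeling approach: logistic tumor growth with an immune kill term, constant immune influx, and multiplicative noise on the tumor population. All parameter values are hypothetical, not those fitted in the paper:

```python
import math, random

def simulate_tumor(T0=0.1, I0=0.2, t_end=100.0, dt=0.01, sigma=0.05, seed=3):
    """Euler-Maruyama for a toy tumor-immune SDE:
      dT = [r*T*(1 - T/K) - a*T*I] dt + sigma*T dW
      dI = [s - d*I] dt
    All rates below are illustrative placeholders."""
    r, K, a = 0.3, 1.0, 0.4      # tumor growth rate, capacity, kill rate
    s, d = 0.1, 0.2              # immune influx and death rates
    rng = random.Random(seed)
    T, I = T0, I0
    for _ in range(int(t_end / dt)):
        dT = (r * T * (1 - T / K) - a * T * I) * dt
        dI = (s - d * I) * dt
        T = max(0.0, T + dT + sigma * T * math.sqrt(dt) * rng.gauss(0, 1))
        I = max(0.0, I + dI)
    return T, I

T_end, I_end = simulate_tumor()
```

    The immune population relaxes to its deterministic equilibrium s/d = 0.5, while the tumor fluctuates around the coexistence equilibrium; a CTMC version with integer populations would additionally allow true extinction, which the SDE cannot capture.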

  5. How the growth rate of host cells affects cancer risk in a deterministic way

    NASA Astrophysics Data System (ADS)

    Draghi, Clément; Viger, Louise; Denis, Fabrice; Letellier, Christophe

    2017-09-01

    It is well known that cancers are encountered significantly more often in some tissues than in others. In this paper, by using a deterministic model describing the interactions between host, effector immune, and tumor cells at the tissue level, we show that this can be explained by the dependence of tumor growth on parameter values characterizing the type as well as the state of the tissue considered, due to the "way of life" (environmental factors, food consumption, drinking or smoking habits, etc.). Our approach is purely deterministic and, consequently, the strong correlation (r = 0.99) between the number of detectable growing tumors and the growth rate of cells from the nesting tissue can be explained without invoking random mutations arising during DNA replication in nonmalignant cells or "bad luck". Strategies to limit cancer-induced mortality could therefore well be based on improving the way of life, that is, on better preserving the tissue where mutant cells randomly arise.

  6. Analysis of deterministic swapping of photonic and atomic states through single-photon Raman interaction

    NASA Astrophysics Data System (ADS)

    Rosenblum, Serge; Borne, Adrien; Dayan, Barak

    2017-03-01

    The long-standing goal of deterministic quantum interactions between single photons and single atoms was recently realized in various experiments. Among these, an appealing demonstration relied on single-photon Raman interaction (SPRINT) in a three-level atom coupled to a single-mode waveguide. In essence, the interference-based process of SPRINT deterministically swaps the qubits encoded in a single photon and a single atom, without the need for additional control pulses. It can also be harnessed to construct passive entangling quantum gates, and can therefore form the basis for scalable quantum networks in which communication between the nodes is carried out only by single-photon pulses. Here we present an analytical and numerical study of SPRINT, characterizing its limitations and defining parameters for its optimal operation. Specifically, we study the effect of losses, imperfect polarization, and the presence of multiple excited states. In all cases we discuss strategies for restoring the operation of SPRINT.

  7. Analysis of wireless sensor network topology and estimation of optimal network deployment by deterministic radio channel characterization.

    PubMed

    Aguirre, Erik; Lopez-Iturri, Peio; Azpilicueta, Leire; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco

    2015-02-05

    One of the main challenges in the implementation and design of context-aware scenarios is the adequate deployment strategy for Wireless Sensor Networks (WSNs), mainly due to the strong dependence of the radiofrequency physical layer with the surrounding media, which can lead to non-optimal network designs. In this work, radioplanning analysis for WSN deployment is proposed by employing a deterministic 3D ray launching technique in order to provide insight into complex wireless channel behavior in context-aware indoor scenarios. The proposed radioplanning procedure is validated with a testbed implemented with a Mobile Ad Hoc Network WSN following a chain configuration, enabling the analysis and assessment of a rich variety of parameters, such as received signal level, signal quality and estimation of power consumption. The adoption of deterministic radio channel techniques allows the design and further deployment of WSNs in heterogeneous wireless scenarios with optimized behavior in terms of coverage, capacity, quality of service and energy consumption.

  8. Evaluating the risk of death via the hematopoietic syndrome mode for prolonged exposure of nuclear workers to radiation delivered at very low rates.

    PubMed

    Scott, B R; Lyzlov, A F; Osovets, S V

    1998-05-01

    During a Phase-I effort, studies were planned to evaluate deterministic (nonstochastic) effects of chronic exposure of nuclear workers at the Mayak atomic complex in the former Soviet Union to relatively high levels (> 0.25 Gy) of ionizing radiation. The Mayak complex has been used, since the late 1940's, to produce plutonium for nuclear weapons. Workers at Site A of the complex were involved in plutonium breeding using nuclear reactors, and some were exposed to relatively large doses of gamma rays plus relatively small neutron doses. The Weibull normalized-dose model, which has been set up to evaluate the risk of specific deterministic effects of combined, continuous exposure of humans to alpha, beta, and gamma radiations, is here adapted for chronic exposure to gamma rays and neutrons during repeated 6-h work shifts--as occurred for some nuclear workers at Site A. Using the adapted model, key conclusions were reached that will facilitate a Phase-II study of deterministic effects among Mayak workers. 
These conclusions include the following: (1) neutron doses may be more important for Mayak workers than for Japanese A-bomb victims in Hiroshima and can be accounted for using an adjusted dose (which accounts for neutron relative biological effectiveness); (2) to account for dose-rate effects, normalized dose X (a dimensionless fraction of an LD50 or ED50) can be evaluated in terms of an adjusted dose; (3) nonlinear dose-response curves for the risk of death via the hematopoietic mode can be converted to linear dose-response curves (for low levels of risk) using a newly proposed dimensionless dose, D = X^V, in units of Oklad (where D is pronounced "deh"), and V is the shape parameter in the Weibull model; (4) for X <= Xo, where Xo is the threshold normalized dose, D = 0; (5) unlike absorbed dose, the dose D can be averaged over different Mayak workers in order to calculate the average risk of death via the hematopoietic mode for the population exposed at Site A; and (6) the expected cases of death via the hematopoietic syndrome mode for Mayak workers chronically exposed during work shifts at Site A to gamma rays and neutrons can be predicted using ln(2) B M[D], where B (pronounced "beh") is the number of workers at risk (criticality accident victims excluded) and M[D] is the average (mean) value of D (averaged over the worker population at risk, for Site A, for the time period considered). These results can be used to facilitate a Phase II study of deterministic radiation effects among Mayak workers chronically exposed to gamma rays and neutrons.
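    Conclusions (3), (4), and (6) can be sketched directly: D = X^V above threshold (else 0), individual risk R = 1 - 2^(-D), which is approximately ln(2)*D at low risk, and expected deaths ln(2)*B*M[D]. The shape parameter, threshold, and worker doses below are assumed for illustration only:

```python
import math

def oklad_dose(X, V, X0):
    """Dimensionless dose D = X**V (Oklad) for normalized dose X above the
    threshold X0; D = 0 at or below threshold."""
    return X ** V if X > X0 else 0.0

def hematopoietic_risk(D):
    """Risk of death via the hematopoietic mode, R = 1 - 2**(-D);
    for small D this is approximately ln(2)*D, i.e. linear in D."""
    return 1.0 - 2.0 ** (-D)

def expected_deaths(B, D_values):
    """Expected deaths ~ ln(2) * B * mean(D) for low individual risks."""
    return math.log(2) * B * sum(D_values) / len(D_values)

# Illustrative worker population with assumed V = 6 and threshold X0 = 0.5
Ds = [oklad_dose(X, V=6, X0=0.5) for X in (0.3, 0.6, 0.7, 0.8)]
deaths = expected_deaths(B=100, D_values=Ds)
```

    The point of the D transformation is visible here: risk is linear in D even though it is strongly nonlinear in X, so D can be averaged over a worker population while absorbed dose cannot.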

  9. Modeling the within-host dynamics of cholera: bacterial-viral interaction.

    PubMed

    Wang, Xueying; Wang, Jin

    2017-08-01

    Novel deterministic and stochastic models are proposed in this paper for the within-host dynamics of cholera, with a focus on the bacterial-viral interaction. The deterministic model is a system of differential equations describing the interaction among the two types of vibrios and the viruses. The stochastic model is a system of Markov jump processes that is derived based on the dynamics of the deterministic model. The multitype branching process approximation is applied to estimate the extinction probability of bacteria and viruses within a human host during the early stage of the bacterial-viral infection. Accordingly, a closed-form expression is derived for the disease extinction probability, and analytic estimates are validated with numerical simulations. The local and global dynamics of the bacterial-viral interaction are analysed using the deterministic model, and the result indicates that there is a sharp disease threshold characterized by the basic reproduction number [Formula: see text]: if [Formula: see text], vibrios ingested from the environment into human body will not cause cholera infection; if [Formula: see text], vibrios will grow with increased toxicity and persist within the host, leading to human cholera. In contrast, the stochastic model indicates, more realistically, that there is always a positive probability of disease extinction within the human host.
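    The branching-process extinction calculation reduces, in the simplest single-type case, to finding the smallest fixed point of the offspring probability generating function. The death/split probabilities below are illustrative, not the paper's estimated rates:

```python
def extinction_probability(pgf, tol=1e-12, max_iter=10_000):
    """Smallest fixed point of the offspring pgf f, found by iterating
    q_{n+1} = f(q_n) from q_0 = 0 (standard branching-process result)."""
    q = 0.0
    for _ in range(max_iter):
        q_next = pgf(q)
        if abs(q_next - q) < tol:
            break
        q = q_next
    return q

# Supercritical example: each bacterium dies w.p. 0.4 or splits w.p. 0.6,
# so f(s) = 0.4 + 0.6*s**2 and the extinction probability is 0.4/0.6 = 2/3.
q = extinction_probability(lambda s: 0.4 + 0.6 * s * s)
```

    The multitype case used in the paper works the same way, except q is a vector and f a vector of pgfs, one per type (vibrio state or virus); iteration from the zero vector again converges to the minimal fixed point.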

  10. Efficient analysis of stochastic gene dynamics in the non-adiabatic regime using piecewise deterministic Markov processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Yen Ting; Buchler, Nicolas E.

    Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. In addition to slow chromatin and/or DNA looping dynamics, one source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analysing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying chemical master equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Finally, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site.

  11. Efficient analysis of stochastic gene dynamics in the non-adiabatic regime using piecewise deterministic Markov processes

    DOE PAGES

    Lin, Yen Ting; Buchler, Nicolas E.

    2018-01-31

    Single-cell experiments show that gene expression is stochastic and bursty, a feature that can emerge from slow switching between promoter states with different activities. In addition to slow chromatin and/or DNA looping dynamics, one source of long-lived promoter states is the slow binding and unbinding kinetics of transcription factors to promoters, i.e. the non-adiabatic binding regime. Here, we introduce a simple analytical framework, known as a piecewise deterministic Markov process (PDMP), that accurately describes the stochastic dynamics of gene expression in the non-adiabatic regime. We illustrate the utility of the PDMP on a non-trivial dynamical system by analysing the properties of a titration-based oscillator in the non-adiabatic limit. We first show how to transform the underlying chemical master equation into a PDMP where the slow transitions between promoter states are stochastic, but whose rates depend upon the faster deterministic dynamics of the transcription factors regulated by these promoters. We show that the PDMP accurately describes the observed periods of stochastic cycles in activator and repressor-based titration oscillators. We then generalize our PDMP analysis to more complicated versions of titration-based oscillators to explain how multiple binding sites lengthen the period and improve coherence. Finally, we show how noise-induced oscillation previously observed in a titration-based oscillator arises from non-adiabatic and discrete binding events at the promoter site.

  12. Exposure Assessment Tools by Tiers and Types - Deterministic and Probabilistic Assessments

    EPA Pesticide Factsheets

    EPA ExpoBox is a toolbox for exposure assessors. Its purpose is to provide a compendium of exposure assessment and risk characterization tools that presents comprehensive step-by-step guidance and links to relevant exposure assessment databases.

  13. The origin of life is a spatially localized stochastic transition

    PubMed Central

    2012-01-01

    Background Life depends on biopolymer sequences as catalysts and as genetic material. A key step in the Origin of Life is the emergence of an autocatalytic system of biopolymers. Here we study computational models that address the way a living autocatalytic system could have emerged from a non-living chemical system, as envisaged in the RNA World hypothesis. Results We consider (i) a chemical reaction system describing RNA polymerization, and (ii) a simple model of catalytic replicators that we call the Two’s Company model. Both systems have two stable states: a non-living state, characterized by a slow spontaneous rate of RNA synthesis, and a living state, characterized by rapid autocatalytic RNA synthesis. The origin of life is a transition between these two stable states. The transition is driven by stochastic concentration fluctuations involving relatively small numbers of molecules in a localized region of space. These models are simulated on a two-dimensional lattice in which reactions occur locally on single sites and diffusion occurs by hopping of molecules to neighbouring sites. Conclusions If diffusion is very rapid, the system is well-mixed. The transition to life becomes increasingly difficult as the lattice size is increased because the concentration fluctuations that drive the transition become relatively smaller when larger numbers of molecules are involved. In contrast, when diffusion occurs at a finite rate, concentration fluctuations are local. The transition to life occurs in one local region and then spreads across the rest of the surface. The transition becomes easier with larger lattice sizes because there are more independent regions in which it could occur. 
The key observations that apply to our models and to the real world are that the origin of life is a rare stochastic event that is localized in one region of space due to the limited rate of diffusion of the molecules involved, and that the subsequent spread across the surface is deterministic. It is likely that the time required for the deterministic spread is much shorter than the waiting time for the origin, in which case life evolves only once on a planet and then rapidly occupies the whole surface. Reviewers: Reviewed by Omer Markovitch (nominated by Doron Lancet), Claus Wilke, and Nobuto Takeuchi (nominated by Eugene Koonin). PMID:23176307

  14. Invited Review: A review of deterministic effects in cyclic variability of internal combustion engines

    DOE PAGES

    Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; ...

    2015-02-18

    Here we review developments in the understanding of cycle-to-cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short-term predictability. In some cases, models of this type appear to work very well at describing experimental observations, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes and thus mitigate them. Some recent studies have demonstrated that under some conditions, cyclic combustion variations can have a relatively high degree of low-dimensional deterministic structure, which implies some degree of predictability and potential for real-time control. These deterministic effects are typically more pronounced near critical stability limits (e.g., near tipping points associated with ignition or flame propagation), such as during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in experimental and analytical characterization of cyclic variability where low-dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. Taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.

  15. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective on a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structure and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models that simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model into a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then handled by a noise-robust approach called Ordinal Optimization, which uses Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process: the optimization of the manufacturing of integrally bladed rotors for the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical for optimization. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.
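    The ordinal comparison / goal softening idea can be sketched with a made-up design space and noise level (this is not the thesis's turbine-blade model): rank all designs with one cheap-but-noisy evaluation each, and accept success if the observed top-s subset overlaps the true "good enough" subset, rather than demanding the single true optimum.

```python
import random

def ordinal_select(designs, crude_cost, noise_sd, s, seed=0):
    """Ordinal comparison: rank designs by a single noisy evaluation and
    keep the observed top-s. Goal softening asks only that this subset
    overlap the true top designs, not contain the exact optimum."""
    rng = random.Random(seed)
    noisy = {d: crude_cost(d) + rng.gauss(0.0, noise_sd) for d in designs}
    return sorted(designs, key=noisy.get)[:s]

designs = list(range(100))       # hypothetical design space
true_cost = lambda d: float(d)   # design 0 is truly best
selected = ordinal_select(designs, true_cost, noise_sd=5.0, s=10)
n_good = len(set(selected) & set(range(10)))  # overlap with true top-10
```

    Even with noise several times larger than the cost gap between neighbouring designs, ordinal ranking typically lands several truly good designs in the selected subset, which is the robustness property the approach exploits.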

  16. Optimization Under Uncertainty of Site-Specific Turbine Configurations

    NASA Astrophysics Data System (ADS)

    Quick, J.; Dykes, K.; Graf, P.; Zahle, F.

    2016-09-01

    Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.
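    The risk-appetite effect can be caricatured with a hypothetical cost model (the paper's actual empirical cost-of-energy model is not reproduced here): minimizing mean cost alone versus mean plus a risk-aversion multiple of the cost's standard deviation shifts the optimum toward a more conservative design. The design knob, wind distribution, and cost function below are all illustrative assumptions.

```python
import random
import statistics

def coe(d, w):
    # Hypothetical surrogate: d in 0..10 buys robustness (e.g. extra rotor
    # area) at fixed cost, while the remaining exposure (10 - d) is
    # penalized when the sampled wind speed w is weak.
    return d + (10 - d) / w

def optimal_design(wind_mean, wind_std, risk_aversion, n=4000, seed=0):
    rng = random.Random(seed)
    winds = [max(0.5, rng.gauss(wind_mean, wind_std)) for _ in range(n)]
    def score(d):
        costs = [coe(d, w) for w in winds]
        return statistics.mean(costs) + risk_aversion * statistics.stdev(costs)
    return min(range(11), key=score)

d_neutral = optimal_design(2.0, 1.0, risk_aversion=0.0)  # deterministic-style
d_averse = optimal_design(2.0, 1.0, risk_aversion=3.0)   # risk-averse designer
```

    A risk-neutral designer accepts wind exposure because it is cheaper on average; penalizing cost variability pushes the optimum toward the conservative end of the design range, echoing the abstract's conclusion.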

  17. Asymmetrical Damage Partitioning in Bacteria: A Model for the Evolution of Stochasticity, Determinism, and Genetic Assimilation

    PubMed Central

    Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara

    2016-01-01

    Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother’s old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother’s old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. 
We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington’s genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation. PMID:26761487
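    The variance argument at the core of the abstract can be illustrated with a toy calculation (illustrative numbers, not the paper's in silico model), using daughter damage as a proxy for fitness variation: a deterministic asymmetric split produces far more between-daughter variance than a symmetric but stochastic (binomial) split of the same damage load.

```python
import random
import statistics

def daughters_asymmetric(damage, a=0.75):
    """Deterministic asymmetric split: the old daughter gets fraction a."""
    return [a * damage, (1 - a) * damage]

def daughters_stochastic(damage, rng):
    """Symmetric but stochastic split: each damage unit independently goes
    to one daughter (binomial partitioning)."""
    old = sum(rng.random() < 0.5 for _ in range(int(damage)))
    return [old, damage - old]

rng = random.Random(42)
D = 100  # damage units carried by each mother (arbitrary)
asym = [x for _ in range(500) for x in daughters_asymmetric(D)]
stoch = [x for _ in range(500) for x in daughters_stochastic(D, rng)]

var_asym = statistics.pvariance(asym)    # exactly 625 for a = 0.75
var_stoch = statistics.pvariance(stoch)  # ~25 (binomial variance D/4)
```

    With a 75/25 deterministic split the damage variance is D²/16 = 625, versus only D/4 = 25 for the binomial split, showing why deterministic asymmetry generates much larger fitness variance for selection to act on.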

  18. Asymmetrical Damage Partitioning in Bacteria: A Model for the Evolution of Stochasticity, Determinism, and Genetic Assimilation.

    PubMed

    Chao, Lin; Rang, Camilla Ulla; Proenca, Audrey Menegaz; Chao, Jasper Ubirajara

    2016-01-01

    Non-genetic phenotypic variation is common in biological organisms. The variation is potentially beneficial if the environment is changing. If the benefit is large, selection can favor the evolution of genetic assimilation, the process by which the expression of a trait is transferred from environmental to genetic control. Genetic assimilation is an important evolutionary transition, but it is poorly understood because the fitness costs and benefits of variation are often unknown. Here we show that the partitioning of damage by a mother bacterium to its two daughters can evolve through genetic assimilation. Bacterial phenotypes are also highly variable. Because gene-regulating elements can have low copy numbers, the variation is attributed to stochastic sampling. Extant Escherichia coli partition asymmetrically and deterministically more damage to the old daughter, the one receiving the mother's old pole. By modeling in silico damage partitioning in a population, we show that deterministic asymmetry is advantageous because it increases fitness variance and hence the efficiency of natural selection. However, we find that symmetrical but stochastic partitioning can be similarly beneficial. To examine why bacteria evolved deterministic asymmetry, we modeled the effect of damage anchored to the mother's old pole. While anchored damage strengthens selection for asymmetry by creating additional fitness variance, it has the opposite effect on symmetry. The difference results because anchored damage reinforces the polarization of partitioning in asymmetric bacteria. In symmetric bacteria, it dilutes the polarization. Thus, stochasticity alone may have protected early bacteria from damage, but deterministic asymmetry has evolved to be equally important in extant bacteria. We estimate that 47% of damage partitioning is deterministic in E. coli. 
We suggest that the evolution of deterministic asymmetry from stochasticity offers an example of Waddington's genetic assimilation. Our model is able to quantify the evolution of the assimilation because it characterizes the fitness consequences of variation.

  19. Changes in assembly processes in soil bacterial communities following a wildfire disturbance.

    PubMed

    Ferrenberg, Scott; O'Neill, Sean P; Knelman, Joseph E; Todd, Bryan; Duggan, Sam; Bradley, Daniel; Robinson, Taylor; Schmidt, Steven K; Townsend, Alan R; Williams, Mark W; Cleveland, Cory C; Melbourne, Brett A; Jiang, Lin; Nemergut, Diana R

    2013-06-01

    Although recent work has shown that both deterministic and stochastic processes are important in structuring microbial communities, the factors that affect the relative contributions of niche and neutral processes are poorly understood. The macrobiological literature indicates that ecological disturbances can influence assembly processes. Thus, we sampled bacterial communities at 4 and 16 weeks following a wildfire and used null deviation analysis to examine the role that time since disturbance has in community assembly. Fire dramatically altered bacterial community structure and diversity as well as soil chemistry for both time-points. Community structure shifted between 4 and 16 weeks for both burned and unburned communities. Community assembly in burned sites 4 weeks after fire was significantly more stochastic than in unburned sites. After 16 weeks, however, burned communities were significantly less stochastic than unburned communities. Thus, we propose a three-phase model featuring shifts in the relative importance of niche and neutral processes as a function of time since disturbance. Because neutral processes are characterized by a decoupling between environmental parameters and community structure, we hypothesize that a better understanding of community assembly may be important in determining where and when detailed studies of community composition are valuable for predicting ecosystem function.
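    Null deviation analysis compares an observed community dissimilarity with the expectation under random assembly from the shared species pool. The following is a toy version (not the authors' exact null model or dissimilarity metric choices): a deviation near zero indicates stochastic assembly, a large deviation indicates deterministic (niche) structuring. The community vectors are invented for illustration.

```python
import random

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    shared = sum(min(x, y) for x, y in zip(a, b))
    return 1 - 2 * shared / (sum(a) + sum(b))

def null_deviation(a, b, n_null=200, seed=0):
    """Observed dissimilarity minus its mean under a null model that
    reassigns each community's individuals at random from the combined
    regional pool (total abundances preserved)."""
    rng = random.Random(seed)
    pool = [i for i, (x, y) in enumerate(zip(a, b)) for _ in range(x + y)]
    def randomized(total):
        counts = [0] * len(a)
        for sp in rng.sample(pool, total):
            counts[sp] += 1
        return counts
    obs = bray_curtis(a, b)
    null = [bray_curtis(randomized(sum(a)), randomized(sum(b)))
            for _ in range(n_null)]
    return obs - sum(null) / n_null  # ~0: stochastic; large: deterministic

dev_niche = null_deviation([30, 0, 0, 10], [0, 30, 10, 0])   # strongly sorted
dev_stoch = null_deviation([14, 16, 5, 5], [16, 14, 5, 5])   # pool-like
```

    Strongly differentiated communities yield a deviation near the maximum, while communities resembling random draws from the pool yield a deviation near zero, which is the axis along which the burned/unburned time-points are compared.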

  20. Changes in assembly processes in soil bacterial communities following a wildfire disturbance

    PubMed Central

    Ferrenberg, Scott; O'Neill, Sean P; Knelman, Joseph E; Todd, Bryan; Duggan, Sam; Bradley, Daniel; Robinson, Taylor; Schmidt, Steven K; Townsend, Alan R; Williams, Mark W; Cleveland, Cory C; Melbourne, Brett A; Jiang, Lin; Nemergut, Diana R

    2013-01-01

    Although recent work has shown that both deterministic and stochastic processes are important in structuring microbial communities, the factors that affect the relative contributions of niche and neutral processes are poorly understood. The macrobiological literature indicates that ecological disturbances can influence assembly processes. Thus, we sampled bacterial communities at 4 and 16 weeks following a wildfire and used null deviation analysis to examine the role that time since disturbance has in community assembly. Fire dramatically altered bacterial community structure and diversity as well as soil chemistry for both time-points. Community structure shifted between 4 and 16 weeks for both burned and unburned communities. Community assembly in burned sites 4 weeks after fire was significantly more stochastic than in unburned sites. After 16 weeks, however, burned communities were significantly less stochastic than unburned communities. Thus, we propose a three-phase model featuring shifts in the relative importance of niche and neutral processes as a function of time since disturbance. Because neutral processes are characterized by a decoupling between environmental parameters and community structure, we hypothesize that a better understanding of community assembly may be important in determining where and when detailed studies of community composition are valuable for predicting ecosystem function. PMID:23407312

  1. A probabilistic fatigue analysis of multiple site damage

    NASA Technical Reports Server (NTRS)

    Rohrbaugh, S. M.; Ruff, D.; Hillberry, B. M.; Mccabe, G.; Grandt, A. F., Jr.

    1994-01-01

    The variability in initial crack size and fatigue crack growth is incorporated in a probabilistic model that is used to predict the fatigue lives for unstiffened aluminum alloy panels containing multiple site damage (MSD). The uncertainty of the damage in the MSD panel is represented by a distribution of fatigue crack lengths that are analytically derived from equivalent initial flaw sizes. The variability in fatigue crack growth rate is characterized by stochastic descriptions of crack growth parameters for a modified Paris crack growth law. A Monte-Carlo simulation explicitly describes the MSD panel by randomly selecting values from the stochastic variables and then grows the MSD cracks with a deterministic fatigue model until the panel fails. Different simulations investigate the influences of the fatigue variability on the distributions of remaining fatigue lives. Six cases that consider fixed and variable conditions of initial crack size and fatigue crack growth rate are examined. The crack size distribution exhibited a dominant effect on the remaining fatigue life distribution, and the variable crack growth rate exhibited a lesser effect on the distribution. In addition, the probabilistic model predicted that only a small percentage of the life remains after a lead crack develops in the MSD panel.
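    The simulation structure described above, random initial flaw sizes and random crack growth parameters driving a deterministic growth law, can be sketched as follows. The Paris-law form is standard, but every numerical value (stress range, critical crack size, parameter distributions) is an illustrative assumption, not the paper's data.

```python
import math
import random

def fatigue_life(a0, paris_c, paris_m=3.0, dstress_mpa=100.0,
                 a_crit=0.02, da=1e-5):
    """Deterministic crack growth: integrate da/dN = C * (dK)^m from the
    initial crack length a0 (m) to a critical length, with
    dK = dstress * sqrt(pi * a). All values are illustrative."""
    a, cycles = a0, 0.0
    while a < a_crit:
        dk = dstress_mpa * math.sqrt(math.pi * a)
        cycles += da / (paris_c * dk ** paris_m)
        a += da
    return cycles

rng = random.Random(7)
lives = []
for _ in range(200):                              # Monte Carlo panels
    a0 = rng.lognormvariate(math.log(1e-3), 0.3)  # equivalent initial flaw (m)
    c = rng.lognormvariate(math.log(1e-11), 0.2)  # crack growth variability
    lives.append(fatigue_life(a0, c))
lives.sort()
median_life = lives[len(lives) // 2]
```

    Sampling both variables and re-running the deterministic integration yields the distribution of remaining lives; fixing one variable at its mean while sampling the other is how the six fixed/variable cases in the study isolate each source's contribution.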

  2. Optimization of forest wildlife objectives

    Treesearch

    John Hof; Robert Haight

    2007-01-01

    This chapter presents an overview of methods for optimizing wildlife-related objectives. These objectives hinge on landscape pattern, so we refer to these methods as "spatial optimization." It is currently possible to directly capture deterministic characterizations of the most basic spatial relationships: proximity relationships (including those that lead to...

  3. Protecting groundwater resources at biosolids recycling sites.

    PubMed

    McFarland, Michael J; Kumarasamy, Karthik; Brobst, Robert B; Hais, Alan; Schmitz, Mark D

    2013-01-01

    In developing the national biosolids recycling rule (Title 40 of the Code of Federal Regulation Part 503 or Part 503), the USEPA conducted deterministic risk assessments whose results indicated that the probability of groundwater impairment associated with biosolids recycling was insignificant. Unfortunately, the computational capabilities available for performing risk assessments of pollutant fate and transport at that time were limited. Using recent advances in USEPA risk assessment methodology, the present study evaluates whether the current national biosolids pollutant limits remain protective of groundwater quality. To take advantage of new risk assessment approaches, a computer-based groundwater risk characterization screening tool (RCST) was developed using USEPA's Multimedia, Multi-pathway, Multi-receptor Exposure and Risk Assessment program. The RCST, which generates a noncarcinogenic human health risk estimate (i.e., hazard quotient [HQ] value), has the ability to conduct screening-level risk characterizations. The regulated heavy metals modeled in this study were As, Cd, Ni, Se, and Zn. Results from RCST application to biosolids recycling sites located in Yakima County, Washington, indicated that biosolids could be recycled at rates as high as 90 Mg ha⁻¹, with no negative human health effects associated with groundwater consumption. Only under unrealistically high biosolids land application rates were public health risks characterized as significant (HQ ≥ 1.0). For example, by increasing the biosolids application rate and pollutant concentrations to 900 Mg ha⁻¹ and 10 times the regulatory limit, respectively, the HQ values varied from 1.4 (Zn) to 324.0 (Se). Since promulgation of Part 503, no verifiable cases of groundwater contamination by regulated biosolids pollutants have been reported. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
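    The hazard quotient logic, dose divided by a reference dose, with HQ ≥ 1 flagging potential concern, can be sketched in a few lines. This is a toy screening calculation, not the RCST's actual pathway equations: the leach fraction, aquifer dilution volume, and reference dose below are illustrative placeholders.

```python
def hazard_quotient(app_rate_Mg_ha, metal_mg_per_kg, leach_fraction,
                    rfd_mg_per_kg_day, aquifer_l=1e7,
                    ingestion_l_per_day=2.0, body_kg=70.0):
    """Toy screening-level HQ: metal leached from one hectare-application
    is diluted into a hypothetical aquifer volume; the resulting
    drinking-water intake is compared with an oral reference dose."""
    metal_mg = app_rate_Mg_ha * 1000.0 * metal_mg_per_kg  # Mg -> kg biosolids
    gw_conc = metal_mg * leach_fraction / aquifer_l       # mg/L in groundwater
    intake = gw_conc * ingestion_l_per_day / body_kg      # mg per kg-day
    return intake / rfd_mg_per_kg_day

# selenium-like example; all parameter values are illustrative only
hq_base = hazard_quotient(90, 100, 0.05, 5e-3)        # realistic-scale rate
hq_extreme = hazard_quotient(900, 1000, 0.05, 5e-3)   # 10x rate, 10x metal
```

    Because the toy HQ is linear in both application rate and concentration, the extreme case is exactly 100 times the base case, illustrating why only implausibly high rates push screening-level HQ values past 1.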

  4. The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes

    NASA Astrophysics Data System (ADS)

    Bogdanova, E. V.; Kuznetsov, A. N.

    2017-01-01

    The solution of the problem of increasing the safety of nuclear power plants implies the existence of complete and reliable information about the processes occurring in the core of a working reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it requires long computation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research into the possibility of combining stochastic and deterministic algorithms in calculations of the BN-600 reactor. This is only one part of the work, which was carried out in the framework of the graduation project at the NRC “Kurchatov Institute” in cooperation with S. S. Gorodkov and M. A. Kalugin. It considers a 2-D layer of the BN-600 reactor core from the international benchmark test published in the report IAEA-TECDOC-1623. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macroscopic cross-sections, diffusion coefficients, the effective multiplication factor, and the distributions of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.

  5. Development of probabilistic multimedia multipathway computer codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, C.; LePoire, D.; Gnanapragasam, E.

    2002-01-01

    The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
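    Architecturally, a probabilistic module of this kind wraps a deterministic code in a sampling loop: draw parameter sets from the developed distributions, run the deterministic calculation for each, and summarize the resulting dose distribution. A minimal sketch with Latin hypercube sampling and a one-line stand-in for the deterministic pathway model (the real codes' parameters and equations are not reproduced here):

```python
import random

def latin_hypercube(n, bounds, seed=0):
    """n stratified uniform samples per parameter, randomly paired:
    one draw from each of n equal strata, then shuffled per parameter."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        col = [lo + (hi - lo) * (k + rng.random()) / n for k in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))

def dose(thickness_m, erosion_m_per_yr, conc_pci_g):
    # stand-in for a deterministic RESRAD-style pathway calculation
    return conc_pci_g * erosion_m_per_yr / thickness_m

# hypothetical parameter bounds: cover thickness, erosion rate, concentration
samples = latin_hypercube(200, [(0.1, 1.0), (1e-4, 1e-3), (1.0, 10.0)])
doses = sorted(dose(*s) for s in samples)
dose_p95 = doses[int(0.95 * len(doses))]
```

    Reporting a high percentile of the sampled dose distribution, rather than a single deterministic value, is what makes the enhanced codes useful for studying how parameter uncertainty propagates into the dose estimate.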

  6. Methods For Self-Organizing Software

    DOEpatents

    Bouchard, Ann M.; Osbourn, Gordon C.

    2005-10-18

    A method for dynamically self-assembling and executing software is provided, containing machines that self-assemble execution sequences and data structures. In addition to ordered function calls (found commonly in other software methods), mutual selective bonding between bonding sites of machines actuates one or more of the bonding machines. Two or more machines can be virtually isolated by a construct, called an encapsulant, containing a population of machines and potentially other encapsulants that can only bond with each other. A hierarchical software structure can be created using nested encapsulants. Multi-threading is implemented by populations of machines in different encapsulants that are interacting concurrently. Machines and encapsulants can move in and out of other encapsulants, thereby changing the functionality. Bonding between machines' sites can be deterministic or stochastic with bonding triggering a sequence of actions that can be implemented by each machine. A self-assembled execution sequence occurs as a sequence of stochastic binding between machines followed by their deterministic actuation. It is the sequence of bonding of machines that determines the execution sequence, so that the sequence of instructions need not be contiguous in memory.
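    The central idea, stochastic bonding order determining the execution sequence, with each bond triggering deterministic actuation, can be sketched as follows. This is a toy illustration, not the patented implementation; the machine set and actions are invented.

```python
import random

def self_assemble(machines, seed):
    """Toy sketch: a stochastic 'bonding' event picks which free machine
    binds next; each bond triggers that machine's deterministic actuation.
    The bonding order *is* the execution sequence."""
    rng = random.Random(seed)
    free = dict(machines)
    trace, value = [], 0
    while free:
        name = rng.choice(sorted(free))  # stochastic selective bonding
        trace.append(name)
        value = free.pop(name)(value)    # deterministic actuation
    return trace, value

machines = {"inc": lambda v: v + 1,
            "double": lambda v: v * 2,
            "square": lambda v: v * v}
trace_a, result_a = self_assemble(machines, seed=1)
trace_b, result_b = self_assemble(machines, seed=2)
```

    Different random bonding orders yield different execution sequences (and possibly different results) from the same population of machines, without any contiguous instruction sequence in memory.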

  7. A Comparison of Deterministic and Stochastic Modeling Approaches for Biochemical Reaction Systems: On Fixed Points, Means, and Modes.

    PubMed

    Hahl, Sayuri K; Kremling, Andreas

    2016-01-01

    In the mathematical modeling of biochemical reactions, a convenient standard approach is to use ordinary differential equations (ODEs) that follow the law of mass action. However, this deterministic ansatz is based on simplifications; in particular, it neglects noise, which is inherent to biological processes. In contrast, the stochasticity of reactions is captured in detail by the discrete chemical master equation (CME). Therefore, the CME is frequently applied to mesoscopic systems, where copy numbers of involved components are small and random fluctuations are thus significant. Here, we compare those two common modeling approaches, aiming at identifying parallels and discrepancies between deterministic variables and possible stochastic counterparts like the mean or modes of the state space probability distribution. To that end, a mathematically flexible reaction scheme of autoregulatory gene expression is translated into the corresponding ODE and CME formulations. We show that in the thermodynamic limit, deterministic stable fixed points usually correspond well to the modes in the stationary probability distribution. However, this connection might be disrupted in small systems. The discrepancies are characterized and systematically traced back to the magnitude of the stoichiometric coefficients and to the presence of nonlinear reactions. These factors are found to synergistically promote large and highly asymmetric fluctuations. As a consequence, bistable but unimodal, and monostable but bimodal systems can emerge. This clearly challenges the role of ODE modeling in the description of cellular signaling and regulation, where some of the involved components usually occur in low copy numbers. Nevertheless, systems whose bimodality originates from deterministic bistability are found to sustain a more robust separation of the two states compared to bimodal, but monostable systems. 
In regulatory circuits that require precise coordination, ODE modeling is thus still expected to provide relevant indications on the underlying dynamics.
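    The ODE-versus-CME discrepancy is easy to reproduce for a nonlinear reaction, though the system below is a generic birth-death example of my choosing, not the paper's autoregulatory gene-expression scheme. With constant birth at rate k and nonlinear death at rate g·x·(x−1), stationarity of the CME requires k = g·E[X(X−1)] = g·(Var + m² − m), which forces the stochastic mean m below the deterministic fixed point whenever the variance is positive.

```python
import math
import random

def ssa_time_average(k, g, t_end=2000.0, seed=3):
    """Gillespie (SSA) simulation of the CME for birth at constant rate k
    and nonlinear death at rate g*x*(x-1), e.g. dimerization-driven decay.
    Returns the time-averaged copy number."""
    rng = random.Random(seed)
    x, t, area = 0, 0.0, 0.0
    while t < t_end:
        birth, death = k, g * x * (x - 1)
        total = birth + death
        dt = -math.log(rng.random()) / total   # exponential waiting time
        area += x * min(dt, t_end - t)
        t += dt
        x += 1 if rng.random() * total < birth else -1
    return area / t_end

k, g = 2.0, 0.1
# deterministic fixed point of dx/dt = k - g*x*(x - 1)
ode_fixed_point = 0.5 + math.sqrt(0.25 + k / g)   # = 5.0 for these rates
mean_copy_number = ssa_time_average(k, g)
```

    At these low copy numbers the simulated stationary mean sits visibly below the ODE fixed point of 5, a small-system discrepancy of exactly the kind the abstract traces to nonlinear reactions.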

  8. Randomized central limit theorems: A unified theory.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic: all ensemble components are scaled by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic: the ensemble components are scaled by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs), in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes, and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.

  9. Non-Lipschitzian dynamics for neural net modelling

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    1989-01-01

    Failure of the Lipschitz condition in unstable equilibrium points of dynamical systems leads to a multiple-choice response to an initial deterministic input. The evolution of such systems is characterized by a special type of unpredictability measured by unbounded Liapunov exponents. Possible relation of these systems to future neural networks is discussed.

  10. Inverse and forward modeling under uncertainty using MRE-based Bayesian approach

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Rubin, Y.

    2004-12-01

    A stochastic inverse approach for subsurface characterization is proposed and applied to the shallow vadose zone at a winery field site in northern California and to a gas reservoir at the Ormen Lange field site in the North Sea. The approach is formulated in a Bayesian-stochastic framework, whereby the unknown parameters are identified in terms of their statistical moments or their probabilities. Instead of the traditional single-valued estimation/prediction provided by deterministic methods, the approach gives a probability distribution for an unknown parameter. This allows calculating the mean, the mode, and the confidence interval, which is useful for a rational treatment of uncertainty and its consequences. The approach also allows incorporating data of various types and different error levels, including measurements of state variables as well as information such as bounds on or statistical moments of the unknown parameters, which may represent prior information. To obtain minimally subjective prior probabilities required for the Bayesian approach, the principle of Minimum Relative Entropy (MRE) is employed. The approach is tested at field sites for flow parameter identification and soil moisture estimation in the vadose zone and for gas saturation estimation at great depth below the ocean floor. Results indicate the potential of coupling various types of field data within an MRE-based Bayesian formalism for improving the estimation of the parameters of interest.

  11. USING A RISK-BASED METHODOLOGY FOR THE TRANSFER OF RADIOACTIVE MATERIAL WITHIN THE SAVANNAH RIVER SITE BOUNDARY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loftin, B; Watkins, R; Loibl, M

    2010-06-03

    Shipment of radioactive materials (RAM) is discussed in the Code of Federal Regulations in parts of both 49 CFR and 10 CFR. The regulations provide the requirements and rules necessary for the safe shipment of RAM across public highways, railways, waterways, and through the air. These shipments are sometimes referred to as in-commerce shipments. Shipments of RAM entirely within the boundaries of Department of Energy sites, such as the Savannah River Site (SRS), can be made using methodology allowing provisions to maintain equivalent safety while deviating from the regulations for in-commerce shipments. These onsite shipments are known as transfers at the SRS. These transfers must follow the requirements approved in a site-specific Transportation Safety Document (TSD). The TSD defines how the site will transfer materials so that they have equivalence to the regulations. These equivalences are documented in an Onsite Safety Assessment (OSA). The OSA can show how a particular packaging used onsite is equivalent to that which would be used for an in-commerce shipment. This is known as a deterministic approach. However, when a deterministic approach is not viable, the TSD allows for a risk-based OSA to be written. These risk-based assessments show that if a packaging does not provide the necessary safety to ensure that materials are not released (during normal or accident conditions) then the worst-case release of materials does not result in a dose consequence worse than that defined for the SRS. This paper will discuss recent challenges and successes using this methodology at the SRS.

  12. Uncertainty-accounting environmental policy and management of water systems.

    PubMed

    Baresel, Christian; Destouni, Georgia

    2007-05-15

    Environmental policies for water quality and ecosystem management do not commonly require explicit stochastic accounts of uncertainty and risk associated with the quantification and prediction of waterborne pollutant loads and abatement effects. In this study, we formulate and investigate a possible environmental policy that does require an explicit stochastic uncertainty account. We compare both the environmental and economic resource allocation performance of such an uncertainty-accounting environmental policy with that of deterministic, risk-prone and risk-averse environmental policies under a range of different hypothetical, yet still possible, scenarios. The comparison indicates that a stochastic uncertainty-accounting policy may perform better than deterministic policies over a range of different scenarios. Even in the absence of reliable site-specific data, reported literature values appear to be useful for such a stochastic account of uncertainty.
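    The contrast between a deterministic policy and an uncertainty-accounting one can be sketched with a hypothetical pollutant-load distribution (the study's actual scenarios and abatement economics are not reproduced here): sizing abatement for the mean load versus sizing it for a high quantile of the load yields very different compliance probabilities.

```python
import random

def violation_prob(abatement, loads, target):
    """Fraction of sampled loads that still exceed the target after
    a proportional abatement (1 - abatement) is applied."""
    return sum((1 - abatement) * load > target for load in loads) / len(loads)

rng = random.Random(0)
loads = [rng.lognormvariate(0.0, 0.5) for _ in range(5000)]  # uncertain load
target = 0.8

mean_load = sum(loads) / len(loads)
p95_load = sorted(loads)[int(0.95 * len(loads))]

x_det = max(0.0, 1 - target / mean_load)  # deterministic: sized for the mean
x_unc = max(0.0, 1 - target / p95_load)   # stochastic: sized for the 95th pct

pv_det = violation_prob(x_det, loads, target)
pv_unc = violation_prob(x_unc, loads, target)
```

    The deterministic policy, tuned to the mean, is violated whenever the load exceeds its mean (here roughly 40% of the time for a skewed load distribution), while the uncertainty-accounting policy caps the violation probability near its chosen 5% risk level, at the price of more abatement.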

  13. Optimization Under Uncertainty of Site-Specific Turbine Configurations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quick, J.; Dykes, K.; Graf, P.

    Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. Lastly, if there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.

  14. Optimization under Uncertainty of Site-Specific Turbine Configurations: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quick, Julian; Dykes, Katherine; Graf, Peter

    Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.

  15. Optimization Under Uncertainty of Site-Specific Turbine Configurations

    DOE PAGES

    Quick, J.; Dykes, K.; Graf, P.; ...

    2016-10-03

    Uncertainty affects many aspects of wind energy plant performance and cost. In this study, we explore opportunities for site-specific turbine configuration optimization that accounts for uncertainty in the wind resource. As a demonstration, a simple empirical model for wind plant cost of energy is used in an optimization under uncertainty to examine how different risk appetites affect the optimal selection of a turbine configuration for sites of different wind resource profiles. If there is unusually high uncertainty in the site wind resource, the optimal turbine configuration diverges from the deterministic case, and a generally more conservative design is obtained with increasing risk aversion on the part of the designer.

  16. Caring Teachers and Symbolic Violence: Engaging the Productive Struggle in Practice and Research

    ERIC Educational Resources Information Center

    Scott, Brigitte C.

    2012-01-01

    Symbolic violence may not be a desirable theory to apply to public schooling--its structuralist limitations render it deterministic, lacking in human agency, and unpalatable to researchers and educators who see schools as viable and productive sites of social transformation. Perhaps for these reasons, it seems little has been written about…

  17. Persistence and memory timescales in root-zone soil moisture dynamics

    Treesearch

    Khaled Ghannam; Taro Nakai; Athanasios Paschalis; Andrew C. Oishi; Ayumi Kotani; Yasunori Igarashi; Tomo' omi Kumagai; Gabriel G. Katul

    2016-01-01

    The memory timescale that characterizes root-zone soil moisture remains the dominant measure in seasonal forecasts of land-climate interactions. This memory is a quasi-deterministic timescale associated with the losses (e.g., evapotranspiration) from the soil column and is often interpreted as persistence in soil moisture states. Persistence, however,...

  18. Implementation and characterization of active feed-forward for deterministic linear optics quantum computing

    NASA Astrophysics Data System (ADS)

    Böhi, P.; Prevedel, R.; Jennewein, T.; Stefanov, A.; Tiefenbacher, F.; Zeilinger, A.

    2007-12-01

    In general, quantum computer architectures that are based on the dynamical evolution of quantum states also require the processing of classical information, obtained by measurements of the actual qubits that make up the computer. This classical processing involves fast, active adaptation of subsequent measurements and real-time error correction (feed-forward), so that quantum gates and algorithms can be executed in a deterministic and hence error-free fashion. This is also true in the linear optical regime, where the quantum information is stored in the polarization state of photons. The adaptation of the photon's polarization can be achieved very quickly by employing electro-optical modulators, which change the polarization of a passing photon upon application of a high voltage. In this paper we discuss techniques for implementing fast, active feed-forward at the single-photon level and present their application in the context of photonic quantum computing. This includes the working principles and the characterization of the EOMs as well as a description of the switching logic, both of which allow quantum computation at an unprecedented speed.

  19. Stochastic simulations on a model of circadian rhythm generation.

    PubMed

    Miura, Shigehiro; Shimokawa, Tetsuya; Nomura, Taishin

    2008-01-01

    Biological phenomena are often modeled by differential equations, where the states of a model system are described by continuous real values. When we consider concentrations of molecules as dynamical variables for a set of biochemical reactions, we implicitly assume that the numbers of molecules are large enough that their changes can be regarded as continuous and described deterministically. However, for a system with small numbers of molecules, changes in their numbers are discrete and molecular noise becomes significant. In such cases, models with deterministic differential equations may be inappropriate, and the reactions must be described by stochastic equations. In this study, we focus on clock gene expression for circadian rhythm generation, which is known to involve small numbers of molecules. It is therefore appropriate for the system to be modeled by stochastic equations and analyzed by stochastic simulation methodologies. The interlocked feedback model proposed by Ueda et al. as a set of deterministic ordinary differential equations provides the basis of our analyses. We apply two stochastic simulation methods, namely Gillespie's direct method and the stochastic differential equation method, also by Gillespie, to the interlocked feedback model. To this end, we first reformulate the original differential equations back into elementary chemical reactions. With those reactions, we simulate and analyze the dynamics of the model using the two methods, compare them with the dynamics obtained from the original deterministic model, and characterize how the dynamics depend on the simulation methodology.
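    A minimal sketch of Gillespie's direct method for a single production/degradation reaction pair, the simplest instance of the stochastic treatment the study applies to the interlocked feedback model. The species and rate constants are illustrative, not those of the clock gene model:

```python
import numpy as np

def gillespie_birth_death(k=20.0, gamma=1.0, t_end=200.0, seed=1):
    """Gillespie's direct method for the reactions
         0 -> M   at rate k        (production)
         M -> 0   at rate gamma*M  (first-order degradation)
    Returns the jump times and the molecule count after each jump."""
    rng = np.random.default_rng(seed)
    t, m = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        a1, a2 = k, gamma * m          # reaction propensities
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0) # waiting time to the next reaction
        if rng.random() * a0 < a1:     # choose which reaction fires
            m += 1
        else:
            m -= 1
        times.append(t)
        counts.append(m)
    return np.array(times), np.array(counts)
```

    The time-averaged copy number fluctuates around the deterministic fixed point k/gamma (here 20), with relative fluctuations that would vanish only in the large-copy-number limit.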

  20. Validation of a Deterministic Vibroacoustic Response Prediction Model

    NASA Technical Reports Server (NTRS)

    Caimi, Raoul E.; Margasahayam, Ravi

    1997-01-01

    This report documents the recently completed effort to validate a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Statistical Energy Analysis (SEA) methods are not suitable in this range. Measurements of launch-induced acoustic loads and the subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used to develop the structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.

  1. Classification and unification of the microscopic deterministic traffic models.

    PubMed

    Yang, Bo; Monterola, Christopher

    2015-10-01

    We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
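    A minimal sketch of a generalized optimal velocity model of the kind discussed above, integrated on a ring road. The OV function and parameters are illustrative, not those of any specific model compared in the paper:

```python
import numpy as np

def ov(headway, v0=30.0, h_c=25.0, w=10.0):
    """Bando-type optimal velocity function (parameters illustrative):
    desired speed as a smooth, saturating function of headway."""
    return v0 * (np.tanh((headway - h_c) / w) + np.tanh(h_c / w)) / (1 + np.tanh(h_c / w))

def simulate_ring(n_cars=20, road=1000.0, kappa=1.0, dt=0.01, steps=5000):
    """Euler integration of dv_n/dt = kappa * (V(headway_n) - v_n) for
    identical drivers on a periodic (ring) road."""
    x = np.arange(n_cars) * road / n_cars   # uniform spacing
    spacing = road / n_cars
    v = np.full(n_cars, ov(spacing))        # start at the uniform-flow speed
    for _ in range(steps):
        headway = np.roll(x, -1) - x
        headway[-1] += road                 # wrap around the ring
        v += kappa * (ov(headway) - v) * dt
        x += v * dt
    return x, v
```

    With this sensitivity kappa the uniform-flow ground state is linearly stable (kappa > 2 V'(h)), so the equally spaced platoon simply persists; lowering kappa below that threshold is the classic route to stop-and-go waves.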

  2. On the applicability of low-dimensional models for convective flow reversals at extreme Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Mannattil, Manu; Pandey, Ambrish; Verma, Mahendra K.; Chakraborty, Sagar

    2017-12-01

    Constructing simpler models, either stochastic or deterministic, for exploring the phenomenon of flow reversals in fluid systems is in vogue across disciplines. Using direct numerical simulations and nonlinear time series analysis, we illustrate that the basic nature of flow reversals in convecting fluids can depend on the dimensionless parameters describing the system. Specifically, we find evidence of low-dimensional behavior in flow reversals occurring at zero Prandtl number, whereas we fail to find such signatures for reversals at infinite Prandtl number. Thus, even in a single system, as one varies the system parameters, one can encounter reversals that are fundamentally different in nature. Consequently, we conclude that a single general low-dimensional deterministic model cannot faithfully characterize flow reversals for every set of parameter values.

  3. Quantitative susceptibility mapping of human brain at 3T: a multisite reproducibility study.

    PubMed

    Lin, P-Y; Chao, T-C; Wu, M-L

    2015-03-01

    Quantitative susceptibility mapping of the human brain has demonstrated strong potential in examining iron deposition, which may help in investigating possible brain pathology. This study assesses the reproducibility of quantitative susceptibility mapping across different imaging sites. In this study, the susceptibility values of 5 regions of interest in the human brain were measured on 9 healthy subjects following calibration by using phantom experiments. Each of the subjects was imaged 5 times on 1 scanner with the same procedure repeated on 3 different 3T systems so that both within-site and cross-site quantitative susceptibility mapping precision levels could be assessed. Two quantitative susceptibility mapping algorithms, similar in principle, one by using iterative regularization (iterative quantitative susceptibility mapping) and the other with analytic optimal solutions (deterministic quantitative susceptibility mapping), were implemented, and their performances were compared. Results show that while deterministic quantitative susceptibility mapping had nearly 700 times faster computation speed, residual streaking artifacts seem to be more prominent compared with iterative quantitative susceptibility mapping. With quantitative susceptibility mapping, the putamen, globus pallidus, and caudate nucleus showed smaller imprecision on the order of 0.005 ppm, whereas the red nucleus and substantia nigra, closer to the skull base, had a somewhat larger imprecision of approximately 0.01 ppm. Cross-site errors were not significantly larger than within-site errors. Possible sources of estimation errors are discussed. The reproducibility of quantitative susceptibility mapping in the human brain in vivo is regionally dependent, and the precision levels achieved with quantitative susceptibility mapping should allow longitudinal and multisite studies such as aging-related changes in brain tissue magnetic susceptibility. © 2015 by American Journal of Neuroradiology.

  4. Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments.

    PubMed

    Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter

    2017-01-01

    Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments - one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line model. The derived time domain channel model takes into account major propagation controlling factors including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both channel delay spread and angular spread for tunnel environments are relatively smaller compared to that of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments.
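    The tapped-delay-line description lends itself to a short sketch of the delay-spread calculation and the corresponding frequency-domain transfer function. The tap gains and delays below are placeholders, not values from the derived models:

```python
import numpy as np

def rms_delay_spread(gains, delays_ns):
    """RMS delay spread of a tapped-delay-line CIR
       h(t) = sum_k a_k * delta(t - tau_k),
    i.e. the power-weighted standard deviation of the tap delays."""
    g = np.asarray(gains, dtype=complex)
    tau = np.asarray(delays_ns, dtype=float)
    p = np.abs(g) ** 2
    p = p / p.sum()
    mean = np.sum(p * tau)
    return np.sqrt(np.sum(p * (tau - mean) ** 2))

def transfer_function(gains, delays_ns, freqs_ghz):
    """Frequency-domain view of the same channel:
       H(f) = sum_k a_k * exp(-j*2*pi*f*tau_k)   (f in GHz, tau in ns)."""
    g = np.asarray(gains, dtype=complex)
    tau = np.asarray(delays_ns, dtype=float)
    f = np.asarray(freqs_ghz, dtype=float)[:, None]
    return np.sum(g * np.exp(-2j * np.pi * f * tau), axis=1)
```

    Two equal-power taps 20 ns apart, for example, give an RMS delay spread of 10 ns, and H(0) is simply the sum of the tap gains.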

  5. Time Domain and Frequency Domain Deterministic Channel Modeling for Tunnel/Mining Environments

    PubMed Central

    Zhou, Chenming; Jacksha, Ronald; Yan, Lincan; Reyes, Miguel; Kovalchik, Peter

    2018-01-01

    Understanding wireless channels in complex mining environments is critical for designing optimized wireless systems operated in these environments. In this paper, we propose two physics-based, deterministic ultra-wideband (UWB) channel models for characterizing wireless channels in mining/tunnel environments - one in the time domain and the other in the frequency domain. For the time domain model, a general Channel Impulse Response (CIR) is derived and the result is expressed in the classic UWB tapped delay line model. The derived time domain channel model takes into account major propagation controlling factors including tunnel or entry dimensions, frequency, polarization, electrical properties of the four tunnel walls, and transmitter and receiver locations. For the frequency domain model, a complex channel transfer function is derived analytically. Based on the proposed physics-based deterministic channel models, channel parameters such as delay spread, multipath component number, and angular spread are analyzed. It is found that, despite the presence of heavy multipath, both channel delay spread and angular spread for tunnel environments are relatively smaller compared to that of typical indoor environments. The results and findings in this paper have application in the design and deployment of wireless systems in underground mining environments. PMID:29457801

  6. The Study of the Relationship between Probabilistic Design and Axiomatic Design Methodology. Volume 2

    NASA Technical Reports Server (NTRS)

    Onwubiko, Chin-Yere; Onyebueke, Landon

    1996-01-01

    The structural design, or the design of machine elements, has traditionally been based on deterministic design methodology. The deterministic method considers all design parameters to be known with certainty. This methodology is therefore inadequate for designing complex structures that are subjected to a variety of complex, severe loading conditions. A nonlinear behavior that is dependent on stress, stress rate, temperature, number of load cycles, and time is observed in all components subjected to complex conditions. These complex conditions introduce uncertainties; hence, the actual margin of safety remains unknown. In the deterministic methodology, the contingency of failure is discounted, and a high factor of safety is therefore used. It may be most useful in situations where the structures being designed are simple. The probabilistic method is concerned with the probability of non-failure performance of structures or machine elements. It is much more useful in situations where the design is characterized by complex geometry, the possibility of catastrophic failure, or sensitive loads and material properties. Also included: Comparative Study of the use of AGMA Geometry Factors and Probabilistic Design Methodology in the Design of Compact Spur Gear Set.
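    The contrast between the two methodologies can be sketched with a Monte Carlo stress-strength calculation. All distribution parameters below are hypothetical; the point is that a comfortable-looking deterministic factor of safety coexists with a nonzero probability of failure once scatter in load and resistance is accounted for:

```python
import numpy as np

def failure_probability(mu_r=400.0, sd_r=40.0, mu_s=250.0, sd_s=50.0,
                        n=200000, seed=2):
    """Monte Carlo estimate of P(strength < stress) for normally distributed
    strength R and stress demand S (units MPa, values illustrative)."""
    rng = np.random.default_rng(seed)
    r = rng.normal(mu_r, sd_r, n)   # scattered material strength
    s = rng.normal(mu_s, sd_s, n)   # scattered load-induced stress
    return float(np.mean(r < s))

# Deterministic view of the same numbers: nominal factor of safety
# fos = 400/250 = 1.6, which says nothing about the failure probability.
```

    Here the nominal factor of safety is 1.6, yet the probability of failure is on the order of 1 percent, which is exactly the information the deterministic approach discards.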

  7. Seismic hazard assessment of the cultural heritage sites: A case study in Cappadocia (Turkey)

    NASA Astrophysics Data System (ADS)

    Seyrek, Evren; Orhan, Ahmet; Dinçer, İsmail

    2014-05-01

    Turkey is one of the most seismically active regions in the world. Major earthquakes with the potential of threatening life and property occur frequently here. In the last decade, over 50,000 residents lost their lives, commonly as a result of building failures in seismic events. The Cappadocia region is one of the most important touristic sites in Turkey. At the same time, the region was added to the World Heritage List by UNESCO in 1985 for its natural, historical and cultural values. The region is adversely affected by several environmental conditions, which have been the subject of many previous studies. However, there are few studies on the seismic evaluation of the region. Some of the important historical and cultural heritage sites are: Goreme Open Air Museum, Uchisar Castle, Ortahisar Castle, Derinkuyu Underground City and Ihlara Valley. According to the seismic hazard zonation map published by the Ministry of Reconstruction and Settlement, these heritage sites fall in Zone III, Zone IV and Zone V. This map shows peak ground acceleration with a 10 percent probability of exceedance in 50 years for bedrock. In this connection, the seismic hazard at these heritage sites has to be evaluated. In this study, seismic hazard calculations are performed with both deterministic and probabilistic approaches, accounting for local site conditions. A catalog of historical and instrumental earthquakes was prepared and used in this study. The seismic sources were identified for seismic hazard assessment based on geological, seismological and geophysical information. Peak Ground Acceleration (PGA) at bedrock level is calculated for different seismic sources using available attenuation relationships applicable to Turkey. The results of the present study reveal that the seismic hazard at these sites closely matches the seismic zonation map published by the Ministry of Reconstruction and Settlement.
Keywords: Seismic Hazard Assessment, Probabilistic Approach, Deterministic Approach, Historical Heritage, Cappadocia.
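    A toy version of the probabilistic side of such a calculation, for a single point source, can be sketched as follows. The Gutenberg-Richter constants and the attenuation-relation coefficients are invented placeholders, not relations actually applicable to Turkey:

```python
import numpy as np
from math import erf, sqrt, exp

def _norm_sf(z):
    """Standard normal survival function via erf."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def annual_exceedance_rate(pga_g, dist_km=20.0, a=4.0, b=1.0,
                           m_min=5.0, m_max=7.5, dm=0.1):
    """Toy single-source PSHA: truncated Gutenberg-Richter recurrence combined
    with a *hypothetical* lognormal attenuation relation
        ln(PGA[g]) = -3.5 + 0.9*M - 1.1*ln(R[km]),  sigma_lnPGA = 0.6."""
    lam = 0.0
    m = m_min
    while m < m_max - 1e-9:
        # annual rate of events in the magnitude bin [m, m+dm)
        rate = 10 ** (a - b * m) - 10 ** (a - b * (m + dm))
        mu_ln = -3.5 + 0.9 * (m + dm / 2) - 1.1 * np.log(dist_km)
        z = (np.log(pga_g) - mu_ln) / 0.6
        lam += rate * _norm_sf(z)       # rate of events exceeding the target
        m += dm
    return lam

def prob_exceedance(pga_g, years=50.0):
    """Poisson assumption: P(at least one exceedance in `years`)."""
    return 1.0 - exp(-annual_exceedance_rate(pga_g) * years)
```

    A hazard map of the kind cited above corresponds to finding the PGA level whose 50-year exceedance probability equals 10 percent; the deterministic approach would instead evaluate the attenuation relation for a single controlling scenario.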

  8. Vibroacoustic Response of Pad Structures to Space Shuttle Launch Acoustic Loads

    NASA Technical Reports Server (NTRS)

    Margasahayam, R. N.; Caimi, Raoul E.

    1995-01-01

    This paper presents a deterministic theory for the random vibration problem for predicting the response of structures in the low-frequency range (0 to 20 hertz) of launch transients. Also presented are some innovative ways to characterize noise and highlights of ongoing test-analysis correlation efforts titled the Verification Test Article (VETA) project.

  9. The multi temporal/multi-model approach to predictive uncertainty assessment in real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Barbetta, Silvia; Coccia, Gabriele; Moramarco, Tommaso; Brocca, Luca; Todini, Ezio

    2017-08-01

    This work extends the multi-temporal approach of the Model Conditional Processor (MCP-MT) to the multi-model case and to the four Truncated Normal Distributions (TNDs) approach, demonstrating the improvement over the single-temporal one. The study is framed in the context of probabilistic Bayesian decision-making, which is appropriate for taking rational decisions on uncertain future outcomes. As opposed to the direct use of deterministic forecasts, the probabilistic forecast identifies a predictive probability density function that represents fundamental knowledge about future occurrences. The added value of MCP-MT is the identification of the probability that a critical situation will happen within the forecast lead time and when, more likely, it will occur. MCP-MT is thoroughly tested for both single-model and multi-model configurations at a gauged site on the Tiber River, central Italy. The stages forecasted by two operative deterministic models, STAFOM-RCM and MISDc, are considered for the study. The dataset used for the analysis consists of hourly data from 34 flood events selected from a time series of six years. MCP-MT improves on the original models' forecasts: the peak overestimation and the delayed rising-limb forecast, characterizing MISDc and STAFOM-RCM respectively, are significantly mitigated, with the mean error on peak stage reduced from 45 to 5 cm and the coefficient of persistence increased from 0.53 up to 0.75. The results show that MCP-MT outperforms the single-temporal approach and is potentially useful for supporting decision-making, because the exceedance probability of hydrometric thresholds within a forecast horizon and the most probable flooding time can be estimated.
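    The coefficient of persistence quoted above is a standard skill score: one minus the ratio of the forecast error to the error of a naive persistence forecast that simply propagates the last observation over the lead time. A sketch:

```python
import numpy as np

def coefficient_of_persistence(obs, forecast, lead):
    """CP = 1 - SSE(forecast) / SSE(persistence benchmark), where the
    benchmark forecast for time t is the observation at time t - lead.
    CP = 1 for a perfect forecast, 0 for one no better than persistence."""
    obs = np.asarray(obs, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    err_model = np.sum((obs[lead:] - forecast[lead:]) ** 2)
    err_persist = np.sum((obs[lead:] - obs[:-lead]) ** 2)
    return 1.0 - err_model / err_persist
```

    An increase from 0.53 to 0.75, as reported, thus means the processed forecast removes roughly half of the error variance that the raw forecast still left relative to the persistence benchmark.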

  10. Scattering effects of machined optical surfaces

    NASA Astrophysics Data System (ADS)

    Thompson, Anita Kotha

    1998-09-01

    Optical fabrication is one of the most labor-intensive industries in existence. Lensmakers use pitch to affix glass blanks to metal chucks that hold the glass as they grind it with tools that have not changed much in fifty years. Recent demands placed on traditional optical fabrication processes in terms of surface accuracy, smoothness, and cost effectiveness have resulted in the exploitation of precision machining technology to develop a new generation of computer numerically controlled (CNC) optical fabrication equipment. This new kind of precision machining process is called deterministic microgrinding. The most conspicuous feature of optical surfaces manufactured by precision machining processes (such as single-point diamond turning or deterministic microgrinding) is the presence of residual cutting tool marks. These residual tool marks exhibit a highly structured topography of periodic azimuthal or radial deterministic marks in addition to random microroughness. These distinct topographic features give rise to surface scattering effects that can significantly degrade optical performance. In this dissertation project we investigate the scattering behavior of machined optical surfaces and their imaging characteristics. In particular, we characterize the residual optical fabrication errors and relate the resulting scattering behavior to the tool and machine parameters in order to evaluate and improve the deterministic microgrinding process. Other desired information derived from the investigation of scattering behavior is the set of optical fabrication tolerances necessary to satisfy specific image quality requirements. Optical fabrication tolerances are a major cost driver for any precision optical manufacturing technology. The derivation and control of the optical fabrication tolerances necessary for different applications and operating wavelength regimes will play a unique and central role in establishing deterministic microgrinding as a preferred and cost-effective optical fabrication process. Other well-understood optical fabrication processes will also be reviewed, and a performance comparison with the conventional grinding and polishing technique will be made to determine any inherent advantages in the optical quality of surfaces produced by other techniques.

  11. Nano-iron Tracer Test for Characterizing Preferential Flow Path in Fractured Rock

    NASA Astrophysics Data System (ADS)

    Chia, Y.; Chuang, P. Y.

    2015-12-01

    Deterministic description of the discrete features interpreted from site characterization is desirable for developing a discrete fracture network conceptual model. It is often difficult, however, to delineate a preferential flow path through a network of discrete fractures in the field. A preliminary cross-borehole nano-iron tracer test was conducted to characterize the preferential flow path in fractured shale bedrock at a hydrogeological research station. Prior to the test, heat-pulse flowmeter measurements were performed to detect permeable fracture zones at both the injection well and the observation well. While a few fracture zones were found to be permeable, most were not. A chemical reduction method was used to synthesize nano zero-valent iron particles with diameters of 50-150 nm. The conductivity of the nano-iron solution is about 3100 μS/cm. The recorded fluid conductivity shows the arrival of the nano-iron solution in the observation well 11.5 minutes after it was released from the injection well. The magnetism of zero-valent iron allows it to be adsorbed onto a magnet array designed to locate the depth of the incoming tracer. Nearly all of the iron adsorbed on the magnet array in the observation well was found near the most permeable fracture zone. The test results revealed a preferential flow path through a permeable fracture zone between the injection well and the observation well. The estimated hydraulic conductivity of the connected fracture is 2.2 × 10⁻³ m/s. This preliminary study indicates that the nano-iron tracer test has the potential to characterize preferential flow paths in fractured rock.
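    The travel-time interpretation behind such a test can be sketched in two lines. The well spacing, effective porosity, and hydraulic gradient below are hypothetical (the abstract does not state them), so this sketch does not reproduce the quoted conductivity value:

```python
def apparent_velocity(spacing_m, arrival_min):
    """Apparent tracer velocity: straight-line well spacing over first-arrival
    time (spacing is a hypothetical placeholder)."""
    return spacing_m / (arrival_min * 60.0)

def hydraulic_conductivity(velocity_ms, effective_porosity, gradient):
    """Darcy relation for seepage velocity, v = K*i/n_e, rearranged to
    K = v * n_e / i (porosity and gradient are illustrative inputs)."""
    return velocity_ms * effective_porosity / gradient
```

    For example, an 11.5-minute arrival over an assumed 10 m spacing gives an apparent velocity of about 0.014 m/s; converting that to a conductivity then hinges entirely on the assumed fracture porosity and gradient.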

  12. Dual Roles for Spike Signaling in Cortical Neural Populations

    PubMed Central

    Ballard, Dana H.; Jehee, Janneke F. M.

    2011-01-01

    A prominent feature of signaling in cortical neurons is the randomness of the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate have repeatedly been shown to be correlated with stimuli. However, while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing γ-frequency-range multi-cell action potential correlations, together with spike-timing-dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, the spikes need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models, such as orientation tuning and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding. PMID:21687798
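    The dual-role idea can be caricatured in a few lines. The latency mapping and the routing pool below are invented toys, not the paper's model; they only illustrate how a deterministic within-cycle spike time can coexist with Poisson-like counts at any single cell:

```python
import numpy as np

def latency_code(intensity, gamma_period_ms=25.0):
    """Deterministic gamma latency code: stronger drive -> earlier spike
    phase within a ~25 ms gamma cycle (the linear mapping is illustrative)."""
    return gamma_period_ms * (1.0 - float(np.clip(intensity, 0.0, 1.0)))

def route_spikes(intensity, pool_size, n_cycles, seed=5):
    """Dual-role sketch: the spike *timing* in each cycle is deterministic,
    but the spike is probabilistically routed through one of `pool_size`
    interchangeable cells, so each cell's own count looks random."""
    rng = np.random.default_rng(seed)
    carriers = rng.integers(0, pool_size, n_cycles)   # which cell fires
    latencies = np.full(n_cycles, latency_code(intensity))
    return carriers, latencies
```

    In this caricature the stimulus is perfectly recoverable from any single cycle's latency, even though each individual cell fires on only a random fraction of cycles.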

  13. El Niño-Southern Oscillation frequency cascade

    DOE PAGES

    Stuecker, Malte F.; Jin, Fei -Fei; Timmermann, Axel

    2015-10-19

    The El Niño-Southern Oscillation (ENSO) phenomenon, the most pronounced feature of internally generated climate variability, occurs on interannual timescales and impacts the global climate system through an interaction with the annual cycle. The tight coupling between ENSO and the annual cycle is particularly pronounced over the tropical Western Pacific. In this paper, we show that this nonlinear interaction results in a frequency cascade in the atmospheric circulation, which is characterized by deterministic high-frequency variability on near-annual and subannual timescales. Finally, through climate model experiments and observational analysis, it is documented that a substantial fraction of the anomalous Northwest Pacific anticyclone variability, which is the main atmospheric link between ENSO and the East Asian Monsoon system, can be explained by these interactions and is thus deterministic and potentially predictable.

  14. Deterministic separation of cancer cells from blood at 10 mL/min

    NASA Astrophysics Data System (ADS)

    Loutherback, Kevin; D'Silva, Joseph; Liu, Liyu; Wu, Amy; Austin, Robert H.; Sturm, James C.

    2012-12-01

    Circulating tumor cells (CTCs) and circulating clusters of cancer and stromal cells have been identified in the blood of patients with malignant cancer and can be used as a diagnostic for disease severity, to assess the efficacy of different treatment strategies, and possibly to determine the eventual location of metastatic invasions for possible treatment. There is thus a critical need to isolate, propagate and characterize viable CTCs and clusters of cancer cells with their associated stroma cells. Here, we present a microfluidic device for mL/min flow rate, continuous-flow capture of viable CTCs from blood using deterministic lateral displacement (DLD) arrays. We show that a DLD array device can isolate CTCs from blood with a capture efficiency greater than 85% at volumetric flow rates of up to 10 mL/min with no effect on cell viability.
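    For DLD arrays, a widely used empirical relation (the Davis correlation) ties the critical particle diameter to the post gap and the row-shift fraction; a sketch, with the caveat that this paper's actual array geometry is not given here:

```python
def dld_critical_diameter(gap_um, row_shift_fraction):
    """Empirical critical diameter for a deterministic lateral displacement
    array (Davis correlation):
        D_c = 1.4 * g * epsilon**0.48,
    where g is the post gap and epsilon the row-shift fraction.
    Particles larger than D_c are displaced ('bumped') along the array axis;
    smaller particles zigzag with the average flow."""
    return 1.4 * gap_um * row_shift_fraction ** 0.48
```

    For instance, a 10 μm gap with a 0.1 row-shift fraction yields a critical diameter near 4.6 μm, which is how such arrays are tuned to bump CTC-sized cells while letting smaller blood cells pass.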

  15. Current fluctuations in periodically driven systems

    NASA Astrophysics Data System (ADS)

    Barato, Andre C.; Chetrite, Raphael

    2018-05-01

    Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.
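    The maximal-Floquet-exponent characterization can be sketched for a toy two-state model with time-periodic rates (the rates are invented, not those of the paper's case studies). The scaled cumulant generating function of the jump count is read off the one-period propagator of the tilted generator:

```python
import numpy as np

def scgf(s, T=1.0, n_steps=4000):
    """Scaled cumulant generating function lambda(s) for the number of 0->1
    jumps in a two-state Markov process with time-periodic rates:
        lambda(s) = (1/T) * log(spectral radius of the one-period
                    propagator of the s-tilted generator),
    i.e. a maximal Floquet exponent.  Rates below are illustrative."""
    dt = T / n_steps
    U = np.eye(2)
    for k in range(n_steps):
        t = (k + 0.5) * dt
        w01 = 1.0 + 0.5 * np.sin(2 * np.pi * t / T)   # rate 0 -> 1
        w10 = 2.0 + 0.5 * np.cos(2 * np.pi * t / T)   # rate 1 -> 0
        L = np.array([[-w01, w10],
                      [w01 * np.exp(s), -w10]])        # tilt the counted jump
        U = (np.eye(2) + dt * L) @ U                   # Euler time-ordered product
    return float(np.log(np.max(np.abs(np.linalg.eigvals(U)))) / T)
```

    By construction lambda(0) = 0, and since only forward (0 to 1) jumps are counted, lambda(s) is increasing through the origin; its derivatives at s = 0 give the mean jump rate and its fluctuations over one driving period.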

  16. El Niño-Southern Oscillation frequency cascade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuecker, Malte F.; Jin, Fei -Fei; Timmermann, Axel

    The El Niño-Southern Oscillation (ENSO) phenomenon, the most pronounced feature of internally generated climate variability, occurs on interannual timescales and impacts the global climate system through an interaction with the annual cycle. The tight coupling between ENSO and the annual cycle is particularly pronounced over the tropical Western Pacific. In this paper, we show that this nonlinear interaction results in a frequency cascade in the atmospheric circulation, which is characterized by deterministic high-frequency variability on near-annual and subannual timescales. Finally, through climate model experiments and observational analysis, it is documented that a substantial fraction of the anomalous Northwest Pacific anticyclone variability, which is the main atmospheric link between ENSO and the East Asian Monsoon system, can be explained by these interactions and is thus deterministic and potentially predictable.

  17. Safety design approach for external events in Japan sodium-cooled fast reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamano, H.; Kubo, S.; Tani, A.

    2012-07-01

    This paper describes a safety design approach for external events in the design study of the Japan sodium-cooled fast reactor. An emphasis is the introduction of a design extension external condition (DEEC). In addition to seismic design, other external events such as tsunami, strong wind, and abnormal temperature were addressed in this study. From a wide variety of external events consisting of natural hazards and human-induced ones, a screening method was developed in terms of siting, consequence, and frequency to select representative events. Design approaches for these events were categorized on a probabilistic, statistical or deterministic basis. External hazard conditions were considered mainly for DEECs. In the probabilistic approach, the DEECs of earthquake, tsunami and strong wind were defined at 1/10 of the exceedance probability of the external design bases. The other representative DEECs were defined based on statistical or deterministic approaches. (authors)

  18. Fuzzy linear model for production optimization of mining systems with multiple entities

    NASA Astrophysics Data System (ADS)

    Vujic, Slobodan; Benovic, Tomo; Miljanovic, Igor; Hudej, Marjan; Milutinovic, Aleksandar; Pavlovic, Petar

    2011-12-01

    Planning and production optimization in mining systems comprising multiple mines or several work sites (entities) was studied using fuzzy linear programming (LP). LP is among the most commonly used operations research methods in mining engineering. After an introductory review of the properties and limitations of applying LP, short reviews of the general settings of deterministic and fuzzy LP models are presented. For the purpose of comparative analysis, the application of both LP models is illustrated using the example of the Bauxite Basin Niksic with five mines. The assessment shows that LP is an efficient mathematical modeling tool for production planning and for solving many other single-criteria optimization problems in mining engineering. After comparing the advantages and deficiencies of the deterministic and fuzzy LP models, the conclusion presents the benefits of the fuzzy LP model, while also stating that seeking the optimal production plan requires an overall analysis that encompasses both LP modeling approaches.
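
    The contrast between a crisp (deterministic) LP and Zimmermann's symmetric fuzzy-LP formulation can be sketched on a hypothetical two-product example. The numbers, the toy vertex-enumeration solver, and the tolerances below are illustrative assumptions, not the Bauxite Basin Niksic model:

```python
from itertools import combinations

def solve_lp(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (two variables), by
    enumerating intersections of constraint boundaries (toy solver)."""
    lines = list(zip(A, b)) + [((1.0, 0.0), 0.0), ((0.0, 1.0), 0.0)]
    best = None
    for (a1, b1), (a2, b2) in combinations(lines, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        if x < -1e-9 or y < -1e-9:
            continue
        if all(ai[0] * x + ai[1] * y <= bi + 1e-9 for ai, bi in zip(A, b)):
            z = c[0] * x + c[1] * y
            if best is None or z > best[0]:
                best = (z, x, y)
    return best

# Hypothetical two-product mine: maximize profit 3*x1 + 5*x2.
c = (3.0, 5.0)
A = [(1.0, 0.0), (0.0, 2.0), (3.0, 2.0)]   # resource constraints
b = [4.0, 12.0, 18.0]                      # crisp capacities
p = [2.0, 6.0, 6.0]                        # fuzzy tolerances on capacities

z_crisp = solve_lp(c, A, b)[0]                                   # rigid bounds
z_relax = solve_lp(c, A, [bi + pi for bi, pi in zip(b, p)])[0]   # fully relaxed

# Zimmermann's symmetric approach: find the largest satisfaction level
# lam in [0, 1] such that, with capacities b + (1 - lam)*p, the profit
# reaches the aspiration z_relax - (1 - lam)*(z_relax - z_crisp).
best_lam = 0.0
for i in range(101):
    lam = i / 100.0
    mu = 1.0 - lam
    z = solve_lp(c, A, [bi + mu * pi for bi, pi in zip(b, p)])[0]
    if z >= z_relax - mu * (z_relax - z_crisp) - 1e-9:
        best_lam = lam
```

    For these data the crisp optimum is 36, the fully relaxed optimum is 51, and the fuzzy compromise is reached at satisfaction level 0.5, trading partial constraint violation against a higher objective, which is the essence of the fuzzy model's advantage noted above.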

  19. Adequacy assessment of composite generation and transmission systems incorporating wind energy conversion systems

    NASA Astrophysics Data System (ADS)

    Gao, Yi

    The development and utilization of wind energy for satisfying electrical demand has received considerable attention in recent years due to its tremendous environmental, social and economic benefits, together with public support and government incentives. Electric power generation from wind energy behaves quite differently from that of conventional sources. The fundamentally different operating characteristics of wind energy facilities therefore affect power system reliability in a different manner than those of conventional systems. The reliability impact of such a highly variable energy source is an important aspect that must be assessed when the wind power penetration is significant. The focus of the research described in this thesis is on the utilization of state sampling Monte Carlo simulation in wind integrated bulk electric system reliability analysis and the application of these concepts in system planning and decision making. Load forecast uncertainty is an important factor in long range planning and system development. This thesis describes two approximate approaches developed to reduce the number of steps in a load duration curve which includes load forecast uncertainty, and to provide reasonably accurate generating and bulk system reliability index predictions. The developed approaches are illustrated by application to two composite test systems. A method of generating correlated random numbers with uniform distributions and a specified correlation coefficient in the state sampling method is proposed and used to conduct adequacy assessment in generating systems and in bulk electric systems containing correlated wind farms in this thesis. The studies described show that it is possible to use the state sampling Monte Carlo simulation technique to quantitatively assess the reliability implications associated with adding wind power to a composite generation and transmission system including the effects of multiple correlated wind sites. 
This is an important development as it permits correlated wind farms to be incorporated in large practical system studies without requiring excessive increases in computer solution time. The procedures described in this thesis for creating monthly and seasonal wind farm models should prove useful in situations where time period models are required to incorporate scheduled maintenance of generation and transmission facilities. There is growing interest in combining deterministic considerations with probabilistic assessment in order to evaluate the quantitative system risk and conduct bulk power system planning. A relatively new approach that incorporates deterministic and probabilistic considerations in a single risk assessment framework has been designated as the joint deterministic-probabilistic approach. The research work described in this thesis illustrates that the joint deterministic-probabilistic approach can be effectively used to integrate wind power in bulk electric system planning. The studies described in this thesis show that the application of the joint deterministic-probabilistic method provides more stringent results for a system with wind power than the traditional deterministic N-1 method because the joint deterministic-probabilistic technique is driven by the deterministic N-1 criterion with an added probabilistic perspective which recognizes the power output characteristics of a wind turbine generator.
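
    One standard way to generate correlated uniform random numbers with a specified Pearson correlation, sketched here as a Gaussian-copula (NORTA-style) construction and not necessarily the thesis' exact procedure, exploits the closed-form relation between the correlation of the underlying normals and that of the resulting uniforms:

```python
import math
import random

def correlated_uniforms(n, r, seed=1):
    """Generate n pairs of U(0,1) variates whose Pearson correlation is
    (approximately) r. Correlated standard normals are mapped through the
    normal CDF; the required normal correlation is rho = 2*sin(pi*r/6)."""
    rho = 2.0 * math.sin(math.pi * r / 6.0)
    rng = random.Random(seed)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # normal CDF
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((phi(z1), phi(z2)))
    return pairs

def pearson(pairs):
    """Sample Pearson correlation coefficient of a list of (u, v) pairs."""
    n = len(pairs)
    mx = sum(u for u, _ in pairs) / n
    my = sum(v for _, v in pairs) / n
    sxy = sum((u - mx) * (v - my) for u, v in pairs)
    sxx = sum((u - mx) ** 2 for u, _ in pairs)
    syy = sum((v - my) ** 2 for _, v in pairs)
    return sxy / math.sqrt(sxx * syy)

sample = correlated_uniforms(200_000, 0.8)   # target correlation 0.8
r_hat = pearson(sample)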

  20. Niche partitioning in arbuscular mycorrhizal communities in temperate grasslands: a lesson from adjacent serpentine and nonserpentine habitats.

    PubMed

    Kohout, Petr; Doubková, Pavla; Bahram, Mohammad; Suda, Jan; Tedersoo, Leho; Voříšková, Jana; Sudová, Radka

    2015-04-01

    Arbuscular mycorrhizal fungi (AMF) represent an important soil microbial group playing a fundamental role in many terrestrial ecosystems. We explored the effects of deterministic (soil characteristics, host plant life stage, neighbouring plant communities) and stochastic processes on AMF colonization, richness and community composition in roots of Knautia arvensis (Dipsacaceae) plants from three serpentine grasslands and adjacent nonserpentine sites. Methodologically, the study was based on 454-sequencing of the ITS region of rDNA. In total, we detected 81 molecular operational taxonomic units (MOTUs) belonging to the Glomeromycota. The serpentine character of a site negatively influenced AMF root colonization, as did higher Fe concentrations. AMF MOTU richness increased linearly along a pH gradient from 3.5 to 5.8. In contrast, soil K and Cr concentrations had a negative influence on AMF MOTU richness. We also detected a strong relationship between neighbouring plant community composition and AMF MOTU richness. Although spatial distance between the sampled sites (c. 0.3-3 km) contributed to structuring AMF communities in K. arvensis roots, environmental parameters were the key factors in this respect. In particular, the composition of AMF communities was shaped by the complex of serpentine conditions, pH and available soil Ni concentration. The composition of AMF communities also depended on host plant life stage (vegetative vs. generative). Our study supports the dominance of deterministic factors in structuring AMF communities in a heterogeneous environment composed of an edaphic mosaic of serpentine and nonserpentine soils. © 2015 John Wiley & Sons Ltd.

  1. Neo-Deterministic Seismic Hazard Assessment at Watts Bar Nuclear Power Plant Site, Tennessee, USA

    NASA Astrophysics Data System (ADS)

    Brandmayr, E.; Cameron, C.; Vaccari, F.; Fasan, M.; Romanelli, F.; Magrin, A.; Vlahovic, G.

    2017-12-01

    Watts Bar Nuclear Power Plant (WBNPP) is located within the Eastern Tennessee Seismic Zone (ETSZ), the second most active natural seismic zone in the US east of the Rocky Mountains. The largest instrumental earthquakes in the ETSZ are M 4.6, although paleoseismic evidence supports events of M≥6.5. Events are mainly strike-slip and occur on steeply dipping planes at an average depth of 13 km. In this work, we apply neo-deterministic seismic hazard assessment to estimate the potential seismic input at the plant site, which has recently been targeted by the Nuclear Regulatory Commission for a seismic hazard reevaluation. First, we perform a parametric test on several seismic source characteristics (i.e. distance, depth, strike, dip and rake) using a one-dimensional regional bedrock model to define the most conservative scenario earthquakes. Then, for the selected scenario earthquakes, the estimate of the ground motion input at WBNPP is refined using a two-dimensional local structural model (based on the plant operator's documentation) with topography, thus accounting for site amplification and different possible rupture processes at the source. WBNPP features a safe shutdown earthquake (SSE) design with a PGA of 0.18 g and a maximum spectral acceleration (SA, 5% damped) of 0.46 g (at periods between 0.15 and 0.5 s). Our results suggest that, although the PGA is relatively low for most of the considered scenarios, SSE values can be reached and exceeded in the case of the most conservative scenario earthquakes.

  2. Are Individual Differences in Performance on Perceptual and Cognitive Optimization Problems Determined by General Intelligence?

    ERIC Educational Resources Information Center

    Burns, Nicholas R.; Lee, Michael D.; Vickers, Douglas

    2006-01-01

    Studies of human problem solving have traditionally used deterministic tasks that require the execution of a systematic series of steps to reach a rational and optimal solution. Most real-world problems, however, are characterized by uncertainty, the need to consider an enormous number of variables and possible courses of action at each stage in…

  3. Machine tools error characterization and compensation by on-line measurement of artifact

    NASA Astrophysics Data System (ADS)

    Wahid Khan, Abdul; Chen, Wuyi; Wu, Lili

    2009-11-01

    Most manufacturing machine tools are used for mass or batch production with high accuracy under a deterministic manufacturing principle. The volumetric accuracy of a machine tool depends on the positional accuracy of the cutting tool, probe or end effector relative to the workpiece in the workspace volume. In this research paper, a methodology is presented for volumetric calibration of machine tools by on-line measurement of an artifact or an object of a similar type. The machine tool's geometric errors were characterized using a standard or an artifact with geometry similar to the mass- or batch-production product. The artifact was measured at an arbitrary position in the volumetric workspace with a calibrated Renishaw touch-trigger probe system. Positional errors were stored in a computer for compensation purposes, so that the manufacturing batch could subsequently be run with compensated codes. This methodology proved quite effective for manufacturing high-precision components with improved dimensional accuracy and reliability. Calibration by on-line measurement offers the advantage of improving the manufacturing process through the deterministic manufacturing principle and was found to be efficient and economical, although limited to the workspace or envelope surface of the measured artifact's geometry or profile.
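
    A minimal sketch of the compensation idea, assuming a hypothetical one-axis error map measured at a few calibration points; the paper's actual volumetric, multi-axis probing workflow is considerably richer:

```python
from bisect import bisect_right

def make_compensator(positions, errors):
    """Build a 1-D axis compensator from calibration data: measured
    positional error (mm) at each calibrated position (mm). A commanded
    position is corrected by subtracting the linearly interpolated error."""
    def compensate(x):
        if x <= positions[0]:
            e = errors[0]
        elif x >= positions[-1]:
            e = errors[-1]
        else:
            i = bisect_right(positions, x) - 1
            t = (x - positions[i]) / (positions[i + 1] - positions[i])
            e = errors[i] + t * (errors[i + 1] - errors[i])
        return x - e
    return compensate

# Hypothetical calibration of an X axis against a measured artifact
cal_pos = [0.0, 100.0, 200.0, 300.0]     # commanded positions (mm)
cal_err = [0.000, 0.012, 0.021, 0.018]   # measured positional errors (mm)
compensate_x = make_compensator(cal_pos, cal_err)
```

    Feeding each commanded coordinate of the batch program through such a map is what "running the batch through compensated codes" amounts to in its simplest form.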

  4. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a deterministic computational interpolation scheme that identifies the most significant expansion coefficients adaptively. We demonstrate its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.

  5. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks

    PubMed Central

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-01-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a deterministic computational interpolation scheme that identifies the most significant expansion coefficients adaptively. We demonstrate its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784

  6. Kinetics of Thermal Unimolecular Decomposition of Acetic Anhydride: An Integrated Deterministic and Stochastic Model.

    PubMed

    Mai, Tam V-T; Duong, Minh V; Nguyen, Hieu T; Lin, Kuang C; Huynh, Lam K

    2017-04-27

    An integrated deterministic and stochastic model within the master equation/Rice-Ramsperger-Kassel-Marcus (ME/RRKM) framework was first used to characterize temperature- and pressure-dependent behaviors of thermal decomposition of acetic anhydride in a wide range of conditions (i.e., 300-1500 K and 0.001-100 atm). Particularly, using potential energy surface and molecular properties obtained from high-level electronic structure calculations at CCSD(T)/CBS, macroscopic thermodynamic properties and rate coefficients of the title reaction were derived with corrections for hindered internal rotation and tunneling treatments. Being in excellent agreement with the scattered experimental data, the results from deterministic and stochastic frameworks confirmed and complemented each other to reveal that the main decomposition pathway proceeds via a 6-membered-ring transition state with a 0 K barrier of 35.2 kcal·mol⁻¹. This observation was further understood and confirmed by the sensitivity analysis on the time-resolved species profiles and the derived rate coefficients with respect to the ab initio barriers. Such an agreement suggests the integrated model can be confidently used over a wide range of conditions as a powerful post facto and predictive tool in detailed chemical kinetic modeling and simulation for the title reaction, and can thus be extended to complex chemical reactions.

  7. Deterministic approach for multiple-source tsunami hazard assessment for Sines, Portugal

    NASA Astrophysics Data System (ADS)

    Wronna, M.; Omira, R.; Baptista, M. A.

    2015-11-01

    In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines has one of the most important deep-water ports in Portugal, with oil-bearing, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructures face the ocean southwest towards the main seismogenic sources. This work considers two different seismic zones: the Southwest Iberian Margin and the Gloria Fault. Within these two regions, we selected a total of six scenarios to assess the tsunami impact at the test site. The tsunami simulations are computed using NSWING, a Non-linear Shallow Water model wIth Nested Grids. In this study, the static effect of tides is analysed for three different tidal stages: MLLW (mean lower low water), MSL (mean sea level), and MHHW (mean higher high water). For each scenario, the tsunami hazard is described by maximum values of wave height, flow depth, drawback, maximum inundation area and run-up. Synthetic waveforms are computed at virtual tide gauges at specific locations outside and inside the harbour. The final results describe the impact at the Sines test site considering the single scenarios at mean sea level, the aggregate scenario, and the influence of the tide on the aggregate scenario. The results confirm the composite source of the Horseshoe and Marques de Pombal faults as the worst-case scenario, with wave heights of over 10 m, which reach the coast approximately 22 min after the rupture. It dominates the aggregate scenario, accounting for about 60 % of the impact area at the test site in terms of maximum wave height and maximum flow depth. This composite Horseshoe-Marques de Pombal fault (HSMPF) scenario inundates a total area of 3.5 km2.

  8. Assimilating MODIS-based albedo and snow cover fraction into the Common Land Model to improve snow depth simulation with direct insertion and deterministic ensemble Kalman filter methods

    NASA Astrophysics Data System (ADS)

    Xu, Jianhui; Shu, Hong

    2014-09-01

    This study assesses the analysis performance of assimilating the Moderate Resolution Imaging Spectroradiometer (MODIS)-based albedo and snow cover fraction (SCF), separately or jointly, into the physically based Common Land Model (CoLM). A direct insertion (DI) method is proposed to assimilate the black-sky and white-sky albedos into the CoLM. The MODIS-based albedo is calculated from the MODIS bidirectional reflectance distribution function (BRDF) model parameters product (MCD43B1) and the solar zenith angle as estimated in the CoLM at each time step. Meanwhile, the MODIS SCF (MOD10A1) is assimilated into the CoLM using the deterministic ensemble Kalman filter (DEnKF) method. A new DEnKF-albedo scheme integrating the DI and DEnKF assimilation schemes is also proposed. Our assimilation results are validated against in situ snow depth observations from November 2008 to March 2009 at five sites in the Altay region of China. The experimental results show that all three data assimilation schemes can improve snow depth simulations. Overall, however, the DEnKF-albedo assimilation shows the best analysis performance, as it significantly reduces the bias and root-mean-square error (RMSE) during the snow accumulation and ablation periods at all sites except the Fuyun site. The SCF assimilation via DEnKF produces better results than the albedo assimilation via DI, implying that the albedo assimilation, which updates the snow depth state variable only indirectly, is less efficient than the direct SCF assimilation. For the Fuyun site, the DEnKF-albedo scheme tends to overestimate snow depth accumulation, with the largest bias and RMSE values, because of the large positive innovation (observation minus forecast).
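
    The deterministic EnKF analysis step can be sketched for a scalar state observed directly. This toy follows the Sakov-Oke DEnKF form, in which the ensemble mean is updated with the full Kalman gain while the anomalies are updated with half the gain; it is not the CoLM configuration, whose state and observation operators are far larger:

```python
def denkf_update(ensemble, obs, obs_var):
    """One DEnKF analysis step for a scalar state observed directly (H = 1).
    The ensemble mean is updated with the full Kalman gain K, while the
    anomalies are shrunk with K/2, avoiding perturbed observations."""
    n = len(ensemble)
    mean_f = sum(ensemble) / n
    anomalies = [x - mean_f for x in ensemble]
    p_f = sum(a * a for a in anomalies) / (n - 1)   # forecast error variance
    k = p_f / (p_f + obs_var)                       # Kalman gain
    mean_a = mean_f + k * (obs - mean_f)            # mean: full gain
    return [mean_a + (1.0 - 0.5 * k) * a for a in anomalies]

# Hypothetical forecast ensemble of snow depths (m) and one observation
forecast = [0.8, 1.0, 1.2, 1.4, 0.6]
analysis = denkf_update(forecast, obs=1.4, obs_var=0.1)
mean_a = sum(analysis) / len(analysis)
p_a = sum((x - mean_a) ** 2 for x in analysis) / (len(analysis) - 1)
```

    With forecast variance 0.1 and observation variance 0.1 the gain is 0.5, so the analysis mean moves halfway toward the observation and the ensemble spread is reduced deterministically.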

  9. Stochastic Seismic Inversion and Migration for Offshore Site Investigation in the Northern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Son, J.; Medina-Cetina, Z.

    2017-12-01

    We discuss the comparison between deterministic and stochastic optimization approaches to the nonlinear geophysical full-waveform inverse problem, based on seismic survey data from Mississippi Canyon in the Northern Gulf of Mexico. Since subsea engineering and offshore construction projects require reliable ground models from various site investigations, the primary goal of this study is to reconstruct accurate subsurface information on the soil and rock material profiles beneath the seafloor. The shallow sediment layers are naturally heterogeneous formations that may cause unwanted marine landslides or foundation failures of underwater infrastructure. We chose quasi-Newton and simulated annealing as the deterministic and stochastic optimization algorithms, respectively. Seismic forward modeling, based on a finite-difference method with absorbing boundary conditions, implements the iterative simulations in the inverse modeling. We briefly report on numerical experiments using synthetic data for an offshore ground model containing shallow artificial target profiles of geomaterials beneath the seafloor. We apply seismic migration processing and generate a Voronoi tessellation in the two-dimensional space domain to improve the computational efficiency of reconstructing the stratigraphic velocity model. We then report in detail on a field data implementation, which reveals the complex geologic structures of the Northern Gulf of Mexico. Lastly, we compare the new inverted image of subsurface site profiles in the space domain with the previously processed seismic image in the time domain at the same location. Overall, stochastic optimization for seismic inversion with migration and Voronoi tessellation shows significant promise for improving the subsurface imaging of ground models and the computational efficiency required for full waveform inversion. We anticipate that improving the inversion of shallow layers from geophysical data will better support offshore site investigation.
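
    The two optimizer families can be contrasted on a toy multimodal objective. Plain gradient descent stands in here for the quasi-Newton local search, and the 1-D Rastrigin function is an illustrative assumption, not the full-waveform misfit:

```python
import math
import random

def f(x):  # 1-D Rastrigin: many local minima, global minimum at x = 0
    return 10.0 + x * x - 10.0 * math.cos(2.0 * math.pi * x)

def grad_descent(x, lr=1e-3, steps=5000):
    """Deterministic local search (a stand-in for quasi-Newton):
    follows the gradient into the nearest basin and stays there."""
    for _ in range(steps):
        g = 2.0 * x + 20.0 * math.pi * math.sin(2.0 * math.pi * x)
        x -= lr * g
    return x

def simulated_annealing(x, seed=7, steps=20000, t0=10.0, t1=1e-3):
    """Stochastic global search: accepts uphill moves with probability
    exp(-delta/T), so it can escape local minima while T is high."""
    rng = random.Random(seed)
    best = x
    for i in range(steps):
        t = t0 * (t1 / t0) ** (i / (steps - 1))   # geometric cooling
        cand = x + rng.gauss(0.0, 1.0)
        delta = f(cand) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < f(best):
            best = x
    return best

x_local = grad_descent(3.2)          # trapped in a local minimum near x = 3
x_global = simulated_annealing(3.2)  # typically reaches a much deeper basin
```

    The same qualitative behavior motivates pairing the two in waveform inversion: the stochastic search explores, while the deterministic one refines.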

  10. Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)

    NASA Astrophysics Data System (ADS)

    Kędra, Mariola

    2014-02-01

    Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools to daily river flow data gives a consistent, reliable and clear-cut answer to the question. The outcomes indicate that the investigated discharge dynamics is not random but deterministic. Moreover, the results fully confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge records from two selected gauging stations on a mountain river in southern Poland, the Raba River.
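
    A classic minimal diagnostic of deterministic chaos, far simpler than the battery of methods applied to the Raba discharge data but in the same spirit, is a positive largest Lyapunov exponent, illustrated here on the logistic map rather than river flow:

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, n=100_000, transient=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of log|f'(x)| = log|r*(1 - 2x)|.
    A positive value is the signature of deterministic chaos."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        d = abs(r * (1.0 - 2.0 * x))
        total += math.log(max(d, 1e-300))  # guard the measure-zero x == 0.5
        x = r * x * (1.0 - x)
    return total / n

lam = lyapunov_logistic()   # analytic value at r = 4 is ln 2, about 0.693
```

    For measured series such as daily discharge, the exponent must instead be estimated from a delay-embedded reconstruction of the attractor, which is what the cited methods do.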

  11. Comparison of three controllers applied to helicopter vibration

    NASA Technical Reports Server (NTRS)

    Leyland, Jane A.

    1992-01-01

    A comparison was made of the applicability and suitability of the deterministic controller, the cautious controller, and the dual controller for the reduction of helicopter vibration by using higher harmonic blade pitch control. A randomly generated linear plant model was assumed and the performance index was defined to be a quadratic output metric of this linear plant. A computer code, designed to check out and evaluate these controllers, was implemented and used to accomplish this comparison. The effects of random measurement noise, the initial estimate of the plant matrix, and the plant matrix propagation rate were determined for each of the controllers. With few exceptions, the deterministic controller yielded the greatest vibration reduction (as characterized by the quadratic output metric) and operated with the greatest reliability. Theoretical limitations of these controllers were defined and appropriate candidate alternative methods, including one method particularly suitable to the cockpit, were identified.

  12. Deterministic phase slips in mesoscopic superconducting rings

    PubMed Central

    Petković, I.; Lollo, A.; Glazman, L. I.; Harris, J. G. E.

    2016-01-01

    The properties of one-dimensional superconductors are strongly influenced by topological fluctuations of the order parameter, known as phase slips, which cause the decay of persistent current in superconducting rings and the appearance of resistance in superconducting wires. Despite extensive work, quantitative studies of phase slips have been limited by uncertainty regarding the order parameter's free-energy landscape. Here we show detailed agreement between measurements of the persistent current in isolated flux-biased rings and Ginzburg–Landau theory over a wide range of temperature, magnetic field and ring size; this agreement provides a quantitative picture of the free-energy landscape. We also demonstrate that phase slips occur deterministically as the barrier separating two competing order parameter configurations vanishes. These results will enable studies of quantum and thermal phase slips in a well-characterized system and will provide access to outstanding questions regarding the nature of one-dimensional superconductivity. PMID:27882924

  13. Deterministic phase slips in mesoscopic superconducting rings.

    PubMed

    Petković, I; Lollo, A; Glazman, L I; Harris, J G E

    2016-11-24

    The properties of one-dimensional superconductors are strongly influenced by topological fluctuations of the order parameter, known as phase slips, which cause the decay of persistent current in superconducting rings and the appearance of resistance in superconducting wires. Despite extensive work, quantitative studies of phase slips have been limited by uncertainty regarding the order parameter's free-energy landscape. Here we show detailed agreement between measurements of the persistent current in isolated flux-biased rings and Ginzburg-Landau theory over a wide range of temperature, magnetic field and ring size; this agreement provides a quantitative picture of the free-energy landscape. We also demonstrate that phase slips occur deterministically as the barrier separating two competing order parameter configurations vanishes. These results will enable studies of quantum and thermal phase slips in a well-characterized system and will provide access to outstanding questions regarding the nature of one-dimensional superconductivity.

  14. Hyperchaotic Dynamics for Light Polarization in a Laser Diode

    NASA Astrophysics Data System (ADS)

    Bonatto, Cristian

    2018-04-01

    It is shown that a highly random-like behavior of light polarization states in the output of a free-running laser diode, covering the whole Poincaré sphere, arises from a fully deterministic nonlinear process characterized by hyperchaotic dynamics of two polarization modes nonlinearly coupled with a semiconductor medium inside the optical cavity. A number of statistical distributions were found to describe the deterministic data of the low-dimensional nonlinear flow: a lognormal distribution for the light intensity; Gaussian distributions for the electric field components and electron densities; and Rice and Rayleigh distributions, and Weibull and negative exponential distributions, for the modulus and intensity of the orthogonal linear components of the electric field, respectively. The presented results could be relevant for the design of compact light-source devices for low-dimensional optical hyperchaos-based applications.

  15. Amygdala and Ventral Striatum Make Distinct Contributions to Reinforcement Learning.

    PubMed

    Costa, Vincent D; Dal Monte, Olga; Lucas, Daniel R; Murray, Elisabeth A; Averbeck, Bruno B

    2016-10-19

    Reinforcement learning (RL) theories posit that dopaminergic signals are integrated within the striatum to associate choices with outcomes. Often overlooked is that the amygdala also receives dopaminergic input and is involved in Pavlovian processes that influence choice behavior. To determine the relative contributions of the ventral striatum (VS) and amygdala to appetitive RL, we tested rhesus macaques with VS or amygdala lesions on deterministic and stochastic versions of a two-arm bandit reversal learning task. When learning was characterized with an RL model relative to controls, amygdala lesions caused general decreases in learning from positive feedback and choice consistency. By comparison, VS lesions only affected learning in the stochastic task. Moreover, the VS lesions hastened the monkeys' choice reaction times, which emphasized a speed-accuracy trade-off that accounted for errors in deterministic learning. These results update standard accounts of RL by emphasizing distinct contributions of the amygdala and VS to RL. Published by Elsevier Inc.
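
    A minimal sketch of the kind of RL model used to characterize learning: a Rescorla-Wagner value update with softmax choice on a two-arm reversal bandit. The learning rate, inverse temperature and trial counts below are illustrative assumptions, not parameters fitted to the monkey data:

```python
import math
import random

def run_bandit(p_reward, trials=200, alpha=0.3, beta=5.0, seed=3):
    """Rescorla-Wagner learner with softmax choice on a two-arm bandit
    whose reward probabilities reverse halfway through the session.
    Returns a per-trial record of whether the better arm was chosen."""
    rng = random.Random(seed)
    q = [0.0, 0.0]                     # action values
    probs = list(p_reward)
    correct = []
    for t in range(trials):
        if t == trials // 2:
            probs.reverse()            # contingency reversal
        w0 = math.exp(beta * q[0])
        w1 = math.exp(beta * q[1])
        choice = 0 if rng.random() < w0 / (w0 + w1) else 1
        reward = 1.0 if rng.random() < probs[choice] else 0.0
        q[choice] += alpha * (reward - q[choice])   # prediction-error update
        correct.append(choice == probs.index(max(probs)))
    return correct

det = run_bandit((1.0, 0.0))   # deterministic version of the task
sto = run_bandit((0.8, 0.2))   # stochastic version of the task
late_det = sum(det[-50:]) / 50.0   # post-reversal accuracy, last 50 trials
late_sto = sum(sto[-50:]) / 50.0
```

    Fitting alpha and beta per subject to observed choices is what lets lesion effects be expressed as changes in feedback learning and choice consistency, as in the study above.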

  16. Amygdala and ventral striatum make distinct contributions to reinforcement learning

    PubMed Central

    Costa, Vincent D.; Monte, Olga Dal; Lucas, Daniel R.; Murray, Elisabeth A.; Averbeck, Bruno B.

    2016-01-01

    Reinforcement learning (RL) theories posit that dopaminergic signals are integrated within the striatum to associate choices with outcomes. Often overlooked is that the amygdala also receives dopaminergic input and is involved in Pavlovian processes that influence choice behavior. To determine the relative contributions of the ventral striatum (VS) and amygdala to appetitive RL, we tested rhesus macaques with VS or amygdala lesions on deterministic and stochastic versions of a two-arm bandit reversal learning task. When learning was characterized with an RL model relative to controls, amygdala lesions caused general decreases in learning from positive feedback and choice consistency. By comparison, VS lesions only affected learning in the stochastic task. Moreover, the VS lesions hastened the monkeys’ choice reaction times, which emphasized a speed-accuracy trade-off that accounted for errors in deterministic learning. These results update standard accounts of RL by emphasizing distinct contributions of the amygdala and VS to RL. PMID:27720488

  17. Deterministic implementation of a bright, on-demand single photon source with near-unity indistinguishability via quantum dot imaging.

    PubMed

    He, Yu-Ming; Liu, Jin; Maier, Sebastian; Emmerling, Monika; Gerhardt, Stefan; Davanço, Marcelo; Srinivasan, Kartik; Schneider, Christian; Höfling, Sven

    2017-07-20

    Deterministic techniques enabling the implementation and engineering of bright and coherent solid-state quantum light sources are key for the reliable realization of a next generation of quantum devices. Such a technology, at best, should allow one to significantly scale up the number of implemented devices within a given processing time. In this work, we discuss a possible technology platform for such a scaling procedure, relying on the application of nanoscale quantum dot imaging to the pillar microcavity architecture, which promises to combine very high photon extraction efficiency and indistinguishability. We discuss the alignment technology in detail, and present the optical characterization of a selected device which features a strongly Purcell-enhanced emission output. This device, which yields an extraction efficiency of η = (49 ± 4) %, facilitates the emission of photons with (94 ± 2.7) % indistinguishability.

  18. Health risk assessment of inorganic arsenic intake of Ronphibun residents via duplicate diet study.

    PubMed

    Saipan, Piyawat; Ruangwises, Suthep

    2009-06-01

    To assess the health risk from exposure to inorganic arsenic in Ronphibun residents via a duplicate portion sampling method, one hundred and forty samples (140 subject-days) were collected from participants in Ronphibun sub-district. Inorganic arsenic in the duplicate diet samples was determined by acid digestion and hydride generation-atomic absorption spectrometry. Deterministic risk assessment following United States Environmental Protection Agency (U.S. EPA) guidelines is used throughout the present paper. The average daily dose and lifetime average daily dose of inorganic arsenic via the duplicate diets were 0.0021 mg/kg/d and 0.00084 mg/kg/d, respectively. The risk estimates were a hazard quotient of 6.98 and a cancer risk of 1.26 x 10(-3). Both the hazard quotient and the cancer risk from exposure to inorganic arsenic in duplicate diets were thus greater than the respective safety levels of 1 (hazard quotient) and 1 x 10(-4) (cancer risk).
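
    The reported numbers follow from the standard U.S. EPA deterministic equations, using the EPA IRIS oral toxicity values for inorganic arsenic (reference dose 3 x 10(-4) mg/kg-day, cancer slope factor 1.5 per mg/kg-day). With the rounded doses quoted above, the hazard quotient computes to 7.0; the paper's 6.98 reflects the unrounded intake:

```python
# EPA IRIS oral toxicity values for inorganic arsenic
RFD = 3.0e-4   # reference dose, mg/kg-day
CSF = 1.5      # cancer slope factor, (mg/kg-day)^-1

add = 0.0021    # average daily dose from the duplicate-diet study, mg/kg-day
ladd = 0.00084  # lifetime average daily dose, mg/kg-day

hazard_quotient = add / RFD    # noncancer: HQ = ADD / RfD
cancer_risk = ladd * CSF       # cancer:    risk = LADD * CSF
```

    Both values exceed their acceptability thresholds (HQ of 1 and cancer risk of 1 x 10(-4)), which is the basis of the paper's conclusion.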

  19. Deterministic phase slips in mesoscopic superconducting rings

    DOE PAGES

    Petković, Ivana; Lollo, A.; Glazman, L. I.; ...

    2016-11-24

    The properties of one-dimensional superconductors are strongly influenced by topological fluctuations of the order parameter, known as phase slips, which cause the decay of persistent current in superconducting rings and the appearance of resistance in superconducting wires. Despite extensive work, quantitative studies of phase slips have been limited by uncertainty regarding the order parameter’s free-energy landscape. Here we show detailed agreement between measurements of the persistent current in isolated flux-biased rings and Ginzburg–Landau theory over a wide range of temperature, magnetic field and ring size; this agreement provides a quantitative picture of the free-energy landscape. Furthermore, we also demonstrate that phase slips occur deterministically as the barrier separating two competing order parameter configurations vanishes. These results will enable studies of quantum and thermal phase slips in a well-characterized system and will provide access to outstanding questions regarding the nature of one-dimensional superconductivity.

  20. Earthquake Loading Assessment to Evaluate Liquefaction Potential in Emilia-Romagna Region

    NASA Astrophysics Data System (ADS)

    Daminelli, R.; Marcellini, A.; Tento, A.

    2016-12-01

    The May-June 2012 seismic sequence that struck Lombardia and Emilia-Romagna consisted of seven main events of magnitude greater than 5 followed by numerous aftershocks. The strongest earthquakes occurred on May 20 (M=5.9) and May 29 (M=5.8). The widespread soil liquefaction, unexpected given the moderate magnitude of the events, pushed the local authorities to commission research projects aimed at defining the earthquake loading needed to evaluate the liquefaction safety factor. The reasons explained below led us to adopt a deterministic hazard approach to evaluate the seismic parameters relevant to liquefaction assessment, despite the fact that the Italian Seismic Building Code (NTC08) is based on probabilistic hazard analysis. For urban planning and building design, geologists generally adopt the CRR/CSR technique to assess liquefaction potential; therefore we considered PGA and a design magnitude to be representative of the seismic loading. The procedure adopted consists of: a) identification of seismic source zones and characterization of each zone by its maximum magnitude; b) evaluation of the source-to-site distance; and c) adoption of a suitable attenuation law to compute the expected PGA at the site, given the site conditions and the design magnitude. The design magnitude can be the maximum magnitude, the magnitude that causes the largest PGA, or both. The PGA values obtained are larger than the 474-year return period PGA prescribed by NTC08 for the seismic design of ordinary buildings. We conducted a CPTU resistance test intended to define the CRR at the village of Cavezzo, situated in the epicentral area of the 2012 earthquake. The CRR/CSR ratio indicated an elevated liquefaction risk at the analysed site. In contrast, adopting the 474-year return period PGA prescribed by NTC08 for the Cavezzo site led to a negligible liquefaction risk. Note that several liquefaction phenomena were observed very close to the investigated site.

  1. Earthquake Loading Assessment to Evaluate Liquefaction Potential in Emilia-Romagna Region

    NASA Astrophysics Data System (ADS)

    Daminelli, Rosastella; Marcellini, Alberto; Tento, Alberto

    2017-04-01

    The May-June 2012 seismic sequence that struck Lombardia and Emilia-Romagna consisted of seven main events of magnitude greater than 5 followed by numerous aftershocks. The strongest earthquakes occurred on May 20 (M=5.9) and May 29 (M=5.8). The widespread soil liquefaction, unexpected given the moderate magnitude of the events, pushed the local authorities to commission research projects aimed at defining the earthquake loading needed to evaluate the liquefaction safety factor. The reasons explained below led us to adopt a deterministic hazard approach to evaluate the seismic parameters relevant to liquefaction assessment, despite the fact that the Italian Seismic Building Code (NTC08) is based on probabilistic hazard analysis. For urban planning and building design, geologists generally adopt the CRR/CSR technique to assess liquefaction potential; therefore we considered PGA and a design magnitude to be representative of the seismic loading. The procedure adopted consists of: a) identification of seismic source zones and characterization of each zone by its maximum magnitude; b) evaluation of the source-to-site distance; and c) adoption of a suitable attenuation law to compute the expected PGA at the site, given the site conditions and the design magnitude. The design magnitude can be the maximum magnitude, the magnitude that causes the largest PGA, or both. The PGA values obtained are larger than the 474-year return period PGA prescribed by NTC08 for the seismic design of ordinary buildings. We conducted a CPTU resistance test intended to define the CRR at the village of Cavezzo, situated in the epicentral area of the 2012 earthquake. The CRR/CSR ratio indicated an elevated liquefaction risk at the analysed site. In contrast, adopting the 474-year return period PGA prescribed by NTC08 for the Cavezzo site led to a negligible liquefaction risk. Note that several liquefaction phenomena were observed very close to the investigated site.
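The CRR/CSR screening step mentioned above can be sketched with the standard Seed-Idriss simplified expression for the cyclic stress ratio. A minimal sketch, assuming an illustrative soil column, PGA, and CRR value; these numbers are invented for the example and are not the Cavezzo data:

```python
import math

def cyclic_stress_ratio(pga_g, sigma_v, sigma_v_eff, depth_m):
    """Seed-Idriss simplified CSR: 0.65 * (a_max/g) * (sigma_v/sigma'_v) * r_d."""
    rd = 1.0 - 0.00765 * depth_m          # stress reduction factor, valid for z < 9.15 m
    return 0.65 * pga_g * (sigma_v / sigma_v_eff) * rd

# Illustrative soil column (assumed values): 5 m depth, water table at 1 m,
# unit weight 18 kN/m3, deterministic PGA of 0.25 g.
depth = 5.0
sigma_v = 18.0 * depth                         # total vertical stress, kPa
sigma_v_eff = sigma_v - 9.81 * (depth - 1.0)   # effective vertical stress, kPa

csr = cyclic_stress_ratio(0.25, sigma_v, sigma_v_eff, depth)
crr = 0.20                                     # cyclic resistance ratio from CPTU (assumed)
fs = crr / csr                                 # factor of safety; FS < 1 -> liquefiable
print(f"CSR = {csr:.3f}, FS = {fs:.2f}")
```

With these assumed numbers the factor of safety falls below 1, the same qualitative outcome the abstract reports when the deterministic PGA is used instead of the NTC08 probabilistic value.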

  2. The impact of wildland fires on calcareous Mediterranean pedosystems (Sardinia, Italy) - An integrated multiple approach.

    PubMed

    Capra, Gian Franco; Tidu, Simona; Lovreglio, Raffaella; Certini, Giacomo; Salis, Michele; Bacciu, Valentina; Ganga, Antonio; Filzmoser, Peter

    2018-05-15

    Sardinia (Italy), the second largest island of the Mediterranean Sea, is a fire-prone land. Most Sardinian environments were shaped by fire over time, but some of them are too intrinsically fragile to withstand the currently increasing fire frequency. Calcareous pedoenvironments represent a significant part of Mediterranean areas and require important efforts to prevent long-lasting degradation from fire. The aim of this study was to assess, through an integrated multiple approach, the impact of a single, highly severe wildland fire on limestone-derived soils. For this purpose, we selected two recently burned sites, Sant'Antioco and Laconi. Soil was sampled from 80 points on a 100×100 m grid - 40 in the burned area and 40 in the unburned one - and analyzed for particle size fractions, pH, electrical conductivity, organic carbon, total N, total P, and water repellency (WR). Fire behavior (surface rate of spread (ROS), fireline intensity (FLI), and flame length (FL)) was simulated with the BehavePlus 5.0.5 software. Comparisons between burned and unburned areas were made through ANOVA as well as deterministic and stochastic interpolation techniques; multiple correlations among parameters were evaluated by principal factor analysis (PFA), and differences/similarities between areas by principal component analysis (PCA). In both sites, fires were characterized by high severity and caused significant changes to some soil properties. The PFA confirmed the key ecological role played by fire in both sites, with the variability of the four modeled components mainly explained by fire parameters, although the induced changes on soils were mainly site-specific. The PCA revealed the presence of two main "driving factors": slope (in Sant'Antioco), which increased the magnitude of ROS and FLI; and soil properties (in Laconi), which mostly affected FL. 
In both sites, these factors played a direct role in differentiating fire behavior between sites, and an indirect role in determining some of the effects on soil. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Analytical results for the statistical distribution related to a memoryless deterministic walk: dimensionality effect and mean-field models.

    PubMed

    Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto

    2005-08-01

    Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule of going to the nearest point that has not been visited in the preceding mu steps (deterministic tourist walk). Each trajectory generated by this dynamics has an initial non-periodic part of t steps (transient) and a final periodic part of p steps (attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function I_d = I_{1/4}(1/2, (d+1)/2). The joint distribution S_N^{(mu,d)}(t,p) is the relevant quantity, and the marginal distributions previously studied are particular cases. We show that, for the memoryless deterministic tourist walk in Euclidean space, this distribution is S_infinity^{(1,d)}(t,p) = [Gamma(1 + I_d^{-1}) (t + I_d^{-1}) / Gamma(t + p + I_d^{-1})] delta_{p,2}, where t = 0, 1, 2, ..., Gamma(z) is the gamma function and delta_{i,j} is the Kronecker delta. The mean-field models are the random link models, which correspond to d --> infinity, and the random map model which, even for mu = 0, presents a nontrivial cycle distribution [S_N^{(0,rm)}(p) proportional to p^{-1}]: S_N^{(0,rm)}(t,p) = Gamma(N) / {Gamma[N + 1 - (t + p)] N^{t+p}}. The fundamental quantities are the number of explored points n_e = t + p and I_d. Although the obtained distributions are simple, they do not follow straightforwardly and they have been validated by numerical experiments.
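A quick numerical check of the joint distribution above: since the attractor length is always p = 2 for mu = 1, summing S_infinity^{(1,d)}(t, 2) over the transient time t should give total probability 1 for any a = I_d^{-1}. A sketch, with the values of a chosen arbitrarily rather than derived from a particular dimension d:

```python
import math

def s_inf(t, a):
    """S_infinity^{(1,d)}(t, p=2) with a = 1/I_d, via log-gamma for numerical stability."""
    return math.exp(math.lgamma(1.0 + a) + math.log(t + a) - math.lgamma(t + 2.0 + a))

# Terms decay factorially, so a few hundred terms are ample.
totals = {a: sum(s_inf(t, a) for t in range(200)) for a in (0.5, 1.0, 2.5)}
for a, total in totals.items():
    print(f"a = 1/I_d = {a}: sum over t = {total:.6f}")
```

For a = 1 the sum telescopes to exactly 1 (each term reduces to 1/((t+2) t!)), which makes the normalization easy to verify by hand as well.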

  4. CyberShake: A Physics-Based Seismic Hazard Model for Southern California

    NASA Astrophysics Data System (ADS)

    Graves, Robert; Jordan, Thomas H.; Callaghan, Scott; Deelman, Ewa; Field, Edward; Juve, Gideon; Kesselman, Carl; Maechling, Philip; Mehta, Gaurang; Milner, Kevin; Okaya, David; Small, Patrick; Vahi, Karan

    2011-03-01

    CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i.e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). 
Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and magnitude uncertainty estimates used in the definition of the ruptures than is found in the traditional GMPE approach. This reinforces the need for continued development of a better understanding of earthquake source characterization and the constitutive relations that govern the earthquake rupture process.
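The final step described above, combining per-variation intensity measures with rupture probabilities into a site-specific hazard curve, can be sketched as follows. The rupture rates and intensity values are synthetic placeholders invented for the example, not CyberShake outputs:

```python
import math

# Synthetic catalog (assumed numbers): each rupture has an annual rate and a
# list of simulated peak intensity measures, one per rupture variation,
# with variations weighted equally.
ruptures = [
    {"annual_rate": 0.010, "im": [0.05, 0.08, 0.12, 0.20]},
    {"annual_rate": 0.002, "im": [0.15, 0.25, 0.40, 0.60]},
    {"annual_rate": 0.0005, "im": [0.30, 0.55, 0.80, 1.10]},
]

def exceedance_rate(x):
    """Annual rate of IM > x: sum over ruptures of rate * fraction of variations exceeding x."""
    return sum(r["annual_rate"] * sum(im > x for im in r["im"]) / len(r["im"])
               for r in ruptures)

# Poissonian probability of exceedance in a 50-year exposure window.
levels = [0.05, 0.1, 0.2, 0.4, 0.8]
curve = [(x, 1.0 - math.exp(-exceedance_rate(x) * 50.0)) for x in levels]
for x, p in curve:
    print(f"IM > {x:.2f}: P(50 yr) = {p:.4f}")
```

The resulting curve is monotonically non-increasing in the intensity level, which is the defining property of a hazard curve regardless of whether the intensity measures come from GMPEs or, as in CyberShake, from physics-based simulations.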

  5. CyberShake: A Physics-Based Seismic Hazard Model for Southern California

    USGS Publications Warehouse

    Graves, R.; Jordan, T.H.; Callaghan, S.; Deelman, E.; Field, E.; Juve, G.; Kesselman, C.; Maechling, P.; Mehta, G.; Milner, K.; Okaya, D.; Small, P.; Vahi, K.

    2011-01-01

    CyberShake, as part of the Southern California Earthquake Center's (SCEC) Community Modeling Environment, is developing a methodology that explicitly incorporates deterministic source and wave propagation effects within seismic hazard calculations through the use of physics-based 3D ground motion simulations. To calculate a waveform-based seismic hazard estimate for a site of interest, we begin with Uniform California Earthquake Rupture Forecast, Version 2.0 (UCERF2.0) and identify all ruptures within 200 km of the site of interest. We convert the UCERF2.0 rupture definition into multiple rupture variations with differing hypocenter locations and slip distributions, resulting in about 415,000 rupture variations per site. Strain Green Tensors are calculated for the site of interest using the SCEC Community Velocity Model, Version 4 (CVM4), and then, using reciprocity, we calculate synthetic seismograms for each rupture variation. Peak intensity measures are then extracted from these synthetics and combined with the original rupture probabilities to produce probabilistic seismic hazard curves for the site. Being explicitly site-based, CyberShake directly samples the ground motion variability at that site over many earthquake cycles (i.e., rupture scenarios) and alleviates the need for the ergodic assumption that is implicitly included in traditional empirically based calculations. Thus far, we have simulated ruptures at over 200 sites in the Los Angeles region for ground shaking periods of 2 s and longer, providing the basis for the first generation CyberShake hazard maps. Our results indicate that the combination of rupture directivity and basin response effects can lead to an increase in the hazard level for some sites, relative to that given by a conventional Ground Motion Prediction Equation (GMPE). 
Additionally, and perhaps more importantly, we find that the physics-based hazard results are much more sensitive to the assumed magnitude-area relations and magnitude uncertainty estimates used in the definition of the ruptures than is found in the traditional GMPE approach. This reinforces the need for continued development of a better understanding of earthquake source characterization and the constitutive relations that govern the earthquake rupture process. © 2010 Springer Basel AG.

  6. Immersion freezing of internally and externally mixed mineral dust species analyzed by stochastic and deterministic models

    NASA Astrophysics Data System (ADS)

    Wong, B.; Kilthau, W.; Knopf, D. A.

    2017-12-01

    Immersion freezing is recognized as the most important ice crystal formation process in mixed-phase cloud environments. It is well established that mineral dust species can act as efficient ice nucleating particles. Previous research has focused on determining the ice nucleation propensity of individual mineral dust species. In this study, the focus is placed on how different mineral dust species, such as illite, kaolinite and feldspar, initiate freezing of water droplets when present in internal and external mixtures. The frozen fraction data for single and multicomponent mineral dust droplet mixtures are recorded under identical cooling rates. Additionally, the time dependence of freezing is explored. Externally and internally mixed mineral dust droplet samples are exposed to constant temperatures (isothermal freezing experiments) and frozen fraction data are recorded at set time intervals. Analyses of single and multicomponent mineral dust droplet samples employ different stochastic and deterministic models, including derivation of the heterogeneous ice nucleation rate coefficient (J_het), the single contact angle (α) description, the α-PDF model, the active sites representation, and the deterministic model. Parameter sets derived from the freezing data of single-component mineral dust samples are evaluated for predicting cooling rate dependent and isothermal freezing of multicomponent, externally or internally mixed mineral dust samples. The atmospheric implications of our findings are discussed.
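The contrast between the stochastic (J_het-based) and deterministic (singular) descriptions of isothermal freezing can be illustrated with a minimal sketch. J_het, the per-droplet surface area, and the active-site fraction below are assumed values chosen for illustration, not data from this study:

```python
import math

def frozen_fraction_stochastic(j_het, area_cm2, t_s):
    """Stochastic view: freezing is a Poisson process with rate J_het * A, so
    the isothermal frozen fraction keeps rising: f(t) = 1 - exp(-J_het * A * t)."""
    return 1.0 - math.exp(-j_het * area_cm2 * t_s)

def frozen_fraction_deterministic(active_site_fraction):
    """Deterministic (singular) view: each droplet freezes at a characteristic
    temperature set by its best active site, so at fixed temperature the
    frozen fraction is constant in time."""
    return active_site_fraction

J_HET = 1.0e3   # heterogeneous ice nucleation rate coefficient, cm^-2 s^-1 (assumed)
AREA = 1.0e-5   # immersed mineral dust surface area per droplet, cm^2 (assumed)

f_60 = frozen_fraction_stochastic(J_HET, AREA, 60.0)
f_600 = frozen_fraction_stochastic(J_HET, AREA, 600.0)
f_det = frozen_fraction_deterministic(0.45)
print(f"stochastic: f(60 s) = {f_60:.3f}, f(600 s) = {f_600:.3f}; deterministic: {f_det:.3f}")
```

For mixtures, the two pictures also extend differently: in an internally mixed droplet the stochastic rates add (J_1*A_1 + J_2*A_2), whereas for an external mixture the population frozen fraction is the number-weighted average of the single-component frozen fractions.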

  7. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models that characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation does. The next part of the research implements the probabilistic material properties in engineering design. The main aim of structural design is to obtain optimal solutions. However, even though the structures produced by deterministic optimization are cost-effective, they become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints. 
This part of the research starts with an introduction to reliability analysis, covering first-order and second-order reliability methods, followed by simulation techniques performed to obtain the probability of failure and the reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation that uses sensitivity analysis to remove highly reliable constraints from the RBDO, thereby reducing the computational time and the number of function evaluations. Finally, implementations of the reliability analysis concepts and RBDO on finite element 2D truss problems and a planar beam problem are presented and discussed.
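The Monte Carlo estimation of a probability of failure that underlies this work can be sketched for the simplest case of a normal limit state g = R - S. The resistance and load distributions below are illustrative assumptions, not Kevlar® 49 data:

```python
import random
from statistics import NormalDist

random.seed(42)

# Crude Monte Carlo estimate of P(failure) for the limit state g = R - S,
# with illustrative (assumed) normal models for resistance R and load effect S.
MU_R, SD_R = 500.0, 50.0
MU_S, SD_S = 300.0, 30.0
N = 200_000

failures = sum(random.gauss(MU_R, SD_R) < random.gauss(MU_S, SD_S) for _ in range(N))
pf_mc = failures / N

# Closed-form check: g is itself normal, so pf = Phi(-beta) with the
# reliability index beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2).
beta = (MU_R - MU_S) / (SD_R**2 + SD_S**2) ** 0.5
pf_exact = NormalDist().cdf(-beta)
print(f"beta = {beta:.2f}, pf_MC = {pf_mc:.1e}, pf_exact = {pf_exact:.1e}")
```

The closed-form answer exists only because both variables are normal and the limit state is linear; for the nonlinear fabric containment problem, first- and second-order reliability methods or sampling are needed, which is exactly the motivation for the RBDO machinery described above.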

  8. Remediating radium contaminated legacy sites: Advances made through machine learning in routine monitoring of "hot" particles.

    PubMed

    Varley, Adam; Tyler, Andrew; Smith, Leslie; Dale, Paul; Davies, Mike

    2015-07-15

    The extensive use of radium during the 20th century for industrial, military and pharmaceutical purposes has led to a large number of contaminated legacy sites across Europe and North America. Sites that pose a high risk to the general public can present expensive and long-term remediation projects. Often the most pragmatic remediation approach is routine monitoring with gamma-ray detectors to identify, in real time, the signal from the most hazardous heterogeneous contamination (hot particles), thus facilitating their removal and safe disposal. However, current detection systems do not fully utilise all spectral information, resulting in low detection rates and ultimately an increased risk to human health. The aim of this study was to establish an optimised detector-algorithm combination. To achieve this, field data were collected using two handheld detectors (sodium iodide and lanthanum bromide), and a number of Monte Carlo-simulated hot particles were randomly injected into the field data. This allowed the detection rates of conventional deterministic (gross counts) and machine learning (neural networks and support vector machines) algorithms to be assessed. The results demonstrated that a neural network operating on a sodium iodide detector provided the best detection capability. Compared to deterministic approaches, this optimised detection system could detect a hot particle on average 10 cm deeper into the soil column, or with half the activity at the same depth. It was also found that noise from internal contamination limited the suitability of lanthanum bromide for this application. Copyright © 2015. Published by Elsevier B.V.

  9. A Deterministic Model to Quantify Risk and Guide Mitigation Strategies to Reduce Bluetongue Virus Transmission in California Dairy Cattle

    PubMed Central

    Mayo, Christie; Shelley, Courtney; MacLachlan, N. James; Gardner, Ian; Hartley, David; Barker, Christopher

    2016-01-01

    The global distribution of bluetongue virus (BTV) has been changing recently, perhaps as a result of climate change. To evaluate the risk of BTV infection and transmission in a BTV-endemic region of California, sentinel dairy cows were evaluated for BTV infection, and populations of Culicoides vectors were collected at different sites using carbon dioxide-baited traps. A deterministic model was developed to quantify risk and guide future mitigation strategies to reduce BTV infection in California dairy cattle. The greatest risk of BTV transmission was predicted within the warm Central Valley of California, which contains the highest density of dairy cattle in the United States. Temperature and parameters associated with Culicoides vectors (transmission probabilities, carrying capacity, and survivorship) had the greatest effect on BTV’s basic reproduction number, R0. Based on these analyses, optimal control strategies for reducing BTV infection risk in dairy cattle will rely heavily on early efforts to reduce vector abundance during the months prior to peak transmission. PMID:27812161
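The sensitivity of R0 to vector parameters can be illustrated with the classic Ross-Macdonald-style expression for a vector-borne pathogen. This is a generic textbook form, not the authors' exact model, and every parameter value below is an invented placeholder:

```python
import math

def r0_vector_borne(m, a, b, c, mu, eip, r):
    """Ross-Macdonald-style basic reproduction number:
       m    vectors per host            a    bites per vector per day
       b,c  transmission probabilities (vector->host, host->vector)
       mu   vector daily mortality      eip  extrinsic incubation period, days
       r    host recovery rate per day
    exp(-mu * eip) is the chance a vector survives the incubation period."""
    return (m * a**2 * b * c * math.exp(-mu * eip)) / (r * mu)

base = dict(m=50.0, a=0.2, b=0.8, c=0.5, mu=0.12, eip=10.0, r=0.05)

r0 = r0_vector_borne(**base)
r0_fewer_vectors = r0_vector_borne(**{**base, "m": 25.0})  # vector-abundance control
r0_hotter = r0_vector_borne(**{**base, "mu": 0.20})        # higher vector mortality
print(f"R0 = {r0:.1f}; halved vectors: {r0_fewer_vectors:.1f}; higher mortality: {r0_hotter:.1f}")
```

The structure of the formula mirrors the abstract's finding: R0 is linear in vector density but depends on survivorship both through 1/mu and through the exponential survival term, which is why vector-control and temperature-linked mortality have such leverage.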

  10. Experimental realization of real-time feedback-control of single-atom arrays

    NASA Astrophysics Data System (ADS)

    Kim, Hyosub; Lee, Woojun; Ahn, Jaewook

    2016-05-01

    Deterministic loading of neutral atoms at particular locations has remained a challenging problem. Here we show, in a proof-of-principle experimental demonstration, that such deterministic loading can be achieved by rearrangement of atoms. In the experiment, cold rubidium atoms were trapped by optical tweezers, which are the hologram images made by a liquid-crystal spatial light modulator (LC-SLM). After the initial occupancy was identified, the hologram was actively controlled to rearrange the captured atoms onto unfilled sites. For this, we developed a new flicker-free hologram algorithm that enables holographic atom translation. Our demonstration shows that up to N=9 atoms were simultaneously moved in the 2D plane, with 2N=18 movable degrees of freedom and a fidelity of 99% for a single-atom 5-μm translation. It is hoped that our in situ atom rearrangement will prove useful in scaling quantum computers. Samsung Science and Technology Foundation [SSTF-BA1301-12].
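Deciding which captured atom to move to which unfilled site is, at its core, an assignment problem. A brute-force sketch for the small N used in such experiments (a production system would likely use the Hungarian algorithm instead); the positions below are invented, and this is not the authors' flicker-free hologram algorithm, only the planning step that precedes it:

```python
from itertools import permutations
from math import dist

def best_rearrangement(loaded, targets):
    """Brute-force assignment of loaded-atom positions to target sites that
    minimizes the total move distance. Feasible for small N; O(N!) overall."""
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(len(targets))):
        cost = sum(dist(loaded[i], targets[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm, best_cost

# Two atoms captured at random sites, two target lattice sites (microns, assumed).
loaded = [(0.0, 0.0), (5.0, 5.0)]
targets = [(1.0, 0.0), (4.0, 5.0)]
perm, cost = best_rearrangement(loaded, targets)
print(f"assignment {perm}, total move distance {cost:.1f} um")
```

Minimizing total path length matters physically because shorter tweezer trajectories mean less heating and a higher per-move fidelity.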

  11. A Magnetorheological Polishing-Based Approach for Studying Precision Microground Surfaces of Tungsten Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafrir, S.N.; Lambropoulos, J.C.; Jacobs, S.D.

    2007-03-23

    Surface features of tungsten carbide composites processed by bound abrasive deterministic microgrinding and magnetorheological finishing (MRF) were studied for five WC-Ni composites, including one binderless material. All the materials studied were nonmagnetic with different microstructures and mechanical properties. White-light interferometry, scanning electron microscopy, and atomic force microscopy were used to characterize the surfaces after various grinding steps, surface etching, and MRF spot-taking.

  12. CRAX/Cassandra Reliability Analysis Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, D.

    1999-02-10

    Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf (COTS) components. In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment, etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CRAX (Cassandra Exoskeleton) reliability analysis software.

  13. Common features and peculiarities of the seismic activity at Phlegraean Fields, Long Valley, and Vesuvius

    USGS Publications Warehouse

    Marzocchi, W.; Vilardo, G.; Hill, D.P.; Ricciardi, G.P.; Ricco, C.

    2001-01-01

    We analyzed and compared the seismic activity that has occurred in the last two to three decades in three distinct volcanic areas: Phlegraean Fields, Italy; Vesuvius, Italy; and Long Valley, California. Our main goal is to identify and discuss common features and peculiarities in the temporal evolution of earthquake sequences that may reflect similarities and differences in the generating processes of these volcanic systems. In particular, we characterized the time series of the number of events and of the seismic energy release in terms of stochastic, deterministic, and chaotic components. The time sequences from each area consist of thousands of earthquakes, allowing a detailed quantitative analysis and comparison. The results showed no evidence for either deterministic or chaotic components in the earthquake sequences in Long Valley caldera, which appears to be dominated by stochastic behavior. In contrast, earthquake sequences at Phlegraean Fields and Mount Vesuvius show a deterministic signal mainly consisting of a 24-hour periodicity. Our analysis suggests that the modulation in seismicity is in some way related to diurnal thermal processes rather than to luni-solar tidal effects. Independently of the process that generates these periodicities in the seismicity, we suggest that the lack (or presence) of diurnal cycles in seismic swarms of volcanic areas could be closely linked to the presence (or lack) of magma motion.
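A 24-hour periodicity of the kind reported here is commonly screened for with the Schuster (Rayleigh) test on event times. A minimal sketch on synthetic catalogs; the abstract does not state which test the authors used, and both catalogs below are invented:

```python
import math
import random

def schuster_p(times_s, period_s=86_400.0):
    """Schuster (Rayleigh) test: map each event time to a phase on the candidate
    period and measure the resultant vector length against the uniform-phase
    expectation. Small p-value -> significant periodicity at that period."""
    phases = [2.0 * math.pi * (t % period_s) / period_s for t in times_s]
    r2 = sum(math.cos(p) for p in phases) ** 2 + sum(math.sin(p) for p in phases) ** 2
    return math.exp(-r2 / len(times_s))

random.seed(7)
days = 30

# Uniform catalog: event times spread evenly through the day.
uniform = [random.uniform(0, days * 86_400) for _ in range(1000)]

# Diurnally modulated catalog: events cluster around local noon.
modulated = [d * 86_400 + (random.gauss(12.0, 2.0) % 24.0) * 3600.0
             for d in range(days) for _ in range(30)]

p_uniform, p_modulated = schuster_p(uniform), schuster_p(modulated)
print(f"p(uniform) = {p_uniform:.3f}, p(diurnal) = {p_modulated:.2e}")
```

Running the same statistic at the lunar tidal period (about 12.42 h) instead of 24 h is one simple way to separate thermal diurnal forcing from luni-solar tidal forcing, the distinction the abstract draws.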

  14. Seismic Hazard Assessment for a Characteristic Earthquake Scenario: Probabilistic-Deterministic Method

    NASA Astrophysics Data System (ADS)

    mouloud, Hamidatou

    2016-04-01

    The objective of this paper is to analyze the seismic activity and the statistical treatment of the seismicity catalog of the Constantine region between 1357 and 2014, comprising 7007 seismic events. Our research is a contribution to improving seismic risk management by evaluating the seismic hazard in northeastern Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach by using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters in a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We proposed five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes; (ii) assessment of their geological, geophysical and geometric characteristics; (iii) identification of the attenuation pattern of seismic motion; (iv) calculation of the hazard at a site; and finally (v) hazard mapping for a region. In this study, the earthquake hazard evaluation procedure recently developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.

  15. Structural Deterministic Safety Factors Selection Criteria and Verification

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in the probability distributions of resistive and applied stress. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability showed that the deterministic method is not reliability-sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
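The claim that the deterministic method is not reliability-sensitive is easy to illustrate: two designs can share the same deterministic safety factor yet have wildly different failure probabilities once the scatter of strength and load is accounted for. A sketch with invented numbers, using the standard normal safety index rather than the paper's specific three-factor formulation:

```python
from statistics import NormalDist

def reliability(mu_r, sd_r, mu_s, sd_s):
    """Safety index beta and failure probability for normal resistance R and
    applied stress S; the deterministic safety factor is just mu_R / mu_S."""
    beta = (mu_r - mu_s) / (sd_r**2 + sd_s**2) ** 0.5
    return beta, NormalDist().cdf(-beta)

# Two designs with the same deterministic safety factor (600/400 = 1.5) but
# different scatter in material strength and applied stress (assumed values).
beta_a, pf_a = reliability(600.0, 30.0, 400.0, 20.0)   # tight distributions
beta_b, pf_b = reliability(600.0, 120.0, 400.0, 80.0)  # wide distributions

print(f"design A: beta = {beta_a:.2f}, pf = {pf_a:.1e}")
print(f"design B: beta = {beta_b:.2f}, pf = {pf_b:.1e}")
```

Both designs look identical to a pure safety-factor check, yet their failure probabilities differ by several orders of magnitude, which is precisely the gap the paper's probabilistic re-derivation of the deterministic factors is meant to close.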

  16. Second Cancers After Fractionated Radiotherapy: Stochastic Population Dynamics Effects

    NASA Technical Reports Server (NTRS)

    Sachs, Rainer K.; Shuryak, Igor; Brenner, David; Fakir, Hatim; Hahnfeldt, Philip

    2007-01-01

    When ionizing radiation is used in cancer therapy it can induce second cancers in nearby organs. Mainly due to longer patient survival times, these second cancers have become of increasing concern. Estimating the risk of solid second cancers involves modeling: because of long latency times, available data are usually for older, obsolescent treatment regimens. Moreover, modeling second cancers gives unique insights into human carcinogenesis, since the therapy involves administering well-characterized doses of a well-studied carcinogen, followed by long-term monitoring. In addition to putative radiation initiation that produces pre-malignant cells, inactivation (i.e. cell killing) and subsequent cell repopulation by proliferation can be important at the doses relevant to second cancer situations. A recent initiation/inactivation/proliferation (IIP) model characterized quantitatively the observed occurrence of second breast and lung cancers, using a deterministic cell population dynamics approach. To analyze whether radiation-initiated pre-malignant clones become extinct before full repopulation can occur, we here give a stochastic version of this IIP model. Combining Monte Carlo simulations with standard solutions for time-inhomogeneous birth-death equations, we show that repeated cycles of inactivation and repopulation, as occur during fractionated radiation therapy, can lead to distributions of pre-malignant cells per patient with variance >> mean, even when pre-malignant clones are Poisson-distributed. Thus fewer patients would be affected, but with a higher probability, than a deterministic model tracking average pre-malignant cell numbers would predict. Our results are applied to data on breast cancers after radiotherapy for Hodgkin disease. 
The stochastic IIP analysis, unlike the deterministic one, indicates: a) initiated, pre-malignant cells can have a growth advantage during repopulation, not just during the longer tumor latency period that follows; b) weekend treatment gaps during radiotherapy, apart from decreasing the probability of eradicating the primary cancer, substantially increase the risk of later second cancers.
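The variance inflation described here can be illustrated with a toy branching-process simulation; all parameter values below are hypothetical, not taken from the paper. Each fraction kills cells binomially, and each survivor repopulates with a Poisson burst of offspring, so the average count is conserved while the spread grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_patient(n_fractions=10, init_mean=5.0, surv_frac=0.5, regrow=2.0):
    """One patient: Poisson-initiated pre-malignant cells go through
    repeated cycles of inactivation (binomial killing) and stochastic
    repopulation (Poisson burst per survivor). Mean offspring per cell
    per cycle is surv_frac * regrow = 1, so the mean count is conserved."""
    cells = rng.poisson(init_mean)
    for _ in range(n_fractions):
        cells = rng.binomial(cells, surv_frac)  # inactivation by the fraction
        cells = rng.poisson(regrow * cells)     # repopulation by proliferation
    return cells

counts = np.array([simulate_patient() for _ in range(2000)])
# Variance across patients greatly exceeds the mean: many clones go
# extinct, while a few patients carry large pre-malignant burdens.
print(counts.mean(), counts.var())
```

This reproduces the qualitative point of the abstract: fewer patients affected, each with higher probability, than the deterministic average would suggest.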

  17. Characterizing the topology of probabilistic biological networks.

    PubMed

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-01-01

    Biological interactions are often uncertain events that may or may not take place with some probability. This uncertainty leads to a massive number of alternative interaction topologies for each such network. The existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. In this paper, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. Using our mathematical representation, we develop a method that can accurately describe the degree distribution of such networks. We also take one more step and extend our method to accurately compute the joint-degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in polynomial time in the number of interactions. Our method works quickly even for entire protein-protein interaction (PPI) networks. It also helps us find an adequate mathematical model using maximum likelihood estimation (MLE). We perform a comparative study of node-degree and joint-degree distributions in two types of biological networks: the classical deterministic networks and the more flexible probabilistic networks. Our results confirm that power-law and log-normal models best describe degree distributions for both probabilistic and deterministic networks. Moreover, the inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected. We also show that probabilistic networks are more robust for node-degree distribution computation than the deterministic ones. 
All the data sets used, the software implemented, and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/projects/probNet/.
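The polynomial-time computation of a node's degree distribution under independent edge uncertainties can be sketched as a Poisson-binomial dynamic program, convolving one Bernoulli edge at a time. The paper's actual formulation may differ; this is a minimal illustration of the core idea:

```python
def degree_pmf(edge_probs):
    """Exact pmf of a node's degree when each incident edge exists
    independently with its own probability: an O(n^2) convolution of
    Bernoulli variables (the Poisson-binomial distribution)."""
    pmf = [1.0]                      # no edges processed: degree 0 for sure
    for p in edge_probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            nxt[k] += q * (1.0 - p)  # this edge absent
            nxt[k + 1] += q * p      # this edge present
        pmf = nxt
    return pmf

# A node with three uncertain interactions:
print(degree_pmf([0.9, 0.5, 0.1]))
```

The key point from the abstract survives in miniature: although there are 2^n possible topologies, the distribution over degrees is computed in polynomial time.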

  18. Spatio-Temporal Data Analysis at Scale Using Models Based on Gaussian Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Michael

    Gaussian processes are the most commonly used statistical model for spatial and spatio-temporal processes that vary continuously. They are broadly applicable in the physical sciences and engineering and are also frequently used to approximate the output of complex computer models, deterministic or stochastic. We undertook research related to theory, computation, and applications of Gaussian processes as well as some work on estimating extremes of distributions for which a Gaussian process assumption might be inappropriate. Our theoretical contributions include the development of new classes of spatial-temporal covariance functions with desirable properties and new results showing that certain covariance models lead to predictions with undesirable properties. To understand how Gaussian process models behave when applied to deterministic computer models, we derived what we believe to be the first significant results on the large sample properties of estimators of parameters of Gaussian processes when the actual process is a simple deterministic function. Finally, we investigated some theoretical issues related to maxima of observations with varying upper bounds and found that, depending on the circumstances, standard large sample results for maxima may or may not hold. Our computational innovations include methods for analyzing large spatial datasets when observations fall on a partially observed grid and methods for estimating parameters of a Gaussian process model from observations taken by a polar-orbiting satellite. In our application of Gaussian process models to deterministic computer experiments, we carried out some matrix computations that would have been infeasible using even extended precision arithmetic by focusing on special cases in which all elements of the matrices under study are rational and using exact arithmetic. 
The applications we studied include total column ozone as measured from a polar-orbiting satellite, sea surface temperatures over the Pacific Ocean, and annual temperature extremes at a site in New York City. In each of these applications, our theoretical and computational innovations were directly motivated by the challenges posed by analyzing these and similar types of data.
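As a minimal illustration of the Gaussian-process machinery underlying this work (textbook GP prediction with a squared-exponential covariance, not the authors' specialized covariance classes or satellite-data methods):

```python
import numpy as np

def sq_exp(a, b, ell=1.0, sigma2=1.0):
    """Squared-exponential covariance between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return sigma2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6, ell=1.0):
    """Posterior mean and covariance of a zero-mean GP at x_test."""
    K = sq_exp(x_train, x_train, ell) + noise * np.eye(len(x_train))
    Ks = sq_exp(x_test, x_train, ell)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = sq_exp(x_test, x_test, ell) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

# Treat a deterministic function (sin) as the "computer model" output,
# echoing the report's theme of GP emulation of deterministic models.
x = np.linspace(0.0, 3.0, 10)
y = np.sin(x)
mean, cov = gp_predict(x, y, np.array([1.0]))
```

At observed inputs the posterior mean interpolates the deterministic function almost exactly, which is the starting point for the estimation-theory questions the report studies.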

  19. Computing exponentially faster: implementing a non-deterministic universal Turing machine using DNA

    PubMed Central

    Currin, Andrew; Korovin, Konstantin; Ababi, Maria; Roper, Katherine; Kell, Douglas B.; Day, Philip J.

    2017-01-01

The theory of computer science is based around universal Turing machines (UTMs): abstract machines able to execute all possible algorithms. Modern digital computers are physical embodiments of classical UTMs. For the most important class of problems in computer science, non-deterministic polynomial complete problems, non-deterministic UTMs (NUTMs) are theoretically exponentially faster than both classical UTMs and quantum mechanical UTMs (QUTMs). However, no attempt has previously been made to build an NUTM, and their construction has been regarded as impossible. Here, we demonstrate the first physical design of an NUTM. This design is based on Thue string rewriting systems, and thereby avoids the limitations of most previous DNA computing schemes: all the computation is local (simple edits to strings), so there is no need for communication, and there is no need to order operations. The design exploits DNA's ability to replicate to execute an exponential number of computational paths in P time. Each Thue rewriting step is embodied in a DNA edit implemented using a novel combination of polymerase chain reactions and site-directed mutagenesis. We demonstrate that the design works using both computational modelling and in vitro molecular biology experimentation: the design is thermodynamically favourable, microprogramming can be used to encode arbitrary Thue rules, all classes of Thue rule can be implemented, and rules can be implemented non-deterministically. In an NUTM, the resource limitation is space, which contrasts with classical UTMs and QUTMs, where it is time. This fundamental difference enables an NUTM to trade space for time, which is significant for both theoretical computer science and physics. It is also of practical importance, for, to quote Richard Feynman, ‘there's plenty of room at the bottom’. 
This means that a desktop DNA NUTM could potentially utilize more processors than all the electronic computers in the world combined, and thereby outperform the world's current fastest supercomputer, while consuming a tiny fraction of its energy. PMID:28250099
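The exponential path exploration that the DNA design performs in parallel can be emulated serially in software; a small breadth-first sketch of Thue-style rewriting follows (the rules are illustrative, not the paper's microprograms):

```python
from collections import deque

def reachable(start, rules, max_steps):
    """Breadth-first enumeration of all strings reachable by applying
    Thue-style rewrite rules (lhs -> rhs) at any position -- a serial
    emulation of the parallel paths a DNA NUTM follows at once."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        s, depth = frontier.popleft()
        if depth == max_steps:
            continue
        for lhs, rhs in rules:
            i = s.find(lhs)
            while i != -1:                       # every match position is
                t = s[:i] + rhs + s[i + len(lhs):]  # a separate branch
                if t not in seen:
                    seen.add(t)
                    frontier.append((t, depth + 1))
                i = s.find(lhs, i + 1)
    return seen

print(reachable("a", [("a", "aa")], 3))
```

On a classical machine the frontier can grow exponentially with depth, which is exactly the space-for-time trade the abstract describes.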

  20. Towards the simplest hydrodynamic lattice-gas model.

    PubMed

    Boghosian, Bruce M; Love, Peter J; Meyer, David A

    2002-03-15

    It has been known since 1986 that it is possible to construct simple lattice-gas cellular automata whose hydrodynamics are governed by the Navier-Stokes equations in two dimensions. The simplest such model heretofore known has six bits of state per site on a triangular lattice. In this work, we demonstrate that it is possible to construct a model with only five bits of state per site on a Kagome lattice. Moreover, the model has a simple, deterministic set of collision rules and is easily implemented on a computer. In this work, we derive the equilibrium distribution function for this lattice-gas automaton and carry out the Chapman-Enskog analysis to determine the form of the Navier-Stokes equations.

  1. Laser targets compensate for limitations in inertial confinement fusion drivers

    NASA Astrophysics Data System (ADS)

    Kilkenny, J. D.; Alexander, N. B.; Nikroo, A.; Steinman, D. A.; Nobile, A.; Bernat, T.; Cook, R.; Letts, S.; Takagi, M.; Harding, D.

    2005-10-01

Success in inertial confinement fusion (ICF) requires sophisticated, characterized targets. The increasing fidelity of three-dimensional (3D), radiation hydrodynamic computer codes has made it possible to design targets for ICF which can compensate for limitations in the existing single shot laser and Z pinch ICF drivers. Developments in ICF target fabrication technology allow more esoteric target designs to be fabricated. Present requirements call for new deterministic nano-material fabrication at the micro scale.

  2. Geometric Universality in Brain Allosteric Protein Dynamics: Complex Hydrophobic Transformation Predicts Mutual Recognition by Polypeptides and Proteins,

    DTIC Science & Technology

    1986-10-01

    organic acids using the Hammett equation, has been called the hydrophobic effect.' Water adjusts its geometry to maximize the number of intact hydrogen...understanding both structural stability with respect to the underlying equations (not initial values) and phase transitions in these dynamical hierarchies...for quantitative characterization. Although the complicated behavior is generated by deterministic equations, its description in entropies leads to

  3. Inclusion of Multiple Functional Types in an Automaton Model of Bioturbation and Their Effects on Sediments Properties

    DTIC Science & Technology

    2007-09-30

    if the traditional models adequately parameterize and characterize the actual mixing. As an example of the application of this method, we have... (2) Deterministic Modelling Results. As noted above, we are working on a stochastic method of modelling transient and short-lived tracers...heterogeneity. RELATED PROJECTS We have worked in collaboration with Peter Jumars (Univ. Maine), and his PhD student Kelley Dorgan, who are measuring

  4. Radial variations of large-scale magnetohydrodynamic fluctuations in the solar wind

    NASA Technical Reports Server (NTRS)

    Burlaga, L. F.; Goldstein, M. L.

    1983-01-01

    Two time periods are studied for which comprehensive data coverage is available at both 1 AU using IMP-8 and ISEE-3 and beyond using Voyager 1. One of these periods is characterized by the predominance of corotating stream interactions. Relatively small scale transient flows characterize the second period. The evolution of these flows with heliocentric distance is studied using power spectral techniques. The evolution of the transient dominated period is consistent with the hypothesis of turbulent evolution including an inverse cascade of large scales. The evolution of the corotating period is consistent with the entrainment of slow streams by faster streams in a deterministic model.

  5. Used Nuclear Fuel Loading and Structural Performance Under Normal Conditions of Transport- Demonstration of Approach and Results on Used Fuel Performance Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adkins, Harold; Geelhood, Ken; Koeppel, Brian

    2013-09-30

This document addresses Oak Ridge National Laboratory milestone M2FT-13OR0822015, Demonstration of Approach and Results on Used Nuclear Fuel Performance Characterization. This report provides results of the initial demonstration of the modeling capability developed to perform preliminary deterministic evaluations of moderate-to-high burnup used nuclear fuel (UNF) mechanical performance under normal conditions of storage (NCS) and normal conditions of transport (NCT). This report also provides results from the sensitivity studies that have been performed. Finally, discussion of the long-term goals and objectives of this initiative is provided.

  6. A measurement of disorder in binary sequences

    NASA Astrophysics Data System (ADS)

    Gong, Longyan; Wang, Haihong; Cheng, Weiwen; Zhao, Shengmei

    2015-03-01

We propose a complex quantity, AL, to characterize the degree of disorder of L-length binary symbolic sequences. As examples, we apply it to typical random and deterministic sequences. One kind of random sequence is generated from a periodic binary sequence and the other is generated from the logistic map. The deterministic sequences are the Fibonacci and Thue-Morse sequences. In these analyzed sequences, we find that the modulus of AL, denoted by |AL|, is a (statistically) equivalent quantity to the Boltzmann entropy, the metric entropy, the conditional block entropy and/or other quantities, so it is a useful quantitative measure of disorder. It can serve as a fruitful index to discern which sequence is more disordered. Moreover, there is one and only one value of |AL| for the overall disorder characteristics. It has extremely low computational costs and can be easily realized experimentally. Given all of this, we believe that the proposed measure of disorder is a valuable complement to existing ones for symbolic sequences.
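The block entropies that |AL| is benchmarked against are easy to compute directly. The AL quantity itself is not reproduced here; this sketch contrasts the deterministic Thue-Morse sequence with a seeded pseudo-random one:

```python
import random
from collections import Counter
from math import log2

def block_entropy(seq, L):
    """Shannon entropy (bits) of the empirical distribution of
    length-L blocks in a symbolic sequence."""
    blocks = [seq[i:i + L] for i in range(len(seq) - L + 1)]
    n = len(blocks)
    return -sum(c / n * log2(c / n) for c in Counter(blocks).values())

def thue_morse(n):
    """First n symbols of the Thue-Morse sequence (parity of bit count)."""
    return ''.join(str(bin(i).count('1') % 2) for i in range(n))

rng = random.Random(0)
rand_seq = ''.join(rng.choice('01') for _ in range(4096))

# The deterministic sequence admits far fewer distinct length-4 blocks,
# hence a lower block entropy than the (pseudo-)random sequence.
print(block_entropy(thue_morse(4096), 4), block_entropy(rand_seq, 4))
```

Any single-number disorder index, AL included, is ultimately judged against orderings like this one.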

  7. Topological chaos of the spatial prisoner's dilemma game on regular networks.

    PubMed

    Jin, Weifeng; Chen, Fangyue

    2016-02-21

The spatial version of the evolutionary prisoner's dilemma on an infinitely large regular lattice with purely deterministic strategies and no memories among players is investigated in this paper. Based on the statistical inferences, it is pertinent to confirm that the frequency of cooperation for characterizing its macroscopic behaviors is very sensitive to the initial conditions, which is the most practically significant property of chaos. Its intrinsic complexity is then justified on firm ground from the theory of symbolic dynamics; that is, this game is topologically mixing and possesses positive topological entropy on its subsystems. It is demonstrated therefore that its frequency of cooperation could not be adopted by simply averaging over several steps after the game reaches the equilibrium state. Furthermore, the chaotically changing spatial patterns via empirical observations can be defined and justified in view of symbolic dynamics. It is worth mentioning that the procedure proposed in this work is also applicable to other deterministic spatial evolutionary games. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Deterministic Evolutionary Trajectories Influence Primary Tumor Growth: TRACERx Renal.

    PubMed

    Turajlic, Samra; Xu, Hang; Litchfield, Kevin; Rowan, Andrew; Horswell, Stuart; Chambers, Tim; O'Brien, Tim; Lopez, Jose I; Watkins, Thomas B K; Nicol, David; Stares, Mark; Challacombe, Ben; Hazell, Steve; Chandra, Ashish; Mitchell, Thomas J; Au, Lewis; Eichler-Jonsson, Claudia; Jabbar, Faiz; Soultati, Aspasia; Chowdhury, Simon; Rudman, Sarah; Lynch, Joanna; Fernando, Archana; Stamp, Gordon; Nye, Emma; Stewart, Aengus; Xing, Wei; Smith, Jonathan C; Escudero, Mickael; Huffman, Adam; Matthews, Nik; Elgar, Greg; Phillimore, Ben; Costa, Marta; Begum, Sharmin; Ward, Sophia; Salm, Max; Boeing, Stefan; Fisher, Rosalie; Spain, Lavinia; Navas, Carolina; Grönroos, Eva; Hobor, Sebastijan; Sharma, Sarkhara; Aurangzeb, Ismaeel; Lall, Sharanpreet; Polson, Alexander; Varia, Mary; Horsfield, Catherine; Fotiadis, Nicos; Pickering, Lisa; Schwarz, Roland F; Silva, Bruno; Herrero, Javier; Luscombe, Nick M; Jamal-Hanjani, Mariam; Rosenthal, Rachel; Birkbak, Nicolai J; Wilson, Gareth A; Pipek, Orsolya; Ribli, Dezso; Krzystanek, Marcin; Csabai, Istvan; Szallasi, Zoltan; Gore, Martin; McGranahan, Nicholas; Van Loo, Peter; Campbell, Peter; Larkin, James; Swanton, Charles

    2018-04-19

    The evolutionary features of clear-cell renal cell carcinoma (ccRCC) have not been systematically studied to date. We analyzed 1,206 primary tumor regions from 101 patients recruited into the multi-center prospective study, TRACERx Renal. We observe up to 30 driver events per tumor and show that subclonal diversification is associated with known prognostic parameters. By resolving the patterns of driver event ordering, co-occurrence, and mutual exclusivity at clone level, we show the deterministic nature of clonal evolution. ccRCC can be grouped into seven evolutionary subtypes, ranging from tumors characterized by early fixation of multiple mutational and copy number drivers and rapid metastases to highly branched tumors with >10 subclonal drivers and extensive parallel evolution associated with attenuated progression. We identify genetic diversity and chromosomal complexity as determinants of patient outcome. Our insights reconcile the variable clinical behavior of ccRCC and suggest evolutionary potential as a biomarker for both intervention and surveillance. Copyright © 2018 Francis Crick Institute. Published by Elsevier Inc. All rights reserved.

  9. Deterministic quantum dense coding networks

    NASA Astrophysics Data System (ADS)

    Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal

    2018-07-01

    We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state - an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters as well as the four-qubit Dicke states can provide a quantum advantage of sending the information in deterministic dense coding. Interestingly however, numerical simulations in the three-qubit scenario reveal that the percentage of states from the GHZ-class that are deterministic dense codeable is higher than that of states from the W-class.

  10. Application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) of the rare earth elements (REEs) in beneficiation rare earth waste from the gold processing: case study

    NASA Astrophysics Data System (ADS)

    Bieda, Bogusław; Grzesik, Katarzyna

    2017-11-01

The study proposes a stochastic approach based on Monte Carlo (MC) simulation for the life cycle assessment (LCA) method, limited to a life cycle inventory (LCI) study, for rare earth elements (REEs) recovery from secondary materials, applied to the New Krankberg Mine in Sweden. The MC method is recognized as an important tool in science and can be considered the most effective quantification approach for uncertainties. The stochastic approach characterizes uncertainties better than a deterministic method. Uncertainty of data can be expressed through a definition of the probability distribution of that data (e.g. through standard deviation or variance). The data used in this study are obtained from: (i) site-specific measured or calculated data, (ii) values based on literature, (iii) the ecoinvent process "rare earth concentrate, 70% REO, from bastnäsite, at beneficiation". Environmental emissions (e.g., particulates, uranium-238, thorium-232), energy, and REEs (La, Ce, Nd, Pr, Sm, Dy, Eu, Tb, Y, Sc, Yb, Lu, Tm, Gd) have been inventoried. The study is based on a reference case for the year 2016. The combination of MC analysis with sensitivity analysis is the best solution for quantifying the uncertainty in the LCI/LCA. The reliability of LCA results may be uncertain to a certain degree, but this uncertainty can be quantified with the help of the MC method.
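The kind of MC uncertainty propagation described can be sketched in a few lines. All input quantities, values, and distributions below are made up for illustration; they are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

# Hypothetical LCI inputs with their uncertainty distributions:
ore = rng.normal(1000.0, 50.0, N)            # feed processed, t (measured)
grade = rng.lognormal(np.log(0.07), 0.1, N)  # REO mass fraction (literature)
energy_per_t = rng.normal(250.0, 25.0, N)    # MJ per t of feed (database-style)

reo_out = ore * grade          # t REO recovered, now a distribution
energy = ore * energy_per_t    # MJ total, now a distribution

# Report the inventory item as a mean with a 95% interval rather than
# a single deterministic point value.
lo, hi = np.percentile(reo_out, [2.5, 97.5])
print(f"REO: {reo_out.mean():.1f} t (95% interval {lo:.1f}-{hi:.1f})")
```

Each inventory flow inherits a full distribution, which is the sense in which the stochastic approach "characterizes the uncertainties better" than a single deterministic run.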

  11. Habitat selection and productivity of least terns on the lower Platte River, Nebraska

    USGS Publications Warehouse

    Kirsch, Eileen M.

    1996-01-01

    Least terns (Sterna antillarum) were studied on the lower Platte River, Nebraska, where this endangered population nests on natural sandbar habitat and on sandpit sites created by gravel dredging adjacent to the river. Theoretically terns should select habitats according to habitat suitability. However, the introduction of sandpits and conversion of tallgrass prairies along the river banks to agriculture, residential, and wooded areas may have affected terns' abilities to distinguish suitable habitat or the suitability of nesting habitats in general. I examined habitat selection and productivity of least terns to determine if terns selected habitat according to suitability (as indicated by productivity), what factors affected habitat selection and productivity, and if estimated productivity could support this population. Available habitats of both types were characterized and quantified using aerial videography (1989-90), and habitat use was assessed from census data (1987-90). Productivity of adults and causes and correlates of egg and chick mortality were estimated (1987-90). Population trend was assessed with a deterministic model using my estimates of productivity and a range of survival estimates for Laridae reported in the literature. Terns tended to use river sites with large midstream sandbars and a wide channel, and large sandpit sites with large surface areas of water relative to unused sites on both habitats. Number of sites and area of sand available were estimated using discriminant function analysis of variables quantified from video scenes of both habitats. Terns apparently did not use all potentially available sandbar and sandpit sites because discriminant function factor scores for used and unused sites overlapped broadly for both habitats. Terns did not prefer 1 habitat over the other. Although proportions of available sites used were greater on sandpits than on the river, proportions of available sand used did not differ between habitats. 
Proportion of terns using each habitat was similar to proportion of available sand on each habitat. The distribution of nest initiation dates and rates of colony-site turnover also were similar on both habitats. Productivity did not differ between habitats but varied significantly among sites. Nest success, fledging success, and fledglings per pair averaged 0.54, 0.28, and 0.47, respectively. Key factor analysis revealed that chick survival had a greater influence on production of fledglings (on both sandbars and sandpits) than did failure to produce a maximum clutch size or egg mortality. Most egg mortality was caused by predation on sandpits and by flooding on sandbars. Predation was suspected as the major cause of loss for chicks on both habitats. Path analysis revealed no strong or consistent correlations among mortality, numbers of nests and chicks, track trails of intruders into colonies, and habitat variables at colonies on either habitat. Theoretically, terns should not prefer a habitat when habitats are equally suitable if terns have had time to respond to habitat changes. Although sandbars and sandpits appeared equally suitable and terns did not prefer either habitat, local productivity will not support this population unless annual postfledging survival is higher than current estimates for the species. Population trend estimated with fledglings per pair = 0.50 was negative for all but the highest (ca 0.90) rates of annual postfledging survival. Furthermore, deterministic models like the one used in this study overestimate trend. Productivity insufficient to support the local population, in spite of habitat use that reflects habitat suitability, could be due to increased predation caused by habitat alteration adjacent to the river that may have changed the predator community. 
Alternatively, terns in this area could persist in spite of prevailing low productivity because they are relatively long-lived birds, if highly productive years occasionally occur or if this population is augmented by immigrants from elsewhere.
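A deterministic trend model of the type used here combines productivity and survival into an annual growth rate λ, with λ < 1 indicating decline. The abstract gives no formulas, so this is a generic birth-pulse sketch with hypothetical survival rates, not the study's model:

```python
def annual_growth_rate(fledglings_per_pair, postfledging_survival,
                       adult_survival):
    """Deterministic lambda for a simple birth-pulse model: each adult
    contributes half a pair's fledgling production, discounted by
    survival to breeding age, plus its own chance of surviving the year.
    All rates here are illustrative assumptions."""
    return adult_survival + 0.5 * fledglings_per_pair * postfledging_survival

# With productivity near the study's average (~0.50 fledglings per pair),
# lambda stays below 1 unless survival rates are very high:
print(annual_growth_rate(0.50, 0.70, 0.80))  # below 1: declining
print(annual_growth_rate(0.50, 0.90, 0.90))  # above 1: growing
```

The qualitative conclusion matches the abstract: at observed productivity, only very high postfledging and adult survival keep the trend non-negative.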

  12. Soils: man-caused radioactivity and radiation forecast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gablin, Vassily

    2007-07-01

    Available in abstract form only. Full text of publication follows: One of the main tasks of the radiation safety guarantee is non-admission of the excess over critical radiation levels. In Russia they are man-caused radiation levels. Meanwhile any radiation measurement represents total radioactivity. That is why it is hard to assess natural and man-caused contributions to total radioactivity. It is shown that soil radioactivity depends on natural factors including radioactivity of rocks and cosmic radiation as well as man-caused factors including nuclear and non-nuclear technologies. The whole totality of these factors includes unpredictable (non-deterministic) factors - nuclear explosions and radiation accidents - and predictable (deterministic) ones - all the rest. Deterministic factors represent background radioactivity, whose trend is the basis of the radiation forecast. Non-deterministic factors represent the man-caused radiation contribution, which is to be controlled. This contribution is equal to the difference between measured radioactivity and the radiation background. A way of calculating background radioactivity is proposed. Contemporary soils are complicated, technologically influenced systems with multi-leveled spatial and temporal inhomogeneity of radionuclide distribution. Generally an analysis area can be characterized by any set of factors of soil radioactivity, including natural and man-caused factors. Natural factors are cosmic radiation and radioactivity of rocks. Man-caused factors are shown on Fig. 1. It is obvious that man-caused radioactivity is due to both artificial and natural emitters. Any result of radiation measurement represents total radioactivity, i.e. the sum of activities resulting from natural and man-caused emitters. There is no gauge which could separately measure natural and man-caused radioactivity. That is why it is so hard to assess natural and man-caused contributions to soil radioactivity. 
It would have been possible if human activity had led to contamination of soil only by artificial radionuclides. But we can view a totality of soil radioactivity factors in the following way. (author)

  13. Stochastic Analysis and Probabilistic Downscaling of Soil Moisture

    NASA Astrophysics Data System (ADS)

    Deshon, J. P.; Niemann, J. D.; Green, T. R.; Jones, A. S.

    2017-12-01

    Soil moisture is a key variable for rainfall-runoff response estimation, ecological and biogeochemical flux estimation, and biodiversity characterization, each of which is useful for watershed condition assessment. These applications require not only accurate, fine-resolution soil-moisture estimates but also confidence limits on those estimates and soil-moisture patterns that exhibit realistic statistical properties (e.g., variance and spatial correlation structure). The Equilibrium Moisture from Topography, Vegetation, and Soil (EMT+VS) model downscales coarse-resolution (9-40 km) soil moisture from satellite remote sensing or land-surface models to produce fine-resolution (10-30 m) estimates. The model was designed to produce accurate deterministic soil-moisture estimates at multiple points, but the resulting patterns do not reproduce the variance or spatial correlation of observed soil-moisture patterns. The primary objective of this research is to generalize the EMT+VS model to produce a probability density function (pdf) for soil moisture at each fine-resolution location and time. Each pdf has a mean that is equal to the deterministic soil-moisture estimate, and the pdf can be used to quantify the uncertainty in the soil-moisture estimates and to simulate soil-moisture patterns. Different versions of the generalized model are hypothesized based on how uncertainty enters the model, whether the uncertainty is additive or multiplicative, and which distributions describe the uncertainty. These versions are then tested by application to four catchments with detailed soil-moisture observations (Tarrawarra, Satellite Station, Cache la Poudre, and Nerrigundah). The performance of the generalized models is evaluated by comparing the statistical properties of the simulated soil-moisture patterns to those of the observations and the deterministic EMT+VS model. 
The versions of the generalized EMT+VS model with normally distributed stochastic components produce soil-moisture patterns with more realistic statistical properties than the deterministic model. Additionally, the results suggest that the variance and spatial correlation of the stochastic soil-moisture variations do not vary consistently with the spatial-average soil moisture.
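The key construction here, a pdf whose mean equals the deterministic estimate, can be illustrated with mean-preserving multiplicative (lognormal) or additive (normal) noise. The EMT+VS formulation itself is more involved; the coefficient of variation below is an assumed value:

```python
import numpy as np

rng = np.random.default_rng(7)

def stochastic_downscale(det_estimate, cv=0.2, n=1000, multiplicative=True):
    """Draw n fine-resolution soil-moisture realizations whose mean
    equals the deterministic estimate. cv is an assumed coefficient
    of variation for the stochastic component."""
    if multiplicative:
        sigma = np.sqrt(np.log(1.0 + cv**2))
        # lognormal with mu = -sigma^2/2 has mean exactly 1,
        # so the deterministic estimate is preserved on average
        return det_estimate * rng.lognormal(-0.5 * sigma**2, sigma, n)
    return det_estimate + rng.normal(0.0, cv * det_estimate, n)

samples = stochastic_downscale(0.30)  # e.g. 30% volumetric water content
print(samples.mean(), samples.std())
```

The spread of the samples supplies the confidence limits and pattern variance that the purely deterministic estimate lacks.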

  14. Controlled deterministic implantation by nanostencil lithography at the limit of ion-aperture straggling

    NASA Astrophysics Data System (ADS)

    Alves, A. D. C.; Newnham, J.; van Donkelaar, J. A.; Rubanov, S.; McCallum, J. C.; Jamieson, D. N.

    2013-04-01

    Solid state electronic devices fabricated in silicon employ many ion implantation steps in their fabrication. In nanoscale devices deterministic implants of dopant atoms with high spatial precision will be needed to overcome problems with statistical variations in device characteristics and to open new functionalities based on controlled quantum states of single atoms. However, to deterministically place a dopant atom with the required precision is a significant technological challenge. Here we address this challenge with a strategy based on stepped nanostencil lithography for the construction of arrays of single implanted atoms. We address the limit on spatial precision imposed by ion straggling in the nanostencil—fabricated with the readily available focused ion beam milling technique followed by Pt deposition. Two nanostencils have been fabricated; a 60 nm wide aperture in a 3 μm thick Si cantilever and a 30 nm wide aperture in a 200 nm thick Si3N4 membrane. The 30 nm wide aperture demonstrates the fabricating process for sub-50 nm apertures while the 60 nm aperture was characterized with 500 keV He+ ion forward scattering to measure the effect of ion straggling in the collimator and deduce a model for its internal structure using the GEANT4 ion transport code. This model is then applied to simulate collimation of a 14 keV P+ ion beam in a 200 nm thick Si3N4 membrane nanostencil suitable for the implantation of donors in silicon. We simulate collimating apertures with widths in the range of 10-50 nm because we expect the onset of J-coupling in a device with 30 nm donor spacing. We find that straggling in the nanostencil produces mis-located implanted ions with a probability between 0.001 and 0.08 depending on the internal collimator profile and the alignment with the beam direction. This result is favourable for the rapid prototyping of a proof-of-principle device containing multiple deterministically implanted dopants.

  15. Health Monitoring for Airframe Structural Characterization

    NASA Technical Reports Server (NTRS)

    Munns, Thomas E.; Kent, Renee M.; Bartolini, Antony; Gause, Charles B.; Borinski, Jason W.; Dietz, Jason; Elster, Jennifer L.; Boyd, Clark; Vicari, Larry; Ray, Asok

    2002-01-01

This study established requirements for structural health monitoring systems, identified and characterized a prototype structural sensor system, developed sensor interpretation algorithms, and demonstrated the sensor systems on operationally realistic test articles. Fiber-optic corrosion sensors (i.e., moisture and metal ion sensors) and low-cycle fatigue sensors (i.e., strain and acoustic emission sensors) were evaluated to validate their suitability for monitoring aging degradation; characterize the sensor performance in aircraft environments; and demonstrate placement processes and multiplexing schemes. In addition, a unique micromachined multimeasurand sensor concept was developed and demonstrated. The results show that structural degradation of aircraft materials could be effectively detected and characterized using available and emerging sensors. A key component of the structural health monitoring capability is the ability to interpret the information provided by the sensor system in order to characterize the structural condition. Novel deterministic and stochastic fatigue damage development and growth models were developed for this program. These models enable real-time characterization and assessment of structural fatigue damage.

  16. Optical characterization limits of nanoparticle aggregates at different wavelengths using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Eriçok, Ozan Burak; Ertürk, Hakan

    2018-07-01

    Optical characterization of nanoparticle aggregates is a complex inverse problem that can be solved by deterministic or statistical methods. Previous studies showed that there exists a lower size limit of reliable characterization that depends on the wavelength of the light source used. In this study, these characterization limits are determined for light-source wavelengths ranging from the ultraviolet to the near infrared (266-1064 nm), relying on numerical light scattering experiments. Two different measurement ensembles are considered: a collection of well-separated aggregates composed of same-sized particles, and one with a particle size distribution. Filippov's cluster-cluster algorithm is used to generate the aggregates, and the light scattering behavior is calculated by the discrete dipole approximation. A likelihood-free Approximate Bayesian Computation method, relying on the Adaptive Population Monte Carlo method, is used for characterization. It is found that over the 266-1064 nm wavelength range, the successful characterization limit varies from 21 to 62 nm effective radius for monodisperse and polydisperse soot aggregates.
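    The likelihood-free approach above can be illustrated with a minimal rejection-ABC sketch. The forward model here is a deliberately toy observable (not the discrete dipole approximation), and the radius, wavelength, prior bounds, and tolerance are hypothetical; the paper's Adaptive Population Monte Carlo refines this basic scheme with adaptive tolerances and weighted resampling.

    ```python
    import random

    def forward_model(radius, wavelength):
        # Toy stand-in for a scattering observable that grows with
        # radius/wavelength; NOT the discrete dipole approximation.
        return (radius / wavelength) ** 2

    def abc_rejection(observed, wavelength, prior_lo, prior_hi, eps, n_draws, rng):
        # Rejection ABC: keep prior draws whose simulated observable
        # falls within eps of the measured one.
        accepted = []
        for _ in range(n_draws):
            theta = rng.uniform(prior_lo, prior_hi)
            if abs(forward_model(theta, wavelength) - observed) < eps:
                accepted.append(theta)
        return accepted

    rng = random.Random(0)
    true_radius = 40.0   # nm (hypothetical ground truth)
    wavelength = 532.0   # nm (hypothetical light source)
    observed = forward_model(true_radius, wavelength)
    posterior = abc_rejection(observed, wavelength, 10.0, 100.0, 1e-4, 20000, rng)
    estimate = sum(posterior) / len(posterior)
    ```

    The posterior mean recovers the hypothetical true radius because every accepted draw is, by construction, consistent with the observed scattering value to within the tolerance.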

  17. Surface slope metrology of highly curved x-ray optics with an interferometric microscope

    NASA Astrophysics Data System (ADS)

    Gevorkyan, Gevork S.; Centers, Gary; Polonska, Kateryna S.; Nikitin, Sergey M.; Lacey, Ian; Yashchuk, Valeriy V.

    2017-09-01

    The development of deterministic polishing techniques has given rise to vendors that manufacture high quality three-dimensional x-ray optics. The surface metrology on these optics remains a difficult task. For the fabrication, vendors usually use unique surface metrology tools, generally developed on site, that are not available in the optical metrology labs at x-ray facilities. At the Advanced Light Source X-Ray Optics Laboratory, we have developed a rather straightforward interferometric-microscopy-based procedure capable of sub-microradian characterization of sagittal slope variation of two-dimensionally focusing and collimating x-ray optics (ellipsoids, paraboloids, etc.). In the paper, we provide the mathematical foundation of the procedure and describe the related instrument calibration. We also present an analytical expression describing the ideal surface shape in the sagittal direction of a spheroid specified by the conjugate parameters of the optic's beamline application. The expression is useful when analyzing data obtained with such optics. The high efficiency of the developed measurement and data analysis procedures is demonstrated by measurements of a number of x-ray optics with sagittal radii of curvature between 56 mm and 480 mm. We also discuss potential areas of further improvement.

  18. Community assembly of a euryhaline fish microbiome during salinity acclimation.

    PubMed

    Schmidt, Victor T; Smith, Katherine F; Melvin, Donald W; Amaral-Zettler, Linda A

    2015-05-01

    Microbiomes play a critical role in promoting a range of host functions. Microbiome function, in turn, is dependent on its community composition. Yet, how microbiome taxa are assembled from their regional species pool remains unclear. Many possible drivers have been hypothesized, including deterministic processes of competition, stochastic processes of colonization and migration, and physiological 'host-effect' habitat filters. The contribution of each to assembly in nascent or perturbed microbiomes is important for understanding host-microbe interactions and host health. In this study, we characterized the bacterial communities in a euryhaline fish and the surrounding tank water during salinity acclimation. To assess the relative influence of stochastic versus deterministic processes in fish microbiome assembly, we manipulated the bacterial species pool around each fish by changing the salinity of aquarium water. Our results show a complete and repeatable turnover of dominant bacterial taxa in the microbiomes from individuals of the same species after acclimation to the same salinity. We show that changes in fish microbiomes are not correlated with corresponding changes to abundant taxa in tank water communities and that the dominant taxa in fish microbiomes are rare in the aquatic surroundings, and vice versa. Our results suggest that bacterial taxa best able to compete within the unique host environment at a given salinity appropriate the most niche space, independent of their relative abundance in tank water communities. In this experiment, deterministic processes appear to drive fish microbiome assembly, with little evidence for stochastic colonization. © 2015 John Wiley & Sons Ltd.

  19. Traveling Salesman Problem for Surveillance Mission Using Particle Swarm Optimization

    DTIC Science & Technology

    2001-03-20

    design of experiments, results of the experiments, and qualitative and quantitative analysis . Conclusions and recommendations based on the qualitative and...characterize the algorithm. Such analysis and comparison between LK and a non-deterministic algorithm produces claims such as "Lin-Kernighan algorithm takes... based on experiments 5 and 6. All other parameters are the same as the baseline (see 4.2.1.2). 4.2.2.6 Experiment 10 - Fine Tuning PSO AS: 85,95% Global

  20. Implementing the effect of the rupture directivity on PSHA maps: Application to the Marmara Region (Turkey)

    NASA Astrophysics Data System (ADS)

    Herrero, Andre; Spagnuolo, Elena; Akinci, Aybige; Pucci, Stefano

    2016-04-01

    In the present study we attempt to improve seismic hazard assessment by taking into account possible sources of epistemic uncertainty and the azimuthal variability of ground motions which, at a particular site, is significantly influenced by the rupture mechanism and the rupture direction relative to the site. As a study area we selected the Marmara Region (Turkey), in particular the city of Istanbul, which is characterized by one of the highest levels of seismic risk in Europe and the Mediterranean region. The seismic hazard in the city is mainly associated with two active fault segments located about 20-30 km south of Istanbul. We first propose a methodology to incorporate new information, such as the nucleation point, into a probabilistic seismic hazard analysis (PSHA) framework. Second, we introduce information about those fault segments by focusing on the fault rupture characteristics that affect the azimuthal variation of the ground motion spatial distribution (i.e., the source directivity effect) and its influence on the PSHA. An analytical model developed by Spudich and Chiou (2008) is used as a corrective factor that modifies the Next Generation Attenuation (NGA; Power et al. 2008) ground motion predictive equations (GMPEs) by introducing rupture-related parameters that are generally lumped together into the term "directivity effect". We used the GMPEs of Abrahamson and Silva (2008) and of Boore and Atkinson (2008); our results are given in terms of ground motion with a 10% probability of exceedance in 50 years (at several periods from 0.5 s to 10 s) on rock site conditions. The correction for directivity contributes significantly to the ratio between the seismic hazard computed with the directivity model and that computed with standard practice.
In particular, we drew on dynamic simulations from a previous study (Aochi & Ulrich, 2015) aimed at evaluating the seismic potential of the Marmara region to derive a statistical distribution for the nucleation position. Our results suggest that accounting for rupture-related parameters in a PSHA using deterministic information from dynamic models is feasible and, in particular, that the use of a non-uniform statistical distribution for the nucleation position has serious consequences for the hazard assessment. Since the directivity effect is conditional on the nucleation position, the hazard map changes with the assumptions made. A worst-case scenario (both faults rupturing towards the city of Istanbul) predicts up to a 25% change relative to the standard formulation at 2 s, increasing at longer periods. This result differs substantially if a deterministically based nucleation position is assumed.
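The core of folding a nucleation-position distribution into PSHA is a total-probability sum over positions. A minimal sketch, with entirely hypothetical conditional exceedance probabilities and weights (the paper derives its nucleation distribution from dynamic rupture simulations, and its directivity correction modifies the GMPEs themselves):

```python
def hazard_with_nucleation(p_exceed_given_nuc, weights):
    # Total probability over nucleation positions along the fault:
    # P(exceed) = sum_k w_k * P(exceed | nucleation at position k).
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * p for w, p in zip(weights, p_exceed_given_nuc))

# Hypothetical conditional exceedance probabilities for three nucleation
# positions: rupture directed away from, neutral to, and towards the site.
p_cond = [0.010, 0.020, 0.040]

p_uniform = hazard_with_nucleation(p_cond, [1 / 3, 1 / 3, 1 / 3])
p_skewed = hazard_with_nucleation(p_cond, [0.2, 0.3, 0.5])  # dynamics-informed
```

With the skewed (non-uniform) weighting favoring nucleation positions that direct rupture towards the site, the computed hazard exceeds the uniform-weight result, which is the qualitative effect the abstract describes.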

  1. Coupling Legacy and Contemporary Deterministic Codes to Goldsim for Probabilistic Assessments of Potential Low-Level Waste Repository Sites

    NASA Astrophysics Data System (ADS)

    Mattie, P. D.; Knowlton, R. G.; Arnold, B. W.; Tien, N.; Kuo, M.

    2006-12-01

    Sandia National Laboratories (Sandia), a U.S. Department of Energy national laboratory, has over 30 years of experience in radioactive waste disposal and is providing assistance internationally in a number of areas relevant to the safety assessment of radioactive waste disposal systems. International technology transfer efforts are often hampered by small budgets, time schedule constraints, and a lack of experienced personnel in countries with small radioactive waste disposal programs. In an effort to surmount these difficulties, Sandia has developed a system that utilizes a combination of commercially available codes and existing legacy codes for probabilistic safety assessment modeling that facilitates technology transfer and maximizes limited available funding. Numerous codes developed and endorsed by the United States Nuclear Regulatory Commission, and codes developed and maintained by the United States Department of Energy, are generally available to foreign countries after addressing import/export control and copyright requirements. From a programmatic view, it is easier to utilize existing codes than to develop new codes. From an economic perspective, it is not possible for most countries with small radioactive waste disposal programs to maintain complex software that meets the rigors of both domestic regulatory requirements and international peer review. Therefore, revitalization of deterministic legacy codes, as well as adaptation of contemporary deterministic codes, provides a credible and solid computational platform for constructing probabilistic safety assessment models. External model linkage capabilities in GoldSim, and the techniques applied to facilitate this process, will be presented using example applications, including Breach, Leach, and Transport-Multiple Species (BLT-MS), a U.S. NRC-sponsored code simulating release and transport of contaminants from a subsurface low-level waste disposal facility, used in a cooperative technology transfer project between Sandia National Laboratories and Taiwan's Institute of Nuclear Energy Research (INER) for the preliminary assessment of several candidate low-level waste repository sites. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.

  2. Sensitivity Tests Between Vs30 and Detailed Shear Wave Profiles Using 1D and 3D Site Response Analysis, Las Vegas Valley

    NASA Astrophysics Data System (ADS)

    West, Loyd Travis

    Site characterization is an essential aspect of hazard analysis, and the time-averaged shear-wave velocity to 30 m depth (Vs30) used for site classification has become a critical parameter in site-specific and probabilistic hazard analysis. Yet the general applicability of Vs30 can be ambiguous, and much debate and research surround its application. In 2007, in part to mitigate the uncertainty associated with the use of Vs30 in Las Vegas Valley, the Clark County Building Department (CCBD), in collaboration with the Nevada System of Higher Education (NSHE), embarked on an endeavor to map Vs30 using a geophysical methods approach for a site-class microzonation map of over 500 square miles (1500 km2) in southern Nevada. The resulting dataset, described by Pancha et al. (2017), contains over 10,700 1D shear-wave-velocity-depth profiles (SWVP) that constitute a rich database of 3D shear-wave velocity structure that is both laterally and vertically heterogeneous. This study capitalizes on the uniquely detailed and spatially dense CCBD database to carry out sensitivity tests on the detailed shear-wave velocity profiles and on Vs30, utilizing 1D and 3D site-response approaches. Sensitivity tests are derived from the 1D response of a single-degree-of-freedom oscillator and from 3D finite-difference deterministic simulations up to 15 Hz frequency using similar model parameters. Results demonstrate that the detailed SWVP amplify ground motions by roughly 50% over the simple Vs30 models above 4.6 Hz frequency. Numerical simulations also depict significant lateral resonance, focusing, and scattering of seismic energy attributed to the 3D small-scale heterogeneities of the shear-wave velocity profiles, resulting in a 70% increase in peak ground velocity. Additionally, PGV ratio maps clearly establish that the increased amplification from the detailed SWVPs is consistent throughout the model space. As a corollary, this study demonstrates the use of finite-difference numerical methods to simulate ground motions at high frequencies, up to 15 Hz.
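The Vs30 parameter discussed above is the standard time-averaged quantity Vs30 = 30 / Σ(d_i / v_i), where d_i and v_i are layer thicknesses and shear-wave velocities over the top 30 m. A minimal sketch with a hypothetical layered profile:

```python
def vs30(layers):
    """Time-averaged shear-wave velocity over the top 30 m.

    layers: list of (thickness_m, vs_m_per_s) from the surface down.
    Vs30 = 30 / sum(d_i / v_i), truncating the profile at 30 m depth.
    """
    depth = 0.0
    travel_time = 0.0
    for thickness, vs in layers:
        use = min(thickness, 30.0 - depth)  # clip the layer at 30 m
        if use <= 0.0:
            break
        travel_time += use / vs
        depth += use
    if depth < 30.0:
        raise ValueError("profile shallower than 30 m")
    return 30.0 / travel_time

# Hypothetical three-layer SWVP: 5 m at 200 m/s, 10 m at 350 m/s,
# then 600 m/s below (only the top 15 m of that layer counts).
profile = [(5.0, 200.0), (10.0, 350.0), (20.0, 600.0)]
# vs30(profile) ≈ 381.8 m/s
```

The time-averaging (harmonic, not arithmetic) means slow shallow layers dominate, which is one reason a single Vs30 value can mask the detailed profile structure the abstract shows to matter at high frequencies.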

  3. The relationship between stochastic and deterministic quasi-steady state approximations.

    PubMed

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R

    2015-11-23

    The quasi-steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of the QSSA and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using their deterministic counterparts, providing a concrete method for testing when non-elementary rate functions can be used in stochastic simulations.
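    A minimal illustration of the heuristic stochastic reduction described above: a birth-death process whose production propensity is a non-elementary Hill function, simulated with Gillespie's algorithm and compared against the deterministic fixed point. The model and all parameters are illustrative, not taken from the paper.

    ```python
    import random

    def hill_propensity(n, k, K, h):
        # Non-elementary (Hill-repression) production propensity, the kind of
        # reduced rate a deterministic QSSA produces.
        return k / (1.0 + (n / K) ** h)

    def ssa(k, K, h, gamma, t_end, rng):
        # Gillespie simulation of birth (Hill) / death (linear) dynamics,
        # returning the time-averaged copy number.
        t, n = 0.0, 0
        total_time, weighted = 0.0, 0.0
        while t < t_end:
            a1 = hill_propensity(n, k, K, h)
            a2 = gamma * n
            a0 = a1 + a2
            dt = rng.expovariate(a0)
            hold = min(dt, t_end - t)   # truncate the final holding time
            weighted += n * hold
            total_time += hold
            t += dt
            if t >= t_end:
                break
            n += 1 if rng.random() * a0 < a1 else -1
        return weighted / total_time

    def deterministic_fixed_point(k, K, h, gamma):
        # Bisection on k/(1+(x/K)^h) - gamma*x = 0 (monotone decreasing).
        lo, hi = 0.0, k / gamma
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if k / (1.0 + (mid / K) ** h) - gamma * mid > 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    fp = deterministic_fixed_point(50.0, 20.0, 2.0, 1.0)
    mean_n = ssa(50.0, 20.0, 2.0, 1.0, 2000.0, random.Random(1))
    ```

    For these parameters the stochastic time-average sits close to the deterministic fixed point; the paper's point is that this agreement is only guaranteed when the deterministic QSSA is accurate over the range of states the fluctuations visit.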

  4. Identifying variably saturated water-flow patterns in a steep hillslope under intermittent heavy rainfall

    USGS Publications Warehouse

    El-Kadi, A. I.; Torikai, J.D.

    2001-01-01

    The objective of this paper is to identify water-flow patterns in part of an active landslide through the use of numerical simulations and data obtained during a field study. The approaches adopted include measuring rainfall events and pore-pressure responses in both saturated and unsaturated soils at the site. To account for soil variability, the Richards equation is solved within deterministic and stochastic frameworks. The deterministic simulations considered average water-retention data, retention data adjusted to account for stones or cobbles, retention functions for a heterogeneous pore structure, and continuous retention functions for preferential flow. The stochastic simulations applied a Monte Carlo approach that considers the statistical distribution and autocorrelation of the saturated conductivity and its cross-correlation with the retention function. Although none of the models is capable of accurately predicting field measurements, appreciable improvement in accuracy was attained using the stochastic, preferential-flow, and heterogeneous pore-structure models. For the current study, continuum-flow models provide reasonable accuracy for practical purposes, although they are expected to be less accurate than multi-domain preferential-flow models.
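    The Monte Carlo framework above requires, at minimum, generating realizations of saturated conductivity with a prescribed distribution and autocorrelation; the flow solution itself needs a Richards-equation solver, which is omitted here. A sketch of the random-field step only, using an AR(1) scheme that yields an exponential autocorrelation in depth (all parameters are hypothetical):

    ```python
    import math
    import random

    def lognormal_ksat_profile(n, dz, mean_ln, sigma_ln, corr_len, rng):
        # AR(1) sequence in depth gives an exponential autocorrelation
        # exp(-dz / corr_len) between adjacent nodes; exponentiating the
        # Gaussian sequence yields a lognormal saturated conductivity.
        rho = math.exp(-dz / corr_len)
        z = rng.gauss(0.0, 1.0)  # start from the stationary distribution
        vals = []
        for _ in range(n):
            vals.append(math.exp(mean_ln + sigma_ln * z))
            z = rho * z + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        return vals
    ```

    Each call produces one realization; a Monte Carlo run would feed many such profiles through the flow solver and aggregate the simulated pore pressures.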

  5. Post-processing method for wind speed ensemble forecast using wind speed and direction

    NASA Astrophysics Data System (ADS)

    Sofie Eide, Siri; Bjørnar Bremnes, John; Steinsland, Ingelin

    2017-04-01

    Statistical methods are widely applied to enhance the quality of both deterministic and ensemble NWP forecasts. In many situations, like wind speed forecasting, most of the predictive information is contained in one variable in the NWP models. However, in statistical calibration of deterministic forecasts it is often seen that including more variables can further improve forecast skill. For ensembles this is rarely taken advantage of, mainly because it is generally not straightforward to include multiple variables. In this study, it is demonstrated how multiple variables can be included in Bayesian model averaging (BMA) by using a flexible regression method for estimating the conditional means. The method is applied to wind speed forecasting at 204 Norwegian stations based on wind speed and direction forecasts from the ECMWF ensemble system. At about 85% of the sites, the ensemble forecasts were improved in terms of CRPS by adding wind direction as a predictor compared to only using wind speed. On average the improvements were about 5%, occurring mainly in moderate to strong wind situations. For weak wind speeds, adding wind direction had a more or less neutral impact.
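    One way to read "a flexible regression method for estimating the conditional means" is a conditional-mean model in which wind direction enters through periodic (harmonic) terms. The sketch below rests on that assumption and fits by ordinary least squares on synthetic data, outside any full BMA framework; the harmonic encoding and all coefficients are illustrative.

    ```python
    import math

    def design_row(ws_forecast, wind_dir_deg):
        # Predictors: intercept, ensemble wind speed, and wind direction
        # encoded as one sin/cos harmonic so its effect is periodic in 360°.
        r = math.radians(wind_dir_deg)
        return [1.0, ws_forecast, math.sin(r), math.cos(r)]

    def fit_least_squares(rows, targets):
        # Solve the normal equations (X^T X) beta = X^T y by Gauss-Jordan
        # elimination (fine here: the normal matrix is positive definite).
        p = len(rows[0])
        A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
        b = [sum(r[i] * t for r, t in zip(rows, targets)) for i in range(p)]
        for i in range(p):
            piv = A[i][i]
            A[i] = [v / piv for v in A[i]]
            b[i] /= piv
            for k in range(p):
                if k != i:
                    f = A[k][i]
                    A[k] = [v - f * w for v, w in zip(A[k], A[i])]
                    b[k] -= f * b[i]
        return b

    # Synthetic check: recover known (hypothetical) coefficients exactly.
    true_beta = [1.0, 0.8, 0.5, -0.3]
    rows = [design_row(ws, wd) for ws in (2, 5, 8, 11) for wd in (0, 90, 180, 270)]
    targets = [sum(c * v for c, v in zip(true_beta, r)) for r in rows]
    beta = fit_least_squares(rows, targets)
    ```

    In a BMA setting, a regression of this kind would supply the conditional mean for each ensemble member's predictive component.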

  6. Stochastic empirical loading and dilution model for analysis of flows, concentrations, and loads of highway runoff constituents

    USGS Publications Warehouse

    Granato, Gregory E.; Jones, Susan C.

    2014-01-01

    In cooperation with FHWA, the U.S. Geological Survey developed the Stochastic Empirical Loading and Dilution Model (SELDM) to supersede the 1990 FHWA runoff quality model. The SELDM tool is designed to transform disparate and complex scientific data into meaningful information about the risks of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such measures for reducing those risks. The SELDM tool is easy to use because much of the information and data needed to run it are embedded in the model and obtained by defining the site location and five simple basin properties. Information and data from thousands of sites across the country were compiled to facilitate the use of the SELDM tool. A case study illustrates how to use the SELDM tool for conducting the types of sensitivity analyses needed to properly assess water quality risks. For example, the use of deterministic values to model upstream stormflows, instead of representative variations in prestorm flow and runoff, may substantially overestimate the proportion of highway runoff in downstream flows. Also, the risks of total phosphorus excursions are substantially affected by the selected criteria and the modeling methods used. For example, if a single deterministic concentration is used rather than a stochastic population of values to model upstream concentrations, then the percentage of water quality excursions in the downstream receiving waters may depend entirely on the selected upstream concentration.
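    The final point can be made concrete with a toy mass-balance mixing calculation: a fixed upstream concentration can yield zero downstream excursions while a stochastic upstream population with the same median yields many. All concentrations, ratios, and distribution parameters below are hypothetical, and the mixing rule is a simple flow-weighted average rather than SELDM's full stormflow machinery.

    ```python
    import math
    import random

    def downstream_excursions(upstream_conc_draws, runoff_conc, mix_ratio, criterion):
        # Flow-weighted mixing: downstream = (1-m)*upstream + m*runoff,
        # where m is the runoff fraction of total downstream flow.
        exceed = 0
        for cu in upstream_conc_draws:
            c_down = (1.0 - mix_ratio) * cu + mix_ratio * runoff_conc
            if c_down > criterion:
                exceed += 1
        return exceed / len(upstream_conc_draws)

    rng = random.Random(42)
    # Stochastic upstream population (lognormal; hypothetical parameters).
    stochastic = [rng.lognormvariate(math.log(0.02), 0.8) for _ in range(10000)]
    # Single deterministic upstream value: the median of that same population.
    deterministic = [0.02] * 10000

    p_stoch = downstream_excursions(stochastic, 0.25, 0.1, 0.05)
    p_det = downstream_excursions(deterministic, 0.25, 0.1, 0.05)
    ```

    With the deterministic median, no mixture exceeds the criterion (p_det is exactly zero); with the stochastic population, roughly a third of the storm events do, illustrating how the choice of upstream representation can dominate the computed excursion percentage.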

  7. Calibration of semi-stochastic procedure for simulating high-frequency ground motions

    USGS Publications Warehouse

    Seyhan, Emel; Stewart, Jonathan P.; Graves, Robert

    2013-01-01

    Broadband ground motion simulation procedures typically utilize physics-based modeling at low frequencies, coupled with semi-stochastic procedures at high frequencies. The high-frequency procedure considered here combines deterministic Fourier amplitude spectra (dependent on source, path, and site models) with random phase. Previous work showed that high-frequency intensity measures from this simulation methodology attenuate faster with distance and have lower intra-event dispersion than in empirical equations. We address these issues by increasing crustal damping (Q) to reduce distance attenuation bias and by introducing random site-to-site variations to Fourier amplitudes using a lognormal standard deviation ranging from 0.45 for Mw < 7 to zero for Mw 8. Ground motions simulated with the updated parameterization exhibit significantly reduced distance attenuation bias and revised dispersion terms are more compatible with those from empirical models but remain lower at large distances (e.g., > 100 km).
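    The magnitude-dependent site-to-site variability described above can be sketched as a lognormal multiplicative factor on the Fourier amplitudes, with the standard deviation tapering from 0.45 at Mw 7 to zero at Mw 8. The linear form of the taper, and applying one factor to the whole spectrum rather than per frequency, are assumptions of this sketch.

    ```python
    import math
    import random

    def site_sigma(mw):
        # Lognormal standard deviation of the site-to-site factor:
        # 0.45 for Mw <= 7, tapering (linearly, by assumption) to 0 at Mw >= 8.
        if mw <= 7.0:
            return 0.45
        if mw >= 8.0:
            return 0.0
        return 0.45 * (8.0 - mw)

    def perturb_fourier_amps(amps, mw, rng):
        # Draw one multiplicative lognormal factor and apply it to the
        # whole amplitude spectrum (per-spectrum rather than per-frequency).
        factor = math.exp(rng.gauss(0.0, site_sigma(mw)))
        return [a * factor for a in amps]
    ```

    At Mw 8 the sigma is zero, so the spectrum passes through unchanged; at smaller magnitudes the random factor introduces the site-to-site dispersion the calibration was missing.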

  8. Deterministic Walks with Choice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.

    2014-01-10

    This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
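    A common concrete instance of deterministic movement with bounded local memory is the rotor-router rule: each node serves its outgoing directions in a fixed cyclic order. The sketch below is a generic illustration on a torus, not the specific token-passing protocols analyzed in the paper.

    ```python
    def rotor_router_walk(width, height, start, steps):
        # Deterministic walk on a width x height torus: each node remembers
        # the index of the next direction to use and advances it cyclically.
        directions = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # E, N, W, S
        rotor = {}  # node -> index of the next direction to serve
        x, y = start
        path = [(x, y)]
        for _ in range(steps):
            i = rotor.get((x, y), 0)
            dx, dy = directions[i]
            rotor[(x, y)] = (i + 1) % 4  # bounded memory: one index per node
            x, y = (x + dx) % width, (y + dy) % height  # toroidal wrap
            path.append((x, y))
        return path

    p = rotor_router_walk(4, 4, (0, 0), 5)
    ```

    Unlike a random walk, repeated runs from the same configuration reproduce the same trajectory exactly, which is what makes such walks amenable to the kind of token-passing analysis the abstract mentions.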

  9. Recharge at the Hanford Site: Status report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gee, G.W.

    A variety of field programs designed to evaluate recharge and other water balance components, including precipitation, infiltration, evaporation, and water storage changes, have been carried out at the Hanford Site since 1970. Data from these programs have indicated that a wide range of recharge rates can occur depending upon specific site conditions. Present evidence suggests that minimum recharge occurs where soils are fine-textured and surfaces are vegetated with deep-rooted plants. Maximum recharge occurs where coarse soils or gravels exist at the surface and soils are kept bare. Recharge can occur in areas where shallow-rooted plants dominate the surface, particularly where soils are coarse-textured. Recharge estimates have been made for the site using simulation models. A US Geological Survey model that attempts to account for climate variability, soil storage parameters, and plant factors has calculated recharge values ranging from near zero to an average of about 1 cm/yr for the Hanford Site. UNSAT-H, a deterministic model developed for the site, appears to be the best code available for estimating recharge on a site-specific basis. Appendix I contains precipitation data from January 1979 to June 1987. 42 refs., 11 figs., 11 tabs.

  10. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V. [Oak Ridge, TN]; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan [Ithaca, NY]; Merkulov, Vladimir I. [Knoxville, TN]; Doktycz, Mitchel J. [Knoxville, TN]; Lowndes, Douglas H. [Knoxville, TN]; Simpson, Michael L. [Knoxville, TN]

    2011-05-17

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.

  11. Least Squares Best Fit Method for the Three Parameter Weibull Distribution: Analysis of Tensile and Bend Specimens with Volume or Surface Flaw Failure

    NASA Technical Reports Server (NTRS)

    Gross, Bernard

    1996-01-01

    Material characterization parameters obtained from naturally flawed specimens are necessary for the reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three-parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume- or surface-flawed specimens subjected to pure tension, pure bending, or four-point or three-point loading. Several illustrative example problems are provided.
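    A minimal sketch of least-squares fitting for the three-parameter Weibull model: for a fixed location parameter gamma, median-rank plotting positions linearize the CDF F = 1 - exp(-((s - gamma)/eta)^m), and a simple grid search over gamma picks the best linear fit. The grid resolution and the median-rank formula are implementation choices of this sketch, not taken from the report.

    ```python
    import math

    def linear_fit(xs, ys):
        # Ordinary least squares: slope, intercept, residual sum of squares.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        slope = sxy / sxx
        intercept = my - slope * mx
        sse = sum((y - slope * x - intercept) ** 2 for x, y in zip(xs, ys))
        return slope, intercept, sse

    def fit_weibull3(strengths):
        # Linearization: ln(-ln(1-F)) = m*ln(s - gamma) - m*ln(eta),
        # with F from median ranks F_i = (i - 0.3) / (n + 0.4).
        s = sorted(strengths)
        n = len(s)
        ys = [math.log(-math.log(1.0 - (i - 0.3) / (n + 0.4)))
              for i in range(1, n + 1)]
        best = None
        for k in range(200):
            gamma = s[0] * k / 200.0  # gamma must lie below the smallest datum
            xs = [math.log(v - gamma) for v in s]
            m, b, sse = linear_fit(xs, ys)
            if best is None or sse < best[0]:
                best = (sse, m, b, gamma)
        _, m, b, gamma = best
        eta = math.exp(-b / m)
        return m, eta, gamma
    ```

    On synthetic strengths drawn as exact quantiles of a Weibull with hypothetical parameters (m = 5, eta = 100, gamma = 50), the fit recovers the shape, scale, and threshold to within the resolution of the plotting positions and the gamma grid.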

  12. Phase ordering in disordered and inhomogeneous systems

    NASA Astrophysics Data System (ADS)

    Corberi, Federico; Zannetti, Marco; Lippiello, Eugenio; Burioni, Raffaella; Vezzani, Alessandro

    2015-06-01

    We study numerically the coarsening dynamics of the Ising model on a regular lattice with random bonds and on deterministic fractal substrates. We propose a unifying interpretation of the phase-ordering processes based on two classes of dynamical behavior characterized by different growth laws of the ordered domain size, namely logarithmic or power-law growth. It is conjectured that the interplay between these dynamical classes is regulated by the same topological feature that governs the presence or absence of a finite-temperature phase transition.

  13. Correlations in electrically coupled chaotic lasers.

    PubMed

    Rosero, E J; Barbosa, W A S; Martinez Avila, J F; Khoury, A Z; Rios Leite, J R

    2016-09-01

    We show how two electrically coupled semiconductor lasers having optical feedback can present simultaneous antiphase correlated fast power fluctuations, and strong in-phase synchronized spikes of chaotic power drops. This quite counterintuitive phenomenon is demonstrated experimentally and confirmed by numerical solutions of a deterministic dynamical system of rate equations. The occurrence of negative and positive cross correlation between parts of a complex system according to time scales, as proved in our simple arrangement, is relevant for the understanding and characterization of collective properties in complex networks.

  14. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.

    In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.

  15. Nuclear test ban treaty verification: Improving test ban monitoring with empirical and model-based signal processing

    DOE PAGES

    Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.; ...

    2012-05-01

    In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.

  16. Photon-photon entanglement with a single trapped atom.

    PubMed

    Weber, B; Specht, H P; Müller, T; Bochmann, J; Mücke, M; Moehring, D L; Rempe, G

    2009-01-23

    An experiment is performed where a single rubidium atom trapped within a high-finesse optical cavity emits two independently triggered entangled photons. The entanglement is mediated by the atom and is characterized both by a Bell inequality violation of S=2.5 and by full quantum-state tomography, resulting in a fidelity exceeding F=90%. The combination of cavity-QED and trapped-atom techniques makes our protocol inherently deterministic, an essential step for the generation of scalable entanglement between the nodes of a distributed quantum network.

  17. Scattering theory of stochastic electromagnetic light waves.

    PubMed

    Wang, Tao; Zhao, Daomu

    2010-07-15

    We generalize scattering theory to stochastic electromagnetic light waves. It is shown that when a stochastic electromagnetic light wave is scattered from a medium, the properties of the scattered field can be characterized by a 3×3 cross-spectral density matrix. An example of scattering of a spatially coherent electromagnetic light wave from a deterministic medium is discussed. Some interesting phenomena emerge, including changes in the spectral degree of coherence and in the spectral degree of polarization of the scattered field.

  18. Human brain detects short-time nonlinear predictability in the temporal fine structure of deterministic chaotic sounds

    NASA Astrophysics Data System (ADS)

    Itoh, Kosuke; Nakada, Tsutomu

    2013-04-01

    Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.

  19. A deterministic particle method for one-dimensional reaction-diffusion equations

    NASA Technical Reports Server (NTRS)

    Mascagni, Michael

    1995-01-01

    We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider time-explicit and time-implicit methods for this system of ordinary differential equations and study Picard and Newton iterations for the solution of the implicit system. Next we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.

  20. Characterizing uncertainty and variability in physiologically based pharmacokinetic models: state of the science and needs for research and implementation.

    PubMed

    Barton, Hugh A; Chiu, Weihsueh A; Setzer, R Woodrow; Andersen, Melvin E; Bailer, A John; Bois, Frédéric Y; Dewoskin, Robert S; Hays, Sean; Johanson, Gunnar; Jones, Nancy; Loizou, George; Macphail, Robert C; Portier, Christopher J; Spendiff, Martin; Tan, Yu-Mei

    2007-10-01

    Physiologically based pharmacokinetic (PBPK) models are used in mode-of-action based risk and safety assessments to estimate internal dosimetry in animals and humans. When used in risk assessment, these models can provide a basis for extrapolating between species, doses, and exposure routes or for justifying nondefault values for uncertainty factors. Characterization of uncertainty and variability is increasingly recognized as important for risk assessment; this represents a continuing challenge for both PBPK modelers and users. Current practices show significant progress in specifying deterministic biological models and nondeterministic (often statistical) models, estimating parameters using diverse data sets from multiple sources, using them to make predictions, and characterizing uncertainty and variability of model parameters and predictions. The International Workshop on Uncertainty and Variability in PBPK Models, held 31 Oct-2 Nov 2006, identified the state-of-the-science, needed changes in practice and implementation, and research priorities. For the short term, these include (1) multidisciplinary teams to integrate deterministic and nondeterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through improved documentation of model structure(s), parameter values, sensitivity and other analyses, and supporting, discrepant, or excluded data. 
Longer-term needs include (1) theoretical and practical methodological improvements for nondeterministic/statistical modeling; (2) better methods for evaluating alternative model structures; (3) peer-reviewed databases of parameters and covariates, and their distributions; (4) expanded coverage of PBPK models across chemicals with different properties; and (5) training and reference materials, such as case studies, bibliographies/glossaries, model repositories, and enhanced software. The multidisciplinary dialogue initiated by this Workshop will foster the collaboration, research, data collection, and training necessary to make characterizing uncertainty and variability a standard practice in PBPK modeling and risk assessment.

  1. Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System

    ERIC Educational Resources Information Center

    Maiti, Alakes; Samanta, G. P.

    2005-01-01

    This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…
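A minimal sketch of the deterministic part of such a system, assuming an illustrative prey-dependent Holling type II response (the paper's exact response function and parameter values are not reproduced here):

```python
def simulate(prey0, pred0, t_end, dt=0.001,
             r=1.0, K=2.0, a=1.0, h=0.5, e=0.6, d=0.3):
    """Forward-Euler integration of a predator-prey model with a
    prey-dependent Holling type II functional response a*x/(1+a*h*x).
    All parameter values are illustrative, not from the paper."""
    x, y = prey0, pred0
    for _ in range(int(t_end / dt)):
        response = a * x / (1.0 + a * h * x)
        dx = r * x * (1.0 - x / K) - response * y   # logistic prey growth
        dy = e * response * y - d * y               # predator conversion/death
        x, y = x + dt * dx, y + dt * dy
    return x, y

x_end, y_end = simulate(1.0, 0.5, 50.0)
# Both populations remain positive and bounded, illustrating the
# uniform boundedness/permanence properties analyzed in the paper.
```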

  2. ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.

    PubMed

    Morota, Gota

    2017-12-20

    Deterministic formulas for the accuracy of genomic predictions highlight the relationships between prediction accuracy and the factors influencing it, prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus genetic factors impacting prediction accuracy, while requiring only mouse navigation in a web browser. ShinyGPAS is available at: https://chikudaisei.shinyapps.io/shinygpas/. ShinyGPAS is a Shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open source software and is hosted online as a freely available web-based resource with an intuitive graphical user interface.
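As an illustration of the kind of deterministic formula such a simulator visualizes, the widely used Daetwyler-type expression relates expected accuracy to training-population size, heritability, and the effective number of independent chromosome segments; whether ShinyGPAS uses exactly this parameterization is an assumption here.

```python
import math

def daetwyler_accuracy(n, h2, me):
    """Expected genomic prediction accuracy from a Daetwyler-type
    deterministic formula, r = sqrt(N h^2 / (N h^2 + Me)), where N is
    the training size, h2 the heritability, and Me the effective number
    of independent chromosome segments."""
    return math.sqrt(n * h2 / (n * h2 + me))

# Accuracy rises with training size at fixed heritability:
acc_small = daetwyler_accuracy(n=1000, h2=0.5, me=5000)
acc_large = daetwyler_accuracy(n=10000, h2=0.5, me=5000)
```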

  3. Randomized central limit theorems: A unified theory

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles’ aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles’ extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic—scaling all ensemble components by a common deterministic scale. However, there are “random environment” settings in which the underlying scaling schemes are stochastic—scaling the ensemble components by different random scales. Examples of such settings include Holtsmark’s law for gravitational fields and the Stretched Exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs)—in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes—and present “randomized counterparts” to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.

  4. A robust approach to chance constrained optimal power flow with renewable generation

    DOE PAGES

    Lubin, Miles; Dvorkin, Yury; Backhaus, Scott N.

    2016-09-01

    Optimal Power Flow (OPF) dispatches controllable generation at minimum cost subject to operational constraints on generation and transmission assets. The uncertainty and variability of intermittent renewable generation are challenging current deterministic OPF approaches. Recent formulations of OPF use chance constraints to limit the risk from renewable generation uncertainty; however, these new approaches typically assume that the probability distributions which characterize the uncertainty and variability are known exactly. We formulate a robust chance constrained (RCC) OPF that accounts for uncertainty in the parameters of these probability distributions by allowing them to be within an uncertainty set. The RCC OPF is solved using a cutting-plane algorithm that scales to large power systems. We demonstrate the RCC OPF on a modified model of the Bonneville Power Administration network, which includes 2209 buses and 176 controllable generators. In conclusion, deterministic, chance constrained (CC), and RCC OPF formulations are compared using several metrics including cost of generation, area control error, ramping of controllable generators, and occurrence of transmission line overloads, as well as the respective computational performance.
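The basic mechanics of a chance constraint, and of robustifying it against distribution-parameter uncertainty, can be sketched for the Gaussian case (a textbook reformulation, not the paper's full cutting-plane formulation; all numbers are illustrative):

```python
from statistics import NormalDist

def deterministic_margin(limit, sigma, epsilon):
    """A Gaussian chance constraint P(flow + xi <= limit) >= 1 - eps,
    with xi ~ N(0, sigma^2), is equivalent to the deterministic
    constraint flow <= limit - z_{1-eps} * sigma. Returns the
    tightened limit."""
    z = NormalDist().inv_cdf(1.0 - epsilon)
    return limit - z * sigma

tight = deterministic_margin(limit=100.0, sigma=5.0, epsilon=0.05)
# Robust variant: if sigma itself is only known to lie in [5, 8],
# guard against the worst case within the uncertainty set.
robust = deterministic_margin(limit=100.0, sigma=8.0, epsilon=0.05)
```

The robust version simply enforces the chance constraint for the worst-case parameter in the uncertainty set, which is the idea the RCC formulation generalizes.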

  5. Integrated probabilistic risk assessment for nanoparticles: the case of nanosilica in food.

    PubMed

    Jacobs, Rianne; van der Voet, Hilko; Ter Braak, Cajo J F

    Insight into risks of nanotechnology and the use of nanoparticles is an essential condition for the social acceptance and safe use of nanotechnology. One of the problems with which the risk assessment of nanoparticles is faced is the lack of data, resulting in uncertainty in the risk assessment. We attempt to quantify some of this uncertainty by expanding a previous deterministic study on nanosilica (5-200 nm) in food into a fully integrated probabilistic risk assessment. We use the integrated probabilistic risk assessment method in which statistical distributions and bootstrap methods are used to quantify uncertainty and variability in the risk assessment. Due to the large amount of uncertainty present, this probabilistic method, which separates variability from uncertainty, contributed to a more comprehensible risk assessment. We found that quantifying the uncertainties did not increase the perceived risk relative to the outcome of the deterministic study. We pinpointed particular aspects of the hazard characterization that contributed most to the total uncertainty in the risk assessment, suggesting that further research would benefit most from obtaining more reliable data on those aspects.
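The integrated probabilistic idea of separating uncertainty from variability is commonly realized as a nested (two-dimensional) Monte Carlo; the sketch below uses illustrative distributions, not the nanosilica data.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_dimensional_mc(n_outer=200, n_inner=2000):
    """Nested Monte Carlo separating uncertainty from variability.
    Outer loop: sample uncertain parameters (here, an uncertain
    population-mean exposure on the log scale). Inner loop: sample
    inter-individual variability given those parameters."""
    p99 = np.empty(n_outer)
    for i in range(n_outer):
        mu = rng.normal(1.0, 0.2)                    # parameter uncertainty
        exposures = rng.lognormal(mu, 0.5, n_inner)  # variability across people
        p99[i] = np.percentile(exposures, 99)        # a high-end individual
    # Uncertainty distribution of the variability percentile:
    return np.percentile(p99, [5, 50, 95])

lo, med, hi = two_dimensional_mc()
```

The spread between `lo` and `hi` expresses uncertainty about the high-end exposure, while the 99th percentile itself expresses variability between individuals.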

  6. The Endogenous GRP78 Interactome in Human Head and Neck Cancers: A Deterministic Role of Cell Surface GRP78 in Cancer Stemness.

    PubMed

    Chen, Hsin-Ying; Chang, Joseph Tung-Chieh; Chien, Kun-Yi; Lee, Yun-Shien; You, Guo-Rung; Cheng, Ann-Joy

    2018-01-11

    Cell surface glucose regulated protein 78 (GRP78), an endoplasmic reticulum (ER) chaperone, has been suggested to be a cancer stem cell marker, but the influence of this molecule on cancer stemness is poorly characterized. In this study, we developed a mass spectrometry platform to detect the endogenous interactome of GRP78 and investigated its role in cancer stemness. The interactome results showed that cell surface GRP78 associates with multiple molecules. The heterogeneity of head and neck cancer cell lines (OECM1, FaDu, and BM2) was investigated by sorting cells according to the cell surface expression levels of GRP78 and the GRP78 interactome protein Progranulin. The four sorted cell groups exhibited distinct cell cycle distributions, asymmetric/symmetric cell divisions, and different relative expression levels of stemness markers. Our results demonstrate that cell surface GRP78 promotes cancer stemness, whereas it drives cells toward a non-stemlike phenotype when it chaperones Progranulin. We conclude that cell surface GRP78 is a chaperone exerting a deterministic influence on cancer stemness.

  7. Deterministic Line-Shape Programming of Silicon Nanowires for Extremely Stretchable Springs and Electronics.

    PubMed

    Xue, Zhaoguo; Sun, Mei; Dong, Taige; Tang, Zhiqiang; Zhao, Yaolong; Wang, Junzhuan; Wei, Xianlong; Yu, Linwei; Chen, Qing; Xu, Jun; Shi, Yi; Chen, Kunji; Roca I Cabarrocas, Pere

    2017-12-13

    Line-shape engineering is a key strategy to endow extra stretchability to 1D silicon nanowires (SiNWs) grown with self-assembly processes. We here demonstrate a deterministic line-shape programming of in-plane SiNWs into extremely stretchable springs or arbitrary 2D patterns with the aid of indium droplets that absorb amorphous Si precursor thin film to produce ultralong c-Si NWs along programmed step edges. A reliable and faithful single run growth of c-SiNWs over turning tracks with different local curvatures has been established, while high resolution transmission electron microscopy analysis reveals a high quality monolike crystallinity in the line-shaped engineered SiNW springs. Excitingly, in situ scanning electron microscopy stretching and current-voltage characterizations also demonstrate a superelastic and robust electric transport carried by the SiNW springs even under large stretching of more than 200%. We suggest that this highly reliable line-shape programming approach holds a strong promise to extend the mature c-Si technology into the development of a new generation of high performance biofriendly and stretchable electronics.

  8. Measurement Matrix Design for Phase Retrieval Based on Mutual Information

    NASA Astrophysics Data System (ADS)

    Shlezinger, Nir; Dabora, Ron; Eldar, Yonina C.

    2018-01-01

    In phase retrieval problems, a signal of interest (SOI) is reconstructed based on the magnitude of a linear transformation of the SOI observed with additive noise. The linear transform is typically referred to as a measurement matrix. Many works on phase retrieval assume that the measurement matrix is a random Gaussian matrix, which, in the noiseless scenario with sufficiently many measurements, guarantees invertibility of the transformation between the SOI and the observations, up to an inherent phase ambiguity. However, in many practical applications, the measurement matrix corresponds to an underlying physical setup, and is therefore deterministic, possibly with structural constraints. In this work we study the design of deterministic measurement matrices, based on maximizing the mutual information between the SOI and the observations. We characterize necessary conditions for the optimality of a measurement matrix, and analytically obtain the optimal matrix in the low signal-to-noise ratio regime. Practical methods for designing general measurement matrices and masked Fourier measurements are proposed. Simulation tests demonstrate the performance gain achieved by the proposed techniques compared to random Gaussian measurements for various phase recovery algorithms.
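The measurement model described above can be sketched as follows; the specific masks are arbitrary illustrations of deterministic, structured measurements, not the optimized designs of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_fourier_measurements(x, masks, noise_std=0.01):
    """Noisy magnitude measurements y_k = |F(m_k * x)| + w for a set
    of deterministic masks m_k: a structured alternative to a random
    Gaussian measurement matrix."""
    out = []
    for m in masks:
        y = np.abs(np.fft.fft(m * x)) + rng.normal(0.0, noise_std, len(x))
        out.append(y)
    return np.concatenate(out)

n = 64
x = rng.standard_normal(n)
masks = [np.ones(n),
         np.cos(2 * np.pi * np.arange(n) / n),
         np.sign(np.sin(4 * np.pi * np.arange(n) / n + 0.1))]
y = masked_fourier_measurements(x, masks)
# Inherent phase ambiguity: x and -x produce identical noiseless magnitudes.
```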

  9. Chaotic behavior in the locomotion of Amoeba proteus.

    PubMed

    Miyoshi, H; Kagawa, Y; Tsuchiya, Y

    2001-01-01

    The locomotion of Amoeba proteus has been investigated with algorithms for evaluating the correlation dimension and Lyapunov spectrum developed in the field of nonlinear science. These parameters indicate whether the random behavior of a system is stochastic or deterministic. For the analysis of the nonlinear parameters, n-dimensional time-delayed vectors were reconstructed from a time series of the periphery and area of A. proteus images captured with a charge-coupled-device camera, which characterize its random motion. The correlation dimension analysis showed that the random motion of A. proteus is governed by only 3-4 macrovariables, even though the system is a complex system composed of many degrees of freedom. Furthermore, the analysis of the Lyapunov spectrum showed that its largest exponent takes positive values. These results indicate that the random behavior of A. proteus is deterministic chaotic motion on a low-dimensional attractor. It may be important for the elucidation of cell locomotion to take account of nonlinear interactions among a small number of dynamical processes such as the sol-gel transformation, the cytoplasmic streaming, and the related chemical reactions occurring in the cell.
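The time-delayed vector reconstruction used in such analyses is standard; a minimal sketch, with a logistic-map series standing in for the measured periphery/area signals:

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Reconstruct dim-dimensional time-delayed vectors
    v_t = [x(t), x(t+tau), ..., x(t+(dim-1)*tau)] from a scalar time
    series: the standard first step before estimating the correlation
    dimension or Lyapunov spectrum."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

# Example: embed a logistic-map series in 3 dimensions with lag 1.
x = [0.4]
for _ in range(999):
    x.append(3.9 * x[-1] * (1 - x[-1]))
vectors = delay_embed(x, dim=3, tau=1)
```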

  10. Earthquake induced liquefaction hazard, probability and risk assessment in the city of Kolkata, India: its historical perspective and deterministic scenario

    NASA Astrophysics Data System (ADS)

    Nath, Sankar Kumar; Srivastava, Nishtha; Ghatak, Chitralekha; Adhikari, Manik Das; Ghosh, Ambarish; Sinha Ray, S. P.

    2018-01-01

    Liquefaction-induced ground failure is among the leading causes of infrastructure damage due to the impact of large earthquakes in unconsolidated, non-cohesive, water-saturated alluvial terrains. The city of Kolkata is located on the potentially liquefiable alluvial fan deposits of the Ganga-Brahmaputra-Meghna Delta system, with a subsurface litho-stratigraphic sequence comprising varying percentages of clay, cohesionless silt, sand, and gravel interbedded with decomposed wood and peat. Additionally, the region has moderately shallow groundwater conditions, especially in the post-monsoon seasons. In view of the burgeoning population, there has been unplanned expansion of settlements under hazardous geological, geomorphological, and hydrological conditions, exposing the city to severe liquefaction hazard. The 1897 Shillong and 1934 Bihar-Nepal earthquakes, both of Mw 8.1, reportedly induced Modified Mercalli intensities of IV-V and VI-VII, respectively, in the city, triggering widespread to sporadic liquefaction with surface manifestations of sand boils, lateral spreading, ground subsidence, etc., thus posing a strong case for liquefaction potential analysis in the terrain. With the motivation of assessing the seismic hazard, vulnerability, and risk of the city of Kolkata through consorted federal funding stipulated for all the metros and upstart urban centers in India located in BIS seismic zones III, IV, and V with populations of more than one million, an attempt has been made here to understand the liquefaction susceptibility of Kolkata under earthquake loading employing modern multivariate techniques, and to predict a deterministic liquefaction scenario of the city in the event of a probabilistic seismic hazard condition with 10% probability of exceedance in 50 years and a return period of 475 years. We conducted in-depth geophysical and geotechnical investigations in the city encompassing a 435 km2 area.
The stochastically synthesized bedrock ground motions for the 1897 and 1934 earthquakes, propagated through non-linear analysis of local site conditions with the DEEPSOIL geotechnical analysis package, yield surface-level peak ground accelerations of the order of 0.05-0.14 g for the 1934 Bihar-Nepal earthquake and 0.03-0.11 g for the 1897 Shillong earthquake. The factor of safety (FOS) against liquefaction, the probability of liquefaction (PL), the liquefaction potential index (LPI), and the liquefaction risk index are estimated under the influence of these two earthquakes, wherein the city is classified into severe (LPI > 15), high (5 < LPI ≤ 15), moderate (0 < LPI ≤ 5), and non-liquefiable (LPI = 0) susceptibility zones. The 1934 Bihar-Nepal earthquake induced moderate to severe liquefaction hazard in the city, mostly in the deltaic plain and interdistributary marsh geomorphologic units, with 13.5% of sites exhibiting moderate hazard (median LPI of 1.8), 8.5% of sites high hazard (median LPI of 9.1), and 4% of sites severe hazard (median LPI of 18.9); the 1897 Shillong earthquake induced mostly non-liquefaction conditions, with very few sites depicting moderate or high liquefaction hazard. A conservative liquefaction hazard scenario of the city, on the other hand, estimated through the deterministic approach for 10% probability of exceedance in 50 years, predicts a high hazard zone in the 3.5-19 m depth range with FOS < 1 and PL > 65%, comprising coarse-grained sediments of sand, silty sand, and clayey silty sand, mostly in the deltaic plain geomorphologic unit, with 39.1% of sites depicting severe liquefaction hazard (median LPI of 28.3). A non-linear regression analysis of both the historical and deterministic liquefaction scenarios in the PL versus LPI domain with ±1 standard deviation confidence bounds generated a cubic polynomial relationship between the two liquefaction hazard proxies.
This study, considered a benchmark for other cities in the country and elsewhere, forms an integral part of the seismic microzonation endeavors undertaken in earthquake-prone countries worldwide.
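The LPI values quoted above are conventionally computed with the Iwasaki-type depth-weighted integral; a minimal sketch, assuming the standard weighting w(z) = 10 - 0.5z over the top 20 m (the paper's exact implementation may differ).

```python
import numpy as np

def liquefaction_potential_index(depths, fos):
    """Iwasaki-type liquefaction potential index over the top 20 m:
    LPI = integral of F(z) * w(z) dz, where F(z) = 1 - FOS(z) when
    FOS < 1 (0 otherwise) and w(z) = 10 - 0.5 z is the depth weight."""
    depths = np.asarray(depths, dtype=float)
    fos = np.asarray(fos, dtype=float)
    severity = np.where(fos < 1.0, 1.0 - fos, 0.0)
    weights = np.clip(10.0 - 0.5 * depths, 0.0, None)
    g = severity * weights
    # Trapezoidal integration over depth.
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(depths)))

# Illustrative profile: FOS ~ 0.7 between 3.5 and 10 m depth, safe elsewhere.
z = np.linspace(0.0, 20.0, 401)
f = np.where((z >= 3.5) & (z <= 10.0), 0.7, 1.2)
lpi = liquefaction_potential_index(z, f)  # lands in the "high" class (5-15)
```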

  11. Tsunamigenic scenarios for southern Peru and northern Chile seismic gap: Deterministic and probabilistic hybrid approach for hazard assessment

    NASA Astrophysics Data System (ADS)

    González-Carrasco, J. F.; Gonzalez, G.; Aránguiz, R.; Yanez, G. A.; Melgar, D.; Salazar, P.; Shrivastava, M. N.; Das, R.; Catalan, P. A.; Cienfuegos, R.

    2017-12-01

    The definition of plausible worst-case tsunamigenic scenarios plays a relevant role in tsunami hazard assessment focused on emergency preparedness and evacuation planning for coastal communities. During the last decade, the occurrence of major and moderate tsunamigenic earthquakes along worldwide subduction zones has given clues about critical parameters involved in near-field tsunami inundation processes, i.e. slip spatial distribution, shelf resonance of edge waves, and local geomorphology effects. To analyze the effects of these seismic and hydrodynamic variables on the epistemic uncertainty of coastal inundation, we implement a combined methodology using deterministic and probabilistic approaches to construct 420 tsunamigenic scenarios in a mature seismic gap of southern Peru and northern Chile, extending from 17ºS to 24ºS. The deterministic scenarios are calculated using a regional distribution of trench-parallel gravity anomaly (TPGA) and trench-parallel topography anomaly (TPTA), the three-dimensional Slab 1.0 worldwide subduction zone geometry model, and published interseismic coupling (ISC) distributions. As a result, we find four high slip-deficit zones interpreted as major seismic asperities of the gap, used in a hierarchical tree scheme to generate ten tsunamigenic scenarios with magnitudes ranging from Mw 8.4 to Mw 8.9. Additionally, we construct ten homogeneous slip scenarios as an inundation baseline. For the probabilistic approach, we implement a Karhunen-Loève expansion to generate 400 stochastic tsunamigenic scenarios over the maximum extension of the gap, in the same magnitude range as the deterministic sources. All the scenarios are simulated with the non-hydrostatic tsunami model Neowave 2D, using a classical nesting scheme, for five major coastal cities in northern Chile (Arica, Iquique, Tocopilla, Mejillones and Antofagasta), obtaining high resolution data of inundation depth, runup, coastal currents and sea level elevation.
The probabilistic kinematic tsunamigenic scenarios yield more realistic slip patterns, similar in maximum slip to those of major past earthquakes. For all studied sites, the location of peak slip and shelf resonance are first-order controls on the computed coastal inundation depths.
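The Karhunen-Loève construction of stochastic slip scenarios can be sketched in one dimension, assuming an illustrative exponential covariance (the study's actual covariance model and discretization are not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(2)

def kl_slip_realization(x, corr_len=40.0, sigma=1.0, n_modes=10):
    """One stochastic realization of a 1-D slip perturbation via a
    Karhunen-Loeve expansion of the exponential covariance
    C(x1, x2) = sigma^2 * exp(-|x1 - x2| / corr_len)."""
    cov = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    eigval, eigvec = np.linalg.eigh(cov)
    # Keep the n_modes largest eigenpairs (eigh sorts ascending).
    lam = np.clip(eigval[-n_modes:], 0.0, None)
    phi = eigvec[:, -n_modes:]
    xi = rng.standard_normal(n_modes)       # independent standard normals
    return phi @ (np.sqrt(lam) * xi)

x = np.linspace(0.0, 200.0, 101)            # km along strike (illustrative)
slip_perturbation = kl_slip_realization(x)
```

In practice the perturbation is added to a mean slip model and rescaled to the target moment magnitude before tsunami simulation.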

  12. The Validity of Quasi-Steady-State Approximations in Discrete Stochastic Simulations

    PubMed Central

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R.

    2014-01-01

    In biochemical networks, reactions often occur on disparate timescales and can be characterized as either fast or slow. The quasi-steady-state approximation (QSSA) utilizes timescale separation to project models of biochemical networks onto lower-dimensional slow manifolds. As a result, fast elementary reactions are not modeled explicitly, and their effect is captured by nonelementary reaction-rate functions (e.g., Hill functions). The accuracy of the QSSA applied to deterministic systems depends on how well timescales are separated. Recently, it has been proposed to use the nonelementary rate functions obtained via the deterministic QSSA to define propensity functions in stochastic simulations of biochemical networks. In this approach, termed the stochastic QSSA, fast reactions that are part of nonelementary reactions are not simulated, greatly reducing computation time. However, it is unclear when the stochastic QSSA provides an accurate approximation of the original stochastic simulation. We show that, unlike the deterministic QSSA, the validity of the stochastic QSSA does not follow from timescale separation alone, but also depends on the sensitivity of the nonelementary reaction rate functions to changes in the slow species. The stochastic QSSA becomes more accurate when this sensitivity is small. Different types of QSSAs result in nonelementary functions with different sensitivities, and the total QSSA results in less sensitive functions than the standard or the prefactor QSSA. We prove that, as a result, the stochastic QSSA becomes more accurate when nonelementary reaction functions are obtained using the total QSSA. Our work provides an apparently novel condition for the validity of the QSSA in stochastic simulations of biochemical reaction networks with disparate timescales. PMID:25099817
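The stochastic QSSA amounts to running a Gillespie simulation with a nonelementary (e.g., Hill-type) propensity in place of the underlying fast elementary reactions; a minimal birth-death sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

def gillespie_hill(x0=0, t_end=2000.0, v_max=20.0, k=40.0, n_hill=2, gamma=0.05):
    """Gillespie simulation of a birth-death process whose birth
    propensity is a nonelementary repressive Hill function
    v_max * K^n / (K^n + x^n): the stochastic-QSSA idea of folding
    fast binding reactions into the rate law instead of simulating
    them. Parameters are illustrative."""
    x, t = x0, 0.0
    while t < t_end:
        birth = v_max * k**n_hill / (k**n_hill + x**n_hill)
        death = gamma * x
        total = birth + death
        t += rng.exponential(1.0 / total)     # time to next event
        if rng.random() < birth / total:
            x += 1
        else:
            x -= 1
    return x

x_final = gillespie_hill()
# Fluctuates around the deterministic steady state (x* = 80 for these values).
```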

  13. Modelling ecosystem service flows under uncertainty with stochastic SPAN

    USGS Publications Warehouse

    Johnson, Gary W.; Snapp, Robert R.; Villa, Ferdinando; Bagstad, Kenneth J.

    2012-01-01

    Ecosystem service models are increasingly in demand for decision making. However, the data required to run these models are often patchy, missing, outdated, or untrustworthy. Further, communication of data and model uncertainty to decision makers is often either absent or unintuitive. In this work, we introduce a systematic approach to addressing both the data gap and the difficulty in communicating uncertainty through a stochastic adaptation of the Service Path Attribution Networks (SPAN) framework. The SPAN formalism assesses ecosystem services through a set of up to 16 maps, which characterize the services in a study area in terms of flow pathways between ecosystems and human beneficiaries. Although the SPAN algorithms were originally defined deterministically, we present them here in a stochastic framework which combines probabilistic input data with a stochastic transport model in order to generate probabilistic spatial outputs. This enables a novel feature among ecosystem service models: the ability to spatially visualize uncertainty in the model results. The stochastic SPAN model can analyze areas where data limitations are prohibitive for deterministic models. Greater uncertainty in the model inputs (including missing data) should lead to greater uncertainty expressed in the model’s output distributions. By using Bayesian belief networks to fill data gaps and expert-provided trust assignments to augment untrustworthy or outdated information, we can account for uncertainty in input data, producing a model that is still able to run and provide information where strictly deterministic models could not. Taken together, these attributes enable more robust and intuitive modelling of ecosystem services under uncertainty.

  14. Probabilistic versus deterministic skill in predicting the western North Pacific-East Asian summer monsoon variability with multimodel ensembles

    NASA Astrophysics Data System (ADS)

    Yang, Xiu-Qun; Yang, Dejian; Xie, Qian; Zhang, Yaocun; Ren, Xuejuan; Tang, Youmin

    2017-04-01

    Based on historical forecasts of three quasi-operational multi-model ensemble (MME) systems, this study assesses the superiority of coupled MME over contributing single-model ensembles (SMEs) and over uncoupled atmospheric MME in predicting the Western North Pacific-East Asian summer monsoon variability. The probabilistic and deterministic forecast skills are measured by Brier skill score (BSS) and anomaly correlation (AC), respectively. A forecast-format dependent MME superiority over SMEs is found. The probabilistic forecast skill of the MME is always significantly better than that of each SME, while the deterministic forecast skill of the MME can be lower than that of some SMEs. The MME superiority arises from both the model diversity and the ensemble size increase in the tropics, and primarily from the ensemble size increase in the subtropics. The BSS is composed of reliability and resolution, two attributes characterizing probabilistic forecast skill. The probabilistic skill increase of the MME is dominated by the dramatic improvement in reliability, while resolution is not always improved, similar to AC. A monotonic resolution-AC relationship is further found and qualitatively explained, whereas little relationship can be identified between reliability and AC. It is argued that the MME's success in improving the reliability arises from an effective reduction of the overconfidence in forecast distributions. Moreover, the seasonal predictions with the coupled MME are found to be more skillful than those with the uncoupled atmospheric MME forced by persisting sea surface temperature (SST) anomalies, since the coupled MME better predicts the SST anomaly evolution in three key regions.
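The Brier skill score used as the probabilistic skill measure is defined relative to a reference (climatological) forecast; a minimal sketch with made-up forecast-outcome pairs:

```python
import numpy as np

def brier_skill_score(p_forecast, outcomes, p_climatology):
    """BSS = 1 - BS / BS_ref, where BS is the mean squared difference
    between forecast probabilities and binary outcomes, and BS_ref is
    the Brier score of a constant climatological forecast."""
    p = np.asarray(p_forecast, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    bs = np.mean((p - o) ** 2)
    bs_ref = np.mean((p_climatology - o) ** 2)
    return 1.0 - bs / bs_ref

# A sharp, reliable forecast beats climatology (BSS > 0):
p = [0.9, 0.8, 0.1, 0.2, 0.7, 0.1]
o = [1, 1, 0, 0, 1, 0]
bss = brier_skill_score(p, o, p_climatology=0.5)
```

BSS > 0 means more skill than climatology; the reliability/resolution decomposition mentioned in the abstract splits BS into calibration and discrimination terms.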

  15. Refreeze experiments with water droplets containing different types of ice nuclei interpreted by classical nucleation theory

    NASA Astrophysics Data System (ADS)

    Kaufmann, Lukas; Marcolli, Claudia; Luo, Beiping; Peter, Thomas

    2017-03-01

    Homogeneous nucleation of ice in supercooled water droplets is a stochastic process. In its classical description, the growth of the ice phase requires the emergence of a critical embryo from random fluctuations of water molecules between the water bulk and ice-like clusters, which is associated with overcoming an energy barrier. For heterogeneous ice nucleation on ice-nucleating surfaces both stochastic and deterministic descriptions are in use. Deterministic (singular) descriptions are often favored because the temperature dependence of ice nucleation on a substrate usually dominates the stochastic time dependence, and the ease of representation facilitates the incorporation in climate models. Conversely, classical nucleation theory (CNT) describes heterogeneous ice nucleation as a stochastic process with a reduced energy barrier for the formation of a critical embryo in the presence of an ice-nucleating surface. The energy reduction is conveniently parameterized in terms of a contact angle α between the ice phase immersed in liquid water and the heterogeneous surface. This study investigates various ice-nucleating agents in immersion mode by subjecting them to repeated freezing cycles to elucidate and discriminate the time and temperature dependences of heterogeneous ice nucleation. Freezing rates determined from such refreeze experiments are presented for Hoggar Mountain dust, birch pollen washing water, Arizona test dust (ATD), and also nonadecanol coatings. For the analysis of the experimental data with CNT, we assumed the same active site to be always responsible for freezing. 
Three different CNT-based parameterizations were used to describe rate coefficients for heterogeneous ice nucleation as a function of temperature, all leading to very similar results: for Hoggar Mountain dust, ATD, and larger nonadecanol-coated water droplets, the experimentally determined increase in freezing rate with decreasing temperature is too shallow to be described properly by CNT using the contact angle α as the only fit parameter. Conversely, birch pollen washing water and small nonadecanol-coated water droplets show temperature dependencies of freezing rates steeper than predicted by all three CNT parameterizations. Good agreement of observations and calculations can be obtained when a pre-factor β is introduced to the rate coefficient as a second fit parameter. Thus, the following microphysical picture emerges: heterogeneous freezing occurs at ice-nucleating sites that need a minimum (critical) surface area to host embryos of critical size to grow into a crystal. Fits based on CNT suggest that the critical active site area is in the range of 10-50 nm2, with the exact value depending on sample, temperature, and CNT-based parameterization. Two fitting parameters are needed to characterize individual active sites. The contact angle α lowers the energy barrier that has to be overcome to form the critical embryo at the site compared to the homogeneous case where the critical embryo develops in the volume of water. The pre-factor β is needed to adjust the calculated slope of freezing rate increase with temperature decrease. When this slope is steep, this can be interpreted as a high frequency of nucleation attempts, so that nucleation occurs immediately when the temperature is low enough for the active site to accommodate a critical embryo. This is the case for active sites of birch pollen washing water and for small droplets coated with nonadecanol. 
If the pre-factor is low, the frequency of nucleation attempts is low and the increase in freezing rate with decreasing temperature is shallow. This is the case for Hoggar Mountain dust, the large droplets coated with nonadecanol, and ATD. Various hypotheses why the value of the pre-factor depends on the nature of the active sites are discussed.
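The two-parameter (α, β) picture can be sketched as a per-site freezing probability; the barrier's temperature dependence below is a schematic stand-in for the paper's CNT parameterizations, with the standard contact-angle factor f(α) = (2 + cos α)(1 - cos α)²/4.

```python
import math

def freezing_probability(temp_k, contact_angle_deg, site_area_nm2,
                         dt=1.0, prefactor=1e10):
    """Probability that a single active site of area s nucleates ice
    within dt seconds: P = 1 - exp(-j_het * s * dt). The heterogeneous
    rate coefficient j_het takes a CNT-like form in which the contact
    angle reduces the homogeneous barrier via
    f(a) = (2 + cos a)(1 - cos a)^2 / 4. The barrier model and the
    prefactor value are illustrative, not the paper's fitted values."""
    k_b = 1.380649e-23                        # Boltzmann constant, J/K
    a = math.radians(contact_angle_deg)
    f = (2.0 + math.cos(a)) * (1.0 - math.cos(a)) ** 2 / 4.0
    supercooling = 273.15 - temp_k
    dg_hom = 5e-19 / max(supercooling / 35.0, 1e-6) ** 2  # J, illustrative
    j_het = prefactor * math.exp(-dg_hom * f / (k_b * temp_k))  # nm^-2 s^-1
    return -math.expm1(-j_het * site_area_nm2 * dt)  # 1 - exp(-j*s*dt)

# A smaller contact angle (lower barrier) makes freezing more likely
# at the same temperature:
p_60 = freezing_probability(253.15, contact_angle_deg=60.0, site_area_nm2=30.0)
p_90 = freezing_probability(253.15, contact_angle_deg=90.0, site_area_nm2=30.0)
```

Scaling the prefactor up or down changes how steeply the freezing probability rises with supercooling, which is the role the fitted β plays in the study.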

  16. Objective models of EMG signals for cyclic processes such as a human gait

    NASA Astrophysics Data System (ADS)

    Babska, Luiza; Selegrat, Monika; Dusza, Jacek J.

    2016-09-01

    EMG signals are small potentials that appear at the surface of human skin during muscle work. They arise from changes in the physiological state of cell membranes in the muscle fibers. They are characterized by a relatively low frequency range (up to about 500 Hz) and a low amplitude (of the order of μV), which makes them difficult to record. The raw EMG signal is inherently random in shape; however, we can distinguish certain deterministic or quasi-deterministic features related to muscle activation that are associated with the movement and allow its parametric description. Objective models of EMG signals were created on the basis of actual data obtained from the VICON system installed at the University of Physical Education in Warsaw. The subject (a healthy woman) moved repeatedly along a fixed track. On her body, 35 reflective markers were placed to record the gait kinematics, together with 8 electrodes to record EMG signals. The research data obtained included more than 1,000 EMG signals synchronized with the phases of gait. The result of this work is an algorithm for obtaining an average EMG signal from multiple gait cycles recorded under the same reproducible conditions. The method described in the article essentially pre-processes measurement data from two quasi-synchronous signals sampled at different frequencies for further processing. The averaged signal is characterized by a significant reduction of high-frequency noise and emphasizes the characteristics of muscle activity common to the individual records.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, deterministic dose calculations account for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. 
Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.
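    The hand-off between the deterministic and MC stages can be caricatured in one dimension with Beer-Lambert attenuation and free-path sampling. This is a sketch of the boundary-matching idea only; the function names and the single-energy, scatter-free physics are our simplifications, not the Geant4-based LMC implementation:

```python
import math
import random

def deterministic_attenuation(fluence, mu, length):
    """Beer-Lambert attenuation through a homogeneous region."""
    return fluence * math.exp(-mu * length)

def mc_transmission(n_histories, mu, length, rng):
    """Monte Carlo transport across a slab: each history survives if its
    sampled exponential free path exceeds the slab thickness (mu > 0)."""
    survived = sum(1 for _ in range(n_histories)
                   if rng.expovariate(mu) > length)
    return survived / n_histories

def hybrid_fluence(fluence_in, mu_before, l_before, mu_het, l_het,
                   mu_after, l_after, n_histories=200_000, seed=1):
    """Deterministic transport up to the heterogeneity, MC across it,
    then deterministic transport again downstream."""
    rng = random.Random(seed)
    f = deterministic_attenuation(fluence_in, mu_before, l_before)
    f *= mc_transmission(n_histories, mu_het, l_het, rng)  # sample + transport
    return deterministic_attenuation(f, mu_after, l_after)  # back to fluence
```

    In a homogeneous medium the hybrid result should reproduce the fully deterministic answer to within MC statistics, which is the sanity check the water benchmark performs in the abstract.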

  18. The past, present and future of cyber-physical systems: a focus on models.

    PubMed

    Lee, Edward A

    2015-02-26

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical.

  19. The Past, Present and Future of Cyber-Physical Systems: A Focus on Models

    PubMed Central

    Lee, Edward A.

    2015-01-01

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical. PMID:25730486

  20. Statistical Characterization and Classification of Edge-Localized Plasma Instabilities

    NASA Astrophysics Data System (ADS)

    Webster, A. J.; Dendy, R. O.

    2013-04-01

    The statistics of edge-localized plasma instabilities (ELMs) in toroidal magnetically confined fusion plasmas are considered. From first principles, standard experimentally motivated assumptions are shown to determine a specific probability distribution for the waiting times between ELMs: the Weibull distribution. This is confirmed empirically by a statistically rigorous comparison with a large data set from the Joint European Torus. The successful characterization of ELM waiting times enables future work to progress in various ways. Here we present a quantitative classification of ELM types, complementary to phenomenological approaches. It also informs us about the nature of ELM processes, such as whether they are random or deterministic. The methods are extremely general and can be applied to numerous other quasiperiodic intermittent phenomena.
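    The Weibull waiting-time model can be exercised on synthetic data with a standard maximum-likelihood fit (fixed-point iteration on the shape parameter). This stdlib-only sketch is generic and is not the statistical testing machinery used in the paper:

```python
import math
import random

def weibull_sample(shape, scale, n, rng):
    """Draw Weibull waiting times by inverting F(t) = 1 - exp(-(t/scale)^shape)."""
    return [scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
            for _ in range(n)]

def fit_weibull(times, iters=200):
    """Maximum-likelihood Weibull fit: damped fixed-point iteration on the
    shape k, then the closed-form scale given k."""
    logs = [math.log(t) for t in times]
    mean_log = sum(logs) / len(logs)
    k = 1.0
    for _ in range(iters):
        tk = [t ** k for t in times]
        weighted = sum(x * l for x, l in zip(tk, logs)) / sum(tk)
        k = 0.5 * k + 0.5 / (weighted - mean_log)
    scale = (sum(t ** k for t in times) / len(times)) ** (1.0 / k)
    return k, scale
```

    A shape parameter above 1 indicates waiting times more regular than a Poisson process, which is the kind of quantitative statement that supports classifying ELM types.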

  1. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing

    2014-09-01

    We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold that determines the persistence or extinction of the disease. Using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail: the infective persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infective disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending it to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the model. Regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
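    A single-group SIR model with demography and newborn vaccination illustrates the ℛ0 threshold described above; this is a deliberate simplification of the multi-group MSIR system, with invented rates and simple Euler time-stepping:

```python
def simulate_sir_vacc(beta, gamma, mu, p, days=2000, dt=0.05):
    """Single-group SIR with birth/death rate mu and a fraction p of
    newborns vaccinated (a simplified stand-in for the multi-group MSIR
    model). Returns the final infective fraction.
    Here R0 = beta * (1 - p) / (gamma + mu)."""
    s, i = 1.0 - p, 1e-4
    for _ in range(int(days / dt)):
        ds = mu * (1.0 - p) - beta * s * i - mu * s
        di = beta * s * i - (gamma + mu) * i
        s += dt * ds
        i += dt * di
    return i
```

    With the rates below, vaccination pushes ℛ0 below 1 and the infective fraction decays to zero; without it, the system settles on a positive endemic level, mirroring the dichotomy proved in the paper.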

  2. Learning automata-based solutions to the nonlinear fractional knapsack problem with applications to optimal resource allocation.

    PubMed

    Granmo, Ole-Christoffer; Oommen, B John; Myrer, Svein Arild; Olsen, Morten Goodwin

    2007-02-01

    This paper considers the nonlinear fractional knapsack problem and demonstrates how its solution can be effectively applied to two resource allocation problems dealing with the World Wide Web. The novel solution involves a "team" of deterministic learning automata (LA). The first real-life problem relates to resource allocation in web monitoring so as to "optimize" information discovery when the polling capacity is constrained. The disadvantages of the currently reported solutions are explained in this paper. The second problem concerns allocating limited sampling resources in a "real-time" manner with the purpose of estimating multiple binomial proportions. This is the scenario encountered when the user has to evaluate multiple web sites by accessing a limited number of web pages, and the proportions of interest are the fraction of each web site that is successfully validated by an HTML validator. Using the general LA paradigm to tackle both of the real-life problems, the proposed scheme improves a current solution in an online manner through a series of informed guesses that move toward the optimal solution. At the heart of the scheme, a team of deterministic LA performs a controlled random walk on a discretized solution space. Comprehensive experimental results demonstrate that the discretization resolution determines the precision of the scheme, and that for a given precision, the current solution (to both problems) is consistently improved until a nearly optimal solution is found--even for switching environments. Thus, the scheme, while being novel to the entire field of LA, also efficiently handles a class of resource allocation problems previously not addressed in the literature.
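    The heart of the scheme, a discretized random walk driven by noisy feedback, can be sketched for a toy two-activity fractional knapsack. The marginal-value functions and noise model below are invented for illustration; the paper's actual solution uses a hierarchical team of LA and different feedback:

```python
import random

def la_allocate(d1, d2, resolution=100, steps=20_000, noise=0.5, seed=3):
    """Discretized random walk on the fraction x given to activity 1.
    d1 and d2 are marginal-value functions; the environment reveals only a
    noisy comparison of marginal values, as in the LA setting. Returns the
    time-averaged allocation, which concentrates near the point where the
    marginal values are equal."""
    rng = random.Random(seed)
    x, h, total = 0.0, 1.0 / resolution, 0.0
    for _ in range(steps):
        noisy1 = d1(x) + rng.gauss(0.0, noise)
        noisy2 = d2(1.0 - x) + rng.gauss(0.0, noise)
        x = min(1.0, x + h) if noisy1 > noisy2 else max(0.0, x - h)
        total += x
    return total / steps
```

    As the abstract notes, the resolution of the discretization sets the precision of the final allocation, while the noisy comparisons are what make a learning approach necessary in the first place.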

  3. Appearance of deterministic mixing behavior from ensembles of fluctuating hydrodynamics simulations of the Richtmyer-Meshkov instability

    NASA Astrophysics Data System (ADS)

    Narayanan, Kiran; Samtaney, Ravi

    2018-04-01

    We obtain numerical solutions of the two-fluid fluctuating compressible Navier-Stokes (FCNS) equations, which consistently account for thermal fluctuations from meso- to macroscales, in order to study the effect of such fluctuations on the mixing behavior in the Richtmyer-Meshkov instability (RMI). The numerical method used was successfully verified in two stages: for the deterministic fluxes by comparison against an air-SF6 RMI experiment, and for the stochastic terms by comparison against direct simulation Monte Carlo results for He-Ar RMI. We present results from fluctuating hydrodynamic RMI simulations for three He-Ar systems having length scales with decreasing order of magnitude that span from macroscopic to mesoscopic, with different levels of thermal fluctuations characterized by a nondimensional Boltzmann number (Bo). For a multidimensional FCNS system on a regular Cartesian grid, when using a discretization of the space-time stochastic flux Z(x,t) of the form Z(x,t) → 1/√(h³Δt) N(ih, nΔt), for spatial interval h, time interval Δt, and Gaussian noise N, the spatial interval h should be greater than h₀, with h₀ corresponding to a cell volume that contains a sufficient number of molecules of the fluid such that the fluctuations are physically meaningful and produce the right equilibrium spectrum. For the mesoscale RMI systems simulated, it was desirable to use a cell size smaller than this limit in order to resolve the viscous shock. This was achieved by using a modified regularization of the noise term via Z(x,t) → 1/√(max(h³, h₀³)Δt) N(ih, nΔt), with h₀ = ξh ∀h
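    As we read the abstract, the discretized stochastic flux amplitude scales as the inverse square root of cell volume times Δt, and the regularization caps the effective cell volume at h₀³ so that sub-h₀ cells do not produce unphysically large fluctuations. This sketch encodes only that scaling rule, with placeholder units:

```python
import math

def noise_amplitude(h, dt):
    """Standard discretization: the stochastic flux amplitude scales as
    1/sqrt(h^3 * dt) on a 3D Cartesian grid with cell size h, step dt."""
    return 1.0 / math.sqrt(h ** 3 * dt)

def regularized_amplitude(h, h0, dt):
    """Modified regularization as described in the abstract: below the
    physical cell-size limit h0 the amplitude is frozen at its h0 value,
    so refining the grid past h0 does not inflate the fluctuations."""
    return 1.0 / math.sqrt(max(h ** 3, h0 ** 3) * dt)
```

    Above h₀ the two rules coincide; below h₀ the regularized amplitude saturates, which is what lets the simulations resolve the viscous shock without corrupting the fluctuation statistics.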

  4. Nonlinear dynamics in flow through unsaturated fractured-porous media: Status and perspectives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faybishenko, Boris

    2002-11-27

    The need has long been recognized to improve predictions of flow and transport in partially saturated heterogeneous soils and fractured rock of the vadose zone for many practical applications, such as remediation of contaminated sites, nuclear waste disposal in geological formations, and climate predictions. Until recently, flow and transport processes in heterogeneous subsurface media with oscillating irregularities were assumed to be random and were not analyzed using methods of nonlinear dynamics. The goals of this paper are to review the theoretical concepts, present the results, and provide perspectives on investigations of flow and transport in unsaturated heterogeneous soils and fractured rock, using the methods of nonlinear dynamics and deterministic chaos. The results of laboratory and field investigations indicate that the nonlinear dynamics of flow and transport processes in unsaturated soils and fractured rocks arise from the dynamic feedback and competition between various nonlinear physical processes along with complex geometry of flow paths. Although direct measurements of variables characterizing the individual flow processes are not technically feasible, their cumulative effect can be characterized by analyzing time series data using the models and methods of nonlinear dynamics and chaos. Identifying flow through soil or rock as a nonlinear dynamical system is important for developing appropriate short- and long-time predictive models, evaluating prediction uncertainty, assessing the spatial distribution of flow characteristics from time series data, and improving chemical transport simulations. Inferring the nature of flow processes through the methods of nonlinear dynamics could become widely used in different areas of the earth sciences.

  5. A deterministic mathematical model for bidirectional excluded flow with Langmuir kinetics.

    PubMed

    Zarai, Yoram; Margaliot, Michael; Tuller, Tamir

    2017-01-01

    In many important cellular processes, including mRNA translation, gene transcription, phosphotransfer, and intracellular transport, biological "particles" move along some kind of "tracks". The motion of these particles can be modeled as a one-dimensional movement along an ordered sequence of sites. The biological particles (e.g., ribosomes or RNAPs) have volume and cannot surpass one another. In some cases, there is a preferred direction of movement along the track, but in general the movement may be bidirectional, and furthermore the particles may attach or detach from various regions along the tracks. We derive a new deterministic mathematical model for such transport phenomena that may be interpreted as a dynamic mean-field approximation of an important model from statistical mechanics called the asymmetric simple exclusion process (ASEP) with Langmuir kinetics. Using tools from the theory of monotone dynamical systems and contraction theory we show that the model admits a unique steady state, and that every solution converges to this steady state. Furthermore, we show that the model entrains (or phase locks) to periodic excitations in any of its forward, backward, attachment, or detachment rates. We demonstrate an application of this phenomenological transport model for analyzing ribosome drop off in mRNA translation.
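    A mean-field model of this type can be sketched for the unidirectional case with Langmuir attachment and detachment (the paper treats the bidirectional case); site occupancies evolve under excluded flow x_i(1 − x_{i+1}), integrated here to steady state with Euler stepping and invented rates:

```python
def mean_field_asep_lk(n=10, rates=None, entry=0.8, exit=0.8,
                       attach=0.05, detach=0.05, dt=0.01, t_end=500.0):
    """Euler integration of a mean-field (RFM-like) approximation of TASEP
    with Langmuir kinetics: occupancies x_i in [0, 1], hop rates `rates`
    (length n-1), entry/exit rates at the boundaries, and attachment/
    detachment at every site. Returns the steady-state occupancy profile."""
    lam = rates or [1.0] * (n - 1)
    x = [0.0] * n
    for _ in range(int(t_end / dt)):
        flow_in = [entry * (1.0 - x[0])] + \
                  [lam[i] * x[i] * (1.0 - x[i + 1]) for i in range(n - 1)]
        flow_out = flow_in[1:] + [exit * x[-1]]
        x = [xi + dt * (fi - fo + attach * (1.0 - xi) - detach * xi)
             for xi, fi, fo in zip(x, flow_in, flow_out)]
    return x
```

    The convergence of every trajectory to one profile, regardless of how long the integration runs, is a small numerical echo of the global stability result proved in the paper via contraction theory.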

  6. Efficient Integrative Multi-SNP Association Analysis via Deterministic Approximation of Posteriors.

    PubMed

    Wen, Xiaoquan; Lee, Yeji; Luca, Francesca; Pique-Regi, Roger

    2016-06-02

    With the increasing availability of functional genomic data, incorporating genomic annotations into genetic association analysis has become a standard procedure. However, the existing methods often lack rigor and/or computational efficiency and consequently do not maximize the utility of functional annotations. In this paper, we propose a rigorous inference procedure to perform integrative association analysis incorporating genomic annotations for both traditional GWASs and emerging molecular QTL mapping studies. In particular, we propose an algorithm, named deterministic approximation of posteriors (DAP), which enables highly efficient and accurate joint enrichment analysis and identification of multiple causal variants. We use a series of simulation studies to highlight the power and computational efficiency of our proposed approach and further demonstrate it by analyzing the cross-population eQTL data from the GEUVADIS project and the multi-tissue eQTL data from the GTEx project. In particular, we find that genetic variants predicted to disrupt transcription factor binding sites are enriched in cis-eQTLs across all tissues. Moreover, the enrichment estimates obtained across the tissues are correlated with the cell types for which the annotations are derived. Copyright © 2016 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  7. Multidisciplinary insight into clonal expansion of HTLV-1-infected cells in adult T-cell leukemia via modeling by deterministic finite automata coupled with high-throughput sequencing.

    PubMed

    Farmanbar, Amir; Firouzi, Sanaz; Park, Sung-Joon; Nakai, Kenta; Uchimaru, Kaoru; Watanabe, Toshiki

    2017-01-31

    Clonal expansion of leukemic cells leads to onset of adult T-cell leukemia (ATL), an aggressive lymphoid malignancy with a very poor prognosis. Infection with human T-cell leukemia virus type-1 (HTLV-1) is the direct cause of ATL onset, and integration of HTLV-1 into the human genome is essential for clonal expansion of leukemic cells. Therefore, monitoring clonal expansion of HTLV-1-infected cells via isolation of integration sites assists in analyzing infected individuals from early infection to the final stage of ATL development. However, because of the complex nature of clonal expansion, the underlying mechanisms have yet to be clarified. Combining computational/mathematical modeling with experimental and clinical data of integration site-based clonality analysis derived from next generation sequencing technologies provides an appropriate strategy to achieve a better understanding of ATL development. As a comprehensively interdisciplinary project, this study combined three main aspects: wet laboratory experiments, in silico analysis and empirical modeling. We analyzed clinical samples from HTLV-1-infected individuals with a broad range of proviral loads using a high-throughput methodology that enables isolation of HTLV-1 integration sites and accurate measurement of the size of infected clones. We categorized clones into four size groups, "very small", "small", "big", and "very big", based on the patterns of clonal growth and observed clone sizes. We propose an empirical formal model based on deterministic finite state automata (DFA) analysis of real clinical samples to illustrate patterns of clonal expansion. Through the developed model, we have translated biological data of clonal expansion into the formal language of mathematics and represented the observed clonality data with DFA. Our data suggest that combining experimental data (absolute size of clones) with DFA can describe the clonality status of patients. 
This kind of modeling provides a basic understanding as well as a unique perspective for clarifying the mechanisms of clonal expansion in ATL.
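    The DFA representation can be made concrete with an illustrative automaton over the four size categories named above; the transition structure and input alphabet here are invented for the sketch and are not the authors' fitted automaton:

```python
def make_clone_dfa():
    """Illustrative DFA over clone-size categories: states are the four
    size classes; inputs 'g' (growth) and 's' (shrinkage) move one class
    up or down, saturating at the ends."""
    states = ["very_small", "small", "big", "very_big"]
    delta = {}
    for i, st in enumerate(states):
        delta[(st, "g")] = states[min(i + 1, len(states) - 1)]
        delta[(st, "s")] = states[max(i - 1, 0)]
    return states, delta

def run_dfa(delta, start, word):
    """Run the automaton on a sequence of observed growth/shrink events,
    returning the final size class of the clone."""
    state = start
    for sym in word:
        state = delta[(state, sym)]
    return state
```

    Feeding a clone's observed trajectory through such an automaton yields a formal, reproducible classification of its clonality status, which is the translation from biological data to formal language the abstract describes.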

  8. A method for deterministic statistical downscaling of daily precipitation at a monsoonal site in Eastern China

    NASA Astrophysics Data System (ADS)

    Liu, Yonghe; Feng, Jinming; Liu, Xiu; Zhao, Yadi

    2017-12-01

    Statistical downscaling (SD) is a method that acquires the local information required for hydrological impact assessment from large-scale atmospheric variables. Very few statistical and deterministic downscaling models for daily precipitation have been developed for local sites influenced by the East Asian monsoon. In this study, SD models were constructed by selecting the best predictors and using generalized linear models (GLMs) for Feixian, a site in the Yishu River Basin, Shandong Province. By calculating and mapping Spearman rank correlation coefficients between the gridded standardized values of five large-scale variables and daily observed precipitation, different cyclonic circulation patterns were found for monsoonal precipitation in summer (June-September) and winter (November-December and January-March); the values of the gridded boxes with the highest absolute correlations with observed precipitation were selected as predictors. Data for predictors and predictands covered the period 1979-2015, and different calibration and validation periods were used when fitting and validating the models. The bootstrap method was also used in fitting the GLMs. These thorough validations indicated that the models were robust and not sensitive to different samples or periods. Pearson's correlations between downscaled and observed precipitation (logarithmically transformed) on a daily scale reached 0.54-0.57 in summer and 0.56-0.61 in winter, and the Nash-Sutcliffe efficiency between downscaled and observed precipitation reached 0.1 in summer and 0.41 in winter. For total interannual precipitation, the downscaled series partially reflected the exact variations in winter and the main trends in summer. For the number of wet days, both the winter and summer models were able to reflect interannual variations. Other comparisons were also made in this study. 
These results demonstrated that when downscaling, it is appropriate to combine a correlation-based predictor selection across a spatial domain with GLM modeling.

  9. Modelling Geomechanical Heterogeneity of Rock Masses Using Direct and Indirect Geostatistical Conditional Simulation Methods

    NASA Astrophysics Data System (ADS)

    Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald

    2017-12-01

    An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods for modelling rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of the geomechanical properties. Although the applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as Kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for spatial variability of rock mass geomechanical properties using geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of uncertainties in spatial variability of rock mass properties in different areas of the pit.
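    Sequential Gaussian simulation can be sketched in 1D with simple kriging under an assumed exponential covariance; the covariance model, range, and zero-mean assumption are placeholders, not the study's fitted variogram:

```python
import math
import random

def cov(d, sill=1.0, corr_len=10.0):
    """Exponential covariance model C(d) = sill * exp(-d / corr_len)."""
    return sill * math.exp(-d / corr_len)

def simple_krige(known, x):
    """Simple kriging (zero mean) at location x from [(loc, val), ...]:
    solve K w = k by Gaussian elimination, return estimate and variance."""
    n = len(known)
    K = [[cov(abs(known[i][0] - known[j][0])) for j in range(n)] for i in range(n)]
    k = [cov(abs(known[i][0] - x)) for i in range(n)]
    for col in range(n):                      # elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(K[r][col]))
        K[col], K[piv] = K[piv], K[col]
        k[col], k[piv] = k[piv], k[col]
        for r in range(col + 1, n):
            fct = K[r][col] / K[col][col]
            for c in range(col, n):
                K[r][c] -= fct * K[col][c]
            k[r] -= fct * k[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        w[r] = (k[r] - sum(K[r][c] * w[c] for c in range(r + 1, n))) / K[r][r]
    est = sum(wi * v for wi, (_, v) in zip(w, known))
    var = cov(0.0) - sum(wi * cov(abs(loc - x)) for wi, (loc, _) in zip(w, known))
    return est, max(var, 0.0)

def sgs_1d(conditioning, targets, seed=0):
    """Sequential Gaussian simulation on a line: visit targets in random
    order, krige from data plus previously simulated points, and draw
    from the local Gaussian so simulated heterogeneity is conditioned."""
    rng = random.Random(seed)
    known, out = list(conditioning), {}
    order = list(targets)
    rng.shuffle(order)
    for x in order:
        est, var = simple_krige(known, x)
        val = rng.gauss(est, math.sqrt(var))
        known.append((x, val))
        out[x] = val
    return out
```

    Unlike kriging alone, each realization reproduces the spatial variance rather than smoothing it out, which is why conditional simulation is preferred in the paper for capturing rock mass heterogeneity.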

  10. Uncertainty propagation through an aeroelastic wind turbine model using polynomial surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murcia, Juan Pablo; Réthoré, Pierre-Elouan; Dimitrov, Nikolay

    Polynomial surrogates are used to characterize the energy production and lifetime equivalent fatigue loads for different components of the DTU 10 MW reference wind turbine under realistic atmospheric conditions. The variability caused by different turbulent inflow fields is captured by creating independent surrogates for the mean and standard deviation of each output with respect to the inflow realizations. A global sensitivity analysis shows that the turbulent inflow realization has a bigger impact on the total distribution of equivalent fatigue loads than the shear coefficient or yaw misalignment. The methodology presented extends the deterministic power and thrust coefficient curves to uncertainty models and adds new variables, such as damage equivalent fatigue loads, in different components of the turbine. These surrogate models can then be implemented inside other workflows, such as estimation of the uncertainty in annual energy production due to wind resource variability and/or robust wind power plant layout optimization. It can be concluded that it is possible to capture the global behavior of a modern wind turbine and its uncertainty under realistic inflow conditions using polynomial response surfaces. In conclusion, the surrogates are a way to obtain power and load estimates under site-specific characteristics without sharing the proprietary aeroelastic design.
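    The surrogate construction, separate response surfaces for the mean and standard deviation of a load channel across inflow realizations, can be illustrated with a plain quadratic fit in a single input; the DTU work uses multi-dimensional polynomial surrogates, so treat this as a minimal stand-in:

```python
import math

def polyfit2(xs, ys):
    """Least-squares quadratic fit y ~ c0 + c1*x + c2*x^2 via the normal
    equations, solved by 3x3 Gaussian elimination (stdlib only)."""
    s = [sum(x ** k for x in xs) for k in range(5)]
    b = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    a = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    for col in range(3):                      # elimination with pivoting
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 3):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coeff = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                       # back substitution
        coeff[r] = (b[r] - sum(a[r][c] * coeff[c]
                               for c in range(r + 1, 3))) / a[r][r]
    return coeff

def surrogate_mean_std(wind_speeds, loads_per_speed):
    """Fit separate quadratic surrogates for the mean and the standard
    deviation of a load channel across inflow-turbulence realizations."""
    means = [sum(l) / len(l) for l in loads_per_speed]
    stds = [math.sqrt(sum((v - m) ** 2 for v in l) / len(l))
            for l, m in zip(loads_per_speed, means)]
    return polyfit2(wind_speeds, means), polyfit2(wind_speeds, stds)
```

    Splitting the surrogate into a mean surface and a spread surface is what lets the inflow-realization scatter be propagated separately from the trend, as the abstract describes.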

  11. Uncertainty propagation through an aeroelastic wind turbine model using polynomial surrogates

    DOE PAGES

    Murcia, Juan Pablo; Réthoré, Pierre-Elouan; Dimitrov, Nikolay; ...

    2017-07-17

    Polynomial surrogates are used to characterize the energy production and lifetime equivalent fatigue loads for different components of the DTU 10 MW reference wind turbine under realistic atmospheric conditions. The variability caused by different turbulent inflow fields is captured by creating independent surrogates for the mean and standard deviation of each output with respect to the inflow realizations. A global sensitivity analysis shows that the turbulent inflow realization has a bigger impact on the total distribution of equivalent fatigue loads than the shear coefficient or yaw misalignment. The methodology presented extends the deterministic power and thrust coefficient curves to uncertainty models and adds new variables, such as damage equivalent fatigue loads, in different components of the turbine. These surrogate models can then be implemented inside other workflows, such as estimation of the uncertainty in annual energy production due to wind resource variability and/or robust wind power plant layout optimization. It can be concluded that it is possible to capture the global behavior of a modern wind turbine and its uncertainty under realistic inflow conditions using polynomial response surfaces. In conclusion, the surrogates are a way to obtain power and load estimates under site-specific characteristics without sharing the proprietary aeroelastic design.

  12. Identifying tropical dry forests extent and succession via the use of machine learning techniques

    NASA Astrophysics Data System (ADS)

    Li, Wei; Cao, Sen; Campos-Vargas, Carlos; Sanchez-Azofeifa, Arturo

    2017-12-01

    Information on ecosystem services as a function of the successional stage for secondary tropical dry forests (TDFs) is scarce and limited. Secondary TDF succession is defined as regrowth following complete forest clearance for cattle ranching or agricultural activities. In the context of large conservation initiatives, the identification of the extent, structure and composition of secondary TDFs can serve as key elements to estimate the effectiveness of such activities. As such, in this study we evaluate the use of a Hyperspectral MAPper (HyMap) dataset and a waveform LIDAR dataset for characterization of different levels of intra-secondary forest stages at the Santa Rosa National Park (SRNP) Environmental Monitoring Super Site located in Costa Rica. Specifically, a multi-task learning based machine learning classifier (MLC-MTL) is employed on the first shortwave infrared band (SWIR1) of HyMap in order to identify the variability of aboveground biomass of secondary TDFs along a successional gradient. Our paper recognizes that the process of ecological succession is not deterministic but a combination of transitional forest types along a stochastic path that depends on ecological, edaphic, land use, and micro-meteorological conditions, and our results provide a new way to obtain the spatial distribution of three main types of TDF successional stages.

  13. Towards an Empirically Based Parametric Explosion Spectral Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, S R; Walter, W R; Ruppert, S

    2009-08-31

    Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before been tested. The focus of our work is on the local and regional distances (< 2000 km) and phases (Pn, Pg, Sn, Lg) necessary to see small explosions. We are developing a parametric model of the nuclear explosion seismic source spectrum that is compatible with the earthquake-based geometrical spreading and attenuation models developed using the Magnitude Distance Amplitude Correction (MDAC) techniques (Walter and Taylor, 2002). The explosion parametric model will be particularly important in regions without any prior explosion data for calibration. The model is being developed using the available body of seismic data at local and regional distances for past nuclear explosions at foreign and domestic test sites. Parametric modeling is a simple and practical approach for widespread monitoring applications, prior to the capability to carry out fully deterministic modeling. The achievable goal of our parametric model development is to be able to predict observed local and regional distance seismic amplitudes for event identification and yield determination in regions with incomplete or no prior history of underground nuclear testing. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
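    A parametric source-spectrum model of the general kind being developed can be illustrated with a generic omega-squared form: flat at the long-period (moment) level below a corner frequency and decaying as f^-2 above it. The functional form and constants are placeholders, not the MDAC-compatible explosion model itself:

```python
def source_spectrum(f, moment=1.0e15, corner=2.0):
    """Generic omega-squared source spectral model: proportional to seismic
    moment at low frequency and falling off as f^-2 above the corner
    frequency (Hz). A placeholder for the explosion-specific parametric
    model discussed in the abstract."""
    return moment / (1.0 + (f / corner) ** 2)
```

    Fitting a handful of such parameters to observed Pn/Pg/Sn/Lg amplitudes is what makes the parametric approach usable in regions with no prior explosion calibration data.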

  14. Potential water resource impacts of hydraulic fracturing from unconventional oil production in the Bakken shale.

    PubMed

    Shrestha, Namita; Chilkoor, Govinda; Wilder, Joseph; Gadhamshetty, Venkataramana; Stone, James J

    2017-01-01

    Modern drilling techniques, notably horizontal drilling and hydraulic fracturing, have enabled unconventional oil production (UOP) from the previously inaccessible Bakken Shale Formation located throughout Montana, North Dakota (ND) and the Canadian province of Saskatchewan. The majority of UOP from the Bakken shale occurs in ND, strengthening its oil industry and businesses, job market, and its gross domestic product. However, similar to UOP from other low-permeability shales, UOP from the Bakken shale can result in environmental and human health effects. For example, UOP from the ND Bakken shale generates a voluminous amount of saline wastewater including produced and flowback water that are characterized by unusually high levels of total dissolved solids (350 g/L) and elevated levels of toxic and radioactive substances. Currently, 95% of the saline wastewater is piped or trucked onsite prior to disposal into Class II injection wells. Oil and gas wastewater (OGW) spills that occur during transport to injection sites can potentially result in drinking water resource contamination. This study presents a critical review of potential water resource impacts due to deterministic (freshwater withdrawals and produced water management) and probabilistic events (spills due to leaking pipelines and truck accidents) related to UOP from the Bakken shale in ND. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Universal photonic quantum computation via time-delayed feedback

    PubMed Central

    Pichler, Hannes; Choi, Soonwon; Zoller, Peter; Lukin, Mikhail D.

    2017-01-01

    We propose and analyze a deterministic protocol to generate two-dimensional photonic cluster states using a single quantum emitter via time-delayed quantum feedback. As a physical implementation, we consider a single atom or atom-like system coupled to a 1D waveguide with a distant mirror, where guided photons represent the qubits, while the mirror allows the implementation of feedback. We identify the class of many-body quantum states that can be produced using this approach and characterize them in terms of 2D tensor network states. PMID:29073057

  16. Effect of multiplicative noise on stationary stochastic process

    NASA Astrophysics Data System (ADS)

    Kargovsky, A. V.; Chikishev, A. Yu.; Chichigina, O. A.

    2018-03-01

    An open system that can be analyzed using the Langevin equation with multiplicative noise is considered. The stationary state of the system results from a balance of deterministic damping and random pumping simulated as noise with controlled periodicity. The dependence of statistical moments of the variable that characterizes the system on parameters of the problem is studied. A nontrivial decrease in the mean value of the main variable with an increase in noise stochasticity is revealed. Applications of the results in several physical, chemical, biological, and technical problems of natural and humanitarian sciences are discussed.
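The kind of dynamics described above can be sketched numerically with Euler-Maruyama integration of a Langevin equation with multiplicative noise. The specific form below, dx = -γx dt + σx dW, and all parameter values are illustrative assumptions, not the authors' model (whose pumping noise has controlled periodicity):

```python
import math
import random

def euler_maruyama(gamma=1.0, sigma=0.3, dt=1e-3, steps=100_000, x0=1.0, seed=1):
    """Integrate dx = -gamma*x*dt + sigma*x*dW (multiplicative noise).

    Illustrative only: the damping/pumping balance of the cited model is
    replaced here by a simple geometric-Brownian-motion form."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment
        x += -gamma * x * dt + sigma * x * dW
    return x

final = euler_maruyama()
```

With net damping the trajectory decays toward zero; statistical moments are then estimated by averaging many such realizations over different seeds.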

  17. Periodicity and chaos from switched flow systems - Contrasting examples of discretely controlled continuous systems

    NASA Technical Reports Server (NTRS)

    Chase, Christopher; Serrano, Joseph; Ramadge, Peter J.

    1993-01-01

    We analyze two examples of the discrete control of a continuous variable system. These examples exhibit what may be regarded as the two extremes of complexity of the closed-loop behavior: one is eventually periodic, the other is chaotic. Our examples are derived from sampled deterministic flow models. These are of interest in their own right but have also been used as models for certain aspects of manufacturing systems. In each case, we give a precise characterization of the closed-loop behavior.

  18. LETTER TO THE EDITOR: Backbones of traffic jams

    NASA Astrophysics Data System (ADS)

    Shikhar Gupta, Himadri; Ramaswamy, Ramakrishna

    1996-11-01

    We study the jam phase of the deterministic traffic model in two dimensions. Within the jam phase, there is a phase transition, from a self-organized jam (formed by initial synchronization followed by jamming), to a random-jam structure. The backbone of the jam is defined and used to analyse self-organization in the jam. The fractal dimension and interparticle correlations on the backbone indicate a continuous phase transition at a critical density with an associated critical exponent, which are characterized through simulations.

  19. Monitoring the Heavens, Today, and Tomorrow

    NASA Technical Reports Server (NTRS)

    Johnson, Nicholas L.

    2006-01-01

    The current Earth satellite population in LEO for all sizes is relatively well-established by a combination of deterministic and statistical means. At higher altitudes, the population of satellites with diameters of less than 1 m is not well defined. Although a few new sensors might become operational in the near- to mid-term, no major improvement in environment characterization is anticipated during this period. With the increasing deployment of micro- and pico-satellites and with the continued growth of the small debris population, a need exists for better space surveillance to support spacecraft design and operations.

  20. Probabilistic model of nonlinear penalties due to collision-induced timing jitter for calculation of the bit error ratio in wavelength-division-multiplexed return-to-zero systems

    NASA Astrophysics Data System (ADS)

    Sinkin, Oleg V.; Grigoryan, Vladimir S.; Menyuk, Curtis R.

    2006-12-01

    We introduce a fully deterministic, computationally efficient method for characterizing the effect of nonlinearity in optical fiber transmission systems that utilize wavelength-division multiplexing and return-to-zero modulation. The method accurately accounts for bit-pattern-dependent nonlinear distortion due to collision-induced timing jitter and for amplifier noise. We apply this method to calculate the error probability as a function of channel spacing in a prototypical multichannel return-to-zero undersea system.

  1. Continuum mesoscopic framework for multiple interacting species and processes on multiple site types and/or crystallographic planes.

    PubMed

    Chatterjee, Abhijit; Vlachos, Dionisios G

    2007-07-21

    While recently derived continuum mesoscopic equations successfully bridge the gap between microscopic and macroscopic physics, so far they have been derived only for simple lattice models. In this paper, general deterministic continuum mesoscopic equations are derived rigorously via nonequilibrium statistical mechanics to account for multiple interacting surface species and multiple processes on multiple site types and/or different crystallographic planes. Adsorption, desorption, reaction, and surface diffusion are modeled. It is demonstrated that contrary to conventional phenomenological continuum models, microscopic physics, such as the interaction potential, determines the final form of the mesoscopic equation. Models of single component diffusion and binary diffusion of interacting particles on single-type site lattice and of single component diffusion on complex microporous materials' lattices consisting of two types of sites are derived, as illustrations of the mesoscopic framework. Simplification of the diffusion mesoscopic model illustrates the relation to phenomenological models, such as the Fickian and Maxwell-Stefan transport models. It is demonstrated that the mesoscopic equations are in good agreement with lattice kinetic Monte Carlo simulations for several prototype examples studied.

  2. Deterministic and stochastic CTMC models from Zika disease transmission

    NASA Astrophysics Data System (ADS)

    Zevika, Mona; Soewono, Edy

    2018-03-01

    Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect and suffering from microcephaly. Here, we formulate a Zika disease transmission model using two approaches, a deterministic model and a continuous-time Markov chain stochastic model. The basic reproduction ratio is constructed from a deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and outbreaks of Zika disease. Dynamical simulations and analysis of the disease transmission are shown for the deterministic and stochastic models.
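As a hedged illustration of the deterministic half of such a formulation, here is a minimal Ross-Macdonald-style host-vector model together with its next-generation basic reproduction ratio. The compartments and every parameter value are assumptions for illustration, not the authors' system:

```python
import math

def host_vector_sir(beta_hv=0.3, beta_vh=0.3, gamma=0.1, mu_v=0.1,
                    dt=0.05, days=400.0):
    """Forward-Euler integration of a minimal host-vector model.

    All compartments are population fractions; dead infected vectors are
    replaced by susceptible ones so both totals stay constant."""
    sh, ih, rh = 0.999, 0.001, 0.0   # host: susceptible, infected, recovered
    sv, iv = 1.0, 0.0                # vector: susceptible, infected
    for _ in range(int(days / dt)):
        d_sh = -beta_hv * sh * iv
        d_ih = beta_hv * sh * iv - gamma * ih
        d_rh = gamma * ih
        d_sv = mu_v * iv - beta_vh * sv * ih
        d_iv = beta_vh * sv * ih - mu_v * iv
        sh += d_sh * dt
        ih += d_ih * dt
        rh += d_rh * dt
        sv += d_sv * dt
        iv += d_iv * dt
    # basic reproduction ratio from the next-generation matrix of the
    # two-stage host -> vector -> host transmission cycle
    r0 = math.sqrt(beta_hv * beta_vh / (gamma * mu_v))
    return rh, r0

attack_fraction, r0 = host_vector_sir()
```

Seeding a small infected fraction triggers an outbreak whenever r0 > 1; the CTMC counterpart replaces these rate terms with event probabilities, which is what yields extinction probabilities.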

  3. Distinguishing between stochasticity and determinism: Examples from cell cycle duration variability.

    PubMed

    Pearl Mizrahi, Sivan; Sandler, Oded; Lande-Diner, Laura; Balaban, Nathalie Q; Simon, Itamar

    2016-01-01

    We describe a recent approach for distinguishing between stochastic and deterministic sources of variability, focusing on the mammalian cell cycle. Variability between cells is often attributed to stochastic noise, although it may be generated by deterministic components. Interestingly, lineage information can be used to distinguish between variability and determinism. Analysis of correlations within a lineage of the mammalian cell cycle duration revealed its deterministic nature. Here, we discuss the sources of such variability and the possibility that the underlying deterministic process is due to the circadian clock. Finally, we discuss the "kicked cell cycle" model and its implications for the study of the cell cycle in healthy and cancerous tissues. © 2015 WILEY Periodicals, Inc.

  4. Disentangling Mechanisms That Mediate the Balance Between Stochastic and Deterministic Processes in Microbial Succession

    DOE PAGES

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; ...

    2015-03-17

    Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about mechanisms shifting their relative importance. To better understand underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.

  5. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water-table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
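The "two-point probability model" cascaded with the deterministic model is in the spirit of Rosenblueth's point-estimate method. A generic sketch for uncorrelated inputs, evaluating the model at every combination of mean ± one standard deviation with equal weights (an illustration of the technique, not the authors' implementation):

```python
from itertools import product

def two_point_estimate(model, means, sigmas):
    """Rosenblueth-style two-point estimate for uncorrelated inputs:
    evaluate the model at all 2^n combinations of mean +/- one standard
    deviation and recover the output mean and variance."""
    outputs = []
    for signs in product((-1.0, 1.0), repeat=len(means)):
        point = [m + s * sd for m, sd, s in zip(means, sigmas, signs)]
        outputs.append(model(point))
    mean = sum(outputs) / len(outputs)
    var = sum((y - mean) ** 2 for y in outputs) / len(outputs)
    return mean, var

# linear sanity check: y = 2*x0 + x1 propagates uncertainty exactly,
# giving mean 4.0 and variance (2*0.5)**2 + 1**2 = 2.0
mean, var = two_point_estimate(lambda x: 2.0 * x[0] + x[1], [1.0, 2.0], [0.5, 1.0])
```

For the groundwater application each "model evaluation" would be a full run of the deterministic two-layer simulation with the three lumped uncertain variables, i.e. 2³ = 8 runs per prediction.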

  6. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    NASA Astrophysics Data System (ADS)

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-07-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three variables: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. Simulated spatial coefficients of variation for water-table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.

  7. Statistics of Delta v magnitude for a trajectory correction maneuver containing deterministic and random components

    NASA Technical Reports Server (NTRS)

    Bollman, W. E.; Chadwick, C.

    1982-01-01

    A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of random Delta v standard deviations using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of a nonzero deterministic Delta v. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for single TCMs consisting of both a deterministic and a random component. The plots provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
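The Monte Carlo approach described above can be sketched directly: sample the random component around the deterministic bias vector and tabulate statistics of the resulting magnitude. The bias vector and standard deviation below are hypothetical placeholders:

```python
import math
import random

def dv_magnitude_stats(bias=(1.0, 0.0, 0.0), sigma=0.5, n=100_000, seed=7):
    """Monte Carlo statistics of |dv| for a TCM that is the sum of a
    deterministic bias vector and an isotropic Gaussian random component."""
    rng = random.Random(seed)
    mags = []
    for _ in range(n):
        v = [b + rng.gauss(0.0, sigma) for b in bias]
        mags.append(math.sqrt(sum(c * c for c in v)))
    mags.sort()
    mean = sum(mags) / n
    p99 = mags[int(0.99 * n)]   # 99th-percentile magnitude
    return mean, p99

mean_dv, p99_dv = dv_magnitude_stats()
```

Sweeping the bias magnitude and sigma over a grid and plotting mean and percentile curves reproduces the kind of parametric plots the paper describes.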

  8. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model

    PubMed Central

    Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.

    2018-01-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183

  9. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.

  10. Assessment of effectiveness of geologic isolation systems. Geologic-simulation model for a hypothetical site in the Columbia Plateau. Volume 2: results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, M.G.; Petrie, G.M.; Baldwin, A.J.

    1982-06-01

    This report contains the input data and computer results for the Geologic Simulation Model. This model is described in detail in the following report: Petrie, G.M., et al. 1981. Geologic Simulation Model for a Hypothetical Site in the Columbia Plateau, Pacific Northwest Laboratory, Richland, Washington. The Geologic Simulation Model is a quasi-deterministic process-response model which simulates, for a million years into the future, the development of the geologic and hydrologic systems of the ground-water basin containing the Pasco Basin. Effects of natural processes on the ground-water hydrologic system are modeled principally by rate equations. The combined effects and synergistic interactions of different processes are approximated by linear superposition of their effects during discrete time intervals in a stepwise-integration approach.

  11. FRACOR-software toolbox for deterministic mapping of fracture corridors in oil fields on AutoCAD platform

    NASA Astrophysics Data System (ADS)

    Ozkaya, Sait I.

    2018-03-01

    Fracture corridors are interconnected large fractures in a narrow subvertical tabular array, which usually traverses the entire reservoir vertically and extends for several hundred meters laterally. Fracture corridors with their huge conductivities constitute an important element of many fractured reservoirs. Unlike small diffuse fractures, actual fracture corridors must be mapped deterministically for simulation or field development purposes. Fracture corridors can be identified and quantified definitively with borehole image logs and well testing. However, there are rarely sufficient image logs or well tests, and it is necessary to utilize various fracture corridor indicators with varying degrees of reliability. Integration of data from many different sources, in turn, requires a platform with powerful editing and layering capability. Available commercial reservoir characterization software packages, with layering and editing capabilities, can be cost intensive. CAD packages are far more affordable and may easily acquire the versatility and power of commercial software packages with the addition of a small software toolbox. The objective of this communication is to present FRACOR, a software toolbox which enables deterministic 2D fracture corridor mapping and modeling on the AutoCAD platform. The FRACOR toolbox is written in AutoLISP and contains several independent routines to import and integrate available fracture corridor data from an oil field, and export results as text files. The resulting fracture corridor maps consist mainly of fracture corridors with different confidence levels from combination of static and dynamic data and exclusion zones where no fracture corridor can exist. The exported text file of fracture corridors from FRACOR can be imported into an upscaling program to generate a fracture grid for dual porosity simulation or used for field development and well planning.

  12. A reliable simultaneous representation of seismic hazard and of ground shaking recurrence

    NASA Astrophysics Data System (ADS)

    Peresan, A.; Panza, G. F.; Magrin, A.; Vaccari, F.

    2015-12-01

    Different earthquake hazard maps may be appropriate for different purposes - such as emergency management, insurance and engineering design. Accounting for the lower occurrence rate of larger sporadic earthquakes may make it possible to formulate cost-effective policies in some specific applications, provided that statistically sound recurrence estimates are used, which is not typically the case of PSHA (Probabilistic Seismic Hazard Assessment). We illustrate the procedure to associate the expected ground motions from Neo-deterministic Seismic Hazard Assessment (NDSHA) with an estimate of their recurrence. Neo-deterministic refers to a scenario-based approach, which allows for the construction of a broad range of earthquake scenarios via full waveform modeling. From the synthetic seismograms the estimates of peak ground acceleration, velocity and displacement, or any other parameter relevant to seismic engineering, can be extracted. NDSHA, in its standard form, defines the hazard computed from a wide set of scenario earthquakes (including the largest deterministically or historically defined credible earthquake, MCE) and it does not supply the frequency of occurrence of the expected ground shaking. A recent enhanced variant of NDSHA that reliably accounts for recurrence has been developed and it is applied to the Italian territory. The characterization of the frequency-magnitude relation can be performed by any statistically sound method supported by data (e.g. multi-scale seismicity model), so that a recurrence estimate is associated with each of the pertinent sources. In this way a standard NDSHA map of ground shaking is obtained simultaneously with the map of the corresponding recurrences. The introduction of recurrence estimates in NDSHA naturally allows for the generation of ground shaking maps at specified return periods. This permits a straightforward comparison between NDSHA and PSHA maps.

  13. Metallic-thin-film instability with spatially correlated thermal noise.

    PubMed

    Diez, Javier A; González, Alejandro G; Fernández, Roberto

    2016-01-01

    We study the effects of stochastic thermal fluctuations on the instability of the free surface of a flat liquid metallic film on a solid substrate. These fluctuations are represented by a stochastic noise term added to the deterministic equation for the film thickness within the long-wave approximation. Unlike the case of polymeric films, we find that this noise, while remaining white in time, must be colored in space, at least in some regimes. The corresponding noise term is characterized by a nonzero correlation length, ℓ_{c}, which, combined with the size of the system, leads to a dimensionless parameter β that accounts for the relative importance of the spatial correlation (β∼ℓ_{c}^{-1}). We perform the linear stability analysis (LSA) of the film both with and without the noise term and find that for ℓ_{c} larger than some critical value (depending on the system size), the wavelength of the peak of the spectrum is larger than that corresponding to the deterministic case, while for smaller ℓ_{c} this peak corresponds to smaller wavelength than the latter. Interestingly, whatever the value of ℓ_{c}, the peak always approaches the deterministic one for larger times. We compare LSA results with the numerical simulations of the complete nonlinear problem and find a good agreement in the power spectra for early times at different values of β. For late times, we find that the stochastic LSA predicts well the position of the dominant wavelength, showing that nonlinear interactions do not modify the trends of the early linear stages. Finally, we fit the theoretical spectra to experimental data from a nanometric laser-melted copper film and find that at later times, the adjustment requires smaller values of β (larger space correlations).
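One standard way to realize noise that is white in time but colored in space is to smooth spatially white samples with a Gaussian kernel of width ℓc at each time step. The sketch below, with periodic boundaries, illustrates the construction in general; it is not the authors' exact correlator:

```python
import math
import random

def correlated_noise(n=512, ell_c=10.0, seed=3):
    """Spatially correlated Gaussian noise: convolve white noise with a
    Gaussian kernel of width ell_c (periodic boundaries). The kernel is
    normalized so the smoothed field keeps unit variance."""
    rng = random.Random(seed)
    white = [rng.gauss(0.0, 1.0) for _ in range(n)]
    half = int(3 * ell_c)                       # truncate kernel at 3 ell_c
    kernel = [math.exp(-0.5 * (k / ell_c) ** 2) for k in range(-half, half + 1)]
    norm = math.sqrt(sum(w * w for w in kernel))
    kernel = [w / norm for w in kernel]
    return [sum(kernel[j] * white[(i + j - half) % n] for j in range(len(kernel)))
            for i in range(n)]

field = correlated_noise()
```

Shrinking ell_c toward the grid spacing recovers spatially white noise, which is the limit where the deterministic and stochastic spectra in the paper coincide at late times.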

  14. Neo-deterministic seismic hazard scenarios for India—a preventive tool for disaster mitigation

    NASA Astrophysics Data System (ADS)

    Parvez, Imtiyaz A.; Magrin, Andrea; Vaccari, Franco; Ashish; Mir, Ramees R.; Peresan, Antonella; Panza, Giuliano Francesco

    2017-11-01

    Current computational resources and physical knowledge of the seismic wave generation and propagation processes allow for reliable numerical and analytical models of waveform generation and propagation. From the simulation of ground motion, it is easy to extract the desired earthquake hazard parameters. Accordingly, a scenario-based approach to seismic hazard assessment has been developed, namely the neo-deterministic seismic hazard assessment (NDSHA), which allows for a wide range of possible seismic sources to be used in the definition of reliable scenarios by means of realistic waveforms modelling. Such reliable and comprehensive characterization of expected earthquake ground motion is essential to improve building codes, particularly for the protection of critical infrastructures and for land use planning. Parvez et al. (Geophys J Int 155:489-508, 2003) published the first ever neo-deterministic seismic hazard map of India by computing synthetic seismograms with input data set consisting of structural models, seismogenic zones, focal mechanisms and earthquake catalogues. As described in Panza et al. (Adv Geophys 53:93-165, 2012), the NDSHA methodology evolved with respect to the original formulation used by Parvez et al. (Geophys J Int 155:489-508, 2003): the computer codes were improved to better fit the need of producing realistic ground shaking maps and ground shaking scenarios, at different scale levels, exploiting the most significant pertinent progresses in data acquisition and modelling. Accordingly, the present study supplies a revised NDSHA map for India. The seismic hazard, expressed in terms of maximum displacement (Dmax), maximum velocity (Vmax) and design ground acceleration (DGA), has been extracted from the synthetic signals and mapped on a regular grid over the studied territory.

  15. Metallic-thin-film instability with spatially correlated thermal noise

    NASA Astrophysics Data System (ADS)

    Diez, Javier A.; González, Alejandro G.; Fernández, Roberto

    2016-01-01

    We study the effects of stochastic thermal fluctuations on the instability of the free surface of a flat liquid metallic film on a solid substrate. These fluctuations are represented by a stochastic noise term added to the deterministic equation for the film thickness within the long-wave approximation. Unlike the case of polymeric films, we find that this noise, while remaining white in time, must be colored in space, at least in some regimes. The corresponding noise term is characterized by a nonzero correlation length, ℓc, which, combined with the size of the system, leads to a dimensionless parameter β that accounts for the relative importance of the spatial correlation (β ˜ℓc-1 ). We perform the linear stability analysis (LSA) of the film both with and without the noise term and find that for ℓc larger than some critical value (depending on the system size), the wavelength of the peak of the spectrum is larger than that corresponding to the deterministic case, while for smaller ℓc this peak corresponds to smaller wavelength than the latter. Interestingly, whatever the value of ℓc, the peak always approaches the deterministic one for larger times. We compare LSA results with the numerical simulations of the complete nonlinear problem and find a good agreement in the power spectra for early times at different values of β . For late times, we find that the stochastic LSA predicts well the position of the dominant wavelength, showing that nonlinear interactions do not modify the trends of the early linear stages. Finally, we fit the theoretical spectra to experimental data from a nanometric laser-melted copper film and find that at later times, the adjustment requires smaller values of β (larger space correlations).

  16. Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.

    PubMed

    Kang, Yun; Lanchier, Nicolas

    2011-06-01

    We investigate the impact of Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. The existence of a regime with exactly three attractors only appears when patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density populations can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of the global extinction and the basin of attraction of the global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the probability of extinction for the stochastic model.
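A minimal sketch of a deterministic two-patch system of this kind, using the generic textbook strong-Allee form x' = x(1-x)(x-a) + d(y-x) with symmetric dispersal d; the functional form, threshold a, and dispersal values are illustrative assumptions, not the authors' model:

```python
def two_patch_allee(x0, y0, a=0.2, d=0.1, dt=0.01, t_end=400.0):
    """Forward-Euler integration of two patches with logistic growth,
    a strong Allee threshold a, and symmetric dispersal rate d."""
    x, y = x0, y0
    for _ in range(int(t_end / dt)):
        dx = x * (1.0 - x) * (x - a) + d * (y - x)
        dy = y * (1.0 - y) * (y - a) + d * (x - y)
        x += dx * dt
        y += dy * dt
    return x, y

# occupied patch next to an empty one: with moderate dispersal the empty
# patch is pushed past the Allee threshold and invaded; with no dispersal
# it stays empty
x_inv, y_inv = two_patch_allee(1.0, 0.0, d=0.1)
x_iso, y_iso = two_patch_allee(1.0, 0.0, d=0.0)
```

Sweeping d exposes the regimes the abstract describes: weak dispersal leaves a high-density/low-density pair near equilibrium, while stronger dispersal synchronizes the patches into joint expansion or extinction.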

  17. Recurrence quantification analysis of global stock markets

    NASA Astrophysics Data System (ADS)

    Bastos, João A.; Caiado, Jorge

    2011-04-01

    This study investigates the presence of deterministic dependencies in international stock markets using recurrence plots and recurrence quantification analysis (RQA). The results are based on a large set of free float-adjusted market capitalization stock indices, covering a period of 15 years. The statistical tests suggest that the dynamics of stock prices in emerging markets is characterized by higher values of RQA measures when compared to their developed counterparts. The behavior of stock markets during critical financial events, such as the burst of the technology bubble, the Asian currency crisis, and the recent subprime mortgage crisis, is analyzed by performing RQA in sliding windows. It is shown that during these events stock markets exhibit a distinctive behavior that is characterized by temporary decreases in the fraction of recurrence points contained in diagonal and vertical structures.
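The core RQA quantities can be sketched from a scalar series: the recurrence rate RR (density of recurrent pairs) and the determinism DET (fraction of recurrent points lying on diagonal lines). Standard practice first applies delay embedding, which is omitted here for brevity:

```python
import math
import random

def rqa(series, eps, lmin=2):
    """Recurrence rate (RR) and determinism (DET) of a scalar series.

    R[i][j] = 1 when |x_i - x_j| < eps; DET is the fraction of off-diagonal
    recurrent points that sit on diagonal lines of length >= lmin."""
    n = len(series)
    R = [[1 if abs(series[i] - series[j]) < eps else 0 for j in range(n)]
         for i in range(n)]
    recurrent = sum(R[i][j] for i in range(n) for j in range(n) if i != j)
    diag = 0
    for k in range(1, n):            # diagonals above the main one
        run = 0
        for i in range(n - k):
            if R[i][i + k]:
                run += 1
            else:
                if run >= lmin:
                    diag += run
                run = 0
        if run >= lmin:
            diag += run
    rr = recurrent / (n * (n - 1))
    det = 2 * diag / recurrent if recurrent else 0.0   # factor 2: symmetry
    return rr, det

rng = random.Random(0)
_, det_periodic = rqa([math.sin(0.5 * i) for i in range(100)], eps=0.1)
_, det_random = rqa([rng.random() for _ in range(100)], eps=0.1)
```

A deterministic signal concentrates its recurrences on diagonal structures (high DET), which is exactly the signature the study tracks in sliding windows across crisis periods.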

  18. Domain imaging in ferroelectric thin films via channeling-contrast backscattered electron microscopy

    DOE PAGES

    Ihlefeld, Jon F.; Michael, Joseph R.; McKenzie, Bonnie B.; ...

    2016-09-16

    We report that ferroelastic domain walls provide opportunities for deterministically controlling mechanical, optical, electrical, and thermal energy. Domain wall characterization in micro- and nanoscale systems, where the wall spacing may be of the order of 100 nm or less, is presently limited to only a few techniques, such as piezoresponse force microscopy and transmission electron microscopy. These techniques cannot, however, independently characterize domain polarization orientation and domain wall motion in technologically relevant capacitor structures or in a non-destructive manner, which limits their utility. In this work, we show how backscatter scanning electron microscopy utilizing channeling contrast can image the ferroelastic domain structure of ferroelectric films with domain wall spacing as narrow as 10 nm.

  19. Estimating the epidemic threshold on networks by deterministic connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. In these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since deterministic connections are easier to detect than stochastic connections, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
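
    For context, the textbook spectral estimate of the SIS epidemic threshold on a static network is the inverse of the adjacency matrix's largest eigenvalue. The sketch below computes it by power iteration for a toy 4-node ring of deterministic connections; this is the standard result, not the paper's upper/lower-bound inequalities.

```python
def spectral_radius(A, iters=100):
    """Power iteration for the largest eigenvalue of a nonnegative
    adjacency matrix (sufficient for this small illustrative graph)."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Deterministic connections only: a 4-node ring (each node has degree 2)
ring = [[0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]
lam_max = spectral_radius(ring)
beta_c = 1.0 / lam_max   # SIS epidemic threshold with unit recovery rate
```

    For the ring the largest eigenvalue is 2 (the common degree), so the threshold is 0.5; adding stochastic connections shifts the true threshold, which the paper brackets with inequalities.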

  20. Experimental demonstration on the deterministic quantum key distribution based on entangled photons.

    PubMed

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-02-10

    As an important resource, entangled light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). Few experiments implement entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified "Ping-Pong" (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications.

  1. Experimental demonstration on the deterministic quantum key distribution based on entangled photons

    PubMed Central

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-01-01

    As an important resource, entangled light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). Few experiments implement entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified “Ping-Pong” (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications. PMID:26860582

  2. On the number of different dynamics in Boolean networks with deterministic update schedules.

    PubMed

    Aracena, J; Demongeot, J; Fanchon, E; Montalva, M

    2013-04-01

    Deterministic Boolean networks are a type of discrete dynamical system widely used in the modeling of genetic networks. The dynamics of such systems are characterized by the local activation functions and the update schedule, i.e., the order in which the nodes are updated. In this paper, we address the problem of determining the different dynamics of a Boolean network when the update schedule is changed. We begin by proving that the problem of the existence of a pair of update schedules with different dynamics is NP-complete. However, we show that certain structural properties of the interaction digraph are sufficient to guarantee distinct dynamics of a network. In [1] the authors define equivalence classes which have the property that all the update schedules of a given class yield the same dynamics. In order to determine the dynamics associated to a network, we develop an algorithm to efficiently enumerate these equivalence classes by selecting a representative update schedule for each class with a minimum number of blocks. Finally, we run this algorithm on the well known Arabidopsis thaliana network to determine the full spectrum of its different dynamics. Copyright © 2013 Elsevier Inc. All rights reserved.
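
    A two-node toy network suffices to see why the update schedule matters: the sketch below applies the same local functions under a parallel (synchronous) schedule and under a sequential one, and obtains different successor states. The network is illustrative and unrelated to the Arabidopsis model.

```python
def step_parallel(state, funcs):
    """Synchronous update: every node evaluates on the old state."""
    return tuple(f(state) for f in funcs)

def step_sequential(state, funcs, order):
    """Sequential update: nodes update one at a time, in the given order,
    each seeing the values already updated before it."""
    s = list(state)
    for i in order:
        s[i] = funcs[i](tuple(s))
    return tuple(s)

# Toy 2-node network: each node copies the other (x' = y, y' = x)
funcs = [lambda s: s[1], lambda s: s[0]]

par = step_parallel((0, 1), funcs)            # both read the old state: swap
seq = step_sequential((0, 1), funcs, [0, 1])  # node 1 reads node 0's new value
```

    Under the parallel schedule the state (0, 1) swaps to (1, 0); under the sequential schedule node 1 sees node 0's already-updated value and the network lands on the fixed point (1, 1).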

  3. About influence of input rate random part of nonstationary queue system on statistical estimates of its macroscopic indicators

    NASA Astrophysics Data System (ADS)

    Korelin, Ivan A.; Porshnev, Sergey V.

    2018-05-01

    A model of a non-stationary queuing system (NQS) is described. The input of this model receives a flow of requests with input rate λ = λdet(t) + λrnd(t), where λdet(t) is a deterministic function of time and λrnd(t) is a random function. The parameters of λdet(t) and λrnd(t) were identified from statistical information on visitor flows collected at various Russian football stadiums. Statistical modeling of the NQS is carried out and the average statistical dependences are obtained: the length of the queue of requests waiting for service, the average waiting time, and the number of visitors admitted to the stadium as a function of time. It is shown that these dependencies can be characterized by the following parameters: the number of visitors who entered by the start of the match; the time required to serve all incoming visitors; the maximum value; and the time at which the studied dependence reaches its maximum. The dependences of these parameters on the energy ratio of the deterministic and random components of the input rate are investigated.
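
    A minimal simulation of such a nonstationary queue can be sketched as follows. The Gaussian-pulse deterministic rate, the noise level, and the service rate are illustrative stand-ins, not the parameters identified from the stadium data.

```python
import math
import random

def simulate_nqs(T=240.0, dt=0.1, service_rate=3.0, seed=1):
    """Toy discrete-time simulation of a queue with input rate
    lambda(t) = lambda_det(t) + lambda_rnd(t)."""
    rng = random.Random(seed)

    def lam_det(t):
        # deterministic arrival peak before the match (illustrative shape)
        return 2.0 * math.exp(-((t - 120.0) / 60.0) ** 2)

    queue, served, peak = 0, 0, 0
    for k in range(int(T / dt)):
        t = k * dt
        lam = max(0.0, lam_det(t) + rng.gauss(0.0, 0.3))  # add random component
        if rng.random() < lam * dt:        # at most one arrival per small step
            queue += 1
        if queue > 0 and rng.random() < service_rate * dt:
            queue -= 1
            served += 1
        peak = max(peak, queue)
    return served, peak

served, peak_queue = simulate_nqs()
```

    Averaging such runs over many seeds yields the average queue length, waiting time and admission curves the study reports.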

  4. Multiple object tracking with non-unique data-to-object association via generalized hypothesis testing. [tracking several aircraft near each other or ships at sea

    NASA Technical Reports Server (NTRS)

    Porter, D. W.; Lefler, R. M.

    1979-01-01

    A generalized hypothesis testing approach is applied to the problem of tracking several objects where several different associations of data with objects are possible. Such problems occur, for instance, when attempting to distinctly track several aircraft maneuvering near each other or when tracking ships at sea. Conceptually, the problem is solved by first, associating data with objects in a statistically reasonable fashion and then, tracking with a bank of Kalman filters. The objects are assumed to have motion characterized by a fixed but unknown deterministic portion plus a random process portion modeled by a shaping filter. For example, the object might be assumed to have a mean straight line path about which it maneuvers in a random manner. Several hypothesized associations of data with objects are possible because of ambiguity as to which object the data comes from, false alarm/detection errors, and possible uncertainty in the number of objects being tracked. The statistical likelihood function is computed for each possible hypothesized association of data with objects. Then the generalized likelihood is computed by maximizing the likelihood over parameters that define the deterministic motion of the object.

  5. Study of selected phenotype switching strategies in time varying environment

    NASA Astrophysics Data System (ADS)

    Horvath, Denis; Brutovsky, Branislav

    2016-03-01

    Population heterogeneity plays an important role across many research problems, as well as real-world ones. Population heterogeneity relates to the ability of a population to cope with an environmental change (or uncertainty), preventing its extinction. However, this ability is not always desirable, as exemplified by intratumor heterogeneity, which positively correlates with the development of resistance to therapy. The causation of population heterogeneity is therefore an intensively studied topic in biology and medicine. In this paper the evolution of a specific strategy of population diversification, phenotype switching, is studied at a conceptual level. The presented simulation model studies the evolution of a large population of asexual organisms in a time-varying environment represented by a stochastic Markov process. Each organism is endowed with a stochastic or nonlinear deterministic switching strategy realized by discrete-time models with evolvable parameters. We demonstrate that under rapidly varying exogenous conditions organisms operate in the vicinity of the bet-hedging strategy, while deterministic patterns become relevant as the environmental variations become less frequent. Statistical characterization of the steady-state regimes of the populations is done using the Hellinger and Kullback-Leibler functional distances and the Hamming distance.
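
    The qualitative finding that responsive (deterministic) switching pays off only when the environment changes slowly can be illustrated with a toy two-state model. The strategies and rates below are assumptions for illustration, not the paper's evolvable discrete-time models.

```python
import random

def match_fraction(strategy, env_flip, steps=20000, seed=11):
    """Fraction of steps on which the phenotype matches a two-state Markov
    environment. 'tracker' deterministically copies the last observed
    environment; 'hedger' flips phenotype by a coin toss each step."""
    rng = random.Random(seed)
    env, phen, matches = 0, 0, 0
    for _ in range(steps):
        if strategy == "tracker":
            phen = env                    # deterministic response to last observation
        elif rng.random() < 0.5:          # stochastic (bet-hedging) switching
            phen = 1 - phen
        if rng.random() < env_flip:       # environment may change afterwards
            env = 1 - env
        matches += (env == phen)
    return matches / steps

slow_tracker = match_fraction("tracker", env_flip=0.02)
fast_tracker = match_fraction("tracker", env_flip=0.5)
fast_hedger = match_fraction("hedger", env_flip=0.5)
```

    In a slowly varying environment the deterministic tracker matches almost always, while under rapid variation it does no better than random switching, which is the regime where bet-hedging becomes competitive.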

  6. Review of smoothing methods for enhancement of noisy data from heavy-duty LHD mining machines

    NASA Astrophysics Data System (ADS)

    Wodecki, Jacek; Michalak, Anna; Stefaniak, Paweł

    2018-01-01

    Appropriate analysis of data measured on heavy-duty mining machines is essential for process monitoring, management and optimization. Some particular classes of machines, for example LHD (load-haul-dump) machines, hauling trucks, drilling/bolting machines etc., are characterized by cyclic operation. In those cases, identification of cycles and their segments, in other words data segmentation, is key to evaluating their performance, which may be very useful from the management point of view, for example by enabling process optimization. However, in many cases such raw signals are contaminated with various artifacts, and in general are expected to be very noisy, which makes the segmentation task very difficult or even impossible. To deal with that problem, there is a need for efficient smoothing methods that retain informative trends in the signals while disregarding noise and other undesired non-deterministic components. In this paper the authors present a review of various approaches to diagnostic data smoothing. The described methods can be used in a fast and efficient way, effectively cleaning the signals while preserving the informative deterministic behaviour that is crucial to precise segmentation and other approaches to industrial data analysis.
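
    As a baseline for the smoothing methods reviewed, the sketch below applies a simple centered moving average to a noisy cyclic signal of the kind a machine duty cycle might produce; the signal shape, noise level and window size are illustrative assumptions.

```python
import math
import random

def moving_average(signal, window):
    """Centered moving-average smoother; the window shrinks near the edges."""
    n, half = len(signal), window // 2
    out = []
    for i in range(n):
        chunk = signal[max(0, i - half):min(n, i + half + 1)]
        out.append(sum(chunk) / len(chunk))
    return out

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

rng = random.Random(3)
clean = [math.sin(2 * math.pi * i / 50) for i in range(200)]  # cyclic trend
noisy = [c + rng.gauss(0.0, 0.5) for c in clean]              # heavy noise
smooth = moving_average(noisy, 21)

rmse_noisy = rmse(noisy, clean)
rmse_smooth = rmse(smooth, clean)
```

    The smoothed series tracks the underlying cycle far more closely than the raw one, which is the property that makes segmentation feasible; the cost is some attenuation of the trend itself, the usual bias-variance trade-off of any smoother.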

  7. Lyapunov exponents for one-dimensional aperiodic photonic bandgap structures

    NASA Astrophysics Data System (ADS)

    Kissel, Glen J.

    2011-10-01

    Existing in the "gray area" between perfectly periodic and purely randomized photonic bandgap structures are the so-called aperiodic structures, whose layers are chosen according to some deterministic rule. We consider here a one-dimensional photonic bandgap structure, a quarter-wave stack, with the layer thickness of one of the bilayers being either thin or thick according to five deterministic sequence rules and binary random selection. To produce these aperiodic structures we examine the following sequences: Fibonacci, Thue-Morse, period doubling, Rudin-Shapiro, as well as the triadic Cantor sequence. We model these structures numerically with a long chain (approximately 5,000,000) of transfer matrices, and then use the reliable algorithm of Wolf to calculate the (upper) Lyapunov exponent for the long product of matrices. The Lyapunov exponent is the statistically well-behaved variable used to characterize the Anderson localization effect (exponential confinement) when the layers are randomized, so its calculation allows us to compare the purely randomized structure more precisely with its aperiodic counterparts. It is found that the aperiodic photonic systems show much fine structure in their Lyapunov exponents as a function of frequency, and, in a number of cases, the exponents are quite obviously fractal.
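
    The transfer-matrix Lyapunov computation can be sketched with step-by-step renormalization in the style of Wolf's algorithm. The 2x2 matrices below are illustrative stand-ins for the quarter-wave-stack transfer matrices, chosen so the periodic case has a known exponent (the logarithm of the golden ratio); the Fibonacci word is generated by its standard substitution rule.

```python
import math
import random

def lyapunov(seq, mats):
    """Upper Lyapunov exponent of a long 2x2 matrix product, renormalizing
    the propagated vector at every step (Wolf-style) to avoid overflow."""
    v = (1.0, 0.0)
    total = 0.0
    for s in seq:
        a, b, c, d = mats[s]
        w = (a * v[0] + b * v[1], c * v[0] + d * v[1])
        norm = math.hypot(w[0], w[1])
        total += math.log(norm)
        v = (w[0] / norm, w[1] / norm)
    return total / len(seq)

def fibonacci_word(n):
    """Binary aperiodic sequence from the substitution A -> AB, B -> A."""
    w = "A"
    while len(w) < n:
        w = "".join("AB" if ch == "A" else "A" for ch in w)
    return w[:n]

# Illustrative stand-ins for the two layer types, as (a, b, c, d)
mats = {"A": (1.0, 1.0, 1.0, 0.0),   # Fibonacci matrix, top eigenvalue = golden ratio
        "B": (1.0, 1.0, 0.0, 1.0)}   # shear matrix, both eigenvalues 1

gamma_periodic = lyapunov("A" * 20000, mats)
gamma_fib = lyapunov(fibonacci_word(20000), mats)
rng = random.Random(5)
gamma_rand = lyapunov("".join(rng.choice("AB") for _ in range(20000)), mats)
```

    For the all-A (periodic) sequence the exponent converges to log of the golden ratio, about 0.481, which provides a check on the renormalization; the aperiodic and random sequences give smaller positive exponents that differ from each other, mirroring the comparison made in the paper.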

  8. Detecting nonlinear dynamics of functional connectivity

    NASA Astrophysics Data System (ADS)

    LaConte, Stephen M.; Peltier, Scott J.; Kadah, Yasser; Ngan, Shing-Chung; Deshpande, Gopikrishna; Hu, Xiaoping

    2004-04-01

    Functional magnetic resonance imaging (fMRI) is a technique that is sensitive to correlates of neuronal activity. The application of fMRI to measure functional connectivity of related brain regions across hemispheres (e.g. left and right motor cortices) has great potential for revealing fundamental physiological brain processes. Primarily, functional connectivity has been characterized by linear correlations in resting-state data, which may not provide a complete description of its temporal properties. In this work, we broaden the measure of functional connectivity to study not only linear correlations, but also those arising from deterministic, non-linear dynamics. Here the delta-epsilon approach is extended and applied to fMRI time series. The method of delays is used to reconstruct the joint system defined by a reference pixel and a candidate pixel. The crux of this technique relies on determining whether the candidate pixel provides additional information concerning the time evolution of the reference. As in many correlation-based connectivity studies, we fix the reference pixel. Every brain location is then used as a candidate pixel to estimate the spatial pattern of deterministic coupling with the reference. Our results indicate that measured connectivity is often emphasized in the motor cortex contra-lateral to the reference pixel, demonstrating the suitability of this approach for functional connectivity studies. In addition, discrepancies with traditional correlation analysis provide initial evidence for non-linear dynamical properties of resting-state fMRI data. Consequently, the non-linear characterization provided from our approach may provide a more complete description of the underlying physiology and brain function measured by this type of data.
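
    The method of delays referred to above reconstructs state vectors from lagged copies of a scalar series. A minimal sketch, using an arbitrary toy series rather than fMRI data:

```python
def delay_embed(series, dim, tau):
    """Method of delays: build state vectors
    x_t = (s_t, s_{t-tau}, ..., s_{t-(dim-1)*tau})."""
    start = (dim - 1) * tau
    return [tuple(series[t - k * tau] for k in range(dim))
            for t in range(start, len(series))]

series = list(range(10))                 # toy scalar time series
vectors = delay_embed(series, dim=3, tau=2)
```

    In the delta-epsilon framework, such embeddings of a reference and a candidate pixel are combined into a joint reconstruction, and one asks whether the candidate's coordinates improve prediction of the reference's time evolution.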

  9. Sustainability likelihood of remediation options for metal-contaminated soil/sediment.

    PubMed

    Chen, Season S; Taylor, Jessica S; Baek, Kitae; Khan, Eakalak; Tsang, Daniel C W; Ok, Yong Sik

    2017-05-01

    Multi-criteria analysis and detailed impact analysis were carried out to assess the sustainability of four remedial alternatives for metal-contaminated soil/sediment at former timber treatment sites and harbour sediment sites of different scales. Sustainability was evaluated in the aspects of human health and safety, environment, stakeholder concern, and land use, under four different scenarios with varying weighting factors. Monte Carlo simulation was performed to reveal the likelihood of accomplishing sustainable remediation with different treatment options at different sites. The results showed that in-situ remedial technologies were more sustainable than ex-situ ones; in-situ containment demonstrated both the most sustainable result and the highest probability of achieving sustainability amongst the four remedial alternatives in this study, reflecting the lesser extent of off-site and on-site impacts. Concerns associated with ex-situ options were adverse impacts tied to all four aspects and caused by excavation, extraction, and off-site disposal. The results of this study suggest the importance of considering the uncertainties arising from the remedial options (i.e., stochastic analysis) in addition to the overall sustainability scores (i.e., deterministic analysis). The developed framework and model simulation could serve as an assessment of the sustainability likelihood of remedial options to ensure sustainable remediation of contaminated sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
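
    The deterministic-versus-stochastic distinction the study draws can be sketched as follows: a weighted multi-criteria score gives a deterministic ranking, and Monte Carlo sampling of score uncertainty gives the likelihood that the ranking holds. All weights, mean scores and spreads below are hypothetical, not the study's values.

```python
import random

def p_more_sustainable(mean_a, mean_b, weights, sd=0.5, n=5000, seed=9):
    """Monte Carlo estimate of the probability that option A's weighted
    multi-criteria score exceeds option B's under Gaussian score uncertainty."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        score_a = sum(w * (m + rng.gauss(0.0, sd)) for w, m in zip(weights, mean_a))
        score_b = sum(w * (m + rng.gauss(0.0, sd)) for w, m in zip(weights, mean_b))
        wins += score_a > score_b
    return wins / n

# Four aspects: health/safety, environment, stakeholder concern, land use
weights = [0.3, 0.3, 0.2, 0.2]
in_situ = [4.0, 4.0, 3.5, 3.0]   # hypothetical mean criterion scores
ex_situ = [3.0, 2.5, 3.0, 3.0]
p = p_more_sustainable(in_situ, ex_situ, weights)
```

    Even when the deterministic scores clearly favour one option, the Monte Carlo probability quantifies how robust that preference is to scoring uncertainty, which is the study's point about reporting likelihoods alongside scores.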

  10. Methods of linking mothers and infants using health plan data for studies of pregnancy outcomes.

    PubMed

    Johnson, Karin E; Beaton, Sarah J; Andrade, Susan E; Cheetham, T Craig; Scott, Pamela E; Hammad, Tarek A; Dashevsky, Inna; Cooper, William O; Davis, Robert L; Pawloski, Pamala A; Raebel, Marsha A; Smith, David H; Toh, Sengwee; Li, De-Kun; Haffenreffer, Katherine; Dublin, Sascha

    2013-07-01

    Research on medication safety in pregnancy often utilizes health plan and birth certificate records. This study discusses methods used to link mothers with infants, a crucial step in such research. We describe how eight sites participating in the Medication Exposure in Pregnancy Risk Evaluation Program created linkages between deliveries, infants and birth certificates for the 2001-2007 birth cohorts. We describe linkage rates across sites, and for two sites, we compare the characteristics of populations linked using different methods. Of 299,260 deliveries, 256,563 (86%; range by site, 74-99%) could be linked to infants using a deterministic algorithm. At two sites, using birth certificate data to augment mother-infant linkage increased the representation of mothers who were Hispanic or non-White, younger, Medicaid recipients, or had low educational level. A total of 236,460 (92%; range by site, 82-100%) deliveries could be linked to a birth certificate. Tailored approaches enabled linking most deliveries to infants and to birth certificates, even when data systems differed. The methods used may affect the composition of the population identified. Linkages established with such methods can support sound pharmacoepidemiology studies of maternal drug exposure outside the context of a formal registry. Copyright © 2013 John Wiley & Sons, Ltd.
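
    A deterministic linkage step of the kind described can be sketched as follows; the field names, the family-ID key, and the 30-day window are hypothetical illustrations, not the actual MEPREP algorithm.

```python
def link_deliveries_to_infants(deliveries, infants, max_days=30):
    """Deterministic mother-infant linkage sketch: match on a shared family ID,
    requiring the infant's birth date to fall within max_days of the delivery
    date, and accept only unambiguous (single-candidate) matches."""
    links = []
    for d in deliveries:
        candidates = [i for i in infants
                      if i["family_id"] == d["family_id"]
                      and abs(i["birth_day"] - d["delivery_day"]) <= max_days]
        if len(candidates) == 1:
            links.append((d["id"], candidates[0]["id"]))
    return links

deliveries = [{"id": "D1", "family_id": "F10", "delivery_day": 100},
              {"id": "D2", "family_id": "F20", "delivery_day": 250}]
infants = [{"id": "I1", "family_id": "F10", "birth_day": 101},
           {"id": "I2", "family_id": "F30", "birth_day": 251}]
links = link_deliveries_to_infants(deliveries, infants)
```

    Deliveries with no unambiguous candidate (here D2) remain unlinked, which is where the study's birth-certificate augmentation recovered additional, demographically distinct mother-infant pairs.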

  11. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    PubMed

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.

  12. Loss estimation in southeast Korea from a scenario earthquake using the deterministic method in HAZUS

    NASA Astrophysics Data System (ADS)

    Kang, S.; Kim, K.; Suk, B.; Yoo, H.

    2007-12-01

    A strong ground motion attenuation relationship represents the overall trend of ground shaking at sites as a function of distance from the source, geology, local soil conditions, and other factors. It is necessary to develop an attenuation relationship with careful consideration of the characteristics of the target area for reliable seismic hazard/risk assessments. In this study, observed ground motions from the January 2007 magnitude 4.9 Odaesan earthquake and from events occurring in the Gyeongsang provinces are compared with previously proposed ground attenuation relationships for the Korean Peninsula to select the most appropriate one. Meanwhile, several strong ground motion attenuation relationships designed for the Western United States and the Central and Eastern United States are provided in HAZUS. The relationship selected from those for the Korean Peninsula has been compared with the attenuation relationships available in HAZUS, and on this basis the attenuation relation for the Western United States proposed by Sadigh et al. (1997) for Site Class B was selected for this study. The reliability of the assessment is improved by using an appropriate attenuation relation. It has been used for the earthquake loss estimation of the Gyeongju area in southeast Korea using the deterministic method in HAZUS with a scenario earthquake (M=6.7). Our preliminary estimates show 15.6% damage to houses, shelter needs for about three thousand residents, and 75 life losses in the study area for the scenario event occurring at 2 A.M. Approximately 96% of hospitals will be in normal operation within 24 hours of the proposed event. Losses related to houses will be more than 114 million US dollars. Application of the improved methodology for loss estimation in Korea will help decision makers plan disaster responses and hazard mitigation.

  13. Inverse kinematic problem for a random gradient medium in geometric optics approximation

    NASA Astrophysics Data System (ADS)

    Petersen, N. V.

    1990-03-01

    Scattering at random inhomogeneities in a gradient medium results in systematic deviations of the rays and travel times of refracted body waves from those corresponding to the deterministic velocity component. The character of the difference depends on the parameters of the deterministic and random velocity components. However, at great distances from the source, independently of the velocity parameters (weakly or strongly inhomogeneous medium), the most probable depth of the ray turning point is smaller than that corresponding to the deterministic velocity component, and the most probable travel times are also lower. The relative uncertainty in the deterministic velocity component, derived from the mean travel times using methods developed for laterally homogeneous media (for instance, the Herglotz-Wiechert method), is systematic in character, but does not exceed the contrast of the velocity inhomogeneities in magnitude. The gradient of the deterministic velocity component has a significant effect on the travel-time fluctuations. The variance at great distances from the source is mainly controlled by shallow inhomogeneities. The travel-time fluctuations are studied only for weakly inhomogeneous media.

  14. Quasi-Static Probabilistic Structural Analyses Process and Criteria

    NASA Technical Reports Server (NTRS)

    Goldberg, B.; Verderaime, V.

    1999-01-01

    Current deterministic structural methods are easily applied to substructures and components, and analysts have built great design insight and confidence in them over the years. However, deterministic methods cannot support systems risk analyses, and it was recently reported that deterministic treatment of statistical data is inconsistent with error propagation laws, which can result in unevenly conservative structural predictions. Assuming normal distributions and using statistical data formats throughout prevailing deterministic stress processes leads to a safety factor in statistical format, which, integrated into the safety index, provides a safety factor and first-order reliability relationship. The embedded safety factor in the safety index expression allows a historically based risk to be determined and verified over a variety of quasi-static metallic substructures, consistent with the traditional safety factor methods and NASA Std. 5001 criteria.

  15. Effect of Uncertainty on Deterministic Runway Scheduling

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam; Malik, Waqar; Jung, Yoon C.

    2012-01-01

    Active runway scheduling involves scheduling departures for takeoffs and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-serve scheme. In particular, the sequence from a deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison considers both system performance (throughput and system delay) and predictability, at varying levels of congestion. Uncertainty is modeled in two ways: as equal uncertainty in runway availability for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
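
    The first-come-first-serve baseline can be sketched as follows, assuming a single runway and one uniform separation time, which is a simplification of the paper's separation criteria.

```python
def fcfs_schedule(ready_times, separation):
    """First-come-first-serve runway schedule: serve aircraft in order of
    ready time, enforcing a minimum separation between consecutive operations.
    Returns the assigned runway times and the total delay."""
    order = sorted(range(len(ready_times)), key=lambda i: ready_times[i])
    times = {}
    prev = None
    for i in order:
        if prev is None:
            times[i] = ready_times[i]
        else:
            times[i] = max(ready_times[i], times[prev] + separation)
        prev = i
    total_delay = sum(times[i] - ready_times[i] for i in times)
    return times, total_delay

# Four aircraft ready at t = 0, 10, 12, 90 (seconds), 60 s separation
times, delay = fcfs_schedule([0, 10, 12, 90], separation=60)
```

    A deterministic scheduler may instead resequence the closely bunched aircraft to reduce total delay; the paper's experiment freezes that optimized sequence and only shifts times when uncertainty perturbs the ready times.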

  16. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    PubMed

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  17. Efficient room-temperature source of polarized single photons

    DOEpatents

    Lukishova, Svetlana G.; Boyd, Robert W.; Stroud, Carlos R.

    2007-08-07

    An efficient technique for producing deterministically polarized single photons uses liquid-crystal hosts of either monomeric or oligomeric/polymeric form to preferentially align the single emitters for maximum excitation efficiency. Deterministic molecular alignment also provides deterministically polarized output photons; using planar-aligned cholesteric liquid crystal hosts as 1-D photonic-band-gap microcavities tunable to the emitter fluorescence band to increase source efficiency, using liquid crystal technology to prevent emitter bleaching. Emitters comprise soluble dyes, inorganic nanocrystals or trivalent rare-earth chelates.

  18. Site-specific Seismic Hazard Assessment to Establish Elastic Design Properties for Oman Museum-Across Ages, Manah, Sultanate of Oman

    NASA Astrophysics Data System (ADS)

    El Hussain, I. W.

    2017-12-01

    The current study provides a site-specific deterministic seismic hazard assessment (DSHA) at the site selected for the Oman Museum-Across Ages in the Manah area, as part of a comprehensive geotechnical and seismological plan to design the facilities accordingly. The DSHA first defines the seismic sources that might influence the site and assesses the maximum possible earthquake magnitude for each of them. By assuming each of these maximum earthquakes to occur at the location placing it closest to the site, the ground motion is predicted using empirical ground motion prediction equations. The local site effects are assessed by determining the fundamental frequency of the soft soil using the HVSR technique and by estimating amplification spectra from the soil characteristics (mainly shear-wave velocity). Shear-wave velocity has been evaluated using the MASW technique. The maximum amplification value of 2.1 at a spectral period of 0.06 sec is observed at the ground surface, while the largest amplification value at the top of the conglomerate layer (at 5 m depth) is 1.6 at a spectral period of 0.04 sec. The maximum median 5% damped peak ground acceleration is found to be 0.263 g at a spectral period of 0.1 sec. Keywords: DSHA; Site Effects; HVSR; MASW; PGA; Spectral Period
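
    The attenuation step of a DSHA can be sketched with a generic functional form for a ground motion prediction equation; the coefficients below are illustrative placeholders only, not taken from any published GMPE or from this study.

```python
import math

def gmpe_pga(magnitude, distance_km, a=-3.5, b=0.9, c=1.2, h=6.0):
    """Generic attenuation-relationship form ln(PGA) = a + b*M - c*ln(R),
    with R = sqrt(D^2 + h^2) an effective distance including depth h.
    All coefficients are hypothetical placeholders."""
    r = math.sqrt(distance_km ** 2 + h ** 2)
    return math.exp(a + b * magnitude - c * math.log(r))

# Same maximum-magnitude event evaluated at two source-to-site distances
pga_near = gmpe_pga(6.7, 10.0)   # nominal units of g; values are illustrative
pga_far = gmpe_pga(6.7, 50.0)
```

    In a DSHA each source's maximum event is placed at its closest distance to the site, the GMPE is evaluated there, and the surface motion is then scaled by the site amplification spectrum derived from the shear-wave velocity profile.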

  19. SITE CHARACTERIZATION LIBRARY VERSION 3.0

    EPA Science Inventory

    The Site Characterization Library is a CD that provides a centralized, field-portable source for site characterization information. Version 3 of the Site Characterization Library contains additional (from earlier versions) electronic documents and computer programs related to th...

  20. A TTC upgrade proposal using bidirectional 10G-PON FTTH technology

    NASA Astrophysics Data System (ADS)

    Kolotouros, D. M.; Baron, S.; Soos, C.; Vasey, F.

    2015-04-01

    A new generation FPGA-based Timing-Trigger and Control (TTC) system based on emerging Passive Optical Network (PON) technology is being proposed to replace the existing off-detector TTC system used by the LHC experiments. High split ratio, dynamic software partitioning, low and deterministic latency, as well as low jitter are required. Exploiting the latest available technologies allows delivering higher capacity together with bidirectionality, a feature absent from the legacy TTC system. This article focuses on the features and capabilities of the latest TTC-PON prototype based on 10G-PON FTTH components along with some metrics characterizing its performance.

  1. Asymptotic Behaviour of Ground States for Mixtures of Ferromagnetic and Antiferromagnetic Interactions in a Dilute Regime

    NASA Astrophysics Data System (ADS)

    Braides, Andrea; Causin, Andrea; Piatnitski, Andrey; Solci, Margherita

    2018-06-01

    We consider randomly distributed mixtures of bonds of ferromagnetic and antiferromagnetic type in a two-dimensional square lattice with probability 1-p and p, respectively, according to an i.i.d. random variable. We study minimizers of the corresponding nearest-neighbour spin energy on large domains in Z^2. We prove that there exists p_0 such that for p ≤ p_0 such minimizers are characterized by a majority phase; i.e., they identically take the value 1 or -1 except for small disconnected sets. A deterministic analogue is also proved.

  2. Density waves in granular flow

    NASA Astrophysics Data System (ADS)

    Herrmann, H. J.; Flekkøy, E.; Nagel, K.; Peng, G.; Ristow, G.

    Ample experimental evidence has shown the existence of spontaneous density waves in granular material flowing through pipes or hoppers. Using Molecular Dynamics Simulations we show that several types of waves exist and find that these density fluctuations follow a 1/f spectrum. We compare this behaviour to deterministic one-dimensional traffic models. If positions and velocities are continuous variables the model shows self-organized criticality driven by the slowest car. We also present Lattice Gas and Boltzmann Lattice Models which reproduce the experimentally observed effects. Density waves are spontaneously generated when the viscosity has a nonlinear dependence on density which characterizes granular flow.
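
    The deterministic one-dimensional traffic models referred to can be illustrated by the elementary cellular automaton rule 184 on a ring: a car advances exactly when the cell ahead is free, so jams propagate backwards as density waves. The initial configuration below is illustrative; at density one half the system relaxes to the alternating free-flow pattern.

```python
def step_rule184(cells):
    """One update of the deterministic traffic CA (rule 184 on a ring):
    a car (1) advances iff the cell ahead is empty (0)."""
    n = len(cells)
    return [1 if (cells[i] == 1 and cells[(i + 1) % n] == 1)   # blocked car stays
            or (cells[i] == 0 and cells[i - 1] == 1)           # car arrives from behind
            else 0
            for i in range(n)]

road = [1, 1, 1, 0, 0, 0, 1, 0]   # a small jam of three cars plus a lone car
for _ in range(8):
    road = step_rule184(road)
```

    The number of cars is conserved at every step, and after a few updates the initial jam dissolves into the alternating pattern in which every car moves every step, the deterministic analogue of free flow.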

  3. Limit Theorems for Dispersing Billiards with Cusps

    NASA Astrophysics Data System (ADS)

    Bálint, P.; Chernov, N.; Dolgopyat, D.

    2011-12-01

    Dispersing billiards with cusps are deterministic dynamical systems with a mild degree of chaos, exhibiting "intermittent" behavior that alternates between regular and chaotic patterns. Their statistical properties are therefore weak and delicate. They are characterized by a slow (power-law) decay of correlations, and as a result the classical central limit theorem fails. We prove that a non-classical central limit theorem holds, with a scaling factor of √(n log n) replacing the standard √n. We also derive the respective Weak Invariance Principle, and we identify the class of observables for which the classical CLT still holds.
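In standard notation, the non-classical limit theorem described above takes the following form (a sketch reconstructed from the abstract; the variance σ_f² and the genericity condition on the observable f are as in the paper):

```latex
% Birkhoff sums of a generic zero-mean observable f under the billiard map F:
S_n = \sum_{k=0}^{n-1} f \circ F^k,
\qquad
\frac{S_n}{\sqrt{n \log n}} \;\xrightarrow{d}\; \mathcal{N}\!\left(0, \sigma_f^2\right),
% with the normalization \sqrt{n \log n} replacing the classical \sqrt{n}.
```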

  4. Asymptotic Behaviour of Ground States for Mixtures of Ferromagnetic and Antiferromagnetic Interactions in a Dilute Regime

    NASA Astrophysics Data System (ADS)

    Braides, Andrea; Causin, Andrea; Piatnitski, Andrey; Solci, Margherita

    2018-04-01

    We consider randomly distributed mixtures of bonds of ferromagnetic and antiferromagnetic type in a two-dimensional square lattice with probability 1-p and p, respectively, according to an i.i.d. random variable. We study minimizers of the corresponding nearest-neighbour spin energy on large domains in Z^2 . We prove that there exists p_0 such that for p≤p_0 such minimizers are characterized by a majority phase; i.e., they take identically the value 1 or - 1 except for small disconnected sets. A deterministic analogue is also proved.

  5. Interesting examples of supervised continuous variable systems

    NASA Technical Reports Server (NTRS)

    Chase, Christopher; Serrano, Joe; Ramadge, Peter

    1990-01-01

    The authors analyze two simple deterministic flow models for multiple buffer servers which are examples of the supervision of continuous variable systems by a discrete controller. These systems exhibit what may be regarded as the two extremes of complexity of the closed loop behavior: one is eventually periodic, the other is chaotic. The first example exhibits chaotic behavior that could be characterized statistically. The dual system, the switched server system, exhibits very predictable behavior, which is modeled by a finite state automaton. This research has application to multimodal discrete time systems where the controller can choose from a set of transition maps to implement.

  6. 3D calcite heterostructures for dynamic and deformable mineralized matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Jaeseok; Wang, Yucai; Jiang, Yuanwen

    Scales are rooted in soft tissues, and are regenerated by specialized cells. The realization of dynamic synthetic analogues with inorganic materials has been a significant challenge, because the abiological regeneration sites that could yield deterministic growth behavior are hard to form. Here we overcome this fundamental hurdle by constructing a mutable and deformable array of three-dimensional calcite heterostructures that are partially locked in silicone. Individual calcite crystals exhibit asymmetrical dumbbell shapes and are prepared by a parallel tectonic approach under ambient conditions. Furthermore, the silicone matrix immobilizes the epitaxial nucleation sites through self-templated cavities, which enables symmetry breaking in reaction dynamics and scalable manipulation of the mineral ensembles. With this platform, we devise several mineral-enabled dynamic surfaces and interfaces. For example, we show that the induced growth of minerals yields localized inorganic adhesion for biological tissue and reversible focal encapsulation for sensitive components in flexible electronics.

  7. Simulation of rockfalls triggered by earthquakes

    USGS Publications Warehouse

    Kobayashi, Y.; Harp, E.L.; Kagawa, T.

    1990-01-01

    A computer program to simulate the downslope movement of boulders in rolling or bouncing modes has been developed and applied to actual rockfalls triggered by the Mammoth Lakes, California, earthquake sequence in 1980 and the Central Idaho earthquake in 1983. In order to reproduce a movement mode where bouncing predominated, we introduced an artificial unevenness to the slope surface by adding a small random number to the interpolated value of the mid-points between the adjacent surveyed points. Three hundred simulations were computed for each site by changing the random number series, which determined distances and bouncing intervals. The movement of the boulders was, in general, rather erratic, depending on the random numbers employed, and the results should be regarded as stochastic rather than deterministic. The closest agreement between calculated and actual movements was obtained at the site with the most detailed and accurate topographic measurements. © 1990 Springer-Verlag.
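The surface-roughening device described above (midpoint interpolation plus a small random offset, repeated over many random number series) can be sketched as follows. The idealized planar profile and the perturbation amplitude are illustrative assumptions; only the 300-run ensemble size comes from the abstract:

```python
import numpy as np

def perturbed_profile(xs, zs, amplitude=0.05, rng=None):
    """Insert midpoints between surveyed points and add a small random
    offset to each, mimicking the artificial surface unevenness described
    in the abstract (amplitude is a hypothetical parameter)."""
    if rng is None:
        rng = np.random.default_rng()
    x_mid = (xs[:-1] + xs[1:]) / 2
    z_mid = (zs[:-1] + zs[1:]) / 2 + rng.uniform(-amplitude, amplitude, len(x_mid))
    # interleave surveyed points and perturbed midpoints
    x_new = np.empty(len(xs) + len(x_mid))
    z_new = np.empty_like(x_new)
    x_new[0::2], x_new[1::2] = xs, x_mid
    z_new[0::2], z_new[1::2] = zs, z_mid
    return x_new, z_new

# toy ensemble: 300 realizations, each with a different random number series
xs = np.linspace(0.0, 100.0, 21)      # surveyed distances along the slope (m)
zs = 50.0 - 0.5 * xs                  # idealized planar slope elevations (m)
rng = np.random.default_rng(0)
profiles = [perturbed_profile(xs, zs, rng=rng) for _ in range(300)]
```

A boulder trajectory would then be integrated over each realization, and runout statistics taken over the ensemble.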

  8. 3D calcite heterostructures for dynamic and deformable mineralized matrices

    DOE PAGES

    Yi, Jaeseok; Wang, Yucai; Jiang, Yuanwen; ...

    2017-09-11

    Scales are rooted in soft tissues, and are regenerated by specialized cells. The realization of dynamic synthetic analogues with inorganic materials has been a significant challenge, because the abiological regeneration sites that could yield deterministic growth behavior are hard to form. Here we overcome this fundamental hurdle by constructing a mutable and deformable array of three-dimensional calcite heterostructures that are partially locked in silicone. Individual calcite crystals exhibit asymmetrical dumbbell shapes and are prepared by a parallel tectonic approach under ambient conditions. Furthermore, the silicone matrix immobilizes the epitaxial nucleation sites through self-templated cavities, which enables symmetry breaking in reaction dynamics and scalable manipulation of the mineral ensembles. With this platform, we devise several mineral-enabled dynamic surfaces and interfaces. For example, we show that the induced growth of minerals yields localized inorganic adhesion for biological tissue and reversible focal encapsulation for sensitive components in flexible electronics.

  9. Scaling theory for the quasideterministic limit of continuous bifurcations.

    PubMed

    Kessler, David A; Shnerb, Nadav M

    2012-05-01

    Deterministic rate equations are widely used in the study of stochastic interacting-particle systems. This approach assumes that the inherent noise, associated with the discreteness of the elementary constituents, may be neglected when the number of particles N is large. Accordingly, it fails close to the extinction transition, when the amplitude of stochastic fluctuations is comparable with the size of the population. Here we present a general scaling theory of the transition regime for spatially extended systems. We demonstrate this through a detailed study of two fundamental models for out-of-equilibrium phase transitions: the Susceptible-Infected-Susceptible (SIS) model, which belongs to the directed percolation equivalence class, and the Susceptible-Infected-Recovered (SIR) model, belonging to the dynamic percolation class. Implementing the Ginzburg criterion, we show that the width of the fluctuation-dominated region scales like N^{-κ}, where N is the number of individuals per site and κ=2/(d_{u}-d), with d_{u} the upper critical dimension. Other exponents that control the approach to the deterministic limit are shown to be calculable once κ is known. The theory is extended to include the corrections to the front velocity above the transition. It is supported by the results of extensive numerical simulations for systems of various dimensionalities.
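The quoted width scaling can be evaluated directly. The upper critical dimensions used below (d_u = 4 for the directed percolation class, d_u = 6 for dynamic percolation) are the standard values for these universality classes, not figures taken from this abstract:

```python
def fluctuation_width_exponent(d, d_u):
    """kappa = 2 / (d_u - d): the exponent governing how the width of the
    fluctuation-dominated region shrinks with N (per the abstract)."""
    return 2.0 / (d_u - d)

# directed percolation class (SIS): upper critical dimension d_u = 4
# dynamic percolation class (SIR):  upper critical dimension d_u = 6
kappa_sis_1d = fluctuation_width_exponent(1, 4)   # = 2/3
kappa_sir_1d = fluctuation_width_exponent(1, 6)   # = 2/5

def region_width(N, kappa):
    """Width of the fluctuation-dominated region scales like N**-kappa."""
    return N ** (-kappa)
```

So, for a one-dimensional SIS system, doubling the per-site population N shrinks the fluctuation-dominated window by a factor of 2^(2/3).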

  10. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E [Greenback, TN; Guillorn, Michael A [Ithaca, NY; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2011-08-23

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus includes a substrate and a nanoreplicant structure coupled to a surface of the substrate.

  11. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology

    PubMed Central

    Gao, Fei; Li, Ye; Novak, Igor L.; Slepchenko, Boris M.

    2016-01-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium ‘sparks’ as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell. PMID:27959915
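A minimal operator-splitting sketch of such a hybrid scheme is shown below: a deterministic finite-difference diffusion step for a continuous field alternates with a stochastic per-compartment reaction step. This is an illustrative toy coupling with hypothetical parameters, not the Smoldyn-based algorithm of the paper:

```python
import numpy as np

def hybrid_step(u, n, D, k_on, dx, dt, rng):
    """One operator-splitting step of a toy hybrid scheme:
    u -- continuous field, advanced by a deterministic diffusion PDE
    n -- discrete particle counts per compartment, advanced stochastically.
    Particles convert to the continuous species at rate k_on (assumed)."""
    # deterministic half: explicit finite-difference diffusion (periodic)
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u = u + dt * D * lap
    # stochastic half: each particle reacts with probability k_on*dt
    reacted = rng.binomial(n, min(1.0, k_on * dt))
    n = n - reacted
    u = u + reacted   # mass handed from stochastic to deterministic side
    return u, n

rng = np.random.default_rng(1)
u = np.zeros(64)            # continuous species
n = np.full(64, 100)        # 100 discrete particles per compartment
for _ in range(200):
    u, n = hybrid_step(u, n, D=1.0, k_on=0.05, dx=1.0, dt=0.01, rng=rng)
# total mass u.sum() + n.sum() is conserved by construction
```

The explicit diffusion step is stable here since D·dt/dx² = 0.01 ≤ 0.5; a production solver would use the PDE integrator's own stability control.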

  12. Stochasticity and determinism in models of hematopoiesis.

    PubMed

    Kimmel, Marek

    2014-01-01

    This chapter represents a novel view of modeling in hematopoiesis, synthesizing both deterministic and stochastic approaches. Whereas the stochastic models work in situations where chance dominates, for example when the number of cells is small, or under random mutations, the deterministic models are more important for large-scale, normal hematopoiesis. New types of models are on the horizon. These models attempt to account for distributed environments such as hematopoietic niches and their impact on dynamics. Mixed effects of such structures and chance events are largely unknown and constitute both a challenge and promise for modeling. Our discussion is presented under the separate headings of deterministic and stochastic modeling; however, the connections between both are frequently mentioned. Four case studies are included to elucidate important examples. We also include a primer of deterministic and stochastic dynamics for the reader's use.

  13. Progressively expanded neural network for automatic material identification in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Paheding, Sidike

    The science of hyperspectral remote sensing focuses on the exploitation of the spectral signatures of various materials to enhance capabilities including object detection, recognition, and material characterization. Hyperspectral imagery (HSI) has been extensively used for object detection and identification applications since it provides plenty of spectral information to uniquely identify materials by their reflectance spectra. HSI-based object detection algorithms can generally be classified into stochastic and deterministic approaches. Deterministic approaches are comparatively simple to apply since they are usually based on direct spectral similarity such as spectral angles or spectral correlation. In contrast, stochastic algorithms require statistical modeling and estimation for the target and non-target classes. Over the decades, many single-class object detection methods have been proposed in the literature; however, deterministic multiclass object detection in HSI has not been explored. In this work, we propose a deterministic multiclass object detection scheme, named class-associative spectral fringe-adjusted joint transform correlation. The human brain is capable of simultaneously processing high volumes of multi-modal data received every second of the day. In contrast, a machine sees input data simply as random binary numbers. Although machines are computationally efficient, they are inferior when it comes to data abstraction and interpretation. Thus, mimicking the learning strength of the human brain has become a current trend in artificial intelligence. In this work, we present a biologically inspired neural network, named progressively expanded neural network (PEN Net), based on a nonlinear transformation of input neurons to a feature space for better pattern differentiation. In PEN Net, discrete fixed excitations are disassembled and scattered in the feature space as a nonlinear line.
    Each disassembled element on the line corresponds to a pattern with similar features. Unlike conventional neural networks, where hidden neurons need to be iteratively adjusted to achieve better accuracy, our proposed PEN Net does not require hidden-neuron tuning, which yields better computational efficiency, and it has also shown superior performance in HSI classification tasks compared to the state of the art. Spectral-spatial HSI classification frameworks have shown greater strength than spectral-only methods. In our last proposed technique, PEN Net is combined with multiscale spatial features (i.e., multiscale complete local binary pattern) to perform spectral-spatial classification of HSI. Several experiments demonstrate the excellent performance of our proposed technique compared to more recently developed approaches.
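The direct spectral-similarity idea underlying deterministic detection can be illustrated with the standard spectral angle measure. The two-band library spectra below are hypothetical, and this sketch is not the authors' class-associative joint transform correlator:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Angle between a pixel spectrum and a reference material spectrum;
    smaller angle means more similar (the standard spectral angle measure)."""
    cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(pixel, library):
    """Deterministic classification: pick the library material whose
    reference spectrum makes the smallest angle with the pixel."""
    angles = {name: spectral_angle(pixel, ref) for name, ref in library.items()}
    return min(angles, key=angles.get)

library = {                         # hypothetical two-band reflectance spectra
    "vegetation": np.array([0.1, 0.6]),
    "soil":       np.array([0.4, 0.5]),
}
material = classify(np.array([0.12, 0.55]), library)
```

Real HSI pixels have hundreds of bands, but the same angle computation applies band-for-band.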

  14. Developing Stochastic Models as Inputs for High-Frequency Ground Motion Simulations

    NASA Astrophysics Data System (ADS)

    Savran, William Harvey

    High-frequency (10 Hz) deterministic ground motion simulations are challenged by our limited understanding of the small-scale structure of the earth's crust and of the rupture process during an earthquake. We will likely never obtain deterministic models that can accurately describe these processes down to the meter-scale lengths required for broadband wave propagation. Instead, we can attempt to explain the behavior, in a statistical sense, by including stochastic models defined by correlations observed in the natural earth and through physics-based simulations of the earthquake rupture process. Toward this goal, we develop stochastic models to address both of the primary considerations for deterministic ground motion simulations: namely, the description of the material properties in the crust, and broadband earthquake source descriptions. Using borehole sonic log data recorded in the Los Angeles basin, we estimate the spatial correlation structure of the small-scale fluctuations in P-wave velocities by determining the best-fitting parameters of a von Karman correlation function. We find that Hurst exponents, nu, between 0.0-0.2, vertical correlation lengths, az, of 15-150 m, and a standard deviation, sigma, of about 5% characterize the variability in the borehole data. Using these parameters, we generated a stochastic model of velocity and density perturbations, combined it with leading seismic velocity models, and performed a validation exercise for the 2008 Chino Hills, CA, earthquake using heterogeneous media. We find that models of velocity and density perturbations can have significant effects on the wavefield at frequencies as low as 0.3 Hz, with ensemble median values of various ground motion metrics varying up to +/-50%, at certain stations, compared to those computed solely from the CVM. Finally, we develop a kinematic rupture generator based on dynamic rupture simulations on geometrically complex faults.
    We analyze 100 dynamic rupture simulations on strike-slip faults ranging from Mw 6.4-7.2. We find that our dynamic simulations follow empirical scaling relationships for inter-plate strike-slip events, and provide source spectra comparable with an omega-squared model. Our rupture generator reproduces GMPE medians and intra-event standard deviations of spectral accelerations for an ensemble of 10 Hz fully deterministic ground motion simulations, as compared to NGA-West2 GMPE relationships up to 0.2 seconds.
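The von Karman correlation function being fitted can be written down directly. The parameter defaults below sit inside the ranges quoted above (nu ≈ 0.0-0.2, a_z ≈ 15-150 m, sigma ≈ 5%); SciPy supplies the modified Bessel function of the second kind:

```python
import numpy as np
from scipy.special import kv, gamma

def von_karman(r, sigma=0.05, a=50.0, nu=0.1):
    """von Karman autocorrelation of velocity fluctuations:
    C(r) = sigma^2 * (r/a)^nu * K_nu(r/a) / (2^(nu-1) * Gamma(nu)),
    normalized so that C(0) = sigma^2. Parameter defaults are taken from
    the ranges quoted in the abstract."""
    r = np.asarray(r, dtype=float)
    x = np.maximum(r / a, 1e-12)   # avoid the r = 0 singularity of K_nu
    return sigma**2 * x**nu * kv(nu, x) / (2**(nu - 1) * gamma(nu))

r = np.linspace(1.0, 300.0, 5)     # lags in meters
c = von_karman(r)
# correlation decays monotonically with lag for these parameters
```

Fitting then amounts to adjusting (sigma, a, nu) so that this curve matches the empirical autocorrelation of the borehole log residuals.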

  15. Characterization of the interplay between DNA repair and CRISPR/Cas9-induced DNA lesions at an endogenous locus

    PubMed Central

    Bothmer, Anne; Phadke, Tanushree; Barrera, Luis A.; Margulies, Carrie M; Lee, Christina S.; Buquicchio, Frank; Moss, Sean; Abdulkerim, Hayat S.; Selleck, William; Jayaram, Hariharan; Myer, Vic E.; Cotta-Ramusino, Cecilia

    2017-01-01

    The CRISPR–Cas9 system provides a versatile toolkit for genome engineering that can introduce various DNA lesions at specific genomic locations. However, a better understanding of the nature of these lesions and the repair pathways engaged is critical to realizing the full potential of this technology. Here we characterize the different lesions arising from each Cas9 variant and the resulting repair pathway engagement. We demonstrate that the presence and polarity of the overhang structure is a critical determinant of double-strand break repair pathway choice. Similarly, single nicks deriving from different Cas9 variants differentially activate repair: D10A but not N863A-induced nicks are repaired by homologous recombination. Finally, we demonstrate that homologous recombination is required for repairing lesions using double-stranded, but not single-stranded DNA as a template. This detailed characterization of repair pathway choice in response to CRISPR–Cas9 enables a more deterministic approach for designing research and therapeutic genome engineering strategies. PMID:28067217

  16. Three-dimensional silicon inverse photonic quasicrystals for infrared wavelengths.

    PubMed

    Ledermann, Alexandra; Cademartiri, Ludovico; Hermatschweiler, Martin; Toninelli, Costanza; Ozin, Geoffrey A; Wiersma, Diederik S; Wegener, Martin; von Freymann, Georg

    2006-12-01

    Quasicrystals are a class of lattices characterized by a lack of translational symmetry. Nevertheless, the points of the lattice are deterministically arranged, obeying rotational symmetry. Thus, we expect properties that are different from both crystals and glasses. Indeed, naturally occurring electronic quasicrystals (for example, AlPdMn metal alloys) show peculiar electronic, vibrational and physico-chemical properties. Regarding artificial quasicrystals for electromagnetic waves, three-dimensional (3D) structures have recently been realized at GHz frequencies and 2D structures have been reported for the near-infrared region. Here, we report on the first fabrication and characterization of 3D quasicrystals for infrared frequencies. Using direct laser writing combined with a silicon inversion procedure, we achieve high-quality silicon inverse icosahedral structures. Both polymeric and silicon quasicrystals are characterized by means of electron microscopy and visible-light Laue diffraction. The diffraction patterns of structures with a local five-fold real-space symmetry axis reveal a ten-fold symmetry as required by theory for 3D structures.

  17. Comparing Newmark

    NASA Astrophysics Data System (ADS)

    Rodríguez-Peces, M. J.; García-Mayordomo, J.; Azañón-Hernández, J. M.; Jabaloy-Sánchez, A.

    2009-04-01

    The Lorca Basin (Eastern Betic Cordillera, SE Spain) is one of the most seismically active regions of Spain. In this area there are well-known cases of earthquake-induced slope instabilities associated with specific earthquakes (e.g., Bullas 2002, La Paca 2005). Furthermore, this area is characterized by moderate-magnitude seismicity which mainly produces rock-falls and avalanches. In this work we present the results of our research at regional and site scales. For the regional scale, we have used a geographic information system (GIS) to develop an implementation of Newmark's sliding rigid block method. In particular, we have proposed a new variation of Newmark's method to consider soil and topographic amplification effects. Subsequently, we produced "Newmark displacement" maps for both probabilistic and deterministic seismic scenarios in the Lorca Basin. Probabilistic seismic scenarios consider three hazard maps in terms of peak ground acceleration (PGA) on rock corresponding to the 475-, 975- and 2475-year return periods (exceedance probability of 10, 5 and 2% in 50 years, respectively) in the Murcia Region. Deterministic seismic scenarios consider the occurrence of the most probable earthquake for a 475-year return period (Mw=5.0) at every location, or else a complete rupture of the Lorca-Totana (Mw=6.7) or Puerto Lumbreras-Lorca (Mw=6.8) segments of the Alhama de Murcia Fault. The Newmark displacement maps allowed us to identify areas with the highest potential seismic hazard, and also to locate areas for future particular studies. We have found that rock-falls produced during the last earthquakes in the Lorca Basin (e.g., Bullas 2002, La Paca 2005) match very well with areas with values of Newmark displacement lower than 2 cm in all the seismic scenarios considered. Therefore, it seems that low values of Newmark displacement are very likely associated with rock-falls. To support this hypothesis we have applied the Newmark method at a site scale.
To do this, we have selected La Paca rock-fall which was generated during La Paca 2005 earthquake (mbLg=4.7, IEMS=VI-VII). We have used a terrestrial laser scanner in order to obtain a high resolution digital elevation model of La Paca rock-fall area. Moreover, we have performed a back-analysis based on field data to estimate the static safety factor previous to the earthquake and the critical acceleration. Furthermore, we have selected a representative strong ground motion record for La Paca earthquake from international databases. The critical acceleration and the peak ground acceleration values obtained from the strong ground motion record allowed us to estimate the actual soil and topographic amplification effects. Finally, we have calculated analytically the real Newmark displacement at La Paca rock-fall and we have compared this displacement with our GIS estimation in order to improve the calibration of Newmark's method at the regional scale.
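Newmark's rigid-block method referenced throughout can be sketched in a few lines: the block acquires relative velocity only while ground acceleration exceeds the critical (yield) acceleration, then decelerates at the critical acceleration until it stops. The acceleration pulse below is synthetic, and only downslope sliding is modeled, as is common in regional applications:

```python
import numpy as np

def newmark_displacement(acc, dt, a_crit):
    """Newmark's sliding rigid-block method: integrate the relative
    acceleration (a - a_crit) while the block slides, accumulating
    permanent downslope displacement. SI units assumed."""
    v = 0.0   # velocity of the block relative to the ground (m/s)
    d = 0.0   # cumulative permanent displacement (m)
    for a in acc:
        if v > 0.0 or a > a_crit:
            v = max(0.0, v + (a - a_crit) * dt)   # sliding stops at v = 0
        d += v * dt
    return d

# synthetic 1 s acceleration pulse of 3 m/s^2 with a_crit = 1 m/s^2:
# the block gains v = 2 m/s, then coasts to rest over 2 s -> 3.0 m total
dt = 0.001
t = np.arange(0.0, 4.0, dt)
acc = np.where(t < 1.0, 3.0, 0.0)
d = newmark_displacement(acc, dt, a_crit=1.0)
```

In the GIS implementation described above, the same integration is applied cell by cell with a_crit derived from the static safety factor, optionally scaled for soil and topographic amplification.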

  18. Hybrid deterministic/stochastic simulation of complex biochemical systems.

    PubMed

    Lecca, Paola; Bagagiolo, Fabio; Scarpa, Marina

    2017-11-21

    In a biological cell, cellular functions and the genetic regulatory apparatus are implemented and controlled by complex networks of chemical reactions involving genes, proteins, and enzymes. Accurate computational models are indispensable means for understanding the mechanisms behind the evolution of a complex system, not always explorable with wet-lab experiments. To serve their purpose, computational models, however, should be able to describe and simulate the complexity of a biological system in many of its aspects. Moreover, they should be implemented by efficient algorithms requiring the shortest possible execution time, to avoid excessively lengthening the time elapsing between data analysis and any subsequent experiment. Besides the features of their topological structure, the complexity of biological networks also refers to their dynamics, which is often non-linear and stiff. The stiffness is due to the presence of molecular species whose abundance fluctuates by many orders of magnitude. A fully stochastic simulation of a stiff system is computationally expensive. On the other hand, continuous models are less costly, but they fail to capture the stochastic behaviour of small populations of molecular species. We introduce a new efficient hybrid stochastic-deterministic computational model and the software tool MoBioS (MOlecular Biology Simulator) implementing it. The mathematical model of MoBioS uses continuous differential equations to describe the deterministic reactions and a Gillespie-like algorithm to describe the stochastic ones. Unlike the majority of current hybrid methods, the MoBioS algorithm divides the reaction set into fast reactions, moderate reactions, and slow reactions and implements a hysteresis switching between the stochastic model and the deterministic model. Fast reactions are approximated as continuous-deterministic processes and modelled by deterministic rate equations.
    Moderate reactions are those whose reaction waiting time is greater than the fast-reaction waiting time but smaller than the slow-reaction waiting time. A moderate reaction is approximated as a stochastic (deterministic) process if it was classified as a stochastic (deterministic) process at the time at which it crossed the threshold of low (high) waiting time. A Gillespie First Reaction Method is implemented to select and execute the slow reactions. The performance of MoBioS was tested on a typical example of hybrid dynamics: DNA transcription regulation. The simulated dynamic profile of the reagents' abundance and the estimate of the error introduced by the fully deterministic approach were used to evaluate the consistency of the computational model and that of the software tool.
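The First Reaction Method used for the slow-reaction subset can be sketched as follows: draw a tentative exponential waiting time for every reaction and fire the earliest. The single-reaction degradation system and its rate constant are hypothetical, chosen only to exercise the step function:

```python
import numpy as np

def first_reaction_step(state, stoich, propensities, rng):
    """One Gillespie First Reaction Method step: draw a candidate waiting
    time tau_j ~ Exp(a_j) for each reaction j and execute the earliest."""
    a = propensities(state)                      # reaction propensities
    taus = np.full(len(a), np.inf)
    for j, aj in enumerate(a):
        if aj > 0.0:
            taus[j] = rng.exponential(1.0 / aj)  # candidate waiting time
    j = int(np.argmin(taus))
    if not np.isfinite(taus[j]):
        return state, np.inf                     # no reaction can fire
    return state + stoich[j], float(taus[j])

# toy degradation A -> 0 starting from 50 molecules (hypothetical rate)
stoich = np.array([[-1]])
propensities = lambda s: np.array([0.1 * s[0]])
rng = np.random.default_rng(2)
state, t = np.array([50]), 0.0
while state[0] > 0:
    state, tau = first_reaction_step(state, stoich, propensities, rng)
    t += tau
# state reaches 0; t is the (random) extinction time
```

In a hybrid scheme like the one described above, this step would advance only the slow subset, while fast reactions evolve under their rate equations between firings.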

  19. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is 2-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but 1 case where they were equal. 
    The failed-region patterns between the two methods are similar; however, differences arise because the stress relief from element elimination causes probabilistic failed regions to continue to accumulate after no further deterministic failed regions would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed-region method is more spatially sensitive to failure and is more sensitive to belt loads. The probabilistic failed-region method allows for increased capability in postprocessing with respect to age. The probabilistic failed-region method predicted more failed regions than the deterministic method due to differences in force distribution.
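The two failure rules can be contrasted on synthetic element strains. The 1.8% deterministic threshold comes from the abstract, whereas the logistic age-dependent failure probability below is a stand-in assumption, not the paper's actual strain-failure functions:

```python
import numpy as np

def deterministic_failed(strains, threshold=0.018):
    """Element-elimination rule from the abstract: a cortical-bone element
    fails once effective plastic strain exceeds 1.8%."""
    return strains > threshold

def probabilistic_failed(strains, age, rng):
    """Hypothetical age-based rule: each element fails with a probability
    given by a logistic function of strain whose midpoint drops with age
    (assumed shape and coefficients, for illustration only)."""
    midpoint = 0.025 - 0.0002 * (age - 25)   # assumed age dependence
    p = 1.0 / (1.0 + np.exp(-(strains - midpoint) / 0.003))
    return rng.random(len(strains)) < p

rng = np.random.default_rng(3)
strains = rng.uniform(0.0, 0.03, 1000)       # synthetic element strains
det = deterministic_failed(strains)
prob = probabilistic_failed(strains, age=50, rng=rng)
```

The deterministic rule gives a sharp, reproducible cut; the probabilistic rule spreads failures around its midpoint and shifts with occupant age, mirroring the tradeoffs discussed above.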

  20. Prospective testing of neo-deterministic seismic hazard scenarios for the Italian territory

    NASA Astrophysics Data System (ADS)

    Peresan, Antonella; Magrin, Andrea; Vaccari, Franco; Kossobokov, Vladimir; Panza, Giuliano F.

    2013-04-01

    A reliable and comprehensive characterization of expected seismic ground shaking, including, where possible, the related time information, is essential in order to develop effective mitigation strategies and increase earthquake preparedness. Moreover, any effective tool for seismic hazard assessment (SHA) must demonstrate its capability in anticipating the ground shaking associated with large earthquake occurrences, a result that can be attained only through a rigorous verification and validation process. So far, the major problems in classical probabilistic methods for seismic hazard assessment, PSHA, have consisted in the adequate description of earthquake recurrence, particularly for the largest and most sporadic events, and of the attenuation models, which may be unable to account for the complexity of the medium and of the seismic sources and are often weakly constrained by the available observations. Current computational resources and physical knowledge of the seismic wave generation and propagation processes nowadays allow for viable numerical and analytical alternatives to the use of attenuation relations. Accordingly, a scenario-based neo-deterministic approach to seismic hazard assessment, NDSHA, has been proposed, which allows considering a wide range of possible seismic sources as the starting point for deriving scenarios by means of full waveform modeling. The method does not make use of attenuation relations and naturally supplies realistic time series of ground shaking, including reliable estimates of ground displacement readily applicable to seismic isolation techniques. Based on NDSHA, an operational integrated procedure for seismic hazard assessment has been developed that allows for the definition of time-dependent scenarios of ground shaking, through the routine updating of formally defined earthquake predictions.
The integrated NDSHA procedure for seismic input definition, which is currently applied to the Italian territory, combines different pattern recognition techniques, designed for the space-time identification of strong earthquakes, with algorithms for the realistic modeling of ground motion. Accordingly, a set of deterministic scenarios of ground motion at bedrock, which refers to the time interval when a strong event is likely to occur within the alerted area, can be defined by means of full waveform modeling, both at regional and local scale. CN and M8S predictions, as well as the related time-dependent ground motion scenarios associated with the alarmed areas, are regularly updated every two months since 2006. The routine application of the time-dependent NDSHA approach provides information that can be useful in assigning priorities for timely mitigation actions and, at the same time, allows for a rigorous prospective testing and validation of the proposed methodology. As an example, for sites where ground shaking values greater than 0.2 g are estimated at bedrock, further investigations can be performed taking into account the local soil conditions, to assess the performances of relevant structures, such as historical and strategic buildings. The issues related with prospective testing and validation of the time-dependent NDSHA scenarios will be discussed, illustrating the results obtained for the recent strong earthquakes in Italy, including the May 20, 2012 Emilia earthquake.

  1. Pro Free Will Priming Enhances “Risk-Taking” Behavior in the Iowa Gambling Task, but Not in the Balloon Analogue Risk Task: Two Independent Priming Studies

    PubMed Central

    Schrag, Yann; Tremea, Alessandro; Lagger, Cyril; Ohana, Noé; Mohr, Christine

    2016-01-01

    Studies indicated that people behave less responsibly after exposure to information containing deterministic statements as compared to free will statements or neutral statements. Thus, deterministic primes should lead to enhanced risk-taking behavior. We tested this prediction in two studies with healthy participants. In experiment 1, we tested 144 students (24 men) in the laboratory using the Iowa Gambling Task. In experiment 2, we tested 274 participants (104 men) online using the Balloon Analogue Risk Task. In the Iowa Gambling Task, the free will priming condition resulted in more risky decisions than both the deterministic and neutral priming conditions. We observed no priming effects on risk-taking behavior in the Balloon Analogue Risk Task. To explain these unpredicted findings, we consider the somatic marker hypothesis, a gain frequency approach as well as attention to gains and/or inattention to losses. In addition, we highlight the necessity to consider both pro free will and deterministic priming conditions in future studies. Importantly, our and previous results indicate that the effects of pro free will and deterministic priming do not oppose each other on a frequently assumed continuum. PMID:27018854

  2. Pro Free Will Priming Enhances "Risk-Taking" Behavior in the Iowa Gambling Task, but Not in the Balloon Analogue Risk Task: Two Independent Priming Studies.

    PubMed

    Schrag, Yann; Tremea, Alessandro; Lagger, Cyril; Ohana, Noé; Mohr, Christine

    2016-01-01

    Studies indicated that people behave less responsibly after exposure to information containing deterministic statements as compared to free will statements or neutral statements. Thus, deterministic primes should lead to enhanced risk-taking behavior. We tested this prediction in two studies with healthy participants. In experiment 1, we tested 144 students (24 men) in the laboratory using the Iowa Gambling Task. In experiment 2, we tested 274 participants (104 men) online using the Balloon Analogue Risk Task. In the Iowa Gambling Task, the free will priming condition resulted in more risky decisions than both the deterministic and neutral priming conditions. We observed no priming effects on risk-taking behavior in the Balloon Analogue Risk Task. To explain these unpredicted findings, we consider the somatic marker hypothesis, a gain frequency approach, as well as attention to gains and/or inattention to losses. In addition, we highlight the necessity to consider both pro free will and deterministic priming conditions in future studies. Importantly, our and previous results indicate that the effects of pro free will and deterministic priming do not oppose each other on a frequently assumed continuum.

  3. Ion implantation for deterministic single atom devices

    NASA Astrophysics Data System (ADS)

    Pacheco, J. L.; Singh, M.; Perry, D. L.; Wendt, J. R.; Ten Eyck, G.; Manginell, R. P.; Pluym, T.; Luhman, D. R.; Lilly, M. P.; Carroll, M. S.; Bielejec, E.

    2017-12-01

    We demonstrate a capability of deterministic doping at the single atom level using a combination of direct write focused ion beam and solid-state ion detectors. The focused ion beam system can position a single ion to within 35 nm of a targeted location and the detection system is sensitive to single low energy heavy ions. This platform can be used to deterministically fabricate single atom devices in materials where the nanostructure and ion detectors can be integrated, including donor-based qubits in Si and color centers in diamond.

  4. Counterfactual Quantum Deterministic Key Distribution

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng; Wang, Jian; Tang, Chao-Jing

    2013-01-01

    We propose a new counterfactual quantum cryptography protocol for distributing a deterministic key. By adding a controlled blocking operation module to the original protocol [T.G. Noh, Phys. Rev. Lett. 103 (2009) 230501], the correlation between the polarizations of the two parties, Alice and Bob, is extended; therefore, one can distribute both deterministic keys and random ones using our protocol. We also give a simple proof of the security of our protocol using the technique we previously applied to the original protocol. Most importantly, our analysis produces a bound tighter than the existing ones.

  5. Ion implantation for deterministic single atom devices

    DOE PAGES

    Pacheco, J. L.; Singh, M.; Perry, D. L.; ...

    2017-12-04

    Here, we demonstrate a capability of deterministic doping at the single atom level using a combination of direct write focused ion beam and solid-state ion detectors. The focused ion beam system can position a single ion to within 35 nm of a targeted location and the detection system is sensitive to single low energy heavy ions. This platform can be used to deterministically fabricate single atom devices in materials where the nanostructure and ion detectors can be integrated, including donor-based qubits in Si and color centers in diamond.

  6. Deterministic quantum splitter based on time-reversed Hong-Ou-Mandel interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jun; Lee, Kim Fook; Kumar, Prem

    2007-09-15

    By utilizing a fiber-based indistinguishable photon-pair source in the 1.55 μm telecommunications band [J. Chen et al., Opt. Lett. 31, 2798 (2006)], we present the first, to the best of our knowledge, deterministic quantum splitter based on the principle of time-reversed Hong-Ou-Mandel quantum interference. The deterministically separated identical photons' indistinguishability is then verified by using a conventional Hong-Ou-Mandel quantum interference, which exhibits a near-unity dip visibility of 94±1%, making this quantum splitter useful for various quantum information processing applications.

  7. A nonlinear dynamics approach for incorporating wind-speed patterns into wind-power project evaluation.

    PubMed

    Huffaker, Ray; Bittelli, Marco

    2015-01-01

    Wind-energy production may be expanded beyond regions with high-average wind speeds (such as the Midwest U.S.A.) to sites with lower-average speeds (such as the Southeast U.S.A.) by locating favorable regional matches between natural wind-speed and energy-demand patterns. A critical component of wind-power evaluation is to incorporate wind-speed dynamics reflecting documented diurnal and seasonal behavioral patterns. Conventional probabilistic approaches remove patterns from wind-speed data. These patterns must be restored synthetically before they can be matched with energy-demand patterns. How to accurately restore wind-speed patterns is a vexing problem spurring an expanding line of papers. We propose a paradigm shift in wind-power evaluation that employs signal-detection and nonlinear-dynamics techniques to empirically diagnose whether synthetic pattern restoration can be avoided altogether. If the complex behavior of observed wind-speed records is due to nonlinear, low-dimensional, and deterministic system dynamics, then nonlinear-dynamics techniques can reconstruct wind-speed dynamics from observed wind-speed data without recourse to conventional probabilistic approaches. In the first study of its kind, we test a nonlinear dynamics approach in an application to Sugarland Wind, the first utility-scale wind project proposed in Florida, USA. We find empirical evidence of a low-dimensional and nonlinear wind-speed attractor characterized by strong temporal patterns that match up well with regular daily and seasonal electricity demand patterns.
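    The reconstruction step that this kind of analysis relies on can be illustrated with Takens time-delay embedding, which rebuilds a system's attractor from a single observed series. The Python sketch below is a minimal illustration: the quasi-periodic test signal, delay, and embedding dimension are illustrative assumptions, not values from the study.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Takens delay embedding: map a scalar series x into vectors
    (x[t], x[t + tau], ..., x[t + (dim - 1) * tau])."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

# Quasi-periodic stand-in for a wind-speed record (illustrative only).
t = np.arange(0.0, 200.0, 0.1)
signal = np.sin(t) + 0.5 * np.sin(2.1 * t)
embedded = delay_embed(signal, dim=3, tau=15)  # points on the reconstructed attractor
```

    If the underlying dynamics are low-dimensional and deterministic, the embedded points trace out an attractor whose geometry (for example, its correlation dimension) can then be quantified directly from data.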

  8. Self-Organization in 2D Traffic Flow Model with Jam-Avoiding Drive

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi

    1995-04-01

    A stochastic cellular automaton (CA) model is presented to investigate the traffic jam formed by self-organization in two-dimensional (2D) traffic flow. The CA model is an extended version of the 2D asymmetric exclusion model that takes into account jam-avoiding drive. Each site contains either a car moving up, a car moving to the right, or is empty. An up car can shift right with probability p_ja if it is blocked ahead by other cars. It is shown that three phases (the low-density phase, the intermediate-density phase and the high-density phase) appear in the traffic flow. The intermediate-density phase is characterized by the rightward shifting of up cars. The jamming transition to the high-density jamming phase occurs at a higher density of cars than without jam-avoiding drive. The jamming transition point p_2c increases with the shifting probability p_ja. In the deterministic limit of p_ja=1, it is found that a new jamming transition occurs from the low-density synchronized-shifting phase to the high-density moving phase with increasing density of cars. In the synchronized-shifting phase, up cars do not move upward but shift to the right in synchrony with the motion of right cars. We show that jam-avoiding drive has an important effect on the dynamical jamming transition.
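    A minimal sketch can make this kind of update rule concrete. The Python fragment below implements a toy BML-style two-species CA with a jam-avoiding right shift; the sub-step ordering and conflict resolution are simplifying assumptions of this sketch, not necessarily the paper's exact rules.

```python
import numpy as np

EMPTY, UP, RIGHT = 0, 1, 2

def step(grid, p_ja, rng):
    """One parallel update on a torus. Sub-step 1: right cars advance into
    cells empty on the old grid. Sub-step 2: up cars advance; a blocked up
    car shifts one cell right with probability p_ja (jam-avoiding drive)."""
    n = grid.shape[0]
    mid = np.full_like(grid, EMPTY)
    for i in range(n):
        for j in range(n):
            if grid[i, j] == RIGHT:
                tgt = (j + 1) % n if grid[i, (j + 1) % n] == EMPTY else j
                mid[i, tgt] = RIGHT
            elif grid[i, j] == UP:
                mid[i, j] = UP
    new = np.where(mid == RIGHT, RIGHT, EMPTY)
    for i in range(n):
        for j in range(n):
            if mid[i, j] != UP:
                continue
            ahead, side = ((i - 1) % n, j), (i, (j + 1) % n)
            if mid[ahead] == EMPTY and new[ahead] == EMPTY:
                new[ahead] = UP                  # move up
            elif mid[side] == EMPTY and new[side] == EMPTY and rng.random() < p_ja:
                new[side] = UP                   # jam-avoiding right shift
            else:
                new[i, j] = UP                   # stay put
    return new
```

    Setting p_ja=1 gives the deterministic limit discussed in the abstract; sweeping the car density at fixed p_ja exposes the jamming transition.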

  9. Changing contributions of stochastic and deterministic processes in community assembly over a successional gradient.

    PubMed

    Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis

    2018-01-01

    Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) were initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but the early stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. 
In conclusion, stochastic and individualistic species responses early in succession give way to more niche-driven dynamics in later successional stages. Grazing reduces predictability in both successional trends and species-level dynamics, especially in plant functional groups that are not well adapted to disturbance. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.

  10. Earthquake Hazard Mitigation Using a Systems Analysis Approach to Risk Assessment

    NASA Astrophysics Data System (ADS)

    Legg, M.; Eguchi, R. T.

    2015-12-01

    The earthquake hazard mitigation goal is to reduce losses due to severe natural events. The first step is to conduct a Seismic Risk Assessment consisting of 1) hazard estimation, 2) vulnerability analysis, and 3) exposure compilation. Seismic hazards include ground deformation, shaking, and inundation. The hazard estimation may be probabilistic or deterministic. Probabilistic Seismic Hazard Assessment (PSHA) is generally applied to site-specific risk assessments, but may involve large areas as in a National Seismic Hazard Mapping program. Deterministic hazard assessments are needed for geographically distributed exposure such as lifelines (infrastructure), but may be important for large communities. Vulnerability evaluation includes quantification of fragility for construction or components including personnel. Exposure represents the existing or planned construction, facilities, infrastructure, and population in the affected area. Risk (expected loss) is the product of the quantified hazard, vulnerability (damage algorithm), and exposure, and may be used to prepare emergency response plans, retrofit existing construction, or use community planning to avoid hazards. The risk estimate provides data needed to acquire earthquake insurance to assist with effective recovery following a severe event. Earthquake Scenarios used in Deterministic Risk Assessments provide detailed information on where hazards may be most severe and what system components are most susceptible to failure, and allow evaluation of the combined effects of a severe earthquake on the whole system or community. Casualties (injuries and death) have been the primary factor in defining building codes for seismic-resistant construction. Economic losses may be equally significant factors that can influence proactive hazard mitigation. Large urban earthquakes may produce catastrophic losses due to a cascading of effects often missed in PSHA. 
Economic collapse may ensue if damaged workplaces, disruption of utilities, and resultant loss of income produces widespread default on payments. With increased computational power and more complete inventories of exposure, Monte Carlo methods may provide more accurate estimation of severe losses and the opportunity to increase resilience of vulnerable systems and communities.

  11. On the preventive management of sediment-related sewer blockages: a combined maintenance and routing optimization approach.

    PubMed

    Fontecha, John E; Akhavan-Tabatabaei, Raha; Duque, Daniel; Medaglia, Andrés L; Torres, María N; Rodríguez, Juan Pablo

    In this work we tackle the problem of planning and scheduling preventive maintenance (PM) of sediment-related sewer blockages in a set of geographically distributed sites that are subject to non-deterministic failures. To solve the problem, we extend a combined maintenance and routing (CMR) optimization approach, a procedure based on two components: (a) a maintenance model is used to determine the optimal time to perform PM operations for each site, and (b) a mixed-integer-program-based split procedure is proposed to route a set of crews (e.g., sewer cleaners, vehicles equipped with winches or rods, and dump trucks) in order to perform PM operations at a near-optimal minimum expected cost. We applied the proposed CMR optimization approach to two (out of five) operative zones in the city of Bogotá (Colombia), where more than 100 maintenance operations per zone must be scheduled on a weekly basis. Comparing the CMR against the current maintenance plan, we obtained cost savings of more than 50% in 90% of the sites.

  12. Selection of remedial alternatives for mine sites: a multicriteria decision analysis approach.

    PubMed

    Betrie, Getnet D; Sadiq, Rehan; Morin, Kevin A; Tesfamariam, Solomon

    2013-04-15

    The selection of remedial alternatives for mine sites is a complex task because it involves multiple criteria, often with conflicting objectives. However, the existing framework used to select remedial alternatives lacks multicriteria decision analysis (MCDA) aids and does not consider uncertainty in the selection of alternatives. The objective of this paper is to improve the existing framework by introducing deterministic and probabilistic MCDA methods. The Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) methods have been implemented in this study. The MCDA involves preparing the inputs to the PROMETHEE methods (identifying the alternatives, defining the criteria, deriving the criteria weights using the analytic hierarchy process (AHP), defining the probability distributions of the criteria weights, and conducting Monte Carlo simulation (MCS)); running the PROMETHEE methods with these inputs; and conducting a sensitivity analysis. A case study was presented to demonstrate the improved framework at a mine site. The results showed that the improved framework provides a reliable way of selecting remedial alternatives as well as quantifying the impact of different criteria on the selection. Copyright © 2013 Elsevier Ltd. All rights reserved.
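    To make the deterministic PROMETHEE step concrete, here is a minimal PROMETHEE II sketch using the "usual" (strict 0/1) preference function. The alternatives, criteria, and weights are invented illustrative numbers, not data from the mine-site case study; a probabilistic run would simply resample the weight vector (e.g., by MCS) around these values.

```python
import numpy as np

def promethee_ii(scores, weights, maximize):
    """Net outranking flows with the 'usual' 0/1 preference function.
    scores: (n_alternatives, n_criteria); weights sum to 1."""
    s = np.where(maximize, scores, -scores)  # make every criterion "higher is better"
    n = len(s)
    phi = np.zeros(n)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            pref = np.sum(weights * (s[a] > s[b]))  # weighted preference of a over b
            phi[a] += pref
            phi[b] -= pref
    return phi / (n - 1)

# Illustrative criteria: [effectiveness (max), cost (min), residual risk (min)]
scores = np.array([[0.8, 30.0, 2.0],     # alternative A
                   [0.6, 10.0, 1.0],     # alternative B
                   [0.9, 50.0, 3.0]])    # alternative C
weights = np.array([0.6, 0.25, 0.15])
maximize = np.array([True, False, False])
net_flow = promethee_ii(scores, weights, maximize)
ranking = np.argsort(-net_flow)          # best alternative first
```

    Ranking alternatives by descending net flow gives the complete PROMETHEE II preorder; repeating the computation over sampled weight vectors yields the rank distributions used in a probabilistic analysis.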

  13. Technologies for precision manufacture of current and future windows and domes

    NASA Astrophysics Data System (ADS)

    Hallock, Bob; Shorey, Aric

    2009-05-01

    The final finish and characterization of windows and domes presents a number of challenges in achieving the desired precision with acceptable cost and schedule. This becomes more difficult with advanced materials and as window and dome shapes and requirements become more complex, including acute-angle corners, transmitted-wavefront specifications, aspheric geometries, and a trend toward conformal surfaces. Magnetorheological Finishing (MRF®) and Magnetorheological Jet (MR Jet®), along with metrology provided by Sub-aperture Stitching Interferometry (SSI®), have several unique attributes that give them advantages in enhancing fabrication of current and next-generation windows and domes. The advantages that MRF brings to the precision finishing of a wide range of shapes, such as flats, spheres (including hemispheres), cylinders, aspheres and even freeform optics, have been well documented. Recent advancements include the ability to finish freeform shapes up to 2 meters in size as well as progress in finishing challenging IR materials. Because of its shear-based removal mechanism, in contrast to the pressure-based process of other techniques, edges are not typically rolled, in particular on parts with acute-angle corners. MR Jet provides additional benefits, particularly in the finishing of the inside of steep concave domes and other irregular shapes. The ability of MR Jet to correct the figure of conformal domes deterministically and to high precision has been demonstrated. Combining these technologies with metrology techniques such as SSI provides a solution for finishing current and future windows and domes in a reliable, deterministic and cost-effective way. The ability to use the SSI to characterize a range of shapes such as domes and aspheres, as well as progress in using MRF and MR Jet for finishing conventional and conformal windows and domes of increasing size and complexity of design, will be presented.

  14. Characterization of forced response of density stratified reacting wake

    NASA Astrophysics Data System (ADS)

    Pawar, Samadhan A.; Sujith, Raman I.; Emerson, Benjamin; Lieuwen, Tim

    2018-02-01

    The hydrodynamic stability of a reacting wake depends primarily on the density ratio [i.e., the ratio of unburnt gas density (ρ_u) to burnt gas density (ρ_b)] of the flow across the wake. Varying the density ratio from a high to a low value, keeping ρ_u/ρ_b > 1, transitions the dynamical characteristics of the reacting wake from a linearly globally stable (or convectively unstable) mode to a globally unstable mode. In this paper, we propose a framework to analyze the effect of harmonic forcing on the deterministic and synchronization characteristics of reacting wakes. Using recurrence quantification analysis of the forced wake response, we show that the deterministic behaviour of the reacting wake increases as the amplitude of forcing is increased. Furthermore, for different density ratios, we found that the synchronization of the top and bottom branches of the wake with the forcing signal depends on whether the mean frequency of the natural oscillations of the wake (f_n) is less than or greater than the frequency of external forcing (f_f). We notice that the response of the two branches (top and bottom) of the reacting wake to the external forcing is asymmetric for low density ratios and symmetric for high density ratios. In addition, we characterize the phase-locking behaviour between the top and bottom branches of the wake for different values of the density ratio. We observe that an increase in the density ratio results in a gradual decrease in the relative phase angle between the top and bottom branches of the wake, which leads to a change in the vortex shedding pattern from a sinuous (anti-phase) to a varicose (in-phase) mode of oscillation.
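    The "determinism" quantified here has a standard recurrence-quantification estimator: the fraction of recurrent pairs lying on diagonal lines of length at least l_min in the recurrence matrix. The Python sketch below omits phase-space embedding and uses an illustrative threshold; it is a generic RQA illustration, not the paper's exact pipeline.

```python
import numpy as np

def _run_lengths(bits):
    """Lengths of consecutive runs of True values."""
    runs, c = [], 0
    for b in bits:
        if b:
            c += 1
        elif c:
            runs.append(c)
            c = 0
    if c:
        runs.append(c)
    return runs

def determinism(x, eps, lmin=2):
    """DET: fraction of recurrent pairs that lie on diagonal lines of
    length >= lmin (scalar series, no embedding; main diagonal excluded)."""
    R = np.abs(x[:, None] - x[None, :]) < eps
    n = len(x)
    total = R.sum() - n                  # recurrent pairs off the main diagonal
    on_lines = 0
    for k in range(1, n):                # upper-triangle diagonals
        for r in _run_lengths(np.diagonal(R, offset=k)):
            if r >= lmin:
                on_lines += r
    return 2 * on_lines / total          # symmetry doubles the upper triangle

rng = np.random.default_rng(0)
periodic = np.sin(np.linspace(0.0, 20.0 * np.pi, 400))  # deterministic stand-in
noise = rng.uniform(-1.0, 1.0, 400)                     # stochastic stand-in
det_periodic = determinism(periodic, eps=0.1)
det_noise = determinism(noise, eps=0.1)
```

    A strongly forced (more deterministic) signal yields longer diagonal structures and hence a higher DET than a noisy one, which is the trend reported with increasing forcing amplitude.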

  15. Deterministic multidimensional nonuniform gap sampling.

    PubMed

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
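    The idea of a deterministic gap schedule can be sketched in a few lines. The quarter-sine gap profile below mimics the average behavior of Poisson-gap sampling (small gaps early in the FID, larger gaps late), with the gap scale tuned by bisection; the specific gap equation and parameters are assumptions of this sketch, not the authors' published formula.

```python
import math

def deterministic_gap_schedule(grid_size, n_samples):
    """Deterministic nonuniform sampling sketch: gaps grow as a quarter-sine
    of grid position, and the gap scale is bisected so that at least
    n_samples points fit on the Nyquist grid. Requires n_samples < grid_size."""
    def schedule(scale):
        pts, x = [], 0
        while x < grid_size:
            pts.append(x)
            x += 1 + int(scale * math.sin((x + 0.5) / grid_size * math.pi / 2))
        return pts
    lo, hi = 0.0, 4.0 * grid_size / n_samples  # scale 0 gives uniform sampling
    for _ in range(60):                         # bisect on the gap scale
        mid = (lo + hi) / 2
        if len(schedule(mid)) > n_samples:
            lo = mid
        else:
            hi = mid
    return schedule(lo)[:n_samples]

points = deterministic_gap_schedule(256, 64)    # 25% sampling of a 256-point grid
```

    Unlike a randomly drawn Poisson-gap schedule, rerunning this procedure always returns the identical point set, which is the "completely deterministic behavior" the abstract highlights.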

  16. A Comparison of Probabilistic and Deterministic Campaign Analysis for Human Space Exploration

    NASA Technical Reports Server (NTRS)

    Merrill, R. Gabe; Andraschko, Mark; Stromgren, Chel; Cirillo, Bill; Earle, Kevin; Goodliff, Kandyce

    2008-01-01

    Human space exploration is by its very nature an uncertain endeavor. Vehicle reliability, technology development risk, budgetary uncertainty, and launch uncertainty all contribute to stochasticity in an exploration scenario. However, traditional strategic analysis has been done in a deterministic manner, analyzing and optimizing the performance of a series of planned missions. History has shown that exploration scenarios rarely follow such a planned schedule. This paper describes a methodology to integrate deterministic and probabilistic analysis of scenarios in support of human space exploration. Probabilistic strategic analysis is used to simulate "possible" scenario outcomes, based upon the likelihood of occurrence of certain events and a set of pre-determined contingency rules. The results of the probabilistic analysis are compared to the nominal results from the deterministic analysis to evaluate the robustness of the scenario to adverse events and to test and optimize contingency planning.
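    The contrast between the two analysis modes can be sketched in a few lines: the deterministic plan assumes every launch succeeds on schedule, while a probabilistic run resamples mission outcomes under a simple contingency rule ("a failed mission is reflown at the next opportunity"). The reliability and cadence numbers below are invented for illustration and are not from the paper.

```python
import random
import statistics

def campaign_duration(n_missions, p_success, cadence, rng):
    """Simulate one campaign outcome under the refly-on-failure rule."""
    t, flown = 0.0, 0
    while flown < n_missions:
        t += cadence                  # next launch opportunity
        if rng.random() < p_success:
            flown += 1                # success: move on to the next mission
        # on failure, the contingency rule retries at the next opportunity
    return t

deterministic_duration = 10 * 0.5     # 10 missions, one every 0.5 yr, no failures
rng = random.Random(7)
runs = [campaign_duration(10, 0.9, 0.5, rng) for _ in range(2000)]
mean_duration = statistics.mean(runs)
```

    The gap between the planned and simulated mean durations is one simple robustness measure; richer contingency rules (budget slips, vehicle stand-downs) extend the same loop.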

  17. First Order Reliability Application and Verification Methods for Semistatic Structures

    NASA Technical Reports Server (NTRS)

    Verderaime, Vincent

    1994-01-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored by conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments, its stress audits are shown to be arbitrary and incomplete, and it compromises high strength materials performance. A reliability method is proposed which combines first order reliability principles with deterministic design variables and conventional test technique to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety index expression. The application is reduced to solving for a factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and with the pace of semistatic structural designs.
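    The classical safety index that such methods build on can be stated in a few lines: for independent, normally distributed capacity R and load S, the margin M = R - S gives β = (μ_R - μ_S)/√(σ_R² + σ_S²) and reliability Φ(β). The numbers below are illustrative, and the paper's specific contribution (uncertainty-error terms folded into β) is not reproduced here.

```python
import math
from statistics import NormalDist

# First-order safety index sketch (illustrative numbers, not the paper's):
# capacity R ~ N(mu_R, sig_R), load S ~ N(mu_S, sig_S), independent.
mu_R, sig_R = 60.0, 5.0    # strength
mu_S, sig_S = 40.0, 4.0    # applied stress
beta = (mu_R - mu_S) / math.sqrt(sig_R**2 + sig_S**2)  # safety index
reliability = NormalDist().cdf(beta)                    # P(R > S)
```

    Solving this relation in reverse, i.e. finding the factor on the load that achieves a specified β, is the step the abstract describes as replacing the conventional safety factor.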

  18. Apparatus for fixing latency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, David R; Bartholomew, David B; Moon, Justin

    2009-09-08

    An apparatus for fixing computational latency within a deterministic region on a network comprises a network interface modem, a high priority module and at least one deterministic peripheral device. The network interface modem is in communication with the network. The high priority module is in communication with the network interface modem. The at least one deterministic peripheral device is connected to the high priority module. The high priority module comprises a packet assembler/disassembler, and hardware for performing at least one operation. Also disclosed is an apparatus for executing at least one instruction on a downhole device within a deterministic region, the apparatus comprising a control device, a downhole network, and a downhole device. The control device is near the surface of a downhole tool string. The downhole network is integrated into the tool string. The downhole device is in communication with the downhole network.

  19. Stochastic Petri Net extension of a yeast cell cycle model.

    PubMed

    Mura, Ivan; Csikász-Nagy, Attila

    2008-10-21

    This paper presents the definition, solution and validation of a stochastic model of the budding yeast cell cycle, based on Stochastic Petri Nets (SPN). A specific family of SPNs is selected for building a stochastic version of a well-established deterministic model. We describe the procedure followed in defining the SPN model from the deterministic ODE model, a procedure that can be largely automated. The validation of the SPN model is conducted with respect to both the results provided by the deterministic model and the experimental results available in the literature. The SPN model captures the behavior of wild-type budding yeast cells and a variety of mutants. We show that the stochastic model matches some characteristics of budding yeast cells that cannot be found with the deterministic model. The SPN model fine-tunes the simulation results, enriching the breadth and the quality of its outcome.
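    The deterministic-to-stochastic conversion such a procedure automates can be illustrated on a toy network (assumed here, not the cell-cycle model): a birth-death species X with synthesis ∅ → X at rate k1 and degradation X → ∅ at rate k2·x, executed with Gillespie's direct method, the usual firing semantics for stochastic Petri nets.

```python
import random

def gillespie_final_count(k1, k2, x0, t_end, rng):
    """Gillespie direct method for the two-transition birth-death SPN."""
    t, x = 0.0, x0
    while True:
        a1, a2 = k1, k2 * x          # transition propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)     # waiting time to the next firing
        if t >= t_end:
            return x
        if rng.random() < a1 / a0:   # pick which transition fires
            x += 1
        else:
            x -= 1

rng = random.Random(42)
k1, k2 = 10.0, 0.1                   # deterministic steady state: k1/k2 = 100
samples = [gillespie_final_count(k1, k2, 0, 100.0, rng) for _ in range(200)]
mean_count = sum(samples) / len(samples)
```

    The stochastic mean settles near the deterministic steady state k1/k2 = 100, while individual runs fluctuate with roughly Poisson spread, the kind of behavior a deterministic ODE model cannot exhibit.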

  20. Effect of sample volume on metastable zone width and induction time

    NASA Astrophysics Data System (ADS)

    Kubota, Noriaki

    2012-04-01

    The metastable zone width (MSZW) and the induction time, measured for a large sample (say, >0.1 L), are reproducible and deterministic, while for a small sample (say, <1 mL) these values are irreproducible and stochastic. Such behaviors of MSZW and induction time were discussed theoretically with both stochastic and deterministic models. Equations for the distributions of stochastic MSZW and induction time were derived. The average values of stochastic MSZW and induction time both decreased with an increase in sample volume, while the deterministic MSZW and induction time remained unchanged. Such different behaviors with variation in sample volume were explained in terms of the detection sensitivity of crystallization events. The average values of MSZW and induction time in the stochastic model were compared with the deterministic MSZW and induction time, respectively. Literature data reported for aqueous paracetamol solution were explained theoretically with the presented models.
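    The volume argument can be sketched directly: if nuclei form by a Poisson process with rate J per unit volume, the stochastic induction time is exponential with mean 1/(J·V), so its average shrinks and its scatter narrows as sample volume V grows. The rates below are illustrative assumptions, not values fitted to the paracetamol data.

```python
import random
import statistics

# Nucleation as a Poisson process: stochastic induction time ~ Exp(J*V),
# plus a fixed growth/detection delay t_g standing in for the deterministic part.
J, t_g = 5.0, 2.0                # nucleation rate (1/(L*s)), growth delay (s)
rng = random.Random(1)

def induction_times(volume_litres, n):
    return [rng.expovariate(J * volume_litres) + t_g for _ in range(n)]

mean_small = statistics.mean(induction_times(0.001, 2000))  # ~1 mL sample
mean_large = statistics.mean(induction_times(0.1, 2000))    # ~0.1 L sample
```

    In the large-volume limit the exponential term becomes negligible and the measured induction time collapses onto the deterministic delay, reproducing the reproducible/stochastic contrast the abstract describes.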

  1. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-07

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  2. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOEpatents

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-14

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  3. SMARTe Site Characterization Tool. In: SMARTe2011, EPA/600/C-10/007

    EPA Science Inventory

    The purpose of the Site Characterization Tool is to: (1) develop a sample design for collecting site characterization data and (2) perform data analysis on uploaded data. The sample design part helps to determine how many samples should be collected to characterize a site with ...

  4. The Solar System Large Planets' influence on a new Maunder Minimum

    NASA Astrophysics Data System (ADS)

    Yndestad, Harald; Solheim, Jan-Erik

    2016-04-01

    In the 1890s, G. Spörer and E. W. Maunder (1890) reported that solar activity stopped for a period of 70 years, from 1645 to 1715. Later reconstructions of solar activity confirm the grand minima of Maunder (1640-1720), Spörer (1390-1550), and Wolf (1270-1340), and the minima of Oort (1010-1070) and Dalton (1785-1810) since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with less irradiation from the Sun and cold climate periods on Earth. The identification of three grand Maunder-type periods and two Dalton-type periods over a thousand years indicates that sooner or later there will be a colder climate on Earth from a new Maunder- or Dalton-type period. The cause of these minimum periods is not well understood. An expected new Maunder-type period is based on the properties of solar variability. If the solar variability has a deterministic element, we can better estimate a new Maunder grand minimum; a random solar variability can only explain the past. This investigation is based on the simple idea that if the solar variability has a deterministic property, it must have a deterministic source as a first cause. If this deterministic source is known, we can compute better estimates of the next expected Maunder grand minimum period. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., a sunspot data series from 1611, and a solar barycenter orbit data series from 1000. The analysis method is based on wavelet spectrum analysis, to identify stationary periods, coincidence periods, and their phase relations. The result shows that the TSI variability and the sunspot variability have deterministic oscillations, controlled by the large planets Jupiter, Uranus, and Neptune as the first cause. A deterministic model of TSI variability and sunspot variability confirms the known minimum and grand minimum periods since 1000. 
From this deterministic model we may expect a new Maunder type sunspot minimum period from about 2018 to 2055. The deterministic model of a TSI ACRIM data series from 1700 computes a new Maunder type grand minimum period from 2015 to 2071. A model of the longer TSI ACRIM data series from 1000 computes a new Dalton to Maunder type minimum irradiation period from 2047 to 2068.

  5. Hybrid models for chemical reaction networks: Multiscale theory and application to gene regulatory systems.

    PubMed

    Winkelmann, Stefanie; Schütte, Christof

    2017-09-21

    Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relation between them. We derive a novel general description of such hybrid models that allows expressing various forms by one type of equation. We also examine to what extent the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small-volume approximations.
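    A minimal sketch of the two modeling levels the abstract contrasts, using a birth-death gene expression toy model (rates are invented): the CME's Markov jump process is simulated with Gillespie's stochastic simulation algorithm, and its sample mean is compared with the classical deterministic reduction dx/dt = k - g*x, whose steady state is k/g.

```python
import numpy as np

rng = np.random.default_rng(1)
k, g, T = 20.0, 1.0, 10.0    # birth rate, degradation rate, end time (invented)

def ssa_final(n0=0):
    """One Gillespie trajectory: birth at rate k, death at rate g*n."""
    t, n = 0.0, n0
    while True:
        a_total = k + g * n                    # total propensity
        t += rng.exponential(1.0 / a_total)    # time to next reaction
        if t >= T:
            return n
        n += 1 if rng.random() * a_total < k else -1

samples = np.array([ssa_final() for _ in range(200)])
print(samples.mean(), k / g)   # SSA mean fluctuates around the ODE value 20.0
```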

  6. Analyzing simulation-based PRA data through traditional and topological clustering: A BWR station blackout case study

    DOE PAGES

    Maljovec, D.; Liu, S.; Wang, B.; ...

    2015-07-14

    Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.
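    A sketch of the "traditional" clustering step only (plain k-means; the topological techniques from the paper are not reproduced). Each row stands for one simulated scenario summarized by two hypothetical features, e.g. peak temperature and hours of battery life; all values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two hypothetical scenario families: recovered runs vs core-damage runs,
# each summarized by (peak temperature in K, hours of battery remaining).
safe = rng.normal([900.0, 6.0], [30.0, 0.5], size=(50, 2))
damage = rng.normal([1500.0, 2.0], [30.0, 0.5], size=(50, 2))
X = np.vstack([safe, damage])

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm: assign to nearest center, recompute means."""
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(X[:, None, :] - centers, axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):      # keep old center if a cluster empties
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(X, k=2)
print(sorted(centers[:, 0].round(0)))    # one centroid per scenario family
```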

  7. Automatic design of synthetic gene circuits through mixed integer non-linear programming.

    PubMed

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits.
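    For intuition, part selection on a toy promoter library can be solved to guaranteed global optimality by exhaustive enumeration, mirroring the deterministic guarantee the abstract attributes to MINLP (the real method scales far beyond brute force). The library, the product-of-strengths circuit model, and the budget constraint below are all hypothetical.

```python
import itertools

library = {"P1": 0.5, "P2": 1.0, "P3": 2.0, "P4": 4.0}  # promoter strengths (a.u.)
target_gain, budget = 4.0, 5.0   # desired circuit gain; total-strength budget

# Enumerate every feasible assignment of one promoter to each of two genes
# and keep the global optimum, as MINLP would, but by brute force.
best = min(
    (combo for combo in itertools.product(library, repeat=2)
     if sum(library[p] for p in combo) <= budget),
    key=lambda combo: abs(library[combo[0]] * library[combo[1]] - target_gain),
)
print(best, library[best[0]] * library[best[1]])   # ('P2', 'P4') 4.0
```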

  8. A Deterministic and Random Propagation Study with the Design of an Open Path 320 GHz to 340 GHz Transmissometer

    NASA Astrophysics Data System (ADS)

    Scally, Lawrence J.

    This program was implemented by Lawrence J. Scally for a Ph.D. under the EECE department at the University of Colorado at Boulder, with most funding provided by the U.S. Army. Professor Gasiewski, who advised and guided the entire program, has a long history with this type of program. The program developed a transmissometer more advanced than previous instruments, called the Terahertz Atmospheric and Ionospheric Propagation, Absorption and Scattering System (TAIPAS), operating on an open path between the roof of the University of Colorado EE building and a mesa owned by the National Institute of Standards and Technology (NIST); NIST has invested money, a site, and support in the program. Besides designing and building the transmissometer, which had never been accomplished at this level, the system also analyzes atmospheric propagation by scanning between 320 GHz and 340 GHz, a band that includes the peak water-absorption frequency at 325.1529 GHz. Processing and characterization of the deterministic and random propagation characteristics of the atmosphere in the real world was begun; this work will continue with various aerosols for decades on the permanently mounted system, which is accessible 24/7 via a network over the CU Virtual Private Network (VPN).

  9. Hybrid models for chemical reaction networks: Multiscale theory and application to gene regulatory systems

    NASA Astrophysics Data System (ADS)

    Winkelmann, Stefanie; Schütte, Christof

    2017-09-01

    Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are demanded. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relation between them. We derive a novel general description of such hybrid models that allows expressing various forms by one type of equation. We also examine to what extent the approaches apply to model extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analysed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small-volume approximations.

  10. Study site characterization. Chapter 2

    Treesearch

    Chris Potter; Richard Birdsey

    2008-01-01

    This chapter is an overview of the main site characterization requirements at landscape-scale sampling locations. The overview is organized according to multiple "Site Attribute" headings that require descriptions throughout a given study site area, leading ultimately to a sufficient overall site characterization. Guidance is provided to describe the major...

  11. Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics

    NASA Technical Reports Server (NTRS)

    Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian

    2010-01-01

    A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and the material removal rate extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.
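    The deterministic bookkeeping behind such a process can be sketched in 1-D with a Preston-style model: predicted removal is the convolution of the dwell-time schedule with the tool influence function. The Gaussian influence function and dwell values below are hypothetical, not the authors' data.

```python
import numpy as np

x = np.arange(101)                        # mm along the mandrel (invented grid)
influence = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)  # Gaussian TIF
influence /= influence.sum()              # normalize: unit removal per second
dwell = np.zeros_like(x, dtype=float)
dwell[40:60] = 2.0                        # dwell 2 s over a 20-mm error zone

# Predicted material removal = dwell schedule (*) tool influence function.
removal = np.convolve(dwell, influence, mode="same")
print(removal[50], removal[0])            # deep removal mid-zone, none far away
```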

  12. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing

    PubMed Central

    Palmer, Tim N.; O’Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173

  13. Deterministic and efficient quantum cryptography based on Bell's theorem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Zengbing; Pan Jianwei; Physikalisches Institut, Universitaet Heidelberg, Philosophenweg 12, 69120 Heidelberg

    2006-05-15

    We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology.

  14. Heart rate variability as determinism with jump stochastic parameters.

    PubMed

    Zheng, Jiongxuan; Skufca, Joseph D; Bollt, Erik M

    2013-08-01

    We use measured heart rate information (RR intervals) to develop a one-dimensional nonlinear map that describes short-term deterministic behavior in the data. Our study suggests that there is a stochastic parameter with persistence which causes the heart rate and rhythm system to wander about a bifurcation point. We propose a modified circle map with a jump-process noise term as a model which can qualitatively capture this behavior of low-dimensional transient determinism with occasional (stochastically defined) jumps from one deterministic system to another within a one-parameter family of deterministic systems.
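    A minimal sketch of the proposed model class (parameter values are invented, not fitted to RR data): a circle map whose drift parameter occasionally jumps, so the trajectory wanders between members of a one-parameter family of deterministic maps.

```python
import numpy as np

rng = np.random.default_rng(3)
K, omega, jump_prob = 0.9, 0.35, 0.02  # map strength, drift, jump rate (invented)
x, xs = 0.1, []
for _ in range(2000):
    if rng.random() < jump_prob:       # rare jump to a new deterministic map
        omega = float(np.clip(omega + rng.normal(0.0, 0.05), 0.2, 0.5))
    # deterministic circle-map step for the current parameter value
    x = (x + omega - K / (2 * np.pi) * np.sin(2 * np.pi * x)) % 1.0
    xs.append(x)
xs = np.array(xs)
print(len(xs), xs.min() >= 0.0 and xs.max() < 1.0)  # stays on the circle
```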

  15. Stochastic assembly in a subtropical forest chronosequence: evidence from contrasting changes of species, phylogenetic and functional dissimilarity over succession.

    PubMed

    Mi, Xiangcheng; Swenson, Nathan G; Jia, Qi; Rao, Mide; Feng, Gang; Ren, Haibao; Bebber, Daniel P; Ma, Keping

    2016-09-07

    Deterministic and stochastic processes jointly determine the community dynamics of forest succession. However, it has been widely held in previous studies that deterministic processes dominate forest succession. Furthermore, inference of mechanisms for community assembly may be misleading if based on a single axis of diversity alone. In this study, we evaluated the relative roles of deterministic and stochastic processes along a disturbance gradient by integrating species, functional, and phylogenetic beta diversity in a subtropical forest chronosequence in Southeastern China. We found a general pattern of increasing species turnover, but little-to-no change in phylogenetic and functional turnover over succession at two spatial scales. Meanwhile, the phylogenetic and functional beta diversity were not significantly different from random expectation. This result suggested a dominance of stochastic assembly, contrary to the general expectation that deterministic processes dominate forest succession. On the other hand, we found significant interactions of environment and disturbance and limited evidence for significant deviations of phylogenetic or functional turnover from random expectations for different size classes. This result provided weak evidence of deterministic processes over succession. Stochastic assembly of forest succession suggests that post-disturbance restoration may be largely unpredictable and difficult to control in subtropical forests.

  16. Discrete-State Stochastic Models of Calcium-Regulated Calcium Influx and Subspace Dynamics Are Not Well-Approximated by ODEs That Neglect Concentration Fluctuations

    PubMed Central

    Weinberg, Seth H.; Smith, Gregory D.

    2012-01-01

    Cardiac myocyte calcium signaling is often modeled using deterministic ordinary differential equations (ODEs) and mass-action kinetics. However, spatially restricted “domains” associated with calcium influx are small enough (e.g., 10−17 liters) that local signaling may involve 1–100 calcium ions. Is it appropriate to model the dynamics of subspace calcium using deterministic ODEs or, alternatively, do we require stochastic descriptions that account for the fundamentally discrete nature of these local calcium signals? To address this question, we constructed a minimal Markov model of a calcium-regulated calcium channel and associated subspace. We compared the expected value of fluctuating subspace calcium concentration (a result that accounts for the small subspace volume) with the corresponding deterministic model (an approximation that assumes large system size). When subspace calcium did not regulate calcium influx, the deterministic and stochastic descriptions agreed. However, when calcium binding altered channel activity in the model, the continuous deterministic description often deviated significantly from the discrete stochastic model, unless the subspace volume is unrealistically large and/or the kinetics of the calcium binding are sufficiently fast. This principle was also demonstrated using a physiologically realistic model of calmodulin regulation of L-type calcium channels introduced by Yue and coworkers. PMID:23509597
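    The effect at the heart of the abstract, that E[f(N)] differs from f(E[N]), can be shown with a toy calculation (not the authors' Markov model): with only a few ions in a subspace, the mean of a steep Hill-type regulation term differs from the term evaluated at the mean count.

```python
import numpy as np

rng = np.random.default_rng(4)
mean_ions = 5.0                           # tiny subspace: ~5 free Ca2+ ions
n = rng.poisson(mean_ions, size=100_000)  # fluctuating ion counts

def hill(c, kd=5.0, h=4):
    """Steep cooperative regulation of channel activity (hypothetical)."""
    return c**h / (kd**h + c**h)

stochastic_mean = hill(n.astype(float)).mean()   # accounts for fluctuations
deterministic = hill(mean_ions)                  # ODE evaluates at the mean
print(round(stochastic_mean, 3), round(deterministic, 3))
```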

  17. Mixing Single Scattering Properties in Vector Radiative Transfer for Deterministic and Stochastic Solutions

    NASA Astrophysics Data System (ADS)

    Mukherjee, L.; Zhai, P.; Hu, Y.; Winker, D. M.

    2016-12-01

    Among the primary factors that determine the polarized radiation field of a turbid medium are the single scattering properties of the medium. When multiple types of scatterers are present, their single scattering properties need to be properly mixed in order to find solutions to the vector radiative transfer (VRT) theory. VRT solvers can be divided into two types: deterministic and stochastic. A deterministic solver can accept only one set of single scattering properties in its smallest discretized spatial volume, so when the medium contains more than one kind of scatterer, their single scattering properties are averaged and then used as input. A stochastic solver can work with different kinds of scatterers explicitly. In this work, two different mixing schemes are studied using the Successive Order of Scattering (SOS) method and Monte Carlo (MC) methods: one scheme is used for the deterministic solver and the other for the stochastic Monte Carlo method. It is found that the solutions from the two VRT solvers using the two different mixing schemes agree with each other extremely well. This confirms the equivalence of the two mixing schemes and also provides a benchmark for the VRT solution for the medium studied.
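    The deterministic solver's averaging step can be sketched with the standard extinction-weighted mixing rule (the component optical properties below are invented): bulk single-scattering albedo is total scattering over total extinction, and the bulk asymmetry parameter is the scattering-weighted mean.

```python
import numpy as np

beta_ext = np.array([1.00, 0.25])  # extinction coefficients: aerosol, molecule
ssa = np.array([0.90, 1.00])       # component single scattering albedos
g = np.array([0.70, 0.00])         # component asymmetry parameters

beta_sca = ssa * beta_ext          # scattering coefficients
bulk_ssa = beta_sca.sum() / beta_ext.sum()      # total scattering / extinction
bulk_g = (beta_sca * g).sum() / beta_sca.sum()  # scattering-weighted mean
print(round(bulk_ssa, 3), round(bulk_g, 3))     # 0.92 0.548
```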

  18. A relationship between eye movement patterns and performance in a precognitive tracking task

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Hartzell, E. J.

    1977-01-01

    Eye movements made by various subjects in the performance of a precognitive tracking task are studied. The tracking task, presented by an antiaircraft artillery (AAA) simulator, has an input forcing function represented by a deterministic aircraft fly-by. The performance of subjects is ranked by two metrics. Good, mediocre, and poor trackers are selected for analysis based on performance during the difficult segment of the tracking task and over replications. Using phase planes to characterize both the eye movement patterns and the displayed error signal, a simple metric is developed to study these patterns. Two characterizations of eye movement strategies are defined and quantified. Using these two types of eye strategies, two conclusions are obtained about good, mediocre, and poor trackers. First, an eye tracker who uses a fixed strategy will consistently perform better. Second, the best fixed strategy is crosshair fixation.

  19. Multifractality in plasma edge electrostatic turbulence

    NASA Astrophysics Data System (ADS)

    Neto, C. Rodrigues; Guimarães-Filho, Z. O.; Caldas, I. L.; Nascimento, I. C.; Kuznetsov, Yu. K.

    2008-08-01

    Plasma edge turbulence in Tokamak Chauffage Alfvén Brésilien (TCABR) [R. M. O. Galvão et al., Plasma Phys. Contr. Fusion 43, 1181 (2001)] is investigated for multifractal properties of the fluctuating floating electrostatic potential measured by Langmuir probes. The multifractality in this signal is characterized by the full multifractal spectra determined by applying the wavelet transform modulus maxima. In this work, the dependence of the multifractal spectrum on the radial position is presented. The multifractality degree inside the plasma increases with the radial position, reaching a maximum near the plasma edge and becoming almost constant in the scrape-off layer. Comparisons between these results and those obtained for random test time series with the same Hurst exponents and data length statistically confirm the reported multifractal behavior. Moreover, the persistence of these signals, characterized by their Hurst exponent, presents a radial profile similar to that of the deterministic component estimated from analysis based on dynamical recurrences.

  20. Decentralized stochastic control

    NASA Technical Reports Server (NTRS)

    Speyer, J. L.

    1980-01-01

    Decentralized stochastic control is characterized by being decentralized in that the information available to one controller is not the same as the information available to another controller. The system, including its information, has a stochastic or uncertain component. This complicates the development of decision rules, which one would otherwise determine under the assumption that the system is deterministic. The system is dynamic, which means present decisions affect future system responses and the information in the system. This circumstance presents a complex problem in which tools like dynamic programming are no longer applicable. These difficulties are discussed from an intuitive viewpoint. Particular assumptions are introduced which allow a limited theory that produces mechanizable affine decision rules.

  1. Performance evaluation of a distance learning program.

    PubMed

    Dailey, D J; Eno, K R; Brinkley, J F

    1994-01-01

    This paper presents a performance metric which uses a single number to characterize the response time for a non-deterministic client-server application operating over the Internet. When applied to a Macintosh-based distance learning application called the Digital Anatomist Browser, the metric allowed us to observe that "A typical student doing a typical mix of Browser commands on a typical data set will experience the same delay if they use a slow Macintosh on a local network or a fast Macintosh on the other side of the country accessing the data over the Internet." The methodology presented is applicable to other client-server applications that are rapidly appearing on the Internet.

  2. Structural reliability assessment of the Oman India Pipeline

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Sharif, A.M.; Preston, R.

    1996-12-31

    Reliability techniques are increasingly finding application in design. The special design conditions for the deep water sections of the Oman India Pipeline dictate their use, since the experience basis for application of standard deterministic techniques is inadequate. The paper discusses the reliability analysis as applied to the Oman India Pipeline, including selection of a collapse model, characterization of the variability in the parameters that affect pipe resistance to collapse, and implementation of first and second order reliability analyses to assess the probability of pipe failure. The reliability analysis results are used as the basis for establishing the pipe wall thickness requirements for the pipeline.

  3. New Criterion and Tool for Caltrans Seismic Hazard Characterization

    NASA Astrophysics Data System (ADS)

    Shantz, T.; Merriam, M.; Turner, L.; Chiou, B.; Liu, X.

    2008-12-01

    Caltrans recently adopted new procedures for the development of response spectra for structure design. These procedures incorporate both deterministic and probabilistic criteria. The Next Generation Attenuation (NGA) models (2008) are used for deterministic assessment (using a revised late-Quaternary age fault database), and the USGS 2008 5% in 50-year hazard maps are used for probabilistic assessment. A minimum deterministic spectrum based on a M6.5 earthquake at 12 km is also included. These spectra are enveloped and the largest values used. A new publicly available web-based design tool for calculating the design spectrum will be used for calculations. The tool is built on a Windows-Apache-MySQL-PHP (WAMP) platform and integrates GoogleMaps for increased flexibility in the tool's use. Links to Caltrans data such as pre-construction logs of test borings assist in the estimation of Vs30 values used in the new procedures. Basin effects based on new models developed for the CFM, for the San Francisco Bay area by the USGS, and by Thurber (2008) are also incorporated. It is anticipated that additional layers such as CGS Seismic Hazard Zone maps will be added in the future. Application of the new criterion will result in expected higher levels of ground motion at many bridges west of the Coast Ranges. In eastern California, use of the NGA relationships for strike-slip faulting (the dominant sense of motion in California) will often result in slightly lower expected values for bridges. The expected result is a more realistic prediction of ground motions at bridges, in keeping with those motions developed for other large-scale and important structures. The tool is based on a simplified fault map of California, so it will not be used for more detailed evaluations such as surface rupture determination. Announcements regarding tool availability (expected to be in early 2009) are at http://www.dot.ca.gov/research/index.htm
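    The enveloping step described above can be sketched as an element-wise maximum over the three spectra. All spectral ordinates below are invented; the real procedure works on full NGA and USGS hazard output.

```python
import numpy as np

periods = np.array([0.1, 0.3, 0.5, 1.0, 2.0])  # s (invented grid)
det = np.array([0.9, 1.1, 0.8, 0.50, 0.25])    # g, deterministic (NGA-style)
prob = np.array([0.8, 1.2, 0.9, 0.45, 0.30])   # g, probabilistic (5% in 50 yr)
floor = np.array([0.6, 0.7, 0.6, 0.35, 0.20])  # g, minimum (M6.5 at 12 km)

# Envelope: at each period take the largest ordinate, never below the floor.
design = np.maximum(np.maximum(det, prob), floor)
print(design.tolist())   # [0.9, 1.2, 0.9, 0.5, 0.3]
```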

  4. Deterministic models for traffic jams

    NASA Astrophysics Data System (ADS)

    Nagel, Kai; Herrmann, Hans J.

    1993-10-01

    We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car.
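    The integer-valued case can be sketched as a deterministic cellular automaton in the spirit of the model above (rules simplified, parameters invented): cars on a ring accelerate by one unit up to v_max but never move farther than the gap ahead, with no random braking.

```python
import numpy as np

L, vmax = 30, 5
road = -np.ones(L, dtype=int)        # -1 marks an empty cell, else velocity
road[[0, 3, 6, 9, 12]] = 0           # five cars, initially at rest

for _ in range(100):
    pos = np.flatnonzero(road >= 0)              # sorted car positions
    gaps = (np.roll(pos, -1) - pos - 1) % L      # empty cells to the next car
    v = np.minimum(np.minimum(road[pos] + 1, vmax), gaps)  # accelerate, no crash
    road = -np.ones(L, dtype=int)
    road[(pos + v) % L] = v

print(np.count_nonzero(road >= 0))   # car number is conserved: prints 5
```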

  5. 10 CFR 960.3-2-3 - Recommendation of sites for characterization.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Recommendation of sites for characterization. 960.3-2-3... POTENTIAL SITES FOR A NUCLEAR WASTE REPOSITORY Implementation Guidelines § 960.3-2-3 Recommendation of sites... President not less than three candidate sites for such characterization. The recommendation decision shall...

  6. 10 CFR 960.3-1-4-3 - Site recommendation for characterization.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 4 2011-01-01 2011-01-01 false Site recommendation for characterization. 960.3-1-4-3... POTENTIAL SITES FOR A NUCLEAR WASTE REPOSITORY Implementation Guidelines § 960.3-1-4-3 Site recommendation for characterization. The evidence required to support the recommendation of a site as a candidate...

  7. Soil pH mediates the balance between stochastic and deterministic assembly of bacteria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Binu M.; Stegen, James C.; Kim, Mincheol

    Little is known about the factors affecting the relative influence of stochastic and deterministic processes that govern the assembly of microbial communities in successional soils. Here, we conducted a meta-analysis of bacterial communities using six different successional soil data sets, scattered across different regions, with different pH conditions in early and late successional soils. We found that soil pH was the best predictor of bacterial community assembly and of the relative importance of stochastic and deterministic processes along successional soils. Extreme acidic or alkaline pH conditions lead to assembly of phylogenetically more clustered bacterial communities through deterministic processes, whereas pH conditions close to neutral lead to phylogenetically less clustered bacterial communities with more stochasticity. We suggest that the influence of pH, rather than successional age, is the main driving force producing trends in the phylogenetic assembly of bacteria, and that pH also influences the relative balance of stochastic and deterministic processes along successional soils. Given that pH had a much stronger association with community assembly than did successional age, we evaluated whether the inferred influence of pH was maintained when studying globally distributed samples collected without regard for successional age. This dataset confirmed the strong influence of pH, suggesting that the influence of soil pH on community assembly processes occurs globally. Extreme pH conditions likely exert more stringent limits on survival and fitness, imposing strong selective pressures through ecological and evolutionary time. Taken together, these findings suggest that the degree to which stochastic vs. deterministic processes shape soil bacterial community assembly is a consequence of soil pH rather than successional age.

  8. The meta-Gaussian Bayesian Processor of forecasts and associated preliminary experiments

    NASA Astrophysics Data System (ADS)

    Chen, Fajing; Jiao, Meiyan; Chen, Jing

    2013-04-01

    Public weather services are trending toward providing users with probabilistic weather forecasts, in place of traditional deterministic forecasts. Probabilistic forecasting techniques are continually being improved to optimize available forecasting information. The Bayesian Processor of Forecast (BPF), a new statistical method for probabilistic forecast, can transform a deterministic forecast into a probabilistic forecast according to the historical statistical relationship between observations and forecasts generated by that forecasting system. This technique accounts for the typical forecasting performance of a deterministic forecasting system in quantifying the forecast uncertainty. The meta-Gaussian likelihood model is suitable for a variety of stochastic dependence structures with monotone likelihood ratios. The meta-Gaussian BPF adopting this kind of likelihood model can therefore be applied across many fields, including meteorology and hydrology. The Bayes theorem with two continuous random variables and the normal-linear BPF are briefly introduced. The meta-Gaussian BPF for a continuous predictand using a single predictor is then presented and discussed. The performance of the meta-Gaussian BPF is tested in a preliminary experiment. Control forecasts of daily surface temperature at 0000 UTC at Changsha and Wuhan stations are used as the deterministic forecast data. These control forecasts are taken from ensemble predictions with a 96-h lead time generated by the National Meteorological Center of the China Meteorological Administration, the European Centre for Medium-Range Weather Forecasts, and the US National Centers for Environmental Prediction during January 2008. The results of the experiment show that the meta-Gaussian BPF can transform a deterministic control forecast of surface temperature from any one of the three ensemble predictions into a useful probabilistic forecast of surface temperature. 
These probabilistic forecasts quantify the uncertainty of the control forecast; accordingly, the performance of the probabilistic forecasts differs based on the source of the underlying deterministic control forecasts.
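    The conjugate-Gaussian core of the normal-linear BPF mentioned above can be sketched directly (the meta-Gaussian version adds quantile transforms not shown here; all statistics below are invented): a climatological prior for the predictand is updated with a likelihood obtained by regressing historical forecasts on observations.

```python
import numpy as np

m, s2 = 2.0, 9.0     # climatological prior for temperature W: N(m, s2)
a, b, r2 = 0.5, 0.9, 4.0  # historical regression X = a + b*W + e, e ~ N(0, r2)

def bpf_posterior(x):
    """Posterior N(mu, var) of the predictand given a deterministic forecast x."""
    prec = 1.0 / s2 + b * b / r2          # precisions add under conjugacy
    var = 1.0 / prec
    mu = var * (m / s2 + b * (x - a) / r2)
    return mu, var

mu, var = bpf_posterior(x=3.0)
print(round(mu, 3), round(var, 3))  # sharper than the prior: var < s2
```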

  9. Watershed scale response to climate change--Trout Lake Basin, Wisconsin

    USGS Publications Warehouse

    Walker, John F.; Hunt, Randall J.; Hay, Lauren E.; Markstrom, Steven L.

    2012-01-01

    Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Trout River Basin at Trout Lake in northern Wisconsin.

  10. Watershed scale response to climate change--Clear Creek Basin, Iowa

    USGS Publications Warehouse

    Christiansen, Daniel E.; Hay, Lauren E.; Markstrom, Steven L.

    2012-01-01

    Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Clear Creek Basin, near Coralville, Iowa.

  11. Watershed scale response to climate change--Feather River Basin, California

    USGS Publications Warehouse

    Koczot, Kathryn M.; Markstrom, Steven L.; Hay, Lauren E.

    2012-01-01

    Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Feather River Basin, California.

  12. Watershed scale response to climate change--South Fork Flathead River Basin, Montana

    USGS Publications Warehouse

    Chase, Katherine J.; Hay, Lauren E.; Markstrom, Steven L.

    2012-01-01

    Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the South Fork Flathead River Basin, Montana.

  13. Watershed scale response to climate change--Cathance Stream Basin, Maine

    USGS Publications Warehouse

    Dudley, Robert W.; Hay, Lauren E.; Markstrom, Steven L.; Hodgkins, Glenn A.

    2012-01-01

    Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Cathance Stream Basin, Maine.

  14. Watershed scale response to climate change--Pomperaug River Watershed, Connecticut

    USGS Publications Warehouse

    Bjerklie, David M.; Hay, Lauren E.; Markstrom, Steven L.

    2012-01-01

    Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Pomperaug River Basin at Southbury, Connecticut.

  15. Watershed scale response to climate change--Starkweather Coulee Basin, North Dakota

    USGS Publications Warehouse

    Vining, Kevin C.; Hay, Lauren E.; Markstrom, Steven L.

    2012-01-01

    Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Starkweather Coulee Basin near Webster, North Dakota.

  16. Watershed scale response to climate change--Sagehen Creek Basin, California

    USGS Publications Warehouse

    Markstrom, Steven L.; Hay, Lauren E.; Regan, R. Steven

    2012-01-01

    Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Sagehen Creek Basin near Truckee, California.

  17. Watershed scale response to climate change--Sprague River Basin, Oregon

    USGS Publications Warehouse

    Risley, John; Hay, Lauren E.; Markstrom, Steven L.

    2012-01-01

    Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Sprague River Basin near Chiloquin, Oregon.

  18. Watershed scale response to climate change--Black Earth Creek Basin, Wisconsin

    USGS Publications Warehouse

    Hunt, Randall J.; Walker, John F.; Westenbroek, Steven M.; Hay, Lauren E.; Markstrom, Steven L.

    2012-01-01

    Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Black Earth Creek Basin, Wisconsin.

  19. Watershed scale response to climate change--East River Basin, Colorado

    USGS Publications Warehouse

    Battaglin, William A.; Hay, Lauren E.; Markstrom, Steven L.

    2012-01-01

    Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the East River Basin, Colorado.

  20. Watershed scale response to climate change--Naches River Basin, Washington

    USGS Publications Warehouse

    Mastin, Mark C.; Hay, Lauren E.; Markstrom, Steven L.

    2012-01-01

    Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Naches River Basin below Tieton River in Washington.

  1. Watershed scale response to climate change--Flint River Basin, Georgia

    USGS Publications Warehouse

    Hay, Lauren E.; Markstrom, Steven L.

    2012-01-01

    Fourteen basins for which the Precipitation Runoff Modeling System has been calibrated and evaluated were selected as study sites. Precipitation Runoff Modeling System is a deterministic, distributed parameter watershed model developed to evaluate the effects of various combinations of precipitation, temperature, and land use on streamflow and general basin hydrology. Output from five General Circulation Model simulations and four emission scenarios were used to develop an ensemble of climate-change scenarios for each basin. These ensembles were simulated with the corresponding Precipitation Runoff Modeling System model. This fact sheet summarizes the hydrologic effect and sensitivity of the Precipitation Runoff Modeling System simulations to climate change for the Flint River Basin at Montezuma, Georgia.

  2. Cognitive Diagnostic Analysis Using Hierarchically Structured Skills

    ERIC Educational Resources Information Center

    Su, Yu-Lan

    2013-01-01

    This dissertation proposes two modified cognitive diagnostic models (CDMs), the deterministic, inputs, noisy, "and" gate with hierarchy (DINA-H) model and the deterministic, inputs, noisy, "or" gate with hierarchy (DINO-H) model. Both models incorporate the hierarchical structures of the cognitive skills in the model estimation…
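
The standard DINA item response function that these hierarchical variants build on can be sketched as follows. The prerequisite filter shown is a plausible reading of "hierarchically structured skills" from the abstract, not the dissertation's exact formulation:

```python
def dina_prob(alpha, q, slip, guess):
    """P(correct) under the DINA model: an examinee with every attribute the
    item requires (eta = 1) answers correctly with probability 1 - slip;
    otherwise the success probability is the guessing parameter."""
    eta = all(a == 1 for a, needed in zip(alpha, q) if needed == 1)
    return (1 - slip) if eta else guess

def respects_hierarchy(alpha, prereq):
    """Keep only attribute profiles consistent with a prerequisite map:
    prereq[k] lists attributes that must be mastered before attribute k.
    (Hypothetical way of imposing the skill hierarchy on the profile space.)"""
    return all(all(alpha[p] == 1 for p in prereq.get(k, []))
               for k, a in enumerate(alpha) if a == 1)
```

Restricting estimation to profiles that pass `respects_hierarchy` shrinks the latent space, which is the practical benefit of encoding a skill hierarchy.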

  3. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE PAGES

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
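
For orientation, the standard stochastic (perturbed-observation) EnKF analysis step that the mean-field, density-based variant approximates can be sketched as below. This is a textbook update, not the paper's DMFEnKF algorithm, which replaces the ensemble with a PDE-solved density and a quadrature rule:

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Perturbed-observation EnKF update.
    X: (d, N) forecast ensemble; y: (m,) observation;
    H: (m, d) observation operator; R: (m, m) obs-error covariance."""
    d, N = X.shape
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                      # ensemble anomalies
    P = A @ A.T / (N - 1)                           # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # Each member assimilates an independently perturbed observation.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)                      # analysis ensemble

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 50))          # toy 2-D state, 50 members
H = np.eye(2)
R = 0.1 * np.eye(2)
Xa = enkf_analysis(X, np.array([1.0, -1.0]), H, R, rng)
```

The analysis mean is pulled from the forecast mean toward the observation, with weight set by the gain.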

  4. Active temporal multiplexing of indistinguishable heralded single photons

    PubMed Central

    Xiong, C.; Zhang, X.; Liu, Z.; Collins, M. J.; Mahendra, A.; Helt, L. G.; Steel, M. J.; Choi, D. -Y.; Chae, C. J.; Leong, P. H. W.; Eggleton, B. J.

    2016-01-01

    It is a fundamental challenge in quantum optics to deterministically generate indistinguishable single photons through non-deterministic nonlinear optical processes, due to the intrinsic coupling of single- and multi-photon-generation probabilities in these processes. Actively multiplexing photons generated in many temporal modes can decouple these probabilities, but key issues are to minimize resource requirements to allow scalability, and to ensure indistinguishability of the generated photons. Here we demonstrate the multiplexing of photons from four temporal modes solely using fibre-integrated optics and off-the-shelf electronic components. We show a 100% enhancement to the single-photon output probability without introducing additional multi-photon noise. Photon indistinguishability is confirmed by a fourfold Hong–Ou–Mandel quantum interference with a 91±16% visibility after subtracting multi-photon noise due to high pump power. Our demonstration paves the way for scalable multiplexing of many non-deterministic photon sources to a single near-deterministic source, which will be of benefit to future quantum photonic technologies. PMID:26996317
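
The decoupling described above follows from simple probability: keeping the per-mode heralding probability p low suppresses multi-photon events, while multiplexing N temporal modes raises the chance of at least one herald toward 1 − (1 − p)^N. A quick illustrative calculation (p is a made-up value, not the experiment's, and switching losses are ignored):

```python
def multiplexed_success(p, n):
    """Probability that at least one of n temporal modes heralds a photon."""
    return 1 - (1 - p) ** n

p = 0.05          # hypothetical per-mode heralding probability
single = p
four_modes = multiplexed_success(p, 4)
print(f"{four_modes:.4f}, enhancement x{four_modes / single:.2f}")
```

For small p the four-mode probability approaches 4p, so lossless switching would nearly quadruple the output; the 100% enhancement reported above reflects the real switching and delay losses of the fibre-integrated implementation.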

  5. Recent progress in the assembly of nanodevices and van der Waals heterostructures by deterministic placement of 2D materials.

    PubMed

    Frisenda, Riccardo; Navarro-Moratalla, Efrén; Gant, Patricia; Pérez De Lara, David; Jarillo-Herrero, Pablo; Gorbachev, Roman V; Castellanos-Gomez, Andres

    2018-01-02

    Designer heterostructures can now be assembled layer-by-layer with unmatched precision thanks to the recently developed deterministic placement methods to transfer two-dimensional (2D) materials. This possibility constitutes the birth of a very active research field on the so-called van der Waals heterostructures. Moreover, these deterministic placement methods also open the door to fabricate complex devices, which would be otherwise very difficult to achieve by conventional bottom-up nanofabrication approaches, and to fabricate fully-encapsulated devices with exquisite electronic properties. The integration of 2D materials with existing technologies such as photonic and superconducting waveguides and fiber optics is another exciting possibility. Here, we review the state-of-the-art of the deterministic placement methods, describing and comparing the different alternative methods available in the literature, and we illustrate their potential to fabricate van der Waals heterostructures, to integrate 2D materials into complex devices and to fabricate artificial bilayer structures where the layers present a user-defined rotational twisting angle.

  6. First-order reliability application and verification methods for semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-11-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.

  7. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    PubMed

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability when physical models are unknown. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs the JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.
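
A minimal just-in-time learning loop of the kind described above can be sketched as generic k-nearest-neighbour local modelling with a residual threshold; this is not the authors' exact JITL-DD formulation, only the common JITL pattern of fitting a local model at query time:

```python
import numpy as np

def jitl_detect(X_hist, y_hist, x_query, y_measured, k=10, thresh=3.0):
    """Fit a local linear model on the k nearest historical samples and
    flag a fault when the prediction residual exceeds the threshold."""
    d = np.linalg.norm(X_hist - x_query, axis=1)
    idx = np.argsort(d)[:k]                      # k nearest neighbours
    Xk = np.c_[np.ones(k), X_hist[idx]]          # local design matrix (with bias)
    w, *_ = np.linalg.lstsq(Xk, y_hist[idx], rcond=None)
    y_pred = np.r_[1.0, x_query] @ w
    residual = abs(y_measured - y_pred)
    return residual > thresh, residual

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = 2.0 * X[:, 0] - X[:, 1]                      # nominal (fault-free) process
fault, r = jitl_detect(X, y, np.array([0.2, 0.1]), y_measured=5.0)
```

Because the model is refit from the neighbourhood of each query, the detector adapts online as new operating points enter the history, which is the property the abstract highlights.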

  8. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d

  9. Neo-deterministic seismic hazard assessment in North Africa

    NASA Astrophysics Data System (ADS)

    Mourabit, T.; Abou Elenean, K. M.; Ayadi, A.; Benouar, D.; Ben Suleman, A.; Bezzeghoud, M.; Cheddadi, A.; Chourak, M.; ElGabry, M. N.; Harbi, A.; Hfaiedh, M.; Hussein, H. M.; Kacem, J.; Ksentini, A.; Jabour, N.; Magrin, A.; Maouche, S.; Meghraoui, M.; Ousadou, F.; Panza, G. F.; Peresan, A.; Romdhane, N.; Vaccari, F.; Zuccolo, E.

    2014-04-01

    North Africa is one of the most earthquake-prone areas of the Mediterranean. Many devastating earthquakes, some of them tsunami-triggering, inflicted heavy loss of life and considerable economic damage on the region. In order to mitigate the destructive impact of earthquakes, the regional seismic hazard in North Africa is assessed using the neo-deterministic, multi-scenario methodology (NDSHA), based on the computation of synthetic seismograms with the modal summation technique at a regular grid of 0.2° × 0.2°. This is the first study aimed at producing NDSHA maps of North Africa covering five countries: Morocco, Algeria, Tunisia, Libya, and Egypt. The key input data for the NDSHA algorithm are earthquake sources, seismotectonic zonation, and structural models. In preparing the input data, it has been essential to go beyond national borders and to adopt a coherent strategy over the whole area. Thanks to the collaborative efforts of the teams involved, it has been possible to properly merge the earthquake catalogues available for each country and to define, with homogeneous criteria, the seismogenic zones, the characteristic focal mechanism associated with each of them, and the structural models used to model wave propagation from the sources to the sites. As a result, reliable seismic hazard maps are produced in terms of maximum displacement (Dmax), maximum velocity (Vmax), and design ground acceleration.

  10. Improving Deterministic Reserve Requirements for Security Constrained Unit Commitment and Scheduling Problems in Power Systems

    NASA Astrophysics Data System (ADS)

    Wang, Fengyu

    Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine adequate reserves in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict transfer capabilities and network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserves by explicitly modeling uncertainties, it still faces scalability and pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of the proposed hourly reserve zones.
Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserves while maintaining the deterministic unit commitment and economic dispatch structure, especially with the consideration of renewables, 2) to develop a market settlement scheme for the proposed dynamic reserve policies such that market efficiency is improved, and 3) to evaluate the market impacts and price signal of the proposed dynamic reserve policies.
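
The zonal requirement criticized above amounts to a simple per-zone feasibility check, which is easy to state and is exactly what ignores intra-zonal congestion. A generic sketch (unit names, zones, and megawatt values are illustrative, not from the thesis):

```python
def zonal_reserve_ok(reserve_by_unit, zone_of_unit, requirement_by_zone):
    """Deterministic zonal check: total reserve held inside each zone must
    meet that zone's requirement. Deliverability within the zone is assumed,
    so intra-zonal congestion is invisible to this test."""
    held = {}
    for unit, r in reserve_by_unit.items():
        zone = zone_of_unit[unit]
        held[zone] = held.get(zone, 0.0) + r
    return all(held.get(z, 0.0) >= req for z, req in requirement_by_zone.items())

ok = zonal_reserve_ok(
    {"G1": 50.0, "G2": 30.0, "G3": 40.0},          # MW of reserve per unit
    {"G1": "west", "G2": "west", "G3": "east"},
    {"west": 70.0, "east": 35.0},
)
```

Dynamic reserve policies replace the static `zone_of_unit` partition with zones that respond to forecast congestion, which is the direction the thesis pursues.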

  11. INCREASING HEAVY OIL RESERVES IN THE WILMINGTON OIL FIELD THROUGH ADVANCED RESERVOIR CHARACTERIZATION AND THERMAL PRODUCTION TECHNOLOGIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott Hara

    2000-02-18

    The project involves using advanced reservoir characterization and thermal production technologies to improve thermal recovery techniques and lower operating and capital costs in a slope and basin clastic (SBC) reservoir in the Wilmington field, Los Angeles Co., CA. Through March 1999, project work has been completed related to data preparation, basic reservoir engineering, developing a deterministic three dimensional (3-D) geologic model, a 3-D deterministic reservoir simulation model, and a rock-log model, well drilling and completions, and surface facilities. Work is continuing on the stochastic geologic model, developing a 3-D stochastic thermal reservoir simulation model of the Fault Block IIA Tar (Tar II-A) Zone, and operational work and research studies to prevent thermal-related formation compaction. Thermal-related formation compaction is a concern of the project team due to observed surface subsidence in the local area above the steamflood project. Last quarter on January 12, the steamflood project lost its inexpensive steam source from the Harbor Cogeneration Plant as a result of the recent deregulation of electrical power rates in California. An operational plan was developed and implemented to mitigate the effects of the two situations. Seven water injection wells were placed in service in November and December 1998 on the flanks of the Phase 1 steamflood area to pressure up the reservoir to fill up the existing steam chest. Intensive reservoir engineering and geomechanics studies are continuing to determine the best ways to shut down the steamflood operations in Fault Block II while minimizing any future surface subsidence. The new 3-D deterministic thermal reservoir simulator model is being used to provide sensitivity cases to optimize production, steam injection, future flank cold water injection and reservoir temperature and pressure.
According to the model, reservoir fill up of the steam chest at the current injection rate of 28,000 BPD and gross and net oil production rates of 7,700 BPD and 750 BOPD (injection to production ratio of 4) will occur in October 1999. At that time, the reservoir should act more like a waterflood and production and cold water injection can be operated at lower net injection rates to be determined. Modeling runs developed this quarter found that varying individual well injection rates to meet added production and local pressure problems by sub-zone could reduce steam chest fill-up by up to one month.

  12. Potential and flux field landscape theory. I. Global stability and dynamics of spatially dependent non-equilibrium systems.

    PubMed

    Wu, Wei; Wang, Jin

    2013-09-28

    We established a potential and flux field landscape theory to quantify the global stability and dynamics of general spatially dependent non-equilibrium deterministic and stochastic systems. We extended our potential and flux landscape theory for spatially independent non-equilibrium stochastic systems described by Fokker-Planck equations to spatially dependent stochastic systems governed by general functional Fokker-Planck equations as well as functional Kramers-Moyal equations derived from master equations. Our general theory is applied to reaction-diffusion systems. For equilibrium spatially dependent systems with detailed balance, the potential field landscape alone, defined in terms of the steady state probability distribution functional, determines the global stability and dynamics of the system. The global stability of the system is closely related to the topography of the potential field landscape in terms of the basins of attraction and barrier heights in the field configuration state space. The effective driving force of the system is generated by the functional gradient of the potential field alone. For non-equilibrium spatially dependent systems, the curl probability flux field is indispensable in breaking detailed balance and creating non-equilibrium condition for the system. A complete characterization of the non-equilibrium dynamics of the spatially dependent system requires both the potential field and the curl probability flux field. While the non-equilibrium potential field landscape attracts the system down along the functional gradient similar to an electron moving in an electric field, the non-equilibrium flux field drives the system in a curly way similar to an electron moving in a magnetic field. 
In the small fluctuation limit, the intrinsic potential field as the small fluctuation limit of the potential field for spatially dependent non-equilibrium systems, which is closely related to the steady state probability distribution functional, is found to be a Lyapunov functional of the deterministic spatially dependent system. Therefore, the intrinsic potential landscape can characterize the global stability of the deterministic system. The relative entropy functional of the stochastic spatially dependent non-equilibrium system is found to be the Lyapunov functional of the stochastic dynamics of the system. Therefore, the relative entropy functional quantifies the global stability of the stochastic system with finite fluctuations. Our theory offers an alternative general approach to other field-theoretic techniques, to study the global stability and dynamics of spatially dependent non-equilibrium field systems. It can be applied to many physical, chemical, and biological spatially dependent non-equilibrium systems.
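
For a spatially independent system with constant diffusion coefficient D, the simplest setting this work generalizes from, the potential-flux decomposition can be written compactly. This is the standard finite-dimensional statement, given here for orientation rather than quoted from the paper:

```latex
% Fokker-Planck dynamics and its steady state (ss):
\partial_t P = -\nabla \cdot \mathbf{J}, \qquad
\mathbf{J} = \mathbf{F}\,P - D\,\nabla P, \qquad
\nabla \cdot \mathbf{J}_{ss} = 0 .
% With the potential defined from the steady-state distribution,
% U = -\ln P_{ss}, the driving force splits into a gradient
% (landscape) part and a curl (flux) part:
\mathbf{F} = -D\,\nabla U + \mathbf{J}_{ss}/P_{ss} .
```

At detailed balance the steady-state flux vanishes and the gradient term alone drives the dynamics, matching the equilibrium case described in the abstract; a nonzero curl flux is precisely what breaks detailed balance.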

  13. Undefined freeform surfaces having deterministic structure: issues of their characterization for functionality and manufacture

    NASA Astrophysics Data System (ADS)

    Whitehouse, David J.

    2016-09-01

    There is an increasing use of surfaces which have structure, an increase in the use of freeform surfaces, and most importantly an increase in the number of surfaces having both characteristics. These can be called multi-function surfaces, where more than one function is helped by the geometrical features: the structure can help one, the freeform another. Alternatively, they can be complementary to optimize a single function, but in all cases both geometries are involved. This paper examines some of the problems posed by having such disparate geometries on one surface; in particular, the methods of characterization needed to help understand the functionality and also to some extent their manufacture. This involves investigating ways of expressing how local and global geometric features of undefined freeform surfaces might influence function and how surface structure on top of or in series with the freeform affects the nature of the characterization. Some methods have been found of identifying possible strategies for tackling the characterization problem, based in part on the principles of least action and on the way that nature has solved the marriage of flexible freeform geometry and structure on surfaces.

  14. Feasibility of Exoplanet Coronagraphy with the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G.; Woodruff, Robert A.; Brown, Robert; Noecker, M. Charley; Cheng, Edward

    2010-01-01

    Herein we report on a preliminary study to assess the use of the Hubble Space Telescope (HST) for the direct detection and spectroscopic characterization of exoplanets and debris disks - an application for which HST was not originally designed. Coronagraphic advances may enable the design of a science instrument that could achieve limiting contrasts approx.10deg beyond 275 milli-arcseconds (4 lambda/D at 800 nm) inner working angle, thereby enabling detection and characterization of several known jovian planets and imaging of debris disks. Advantages of using HST are that it already exists in orbit, its primary mirror is thermally stable, and it is the most characterized space telescope yet flown. However, there is drift of the HST telescope, likely due to thermal effects when crossing the terminator. The drift is well characterized and consists of a larger deterministic component and a smaller stochastic component. It is the effect of this drift versus the sensing and control bandwidth of the instrument that would likely limit HST coronagraphic performance. Herein we discuss the science case, quantify the limiting factors and assess the feasibility of using HST for exoplanet discovery using a hypothetical new instrument. Keywords: Hubble Space Telescope, coronagraphy, exoplanets, telescopes

  15. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. For more complex models, such as the multi-strain dynamics describing the virus-host interaction in dengue fever, even recently developed parameter estimation techniques, like maximum likelihood iterated filtering, reach their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and the deterministic skeleton. The deterministic system on its own already displays complex dynamics, up to deterministic chaos and the coexistence of multiple attractors.
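
    The idea of a "deterministic skeleton" underlying a stochastic epidemic can be illustrated with a minimal sketch (not the multi-strain dengue models of the paper): the same SIR equations, integrated without noise, reproduce themselves exactly, while a noise-perturbed run wanders around that skeleton. All parameter values below are hypothetical.

```python
import numpy as np

def sir_step(s, i, beta, gamma, n, dt=1.0):
    # One Euler step of the deterministic SIR skeleton.
    new_inf = beta * s * i / n * dt
    new_rec = gamma * i * dt
    return s - new_inf, i + new_inf - new_rec

def simulate(beta=0.5, gamma=0.1, n=10000, i0=10, days=100, noise=0.0, seed=0):
    rng = np.random.default_rng(seed)
    s, i = float(n - i0), float(i0)
    infected = []
    for _ in range(days):
        s, i = sir_step(s, i, beta, gamma, n)
        if noise > 0.0:
            # crude demographic-style perturbation around the skeleton
            i = max(i + rng.normal(0.0, noise * np.sqrt(i)), 0.0)
        infected.append(i)
    return np.array(infected)

det = simulate()            # deterministic skeleton: exactly reproducible
sto = simulate(noise=1.0)   # stochastic run around the same skeleton
```

    Re-running the deterministic case yields an identical trajectory, whereas the stochastic run diverges from it; parameter estimation must disentangle exactly this mixture.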

  16. Inherent Conservatism in Deterministic Quasi-Static Structural Analysis

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1997-01-01

    The cause of the long-suspected excessive conservatism in the prevailing deterministic structural safety factor has been identified as an inherent violation of the error propagation laws, committed when statistical data are reduced to deterministic values and then combined algebraically through successive structural computations. These errors are restricted to the applied stress computations, and because means and variations in the tolerance-limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniformly safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and on its improvement and transition to absolute reliability.

  17. 10 CFR 60.18 - Review of site characterization activities. 2

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... developed, and on the progress of waste form and waste package research and development. The semiannual... of site characterization will be established. Other topics related to site characterization shall...

  18. 10 CFR 60.18 - Review of site characterization activities. 2

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... developed, and on the progress of waste form and waste package research and development. The semiannual... of site characterization will be established. Other topics related to site characterization shall...

  19. 10 CFR 60.18 - Review of site characterization activities. 2

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... developed, and on the progress of waste form and waste package research and development. The semiannual... of site characterization will be established. Other topics related to site characterization shall...

  20. 10 CFR 60.18 - Review of site characterization activities. 2

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... developed, and on the progress of waste form and waste package research and development. The semiannual... of site characterization will be established. Other topics related to site characterization shall...

  1. Comparison of space radiation calculations for deterministic and Monte Carlo transport codes

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo

    For space radiation protection of astronauts and electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit, and HETC-HEDS. The two families complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. It is therefore important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these codes in their space radiation applications by comparing their outputs for the same space radiation environments, shielding geometry, and material. Typical space radiation environments such as the 1977 solar minimum galactic cosmic ray environment are used as well-defined inputs, and simple geometries made of aluminum, water, and/or polyethylene represent the shielding material. We then compare various outputs of these codes, such as dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes, and at the same time about the qualitative and quantitative features the codes have in common.

  2. Extraction of angle deterministic signals in the presence of stationary speed fluctuations with cyclostationary blind source separation

    NASA Astrophysics Data System (ADS)

    Delvecchio, S.; Antoni, J.

    2012-02-01

    This paper addresses the use of a cyclostationary blind source separation algorithm (namely RRCR) to extract angle-deterministic signals from mechanical rotating machines in the presence of stationary speed fluctuations. This means that only phase fluctuations while the machine runs in steady-state conditions are considered; run-up and run-down speed variations are not taken into account. The machine is also assumed to run in idle conditions, so non-stationary phenomena due to the load are not considered. It is theoretically established that, under such operating conditions, the deterministic (periodic) signal in the angle domain becomes cyclostationary at first and second orders in the time domain. This fact justifies the use of the RRCR algorithm, which can extract the angle-deterministic signal directly from the time domain without performing any kind of interpolation. This is particularly valuable when angular resampling fails because of uncontrolled speed fluctuations. The capability of the proposed approach is verified by means of simulated and actual vibration signals captured on a pneumatic screwdriver handle. In this particular case, not only can the angle-deterministic part be extracted, but the main sources of excitation (i.e. motor shaft imbalance, epicycloidal gear meshing, and air pressure forces) affecting the user's hand during operation can also be separated.
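
    RRCR itself is not reproduced here, but the property the abstract relies on - a periodic signal with stationary phase fluctuations remains cyclostationary in the time domain - can be checked with a short sketch. The first-order cyclic statistic |<x(t) e^{-j2π& alpha;t}>| stays non-zero at the machine's cycle frequency and vanishes elsewhere. The signal model (low-pass filtered phase noise) and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f0, n = 1000.0, 50.0, 20000
t = np.arange(n) / fs
# Stationary phase fluctuation: low-pass filtered white noise (hypothetical model).
phi = np.convolve(rng.normal(0.0, 1.0, n), np.ones(200) / 200, mode="same")
x = np.cos(2 * np.pi * f0 * t + 0.5 * phi)

def cyclic_mean(x, t, alpha):
    # First-order cyclic statistic: magnitude of the Fourier coefficient
    # of the signal at candidate cyclic frequency alpha.
    return abs(np.mean(x * np.exp(-2j * np.pi * alpha * t)))

on = cyclic_mean(x, t, f0)          # at the cycle frequency
off = cyclic_mean(x, t, f0 * 1.37)  # at an incommensurate frequency
```

    Despite the phase jitter, the statistic at the cycle frequency remains near 0.5 (the amplitude of the cosine's Fourier coefficient), while the off-cycle value is near zero.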

  3. Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise

    NASA Astrophysics Data System (ADS)

    Meyers, Stephen R.; Hinnov, Linda A.

    2010-08-01

    Deterministic orbital controls on climate variability are commonly inferred to dominate across timescales of 10^4-10^6 years, although some studies have suggested that stochastic processes may be of equal or greater importance. Here we explicitly quantify changes in deterministic orbital processes (forcing and/or pacing) versus stochastic climate processes during the Plio-Pleistocene, via time-frequency analysis of two prominent foraminifera oxygen isotopic stacks. Our results indicate that development of the Northern Hemisphere ice sheet is paralleled by an overall amplification of both deterministic and stochastic climate energy, but their relative dominance is variable. The progression from a more stochastic early Pliocene to a strongly deterministic late Pleistocene is primarily accommodated during two transitory phases of Northern Hemisphere ice sheet growth. This long-term trend is punctuated by “stochastic events,” which we interpret as evidence for abrupt reorganization of the climate system at the initiation and termination of the mid-Pleistocene transition and at the onset of Northern Hemisphere glaciation. In addition to highlighting a complex interplay between deterministic and stochastic climate change during the Plio-Pleistocene, our results support an early onset for Northern Hemisphere glaciation (between 3.5 and 3.7 Ma) and reveal some new characteristics of the orbital signal response, such as the puzzling emergence of 100 ka and 400 ka cyclic climate variability during theoretical eccentricity nodes.
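
    A crude version of partitioning climate variance into deterministic (narrow-band, orbital) and stochastic (broadband) energy can be sketched with a periodogram. The frequencies, the AR(1) red-noise model, and the band width below are illustrative assumptions, not the paper's time-frequency method.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 1024, 1.0          # hypothetical sampling: 1 sample per kyr
t = np.arange(n) * dt
f_orb = 1 / 41.0           # "obliquity-like" frequency, for illustration only
sig = np.sin(2 * np.pi * f_orb * t)

# AR(1) red noise as the stochastic component.
noise = np.zeros(n)
for k in range(1, n):
    noise[k] = 0.7 * noise[k - 1] + rng.normal()
x = sig + 0.5 * noise

# Periodogram-based split: power within +/- 2 bins of f_orb vs. total power.
spec = np.abs(np.fft.rfft(x - x.mean())) ** 2
freqs = np.fft.rfftfreq(n, dt)
band = np.abs(freqs - f_orb) < 2.0 / n
det_frac = spec[band].sum() / spec.sum()
```

    Tracking such a ratio in sliding windows would give a rough time-frequency picture of how the deterministic fraction evolves.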

  4. Tag-mediated cooperation with non-deterministic genotype-phenotype mapping

    NASA Astrophysics Data System (ADS)

    Zhang, Hong; Chen, Shu

    2016-01-01

    Tag-mediated cooperation provides a helpful framework for resolving evolutionary social dilemmas. However, most previous studies have not taken into account the genotype-phenotype distinction in tags, which may play an important role in the process of evolution. To take this into consideration, we introduce non-deterministic genotype-phenotype mapping into a tag-based model with a spatial prisoner's dilemma. By our definition, similarity between genotypic tags does not directly imply similarity between phenotypic tags. We find that the non-deterministic mapping from genotypic tag to phenotypic tag has non-trivial effects on tag-mediated cooperation. Although we observe that high levels of cooperation can be established under a wide variety of conditions, especially when the decisiveness is moderate, the uncertainty in the determination of phenotypic tags may have a detrimental effect on the tag mechanism by disturbing the homophilic interaction structure that explains the promotion of cooperation in tag systems. Furthermore, the non-deterministic mapping may undermine the robustness of the tag mechanism with respect to factors such as the structure of the tag space and the tag flexibility. This observation warns against applying classical tag-based models to the analysis of empirical phenomena when the genotype-phenotype distinction is significant in the real world. Non-deterministic genotype-phenotype mapping thus provides a new perspective on the understanding of tag-mediated cooperation.

  5. SMARTE'S SITE CHARACTERIZATION TOOL

    EPA Science Inventory

    Site Characterization involves collecting environmental data to evaluate the nature and extent of contamination. Environmental data could consist of chemical analyses of soil, sediment, water or air samples. Typically site characterization data are statistically evaluated for thr...

  6. Site characterization report for the basalt waste isolation project. Volume II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1982-11-01

    The reference location for a repository in basalt for the terminal storage of nuclear wastes on the Hanford Site and the candidate horizons within this reference repository location have been identified and the preliminary characterization work in support of the site screening process has been completed. Fifteen technical questions regarding the qualification of the site were identified to be addressed during the detailed site characterization phase of the US Department of Energy-National Waste Terminal Storage Program site selection process. Resolution of these questions will be provided in the final site characterization progress report, currently planned to be issued in 1987, and in the safety analysis report to be submitted with the License Application. The additional information needed to resolve these questions and the plans for obtaining the information have been identified. This Site Characterization Report documents the results of the site screening process, the preliminary site characterization data, the technical issues that need to be addressed, and the plans for resolving these issues. Volume 2 contains chapters 6 through 12: geochemistry; surface hydrology; climatology, meteorology, and air quality; environmental, land-use, and socioeconomic characteristics; repository design; waste package; and performance assessment.

  7. 40 CFR 280.63 - Initial site characterization.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 26 2010-07-01 2010-07-01 false Initial site characterization. 280.63... Hazardous Substances § 280.63 Initial site characterization. (a) Unless directed to do otherwise by the implementing agency, owners and operators must assemble information about the site and the nature of the...

  8. Measuring predictability in ultrasonic signals: an application to scattering material characterization.

    PubMed

    Carrión, Alicia; Miralles, Ramón; Lara, Guillermo

    2014-09-01

    In this paper, we present a novel and completely different approach to the problem of scattering material characterization: measuring the degree of predictability of the time series. Measuring predictability can quantify the signal strength of the deterministic component of the time series relative to the whole acquired series. This relationship carries information about coherent reflections from material grains with respect to the incoherent noise that typically appears in non-destructive testing with ultrasonics. It is a non-parametric technique, commonly used in chaos theory, that does not require any assumptions about attenuation profiles. In highly scattering media (low SNR), it has been shown theoretically that the degree of predictability allows material characterization. The experimental results obtained in this work with 32 cement probes of 4 different porosities demonstrate the ability of this technique to perform classification. It has also been shown that, in this particular application, the measured predictability can be used as an indicator of the porosity percentage of the test samples with great accuracy. Copyright © 2014 Elsevier B.V. All rights reserved.
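
    The abstract does not spell out its predictability measure, but a generic nonlinear-forecasting sketch in the same chaos-theory spirit - delay embedding plus nearest-neighbour prediction - distinguishes a deterministic series from a shuffled surrogate with the same amplitude distribution. Embedding dimension, horizon, and the logistic-map test signal are all assumptions for illustration.

```python
import numpy as np

def predictability(x, dim=3, horizon=1):
    # Delay-embed the series, then predict x[t+horizon] from the analogue
    # (future value) of the nearest embedded neighbour; score by correlation.
    n = len(x) - dim - horizon
    emb = np.array([x[i:i + dim] for i in range(n)])
    target = x[dim + horizon - 1: dim + horizon - 1 + n]
    preds = np.empty(n)
    for i in range(n):
        d = np.abs(emb - emb[i]).sum(axis=1)
        d[i] = np.inf                      # exclude the self-match
        preds[i] = target[np.argmin(d)]
    return np.corrcoef(preds, target)[0, 1]

x = np.empty(500); x[0] = 0.4
for k in range(1, 500):
    x[k] = 3.9 * x[k - 1] * (1 - x[k - 1])   # deterministic (chaotic) series
rng = np.random.default_rng(0)
noise = rng.permutation(x)                    # same values, determinism destroyed

det_score = predictability(x)
noise_score = predictability(noise)
```

    The deterministic series is almost perfectly one-step predictable; the shuffled surrogate is not, which is the contrast the classification exploits.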

  9. Digital flow model of the Chowan River estuary, North Carolina

    USGS Publications Warehouse

    Daniel, C.C.

    1977-01-01

    A one-dimensional deterministic flow model based on the continuity equation has been developed to provide estimates of daily flow past a number of points on the Chowan River estuary of northeastern North Carolina. The digital model, programmed in Fortran IV, computes daily average discharge for nine sites; four of these represent inflow at the mouths of major tributaries, and the other five are at stage stations along the estuary. Because flows within the Chowan River and the lower reaches of its tributaries are tidally affected, flows occur in both upstream and downstream directions. The period of record generated by the model extends from April 1, 1974, to March 31, 1976. During the two years of model operation the average discharge at Edenhouse, near the mouth of the estuary, was 5,830 cfs (cubic feet per second). Daily average flows during this period ranged from 55,900 cfs in the downstream direction on July 17, 1975, to 14,200 cfs in the upstream direction on November 30, 1974.
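
    The continuity-equation approach can be caricatured in a few lines: outflow past a station equals total inflow minus the rate of change of storage, with storage inferred from stage. The prism-storage assumption and every number below are invented for illustration; the actual model is considerably more elaborate.

```python
import numpy as np

def route_daily_flow(inflows, stage, area):
    # Continuity: outflow = total inflow - change in storage per day.
    # stage in ft, area in ft^2, flows in cfs; 86,400 s per day.
    storage = stage * area                    # simplistic prism storage
    ds_dt = np.diff(storage) / 86400.0        # cfs
    total_in = np.asarray(inflows)[1:]
    return total_in - ds_dt                   # negative values = upstream flow

inflow = np.array([4000.0, 6000.0, 8000.0, 5000.0])   # hypothetical tributary sum
stage = np.array([2.0, 2.1, 2.3, 2.2])                # hypothetical stages (ft)
q = route_daily_flow(inflow, stage, area=5.0e8)
```

    On the falling-stage day the estuary releases storage, so computed discharge exceeds the inflow; in a tidal reach the same bookkeeping can yield negative (upstream) flows.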

  10. A Unit on Deterministic Chaos for Student Teachers

    ERIC Educational Resources Information Center

    Stavrou, D.; Assimopoulos, S.; Skordoulis, C.

    2013-01-01

    A unit aiming to introduce pre-service teachers of primary education to the limited predictability of deterministic chaotic systems is presented. The unit is based on a commercial chaotic pendulum system connected with a data acquisition interface. The capabilities and difficulties in understanding the notion of limited predictability of 18…

  11. A Deterministic Annealing Approach to Clustering AIRS Data

    NASA Technical Reports Server (NTRS)

    Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander

    2012-01-01

    We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method, the Deterministic Annealing technique.
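
    Deterministic annealing (in the Rose et al. sense) can be sketched in one dimension: soft k-means assignments are computed at a fictitious temperature that is gradually lowered, so cluster centers split deterministically as the temperature passes critical values. This toy is not the AIRS pipeline; all parameters are arbitrary.

```python
import numpy as np

def deterministic_annealing(x, k=2, t0=5.0, t_min=0.01, cool=0.85, seed=0):
    # Soft clustering whose Gibbs "temperature" is lowered on a schedule;
    # assignments harden and centers split as T -> 0.
    rng = np.random.default_rng(seed)
    centers = x.mean() + 0.01 * rng.normal(size=k)   # near-coincident start
    t = t0
    while t > t_min:
        for _ in range(10):                          # relax at this temperature
            d2 = (x[:, None] - centers[None, :]) ** 2
            g = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / t)
            p = g / g.sum(axis=1, keepdims=True)     # soft assignment probs
            centers = (p * x[:, None]).sum(axis=0) / p.sum(axis=0)
        t *= cool
    return np.sort(centers)

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(-2.0, 0.3, 200), rng.normal(2.0, 0.3, 200)])
centers = deterministic_annealing(data)
```

    For this bimodal sample the two recovered centers land near the true component means, something a single mean and standard deviation would mask entirely.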

  12. INTEGRATED PROBABILISTIC AND DETERMINISTIC MODELING TECHNIQUES IN ESTIMATING EXPOSURE TO WATER-BORNE CONTAMINANTS: PART 2 PHARMACOKINETIC MODELING

    EPA Science Inventory

    The Total Exposure Model (TEM) uses deterministic and stochastic methods to estimate the exposure of a person performing daily activities of eating, drinking, showering, and bathing. There were 250 time histories generated, by subject with activities, for the three exposure ro...

  13. Integrability and Chaos: The Classical Uncertainty

    ERIC Educational Resources Information Center

    Masoliver, Jaume; Ros, Ana

    2011-01-01

    In recent years there has been a considerable increase in the publishing of textbooks and monographs covering what was formerly known as random or irregular deterministic motion, now referred to as deterministic chaos. There is still substantial interest in a matter that is included in many graduate and even undergraduate courses on classical…

  14. The development of the deterministic nonlinear PDEs in particle physics to stochastic case

    NASA Astrophysics Data System (ADS)

    Abdelrahman, Mahmoud A. E.; Sohaly, M. A.

    2018-06-01

    In the present work, an accurate method, the Riccati-Bernoulli sub-ODE technique, is used to solve the deterministic and stochastic cases of the Phi-4 equation and the nonlinear Foam Drainage equation. The control of the randomness input is also studied with respect to the stability of the stochastic process solution.

  15. Contemporary Genetics for Gender Researchers: Not Your Grandma's Genetics Anymore

    ERIC Educational Resources Information Center

    Salk, Rachel H.; Hyde, Janet S.

    2012-01-01

    Over the past century, much of genetics was deterministic, and feminist researchers framed justified criticisms of genetics research. However, over the past two decades, genetics research has evolved remarkably and has moved far from earlier deterministic approaches. Our article provides a brief primer on modern genetics, emphasizing contemporary…

  16. Technological Utopia, Dystopia and Ambivalence: Teaching with Social Media at a South African University

    ERIC Educational Resources Information Center

    Rambe, Patient; Nel, Liezel

    2015-01-01

    The discourse of social media adoption in higher education has often been funnelled through utopian and dystopian perspectives, which are polarised but determinist theorisations of human engagement with educational technologies. Consequently, these determinist approaches have obscured a broadened grasp of the situated, socially constructed nature…

  17. Management of oil spill contamination in the Gulf of Patras caused by an accidental subsea blowout.

    PubMed

    Makatounis, Panagiotis Eleftherios; Skancke, Jørgen; Florou, Evanthia; Stamou, Anastasios; Brandvik, Per Johan

    2017-12-01

    A methodology is presented and applied to assess the oil contamination probability in the Gulf of Patras and the environmental impacts on the environmentally sensitive area of the Mesolongi - Aitoliko coastal lagoons, and to examine the effectiveness of response systems. The procedure consists of the following steps: (1) Determination of the computational domain and the main areas of interest, (2) determination of the drilling sites and oil release characteristics, (3) selection of the simulation periods and collection of environmental data, (4) identification of the species of interest and their characteristics, (5) performance of stochastic calculations and oil contamination probability analysis, (6) determination of the worst cases, (7) determination of the characteristics of response systems, (8) performance of deterministic calculations, and (9) assessment of the impact of oil spill in the areas of interest. Stochastic calculations that were performed for three typical seasonal weather variations of the year 2015, three oil release sites and specific oil characteristics, showed that there is a considerable probability of oil pollution that reaches 30% in the Mesolongi - Aitoliko lagoons. Based on a simplified approach regarding the characteristics of the sensitive birds and fish in the lagoons, deterministic calculations showed that 78-90% of the bird population and 2-4% of the fish population are expected to be contaminated in the case of an oil spill without any intervention. The use of dispersants reduced the amount of stranded oil by approximately 16-21% and the contaminated bird population of the lagoons to approximately 70%; however, the affected fish population increased to 6-8.5% due to the higher oil concentration in the water column. Mechanical recovery with skimmers "cleaned" almost 10% of the released oil quantity, but it did not have any noticeable effect on the stranded oil and the impacted bird and fish populations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Neural network model for the prediction of PM10 daily concentrations in two sites in the Western Mediterranean.

    PubMed

    de Gennaro, Gianluigi; Trizio, Livia; Di Gilio, Alessia; Pey, Jorge; Pérez, Noemi; Cusack, Michael; Alastuey, Andrés; Querol, Xavier

    2013-10-01

    An artificial neural network (ANN) was developed and tested to forecast daily PM10 concentrations in two contrasting environments in NE Spain: a regional background site (Montseny) and an urban background site (Barcelona-CSIC) highly influenced by vehicular emissions. In order to predict 24-h average PM10 concentrations, the artificial neural network previously developed by Caselli et al. (2009) was improved by using hourly PM concentrations and deterministic factors such as a Saharan dust alert. In particular, the model inputs for prediction were the hourly PM10 concentrations one day in advance, local meteorological data, and information about air-mass origin. The forecast performance indices for both sites showed better results for the regional background site at Montseny (R^2 = 0.86, SI = 0.75) than for the urban site in Barcelona (R^2 = 0.73, SI = 0.58), which is influenced by local and sometimes unexpected sources. Moreover, a sensitivity analysis conducted to understand the importance of the different input variables showed that local meteorology and air-mass origin are key factors in the model forecasts. This result explains the improvement of the ANN's forecasting performance at the Montseny site with respect to the Barcelona site. The artificial neural network developed in this work could prove useful for predicting PM10 concentrations, especially at regional background sites such as those in the Mediterranean Basin, which are primarily affected by long-range transport. Hence, the artificial neural network presented here could be a powerful tool for obtaining real-time information on air quality status and could aid stakeholders in developing cost-effective control strategies. © 2013 Elsevier B.V. All rights reserved.
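
    The forecasting setup can be sketched with a toy single-hidden-layer network trained on a synthetic persistence-plus-season series. Unlike the paper's ANN, this sketch uses only lagged concentrations (no meteorology, air-mass origin, or dust alert), and the series, architecture, and training constants are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "PM10-like" series: seasonal cycle + day-to-day persistence + noise.
n = 600
t = np.arange(n)
raw = 30.0 + 10.0 * np.sin(2 * np.pi * t / 365.0) + rng.normal(0.0, 3.0, n)
pm = np.empty(n)
pm[0] = raw[0]
for k in range(1, n):
    pm[k] = 0.6 * pm[k - 1] + 0.4 * raw[k]        # persistence

# Inputs: the previous 3 days; target: the next day's concentration.
X = np.column_stack([pm[i:n - 3 + i] for i in range(3)])
y = pm[3:]
X = (X - X.mean(0)) / X.std(0)
yn = (y - y.mean()) / y.std()

# One hidden layer (tanh), trained by full-batch gradient descent.
w1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
w2 = rng.normal(0.0, 0.5, 8); b2 = 0.0
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ w1 + b1)
    pred = h @ w2 + b2
    err = pred - yn
    gw2 = h.T @ err / len(yn); gb2 = err.mean()
    gh = np.outer(err, w2) * (1.0 - h ** 2)       # backprop through tanh
    gw1 = X.T @ gh / len(yn); gb1 = gh.mean(0)
    w1 -= lr * gw1; b1 -= lr * gb1; w2 -= lr * gw2; b2 -= lr * gb2

r2 = 1.0 - np.mean(err ** 2) / np.var(yn)         # in-sample fit
```

    Because the synthetic series is strongly persistent, even this minimal network explains a sizeable share of the variance; the paper's point is that the remainder hinges on deterministic inputs such as meteorology and dust intrusions.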

  19. SEMINAR PUBLICATION: SITE CHARACTERIZATION FOR SUBSURFACE REMEDIATION

    EPA Science Inventory

    This seminar publication provides a comprehensive approach to site characterization for subsurface remediation. Chapter 1 describes a methodology for integrating site characterization with subsurface remediation. The rest of the handbook is divided into three parts. Part I covers...

  20. Demonstration of innovative monitoring technologies at the Savannah River Integrated Demonstration Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossabi, J.; Jenkins, R.A.; Wise, M.B.

    1993-12-31

    The Department of Energy's Office of Technology Development initiated an Integrated Demonstration Program at the Savannah River Site in 1989. The objective of this program is to develop, demonstrate, and evaluate innovative technologies that can improve present-day environmental restoration methods. The Integrated Demonstration Program at SRS is entitled "Cleanup of Organics in Soils and Groundwater at Non-Arid Sites." New technologies in the areas of drilling, characterization, monitoring, and remediation are being demonstrated and evaluated for their technical performance and cost effectiveness in comparison with baseline technologies. Present site characterization and monitoring methods are costly, time-consuming, overly invasive, and often imprecise. Better technologies are required to accurately describe the subsurface geophysical and geochemical features of a site and the nature and extent of contamination. More efficient, nonintrusive characterization and monitoring techniques are necessary for understanding and predicting subsurface transport. More reliable procedures are also needed for interpreting monitoring and characterization data. Site characterization and monitoring are key elements in preventing, identifying, and restoring contaminated sites. The remediation of a site cannot be determined without characterization data, and monitoring may be required for 30 years after site closure.

  1. Investigating the complexity of precipitation sets within California via the fractal-multifractal method

    NASA Astrophysics Data System (ADS)

    Puente, Carlos E.; Maskey, Mahesh L.; Sivakumar, Bellie

    2017-04-01

    A deterministic geometric approach, the fractal-multifractal (FM) method, is adapted in order to encode highly intermittent daily rainfall records observed over a year. Using such a notion, this research investigates the complexity of rainfall in various stations within the State of California. Specifically, records gathered at (from South to North) Cherry Valley, Merced, Sacramento and Shasta Dam, containing 59, 116, 115 and 72 years, all ending at water year 2015, were encoded and analyzed in detail. The analysis reveals that: (a) the FM approach yields faithful encodings of all records, by years, with mean square and maximum errors in accumulated rain that are less than a mere 2% and 10%, respectively; (b) the evolution of the corresponding "best" FM parameters, allowing visualization of the inter-annual rainfall dynamics from a reduced vantage point, exhibit implicit variability that precludes discriminating between sites and extrapolating to the future; (c) the evolution of the FM parameters, restricted to specific regions within space, allows finding sensible future simulations; and (d) the rain signals at all sites may be termed "equally complex," as usage of k-means clustering and conventional phase space analysis of FM parameters yields comparable results for all sites.

  2. Site-Control of InAs/GaAs Quantum Dots with Indium-Assisted Deoxidation

    PubMed Central

    Hussain, Sajid; Pozzato, Alessandro; Tormen, Massimo; Zannier, Valentina; Biasiol, Giorgio

    2016-01-01

    Site-controlled epitaxial growth of InAs quantum dots on GaAs substrates patterned with periodic nanohole arrays relies on the deterministic nucleation of dots into the holes. In the ideal situation, each hole should be occupied exactly by one single dot, with no nucleation onto planar areas. However, the single-dot occupancy per hole is often made difficult by the fact that lithographically-defined holes are generally much larger than the dots, thus providing several nucleation sites per hole. In addition, deposition of a thin GaAs buffer before the dots tends to further widen the holes in the [110] direction. We have explored a method of native surface oxide removal by using indium beams, which effectively prevents hole elongation along [110] and greatly helps single-dot occupancy per hole. Furthermore, as compared to Ga-assisted deoxidation, In-assisted deoxidation is efficient in completely removing surface contaminants, and any excess In can be easily re-desorbed thermally, thus leaving a clean, smooth GaAs surface. Low temperature photoluminescence showed that inhomogeneous broadening is substantially reduced for QDs grown on In-deoxidized patterns, with respect to planar self-assembled dots. PMID:28773333

  3. Hotspot Identification for Shanghai Expressways Using the Quantitative Risk Assessment Method

    PubMed Central

    Chen, Can; Li, Tienan; Sun, Jian; Chen, Feng

    2016-01-01

    Hotspot identification (HSID) is the first and key step of the expressway safety management process. This study presents a new HSID method using the quantitative risk assessment (QRA) technique. Crashes that are likely to happen for a specific site are treated as the risk. The aggregation of the crash occurrence probability for all exposure vehicles is estimated based on the empirical Bayesian method. As for the consequences of crashes, crashes may not only cause direct losses (e.g., occupant injuries and property damages) but also result in indirect losses. The indirect losses are expressed by the extra delays calculated using the deterministic queuing diagram method. The direct losses and indirect losses are uniformly monetized to be considered as the consequences of this risk. The potential costs of crashes, as a criterion to rank high-risk sites, can be explicitly expressed as the sum of the crash probability for all passing vehicles and the corresponding consequences of crashes. A case study on the urban expressways of Shanghai is presented. The results show that the new QRA method for HSID enables the identification of a set of high-risk sites that truly reveal the potential crash costs to society. PMID:28036009
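
    The abstract's two ingredients - empirical Bayes crash frequency and deterministic queuing delay - can be sketched together. The EB weight below is the standard negative-binomial shrinkage form (Hauer-style), not necessarily the paper's exact formulation, and every number is invented.

```python
def eb_expected_crashes(observed, spf_mean, k):
    # Empirical Bayes shrinkage: blend the safety performance function (SPF)
    # prediction with the observed count; k is the NB overdispersion parameter.
    w = 1.0 / (1.0 + spf_mean / k)
    return w * spf_mean + (1.0 - w) * observed

def queueing_delay(arrival, reduced_cap, full_cap, duration_h):
    # Deterministic queuing diagram: queue builds while capacity is reduced,
    # then dissipates; delay is the triangle's area in vehicle-hours.
    build = max(arrival - reduced_cap, 0.0)
    if build == 0.0 or full_cap <= arrival:
        return 0.0
    q_max = build * duration_h                    # vehicles queued at clearance
    dissipate = q_max / (full_cap - arrival)      # hours to clear the queue
    return 0.5 * q_max * (duration_h + dissipate)

# Rank two hypothetical sites by expected crash cost (direct + delay losses).
VOT = 20.0            # value of time, $/veh-h (assumed)
DIRECT = 40000.0      # direct loss per crash, $ (assumed)
sites = {
    "A": eb_expected_crashes(10, 4.0, 2.0),
    "B": eb_expected_crashes(6, 5.0, 2.0),
}
delay_cost = VOT * queueing_delay(4000.0, 3000.0, 5000.0, 0.5)
risk = {s: n * (DIRECT + delay_cost) for s, n in sites.items()}
```

    Site A's EB estimate of 8.0 crashes sits between its SPF prediction (4) and its observed count (10), and the potential-cost ranking follows the EB frequencies scaled by the monetized direct and delay consequences.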

  4. Deterministic chaos in an ytterbium-doped mode-locked fiber laser

    NASA Astrophysics Data System (ADS)

    Mélo, Lucas B. A.; Palacios, Guillermo F. R.; Carelli, Pedro V.; Acioli, Lúcio H.; Rios Leite, José R.; de Miranda, Marcio H. G.

    2018-05-01

    We experimentally study the nonlinear dynamics of a femtosecond ytterbium-doped mode-locked fiber laser. With the laser operating in the pulsed regime, a route to chaos is presented, proceeding from stable mode-locking through period-two, period-four, chaos, and period-three regimes. Return maps and bifurcation diagrams were extracted from time series for each regime. Analysis of the time series with the laser operating in the quasi-mode-locked regime reveals deterministic chaos described by a one-dimensional Rossler map. A positive Lyapunov exponent λ = 0.14 confirms the deterministic chaos of the system. We suggest an explanation for the observed map by relating gain saturation and intra-cavity loss.
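
    The positive-Lyapunov-exponent test applied above can be illustrated on a textbook one-dimensional map (the logistic map at r = 4, not the laser data): the exponent is the orbit average of log|f'(x)|, and λ > 0 indicates chaos. For this map the exact value is ln 2.

```python
import numpy as np

def lyapunov_1d(f, df, x0, n=20000, burn=1000):
    # Lyapunov exponent of a 1-D map: average of log|f'(x)| along the orbit.
    x = x0
    for _ in range(burn):          # discard the transient
        x = f(x)
    acc = 0.0
    for _ in range(n):
        acc += np.log(abs(df(x)))
        x = f(x)
    return acc / n

r = 4.0
lam = lyapunov_1d(lambda x: r * x * (1 - x),   # logistic map
                  lambda x: r * (1 - 2 * x),   # its derivative
                  0.3)
```

    The estimate converges to ln 2 ≈ 0.693 > 0, the numerical signature of deterministic chaos that the paper reports (λ = 0.14) for its reconstructed laser map.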

  5. The viability of ADVANTG deterministic method for synthetic radiography generation

    NASA Astrophysics Data System (ADS)

    Bingham, Andrew; Lee, Hyoung K.

    2018-07-01

    Fast simulation techniques for generating synthetic radiographic images at high resolution are helpful when new radiation imaging systems are designed. However, the standard stochastic approach requires lengthy run times, with poorer statistics at higher resolution. The viability of a deterministic approach to synthetic radiography image generation was therefore investigated, with the aim of quantifying the computational time decrease over the stochastic method. ADVANTG was compared to MCNP in multiple scenarios, including a small radiography system prototype, to simulate high resolution radiography images. By using the ADVANTG deterministic code to simulate radiography images, the computational time was found to decrease by a factor of 10 to 13 compared to the MCNP stochastic approach while retaining image quality.

  6. Afraid to Start Because the Outcome is Uncertain?: Social Site Characterization as a Tool for Informing Public Engagement Efforts

    USGS Publications Warehouse

    Wade, S.; Greenberg, S.

    2009-01-01

    This paper introduces the concept of social site characterization as a parallel effort to technical site characterization, to be used in evaluating and planning carbon dioxide capture and storage (CCS) projects. Social site characterization, much like technical site characterization, relies on a series of iterative investigations into public attitudes towards a CCS project and the factors that will shape those views. This paper also suggests ways it can be used to design approaches for actively engaging stakeholders and communities in the deployment of CCS projects. This work is informed by observing the site selection process for FutureGen and the implementation of research projects under the Regional Carbon Sequestration Partnership Program. © 2009 Elsevier Ltd. All rights reserved.

  7. The Dripping Handrail Model: Transient Chaos in Accretion Systems

    NASA Technical Reports Server (NTRS)

    Young, Karl; Scargle, Jeffrey D.; Cuzzi, Jeffrey (Technical Monitor)

    1995-01-01

    We define and study a simple dynamical model for accretion systems, the "dripping handrail" (DHR). The time evolution of this spatially extended system is a mixture of periodic and apparently random (but actually deterministic) behavior. The nature of this mixture depends on the values of its physical parameters - the accretion rate, diffusion coefficient, and density threshold. The aperiodic component is a special kind of deterministic chaos called transient chaos. The model can simultaneously exhibit both the quasiperiodic oscillations (QPO) and very low frequency noise (VLFN) that characterize the power spectra of fluctuations of several classes of accretion systems in astronomy. For this reason, our model may be relevant to many such astrophysical systems, including binary stars with accretion onto a compact object - white dwarf, neutron star, or black hole - as well as active galactic nuclei. We describe the systematics of the DHR's temporal behavior, by exploring its physical parameter space using several diagnostics: power spectra, wavelet "scalegrams," and Lyapunov exponents. In addition, we note that for large accretion rates the DHR has periodic modes; the effective pulse shapes for these modes - evaluated by folding the time series at the known period - bear a resemblance to the similarly-determined shapes for some x-ray pulsars. The pulsing observed in some of these systems may be such periodic-mode accretion, and not due to pure rotation as in the standard pulsar model.

  8. Numerical simulation and characterization of trapping noise in InGaP-GaAs heterojunctions devices at high injection

    NASA Astrophysics Data System (ADS)

    Nallatamby, Jean-Christophe; Abdelhadi, Khaled; Jacquet, Jean-Claude; Prigent, Michel; Floriot, Didier; Delage, Sylvain; Obregon, Juan

    2013-03-01

    Commercially available simulators present considerable advantages for performing accurate DC, AC and transient simulations of semiconductor devices, including many fundamental and parasitic effects that are not generally taken into account by in-house simulators. Nevertheless, while the public-domain TCAD simulators we have tested give accurate results for the simulation of diffusion noise, none of them simulates trap-assisted generation-recombination (GR) noise accurately. To overcome this problem, we propose a robust solution for accurately simulating GR noise due to traps. It is based on numerical processing of the output data of one of the public-domain simulators, namely SENTAURUS (from Synopsys). We have linked together, through a dedicated Data Access Component (DAC), the deterministic output data available from SENTAURUS and a powerful, customizable post-processing tool developed on the SCILAB mathematical software package. Robust simulations of GR noise in semiconductor devices can thus be performed by using GR Langevin sources associated with the scalar Green function responses of the device. Our method takes advantage of the accuracy of the deterministic simulations of electronic devices obtained with SENTAURUS. A comparison between 2-D simulations and measurements of low-frequency noise on InGaP-GaAs heterojunctions, at low as well as high injection levels, demonstrates the validity of the proposed simulation tool.

  9. Proposed principles of maximum local entropy production.

    PubMed

    Ross, John; Corlan, Alexandru D; Müller, Stefan C

    2012-07-12

    Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show that they are in error with the help of simple examples of well-known chemical and physical systems. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.

  10. Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming

    PubMed Central

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398
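
    A minimal sketch of the part-selection problem, shrunk to a toy library searched exhaustively (the part names, strengths, and constraint are invented for illustration; the paper formulates this as a MINLP, not brute force):

```python
from itertools import product

# Hypothetical characterized library: relative strengths of promoters
# and ribosome binding sites (invented numbers, not a real library).
promoters = {"pLow": 1.0, "pMed": 4.0, "pHigh": 9.0}
rbs_sites = {"rWeak": 0.5, "rStrong": 2.0}

target = 8.0        # desired expression level (objective)
max_strength = 10.0 # user-defined constraint on total strength

# Exhaustive search over the discrete part space mirrors the
# global-optimality guarantee of a deterministic formulation:
# every feasible combination is scored, so the returned pair is
# provably the best one, unlike a heuristic search.
best = min(
    ((p, r) for p, r in product(promoters, rbs_sites)
     if promoters[p] * rbs_sites[r] <= max_strength),
    key=lambda pr: abs(promoters[pr[0]] * rbs_sites[pr[1]] - target),
)
```

For realistic library sizes the search space explodes combinatorially, which is precisely why the paper resorts to MINLP rather than enumeration.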

  11. Deterministic Role of Collision Cascade Density in Radiation Defect Dynamics in Si

    NASA Astrophysics Data System (ADS)

    Wallace, J. B.; Aji, L. B. Bayu; Shao, L.; Kucheyev, S. O.

    2018-05-01

    The formation of stable radiation damage in solids often proceeds via complex dynamic annealing (DA) processes, involving point defect migration and interaction. The dependence of DA on irradiation conditions remains poorly understood even for Si. Here, we use a pulsed ion beam method to study defect interaction dynamics in Si bombarded in the temperature range from ∼-30 °C to 210 °C with ions in a wide range of masses, from Ne to Xe, creating collision cascades with different densities. We demonstrate that the complexity of the influence of irradiation conditions on defect dynamics can be reduced to a deterministic effect of a single parameter, the average cascade density, calculated by taking into account the fractal nature of collision cascades. For each ion species, the DA rate exhibits two well-defined Arrhenius regions where different DA mechanisms dominate. These two regions intersect at a critical temperature, which depends linearly on the cascade density. The low-temperature DA regime is characterized by an activation energy of ∼0.1 eV, independent of the cascade density. The high-temperature regime, however, exhibits a change in the dominant DA process for cascade densities above ∼0.04 at.%, evidenced by an increase in the activation energy. These results clearly demonstrate a crucial role of the collision cascade density and can be used to predict radiation defect dynamics in Si.

  12. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
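
    A minimal sketch of the SROM idea: a continuous random input is replaced by a small set of samples with probabilities, so a stochastic expectation becomes a finite weighted sum of deterministic solver calls. This sketch uses a simple quantile-based sample choice rather than the optimized selection in the paper, and the forward model is a hypothetical stand-in:

```python
from statistics import NormalDist

def make_srom(dist, m):
    # m-point SROM: samples at equally spaced quantiles, each with
    # probability 1/m (a simple, non-optimized choice; the paper
    # selects samples and probabilities by optimization).
    probs = [1.0 / m] * m
    samples = [dist.inv_cdf((i + 0.5) / m) for i in range(m)]
    return samples, probs

def deterministic_solver(shear_modulus):
    # Hypothetical forward model: displacement inversely
    # proportional to stiffness.
    return 1.0 / shear_modulus

modulus_dist = NormalDist(mu=5.0, sigma=0.5)  # random shear modulus
samples, probs = make_srom(modulus_dist, m=20)

# The stochastic expectation reduces to a non-intrusive weighted sum
# of calls to the existing deterministic solver:
mean_disp = sum(p * deterministic_solver(x) for x, p in zip(samples, probs))
```

Because only plain solver calls appear, any existing deterministic code can be reused unchanged, which is the "non-intrusive" property the abstract emphasizes.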

  13. Deterministic Role of Collision Cascade Density in Radiation Defect Dynamics in Si.

    PubMed

    Wallace, J B; Aji, L B Bayu; Shao, L; Kucheyev, S O

    2018-05-25

    The formation of stable radiation damage in solids often proceeds via complex dynamic annealing (DA) processes, involving point defect migration and interaction. The dependence of DA on irradiation conditions remains poorly understood even for Si. Here, we use a pulsed ion beam method to study defect interaction dynamics in Si bombarded in the temperature range from ∼-30 °C to 210 °C with ions in a wide range of masses, from Ne to Xe, creating collision cascades with different densities. We demonstrate that the complexity of the influence of irradiation conditions on defect dynamics can be reduced to a deterministic effect of a single parameter, the average cascade density, calculated by taking into account the fractal nature of collision cascades. For each ion species, the DA rate exhibits two well-defined Arrhenius regions where different DA mechanisms dominate. These two regions intersect at a critical temperature, which depends linearly on the cascade density. The low-temperature DA regime is characterized by an activation energy of ∼0.1  eV, independent of the cascade density. The high-temperature regime, however, exhibits a change in the dominant DA process for cascade densities above ∼0.04 at.%, evidenced by an increase in the activation energy. These results clearly demonstrate a crucial role of the collision cascade density and can be used to predict radiation defect dynamics in Si.

  14. A DETERMINISTIC GEOMETRIC REPRESENTATION OF TEMPORAL RAINFALL: SENSITIVITY ANALYSIS FOR A STORM IN BOSTON. (R824780)

    EPA Science Inventory

    In an earlier study, Puente and Obregón [Water Resour. Res. 32(1996)2825] reported on the usage of a deterministic fractal–multifractal (FM) methodology to faithfully describe an 8.3 h high-resolution rainfall time series in Boston, gathered every 15 s ...

  15. Seed availability constrains plant species sorting along a soil fertility gradient

    Treesearch

    Bryan L. Foster; Erin J. Questad; Cathy D. Collins; Cheryl A. Murphy; Timothy L. Dickson; Val H. Smith

    2011-01-01

    1. Spatial variation in species composition within and among communities may be caused by deterministic, niche-based species sorting in response to underlying environmental heterogeneity as well as by stochastic factors such as dispersal limitation and variable species pools. An important goal in ecology is to reconcile deterministic and stochastic perspectives of...

  16. The Role of Probability and Intentionality in Preschoolers' Causal Generalizations

    ERIC Educational Resources Information Center

    Sobel, David M.; Sommerville, Jessica A.; Travers, Lea V.; Blumenthal, Emily J.; Stoddard, Emily

    2009-01-01

    Three experiments examined whether preschoolers recognize that the causal properties of objects generalize to new members of the same set given either deterministic or probabilistic data. Experiment 1 found that 3- and 4-year-olds were able to make such a generalization given deterministic data but were at chance when they observed probabilistic…

  17. Service-Oriented Architecture (SOA) Instantiation within a Hard Real-Time, Deterministic Combat System Environment

    ERIC Educational Resources Information Center

    Moreland, James D., Jr

    2013-01-01

    This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…

  18. A Nonlinear Dynamics Approach for Incorporating Wind-Speed Patterns into Wind-Power Project Evaluation

    PubMed Central

    Huffaker, Ray; Bittelli, Marco

    2015-01-01

    Wind-energy production may be expanded beyond regions with high-average wind speeds (such as the Midwest U.S.A.) to sites with lower-average speeds (such as the Southeast U.S.A.) by locating favorable regional matches between natural wind-speed and energy-demand patterns. A critical component of wind-power evaluation is to incorporate wind-speed dynamics reflecting documented diurnal and seasonal behavioral patterns. Conventional probabilistic approaches remove patterns from wind-speed data. These patterns must be restored synthetically before they can be matched with energy-demand patterns. How to accurately restore wind-speed patterns is a vexing problem spurring an expanding line of papers. We propose a paradigm shift in wind power evaluation that employs signal-detection and nonlinear-dynamics techniques to empirically diagnose whether synthetic pattern restoration can be avoided altogether. If the complex behavior of observed wind-speed records is due to nonlinear, low-dimensional, and deterministic system dynamics, then nonlinear dynamics techniques can reconstruct wind-speed dynamics from observed wind-speed data without recourse to conventional probabilistic approaches. In the first study of its kind, we test a nonlinear dynamics approach in an application to Sugarland Wind—the first utility-scale wind project proposed in Florida, USA. We find empirical evidence of a low-dimensional and nonlinear wind-speed attractor characterized by strong temporal patterns that match up well with regular daily and seasonal electricity demand patterns. PMID:25617767
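
    The attractor-reconstruction step can be illustrated with a delay-coordinate (Takens-style) embedding and nearest-neighbor forecasting on a synthetic record (an illustrative deterministic stand-in with daily and seasonal cycles, not the Sugarland Wind data):

```python
import math

# Deterministic "wind-speed" proxy: daily (24 h) plus seasonal cycle.
series = [8 + 2 * math.sin(2 * math.pi * t / 24)
          + math.sin(2 * math.pi * t / 8760)
          for t in range(3000)]

def delay_embed(x, dim, tau):
    # Reconstruct state vectors (x[t], x[t-tau], ..., x[t-(dim-1)*tau]).
    return [tuple(x[t - k * tau] for k in range(dim))
            for t in range((dim - 1) * tau, len(x))]

dim, tau = 3, 6
vecs = delay_embed(series, dim, tau)

# Forecast the final value from the successor of the nearest
# reconstructed state, with no probabilistic model involved.
query = vecs[-2]                 # state just before the last sample
history = vecs[:-2]
nearest = min(range(len(history)),
              key=lambda i: sum((a - b) ** 2
                                for a, b in zip(history[i], query)))
forecast = series[(dim - 1) * tau + nearest + 1]  # neighbor's successor
error = abs(forecast - series[-1])
```

If the dynamics were stochastic rather than low-dimensional and deterministic, nearby reconstructed states would not have similar successors and this forecast would fail, which is essentially the empirical diagnostic the paper applies.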

  19. CPT-based probabilistic and deterministic assessment of in situ seismic soil liquefaction potential

    USGS Publications Warehouse

    Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Der Kiureghian, A.; Cetin, K.O.

    2006-01-01

    This paper presents a complete methodology for both probabilistic and deterministic assessment of seismic soil liquefaction triggering potential based on the cone penetration test (CPT). A comprehensive worldwide set of CPT-based liquefaction field case histories were compiled and back analyzed, and the data then used to develop probabilistic triggering correlations. Issues investigated in this study include improved normalization of CPT resistance measurements for the influence of effective overburden stress, and adjustment to CPT tip resistance for the potential influence of "thin" liquefiable layers. The effects of soil type and soil character (i.e., "fines" adjustment) for the new correlations are based on a combination of CPT tip and sleeve resistance. To quantify probability for performance-based engineering applications, Bayesian "regression" methods were used, and the uncertainties of all variables comprising both the seismic demand and the liquefaction resistance were estimated and included in the analysis. The resulting correlations were developed using a Bayesian framework and are presented in both probabilistic and deterministic formats. The results are compared to previous probabilistic and deterministic correlations. © 2006 ASCE.

  20. Comparison of Deterministic and Probabilistic Radial Distribution Systems Load Flow

    NASA Astrophysics Data System (ADS)

    Gupta, Atma Ram; Kumar, Ashwani

    2017-12-01

    Distribution system networks today face the challenge of meeting increased load demands from the industrial, commercial and residential sectors. The load pattern depends strongly on consumer behavior and on temporal factors such as season of the year, day of the week or time of day. Deterministic radial distribution load flow studies treat the load as constant; in reality, load varies continually and with a high degree of uncertainty, so a probable realistic load needs to be modeled. Monte Carlo simulation is used to model the probable realistic load by generating random values of active and reactive power from the mean and standard deviation of the load, and a deterministic radial load flow is solved for each set of values. The probabilistic solution is then reconstructed from the deterministic results of the individual simulations. The main contributions of the work are: finding the impact of probable realistic ZIP load modeling on balanced radial distribution load flow; finding its impact on unbalanced radial distribution load flow; and comparing the voltage profiles and losses under probable realistic ZIP load modeling for balanced and unbalanced radial distribution load flow.
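
    The procedure can be sketched as follows, with a hypothetical one-line stand-in for the radial load-flow solver (not the authors' implementation): draw random active and reactive loads from their mean and standard deviation, run the deterministic solver once per draw, and reconstruct the probabilistic solution from the collected results.

```python
import random
import statistics

def deterministic_load_flow(p_load, q_load):
    # Hypothetical stand-in for a radial load-flow solver: returns a
    # crude end-of-feeder voltage estimate (p.u.) so the Monte Carlo
    # wrapper can run self-contained.
    return 1.0 - 0.04 * p_load - 0.02 * q_load

def monte_carlo_load_flow(p_mean, p_sd, q_mean, q_sd, n=5000, seed=1):
    rng = random.Random(seed)
    voltages = []
    for _ in range(n):
        p = rng.gauss(p_mean, p_sd)  # random active power draw
        q = rng.gauss(q_mean, q_sd)  # random reactive power draw
        voltages.append(deterministic_load_flow(p, q))
    # Probabilistic solution reconstructed from deterministic runs.
    return statistics.mean(voltages), statistics.stdev(voltages)

mean_v, sd_v = monte_carlo_load_flow(1.0, 0.1, 0.5, 0.05)
```

The wrapper never looks inside the solver, so any deterministic load-flow code (balanced or unbalanced, with ZIP load models) can be substituted for the stub.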

  1. Unsteady Flows in a Single-Stage Transonic Axial-Flow Fan Stator Row. Ph.D. Thesis - Iowa State Univ.

    NASA Technical Reports Server (NTRS)

    Hathaway, Michael D.

    1986-01-01

    Measurements of the unsteady velocity field within the stator row of a transonic axial-flow fan were acquired using a laser anemometer. Measurements were obtained on axisymmetric surfaces located at 10 and 50 percent span from the shroud, with the fan operating at maximum efficiency at design speed. The ensemble-average and variance of the measured velocities are used to identify rotor-wake-generated (deterministic) unsteadiness and turbulence, respectively. Correlations of both deterministic and turbulent velocity fluctuations provide information on the characteristics of unsteady interactions within the stator row. These correlations are derived from the Navier-Stokes equation in a manner similar to deriving the Reynolds stress terms, whereby various averaging operators are used to average the aperiodic, deterministic, and turbulent velocity fluctuations which are known to be present in multistage turbomachines. The correlations of deterministic and turbulent velocity fluctuations throughout the axial fan stator row are presented. In particular, amplification and attenuation of both types of unsteadiness are shown to occur within the stator blade passage.
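
    The averaging operators can be illustrated on a synthetic record: ensemble averaging at fixed rotor phase recovers the deterministic (rotor-locked) fluctuation, while the variance about that average estimates turbulence (the signal parameters below are invented for illustration, not measured fan data):

```python
import math
import random

rng = random.Random(0)
N_PASSES, SAMPLES_PER_PASS = 400, 32
TURB_SD = 0.3  # assumed turbulence level

def velocity(phase):
    # Synthetic velocity: time-mean + rotor-locked (deterministic)
    # fluctuation + random turbulence.
    return 100.0 + 5.0 * math.sin(2 * math.pi * phase) + rng.gauss(0, TURB_SD)

records = [[velocity(j / SAMPLES_PER_PASS) for j in range(SAMPLES_PER_PASS)]
           for _ in range(N_PASSES)]

# Ensemble average at fixed rotor phase -> deterministic component.
ens_avg = [sum(rec[j] for rec in records) / N_PASSES
           for j in range(SAMPLES_PER_PASS)]

# Variance about the ensemble average -> turbulence estimate.
turb_var = (sum((rec[j] - ens_avg[j]) ** 2
                for rec in records for j in range(SAMPLES_PER_PASS))
            / (N_PASSES * SAMPLES_PER_PASS))

det_amplitude = (max(ens_avg) - min(ens_avg)) / 2
```

The recovered deterministic amplitude and turbulence variance approach their true values (5.0 and 0.09 here) as the number of blade passes grows, which is the basis for separating the two kinds of unsteadiness from laser anemometer data.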

  2. Precision production: enabling deterministic throughput for precision aspheres with MRF

    NASA Astrophysics Data System (ADS)

    Maloney, Chris; Entezarian, Navid; Dumas, Paul

    2017-10-01

    Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres, but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user-friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.

  3. Down to the roughness scale assessment of piston-ring/liner contacts

    NASA Astrophysics Data System (ADS)

    Checo, H. M.; Jaramillo, A.; Ausas, R. F.; Jai, M.; Buscaglia, G. C.

    2017-02-01

    The effects of surface roughness in hydrodynamic bearings have been accounted for through several approaches, the most widely used being averaging or stochastic techniques. With these, the surface is not treated "as it is" but through an assumed probability distribution for the roughness. So-called direct, deterministic or measured-surface simulations solve the lubrication problem with realistic surfaces down to the roughness scale, which leads to expensive computational problems. Most researchers have tackled this problem by considering non-moving surfaces and neglecting the ring dynamics to reduce the computational burden. What is proposed here is to solve the fully deterministic simulation both in space and in time, so that the actual movement of the surfaces and the ring dynamics are taken into account. This simulation is much more complex than previous ones, as it is intrinsically transient. The feasibility of these fully deterministic simulations is illustrated in two cases: liner surfaces with diverse finishes (honed and coated bores) under constant piston velocity and ring load, and also under real engine conditions.

  4. Discrete-Time Deterministic Q-Learning: A Novel Convergence Analysis.

    PubMed

    Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo

    2017-05-01

    In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed algorithm, the iterative Q function is updated for all states and controls, instead of for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, and the convergence criterion on the learning rates required by traditional Q-learning algorithms is simplified. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties of the deterministic Q-learning algorithm are first developed for the undiscounted case. Then, taking the discount factor into account, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and to compute the iterative control law, respectively, facilitating the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
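
    A minimal sketch of the full-space update on a toy deterministic MDP (tabular, without the neural-network approximation used in the paper; the MDP itself is invented for illustration):

```python
# Toy deterministic MDP: states 0..3 on a line, actions move left/right,
# reward 1.0 for landing on state 3, discount factor 0.9.
N_STATES, ACTIONS, GAMMA = 4, (-1, +1), 0.9

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(100):
    Q_new = {}
    for s in range(N_STATES):
        for a in ACTIONS:           # sweep ALL state-action pairs,
            s2, r = step(s, a)      # not a single visited pair
            Q_new[(s, a)] = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
    Q = Q_new

# Greedy control law extracted from the converged Q function.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
```

Because the environment is deterministic, each sweep is an exact contraction toward the optimal Q function, so no learning-rate schedule is needed; this is the tabular intuition behind the simplified convergence criterion.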

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polettini, M., E-mail: matteo.polettini@uni.lu; Wachtel, A., E-mail: artur.wachtel@uni.lu; Esposito, M., E-mail: massimilano.esposito@uni.lu

    We study the effect of intrinsic noise on the thermodynamic balance of complex chemical networks subtending cellular metabolism and gene regulation. A topological network property called deficiency, known to determine the possibility of complex behavior such as multistability and oscillations, is shown to also characterize the entropic balance. In particular, when deficiency is zero the average stochastic dissipation rate equals that of the corresponding deterministic model, where correlations are disregarded. In fact, dissipation can be reduced by the effect of noise, as occurs in a toy model of metabolism that we employ to illustrate our findings. This phenomenon highlights that there is a close interplay between deficiency and the activation of new dissipative pathways at low molecule numbers.

  6. Epidemic spreading on adaptively weighted scale-free networks.

    PubMed

    Sun, Mengfeng; Zhang, Haifeng; Kang, Huiyan; Zhu, Guanghu; Fu, Xinchu

    2017-04-01

    We introduce three modified SIS models on scale-free networks that take into account variable population size, nonlinear infectivity, adaptive weights, behavior inertia and time delay, so as to better characterize the actual spread of epidemics. We develop new mathematical methods and techniques to study the dynamics of the models, including the basic reproduction number, and the global asymptotic stability of the disease-free and endemic equilibria. We show that the disease-free equilibrium cannot undergo a Hopf bifurcation. We further analyze the effects of local information of diseases and various immunization schemes on epidemic dynamics. We also perform some stochastic network simulations which yield quantitative agreement with the deterministic mean-field approach.
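
    The deterministic mean-field idea can be illustrated with the classical well-mixed SIS equation (a deliberate simplification: the paper's models add scale-free network structure, adaptive weights and delays). With infection rate beta and recovery rate gamma, the basic reproduction number is R0 = beta/gamma; the infection dies out for R0 < 1 and settles at the endemic level 1 - gamma/beta for R0 > 1.

```python
# Deterministic well-mixed SIS: di/dt = beta*i*(1-i) - gamma*i,
# integrated with forward Euler. i is the infected fraction.
def simulate(beta, gamma, i0=0.01, dt=0.01, steps=100000):
    i = i0
    for _ in range(steps):
        i += dt * (beta * i * (1 - i) - gamma * i)
    return i

endemic = simulate(beta=0.6, gamma=0.2)   # R0 = 3 -> endemic equilibrium
dies_out = simulate(beta=0.1, gamma=0.2)  # R0 = 0.5 -> disease-free state
```

The two runs converge to the two globally stable equilibria (2/3 and 0 respectively), mirroring the threshold behavior that the paper establishes for its richer network models.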

  7. Observations and analysis of self-similar branching topology in glacier networks

    USGS Publications Warehouse

    Bahr, D.B.; Peckham, S.D.

    1996-01-01

    Glaciers, like rivers, have a branching structure which can be characterized by topological trees or networks. Probability distributions of various topological quantities in the networks are shown to satisfy the criterion for self-similarity, a symmetry structure which might be used to simplify future models of glacier dynamics. Two analytical methods of describing river networks, Shreve's random topology model and deterministic self-similar trees, are applied to the six glaciers of south central Alaska studied in this analysis. Self-similar trees capture the topological behavior observed for all of the glaciers, and most of the networks are also reasonably approximated by Shreve's theory. Copyright 1996 by the American Geophysical Union.
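
    Topological self-similarity in a branching network can be illustrated by computing Strahler orders on idealized self-similar trees (a sketch of the concept only, not the glacier data analysis):

```python
def strahler(node):
    # Strahler order of a binary branching network:
    # a source (leaf) has order 1; when two branches of equal order
    # join, the order increments; otherwise the larger order carries on.
    if node is None:
        return 1  # leaf: a source stream
    left, right = node
    a, b = strahler(left), strahler(right)
    return a + 1 if a == b else max(a, b)

def perfect_tree(depth):
    # Perfectly self-similar network: every branch splits identically.
    if depth == 0:
        return None
    sub = perfect_tree(depth - 1)
    return (sub, sub)

# A perfectly self-similar network of depth d has Strahler order d + 1
# and 2**d source streams, the kind of power-law scaling that signals
# topological self-similarity.
orders = [strahler(perfect_tree(d)) for d in range(5)]
```

Real glacier (or river) networks are not perfect trees, so the analysis compares observed order statistics against models such as Shreve's random topology and deterministic self-similar trees.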

  8. Stochastic evolution in populations of ideas

    PubMed Central

    Nicole, Robin; Sollich, Peter; Galla, Tobias

    2017-01-01

    It is known that learning of players who interact in a repeated game can be interpreted as an evolutionary process in a population of ideas. These analogies have so far mostly been established in deterministic models, and memory loss in learning has been seen to act similarly to mutation in evolution. We here propose a representation of reinforcement learning as a stochastic process in finite ‘populations of ideas’. The resulting birth-death dynamics has absorbing states and allows for the extinction or fixation of ideas, marking a key difference to mutation-selection processes in finite populations. We characterize the outcome of evolution in populations of ideas for several classes of symmetric and asymmetric games. PMID:28098244

  9. Menstruation, perimenopause, and chaos theory.

    PubMed

    Derry, Paula S; Derry, Gregory N

    2012-01-01

    This article argues that menstruation, including the transition to menopause, results from a specific kind of complex system, namely, one that is nonlinear, dynamical, and chaotic. A complexity-based perspective changes how we think about and research menstruation-related health problems and positive health. Chaotic systems are deterministic but not predictable, characterized by sensitivity to initial conditions and strange attractors. Chaos theory provides a coherent framework that qualitatively accounts for puzzling results from perimenopause research. It directs attention to variability within and between women, adaptation, lifespan development, and the need for complex explanations of disease. Whether the menstrual cycle is chaotic can be empirically tested, and a summary of our research on 20- to 40-year-old women is provided.
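
    The notion of a system that is deterministic but not predictable can be illustrated with the logistic map, a standard textbook example of sensitivity to initial conditions (a generic illustration, not a model of the menstrual cycle):

```python
# Logistic map x -> r*x*(1-x) in the chaotic regime (r = 3.9):
# a fully deterministic rule whose trajectories from nearly identical
# starting points diverge, defeating long-range prediction.
def trajectory(x0, n, r=3.9):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000, 50)
b = trajectory(0.200001, 50)  # initial condition perturbed by 1e-6

early_gap = abs(a[5] - b[5])  # still tiny after a few steps
late_gaps = [abs(a[i] - b[i]) for i in range(30, 51)]  # diverged
```

Short-term behavior is reproducible while long-term behavior is not, which is the property the authors invoke when arguing that chaotic menstrual dynamics could be deterministic yet resist prediction.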

  10. Stochastic evolution in populations of ideas

    NASA Astrophysics Data System (ADS)

    Nicole, Robin; Sollich, Peter; Galla, Tobias

    2017-01-01

    It is known that learning of players who interact in a repeated game can be interpreted as an evolutionary process in a population of ideas. These analogies have so far mostly been established in deterministic models, and memory loss in learning has been seen to act similarly to mutation in evolution. We here propose a representation of reinforcement learning as a stochastic process in finite ‘populations of ideas’. The resulting birth-death dynamics has absorbing states and allows for the extinction or fixation of ideas, marking a key difference to mutation-selection processes in finite populations. We characterize the outcome of evolution in populations of ideas for several classes of symmetric and asymmetric games.

  11. SITE CHARACTERIZATION LIBRARY: VOLUME 1 (RELEASE 2.5)

    EPA Science Inventory

    This CD-ROM, Volume 1, Release 2.5, of EPA's National Exposure Research Laboratory (NERL - Las Vegas) Site Characterization Library, contains additional electronic documents and computer programs related to the characterization of hazardous waste sites. EPA has produced this libr...

  12. Spatial scaling patterns and functional redundancies in a changing boreal lake landscape

    USGS Publications Warehouse

    Angeler, David G.; Allen, Craig R.; Uden, Daniel R.; Johnson, Richard K.

    2015-01-01

    Global transformations extend beyond local habitats; therefore, larger-scale approaches are needed to assess community-level responses and resilience to unfolding environmental changes. Using long-term data (1996–2011), we evaluated spatial patterns and functional redundancies in the littoral invertebrate communities of 85 Swedish lakes, with the objective of assessing their potential resilience to environmental change at regional scales (that is, spatial resilience). Multivariate spatial modeling was used to differentiate groups of invertebrate species exhibiting spatial patterns in composition and abundance (that is, deterministic species) from those lacking spatial patterns (that is, stochastic species). We then determined the functional feeding attributes of the deterministic and stochastic invertebrate species, to infer resilience. Between one and three distinct spatial patterns in invertebrate composition and abundance were identified in approximately one-third of the species; the remainder were stochastic. We observed substantial differences in metrics between deterministic and stochastic species. Functional richness and diversity decreased over time in the deterministic group, suggesting a loss of resilience in regional invertebrate communities. However, taxon richness and redundancy increased monotonically in the stochastic group, indicating the capacity of regional invertebrate communities to adapt to change. Our results suggest that a refined picture of spatial resilience emerges if patterns of both the deterministic and stochastic species are accounted for. Spatially extensive monitoring may help increase our mechanistic understanding of community-level responses and resilience to regional environmental change, insights that are critical for developing management and conservation agendas in this current period of rapid environmental transformation.

  13. Impact of refining the assessment of dietary exposure to cadmium in the European adult population.

    PubMed

    Ferrari, Pietro; Arcella, Davide; Heraud, Fanny; Cappé, Stefano; Fabiansson, Stefan

    2013-01-01

    Exposure assessment constitutes an important step in any risk assessment of potentially harmful substances present in food. The European Food Safety Authority (EFSA) first assessed dietary exposure to cadmium in Europe using a deterministic framework, resulting in mean values of exposure in the range of health-based guidance values. Since then, the characterisation of foods has been refined to better match occurrence and consumption data, and a new strategy to handle left-censoring in occurrence data was devised. A probabilistic assessment was performed and compared with deterministic estimates, using occurrence values at the European level and consumption data from 14 national dietary surveys. Mean estimates in the probabilistic assessment ranged from 1.38 (95% CI = 1.35-1.44) to 2.08 (1.99-2.23) µg kg⁻¹ bodyweight (bw) week⁻¹ across the different surveys, which were less than 10% lower than deterministic (middle bound) mean values that ranged from 1.50 to 2.20 µg kg⁻¹ bw week⁻¹. Probabilistic 95th percentile estimates of dietary exposure ranged from 2.65 (2.57-2.72) to 4.99 (4.62-5.38) µg kg⁻¹ bw week⁻¹, which were, with the exception of one survey, between 3% and 17% higher than middle-bound deterministic estimates. Overall, the proportion of subjects exceeding the tolerable weekly intake of 2.5 µg kg⁻¹ bw ranged from 14.8% (13.6-16.0%) to 31.2% (29.7-32.5%) according to the probabilistic assessment. The results of this work indicate that mean values of dietary exposure to cadmium in the European population were of similar magnitude using deterministic or probabilistic assessments. For higher exposure levels, probabilistic estimates were almost consistently larger than deterministic counterparts, thus reflecting the impact of using the full distribution of occurrence values to determine exposure levels. It is considered prudent to use probabilistic methodology should exposure estimates be close to or exceed health-based guidance values.
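    The contrast between a deterministic point estimate and a probabilistic (distributional) exposure assessment can be illustrated with a small Monte Carlo sketch. The lognormal distribution and every number below are invented for illustration; this is not EFSA's model or data, only a demonstration of how a mean, an upper percentile, and a fraction above the tolerable weekly intake (TWI) fall out of a sampled distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-subject weekly cadmium exposure, µg per kg bodyweight
# per week, drawn from an assumed lognormal distribution (illustrative only).
n_subjects = 10_000
exposure = rng.lognormal(mean=np.log(1.8), sigma=0.45, size=n_subjects)

TWI = 2.5  # tolerable weekly intake, µg kg⁻¹ bw week⁻¹

mean_exp = exposure.mean()                # analogue of a deterministic mean
p95_exp = np.percentile(exposure, 95)     # upper-tail exposure estimate
frac_exceeding = (exposure > TWI).mean()  # share of subjects above the TWI

print(f"mean exposure:   {mean_exp:.2f} µg/kg bw/week")
print(f"95th percentile: {p95_exp:.2f} µg/kg bw/week")
print(f"share above TWI: {frac_exceeding:.1%}")
```

    As in the abstract, the tail statistics (95th percentile, exceedance fraction) carry information that a single deterministic mean cannot.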

  14. Aboveground and belowground arthropods experience different relative influences of stochastic versus deterministic community assembly processes following disturbance

    PubMed Central

    Martinez, Alexander S.; Faist, Akasha M.

    2016-01-01

    Background Understanding patterns of biodiversity is a longstanding challenge in ecology. Similar to other biotic groups, arthropod community structure can be shaped by deterministic and stochastic processes, with limited understanding of what moderates the relative influence of these processes. Disturbances have been noted to alter the relative influence of deterministic and stochastic processes on community assembly in various study systems, implicating ecological disturbances as a potential moderator of these forces. Methods Using a disturbance gradient along a 5-year chronosequence of insect-induced tree mortality in a subalpine forest of the southern Rocky Mountains, Colorado, USA, we examined changes in community structure and relative influences of deterministic and stochastic processes in the assembly of aboveground (surface and litter-active species) and belowground (species active in organic and mineral soil layers) arthropod communities. Arthropods were sampled for all years of the chronosequence via pitfall traps (aboveground community) and modified Winkler funnels (belowground community) and sorted to morphospecies. The structure of both communities was assessed via comparisons of morphospecies abundance, diversity, and composition. Assembly processes were inferred from a mixture of linear models and matrix correlations testing for community associations with environmental properties, and from null-deviation models comparing observed vs. expected levels of species turnover (Beta diversity) among samples. Results Tree mortality altered community structure in both aboveground and belowground arthropod communities, but null models suggested that aboveground communities experienced greater relative influences of deterministic processes, while the relative influence of stochastic processes increased for belowground communities. 
Additionally, Mantel tests and linear regression models revealed significant associations between the aboveground arthropod communities and vegetation and soil properties, but no significant association among belowground arthropod communities and environmental factors. Discussion Our results suggest context-dependent influences of stochastic and deterministic community assembly processes across different fractions of a spatially co-occurring ground-dwelling arthropod community following disturbance. This variation in assembly may be linked to contrasting ecological strategies and dispersal rates within above- and below-ground communities. Our findings add to a growing body of evidence indicating concurrent influences of stochastic and deterministic processes in community assembly, and highlight the need to consider potential variation across different fractions of biotic communities when testing community ecology theory and considering conservation strategies. PMID:27761333
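    The null-deviation logic described in the Methods can be sketched as a standardized effect size: observed mean pairwise dissimilarity compared against a null distribution obtained by shuffling each species' abundances across samples. The toy community matrix and the particular null (independent within-column shuffles) below are illustrative assumptions, not the paper's data or exact null model.

```python
import numpy as np

def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two abundance vectors (0..1)."""
    denom = u.sum() + v.sum()
    return np.abs(u - v).sum() / denom if denom else 0.0

def mean_pairwise_bc(mat):
    """Mean Bray-Curtis dissimilarity over all sample pairs (rows)."""
    n = len(mat)
    d = [bray_curtis(mat[i], mat[j]) for i in range(n) for j in range(i + 1, n)]
    return np.mean(d)

def null_deviation(mat, n_null=999, seed=0):
    """Standardized effect size of observed beta diversity against a null
    that shuffles each species' abundances independently across samples."""
    rng = np.random.default_rng(seed)
    obs = mean_pairwise_bc(mat)
    null = np.empty(n_null)
    for k in range(n_null):
        shuffled = np.apply_along_axis(rng.permutation, 0, mat)
        null[k] = mean_pairwise_bc(shuffled)
    return (obs - null.mean()) / null.std()

# Toy community matrix: rows = samples, columns = species (invented counts).
rng = np.random.default_rng(1)
community = rng.poisson(5, size=(10, 20))
ses = null_deviation(community)
# |ses| well above ~2 would suggest deterministic assembly; values near 0
# suggest turnover indistinguishable from the stochastic null.
```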

  15. Measurement Sets and Sites Commonly Used for Characterization

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Holekamp, Kara; Ryan, Robert; Sellers, Richard; Davis, Bruce; Zanoni, Vicki

    2002-01-01

    Scientists at NASA's Earth Science Applications Directorate are creating a well-characterized Verification & Validation (V&V) site at the Stennis Space Center. This site enables the in-flight characterization of remote sensing systems and the data they acquire. The data are predominantly acquired by commercial, high spatial resolution satellite systems, such as IKONOS and QuickBird 2, and airborne systems. The smaller scale of these newer high resolution remote sensing systems allows scientists to characterize the geometric, spatial, and radiometric data properties using a single V&V site. The targets and techniques used to characterize data from these newer systems can differ significantly from the techniques used to characterize data from the earlier, coarser spatial resolution systems. Scientists are also using the SSC V&V site to characterize thermal infrared systems and active LIDAR systems. SSC employs geodetic targets, edge targets, radiometric tarps, and thermal calibration ponds to characterize remote sensing data products. This paper presents a proposed set of required measurements for visible through long-wave infrared remote sensing systems and a description of the Stennis characterization. Other topics discussed include: 1) The use of ancillary atmospheric and solar measurements taken at SSC that support various characterizations; 2) Additional sites used for radiometric, geometric, and spatial characterization in the continental United States; 3) The need for a standardized technique to be adopted by CEOS and other organizations.

  16. Comparison of probabilistic and deterministic fiber tracking of cranial nerves.

    PubMed

    Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H

    2017-09-01

    OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery, and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods usable for the elimination of noise from the resulting depictions. The authors have hypothesized that probabilistic tracking could lead to more accurate results, because it more efficiently extracts information from the underlying data. Moreover, the authors have adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, this work provides a comparison of the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects whose data were obtained from 2 public databases: the Kirby repository (KR) and Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CN. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p < 0.001; Wilcoxon signed-rank test). 
The false-positive error of the last obtained depiction was also significantly lower in probabilistic than in deterministic tracking (p < 0.001). The HCP data yielded significantly better results in terms of the Dice coefficient in probabilistic tracking (p < 0.001, Mann-Whitney U-test) and in deterministic tracking (p = 0.02). The false-positive errors were smaller in HCP data in deterministic tracking (p < 0.001) and showed a strong trend toward significance in probabilistic tracking (p = 0.06). In the clinical cases, the probabilistic method visualized 7 of 10 attempted CNs accurately, compared with 3 correct depictions with deterministic tracking. CONCLUSIONS High angular resolution DTI scans are preferable for the DTI-based depiction of the cranial nerves. Probabilistic tracking with a gradual PICo threshold increase is more effective for this task than the previously described deterministic tracking with a gradual FA threshold increase and might represent a method that is useful for depicting cranial nerves with DTI since it eliminates the erroneous fibers without manual intervention.
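    The label-overlap measures used above (Dice similarity coefficient and false-positive error) and the gradual threshold increase are easy to express directly. The sketch below runs them on a synthetic 2-D "connectivity" map with a known ground-truth mask; it is a hedged illustration of the evaluation metrics only, not the authors' tractography pipeline.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

def false_positive_error(pred, truth):
    """Fraction of the predicted mask lying outside the ground truth."""
    return np.logical_and(pred, ~truth).sum() / pred.sum() if pred.sum() else 0.0

def best_threshold(score_map, truth, thresholds):
    """Gradually raise the threshold (FA or PICo in the paper) and keep the
    binarization with the highest Dice against the ground-truth mask."""
    return max((dice(score_map >= t, truth), t) for t in thresholds)

# Synthetic example: a square "nerve" with high scores inside, noise outside.
truth = np.zeros((32, 32), dtype=bool)
truth[10:20, 10:20] = True
score = np.where(truth, 0.8, 0.1) + np.random.default_rng(2).normal(0, 0.05, (32, 32))

d, t = best_threshold(score, truth, np.linspace(0.05, 0.95, 19))
# d is the best achievable Dice; t is the threshold that achieves it.
```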

  17. MESTRN: A Deterministic Meson-Muon Transport Code for Space Radiation

    NASA Technical Reports Server (NTRS)

    Blattnig, Steve R.; Norbury, John W.; Norman, Ryan B.; Wilson, John W.; Singleterry, Robert C., Jr.; Tripathi, Ram K.

    2004-01-01

    A safe and efficient exploration of space requires an understanding of space radiation, so that human life and sensitive equipment can be protected. On the way to these sensitive sites, the radiation fields are modified in both quality and quantity. Many of these modifications are thought to be due to the production of pions and muons in interactions between the radiation and intervening matter. A method is developed to predict the effects of these particles on the transport of radiation through materials. This method was then implemented in software used to calculate the fluxes of pions and muons after the transport of a cosmic ray spectrum through aluminum and water. Software descriptions are given in the appendices.

  18. A Summary Report on the NPH Evaluation of 105-L Disassembly Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, J.R.

    2002-04-30

    The L Area Disassembly Basin (LDB) is evaluated for the natural phenomena hazards (NPH) effects due to earthquake, wind, and tornado in accordance with DOE Order 420.1 and DOE-STD-1020. The deterministic analysis is performed for a Performance Category 3 (PC3) level of loads. Savannah River Site (SRS) specific NPH loads and design criteria are obtained from Engineering Standard 01060. It is demonstrated that the demand to capacity (D/C) ratios for primary and significant structural elements are acceptable (equal to or less than 1.0). Thus, the 105-L Disassembly Basin building structure is qualified for the PC3 NPH effects in accordance with DOE Order 420.1.

  19. Who's flying the plane: serotonin levels, aggression and free will.

    PubMed

    Siegel, Allan; Douard, John

    2011-01-01

    The present paper addresses the philosophical problem raised by current causal neurochemical models of impulsive violence and aggression: to what extent can we hold violent criminal offenders responsible for their conduct if that conduct is the result of deterministic biochemical processes in the brain. This question is currently receiving a great deal of attention among neuroscientists, legal scholars and philosophers. We examine our current knowledge of neuroscience to assess the possible roles of deterministic factors which induce impulsive aggression, and the extent to which this behavior can be controlled by neural conditioning mechanisms. Neural conditioning mechanisms, we suggest, may underlie what we consider the basis of responsible (though not necessarily moral) behavior: the capacity to give and take reasons. The models we first examine are based in part upon the role played by the neurotransmitter, serotonin, in the regulation of violence and aggression. Collectively, these results would appear to argue in favor of the view that low brain serotonin levels induce impulsive aggression which overrides mechanisms related to rational decision making processes. We next present an account of responsibility as based on the capacity to exercise a certain kind of reason-responsive control over one's conduct. The problem with such accounts of responsibility, however, is that they fail to specify a neurobiological realization of such mechanisms of control. We present a neurobiological, and weakly determinist, framework for understanding how persons can exercise guidance control over their conduct. This framework is based upon classical conditioning of neurons in the prefrontal cortex that allow for a decision making mechanism that provides for prefrontal cortical control of the sites in the brain which express aggressive behavior that include the hypothalamus and midbrain periaqueductal gray. 
The authors support the view that, in many circumstances, neural conditioning mechanisms provide the basis for the control of human aggression in spite of the presence of brain serotonin levels that might otherwise favor the expression of impulsive aggressive behavior. Indeed if those neural conditioning mechanisms underlie the human capacity to exercise control, they may be the neural realization of reason-responsiveness generally. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. Adaptive correction of ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane

    2017-04-01

    Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS), and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equations are retrieved by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. 
We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias-correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
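    A minimal sketch of an adaptive, Kalman-filter-style bias correction, in the spirit of the simple running correction used as a baseline above (not the paper's full second-order ensemble scheme). The noise settings and the synthetic biased forecasts are assumptions made for illustration.

```python
import numpy as np

class AdaptiveBiasCorrector:
    """Scalar Kalman filter tracking the systematic error (bias) of a
    forecast; the correction parameter is updated sequentially as each
    new observation arrives, so no long training set is required."""

    def __init__(self, q=0.01, r=1.0):
        self.bias = 0.0   # current bias estimate
        self.p = 1.0      # variance of the bias estimate
        self.q = q        # process noise: how fast the bias may drift
        self.r = r        # observation noise variance

    def correct(self, forecast):
        return forecast - self.bias

    def update(self, forecast, observation):
        innovation = forecast - observation   # observed raw-forecast error
        self.p += self.q                      # predict step
        gain = self.p / (self.p + self.r)     # Kalman gain
        self.bias += gain * (innovation - self.bias)
        self.p *= (1.0 - gain)

# Synthetic demo: forecasts carry a constant +1.5 bias around the truth.
rng = np.random.default_rng(3)
truth = rng.normal(15.0, 3.0, size=200)        # e.g. 2-m temperature, °C
raw = truth + 1.5 + rng.normal(0, 0.8, 200)    # biased, noisy forecasts

kf = AdaptiveBiasCorrector()
errors = []
for f, y in zip(raw, truth):
    errors.append(abs(kf.correct(f) - y))
    kf.update(f, y)
# After a short spin-up, kf.bias converges toward 1.5 and the corrected
# absolute error drops well below the raw forecast bias.
```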

  1. Who's flying the plane: Serotonin levels, aggression and free will

    PubMed Central

    Siegel, Allan; Douard, John

    2010-01-01

    The present paper addresses the philosophical problem raised by current causal neurochemical models of impulsive violence and aggression: to what extent can we hold violent criminal offenders responsible for their conduct if that conduct is the result of deterministic biochemical processes in the brain. This question is currently receiving a great deal of attention among neuroscientists, legal scholars and philosophers. We examine our current knowledge of neuroscience to assess the possible roles of deterministic factors which induce impulsive aggression, and the extent to which this behavior can be controlled by neural conditioning mechanisms. Neural conditioning mechanisms, we suggest, may underlie what we consider the basis of responsible (though not necessarily moral) behavior: the capacity to give and take reasons. The models we first examine are based in part upon the role played by the neurotransmitter, serotonin, in the regulation of violence and aggression. Collectively, these results would appear to argue in favor of the view that low brain serotonin levels induce impulsive aggression which overrides mechanisms related to rational decision making processes. We next present an account of responsibility as based on the capacity to exercise a certain kind of reason-responsive control over one's conduct. The problem with such accounts of responsibility, however, is that they fail to specify a neurobiological realization of such mechanisms of control. We present a neurobiological, and weakly determinist, framework for understanding how persons can exercise guidance control over their conduct. This framework is based upon classical conditioning of neurons in the prefrontal cortex that allow for a decision making mechanism that provides for prefrontal cortical control of the sites in the brain which express aggressive behavior that include the hypothalamus and midbrain periaqueductal gray. 
The authors support the view that, in many circumstances, neural conditioning mechanisms provide the basis for the control of human aggression in spite of the presence of brain serotonin levels that might otherwise favor the expression of impulsive aggressive behavior. Indeed if those neural conditioning mechanisms underlie the human capacity to exercise control, they may be the neural realization of reason-responsiveness generally. PMID:21112635

  2. FIELD EVALUATION OF IN-SITU BIODEGRADATION OF CHLORINATED ETHENES: PART I, METHODOLOGY AND FIELD SITE CHARACTERIZATION

    EPA Science Inventory

    Careful site characterization and implementation of quantitative monitoring methods are prerequisites for a convincing evaluation of enhanced biostimulation for aquifer restoration. This paper describes the characterization of a site at Moffett Naval Air Station, Mountain View, Ca...

  3. SITE CHARACTERIZATION AND ANALYSIS PENETROMETER SYSTEM (SCAPS) LASER-INDUCED FLUORESCENCE (LIF) SENSOR AND SUPPORT SYSTEM

    EPA Science Inventory

    The Consortium for Site Characterization Technology (CSCT) has established a formal program to accelerate acceptance and application of innovative monitoring and site characterization technologies that improve the way the nation manages its environmental problems. In 1995 the CS...

  4. Aspen succession in the Intermountain West: A deterministic model

    Treesearch

    Dale L. Bartos; Frederick R. Ward; George S. Innis

    1983-01-01

    A deterministic model of succession in aspen forests was developed using existing data and intuition. The degree of uncertainty, which was determined by allowing the parameter values to vary at random within limits, was larger than desired. This report presents results of an analysis of model sensitivity to changes in parameter values. These results have indicated...

  5. Using stochastic models to incorporate spatial and temporal variability [Exercise 14

    Treesearch

    Carolyn Hull Sieg; Rudy M. King; Fred Van Dyke

    2003-01-01

    To this point, our analysis of population processes and viability in the western prairie fringed orchid has used only deterministic models. In this exercise, we conduct a similar analysis, using a stochastic model instead. This distinction is of great importance to population biology in general and to conservation biology in particular. In deterministic models,...

  6. Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System

    ERIC Educational Resources Information Center

    Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.

    2016-01-01

    Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…

  7. Guidelines 13 and 14—Prediction uncertainty

    USGS Publications Warehouse

    Hill, Mary C.; Tiedeman, Claire

    2005-01-01

    An advantage of using optimization for model development and calibration is that optimization provides methods for evaluating and quantifying prediction uncertainty. Both deterministic and statistical methods can be used. Guideline 13 discusses using regression and post-audits, which we classify as deterministic methods. Guideline 14 discusses inferential statistics and Monte Carlo methods, which we classify as statistical methods.

  8. Deterministic switching of hierarchy during wrinkling in quasi-planar bilayers

    DOE PAGES

    Saha, Sourabh K.; Culpepper, Martin L.

    2016-04-25

    Emergence of hierarchy during compression of quasi-planar bilayers is preceded by a mode-locked state during which the quasi-planar form persists. Transition to hierarchy is determined entirely by geometrically observable parameters. This results in a universal transition phase diagram that enables one to deterministically tune hierarchy even with limited knowledge about material properties.

  9. Stochastic and deterministic models for agricultural production networks.

    PubMed

    Bai, P; Banks, H T; Dediu, S; Govan, A Y; Last, M; Lloyd, A L; Nguyen, H K; Olufsen, M S; Rempala, G; Slenning, B D

    2007-07-01

    An approach to modeling the impact of disturbances in an agricultural production network is presented. A stochastic model and its approximate deterministic model for averages over sample paths of the stochastic system are developed. Simulations, sensitivity and generalized sensitivity analyses are given. Finally, it is shown how diseases may be introduced into the network and corresponding simulations are discussed.

  10. Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System

    ERIC Educational Resources Information Center

    Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.

    2015-01-01

    Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…

  11. Probabilistic direct counterfactual quantum communication

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng

    2017-02-01

    It is striking that the quantum Zeno effect can be used to launch a direct counterfactual communication between two spatially separated parties, Alice and Bob. So far, existing protocols of this type only provide a deterministic counterfactual communication service. However, this counterfactuality comes at a price. First, the transmission time is much longer than that of a classical transmission. Second, the chained-cycle structure makes such protocols more sensitive to channel noise. Here, we extend the idea of counterfactual communication and present a probabilistic-counterfactual quantum communication protocol, which is shown to have advantages over the deterministic ones. Moreover, the presented protocol can evolve into a deterministic one solely by adjusting the parameters of the beam splitters. Project supported by the National Natural Science Foundation of China (Grant No. 61300203).

  12. Positive dwell time algorithm with minimum equal extra material removal in deterministic optical surfacing technology.

    PubMed

    Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun

    2017-11-10

    In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numeric control machines is crucial in guaranteeing a high-convergence ratio for the optical surface error. It is necessary to consider the machine dynamics limitations in the numerical dwell time algorithms. In this paper, these constraints on dwell time distribution are analyzed, and a model of the equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
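    The core numerical problem described above, keeping dwell times non-negative while adding only a minimal uniform extra material removal, can be sketched as projected-gradient least squares on the removal equation (removal = influence matrix × dwell time). The Gaussian influence function, all parameters, and the solver choice below are hypothetical illustrations, not the authors' algorithm.

```python
import numpy as np

def nonneg_dwell_time(A, target, extra=0.0, iters=5000, lr=None):
    """Projected-gradient least squares for A @ t ≈ target + extra with t >= 0.
    A maps dwell time at each tool position to material removed at each
    surface point (a discretized tool influence function); 'extra' is a
    uniform extra removal depth that keeps the non-negative problem feasible."""
    b = target + extra
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below the stability limit
    t = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ t - b)
        t = np.maximum(t - lr * grad, 0.0)     # project onto t >= 0
    return t

# Toy 1-D example with a hypothetical Gaussian influence function.
x = np.linspace(-1, 1, 80)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)   # removal per unit dwell
target = 0.5 + 0.3 * np.sin(3 * x)                     # desired removal depth
t = nonneg_dwell_time(A, target, extra=0.1)
residual = A @ t - (target + 0.1)
```

    The projection step is what enforces physical realizability (a machine cannot dwell for negative time), at the cost of the uniform extra removal the paper seeks to minimize.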

  13. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods.

    PubMed

    Kucza, Witold

    2013-07-25

    Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization method, respectively, have been applied to the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses that are available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.
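    A minimal random-walk sketch of the stochastic model class described above: particles in a cylindrical channel are advected by the parabolic Poiseuille profile and take diffusive steps, with reflection at the wall. All parameters and the reflective-wall treatment are illustrative assumptions, not the paper's implementation; shear-enhanced (Taylor) dispersion makes the effective axial dispersion coefficient exceed the molecular diffusivity.

```python
import numpy as np

def taylor_dispersion_rw(n=5000, steps=2000, dt=1e-3, R=1.0, U=1.0, D=0.05, seed=4):
    """Random-walk model of solute dispersion in Poiseuille flow through a
    cylindrical channel: axial advection by u(r) = 2U(1 - r^2/R^2) plus an
    isotropic diffusive step, with reflective walls at r = R."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)                        # axial positions
    yz = np.zeros((n, 2))                  # cross-sectional positions
    s = np.sqrt(2 * D * dt)                # diffusive step size per axis
    for _ in range(steps):
        r2 = (yz ** 2).sum(axis=1)
        x += 2 * U * (1 - r2 / R ** 2) * dt + s * rng.standard_normal(n)
        yz += s * rng.standard_normal((n, 2))
        r = np.sqrt((yz ** 2).sum(axis=1))
        mask = r > R                       # reflect walkers that left the tube
        yz[mask] *= (2 * R / r[mask] - 1)[:, None]
    t = steps * dt
    return x.var() / (2 * t)               # effective axial dispersion coefficient

K = taylor_dispersion_rw()
# At long times (t >> R^2/D) K approaches the Taylor-Aris value
# D + U^2 R^2 / (48 D); at the short time simulated here it simply
# exceeds the molecular diffusivity D.
```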

  14. Integrated deterministic and probabilistic safety analysis for safety assessment of nuclear power plants

    DOE PAGES

    Di Maio, Francesco; Zio, Enrico; Smith, Curtis; ...

    2015-07-06

    The present special issue contains an overview of the research in the field of Integrated Deterministic and Probabilistic Safety Assessment (IDPSA) of Nuclear Power Plants (NPPs). Traditionally, safety regulation for NPPs design and operation has been based on Deterministic Safety Assessment (DSA) methods to verify criteria that assure plant safety in a number of postulated Design Basis Accident (DBA) scenarios. Referring to such criteria, it is also possible to identify those plant Structures, Systems, and Components (SSCs) and activities that are most important for safety within those postulated scenarios. Then, the design, operation, and maintenance of these “safety-related” SSCs and activities are controlled through regulatory requirements and supported by Probabilistic Safety Assessment (PSA).

  15. Importance of geologic characterization of potential low-level radioactive waste disposal sites

    USGS Publications Warehouse

    Weibel, C.P.; Berg, R.C.

    1991-01-01

    Using the example of the Geff Alternative Site in Wayne County, Illinois, for the disposal of low-level radioactive waste, this paper demonstrates, from a policy and public opinion perspective, the importance of accurately determining site stratigraphy. Complete and accurate characterization of geologic materials and determination of site stratigraphy at potential low-level waste disposal sites provides the framework for subsequent hydrologic and geochemical investigations. Proper geologic characterization is critical to determine the long-term site stability and the extent of interactions of groundwater between the site and its surroundings. Failure to adequately characterize site stratigraphy can lead to the incorrect evaluation of the geology of a site, which in turn may result in a lack of public confidence. A potential problem of lack of public confidence was alleviated as a result of the resolution and proper definition of the Geff Alternative Site stratigraphy. The integrity of the investigation was not questioned and public perception was not compromised. © 1991 Springer-Verlag New York Inc.

  16. A FIELD EVALUATION OF IN-SITU BIODEGRADATION OF CHLORINATED ETHENES: PART I, METHODOLOGY AND FIELD SITE CHARACTERIZATION

    EPA Science Inventory

    Careful site characterization and implementation of quantitative monitoring methods are prerequisites for a convincing evaluation of enhanced biostimulation for aquifer restoration. This paper describes the characterization of a site at Moffett Naval Air Station, Mountain View, C...

  17. 10 CFR 60.16 - Site characterization plan required.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Site characterization plan required. 60.16 Section 60.16 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) DISPOSAL OF HIGH-LEVEL RADIOACTIVE WASTES IN GEOLOGIC REPOSITORIES Licenses Preapplication Review § 60.16 Site characterization plan required. Before proceeding to...

  18. Measurement Sets and Sites Commonly used for Characterizations

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary; Holekamp, Kara; Ryan, Robert; Blonski, Slawomir; Sellers, Richard; Davis, Bruce; Zanoni, Vicki

    2002-01-01

    Scientists with NASA's Earth Science Applications Directorate are creating a well-characterized Verification & Validation (V&V) site at the Stennis Space Center (SSC). This site enables the in-flight characterization of remote sensing systems and the data that they acquire. The data are predominantly acquired by commercial, high-spatial resolution satellite systems, such as IKONOS and QuickBird 2, and airborne systems. The smaller scale of these newer high-resolution remote sensing systems allows scientists to characterize the geometric, spatial, and radiometric data properties using a single V&V site. The targets and techniques used to characterize data from these newer systems can differ significantly from those used for the earlier, coarser spatial resolution systems. Scientists are also using the SSC V&V site to characterize thermal infrared systems and active Light Detection and Ranging (LIDAR) systems. SSC employs geodetic targets, edge targets, radiometric tarps, and thermal calibration ponds to characterize remote sensing data products. This paper presents a proposed set of required measurements for visible-through-longwave infrared remote sensing systems, and a description of the Stennis characterization. Other topics discussed include: 1) use of ancillary atmospheric and solar measurements taken at SSC that support various characterizations, 2) other sites used for radiometric, geometric, and spatial characterization in the continental United States, and 3) the need for a standardized technique to be adopted by the Committee on Earth Observation Satellites (CEOS) and other organizations.

  19. Temporal assessment of microbial communities in soils of two contrasting mangroves.

    PubMed

    Rigonato, Janaina; Kent, Angela D; Gumiere, Thiago; Branco, Luiz Henrique Zanini; Andreote, Fernando Dini; Fiore, Marli Fátima

    Variations in microbial communities promoted by alterations in environmental conditions are reflected in similarities/differences both at taxonomic and functional levels. Here we used a natural gradient within mangroves from seashore to upland, to contrast the natural variability in bacteria, cyanobacteria and diazotroph assemblages in a pristine area compared to an oil-polluted area along a timespan of three years, based on ARISA (bacteria and cyanobacteria) and nifH T-RFLP (diazotrophs) fingerprinting. The data presented herein indicated that changes in all the communities evaluated were mainly driven by the temporal effect in the contaminated area, while local effects were dominant in the pristine mangrove. A positive correlation of community structure between diazotrophs and cyanobacteria was observed, suggesting the functional importance of this phylum as nitrogen fixers in mangrove soils. Different ecological patterns explained the microbial behavior in the pristine and polluted mangroves. Stochastic models in the pristine mangrove indicate that no single environmental factor determines the bacterial distribution there, while cyanobacteria and diazotrophs better fit a deterministic model in the same area. For the contaminated mangrove site, deterministic models better represented the variations in the communities, suggesting that the presence of oil might change the microbial ecological structures over time. Mangroves represent a unique environment threatened by global change, and this study contributed to the knowledge of the microbial distribution in such areas and its response to historic events of persistent contamination. Copyright © 2017 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.

  20. Photophysics and energy transfer studies of Alq3 confined in the voids of nanoporous anodic alumina.

    PubMed

    Mohammadpour, Arash; Utkin, Ilya; Bodepudi, Srikrishna Chanakya; Kar, Piyush; Fedosejevs, Robert; Pramanik, Sandipan; Shankar, Karthik

    2013-04-01

    We report on a hierarchical nanoarchitecture wherein distinct chromophores are deterministically placed at two different types of sites in a nanoporous metal oxide framework. One chromophore, namely Tris(8-hydroxyquinoline)aluminium(III) (Alq3), is embedded in the 1-2 nm sized nanovoids of anodic aluminum oxide (AAO) and another chromophore (carboxyfluorescein or pyrenebutyric acid) is anchored in the form of a monolayer to the surface of the walls of the cylindrical nanopores (~20 nm in diameter) of AAO. We found the luminescence maximum to occur at 492 nm, blueshifted by at least 18 nm from the value in solutions and thin films. The excited state decay of Alq3 molecules in nanovoids was found to be biexponential with a fast component of 338 ps and a slower component of 2.26 ns, different from Alq3 thin films and solutions. Using a combination of steady state and time-resolved luminescence studies, we found that efficient Förster-type resonance energy transfer (FRET) from Alq3 in the nanovoids to the carboxyfluorescein monolayer could be used to pump the emission of surface-bound chromophores. Conversely, the emission of nanovoid-confined Alq3 could be pumped by energy transfer from a pyrenebutyric acid monolayer. Such intra-nanoarchitecture interactions between chromophores deterministically placed in different spatial locations are important in applications such as organic light emitting diodes, chemical sensors, energy transfer fluorescent labels, light harvesting antennas and organic spintronics.
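
The biexponential decay reported above (fast component 338 ps, slow component 2.26 ns) can be sketched numerically. The amplitude fractions `a1` and `a2` below are hypothetical, since the abstract does not give them; only the time constants come from the record.

```python
import math

# Biexponential excited-state decay with the reported time constants
# (0.338 ns and 2.26 ns); the amplitudes a1, a2 are assumed values.
tau1, tau2 = 0.338, 2.26     # ns
a1, a2 = 0.6, 0.4            # hypothetical amplitude fractions

def intensity(t):
    # Normalized luminescence intensity at time t (ns) after excitation.
    return a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

# Amplitude-weighted mean lifetime, a common summary of biexponential fits.
tau_mean = (a1 * tau1 + a2 * tau2) / (a1 + a2)
```

The amplitude-weighted mean lifetime necessarily lies between the two fitted components.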

  1. Identifying Flow Networks in a Karstified Aquifer by Application of the Cellular Automata-Based Deterministic Inversion Method (Lez Aquifer, France)

    NASA Astrophysics Data System (ADS)

    Fischer, P.; Jardani, A.; Wang, X.; Jourde, H.; Lecoq, N.

    2017-12-01

    The distributed modeling of flow paths within karstic and fractured fields remains a complex task because of the high dependence of the hydraulic responses on the relative locations between observational boreholes and the interconnected fractures and karstic conduits that control the main flow of the hydrosystem. The inverse problem in a distributed model is one alternative approach to interpret the hydraulic test data by mapping the karstic networks and fractured areas. In this work, we developed a Bayesian inversion approach, the Cellular Automata-based Deterministic Inversion (CADI) algorithm, to infer the spatial distribution of hydraulic properties in a structurally constrained model. This method distributes hydraulic properties along linear structures (i.e., flow conduits) and iteratively modifies the structural geometry of this conduit network to progressively match the modeled hydraulic data to the observed ones. As a result, this method produces a conductivity model that is composed of a discrete conduit network embedded in the background matrix, capable of producing the same flow behavior as the investigated hydrologic system. The method is applied to invert a set of multiborehole hydraulic tests collected from a hydraulic tomography experiment conducted at the Terrieu field site in the Lez aquifer, Southern France. The emergent model is highly consistent with field observations of hydraulic connections between boreholes. Furthermore, it provides a geologically realistic pattern of flow conduits. This method is therefore of considerable value toward an enhanced distributed modeling of fractured and karstified aquifers.
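
The match-and-modify loop described above can be illustrated with a deliberately tiny sketch: a 1-D row of cells that are either matrix or conduit, a stand-in forward operator in place of a real groundwater-flow solver, and an accept/reject loop that keeps only geometry changes that reduce the data misfit. Everything here (cell count, conductivity values, observation points, the forward model) is hypothetical and far simpler than the actual CADI algorithm.

```python
import random

# Toy 1-D "aquifer": each cell is matrix (low K) or conduit (high K).
K_MATRIX, K_CONDUIT = 1e-6, 1e-2
N_CELLS = 20

def forward(model, obs_points):
    # Hypothetical forward operator: the response at a borehole is the
    # harmonic-mean conductivity of the cells between it and the origin
    # (a stand-in for a real groundwater-flow solver).
    out = []
    for p in obs_points:
        ks = [K_CONDUIT if c else K_MATRIX for c in model[: p + 1]]
        out.append(len(ks) / sum(1.0 / k for k in ks))
    return out

def misfit(pred, data):
    return sum((a - b) ** 2 for a, b in zip(pred, data))

random.seed(0)
true_model = [1 if i in (3, 4, 5, 11, 12) else 0 for i in range(N_CELLS)]
obs_points = [5, 10, 15, 19]
data = forward(true_model, obs_points)

# Iterative inversion loop: perturb the conduit geometry one cell at a
# time and keep the change only if it reduces the data misfit.
model = [0] * N_CELLS
best = misfit(forward(model, obs_points), data)
for _ in range(2000):
    trial = model[:]
    trial[random.randrange(N_CELLS)] ^= 1
    m = misfit(forward(trial, obs_points), data)
    if m < best:
        model, best = trial, m
```

The real method perturbs connected conduit networks rather than independent cells, which is what keeps the result geologically realistic.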

  2. 10 CFR 63.16 - Review of site characterization activities. 2

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... which such activities are carried out and to observe excavations, borings, and in situ tests, as they... IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Licenses Preapplication Review § 63.16 Review of site characterization activities. 2 2 In addition to the review of site characterization activities...

  3. 10 CFR 63.16 - Review of site characterization activities. 2

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... which such activities are carried out and to observe excavations, borings, and in situ tests, as they... IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Licenses Preapplication Review § 63.16 Review of site characterization activities. 2 2 In addition to the review of site characterization activities...

  4. 10 CFR 63.16 - Review of site characterization activities. 2

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... which such activities are carried out and to observe excavations, borings, and in situ tests, as they... IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Licenses Preapplication Review § 63.16 Review of site characterization activities. 2 2 In addition to the review of site characterization activities...

  5. 10 CFR 63.16 - Review of site characterization activities. 2

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... which such activities are carried out and to observe excavations, borings, and in situ tests, as they... IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Licenses Preapplication Review § 63.16 Review of site characterization activities. 2 2 In addition to the review of site characterization activities...

  6. 10 CFR 63.16 - Review of site characterization activities. 2

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... which such activities are carried out and to observe excavations, borings, and in situ tests, as they... IN A GEOLOGIC REPOSITORY AT YUCCA MOUNTAIN, NEVADA Licenses Preapplication Review § 63.16 Review of site characterization activities. 2 2 In addition to the review of site characterization activities...

  7. SH-wave refraction/reflection and site characterization

    USGS Publications Warehouse

    Wang, Z.; Street, R.L.; Woolery, E.W.; Madin, I.P.

    2000-01-01

    Traditionally, nonintrusive techniques used to characterize soils have been based on P-wave refraction/reflection methods. However, near-surface unconsolidated soils are oftentimes water-saturated, and when groundwater is present at a site, the velocity of the P-waves is more related to the compressibility of the pore water than to the matrix of the unconsolidated soils. Conversely, SH-waves are directly relatable to the soil matrix. This makes SH-wave refraction/reflection methods effective in site characterizations where groundwater is present. SH-wave methods have been used extensively in site characterization and subsurface imaging for earthquake hazard assessments in the central United States and western Oregon. Comparison of SH-wave investigations with geotechnical investigations shows that SH-wave refraction/reflection techniques are viable and cost-effective for engineering site characterization.
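
The refraction interpretation underlying such surveys can be sketched for the textbook two-layer case: the travel-time slopes give the layer velocities, and the crossover offset gives the depth to the refractor. All velocities and the depth below are hypothetical illustration values, not data from the study.

```python
import math

# Two-layer model assumed for illustration: slow soil over faster bedrock.
v1, v2, z_true = 200.0, 600.0, 10.0          # m/s, m/s, m (hypothetical)

def first_arrival(x):
    # First arrival = min(direct wave, head wave refracted at the interface).
    t_i = 2 * z_true * math.sqrt(v2**2 - v1**2) / (v1 * v2)   # intercept time
    return min(x / v1, x / v2 + t_i)

offsets = [2.0 * i for i in range(1, 40)]    # geophone offsets, m
times = [first_arrival(x) for x in offsets]

# Slopes of the near- and far-offset branches give the layer velocities.
v1_est = (offsets[1] - offsets[0]) / (times[1] - times[0])
v2_est = (offsets[-1] - offsets[-2]) / (times[-1] - times[-2])

# Crossover offset (where the refracted wave overtakes the direct wave)
# gives the refractor depth: z = (x_c / 2) * sqrt((v2 - v1) / (v2 + v1)).
x_c = next(x for x, t in zip(offsets, times) if t < x / v1_est - 1e-12)
z_est = (x_c / 2.0) * math.sqrt((v2_est - v1_est) / (v2_est + v1_est))
```

The depth estimate is only as fine as the geophone spacing, since the crossover offset is picked from discrete stations.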

  8. Universality in survivor distributions: Characterizing the winners of competitive dynamics

    NASA Astrophysics Data System (ADS)

    Luck, J. M.; Mehta, A.

    2015-11-01

    We investigate the survivor distributions of a spatially extended model of competitive dynamics in different geometries. The model consists of a deterministic dynamical system of individual agents at specified nodes, which might or might not survive the predatory dynamics: all stochasticity is brought in by the initial state. Every such initial state leads to a unique and extended pattern of survivors and nonsurvivors, which is known as an attractor of the dynamics. We show that the number of such attractors grows exponentially with system size, so that their exact characterization is limited to only very small systems. Given this, we construct an analytical approach based on inhomogeneous mean-field theory to calculate survival probabilities for arbitrary networks. This powerful (albeit approximate) approach shows how universality arises in survivor distributions via a key concept—the dynamical fugacity. Remarkably, in the large-mass limit, the survivor probability of a node becomes independent of network geometry and assumes a simple form which depends only on its mass and degree.

  9. Subsurface damage and microstructure development in precision microground hard ceramics using magnetorheological finishing spots.

    PubMed

    Shafrir, Shai N; Lambropoulos, John C; Jacobs, Stephen D

    2007-08-01

    We demonstrate the use of spots taken with magnetorheological finishing (MRF) for estimating subsurface damage (SSD) depth from deterministic microgrinding for three hard ceramics: aluminum oxynitride (Al(23)O(27)N(5)/ALON), polycrystalline alumina (Al(2)O(3)/PCA), and chemical vapor deposited (CVD) silicon carbide (SiC). Using various microscopy techniques to characterize the surfaces, we find that the evolution of surface microroughness with the amount of material removed shows two stages. In the first, the damaged layer and SSD induced by microgrinding are removed, and the surface microroughness reaches a low value. Peak-to-valley (p-v) surface microroughness induced from grinding gives a measure of the SSD depth in the first stage. With the removal of additional material, a second stage develops, wherein the interaction of MRF and the material's microstructure is revealed. We study the development of this texture for these hard ceramics with the use of power spectral density to characterize surface features.
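
The two surface descriptors named above, peak-to-valley roughness and the power spectral density, can be computed from a 1-D height profile in a few lines. The synthetic profile below (a long-wavelength ripple plus fine-scale roughness) is purely illustrative; the sampling interval and amplitudes are assumptions.

```python
import numpy as np

# Synthetic 1-D surface profile (hypothetical): a 20-um ripple plus
# fine-scale roughness, sampled every 0.1 micrometre over a 100-um trace.
dx = 0.1                                   # sample spacing, um
x = np.arange(0, 100, dx)
rng = np.random.default_rng(1)
profile = 0.05 * np.sin(2 * np.pi * x / 20) + 0.01 * rng.standard_normal(x.size)

# Peak-to-valley (p-v) roughness: max minus min of the height profile.
pv = profile.max() - profile.min()

# One-sided power spectral density via the FFT (periodogram estimate).
fft = np.fft.rfft(profile - profile.mean())
freqs = np.fft.rfftfreq(x.size, d=dx)      # spatial frequency, 1/um
psd = (np.abs(fft) ** 2) * dx / x.size

# The dominant spatial frequency should match the 20-um ripple (0.05 /um).
peak_freq = freqs[np.argmax(psd[1:]) + 1]  # skip the DC bin
```

The PSD separates periodic grinding texture from broadband roughness, which is why it is the natural tool for studying the second-stage texture the abstract describes.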

  10. Subsurface Damage and Microstructure Development in Precision Microground Hard Ceramics Using Magnetorheological Finishing Spots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shafrir, S.N.; Lambropoulos, J.C.; Jacobs, S.D.

    2007-08-01

    We demonstrate the use of spots taken with magnetorheological finishing (MRF) for estimating subsurface damage (SSD) depth from deterministic microgrinding for three hard ceramics: aluminum oxynitride (Al23O27N5/ALON), polycrystalline alumina (Al2O3/PCA), and chemical vapor deposited (CVD) silicon carbide (SiC). Using various microscopy techniques to characterize the surfaces, we find that the evolution of surface microroughness with the amount of material removed shows two stages. In the first, the damaged layer and SSD induced by microgrinding are removed, and the surface roughness reaches a low value. Peak-to-valley (p-v) surface microroughness induced from grinding gives a measure of the SSD depth in the first stage. With the removal of additional material, a second stage develops, wherein the interaction of MRF and the material's microstructure is revealed. We study the development of this texture for these hard ceramics with the use of power spectral density to characterize surface features.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaul, Alexander; Holzinger, Dennis; Müglich, Nicolas David

    A magnetic domain texture has been deterministically engineered in a topographically flat exchange-biased (EB) thin film system. The texture consists of long-range periodically arranged unit cells of four individual domains, characterized by individual anisotropies, individual geometry, and with non-collinear remanent magnetizations. The texture has been engineered by a sequence of light-ion bombardment induced magnetic patterning of the EB layer system. The magnetic texture's in-plane spatial magnetization distribution and the corresponding domain walls have been characterized by scanning electron microscopy with polarization analysis (SEMPA). The influence of magnetic stray fields emerging from neighboring domain walls and the influence of the different anisotropies of the adjacent domains on the Néel-type domain wall core's magnetization rotation sense and widths were investigated. It is shown that the usual energy degeneracy of clockwise and counterclockwise rotating magnetization through the walls is revoked, suppressing Bloch lines along the domain wall. Estimates of the domain wall widths for different domain configurations based on material parameters determined by vibrating sample magnetometry were quantitatively compared to the SEMPA data.

  12. Imitative and best response behaviors in a nonlinear Cournotian setting

    NASA Astrophysics Data System (ADS)

    Cerboni Baiardi, Lorenzo; Naimzada, Ahmad K.

    2018-05-01

    We consider the competition among quantity setting players in a deterministic nonlinear oligopoly framework characterized by an isoelastic demand curve. Players are characterized by having heterogeneous decisional mechanisms to set their outputs: some players are imitators, while the remaining others adopt a rational-like rule according to which their past decisions are adjusted towards their static expectation best response. The Cournot-Nash production level is a stationary state of our model together with a further production level that can be interpreted as the competitive outcome in case only imitators are present. We found that both the number of players and the relative fraction of imitators influence stability of the Cournot-Nash equilibrium with an ambiguous role, and double instability thresholds may be observed. Global analysis shows that a wide variety of complex dynamic scenarios emerge. Chaotic trajectories as well as multi-stabilities, where different attractors coexist, are robust phenomena that can be observed for a wide spectrum of parameter sets.
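
The isoelastic-demand Cournot setting above can be sketched in its simplest form: a duopoly of two best-responders under demand p = 1/(q1+q2) with identical marginal cost c, which has the well-known symmetric Cournot-Nash output 1/(4c). This sketch deliberately omits the paper's imitators and larger player counts; the parameter values are illustrative.

```python
import math

# Isoelastic demand p = 1/(q1 + q2), identical marginal cost c.
c = 1.0

def best_response(q_other):
    # Maximizing q / (q + q_other) - c * q over q gives
    # q = sqrt(q_other / c) - q_other (clipped at zero).
    return max(math.sqrt(q_other / c) - q_other, 0.0)

# Simultaneous best-response dynamics from an arbitrary initial state.
q1, q2 = 0.1, 0.3
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)

q_nash = 1.0 / (4.0 * c)     # symmetric Cournot-Nash output
```

For equal costs the map converges rapidly to the Nash output; the complex dynamics the abstract describes arise once heterogeneous decision rules and more players enter.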

  13. Design and evaluation guidelines for Department of Energy facilities subjected to natural phenomena hazards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kennedy, R.P.; Short, S.A.; McDonald, J.R.

    1990-06-01

    The Department of Energy (DOE) and the DOE Natural Phenomena Hazards Panel have developed uniform design and evaluation guidelines for protection against natural phenomena hazards at DOE sites throughout the United States. The goal of the guidelines is to assure that DOE facilities can withstand the effects of natural phenomena such as earthquakes, extreme winds, tornadoes, and flooding. The guidelines apply to both new facilities (design) and existing facilities (evaluation, modification, and upgrading). The intended audience is primarily the civil/structural or mechanical engineers conducting the design or evaluation of DOE facilities. The likelihood of occurrence of natural phenomena hazards at each DOE site has been evaluated by the DOE Natural Phenomena Hazard Program. Probabilistic hazard models are available for earthquake, extreme wind/tornado, and flood. Alternatively, site organizations are encouraged to develop site-specific hazard models utilizing the most recent information and techniques available. In this document, performance goals and natural hazard levels are expressed in probabilistic terms, and design and evaluation procedures are presented in deterministic terms. Design/evaluation procedures conform closely to common standard practices so that the procedures will be easily understood by most engineers. Performance goals are expressed in terms of structure or equipment damage to the extent that: (1) the facility cannot function; (2) the facility would need to be replaced; or (3) personnel are endangered. 82 refs., 12 figs., 18 tabs.

  14. Integrating Saharan dust forecasts into a regional chemical transport model: a case study over Northern Italy.

    PubMed

    Carnevale, C; Finzi, G; Pisoni, E; Volta, M; Kishcha, P; Alpert, P

    2012-02-15

    The Po Valley in Northern Italy is frequently affected by high PM10 concentrations, where both natural and anthropogenic sources play a significant role. To improve air pollution modeling, 3D dust fields, produced by means of the DREAM dust forecasts, were integrated as boundary conditions into the mesoscale 3D deterministic Transport Chemical Aerosol Model (TCAM). A case study of the TCAM and DREAM integration was implemented over Northern Italy for the period May 15-June 30, 2007. First, the Saharan dust impact on PM10 concentration was analyzed for eleven remote PM10 sites with the lowest level of air pollution. These remote sites are the most sensitive to Saharan dust intrusions into Northern Italy, because of the absence of intensive industrial pollution. At these remote sites, the observed maxima in PM10 concentration during dust events are evidence of dust aerosol near the surface in Northern Italy. Comparisons between modeled PM10 concentrations and measurements at 230 PM10 sites in Northern Italy showed that the integrated TCAM-DREAM model more accurately reproduced PM10 concentration than the base TCAM model, both in terms of correlation and mean error. Specifically, the correlation median increased from 0.40 to 0.65, while the normalized mean absolute error median dropped from 0.5 to 0.4. Copyright © 2011 Elsevier B.V. All rights reserved.
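
The two skill scores quoted above, Pearson correlation and normalized mean absolute error, are straightforward to compute for a modeled vs. an observed PM10 series. The numbers below are made up for illustration; only the metric definitions are standard.

```python
# Hypothetical daily PM10 series (ug/m3): observations vs. model output.
obs = [28.0, 35.0, 41.0, 22.0, 55.0, 60.0, 33.0]
mod = [25.0, 38.0, 36.0, 27.0, 48.0, 66.0, 30.0]

n = len(obs)
mean_o = sum(obs) / n
mean_m = sum(mod) / n

# Pearson correlation coefficient.
cov = sum((o - mean_o) * (m - mean_m) for o, m in zip(obs, mod))
var_o = sum((o - mean_o) ** 2 for o in obs)
var_m = sum((m - mean_m) ** 2 for m in mod)
correlation = cov / (var_o * var_m) ** 0.5

# NMAE: mean absolute error normalized by the mean observed concentration.
nmae = sum(abs(o - m) for o, m in zip(obs, mod)) / n / mean_o
```

A correlation median rising while the NMAE median falls, as reported above, indicates the integrated model improves both the timing and the magnitude of the predictions.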

  15. Innovations in Site Characterization Case Study: The Role of a Conceptual Site Model for Expedited Site Characterization Using the Triad Approach at the Poudre River Site, Fort Collins, Colorado

    EPA Pesticide Factsheets

    This case study examines how systematic planning, an evolving conceptual site model (CSM), dynamic work strategies, and real time measurement technologies can be used to unravel complex contaminant distribution patterns...

  16. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    PubMed

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.
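
The "statistical null model" logic invoked in studies like this one can be sketched in miniature: compare an observed between-community dissimilarity against a null distribution generated by randomizing the communities, and summarize the departure as a standardized effect size. The abundance tables and the (deliberately simple) randomization scheme below are hypothetical, not the null models used by the authors.

```python
import random

# Hypothetical abundance tables for two communities (five taxa each).
comm_a = [30, 5, 0, 12, 3]
comm_b = [28, 7, 1, 10, 4]

def bray_curtis(u, v):
    # Bray-Curtis dissimilarity between two abundance vectors.
    return sum(abs(x - y) for x, y in zip(u, v)) / sum(x + y for x, y in zip(u, v))

def randomize(comm, rng):
    # Simple null model: reassign the same total abundance to taxa
    # uniformly at random (real null models preserve more structure).
    out = [0] * len(comm)
    for _ in range(sum(comm)):
        out[rng.randrange(len(comm))] += 1
    return out

rng = random.Random(42)
observed = bray_curtis(comm_a, comm_b)
null = [bray_curtis(randomize(comm_a, rng), randomize(comm_b, rng))
        for _ in range(999)]

# Standardized effect size: how far the observed dissimilarity sits
# from the null expectation, in units of the null standard deviation.
mean_null = sum(null) / len(null)
sd_null = (sum((x - mean_null) ** 2 for x in null) / len(null)) ** 0.5
ses = (observed - mean_null) / sd_null
```

An observed dissimilarity far outside the null distribution (large |SES|) is the kind of signature taken to indicate deterministic selection rather than stochastic assembly.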

  17. Efficient Algorithms for Handling Nondeterministic Automata

    NASA Astrophysics Data System (ADS)

    Vojnar, Tomáš

    Finite (word, tree, or omega) automata play an important role in different areas of computer science, including, for instance, formal verification. Often, deterministic automata are used, for which traditional algorithms for important operations such as minimisation and inclusion checking are available. However, the use of deterministic automata implies a need to determinise nondeterministic automata that often arise during various computations even when the computations start with deterministic automata. Unfortunately, determinisation is a very expensive step since deterministic automata may be exponentially bigger than the original nondeterministic automata. That is why it appears advantageous to avoid determinisation and work directly with nondeterministic automata. This, however, brings a need to be able to implement operations traditionally done on deterministic automata on nondeterministic automata instead. In particular, this is the case of inclusion checking and minimisation (or rather reduction of the size of automata). In the talk, we review several recently proposed techniques for inclusion checking on nondeterministic finite word and tree automata as well as Büchi automata. These techniques are based on using the so called antichains, possibly combined with a use of suitable simulation relations (and, in the case of Büchi automata, the so called Ramsey-based or rank-based approaches). Further, we discuss techniques for reducing the size of nondeterministic word and tree automata using quotienting based on the recently proposed notion of mediated equivalences. The talk is based on joint work with Parosh Aziz Abdulla, Ahmed Bouajjani, Yu-Fang Chen, Peter Habermehl, Lisa Kaati, Richard Mayr, Tayssir Touili, Lorenzo Clemente, Lukáš Holík, and Chih-Duo Hong.
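
The core antichain idea for inclusion checking can be sketched for word automata: to decide L(A) ⊆ L(B), explore pairs of an A-state and a B-macrostate on the fly, and prune any pair subsumed (same A-state, superset macrostate) by one already visited, so B is never fully determinised. This is a minimal sketch in the spirit of that technique, not the optimized algorithms reviewed in the talk.

```python
from collections import deque

def nfa_step(delta, states, sym):
    # Successor macrostate of a set of NFA states under one symbol.
    out = set()
    for q in states:
        out |= delta.get((q, sym), set())
    return out

def included(alphabet, a, b):
    """Check L(A) <= L(B), where each automaton is a triple
    (delta, initial_states, final_states) with delta[(state, sym)] a set."""
    a_delta, a_init, a_fin = a
    b_delta, b_init, b_fin = b
    frontier = deque((q, frozenset(b_init)) for q in a_init)
    antichain = []                    # subset-minimal visited pairs
    while frontier:
        q, s = frontier.popleft()
        if q in a_fin and not (s & b_fin):
            return False              # some word is accepted by A but not B
        if any(q == q2 and s2 <= s for q2, s2 in antichain):
            continue                  # subsumed by a smaller macrostate
        antichain[:] = [(q2, s2) for q2, s2 in antichain
                        if not (q2 == q and s <= s2)]
        antichain.append((q, s))
        for sym in alphabet:
            for q2 in a_delta.get((q, sym), set()):
                frontier.append((q2, frozenset(nfa_step(b_delta, s, sym))))
    return True

# A accepts exactly "ab"; B1 accepts (a|b)*; B2 accepts exactly "a".
A = ({(0, "a"): {1}, (1, "b"): {2}}, {0}, {2})
B1 = ({(0, "a"): {0}, (0, "b"): {0}}, {0}, {0})
B2 = ({(0, "a"): {1}}, {0}, {1})
```

The pruning is sound because a pair with a larger macrostate can only be harder to turn into a counterexample than one with a smaller macrostate for the same A-state.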

  18. Deterministic influences exceed dispersal effects on hydrologically-connected microbiomes: Deterministic assembly of hyporheic microbiomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Emily B.; Crump, Alex R.; Resch, Charles T.

    2017-03-28

    Subsurface zones of groundwater and surface water mixing (hyporheic zones) are regions of enhanced rates of biogeochemical cycling, yet ecological processes governing hyporheic microbiome composition and function through space and time remain unknown. We sampled attached and planktonic microbiomes in the Columbia River hyporheic zone across seasonal hydrologic change, and employed statistical null models to infer mechanisms generating temporal changes in microbiomes within three hydrologically-connected, physicochemically-distinct geographic zones (inland, nearshore, river). We reveal that microbiomes remain dissimilar through time across all zones and habitat types (attached vs. planktonic) and that deterministic assembly processes regulate microbiome composition in all data subsets. The consistent presence of heterotrophic taxa and members of the Planctomycetes-Verrucomicrobia-Chlamydiae (PVC) superphylum nonetheless suggests common selective pressures for physiologies represented in these groups. Further, co-occurrence networks were used to provide insight into taxa most affected by deterministic assembly processes. We identified network clusters to represent groups of organisms that correlated with seasonal and physicochemical change. Extended network analyses identified keystone taxa within each cluster that we propose are central in microbiome composition and function. Finally, the abundance of one network cluster of nearshore organisms exhibited a seasonal shift from heterotrophic to autotrophic metabolisms and correlated with microbial metabolism, possibly indicating an ecological role for these organisms as foundational species in driving biogeochemical reactions within the hyporheic zone. Taken together, our research demonstrates a predominant role for deterministic assembly across highly-connected environments and provides insight into niche dynamics associated with seasonal changes in hyporheic microbiome composition and metabolism.

  19. Theory and applications of a deterministic approximation to the coalescent model

    PubMed Central

    Jewett, Ethan M.; Rosenberg, Noah A.

    2014-01-01

    Under the coalescent model, the random number nt of lineages ancestral to a sample is nearly deterministic as a function of time when nt is moderate to large in value, and it is well approximated by its expectation E[nt]. In turn, this expectation is well approximated by simple deterministic functions that are easy to compute. Such deterministic functions have been applied to estimate allele age, effective population size, and genetic diversity, and they have been used to study properties of models of infectious disease dynamics. Although a number of simple approximations of E[nt] have been derived and applied to problems of population-genetic inference, the theoretical accuracy of the formulas and the inferences obtained using these approximations is not known, and the range of problems to which they can be applied is not well understood. Here, we demonstrate general procedures by which the approximation nt ≈ E[nt] can be used to reduce the computational complexity of coalescent formulas, and we show that the resulting approximations converge to their true values under simple assumptions. Such approximations provide alternatives to exact formulas that are computationally intractable or numerically unstable when the number of sampled lineages is moderate or large. We also extend an existing class of approximations of E[nt] to the case of multiple populations of time-varying size with migration among them. Our results facilitate the use of the deterministic approximation nt ≈ E[nt] for deriving functionally simple, computationally efficient, and numerically stable approximations of coalescent formulas under complicated demographic scenarios. PMID:24412419
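
One concrete member of the family of deterministic approximations discussed above is obtained by replacing the random lineage count with the solution of the ODE dn/dt = -n(n-1)/2 (time in coalescent units), which gives n(t) = 1 / (1 - ((n0-1)/n0) e^(-t/2)). The sketch below compares that generic approximation against a Monte Carlo simulation of the Kingman coalescent; it is not the specific formulas of the paper, and the sample size and time point are arbitrary.

```python
import math
import random

def n_deterministic(n0, t):
    """Deterministic approximation of the number of ancestral lineages,
    from solving dn/dt = -n(n-1)/2 with n(0) = n0."""
    return 1.0 / (1.0 - ((n0 - 1) / n0) * math.exp(-t / 2.0))

def n_simulated(n0, t, rng):
    # One realization of the Kingman coalescent lineage count at time t:
    # with n lineages, the next coalescence is Exp(n*(n-1)/2) distributed.
    n, now = n0, 0.0
    while n > 1:
        now += rng.expovariate(n * (n - 1) / 2.0)
        if now > t:
            break
        n -= 1
    return n

rng = random.Random(7)
n0, t = 50, 0.1
approx = n_deterministic(n0, t)
mc = sum(n_simulated(n0, t, rng) for _ in range(2000)) / 2000.0
```

For moderate to large lineage counts the deterministic curve tracks the Monte Carlo mean closely, which is exactly the regime in which the abstract says the approximation is useful.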

  20. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession

    PubMed Central

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-01-01

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages—which provide a larger spatiotemporal scale relative to within stage analyses—revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended—and experimentally testable—conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems. PMID:25733885

  1. A stochastic model for correlated protein motions

    NASA Astrophysics Data System (ADS)

    Karain, Wael I.; Qaraeen, Nael I.; Ajarmah, Basem

    2006-06-01

    A one-dimensional Langevin-type stochastic difference equation is used to find the deterministic and Gaussian contributions of time series representing the projections of a Bovine Pancreatic Trypsin Inhibitor (BPTI) protein molecular dynamics simulation along different eigenvector directions determined using principal component analysis. The deterministic part shows a distinct nonlinear behavior only for eigenvectors contributing significantly to the collective protein motion.
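
The separation into a deterministic part and a Gaussian part described above can be illustrated on synthetic data: generate a Langevin-type difference equation with a known nonlinear drift, then recover the drift as the conditional mean increment in bins of x. The drift function, noise level, and bin width below are hypothetical, not the BPTI analysis itself.

```python
import random

# Toy Langevin-type difference equation x[n+1] = x[n] + f(x[n]) + noise,
# with a nonlinear drift f(x) = -0.5*x**3 (assumed) and Gaussian noise.
def f(x):
    return -0.5 * x ** 3

rng = random.Random(3)
xs = [0.5]
for _ in range(200_000):
    xs.append(xs[-1] + f(xs[-1]) + rng.gauss(0.0, 0.1))

# Estimate the deterministic part as the conditional mean increment
# within bins of x; the residual scatter is the Gaussian contribution.
bins = {}
for a, b in zip(xs, xs[1:]):
    key = round(a, 1)                # 0.1-wide bins
    bins.setdefault(key, []).append(b - a)
drift = {k: sum(v) / len(v) for k, v in bins.items() if len(v) > 500}
```

For well-populated bins the recovered conditional mean follows the cubic drift, reproducing the kind of distinct nonlinear behavior the abstract reports along the dominant eigenvector directions.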

  2. Values in Science: Making Sense of Biology Doctoral Students' Critical Examination of a Deterministic Claim in a Media Article

    ERIC Educational Resources Information Center

    Raveendran, Aswathy; Chunawala, Sugra

    2015-01-01

    Several educators have emphasized that students need to understand science as a human endeavor that is not value free. In the exploratory study reported here, we investigated how doctoral students of biology understand the intersection of values and science in the context of genetic determinism. Deterministic research claims have been critiqued…

  3. The dual reading of general conditionals: The influence of abstract versus concrete contexts.

    PubMed

    Wang, Moyun; Yao, Xinyun

    2018-04-01

    A current main issue concerning conditionals is whether the meaning of general conditionals (e.g., If a card is red, then it is round) is deterministic (exceptionless) or probabilistic (exception-tolerating). To resolve the issue, two experiments examined the influence of conditional contexts (with vs. without frequency information about truth table cases) on the reading of general conditionals. Experiment 1 examined the direct reading of general conditionals in a possibility judgment task. Experiment 2 examined the indirect reading of general conditionals in a truth judgment task. Both the direct and the indirect reading of general conditionals exhibited a duality: a predominant deterministic semantic reading of conditionals without frequency information, and a predominant probabilistic pragmatic reading of conditionals with frequency information. The context of a general conditional determined its predominant reading. There were obvious individual differences in reading general conditionals with frequency information. The meaning of general conditionals is thus relative, depending on conditional context, and their reading is flexible and complex enough that neither a simple deterministic account nor a simple probabilistic account can explain it alone. The present findings go beyond the extant deterministic and probabilistic accounts of conditionals.

  4. Deterministic generation of remote entanglement with active quantum feedback

    DOE PAGES

    Martin, Leigh; Motzoi, Felix; Li, Hanhan; ...

    2015-12-10

    We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.

  5. An overview of engineering concepts and current design algorithms for probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Duffy, S. F.; Hu, J.; Hopkins, D. A.

    1995-01-01

    The article begins by examining the fundamentals of traditional deterministic design philosophy. The initial section outlines the concepts of failure criteria and limit state functions, two traditional notions that are embedded in deterministic design philosophy. This is followed by a discussion of safety factors (a possible limit state function) and the common use of statistical concepts in deterministic engineering design approaches. Next, the fundamental aspects of a probabilistic failure analysis are explored, and it is shown that the deterministic design concepts mentioned in the initial portion of the article are embedded in probabilistic design methods. For components fabricated from ceramic materials (and other similarly brittle materials), the probabilistic design approach yields the widely used Weibull analysis after suitable assumptions are incorporated. The authors point out that Weibull analysis provides the rare instance where closed-form solutions are available for a probabilistic failure analysis. Since numerical methods are usually required to evaluate component reliabilities, a section on Monte Carlo methods is included to introduce the concept. The article concludes with a presentation of the technical aspects that support the numerical method known as fast probability integration (FPI), including a discussion of the Hasofer-Lind and Rackwitz-Fiessler approximations.
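
The closed-form/numerical contrast the abstract describes can be sketched in a few lines (the stress and Weibull parameters below are hypothetical, not values from the article): the Weibull failure probability has an exact expression, and a Monte Carlo estimate sampling component strengths should converge to it.

```python
import math
import random

def weibull_failure_prob(stress, scale, shape):
    # Closed-form Weibull failure probability for a brittle component:
    # P_f = 1 - exp(-(stress / scale) ** shape)
    return 1.0 - math.exp(-((stress / scale) ** shape))

def monte_carlo_failure_prob(stress, scale, shape, n=100_000, seed=1):
    # Estimate the same probability numerically: sample component
    # strengths from the Weibull distribution and count the fraction
    # that fail under the applied stress.
    rng = random.Random(seed)
    failures = sum(rng.weibullvariate(scale, shape) <= stress for _ in range(n))
    return failures / n

p_exact = weibull_failure_prob(stress=300.0, scale=400.0, shape=10.0)
p_mc = monte_carlo_failure_prob(stress=300.0, scale=400.0, shape=10.0)
print(p_exact, p_mc)  # the two values agree to within sampling error
```

This is the "rare instance" the article notes: for most reliability problems no such closed form exists, which is why Monte Carlo and fast probability integration are needed.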

  6. Implementation speed of deterministic population passages compared to that of Rabi pulses

    NASA Astrophysics Data System (ADS)

    Chen, Jingwei; Wei, L. F.

    2015-02-01

    The fast Rabi π-pulse technique has been widely applied to various coherent quantum manipulations, although it requires precise design of the pulse areas. Relaxing that requirement, various rapid adiabatic passage (RAP) approaches have been used instead to implement population passages deterministically. However, the usual RAP protocol cannot be implemented as fast as desired, since the relevant adiabatic condition must remain robustly satisfied throughout the passage. Here, we propose a modified shortcut-to-adiabaticity (STA) technique to significantly accelerate the desired deterministic population passages. This transitionless technique goes beyond the usual rotating wave approximation (RWA) made in recent STA protocols, and thus can be applied to deliver fast quantum evolutions in which the relevant counter-rotating effects cannot be neglected. The proposal is demonstrated specifically with driven two- and three-level systems. Numerical results show that with the present beyond-RWA STA technique, the usual Stark-chirped RAPs and stimulated Raman adiabatic passages can be significantly sped up; the deterministic population passages can be implemented as fast as the widely used Rabi π pulses, but are insensitive to the applied pulse areas.

  7. Shielding Calculations on Waste Packages - The Limits and Possibilities of different Calculation Methods by the example of homogeneous and inhomogeneous Waste Packages

    NASA Astrophysics Data System (ADS)

    Adams, Mike; Smalian, Silva

    2017-09-01

    For nuclear waste packages, the expected dose rates and nuclide inventory are calculated in advance. Depending on the packaging of the nuclear waste, deterministic programs like MicroShield® provide a range of results for each type of packaging. Stochastic programs like the Monte-Carlo N-Particle Transport Code System (MCNP®), on the other hand, provide reliable results for complex geometries; however, this type of program requires a fully trained operator, and the calculations are time consuming. The problem is to choose an appropriate program for a specific geometry. We therefore compared the results of deterministic programs like MicroShield® with those of stochastic programs like MCNP®. These comparisons enable us to make a statement about the applicability of the various programs to chosen types of containers. We found that for thin-walled geometries, deterministic programs like MicroShield® are well suited to calculating the dose rate. For cylindrical containers with inner shielding, however, deterministic programs reach their limits. Furthermore, we investigate the effect of an inhomogeneous material and activity distribution on the results. The calculations are still ongoing. Results will be presented in the final abstract.

  8. Calculating complete and exact Pareto front for multiobjective optimization: a new deterministic approach for discrete problems.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel

    2013-06-01

    Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
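
The paper's ripple-spreading algorithm is not reproduced here, but the guarantee it targets, the complete and exact Pareto front of a finite discrete solution set, can be sketched by exhaustive dominance filtering (the bi-objective `routes` data and function names below are illustrative, not from the paper):

```python
def pareto_front(solutions):
    """Return the complete, exact Pareto front (minimizing every
    objective) of a finite set of discrete solutions, each given as a
    tuple of objective values, by exhaustive dominance filtering."""
    def dominates(a, b):
        # a dominates b if it is no worse in every objective and
        # strictly better in at least one.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [s for s in solutions if not any(dominates(t, s) for t in solutions)]

# Hypothetical bi-objective route costs (distance, toll):
routes = [(4, 9), (5, 4), (7, 3), (6, 8), (9, 2)]
print(pareto_front(routes))  # [(4, 9), (5, 4), (7, 3), (9, 2)]
```

This brute-force filter is exact but quadratic in the number of candidate solutions; the point of the paper's approach is to reach the same complete front without enumerating the entire discrete search space.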

  9. Model Validation and Site Characterization for Early Deployment MHK Sites and Establishment of Wave Classification Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kilcher, Levi F

    Model Validation and Site Characterization for Early Deployment Marine and Hydrokinetic Energy Sites and Establishment of Wave Classification Scheme, a presentation from the Water Power Technologies Office Peer Review, FY14-FY16.

  10. Recent Experience Using Active Love Wave Techniques to Characterize Seismographic Station Sites

    NASA Astrophysics Data System (ADS)

    Martin, A. J.; Yong, A.; Salomone, L.

    2014-12-01

    Active-source Love waves recorded by the multi-channel analysis of Love waves (MASLW) technique were recently analyzed in two site characterization projects. Between 2010 and 2011, the 2009 American Recovery and Reinvestment Act (ARRA) funded GEOVision to conduct geophysical investigations at 189 seismographic stations, 185 in California and 4 in the Central and Eastern U.S. (CEUS). The original project plan was to use active- and passive-source Rayleigh wave techniques to obtain shear-wave velocity (VS) profiles to a minimum depth of 30 m and the time-averaged VS of the upper 30 m (VS30). Early in the investigation it became evident that Rayleigh wave techniques, such as multi-channel analysis of Rayleigh waves (MASRW), were not effective at characterizing all sites. Shear-wave seismic refraction and MASLW techniques were therefore applied. The MASLW technique was deployed at a total of 38 sites, in addition to other methods, and used as the primary technique to characterize 22 sites, 5 of which were also characterized using Rayleigh wave techniques. In 2012, the Electric Power Research Institute funded characterization of 33 CEUS station sites. Based on experience from the ARRA investigation, both MASRW and MASLW data were acquired by GEOVision at 24 CEUS sites; the remaining 9 sites and 2 overlapping sites were characterized by the University of Texas at Austin. Of the 24 sites characterized by GEOVision, 16 were characterized using MASLW data, 4 using both MASLW and MASRW data, and 4 using MASRW data. Love wave techniques were often found to perform better, or at least to yield phase velocity data that could be more readily modeled under the fundamental mode assumption, at shallow rock sites, sites with steep velocity gradients, and sites with a thin, low-velocity surficial soil layer overlying stiffer sediments. These types of velocity structure often excite dominant higher modes in Rayleigh wave data, but not in Love wave data.
At such sites, it may be possible to model Rayleigh wave data using multi- or effective-mode techniques; however, in many cases extraction of adequate Rayleigh wave dispersion data for modeling was difficult. These results imply that field procedures should include careful scrutiny of Rayleigh wave-based dispersion data in order to collect Love wave data when warranted.

  11. Price-Dynamics of Shares and Bohmian Mechanics: Deterministic or Stochastic Model?

    NASA Astrophysics Data System (ADS)

    Choustova, Olga

    2007-02-01

    We apply the mathematical formalism of Bohmian mechanics to describe the dynamics of share prices. The main distinguishing feature of the financial Bohmian model is the possibility of taking market psychology into account by describing traders' expectations with the pilot wave. We also discuss some objections (coming from the conventional financial mathematics of stochastic processes) against the deterministic Bohmian model, in particular the objection that such a model contradicts the efficient market hypothesis, which is the cornerstone of modern market ideology. Another objection is purely mathematical: it relates to the quadratic variation of price trajectories. One way to reply to this critique is to consider the stochastic Bohm-Vigier model instead of the deterministic one, and we do so in the present note.

  12. Illustrated structural application of universal first-order reliability method

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1994-01-01

    The general application of the proposed first-order reliability method was achieved through the universal normalization of engineering probability distribution data. The method superimposes prevailing deterministic techniques and practices on the first-order reliability method to surmount deficiencies of the deterministic method and provide benefits of reliability techniques and predictions. A reliability design factor is derived from the reliability criterion to satisfy a specified reliability and is analogous to the deterministic safety factor. Its application is numerically illustrated on several practical structural design and verification cases with interesting results and insights. Two concepts of reliability selection criteria are suggested. Though the method was developed to support affordable structures for access to space, the method should also be applicable for most high-performance air and surface transportation systems.

  13. Observations, theoretical ideas and modeling of turbulent flows: Past, present and future

    NASA Technical Reports Server (NTRS)

    Chapman, G. T.; Tobak, M.

    1985-01-01

    Turbulence was analyzed in a historical context featuring the interactions between observations, theoretical ideas, and modeling within three successive movements. These are identified as predominantly statistical, structural and deterministic. The statistical movement is criticized for its failure to deal with the structural elements observed in turbulent flows. The structural movement is criticized for its failure to embody observed structural elements within a formal theory. The deterministic movement is described as having the potential of overcoming these deficiencies by allowing structural elements to exhibit chaotic behavior that is nevertheless embodied within a theory. Four major ideas of this movement are described: bifurcation theory, strange attractors, fractals, and the renormalization group. A framework for the future study of turbulent flows is proposed, based on the premises of the deterministic movement.

  14. 10 CFR 960.3-2-2 - Nomination of sites as suitable for characterization.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... OF POTENTIAL SITES FOR A NUCLEAR WASTE REPOSITORY Implementation Guidelines § 960.3-2-2 Nomination of... of each repository site. For the second repository, at least three of the sites shall not have been nominated previously. Any site nominated as suitable for characterization for the first repository, but not...

  15. Ames expedited site characterization demonstration at the former manufactured gas plant site, Marshalltown, Iowa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevolo, A.J.; Kjartanson, B.H.; Wonder, J.D.

    1996-03-01

    The goal of the Ames Expedited Site Characterization (ESC) project is to evaluate and promote both innovative technologies (IT) and state-of-the-practice technologies (SOPT) for site characterization and monitoring. In April and May 1994, the ESC project conducted site characterization, technology comparison, and stakeholder demonstration activities at a former manufactured gas plant (FMGP) owned by Iowa Electric Services (IES) Utilities, Inc., in Marshalltown, Iowa. Three areas of technology were fielded at the Marshalltown FMGP site: geophysical, analytical, and data integration. The geophysical technologies are designed to assess the subsurface geological conditions so that the location, fate, and transport of the target contaminants may be assessed and forecast. The analytical technologies/methods are designed to detect and quantify the target contaminants. The data integration technology area consists of hardware and software systems designed to integrate all the site information compiled and collected into a conceptual site model on a daily basis at the site; this conceptual model then becomes the decision-support tool. Simultaneous fielding of different methods within each of the three areas of technology provided data for direct comparison of the technologies fielded, both SOPT and IT. This document reports the results of the site characterization, technology comparison, and ESC demonstration activities associated with the Marshalltown FMGP site. 124 figs., 27 tabs.

  16. FW/CADIS-O: An Angle-Informed Hybrid Method for Neutron Transport

    NASA Astrophysics Data System (ADS)

    Munk, Madicken

    The development of methods for deep-penetration radiation transport is of continued importance for radiation shielding, nonproliferation, nuclear threat reduction, and medical applications. As these applications become more ubiquitous, the need for transport methods that can accurately and reliably model the systems' behavior will persist. For these types of systems, hybrid methods are often the best choice to obtain a reliable answer in a short amount of time. Hybrid methods leverage the speed and uniform uncertainty distribution of a deterministic solution to bias Monte Carlo transport and reduce the variance in the solution. At present, the Consistent Adjoint-Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) hybrid methods are the gold standard for modeling systems with deeply penetrating radiation. They use an adjoint scalar flux to generate variance reduction parameters for Monte Carlo. However, in problems with strong anisotropy in the flux, CADIS and FW-CADIS are not as effective at reducing the problem variance as they are in isotropic problems. This dissertation covers the theoretical background, implementation, and characterization of a set of angle-informed hybrid methods that can be applied to strongly anisotropic deep-penetration radiation transport problems. These methods use a forward-weighted adjoint angular flux to generate variance reduction parameters for Monte Carlo. As a result, they leverage both adjoint and contributon theory for variance reduction. They have been named CADIS-O and FW-CADIS-O. To characterize CADIS-O, several characterization problems with flux anisotropies were devised. These problems contain different physical mechanisms by which flux anisotropy is induced. Additionally, a series of novel anisotropy metrics for quantifying flux anisotropy are used to characterize the methods beyond standard Figure of Merit (FOM) and relative error metrics.
    As a result, a more thorough investigation into the effects of anisotropy and the degree of anisotropy on Monte Carlo convergence is possible. The results from the characterization of CADIS-O show that it performs best in strongly anisotropic problems that have preferential particle flowpaths, but only if the flowpaths are not composed of air. Further, the characterization of the method's sensitivity to deterministic angular discretization showed that CADIS-O is less sensitive to discretization than CADIS for both quadrature order and PN order, although more variation in the results was observed in response to changing quadrature order than PN order. Additionally, as a result of the forward normalization in the O-methods, ray-effect mitigation was observed in many of the characterization problems. The characterization of the CADIS-O method in this dissertation outlines a path forward for further hybrid methods development. In particular, the method's response to changes in quadrature order and PN order, together with its ray-effect mitigation, strongly indicates that it is more resilient than its predecessors to strong anisotropies in the flux. With further characterization, the full potential of the O-methods can be realized. The method can then be applied to geometrically complex, materially diverse problems and help advance system modeling in deep-penetration radiation transport problems with strong anisotropies in the flux.
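
CADIS-style biasing requires a deterministic adjoint solve, which is far beyond a short snippet, but the variance-reduction principle it exploits can be sketched in one dimension with importance sampling of a rare slab-penetration probability (illustrative parameters and function names, not the dissertation's method):

```python
import math
import random

def analog_estimate(T, lam, n, rng):
    # Analog Monte Carlo: a particle penetrates the slab if its sampled
    # free path exceeds the thickness T. When T >> lam almost no
    # histories score, so the relative error is large.
    return sum(rng.expovariate(1.0 / lam) > T for _ in range(n)) / n

def biased_estimate(T, lam, lam_b, n, rng):
    # Importance sampling: sample path lengths from a stretched
    # exponential (mean lam_b > lam) and carry the likelihood-ratio
    # weight. This is a 1-D stand-in for the importance biasing that
    # hybrid methods derive from a deterministic (adjoint) solution.
    total = 0.0
    for _ in range(n):
        s = rng.expovariate(1.0 / lam_b)
        if s > T:
            total += (lam_b / lam) * math.exp(-s * (1.0 / lam - 1.0 / lam_b))
    return total / n

rng = random.Random(0)
exact = math.exp(-10.0)  # exact penetration probability for T / lam = 10
print(exact,
      analog_estimate(10.0, 1.0, 50_000, rng),
      biased_estimate(10.0, 1.0, 10.0, 50_000, rng))
```

With the stretched distribution roughly a third of the histories score, so the biased estimator recovers the ~4.5e-5 probability with modest sample counts, while the analog estimator scores only a handful of histories at the same cost.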

  17. Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Brunett, Acacia

    2015-04-26

    The Risk-Informed Safety Margin Characterization (RISMC) methodology, developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and a capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact on statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals for the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative way to gauge the increase in statistical accuracy gained by performing additional simulations.
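
The load-versus-capacity comparison and the sample-size question can be sketched as follows (hypothetical Gaussian distributions and a textbook normal-approximation interval; the report's two proposed interval methods are not reproduced here):

```python
import math
import random

def rismc_failure_prob(n=20_000, seed=42):
    # RISMC-style comparison: sample a load and a capacity distribution
    # and count the fraction of trials where load >= capacity.
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        load = rng.gauss(500.0, 50.0)      # hypothetical peak load
        capacity = rng.gauss(650.0, 40.0)  # hypothetical capacity
        if load >= capacity:
            failures += 1
    p_hat = failures / n
    # Frequentist 95% confidence interval (normal approximation). Its
    # half-width shrinks as 1/sqrt(n), which quantifies how much
    # statistical accuracy additional simulations buy.
    half = 1.96 * math.sqrt(p_hat * (1.0 - p_hat) / n)
    return p_hat, (max(0.0, p_hat - half), p_hat + half)

p, (lo, hi) = rismc_failure_prob()
print(p, lo, hi)
```

Quadrupling `n` halves the interval width, which is exactly the kind of accuracy-versus-cost trade-off the confidence interval is meant to expose.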

  18. Site Characterization Technologies for DNAPL Investigations

    EPA Pesticide Factsheets

    This document is intended to help managers at sites with potential or confirmed DNAPL contamination identify suitable characterization technologies, screen the technologies for potential application, learn about applications at similar sites, and...

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, H.

    In this dissertation we study a procedure that restarts a Markov process when the process is killed by some arbitrary multiplicative functional. The regenerative nature of this revival procedure is characterized through a Markov renewal equation. An interesting duality between the revival procedure and the classical killing operation is found. Under the condition that the multiplicative functional possesses an intensity, the generators of the revival process can be written down explicitly. An intimate connection is also found between the perturbation of the sample path of a Markov process and the perturbation of a generator (in Kato's sense). Applications of the theory include the study of processes such as the piecewise-deterministic Markov process, the virtual waiting time process, and the first entrance decomposition (taboo probability).

  20. Evaluation of Fast-Time Wake Vortex Models using Wake Encounter Flight Test Data

    NASA Technical Reports Server (NTRS)

    Ahmad, Nashat N.; VanValkenburg, Randal L.; Bowles, Roland L.; Limon Duparcmeur, Fanny M.; Gloudesman, Thijs; van Lochem, Sander; Ras, Eelco

    2014-01-01

    This paper describes a methodology for the integration and evaluation of fast-time wake models with flight data. The National Aeronautics and Space Administration conducted detailed flight tests in 1995 and 1997 under the Aircraft Vortex Spacing System Program to characterize wake vortex decay and wake encounter dynamics. In this study, data collected during Flight 705 were used to evaluate NASA's fast-time wake transport and decay models. Deterministic and Monte-Carlo simulations were conducted to define wake hazard bounds behind the wake generator. The methodology described in this paper can be used for further validation of fast-time wake models using en-route flight data, and for determining wake turbulence constraints in the design of air traffic management concepts.
