Sample records for "develop 3-d deterministic"

  1. Applied Mathematics in EM Studies with Special Emphasis on an Uncertainty Quantification and 3-D Integral Equation Modelling

    NASA Astrophysics Data System (ADS)

    Pankratov, Oleg; Kuvshinov, Alexey

    2016-01-01

    Despite impressive progress in the development and application of electromagnetic (EM) deterministic inverse schemes to map the 3-D distribution of electrical conductivity within the Earth, one question remains poorly addressed: uncertainty quantification of the recovered conductivity models. Apparently, only an inversion based on a statistical approach provides a systematic framework to quantify such uncertainties. The Metropolis-Hastings (M-H) algorithm is the most popular technique for sampling the posterior probability distribution that describes the solution of the statistical inverse problem. However, all statistical inverse schemes require an enormous number of forward simulations and thus appear to be extremely demanding computationally, if not prohibitive, when a 3-D setup is invoked. This urges the development of fast and scalable 3-D modelling codes which can run large-scale 3-D models of practical interest in fractions of a second on high-performance multi-core platforms. But even with these codes, the challenge for M-H methods is to construct proposal functions that provide a good approximation of the target density function while remaining inexpensive to sample. In this paper we address both of these issues. First we introduce a variant of the M-H method which uses information about the local gradient and Hessian of the penalty function. This, in particular, allows us to exploit the adjoint-based machinery that has been instrumental for the fast solution of deterministic inverse problems. We explain why this modification of M-H significantly accelerates sampling of the posterior probability distribution. In addition we show how Hessian handling (inverse, square root) can be made practicable by a low-rank approximation using the Lanczos algorithm. Ultimately we discuss uncertainty analysis based on stochastic inversion results. In addition, we demonstrate how this analysis can be performed within a deterministic approach. In the second part, we summarize modern trends in the development of efficient 3-D EM forward modelling schemes with special emphasis on recent advances in the integral equation approach.
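
    The gradient-informed proposal at the heart of this approach can be sketched compactly. The following is a minimal illustration (not the authors' code) of a Metropolis-adjusted Langevin step on a toy 2-D Gaussian posterior; in the paper's setting the gradient would come from adjoint computations and the Hessian from a low-rank Lanczos approximation, both omitted here.

    ```python
    # Minimal MALA sketch: gradient-informed Metropolis-Hastings on a
    # toy 2-D Gaussian posterior. grad_log_post stands in for an
    # adjoint-based gradient of the (negative) penalty function.
    import numpy as np

    rng = np.random.default_rng(0)
    C_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 2.0]]))  # toy precision

    def log_post(m):
        return -0.5 * m @ C_inv @ m

    def grad_log_post(m):          # analytic here; adjoint-based in practice
        return -C_inv @ m

    def mala(m0, eps, n_steps):
        m, samples = m0, []
        for _ in range(n_steps):
            drift = 0.5 * eps**2 * grad_log_post(m)
            prop = m + drift + eps * rng.standard_normal(m.size)
            # asymmetric-proposal correction log q(m|prop) - log q(prop|m)
            fwd = -np.sum((prop - m - drift)**2) / (2 * eps**2)
            bwd = -np.sum((m - prop - 0.5 * eps**2 * grad_log_post(prop))**2) / (2 * eps**2)
            if np.log(rng.random()) < log_post(prop) - log_post(m) + bwd - fwd:
                m = prop
            samples.append(m)
        return np.array(samples)

    chain = mala(np.zeros(2), eps=0.5, n_steps=5000)
    print("posterior mean estimate:", chain[2500:].mean(axis=0))
    ```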

  2. Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frambati, S.; Frignani, M.

    2012-07-01

    We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)
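
    As a concrete illustration of the kind of translation such a kit performs, the sketch below writes a regular-mesh scalar field (e.g. a flux map) to the legacy VTK format that open-source viewers such as ParaView read. This is not the cited tool-kit; the field and file names are illustrative assumptions.

    ```python
    # Hedged sketch: dump a 3-D scalar field on a structured grid to a
    # legacy-format VTK file readable by open-source visualization tools.
    import numpy as np

    def write_vtk_structured_points(path, field, spacing=(1.0, 1.0, 1.0)):
        nx, ny, nz = field.shape
        with open(path, "w") as f:
            f.write("# vtk DataFile Version 3.0\n")
            f.write("transport code output\nASCII\n")
            f.write("DATASET STRUCTURED_POINTS\n")
            f.write(f"DIMENSIONS {nx} {ny} {nz}\n")
            f.write("ORIGIN 0 0 0\n")
            f.write(f"SPACING {spacing[0]} {spacing[1]} {spacing[2]}\n")
            f.write(f"POINT_DATA {nx * ny * nz}\n")
            f.write("SCALARS flux float 1\nLOOKUP_TABLE default\n")
            # legacy VTK expects x varying fastest, then y, then z
            for v in field.transpose(2, 1, 0).ravel():
                f.write(f"{v:.6e}\n")

    write_vtk_structured_points("flux.vtk", np.random.rand(10, 10, 10))
    ```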

  3. Recent progress in the assembly of nanodevices and van der Waals heterostructures by deterministic placement of 2D materials.

    PubMed

    Frisenda, Riccardo; Navarro-Moratalla, Efrén; Gant, Patricia; Pérez De Lara, David; Jarillo-Herrero, Pablo; Gorbachev, Roman V; Castellanos-Gomez, Andres

    2018-01-02

    Designer heterostructures can now be assembled layer-by-layer with unmatched precision thanks to the recently developed deterministic placement methods to transfer two-dimensional (2D) materials. This possibility constitutes the birth of a very active research field on the so-called van der Waals heterostructures. Moreover, these deterministic placement methods also open the door to fabricate complex devices, which would be otherwise very difficult to achieve by conventional bottom-up nanofabrication approaches, and to fabricate fully-encapsulated devices with exquisite electronic properties. The integration of 2D materials with existing technologies such as photonic and superconducting waveguides and fiber optics is another exciting possibility. Here, we review the state-of-the-art of the deterministic placement methods, describing and comparing the different alternative methods available in the literature, and we illustrate their potential to fabricate van der Waals heterostructures, to integrate 2D materials into complex devices and to fabricate artificial bilayer structures where the layers present a user-defined rotational twisting angle.

  4. GPU accelerated simulations of 3D deterministic particle transport using discrete ordinates method

    NASA Astrophysics Data System (ADS)

    Gong, Chunye; Liu, Jie; Chi, Lihua; Huang, Haowei; Fang, Jingyue; Gong, Zhenghu

    2011-07-01

    Graphics Processing Unit (GPU) hardware, originally developed for real-time, high-definition 3D graphics in computer games, now provides great capability for solving scientific applications. The basis of particle transport simulation is the time-dependent, multi-group, inhomogeneous Boltzmann transport equation. The numerical solution to the Boltzmann equation involves the discrete ordinates (Sn) method and the procedure of source iteration. In this paper, we present a GPU accelerated simulation of one-energy-group, time-independent, deterministic discrete ordinates particle transport in 3D Cartesian geometry (Sweep3D). The performance of the GPU simulations is reported for simulations with a vacuum boundary condition. The relative advantages and disadvantages of the GPU implementation, simulation on multiple GPUs, the programming effort and code portability are also discussed. The results show that the overall performance speedup of one NVIDIA Tesla M2050 GPU ranges from 2.56 compared with one Intel Xeon X5670 chip to 8.14 compared with one Intel Core Q6600 chip for no flux fixup. The simulation with flux fixup on one M2050 is 1.23 times faster than on one X5670.
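
    The source-iteration/sweep structure that Sweep3D implements can be illustrated in one dimension. Below is a hedged sketch of a one-group, 1-D slab analogue with diamond-difference sweeps and an S8 Gauss-Legendre quadrature; the GPU kernel itself, and the 3-D wavefront sweep ordering, are beyond this toy.

    ```python
    # 1-D slab, one-group discrete-ordinates source iteration with
    # diamond-difference sweeps and vacuum boundaries (illustrative data).
    import numpy as np

    nx, dx = 100, 0.1
    sig_t, sig_s, q_ext = 1.0, 0.5, 1.0          # total, scattering, source
    mu, w = np.polynomial.legendre.leggauss(8)   # S8 angular quadrature
    phi = np.zeros(nx)

    for it in range(200):                        # source iteration
        src = 0.5 * (sig_s * phi + q_ext)        # isotropic emission density
        phi_new = np.zeros(nx)
        for m in range(len(mu)):
            psi_in = 0.0                         # vacuum boundary
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            for i in cells:                      # transport sweep
                a = abs(mu[m]) / dx
                psi_c = (src[i] + 2 * a * psi_in) / (sig_t + 2 * a)  # diamond diff.
                psi_in = 2 * psi_c - psi_in      # outgoing = 2*center - incoming
                phi_new[i] += w[m] * psi_c
        diff = np.max(np.abs(phi_new - phi))
        phi = phi_new
        if diff < 1e-8:
            break
    print(f"converged in {it} iterations, midline flux {phi[nx // 2]:.4f}")
    ```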

  5. An Analysis of Coherent Digital Receivers in the Presence of Colored Noise Interference.

    DTIC Science & Technology

    1985-06-01

    [The record excerpt consists of OCR fragments from the report rather than an abstract.] Recoverable list-of-figures entries: "6.4 Pe for Deterministic Jammers, JSR = 0.01" (p. 116) and "6.5 Pe for Deterministic Jammers, JSR = 0.1". Recoverable text fragments mention the particular and homogeneous solutions h_p(t) and h_h(t) of a differential equation derived from the Fredholm integral equation, and an equation D(s²)C(s) = N(s²) (Eq. 3.4), with the remark that multiplication by s corresponds to differentiation with respect to t in the time domain.

  6. MC2-3 / DIF3D Analysis for the ZPPR-15 Doppler and Sodium Void Worth Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Micheal A.; Lell, Richard M.; Lee, Changho

    This manuscript covers validation efforts for our deterministic codes at Argonne National Laboratory. The experimental results come from the ZPPR-15 work in 1985-1986, which was focused on the accuracy of physics data for the integral fast reactor concept. Results for six loadings are studied in this document, with a focus on Doppler sample worths and sodium void worths. The ZPPR-15 loadings are modeled using the MC2-3/DIF3D codes developed and maintained at ANL and the MCNP code from LANL. The deterministic models are generated by processing the as-built geometry information, i.e. the MCNP input, into MC2-3 cross section generation instructions and a drawer-homogenized equivalence problem. The Doppler reactivity worth measurements use small heated samples which insert very small amounts of reactivity into the system (< 2 pcm). The results generated by the MC2-3/DIF3D codes were excellent for ZPPR-15A and ZPPR-15B and good for ZPPR-15D, compared to the MCNP solutions. In all cases, notable improvements were made over the analysis techniques applied to the same problems in 1987. The sodium void worth from MC2-3/DIF3D was quite good at 37.5 pcm, while the MCNP result was 33 pcm and the measured result was 31.5 pcm.

  7. Subspace algorithms for identifying separable-in-denominator 2D systems with deterministic-stochastic inputs

    NASA Astrophysics Data System (ADS)

    Ramos, José A.; Mercère, Guillaume

    2016-12-01

    In this paper, we present an algorithm for identifying two-dimensional (2D) causal, recursive and separable-in-denominator (CRSD) state-space models in the Roesser form with deterministic-stochastic inputs. The algorithm implements the N4SID, PO-MOESP and CCA methods, which are well known in the literature on 1D system identification, and extends them to the 2D CRSD Roesser model. The algorithm solves the 2D system identification problem by maintaining the constraint structure imposed by the problem (i.e. Toeplitz and Hankel) and computes the horizontal and vertical system orders, system parameter matrices and covariance matrices of a 2D CRSD Roesser model. From a computational point of view, the algorithm has been presented in a unified framework, where the user can select which of the three methods to use. Furthermore, the identification task is divided into three main parts: (1) computing the deterministic horizontal model parameters, (2) computing the deterministic vertical model parameters and (3) computing the stochastic components. Specific attention has been paid to the computation of a stabilised Kalman gain matrix and a positive real solution when required. The efficiency and robustness of the unified algorithm have been demonstrated via a thorough simulation example.
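
    To show the common core these subspace methods share, here is a hedged 1-D analogue (a Ho-Kalman-style realization, far simpler than the 2D CRSD Roesser algorithm): the SVD of a Hankel matrix built from the data reveals the system order, and the shift structure of its factors recovers the state matrix. All system matrices below are illustrative.

    ```python
    # Toy 1-D subspace identification: Hankel matrix of Markov parameters,
    # SVD for the order, shift-invariance for the state matrix.
    import numpy as np

    A = np.array([[0.8, 0.2], [0.0, 0.5]])
    B = np.array([[1.0], [0.5]])
    C = np.array([[1.0, 0.0]])
    markov = [C @ np.linalg.matrix_power(A, k) @ B for k in range(10)]  # impulse response

    H = np.block([[markov[i + j] for j in range(5)] for i in range(5)])  # Hankel matrix
    U, s, Vt = np.linalg.svd(H)
    n = int(np.sum(s > 1e-8 * s[0]))        # numerical rank = system order
    print("estimated order:", n)            # -> 2

    # balanced realization from the rank-n factors of H
    sq = np.sqrt(s[:n])
    obs = U[:, :n] * sq                     # extended observability matrix
    A_est = np.linalg.pinv(obs[:-1, :]) @ obs[1:, :]   # shift invariance
    print("eigenvalues:", np.sort(np.linalg.eigvals(A_est)))  # ~ {0.5, 0.8}
    ```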

  8. Convergence studies of deterministic methods for LWR explicit reflector methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canepa, S.; Hursin, M.; Ferroukhi, H.

    2013-07-01

    The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially one of the main sources of errors for core analyses of the Swiss operating LWRs, all of which belong to GII designs. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is to first recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study with regard to geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the downstream 3-D core analysis accuracy is evaluated for a typical GII reflector design in order to assess the results against available plant measurements. (authors)

  9. Three dimensional fabrication at small size scales

    PubMed Central

    Leong, Timothy G.; Zarafshar, Aasiyeh M.; Gracias, David H.

    2010-01-01

    Despite the fact that we live in a three-dimensional (3D) world and macroscale engineering is 3D, conventional sub-mm scale engineering is inherently two-dimensional (2D). New fabrication and patterning strategies are needed to enable truly three-dimensionally-engineered structures at small size scales. Here, we review strategies that have been developed over the last two decades that seek to enable such millimeter to nanoscale 3D fabrication and patterning. A focus of this review is the strategy of self-assembly, specifically in a biologically inspired, more deterministic form known as self-folding. Self-folding methods can leverage the strengths of lithography to enable the construction of precisely patterned 3D structures and “smart” components. This self-assembling approach is compared with other 3D fabrication paradigms, and its advantages and disadvantages are discussed. PMID:20349446

  10. Experimental demonstration on the deterministic quantum key distribution based on entangled photons.

    PubMed

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-02-10

    As an important resource, entangled light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). Few experiments have implemented entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified "Ping-Pong" (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications.

  11. Experimental demonstration on the deterministic quantum key distribution based on entangled photons

    PubMed Central

    Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu

    2016-01-01

    As an important resource, entangled light sources have been used in developing quantum information technologies, such as quantum key distribution (QKD). Few experiments have implemented entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified “Ping-Pong” (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications. PMID:26860582

  12. Reactor Pressure Vessel Fracture Analysis Capabilities in Grizzly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Benjamin; Backman, Marie; Chakraborty, Pritam

    2015-03-01

    Efforts have been underway to develop fracture mechanics capabilities in the Grizzly code to enable it to be used to perform deterministic fracture assessments of degraded reactor pressure vessels (RPVs). Development in prior years resulted in a capability to calculate J-integrals. For this application, these are used to calculate stress intensity factors for cracks in deterministic linear elastic fracture mechanics (LEFM) assessments of fracture in degraded RPVs. The J-integral can only be used to evaluate stress intensity factors for axis-aligned flaws because it can only be used to obtain the stress intensity factor for pure Mode I loading. Off-axis flaws will be subjected to mixed-mode loading. For this reason, work has continued to expand the set of fracture mechanics capabilities to permit the evaluation of off-axis flaws. This report documents the following work to enhance Grizzly’s engineering fracture mechanics capabilities for RPVs: • Interaction integral and T-stress: To obtain mixed-mode stress intensity factors, a capability to evaluate interaction integrals for 2D or 3D flaws has been developed. A T-stress evaluation capability has been developed to evaluate the constraint at crack tips in 2D or 3D. Initial verification testing of these capabilities is documented here. • Benchmarking for axis-aligned flaws: Grizzly’s capabilities to evaluate stress intensity factors for axis-aligned flaws have been benchmarked against calculations for the same conditions in FAVOR. • Off-axis flaw demonstration: The newly-developed interaction integral capabilities are demonstrated in an application to calculate the mixed-mode stress intensity factors for off-axis flaws. • Other code enhancements: Other enhancements to the thermomechanics capabilities that relate to the solution of the engineering RPV fracture problem are documented here.

  13. Printing, folding and assembly methods for forming 3D mesostructures in advanced materials

    NASA Astrophysics Data System (ADS)

    Zhang, Yihui; Zhang, Fan; Yan, Zheng; Ma, Qiang; Li, Xiuling; Huang, Yonggang; Rogers, John A.

    2017-03-01

    A rapidly expanding area of research in materials science involves the development of routes to complex 3D structures with feature sizes in the mesoscopic range (that is, between tens of nanometres and hundreds of micrometres). A goal is to establish methods for controlling the properties of materials systems and the function of devices constructed with them, not only through chemistry and morphology, but also through 3D architectures. The resulting systems, sometimes referred to as metamaterials, offer engineered behaviours with optical, thermal, acoustic, mechanical and electronic properties that do not occur in the natural world. Impressive advances in 3D printing techniques represent some of the most broadly recognized developments in this field, but recent successes with strategies based on concepts in origami, kirigami and deterministic assembly provide additional, unique options in 3D design and high-performance materials. In this Review, we highlight the latest progress and trends in methods for fabricating 3D mesostructures, beginning with the development of advanced material inks for nozzle-based approaches to 3D printing and new schemes for 3D optical patterning. In subsequent sections, we summarize more recent methods based on folding, rolling and mechanical assembly, including their application with materials such as designer hydrogels, monocrystalline inorganic semiconductors and graphene.

  14. INCREASING HEAVY OIL RESERVES IN THE WILMINGTON OIL FIELD THROUGH ADVANCED RESERVOIR CHARACTERIZATION AND THERMAL PRODUCTION TECHNOLOGIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott Hara

    2000-02-18

    The project involves using advanced reservoir characterization and thermal production technologies to improve thermal recovery techniques and lower operating and capital costs in a slope and basin clastic (SBC) reservoir in the Wilmington field, Los Angeles Co., CA. Through March 1999, project work has been completed related to data preparation, basic reservoir engineering, developing a deterministic three dimensional (3-D) geologic model, a 3-D deterministic reservoir simulation model, and a rock-log model, well drilling and completions, and surface facilities. Work is continuing on the stochastic geologic model, developing a 3-D stochastic thermal reservoir simulation model of the Fault Block IIA Tar (Tar II-A) Zone, and operational work and research studies to prevent thermal-related formation compaction. Thermal-related formation compaction is a concern of the project team due to observed surface subsidence in the local area above the steamflood project. Last quarter on January 12, the steamflood project lost its inexpensive steam source from the Harbor Cogeneration Plant as a result of the recent deregulation of electrical power rates in California. An operational plan was developed and implemented to mitigate the effects of the two situations. Seven water injection wells were placed in service in November and December 1998 on the flanks of the Phase 1 steamflood area to pressure up the reservoir to fill up the existing steam chest. Intensive reservoir engineering and geomechanics studies are continuing to determine the best ways to shut down the steamflood operations in Fault Block II while minimizing any future surface subsidence. The new 3-D deterministic thermal reservoir simulator model is being used to provide sensitivity cases to optimize production, steam injection, future flank cold water injection and reservoir temperature and pressure. According to the model, reservoir fill up of the steam chest at the current injection rate of 28,000 BPD and gross and net oil production rates of 7,700 BPD and 750 BOPD (injection to production ratio of 4) will occur in October 1999. At that time, the reservoir should act more like a waterflood and production and cold water injection can be operated at lower net injection rates to be determined. Modeling runs developed this quarter found that varying individual well injection rates to meet added production and local pressure problems by sub-zone could reduce steam chest fill-up by up to one month.

  15. Spatial delineation, fluid-lithology characterization, and petrophysical modeling of deepwater Gulf of Mexico reservoirs through joint AVA deterministic and stochastic inversion of three-dimensional partially-stacked seismic amplitude data and well logs

    NASA Astrophysics Data System (ADS)

    Contreras, Arturo Javier

    This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generate typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results. By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.

  16. Programmable growth of branched silicon nanowires using a focused ion beam.

    PubMed

    Jun, Kimin; Jacobson, Joseph M

    2010-08-11

    Although significant progress has been made in being able to spatially define the position of material layers in vapor-liquid-solid (VLS) grown nanowires, less work has been carried out in deterministically defining the positions of nanowire branching points to facilitate more complicated structures beyond simple 1D wires. Work to date has focused on the growth of randomly branched nanowire structures. Here we develop a means for programmably designating nanowire branching points by means of focused ion beam-defined VLS catalytic points. This technique is repeatable without losing fidelity allowing multiple rounds of branching point definition followed by branch growth resulting in complex structures. The single crystal nature of this approach allows us to describe resulting structures with linear combinations of base vectors in three-dimensional (3D) space. Finally, by etching the resulting 3D defined wire structures branched nanotubes were fabricated with interconnected nanochannels inside. We believe that the techniques developed here should comprise a useful tool for extending linear VLS nanowire growth to generalized 3D wire structures.

  17. Plenary: Progress in Regional Landslide Hazard Assessment—Examples from the USA

    USGS Publications Warehouse

    Baum, Rex L.; Schulz, William; Brien, Dianne L.; Burns, William J.; Reid, Mark E.; Godt, Jonathan W.

    2014-01-01

    Landslide hazard assessment at local and regional scales contributes to mitigation of landslides in developing and densely populated areas by providing information for (1) land development and redevelopment plans and regulations, (2) emergency preparedness plans, and (3) economic analysis to (a) set priorities for engineered mitigation projects and (b) define areas of similar levels of hazard for insurance purposes. US Geological Survey (USGS) research on landslide hazard assessment has explored a range of methods that can be used to estimate temporal and spatial landslide potential and probability for various scales and purposes. Cases taken primarily from our work in the U.S. Pacific Northwest illustrate and compare a sampling of methods, approaches, and progress. For example, landform mapping using high-resolution topographic data resulted in identification of about four times more landslides in Seattle, Washington, than previous efforts using aerial photography. Susceptibility classes based on the landforms captured 93 % of all historical landslides (all types) throughout the city. A deterministic model for rainfall infiltration and shallow landslide initiation, TRIGRS, was able to identify locations of 92 % of historical shallow landslides in southwest Seattle. The potentially unstable areas identified by TRIGRS occupied only 26 % of the slope areas steeper than 20°. Addition of an unsaturated infiltration model to TRIGRS expands the applicability of the model to areas of highly permeable soils. Replacement of the single cell, 1D factor of safety with a simple 3D method of columns improves accuracy of factor of safety predictions for both saturated and unsaturated infiltration models. A 3D deterministic model for large, deep landslides, SCOOPS, combined with a three-dimensional model for groundwater flow, successfully predicted instability in steep areas of permeable outwash sand and topographic reentrants. These locations are consistent with locations of large, deep, historically active landslides. For an area in Seattle, a composite of the three maps illustrates how maps produced by different approaches might be combined to assess overall landslide potential. Examples from Oregon, USA, illustrate how landform mapping and deterministic analysis for shallow landslide potential have been adapted into standardized methods for efficiently producing detailed landslide inventory and shallow landslide susceptibility maps that have consistent content and format statewide.
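
    The deterministic core of models like TRIGRS is the one-dimensional infinite-slope factor of safety. A minimal sketch of that formula (following Iverson's formulation; all parameter values below are illustrative assumptions, not from the Seattle study):

    ```python
    # Infinite-slope factor of safety with a transient pressure head, as
    # used in TRIGRS-style shallow-landslide models (illustrative values).
    import numpy as np

    def factor_of_safety(theta, z, psi, c=4000.0, phi=np.radians(34.0),
                         gamma_s=20000.0, gamma_w=9810.0):
        """theta: slope angle (rad); z: failure depth (m); psi: pressure
        head (m); c: cohesion (Pa); phi: friction angle; gamma_s, gamma_w:
        soil and water unit weights (N/m^3)."""
        frictional = np.tan(phi) / np.tan(theta)
        cohesive = (c - psi * gamma_w * np.tan(phi)) / (
            gamma_s * z * np.sin(theta) * np.cos(theta))
        return frictional + cohesive

    # rising pressure head during infiltration drives FS toward failure (FS < 1)
    for psi in (0.0, 0.5, 1.0):
        print(f"psi = {psi} m -> FS = {factor_of_safety(np.radians(30), 2.0, psi):.2f}")
    ```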

  18. Laser targets compensate for limitations in inertial confinement fusion drivers

    NASA Astrophysics Data System (ADS)

    Kilkenny, J. D.; Alexander, N. B.; Nikroo, A.; Steinman, D. A.; Nobile, A.; Bernat, T.; Cook, R.; Letts, S.; Takagi, M.; Harding, D.

    2005-10-01

    Success in inertial confinement fusion (ICF) requires sophisticated, characterized targets. The increasing fidelity of three-dimensional (3D) radiation hydrodynamic computer codes has made it possible to design targets for ICF which can compensate for limitations in the existing single-shot laser and Z-pinch ICF drivers. Developments in ICF target fabrication technology allow more esoteric target designs to be fabricated. Present requirements call for new deterministic nano-material fabrication at the micro scale.

  19. Proteus-MOC: A 3D deterministic solver incorporating 2D method of characteristics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marin-Lafleche, A.; Smith, M. A.; Lee, C.

    2013-07-01

    A new transport solution methodology was developed by combining the two-dimensional method of characteristics with the discontinuous Galerkin method for the treatment of the axial variable. The method, which can be applied to arbitrary extruded geometries, was implemented in PROTEUS-MOC and includes parallelization in group, angle, plane, and space using a top level GMRES linear algebra solver. Verification tests were performed to show accuracy and stability of the method with the increased number of angular directions and mesh elements. Good scalability with parallelism in angle and axial planes is displayed. (authors)

  20. Anderson transition in a three-dimensional kicked rotor

    NASA Astrophysics Data System (ADS)

    Wang, Jiao; García-García, Antonio M.

    2009-03-01

    We investigate Anderson localization in a three-dimensional (3D) kicked rotor. By a finite-size scaling analysis we identify a mobility edge for a certain value of the kicking strength k = k_c. For k > k_c dynamical localization does not occur, all eigenstates are delocalized and the spectral correlations are well described by Wigner-Dyson statistics. This can be understood by mapping the kicked rotor problem onto a 3D Anderson model (AM) where a band of metallic states exists for sufficiently weak disorder. Around the critical region k ≈ k_c we carry out a detailed study of the level statistics and quantum diffusion. In agreement with the predictions of the one-parameter scaling theory (OPT) and with previous numerical simulations, the number variance is linear, level repulsion is still observed, and quantum diffusion is anomalous with ⟨p²⟩ ∝ t^(2/3). We note that in the 3D kicked rotor the dynamics is not random but deterministic. In order to estimate the differences between these two situations we have studied a 3D kicked rotor in which the kinetic term of the associated evolution matrix is random. A detailed numerical comparison shows that the differences between the two cases are relatively small. However in the deterministic case only a small set of irrational periods was used. A qualitative analysis of a much larger set suggests that deviations between the random and the deterministic kicked rotor can be important for certain choices of periods. Heuristically it is expected that localization effects will be weaker in a nonrandom potential since destructive interference will be less effective to arrest quantum diffusion. However we have found that certain choices of irrational periods enhance Anderson localization effects.
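
    The kick/free-rotation structure of the evolution operator can be sketched for the ordinary 1-D quantum kicked rotor; the 3-D variant studied here modulates the kick with additional incommensurate frequencies, which this hedged toy omits.

    ```python
    # Split-step evolution of the 1-D quantum kicked rotor: kick diagonal
    # in angle, free rotation diagonal in angular momentum (via FFT).
    import numpy as np

    N = 2**11
    theta = 2 * np.pi * np.arange(N) / N
    n = np.fft.fftfreq(N) * N                  # integer momenta, FFT ordering
    k = 5.0                                    # kick strength (illustrative)
    tau = 2 * np.pi * (np.sqrt(5) - 1) / 2     # irrational effective period

    psi = np.ones(N, complex) / np.sqrt(N)     # start in the n = 0 state
    kick = np.exp(-1j * k * np.cos(theta))     # kick operator, angle basis
    free = np.exp(-1j * tau * n**2 / 2)        # free rotation, momentum basis

    for t in range(1000):
        c = free * np.fft.fft(kick * psi)      # kick in theta, rotate in n
        psi = np.fft.ifft(c)
    prob = np.abs(c)**2 / np.sum(np.abs(c)**2)
    print("<p^2> after 1000 kicks:", np.sum(n**2 * prob))  # saturates if localized
    ```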

  1. Scoping analysis of the Advanced Test Reactor using SN2ND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolters, E.; Smith, M.

    2012-07-26

    A detailed set of calculations was carried out for the Advanced Test Reactor (ATR) using the SN2ND solver of the UNIC code, which is part of the SHARP multi-physics code being developed under the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program in DOE-NE. The primary motivation of this work is to assess whether high fidelity deterministic transport codes can tackle coupled dynamics simulations of the ATR. The successful use of such codes in a coupled dynamics simulation can impact what experiments are performed and what power levels are permitted during those experiments at the ATR. The advantages of the SN2ND solver over comparable neutronics tools are its superior parallel performance and demonstrated accuracy on large scale homogeneous and heterogeneous reactor geometries. However, it should be noted that virtually no effort from this project was spent constructing a proper cross section generation methodology for the ATR usable in the SN2ND solver. While attempts were made to use cross section data derived from SCALE, only a minimal number of compositional cross section sets were generated, to be consistent with the reference Monte Carlo input specification. The accuracy of any deterministic transport solver is impacted by such an approach and clearly it causes substantial errors in this work. The reasoning behind this decision is justified given the overall funding dedicated to the task (two months) and the real focus of the work: can modern deterministic tools actually treat complex facilities like the ATR with heterogeneous geometry modeling? SN2ND has been demonstrated to solve problems with upwards of one trillion degrees of freedom, which translates to tens of millions of finite elements, hundreds of angles, and hundreds of energy groups, resulting in a very high-fidelity model of the system unachievable by most deterministic transport codes today. A space-angle convergence study was conducted to determine the meshing and angular cubature requirements for the ATR, and also to demonstrate the feasibility of performing this analysis with a deterministic transport code capable of modeling heterogeneous geometries. The work performed indicates that a minimum of 260,000 linear finite elements combined with an L3T11 cubature (96 angles on the sphere) is required for both eigenvalue and flux convergence of the ATR. A critical finding was that the fuel meat and water channels must each be meshed with at least 3 'radial zones' for accurate flux convergence. A small number of 3D calculations were also performed to show axial mesh and eigenvalue convergence for a full core problem. Finally, a brief analysis was performed with different cross section sets generated from DRAGON and SCALE, and the findings show that more effort will be required to improve the multigroup cross section generation process. The total number of degrees of freedom for a converged 27 group, 2D ATR problem is ~340 million. This number increases to ~25 billion for a 3D ATR problem. This scoping study shows that both 2D and 3D calculations are well within the capabilities of the current SN2ND solver, given the availability of a large-scale computing center such as BlueGene/P. However, dynamics calculations are not realistic without the implementation of improvements in the solver.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haghighat, A.; Sjoden, G.E.; Wagner, J.C.

    In the past 10 yr, the Penn State Transport Theory Group (PSTTG) has concentrated its efforts on developing accurate and efficient particle transport codes to address increasing needs for efficient and accurate simulation of nuclear systems. The PSTTG's efforts have primarily focused on shielding applications that are generally treated using multigroup, multidimensional, discrete ordinates (Sn) deterministic and/or statistical Monte Carlo methods. The difficulty with the existing public codes is that they require significant (impractical) computation time for simulation of complex three-dimensional (3-D) problems. For the Sn codes, the large memory requirements are handled through the use of scratch files (i.e., read-from and write-to-disk) that significantly increases the necessary execution time. Further, the lack of flexible features and/or utilities for preparing input and processing output makes these codes difficult to use. The Monte Carlo method becomes impractical because variance reduction (VR) methods have to be used, and normally determination of the necessary parameters for the VR methods is very difficult and time consuming for a complex 3-D problem. For the deterministic method, the authors have developed the 3-D parallel PENTRAN (Parallel Environment Neutral-particle TRANsport) code system that, in addition to a parallel 3-D Sn solver, includes pre- and postprocessing utilities. PENTRAN provides for full phase-space decomposition, memory partitioning, and parallel input/output to provide the capability of solving large problems in a relatively short time. Besides having a modular parallel structure, PENTRAN has several unique new formulations and features that are necessary for achieving high parallel performance. For the Monte Carlo method, the major difficulty currently facing most users is the selection of an effective VR method and its associated parameters. For complex problems, generally, this process is very time consuming and may be complicated due to the possibility of biasing the results. In an attempt to eliminate this problem, the authors have developed the A³MCNP (automated adjoint accelerated MCNP) code that automatically prepares parameters for source and transport biasing within a weight-window VR approach based on the Sn adjoint function. A³MCNP prepares the necessary input files for performing multigroup, 3-D adjoint Sn calculations using TORT.
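
    The adjoint-based weight-window idea behind A³MCNP can be caricatured in a few lines: window centers are set inversely proportional to an importance (adjoint) function and normalized so source particles are born near unit weight. The 1-D adjoint profile below is synthetic, purely for illustration.

    ```python
    # Toy weight-window centers from a synthetic 1-D adjoint (importance)
    # function: deep, important regions get low windows, so particles
    # split as they gain importance.
    import numpy as np

    x = np.linspace(0.0, 30.0, 61)            # slab depth (mfp), source at x = 0
    adjoint = np.exp(0.3 * x)                 # importance grows toward detector

    k = adjoint[0]                            # normalize: weight ~ 1 at source
    ww_center = k / adjoint                   # weight-window centers
    ww_low, ww_high = ww_center / 2.0, ww_center * 2.0   # typical window span

    print("window centers at x = 0, 15, 30:",
          ww_center[0], ww_center[30], ww_center[60])
    ```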

  3. Estimation of electromagnetic dosimetric values from non-ionizing radiofrequency fields in an indoor commercial airplane environment.

    PubMed

    Aguirre, Erik; Arpón, Javier; Azpilicueta, Leire; López, Peio; de Miguel, Silvia; Ramos, Victoria; Falcone, Francisco

    2014-12-01

    In this article, the impact of topology as well as morphology of a complex indoor environment such as a commercial aircraft in the estimation of dosimetric assessment is presented. By means of an in-house developed deterministic 3D ray-launching code, estimation of electric field amplitude as a function of position for the complete volume of a commercial passenger airplane is obtained. Estimation of electromagnetic field exposure in this environment is challenging, due to the complexity and size of the scenario, as well as to the large metallic content, giving rise to strong multipath components. By performing the calculation with a deterministic technique, the complete scenario can be considered with an optimized balance between accuracy and computational cost. The proposed method can aid in the assessment of electromagnetic dosimetry in the future deployment of embarked wireless systems in commercial aircraft.

  4. Finite element modelling of woven composite failure modes at the mesoscopic scale: deterministic versus stochastic approaches

    NASA Astrophysics Data System (ADS)

    Roirand, Q.; Missoum-Benziane, D.; Thionnet, A.; Laiarinandrasana, L.

    2017-09-01

    Textile composites have a complex 3D architecture. To assess the durability of such engineering structures, the failure mechanisms must be highlighted. The degradation has been examined by means of tomography. The present work addresses a numerical damage model dedicated to the simulation of crack initiation and propagation at the scale of the warp yarns. For the 3D woven composites under study, loadings in tension and in combined tension and bending were considered. Based on an erosion procedure for broken elements, the failure mechanisms have been modelled on 3D periodic cells by finite element calculations. The breakage of one element was determined using a failure criterion at the mesoscopic scale based on the yarn stress at failure. The results were found to be in good agreement with the experimental data for the two kinds of macroscopic loadings. The deterministic approach assumed a homogeneously distributed stress at failure over all the integration points in the meshes of the woven composites. A stochastic approach was then applied to a simple representative elementary periodic cell: a Weibull distribution of the stress at failure was assigned to the integration points using Monte Carlo simulation, as sketched below. It was shown that this stochastic approach allows more realistic failure simulations, avoiding the idealised symmetry produced by the deterministic modelling. In particular, the stochastic simulations show variations of the stress and strain at failure and of the failure modes of the yarn.
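
    The stochastic step described above amounts to inverse-CDF Monte Carlo sampling of a Weibull law at every integration point. A minimal sketch, with assumed (illustrative) Weibull parameters:

    ```python
    # Draw a Weibull-distributed failure stress for every integration
    # point via inverse-CDF sampling; sigma0 and m are illustrative.
    import numpy as np

    rng = np.random.default_rng(42)
    sigma0, m = 2200.0, 8.0                  # scale (MPa) and Weibull modulus
    n_integration_points = 50_000

    u = rng.random(n_integration_points)
    sigma_f = sigma0 * (-np.log1p(-u)) ** (1.0 / m)   # inverse Weibull CDF

    # each integration point now carries its own failure threshold; an
    # element is eroded when its yarn stress exceeds sigma_f
    print(f"mean {sigma_f.mean():.0f} MPa, "
          f"5th percentile {np.percentile(sigma_f, 5):.0f} MPa")
    ```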

  5. Single Quantum Dot with Microlens and 3D-Printed Micro-objective as Integrated Bright Single-Photon Source

    PubMed Central

    2017-01-01

    Integrated single-photon sources with high photon-extraction efficiency are key building blocks for applications in the field of quantum communications. We report on a bright single-photon source realized by on-chip integration of a deterministic quantum dot microlens with a 3D-printed multilens micro-objective. The device concept benefits from a sophisticated combination of in situ 3D electron-beam lithography to realize the quantum dot microlens and 3D femtosecond direct laser writing for creation of the micro-objective. In this way, we obtain a high-quality quantum device with broadband photon-extraction efficiency of (40 ± 4)% and high suppression of multiphoton emission events with g(2)(τ = 0) < 0.02. Our results highlight the opportunities that arise from tailoring the optical properties of quantum emitters using integrated optics with high potential for the further development of plug-and-play fiber-coupled single-photon sources. PMID:28670600

  6. Magnetism in curved geometries

    NASA Astrophysics Data System (ADS)

    Streubel, Robert

    Deterministically bending and twisting two-dimensional structures in three-dimensional (3D) space provides a means to modify conventional functionalities or to launch novel ones by tailoring curvature and 3D shape. The recent developments of 3D curved magnetic geometries, ranging from theoretical predictions through fabrication to characterization using integral means as well as advanced magnetic tomography, will be reviewed. Theoretical works predict a curvature-induced effective anisotropy and an effective Dzyaloshinskii-Moriya interaction, resulting in a vast range of novel effects including magnetochiral effects (chirality symmetry breaking) and topologically induced magnetization patterning. The remarkable development of nanotechnology, e.g. preparation of high-quality extended thin films, nanowires and frameworks via chemical and physical deposition as well as 3D nano printing, has granted first insights into the fundamental properties of 3D shaped magnetic objects. Optimizing the magnetic and structural properties of these novel 3D architectures demands new investigation methods, particularly those based on vector tomographic imaging. Magnetic neutron tomography and electron-based 3D imaging, such as electron holography and vector field electron tomography, are well-established techniques to investigate macroscopic and nanoscopic samples, respectively. At the mesoscale, curved objects can be investigated using the novel method of magnetic X-ray tomography. In spite of the experimental challenges in addressing the appealing theoretical predictions of curvature-induced effects, these 3D magnetic architectures have already proven their application potential for life sciences, targeted delivery, realization of 3D spin-wave filters, and magneto-encephalography devices, to name just a few. DOE BES MSED (DE-AC02-05-CH11231).

  7. Precursor of transition to turbulence: spatiotemporal wave front.

    PubMed

    Bhaumik, S; Sengupta, T K

    2014-04-01

    To understand transition to turbulence via 3D disturbance growth, we report here results obtained from the solution of the Navier-Stokes equation (NSE), reproducing experimental results obtained by minimizing background disturbances and imposing deterministic excitation inside the shear layer. A similar approach was adopted in Sengupta and Bhaumik [Phys. Rev. Lett. 107, 154501 (2011)], where a route of transition from receptivity to the fully developed turbulent stage was explained for 2D flow in terms of the spatiotemporal wave front (STWF). The STWF was identified as the unit process of 2D turbulence creation for low-amplitude wall excitation. Theoretical prediction of the STWF for the boundary layer was established earlier in Sengupta, Rao, and Venkatasubbaiah [Phys. Rev. Lett. 96, 224504 (2006)] from the Orr-Sommerfeld equation as due to spatiotemporal instability. Here, the same unit process of the STWF during transition is shown to be present for the 3D disturbance field from the solution of the governing NSE.

  8. Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    The objective of this report is to summarize the deterministic and probabilistic structural evaluation results of two structures made with advanced ceramic matrix composites (CMC): an internally pressurized tube and a uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures that are made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on the material properties and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) predicted the failure load and the buckling load, (2) performed coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstrated that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.

  9. Small-angle scattering from 3D Sierpinski tetrahedron generated using chaos game

    NASA Astrophysics Data System (ADS)

    Slyamov, Azat

    2017-12-01

    We approximate a three-dimensional version of the deterministic Sierpinski gasket (SG), also known as the Sierpinski tetrahedron (ST), by using the chaos game representation (CGR). Structural properties of the fractal, generated by both the deterministic and CGR algorithms, are determined using the small-angle scattering (SAS) technique. We calculate the corresponding monodisperse structure factor of the ST using an optimized Debye formula. We show that scattering from the CGR of the ST recovers basic fractal properties, such as the fractal dimension, iteration number, scaling factor, overall size of the system and the number of units composing the fractal.
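
    The CGR construction itself is a few lines of code. A minimal sketch of the chaos game for the Sierpinski tetrahedron (vertex coordinates and point counts are illustrative); the resulting point cloud is what a Debye-formula evaluation of the structure factor would consume:

    ```python
    # Chaos game for the Sierpinski tetrahedron: repeatedly jump halfway
    # toward a randomly chosen vertex of a regular tetrahedron.
    import numpy as np

    rng = np.random.default_rng(1)
    V = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)

    n_points, burn_in = 100_000, 100
    p = np.zeros(3)
    pts = np.empty((n_points, 3))
    for i in range(n_points + burn_in):
        p = 0.5 * (p + V[rng.integers(4)])     # scaling factor 1/2
        if i >= burn_in:
            pts[i - burn_in] = p

    # similarity dimension: 4 copies at scale 1/2 -> ln 4 / ln 2 = 2
    print("expected fractal dimension:", np.log(4) / np.log(2))
    ```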

  10. Group Theoretical Route to Deterministic Weyl Points in Chiral Photonic Lattices.

    PubMed

    Saba, Matthias; Hamm, Joachim M; Baumberg, Jeremy J; Hess, Ortwin

    2017-12-01

    Topological phases derived from point degeneracies in photonic band structures show intriguing and unique behavior. Previously identified band degeneracies are based on accidental degeneracies and subject to engineering on a case-by-case basis. Here we show that deterministic pseudo Weyl points with nontrivial topology and hyperconic dispersion exist at the Brillouin zone center of chiral cubic symmetries. This conceivably allows realization of topologically protected frequency isolated surface bands in 3D and n=0 properties as demonstrated for a nanoplasmonic system and a photonic crystal.

  11. Group Theoretical Route to Deterministic Weyl Points in Chiral Photonic Lattices

    NASA Astrophysics Data System (ADS)

    Saba, Matthias; Hamm, Joachim M.; Baumberg, Jeremy J.; Hess, Ortwin

    2017-12-01

    Topological phases derived from point degeneracies in photonic band structures show intriguing and unique behavior. Previously identified band degeneracies are based on accidental degeneracies and subject to engineering on a case-by-case basis. Here we show that deterministic pseudo Weyl points with nontrivial topology and hyperconic dispersion exist at the Brillouin zone center of chiral cubic symmetries. This conceivably allows realization of topologically protected frequency isolated surface bands in 3D and n =0 properties as demonstrated for a nanoplasmonic system and a photonic crystal.

  12. Catching a quantum jump in mid-flight

    NASA Astrophysics Data System (ADS)

    Minev, Z. K.; Mundhada, S. O.; Zalys-Geller, E.; Shankar, S.; Rheinhold, P.; Frunzio, L.; Schoelkopf, R. J.; Mirrahimi, M.; Devoret, M. H.

    Quantum jumps provide a fundamental manifestation of the interplay between coherent dynamics and strong continuous measurements. Interestingly, the modern theoretical vantage point of quantum trajectories (Carmichael, 1993) suggests that the jump is not instantaneous, but rather smooth, coherent, and under the right conditions may present a deterministic character. We revisit the original observation of quantum jumps in a V-type, three-level atom (Berquist, 1986; Sauter, 1986), in order to ``deterministically'' catch the jump in mid-flight. We have designed and operated a V-type superconducting artificial atom with the 3 needed levels: G (for Ground), B (for Bright), and D (for Dark). The atom is coupled to a continuously monitored microwave mode that can distinguish B from the manifold formed by G and D, but without distinguishing G from D. We will present preliminary results showing how this experiment can be realized. Work supported by: ARO, ONR, AFOSR and YINQE. Discussions with H. Carmichael are gratefully acknowledged.

  13. Composing Data and Process Descriptions in the Design of Software Systems.

    DTIC Science & Technology

    1988-05-01

    [The record excerpt consists of OCR fragments from the report rather than an abstract.] The fragments show a CSP-style process accompanying a 'data' specification, e.g. the bank account of Section 2.2.3: a process ACC offering events open?d, payin?p, wdraw?w, bal!balance(A) and close → STOP, where A has abstract type Account with side-effect-free operators. List-of-figures fragments include "3.5 Non-deterministic merge" (p. 45), "4.1 Specification of a ticket machine system" and a contents entry ending "…n accounts" (p. 43).

  14. A Deep Penetration Problem Calculation Using AETIUS: An Easy Modeling Discrete Ordinates Transport Code UsIng Unstructured Tetrahedral Mesh, Shared Memory Parallel

    NASA Astrophysics Data System (ADS)

    KIM, Jong Woon; LEE, Young-Ouk

    2017-09-01

    As computing power improves, computer codes that use a deterministic method can seem less useful than those using the Monte Carlo method. In addition, users do not like to think about space, angle, and energy discretization for deterministic codes. However, a deterministic method is still powerful in that we can obtain the solution of the flux throughout the problem, particularly when particles can barely penetrate, such as in a deep penetration problem with small detection volumes. Recently, a new state-of-the-art discrete-ordinates code, ATTILA, was developed and has been widely used in several applications. ATTILA provides the capability to solve geometrically complex 3-D transport problems by using an unstructured tetrahedral mesh. Since 2009, we have been developing our own code by benchmarking against ATTILA. AETIUS is a discrete ordinates code that uses an unstructured tetrahedral mesh, like ATTILA. For pre- and post-processing, Gmsh is used to generate an unstructured tetrahedral mesh by importing a CAD file (*.step) and to visualize the calculation results of AETIUS. Using a CAD tool, the geometry can be modeled very easily. In this paper, we give a brief overview of AETIUS and provide numerical results from both AETIUS and a Monte Carlo code, MCNP5, for a deep penetration problem with small detection volumes. The results demonstrate the effectiveness and efficiency of AETIUS for such calculations.

  15. An ITK framework for deterministic global optimization for medical image registration

    NASA Astrophysics Data System (ADS)

    Dru, Florence; Wachowiak, Mark P.; Peters, Terry M.

    2006-03-01

    Similarity metric optimization is an essential step in intensity-based rigid and nonrigid medical image registration. For clinical applications, such as image guidance of minimally invasive procedures, registration accuracy and efficiency are prime considerations. In addition, clinical utility is enhanced when registration is integrated into image analysis and visualization frameworks, such as the popular Insight Toolkit (ITK). ITK is an open source software environment increasingly used to aid the development, testing, and integration of new imaging algorithms. In this paper, we present a new ITK-based implementation of the DIRECT (Dividing Rectangles) deterministic global optimization algorithm for medical image registration. Previously, it has been shown that DIRECT improves the capture range and accuracy for rigid registration. Our ITK class also contains enhancements over the original DIRECT algorithm by improving stopping criteria, adaptively adjusting a locality parameter, and by incorporating Powell's method for local refinement. 3D-3D registration experiments with ground-truth brain volumes and clinical cardiac volumes show that combining DIRECT with Powell's method improves registration accuracy over Powell's method used alone, is less sensitive to initial misorientation errors, and, with the new stopping criteria, facilitates adequate exploration of the search space without expending expensive iterations on non-improving function evaluations. Finally, in this framework, a new parallel implementation for computing mutual information is presented, resulting in near-linear speedup with two processors.
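
    The two-stage strategy (deterministic global DIRECT search followed by Powell refinement) can be sketched with SciPy's implementations of both methods. Note this is not the authors' ITK class, and the multimodal test function below merely stands in for a negated similarity metric.

    ```python
    # DIRECT global search, then Powell local polish, on a Rastrigin-like
    # surrogate for an image-registration cost function.
    import numpy as np
    from scipy.optimize import direct, minimize

    def neg_similarity(x):               # surrogate for -MutualInformation(x)
        return np.sum(x**2) + 3.0 * np.sum(1.0 - np.cos(2.0 * np.pi * x))

    bounds = [(-5.12, 5.12)] * 3         # e.g. (tx, ty, rz) search ranges

    coarse = direct(neg_similarity, bounds, maxfun=2000)        # deterministic global
    fine = minimize(neg_similarity, coarse.x, method="Powell")  # local refinement

    print("DIRECT result :", coarse.x, coarse.fun)
    print("after Powell  :", fine.x, fine.fun)
    ```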

  16. Digital 3D holographic display using scattering layers for enhanced viewing angle and image size

    NASA Astrophysics Data System (ADS)

    Yu, Hyeonseung; Lee, KyeoReh; Park, Jongchan; Park, YongKeun

    2017-05-01

    In digital 3D holographic displays, the generation of realistic 3D images has been hindered by limited viewing angle and image size. Here we demonstrate a digital 3D holographic display using volume speckle fields produced by scattering layers in which both the viewing angle and the image size are greatly enhanced. Although volume speckle fields exhibit random distributions, the transmitted speckle fields have a linear and deterministic relationship with the input field. By modulating the incident wavefront with a digital micro-mirror device, volume speckle patterns are controlled to generate 3D images of micrometer-size optical foci with 35° viewing angle in a volume of 2 cm × 2 cm × 2 cm.
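
    The linearity that makes this work can be demonstrated numerically: once a transmission matrix is known, a phase-conjugate input concentrates the transmitted speckle onto a chosen output pixel. A hedged toy follows, with a random Gaussian matrix standing in for the scattering layer.

    ```python
    # Focusing through a scattering medium via transmission-matrix
    # phase conjugation (toy model, random complex Gaussian matrix).
    import numpy as np

    rng = np.random.default_rng(7)
    n_in, n_out = 256, 4096
    T = (rng.standard_normal((n_out, n_in))
         + 1j * rng.standard_normal((n_out, n_in))) / np.sqrt(2 * n_in)

    target = 1234                                            # pixel to focus on
    e_in = np.conj(T[target]) / np.linalg.norm(T[target])    # phase conjugation
    I = np.abs(T @ e_in)**2

    enhancement = I[target] / np.mean(np.delete(I, target))
    print(f"focus enhancement ~ {enhancement:.0f} "
          f"(theory ~ n_in = {n_in} for full field conjugation)")
    ```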

  17. SU-F-T-347: An Absolute Dose-Volume Constraint Based Deterministic Optimization Framework for Multi-Co60 Source Focused Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liang, B; Liu, B; Li, Y

    2016-06-15

    Purpose: Treatment plan optimization in multi-Co60 source focused radiotherapy with multiple isocenters is challenging, because the dose distribution is normalized to the maximum dose during optimization and evaluation, and objective functions are traditionally defined on the relative dosimetric distribution. This study presents an alternative absolute dose-volume constraint (ADC) based deterministic optimization framework (ADC-DOF). Methods: The initial isocenters are placed on the eroded target surface. The collimator size is chosen based on the area of the 2D contour on the corresponding axial slice. The isocenter spacing is determined by adjacent collimator sizes. The weights are optimized by minimizing the deviation from the ADCs using the steepest descent technique. An iterative procedure is developed to reduce the number of isocenters, where the isocenter with the lowest weight is removed without affecting plan quality. The ADC-DOF is compared with the genetic algorithm (GA) using the same arbitrarily shaped target (254 cc), with a 15 mm margin ring structure representing normal tissues. Results: For ADC-DOF, the ADCs imposed on the target and ring are (D100 > 10 Gy; D50 < 12 Gy, D10 < 15 Gy, D0 < 20 Gy) and (D40 < 10 Gy). The resulting D100, D50, D10, D0 and D40 are (9.9 Gy, 12.0 Gy, 14.1 Gy, 16.2 Gy) and 10.2 Gy. The objectives of the GA are to maximize the 50% isodose target coverage (TC) while minimizing the dose delivered to the ring structure, which results in 97% TC and a 47.2% average dose in the ring structure. For the ADC-DOF (GA) technique, 20 out of 38 (10 out of 12) initial isocenters are used in the final plan, and the computation time is 8.7 s (412.2 s) on an i5 computer. Conclusion: We have developed a new optimization technique using ADCs and deterministic optimization. Compared with the GA, ADC-DOF uses more isocenters but is faster and more robust, and achieves better conformity. For future work, we will focus on developing a more effective mechanism for initial isocenter determination.
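
    A toy version of the weight-optimization step (steepest descent on squared violations of absolute dose constraints, followed by identifying the lowest-weight isocenter) might look as follows. The dose-deposition matrix here is random, whereas in the abstract it would come from the multi-Co60 beam model.

    ```python
    # Projected steepest descent on absolute dose-constraint violations
    # for isocenter weights (illustrative data and constraint values).
    import numpy as np

    rng = np.random.default_rng(3)
    n_vox, n_iso = 500, 20
    D = rng.random((n_vox, n_iso))           # dose per unit weight (toy)
    d_min, d_max = 10.0, 20.0                # Gy: target D100 > 10, D0 < 20

    w = np.full(n_iso, 0.5)
    for _ in range(2000):
        dose = D @ w
        g = (D.T @ np.minimum(dose - d_min, 0.0)      # underdose gradient
             + D.T @ np.maximum(dose - d_max, 0.0))   # overdose gradient
        w = np.maximum(w - 1e-4 * g, 0.0)    # keep weights non-negative

    dose = D @ w
    weakest = int(np.argmin(w))              # candidate isocenter to remove
    print(f"dose in [{dose.min():.1f}, {dose.max():.1f}] Gy; "
          f"weakest isocenter: {weakest}")
    ```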

  18. Continuous data assimilation for the three-dimensional Brinkman-Forchheimer-extended Darcy model

    NASA Astrophysics Data System (ADS)

    Markowich, Peter A.; Titi, Edriss S.; Trabelsi, Saber

    2016-04-01

    In this paper we introduce and analyze an algorithm for continuous data assimilation for a three-dimensional Brinkman-Forchheimer-extended Darcy (3D BFeD) model of porous media. This model is believed to be accurate when the flow velocity is too large for Darcy’s law to be valid, and additionally the porosity is not too small. The algorithm is inspired by ideas developed for designing finite-parameters feedback control for dissipative systems. It aims to obtain improved estimates of the state of the physical system by incorporating deterministic or noisy measurements and observations. Specifically, the algorithm involves a feedback control that nudges the large scales of the approximate solution toward those of the reference solution associated with the spatial measurements. In the first part of the paper, we present a few results of existence and uniqueness of weak and strong solutions of the 3D BFeD system. The second part is devoted to the convergence analysis of the data assimilation algorithm.
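
    The nudging idea can be sketched on a small surrogate system (Lorenz-63 rather than the 3D BFeD model): a feedback term proportional to the mismatch in the observed component steers the assimilated state toward the reference trajectory despite a wrong initial condition. The coupling constant and observation pattern below are illustrative.

      import numpy as np

      def lorenz(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          x, y, z = u
          return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

      dt, mu = 0.002, 20.0
      truth = np.array([1.0, 1.0, 1.0])
      assim = np.array([8.0, -3.0, 25.0])           # wrong initial condition

      for step in range(50000):
          truth = truth + dt * lorenz(truth)
          # Nudge only the observed component (x), mimicking coarse measurements.
          feedback = np.array([mu * (assim[0] - truth[0]), 0.0, 0.0])
          assim = assim + dt * (lorenz(assim) - feedback)

      print(np.abs(assim - truth))                  # error shrinks despite bad init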

  19. Probabilistic Neighborhood-Based Data Collection Algorithms for 3D Underwater Acoustic Sensor Networks.

    PubMed

    Han, Guangjie; Li, Shanshan; Zhu, Chunsheng; Jiang, Jinfang; Zhang, Wenbo

    2017-02-08

    Marine environmental monitoring provides crucial information and support for the exploitation, utilization, and protection of marine resources. With the rapid development of information technology, the development of three-dimensional underwater acoustic sensor networks (3D UASNs) provides a novel strategy to acquire marine environment information conveniently, efficiently and accurately. However, the specific propagation effects of the acoustic communication channel lead to a successful information delivery probability that decreases with distance. Therefore, we investigate two probabilistic neighborhood-based data collection algorithms for 3D UASNs which are based on a probabilistic acoustic communication model instead of the traditional deterministic acoustic communication model. An autonomous underwater vehicle (AUV) is employed to traverse along the designed path to collect data from neighborhoods. For 3D UASNs without prior deployment knowledge, partitioning the network into grids allows the AUV to visit the central location of each grid for data collection. For 3D UASNs in which the deployment knowledge is known in advance, the AUV only needs to visit several selected locations, determined by constructing a minimum probabilistic neighborhood covering set, to reduce data latency. Furthermore, by increasing the number of transmission rounds, our proposed algorithms provide a tradeoff between data collection latency and information gain. These algorithms are compared with a basic nearest-neighbor heuristic algorithm via simulations. Simulation analyses show that our proposed algorithms can efficiently reduce the average data collection completion time, corresponding to a decrease of data latency.
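
    A back-of-the-envelope sketch of the probabilistic communication idea: delivery probability decays with distance, and extra transmission rounds trade latency for information gain. The exponential channel model, its constant, and the grid geometry are assumptions of this illustration, not the paper's acoustic model.

      import numpy as np

      def delivery_prob(d, alpha=0.02):
          return np.exp(-alpha * d)          # assumed acoustic-channel model, d in m

      def prob_after_rounds(d, k):
          return 1.0 - (1.0 - delivery_prob(d)) ** k   # at least one success in k

      # AUV at a grid centre collecting from sensors scattered in a 200 m cell:
      rng = np.random.default_rng(2)
      dists = rng.uniform(0, 100 * np.sqrt(3), size=50)  # up to the cell diagonal
      for k in (1, 2, 4):
          print(k, prob_after_rounds(dists, k).mean())   # info gain vs. latency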

  20. 3D J-Integral Capability in Grizzly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Benjamin; Backman, Marie; Chakraborty, Pritam

    2014-09-01

    This report summarizes work done to develop a capability to evaluate fracture contour J-Integrals in 3D in the Grizzly code. In the current fiscal year, a previously-developed 2D implementation of a J-Integral evaluation capability has been extended to work in 3D, and to include terms due both to mechanically-induced strains and to gradients in thermal strains. This capability has been verified against a benchmark solution on a model of a curved crack front in 3D. The thermal term in this integral has been verified against a benchmark problem with a thermal gradient. These developments are part of a larger effort to develop Grizzly as a tool that can be used to predict the evolution of aging processes in nuclear power plant systems, structures, and components, and to assess their capacity after being subjected to those aging processes. The capabilities described here have been developed to enable evaluations of Mode-I stress intensity factors on axis-aligned flaws in reactor pressure vessels. These can be compared with the fracture toughness of the material to determine whether a pre-existing flaw would begin to propagate during a postulated pressurized thermal shock accident. This report includes a demonstration calculation to show how Grizzly is used to perform a deterministic assessment of such a flaw propagation in a degraded reactor pressure vessel under pressurized thermal shock conditions. The stress intensity is calculated from J, and the toughness is computed using the fracture master curve and the degraded ductile-to-brittle transition temperature.
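
    The downstream assessment step can be sketched in a few lines: convert J to a Mode-I stress intensity factor under plane strain and compare it with a master-curve toughness (the ASTM E1921 median form). The material constants and temperatures below are illustrative, not Grizzly inputs.

      import math

      def k_from_j(j_integral, youngs=200e9, poisson=0.3):
          """K_I [Pa*sqrt(m)] from J [J/m^2] under plane-strain conditions."""
          return math.sqrt(j_integral * youngs / (1.0 - poisson**2))

      def master_curve_median(temp_c, t0_c):
          """Median fracture toughness [MPa*sqrt(m)], ASTM E1921 median curve."""
          return 30.0 + 70.0 * math.exp(0.019 * (temp_c - t0_c))

      k_applied = k_from_j(20e3) / 1e6      # J = 20 kJ/m^2 -> K in MPa*sqrt(m)
      k_allow = master_curve_median(temp_c=50.0, t0_c=60.0)  # embrittled T0
      print(k_applied, k_allow, k_applied < k_allow)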

  1. Real-time logic modelling on SpaceWire

    NASA Astrophysics Data System (ADS)

    Zhou, Qiang; Ma, Yunpeng; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. However, it cannot meet the determinism requirements of safety- or time-critical applications in spacecraft, where the delay of real-time (RT) message streams must be guaranteed. SpaceWire-D was therefore developed to provide deterministic delivery over a SpaceWire network. Formal analysis and verification of real-time systems is critical to their development and safe implementation, and is a prerequisite for obtaining safety certification; failure to meet specified timing constraints such as deadlines in hard real-time systems may lead to catastrophic results. In this paper, a formal verification method, Real-Time Logic (RTL), is used to specify and verify the timing properties of a SpaceWire-D network. Based on the principles of the SpaceWire-D protocol, we first analyze the timing properties of the fundamental transactions, such as RMAP WRITE and RMAP READ. The RMAP WRITE transaction structure is then modeled in RTL and Presburger Arithmetic representations, and the associated constraint graph and safety analysis are provided. Finally, we suggest that the RTL method can be useful for protocol evaluation and for providing recommendations for further protocol evolution.
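
    A toy sketch of the flavour of such timing verification: express worst-case segment latencies of an RMAP WRITE transaction as inequalities and check the end-to-end deadline against a deterministic time slot. All figures and segment names are placeholders, not SpaceWire-D numbers.

      # Placeholder worst-case latencies for one RMAP WRITE transaction [us]:
      segments_us = {
          "initiator_issue": 4.0,     # command leaves the initiator
          "network_forward": 12.0,    # worst-case routing across the network
          "target_execute": 8.0,      # target memory write + reply generation
          "network_return": 12.0,     # reply routed back
      }
      time_slot_us = 40.0             # deterministic time slot allocated to the flow

      worst_case = sum(segments_us.values())
      assert worst_case <= time_slot_us, "deadline violated: transaction overruns slot"
      print(f"worst case {worst_case} us fits the {time_slot_us} us slot")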

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, Justin; Slaughter, Andrew; Veeraraghavan, Swetha

    Multi-hazard Analysis for STOchastic time-DOmaiN phenomena (MASTODON) is a finite element application that aims at analyzing the response of 3-D soil-structure systems to natural and man-made hazards such as earthquakes, floods and fire. MASTODON currently focuses on the simulation of seismic events and has the capability to perform extensive ‘source-to-site’ simulations including earthquake fault rupture, nonlinear wave propagation and nonlinear soil-structure interaction (NLSSI) analysis. MASTODON is being developed to be a dynamic probabilistic risk assessment framework that enables analysts to not only perform deterministic analyses, but also easily perform probabilistic or stochastic simulations for the purpose of risk assessment.

  3. Analytical results for the statistical distribution related to a memoryless deterministic walk: dimensionality effect and mean-field models.

    PubMed

    Terçariol, César Augusto Sangaletti; Martinez, Alexandre Souto

    2005-08-01

    Consider a medium characterized by N points whose coordinates are randomly generated by a uniform distribution along the edges of a unitary d-dimensional hypercube. A walker leaves from each point of this disordered medium and moves according to the deterministic rule of going to the nearest point which has not been visited in the preceding $\mu$ steps (deterministic tourist walk). Each trajectory generated by this dynamics has an initial nonperiodic part of $t$ steps (transient) and a final periodic part of $p$ steps (attractor). The neighborhood rank probabilities are parametrized by the normalized incomplete beta function $I_d = I_{1/4}[1/2, (d+1)/2]$. The joint distribution $S^{(N)}_{\mu,d}(t,p)$ is relevant, and the marginal distributions previously studied are particular cases. We show that, for the memoryless deterministic tourist walk in Euclidean space, this distribution is $S^{(\infty)}_{1,d}(t,p) = \Gamma(1+I_d^{-1})\,(t+I_d^{-1})/\Gamma(t+p+I_d^{-1})\,\delta_{p,2}$, where $t = 0, 1, 2, \ldots$, $\Gamma(z)$ is the gamma function and $\delta_{i,j}$ is the Kronecker delta. The mean-field models are the random link models, which correspond to $d \to \infty$, and the random map model which, even for $\mu = 0$, presents a nontrivial cycle distribution [$S^{(N)}_{0,\mathrm{rm}}(p) \propto p^{-1}$]: $S^{(N)}_{0,\mathrm{rm}}(t,p) = \Gamma(N)/\{\Gamma[N+1-(t+p)]\,N^{t+p}\}$. The fundamental quantities are the number of explored points $n_e = t+p$ and $I_d$. Although the obtained distributions are simple, they do not follow straightforwardly, and they have been validated by numerical experiments.
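
    A numerical check in the spirit of the paper's validation: simulate the deterministic tourist walk with memory $\mu$ on uniform random points and record the transient length t and cycle period p of one trajectory. The cycle-detection scheme (hashing the sliding window of forbidden sites) is an implementation choice of this sketch.

      import numpy as np

      def tourist_walk(points, start, mu=1):
          path = [start]
          seen = {}                                  # state -> step index first seen
          while True:
              window = tuple(path[-max(mu, 1):])     # sites forbidden this step
              if window in seen:                     # deterministic state repeats:
                  t = seen[window]                   # transient length,
                  return t, len(path) - 1 - t        # attractor period
              seen[window] = len(path) - 1
              d = np.linalg.norm(points - points[path[-1]], axis=1)
              d[list(window)] = np.inf               # exclude recently visited sites
              path.append(int(np.argmin(d)))

      rng = np.random.default_rng(3)
      pts = rng.random((1000, 2))                    # N = 1000 points, d = 2
      print(tourist_walk(pts, start=0, mu=1))        # mu = 1: period p is always 2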

  4. Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model

    NASA Astrophysics Data System (ADS)

    Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon

    Digital watermarking has been widely used to protect digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the most prominent peak of the magnitude spectrum of each non-overlapping frame, following the peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression, and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.
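
    A simplified sketch of peak-based embedding: per frame, scale the magnitude of the most prominent FFT peak up or down to encode one watermark bit. This is a cartoon of the idea, not the authors' SMS implementation; frame size and embedding strength are arbitrary choices.

      import numpy as np

      def embed(signal, bits, frame=1024, delta=0.05):
          out = signal.astype(float).copy()
          for i, bit in enumerate(bits):
              seg = out[i * frame:(i + 1) * frame]
              spec = np.fft.rfft(seg)
              k = int(np.argmax(np.abs(spec[1:]))) + 1   # most prominent peak (skip DC)
              spec[k] *= (1 + delta) if bit else (1 - delta)
              out[i * frame:(i + 1) * frame] = np.fft.irfft(spec, n=len(seg))
          return out

      rng = np.random.default_rng(4)
      audio = rng.normal(size=8192) + np.sin(2 * np.pi * 440 * np.arange(8192) / 44100)
      marked = embed(audio, bits=[1, 0, 1, 1, 0, 0, 1, 0])
      print(np.max(np.abs(marked - audio)))              # perturbation stays small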

  5. Health risk assessment of inorganic arsenic intake of Ronphibun residents via duplicate diet study.

    PubMed

    Saipan, Piyawat; Ruangwises, Suthep

    2009-06-01

    To assess the health risk from exposure to inorganic arsenic in Ronphibun residents via the duplicate portion sampling method. One hundred and forty samples (140 subject-days) were collected from participants in Ronphibun sub-district. Inorganic arsenic in the duplicate diet samples was determined by acid digestion and hydride generation-atomic absorption spectrometry. Deterministic risk assessment is referenced throughout the present paper using United States Environmental Protection Agency (U.S. EPA) guidelines. The average daily dose and lifetime average daily dose of inorganic arsenic via duplicate diet were 0.0021 mg/kg/d and 0.00084 mg/kg/d, respectively. The risk estimates were a hazard quotient of 6.98 and a cancer risk of $1.26 \times 10^{-3}$. Both the hazard quotient and the cancer risk from exposure to inorganic arsenic in duplicate diets exceeded the safety levels for the hazard quotient (1) and cancer risk ($1 \times 10^{-4}$).
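
    The reported figures can be reproduced with the standard U.S. EPA IRIS values for inorganic arsenic (oral reference dose 3e-4 mg/kg/d, oral slope factor 1.5 per mg/kg/d); treat these constants as assumptions of this sketch rather than values stated in the abstract.

      add = 0.0021        # average daily dose, mg/kg/d (from the study)
      ladd = 0.00084      # lifetime average daily dose, mg/kg/d
      rfd, csf = 3e-4, 1.5

      hazard_quotient = add / rfd          # 7.0, matching the reported 6.98
      cancer_risk = ladd * csf             # 1.26e-3, matching the reported value
      print(hazard_quotient, cancer_risk)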

  6. Deterministically patterned biomimetic human iPSC-derived hepatic model via rapid 3D bioprinting

    PubMed Central

    Ma, Xuanyi; Qu, Xin; Zhu, Wei; Li, Yi-Shuan; Yuan, Suli; Zhang, Hong; Liu, Justin; Wang, Pengrui; Lai, Cheuk Sun Edwin; Zanella, Fabian; Feng, Gen-Sheng; Sheikh, Farah; Chien, Shu; Chen, Shaochen

    2016-01-01

    The functional maturation and preservation of hepatic cells derived from human induced pluripotent stem cells (hiPSCs) are essential to personalized in vitro drug screening and disease study. Major liver functions are tightly linked to the 3D assembly of hepatocytes, with the supporting cell types from both endodermal and mesodermal origins in a hexagonal lobule unit. Although there are many reports on functional 2D cell differentiation, few studies have demonstrated the in vitro maturation of hiPSC-derived hepatic progenitor cells (hiPSC-HPCs) in a 3D environment that depicts the physiologically relevant cell combination and microarchitecture. The application of rapid, digital 3D bioprinting to tissue engineering has allowed 3D patterning of multiple cell types in a predefined biomimetic manner. Here we present a 3D hydrogel-based triculture model that embeds hiPSC-HPCs with human umbilical vein endothelial cells and adipose-derived stem cells in a microscale hexagonal architecture. In comparison with 2D monolayer culture and a 3D HPC-only model, our 3D triculture model shows both phenotypic and functional enhancements in the hiPSC-HPCs over weeks of in vitro culture. Specifically, we find improved morphological organization, higher liver-specific gene expression levels, increased metabolic product secretion, and enhanced cytochrome P450 induction. The application of bioprinting technology in tissue engineering enables the development of a 3D biomimetic liver model that recapitulates the native liver module architecture and could be used for various applications such as early drug screening and disease modeling. PMID:26858399

  7. Deterministically patterned biomimetic human iPSC-derived hepatic model via rapid 3D bioprinting.

    PubMed

    Ma, Xuanyi; Qu, Xin; Zhu, Wei; Li, Yi-Shuan; Yuan, Suli; Zhang, Hong; Liu, Justin; Wang, Pengrui; Lai, Cheuk Sun Edwin; Zanella, Fabian; Feng, Gen-Sheng; Sheikh, Farah; Chien, Shu; Chen, Shaochen

    2016-02-23

    The functional maturation and preservation of hepatic cells derived from human induced pluripotent stem cells (hiPSCs) are essential to personalized in vitro drug screening and disease study. Major liver functions are tightly linked to the 3D assembly of hepatocytes, with the supporting cell types from both endodermal and mesodermal origins in a hexagonal lobule unit. Although there are many reports on functional 2D cell differentiation, few studies have demonstrated the in vitro maturation of hiPSC-derived hepatic progenitor cells (hiPSC-HPCs) in a 3D environment that depicts the physiologically relevant cell combination and microarchitecture. The application of rapid, digital 3D bioprinting to tissue engineering has allowed 3D patterning of multiple cell types in a predefined biomimetic manner. Here we present a 3D hydrogel-based triculture model that embeds hiPSC-HPCs with human umbilical vein endothelial cells and adipose-derived stem cells in a microscale hexagonal architecture. In comparison with 2D monolayer culture and a 3D HPC-only model, our 3D triculture model shows both phenotypic and functional enhancements in the hiPSC-HPCs over weeks of in vitro culture. Specifically, we find improved morphological organization, higher liver-specific gene expression levels, increased metabolic product secretion, and enhanced cytochrome P450 induction. The application of bioprinting technology in tissue engineering enables the development of a 3D biomimetic liver model that recapitulates the native liver module architecture and could be used for various applications such as early drug screening and disease modeling.

  8. Automatic mesh adaptivity for hybrid Monte Carlo/deterministic neutronics modeling of difficult shielding problems

    DOE PAGES

    Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; ...

    2015-06-30

    The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and the memory required for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class supercomputer.
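
    A hedged sketch of the CADIS bookkeeping: weight-window targets derived from an adjoint ("importance") flux, then coarsened onto a smaller mesh to cut memory. The mesh, response estimate R, and the block-minimum coarsening rule are illustrative simplifications, not the paper's algorithms.

      import numpy as np

      rng = np.random.default_rng(5)
      phi_adj = rng.random((64, 64, 64)) + 1e-3     # adjoint flux on a fine mesh
      R = 1.0                                        # detector response estimate

      ww_center = R / phi_adj                        # CADIS weight-window centres

      def coarsen(ww, factor=2):
          """Collapse the weight-window mesh by taking block minima, so the
          coarse window is conservative for every fine cell it covers."""
          n = ww.shape[0] // factor
          blocks = ww.reshape(n, factor, n, factor, n, factor)
          return blocks.min(axis=(1, 3, 5))

      coarse = coarsen(ww_center)
      print(ww_center.nbytes, coarse.nbytes)         # 8x fewer elements per level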

  9. Efficient room-temperature source of polarized single photons

    DOEpatents

    Lukishova, Svetlana G.; Boyd, Robert W.; Stroud, Carlos R.

    2007-08-07

    An efficient technique for producing deterministically polarized single photons uses liquid-crystal hosts of either monomeric or oligomeric/polymeric form to preferentially align single emitters for maximum excitation efficiency. Deterministic molecular alignment also provides deterministically polarized output photons. Planar-aligned cholesteric liquid crystal hosts serve as 1-D photonic-band-gap microcavities, tunable to the emitter fluorescence band, to increase source efficiency, and liquid crystal technology is used to prevent emitter bleaching. Emitters comprise soluble dyes, inorganic nanocrystals or trivalent rare-earth chelates.

  10. The DANTE Boltzmann transport solver: An unstructured mesh, 3-D, spherical harmonics algorithm compatible with parallel computer architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGhee, J.M.; Roberts, R.M.; Morel, J.E.

    1997-06-01

    A spherical harmonics research code (DANTE) has been developed which is compatible with parallel computer architectures. DANTE provides 3-D, multi-material, deterministic transport capabilities using an arbitrary finite element mesh. The linearized Boltzmann transport equation is solved in a second-order self-adjoint form utilizing a Galerkin finite element spatial differencing scheme. The core solver utilizes a preconditioned conjugate gradient algorithm. Other distinguishing features of the code include options for discrete-ordinates and simplified spherical harmonics angular differencing, an exact Marshak boundary treatment for arbitrarily oriented boundary faces, in-line matrix construction techniques to minimize memory consumption, and an effective diffusion-based preconditioner for scattering-dominated problems. Algorithm efficiency is demonstrated for a massively parallel SIMD architecture (CM-5), and compatibility with MPP multiprocessor platforms or workstation clusters is anticipated.

  11. Balloon Ascent: 3-D Simulation Tool for the Ascent and Float of High-Altitude Balloons

    NASA Technical Reports Server (NTRS)

    Farley, Rodger E.

    2005-01-01

    The BalloonAscent balloon flight simulation code represents a from-scratch development using Visual Basic 5 as the software platform. The simulation code is a transient analysis of balloon flight, predicting the skin and gas temperatures along with the 3-D position and velocity in a time- and spatially-varying environment. There are manual and automated controls for gas valving and the dropping of ballast. There are also many handy calculators, such as one for appropriate free lift, and steady-state thermal solutions with temperature gradients. The strength of this simulation model over past models is that the infrared environment is deterministic rather than guessed at. The ground temperature is specified along with the emissivity, which creates a ground-level IR environment that is then partially absorbed as it travels upward through the atmosphere to the altitude of the balloon.
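
    A minimal vertical-ascent sketch in the spirit of such a simulation: buoyancy minus weight and drag, integrated through an exponential atmosphere. The constants, the fully-inflated (gas density tracks ambient) assumption, and the omission of the thermal model are illustrative simplifications, not BalloonAscent's physics.

      import math

      g, rho0, H = 9.81, 1.225, 8500.0      # gravity, sea-level density, scale height
      m_sys, vol = 1500.0, 2500.0           # system mass [kg], balloon volume [m^3]
      rho_gas, cd, area = 0.17, 0.5, 300.0  # helium density, drag coeff, cross-section

      z, v, dt = 0.0, 0.0, 0.5
      for _ in range(20000):
          rho_air = rho0 * math.exp(-z / H)
          rho_gas_z = rho_gas * math.exp(-z / H)   # fully inflated: tracks ambient
          lift = (rho_air - rho_gas_z) * vol * g
          drag = -0.5 * rho_air * cd * area * v * abs(v)
          a = (lift - m_sys * g + drag) / m_sys
          v += a * dt
          z += v * dt
      print(z, v)    # settles near the float altitude where net lift vanishes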

  12. Probabilistic Neighborhood-Based Data Collection Algorithms for 3D Underwater Acoustic Sensor Networks

    PubMed Central

    Han, Guangjie; Li, Shanshan; Zhu, Chunsheng; Jiang, Jinfang; Zhang, Wenbo

    2017-01-01

    Marine environmental monitoring provides crucial information and support for the exploitation, utilization, and protection of marine resources. With the rapid development of information technology, the development of three-dimensional underwater acoustic sensor networks (3D UASNs) provides a novel strategy to acquire marine environment information conveniently, efficiently and accurately. However, the specific propagation effects of the acoustic communication channel lead to a successful information delivery probability that decreases with distance. Therefore, we investigate two probabilistic neighborhood-based data collection algorithms for 3D UASNs which are based on a probabilistic acoustic communication model instead of the traditional deterministic acoustic communication model. An autonomous underwater vehicle (AUV) is employed to traverse along the designed path to collect data from neighborhoods. For 3D UASNs without prior deployment knowledge, partitioning the network into grids allows the AUV to visit the central location of each grid for data collection. For 3D UASNs in which the deployment knowledge is known in advance, the AUV only needs to visit several selected locations, determined by constructing a minimum probabilistic neighborhood covering set, to reduce data latency. Furthermore, by increasing the number of transmission rounds, our proposed algorithms provide a tradeoff between data collection latency and information gain. These algorithms are compared with a basic nearest-neighbor heuristic algorithm via simulations. Simulation analyses show that our proposed algorithms can efficiently reduce the average data collection completion time, corresponding to a decrease of data latency. PMID:28208735

  13. 3D architecture modeling of reservoir compartments in a Shingled Turbidite Reservoir using high-resolution seismic data and sparse well control, example from Mars "Pink" reservoir, Mississippi Canyon Area, Gulf of Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapin, M.A.; Mahaffie, M.J.; Tiller, G.M.

    1996-12-31

    Economics of most deep-water development projects require large reservoir volumes to be drained with relatively few wells. The presence of reservoir compartments must therefore be detected and planned for in a pre-development stage. We have used 3-D seismic data to constrain large-scale, deterministic reservoir bodies in a 3-D architecture model of Pliocene-turbidite sands of the "E" or "Pink" reservoir, Prospect Mars, Mississippi Canyon Areas 763 and 807, Gulf of Mexico. Reservoir compartmentalization is influenced by stratigraphic shingling, which in turn is caused by low accommodation space present in the upper portion of a ponded seismic sequence within a salt withdrawal mini-basin. The accumulation is limited by updip onlap onto a condensed section marl and by lateral truncation by a large-scale submarine erosion surface. Compartments were suggested by RFT pressure variations and by geochemical analysis of RFT fluid samples. A geological interpretation derived from high-resolution 3-D seismic data and three wells was linked to 3-D architecture models through seismic inversion, resulting in a reservoir model that honors all available data. Distinguishing subtle stratigraphic shingles from faults was accomplished by detailed, loop-level mapping and was important for characterizing the different types of reservoir compartments. Seismic inversion was used to detune the seismic amplitude, adjust sandbody thickness, and update the rock properties. Recent development wells confirm the architectural style identified. This modeling project illustrates how high-quality seismic data and architecture models can be combined in the pre-development phase of a prospect in order to optimize well placement.

  14. 3D architecture modeling of reservoir compartments in a Shingled Turbidite Reservoir using high-resolution seismic data and sparse well control, example from Mars "Pink" reservoir, Mississippi Canyon Area, Gulf of Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapin, M.A.; Mahaffie, M.J.; Tiller, G.M.

    1996-01-01

    Economics of most deep-water development projects require large reservoir volumes to be drained with relatively few wells. The presence of reservoir compartments must therefore be detected and planned for in a pre-development stage. We have used 3-D seismic data to constrain large-scale, deterministic reservoir bodies in a 3-D architecture model of Pliocene-turbidite sands of the "E" or "Pink" reservoir, Prospect Mars, Mississippi Canyon Areas 763 and 807, Gulf of Mexico. Reservoir compartmentalization is influenced by stratigraphic shingling, which in turn is caused by low accommodation space present in the upper portion of a ponded seismic sequence within a salt withdrawal mini-basin. The accumulation is limited by updip onlap onto a condensed section marl and by lateral truncation by a large-scale submarine erosion surface. Compartments were suggested by RFT pressure variations and by geochemical analysis of RFT fluid samples. A geological interpretation derived from high-resolution 3-D seismic data and three wells was linked to 3-D architecture models through seismic inversion, resulting in a reservoir model that honors all available data. Distinguishing subtle stratigraphic shingles from faults was accomplished by detailed, loop-level mapping and was important for characterizing the different types of reservoir compartments. Seismic inversion was used to detune the seismic amplitude, adjust sandbody thickness, and update the rock properties. Recent development wells confirm the architectural style identified. This modeling project illustrates how high-quality seismic data and architecture models can be combined in the pre-development phase of a prospect in order to optimize well placement.

  15. Deterministic Bragg Coherent Diffraction Imaging.

    PubMed

    Pavlov, Konstantin M; Punegov, Vasily I; Morgan, Kaye S; Schmalz, Gerd; Paganin, David M

    2017-04-25

    A deterministic variant of Bragg Coherent Diffraction Imaging is introduced in its kinematical approximation, for X-ray scattering from an imperfect crystal whose imperfections span no more than half of the volume of the crystal. This approach provides a unique analytical reconstruction of the object's structure factor and displacement fields from the 3D diffracted intensity distribution centred around any particular reciprocal lattice vector. The simple closed-form reconstruction algorithm, which requires only one multiplication and one Fourier transformation, is not restricted by assumptions of smallness of the displacement field. The algorithm performs well in simulations incorporating a variety of conditions, including both realistic levels of noise and departures from ideality in the reference (i.e. imperfection-free) part of the crystal.

  16. Analysis of wireless sensor network topology and estimation of optimal network deployment by deterministic radio channel characterization.

    PubMed

    Aguirre, Erik; Lopez-Iturri, Peio; Azpilicueta, Leire; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco

    2015-02-05

    One of the main challenges in the implementation and design of context-aware scenarios is the adequate deployment strategy for Wireless Sensor Networks (WSNs), mainly due to the strong dependence of the radiofrequency physical layer on the surrounding media, which can lead to non-optimal network designs. In this work, radioplanning analysis for WSN deployment is proposed by employing a deterministic 3D ray launching technique in order to provide insight into complex wireless channel behavior in context-aware indoor scenarios. The proposed radioplanning procedure is validated with a testbed implemented with a Mobile Ad Hoc Network WSN following a chain configuration, enabling the analysis and assessment of a rich variety of parameters, such as received signal level, signal quality and estimation of power consumption. The adoption of deterministic radio channel techniques allows the design and further deployment of WSNs in heterogeneous wireless scenarios with optimized behavior in terms of coverage, capacity, quality of service and energy consumption.
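
    A simple link-budget sketch of the kind of per-node estimate that a full 3D ray-launching study refines: log-distance path loss over the hops of a chain-topology WSN. The path-loss exponent, reference loss, transmit power, and sensitivity are assumed indoor values, not the paper's measured ones.

      import math

      def received_dbm(tx_dbm, d_m, n=2.8, pl_d0=40.0):
          """Log-distance path loss: PL(d) = PL(d0) + 10*n*log10(d/d0), d0 = 1 m."""
          return tx_dbm - (pl_d0 + 10.0 * n * math.log10(max(d_m, 1.0)))

      sensitivity = -95.0                       # mote receiver sensitivity [dBm]
      chain = [3.0, 7.0, 12.0, 18.0]            # hop distances in a chain WSN [m]
      for d in chain:
          rx = received_dbm(0.0, d)             # 0 dBm transmit power
          print(f"{d:5.1f} m -> {rx:6.1f} dBm, link ok: {rx > sensitivity}")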

  17. Three-dimensional reconstructions come to life--interactive 3D PDF animations in functional morphology.

    PubMed

    van de Kamp, Thomas; dos Santos Rolo, Tomy; Vagovič, Patrik; Baumbach, Tilo; Riedel, Alexander

    2014-01-01

    Digital surface mesh models based on segmented datasets have become an integral part of studies on animal anatomy and functional morphology; usually, they are published as static images, movies or as interactive PDF files. We demonstrate the use of animated 3D models embedded in PDF documents, which combine the advantages of both movie and interactivity, based on the example of preserved Trigonopterus weevils. The method is particularly suitable to simulate joints with largely deterministic movements due to precise form closure. We illustrate the function of an individual screw-and-nut type hip joint and proceed to the complex movements of the entire insect attaining a defence position. This posture is achieved by a specific cascade of movements: Head and legs interlock mutually and with specific features of thorax and the first abdominal ventrite, presumably to increase the mechanical stability of the beetle and to maintain the defence position with minimal muscle activity. The deterministic interaction of accurately fitting body parts follows a defined sequence, which resembles a piece of engineering.

  18. Three-Dimensional Reconstructions Come to Life – Interactive 3D PDF Animations in Functional Morphology

    PubMed Central

    van de Kamp, Thomas; dos Santos Rolo, Tomy; Vagovič, Patrik; Baumbach, Tilo; Riedel, Alexander

    2014-01-01

    Digital surface mesh models based on segmented datasets have become an integral part of studies on animal anatomy and functional morphology; usually, they are published as static images, movies or as interactive PDF files. We demonstrate the use of animated 3D models embedded in PDF documents, which combine the advantages of both movie and interactivity, based on the example of preserved Trigonopterus weevils. The method is particularly suitable to simulate joints with largely deterministic movements due to precise form closure. We illustrate the function of an individual screw-and-nut type hip joint and proceed to the complex movements of the entire insect attaining a defence position. This posture is achieved by a specific cascade of movements: Head and legs interlock mutually and with specific features of thorax and the first abdominal ventrite, presumably to increase the mechanical stability of the beetle and to maintain the defence position with minimal muscle activity. The deterministic interaction of accurately fitting body parts follows a defined sequence, which resembles a piece of engineering. PMID:25029366

  19. Stochastic Dynamic Mixed-Integer Programming (SD-MIP)

    DTIC Science & Technology

    2015-05-05

    ...stochastic linear programming (SLP) problems. By using a combination of ideas from cutting plane theory of deterministic MIP (especially disjunctive...developed to date. b) As part of this project, we have also developed tools for very large scale Stochastic Linear Programming (SLP). There are...several reasons for this. First, SLP models continue to challenge many of the fastest computers to date, and many applications within the DoD (e.g...

  20. Optofluidic fabrication for 3D-shaped particles

    NASA Astrophysics Data System (ADS)

    Paulsen, Kevin S.; di Carlo, Dino; Chung, Aram J.

    2015-04-01

    Complex three-dimensional (3D)-shaped particles could play unique roles in biotechnology, structural mechanics and self-assembly. Current methods of fabricating 3D-shaped particles such as 3D printing, injection moulding or photolithography are limited because of low resolution, low throughput or complicated/expensive procedures. Here, we present a novel method called optofluidic fabrication for the generation of complex 3D-shaped polymer particles based on two coupled processes: inertial flow shaping and ultraviolet (UV) light polymerization. Pillars within fluidic platforms are used to deterministically deform photosensitive precursor fluid streams. The channels are then illuminated with patterned UV light to polymerize the photosensitive fluid, creating particles with multi-scale 3D geometries. The fundamental advantages of optofluidic fabrication include high resolution, multi-scalability, dynamic tunability, simple operation and great potential for bulk fabrication with full automation. Through different combinations of pillar configurations, flow rates and UV light patterns, an infinite set of 3D-shaped particles is available, and a variety are demonstrated.

  1. Predicting the Stochastic Properties of the Shallow Subsurface for Improved Geophysical Modeling

    NASA Astrophysics Data System (ADS)

    Stroujkova, A.; Vynne, J.; Bonner, J.; Lewkowicz, J.

    2005-12-01

    Strong ground motion data from numerous explosive field experiments and from moderate to large earthquakes show significant variations in amplitude and waveform shape with respect to both azimuth and range. Attempts to model these variations using deterministic models have often been unsuccessful. It has been hypothesized that a stochastic description of the geological medium is a more realistic approach. To estimate the stochastic properties of the shallow subsurface, we use Measurement While Drilling (MWD) data, which are routinely collected by mines in order to facilitate design of blast patterns. The parameters, such as rotation speed of the drill, torque, and penetration rate, are used to compute the rock's Specific Energy (SE), which is then related to a blastability index. We use values of SE measured at two different mines and calibrated to laboratory measurements of rock properties to determine correlation lengths of the subsurface rocks in 2D, needed to obtain 2D and 3D stochastic models. The stochastic models are then combined with the deterministic models and used to compute synthetic seismic waveforms.
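
    The MWD-to-rock-property step can be sketched with Teale's classic drilling specific-energy relation, SE = F/A + 2*pi*N*T/(A*u); treating this formula choice, and the sample numbers, as assumptions of the illustration rather than the study's calibration.

      import math

      def specific_energy(thrust_n, torque_nm, rev_per_s, penetration_m_s, bit_area_m2):
          # Rotary component dominates for most production drilling conditions.
          rotary = 2.0 * math.pi * rev_per_s * torque_nm / (bit_area_m2 * penetration_m_s)
          return thrust_n / bit_area_m2 + rotary          # J/m^3 (i.e. Pa)

      area = math.pi * (0.1 ** 2)                         # 200 mm diameter bit
      se = specific_energy(thrust_n=5e4, torque_nm=800.0,
                           rev_per_s=1.5, penetration_m_s=0.01, bit_area_m2=area)
      print(se / 1e6, "MPa")   # compare against lab-calibrated rock strength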

  2. Direct numerical simulation of two-dimensional wall-bounded turbulent flows from receptivity stage.

    PubMed

    Sengupta, T K; Bhaumik, S; Bhumkar, Y G

    2012-02-01

    A deterministic route to turbulence creation in a 2D wall boundary layer is shown here by solving the full Navier-Stokes equation with dispersion relation preserving (DRP) numerical methods for flow over a flat plate excited by wall and free-stream excitations. The present results show that the transition caused by wall excitation is predominantly due to nonlinear growth of the spatiotemporal wave front, even in the presence of Tollmien-Schlichting (TS) waves. The existence and linear mechanism of creation of the spatiotemporal wave front were established in Sengupta, Rao and Venkatasubbaiah [Phys. Rev. Lett. 96, 224504 (2006)] via the solution of the Orr-Sommerfeld equation. Effects of spatiotemporal front(s) in the nonlinear phase of disturbance evolution have been documented by Sengupta and Bhaumik [Phys. Rev. Lett. 107, 154501 (2011)], where a flow is taken from the receptivity stage to the fully developed 2D turbulent state exhibiting a $k^{-3}$ energy spectrum by solving the Navier-Stokes equation without any artifice. The details of this mechanism are presented here for the first time, along with another problem of forced excitation of the boundary layer by convecting free-stream vortices. Thus, the excitations considered here are for a zero pressure gradient (ZPG) boundary layer by (i) monochromatic time-harmonic wall excitation and (ii) free-stream excitation by a convecting train of vortices at a constant height. The latter case demonstrates neither a monochromatic TS wave nor the spatiotemporal wave front, yet both cases eventually show the presence of a $k^{-3}$ energy spectrum, which has been shown experimentally for atmospheric dynamics in Nastrom, Gage and Jasperson [Nature 310, 36 (1984)]. Transition by a nonlinear mechanism of the Navier-Stokes equation leading to a $k^{-3}$ energy spectrum in the inertial subrange is the typical characteristic feature of all 2D turbulent flows. Reproduction of the spectrum noted in atmospheric data (showing dominance of the $k^{-3}$ spectrum over the $k^{-5/3}$ spectrum in Nastrom et al.) at laboratory scale indicates universality of this spectrum for all 2D turbulent flows. Creation of universal features of 2D turbulence by a deterministic route has been established here for the first time by solving the Navier-Stokes equation without any modeling, unlike what has been reported earlier in the literature by other researchers.
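
    The diagnostic quantity in which the $k^{-3}$ inertial range is identified can be computed as below: an isotropic kinetic-energy spectrum E(k) obtained by FFT shell averaging of a 2D velocity field. The random test field is a stand-in for DNS output; normalization conventions are a choice of this sketch.

      import numpy as np

      n = 256
      rng = np.random.default_rng(6)
      u, v = rng.normal(size=(2, n, n))            # placeholder velocity components

      uk, vk = np.fft.fft2(u), np.fft.fft2(v)
      e2d = 0.5 * (np.abs(uk) ** 2 + np.abs(vk) ** 2) / n**4

      kx = np.fft.fftfreq(n) * n
      kmag = np.sqrt(kx[:, None] ** 2 + kx[None, :] ** 2)
      shells = np.arange(0.5, n // 2)
      E = np.array([e2d[(kmag >= k - 0.5) & (kmag < k + 0.5)].sum() for k in shells])

      # For DNS data one would check log(E) vs log(k) for a -3 slope:
      slope = np.polyfit(np.log(shells[5:50]), np.log(E[5:50] + 1e-30), 1)[0]
      print(slope)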

  3. Measuring the Autocorrelation Function of Nanoscale Three-Dimensional Density Distribution in Individual Cells Using Scanning Transmission Electron Microscopy, Atomic Force Microscopy, and a New Deconvolution Algorithm.

    PubMed

    Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S; Subramanian, Hariharan; Dravid, Vinayak P; Backman, Vadim

    2017-06-01

    Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass-density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass-density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass-density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass-density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass-density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes.
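
    The statistical step that replaces tomography can be sketched as a 2D autocorrelation computed via the Wiener-Khinchin theorem from a single projection image. Relating this 2D ACF to the 3D one uses the paper's isotropy assumption and thickness map, which are not reproduced here; the random image is a stand-in for a STEM projection.

      import numpy as np

      def acf2d(image):
          f = image - image.mean()
          power = np.abs(np.fft.fft2(f)) ** 2            # power spectral density
          acf = np.fft.ifft2(power).real                  # Wiener-Khinchin theorem
          return np.fft.fftshift(acf) / f.size / f.var()  # normalized, zero lag centred

      rng = np.random.default_rng(7)
      projection = rng.normal(size=(128, 128))            # stand-in for a STEM image
      acf = acf2d(projection)
      print(acf[64, 64])                                  # ~1.0 at zero lag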

  4. Measuring the Autocorrelation Function of Nanoscale Three-Dimensional Density Distribution in Individual Cells Using Scanning Transmission Electron Microscopy, Atomic Force Microscopy, and a New Deconvolution Algorithm

    PubMed Central

    Li, Yue; Zhang, Di; Capoglu, Ilker; Hujsak, Karl A.; Damania, Dhwanil; Cherkezyan, Lusik; Roth, Eric; Bleher, Reiner; Wu, Jinsong S.; Subramanian, Hariharan; Dravid, Vinayak P.; Backman, Vadim

    2018-01-01

    Essentially all biological processes are highly dependent on the nanoscale architecture of the cellular components where these processes take place. Statistical measures, such as the autocorrelation function (ACF) of the three-dimensional (3D) mass–density distribution, are widely used to characterize cellular nanostructure. However, conventional methods of reconstruction of the deterministic 3D mass–density distribution, from which these statistical measures can be calculated, have been inadequate for thick biological structures, such as whole cells, due to the conflict between the need for nanoscale resolution and its inverse relationship with thickness after conventional tomographic reconstruction. To tackle the problem, we have developed a robust method to calculate the ACF of the 3D mass–density distribution without tomography. Assuming the biological mass distribution is isotropic, our method allows for accurate statistical characterization of the 3D mass–density distribution by ACF with two data sets: a single projection image by scanning transmission electron microscopy and a thickness map by atomic force microscopy. Here we present validation of the ACF reconstruction algorithm, as well as its application to calculate the statistics of the 3D distribution of mass–density in a region containing the nucleus of an entire mammalian cell. This method may provide important insights into architectural changes that accompany cellular processes. PMID:28416035

  5. Deterministic and stochastic bifurcations in the Hindmarsh-Rose neuronal model

    NASA Astrophysics Data System (ADS)

    Dtchetgnia Djeundam, S. R.; Yamapi, R.; Kofane, T. C.; Aziz-Alaoui, M. A.

    2013-09-01

    We analyze the bifurcations occurring in the 3D Hindmarsh-Rose neuronal model with and without a random signal. Under a sufficient stimulus, neuronal activity takes place, and we observe various types of bifurcations that lead to chaotic transitions. Besides the equilibrium solutions and their stability, we also investigate the deterministic bifurcations. It appears that the neuronal activity consists of chaotic transitions between two periodic phases called bursting and spiking solutions. The stochastic bifurcation, defined as a sudden change in the character of a stochastic attractor when the bifurcation parameter of the system passes through a critical value, or, under certain conditions, as the collision of a stochastic attractor with a stochastic saddle, occurs when a random Gaussian signal is added. Our study reveals two kinds of stochastic bifurcation: the phenomenological bifurcation (P-bifurcation) and the dynamical bifurcation (D-bifurcation). The asymptotic method is used to analyze the phenomenological bifurcation. We find that the neuronal activity of spiking and bursting chaos remains for finite values of the noise intensity.
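
    A sketch of the system under study: the 3D Hindmarsh-Rose equations with an optional additive Gaussian term, integrated by Euler-Maruyama, showing the bursting regime. The parameter values are the common textbook choices, not necessarily those scanned in the paper's bifurcation analysis.

      import numpy as np

      def hr_rhs(u, I=3.0, r=0.006, s=4.0, x_rest=-1.6):
          x, y, z = u
          dx = y + 3.0 * x**2 - x**3 - z + I    # membrane potential
          dy = 1.0 - 5.0 * x**2 - y             # fast recovery variable
          dz = r * (s * (x - x_rest) - z)       # slow adaptation current
          return np.array([dx, dy, dz])

      dt, noise = 0.01, 0.0                     # set noise > 0 for the stochastic case
      u = np.array([-1.0, 0.0, 2.0])
      rng = np.random.default_rng(8)
      xs = []
      for _ in range(200000):
          u = u + dt * hr_rhs(u) + noise * np.sqrt(dt) * rng.normal(size=3)
          xs.append(u[0])
      print(min(xs), max(xs))                   # bursting alternates spiking/rest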

  6. Northern Hemisphere glaciation and the evolution of Plio-Pleistocene climate noise

    NASA Astrophysics Data System (ADS)

    Meyers, Stephen R.; Hinnov, Linda A.

    2010-08-01

    Deterministic orbital controls on climate variability are commonly inferred to dominate across timescales of $10^4$-$10^6$ years, although some studies have suggested that stochastic processes may be of equal or greater importance. Here we explicitly quantify changes in deterministic orbital processes (forcing and/or pacing) versus stochastic climate processes during the Plio-Pleistocene, via time-frequency analysis of two prominent foraminifera oxygen isotopic stacks. Our results indicate that development of the Northern Hemisphere ice sheet is paralleled by an overall amplification of both deterministic and stochastic climate energy, but their relative dominance is variable. The progression from a more stochastic early Pliocene to a strongly deterministic late Pleistocene is primarily accommodated during two transitory phases of Northern Hemisphere ice sheet growth. This long-term trend is punctuated by "stochastic events," which we interpret as evidence for abrupt reorganization of the climate system at the initiation and termination of the mid-Pleistocene transition and at the onset of Northern Hemisphere glaciation. In addition to highlighting a complex interplay between deterministic and stochastic climate change during the Plio-Pleistocene, our results support an early onset for Northern Hemisphere glaciation (between 3.5 and 3.7 Ma) and reveal some new characteristics of the orbital signal response, such as the puzzling emergence of 100 ka and 400 ka cyclic climate variability during theoretical eccentricity nodes.

  7. Single Microwave-Photon Detector using an Artificial Lambda-type Three-Level System

    DTIC Science & Technology

    2016-01-11

    Single microwave-photon detector using an artificial Λ-type three-level system. Inomata, Kunihiro; Lin, Zhirong; Koshino, Kazuki; Oliver, William D.; Tsai, Jaw-Shen; Yamamoto, Tsuyoshi; Nakamura, Yasunobu. ...a single-microwave-photon detector based on the deterministic switching in an artificial Λ-type three-level system implemented using the dressed states of a...

  8. Do rational numbers play a role in selection for stochasticity?

    PubMed

    Sinclair, Robert

    2014-01-01

    When a given tissue must, to be able to perform its various functions, consist of different cell types, each fairly evenly distributed and with specific probabilities, then there are at least two quite different developmental mechanisms which might achieve the desired result. Let us begin with the case of two cell types, and first imagine that the proportion of numbers of cells of these types should be 1:3. Clearly, a regular structure composed of repeating units of four cells, three of which are of the dominant type, will easily satisfy the requirements, and a deterministic mechanism may lend itself to the task. What if, however, the proportion should be 10:33? The same simple, deterministic approach would now require a structure of repeating units of 43 cells, and this certainly seems to require a far more complex and potentially prohibitive deterministic developmental program. Stochastic development, replacing regular units with random distributions of given densities, might not be evolutionarily competitive in comparison with the deterministic program when the proportions should be 1:3, but it has the property that, whatever developmental mechanism underlies it, its complexity does not need to depend very much upon target cell densities at all. We are immediately led to speculate that proportions which correspond to fractions with large denominators (such as the 33 of 10/33) may be more easily achieved by stochastic developmental programs than by deterministic ones, and this is the core of our thesis: that stochastic development may tend to occur more often in cases involving rational numbers with large denominators. To be imprecise: that simple rationality and determinism belong together, as do irrationality and randomness.
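
    The combinatorial point admits a tiny illustration: the smallest deterministic repeating unit for a target proportion p:q is p+q cells once the fraction is reduced to lowest terms, so "awkward" rationals force large units. The third example ratio is an arbitrary addition of this sketch.

      from fractions import Fraction

      for p, q in [(1, 3), (10, 33), (7, 50)]:
          f = Fraction(p, p + q)                 # fraction of cells of the first type
          unit = f.denominator                   # minimal repeating-unit size
          print(f"{p}:{q} -> unit of {unit} cells, {f.numerator} of the first type")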

  9. Construction and comparison of parallel implicit kinetic solvers in three spatial dimensions

    NASA Astrophysics Data System (ADS)

    Titarev, Vladimir; Dumbser, Michael; Utyuzhnikov, Sergey

    2014-01-01

    The paper is devoted to the further development and systematic performance evaluation of a recent deterministic framework Nesvetay-3D for modelling three-dimensional rarefied gas flows. Firstly, a review of the existing discretization and parallelization strategies for solving numerically the Boltzmann kinetic equation with various model collision integrals is carried out. Secondly, a new parallelization strategy for the implicit time evolution method is implemented which improves scaling on large CPU clusters. Accuracy and scalability of the methods are demonstrated on a pressure-driven rarefied gas flow through a finite-length circular pipe as well as an external supersonic flow over a three-dimensional re-entry geometry of complicated aerodynamic shape.

  10. A deterministic, dynamic systems model of cow-calf production: The effects of the duration of postpartum anestrus on production parameters over a 10-year horizon.

    PubMed

    Shane, D D; Larson, R L; Sanderson, M W; Miesner, M; White, B J

    2017-04-01

    The duration of postpartum anestrus (dPPA) is important to consider for reproductive performance and efficiency in cow-calf operations. We developed a deterministic, dynamic systems model of cow-calf production over a 10-yr horizon to model the effects that dPPA had on measures of herd productivity, including the percentage of cows cycling before the end of the first 21 d of the breeding season (%C21), the percentage of cows pregnant at pregnancy diagnosis (%PPD), the distribution of pregnancy by 21-d breeding intervals, the kilograms of calf weaned (KW), the kilograms of calf weaned per cow exposed (KPC), and the replacement percentage. A 1,000-animal herd was modeled, with the beginning and ending dates for a 63-d natural service breeding season being the same for eligible replacement heifers (nulliparous cows) and cows (primiparous and multiparous cows). Herds were simulated to have a multiparous cow dPPA of 50, 60, 70, or 80 d, with the dPPA for primiparous cows being set to 50, 60, 70, 80, 90, 100, or 110 d. Only combinations where the primiparous dPPA was greater than or equal to the multiparous dPPA were included, resulting in 22 model herds being simulated in the analysis. All other model parameters were held constant between simulations. In model season 10, the %C21 was 96.2% when the multiparous cow and primiparous cow dPPA was 50 d and was 48.3% when the multiparous cow and primiparous cow dPPA was 80 d. The %PPD in model season 10 for these same herds was 95.1% and 86.0%, respectively. The percentage of the herd becoming pregnant in the first 21 d of the breeding season also differed between these herds (61.8% and 31.3%, respectively). The 10-yr total KW was more than 275,000 kg greater for the herd with a 50-d multiparous cow and primiparous cow dPPA when compared with the herd with the 80-d multiparous and primiparous cow dPPA and had a model season 10 KPC of 180.8 kg compared with 151.4 kg for the longer dPPA. The model results show that both the multiparous cow and primiparous cow dPPA affect herd productivity outcomes and that a dPPA less than 60 d results in improved production outcomes relative to longer dPPA. Veterinarians and producers should consider determining the dPPA to aid in making management decisions to improve reproductive performance of cow-calf herds.

  11. Global solutions to random 3D vorticity equations for small initial data

    NASA Astrophysics Data System (ADS)

    Barbu, Viorel; Röckner, Michael

    2017-11-01

    One proves the existence and uniqueness in $(L^p(\mathbb{R}^3))^3$, $3/2 < p < 2$, of a global mild solution to random vorticity equations associated to stochastic 3D Navier-Stokes equations with linear multiplicative Gaussian noise of convolution type, for sufficiently small initial vorticity. This resembles some earlier deterministic results of T. Kato [16] and is obtained by treating the equation in vorticity form and reducing the latter to a random nonlinear parabolic equation. The solution has maximal regularity in the spatial variables and is weakly continuous in $(L^3 \cap L^{3p/(4p-6)})^3$ with respect to the time variable. Furthermore, we obtain the pathwise continuous dependence of solutions with respect to the initial data. In particular, one gets a locally unique solution of the 3D stochastic Navier-Stokes equation in vorticity form up to some explosion stopping time $\tau$ adapted to the Brownian motion.

  12. Quadratic Finite Element Method for 1D Deterministic Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tolar, Jr., D R; Ferguson, J M

    2004-01-06

    In the discrete ordinates, or $S_N$, numerical solution of the transport equation, both the spatial ($\vec{r}$) and angular ($\vec{\Omega}$) dependences of the angular flux $\psi(\vec{r},\vec{\Omega})$ are modeled discretely. While significant effort has been devoted toward improving the spatial discretization of the angular flux, we focus on improving its angular discretization. Specifically, we employ a Petrov-Galerkin quadratic finite element approximation for the differencing of the angular variable ($\mu$) in developing the one-dimensional (1D) spherical geometry $S_N$ equations. We develop an algorithm that shows faster convergence with angular resolution than conventional $S_N$ algorithms.

  13. A hybrid model for predicting carbon monoxide from vehicular exhausts in urban environments

    NASA Astrophysics Data System (ADS)

    Gokhale, Sharad; Khare, Mukesh

    Several deterministic air quality models evaluate and predict the frequently occurring pollutant concentrations well but, in general, are incapable of predicting the 'extreme' concentrations. In contrast, statistical distribution models overcome this limitation of the deterministic models and predict the 'extreme' concentrations. However, environmental damage is caused both by the extremes and by the sustained average concentration of pollutants. Hence, a model should predict not only the 'extreme' ranges but also the 'middle' ranges of pollutant concentrations, i.e. the entire range. Hybrid modelling is one technique that estimates/predicts the 'entire range' of the distribution of pollutant concentrations by combining deterministic models with suitable statistical distribution models (Jakeman et al., 1988). In the present paper, a hybrid model has been developed to predict the carbon monoxide (CO) concentration distributions at one of the traffic intersections, Income Tax Office (ITO), in the city of Delhi, where the traffic is heterogeneous in nature, consisting of light vehicles, heavy vehicles, three-wheelers (auto rickshaws) and two-wheelers (scooters, motorcycles, etc.), and the meteorology is 'tropical'. The model combines the general finite line source model (GFLSM) as its deterministic component and the log-logistic distribution (LLD) model as its statistical component. The hybrid (GFLSM-LLD) model is then applied at the ITO intersection. The results show that the hybrid model predictions match the observed CO concentration data within the 5-99 percentile range. The model is further validated at a different street location, the Sirifort roadway. The validation results show that the model predicts CO concentrations fairly well (d = 0.91) in the 10-95 percentile range. A regulatory compliance measure is also developed to estimate the probability of hourly CO concentrations exceeding the National Ambient Air Quality Standards (NAAQS) of India.
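
    The statistical half of the hybrid can be sketched with SciPy, whose fisk distribution is the log-logistic: fit it to hourly CO concentrations and evaluate the exceedance probability of a limit. The synthetic data and the 4.0 mg/m^3 limit are placeholders, not the paper's ITO measurements or the actual NAAQS value.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(9)
      co_obs = stats.fisk.rvs(c=3.0, scale=1.5, size=2000, random_state=rng)  # fake data

      c, loc, scale = stats.fisk.fit(co_obs, floc=0.0)     # fit LLD, zero location
      p_exceed = stats.fisk.sf(4.0, c, loc=loc, scale=scale)
      print(f"P(CO > 4.0 mg/m^3) ~ {p_exceed:.3f}")         # compliance estimate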

  14. Deterministic Line-Shape Programming of Silicon Nanowires for Extremely Stretchable Springs and Electronics.

    PubMed

    Xue, Zhaoguo; Sun, Mei; Dong, Taige; Tang, Zhiqiang; Zhao, Yaolong; Wang, Junzhuan; Wei, Xianlong; Yu, Linwei; Chen, Qing; Xu, Jun; Shi, Yi; Chen, Kunji; Roca I Cabarrocas, Pere

    2017-12-13

    Line-shape engineering is a key strategy to endow extra stretchability to 1D silicon nanowires (SiNWs) grown with self-assembly processes. We here demonstrate a deterministic line-shape programming of in-plane SiNWs into extremely stretchable springs or arbitrary 2D patterns with the aid of indium droplets that absorb amorphous Si precursor thin film to produce ultralong c-Si NWs along programmed step edges. A reliable and faithful single run growth of c-SiNWs over turning tracks with different local curvatures has been established, while high resolution transmission electron microscopy analysis reveals a high quality monolike crystallinity in the line-shaped engineered SiNW springs. Excitingly, in situ scanning electron microscopy stretching and current-voltage characterizations also demonstrate a superelastic and robust electric transport carried by the SiNW springs even under large stretching of more than 200%. We suggest that this highly reliable line-shape programming approach holds a strong promise to extend the mature c-Si technology into the development of a new generation of high performance biofriendly and stretchable electronics.

  15. Evaluating the risk of death via the hematopoietic syndrome mode for prolonged exposure of nuclear workers to radiation delivered at very low rates.

    PubMed

    Scott, B R; Lyzlov, A F; Osovets, S V

    1998-05-01

    During a Phase-I effort, studies were planned to evaluate deterministic (nonstochastic) effects of chronic exposure of nuclear workers at the Mayak atomic complex in the former Soviet Union to relatively high levels (> 0.25 Gy) of ionizing radiation. The Mayak complex has been used since the late 1940s to produce plutonium for nuclear weapons. Workers at Site A of the complex were involved in plutonium breeding using nuclear reactors, and some were exposed to relatively large doses of gamma rays plus relatively small neutron doses. The Weibull normalized-dose model, which has been set up to evaluate the risk of specific deterministic effects of combined, continuous exposure of humans to alpha, beta, and gamma radiations, is here adapted for chronic exposure to gamma rays and neutrons during repeated 6-h work shifts, as occurred for some nuclear workers at Site A. Using the adapted model, key conclusions were reached that will facilitate a Phase-II study of deterministic effects among Mayak workers. These conclusions include the following: (1) neutron doses may be more important for Mayak workers than for Japanese A-bomb victims in Hiroshima and can be accounted for using an adjusted dose (which accounts for neutron relative biological effectiveness); (2) to account for dose-rate effects, the normalized dose $X$ (a dimensionless fraction of an LD50 or ED50) can be evaluated in terms of an adjusted dose; (3) nonlinear dose-response curves for the risk of death via the hematopoietic mode can be converted to linear dose-response curves (for low levels of risk) using a newly proposed dimensionless dose, $D = X^V$, in units of Oklad (where D is pronounced "deh"), with $V$ the shape parameter in the Weibull model; (4) for $X \le X_0$, where $X_0$ is the threshold normalized dose, $D = 0$; (5) unlike absorbed dose, the dose $D$ can be averaged over different Mayak workers in order to calculate the average risk of death via the hematopoietic mode for the population exposed at Site A; and (6) the expected number of deaths via the hematopoietic syndrome mode for Mayak workers chronically exposed during work shifts at Site A to gamma rays and neutrons can be predicted as $\ln(2)\,B\,M[D]$, where $B$ (pronounced "beh") is the number of workers at risk (criticality accident victims excluded) and $M[D]$ is the mean value of $D$, averaged over the worker population at risk at Site A for the time period considered. These results can be used to facilitate a Phase-II study of deterministic radiation effects among Mayak workers chronically exposed to gamma rays and neutrons.
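
    A compact sketch of the bookkeeping in conclusions (3)-(6): the linearizing dose D = X^V with threshold X0, one Weibull risk form consistent with the ln(2) factor above (it returns 0.5 at X = 1, i.e. at one LD50, and reduces to ln(2)*D at low risk), and the expected-deaths estimate ln(2)*B*M[D]. The shape parameter, threshold, and dose values are illustrative assumptions, not the study's fitted parameters.

      import numpy as np

      def d_oklad(x, v=6.0, x0=0.5):
          """Dimensionless dose D = X^V for X > X0, else 0 (assumed parameters)."""
          x = np.asarray(x, dtype=float)
          return np.where(x > x0, x**v, 0.0)

      def risk(x, v=6.0, x0=0.5):
          # R = 1 - exp(-ln2 * D); at X = 1 (one LD50) this gives R = 0.5.
          return 1.0 - np.exp(-np.log(2.0) * d_oklad(x, v, x0))

      workers_x = np.array([0.2, 0.4, 0.55, 0.6, 0.7])   # normalized doses X
      B = len(workers_x)
      expected_deaths = np.log(2.0) * B * d_oklad(workers_x).mean()
      print(risk(workers_x), expected_deaths)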

  16. JCCRER Project 2.3 -- Deterministic effects of occupational exposure to radiation. Phase 1: Feasibility study; Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okladnikova, N.; Pesternikova, V.; Sumina, M.

    1998-12-01

    Phase 1 of Project 2.3, a short-term collaborative Feasibility Study, was funded for 12 months starting on 1 February 1996. The overall aim of the study was to determine the practical feasibility of using the dosimetric and clinical data on the MAYAK worker population to study the deterministic effects of exposure to external gamma radiation and to internal alpha radiation from inhaled plutonium. Phase 1 efforts were limited to the period of greatest worker exposure (1948-1954) and focused on collaboratively: assessing the comprehensiveness, availability, quality, and suitability of the Russian clinical and dosimetric data for the study of deterministic effects; creating an electronic database containing complete clinical and dosimetric data on a small, representative sample of MAYAK workers; developing computer software for testing a currently used health-risk model of hematopoietic effects; and familiarizing the US team with the Russian diagnostic criteria and techniques used in the identification of Chronic Radiation Sickness.

  17. Specialized Silicon Compilers for Language Recognition.

    DTIC Science & Technology

    1984-07-01

    realizations of non-deterministic automata have been reported that solve these problems in different ways. Floyd and Ullman [28] have presented a...in Applied Mathematics, pages 19-31. American Mathematical Society, 1967. [28] Floyd, R. W. and J. D. Ullman. The Compilation of Regular Expressions...Shannon (editor). Automata Studies, chapter 1, pages 3-41. Princeton University Press, Princeton, N. J., 1956. [44] Kohavi, Zvi. Switching and Finite

  18. Disentangling Mechanisms That Mediate the Balance Between Stochastic and Deterministic Processes in Microbial Succession

    DOE PAGES

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; ...

    2015-03-17

    Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about mechanisms shifting their relative importance. To better understand underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.

  19. Comparative study on neutronics characteristics of a 1500 MWe metal fuel sodium-cooled fast reactor

    DOE PAGES

    Ohgama, Kazuya; Aliberti, Gerardo; Stauff, Nicolas E.; ...

    2017-02-28

    Under the cooperative effort of the Civil Nuclear Energy R&D Working Group, within the framework of the U.S.-Japan bilateral, Argonne National Laboratory (ANL) and the Japan Atomic Energy Agency (JAEA) have been performing a benchmark study using the Japan Sodium-cooled Fast Reactor (JSFR) design with metal fuel. In this benchmark study, core characteristic parameters at the beginning of cycle were evaluated by the best-estimate deterministic and stochastic methodologies of ANL and JAEA. The results obtained by both institutions show good agreement, with less than 200 pcm of discrepancy on the neutron multiplication factor, and less than 3% of discrepancy on the sodium void reactivity, Doppler reactivity, and control rod worth. The results of the stochastic and deterministic approaches were compared by each party to investigate the impacts of the deterministic approximations and to understand potential variations in the results due to the different calculation methodologies employed. From the detailed analysis of methodologies, it was found that the good agreement in multiplication factor from the deterministic calculations comes from the cancellation of differences in methodology (0.4%) and nuclear data (0.6%). The different treatment of reflector cross-section generation was estimated to be the major cause of the discrepancy between the multiplication factors obtained by the JAEA and ANL deterministic methodologies. Impacts of the nuclear data libraries were also investigated using a sensitivity analysis methodology. Furthermore, the differences in the inelastic scattering cross sections of U-238, the ν values and fission cross sections of Pu-239, and the µ-average of Na-23 are the major contributors to the difference in the multiplication factors.

  20. Comparative study on neutronics characteristics of a 1500 MWe metal fuel sodium-cooled fast reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohgama, Kazuya; Aliberti, Gerardo; Stauff, Nicolas E.

    Under the cooperative effort of the Civil Nuclear Energy R&D Working Group, within the framework of the U.S.-Japan bilateral, Argonne National Laboratory (ANL) and the Japan Atomic Energy Agency (JAEA) have been performing a benchmark study using the Japan Sodium-cooled Fast Reactor (JSFR) design with metal fuel. In this benchmark study, core characteristic parameters at the beginning of cycle were evaluated by the best-estimate deterministic and stochastic methodologies of ANL and JAEA. The results obtained by both institutions show good agreement, with less than 200 pcm of discrepancy on the neutron multiplication factor, and less than 3% of discrepancy on the sodium void reactivity, Doppler reactivity, and control rod worth. The results of the stochastic and deterministic approaches were compared by each party to investigate the impacts of the deterministic approximations and to understand potential variations in the results due to the different calculation methodologies employed. From the detailed analysis of methodologies, it was found that the good agreement in multiplication factor from the deterministic calculations comes from the cancellation of differences in methodology (0.4%) and nuclear data (0.6%). The different treatment of reflector cross-section generation was estimated to be the major cause of the discrepancy between the multiplication factors obtained by the JAEA and ANL deterministic methodologies. Impacts of the nuclear data libraries were also investigated using a sensitivity analysis methodology. Furthermore, the differences in the inelastic scattering cross sections of U-238, the ν values and fission cross sections of Pu-239, and the µ-average of Na-23 are the major contributors to the difference in the multiplication factors.

  1. The GMAO Hybrid Ensemble-Variational Atmospheric Data Assimilation System: Version 2.0

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo; El Akkraoui, Amal

    2018-01-01

    This document describes the implementation and usage of the Goddard Earth Observing System (GEOS) Hybrid Ensemble-Variational Atmospheric Data Assimilation System (Hybrid EVADAS). Its aim is to provide comprehensive guidance to users of GEOS ADAS interested in experimenting with its hybrid functionalities. The document also provides a short summary of the state of the science in this release of the hybrid system. As explained here, the ensemble data assimilation system (EnADAS) mechanism added to GEOS ADAS to enable hybrid data assimilation applications has been introduced into the pre-existing machinery of GEOS in the least intrusive way possible. Only very minor changes have been made to the original scripts controlling GEOS ADAS, with the objective of facilitating its usage by both researchers and the GMAO's near-real-time Forward Processing applications. In a hybrid scenario, two data assimilation systems run concurrently in a two-way feedback mode such that: the ensemble provides the background ensemble perturbations required by the deterministic (typically high-resolution) hybrid analysis of the ADAS; and the deterministic ADAS provides analysis information for recentering the EnADAS analyses, together with the information necessary to ensure that observation bias correction procedures are consistent between the deterministic ADAS and the EnADAS. The non-intrusive approach to introducing hybrid capability to GEOS ADAS means, in particular, that previously existing features continue to be available. Thus, not only is this upgraded version of GEOS ADAS capable of supporting new applications such as Hybrid 3D-Var, 3D-EnVar, 4D-EnVar and Hybrid 4D-EnVar, it also remains possible to use GEOS ADAS in the traditional 3D-Var mode used in both MERRA and MERRA-2. Furthermore, as described in this document, GEOS ADAS also supports a configuration for exercising a purely ensemble-based assimilation strategy, which can be fully decoupled from its variational component. We should point out that Release 1.0 of this document was made available to GMAO in mid-2013, when we introduced the Hybrid 3D-Var capability to GEOS ADAS. That initial version of the documentation included a considerably different state-of-science introductory section but much of the same detailed description of the mechanisms of GEOS EnADAS. We are glad to report that a few of the desirable future work items listed in Release 1.0 have now been added to the present version of GEOS EnADAS. These include the ability to exercise an Ensemble Prediction System that uses the ensemble analyses of GEOS EnADAS and (a very early, but functional, version of) a tool to support Ensemble Forecast Sensitivity and Observation Impact applications.
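
    To make the two-way coupling concrete, here is a minimal sketch in a toy linear-Gaussian setting: ensemble perturbations contribute a flow-dependent component to a hybrid background covariance, and the ensemble analyses are then recentered on the deterministic analysis. All dimensions, weights and stand-in values are invented; GEOS ADAS itself is far more elaborate.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m = 10, 20                        # state size, ensemble size (toy)

      B_static = np.eye(n)                 # climatological covariance
      ensemble = rng.normal(size=(n, m))   # background ensemble states

      # Flow-dependent covariance from ensemble perturbations.
      perts = ensemble - ensemble.mean(axis=1, keepdims=True)
      B_ens = perts @ perts.T / (m - 1)

      beta = 0.5                           # hybrid weight (illustrative)
      B_hybrid = beta * B_static + (1.0 - beta) * B_ens

      # ... the deterministic hybrid analysis x_a would be computed with
      # B_hybrid; a random stand-in is used here ...
      x_a = rng.normal(size=n)

      # Recentering: shift the ensemble analyses so their mean equals x_a.
      ens_analyses = ensemble              # stand-in for EnADAS analyses
      recentered = (ens_analyses
                    - ens_analyses.mean(axis=1, keepdims=True)
                    + x_a[:, None])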

  2. A hybrid symplectic principal component analysis and central tendency measure method for detection of determinism in noisy time series with application to mechanomyography

    NASA Astrophysics Data System (ADS)

    Xie, Hong-Bo; Dokos, Socrates

    2013-06-01

    We present a hybrid symplectic geometry and central tendency measure (CTM) method for detection of determinism in noisy time series. CTM is effective for detecting determinism in short time series and has been applied in many areas of nonlinear analysis. However, its performance significantly degrades in the presence of strong noise. In order to circumvent this difficulty, we propose to use symplectic principal component analysis (SPCA), a new chaotic signal de-noising method, as the first step to recover the system dynamics. CTM is then applied to determine whether the time series arises from a stochastic process or has a deterministic component. Results from numerical experiments, ranging from six benchmark deterministic models to 1/f noise, suggest that the hybrid method can significantly improve detection of determinism in noisy time series by about 20 dB when the data are contaminated by Gaussian noise. Furthermore, we apply our algorithm to study the mechanomyographic (MMG) signals arising from contraction of human skeletal muscle. Results obtained from the hybrid symplectic principal component analysis and central tendency measure demonstrate that the skeletal muscle motor unit dynamics can indeed be deterministic, in agreement with previous studies. However, the conventional CTM method was not able to definitely detect the underlying deterministic dynamics. This result on MMG signal analysis is helpful in understanding neuromuscular control mechanisms and developing MMG-based engineering control applications.
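
    The CTM stage lends itself to a short sketch. One common formulation, used below, computes the fraction of second-order difference-plot points falling inside a radius r; the SPCA de-noising stage is omitted, and the radius and test signals are arbitrary choices.

      import numpy as np

      def ctm(y, r):
          """Fraction of second-order difference-plot points within radius r.
          Values near 1 indicate tight central clustering (determinism);
          values near 0 indicate diffuse, noise-like scatter."""
          y = np.asarray(y, dtype=float)
          dy = np.diff(y)                  # first differences
          a, b = dy[1:], dy[:-1]           # (y[i+2]-y[i+1]) vs (y[i+1]-y[i])
          return np.mean(np.hypot(a, b) < r)

      # Deterministic (sine) vs stochastic (white noise) series for contrast.
      t = np.linspace(0, 20 * np.pi, 2000)
      print(ctm(np.sin(t), r=0.05))                                  # near 1
      print(ctm(np.random.default_rng(1).normal(size=2000), r=0.05)) # near 0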

  3. A hybrid symplectic principal component analysis and central tendency measure method for detection of determinism in noisy time series with application to mechanomyography.

    PubMed

    Xie, Hong-Bo; Dokos, Socrates

    2013-06-01

    We present a hybrid symplectic geometry and central tendency measure (CTM) method for detection of determinism in noisy time series. CTM is effective for detecting determinism in short time series and has been applied in many areas of nonlinear analysis. However, its performance significantly degrades in the presence of strong noise. In order to circumvent this difficulty, we propose to use symplectic principal component analysis (SPCA), a new chaotic signal de-noising method, as the first step to recover the system dynamics. CTM is then applied to determine whether the time series arises from a stochastic process or has a deterministic component. Results from numerical experiments, ranging from six benchmark deterministic models to 1/f noise, suggest that the hybrid method can significantly improve detection of determinism in noisy time series by about 20 dB when the data are contaminated by Gaussian noise. Furthermore, we apply our algorithm to study the mechanomyographic (MMG) signals arising from contraction of human skeletal muscle. Results obtained from the hybrid symplectic principal component analysis and central tendency measure demonstrate that the skeletal muscle motor unit dynamics can indeed be deterministic, in agreement with previous studies. However, the conventional CTM method was not able to definitely detect the underlying deterministic dynamics. This result on MMG signal analysis is helpful in understanding neuromuscular control mechanisms and developing MMG-based engineering control applications.

  4. A split-step method to include electron–electron collisions via Monte Carlo in multiple rate equation simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel

    2016-10-01

    A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open-source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single-photon absorption by free carriers. The second step is stochastic and models electron–electron collisions using Monte Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high-intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
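
    A schematic of the split-step structure, with all of the physics replaced by trivial stand-ins: each time step applies a deterministic rate-equation update followed by a stochastic Monte Carlo collision update. Only the operator-splitting skeleton reflects the method described above; the rates and the energy-exchange rule are invented.

      import numpy as np

      rng = np.random.default_rng(0)
      energies = rng.uniform(0.0, 1.0, size=1000)  # toy electron energies

      def deterministic_step(E, dt):
          # stand-in for ionization, electron-phonon coupling and
          # free-carrier absorption in the extended multi-rate equation
          return E + 0.01 * dt

      def monte_carlo_step(E, dt, rate=0.1):
          # each electron collides with probability rate*dt; colliding
          # pairs share their energy (toy exchange rule)
          hit = np.flatnonzero(rng.random(E.size) < rate * dt)
          rng.shuffle(hit)
          for i, j in zip(hit[::2], hit[1::2]):
              E[i] = E[j] = 0.5 * (E[i] + E[j])
          return E

      dt = 1e-2
      for _ in range(100):                         # sub-fs steps (toy)
          energies = deterministic_step(energies, dt)
          energies = monte_carlo_step(energies, dt)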

  5. The Solar System Large Planets' influence on a new Maunder Minimum

    NASA Astrophysics Data System (ADS)

    Yndestad, Harald; Solheim, Jan-Erik

    2016-04-01

    In the 1890s, G. Spörer and E. W. Maunder (1890) reported that solar activity stopped in a period of 70 years from 1645 to 1715. Later reconstructions of solar activity confirm the grand minima of Maunder (1640-1720), Spörer (1390-1550) and Wolf (1270-1340), and the minima of Oort (1010-1070) and Dalton (1785-1810), since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with reduced solar irradiation and cold climate periods on Earth. The identification of three grand Maunder-type periods and two Dalton-type periods within a thousand years indicates that, sooner or later, a new Maunder- or Dalton-type period will bring a colder climate on Earth. The cause of these minimum periods is not well understood. If solar variability has a deterministic element, we can better estimate a new Maunder grand minimum; a purely random solar variability can only explain the past. This investigation is based on the simple idea that if solar variability has a deterministic property, it must have a deterministic source as a first cause. If this deterministic source is known, we can compute better estimates of the next expected Maunder grand minimum period. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., a sunspot data series from 1611 and a solar barycenter orbit data series from 1000. The analysis method is based on wavelet spectrum analysis, to identify stationary periods, coincidence periods and their phase relations. The result shows that the TSI variability and the sunspot variability have deterministic oscillations, controlled by the large planets Jupiter, Uranus and Neptune as the first cause. A deterministic model of TSI variability and sunspot variability confirms the known minimum and grand minimum periods since 1000. From this deterministic model we may expect a new Maunder-type sunspot minimum period from about 2018 to 2055. The deterministic model of the TSI ACRIM data series from 1700 computes a new Maunder-type grand minimum period from 2015 to 2071. A model of the longer TSI ACRIM data series from 1000 computes a new Dalton-to-Maunder-type minimum irradiation period from 2047 to 2068.

  6. Risk assessment for furan contamination through the food chain in Belgian children.

    PubMed

    Scholl, Georges; Huybrechts, Inge; Humblet, Marie-France; Scippo, Marie-Louise; De Pauw, Edwin; Eppe, Gauthier; Saegerman, Claude

    2012-08-01

    Young, old, pregnant and immuno-compromised persons are of great concern for risk assessors, as they represent the sub-populations most at risk. The present paper focuses on risk assessment of furan exposure in children. Only the Belgian population was considered, because the individual contamination and consumption data required for accurate risk assessment were available only for Belgian children. Two risk assessment approaches, the so-called deterministic and probabilistic, were applied and their results compared for the estimation of daily intake. A significant difference between the average Estimated Daily Intake (EDI) values was found between the deterministic (419 ng kg⁻¹ body weight (bw) day⁻¹) and the probabilistic (583 ng kg⁻¹ bw day⁻¹) approaches, which results from the mathematical treatment of the null consumption and contamination data. The risk was characterised in two ways: (1) the classical approach, comparing the EDI to a reference dose (RfD(chronic-oral)); and (2) the more recent Margin of Exposure (MoE) approach. Both reached similar conclusions: the risk level is not of major concern, but neither is it negligible. In the first approach, only 2.7% or 6.6% (in the deterministic and probabilistic approaches, respectively) of the studied population presented an EDI above the RfD(chronic-oral). In the second approach, the percentages of children displaying a MoE above 10,000 and below 100 are 3-0% and 20-0.01% in the deterministic and probabilistic modes, respectively. In addition, children were compared to adults, and significant differences between the contamination patterns were highlighted. While the major contamination was linked to coffee consumption in adults (55%), no single item predominantly contributed to the contamination in children; the most important were soups (19%), dairy products (17%), pasta and rice (11%), and fruit and potatoes (9% each).
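
    The difference between the two approaches comes down to how uncertainty is propagated, as in the sketch below; the foods, consumption figures, contamination levels and lognormal spread are all invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      bw = 30.0                                        # body weight, kg (toy)
      consumption = {"soup": 150.0, "dairy": 200.0}    # g/day (toy)
      mean_conc = {"soup": 40.0, "dairy": 25.0}        # ng furan/g (toy)

      # Deterministic: plug point estimates (means) into the intake formula.
      edi_det = sum(consumption[f] * mean_conc[f] for f in consumption) / bw

      # Probabilistic: propagate assumed distributions by Monte Carlo.
      n = 100_000
      edi = np.zeros(n)
      for food, mu in mean_conc.items():
          conc = rng.lognormal(np.log(mu), 0.5, size=n)  # assumed spread
          edi += consumption[food] * conc / bw

      print(edi_det, edi.mean(), np.percentile(edi, 95))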

  7. Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, Benjamin, E-mail: collinsbs@ornl.gov; Stimpson, Shane, E-mail: stimpsonsg@ornl.gov; Kelley, Blake W., E-mail: kelleybl@umich.edu

    2016-12-01

    A consistent “2D/1D” neutron transport method is derived from the 3D Boltzmann transport equation, to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower-order transport solution to discretize the axial variable. This paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator VERA-CS. Several applications have been performed on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.
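
    A highly simplified illustration of the 2D/1D splitting follows, with one-group stand-ins in place of MPACT's MOC and lower-order axial solvers; only the alternation of per-plane radial solves and per-column axial solves, coupled through transverse-leakage terms, reflects the actual method.

      import numpy as np

      nx, nz = 8, 4                       # radial cells per plane, planes
      flux = np.ones((nz, nx))            # scalar flux guess

      def solve_radial_2d(plane, axial_leak):
          # stand-in for a 2D MOC sweep on one axial plane
          return 0.9 * plane + 0.1 * (plane.mean() - axial_leak)

      def solve_axial_1d(column, radial_leak):
          # stand-in for the lower-order 1D axial solve in one pin column
          return 0.9 * column + 0.1 * (column.mean() - radial_leak)

      for outer in range(20):             # 2D/1D outer iterations
          axial_leak = np.gradient(flux, axis=0).mean(axis=1)   # toy TL
          for k in range(nz):
              flux[k] = solve_radial_2d(flux[k], axial_leak[k])
          radial_leak = np.gradient(flux, axis=1).mean(axis=0)  # toy TL
          for i in range(nx):
              flux[:, i] = solve_axial_1d(flux[:, i], radial_leak[i])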

  8. Stability and accuracy of 3D neutron transport simulations using the 2D/1D method in MPACT

    DOE PAGES

    Collins, Benjamin; Stimpson, Shane; Kelley, Blake W.; ...

    2016-08-25

    We derived a consistent “2D/1D” neutron transport method from the 3D Boltzmann transport equation, to calculate fuel-pin-resolved neutron fluxes for realistic full-core Pressurized Water Reactor (PWR) problems. The 2D/1D method employs the Method of Characteristics to discretize the radial variables and a lower-order transport solution to discretize the axial variable. Our paper describes the theory of the 2D/1D method and its implementation in the MPACT code, which has become the whole-core deterministic neutron transport solver for the Consortium for Advanced Simulation of Light Water Reactors (CASL) core simulator VERA-CS. We also performed several applications on both leadership-class and industry-class computing clusters. Results are presented for whole-core solutions of the Watts Bar Nuclear Power Station Unit 1 and compared to both continuous-energy Monte Carlo results and plant data.

  9. Improving Deterministic Reserve Requirements for Security Constrained Unit Commitment and Scheduling Problems in Power Systems

    NASA Astrophysics Data System (ADS)

    Wang, Fengyu

    Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine reserve adequate to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserve. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict transfer capabilities and network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserve by explicitly modeling uncertainties, scalability and pricing issues remain. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. The three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch structure, especially with the consideration of renewables; 2) to develop a market settlement scheme for the proposed dynamic reserve policies such that market efficiency is improved; and 3) to evaluate the market impacts and price signal of the proposed dynamic reserve policies.
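
    As a minimal illustration of the zonal bookkeeping the thesis sets out to improve, the sketch below checks a static zonal reserve requirement; unit names, zone assignments and megawatt figures are invented.

      reserve = {"g1": 50.0, "g2": 30.0, "g3": 80.0}   # MW held per unit
      zone_of = {"g1": "A", "g2": "A", "g3": "B"}      # static reserve zones
      requirement = {"A": 70.0, "B": 60.0}             # MW required per zone

      def zonal_totals(reserve, zone_of):
          totals = {}
          for g, r in reserve.items():
              totals[zone_of[g]] = totals.get(zone_of[g], 0.0) + r
          return totals

      held = zonal_totals(reserve, zone_of)
      for zone, req in requirement.items():
          status = "meets" if held.get(zone, 0.0) >= req else "violates"
          print(zone, held.get(zone, 0.0), status, req)

      # Even when each zonal total is met, intra-zonal congestion can still
      # block delivery; this motivates the dynamic reserve zones proposed
      # in the thesis.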

  10. Acceleration of MCNP calculations for small pipe configurations by using Weight Windows Importance cards created by the SN-3D ATTILA

    NASA Astrophysics Data System (ADS)

    Castanier, Eric; Paterne, Loic; Louis, Céline

    2017-09-01

    In nuclear engineering, one must manage both time and precision. In shielding design especially, accuracy and efficiency are needed to reduce cost (shielding thickness optimization), and 3D codes are used for this. In this paper, we examine whether the CADIS method can easily be applied to the shielding design of small pipes that pass through large concrete walls. We assess the impact of weight windows (WW) generated by the 3D deterministic code ATTILA versus WW generated directly by MCNP (an iterative, manual process). The comparison is based on the quality of the convergence (estimated relative error (σ), variance of variance (VOV) and figure of merit (FOM)), on time (computing plus modelling) and on the effort required of the engineer.
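
    For orientation, the sketch below shows the core CADIS recipe: weight-window centers set inversely proportional to an adjoint (importance) function from a deterministic solve. The adjoint values, source normalization and window width are placeholders for an ATTILA-style SN solution.

      import numpy as np

      adjoint_flux = np.array([1e-1, 1e-3, 1e-5, 1e-7])  # per region (toy)
      source_rate = 1.0                                   # toy normalization

      # Estimated detector response from the adjoint at the source region.
      R = source_rate * adjoint_flux[0]

      # Weight-window centers w = R / adjoint_flux: particles heading into
      # important regions are split, unimportant ones are rouletted.
      ww_center = R / adjoint_flux
      ww_lower = ww_center / 2.0                          # assumed width
      ww_upper = ww_center * 2.0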

  11. Paleoclimatic significance of δD and δ13C values in pinon pine needles from packrat middens spanning the last 40,000 years

    USGS Publications Warehouse

    Pendall, Elise; Betancourt, Julio L.; Leavitt, Steven W.

    1999-01-01

    We compared two approaches to interpreting δD of cellulose nitrate in piñon pine needles (Pinus edulis) preserved in packrat middens from central New Mexico, USA. One approach was based on linear regression between modern δD values and climate parameters, and the other on a deterministic isotope model, modified from Craig and Gordon's terminal lake evaporation model that assumes steady-state conditions and constant isotope effects. One such effect, the net biochemical fractionation factor, was determined for a new species, piñon pine. Regressions showed that δD values in cellulose nitrate from annual cohorts of needles (1989–1996) were strongly correlated with growing season (May–August) precipitation amount, and δ13C values in the same samples were correlated with June relative humidity. The deterministic model reconstructed δD values of meteoric water used by plants after constraining relative humidity effects with δ13C values; growing season temperatures were estimated via modern correlations with δD values of meteoric water. Variations of this modeling approach have been applied to tree-ring cellulose before, but not to macrofossil cellulose, and comparisons to empirical relationships have not been provided. Results from fossil piñon needles spanning the last ∼40,000 years showed no significant trend in δD values of cellulose nitrate, suggesting either no change in the amount of summer precipitation (based on the transfer function) or δD values of meteoric water or temperature (based on the deterministic model). However, there were significant differences in δ13C values, and therefore relative humidity, between Pleistocene and Holocene.

  12. The Creation and Statistical Evaluation of a Deterministic Model of the Human Bronchial Tree from HRCT Images.

    PubMed

    Montesantos, Spyridon; Katz, Ira; Pichelin, Marine; Caillibotte, Georges

    2016-01-01

    A quantitative description of the morphology of lung structure is essential prior to any form of predictive modeling of ventilation or aerosol deposition implemented within the lung. The human lung is a very complex organ, with airway structures that span two orders of magnitude and having a multitude of interfaces between air, tissue and blood. As such, current medical imaging protocols cannot provide medical practitioners and researchers with in-vivo knowledge of deeper lung structures. In this work a detailed algorithm for the generation of an individualized 3D deterministic model of the conducting part of the human tracheo-bronchial tree is described. Distinct initial conditions were obtained from the high-resolution computed tomography (HRCT) images of seven healthy volunteers. The algorithm developed is fractal in nature and is implemented as a self-similar space sub-division procedure. The expansion process utilizes physiologically realistic relationships and thresholds to produce an anatomically consistent human airway tree. The model was validated through extensive statistical analysis of the results and comparison of the most common morphological features with previously published morphometric studies and other equivalent models. The resulting trees were shown to be in good agreement with published human lung geometric characteristics and can be used to study, among other things, structure-function relationships in simulation studies.
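
    A bare-bones sketch of the self-similar subdivision idea: each airway spawns two daughters of reduced diameter until a generation or diameter threshold is reached. The branching angle, length-to-diameter ratio and thresholds below are illustrative stand-ins for the HRCT-derived, physiologically constrained rules of the actual algorithm.

      import numpy as np

      def grow(origin, direction, diameter, gen, branches, max_gen=6):
          """Recursively collect (origin, end, diameter) airway segments."""
          if gen > max_gen or diameter < 0.5:        # mm, toy thresholds
              return
          end = origin + 3.0 * diameter * direction  # assumed L/D ratio
          branches.append((origin, end, diameter))
          for sign in (+1.0, -1.0):                  # two daughters, rotated
              ang = sign * np.deg2rad(35.0)          # about the y axis (toy:
              c, s = np.cos(ang), np.sin(ang)        # branching in x-z plane)
              d = np.array([c * direction[0] + s * direction[2],
                            direction[1],
                            -s * direction[0] + c * direction[2]])
              grow(end, d / np.linalg.norm(d), 0.78 * diameter,
                   gen + 1, branches, max_gen)

      branches = []
      grow(np.zeros(3), np.array([0.0, 0.0, -1.0]), 16.0, 0, branches)
      print(len(branches), "airway segments")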

  13. Self-assembled three dimensional network designs for soft electronics

    PubMed Central

    Jang, Kyung-In; Li, Kan; Chung, Ha Uk; Xu, Sheng; Jung, Han Na; Yang, Yiyuan; Kwak, Jean Won; Jung, Han Hee; Song, Juwon; Yang, Ce; Wang, Ao; Liu, Zhuangjian; Lee, Jong Yoon; Kim, Bong Hoon; Kim, Jae-Hwan; Lee, Jungyup; Yu, Yongjoon; Kim, Bum Jun; Jang, Hokyung; Yu, Ki Jun; Kim, Jeonghyun; Lee, Jung Woo; Jeong, Jae-Woong; Song, Young Min; Huang, Yonggang; Zhang, Yihui; Rogers, John A.

    2017-01-01

    Low modulus, compliant systems of sensors, circuits and radios designed to intimately interface with the soft tissues of the human body are of growing interest, due to their emerging applications in continuous, clinical-quality health monitors and advanced, bioelectronic therapeutics. Although recent research establishes various materials and mechanics concepts for such technologies, all existing approaches involve simple, two-dimensional (2D) layouts in the constituent micro-components and interconnects. Here we introduce concepts in three-dimensional (3D) architectures that bypass important engineering constraints and performance limitations set by traditional, 2D designs. Specifically, open-mesh, 3D interconnect networks of helical microcoils formed by deterministic compressive buckling establish the basis for systems that can offer exceptional low modulus, elastic mechanics, in compact geometries, with active components and sophisticated levels of functionality. Coupled mechanical and electrical design approaches enable layout optimization, assembly processes and encapsulation schemes to yield 3D configurations that satisfy requirements in demanding, complex systems, such as wireless, skin-compatible electronic sensors. PMID:28635956

  14. Self-assembled three dimensional network designs for soft electronics

    NASA Astrophysics Data System (ADS)

    Jang, Kyung-In; Li, Kan; Chung, Ha Uk; Xu, Sheng; Jung, Han Na; Yang, Yiyuan; Kwak, Jean Won; Jung, Han Hee; Song, Juwon; Yang, Ce; Wang, Ao; Liu, Zhuangjian; Lee, Jong Yoon; Kim, Bong Hoon; Kim, Jae-Hwan; Lee, Jungyup; Yu, Yongjoon; Kim, Bum Jun; Jang, Hokyung; Yu, Ki Jun; Kim, Jeonghyun; Lee, Jung Woo; Jeong, Jae-Woong; Song, Young Min; Huang, Yonggang; Zhang, Yihui; Rogers, John A.

    2017-06-01

    Low modulus, compliant systems of sensors, circuits and radios designed to intimately interface with the soft tissues of the human body are of growing interest, due to their emerging applications in continuous, clinical-quality health monitors and advanced, bioelectronic therapeutics. Although recent research establishes various materials and mechanics concepts for such technologies, all existing approaches involve simple, two-dimensional (2D) layouts in the constituent micro-components and interconnects. Here we introduce concepts in three-dimensional (3D) architectures that bypass important engineering constraints and performance limitations set by traditional, 2D designs. Specifically, open-mesh, 3D interconnect networks of helical microcoils formed by deterministic compressive buckling establish the basis for systems that can offer exceptional low modulus, elastic mechanics, in compact geometries, with active components and sophisticated levels of functionality. Coupled mechanical and electrical design approaches enable layout optimization, assembly processes and encapsulation schemes to yield 3D configurations that satisfy requirements in demanding, complex systems, such as wireless, skin-compatible electronic sensors.

  15. Sperm navigation along helical paths in 3D chemoattractant landscapes

    PubMed Central

    Jikeli, Jan F.; Alvarez, Luis; Friedrich, Benjamin M.; Wilson, Laurence G.; Pascal, René; Colin, Remy; Pichlo, Magdalena; Rennhack, Andreas; Brenker, Christoph; Kaupp, U. Benjamin

    2015-01-01

    Sperm require a sense of direction to locate the egg for fertilization. They follow gradients of chemical and physical cues provided by the egg or the oviduct. However, the principles underlying three-dimensional (3D) navigation in chemical landscapes are unknown. Here using holographic microscopy and optochemical techniques, we track sea urchin sperm navigating in 3D chemoattractant gradients. Sperm sense gradients on two timescales, which produces two different steering responses. A periodic component, resulting from the helical swimming, gradually aligns the helix towards the gradient. When incremental path corrections fail and sperm get off course, a sharp turning manoeuvre puts sperm back on track. Turning results from an ‘off' Ca2+ response signifying a chemoattractant stimulation decrease and, thereby, a drop in cyclic GMP concentration and membrane voltage. These findings highlight the computational sophistication by which sperm sample gradients for deterministic klinotaxis. We provide a conceptual and technical framework for studying microswimmers in 3D chemical landscapes. PMID:26278469

  16. Sperm navigation along helical paths in 3D chemoattractant landscapes.

    PubMed

    Jikeli, Jan F; Alvarez, Luis; Friedrich, Benjamin M; Wilson, Laurence G; Pascal, René; Colin, Remy; Pichlo, Magdalena; Rennhack, Andreas; Brenker, Christoph; Kaupp, U Benjamin

    2015-08-17

    Sperm require a sense of direction to locate the egg for fertilization. They follow gradients of chemical and physical cues provided by the egg or the oviduct. However, the principles underlying three-dimensional (3D) navigation in chemical landscapes are unknown. Here using holographic microscopy and optochemical techniques, we track sea urchin sperm navigating in 3D chemoattractant gradients. Sperm sense gradients on two timescales, which produces two different steering responses. A periodic component, resulting from the helical swimming, gradually aligns the helix towards the gradient. When incremental path corrections fail and sperm get off course, a sharp turning manoeuvre puts sperm back on track. Turning results from an 'off' Ca(2+) response signifying a chemoattractant stimulation decrease and, thereby, a drop in cyclic GMP concentration and membrane voltage. These findings highlight the computational sophistication by which sperm sample gradients for deterministic klinotaxis. We provide a conceptual and technical framework for studying microswimmers in 3D chemical landscapes.

  17. Deterministic direct reprogramming of somatic cells to pluripotency.

    PubMed

    Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H

    2013-10-03

    Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells from successfully and synchronously reprogramming remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, results in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for dissecting the molecular dynamics that establish pluripotency, with unprecedented flexibility and resolution.

  18. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE PAGES

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
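
    For contrast with the density-based deterministic approximation studied here, the following is a minimal stochastic EnKF analysis step with perturbed observations, in a toy linear-Gaussian setting with invented dimensions.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m, p = 4, 50, 2                   # state dim, ensemble size, obs dim

      X = rng.normal(size=(n, m))          # forecast ensemble
      H = rng.normal(size=(p, n))          # linear observation operator (toy)
      R = 0.1 * np.eye(p)                  # observation error covariance
      y = rng.normal(size=p)               # observation

      A = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
      Pf = A @ A.T / (m - 1)                          # sample covariance

      K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
      Y = y[:, None] + rng.multivariate_normal(np.zeros(p), R, size=m).T
      Xa = X + K @ (Y - H @ X)                        # analysis ensemble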

  19. Deterministic Mean-Field Ensemble Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d

  20. Utilizing a Value of Information Framework to Improve Ore Collection and Classification Procedures

    DTIC Science & Technology

    2006-05-01

    account for uncertainty in revenues or costs. Studies that utilize this type of deterministic modeling are: Boshkov & Wright (1973); Laubscher (1981... Disney & Peters, 2003). Disney & Peters (2003) reference a number of applications in both the veterinary and agricultural sectors. Agricultural studies...covered by revenue made from selling the end product. Because the cost data are aggregated for the BI and D3 mills at Kiruna, we have to allocate the

  1. Parallel deterministic neutronics with AMR in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clouse, C.; Ferguson, J.; Hendrickson, C.

    1997-12-31

    AMTRAN, a three-dimensional Sn neutronics code with adaptive mesh refinement (AMR), has been parallelized over spatial domains and energy groups and runs on the Meiko CS-2 with MPI message passing. Block-refined AMR is used with linear finite element representations for the fluxes, which allows for a straightforward interpretation of fluxes at block interfaces with zoning differences. The load balancing algorithm assumes 8 spatial domains, which minimizes idle time among processors.

  2. Deterministic Coupling of Quantum Emitters in 2D Materials to Plasmonic Nanocavity Arrays.

    PubMed

    Tran, Toan Trong; Wang, Danqing; Xu, Zai-Quan; Yang, Ankun; Toth, Milos; Odom, Teri W; Aharonovich, Igor

    2017-04-12

    Quantum emitters in two-dimensional materials are promising candidates for studies of light-matter interaction and next generation, integrated on-chip quantum nanophotonics. However, the realization of integrated nanophotonic systems requires the coupling of emitters to optical cavities and resonators. In this work, we demonstrate hybrid systems in which quantum emitters in 2D hexagonal boron nitride (hBN) are deterministically coupled to high-quality plasmonic nanocavity arrays. The plasmonic nanoparticle arrays offer a high-quality, low-loss cavity in the same spectral range as the quantum emitters in hBN. The coupled emitters exhibit enhanced emission rates and reduced fluorescence lifetimes, consistent with Purcell enhancement in the weak coupling regime. Our results provide the foundation for a versatile approach for achieving scalable, integrated hybrid systems based on low-loss plasmonic nanoparticle arrays and 2D materials.

  3. Polynomial time blackbox identity testers for depth-3 circuits : the field doesn't matter.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshadhri, Comandur; Saxena, Nitin

    Let C be a depth-3 circuit with n variables, degree d and top fanin k (called ΣΠΣ(k, d, n) circuits) over base field F. It is a major open problem to design a deterministic polynomial time blackbox algorithm that tests if C is identically zero. Klivans & Spielman (STOC 2001) observed that the problem is open even when k is a constant. This case has been subjected to serious study over the past few years, starting from the work of Dvir & Shpilka (STOC 2005). We give the first polynomial time blackbox algorithm for this problem. Our algorithm runs in time poly(n)·d^k, regardless of the base field. The only field for which polynomial time algorithms were previously known is F = Q (Kayal & Saraf, FOCS 2009, and Saxena & Seshadhri, FOCS 2010). This is the first blackbox algorithm for depth-3 circuits that does not use the rank-based approaches of Karnin & Shpilka (CCC 2008). We also prove an important tool for the study of depth-3 identities: we design a blackbox polynomial time transformation that reduces the number of variables in a ΣΠΣ(k, d, n) circuit to k variables, but preserves the identity structure. Polynomial identity testing (PIT) is a major open problem in theoretical computer science. The input is an arithmetic circuit that computes a polynomial p(x_1, x_2, ..., x_n) over a base field F. We wish to check if p is the zero polynomial, or in other words, is identically zero. We may be provided with an explicit circuit, or may only have blackbox access; in the latter case, we can only evaluate the polynomial p at various domain points. The main goal is to devise a deterministic blackbox polynomial time algorithm for PIT.
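
    For contrast with the deterministic test, the standard randomized blackbox baseline (Schwartz-Zippel) takes a few lines: evaluate p at random points drawn from a set much larger than its degree, since a nonzero polynomial of degree d vanishes at such a point with probability at most d/|S|. A minimal sketch:

      import random

      def probably_zero(blackbox, n_vars, degree, trials=20):
          """True if every random evaluation of the blackbox is zero."""
          S = range(10 * degree + 1)       # evaluation set with |S| >> degree
          for _ in range(trials):
              point = [random.choice(S) for _ in range(n_vars)]
              if blackbox(*point) != 0:
                  return False             # witness: p is certainly nonzero
          return True                      # identically zero w.h.p.

      # (x + y)**2 - (x**2 + 2*x*y + y**2) is identically zero:
      print(probably_zero(lambda x, y: (x + y)**2 - (x**2 + 2*x*y + y**2),
                          n_vars=2, degree=2))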

  4. An approach to model reactor core nodalization for deterministic safety analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salim, Mohd Faiz, E-mail: mohdfaizs@tnb.com.my; Samsudin, Mohd Rafie, E-mail: rafies@tnb.com.my; Mamat Ibrahim, Mohd Rizal, E-mail: m-rizal@nuclearmalaysia.gov.my

    Adopting a good nodalization strategy is essential to produce an accurate, high-quality input model for Deterministic Safety Analysis (DSA) using a System Thermal-Hydraulic (SYS-TH) computer code. The purpose of such analysis is to demonstrate compliance with regulatory requirements and to verify the behavior of the reactor during normal and accident conditions as originally designed. Numerous past studies have been devoted to the development of nodalization strategies for research reactors, from small (e.g. 250 kW) up to bigger (e.g. 30 MW) units. As such, this paper discusses the state-of-the-art thermal-hydraulic channels to be employed in the nodalization of the RTP-TRIGA Research Reactor, specifically for the reactor core. At present, the required thermal-hydraulic parameters for the reactor core, such as core geometrical data (length, coolant flow area, hydraulic diameters, and axial power profile) and material properties (including the UZrH1.6 fuel, stainless steel cladding, and graphite reflector), have been collected, analyzed and consolidated in the Reference Database of RTP using a standardized methodology, mainly derived from the available technical documentation. Based on the available information in the database, the assumptions made in the nodalization approach and the calculations performed will be discussed and presented. The development and identification of the thermal-hydraulic channels for the reactor core will be implemented in the SYS-TH calculation using the RELAP5-3D® computer code. The activity presented in this paper is part of the development of an overall nodalization description for the RTP-TRIGA Research Reactor under the IAEA Norwegian Extra-Budgetary Programme (NOKEBP) mentoring project on Expertise Development through the Analysis of Reactor Thermal-Hydraulics for Malaysia, denoted EARTH-M.

  5. Sub-200 ps CRT in monolithic scintillator PET detectors using digital SiPM arrays and maximum likelihood interaction time estimation.

    PubMed

    van Dam, Herman T; Borghi, Giacomo; Seifert, Stefan; Schaart, Dennis R

    2013-05-21

    Digital silicon photomultiplier (dSiPM) arrays have favorable characteristics for application in monolithic scintillator detectors for time-of-flight positron emission tomography (PET). To fully exploit these benefits, a maximum likelihood interaction time estimation (MLITE) method was developed to derive the time of interaction from the multiple time stamps obtained per scintillation event. MLITE was compared to several deterministic methods. Timing measurements were performed with monolithic scintillator detectors based on novel dSiPM arrays and LSO:Ce,0.2%Ca crystals of 16 × 16 × 10 mm(3), 16 × 16 × 20 mm(3), 24 × 24 × 10 mm(3), and 24 × 24 × 20 mm(3). The best coincidence resolving times (CRTs) for pairs of identical detectors were obtained with MLITE and measured 157 ps, 185 ps, 161 ps, and 184 ps full-width-at-half-maximum (FWHM), respectively. For comparison, a small reference detector, consisting of a 3 × 3 × 5 mm(3) LSO:Ce,0.2%Ca crystal coupled to a single pixel of a dSiPM array, was measured to have a CRT as low as 120 ps FWHM. The results of this work indicate that the influence of the optical transport of the scintillation photons on the timing performance of monolithic scintillator detectors can at least partially be corrected for by utilizing the information contained in the spatio-temporal distribution of the collection of time stamps registered per scintillation event.
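
    The idea behind likelihood-based interaction-time estimation can be sketched with a toy delay model: given several photon time stamps per event, choose the interaction time t0 that maximizes the summed log-likelihood under an assumed single-photon delay density. The exponential delay model below is a placeholder, not the calibrated detector response used in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      tau = 0.04                           # assumed mean photon delay, ns

      def loglik(t0, stamps):
          d = stamps - t0
          if np.any(d < 0):                # photons cannot precede t0
              return -np.inf
          return np.sum(-d / tau - np.log(tau))

      true_t0 = 1.0
      stamps = true_t0 + rng.exponential(tau, size=16)  # stamps per event

      grid = np.linspace(stamps.min() - 0.2, stamps.min(), 400)
      est = grid[np.argmax([loglik(t, stamps) for t in grid])]
      # For this toy density the MLE coincides with the earliest stamp;
      # realistic scintillation/transport densities make the full
      # spatio-temporal likelihood genuinely informative.
      print(est)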

  6. Sub-200 ps CRT in monolithic scintillator PET detectors using digital SiPM arrays and maximum likelihood interaction time estimation

    NASA Astrophysics Data System (ADS)

    van Dam, Herman T.; Borghi, Giacomo; Seifert, Stefan; Schaart, Dennis R.

    2013-05-01

    Digital silicon photomultiplier (dSiPM) arrays have favorable characteristics for application in monolithic scintillator detectors for time-of-flight positron emission tomography (PET). To fully exploit these benefits, a maximum likelihood interaction time estimation (MLITE) method was developed to derive the time of interaction from the multiple time stamps obtained per scintillation event. MLITE was compared to several deterministic methods. Timing measurements were performed with monolithic scintillator detectors based on novel dSiPM arrays and LSO:Ce,0.2%Ca crystals of 16 × 16 × 10 mm3, 16 × 16 × 20 mm3, 24 × 24 × 10 mm3, and 24 × 24 × 20 mm3. The best coincidence resolving times (CRTs) for pairs of identical detectors were obtained with MLITE and measured 157 ps, 185 ps, 161 ps, and 184 ps full-width-at-half-maximum (FWHM), respectively. For comparison, a small reference detector, consisting of a 3 × 3 × 5 mm3 LSO:Ce,0.2%Ca crystal coupled to a single pixel of a dSiPM array, was measured to have a CRT as low as 120 ps FWHM. The results of this work indicate that the influence of the optical transport of the scintillation photons on the timing performance of monolithic scintillator detectors can at least partially be corrected for by utilizing the information contained in the spatio-temporal distribution of the collection of time stamps registered per scintillation event.

  7. Generalized Detectability for Discrete Event Systems

    PubMed Central

    Shu, Shaolong; Lin, Feng

    2011-01-01

    In our previous work, we investigated detectability of discrete event systems, which is defined as the ability to determine the current and subsequent states of a system based on observation. For different applications, we defined four types of detectabilities: (weak) detectability, strong detectability, (weak) periodic detectability, and strong periodic detectability. In this paper, we extend our results in three aspects. (1) We extend detectability from deterministic systems to nondeterministic systems. Such a generalization is necessary because there are many systems that need to be modeled as nondeterministic discrete event systems. (2) We develop polynomial algorithms to check strong detectability. The previous algorithms are based on observer whose construction is of exponential complexity, while the new algorithms are based on a new automaton called detector. (3) We extend detectability to D-detectability. While detectability requires determining the exact state of a system, D-detectability relaxes this requirement by asking only to distinguish certain pairs of states. With these extensions, the theory on detectability of discrete event systems becomes more applicable in solving many practical problems. PMID:21691432
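
    The state-estimate update underlying these definitions is easy to sketch: after each observed event, the set of possible current states is propagated through the (possibly nondeterministic) transition relation. The automaton below is invented; the detector automaton of the paper is a polynomial-size alternative to the exponential observer built from exactly this kind of update.

      transitions = {                  # state -> event -> set of next states
          "s0": {"a": {"s1", "s2"}},
          "s1": {"b": {"s1"}},
          "s2": {"b": {"s2"}, "a": {"s0"}},
      }

      def step(estimate, event):
          """One observer step: states reachable from the estimate via event."""
          nxt = set()
          for q in estimate:
              nxt |= transitions.get(q, {}).get(event, set())
          return nxt

      estimate = {"s0"}                # initial uncertainty
      for e in ["a", "b", "b"]:
          estimate = step(estimate, e)
      print(estimate)  # a singleton here would mean the state is determined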

  8. Statistical Energy Analysis for Designers. Part 1. Basic Theory

    DTIC Science & Technology

    1974-09-01

    deterministic system. That is a possible answer, but it may not be the most useful one. The most glaring deficiency of SEA is its inability to deal with...present whether this represents the "next logical step" in the chain that we spoke of, but it bears examination. A second deficiency of SEA is its...undamped string, ρ = lineal density, r = 0, and Λ = −T(∂/∂x)². Thus, Eq. (2.3.2) becomes Tk² = ρω², or k = ±ω/c, (2.3.3) where c = √(T/ρ) is the speed of

  9. Patients' understanding of and responses to multiplex genetic susceptibility test results.

    PubMed

    Kaphingst, Kimberly A; McBride, Colleen M; Wade, Christopher; Alford, Sharon Hensley; Reid, Robert; Larson, Eric; Baxevanis, Andreas D; Brody, Lawrence C

    2012-07-01

    Examination of patients' responses to direct-to-consumer genetic susceptibility tests is needed to inform clinical practice. This study examined patients' recall and interpretation of, and responses to, genetic susceptibility test results provided directly by mail. This observational study had three prospective assessments (before testing, 10 days after receiving results, and 3 months later). Participants were 199 patients aged 25-40 years who received free genetic susceptibility testing for eight common health conditions. More than 80% of the patients correctly recalled their results for the eight health conditions. Patients were unlikely to interpret genetic results as deterministic of health outcomes (mean = 6.0, s.d. = 0.8 on a scale of 1-7, 1 indicating strongly deterministic). In multivariate analysis, patients with the least deterministic interpretations were white (P = 0.0098), more educated (P = 0.0093), and least confused by results (P = 0.001). Only 1% talked about their results with a provider. Findings suggest that most patients will correctly recall their results and will not interpret genetics as the sole cause of diseases. The subset of those confused by results could benefit from consultation with a health-care provider, which could emphasize that health habits currently are the best predictors of risk. Providers could leverage patients' interest in genetic tests to encourage behavior changes to reduce disease risk.

  10. Experimental search for Exact Coherent Structures in turbulent small aspect ratio Taylor-Couette flow

    NASA Astrophysics Data System (ADS)

    Crowley, Christopher J.; Krygier, Michael; Grigoriev, Roman O.; Schatz, Michael F.

    2017-11-01

    Recent theoretical and experimental work suggests that the dynamics of turbulent flows are guided by unstable nonchaotic solutions to the Navier-Stokes equations. These solutions, known as exact coherent structures (ECS), play a key role in a fundamentally deterministic description of turbulence. In order to quantitatively demonstrate that actual turbulence in 3D flows is guided by ECS, high resolution, 3D-3C experimental measurements of the velocity need to be compared to solutions from direct numerical simulation of the Navier-Stokes equations. In this talk, we will present experimental measurements of fully time resolved, velocity measurements in a volume of turbulence in a counter-rotating, small aspect ratio Taylor-Couette flow. This work is supported by the Army Research Office (Contract # W911NF-16-1-0281).

  11. Three-dimensional silicon inverse photonic quasicrystals for infrared wavelengths.

    PubMed

    Ledermann, Alexandra; Cademartiri, Ludovico; Hermatschweiler, Martin; Toninelli, Costanza; Ozin, Geoffrey A; Wiersma, Diederik S; Wegener, Martin; von Freymann, Georg

    2006-12-01

    Quasicrystals are a class of lattices characterized by a lack of translational symmetry. Nevertheless, the points of the lattice are deterministically arranged, obeying rotational symmetry. Thus, we expect properties that are different from both crystals and glasses. Indeed, naturally occurring electronic quasicrystals (for example, AlPdMn metal alloys) show peculiar electronic, vibrational and physico-chemical properties. Regarding artificial quasicrystals for electromagnetic waves, three-dimensional (3D) structures have recently been realized at GHz frequencies and 2D structures have been reported for the near-infrared region. Here, we report on the first fabrication and characterization of 3D quasicrystals for infrared frequencies. Using direct laser writing combined with a silicon inversion procedure, we achieve high-quality silicon inverse icosahedral structures. Both polymeric and silicon quasicrystals are characterized by means of electron microscopy and visible-light Laue diffraction. The diffraction patterns of structures with a local five-fold real-space symmetry axis reveal a ten-fold symmetry as required by theory for 3D structures.

  12. Deterministic Integration of Biological and Soft Materials onto 3D Microscale Cellular Frameworks

    PubMed Central

    McCracken, Joselle M.; Xu, Sheng; Badea, Adina; Jang, Kyung-In; Yan, Zheng; Wetzel, David J.; Nan, Kewang; Lin, Qing; Han, Mengdi; Anderson, Mikayla A.; Lee, Jung Woo; Wei, Zijun; Pharr, Matt; Wang, Renhan; Su, Jessica; Rubakhin, Stanislav S.; Sweedler, Jonathan V.

    2018-01-01

    Complex 3D organizations of materials represent ubiquitous structural motifs found in the most sophisticated forms of matter, the most notable of which are in life-sustaining hierarchical structures found in biology, but where simpler examples also exist as dense multilayered constructs in high-performance electronics. Each class of system evinces specific enabling forms of assembly to establish their functional organization at length scales not dissimilar to tissue-level constructs. This study describes materials and means of assembly that extend and join these disparate systems—schemes for the functional integration of soft and biological materials with synthetic 3D microscale, open frameworks that can leverage the most advanced forms of multilayer electronic technologies, including device-grade semiconductors such as monocrystalline silicon. Cellular migration behaviors, temporal dependencies of their growth, and contact guidance cues provided by the nonplanarity of these frameworks illustrate design criteria useful for their functional integration with living matter (e.g., NIH 3T3 fibroblast and primary rat dorsal root ganglion cell cultures). PMID:29552634

  13. Boise Hydrogeophysical Research Site: Control Volume/Test Cell and Community Research Asset

    NASA Astrophysics Data System (ADS)

    Barrash, W.; Bradford, J.; Malama, B.

    2008-12-01

    The Boise Hydrogeophysical Research Site (BHRS) is a research wellfield or field-scale test facility developed in a shallow, coarse, fluvial aquifer with the objectives of supporting: (a) development of cost-effective, non- or minimally-invasive quantitative characterization and imaging methods in heterogeneous aquifers using hydrologic and geophysical techniques; (b) examination of fundamental relationships and processes at multiple scales; (c) testing of theories and models for groundwater flow and solute transport; and (d) education and training of students in multidisciplinary subsurface science and engineering. The design of the wells and the wellfield supports modular use and reoccupation of wells for a wide range of single-well, cross-hole, multiwell, and multilevel hydrologic, geophysical, and combined hydrologic-geophysical experiments. Efforts to date by Boise State researchers and collaborators have largely focused on: (a) establishing the 3D distributions of geologic, hydrologic, and geophysical parameters, which can then be used as the basis for jointly inverting hard and soft data to return the 3D K distribution, and (b) developing subsurface measurement and imaging methods, including tomographic characterization and imaging methods. At this point the hydrostratigraphic framework of the BHRS is known to be a hierarchical multi-scale system which includes layers and lenses that are recognized with geologic, hydrologic, radar, seismic, and EM methods; details are now emerging which may allow 3D deterministic characterization of zones and/or material variations at the meter scale in the central wellfield. Also, the site design and subsurface framework have supported a variety of testing configurations for joint hydrologic and geophysical experiments. Going forward, we recognize the opportunity to increase the R&D returns from use of the BHRS with additional infrastructure (especially for monitoring the vadose zone and surface water-groundwater interactions), more collaborative activity, and greater access to site data. Our broader goal of becoming more available as a research asset for the scientific community also supports the long-term business plan of increasing funding opportunities to maintain and operate the site.

  14. Discrete-Time Deterministic Q-Learning: A Novel Convergence Analysis.

    PubMed

    Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo

    2017-05-01

    In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed Q-learning algorithm, the iterative Q function is updated for all of the state and control spaces, instead of for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, and the convergence criterion on the learning rates required by traditional Q-learning algorithms is simplified. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties for the undiscounted case of the deterministic Q-learning algorithm are developed first. Then, considering the discount factor, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and to compute the iterative control law, respectively, to facilitate the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
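
    To make the "update everywhere each iteration" idea concrete, here is a minimal sketch on an invented five-state, two-control deterministic system; it illustrates a full-sweep value update in the spirit of the paper's algorithm, not the paper's own code or notation:

    ```python
    import numpy as np

    # Hypothetical 1D grid-world: states 0..4, controls {-1, +1}; goal state 4.
    states = np.arange(5)
    controls = np.array([-1, 1])

    def f(x, u):                 # deterministic transition x' = f(x, u)
        return int(np.clip(x + u, 0, 4))

    def U(x, u):                 # stage cost (zero once the goal is reached)
        return 0.0 if x == 4 else 1.0

    gamma = 0.9                  # discount factor
    Q = np.zeros((len(states), len(controls)))

    for it in range(200):        # each iteration sweeps ALL state-control pairs
        Q_new = np.empty_like(Q)
        for i, x in enumerate(states):
            for j, u in enumerate(controls):
                Q_new[i, j] = U(x, u) + gamma * Q[f(x, u)].min()
        if np.max(np.abs(Q_new - Q)) < 1e-10:
            break                # the iterative Q function has converged
        Q = Q_new

    policy = controls[Q.argmin(axis=1)]   # greedy (optimal) control law
    ```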

  15. Controlled mechanical buckling for origami-inspired construction of 3D microstructures in advanced materials.

    PubMed

    Yan, Zheng; Zhang, Fan; Wang, Jiechen; Liu, Fei; Guo, Xuelin; Nan, Kewang; Lin, Qing; Gao, Mingye; Xiao, Dongqing; Shi, Yan; Qiu, Yitao; Luan, Haiwen; Kim, Jung Hwan; Wang, Yiqi; Luo, Hongying; Han, Mengdi; Huang, Yonggang; Zhang, Yihui; Rogers, John A

    2016-04-25

    Origami is a topic of rapidly growing interest in both the scientific and engineering research communities due to its promising potential in a broad range of applications. Previous assembly approaches of origami structures at the micro/nanoscale are constrained by the applicable classes of materials, topologies and/or capability of control over the transformation. Here, we introduce an approach that exploits controlled mechanical buckling for autonomic origami assembly of 3D structures across material classes from soft polymers to brittle inorganic semiconductors, and length scales from nanometers to centimeters. This approach relies on a spatial variation of thickness in the initial 2D structures as an effective strategy to produce engineered folding creases during the compressive buckling process. The elastic nature of the assembly scheme enables active, deterministic control over intermediate states in the 2D to 3D transformation in a continuous and reversible manner. Demonstrations include a broad set of 3D structures formed through unidirectional, bidirectional, and even hierarchical folding, with examples ranging from half cylindrical columns and fish scales, to cubic boxes, pyramids, starfish, paper fans, skew tooth structures, and to amusing system-level examples of soccer balls, model houses, cars, and multi-floor textured buildings.

  16. Deterministic and reliability based optimization of integrated thermal protection system composite panel using adaptive sampling techniques

    NASA Astrophysics Data System (ADS)

    Ravishankar, Bharani

    Conventional space vehicles have thermal protection systems (TPS) that provide protection to an underlying structure that carries the flight loads. In an attempt to save weight, there is interest in an integrated TPS (ITPS) that combines the structural function and the TPS function. This has weight-saving potential, but it complicates the design of the ITPS, which now has both thermal and structural failure modes. The main objective of this dissertation was to optimally design the ITPS, subjected to thermal and mechanical loads, through deterministic and reliability-based optimization. The optimization of the ITPS structure requires computationally expensive finite element analyses of the 3D ITPS (solid) model. To reduce the computational expense involved in the structural analysis, a finite element based homogenization method was employed, homogenizing the 3D ITPS model to a 2D orthotropic plate. However, it was found that homogenization was applicable only to panels that are much larger than the characteristic dimensions of the repeating unit cell in the ITPS panel. Hence a single unit cell was used for the optimization process to reduce the computational cost. Deterministic and probabilistic optimization of the ITPS panel required evaluation of failure constraints at various design points. This further demands computationally expensive finite element analyses, which were replaced by efficient, low-fidelity surrogate models. In an optimization process, it is important to represent the constraints accurately to find the optimum design. Instead of building global surrogate models using a large number of designs, the computational resources were directed toward target regions near constraint boundaries for accurate representation of the constraints using adaptive sampling strategies. Efficient Global Reliability Analysis (EGRA) facilitates sequential sampling of design points around the region of interest in the design space. EGRA was applied to the response surface construction of the failure constraints in the deterministic and reliability-based optimization of the ITPS panel. It was shown that, using adaptive sampling, the number of designs required to find the optimum was reduced drastically while the accuracy was improved. The system reliability of the ITPS was estimated using a Monte Carlo simulation (MCS) based method. A separable Monte Carlo method was employed that allows separate sampling of the random variables to predict the probability of failure accurately. The reliability analysis considered uncertainties in the geometry, material properties, and loading conditions of the panel, as well as error in the finite element modeling. These uncertainties further increased the computational cost of the MCS techniques, which was again reduced by employing surrogate models. To estimate the error in the probability of failure estimate, the bootstrapping method was applied. This research work thus demonstrates optimization of the ITPS composite panel with multiple failure modes and a large number of uncertainties using adaptive sampling techniques.
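
    The separable Monte Carlo idea mentioned above, sampling response and capacity independently and comparing every pair rather than pairing samples one-to-one, can be sketched as follows; the distributions and numbers are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Invented stress (response) and strength (capacity) distributions, MPa
    stress = rng.normal(300.0, 30.0, size=2_000)     # 2,000 response samples
    strength = rng.normal(420.0, 40.0, size=5_000)   # 5,000 capacity samples

    # Classical MC pairs samples one-to-one; separable MC compares every
    # response sample against every capacity sample, reusing both sets.
    pf = np.mean(stress[:, None] > strength[None, :])
    print(f"estimated probability of failure: {pf:.2e}")
    ```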

  17. Assimilation of lightning data by nudging tropospheric water vapor and applications to numerical forecasts of convective events

    NASA Astrophysics Data System (ADS)

    Dixon, Kenneth

    A lightning data assimilation technique is developed for use with observations from the World Wide Lightning Location Network (WWLLN). The technique nudges the water vapor mixing ratio toward saturation within 10 km of a lightning observation. This technique is applied to deterministic forecasts of convective events on 29 June 2012, 17 November 2013, and 19 April 2011 as well as an ensemble forecast of the 29 June 2012 event using the Weather Research and Forecasting (WRF) model. Lightning data are assimilated over the first 3 hours of the forecasts, and the subsequent impact on forecast quality is evaluated. The nudged deterministic simulations for all events produce composite reflectivity fields that are closer to observations. For the ensemble forecasts of the 29 June 2012 event, the improvement in forecast quality from lightning assimilation is more subtle than for the deterministic forecasts, suggesting that the lightning assimilation may improve ensemble convective forecasts where conventional observations (e.g., aircraft, surface, radiosonde, satellite) are less dense or unavailable.
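
    A schematic of the nudging step as described, assuming gridded mixing-ratio fields and precomputed saturation values; the function name, grid layout, and distance test are illustrative rather than the study's actual WRF implementation:

    ```python
    import numpy as np

    def nudge_qv(qv, qv_sat, grid_xy, flash_xy, radius_km=10.0, weight=1.0):
        """Nudge water vapor mixing ratio toward saturation near lightning.

        qv, qv_sat : 2D arrays of mixing ratio and its saturation value (kg/kg)
        grid_xy    : (ny, nx, 2) array of grid-point coordinates in km
        flash_xy   : (2,) coordinates of one WWLLN flash in km
        """
        dist = np.hypot(grid_xy[..., 0] - flash_xy[0],
                        grid_xy[..., 1] - flash_xy[1])
        mask = (dist <= radius_km) & (qv < qv_sat)
        # move qv part of the way to saturation (weight=1 snaps to saturation)
        qv[mask] += weight * (qv_sat[mask] - qv[mask])
        return qv
    ```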

  18. Unsteady Flows in a Single-Stage Transonic Axial-Flow Fan Stator Row. Ph.D. Thesis - Iowa State Univ.

    NASA Technical Reports Server (NTRS)

    Hathaway, Michael D.

    1986-01-01

    Measurements of the unsteady velocity field within the stator row of a transonic axial-flow fan were acquired using a laser anemometer. Measurements were obtained on axisymmetric surfaces located at 10 and 50 percent span from the shroud, with the fan operating at maximum efficiency at design speed. The ensemble-average and variance of the measured velocities are used to identify rotor-wake-generated (deterministic) unsteadiness and turbulence, respectively. Correlations of both deterministic and turbulent velocity fluctuations provide information on the characteristics of unsteady interactions within the stator row. These correlations are derived from the Navier-Stokes equation in a manner similar to deriving the Reynolds stress terms, whereby various averaging operators are used to average the aperiodic, deterministic, and turbulent velocity fluctuations which are known to be present in multistage turbomachines. The correlations of deterministic and turbulent velocity fluctuations throughout the axial fan stator row are presented. In particular, amplification and attenuation of both types of unsteadiness are shown to occur within the stator blade passage.
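
    The averaging operators described above are commonly summarized as a triple decomposition of the instantaneous velocity; schematically (notation illustrative, not necessarily the thesis's symbols):

    ```latex
    % Instantaneous velocity split into time-average, deterministic
    % (rotor-locked periodic), and random turbulent parts:
    \[
      u_i(\mathbf{x},t) \;=\; \overline{u}_i(\mathbf{x})
      \;+\; \tilde{u}_i(\mathbf{x},t)
      \;+\; u_i'(\mathbf{x},t).
    \]
    % Averaging the Navier-Stokes equations then yields, alongside the
    % Reynolds stresses $\overline{u_i' u_j'}$, the deterministic stresses
    % $\overline{\tilde{u}_i \tilde{u}_j}$ whose amplification and
    % attenuation through the stator row are quantified in this work.
    ```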

  19. Design Analysis of SNS Target Station Biological Shielding Monolith with Proton Power Uprate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bekar, Kursat B.; Ibrahim, Ahmad M.

    2017-05-01

    This report documents the analysis of the dose rate in the experiment area outside the Spallation Neutron Source (SNS) target station shielding monolith with a proton beam energy of 1.3 GeV. The analysis implemented a coupled three-dimensional (3D)/two-dimensional (2D) approach that used both the Monte Carlo N-Particle Extended (MCNPX) 3D Monte Carlo code and the Discrete Ordinates Transport (DORT) 2D deterministic code. The analysis with a proton beam energy of 1.3 GeV showed that the dose rate in continuously occupied areas on the lateral surface outside the SNS target station shielding monolith is less than 0.25 mrem/h, which complies with the SNS facility design objective. However, the methods and codes used in this analysis are out of date and unsupported, and the 2D approximation of the target shielding monolith does not accurately represent the geometry. We recommend that this analysis be updated with modern codes and libraries such as ADVANTG or SHIFT. These codes have demonstrated very high efficiency in performing full 3D radiation shielding analyses of similar and even more difficult problems.

  20. Experimental realization of real-time feedback-control of single-atom arrays

    NASA Astrophysics Data System (ADS)

    Kim, Hyosub; Lee, Woojun; Ahn, Jaewook

    2016-05-01

    Deterministic loading of neutral atoms at particular locations has remained a challenging problem. Here we show, in a proof-of-principle experimental demonstration, that such deterministic loading can be achieved by rearrangement of atoms. In the experiment, cold rubidium atoms were trapped by optical tweezers, which are hologram images made by a liquid-crystal spatial light modulator (LC-SLM). After the initial occupancy was identified, the hologram was actively controlled to rearrange the captured atoms onto unfilled sites. For this, we developed a new flicker-free hologram algorithm that enables holographic atom translation. Our demonstration shows that up to N=9 atoms were simultaneously moved in the 2D plane, with 2N=18 movable degrees of freedom and a fidelity of 99% for single-atom 5-μm translations. It is hoped that our in situ atom rearrangement will prove useful in scaling quantum computers. Samsung Science and Technology Foundation [SSTF-BA1301-12].

  1. Parallel Stochastic discrete event simulation of calcium dynamics in neuron.

    PubMed

    Ishlam Patoary, Mohammad Nazrul; Tropper, Carl; McDougal, Robert A; Zhongwei, Lin; Lytton, William W

    2017-09-26

    The intracellular calcium signaling pathways of a neuron depend on both biochemical reactions and diffusion. Some quasi-isolated compartments (e.g., spines) are so small, and calcium concentrations so low, that one extra molecule diffusing in by chance can make a nontrivial difference in concentration (percentage-wise). These rare events can affect the dynamics discretely, in such a way that they cannot be evaluated by a deterministic simulation. Stochastic models of such a system provide a more detailed understanding than existing deterministic models because they capture behavior at the molecular level. Our research focuses on the development of a high-performance parallel discrete event simulation environment, Neuron Time Warp (NTW), which is intended for the parallel simulation of stochastic reaction-diffusion systems such as intracellular calcium signaling. NTW is integrated with NEURON, a simulator which is widely used within the neuroscience community. We simulate two models, a calcium buffer model and a calcium wave model. The calcium buffer model is employed in order to verify the correctness and performance of NTW by comparing it to a serial deterministic simulation in NEURON. We also derived a discrete event calcium wave model from a deterministic model using the stochastic IP3R structure.
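
    The discrete-event, molecule-level treatment that motivates NTW can be illustrated (in serial form, without the parallel Time Warp machinery) by a Gillespie-style simulation of a single well-mixed calcium-buffer reaction Ca + B <-> CaB; the rate constants and molecule counts are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Molecule counts: free calcium, free buffer, bound complex (invented)
    ca, b, cab = 20, 100, 0
    kon, koff = 0.002, 0.1      # stochastic rate constants (arbitrary units)

    t, t_end = 0.0, 50.0
    while t < t_end:
        a1 = kon * ca * b       # propensity of Ca + B -> CaB
        a2 = koff * cab         # propensity of CaB -> Ca + B
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)     # waiting time to the next event
        if rng.random() < a1 / a0:         # pick which reaction fired
            ca, b, cab = ca - 1, b - 1, cab + 1
        else:
            ca, b, cab = ca + 1, b + 1, cab - 1

    print(f"t = {t:.1f}: Ca = {ca}, B = {b}, CaB = {cab}")
    ```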

  2. Complexity in Soil Systems: What Does It Mean and How Should We Proceed?

    NASA Astrophysics Data System (ADS)

    Faybishenko, B.; Molz, F. J.; Brodie, E.; Hubbard, S. S.

    2015-12-01

    The complex soil systems approach is needed fundamentally for the development of integrated, interdisciplinary methods to measure and quantify the physical, chemical and biological processes taking place in soil, and to determine the role of fine-scale heterogeneities. This presentation is aimed at a review of the concepts and observations concerning complexity and complex systems theory, including terminology, emergent complexity and simplicity, self-organization and a general approach to the study of complex systems using the Weaver (1948) concept of "organized complexity." These concepts are used to provide understanding of complex soil systems, and to develop experimental and mathematical approaches to soil microbiological processes. The results of numerical simulations, observations and experiments are presented that indicate the presence of deterministic chaotic dynamics in soil microbial systems. So what are the implications for the scientists who wish to develop mathematical models in the area of organized complexity or to perform experiments to help clarify an aspect of an organized complex system? The modelers have to deal with coupled systems having at least three dependent variables, and they have to forgo making linear approximations to nonlinear phenomena. The analogous rule for experimentalists is that they need to perform experiments that involve measurement of at least three interacting entities (variables depending on time, space, and each other). These entities could be microbes in soil penetrated by roots. If a process being studied in a soil affects the soil properties, like biofilm formation, then this effect has to be measured and included. The mathematical implications of this viewpoint are examined, and results of numerical solutions to a system of equations demonstrating deterministic chaotic behavior are also discussed using time series and the 3D strange attractors.
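
    The "at least three dependent variables" requirement can be illustrated with the classic Lorenz system, the textbook minimal set of coupled nonlinear ODEs exhibiting deterministic chaos and a 3D strange attractor; this is a generic illustration, not the soil model discussed in the presentation:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    t_eval = np.linspace(0.0, 50.0, 20_000)
    a = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0],
                  t_eval=t_eval, rtol=1e-9)
    b = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0 + 1e-8],
                  t_eval=t_eval, rtol=1e-9)

    # Sensitive dependence: the two nearby trajectories separate
    # exponentially while both stay on the same bounded strange attractor.
    sep = np.linalg.norm(a.y - b.y, axis=0)
    ```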

  3. The potential cost-effectiveness of infant pneumococcal vaccines in Australia.

    PubMed

    Newall, Anthony T; Creighton, Prudence; Philp, David J; Wood, James G; MacIntyre, C Raina

    2011-10-19

    Over the last decade infant pneumococcal vaccination has been adopted as part of routine immunisation schedules in many developed countries. Although highly successful in many settings such as Australia and the United States, rapid serotype replacement has occurred in some European countries. Recently two pneumococcal conjugate vaccines (PCVs) with extended serotype coverage have been licensed for use, a 10-valent (PHiD-CV) and a 13-valent (PCV-13) vaccine, and offer potential replacements for the existing vaccine (PCV-7) in Australia. To evaluate the cost-effectiveness of PCV programs we developed a static, deterministic state-transition model. The perspective for costs included those to the government and healthcare system. When compared to current practice (PCV-7), both vaccines offered potential benefits, with those estimated for PHiD-CV due primarily to prevention of otitis media and those for PCV-13 due to a further reduction in invasive disease in Australia. At equivalent total cost to vaccinate an infant, compared to no PCV, the base-case costs per QALY saved were estimated at A$64,900 (current practice, PCV-7; 3+0), A$50,200 (PHiD-CV; 3+1), and A$55,300 (PCV-13; 3+0). However, assumptions regarding herd protection, serotype protection, otitis media efficacy, and vaccination cost changed the relative cost-effectiveness of the alternative PCV programs. The high proportion of current invasive disease caused by serotype 19A (as included in PCV-13) may be a decisive factor in determining vaccine policy in Australia. Copyright © 2011 Elsevier Ltd. All rights reserved.
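
    A toy version of a static, deterministic state-transition (Markov cohort) model of the general kind described; the states, transition probabilities, costs, and utilities below are invented placeholders, not the study's inputs:

    ```python
    import numpy as np

    # States: healthy, diseased, dead (invented three-state example)
    P = np.array([[0.98, 0.015, 0.005],   # annual transition probabilities
                  [0.00, 0.90,  0.10],
                  [0.00, 0.00,  1.00]])
    cost = np.array([0.0, 5000.0, 0.0])   # annual cost per person in state (A$)
    qaly = np.array([1.0, 0.6, 0.0])      # annual utility weight per state

    cohort = np.array([1.0, 0.0, 0.0])    # everyone starts healthy
    disc = 1.0 / 1.05                     # 5% annual discounting
    total_cost = total_qaly = 0.0
    for year in range(70):
        total_cost += (cohort @ cost) * disc**year
        total_qaly += (cohort @ qaly) * disc**year
        cohort = cohort @ P               # advance the cohort one cycle

    # Running this for vaccinated vs unvaccinated cohorts and dividing the
    # incremental cost by the incremental QALYs gives A$/QALY figures of
    # the kind quoted above.
    print(total_cost, total_qaly)
    ```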

  4. Retrospective dosimetry analyses of reactor vessel cladding samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenwood, L. R.; Soderquist, C. Z.; Fero, A. H.

    2011-07-01

    Reactor pressure vessel cladding samples for Ringhals Units 3 and 4 in Sweden were analyzed using retrospective reactor dosimetry techniques. The objective was to provide the best estimates of the neutron fluence for comparison with neutron transport calculations. A total of 51 stainless steel samples consisting of chips weighing approximately 100 to 200 mg were removed from selected locations around the pressure vessel and were sent to Pacific Northwest National Laboratory for analysis. The samples were fully characterized and analyzed for radioactive isotopes, with special interest in the presence of Nb-93m. The RPV cladding retrospective dosimetry results will be combined with a re-evaluation of the surveillance capsule dosimetry and with ex-vessel neutron dosimetry results to form a comprehensive 3D comparison of measurements to calculations performed with a 3D deterministic transport code. (authors)

  5. Estimation of Radiofrequency Power Leakage from Microwave Ovens for Dosimetric Assessment at Nonionizing Radiation Exposure Levels

    PubMed Central

    Lopez-Iturri, Peio; de Miguel-Bilbao, Silvia; Aguirre, Erik; Azpilicueta, Leire; Falcone, Francisco; Ramos, Victoria

    2015-01-01

    The electromagnetic field leakage levels of nonionizing radiation from a microwave oven have been estimated within a complex indoor scenario. By employing a hybrid simulation technique, based on coupling full wave simulation with an in-house developed deterministic 3D ray launching code, estimations of the observed electric field values can be obtained for the complete indoor scenario. The microwave oven can be modeled as a time- and frequency-dependent radiating source, in which leakage, basically from the microwave oven door, is propagated along the complete indoor scenario interacting with all of the elements present in it. This method can be of aid in order to assess the impact of such devices on expected exposure levels, allowing adequate minimization strategies such as optimal location to be applied. PMID:25705676

  6. Hybrid deterministic-stochastic modeling of x-ray beam bowtie filter scatter on a CT system.

    PubMed

    Liu, Xin; Hsieh, Jiang

    2015-01-01

    Knowledge of the scatter generated by the bowtie filter (i.e., the x-ray beam compensator) is crucial for providing artifact-free images on CT scanners. Our approach is to use a hybrid deterministic-stochastic simulation to estimate the scatter level generated by a bowtie filter made of a material with a low atomic number. First, the major components of the CT system, such as the source, flat filter, bowtie filter, and body phantom, are built into a 3D model. The scattered photon fluence and the primary transmitted photon fluence are simulated by MCNP, a Monte Carlo simulation toolkit. The rejection of scattered photons by the post-patient collimator (anti-scatter grid) is simulated with an analytical formula. The biased sinogram is created by superimposing the scatter signal generated by the simulation onto the primary x-ray beam signal. Finally, images with artifacts are reconstructed from the biased signal. The effect of anti-scatter grid height on scatter rejection is also discussed and demonstrated.

  7. Monitoring and exposure assessment of pesticide residues in cowpea (Vigna unguiculata L. Walp) from five provinces of southern China.

    PubMed

    Huan, Zhibo; Xu, Zhi; Luo, Jinhui; Xie, Defang

    2016-11-01

    Residues of 14 pesticides were determined in 150 cowpea samples collected in five southern Chinese provinces in 2013 and 2014. One or more residues were detected in 70% of the samples; 61.3% of the samples were in violation, mainly because unauthorized pesticides were detected, and 14.0% of the samples contained more than three pesticides. Deterministic and probabilistic methods were used to assess the chronic and acute risk of pesticides in cowpea to eight subgroups of people. Deterministic assessment showed that the estimated short-term intakes (ESTIs) of carbofuran were 1199.4%-2621.9% of the acute reference dose (ARfD), while the rates were 985.9%-4114.7% using probabilistic assessment. Probabilistic assessment showed that 4.2%-7.8% of subjects (especially children) may face unacceptable acute risk from carbofuran-contaminated cowpeas from the five provinces. However, undue concern is unwarranted, because all of the estimates are based on conservative assumptions. Copyright © 2016 Elsevier Inc. All rights reserved.
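
    The deterministic acute-risk percentages quoted above come from comparing an estimated short-term intake (ESTI) against the ARfD; schematically, with all numbers invented for illustration:

    ```python
    # Deterministic acute dietary risk screen (illustrative numbers only)
    residue = 0.5          # highest residue found, mg/kg
    large_portion = 0.3    # large-portion consumption, kg/day
    body_weight = 15.0     # kg (e.g., a child subgroup)
    arfd = 0.003           # acute reference dose, mg/kg bw/day (invented)

    esti = residue * large_portion / body_weight      # mg/kg bw/day
    percent_arfd = 100.0 * esti / arfd
    print(f"ESTI = {percent_arfd:.0f}% of ARfD")      # >100% flags acute concern
    ```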

  8. Stochastic and Deterministic Crystal Structure Solution Methods in GSAS-II: Monte Carlo/Simulated Annealing Versus Charge Flipping

    DOE PAGES

    Von Dreele, Robert

    2017-08-29

    One of the goals in developing GSAS-II was to expand beyond the capabilities of the original General Structure Analysis System (GSAS), which largely encompassed just structure refinement and post-refinement analysis. GSAS-II has been written almost entirely in Python, loaded with graphics, GUI, and mathematical packages (matplotlib, pyOpenGL, wxpython, numpy and scipy). Thus, GSAS-II has a fully developed modern GUI as well as extensive graphical display of data and results. However, the structure and operation of Python have required new approaches to many of the algorithms used in crystal structure analysis. The extensions beyond GSAS include image calibration/integration as well as peak fitting and unit cell indexing for powder data, which are precursors for structure solution. Structure solution within GSAS-II begins with either Pawley- or LeBail-extracted structure factors from powder data or those measured in a single-crystal experiment. Both charge flipping and Monte Carlo/simulated annealing techniques are available; the former can be applied to (3+1) incommensurate structures as well as conventional 3D structures.
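
    A bare-bones sketch of the charge-flipping iteration mentioned above, operating on a gridded density with observed structure-factor magnitudes; the array shapes, threshold rule, and function name are illustrative, not GSAS-II's implementation:

    ```python
    import numpy as np

    def charge_flip(F_obs_mag, n_iter=500, delta_frac=0.1, seed=0):
        """F_obs_mag: observed |F| on a full 3D reciprocal-space grid."""
        rng = np.random.default_rng(seed)
        phases = np.exp(2j * np.pi * rng.random(F_obs_mag.shape))
        F = F_obs_mag * phases                      # random starting phases
        for _ in range(n_iter):
            rho = np.fft.ifftn(F).real              # density from current phases
            delta = delta_frac * rho.max()
            rho = np.where(rho < delta, -rho, rho)  # flip low/negative density
            G = np.fft.fftn(rho)
            # keep the new phases, re-impose the observed magnitudes
            F = F_obs_mag * np.exp(1j * np.angle(G))
        return rho
    ```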

  9. A 2D systems approach to iterative learning control for discrete linear processes with zero Markov parameters

    NASA Astrophysics Data System (ADS)

    Hladowski, Lukasz; Galkowski, Krzysztof; Cai, Zhonglun; Rogers, Eric; Freeman, Chris T.; Lewin, Paul L.

    2011-07-01

    In this article a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along the trial performance, resulting in design algorithms that can be computed using linear matrix inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick and place operation commonly found in a number of applications to which iterative learning control is applicable.
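
    The trial-to-trial mechanism at the heart of iterative learning control can be sketched with the simplest proportional-type update law on a toy discrete plant; the plant, gain, and reference below are placeholders and do not reproduce the article's 2D/LMI design:

    ```python
    import numpy as np

    # Toy stable first-order plant y[k+1] = 0.8*y[k] + 0.5*u[k] (invented)
    def run_trial(u):
        y = np.zeros(len(u) + 1)
        for k in range(len(u)):
            y[k + 1] = 0.8 * y[k] + 0.5 * u[k]
        return y[1:]                 # output aligned with the input that drove it

    N = 50
    ref = np.sin(np.linspace(0, 2 * np.pi, N))   # reference repeated every trial
    u = np.zeros(N)
    L_gain = 0.9                                  # learning gain

    for trial in range(30):
        e = ref - run_trial(u)
        u = u + L_gain * e           # ILC update: u_{j+1} = u_j + L * e_j
    print(np.linalg.norm(e))         # trial-to-trial error shrinks
    ```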

  10. Process influences and correction possibilities for high precision injection molded freeform optics

    NASA Astrophysics Data System (ADS)

    Dick, Lars; Risse, Stefan; Tünnermann, Andreas

    2016-08-01

    Modern injection molding processes offer a cost-efficient method for manufacturing high-precision plastic optics for high-volume applications. Besides the form deviation of molded freeform optics, internal material stress is a relevant influencing factor for the functionality of a freeform optic in an optical system. This paper illustrates the dominant influence parameters of an injection molding process with respect to form deviation and internal material stress, based on a freeform demonstrator geometry. Furthermore, a deterministic and efficient approach to 3D mold correction of systematic, asymmetric shrinkage errors is shown to reach micrometer-range shape accuracy at diameters up to 40 mm. In a second case, a stress-optimized parameter combination using unusual molding conditions was 3D-corrected to produce high-precision, low-stress freeform polymer optics.

  11. CyberShake Physics-Based PSHA in Central California

    NASA Astrophysics Data System (ADS)

    Callaghan, S.; Maechling, P. J.; Goulet, C. A.; Milner, K. R.; Graves, R. W.; Olsen, K. B.; Jordan, T. H.

    2017-12-01

    The Southern California Earthquake Center (SCEC) has developed a simulation platform, CyberShake, which performs physics-based probabilistic seismic hazard analysis (PSHA) using 3D deterministic wave propagation simulations. CyberShake performs PSHA by simulating a wavefield of Strain Green Tensors. An earthquake rupture forecast (ERF) is then extended by varying hypocenters and slips on finite faults, generating about 500,000 events per site of interest. Seismic reciprocity is used to calculate synthetic seismograms, which are processed to obtain intensity measures (IMs) such as RotD100. These are combined with ERF probabilities to produce hazard curves. PSHA results from hundreds of locations across a region are interpolated to produce a hazard map. CyberShake simulations with SCEC 3D Community Velocity Models have shown how the site and path effects vary with differences in upper crustal structure, and they are particularly informative about epistemic uncertainties in basin effects, which are not well parameterized by depths to iso-velocity surfaces, common inputs to GMPEs. In 2017, SCEC performed CyberShake Study 17.3, expanding into Central California for the first time. Seismic hazard calculations were performed at 1 Hz at 438 sites, using both a 3D tomographically-derived central California velocity model and a regionally averaged 1D model. Our simulation volumes extended outside of Central California, so we included other SCEC velocity models and developed a smoothing algorithm to minimize reflection and refraction effects along interfaces. CyberShake Study 17.3 ran for 31 days on NCSA's Blue Waters and ORNL's Titan supercomputers, burning 21.6 million core-hours and producing 285 million two-component seismograms and 43 billion IMs. These results demonstrate that CyberShake can be successfully expanded into new regions, and lend insights into the effects of directivity-basin coupling associated with basins near major faults such as the San Andreas. In particular, we observe in the 3D results that basin amplification for sites in the southern San Joaquin Valley is less than for sites in smaller basins such as around Ventura. We will present CyberShake hazard estimates from the 1D and 3D models, compare results to those from previous CyberShake studies and GMPEs, and describe our future plans.
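
    The final combination step, turning per-rupture intensity measures and ERF probabilities into a hazard curve, can be sketched as follows; variable names and the toy usage are illustrative, not CyberShake code:

    ```python
    import numpy as np

    def hazard_curve(im_values, rup_probs, im_thresholds):
        """Annual exceedance probability at each intensity-measure threshold.

        im_values : list of arrays, one per rupture, of simulated IMs over
                    that rupture's hypocenter/slip variations
        rup_probs : annual occurrence probability of each rupture
        """
        curve = []
        for x in im_thresholds:
            # P(IM > x | rupture), averaged over the rupture variations
            p_exc = np.array([np.mean(ims > x) for ims in im_values])
            # combine across ruptures assuming independence
            curve.append(1.0 - np.prod(1.0 - rup_probs * p_exc))
        return np.array(curve)

    # Toy usage with two invented ruptures
    rng = np.random.default_rng(5)
    ims = [rng.lognormal(-1.0, 0.5, 500), rng.lognormal(-0.5, 0.5, 500)]
    curve = hazard_curve(ims, np.array([0.01, 0.002]),
                         np.linspace(0.05, 2.0, 40))
    ```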

  12. Selective Attention, Diffused Attention, and the Development of Categorization

    PubMed Central

    Deng, Wei (Sophia); Sloutsky, Vladimir M.

    2016-01-01

    How do people learn categories and what changes with development? The current study attempts to address these questions by focusing on the role of attention in the development of categorization. In Experiment 1, participants (adults, 7-year-olds, and 4-year-olds) were trained with novel categories consisting of deterministic and probabilistic features, and their categorization and memory for features were tested. In Experiment 2, participants’ attention was directed to the deterministic feature, and in Experiment 3 it was directed to the probabilistic features. Attentional cuing affected categorization and memory in adults and 7-year-olds: these participants relied on the cued features in their categorization and exhibited better memory of cued than of non-cued features. In contrast, in 4-year-olds attentional cueing affected only categorization, but not memory: these participants exhibited equally good memory for both cued and non-cued features. Furthermore, across the experiments, 4-year-olds remembered non-cued features better than adults. These results coupled with computational simulations provide novel evidence (1) pointing to differences in category representation and mechanisms of categorization across development, (2) elucidating the role of attention in the development of categorization, and (3) suggesting an important distinction between representation and decision factors in categorization early in development. These issues are discussed with respect to theories of categorization and its development. PMID:27721103

  13. Randomly chosen chaotic maps can give rise to nearly ordered behavior

    NASA Astrophysics Data System (ADS)

    Boyarsky, Abraham; Góra, Paweł; Islam, Md. Shafiqul

    2005-10-01

    Parrondo’s paradox [J.M.R. Parrondo, G.P. Harmer, D. Abbott, New paradoxical games based on Brownian ratchets, Phys. Rev. Lett. 85 (2000), 5226-5229] (see also [O.E. Percus, J.K. Percus, Can two wrongs make a right? Coin-tossing games and Parrondo’s paradox, Math. Intelligencer 24 (3) (2002) 68-72]) states that two losing gambling games when combined one after the other (either deterministically or randomly) can result in a winning game: that is, a losing game followed by a losing game = a winning game. Inspired by this paradox, a recent study [J. Almeida, D. Peralta-Salas, M. Romera, Can two chaotic systems give rise to order? Physica D 200 (2005) 124-132] asked an analogous question in discrete-time dynamical systems: can two chaotic systems give rise to order, namely can they be combined into another dynamical system which does not behave chaotically? Numerical evidence is provided in [J. Almeida, D. Peralta-Salas, M. Romera, Can two chaotic systems give rise to order? Physica D 200 (2005) 124-132] that two chaotic quadratic maps, when composed with each other, create a new dynamical system which has a stable periodic orbit. The question of what happens in the case of random composition of maps is posed in [J. Almeida, D. Peralta-Salas, M. Romera, Can two chaotic systems give rise to order? Physica D 200 (2005) 124-132] but left unanswered. In this note we present an example of a dynamical system where, at each iteration, a map is chosen in a probabilistic manner from a collection of chaotic maps. The resulting random map is proved to have an infinite absolutely continuous invariant measure (acim) with spikes at two points. From this we show that the dynamics behaves in a nearly ordered manner. When the foregoing maps are applied one after the other, deterministically as in [O.E. Percus, J.K. Percus, Can two wrongs make a right? Coin-tossing games and Parrondo’s paradox, Math. Intelligencer 24 (3) (2002) 68-72], the resulting composed map has a periodic orbit which is stable.
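
    The construction described, choosing a map at random from a collection of chaotic maps at each iteration, is easy to experiment with numerically; the two logistic-family maps below are generic stand-ins for the quadratic maps analyzed in the note:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def f1(x):                  # logistic map, r = 4 (chaotic)
        return 4.0 * x * (1.0 - x)

    def f2(x):                  # logistic map, r = 3.9 (also chaotic)
        return 3.9 * x * (1.0 - x)

    x, orbit = 0.3, []
    for _ in range(100_000):
        x = f1(x) if rng.random() < 0.5 else f2(x)   # random composition
        orbit.append(x)

    # A histogram of the orbit approximates the invariant measure; sharp
    # spikes would signal the nearly ordered behavior described above.
    hist, edges = np.histogram(orbit, bins=200, density=True)
    ```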

  14. Design optimization and uncertainty quantification for aeromechanics forced response of a turbomachinery blade

    NASA Astrophysics Data System (ADS)

    Modgil, Girish A.

    Gas turbine engines for aerospace applications have evolved dramatically over the last 50 years through the constant pursuit of better specific fuel consumption, higher thrust-to-weight ratio, and lower noise and emissions, all while maintaining reliability and affordability. An important step in enabling these improvements is a forced response aeromechanics analysis involving the structural dynamics and aerodynamics of the turbine. It is well documented that forced response vibration is a very critical problem in aircraft engine design, causing High Cycle Fatigue (HCF). Pushing the envelope on engine design has led to increased forced response problems and subsequently an increased risk of HCF failure. Forced response analysis is used to assess the design feasibility of turbine blades for HCF using a material limit boundary set by the Goodman Diagram envelope, which combines the effects of steady and vibratory stresses. Forced response analysis is computationally expensive, time consuming, and requires multi-domain experts to finalize a result. As a consequence, high-fidelity aeromechanics analysis is performed deterministically and is usually done at the end of the blade design process, when it is very costly to make significant changes to geometry or aerodynamic design. To address uncertainties in the system (engine operating point, temperature distribution, mistuning, etc.) and variability in material properties, designers apply conservative safety factors in the traditional deterministic approach, which leads to bulky designs. Moreover, a deterministic approach does not provide a calculated risk of HCF failure. This thesis describes a process that begins with the optimal aerodynamic design of a turbomachinery blade developed using surrogate models of high-fidelity analyses. The resulting optimal blade undergoes probabilistic evaluation to generate aeromechanics results that provide a calculated likelihood of failure from HCF. An existing Rolls-Royce High Work Single Stage (HWSS) turbine blisk provides a baseline to demonstrate the process. The generalized polynomial chaos (gPC) toolbox that was developed includes sampling methods and constructs polynomial approximations. The toolbox provides not only the means for uncertainty quantification of the final blade design, but also facilitates construction of the surrogate models used for the blade optimization. This work shows that gPC, with a small number of samples, achieves very fast rates of convergence and high accuracy in describing probability distributions without loss of detail in the tails. First, an optimization problem maximizes stage efficiency using turbine aerodynamic design rules as constraints; the function evaluations for this optimization are surrogate models from detailed 3D steady Computational Fluid Dynamics (CFD) analyses. The resulting optimal shape provides a starting point for the 3D high-fidelity aeromechanics (unsteady CFD and 3D Finite Element Analysis (FEA)) UQ study assuming three uncertain input parameters. This investigation seeks to find the steady and vibratory stresses associated with the first torsion mode for the HWSS turbine blisk near the maximum operating speed of the engine. Using gPC to provide uncertainty estimates of the steady and vibratory stresses enables the creation of a Probabilistic Goodman Diagram, which - to the authors' best knowledge - is the first of its kind using high-fidelity aeromechanics for turbomachinery blades.
The Probabilistic Goodman Diagram enables turbine blade designers to make more informed design decisions and it allows the aeromechanics expert to assess quantitatively the risk associated with HCF for any mode crossing based on high fidelity simulations.
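
    A minimal sketch of the gPC machinery described above: fitting probabilists'-Hermite polynomial chaos coefficients to samples of a model output by least squares, then reading off moments from orthogonality; the model function and truncation order are placeholders for the expensive aeromechanics analyses:

    ```python
    import math
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander

    rng = np.random.default_rng(3)

    def model(xi):               # cheap stand-in for the aeromechanics codes
        return np.exp(0.3 * xi) + 0.1 * xi**2

    order = 5
    xi = rng.standard_normal(200)      # samples of the standard Gaussian germ
    y = model(xi)

    Psi = hermevander(xi, order)       # probabilists' Hermite basis He_0..He_5
    coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

    # Orthogonality (<He_n^2> = n! under the Gaussian weight) turns the
    # coefficients directly into moments of the output distribution:
    norms = np.array([math.factorial(n) for n in range(order + 1)], dtype=float)
    mean, var = coef[0], np.sum(coef[1:] ** 2 * norms[1:])
    ```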

  15. Resolution improvement of 3D stereo-lithography through the direct laser trajectory programming: Application to microfluidic deterministic lateral displacement device.

    PubMed

    Juskova, Petra; Ollitrault, Alexis; Serra, Marco; Viovy, Jean-Louis; Malaquin, Laurent

    2018-02-13

    The vast majority of current microfluidic devices are produced using soft lithography, a technique with strong limitations regarding the fabrication of three-dimensional architectures. Additive manufacturing holds great promise to overcome these limitations, but conventional machines still lack the resolution required by most microfluidic applications. 3D printing machines based on two-photon lasers, in contrast, have the needed resolution but are too limited in speed and overall device size. Here we demonstrate how the resolution of conventional stereolithographic machines can be improved by direct programming of the laser path, helping to bridge the gap between the two above technologies and allowing the direct printing of features between 10 and 100 μm, a range covering a large fraction of microfluidic applications. This strategy achieves resolutions limited only by the physical size of the laser beam, decreasing the size of the smallest printable features by a factor of at least 2 and increasing their reproducibility by a factor of 5. The approach was applied to produce an open microfluidic device with a reversible seal, integrating periodic patterns built from simple motifs, and was validated by the fabrication of a deterministic lateral displacement particle-sorting device. The sorting of polystyrene beads (diameters: 20 μm and 45 μm) was achieved with a specificity >95%, comparable with that achieved with arrays prepared by microlithography. Copyright © 2017 Elsevier B.V. All rights reserved.
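
    For background, deterministic lateral displacement arrays are usually designed around an empirical critical diameter; a widely used form is Davis's correlation, quoted here as a general design rule rather than a statement about this paper's device:

    ```latex
    % Davis's empirical correlation for the critical diameter of a DLD array:
    \[
      D_c \;\approx\; 1.4 \, G \, \varepsilon^{0.48},
    \]
    % where $G$ is the gap between posts and $\varepsilon$ is the row-shift
    % fraction. Particles larger than $D_c$ are laterally displaced
    % ("bumped"); smaller particles follow the flow in zigzag mode.
    ```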

  16. Deterministic Squeezed States with Collective Measurements and Feedback.

    PubMed

    Cox, Kevin C; Greve, Graham P; Weiner, Joshua M; Thompson, James K

    2016-03-04

    We demonstrate the creation of entangled, spin-squeezed states using a collective, or joint, measurement and real-time feedback. The pseudospin state of an ensemble of N=5×10^{4} laser-cooled ^{87}Rb atoms is deterministically driven to a specified population state with angular resolution that is a factor of 5.5(8) [7.4(6) dB] in variance below the standard quantum limit for unentangled atoms-comparable to the best enhancements using only unitary evolution. Without feedback, conditioning on the outcome of the joint premeasurement, we directly observe up to 59(8) times [17.7(6) dB] improvement in quantum phase variance relative to the standard quantum limit for N=4×10^{5}  atoms. This is one of the largest reported entanglement enhancements to date in any system.

  17. On-line range images registration with GPGPU

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Naruniec, J.

    2013-03-01

    This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first topic is based on 3D space decomposition, while the latter is based on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (NVIDIA CUDA) parallel computation. The result of the segmentation is a coloured map where different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotating head, to be used in mobile robot applications. The data registration algorithm is aimed at on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the time of the matching is deterministic. The first segmentation technique uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation, and an image processing method for defining prerequisites of the known categories. The second technique uses the adapted nearest neighbour search procedure to obtain normal vectors for each range point.
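
    The deterministic-time matching step can be sketched (in serial Python rather than CUDA) as a regular-grid bucket decomposition: each query inspects only the 27 cells around it, which bounds the per-point work and is adequate when correspondences are closer than one cell; the cell size and names are illustrative:

    ```python
    import numpy as np
    from collections import defaultdict

    def build_buckets(points, cell):
        """Hash each point into a cubic bucket of side `cell`."""
        buckets = defaultdict(list)
        for i, p in enumerate(points):
            buckets[tuple((p // cell).astype(int))].append(i)
        return buckets

    def nearest(query, points, buckets, cell):
        cx, cy, cz = (query // cell).astype(int)
        best, best_d = -1, np.inf
        # only the 27 cells around the query are examined, so the work per
        # point is bounded -- what makes the matching time deterministic
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for i in buckets.get((cx + dx, cy + dy, cz + dz), ()):
                        d = np.sum((points[i] - query) ** 2)
                        if d < best_d:
                            best, best_d = i, d
        return best

    rng = np.random.default_rng(6)
    pts = rng.random((10_000, 3))
    buckets = build_buckets(pts, cell=0.05)
    idx = nearest(np.array([0.5, 0.5, 0.5]), pts, buckets, cell=0.05)
    ```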

  18. FRACOR-software toolbox for deterministic mapping of fracture corridors in oil fields on AutoCAD platform

    NASA Astrophysics Data System (ADS)

    Ozkaya, Sait I.

    2018-03-01

    Fracture corridors are interconnected large fractures in a narrow, sub-vertical, tabular array, which usually traverse the entire reservoir vertically and extend for several hundred meters laterally. Fracture corridors, with their huge conductivities, constitute an important element of many fractured reservoirs. Unlike small diffuse fractures, actual fracture corridors must be mapped deterministically for simulation or field development purposes. Fracture corridors can be identified and quantified definitively with borehole image logs and well testing. However, there are rarely sufficient image logs or well tests, and it is necessary to utilize various fracture corridor indicators with varying degrees of reliability. Integration of data from many different sources, in turn, requires a platform with powerful editing and layering capability. Available commercial reservoir characterization software packages, with layering and editing capabilities, can be cost intensive. CAD packages are far more affordable and may easily acquire the versatility and power of commercial software packages with the addition of a small software toolbox. The objective of this communication is to present FRACOR, a software toolbox which enables deterministic 2D fracture corridor mapping and modeling on the AutoCAD platform. The FRACOR toolbox is written in AutoLISP and contains several independent routines to import and integrate available fracture corridor data from an oil field and export the results as text files. The resulting fracture corridor maps consist mainly of fracture corridors with different confidence levels, derived from a combination of static and dynamic data, and exclusion zones where no fracture corridor can exist. The exported text file of fracture corridors from FRACOR can be imported into an upscaling program to generate a fracture grid for dual-porosity simulation, or used for field development and well planning.

  19. Expanding CyberShake Physics-Based Seismic Hazard Calculations to Central California

    NASA Astrophysics Data System (ADS)

    Silva, F.; Callaghan, S.; Maechling, P. J.; Goulet, C. A.; Milner, K. R.; Graves, R. W.; Olsen, K. B.; Jordan, T. H.

    2016-12-01

    As part of its program of earthquake system science, the Southern California Earthquake Center (SCEC) has developed a simulation platform, CyberShake, to perform physics-based probabilistic seismic hazard analysis (PSHA) using 3D deterministic wave propagation simulations. CyberShake performs PSHA by first simulating a tensor-valued wavefield of Strain Green Tensors. CyberShake then takes an earthquake rupture forecast and extends it by varying the hypocenter location and slip distribution, resulting in about 500,000 rupture variations. Seismic reciprocity is used to calculate synthetic seismograms for each rupture variation at each computation site. These seismograms are processed to obtain intensity measures, such as spectral acceleration, which are then combined with probabilities from the earthquake rupture forecast to produce a hazard curve. Hazard curves are calculated at seismic frequencies up to 1 Hz for hundreds of sites in a region and the results interpolated to obtain a hazard map. In developing and verifying CyberShake, we have focused our modeling in the greater Los Angeles region. We are now expanding the hazard calculations into Central California. Using workflow tools running jobs across two large-scale open-science supercomputers, NCSA Blue Waters and OLCF Titan, we calculated 1-Hz PSHA results for over 400 locations in Central California. For each location, we produced hazard curves using both a 3D central California velocity model created via tomographic inversion, and a regionally averaged 1D model. These new results provide low-frequency exceedance probabilities for the rapidly expanding metropolitan areas of Santa Barbara, Bakersfield, and San Luis Obispo, and lend new insights into the effects of directivity-basin coupling associated with basins juxtaposed to major faults such as the San Andreas. Particularly interesting are the basin effects associated with the deep sediments of the southern San Joaquin Valley. We will compare hazard estimates from the 1D and 3D models, summarize the challenges of expanding CyberShake to a new geographic region, and describe our future CyberShake plans.

  20. A transport model for the deterministic stresses associated with turbomachinery blade row interactions

    NASA Astrophysics Data System (ADS)

    van de Wall, Allan George

    The unsteady process resulting from the interaction of upstream vortical structures with a downstream blade row in turbomachines can have a significant impact on machine efficiency. A transport model assuming incompressible flow and using linear theory was developed to take this process into account in the computation of time-averaged multistage turbomachinery flows. The upstream vortical structures are transported by the mean flow of the downstream blade row, redistributing the time-averaged unsteady kinetic energy (Uke) associated with the incoming disturbance. The model was applied to compressor and turbine geometries. For compressors, the Uke associated with upstream 2-D wakes and 3-D tip clearance flows is reduced as a result of the interaction with a downstream blade row. This reduction results from inviscid as well as viscous effects and reduces the loss associated with the upstream disturbance. Any disturbance passing through a compressor blade row results in a smaller loss than if the disturbance were mixed out prior to entering the blade row. For turbines, the Uke associated with upstream 2-D wakes and 3-D tip clearance flows is significantly amplified by inviscid effects as a result of the interaction with a downstream turbine blade row. Viscous effects act to reduce the amplification of the Uke by inviscid effects but result in a substantial loss. Any disturbance passing through a turbine blade row results in a larger loss than if the disturbance were mixed out prior to entering the blade row.

  1. Development of probabilistic multimedia multipathway computer codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, C.; LePoire, D.; Gnanapragasam, E.

    2002-01-01

    The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
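
    The overall pattern, wrapping a deterministic code in sampled input distributions, can be sketched as follows; the two-parameter "dose model" below is a trivial stand-in for RESRAD itself, and the distributions are invented:

    ```python
    import numpy as np
    from scipy.stats import qmc, lognorm, uniform

    def deterministic_dose(kd, infiltration):   # trivial stand-in for a RESRAD run
        return 100.0 * infiltration / (1.0 + kd)

    # Latin hypercube sample of two uncertain inputs (distributions invented)
    sampler = qmc.LatinHypercube(d=2, seed=4)
    u = sampler.random(n=1_000)
    kd = lognorm(s=0.8, scale=50.0).ppf(u[:, 0])       # soil-water Kd (cm3/g)
    infil = uniform(loc=0.1, scale=0.4).ppf(u[:, 1])   # infiltration (m/yr)

    doses = deterministic_dose(kd, infil)
    print(np.percentile(doses, [50, 95]))   # a dose distribution, not a point value
    ```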

  2. Pre-polishing on a CNC platform with bound abrasive contour tools

    NASA Astrophysics Data System (ADS)

    Schoeffer, Adrienne E.

    2003-05-01

    Deterministic microgrinding (DMG) of optical glasses and ceramics is the commercial manufacturing process of choice to shape glass surfaces prior to final finishing. This process employs rigid bound matrix diamond tooling resulting in surface roughness values of 3-5 μm peak to valley and 100-400 nm rms, as well as mid-spatial frequency tool marks that require subsequent removal in secondary finishing steps. The ability to pre-polish optical surfaces within the grinding platform would reduce final finishing process times. Bound abrasive contour wheels containing cerium oxide, alumina or zirconia abrasives were constructed with an epoxy matrix. The effects of abrasive type, composition, and erosion promoters were examined for tool hardness (Shore D), and tested with commercial optical glasses in an Optipro CNC grinding platform. Metrology protocols were developed to examine tool wear and subsequent surface roughness. Work is directed to demonstrating effective material removal, improved surface roughness and cutter mark removal.

  3. Prepolishing on a CNC platform with bound abrasive contour tools

    NASA Astrophysics Data System (ADS)

    Schoeffler, Adrienne E.; Gregg, Leslie L.; Schoen, John M.; Fess, Edward M.; Hakiel, Michael; Jacobs, Stephen D.

    2003-05-01

    Deterministic microgrinding (DMG) of optical glasses and ceramics is the commercial manufacturing process of choice to shape glass surfaces prior to final finishing. This process employs rigid bound matrix diamond tooling resulting in surface roughness values of 3-5μm peak to valley and 100-400nm rms, as well as mid-spatial frequency tool marks that require subsequent removal in secondary finishing steps. The ability to pre-polish optical surfaces within the grinding platform would reduce final finishing process times. Bound abrasive contour wheels containing cerium oxide, alumina or zirconia abrasives were constructed with an epoxy matrix. The effects of abrasive type, composition, and erosion promoters were examined for tool hardness (Shore D), and tested with commercial optical glasses in an Optipro CNC grinding platform. Metrology protocols were developed to examine tool wear and subsequent surface roughness. Work is directed to demonstrating effective material removal, improved surface roughness and cutter mark removal.

  4. Monte Carlo Approach for Estimating Density and Atomic Number From Dual-Energy Computed Tomography Images of Carbonate Rocks

    NASA Astrophysics Data System (ADS)

    Victor, Rodolfo A.; Prodanović, Maša.; Torres-Verdín, Carlos

    2017-12-01

    We develop a new Monte Carlo-based inversion method for estimating electron density and effective atomic number from 3-D dual-energy computed tomography (CT) core scans. The method accounts for uncertainties in X-ray attenuation coefficients resulting from the polychromatic nature of X-ray beam sources of medical and industrial scanners, in addition to delivering uncertainty estimates of inversion products. Estimation of electron density and effective atomic number from CT core scans enables direct deterministic or statistical correlations with salient rock properties for improved petrophysical evaluation; this condition is specifically important in media such as vuggy carbonates where CT resolution better captures core heterogeneity that dominates fluid flow properties. Verification tests of the inversion method performed on a set of highly heterogeneous carbonate cores yield very good agreement with in situ borehole measurements of density and photoelectric factor.

  5. The development of the deterministic nonlinear PDEs in particle physics to stochastic case

    NASA Astrophysics Data System (ADS)

    Abdelrahman, Mahmoud A. E.; Sohaly, M. A.

    2018-06-01

    In the present work, an accurate method called the Riccati-Bernoulli sub-ODE technique is used for solving the deterministic and stochastic cases of the Phi-4 equation and the nonlinear foam drainage equation. In addition, the control of the random input is studied with respect to the stability of the stochastic process solution.

  6. An Extended Deterministic Dendritic Cell Algorithm for Dynamic Job Shop Scheduling

    NASA Astrophysics Data System (ADS)

    Qiu, X. N.; Lau, H. Y. K.

    The problem of job shop scheduling in a dynamic environment, where random perturbations exist in the system, is studied. In this paper, an extended deterministic Dendritic Cell Algorithm (dDCA) is proposed to solve such a dynamic Job Shop Scheduling Problem (JSSP), where unexpected events occur randomly. This algorithm is designed based on the dDCA and improves on it by considering all types of signals and the magnitude of the output values. To evaluate the algorithm, ten benchmark problems were chosen and different kinds of disturbances were injected randomly. The results show that the algorithm performs competitively, as it is capable of triggering the rescheduling process optimally with much less run time for deciding the rescheduling action. As such, the proposed algorithm is able to minimize the number of rescheduling events under the defined objective and to keep the scheduling process stable and efficient.

  7. Data-driven gradient algorithm for high-precision quantum control

    NASA Astrophysics Data System (ADS)

    Wu, Re-Bing; Chu, Bing; Owens, David H.; Rabitz, Herschel

    2018-04-01

    In the quest to achieve scalable quantum information processing technologies, gradient-based optimal control algorithms (e.g., GRAPE) are broadly used for implementing high-precision quantum gates, but their performance is often hindered by deterministic or random errors in the system model and the control electronics. In this paper, we show that GRAPE can be taught to be more effective by jointly learning from the design model and the experimental data obtained from process tomography. The resulting data-driven gradient optimization algorithm (d-GRAPE) can in principle correct all deterministic gate errors, with a mild efficiency loss. The d-GRAPE algorithm may become more powerful with broadband controls that involve a large number of control parameters, while other algorithms usually slow down due to the increased size of the search space. These advantages are demonstrated by simulating the implementation of a two-qubit controlled-NOT gate.
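
    A toy version of the model-based gradient step that GRAPE-type algorithms perform, here with a brute-force finite-difference gradient on a single qubit; the data-driven correction that defines d-GRAPE is not reproduced, and this only illustrates gradient ascent on gate fidelity:

    ```python
    import numpy as np
    from scipy.linalg import expm

    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    U_target = expm(-1j * np.pi / 2 * sx)       # X-type target gate

    N, dt = 20, 0.1                             # piecewise-constant pulse

    def fidelity(u):
        U = np.eye(2, dtype=complex)
        for uk in u:                            # drift sz, control amplitude on sx
            U = expm(-1j * dt * (sz + uk * sx)) @ U
        return abs(np.trace(U_target.conj().T @ U)) ** 2 / 4.0

    u, eps, lr = np.zeros(N), 1e-6, 2.0
    for it in range(300):
        grad = np.zeros(N)
        for k in range(N):                      # finite differences: slow but clear
            du = np.zeros(N)
            du[k] = eps
            grad[k] = (fidelity(u + du) - fidelity(u)) / eps
        u += lr * grad                          # gradient ascent on gate fidelity

    print(f"final gate fidelity: {fidelity(u):.4f}")
    ```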

  8. Deterministic secure quantum communication using a single d-level system.

    PubMed

    Jiang, Dong; Chen, Yuanyuan; Gu, Xuemei; Xie, Ling; Chen, Lijun

    2017-03-22

    Deterministic secure quantum communication (DSQC) can transmit secret messages between two parties without first generating a shared secret key. Compared with quantum key distribution (QKD), DSQC avoids the waste of qubits arising from basis reconciliation and thus reaches higher efficiency. In this paper, based on data block transmission and order rearrangement technologies, we propose a DSQC protocol. It utilizes a set of single d-level systems as message carriers, which are used to directly encode the secret message in one communication process. Theoretical analysis shows that these employed technologies guarantee the security, and the use of a higher dimensional quantum system makes our protocol achieve higher security and efficiency. Since only quantum memory is required for implementation, our protocol is feasible with current technologies. Furthermore, Trojan horse attack (THA) is taken into account in our protocol. We give a THA model and show that THA significantly increases the multi-photon rate and can thus be detected.

  9. Selective attention, diffused attention, and the development of categorization.

    PubMed

    Deng, Wei Sophia; Sloutsky, Vladimir M

    2016-12-01

    How do people learn categories and what changes with development? The current study attempts to address these questions by focusing on the role of attention in the development of categorization. In Experiment 1, participants (adults, 7-year-olds, and 4-year-olds) were trained with novel categories consisting of deterministic and probabilistic features, and their categorization and memory for features were tested. In Experiment 2, participants' attention was directed to the deterministic feature, and in Experiment 3 it was directed to the probabilistic features. Attentional cueing affected categorization and memory in adults and 7-year-olds: these participants relied on the cued features in their categorization and exhibited better memory of cued than of non-cued features. In contrast, in 4-year-olds attentional cueing affected only categorization, but not memory: these participants exhibited equally good memory for both cued and non-cued features. Furthermore, across the experiments, 4-year-olds remembered non-cued features better than adults. These results coupled with computational simulations provide novel evidence (1) pointing to differences in category representation and mechanisms of categorization across development, (2) elucidating the role of attention in the development of categorization, and (3) suggesting an important distinction between representation and decision factors in categorization early in development. These issues are discussed with respect to theories of categorization and its development.

  10. Solar Radiation Transport in the Cloudy Atmosphere: A 3D Perspective on Observations and Climate Impacts

    NASA Technical Reports Server (NTRS)

    Davis, Anthony B.; Marshak, Alexander

    2010-01-01

    The interplay of sunlight with clouds is a ubiquitous and often pleasant visual experience, but it conjures up major challenges for weather, climate, environmental science and beyond. Those engaged in the characterization of clouds (and the clear air nearby) by remote sensing methods face even sharper challenges. The problem comes, on the one hand, from the spatial complexity of real clouds and, on the other hand, from the dominance of multiple scattering in the radiation transport. The former ingredient contrasts sharply with the still popular representation of clouds as homogeneous plane-parallel slabs for the purposes of radiative transfer computations. In typical cloud scenes the opposite asymptotic transport regimes of diffusion and ballistic propagation coexist. We survey the three-dimensional (3D) atmospheric radiative transfer literature of the past 50 years and identify three concurrent and intertwining thrusts: first, how can the damage (bias) caused by 3D effects in operational 1D radiative transfer models be assessed? Second, how can this damage be mitigated? Finally, can 3D radiative transfer phenomena be exploited to innovate observation methods and technologies? We quickly realize that the smallest scale resolved computationally or observationally may be artificial but is nonetheless a key quantity that separates the 3D radiative transfer solutions into two broad and complementary classes: stochastic and deterministic. Both approaches draw on classic and contemporary statistical, mathematical and computational physics.

  11. Large-Amplitude Forced Response of Dynamic Systems

    DTIC Science & Technology

    1992-11-01

    Blacksburg, VA, June 25-27, 1990. 11. A. Abou-Rayan, A. H. Nayfeh, D. T. Mook, and M. A. Nayfeh, "Nonlinear Analysis of a Parametrically Excited..." 62nd Shock and Vibration Symposium, Springfield, VA, October 29-31, 1991. 23. A. Abou-Rayan, A. H. Nayfeh, D. T. Mook, and M. A. Nayfeh... Mechanics, Virginia Polytechnic Institute and State University, Blacksburg, VA, 1991. 6. A. Abou-Rayan, Ph.D., "Deterministic and Stochastic Responses

  12. Disentangling the Cosmic Web with Lagrangian Submanifold

    NASA Astrophysics Data System (ADS)

    Shandarin, Sergei F.; Medvedev, Mikhail V.

    2016-10-01

    The Cosmic Web is a complicated, highly entangled geometrical object. Remarkably, it has formed from practically Gaussian initial conditions, which may be regarded as the simplest departure from an exactly uniform universe, under a purely deterministic mapping. The full complexity of the web is revealed neither in configuration nor in velocity space considered separately; it can be fully appreciated only in six-dimensional (6D) phase space. However, studies of the phase space are complicated by the fact that every projection of it onto a three-dimensional (3D) space is multivalued and contains caustics. In addition, phase space is not a metric space, which complicates studies of its geometry. We suggest using the Lagrangian submanifold, i.e., x = x(q), where both x and q are 3D vectors, instead of the phase space for studying the complexity of the cosmic web in cosmological N-body dark matter simulations. Being fully equivalent in the dynamical sense to the phase space, it has the advantage of being single-valued and also a metric space.
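
    As a concrete, analytically tractable example of such a single-valued mapping (quoted from the standard literature rather than from this abstract, and up to sign conventions for the displacement potential), the Zel'dovich approximation writes the Lagrangian submanifold in closed form:

        \mathbf{x}(\mathbf{q}, t) = \mathbf{q} + D_{+}(t)\, \nabla_{\mathbf{q}} \psi(\mathbf{q}),

    where D_{+}(t) is the linear growth factor and \psi a displacement potential set by the initial conditions. The graph \{(\mathbf{q}, \mathbf{x}(\mathbf{q}))\} remains single-valued even after shell crossing, when the projection onto x-space develops caustics and becomes multivalued.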

  13. Service-Oriented Architecture (SOA) Instantiation within a Hard Real-Time, Deterministic Combat System Environment

    ERIC Educational Resources Information Center

    Moreland, James D., Jr

    2013-01-01

    This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…

  14. CPT-based probabilistic and deterministic assessment of in situ seismic soil liquefaction potential

    USGS Publications Warehouse

    Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Der Kiureghian, A.; Cetin, K.O.

    2006-01-01

    This paper presents a complete methodology for both probabilistic and deterministic assessment of seismic soil liquefaction triggering potential based on the cone penetration test (CPT). A comprehensive worldwide set of CPT-based liquefaction field case histories was compiled and back-analyzed, and the data were then used to develop probabilistic triggering correlations. Issues investigated in this study include improved normalization of CPT resistance measurements for the influence of effective overburden stress, and adjustment of CPT tip resistance for the potential influence of "thin" liquefiable layers. The effects of soil type and soil character (i.e., the "fines" adjustment) in the new correlations are based on a combination of CPT tip and sleeve resistance. To quantify probability for performance-based engineering applications, Bayesian "regression" methods were used, and the uncertainties of all variables comprising both the seismic demand and the liquefaction resistance were estimated and included in the analysis. The resulting correlations were developed using a Bayesian framework and are presented in both probabilistic and deterministic formats. The results are compared to previous probabilistic and deterministic correlations.
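
    For orientation, the deterministic seismic-demand side of such procedures is usually built around the simplified cyclic stress ratio of Seed and Idriss; a minimal sketch (the 0.65 factor and the depth-reduction form below are the common textbook choices, not necessarily the exact terms of this paper's regression):

        def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
            """Simplified Seed-Idriss cyclic stress ratio (CSR).

            a_max_g      -- peak ground acceleration as a fraction of g
            sigma_v      -- total vertical stress (kPa)
            sigma_v_eff  -- effective vertical stress (kPa)
            depth_m      -- depth (m), used for the stress-reduction factor r_d
            """
            # One widely used approximation for the depth-reduction factor r_d
            # (Liao-Whitman form for shallow depths; other published forms exist).
            r_d = 1.0 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m
            return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

        # Example: a_max = 0.3 g at 6 m depth
        print(cyclic_stress_ratio(0.3, sigma_v=110.0, sigma_v_eff=75.0, depth_m=6.0))

    In a probabilistic formulation of the kind developed here, the CSR (demand) and the normalized tip resistance (capacity) both carry uncertainty, and the triggering curve becomes a family of probability contours rather than a single line.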

  15. Heart rate variability as determinism with jump stochastic parameters.

    PubMed

    Zheng, Jiongxuan; Skufca, Joseph D; Bollt, Erik M

    2013-08-01

    We use measured heart rate information (RR intervals) to develop a one-dimensional nonlinear map that describes short-term deterministic behavior in the data. Our study suggests that there is a stochastic parameter with persistence which causes the heart rate and rhythm system to wander about a bifurcation point. We propose a modified circle map with a jump-process noise term as a model which can qualitatively capture this behavior of low-dimensional transient determinism with occasional (stochastically defined) jumps from one deterministic system to another within a one-parameter family of deterministic systems.
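
    A minimal numerical illustration of a circle map whose parameter persists between occasional random jumps (the functional form, coupling, and jump statistics below are generic placeholders, not the fitted RR-interval model):

        import numpy as np

        rng = np.random.default_rng(0)
        K, omega = 0.95, 0.28          # coupling and nominal rotation number (placeholders)
        p_jump, jump_scale = 0.02, 0.05

        theta, om = 0.1, omega
        trace = []
        for n in range(5000):
            # standard circle map step with the current (persistent) parameter
            theta = (theta + om + (K / (2 * np.pi)) * np.sin(2 * np.pi * theta)) % 1.0
            # jump process: with small probability the parameter jumps, then persists,
            # moving the deterministic system within its one-parameter family
            if rng.random() < p_jump:
                om = omega + jump_scale * rng.standard_normal()
            trace.append(theta)

        print("sample of iterates:", np.round(trace[:5], 3))

    Between jumps the dynamics are purely deterministic; the jumps mimic the persistent stochastic parameter that makes the system wander about its bifurcation point.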

  16. Steepest Ascent Low/Non-Low-Frequency Ratio in Empirical Mode Decomposition to Separate Deterministic and Stochastic Velocities From a Single Lagrangian Drifter

    NASA Astrophysics Data System (ADS)

    Chu, Peter C.

    2018-03-01

    SOund Fixing And Ranging (RAFOS) floats deployed by the Naval Postgraduate School (NPS) in the California Current system from 1992 to 2001 at depths between 150 and 600 m (http://www.oc.nps.edu/npsRAFOS/) are used to study 2-D turbulent characteristics. Each drifter trajectory is adaptively decomposed using the empirical mode decomposition (EMD) into a series of intrinsic mode functions (IMFs), each with its own characteristic scale. A new steepest ascent low/non-low-frequency ratio is proposed in this paper to separate a Lagrangian trajectory into low-frequency (nondiffusive, i.e., deterministic) and high-frequency (diffusive, i.e., stochastic) components. The 2-D turbulent (or eddy) diffusion coefficients are calculated on the basis of classical turbulent diffusion with mixing-length theory from the stochastic component of a single drifter. Statistical characteristics of the calculated 2-D turbulence length scale, strength, and diffusion coefficients from the NPS RAFOS data are presented, with the mean values (over all drifters) of the 2-D diffusion coefficients comparable to the commonly used diffusivity tensor method.
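
    Given IMFs from any EMD implementation, the low/high-frequency split described here reduces to partitioning and summing them; a minimal sketch (the zero-crossing frequency proxy and the fixed cutoff index are illustrative simplifications, not the paper's steepest-ascent ratio criterion):

        import numpy as np

        def split_trajectory(imfs, cutoff):
            """Partition IMFs into stochastic (high-frequency) and deterministic
            (low-frequency) parts of a drifter coordinate series.

            imfs   -- array of shape (n_imfs, n_samples), ordered from the
                      highest-frequency IMF to the residual trend (usual EMD output)
            cutoff -- index of the first IMF treated as low-frequency
            """
            stochastic = imfs[:cutoff].sum(axis=0)
            deterministic = imfs[cutoff:].sum(axis=0)
            return deterministic, stochastic

        def zero_crossing_rate(x):
            # crude per-IMF frequency proxy, useful for choosing a cutoff
            return np.mean(np.abs(np.diff(np.sign(x))) > 0)

        # toy example: fake "IMFs" with decreasing frequency
        t = np.linspace(0, 10, 2000)
        imfs = np.array([np.sin(40 * t), np.sin(8 * t), 0.1 * t])
        det, sto = split_trajectory(imfs, cutoff=1)
        print([round(zero_crossing_rate(m), 3) for m in imfs])
        print(f"stochastic std: {sto.std():.3f}, deterministic std: {det.std():.3f}")

    The stochastic component obtained this way is what feeds the mixing-length estimate of the eddy diffusion coefficients.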

  17. A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bal, Guillaume, E-mail: gb2030@columbia.edu; Davis, Anthony B., E-mail: Anthony.B.Davis@jpl.nasa.gov; Kavli Institute for Theoretical Physics, Kohn Hall, University of California, Santa Barbara, CA 93106-4030

    2011-08-20

    Highlights:
    - We introduce a variance reduction scheme for Monte Carlo (MC) transport.
    - The primary application is atmospheric remote sensing.
    - The technique first solves the adjoint problem using a deterministic solver.
    - Next, the adjoint solution is used as an importance function for the MC solver.
    - The adjoint problem is solved quickly since it ignores the volume.
    Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, a scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.
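
    The importance-function mechanism at work here can be stated compactly (standard zero-variance theory, quoted for orientation rather than from this paper). If \psi^\dagger(x,\omega) solves the adjoint transport problem whose source is the detector response function, then the response admits the representation

        R = \int q(x,\omega)\, \psi^\dagger(x,\omega)\, \mathrm{d}x\, \mathrm{d}\omega,

    where q is the forward source. Sampling forward histories from transition kernels biased in proportion to \psi^\dagger, with statistical weights carrying the ratio of the true to the biased densities, leaves the estimate of R unbiased while shrinking its variance (to zero for the exact \psi^\dagger); a cheap deterministic approximation of \psi^\dagger, such as the interaction-free adjoint solution used in this work, retains much of that benefit.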

  18. An error-dependent model of instrument-scanning behavior in commercial airline pilots. Ph.D. Thesis - May 1983

    NASA Technical Reports Server (NTRS)

    Jones, D. H.

    1985-01-01

    A new flexible model of pilot instrument-scanning behavior is presented which assumes that the pilot selects from a set of deterministic scanning patterns based on the pilot's perception of error in the state of the aircraft and on the pilot's knowledge of the interactive nature of the aircraft's systems. Statistical analyses revealed that a three-stage Markov process composed of the pilot's three predicted lookpoints (LP), occurring 1/30, 2/30, and 3/30 of a second prior to each LP, accurately modelled the scanning behavior of 14 commercial airline pilots while flying steep turn maneuvers in a Boeing 737 flight simulator. The modelled scanning data for each pilot were not statistically different from the observed scanning data in comparisons of mean dwell time, entropy, and entropy rate. These findings represent the first direct evidence that pilots use deterministic scanning patterns during instrument flight. The results are interpreted as direct support for the error-dependent model, and suggestions are made for further research that could allow identification of the specific scanning patterns suggested by the model.

  19. Deterministic Parsing and Linguistic Explanation. Revision,

    DTIC Science & Technology

    1985-06-01

    near the town can have any of the following interpretations... See Zubizarreta (1982) and Stowell (1981). ... Department of Linguistics and Philosophy.

  20. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    PubMed

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages (which provide a larger spatiotemporal scale relative to within-stage analyses) revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended, experimentally testable conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.

  1. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession

    PubMed Central

    Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-01-01

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages—which provide a larger spatiotemporal scale relative to within stage analyses—revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended—and experimentally testable—conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems. PMID:25733885

  2. Stochasticity, succession, and environmental perturbations in a fluidic ecosystem.

    PubMed

    Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A; Hazen, Terry C; Tiedje, James M; Arkin, Adam P

    2014-03-04

    Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that the groundwater microbial community diverged substantially from the initial community after EVO amendment and eventually converged to a new community state, which clustered closely with the initial state; however, their composition and structure were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contradictory to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the role of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, only limited successional studies are available to support the different cases in the conceptual framework, and further well-replicated, explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession.

  3. Deterministic ion beam material adding technology for high-precision optical surfaces.

    PubMed

    Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin

    2013-02-20

    Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as the limited correcting capability for mid-to-high spatial frequency surface errors and low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve those problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can realize the uniform convergence of surface errors, where the particle transferring effect generated in the IBA process can effectively correct the mid-to-high spatial frequency errors. In addition, IBA can rapidly correct the pit defects on the surface and greatly improve the machining efficiency of the figuring process. The verification experiments are accomplished on our experimental installation to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect is figured by using IBA. Through two iterations within only 47.5 min, this highly steep pit is effectively corrected, and the surface error is improved from the original 24.69 nm root mean square (RMS) to the final 3.68 nm RMS. Then another experiment is carried out to demonstrate the correcting capability of IBA for mid-to-high spatial frequency surface errors, and the final results indicate that the surface accuracy and surface quality can be simultaneously improved.

  4. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model

    PubMed Central

    Nené, Nuno R.; Dunham, Alistair S.; Illingworth, Christopher J. R.

    2018-01-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. PMID:29500183
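
    For orientation, the deterministic baseline that such inference schemes build on is the classical discrete-time selection recursion (a textbook form given here as a generic example, not the paper's delay-deterministic system itself):

        x_{t+1} = \frac{(1+s)\, x_t}{1 + s\, x_t},

    where x_t is the frequency of the variant and s its selection coefficient. In a finite population, a newly arisen mutant survives drift and becomes established only after a random waiting time, so its real trajectory lags this curve; a delay term of the kind proposed above is designed to absorb that lag.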

  5. Determination of three-dimensional muscle architectures: validation of the DTI-based fiber tractography method by manual digitization

    PubMed Central

    Schenk, P; Siebert, T; Hiepe, P; Güllmar, D; Reichenbach, J R; Wick, C; Blickhan, R; Böl, M

    2013-01-01

    In the last decade, diffusion tensor imaging (DTI) has been used increasingly to investigate three-dimensional (3D) muscle architectures. So far there is no study that has proved the validity of this method to determine fascicle lengths and pennation angles within a whole muscle. To verify the DTI method, fascicle lengths of m. soleus as well as their pennation angles have been measured using two different methods. First, the 3D muscle architecture was analyzed in vivo applying the DTI method with subsequent deterministic fiber tractography. In a second step, the muscle architecture of the same muscle was analyzed using a standard manual digitization system (MicroScribe MLX). Comparing both methods, we found differences for the median pennation angles (P < 0.001) but not for the median fascicle lengths (P = 0.216). Despite the statistical results, we conclude that the DTI method is appropriate to determine the global fiber orientation. The difference in median pennation angles determined with both methods is only about 1.2° (median pennation angle of MicroScribe: 9.7°; DTI: 8.5°) and probably has no practical relevance for muscle simulation studies. Determining fascicle lengths requires additional restriction and further development of the DTI method. PMID:23678961

  6. Determination of three-dimensional muscle architectures: validation of the DTI-based fiber tractography method by manual digitization.

    PubMed

    Schenk, P; Siebert, T; Hiepe, P; Güllmar, D; Reichenbach, J R; Wick, C; Blickhan, R; Böl, M

    2013-07-01

    In the last decade, diffusion tensor imaging (DTI) has been used increasingly to investigate three-dimensional (3D) muscle architectures. So far there is no study that has proved the validity of this method to determine fascicle lengths and pennation angles within a whole muscle. To verify the DTI method, fascicle lengths of m. soleus as well as their pennation angles have been measured using two different methods. First, the 3D muscle architecture was analyzed in vivo applying the DTI method with subsequent deterministic fiber tractography. In a second step, the muscle architecture of the same muscle was analyzed using a standard manual digitization system (MicroScribe MLX). Comparing both methods, we found differences for the median pennation angles (P < 0.001) but not for the median fascicle lengths (P = 0.216). Despite the statistical results, we conclude that the DTI method is appropriate to determine the global fiber orientation. The difference in median pennation angles determined with both methods is only about 1.2° (median pennation angle of MicroScribe: 9.7°; DTI: 8.5°) and probably has no practical relevance for muscle simulation studies. Determining fascicle lengths requires additional restriction and further development of the DTI method.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y M; Bush, K; Han, B

    Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based "localized Monte Carlo" (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculation accounts for dose perturbations "downstream" of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ~4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.

  8. Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and comparison of Paracousti-UQ results with MC simulations using simple models.

  9. 4D Hybrid Ensemble-Variational Data Assimilation for the NCEP GFS: Outer Loops and Variable Transforms

    NASA Astrophysics Data System (ADS)

    Kleist, D. T.; Ide, K.; Mahajan, R.; Thomas, C.

    2014-12-01

    The use of hybrid error covariance models has become quite popular for numerical weather prediction (NWP). One such method for incorporating localized covariances from an ensemble within the variational framework utilizes an augmented control variable (EnVar), and has been implemented in the operational NCEP data assimilation system (GSI). By taking the existing 3D EnVar algorithm in GSI and allowing for four-dimensional ensemble perturbations, coupled with the 4DVAR infrastructure already in place, a 4D EnVar capability has been developed. The 4D EnVar algorithm has a few attractive qualities relative to 4DVAR, including the lack of need for tangent-linear and adjoint model as well as reduced computational cost. Preliminary results using real observations have been encouraging, showing forecast improvements nearly as large as were found in moving from 3DVAR to hybrid 3D EnVar. 4D EnVar is the method of choice for the next generation assimilation system for use with the operational NCEP global model, the global forecast system (GFS). The use of an outer-loop has long been the method of choice for 4DVar data assimilation to help address nonlinearity. An outer loop involves the re-running of the (deterministic) background forecast from the updated initial condition at the beginning of the assimilation window, and proceeding with another inner loop minimization. Within 4D EnVar, a similar procedure can be adopted since the solver evaluates a 4D analysis increment throughout the window, consistent with the valid times of the 4D ensemble perturbations. In this procedure, the ensemble perturbations are kept fixed and centered about the updated background state. This is analogous to the quasi-outer loop idea developed for the EnKF. Here, we present results for both toy model and real NWP systems demonstrating the impact from incorporating outer loops to address nonlinearity within the 4D EnVar context. The appropriate amplitudes for observation and background error covariances in subsequent outer loops will be explored. Lastly, variable transformations on the ensemble perturbations will be utilized to help address issues of non-Gaussianity. This may be particularly important for variables that clearly have non-Gaussian error characteristics such as water vapor and cloud condensate.
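
    The augmented-control-variable (EnVar) formulation referred to here is commonly written as follows (a standard textbook form given for orientation; notation and weighting conventions vary, and this is not a transcription of the GSI implementation):

        \delta \mathbf{x} = \delta \mathbf{x}_c + \sum_{k=1}^{K} \boldsymbol{\alpha}_k \circ \mathbf{x}'_k,

        J(\delta \mathbf{x}_c, \boldsymbol{\alpha}) = \frac{\beta_c}{2}\, \delta \mathbf{x}_c^{\mathrm{T}} \mathbf{B}^{-1} \delta \mathbf{x}_c
        + \frac{\beta_e}{2} \sum_{k=1}^{K} \boldsymbol{\alpha}_k^{\mathrm{T}} \mathbf{C}^{-1} \boldsymbol{\alpha}_k
        + \frac{1}{2} \sum_{t} \big( \mathbf{H}_t\, \delta \mathbf{x}(t) - \mathbf{d}_t \big)^{\mathrm{T}} \mathbf{R}_t^{-1} \big( \mathbf{H}_t\, \delta \mathbf{x}(t) - \mathbf{d}_t \big),

    where \mathbf{x}'_k are the (4D) ensemble perturbations, \boldsymbol{\alpha}_k the extended control fields with localization correlation matrix \mathbf{C}, \mathbf{d}_t the innovations, and \beta_c, \beta_e set the blend between the static covariance \mathbf{B} and the ensemble contribution. In 4D EnVar the increment \delta\mathbf{x}(t) is built from perturbations valid at the observation times, which is why no tangent-linear or adjoint model is needed.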

  10. Inverse kinematic problem for a random gradient medium in geometric optics approximation

    NASA Astrophysics Data System (ADS)

    Petersen, N. V.

    1990-03-01

    Scattering at random inhomogeneities in a gradient medium results in systematic deviations of the rays and travel times of refracted body waves from those corresponding to the deterministic velocity component. The character of the difference depends on the parameters of the deterministic and random velocity components. However, at great distances from the source, independently of the velocity parameters (weakly or strongly inhomogeneous medium), the most probable depth of the ray turning point is smaller than that corresponding to the deterministic velocity component, and the most probable travel times are also lower. The relative uncertainty in the deterministic velocity component, derived from the mean travel times using methods developed for laterally homogeneous media (for instance, the Herglotz-Wiechert method), is systematic in character, but does not exceed the contrast of the velocity inhomogeneities in magnitude. The gradient of the deterministic velocity component has a significant effect on the travel-time fluctuations. The variance at great distances from the source is mainly controlled by shallow inhomogeneities. The travel-time fluctuations are studied only for weakly inhomogeneous media.

  11. Deterministic control of the emission from light sources in 1D nanoporous photonic crystals (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Galisteo-López, Juan F.

    2017-02-01

    Controlling the emission of a light source demands acting on its local photonic environment via the local density of states (LDOS). Approaches to exerting such control in large-scale samples, commonly relying on self-assembly methods, usually lack precise positioning of the emitter within the material. Alternatively, expensive and time-consuming techniques can be used to produce samples of small dimensions in which deterministic control of the emitter position can be achieved. In this work we present a fully solution-processed approach to fabricating photonic architectures containing nano-emitters whose position can be controlled with nanometer precision over square-millimeter regions. By a combination of spin and dip coating we fabricate one-dimensional (1D) nanoporous photonic crystals, whose potential in different fields such as photovoltaics or sensing has been previously reported, containing monolayers of luminescent polymeric nanospheres. We demonstrate how, by modifying the position of the emitters within the photonic crystal, their emission properties (photoluminescence intensity and angular distribution) can be deterministically modified. Further, the nano-emitters can be used as a probe to study the LDOS distribution within these systems with a spatial resolution of 25 nm (provided by the probe size), carrying out macroscopic measurements over square-millimeter regions. Routes to enhancing light-matter interaction in this kind of system by combining it with metallic surfaces are finally discussed.

  12. Adequacy assessment of composite generation and transmission systems incorporating wind energy conversion systems

    NASA Astrophysics Data System (ADS)

    Gao, Yi

    The development and utilization of wind energy for satisfying electrical demand has received considerable attention in recent years due to its tremendous environmental, social and economic benefits, together with public support and government incentives. Electric power generation from wind energy behaves quite differently from that of conventional sources. The fundamentally different operating characteristics of wind energy facilities therefore affect power system reliability in a different manner than those of conventional systems. The reliability impact of such a highly variable energy source is an important aspect that must be assessed when the wind power penetration is significant. The focus of the research described in this thesis is on the utilization of state sampling Monte Carlo simulation in wind integrated bulk electric system reliability analysis and the application of these concepts in system planning and decision making. Load forecast uncertainty is an important factor in long range planning and system development. This thesis describes two approximate approaches developed to reduce the number of steps in a load duration curve which includes load forecast uncertainty, and to provide reasonably accurate generating and bulk system reliability index predictions. The developed approaches are illustrated by application to two composite test systems. A method of generating correlated random numbers with uniform distributions and a specified correlation coefficient in the state sampling method is proposed and used to conduct adequacy assessment in generating systems and in bulk electric systems containing correlated wind farms in this thesis. The studies described show that it is possible to use the state sampling Monte Carlo simulation technique to quantitatively assess the reliability implications associated with adding wind power to a composite generation and transmission system including the effects of multiple correlated wind sites. This is an important development as it permits correlated wind farms to be incorporated in large practical system studies without requiring excessive increases in computer solution time. The procedures described in this thesis for creating monthly and seasonal wind farm models should prove useful in situations where time period models are required to incorporate scheduled maintenance of generation and transmission facilities. There is growing interest in combining deterministic considerations with probabilistic assessment in order to evaluate the quantitative system risk and conduct bulk power system planning. A relatively new approach that incorporates deterministic and probabilistic considerations in a single risk assessment framework has been designated as the joint deterministic-probabilistic approach. The research work described in this thesis illustrates that the joint deterministic-probabilistic approach can be effectively used to integrate wind power in bulk electric system planning. The studies described in this thesis show that the application of the joint deterministic-probabilistic method provides more stringent results for a system with wind power than the traditional deterministic N-1 method because the joint deterministic-probabilistic technique is driven by the deterministic N-1 criterion with an added probabilistic perspective which recognizes the power output characteristics of a wind turbine generator.

  13. Stochastic Partial Differential Equation Solver for Hydroacoustic Modeling: Improvements to Paracousti Sound Propagation Solver

    NASA Astrophysics Data System (ADS)

    Preston, L. A.

    2017-12-01

    Marine hydrokinetic (MHK) devices offer a clean, renewable alternative energy source for the future. Responsible utilization of MHK devices, however, requires that the effects of acoustic noise produced by these devices on marine life and marine-related human activities be well understood. Paracousti is a 3-D full waveform acoustic modeling suite that can accurately propagate MHK noise signals in the complex bathymetry found in the near-shore to open ocean environment and considers real properties of the seabed, water column, and air-surface interface. However, this is a deterministic simulation that assumes the environment and source are exactly known. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected noise levels within the marine environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. One method is to use Monte Carlo (MC) techniques where simulation results from a large number of deterministic solutions are aggregated to provide statistical properties of the output signal. However, MC methods can be computationally prohibitive since they can require tens of thousands or more simulations to build up an accurate representation of those statistical properties. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a small fraction of the computational cost of MC. We are developing a SPDE solver for the 3-D acoustic wave propagation problem called Paracousti-UQ to help regulators and operators assess the statistical properties of environmental noise produced by MHK devices. In this presentation, we present the SPDE method and compare statistical distributions of simulated acoustic signals in simple models to MC simulations to show the accuracy and efficiency of the SPDE method. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.

  14. Aspen succession in the Intermountain West: A deterministic model

    Treesearch

    Dale L. Bartos; Frederick R. Ward; George S. Innis

    1983-01-01

    A deterministic model of succession in aspen forests was developed using existing data and intuition. The degree of uncertainty, which was determined by allowing the parameter values to vary at random within limits, was larger than desired. This report presents results of an analysis of model sensitivity to changes in parameter values. These results have indicated...

  15. Guidelines 13 and 14—Prediction uncertainty

    USGS Publications Warehouse

    Hill, Mary C.; Tiedeman, Claire

    2005-01-01

    An advantage of using optimization for model development and calibration is that optimization provides methods for evaluating and quantifying prediction uncertainty. Both deterministic and statistical methods can be used. Guideline 13 discusses using regression and post-audits, which we classify as deterministic methods. Guideline 14 discusses inferential statistics and Monte Carlo methods, which we classify as statistical methods.

  16. Stochastic and deterministic models for agricultural production networks.

    PubMed

    Bai, P; Banks, H T; Dediu, S; Govan, A Y; Last, M; Lloyd, A L; Nguyen, H K; Olufsen, M S; Rempala, G; Slenning, B D

    2007-07-01

    An approach to modeling the impact of disturbances in an agricultural production network is presented. A stochastic model and its approximate deterministic model for averages over sample paths of the stochastic system are developed. Simulations, sensitivity and generalized sensitivity analyses are given. Finally, it is shown how diseases may be introduced into the network and corresponding simulations are discussed.
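
    A toy version of such a stochastic/deterministic model pair, for a single production node with constant production and per-unit shipping (the rates and horizon are placeholders):

        import numpy as np

        rng = np.random.default_rng(1)
        birth, death = 5.0, 0.1      # production rate, per-unit shipping rate
        T = 50.0

        def gillespie(x0):
            """One stochastic sample path of the birth-death inventory model."""
            t, x, path = 0.0, x0, [(0.0, x0)]
            while t < T:
                rates = np.array([birth, death * x])
                total = rates.sum()
                t += rng.exponential(1.0 / total)      # time to next event
                x += 1 if rng.random() < rates[0] / total else -1
                path.append((t, x))
            return path

        # Deterministic approximation for the mean: dx/dt = birth - death * x,
        # whose equilibrium is birth / death.
        x_eq = birth / death
        final_states = [gillespie(x0=0)[-1][1] for _ in range(200)]
        print("deterministic equilibrium:", x_eq)
        print("stochastic mean at T:     ", np.mean(final_states))

    Averaging many stochastic paths recovers the deterministic curve, which is the relationship the paper exploits; a disturbance (e.g., a disease outbreak) would enter as a time-dependent change in one of the rates.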

  17. A DETERMINISTIC GEOMETRIC REPRESENTATION OF TEMPORAL RAINFALL: SENSITIVITY ANALYSIS FOR A STORM IN BOSTON. (R824780)

    EPA Science Inventory

    In an earlier study, Puente and Obregón [Water Resour. Res. 32(1996)2825] reported on the usage of a deterministic fractal–multifractal (FM) methodology to faithfully describe an 8.3 h high-resolution rainfall time series in Boston, gathered every 15 s ...

  18. The Role of Probability and Intentionality in Preschoolers' Causal Generalizations

    ERIC Educational Resources Information Center

    Sobel, David M.; Sommerville, Jessica A.; Travers, Lea V.; Blumenthal, Emily J.; Stoddard, Emily

    2009-01-01

    Three experiments examined whether preschoolers recognize that the causal properties of objects generalize to new members of the same set given either deterministic or probabilistic data. Experiment 1 found that 3- and 4-year-olds were able to make such a generalization given deterministic data but were at chance when they observed probabilistic…

  19. Working Beyond Moore’s Limit - Coherent Nonlinear Optical Control of Individual and Coupled Single Electron Doped Quantum Dots

    DTIC Science & Technology

    2015-07-06

    preparation for deterministic spin-photon entanglement; (3) Demonstration of initialization of the 2-qubit states; (4) Demonstration of nonlocal nuclear... Demonstration of a flying qubit by entanglement of the quantum dot spin polarization with the polarization of a spontaneously emitted photon. Future... coherent optical control steps in preparation for deterministic spin-photon entanglement; (3) Demonstration of initialization of the 2-qubit states in

  20. Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harrisson, G.; Marleau, G.

    2012-07-01

    The Canadian SCWR has the potential to achieve the goals that Generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06, together with different microscopic cross-section libraries based on the ENDF/B-VII.0 evaluated nuclear data file, have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give results most consistent with those of SERPENT. (authors)

  1. A Summary Report on the NPH Evaluation of 105-L Disassembly Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, J.R.

    2002-04-30

    The L Area Disassembly Basin (LDB) is evaluated for the natural phenomena hazards (NPH) effects due to earthquake, wind, and tornado in accordance with DOE Order 420.1 and DOE-STD-1020. The deterministic analysis is performed for a Performance Category 3 (PC3) level of loads. Savannah River Site (SRS) specific NPH loads and design criteria are obtained from Engineering Standard 01060. It is demonstrated that the demand-to-capacity (D/C) ratios for primary and significant structural elements are acceptable (equal to or less than 1.0). Thus, the 105-L Disassembly Basin building structure is qualified for the PC3 NPH effects in accordance with DOE Order 420.1.

  2. Reflector and Protections in a Sodium-cooled Fast Reactor: Modelling and Optimization

    NASA Astrophysics Data System (ADS)

    Blanchet, David; Fontaine, Bruno

    2017-09-01

    The ASTRID project (Advanced Sodium Technological Reactor for Industrial Demonstration) is a Generation IV nuclear reactor concept under development in France [1]. In this frame, studies are under way to optimize the radial reflectors and protections. Considering radial protections made of natural boron carbide, this study assesses the neutronic performance of MgO as the reference choice for the reflector material, in comparison with other possible materials including a more conventional stainless steel. The analysis is based upon simplified 1-D and 2-D deterministic modelling of the reactor, providing simplified interfaces between core, reflector and protections. Such models allow detailed reaction rate distributions to be examined; they also provide physical insight into local spectral effects occurring at the core-reflector and reflector-protection interfaces.

  3. A Comparison of Probabilistic and Deterministic Campaign Analysis for Human Space Exploration

    NASA Technical Reports Server (NTRS)

    Merrill, R. Gabe; Andraschko, Mark; Stromgren, Chel; Cirillo, Bill; Earle, Kevin; Goodliff, Kandyce

    2008-01-01

    Human space exploration is by its very nature an uncertain endeavor. Vehicle reliability, technology development risk, budgetary uncertainty, and launch uncertainty all contribute to stochasticity in an exploration scenario. However, traditional strategic analysis has been done in a deterministic manner, analyzing and optimizing the performance of a series of planned missions. History has shown that exploration scenarios rarely follow such a planned schedule. This paper describes a methodology to integrate deterministic and probabilistic analysis of scenarios in support of human space exploration. Probabilistic strategic analysis is used to simulate "possible" scenario outcomes, based upon the likelihood of occurrence of certain events and a set of pre-determined contingency rules. The results of the probabilistic analysis are compared to the nominal results from the deterministic analysis to evaluate the robustness of the scenario to adverse events and to test and optimize contingency planning.

  4. Controllability of Deterministic Networks with the Identical Degree Sequence

    PubMed Central

    Ma, Xiujuan; Zhao, Haixing; Wang, Binghong

    2015-01-01

    Controlling complex networks is an essential problem in network science and engineering. Recent advances indicate that the controllability of a complex network depends on the network's topology. Liu, Barabási and co-workers speculated that the degree distribution was one of the most important factors affecting controllability for arbitrary complex directed networks with random link weights. In this paper, we analyse the effect of the degree distribution on the controllability of unweighted, undirected deterministic networks. We introduce a class of deterministic networks with identical degree sequence, called (x,y)-flowers. We analyse the controllability of two such deterministic networks (the (1,3)-flower and the (2,2)-flower) in detail using exact controllability theory and give exact results for the minimum number of driver nodes for the two networks. In simulation, we compare the controllability of (x,y)-flower networks. Our results show that the networks in the (x,y)-flower family have the same degree sequence, but their controllability is totally different, so the degree distribution by itself is not sufficient to characterize the controllability of unweighted, undirected deterministic networks. PMID:26020920
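
    Exact controllability theory makes the computation behind such results concrete: for an undirected network, the minimum number of driver nodes equals the maximum geometric multiplicity over the eigenvalues of the adjacency matrix. A small sketch of that criterion (the rounding tolerance used to group nearly equal eigenvalues is an assumption):

        import numpy as np

        def min_driver_nodes(A, tol=1e-8):
            """Minimum number of driver nodes N_D for an undirected network with
            adjacency matrix A, via exact controllability theory:
            N_D = max over distinct eigenvalues lambda of (N - rank(lambda*I - A)).
            """
            N = A.shape[0]
            eigvals = np.linalg.eigvalsh(A)            # symmetric: real eigenvalues
            distinct = np.unique(np.round(eigvals / tol) * tol)
            mult = [N - np.linalg.matrix_rank(lam * np.eye(N) - A, tol=tol)
                    for lam in distinct]
            return max(1, max(mult))

        # Example: a path graph on 4 nodes (all eigenvalues simple, so N_D = 1)
        A = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        print("N_D =", min_driver_nodes(A))

    Two networks with the same degree sequence can have very different eigenvalue multiplicities, which is exactly why their controllability can differ.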

  5. Parameter Estimation in Epidemiology: from Simple to Complex Dynamics

    NASA Astrophysics Data System (ADS)

    Aguiar, Maíra; Ballesteros, Sebastién; Boto, João Pedro; Kooi, Bob W.; Mateus, Luís; Stollenwerk, Nico

    2011-09-01

    We revisit the parameter estimation framework for population biological dynamical systems, and apply it to calibrate various models in epidemiology with empirical time series, namely influenza and dengue fever. When it comes to more complex models, like the multi-strain dynamics describing the virus-host interaction in dengue fever, even the most recently developed parameter estimation techniques, such as maximum likelihood iterated filtering, reach their computational limits. However, the first results of parameter estimation with data on dengue fever from Thailand indicate a subtle interplay between stochasticity and the deterministic skeleton. The deterministic system on its own already displays complex dynamics, up to deterministic chaos and coexistence of multiple attractors.

  6. Rockslide susceptibility and hazard assessment for mitigation works design along vertical rocky cliffs: workflow proposal based on a real case-study conducted in Sacco (Campania), Italy

    NASA Astrophysics Data System (ADS)

    Pignalosa, Antonio; Di Crescenzo, Giuseppe; Marino, Ermanno; Terracciano, Rosario; Santo, Antonio

    2015-04-01

    The work presented here concerns a case study in which a complete multidisciplinary workflow was applied for an extensive assessment of rockslide susceptibility and hazard in a common scenario: vertical, fractured rocky cliffs. The studied area is located in a high-relief zone in Southern Italy (Sacco, Salerno, Campania), characterized by wide vertical rocky cliffs formed by tectonized thick successions of shallow-water limestones. The study comprised the following phases: a) topographic surveying integrating 3D laser scanning, photogrammetry and GNSS; b) geological surveying, characterization of single instabilities and geomechanical surveying, conducted by rock-climbing geologists; c) processing of 3D data and reconstruction of high-resolution geometrical models; d) structural and geomechanical analyses; e) data filing in a GIS-based spatial database; f) geostatistical and spatial analyses and mapping of the whole data set; g) 3D rockfall analysis. The main goals of the study were a) to set up an investigation method that achieves a complete and thorough characterization of the slope stability conditions and b) to provide a detailed basis for an accurate definition of the reinforcement and mitigation systems. For these purposes, the most up-to-date methods of field surveying, remote sensing, 3D modelling and geospatial data analysis were integrated in a systematic workflow, while accounting for the economic sustainability of the whole project. A novel integrated approach was applied, fusing deterministic and statistical surveying methods. This approach made it possible to deal with the wide extent of the studied area (nearly 200,000 m²) without compromising the accuracy of the results. The deterministic phase, based on field characterization of single instabilities and their further analysis on 3D models, was applied to delineate the peculiarities of each feature. The statistical approach, based on geostructural field mapping and on point geomechanical data from scan-line surveying, allowed partitioning of the rock mass into homogeneous geomechanical sectors and data interpolation through bounded geostatistical analyses on 3D models. All data resulting from both approaches were referenced and filed in a single spatial database and used in global geostatistical analyses to derive a fully modelled and comprehensive evaluation of rockslide susceptibility. The described workflow yielded the following innovative results: a) a detailed census of single potential instabilities, through a spatial database recording their geometrical, geological and mechanical features along with the expected failure modes; b) a high-resolution characterization of rockslide susceptibility across the whole slope, based on partitioning the area according to stability and mechanical conditions that can be related directly to specific hazard mitigation systems; c) the exact extent of the area exposed to rockslide hazard, along with the dynamic parameters of the expected phenomena; d) an intervention design for hazard mitigation.

  7. Generation of an activation map for decommissioning planning of the Berlin Experimental Reactor-II

    NASA Astrophysics Data System (ADS)

    Lapins, Janis; Guilliard, Nicole; Bernnat, Wolfgang

    2017-09-01

    The BER-II is a 10 MW experimental reactor that has been operated since 1974. Its planned operation will end in 2019. To support decommissioning planning, a map of the overall distribution of the relevant radionuclides has to be created according to the state of the art. In this paper, a procedure to create these 3-D maps using a combination of MCNP and deterministic methods is presented. With this approach, an activation analysis is performed for the whole reactor geometry, including the most remote parts of the concrete shielding.

  8. Multi-Algorithm Particle Simulations with Spatiocyte.

    PubMed

    Arjunan, Satya N V; Takahashi, Koichi

    2017-01-01

    As quantitative biologists get more measurements of spatially regulated systems such as cell division and polarization, simulation of reaction and diffusion of proteins using the data is becoming increasingly relevant to uncover the mechanisms underlying the systems. Spatiocyte is a lattice-based stochastic particle simulator for biochemical reaction and diffusion processes. Simulations can be performed at single molecule and compartment spatial scales simultaneously. Molecules can diffuse and react in 1D (filament), 2D (membrane), and 3D (cytosol) compartments. The implications of crowded regions in the cell can be investigated because each diffusing molecule has spatial dimensions. Spatiocyte adopts multi-algorithm and multi-timescale frameworks to simulate models that simultaneously employ deterministic, stochastic, and particle reaction-diffusion algorithms. Comparison of light microscopy images to simulation snapshots is supported by Spatiocyte microscopy visualization and molecule tagging features. Spatiocyte is open-source software and is freely available at http://spatiocyte.org.
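
    A stripped-down illustration of the lattice-particle scheme that such simulators build on, namely a single species diffusing on a 2D lattice with volume exclusion (a generic sketch, not the Spatiocyte API or its event scheduling):

        import numpy as np

        rng = np.random.default_rng(2)
        L, n_particles, n_steps = 50, 300, 1000
        occ = np.zeros((L, L), dtype=bool)                   # lattice occupancy
        pos = rng.choice(L * L, size=n_particles, replace=False)
        pos = np.stack(np.unravel_index(pos, (L, L)), axis=1)
        occ[pos[:, 0], pos[:, 1]] = True
        moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

        for _ in range(n_steps):
            i = rng.integers(n_particles)                    # pick a random walker
            target = (pos[i] + moves[rng.integers(4)]) % L   # periodic boundaries
            if not occ[target[0], target[1]]:                # volume exclusion (crowding)
                occ[pos[i][0], pos[i][1]] = False
                occ[target[0], target[1]] = True
                pos[i] = target

        # particle number is conserved, so this fraction stays constant
        print("occupied fraction:", occ.mean())

    Because each molecule occupies a lattice voxel, blocked moves naturally reproduce the slowed diffusion seen in crowded cellular regions; a full simulator layers reactions, membranes, filaments, and deterministic compartments on top of this kernel.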

  9. Patterned arrays of lateral heterojunctions within monolayer two-dimensional semiconductors

    DOE PAGES

    Mahjouri-Samani, Masoud; Lin, Ming-Wei; Wang, Kai; ...

    2015-07-22

    The formation of semiconductor heterojunctions and their high-density integration are foundations of modern electronics and optoelectronics. To enable two-dimensional (2D) crystalline semiconductors as building blocks in next-generation electronics, developing methods to deterministically form lateral heterojunctions is crucial. Here we demonstrate a process strategy for the formation of lithographically patterned lateral semiconducting heterojunctions within a single 2D crystal. E-beam lithography is used to pattern MoSe₂ monolayer crystals with SiO₂, and the exposed locations are selectively and totally converted to MoS₂ using pulsed laser deposition (PLD) of sulfur in order to form MoSe₂/MoS₂ heterojunctions in predefined patterns. The junctions and conversion process are characterized by atomically resolved scanning transmission electron microscopy, photoluminescence, and Raman spectroscopy. This demonstration of lateral semiconductor heterojunction arrays within a single 2D crystal is an essential step for the lateral integration of 2D semiconductor building blocks with different electronic and optoelectronic properties for high-density, ultrathin circuitry.

  10. Robust Fixed-Structure Control

    DTIC Science & Technology

    1994-10-30

    "Deterministic Foundation for Statistical Energy Analysis," J. Sound Vibr., to appear. 1.96 D. S. Bernstein and S. P. Bhat, "Lyapunov Stability, Semistability... S. Bernstein, "Power Flow, Energy Balance, and Statistical Energy Analysis for Large Scale, Interconnected Systems," Proc. Amer. Contr. Conf., pp.

  11. D-VASim: an interactive virtual laboratory environment for the simulation and analysis of genetic circuits.

    PubMed

    Baig, Hasan; Madsen, Jan

    2017-01-15

    Simulation and behavioral analysis of genetic circuits is a standard approach of functional verification prior to their physical implementation. Many software tools have been developed to perform in silico analysis for this purpose, but none of them allow users to interact with the model during runtime. Runtime interaction gives the user a feeling of being in the lab performing a real-world experiment. In this work, we present a user-friendly software tool named D-VASim (Dynamic Virtual Analyzer and Simulator), which provides a virtual laboratory environment to simulate and analyze the behavior of genetic logic circuit models represented in SBML (Systems Biology Markup Language). Hence, SBML models developed in other software environments can be analyzed and simulated in D-VASim. D-VASim offers deterministic as well as stochastic simulation and differs from other software tools by being able to extract and validate the Boolean logic from the SBML model. D-VASim is also capable of analyzing the threshold value and propagation delay of a genetic circuit model. D-VASim is available for Windows and Mac OS and can be downloaded from bda.compute.dtu.dk/downloads/. haba@dtu.dk, jama@dtu.dk. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Automated Weight-Window Generation for Threat Detection Applications Using ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, Scott W; Miller, Thomas Martin; Evans, Thomas M

    2009-01-01

    Deterministic transport codes have been used for some time to generate weight-window parameters that can improve the efficiency of Monte Carlo simulations. As the use of this hybrid computational technique is becoming more widespread, the scope of applications in which it is being applied is expanding. An active source of new applications is the field of homeland security, particularly the detection of nuclear material threats. For these problems, automated hybrid methods offer an efficient alternative to trial-and-error variance reduction techniques (e.g., geometry splitting or the stochastic weight-window generator). The ADVANTG code has been developed to automate the generation of weight-window parameters for MCNP using the Consistent Adjoint Driven Importance Sampling method and employs the TORT or Denovo 3-D discrete ordinates codes to generate importance maps. In this paper, we describe the application of ADVANTG to a set of threat-detection simulations. We present numerical results for an 'active-interrogation' problem in which a standard cargo container is irradiated by a deuterium-tritium fusion neutron generator. We also present results for two passive detection problems in which a cargo container holding a shielded neutron or gamma source is placed near a portal monitor. For the passive detection problems, ADVANTG obtains an O(10⁴) speedup and, for a detailed gamma spectrum tally, an average O(10²) speedup relative to implicit-capture-only simulations, including the deterministic calculation time. For the active-interrogation problem, an O(10⁴) speedup is obtained when compared to a simulation with angular source biasing and crude geometry splitting.
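
    The weight-window idea that ADVANTG automates can be illustrated with a toy 1-D slab problem. The hedged Python sketch below uses an assumed importance map that doubles per slab; particles whose statistical weight drifts above the window are split, and those that fall below it play Russian roulette, leaving the tally unbiased. It is a minimal sketch of the generic technique, not the ADVANTG/MCNP implementation.

      import numpy as np

      rng = np.random.default_rng(1)
      slabs = 10          # 1-D shield divided into 10 equal slabs
      p_abs = 0.5         # absorption probability per slab crossing
      # hypothetical importance map that doubles toward the detector;
      # the weight-window center is proportional to 1/importance
      w_center = 0.5 ** np.arange(slabs)
      w_low, w_high = 0.5 * w_center, 2.0 * w_center

      def transport(histories=20000):
          tally = 0.0
          for _ in range(histories):
              bank = [(0, 1.0)]                 # (slab index, statistical weight)
              while bank:
                  cell, w = bank.pop()
                  while cell < slabs:
                      if rng.random() < p_abs:  # analog absorption: kill
                          break
                      cell += 1
                      if cell == slabs:         # escaped: score detector tally
                          tally += w
                          break
                      if w > w_high[cell]:      # too heavy: split into n parts
                          n = int(np.ceil(w / w_center[cell]))
                          for _ in range(n - 1):
                              bank.append((cell, w / n))
                          w /= n
                      elif w < w_low[cell]:     # too light: Russian roulette
                          if rng.random() < w / w_center[cell]:
                              w = w_center[cell]
                          else:
                              break
          return tally / histories

      print("transmission estimate:", transport())   # analytic answer: 0.5**10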

  13. Inferring Fitness Effects from Time-Resolved Sequence Data with a Delay-Deterministic Model.

    PubMed

    Nené, Nuno R; Dunham, Alistair S; Illingworth, Christopher J R

    2018-05-01

    A common challenge arising from the observation of an evolutionary system over time is to infer the magnitude of selection acting upon a specific genetic variant, or variants, within the population. The inference of selection may be confounded by the effects of genetic drift in a system, leading to the development of inference procedures to account for these effects. However, recent work has suggested that deterministic models of evolution may be effective in capturing the effects of selection even under complex models of demography, suggesting the more general application of deterministic approaches to inference. Responding to this literature, we here note a case in which a deterministic model of evolution may give highly misleading inferences, resulting from the nondeterministic properties of mutation in a finite population. We propose an alternative approach that acts to correct for this error, and which we denote the delay-deterministic model. Applying our model to a simple evolutionary system, we demonstrate its performance in quantifying the extent of selection acting within that system. We further consider the application of our model to sequence data from an evolutionary experiment. We outline scenarios in which our model may produce improved results for the inference of selection, noting that such situations can be easily identified via the use of a regular deterministic model. Copyright © 2018 Nené et al.
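
    For intuition, the sketch below implements the standard deterministic haploid selection recursion x' = x(1 + s)/(1 + s x), the kind of model the authors contrast with their delay-deterministic variant, and fits the selection coefficient s to noisy synthetic frequencies by least squares. All parameter values are illustrative assumptions, and this is not the authors' code.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def trajectory(x0, s, generations):
          # deterministic haploid selection: x' = x(1+s) / (1 + s*x)
          x = np.empty(generations)
          x[0] = x0
          for t in range(1, generations):
              x[t] = x[t - 1] * (1 + s) / (1 + s * x[t - 1])
          return x

      # synthetic "observed" frequencies with binomial sampling noise
      rng = np.random.default_rng(2)
      true_s, depth = 0.05, 200
      obs = rng.binomial(depth, trajectory(0.1, true_s, 50)) / depth

      def loss(s):
          return np.sum((obs - trajectory(obs[0], s, 50)) ** 2)

      fit = minimize_scalar(loss, bounds=(-0.5, 0.5), method="bounded")
      print("inferred selection coefficient:", fit.x)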

  14. The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes

    NASA Astrophysics Data System (ADS)

    Bogdanova, E. V.; Kuznetsov, A. N.

    2017-01-01

    Increasing the safety of nuclear power plants requires complete and reliable information about the processes occurring in the core of an operating reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it is characterized by long calculation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research into the possibility of combining stochastic and deterministic algorithms in calculations of the BN-600 reactor. This is one part of a larger work carried out in the framework of a graduation project at the NRC "Kurchatov Institute" in cooperation with S. S. Gorodkov and M. A. Kalugin. A 2-D layer of the BN-600 reactor core from the international benchmark test published in report IAEA-TECDOC-1623 is considered. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macro cross sections, diffusion coefficients, the effective multiplication factor, and the distributions of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.

  15. Deterministic secure quantum communication using a single d-level system

    PubMed Central

    Jiang, Dong; Chen, Yuanyuan; Gu, Xuemei; Xie, Ling; Chen, Lijun

    2017-01-01

    Deterministic secure quantum communication (DSQC) can transmit secret messages between two parties without first generating a shared secret key. Compared with quantum key distribution (QKD), DSQC avoids the waste of qubits arising from basis reconciliation and thus reaches higher efficiency. In this paper, based on data-block transmission and order-rearrangement techniques, we propose a DSQC protocol. It utilizes a set of single d-level systems as message carriers, which are used to directly encode the secret message in one communication process. Theoretical analysis shows that the employed techniques guarantee security, and the use of a higher-dimensional quantum system makes our protocol achieve higher security and efficiency. Since only quantum memory is required for implementation, our protocol is feasible with current technologies. Furthermore, the Trojan horse attack (THA) is taken into account in our protocol. We give a THA model and show that a THA significantly increases the multi-photon rate and can thus be detected. PMID:28327557

  16. Stochasticity, succession, and environmental perturbations in a fluidic ecosystem

    PubMed Central

    Zhou, Jizhong; Deng, Ye; Zhang, Ping; Xue, Kai; Liang, Yuting; Van Nostrand, Joy D.; Yang, Yunfeng; He, Zhili; Wu, Liyou; Stahl, David A.; Hazen, Terry C.; Tiedje, James M.; Arkin, Adam P.

    2014-01-01

    Unraveling the drivers of community structure and succession in response to environmental change is a central goal in ecology. Although the mechanisms shaping community structure have been intensively examined, those controlling ecological succession remain elusive. To understand the relative importance of stochastic and deterministic processes in mediating microbial community succession, a unique framework composed of four different cases was developed for fluidic and nonfluidic ecosystems. The framework was then tested for one fluidic ecosystem: a groundwater system perturbed by adding emulsified vegetable oil (EVO) for uranium immobilization. Our results revealed that groundwater microbial community diverged substantially away from the initial community after EVO amendment and eventually converged to a new community state, which was closely clustered with its initial state. However, their composition and structure were significantly different from each other. Null model analysis indicated that both deterministic and stochastic processes played important roles in controlling the assembly and succession of the groundwater microbial community, but their relative importance was time dependent. Additionally, consistent with the proposed conceptual framework but contradictory to conventional wisdom, the community succession responding to EVO amendment was primarily controlled by stochastic rather than deterministic processes. During the middle phase of the succession, the roles of stochastic processes in controlling community composition increased substantially, ranging from 81.3% to 92.0%. Finally, there are limited successional studies available to support different cases in the conceptual framework, but further well-replicated explicit time-series experiments are needed to understand the relative importance of deterministic and stochastic processes in controlling community succession. PMID:24550501

  17. Detecting and disentangling nonlinear structure from solar flux time series

    NASA Technical Reports Server (NTRS)

    Ashrafi, S.; Roszman, L.

    1992-01-01

    Interest in solar activity has grown in the past two decades for many reasons. Most importantly for flight dynamics, solar activity changes the atmospheric density, which has important implications for spacecraft trajectory and lifetime prediction. Building upon the previously developed Rayleigh-Benard nonlinear dynamic solar model, which exhibits many dynamic behaviors observed in the Sun, this work introduces new chaotic solar forecasting techniques. Our attempt to use recently developed nonlinear chaotic techniques to model and forecast solar activity has uncovered highly entangled dynamics. Numerical techniques for decoupling additive and multiplicative white noise from deterministic dynamics are presented, and the falloff of the power spectrum at high frequencies is examined as a possible means of distinguishing deterministic chaos from noise, whether spectrally white or colored. The power spectral techniques presented are less cumbersome than current methods for identifying deterministic chaos, which require more computationally intensive calculations, such as those involving Lyapunov exponents and attractor dimensions.
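
    A minimal illustration of the spectral-falloff diagnostic: the Python sketch below builds a toy quasi-periodic series with additive white noise, computes its periodogram, and fits the log-log slope of the high-frequency tail, where white noise is flat while low-dimensional deterministic dynamics typically fall off steeply. The signal and the frequency cutoff are arbitrary assumptions, not the paper's data.

      import numpy as np

      rng = np.random.default_rng(3)
      n, dt = 4096, 1.0
      t = np.arange(n) * dt
      # toy quasi-periodic "solar" signal plus additive white noise
      x = (np.sin(0.02 * t) + 0.5 * np.sin(0.013 * t + 1.0)
           + 0.3 * rng.standard_normal(n))

      freqs = np.fft.rfftfreq(n, dt)[1:]
      power = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2

      # log-log slope over the upper quarter of the spectrum: near 0 for
      # white noise, steeply negative for smooth deterministic dynamics
      hi = freqs > 0.75 * freqs.max()
      slope = np.polyfit(np.log(freqs[hi]), np.log(power[hi]), 1)[0]
      print("high-frequency log-log slope:", slope)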

  18. Soft tubular microfluidics for 2D and 3D applications

    PubMed Central

    Xi, Wang; Kong, Fang; Yeo, Joo Chuan; Yu, Longteng; Sonam, Surabhi; Dao, Ming; Gong, Xiaobo; Lim, Chwee Teck

    2017-01-01

    Microfluidics has been the key component for many applications, including biomedical devices, chemical processors, microactuators, and even wearable devices. This technology relies on soft lithography fabrication, which requires cleanroom facilities. Although popular, this method is expensive and labor-intensive. Furthermore, current conventional microfluidic chips preclude reconfiguration, making reiterations in design very time-consuming and costly. To address these intrinsic drawbacks of microfabrication, we present an alternative solution for the rapid prototyping of microfluidic elements such as microtubes, valves, and pumps. In addition, we demonstrate how microtubes with channels of various lengths and cross-sections can be attached modularly into 2D and 3D microfluidic systems for functional applications. We introduce a facile method of fabricating elastomeric microtubes as the basic building blocks for microfluidic devices. These microtubes are transparent, biocompatible, highly deformable, and customizable to various sizes and cross-sectional geometries. By configuring the microtubes into deterministic geometry, we enable rapid, low-cost formation of microfluidic assemblies without compromising their precision and functionality. We demonstrate configurable 2D and 3D microfluidic systems for applications in different domains. These include microparticle sorting, microdroplet generation, biocatalytic micromotors, triboelectric sensors, and even wearable sensing. Our approach, termed soft tubular microfluidics, provides a simpler, cheaper, and faster solution for users lacking proficiency in and access to cleanroom facilities to design and rapidly construct microfluidic devices for their various applications and needs. PMID:28923968

  19. Soft tubular microfluidics for 2D and 3D applications

    NASA Astrophysics Data System (ADS)

    Xi, Wang; Kong, Fang; Yeo, Joo Chuan; Yu, Longteng; Sonam, Surabhi; Dao, Ming; Gong, Xiaobo; Teck Lim, Chwee

    2017-10-01

    Microfluidics has been the key component for many applications, including biomedical devices, chemical processors, microactuators, and even wearable devices. This technology relies on soft lithography fabrication, which requires cleanroom facilities. Although popular, this method is expensive and labor-intensive. Furthermore, current conventional microfluidic chips preclude reconfiguration, making reiterations in design very time-consuming and costly. To address these intrinsic drawbacks of microfabrication, we present an alternative solution for the rapid prototyping of microfluidic elements such as microtubes, valves, and pumps. In addition, we demonstrate how microtubes with channels of various lengths and cross-sections can be attached modularly into 2D and 3D microfluidic systems for functional applications. We introduce a facile method of fabricating elastomeric microtubes as the basic building blocks for microfluidic devices. These microtubes are transparent, biocompatible, highly deformable, and customizable to various sizes and cross-sectional geometries. By configuring the microtubes into deterministic geometry, we enable rapid, low-cost formation of microfluidic assemblies without compromising their precision and functionality. We demonstrate configurable 2D and 3D microfluidic systems for applications in different domains. These include microparticle sorting, microdroplet generation, biocatalytic micromotors, triboelectric sensors, and even wearable sensing. Our approach, termed soft tubular microfluidics, provides a simpler, cheaper, and faster solution for users lacking proficiency in and access to cleanroom facilities to design and rapidly construct microfluidic devices for their various applications and needs.

  20. Configurable Crossbar Switch for Deterministic, Low-latency Inter-blade Communications in a MicroTCA Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karamooz, Saeed; Breeding, John Eric; Justice, T Alan

    As MicroTCA expands into applications beyond the telecommunications industry from which it originated, it faces new challenges in the area of inter-blade communications. The ability to achieve deterministic, low-latency communication between blades is critical to realizing a scalable architecture. In the past, legacy bus architectures accomplished inter-blade communications using dedicated parallel buses across the backplane. Because of limited fabric resources on its backplane, MicroTCA uses the carrier hub (MCH) for this purpose. Unfortunately, MCH products from commercial vendors are limited to standard bus protocols such as PCI Express, Serial RapidIO, and 10/40GbE. While these protocols have exceptional throughput capability, they are neither deterministic nor necessarily low-latency. To overcome this limitation, an MCH has been developed based on the Xilinx Virtex-7 690T FPGA. This MCH provides the system architect/developer complete flexibility in both the interface protocol and the routing of information between blades. In this paper, we present the application of this configurable MCH concept to the Machine Protection System under development for the Spallation Neutron Source's proton accelerator. Specifically, we demonstrate the use of the configurable MCH as a 12×4-lane crossbar switch using the Aurora protocol to achieve a deterministic, low-latency data link. In this configuration, the crossbar has an aggregate bandwidth of 48 GB/s.

  1. Where’s the Noise? Key Features of Spontaneous Activity and Neural Variability Arise through Learning in a Deterministic Network

    PubMed Central

    Hartmann, Christoph; Lazar, Andreea; Nessler, Bernhard; Triesch, Jochen

    2015-01-01

    Even in the absence of sensory stimulation the brain is spontaneously active. This background “noise” seems to be the dominant cause of the notoriously high trial-to-trial variability of neural recordings. Recent experimental observations have extended our knowledge of trial-to-trial variability and spontaneous activity in several directions: 1. Trial-to-trial variability systematically decreases following the onset of a sensory stimulus or the start of a motor act. 2. Spontaneous activity states in sensory cortex outline the region of evoked sensory responses. 3. Across development, spontaneous activity aligns itself with typical evoked activity patterns. 4. The spontaneous brain activity prior to the presentation of an ambiguous stimulus predicts how the stimulus will be interpreted. At present it is unclear how these observations relate to each other and how they arise in cortical circuits. Here we demonstrate that all of these phenomena can be accounted for by a deterministic self-organizing recurrent neural network model (SORN), which learns a predictive model of its sensory environment. The SORN comprises recurrently coupled populations of excitatory and inhibitory threshold units and learns via a combination of spike-timing dependent plasticity (STDP) and homeostatic plasticity mechanisms. Similar to balanced network architectures, units in the network show irregular activity and variable responses to inputs. Additionally, however, the SORN exhibits sequence learning abilities matching recent findings from visual cortex and the network’s spontaneous activity reproduces the experimental findings mentioned above. Intriguingly, the network’s behaviour is reminiscent of sampling-based probabilistic inference, suggesting that correlates of sampling-based inference can develop from the interaction of STDP and homeostasis in deterministic networks. We conclude that key observations on spontaneous brain activity and the variability of neural responses can be accounted for by a simple deterministic recurrent neural network which learns a predictive model of its sensory environment via a combination of generic neural plasticity mechanisms. PMID:26714277

  2. A 2-D/1-D transverse leakage approximation based on azimuthal, Fourier moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stimpson, Shane G.; Collins, Benjamin S.; Downar, Thomas

    Here, the MPACT code being developed collaboratively by Oak Ridge National Laboratory and the University of Michigan is the primary deterministic neutron transport solver within the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). In MPACT, the two-dimensional (2-D)/one-dimensional (1-D) scheme is the most commonly used method for solving neutron transport-based three-dimensional nuclear reactor core physics problems. Several axial solvers in this scheme assume isotropic transverse leakages, but work with the axial SN solver has extended these leakages to include both polar and azimuthal dependence. However, explicit angular representation can be burdensome for run-time and memory requirements. The work here alleviates this burden by assuming that the azimuthal dependence of the angular flux and transverse leakages is represented by a Fourier series expansion. At the heart of this is a new axial SN solver that takes in a Fourier-expanded radial transverse leakage and generates the angular fluxes used to construct the axial transverse leakages used in the 2-D Method of Characteristics calculations.

  3. A 2-D/1-D transverse leakage approximation based on azimuthal, Fourier moments

    DOE PAGES

    Stimpson, Shane G.; Collins, Benjamin S.; Downar, Thomas

    2017-01-12

    Here, the MPACT code being developed collaboratively by Oak Ridge National Laboratory and the University of Michigan is the primary deterministic neutron transport solver within the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). In MPACT, the two-dimensional (2-D)/one-dimensional (1-D) scheme is the most commonly used method for solving neutron transport-based three-dimensional nuclear reactor core physics problems. Several axial solvers in this scheme assume isotropic transverse leakages, but work with the axial SN solver has extended these leakages to include both polar and azimuthal dependence. However, explicit angular representation can be burdensome for run-time and memory requirements. The work here alleviates this burden by assuming that the azimuthal dependence of the angular flux and transverse leakages is represented by a Fourier series expansion. At the heart of this is a new axial SN solver that takes in a Fourier-expanded radial transverse leakage and generates the angular fluxes used to construct the axial transverse leakages used in the 2-D Method of Characteristics calculations.
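
    The azimuthal Fourier-moment idea can be sketched in a few lines: expand a sampled azimuthal function in cos(nφ)/sin(nφ) moments and reconstruct it from the truncated series. The sketch below uses a made-up leakage profile on an equally spaced azimuthal quadrature; it illustrates only the expansion itself, not MPACT's solver, and the helper names are hypothetical.

      import numpy as np

      # sample a transverse leakage on an azimuthal quadrature (made-up data)
      n_azi = 16
      phi = (np.arange(n_azi) + 0.5) * 2 * np.pi / n_azi
      leakage = 1.0 + 0.4 * np.cos(phi) + 0.1 * np.sin(2 * phi)

      def fourier_moments(f, phi, order):
          # a0 and (an, bn) moments of a periodic azimuthal function
          a0 = f.mean()
          an = [2 * np.mean(f * np.cos(n * phi)) for n in range(1, order + 1)]
          bn = [2 * np.mean(f * np.sin(n * phi)) for n in range(1, order + 1)]
          return a0, np.array(an), np.array(bn)

      def reconstruct(a0, an, bn, phi):
          out = np.full_like(phi, a0)
          for n, (a, b) in enumerate(zip(an, bn), start=1):
              out += a * np.cos(n * phi) + b * np.sin(n * phi)
          return out

      a0, an, bn = fourier_moments(leakage, phi, order=2)
      err = np.max(np.abs(reconstruct(a0, an, bn, phi) - leakage))
      print("max reconstruction error:", err)   # ~0 for this smooth profile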

  4. Deterministic and unambiguous dense coding

    NASA Astrophysics Data System (ADS)

    Wu, Shengjun; Cohen, Scott M.; Sun, Yuqing; Griffiths, Robert B.

    2006-04-01

    Optimal dense coding using a partially entangled pure state of Schmidt rank D̄ and a noiseless quantum channel of dimension D is studied both in the deterministic case, where at most L_d messages can be transmitted with perfect fidelity, and in the unambiguous case, where when the protocol succeeds (probability τ_x) Bob knows for sure that Alice sent message x, and when it fails (probability 1-τ_x) he knows it has failed. Alice is allowed any single-shot (one use) encoding procedure, and Bob any single-shot measurement. For D̄ ≤ D a bound is obtained for L_d in terms of the largest Schmidt coefficient of the entangled state, and is compared with published results by Mozes [Phys. Rev. A 71, 012311 (2005)]. For D̄ > D it is shown that L_d is strictly less than D² unless D̄ is an integer multiple of D, in which case uniform (maximal) entanglement is not needed to achieve the optimal protocol. The unambiguous case is studied for D̄ ≤ D, assuming τ_x > 0 for a set of D̄D messages, and a bound is obtained for the average ⟨1/τ⟩. A bound on the average ⟨τ⟩ requires an additional assumption of encoding by isometries (unitaries when D̄ = D) that are orthogonal for different messages. Both bounds are saturated when τ_x is a constant independent of x, by a protocol based on one-shot entanglement concentration. For D̄ > D it is shown that (at least) D² messages can be sent unambiguously. Whether unitary (isometric) encoding suffices for optimal protocols remains a major unanswered question, both for our work and for previous studies of dense coding using partially entangled states, including noisy (mixed) states.

  5. Entanglement sensitivity to signal attenuation and amplification

    NASA Astrophysics Data System (ADS)

    Filippov, Sergey N.; Ziman, Mário

    2014-07-01

    We analyze general laws of continuous-variable entanglement dynamics during deterministic attenuation and amplification of the physical signal carrying the entanglement. These processes are inevitably accompanied by noise, so we find fundamental limitations on the noise intensities that destroy entanglement of Gaussian and non-Gaussian input states. The phase-insensitive amplification Φ_1⊗Φ_2⊗⋯⊗Φ_N with power gain κ_i ≥ 2 (≈3 dB, i = 1,...,N) is shown to destroy the entanglement of any N-mode Gaussian state even in the case of quantum-limited performance. In contrast, we demonstrate non-Gaussian states with an energy of a few photons such that their entanglement survives within a wide range of noises beyond quantum-limited performance for any degree of attenuation or gain. We detect the entanglement preservation properties of the channel Φ_1⊗Φ_2, where each mode is deterministically attenuated or amplified. Gaussian states of high energy are shown to be robust to very asymmetric attenuations, whereas non-Gaussian states are at an advantage in the case of symmetric attenuation and general amplification. If Φ_1 = Φ_2, the total noise should not exceed (1/2)√(κ² + 1) to guarantee entanglement preservation.

  6. Copper benchmark experiment for the testing of JEFF-3.2 nuclear data for fusion applications

    NASA Astrophysics Data System (ADS)

    Angelone, M.; Flammini, D.; Loreti, S.; Moro, F.; Pillon, M.; Villar, R.; Klix, A.; Fischer, U.; Kodeli, I.; Perel, R. L.; Pohorecky, W.

    2017-09-01

    A neutronics benchmark experiment on a pure copper block (dimensions 60 × 70 × 70 cm³), aimed at testing and validating recent nuclear data libraries for fusion applications, was performed in the frame of the European Fusion Program at the 14 MeV ENEA Frascati Neutron Generator (FNG). Reaction rates, neutron flux spectra, and doses were measured using different experimental techniques (e.g., activation foil techniques, an NE213 scintillator, and thermoluminescent detectors). This paper first summarizes the analyses of the experiment carried out using the MCNP5 Monte Carlo code and the European JEFF-3.2 library. Large discrepancies between calculation (C) and experiment (E) were found for the reaction rates in both the high and low neutron energy ranges. The analysis was complemented by sensitivity/uncertainty (S/U) analyses using the deterministic SUSD3D and Monte Carlo MCSEN codes, respectively. The S/U analyses made it possible to identify the cross sections and energy ranges that most affect the calculated responses. The largest discrepancy among the C/E values was observed for the thermal (capture) reactions, indicating severe deficiencies in the 63,65Cu capture and elastic cross sections at low rather than at high energy. Deterministic and MC codes produced similar results. The 14 MeV copper experiment and its analysis thus call for a revision of the JEFF-3.2 copper cross section and covariance data evaluation. A new analysis of the experiment was performed with the MCNP5 code using the revised JEFF-3.3-T2 library released by NEA and a new, not yet distributed, revised JEFF-3.2 Cu evaluation produced by KIT. A noticeable improvement of the C/E results was obtained with both new libraries.

  7. Micro-masonry for 3D additive micromanufacturing.

    PubMed

    Keum, Hohyun; Kim, Seok

    2014-08-01

    Transfer printing is a method to transfer solid micro/nanoscale materials (herein called 'inks') from the substrate where they are generated to a different substrate by utilizing elastomeric stamps. Transfer printing enables the integration of heterogeneous materials to fabricate unexampled structures or functional systems that are found in recent advanced devices such as flexible and stretchable solar cells and LED arrays. While transfer printing exhibits unique features in material assembly capability, the use of adhesive layers or surface modifications, such as deposition of a self-assembled monolayer (SAM) on substrates, to enhance the printing process hinders its wide adoption in the microassembly of microelectromechanical system (MEMS) structures and devices. To overcome this shortcoming, we developed an advanced mode of transfer printing which deterministically assembles individual microscale objects solely through controlling the surface contact area, without any surface alteration. The absence of an adhesive layer or other modification, together with the subsequent material bonding processes, ensures not only mechanical bonding but also thermal and electrical connection between assembled materials, which further opens various applications in building unusual MEMS devices.

  8. Methods of Stochastic Analysis of Complex Regimes in the 3D Hindmarsh-Rose Neuron Model

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina; Ryashko, Lev; Slepukhina, Evdokia

    A problem of stochastic nonlinear analysis of neuronal activity is studied using the example of the Hindmarsh-Rose (HR) model. For the parametric region of tonic spiking oscillations, it is shown that random noise transforms the spiking dynamic regime into a bursting one. This stochastic phenomenon is specified by qualitative changes in the distributions of random trajectories and interspike intervals (ISIs). For a quantitative analysis of the noise-induced bursting, we suggest a constructive semi-analytical approach based on the stochastic sensitivity function (SSF) technique and the method of confidence domains, which allows us to describe geometrically the distribution of random states around the deterministic attractors. Using this approach, we develop a new algorithm for estimating the critical noise intensities corresponding to the qualitative changes in the stochastic dynamics. We show that the obtained estimates are in good agreement with the numerical results. An interplay between noise-induced bursting and transitions from order to chaos is discussed.
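
    A minimal Euler-Maruyama simulation of the noise-forced Hindmarsh-Rose model shows the kind of trajectories and interspike-interval statistics the paper analyzes. The parameter values below are common textbook choices, with I picked in a regime usually associated with tonic spiking, and the noise intensity is an illustrative assumption; this is not the authors' stochastic-sensitivity code.

      import numpy as np

      rng = np.random.default_rng(4)
      # classic Hindmarsh-Rose parameters (assumed, illustrative values)
      a, b, c, d, s, x0, r, I = 1.0, 3.0, 1.0, 5.0, 4.0, -1.6, 0.006, 4.0
      sigma = 0.1                      # noise intensity in the x equation
      dt, steps = 0.01, 200000

      x, y, z = -1.5, 0.0, 0.0
      trace = np.empty(steps)
      for k in range(steps):
          dW = np.sqrt(dt) * rng.standard_normal()
          x_new = x + (y - a * x**3 + b * x**2 - z + I) * dt + sigma * dW
          y += (c - d * x**2 - y) * dt
          z += r * (s * (x - x0) - z) * dt
          x = x_new
          trace[k] = x

      # interspike intervals: times between upward crossings of x = 1
      crossings = np.where((trace[:-1] < 1.0) & (trace[1:] >= 1.0))[0]
      isi = np.diff(crossings) * dt
      print("ISI mean/std:", isi.mean(), isi.std())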

  9. Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.

    2003-01-01

    Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent to a fresh quarry face. The results at this site support using deterministic deconvolution, which incorporates the GPR instrument's unique source wavelet, as a standard part of routine GPR data processing. © 2003 Elsevier B.V. All rights reserved.
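
    The essence of deterministic deconvolution, spectral division by a known source wavelet with a small water level for stability, can be sketched as follows. The wavelet, reflectivity, and noise level are synthetic stand-ins, not the paper's 400-MHz field data.

      import numpy as np

      rng = np.random.default_rng(5)
      n, dt = 512, 0.1                      # samples and sample interval (assumed)
      t = np.arange(n) * dt
      # hypothetical "ringy" source wavelet, as if measured in air
      wavelet = np.exp(-((t - 3) ** 2)) * np.cos(8 * np.pi * (t - 3))
      # sparse reflectivity; recorded trace = reflectivity * wavelet + noise
      refl = np.zeros(n)
      refl[[100, 130, 300]] = [1.0, -0.6, 0.8]
      trace = (np.convolve(refl, wavelet, mode="full")[:n]
               + 0.01 * rng.standard_normal(n))

      # deterministic deconvolution: spectral division by the known wavelet,
      # stabilized with a water level to avoid amplifying small denominators
      W = np.fft.rfft(wavelet)
      T = np.fft.rfft(trace)
      water = 0.01 * np.max(np.abs(W)) ** 2
      est = np.fft.irfft(T * np.conj(W) / (np.abs(W) ** 2 + water), n)
      print("largest recovered spikes at samples:", np.sort(np.argsort(np.abs(est))[-3:]))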

  10. First-order reliability application and verification methods for semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-11-01

    Escalating risks of aerostructures stimulated by increasing size, complexity, and cost should no longer be ignored in conventional deterministic safety design methods. The deterministic pass-fail concept is incompatible with probability and risk assessments; stress audits are shown to be arbitrary and incomplete, and the concept compromises the performance of high-strength materials. A reliability method is proposed that combines first-order reliability principles with deterministic design variables and conventional test techniques to surmount current deterministic stress design and audit deficiencies. Accumulative and propagation design uncertainty errors are defined and appropriately implemented into the classical safety-index expression. The application is reduced to solving for a design factor that satisfies the specified reliability and compensates for uncertainty errors, and then using this design factor as, and instead of, the conventional safety factor in stress analyses. The resulting method is consistent with current analytical skills and verification practices, the culture of most designers, and the development of semistatic structural designs.
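
    For a concrete instance of the first-order reliability calculation, the short sketch below computes the classical safety index beta = (mu_R - mu_S) / sqrt(sigma_R² + sigma_S²) for a normally distributed strength-stress pair and the corresponding failure probability. The statistics are hypothetical values, not from the paper.

      from math import sqrt
      from statistics import NormalDist

      # hypothetical strength (resistance) and stress statistics, both normal
      mu_R, sigma_R = 60.0, 4.0
      mu_S, sigma_S = 45.0, 5.0

      # first-order reliability (safety) index for the margin g = R - S
      beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)
      p_fail = NormalDist().cdf(-beta)
      print(f"safety index beta = {beta:.2f}, P(failure) = {p_fail:.2e}")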

  11. Multi-Scale Multi-Physics Modeling of Matrix Transport Properties in Fractured Shale Reservoirs

    NASA Astrophysics Data System (ADS)

    Mehmani, A.; Prodanovic, M.

    2014-12-01

    Understanding the shale matrix flow behavior is imperative for successful reservoir development for hydrocarbon production and carbon storage. Without a predictive model, significant uncertainties in flowback from the formation, the communication between the fracture and matrix, as well as proper fracturing practice will ensue. Informed by SEM images, we develop deterministic network models that couple pores from multiple scales and their respective fluid physics. The models are used to investigate sorption hysteresis as an affordable way of inferring the nanoscale pore structure at core scale. In addition, restricted diffusion as a function of pore shape, pore-throat size ratios, and network connectivity is computed to make correct interpretation of the 2D NMR maps possible. Our novel pore network models have the ability to match sorption hysteresis measurements without any tuning parameters. The results clarify a common misconception of linking type 3 nitrogen hysteresis curves to only the shale pore shape and show promising sensitivity for nanopore structure inference at core scale. The results on restricted diffusion shed light on the importance of including shape factors in 2D NMR interpretations. A priori "weighting factors" as a function of pore-throat and throat-length ratio are presented, and the effect of network connectivity on diffusion is quantitatively assessed. We are currently working on verifying our models with experimental data gathered from the Eagle Ford formation.

  12. ASSESSMENT OF TWO PHYSICALLY BASED WATERSHED MODELS BASED ON THEIR PERFORMANCES OF SIMULATING SEDIMENT MOVEMENT OVER SMALL WATERSHEDS

    EPA Science Inventory


    Abstract: Two physically based and deterministic models, CASC2-D and KINEROS, are evaluated and compared for their performance in modeling sediment movement on a small agricultural watershed over several events. Each model has a different conceptualization of a watershed. CASC...

  13. Integrated 3-D Ground-Penetrating Radar, Outcrop, and Borehole Data Applied to Reservoir Characterization and Flow Simulation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMechan et al.

    2001-08-31

    Existing reservoir models are based on 2-D outcrop; 3-D aspects are inferred from correlation between wells and so are inadequately constrained for reservoir simulations. To overcome these deficiencies, we initiated a multidimensional characterization of reservoir analogs in the Cretaceous Ferron Sandstone in Utah. The study was conducted at two sites (Corbula Gulch and Coyote Basin); results from both sites are contained in this report. Detailed sedimentary facies maps of cliff faces define the geometry and distribution of potential reservoir flow units, barriers, and baffles at the outcrop. High-resolution 2-D and 3-D ground-penetrating radar (GPR) images extend these reservoir characteristics into 3-D to allow development of realistic 3-D reservoir models. The models use geometric information from the mapping and the GPR data, petrophysical data from surface and cliff-face outcrops, lab analyses of outcrop and core samples, and petrography. The measurements are all integrated into a single coordinate system using GPS and laser mapping of the main sedimentologic features and boundaries. The final step is analysis of the results of 3-D fluid-flow modeling to demonstrate the applicability of our reservoir analog studies to well siting and reservoir engineering for maximization of hydrocarbon production. The main goals of this project were achieved: the construction of a deterministic 3-D reservoir analog model from a variety of geophysical and geologic measurements at the field sites, the integration of these into comprehensive petrophysical models, and flow simulation through these models. This unique approach represents a significant advance in the characterization and use of reservoir analogs. To date, the team has presented five papers at GSA and AAPG meetings, produced a technical manual, and completed 15 technical papers. The latter are the main content of this final report. In addition, the project became part of 5 PhD dissertations, 3 MS theses, and two senior undergraduate research projects.

  14. On a sparse pressure-flow rate condensation of rigid circulation models

    PubMed Central

    Schiavazzi, D. E.; Hsia, T. Y.; Marsden, A. L.

    2015-01-01

    Cardiovascular simulation has shown potential value in clinical decision-making, providing a framework to assess changes in hemodynamics produced by physiological and surgical alterations. State-of-the-art predictions are provided by deterministic multiscale numerical approaches coupling 3D finite element Navier-Stokes simulations to lumped parameter circulation models governed by ODEs. Development of next-generation stochastic multiscale models whose parameters can be learned from available clinical data under uncertainty constitutes a research challenge made more difficult by the high computational cost typically associated with the solution of these models. We present a methodology for constructing reduced representations that condense the behavior of 3D anatomical models using outlet pressure-flow polynomial surrogates, based on multiscale model solutions spanning several heart cycles. Relevance vector machine regression is compared with maximum likelihood estimation, showing that sparse pressure/flow rate approximations offer superior performance in producing working surrogate models to be included in lumped circulation networks. Sensitivities of outlet flow rates are also quantified through a Sobol' decomposition of their total variance encoded in the orthogonal polynomial expansion. Finally, we show that augmented lumped parameter models including the proposed surrogates accurately reproduce the response of the multiscale models they were derived from. In particular, results are presented for models of the coronary circulation with closed-loop boundary conditions and of the abdominal aorta with open-loop boundary conditions. PMID:26671219

  15. P ≠NP Millenium-Problem(MP) TRIVIAL Physics Proof Via NATURAL TRUMPS Artificial-``Intelligence'' Via: Euclid Geometry, Plato Forms, Aristotle Square-of-Opposition, Menger Dimension-Theory Connections!!! NO Computational-Complexity(CC)/ANYthing!!!: Geometry!!!

    NASA Astrophysics Data System (ADS)

    Clay, London; Menger, Karl; Rota, Gian-Carlo; Euclid, Alexandria; Siegel, Edward

    P ≠ NP MP proof is by computer-"science"/SEANCE(!!!)(CS) computational-"intelligence" lingo jargonial-obfuscation(JO) NATURAL-Intelligence(NI) DISambiguation! CS P =(?)= NP MEANS (Deterministic)(PC) =(?)= (Non-D)(PC), i.e. D(P) =(?)= N(P). For inclusion(equality) vs. exclusion(inequality) irrelevant (P) simply cancels!!! (Equally any/all other CCs IF both sides identical). Crucial question left: (D) =(?)= (ND), i.e. D =(?)= N. Algorithmics[Sipser, Intro. Thy. Comp. ('97), p. 49, Fig. 1.15]!!!

  16. On stochastic control and optimal measurement strategies. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kramer, L. C.

    1971-01-01

    The control of stochastic dynamic systems is studied with particular emphasis on those which influence the quality or nature of the measurements which are made to effect control. Four main areas are discussed: (1) the meaning of stochastic optimality and the means by which dynamic programming may be applied to solve a combined control/measurement problem; (2) a technique by which it is possible to apply deterministic methods, specifically the minimum principle, to the study of stochastic problems; (3) the methods described are applied to linear systems with Gaussian disturbances to study the structure of the resulting control system; and (4) several applications are considered.

  17. Positive dwell time algorithm with minimum equal extra material removal in deterministic optical surfacing technology.

    PubMed

    Li, Longxiang; Xue, Donglin; Deng, Weijie; Wang, Xu; Bai, Yang; Zhang, Feng; Zhang, Xuejun

    2017-11-10

    In deterministic computer-controlled optical surfacing, accurate dwell time execution by computer numeric control machines is crucial in guaranteeing a high-convergence ratio for the optical surface error. It is necessary to consider the machine dynamics limitations in the numerical dwell time algorithms. In this paper, these constraints on dwell time distribution are analyzed, and a model of the equal extra material removal is established. A positive dwell time algorithm with minimum equal extra material removal is developed. Results of simulations based on deterministic magnetorheological finishing demonstrate the necessity of considering machine dynamics performance and illustrate the validity of the proposed algorithm. Indeed, the algorithm effectively facilitates the determinacy of sub-aperture optical surfacing processes.
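
    Computing a positive dwell-time map is, at heart, a non-negative deconvolution of the desired removal by the tool influence function. The sketch below sets up that linear system in 1-D and solves it with non-negative least squares (scipy.optimize.nnls); the Gaussian tool function and target removal are assumed stand-ins, and the paper's algorithm additionally enforces minimum equal extra removal and machine-dynamics constraints, which this sketch omits.

      import numpy as np
      from scipy.optimize import nnls

      # 1-D illustration: desired material removal along a scan line
      n = 200
      xpos = np.linspace(-1, 1, n)
      target = 0.5 + 0.3 * np.cos(2 * np.pi * xpos)      # removal map (a.u.)

      # Gaussian tool influence function (removal per unit dwell time)
      tif = np.exp(-0.5 * (np.linspace(-3, 3, 41) / 0.6) ** 2)

      # removal = convolution of dwell time with the TIF -> linear system A t = r
      A = np.zeros((n, n))
      for i in range(n):
          lo, hi = max(0, i - 20), min(n, i + 21)
          A[lo:hi, i] = tif[20 - (i - lo): 20 + (hi - i)]

      dwell, _ = nnls(A, target)        # non-negativity enforced by NNLS
      print("max residual removal error:", np.abs(A @ dwell - target).max())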

  18. Development of the Coastal Storm Modeling System (CoSMoS) for predicting the impact of storms on high-energy, active-margin coasts

    USGS Publications Warehouse

    Barnard, Patrick; Maarten van Ormondt,; Erikson, Li H.; Jodi Eshleman,; Hapke, Cheryl J.; Peter Ruggiero,; Peter Adams,; Foxgrover, Amy C.

    2014-01-01

    The Coastal Storm Modeling System (CoSMoS) applies a predominantly deterministic framework to make detailed predictions (meter scale) of storm-induced coastal flooding, erosion, and cliff failures over large geographic scales (100s of kilometers). CoSMoS was developed for hindcast studies, operational applications (i.e., nowcasts and multiday forecasts), and future climate scenarios (i.e., sea-level rise + storms) to provide emergency responders and coastal planners with critical storm hazards information that may be used to increase public safety, mitigate physical damages, and more effectively manage and allocate resources within complex coastal settings. The prototype system, developed for the California coast, uses the global WAVEWATCH III wave model, the TOPEX/Poseidon satellite altimetry-based global tide model, and atmospheric-forcing data from either the US National Weather Service (operational mode) or Global Climate Models (future climate mode), to determine regional wave and water-level boundary conditions. These physical processes are dynamically downscaled using a series of nested Delft3D-WAVE (SWAN) and Delft3D-FLOW (FLOW) models and linked at the coast to tightly spaced XBeach (eXtreme Beach) cross-shore profile models and a Bayesian probabilistic cliff failure model. Hindcast testing demonstrates that, despite uncertainties in preexisting beach morphology over the ~500 km alongshore extent of the pilot study area, CoSMoS effectively identifies discrete sections of the coast (100s of meters) that are vulnerable to coastal hazards under a range of current and future oceanographic forcing conditions, and is therefore an effective tool for operational and future climate scenario planning.

  19. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.
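
    The sequential probability ratio test used for validation can be sketched directly: accumulate the log-likelihood ratio of model residuals under a no-fault versus fault hypothesis and stop at Wald's thresholds. The Gaussian hypotheses and error rates below are illustrative assumptions, not the patent's specific settings.

      import numpy as np

      rng = np.random.default_rng(6)
      # residuals between plant output and model prediction;
      # H0: zero mean (no fault), H1: mean shift mu1 (fault), known sigma
      sigma, mu1 = 1.0, 1.5
      alpha, beta_err = 0.01, 0.01                 # false/missed alarm rates
      A = np.log((1 - beta_err) / alpha)           # upper (fault) threshold
      B = np.log(beta_err / (1 - alpha))           # lower (no-fault) threshold

      llr = 0.0
      for k in range(1000):
          r = rng.normal(0.8, sigma)               # incoming residual stream
          # log-likelihood ratio increment for Gaussian H1 vs H0
          llr += (mu1 / sigma**2) * (r - mu1 / 2)
          if llr >= A:
              print(f"fault declared at sample {k}")
              break
          if llr <= B:
              print(f"no-fault declared at sample {k}")
              break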

  20. Development of DCGLs by using both probabilistic and deterministic analyses in RESRAD (onsite) and RESRAD-OFFSITE codes.

    PubMed

    Kamboj, Sunita; Yu, Charley; Johnson, Robert

    2013-05-01

    The Derived Concentration Guideline Levels for two building areas previously used in waste processing and storage at Argonne National Laboratory were developed using both probabilistic and deterministic radiological environmental pathway analysis. Four scenarios were considered. The two current uses considered were on-site industrial use and off-site residential use with farming. The two future uses (i.e., after an institutional control period of 100 y) were on-site recreational use and on-site residential use with farming. The RESRAD-OFFSITE code was used for the current-use off-site residential/farming scenario and RESRAD (onsite) was used for the other three scenarios. Contaminants of concern were identified from the past operations conducted in the buildings and the actual characterization done at the site. Derived Concentration Guideline Levels were developed for all four scenarios using deterministic and probabilistic approaches, which include both "peak-of-the-means" and "mean-of-the-peaks" analyses. The future-use on-site residential/farming scenario resulted in the most restrictive Derived Concentration Guideline Levels for most radionuclides.

  1. From statistical proofs of the Kochen-Specker theorem to noise-robust noncontextuality inequalities

    NASA Astrophysics Data System (ADS)

    Kunjwal, Ravi; Spekkens, Robert W.

    2018-05-01

    The Kochen-Specker theorem rules out models of quantum theory wherein projective measurements are assigned outcomes deterministically and independently of context. This notion of noncontextuality is not applicable to experimental measurements because these are never free of noise and thus never truly projective. For nonprojective measurements, therefore, one must drop the requirement that an outcome be assigned deterministically in the model and merely require that it be assigned a distribution over outcomes in a manner that is context-independent. By demanding context independence in the representation of preparations as well, one obtains a generalized principle of noncontextuality that also supports a quantum no-go theorem. Several recent works have shown how to derive inequalities on experimental data which, if violated, demonstrate the impossibility of finding a generalized-noncontextual model of this data. That is, these inequalities do not presume quantum theory and, in particular, they make sense without requiring an operational analog of the quantum notion of projectiveness. We here describe a technique for deriving such inequalities starting from arbitrary proofs of the Kochen-Specker theorem. It extends significantly previous techniques that worked only for logical proofs, which are based on sets of projective measurements that fail to admit of any deterministic noncontextual assignment, to the case of statistical proofs, which are based on sets of projective measurements that do admit of some deterministic noncontextual assignments, but not enough to explain the quantum statistics.

  2. Grizzly Status Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spencer, Benjamin; Zhang, Yongfeng; Chakraborty, Pritam

    2014-09-01

    This report summarizes work during FY 2014 to develop capabilities to predict embrittlement of reactor pressure vessel steel and to assess the response of embrittled reactor pressure vessels to postulated accident conditions. This work has been conducted at three length scales. At the engineering scale, 3D fracture mechanics capabilities have been developed to calculate stress intensities and fracture toughnesses and to perform a deterministic assessment of whether a crack would propagate at the location of an existing flaw. This capability has been demonstrated on several types of flaws in a generic reactor pressure vessel model. Models have been developed at the scale of fracture specimens to determine how irradiation affects the fracture toughness of the material. Verification work has been performed on a previously developed model to determine the sensitivity of the model to specimen geometry and size effects. The effect of irradiation on the parameters of this model has been investigated. At lower length scales, work has continued in an ongoing effort to understand how irradiation and thermal aging affect the microstructure and mechanical properties of reactor pressure vessel steel. Previously developed atomistic kinetic Monte Carlo models have been further developed and benchmarked against experimental data. Initial work has been performed to develop models of nucleation in a phase field model. Additional modeling work has also been performed to improve the fundamental understanding of the formation mechanisms and stability of matrix defects caused by irradiation.

  3. 3D Dynamic Rupture Simulations along the Wasatch Fault, Utah, Incorporating Rough-fault Topography

    NASA Astrophysics Data System (ADS)

    Withers, Kyle; Moschetti, Morgan

    2017-04-01

    Studies have found that the Wasatch Fault has experienced successive large-magnitude (> Mw 7.2) earthquakes, with an average recurrence interval near 350 years. To date, no large-magnitude event has been recorded along the fault, with the last rupture along the Salt Lake City segment occurring 1300 years ago. Because of this, as well as the lack of strong ground motion records in basins and from normal-faulting earthquakes worldwide, seismic hazard in the region is not well constrained. Previous numerical simulations have modeled deterministic ground motion in the heavily populated regions of Utah, near Salt Lake City, but were primarily restricted to low frequencies (~1 Hz). Our goal is to better assess broadband ground motions from the Wasatch Fault Zone. Here, we extend deterministic ground motion prediction to higher frequencies (~5 Hz) in this region by using physics-based spontaneous dynamic rupture simulations along a normal fault with characteristics derived from geologic observations. We use a summation-by-parts finite difference code (Waveqlab3D) with rough-fault topography following a self-similar fractal distribution (over length scales from 100 m to the size of the fault) and include off-fault plasticity to simulate ruptures > Mw 6.5. Geometric complexity along fault planes has previously been shown to generate broadband sources with spectral energy matching that of observations. We investigate the impact of varying the hypocenter location, as well as the influence that multiple realizations of rough-fault topography have on the rupture process and the resulting ground motion. We utilize Waveqlab3D's computational efficiency to model wave propagation to a significant distance from the fault with media heterogeneity at both long and short spatial wavelengths. These simulations generate a synthetic dataset of ground motions to compare with GMPEs, in terms of both the median and the inter- and intra-event variability.
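
    Self-similar rough-fault profiles of the kind used here are commonly generated by spectral synthesis: random phases with a power-law amplitude spectrum. The sketch below produces one such 1-D profile; the spectral exponent and the rms-to-length ratio are assumed, typical-order values, not the study's actual parameters.

      import numpy as np

      rng = np.random.default_rng(7)
      n, dx = 2048, 25.0                    # samples and grid spacing in metres
      length = n * dx
      k = np.fft.rfftfreq(n, dx)            # spatial wavenumbers

      # self-similar (fractal) roughness: random phases with a power-law
      # amplitude spectrum (exponent is an assumed, illustrative value)
      amp = np.zeros_like(k)
      amp[1:] = k[1:] ** -1.5               # amplitude ~ k^-1.5, i.e. P(k) ~ k^-3
      spec = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, k.size))
      profile = np.fft.irfft(spec, n)

      # normalize so rms deviation is ~0.1% of profile length (typical order
      # of magnitude quoted for natural fault roughness)
      profile *= 1e-3 * length / profile.std()
      print("rms roughness (m):", profile.std())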

  4. Deterministic annealing for density estimation by multivariate normal mixtures

    NASA Astrophysics Data System (ADS)

    Kloppenburg, Martin; Tavan, Paul

    1997-03-01

    An approach to maximum-likelihood density estimation by mixtures of multivariate normal distributions for large high-dimensional data sets is presented. Conventionally that problem is tackled by notoriously unstable expectation-maximization (EM) algorithms. We remove these instabilities by the introduction of soft constraints, enabling deterministic annealing. Our developments are motivated by the proof that algorithmically stable fuzzy clustering methods that are derived from statistical physics analogs are special cases of EM procedures.
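
    The mechanism can be sketched compactly: deterministic annealing tempers the EM responsibilities with a temperature T, flattening the likelihood surface early and recovering standard EM as T approaches 1. The toy 1-D mixture below is illustrative only; the schedule and the soft-constraint formulation differ from the paper's.

      import numpy as np

      rng = np.random.default_rng(8)
      # toy 1-D data drawn from two well-separated components
      x = np.concatenate([rng.normal(-2, 0.7, 300), rng.normal(2, 0.7, 300)])

      K, n = 2, x.size
      mu = rng.normal(0, 0.1, K)            # deliberately poor initialization
      var = np.ones(K)
      pi = np.full(K, 1.0 / K)

      for T in [8.0, 4.0, 2.0, 1.0]:        # annealing schedule; T=1 is plain EM
          for _ in range(50):
              # E-step with responsibilities tempered by 1/T
              logp = (np.log(pi) - 0.5 * np.log(2 * np.pi * var)
                      - 0.5 * (x[:, None] - mu) ** 2 / var)
              logp /= T
              logp -= logp.max(axis=1, keepdims=True)
              resp = np.exp(logp)
              resp /= resp.sum(axis=1, keepdims=True)
              # M-step: standard weighted Gaussian updates
              nk = resp.sum(axis=0)
              mu = (resp * x[:, None]).sum(axis=0) / nk
              var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
              pi = nk / n

      print("means:", mu, "variances:", var)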

  5. Heterogeneous propellant internal ballistics: criticism and regeneration

    NASA Astrophysics Data System (ADS)

    Glick, R. L.

    2011-10-01

    Although heterogeneous propellant and its innately nondeterministic, chemically discrete morphology dominate applications, ballistic-characterization measures (deterministic time-mean burning rate and acoustic admittance) contain no explicit nondeterministic information and therefore presume homogeneous propellant with a smooth, uniformly regressing burning surface: inadequate boundary conditions for heterogeneous-propellant grained applications. The past age overcame this dichotomy with one-dimensional (1D) models and empirical knowledge from numerous, adequately supported motor developments and supplementary experiments. However, current cost and risk constraints inhibit this approach. Moreover, its fundamental-science approach is more sensitive to incomplete boundary condition information (garbage in still equals garbage out) and more is expected. This work critiques this situation and sketches a path forward based on enhanced ballistic and motor characterizations in the workplace and approximate model and apparatus developments mentored by CSAR DNS capabilities (or equivalent).

  6. Comparison of Transport Codes, HZETRN, HETC and FLUKA, Using 1977 GCR Solar Minimum Spectra

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.; Slaba, Tony C.; Tripathi, Ram K.; Blattnig, Steve R.; Norbury, John W.; Badavi, Francis F.; Townsend, Lawrence W.; Handler, Thomas; Gabriel, Tony A.; Pinsky, Lawrence S.; hide

    2009-01-01

    The HZETRN deterministic radiation transport code is one of several tools developed to analyze the effects of harmful galactic cosmic rays (GCR) and solar particle events (SPE) on mission planning, astronaut shielding and instrumentation. This paper is a comparison study involving the two Monte Carlo transport codes, HETC-HEDS and FLUKA, and the deterministic transport code, HZETRN. Each code is used to transport ions from the 1977 solar minimum GCR spectrum impinging upon a 20 g/cm2 aluminum slab followed by a 30 g/cm2 water slab. This research is part of a systematic effort of verification and validation to quantify the accuracy of HZETRN and determine areas where it can be improved. Comparisons of dose and dose equivalent values at various depths in the water slab are presented in this report. This is followed by a comparison of the proton fluxes, and the forward, backward and total neutron fluxes at various depths in the water slab. Comparisons of the secondary light-ion (2H, 3H, 3He and 4He) fluxes are also examined.

  7. A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models

    NASA Astrophysics Data System (ADS)

    Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.

    2010-09-01

    For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline interpolation of the low-resolution data, (2) a so-called `deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested on high-resolution model output (400 m horizontal grid spacing). A novel automatic search algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2, root mean square errors are decreased. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and for a fully coupled model system.
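
    A toy version of the three-step disaggregation, assuming a square fine grid that divides evenly into the coarse grid. The linear "deterministic" rule with coefficient `a` stands in for the statistically derived rules of step 2, and the AR(1) parameters for the noise generator of step 3; all values are placeholders.

    ```python
    import numpy as np
    from scipy.ndimage import zoom

    def disaggregate(coarse, surf_hr, a=0.7, rho=0.8, sigma=0.5, seed=0):
        """Three-step disaggregation of a coarse atmospheric field:
        (1) spline interpolation to the fine grid, (2) a 'deterministic'
        correction proportional to the subgrid anomaly of a high-resolution
        surface predictor, (3) AR(1) noise restoring subgrid variability."""
        factor = surf_hr.shape[0] // coarse.shape[0]
        step1 = zoom(coarse, factor, order=2)           # (1) bi-quadratic spline
        # subgrid anomaly of the surface predictor: fine field minus its
        # block means, upsampled back to the fine grid
        h, w = coarse.shape
        block = surf_hr.reshape(h, factor, w, factor).mean(axis=(1, 3))
        anom = surf_hr - np.kron(block, np.ones((factor, factor)))
        step2 = step1 + a * anom                        # (2) deterministic rule
        rng = np.random.default_rng(seed)               # (3) AR(1) noise
        eps = rng.normal(0.0, sigma, step2.shape)
        noise = np.empty_like(step2)
        noise[:, 0] = eps[:, 0]
        for j in range(1, step2.shape[1]):
            noise[:, j] = rho * noise[:, j - 1] + np.sqrt(1 - rho**2) * eps[:, j]
        return step2 + noise
    ```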

  8. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2011-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
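
    The core CADIS relations are compact enough to state in a few lines. A minimal 1-D sketch, assuming the adjoint (importance) flux has already been computed by a fast deterministic solve (Denovo, in the workflow above); real implementations apply the same relations on 3-D space-energy meshes.

    ```python
    import numpy as np

    def cadis_parameters(q, adjoint_flux):
        """CADIS source biasing and weight-window targets from an adjoint flux:
            R     = sum_i q_i * phi_dag_i     (estimate of detector response)
            q_hat = q_i * phi_dag_i / R       (consistent biased source pdf)
            w     = R / phi_dag_i             (statistical-weight target)
        Particles born from q_hat with weight w are already distributed
        according to their importance, so the weight windows need little
        further splitting/rouletting."""
        q = np.asarray(q, float)
        phi = np.asarray(adjoint_flux, float)
        R = np.sum(q * phi)
        q_biased = q * phi / R
        weight_target = R / phi
        return R, q_biased, weight_target
    ```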

  9. Cost-Effectiveness Analysis of Intensity Modulated Radiation Therapy Versus 3-Dimensional Conformal Radiation Therapy for Anal Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodges, Joseph C., E-mail: joseph.hodges@utsouthwestern.edu; Beg, Muhammad S.; Das, Prajnan

    2014-07-15

    Purpose: To compare the cost-effectiveness of intensity modulated radiation therapy (IMRT) and 3-dimensional conformal radiation therapy (3D-CRT) for anal cancer and determine disease, patient, and treatment parameters that influence the result. Methods and Materials: A Markov decision model was designed with the various disease states for the base case of a 65-year-old patient with anal cancer treated with either IMRT or 3D-CRT and concurrent chemotherapy. Health states accounting for rates of local failure, colostomy failure, treatment breaks, patient prognosis, acute and late toxicities, and the utility of toxicities were informed by existing literature and analyzed with deterministic and probabilistic sensitivity analysis. Results: In the base case, mean costs and quality-adjusted life expectancy in years (QALY) for IMRT and 3D-CRT were $32,291 (4.81) and $28,444 (4.78), respectively, resulting in an incremental cost-effectiveness ratio of $128,233/QALY for IMRT compared with 3D-CRT. Probabilistic sensitivity analysis found that IMRT was cost-effective in 22%, 47%, and 65% of iterations at willingness-to-pay thresholds of $50,000, $100,000, and $150,000 per QALY, respectively. Conclusions: In our base model, IMRT was a cost-ineffective strategy despite the reduced acute treatment toxicities and their associated costs of management. The model outcome was sensitive to variations in local and colostomy failure rates, as well as patient-reported utilities relating to acute toxicities.
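
    The headline figure follows directly from the reported means; a quick check of the incremental cost-effectiveness ratio:

    ```python
    # ICER = (cost_A - cost_B) / (effect_A - effect_B), using the base-case
    # means reported above
    cost_imrt, cost_3dcrt = 32291.0, 28444.0      # mean costs (USD)
    qaly_imrt, qaly_3dcrt = 4.81, 4.78            # quality-adjusted life years
    icer = (cost_imrt - cost_3dcrt) / (qaly_imrt - qaly_3dcrt)
    print(f"ICER = ${icer:,.0f}/QALY")            # ~$128,233/QALY, as reported
    ```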

  10. Space Radiation Transport Methods Development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2002-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation; it allows field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice, enabling the development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 milliseconds, which severely limits the application of Monte Carlo methods to such engineering models. A potential means of improving Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing and could be utilized in the final design as verification of the design optimized with the deterministic method.

  11. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead-based applications

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-04-01

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead-encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, bead-based streptavidin-biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results evidenced that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules.

  12. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead-based applications.

    PubMed

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-04-10

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead-encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, bead-based streptavidin-biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results evidenced that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules.

  13. Deterministic bead-in-droplet ejection utilizing an integrated plug-in bead dispenser for single bead–based applications

    PubMed Central

    Kim, Hojin; Choi, In Ho; Lee, Sanghyun; Won, Dong-Joon; Oh, Yong Suk; Kwon, Donghoon; Sung, Hyung Jin; Jeon, Sangmin; Kim, Joonwon

    2017-01-01

    This paper presents a deterministic bead-in-droplet ejection (BIDE) technique that regulates the precise distribution of microbeads in an ejected droplet. The deterministic BIDE was realized through the effective integration of a microfluidic single-particle handling technique with a liquid dispensing system. The integrated bead dispenser facilitates the transfer of the desired number of beads into a dispensing volume and the on-demand ejection of bead-encapsulated droplets. Single bead–encapsulated droplets were ejected every 3 s without any failure. Multiple-bead dispensing with deterministic control of the number of beads was demonstrated to emphasize the originality and quality of the proposed dispensing technique. The dispenser was mounted using a plug-socket type connection, and the dispensing process was completely automated using a programmed sequence without any microscopic observation. To demonstrate a potential application of the technique, bead-based streptavidin–biotin binding assay in an evaporating droplet was conducted using ultralow numbers of beads. The results evidenced that the number of beads in the droplet crucially influences the reliability of the assay. Therefore, the proposed deterministic bead-in-droplet technology can be utilized to deliver desired beads onto a reaction site, particularly to reliably and efficiently enrich and detect target biomolecules. PMID:28393911

  14. Effects of realistic topography on the ground motion of the Colombian Andes - A case study at the Aburrá Valley, Antioquia

    NASA Astrophysics Data System (ADS)

    Restrepo, Doriam; Bielak, Jacobo; Serrano, Ricardo; Gómez, Juan; Jaramillo, Juan

    2016-03-01

    This paper presents a set of deterministic 3-D ground motion simulations for the greater metropolitan area of Medellín in the Aburrá Valley, an earthquake-prone region of the Colombian Andes that exhibits moderate-to-strong topographic irregularities. We created the velocity model of the Aburrá Valley region (version 1) using the geological structures as a basis for determining the shear wave velocity. The irregular surficial topography is considered by means of a fictitious domain strategy. The simulations cover a 50 × 50 × 25 km3 volume, and four Mw = 5 rupture scenarios along a segment of the Romeral fault, a significant source of seismic activity in Colombia. In order to examine the sensitivity of ground motion to the irregular topography and the 3-D effects of the valley, each earthquake scenario was simulated with three different models: (i) realistic 3-D velocity structure plus realistic topography, (ii) realistic 3-D velocity structure without topography, and (iii) homogeneous half-space with realistic topography. Our results show how surface topography affects the ground response. In particular, our findings highlight the importance of the combined interaction between source-effects, source-directivity, focusing, soft-soil conditions, and 3-D topography. We provide quantitative evidence of this interaction and show that topographic amplification factors can be as high as 500 per cent at some locations. In other areas within the valley, the topographic effects result in relative reductions, but these lie in the 0-150 per cent range.

  15. Realistic Simulation for Body Area and Body-To-Body Networks

    PubMed Central

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D’Errico, Raffaele

    2016-01-01

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices. PMID:27104537

  16. Realistic Simulation for Body Area and Body-To-Body Networks.

    PubMed

    Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D'Errico, Raffaele

    2016-04-20

    In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices.

  17. Study on the evaluation method for fault displacement based on characterized source model

    NASA Astrophysics Data System (ADS)

    Tonagi, M.; Takahama, T.; Matsumoto, Y.; Inoue, N.; Irikura, K.; Dalguer, L. A.

    2016-12-01

    IAEA Specific Safety Guide (SSG) 9 describes that probabilistic methods for evaluating fault displacement should be used if no sufficient basis is provided to decide conclusively, by the deterministic methodology, that the fault is not capable. In addition, the International Seismic Safety Centre has compiled an ANNEX to SSG-9 on realizing seismic hazard assessment for nuclear facilities, which shows the utility of the deterministic and probabilistic evaluation methods for fault displacement. In Japan, it is required that important nuclear facilities be established on ground where fault displacement will not arise when earthquakes occur in the future. Under these circumstances, and based on these requirements, we need to develop evaluation methods for fault displacement to enhance the safety of nuclear facilities. We are studying deterministic and probabilistic methods with tentative analyses using observed records, such as surface fault displacements and near-fault strong ground motions, of inland crustal earthquakes in which fault displacements arose. In this study, we introduce the concept of the evaluation methods for fault displacement. We then show parts of the tentative analysis results for the deterministic method as follows: (1) For the 1999 Chi-Chi earthquake, referring to the slip distribution estimated by waveform inversion, we construct a characterized source model (Miyake et al., 2003, BSSA) which can explain the observed near-fault broadband strong ground motions. (2) Referring to the characterized source model constructed in (1), we study an evaluation method for surface fault displacement using a hybrid method that combines the particle method and the distinct element method. Finally, we suggest a deterministic method to evaluate fault displacement based on a characterized source model. This research was part of the 2015 research project `Development of evaluating method for fault displacement' by the Secretariat of the Nuclear Regulation Authority (S/NRA), Japan.

  18. ORNL Pre-test Analyses of A Large-scale Experiment in STYLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Paul T; Yin, Shengjun; Klasky, Hilda B

    Oak Ridge National Laboratory (ORNL) is conducting a series of numerical analyses to simulate a large-scale mock-up experiment planned within the European Network for Structural Integrity for Lifetime Management of non-RPV Components (STYLE). STYLE is a European cooperative effort to assess the structural integrity of (non-reactor pressure vessel) reactor coolant pressure boundary components relevant to ageing and life-time management and to integrate the knowledge created in the project into mainstream nuclear industry assessment codes. ORNL contributes work-in-kind support to STYLE Work Package 2 (Numerical Analysis/Advanced Tools) and Work Package 3 (Engineering Assessment Methods/LBB Analyses). This paper summarizes the current status of ORNL analyses of the STYLE Mock-Up3 large-scale experiment to simulate and evaluate crack growth in a cladded ferritic pipe. The analyses are being performed in two parts. In the first part, advanced fracture mechanics models are being developed and applied to evaluate several experiment designs, taking into account the capabilities of the test facility while satisfying the test objectives. These advanced fracture mechanics models will then be utilized to simulate the crack growth in the large-scale mock-up test. For the second part, the recently developed ORNL SIAM-PFM open-source, cross-platform, probabilistic computational tool will be used to generate an alternative assessment for comparison with the advanced fracture mechanics model results. The SIAM-PFM probabilistic analysis of the Mock-Up3 experiment will utilize fracture modules that are installed into a general probabilistic framework. The probabilistic results of the Mock-Up3 experiment obtained from SIAM-PFM will be compared to those results generated using the deterministic 3D nonlinear finite-element modeling approach. The objective of the probabilistic analysis is to provide uncertainty bounds that will assist in assessing the more detailed 3D finite-element solutions and to also assess the level of confidence that can be placed in the best-estimate finite-element solutions.

  19. Mott Electrons in an Artificial Graphenelike Crystal of Rare-Earth Nickelate.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Middey, Srimanta; Meyers, Derek J.; Doennig, D.

    2016-02-05

    Deterministic control over the periodic geometrical arrangement of the constituent atoms is the backbone of the material properties, which, along with the interactions, define the electronic and magnetic ground state. Following this notion, a bilayer of a prototypical rare-earth nickelate, NdNiO3, combined with a dielectric spacer, LaAlO3, has been layered along the pseudocubic [111] direction. The resulting artificial graphenelike Mott crystal with magnetic 3d electrons has antiferromagnetic correlations. In addition, a combination of resonant X-ray linear dichroism measurements and ab initio calculations reveals the presence of an ordered orbital pattern, which is unattainable in either bulk nickelates or nickelate-based heterostructures grown along the [001] direction. These findings highlight another promising avenue towards designing new quantum many-body states by virtue of geometrical engineering.

  20. Illustrated structural application of universal first-order reliability method

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1994-01-01

    The general application of the proposed first-order reliability method was achieved through the universal normalization of engineering probability distribution data. The method superimposes prevailing deterministic techniques and practices on the first-order reliability method to surmount deficiencies of the deterministic method and provide the benefits of reliability techniques and predictions. A reliability design factor is derived from the reliability criterion to satisfy a specified reliability and is analogous to the deterministic safety factor. Its application is numerically illustrated on several practical structural design and verification cases with interesting results and insights. Two concepts of reliability selection criteria are suggested. Though the method was developed to support affordable structures for access to space, it should also be applicable to most high-performance air and surface transportation systems.
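
    For the simplest case of independent normal strength and stress, the first-order reliability index and the corresponding failure probability take closed form. A small sketch of that textbook case (the paper's universal normalization of arbitrary distribution data is not reproduced here, and the example numbers are made up):

    ```python
    import numpy as np
    from scipy.stats import norm

    def form_reliability(mu_r, sig_r, mu_s, sig_s):
        """First-order reliability for the linear limit state g = R - S with
        independent normal strength R and stress S:
            beta = (mu_R - mu_S) / sqrt(sig_R**2 + sig_S**2)
            P_f  = Phi(-beta)
        beta plays the role the abstract assigns to the reliability design
        factor: it encodes the margin demanded by a specified reliability."""
        beta = (mu_r - mu_s) / np.hypot(sig_r, sig_s)
        return beta, norm.cdf(-beta)

    # e.g. strength 60 +/- 5 versus load 40 +/- 6 (units arbitrary)
    beta, pf = form_reliability(60, 5, 40, 6)   # beta ~ 2.56, P_f ~ 5e-3
    ```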

  1. Precision production: enabling deterministic throughput for precision aspheres with MRF

    NASA Astrophysics Data System (ADS)

    Maloney, Chris; Entezarian, Navid; Dumas, Paul

    2017-10-01

    Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer used exclusively in high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres, but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user-friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.

  2. Scaling theory for the quasideterministic limit of continuous bifurcations.

    PubMed

    Kessler, David A; Shnerb, Nadav M

    2012-05-01

    Deterministic rate equations are widely used in the study of stochastic interacting particle systems. This approach assumes that the inherent noise, associated with the discreteness of the elementary constituents, may be neglected when the number of particles N is large. Accordingly, it fails close to the extinction transition, when the amplitude of stochastic fluctuations is comparable with the size of the population. Here we present a general scaling theory of the transition regime for spatially extended systems. We demonstrate this through a detailed study of two fundamental models for out-of-equilibrium phase transitions: the Susceptible-Infected-Susceptible (SIS) model, which belongs to the directed percolation equivalence class, and the Susceptible-Infected-Recovered (SIR) model, which belongs to the dynamic percolation class. Implementing the Ginzburg criteria, we show that the width of the fluctuation-dominated region scales like N^{-κ}, where N is the number of individuals per site, κ = 2/(d_u - d), and d_u is the upper critical dimension. Other exponents that control the approach to the deterministic limit are shown to be calculable once κ is known. The theory is extended to include the corrections to the front velocity above the transition. It is supported by the results of extensive numerical simulations for systems of various dimensionalities.
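
    The scaling exponent is elementary to evaluate; a two-line check of κ = 2/(d_u − d) for the two universality classes named above (d_u = 4 for directed percolation, d_u = 6 for dynamic percolation):

    ```python
    def kappa(d, d_u):
        """Width of the fluctuation-dominated region scales like N**-kappa."""
        return 2.0 / (d_u - d)

    for d in (1, 2, 3):
        print(f"d={d}: SIS/DP kappa={kappa(d, 4):.2f}, "
              f"SIR/dynamic percolation kappa={kappa(d, 6):.2f}")
    # d=1: 0.67 and 0.40; d=2: 1.00 and 0.50; d=3: 2.00 and 0.67
    ```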

  3. A deterministic, dynamic systems model of cow-calf production: The effects of breeding replacement heifers before mature cows over a 10-year horizon.

    PubMed

    Shane, D D; Larson, R L; Sanderson, M W; Miesner, M; White, B J

    2017-10-01

    Some cattle production experts believe that cow-calf producers should breed replacement heifers (nulliparous cows) before cows (primiparous and multiparous cows), sometimes referred to as providing a heifer lead time (tHL). Our objective was to model the effects different durations of tHL may have on measures of herd productivity, including the percent of the herd cycling before the end of the first 21 d of the breeding season (%C21), the percent of the herd pregnant at pregnancy diagnosis (%PPD), the distribution of pregnancy by 21-d breeding intervals, the kilograms of calf weaned per cow exposed (KPC), and the replacement percentage (%RH), using a deterministic, dynamic systems model of cow-calf production over a 10-yr horizon. We also wished to examine differences in the effect of tHL related to the primiparous duration of postpartum anestrus (dPPA). The study model examined 6 different dPPA for primiparous cows (60, 70, 80, 90, 100, or 110 d). The multiparous cow duration of postpartum anestrus was set to 60 d. The breeding season length for nulliparous cows was 63 d, as was the breeding season length for primiparous and multiparous cows. Nulliparous cows were modeled with a tHL of 0, 7, 14, 21, 28, 35, or 42 d. Results are reported for the final breeding season of the 10-yr horizon. Increasing tHL resulted in a greater %C21 for the herd and for primiparous cows. Length of tHL had minimal impact on the %PPD unless the dPPA was 80 d or greater. For a dPPA of 110 d, a 0 d tHL resulted in the herd having 88.1 %PPD. When tHL was 21 d, the %PPD increased to 93.0%. The KPC was 161.2 kg when the dPPA was 110 d and tHL was 0 d and improved to 183.2 kg when tHL was increased to 42 d. The %RH did not vary much unless the dPPA was 90 d or greater, but increasing tHL resulted in decreased %RH. Based on the model results, increasing tHL improves the production outcomes included in the analysis, but herds with dPPA of 90 d or greater had the greatest degree of improvement. For these herds, approximately two-thirds of the improvement in outcomes by increasing tHL from 0 d to 42 d was realized when tHL was 21 d. Costs are likely incurred when implementing tHL in a breeding management program, and an ideal tHL likely depends on the dPPA of the herd, the expected improvement in productivity, and the costs associated with increasing tHL. Determining the dPPA of a herd could help veterinarians and producers develop optimal herd management strategies.

  4. A Deterministic and Random Propagation Study with the Design of an Open Path 320 GHz to 340 GHz Transmissometer

    NASA Astrophysics Data System (ADS)

    Scally, Lawrence J.

    This program was implemented by Lawrence J. Scally for a Ph.D. in the EECE department at the University of Colorado at Boulder, with most funding provided by the U.S. Army. Professor Gasiewski advises and guides the entire program and has a strong history, going back decades, with programs of this type. The program is developing a transmissometer more advanced than those of previous years, called the Terahertz Atmospheric and Ionospheric Propagation, Absorption and Scattering System (TAIPAS), on an open path between the University of Colorado EE building roof and a mesa owned by the National Institute of Standards and Technology (NIST); NIST has invested money, the location, and support in the program. Besides designing and building the transmissometer, which has never been accomplished at this level, the system also analyzes atmospheric propagation by scanning frequencies between 320 GHz and 340 GHz, which includes the peak absorption frequency at 325.1529 GHz due to water absorption. The processing and characterization of the deterministic and random propagation characteristics of the real-world atmosphere was substantially started; this will be carried out with various aerosols for decades on the permanently mounted system, which is accessible 24/7 over the CU Virtual Private Network (VPN).

  5. Chaos in a spatially-developing plane mixing layer

    NASA Technical Reports Server (NTRS)

    Broze, J. G.; Hussain, Fazle; Buell, J. C.

    1988-01-01

    A spatially-developing plane mixing layer was analyzed for chaotic behavior. A direct numerical simulation of the Navier-Stokes equations in a 2-D domain infinite in y and having inflow-outflow boundary conditions in x was used for data. Spectra, correlation dimension and the largest Lyapunov exponent were computed as functions of downstream distance x. When forced at a single (fundamental) frequency with maximum amplitude, the flow is periodic at the inflow but becomes aperiodic with increasing x. The aperiodic behavior is due to the presence of a noisy subharmonic arising from the feedback between the necessarily nonphysical inflow and outflow boundary conditions. In order to overshadow this noise, the flow was also studied with the same fundamental forcing and added random forcing of amplitude υ'_R/ΔU = 0.01 at the inlet. Results were qualitatively the same in both cases: for small x, spectral peaks were sharp and the dimension was nearly 1, but as x increased a narrowband spectral peak grew, spectra decayed exponentially at high frequencies, and the dimension increased to greater than 3. Based on these results, the flow appears to exhibit deterministic chaos. However, at no location was the largest Lyapunov exponent found to be significantly greater than zero.
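
    The diagnostic used above, a positive largest Lyapunov exponent, is easy to illustrate on a toy system. A sketch for the logistic map (not the mixing-layer data), where the exponent is simply the orbit average of log|f'(x)|:

    ```python
    import numpy as np

    def logistic_lyapunov(r=4.0, x0=0.3, n=100_000, burn=1_000):
        """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
        estimated as the orbit average of log|f'(x)| = log|r (1 - 2x)|.
        For r = 4 the exact value is ln 2 ~ 0.693; a positive exponent is
        the signature of deterministic chaos sought in the study above."""
        x = x0
        for _ in range(burn):                    # discard the transient
            x = r * x * (1 - x)
        acc = 0.0
        for _ in range(n):
            acc += np.log(abs(r * (1 - 2 * x)))  # local stretching rate
            x = r * x * (1 - x)
        return acc / n
    ```

    For flow data one only has a measured time series, so embedding-based estimators (e.g., nearest-neighbor divergence rates) replace the analytic derivative, which is why a chaotic signal can be hard to distinguish from boundary-condition noise, as the abstract reports.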

  6. Comparison of space radiation calculations for deterministic and Monte Carlo transport codes

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo

    For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit and HETC-HEDS. The deterministic codes and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore, it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry and material. Typical space radiation environments such as the 1977 solar minimum galactic cosmic ray environment are used as the well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.

  7. Automatic 2D-to-3D image conversion using 3D examples from the internet

    NASA Astrophysics Data System (ADS)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D repository. While far from perfect, the presented results demonstrate that on-line repositories of 3D content can be used for effective 2D-to-3D image conversion. With the continuously increasing amount of 3D data on-line and with the rapidly growing computing power in the cloud, the proposed framework seems a promising alternative to operator-assisted 2D-to-3D conversion.
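
    The fusion-and-rendering step lends itself to a compact sketch: given disparity fields extracted from the k best photometric matches, take the pixelwise median and forward-warp the query. Function and variable names are hypothetical, and hole filling and occlusion handling are omitted:

    ```python
    import numpy as np

    def fuse_and_render(query, disparities):
        """Fuse the disparity fields of the photometrically matched
        stereopairs with a pixelwise median (robust to outlier matches),
        then synthesize the right view by shifting each pixel of the 2-D
        query by its disparity. Overlapping writes keep the last value;
        holes/newly exposed areas would need inpainting in practice."""
        d = np.median(np.stack(disparities), axis=0)        # robust fusion
        h, w = query.shape[:2]
        right = np.zeros_like(query)
        cols = np.arange(w)
        for y in range(h):
            tgt = np.clip(cols - d[y].round().astype(int), 0, w - 1)
            right[y, tgt] = query[y, cols]                  # forward warp
        return d, right
    ```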

  8. Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)

    NASA Astrophysics Data System (ADS)

    Kędra, Mariola

    2014-02-01

    Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools for studying daily river flow data gives consistent, reliable and clear-cut results to the question. The outcomes point out that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of the mountain river in southern Poland, the Raba River.

  9. Fractal-Based Image Compression

    DTIC Science & Technology

    1990-01-01

    [OCR-garbled snippet; the recoverable fragments mention: the Ziv-Lempel-Welch (ZLW) compression algorithm [4], used for comparison experiments and software development; the reference J. Ziv and A. Lempel, "Compression of Individual Sequences via Variable-Rate Coding" [5]; acknowledgments to Roger Boss and Bill (surname truncated); a Collage Theorem algorithm (Sec. 2.5); and a "Deterministic Algorithm for IFS Attractor" (Sec. 3.2) for fast image compression.]
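
    The "Deterministic Algorithm for IFS Attractor" named in the recoverable fragments is the standard one: starting from any set, repeatedly replace it by the union of its images under the contractive affine maps; the Collage Theorem guarantees convergence to the unique attractor. A generic sketch (the Sierpinski maps are a textbook example, not taken from the report):

    ```python
    import numpy as np

    def ifs_attractor(maps, iterations=10, size=256):
        """Deterministic IFS iteration: pts_{k+1} = union of A @ pts_k + b
        over all maps (A, b); rasterize the limit set to a binary image."""
        pts = np.stack(np.meshgrid(np.linspace(0, 1, 64),
                                   np.linspace(0, 1, 64))).reshape(2, -1)
        for _ in range(iterations):
            pts = np.hstack([A @ pts + b[:, None] for A, b in maps])
            if pts.shape[1] > 200_000:            # subsample to bound memory
                pts = pts[:, ::len(maps)]
        img = np.zeros((size, size), bool)
        ij = np.clip((pts * (size - 1)).astype(int), 0, size - 1)
        img[ij[1], ij[0]] = True
        return img

    # Sierpinski triangle: three maps contracting by 1/2 toward each vertex
    half = np.eye(2) * 0.5
    sierpinski = [(half, np.array([0.00, 0.0])),
                  (half, np.array([0.50, 0.0])),
                  (half, np.array([0.25, 0.5]))]
    attractor = ifs_attractor(sierpinski)
    ```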

  10. ACCELERATING FUSION REACTOR NEUTRONICS MODELING BY AUTOMATIC COUPLING OF HYBRID MONTE CARLO/DETERMINISTIC TRANSPORT ON CAD GEOMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W

    2015-01-01

    Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).

  11. Investing to Survive in a Duopoly Model

    NASA Astrophysics Data System (ADS)

    Pinto, Alberto A.; Oliveira, Bruno M. P. M.; Ferreira, Fernanda A.; Ferreira, Miguel

    We present deterministic dynamics on the production costs of Cournot competitions, based on perfect Nash equilibria of nonlinear R&D investment strategies to reduce the production costs of the firms at every period of the game. We analyse the effects that the R&D investment strategies can have on the profits of the firms over time. We show that small changes in the initial production costs, or small changes in the parameters that determine the efficiency of the R&D programs or of the firms, can produce strong economic effects on the long-run profits of the firms.

  12. Slip Potential of Faults in the Fort Worth Basin

    NASA Astrophysics Data System (ADS)

    Hennings, P.; Osmond, J.; Lund Snee, J. E.; Zoback, M. D.

    2017-12-01

    Similar to other areas of the southcentral United States, the Fort Worth Basin of NE Texas has experienced an increase in the rate of seismicity, which has been attributed to injection of waste water in deep saline aquifers. To assess the hazard of induced seismicity in the basin, we have integrated new data on the location and character of previously known and unknown faults, stress state, and pore pressure to produce an assessment of fault slip potential, which can be used to investigate prior and ongoing earthquake sequences and to develop mitigation strategies. We have assembled data on faults in the basin from published sources, 2D and 3D seismic data, and interpretations provided by petroleum operators to yield a 3D fault model with 292 faults ranging in strike-length from 0.4 to 116 km. The faults have mostly normal geometries, all cut the disposal intervals, and most are presumed to cut into the underlying crystalline and metamorphic basement. Analysis of outcrops along the SW flank of the basin assists with geometric characterization of the fault systems. The interpretation of stress state comes from integration of wellbore image and sonic data, reservoir stimulation data, and earthquake focal mechanisms. The orientation of SHmax is generally uniform across the basin, but the stress style changes from more strike-slip in the NE part of the basin to normal faulting in the SW part. Estimates of pore pressure come from a basin-scale hydrogeologic model history-matched to injection test data. With these deterministic inputs and appropriate ranges of uncertainty, we assess the conditional probability that faults in our 3D model might slip via Mohr-Coulomb reactivation in response to injection-related pore pressure increases. A key component of the analysis is constraining the uncertainties associated with each of the principal parameters.
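
    The screening criterion behind such fault-slip-potential estimates is Mohr-Coulomb reactivation. A deterministic one-liner for the pore-pressure margin to failure; the friction coefficient 0.6 is a conventional assumption, not a calibrated basin value, and probabilistic FSP workflows sample these inputs rather than fixing them:

    ```python
    def pressure_margin(tau, sigma_n, p, mu=0.6):
        """Additional pore-pressure increase (same units as the stresses)
        that brings a fault patch with shear stress tau and normal stress
        sigma_n to Mohr-Coulomb failure, tau = mu * (sigma_n - p - dp).
        Values <= 0 mean the patch is already critically stressed."""
        return (sigma_n - p) - tau / mu
    ```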

  13. Developing Stochastic Models as Inputs for High-Frequency Ground Motion Simulations

    NASA Astrophysics Data System (ADS)

    Savran, William Harvey

    High-frequency (up to ~10 Hz) deterministic ground motion simulations are challenged by our understanding of the small-scale structure of the earth's crust and of the rupture process during an earthquake. We will likely never obtain deterministic models that can accurately describe these processes down to the meter-scale lengths required for broadband wave propagation. Instead, we can attempt to explain the behavior, in a statistical sense, by including stochastic models defined by correlations observed in the natural earth and through physics-based simulations of the earthquake rupture process. Toward this goal, we develop stochastic models to address both of the primary considerations for deterministic ground motion simulations: the description of the material properties in the crust, and broadband earthquake source descriptions. Using borehole sonic log data recorded in the Los Angeles basin, we estimate the spatial correlation structure of the small-scale fluctuations in P-wave velocities by determining the best-fitting parameters of a von Karman correlation function. We find that Hurst exponents, nu, between 0.0 and 0.2, vertical correlation lengths, a_z, of 15-150 m, and a standard deviation, sigma, of about 5% characterize the variability in the borehole data. Using these parameters, we generated a stochastic model of velocity and density perturbations, combined it with leading seismic velocity models, and performed a validation exercise for the 2008 Chino Hills, CA, earthquake using heterogeneous media. We find that models of velocity and density perturbations can have significant effects on the wavefield at frequencies as low as 0.3 Hz, with ensemble median values of various ground motion metrics varying up to +/-50% at certain stations compared to those computed solely from the CVM. Finally, we develop a kinematic rupture generator based on dynamic rupture simulations on geometrically complex faults. We analyze 100 dynamic rupture simulations on strike-slip faults ranging from Mw 6.4 to 7.2. We find that our dynamic simulations follow empirical scaling relationships for inter-plate strike-slip events and provide source spectra comparable with an ω^-2 model. Our rupture generator reproduces GMPE medians and intra-event standard deviations of spectral accelerations for an ensemble of 10 Hz fully-deterministic ground motion simulations, as compared to NGA-West2 GMPE relationships up to 0.2 seconds.
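
    A minimal sketch of how such a stochastic velocity model can be synthesized in 1-D from the von Karman parameters quoted above. The actual study builds 3-D perturbation volumes; the spectral form used here is the standard 1-D von Karman power spectrum, and the defaults echo the borehole-derived ranges rather than any specific fit.

    ```python
    import numpy as np

    def von_karman_profile(n=2048, dz=1.0, nu=0.1, a=50.0, sigma=0.05, seed=0):
        """1-D fractional velocity perturbation with a von Karman
        autocorrelation (Hurst exponent nu, correlation length a in m,
        RMS perturbation sigma), generated by shaping complex white noise
        with the 1-D von Karman spectrum P(k) ~ (1 + k^2 a^2)^-(nu + 1/2)."""
        rng = np.random.default_rng(seed)
        k = np.fft.rfftfreq(n, d=dz) * 2 * np.pi          # angular wavenumber
        psd = (1.0 + (k * a) ** 2) ** (-(nu + 0.5))
        spec = np.sqrt(psd) * (rng.normal(size=k.size)
                               + 1j * rng.normal(size=k.size))
        dv = np.fft.irfft(spec, n=n)
        return sigma * dv / dv.std()                      # scale to target RMS
    ```

    Multiplying a background model by (1 + dv) yields one realization of the heterogeneous media used in the validation exercise.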

  14. SU-G-TeP1-15: Toward a Novel GPU Accelerated Deterministic Solution to the Linear Boltzmann Transport Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, R; Fallone, B; Cross Cancer Institute, Edmonton, AB

    Purpose: To develop a Graphics Processing Unit (GPU)-accelerated deterministic solution to the Linear Boltzmann Transport Equation (LBTE) for accurate dose calculations in radiotherapy (RT). A deterministic solution yields the potential for major speed improvements due to the sparse matrix-vector and vector-vector multiplications and would thus be of benefit to RT. Methods: In order to leverage the massively parallel architecture of GPUs, the first-order LBTE was reformulated as a second-order self-adjoint equation using the Least Squares Finite Element Method (LSFEM). This produces a symmetric positive-definite matrix which is efficiently solved using a parallelized conjugate gradient (CG) solver. The LSFEM formalism is applied in space, discrete ordinates is applied in angle, and the Multigroup method is applied in energy. The final linear system of equations produced is tightly coupled in space and angle. Our code, written in CUDA-C, was benchmarked on an Nvidia GeForce TITAN-X GPU against an Intel i7-6700K CPU. A spatial mesh of 30,950 tetrahedral elements was used with an S4 angular approximation. Results: To avoid repeating a full computationally intensive finite element matrix assembly at each Multigroup energy, a novel mapping algorithm was developed which minimized the operations required at each energy. Additionally, a parallelized memory mapping for the Kronecker product between the sparse spatial and angular matrices, including Dirichlet boundary conditions, was created. Atomicity is preserved by graph-coloring overlapping nodes into separate kernel launches. The one-time mapping calculations for matrix assembly, Kronecker product, and boundary condition application took 452±1 ms on GPU. Matrix assembly for 16 energy groups took 556±3 s on CPU, and 358±2 ms on GPU using the mappings developed. The CG solver took 93±1 s on CPU, and 468±2 ms on GPU. Conclusion: Three computationally intensive subroutines in deterministically solving the LBTE have been formulated on GPU, resulting in two orders of magnitude speedup. Funding support from Natural Sciences and Engineering Research Council and Alberta Innovates Health Solutions. Dr. Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license Alberta bi-planar linac MR for commercialization).
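
    The CG solver at the heart of the method is built entirely from matrix-vector and vector-vector products, which is why it maps well onto a GPU. A plain NumPy sketch of the algorithm for the symmetric positive-definite system the LSFEM formulation produces (a CUDA/CuPy version would simply swap the array backend):

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-8, max_iter=1000):
        """Conjugate-gradient solve of the SPD system A x = b. Each step
        costs one sparse matrix-vector product (A @ p) plus a handful of
        dot products and AXPYs, all embarrassingly parallel on a GPU."""
        x = np.zeros_like(b)
        r = b - A @ x                      # initial residual
        p = r.copy()                       # initial search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)          # optimal step along p
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:      # converged
                break
            p = r + (rs_new / rs) * p      # conjugate direction update
            rs = rs_new
        return x
    ```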

  15. Assessment of solute fluxes beneath an orchard irrigated with treated sewage water: A numerical study

    NASA Astrophysics Data System (ADS)

    Russo, David; Laufer, Asher; Shapira, Roi H.; Kurtzman, Daniel

    2013-02-01

    Detailed numerical simulations were used to analyze water flow and the transport of nitrate, chloride, and a tracer solute, originating from a citrus orchard irrigated with treated sewage water (TSW), in a 3-D, spatially heterogeneous, variably saturated soil, considering realistic features of the soil-water-plant-atmosphere system. Results of this study suggest that under long-term irrigation with TSW, because of nitrate uptake by the tree roots and nitrogen transformations, the vadose zone may provide more capacity for the attenuation of the nitrate load in the groundwater than for the chloride load in the groundwater. Results of the 3-D simulations were used to assess their counterparts based on a simplified, deterministic, 1-D vertical simulation and on limited soil monitoring. Results of the analyses suggest that the information that may be gained from a single sampling point (located close to the area active in water uptake by the tree roots) or from the results of the 1-D simulation is insufficient for a quantitative description of the response of the complicated, 3-D flow system. Both might considerably underestimate the movement and spreading of a pulse of a tracer solute and also the groundwater contamination hazard posed by nitrate and particularly by chloride moving through the vadose zone. This stems mainly from the rain that drove water through the flow system away from the rooted area and could not be represented by the 1-D model or by the single sampling point. It was shown, however, that an additional sampling point, located outside the area active in water uptake, may substantially improve the quantitative description of the response of the complicated, 3-D flow system.

  16. Deterministic chaos in entangled eigenstates

    NASA Astrophysics Data System (ADS)

    Schlegel, K. G.; Förster, S.

    2008-05-01

    We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show for a two-particle system in a harmonic oscillator potential that, in a case of entanglement and three energy eigenvalues, the maximum Lyapunov exponents of a representative ensemble of trajectories develop for large times into a narrow positive distribution, which indicates nearly complete chaotic dynamics. We also present, in brief, results from two time-dependent systems, the anisotropic and the Rabi oscillator.

  17. Dynamic analysis of a stochastic rumor propagation model

    NASA Astrophysics Data System (ADS)

    Jia, Fangju; Lv, Guangying

    2018-01-01

    The rapid development of the Internet, especially the emergence of social networks, has led rumor propagation into a new media era. In this paper, we are concerned with a stochastic rumor propagation model. Sufficient conditions for extinction and persistence in the mean of the rumor are established. The threshold between persistence in the mean and extinction of the rumor is obtained. Compared with the corresponding deterministic model, the threshold affected by the white noise is smaller than the basic reproduction number R0 of the deterministic system.

  18. Relativity and indeterminism

    NASA Astrophysics Data System (ADS)

    Byrne, Patrick H.

    1981-12-01

    It is well known that Albert Einstein adhered to a deterministic world view throughout his career. Nevertheless, his developments of the special and general theories of relativity prove to be incompatible with that world view. Two different forms of determinism—classical Laplacian determinism and the determinism of isolated systems—are considered. Through careful considerations of what concretely is involved in predicting future states of the entire universe, or of isolated systems, it is shown that the demands of the theories of relativity make these deterministic positions untenable.

  19. Joint Stochastic Inversion of Pre-Stack 3D Seismic Data and Well Logs for High Resolution Hydrocarbon Reservoir Characterization

    NASA Astrophysics Data System (ADS)

    Torres-Verdin, C.

    2007-05-01

    This paper describes the successful implementation of a new 3D AVA stochastic inversion algorithm to quantitatively integrate pre-stack seismic amplitude data and well logs. The stochastic inversion algorithm is used to characterize flow units of a deepwater reservoir located in the central Gulf of Mexico. Conventional fluid/lithology sensitivity analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generates typical Class III AVA responses. On the other hand, layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution. Accordingly, AVA stochastic inversion, which combines the advantages of AVA analysis with those of geostatistical inversion, provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties (P-velocity, S-velocity, density), and lithotype (sand-shale) distributions. The quantitative use of rock/fluid information through AVA seismic amplitude data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, yields accurate 3D models of petrophysical properties such as porosity and permeability. Finally, by fully integrating pre-stack seismic amplitude data and well logs, the vertical resolution of inverted products is higher than that of deterministic inversion methods.

  20. Deterministic Factors Overwhelm Stochastic Environmental Fluctuations as Drivers of Jellyfish Outbreaks.

    PubMed

    Benedetti-Cecchi, Lisandro; Canepa, Antonio; Fuentes, Veronica; Tamburello, Laura; Purcell, Jennifer E; Piraino, Stefano; Roberts, Jason; Boero, Ferdinando; Halpin, Patrick

    2015-01-01

    Jellyfish outbreaks are increasingly viewed as a deterministic response to escalating levels of environmental degradation and climate extremes. However, a comprehensive understanding of the influence of deterministic drivers and stochastic environmental variations favouring population renewal processes has remained elusive. This study quantifies the deterministic and stochastic components of environmental change that lead to outbreaks of the jellyfish Pelagia noctiluca in the Mediterranean Sea. Using data of jellyfish abundance collected at 241 sites along the Catalan coast from 2007 to 2010 we: (1) tested hypotheses about the influence of time-varying and spatial predictors of jellyfish outbreaks; (2) evaluated the relative importance of stochastic vs. deterministic forcing of outbreaks through the environmental bootstrap method; and (3) quantified return times of extreme events. Outbreaks were common in May and June and less likely in other summer months, which resulted in a negative relationship between outbreaks and SST. Cross- and along-shore advection by geostrophic flow were important concentrating forces of jellyfish, but most outbreaks occurred in the proximity of two canyons in the northern part of the study area. This result supported the recent hypothesis that canyons can funnel P. noctiluca blooms towards shore during upwelling. This can be a general yet unappreciated mechanism leading to outbreaks of holoplanktonic jellyfish species. The environmental bootstrap indicated that stochastic environmental fluctuations have negligible effects on return times of outbreaks. Our analysis emphasized the importance of deterministic processes leading to jellyfish outbreaks compared to the stochastic component of environmental variation. A better understanding of how environmental drivers affect demographic and population processes in jellyfish species will increase the ability to anticipate jellyfish outbreaks in the future.

  1. Calibration of an Unsteady Groundwater Flow Model for a Complex, Strongly Heterogeneous Aquifer

    NASA Astrophysics Data System (ADS)

    Curtis, Z. K.; Liao, H.; Li, S. G.; Phanikumar, M. S.; Lusch, D.

    2016-12-01

    Modeling of groundwater systems characterized by complex three-dimensional structure and heterogeneity remains a significant challenge. Most of today's groundwater models are developed based on relatively simple conceptual representations in favor of model calibratability. As more complexities are modeled, e.g., by adding more layers and/or zones, or introducing transient processes, more parameters have to be estimated, and issues related to ill-posed groundwater problems and non-unique calibration arise. Here, we explore the use of an alternative conceptual representation for groundwater modeling that is fully three-dimensional and can capture complex 3D heterogeneity (both systematic and "random") without over-parameterizing the aquifer system. In particular, we apply Transition Probability (TP) geostatistics on high-resolution borehole data from a water well database to characterize the complex 3D geology. Different aquifer material classes, e.g., `AQ' (aquifer material), `MAQ' (marginal aquifer material), `PCM' (partially confining material), and `CM' (confining material), are simulated, with the hydraulic properties of each material type as tuning parameters during calibration. The TP-based approach is applied to simulate unsteady groundwater flow in a large, complex, and strongly heterogeneous glacial aquifer system in Michigan across multiple spatial and temporal scales. The resulting model is calibrated to observed static water level data over a time span of 50 years. The results show that the TP-based conceptualization enables much more accurate and robust calibration/simulation than that based on conventional deterministic layer/zone-based conceptual representations.
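
    The transition probability approach estimates, from borehole logs, the probability of passing from one material class to another with depth. A minimal sketch of that estimation step, using a hypothetical categorical log with the record's AQ/MAQ/PCM/CM class names:

```python
import numpy as np

# Hypothetical borehole log as a categorical sequence of material classes.
classes = ["AQ", "MAQ", "PCM", "CM"]
log = ["AQ", "AQ", "MAQ", "PCM", "CM", "CM", "PCM", "AQ", "AQ", "MAQ"]

idx = {c: i for i, c in enumerate(classes)}
counts = np.zeros((4, 4))
for a, b in zip(log[:-1], log[1:]):   # tally vertical transitions
    counts[idx[a], idx[b]] += 1

# Row-normalise to transition probabilities; rows with no observations
# are left uniform as a neutral fallback.
rows = counts.sum(axis=1, keepdims=True)
tp = np.where(rows > 0, counts / np.maximum(rows, 1), 0.25)
print(np.round(tp, 2))
```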

  2. Spatial Characterization of Radio Propagation Channel in Urban Vehicle-to-Infrastructure Environments to Support WSNs Deployment

    PubMed Central

    Granda, Fausto; Azpilicueta, Leyre; Vargas-Rosales, Cesar; Lopez-Iturri, Peio; Aguirre, Erik; Astrain, Jose Javier; Villandangos, Jesus; Falcone, Francisco

    2017-01-01

    Vehicular ad hoc Networks (VANETs) enable vehicles to communicate with each other as well as with roadside units (RSUs). Although there is a significant research effort in radio channel modeling focused on vehicle-to-vehicle (V2V) communication, not much work has been done for vehicle-to-infrastructure (V2I) using 3D ray-tracing tools. This work evaluates important parameters of a V2I wireless channel link, such as large-scale path loss and multipath metrics, in a typical urban scenario using a deterministic simulation model based on an in-house 3D Ray-Launching (3D-RL) algorithm at 5.9 GHz. Results show the strong impact that spatial distance, link frequency, RSU placement, and factors such as roundabout geometry and the relative position of obstacles have on the V2I propagation channel. A detailed spatial path loss characterization of the V2I channel along the streets and avenues is presented. The 3D-RL results show high accuracy when compared with measurements, and represent the propagation phenomena more reliably than analytical path loss models. Performance metrics for a real test scenario implemented with an ad hoc VANET wireless sensor network are also described. These results constitute a starting point in the design phase of Wireless Sensor Network (WSN) radio-planning for urban V2I deployments in terms of coverage. PMID:28590429
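
    The analytical path loss models that the 3D-RL results are compared against are typically of the log-distance form PL(d) = PL(d0) + 10n log10(d/d0) plus log-normal shadowing. A minimal sketch of such a model at 5.9 GHz; the exponent n and shadowing sigma below are assumed values, not fitted results from this study:

```python
import numpy as np

def log_distance_pl(d, pl_d0, n, d0=1.0, sigma=0.0, rng=None):
    """Log-distance path loss in dB with optional log-normal shadowing."""
    pl = pl_d0 + 10.0 * n * np.log10(d / d0)
    if sigma > 0:
        rng = rng or np.random.default_rng()
        pl = pl + rng.normal(0.0, sigma, size=np.shape(d))
    return pl

# Free-space loss at the 1 m reference distance for a 5.9 GHz carrier,
# plus an assumed urban exponent and shadowing spread.
d = np.linspace(10, 300, 30)                       # metres
fspl_1m = 20 * np.log10(5.9e9) + 20 * np.log10(4 * np.pi / 3e8)
print(log_distance_pl(d, fspl_1m, n=2.7, sigma=3.0)[:5])
```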

  3. Spatial Characterization of Radio Propagation Channel in Urban Vehicle-to-Infrastructure Environments to Support WSNs Deployment.

    PubMed

    Granda, Fausto; Azpilicueta, Leyre; Vargas-Rosales, Cesar; Lopez-Iturri, Peio; Aguirre, Erik; Astrain, Jose Javier; Villandangos, Jesus; Falcone, Francisco

    2017-06-07

    Vehicular ad hoc Networks (VANETs) enable vehicles to communicate with each other as well as with roadside units (RSUs). Although there is a significant research effort in radio channel modeling focused on vehicle-to-vehicle (V2V) communication, not much work has been done for vehicle-to-infrastructure (V2I) using 3D ray-tracing tools. This work evaluates important parameters of a V2I wireless channel link, such as large-scale path loss and multipath metrics, in a typical urban scenario using a deterministic simulation model based on an in-house 3D Ray-Launching (3D-RL) algorithm at 5.9 GHz. Results show the strong impact that spatial distance, link frequency, RSU placement, and factors such as roundabout geometry and the relative position of obstacles have on the V2I propagation channel. A detailed spatial path loss characterization of the V2I channel along the streets and avenues is presented. The 3D-RL results show high accuracy when compared with measurements, and represent the propagation phenomena more reliably than analytical path loss models. Performance metrics for a real test scenario implemented with an ad hoc VANET wireless sensor network are also described. These results constitute a starting point in the design phase of Wireless Sensor Network (WSN) radio-planning for urban V2I deployments in terms of coverage.

  4. Holographic Patterning of High Performance on-chip 3D Lithium-ion Microbatteries

    DOE PAGES

    Ning, Hailong; Pikul, James H.; Wang, Runyu; ...

    2015-05-11

    As sensors, wireless communication devices, personal health monitoring systems, and autonomous microelectromechanical systems (MEMS) become distributed and smaller, there is an increasing demand for miniaturized integrated power sources. Although thin-film batteries are well-suited for on-chip integration, their energy and power per unit area are limited. Three-dimensional electrode designs have potential to offer much greater power and energy per unit area; however, efforts to date to realize 3D microbatteries have led to prototypes with solid electrodes (and therefore low power) or mesostructured electrodes not compatible with manufacturing or on-chip integration. Here we demonstrate an on-chip compatible method to fabricate high energy density (6.5 μWh cm−2⋅μm−1) 3D mesostructured Li-ion microbatteries based on LiMnO2 cathodes, and NiSn anodes that possess supercapacitor-like power (3,600 μW cm−2⋅μm−1 peak). The mesostructured electrodes are fabricated by combining 3D holographic lithography with conventional photolithography, enabling deterministic control of both the internal electrode mesostructure and the spatial distribution of the electrodes on the substrate. The resultant full cells exhibit impressive performance; for example, a conventional light-emitting diode (LED) is driven with a 500-μA peak current (600-C discharge) from a 10-μm-thick microbattery with an area of 4 mm2 for 200 cycles with only 12% capacity fade. Lastly, a combined experimental and modeling study where the structural parameters of the battery are modulated illustrates the unique design flexibility enabled by 3D holographic lithography and provides guidance for optimization for a given application.

  5. Holographic patterning of high-performance on-chip 3D lithium-ion microbatteries

    PubMed Central

    Ning, Hailong; Pikul, James H.; Zhang, Runyu; Li, Xuejiao; Xu, Sheng; Wang, Junjie; Rogers, John A.; King, William P.; Braun, Paul V.

    2015-01-01

    As sensors, wireless communication devices, personal health monitoring systems, and autonomous microelectromechanical systems (MEMS) become distributed and smaller, there is an increasing demand for miniaturized integrated power sources. Although thin-film batteries are well-suited for on-chip integration, their energy and power per unit area are limited. Three-dimensional electrode designs have potential to offer much greater power and energy per unit area; however, efforts to date to realize 3D microbatteries have led to prototypes with solid electrodes (and therefore low power) or mesostructured electrodes not compatible with manufacturing or on-chip integration. Here, we demonstrate an on-chip compatible method to fabricate high energy density (6.5 μWh cm−2⋅μm−1) 3D mesostructured Li-ion microbatteries based on LiMnO2 cathodes, and NiSn anodes that possess supercapacitor-like power (3,600 μW cm−2⋅μm−1 peak). The mesostructured electrodes are fabricated by combining 3D holographic lithography with conventional photolithography, enabling deterministic control of both the internal electrode mesostructure and the spatial distribution of the electrodes on the substrate. The resultant full cells exhibit impressive performance; for example, a conventional light-emitting diode (LED) is driven with a 500-μA peak current (600-C discharge) from a 10-μm-thick microbattery with an area of 4 mm2 for 200 cycles with only 12% capacity fade. A combined experimental and modeling study where the structural parameters of the battery are modulated illustrates the unique design flexibility enabled by 3D holographic lithography and provides guidance for optimization for a given application. PMID:25964360

  6. A space radiation transport method development

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.

    2004-01-01

    Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard finite element method (FEM) geometry common to engineering design practice, enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 ms and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of re-configurable computing and could be utilized in the final design as verification of the deterministic-method-optimized design. Published by Elsevier Ltd on behalf of COSPAR.

  7. Renewable energy in electric utility capacity planning: a decomposition approach with application to a Mexican utility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Staschus, K.

    1985-01-01

    In this dissertation, efficient algorithms for electric-utility capacity expansion planning with renewable energy are developed. The algorithms include a deterministic phase that quickly finds a near-optimal expansion plan using derating and a linearized approximation to the time-dependent availability of nondispatchable energy sources. A probabilistic second phase needs comparatively few computer-time-consuming probabilistic simulation iterations to modify this solution towards the optimal expansion plan. For the deterministic first phase, two algorithms, based on a Lagrangian Dual decomposition and a Generalized Benders Decomposition, are developed. The probabilistic second phase uses a Generalized Benders Decomposition approach. Extensive computational tests of the algorithms are reported. Among the deterministic algorithms, the one based on Lagrangian Duality proves fastest. The two-phase approach is shown to save up to 80% in computing time as compared to a purely probabilistic algorithm. The algorithms are applied to determine the optimal expansion plan for the Tijuana-Mexicali subsystem of the Mexican electric utility system. A strong recommendation to push conservation programs in the desert city of Mexicali results from this implementation.

  8. Failed rib region prediction in a human body model during crash events with precrash braking.

    PubMed

    Guleyupoglu, B; Koya, B; Barnard, R; Gayzik, F S

    2018-02-28

    The objective of this study is two-fold. We used a validated human body finite element model to study the predicted chest injury (focusing on rib fracture as a function of element strain) based on varying levels of simulated precrash braking. Furthermore, we compare deterministic and probabilistic methods of rib injury prediction in the computational model. The Global Human Body Models Consortium (GHBMC) M50-O model was gravity settled in the driver position of a generic interior equipped with an advanced 3-point belt and airbag. Twelve cases were investigated with permutations for failure, precrash braking system, and crash severity. The severities used were median (17 kph), severe (34 kph), and New Car Assessment Program (NCAP; 56.4 kph). Cases with failure enabled removed rib cortical bone elements once 1.8% effective plastic strain was exceeded. Alternatively, a probabilistic framework found in the literature was used to predict rib failure. Both the probabilistic and deterministic methods take into consideration location (anterior, lateral, and posterior). The deterministic method is based on a rubric that defines failed rib regions dependent on a threshold for contiguous failed elements. The probabilistic method depends on age-based strain and failure functions. Kinematics between both methods were similar (peak max deviation: ΔX head = 17 mm; ΔZ head = 4 mm; ΔX thorax = 5 mm; ΔZ thorax = 1 mm). Seat belt forces at the time of probabilistic failed region initiation were lower than those at deterministic failed region initiation. The probabilistic method for rib fracture predicted more failed regions in the rib (an analog for fracture) than the deterministic method in all but one case, where they were equal. The failed region patterns between models are similar; however, there are differences that arise due to stress reduction from element elimination, which causes probabilistic failed regions to continue to rise after no deterministic failed region would be predicted. Both the probabilistic and deterministic methods indicate similar trends with regard to the effect of precrash braking; however, there are tradeoffs. The deterministic failed region method is more spatially sensitive to failure and is more sensitive to belt loads. The probabilistic failed region method allows for increased capability in postprocessing with respect to age. The probabilistic failed region method predicted more failed regions than the deterministic failed region method due to force distribution differences.
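
    The two failure treatments contrasted above can be sketched as follows: the deterministic rule eliminates an element once 1.8% effective plastic strain is exceeded (the paper's threshold), while the probabilistic rule draws failure from an age-dependent probability function. The probability function below is a hypothetical Weibull-type stand-in, not the published strain-failure functions.

```python
import numpy as np

STRAIN_LIMIT = 0.018  # deterministic effective plastic strain threshold

def deterministic_failed(strains, limit=STRAIN_LIMIT):
    """Element elimination: an element fails once strain exceeds the limit."""
    return strains > limit

def probabilistic_failed(strains, age=50, rng=None):
    """Hypothetical age-scaled failure probability; the actual published
    strain-based failure functions are not reproduced here."""
    rng = rng or np.random.default_rng(0)
    scale = STRAIN_LIMIT * (1.0 - 0.004 * (age - 50))  # assumed age effect
    p_fail = 1.0 - np.exp(-(strains / scale) ** 4)     # assumed Weibull form
    return rng.random(strains.shape) < p_fail

strains = np.array([0.005, 0.012, 0.017, 0.019, 0.025])
print(deterministic_failed(strains))
print(probabilistic_failed(strains, age=65))
```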

  9. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling

    PubMed Central

    Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W. F.; Jeelani, Owase; Dunaway, David J.; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variable-correlation analysis assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction underestimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction, and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face. PMID:29742139

  10. A novel soft tissue prediction methodology for orthognathic surgery based on probabilistic finite element modelling.

    PubMed

    Knoops, Paul G M; Borghi, Alessandro; Ruggiero, Federica; Badiali, Giovanni; Bianchi, Alberto; Marchetti, Claudio; Rodriguez-Florez, Naiara; Breakey, Richard W F; Jeelani, Owase; Dunaway, David J; Schievano, Silvia

    2018-01-01

    Repositioning of the maxilla in orthognathic surgery is carried out for functional and aesthetic purposes. Pre-surgical planning tools can predict 3D facial appearance by computing the response of the soft tissue to the changes to the underlying skeleton. The clinical use of commercial prediction software remains controversial, likely due to the deterministic nature of these computational predictions. A novel probabilistic finite element model (FEM) for the prediction of postoperative facial soft tissues is proposed in this paper. A probabilistic FEM was developed and validated on a cohort of eight patients who underwent maxillary repositioning and had pre- and postoperative cone beam computed tomography (CBCT) scans taken. Firstly, a variable-correlation analysis assessed various modelling parameters. Secondly, a design of experiments (DOE) provided a range of potential outcomes based on uniformly distributed input parameters, followed by an optimisation. Lastly, the second DOE iteration provided optimised predictions with a probability range. A range of 3D predictions was obtained using the probabilistic FEM and validated using reconstructed soft tissue surfaces from the postoperative CBCT data. The predictions in the nose and upper lip areas accurately include the true postoperative position, whereas the prediction underestimates the position of the cheeks and lower lip. A probabilistic FEM has been developed and validated for the prediction of the facial appearance following orthognathic surgery. This method shows how inaccuracies in the modelling and uncertainties in executing surgical planning influence the soft tissue prediction, and it provides a range of predictions including a minimum and maximum, which may be helpful for patients in understanding the impact of surgery on the face.

  11. StreamFlow 1.0: an extension to the spatially distributed snow model Alpine3D for hydrological modelling and deterministic stream temperature prediction

    NASA Astrophysics Data System (ADS)

    Gallice, Aurélien; Bavay, Mathias; Brauchli, Tristan; Comola, Francesco; Lehning, Michael; Huwald, Hendrik

    2016-12-01

    Climate change is expected to strongly impact the hydrological and thermal regimes of Alpine rivers within the coming decades. In this context, the development of hydrological models accounting for the specific dynamics of Alpine catchments appears as one of the promising approaches to reduce our uncertainty about future mountain hydrology. This paper describes the improvements brought to StreamFlow, an existing model for hydrological and stream temperature prediction built as an external extension to the physically based snow model Alpine3D. StreamFlow's source code has been entirely written anew, taking advantage of object-oriented programming to significantly improve its structure and ease the implementation of future developments. The source code is now publicly available online, along with a complete documentation. A special emphasis has been put on modularity during the re-implementation of StreamFlow, so that many model aspects can be represented using different alternatives. For example, several options are now available to model the advection of water within the stream. This allows for an easy and fast comparison between different approaches and helps in defining more reliable uncertainty estimates of the model forecasts. In particular, a case study in a Swiss Alpine catchment reveals that the stream temperature predictions are particularly sensitive to the approach used to model the temperature of subsurface flow, a fact which has been poorly reported in the literature to date. Based on the case study, StreamFlow is shown to reproduce hourly mean discharge with a Nash-Sutcliffe efficiency (NSE) of 0.82 and hourly mean temperature with an NSE of 0.78.
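
    The Nash-Sutcliffe efficiency used above to score discharge and temperature predictions is a simple ratio of model error to the variance of the observations. A minimal sketch with toy values (not the Swiss case-study data):

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / \
                 np.sum((observed - np.mean(observed)) ** 2)

# Toy hourly discharge series (hypothetical values)
obs = np.array([2.0, 2.4, 3.1, 4.0, 3.6, 2.9])
sim = np.array([2.1, 2.3, 3.4, 3.8, 3.5, 3.0])
print(round(nse(obs, sim), 3))
```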

  12. Fracture Capabilities in Grizzly with the extended Finite Element Method (X-FEM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolbow, John; Zhang, Ziyu; Spencer, Benjamin

    Efforts are underway to develop fracture mechanics capabilities in the Grizzly code to enable it to be used to perform deterministic fracture assessments of degraded reactor pressure vessels (RPVs). A capability was previously developed to calculate three-dimensional interaction integrals to extract mixed-mode stress-intensity factors. This capability requires the use of a finite element mesh that conforms to the crack geometry. The eXtended Finite Element Method (X-FEM) provides a means to represent a crack geometry without explicitly fitting the finite element mesh to it. This is effected by enhancing the element kinematics to represent jump discontinuities at arbitrary locations inside of the element, as well as the incorporation of asymptotic near-tip fields to better capture crack singularities. In this work, use of only the discontinuous enrichment functions was examined to see how accurate stress intensity factors could still be calculated. This report documents the following work to enhance Grizzly's engineering fracture capabilities by introducing arbitrary jump discontinuities for prescribed crack geometries: (1) X-FEM mesh cutting in 3D: to enhance the kinematics of elements that are intersected by arbitrary crack geometries, a mesh cutting algorithm was implemented in Grizzly; the algorithm introduces new virtual nodes, creates partial elements, and then creates a new mesh connectivity. (2) Interaction integral modifications: the existing code for evaluating the interaction integral in Grizzly was based on the assumption of a mesh fitted to the crack geometry; modifications were made to allow for the possibility of a crack front that passes arbitrarily through the mesh. (3) Benchmarking for 3D fracture: the new capabilities were benchmarked against mixed-mode three-dimensional fracture problems with known analytical solutions.

  13. Modeling small cell lung cancer (SCLC) biology through deterministic and stochastic mathematical models.

    PubMed

    Salgia, Ravi; Mambetsariev, Isa; Hewelt, Blake; Achuthan, Srisairam; Li, Haiqing; Poroyko, Valeriy; Wang, Yingyu; Sattler, Martin

    2018-05-25

    Mathematical cancer models are immensely powerful tools that are based in part on the fractal nature of biological structures, such as the geometry of the lung. Cancers of the lung provide an opportune model to develop and apply algorithms that capture changes and disease phenotypes. We reviewed mathematical models that have been developed for biological sciences and applied them in the context of small cell lung cancer (SCLC) growth, mutational heterogeneity, and mechanisms of metastasis. The ultimate goal is to characterize the stochastic and deterministic nature of this disease, to link this comprehensive set of tools back to its fractalness, and to provide a platform for accurate biomarker development. These techniques may be particularly useful in the context of drug development research, such as in combination with existing omics approaches. The integration of these tools will be important to further understand the biology of SCLC and ultimately develop novel therapeutics.

  14. Intelligent Manufacturing of Commercial Optics Final Report CRADA No. TC-0313-92

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, J. S.; Pollicove, H.

    The project combined the research and development efforts of LLNL and the University of Rochester Center for Manufacturing Optics (COM) to develop a new generation of flexible, computer-controlled optics grinding machines. COM's principal near-term development effort is to commercialize the OPTICAM-SM, a new prototype spherical grinding machine. A crucial requirement for commercializing the OPTICAM-SM is the development of a predictable and repeatable material removal process (deterministic micro-grinding) that yields high-quality surfaces and minimizes non-deterministic polishing. OPTICAM machine tools and the fabrication process development studies are part of COM's response to the DOD (ARPA) request to implement a modernization strategy for revitalizing the U.S. optics manufacturing base. This project was entered into in order to develop a new generation of flexible, computer-controlled optics grinding machines.

  15. An alternative approach to measure similarity between two deterministic transient signals

    NASA Astrophysics Data System (ADS)

    Shin, Kihong

    2016-06-01

    In many practical engineering applications, it is often required to measure the similarity of two signals to gain insight into the conditions of a system. For example, an application that monitors machinery can regularly measure the vibration signal and compare it to a healthy reference signal in order to monitor whether or not any fault symptom is developing. Also, in modal analysis, a frequency response function (FRF) from a finite element model (FEM) is often compared with an FRF from experimental modal analysis. Many different similarity measures are applicable in such cases, and correlation-based measures are perhaps the most frequently used, e.g., the correlation coefficient in the time domain and the frequency response assurance criterion (FRAC) in the frequency domain. Although correlation-based similarity measures may be particularly useful for random signals because they are based on probability and statistics, we frequently deal with signals that are largely deterministic and transient. Thus, it may be useful to develop another similarity measure that takes the characteristics of the deterministic transient signal properly into account. In this paper, an alternative approach to measure the similarity between two deterministic transient signals is proposed. This newly proposed similarity measure is based on the fictitious system frequency response function, and it consists of the magnitude similarity and the shape similarity. Finally, a few examples are presented to demonstrate the use of the proposed similarity measure.
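
    For reference, the FRAC mentioned above compares two frequency response functions through a normalised inner product, returning 1 for identical shapes. A minimal sketch using two hypothetical single-resonance FRFs that differ only in damping:

```python
import numpy as np

def frac(h1, h2):
    """Frequency response assurance criterion between two complex FRFs."""
    num = np.abs(np.vdot(h1, h2)) ** 2      # vdot conjugates its first arg
    return num / (np.vdot(h1, h1).real * np.vdot(h2, h2).real)

# Two hypothetical FRFs: same resonance frequency, different damping
w = np.linspace(0.1, 10, 500)
h_a = 1.0 / (1.0 - (w / 4.0) ** 2 + 1j * 0.05 * w / 4.0)
h_b = 1.0 / (1.0 - (w / 4.0) ** 2 + 1j * 0.08 * w / 4.0)
print(round(float(frac(h_a, h_b)), 3))
```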

  16. Deterministic generation of remote entanglement with active quantum feedback

    DOE PAGES

    Martin, Leigh; Motzoi, Felix; Li, Hanhan; ...

    2015-12-10

    We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.

  17. Probabilistic dose-response modeling: case study using dichloromethane PBPK model results.

    PubMed

    Marino, Dale J; Starr, Thomas B

    2007-12-01

    A revised assessment of dichloromethane (DCM) has recently been reported that examines the influence of human genetic polymorphisms on cancer risks using deterministic PBPK and dose-response modeling in the mouse combined with probabilistic PBPK modeling in humans. This assessment utilized Bayesian techniques to optimize kinetic variables in mice and humans, with mean values from posterior distributions used in the deterministic modeling in the mouse. To supplement this research, a case study was undertaken to examine the potential impact of probabilistic rather than deterministic PBPK and dose-response modeling in mice on subsequent unit risk factor (URF) determinations. Four separate PBPK cases were examined based on the exposure regimen of the NTP DCM bioassay. These were (a) Same Mouse (single draw of all PBPK inputs for both treatment groups); (b) Correlated BW-Same Inputs (single draw of all PBPK inputs for both treatment groups except for bodyweights (BWs), which were entered as correlated variables); (c) Correlated BW-Different Inputs (separate draws of all PBPK inputs for both treatment groups except that BWs were entered as correlated variables); and (d) Different Mouse (separate draws of all PBPK inputs for both treatment groups). Monte Carlo PBPK inputs reflect posterior distributions from Bayesian calibration in the mouse that had been previously reported. A minimum of 12,500 PBPK iterations were undertaken, in which dose metrics, i.e., mg DCM metabolized by the GST pathway/L tissue/day for lung and liver, were determined. For dose-response modeling, these metrics were combined with NTP tumor incidence data that were randomly selected from binomial distributions. Resultant potency factors (0.1/ED(10)) were coupled with probabilistic PBPK modeling in humans that incorporated genetic polymorphisms to derive URFs. Results show that there was relatively little difference, i.e., <10%, in central tendency and upper percentile URFs, regardless of the case evaluated. Independent draws of PBPK inputs resulted in slightly higher URFs. Results were also comparable to corresponding values from the previously reported deterministic mouse PBPK and dose-response modeling approach that used LED(10)s to derive potency factors. This finding indicated that the adjustment from ED(10) to LED(10) in the deterministic approach for DCM compensated for variability resulting from probabilistic PBPK and dose-response modeling in the mouse. Finally, results show a similar degree of variability in DCM risk estimates from a number of different sources, including the current effort, even though these estimates were developed using very different techniques. Given the variety of different approaches involved, 95th percentile-to-mean risk estimate ratios of 2.1-4.1 represent reasonable bounds on variability estimates regarding probabilistic assessments of DCM.

  18. Spatial scaling patterns and functional redundancies in a changing boreal lake landscape

    USGS Publications Warehouse

    Angeler, David G.; Allen, Craig R.; Uden, Daniel R.; Johnson, Richard K.

    2015-01-01

    Global transformations extend beyond local habitats; therefore, larger-scale approaches are needed to assess community-level responses and resilience to unfolding environmental changes. Using long-term data (1996–2011), we evaluated spatial patterns and functional redundancies in the littoral invertebrate communities of 85 Swedish lakes, with the objective of assessing their potential resilience to environmental change at regional scales (that is, spatial resilience). Multivariate spatial modeling was used to differentiate groups of invertebrate species exhibiting spatial patterns in composition and abundance (that is, deterministic species) from those lacking spatial patterns (that is, stochastic species). We then determined the functional feeding attributes of the deterministic and stochastic invertebrate species, to infer resilience. Between one and three distinct spatial patterns in invertebrate composition and abundance were identified in approximately one-third of the species; the remainder were stochastic. We observed substantial differences in metrics between deterministic and stochastic species. Functional richness and diversity decreased over time in the deterministic group, suggesting a loss of resilience in regional invertebrate communities. However, taxon richness and redundancy increased monotonically in the stochastic group, indicating the capacity of regional invertebrate communities to adapt to change. Our results suggest that a refined picture of spatial resilience emerges if patterns of both the deterministic and stochastic species are accounted for. Spatially extensive monitoring may help increase our mechanistic understanding of community-level responses and resilience to regional environmental change, insights that are critical for developing management and conservation agendas in this current period of rapid environmental transformation.

  19. Micro-masonry for 3D Additive Micromanufacturing

    PubMed Central

    Keum, Hohyun; Kim, Seok

    2014-01-01

    Transfer printing is a method to transfer solid micro/nanoscale materials (herein called 'inks') from a substrate where they are generated to a different substrate by utilizing elastomeric stamps. Transfer printing enables the integration of heterogeneous materials to fabricate unexampled structures or functional systems that are found in recent advanced devices such as flexible and stretchable solar cells and LED arrays. While transfer printing exhibits unique features in material assembly capability, the use of adhesive layers or surface modifications, such as the deposition of self-assembled monolayers (SAMs) on substrates, to enhance printing processes hinders its wide adoption in the microassembly of microelectromechanical system (MEMS) structures and devices. To overcome this shortcoming, we developed an advanced mode of transfer printing which deterministically assembles individual microscale objects solely through controlling surface contact area, without any surface alteration. The absence of an adhesive layer or other modification and the subsequent material bonding processes ensure not only mechanical bonding but also thermal and electrical connection between assembled materials, which further opens various applications in building unusual MEMS devices. PMID:25146178

  20. Large conditional single-photon cross-phase modulation

    NASA Astrophysics Data System (ADS)

    Beck, Kristin; Hosseini, Mahdi; Duan, Yiheng; Vuletic, Vladan

    2016-05-01

    Deterministic optical quantum logic requires a nonlinear quantum process that alters the phase of a quantum optical state by π through interaction with only one photon. Here, we demonstrate a large conditional cross-phase modulation between a signal field, stored inside an atomic quantum memory, and a control photon that traverses a high-finesse optical cavity containing the atomic memory. This approach avoids fundamental limitations associated with multimode effects for traveling optical photons. We measure a conditional cross-phase shift of up to π/3 between the retrieved signal and control photons, and confirm deterministic entanglement between the signal and control modes by extracting a positive concurrence. With a moderate improvement in cavity finesse, our system can reach a coherent phase shift of π at low loss, enabling deterministic and universal photonic quantum logic. Preprint: arXiv:1512.02166 [quant-ph]

  1. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the physics of the model and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of outputs is performed by random sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than using the prior information for input data, which means that the variation of the uncertain parameters will be decreased and the probability of the observed data will improve as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
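
    The Bayesian parameter estimation described here is commonly carried out with a random-walk Metropolis sampler. A minimal sketch on a toy one-parameter forward model; the model, prior bounds, and noise level are assumptions, not the Delft3D setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inverse problem standing in for model-parameter estimation: infer a
# single parameter theta from noisy observations of a known forward model.
def forward(theta):
    return 2.0 * theta + theta ** 2

theta_true, sigma = 1.5, 0.3
data = forward(theta_true) + rng.normal(0, sigma, 20)

def log_post(theta):
    if not 0.0 < theta < 5.0:          # uniform prior bounds (assumed)
        return -np.inf
    r = data - forward(theta)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

samples, theta = [], 1.0
lp = log_post(theta)
for _ in range(5000):                   # random-walk Metropolis loop
    prop = theta + rng.normal(0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
print(np.percentile(samples[1000:], [2.5, 50, 97.5]))
```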

  2. Integrating urban recharge uncertainty into standard groundwater modeling practice: A case study on water main break predictions for the Barton Springs segment of the Edwards Aquifer, Austin, Texas

    NASA Astrophysics Data System (ADS)

    Sinner, K.; Teasley, R. L.

    2016-12-01

    Groundwater models serve as integral tools for understanding flow processes and informing stakeholders and policy makers in management decisions. Historically, these models tended towards a deterministic nature, relying on historical data to predict and inform future decisions based on model outputs. This research works towards developing a stochastic method of modeling recharge inputs from pipe main break predictions in an existing groundwater model, which subsequently generates the desired outputs incorporating future uncertainty rather than deterministic data. The case study for this research is the Barton Springs segment of the Edwards Aquifer near Austin, Texas. Researchers and water resource professionals have modeled the Edwards Aquifer for decades due to its high water quality, fragile ecosystem, and stakeholder interest. The original case study and model that this research is built upon was developed as a co-design problem with regional stakeholders, and the model outcomes are generated specifically for communication with policy makers and managers. Recently, research in the Barton Springs segment demonstrated a significant contribution of urban, or anthropogenic, recharge to the aquifer, particularly during dry periods, using deterministic data sets. Due to the social and ecological importance of urban water loss to recharge, this study develops an evaluation method to help predict pipe breaks and their related recharge contribution within the Barton Springs segment of the Edwards Aquifer. To benefit groundwater management decision processes, the performance measures captured in the model results, such as springflow, head levels, storage, and others, were determined by previous work in elicitation of problem framing to determine stakeholder interests and concerns. The results of the previous deterministic model and the stochastic model are compared to determine gains to stakeholder knowledge through the additional modeling.

  3. Fluctuating Hydrodynamics Confronts the Rapidity Dependence of Transverse Momentum Fluctuations

    NASA Astrophysics Data System (ADS)

    Pokharel, Rajendra; Gavin, Sean; Moschelli, George

    2012-10-01

    Interest in the development of the theory of fluctuating hydrodynamics is growing [1]. Early efforts suggested that viscous diffusion broadens the rapidity dependence of transverse momentum correlations [2]. That work stimulated an experimental analysis by STAR [3]. We attack this new data along two fronts. First, we compute STAR's fluctuation observable using the NeXSPheRIO code, which combines fluctuating initial conditions from a string fragmentation model with deterministic viscosity-free hydrodynamic evolution. We find that NeXSPheRIO produces a longitudinal narrowing, in contrast to the data. Second, we study the hydrodynamic evolution using second order causal viscous hydrodynamics including Langevin noise. We obtain a deterministic evolution equation for the transverse momentum density correlation function. We use the latest theoretical equations of state and transport coefficients to compute STAR's observable. The results are in excellent accord with the measured broadening. In addition, we predict features of the distribution that can distinguish 2nd and 1st order diffusion. [1] J. Kapusta, B. Mueller, M. Stephanov, arXiv:1112.6405 [nucl-th]. [2] S. Gavin and M. Abdel-Aziz, Phys. Rev. Lett. 97, 162302 (2006). [3] H. Agakishiev et al. (STAR), Phys. Lett. B704

  4. Deterministic influences exceed dispersal effects on hydrologically-connected microbiomes.

    PubMed

    Graham, Emily B; Crump, Alex R; Resch, Charles T; Fansler, Sarah; Arntzen, Evan; Kennedy, David W; Fredrickson, Jim K; Stegen, James C

    2017-04-01

    Subsurface groundwater-surface water mixing zones (hyporheic zones) have enhanced biogeochemical activity, but assembly processes governing subsurface microbiomes remain a critical uncertainty in understanding hyporheic biogeochemistry. To address this obstacle, we investigated (a) biogeographical patterns in attached and waterborne microbiomes across three hydrologically-connected, physicochemically-distinct zones (inland hyporheic, nearshore hyporheic and river); (b) assembly processes that generated these patterns; (c) groups of organisms that corresponded to deterministic changes in the environment; and (d) correlations between these groups and hyporheic metabolism. All microbiomes remained dissimilar through time, but consistent presence of similar taxa suggested dispersal and/or common selective pressures among zones. Further, we demonstrated a pronounced impact of deterministic assembly in all microbiomes as well as seasonal shifts from heterotrophic to autotrophic microorganisms associated with increases in groundwater discharge. The abundance of one statistical cluster of organisms increased with active biomass and respiration, revealing organisms that may strongly influence hyporheic biogeochemistry. Based on our results, we propose a conceptualization of hyporheic zone metabolism in which increased organic carbon concentrations during surface water intrusion support heterotrophy, which succumbs to autotrophy under groundwater discharge. These results provide new opportunities to enhance microbially-explicit ecosystem models describing hyporheic zone biogeochemistry and its influence over riverine ecosystem function. © 2017 Society for Applied Microbiology and John Wiley & Sons Ltd.

  5. Developing deterioration models for Wyoming bridges.

    DOT National Transportation Integrated Search

    2016-05-01

    Deterioration models for the Wyoming Bridge Inventory were developed using both stochastic and deterministic models. The selection of explanatory variables is investigated, and a new method using LASSO regression to eliminate human bias in explanatory...
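
    As a minimal sketch of the LASSO idea (not the report's actual code or data), a cross-validated L1 penalty can shrink the coefficients of irrelevant explanatory variables to exactly zero, automating variable selection:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)

# Hypothetical bridge records: five candidate explanatory variables, of
# which only the first two actually drive the condition rating here.
n = 200
X = rng.normal(size=(n, 5))            # e.g., age, traffic, climate, skew, length
y = 9.0 - 1.2 * X[:, 0] - 0.6 * X[:, 1] + rng.normal(0, 0.5, n)

# Cross-validated L1 penalty zeroes out irrelevant coefficients, removing
# human judgment from explanatory-variable selection.
model = LassoCV(cv=5).fit(X, y)
print(np.round(model.coef_, 2))
```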

  6. Multi-Year Revenue and Expenditure Forecasting for Small Municipal Governments.

    DTIC Science & Technology

    1981-03-01

    Keywords: management audit; econometric revenue forecast; gap and impact analysis; deterministic expenditure forecast; municipal forecasting; municipal budget. The report presents a multi-year revenue and expenditure forecasting model for the City of Monterey, California. The Monterey model includes an econometric revenue forecast together with forecasts based on expert judgment and trend analysis.

  7. Daniell method for power spectral density estimation in atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labuda, Aleksander

    An alternative method for power spectral density (PSD) estimation—the Daniell method—is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion—the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage which can obscure the thermal spectrum in the presence of deterministic noise. By the significant reduction in spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum.
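
    The two estimators differ only in where the averaging happens: Bartlett averages the periodograms of consecutive segments, while Daniell smooths the full-record periodogram across adjacent frequency bins. A minimal sketch of both (normalisation and parameter choices are illustrative):

```python
import numpy as np

def periodogram(x, fs):
    """Raw periodogram, normalised to power per unit frequency."""
    X = np.fft.rfft(x)
    return (np.abs(X) ** 2) / (fs * len(x))

def bartlett_psd(x, fs, k):
    """Bartlett: average the periodograms of k non-overlapping segments."""
    seg = len(x) // k
    return np.mean([periodogram(x[i * seg:(i + 1) * seg], fs)
                    for i in range(k)], axis=0)

def daniell_psd(x, fs, width):
    """Daniell: smooth the full-record periodogram over adjacent bins."""
    p = periodogram(x, fs)
    kernel = np.ones(width) / width
    return np.convolve(p, kernel, mode="same")

# Synthetic white-noise record standing in for a measured time series
rng = np.random.default_rng(0)
fs, n = 1e5, 2 ** 14
x = rng.normal(size=n)
print(bartlett_psd(x, fs, 16).shape, daniell_psd(x, fs, 9).shape)
```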

  8. Evaluation of hybrid inverse planning and optimization (HIPO) algorithm for optimization in real-time, high-dose-rate (HDR) brachytherapy for prostate.

    PubMed

    Pokharel, Shyam; Rana, Suresh; Blikenstaff, Joseph; Sadeghi, Amir; Prestidge, Bradley

    2013-07-08

    The purpose of this study is to investigate the effectiveness of the HIPO planning and optimization algorithm for real-time prostate HDR brachytherapy. This study consists of 20 patients who underwent ultrasound-based real-time HDR brachytherapy of the prostate using the treatment planning system called Oncentra Prostate (SWIFT version 3.0). The treatment plans for all patients were optimized using inverse dose-volume histogram-based optimization followed by graphical optimization (GRO) in real time. GRO is the manual manipulation of isodose lines slice by slice, and the quality of the plan heavily depends on planner expertise and experience. The data for all patients were retrieved later, and treatment plans were created and optimized using the HIPO algorithm with the same set of dose constraints, number of catheters, and set of contours as in the real-time optimization. The HIPO algorithm is a hybrid because it combines both stochastic and deterministic algorithms. The stochastic algorithm, called simulated annealing, searches the optimal catheter distributions for a given set of dose objectives. The deterministic algorithm, called dose-volume histogram-based optimization (DVHO), quickly optimizes the three-dimensional dose distribution by moving straight downhill once it is in the advantageous region of the search space given by the stochastic algorithm. The PTV receiving 100% of the prescription dose (V100) was 97.56% and 95.38% with GRO and HIPO, respectively. The mean dose (D(mean)) and minimum dose to 10% volume (D10) for the urethra, rectum, and bladder were all statistically lower with HIPO compared to GRO using Student's paired t-test at the 5% significance level. HIPO can provide treatment plans with comparable target coverage to that of GRO with a reduction in dose to the critical structures.
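
    As a minimal sketch of the stochastic half of such a hybrid (not the vendor's implementation), simulated annealing explores discrete catheter configurations, occasionally accepting uphill moves to escape local minima. The objective below is a toy penalty, not a clinical dose-volume objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for catheter-configuration search: choose k of n candidate
# positions to minimise a penalty against a hidden optimal configuration.
n, k = 20, 6
target = np.sort(rng.choice(n, k, replace=False))

def penalty(config):
    return float(np.sum((np.sort(config) - target) ** 2))

config = rng.choice(n, k, replace=False)
cost, temp = penalty(config), 10.0
for step in range(2000):                   # simulated annealing loop
    trial = config.copy()
    i = rng.integers(k)                    # move one catheter position
    trial[i] = rng.choice(np.setdiff1d(np.arange(n), trial))
    c = penalty(trial)
    if c < cost or rng.random() < np.exp((cost - c) / temp):
        config, cost = trial, c            # accept downhill or lucky uphill
    temp *= 0.995                          # geometric cooling schedule
print(cost, np.sort(config))
```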

  9. Microenvironmental Stiffness of 3D Polymeric Structures to Study Invasive Rates of Cancer Cells.

    PubMed

    Lemma, Enrico Domenico; Spagnolo, Barbara; Rizzi, Francesco; Corvaglia, Stefania; Pisanello, Marco; De Vittorio, Massimo; Pisanello, Ferruccio

    2017-11-01

    Cells are highly dynamic elements, continuously interacting with the extracellular environment. Mechanical forces sensed and applied by cells are responsible for cellular adhesion, motility, and deformation, and are heavily involved in determining cancer spreading and metastasis formation. Cell/extracellular matrix interactions are commonly analyzed with the use of hydrogels and 3D microfabricated scaffolds. However, currently available techniques have a limited control over the stiffness of microscaffolds and do not allow for separating environmental properties from biological processes in driving cell mechanical behavior, including nuclear deformability and cell invasiveness. Herein, a new approach is presented to study tumor cell invasiveness by exploiting an innovative class of polymeric scaffolds based on two-photon lithography to control the stiffness of deterministic microenvironments in 3D. This is obtained by fine-tuning of the laser power during the lithography, thus locally modifying both structural and mechanical properties in the same fabrication process. Cage-like structures and cylindric stent-like microscaffolds are fabricated with different Young's moduli and stiffness gradients, yielding new insights into the mechanical interplay between tumor cells and the surrounding environments. In particular, cell invasion is mostly driven by softer architectures, and the introduction of 3D stiffness "weak spots" is shown to boost the rate at which cancer cells invade the scaffolds. The possibility to modulate structural compliance also allowed estimating the force distribution exerted by a single cell on the scaffold, revealing that both pushing and pulling forces are involved in the cell-structure interaction. Overall, exploiting this method to obtain a wide range of 3D architectures with locally engineered stiffness can pave the way for unique applications to study tumor cell dynamics. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Universal photonic quantum computation via time-delayed feedback

    PubMed Central

    Pichler, Hannes; Choi, Soonwon; Zoller, Peter; Lukin, Mikhail D.

    2017-01-01

    We propose and analyze a deterministic protocol to generate two-dimensional photonic cluster states using a single quantum emitter via time-delayed quantum feedback. As a physical implementation, we consider a single atom or atom-like system coupled to a 1D waveguide with a distant mirror, where guided photons represent the qubits, while the mirror allows the implementation of feedback. We identify the class of many-body quantum states that can be produced using this approach and characterize them in terms of 2D tensor network states. PMID:29073057

  11. Optimal port-based teleportation

    NASA Astrophysics Data System (ADS)

    Mozrzymas, Marek; Studziński, Michał; Strelchuk, Sergii; Horodecki, Michał

    2018-05-01

    Deterministic port-based teleportation (dPBT) is a protocol in which a quantum state is guaranteed to be transferred to another system without unitary correction. We characterise the best achievable performance of dPBT when both the resource state and the measurement are optimised. Surprisingly, the best possible fidelity for an arbitrary number of ports and dimension of the teleported state is given by the largest eigenvalue of a particular matrix, the Teleportation Matrix. It encodes the relationship between a certain set of Young diagrams and emerges as the optimal solution to the relevant semidefinite programme.

  12. Development of three-dimensional patient face model that enables real-time collision detection and cutting operation for a dental simulator.

    PubMed

    Yamaguchi, Satoshi; Yamada, Yuya; Yoshida, Yoshinori; Noborio, Hiroshi; Imazato, Satoshi

    2012-01-01

    The virtual reality (VR) simulator is a useful tool for developing dental hand skill. However, VR simulations that include patient reactions are limited by the computational time available to reproduce a face model. Our aim was to develop a patient face model that enables real-time collision detection and cutting operation by using stereolithography (STL) and deterministic finite automaton (DFA) data files. We evaluated the dependence of computational cost on the combination of STL and DFA data files, constructed the patient face model using the optimum combination condition, and assessed the computational costs of the do-nothing, collision, cutting, and combined collision-and-cutting operations. The face model was successfully constructed with low computational costs of 11.3, 18.3, 30.3, and 33.5 ms for do-nothing, collision, cutting, and collision and cutting, respectively. The patient face model could be useful for developing dental hand skill with VR.

  13. Characterizing Uncertainty and Variability in PBPK Models ...

    EPA Pesticide Factsheets

    Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state of the science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretical and practical methodological improvements...

  14. Application of deterministic deconvolution of ground-penetrating radar data in a study of carbonate strata

    USGS Publications Warehouse

    Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.

    2004-01-01

    We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved by at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features, and more accurate representation of geologic features as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.
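
    Deterministic deconvolution with a measured source wavelet is commonly implemented as frequency-domain spectral division with a water-level stabiliser. A minimal sketch on a synthetic trace; the wavelet shape, water level, and reflectivity below are assumptions, not the quarry data:

```python
import numpy as np

def deterministic_deconvolve(trace, wavelet, water_level=1e-2):
    """Frequency-domain deconvolution with a measured source wavelet.
    A water-level floor guards against division by near-zero spectra."""
    n = len(trace)
    W = np.fft.rfft(wavelet, n)
    T = np.fft.rfft(trace, n)
    floor = water_level * np.max(np.abs(W))
    W_safe = np.where(np.abs(W) < floor, floor * np.exp(1j * np.angle(W)), W)
    return np.fft.irfft(T / W_safe, n)

# Synthetic check: a trace built from the wavelet and a known reflectivity
wavelet = np.diff(np.exp(-np.linspace(-3, 3, 40) ** 2))  # Ricker-like pulse
refl = np.zeros(400)
refl[[80, 150, 155]] = [1.0, -0.5, 0.4]
trace = np.convolve(refl, wavelet, mode="full")[:400]
est = deterministic_deconvolve(trace, wavelet)
print(np.argmax(np.abs(est)))   # recovers the strongest reflector near 80
```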

  15. Universal first-order reliability concept applied to semistatic structures

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1994-01-01

    A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and a reliability selection criterion. The method provides a reliability design factor, derived from the reliability criterion, which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to semistatic structures of air and surface vehicles.

  16. Universal first-order reliability concept applied to semistatic structures

    NASA Astrophysics Data System (ADS)

    Verderaime, V.

    1994-07-01

    A reliability design concept was developed for semistatic structures which combines the prevailing deterministic method with the first-order reliability method. The proposed method surmounts deterministic deficiencies in providing uniformly reliable structures and improved safety audits. It supports risk analyses and a reliability selection criterion. The method provides a reliability design factor, derived from the reliability criterion, which is analogous to the current safety factor for sizing structures and verifying reliability response. The universal first-order reliability method should also be applicable to semistatic structures of air and surface vehicles.
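
    As a worked illustration of the first-order idea (not the paper's numbers): for normally distributed resistive and applied stresses, the reliability index is the mean margin divided by the root-sum-square of the standard deviations. The stress values below are assumptions.

    ```python
    import math

    # Assumed (illustrative) normal distributions for capacity and load.
    mu_R, sd_R = 60.0, 5.0   # resistive (material) stress, ksi
    mu_S, sd_S = 40.0, 6.0   # applied stress, ksi

    beta = (mu_R - mu_S) / math.hypot(sd_R, sd_S)    # first-order reliability index
    p_fail = 0.5 * math.erfc(beta / math.sqrt(2.0))  # Phi(-beta)
    print(f"beta = {beta:.2f}, P_failure ~ {p_fail:.2e}")
    ```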

  17. Dipy, a library for the analysis of diffusion MRI data.

    PubMed

    Garyfallidis, Eleftherios; Brett, Matthew; Amirbekian, Bagrat; Rokem, Ariel; van der Walt, Stefan; Descoteaux, Maxime; Nimmo-Smith, Ian

    2014-01-01

    Diffusion Imaging in Python (Dipy) is a free and open source software project for the analysis of data from diffusion magnetic resonance imaging (dMRI) experiments. dMRI is an application of MRI that can be used to measure structural features of brain white matter. Many methods have been developed to use dMRI data to model the local configuration of white matter nerve fiber bundles and infer the trajectory of bundles connecting different parts of the brain. Dipy gathers implementations of many different methods in dMRI, including: diffusion signal pre-processing; reconstruction of diffusion distributions in individual voxels; fiber tractography and fiber track post-processing, analysis and visualization. Dipy aims to provide transparent implementations for all the different steps of dMRI analysis with a uniform programming interface. We have implemented classical signal reconstruction techniques, such as the diffusion tensor model and deterministic fiber tractography. In addition, cutting edge novel reconstruction techniques are implemented, such as constrained spherical deconvolution and diffusion spectrum imaging (DSI) with deconvolution, as well as methods for probabilistic tracking and original methods for tractography clustering. Many additional utility functions are provided to calculate various statistics, informative visualizations, as well as file-handling routines to assist in the development and use of novel techniques. In contrast to many other scientific software projects, Dipy is not being developed by a single research group. Rather, it is an open project that encourages contributions from any scientist/developer through GitHub and open discussions on the project mailing list. Consequently, Dipy today has an international team of contributors, spanning seven different academic institutions in five countries and three continents, which is still growing.

  18. Dipy, a library for the analysis of diffusion MRI data

    PubMed Central

    Garyfallidis, Eleftherios; Brett, Matthew; Amirbekian, Bagrat; Rokem, Ariel; van der Walt, Stefan; Descoteaux, Maxime; Nimmo-Smith, Ian

    2014-01-01

    Diffusion Imaging in Python (Dipy) is a free and open source software project for the analysis of data from diffusion magnetic resonance imaging (dMRI) experiments. dMRI is an application of MRI that can be used to measure structural features of brain white matter. Many methods have been developed to use dMRI data to model the local configuration of white matter nerve fiber bundles and infer the trajectory of bundles connecting different parts of the brain. Dipy gathers implementations of many different methods in dMRI, including: diffusion signal pre-processing; reconstruction of diffusion distributions in individual voxels; fiber tractography and fiber track post-processing, analysis and visualization. Dipy aims to provide transparent implementations for all the different steps of dMRI analysis with a uniform programming interface. We have implemented classical signal reconstruction techniques, such as the diffusion tensor model and deterministic fiber tractography. In addition, cutting edge novel reconstruction techniques are implemented, such as constrained spherical deconvolution and diffusion spectrum imaging (DSI) with deconvolution, as well as methods for probabilistic tracking and original methods for tractography clustering. Many additional utility functions are provided to calculate various statistics, informative visualizations, as well as file-handling routines to assist in the development and use of novel techniques. In contrast to many other scientific software projects, Dipy is not being developed by a single research group. Rather, it is an open project that encourages contributions from any scientist/developer through GitHub and open discussions on the project mailing list. Consequently, Dipy today has an international team of contributors, spanning seven different academic institutions in five countries and three continents, which is still growing. PMID:24600385
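
    Since Dipy is a Python library, the classical tensor-model step reads roughly as below; `data`, `bvals`, and `bvecs` stand for a previously loaded 4-D dMRI volume and its acquisition scheme (they are assumptions here), and exact module paths may vary between Dipy versions.

    ```python
    import numpy as np
    from dipy.core.gradients import gradient_table
    from dipy.reconst.dti import TensorModel

    # Assumed already loaded (e.g. from NIfTI + text files):
    #   data  : 4-D array (x, y, z, n_gradients)
    #   bvals : 1-D array of b-values
    #   bvecs : (n_gradients, 3) array of gradient directions
    gtab = gradient_table(bvals, bvecs)
    tenfit = TensorModel(gtab).fit(data)
    fa = tenfit.fa                 # fractional anisotropy volume
    print(fa.shape, np.nanmax(fa))
    ```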

  19. A constant stress-drop model for producing broadband synthetic seismograms: Comparison with the next generation attenuation relations

    USGS Publications Warehouse

    Frankel, A.

    2009-01-01

    Broadband (0.1-20 Hz) synthetic seismograms for finite-fault sources were produced for a model where stress drop is constant with seismic moment to see if they can match the magnitude dependence and distance decay of response spectral amplitudes found in the Next Generation Attenuation (NGA) relations recently developed from strong-motion data of crustal earthquakes in tectonically active regions. The broadband synthetics were constructed for earthquakes of M 5.5, 6.5, and 7.5 by combining deterministic synthetics for plane-layered models at low frequencies with stochastic synthetics at high frequencies. The stochastic portion used a source model where the Brune stress drop of 100 bars is constant with seismic moment. The deterministic synthetics were calculated using an average slip velocity, and hence, dynamic stress drop, on the fault that is uniform with magnitude. One novel aspect of this procedure is that the transition frequency between the deterministic and stochastic portions varied with magnitude, so that the transition frequency is inversely related to the rise time of slip on the fault. The spectral accelerations at 0.2, 1.0, and 3.0 sec periods from the synthetics generally agreed with those from the set of NGA relations for M 5.5-7.5 for distances of 2-100 km. At distances of 100-200 km some of the NGA relations for 0.2 sec spectral acceleration were substantially larger than the values of the synthetics for M 7.5 and M 6.5 earthquakes because these relations do not have a term accounting for Q. At 3 and 5 sec periods, the synthetics for M 7.5 earthquakes generally had larger spectral accelerations than the NGA relations, although there was large scatter in the results from the synthetics. The synthetics showed a sag in response spectra at close-in distances for M 5.5 between 0.3 and 0.7 sec that is not predicted from the NGA relations.
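
    The constant-stress-drop scaling behind the stochastic part can be illustrated with the standard Brune corner-frequency relation (textbook scaling, not code from the paper); the shear velocity and the Hanks-Kanamori moment conversion are assumptions.

    ```python
    def corner_frequency(mw, stress_drop_bars=100.0, beta_km_s=3.5):
        """Brune (1970) corner frequency for a constant stress drop.
        M0 in dyne*cm via log10(M0) = 1.5*Mw + 16.05 (Hanks & Kanamori)."""
        m0 = 10 ** (1.5 * mw + 16.05)
        return 4.9e6 * beta_km_s * (stress_drop_bars / m0) ** (1.0 / 3.0)

    for mw in (5.5, 6.5, 7.5):
        print(f"Mw {mw}: fc ~ {corner_frequency(mw):.3f} Hz")
    ```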

  20. Class III Mid-Term Project, "Increasing Heavy Oil Reserves in the Wilmington Oil Field Through Advanced Reservoir Characterization and Thermal Production Technologies"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scott Hara

    2007-03-31

    The overall objective of this project was to increase heavy oil reserves in slope and basin clastic (SBC) reservoirs through the application of advanced reservoir characterization and thermal production technologies. The project involved improving thermal recovery techniques in the Tar Zone of Fault Blocks II-A and V (Tar II-A and Tar V) of the Wilmington Field in Los Angeles County, near Long Beach, California. A primary objective has been to transfer technology that can be applied in other heavy oil formations of the Wilmington Field and other SBC reservoirs, including those under waterflood. The first budget period addressed several producibility problems in the Tar II-A and Tar V thermal recovery operations that are common in SBC reservoirs. A few of the advanced technologies developed include a three-dimensional (3-D) deterministic geologic model, a 3-D deterministic thermal reservoir simulation model to aid in reservoir management and subsequent post-steamflood development work, and a detailed study on the geochemical interactions between the steam and the formation rocks and fluids. State of the art operational work included drilling and performing a pilot steam injection and production project via four new horizontal wells (2 producers and 2 injectors), implementing a hot water alternating steam (WAS) drive pilot in the existing steamflood area to improve thermal efficiency, installing a 2400-foot insulated, subsurface harbor channel crossing to supply steam to an island location, testing a novel alkaline steam completion technique to control well sanding problems, and starting on an advanced reservoir management system through computer-aided access to production and geologic data to integrate reservoir characterization, engineering, monitoring, and evaluation. The second budget period phase (BP2) continued to implement state-of-the-art operational work to optimize thermal recovery processes, improve well drilling and completion practices, and evaluate the geomechanical characteristics of the producing formations. The objectives were to further improve reservoir characterization of the heterogeneous turbidite sands, test the proficiency of the three-dimensional geologic and thermal reservoir simulation models, identify the high permeability thief zones to reduce water breakthrough and cycling, and analyze the nonuniform distribution of the remaining oil in place. This work resulted in the redevelopment of the Tar II-A and Tar V post-steamflood projects by drilling several new wells and converting idle wells to improve injection sweep efficiency and more effectively drain the remaining oil reserves. Reservoir management work included reducing water cuts, maintaining or increasing oil production, and evaluating and minimizing further thermal-related formation compaction. The BP2 project utilized all the tools and knowledge gained throughout the DOE project to maximize recovery of the oil in place.

  1. Structural Deterministic Safety Factors Selection Criteria and Verification

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1992-01-01

    Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index combining these safety factors was derived, and the corresponding reliability showed that the deterministic method is not reliability sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.

  2. Numerical simulation and characterization of trapping noise in InGaP-GaAs heterojunctions devices at high injection

    NASA Astrophysics Data System (ADS)

    Nallatamby, Jean-Christophe; Abdelhadi, Khaled; Jacquet, Jean-Claude; Prigent, Michel; Floriot, Didier; Delage, Sylvain; Obregon, Juan

    2013-03-01

    Commercially available simulators present considerable advantages in performing accurate DC, AC and transient simulations of semiconductor devices, including many fundamental and parasitic effects which are not generally taken into account in house-made simulators. Nevertheless, while the public-domain TCAD simulators we have tested give accurate results for the simulation of diffusion noise, none of them simulates trap-assisted generation-recombination (GR) noise accurately. To overcome this problem we propose a robust solution to accurately simulate GR noise due to traps. It is based on numerical processing of the output data of one of the simulators available in the public domain, namely SENTAURUS (from Synopsys). We have linked together, through a dedicated Data Access Component (DAC), the deterministic output data available from SENTAURUS and a powerful, customizable post-processing tool developed on the mathematical SCILAB software package. Thus, robust simulations of GR noise in semiconductor devices can be performed by using GR Langevin sources associated with the scalar Green function responses of the device. Our method takes advantage of the accuracy of the deterministic simulations of electronic devices obtained with SENTAURUS. A comparison between 2-D simulations and measurements of low-frequency noise on InGaP-GaAs heterojunctions, at low as well as high injection levels, demonstrates the validity of the proposed simulation tool.

  3. Large conditional single-photon cross-phase modulation

    PubMed Central

    Hosseini, Mahdi; Duan, Yiheng; Vuletić, Vladan

    2016-01-01

    Deterministic optical quantum logic requires a nonlinear quantum process that alters the phase of a quantum optical state by π through interaction with only one photon. Here, we demonstrate a large conditional cross-phase modulation between a signal field, stored inside an atomic quantum memory, and a control photon that traverses a high-finesse optical cavity containing the atomic memory. This approach avoids fundamental limitations associated with multimode effects for traveling optical photons. We measure a conditional cross-phase shift of π/6 (and up to π/3 by postselection on photons that remain in the system longer than average) between the retrieved signal and control photons, and confirm deterministic entanglement between the signal and control modes by extracting a positive concurrence. By upgrading to a state-of-the-art cavity, our system can reach a coherent phase shift of π at low loss, enabling deterministic and universal photonic quantum logic. PMID:27519798

  4. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of fabric-based engine containment systems through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through reliability-based design optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research implements the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, even when a deterministic optimization problem yields a cost-effective structure, the design may be highly unreliable if the uncertainty associated with the system (material properties, loading, etc.) is not represented in the solution process. A reliable and optimal solution can be obtained by performing reliability analysis along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints. This part of the research starts with an introduction to reliability analysis, such as first-order and second-order reliability analysis, followed by simulation techniques performed to obtain the probability of failure and reliability of structures. Next, a decoupled RBDO procedure is proposed with a new reliability analysis formulation including sensitivity analysis, which removes the highly reliable constraints from the RBDO, thereby reducing the computational time and function evaluations. Finally, implementations of the reliability analysis concepts and RBDO in 2D finite element truss problems and a planar beam problem are presented and discussed.
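
    A stripped-down version of the Monte Carlo step: sample material strength and load from assumed distributions and estimate the failure probability by counting exceedances. The distributions below are placeholders, not the Kevlar 49 test data.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    # Assumed distributions standing in for material strength and impact demand.
    strength = rng.normal(3.0, 0.3, n)   # GPa
    demand   = rng.normal(2.2, 0.4, n)   # GPa

    pf = np.mean(demand >= strength)     # Monte Carlo failure-probability estimate
    se = np.sqrt(pf * (1 - pf) / n)      # binomial standard error of the estimate
    print(f"P_f ~ {pf:.4f} +/- {se:.4f}")
    ```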

  5. A microfluidic platform for drug screening in a 3D cancer microenvironment.

    PubMed

    Pandya, Hardik J; Dhingra, Karan; Prabhakar, Devbalaji; Chandrasekar, Vineethkrishna; Natarajan, Siva Kumar; Vasan, Anish S; Kulkarni, Ashish; Shafiee, Hadi

    2017-08-15

    Development of resistance to chemotherapy treatments is a major challenge in the battle against cancer. Although a vast repertoire of chemotherapeutics is currently available for treating cancer, a technique for rapidly identifying the right drug based on the chemo-resistivity of the cancer cells is not available and it currently takes weeks to months to evaluate the response of cancer patients to a drug. A sensitive, low-cost diagnostic assay capable of rapidly evaluating the effect of a series of drugs on cancer cells can significantly change the paradigm in cancer treatment management. Integration of microfluidics and electrical sensing modality in a 3D tumour microenvironment may provide a powerful platform to tackle this issue. Here, we report a 3D microfluidic platform that could be potentially used for a real-time deterministic analysis of the success rate of a chemotherapeutic drug in less than 12h. The platform (66mm×50mm; L×W) is integrated with the microsensors (interdigitated gold electrodes with width and spacing 10µm) that can measure the change in the electrical response of cancer cells seeded in a 3D extra cellular matrix when a chemotherapeutic drug is flown next to the matrix. B16-F10 mouse melanoma, 4T1 mouse breast cancer, and DU 145 human prostate cancer cells were used as clinical models. The change in impedance magnitude on flowing chemotherapeutics drugs measured at 12h for drug-susceptible and drug tolerant breast cancer cells compared to control were 50,552±144 Ω and 28,786±233 Ω, respectively, while that of drug-susceptible melanoma cells were 40,197±222 Ω and 4069±79 Ω, respectively. In case of prostate cancer the impedance change between susceptible and resistant cells were 8971±1515 Ω and 3281±429 Ω, respectively, which demonstrated that the microfluidic platform was capable of delineating drug susceptible cells, drug tolerant, and drug resistant cells in less than 12h. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Invited Review: A review of deterministic effects in cyclic variability of internal combustion engines

    DOE PAGES

    Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; ...

    2015-02-18

    Here we review developments in the understanding of cycle-to-cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short term predictability. In some cases, models of this type appear to work very well at describing experimental observations, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes and thus mitigate them. Some recent studies have demonstrated that under some conditions, cyclic combustion variations can have a relatively high degree of low dimensional deterministic structure, which implies some degree of predictability and potential for real time control. These deterministic effects are typically more pronounced near critical stability limits (e.g. near tipping points associated with ignition or flame propagation), such as during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in experimental and analytical characterization of cyclic variability where low dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. In conclusion, taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.
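
    One common way to look for the low-dimensional deterministic structure the review describes is a lag-one return map of a cycle-resolved metric (e.g. IMEP or heat release). This sketch contrasts an uncorrelated series with a weakly deterministic one; the AR(1) stand-in is an assumption for illustration only.

    ```python
    import numpy as np

    def return_map(series):
        """Pairs (x_n, x_{n+1}) for a cycle-resolved combustion metric."""
        x = np.asarray(series)
        return x[:-1], x[1:]

    rng = np.random.default_rng(1)
    iid = rng.normal(size=5000)            # purely stochastic reference
    ar = np.empty(5000); ar[0] = 0.0
    for i in range(1, 5000):               # weakly deterministic prior-cycle coupling
        ar[i] = -0.6 * ar[i - 1] + rng.normal()

    for name, s in (("iid", iid), ("deterministic", ar)):
        xn, xn1 = return_map(s)
        print(name, "lag-1 correlation:", round(np.corrcoef(xn, xn1)[0, 1], 3))
    ```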

  7. Effects of magnetometer calibration and maneuvers on accuracies of magnetometer-only attitude-and-rate determination

    NASA Technical Reports Server (NTRS)

    Challa, M.; Natanson, G.

    1998-01-01

    Two different algorithms, a deterministic magnetic-field-only algorithm and a Kalman filter for gyroless spacecraft, are used to estimate the attitude and rates of the Rossi X-Ray Timing Explorer (RXTE) using only measurements from a three-axis magnetometer. The performance of these algorithms is examined using in-flight data from various scenarios. In particular, significant enhancements in accuracy are observed when the telemetered magnetometer data are accurately calibrated using a recently developed calibration algorithm. Interesting features observed in these studies of the inertial-pointing RXTE include a remarkable sensitivity of the filter to the numerical values of the noise parameters and relatively long convergence time spans. By analogy, the accuracy of the deterministic scheme is noticeably lower as a result of reduced rates of change of the body-fixed geomagnetic field. Preliminary results show per-axis filter attitude accuracies ranging between 0.1 and 0.5 deg and rate accuracies between 0.001 deg/sec and 0.005 deg/sec, whereas the deterministic method needs more sophisticated techniques for smoothing time derivatives of the measured geomagnetic field to clearly distinguish both attitude and rate solutions from the numerical noise. Also included is a new theoretical development in the deterministic algorithm: the transformation of a transcendental equation in the original theory into an 8th-order polynomial equation. It is shown that this 8th-order polynomial reduces to quadratic equations in the two limiting cases (infinitely high wheel momentum and constant rates) discussed in previous publications.

  8. Probabilistic vs. deterministic fiber tracking and the influence of different seed regions to delineate cerebellar-thalamic fibers in deep brain stimulation.

    PubMed

    Schlaier, Juergen R; Beer, Anton L; Faltermeier, Rupert; Fellner, Claudia; Steib, Kathrin; Lange, Max; Greenlee, Mark W; Brawanski, Alexander T; Anthofer, Judith M

    2017-06-01

    This study compared tractography approaches for identifying cerebellar-thalamic fiber bundles relevant to planning target sites for deep brain stimulation (DBS). In particular, probabilistic and deterministic tracking of the dentate-rubro-thalamic tract (DRTT) and differences between the spatial courses of the DRTT and the cerebello-thalamo-cortical (CTC) tract were compared. Six patients with movement disorders were examined by magnetic resonance imaging (MRI), including two sets of diffusion-weighted images (12 and 64 directions). Probabilistic and deterministic tractography was applied to each diffusion-weighted dataset to delineate the DRTT. Results were compared with regard to their sensitivity in revealing the DRTT, detection of additional fiber tracts, and processing time. Two sets of regions-of-interest (ROIs) guided deterministic tractography of the DRTT or the CTC, respectively. Tract distances to an atlas-based reference target were compared. Probabilistic fiber tracking with 64 orientations detected the DRTT in all twelve hemispheres. Deterministic tracking detected the DRTT in nine (12 directions) and in only two (64 directions) hemispheres. Probabilistic tracking was more sensitive in detecting additional fibers (e.g. ansa lenticularis and medial forebrain bundle) than deterministic tracking. Probabilistic tracking required substantially longer processing time than deterministic tracking. Deterministic tracking was more sensitive in detecting the CTC than the DRTT. CTC tracts were located adjacent but consistently more posterior to DRTT tracts. These results suggest that probabilistic tracking is more sensitive and robust in detecting the DRTT but harder to implement than deterministic approaches. Although the sensitivity of deterministic tracking is higher for the CTC than the DRTT, targets for DBS based on these tracts likely differ. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  9. Inversion using a new low-dimensional representation of complex binary geological media based on a deep neural network

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas

    2017-12-01

    Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
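
    The inversion-friendly property, that draws from a standard normal prior decode to plausible model realizations, can be sketched as below; the decoder here is a random-weight stand-in for the trained network, purely to show the sampling pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    latent_dim, nx, ny = 20, 32, 32

    # Stand-in for a trained VAE decoder: fixed random weights, sigmoid, threshold.
    W = rng.normal(scale=0.3, size=(latent_dim, nx * ny))
    def decode(z):
        logits = z @ W
        return (1.0 / (1.0 + np.exp(-logits)) > 0.5).reshape(-1, nx, ny)

    z = rng.normal(size=(4, latent_dim))   # uncorrelated standard normal draws
    realizations = decode(z)               # binary "geological" fields
    print(realizations.shape, realizations.mean())
    ```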

  10. Deterministic Aperiodic Structures for on-chip Nanophotonics and Nanoplasmonics Device Applications

    DTIC Science & Technology

    2013-04-01

    ... the origin of the superior field enhancement and localization observed in several aperiodic plasmonic structures. ... removed by hot acetone bath, resulting in the Si nano-hole master. The Si master is first treated with a silanizing agent to reduce the adhesion of ... arrays needs to be utilized, as illustrated in Figs. 7(a-d). The nanodot master fabrication proceeds ...

  11. Consistent Adjoint Driven Importance Sampling using Space, Energy and Angle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M

    2012-08-01

    For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS (Consistent Adjoint Driven Importance Sampling). This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.
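
    The consistency at the heart of CADIS (bias the source by adjoint importance, then give each particle a birth weight that exactly undoes the bias) fits in a few lines; the four-cell source and adjoint values below are toy numbers, not from the paper.

    ```python
    import numpy as np

    # Toy 1-D example: bias a source by an adjoint "importance" solution.
    q = np.array([0.4, 0.3, 0.2, 0.1])       # true source pdf over 4 cells
    adj = np.array([0.01, 0.05, 0.2, 1.0])   # assumed adjoint flux (importance)

    q_biased = q * adj / np.sum(q * adj)     # consistent biased source
    w_birth = q / q_biased                   # birth weights = (integral q*adj)/adj
    print("biased source:", q_biased.round(4))
    print("birth weights:", w_birth.round(3))
    ```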

  12. Influences of the current density on the performances of the chrome-plated layer in deterministic electroplating repair

    NASA Astrophysics Data System (ADS)

    Xia, H.; Shen, X. M.; Yang, X. C.; Xiong, Y.; Jiang, G. L.

    2018-01-01

    Deterministic electroplating repair is a novel method for rapidly repairing worn parts. Through qualitative contrast and quantitative comparison, the influences of the current density on the performance of the chrome-plated layer were determined in this study. Chrome-plated layers were fabricated under different current densities while the other parameters were kept constant. Hardness, thickness and composition, surface morphology and roughness, and wearability of the chrome-plated layers were measured with a Vickers hardness tester, a scanning electron microscope / energy-dispersive X-ray detector, a digital microscope in 3D imaging mode, and a ball-milling instrument with profilograph, respectively. In order to evaluate each factor consistently, the experimental data were normalized. A comprehensive evaluation model was established to quantitatively analyze the influence of the current density, based on the analytic hierarchy process (AHP) and the weighted evaluation method. The calculated comprehensive evaluation indexes corresponding to current densities of 40 A/dm2, 45 A/dm2, 50 A/dm2, 55 A/dm2, 60 A/dm2, and 65 A/dm2 were 0.2246, 0.4850, 0.4799, 0.4922, 0.8672, and 0.1381, respectively. The experimental results indicate that the optimal option is 60 A/dm2, and the priority order is 60 A/dm2, 55 A/dm2, 45 A/dm2, 50 A/dm2, 40 A/dm2, and 65 A/dm2.
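
    A minimal sketch of the weighted-evaluation step: each current density gets a composite index as the weighted sum of its normalized criterion scores. The scores and AHP-style weights below are illustrative placeholders, not the measured data.

    ```python
    import numpy as np

    # Rows: current densities 40..65 A/dm2; columns: hardness, thickness,
    # roughness, wearability (normalized to [0, 1]; values are illustrative).
    scores = np.array([
        [0.3, 0.2, 0.4, 0.1],
        [0.5, 0.5, 0.5, 0.4],
        [0.5, 0.4, 0.5, 0.5],
        [0.6, 0.5, 0.4, 0.5],
        [0.9, 0.9, 0.8, 0.8],
        [0.1, 0.2, 0.1, 0.2],
    ])
    weights = np.array([0.4, 0.2, 0.2, 0.2])   # assumed AHP-derived weights

    index = scores @ weights                   # weighted composite index
    densities = [40, 45, 50, 55, 60, 65]
    print(index.round(3), "-> optimum:", densities[int(np.argmax(index))], "A/dm2")
    ```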

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dustin Popp; Zander Mausolff; Sedat Goluoglu

    We are proposing to use the code TDKENO to model TREAT. TDKENO solves the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons. Instead of directly integrating this equation, the neutron flux is factored into two components, a rapidly varying amplitude function and a slowly varying shape function, each solved separately on different time scales. The shape equation is solved using the 3D Monte Carlo transport code KENO from Oak Ridge National Laboratory's SCALE code package. Using the Monte Carlo method to solve the shape equation is still computationally intensive, but the operation is only performed when needed. The amplitude equation is solved deterministically and frequently, so an accurate time-dependent solution is obtained without having to repeatedly perform the expensive Monte Carlo shape calculation. We have modified TDKENO to incorporate KENO-VI so that we may accurately represent the geometries within TREAT. This paper explains the motivation behind using generalized geometry and provides the results of our modifications. TDKENO uses the Improved Quasi-Static method to accomplish this. In this method, the neutron flux is factored into two components: a purely time-dependent and rapidly varying amplitude function, which is solved deterministically and very frequently (small time steps), and a slowly varying flux shape function that weakly depends on time and is only solved when needed (significantly larger time steps).
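
    The "amplitude solved deterministically and frequently" part behaves like point kinetics. Here is a one-delayed-group sketch with assumed kinetics parameters; this is a generic illustration, not TDKENO's actual amplitude solver.

    ```python
    from scipy.integrate import solve_ivp

    # Assumed one-delayed-group kinetics parameters.
    beta, lam, Lambda = 0.0065, 0.08, 5e-5   # delayed fraction, decay, gen. time

    def rhs(t, y, rho):
        n, c = y                              # amplitude and precursor level
        return [(rho - beta) / Lambda * n + lam * c,
                beta / Lambda * n - lam * c]

    y0 = [1.0, beta / (lam * Lambda)]         # start at precursor equilibrium
    sol = solve_ivp(rhs, (0.0, 1.0), y0, args=(0.002,), max_step=1e-3)
    print("amplitude at t = 1 s:", sol.y[0, -1])
    ```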

  14. Mutation Clusters from Cancer Exome.

    PubMed

    Kakushadze, Zura; Yu, Willie

    2017-08-15

    We apply our statistically deterministic machine learning/clustering algorithm *K-means (recently developed in https://ssrn.com/abstract=2908286) to 10,656 published exome samples for 32 cancer types. A majority of cancer types exhibit a mutation clustering structure. Our results are in-sample stable. They are also out-of-sample stable when applied to 1389 published genome samples across 14 cancer types. In contrast, we find in- and out-of-sample instabilities in cancer signatures extracted from exome samples via nonnegative matrix factorization (NMF), a computationally-costly and non-deterministic method. Extracting stable mutation structures from exome data could have important implications for speed and cost, which are critical for early-stage cancer diagnostics, such as novel blood-test methods currently in development.

  15. Mutation Clusters from Cancer Exome

    PubMed Central

    Kakushadze, Zura; Yu, Willie

    2017-01-01

    We apply our statistically deterministic machine learning/clustering algorithm *K-means (recently developed in https://ssrn.com/abstract=2908286) to 10,656 published exome samples for 32 cancer types. A majority of cancer types exhibit a mutation clustering structure. Our results are in-sample stable. They are also out-of-sample stable when applied to 1389 published genome samples across 14 cancer types. In contrast, we find in- and out-of-sample instabilities in cancer signatures extracted from exome samples via nonnegative matrix factorization (NMF), a computationally-costly and non-deterministic method. Extracting stable mutation structures from exome data could have important implications for speed and cost, which are critical for early-stage cancer diagnostics, such as novel blood-test methods currently in development. PMID:28809811
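
    The clustering step can be sketched with ordinary k-means on a samples-by-mutation-categories matrix; the published *K-means adds a statistically deterministic seeding/aggregation layer that is not reproduced here, and the Poisson counts are synthetic stand-ins for exome data.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Stand-in for a samples x mutation-categories matrix (e.g. 96 trinucleotide
    # channels per sample).
    X = rng.poisson(5.0, size=(200, 96)).astype(float)
    X /= X.sum(axis=1, keepdims=True)      # normalize to per-sample fractions

    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
    print("cluster sizes:", np.bincount(km.labels_))
    ```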

  16. On the Development of a Deterministic Three-Dimensional Radiation Transport Code

    NASA Technical Reports Server (NTRS)

    Rockell, Candice; Tweed, John

    2011-01-01

    Since astronauts on future deep space missions will be exposed to dangerous radiation, there is a need to accurately model the transport of radiation through shielding materials and to estimate the received radiation dose. In response to this need, a three-dimensional deterministic code for space radiation transport is now under development. The new code GRNTRN is based on a Green's function solution of the Boltzmann transport equation that is constructed in the form of a Neumann series. Analytical approximations will be obtained for the first three terms of the Neumann series and the remainder will be estimated by a non-perturbative technique. This work discusses progress made to date and exhibits some computations based on the first two Neumann series terms.
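
    Schematically, the series being truncated is the standard Neumann expansion of a transport equation of the form φ = Kφ + S (notation assumed here, not taken from the paper): the leading terms are evaluated analytically and the tail is estimated non-perturbatively.

    ```latex
    % Neumann-series solution of \phi = K\phi + S (schematic):
    \phi \;=\; \sum_{n=0}^{\infty} K^{n} S
         \;=\; \underbrace{S + K S + K^{2} S}_{\text{analytic approximations}}
         \;+\; \underbrace{\sum_{n=3}^{\infty} K^{n} S}_{\text{non-perturbative estimate}}
    ```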

  17. Deterministic Migration-Based Separation of White Blood Cells.

    PubMed

    Kim, Byeongyeon; Choi, Young Joon; Seo, Hyekyung; Shin, Eui-Cheol; Choi, Sungyoung

    2016-10-01

    Functional and phenotypic analyses of peripheral white blood cells provide useful clinical information. However, separation of white blood cells from peripheral blood requires a time-consuming, inconvenient process and thus analyses of separated white blood cells are limited in clinical settings. To overcome this limitation, a microfluidic separation platform is developed to enable deterministic migration of white blood cells, directing the cells into designated positions according to a ridge pattern. The platform uses slant ridge structures on the channel top to induce the deterministic migration, which allows efficient and high-throughput separation of white blood cells from unprocessed whole blood. The extent of the deterministic migration under various rheological conditions is explored, enabling highly efficient migration of white blood cells in whole blood and achieving high-throughput separation of the cells (processing 1 mL of whole blood in less than 7 min). In the separated cell population, the composition of lymphocyte subpopulations is well preserved, and T cells secrete cytokines without any functional impairment. On the basis of the results, this microfluidic platform is a promising tool for the rapid enrichment of white blood cells, and it is useful for functional and phenotypic analyses of peripheral white blood cells. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Demographic noise can reverse the direction of deterministic selection

    PubMed Central

    Constable, George W. A.; Rogers, Tim; McKane, Alan J.; Tarnita, Corina E.

    2016-01-01

    Deterministic evolutionary theory robustly predicts that populations displaying altruistic behaviors will be driven to extinction by mutant cheats that absorb common benefits but do not themselves contribute. Here we show that when demographic stochasticity is accounted for, selection can in fact act in the reverse direction to that predicted deterministically, instead favoring cooperative behaviors that appreciably increase the carrying capacity of the population. Populations that exist in larger numbers experience a selective advantage by being more stochastically robust to invasions than smaller populations, and this advantage can persist even in the presence of reproductive costs. We investigate this general effect in the specific context of public goods production and find conditions for stochastic selection reversal leading to the success of public good producers. This insight, developed here analytically, is missed by the deterministic analysis as well as by standard game theoretic models that enforce a fixed population size. The effect is found to be amplified by space; in this scenario we find that selection reversal occurs within biologically reasonable parameter regimes for microbial populations. Beyond the public good problem, we formulate a general mathematical framework for models that may exhibit stochastic selection reversal. In this context, we describe a stochastic analog to r−K theory, by which small populations can evolve to higher densities in the absence of disturbance. PMID:27450085

  19. a New Method for Calculating the Fractal Dimension of Surface Topography

    NASA Astrophysics Data System (ADS)

    Zuo, Xue; Zhu, Hua; Zhou, Yuankai; Li, Yan

    2015-06-01

    A new method, termed the three-dimensional root-mean-square (3D-RMS) method, is proposed to calculate the fractal dimension (FD) of machined surfaces. The measure of this method is the root-mean-square value of the surface data, and the scale is the side length of a square in the projection plane. In order to evaluate the calculation accuracy of the proposed method, isotropic surfaces with deterministic FD are generated based on the fractional Brownian function and the Weierstrass-Mandelbrot (WM) fractal function, and two kinds of anisotropic surfaces are generated by stretching or rotating a WM fractal curve. Their FDs are estimated by the proposed method, as well as by the differential box-counting (DBC) method, the triangular prism surface area (TPSA) method and the variation method (VM). The results show that the 3D-RMS method performs better than the other methods, with a lower relative error for both isotropic and anisotropic surfaces, especially for surfaces with dimensions higher than 2.5, since the relative error between the estimated value and the theoretical value decreases with theoretical FD. Finally, an electrodeposited surface, an end-turning surface and a grinding surface are chosen as examples to illustrate the application of the 3D-RMS method to real machined surfaces. This method gives a new way to accurately calculate the FD from surface topographic data.
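
    A much simplified reading of the scale-versus-RMS idea (my own condensation, assuming an FD = 3 - slope relation for fBm-like surfaces; the published method's measure and scale definitions may differ in detail):

    ```python
    import numpy as np

    def fd_rms(surface, scales=(4, 8, 16, 32)):
        """Estimate surface FD from mean RMS roughness vs. window side length,
        using the slope of the log-log fit (slope ~ Hurst exponent H, FD = 3 - H)."""
        n = surface.shape[0]
        rms = []
        for s in scales:
            blocks = [surface[i:i + s, j:j + s] - surface[i:i + s, j:j + s].mean()
                      for i in range(0, n - s + 1, s)
                      for j in range(0, n - s + 1, s)]
            rms.append(np.mean([np.sqrt((b ** 2).mean()) for b in blocks]))
        slope = np.polyfit(np.log(scales), np.log(rms), 1)[0]
        return 3.0 - slope

    rng = np.random.default_rng(0)
    print(fd_rms(rng.normal(size=(128, 128))))   # white-noise surface, FD near 3
    ```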

  20. Experiences of the use of FOX, an intelligent agent, for programming cochlear implant sound processors in new users.

    PubMed

    Vaerenberg, Bart; Govaerts, Paul J; de Ceulaer, Geert; Daemers, Kristin; Schauwers, Karen

    2011-01-01

    This report describes the application of the software tool "Fitting to Outcomes eXpert" (FOX) in programming the cochlear implant (CI) processor in new users. FOX is an intelligent agent to assist in the programming of CI processors. The concept of FOX is to modify maps on the basis of specific outcome measures, achieved using heuristic logic and based on a set of deterministic "rules". A prospective study was conducted on eight consecutive CI users with a follow-up of three months. Eight adult subjects with postlingual deafness were implanted with the Advanced Bionics HiRes90k device. The implants were programmed using FOX, running a set of rules known as Eargroup's EG0910 advice, which features a set of "automaps". The protocol employed for the initial 3 months is presented, with a description of the map modifications generated by FOX and the corresponding psychoacoustic test results. The 3-month median results show a pure-tone average (PTA) of 25 dBHL, phoneme scores of 77% (55 dBSPL) and 71% (70 dBSPL) on speech audiometry, and loudness scaling in or near the normal zone at different frequencies. It is concluded that this approach is feasible for starting up CI fitting and yields good outcomes.

  1. On the kinetics of dendritic sidebranching: A three dimensional phase field study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, Shan; Guo, Zhipeng; Han, Zhiqiang, E-mail: zqhan@tsinghua.edu.cn

    2016-04-28

    The underlying mechanism for dendritic sidebranching was studied using 3-D phase field modeling. Results showed that in 3-D the requirement of applying random thermal noise to induce dendritic sidebranching (as is normally the case for 2-D phase field simulations) was fully relaxed. The stretching of the secondary or higher order arms occurred spontaneously and symmetrically as the dendrite grew. With periodic external perturbation, and if the stimulating frequency was lower than a critical value, both tip velocity and sidebranching would get completely synchronized with the perturbation. Whereas if the perturbation frequency was higher than the critical value, rather than increasing, the sidebranching frequency would become stable and remain at the same magnitude as that of the natural sidebranching, i.e., when no external perturbation was applied. It was shown that the underlying mechanism for sidebranching was deterministic rather than stochastic, and anisotropy tendency and curvature effect were shown to be the most important influencing factors. Moreover, differences in the anisotropy tendency would lead to an uneven distribution of curvature on the solid/liquid interface, i.e., formation of concave and convex geometries. The growth of these geometries would subsequently break the initial spherical structure of the solid seed and lead to further sidebranching.

  2. Impact of refining the assessment of dietary exposure to cadmium in the European adult population.

    PubMed

    Ferrari, Pietro; Arcella, Davide; Heraud, Fanny; Cappé, Stefano; Fabiansson, Stefan

    2013-01-01

    Exposure assessment constitutes an important step in any risk assessment of potentially harmful substances present in food. The European Food Safety Authority (EFSA) first assessed dietary exposure to cadmium in Europe using a deterministic framework, resulting in mean values of exposure in the range of health-based guidance values. Since then, the characterisation of foods has been refined to better match occurrence and consumption data, and a new strategy to handle left-censoring in occurrence data was devised. A probabilistic assessment was performed and compared with deterministic estimates, using occurrence values at the European level and consumption data from 14 national dietary surveys. Mean estimates in the probabilistic assessment ranged from 1.38 (95% CI = 1.35-1.44) to 2.08 (1.99-2.23) µg kg⁻¹ bodyweight (bw) week⁻¹ across the different surveys, which were less than 10% lower than deterministic (middle bound) mean values that ranged from 1.50 to 2.20 µg kg⁻¹ bw week⁻¹. Probabilistic 95th percentile estimates of dietary exposure ranged from 2.65 (2.57-2.72) to 4.99 (4.62-5.38) µg kg⁻¹ bw week⁻¹, which were, with the exception of one survey, between 3% and 17% higher than middle-bound deterministic estimates. Overall, the proportion of subjects exceeding the tolerable weekly intake of 2.5 µg kg⁻¹ bw ranged from 14.8% (13.6-16.0%) to 31.2% (29.7-32.5%) according to the probabilistic assessment. The results of this work indicate that mean values of dietary exposure to cadmium in the European population were of similar magnitude using deterministic or probabilistic assessments. For higher exposure levels, probabilistic estimates were almost consistently larger than their deterministic counterparts, thus reflecting the impact of using the full distribution of occurrence values to determine exposure levels. It is considered prudent to use probabilistic methodology should exposure estimates be close to or exceeding health-based guidance values.
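
    The probabilistic calculation amounts to sampling consumption, occurrence, and bodyweight jointly and reading percentiles off the resulting exposure distribution. The sketch below uses assumed distributions, not the EFSA occurrence or survey data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    # Illustrative weekly exposure = consumption (kg/week) x occurrence
    # (ug Cd per kg food) / bodyweight (kg); all distributions are assumptions.
    consumption = rng.lognormal(np.log(2.0), 0.5, n)
    occurrence  = rng.lognormal(np.log(30.0), 0.8, n)
    bw          = rng.normal(70.0, 12.0, n).clip(40, None)

    exposure = consumption * occurrence / bw       # ug per kg bw per week
    print("mean:", exposure.mean().round(2))
    print("P95 :", np.percentile(exposure, 95).round(2))
    print("share above TWI of 2.5:", (exposure > 2.5).mean().round(3))
    ```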

  3. The Transcriptional Regulator CBP Has Defined Spatial Associations within Interphase Nuclei

    PubMed Central

    McManus, Kirk J; Stephens, David A; Adams, Niall M; Islam, Suhail A; Freemont, Paul S; Hendzel, Michael J

    2006-01-01

    It is becoming increasingly clear that nuclear macromolecules and macromolecular complexes are compartmentalized through binding interactions into an apparent three-dimensionally ordered structure. This ordering, however, does not appear to be deterministic to the extent that chromatin and nonchromatin structures maintain a strict 3-D arrangement. Rather, spatial ordering within the cell nucleus appears to conform to stochastic rather than deterministic spatial relationships. The stochastic nature of organization becomes particularly problematic when any attempt is made to describe the spatial relationship between proteins involved in the regulation of the genome. The CREB–binding protein (CBP) is one such transcriptional regulator that, when visualised by confocal microscopy, reveals a highly punctate staining pattern comprising several hundred individual foci distributed within the nuclear volume. Markers for euchromatic sequences have similar patterns. Surprisingly, in most cases, the predicted one-to-one relationship between transcription factor and chromatin sequence is not observed. Consequently, to understand whether spatial relationships that are not coincident are nonrandom and potentially biologically important, it is necessary to develop statistical approaches. In this study, we report on the development of such an approach and apply it to understanding the role of CBP in mediating chromatin modification and transcriptional regulation. We have used nearest-neighbor distance measurements and probability analyses to study the spatial relationship between CBP and other nuclear subcompartments enriched in transcription factors, chromatin, and splicing factors. Our results demonstrate that CBP has an order of spatial association with other nuclear subcompartments. We observe closer associations between CBP and RNA polymerase II–enriched foci and SC35 speckles than nascent RNA or specific acetylated histones. Furthermore, we find that CBP has a significantly higher probability of being close to its known in vivo substrate histone H4 lysine 5 compared with the closely related H4 lysine 12. This study demonstrates that complex relationships not described by colocalization exist in the interphase nucleus and can be characterized and quantified. The subnuclear distribution of CBP is difficult to reconcile with a model where chromatin organization is the sole determinant of the nuclear organization of proteins that regulate transcription but is consistent with a close link between spatial associations and nuclear functions. PMID:17054391
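
    The nearest-neighbor machinery generalizes easily; a compact version (with uniform random points standing in for the real foci coordinates, and a simple randomization null instead of the authors' exact probability analysis) looks like this:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    # Stand-ins for 3-D coordinates of CBP foci and another compartment's foci.
    cbp   = rng.uniform(0, 10, size=(300, 3))
    other = rng.uniform(0, 10, size=(400, 3))

    d_obs = cKDTree(other).query(cbp)[0]     # CBP -> nearest other-compartment focus

    # Randomization null: re-draw "other" positions within the same volume.
    null = np.array([cKDTree(rng.uniform(0, 10, size=other.shape)).query(cbp)[0].mean()
                     for _ in range(200)])
    p = np.mean(null <= d_obs.mean())        # fraction of nulls at least as close
    print("observed mean NN distance:", d_obs.mean().round(3), " p ~", p)
    ```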

  4. Approximations of noise covariance in multi-slice helical CT scans: impact on lung nodule size estimation.

    PubMed

    Zeng, Rongping; Petrick, Nicholas; Gavrielides, Marios A; Myers, Kyle J

    2011-10-07

    Multi-slice computed tomography (MSCT) scanners have become popular volumetric imaging tools. Deterministic and random properties of the resulting CT scans have been studied in the literature. Due to the large number of voxels in the three-dimensional (3D) volumetric dataset, full characterization of the noise covariance in MSCT scans is difficult to tackle. However, as usage of such datasets for quantitative disease diagnosis grows, so does the importance of understanding the noise properties because of their effect on the accuracy of the clinical outcome. The goal of this work is to study noise covariance in the helical MSCT volumetric dataset. We explore possible approximations to the noise covariance matrix with reduced degrees of freedom, including voxel-based variance, one-dimensional (1D) correlation, two-dimensional (2D) in-plane correlation and the noise power spectrum (NPS). We further examine the effect of various noise covariance models on the accuracy of a prewhitening matched filter nodule size estimation strategy. Our simulation results suggest that the 1D longitudinal, 2D in-plane and NPS prewhitening approaches can improve the performance of nodule size estimation algorithms. When taking into account computational costs in determining noise characterizations, the NPS model may be the most efficient approximation to the MSCT noise covariance matrix.
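
    For the NPS approximation specifically, a standard estimator averages squared DFTs of repeated noise-only realizations; this sketch (white-noise input, assumed unit pixel size) shows the shape of the computation rather than a calibrated CT implementation.

    ```python
    import numpy as np

    def nps_2d(noise_stack, pixel_size=1.0):
        """Noise power spectrum from repeated noise-only ROIs (stack: n, y, x).
        The ensemble mean image is subtracted to remove deterministic background."""
        n, ny, nx = noise_stack.shape
        dft = np.fft.fft2(noise_stack - noise_stack.mean(axis=0))
        return (np.abs(dft) ** 2).mean(axis=0) * pixel_size ** 2 / (ny * nx)

    rng = np.random.default_rng(0)
    stack = rng.normal(size=(50, 64, 64))   # white noise -> approximately flat NPS
    print("mean NPS (~ variance for white noise):", nps_2d(stack).mean().round(3))
    ```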

  5. A Lagrangian stochastic model to demonstrate multi-scale interactions between convection and land surface heterogeneity in the atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Parsakhoo, Zahra; Shao, Yaping

    2017-04-01

    Near-surface turbulent mixing has a considerable effect on surface fluxes, cloud formation and convection in the atmospheric boundary layer (ABL). Its quantification is, however, a modeling and computational challenge, since the small eddies are not fully resolved in Eulerian models directly. We have developed a Lagrangian stochastic model to demonstrate multi-scale interactions between convection and land-surface heterogeneity in the ABL, based on the Ito stochastic differential equation (SDE) for air parcels (particles). Due to the complexity of mixing in the ABL, we find that a linear Ito SDE cannot represent convection properly. Three strategies have been tested to solve the problem: 1) making the deterministic term in the Ito equation non-linear; 2) making the random term in the Ito equation fractional; and 3) modifying the Ito equation by including Levy flights. We focus on the third strategy and interpret mixing as an interaction between at least two stochastic processes with different Lagrangian time scales. Work is in progress to include collisions among particles with different characteristics and to apply the 3D model to real cases. One application of the model is emphasized: some land-surface patterns are generated and then coupled with large-eddy simulation (LES).
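
    For reference, the plain linear-drift Ito step the authors found insufficient is the Euler-Maruyama update of an Ornstein-Uhlenbeck velocity model; the time scale and velocity variance below are assumed values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Euler-Maruyama for dw = -w/T_L dt + sqrt(2*sigma_w^2/T_L) dW  (assumed params).
    T_L, sigma_w, dt, nsteps = 100.0, 0.6, 0.1, 20_000

    w = np.zeros(nsteps)   # parcel vertical velocity
    z = np.zeros(nsteps)   # parcel height
    for i in range(1, nsteps):
        dW = rng.normal(0.0, np.sqrt(dt))
        w[i] = w[i-1] - w[i-1] / T_L * dt + np.sqrt(2 * sigma_w**2 / T_L) * dW
        z[i] = z[i-1] + w[i-1] * dt

    print("velocity std (target 0.6):", w.std().round(3))
    ```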

  6. Integration of an open interface PC scene generator using COTS DVI converter hardware

    NASA Astrophysics Data System (ADS)

    Nordland, Todd; Lyles, Patrick; Schultz, Bret

    2006-05-01

    Commercial-Off-The-Shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military Hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic real-time, wave band-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.

  7. High Fidelity Preparation of a Single Atom in Its 2D Center of Mass Ground State

    NASA Astrophysics Data System (ADS)

    Sompet, Pimonpan; Fung, Yin Hsien; Schwartz, Eyal; Hunter, Matthew D. J.; Phrompao, Jindaratsamee; Andersen, Mikkel F.

    2017-04-01

    Complete control over quantum states of individual atoms is important for the study of the microscopic world. Here, we present a push button method for high fidelity preparation of a single 85Rb atom in the vibrational ground state of tightly focused optical tweezers. The method combines near-deterministic preparation of a single atom with magnetically-insensitive Raman sideband cooling. We achieve 2D cooling in the radial plane with a ground state population of 0.85, which provides a fidelity of 0.7 for the entire procedure (loading and cooling). The Raman beams couple two sublevels (| F = 3 , m = 0 〉 and | F = 2 , m = 0 〉) that are indifferent to magnetic noise to first order. This leads to long atomic coherence times, and allows us to implement the cooling in an environment where magnetic field fluctuations prohibit previously demonstrated variations. Additionally, we implement the trapping and manipulation of two atoms confined in separate dynamically reconfigurable optical tweezers, to study few-body dynamics.

  8. Deterministic quantum dense coding networks

    NASA Astrophysics Data System (ADS)

    Roy, Saptarshi; Chanda, Titas; Das, Tamoghna; Sen(De), Aditi; Sen, Ujjwal

    2018-07-01

    We consider the scenario of deterministic classical information transmission between multiple senders and a single receiver, when they a priori share a multipartite quantum state, an attempt towards building a deterministic dense coding network. Specifically, we prove that in the case of two or three senders and a single receiver, generalized Greenberger-Horne-Zeilinger (gGHZ) states are not beneficial for sending classical information deterministically beyond the classical limit, except when the shared state is the GHZ state itself. On the other hand, three- and four-qubit generalized W (gW) states with specific parameters as well as the four-qubit Dicke states can provide a quantum advantage of sending the information in deterministic dense coding. Interestingly, however, numerical simulations in the three-qubit scenario reveal that the percentage of states from the GHZ class that allow deterministic dense coding is higher than that of states from the W class.

  9. Signal separation by nonlinear projections: The fetal electrocardiogram

    NASA Astrophysics Data System (ADS)

    Schreiber, Thomas; Kaplan, Daniel T.

    1996-05-01

    We apply a locally linear projection technique, originally developed for noise reduction in deterministically chaotic signals, to extract the fetal component from scalar maternal electrocardiographic (ECG) recordings. Although we do not expect the maternal ECG to be deterministically chaotic, typical signals are effectively confined to a lower-dimensional manifold when embedded in delay space. The method is capable of extracting the fetal heart rate even when the fetal component and the noise are of comparable amplitude. If the noise is small, more details of the fetal ECG, such as P and T waves, can be recovered.
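
    A minimal sketch of the delay embedding and locally linear projection step described above; the embedding dimension, neighborhood size, and retained rank are illustrative assumptions, not the authors' settings:

    ```python
    import numpy as np

    def delay_embed(x, dim, lag=1):
        """Embed a scalar series x into dim-dimensional delay vectors."""
        n = len(x) - (dim - 1) * lag
        return np.stack([x[i * lag : i * lag + n] for i in range(dim)], axis=1)

    def local_projection(x, dim=8, k=50, rank=3):
        """Project each delay vector onto the top-`rank` principal directions
        of its k nearest neighbors; the discarded residual approximates the
        component transverse to the local manifold (here, noise or the
        non-maternal signal)."""
        Y = delay_embed(np.asarray(x, dtype=float), dim)
        cleaned = Y.copy()
        for i, y in enumerate(Y):
            d = np.linalg.norm(Y - y, axis=1)
            nbrs = Y[np.argsort(d)[:k]]              # k nearest delay vectors
            mu = nbrs.mean(axis=0)
            _, _, Vt = np.linalg.svd(nbrs - mu, full_matrices=False)
            P = Vt[:rank].T @ Vt[:rank]              # projector onto local manifold
            cleaned[i] = mu + P @ (y - mu)
        return cleaned
    ```

    The O(N^2) neighbor search keeps the sketch short; a production version would use a k-d tree.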

  10. A Deterministic Transport Code for Space Environment Electrons

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.

    2010-01-01

    A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.

  11. Optimal design of piezoelectric transformers: a rational approach based on an analytical model and a deterministic global optimization.

    PubMed

    Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand

    2007-07-01

    This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on combining analytical models, which describe the corresponding optimization problem, with an exact global optimization software package named IBBA, developed by the second author to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.

  12. Deterministic and stochastic methods of calculation of polarization characteristics of radiation in natural environment

    NASA Astrophysics Data System (ADS)

    Strelkov, S. A.; Sushkevich, T. A.; Maksakova, S. V.

    2017-11-01

    We discuss world-class Russian achievements in the theory of radiation transfer, taking its polarization in natural media into account, and the scientific potential currently developing in Russia, which provides an adequate methodological basis for theoretical and computational research of radiation processes and radiation fields in natural media using supercomputers and massive parallelism. A new version of the matrix transfer operator is proposed for solving problems of polarized radiation transfer in heterogeneous media by the method of influence functions, when deterministic and stochastic methods can be combined.

  13. The Role of Probabilistic Design Analysis Methods in Safety and Affordability

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.

    2016-01-01

    For the last several years, NASA and its contractors have been working together to build space launch systems to commercialize space. Developing commercial, affordable, and safe launch systems becomes very important and requires a paradigm shift. This paradigm shift enforces the need for an integrated systems engineering environment where cost, safety, reliability, and performance need to be considered to optimize the launch system design. In such an environment, rule-based and deterministic engineering design practices alone may not be sufficient to optimize margins and fault tolerance to reduce cost. As a result, introduction of Probabilistic Design Analysis (PDA) methods to support the current deterministic engineering design practices becomes a necessity to reduce cost without compromising reliability and safety. This paper discusses the importance of PDA methods in NASA's new commercial environment, their applications, and the key role they can play in designing reliable, safe, and affordable launch systems. More specifically, this paper discusses: 1) the involvement of NASA in PDA; 2) why PDA is needed; 3) a PDA model structure; 4) a PDA example application; and 5) the link between PDA and safety and affordability.

  14. Chaotic behavior in the locomotion of Amoeba proteus.

    PubMed

    Miyoshi, H; Kagawa, Y; Tsuchiya, Y

    2001-01-01

    The locomotion of Amoeba proteus has been investigated using algorithms developed in nonlinear science for evaluating the correlation dimension and the Lyapunov spectrum. These parameters indicate whether the random behavior of a system is stochastic or deterministic. For the nonlinear analysis, n-dimensional time-delayed vectors were reconstructed from a time series of the periphery and area of A. proteus images captured with a charge-coupled-device camera, which characterize its random motion. The correlation dimension shows that the random motion of A. proteus is governed by only 3-4 macrovariables, even though the system is a complex one composed of many degrees of freedom. Furthermore, the analysis of the Lyapunov spectrum shows that its largest exponent takes positive values. These results indicate that the random behavior of A. proteus is chaotic, deterministic motion on a low-dimensional attractor. Taking account of nonlinear interactions among a small number of dynamical processes, such as the sol-gel transformation, the cytoplasmic streaming, and the related chemical reactions occurring in the cell, may be important for elucidating cell locomotion.
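
    A minimal sketch of a Grassberger-Procaccia correlation-dimension estimate on a delay-embedded series; the embedding dimension, lag, and radii are illustrative assumptions, and in practice the scaling region is chosen by inspection:

    ```python
    import numpy as np

    def correlation_dimension(x, dim=6, lag=1, n_radii=20):
        """Estimate the correlation dimension D2 from the correlation sum
        C(r) ~ r**D2 of delay vectors built from the scalar series x."""
        x = np.asarray(x, dtype=float)
        n = len(x) - (dim - 1) * lag
        Y = np.stack([x[i * lag : i * lag + n] for i in range(dim)], axis=1)
        # all pairwise distances (O(n^2) memory; fine for short series)
        D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
        dists = D[np.triu_indices(n, k=1)]
        radii = np.logspace(np.log10(dists.min() + 1e-12),
                            np.log10(dists.max()), n_radii)
        C = np.array([(dists < r).mean() for r in radii])
        mask = C > 0
        # slope of log C(r) vs log r over the scaling region estimates D2
        return np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)[0]
    ```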

  15. Review of Hybrid (Deterministic/Monte Carlo) Radiation Transport Methods, Codes, and Applications at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, John C; Peplow, Douglas E.; Mosher, Scott W

    2010-01-01

    This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2-10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.

  16. Computer modeling of dynamic necking in bars

    NASA Astrophysics Data System (ADS)

    Partom, Yehuda; Lindenfeld, Avishay

    2017-06-01

    Necking of thin bodies (bars, plates, shells) is one form of strain localization in ductile materials that may lead to fracture. The phenomenon of necking has been studied extensively, initially for quasistatic loading and then also for dynamic loading. Nevertheless, many issues concerning necking are still unclear. Among these are: 1) is necking a random or deterministic process; 2) how does the specimen choose the final neck location; 3) to what extent do perturbations (material or geometrical) influence the neck forming process; and 4) how do various parameters (material, geometrical, loading) influence the neck forming process. Here we address these issues and others using computer simulations with a hydrocode. Among other things we find that: 1) neck formation is a deterministic process, and by changing one of the parameters influencing it monotonically, the final neck location moves monotonically as well; 2) the final neck location is sensitive to the radial velocity of the end boundaries, and as motion of these boundaries is not fully controlled in tests, this may be the reason why neck formation is sometimes regarded as a random process; and 3) neck formation is insensitive to small perturbations, which is probably why it is a deterministic process.

  17. Deterministic multi-step rotation of magnetic single-domain state in Nickel nanodisks using multiferroic magnetoelastic coupling

    NASA Astrophysics Data System (ADS)

    Sohn, Hyunmin; Liang, Cheng-yen; Nowakowski, Mark E.; Hwang, Yongha; Han, Seungoh; Bokor, Jeffrey; Carman, Gregory P.; Candler, Robert N.

    2017-10-01

    We demonstrate deterministic multi-step rotation of a magnetic single-domain (SD) state in Nickel nanodisks using the multiferroic magnetoelastic effect. Ferromagnetic Nickel nanodisks are fabricated on a piezoelectric Lead Zirconate Titanate (PZT) substrate, surrounded by patterned electrodes. With the application of a voltage between opposing electrode pairs, we generate anisotropic in-plane strains that reshape the magnetic energy landscape of the Nickel disks, reorienting magnetization toward a new easy axis. By applying a series of voltages sequentially to adjacent electrode pairs, circulating in-plane anisotropic strains are applied to the Nickel disks, deterministically rotating an SD state in the Nickel disks by increments of 45°. The rotation of the SD state is numerically predicted by a fully-coupled micromagnetic/elastodynamic finite element analysis (FEA) model, and the predictions are experimentally verified with magnetic force microscopy (MFM). This experimental result will provide a new pathway to develop energy-efficient magnetic manipulation techniques at the nanoscale.

  18. Delay compensation in integrated communication and control systems. I - Conceptual development and analysis

    NASA Technical Reports Server (NTRS)

    Luck, Rogelio; Ray, Asok

    1990-01-01

    A procedure for compensating for the effects of distributed network-induced delays in integrated communication and control systems (ICCS) is proposed. The problem of analyzing systems with time-varying and possibly stochastic delays could be circumvented by use of a deterministic observer which is designed to perform under certain restrictive but realistic assumptions. The proposed delay-compensation algorithm is based on a deterministic state estimator and a linear state-variable-feedback control law. The deterministic observer can be replaced by a stochastic observer without any structural modifications of the delay compensation algorithm. However, if a feedforward-feedback control law is chosen instead of the state-variable feedback control law, the observer must be modified as a conventional nondelayed system would be. Under these circumstances, the delay compensation algorithm would be accordingly changed. The separation principle of the classical Luenberger observer holds true for the proposed delay compensator. The algorithm is suitable for ICCS in advanced aircraft, spacecraft, manufacturing automation, and chemical process applications.
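
    A minimal sketch of the observer-plus-state-feedback structure that such a delay compensator is built around; the discrete-time plant matrices and gains below are illustrative assumptions, not the paper's design:

    ```python
    import numpy as np

    # Illustrative discrete-time plant: x[k+1] = A x[k] + B u[k], y[k] = C x[k]
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])
    C = np.array([[1.0, 0.0]])
    K = np.array([[10.0, 4.5]])       # state-feedback gain (assumed values)
    L_obs = np.array([[0.6], [1.2]])  # observer gain (assumed values)

    x = np.array([[1.0], [0.0]])      # true plant state
    x_hat = np.zeros((2, 1))          # observer estimate used by the controller

    for k in range(50):
        u = -K @ x_hat                # control law acts on the *estimate*
        y = C @ x                     # measurement (possibly delayed in an ICCS)
        # Luenberger observer: predict with the model, correct with the output
        x_hat = A @ x_hat + B @ u + L_obs @ (y - C @ x_hat)
        x = A @ x + B @ u
    ```

    The separation principle mentioned in the abstract is visible here: the feedback gain K and the observer gain L_obs can be designed independently.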

  19. On the usage of ultrasound computational models for decision making under ambiguity

    NASA Astrophysics Data System (ADS)

    Dib, Gerges; Sexton, Samuel; Prowant, Matthew; Crawford, Susan; Diaz, Aaron

    2018-04-01

    Computer modeling and simulation is becoming pervasive within the non-destructive evaluation (NDE) industry as a convenient tool for designing and assessing inspection techniques. This raises a pressing need for developing quantitative techniques for demonstrating the validity and applicability of the computational models. Computational models provide deterministic results based on deterministic and well-defined input, or stochastic results based on inputs defined by probability distributions. However, computational models cannot account for the effects of personnel, procedures, and equipment, resulting in ambiguity about the efficacy of inspections based on guidance from computational models only. In addition, ambiguity arises when model inputs, such as the representation of realistic cracks, cannot be defined deterministically, probabilistically, or by intervals. In this work, Pacific Northwest National Laboratory demonstrates the ability of computational models to represent field measurements under known variabilities, and quantify the differences using maximum amplitude and power spectrum density metrics. Sensitivity studies are also conducted to quantify the effects of different input parameters on the simulation results.

  20. Deterministic Creation of Macroscopic Cat States

    PubMed Central

    Lombardo, Daniel; Twamley, Jason

    2015-01-01

    Despite current technological advances, observing quantum mechanical effects outside of the nanoscopic realm is extremely challenging. For this reason, the observation of such effects on larger scale systems is currently one of the most attractive goals in quantum science. Many experimental protocols have been proposed for both the creation and observation of quantum states on macroscopic scales, in particular, in the field of optomechanics. The majority of these proposals, however, rely on performing measurements, making them probabilistic. In this work we develop a completely deterministic method of macroscopic quantum state creation. We study the prototypical optomechanical Membrane In The Middle model and show that by controlling the membrane’s opacity, and through careful choice of the optical cavity initial state, we can deterministically create and grow the spatial extent of the membrane’s position into a large cat state. It is found that by using a Bose-Einstein condensate as a membrane high fidelity cat states with spatial separations of up to ∼300 nm can be achieved. PMID:26345157

  1. Deterministic Integration of Quantum Dots into on-Chip Multimode Interference Beamsplitters Using in Situ Electron Beam Lithography.

    PubMed

    Schnauber, Peter; Schall, Johannes; Bounouar, Samir; Höhne, Theresa; Park, Suk-In; Ryu, Geun-Hwan; Heindel, Tobias; Burger, Sven; Song, Jin-Dong; Rodt, Sven; Reitzenstein, Stephan

    2018-04-11

    The development of multinode quantum optical circuits has attracted great attention in recent years. In particular, interfacing quantum-light sources, gates, and detectors on a single chip is highly desirable for the realization of large networks. In this context, fabrication techniques that enable the deterministic integration of preselected quantum-light emitters into nanophotonic elements play a key role when moving forward to circuits containing multiple emitters. Here, we present the deterministic integration of an InAs quantum dot into a 50/50 multimode interference beamsplitter via in situ electron beam lithography. We demonstrate the combined emitter-gate interface functionality by measuring triggered single-photon emission on-chip with g(2)(0) = 0.13 ± 0.02. Due to its high patterning resolution as well as spectral and spatial control, in situ electron beam lithography allows for integration of preselected quantum emitters into complex photonic systems. Being a scalable single-step approach, it paves the way toward multinode, fully integrated quantum photonic chips.

  2. Optimized linear motor and digital PID controller setup used in Mössbauer spectrometer

    NASA Astrophysics Data System (ADS)

    Kohout, Pavel; Kouřil, Lukáš; Navařík, Jakub; Novák, Petr; Pechoušek, Jiří

    2014-10-01

    Optimization of a linear motor and digital PID controller setup used in a Mössbauer spectrometer is presented. A velocity driving system with a digital PID feedback subsystem was developed in the LabVIEW graphical environment and deployed on the sbRIO real-time hardware device (National Instruments). The most important data acquisition processes are performed as real-time deterministic tasks on an FPGA chip. A velocity transducer of the double-loudspeaker type with a power amplifier circuit is driven by the system. A series of calibration measurements was performed to find the optimal setup of the P, I, D parameters, together with an analysis of the velocity error signal. The shape and characteristics of the velocity error signal are analyzed in detail. Remote applications for controlling and monitoring the PID system from a computer or a smartphone were also developed. With the best setup of the P, I, D parameters, a calibration spectrum of an α-Fe sample with an average nonlinearity of the velocity scale below 0.08% was collected. Furthermore, a spectral line width below 0.30 mm/s was observed. A powerful and comprehensive velocity driving system was thus designed.
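
    A minimal sketch of the discrete PID law at the heart of such a velocity feedback loop; the gains, loop period, and toy motor model are illustrative assumptions (the real controller runs as a deterministic LabVIEW/FPGA task):

    ```python
    class DiscretePID:
        """Textbook discrete PID law: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""

        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # track a reference velocity; gains and loop period are assumed values
    pid = DiscretePID(kp=0.8, ki=20.0, kd=0.002, dt=1e-4)
    velocity = 0.0
    for step in range(1000):
        drive = pid.update(setpoint=1.0, measurement=velocity)
        velocity += 0.05 * (drive - velocity)   # crude stand-in for the motor
    ```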

  3. A Prototype Regional GSI-based EnKF-Variational Hybrid Data Assimilation System for the Rapid Refresh Forecasting System: Dual-Resolution Implementation and Testing Results

    NASA Astrophysics Data System (ADS)

    Pan, Yujie; Xue, Ming; Zhu, Kefeng; Wang, Mingjun

    2018-05-01

    A dual-resolution (DR) version of a regional ensemble Kalman filter (EnKF)-3D ensemble variational (3DEnVar) coupled hybrid data assimilation system is implemented as a prototype for the operational Rapid Refresh forecasting system. The DR 3DEnVar system combines a high-resolution (HR) deterministic background forecast with lower-resolution (LR) EnKF ensemble perturbations used for flow-dependent background error covariance to produce a HR analysis. The computational cost is substantially reduced by running the ensemble forecasts and EnKF analyses at LR. The DR 3DEnVar system is tested with 3-h cycles over a 9-day period using a 40/~13-km grid spacing combination. The HR forecasts from the DR hybrid analyses are compared with forecasts launched from HR Gridpoint Statistical Interpolation (GSI) 3D variational (3DVar) analyses, and single LR hybrid analyses interpolated to the HR grid. With the DR 3DEnVar system, a 90% weight for the ensemble covariance yields the lowest forecast errors and the DR hybrid system clearly outperforms the HR GSI 3DVar. Humidity and wind forecasts are also better than those launched from interpolated LR hybrid analyses, but the temperature forecasts are slightly worse. The humidity forecasts are improved most. For precipitation forecasts, the DR 3DEnVar always outperforms HR GSI 3DVar. It also outperforms the LR 3DEnVar, except for the initial forecast period and lower thresholds.
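
    The essence of such a hybrid scheme is a weighted blend of a static background-error covariance with the flow-dependent ensemble covariance. A schematic numpy illustration of the blend (operational 3DEnVar implements it implicitly via extended control variables and localization rather than forming explicit matrices):

    ```python
    import numpy as np

    def hybrid_covariance(B_static, ensemble, w_ens=0.9):
        """Blend a static background-error covariance with the sample
        covariance of ensemble perturbations; w_ens = 0.9 mirrors the 90%
        ensemble weight the study found optimal."""
        X = ensemble - ensemble.mean(axis=0)          # perturbations, (m, n)
        P_ens = X.T @ X / (ensemble.shape[0] - 1)     # sample covariance, (n, n)
        return (1.0 - w_ens) * B_static + w_ens * P_ens

    # toy state of dimension 5 with a 20-member ensemble
    rng = np.random.default_rng(1)
    B = hybrid_covariance(np.eye(5), rng.normal(size=(20, 5)))
    ```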

  4. Dynamical Epidemic Suppression Using Stochastic Prediction and Control

    DTIC Science & Technology

    2004-10-28

    initial probability density function (PDF), p: D ⊂ R² → R, is defined by the stochastic Frobenius-Perron ... For deterministic systems, normal methods of ... induced chaos. To analyze the qualitative change, we apply the technique of the stochastic Frobenius-Perron operator [L. Billings et al., Phys. Rev. Lett. ...] ... transition matrix describing the probability of transport from one region of phase space to another, which approximates the stochastic Frobenius-Perron

  5. Results of a Neutronic Simulation of HTR-Proteus Core 4.2 using PEBBED and other INL Reactor Physics Tools: FY-09 Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hans D. Gougar

    The Idaho National Laboratory’s deterministic neutronics analysis codes and methods were applied to the computation of the core multiplication factor of the HTR-Proteus pebble bed reactor critical facility. A combination of unit cell calculations (COMBINE-PEBDAN), 1-D discrete ordinates transport (SCAMP), and nodal diffusion calculations (PEBBED) were employed to yield keff and flux profiles. Preliminary results indicate that these tools, as currently configured and used, do not yield satisfactory estimates of keff. If control rods are not modeled, these methods can deliver much better agreement with experimental core eigenvalues which suggests that development efforts should focus on modeling control rod and other absorber regions. Under some assumptions and in 1D subcore analyses, diffusion theory agrees well with transport. This suggests that developments in specific areas can produce a viable core simulation approach. Some corrections have been identified and can be further developed, specifically: treatment of the upper void region, treatment of inter-pebble streaming, and explicit (multiscale) transport modeling of TRISO fuel particles as a first step in cross section generation. Until corrections are made that yield better agreement with experiment, conclusions from core design and burnup analyses should be regarded as qualitative and not benchmark quality.

  6. Predicting morphological changes DS New Naga-Hammadi Barrage for extreme Nile flood flows: A Monte Carlo analysis

    PubMed Central

    Sattar, Ahmed M.A.; Raslan, Yasser M.

    2013-01-01

    While construction of the Aswan High Dam (AHD) has stopped recurrent flooding events, the River Nile is still subject to low-intensity flood waves resulting from controlled release of water from the dam reservoir. Analysis of flow released from the New Naga-Hammadi Barrage, which is located 3460 km downstream of AHD, indicated an increase in the magnitude of floods released from the barrage in the past 10 years. A 2D numerical mobile-bed model is utilized to investigate the possible morphological changes downstream of the Naga-Hammadi Barrage under possible higher flood releases. Monte Carlo simulation analysis (MCS) is applied to the deterministic results of the 2D model to account for and assess the uncertainty of sediment parameters and formulations, in addition to the scarcity of field measurements. Results showed that the predicted volume of erosion yielded the highest uncertainty and variation from the deterministic run, while navigation velocity yielded the least uncertainty. Furthermore, the error budget method is used to rank various sediment parameters by their contribution to the total prediction uncertainty. It is found that suspended sediment contributed to output uncertainty more than the other sediment parameters, followed by bed load at roughly an order of magnitude less. PMID:25685476

  7. Coupling hydrologic and hydraulic models to take into consideration retention effects on extreme peak discharges in Switzerland

    NASA Astrophysics Data System (ADS)

    Felder, Guido; Zischg, Andreas; Weingartner, Rolf

    2015-04-01

    Estimating peak discharges with very low probabilities is still accompanied by large uncertainties. Common estimation methods are usually based on extreme value statistics applied to observed time series or to hydrological model outputs. However, such methods assume the system to be stationary and do not specifically consider non-stationary effects. Observed time series may exclude events where peak discharge is damped by retention effects, as this process does not occur until specific thresholds, possibly beyond those of the highest measured event, are exceeded. Hydrological models can be complemented and parameterized with non-linear functions. However, in such cases calibration depends on observed data and non-stationary behaviour is not deterministically calculated. Our study discusses the option of considering retention effects on extreme peak discharges by coupling hydrological and hydraulic models. This possibility is tested by forcing the semi-distributed deterministic hydrological model PREVAH with randomly generated, physically plausible extreme precipitation patterns. The resulting hydrographs are then used to force the hydraulic model BASEMENT-ETH (riverbed in 1D, potential inundation areas in 2D). The procedure ensures that the estimated extreme peak discharge does not exceed the physical limit given by the riverbed capacity and that the dampening effect of inundation processes on peak discharge is considered.

  8. Predicting morphological changes DS New Naga-Hammadi Barrage for extreme Nile flood flows: A Monte Carlo analysis.

    PubMed

    Sattar, Ahmed M A; Raslan, Yasser M

    2014-01-01

    While construction of the Aswan High Dam (AHD) has stopped recurrent flooding events, the River Nile is still subject to low-intensity flood waves resulting from controlled release of water from the dam reservoir. Analysis of flow released from the New Naga-Hammadi Barrage, which is located 3460 km downstream of AHD, indicated an increase in the magnitude of floods released from the barrage in the past 10 years. A 2D numerical mobile-bed model is utilized to investigate the possible morphological changes downstream of the Naga-Hammadi Barrage under possible higher flood releases. Monte Carlo simulation analysis (MCS) is applied to the deterministic results of the 2D model to account for and assess the uncertainty of sediment parameters and formulations, in addition to the scarcity of field measurements. Results showed that the predicted volume of erosion yielded the highest uncertainty and variation from the deterministic run, while navigation velocity yielded the least uncertainty. Furthermore, the error budget method is used to rank various sediment parameters by their contribution to the total prediction uncertainty. It is found that suspended sediment contributed to output uncertainty more than the other sediment parameters, followed by bed load at roughly an order of magnitude less.

  9. Sensitivity Tests Between Vs30 and Detailed Shear Wave Profiles Using 1D and 3D Site Response Analysis, Las Vegas Valley

    NASA Astrophysics Data System (ADS)

    West, Loyd Travis

    Site characterization is an essential aspect of hazard analysis, and the time-averaged shear-wave velocity to 30 m depth ("Vs30") has become a critical site-class parameter in site-specific and probabilistic hazard analysis. Yet the general applicability of Vs30 can be ambiguous, and much debate and research surround its application. In 2007, in part to mitigate the uncertainty associated with the use of Vs30 in Las Vegas Valley, the Clark County Building Department (CCBD), in collaboration with the Nevada System of Higher Education (NSHE), embarked on an endeavor to map Vs30 using a geophysical methods approach for a site-class microzonation map of over 500 square miles (1500 km2) in southern Nevada. The resulting dataset, described by Pancha et al. (2017), contains over 10,700 1D shear-wave-velocity-depth profiles (SWVP) that constitute a rich database of 3D shear-wave velocity structure that is both laterally and vertically heterogeneous. This study capitalizes on the uniquely detailed and spatially dense CCBD database to carry out sensitivity tests on the detailed shear-wave velocity profiles and on Vs30, utilizing 1D and 3D site-response approaches. The sensitivity tests are based on the 1D response of a single-degree-of-freedom oscillator and on 3D finite-difference deterministic simulations up to 15 Hz frequency using similar model parameters. Results demonstrate that the detailed SWVP amplify ground motions by roughly 50% over the simple Vs30 models above 4.6 Hz frequency. Numerical simulations also depict significant lateral resonance, focusing, and scattering of seismic energy attributed to the 3D small-scale heterogeneities of the shear-wave velocity profiles, which result in a 70% increase in peak ground velocity. Additionally, PGV ratio maps clearly establish that the increased amplification from the detailed SWVPs is consistent throughout the model space. As a corollary, this study demonstrates the use of finite-difference numerical methods to simulate ground motions at high frequencies, up to 15 Hz.
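
    Vs30 itself is a simple quantity: the time-averaged shear-wave velocity over the top 30 m, Vs30 = 30 / Σ(h_i / v_i). A small sketch of the standard computation from a layered profile:

    ```python
    def vs30(thicknesses_m, velocities_mps):
        """Time-averaged shear-wave velocity over the top 30 m, with the
        profile truncated at 30 m depth."""
        depth, travel_time = 0.0, 0.0
        for h, v in zip(thicknesses_m, velocities_mps):
            h = min(h, 30.0 - depth)      # clip the layer at 30 m depth
            if h <= 0:
                break
            travel_time += h / v
            depth += h
        return 30.0 / travel_time

    # 10 m at 200 m/s over 40 m at 400 m/s: Vs30 = 30 / (0.05 + 0.05) = 300 m/s
    assert abs(vs30([10, 40], [200, 400]) - 300.0) < 1e-9
    ```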

  10. A Deterministic Interfacial Cyclic Oxidation Spalling Model. Part 1; Model Development and Parametric Response

    NASA Technical Reports Server (NTRS)

    Smialek, James L.

    2002-01-01

    An equation has been developed to model the iterative scale growth and spalling process that occurs during cyclic oxidation of high temperature materials. Parabolic scale growth and spalling of a constant surface area fraction have been assumed. Interfacial spallation of only the thickest segments was also postulated. This simplicity allowed for representation by a simple deterministic summation series. Inputs are the parabolic growth rate constant, the spall area fraction, oxide stoichiometry, and cycle duration. Outputs include the net weight change behavior, as well as the total amount of oxygen and metal consumed, the total amount of oxide spalled, and the mass fraction of oxide spalled. The outputs all follow typical well-behaved trends with the inputs and are in good agreement with previous interfacial models.
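
    A minimal sketch of this kind of deterministic growth-and-spall bookkeeping; the segment discretization and the weight-change accounting below are illustrative assumptions, not a reproduction of the paper's summation series:

    ```python
    import numpy as np

    def cyclic_oxidation(kp, spall_frac, f_oxygen, cycle_h, n_cycles):
        """Deterministic cyclic-oxidation sketch: all surface segments grow
        parabolically each cycle, then the thickest segment (a constant area
        fraction of the surface) spalls at the interface.

        Returns the net specimen weight change per cycle: oxygen retained in
        the adherent scale minus metal lost in the spalled scale.
        """
        n_seg = int(round(1.0 / spall_frac))  # one segment spalls per cycle
        ages = np.zeros(n_seg)                # oxidation time per segment (h)
        spalled = 0.0                         # cumulative spalled scale mass
        f_metal = 1.0 - f_oxygen              # metal mass fraction of oxide
        history = []
        for _ in range(n_cycles):
            ages += cycle_h                   # every segment grows this cycle
            mass = np.sqrt(kp * ages)         # parabolic scale mass per area
            oldest = np.argmax(ages)          # interfacial spall of thickest
            spalled += mass[oldest] * spall_frac
            ages[oldest] = 0.0
            retained = np.sqrt(kp * ages).sum() * spall_frac
            history.append(f_oxygen * retained - f_metal * spalled)
        return history

    # e.g. an alumina former: oxygen mass fraction of Al2O3 is 48/102 ~ 0.47
    curve = cyclic_oxidation(kp=0.01, spall_frac=0.1, f_oxygen=0.47,
                             cycle_h=1.0, n_cycles=300)
    ```

    The characteristic rise-then-fall of the net weight change emerges from exactly this interplay of parabolic gain and interfacial loss.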

  11. The relationship between stochastic and deterministic quasi-steady state approximations.

    PubMed

    Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R

    2015-11-23

    The quasi steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady-state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of QSSA, and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using their deterministic counterparts providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
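
    For concreteness, the deterministic QSSA replaces a full mass-action model with a reduced one built from non-elementary rates. A small sketch for the classic Michaelis-Menten system (the rate constants are illustrative, not taken from the paper):

    ```python
    import numpy as np

    k1, km1, k2 = 10.0, 1.0, 1.0      # illustrative rate constants
    E0, S0 = 1.0, 10.0                # total enzyme, initial substrate
    Km = (km1 + k2) / k1              # Michaelis constant

    def full_rhs(y):
        """Full mass-action model: S + E <-> C -> P + E, y = (S, C)."""
        s, c = y
        return np.array([-k1 * s * (E0 - c) + km1 * c,
                          k1 * s * (E0 - c) - (km1 + k2) * c])

    def qssa_rhs(s):
        """Reduced model after the QSSA: dS/dt = -k2*E0*S / (Km + S)."""
        return -k2 * E0 * s / (Km + s)

    # forward-Euler comparison of the full and reduced descriptions
    dt, T = 1e-4, 5.0
    y, s = np.array([S0, 0.0]), S0
    for _ in range(int(T / dt)):
        y = y + dt * full_rhs(y)
        s = s + dt * qssa_rhs(s)
    print(f"substrate at t={T}: full={y[0]:.3f}, QSSA={s:.3f}")
    ```

    The stochastic analogue would use the reduced propensity k2*E0*S/(Km + S) in a Gillespie simulation, which is precisely the step whose validity the paper examines.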

  12. Evaluation of the DRAGON code for VHTR design analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taiwo, T. A.; Kim, T. K.; Nuclear Engineering Division

    2006-01-12

    This letter report summarizes three activities that were undertaken in FY 2005 to gather information on the DRAGON code and to perform limited evaluations of the code performance when used in the analysis of the Very High Temperature Reactor (VHTR) designs. These activities include: (1) Use of the code to model the fuel elements of the helium-cooled and liquid-salt-cooled VHTR designs. Results were compared to those from another deterministic lattice code (WIMS8) and a Monte Carlo code (MCNP). (2) The preliminary assessment of the nuclear data library currently used with the code and libraries that have been provided by the IAEA WIMS-D4 Library Update Project (WLUP). (3) A DRAGON workshop held to discuss the code capabilities for modeling the VHTR.

  13. Tiedeman's Approach to Career Development.

    ERIC Educational Resources Information Center

    Harren, Vincent A.

    Basic to Tiedeman's approach to career development and decision making is the assumption that one is responsible for one's own behavior because one has the capacity for choice and lives in a world which is not deterministic. Tiedeman, a cognitive-developmental theorist, views continuity of development as internal or psychological while…

  14. The "Chaos" Pattern in Piaget's Theory of Cognitive Development.

    ERIC Educational Resources Information Center

    Lindsay, Jean S.

    Piaget's theory of the cognitive development of the child is related to the recently developed non-linear "chaos" model. The term "chaos" refers to the tendency of dynamical, non-linear systems toward irregular, sometimes unpredictable, deterministic behavior. Piaget identified this same pattern in his model of cognitive…

  15. Exact Dynamics via Poisson Process: a unifying Monte Carlo paradigm

    NASA Astrophysics Data System (ADS)

    Gubernatis, James

    2014-03-01

    A common computational task is solving a set of ordinary differential equations (o.d.e.'s). A little-known theorem says that the solution of any set of o.d.e.'s is given exactly by the expectation value, over a set of arbitrary Poisson processes, of a particular function of the elements of the matrix that defines the o.d.e.'s. The theorem thus provides a new starting point to develop real- and imaginary-time continuous-time solvers for quantum Monte Carlo algorithms, and several simple observations enable various quantum Monte Carlo techniques and variance reduction methods to transfer to a new context. I will state the theorem, note a transformation to a very simple computational scheme, and illustrate the use of some techniques from the directed-loop algorithm in the context of the wavefunction Monte Carlo method that is used to solve the Lindblad master equation for the dynamics of open quantum systems. I will end by noting that since the theorem does not require the o.d.e.'s to come from quantum mechanics, it also enables the transfer of continuous-time methods from quantum Monte Carlo to the simulation of various classical equations of motion heretofore only solved deterministically.
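
    One way to see the flavor of such a result for a linear system dx/dt = Ax: if N ~ Poisson(lam*t), then E[(I + A/lam)^N] = exp(t*A), so the solution is exactly an expectation over a Poisson process. A numpy illustration of this special linear case (the paper's theorem is more general; this sketch is only meant to make the idea concrete):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def poisson_expm_apply(A, x0, t, lam=20.0, n_samples=20000):
        """Monte Carlo estimate of exp(t*A) @ x0 via the exact identity
        E[(I + A/lam)^N] = exp(t*A), where N ~ Poisson(lam*t)."""
        M = np.eye(A.shape[0]) + A / lam
        acc = np.zeros_like(x0, dtype=float)
        for n in rng.poisson(lam * t, size=n_samples):
            v = x0.astype(float)
            for _ in range(n):
                v = M @ v
            acc += v
        return acc / n_samples

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # rotation generator
    x0 = np.array([1.0, 0.0])
    # exact solution is [cos t, -sin t]; for t = 1: ~[0.540, -0.841]
    print(poisson_expm_apply(A, x0, t=1.0))
    ```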

  16. Stochastic spectral projection of electrochemical thermal model for lithium-ion cell state estimation

    NASA Astrophysics Data System (ADS)

    Tagade, Piyush; Hariharan, Krishnan S.; Kolake, Subramanya Mayya; Song, Taewon; Oh, Dukjin

    2017-03-01

    A novel approach for integrating a pseudo-two-dimensional electrochemical thermal (P2D-ECT) model and a data assimilation algorithm is presented for lithium-ion cell state estimation. This approach refrains from making any simplifications in the P2D-ECT model while making it amenable to online state estimation. Though the P2D-ECT model is deterministic, uncertainty in the initial states induces stochasticity. This stochasticity is resolved by spectrally projecting the stochastic P2D-ECT model on a set of orthogonal multivariate Hermite polynomials. Volume averaging in the stochastic dimensions is proposed for efficient numerical solution of the resultant model. A state estimation framework is developed using a transformation of the orthogonal basis to assimilate the measurables with this system of equations. Effectiveness of the proposed method is first demonstrated by assimilating the cell voltage and temperature data generated using a synthetic test bed. This validated method is used with the experimentally observed cell voltage and temperature data for state estimation at different operating conditions and drive cycle protocols. The results show increased prediction accuracy when the data is assimilated every 30 s. The high accuracy of the estimated states is exploited to infer temperature-dependent behavior of the lithium-ion cell.

  17. Cross-frequency and band-averaged response variance prediction in the hybrid deterministic-statistical energy analysis method

    NASA Astrophysics Data System (ADS)

    Reynders, Edwin P. B.; Langley, Robin S.

    2018-08-01

    The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.

  18. RDS - A systematic approach towards system thermal hydraulics input code development for a comprehensive deterministic safety analysis

    NASA Astrophysics Data System (ADS)

    Salim, Mohd Faiz; Roslan, Ridha; Ibrahim, Mohd Rizal Mamat @

    2014-02-01

    Deterministic Safety Analysis (DSA) is one of the mandatory requirements of the Nuclear Power Plant licensing process, with the aim of ensuring safety compliance with relevant regulatory acceptance criteria. DSA is a technique whereby a set of conservative deterministic rules and requirements are applied to the design and operation of facilities or activities. Computer codes are normally used to assist in performing all required analysis under DSA. To ensure a comprehensive analysis, the conduct of DSA should follow a systematic approach. One of the methodologies proposed is the Standardized and Consolidated Reference Experimental (and Calculated) Database (SCRED), developed by the University of Pisa. Based on this methodology, the use of a Reference Data Set (RDS) as a prerequisite reference document for developing the input nodalization was proposed. This paper describes the application of RDS with the purpose of assessing its effectiveness. Two RDS documents were developed, for the LOBI-MOD2 Integral Test Facility and for the associated Test A1-83. Data and information from various reports and drawings were referenced in preparing the RDS. The results showed that developing the RDS made it possible to consolidate all relevant information in a single document. This is beneficial as it enables preservation of information, promotes quality assurance, allows traceability, facilitates continuous improvement, promotes the resolution of contradictions, and assists in developing the thermal hydraulic input regardless of which code is selected. However, some disadvantages were also recognized, such as the need for experience in making engineering judgments, the language barrier in accessing foreign information, and limited resources. Some possible improvements are suggested to overcome these challenges.

  19. Deterministic Walks with Choice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.

    2014-01-10

    This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.

  20. Development of TIF based figuring algorithm for deterministic pitch tool polishing

    NASA Astrophysics Data System (ADS)

    Yi, Hyun-Su; Kim, Sug-Whan; Yang, Ho-Soon; Lee, Yun-Woo

    2007-12-01

    Pitch is perhaps the oldest material used for optical polishing, leaving superior surface texture, and has been used widely on the optics shop floor. However, because of its poorly predictable removal characteristics, pitch tool polishing has rarely been analysed quantitatively, and many optics shops rely heavily on the optician's "feel" even today. In order to bring a degree of process controllability to pitch tool polishing, we added motorized tool motions to a conventional Draper-type polishing machine and modelled the tool path in the absolute machine coordinate system. We then produced a number of Tool Influence Functions (TIFs), both from an analytical model and from a series of experimental polishing runs using the pitch tool. The theoretical TIFs agreed well with the experimental TIFs, to a profile accuracy of 79% in terms of shape. A surface figuring algorithm was then developed in-house utilizing both theoretical and experimental TIFs. We are currently undertaking a series of trial figuring experiments to prove the performance of the polishing algorithm, and the early results indicate that highly deterministic material removal control with the pitch tool can be achieved to a certain level of form error. The machine renovation, the TIF theory and its experimental confirmation, and the figuring simulation results are reported, together with implications for deterministic polishing.
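
    In TIF-based figuring, the predicted removal is the convolution of the tool influence function with a dwell-time map, and the figuring algorithm solves the inverse problem. A schematic 1D illustration; the Gaussian TIF and the simple iterative dwell solution below are assumptions for the sketch, not the authors' algorithm:

    ```python
    import numpy as np

    x = np.linspace(-50, 50, 501)                    # mm across the part
    tif = 1e-6 * np.exp(-x**2 / (2 * 5.0**2))        # removal (mm) per visit
    error = 2e-3 * np.exp(-x**2 / (2 * 20.0**2))     # figure error to remove (mm)

    def predicted_removal(dwell):
        """Removal is the convolution of dwell (visits per position) with the TIF."""
        return np.convolve(dwell, tif, mode="same")

    # simple iterative (Van Cittert-style) dwell-time solution
    dwell = np.zeros_like(x)
    for _ in range(200):
        dwell += (error - predicted_removal(dwell)) / tif.sum()
        dwell = np.clip(dwell, 0.0, None)            # dwell cannot be negative
    residual_rms = np.sqrt(np.mean((error - predicted_removal(dwell))**2))
    ```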

  1. Identifying Corresponding Patches in SAR and Optical Images With a Pseudo-Siamese CNN

    NASA Astrophysics Data System (ADS)

    Hughes, Lloyd H.; Schmitt, Michael; Mou, Lichao; Wang, Yuanyuan; Zhu, Xiao Xiang

    2018-05-01

    In this letter, we propose a pseudo-siamese convolutional neural network (CNN) architecture that makes it possible to solve the task of identifying corresponding patches in very-high-resolution (VHR) optical and synthetic aperture radar (SAR) remote sensing imagery. Using eight convolutional layers in each of two parallel network streams, a fully connected layer for the fusion of the features learned in each stream, and a loss function based on binary cross-entropy, we obtain a binary indication of whether two patches correspond. The network is trained and tested on an automatically generated dataset that is based on a deterministic alignment of SAR and optical imagery via previously reconstructed and subsequently co-registered 3D point clouds. The satellite images, from which the patches comprising our dataset are extracted, show a complex urban scene containing many elevated objects (i.e., buildings), thus providing one of the most difficult experimental environments. The achieved results show that the network is able to predict corresponding patches with high accuracy, thus indicating great potential for further development towards a generalized multi-sensor key-point matching procedure. Index terms: synthetic aperture radar (SAR), optical imagery, data fusion, deep learning, convolutional neural networks (CNN), image matching, deep matching
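
    A minimal PyTorch-style sketch of the pseudo-siamese idea: two convolutional streams with identical structure but unshared weights, feature fusion in a fully connected layer, and a binary cross-entropy objective. The layer widths and pooling are illustrative assumptions, not the paper's exact architecture:

    ```python
    import torch
    import torch.nn as nn

    def make_stream(in_ch=1):
        """One stream; 'pseudo-siamese' means the SAR and optical streams
        share structure but NOT weights."""
        layers, ch = [], in_ch
        for out_ch in (32, 32, 64, 64, 128, 128, 128, 128):  # eight conv layers
            layers += [nn.Conv2d(ch, out_ch, 3, padding=1), nn.ReLU()]
            ch = out_ch
        layers += [nn.AdaptiveAvgPool2d(4)]
        return nn.Sequential(*layers)

    class PseudoSiamese(nn.Module):
        def __init__(self):
            super().__init__()
            self.sar_stream = make_stream()
            self.opt_stream = make_stream()
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(2 * 128 * 4 * 4, 512), nn.ReLU(),
                nn.Linear(512, 1))       # logit: do the patches correspond?

        def forward(self, sar, opt):
            f = torch.cat([self.sar_stream(sar), self.opt_stream(opt)], dim=1)
            return self.head(f)

    model = PseudoSiamese()
    loss_fn = nn.BCEWithLogitsLoss()     # binary cross-entropy on the logit
    sar, opt = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
    labels = torch.randint(0, 2, (8, 1)).float()
    loss = loss_fn(model(sar, opt), labels)
    ```

    Unshared weights are the natural choice here because SAR and optical patches have very different statistics, so a single shared encoder would be a poor fit.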

  2. Diffusion-Based Model for Synaptic Molecular Communication Channel.

    PubMed

    Khan, Tooba; Bilgin, Bilgesu A; Akan, Ozgur B

    2017-06-01

    Computational methods have been extensively used to understand the underlying dynamics of molecular communication methods employed by nature. One very effective and popular approach is to utilize a Monte Carlo simulation. Although it is very reliable, this method can have a very high computational cost, which in some cases renders the simulation impractical. Therefore, in this paper, for the special case of an excitatory synaptic molecular communication channel, we present a novel mathematical model for the diffusion and binding of neurotransmitters that takes into account the effects of synaptic geometry in 3-D space and re-absorption of neurotransmitters by the transmitting neuron. Based on this model we develop a fast deterministic algorithm, which calculates expected value of the output of this channel, namely, the amplitude of excitatory postsynaptic potential (EPSP), for given synaptic parameters. We validate our algorithm by a Monte Carlo simulation, which shows total agreement between the results of the two methods. Finally, we utilize our model to quantify the effects of variation in synaptic parameters, such as position of release site, receptor density, size of postsynaptic density, diffusion coefficient, uptake probability, and number of neurotransmitters in a vesicle, on maximum number of bound receptors that directly affect the peak amplitude of EPSP.

  3. DCBRP: a deterministic chain-based routing protocol for wireless sensor networks.

    PubMed

    Marhoon, Haydar Abdulameer; Mahmuddin, M; Nor, Shahrudin Awang

    2016-01-01

    Wireless sensor networks (WSNs) are a promising area for both researchers and industry because of their various applications. The sensor node expends the majority of its energy on communication with other nodes. Therefore, the routing protocol plays an important role in delivering network data while minimizing energy consumption as much as possible. The chain-based routing approach is superior to other approaches. However, chain-based routing protocols still expend substantial energy in the Chain Head (CH) node, and they also suffer from bottleneck issues. A novel routing protocol, the Deterministic Chain-Based Routing Protocol (DCBRP), is proposed. DCBRP consists of three mechanisms: a Backbone Construction Mechanism, a Chain Head Selection (CHS) mechanism, and a Next Hop Connection Mechanism. The CHS mechanism is presented in detail, and it is evaluated through comparison with CCM and TSCP using the ns-3 simulator. The results show that DCBRP outperforms both CCM and TSCP in terms of end-to-end delay by 19.3 and 65%, respectively, CH energy consumption by 18.3 and 23.0%, respectively, overall energy consumption by 23.7 and 31.4%, respectively, network lifetime by 22 and 38%, respectively, and the energy*delay metric by 44.85 and 77.54%, respectively. DCBRP can be used in any deterministic node deployment application, such as smart cities or smart agriculture, to reduce energy depletion and prolong the lifetime of WSNs.

  4. Using a Remotely Piloted Aircraft System (RPAS) to analyze the stability of a natural rock slope

    NASA Astrophysics Data System (ADS)

    Salvini, Riccardo; Esposito, Giuseppe; Mastrorocco, Giovanni; Seddaiu, Marcello

    2016-04-01

    This paper describes the application of a rotary-wing RPAS for monitoring the stability of a natural rock slope in the municipality of Vecchiano (Pisa, Italy). The slope under investigation is approximately oriented NNW-SSE and has a length of about 320 m; elevation ranges from about 7 to 80 m a.s.l. The hill consists of stratified limestone, in places densely fractured, with dip directions predominantly oriented normal to the slope. Fracture traces are present in variable lengths, from decimetres to metres, and penetrate into the rock face to depths that are difficult to estimate, often exceeding one metre. The intersection of the different fracture systems with the slope surface generates rocky blocks and wedges of variable size that may be subject to gravitational instability (under varying hydraulic and dynamic conditions). Geometrical and structural information about the rock mass, necessary to perform the slope stability analysis, was obtained in this work from geo-referenced 3D point clouds acquired using photogrammetric and laser scanning techniques. In particular, terrestrial laser scanning was carried out from two different points of view using a Leica Scanstation2. The laser survey left many shadows in the data due to the presence of vegetation in the lower parts of the slope, limiting the feasibility of the geo-structural survey. To overcome this limitation, we utilized a rotary-wing Aibotix Aibot X6 RPAS equipped with a Nikon D3200 camera. The drone flights were executed in manual mode, and the images were acquired, according to the characteristics of the outcrops, under different acquisition angles. Furthermore, photos were captured very close to the slope face (a few metres), allowing a dense 3D point cloud (about 80 million points) to be produced through image processing. A topographic survey was carried out in order to guarantee the necessary spatial accuracy for the exterior orientation of the images. The coordinates of the GCPs were calculated through post-processing of data collected using two GPS receivers, operating in static mode, and a Total Station. The photogrammetric processing of the image blocks allowed us to create the 3D point cloud, DTM, orthophoto, and 3D textured model with a high level of cartographic detail. Discontinuities were deterministically characterized in terms of attitude, persistence, and spacing, and the main discontinuity sets were identified through a density analysis of attitudes in stereographic projection. In addition, the size and shape of potentially unstable blocks identified along the rock slope were measured. Finally, using additional data from traditional engineering-geological surveys executed on accessible outcrops, the kinematic and dynamic stability analysis of the rock slope was performed. Results from this step have indicated the deterministic safety factors of rock blocks and wedges, and will be used by local Authorities to plan protection works. Results from this application show the great advantage of modern RPAS, which can be successfully applied to the analysis of sub-vertical rocky slopes, especially in areas either difficult to access with traditional techniques or masked by vegetation. KEY WORDS: 3D point cloud, RPAS photogrammetry, Terrestrial laser scanning, Rock slope, Fracture mapping, Stability analysis

  5. Field evaluation of an avian risk assessment model

    USGS Publications Warehouse

    Vyas, N.B.; Spann, J.W.; Hulse, C.S.; Borges, S.L.; Bennett, R.S.; Torrez, M.; Williams, B.I.; Leffel, R.

    2006-01-01

    We conducted two laboratory subacute dietary toxicity tests and one outdoor subacute dietary toxicity test to determine the effectiveness of the U.S. Environmental Protection Agency's deterministic risk assessment model for evaluating the potential of adverse effects to birds in the field. We tested technical-grade diazinon and its DZN-50W (50% diazinon active ingredient wettable powder) formulation on Canada goose (Branta canadensis) goslings. Brain acetylcholinesterase activity was measured, and the feathers and skin, feet, and gastrointestinal contents were analyzed for diazinon residues. The dose-response curves showed that diazinon was significantly more toxic to goslings in the outdoor test than in the laboratory tests. The deterministic risk assessment method identified the potential for risk to birds in general, but the factors associated with extrapolating from the laboratory to the field, and from the laboratory test species to other species, resulted in the underestimation of risk to the goslings. The present study indicates that laboratory-based risk quotients should be interpreted with caution.

  6. Essentialism goes social: belief in social determinism as a component of psychological essentialism.

    PubMed

    Rangel, Ulrike; Keller, Johannes

    2011-06-01

    Individuals tend to explain the characteristics of others with reference to an underlying essence, a tendency that has been termed psychological essentialism. Drawing on current conceptualizations of essentialism as a fundamental mode of social thinking, and on prior studies investigating belief in genetic determinism (BGD) as a component of essentialism, we argue that BGD cannot constitute the sole basis of individuals' essentialist reasoning. Accordingly, we propose belief in social determinism (BSD) as a complementary component of essentialism, which relies on the belief that a person's essential character is shaped by social factors (e.g., upbringing, social background). We developed a scale to measure this social component of essentialism. Results of five correlational studies indicate that (a) BGD and BSD are largely independent, (b) BGD and BSD are related to important correlates of essentialist thinking (e.g., dispositionism, perceived group homogeneity), (c) BGD and BSD are associated with indicators of fundamental epistemic and ideological motives, and (d) the endorsement of each lay theory is associated with vital social-cognitive consequences (particularly stereotyping and prejudice). Two experimental studies examined the idea that the relationship between BSD and prejudice is bidirectional in nature. Study 6 reveals that rendering social-deterministic explanations salient results in increased levels of ingroup favoritism in individuals who chronically endorse BSD. Results of Study 7 show that priming of prejudice enhances endorsement of social-deterministic explanations particularly in persons habitually endorsing prejudiced attitudes.

  7. Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates

    DOEpatents

    Melechko, Anatoli V [Oak Ridge, TN; McKnight, Timothy E. , Guillorn, Michael A.; Ilic, Bojan [Ithaca, NY; Merkulov, Vladimir I [Knoxville, TN; Doktycz, Mitchel J [Knoxville, TN; Lowndes, Douglas H [Knoxville, TN; Simpson, Michael L [Knoxville, TN

    2011-05-17

    Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.

  8. Human brain detects short-time nonlinear predictability in the temporal fine structure of deterministic chaotic sounds

    NASA Astrophysics Data System (ADS)

    Itoh, Kosuke; Nakada, Tsutomu

    2013-04-01

    Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
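
    The spectrally matched stochastic controls used in studies of this kind are commonly built as phase-randomized surrogates, which preserve the power spectrum while destroying any deterministic temporal fine structure. A small sketch of that standard construction (assumed here; the paper's exact surrogate method may differ):

    ```python
    import numpy as np

    def phase_randomized_surrogate(x, rng=None):
        """Return a stochastic surrogate with the same power spectrum as x."""
        rng = rng or np.random.default_rng(0)
        X = np.fft.rfft(x)
        phases = rng.uniform(0.0, 2.0 * np.pi, size=X.shape)
        phases[0] = 0.0                    # keep the DC component real
        if len(x) % 2 == 0:
            phases[-1] = 0.0               # keep the Nyquist bin real
        return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))
    ```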

  9. A deterministic particle method for one-dimensional reaction-diffusion equations

    NASA Technical Reports Server (NTRS)

    Mascagni, Michael

    1995-01-01

    We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider explicit and implicit time-stepping methods for this system of ordinary differential equations, and we study Picard and Newton iterations for the solution of the implicit system. Next we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.
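
    The abstract's particle ODEs are not reproduced here, but the implicit time stepping it mentions is easy to sketch generically. Below is a minimal backward-Euler step solved by Picard (fixed-point) iteration for an arbitrary ODE system dy/dt = f(y); the reaction term used in the example is our illustrative stand-in, not the paper's system.

```python
import numpy as np

def backward_euler_picard(f, y0, dt, n_steps, tol=1e-10, max_iter=50):
    """Solve dy/dt = f(y) with backward Euler; the implicit equation
    y_{n+1} = y_n + dt * f(y_{n+1}) is solved by Picard (fixed-point) iteration."""
    y = np.asarray(y0, dtype=float)
    out = [y.copy()]
    for _ in range(n_steps):
        z = y.copy()                      # initial Picard guess: previous state
        for _ in range(max_iter):
            z_new = y + dt * f(z)
            if np.linalg.norm(z_new - z) < tol:
                z = z_new
                break
            z = z_new
        y = z
        out.append(y.copy())
    return np.array(out)

# Example: logistic reaction term u' = u(1 - u), a scalar stand-in for the
# reaction part of a reaction-diffusion system. Picard converges here because
# dt times the Lipschitz constant of f is well below 1.
traj = backward_euler_picard(lambda u: u * (1.0 - u), [0.1], dt=0.1, n_steps=100)
```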

  10. Forecasting risk along a river basin using a probabilistic and deterministic model for environmental risk assessment of effluents through ecotoxicological evaluation and GIS.

    PubMed

    Gutiérrez, Simón; Fernandez, Carlos; Barata, Carlos; Tarazona, José Vicente

    2009-12-20

    This work presents a computer model for Risk Assessment of Basins by Ecotoxicological Evaluation (RABETOX). The model is based on whole-effluent toxicity testing and water flows along a specific river basin. It is capable of estimating the risk along a river segment using deterministic and probabilistic approaches. The Henares River Basin was selected as a case study to demonstrate the importance of seasonal hydrological variations in Mediterranean regions. As model inputs, two different ecotoxicity tests (the miniaturized Daphnia magna acute test and the D. magna feeding test) were performed on grab samples from 5 wastewater treatment plant effluents. Also used as model inputs were flow data from the past 25 years, water velocity measurements, and precise distance measurements using Geographical Information Systems (GIS). The model was implemented in a spreadsheet and the results were interpreted and represented using GIS in order to facilitate risk communication. To better understand the bioassay results, the effluents were screened through SPME-GC/MS analysis. The deterministic model, run for each month of one calendar year, showed a significant seasonal variation of risk and revealed that September represents the worst-case scenario, with values up to 950 Risk Units. This classifies the entire area of study for the month of September as "sublethal significant risk for standard species". The probabilistic approach using Monte Carlo analysis was performed on 7 different forecast points distributed along the Henares River. A 0% probability of finding "low risk" was found at all forecast points, with a more than 50% probability of finding "potential risk for sensitive species". The values obtained through both the deterministic and probabilistic approaches reveal the presence of certain substances which might be causing sublethal effects in the aquatic species present in the Henares River.
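
    The probabilistic leg of such an assessment reduces to repeated sampling of uncertain inputs and counting threshold exceedances. The sketch below is illustrative only: the variable names, distributions, and risk-unit scaling are invented stand-ins, not RABETOX's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical inputs (for illustration only): effluent toxicity in toxic
# units and river flow, both treated as lognormal random variables.
toxicity_tu = rng.lognormal(mean=1.0, sigma=0.5, size=n)    # toxic units
effluent_flow = 0.2                                         # m^3/s, fixed
river_flow = rng.lognormal(mean=0.5, sigma=0.8, size=n)     # m^3/s

dilution = river_flow / effluent_flow
risk_units = toxicity_tu / dilution * 100.0                 # arbitrary scaling

# Probability of exceeding a risk threshold at one forecast point.
p_potential_risk = np.mean(risk_units > 1.0)
print(f"P(risk units > 1): {p_potential_risk:.2f}")
```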

  11. Controlled deterministic implantation by nanostencil lithography at the limit of ion-aperture straggling

    NASA Astrophysics Data System (ADS)

    Alves, A. D. C.; Newnham, J.; van Donkelaar, J. A.; Rubanov, S.; McCallum, J. C.; Jamieson, D. N.

    2013-04-01

    Solid state electronic devices fabricated in silicon employ many ion implantation steps in their fabrication. In nanoscale devices deterministic implants of dopant atoms with high spatial precision will be needed to overcome problems with statistical variations in device characteristics and to open new functionalities based on controlled quantum states of single atoms. However, to deterministically place a dopant atom with the required precision is a significant technological challenge. Here we address this challenge with a strategy based on stepped nanostencil lithography for the construction of arrays of single implanted atoms. We address the limit on spatial precision imposed by ion straggling in the nanostencil, fabricated with the readily available focused ion beam milling technique followed by Pt deposition. Two nanostencils have been fabricated: a 60 nm wide aperture in a 3 μm thick Si cantilever and a 30 nm wide aperture in a 200 nm thick Si3N4 membrane. The 30 nm wide aperture demonstrates the fabrication process for sub-50 nm apertures, while the 60 nm aperture was characterized with 500 keV He+ ion forward scattering to measure the effect of ion straggling in the collimator and deduce a model for its internal structure using the GEANT4 ion transport code. This model is then applied to simulate collimation of a 14 keV P+ ion beam in a 200 nm thick Si3N4 membrane nanostencil suitable for the implantation of donors in silicon. We simulate collimating apertures with widths in the range of 10-50 nm because we expect the onset of J-coupling in a device with 30 nm donor spacing. We find that straggling in the nanostencil produces mis-located implanted ions with a probability between 0.001 and 0.08, depending on the internal collimator profile and the alignment with the beam direction. This result is favourable for the rapid prototyping of a proof-of-principle device containing multiple deterministically implanted dopants.

  12. Comparison of probabilistic and deterministic fiber tracking of cranial nerves.

    PubMed

    Zolal, Amir; Sobottka, Stephan B; Podlesek, Dino; Linn, Jennifer; Rieger, Bernhard; Juratli, Tareq A; Schackert, Gabriele; Kitzler, Hagen H

    2017-09-01

    OBJECTIVE The depiction of cranial nerves (CNs) using diffusion tensor imaging (DTI) is of great interest in skull base tumor surgery, and DTI used with deterministic tracking methods has been reported previously. However, there are still no good methods for eliminating noise from the resulting depictions. The authors hypothesized that probabilistic tracking could lead to more accurate results, because it extracts information from the underlying data more efficiently. Moreover, the authors adapted a previously described technique for noise elimination using gradual threshold increases to probabilistic tracking. To evaluate the utility of this new approach, this work compares the gradual threshold increase method in probabilistic and deterministic tracking of CNs. METHODS Both tracking methods were used to depict CNs II, III, V, and the VII+VIII bundle. Depiction of 240 CNs was attempted with each of the above methods in 30 healthy subjects whose data were obtained from 2 public databases: the Kirby repository (KR) and the Human Connectome Project (HCP). Elimination of erroneous fibers was attempted by gradually increasing the respective thresholds (fractional anisotropy [FA] and probabilistic index of connectivity [PICo]). The results were compared with predefined ground truth images based on corresponding anatomical scans. Two label overlap measures (false-positive error and Dice similarity coefficient) were used to evaluate the success of both methods in depicting the CNs. Moreover, the differences between these parameters obtained from the KR and HCP (with higher angular resolution) databases were evaluated. Additionally, visualization of 10 CNs in 5 clinical cases was attempted with both methods and evaluated by comparing the depictions with intraoperative findings. RESULTS Maximum Dice similarity coefficients were significantly higher with probabilistic tracking (p < 0.001; Wilcoxon signed-rank test). The false-positive error of the last obtained depiction was also significantly lower in probabilistic than in deterministic tracking (p < 0.001). The HCP data yielded significantly better results in terms of the Dice coefficient in probabilistic tracking (p < 0.001, Mann-Whitney U-test) and in deterministic tracking (p = 0.02). The false-positive errors were smaller in HCP data in deterministic tracking (p < 0.001) and showed a strong trend toward significance in probabilistic tracking (p = 0.06). In the clinical cases, the probabilistic method visualized 7 of 10 attempted CNs accurately, compared with 3 correct depictions with deterministic tracking. CONCLUSIONS High angular resolution DTI scans are preferable for the DTI-based depiction of the cranial nerves. Probabilistic tracking with a gradual PICo threshold increase is more effective for this task than the previously described deterministic tracking with a gradual FA threshold increase, and may represent a useful method for depicting cranial nerves with DTI since it eliminates erroneous fibers without manual intervention.
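
    The two label overlap measures used here are simple voxel-set operations. A minimal sketch of the standard definitions (our own formulation, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(seg, truth):
    """Dice similarity coefficient between a binary tract depiction and ground truth."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    overlap = np.logical_and(seg, truth).sum()
    return 2.0 * overlap / (seg.sum() + truth.sum())

def false_positive_error(seg, truth):
    """Fraction of depicted voxels lying outside the ground-truth nerve."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    return np.logical_and(seg, ~truth).sum() / seg.sum()

# Gradual threshold increase: keep raising the tracking threshold (FA or PICo)
# and re-evaluating both measures until the false-positive error stops improving.
```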

  13. Stochastic and Deterministic Fluctuations in Stimulated Brillouin Scattering

    DTIC Science & Technology

    1990-10-01

    J. R. Ackerhalt et al., "Instabilities in the Propagation of Arbitrarily Polarized Counterpropagating Waves in a Nonlinear Kerr Medium," Optical ...; J. R. Ackerhalt and P. W. Milonni, "Instabilities and Chaos in the Polarizations of Counterpropagating Light Fields," Phys. Rev. Lett. 58, 2432 (1987); Plenum, New York (1990); D. J. Gauthier, M. S. Malcuit, A. L. Gaeta, and R. W. Boyd, "Polarization Bistability of Counterpropagating Beams," Phys. Rev. ...

  14. Experience Catalysts: How They Fill the Acquisition Experience Gap for the DoD

    DTIC Science & Technology

    2012-01-01

    Russ-Eft, 1997). Other studies have shown that "the more managers are trained in how to support and coach the skills their employees learn, the more ... efficacy, age, etc. (Bassi and Russ-Eft, 1997). Making a deterministic forecast is difficult. ... tap freely. Provide easy access to sources of expertise. It deepens their knowledge base, expands perspectives, and fuels their experience engine

  15. Inclusion of Multiple Functional Types in an Automaton Model of Bioturbation and Their Effects on Sediments Properties

    DTIC Science & Technology

    2007-09-30

    if the traditional models adequately parameterize and characterize the actual mixing. As an example of the application of this method, we have ... 2) Deterministic Modelling Results. As noted above, we are working on a stochastic method of modelling transient and short-lived tracers ... heterogeneity. RELATED PROJECTS: We have worked in collaboration with Peter Jumars (Univ. Maine), and his PhD student Kelley Dorgan, who are measuring

  16. Hybrid discrete ordinates and characteristics method for solving the linear Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Yi, Ce

    With the ability of computer hardware and software increasing rapidly, deterministic methods to solve the linear Boltzmann equation (LBE) have attracted attention for computational applications in both the nuclear engineering and medical physics fields. Among the various deterministic methods, the discrete ordinates method (SN) and the method of characteristics (MOC) are two of the most widely used. The SN method is the traditional approach to solving the LBE, valued for its stability and efficiency, while the MOC has advantages in treating complicated geometries. However, in 3-D problems requiring a dense discretization grid in phase space (i.e., a large number of spatial meshes, directions, or energy groups), both methods can suffer from the need for large amounts of memory and computation time. In our study, we developed a new hybrid algorithm by combining the two methods into one code, TITAN. The hybrid approach is specifically designed for application to problems containing low scattering regions. A new serial 3-D time-independent transport code has been developed. Under the hybrid approach, the preferred method can be applied in different regions (blocks) within the same problem model. Since the characteristics method is numerically more efficient in low scattering media, the hybrid approach uses a block-oriented characteristics solver in low scattering regions, and a block-oriented SN solver in the remainder of the physical model. In the TITAN code, a physical problem model is divided into a number of coarse meshes (blocks) in Cartesian geometry. Either the characteristics solver or the SN solver can be chosen to solve the LBE within a coarse mesh. A coarse mesh can be filled with fine meshes or characteristic rays depending on the solver assigned to the coarse mesh. Furthermore, with its object-oriented programming paradigm and layered code structure, TITAN allows different individual spatial meshing schemes and angular quadrature sets for each coarse mesh. Two quadrature types (level-symmetric and Legendre-Chebyshev quadrature) along with ordinate splitting techniques (rectangular splitting and PN-TN splitting) are implemented. In the SN solver, we apply a memory-efficient 'front-line' style paradigm to handle the fine-mesh interface fluxes. In the characteristics solver, we have developed a novel 'backward' ray-tracing approach, in which a bi-linear interpolation procedure is used on the incoming boundaries of a coarse mesh. A CPU-efficient scattering kernel is shared by both solvers within the source iteration scheme. Angular and spatial projection techniques are developed to transfer the angular fluxes on the interfaces of coarse meshes with different discretization grids. The performance of the hybrid algorithm is tested on a number of benchmark problems in both the nuclear engineering and medical physics fields, among them the Kobayashi benchmark problems and a computed tomography (CT) device model. We also developed an extra sweep procedure with a fictitious quadrature technique to calculate angular fluxes along directions of interest. The technique is applied to a single photon emission computed tomography (SPECT) phantom model to simulate SPECT projection images. The accuracy and efficiency of the TITAN code are demonstrated in these benchmarks along with its scalability. A modified version of the characteristics solver is integrated in the PENTRAN code and tested within the parallel engine of PENTRAN. The limitations of the hybrid algorithm are also studied.
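
    The source iteration scheme shared by both solvers can be illustrated in one dimension. The following is a minimal 1D slab analogue under simplifying assumptions (isotropic scattering, vacuum boundaries, diamond differencing), written as our own illustration and not TITAN's implementation:

```python
import numpy as np

def sn_source_iteration(nx=100, L=10.0, sigma_t=1.0, sigma_s=0.5, Q=1.0,
                        n_angles=8, tol=1e-8, max_iter=500):
    """1D slab discrete-ordinates (SN) solver: isotropic scattering,
    diamond differencing, vacuum boundaries, source iteration."""
    dx = L / nx
    mu, w = np.polynomial.legendre.leggauss(n_angles)   # weights sum to 2
    phi = np.zeros(nx)                                  # scalar flux
    for it in range(max_iter):
        q = 0.5 * (sigma_s * phi + Q)                   # isotropic emission density
        phi_new = np.zeros(nx)
        for m in range(n_angles):
            psi_in = 0.0                                # vacuum boundary
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            for i in cells:                             # transport sweep
                a = 2.0 * abs(mu[m]) / dx
                psi_c = (q[i] + a * psi_in) / (sigma_t + a)
                psi_in = 2.0 * psi_c - psi_in           # diamond difference closure
                phi_new[i] += w[m] * psi_c
        if np.max(np.abs(phi_new - phi)) < tol:
            return phi_new, it
        phi = phi_new
    return phi, max_iter

phi, iters = sn_source_iteration()
# Sanity check: deep inside the slab phi approaches the infinite-medium
# value Q / (sigma_t - sigma_s) = 2.0.
```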

  17. Multiple scattering of waves in random media: Application to the study of the city-site effect in Mexico City area.

    NASA Astrophysics Data System (ADS)

    Ishizawa, O. A.; Clouteau, D.

    2007-12-01

    The long duration, amplification, and spatial variability of the seismic records registered in Mexico City during the September 1985 earthquake cannot be explained by the soil velocity model alone. We try to explain these phenomena by studying the extent of the effect of the wave fields diffracted by buildings during an earthquake. The main question is whether the presence of a large number of buildings can significantly modify the seismic wave field. We are interested in the interaction between the incident wave field propagating in a stratified half-space and a large number of structures at the free surface, i.e., the coupled city-site effect. We study and characterize the seismic wave propagation regimes in a city using the theory of wave propagation in random media. In the coupled city-site system, the buildings are modeled as resonant scatterers uniformly distributed at the surface of a deterministic, horizontally layered elastic half-space representing the soil. Based on the mean-field and the field correlation equations, we build a theoretical model which takes into account the multiple scattering of seismic waves and allows us to describe the behavior of the coupled city-site system in a simple and rapid way. The results obtained for the configurationally averaged field quantities are validated against 3D results for the seismic response of a deterministic model. The numerical simulations of this model are computed with the MISS3D code, based on classical soil-structure interaction techniques and on a variational coupling between boundary integral equations for a layered soil and a modal finite element approach for the buildings. This work proposes a detailed numerical and theoretical analysis of the city-site interaction (CSI) in the Mexico City area. The principal parameters in the study of the CSI are the distribution of building resonant frequencies, the soil characteristics of the site, the urban density and position of the buildings in the city, as well as the type of incident wave. The main results of the theoretical and numerical models allow us to characterize the seismic movement in urban areas.

  18. Probabilistic inversion of electrical resistivity data from bench-scale experiments: On model parameterization for CO2 sequestration monitoring

    NASA Astrophysics Data System (ADS)

    Breen, S. J.; Lochbuehler, T.; Detwiler, R. L.; Linde, N.

    2013-12-01

    Electrical resistivity tomography (ERT) is a well-established method for geophysical characterization and has shown potential for monitoring geologic CO2 sequestration, due to its sensitivity to electrical resistivity contrasts generated by liquid/gas saturation variability. In contrast to deterministic ERT inversion approaches, probabilistic inversion provides not only a single saturation model but a full posterior probability density function for each model parameter. Furthermore, the uncertainty inherent in the underlying petrophysics (e.g., Archie's Law) can be incorporated in a straightforward manner. In this study, the data are from bench-scale ERT experiments conducted during gas injection into a quasi-2D (1 cm thick), translucent, brine-saturated sand chamber with a packing that mimics a simple anticlinal geological reservoir. We estimate saturation fields by Markov chain Monte Carlo sampling with the MT-DREAM(ZS) algorithm and compare them quantitatively to independent saturation measurements from a light transmission technique, as well as results from deterministic inversions. Different model parameterizations are evaluated in terms of the recovered saturation fields and petrophysical parameters. The saturation field is parameterized (1) in Cartesian coordinates, (2) by means of its discrete cosine transform coefficients, and (3) by fixed saturation values and gradients in structural elements defined by a Gaussian bell of arbitrary shape and location. Synthetic tests reveal that a priori knowledge about the expected geologic structures (as in parameterization (3)) markedly improves the parameter estimates. The number of degrees of freedom thus strongly affects the inversion results. In an additional step, we explore the effects of assuming that the total volume of injected gas is known a priori and that no gas has migrated away from the monitored region.
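
    Parameterization (2), the discrete cosine transform, is a generic dimensionality-reduction device that is easy to sketch. The snippet below illustrates the idea under our own assumptions (field size, number of retained modes); it is not the study's inversion code.

```python
import numpy as np
from scipy.fft import idctn

def field_from_dct(coeffs, shape):
    """Reconstruct a 2D saturation field from a truncated set of DCT coefficients.

    `coeffs` holds only the low-order (smooth) modes; all higher modes are zero,
    which drastically reduces the number of inversion parameters.
    """
    full = np.zeros(shape)
    k = coeffs.shape
    full[:k[0], :k[1]] = coeffs
    return idctn(full, norm="ortho")

# Example: a 64 x 64 field described by just 6 x 6 = 36 parameters.
rng = np.random.default_rng(0)
coeffs = rng.normal(size=(6, 6))
saturation = field_from_dct(coeffs, (64, 64))
# During MCMC sampling (e.g., with MT-DREAM(ZS)), the chain explores the 36
# coefficients rather than all 4096 cell values.
```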

  19. Equilibrium reconstruction in an iron core tokamak using a deterministic magnetisation model

    NASA Astrophysics Data System (ADS)

    Appel, L. C.; Lupelli, I.; JET Contributors

    2018-02-01

    In many tokamaks ferromagnetic material, usually referred to as an iron core, is present in order to improve the magnetic coupling between the solenoid and the plasma. The presence of the iron core in proximity to the plasma changes the magnetic topology, with consequent effects on the magnetic field structure and the plasma boundary. This paper considers the problem of obtaining the free-boundary plasma equilibrium solution in the presence of ferromagnetic material based on measured constraints. The current approach employs a model described by O'Brien et al. (1992) in which the magnetisation currents at the iron-air boundary are represented by a set of free parameters and appropriate boundary conditions are enforced via a set of quasi-measurements on the material boundary. This can lead to the possibility of overfitting the data and hiding underlying issues with the measured signals. Although the model typically achieves good fits to measured magnetic signals, there are significant discrepancies in the inferred magnetic topology compared with other plasma diagnostic measurements that are independent of the magnetic field. An alternative approach for equilibrium reconstruction in iron-core tokamaks, termed the deterministic magnetisation model, is developed and implemented in EFIT++. The iron is represented by a boundary current, with the gradients in the magnetisation dipole state generating macroscopic internal magnetisation currents. A model for the boundary magnetisation currents at the iron-air interface is developed using B-splines, enabling continuity to arbitrary order; internal magnetisation currents are allocated to triangulated regions within the iron, and a method to enable adaptive refinement is implemented. The deterministic model has been validated by comparing it with a synthetic 2-D electromagnetic model of JET. It is established that the maximum field discrepancy is less than 1.5 mT throughout the vacuum region enclosing the plasma. Simulated magnetic probe signals are accurate to within 1% for signals with absolute magnitude greater than 100 mT; in all other cases agreement is to within 1 mT. Neglecting the internal magnetisation currents increases the maximum discrepancy in the vacuum region to >20 mT, resulting in errors of 5%-10% in the simulated probe signals. The fact that the previous model neglects the internal magnetisation currents (and also has additional free parameters when fitting the measured data) makes it unsuitable for analysing data in the absence of plasma current. The discrepancy of the poloidal magnetic flux within the vacuum vessel is to within 0.1 Wb. Finally, the deterministic model is applied to an equilibrium force-balance solution of a JET discharge using experimental data. It is shown that the discrepancies of the outboard separatrix position and the outer strike-point position inferred from Thomson scattering and infrared camera data are much improved relative to the routine equilibrium reconstruction, whereas the discrepancy of the inner strike-point position is similar.

  20. Seismicity map tools for earthquake studies

    NASA Astrophysics Data System (ADS)

    Boucouvalas, Anthony; Kaskebes, Athanasios; Tselikas, Nikos

    2014-05-01

    We report on the development of a new online set of tools for use within Google Maps for earthquake research. We demonstrate this server-based online platform (developed with PHP, JavaScript, MySQL) with the new tools using a database system with earthquake data. The platform allows us to carry out statistical and deterministic analysis of earthquake data on Google Maps and to plot various seismicity graphs. The toolbox has been extended to draw line segments on the map, multiple straight lines horizontally and vertically, as well as multiple circles, including geodesic lines. The application is demonstrated using localized seismic data from the geographic region of Greece as well as other global earthquake data. The application also offers regional segmentation (NxN), which allows the study of earthquake clustering and of earthquake cluster shift between segments in space. The platform offers many filters, such as for plotting selected magnitude ranges or time periods. The plotting facility allows statistically based plots such as cumulative earthquake magnitude plots and earthquake magnitude histograms, calculation of the 'b' value, etc. What is novel about the platform is the additional deterministic tools. Using the newly developed horizontal and vertical line and circle tools we have studied the spatial distribution trends of many earthquakes, and we show here for the first time a link between Fibonacci numbers and the spatiotemporal location of some earthquakes. The new tools are valuable for examining and visualizing trends in earthquake research, as the platform allows calculation of statistics as well as deterministic precursors. We plan to show many new results based on our newly developed platform.
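
    One of the statistics mentioned, the Gutenberg-Richter 'b' value, has a standard closed-form maximum-likelihood estimate (Aki 1965, with Utsu's binning correction). A small sketch with a synthetic catalogue, our illustration rather than the platform's code:

```python
import numpy as np

def b_value_mle(magnitudes, m_c, dm=0.1):
    """Maximum-likelihood b-value (Aki 1965, with Utsu's binning correction).

    magnitudes: catalogue magnitudes; m_c: completeness magnitude;
    dm: magnitude binning width of the catalogue.
    """
    m = np.asarray(magnitudes)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

# Synthetic Gutenberg-Richter catalogue with b = 1: magnitudes above m_c
# are exponential with scale log10(e)/b, then binned to the dm grid.
rng = np.random.default_rng(1)
m_c, dm = 2.0, 0.1
raw = m_c + rng.exponential(scale=np.log10(np.e), size=5000)
mags = np.round(raw / dm) * dm
print(f"estimated b: {b_value_mle(mags, m_c, dm):.2f}")   # close to 1.0
```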

  1. VASP-4096: a very high performance programmable device for digital media processing applications

    NASA Astrophysics Data System (ADS)

    Krikelis, Argy

    2001-03-01

    Over the past few years, technology drivers for microprocessors have changed significantly. Media data delivery and processing--such as telecommunications, networking, video processing, speech recognition and 3D graphics--is increasing in importance and will soon dominate the processing cycles consumed in computer-based systems. This paper presents the architecture of the VASP-4096 processor. VASP-4096 provides high media performance with low energy consumption by integrating associative SIMD parallel processing with embedded microprocessor technology. The major innovation in the VASP-4096 is the integration of thousands of processing units on a single chip, capable of supporting software-programmable high-performance mathematical functions as well as abstract data processing. In addition to 4096 processing units, VASP-4096 integrates on a single chip a RISC controller that is an implementation of the SPARC architecture, 128 Kbytes of data memory, and I/O interfaces. The SIMD processing in VASP-4096 implements the ASProCore architecture, a proprietary implementation of SIMD processing, and operates at 266 MHz with program instructions issued by the RISC controller. The device also integrates a 64-bit synchronous main memory interface operating at 133 MHz (double data rate), and a 64-bit 66 MHz PCI interface. VASP-4096, compared with other processor architectures that support media processing, offers true performance scalability, support for deterministic and non-deterministic data processing on a single device, and software programmability that can be re-used in future chip generations.

  2. Development of a Probabilistic Component Mode Synthesis Method for the Analysis of Non-Deterministic Substructures

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1995-01-01

    Standard methods of structural dynamic analysis assume that the structural characteristics are deterministic. Recognizing that these characteristics are actually statistical in nature, researchers have recently developed a variety of methods that use this information to determine probabilities of a desired response characteristic, such as natural frequency, without using expensive Monte Carlo simulations. One of the problems in these methods is correctly identifying the statistical properties of primitive variables such as geometry, stiffness, and mass. This paper presents a method where the measured dynamic properties of substructures are used instead as the random variables. The residual flexibility method of component mode synthesis is combined with the probabilistic methods to determine the cumulative distribution function of the system eigenvalues. A simple cantilever beam test problem is presented that illustrates the theory.
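
    For context, the brute-force alternative that such probabilistic methods aim to avoid is a Monte Carlo sweep over the primitive variables. A minimal sketch for a cantilever-beam test problem, with illustrative (assumed) statistics and the first Euler-Bernoulli bending mode; this is a reference calculation, not the paper's residual-flexibility CMS method:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50_000

# Hypothetical statistics for the primitive variables (illustrative values):
E = rng.normal(210e9, 5e9, n)        # Young's modulus, Pa
rho = rng.normal(7800.0, 100.0, n)   # density, kg/m^3
L, b, h = 1.0, 0.02, 0.005           # length and cross-section, m
I = b * h**3 / 12.0                  # second moment of area
A = b * h

# First bending eigenfrequency of a cantilever (Euler-Bernoulli theory):
omega1 = 1.875**2 * np.sqrt(E * I / (rho * A * L**4))   # rad/s

# Empirical cumulative distribution function of the eigenvalue lambda = omega^2:
lam = np.sort(omega1**2)
cdf = np.arange(1, n + 1) / n
print(np.interp(np.median(lam), lam, cdf))   # ~0.5 by construction
```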

  3. Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System

    ERIC Educational Resources Information Center

    Maiti, Alakes; Samanta, G. P.

    2005-01-01

    This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…
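
    The deterministic part of such an analysis starts from numerical integration of the governing ODEs. A minimal sketch of a standard prey-dependent predator-prey model (Rosenzweig-MacArthur form with a Holling type II response); the functional form and parameter values are our illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

r, K = 1.0, 10.0        # prey growth rate and carrying capacity
a, h = 1.0, 0.5         # attack rate and handling time (Holling type II)
e, d = 0.6, 0.4         # conversion efficiency and predator mortality

def rhs(t, y):
    x, p = y                                   # prey, predator
    response = a * x / (1.0 + a * h * x)       # prey-dependent functional response
    return [r * x * (1.0 - x / K) - response * p,
            e * response * p - d * p]

sol = solve_ivp(rhs, (0.0, 200.0), [5.0, 2.0], dense_output=True, rtol=1e-8)
# sol.y[0] and sol.y[1] give the deterministic prey and predator trajectories;
# a stochastic analysis would perturb such a system with environmental noise.
```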

  4. ShinyGPAS: interactive genomic prediction accuracy simulator based on deterministic formulas.

    PubMed

    Morota, Gota

    2017-12-20

    Deterministic formulas for the accuracy of genomic predictions highlight the relationships between prediction accuracy and the factors that influence it, prior to performing computationally intensive cross-validation. Visualizing such deterministic formulas in an interactive manner may lead to a better understanding of how genetic factors control prediction accuracy. The software to simulate deterministic formulas for genomic prediction accuracy was implemented in R and encapsulated as a web-based Shiny application. The Shiny genomic prediction accuracy simulator (ShinyGPAS) simulates various deterministic formulas and delivers dynamic scatter plots of prediction accuracy versus the genetic factors impacting it, while requiring only mouse navigation in a web browser. ShinyGPAS is available at https://chikudaisei.shinyapps.io/shinygpas/. ShinyGPAS is a Shiny-based interactive genomic prediction accuracy simulator using deterministic formulas. It can be used for interactively exploring potential factors that influence prediction accuracy in genome-enabled prediction, simulating achievable prediction accuracy prior to genotyping individuals, or supporting in-class teaching. ShinyGPAS is open source software and is hosted online as a freely available web-based resource with an intuitive graphical user interface.
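
    One widely used deterministic formula of this kind (Daetwyler et al. 2008) relates expected accuracy to training size, heritability, and the number of independent chromosome segments. The abstract does not list which formulas ShinyGPAS ships with, so treat this as a representative example rather than the application's definitive contents:

```python
import numpy as np

def accuracy_daetwyler(n, h2, me):
    """Deterministic expected accuracy of genomic prediction
    (Daetwyler et al. 2008): r = sqrt(N h^2 / (N h^2 + Me)).

    n: training population size, h2: trait heritability,
    me: number of independent chromosome segments.
    """
    return np.sqrt(n * h2 / (n * h2 + me))

# How accuracy scales with training size for a moderately heritable trait:
for n in (1_000, 5_000, 20_000):
    print(n, round(accuracy_daetwyler(n, h2=0.5, me=5_000), 3))
```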

  5. Computer aided manufacturing for complex freeform optics

    NASA Astrophysics Data System (ADS)

    Wolfs, Franciscus; Fess, Ed; Johns, Dustin; LePage, Gabriel; Matthews, Greg

    2017-10-01

    Recently, the desire to use freeform optics has been increasing. Freeform optics can expand the capabilities of optical systems and reduce the number of optics needed in an assembly. The traits that increase optical performance also present challenges in manufacturing. As tolerances on freeform optics become more stringent, it is necessary to continue to improve how the grinding and polishing processes interact with metrology. To create these complex shapes, OptiPro has developed a computer-aided manufacturing package called PROSurf. PROSurf generates the tool paths required for grinding and polishing freeform optics with multiple axes of motion. It also uses metrology feedback for deterministic corrections. PROSurf handles 2 key aspects of the manufacturing process that most other CAM systems struggle with. The first is the ability to support several input types (equations, CAD models, point clouds) and still create a uniform high-density surface map usable for generating a smooth tool path. The second is improving the accuracy of mapping a metrology file to the part surface. To do this, OptiPro uses 3D error maps instead of traditional 2D maps. The metrology error map drives the tool-path adjustment applied during processing. For grinding, the error map adjusts the tool position to compensate for repeatable system error. For polishing, the error map drives the relative dwell times of the tool across the part surface. This paper will present the challenges associated with these issues and the solutions that we have created.
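
    The dwell-time idea in the polishing step is easy to caricature: spend time at each surface point in proportion to the material that must be removed. The sketch below is a deliberately simplified illustration with assumed numbers, not PROSurf's algorithm (a real system would also deconvolve the tool influence function):

```python
import numpy as np

# Measured 3D error map sampled on the tool-path grid (assumed values, m):
error_map = np.abs(np.random.default_rng(2).normal(50e-9, 20e-9, (64, 64)))
removal_rate = 1e-9      # m of material removed per second of dwell (assumed)

# Dwell time per grid point: remove exactly the measured error.
dwell_time = error_map / removal_rate          # seconds

# Feeding dwell_time into the tool-path generator slows the traversal where
# more material must come off, which is the essence of deterministic polishing.
```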

  6. Recent Developments in the UltraForm Finishing and UltraSurf Measuring of Axisymmetric IR Domes

    DTIC Science & Technology

    2010-06-08

    Approved for public release; distribution is unlimited. SUPPLEMENTARY NOTES: Presented at Mirror Technology Days, Boulder, Colorado, USA. ... deterministic fabrication solution for a wide range of newly developed windows, domes and mirrors. COMMERCIALIZATION: UltraForm Finishing (UFF ...

  7. Flood hazard assessment using 1D and 2D approaches

    NASA Astrophysics Data System (ADS)

    Petaccia, Gabriella; Costabile, Pierfranco; Macchione, Francesco; Natale, Luigi

    2013-04-01

    The EU flood risk Directive (Directive 2007/60/EC) prescribes risk assessment and mapping to develop flood risk management plans. Flood hazard mapping may be carried out with mathematical models able to determine flood-prone areas once realistic conditions (in terms of discharge or water levels) are imposed at the boundaries of the case study. The deterministic models are mainly based on the shallow water equations in their 1D or 2D formulation. The 1D approach is widely used, especially in technical studies, due to its relative simplicity, its computational efficiency, and because it requires topographical data less expensive than that needed by 2D models. Even though in many practical situations, such as modeling in-channel flows and not-too-wide floodplains, the 1D approach may provide results close to the predictions of a more sophisticated 2D model, the correct use of a 1D model in practice is more complex than it may seem. The main issues to be correctly modeled in a 1D approach are the definition of hydraulic structures, such as bridges and buildings interacting with the flow, and the treatment of tributaries. All these aspects have to be taken into account in 2D modeling as well, but with fewer difficulties. The purpose of this paper is to show how the above issues can be described using 1D or 2D unsteady flow modeling. In particular, the authors will show the techniques that have to be implemented in 1D modeling to obtain reliable predictions of water levels and discharges comparable to those obtained using a 2D model. Attention is focused on an actual river (the Crati River) in the south of Italy. This case study is quite complicated, since it deals with the simulation of channeled flows, overbank flows, and interactions with buildings, bridges and tributaries. Techniques specifically developed by the authors to take these peculiarities into account in 1D and 2D modeling will be presented, compared and discussed.
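
    A basic building block of any 1D river model is a uniform-flow relation such as Manning's equation. The sketch below solves it for normal depth in a rectangular channel; the geometry, roughness, and discharge are our illustrative choices, not the Crati case-study data:

```python
import numpy as np
from scipy.optimize import brentq

def normal_depth(Q, b, n, S):
    """Normal depth of a rectangular channel from Manning's equation
    Q = (1/n) * A * R^(2/3) * sqrt(S), solved for depth y by root bracketing."""
    def residual(y):
        A = b * y                    # flow area
        R = A / (b + 2.0 * y)        # hydraulic radius
        return (1.0 / n) * A * R ** (2.0 / 3.0) * np.sqrt(S) - Q
    return brentq(residual, 1e-6, 100.0)

# Example: a 150 m^3/s flood discharge in a 40 m wide channel
# (Manning's n = 0.035, bed slope 0.001) gives a depth of about 2.5 m.
print(f"normal depth: {normal_depth(150.0, 40.0, 0.035, 0.001):.2f} m")
```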

  8. Using proxies to explore ensemble uncertainty in climate impact studies: the example of air pollution

    NASA Astrophysics Data System (ADS)

    Lemaire, V. E. P.; Colette, A.; Menut, L.

    2015-10-01

    Because it is sensitive to unfavorable weather patterns, air pollution is sensitive to climate change, so that, in the future, a climate penalty could jeopardize the expected efficiency of air pollution mitigation measures. A common method to assess the impact of climate on air quality consists of driving chemistry-transport models with climate projections. However, the computing cost of such methods requires optimized ensemble exploration techniques. Using a training dataset of deterministic projections of climate and air quality over Europe, we identified the main meteorological drivers of air quality for 8 regions in Europe and developed simple statistical models that can be used to predict air pollutant concentrations. The evolution of the key climate variables driving either particulate or gaseous pollution allows conclusions to be drawn about the robustness of the climate impact on air quality. The climate benefit for PM2.5 was confirmed: -0.96 (±0.18), -1.00 (±0.37) and -1.16 (±0.23) μg m-3 for Eastern Europe, Mid-Europe and Northern Italy, respectively; and for the Eastern Europe, France, Iberian Peninsula, Mid-Europe and Northern Italy regions a climate penalty on ozone was identified: 10.11 (±3.22), 8.23 (±2.06), 9.23 (±1.13), 6.41 (±2.14) and 7.43 (±2.02) μg m-3, respectively. This technique also allows selecting a subset of relevant regional climate model members that should be used in priority for future deterministic projections.
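
    A statistical proxy of this kind can be as simple as a multiple linear regression of pollutant concentration on a few meteorological drivers. The sketch below uses synthetic data with invented drivers and coefficients; the paper's actual predictors and regions differ:

```python
import numpy as np

# Hypothetical training data (illustration only): seasonal-mean meteorological
# drivers and ozone concentrations from deterministic CTM projections.
rng = np.random.default_rng(3)
n = 200
temperature = rng.normal(290.0, 5.0, n)          # K
wind_speed = rng.normal(4.0, 1.0, n)             # m/s
o3 = (60.0 + 2.5 * (temperature - 290.0)
      - 3.0 * (wind_speed - 4.0)
      + rng.normal(0.0, 4.0, n))                 # ug/m^3, synthetic "truth"

# Fit the statistical proxy: O3 ~ a0 + a1*T + a2*wind.
X = np.column_stack([np.ones(n), temperature, wind_speed])
coeffs, *_ = np.linalg.lstsq(X, o3, rcond=None)

# The fitted proxy can now be applied to every member of a climate ensemble
# at negligible cost, instead of rerunning the chemistry-transport model.
o3_projected = X @ coeffs
```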

  9. Fatal and nonfatal risk associated with recycle of D&D-generated concrete

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boren, J.K.; Ayers, K.W.; Parker, F.L.

    1997-02-01

    As decontamination and decommissioning activities proceed within the U.S. Department of Energy Complex, vast volumes of uncontaminated and contaminated concrete will be generated. The current practice of decontaminating and landfilling the concrete is an expensive and potentially wasteful practice. Research is being conducted at Vanderbilt University to assess the economic, social, legal, and political ramifications of alternate methods of dealing with waste concrete. An important aspect of this research work is the assessment of risk associated with the various alternatives. A deterministic risk assessment model has been developed which quantifies radiological as well as non-radiological risks associated with concrete disposal and recycle activities. The risk model accounts for fatal as well as non-fatal risks to both workers and the public. Preliminary results indicate that recycling of concrete presents potentially lower risks than the current practice. Radiological considerations are shown to be of minor importance in comparison to other sources of risk, with conventional transportation fatalities and injuries dominating. Onsite activities can also be a major contributor to non-fatal risk.

  10. Reducing the critical particle diameter in (highly) asymmetric sieve-based lateral displacement devices.

    PubMed

    Dijkshoorn, J P; Schutyser, M A I; Sebris, M; Boom, R M; Wagterveld, R M

    2017-10-26

    Deterministic lateral displacement technology was originally developed in the realm of microfluidics, but it has potential for larger-scale separation as well. In our previous studies, we proposed a sieve-based lateral displacement device inspired by the principle of deterministic lateral displacement. The advantages of this new device are a lower pressure drop, a lower risk of particle accumulation, a higher throughput, and simpler manufacture. However, until now this device had only been investigated for the separation of large particles of around 785 µm diameter. To separate smaller particles, we investigate several design parameters for their influence on the critical particle diameter. In a dimensionless evaluation, device designs with different geometries and dimensions were compared. It was found that sieve-based lateral displacement devices are able to displace particles, despite their unusual and asymmetric design, owing to the crucial role of the flow profile. These results demonstrate the possibility of actively steering the velocity profile in order to reduce the critical diameter in deterministic lateral displacement devices, which makes this separation principle more accessible for large-scale, high-throughput applications.
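
    For conventional post-array DLD devices there is a widely cited empirical correlation for the critical diameter (Davis, 2006). The sieve-based geometry studied here departs from that layout, but the correlation is useful for orientation:

```python
def dld_critical_diameter(gap, row_shift_fraction):
    """Empirical critical diameter for conventional DLD arrays (Davis, 2006):
    D_c = 1.4 * g * epsilon**0.48, with g the post gap and epsilon = 1/N
    the row shift fraction."""
    return 1.4 * gap * row_shift_fraction ** 0.48

# A 100 um gap with a 1/10 row shift gives a critical diameter near 46 um.
print(f"D_c = {dld_critical_diameter(100e-6, 0.1) * 1e6:.1f} um")
```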

  11. A variational method for analyzing limit cycle oscillations in stochastic hybrid systems

    NASA Astrophysics Data System (ADS)

    Bressloff, Paul C.; MacLaurin, James

    2018-06-01

    Many systems in biology can be modeled through ordinary differential equations that are piecewise continuous and switch between different states according to a Markov jump process; such models are known as stochastic hybrid systems or piecewise deterministic Markov processes (PDMPs). In the fast-switching limit, the dynamics converges to a deterministic ODE. In this paper, we develop a phase reduction method for stochastic hybrid systems that support a stable limit cycle in the deterministic limit. A classic example is the Morris-Lecar model of a neuron, where the switching Markov process is the number of open ion channels and the continuous process is the membrane voltage. We outline a variational principle for the phase reduction, yielding an exact analytic expression for the resulting phase dynamics. We demonstrate that this decomposition is accurate over timescales that are exponential in the inverse switching rate ε^{-1}. That is, we show that for a constant C, the probability that the expected time to leave an O(a) neighborhood of the limit cycle is less than T scales as T exp(-Ca/ε).

  12. Effect of nonlinearity in hybrid kinetic Monte Carlo-continuum models.

    PubMed

    Balter, Ariel; Lin, Guang; Tartakovsky, Alexandre M

    2012-01-01

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a kinetic Monte Carlo (KMC) model for a surface to a finite-difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition-dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition-dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that in this case the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.
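
    The hybrid's KMC half can be illustrated with a Gillespie simulation of the linear deposition-dissolution case, whose mean-field ODE it should reproduce (the paper's first validation test). The rates and lattice size below are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
n_sites, k_a, k_d = 200, 1.0, 0.5      # illustrative rates, not from the paper

def gillespie_coverage(t_end):
    """KMC (Gillespie) simulation of linear deposition-dissolution on a lattice."""
    occupied, t, times, theta = 0, 0.0, [0.0], [0.0]
    while t < t_end:
        r_dep = k_a * (n_sites - occupied)     # deposition onto empty sites
        r_dis = k_d * occupied                 # dissolution from occupied sites
        total = r_dep + r_dis
        t += rng.exponential(1.0 / total)      # time to next event
        occupied += 1 if rng.random() < r_dep / total else -1
        times.append(t)
        theta.append(occupied / n_sites)
    return np.array(times), np.array(theta)

times, theta = gillespie_coverage(20.0)
# Deterministic mean-field solution for comparison; with a *linear* rate the
# two agree on average, while nonlinear rates (competitive adsorption) need not.
theta_ss = k_a / (k_a + k_d)
theta_ode = theta_ss * (1.0 - np.exp(-(k_a + k_d) * times))
```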

  13. Effect of Nonlinearity in Hybrid Kinetic Monte Carlo-Continuum Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balter, Ariel I.; Lin, Guang; Tartakovsky, Alexandre M.

    2012-04-23

    Recently there has been interest in developing efficient ways to model heterogeneous surface reactions with hybrid computational models that couple a KMC model for a surface to a finite difference model for bulk diffusion in a continuous domain. We consider two representative problems that validate a hybrid method and also show that this method captures the combined effects of nonlinearity and stochasticity. We first validate a simple deposition/dissolution model with a linear rate showing that the KMC-continuum hybrid agrees with both a fully deterministic model and its analytical solution. We then study a deposition/dissolution model including competitive adsorption, which leads to a nonlinear rate, and show that, in this case, the KMC-continuum hybrid and fully deterministic simulations do not agree. However, we are able to identify the difference as a natural result of the stochasticity coming from the KMC surface process. Because KMC captures inherent fluctuations, we consider it to be more realistic than a purely deterministic model. Therefore, we consider the KMC-continuum hybrid to be more representative of a real system.

  14. Deterministic Integration of Quantum Dots into on-Chip Multimode Interference Beamsplitters Using in Situ Electron Beam Lithography

    NASA Astrophysics Data System (ADS)

    Schnauber, Peter; Schall, Johannes; Bounouar, Samir; Höhne, Theresa; Park, Suk-In; Ryu, Geun-Hwan; Heindel, Tobias; Burger, Sven; Song, Jin-Dong; Rodt, Sven; Reitzenstein, Stephan

    2018-04-01

    The development of multi-node quantum optical circuits has attracted great attention in recent years. In particular, interfacing quantum-light sources, gates and detectors on a single chip is highly desirable for the realization of large networks. In this context, fabrication techniques that enable the deterministic integration of pre-selected quantum-light emitters into nanophotonic elements play a key role when moving forward to circuits containing multiple emitters. Here, we present the deterministic integration of an InAs quantum dot into a 50/50 multi-mode interference beamsplitter via in-situ electron beam lithography. We demonstrate the combined emitter-gate interface functionality by measuring triggered single-photon emission on-chip with $g^{(2)}(0) = 0.13 \pm 0.02$. Due to its high patterning resolution as well as spectral and spatial control, in-situ electron beam lithography allows for integration of pre-selected quantum emitters into complex photonic systems. Being a scalable single-step approach, it paves the way towards multi-node, fully integrated quantum photonic chips.
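
    A $g^{(2)}(0)$ value like the one quoted is typically extracted from a Hanbury Brown-Twiss coincidence histogram. The sketch below shows the standard normalization idea applied to two detector time-tag streams; it is a generic illustration (time-ordered timestamps assumed), not the authors' analysis pipeline:

```python
import numpy as np

def g2_zero(clicks_a, clicks_b, bin_width, window):
    """Estimate g2(0) from two time-ordered detector click streams (HBT setup):
    histogram pairwise delays, then normalize the zero-delay bin by the
    coincidence level far from zero delay."""
    delays, j0 = [], 0
    for t in clicks_a:
        # advance the left edge of the B-window monotonically
        while j0 < len(clicks_b) and clicks_b[j0] < t - window:
            j0 += 1
        j = j0
        while j < len(clicks_b) and clicks_b[j] <= t + window:
            delays.append(clicks_b[j] - t)
            j += 1
    bins = np.arange(-window, window + bin_width, bin_width)
    hist, edges = np.histogram(delays, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    baseline = hist[np.abs(centers) > window / 2].mean()   # far-from-zero level
    return hist[np.argmin(np.abs(centers))] / baseline

# A result well below 0.5 indicates a single-photon emitter.
```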

  15. Reconstruction and analysis of sub-plinian tephra dispersal during the 1530 A.D. Soufrière (Guadeloupe) eruption: Implications for scenario definition and hazards assessment

    NASA Astrophysics Data System (ADS)

    Komorowski, J.-C.; Legendre, Y.; Caron, B.; Boudon, G.

    2008-12-01

    The last magmatic eruption of Soufrière of Guadeloupe, dated at 1530 A.D. (Soufrière eruption), is characterized by an onset with a partial flank collapse and emplacement of a debris avalanche, followed by a sub-plinian VEI 2-3 explosive short-lived eruption (Phase 1) with a column that reached a height between 9 and 12 km, producing about 3.9 × 10⁶ m³ DRE (16.3 × 10⁶ m³ bulk) of juvenile products. The column recurrently collapsed, generating scoriaceous pyroclastic flows in radiating valleys up to a distance of 5-6 km, with a maximum interpolated bulk deposit volume of 11.7 × 10⁶ m³ (5 × 10⁶ m³ DRE). We have used HAZMAP, a simple first-order numerical model of tephra dispersal [Macedonio, G., Costa, A., Longo, A., 2005. A computer model for volcanic ash fallout and assessment of subsequent hazard. Comput. Geosci. 31, 837-845], to reconstruct to a first approximation the potential dispersal of tephra and associated tephra mass loadings generated by the sub-plinian Phase 1 of the 1530 A.D. eruption. We have tested our model on a deterministic average dry-season wind profile that best fits the available data, as well as on a set of randomly selected wind profiles over a 5-year interval that allows the elaboration of probabilistic maps for the exceedance of specific tephra mass load thresholds. Results show that, in the hypothesis of a future 1530 A.D.-type scenario, populated areas to a distance of 3-4 km west-southwest of the vent could be subjected to a static load pressure between 2 and 10 kPa in the case of wet tephra, which could cause variable degrees of roof damage. Our results provide volcanological input parameters for scenario and event-tree definition, for assessing volcanic risks, and for evaluating their impact in case of a future sub-plinian eruption, which could affect up to 70 000 people in southern Basse-Terre island and the region. They also provide a framework to aid decision-making concerning land management and development. A sub-plinian eruption is the most likely magmatic scenario in case of a future eruption of this volcano, which has shown, since 1992, increasing signs of low-energy seismic, thermal, and acid degassing unrest without significant deformation.

  16. Effect of Dietary Countermeasures and Impact of Gravity on Renal Calculi Size Distributions Predicted by PBE-System and PBE-CFD Models

    NASA Technical Reports Server (NTRS)

    Kassemi, M.; Thompson, D.; Goodenow, D.; Gokoglu, S.; Myers, J.

    2016-01-01

    Renal stone disease is not only a concern on Earth but can conceivably pose a serious risk to astronauts' health and safety in space. In this work, two different deterministic models based on a population balance equation (PBE) analysis of renal stone formation are developed to assess the risk of critical renal stone incidence for astronauts during space travel. In the first model, the nephron is treated as a continuous mixed-suspension, mixed-product-removal crystallizer and the PBE for the nucleating, growing and agglomerating renal calculi is coupled to speciation calculations performed by JESS. Predictions of stone size distributions in the kidney using this model indicate that the astronaut in microgravity is at noticeably greater but still subcritical risk, and recommend administration of citrate and augmented hydration as effective means of minimizing and containing this risk. In the second model, the PBE analysis is coupled to a computational fluid dynamics (CFD) model for the flow of urine and the transport of calcium and oxalate in the nephron to predict the impact of gravity on the stone size distributions. Results presented for realistic 3D tubule and collecting duct geometries clearly indicate that agglomeration is the primary mode of size enhancement in both 1 g and microgravity. 3D numerical simulations further indicate that an increased number of smaller stones will develop in microgravity and will likely pass through the nephron in the absence of wall adhesion. However, upon re-entry into a 1 g (Earth) or 3/8 g (Mars) gravitational field, the renal calculi can lag behind the urinary flow in tubules that are adversely oriented with respect to the gravitational field and grow and agglomerate to large sizes that are sedimented near the wall, with increased propensity for wall adhesion, plaque formation, and risk to the astronauts.

  17. Advanced Neutronics Tools for BWR Design Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santamarina, A.; Hfaiedh, N.; Letellier, R.

    2006-07-01

    This paper summarizes the developments implemented in the new APOLLO2.8 neutronics tool to meet the required target accuracy in LWR applications, particularly void effects and pin-by-pin power maps in BWRs. The Method of Characteristics was developed to allow efficient LWR assembly calculations in 2D exact heterogeneous geometry; resonant reaction calculation was improved by the optimized SHEM-281 group mesh, which avoids resonance self-shielding approximation below 23 eV, and by the new space-dependent method for resonant mixtures that accounts for resonance overlapping. Furthermore, a new library, CEA2005, processed from JEFF3.1 evaluations involving feedback from critical experiments and LWR P.I.E., is used. The specific '2005-2007 BWR Plan' settled to demonstrate the validation/qualification of this neutronics tool is described. Some results from the validation process are presented: the comparison of APOLLO2.8 results to reference Monte Carlo TRIPOLI4 results on specific BWR benchmarks emphasizes the ability of the deterministic tool to calculate the BWR assembly multiplication factor within 200 pcm accuracy for void fractions varying from 0 to 100%. The qualification process against the BASALA mock-up experiment stresses APOLLO2.8/CEA2005 performance: pin-by-pin power is always predicted within 2% accuracy, and the reactivity worth of B4C or Hf cruciform control blades, as well as Gd pins, is predicted within 1.2% accuracy. (authors)

  18. Three-dimensional printing and deformation behavior of low-density target structures by two-photon polymerization

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Stein, Ori; Campbell, John H.; Jiang, Lijia; Petta, Nicole; Lu, Yongfeng

    2017-08-01

    Two-photon polymerization (2PP), a 3D nano- to microscale additive manufacturing process, is being used for the first time to fabricate small custom experimental packages ("targets") to support laser-driven high-energy-density (HED) physics research. Of particular interest is the use of 2PP to deterministically print low-density, low atomic-number (CHO) polymer matrices ("foams") at millimeter scale with sub-micrometer resolution. Deformation during development and drying of the foam structures remains a challenge when using certain commercial photoresins; here we compare the use of the acrylic resins IP-S and IP-Dip. The mechanical strength of polymeric beam and foam structures is examined, particularly the degree of deformation that occurs during the development and drying processes. The magnitude of the shrinkage in the two resins is quantified by printing sample structures and by use of FEA to simulate the deformation. Capillary drying forces are shown to be small and likely below the elastic limit of the core foam structure. In contrast, the substantial shrinkage in IP-Dip (~5-10%) causes large shear stresses and associated plastic deformation, particularly near constrained boundaries such as the substrate and locations with sharp density variation. The inherent weakness of stitching boundaries is also evident and in certain cases can lead to delamination. Use of IP-S shows a marked reduction in deformation with a minor loss of print resolution.

  19. Experimental & Numerical Modeling of Non-combusting Model Firebrands' Transport

    NASA Astrophysics Data System (ADS)

    Tohidi, Ali; Kaye, Nigel

    2016-11-01

    Fire spotting is one of the major mechanisms of wildfire spread. The three phases of this phenomenon are firebrand formation and break-off from burning vegetation, lofting and downwind transport of firebrands through the velocity field of the wildfire, and spot fire ignition upon landing. The lofting and downwind transport phase is modeled by conducting large-scale wind tunnel experiments. Non-combusting rod-like model firebrands with different aspect ratios are released within the velocity field of a jet in a boundary layer cross-flow that approximates the wildfire velocity field. Characteristics of the firebrand dispersion are quantified by capturing the full trajectory of the model firebrands using a purpose-developed image processing algorithm. The results show that the lofting height has a direct impact on the maximum travel distance of the model firebrands. The experimental results are also used to validate a highly scalable coupled stochastic and parametric firebrand flight model that couples the LES-resolved velocity field of a jet-in-nonuniform-cross-flow (JINCF) with a 3D fully deterministic 6-degrees-of-freedom debris transport model. The validation results show that the developed numerical model is capable of estimating average statistics of the firebrands' flight. The authors thank the National Science Foundation for support under Grant No. 1200560; the presenter (Ali Tohidi) thanks Dr. Michael Gollner of the University of Maryland, College Park, for supporting conference participation.

  20. Use of a remotely piloted aircraft system for hazard assessment in a rocky mining area (Lucca, Italy)

    NASA Astrophysics Data System (ADS)

    Salvini, Riccardo; Mastrorocco, Giovanni; Esposito, Giuseppe; Di Bartolo, Silvia; Coggan, John; Vanneschi, Claudio

    2018-01-01

    The use of remote sensing techniques is now common practice in different working environments, including engineering geology. Moreover, in recent years the development of structure from motion (SfM) methods, together with rapid technological improvement, has allowed the widespread use of cost-effective remotely piloted aircraft systems (RPAS) for acquiring detailed and accurate geometrical information even in evolving environments, such as mining contexts. Indeed, the acquisition of remotely sensed data from hazardous areas provides accurate 3-D models and high-resolution orthophotos while minimizing the risk to operators. The quality and quantity of the data obtainable from RPAS surveys can then be used for inspection of mining areas, audit of mining design, rock mass characterization, stability analysis and monitoring activities. Despite the widespread use of RPAS, its potential and limitations still have to be fully understood. In this paper a case study is shown where an RPAS was used for the engineering geological investigation of a closed marble mine area in Italy; direct ground-based techniques could not be applied for safety reasons. In view of the re-activation of mining operations, high-resolution images taken from different positions and heights were acquired and processed using SfM techniques to obtain an accurate and detailed 3-D model of the area. The geometrical and radiometrical information was subsequently used for a deterministic rock mass characterization, which led to the identification of two large marble blocks that pose a potentially significant hazard for the future workforce. A preliminary stability analysis, with a focus on investigating the contribution of potential rock bridges, was then performed in order to demonstrate the potential use of RPAS information in engineering geological contexts for geohazard identification, awareness and reduction.

  1. Performance-based design factors for pile foundations.

    DOT National Transportation Integrated Search

    2014-10-01

    The seismic design of pile foundations is currently performed in a relatively simple, deterministic manner. This report describes the development of a performance-based framework to create seismic designs of pile group foundations that consider a...

  2. Evaluation of Deployment Challenges of Wireless Sensor Networks at Signalized Intersections

    PubMed Central

    Azpilicueta, Leyre; López-Iturri, Peio; Aguirre, Erik; Martínez, Carlos; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco

    2016-01-01

    With the growing demand for Intelligent Transportation Systems (ITS) for safer and more efficient transportation, research on and development of such vehicular communication systems have increased considerably in recent years. The use of wireless networks in vehicular environments has grown exponentially. However, it is highly important to analyze radio propagation prior to the deployment of a wireless sensor network in such complex scenarios. In this work, the radio wave characterization of ISM 2.4 GHz and 5 GHz Wireless Sensor Networks (WSNs), deployed taking advantage of existing traffic light infrastructure, has been assessed. By means of an in-house developed 3D ray launching algorithm, the impact of the topology as well as the urban morphology of the environment has been analyzed, emulating realistic operation in the framework of the scenario. The complexity of the scenario, a city intersection area with traffic lights, vehicles, people, buildings, vegetation and urban environment, makes channel characterization with accurate models necessary before the deployment of wireless networks. A measurement campaign has been conducted emulating the interaction of the system in the vicinity of pedestrians as well as nearby vehicles. A real-time interactive application has been developed and tested in order to visualize and monitor traffic as well as pedestrian user location and behavior. Results show that the use of deterministic tools in WSN deployment can aid in providing optimal layouts in terms of coverage, capacity and energy efficiency of the network. PMID:27455270

  3. Evaluation of Deployment Challenges of Wireless Sensor Networks at Signalized Intersections.

    PubMed

    Azpilicueta, Leyre; López-Iturri, Peio; Aguirre, Erik; Martínez, Carlos; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco

    2016-07-22

    With the growing demand for Intelligent Transportation Systems (ITS) for safer and more efficient transportation, research on and development of such vehicular communication systems have increased considerably in recent years. The use of wireless networks in vehicular environments has grown exponentially. However, it is highly important to analyze radio propagation prior to the deployment of a wireless sensor network in such complex scenarios. In this work, the radio wave characterization of ISM 2.4 GHz and 5 GHz Wireless Sensor Networks (WSNs) deployed taking advantage of existing traffic light infrastructure has been assessed. By means of an in-house developed 3D ray launching algorithm, the impact of the topology as well as the urban morphology of the environment has been analyzed, emulating realistic operation in the framework of the scenario. The complexity of the scenario, an intersection city area with traffic lights, vehicles, people, buildings, vegetation and urban environment, makes channel characterization with accurate models necessary before the deployment of wireless networks. A measurement campaign has been conducted emulating the interaction of the system in the vicinity of pedestrians as well as nearby vehicles. A real-time interactive application has been developed and tested in order to visualize and monitor traffic as well as pedestrian user location and behavior. Results show that the use of deterministic tools in WSN deployment can aid in providing optimal layouts in terms of coverage, capacity and energy efficiency of the network.

  4. Species removal from aqueous radioactive waste by deep-bed filtration.

    PubMed

    Dobre, Tănase; Zicman, Laura Ruxandra; Pârvulescu, Oana Cristina; Neacşu, Elena; Ciobanu, Cătălin; Drăgolici, Felicia Nicoleta

    2018-05-26

    The performance of aqueous suspension treatment by deep-bed sand filtration was experimentally studied and simulated. A semiempirical deterministic model and a stochastic model were used to predict the removal of clay particles (20 μm) from diluted suspensions. Model parameters, which were fitted based on experimental data, were linked by multiple linear correlations to the process factors, i.e., sand grain size (0.5 and 0.8 mm), bed depth (0.2 and 0.4 m), clay concentration in the feed suspension (1 and 2 kg/m³ of particles), suspension superficial velocity (0.015 and 0.020 m/s), and operating temperature (25 and 45 °C). These relationships were used to predict the bed radioactivity determined by the deposition of radioactive suspended particles (>50 nm) from low- and medium-level aqueous radioactive waste. A deterministic model based on mass balance, kinetic, and interface equilibrium equations was developed to predict the multicomponent sorption of ⁶⁰Co, ¹³⁷Cs, ²⁴¹Am, and ³H radionuclides (0.1-0.3 nm). A removal of 98.7% of radioactive particles was attained by filtering a radioactive wastewater volume of 10 m³ (0.5 mm sand grain size, 0.3 m bed depth, 0.223 kg/m³ suspended solid concentration in the feed suspension, 0.003 m/s suspension superficial velocity, and 25 °C operating temperature). Predicted results revealed that the bed radioactivity determined by the sorption of radionuclides (0.01 kBq/kg of bed) was significantly lower than the bed radioactivities caused by the deposition of radioactive particles (0.5-1.8 kBq/kg of bed). Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Experimental study and simulation of space charge stimulated discharge

    NASA Astrophysics Data System (ADS)

    Noskov, M. D.; Malinovski, A. S.; Cooke, C. M.; Wright, K. A.; Schwab, A. J.

    2002-11-01

    The electrical discharge of volume distributed space charge in poly(methylmethacrylate) (PMMA) has been investigated both experimentally and by computer simulation. The experimental space charge was implanted in dielectric samples by exposure to a monoenergetic electron beam of 3 MeV. Electrical breakdown through the implanted space charge region within the sample was initiated by a local electric field enhancement applied to the sample surface. A stochastic-deterministic dynamic model for electrical discharge was developed and used in a computer simulation of these breakdowns. The model employs stochastic rules to describe the physical growth of the discharge channels, and deterministic laws to describe the electric field, the charge, and energy dynamics within the discharge channels and the dielectric. Simulated spatial-temporal and current characteristics of the expanding discharge structure during physical growth are quantitatively compared with the experimental data to confirm the discharge model. It was found that a single fixed set of physically based dielectric parameter values was adequate to simulate the complete family of experimental space charge discharges in PMMA. It is proposed that such a set of parameters also provides a useful means to quantify the breakdown properties of other dielectrics.

  6. Criticality Calculations with MCNP6 - Practical Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2016-11-29

    These slides are used to teach MCNP (Monte Carlo N-Particle) usage to nuclear criticality safety analysts. The following are the lecture topics: course information, introduction, MCNP basics, criticality calculations, advanced geometry, tallies, adjoint-weighted tallies and sensitivities, physics and nuclear data, parameter studies, NCS validation I, NCS validation II, NCS validation III, case study 1 - solution tanks, case study 2 - fuel vault, case study 3 - B&W core, case study 4 - simple TRIGA, case study 5 - fissile mat. vault, criticality accident alarm systems. After completion of this course, you should be able to: develop an input model for MCNP; describe how cross section data impact Monte Carlo and deterministic codes; describe the importance of validation of computer codes and how it is accomplished; describe the methodology supporting Monte Carlo codes and deterministic codes; describe pitfalls of Monte Carlo calculations; and discuss the strengths and weaknesses of Monte Carlo and discrete ordinates codes. The diffusion theory model is not strictly valid for treating fissile systems in which neutron absorption, voids, and/or material boundaries are present; in the context of these limitations, identify a fissile system for which a diffusion theory solution would be adequate.

  7. An analytical framework to assist decision makers in the use of forest ecosystem model predictions

    USDA-ARS?s Scientific Manuscript database

    The predictions of most terrestrial ecosystem models originate from deterministic simulations. Relatively few uncertainty evaluation exercises in model outputs are performed by either model developers or users. This issue has important consequences for decision makers who rely on models to develop n...

  8. Trend analysis of Arctic sea ice extent

    NASA Astrophysics Data System (ADS)

    Silva, M. E.; Barbosa, S. M.; Antunes, Luís; Rocha, Conceição

    2009-04-01

    The extent of Arctic sea ice is a fundamental parameter of Arctic climate variability. In the context of climate change, the area covered by ice in the Arctic is a particularly useful indicator of recent changes in the Arctic environment. Climate models are in near universal agreement that Arctic sea ice extent will decline through the 21st century as a consequence of global warming, and many studies predict an ice-free Arctic as soon as 2012. Time series of satellite passive microwave observations allow assessment of the temporal changes in the extent of Arctic sea ice. Much of the analysis of the ice extent time series, as in most climate studies from observational data, has focused on the computation of deterministic linear trends by ordinary least squares. However, many different processes, including deterministic, unit root and long-range dependent processes, can engender trend-like features in a time series. Several parametric tests have been developed, mainly in econometrics, to discriminate between stationarity (no trend), deterministic trends and stochastic trends. Here, these tests are applied in the trend analysis of the sea ice extent time series available at the National Snow and Ice Data Center. The parametric stationarity tests, Augmented Dickey-Fuller (ADF), Phillips-Perron (PP) and KPSS, do not support an overall deterministic trend in the time series of Arctic sea ice extent. Therefore, alternative parametrizations such as long-range dependence should be considered for characterising long-term Arctic sea ice variability.
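
    As a concrete illustration of how such tests are applied, the sketch below runs ADF and KPSS tests on a hypothetical ice-extent series using Python's statsmodels; the file and column names are assumptions, and a Phillips-Perron implementation is available separately in the third-party arch package.

        # Sketch: discriminating a deterministic trend from a stochastic trend
        # in a sea-ice extent series (hypothetical file/column names).
        import pandas as pd
        from statsmodels.tsa.stattools import adfuller, kpss

        ice = pd.read_csv("arctic_ice_extent.csv", parse_dates=["date"],
                          index_col="date")["extent"]  # assumed layout

        # ADF test: H0 = unit root (stochastic trend); "ct" = constant + trend
        adf_stat, adf_p, *_ = adfuller(ice, regression="ct")

        # KPSS test: H0 = trend-stationarity -- note the reversed null
        kpss_stat, kpss_p, *_ = kpss(ice, regression="ct", nlags="auto")

        print(f"ADF  p = {adf_p:.3f} (small p -> reject unit root)")
        print(f"KPSS p = {kpss_p:.3f} (small p -> reject trend-stationarity)")

    Rejecting neither null, or rejecting both, is precisely the ambiguous outcome that motivates alternative parametrizations such as long-range dependence.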

  9. A comparison between Gauss-Newton and Markov chain Monte Carlo basedmethods for inverting spectral induced polarization data for Cole-Coleparameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong; Kemna, Andreas; Hubbard, Susan S.

    2008-05-15

    We develop a Bayesian model to invert spectral induced polarization (SIP) data for Cole-Cole parameters using Markov chain Monte Carlo (MCMC) sampling methods. We compare the performance of the MCMC-based stochastic method with an iterative Gauss-Newton-based deterministic method for Cole-Cole parameter estimation through inversion of synthetic and laboratory SIP data. The Gauss-Newton-based method can provide an optimal solution for given objective functions under constraints, but the obtained optimal solution generally depends on the choice of initial values, and the estimated uncertainty information is often inaccurate or insufficient. In contrast, the MCMC-based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which we can obtain better estimates and tighter uncertainty bounds of the parameters than with the deterministic method. Additionally, the results obtained with the MCMC method are independent of the choice of initial values. Because the MCMC-based method does not explicitly offer a single optimal solution for given objective functions, the deterministic and stochastic methods can complement each other. For example, the stochastic method can first be used to obtain the means of the unknown parameters by starting from an arbitrary set of initial values, and the deterministic method can then be initiated using the means as starting values to obtain the optimal estimates of the Cole-Cole parameters.
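
    To illustrate the stochastic side of such a comparison, here is a minimal random-walk Metropolis-Hastings sketch for a generic two-parameter inverse problem; the forward model, data, prior bounds and step size are placeholders, not the Cole-Cole implementation used in the paper.

        # Sketch: random-walk Metropolis-Hastings for a generic inverse problem.
        import numpy as np

        rng = np.random.default_rng(0)
        data = np.array([1.0, 0.8, 0.5])   # placeholder observations
        sigma = 0.05                       # assumed noise standard deviation

        def forward(theta):
            # Placeholder forward model; a real study would evaluate the
            # Cole-Cole response here.
            return theta[0] * np.exp(-theta[1] * np.arange(3))

        def log_post(theta):
            if np.any(theta <= 0) or np.any(theta > 10):  # crude uniform prior
                return -np.inf
            r = data - forward(theta)
            return -0.5 * np.sum(r**2) / sigma**2

        theta = np.array([1.0, 0.5])       # arbitrary starting values
        samples = []
        for _ in range(20000):
            prop = theta + 0.05 * rng.standard_normal(theta.size)
            if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
                theta = prop               # accept the proposal
            samples.append(theta)

        samples = np.array(samples[5000:])  # discard burn-in
        print("posterior means:", samples.mean(axis=0))
        print("posterior std  :", samples.std(axis=0))

    The retained samples approximate the marginal posterior distributions, from which means and uncertainty bounds can be read off directly, mirroring the complementary use of the two methods described above.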

  10. Determining the bias and variance of a deterministic finger-tracking algorithm.

    PubMed

    Morash, Valerie S; van der Velden, Bas H M

    2016-06-01

    Finger tracking has the potential to expand haptic research and applications, as eye tracking has done in vision research. In research applications, it is desirable to know the bias and variance associated with a finger-tracking method. However, assessing the bias and variance of a deterministic method is not straightforward. Multiple measurements of the same finger position data will not produce different results, implying zero variance. Here, we present a method of assessing deterministic finger-tracking variance and bias through comparison to a non-deterministic measure. A proof-of-concept is presented using a video-based finger-tracking algorithm developed for the specific purpose of tracking participant fingers during a psychological research study. The algorithm uses ridge detection on videos of the participant's hand, and estimates the location of the right index fingertip. The algorithm was evaluated using data from four participants, who explored tactile maps using only their right index finger and all right-hand fingers. The algorithm identified the index fingertip in 99.78% of one-finger video frames and 97.55% of five-finger video frames. Although the algorithm produced slightly biased and more dispersed estimates relative to a human coder, these differences (x = 0.08 cm, y = 0.04 cm) and standard deviations (σx = 0.16 cm, σy = 0.21 cm) were small compared to the size of a fingertip (1.5-2.0 cm). Some example finger-tracking results are provided where corrections are made using the bias and variance estimates.
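
    A minimal sketch of the underlying comparison, assuming paired fingertip coordinates from the algorithm and from a human coder, could look as follows; the arrays are illustrative stand-ins for real annotations.

        # Sketch: estimating bias and dispersion of a deterministic tracker
        # by comparison with a non-deterministic reference (human coder).
        import numpy as np

        algo  = np.array([[10.1, 5.2], [12.4, 6.0], [11.0, 5.5]])  # cm, (x, y)
        human = np.array([[10.0, 5.1], [12.3, 6.1], [10.9, 5.3]])  # cm, (x, y)

        diff = algo - human
        bias = diff.mean(axis=0)          # systematic offset per axis
        std  = diff.std(axis=0, ddof=1)   # dispersion per axis

        corrected = algo - bias           # bias-corrected estimates
        print(f"bias  x = {bias[0]:.2f} cm, y = {bias[1]:.2f} cm")
        print(f"sigma x = {std[0]:.2f} cm, y = {std[1]:.2f} cm")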

  11. Robust planning of dynamic wireless charging infrastructure for battery electric buses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zhaocai; Song, Ziqi

    Battery electric buses with zero tailpipe emissions have great potential in improving environmental sustainability and livability of urban areas. However, the problems of high cost and limited range associated with on-board batteries have substantially limited the popularity of battery electric buses. The technology of dynamic wireless power transfer (DWPT), which provides bus operators with the ability to charge buses while in motion, may be able to effectively alleviate the drawbacks of electric buses. In this paper, we address the problem of simultaneously selecting the optimal location of the DWPT facilities and designing the optimal battery sizes of electric buses for a DWPT electric bus system. The problem is first constructed as a deterministic model in which the uncertainty of energy consumption and travel time of electric buses is neglected. The methodology of robust optimization (RO) is then adopted to address the uncertainty of energy consumption and travel time. The affinely adjustable robust counterpart (AARC) of the deterministic model is developed, and its equivalent tractable mathematical programming is derived. Both the deterministic model and the robust model are demonstrated with a real-world bus system. The results of our study demonstrate that the proposed deterministic model can effectively determine the allocation of DWPT facilities and the battery sizes of electric buses for a DWPT electric bus system; and the robust model can further provide optimal designs that are robust against the uncertainty of energy consumption and travel time for electric buses.

  12. Robust planning of dynamic wireless charging infrastructure for battery electric buses

    DOE PAGES

    Liu, Zhaocai; Song, Ziqi

    2017-10-01

    Battery electric buses with zero tailpipe emissions have great potential in improving environmental sustainability and livability of urban areas. However, the problems of high cost and limited range associated with on-board batteries have substantially limited the popularity of battery electric buses. The technology of dynamic wireless power transfer (DWPT), which provides bus operators with the ability to charge buses while in motion, may be able to effectively alleviate the drawbacks of electric buses. In this paper, we address the problem of simultaneously selecting the optimal location of the DWPT facilities and designing the optimal battery sizes of electric buses for a DWPT electric bus system. The problem is first constructed as a deterministic model in which the uncertainty of energy consumption and travel time of electric buses is neglected. The methodology of robust optimization (RO) is then adopted to address the uncertainty of energy consumption and travel time. The affinely adjustable robust counterpart (AARC) of the deterministic model is developed, and its equivalent tractable mathematical programming is derived. Both the deterministic model and the robust model are demonstrated with a real-world bus system. The results of our study demonstrate that the proposed deterministic model can effectively determine the allocation of DWPT facilities and the battery sizes of electric buses for a DWPT electric bus system; and the robust model can further provide optimal designs that are robust against the uncertainty of energy consumption and travel time for electric buses.

  13. Excitation of Crossflow Instabilities in a Swept Wing Boundary Layer

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Choudhari, Meelan; Li, Fei; Streett, Craig L.; Chang, Chau-Lyan

    2010-01-01

    The problem of crossflow receptivity is considered in the context of a canonical 3D boundary layer (viz., the swept Hiemenz boundary layer) and a swept airfoil used recently in the SWIFT flight experiment performed at Texas A&M University. First, Hiemenz flow is used to analyze localized receptivity due to a spanwise periodic array of small-amplitude roughness elements, with the goal of quantifying the effects of array size and location. Excitation of crossflow modes via a nonlocalized but deterministic distribution of surface nonuniformity is also considered and contrasted with roughness-induced acoustic excitation of Tollmien-Schlichting waves. Finally, roughness measurements on the SWIFT model are used to model the effects of random, spatially distributed roughness of sufficiently small amplitude with the eventual goal of enabling predictions of initial crossflow disturbance amplitudes as functions of surface roughness parameters.

  14. Analyzing the future of army aeromedical evacuation units and equipment: a mixed methods, requirements-based approach.

    PubMed

    Bastian, Nathaniel D; Brown, David; Fulton, Lawrence V; Mitchell, Robert; Pollard, Wayne; Robinson, Mark; Wilson, Ronald

    2013-03-01

    We utilize a mixed methods approach to provide three new, separate analyses as part of the development of the next aeromedical evacuation (MEDEVAC) platform of the Future of Vertical Lift (FVL) program. The research questions follow: RQ1) What are the optimal capabilities of a FVL MEDEVAC platform given an Afghanistan-like scenario and parameters associated with the treatment/ground evacuation capabilities in that theater?; RQ2) What are the MEDEVAC trade-off considerations associated with different aircraft engines operating under variable conditions?; RQ3) How does the additional weight of weaponizing the current MEDEVAC fleet affect range, coverage radius, and response time? We address RQ1 using discrete-event simulation based partially on qualitative assessments from the field, while RQ2 and RQ3 are based on deterministic analysis. Our results confirm previous findings that travel speeds in excess of 250 knots and ranges in excess of 300 nautical miles are advisable for the FVL platform design, thereby reducing the medical footprint in stability operations. We recommend a specific course of action regarding a potential engine bridging strategy based on deterministic analysis of endurance and altitude, and we suggest that the weaponization of the FVL MEDEVAC aircraft will have an adverse effect on coverage capability. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.

  15. Integration of 3D photogrammetric outcrop models in the reservoir modelling workflow

    NASA Astrophysics Data System (ADS)

    Deschamps, Remy; Joseph, Philippe; Lerat, Olivier; Schmitz, Julien; Doligez, Brigitte; Jardin, Anne

    2014-05-01

    3D technologies are now widely used in geosciences to reconstruct outcrops in 3D. The technology used for 3D reconstruction is usually based on lidar, which provides very precise models. Such datasets offer the possibility to build well-constrained outcrop analogue models for reservoir study purposes. Photogrammetry is an alternative methodology whose principles are based on determining the geometric properties of an object from photographic pictures taken from different angles. Outcrop data acquisition is easy, and this methodology allows the construction of 3D outcrop models with many advantages such as: - light and fast acquisition, - moderate processing time (depending on the size of the area of interest), - integration of field data and 3D outcrops into the reservoir modelling tools. Whatever the method, the advantages of digital outcrop models are numerous, as already highlighted by Hodgetts (2013), McCaffrey et al. (2005) and Pringle et al. (2006): collection of data from otherwise inaccessible areas, access to different angles of view, increase of the possible measurements, attribute analysis, fast rate of data collection, and of course training and communication. This paper proposes a workflow where 3D geocellular models are built by integrating all sources of information from outcrops (surface picking, sedimentological sections, structural and sedimentary dips…). The 3D geomodels that are reconstructed can be used at the reservoir scale, in order to compare the outcrop information with subsurface models: the detailed facies models of the outcrops are transferred into petrophysical and acoustic models, which are used to test different scenarios of seismic and fluid flow modelling. The detailed 3D models are also used to test new techniques of static reservoir modelling, based either on geostatistical approaches or on deterministic (process-based) simulation techniques. A modelling workflow has been designed to model reservoir geometries and properties from 3D outcrop data, including geostatistical modelling and fluid flow simulations. The case study is a turbidite reservoir analogue in Northern Spain (Ainsa). In this case study, we can compare reservoir models that have been built with a conventional data set (1D pseudowells) and reservoir models built from 3D outcrop data directly used to constrain the reservoir architecture. This approach allows us to assess the benefits of integrating geotagged 3D outcrop data into reservoir models. References: Hodgetts, D. (2013): Laser scanning and digital outcrop geology in the petroleum industry: a review. Marine and Petroleum Geology, 46, 335-354. McCaffrey, K.J.W., Jones, R.R., Holdsworth, R.E., Wilson, R.W., Clegg, P., Imber, J., Holliman, N., Trinks, I. (2005): Unlocking the spatial dimension: digital technologies and the future of geoscience fieldwork. Journal of the Geological Society, 162, 927-938. Pringle, J.K., Howell, J.A., Hodgetts, D., Westerman, A.R., Hodgson, D.M. (2006): Virtual outcrop models of petroleum reservoir analogues: a review of the current state-of-the-art. First Break, 24, 33-42.

  16. On the Feedback Phenomenon of an Impinging Jet

    DTIC Science & Technology

    1979-09-01

    the double-structured nature of turbulent flows: time-dependent quasi-ordered large-scale structures, and fine-scale random structures. [The remainder of this excerpt is OCR-garbled; the recoverable fragments are nomenclature entries: downstream and upstream waves, d = nozzle diameter, f = frequency (Hz), normalized power spectrum of i(t), normalized cross-spectrum between i(t) and j(t).] (1975) suggested that these quasi-ordered structures are deterministic, in the sense that they have a characteristic shape, size and convection motion

  17. The past, present and future of cyber-physical systems: a focus on models.

    PubMed

    Lee, Edward A

    2015-02-26

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical.

  18. The Past, Present and Future of Cyber-Physical Systems: A Focus on Models

    PubMed Central

    Lee, Edward A.

    2015-01-01

    This paper is about better engineering of cyber-physical systems (CPSs) through better models. Deterministic models have historically proven extremely useful and arguably form the kingpin of the industrial revolution and the digital and information technology revolutions. Key deterministic models that have proven successful include differential equations, synchronous digital logic and single-threaded imperative programs. Cyber-physical systems, however, combine these models in such a way that determinism is not preserved. Two projects show that deterministic CPS models with faithful physical realizations are possible and practical. The first project is PRET, which shows that the timing precision of synchronous digital logic can be practically made available at the software level of abstraction. The second project is Ptides (programming temporally-integrated distributed embedded systems), which shows that deterministic models for distributed cyber-physical systems have practical faithful realizations. These projects are existence proofs that deterministic CPS models are possible and practical. PMID:25730486

  19. Dynamical Localization for Unitary Anderson Models

    NASA Astrophysics Data System (ADS)

    Hamza, Eman; Joye, Alain; Stolz, Günter

    2009-11-01

    This paper establishes dynamical localization properties of certain families of unitary random operators on the d-dimensional lattice in various regimes. These operators are generalizations of one-dimensional physical models of quantum transport and draw their name from the analogy with the discrete Anderson model of solid state physics. They consist of a product of a deterministic unitary operator and a random unitary operator. The deterministic operator has a band structure, is absolutely continuous and plays the role of the discrete Laplacian. The random operator is diagonal with elements given by i.i.d. random phases distributed according to some absolutely continuous measure and plays the role of the random potential. In dimension one, these operators belong to the family of CMV-matrices in the theory of orthogonal polynomials on the unit circle. We implement the method of Aizenman-Molchanov to prove exponential decay of the fractional moments of the Green function for the unitary Anderson model in the following three regimes: In any dimension, throughout the spectrum at large disorder and near the band edges at arbitrary disorder and, in dimension one, throughout the spectrum at arbitrary disorder. We also prove that exponential decay of fractional moments of the Green function implies dynamical localization, which in turn implies spectral localization. These results complete the analogy with the self-adjoint case where dynamical localization is known to be true in the same three regimes.
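
    To fix notation, the operator described above can be written schematically as follows; the symbols are a hedged paraphrase for illustration, not the authors' exact conventions:

        U_\omega = D_\omega S, \qquad (D_\omega)_{kk} = e^{i\theta_k^\omega}, \quad \theta_k^\omega \ \text{i.i.d.},

    where S is the deterministic band unitary (the analogue of the discrete Laplacian) and D_\omega carries the random phases. The fractional-moment bound of Aizenman-Molchanov type then takes the form

        \mathbb{E}\big[\,|\langle \delta_x, (U_\omega - z)^{-1} \delta_y \rangle|^{s}\,\big] \le C\, e^{-\mu |x-y|}, \qquad 0 < s < 1,

    and exponential decay of these fractional moments of the Green function is what implies dynamical localization.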

  20. Control of Finite-State, Finite Memory Stochastic Systems

    NASA Technical Reports Server (NTRS)

    Sandell, Nils R.

    1974-01-01

    A generalized problem of stochastic control is discussed in which multiple controllers with different data bases are present. The vehicle for the investigation is the finite state, finite memory (FSFM) stochastic control problem. Optimality conditions are obtained by deriving an equivalent deterministic optimal control problem. A FSFM minimum principle is obtained via the equivalent deterministic problem. The minimum principle suggests the development of a numerical optimization algorithm, the min-H algorithm. The relationship between the sufficiency of the minimum principle and the informational properties of the problem are investigated. A problem of hypothesis testing with 1-bit memory is investigated to illustrate the application of control theoretic techniques to information processing problems.

  1. Root Water Uptake and Tracer Transport in a Lupin Root System: Integration of Magnetic Resonance Images and the Numerical Model RSWMS

    NASA Astrophysics Data System (ADS)

    Pohlmeier, Andreas; Vanderborght, Jan; Haber-Pohlmeier, Sabina; Wienke, Sandra; Vereecken, Harry; Javaux, Mathieu

    2010-05-01

    Combination of experimental studies with detailed deterministic models helps to understand root water uptake processes. Recently, Javaux et al. developed the RSWMS model by integrating Doussan's root model into the well-established SWMS code [1], which simulates water and solute transport in unsaturated soil [2, 3]. In order to confront RSWMS modeling results with experimental data, we used the Magnetic Resonance Imaging (MRI) technique to monitor root water uptake in situ. Non-invasive 3-D imaging of root system architecture, water content distributions and tracer transport by MRI was performed and compared with numerical model calculations. Two MRI experiments were performed and modeled: i) water uptake during drought stress and ii) transport of a locally injected tracer (Gd-DTPA) to the soil-root system driven by root water uptake. Firstly, the high-resolution MRI image (0.23 x 0.23 x 0.5 mm) of the root system was transferred into a continuous root system skeleton by a combination of thresholding, region-growing filtering and final manual 3D redrawing of the root strands. Secondly, the two experimental scenarios were simulated by RSWMS with a resolution of about 3 mm. For scenario i) the numerical simulations could reproduce the general trend, that is, the strong water depletion from the top layer of the soil. However, the creation of depletion zones in the vicinity of the roots could not be simulated, due to a poor initial evaluation of the soil hydraulic properties, which instantaneously equilibrated larger differences in water content. The determination of unsaturated conductivities at low water content was needed to improve the model calculations. For scenario ii) simulations confirmed the solute transport towards the roots by advection. 1. Simunek, J., T. Vogel, and M.T. van Genuchten, The SWMS_2D Code for Simulating Water Flow and Solute Transport in Two-Dimensional Variably Saturated Media. Version 1.21. 1994, U.S. Salinity Laboratory, USDA, ARS: Riverside, California. 2. Javaux, M., et al., Use of a Three-Dimensional Detailed Modeling Approach for Predicting Root Water Uptake. Vadose Zone J., 2008. 7(3): p. 1079-1088. 3. Schröder, T., et al., Effect of Local Soil Hydraulic Conductivity Drop Using a Three Dimensional Root Water Uptake Model. Vadose Zone J., 2008. 7(3): p. 1089-1098.

  2. On the generation of tangential ground motion by underground explosions in jointed rocks

    NASA Astrophysics Data System (ADS)

    Vorobiev, Oleg; Ezzedine, Souheil; Antoun, Tarabay; Glenn, Lewis

    2015-03-01

    This paper describes computational studies of tangential ground motions generated by spherical explosions in a heavily jointed granite formation. Various factors affecting the shear wave generation are considered, including joint spacing, orientation and frictional properties. Simulations are performed both in 2-D for a single joint set to elucidate the basic response mechanisms, and in 3-D for multiple joint sets to represent in situ conditions in a realistic geological setting. The joints are modelled explicitly using both contact elements and weakness planes in the material. Simulations are performed both deterministically and stochastically to quantify the effects of geological uncertainties on near field ground motions. The mechanical properties of the rock and the joints as well as the joint spacing and orientation are taken from experimental test data and geophysical logs corresponding to the Climax Stock granitic outcrop, which is the geological setting of the source physics experiment (SPE). Agreement between simulation results and near field wave motion data from SPE enables newfound understanding of the origin and extent of non-spherical motions associated with underground explosions in fractured geological media.

  3. Chaos-order transition in foraging behavior of ants.

    PubMed

    Li, Lixiang; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian; Schellnhuber, Hans Joachim

    2014-06-10

    The study of the foraging behavior of group animals (especially ants) is of practical ecological importance, but it also contributes to the development of widely applicable optimization problem-solving techniques. Biologists have discovered that single ants exhibit low-dimensional deterministic-chaotic activities. However, the influences of the nest, ants' physical abilities, and ants' knowledge (or experience) on foraging behavior have received relatively little attention in studies of the collective behavior of ants. This paper provides new insights into basic mechanisms of effective foraging for social insects or group animals that have a home. We propose that the whole foraging process of ants is controlled by three successive strategies: hunting, homing, and path building. A mathematical model is developed to study this complex scheme. We show that the transition from chaotic to periodic regimes observed in our model results from an optimization scheme for group animals with a home. According to our investigation, the behavior of such insects is not represented by random but rather deterministic walks (as generated by deterministic dynamical systems, e.g., by maps) in a random environment: the animals use their intelligence and experience to guide them. The more knowledge an ant has, the higher its foraging efficiency is. When young insects join the collective to forage with old and middle-aged ants, it benefits the whole colony in the long run. The resulting strategy can even be optimal.

  4. Developing a stochastic conflict resolution model for urban runoff quality management: Application of info-gap and bargaining theories

    NASA Astrophysics Data System (ADS)

    Ghodsi, Seyed Hamed; Kerachian, Reza; Estalaki, Siamak Malakpour; Nikoo, Mohammad Reza; Zahmatkesh, Zahra

    2016-02-01

    In this paper, two multilateral, multi-issue, non-cooperative bargaining methodologies, one deterministic and one stochastic, are proposed for urban runoff quality management. In the proposed methodologies, a calibrated Storm Water Management Model (SWMM) is used to simulate stormwater runoff quantity and quality for different urban stormwater runoff management scenarios, which have been defined considering several Low Impact Development (LID) techniques. In the deterministic methodology, the best management scenario, representing the location and area of LID controls, is identified using the bargaining model. In the stochastic methodology, uncertainties of some key parameters of SWMM are analyzed using info-gap theory. For each water quality management scenario, robustness and opportuneness criteria are determined based on the utility functions of different stakeholders. Then, to find the best solution, the bargaining model is performed considering a combination of robustness and opportuneness criteria for each scenario based on the utility function of each stakeholder. The results of applying the proposed methodology in the Velenjak urban watershed located in the northeastern part of Tehran, the capital city of Iran, illustrate its practical utility for conflict resolution in urban water quantity and quality management. It is shown that the solution obtained using the deterministic model cannot outperform the result of the stochastic model considering the robustness and opportuneness criteria. Therefore, it can be concluded that the stochastic model, which incorporates the main uncertainties, could provide more reliable results.

  5. Chaos–order transition in foraging behavior of ants

    PubMed Central

    Li, Lixiang; Peng, Haipeng; Kurths, Jürgen; Yang, Yixian; Schellnhuber, Hans Joachim

    2014-01-01

    The study of the foraging behavior of group animals (especially ants) is of practical ecological importance, but it also contributes to the development of widely applicable optimization problem-solving techniques. Biologists have discovered that single ants exhibit low-dimensional deterministic-chaotic activities. However, the influences of the nest, ants’ physical abilities, and ants’ knowledge (or experience) on foraging behavior have received relatively little attention in studies of the collective behavior of ants. This paper provides new insights into basic mechanisms of effective foraging for social insects or group animals that have a home. We propose that the whole foraging process of ants is controlled by three successive strategies: hunting, homing, and path building. A mathematical model is developed to study this complex scheme. We show that the transition from chaotic to periodic regimes observed in our model results from an optimization scheme for group animals with a home. According to our investigation, the behavior of such insects is not represented by random but rather deterministic walks (as generated by deterministic dynamical systems, e.g., by maps) in a random environment: the animals use their intelligence and experience to guide them. The more knowledge an ant has, the higher its foraging efficiency is. When young insects join the collective to forage with old and middle-aged ants, it benefits the whole colony in the long run. The resulting strategy can even be optimal. PMID:24912159

  6. Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate

    NASA Astrophysics Data System (ADS)

    Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing

    2014-09-01

    We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail: the infective class persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infective class disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model so that the deterministic model is extended to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
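
    As a hedged illustration of the threshold behavior, the sketch below integrates a simplified single-group SIR-type analogue with newborn vaccination; the parameter values and model reduction are assumptions for illustration, not the multi-group MSIR formulation of the paper.

        # Sketch: single-group SIR-type model with a vaccination fraction p,
        # illustrating the R0 threshold (all parameter values assumed).
        from scipy.integrate import solve_ivp

        beta, gamma, mu, p = 0.4, 0.1, 0.01, 0.2  # contact, recovery, demography, vaccination

        def rhs(t, y):
            S, I, R = y
            dS = mu * (1 - p) - beta * S * I - mu * S  # fraction p of newborns vaccinated
            dI = beta * S * I - (gamma + mu) * I
            dR = mu * p + gamma * I - mu * R
            return [dS, dI, dR]

        R0 = beta * (1 - p) / (gamma + mu)  # reproduction number with vaccination
        sol = solve_ivp(rhs, (0, 400), [1 - p - 1e-3, 1e-3, p], max_step=1.0)
        print(f"R0 = {R0:.2f}; final infective fraction = {sol.y[1, -1]:.4f}")
        # R0 > 1: trajectories approach an endemic equilibrium;
        # R0 <= 1: the infective class dies out, matching the threshold result.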

  7. Validity of deterministic record linkage using multiple indirect personal identifiers: linking a large registry to claims data.

    PubMed

    Setoguchi, Soko; Zhu, Ying; Jalbert, Jessica J; Williams, Lauren A; Chen, Chih-Ying

    2014-05-01

    Linking patient registries with administrative databases can enhance the utility of the databases for epidemiological and comparative effectiveness research. However, registries often lack direct personal identifiers, and the validity of record linkage using multiple indirect personal identifiers is not well understood. Using a large contemporary national cardiovascular device registry and 100% Medicare inpatient data, we linked hospitalization-level records. The main outcomes were the validity measures of several deterministic linkage rules using multiple indirect personal identifiers compared with rules using both direct and indirect personal identifiers. Linkage rules using 2 or 3 indirect, patient-level identifiers (i.e., date of birth, sex, admission date) and hospital ID produced linkages with a sensitivity of 95% and a specificity of 98% compared with a gold standard linkage rule using a combination of both direct and indirect identifiers. Ours is the first large-scale study to validate the performance of deterministic linkage rules without direct personal identifiers. When linking hospitalization-level records in the absence of direct personal identifiers, provider information is necessary for successful linkage. © 2014 American Heart Association, Inc.
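
    A minimal sketch of such a deterministic linkage rule, assuming hypothetical column names for the indirect identifiers and a gold-standard link file, might look like this; a full evaluation would also measure specificity over the true non-links.

        # Sketch: deterministic linkage on indirect identifiers plus hospital ID,
        # evaluated against a gold-standard link (all column names assumed).
        import pandas as pd

        registry = pd.read_csv("registry.csv")  # assumed: dob, sex, admit_date, hosp_id, reg_id
        claims   = pd.read_csv("claims.csv")    # assumed: dob, sex, admit_date, hosp_id, claim_id

        keys = ["dob", "sex", "admit_date", "hosp_id"]
        linked = registry.merge(claims, on=keys, how="inner")

        # Gold standard: links established with direct identifiers (assumed available)
        gold = pd.read_csv("gold_links.csv")    # assumed: reg_id, claim_id
        pred = set(zip(linked.reg_id, linked.claim_id))
        true = set(zip(gold.reg_id, gold.claim_id))

        sensitivity = len(pred & true) / len(true)
        print(f"sensitivity = {sensitivity:.3f}")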

  8. Modelling the interaction between flooding events and economic growth

    NASA Astrophysics Data System (ADS)

    Grames, J.; Prskawetz, A.; Grass, D.; Blöschl, G.

    2015-06-01

    Socio-hydrology describes the interaction between the socio-economy and water. Recent models analyze the interplay of community risk-coping culture, flooding damage and economic growth (Di Baldassarre et al., 2013; Viglione et al., 2014). These models descriptively explain the feedbacks between socio-economic development and natural disasters such as floods. In contrast to these descriptive models, our approach develops an optimization model, where the intertemporal decision of an economic agent interacts with the hydrological system. In order to build this first economic growth model describing the interaction between the consumption and investment decisions of an economic agent and the occurrence of flooding events, we transform an existing descriptive stochastic model into an optimal deterministic model. The intermediate step is to formulate and simulate a descriptive deterministic model. We develop a periodic water function to approximate the former discrete stochastic time series of rainfall events. Due to the non-autonomous, exogenous periodic rainfall function, the long-term path of consumption and investment will be periodic.
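
    A hedged illustration of what such a periodic water function might look like (the symbols are assumptions, not the authors' exact formulation):

        W(t) = \bar{W}\left(1 + \alpha \sin\frac{2\pi t}{T}\right),

    where \bar{W} is the mean rainfall input, \alpha the relative amplitude and T the period; a smooth forcing of this kind replaces the discrete stochastic event series and makes the optimal control problem deterministic but non-autonomous.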

  9. Statistically Qualified Neuro-Analytic system and Method for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involve the development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.

  10. Scattering effects of machined optical surfaces

    NASA Astrophysics Data System (ADS)

    Thompson, Anita Kotha

    1998-09-01

    Optical fabrication is one of the most labor-intensive industries in existence. Lensmakers use pitch to affix glass blanks to metal chucks that hold the glass as they grind it with tools that have not changed much in fifty years. Recent demands placed on traditional optical fabrication processes in terms of surface accuracy, smoothness, and cost-effectiveness have resulted in the exploitation of precision machining technology to develop a new generation of computer numerically controlled (CNC) optical fabrication equipment. This new kind of precision machining process is called deterministic microgrinding. The most conspicuous feature of optical surfaces manufactured by precision machining processes (such as single-point diamond turning or deterministic microgrinding) is the presence of residual cutting tool marks. These residual tool marks exhibit a highly structured topography of periodic azimuthal or radial deterministic marks in addition to random microroughness. These distinct topographic features give rise to surface scattering effects that can significantly degrade optical performance. In this dissertation project we investigate the scattering behavior of machined optical surfaces and their imaging characteristics. In particular, we characterize the residual optical fabrication errors and relate the resulting scattering behavior to the tool and machine parameters in order to evaluate and improve the deterministic microgrinding process. Other desired information derived from the investigation of scattering behavior is the set of optical fabrication tolerances necessary to satisfy specific image quality requirements. Optical fabrication tolerances are a major cost driver for any precision optical manufacturing technology. The derivation and control of the optical fabrication tolerances necessary for different applications and operating wavelength regimes will play a unique and central role in establishing deterministic microgrinding as a preferred and cost-effective optical fabrication process. Other well-understood optical fabrication processes will also be reviewed, and a performance comparison with the conventional grinding and polishing technique will be made to determine any inherent advantages in the optical quality of surfaces produced by other techniques.

  11. The 2015 Gorkha (Nepal) earthquake sequence: I. Source modeling and deterministic 3D ground shaking

    NASA Astrophysics Data System (ADS)

    Wei, Shengji; Chen, Meng; Wang, Xin; Graves, Robert; Lindsey, Eric; Wang, Teng; Karakaş, Çağıl; Helmberger, Don

    2018-01-01

    To better quantify the relatively long-period (<0.3 Hz) shaking experienced during the 2015 Gorkha (Nepal) earthquake sequence, we study the finite rupture processes and the associated 3D ground motion of the Mw7.8 mainshock and the Mw7.2 aftershock. The 3D synthetics are then used to compute the broadband ground shaking in Kathmandu with a hybrid approach, summarized in a companion paper (Chen and Wei, 2017, submitted together). We determined the coseismic rupture process of the mainshock by joint inversion of InSAR/SAR, GPS (static and high-rate), strong motion and teleseismic waveforms. Our inversion for the mainshock indicates unilateral rupture towards the ESE, with an average rupture speed of 3.0 km/s and a total duration of 60 s. Additionally, we find that the beginning part of the rupture (5-18 s) has an about 40% longer rise time than the rest of the rupture, as well as a slower rupture velocity. Our model shows two strong asperities occurring 24 s and 36 s after the origin and located 30 km to the northwest and northeast of the Kathmandu valley, respectively. In contrast, the Mw7.2 aftershock is more compact both in time and space, as revealed by joint inversion of teleseismic body waves and InSAR data. The different rupture features of the mainshock and the aftershock could be related to differences in fault zone structure. The mainshock and aftershock ground motions in the Kathmandu valley, recorded by both strong motion and high-rate GPS stations, exhibited strong amplification around 0.2 Hz. A simplified 3D basin model, calibrated by an Mw5.2 aftershock, can match the observed waveforms reasonably well at 0.3 Hz and lower frequencies. The 3D simulations indicate that the basin structure trapped the wavefield and produced extensive ground vibration. Our study suggests that the combination of rupture characteristics and propagational complexity is required to understand the ground shaking produced by hazardous earthquakes such as the Gorkha event.

  12. Transferring arbitrary d-dimensional quantum states of a superconducting transmon qudit in circuit QED.

    PubMed

    Liu, Tong; Su, Qi-Ping; Yang, Jin-Hu; Zhang, Yu; Xiong, Shao-Jie; Liu, Jin-Ming; Yang, Chui-Ping

    2017-08-01

    A qudit (d-level quantum system) has a large Hilbert space and thus can be used to achieve many quantum information and communication tasks. Here, we propose a method to transfer arbitrary d-dimensional quantum states (known or unknown) between two superconducting transmon qudits coupled to a single cavity. The state transfer can be performed by employing resonant interactions only. In addition, quantum states can be deterministically transferred without measurement. Numerical simulations show that high-fidelity transfer of quantum states between two superconducting transmon qudits (d ≤ 5) is feasible with current circuit QED technology. This proposal is quite general and can be applied to accomplish the same task with natural or artificial atoms of a ladder-type level structure coupled to a cavity or resonator.

  13. Robust Planning for Effects-Based Operations

    DTIC Science & Technology

    2006-06-01

    [The record contains only OCR fragments of the report's table of contents; recoverable section titles: Robust Optimization Literature; Protecting Against ...; Deterministic EBO Model Formulation; Deterministic EBO Model Example and Performance; Greedy Algorithm; Conclusions on Robust EBO Model Performance; Greedy Algorithm versus EBO Models.]

  14. Deterministic binary vectors for efficient automated indexing of MEDLINE/PubMed abstracts.

    PubMed

    Wahle, Manuel; Widdows, Dominic; Herskovic, Jorge R; Bernstam, Elmer V; Cohen, Trevor

    2012-01-01

    The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI.
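
    A minimal sketch of the idea follows, with illustrative details not taken from the paper: term vectors are regenerated on demand from a hash of the term itself, so no store of random vectors is needed, and document vectors are binarized superpositions of term vectors.

        # Sketch: deterministic binary vectors for document indexing.
        # Dimensionality and hashing scheme are assumptions for illustration.
        import hashlib
        import numpy as np

        DIM = 4096  # assumed dimensionality

        def term_vector(term: str) -> np.ndarray:
            """Deterministic +/-1 vector seeded by a hash of the term,
            so identical terms always reproduce the same vector."""
            seed = int.from_bytes(hashlib.sha256(term.encode()).digest()[:8], "big")
            rng = np.random.default_rng(seed)
            return rng.choice([-1, 1], size=DIM)

        def doc_vector(text: str) -> np.ndarray:
            """Superpose term vectors, then binarize by sign."""
            acc = np.zeros(DIM)
            for tok in text.lower().split():
                acc += term_vector(tok)
            return np.sign(acc)

        a = doc_vector("deterministic indexing of medline abstracts")
        b = doc_vector("automated indexing of biomedical abstracts")
        print("similarity:", float(a @ b) / DIM)  # near 1 for similar documents

    Because term vectors never need to be stored, nodes in a distributed implementation can index documents independently, which is the scalability argument made above.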

  15. Deterministic Binary Vectors for Efficient Automated Indexing of MEDLINE/PubMed Abstracts

    PubMed Central

    Wahle, Manuel; Widdows, Dominic; Herskovic, Jorge R.; Bernstam, Elmer V.; Cohen, Trevor

    2012-01-01

    The need to maintain accessibility of the biomedical literature has led to development of methods to assist human indexers by recommending index terms for newly encountered articles. Given the rapid expansion of this literature, it is essential that these methods be scalable. Document vector representations are commonly used for automated indexing, and Random Indexing (RI) provides the means to generate them efficiently. However, RI is difficult to implement in real-world indexing systems, as (1) efficient nearest-neighbor search requires retaining all document vectors in RAM, and (2) it is necessary to maintain a store of randomly generated term vectors to index future documents. Motivated by these concerns, this paper documents the development and evaluation of a deterministic binary variant of RI. The increased capacity demonstrated by binary vectors has implications for information retrieval, and the elimination of the need to retain term vectors facilitates distributed implementations, enhancing the scalability of RI. PMID:23304369

  16. Automated variance reduction for MCNP using deterministic methods.

    PubMed

    Sweezy, J; Brown, F; Booth, T; Chiaramonte, J; Preeg, B

    2005-01-01

    In order to reduce the user's time and the computer time needed to solve deep penetration problems, an automated variance reduction capability has been developed for the MCNP Monte Carlo transport code. This new variance reduction capability developed for MCNP5 employs the PARTISN multigroup discrete ordinates code to generate mesh-based weight windows. The technique of using deterministic methods to generate importance maps has been widely used to increase the efficiency of deep penetration Monte Carlo calculations. The application of this method in MCNP uses the existing mesh-based weight window feature to translate the MCNP geometry into geometry suitable for PARTISN. The adjoint flux, which is calculated with PARTISN, is used to generate mesh-based weight windows for MCNP. Additionally, the MCNP source energy spectrum can be biased based on the adjoint energy spectrum at the source location. This method can also use angle-dependent weight windows.
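
    As a hedged sketch of the general recipe, weight-window centers are typically set inversely proportional to the adjoint flux; the normalization and window ratios below are assumptions for illustration, not PARTISN/MCNP specifics.

        # Sketch: mesh-based weight windows from a deterministic adjoint flux,
        # following the usual w ~ 1/phi_adjoint recipe (values assumed).
        import numpy as np

        # Placeholder adjoint flux per mesh cell, decreasing away from the detector
        phi_adj = np.array([1e-1, 3e-2, 1e-2, 3e-3, 1e-3])

        # Normalize so the source cell gets a window center of 1.0
        w_center = phi_adj[0] / phi_adj

        # Window bounds at fixed ratios around the center (assumed ratios)
        w_lower = w_center / 2.0
        w_upper = 5.0 * w_lower

        for i, (lo, hi) in enumerate(zip(w_lower, w_upper)):
            print(f"cell {i}: window = [{lo:.3g}, {hi:.3g}]")

    Particles moving toward regions of low adjoint importance are thus rouletted, while those approaching the detector are split, which is what drives the efficiency gain in deep penetration problems.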

  17. Probabilistic Scenario-based Seismic Risk Analysis for Critical Infrastructures Method and Application for a Nuclear Power Plant

    NASA Astrophysics Data System (ADS)

    Klügel, J.

    2006-12-01

    Deterministic scenario-based seismic hazard analysis has a long tradition in earthquake engineering for developing the design basis of critical infrastructures like dams, transport infrastructures, chemical plants and nuclear power plants. For many applications besides the design of infrastructures, it is of interest to assess the efficiency of the design measures taken. These applications require a method that allows a meaningful quantitative risk analysis. A new method for probabilistic scenario-based seismic risk analysis has been developed based on a probabilistic extension of proven deterministic methods like the MCE methodology. The input data required for the method are entirely based on the information which is necessary to perform any meaningful seismic hazard analysis. The method follows the probabilistic risk analysis approach common for applications in nuclear technology, developed originally by Kaplan & Garrick (1981). It is based on (1) a classification of earthquake events into different size classes (by magnitude), (2) the evaluation of the frequency of occurrence of events assigned to the different classes (frequency of initiating events), (3) the development of bounding critical scenarios assigned to each class based on the solution of an optimization problem, and (4) the evaluation of the conditional probability of exceedance of critical design parameters (vulnerability analysis). The advantages of the method in comparison with traditional PSHA consist in (1) its flexibility, allowing the use of different probabilistic models for earthquake occurrence as well as the incorporation of advanced physical models into the analysis, (2) the mathematically consistent treatment of uncertainties, and (3) the explicit consideration of the lifetime of the critical structure as a criterion to formulate different risk goals. The method was applied to the evaluation of the risk of production interruption losses of a nuclear power plant during its residual lifetime.

  18. 3-D Resistivity Structure of La Soufrière Volcano (Guadeloupe): New Insights into the Hydrothermal System and Associated Hazards

    NASA Astrophysics Data System (ADS)

    Rosas-Carbajal, M.; Nicollin, F.; Komorowski, J. C.; Gibert, D.; Deroussi, S.

    2015-12-01

    The 3-D electrical resistivity model of the dome of La Soufrière of Guadeloupe volcano was obtained by inverting more than 23000 electrical resistivity tomography (ERT) and mise-à-la-masse data points. Data acquisition involved 2-D and 3-D protocols, including several pairs of injection electrodes located on opposite sides of the volcano. For the mise-à-la-masse measurements, contact with a conductive mass was achieved by immersing one of the current electrodes in the Tarissan acid pond (~25 S/m) located at the volcano's summit. The 3-D inversion was performed using a deterministic smoothness-constrained least-squares algorithm with unstructured-grid modeling to accurately account for topography. Resistivity contrasts of more than 4 orders of magnitude are observed. A thick, high-angle conductive structure is located in the volcano's southern flank. It extends from the Tarissan Crater's acid pond on the summit to a hot spring region located close to the dome's southern base. This suggests that a large hydrothermal reservoir is located below the southern base of the dome and connected to the acid pond of the summit's main crater. Therefore, the steep southern flanks of the volcano could be resting on a low-strength, high-angle discontinuity saturated with circulating and possibly pressurized hydrothermal fluids. This could favor partial edifice collapse and laterally directed explosions, as shown recurrently in the volcano's history. The resistivity model also reveals smaller hydrothermal reservoirs in the south-east and northern flanks that are linked to the main historical eruptive fractures and to ancient collapse structures such as the Cratère Amic structure. We discuss the main resistivity structures in relation to the geometry of observed faults, historical eruptive fractures, the dynamics of the near-surface hydrothermal system manifestations on the dome and the potential implications for future hazard scenarios.

  19. Deterministic earthquake scenario for the Basel area: Simulating strong motions and site effects for Basel, Switzerland

    NASA Astrophysics Data System (ADS)

    Opršal, Ivo; Fäh, Donat; Mai, P. Martin; Giardini, Domenico

    2005-04-01

    The Basel earthquake of 18 October 1356 is considered one of the most serious earthquakes in Europe in recent centuries (I0 = IX, M ≈ 6.5-6.9). In this paper we present ground motion simulations for earthquake scenarios for the city of Basel and its vicinity. The numerical modeling combines finite-extent pseudodynamic and kinematic source models with the complex local structure in a two-step hybrid three-dimensional (3-D) finite difference (FD) method. The synthetic seismograms are accurate in the frequency band 0-2.2 Hz. The 3-D FD scheme is a linear explicit displacement formulation on an irregular rectangular grid that includes topography. The finite-extent rupture model is placed adjacent to the free surface because the fault has been identified through trenching on the Reinach fault. We test two source models reminiscent of past earthquakes (the 1999 Athens and the 1989 Loma Prieta earthquakes) to represent Mw ≈ 5.9 and Mw ≈ 6.5 events occurring approximately to the south of Basel. To compare the effect of the same wave field arriving at the site from other directions, we also considered the same sources placed east and west of the city. The local structural model is derived from the area's recently established P and S wave velocity structure and includes topography. The selected earthquake scenarios show strong ground motion amplification with respect to a bedrock site, in contrast to previous 2-D simulations for the same area. In particular, we found that the edge effects from the 3-D structural model depend strongly on the position of the earthquake source within the modeling domain.
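
    The production scheme is a 3-D displacement-formulation FD method on an irregular grid with topography; as a drastically reduced illustration of the explicit time-stepping idea only, here is a 1-D displacement FD scheme with free-surface boundaries (a sketch, not the authors' code):

```python
import numpy as np

# 1-D explicit displacement-formulation FD scheme for u_tt = c^2 u_xx,
# a much-simplified analogue of the 3-D scheme described above.
nx, nt = 400, 800
dx, c = 10.0, 2000.0          # grid spacing (m), wave speed (m/s)
dt = 0.9 * dx / c             # time step satisfying the CFL condition

u_prev = np.zeros(nx)         # displacement at time step n-1
u = np.zeros(nx)              # displacement at time step n
u[nx // 2] = 1.0              # initial point disturbance

r2 = (c * dt / dx) ** 2
for _ in range(nt):
    u_next = np.empty_like(u)
    # interior update: central differences in space and time
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    # free-surface (stress-free) boundaries via mirrored neighbours
    u_next[0] = 2 * u[0] - u_prev[0] + 2 * r2 * (u[1] - u[0])
    u_next[-1] = 2 * u[-1] - u_prev[-1] + 2 * r2 * (u[-2] - u[-1])
    u_prev, u = u, u_next

print("max |u| after", nt, "steps:", np.abs(u).max())
```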

  20. Developmental logics: Brain science, child welfare, and the ethics of engagement in Japan.

    PubMed

    Goldfarb, Kathryn E

    2015-10-01

    This article explores the unintended consequences of the ways scholars and activists take up the science of child development to critique the Japanese child welfare system. Since World War II, Japan has depended on a system of child welfare institutions (baby homes and children's homes) to care for state wards. Opponents of institutional care advocate instead for family foster care and adoption, and cite international research on the developmental harms of institutionalizing newborns and young children during the "critical period" of the first few years. The "critical period" is understood as the time during which the caregiving a child receives shapes neurological development and later capacity to build interpersonal relationships. These discourses appear to press compellingly for system reform, the proof resting on seemingly objective knowledge about child development. However, scientific evidence of harm is often mobilized in tandem with arguments that the welfare system is rooted in Japanese culture, suggesting durability and resistance to change. Further, reform efforts that use universalizing child science as "proof" of the need for change are prone to slip into deterministic language that pathologizes the experiences of people who grew up in the system. This article explores the reasons why deterministic models of child development, rather than more open-ended models like neuroplasticity, dominate activist rhetorics. It proposes a concept, "ethics of engagement," to advocate for attention to multiple scales and domains through which interpersonal ties are experienced and embodied over time. Finally, it suggests the possibility of child welfare reform movements that take seriously the need for caring and transformative relationships throughout life, beyond the first "critical years," that do not require deterministic logics of permanent delay or damage. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Development of Methodologies for IV and V of Neural Networks

    NASA Technical Reports Server (NTRS)

    Taylor, Brian; Darrah, Marjorie

    2003-01-01

    Non-deterministic systems often rely upon neural network (NN) technology to "learn" to manage flight systems under controlled conditions using carefully chosen training sets. How can these adaptive systems be certified to ensure that they will become increasingly efficient and behave appropriately in real-time situations? The bulk of Independent Verification and Validation (IV&V) research on non-deterministic software control systems such as Adaptive Flight Controllers (AFCs) addresses NNs in well-behaved and constrained environments such as simulations and strict process control. However, neither substantive research nor effective IV&V techniques have been found to address AFCs learning in real time and adapting to live flight conditions. Adaptive flight control systems offer good extensibility into commercial aviation as well as military aviation and transportation. Consequently, this area of IV&V represents an area of growing interest and urgency. ISR proposes to further the current body of knowledge to meet two objectives: research the current IV&V methods and assess where these methods may be applied toward a methodology for the V&V of neural networks; and identify effective methods for IV&V of NNs that learn in real time, including developing a prototype test bed for IV&V of AFCs. Currently, no practical method exists. ISR will meet these objectives through the tasks identified and described below. First, ISR will conduct a literature review of current IV&V technology. To do this, ISR will collect the existing body of research on IV&V of non-deterministic systems and neural networks. ISR will also develop the framework for disseminating this information through specialized training. This effort will focus on developing NASA's capability to conduct IV&V of neural network systems and on providing training to meet the increasing need for IV&V expertise in such systems.

  2. Atomic layer-by-layer thermoelectric conversion in topological insulator bismuth/antimony tellurides.

    PubMed

    Sung, Ji Ho; Heo, Hoseok; Hwang, Inchan; Lim, Myungsoo; Lee, Donghun; Kang, Kibum; Choi, Hee Cheul; Park, Jae-Hoon; Jhi, Seung-Hoon; Jo, Moon-Ho

    2014-07-09

    Material design for direct heat-to-electricity conversion with substantial efficiency essentially requires cooperative control of electrical and thermal transport. Bismuth telluride (Bi2Te3) and antimony telluride (Sb2Te3), displaying the highest thermoelectric power at room temperature, are also known as topological insulators (TIs) whose electronic structures are modified by electronic confinement and strong spin-orbit interaction in the few-monolayer thickness regime, thus possibly providing another degree of freedom for electron and phonon transport at surfaces. Here, we explore novel thermoelectric conversion at the atomic monolayer steps of few-layer topological insulating Bi2Te3 (n-type) and Sb2Te3 (p-type). Specifically, by scanning photoinduced thermoelectric current imaging at the monolayer steps, we show that efficient thermoelectric conversion is accomplished by optothermal motion of hot electrons (Bi2Te3) and holes (Sb2Te3) through 2D subbands and topologically protected surface states in a geometrically deterministic manner. Our discovery suggests that thermoelectric conversion can be achieved interiorly at the atomic steps of a homogeneous medium by directly exploiting the quantum nature of TIs, thus providing a new design rule for compact thermoelectric circuitry at the ultimate size limit.

  3. Development of a flood early warning system and communication with end-users: the Vipava/Vipacco case study in the KULTURisk FP7 project

    NASA Astrophysics Data System (ADS)

    Grossi, Giovanna; Caronna, Paolo; Ranzi, Roberto

    2014-05-01

    Within the framework of risk communication, the goal of an early warning system is to support the interaction between technicians and authorities (and subsequently the population) as a prevention measure. The methodology proposed in the KULTURisk FP7 project aimed to build a closer collaboration between these actors, with a view to promoting pro-active actions to mitigate the effects of flood hazards. The transnational (Slovenia/Italy) Soča/Isonzo case study focused on this concept of cooperation between stakeholders and hydrological forecasters. The DIMOSHONG_VIP hydrological model was calibrated for the Vipava/Vipacco River (650 km2), a tributary of the Soča/Isonzo River, on the basis of flood events that occurred between 1998 and 2012. The European Centre for Medium-Range Weather Forecasts (ECMWF) provided the past meteorological forecasts, both deterministic (1 forecast) and probabilistic (51 ensemble members). The resolution of the ECMWF grid is currently about 15 km (Deterministic-DET) and 30 km (Ensemble Prediction System-EPS). A verification was conducted to validate the flood-forecast outputs of the DIMOSHONG_VIP+ECMWF early warning system. Basic descriptive statistics, such as event probability, probability of a forecast occurrence and frequency bias, were determined. Performance measures were calculated, such as hit rate (probability of detection) and false alarm rate (probability of false detection). Relative Operating Characteristic (ROC) curves were generated for both deterministic and probabilistic forecasts. These analyses showed good performance of the early warning system, given the small size of the sample. Particular attention was paid to the design of flood-forecasting output charts, involving and surveying stakeholders (Alto Adriatico River Basin Authority), hydrology specialists in the field, and lay people. Graph types for both forecasted precipitation and discharge were defined. Three risk thresholds were identified ("attention", "pre-alarm" or "alert", and "alarm"), with an icon-style representation suitable for communication to civil protection stakeholders or the public. To display probabilistic information in a user-friendly way, we opted for visualizing the single deterministic forecasted hydrograph together with the 5%, 25%, 50%, 75% and 95% percentile bands of the Hydrological Ensemble Prediction System (HEPS). HEPS is generally used for 3-5 day hydrological forecasts, for which the error due to incorrect initial data is comparable to the error due to its lower resolution with respect to the deterministic forecast. In short-term forecasting (12-48 hours) the HEPS members naturally show a similar tendency; in this case, given its higher resolution, the deterministic forecast is expected to be more effective. Plotting the different forecasts in the same chart allows model outputs to be used from 4-5 days to a few hours before a potential flood event. This framework was built to support a stakeholder, such as a mayor or a civil protection authority, in flood control and management operations, and was designed to be included in a wider decision support system.
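
    A minimal sketch of the categorical verification measures named above (event probability, frequency bias, hit rate and false alarm rate), computed from a binary contingency table; the synthetic observation and forecast series are placeholders, not project data:

```python
import numpy as np

# Contingency-table verification of dichotomous flood forecasts.
# fcst/obs are boolean arrays: did discharge exceed the threshold?
def verify(fcst: np.ndarray, obs: np.ndarray) -> dict:
    hits = np.sum(fcst & obs)
    misses = np.sum(~fcst & obs)
    false_alarms = np.sum(fcst & ~obs)
    corr_neg = np.sum(~fcst & ~obs)
    return {
        "event probability": np.mean(obs),
        "frequency bias": (hits + false_alarms) / (hits + misses),
        "hit rate (POD)": hits / (hits + misses),
        "false alarm rate (POFD)": false_alarms / (false_alarms + corr_neg),
    }

# Illustrative synthetic data (placeholders, not the project's records).
rng = np.random.default_rng(0)
obs = rng.random(200) < 0.1
fcst = obs ^ (rng.random(200) < 0.05)   # mostly correct forecasts
print(verify(fcst, obs))
```

    Sweeping a probability threshold over the ensemble forecasts and plotting the resulting hit rates against false alarm rates yields the ROC curves mentioned above.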

  4. Economic analysis of interventions to improve village chicken production in Myanmar.

    PubMed

    Henning, J; Morton, J; Pym, R; Hla, T; Sunn, K; Meers, J

    2013-07-01

    A cost-benefit analysis using deterministic and stochastic modelling was conducted to identify the net benefits for households that adopt (1) vaccination of individual birds against Newcastle disease (ND) or (2) improved management of chick rearing by providing coops for the protection of chicks from predation and chick starter feed inside a creep feeder to support chicks' nutrition in village chicken flocks in Myanmar. Partial budgeting was used to assess the additional costs and benefits associated with each of the two interventions tested relative to neither strategy. In the deterministic model, over the first 3 years after the introduction of the interventions, the cumulative sum of the net differences from neither strategy was 13,189 Kyat for ND vaccination and 77,645 Kyat for improved chick management (effective exchange rate in 2005: 1000 Kyat = 1 US$). Both interventions were also profitable after discounting over a 10-year period; Net Present Values for ND vaccination and improved chick management were 30,791 and 167,825 Kyat, respectively. The Benefit-Cost Ratio for ND vaccination was very high (28.8). This was lower for improved chick management, due to greater costs of the intervention, but still favourable at 4.7. Using both interventions concurrently yielded a Net Present Value of 470,543 Kyat and a Benefit-Cost Ratio of 11.2 over the 10-year period in the deterministic model. Using the stochastic model, for the first 3 years following the introduction of the interventions, the mean cumulative sums of the net difference were similar to those values obtained from the deterministic model. Sensitivity analysis indicated that the cumulative net differences were strongly influenced by grower bird sale income, particularly under improved chick management. The effects of the strategies on odds of households selling and consuming birds after 7 months, and numbers of birds being sold or consumed after this period, also influenced profitability. Cost variations for equipment used under improved chick management were not markedly associated with profitability. Net Present Values and Benefit-Cost Ratios discounted over a 10-year period were also similar to the deterministic model when mean values obtained through stochastic modelling were used. In summary, the study showed that ND vaccination and improved chick management can improve the viability and profitability of village chicken production in Myanmar. Copyright © 2013 Elsevier B.V. All rights reserved.
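
    A minimal sketch of the partial-budgeting arithmetic named above (Net Present Value and Benefit-Cost Ratio over a 10-year discounted horizon); the cash flows and discount rate are invented placeholders, not study data:

```python
# Sketch of Net Present Value and Benefit-Cost Ratio of an
# intervention over a 10-year horizon, as used in the study above.
def npv(flows, rate):
    """Discount a list of yearly net flows (year 0 first)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def benefit_cost_ratio(benefits, costs, rate):
    return npv(benefits, rate) / npv(costs, rate)

years = 10
discount_rate = 0.10                     # assumed annual discount rate
benefits = [0] + [12_000] * years        # extra sale/consumption income, Kyat
costs = [5_000] + [1_500] * years        # equipment, vaccine, labour, Kyat

net = [b - c for b, c in zip(benefits, costs)]
print(f"NPV: {npv(net, discount_rate):,.0f} Kyat")
print(f"BCR: {benefit_cost_ratio(benefits, costs, discount_rate):.2f}")
```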

  5. Optimization of a growth process for as-grown 2D materials-based devices

    NASA Astrophysics Data System (ADS)

    Lindquist, Miles; Khadka, Sudiksha; Aleithan, Shrouq; Blumer, Ari; Wickramasinghe, Thushan; Thorat, Ruhi; Kordesch, Martin; Stinaff, Eric

    We will present the effects of varying key parameters of a deterministic growth method for producing self-contacted 2D transition metal dichalcogenides. Chemical vapor deposition is used to grow a film of 2D material nucleated around, and seeded from, metallic features prepared by photolithography and sputtering on a Si/SiO2 substrate prior to growth. We will focus on a particular method of growing variable MoS2-based device structures. The goal of this work is to arrive at a robust platform for growing a variety of device structures by systematically altering parameters such as the amount of reactants used, the heating of the substrate and oxide powder, and the flow rate of the argon gas used. These results will help advance a comprehensive process for the scalable production of as-grown, complex, 2D materials-based device architectures.

  6. Probabilistic Modeling of the Renal Stone Formation Module

    NASA Technical Reports Server (NTRS)

    Best, Lauren M.; Myers, Jerry G.; Goodenow, Debra A.; McRae, Michael P.; Jackson, Travis C.

    2013-01-01

    The Integrated Medical Model (IMM) is a probabilistic tool used in mission planning decision making and medical systems risk assessments. The IMM project maintains a database of over 80 medical conditions that could occur during a spaceflight, documenting an incidence rate and end case scenarios for each. In some cases, where observational data are insufficient to adequately define the in-flight medical risk, the IMM utilizes external probabilistic modules to model and estimate the event likelihoods. One such medical event of interest is an unpassed renal stone. Because of a high-salt diet and high blood calcium concentrations (due to bone depletion caused by unloading in the microgravity environment), astronauts are at a considerably elevated risk of developing renal calculi (nephrolithiasis) while in space. The lack of observed incidences of nephrolithiasis has led HRP to initiate the development of the Renal Stone Formation Module (RSFM) to create a probabilistic simulator capable of estimating the likelihood of symptomatic renal stone presentation in astronauts on exploration missions. The model consists of two major parts. The first is the probabilistic component, which utilizes probability distributions to assess the range of urine electrolyte parameters and a multivariate regression to transform estimated crystal density and size distributions into the likelihood of the presentation of nephrolithiasis symptoms. The second is a deterministic physical and chemical model of renal stone growth in the kidney developed by Kassemi et al. The probabilistic component of the renal stone model couples the input probability distributions describing the urine chemistry, astronaut physiology, and system parameters with the inputs and outputs of the deterministic stone growth model. These two parts of the model are necessary to capture the uncertainty in the likelihood estimate. The model will be driven by Monte Carlo simulations, repeatedly sampling the probability distributions of the electrolyte concentrations and system parameters that are inputs to the deterministic model. The total urine chemistry concentrations are used to determine the urine chemistry activity using the Joint Expert Speciation System (JESS), a biochemistry model. Information from JESS is then fed into the deterministic growth model. Outputs from JESS and the deterministic model are passed back to the probabilistic model, where a multivariate regression is used to assess the likelihood of a stone forming and the likelihood of a stone requiring clinical intervention. The parameters used to quantify these risks include: relative supersaturation (RS) of calcium oxalate, citrate/calcium ratio, crystal number density, total urine volume, pH, magnesium excretion, maximum stone width, and ureteral location. Methods and Validation: The RSFM is designed to perform a Monte Carlo simulation to generate probability distributions of clinically significant renal stones, as well as to provide an associated uncertainty in the estimate. Initially, early versions will be used to test integration of the components and assess component validation and verification (V&V), with later versions used to address questions regarding design reference mission scenarios. Once integrated with the deterministic component, the credibility assessment of the integrated model will follow NASA STD 7009 requirements.
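
    A schematic sketch of the probabilistic-deterministic coupling described above: sample the input distributions, push each draw through a deterministic growth model, and map the outputs to a clinical likelihood via a regression. Every function, coefficient and distribution here is an invented placeholder; the actual module uses JESS speciation chemistry, the Kassemi growth model and a fitted multivariate regression:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the deterministic stone-growth model:
# maps sampled urine chemistry to a maximum stone width (mm).
def stone_growth_model(calcium, oxalate, urine_volume):
    supersaturation = (calcium * oxalate) / urine_volume
    return 2.0 * np.sqrt(supersaturation)      # placeholder physics

# Hypothetical logistic regression mapping model outputs to the
# probability of a clinically significant stone (invented coefficients).
def p_clinical(width_mm, ph):
    z = -6.0 + 1.2 * width_mm + 0.5 * (6.0 - ph)
    return 1.0 / (1.0 + np.exp(-z))

n = 100_000
calcium = rng.lognormal(mean=1.0, sigma=0.3, size=n)        # mmol/day
oxalate = rng.lognormal(mean=-1.0, sigma=0.4, size=n)       # mmol/day
volume = rng.normal(loc=1.5, scale=0.4, size=n).clip(0.5)   # L/day
ph = rng.normal(loc=6.0, scale=0.4, size=n)

width = stone_growth_model(calcium, oxalate, volume)
p = p_clinical(width, ph)
half_ci = 1.96 * p.std() / np.sqrt(n)
print(f"mean P(clinical stone): {p.mean():.4f} "
      f"(95% CI of mean: {p.mean() - half_ci:.4f}-{p.mean() + half_ci:.4f})")
```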

  7. A non-statistical regularization approach and a tensor product decomposition method applied to complex flow data

    NASA Astrophysics Data System (ADS)

    von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin

    2016-04-01

    Handling high-dimensional data sets, such as those occurring in turbulent flows or in certain types of multiscale behaviour in the geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods are currently emerging as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis of the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References: [1] I. Horenko. On identification of nonstationary factor models and its application to atmospherical data analysis. J. Atm. Sci., 67:1559-1574, 2010. [2] P. Metzner, L. Putzig and I. Horenko. Analysis of persistent non-stationary time series and applications. CAMCoS, 7:175-229, 2012. [3] M. Uhlmann. Generation of a temporally well-resolved sequence of snapshots of the flow-field in turbulent plane channel flow. URL: http://www-turbul.ifh.unikarlsruhe.de/uhlmann/reports/produce.pdf, 2000. [4] Th. von Larcher, A. Beck, R. Klein, I. Horenko, P. Metzner, M. Waidmann, D. Igdalov, G. Gassner and C.-D. Munz. Towards a Framework for the Stochastic Modelling of Subgrid Scale Fluxes for Large Eddy Simulation. Meteorol. Z., 24:313-342, 2015.
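
    For orientation, a minimal TT-SVD sketch in its simplest textbook form (sequential truncated SVDs); the tolerance, shapes and test tensor are illustrative, and this is not the project's code:

```python
import numpy as np

# Minimal TT-SVD: decompose a d-way array into tensor-train cores by
# sequential truncated SVDs (the simplest form of the algorithm).
def tt_svd(tensor, rel_tol=1e-8):
    shape = tensor.shape
    d = len(shape)
    cores, rank = [], 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(s > rel_tol * s[0])))  # truncate rank
        cores.append(u[:, :keep].reshape(rank, shape[k], keep))
        rank = keep
        mat = (s[:keep, None] * vt[:keep]).reshape(rank * shape[k + 1], -1)
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

# Verify on a small rank-1 test tensor.
x = np.einsum('i,j,k->ijk', np.arange(4.), np.arange(5.), np.arange(6.))
cores = tt_svd(x)
approx = cores[0]
for core in cores[1:]:
    approx = np.tensordot(approx, core, axes=([-1], [0]))
approx = approx.reshape(x.shape)
print("TT ranks:", [c.shape[0] for c in cores[1:]],
      "max error:", np.abs(approx - x).max())
```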

  8. Deterministic and stochastic CTMC models from Zika disease transmission

    NASA Astrophysics Data System (ADS)

    Zevika, Mona; Soewono, Edy

    2018-03-01

    Zika infection is one of the most important mosquito-borne diseases in the world. Zika virus (ZIKV) is transmitted by many Aedes-type mosquitoes, including Aedes aegypti. Pregnant women with the Zika virus are at risk of having a fetus or infant with a congenital defect, such as microcephaly. Here, we formulate a Zika disease transmission model using two approaches: a deterministic model and a continuous-time Markov chain (CTMC) stochastic model. The basic reproduction ratio is constructed from the deterministic model. Meanwhile, the CTMC stochastic model yields an estimate of the probability of extinction and of outbreaks of Zika disease. Dynamical simulations and an analysis of the disease transmission are shown for both the deterministic and stochastic models.
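
    A minimal Gillespie-style sketch of a CTMC host-vector model in the spirit described above, estimating the extinction probability by repeated stochastic simulation; the compartments, rates and parameters are invented placeholders, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gillespie simulation of a toy host-vector model (SIR humans,
# SI mosquitoes). Parameters are illustrative only.
def gillespie(Sh=990, Ih=10, Sv=2000, Iv=0, bh=0.3, bv=0.3,
              gamma=0.1, mu=0.05, t_end=200.0):
    Nh = Sh + Ih                 # human population size (constant)
    t, extinct = 0.0, False
    while t < t_end:
        rates = np.array([
            bh * Sh * Iv / Nh,   # infected mosquito bites a human
            bv * Sv * Ih / Nh,   # infected human infects a mosquito
            gamma * Ih,          # human recovery
            mu * Iv,             # infected mosquito replaced by susceptible
        ])
        total = rates.sum()
        if total == 0:           # infection died out in both populations
            extinct = True
            break
        t += rng.exponential(1.0 / total)
        event = rng.choice(4, p=rates / total)
        if event == 0:   Sh -= 1; Ih += 1
        elif event == 1: Sv -= 1; Iv += 1
        elif event == 2: Ih -= 1
        else:            Iv -= 1; Sv += 1
    return extinct

runs = 200
p_ext = np.mean([gillespie() for _ in range(runs)])
print(f"estimated extinction probability: {p_ext:.2f}")
```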

  9. Distinguishing between stochasticity and determinism: Examples from cell cycle duration variability.

    PubMed

    Pearl Mizrahi, Sivan; Sandler, Oded; Lande-Diner, Laura; Balaban, Nathalie Q; Simon, Itamar

    2016-01-01

    We describe a recent approach for distinguishing between stochastic and deterministic sources of variability, focusing on the mammalian cell cycle. Variability between cells is often attributed to stochastic noise, although it may be generated by deterministic components. Interestingly, lineage information can be used to distinguish between the two. Analysis of correlations in cell cycle duration within lineages of mammalian cells revealed its deterministic nature. Here, we discuss the sources of such variability and the possibility that the underlying deterministic process is due to the circadian clock. Finally, we discuss the "kicked cell cycle" model and its implications for the study of the cell cycle in healthy and cancerous tissues. © 2015 WILEY Periodicals, Inc.

  10. Parameter estimation of a mixed hydrological model applied to the Bolivian highlands region

    NASA Astrophysics Data System (ADS)

    Gárfias, Jaime; Verrette, Jean-Louis; Antigüedad, Iñaki; André, Cécile

    1996-03-01

    This paper discusses the development and application of a technique that permits the analysis and improvement of hydrological models for the management of water resources in complex systems. Considering that such models are intended for practical application, the model was applied to the conditions of the Bolivian highlands. The model consists of a deterministic part (the HEC-1 model) linked to a stochastic component. The experience acquired indicated the possibility of adapting a more general procedure to compensate for the lack of rigour in the homoscedasticity and independence hypotheses for the residuals. Use of this concept improved the estimation accuracy of the parameters and provided independent residuals with constant variance. A Box-Cox transformation was used to stabilize the error variance, and an autoregressive model was used to remove autocorrelation in the residuals.
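
    A minimal sketch of the residual treatment described above: a Box-Cox transformation to stabilise the error variance, then an AR(1) fit to remove residual autocorrelation; the data and the Box-Cox exponent are synthetic placeholders:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform; requires positive y."""
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

rng = np.random.default_rng(7)
obs = rng.gamma(shape=2.0, scale=5.0, size=500) + 0.1   # e.g. discharges
sim = obs * rng.lognormal(0.0, 0.2, size=500)           # model output

lam = 0.3                                   # assumed Box-Cox exponent
resid = box_cox(obs, lam) - box_cox(sim, lam)

# Fit AR(1): e_t = phi * e_{t-1} + a_t, via least squares.
phi = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])
innov = resid[1:] - phi * resid[:-1]        # whitened innovations

def lag1(x):
    return np.corrcoef(x[1:], x[:-1])[0, 1]

print(f"phi = {phi:.3f}")
print(f"lag-1 autocorrelation: raw {lag1(resid):.3f} "
      f"-> whitened {lag1(innov):.3f}")
```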

  11. Deterministic mechanisms define the long-term anaerobic digestion microbiome and its functionality regardless of the initial microbial community.

    PubMed

    Peces, M; Astals, S; Jensen, P D; Clarke, W P

    2018-05-17

    The impact of the starting inoculum on long-term anaerobic digestion performance, process functionality and microbial community composition remains unclear. To understand this impact, active microbial communities from four different full-scale anaerobic digesters were each used to inoculate four continuous lab-scale anaerobic digesters, which were operated identically for 295 days. Digesters were operated at a 15-day solids retention time, an organic loading rate of 1 g COD per litre of reactor per day (75:25 cellulose:casein) and 37 °C. Results showed that long-term process performance, metabolic rates (hydrolytic, acetogenic and methanogenic) and microbial community are independent of the inoculum source. Digester process performance converged after 80 days, while metabolic rates and microbial communities converged after 120-145 days. The convergence of the different microbial communities towards a core community proves that deterministic factors (process operational conditions) were a stronger driver than the initial microbial community composition. Indeed, the core community represented 72% of the relative abundance across the four digesters. Moreover, a number of positive correlations were observed between higher metabolic rates and the relative abundance of specific microbial groups. These correlations showed that both substrate consumers and suppliers trigger higher metabolic rates, expanding our knowledge of the nexus between microorganisms and functionality. Overall, these results support the view that deterministic factors control microbial communities in bioreactors independently of the inoculum source. Hence, it seems plausible that a desired microbial composition and functionality can be achieved by tuning process operational conditions. Copyright © 2018. Published by Elsevier Ltd.

  12. Stochastic Parametrisations and Regime Behaviour of Atmospheric Models

    NASA Astrophysics Data System (ADS)

    Arnold, Hannah; Moroz, Irene; Palmer, Tim

    2013-04-01

    The presence of regimes is a characteristic of non-linear, chaotic systems (Lorenz, 2006). In the atmosphere, regimes emerge as familiar circulation patterns such as the El Niño-Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO) and Scandinavian blocking events. In recent years there has been much interest in the problem of identifying and studying atmospheric regimes (Solomon et al., 2007). In particular, how do these regimes respond to an external forcing such as anthropogenic greenhouse gas emissions? The importance of regimes in observed trends over the past 50-100 years indicates that, in order to predict anthropogenic climate change, our climate models must be able to accurately represent natural circulation regimes, their statistics and their variability. It is well established that representing model uncertainty as well as initial condition uncertainty is important for reliable weather forecasts (Palmer, 2001). In particular, stochastic parametrisation schemes have been shown to improve the skill of weather forecast models (e.g., Berner et al., 2009; Frenkel et al., 2012; Palmer et al., 2009). It is possible that including stochastic physics as a representation of model uncertainty could also be beneficial in climate modelling, enabling the simulator to explore larger regions of the climate attractor, including other flow regimes. An alternative representation of model uncertainty is a perturbed parameter scheme, whereby physical parameters in subgrid parametrisation schemes are perturbed about their optimal values. Perturbing parameters gives greater control over the ensemble than multi-model or multi-parametrisation ensembles, and has been used as a representation of model uncertainty in climate prediction (Stainforth et al., 2005; Rougier et al., 2009). We investigate the effect of including representations of model uncertainty on the regime behaviour of a simulator. A simple chaotic model of the atmosphere, the Lorenz '96 system, is used to study the predictability of regime changes (Lorenz 1996, 2006). Three types of models are considered: a deterministic parametrisation scheme, stochastic parametrisation schemes with additive or multiplicative noise, and a perturbed parameter ensemble. Each forecasting scheme was tested on its ability to reproduce the attractor of the full system, defined in a reduced space based on EOF decomposition. None of the forecast models accurately captures the less common regime, though a significant improvement over the deterministic parametrisation is observed when a temporally correlated stochastic parametrisation is used. The attractor for the perturbed parameter ensemble improves on those forecast by the deterministic or white additive schemes, showing a distinct peak corresponding to the less common regime. However, the 40 constituent members of the perturbed parameter ensemble each differ greatly from the true attractor, with many showing only one dominant regime with very rare transitions. These results indicate that perturbed parameter ensembles must be analysed carefully, as individual members may have very different characteristics from the ensemble mean and from the true system being modelled. On the other hand, the stochastic parametrisation schemes tested performed well, improving the simulated climate and motivating the development of a stochastic earth-system simulator for use in climate prediction.
References: J. Berner, G. J. Shutts, M. Leutbecher, and T. N. Palmer. A spectral stochastic kinetic energy backscatter scheme and its impact on flow dependent predictability in the ECMWF ensemble prediction system. J. Atmos. Sci., 66(3):603-626, 2009. Y. Frenkel, A. J. Majda, and B. Khouider. Using the stochastic multicloud model to improve tropical convective parametrisation: A paradigm example. J. Atmos. Sci., 69(3):1080-1105, 2012. E. N. Lorenz. Predictability: a problem partly solved. In Proceedings, Seminar on Predictability, 4-8 September 1995, volume 1, pages 1-18, Shinfield Park, Reading, 1996. ECMWF. E. N. Lorenz. Regimes in simple systems. J. Atmos. Sci., 63(8):2056-2073, 2006. T. N. Palmer. A nonlinear dynamical perspective on model error: A proposal for non-local stochastic-dynamic parametrisation in weather and climate prediction models. Q. J. Roy. Meteor. Soc., 127(572):279-304, 2001. T. N. Palmer, R. Buizza, F. Doblas-Reyes, T. Jung, M. Leutbecher, G. J. Shutts, M. Steinheimer, and A. Weisheimer. Stochastic parametrization and model uncertainty. Technical Report 598, European Centre for Medium-Range Weather Forecasts, 2009. J. Rougier, D. M. H. Sexton, J. M. Murphy, and D. Stainforth. Analyzing the climate sensitivity of the HadSM3 climate model using ensembles from different but related experiments. J. Climate, 22:3540-3557, 2009. S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K. B. Averyt, M. Tignor, and H. L. Miller. Climate models and their evaluation. In Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge, United Kingdom and New York, NY, USA, 2007. Cambridge University Press. D. A. Stainforth, T. Aina, C. Christensen, M. Collins, N. Faull, D. J. Frame, J. A. Kettleborough, S. Knight, A. Martin, J. M. Murphy, C. Piani, D. Sexton, L. A. Smith, R. A. Spicer, A. J. Thorpe, and M. R. Allen. Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature, 433(7024):403-406, 2005.
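
    As a much-reduced illustration of the kind of experiment described above, the sketch below runs a single-scale Lorenz '96 model with a toy deterministic closure and an AR(1) (temporally correlated) additive stochastic variant; the closure, parameters and Euler integration are simplifications for illustration, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(3)

# Single-scale Lorenz '96 model with a parametrised term standing in
# for unresolved subgrid tendencies. The deterministic closure is a
# toy linear function of the state; the stochastic variant adds AR(1)
# ("red") noise, the kind of temporally correlated perturbation found
# above to improve the simulated regime statistics.
K, F, dt = 40, 8.0, 0.005

def l96_tendency(x, subgrid):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F - subgrid

def run(n_steps, stochastic=False, phi=0.99, sigma=0.1):
    x = F + rng.standard_normal(K)
    e = np.zeros(K)                          # AR(1) noise state
    traj = np.empty((n_steps, K))
    for n in range(n_steps):
        det = 0.1 * x                        # toy deterministic closure
        if stochastic:
            e = phi * e + sigma * np.sqrt(1 - phi**2) * rng.standard_normal(K)
        x = x + dt * l96_tendency(x, det + e)   # Euler step for brevity
        traj[n] = x
    return traj

for label, flag in [("deterministic", False), ("AR(1) stochastic", True)]:
    t = run(20_000, stochastic=flag)
    print(f"{label:>18}: mean {t.mean():+.2f}, std {t.std():.2f}")
```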

  13. A cost analysis of first-line chemotherapy for low-risk gestational trophoblastic neoplasia.

    PubMed

    Shah, Neel T; Barroilhet, Lisa; Berkowitz, Ross S; Goldstein, Donald P; Horowitz, Neil

    2012-01-01

    To determine the optimal approach to first-line treatment for low-risk gestational trophoblastic neoplasia (GTN) using a cost analysis of 3 commonly used regimens. A decision tree of the 3 most commonly used first-line low-risk GTN treatment strategies was created, accounting for toxicities, response rates and the need for second- or third-line therapy. These strategies included 8-day methotrexate (MTX)/folinic acid, weekly MTX, and pulsed actinomycin-D (act-D). Response rates, the average number of cycles needed for remission, and toxicities were determined by review of the literature. Costs of each strategy were examined from a societal perspective, including the direct total treatment costs as well as the indirect lost labor production costs from work absences. Sensitivity analysis on these costs was performed using both deterministic and probabilistic cost-minimization models with the aid of decision tree software (TreeAge Pro 2011, TreeAge Inc., Williamstown, Massachusetts). We found that 8-day MTX/folinic acid is the least expensive to society, followed by pulsed act-D ($4,867 vs. $6,111 average societal cost per cure, respectively), with act-D becoming more favourable only if its per-cycle cost falls below $231 or its first-line response rate exceeds 99%. Weekly MTX is the most expensive first-line treatment strategy to society ($9,089 average cost per cure), despite being the least expensive to administer per cycle, owing to its lower first-line response rate. The absolute societal cost of each strategy is driven by the probability of needing expensive third-line multiagent chemotherapy; however, relative cost differences are robust to sensitivity analysis over the reported range of cycle number and response rate for all therapies. Based on similar efficacy and lower societal cost, we recommend 8-day MTX/folinic acid for first-line treatment of low-risk GTN.
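
    A minimal sketch of the expected-cost decision-tree logic described above, in which each regimen either cures at first line or escalates to costlier later-line therapy; all probabilities and costs are invented placeholders, not the published figures:

```python
# Expected societal cost per patient: first-line cost is always paid;
# escalation to second and third line occurs with the complementary
# probabilities. Numbers below are illustrative placeholders.
def expected_cost(first_cost, p_first, second_cost, p_second, third_cost):
    p_escalate = 1.0 - p_first
    p_third = p_escalate * (1.0 - p_second)
    return first_cost + p_escalate * second_cost + p_third * third_cost

regimens = {
    # name: (first-line total cost, first-line cure probability)
    "8-day MTX/folinic acid": (1500.0, 0.75),
    "weekly MTX":             (800.0, 0.50),
    "pulsed act-D":           (2500.0, 0.80),
}
SECOND_COST, P_SECOND, THIRD_COST = 3000.0, 0.85, 20000.0

for name, (c1, p1) in regimens.items():
    ec = expected_cost(c1, p1, SECOND_COST, P_SECOND, THIRD_COST)
    print(f"{name:>24}: expected societal cost ${ec:,.0f}")
```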

  14. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis, reducing the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and the source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. The simulated spatial coefficients of variation for water table temporal position are small in most of the basin, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
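
    A minimal sketch of a two-point (Rosenblueth-style point-estimate) probability method cascaded with a deterministic model, for three lumped uncertain inputs; the response function and moments are invented placeholders:

```python
import itertools
import numpy as np

# Evaluate a deterministic model at mean +/- one standard deviation of
# each uncertain input and combine the 2^n equally weighted results.
def water_table(K, S, Q):
    """Placeholder deterministic response (drawdown-like quantity)."""
    return Q / (K * S)

uncertain = {                 # mean and std of the three lumped inputs
    "K": (5.0, 1.0),          # hydraulic conductivity
    "S": (0.15, 0.03),        # specific yield / storage coefficient
    "Q": (2.0, 0.5),          # source-sink term
}

names = list(uncertain)
outputs = []
for signs in itertools.product((-1, +1), repeat=len(names)):
    point = {n: uncertain[n][0] + s * uncertain[n][1]
             for n, s in zip(names, signs)}
    outputs.append(water_table(**point))

outputs = np.array(outputs)   # 2^3 = 8 model evaluations
mean, std = outputs.mean(), outputs.std()
print(f"mean {mean:.3f}, coefficient of variation {std / mean:.2%}")
```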

  15. Statistics of Delta v magnitude for a trajectory correction maneuver containing deterministic and random components

    NASA Technical Reports Server (NTRS)

    Bollman, W. E.; Chadwick, C.

    1982-01-01

    A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of the random Delta v standard deviation using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of nonzero deterministic Delta v's. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for single TCMs consisting of both a deterministic and a random component, and they provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
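
    A minimal Monte Carlo sketch of the quantity studied above: the magnitude of a Delta v vector formed by a fixed deterministic component plus an isotropic Gaussian random component; values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(11)

# |dv| statistics for dv = deterministic bias + isotropic Gaussian noise.
def dv_magnitude_stats(dv_det, sigma, n=200_000):
    det = np.array([dv_det, 0.0, 0.0])          # align bias with x-axis
    random_part = rng.normal(0.0, sigma, size=(n, 3))
    mags = np.linalg.norm(det + random_part, axis=1)
    return mags.mean(), np.percentile(mags, 99)

for dv_det in (0.0, 1.0, 5.0):                  # deterministic dv (m/s)
    mean, p99 = dv_magnitude_stats(dv_det, sigma=1.0)
    print(f"bias {dv_det:4.1f} m/s -> mean |dv| {mean:5.2f}, "
          f"99th pct {p99:5.2f}")
```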

  16. Blocked inverted indices for exact clustering of large chemical spaces.

    PubMed

    Thiel, Philipp; Sach-Peltason, Lisa; Ottmann, Christian; Kohlbacher, Oliver

    2014-09-22

    The calculation of pairwise compound similarities based on fingerprints is one of the fundamental tasks in chemoinformatics. Methods for efficient calculation of compound similarities are of the utmost importance for various applications like similarity searching or library clustering. With the increasing size of public compound databases, exact clustering of these databases is desirable, but often computationally prohibitively expensive. We present an optimized inverted index algorithm for the calculation of all pairwise similarities on 2D fingerprints of a given data set. In contrast to other algorithms, it neither requires GPU computing nor yields a stochastic approximation of the clustering. The algorithm has been designed to work well with multicore architectures and shows excellent parallel speedup. As an application example of this algorithm, we implemented a deterministic clustering application, which has been designed to decompose virtual libraries comprising tens of millions of compounds in a short time on current hardware. Our results show that our implementation achieves more than 400 million Tanimoto similarity calculations per second on a common desktop CPU. Deterministic clustering of the available chemical space thus can be done on modern multicore machines within a few days.
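
    A minimal sketch of the inverted-index idea for all-pairs Tanimoto similarity on binary fingerprints (a simplified illustration; the published algorithm adds blocking and multicore optimizations not shown here):

```python
from collections import defaultdict

# Each fingerprint is the set of its "on" bit positions. The inverted
# index maps a bit to the molecules containing it, so shared-bit
# counts are accumulated only for pairs that overlap at all.
def tanimoto_all_pairs(fps, threshold=0.6):
    index = defaultdict(list)
    results = []
    for i, fp in enumerate(fps):
        shared = defaultdict(int)          # j -> |fp_i & fp_j|
        for bit in fp:
            for j in index[bit]:
                shared[j] += 1
        for j, c in shared.items():
            sim = c / (len(fp) + len(fps[j]) - c)   # Tanimoto coefficient
            if sim >= threshold:
                results.append((j, i, sim))
        for bit in fp:                     # register i in the index
            index[bit].append(i)
    return results

fps = [{1, 2, 3, 4}, {2, 3, 4, 5}, {10, 11}, {1, 2, 3, 9}]
for i, j, s in tanimoto_all_pairs(fps):
    print(f"({i}, {j}): {s:.2f}")
```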

  17. Simultaneous estimation of deterministic and fractal stochastic components in non-stationary time series

    NASA Astrophysics Data System (ADS)

    García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.

    2018-07-01

    In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics; they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of fractal processes in the wavelet domain. The method has been validated on simulated signals and on real signals of economic and biological origin. The real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems and for uncovering interesting patterns present in time series.
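
    As a rough illustration of shrinkage in the wavelet domain, the sketch below applies a simple universal soft threshold with PyWavelets to recover a band-limited component from a noisy series; the paper's method is a Bayesian shrinkage tuned to fractional-Brownian-motion scaling, which this sketch does not implement:

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)

# Separate a band-limited deterministic signal from a noisy background
# by soft-thresholding wavelet detail coefficients. The universal
# threshold below is a standard textbook choice, not the paper's
# fBm-aware Bayesian rule.
n = 2048
t = np.arange(n) / n
deterministic = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = deterministic + np.cumsum(rng.normal(0, 0.05, n))  # drifting noise

coeffs = pywt.wavedec(noisy, "db8", level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise scale estimate
thr = sigma * np.sqrt(2 * np.log(n))               # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                        for c in coeffs[1:]]
estimate = pywt.waverec(coeffs, "db8")[:n]

rmse = np.sqrt(np.mean((estimate - deterministic) ** 2))
print(f"RMSE of recovered deterministic component: {rmse:.3f}")
```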

  18. Quantum cosmology of a Bianchi III LRS geometry coupled to a source free electromagnetic field

    NASA Astrophysics Data System (ADS)

    Karagiorgos, A.; Pailas, T.; Dimakis, N.; Terzis, Petros A.; Christodoulakis, T.

    2018-03-01

    We consider a Bianchi type III axisymmetric geometry in the presence of an electromagnetic field. A first result at the classical level is that the symmetry of the geometry need not be imposed on the electromagnetic tensor Fμν: the algebraic restrictions implied by the Einstein field equations for the stress-energy tensor Tμν suffice to reduce the general Fμν to the appropriate form. The classical solution thus found contains a time-dependent electric charge and a constant magnetic charge. The solution is also reachable from the corresponding mini-superspace action, which is strikingly similar to the Reissner-Nordström one. This points to a connection between the black hole geometry and the cosmological solution found here, which is the analog of the known correlation between the Schwarzschild and Kantowski-Sachs metrics. The configuration space is drastically modified by the presence of the magnetic charge, from a flat 3D geometry to a 3D pp-wave geometry. We map the emerging linear and quadratic classical integrals of motion to quantum observables. Along with the Wheeler-DeWitt equation, these observables provide unique, up to constants, wave functions. The employment of a Bohmian interpretation of these quantum states results in deterministic (semi-classical) geometries, most of which are singularity free.

  19. Cascaded Kalman and particle filters for photogrammetry based gyroscope drift and robot attitude estimation.

    PubMed

    Sadaghzadeh, Nargess; Poshtan, Javad; Wagner, Achim; Nordheimer, Eugen; Badreddin, Essameddin

    2014-03-01

    A gyroscope drift and robot attitude estimation method based on cascaded Kalman-particle filtering is proposed in this paper. Because MEMS gyroscope measurements are noisy and erroneous, they are combined with a photogrammetry-based vision navigation scenario. Quaternion kinematics and robot angular velocity dynamics, augmented with gyroscope drift dynamics, are employed as the system state-space model. Nonlinear attitude kinematics, drift and robot angular movement dynamics, each in 3 dimensions, result in a nonlinear high-dimensional system. To reduce the complexity, we propose a decomposition of the system into cascaded subsystems and then design separate cascaded observers. This design leads to easier tuning and more precise debugging from the perspective of programming, and such a setting is well suited for a cooperative modular system with noticeably reduced computation time. Kalman filtering (KF) is employed for the linear Gaussian subsystem consisting of angular velocity and drift dynamics together with the gyroscope measurement. The estimated angular velocity is utilized as the input of the second, particle filtering (PF) based observer in two scenarios of stochastic and deterministic inputs. Simulation results are provided to show the efficiency of the proposed method. Moreover, experimental results based on data from a 3D MEMS IMU and a 3D camera system demonstrate the efficiency of the method. © 2013 ISA. Published by ISA. All rights reserved.
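
    A minimal sketch of the first stage of such a cascade: a linear Kalman filter estimating a slowly drifting gyroscope bias from gyro readings plus an unbiased vision-derived rate reference, reduced to one axis; the model and noise levels are invented placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-axis toy problem: state x = [rate, bias], measurements z = [gyro,
# vision]. The gyro observes rate + bias; the vision reference
# observes the rate alone, which makes the bias observable.
dt, n = 0.01, 2000
true_rate = 0.5 * np.sin(0.5 * np.arange(n) * dt)      # rad/s
true_bias = 0.02 + 0.00001 * np.arange(n)              # slow drift

gyro = true_rate + true_bias + rng.normal(0, 0.03, n)  # biased, noisy
vision = true_rate + rng.normal(0, 0.05, n)            # unbiased, noisier

F = np.eye(2)                                  # random-walk state model
H = np.array([[1.0, 1.0],                      # gyro row
              [1.0, 0.0]])                     # vision row
Q = np.diag([1e-4, 1e-9])                      # process noise
R = np.diag([0.03**2, 0.05**2])                # measurement noise

x, P = np.zeros(2), np.eye(2)
for k in range(n):
    x = F @ x                                  # predict
    P = F @ P @ F.T + Q
    z = np.array([gyro[k], vision[k]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ (z - H @ x)                    # update
    P = (np.eye(2) - K @ H) @ P

print(f"final bias estimate {x[1]:.4f} vs true {true_bias[-1]:.4f}")
```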

  1. DIETARY EXPOSURES OF YOUNG CHILDREN, PART 3: MODELLING

    EPA Science Inventory

    A deterministic model was used to model dietary exposure of young children. Parameters included pesticide residue on food before handling, surface pesticide loading, transfer efficiencies and children's activity patterns. Three components of dietary pesticide exposure were includ...

  2. Materials Selection Criteria for Nuclear Power Applications: A Decision Algorithm

    NASA Astrophysics Data System (ADS)

    Rodríguez-Prieto, Álvaro; Camacho, Ana María; Sebastián, Miguel Ángel

    2016-02-01

    An innovative methodology based on stringency levels is proposed in this paper, improving the current selection method for structural materials used in demanding industrial applications. This paper describes a new approach for quantifying the stringency of materials requirements, based on a novel deterministic algorithm, to prevent potential failures. We have applied the new methodology to different standardized specifications used in pressure vessel design, such as the SA-533 Grade B Cl.1 and SA-508 Cl.3 specifications (issued by the American Society of Mechanical Engineers), DIN 20MnMoNi55 (issued by the German Institute of Standardization) and 16MND5 (issued by the French Nuclear Commission), and determine the influence of design code selection. This study is based on key scientific publications on the influence of chemical composition on the mechanical behavior of materials, which were not considered when the technological requirements were established in the aforementioned specifications. For this purpose, a new method to quantify the efficacy of each standard has been developed using a deterministic algorithm. The assignment of relative weights was performed by consulting a panel of experts in materials selection for reactor pressure vessels to provide a more objective methodology; the resulting mathematical calculations for quantitative analysis are thereby greatly simplified. The final results show that the steel DIN 20MnMoNi55 is the best material option. Additionally, the more recently developed materials DIN 20MnMoNi55, 16MND5 and SA-508 Cl.3 exhibit more stringent mechanical requirements than SA-533 Grade B Cl.1. The methodology presented in this paper can be used as a decision tool in the selection of materials for a wide range of applications.
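
    A minimal sketch of a deterministic weighted-scoring selection algorithm in the spirit of the methodology above; the criteria, expert weights and stringency scores are invented placeholders, not the paper's values:

```python
# Each candidate specification gets a stringency score per criterion;
# expert-assigned weights combine them and the highest total wins.
weights = {                    # hypothetical relative importance
    "toughness": 0.35,
    "strength": 0.25,
    "chemistry limits": 0.25,
    "inspection requirements": 0.15,
}

candidates = {                 # hypothetical stringency scores, 0-10
    "SA-533 Grade B Cl.1": {"toughness": 5, "strength": 6,
                            "chemistry limits": 4, "inspection requirements": 5},
    "SA-508 Cl.3":         {"toughness": 6, "strength": 6,
                            "chemistry limits": 6, "inspection requirements": 6},
    "DIN 20MnMoNi55":      {"toughness": 8, "strength": 7,
                            "chemistry limits": 8, "inspection requirements": 7},
    "16MND5":              {"toughness": 7, "strength": 7,
                            "chemistry limits": 7, "inspection requirements": 6},
}

def total_score(scores):
    return sum(weights[c] * s for c, s in scores.items())

ranking = sorted(candidates.items(), key=lambda kv: total_score(kv[1]),
                 reverse=True)
for name, scores in ranking:
    print(f"{name:>20}: {total_score(scores):.2f}")
```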

  3. Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.

    PubMed

    Kang, Yun; Lanchier, Nicolas

    2011-06-01

    We investigate the impact of the Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. A regime with exactly three attractors only appears when the patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density population and a low-density population can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, leading after a large random time to either global expansion or global extinction. Up to some critical dispersal, increasing the intensity of the interactions enlarges both the basin of attraction of global extinction and the basin of attraction of global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases, resulting in either global expansion or global extinction. For the deterministic model, there are then only two attractors, while the stochastic model no longer exhibits metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the values of the Allee thresholds, as the global population size in both the deterministic and stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction, in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the probability of extinction for the stochastic model.
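
    A minimal sketch of a deterministic two-patch model with Allee effects and symmetric dispersal, of the kind analysed above; the exact equations of the paper may differ, and all parameter values are illustrative:

```python
from scipy.integrate import solve_ivp

# Per-patch growth with an Allee threshold A_i and carrying capacity K,
# coupled by symmetric dispersal of intensity D.
r, K = 1.0, 1.0
A1, A2 = 0.2, 0.3        # distinct Allee thresholds per patch
D = 0.05                 # dispersal intensity

def rhs(t, n):
    n1, n2 = n
    f1 = r * n1 * (n1 / A1 - 1.0) * (1.0 - n1 / K)
    f2 = r * n2 * (n2 / A2 - 1.0) * (1.0 - n2 / K)
    return [f1 + D * (n2 - n1), f2 + D * (n1 - n2)]

# Patch 1 established near carrying capacity, patch 2 empty:
# does the population invade patch 2 or collapse?
sol = solve_ivp(rhs, (0.0, 200.0), [0.9, 0.0])
n1, n2 = sol.y[:, -1]
print(f"final densities: patch 1 = {n1:.3f}, patch 2 = {n2:.3f}")
```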

  4. Dynamic speckle - Interferometry of micro-displacements

    NASA Astrophysics Data System (ADS)

    Vladimirov, A. P.

    2012-06-01

    The problem of the dynamics of speckles in the image plane of an object, caused by random movements of scattering centers, is solved. We consider three cases: (1) during the observation the points move at random, but constant, speeds; (2) the relative displacement of any pair of points is a continuous random process; and (3) the motion of the centers is the sum of a deterministic movement and a random displacement. For cases (1) and (2), the characteristics of the temporal and spectral autocorrelation functions of the radiation intensity can be used to determine the individual and average relative displacements of the centers, their dispersion and the relaxation time. For case (3), it is shown that under certain conditions the optical signal contains a periodic component, with the number of periods proportional to the derivative of the deterministic displacements. The results of experiments conducted to test and apply the theory are given.

  5. Robust Observation Detection for Single Object Tracking: Deterministic and Probabilistic Patch-Based Approaches

    PubMed Central

    Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill

    2012-01-01

    In video analytics, robust observation detection is very important, as the content of videos varies greatly, especially for tracking implementations. In contrast to the image processing field, problems of blurring, moderate deformation, low-illumination surroundings, illumination change and homogeneous texture are commonly encountered in video analytics. Patch-Based Observation Detection (PBOD) was developed to improve detection robustness in complex scenes by fusing both feature- and template-based recognition methods. While feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two modes of PBOD—a deterministic and a probabilistic approach—have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a 2-level test in which threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the probabilistic approach, patch matching is done by modelling the histograms of the patches with Poisson distributions for both the RGB and HSV colour models. Then, maximum likelihood is applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. Due to its heavy processing requirements, this algorithm is best implemented as a complement to other, simpler detection methods. PMID:23202226
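
    A minimal sketch of the probabilistic patch-matching idea: model a candidate patch's colour histogram as Poisson counts around a reference histogram and pick the maximum-likelihood candidate; the histograms and bin layout are synthetic placeholders:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(9)

# Log-likelihood of a candidate histogram under per-bin Poisson rates
# taken from the reference patch's histogram.
def log_likelihood(candidate_hist, reference_hist):
    lam = np.clip(reference_hist, 1e-6, None)   # avoid zero rates
    return poisson.logpmf(candidate_hist, lam).sum()

reference = rng.integers(0, 50, size=48)        # e.g. 16 bins x 3 channels

candidates = [
    reference + rng.poisson(2, size=48),        # close match
    rng.integers(0, 50, size=48),               # unrelated patch
    reference * 2,                              # illumination change
]

scores = [log_likelihood(c, reference) for c in candidates]
best = int(np.argmax(scores))
print("log-likelihoods:", [f"{s:.1f}" for s in scores], "-> best:", best)
```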

  6. A random walk on water (Henry Darcy Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Koutsoyiannis, D.

    2009-04-01

    Randomness and uncertainty had been well appreciated in hydrology and water resources engineering in their initial steps as scientific disciplines. However, this changed through the years and, following other geosciences, hydrology adopted a naïve view of randomness in natural processes. Such a view separates natural phenomena into two mutually exclusive types, random or stochastic, and deterministic. When a classification of a specific process into one of these two types fails, then a separation of the process into two different, usually additive, parts is typically devised, each of which may be further subdivided into subparts (e.g., deterministic subparts such as periodic and aperiodic or trends). This dichotomous logic is typically combined with a manichean perception, in which the deterministic part supposedly represents cause-effect relationships and thus is physics and science (the "good"), whereas randomness has little relationship with science and no relationship with understanding (the "evil"). Probability theory and statistics, which traditionally provided the tools for dealing with randomness and uncertainty, have been regarded by some as the "necessary evil" but not as an essential part of hydrology and geophysics. Some took a step further to banish them from hydrology, replacing them with deterministic sensitivity analysis and fuzzy-logic representations. Others attempted to demonstrate that irregular fluctuations observed in natural processes are au fond manifestations of underlying chaotic deterministic dynamics with low dimensionality, thus attempting to render probabilistic descriptions unnecessary. Some of the above recent developments are simply flawed because they make erroneous use of probability and statistics (which, remarkably, provide the tools for such analyses), whereas the entire underlying logic is just a false dichotomy. To see this, it suffices to recall that Pierre Simon Laplace, perhaps the most famous proponent of determinism in the history of philosophy of science (cf. Laplace's demon), is, at the same time, one of the founders of probability theory, which he regarded as "nothing but common sense reduced to calculation". This harmonizes with James Clerk Maxwell's view that "the true logic for this world is the calculus of Probabilities" and was more recently and epigrammatically formulated in the title of Edwin Thompson Jaynes's book "Probability Theory: The Logic of Science" (2003). Abandoning dichotomous logic, either on ontological or epistemic grounds, we can identify randomness or stochasticity with unpredictability. Admitting that (a) uncertainty is an intrinsic property of nature; (b) causality implies dependence of natural processes in time and thus suggests predictability; but, (c) even the tiniest uncertainty (e.g., in initial conditions) may result in unpredictability after a certain time horizon, we may shape a stochastic representation of natural processes that is consistent with Karl Popper's indeterministic world view. In this representation, probability quantifies uncertainty according to the Kolmogorov system, in which probability is a normalized measure, i.e., a function that maps sets (areas where the initial conditions or the parameter values lie) to real numbers (in the interval [0, 1]). 
In such a representation, predictability (suggested by deterministic laws) and unpredictability (randomness) coexist, are not separable or additive components, and it is a matter of specifying the time horizon of prediction to decide which of the two dominates. An elementary numerical example has been devised to illustrate the above ideas and demonstrate that they offer a pragmatic and useful guide for practice, rather than just pertaining to philosophical discussions. A chaotic model, with fully and a priori known deterministic dynamics and deterministic inputs (without any random agent), is assumed to represent the hydrological balance in an area partly covered by vegetation. Experimentation with this toy model demonstrates, inter alia, that: (1) for short time horizons the deterministic dynamics is able to give good predictions; but (2) these predictions become extremely inaccurate and useless for long time horizons; (3) for such horizons a naïve statistical prediction (average of past data) which fully neglects the deterministic dynamics is more skilful; and (4) if this statistical prediction, in addition to past data, is combined with the probability theory (the principle of maximum entropy, in particular), it can provide a more informative prediction. Also, the toy model shows that the trajectories of the system state (and derivative properties thereof) do not resemble a regular (e.g., periodic) deterministic process nor a purely random process, but exhibit patterns indicating anti-persistence and persistence (where the latter statistically complies with a Hurst-Kolmogorov behaviour). If the process is averaged over long time scales, the anti-persistent behaviour improves predictability, whereas the persistent behaviour substantially deteriorates it. A stochastic representation of this deterministic system, which incorporates dynamics, is not only possible, but also powerful as it provides good predictions for both short and long horizons and helps to decide on when the deterministic dynamics should be considered or neglected. Obviously, a natural system is extremely more complex than this simple toy model and hence unpredictability is naturally even more prominent in the former. In addition, in a complex natural system, we can never know the exact dynamics and we must infer it from past data, which implies additional uncertainty and an additional role of stochastics in the process of formulating the system equations and estimating the involved parameters. Data also offer the only solid grounds to test any hypothesis about the dynamics, and failure of performing such testing against evidence from data renders the hypothesised dynamics worthless. If this perception of natural phenomena is adequately plausible, then it may help in studying interesting fundamental questions regarding the current state and the trends of hydrological and water resources research and their promising future paths. For instance: (i) Will it ever be possible to achieve a fully "physically based" modelling of hydrological systems that will not depend on data or stochastic representations? (ii) To what extent can hydrological uncertainty be reduced and what are the effective means for such reduction? (iii) Are current stochastic methods in hydrology consistent with observed natural behaviours? What paths should we explore for their advancement? (iv) Can deterministic methods provide solid scientific grounds for water resources engineering and management? 
In particular, can there be risk-free hydraulic engineering and water management? (v) Is the current (particularly important) interface between hydrology and climate satisfactory? In particular, should hydrology rely on climate models that are not properly validated (i.e., for periods and scales not used in calibration)? In effect, is the evolution of climate and its impacts on water resources deterministically predictable?
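
    A minimal sketch of the toy-model argument above, assuming the fully chaotic logistic map as a stand-in for the paper's hydrological model (which is not reproduced here): a deterministic forecast started from a slightly perturbed state beats the climatological mean at short horizons, but loses to it once the tiny initial error has grown.

      import numpy as np

      # Toy chaotic dynamics: the logistic map, a stand-in for the paper's
      # hydrological toy model.
      def step(x):
          return 4.0 * x * (1.0 - x)

      rng = np.random.default_rng(0)
      n_runs, horizon = 2000, 25
      err_det = np.zeros(horizon)   # squared error of deterministic forecast
      err_clim = np.zeros(horizon)  # squared error of climatological forecast
      clim_mean = 0.5               # long-run mean of the chaotic logistic map

      for _ in range(n_runs):
          truth = rng.uniform(0.01, 0.99)
          # deterministic model with a tiny initial-condition error
          model = min(max(truth + 1e-6 * rng.standard_normal(), 1e-9), 1 - 1e-9)
          for t in range(horizon):
              truth, model = step(truth), step(model)
              err_det[t] += (model - truth) ** 2
              err_clim[t] += (clim_mean - truth) ** 2

      rmse_det = np.sqrt(err_det / n_runs)
      rmse_clim = np.sqrt(err_clim / n_runs)
      for t in range(0, horizon, 4):
          best = "deterministic" if rmse_det[t] < rmse_clim[t] else "climatology"
          print(f"t={t+1:2d}  det={rmse_det[t]:.4f}  clim={rmse_clim[t]:.4f}  -> {best}")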

  7. Dynamic Routing of Aircraft in the Presence of Adverse Weather Using a POMDP Framework

    NASA Technical Reports Server (NTRS)

    Balaban, Edward; Roychoudhury, Indranil; Spirkovska, Lilly; Sankararaman, Shankar; Kulkarni, Chetan; Arnon, Tomer

    2017-01-01

    Each year weather-related airline delays result in hundreds of millions of dollars in additional fuel burn, maintenance, and lost revenue, not to mention passenger inconvenience. The current approaches for aircraft route planning in the presence of adverse weather still mainly rely on deterministic methods. In contrast, this work addresses the problem using a Partially Observable Markov Decision Process (POMDP) framework, which allows for reasoning over uncertainty (including uncertainty in weather evolution over time) and results in solutions that are more robust to disruptions. The POMDP-based decision support system is demonstrated on several scenarios involving convective weather cells and is benchmarked against a deterministic planning system with functionality similar to that of systems currently in use or under development.
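
    The core machinery of any POMDP approach is a belief update over the hidden state. A minimal sketch under assumed numbers: a single weather cell ahead is either CLEAR or SEVERE, the transition and observation matrices below are hypothetical, and the belief is filtered from noisy radar observations with Bayes' rule.

      import numpy as np

      # Hidden weather state: 0 = CLEAR, 1 = SEVERE. All numbers are hypothetical.
      T = np.array([[0.9, 0.1],    # P(next | CLEAR): cells rarely form
                    [0.3, 0.7]])   # P(next | SEVERE): cells persist, then decay
      O = np.array([[0.80, 0.20],  # P(obs | CLEAR): obs 0 = clean radar echo
                    [0.25, 0.75]]) # P(obs | SEVERE)

      def belief_update(b, obs):
          """One Bayes-filter step: predict through T, correct with O."""
          b_pred = b @ T               # prediction through the dynamics
          b_new = b_pred * O[:, obs]   # weight by observation likelihood
          return b_new / b_new.sum()   # renormalize

      b = np.array([0.5, 0.5])         # uninformative prior
      for obs in [1, 1, 0, 1]:         # a hypothetical radar observation stream
          b = belief_update(b, obs)
          print(f"obs={obs}  P(SEVERE)={b[1]:.3f}")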

  8. A Deterministic Computational Procedure for Space Environment Electron Transport

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.

    2010-01-01

    A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing the numerous repetitive transport calculations essential for electron radiation exposure assessments of complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean-free-path and average-trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations indicate that computational speed is not gained at the expense of accuracy.

  9. Broken flow symmetry explains the dynamics of small particles in deterministic lateral displacement arrays.

    PubMed

    Kim, Sung-Cheol; Wunsch, Benjamin H; Hu, Huan; Smith, Joshua T; Austin, Robert H; Stolovitzky, Gustavo

    2017-06-27

    Deterministic lateral displacement (DLD) is a technique for size fractionation of particles in continuous flow that has shown great potential for biological applications. Several theoretical models have been proposed, but experimental evidence has demonstrated that a rich class of intermediate migration behavior exists, which is not predicted. We present a unified theoretical framework to infer the path of particles in the whole array on the basis of trajectories in a unit cell. This framework explains many of the unexpected particle trajectories reported and can be used to design arrays for even nanoscale particle fractionation. We performed experiments that verify these predictions and used our model to develop a condenser array that achieves full particle separation with a single fluidic input.
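
    For orientation, the widely used empirical fit by Davis for the DLD critical diameter, D_c = 1.4 g eps^0.48 (post gap g, row-shift fraction eps = 1/N), gives a first-pass design estimate; the unified framework of this paper is aimed precisely at the intermediate behaviours such a fit does not capture. A small sketch with assumed dimensions:

      # Davis's empirical critical-diameter fit for DLD arrays; particles larger
      # than D_c displace ("bump"), smaller ones zigzag. Dimensions are assumed.
      def dld_critical_diameter(gap_um: float, row_shift_fraction: float) -> float:
          return 1.4 * gap_um * row_shift_fraction ** 0.48

      for N in (5, 10, 20, 50):
          dc = dld_critical_diameter(gap_um=10.0, row_shift_fraction=1.0 / N)
          print(f"periodicity N={N:2d}: critical diameter = {dc:.2f} um")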

  10. The integrated model for solving the single-period deterministic inventory routing problem

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Kamarul Irwan Abdul; Abidin, Rahimi; Iteng, Rosman; Lamsali, Hendrik

    2016-08-01

    This paper discusses the problem of efficiently managing inventory and routing in a two-level supply chain system. Vendor Managed Inventory (VMI) is a policy that integrates the decisions of a supplier and his customers. We assume that the demand at each customer is stationary and that the warehouse implements VMI. The objective of this paper is to minimize the inventory and transportation costs of the customers in a two-level supply chain. The problem is to determine the delivery quantities, delivery times and routes to the customers for the single-period deterministic inventory routing problem (SP-DIRP). As a result, a linear mixed-integer program is developed for the solution of the SP-DIRP problem.
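
    The abstract does not give the program itself, so the following is only a brute-force sketch of the objective such a model minimizes: known (deterministic) demands, one depot, a vehicle capacity, and a search over delivery routes for the cheapest feasible plan. All coordinates, demands and the capacity are made up.

      import itertools, math

      # Toy single-period instance: customer -> ((x, y), demand).
      customers = {"A": ((2, 1), 4), "B": ((-1, 3), 3),
                   "C": ((4, -2), 5), "D": ((1, 5), 2)}
      CAP = 8            # vehicle capacity per trip (assumed)
      depot = (0, 0)

      def dist(p, q):
          return math.hypot(p[0] - q[0], p[1] - q[1])

      def route_cost(names):
          """Cheapest depot -> customers -> depot tour over one subset."""
          best = math.inf
          for order in itertools.permutations(names):
              pts = [depot] + [customers[n][0] for n in order] + [depot]
              best = min(best, sum(dist(a, b) for a, b in zip(pts, pts[1:])))
          return best

      def partitions(items):
          """All ways to split the customer set into trips."""
          if not items:
              yield []
              return
          first, rest = items[0], items[1:]
          for part in partitions(rest):
              for i in range(len(part)):
                  yield part[:i] + [[first] + part[i]] + part[i + 1:]
              yield [[first]] + part

      best_cost, best_plan = math.inf, None
      for plan in partitions(list(customers)):
          if all(sum(customers[n][1] for n in trip) <= CAP for trip in plan):
              cost = sum(route_cost(trip) for trip in plan)
              if cost < best_cost:
                  best_cost, best_plan = cost, plan

      print(f"best routes: {best_plan}  total distance: {best_cost:.2f}")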

  11. Transforming Better Babies into Fitter Families: archival resources and the history of American eugenics movement, 1908-1930.

    PubMed

    Selden, Steven

    2005-06-01

    In the early 1920s, determinist conceptions of biology helped to transform Better Babies contests into Fitter Families competitions with a strong commitment to controlled human breeding. While the earlier competitions were concerned with physical and mental standards, the latter contests collected data on a broad range of presumed hereditary characters. The complex behaviors thought to be determined by one's heredity included being generous, jealous, and cruel. In today's context, the popular media often interpret advances in molecular genetics in a similarly reductive and determinist fashion. This paper argues that such a narrow interpretation of contemporary biology unnecessarily constrains the public in developing social policies concerning complex social behaviors ranging from crime to intelligence.

  12. Probabilistic 3-D time-lapse inversion of magnetotelluric data: application to an enhanced geothermal system

    NASA Astrophysics Data System (ADS)

    Rosas-Carbajal, M.; Linde, N.; Peacock, J.; Zyserman, F. I.; Kalscheuer, T.; Thiel, S.

    2015-12-01

    Surface-based monitoring of mass transfer caused by injections and extractions in deep boreholes is crucial to maximize oil, gas and geothermal production. Inductive electromagnetic methods, such as magnetotellurics, are appealing for these applications due to their large penetration depths and sensitivity to changes in fluid conductivity and fracture connectivity. In this work, we propose a 3-D Markov chain Monte Carlo inversion of time-lapse magnetotelluric data to image mass transfer following a saline fluid injection. The inversion estimates the posterior probability density function of the resulting plume, and thereby quantifies model uncertainty. To decrease computation times, we base the parametrization on a reduced Legendre moment decomposition of the plume. A synthetic test shows that our methodology is effective when the electrical resistivity structure prior to the injection is well known. The centre of mass and spread of the plume are well retrieved. We then apply our inversion strategy to an injection experiment in an enhanced geothermal system at Paralana, South Australia, and compare it to a 3-D deterministic time-lapse inversion. The latter retrieves resistivity changes that are shallower than the actual injection interval, whereas the probabilistic inversion retrieves plumes that are located at the correct depths and oriented in a preferential north-south direction. To explain the time-lapse data, the inversion requires unrealistically large resistivity changes with respect to the base model. We suggest that this is partly explained by unaccounted-for subsurface heterogeneities in the base model from which time-lapse changes are inferred.
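
    A minimal sketch of the Metropolis-Hastings idea behind such a probabilistic inversion, with a toy one-dimensional forward model in place of the 3-D magnetotelluric physics: infer a plume's centre and width from noisy data and retain the whole posterior rather than a single best-fit model. All numbers are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      x = np.linspace(0, 10, 40)

      def forward(center, width):
          # toy anomaly shape standing in for the physical forward model
          return np.exp(-0.5 * ((x - center) / width) ** 2)

      data = forward(4.0, 1.5) + 0.05 * rng.standard_normal(x.size)
      sigma = 0.05

      def log_like(theta):
          c, w = theta
          if not (0 < c < 10 and 0.2 < w < 5):   # flat prior bounds
              return -np.inf
          r = data - forward(c, w)
          return -0.5 * np.sum((r / sigma) ** 2)

      theta = np.array([6.0, 1.0])               # deliberately wrong start
      ll = log_like(theta)
      samples = []
      for _ in range(20000):
          prop = theta + rng.normal(0, [0.15, 0.1])   # random-walk proposal
          ll_prop = log_like(prop)
          if np.log(rng.uniform()) < ll_prop - ll:    # accept/reject
              theta, ll = prop, ll_prop
          samples.append(theta.copy())

      post = np.array(samples[5000:])                 # drop burn-in
      print("posterior mean (centre, width):", post.mean(axis=0))
      print("posterior std  (centre, width):", post.std(axis=0))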

  13. Probabilistic 3-D time-lapse inversion of magnetotelluric data: Application to an enhanced geothermal system

    USGS Publications Warehouse

    Rosas-Carbajal, Marina; Linde, Nicolas; Peacock, Jared R.; Zyserman, F. I.; Kalscheuer, Thomas; Thiel, Stephan

    2015-01-01

    Surface-based monitoring of mass transfer caused by injections and extractions in deep boreholes is crucial to maximize oil, gas and geothermal production. Inductive electromagnetic methods, such as magnetotellurics, are appealing for these applications due to their large penetration depths and sensitivity to changes in fluid conductivity and fracture connectivity. In this work, we propose a 3-D Markov chain Monte Carlo inversion of time-lapse magnetotelluric data to image mass transfer following a saline fluid injection. The inversion estimates the posterior probability density function of the resulting plume, and thereby quantifies model uncertainty. To decrease computation times, we base the parametrization on a reduced Legendre moment decomposition of the plume. A synthetic test shows that our methodology is effective when the electrical resistivity structure prior to the injection is well known. The centre of mass and spread of the plume are well retrieved. We then apply our inversion strategy to an injection experiment in an enhanced geothermal system at Paralana, South Australia, and compare it to a 3-D deterministic time-lapse inversion. The latter retrieves resistivity changes that are shallower than the actual injection interval, whereas the probabilistic inversion retrieves plumes that are located at the correct depths and oriented in a preferential north-south direction. To explain the time-lapse data, the inversion requires unrealistically large resistivity changes with respect to the base model. We suggest that this is partly explained by unaccounted-for subsurface heterogeneities in the base model from which time-lapse changes are inferred.

  14. An approach to model reactor core nodalization for deterministic safety analysis

    NASA Astrophysics Data System (ADS)

    Salim, Mohd Faiz; Samsudin, Mohd Rafie; Mamat @ Ibrahim, Mohd Rizal; Roslan, Ridha; Sadri, Abd Aziz; Farid, Mohd Fairus Abd

    2016-01-01

    Adopting a good nodalization strategy is essential to produce an accurate, high-quality input model for Deterministic Safety Analysis (DSA) using a System Thermal-Hydraulic (SYS-TH) computer code. The purpose of such analysis is to demonstrate compliance with regulatory requirements and to verify the behavior of the reactor during normal and accident conditions as it was originally designed. Numerous studies in the past have been devoted to the development of nodalization strategies, from small research reactors (e.g., 250 kW) up to bigger research reactors (e.g., 30 MW). This paper discusses the state-of-the-art thermal-hydraulic channel representation to be employed in the nodalization of the RTP-TRIGA Research Reactor, specifically for the reactor core. At present, the required thermal-hydraulic parameters for the reactor core, such as core geometrical data (length, coolant flow area, hydraulic diameters, and axial power profile) and material properties (including the UZrH1.6 fuel, stainless steel clad and graphite reflector), have been collected, analyzed and consolidated in the Reference Database of RTP using a standardized methodology, mainly derived from the available technical documentation. Based on the information in this database, the assumptions made in the nodalization approach and the calculations performed are discussed and presented. The development and identification of the thermal-hydraulic channels for the reactor core will be implemented in the SYS-TH calculation using the RELAP5-3D® computer code. The activity presented in this paper is part of the development of the overall nodalization description for the RTP-TRIGA Research Reactor under the IAEA Norwegian Extra-Budgetary Programme (NOKEBP) mentoring project on Expertise Development through the Analysis of Reactor Thermal-Hydraulics for Malaysia, denoted EARTH-M.

  15. Stochastic and deterministic multiscale models for systems biology: an auxin-transport case study.

    PubMed

    Twycross, Jamie; Band, Leah R; Bennett, Malcolm J; King, John R; Krasnogor, Natalio

    2010-03-26

    Stochastic and asymptotic methods are powerful tools in developing multiscale systems biology models; however, little has been done in this context to compare the efficacy of these methods. The majority of current systems biology modelling research, including that of auxin transport, uses numerical simulations to study the behaviour of large systems of deterministic ordinary differential equations, with little consideration of alternative modelling frameworks. In this case study, we solve an auxin-transport model using analytical methods, deterministic numerical simulations and stochastic numerical simulations. Although the three approaches in general predict the same behaviour, the approaches provide different information that we use to gain distinct insights into the modelled biological system. We show in particular that the analytical approach readily provides straightforward mathematical expressions for the concentrations and transport speeds, while the stochastic simulations naturally provide information on the variability of the system. Our study provides a constructive comparison which highlights the advantages and disadvantages of each of the considered modelling approaches. This will prove helpful to researchers when weighing up which modelling approach to select. In addition, the paper goes some way to bridging the gap between these approaches, which in the future we hope will lead to integrative hybrid models.
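
    A compact way to see the deterministic/stochastic contrast the study draws is to solve the same toy kinetics both ways, here constant production with first-order decay standing in for the auxin-transport model: the ODE gives the mean behaviour, while the Gillespie simulation additionally quantifies the variability.

      import numpy as np

      # Toy kinetics: production at rate k_prod, decay at rate k_dec * N.
      # Parameters are illustrative, not taken from the auxin model.
      k_prod, k_dec, t_end = 10.0, 0.5, 40.0
      rng = np.random.default_rng(2)

      # Deterministic ODE dN/dt = k_prod - k_dec * N, forward Euler.
      n, dt = 0.0, 0.01
      for _ in range(int(t_end / dt)):
          n += dt * (k_prod - k_dec * n)
      print(f"ODE prediction at t={t_end}: N = {n:.2f}")

      # Gillespie SSA on the same reactions; repeated to expose variability.
      finals = []
      for _ in range(500):
          t, N = 0.0, 0
          while t < t_end:
              rates = (k_prod, k_dec * N)
              total = sum(rates)
              t += rng.exponential(1.0 / total)       # time to next event
              N += 1 if rng.uniform() * total < rates[0] else -1
          finals.append(N)
      finals = np.array(finals)
      print(f"SSA mean = {finals.mean():.2f}, std = {finals.std():.2f} "
            "(variability information the ODE alone does not provide)")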

  16. Deterministic magnetorheological finishing of optical aspheric mirrors

    NASA Astrophysics Data System (ADS)

    Song, Ci; Dai, Yifan; Peng, Xiaoqiang; Li, Shengyi; Shi, Feng

    2009-05-01

    Magnetorheological finishing (MRF) is applied to the deterministic finishing of optical aspheric mirrors to overcome disadvantages of conventional polishing, including low finishing efficiency, long iteration times and unstable convergence. After an introduction to the basic principle of MRF, the key techniques needed to implement deterministic MRF are discussed. To demonstrate the method, a 200 mm diameter K9 glass concave asphere with a vertex radius of 640 mm was figured on an MRF polishing tool developed by the authors. In one process of about two hours, the surface accuracy peak-to-valley (PV) improved from an initial 0.216λ to a final 0.179λ, and the root-mean-square (RMS) error improved from 0.027λ to 0.017λ (λ = 0.6328 µm). This high-precision, high-efficiency convergence of the aspheric surface error shows that MRF is an advanced optical manufacturing method offering a high convergence ratio of surface figure, high surfacing precision, and a stable, controllable finishing process. MRF can therefore be used reliably for the deterministic finishing of optical aspheric mirrors, and its advantages extend to other element types such as plane and spherical mirrors.

  17. PCEMCAN - Probabilistic Ceramic Matrix Composites Analyzer: User's Guide, Version 1.0

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Mital, Subodh K.; Murthy, Pappu L. N.

    1998-01-01

    PCEMCAN (Probabilistic CEramic Matrix Composites ANalyzer) is an integrated computer code developed at NASA Lewis Research Center that simulates uncertainties associated with the constituent properties, manufacturing process, and geometric parameters of fiber-reinforced ceramic matrix composites and quantifies their random thermomechanical behavior. The PCEMCAN code can perform both deterministic and probabilistic analyses to predict thermomechanical properties. This user's guide details the step-by-step procedure to create an input file and to update or modify the material-properties database required to run the PCEMCAN computer code. An overview of the geometric conventions, micromechanical unit cell, nonlinear constitutive relationship and probabilistic simulation methodology is also provided in the manual. Fast probability integration as well as Monte Carlo simulation methods are available for the uncertainty simulation. The options available in the code to simulate probabilistic material properties and to quantify the sensitivity of the primitive random variables are described. Deterministic and probabilistic results are illustrated using demonstration problems. For a detailed theoretical description of the deterministic and probabilistic analyses, the user is referred to the companion documents "Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composite Behavior," NASA TP-3602, 1996, and "Probabilistic Micromechanics and Macromechanics for Ceramic Matrix Composites," NASA TM 4766, June 1997.

  18. Detailed gravity anomalies from GEOS-3 satellite altimetry data

    NASA Technical Reports Server (NTRS)

    Gopalapillai, G. S.; Mourad, A. G.

    1978-01-01

    A technique for deriving mean gravity anomalies from dense altimetry data was developed, using a combination of deterministic and statistical techniques. The basic mathematical model is based on the Stokes equation, which describes the analytical relationship between mean gravity anomalies and the geoid undulation at a point; this undulation is a linear function of the altimetry data at that point. The overdetermined problem resulting from the excess of available altimetry data was solved using least-squares principles. These principles enable the simultaneous estimation of the associated standard deviations, reflecting the internal consistency based on the accuracy estimates provided for the altimetry data as well as for the terrestrial anomaly data. Several test computations of the anomalies and their accuracy estimates were made using GEOS-3 data.

  19. FACTORS INFLUENCING TOTAL DIETARY EXPOSURE OF YOUNG CHILDREN

    EPA Science Inventory

    A deterministic model was developed to identify critical input parameters to assess dietary intake of young children. The model was used as a framework for understanding important factors in data collection and analysis. Factors incorporated included transfer efficiencies of pest...

  20. Modelling Geomechanical Heterogeneity of Rock Masses Using Direct and Indirect Geostatistical Conditional Simulation Methods

    NASA Astrophysics Data System (ADS)

    Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald

    2017-12-01

    An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods to model rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of geomechanical properties. Although applications of geostatistical techniques have demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for the spatial variability of rock mass geomechanical properties using a geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core-logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of the uncertainty in the spatial variability of rock mass properties in different areas of the pit.

  1. Combining deterministic and stochastic velocity fields in the analysis of deep crustal seismic data

    NASA Astrophysics Data System (ADS)

    Larkin, Steven Paul

    Standard crustal seismic modeling obtains deterministic velocity models that ignore the effects of wavelength-scale heterogeneity known to exist within the Earth's crust. Stochastic velocity models are a means to include wavelength-scale heterogeneity in the modeling. These models are defined by statistical parameters obtained from geologic maps of exposed crystalline rock and are thus tied to actual geologic structures. Combining deterministic and stochastic velocity models into a single model allows a realistic full wavefield (2-D) to be computed. By comparing these simulations to recorded seismic data, the effects of wavelength-scale heterogeneity can be investigated. Combined deterministic and stochastic velocity models are created for two datasets, the 1992 RISC seismic experiment in southeastern California and the 1986 PASSCAL seismic experiment in northern Nevada. The RISC experiment was located in the transition zone between the Salton Trough and the southern Basin and Range province. A high-velocity body previously identified beneath the Salton Trough is constrained to pinch out beneath the Chocolate Mountains to the northeast. The lateral extent of this body is evidence for the ephemeral nature of rifting loci as a continent is initially rifted. Stochastic modeling of wavelength-scale structures above this body indicates that little more than 5% mafic intrusion into a more felsic continental crust is responsible for the observed reflectivity. Modeling of the wide-angle RISC data indicates that coda waves following PmP are initially dominated by diffusion of energy out of the near-surface basin as the wavefield reverberates within this low-velocity layer. At later times, this coda consists of scattered body waves and P-to-S conversions. Surface waves do not play a significant role in this coda. Modeling of the PASSCAL dataset indicates that a high-gradient crust-mantle transition zone or a rough Moho interface is necessary to reduce precritical PmP energy. Possibly related, inconsistencies in published velocity models are reconciled by hypothesizing the existence of large, elongate, high-velocity bodies at the base of the crust, oriented similarly to, and of similar scale as, the basins and ranges at the surface. This structure would result in an anisotropic lower crust.

  2. Tsunamigenic scenarios for southern Peru and northern Chile seismic gap: Deterministic and probabilistic hybrid approach for hazard assessment

    NASA Astrophysics Data System (ADS)

    González-Carrasco, J. F.; Gonzalez, G.; Aránguiz, R.; Yanez, G. A.; Melgar, D.; Salazar, P.; Shrivastava, M. N.; Das, R.; Catalan, P. A.; Cienfuegos, R.

    2017-12-01

    The definition of plausible worst-case tsunamigenic scenarios plays a relevant role in tsunami hazard assessment focused on emergency preparedness and evacuation planning for coastal communities. During the last decade, the occurrence of major and moderate tsunamigenic earthquakes along worldwide subduction zones has given clues about critical parameters involved in near-field tsunami inundation processes, i.e., slip spatial distribution, shelf resonance of edge waves and local geomorphology effects. To analyze the effects of these seismic and hydrodynamic variables on the epistemic uncertainty of coastal inundation, we implement a combined methodology using deterministic and probabilistic approaches to construct 420 tsunamigenic scenarios in a mature seismic gap of southern Peru and northern Chile, extending from 17ºS to 24ºS. The deterministic scenarios are calculated using a regional distribution of trench-parallel gravity anomaly (TPGA) and trench-parallel topography anomaly (TPTA), the three-dimensional Slab 1.0 worldwide subduction zone geometry model and published interseismic coupling (ISC) distributions. As a result, we find four high slip-deficit zones, interpreted as major seismic asperities of the gap, which are used in a hierarchical tree scheme to generate ten tsunamigenic scenarios with seismic magnitudes ranging from Mw 8.4 to Mw 8.9. Additionally, we construct ten homogeneous-slip scenarios as an inundation baseline. For the probabilistic approach, we implement a Karhunen-Loève expansion to generate 400 stochastic tsunamigenic scenarios over the maximum extension of the gap, within the same magnitude range as the deterministic sources. All scenarios are simulated with the non-hydrostatic tsunami model Neowave 2D, using a classical nesting scheme, for five major coastal cities in northern Chile (Arica, Iquique, Tocopilla, Mejillones and Antofagasta), obtaining high-resolution data of inundation depth, runup, coastal currents and sea level elevation. The probabilistic kinematic tsunamigenic scenarios yield more realistic slip patterns, similar to the maximum slip of major past earthquakes. For all studied sites, the location of the slip peak and shelf resonance are first-order controls on the computed coastal inundation depths.
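
    A sketch of the Karhunen-Loève construction used for the 400 stochastic sources, under assumed parameters (a 1-D along-strike grid, exponential covariance, hypothetical correlation length and mean slip profile rather than the actual Peru-Chile gap values): eigen-decompose the covariance and superpose weighted random modes on the mean slip.

      import numpy as np

      rng = np.random.default_rng(3)
      nx = 200
      xs = np.linspace(0.0, 500.0, nx)            # km along strike (assumed)
      corr_len, sigma_slip = 80.0, 2.0            # km, metres (assumed)

      # Exponential covariance matrix and its eigenpairs.
      C = sigma_slip**2 * np.exp(-np.abs(xs[:, None] - xs[None, :]) / corr_len)
      eigval, eigvec = np.linalg.eigh(C)
      idx = np.argsort(eigval)[::-1][:20]         # keep 20 dominant KL modes
      lam, phi = eigval[idx], eigvec[:, idx]

      mean_slip = 5.0 * np.exp(-((xs - 250.0) / 120.0) ** 2)  # smooth asperity

      def kl_realization():
          z = rng.standard_normal(lam.size)       # independent mode weights
          slip = mean_slip + phi @ (np.sqrt(lam) * z)
          return np.clip(slip, 0.0, None)         # disallow negative slip

      for k in range(3):
          s = kl_realization()
          print(f"scenario {k}: max slip = {s.max():.1f} m, mean = {s.mean():.1f} m")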

  3. Estimating the epidemic threshold on networks by deterministic connections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Kezan, E-mail: lkzzr@sohu.com; Zhu, Guanghu; Fu, Xinchu

    2014-12-15

    For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random with different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks using information from only the deterministic connections. In these models, generic nonuniform stochastic connections and heterogeneous community structure are also considered. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since deterministic connections are easier to detect than stochastic ones, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
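
    A numerical sketch of the spectral-analysis idea, assuming a made-up small network: for an SIS-type epidemic the threshold scales as 1/λ_max of the adjacency matrix, and evaluating λ_max on the deterministic links alone (or on the deterministic links plus the expected random links) brackets the value for a full realization, echoing the paper's bound-based estimation.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 40

      A_det = np.zeros((n, n))                 # deterministic ring of contacts
      for i in range(n):
          A_det[i, (i + 1) % n] = A_det[(i + 1) % n, i] = 1.0

      P = np.zeros((n, n))                     # probabilities of random links
      iu = np.triu_indices(n, 1)
      P[iu] = (rng.uniform(size=iu[0].size) < 0.1) * rng.uniform(0.2, 0.8, iu[0].size)
      P += P.T

      def lam_max(M):
          return np.linalg.eigvalsh(M).max()

      # One random realization of the full network.
      R = np.triu(rng.uniform(size=(n, n)) < P, 1)
      A_real = A_det + (R + R.T)

      print(f"lambda_max, deterministic only : {lam_max(A_det):.3f} (lower bound)")
      print(f"lambda_max, det + expected rand: {lam_max(A_det + P):.3f}")
      print(f"lambda_max, one realization    : {lam_max(A_real):.3f}")
      print("SIS threshold estimate: (beta/delta)_c ~ 1/lambda_max")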

  4. 3D calcite heterostructures for dynamic and deformable mineralized matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Jaeseok; Wang, Yucai; Jiang, Yuanwen

    Scales are rooted in soft tissues, and are regenerated by specialized cells. The realization of dynamic synthetic analogues with inorganic materials has been a significant challenge, because the abiological regeneration sites that could yield deterministic growth behavior are hard to form. Here we overcome this fundamental hurdle by constructing a mutable and deformable array of three-dimensional calcite heterostructures that are partially locked in silicone. Individual calcite crystals exhibit asymmetrical dumbbell shapes and are prepared by a parallel tectonic approach under ambient conditions. Furthermore, the silicone matrix immobilizes the epitaxial nucleation sites through self-templated cavities, which enables symmetry breaking in reaction dynamics and scalable manipulation of the mineral ensembles. With this platform, we devise several mineral-enabled dynamic surfaces and interfaces. For example, we show that the induced growth of minerals yields localized inorganic adhesion for biological tissue and reversible focal encapsulation for sensitive components in flexible electronics.

  5. Teleportation-based continuous variable quantum cryptography

    NASA Astrophysics Data System (ADS)

    Luiz, F. S.; Rigolin, Gustavo

    2017-03-01

    We present a continuous variable (CV) quantum key distribution (QKD) scheme based on the CV quantum teleportation of coherent states that yields a raw secret key made up of discrete variables for both Alice and Bob. This protocol preserves the efficient detection schemes of current CV technology (no single-photon detection techniques) and, at the same time, has efficient error correction and privacy amplification schemes due to the binary modulation of the key. We show that, for a certain type of incoherent attack, it is secure for almost any value of the transmittance of the optical line used by Alice to share entangled two-mode squeezed states with Bob (no 3 dB or 50% loss limitation characteristic of beam-splitting attacks). The present CVQKD protocol works deterministically (no postselection needed) with efficient direct reconciliation techniques (no reverse reconciliation), generating a secure key even beyond the 50% loss case at the incoherent-attack level.

  6. 3D calcite heterostructures for dynamic and deformable mineralized matrices

    DOE PAGES

    Yi, Jaeseok; Wang, Yucai; Jiang, Yuanwen; ...

    2017-09-11

    Scales are rooted in soft tissues, and are regenerated by specialized cells. The realization of dynamic synthetic analogues with inorganic materials has been a significant challenge, because the abiological regeneration sites that could yield deterministic growth behavior are hard to form. Here we overcome this fundamental hurdle by constructing a mutable and deformable array of three-dimensional calcite heterostructures that are partially locked in silicone. Individual calcite crystals exhibit asymmetrical dumbbell shapes and are prepared by a parallel tectonic approach under ambient conditions. Furthermore, the silicone matrix immobilizes the epitaxial nucleation sites through self-templated cavities, which enables symmetry breaking in reaction dynamics and scalable manipulation of the mineral ensembles. With this platform, we devise several mineral-enabled dynamic surfaces and interfaces. For example, we show that the induced growth of minerals yields localized inorganic adhesion for biological tissue and reversible focal encapsulation for sensitive components in flexible electronics.

  7. Clk post-transcriptional control denoises circadian transcription both temporally and spatially.

    PubMed

    Lerner, Immanuel; Bartok, Osnat; Wolfson, Victoria; Menet, Jerome S; Weissbein, Uri; Afik, Shaked; Haimovich, Daniel; Gafni, Chen; Friedman, Nir; Rosbash, Michael; Kadener, Sebastian

    2015-05-08

    The transcription factor CLOCK (CLK) is essential for the development and maintenance of circadian rhythms in Drosophila. However, little is known about how CLK levels are controlled. Here we show that Clk mRNA is strongly regulated post-transcriptionally through its 3' UTR. Flies expressing Clk transgenes without the normal 3' UTR exhibit variable CLK-driven transcription and circadian behaviour, as well as ectopic expression of CLK-target genes in the brain. In these flies, the number of key circadian neurons differs stochastically between individuals and between the two hemispheres of the same brain. Moreover, flies carrying Clk transgenes with deletions in the binding sites for the miRNA bantam have a stochastic number of pacemaker neurons, suggesting that this miRNA mediates the deterministic expression of CLK. Overall, our results demonstrate a key role of Clk post-transcriptional control in stabilizing circadian transcription, which is essential for proper development and maintenance of circadian rhythms in Drosophila.

  8. Seismic shaking scenarios in realistic 3D crustal model of Northern Italy

    NASA Astrophysics Data System (ADS)

    Molinari, I.; Morelli, A.; Basini, P.; Berbellini, A.

    2013-12-01

    Simulation of seismic wave propagation in realistic crustal structures is a fundamental tool to evaluate earthquake-generated ground shaking and assess seismic hazard. Current-generation numerical codes and modern HPC infrastructures allow for realistic simulations in complex 3D geologic structures. We apply such methodology to the Po Plain in Northern Italy -- a region with relatively rare earthquakes but large property and industrial exposure, as became clear during the two M~6 events of May 20-29, 2012. Historical seismicity is well known in this region, with maximum magnitude estimates reaching M~7, and wave field amplitudes may be significantly amplified by the presence of the very thick sedimentary basin. Our goal is to produce estimates of expected ground shaking in Northern Italy through detailed deterministic simulations of ground motion due to expected earthquakes. We defined a three-dimensional model of the Earth's crust using geo-statistical tools to merge the abundant information existing in the form of borehole data and seismic reflection profiles shot in the '70s and '80s for hydrocarbon exploration. Such information, which has been used by geologists to infer the deep structural setup, had never been merged into a 3D model for seismological simulations. We implement the model in SPECFEM3D_Cartesian with a hexahedral mesh with elements of ~2 km, which allows us to simulate waves with a minimum period of ~2 seconds. The model has been optimized through comparison between simulated and recorded seismograms for the ~20 moderate-magnitude events (Mw > 4.5) instrumentally recorded in the last 15 years. Realistic simulations in the frequency band of most common engineering relevance -- say, ~1 Hz -- at such a large scale would require an extremely detailed structural model, currently not available, and prohibitive computational resources. However, interest is growing in longer-period ground motion -- which impacts the seismic response of taller structures (Cauzzi and Faccioli, 2008) -- and it is not unusual to consider the wave field up to 20 s. In this period range, our Po Plain structural model has been shown to reproduce well the basin resonance and amplification effects at stations bordering the sedimentary plain. We then simulate seismic shaking scenarios for possible sources tied to devastating historical earthquakes known to have occurred in the region, such as the M~6 event that hit Modena in 1501 and the M~6.7 Verona quake of 1117, which caused well-documented strong effects in an unusually wide area with a radius of hundreds of kilometers. We explore different source geometries and rupture histories for each earthquake. We mainly focus our attention on the synthesis of the prominent surface waves that are highly amplified in deep sedimentary basin structures (e.g., Smerzini et al, 2011; Koketsu and Miyage, 2008). Such simulations hold high relevance because of the large local property exposure, due to extensive industrial and touristic infrastructure. We show that deterministic ground motion calculations can indeed provide information to be actively used to mitigate the effects of destructive earthquakes on critical infrastructures.

  9. Deterministic quantum state transfer and remote entanglement using microwave photons.

    PubMed

    Kurpiers, P; Magnard, P; Walter, T; Royer, B; Pechal, M; Heinsoo, J; Salathé, Y; Akin, A; Storz, S; Besse, J-C; Gasparinetti, S; Blais, A; Wallraff, A

    2018-06-01

    Sharing information coherently between nodes of a quantum network is fundamental to distributed quantum information processing. In this scheme, the computation is divided into subroutines and performed on several smaller quantum registers that are connected by classical and quantum channels [1]. A direct quantum channel, which connects nodes deterministically rather than probabilistically, achieves larger entanglement rates between nodes and is advantageous for distributed fault-tolerant quantum computation [2]. Here we implement deterministic state-transfer and entanglement protocols between two superconducting qubits fabricated on separate chips. Superconducting circuits [3] constitute a universal quantum node [4] capable of sending, receiving, storing and processing quantum information [5-8]. Our implementation is based on an all-microwave cavity-assisted Raman process [9], which entangles or transfers the qubit state of a transmon-type artificial atom [10] with a time-symmetric itinerant single photon. We transfer qubit states by absorbing these itinerant photons at the receiving node, with a probability of 98.1 ± 0.1 per cent, achieving a transfer-process fidelity of 80.02 ± 0.07 per cent for a protocol duration of only 180 nanoseconds. We also prepare remote entanglement on demand with a fidelity as high as 78.9 ± 0.1 per cent at a rate of 50 kilohertz. Our results are in excellent agreement with numerical simulations based on a master-equation description of the system. This deterministic protocol has the potential to be used for quantum computing distributed across different nodes of a cryogenic network.

  10. The experience of linking Victorian emergency medical service trauma data

    PubMed Central

    Boyle, Malcolm J

    2008-01-01

    Background The linking of a large Emergency Medical Service (EMS) dataset with the Victorian Department of Human Services (DHS) hospital datasets and the Victorian State Trauma Outcome Registry and Monitoring (VSTORM) dataset to determine patient outcomes has not previously been undertaken in Victoria. The objective of this study was to identify the linkage rate of a large EMS trauma dataset with the DHS hospital datasets and the VSTORM dataset. Methods The linking of an EMS trauma dataset to the hospital datasets utilised deterministic and probabilistic matching. The linking of three EMS trauma datasets to the VSTORM dataset utilised deterministic, probabilistic and manual matching. Results 66.7% of patients from the EMS dataset were located in the VEMD. 96% of patients defined in the VEMD as being admitted to hospital were located in the VAED. 3.7% of patients located in the VAED could not be found in the VEMD because some hospitals do not report to the VEMD. For the EMS datasets, manual matching compared to deterministic matching produced a 146% increase in successful links to VSTORM for the trauma profile dataset, a 221% increase for the mechanism-of-injury-only dataset, and a 46% increase for the sudden deterioration dataset. Conclusion This study has demonstrated that EMS data can be successfully linked to other health-related datasets using deterministic and probabilistic matching with varying levels of success. The quality of EMS data needs to be improved to ensure better linkage success rates with other health-related datasets. PMID:19014622
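
    A minimal sketch of the two linkage styles the study contrasts, with made-up records and weights: deterministic matching links only on exact agreement of key fields, while probabilistic (Fellegi-Sunter-style) matching scores partial agreement and accepts links above a threshold, which is why it can recover matches that exact matching misses.

      # Made-up records: a surname transcription difference defeats exact matching.
      ems = {"surname": "SMITH", "dob": "1970-03-02", "postcode": "3000"}
      hosp = {"surname": "SMYTH", "dob": "1970-03-02", "postcode": "3000"}

      def deterministic_match(a, b):
          """Link only on exact agreement of all key fields."""
          return all(a[f] == b[f] for f in ("surname", "dob", "postcode"))

      def probabilistic_score(a, b):
          """Fellegi-Sunter-style score: sum agreement/disagreement weights."""
          weights = {"surname": (4.0, -2.0), "dob": (6.0, -4.0),
                     "postcode": (2.5, -1.0)}           # hypothetical weights
          return sum(w_agree if a[f] == b[f] else w_dis
                     for f, (w_agree, w_dis) in weights.items())

      THRESHOLD = 5.0  # hypothetical acceptance threshold
      print("deterministic:", deterministic_match(ems, hosp))      # False
      score = probabilistic_score(ems, hosp)
      print(f"probabilistic score = {score:.1f} -> "
            f"{'link' if score > THRESHOLD else 'no link'}")       # link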

  11. An Alternative Approach to School Development: The Children Are the Evidence

    ERIC Educational Resources Information Center

    Drummond, Mary Jane; Hart, Susan

    2013-01-01

    In this article, the authors describe the alternative approach to school development taken by the head teacher and staff of a primary school in Hertfordshire. Their approach is based on a resolutely optimistic and anti-determinist view of every child's capacity to learn, and their commitment to working as a school-wide community of learners. The…

  12. Transformation formulas relating geodetic coordinates to a tangent to Earth, plane coordinate system

    NASA Technical Reports Server (NTRS)

    Credeur, L.

    1981-01-01

    Formulas and their approximations were developed to map geodetic position to an Earth-tangent plane with an airport-centered rectangular coordinate system. The transformations were developed for use in a terminal-area air traffic model with deterministic aircraft traffic. The exact equations and their approximations were used in terminal configured vehicle precision microwave landing system navigation experiments.
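
    A sketch of the kind of mapping the report describes, using the modern WGS-84 ellipsoid as a stand-in for the reference ellipsoid of the 1981 report: geodetic coordinates are converted to Earth-centred Cartesian (ECEF) coordinates and then rotated into an east-north-up plane tangent at the airport origin.

      import math

      # WGS-84 constants (stand-ins; the original report predates WGS-84).
      A = 6378137.0                # semi-major axis, m
      F = 1.0 / 298.257223563      # flattening
      E2 = F * (2 - F)             # first eccentricity squared

      def geodetic_to_ecef(lat, lon, h):
          lat, lon = math.radians(lat), math.radians(lon)
          N = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)  # prime vertical radius
          x = (N + h) * math.cos(lat) * math.cos(lon)
          y = (N + h) * math.cos(lat) * math.sin(lon)
          z = (N * (1 - E2) + h) * math.sin(lat)
          return x, y, z

      def ecef_to_enu(p, lat0, lon0, h0):
          """Rotate ECEF offsets into the tangent plane at the airport origin."""
          ox, oy, oz = geodetic_to_ecef(lat0, lon0, h0)
          dx, dy, dz = p[0] - ox, p[1] - oy, p[2] - oz
          la, lo = math.radians(lat0), math.radians(lon0)
          e = -math.sin(lo) * dx + math.cos(lo) * dy
          n = (-math.sin(la) * math.cos(lo) * dx
               - math.sin(la) * math.sin(lo) * dy + math.cos(la) * dz)
          u = (math.cos(la) * math.cos(lo) * dx
               + math.cos(la) * math.sin(lo) * dy + math.sin(la) * dz)
          return e, n, u

      aircraft = geodetic_to_ecef(37.47, -77.33, 1500.0)    # hypothetical fix
      e, n, u = ecef_to_enu(aircraft, 37.50, -77.32, 50.0)  # airport origin
      print(f"east={e:,.0f} m  north={n:,.0f} m  up={u:,.0f} m")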

  13. Expectancy Learning from Probabilistic Input by Infants

    PubMed Central

    Romberg, Alexa R.; Saffran, Jenny R.

    2013-01-01

    Across the first few years of life, infants readily extract many kinds of regularities from their environment, and this ability is thought to be central to development in a number of domains. Numerous studies have documented infants' ability to recognize deterministic sequential patterns. However, little is known about the processes infants use to build and update representations of structure in time, and how infants represent patterns that are not completely predictable. The present study investigated how infants' expectations for a simple structure develop over time, and how infants update their representations with new information. We measured 12-month-old infants' anticipatory eye movements to targets that appeared in one of two possible locations. During the initial phase of the experiment, infants either saw targets that appeared consistently in the same location (Deterministic condition) or probabilistically in either location, with one side more frequent than the other (Probabilistic condition). After this initial divergent experience, both groups saw the same sequence of trials for the rest of the experiment. The results show that infants readily learn from both deterministic and probabilistic input, with infants in both conditions reliably predicting the most likely target location by the end of the experiment. Local context had a large influence on behavior: infants adjusted their predictions to reflect changes in the target location on the previous trial. This flexibility was particularly evident in infants with more variable prior experience (the Probabilistic condition). The results provide some of the first data showing how infants learn in real time. PMID:23439947

  14. Stochastic climate dynamics: Stochastic parametrizations and their global effects

    NASA Astrophysics Data System (ADS)

    Ghil, Michael

    2010-05-01

    A well-known difficulty in modeling the atmosphere and oceans' general circulation is the limited, albeit increasing, resolution possible in the numerical solution of the governing partial differential equations. While the mass, energy and momentum of an individual cloud in the atmosphere, or convection chimney in the oceans, are negligible, their combined effects over long times are not. Until recently, small, subgrid-scale processes were represented in general circulation models (GCMs) by deterministic "parametrizations." While A. Arakawa and associates realized over three decades ago the conceptual need for ensembles of clouds in such parametrizations, it is only very recently that truly stochastic parametrizations have been introduced into GCMs and weather prediction models. These parametrizations essentially transform a deterministic autonomous system into a non-autonomous one, subject to random forcing. To study systematically the long-term effects of such forcing, one has to rely on the theory of random dynamical systems (RDS). This theory allows one to consider the detailed geometric structure of the random attractors associated with nonlinear, stochastically perturbed systems. These attractors extend the concept of strange attractors from autonomous dynamical systems to non-autonomous systems with random forcing. To illustrate the essence of the theory, its concepts and methods, we carry out a high-resolution numerical study of two "toy" models in their respective phase spaces. This study allows one to obtain a good approximation of their global random attractors, as well as of the time-dependent invariant measures supported by these attractors. The first of the two models studied herein is the Arnol'd family of circle maps in the presence of noise. The maps' fine-grained, resonant landscape --- associated with Arnol'd tongues --- is smoothed by the noise, thus permitting a comparison with the observable aspects of the "Devil's staircase" that arises in modeling the El Nino-Southern Oscillation (ENSO). These results are confirmed by studying a "French garden" that is obtained by smoothing a "Devil's quarry." Such a quarry results from coupling two circle maps, and random forcing leads to a smoothed version thereof. We thus suspect that stochastic parametrizations will stabilize the sensitive dependence on parameters that has been noticed in the development of GCMs. This talk represents joint work with Mickael D. Chekroun, D. Kondrashov, Eric Simonnet and I. Zaliapin. Several other talks and posters complement the results presented here and provide further insights into RDS theory and its application to the geosciences.
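
    A numerical sketch of the noisy Arnol'd circle map discussed above, with illustrative parameter values: the rotation number as a function of the driving frequency traces the Devil's staircase, and additive noise smooths the mode-locked steps.

      import numpy as np

      rng = np.random.default_rng(5)

      def rotation_number(omega, K, noise, n_steps=4000):
          """Average phase advance of the (noisy) circle map."""
          theta, lift = 0.25, 0.0
          for _ in range(n_steps):
              dtheta = (omega - K / (2 * np.pi) * np.sin(2 * np.pi * theta)
                        + noise * rng.standard_normal())
              lift += dtheta                  # unwrapped phase advance
              theta = (theta + dtheta) % 1.0  # phase on the circle
          return lift / n_steps

      # Near the 1/3 tongue the clean map locks; noise smooths the step.
      for omega in (0.30, 0.33, 1 / 3, 0.35):
          clean = rotation_number(omega, K=0.9, noise=0.0)
          noisy = rotation_number(omega, K=0.9, noise=0.02)
          print(f"omega={omega:.3f}  rho(clean)={clean:.4f}  rho(noisy)={noisy:.4f}")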

  15. A Hybrid Method of Moment Equations and Rate Equations to Modeling Gas-Grain Chemistry

    NASA Astrophysics Data System (ADS)

    Pei, Y.; Herbst, E.

    2011-05-01

    Grain surfaces play a crucial role in catalyzing many important chemical reactions in the interstellar medium (ISM). The deterministic rate equation (RE) method has often been used to simulate surface chemistry, but this method becomes inaccurate when the number of reacting particles per grain is typically less than one, which can occur in the ISM. In that regime, stochastic approaches such as the master equation are adopted. However, these methods have mostly been constrained to small chemical networks due to the large amounts of processor time and computer power required. In this study, we present a hybrid method consisting of the moment-equation approximation to the stochastic master-equation approach and deterministic rate equations to treat a gas-grain model of homogeneous cold cloud cores with time-independent physical conditions. In this model, we use the standard OSU gas-phase network (version OSU2006V3), which involves 458 gas-phase species and more than 4000 reactions, and treat it with deterministic rate equations. A medium-sized surface reaction network, consisting of 21 species and 19 reactions, accounts for the production of stable molecules such as H_2O, CO, CO_2, H_2CO, CH_3OH, NH_3 and CH_4. These surface reactions are treated by a hybrid method of moment equations (Barzel & Biham 2007) and rate equations: when the abundance of a surface species is lower than a specific threshold, say one per grain, we use the "stochastic" moment equations to simulate the evolution; when its abundance goes above this threshold, we use the rate equations. A continuity technique is utilized to secure a smooth transition between these two methods. We have run chemical simulations for times up to 10^8 yr at three temperatures: 10 K, 15 K, and 20 K. The results will be compared with those generated from (1) a completely deterministic model that uses rate equations for both gas-phase and grain-surface chemistry, (2) the method of modified rate equations (Garrod 2008), which partially takes into account the stochastic effect for surface reactions, and (3) the master-equation approach solved using a Monte Carlo technique. At 10 K and standard grain sizes, our model results agree well with the above three methods, while discrepancies appear at higher temperatures and smaller grain sizes.

  16. An in-situ stimulation experiment in crystalline rock - assessment of induced seismicity levels during stimulation and related hazard for nearby infrastructure

    NASA Astrophysics Data System (ADS)

    Gischig, Valentin; Broccardo, Marco; Amann, Florian; Jalali, Mohammadreza; Esposito, Simona; Krietsch, Hannes; Doetsch, Joseph; Madonna, Claudio; Wiemer, Stefan; Loew, Simon; Giardini, Domenico

    2016-04-01

    A decameter-scale in-situ stimulation experiment is currently being performed at the Grimsel Test Site in Switzerland by the Swiss Competence Center for Energy Research - Supply of Electricity (SCCER-SoE). The underground research laboratory lies in crystalline rock at a depth of 480 m and exhibits well-documented geology that presents some analogies with the crystalline basement targeted for the exploitation of deep geothermal energy resources in Switzerland. The goal is to perform a series of stimulation experiments, spanning from hydraulic fracturing to controlled fault-slip experiments, in an experimental volume approximately 30 m in diameter. The experiments will contribute to a better understanding of hydro-mechanical phenomena and induced seismicity associated with high-pressure fluid injections. Comprehensive monitoring during stimulation will include observation of injection rate and pressure, pressure propagation in the reservoir, permeability enhancement, 3D dislocation along the faults, rock mass deformation near the fault zone, as well as micro-seismicity. The experimental volume is surrounded by other in-situ experiments (at 50 to 500 m distance) and by infrastructure of the local hydropower company (at ~100 m to several kilometres distance). Although it is generally agreed among stakeholders that levels of induced seismicity may be low given the small total injection volumes of less than 1 m3, detailed analysis of the potential impact of the stimulation on other experiments and surrounding infrastructure is essential to ensure operational safety. In this contribution, we present a procedure for estimating induced seismic hazard in an experimental situation that is atypical for injection-induced seismicity in terms of injection volumes, injection depth and proximity to affected objects. Both deterministic and probabilistic methods are employed to estimate the maximum possible and the maximum expected induced earthquake magnitudes. The deterministic methods are based on McGarr's upper limit for the maximum induced seismic moment. The probabilistic methods rely on estimates of Shapiro's seismogenic index and on seismicity rates from past stimulation experiments scaled to the injection volumes of interest. Using rate-and-state frictional modelling coupled to a hydro-mechanical fracture flow model, we demonstrate that large uncontrolled rupture events are unlikely to occur and that the deterministic upper limits may be sufficiently conservative. The proposed workflow can be applied to similar injection experiments for which hazard to nearby infrastructure may limit the experimental design.
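
    The deterministic bound mentioned above is simple enough to work through. Assuming a typical crystalline-rock shear modulus, McGarr's limit caps the induced seismic moment at M0 ≈ G ΔV (shear modulus times net injected volume), which for the experiment's sub-cubic-metre injection volume translates to roughly magnitude 1:

      import math

      # McGarr's deterministic upper bound: M0_max ≈ G * dV, converted to
      # moment magnitude with Mw = (2/3) * (log10(M0) - 9.1).
      G = 30e9          # Pa, typical crystalline-rock shear modulus (assumed)
      dV = 1.0          # m^3; the experiment injects less than 1 m^3 in total

      M0_max = G * dV                                   # N·m
      Mw_max = (2.0 / 3.0) * (math.log10(M0_max) - 9.1)
      print(f"M0_max = {M0_max:.1e} N·m  ->  Mw_max ≈ {Mw_max:.2f}")
      # roughly Mw 1, consistent with the expectation of low seismicity levels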

  17. Characterization of normality of chaotic systems including prediction and detection of anomalies

    NASA Astrophysics Data System (ADS)

    Engler, Joseph John

    Accurate prediction and control pervade domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented by linear, periodic or random data. It has been shown that such systems exhibit deterministic chaos. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear to be random or quasi-periodic. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions, manifested through rapid divergence of states initially close to one another. Due to this characterization, it has been deemed impossible to accurately predict future states of these systems over longer time scales. Fortunately, the deterministic nature of these systems allows for accurate short-term predictions, provided the dynamics of the system are well understood. This fact has been exploited in the research community and has resulted in various algorithms for short-term prediction. Detection of normality in deterministically chaotic systems is critical to understanding the system sufficiently well to be able to predict future states. Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may result in bifurcation of the normal states, further complicates the problem. The detection of anomalies and prediction of future states of the chaotic system allow for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detecting anomalous behavior, and more accurately predicting future states of the system. Additionally, the ability to detect subtle system state changes is discussed. The dissertation addresses these goals by proposing new representational techniques and novel prediction methodologies. The value and efficiency of these methods are explored in various case studies. An overview of chaotic systems is presented, with examples taken from the real world. A representation schema for rapid understanding of the various states of deterministically chaotic systems is presented. This schema is then used to detect anomalies and system state changes. Additionally, a novel prediction methodology which utilizes Lyapunov exponents to facilitate longer-term prediction accuracy is presented and compared with other nonlinear prediction methodologies. These novel methodologies are then demonstrated on applications such as wind energy, cyber security and classification of social networks.
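
    A minimal sketch of the Lyapunov-exponent machinery such prediction methods build on, using the logistic map as a stand-in for the systems studied in the dissertation: the largest exponent is estimated by averaging log|f'(x)| along an orbit, and it converts an initial-condition error into a rough prediction horizon.

      import math

      def f(x):  return 4.0 * x * (1.0 - x)   # fully chaotic logistic map
      def df(x): return 4.0 - 8.0 * x         # its derivative

      x = 0.123
      for _ in range(1000):                   # discard the transient
          x = f(x)

      lyap, n = 0.0, 100000
      for _ in range(n):                      # average log-stretching rate
          lyap += math.log(abs(df(x)))
          x = f(x)
      lyap /= n
      print(f"largest Lyapunov exponent ~ {lyap:.4f} (theory: ln 2 = 0.6931)")

      err0, tol = 1e-8, 0.1                   # initial error, error tolerance
      horizon = math.log(tol / err0) / lyap   # error grows like err0 * e^(lyap*t)
      print(f"prediction horizon ~ {horizon:.0f} iterations for a 1e-8 error")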

  18. Coupled multi-group neutron photon transport for the simulation of high-resolution gamma-ray spectroscopy applications

    NASA Astrophysics Data System (ADS)

    Burns, Kimberly Ann

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples. In these applications, high-resolution gamma-ray spectrometers are used to preserve as much information as possible about the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used modeling tool for this type of problem, but computational times for many problems can be prohibitive. This work explores the use of coupled Monte Carlo-deterministic methods for the simulation of neutron-induced photons for high-resolution gamma-ray spectroscopy applications. RAdiation Detection Scenario Analysis Toolbox (RADSAT), a code which couples deterministic and Monte Carlo transport to perform radiation detection scenario analysis in three dimensions [1], was used as the building block for the methods derived in this work. RADSAT was capable of performing coupled deterministic-Monte Carlo simulations for gamma-only and neutron-only problems. The purpose of this work was to develop the methodology necessary to perform coupled neutron-photon calculations and add this capability to RADSAT. Performing coupled neutron-photon calculations requires four main steps: the deterministic neutron transport calculation, the neutron-induced photon spectrum calculation, the deterministic photon transport calculation, and the Monte Carlo detector response calculation. The necessary requirements for each of these steps were determined. A major challenge in utilizing multigroup deterministic transport methods for neutron-photon problems was maintaining the discrete neutron-induced photon signatures throughout the simulation. Existing coupled neutron-photon cross-section libraries and the methods used to produce neutron-induced photons were unsuitable for high-resolution gamma-ray spectroscopy applications. Central to this work was the development of a method for generating multigroup neutron-photon cross-sections in a way that separates the discrete and continuum photon emissions so the neutron-induced photon signatures were preserved. The RADSAT-NG cross-section library was developed as a specialized multigroup neutron-photon cross-section set for the simulation of high-resolution gamma-ray spectroscopy applications. The methodology and cross sections were tested using code-to-code comparison with MCNP5 [2] and NJOY [3]. A simple benchmark geometry was used for all cases compared with MCNP. The geometry consists of a cubical sample with a 252Cf neutron source on one side and a HPGe gamma-ray spectrometer on the opposing side. Different materials were examined in the cubical sample: polyethylene (C2H4), P, N, O, and Fe. The cross sections for each of the materials were compared to cross sections collapsed using NJOY. Comparisons of the volume-averaged neutron flux within the sample, volume-averaged photon flux within the detector, and high-purity gamma-ray spectrometer response (only for polyethylene) were completed using RADSAT and MCNP. The code-to-code comparisons show promising results for the coupled Monte Carlo-deterministic method. 
The RADSAT-NG cross-section production method showed good agreement with NJOY for all materials considered, although some additional work is needed in the resonance region and in the first and last energy bins. Some cross-section discrepancies existed in the lowest and highest energy bins, but the overall shape and magnitude of the two methods agreed. For the volume-averaged photon flux within the detector, the five most intense lines typically agree to within approximately 5% of the MCNP-calculated flux for all materials considered. The agreement in the code-to-code comparison cases demonstrates a proof of concept of the method for use in RADSAT for coupled neutron-photon problems in high-resolution gamma-ray spectroscopy applications. One of the primary motivators for using the coupled method over the pure Monte Carlo method is the potential for significantly lower computational times. For the code-to-code comparison cases, the run times for RADSAT were approximately 25--500 times shorter than for MCNP, as shown in Table 1. This assumed a 40 mCi 252Cf neutron source and 600 seconds of "real-world" measurement time. The only variance reduction technique implemented in the MCNP calculation was forward biasing of the source toward the sample target. Improved MCNP runtimes could be achieved with the addition of more advanced variance reduction techniques.
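
    The basic operation behind any such multigroup library is the flux-weighted group collapse, sigma_g = ∫ sigma(E) phi(E) dE / ∫ phi(E) dE over each group. A sketch with a toy cross-section shape and weighting spectrum (not RADSAT-NG or NJOY data):

      import numpy as np

      E = np.logspace(-3, 7, 2000)                 # eV, fine energy grid
      sigma = 5.0 / np.sqrt(E) + 2.0               # toy 1/v capture + constant
      phi = E * np.exp(-E / 1.3e6)                 # toy weighting spectrum

      group_edges = np.array([1e-3, 1.0, 1e3, 1e5, 1e7])  # 4 coarse groups
      for g in range(len(group_edges) - 1):
          m = (E >= group_edges[g]) & (E < group_edges[g + 1])
          # flux-weighted group constant over this energy group
          sig_g = np.trapz(sigma[m] * phi[m], E[m]) / np.trapz(phi[m], E[m])
          print(f"group {g}: {group_edges[g]:.0e}-{group_edges[g+1]:.0e} eV  "
                f"sigma_g = {sig_g:.3f} b")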

  19. Controlling transient chaos in deterministic flows with applications to electrical power systems and ecology

    NASA Astrophysics Data System (ADS)

    Dhamala, Mukeshwar; Lai, Ying-Cheng

    1999-02-01

    Transient chaos is a common phenomenon in nonlinear dynamics of many physical, biological, and engineering systems. In applications it is often desirable to maintain sustained chaos even in parameter regimes of transient chaos. We address how to sustain transient chaos in deterministic flows. We utilize a simple and practical method, based on extracting the fundamental dynamics from time series, to maintain chaos. The method can result in control of trajectories from almost all initial conditions in the original basin of the chaotic attractor from which transient chaos is created. We apply our method to three problems: (1) voltage collapse in electrical power systems, (2) species preservation in ecology, and (3) elimination of undesirable bursting behavior in a chemical reaction system.

  20. FACTORS INFLUENCING TOTAL DIETARY EXPOSURES OF YOUNG CHILDREN

    EPA Science Inventory

    A deterministic model was developed to identify the critical input parameters needed to assess dietary intakes of young children. The model was used as a framework for understanding the important factors in data collection and data analysis. Factors incorporated into the model i...
