NASA Astrophysics Data System (ADS)
Mukherjee, L.; Zhai, P.; Hu, Y.; Winker, D. M.
2016-12-01
Among the primary factors which determine the polarized radiation field of a turbid medium are the single scattering properties of the medium. When multiple types of scatterers are present, their single scattering properties need to be properly mixed in order to find solutions to the vector radiative transfer (VRT) theory. VRT solvers can be divided into two types: deterministic and stochastic. A deterministic solver can only accept one set of single scattering properties in its smallest discretized spatial volume; when the medium contains more than one kind of scatterer, their single scattering properties are averaged and then used as input to the deterministic solver. A stochastic solver, by contrast, can work with different kinds of scatterers explicitly. In this work, two different mixing schemes are studied using the Successive Order of Scattering (SOS) and Monte Carlo (MC) methods: one scheme is used for the deterministic solver and the other for the stochastic Monte Carlo method. It is found that the solutions from the two VRT solvers using the two different mixing schemes agree with each other extremely well. This confirms the equivalence of the two mixing schemes and also provides a benchmark VRT solution for the medium studied.
Optimization of contrast resolution by genetic algorithm in ultrasound tissue harmonic imaging.
Ménigot, Sébastien; Girault, Jean-Marc
2016-09-01
The development of ultrasound imaging techniques such as pulse inversion has improved tissue harmonic imaging. Nevertheless, no recommendation has been made to date for the design of the waveform transmitted through the medium being explored. Our aim was therefore to find, automatically and without a priori information, the optimal "imaging" wave that maximizes the contrast resolution. To avoid assumptions about the waveform, a genetic algorithm explored the medium by transmitting stochastic "explorer" waves. These stochastic signals could moreover be constrained by the type of generator available (bipolar or arbitrary). To implement this, we modified the conventional pulse inversion imaging system to include feedback, so that the method optimized the contrast resolution by adaptively selecting the samples of the excitation. In simulation, we benchmarked the contrast effectiveness of the best transmitted stochastic commands against the usual fixed-frequency command. The optimization method converged quickly, after around 300 iterations, to the same optimal region. These results were confirmed experimentally: the contrast resolution measured on a radiofrequency line could be improved by 6% with a bipolar generator and by 15% with an arbitrary waveform generator. Copyright © 2016 Elsevier B.V. All rights reserved.
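A minimal sketch of the closed-loop idea described above, assuming a simple genetic algorithm over the excitation samples: `contrast_resolution` is a hypothetical stand-in for the score that would, in the real system, come from the pulse-inversion imaging chain, and the population size, mutation rate, and waveform length are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SAMPLES, POP, GENERATIONS = 64, 20, 50

def contrast_resolution(waveform):
    """Placeholder fitness: in the real system this would be the contrast
    resolution measured after pulse-inversion imaging with this excitation."""
    target = np.sin(2 * np.pi * 3 * np.arange(N_SAMPLES) / N_SAMPLES)
    return -np.mean((waveform - target) ** 2)

def evolve(bipolar=False):
    # Stochastic "explorer" waves: bipolar (+/-1) or arbitrary amplitudes.
    pop = (rng.choice([-1.0, 1.0], (POP, N_SAMPLES)) if bipolar
           else rng.uniform(-1, 1, (POP, N_SAMPLES)))
    for _ in range(GENERATIONS):
        fitness = np.array([contrast_resolution(w) for w in pop])
        parents = pop[np.argsort(fitness)[-POP // 2:]]          # selection
        children = []
        for _ in range(POP - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, N_SAMPLES)                    # crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mutate = rng.random(N_SAMPLES) < 0.05               # mutation
            child[mutate] = (rng.choice([-1.0, 1.0], mutate.sum()) if bipolar
                             else rng.uniform(-1, 1, mutate.sum()))
            children.append(child)
        pop = np.vstack([parents] + children)
    best = pop[np.argmax([contrast_resolution(w) for w in pop])]
    return best, contrast_resolution(best)

if __name__ == "__main__":
    for mode in (True, False):
        _, score = evolve(bipolar=mode)
        print(f"bipolar={mode}: best fitness {score:.4f}")
```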
Mechanisms for the target patterns formation in a stochastic bistable excitable medium
NASA Astrophysics Data System (ADS)
Verisokin, Andrey Yu.; Verveyko, Darya V.; Postnov, Dmitry E.
2018-04-01
We study the features of formation and evolution of the spatiotemporal chaotic regime generated by autonomous pacemakers in excitable deterministic and stochastic bistable active media, using the example of the FitzHugh-Nagumo biological neuron model under discrete medium conditions. The following possible mechanisms for the formation of autonomous pacemakers have been studied: (1) a time-dependent external force applied to a small region of the medium, and (2) the geometry of the solution region (the medium contains regions with Dirichlet or Neumann boundaries). We explore the conditions for the emergence of pacemakers inducing target patterns in a stochastic bistable excitable system and propose an algorithm for their analysis.
TRUST. I. A 3D externally illuminated slab benchmark for dust radiative transfer
NASA Astrophysics Data System (ADS)
Gordon, K. D.; Baes, M.; Bianchi, S.; Camps, P.; Juvela, M.; Kuiper, R.; Lunttila, T.; Misselt, K. A.; Natale, G.; Robitaille, T.; Steinacker, J.
2017-07-01
Context. The radiative transport of photons through arbitrary three-dimensional (3D) structures of dust is a challenging problem due to the anisotropic scattering of dust grains and strong coupling between different spatial regions. The radiative transfer problem in 3D is solved using Monte Carlo or Ray Tracing techniques as no full analytic solution exists for the true 3D structures. Aims: We provide the first 3D dust radiative transfer benchmark composed of a slab of dust with uniform density externally illuminated by a star. This simple 3D benchmark is explicitly formulated to provide tests of the different components of the radiative transfer problem including dust absorption, scattering, and emission. Methods: The details of the external star, the slab itself, and the dust properties are provided. This benchmark includes models with a range of dust optical depths fully probing cases that are optically thin at all wavelengths to optically thick at most wavelengths. The dust properties adopted are characteristic of the diffuse Milky Way interstellar medium. This benchmark includes solutions for the full dust emission including single photon (stochastic) heating as well as two simplifying approximations: One where all grains are considered in equilibrium with the radiation field and one where the emission is from a single effective grain with size-distribution-averaged properties. A total of six Monte Carlo codes and one Ray Tracing code provide solutions to this benchmark. Results: The solution to this benchmark is given as global spectral energy distributions (SEDs) and images at select diagnostic wavelengths from the ultraviolet through the infrared. Comparison of the results revealed that the global SEDs are consistent on average to a few percent for all but the scattered stellar flux at very high optical depths. The image results are consistent within 10%, again except for the stellar scattered flux at very high optical depths. The lack of agreement between different codes of the scattered flux at high optical depths is quantified for the first time. Convergence tests using one of the Monte Carlo codes illustrate the sensitivity of the solutions to various model parameters. Conclusions: We provide the first 3D dust radiative transfer benchmark and validate the accuracy of this benchmark through comparisons between multiple independent codes and detailed convergence tests.
Scattering theory of stochastic electromagnetic light waves.
Wang, Tao; Zhao, Daomu
2010-07-15
We generalize scattering theory to stochastic electromagnetic light waves. It is shown that when a stochastic electromagnetic light wave is scattered from a medium, the properties of the scattered field can be characterized by a 3 x 3 cross-spectral density matrix. An example of scattering of a spatially coherent electromagnetic light wave from a deterministic medium is discussed. Some interesting phenomena emerge, including the changes of the spectral degree of coherence and of the spectral degree of polarization of the scattered field.
NASA Astrophysics Data System (ADS)
Marchetti, Luca; Priami, Corrado; Thanh, Vo Hong
2016-07-01
This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating performance and accuracy of HRSSA against other state of the art algorithms.
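The rejection-based selection that RSSA relies on can be illustrated with a small, self-contained sketch. This is not the published HRSSA/RSSA code: it applies the same propensity-bound/thinning idea to a toy birth-death process, and the fluctuation-interval parameter `delta` and the rate constants are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy network: birth-death of a single species S.
#   R1: 0 -> S  (rate k1)        R2: S -> 0  (rate k2 * #S)
k1, k2, delta = 10.0, 0.1, 0.2   # delta defines the fluctuation interval

def propensities(x):
    return np.array([k1, k2 * x])

def rssa_like(x0=0, t_end=100.0):
    x, t = x0, 0.0
    while t < t_end:
        # Fluctuation interval and propensity bounds (propensities are monotone in x).
        x_lo, x_hi = int(x * (1 - delta)), int(np.ceil(x * (1 + delta))) + 1
        a_lo, a_ub = propensities(x_lo), propensities(x_hi)
        a0_ub = a_ub.sum()
        while x_lo <= x <= x_hi and t < t_end:
            t += rng.exponential(1.0 / a0_ub)        # candidate event (thinning)
            j = rng.choice(2, p=a_ub / a0_ub)        # candidate reaction
            u = rng.random()
            # Cheap test with the lower bound often avoids recomputing exact propensities.
            if u <= a_lo[j] / a_ub[j] or u <= propensities(x)[j] / a_ub[j]:
                x += 1 if j == 0 else -1             # accept and fire
        # State left the interval: recompute bounds (the expensive step).
    return x

if __name__ == "__main__":
    samples = [rssa_like() for _ in range(200)]
    print("mean copy number ~", np.mean(samples), "(theory: k1/k2 =", k1 / k2, ")")
```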
Computational singular perturbation analysis of stochastic chemical systems with stiffness
NASA Astrophysics Data System (ADS)
Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; Najm, Habib N.
2017-04-01
Computational singular perturbation (CSP) is a useful method for analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum, deterministic level. CSP is not, however, directly applicable to chemical reaction systems at the micro- or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used not only to construct accurately and efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also to analyze the dynamics of the reduced stochastic reaction systems. The algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.
Stochastic fluctuations and the detectability limit of network communities.
Floretta, Lucio; Liechti, Jonas; Flammini, Alessandro; De Los Rios, Paolo
2013-12-01
We have analyzed the detectability limits of network communities in the framework of the popular Girvan and Newman benchmark. By carefully taking into account the inevitable stochastic fluctuations that affect the construction of each and every instance of the benchmark, we come to the conclusion that the native, putative partition of the network is completely lost even before the in-degree/out-degree ratio becomes equal to that of a structureless Erdös-Rényi network. We develop a simple iterative scheme, analytically well described by an infinite branching process, to provide an estimate of the true detectability limit. Using various algorithms based on modularity optimization, we show that all of them behave (semiquantitatively) in the same way, with the same functional form of the detectability threshold as a function of the network parameters. Because the same behavior is also found for further modularity-optimization methods and for methods based on different heuristics, we conclude that a correct definition of the detectability limit must indeed take into account the stochastic fluctuations of the network construction.
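A rough illustration of the kind of experiment described above, assuming networkx and scikit-learn are available: planted-partition benchmark graphs are generated from scratch with numpy, a greedy modularity method recovers communities, and the agreement with the planted labels is measured over several stochastic realizations. The group sizes, edge probabilities, and the greedy heuristic are illustrative stand-ins for the paper's benchmark settings and modularity-optimization algorithms.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(2)

def planted_partition(n_groups=4, group_size=32, p_in=0.3, p_out=0.05):
    """One stochastic realization of a Girvan-Newman-style benchmark graph."""
    n = n_groups * group_size
    labels = np.repeat(np.arange(n_groups), group_size)
    same = labels[:, None] == labels[None, :]
    p = np.where(same, p_in, p_out)
    upper = np.triu(rng.random((n, n)) < p, k=1)      # sample each edge once
    adj = upper | upper.T
    return nx.from_numpy_array(adj.astype(int)), labels

def recovered_nmi(p_out):
    G, truth = planted_partition(p_out=p_out)
    found = greedy_modularity_communities(G)
    pred = np.empty(len(truth), dtype=int)
    for c, nodes in enumerate(found):
        pred[list(nodes)] = c
    return normalized_mutual_info_score(truth, pred)

if __name__ == "__main__":
    # As p_out grows toward p_in the planted partition becomes undetectable, and
    # repeated realizations expose the stochastic fluctuations discussed above.
    for p_out in (0.02, 0.08, 0.15, 0.25):
        scores = [recovered_nmi(p_out) for _ in range(5)]
        print(f"p_out={p_out:.2f}  NMI={np.mean(scores):.2f} +/- {np.std(scores):.2f}")
```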
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marchetti, Luca, E-mail: marchetti@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; University of Trento, Department of Mathematics
This paper introduces HRSSA (Hybrid Rejection-based Stochastic Simulation Algorithm), a new efficient hybrid stochastic simulation algorithm for spatially homogeneous biochemical reaction networks. HRSSA is built on top of RSSA, an exact stochastic simulation algorithm which relies on propensity bounds to select next reaction firings and to reduce the average number of reaction propensity updates needed during the simulation. HRSSA exploits the computational advantage of propensity bounds to manage time-varying transition propensities and to apply dynamic partitioning of reactions, which constitute the two most significant bottlenecks of hybrid simulation. A comprehensive set of simulation benchmarks is provided for evaluating performance and accuracy of HRSSA against other state of the art algorithms.
Target Lagrangian kinematic simulation for particle-laden flows.
Murray, S; Lightstone, M F; Tullis, S
2016-09-01
The target Lagrangian kinematic simulation method was motivated as a stochastic Lagrangian particle model that synthesizes turbulence structure better than stochastic separated flow models. In this method, particle trajectories are constructed from synthetic turbulent-like fields that conform to a target Lagrangian integral timescale. In addition to recovering the expected Lagrangian properties of fluid tracers, the method is shown to reproduce the crossing-trajectories and continuity effects, in agreement with an experimental benchmark.
Learning in Stochastic Bit Stream Neural Networks.
van Daalen, Max; Shawe-Taylor, John; Zhao, Jieyu
1996-08-01
This paper presents learning techniques for a novel feedforward stochastic neural network. The model uses stochastic weights and a "bit stream" data representation. It has a clean, analysable functionality and is attractive for its great potential to be implemented in hardware using standard digital VLSI technology. The design allows simulation at three different levels, and learning techniques are described for each level; the lowest level corresponds to on-chip learning. Simulation results on the three benchmark MONK's problems and on handwritten digit recognition with a clean set of 500 16 x 16 pixel digits demonstrate that the new model is powerful enough for real-world applications. Copyright 1996 Elsevier Science Ltd.
Computational singular perturbation analysis of stochastic chemical systems with stiffness
Wang, Lijin; Han, Xiaoying; Cao, Yanzhao; ...
2017-01-25
Computational singular perturbation (CSP) is a useful method for analysis, reduction, and time integration of stiff ordinary differential equation systems. It has found dominant utility, in particular, in chemical reaction systems with a large range of time scales at the continuum, deterministic level. CSP is not, however, directly applicable to chemical reaction systems at the micro- or meso-scale, where stochasticity plays a non-negligible role and thus has to be taken into account. In this work we develop a novel stochastic computational singular perturbation (SCSP) analysis and time integration framework, and an associated algorithm, that can be used not only to construct accurately and efficiently the numerical solutions to stiff stochastic chemical reaction systems, but also to analyze the dynamics of the reduced stochastic reaction systems. Furthermore, the algorithm is illustrated by an application to a benchmark stochastic differential equation model, and numerical experiments are carried out to demonstrate the effectiveness of the construction.
Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique
Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep
2015-01-01
In this paper, a stochastic leader gravitational search algorithm (SL-GSA) based on a randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, and is thus more prone to converge to suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performance, to allow rapid convergence. The performance of SL-GSA was analyzed on six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real-world optimization problems. The proposed algorithm demonstrates a superior convergence rate and quality of solution for both real-world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032
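A compact sketch of the stochastic-leader idea on a standard benchmark function, under the assumption that a randomly chosen, shrinking leader set replaces the deterministic k-best set of standard GSA; all constants (g0, alpha, population size, iteration count) are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):                        # benchmark function, minimum 0 at the origin
    return np.sum(x ** 2, axis=-1)

def sl_gsa(f=sphere, dim=5, n_agents=30, iters=200, g0=100.0, alpha=20.0):
    X = rng.uniform(-5, 5, (n_agents, dim))
    V = np.zeros_like(X)
    for t in range(iters):
        fit = f(X)
        worst, best = fit.max(), fit.min()
        mass = (worst - fit) / (worst - best + 1e-12)
        mass = mass / (mass.sum() + 1e-12)
        G = g0 * np.exp(-alpha * t / iters)            # decaying gravitational "constant"
        # Stochastic leader set: k agents chosen at random, k shrinking over time
        # (standard GSA would instead use the k *best* agents deterministically).
        k = max(1, int(n_agents * (1 - t / iters)))
        leaders = rng.choice(n_agents, size=k, replace=False)
        acc = np.zeros_like(X)
        for j in leaders:
            diff = X[j] - X                            # vector toward leader j
            dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-12
            acc += rng.random((n_agents, 1)) * G * mass[j] * diff / dist
        V = rng.random((n_agents, 1)) * V + acc
        X = X + V
    fit = f(X)
    return X[fit.argmin()], fit.min()

if __name__ == "__main__":
    x_best, f_best = sl_gsa()
    print("best value found:", f_best)
```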
Hybrid stochastic simulation of reaction-diffusion systems with slow and fast dynamics.
Strehl, Robert; Ilie, Silvana
2015-12-21
In this paper, we present a novel hybrid method to simulate discrete stochastic reaction-diffusion models arising in biochemical signaling pathways. We study moderately stiff systems, for which we can partition each reaction or diffusion channel into either a slow or fast subset, based on its propensity. Numerical approaches missing this distinction are often limited with respect to computational run time or approximation quality. We design an approximate scheme that remedies these pitfalls by using a new blending strategy of the well-established inhomogeneous stochastic simulation algorithm and the tau-leaping simulation method. The advantages of our hybrid simulation algorithm are demonstrated on three benchmarking systems, with special focus on approximation accuracy and efficiency.
An efficient hybrid method for stochastic reaction-diffusion biochemical systems with delay
NASA Astrophysics Data System (ADS)
Sayyidmousavi, Alireza; Ilie, Silvana
2017-12-01
Many chemical reactions, such as gene transcription and translation in living cells, need a certain time to finish once they are initiated. Simulating stochastic models of reaction-diffusion systems with delay can be computationally expensive. In the present paper, a novel hybrid algorithm is proposed to accelerate the stochastic simulation of delayed reaction-diffusion systems. The delayed reactions may be of consuming or non-consuming delay type. The algorithm is designed for moderately stiff systems in which the events can be partitioned into slow and fast subsets according to their propensities. The proposed algorithm is applied to three benchmark problems and the results are compared with those of the delayed Inhomogeneous Stochastic Simulation Algorithm. The numerical results show that the new hybrid algorithm achieves considerable speed-up in the run time and very good accuracy.
Solution of the finite Milne problem in stochastic media with RVT Technique
NASA Astrophysics Data System (ADS)
Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.
2017-12-01
This paper presents the solution of the steady-state Milne problem with an isotropic scattering phase function. The properties of the medium are treated as stochastic, with Gaussian or exponential distributions, and the problem is hence treated as a stochastic integro-differential equation. To obtain explicit forms for the radiant energy density, the linear extrapolation distance, the reflectivity, and the transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution depends on the optical space variable and the optical thickness of the medium, which are considered as random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process, from which the stochastic linear extrapolation distance, reflectivity, and transmissivity are calculated. For illustration, numerical results and conclusions are provided.
Reconstruction of pulse noisy images via stochastic resonance
Han, Jing; Liu, Hongjun; Sun, Qibing; Huang, Nan
2015-01-01
We investigate a practical technology for reconstructing nanosecond pulse noisy images via stochastic resonance, which is based on the modulation instability. A theoretical model of this method for optical pulse signal is built to effectively recover the pulse image. The nanosecond noise-hidden images grow at the expense of noise during the stochastic resonance process in a photorefractive medium. The properties of output images are mainly determined by the input signal-to-noise intensity ratio, the applied voltage across the medium, and the correlation length of noise background. A high cross-correlation gain is obtained by optimizing these parameters. This provides a potential method for detecting low-level or hidden pulse images in various imaging applications. PMID:26067911
NASA Astrophysics Data System (ADS)
Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.
2017-12-01
The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on combining the vortex-shedding-resolved flow available from an Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method provides a unified treatment of broadband and tonal acoustic noise sources at the source level, thus accounting for linear source interference as well as possible non-linear source interaction effects. Once the sound sources are determined, the Acoustic Perturbation Equations (APE-4) are solved in the time domain for the sound propagation. Results of the method's application to two aerofoil benchmark cases, with both sharp and blunt trailing edges, are presented. In each case, the importance of the individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and incorporated. Encouraging results have been obtained for the benchmark test cases using the new technique, which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.
Stochastic metallic-glass cellular structures exhibiting benchmark strength.
Demetriou, Marios D; Veazey, Chris; Harmon, John S; Schramm, Joseph P; Johnson, William L
2008-10-03
By identifying the key characteristic "structural scales" that dictate the resistance of a porous metallic glass against buckling and fracture, stochastic highly porous metallic-glass structures are designed that are capable of yielding plastically and of inheriting the high plastic yield strength of the amorphous metal. The strengths attained by the present foams appear to equal or exceed those of highly engineered metal foams, such as Ti-6Al-4V or ferrous-metal foams, at comparable levels of porosity, placing the present metallic-glass foams among the strongest foams known to date.
Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak
2016-05-01
Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.
Hybrid stochastic simulation of reaction-diffusion systems with slow and fast dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strehl, Robert; Ilie, Silvana, E-mail: silvana@ryerson.ca
2015-12-21
In this paper, we present a novel hybrid method to simulate discrete stochastic reaction-diffusion models arising in biochemical signaling pathways. We study moderately stiff systems, for which we can partition each reaction or diffusion channel into either a slow or fast subset, based on its propensity. Numerical approaches missing this distinction are often limited with respect to computational run time or approximation quality. We design an approximate scheme that remedies these pitfalls by using a new blending strategy of the well-established inhomogeneous stochastic simulation algorithm and the tau-leaping simulation method. The advantages of our hybrid simulation algorithm are demonstrated on three benchmarking systems, with special focus on approximation accuracy and efficiency.
NASA Astrophysics Data System (ADS)
Hozman, J.; Tichý, T.
2016-12-01
The paper is based on results from our recent research on multidimensional option pricing problems. We focus on European option valuation when the price movement of the underlying asset is driven by a stochastic volatility following the square-root process proposed by Heston. The stochastic approach incorporates a new additional spatial variable into the model and makes it very robust, i.e. it provides a framework for pricing a variety of options that is closer to reality. The main aim is to present a numerical scheme, arising from the concept of discontinuous Galerkin methods, that is applicable to the Heston option pricing model. The numerical results are presented on artificial benchmarks as well as on reference market data.
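The discontinuous Galerkin scheme itself is not reproduced here, but a hedged Monte Carlo sketch of the Heston square-root volatility model can serve as an independent reference price of the kind such schemes are benchmarked against; all parameter values below are arbitrary examples, not the paper's test cases.

```python
import numpy as np

rng = np.random.default_rng(4)

def heston_call_mc(s0=100.0, strike=100.0, t=1.0, r=0.02,
                   v0=0.04, kappa=1.5, theta=0.04, xi=0.3, rho=-0.7,
                   n_paths=100_000, n_steps=200):
    """Euler (full-truncation) Monte Carlo price of a European call under Heston."""
    dt = t / n_steps
    s = np.full(n_paths, s0)
    v = np.full(n_paths, v0)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)                      # full truncation keeps v >= 0
        s *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    payoff = np.maximum(s - strike, 0.0)
    price = np.exp(-r * t) * payoff.mean()
    stderr = np.exp(-r * t) * payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr

if __name__ == "__main__":
    price, err = heston_call_mc()
    print(f"Heston call ~ {price:.3f} +/- {err:.3f}")
```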
Intrinsic optimization using stochastic nanomagnets
Sutton, Brian; Camsari, Kerem Yunus; Behin-Aein, Behtash; Datta, Supriyo
2017-01-01
This paper draws attention to a hardware system which can be engineered so that its intrinsic physics is described by the generalized Ising model and can encode the solution to many important NP-hard problems as its ground state. The basic constituents are stochastic nanomagnets which switch randomly between the ±1 Ising states and can be monitored continuously with standard electronics. Their mutual interactions can be short or long range, and their strengths can be reconfigured as needed to solve specific problems and to anneal the system at room temperature. The natural laws of statistical mechanics guide the network of stochastic nanomagnets at GHz speeds through the collective states with an emphasis on the low energy states that represent optimal solutions. As proof-of-concept, we present simulation results for standard NP-complete examples including a 16-city traveling salesman problem using experimentally benchmarked models for spin-transfer torque driven stochastic nanomagnets. PMID:28295053
Intrinsic optimization using stochastic nanomagnets
NASA Astrophysics Data System (ADS)
Sutton, Brian; Camsari, Kerem Yunus; Behin-Aein, Behtash; Datta, Supriyo
2017-03-01
This paper draws attention to a hardware system which can be engineered so that its intrinsic physics is described by the generalized Ising model and can encode the solution to many important NP-hard problems as its ground state. The basic constituents are stochastic nanomagnets which switch randomly between the ±1 Ising states and can be monitored continuously with standard electronics. Their mutual interactions can be short or long range, and their strengths can be reconfigured as needed to solve specific problems and to anneal the system at room temperature. The natural laws of statistical mechanics guide the network of stochastic nanomagnets at GHz speeds through the collective states with an emphasis on the low energy states that represent optimal solutions. As proof-of-concept, we present simulation results for standard NP-complete examples including a 16-city traveling salesman problem using experimentally benchmarked models for spin-transfer torque driven stochastic nanomagnets.
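A software-only sketch of the behavior the hardware is meant to exhibit: binary stochastic units with sigmoidal flip probabilities, annealed toward low-energy states of a small Ising instance. This is an illustrative emulation, not a model of the spin-transfer-torque devices themselves, and the random problem instance stands in for an encoded NP-complete problem.

```python
import numpy as np

rng = np.random.default_rng(5)

def anneal_ising(J, h, sweeps=2000, beta_max=5.0):
    """Glauber-style annealing of binary (+/-1) stochastic units: each unit flips
    randomly with a probability set by its local field, while the effective
    temperature is slowly lowered."""
    n = len(h)
    s = rng.choice([-1, 1], n)
    for t in range(sweeps):
        beta = beta_max * (t + 1) / sweeps
        for i in rng.permutation(n):
            local_field = J[i] @ s + h[i]
            p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * local_field))
            s[i] = 1 if rng.random() < p_up else -1
    return s

def energy(s, J, h):
    return -0.5 * s @ J @ s - h @ s

if __name__ == "__main__":
    # Tiny random Ising instance (symmetric couplings, zero diagonal).
    n = 12
    J = rng.normal(size=(n, n))
    J = (J + J.T) / 2
    np.fill_diagonal(J, 0.0)
    h = rng.normal(size=n)
    s = anneal_ising(J, h)
    print("annealed energy       :", energy(s, J, h))
    # Brute-force ground state for comparison (feasible only for tiny n).
    states = ((np.arange(2 ** n)[:, None] >> np.arange(n)) & 1) * 2 - 1
    print("true ground-state energy:", min(energy(x, J, h) for x in states))
```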
Model risk for European-style stock index options.
Gençay, Ramazan; Gibson, Rajna
2007-01-01
In empirical modeling, there have been two strands for pricing in the options literature, namely the parametric and nonparametric models. Often, the support for the nonparametric methods is based on a benchmark such as the Black-Scholes (BS) model with constant volatility. In this paper, we study the stochastic volatility (SV) and stochastic volatility random jump (SVJ) models as parametric benchmarks against feedforward neural network (FNN) models, a class of neural network models. Our choice for FNN models is due to their well-studied universal approximation properties of an unknown function and its partial derivatives. Since the partial derivatives of an option pricing formula are risk pricing tools, an accurate estimation of the unknown option pricing function is essential for pricing and hedging. Our findings indicate that FNN models offer themselves as robust option pricing tools, over their sophisticated parametric counterparts in predictive settings. There are two routes to explain the superiority of FNN models over the parametric models in forecast settings. These are nonnormality of return distributions and adaptive learning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.; Kornreich, D.E.
Because of the requirement of accountability and quality control in the scientific world, a demand for high-quality analytical benchmark calculations has arisen in the neutron transport community. The intent of these benchmarks is to provide a numerical standard to which production neutron transport codes may be compared in order to verify proper operation. The overall investigation, as modified in the second-year renewal application, includes the following three primary tasks. Task 1, on two-dimensional neutron transport, is divided into (a) the single-medium searchlight problem (SLP) and (b) the two-adjacent-half-space SLP. Task 2, on three-dimensional neutron transport, covers (a) a point source in arbitrary geometry, (b) the single-medium SLP, and (c) the two-adjacent-half-space SLP. Task 3, on code verification, includes deterministic and probabilistic codes. The primary aim of the proposed investigation was to provide a suite of comprehensive two- and three-dimensional analytical benchmarks for neutron transport theory applications. This objective has been achieved. The suite of benchmarks in infinite media and the three-dimensional SLP are a relatively comprehensive set of one-group benchmarks for isotropically scattering media. Because of time and resource limitations, the extensions of the benchmarks to multi-group and anisotropic scattering are not included here. Presently, however, enormous advances in the solution for the planar Green's function in an anisotropically scattering medium have been made and will eventually be implemented in the two- and three-dimensional solutions considered under this grant. Of particular note in this work are the numerical results for the three-dimensional SLP, which have never before been presented. The results presented were made possible only because of the tremendous advances in computing power that have occurred during the past decade.
NASA Astrophysics Data System (ADS)
Zhang, Ming
2015-10-01
A theory of two-stage acceleration of Galactic cosmic rays in supernova remnants is proposed. The first stage is accomplished by the supernova shock front, where a power-law spectrum is established up to a certain cutoff energy. It is followed by stochastic acceleration by compressible waves/turbulence in the downstream medium. With a broad $\propto k^{-2}$ spectrum of the compressible plasma fluctuations, the rate of stochastic acceleration is constant over a wide range of particle momentum. In this case, the stochastic acceleration process extends the power-law spectrum cutoff energy of Galactic cosmic rays to the knee without changing the spectral slope. This situation occurs as long as the rate of stochastic acceleration is faster than 1/5 of the adiabatic cooling rate. A steeper spectrum of compressible plasma fluctuations that concentrates its power at long wavelengths will accelerate cosmic rays to the knee with a small bump before the cutoff in the cosmic-ray energy spectrum. This theory does not require strong amplification of the magnetic field in the upstream interstellar medium in order to accelerate cosmic rays to the knee energy.
NASA Astrophysics Data System (ADS)
El-Wakil, S. A.; Sallah, M.; El-Hanbaly, A. M.
2015-10-01
The stochastic radiative transfer problem is studied in a participating planar finite continuously fluctuating medium. The problem is considered for specularly and diffusely reflecting boundaries with linear anisotropic scattering. The random variable transformation (RVT) technique is used to obtain the complete average of the solution functions, represented by the probability density function (PDF) of the solution process. In the RVT algorithm, a simple integral transformation is applied to the input stochastic process (the extinction function of the medium). This linear transformation enables us to rewrite the stochastic transport equations in terms of the optical random variable (x) and the optical random thickness (L). The transport equation is then solved deterministically to obtain a closed form for the solution as a function of x and L. This solution is used to obtain the PDF of the solution functions by applying the RVT technique between the input random variable (L) and the output process (the solution functions). The obtained averages of the solution functions are used to derive complete analytical averages for some physical quantities of interest, namely the reflectivity and transmissivity at the medium boundaries. In terms of the average reflectivity and transmissivity, the averages of the partial heat fluxes for the generalized problem with an internal radiation source are obtained and represented graphically.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liang, Faming; Cheng, Yichen; Lin, Guang
2014-06-13
Simulated annealing has been widely used in the solution of optimization problems. As is well known, the global optima cannot be guaranteed to be located by simulated annealing unless a logarithmic cooling schedule is used; however, the logarithmic cooling schedule is so slow that no one can afford such a long CPU time. This paper proposes a new stochastic optimization algorithm, the so-called simulated stochastic approximation annealing algorithm, which is a combination of simulated annealing and the stochastic approximation Monte Carlo algorithm. Under the framework of stochastic approximation Markov chain Monte Carlo, it is shown that the new algorithm can work with a cooling schedule in which the temperature decreases much faster than in the logarithmic cooling schedule, e.g., a square-root cooling schedule, while still guaranteeing that the global optima are reached as the temperature tends to zero. The new algorithm has been tested on a few benchmark optimization problems, including feed-forward neural network training and protein folding. The numerical results indicate that the new algorithm can significantly outperform simulated annealing and other competitors.
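The full simulated stochastic approximation annealing algorithm also adapts importance weights over energy subregions; the sketch below only contrasts the logarithmic and square-root cooling schedules inside a plain simulated annealing loop on a multimodal test function, to illustrate why a faster-than-logarithmic schedule is attractive. The test function, proposal scale, and iteration counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def rastrigin(x):                      # multimodal benchmark, global minimum 0 at x = 0
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def simulated_annealing(schedule, dim=5, iters=20_000, t0=10.0):
    x = rng.uniform(-5, 5, dim)
    fx = rastrigin(x)
    best = fx
    for k in range(1, iters + 1):
        temp = schedule(t0, k)
        y = x + rng.normal(scale=0.5, size=dim)        # random-walk proposal
        fy = rastrigin(y)
        if fy < fx or rng.random() < np.exp(-(fy - fx) / temp):
            x, fx = y, fy
            best = min(best, fx)
    return best

if __name__ == "__main__":
    log_cool = lambda t0, k: t0 / np.log(k + 1)        # classical guarantee, very slow
    sqrt_cool = lambda t0, k: t0 / np.sqrt(k)          # much faster decay
    for name, sched in [("logarithmic", log_cool), ("square-root", sqrt_cool)]:
        results = [simulated_annealing(sched) for _ in range(10)]
        print(f"{name:12s} cooling: best-of-run mean {np.mean(results):.2f}")
```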
Megias, Daniel; Phillips, Mark; Clifton-Hadley, Laura; Harron, Elizabeth; Eaton, David J; Sanghera, Paul; Whitfield, Gillian
2017-03-01
The HIPPO trial is a UK randomized Phase II trial of hippocampal-sparing (HS) vs conventional whole-brain radiotherapy after surgical resection or radiosurgery in patients with a favourable prognosis and 1-4 brain metastases. Each participating centre completed a planning benchmark case as part of the dedicated radiotherapy trials quality assurance (RTQA) programme, promoting the safe and effective delivery of HS intensity-modulated radiotherapy (IMRT) in a multicentre trial setting. Submitted planning benchmark cases were reviewed using visualization for radiotherapy software (VODCA), evaluating plan quality and compliance with the HIPPO radiotherapy planning and delivery guidelines. Comparison of the planning benchmark data highlighted a plan specified using dose to medium as an outlier relative to those specified using dose to water. Further evaluation identified that the reported plan statistics for dose to medium were lower because the dose calculated in regions of the PTV that include the bony cranium is lower than in brain. Specification of dose to water or dose to medium remains a source of potential ambiguity, and it is essential that, as part of a multicentre trial, consideration is given to the reported differences, particularly in the presence of bone. Evaluation of planning benchmark data as part of an RTQA programme has highlighted an important feature of HS IMRT dosimetry that depends on whether dose is specified to water or to medium, informing the development and undertaking of HS IMRT as part of the HIPPO trial. Advances in knowledge: The potential clinical impact of differences between dose to medium and dose to water is demonstrated for the first time, in the setting of HS whole-brain radiotherapy.
Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.
Salis, Howard; Kaznessis, Yiannis
2005-02-01
The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
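The two ingredients being blended, an exact SSA for the slow, discrete events and a chemical Langevin equation for the fast ones, can each be sketched on a toy reversible isomerization. This is not the paper's hybrid implementation: the sketch uses the direct-method SSA rather than the "Next Reaction" variant, omits the partitioning and event-monitoring machinery, and uses arbitrary rate constants and time step.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy system: A <-> B with rate constants c1 (A -> B) and c2 (B -> A).
c = np.array([1.0, 0.5])
stoich = np.array([[-1, +1],       # reaction 1: A -> B
                   [+1, -1]])      # reaction 2: B -> A

def propensities(x):
    return np.array([c[0] * x[0], c[1] * x[1]])

def ssa(x, t_end):
    """Exact stochastic simulation algorithm (Gillespie direct method)."""
    t = 0.0
    while True:
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0:
            return x
        t += rng.exponential(1.0 / a0)
        if t > t_end:
            return x
        j = rng.choice(len(a), p=a / a0)
        x = x + stoich[j]

def cle(x, t_end, dt=0.01):
    """Chemical Langevin approximation (Euler-Maruyama), suitable for 'fast' reactions."""
    x = x.astype(float)
    for _ in range(int(t_end / dt)):
        a = np.maximum(propensities(x), 0.0)
        noise = rng.standard_normal(len(a))
        x = x + stoich.T @ (a * dt) + stoich.T @ (np.sqrt(a * dt) * noise)
    return x

if __name__ == "__main__":
    x0 = np.array([1000, 0])
    print("SSA mean A:", np.mean([ssa(x0.copy(), 5.0)[0] for _ in range(200)]))
    print("CLE mean A:", np.mean([cle(x0.copy(), 5.0)[0] for _ in range(200)]))
    print("ODE limit :", 1000 * c[1] / (c[0] + c[1]), "(deterministic equilibrium of A)")
```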
Three Dimensional Time Dependent Stochastic Method for Cosmic-ray Modulation
NASA Astrophysics Data System (ADS)
Pei, C.; Bieber, J. W.; Burger, R. A.; Clem, J. M.
2009-12-01
A proper understanding of the different behavior of galactic cosmic-ray intensities in different solar cycle phases requires solving the modulation equation with time dependence. We present a detailed description of our newly developed stochastic approach for cosmic-ray modulation, which we believe is the first attempt to solve the time-dependent Parker equation in 3D. It evolved from our 3D steady-state stochastic approach, which has been benchmarked extensively against the finite difference method. Our 3D stochastic method differs from other stochastic approaches in the literature (Ball et al. 2005, Miyake et al. 2005, and Florinski 2008) in several ways. For example, we employ spherical coordinates, which makes the code much more efficient by reducing coordinate transformations. Moreover, our stochastic differential equations differ from others: although all 3D stochastic methods are essentially based on the Ito formula, our map from Parker's original equation to the Fokker-Planck equation extends the method used by Jokipii and Levy (1977). An advantage of the stochastic approach is that, besides the intensities, it also gives probability information on the travel times and path lengths of cosmic rays. We show that excellent agreement exists between solutions obtained by our steady-state stochastic method and by the traditional finite difference method. We also show time-dependent solutions for an idealized heliosphere with a Parker magnetic field, a planar current sheet, and a simple initial condition.
Backward-stochastic-differential-equation approach to modeling of gene expression
NASA Astrophysics Data System (ADS)
Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F.; Aguiar, Paulo
2017-03-01
In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).
NASA Astrophysics Data System (ADS)
Wang, Ting; Plecháč, Petr
2017-12-01
Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
Backward-stochastic-differential-equation approach to modeling of gene expression.
Shamarova, Evelina; Chertovskih, Roman; Ramos, Alexandre F; Aguiar, Paulo
2017-03-01
In this article, we introduce a backward method to model stochastic gene expression and protein-level dynamics. The protein amount is regarded as a diffusion process and is described by a backward stochastic differential equation (BSDE). Unlike many other SDE techniques proposed in the literature, the BSDE method is backward in time; that is, instead of initial conditions it requires the specification of end-point ("final") conditions, in addition to the model parametrization. To validate our approach we employ Gillespie's stochastic simulation algorithm (SSA) to generate (forward) benchmark data, according to predefined gene network models. Numerical simulations show that the BSDE method is able to correctly infer the protein-level distributions that preceded a known final condition, obtained originally from the forward SSA. This makes the BSDE method a powerful systems biology tool for time-reversed simulations, allowing, for example, the assessment of the biological conditions (e.g., protein concentrations) that preceded an experimentally measured event of interest (e.g., mitosis, apoptosis, etc.).
Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.
Caglar, Mehmet Umut; Pal, Ranadip
2013-01-01
Probabilistic Models are regularly applied in Genetic Regulatory Network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches including Stochastic Master Equations and Probabilistic Boolean Networks have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that Stochastic Master Equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, Probabilistic Boolean Network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on Zassenhaus formula to represent the exponential of a sum of matrices as product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to commonly used Stochastic Simulation Algorithm for equivalent accuracy.
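The Zassenhaus idea of writing the exponential of a sum as a product of exponentials can be checked numerically on small random matrices. The sketch below is a generic illustration of the formula, not the paper's tensor-based algorithm: it compares the naive product exp(A)exp(B) with the product that keeps the first commutator correction.

```python
import numpy as np
from scipy.linalg import expm, norm

rng = np.random.default_rng(8)

def commutator(a, b):
    return a @ b - b @ a

def zassenhaus_expm(a, b, t=1.0):
    """Truncated Zassenhaus product: exp(t(A+B)) ~ exp(tA) exp(tB) exp(-t^2/2 [A,B])."""
    return expm(t * a) @ expm(t * b) @ expm(-0.5 * t ** 2 * commutator(a, b))

if __name__ == "__main__":
    n = 6
    a = rng.normal(scale=0.3, size=(n, n))
    b = rng.normal(scale=0.3, size=(n, n))
    exact = expm(a + b)
    err_naive = norm(expm(a) @ expm(b) - exact)       # ignores non-commutativity
    err_zass = norm(zassenhaus_expm(a, b) - exact)    # keeps the first correction term
    print(f"||exp(A)exp(B) - exp(A+B)||       = {err_naive:.2e}")
    print(f"||Zassenhaus product - exp(A+B)|| = {err_zass:.2e}")
```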
Stochastic Modeling of Past Volcanic Crises
NASA Astrophysics Data System (ADS)
Woo, Gordon
2018-01-01
The statistical foundation of disaster risk analysis is past experience. From a scientific perspective, history is just one realization of what might have happened, given the randomness and chaotic dynamics of Nature. Stochastic analysis of the past is an exploratory exercise in counterfactual history, considering alternative possible scenarios. In particular, the dynamic perturbations that might have transitioned a volcano from an unrest to an eruptive state need to be considered. The stochastic modeling of past volcanic crises leads to estimates of eruption probability that can illuminate historical volcanic crisis decisions. It can also inform future economic risk management decisions in regions where there has been some volcanic unrest, but no actual eruption for at least hundreds of years. Furthermore, the availability of a library of past eruption probabilities would provide benchmark support for estimates of eruption probability in future volcanic crises.
A comparison of two- and three-dimensional stochastic models of regional solute movement
Shapiro, A.M.; Cvetkovic, V.D.
1990-01-01
Recent models of solute movement in porous media that are based on a stochastic description of the porous medium properties have been dedicated primarily to a three-dimensional interpretation of solute movement. In many practical problems, however, it is more convenient and consistent with measuring techniques to consider flow and solute transport as an areal, two-dimensional phenomenon. The physics of solute movement, however, is dependent on the three-dimensional heterogeneity in the formation. A comparison of two- and three-dimensional stochastic interpretations of solute movement in a porous medium having a statistically isotropic hydraulic conductivity field is investigated. To provide an equitable comparison between the two- and three-dimensional analyses, the stochastic properties of the transmissivity are defined in terms of the stochastic properties of the hydraulic conductivity. The variance of the transmissivity is shown to be significantly reduced in comparison to that of the hydraulic conductivity, and the transmissivity is spatially correlated over larger distances. These factors influence the two-dimensional interpretations of solute movement by underestimating the longitudinal and transverse growth of the solute plume in comparison to its description as a three-dimensional phenomenon. Although this analysis is based on small perturbation approximations and the special case of a statistically isotropic hydraulic conductivity field, it casts doubt on the use of a stochastic interpretation of the transmissivity in describing regional scale movement. However, by assuming the transmissivity to be the vertical integration of the hydraulic conductivity field at a given position, the stochastic properties of the hydraulic conductivity can be estimated from the stochastic properties of the transmissivity and applied to obtain a more accurate interpretation of solute movement. © 1990 Kluwer Academic Publishers.
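An illustrative numerical check of the variance-reduction statement, assuming a lognormal conductivity with an exponential vertical covariance; the correlation length, discretization, and variance are arbitrary, and the "transmissivity" is taken simply as the vertical average of K in each column.

```python
import numpy as np

rng = np.random.default_rng(9)

# Vertical discretization and exponential covariance of ln K along depth.
n_z, dz, corr_len, sigma = 200, 0.1, 1.0, 1.0
z = np.arange(n_z) * dz
cov = sigma ** 2 * np.exp(-np.abs(z[:, None] - z[None, :]) / corr_len)
L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_z))     # factor once, reuse per column

n_columns = 2000
ln_k = (L @ rng.standard_normal((n_z, n_columns))).T  # one row per vertical column
k = np.exp(ln_k)                                       # hydraulic conductivity field
transmissivity = k.mean(axis=1)                        # vertical average of K per column

# Averaging over many vertical correlation lengths strongly reduces the variance.
print("variance of ln K (point values):     ", np.var(ln_k))
print("variance of ln T (vertical averages):", np.var(np.log(transmissivity)))
```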
A Ballistic Model of Choice Response Time
ERIC Educational Resources Information Center
Brown, Scott; Heathcote, Andrew
2005-01-01
Almost all models of response time (RT) use a stochastic accumulation process. To account for the benchmark RT phenomena, researchers have found it necessary to include between-trial variability in the starting point and/or the rate of accumulation, both in linear (R. Ratcliff & J. N. Rouder, 1998) and nonlinear (M. Usher & J. L. McClelland, 2001)…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andorf, M. B.; Lebedev, V. A.; Piot, P.
2015-06-01
Optical stochastic cooling (OSC) is a method of beam cooling which is expected to provide cooling rates orders of magnitude larger than ordinary stochastic cooling. Light from an undulator (the pickup) is amplified and fed back onto the particle beam via another undulator (the kicker). Fermilab is currently exploring a possible proof-of-principle experiment of OSC at the integrable-optics test accelerator (IOTA) ring. To implement effective OSC, a good correction of phase distortions in the entire band of the optical amplifier is required. In this contribution we present progress in the experimental characterization of phase distortions associated with a Titanium Sapphire crystal laser-gain medium (a possible candidate gain medium for the OSC experiment to be performed at IOTA). We also discuss a possible option for a mid-IR amplifier.
Data-driven monitoring for stochastic systems and its application on batch process
NASA Astrophysics Data System (ADS)
Yin, Shen; Ding, Steven X.; Haghani Abandan Sari, Adel; Hao, Haiyang
2013-07-01
Batch processes are characterised by a prescribed processing of raw materials into final products over a finite duration and play an important role in many industrial sectors due to their low-volume, high-value products. Process dynamics and stochastic disturbances are inherent characteristics of batch processes, which make monitoring of batch processes a challenging problem in practice. To solve this problem, a subspace-aided data-driven approach is presented in this article for batch process monitoring. The advantages of the proposed approach lie in its simple form and its ability to deal with stochastic disturbances and process dynamics existing in the process. Kernel density estimation, which serves as a non-parametric way of estimating the probability density function, is utilised for threshold calculation. An industrial benchmark of fed-batch penicillin production is finally utilised to verify the effectiveness of the proposed approach.
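A sketch of the kernel-density-estimated threshold step only, on synthetic data: the monitoring statistic is drawn from an arbitrary chi-square distribution as a stand-in for the subspace-based residual statistic, and the 99% confidence level is an assumed design choice rather than a value from the article.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(10)

# Monitoring statistic (e.g. a squared-residual statistic) from fault-free training data.
train_stat = rng.chisquare(df=5, size=2000)

# Non-parametric threshold: smallest value whose KDE-estimated CDF reaches 99%.
kde = gaussian_kde(train_stat)
grid = np.linspace(0, train_stat.max() * 2, 2000)
cdf = np.array([kde.integrate_box_1d(-np.inf, g) for g in grid])
threshold = grid[np.searchsorted(cdf, 0.99)]
print(f"KDE-based 99% threshold: {threshold:.2f}")

# Apply to new batches: values above the threshold trigger an alarm.
normal_batch = rng.chisquare(df=5, size=500)
faulty_batch = rng.chisquare(df=5, size=500) + 4.0     # shifted, imitating a fault
print("false alarm rate :", np.mean(normal_batch > threshold))
print("fault detection  :", np.mean(faulty_batch > threshold))
```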
75 FR 16712 - Waybill Data Released in Three-Benchmark Rail Rate Proceedings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-04-02
... carrier for the 4 years that correspond with the most recently published Revenue Shortfall Allocation... approach for medium-size rail rate disputes and revising its Three-Benchmark approach for smaller rail rate.... Id. at 246-47. \\1\\ Canadian Pacific Railway Co., Soo Line Railroad Company, Delaware & Hudson Railway...
Electric load shape benchmarking for small- and medium-sized commercial buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Xuan; Hong, Tianzhen; Chen, Yixing
Small- and medium-sized commercial building owners and utility managers often look for opportunities for energy cost savings through energy efficiency and energy waste minimization. However, they currently lack easy access to low-cost tools that help interpret the massive amount of data needed to improve understanding of their energy use behaviors. Benchmarking is one of the techniques used in energy audits to identify which buildings are priorities for an energy analysis. Traditional energy performance indicators, such as the energy use intensity (annual energy per unit of floor area), consider only the total annual energy consumption, lacking consideration of the fluctuation of energy use behavior over time, which reveals the time of use information and represents distinct energy use behaviors during different time spans. To fill the gap, this study developed a general statistical method using 24-hour electric load shape benchmarking to compare a building or business/tenant space against peers. Specifically, the study developed new forms of benchmarking metrics and data analysis methods to infer the energy performance of a building based on its load shape. We first performed a data experiment with collected smart meter data using over 2,000 small- and medium-sized businesses in California. We then conducted a cluster analysis of the source data, and determined and interpreted the load shape features and parameters with peer group analysis. Finally, we implemented the load shape benchmarking feature in an open-access web-based toolkit (the Commercial Building Energy Saver) to provide straightforward and practical recommendations to users. The analysis techniques were generic and flexible for future datasets of other building types and in other utility territories.
Electric load shape benchmarking for small- and medium-sized commercial buildings
Luo, Xuan; Hong, Tianzhen; Chen, Yixing; ...
2017-07-28
Small- and medium-sized commercial building owners and utility managers often look for opportunities for energy cost savings through energy efficiency and energy waste minimization. However, they currently lack easy access to low-cost tools that help interpret the massive amount of data needed to improve understanding of their energy use behaviors. Benchmarking is one of the techniques used in energy audits to identify which buildings are priorities for an energy analysis. Traditional energy performance indicators, such as the energy use intensity (annual energy per unit of floor area), consider only the total annual energy consumption, lacking consideration of the fluctuation of energy use behavior over time, which reveals the time of use information and represents distinct energy use behaviors during different time spans. To fill the gap, this study developed a general statistical method using 24-hour electric load shape benchmarking to compare a building or business/tenant space against peers. Specifically, the study developed new forms of benchmarking metrics and data analysis methods to infer the energy performance of a building based on its load shape. We first performed a data experiment with collected smart meter data using over 2,000 small- and medium-sized businesses in California. We then conducted a cluster analysis of the source data, and determined and interpreted the load shape features and parameters with peer group analysis. Finally, we implemented the load shape benchmarking feature in an open-access web-based toolkit (the Commercial Building Energy Saver) to provide straightforward and practical recommendations to users. The analysis techniques were generic and flexible for future datasets of other building types and in other utility territories.
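A minimal sketch of the load-shape clustering and peer-comparison idea, using synthetic 24-hour profiles: the archetypes, the use of scikit-learn's KMeans, the number of clusters, and the normalization to unit daily energy are all illustrative assumptions rather than the toolkit's actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(11)

def synthetic_profiles(n_buildings=300):
    """Hypothetical 24-hour electric load profiles (kW), mixing a few archetypes."""
    hours = np.arange(24)
    office = 20 + 60 * np.exp(-0.5 * ((hours - 13) / 3.5) ** 2)   # daytime peak
    retail = 25 + 45 * np.exp(-0.5 * ((hours - 18) / 4.0) ** 2)   # evening peak
    flat = np.full(24, 40.0)                                      # 24/7 operation
    archetypes = np.stack([office, retail, flat])
    picks = rng.integers(0, 3, n_buildings)
    scale = rng.uniform(0.5, 3.0, (n_buildings, 1))               # building size
    noise = rng.normal(0, 2.0, (n_buildings, 24))
    return np.maximum(archetypes[picks] * scale + noise, 0.1)

profiles = synthetic_profiles()
# Normalize to unit daily energy so clustering compares *shape*, not building size.
shapes = profiles / profiles.sum(axis=1, keepdims=True)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(shapes)
labels = km.labels_

# Benchmark one building against its peer group's typical shape.
b = 0
peer_shape = km.cluster_centers_[labels[b]]
deviation = np.abs(shapes[b] - peer_shape)
print("building 0 peer group:", labels[b])
print("hour of largest deviation from peers:", int(deviation.argmax()))
```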
Binomial tau-leap spatial stochastic simulation algorithm for applications in chemical kinetics.
Marquez-Lago, Tatiana T; Burrage, Kevin
2007-09-14
In cell biology, cell signaling pathway problems are often tackled with deterministic temporal models, well-mixed stochastic simulators, and/or hybrid methods. In fact, however, three-dimensional stochastic spatial modeling of the reactions happening inside the cell is needed in order to fully understand these cell signaling pathways, because noise effects, low molecular concentrations, and spatial heterogeneity can all affect the cellular dynamics. However, there are ways in which important effects can be accounted for without going to the extent of using highly resolved spatial simulators (such as single-particle software), hence reducing the overall computation time significantly. We present a new coarse-grained modified version of the next subvolume method that allows the user to consider both diffusion and reaction events in relatively long simulation time spans compared with the original method and other commonly used fully stochastic computational methods. Benchmarking of the simulation algorithm was performed through comparison with the next subvolume method and well-mixed models (MATLAB), as well as with stochastic particle reaction and transport simulations (CHEMCELL, Sandia National Laboratories). Additionally, we construct a model based on a set of chemical reactions in the epidermal growth factor receptor pathway. For this particular application and for a bistable chemical system example, we analyze and outline the advantages of the presented binomial tau-leap spatial stochastic simulation algorithm in terms of efficiency and accuracy, in scenarios of both molecular homogeneity and heterogeneity.
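The binomial leaping idea (bounding the number of firings per leap by the available reactant so populations cannot go negative) can be shown without the spatial, next-subvolume machinery. The sketch below applies it to a non-spatial two-step cascade; the rate constants and leap size are arbitrary, and the leap probability uses the simple c*tau approximation.

```python
import numpy as np

rng = np.random.default_rng(12)

# Toy cascade: A -> B -> C with first-order rate constants.
c1, c2 = 0.5, 0.3

def binomial_tau_leap(a0=1000, tau=0.01, t_end=10.0):
    a, b, c = a0, 0, 0
    t = 0.0
    while t < t_end:
        prop1, prop2 = c1 * a, c2 * b                 # channel propensities
        # Binomial leaping: the firing count is bounded by the available reactant,
        # so populations can never become negative (unlike plain Poisson tau-leaping).
        k1 = rng.binomial(a, min(1.0, prop1 * tau / a)) if a > 0 else 0
        k2 = rng.binomial(b, min(1.0, prop2 * tau / b)) if b > 0 else 0
        a, b, c = a - k1, b + k1 - k2, c + k2
        t += tau
    return a, b, c

if __name__ == "__main__":
    runs = np.array([binomial_tau_leap() for _ in range(500)])
    print("mean (A, B, C) at t=10:", runs.mean(axis=0).round(1))
    # Deterministic check for A: a0 * exp(-c1 * t)
    print("ODE prediction for A  :", round(1000 * np.exp(-0.5 * 10), 1))
```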
Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines.
Neftci, Emre O; Pedroni, Bruno U; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert
2016-01-01
Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware.
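The blank-out synapse idea can be sketched in a few lines: a Bernoulli mask is resampled over the weight matrix at every presentation, in the spirit of DropConnect. The layer sizes, transmission probability, and sigmoid units below are illustrative stand-ins, not the S2M architecture itself.

# Sketch: a stochastic synapse as a Bernoulli "blank-out" mask over the weight
# matrix, applied independently at every presentation (DropConnect-style).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, size=(100, 784))   # weights from 784 inputs to 100 hidden units
p_transmit = 0.5                            # probability that a synapse transmits an event

def stochastic_forward(x):
    mask = rng.random(W.shape) < p_transmit        # resampled on every forward pass
    pre_activation = (W * mask) @ x / p_transmit   # rescale to keep the expected drive
    return 1.0 / (1.0 + np.exp(-pre_activation))   # sigmoid units as a stand-in

x = rng.random(784)
h1 = stochastic_forward(x)
h2 = stochastic_forward(x)                  # same input, different synaptic realization
print(np.abs(h1 - h2).mean())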
NASA Astrophysics Data System (ADS)
Jeanmairet, Guillaume; Sharma, Sandeep; Alavi, Ali
2017-01-01
In this article we report a stochastic evaluation of the recently proposed multireference linearized coupled cluster theory [S. Sharma and A. Alavi, J. Chem. Phys. 143, 102815 (2015)]. In this method, both the zeroth-order and first-order wavefunctions are sampled stochastically by propagating simultaneously two populations of signed walkers. The sampling of the zeroth-order wavefunction follows a set of stochastic processes identical to the one used in the full configuration interaction quantum Monte Carlo (FCIQMC) method. To sample the first-order wavefunction, the usual FCIQMC algorithm is augmented with a source term that spawns walkers in the sampled first-order wavefunction from the zeroth-order wavefunction. The second-order energy is also computed stochastically but requires no additional overhead outside of the added cost of sampling the first-order wavefunction. This fully stochastic method opens up the possibility of simultaneously treating large active spaces to account for static correlation and recovering the dynamical correlation using perturbation theory. The method is used to study a few benchmark systems including the carbon dimer and aromatic molecules. We have computed the singlet-triplet gaps of benzene and m-xylylene. For m-xylylene, which has proved difficult for standard complete active space self consistent field theory with perturbative correction, we find the singlet-triplet gap to be in good agreement with the experimental values.
A 1D radiative transfer benchmark with polarization via doubling and adding
NASA Astrophysics Data System (ADS)
Ganapol, B. D.
2017-11-01
Highly precise numerical solutions to the radiative transfer equation with polarization present a special challenge. Here, we establish a precise numerical solution to the radiative transfer equation with combined Rayleigh and isotropic scattering in a 1D-slab medium with simple polarization. The 2-Stokes vector solution for the fully discretized radiative transfer equation in space and direction derives from the method of doubling and adding enhanced through convergence acceleration. Updates to benchmark solutions found in the literature to seven places for reflectance and transmittance as well as for angular flux follow. Finally, we conclude with the numerical solution in a partially randomly absorbing heterogeneous medium.
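For orientation, the doubling recursion at the core of the method can be sketched in scalar (unpolarized) form as below; the quadrature size, the mocked thin-layer operators, and the number of doublings are illustrative assumptions, and the actual benchmark carries 2-Stokes-vector operators.

# Sketch: the doubling recursion for a homogeneous layer, written for scalar
# reflection/transmission operators discretized over quadrature angles.
# The initialization of r, t from single scattering of a very thin sub-layer is
# assumed and only mocked up here.
import numpy as np

n_angles = 16
I = np.eye(n_angles)

rng = np.random.default_rng(0)
r = 1e-3 * rng.random((n_angles, n_angles))                       # thin-layer reflection
t = np.diag(np.full(n_angles, 0.999)) + 1e-3 * rng.random((n_angles, n_angles))  # transmission

n_doublings = 20          # final optical depth = initial depth * 2**n_doublings
for _ in range(n_doublings):
    interaction = np.linalg.solve(I - r @ r, t)   # (I - R R)^(-1) T
    r = r + t @ r @ interaction                   # R_2tau = R + T R (I - R R)^(-1) T
    t = t @ interaction                           # T_2tau = T (I - R R)^(-1) T

print(r.shape, t.shape)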
NASA Astrophysics Data System (ADS)
Song, X.; Jordan, T. H.
2017-12-01
The seismic anisotropy of the continental crust is dominated by two mechanisms: the local (intrinsic) anisotropy of crustal rocks caused by the lattice-preferred orientation of their constituent minerals, and the geometric (extrinsic) anisotropy caused by the alignment and layering of elastic heterogeneities by sedimentation and deformation. To assess the relative importance of these mechanisms, we have applied Jordan's (GJI, 2015) self-consistent, second-order theory to compute the effective elastic parameters of stochastic media with hexagonal local anisotropy and small-scale 3D heterogeneities that have transversely isotropic (TI) statistics. The theory pertains to stochastic TI media in which the eighth-order covariance tensor of the elastic moduli can be separated into a one-point variance tensor that describes the local anisotropy in terms of an anisotropy orientation ratio (ξ from 0 to ∞), and a two-point correlation function that describes the geometric anisotropy in terms of a heterogeneity aspect ratio (η from 0 to ∞). If there is no local anisotropy, then, in the limiting case of a horizontal stochastic laminate (η→∞), the effective-medium equations reduce to the second-order equations derived by Backus (1962) for a stochastically layered medium. This generalization of the Backus equations to 3D stochastic media, as well as the introduction of local, stochastically rotated anisotropy, provides a powerful theory for interpreting the anisotropic signatures of sedimentation and deformation in continental environments; in particular, the parameterizations that we propose are suitable for tomographic inversions. We have verified this theory through a series of high-resolution numerical experiments using both isotropic and anisotropic wave-propagation codes.
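The layered limit mentioned above can be illustrated with the classical Backus average of a stack of isotropic layers, which yields effective transversely isotropic moduli; the layer properties in this sketch are illustrative, and it does not include the local-anisotropy or 3D-correlation generalizations of the full theory.

# Sketch: the classical Backus (1962) average of isotropic layers. Angle
# brackets denote thickness-weighted means; layer values are illustrative.
import numpy as np

def backus_average(thickness, lam, mu):
    w = thickness / thickness.sum()
    avg = lambda q: np.sum(w * q)
    C33 = 1.0 / avg(1.0 / (lam + 2 * mu))
    C13 = avg(lam / (lam + 2 * mu)) * C33
    C11 = avg(4 * mu * (lam + mu) / (lam + 2 * mu)) + avg(lam / (lam + 2 * mu)) ** 2 * C33
    C44 = 1.0 / avg(1.0 / mu)
    C66 = avg(mu)
    return C11, C33, C13, C44, C66

# Two alternating rock types, Lame parameters in GPa.
thickness = np.array([1.0, 1.0])
lam = np.array([10.0, 30.0])
mu = np.array([8.0, 25.0])
C11, C33, C13, C44, C66 = backus_average(thickness, lam, mu)
print("Thomsen-style anisotropy epsilon =", (C11 - C33) / (2 * C33))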
Toward Policy-Relevant Benchmarks for Interpreting Effect Sizes: Combining Effects with Costs
ERIC Educational Resources Information Center
Harris, Douglas N.
2009-01-01
The common reporting of effect sizes has been an important advance in education research in recent years. However, the benchmarks used to interpret the size of these effects--as small, medium, and large--do little to inform educational administration and policy making because they do not account for program costs. The author proposes an approach…
Fast and Efficient Stochastic Optimization for Analytic Continuation
Bao, Feng; Zhang, Guannan; Webster, Clayton G; ...
2016-09-28
The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra with those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is high, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.
Modelling uncertainty in incompressible flow simulation using Galerkin based generalized ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-11-01
This paper presents a new algorithm, referred to here as Galerkin based generalized analysis of variance decomposition (GG-ANOVA), for modelling input uncertainties and their propagation in incompressible fluid flow. The proposed approach utilizes ANOVA to represent the unknown stochastic response. Further, the unknown component functions of ANOVA are represented using the generalized polynomial chaos expansion (PCE). The resulting functional form obtained by coupling the ANOVA and PCE is substituted into the stochastic Navier-Stokes equation (NSE) and Galerkin projection is employed to decompose it into a set of coupled deterministic 'Navier-Stokes alike' equations. Temporal discretization of the set of coupled deterministic equations is performed by employing the Adams-Bashforth scheme for the convective term and the Crank-Nicolson scheme for the diffusion term. Spatial discretization is performed by employing a finite difference scheme. Implementation of the proposed approach is illustrated by two examples. In the first example, a stochastic ordinary differential equation has been considered. This example illustrates the performance of the proposed approach as the nature of the random variable changes. Furthermore, the convergence characteristics of GG-ANOVA have also been demonstrated. The second example investigates flow through a microchannel. Two case studies, namely the stochastic Kelvin-Helmholtz instability and the stochastic vortex dipole, have been investigated. For all the problems, results obtained using GG-ANOVA are in excellent agreement with benchmark solutions.
Nonlinear Phase Distortion in a Ti:Sapphire Optical Amplifier for Optical Stochastic Cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andorf, Matthew; Lebedev, Valeri; Piot, Philippe
2016-06-01
Optical Stochastic Cooling (OSC) has been considered for future high-luminosity colliders as it offers a much faster cooling time in comparison to microwave stochastic cooling. The OSC technique relies on collecting and amplifying a broadband optical signal from a pickup undulator and feeding the amplified signal back to the beam. It creates a corrective kick in a kicker undulator. Owing to its superb gain qualities and broadband amplification features, Titanium:Sapphire medium has been considered as a gain medium for the optical amplifier (OA) needed in the OSC. A limiting factor for any OA used in OSC is the possibility of nonlinear phase distortions. In this paper we experimentally measure phase distortions by inserting a single-pass OA into one leg of a Mach-Zehnder interferometer. The measurement results are used to estimate the reduction of the corrective kick a particle would receive due to these phase distortions in the kicker undulator.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-05
... to the short- and medium-term rates to convert them to long-term rates using Bloomberg U.S... derivation of the benchmark and discount rates used to value these subsidies is discussed below. Short-Term... inflation-adjusted short-term benchmark rate, we have also excluded any countries with aberrational or...
NASA Astrophysics Data System (ADS)
Chen, Xianshun; Feng, Liang; Ong, Yew Soon
2012-07-01
In this article, we propose a self-adaptive memeplex robust search (SAMRS) for finding robust and reliable solutions that are less sensitive to stochastic behaviours of customer demands and have a low probability of route failures, respectively, in the vehicle routing problem with stochastic demands (VRPSD). In particular, the contribution of this article is three-fold. First, the proposed SAMRS employs the robust solution search scheme (RS3) as an approximation of the computationally intensive Monte Carlo simulation, thus reducing the computation cost of fitness evaluation in VRPSD, while directing the search towards robust and reliable solutions. Furthermore, a self-adaptive individual learning based on the conceptual modelling of memeplex is introduced in the SAMRS. Finally, SAMRS incorporates a gene-meme co-evolution model with genetic and memetic representation to effectively manage the search for solutions in VRPSD. Extensive experimental results are then presented for benchmark problems to demonstrate that the proposed SAMRS serves as an effective means of generating high-quality robust and reliable solutions in VRPSD.
Moix, Jeremy M; Ma, Jian; Cao, Jianshu
2015-03-07
A numerically exact path integral treatment of the absorption and emission spectra of open quantum systems is presented that requires only the straightforward solution of a stochastic differential equation. The approach converges rapidly enabling the calculation of spectra of large excitonic systems across the complete range of system parameters and for arbitrary bath spectral densities. With the numerically exact absorption and emission operators, one can also immediately compute energy transfer rates using the multi-chromophoric Förster resonant energy transfer formalism. Benchmark calculations on the emission spectra of two level systems are presented demonstrating the efficacy of the stochastic approach. This is followed by calculations of the energy transfer rates between two weakly coupled dimer systems as a function of temperature and system-bath coupling strength. It is shown that the recently developed hybrid cumulant expansion (see Paper II) is the only perturbative method capable of generating uniformly reliable energy transfer rates and emission spectra across a broad range of system parameters.
Multi-fidelity Gaussian process regression for prediction of random fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parussini, L.; Venturi, D., E-mail: venturi@ucsc.edu; Perdikaris, P.
We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
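A bare-bones two-fidelity version of the idea can be sketched with off-the-shelf Gaussian process regression: model the high-fidelity response as a scaled low-fidelity prediction plus a GP on the discrepancy. The test functions, kernels, and the crude least-squares estimate of the scale factor are illustrative simplifications of the recursive co-kriging formulation, not the authors' method.

# Sketch: two-fidelity GP regression in the spirit of co-kriging,
# y_hi(x) ~ rho * f_lo(x) + delta(x), with illustrative test functions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

f_hi = lambda x: (6 * x - 2) ** 2 * np.sin(12 * x - 4)       # "expensive" model
f_lo = lambda x: 0.5 * f_hi(x) + 10 * (x - 0.5) - 5          # cheap surrogate

x_lo = np.linspace(0, 1, 21)[:, None]
x_hi = np.linspace(0, 1, 5)[:, None]

gp_lo = GaussianProcessRegressor(ConstantKernel() * RBF()).fit(x_lo, f_lo(x_lo).ravel())

# Scale factor between fidelities, then a GP on the remaining discrepancy.
lo_at_hi = gp_lo.predict(x_hi)
y_hi = f_hi(x_hi).ravel()
rho = np.dot(lo_at_hi, y_hi) / np.dot(lo_at_hi, lo_at_hi)
gp_delta = GaussianProcessRegressor(ConstantKernel() * RBF()).fit(x_hi, y_hi - rho * lo_at_hi)

x_test = np.linspace(0, 1, 200)[:, None]
prediction = rho * gp_lo.predict(x_test) + gp_delta.predict(x_test)
print(np.max(np.abs(prediction - f_hi(x_test).ravel())))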
Perturbation expansions of stochastic wavefunctions for open quantum systems
NASA Astrophysics Data System (ADS)
Ke, Yaling; Zhao, Yi
2017-11-01
Based on the stochastic unravelling of the reduced density operator in the Feynman path integral formalism for an open quantum system coupled to harmonic environments, a new non-Markovian stochastic Schrödinger equation (NMSSE) has been established that allows for the systematic perturbation expansion in the system-bath coupling to arbitrary order. This NMSSE can be transformed in a facile manner into the other two NMSSEs, i.e., the non-Markovian quantum state diffusion and the time-dependent wavepacket diffusion method. Benchmarked against numerically exact results, we have conducted a comparative study of the proposed method in its lowest order approximation, with perturbative quantum master equations in the symmetric spin-boson model and the realistic Fenna-Matthews-Olson complex. It is found that our method outperforms the second-order time-convolutionless quantum master equation in the whole parameter regime and is even far better than the fourth-order one in the slow bath and high temperature cases. Besides, the method is applicable on an equal footing for any kind of spectral density function and is expected to be a powerful tool to explore the quantum dynamics of large-scale systems, benefiting from the wavefunction framework and the time-local appearance within a single stochastic trajectory.
NASA Astrophysics Data System (ADS)
Klos, Anna; Pottiaux, Eric; Van Malderen, Roeland; Bock, Olivier; Bogusz, Janusz
2017-04-01
A synthetic benchmark dataset of Integrated Water Vapour (IWV) was created within the "Data homogenisation" activity of sub-working group WG3 of the COST ES1206 Action. The benchmark dataset was created based on the analysis of IWV differences retrieved by Global Positioning System (GPS) International GNSS Service (IGS) stations using European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data (ERA-Interim). Having analysed a set of 120 series of IWV differences (ERAI-GPS) derived for IGS stations, we delivered parameters of a number of gaps and breaks for each station. Moreover, we estimated values of trends, significant seasonalities and the character of residuals once the deterministic model was removed. We tested five different noise models and found that a combination of white and first-order autoregressive processes describes the stochastic part with good accuracy. Based on this analysis, we performed Monte Carlo simulations of 25-year-long data with two different types of noise: white noise, and a combination of white and autoregressive processes. We also added a few strictly defined offsets, creating three variants of the synthetic dataset: easy, less-complicated and fully-complicated. The 'Easy' dataset included seasonal signals (annual, semi-annual, 3 and 4 months if present for a particular station), offsets and white noise. The 'Less-complicated' dataset included the above, as well as the combination of white and first-order autoregressive processes (AR(1)+WH). The 'Fully-complicated' dataset included, beyond the above, a trend and gaps. In this research, we show the impact of manual homogenisation on the estimates of trend and its error. We also cross-compare the results for the three above-mentioned datasets, as the synthesized noise type might have a significant influence on manual homogenisation and might therefore mostly affect the values of trend and their uncertainties when inappropriately handled. In the future, the synthetic dataset we present is going to be used as a benchmark to test various statistical tools on the homogenisation task.
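A sketch of how one such 'Fully-complicated'-style series could be synthesized is given below; the trend, seasonal amplitudes, AR(1) parameters, offset epochs, and gap are all illustrative values rather than the benchmark's actual settings.

# Sketch: generate one synthetic daily series with trend + seasonal terms +
# AR(1)+white noise + step offsets + a gap. All values are illustrative.
import numpy as np

rng = np.random.default_rng(7)
n_days = 25 * 365
t = np.arange(n_days) / 365.25                       # time in years

trend = 0.05 * t                                     # e.g. kg/m^2 per year
seasonal = 1.5 * np.sin(2 * np.pi * t) + 0.5 * np.sin(4 * np.pi * t)

phi, sigma_ar, sigma_wh = 0.6, 0.3, 0.5              # AR(1) + white noise parameters
ar = np.zeros(n_days)
for i in range(1, n_days):
    ar[i] = phi * ar[i - 1] + rng.normal(0.0, sigma_ar)
noise = ar + rng.normal(0.0, sigma_wh, n_days)

offsets = np.zeros(n_days)
for epoch, size in [(3000, 1.2), (6200, -0.8)]:      # two artificial breaks
    offsets[epoch:] += size

series = trend + seasonal + noise + offsets
series[5000:5100] = np.nan                           # a gap
print(np.nanstd(series))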
FASTPM: a new scheme for fast simulations of dark matter and haloes
NASA Astrophysics Data System (ADS)
Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick
2016-12-01
We introduce FASTPM, a highly scalable approximated particle mesh (PM) N-body solver, which implements the PM scheme enforcing correct linear displacement (1LPT) evolution via modified kick and drift factors. Employing a two-dimensional domain decomposition scheme, FASTPM scales extremely well with a very large number of CPUs. In contrast to the Comoving Lagrangian Acceleration (COLA) approach, we do not need to split the force or separately track the 2LPT solution, reducing the code complexity and memory requirements. We compare FASTPM with different numbers of steps (Ns) and force resolution factors (B) against three benchmarks: the halo mass function from a friends-of-friends halo finder; the halo and dark matter power spectrum; and the cross-correlation coefficient (or stochasticity), relative to a high-resolution TREEPM simulation. We show that the modified time stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross-correlation coefficient, for many applications FASTPM achieves sufficient accuracy at low Ns and B. For example, the Ns = 10 and B = 2 simulation provides a substantial saving (a factor of 10) of computing time relative to the Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance matched haloes the stochasticity remains low even for Ns = 5. FASTPM compares well against less expensive schemes, being only 7 (4) times more expensive than a 2LPT initial condition generator for Ns = 10 (Ns = 5). Some of the applications where FASTPM can be useful are generating a large number of mocks, producing non-linear statistics where one varies a large number of nuisance or cosmological parameters, or serving as part of an initial conditions solver.
A new numerical benchmark for variably saturated variable-density flow and transport in porous media
NASA Astrophysics Data System (ADS)
Guevara, Carlos; Graf, Thomas
2016-04-01
In subsurface hydrological systems, spatial and temporal variations in solute concentration and/or temperature may affect fluid density and viscosity. These variations could lead to potentially unstable situations, in which a dense fluid overlies a less dense fluid. These situations could produce instabilities that appear as dense plume fingers migrating downwards, counteracted by vertical upwards flow of freshwater (Simmons et al., Transp. Porous Medium, 2002). As a result of unstable variable-density flow, solute transport rates are increased over large distances and times as compared to constant-density flow. The numerical simulation of variable-density flow in saturated and unsaturated media requires corresponding benchmark problems against which a computer model is validated (Diersch and Kolditz, Adv. Water Resour, 2002). Recorded data from a laboratory-scale experiment of variable-density flow and solute transport in saturated and unsaturated porous media (Simmons et al., Transp. Porous Medium, 2002) are used to define a new numerical benchmark. The HydroGeoSphere code (Therrien et al., 2004) coupled with PEST (www.pesthomepage.org) is used to obtain an optimized parameter set capable of adequately representing the data set of Simmons et al. (2002). Fingering in the numerical model is triggered using random hydraulic conductivity fields. Due to the inherent randomness, a large number of simulations were conducted in this study. The optimized benchmark model adequately predicts the plume behavior and the fate of solutes. This benchmark is useful for model verification of variable-density flow problems in saturated and/or unsaturated media.
NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.
2016-12-01
This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the ε_k-global minimization of a bound constrained optimization subproblem, where ε_k → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.
NASA Astrophysics Data System (ADS)
Moslemipour, Ghorbanali
2018-07-01
This paper aims at proposing a quadratic assignment-based mathematical model to deal with the stochastic dynamic facility layout problem. In this problem, product demands are assumed to be dependent normally distributed random variables with known probability density function and covariance that change from period to period at random. To solve the proposed model, a novel hybrid intelligent algorithm is proposed by combining the simulated annealing and clonal selection algorithms. The proposed model and the hybrid algorithm are verified and validated using design of experiment and benchmark methods. The results show that the hybrid algorithm has an outstanding performance from both solution quality and computational time points of view. Besides, the proposed model can be used in both of the stochastic and deterministic situations.
Time series analysis for minority game simulations of financial markets
NASA Astrophysics Data System (ADS)
Ferreira, Fernando F.; Francisco, Gerson; Machado, Birajara S.; Muruganandam, Paulsamy
2003-04-01
The minority game (MG) model introduced recently provides promising insights into the understanding of the evolution of prices, indices and rates in the financial markets. In this paper we perform a time series analysis of the model employing tools from statistics, dynamical systems theory and stochastic processes. Using benchmark systems and a financial index for comparison, several conclusions are obtained about the generating mechanism for this kind of evolution. The motion is deterministic, driven by occasional random external perturbation. When the interval between two successive perturbations is sufficiently large, one can find low dimensional chaos in this regime. However, the full motion of the MG model is found to be similar to that of the first differences of the SP500 index: stochastic, nonlinear and (unit root) stationary.
MoMaS reactive transport benchmark using PFLOTRAN
NASA Astrophysics Data System (ADS)
Park, H.
2017-12-01
The MoMaS benchmark was developed to enhance numerical simulation capability for reactive transport modeling in porous media. The benchmark was published in late September of 2009; it is not taken from a real chemical system, but provides realistic and numerically challenging tests. PFLOTRAN is a state-of-the-art massively parallel subsurface flow and reactive transport code that is being used in multiple nuclear waste repository projects at Sandia National Laboratories, including the Waste Isolation Pilot Plant and Used Fuel Disposition. The MoMaS benchmark has three independent tests with easy, medium, and hard chemical complexity. This paper demonstrates how PFLOTRAN is applied to this benchmark exercise and shows results of the easy benchmark test case, which includes mixing of aqueous components and surface complexation. Surface complexations consist of monodentate and bidentate reactions, which introduce difficulty in defining the selectivity coefficient if the reaction applies to a bulk reference volume. The selectivity coefficient becomes porosity dependent for bidentate reactions in heterogeneous porous media. The benchmark is solved by PFLOTRAN with minimal modification to address this issue, and unit conversions were made to suit PFLOTRAN.
ERIC Educational Resources Information Center
DeClark, Tom
2000-01-01
Presents an activity on waves that addresses the state standards and benchmarks of Michigan. Demonstrates waves and studies wave's medium, motion, and frequency. The activity is designed to address different learning styles. (YDS)
Efficient simulation of intrinsic, extrinsic and external noise in biochemical systems
Pischel, Dennis; Sundmacher, Kai; Flassig, Robert J.
2017-01-01
Motivation: Biological cells operate in a noisy regime influenced by intrinsic, extrinsic and external noise, which leads to large differences of individual cell states. Stochastic effects must be taken into account to characterize biochemical kinetics accurately. Since the exact solution of the chemical master equation, which governs the underlying stochastic process, cannot be derived for most biochemical systems, approximate methods are used to obtain a solution. Results: In this study, a method to efficiently simulate the various sources of noise simultaneously is proposed and benchmarked on several examples. The method relies on the combination of the sigma point approach to describe extrinsic and external variability and the τ-leaping algorithm to account for the stochasticity due to probabilistic reactions. The comparison of our method to extensive Monte Carlo calculations demonstrates an immense computational advantage while losing an acceptable amount of accuracy. Additionally, the application to parameter optimization problems in stochastic biochemical reaction networks is shown, which is rarely applied due to its huge computational burden. To give further insight, a MATLAB script is provided including the proposed method applied to a simple toy example of gene expression. Availability and implementation: MATLAB code is available at Bioinformatics online. Contact: flassig@mpi-magdeburg.mpg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881987
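The two ingredients can be sketched together on a toy birth-death gene-expression model: unscented sigma points sample the extrinsic parameter, and Poisson tau-leaping propagates the intrinsic reaction noise for each sigma point. The rates, the scaling parameter kappa, and the one-dimensional parameter space are illustrative choices, not the paper's benchmark settings.

# Sketch: sigma points over an extrinsic transcription rate k, combined with
# Poisson tau-leaping for the intrinsic noise of a birth-death toy model.
import numpy as np

rng = np.random.default_rng(3)

# Unscented sigma points for a 1-D extrinsic parameter k ~ N(mean, var).
mean_k, var_k, kappa = 5.0, 1.0, 2.0
spread = np.sqrt((1 + kappa) * var_k)
sigma_points = np.array([mean_k, mean_k + spread, mean_k - spread])
weights = np.array([kappa / (1 + kappa), 0.5 / (1 + kappa), 0.5 / (1 + kappa)])

def tau_leap_birth_death(k, gamma=0.1, tau=0.05, steps=2000, x0=0):
    x = x0
    for _ in range(steps):
        births = rng.poisson(k * tau)            # 0 -> X with rate k
        deaths = rng.poisson(gamma * x * tau)    # X -> 0 with rate gamma * x
        x = max(x + births - deaths, 0)
    return x

finals = np.array([tau_leap_birth_death(k) for k in sigma_points])
print("approximate mean copy number:", np.dot(weights, finals))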
Stochastic inversion of cross-borehole radar data from metalliferous vein detection
NASA Astrophysics Data System (ADS)
Zeng, Zhaofa; Huai, Nan; Li, Jing; Zhao, Xueyu; Liu, Cai; Hu, Yingsa; Zhang, Ling; Hu, Zuzhi; Yang, Hui
2017-12-01
In the exploration and evaluation of metalliferous veins with a cross-borehole radar system, traditional linear inversion methods (least squares inversion, LSQR) only recover indirect parameters (permittivity, resistivity, or velocity) to estimate the target structure. They cannot accurately reflect the geological parameters of the metalliferous veins' media properties. In order to obtain the intrinsic geological parameters and internal distribution, in this paper we build a metalliferous vein model based on the stochastic effective medium theory, and carry out stochastic inversion and parameter estimation based on a Monte Carlo sampling algorithm. Compared with conventional LSQR, the stochastic inversion yields higher-resolution permittivity and velocity of the target body, and we can estimate more accurately the distribution characteristics of anomalies and the target's internal parameters. This provides a new approach for evaluating the properties of complex target media.
Benchmarking a geostatistical procedure for the homogenisation of annual precipitation series
NASA Astrophysics Data System (ADS)
Caineta, Júlio; Ribeiro, Sara; Henriques, Roberto; Soares, Amílcar; Costa, Ana Cristina
2014-05-01
The European project COST Action ES0601, Advances in homogenisation methods of climate series: an integrated approach (HOME), has brought to attention the importance of establishing reliable homogenisation methods for climate data. In order to achieve that, a benchmark data set, containing monthly and daily temperature and precipitation data, was created to be used as a comparison basis for the effectiveness of those methods. Several contributions were submitted and evaluated by a number of performance metrics, validating the results against realistic inhomogeneous data. HOME also led to the development of new homogenisation software packages, which included feedback and lessons learned during the project. Preliminary studies have suggested a geostatistical stochastic approach, which uses Direct Sequential Simulation (DSS), as a promising methodology for the homogenisation of precipitation data series. Based on the spatial and temporal correlation between the neighbouring stations, DSS calculates local probability density functions at a candidate station to detect inhomogeneities. The purpose of the current study is to test and compare this geostatistical approach with the methods previously presented in the HOME project, using surrogate precipitation series from the HOME benchmark data set. The benchmark data set contains monthly precipitation surrogate series, from which annual precipitation data series were derived. These annual precipitation series were subject to exploratory analysis and to a thorough variography study. The geostatistical approach was then applied to the data set, based on different scenarios for the spatial continuity. Implementing this procedure also promoted the development of a computer program that aims to assist in the homogenisation of climate data, while minimising user interaction. Finally, in order to compare the effectiveness of this methodology with the homogenisation methods submitted during the HOME project, the obtained results were evaluated using the same performance metrics. This comparison opens new perspectives for the development of an innovative procedure based on the geostatistical stochastic approach. Acknowledgements: The authors gratefully acknowledge the financial support of "Fundação para a Ciência e Tecnologia" (FCT), Portugal, through the research project PTDC/GEO-MET/4026/2012 ("GSIMCLI - Geostatistical simulation with local distributions for the homogenization and interpolation of climate data").
NASA Astrophysics Data System (ADS)
Hanachi, Houman; Liu, Jie; Banerjee, Avisekh; Chen, Ying
2016-05-01
Health state estimation of inaccessible components in complex systems necessitates effective state estimation techniques using the observable variables of the system. The task becomes much more complicated when the system is nonlinear/non-Gaussian and it receives stochastic input. In this work, a novel sequential state estimation framework is developed based on a particle filtering (PF) scheme for state estimation of a general class of nonlinear dynamical systems with stochastic input. Performance of the developed framework is then validated with simulation on a Bivariate Non-stationary Growth Model (BNGM) as a benchmark. In the next step, three-year operating data of an industrial gas turbine engine (GTE) are utilized to verify the effectiveness of the developed framework. A comprehensive thermodynamic model for the GTE is therefore developed to formulate the relation of the observable parameters and the dominant degradation symptoms of the turbine, namely, loss of isentropic efficiency and increase of the mass flow. The results confirm the effectiveness of the developed framework for simultaneous estimation of multiple degradation symptoms in complex systems with noisy measured inputs.
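The generic predict-weight-resample cycle underlying such a framework can be sketched as a bootstrap particle filter on a scalar nonlinear growth model (a univariate stand-in for the bivariate benchmark); the noise levels and observation model below are illustrative.

# Sketch: a bootstrap particle filter for a scalar nonlinear/non-Gaussian model.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_particles = 100, 500

def dynamics(x, k):                  # benchmark-style nonlinear growth model
    return 0.5 * x + 25 * x / (1 + x ** 2) + 8 * np.cos(1.2 * k)

# Simulate a "true" trajectory and noisy observations y = x^2/20 + noise.
x_true, ys, truth = 0.1, [], []
for k in range(n_steps):
    x_true = dynamics(x_true, k) + rng.normal(0, np.sqrt(10))
    truth.append(x_true)
    ys.append(x_true ** 2 / 20 + rng.normal(0, 1))

particles = rng.normal(0, 2, n_particles)
estimates = []
for k, y in enumerate(ys):
    particles = dynamics(particles, k) + rng.normal(0, np.sqrt(10), n_particles)  # predict
    logw = -0.5 * (y - particles ** 2 / 20) ** 2                                  # log-likelihood
    weights = np.exp(logw - logw.max())
    weights /= weights.sum()
    estimates.append(np.dot(weights, particles))                                  # posterior mean
    idx = rng.choice(n_particles, n_particles, p=weights)                         # resample
    particles = particles[idx]

print("RMS error:", np.sqrt(np.mean((np.array(estimates) - np.array(truth)) ** 2)))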
Radiation Transport in Random Media With Large Fluctuations
NASA Astrophysics Data System (ADS)
Olson, Aaron; Prinja, Anil; Franke, Brian
2017-09-01
Neutral particle transport in media exhibiting large and complex material property spatial variation is modeled by representing cross sections as lognormal random functions of space, generated through a nonlinear memory-less transformation of a Gaussian process with covariance uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization and generate benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute the statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that use of stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.
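A discrete version of the Karhunen-Loève sampling step can be sketched as follows; the grid, correlation length, variance, and truncation order are illustrative, and the eigen-decomposition of the discretized covariance stands in for the continuous KL eigenproblem.

# Sketch: truncated Karhunen-Loeve expansion of a Gaussian process with
# exponential covariance, exponentiated to give lognormal cross-section
# realizations on a 1-D slab grid.
import numpy as np

n_cells, slab_length = 200, 10.0
x = np.linspace(0, slab_length, n_cells)
corr_len, sigma2, mean_log = 1.0, 0.5, 0.0

# Covariance of the underlying Gaussian process and its spectral decomposition.
cov = sigma2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_terms = 30                               # truncation order of the KL expansion
rng = np.random.default_rng(5)

def sample_cross_section():
    xi = rng.normal(size=n_terms)          # independent standard normal KL coefficients
    g = mean_log + eigvecs[:, :n_terms] @ (np.sqrt(np.maximum(eigvals[:n_terms], 0)) * xi)
    return np.exp(g)                       # lognormal cross section, one value per cell

realization = sample_cross_section()
print(realization.mean(), realization.std())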
Bruni, Renato; Cesarone, Francesco; Scozzari, Andrea; Tardella, Fabio
2016-09-01
A large number of portfolio selection models have appeared in the literature since the pioneering work of Markowitz. However, even when computational and empirical results are described, they are often hard to replicate and compare due to the unavailability of the datasets used in the experiments. We provide here several datasets for portfolio selection generated using real-world price values from several major stock markets. The datasets contain weekly return values, adjusted for dividends and for stock splits, which are cleaned from errors as much as possible. The datasets are available in different formats, and can be used as benchmarks for testing the performances of portfolio selection models and for comparing the efficiency of the algorithms used to solve them. We also provide, for these datasets, the portfolios obtained by several selection strategies based on Stochastic Dominance models (see "On Exact and Approximate Stochastic Dominance Strategies for Portfolio Selection" (Bruni et al. [2])). We believe that testing portfolio models on publicly available datasets greatly simplifies the comparison of the different portfolio selection strategies.
Estimating rare events in biochemical systems using conditional sampling.
Sundar, V S
2017-01-28
The paper focuses on development of variance reduction strategies to estimate rare events in biochemical systems. Obtaining this probability using brute force Monte Carlo simulations in conjunction with the stochastic simulation algorithm (Gillespie's method) is computationally prohibitive. To circumvent this, importance sampling tools such as the weighted stochastic simulation algorithm and the doubly weighted stochastic simulation algorithm have been proposed. However, these strategies require an additional step of determining the important region to sample from, which is not straightforward for most of the problems. In this paper, we apply the subset simulation method, developed as a variance reduction tool in the context of structural engineering, to the problem of rare event estimation in biochemical systems. The main idea is that the rare event probability is expressed as a product of more frequent conditional probabilities. These conditional probabilities are estimated with high accuracy using Monte Carlo simulations, specifically the Markov chain Monte Carlo method with the modified Metropolis-Hastings algorithm. Generating sample realizations of the state vector using the stochastic simulation algorithm is viewed as mapping the discrete-state continuous-time random process to the standard normal random variable vector. This viewpoint opens up the possibility of applying more sophisticated and efficient sampling schemes developed elsewhere to problems in stochastic chemical kinetics. The results obtained using the subset simulation method are compared with existing variance reduction strategies for a few benchmark problems, and a satisfactory improvement in computational time is demonstrated.
Multi-hadron spectroscopy in a large physical volume
NASA Astrophysics Data System (ADS)
Bulava, John; Hörz, Ben; Morningstar, Colin
2018-03-01
We demonstrate the efficacy of the stochastic LapH method to treat all-to-all quark propagation on a Nf = 2 + 1 CLS ensemble with large linear spatial extent L = 5.5 fm, allowing us to obtain the benchmark elastic isovector p-wave pion-pion scattering amplitude to good precision already on a relatively small number of gauge configurations. These results hold promise for multi-hadron spectroscopy at close-to-physical pion mass with exponential finite-volume effects under control.
Economic-Oriented Stochastic Optimization in Advanced Process Control of Chemical Processes
Dobos, László; Király, András; Abonyi, János
2012-01-01
Finding the optimal operating region of chemical processes is an inevitable step toward improving economic performance. Usually the optimal operating region is situated close to process constraints related to product quality or process safety requirements. Higher profit can be realized only by assuring a relatively low frequency of violation of these constraints. A multilevel stochastic optimization framework is proposed to determine the optimal setpoint values of control loops with respect to predetermined risk levels, uncertainties, and costs of violation of process constraints. The proposed framework is realized as direct search-type optimization of Monte-Carlo simulation of the controlled process. The concept is illustrated throughout by a well-known benchmark problem related to the control of a linear dynamical system and the model predictive control of a more complex nonlinear polymerization process. PMID:23213298
Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu
2016-03-01
The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
NASA Astrophysics Data System (ADS)
Hanssen, R. F.
2017-12-01
In traditional geodesy, one is interested in determining the coordinates, or the change in coordinates, of predefined benchmarks. These benchmarks are clearly identifiable and are especially established to be representative of the signal of interest. This holds, e.g., for leveling benchmarks, for triangulation/trilateration benchmarks, and for GNSS benchmarks. The desired coordinates are not identical to the basic measurements, and need to be estimated using robust estimation procedures, where the stochastic nature of the measurements is taken into account. For InSAR, however, the `benchmarks' are not predefined. In fact, usually we do not know where an effective benchmark is located, even though we can determine its dynamic behavior pretty well. This poses several significant problems. First, we cannot describe the quality of the measurements, unless we already know the dynamic behavior of the benchmark. Second, if we don't know the quality of the measurements, we cannot compute the quality of the estimated parameters. Third, rather harsh assumptions need to be made to produce a result. These (usually implicit) assumptions differ between processing operators and the used software, and are severely affected by the amount of available data. Fourth, the `relative' nature of the final estimates is usually not explicitly stated, which is particularly problematic for non-expert users. Finally, whereas conventional geodesy applies rigorous testing to check for measurement or model errors, this is hardly ever done in InSAR-geodesy. These problems make it rather impossible to provide a precise, reliable, repeatable, and `universal' InSAR product or service. Here we evaluate the requirements and challenges to move towards InSAR as a geodetically-proof product. In particular this involves the explicit inclusion of contextual information, as well as InSAR procedures, standards and a technical protocol, supported by the International Association of Geodesy and the international scientific community.
Bingi, V N; Chernavskiĭ, D S; Rubin, A B
2006-01-01
The influence of magnetic noise on the dynamics of magnetic nanoparticles under the conditions of stochastic resonance is considered. The effect of the magnetic noise is shown to be equivalent to the growth of the effective thermostat temperature for the particles at the permanent actual temperature of the medium. This regularity may be used for testing the hypothesis on the involvement of magnetic nanoparticles in the formation of biological effects of weak magnetic fields.
Weisheimer, Antje; Corti, Susanna; Palmer, Tim; Vitart, Frederic
2014-01-01
The finite resolution of general circulation models of the coupled atmosphere–ocean system and the effects of sub-grid-scale variability present a major source of uncertainty in model simulations on all time scales. The European Centre for Medium-Range Weather Forecasts has been at the forefront of developing new approaches to account for these uncertainties. In particular, the stochastically perturbed physical tendency scheme and the stochastically perturbed backscatter algorithm for the atmosphere are now used routinely for global numerical weather prediction. The European Centre also performs long-range predictions of the coupled atmosphere–ocean climate system in operational forecast mode, and the latest seasonal forecasting system—System 4—has the stochastically perturbed tendency and backscatter schemes implemented in a similar way to that for the medium-range weather forecasts. Here, we present results of the impact of these schemes in System 4 by contrasting the operational performance on seasonal time scales during the retrospective forecast period 1981–2010 with comparable simulations that do not account for the representation of model uncertainty. We find that the stochastic tendency perturbation schemes helped to reduce excessively strong convective activity especially over the Maritime Continent and the tropical Western Pacific, leading to reduced biases of the outgoing longwave radiation (OLR), cloud cover, precipitation and near-surface winds. Positive impact was also found for the statistics of the Madden–Julian oscillation (MJO), showing an increase in the frequencies and amplitudes of MJO events. Further, the errors of El Niño southern oscillation forecasts become smaller, whereas increases in ensemble spread lead to a better calibrated system if the stochastic tendency is activated. The backscatter scheme has overall neutral impact. Finally, evidence for noise-activated regime transitions has been found in a cluster analysis of mid-latitude circulation regimes over the Pacific–North America region. PMID:24842026
NASA Astrophysics Data System (ADS)
Kwon, J.; Yang, H.
2006-12-01
Although GPS provides continuous and accurate position information, there is still room for improvement of its positional accuracy, especially in medium- and long-range baseline determination. In general, for baselines longer than 50 km, the effect of ionospheric delay is the one causing the largest degradation in positional accuracy. For example, the ionospheric delay in double-differenced mode easily reaches 10 cm for a baseline length of 101 km. Therefore, many researchers have tried to mitigate/reduce the effect using various modeling methods. In this paper, the optimal stochastic modeling of the ionospheric delay in terms of baseline length is presented. The data processing has been performed by constructing a Kalman filter with states of positions, ambiguities, and the ionospheric delays in the double-differenced mode. Considering the long baseline length, both double-differenced GPS phase and code observations are used as observables, and LAMBDA has been applied to fix the ambiguities. Here, the ionospheric delay is stochastically modeled by the well-known Gaussian, and 1st and 3rd order Gauss-Markov processes. The parameters required in those models, such as correlation distance and time, are determined by least-squares adjustment using ionosphere-only observables. The results and analysis from this study show the effect of the stochastic models of the ionospheric delay in terms of the baseline length, models, and parameters used. In the above example with 101 km baseline length, it was found that the positional accuracy with appropriate ionospheric modeling (Gaussian) was about ±2 cm, whereas it reached about ±15 cm with no stochastic modeling. It is expected that the approach in this study contributes to improved positional accuracy, especially in medium- and long-range baseline determination.
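The first-order Gauss-Markov option can be sketched in its discrete form, which is how such a delay state is typically propagated inside a Kalman filter; the correlation time, steady-state sigma, and 30 s epoch interval below are illustrative values, not those estimated in the study.

# Sketch: a first-order Gauss-Markov model for a double-differenced ionospheric
# delay state, in the discrete form used for Kalman-filter propagation.
import numpy as np

rng = np.random.default_rng(2)
dt = 30.0                     # s, observation interval
tau = 1800.0                  # s, correlation time of the ionospheric delay
sigma = 0.05                  # m, steady-state standard deviation

phi = np.exp(-dt / tau)                        # state transition factor
q = sigma ** 2 * (1 - np.exp(-2 * dt / tau))   # discrete process-noise variance

delay = 0.0
history = []
for _ in range(2880):          # one day of epochs
    delay = phi * delay + rng.normal(0.0, np.sqrt(q))
    history.append(delay)

# In the filter, phi enters the transition matrix and q the process-noise
# matrix for the ionospheric state(s).
print(np.std(history))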
A multi-scaled approach for simulating chemical reaction systems.
Burrage, Kevin; Tian, Tianhai; Burrage, Pamela
2004-01-01
In this paper we give an overview of some very recent work, as well as presenting a new approach, on the stochastic simulation of multi-scaled systems involving chemical reactions. In many biological systems (such as genetic regulation and cellular dynamics) there is a mix between small numbers of key regulatory proteins, and medium and large numbers of molecules. In addition, it is important to be able to follow the trajectories of individual molecules by taking proper account of the randomness inherent in such a system. We describe different types of simulation techniques (including the stochastic simulation algorithm, Poisson Runge-Kutta methods and the balanced Euler method) for treating simulations in the three different reaction regimes: slow, medium and fast. We then review some recent techniques on the treatment of coupled slow and fast reactions for stochastic chemical kinetics and present a new approach which couples the three regimes mentioned above. We then apply this approach to a biologically inspired problem involving the expression and activity of LacZ and LacY proteins in E. coli, and conclude with a discussion on the significance of this work. Copyright 2004 Elsevier Ltd.
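For reference, the slow-regime building block mentioned above, Gillespie's direct-method stochastic simulation algorithm, can be sketched on a toy birth-death system; the rate constants and horizon are illustrative.

# Sketch: Gillespie's direct-method SSA for a toy birth-death system.
import numpy as np

rng = np.random.default_rng(4)

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0):
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a1, a2 = k_birth, k_death * x             # propensities of 0->X and X->0
        a0 = a1 + a2
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)            # time to next reaction
        x += 1 if rng.random() < a1 / a0 else -1  # choose which reaction fires
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = ssa_birth_death()
print("final copy number:", states[-1], "after", len(times) - 1, "reactions")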
NASA Astrophysics Data System (ADS)
Subramanian, Aneesh C.; Palmer, Tim N.
2017-06-01
Stochastic schemes to represent model uncertainty in the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system have helped improve its probabilistic forecast skill over the past decade by both improving its reliability and reducing the ensemble mean error. The largest uncertainties in the model arise from the model physics parameterizations. In the tropics, the parameterization of moist convection presents a major challenge for the accurate prediction of weather and climate. Superparameterization is a promising alternative strategy for including the effects of moist convection through explicit turbulent fluxes calculated from a cloud-resolving model (CRM) embedded within a global climate model (GCM). In this paper, we compare the impact of initial random perturbations in embedded CRMs, within the ECMWF ensemble prediction system, with the stochastically perturbed physical tendency (SPPT) scheme as a way to represent model uncertainty in medium-range tropical weather forecasts. We especially focus on forecasts of tropical convection and dynamics during MJO events in October-November 2011. These are well-studied events for MJO dynamics as they were also heavily observed during the DYNAMO field campaign. We show that a multiscale ensemble modeling approach helps improve forecasts of certain aspects of tropical convection during the MJO events, while it also tends to deteriorate certain large-scale dynamic fields with respect to the stochastically perturbed physical tendency approach that is used operationally at ECMWF.
Stochastic Parameterization: Toward a New View of Weather and Climate Models
Berner, Judith; Achatz, Ulrich; Batté, Lauriane; ...
2017-03-31
The last decade has seen the success of stochastic parameterizations in short-term, medium-range, and seasonal forecasts: operational weather centers now routinely use stochastic parameterization schemes to represent model inadequacy better and to improve the quantification of forecast uncertainty. Developed initially for numerical weather prediction, the inclusion of stochastic parameterizations not only provides better estimates of uncertainty, but it is also extremely promising for reducing long-standing climate biases and is relevant for determining the climate response to external forcing. This article highlights recent developments from different research groups that show that the stochastic representation of unresolved processes in the atmosphere, oceans, land surface, and cryosphere of comprehensive weather and climate models 1) gives rise to more reliable probabilistic forecasts of weather and climate and 2) reduces systematic model bias. We make a case that the use of mathematically stringent methods for the derivation of stochastic dynamic equations will lead to substantial improvements in our ability to accurately simulate weather and climate at all scales. Recent work in mathematics, statistical mechanics, and turbulence is reviewed; its relevance for the climate problem is demonstrated; and future research directions are outlined.
Some variance reduction methods for numerical stochastic homogenization
Blanc, X.; Le Bris, C.; Legoll, F.
2016-01-01
We give an overview of a series of recent studies devoted to variance reduction techniques for numerical stochastic homogenization. Numerical homogenization requires that a set of problems is solved at the microscale, the so-called corrector problems. In a random environment, these problems are stochastic and therefore need to be repeatedly solved, for several configurations of the medium considered. An empirical average over all configurations is then performed using the Monte Carlo approach, so as to approximate the effective coefficients necessary to determine the macroscopic behaviour. Variance severely affects the accuracy and the cost of such computations. Variance reduction approaches, borrowed from other contexts in the engineering sciences, can be useful. Some of these variance reduction techniques are presented, studied and tested here. PMID:27002065
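To make the role of variance reduction concrete, here is a minimal Python sketch for a toy one-dimensional two-phase medium, where the effective coefficient of each random configuration reduces to a harmonic mean, comparing plain Monte Carlo averaging over configurations with antithetic variates. The toy medium, the conductivity values, and the choice of antithetic variates are illustrative assumptions and not the specific corrector problems or techniques studied by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 64          # cells per corrector domain (toy 1D medium)
n_config = 500        # number of random configurations (Monte Carlo samples)

def conductivity(u):
    """Map uniform variates in [0,1] to a two-phase random conductivity field."""
    return np.where(u < 0.5, 1.0, 10.0)

def effective_coefficient(field):
    """In 1D the homogenized coefficient is the harmonic mean of the field."""
    return 1.0 / np.mean(1.0 / field)

# Plain Monte Carlo estimate of the effective coefficient.
plain = np.array([effective_coefficient(conductivity(rng.random(n_cells)))
                  for _ in range(n_config)])

# Antithetic variates: reuse each draw u together with 1 - u and average the pair.
antithetic = []
for _ in range(n_config // 2):
    u = rng.random(n_cells)
    a = effective_coefficient(conductivity(u))
    b = effective_coefficient(conductivity(1.0 - u))
    antithetic.append(0.5 * (a + b))
antithetic = np.array(antithetic)

print("plain MC   : mean %.4f, std of mean %.4f" % (plain.mean(), plain.std() / np.sqrt(plain.size)))
print("antithetic : mean %.4f, std of mean %.4f" % (antithetic.mean(), antithetic.std() / np.sqrt(antithetic.size)))
```

With the same total number of corrector solves, the antithetic estimator typically shows a visibly smaller standard error of the mean, which is the kind of gain variance reduction aims at.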
Mukherjee, Lipi; Zhai, Peng-Wang; Hu, Yongxiang; Winker, David M.
2018-01-01
Polarized radiation fields in a turbid medium are influenced by single-scattering properties of scatterers. It is common that media contain two or more types of scatterers, which makes it essential to properly mix single-scattering properties of different types of scatterers in the vector radiative transfer theory. The vector radiative transfer solvers can be divided into two basic categories: the stochastic and deterministic methods. The stochastic method is basically the Monte Carlo method, which can handle scatterers with different scattering properties explicitly. This mixture scheme is called the external mixture scheme in this paper. The deterministic methods, however, can only deal with a single set of scattering properties in the smallest discretized spatial volume. The single-scattering properties of different types of scatterers have to be averaged before they are input to deterministic solvers. This second scheme is called the internal mixture scheme. The equivalence of these two different mixture schemes of scattering properties has not been demonstrated so far. In this paper, polarized radiation fields for several scattering media are solved using the Monte Carlo and successive order of scattering (SOS) methods, where the scattering media contain two types of scatterers: Rayleigh scatterers (molecules) and Mie scatterers (aerosols). The Monte Carlo and SOS methods employ the external and internal mixture schemes of scatterers, respectively. It is found that the percentage differences between radiances solved by these two methods with different mixture schemes are of the order of 0.1%. The differences of Q/I, U/I, and V/I are of the order of 10^-5 to 10^-4, where I, Q, U, and V are the Stokes parameters. Therefore, the equivalence between these two mixture schemes is confirmed to the accuracy level of the radiative transfer numerical benchmarks. This result provides important guidelines for many radiative transfer applications that involve the mixture of different scattering and absorptive particles. PMID:29047543
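The distinction between the two mixture schemes can be illustrated with a small scalar sketch. The code below uses a Rayleigh phase function for molecules and a Henyey-Greenstein phase function as a stand-in for Mie aerosol scattering; the scattering coefficients and all symbols are assumed illustrative values, not the actual vector phase matrices or optical properties used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scalar stand-ins: Rayleigh phase function for molecules and a
# Henyey-Greenstein phase function as a proxy for Mie aerosol scattering.
def p_rayleigh(mu):
    return 0.75 * (1.0 + mu**2)

def p_hg(mu, g=0.7):
    return (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu)**1.5

beta_mol, beta_aer = 0.4, 0.6          # assumed scattering coefficients of the two species
w_mol = beta_mol / (beta_mol + beta_aer)

mu = np.linspace(-1.0, 1.0, 201)

# Internal mixture: average the phase functions, weighted by scattering coefficient.
p_internal = w_mol * p_rayleigh(mu) + (1.0 - w_mol) * p_hg(mu)

# External mixture: per scattering event, pick the species at random with the
# same weights and evaluate its own phase function; average over many events.
n_events = 20_000
pick_mol = rng.random(n_events) < w_mol
p_external = np.where(pick_mol[:, None], p_rayleigh(mu), p_hg(mu)).mean(axis=0)

print("max |internal - external| =", np.abs(p_internal - p_external).max())
```

In expectation the two curves coincide, which is the scalar analogue of the equivalence that the paper establishes numerically for the full polarized problem.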
Efficient simulation of intrinsic, extrinsic and external noise in biochemical systems.
Pischel, Dennis; Sundmacher, Kai; Flassig, Robert J
2017-07-15
Biological cells operate in a noisy regime influenced by intrinsic, extrinsic and external noise, which leads to large differences between individual cell states. Stochastic effects must be taken into account to characterize biochemical kinetics accurately. Since the exact solution of the chemical master equation, which governs the underlying stochastic process, cannot be derived for most biochemical systems, approximate methods are used to obtain a solution. In this study, a method to efficiently simulate the various sources of noise simultaneously is proposed and benchmarked on several examples. The method relies on the combination of the sigma point approach to describe extrinsic and external variability and the τ-leaping algorithm to account for the stochasticity due to probabilistic reactions. The comparison of our method to extensive Monte Carlo calculations demonstrates an immense computational advantage while losing an acceptable amount of accuracy. Additionally, the application to parameter optimization problems in stochastic biochemical reaction networks is shown, a task that is rarely tackled due to its huge computational burden. To give further insight, a MATLAB script is provided including the proposed method applied to a simple toy example of gene expression. MATLAB code is available at Bioinformatics online. flassig@mpi-magdeburg.mpg.de. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
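As background for the intrinsic-noise part of such a hybrid, the following Python sketch shows a plain τ-leaping update for a toy birth-death gene-expression model: each channel fires a Poisson number of times per leap. The rate constants, step size and model are illustrative assumptions, not the authors' benchmark systems, and the sigma-point treatment of extrinsic variability is not included.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy gene expression: production 0 -> mRNA (rate k), degradation mRNA -> 0 (rate gamma * mRNA)
k, gamma = 10.0, 0.1
stoich = np.array([+1, -1])               # state change of each reaction channel

def propensities(x):
    return np.array([k, gamma * x])

def tau_leap(x, tau, n_steps):
    """Advance the molecule count x by n_steps leaps of fixed size tau."""
    traj = [x]
    for _ in range(n_steps):
        a = propensities(x)
        firings = rng.poisson(a * tau)     # Poisson number of firings per channel
        x = max(0, x + int(np.dot(stoich, firings)))
        traj.append(x)
    return np.array(traj)

print(tau_leap(x=0, tau=0.5, n_steps=20))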
The concerted calculation of the BN-600 reactor for the deterministic and stochastic codes
NASA Astrophysics Data System (ADS)
Bogdanova, E. V.; Kuznetsov, A. N.
2017-01-01
The solution of the problem of increasing the safety of nuclear power plants implies the existence of complete and reliable information about the processes occurring in the core of a working reactor. Nowadays the Monte Carlo method is the most general-purpose method used to calculate the neutron-physical characteristics of a reactor, but it requires long computation times. Therefore, it may be useful to carry out coupled calculations with stochastic and deterministic codes. This article presents the results of research into the possibility of combining stochastic and deterministic algorithms in calculations of the BN-600 reactor. This is only one part of the work, which was carried out in the framework of a graduation project at the NRC “Kurchatov Institute” in cooperation with S. S. Gorodkov and M. A. Kalugin. The study considers a 2-D layer of the BN-600 reactor core from the international benchmark test published in the report IAEA-TECDOC-1623. Calculations of the reactor were performed with the MCU code and then with a standard operative diffusion algorithm with constants taken from the Monte Carlo computation. Macro cross-sections, diffusion coefficients, the effective multiplication factor and the distributions of neutron flux and power were obtained in 15 energy groups. Reasonable agreement between the stochastic and deterministic calculations of the BN-600 is observed.
Reconstruction of stochastic temporal networks through diffusive arrival times
NASA Astrophysics Data System (ADS)
Li, Xun; Li, Xiang
2017-06-01
Temporal networks have opened a new dimension in the definition and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular, but not exclusively, on the null-model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied to both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks from only moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications.
Energy storage arbitrage under day-ahead and real-time price uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnamurthy, Dheepak; Uckun, Canan; Zhou, Zhi
Electricity markets must match real-time supply and demand of electricity. With increasing penetration of renewable resources, it is important that this balancing is done effectively, considering the high uncertainty of wind and solar energy. Storing electrical energy can make the grid more reliable and efficient and energy storage is proposed as a complement to highly variable renewable energy sources. However, for investments in energy storage to increase, participating in the market must become economically viable for owners. This paper proposes a stochastic formulation of a storage owner’s arbitrage profit maximization problem under uncertainty in day-ahead (DA) and real-time (RT) market prices. The proposed model helps storage owners in market bidding and operational decisions and in estimation of the economic viability of energy storage. Finally, case study results on realistic market price data show that the novel stochastic bidding approach does significantly better than the deterministic benchmark.
Stochastic derivative-free optimization using a trust region framework
Larson, Jeffrey; Billups, Stephen C.
2016-02-17
This study presents a trust region algorithm to minimize a function f when one has access only to noise-corrupted function values f¯. The model-based algorithm dynamically adjusts its step length, taking larger steps when the model and function agree and smaller steps when the model is less accurate. The method does not require the user to specify a fixed pattern of points used to build local models and does not repeatedly sample points. If f is sufficiently smooth and the noise is independent and identically distributed with mean zero and finite variance, we prove that our algorithm produces iterates such that the corresponding function gradients converge in probability to zero. As a result, we present a prototype of our algorithm that, while simplistic in its management of previously evaluated points, solves benchmark problems in fewer function evaluations than do existing stochastic approximation methods.
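The following Python sketch illustrates the generic trust-region logic that such methods build on: fit a local model from noisy samples, take a step within the region, and grow or shrink the region depending on how well the predicted and observed decreases agree. This is a minimal one-dimensional caricature under assumed settings (quadratic model from five samples, fixed acceptance threshold), not the authors' algorithm or its convergence-guaranteeing point management.

```python
import numpy as np

rng = np.random.default_rng(3)

def f_noisy(x, sigma=0.01):
    """Noisy objective: smooth underlying function plus i.i.d. zero-mean noise."""
    return (x - 2.0)**2 + sigma * rng.standard_normal()

def trust_region_minimize(x0, delta=1.0, iters=40):
    x = x0
    for _ in range(iters):
        # Build a local quadratic model from noisy samples inside the trust region.
        xs = x + delta * np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
        ys = np.array([f_noisy(xi) for xi in xs])
        c2, c1, c0 = np.polyfit(xs - x, ys, 2)
        # Model minimizer, clipped to the trust region.
        step = -c1 / (2.0 * c2) if c2 > 0 else -np.sign(c1) * delta
        step = np.clip(step, -delta, delta)
        pred = -(c2 * step**2 + c1 * step)          # predicted decrease of the model
        actual = f_noisy(x) - f_noisy(x + step)     # observed (noisy) decrease
        rho = actual / pred if pred > 0 else -1.0
        if rho > 0.1:                  # model and function agree: accept step, grow region
            x, delta = x + step, min(2.0 * delta, 10.0)
        else:                          # poor agreement: reject step, shrink region
            delta *= 0.5
    return x

print("estimated minimizer:", trust_region_minimize(x0=-5.0))
```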
NASA Astrophysics Data System (ADS)
Witteveen, Jeroen A. S.; Bijl, Hester
2009-10-01
The Unsteady Adaptive Stochastic Finite Elements (UASFE) method resolves the effect of randomness in numerical simulations of single-mode aeroelastic responses with a constant accuracy in time for a constant number of samples. In this paper, the UASFE framework is extended to multi-frequency responses and continuous structures by employing a wavelet decomposition pre-processing step to decompose the sampled multi-frequency signals into single-frequency components. The effect of the randomness on the multi-frequency response is then obtained by summing the results of the UASFE interpolation at constant phase for the different frequency components. Results for multi-frequency responses and continuous structures show a three orders of magnitude reduction of computational costs compared to crude Monte Carlo simulations in a harmonically forced oscillator, a flutter panel problem, and the three-dimensional transonic AGARD 445.6 wing aeroelastic benchmark subject to random fields and random parameters with various probability distributions.
Predicting the Stochastic Properties of the Shallow Subsurface for Improved Geophysical Modeling
NASA Astrophysics Data System (ADS)
Stroujkova, A.; Vynne, J.; Bonner, J.; Lewkowicz, J.
2005-12-01
Strong ground motion data from numerous explosive field experiments and from moderate to large earthquakes show significant variations in amplitude and waveform shape with respect to both azimuth and range. Attempts to model these variations using deterministic models have often been unsuccessful. It has been hypothesized that a stochastic description of the geological medium is a more realistic approach. To estimate the stochastic properties of the shallow subsurface, we use Measurement While Drilling (MWD) data, which are routinely collected by mines in order to facilitate design of blast patterns. The parameters, such as rotation speed of the drill, torque, and penetration rate, are used to compute the rock's Specific Energy (SE), which is then related to a blastability index. We use values of SE measured at two different mines and calibrated to laboratory measurements of rock properties to determine correlation lengths of the subsurface rocks in 2D, needed to obtain 2D and 3D stochastic models. The stochastic models are then combined with the deterministic models and used to compute synthetic seismic waveforms.
Weaver, GracieLee M; Mendenhall, Brandon N; Hunnicutt, David; Picarella, Ryan; Leffelman, Brittanie; Perko, Michael; Bibeau, Daniel L
2018-05-01
The purpose of this study was to quantify the performance of organizations' worksite health promotion (WHP) activities against the benchmarking criteria included in the Well Workplace Checklist (WWC). The Wellness Council of America (WELCOA) developed a tool to assess WHP with its 100-item WWC, which represents WELCOA's 7 performance benchmarks. The setting was workplaces. This study includes a convenience sample of organizations that completed the checklist from 2008 to 2015. The sample size was 4643 entries from US organizations. The WWC includes demographic questions, general questions about WHP programs, and scales to measure performance against the WELCOA 7 benchmarks. Descriptive analyses of WWC items were completed separately for each year of the study period. The majority of the organizations represented each year were multisite, multishift, medium- to large-sized companies mostly in the services industry. Despite yearly changes in participating organizations, results across the WELCOA 7 benchmark scores were consistent year to year. Across all years, the benchmarks on which organizations performed lowest were senior-level support, data collection, and programming; wellness teams and supportive environments were the highest scoring benchmarks. In an era marked with economic swings and health-care reform, it appears that organizations are staying consistent in their performance across these benchmarks. The WWC could be useful for organizations, practitioners, and researchers in assessing the quality of WHP programs.
ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.
Safak, Erdal; Boore, David M.
1986-01-01
A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white-noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternate random process, filtered shot-noise process, eliminates these errors.
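For intuition, the Python sketch below generates a toy ground-acceleration sample from the commonly used model mentioned here, filtered white noise multiplied by an envelope function, and the filtered shot-noise alternative, where the white noise is replaced by a sparse random pulse train. The filter, envelope shape, and all parameter values are illustrative assumptions rather than the seismological model of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

dt, duration = 0.01, 20.0                  # time step and record length (s)
t = np.arange(0.0, duration, dt)

# Impulse response of a damped oscillator (a simple stand-in for the ground filter).
fg, zg = 2.5, 0.6                          # assumed filter frequency (Hz) and damping ratio
wg = 2.0 * np.pi * fg
h = np.exp(-zg * wg * t) * np.sin(wg * np.sqrt(1.0 - zg**2) * t)

# Filtered white noise, multiplied by a deterministic envelope function.
white = rng.standard_normal(t.size)
envelope = (t / 2.0)**2 * np.exp(-t / 2.0)             # builds up, then decays
acc_white = envelope * np.convolve(white, h, mode="full")[:t.size] * dt

# Filtered shot noise: replace the white noise by a sparse train of random pulses.
pulses = np.zeros_like(t)
arrivals = rng.random(t.size) < 5.0 * dt                # Poisson arrivals, rate 5 per second
pulses[arrivals] = rng.standard_normal(arrivals.sum())
acc_shot = envelope * np.convolve(pulses, h, mode="full")[:t.size] * dt

print("rms acceleration (white, shot):", acc_white.std(), acc_shot.std())
```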
From medium heterogeneity to flow and transport: A time-domain random walk approach
NASA Astrophysics Data System (ADS)
Hakoun, V.; Comolli, A.; Dentz, M.
2017-12-01
The prediction of flow and transport processes in heterogeneous porous media is based on the qualitative and quantitative understanding of the interplay between 1) spatial variability of hydraulic conductivity, 2) groundwater flow and 3) solute transport. Using a stochastic modeling approach, we study this interplay through direct numerical simulations of Darcy flow and advective transport in heterogeneous media. First, we study flow in correlated hydraulic permeability fields and shed light on the relationship between the statistics of log-hydraulic conductivity, a medium attribute, and the flow statistics. Second, we determine relationships between Eulerian and Lagrangian velocity statistics, that is, between flow and transport attributes. We show how Lagrangian statistics, and thus transport behaviors such as late particle arrival times, are influenced by the medium heterogeneity on the one hand and the initial particle velocities on the other. We find that equidistantly sampled Lagrangian velocities can be described by a Markov process that evolves on the characteristic heterogeneity length scale. We employ a stochastic relaxation model for the equidistantly sampled particle velocities, which is parametrized by the velocity correlation length. This description results in a time-domain random walk model for the particle motion, whose spatial transitions are characterized by the velocity correlation length and temporal transitions by the particle velocities. This approach relates the statistical medium and flow properties to large scale transport, and allows for conditioning on the initial particle velocities and thus on the medium properties in the injection region. The approach is tested against direct numerical simulations.
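A minimal time-domain random walk of this kind can be sketched in a few lines: particles take fixed space increments, the time increment is the space increment divided by the current velocity, and the velocity relaxes as a Markov process over the correlation length. The lognormal steady-state velocity distribution and the parameter values below are assumptions made for illustration, not the calibrated statistics of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

n_particles, n_steps = 2000, 200
ds, ell = 1.0, 5.0                         # spatial step and velocity correlation length
r = np.exp(-ds / ell)                      # correlation of log-velocity over one step

# Steady-state velocity statistics (assumed lognormal here).
mu_ln, sig_ln = 0.0, 1.0
w = rng.normal(mu_ln, sig_ln, n_particles)      # log-velocities, initialized at steady state

x = np.zeros(n_particles)
time = np.zeros(n_particles)
for _ in range(n_steps):
    v = np.exp(w)
    x += ds
    time += ds / v                               # time to cross the fixed space increment
    # Ornstein-Uhlenbeck-type relaxation of log-velocity on the scale ell.
    w = mu_ln + r * (w - mu_ln) + sig_ln * np.sqrt(1.0 - r**2) * rng.standard_normal(n_particles)

print("mean and std of arrival time at x = %.0f:" % x[0], time.mean(), time.std())
```

Heavy late-time tails in the arrival-time distribution emerge from the persistence of low velocities over the correlation length, which is the transport behavior the abstract refers to.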
Komarov, Ivan; D'Souza, Roshan M
2012-01-01
The Gillespie Stochastic Simulation Algorithm (GSSA) and its variants are cornerstone techniques to simulate reaction kinetics in situations where the concentrations of the reactants are too low to allow deterministic techniques such as differential equations. The inherent limitations of the GSSA include the time required for executing a single run and the need for multiple runs for parameter sweep exercises due to the stochastic nature of the simulation. Even very efficient variants of the GSSA are prohibitively expensive when computing parameter sweeps. Here we present a novel variant of the exact GSSA that is amenable to acceleration by using graphics processing units (GPUs). We parallelize the execution of a single realization across threads in a warp (fine-grained parallelism). A warp is a collection of threads that are executed synchronously on a single multi-processor. Warps executing in parallel on different multi-processors (coarse-grained parallelism) simultaneously generate multiple trajectories. Novel data structures and algorithms reduce memory traffic, which is the bottleneck in computing the GSSA. Our benchmarks show an 8×-120× performance gain over various state-of-the-art serial algorithms when simulating different types of models.
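For reference, the serial direct-method GSSA that such GPU variants accelerate can be written compactly as below. The toy reversible dimerization model and its rate constants are illustrative assumptions; the sketch shows only the per-trajectory loop, not the warp-level parallelization or the paper's data structures.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy reversible dimerization:  2A -> B (rate c1),  B -> 2A (rate c2)
c1, c2 = 0.005, 0.1
stoich = np.array([[-2, +1],               # effect of reaction 1 on (A, B)
                   [+2, -1]])              # effect of reaction 2 on (A, B)

def propensities(state):
    A, B = state
    return np.array([c1 * A * (A - 1) / 2.0, c2 * B])

def gillespie(state, t_end):
    """Direct-method SSA: exponential waiting times, reaction chosen by propensity."""
    t, trace = 0.0, [(0.0, *state)]
    while t < t_end:
        a = propensities(state)
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)            # time to next reaction
        j = rng.choice(len(a), p=a / a0)           # which reaction fires
        state = state + stoich[j]
        trace.append((t, *state))
    return trace

trace = gillespie(np.array([100, 0]), t_end=50.0)
print("final state:", trace[-1])
```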
A Two-Step Approach to Uncertainty Quantification of Core Simulators
Yankov, Artem; Collins, Benjamin; Klein, Markus; ...
2012-01-01
For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the “two-step” method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
NASA Astrophysics Data System (ADS)
Le Bars, Michael; Worster, M. Grae
2006-07-01
A finite-element simulation of binary alloy solidification based on a single-domain formulation is presented and tested. Resolution of phase change is first checked by comparison with the analytical results of Worster [M.G. Worster, Solidification of an alloy from a cooled boundary, J. Fluid Mech. 167 (1986) 481-501] for purely diffusive solidification. Fluid dynamical processes without phase change are then tested by comparison with previous numerical studies of thermal convection in a pure fluid [G. de Vahl Davis, Natural convection of air in a square cavity: a bench mark numerical solution, Int. J. Numer. Meth. Fluids 3 (1983) 249-264; D.A. Mayne, A.S. Usmani, M. Crapper, h-adaptive finite element solution of high Rayleigh number thermally driven cavity problem, Int. J. Numer. Meth. Heat Fluid Flow 10 (2000) 598-615; D.C. Wan, B.S.V. Patnaik, G.W. Wei, A new benchmark quality solution for the buoyancy driven cavity by discrete singular convolution, Numer. Heat Transf. 40 (2001) 199-228], in a porous medium with a constant porosity [G. Lauriat, V. Prasad, Non-darcian effects on natural convection in a vertical porous enclosure, Int. J. Heat Mass Transf. 32 (1989) 2135-2148; P. Nithiarasu, K.N. Seetharamu, T. Sundararajan, Natural convective heat transfer in an enclosure filled with fluid saturated variable porosity medium, Int. J. Heat Mass Transf. 40 (1997) 3955-3967] and in a mixed liquid-porous medium with a spatially variable porosity [P. Nithiarasu, K.N. Seetharamu, T. Sundararajan, Natural convective heat transfer in an enclosure filled with fluid saturated variable porosity medium, Int. J. Heat Mass Transf. 40 (1997) 3955-3967; N. Zabaras, D. Samanta, A stabilized volume-averaging finite element method for flow in porous media and binary alloy solidification processes, Int. J. Numer. Meth. Eng. 60 (2004) 1103-1138]. Finally, new benchmark solutions for simultaneous flow through both fluid and porous domains and for convective solidification processes are presented, based on the similarity solutions in corner-flow geometries recently obtained by Le Bars and Worster [M. Le Bars, M.G. Worster, Interfacial conditions between a pure fluid and a porous medium: implications for binary alloy solidification, J. Fluid Mech. (in press)]. Good agreement is found for all tests, hence validating our physical and numerical methods. More generally, the computations presented here could now be considered as standard and reliable analytical benchmarks for numerical simulations, specifically and independently testing the different processes underlying binary alloy solidification.
ForceGen 3D structure and conformer generation: from small lead-like molecules to macrocyclic drugs
NASA Astrophysics Data System (ADS)
Cleves, Ann E.; Jain, Ajay N.
2017-05-01
We introduce the ForceGen method for 3D structure generation and conformer elaboration of drug-like small molecules. ForceGen is novel, avoiding use of distance geometry, molecular templates, or simulation-oriented stochastic sampling. The method is primarily driven by the molecular force field, implemented using an extension of MMFF94s and a partial charge estimator based on electronegativity-equalization. The force field is coupled to algorithms for direct sampling of realistic physical movements made by small molecules. Results are presented on a standard benchmark from the Cambridge Crystallographic Database of 480 drug-like small molecules, including full structure generation from SMILES strings. Reproduction of protein-bound crystallographic ligand poses is demonstrated on four carefully curated data sets: the ConfGen Set (667 ligands), the PINC cross-docking benchmark (1062 ligands), a large set of macrocyclic ligands (182 total with typical ring sizes of 12-23 atoms), and a commonly used benchmark for evaluating macrocycle conformer generation (30 ligands total). Results compare favorably to alternative methods, and performance on macrocyclic compounds approaches that observed on non-macrocycles while yielding a roughly 100-fold speed improvement over alternative MD-based methods with comparable performance.
Hasenauer, J; Wolf, V; Kazeroonian, A; Theis, F J
2014-09-01
The time-evolution of continuous-time discrete-state biochemical processes is governed by the Chemical Master Equation (CME), which describes the probability of the molecular counts of each chemical species. As the corresponding number of discrete states is, for most processes, large, a direct numerical simulation of the CME is in general infeasible. In this paper we introduce the method of conditional moments (MCM), a novel approximation method for the solution of the CME. The MCM employs a discrete stochastic description for low-copy number species and a moment-based description for medium/high-copy number species. The moments of the medium/high-copy number species are conditioned on the state of the low abundance species, which allows us to capture complex correlation structures arising, e.g., for multi-attractor and oscillatory systems. We prove that the MCM provides a generalization of previous approximations of the CME based on hybrid modeling and moment-based methods. Furthermore, it improves upon these existing methods, as we illustrate using a model for the dynamics of stochastic single-gene expression. This application example shows that due to the more general structure, the MCM allows for the approximation of multi-modal distributions.
Dwarf galaxies: a lab to investigate the neutron capture elements production
NASA Astrophysics Data System (ADS)
Cescutti, Gabriele
2018-06-01
In this contribution, I focus on the neutron capture elements observed in the spectra of old halo stars and of stars in ultra-faint galaxies. Adopting a stochastic chemical evolution model and the Galactic halo as a benchmark, I present new constraints on the rate and time scales of r-process events, based on the discovery of r-process-rich stars in the ultra-faint galaxy Reticulum 2. I also show that an s-process activated by rotation in massive stars can play an important role in the production of heavy elements.
Robust stochastic optimization for reservoir operation
NASA Astrophysics Data System (ADS)
Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin
2015-01-01
Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.
NASA Astrophysics Data System (ADS)
Stegmann, Patrick G.; Tang, Guanglin; Yang, Ping; Johnson, Benjamin T.
2018-05-01
A structural model is developed for the single-scattering properties of snow and graupel particles with a strongly heterogeneous morphology and an arbitrary variable mass density. This effort is aimed to provide a mechanism to consider particle mass density variation in the microwave scattering coefficients implemented in the Community Radiative Transfer Model (CRTM). The stochastic model applies a bicontinuous random medium algorithm to a simple base shape and uses the Finite-Difference-Time-Domain (FDTD) method to compute the single-scattering properties of the resulting complex morphology.
Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Dawson, A.; Palmer, T.
2017-12-01
Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.
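To make the multiplicative form of SPPT concrete, the following Python sketch perturbs a stand-in physics tendency by a factor (1 + r), with r evolving as an AR(1) process in time. It is a deliberately simplified single-column caricature under assumed parameter values; the operational scheme uses spatially correlated patterns on several space and time scales, which are not represented here.

```python
import numpy as np

rng = np.random.default_rng(7)

n_steps, dt = 240, 900.0                   # forecast steps and model time step (s)
tau, sigma = 6 * 3600.0, 0.5               # assumed decorrelation time (s) and perturbation std
phi = np.exp(-dt / tau)                    # AR(1) coefficient

def parametrised_tendency(state):
    """Stand-in for the sum of the physics tendencies at one grid column."""
    return -0.1 * state + 1.0

state, r = 0.0, 0.0
for _ in range(n_steps):
    # AR(1) evolution of the multiplicative perturbation (single column here).
    r = phi * r + sigma * np.sqrt(1.0 - phi**2) * rng.standard_normal()
    # SPPT-style multiplicative perturbation of the total parametrised tendency.
    tendency = (1.0 + np.clip(r, -1.0, 1.0)) * parametrised_tendency(state)
    state += dt / 3600.0 * tendency         # crude time stepping for illustration

print("final state with SPPT-style perturbations:", state)
```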
NASA Astrophysics Data System (ADS)
Trindade, B. C.; Reed, P. M.
2017-12-01
The growing access to and reduced cost of computing power in recent years has promoted rapid development and application of multi-objective water supply portfolio planning. As this trend continues there is a pressing need for flexible risk-based simulation frameworks and improved algorithm benchmarking for emerging classes of water supply planning and management problems. This work contributes the Water Utilities Management and Planning (WUMP) model: a generalizable and open source simulation framework designed to capture how water utilities can minimize operational and financial risks by regionally coordinating planning and management choices, i.e. making more efficient and coordinated use of restrictions, water transfers and financial hedging combined with possible construction of new infrastructure. We introduce the WUMP simulation framework as part of a new multi-objective benchmark problem for planning and management of regionally integrated water utility companies. In this problem, a group of fictitious water utilities seek to balance the use of the mentioned reliability-driven actions (e.g., restrictions, water transfers and infrastructure pathways) against their inherent financial risks. Several traits make this an ideal benchmark problem, namely the presence of (1) strong non-linearities and discontinuities in the Pareto front caused by the step-wise nature of the decision making formulation and by the abrupt addition of storage through infrastructure construction, (2) noise due to the stochastic nature of the streamflows and water demands, and (3) non-separability resulting from the cooperative formulation of the problem, in which decisions made by one stakeholder may substantially impact others. Both the open source WUMP simulation framework and its demonstration in a challenging benchmarking example hold value for promoting broader advances in urban water supply portfolio planning for regions confronting change.
A scalable moment-closure approximation for large-scale biochemical reaction networks
Kazeroonian, Atefeh; Theis, Fabian J.; Hasenauer, Jan
2017-01-01
Motivation: Stochastic molecular processes are a leading cause of cell-to-cell variability. Their dynamics are often described by continuous-time discrete-state Markov chains and simulated using stochastic simulation algorithms. As these stochastic simulations are computationally demanding, ordinary differential equation models for the dynamics of the statistical moments have been developed. The number of state variables of these approximating models, however, grows at least quadratically with the number of biochemical species. This limits their application to small- and medium-sized processes. Results: In this article, we present a scalable moment-closure approximation (sMA) for the simulation of statistical moments of large-scale stochastic processes. The sMA exploits the structure of the biochemical reaction network to reduce the covariance matrix. We prove that sMA yields approximating models whose number of state variables depends predominantly on local properties, i.e. the average node degree of the reaction network, instead of the overall network size. The resulting complexity reduction is assessed by studying a range of medium- and large-scale biochemical reaction networks. To evaluate the approximation accuracy and the improvement in computational efficiency, we study models for JAK2/STAT5 signalling and NFκB signalling. Our method is applicable to generic biochemical reaction networks and we provide an implementation, including an SBML interface, which renders the sMA easily accessible. Availability and implementation: The sMA is implemented in the open-source MATLAB toolbox CERENA and is available from https://github.com/CERENADevelopers/CERENA. Contact: jan.hasenauer@helmholtz-muenchen.de or atefeh.kazeroonian@tum.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881983
A non-stochastic iterative computational method to model light propagation in turbid media
NASA Astrophysics Data System (ADS)
McIntyre, Thomas J.; Zemp, Roger J.
2015-03-01
Monte Carlo models are widely used to model light transport in turbid media, however their results implicitly contain stochastic variations. These fluctuations are not ideal, especially for inverse problems where Jacobian matrix errors can lead to large uncertainties upon matrix inversion. Yet Monte Carlo approaches are more computationally favorable than solving the full Radiative Transport Equation. Here, a non-stochastic computational method of estimating fluence distributions in turbid media is proposed, which is called the Non-Stochastic Propagation by Iterative Radiance Evaluation method (NSPIRE). Rather than using stochastic means to determine a random walk for each photon packet, the propagation of light from any element to all other elements in a grid is modelled simultaneously. For locally homogeneous anisotropic turbid media, the matrices used to represent scattering and projection are shown to be block Toeplitz, which leads to computational simplifications via convolution operators. To evaluate the accuracy of the algorithm, 2D simulations were done and compared against Monte Carlo models for the cases of an isotropic point source and a pencil beam incident on a semi-infinite turbid medium. The model was shown to have a mean percent error less than 2%. The algorithm represents a new paradigm in radiative transport modelling and may offer a non-stochastic alternative to modeling light transport in anisotropic scattering media for applications where the diffusion approximation is insufficient.
Chou, Sheng-Kai; Jiau, Ming-Kai; Huang, Shih-Chia
2016-08-01
The growing ubiquity of vehicles has led to increased concerns about environmental issues. These concerns can be mitigated by implementing an effective carpool service. In an intelligent carpool system, an automated service process assists carpool participants in determining routes and matches. It is a discrete optimization problem that involves a system-wide condition as well as participants' expectations. In this paper, we solve the carpool service problem (CSP) to provide satisfactory ride matches. To this end, we developed a particle swarm carpool algorithm based on stochastic set-based particle swarm optimization (PSO). Our method introduces stochastic coding to augment traditional particles, and uses three terminologies to represent a particle: 1) particle position; 2) particle view; and 3) particle velocity. In this way, the set-based PSO (S-PSO) can be realized by local exploration. In the simulation and experiments, two kinds of discrete PSOs, the S-PSO and the binary PSO (BPSO), and a genetic algorithm (GA) are compared and examined using test benchmarks that simulate a real-world metropolis. We observed that the S-PSO consistently outperformed the BPSO and the GA. Moreover, our method yielded the best result in a statistical test and successfully obtained numerical results for meeting the optimization objectives of the CSP.
Ontology for Semantic Data Integration in the Domain of IT Benchmarking.
Pfaff, Matthias; Neubig, Stefan; Krcmar, Helmut
2018-01-01
A domain-specific ontology for IT benchmarking has been developed to bridge the gap between a systematic characterization of IT services and their data-based valuation. Since information is generally collected during a benchmark exercise using questionnaires on a broad range of topics, such as employee costs, software licensing costs, and quantities of hardware, it is commonly stored as natural language text; thus, this information is stored in an intrinsically unstructured form. Although these data form the basis for identifying potentials for IT cost reductions, neither a uniform description of any measured parameters nor the relationship between such parameters exists. Hence, this work proposes an ontology for the domain of IT benchmarking, available at https://w3id.org/bmontology. The design of this ontology is based on requirements mainly elicited from a domain analysis, which considers analyzing documents and interviews with representatives from Small- and Medium-Sized Enterprises and Information and Communications Technology companies over the last eight years. The development of the ontology and its main concepts is described in detail (i.e., the conceptualization of benchmarking events, questionnaires, IT services, indicators and their values) together with its alignment with the DOLCE-UltraLite foundational ontology.
Numerical Analysis of Stochastic Dynamical Systems in the Medium-Frequency Range
2003-02-01
Methods for structural vibration analysis include the traditional modal analysis, well suited to the low-frequency range where the first few structural normal modes primarily constitute the total response, and, in the higher frequency range, the statistical energy analysis (SEA).
Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.
Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado
2017-01-01
Exact stochastic simulation is an indispensable tool for a quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire, with a probability proportional to the reaction propensity, and updating the system state accordingly. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. The algorithm therefore provides favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against the state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
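The composition-rejection idea itself can be sketched as below: reactions are grouped by propensity magnitude, a group is chosen in proportion to its summed propensity, and a member is then accepted by rejection against the group's power-of-two cap. This is the classic composition-rejection search on current propensities; the paper's algorithm additionally works with propensity lower and upper bounds so that propensity updates can be skipped, which is not shown here, and the toy propensity values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def composition_rejection_select(a):
    """Pick reaction j with probability a[j] / sum(a) via composition-rejection."""
    groups = {}                                 # group id -> list of reaction indices
    for j, aj in enumerate(a):
        if aj <= 0.0:
            continue
        g = int(np.ceil(np.log2(aj)))           # aj lies in (2**(g-1), 2**g]
        groups.setdefault(g, []).append(j)

    # Composition step: pick a group in proportion to its summed propensity.
    ids = list(groups)
    weights = np.array([sum(a[j] for j in groups[g]) for g in ids])
    g = ids[rng.choice(len(ids), p=weights / weights.sum())]

    # Rejection step within the group; acceptance probability is at least 1/2.
    members, cap = groups[g], 2.0 ** g
    while True:
        j = members[rng.integers(len(members))]
        if rng.random() * cap < a[j]:
            return j

a = np.array([0.3, 1.5, 2.0, 7.0, 0.9])          # current propensities (illustrative)
counts = np.bincount([composition_rejection_select(a) for _ in range(20000)], minlength=a.size)
print("empirical frequencies:", counts / counts.sum())
print("target probabilities :", a / a.sum())
```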
Arcos-García, Álvaro; Álvarez-García, Juan A; Soria-Morillo, Luis M
2018-03-01
This paper presents a Deep Learning approach for traffic sign recognition systems. Several classification experiments are conducted over publicly available traffic sign datasets from Germany and Belgium using a Deep Neural Network which comprises Convolutional layers and Spatial Transformer Networks. Such trials are built to measure the impact of diverse factors with the end goal of designing a Convolutional Neural Network that can improve the state-of-the-art of traffic sign classification task. First, different adaptive and non-adaptive stochastic gradient descent optimisation algorithms such as SGD, SGD-Nesterov, RMSprop and Adam are evaluated. Subsequently, multiple combinations of Spatial Transformer Networks placed at distinct positions within the main neural network are analysed. The recognition rate of the proposed Convolutional Neural Network reports an accuracy of 99.71% in the German Traffic Sign Recognition Benchmark, outperforming previous state-of-the-art methods and also being more efficient in terms of memory requirements. Copyright © 2018 Elsevier Ltd. All rights reserved.
Commercial Building Energy Saver, Web App
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon
The CBES App is a web-based toolkit for use by small businesses and building owners and operators of small and medium size commercial buildings to perform energy benchmarking and retrofit analysis for buildings. The CBES App analyzes the energy performance of the user's building for pre- and post-retrofit, in conjunction with the user's input data, to identify recommended retrofit measures, energy savings and economic analysis for the selected measures. The CBES App provides energy benchmarking, including getting an EnergyStar score using the EnergyStar API and benchmarking against California peer buildings using the EnergyIQ API. The retrofit analysis includes a preliminary analysis by looking up retrofit measures from a pre-simulated database DEEP, and a detailed analysis creating and running EnergyPlus models to calculate energy savings of retrofit measures. The CBES App builds upon the LBNL CBES API.
NASA Astrophysics Data System (ADS)
Sallah, M.
2014-03-01
The problem of monoenergetic radiative transfer in a finite planar stochastic atmospheric medium with polarized (vector) Rayleigh scattering is considered. The solution is presented for arbitrary absorption and scattering cross sections. The extinction function of the medium is assumed to be a continuous random function of position, with fluctuations about the mean taken as Gaussian distributed. The joint probability distribution function of these Gaussian random variables is used to calculate the ensemble-averaged quantities, such as reflectivity and transmissivity, for an arbitrary correlation function. A modified Gaussian probability distribution function is also used to average the solution, in order to exclude possible negative values of the optical variable. The Pomraning-Eddington approximation is first used to obtain the deterministic analytical solution for both the total intensity and the difference function used to describe the polarized radiation. The problem is treated with specular reflecting boundaries and an angular-dependent flux incident externally upon the medium from one side, with no flux from the other side. For the sake of comparison, two different forms of the weight function, which are introduced to force the boundary conditions to be fulfilled, are used. Numerical results for the average reflectivity and average transmissivity are obtained for both Gaussian and modified Gaussian probability density functions at different degrees of polarization.
Multiscale Hy3S: hybrid stochastic simulation for supercomputers.
Salis, Howard; Sotiropoulos, Vassilios; Kaznessis, Yiannis N
2006-02-24
Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users create biological systems and analyze data. We demonstrate the accuracy and efficiency of Hy3S with examples, including a large-scale system benchmark and a complex bistable biochemical network with positive feedback. The software itself is open-sourced under the GPL license and is modular, allowing users to modify it for their own purposes. Hy3S is a powerful suite of simulation programs for simulating the stochastic dynamics of networks of biochemical reactions. Its first public version enables computational biologists to more efficiently investigate the dynamics of realistic biological systems.
Matter-wave dark solitons: stochastic versus analytical results.
Cockburn, S P; Nistazakis, H E; Horikis, T P; Kevrekidis, P G; Proukakis, N P; Frantzeskakis, D J
2010-04-30
The dynamics of dark matter-wave solitons in elongated atomic condensates are discussed at finite temperatures. Simulations with the stochastic Gross-Pitaevskii equation reveal a noticeable, experimentally observable spread in individual soliton trajectories, attributed to inherent fluctuations in both phase and density of the underlying medium. Averaging over a number of such trajectories (as done in experiments) washes out such background fluctuations, revealing a well-defined temperature-dependent temporal growth in the oscillation amplitude. The average soliton dynamics is well captured by the simpler dissipative Gross-Pitaevskii equation, both numerically and via an analytically derived equation for the soliton center based on perturbation theory for dark solitons.
NASA Astrophysics Data System (ADS)
Zemenkova, M. Y.; Shabarov, A.; Shatalov, A.; Puldas, L.
2018-05-01
The problem of describing the pore space and calculating relative phase permeabilities (RPP) for two-phase filtration is considered. A technique for constructing a pore-network structure with constant and variable channel diameters is proposed. A description of the RPP design model based on capillary pressure curves is presented, taking into account the variability of diameters along the length of the pore channels. Using calculations for core samples from the Urnenskoye and Verkhnechonskoye deposits as an example, the possibilities of calculating the RPP are shown when using a stochastic distribution of pore diameters and medium-flow diameters.
Single realization stochastic FDTD for weak scattering waves in biological random media.
Tan, Tengmeng; Taflove, Allen; Backman, Vadim
2013-02-01
This paper introduces an iterative scheme to overcome the unresolved issues presented in S-FDTD (stochastic finite-difference time-domain) for obtaining ensemble average field values recently reported by Smith and Furse in an attempt to replace the brute force multiple-realization also known as Monte-Carlo approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength scale features. Numerical results demonstrate that such a small scale variation can be effectively modeled with a random medium problem which when simulated with the proposed S-FDTD indeed produces a very accurate result.
Single realization stochastic FDTD for weak scattering waves in biological random media
Tan, Tengmeng; Taflove, Allen; Backman, Vadim
2015-01-01
This paper introduces an iterative scheme to overcome the unresolved issues presented in S-FDTD (stochastic finite-difference time-domain) for obtaining ensemble average field values recently reported by Smith and Furse in an attempt to replace the brute force multiple-realization also known as Monte-Carlo approach with a single-realization scheme. Our formulation is particularly useful for studying light interactions with biological cells and tissues having sub-wavelength scale features. Numerical results demonstrate that such a small scale variation can be effectively modeled with a random medium problem which when simulated with the proposed S-FDTD indeed produces a very accurate result. PMID:27158153
Li, Yihe; Li, Bofeng; Gao, Yang
2015-01-01
With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400
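The effect of treating the interpolated corrections as pseudo-observations with an estimated variance, rather than as exact values, can be illustrated with a small weighted least-squares sketch. The two-parameter toy model, the mapping-function-style design matrix, and all noise levels below are assumptions for illustration only, not the actual PPP observation model or stochastic model of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Unknowns: x = [receiver clock-like bias, zenith wet delay]  (toy 2-parameter problem)
x_true = np.array([1.2, 0.15])

# Carrier-phase-like observations: design matrix maps unknowns to observations.
n_obs = 8
A = np.column_stack([np.ones(n_obs), 1.0 / np.sin(np.radians(rng.uniform(15, 80, n_obs)))])
sigma_obs = 0.01
y = A @ x_true + sigma_obs * rng.standard_normal(n_obs)

# Network-interpolated atmospheric correction, used as a pseudo-observation of the delay.
sigma_est, sigma_interp = 0.01, 0.02       # estimation error at reference sites, interpolation error
sigma_pseudo = np.hypot(sigma_est, sigma_interp)
z_pseudo = x_true[1] + sigma_pseudo * rng.standard_normal()

def weighted_lsq(A, y, sigmas):
    W = np.diag(1.0 / np.asarray(sigmas) ** 2)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

A_aug = np.vstack([A, [0.0, 1.0]])
y_aug = np.append(y, z_pseudo)

x_deterministic = weighted_lsq(A_aug, y_aug, [sigma_obs] * n_obs + [1e-6])        # correction taken as exact
x_stochastic = weighted_lsq(A_aug, y_aug, [sigma_obs] * n_obs + [sigma_pseudo])   # correction with its variance

print("truth        :", x_true)
print("deterministic:", x_deterministic)
print("stochastic   :", x_stochastic)
```

Giving the pseudo-observation its realistic variance prevents an erroneous interpolated correction from dominating the estimate, which is the mechanism behind the improved fix rates reported in the abstract.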
Li, Yihe; Li, Bofeng; Gao, Yang
2015-11-30
With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network.
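As an illustration of the two-component error budget described above, the following minimal Python sketch interpolates a zenith atmospheric correction from three hypothetical reference stations by inverse-distance weighting and combines the propagated variance of the station estimates with a distance-dependent discrepancy term. The station coordinates, correction values, variances, and the discrepancy rate are invented for the example and are not taken from the paper.

```python
import numpy as np

def interpolate_correction(user_xy, ref_xy, ref_corr, ref_var,
                           discrepancy_rate=1e-3):
    """Inverse-distance interpolation of an atmospheric correction.

    Returns the interpolated correction and a variance made of two parts:
    (1) the propagated variance of the reference-station estimates and
    (2) a distance-dependent term for the discrepancy between reference
    stations and user (both components are illustrative assumptions).
    """
    d = np.linalg.norm(ref_xy - user_xy, axis=1)          # distances [km]
    w = (1.0 / d) / np.sum(1.0 / d)                       # IDW weights
    corr = np.dot(w, ref_corr)                            # interpolated value
    var_est = np.dot(w**2, ref_var)                       # propagated variance
    var_dis = (discrepancy_rate * np.min(d))**2           # distance-dependent part
    return corr, var_est + var_dis

# Hypothetical reference-network geometry (km) and zenith delay corrections (m)
ref_xy   = np.array([[0.0, 0.0], [60.0, 10.0], [30.0, 55.0]])
ref_corr = np.array([0.012, 0.018, 0.015])
ref_var  = np.array([2e-6, 3e-6, 2.5e-6])

corr, var = interpolate_correction(np.array([25.0, 20.0]), ref_xy, ref_corr, ref_var)
print(f"correction = {corr:.4f} m, sigma = {np.sqrt(var)*1000:.2f} mm")
```

In this sketch the interpolated correction would then be applied as a pseudo-observation whose weight follows from the combined variance, which is the role the estimated stochastic model plays in the paper.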
Construct validity and expert benchmarking of the haptic virtual reality dental simulator.
Suebnukarn, Siriwan; Chaisombat, Monthalee; Kongpunwijit, Thanapohn; Rhienmora, Phattanapon
2014-10-01
The aim of this study was to demonstrate construct validation of the haptic virtual reality (VR) dental simulator and to define expert benchmarking criteria for skills assessment. Thirty-four self-selected participants (fourteen novices, fourteen intermediates, and six experts in endodontics) at one dental school performed ten repetitions of three mode tasks of endodontic cavity preparation: easy (mandibular premolar with one canal), medium (maxillary premolar with two canals), and hard (mandibular molar with three canals). The virtual instrument's path length was registered by the simulator. The outcomes were assessed by an expert. The error scores in easy and medium modes accurately distinguished the experts from novices and intermediates at the onset of training, when there was a significant difference between groups (ANOVA, p<0.05). The trend was consistent until trial 5. From trial 6 on, the three groups achieved similar scores. No significant difference was found between groups at the end of training. Error score analysis was not able to distinguish any group at the hard level of training. Instrument path length showed a difference in performance according to group at the onset of training (ANOVA, p<0.05). This study established construct validity for the haptic VR dental simulator by demonstrating its capability to discriminate between experts and non-experts. The experts' error scores and path length were used to define benchmarking criteria for optimal performance.
Laser beam self-focusing in turbulent dissipative media.
Hafizi, B; Peñano, J R; Palastro, J P; Fischer, R P; DiComo, G
2017-01-15
A high-power laser beam propagating through a dielectric in the presence of fluctuations is subject to diffraction, dissipation, and optical Kerr nonlinearity. A method of moments was applied to a stochastic, nonlinear enveloped wave equation to analyze the evolution of the long-term spot radius. For propagation in atmospheric turbulence described by a Kolmogorov-von Kármán spectral density, the analysis was benchmarked against field experiments in the low-power limit and compared with simulation results in the high-power regime. Dissipation reduced the effect of self-focusing and led to chromatic aberration.
Langevin model of low-energy fission
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierk, Arnold John
Since the earliest days of fission, stochastic models have been used to describe and model the process. For a quarter century, numerical solutions of Langevin equations have been used to model fission of highly excited nuclei, where microscopic potential-energy effects have been neglected. In this paper I present a Langevin model for the fission of nuclei with low to medium excitation energies, for which microscopic effects in the potential energy cannot be ignored. I solve Langevin equations in a five-dimensional space of nuclear deformations. The macroscopic-microscopic potential energy from a global nuclear structure model well benchmarked to nuclear masses is tabulated on a mesh of approximately 10^7 points in this deformation space. The potential is defined continuously inside the mesh boundaries by use of a moving five-dimensional cubic spline approximation. Because of reflection symmetry, the effective mesh is nearly twice this size. For the inertia, I use a (possibly scaled) approximation to the inertia tensor defined by irrotational flow. A phenomenological dissipation tensor related to one-body dissipation is used. A normal-mode analysis of the dynamical system at the saddle point and the assumption of quasiequilibrium provide distributions of initial conditions appropriate to low excitation energies, and are extended to model spontaneous fission. A dynamical model of postscission fragment motion including dynamical deformations and separation allows the calculation of final mass and kinetic-energy distributions, along with other interesting quantities. The model makes quantitative predictions for fragment mass and kinetic-energy yields, some of which are very close to measured ones. Varying the energy of the incident neutron for induced fission allows the prediction of energy dependencies of fragment yields and average kinetic energies. With a simple approximation for spontaneous fission starting conditions, quantitative predictions are made for some observables which are close to measurements. In conclusion, this model is able to reproduce several mass and energy yield observables with a small number of physical parameters, some of which do not need to be varied after benchmarking to 235U(n, f) to predict results for other fissioning isotopes.
Langevin model of low-energy fission
Sierk, Arnold John
2017-09-05
Since the earliest days of fission, stochastic models have been used to describe and model the process. For a quarter century, numerical solutions of Langevin equations have been used to model fission of highly excited nuclei, where microscopic potential-energy effects have been neglected. In this paper I present a Langevin model for the fission of nuclei with low to medium excitation energies, for which microscopic effects in the potential energy cannot be ignored. I solve Langevin equations in a five-dimensional space of nuclear deformations. The macroscopic-microscopic potential energy from a global nuclear structure model well benchmarked to nuclear masses is tabulated on a mesh of approximately 10^7 points in this deformation space. The potential is defined continuously inside the mesh boundaries by use of a moving five-dimensional cubic spline approximation. Because of reflection symmetry, the effective mesh is nearly twice this size. For the inertia, I use a (possibly scaled) approximation to the inertia tensor defined by irrotational flow. A phenomenological dissipation tensor related to one-body dissipation is used. A normal-mode analysis of the dynamical system at the saddle point and the assumption of quasiequilibrium provide distributions of initial conditions appropriate to low excitation energies, and are extended to model spontaneous fission. A dynamical model of postscission fragment motion including dynamical deformations and separation allows the calculation of final mass and kinetic-energy distributions, along with other interesting quantities. The model makes quantitative predictions for fragment mass and kinetic-energy yields, some of which are very close to measured ones. Varying the energy of the incident neutron for induced fission allows the prediction of energy dependencies of fragment yields and average kinetic energies. With a simple approximation for spontaneous fission starting conditions, quantitative predictions are made for some observables which are close to measurements. In conclusion, this model is able to reproduce several mass and energy yield observables with a small number of physical parameters, some of which do not need to be varied after benchmarking to 235U(n, f) to predict results for other fissioning isotopes.
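As a rough illustration of what a Langevin fission calculation involves, the sketch below integrates a one-dimensional Langevin equation (Euler-Maruyama) for a single deformation coordinate in an invented barrier potential, starting trajectories at the saddle point with thermal momenta as the abstract describes. The potential, inertia, friction, temperature, and scission criterion are all illustrative stand-ins for the five-dimensional macroscopic-microscopic model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative one-dimensional fission potential with a barrier at q = 2
def potential(q):                  # MeV
    return 1.5 * q**2 - 0.5 * q**3

def force(q, h=1e-5):              # -dV/dq by central difference
    return -(potential(q + h) - potential(q - h)) / (2.0 * h)

M, gamma, T = 1.0, 1.0, 1.0        # inertia, friction, temperature (illustrative)
dt = 0.005
sigma = np.sqrt(2.0 * gamma * T * dt)   # fluctuation-dissipation relation

def trajectory(q_saddle=2.0, q_scission=6.0, n_max=20_000):
    """Start at the saddle with a thermal momentum and descend to scission."""
    q = q_saddle
    p = rng.normal(0.0, np.sqrt(M * T))
    for step in range(n_max):
        p += (force(q) - gamma * p / M) * dt + sigma * rng.standard_normal()
        q += p / M * dt
        if q >= q_scission:
            return step * dt, p    # time to scission and momentum there
    return None, p                 # did not reach scission within n_max steps

times = [trajectory()[0] for _ in range(100)]
times = [t for t in times if t is not None]
print(f"{len(times)} scission events, mean descent time = {np.mean(times):.2f}")
```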
Stochastic study of solute transport in a nonstationary medium.
Hu, Bill X
2006-01-01
A Lagrangian stochastic approach is applied to develop a method of moments for solute transport in a physically and chemically nonstationary medium. Stochastic governing equations for mean solute flux and solute covariance are analytically obtained in the first-order accuracy of log conductivity and/or chemical sorption variances and solved numerically using the finite-difference method. The developed method, the numerical method of moments (NMM), is used to predict radionuclide solute transport processes in the saturated zone below the Yucca Mountain project area. The mean, variance, and upper bound of the radionuclide mass flux through a control plane 5 km downstream of the footprint of the repository are calculated. According to their chemical sorption capacities, the various radionuclear chemicals are grouped as nonreactive, weakly sorbing, and strongly sorbing chemicals. The NMM method is used to study their transport processes and influence factors. To verify the method of moments, a Monte Carlo simulation is conducted for nonreactive chemical transport. Results indicate the results from the two methods are consistent, but the NMM method is computationally more efficient than the Monte Carlo method. This study adds to the ongoing debate in the literature on the effect of heterogeneity on solute transport prediction, especially on prediction uncertainty, by showing that the standard deviation of solute flux is larger than the mean solute flux even when the variability of the hydraulic conductivity within each geological layer is mild. This study provides a method that may become an efficient calculation tool for many environmental projects.
Comparative study on neutronics characteristics of a 1500 MWe metal fuel sodium-cooled fast reactor
Ohgama, Kazuya; Aliberti, Gerardo; Stauff, Nicolas E.; ...
2017-02-28
Under the cooperative effort of the Civil Nuclear Energy R&D Working Group within the framework of the U.S.-Japan bilateral, Argonne National Laboratory (ANL) and Japan Atomic Energy Agency (JAEA) have been performing a benchmark study using the Japan Sodium-cooled Fast Reactor (JSFR) design with metal fuel. In this benchmark study, core characteristic parameters at the beginning of cycle were evaluated by the best-estimate deterministic and stochastic methodologies of ANL and JAEA. The results obtained by both institutions show a good agreement, with less than 200 pcm of discrepancy on the neutron multiplication factor, and less than 3% of discrepancy on the sodium void reactivity, Doppler reactivity, and control rod worth. The results by the stochastic and deterministic approaches were compared in each party to investigate impacts of the deterministic approximation and to understand potential variations in the results due to different calculation methodologies employed. From the detailed analysis of methodologies, it was found that the good agreement in multiplication factor from the deterministic calculations comes from the cancellation of the differences on the methodology (0.4%) and nuclear data (0.6%). The different treatment in reflector cross section generation was estimated as the major cause of the discrepancy between the multiplication factors by the JAEA and ANL deterministic methodologies. Impacts of the nuclear data libraries were also investigated using a sensitivity analysis methodology. Furthermore, the differences on the inelastic scattering cross sections of U-238, ν values and fission cross sections of Pu-239, and µ-average of Na-23 are the major contributors to the difference on the multiplication factors.
Comparative study on neutronics characteristics of a 1500 MWe metal fuel sodium-cooled fast reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohgama, Kazuya; Aliberti, Gerardo; Stauff, Nicolas E.
Under the cooperative effort of the Civil Nuclear Energy R&D Working Group within the framework of the U.S.-Japan bilateral, Argonne National Laboratory (ANL) and Japan Atomic Energy Agency (JAEA) have been performing a benchmark study using the Japan Sodium-cooled Fast Reactor (JSFR) design with metal fuel. In this benchmark study, core characteristic parameters at the beginning of cycle were evaluated by the best-estimate deterministic and stochastic methodologies of ANL and JAEA. The results obtained by both institutions show a good agreement, with less than 200 pcm of discrepancy on the neutron multiplication factor, and less than 3% of discrepancy on the sodium void reactivity, Doppler reactivity, and control rod worth. The results by the stochastic and deterministic approaches were compared in each party to investigate impacts of the deterministic approximation and to understand potential variations in the results due to different calculation methodologies employed. From the detailed analysis of methodologies, it was found that the good agreement in multiplication factor from the deterministic calculations comes from the cancellation of the differences on the methodology (0.4%) and nuclear data (0.6%). The different treatment in reflector cross section generation was estimated as the major cause of the discrepancy between the multiplication factors by the JAEA and ANL deterministic methodologies. Impacts of the nuclear data libraries were also investigated using a sensitivity analysis methodology. Furthermore, the differences on the inelastic scattering cross sections of U-238, ν values and fission cross sections of Pu-239, and µ-average of Na-23 are the major contributors to the difference on the multiplication factors.
Dynamic partitioning for hybrid simulation of the bistable HIV-1 transactivation network.
Griffith, Mark; Courtney, Tod; Peccoud, Jean; Sanders, William H
2006-11-15
The stochastic kinetics of a well-mixed chemical system, governed by the chemical Master equation, can be simulated using the exact methods of Gillespie. However, these methods do not scale well as systems become more complex and larger models are built to include reactions with widely varying rates, since the computational burden of simulation increases with the number of reaction events. Continuous models may provide an approximate solution and are computationally less costly, but they fail to capture the stochastic behavior of small populations of macromolecules. In this article we present a hybrid simulation algorithm that dynamically partitions the system into subsets of continuous and discrete reactions, approximates the continuous reactions deterministically as a system of ordinary differential equations (ODE) and uses a Monte Carlo method for generating discrete reaction events according to a time-dependent propensity. Our approach to partitioning is improved such that we dynamically partition the system of reactions, based on a threshold relative to the distribution of propensities in the discrete subset. We have implemented the hybrid algorithm in an extensible framework, utilizing two rigorous ODE solvers to approximate the continuous reactions, and use an example model to illustrate the accuracy and potential speedup of the algorithm when compared with exact stochastic simulation. Software and benchmark models used for this publication can be made available upon request from the authors.
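The core idea of the dynamic partitioning can be illustrated on a toy two-species gene-expression system: at each step the reactions are split by a propensity threshold, the fast subset is advanced deterministically, and the slow subset is fired stochastically. In this sketch the threshold is a fixed number rather than a quantile of the propensity distribution, the continuous update is a plain Euler step rather than a rigorous ODE solver, and discrete events are drawn as Poisson counts over the step (a tau-leaping-style simplification of the exact event generation in the article); rates and species are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy gene-expression model: state x = [mRNA, protein]
# Reactions: 0: 0 -> M, 1: M -> 0, 2: M -> M + P, 3: P -> 0
stoich = np.array([[ 1,  0],
                   [-1,  0],
                   [ 0,  1],
                   [ 0, -1]], dtype=float)
rates = np.array([0.5, 0.1, 50.0, 1.0])

def propensities(x):
    m, p = x
    return np.array([rates[0], rates[1] * m, rates[2] * m, rates[3] * p])

def hybrid_step(x, dt, threshold=20.0):
    """Advance the state by dt with a dynamically partitioned update."""
    a = propensities(x)
    fast = a > threshold                       # partition by propensity threshold
    # continuous (deterministic Euler) update for the fast subset
    x = x + stoich[fast].T @ (a[fast] * dt)
    # discrete stochastic firings for the slow subset (Poisson counts over dt)
    k = rng.poisson(a[~fast] * dt)
    x = x + stoich[~fast].T @ k
    return np.maximum(x, 0.0)

x, dt, t_end = np.array([0.0, 0.0]), 0.01, 50.0
for _ in range(int(t_end / dt)):
    x = hybrid_step(x, dt)
print("final state (mRNA, protein):", x)
```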
A framework to analyze the stochastic harmonics and resonance of wind energy grid interconnection
Cho, Youngho; Lee, Choongman; Hur, Kyeon; ...
2016-08-31
This study addresses a modeling and analysis methodology for investigating the stochastic harmonics and resonance concerns of wind power plants (WPPs). Wideband harmonics from modern wind turbines are observed to be stochastic, associated with real power production, and they may adversely interact with the grid impedance and cause unexpected harmonic resonance if not comprehensively addressed in the planning and commissioning of the WPPs. These issues should become more critical as wind penetration levels increase. We thus propose a planning study framework comprising the following functional steps: First, the best-fitted probability density functions (PDFs) of the harmonic components of interest in the frequency domain are determined. In operations planning, maximum likelihood estimations followed by a chi-square test are used once field measurements or manufacturers' data are available. Second, harmonic currents from the WPP are represented by randomly generating harmonic components based on their PDFs (frequency spectrum) and then synthesized for time-domain simulations via inverse Fourier transform. Finally, we conduct a comprehensive assessment by including the impacts of feeder configurations, harmonic filters, and the variability of parameters. We demonstrate the efficacy of the proposed study approach for a 100-MW offshore WPP consisting of 20 units of 5-MW full-converter turbines, a realistic benchmark system adapted from a WPP under development in Korea, and discuss lessons learned through this research.
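A minimal sketch of the first two functional steps might look as follows: fit a PDF to measured magnitudes of each harmonic of interest, draw a random spectrum from the fitted PDFs, and synthesize a time-domain waveform by inverse FFT. The harmonic orders, magnitudes, and the choice of a normal PDF are illustrative assumptions; the paper selects the best-fitted PDFs via maximum likelihood estimation and a chi-square test.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, f0, n = 10_000, 50.0, 2_000            # sample rate, fundamental, samples

# Hypothetical measured magnitudes (A) of the 5th, 7th, and 11th harmonics
measured = {5: rng.normal(1.2, 0.15, 500),
            7: rng.normal(0.8, 0.10, 500),
            11: rng.normal(0.3, 0.05, 500)}

# Step 1: fit a PDF to each harmonic (here: maximum-likelihood normal fit)
pdfs = {h: (m.mean(), m.std(ddof=1)) for h, m in measured.items()}

# Step 2: draw a random spectrum and synthesize a time-domain waveform (IFFT)
def random_waveform():
    spectrum = np.zeros(n, dtype=complex)
    freqs = np.fft.fftfreq(n, 1.0 / fs)
    for h, (mu, sigma) in pdfs.items():
        mag = max(rng.normal(mu, sigma), 0.0)
        phase = rng.uniform(0.0, 2.0 * np.pi)
        k = np.argmin(np.abs(freqs - h * f0))      # bin of the h-th harmonic
        spectrum[k] = 0.5 * n * mag * np.exp(1j * phase)
        spectrum[-k] = np.conj(spectrum[k])        # keep the signal real-valued
    return np.fft.ifft(spectrum).real

i_t = random_waveform()
print("synthesized waveform: peak = %.2f A, rms = %.2f A" % (abs(i_t).max(), i_t.std()))
```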
A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling.
Li, Jilong; Cheng, Jianlin
2016-05-10
Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of template structures, and the Cα atoms of the superposed templates form a point cloud for each position of a target protein, which are represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions for Cα atoms of the residues whose positions are uncertain from the distribution, and accepts or rejects new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets, respectively. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96-6.37% and 2.42-5.19% on the three datasets over using single templates. MTMG's performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by a new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html.
A Stochastic Point Cloud Sampling Method for Multi-Template Protein Comparative Modeling
Li, Jilong; Cheng, Jianlin
2016-01-01
Generating tertiary structural models for a target protein from the known structure of its homologous template proteins and their pairwise sequence alignment is a key step in protein comparative modeling. Here, we developed a new stochastic point cloud sampling method, called MTMG, for multi-template protein model generation. The method first superposes the backbones of template structures, and the Cα atoms of the superposed templates form a point cloud for each position of a target protein, which are represented by a three-dimensional multivariate normal distribution. MTMG stochastically resamples the positions for Cα atoms of the residues whose positions are uncertain from the distribution, and accepts or rejects new position according to a simulated annealing protocol, which effectively removes atomic clashes commonly encountered in multi-template comparative modeling. We benchmarked MTMG on 1,033 sequence alignments generated for CASP9, CASP10 and CASP11 targets, respectively. Using multiple templates with MTMG improves the GDT-TS score and TM-score of structural models by 2.96–6.37% and 2.42–5.19% on the three datasets over using single templates. MTMG’s performance was comparable to Modeller in terms of GDT-TS score, TM-score, and GDT-HA score, while the average RMSD was improved by a new sampling approach. The MTMG software is freely available at: http://sysbio.rnet.missouri.edu/multicom_toolbox/mtmg.html. PMID:27161489
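The sampling loop at the heart of MTMG can be sketched on synthetic coordinates as follows: per-residue point clouds from superposed templates are summarized by multivariate normals, positions are stochastically resampled from those distributions, and moves are accepted or rejected by a simulated-annealing criterion driven here by a simple clash penalty. Template superposition, sequence alignment, and full-atom model building are omitted, and all coordinates are synthetic, so this is only an illustration of the idea, not the MTMG implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_res, n_templates = 30, 4

# Synthetic "superposed template" CA coordinates: an ideal chain plus noise
ideal = np.cumsum(rng.normal(0, 1.0, (n_res, 3)) + [3.8, 0, 0], axis=0)
templates = ideal[None] + rng.normal(0, 1.5, (n_templates, n_res, 3))

# Point-cloud statistics per residue (mean and covariance of the normal model)
means = templates.mean(axis=0)
covs = np.array([np.cov(templates[:, i, :].T) + 0.1 * np.eye(3)
                 for i in range(n_res)])

def clash_penalty(coords, d_min=3.6):
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    iu = np.triu_indices(n_res, k=2)                   # skip bonded neighbours
    return np.sum(np.maximum(d_min - d[iu], 0.0) ** 2)

# Simulated-annealing resampling of residue positions from their distributions
coords = means.copy()
energy = clash_penalty(coords)
for temp in np.geomspace(5.0, 0.05, 2_000):
    i = rng.integers(n_res)
    trial = coords.copy()
    trial[i] = rng.multivariate_normal(means[i], covs[i])
    e_trial = clash_penalty(trial)
    if e_trial < energy or rng.random() < np.exp((energy - e_trial) / temp):
        coords, energy = trial, e_trial

print(f"final clash penalty: {energy:.3f}")
```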
Application of Effective Medium Theory to the Three-Dimensional Heterogeneity of Mantle Anisotropy
NASA Astrophysics Data System (ADS)
Song, X.; Jordan, T. H.
2015-12-01
A self-consistent theory for the effective elastic parameters of stochastic media with small-scale 3D heterogeneities has been developed using a 2nd-order Born approximation to the scattered wavefield (T. H. Jordan, GJI, in press). Here we apply the theory to assess how small-scale variations in the local anisotropy of the upper mantle affect seismic wave propagation. We formulate an anisotropic model in which the local elastic properties are specified by a constant stiffness tensor with hexagonal symmetry of arbitrary orientation. This orientation is guided by a Gaussian random vector field with transversely isotropic (TI) statistics. If the outer scale of the statistical variability is small compared to a wavelength, then the effective seismic velocities are TI and depend on two parameters, a horizontal-to-vertical orientation ratio ξ and a horizontal-to-vertical aspect ratio η. If ξ = 1, the symmetry axis is isotropically distributed; if ξ < 1, it is vertically biased (bipolar distribution); and if ξ > 1, it is horizontally biased (girdle distribution). If η = 1, the heterogeneity is geometrically isotropic; as η → ∞, the medium becomes a horizontal stochastic laminate; as η → 0, the medium becomes a vertical stochastic bundle. Using stiffness tensors constrained by laboratory measurements of mantle xenoliths, we explore the dependence of the effective P and S velocities on ξ and η. The effective velocities are strongly controlled by the orientation ratio ξ; e.g., if the hexagonal symmetry axis of the local anisotropy is the fast direction of propagation, then vPH > vPV and vSH > vSV for ξ > 1. A more surprising result is the 2nd-order insensitivity of the velocities to the heterogeneity aspect ratio η. Consequently, the geometrical anisotropy of upper-mantle heterogeneity significantly enhances seismic-wave anisotropy only through local variations in the Voigt-averaged velocities, which depend primarily on rock composition and not deformation history.
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Moroz, I.; Palmer, T.
2015-12-01
It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved sub grid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme developed is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members, but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
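A minimal sketch of the difference between the two schemes, for a single illustrative convection parameter, is given below: the fixed scheme draws one parameter value per ensemble member and holds it for the whole forecast, while the stochastically varying scheme evolves the parameter as an AR(1) process about its mean. The parameter (an entrainment-like rate), its spread, and the autocorrelation are assumptions for illustration, not values estimated by EPPES.

```python
import numpy as np

rng = np.random.default_rng(4)
n_members, n_steps = 10, 240          # ensemble size, forecast steps (e.g. hours)

mu, sigma = 1.8e-4, 0.4e-4            # illustrative entrainment-rate mean / spread
phi = 0.98                            # AR(1) autocorrelation for the varying scheme

# Scheme 1: fixed perturbed parameters - one draw per member, constant in time
fixed = rng.normal(mu, sigma, n_members)
fixed_traj = np.repeat(fixed[:, None], n_steps, axis=1)

# Scheme 2: stochastically varying parameters - AR(1) evolution about the mean
varying = np.empty((n_members, n_steps))
varying[:, 0] = rng.normal(mu, sigma, n_members)
for t in range(1, n_steps):
    innovation = rng.normal(0.0, sigma * np.sqrt(1 - phi**2), n_members)
    varying[:, t] = mu + phi * (varying[:, t - 1] - mu) + innovation

for name, traj in [("fixed", fixed_traj), ("varying", varying)]:
    print(f"{name}: ensemble spread at t=0 {traj[:, 0].std():.2e}, "
          f"at t=end {traj[:, -1].std():.2e}")
```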
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Berner, J.; Sardeshmukh, P. D.
2017-12-01
Stochastic parameterizations have been used for more than a decade in atmospheric models. They provide a way to represent model uncertainty through representing the variability of unresolved sub-grid processes, and have been shown to have a beneficial effect on the spread and mean state for medium- and extended-range forecasts. There is increasing evidence that stochastic parameterization of unresolved processes can improve the bias in mean and variability, e.g. by introducing a noise-induced drift (nonlinear rectification), and by changing the residence time and structure of flow regimes. We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies scheme (SPPT) in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. SPPT results in a significant improvement in the representation of the El Nino-Southern Oscillation in CAM4, improving the power spectrum, as well as both the inter- and intra-annual variability of tropical pacific sea surface temperatures. We use a Linear Inverse Modelling framework to gain insight into the mechanisms by which SPPT has improved ENSO-variability.
NASA Astrophysics Data System (ADS)
Lu, B.; Darmon, M.; Leymarie, N.; Chatillon, S.; Potel, C.
2012-05-01
In-service inspection of Sodium-Cooled Fast Reactors (SFR) requires the development of non-destructive techniques adapted to the harsh environment conditions and the examination complexity. From past experience, ultrasonic techniques are considered as suitable candidates. Ultrasonic telemetry is a technique used to constantly ensure the safe functioning of reactor inner components by determining their exact position: it consists in measuring the time of flight of the ultrasonic response obtained after propagation of a pulse emitted by a transducer and its interaction with the targets. While in service, the sodium flow creates turbulence that leads to temperature inhomogeneities, which translate into ultrasonic velocity inhomogeneities. These velocity variations could directly impact the accuracy of the target locating by introducing time-of-flight variations. A stochastic simulation model has been developed to calculate the propagation of ultrasonic waves in such an inhomogeneous medium. Using this approach, the travel time is randomly generated by a stochastic process whose inputs are the statistical moments of travel times, which are known analytically. The stochastic model predicts beam deviations due to velocity inhomogeneities which are similar to those provided by a deterministic method, such as the ray method.
Stochastic analysis of multiphase flow in porous media: II. Numerical simulations
NASA Astrophysics Data System (ADS)
Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.
1996-08-01
The first paper (Chang et al., 1995b) of this two-part series described the stochastic analysis using a spectral/perturbation approach to analyze steady-state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of the numerical simulations are compared with the closed-form expressions obtained using the perturbation approach. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the results of the perturbation analysis and the numerical simulations showed a good agreement between the two methods over a wide range of log k variability with three different combinations of input stochastic processes of log k and the soil parameter α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.
STOCHASTIC OPTICS: A SCATTERING MITIGATION FRAMEWORK FOR RADIO INTERFEROMETRIC IMAGING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Michael D., E-mail: mjohnson@cfa.harvard.edu
2016-12-10
Just as turbulence in the Earth’s atmosphere can severely limit the angular resolution of optical telescopes, turbulence in the ionized interstellar medium fundamentally limits the resolution of radio telescopes. We present a scattering mitigation framework for radio imaging with very long baseline interferometry (VLBI) that partially overcomes this limitation. Our framework, “stochastic optics,” derives from a simplification of strong interstellar scattering to separate small-scale (“diffractive”) effects from large-scale (“refractive”) effects, thereby separating deterministic and random contributions to the scattering. Stochastic optics extends traditional synthesis imaging by simultaneously reconstructing an unscattered image and its refractive perturbations. Its advantages over direct imaging come from utilizing the many deterministic properties of the scattering—such as the time-averaged “blurring,” polarization independence, and the deterministic evolution in frequency and time—while still accounting for the stochastic image distortions on large scales. These distortions are identified in the image reconstructions through regularization by their time-averaged power spectrum. Using synthetic data, we show that this framework effectively removes the blurring from diffractive scattering while reducing the spurious image features from refractive scattering. Stochastic optics can provide significant improvements over existing scattering mitigation strategies and is especially promising for imaging the Galactic Center supermassive black hole, Sagittarius A*, with the Global mm-VLBI Array and with the Event Horizon Telescope.
Filamentation of ultrashort light pulses in a liquid scattering medium
NASA Astrophysics Data System (ADS)
Jukna, V.; Tamošauskas, G.; Valiulis, G.; Aputis, M.; Puida, M.; Ivanauskas, F.; Dubietis, A.
2009-01-01
We have studied filamentation of 1-ps laser pulses in a scattering medium (aqueous suspension of 2-μm polystyrene microspheres) and compared filamentation dynamics to that in pure water. Our results indicate that light scattering does not alter filamentation dynamics in general, but rather results in farther position of the nonlinear focus, shorter filament length, and the development of speckle structure in the peripheral part of the beam. The experimental observations are qualitatively reproduced by the numerical model which accounts for diffraction, self-focusing, multiphoton absorption, and light scattering introduced through a stochastic diffusion and diffraction term.
NASA Astrophysics Data System (ADS)
Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher
2017-11-01
Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.
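As a small example of the non-intrusive approach, the sketch below estimates the mean of an exponential-decay solution whose rate is a Gaussian random input, using Gauss-Hermite quadrature (a deterministic integration rule) and plain Monte Carlo, and compares both with the closed-form answer. The model problem and parameter values are illustrative, not the fluid-engineering cases analysed in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

# Model: du/dt = -k u, u(0) = 1, evaluated at t = 1; k ~ N(mu, sigma^2)
mu, sigma, t = 1.0, 0.2, 1.0
solution = lambda k: np.exp(-k * t)

# Non-intrusive quadrature (probabilists' Gauss-Hermite rule for a normal input)
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
k_nodes = mu + sigma * nodes
quad_mean = np.sum(weights * solution(k_nodes)) / np.sqrt(2.0 * np.pi)

# Plain Monte Carlo reference
mc_mean = solution(rng.normal(mu, sigma, 100_000)).mean()

# Exact mean: E[exp(-k t)] = exp(-mu t + sigma^2 t^2 / 2) for Gaussian k
exact = np.exp(-mu * t + 0.5 * (sigma * t) ** 2)
print(f"quadrature {quad_mean:.6f}, Monte Carlo {mc_mean:.6f}, exact {exact:.6f}")
```

With only eight quadrature nodes the deterministic rule matches the exact mean to many digits, while the Monte Carlo estimate still carries sampling noise after 100,000 draws, which is the cost contrast the abstract alludes to.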
NASA Astrophysics Data System (ADS)
Holmes, Philip; Eckhoff, Philip; Wong-Lin, K. F.; Bogacz, Rafal; Zacksenhouse, Miriam; Cohen, Jonathan D.
2010-03-01
We describe how drift-diffusion (DD) processes - systems familiar in physics - can be used to model evidence accumulation and decision-making in two-alternative, forced choice tasks. We sketch the derivation of these stochastic differential equations from biophysically-detailed models of spiking neurons. DD processes are also continuum limits of the sequential probability ratio test and are therefore optimal in the sense that they deliver decisions of specified accuracy in the shortest possible time. This leaves open the critical balance of accuracy and speed. Using the DD model, we derive a speed-accuracy tradeoff that optimizes reward rate for a simple perceptual decision task, compare human performance with this benchmark, and discuss possible reasons for prevalent sub-optimality, focussing on the question of uncertain estimates of key parameters. We present an alternative theory of robust decisions that allows for uncertainty, and show that its predictions provide better fits to experimental data than a more prevalent account that emphasises a commitment to accuracy. The article illustrates how mathematical models can illuminate the neural basis of cognitive processes.
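A minimal sketch of the DD model for a two-alternative task is given below: Euler-Maruyama integration of the accumulator to a symmetric threshold, returning choice accuracy and mean reaction time, together with the standard analytic accuracy formula as a check. Drift, noise, threshold, and non-decision time are illustrative values, not fits to the experiments discussed in the article.

```python
import numpy as np

rng = np.random.default_rng(6)

def dd_trial(drift=0.3, noise=1.0, threshold=1.0, t_nd=0.3, dt=1e-3, t_max=5.0):
    """One drift-diffusion trial; returns (correct?, reaction time)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < t_max:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x >= threshold, t + t_nd          # upper boundary = correct choice

results = [dd_trial() for _ in range(1_000)]
correct = np.array([r[0] for r in results])
rt = np.array([r[1] for r in results])
print(f"accuracy {correct.mean():.3f}, mean RT {rt.mean():.3f} s")
# Analytic check for symmetric bounds starting at zero:
# P(correct) = 1 / (1 + exp(-2 * drift * threshold / noise^2))
print("analytic accuracy:", 1.0 / (1.0 + np.exp(-2 * 0.3 * 1.0 / 1.0**2)))
```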
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malone, Fionn D., E-mail: f.malone13@imperial.ac.uk; Lee, D. K. K.; Foulkes, W. M. C.
The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible.
NASA Astrophysics Data System (ADS)
Cheng, Longjiu; Cai, Wensheng; Shao, Xueguang
2005-03-01
An energy-based perturbation and a new idea of taboo strategy are proposed for structural optimization and applied to a benchmark problem, i.e., the optimization of Lennard-Jones (LJ) clusters. It is proved that the energy-based perturbation is much better than the traditional random perturbation both in convergence speed and in searching ability when it is combined with a simple greedy method. By tabooing the most wide-spread funnel instead of the visited solutions, the hit rate of other funnels can be significantly improved. Global minima of LJ clusters of up to 200 atoms are found with high efficiency.
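A minimal sketch of the energy-based perturbation idea is shown below for a small LJ cluster: per-atom energies are computed, the highest-energy atom is relocated to a random point on the cluster surface, and the perturbed configuration is locally minimized and kept if it improves the energy (a simple greedy acceptance). The taboo strategy and funnel bookkeeping of the paper are omitted, and scipy's L-BFGS-B is used as the local minimizer.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
n_atoms = 13                      # LJ13 is a well-known small benchmark cluster

def pair_terms(x):
    coords = x.reshape(-1, 3)
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return 4.0 * (d ** -12 - d ** -6)

def lj_energy(x):
    return 0.5 * np.sum(pair_terms(x))

def per_atom_energy(x):
    return np.sum(pair_terms(x), axis=1)

def local_min(x):
    return minimize(lj_energy, x, method="L-BFGS-B")

best = local_min(rng.normal(0.0, 1.5, n_atoms * 3))
for _ in range(60):
    coords = best.x.reshape(-1, 3)
    worst = np.argmax(per_atom_energy(best.x))       # energy-based choice of atom
    center = coords.mean(axis=0)
    r_max = np.linalg.norm(coords - center, axis=1).max()
    direction = rng.normal(size=3)
    trial = coords.copy()
    # relocate the highest-energy atom to a random point on the cluster surface
    trial[worst] = center + (r_max + 0.3) * direction / np.linalg.norm(direction)
    res = local_min(trial.ravel())
    if res.fun < best.fun:                           # greedy acceptance
        best = res
print(f"lowest energy found for LJ{n_atoms}: {best.fun:.4f} "
      "(known global minimum: -44.3268)")
```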
A PSO-Based Hybrid Metaheuristic for Permutation Flowshop Scheduling Problems
Zhang, Le; Wu, Jinnan
2014-01-01
This paper investigates the permutation flowshop scheduling problem (PFSP) with the objectives of minimizing the makespan and the total flowtime and proposes a hybrid metaheuristic based on particle swarm optimization (PSO). To enhance the exploration ability of the hybrid metaheuristic, a simulated annealing hybrid with a stochastic variable neighborhood search is incorporated. To improve the search diversification of the hybrid metaheuristic, a solution replacement strategy based on path-relinking is presented to replace the particles that have been trapped in a local optimum. Computational results on benchmark instances show that the proposed PSO-based hybrid metaheuristic is competitive with other powerful metaheuristics in the literature. PMID:24672389
A PSO-based hybrid metaheuristic for permutation flowshop scheduling problems.
Zhang, Le; Wu, Jinnan
2014-01-01
This paper investigates the permutation flowshop scheduling problem (PFSP) with the objectives of minimizing the makespan and the total flowtime and proposes a hybrid metaheuristic based on particle swarm optimization (PSO). To enhance the exploration ability of the hybrid metaheuristic, a simulated annealing hybrid with a stochastic variable neighborhood search is incorporated. To improve the search diversification of the hybrid metaheuristic, a solution replacement strategy based on path-relinking is presented to replace the particles that have been trapped in a local optimum. Computational results on benchmark instances show that the proposed PSO-based hybrid metaheuristic is competitive with other powerful metaheuristics in the literature.
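For context, the sketch below shows the makespan evaluation for a permutation flowshop together with a simple stochastic swap-neighbourhood descent, which stands in for the variable neighborhood search component; the PSO particles, simulated annealing, and path-relinking of the paper are omitted, and the processing times are random rather than benchmark instances.

```python
import numpy as np

rng = np.random.default_rng(8)
n_jobs, n_machines = 10, 5
proc = rng.integers(1, 100, size=(n_jobs, n_machines))   # processing times

def makespan(perm):
    """Completion time of the last job on the last machine."""
    c = np.zeros(n_machines)
    for j in perm:
        c[0] += proc[j, 0]
        for m in range(1, n_machines):
            c[m] = max(c[m], c[m - 1]) + proc[j, m]
    return c[-1]

# Simple stochastic swap-neighbourhood descent from a random permutation
perm = rng.permutation(n_jobs)
best = makespan(perm)
for _ in range(2_000):
    i, j = rng.choice(n_jobs, size=2, replace=False)
    perm[i], perm[j] = perm[j], perm[i]
    cand = makespan(perm)
    if cand <= best:
        best = cand
    else:
        perm[i], perm[j] = perm[j], perm[i]               # undo the swap
print("best makespan found:", best)
```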
A non-linear dimension reduction methodology for generating data-driven stochastic input models
NASA Astrophysics Data System (ADS)
Ganapathysubramanian, Baskar; Zabaras, Nicholas
2008-06-01
Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A ⊂ R^d (d ≪ n) is constructed. Given only a finite set of samples of the data, the methodology uses arguments from graph theory and differential geometry to construct the isometric transformation F: M → A. Asymptotic convergence of the representation of M by A is shown. This mapping F serves as an accurate, low-dimensional, data-driven representation of the property variations. The reduced-order model of the material topology and thermal diffusivity variations is subsequently used as an input in the solution of stochastic partial differential equations that describe the evolution of dependent variables. A sparse grid collocation strategy (Smolyak algorithm) is utilized to solve these stochastic equations efficiently. We showcase the methodology by constructing low-dimensional input stochastic models to represent thermal diffusivity in two-phase microstructures. This model is used in analyzing the effect of topological variations of two-phase microstructures on the evolution of temperature in heat conduction processes.
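The graph-based isometric mapping step can be sketched as follows on a synthetic "Swiss roll": build a k-nearest-neighbour graph of Euclidean distances, approximate geodesic distances by shortest paths through the graph, and recover low-dimensional coordinates by classical multidimensional scaling. The microstructure data, convergence arguments, and sparse-grid collocation of the paper are not reproduced; scipy is used for the shortest paths and the data set is synthetic.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(9)

# Synthetic high-dimensional samples lying on a 2D manifold (a "Swiss roll")
n = 400
theta = 3 * np.pi * (1 + 2 * rng.random(n))
X = np.column_stack([theta * np.cos(theta), 10 * rng.random(n), theta * np.sin(theta)])

# 1) k-nearest-neighbour graph of Euclidean distances (inf marks a non-edge)
k = 10
D = cdist(X, X)
graph = np.full_like(D, np.inf)
for i in range(n):
    nn = np.argsort(D[i])[1:k + 1]
    graph[i, nn] = D[i, nn]

# 2) geodesic distances as shortest paths through the graph
G = shortest_path(graph, method="D", directed=False)

# 3) classical MDS on the geodesic distances -> low-dimensional coordinates A
d = 2
H = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * H @ (G ** 2) @ H
eigval, eigvec = np.linalg.eigh(B)
idx = np.argsort(eigval)[::-1][:d]
A = eigvec[:, idx] * np.sqrt(eigval[idx])
print("embedded coordinates shape:", A.shape)
```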
[Benchmarking in patient identification: An opportunity to learn].
Salazar-de-la-Guerra, R M; Santotomás-Pajarrón, A; González-Prieto, V; Menéndez-Fraga, M D; Rocha Hurtado, C
To perform benchmarking on the safe identification of hospital patients involved in the "Club de las tres C" (Calidez, Calidad y Cuidados), in order to prepare a common procedure for this process. A descriptive study was conducted on the patient identification process in palliative care and stroke units in 5 medium-stay hospitals. The following steps were carried out: data collection from each hospital; organisation and analysis of the data; and preparation of a common procedure for this process. The data obtained for the safe identification of all stroke patients were: hospital 1 (93%), hospital 2 (93.1%), hospital 3 (100%), and hospital 5 (93.4%), and for the palliative care process: hospital 1 (93%), hospital 2 (92.3%), hospital 3 (92%), hospital 4 (98.3%), and hospital 5 (85.2%). The aim of the study has been accomplished successfully. Benchmarking activities have been developed and knowledge on the patient identification process has been shared. All hospitals had good results. Hospital 3 performed best in the stroke identification process. Benchmarking identification is difficult, but a useful common procedure that collects the best practices has been identified among the 5 hospitals. Copyright © 2017 SECA. Published by Elsevier España, S.L.U. All rights reserved.
Toward an Understanding of People Management Issues in SMEs: a South-Eastern European Perspective
ERIC Educational Resources Information Center
Szamosi, Leslie T.; Duxbury, Linda; Higgins, Chris
2004-01-01
The focus of this paper is on developing an understanding and benchmarking of human resource management (HRM) issues in small and medium enterprises (SMEs) in South-Eastern Europe. The importance of SMEs in helping transition-based economies develop is critical, but at the same time the research indicates that the movement toward westernized business…
SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, T; Finlay, J; Mesina, C
Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6 – 15 MV include the percentage depth dose (PDD) measured at SSD = 90 cm and output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. Off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data include dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with large errors (up to 13%) observed in the buildup regions of the PDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of a deterministic algorithm is better than that of a convolution algorithm in a heterogeneous medium.
Rätz, H-J; Charef, A; Abella, A J; Colloca, F; Ligas, A; Mannini, A; Lloret, J
2013-10-01
A medium-term (10 year) stochastic forecast model is developed and presented for mixed fisheries that can provide estimations of age-specific parameters for a maximum of 10 stocks and 10 fisheries. Designed to support fishery managers dealing with complex, multi-annual management plans, the model can be used to quantitatively test the consequences of various stock-specific and fishery-specific decisions, using non-equilibrium stock dynamics. Such decisions include fishing restrictions and other strategies aimed at achieving sustainable mixed fisheries consistent with the concept of maximum sustainable yield (MSY). In order to test the model, recently gathered data on seven stocks and four fisheries operating in the Ligurian and North Tyrrhenian Seas are used to generate quantitative, 10 year predictions of biomass and catch trends under four different management scenarios. The results show that using the fishing mortality at MSY as the biological reference point for the management of all stocks would be a strong incentive to reduce the technical interactions among concurrent fishing strategies. This would optimize the stock-specific exploitation and be consistent with sustainability criteria. © 2013 The Fisheries Society of the British Isles.
Spectral estimation for characterization of acoustic aberration.
Varslot, Trond; Angelsen, Bjørn; Waag, Robert C
2004-07-01
Spectral estimation based on acoustic backscatter from a motionless stochastic medium is described for characterization of aberration in ultrasonic imaging. The underlying assumptions for the estimation are: The correlation length of the medium is short compared to the length of the transmitted acoustic pulse, an isoplanatic region of sufficient size exists around the focal point, and the backscatter can be modeled as an ergodic stochastic process. The motivation for this work is ultrasonic imaging with aberration correction. Measurements were performed using a two-dimensional array system with 80 x 80 transducer elements and an element pitch of 0.6 mm. The f number for the measurements was 1.2 and the center frequency was 3.0 MHz with a 53% bandwidth. Relative phase of aberration was extracted from estimated cross spectra using a robust least-mean-square-error method based on an orthogonal expansion of the phase differences of neighboring wave forms as a function of frequency. Estimates of cross-spectrum phase from measurements of random scattering through a tissue-mimicking aberrator have confidence bands approximately +/- 5 degrees wide. Both phase and magnitude are in good agreement with a reference characterization obtained from a point scatterer.
Realistic Simulation for Body Area and Body-To-Body Networks
Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D’Errico, Raffaele
2016-01-01
In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices. PMID:27104537
Realistic Simulation for Body Area and Body-To-Body Networks.
Alam, Muhammad Mahtab; Ben Hamida, Elyes; Ben Arbia, Dhafer; Maman, Mickael; Mani, Francesco; Denis, Benoit; D'Errico, Raffaele
2016-04-20
In this paper, we present an accurate and realistic simulation for body area networks (BAN) and body-to-body networks (BBN) using deterministic and semi-deterministic approaches. First, in the semi-deterministic approach, a real-time measurement campaign is performed, which is further characterized through statistical analysis. It is able to generate link-correlated and time-varying realistic traces (i.e., with consistent mobility patterns) for on-body and body-to-body shadowing and fading, including body orientations and rotations, by means of stochastic channel models. The full deterministic approach is particularly targeted to enhance IEEE 802.15.6 proposed channel models by introducing space and time variations (i.e., dynamic distances) through biomechanical modeling. In addition, it helps to accurately model the radio link by identifying the link types and corresponding path loss factors for line of sight (LOS) and non-line of sight (NLOS). This approach is particularly important for links that vary over time due to mobility. It is also important to add that the communication and protocol stack, including the physical (PHY), medium access control (MAC) and networking models, is developed for BAN and BBN, and the IEEE 802.15.6 compliance standard is provided as a benchmark for future research works of the community. Finally, the two approaches are compared in terms of the successful packet delivery ratio, packet delay and energy efficiency. The results show that the semi-deterministic approach is the best option; however, for the diversity of the mobility patterns and scenarios applicable, biomechanical modeling and the deterministic approach are better choices.
NASA Astrophysics Data System (ADS)
Zhou, Rui-Rui; Li, Ben-Wen
2017-03-01
In this study, the Chebyshev collocation spectral method (CCSM) is developed to solve the radiative integro-differential transfer equation (RIDTE) for a one-dimensional absorbing, emitting and linearly anisotropic-scattering cylindrical medium. The general form of quadrature formulas for Chebyshev collocation points is deduced. These formulas are proved to have the same accuracy as the Gauss-Legendre quadrature formula (GLQF) for the F-function (geometric function) in the RIDTE. The explicit expressions of the Lagrange basis polynomials and the differentiation matrices for Chebyshev collocation points are also given. These expressions are necessary for solving an integro-differential equation by the CCSM. Since the integrand in the RIDTE is continuous but non-smooth, it is treated by the segments integration method (SIM). The derivative terms in the RIDTE are carried out to improve the accuracy near the origin. In this way, a fourth-order accuracy is achieved by the CCSM for the RIDTE, whereas only a second-order accuracy is achieved by the finite difference method (FDM). Several benchmark problems (BPs) with various combinations of optical thickness, medium temperature distribution, degree of anisotropy, and scattering albedo are solved. The results show that the present CCSM is efficient in obtaining highly accurate results, especially for the optically thin medium. The solutions rounded to seven significant digits are given in tabular form, and show excellent agreement with the published data. Finally, the solutions of the RIDTE are used as benchmarks for the solution of the radiative integral transfer equations (RITEs) presented by Sutton and Chen (JQSRT 84 (2004) 65-103). A non-uniform grid refined near the wall is advised to improve the accuracy of RITE solutions.
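For readers unfamiliar with the ingredients, the sketch below constructs Chebyshev-Gauss-Lobatto collocation points and the standard differentiation matrix (Trefethen's construction) and verifies spectral accuracy on a smooth function. The radiative-transfer discretization, the quadrature formulas for the F-function, and the segment integration of the paper are not reproduced here.

```python
import numpy as np

def cheb(N):
    """Chebyshev-Gauss-Lobatto points and differentiation matrix on [-1, 1]
    (standard construction, cf. Trefethen, Spectral Methods in MATLAB)."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))        # diagonal from negative row sums
    return D, x

# Spectral accuracy check: differentiate exp(x) * sin(5x) at the collocation points
D, x = cheb(24)
u = np.exp(x) * np.sin(5 * x)
du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print("max differentiation error:", np.max(np.abs(D @ u - du_exact)))
```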
NASA Astrophysics Data System (ADS)
Zenkour, A. M.
2018-05-01
The thermal buckling analysis of carbon nanotubes embedded in a visco-Pasternak's medium is investigated. The Eringen's nonlocal elasticity theory, in conjunction with the first-order Donnell's shell theory, is used for this purpose. The surrounding medium is considered as a three-parameter viscoelastic foundation model, Winkler-Pasternak's model as well as a viscous damping coefficient. The governing equilibrium equations are obtained and solved for carbon nanotubes subjected to different thermal and mechanical loads. The effects of nonlocal parameter, radius and length of nanotube, and the three foundation parameters on the thermal buckling of the nanotube are studied. Sample critical buckling loads are reported and graphically illustrated to check the validity of the present results and to present benchmarks for future comparisons.
NASA Astrophysics Data System (ADS)
Bastani, Ali Foroush; Dastgerdi, Maryam Vahid; Mighani, Abolfazl
2018-06-01
The main aim of this paper is the analytical and numerical study of a time-dependent second-order nonlinear partial differential equation (PDE) arising from the endogenous stochastic volatility model, introduced in [Bensoussan, A., Crouhy, M. and Galai, D., Stochastic equity volatility related to the leverage effect (I): equity volatility behavior. Applied Mathematical Finance, 1, 63-85, 1994]. As the first step, we derive a consistent set of initial and boundary conditions to complement the PDE, when the firm is financed by equity and debt. In the sequel, we propose a Newton-based iteration scheme for nonlinear parabolic PDEs which is an extension of a method for solving elliptic partial differential equations introduced in [Fasshauer, G. E., Newton iteration with multiquadrics for the solution of nonlinear PDEs. Computers and Mathematics with Applications, 43, 423-438, 2002]. The scheme is based on multilevel collocation using radial basis functions (RBFs) to solve the resulting locally linearized elliptic PDEs obtained at each level of the Newton iteration. We show the effectiveness of the resulting framework by solving a prototypical example from the field and compare the results with those obtained from three different techniques: (1) a finite difference discretization; (2) a naive RBF collocation and (3) a benchmark approximation, introduced for the first time in this paper. The numerical results confirm the robustness, higher convergence rate and good stability properties of the proposed scheme compared to other alternatives. We also comment on some possible research directions in this field.
Diffuse reflection from a stochastically bounded, semi-infinite medium
NASA Technical Reports Server (NTRS)
Lumme, K.; Peltoniemi, J. I.; Irvine, W. M.
1990-01-01
In order to determine the diffuse reflection from a medium bounded by a rough surface, the problem of radiative transfer in a boundary layer characterized by a statistical distribution of heights is considered. For the case that the surface is defined by a multivariate normal probability density, the propagation probability for rays traversing the boundary layer is derived and, from that probability, a corresponding radiative transfer equation. A solution of the Eddington (two stream) type is found explicitly, and examples are given. The results should be applicable to reflection from the regoliths of solar system bodies, as well as from a rough ocean surface.
Limits on estimating the width of thin tubular structures in 3D images.
Wörz, Stefan; Rohr, Karl
2006-01-01
This work studies limits on estimating the width of thin tubular structures in 3D images. Based on nonlinear estimation theory we analyze the minimal stochastic error of estimating the width. Given a 3D analytic model of the image intensities of tubular structures, we derive a closed-form expression for the Cramér-Rao bound of the width estimate under image noise. We use the derived lower bound as a benchmark and compare it with three previously proposed accuracy limits for vessel width estimation. Moreover, by experimental investigations we demonstrate that the derived lower bound can be achieved by fitting a 3D parametric intensity model directly to the image data.
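For orientation, the generic form of the bound used as the benchmark can be stated as follows (the standard Cramér-Rao inequality for a scalar parameter under additive Gaussian noise of variance \sigma_n^2; the paper's closed-form expression for the tubular intensity model is not reproduced here):

    \operatorname{Var}(\hat{w}) \;\ge\; \mathcal{I}(w)^{-1},
    \qquad
    \mathcal{I}(w) \;=\; \frac{1}{\sigma_n^2}\sum_{\mathbf{x}} \left(\frac{\partial g(\mathbf{x}; w)}{\partial w}\right)^{2},

where g(x; w) is the 3D parametric intensity model of the tubular structure, w its width, and the sum runs over the voxels contributing to the fit.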
Optimal portfolio selection in a Lévy market with uncontrolled cash flow and only risky assets
NASA Astrophysics Data System (ADS)
Zeng, Yan; Li, Zhongfei; Wu, Huiling
2013-03-01
This article considers an investor who has an exogenous cash flow evolving according to a Lévy process and invests in a financial market consisting of only risky assets, whose prices are governed by exponential Lévy processes. Two continuous-time portfolio selection problems are studied for the investor. One is a benchmark problem, and the other is a mean-variance problem. The first problem is solved by adopting the stochastic dynamic programming approach, and the obtained results are extended to the second problem by employing the duality theory. Closed-form solutions of these two problems are derived. Some existing results are found to be special cases of our results.
SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output
Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.
2011-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297
SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output†
Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.
2013-01-01
We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136
Pedron, Sara; Winter, Vera; Oppel, Eva-Maria; Bialas, Enno
2017-08-23
Operating room (OR) efficiency continues to be a high priority for hospitals. In this context the concept of benchmarking has gained increasing importance as a means to improve OR performance. The aim of this study was to investigate whether and how participation in a benchmarking and reporting program for surgical process data was associated with a change in OR efficiency, measured through raw utilization, turnover times, and first-case tardiness. The main analysis is based on panel data from 202 surgical departments in German hospitals, which were derived from the largest database for surgical process data in Germany. Panel regression modelling was applied. Results revealed no clear and univocal trend of participation in a benchmarking and reporting program for surgical process data. The largest trend was observed for first-case tardiness. In contrast to expectations, turnover times showed a generally increasing trend during participation. For raw utilization no clear and statistically significant trend could be evidenced. Subgroup analyses revealed differences in effects across different hospital types and department specialties. Participation in a benchmarking and reporting program and thus the availability of reliable, timely and detailed analysis tools to support the OR management seemed to be correlated especially with an increase in the timeliness of staff members regarding first-case starts. The increasing trend in turnover time revealed the absence of effective strategies to improve this aspect of OR efficiency in German hospitals and could have meaningful consequences for the medium- and long-run capacity planning in the OR.
Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Preston, Leiph
Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and comparison of Paracousti-UQ results with MC simulations using simple models.
A non-linear dimension reduction methodology for generating data-driven stochastic input models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapathysubramanian, Baskar; Zabaras, Nicholas
Stochastic analysis of random heterogeneous media (polycrystalline materials, porous media, functionally graded materials) provides information of significance only if realistic input models of the topology and property variations are used. This paper proposes a framework to construct such input stochastic models for the topology and thermal diffusivity variations in heterogeneous media using a data-driven strategy. Given a set of microstructure realizations (input samples) generated from given statistical information about the medium topology, the framework constructs a reduced-order stochastic representation of the thermal diffusivity. This problem of constructing a low-dimensional stochastic representation of property variations is analogous to the problem of manifold learning and parametric fitting of hyper-surfaces encountered in image processing and psychology. Denote by M the set of microstructures that satisfy the given experimental statistics. A non-linear dimension reduction strategy is utilized to map M to a low-dimensional region, A. We first show that M is a compact manifold embedded in a high-dimensional input space R^n. An isometric mapping F from M to a low-dimensional, compact, connected set A is contained in R^d (d<
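A minimal sketch of this type of non-linear dimension reduction, using Isomap from scikit-learn as a stand-in for the isometric mapping F (the microstructure data, array sizes, and neighborhood parameter below are illustrative assumptions, not the paper's setup):

    import numpy as np
    from sklearn.manifold import Isomap

    # each row is one flattened microstructure realization (e.g., a thermal-diffusivity field)
    rng = np.random.default_rng(0)
    samples = rng.random((200, 64 * 64))      # 200 toy realizations on a 64 x 64 grid

    embedding = Isomap(n_neighbors=10, n_components=5)   # map M in R^n to A in R^d, d = 5
    coords = embedding.fit_transform(samples)             # low-dimensional coordinates of each sample

    # coords can now parameterize a reduced-order stochastic input model,
    # e.g., by sampling new points in the low-dimensional region A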
The stochastic dynamics of intermittent porescale particle motion
NASA Astrophysics Data System (ADS)
Dentz, Marco; Morales, Veronica; Puyguiraud, Alexandre; Gouze, Philippe; Willmann, Matthias; Holzner, Markus
2017-04-01
Numerical and experimental data for porescale particle dynamics show intermittent patterns in Lagrangian velocities and accelerations, which manifest in long time intervals of low and short durations of high velocities [1, 2]. This phenomenon is due to the spatial persistence of particle velocities on characteristic heterogeneity length scales. In order to systematically quantify these behaviors and extract the stochastic dynamics of particle motion, we focus on the analysis of Lagrangian velocities sampled equidistantly along trajectories [3]. This method removes the intermittency observed under isochrone sampling. The space-Lagrangian velocity series can be quantified by a Markov process that is continuous in distance along streamline. It is fully parameterized in terms of the flux-weighted Eulerian velocity PDF and the characteristic pore-length. The resulting stochastic particle motion describes a continuous time random walk (CTRW). This approach allows for the process based interpretation of experimental and numerical porescale velocity, acceleration and displacement data. It provides a framework for the characterization and upscaling of particle transport and dispersion from the pore to the Darcy-scale based on the medium geometry and Eulerian flow attributes. [1] P. De Anna, T. Le Borgne, M. Dentz, A.M. Tartakovsky, D. Bolster, and P. Davy, "Flow intermittency, dispersion, and correlated continuous time random walks in porous media," Phys. Rev. Lett. 110, 184502 (2013). [2] M. Holzner, V. L. Morales, M. Willmann, and M. Dentz, "Intermittent Lagrangian velocities and accelerations in three- dimensional porous medium flow," Phys. Rev. E 92, 013015 (2015). [3] M. Dentz, P. K. Kang, A. Comolli, T. Le Borgne, and D. R. Lester, "Continuous time random walks for the evolution of Lagrangian velocities," Phys. Rev. Fluids (2016).
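A minimal sketch of the space-Lagrangian picture described above, assuming a fixed characteristic pore length and a stand-in lognormal velocity distribution with exponential persistence in distance along the streamline (all parameter choices are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)

    def ctrw_arrival_times(n_steps=10_000, pore_length=1.0, corr_steps=5):
        """Particle advances by equidistant steps of size pore_length; its velocity is a
        Markov chain in distance, renewed from a stand-in flux-weighted PDF roughly
        every corr_steps steps and otherwise persisted."""
        t = 0.0
        v = rng.lognormal(mean=0.0, sigma=1.0)          # stand-in Eulerian velocity PDF
        times = np.empty(n_steps)
        for i in range(n_steps):
            if rng.random() < 1.0 / corr_steps:         # velocity transition
                v = rng.lognormal(mean=0.0, sigma=1.0)
            t += pore_length / v                        # time to cross one pore
            times[i] = t
        return times

    # usage: the distance travelled is i * pore_length at times[i]; from an ensemble of such
    # walks one can estimate breakthrough curves and (possibly anomalous) dispersion
    times = ctrw_arrival_times()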
A Hybrid Monte Carlo importance sampling of rare events in Turbulence and in Turbulent Models
NASA Astrophysics Data System (ADS)
Margazoglou, Georgios; Biferale, Luca; Grauer, Rainer; Jansen, Karl; Mesterhazy, David; Rosenow, Tillmann; Tripiccione, Raffaele
2017-11-01
Extreme and rare events are a challenging topic in the field of turbulence. Trying to investigate those instances through the use of traditional numerical tools turns out to be a notoriously difficult task, as they fail to systematically sample the fluctuations around them. On the other hand, we propose that an importance sampling Monte Carlo method can selectively highlight extreme events in remote areas of the phase space and induce their occurrence. We present a new computational approach, based on the path integral formulation of stochastic dynamics, and employ an accelerated Hybrid Monte Carlo (HMC) algorithm for this purpose. Through the paradigm of the stochastic one-dimensional Burgers' equation, subjected to a random noise that is white-in-time and power-law correlated in Fourier space, we will prove our concept and benchmark our results with standard CFD methods. Furthermore, we will present our first results of constrained sampling around saddle-point instanton configurations (optimal fluctuations). The research leading to these results has received funding from the EU Horizon 2020 research and innovation programme under Grant Agreement No. 642069, and from the EU Seventh Framework Programme (FP7/2007-2013) under ERC Grant Agreement No. 339032.
Taylor, P. R.; Baker, R. E.; Simpson, M. J.; Yates, C. A.
2016-01-01
Numerous processes across both the physical and biological sciences are driven by diffusion. Partial differential equations are a popular tool for modelling such phenomena deterministically, but it is often necessary to use stochastic models to accurately capture the behaviour of a system, especially when the number of diffusing particles is low. The stochastic models we consider in this paper are ‘compartment-based’: the domain is discretized into compartments, and particles can jump between these compartments. Volume-excluding effects (crowding) can be incorporated by blocking movement with some probability. Recent work has established the connection between fine- and coarse-grained models incorporating volume exclusion, but only for uniform lattices. In this paper, we consider non-uniform, hybrid lattices that incorporate both fine- and coarse-grained regions, and present two different approaches to describe the interface of the regions. We test both techniques in a range of scenarios to establish their accuracy, benchmarking against fine-grained models, and show that the hybrid models developed in this paper can be significantly faster to simulate than the fine-grained models in certain situations and are at least as fast otherwise. PMID:27383421
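A minimal sketch of a compartment-based random walk with volume exclusion on a uniform lattice (the hybrid fine/coarse interface treatments studied in the paper are not reproduced; the jump probability, compartment capacity, and blocking rule below are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(2)

    def step(counts, capacity, p_jump):
        """One time step: each particle attempts a left/right jump with probability p_jump,
        and the attempt is blocked with probability equal to the target compartment's
        occupied fraction (crowding / volume exclusion)."""
        n = len(counts)
        new = counts.copy()
        for i in range(n):
            for _ in range(counts[i]):
                if rng.random() < p_jump:
                    j = i + rng.choice([-1, 1])
                    if 0 <= j < n and rng.random() >= new[j] / capacity:  # not blocked
                        new[i] -= 1
                        new[j] += 1
        return new

    # usage: release particles in the leftmost compartments and let them spread
    counts = np.zeros(50, dtype=int)
    counts[:5] = 10                       # initial condition (compartments at full capacity)
    for _ in range(2000):
        counts = step(counts, capacity=10, p_jump=0.1)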
Stochastic Rotation Dynamics simulations of wetting multi-phase flows
NASA Astrophysics Data System (ADS)
Hiller, Thomas; Sanchez de La Lama, Marta; Brinkmann, Martin
2016-06-01
Multi-color Stochastic Rotation Dynamics (SRDmc) has been introduced by Inoue et al. [1,2] as a particle based simulation method to study the flow of emulsion droplets in non-wetting microchannels. In this work, we extend the multi-color method to also account for different wetting conditions. This is achieved by assigning the color information not only to fluid particles but also to virtual wall particles that are required to enforce proper no-slip boundary conditions. To extend the scope of the original SRDmc algorithm to e.g. immiscible two-phase flow with viscosity contrast we implement an angular momentum conserving scheme (SRD+mc). We perform extensive benchmark simulations to show that a mono-phase SRDmc fluid exhibits bulk properties identical to a standard SRD fluid and that SRDmc fluids are applicable to a wide range of immiscible two-phase flows. To quantify the adhesion of a SRD+mc fluid in contact to the walls we measure the apparent contact angle from sessile droplets in mechanical equilibrium. For a further verification of our wettability implementation we compare the dewetting of a liquid film from a wetting stripe to experimental and numerical studies of interfacial morphologies on chemically structured surfaces.
The stochastic Beer-Lambert-Bouguer law for discontinuous vegetation canopies
NASA Astrophysics Data System (ADS)
Shabanov, N.; Gastellu-Etchegorry, J.-P.
2018-07-01
The 3D distribution of canopy foliage affects the radiation regime and retrievals of canopy biophysical parameters. The gap fraction is one primary indicator of canopy structure. Historically the Beer-Lambert-Bouguer law and the linear mixture model have served as a basis for multiple technologies for retrievals of the gap (or vegetation) fraction and Leaf Area Index (LAI). The Beer-Lambert-Bouguer law is a form of the Radiative Transfer (RT) equation for homogeneous canopies, which was later adjusted for a correlation between phytoelements using the concept of the clumping index. The Stochastic Radiative Transfer (SRT) approach has been developed specifically for heterogeneous canopies; however, the approach lacks a proper model of the vegetation fraction. This study is focused on the implementation of the stochastic version of the Beer-Lambert-Bouguer law for heterogeneous canopies, featuring the following principles: 1) two mechanisms perform photon transport: transmission through the turbid medium of foliage crowns and direct streaming through canopy gaps, 2) the radiation field is influenced by canopy structure (quantified by the statistical moments of a canopy structure) and foliage density (quantified by the gap fraction as a function of LAI), 3) the notions of canopy transmittance and gap fraction are distinct. The derived stochastic Beer-Lambert-Bouguer law is consistent with the Geometrical Optical and Radiative Transfer (GORT) derivations. Analytical and numerical analysis of the stochastic Beer-Lambert-Bouguer law presented in this study provides the basis to reformulate widely used technologies for retrievals of the gap fraction and LAI from ground and satellite radiation measurements.
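For reference, the classical form that the stochastic law generalizes can be written with the clumping index \Omega as (standard notation, not the paper's derivation):

    P_{\mathrm{gap}}(\theta) \;=\; \exp\!\left(-\,\frac{G(\theta)\,\Omega\,\mathrm{LAI}}{\cos\theta}\right),

where G(\theta) is the leaf projection function; the stochastic version replaces this single exponential with expressions involving the statistical moments of the canopy structure.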
A Diagnostic Assessment of Evolutionary Multiobjective Optimization for Water Resources Systems
NASA Astrophysics Data System (ADS)
Reed, P.; Hadka, D.; Herman, J.; Kasprzyk, J.; Kollat, J.
2012-04-01
This study contributes a rigorous diagnostic assessment of state-of-the-art multiobjective evolutionary algorithms (MOEAs) and highlights key advances that the water resources field can exploit to better discover the critical tradeoffs constraining our systems. This study provides the most comprehensive diagnostic assessment of MOEAs for water resources to date, exploiting more than 100,000 MOEA runs and trillions of design evaluations. The diagnostic assessment measures the effectiveness, efficiency, reliability, and controllability of ten benchmark MOEAs for a representative suite of water resources applications addressing rainfall-runoff calibration, long-term groundwater monitoring (LTM), and risk-based water supply portfolio planning. The suite of problems encompasses a range of challenging problem properties including (1) many-objective formulations with 4 or more objectives, (2) multi-modality (or false optima), (3) nonlinearity, (4) discreteness, (5) severe constraints, (6) stochastic objectives, and (7) non-separability (also called epistasis). The applications are representative of the dominant problem classes that have shaped the history of MOEAs in water resources and that will be dominant foci in the future. Recommendations are provided for which modern MOEAs should serve as tools and benchmarks in the future water resources literature.
NASA Astrophysics Data System (ADS)
Kaskhedikar, Apoorva Prakash
According to the U.S. Energy Information Administration, commercial buildings represent about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvement. Energy benchmarking offers an initial building energy performance assessment without rigorous evaluation. Energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database are investigated in this thesis. This study proposes a new benchmarking methodology based on decision trees, where a relationship between the energy use intensities (EUI) and building parameters (continuous and categorical) is developed for different building types. This methodology was applied to the medium office and school building types contained in the CBECS database. The Random Forest technique was used to find the most influential parameters that impact building energy use intensities. Subsequently, significant correlations between EUIs and CBECS variables were identified. Other than floor area, some of the important variables were number of workers, location, number of PCs, and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with another benchmarking model that is widely used by building owners and designers, namely ENERGY STAR's Portfolio Manager. This tool relies on standard linear regression methods, which are only able to handle continuous variables. The proposed model uses data mining techniques and was found to perform slightly better than Portfolio Manager. The broader impact of the proposed benchmarking methodology is that it allows important categorical variables to be identified and then incorporated in a local, rather than global, model framework for EUI pertinent to the building type. The ability to identify and rank the important variables is of great importance in the practical implementation of benchmarking tools that rely on query-based building and HVAC variable filters specified by the user.
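A minimal sketch of the variable-ranking step described above, assuming a CBECS-like table of building records (the file name and column names below are placeholders, not actual CBECS variable identifiers):

    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    # hypothetical CBECS-like extract; 'eui' is the energy use intensity target
    df = pd.read_csv("cbecs_offices.csv")                      # placeholder file name
    X = pd.get_dummies(df[["sqft", "workers", "num_pcs", "climate_zone", "cooling_equip"]])
    y = df["eui"]

    model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    importance = sorted(zip(X.columns, model.feature_importances_), key=lambda t: -t[1])
    # the top-ranked variables can then define the splits of a decision-tree benchmark model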
Data-Driven Benchmarking of Building Energy Efficiency Utilizing Statistical Frontier Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kavousian, A; Rajagopal, R
2014-01-01
Frontier methods quantify the energy efficiency of buildings by forming an efficient frontier (best-practice technology) and by comparing all buildings against that frontier. Because energy consumption fluctuates over time, the efficiency scores are stochastic random variables. Existing applications of frontier methods in energy efficiency either treat efficiency scores as deterministic values or estimate their uncertainty by resampling from one set of measurements. Availability of smart meter data (repeated measurements of energy consumption of buildings) enables using actual data to estimate the uncertainty in efficiency scores. Additionally, existing applications assume a linear form for an efficient frontier; i.e., they assume that the best-practice technology scales up and down proportionally with building characteristics. However, previous research shows that buildings are nonlinear systems. This paper proposes a statistical method called stochastic energy efficiency frontier (SEEF) to estimate a bias-corrected efficiency score and its confidence intervals from measured data. The paper proposes an algorithm to specify the functional form of the frontier, identify the probability distribution of the efficiency score of each building using measured data, and rank buildings based on their energy efficiency. To illustrate the power of SEEF, this paper presents the results from applying SEEF on a smart meter data set of 307 residential buildings in the United States. SEEF efficiency scores are used to rank individual buildings based on energy efficiency, to compare subpopulations of buildings, and to identify irregular behavior of buildings across different time-of-use periods. SEEF is an improvement to the energy-intensity method (comparing kWh/sq.ft.): whereas SEEF identifies efficient buildings across the entire spectrum of building sizes, the energy-intensity method showed bias toward smaller buildings. The results of this research are expected to assist researchers and practitioners compare and rank (i.e., benchmark) buildings more robustly and over a wider range of building types and sizes. Eventually, doing so is expected to result in improved resource allocation in energy-efficiency programs.
Double diffusivity model under stochastic forcing
NASA Astrophysics Data System (ADS)
Chattopadhyay, Amit K.; Aifantis, Elias C.
2017-05-01
The "double diffusivity" model was proposed in the late 1970s, and reworked in the early 1980s, as a continuum counterpart to existing discrete models of diffusion corresponding to high diffusivity paths, such as grain boundaries and dislocation lines. It was later rejuvenated in the 1990s to interpret experimental results on diffusion in polycrystalline and nanocrystalline specimens where grain boundaries and triple grain boundary junctions act as high diffusivity paths. Technically, the model pans out as a system of coupled Fick-type diffusion equations to represent "regular" and "high" diffusivity paths with "source terms" accounting for the mass exchange between the two paths. The model remit was extended by analogy to describe flow in porous media with double porosity, as well as to model heat conduction in media with two nonequilibrium local temperature baths, e.g., ion and electron baths. Uncoupling of the two partial differential equations leads to a higher-ordered diffusion equation, solutions of which could be obtained in terms of classical diffusion equation solutions. Similar equations could also be derived within an "internal length" gradient (ILG) mechanics formulation applied to diffusion problems, i.e., by introducing nonlocal effects, together with inertia and viscosity, in a mechanics based formulation of diffusion theory. While being remarkably successful in studies related to various aspects of transport in inhomogeneous media with deterministic microstructures and nanostructures, its implications in the presence of stochasticity have not yet been considered. This issue becomes particularly important in the case of diffusion in nanopolycrystals whose deterministic ILG-based theoretical calculations predict a relaxation time that is only about one-tenth of the actual experimentally verified time scale. This article provides the "missing link" in this estimation by adding a vital element in the ILG structure, that of stochasticity, that takes into account all boundary layer fluctuations. Our stochastic-ILG diffusion calculation confirms rapprochement between theory and experiment, thereby benchmarking a new generation of gradient-based continuum models that conform closer to real-life fluctuating environments.
NASA Astrophysics Data System (ADS)
Zatarain-Salazar, J.; Reed, P. M.; Herman, J. D.; Giuliani, M.; Castelletti, A.
2014-12-01
Globally reservoir operations provide fundamental services to water supply, energy generation, recreation, and ecosystems. The pressures of expanding populations, climate change, and increased energy demands are motivating a significant investment in re-operationalizing existing reservoirs or defining operations for new reservoirs. Recent work has highlighted the potential benefits of exploiting recent advances in many-objective optimization and direct policy search (DPS) to aid in addressing these systems' multi-sector demand tradeoffs. This study contributes to a comprehensive diagnostic assessment of multi-objective evolutionary optimization algorithms (MOEAs) efficiency, effectiveness, reliability, and controllability when supporting DPS for the Conowingo dam in the Lower Susquehanna River Basin. The Lower Susquehanna River is an interstate water body that has been subject to intensive water management efforts due to the system's competing demands from urban water supply, atomic power plant cooling, hydropower production, and federally regulated environmental flows. Seven benchmark and state-of-the-art MOEAs are tested on deterministic and stochastic instances of the Susquehanna test case. In the deterministic formulation, the operating objectives are evaluated over the historical realization of the hydroclimatic variables (i.e., inflows and evaporation rates). In the stochastic formulation, the same objectives are instead evaluated over an ensemble of stochastic inflows and evaporation rates realizations. The algorithms are evaluated in their ability to support DPS in discovering reservoir operations that compose the tradeoffs for six multi-sector performance objectives with thirty-two decision variables. Our diagnostic results highlight that many-objective DPS is very challenging for modern MOEAs and that epsilon dominance is critical for attaining high levels of performance. Epsilon dominance algorithms epsilon-MOEA, epsilon-NSGAII and the auto adaptive Borg MOEA, are statistically superior for the six-objective Susquehanna instance of this important class of problems. Additionally, shifting from deterministic history-based DPS to stochastic DPS significantly increases the difficulty of the problem.
Roberts, James J; Fausch, Kurt D; Peterson, Douglas P; Hooten, Mevin B
2013-05-01
Impending changes in climate will interact with other stressors to threaten aquatic ecosystems and their biota. Native Colorado River cutthroat trout (CRCT; Oncorhynchus clarkii pleuriticus) are now relegated to 309 isolated high-elevation (>1700 m) headwater stream fragments in the Upper Colorado River Basin, owing to past nonnative trout invasions and habitat loss. Predicted changes in climate (i.e., temperature and precipitation) and resulting changes in stochastic physical disturbances (i.e., wildfire, debris flow, and channel drying and freezing) could further threaten the remaining CRCT populations. We developed an empirical model to predict stream temperatures at the fragment scale from downscaled climate projections along with geomorphic and landscape variables. We coupled these spatially explicit predictions of stream temperature with a Bayesian Network (BN) model that integrates stochastic risks from fragmentation to project persistence of CRCT populations across the upper Colorado River basin to 2040 and 2080. Overall, none of the populations are at risk from acute mortality resulting from high temperatures during the warmest summer period. In contrast, only 37% of populations have a ≥90% chance of persistence for 70 years (similar to the typical benchmark for conservation), primarily owing to fragmentation. Populations in short stream fragments <7 km long, and those at the lowest elevations, are at the highest risk of extirpation. Therefore, interactions of stochastic disturbances with fragmentation are projected to be greater threats than warming for CRCT populations. The reason for this paradox is that past nonnative trout invasions and habitat loss have restricted most CRCT populations to high-elevation stream fragments that are buffered from the potential consequences of warming, but at risk of extirpation from stochastic events. The greatest conservation need is for management to increase fragment lengths to forestall these risks. © 2013 Blackwell Publishing Ltd.
Roberts, James J.; Fausch, Kurt D.; Peterson, Douglas P.; Hooten, Mevin B.
2013-01-01
Impending changes in climate will interact with other stressors to threaten aquatic ecosystems and their biota. Native Colorado River cutthroat trout (CRCT; Oncorhynchus clarkii pleuriticus) are now relegated to 309 isolated high-elevation (>1700 m) headwater stream fragments in the Upper Colorado River Basin, owing to past nonnative trout invasions and habitat loss. Predicted changes in climate (i.e., temperature and precipitation) and resulting changes in stochastic physical disturbances (i.e., wildfire, debris flow, and channel drying and freezing) could further threaten the remaining CRCT populations. We developed an empirical model to predict stream temperatures at the fragment scale from downscaled climate projections along with geomorphic and landscape variables. We coupled these spatially explicit predictions of stream temperature with a Bayesian Network (BN) model that integrates stochastic risks from fragmentation to project persistence of CRCT populations across the upper Colorado River basin to 2040 and 2080. Overall, none of the populations are at risk from acute mortality resulting from high temperatures during the warmest summer period. In contrast, only 37% of populations have a greater than or equal to 90% chance of persistence for 70 years (similar to the typical benchmark for conservation), primarily owing to fragmentation. Populations in short stream fragments <7 km long, and those at the lowest elevations, are at the highest risk of extirpation. Therefore, interactions of stochastic disturbances with fragmentation are projected to be greater threats than warming for CRCT populations. The reason for this paradox is that past nonnative trout invasions and habitat loss have restricted most CRCT populations to high-elevation stream fragments that are buffered from the potential consequences of warming, but at risk of extirpation from stochastic events. The greatest conservation need is for management to increase fragment lengths to forestall these risks.
Discussion of a ``coherent artifact'' in four-wave mixing experiments
NASA Astrophysics Data System (ADS)
Ferwerda, Hedzer A.; Terpstra, Jacob; Wiersma, Douwe A.
1989-09-01
In this paper, we discuss the nonlinear optical effects that arise when stochastic light waves, with different correlation times, interfere in an absorbing medium. It is shown that four-wave mixing signals are generated in several directions that spectrally track the incoming light fields. This effect is particularly relevant to transient hole-burning experiments, where one of these signals could easily be misinterpreted as a genuine hole-burning feature.
Stochastic-Strength-Based Damage Simulation of Ceramic Matrix Composite Laminates
NASA Technical Reports Server (NTRS)
Nemeth, Noel N.; Mital, Subodh K.; Murthy, Pappu L. N.; Bednarcyk, Brett A.; Pineda, Evan J.; Bhatt, Ramakrishna T.; Arnold, Steven M.
2016-01-01
The Finite Element Analysis-Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program was used to characterize and predict the progressive damage response of silicon-carbide-fiber-reinforced reaction-bonded silicon nitride matrix (SiC/RBSN) composite laminate tensile specimens. Studied were unidirectional laminates [0]_8, [10]_8, [45]_8, and [90]_8; cross-ply laminates [0_2/90_2]_s; angle-ply laminates [+45_2/-45_2]_s; double-edge-notched [0]_8 laminates; and central-hole laminates. Results correlated well with the experimental data. This work was performed as a validation and benchmarking exercise of the FEAMAC/CARES program. FEAMAC/CARES simulates stochastic-based, discrete-event progressive damage of ceramic matrix composite and polymer matrix composite material structures. It couples three software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC, and Abaqus is used to model the overall composite structure. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events that incrementally progress until ultimate structural failure.
Analysis of the Space Propulsion System Problem Using RAVEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
diego mandelli; curtis smith; cristian rabiti
This paper presents the solution of the space propulsion problem using a PRA code currently under development at Idaho National Laboratory (INL). RAVEN (Reactor Analysis and Virtual control ENvironment) is a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities. It is designed to derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures) and to perform both Monte Carlo sampling of randomly distributed events and Event Tree based analysis. In order to facilitate the input/output handling, a Graphical User Interface (GUI) and a post-processing data-mining module are available. RAVEN can also interface with several numerical codes such as RELAP5 and RELAP-7 and ad-hoc system simulators. For the space propulsion system problem, an ad-hoc simulator has been developed, written in Python, and then interfaced to RAVEN. This simulator fully models both deterministic behaviors (e.g., system dynamics and interactions between system components) and stochastic behaviors (i.e., failures of components/systems such as distribution lines and thrusters). Stochastic analysis is performed using random-sampling-based methodologies (i.e., Monte Carlo). This analysis is used both to determine the reliability of the space propulsion system and to propagate the uncertainties associated with a specific set of parameters. As also indicated in the scope of the benchmark problem, the results generated by the stochastic analysis are used to generate risk-informed insights such as conditions under which different strategies can be followed.
Makeev, Alexei G; Kurkina, Elena S; Kevrekidis, Ioannis G
2012-06-01
Kinetic Monte Carlo simulations are used to study the stochastic two-species Lotka-Volterra model on a square lattice. For certain values of the model parameters, the system constitutes an excitable medium: travelling pulses and rotating spiral waves can be excited. Stable solitary pulses travel with constant (modulo stochastic fluctuations) shape and speed along a periodic lattice. The spiral waves observed persist sometimes for hundreds of rotations, but they are ultimately unstable and break-up (because of fluctuations and interactions between neighboring fronts) giving rise to complex dynamic behavior in which numerous small spiral waves rotate and interact with each other. It is interesting that travelling pulses and spiral waves can be exhibited by the model even for completely immobile species, due to the non-local reaction kinetics.
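As a well-mixed point of comparison for the lattice simulations described above, a stochastic Lotka-Volterra model can be simulated with the Gillespie algorithm (this sketch omits the spatial lattice and the kinetic Monte Carlo rules of the paper; rate constants and initial populations are illustrative):

    import numpy as np

    rng = np.random.default_rng(3)

    def gillespie_lv(x=200, y=50, k1=1.0, k2=0.005, k3=0.6, t_end=50.0):
        """Well-mixed stochastic Lotka-Volterra: prey birth X -> 2X (k1),
        predation X + Y -> 2Y (k2), predator death Y -> 0 (k3)."""
        t, traj = 0.0, [(0.0, x, y)]
        while t < t_end and (x > 0 or y > 0):
            rates = np.array([k1 * x, k2 * x * y, k3 * y])
            total = rates.sum()
            if total == 0:
                break
            t += rng.exponential(1.0 / total)           # waiting time to next reaction
            r = rng.choice(3, p=rates / total)          # which reaction fires
            if r == 0:
                x += 1
            elif r == 1:
                x -= 1
                y += 1
            else:
                y -= 1
            traj.append((t, x, y))
        return np.array(traj)

    traj = gillespie_lv()   # columns: time, prey count, predator count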
Controlling multiple plasma channels created by a high-power femtosecond laser pulse
NASA Astrophysics Data System (ADS)
Kosareva, O. G.; Luo, Q.
2005-10-01
Femtosecond light filaments are comparatively long regions of the spatially and temporally localized radiation zones, which generate free electrons in the medium. At high pulse peak power multiple filaments are produced leading to stochastic plasma channels (Mlejnek et al.: PRL 83, 2938 (1999)). In both atmospheric long-distance propagation (Sprangle et al., PRE 66, 046418 (2002), Kasparian et al, Science 301, 61 (2003)) and focusing the radiation into condensed matter important issues are production of elongated plasma channels, as well as high conversion efficiency to the white light. We control stochastic plasma channels by changing the initial beam size or shape. The result is the increase in the plasma density and white light signal. Control by regular small-scale perturbations allows us to suppress atmospheric turbulence in air and create an array of well-arranged filaments in fused silica.
Disentangling Random Motion and Flow in a Complex Medium
Koslover, Elena F.; Chan, Caleb K.; Theriot, Julie A.
2016-01-01
We describe a technique for deconvolving the stochastic motion of particles from large-scale fluid flow in a dynamic environment such as that found in living cells. The method leverages the separation of timescales to subtract out the persistent component of motion from single-particle trajectories. The mean-squared displacement of the resulting trajectories is rescaled so as to enable robust extraction of the diffusion coefficient and subdiffusive scaling exponent of the stochastic motion. We demonstrate the applicability of the method for characterizing both diffusive and fractional Brownian motion overlaid by flow and analytically calculate the accuracy of the method in different parameter regimes. This technique is employed to analyze the motion of lysosomes in motile neutrophil-like cells, showing that the cytoplasm of these cells behaves as a viscous fluid at the timescales examined. PMID:26840734
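A minimal sketch of the deconvolution idea (estimate the persistent flow component with a running mean, subtract it, and compute the mean-squared displacement of the residual motion); the window length and exponent fit below are illustrative choices, and the published rescaling correction is not reproduced:

    import numpy as np

    def residual_msd(traj, window=21, max_lag=50):
        """traj: (T, 2) array of particle positions. Subtract the running-mean
        (persistent/flow) component, then compute the MSD of the residual motion."""
        kernel = np.ones(window) / window
        flow = np.column_stack([np.convolve(traj[:, d], kernel, mode="same")
                                for d in range(traj.shape[1])])
        resid = traj - flow
        lags = np.arange(1, max_lag + 1)
        msd = np.array([np.mean(np.sum((resid[lag:] - resid[:-lag]) ** 2, axis=1))
                        for lag in lags])
        return lags, msd

    # usage: fit log(msd) vs log(lag) to estimate the (sub)diffusive scaling exponent
    # lags, msd = residual_msd(positions)
    # alpha, logD = np.polyfit(np.log(lags), np.log(msd), 1)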
Very High Specific Energy, Medium Power Li/CFx Primary Battery for Launchers and Space Probes
NASA Astrophysics Data System (ADS)
Brochard, Paul; Godillot, Gerome; Peres, Jean Paul; Corbin, Julien; Espinosa, Amaya
2014-08-01
Benchmarking against existing technologies shows the advantages of the lithium-fluorinated carbon (Li/CFx) technology for use aboard future launchers in terms of a low Total Cost of Ownership (TCO), especially for high-energy-demand missions such as re-ignitable upper stages for long GTO+ missions and probes for deep space exploration. This paper presents the new results obtained on this chemistry in terms of electrical and climatic performance, abuse tests and life tests. Studies - co-financed by CNES and Saft - looked at a pure CFx version with a specific energy up to 500 Wh/kg along with a medium power of 80 to 100 W/kg.
Inglis, Stephen; Melko, Roger G
2013-01-01
We implement a Wang-Landau sampling technique in quantum Monte Carlo (QMC) simulations for the purpose of calculating the Rényi entanglement entropies and associated mutual information. The algorithm converges an estimate for an analog to the density of states for stochastic series expansion QMC, allowing a direct calculation of Rényi entropies without explicit thermodynamic integration. We benchmark results for the mutual information on two-dimensional (2D) isotropic and anisotropic Heisenberg models, a 2D transverse field Ising model, and a three-dimensional Heisenberg model, confirming a critical scaling of the mutual information in cases with a finite-temperature transition. We discuss the benefits and limitations of broad sampling techniques compared to standard importance sampling methods.
Hawking, Thomas G.
2013-01-01
Dorsolateral striatum (DLS) is implicated in tactile perception and receives strong projections from somatosensory cortex. However, the sensory representations encoded by striatal projection neurons are not well understood. Here we characterized the contribution of DLS to the encoding of vibrotactile information in rats by assessing striatal responses to precise frequency stimuli delivered to a single vibrissa. We applied stimuli in a frequency range (45–90 Hz) that evokes discriminable percepts and carries most of the power of vibrissa vibration elicited by a range of complex fine textures. Both medium spiny neurons and evoked potentials showed tactile responses that were modulated by slow wave oscillations. Furthermore, medium spiny neuron population responses represented stimulus frequency on par with previously reported behavioral benchmarks. Our results suggest that striatum encodes frequency information of vibrotactile stimuli which is dynamically modulated by ongoing brain state. PMID:23114217
Error decomposition and estimation of inherent optical properties.
Salama, Mhd Suhyb; Stein, Alfred
2009-09-10
We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit are employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values with correlation coefficients of 60-90% and 67-90% for IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table to the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method has a better performance and is more appropriate to estimate actual errors of ocean-color derived products than the previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the used derivation.
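Under the assumption that the three error sources are independent, the decomposition described above can be summarized as

    \sigma^2_{\mathrm{IOP}} \;=\; \sigma^2_{\mathrm{model}} + \sigma^2_{\mathrm{noise}} + \sigma^2_{\mathrm{atm}},

so that the relative contribution of each component, \sigma^2_i / \sigma^2_{\mathrm{IOP}}, can be evaluated from the posterior distribution of the retrieved IOPs.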
Stochastic response of human blood platelets to stimulation of shape changes and secretion.
Deranleau, D A; Lüthy, R; Lüscher, E F
1986-01-01
Stopped-flow turbidimetric data indicate that platelets stimulated with low levels of thrombin undergo a shape transformation from disc to "sphere" to smaller spiny sphere that is indistinguishable from the shape change induced by ADP through different membrane receptor sites and a dissimilar receptor trigger mechanism. Under conditions where neither secretion nor aggregation occur, the extinction coefficients for total scattering by each of the three platelet forms are independent of the stimulus applied, and both reaction mechanisms can be described as stochastic (Poisson) processes in which the rate constant for the formation of the transient species is equal to the rate constant for its disappearance. This observation is independent of the shape assignment, and as the concentration of thrombin is increased and various storage organelles secrete increasing amounts of their contents into the external medium, the stochastic pattern persists. Progressively larger decreases in the extinction coefficients of the intermediate and final platelet forms, over and above those that reflect shape alterations alone, accompany or parallel the reaction induced by the higher thrombin concentrations. The excess turbidity decrease observed when full secretion occurs can be wholly accounted for by a decrease in platelet volume equal in magnitude to the fraction of the total platelet volume occupied by alpha granules. Platelet activation, as reported by the whole body light scattering of either shape changes alone or shape changes plus parallel (but not necessarily also stochastic) alpha granule secretion, thus manifests itself as a random series of transient events conceivably with its origins in the superposition of a set of more elementary stochastic processes that could include microtubule depolymerization, actin polymerization, and possibly diffusion. Although the real nature of the control mechanism remains obscure, certain properties of pooled stochastic processes suggest that a reciprocal connection between microtubule fragmentation and the assembly of actin-containing pseudopodal structures and contractile elements--processes that may exhibit reciprocal requirements for calcium--might provide a hypothetical basis for a rate-limiting step. PMID:3457375
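The stochastic (Poisson) pattern referred to above, in which the transient form appears and disappears with equal rate constants, corresponds to the sequential first-order scheme disc -> sphere -> spiny sphere; for a single rate constant k, the standard result for equal consecutive rates gives

    N_{\mathrm{sphere}}(t) = N_0\,k t\,e^{-k t}, \qquad
    N_{\mathrm{spiny}}(t) = N_0\left[1 - (1 + k t)\,e^{-k t}\right],

with N_0 the initial number of discs (a textbook form stated here for orientation rather than taken from the paper).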
Lee, Seung Yup; Skolnick, Jeffrey
2007-07-01
To improve the accuracy of TASSER models especially in the limit where threading provided template alignments are of poor quality, we have developed the TASSER(iter) algorithm which uses the templates and contact restraints from TASSER generated models for iterative structure refinement. We apply TASSER(iter) to a large benchmark set of 2,773 nonhomologous single domain proteins that are < or = 200 in length and that cover the PDB at the level of 35% pairwise sequence identity. Overall, TASSER(iter) models have a smaller global average RMSD of 5.48 A compared to 5.81 A RMSD of the original TASSER models. Classifying the targets by the level of prediction difficulty (where Easy targets have a good template with a corresponding good threading alignment, Medium targets have a good template but a poor alignment, and Hard targets have an incorrectly identified template), TASSER(iter) (TASSER) models have an average RMSD of 4.15 A (4.35 A) for the Easy set and 9.05 A (9.52 A) for the Hard set. The largest reduction of average RMSD is for the Medium set where the TASSER(iter) models have an average global RMSD of 5.67 A compared to 6.72 A of the TASSER models. Seventy percent of the Medium set TASSER(iter) models have a smaller RMSD than the TASSER models, while 63% of the Easy and 60% of the Hard TASSER models are improved by TASSER(iter). For the foldable cases, where the targets have a RMSD to the native <6.5 A, TASSER(iter) shows obvious improvement over TASSER models: For the Medium set, it improves the success rate from 57.0 to 67.2%, followed by the Hard targets where the success rate improves from 32.0 to 34.8%, with the smallest improvement in the Easy targets from 82.6 to 84.0%. These results suggest that TASSER(iter) can provide more reliable predictions for targets of Medium difficulty, a range that had resisted improvement in the quality of protein structure predictions. 2007 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Lentati, L.; Shannon, R. M.; Coles, W. A.; Verbiest, J. P. W.; van Haasteren, R.; Ellis, J. A.; Caballero, R. N.; Manchester, R. N.; Arzoumanian, Z.; Babak, S.; Bassa, C. G.; Bhat, N. D. R.; Brem, P.; Burgay, M.; Burke-Spolaor, S.; Champion, D.; Chatterjee, S.; Cognard, I.; Cordes, J. M.; Dai, S.; Demorest, P.; Desvignes, G.; Dolch, T.; Ferdman, R. D.; Fonseca, E.; Gair, J. R.; Gonzalez, M. E.; Graikou, E.; Guillemot, L.; Hessels, J. W. T.; Hobbs, G.; Janssen, G. H.; Jones, G.; Karuppusamy, R.; Keith, M.; Kerr, M.; Kramer, M.; Lam, M. T.; Lasky, P. D.; Lassus, A.; Lazarus, P.; Lazio, T. J. W.; Lee, K. J.; Levin, L.; Liu, K.; Lynch, R. S.; Madison, D. R.; McKee, J.; McLaughlin, M.; McWilliams, S. T.; Mingarelli, C. M. F.; Nice, D. J.; Osłowski, S.; Pennucci, T. T.; Perera, B. B. P.; Perrodin, D.; Petiteau, A.; Possenti, A.; Ransom, S. M.; Reardon, D.; Rosado, P. A.; Sanidas, S. A.; Sesana, A.; Shaifullah, G.; Siemens, X.; Smits, R.; Stairs, I.; Stappers, B.; Stinebring, D. R.; Stovall, K.; Swiggum, J.; Taylor, S. R.; Theureau, G.; Tiburzi, C.; Toomey, L.; Vallisneri, M.; van Straten, W.; Vecchio, A.; Wang, J.-B.; Wang, Y.; You, X. P.; Zhu, W. W.; Zhu, X.-J.
2016-05-01
We analyse the stochastic properties of the 49 pulsars that comprise the first International Pulsar Timing Array (IPTA) data release. We use Bayesian methodology, performing model selection to determine the optimal description of the stochastic signals present in each pulsar. In addition to spin-noise and dispersion-measure (DM) variations, these models can include timing noise unique to a single observing system, or frequency band. We show the improved radio-frequency coverage and presence of overlapping data from different observing systems in the IPTA data set enables us to separate both system and band-dependent effects with much greater efficacy than in the individual pulsar timing array (PTA) data sets. For example, we show that PSR J1643-1224 has, in addition to DM variations, significant band-dependent noise that is coherent between PTAs which we interpret as coming from time-variable scattering or refraction in the ionized interstellar medium. Failing to model these different contributions appropriately can dramatically alter the astrophysical interpretation of the stochastic signals observed in the residuals. In some cases, the spectral exponent of the spin-noise signal can vary from 1.6 to 4 depending upon the model, which has direct implications for the long-term sensitivity of the pulsar to a stochastic gravitational-wave (GW) background. By using a more appropriate model, however, we can greatly improve a pulsar's sensitivity to GWs. For example, including system and band-dependent signals in the PSR J0437-4715 data set improves the upper limit on a fiducial GW background by ˜60 per cent compared to a model that includes DM variations and spin-noise only.
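For reference, spin-noise and gravitational-wave-background signals in such analyses are commonly modelled with a power-law spectral density of the conventional pulsar-timing form

    S(f) \;=\; \frac{A^2}{12\pi^2}\left(\frac{f}{f_{\mathrm{yr}}}\right)^{-\gamma}\,\mathrm{yr}^3,

where A is the amplitude at the reference frequency f_yr = 1 yr^{-1} and \gamma is the spectral exponent quoted above as varying between 1.6 and 4 depending on the adopted noise model.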
Stochastic and Deterministic Fluctuations in Stimulated Brillouin Scattering
1990-10-01
EIT Intensity Noise Spectroscopy
NASA Astrophysics Data System (ADS)
Crescimanno, Michael; Xiao, Yanhong; Baryakhtar, Maria; Hohensee, Michael; Phillips, David; Walsworth, Ron
2008-10-01
Intensity noise correlations in coherently-prepared media can reveal underlying spectroscopic detail, such as power broadening-free resonances. We analyze recent experimental results using very simple theory: The intensity noise correlation spectra can be quantitatively understood entirely in terms of static ensemble averages of the medium's steady state response. This is significantly simpler than stochastic integration of the Bloch equations, and leads to physical insights we apply to non-linear Faraday rotation and noise spectra in optically thick media.
Event-by-event picture for the medium-induced jet evolution
NASA Astrophysics Data System (ADS)
Escobedo, Miguel A.; Iancu, Edmond
2017-08-01
We discuss the evolution of an energetic jet which propagates through a dense quark-gluon plasma and radiates gluons due to its interactions with the medium. Within perturbative QCD, this evolution can be described as a stochastic branching process, that we have managed to solve exactly. We present exact, analytic, results for the gluon spectrum (the average gluon distribution) and for the higher n-point functions, which describe correlations and fluctuations. Using these results, we construct the event-by-event picture of the gluon distribution produced via medium-induced gluon branching. In contrast to what happens in a usual QCD cascade in vacuum, the medium-induced branchings are quasi-democratic, with offspring gluons carrying sizable fractions of the energy of their parent parton. We find large fluctuations in the energy loss and in the multiplicity of soft gluons. The multiplicity distribution is predicted to exhibit KNO (Koba-Nielsen-Olesen) scaling. These predictions can be tested in Pb+Pb collisions at the LHC, via event-by-event measurements of the di-jet asymmetry. Based on [1, 2].
Event-by-event picture for the medium-induced jet evolution
NASA Astrophysics Data System (ADS)
Escobedo, Miguel A.; Iancu, Edmond
2017-03-01
We discuss the evolution of an energetic jet which propagates through a dense quark-gluon plasma and radiates gluons due to its interactions with the medium. Within perturbative QCD, this evolution can be described as a stochastic branching process, that we have managed to solve exactly. We present exact, analytic, results for the gluon spectrum (the average gluon distribution) and for the higher n-point functions, which describe correlations and fluctuations. Using these results, we construct the event-by-event picture of the gluon distribution produced via medium-induced gluon branching. In contrast to what happens in a usual QCD cascade in vacuum, the medium-induced branchings are quasi-democratic, with offspring gluons carrying sizable fractions of the energy of their parent parton. We find large fluctuations in the energy loss and in the multiplicity of soft gluons. The multiplicity distribution is predicted to exhibit KNO (Koba-Nielsen-Olesen) scaling. These predictions can be tested in Pb+Pb collisions at the LHC, via event-by-event measurements of the di-jet asymmetry. Based on [1, 2].
NASA Astrophysics Data System (ADS)
Thomas, R. N.; Ebigbo, A.; Paluszny, A.; Zimmerman, R. W.
2016-12-01
The macroscopic permeability of 3D anisotropic geomechanically-generated fractured rock masses is investigated. The explicitly computed permeabilities are compared to the predictions of classical inclusion-based effective medium theories, and to the permeability of networks of randomly oriented and stochastically generated fractures. Stochastically generated fracture networks lack features that arise from fracture interaction, such as non-planarity, and termination of fractures upon intersection. Recent discrete fracture network studies include heuristic rules that introduce these features to some extent. In this work, fractures grow and extend under tension from a finite set of initial flaws. The finite element method is used to compute displacements, and modal stress intensity factors are computed around each fracture tip using the interaction integral accumulated over a set of virtual discs. Fracture apertures emerge as a result of simulations that honour the constraints of stress equilibrium and mass conservation. The macroscopic permeabilities are explicitly calculated by solving the local cubic law in the fractures, on an element-by-element basis, coupled to Darcy's law in the matrix. The permeabilities are then compared to the estimates given by the symmetric and asymmetric versions of the self-consistent approximation, which, for randomly fractured volumes, were previously demonstrated to be most accurate of the inclusion-based effective medium methods (Ebigbo et al., Transport in Porous Media, 2016). The permeabilities of several dozen geomechanical networks are computed as a function of density and in situ stresses. For anisotropic networks, we find that the asymmetric and symmetric self-consistent methods overestimate the effective permeability in the direction of the dominant fracture set. Effective permeabilities that are more strongly dependent on the connectivity of two or more fracture sets are more accurately captured by the effective medium models.
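The fracture-scale flow law referred to above (the local cubic law) relates the flux per unit fracture width to the local aperture a in the standard form

    \mathbf{q}_f \;=\; -\,\frac{a^3}{12\,\mu}\,\nabla p,

so each fracture element contributes a transmissivity proportional to the cube of its mechanically computed aperture, coupled to Darcy flow \mathbf{q}_m = -(k_m/\mu)\,\nabla p in the matrix.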
Carrel, M.; Dentz, M.; Derlon, N.; Morgenroth, E.
2018-01-01
Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered. PMID:29780184
NASA Astrophysics Data System (ADS)
Carrel, M.; Morales, V. L.; Dentz, M.; Derlon, N.; Morgenroth, E.; Holzner, M.
2018-03-01
Biofilms are ubiquitous bacterial communities that grow in various porous media including soils, trickling, and sand filters. In these environments, they play a central role in services ranging from degradation of pollutants to water purification. Biofilms dynamically change the pore structure of the medium through selective clogging of pores, a process known as bioclogging. This affects how solutes are transported and spread through the porous matrix, but the temporal changes to transport behavior during bioclogging are not well understood. To address this uncertainty, we experimentally study the hydrodynamic changes of a transparent 3-D porous medium as it experiences progressive bioclogging. Statistical analyses of the system's hydrodynamics at four time points of bioclogging (0, 24, 36, and 48 h in the exponential growth phase) reveal exponential increases in both average and variance of the flow velocity, as well as its correlation length. Measurements for spreading, as mean-squared displacements, are found to be non-Fickian and more intensely superdiffusive with progressive bioclogging, indicating the formation of preferential flow pathways and stagnation zones. A gamma distribution describes well the Lagrangian velocity distributions and provides parameters that quantify changes to the flow, which evolves from a parallel pore arrangement under unclogged conditions, toward a more serial arrangement with increasing clogging. Exponentially evolving hydrodynamic metrics agree with an exponential bacterial growth phase and are used to parameterize a correlated continuous time random walk model with a stochastic velocity relaxation. The model accurately reproduces transport observations and can be used to resolve transport behavior at intermediate time points within the exponential growth phase considered.
Rupert, C.P.; Miller, C.T.
2008-01-01
We examine a variety of polynomial-chaos-motivated approximations to a stochastic form of a steady state groundwater flow model. We consider approaches for truncating the infinite dimensional problem and producing decoupled systems. We discuss conditions under which such decoupling is possible and show that to generalize the known decoupling by numerical cubature, it would be necessary to find new multivariate cubature rules. Finally, we use the acceleration of Monte Carlo to compare the quality of polynomial models obtained for all approaches and find that in general the methods considered are more efficient than Monte Carlo for the relatively small domains considered in this work. A curse of dimensionality in the series expansion of the log-normal stochastic random field used to represent hydraulic conductivity provides a significant impediment to efficient approximations for large domains for all methods considered in this work, other than the Monte Carlo method. PMID:18836519
Compact stochastic models for multidimensional quasiballistic thermal transport
NASA Astrophysics Data System (ADS)
Vermeersch, Bjorn
2016-11-01
The Boltzmann transport equation (BTE) has proven indispensable in elucidating quasiballistic heat dynamics. The experimental observations of nondiffusive thermal transients, however, are interpreted almost exclusively through purely diffusive formalisms that merely extract "effective" Fourier conductivities. Here, we build upon stochastic transport theory to provide a characterisation framework that blends the rich physics contained within the BTE solutions with the convenience of conventional analyses. The multidimensional phonon dynamics are described in terms of an isotropic Poissonian flight process with a rigorous Fourier-Laplace single pulse response P(ξ, s) = 1/[s + ψ(‖ξ‖)]. The spatial propagator ψ(‖ξ‖), unlike commonly reconstructed mean free path spectra κ_Σ(Λ), serves as a genuine thermal blueprint of the medium that can be identified in a compact form directly from the raw measurement signals. Practical illustrations for transient thermal grating and time domain thermoreflectance experiments on respectively GaAs and InGaAs are provided.
Probabilistic measures of persistence and extinction in measles (meta)populations.
Gunning, Christian E; Wearing, Helen J
2013-08-01
Persistence and extinction are fundamental processes in ecological systems that are difficult to accurately measure due to stochasticity and incomplete observation. Moreover, these processes operate on multiple scales, from individual populations to metapopulations. Here, we examine an extensive new data set of measles case reports and associated demographics in pre-vaccine era US cities, alongside a classic England & Wales data set. We first infer the per-population quasi-continuous distribution of log incidence. We then use stochastic, spatially implicit metapopulation models to explore the frequency of rescue events and apparent extinctions. We show that, unlike critical community size, the inferred distributions account for observational processes, allowing direct comparisons between metapopulations. The inferred distributions scale with population size. We use these scalings to estimate extinction boundary probabilities. We compare these predictions with measurements in individual populations and random aggregates of populations, highlighting the importance of medium-sized populations in metapopulation persistence. © 2013 John Wiley & Sons Ltd/CNRS.
Access Protocol For An Industrial Optical Fibre LAN
NASA Astrophysics Data System (ADS)
Senior, John M.; Walker, William M.; Ryley, Alan
1987-09-01
A structure for OSI levels 1 and 2 of a local area network suitable for use in a variety of industrial environments is reported. It is intended that the LAN will utilise optical fibre technology at the physical level and a hybrid of dynamically optimisable token passing and CSMA/CD techniques at the data link (IEEE 802 medium access control - logical link control) level. An intelligent token passing algorithm is employed which dynamically allocates tokens according to the known upper limits on the requirements of each device. In addition a system of stochastic tokens is used to increase efficiency when the stochastic traffic is significant. The protocol also allows user-defined priority systems to be employed and is suitable for distributed or centralised implementation. The results of computer simulated performance characteristics for the protocol using a star-ring topology are reported which demonstrate its ability to perform efficiently with the device and traffic loads anticipated within an industrial environment.
Floris, Patrick; Curtin, Sean; Kaisermayer, Christian; Lindeberg, Anna; Bones, Jonathan
2018-07-01
The compatibility of CHO cell culture medium formulations with all stages of the bioprocess must be evaluated through small-scale studies prior to scale-up for commercial manufacturing operations. Here, we describe the development of a bespoke small-scale device for assessing the compatibility of culture media with a widely implemented upstream viral clearance strategy, high-temperature short-time (HTST) treatment. The thermal stability of undefined medium formulations supplemented with soy hydrolysates was evaluated upon variations in critical HTST processing parameters, namely, holding times and temperatures. Prolonged holding times of 43 s at temperatures of 110 °C did not adversely impact medium quality while significant degradation was observed upon treatment at elevated temperatures (200 °C) for shorter time periods (11 s). The performance of the device was benchmarked against a commercially available mini-pilot HTST system upon treatment of identical formulations on both platforms. Processed medium samples were analyzed by untargeted LC-MS/MS for compositional profiling followed by chemometric evaluation, which confirmed the observed degradation effects caused by elevated holding temperatures but revealed comparable performance of our developed device with the commercial mini-pilot setup. The developed device can assist medium optimization activities by reducing volume requirements relative to commercially available mini-pilot instrumentation and by facilitating fast throughput evaluation of heat-induced effects on multiple medium lots.
NASA Astrophysics Data System (ADS)
Moix, Jeremy M.; Cao, Jianshu
2013-10-01
The hierarchical equations of motion technique has found widespread success as a tool to generate the numerically exact dynamics of non-Markovian open quantum systems. However, its application to low temperature environments remains a serious challenge due to the need for a deep hierarchy that arises from the Matsubara expansion of the bath correlation function. Here we present a hybrid stochastic hierarchical equation of motion (sHEOM) approach that alleviates this bottleneck and leads to a numerical cost that is nearly independent of temperature. Additionally, the sHEOM method generally converges with fewer hierarchy tiers allowing for the treatment of larger systems. Benchmark calculations are presented on the dynamics of two level systems at both high and low temperatures to demonstrate the efficacy of the approach. Then the hybrid method is used to generate the exact dynamics of systems that are nearly impossible to treat by the standard hierarchy. First, exact energy transfer rates are calculated across a broad range of temperatures revealing the deviations from the Förster rates. This is followed by computations of the entanglement dynamics in a system of two qubits at low temperature spanning the weak to strong system-bath coupling regimes.
Moix, Jeremy M; Cao, Jianshu
2013-10-07
The hierarchical equations of motion technique has found widespread success as a tool to generate the numerically exact dynamics of non-Markovian open quantum systems. However, its application to low temperature environments remains a serious challenge due to the need for a deep hierarchy that arises from the Matsubara expansion of the bath correlation function. Here we present a hybrid stochastic hierarchical equation of motion (sHEOM) approach that alleviates this bottleneck and leads to a numerical cost that is nearly independent of temperature. Additionally, the sHEOM method generally converges with fewer hierarchy tiers allowing for the treatment of larger systems. Benchmark calculations are presented on the dynamics of two level systems at both high and low temperatures to demonstrate the efficacy of the approach. Then the hybrid method is used to generate the exact dynamics of systems that are nearly impossible to treat by the standard hierarchy. First, exact energy transfer rates are calculated across a broad range of temperatures revealing the deviations from the Förster rates. This is followed by computations of the entanglement dynamics in a system of two qubits at low temperature spanning the weak to strong system-bath coupling regimes.
Information flow and causality as rigorous notions ab initio
NASA Astrophysics Data System (ADS)
Liang, X. San
2016-11-01
Information flow, or information transfer, the widely applicable general physics notion, can be rigorously derived from first principles rather than axiomatically proposed as an ansatz. Its logical association with causality is firmly rooted in the dynamical system that lies beneath. The principle of nil causality, which reads: an event is not causal to another if the evolution of the latter is independent of the former, and which transfer entropy analysis and Granger causality tests fail to verify in many situations, turns out to be a proven theorem here. Established in this study are the information flows among the components of time-discrete mappings and time-continuous dynamical systems, both deterministic and stochastic. They have been obtained explicitly in closed form, and put to applications with benchmark systems such as the Kaplan-Yorke map, Rössler system, baker transformation, Hénon map, and stochastic potential flow. Besides unraveling the causal relations as expected from the respective systems, some of the applications show that the information flow structure underlying a complex trajectory pattern could be tractable. For linear systems, the resulting remarkably concise formula asserts analytically that causation implies correlation, while correlation does not imply causation, providing a mathematical basis for the long-standing philosophical debate over causation versus correlation.
Characterizing a New Candidate Benchmark Brown Dwarf Companion in the β Pic Moving Group
NASA Astrophysics Data System (ADS)
Phillips, Caprice; Bowler, Brendan; Liu, Michael C.; Mace, Gregory N.; Sokal, Kimberly R.
2018-01-01
Benchmark brown dwarfs are objects that have at least two measured fundamental quantities such as luminosity and age, and therefore can be used to test substellar atmospheric and evolutionary models. Nearby, young, loose associations such as the β Pic moving group represent some of the best regions in which to identify intermediate-age benchmark brown dwarfs due to their well-constrained ages and metallicities. We present a spectroscopic study of a new companion at the hydrogen-burning limit orbiting a low-mass star at a separation of 9″ (650 AU) in the 23 Myr old β Pic moving group. The medium-resolution near-infrared spectrum of this companion from IRTF/SpeX shows clear signs of low surface gravity and yields an index-based spectral type of M6±1 with a VL-G gravity on the Allers & Liu classification system. Currently, there are four known brown dwarf and giant planet companions in the β Pic moving group: HR 7329 B, PZ Tel B, β Pic b, and 51 Eri b. Depending on its exact age and accretion history, this new object may represent the third brown dwarf companion and fifth substellar companion in this association.
NASA Astrophysics Data System (ADS)
Izah Anuar, Nurul; Saptari, Adi
2016-02-01
This paper addresses the types of particle representation (encoding) procedures in a population-based stochastic optimization technique used to solve scheduling problems in the job-shop manufacturing environment. It intends to evaluate and compare the performance of different particle representation procedures in Particle Swarm Optimization (PSO) when solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP. It is an important step to be carried out so that each particle in PSO can represent a schedule in JSP. Three procedures are used in this study: Operation and Particle Position Sequence (OPPS), random keys representation, and the random-key encoding scheme. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective is to minimize the makespan, using MATLAB software. Based on the experimental results, it is discovered that OPPS gives the best performance in solving both benchmark problems. The contribution of this paper is that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
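To make the encoding idea concrete, the sketch below decodes a continuous particle position into a job-shop operation sequence in a random-key style and evaluates its makespan with a semi-active schedule builder. The 3-job, 3-machine instance, routing, and processing times are illustrative assumptions, not the FT06/FT10 benchmarks, and the decoding is a generic random-key variant rather than the exact OPPS procedure.

```python
import numpy as np

def decode_random_keys(position, n_jobs):
    """Sort the continuous keys; each sorted index modulo n_jobs names the job
    whose next operation is appended to the operation sequence."""
    order = np.argsort(position)
    return [int(idx) % n_jobs for idx in order]

def makespan(op_sequence, routing, proc_times):
    """Build a semi-active schedule from an operation sequence and return its makespan."""
    n_jobs, n_machines = proc_times.shape
    next_op = [0] * n_jobs
    job_free = [0.0] * n_jobs
    mach_free = [0.0] * n_machines
    for job in op_sequence:
        op = next_op[job]
        machine = routing[job][op]
        start = max(job_free[job], mach_free[machine])
        end = start + proc_times[job, op]
        job_free[job] = mach_free[machine] = end
        next_op[job] += 1
    return max(job_free)

# Toy 3-job, 3-machine instance (assumed for illustration only).
routing = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]                      # machine visited by each operation
proc_times = np.array([[3., 2., 2.], [2., 1., 4.], [4., 3., 1.]])

rng = np.random.default_rng(0)
particle = rng.random(3 * 3)                                     # one PSO particle position
seq = decode_random_keys(particle, n_jobs=3)
print("operation sequence:", seq, "makespan:", makespan(seq, routing, proc_times))
```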
Benchmarking Commercial Conformer Ensemble Generators.
Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes
2017-11-27
We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.
Adaptiveness in monotone pseudo-Boolean optimization and stochastic neural computation.
Grossi, Giuliano
2009-08-01
Hopfield neural network (HNN) is a nonlinear computational model successfully applied in finding near-optimal solutions of several difficult combinatorial problems. In many cases, the network energy function is obtained through a learning procedure so that its minima are states falling into a proper subspace (feasible region) of the search space. However, because of the network nonlinearity, a number of undesirable local energy minima emerge from the learning procedure, significantly affecting the network performance. In the neural model analyzed here, we combine both a penalty and a stochastic process in order to enhance the performance of a binary HNN. The penalty strategy allows us to gradually lead the search towards states representing feasible solutions, so avoiding oscillatory behaviors or asymptotically unstable convergence. The presence of stochastic dynamics potentially prevents the network from falling into shallow local minima of the energy function, i.e., those quite far from the global optimum. Hence, for a given fixed network topology, the desired final distribution on the states can be reached by carefully modulating such a process. The model uses pseudo-Boolean functions both to express problem constraints and the cost function; a combination of these two functions is then interpreted as the energy of the neural network. A wide variety of NP-hard problems fall in the class of problems that can be solved by the model at hand, particularly those having a monotonic quadratic pseudo-Boolean function as constraint function, that is, functions easily derived from closed algebraic expressions representing the constraint structure and easy (polynomial time) to maximize. We show the asymptotic convergence properties of this model, characterizing its state space distribution at thermal equilibrium in terms of a Markov chain, and give evidence of its ability to find high quality solutions on benchmarks and randomly generated instances of two specific problems taken from computational graph theory.
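A minimal sketch of the kind of stochastic binary dynamics described here: Glauber-style probabilistic flips on a quadratic pseudo-Boolean energy with an added penalty term and slow cooling. The specific cost matrix, the exactly-k-ones penalty, and the annealing schedule are toy assumptions, not the paper's learned energy.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20

# Toy quadratic pseudo-Boolean cost: maximize x^T Q x (Q symmetric, zero diagonal).
Q = rng.normal(size=(n, n)); Q = (Q + Q.T) / 2; np.fill_diagonal(Q, 0.0)
# Toy constraint penalty (also quadratic pseudo-Boolean): exactly k ones.
k, mu = 8, 5.0

def energy(x):
    return -x @ Q @ x + mu * (x.sum() - k) ** 2

x = rng.integers(0, 2, size=n).astype(float)
T = 2.0
for sweep in range(500):
    for i in rng.permutation(n):
        x_flip = x.copy(); x_flip[i] = 1.0 - x_flip[i]
        dE = energy(x_flip) - energy(x)
        # Glauber acceptance: stochastic escapes from shallow local minima.
        if rng.random() < 1.0 / (1.0 + np.exp(np.clip(dE / T, -60.0, 60.0))):
            x = x_flip
    T *= 0.99  # slow cooling shapes the final state distribution

print("final energy:", round(float(energy(x)), 3), "ones:", int(x.sum()))
```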
Recent trends in hardware security exploiting hybrid CMOS-resistive memory circuits
NASA Astrophysics Data System (ADS)
Sahay, Shubham; Suri, Manan
2017-12-01
This paper provides a comprehensive review of, and insight into, recent trends in the field of random number generator (RNG) and physically unclonable function (PUF) circuits implemented using different types of emerging resistive non-volatile memory (NVM) devices. We present a detailed review of hybrid RNG/PUF implementations based on the use of (i) Spin-Transfer Torque (STT-MRAM) and (ii) metal-oxide-based (OxRAM) NVM devices. Various approaches to hybrid CMOS-NVM RNG/PUF circuits are considered, followed by a discussion of different nanoscale device phenomena. Certain nanoscale device phenomena (variability, stochasticity, etc.), which are otherwise undesirable for reliable memory and storage applications, form the basis for low power and highly scalable RNG/PUF circuits. Detailed qualitative comparison and benchmarking of all implementations is performed.
A Hybrid Method of Moment Equations and Rate Equations to Modeling Gas-Grain Chemistry
NASA Astrophysics Data System (ADS)
Pei, Y.; Herbst, E.
2011-05-01
Grain surfaces play a crucial role in catalyzing many important chemical reactions in the interstellar medium (ISM). The deterministic rate equation (RE) method has often been used to simulate the surface chemistry. But this method becomes inaccurate when the number of reacting particles per grain is typically less than one, which can occur in the ISM. In this condition, stochastic approaches such as the master equations are adopted. However, these methods have mostly been constrained to small chemical networks due to the large amounts of processor time and computer power required. In this study, we present a hybrid method consisting of the moment equation approximation to the stochastic master equation approach and deterministic rate equations to treat a gas-grain model of homogeneous cold cloud cores with time-independent physical conditions. In this model, we use the standard OSU gas phase network (version OSU2006V3), which involves 458 gas phase species and more than 4000 reactions, and treat it by deterministic rate equations. A medium-sized surface reaction network which consists of 21 species and 19 reactions accounts for the production of stable molecules such as H2O, CO, CO2, H2CO, CH3OH, NH3 and CH4. These surface reactions are treated by a hybrid method of moment equations (Barzel & Biham 2007) and rate equations: when the abundance of a surface species is lower than a specific threshold, say one per grain, we use the "stochastic" moment equations to simulate the evolution; when its abundance goes above this threshold, we use the rate equations. A continuity technique is utilized to secure a smooth transition between these two methods. We have run chemical simulations for a time up to 10^8 yr at three temperatures: 10 K, 15 K, and 20 K. The results will be compared with those generated from (1) a completely deterministic model that uses rate equations for both gas phase and grain surface chemistry, (2) the method of modified rate equations (Garrod 2008), which partially takes into account the stochastic effect for surface reactions, and (3) the master equation approach solved using a Monte Carlo technique. At 10 K and standard grain sizes, our model results agree well with the above three methods, while discrepancies appear at higher temperatures and smaller grain sizes.
Numerical Analysis of Base Flowfield for a Four-Engine Clustered Nozzle Configuration
NASA Technical Reports Server (NTRS)
Wang, Ten-See
1995-01-01
Excessive base heating has been a problem for many launch vehicles. For certain designs such as the direct dump of turbine exhaust inside and at the lip of the nozzle, the potential burning of the turbine exhaust in the base region can be of great concern. Accurate prediction of the base environment at altitudes is therefore very important during the vehicle design phase. Otherwise, undesirable consequences may occur. In this study, the turbulent base flowfield of a cold flow experimental investigation for a four-engine clustered nozzle was numerically benchmarked using a pressure-based computational fluid dynamics (CFD) method. This is a necessary step before the benchmarking of hot flow and combustion flow tests can be considered. Since the medium was unheated air, reasonable prediction of the base pressure distribution at high altitude was the main goal. Several physical phenomena pertaining to the multiengine clustered nozzle base flow physics were deduced from the analysis.
NASA Indexing Benchmarks: Evaluating Text Search Engines
NASA Technical Reports Server (NTRS)
Esler, Sandra L.; Nelson, Michael L.
1997-01-01
The current proliferation of on-line information resources underscores the requirement for the ability to index collections of information and search and retrieve them in a convenient manner. This study develops criteria for analytically comparing the index and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the focused search engines. This toolkit is highly configurable and has the ability to run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.
Propagation of mechanical waves through a stochastic medium with spherical symmetry
NASA Astrophysics Data System (ADS)
Avendaño, Carlos G.; Reyes, J. Adrián
2018-01-01
We theoretically analyze the propagation of outgoing mechanical waves through an infinite isotropic elastic medium possessing spherical symmetry whose Lamé coefficients and density are spatial random functions characterized by well-defined statistical parameters. We derive the differential equation that governs the average displacement for a system whose properties depend on the radial coordinate. We show that such an equation is an extended version of the well-known Bessel differential equation whose perturbative additional terms contain coefficients that depend directly on the squared noise intensities and the autocorrelation lengths in an exponential decay fashion. We numerically solve the second order differential equation for several values of noise intensities and autocorrelation lengths and compare the corresponding displacement profiles with that of the exact analytic solution for the case of absent inhomogeneities.
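A minimal numerical sketch of the kind of computation the abstract describes: integrating a Bessel-type second-order equation for the average radial displacement with a small perturbative correction that decays exponentially with the autocorrelation length. The wavenumber, order, noise intensity, correlation length, and the exact form of the correction term are illustrative assumptions, not the equation derived in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

k, n = 1.0, 0.0                 # wavenumber and order (assumed)
sigma2, ell = 0.05, 0.5         # squared noise intensity and autocorrelation length (assumed)

def rhs(r, y):
    u, du = y
    # Bessel operator plus an assumed perturbative correction ~ sigma2 * exp(-r/ell).
    correction = sigma2 * np.exp(-r / ell) * u
    ddu = -du / r - (k**2 - (n / r) ** 2) * u + correction
    return [du, ddu]

r_span = (0.1, 30.0)
y0 = [1.0, 0.0]                 # displacement and slope at the inner radius (assumed)
sol = solve_ivp(rhs, r_span, y0, max_step=0.01, dense_output=True)

r = np.linspace(*r_span, 5)
print("sampled average displacement:", np.round(sol.sol(r)[0], 4))
```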
Louro, Henriqueta; Pinhão, Mariana; Santos, Joana; Tavares, Ana; Vital, Nádia; Silva, Maria João
2016-11-16
To contribute scientific evidence to the grouping strategy for the safety assessment of multi-walled carbon nanotubes (MWCNTs), this work describes the investigation of the cytotoxic and genotoxic effects of four benchmark MWCNTs in relation to their physicochemical characteristics, using two types of human respiratory cells. The cytotoxic effects were analysed using the clonogenic assay and replication index determination. A 48 h exposure of cells revealed that NM-401 was the only cytotoxic MWCNT in both cell lines, but after 8 days of exposure, the clonogenic assay in A549 cells showed cytotoxic effects for all the tested MWCNTs. Correlation analysis suggested an association between the MWCNTs' size in cell culture medium and cytotoxicity. No induction of DNA damage was observed by the comet assay for any MWCNT in either cell line, while the micronucleus assay revealed that both NM-401 and NM-402 were genotoxic in A549 cells. NM-401 and NM-402 are the two longest MWCNTs analyzed in this work, suggesting that length may be determinant for genotoxicity. No induction of micronuclei was observed in the BEAS-2B cell line, and the different effect in the two cell lines is explained in view of the size distribution of MWCNTs in the cell culture medium, rather than cell-specific factors. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A Parallel Stochastic Framework for Reservoir Characterization and History Matching
Thomas, Sunil G.; Klie, Hector M.; Rodriguez, Adolfo A.; ...
2011-01-01
The spatial distribution of parameters that characterize the subsurface is never known to any reasonable level of accuracy required to solve the governing PDEs of multiphase flow or species transport through porous media. This paper presents a numerically cheap, yet efficient, accurate and parallel framework to estimate reservoir parameters, for example, medium permeability, using sensor information from measurements of the solution variables such as phase pressures, phase concentrations, fluxes, and seismic and well log data. Numerical results are presented to demonstrate the method.
Stochastic resonance in feedforward acupuncture networks
NASA Astrophysics Data System (ADS)
Qin, Ying-Mei; Wang, Jiang; Men, Cong; Deng, Bin; Wei, Xi-Le; Yu, Hai-Tao; Chan, Wai-Lok
2014-10-01
Effects of noises and some other network properties on the weak signal propagation are studied systematically in feedforward acupuncture networks (FFN) based on FitzHugh-Nagumo neuron model. It is found that noises with medium intensity can enhance signal propagation and this effect can be further increased by the feedforward network structure. Resonant properties in the noisy network can also be altered by several network parameters, such as heterogeneity, synapse features, and feedback connections. These results may also provide a novel potential explanation for the propagation of acupuncture signal.
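A minimal Euler-Maruyama sketch of the basic effect described here: a single noisy FitzHugh-Nagumo unit driven by a weak subthreshold periodic signal fires most reliably at an intermediate noise intensity. Parameters, the spike-detection thresholds, and the single-unit setting are generic assumptions; the feedforward network structure and synapse features of the study are not reproduced.

```python
import numpy as np

def spike_count(sigma, T=2000.0, dt=0.01, seed=0):
    """Count firings of a noisy FitzHugh-Nagumo unit under a weak periodic drive."""
    rng = np.random.default_rng(seed)
    a, b, eps = 0.7, 0.8, 0.08          # classic FHN parameters (assumed)
    A, omega = 0.05, 2 * np.pi / 50.0   # weak subthreshold drive (assumed)
    v, w, spikes, above = -1.2, -0.6, 0, False
    for step in range(int(T / dt)):
        t = step * dt
        dv = v - v**3 / 3 - w + A * np.sin(omega * t)
        dw = eps * (v + a - b * w)
        v += dv * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        w += dw * dt
        if v > 1.0 and not above:       # upward threshold crossing = one spike
            spikes += 1; above = True
        elif v < 0.0:
            above = False
    return spikes

for sigma in (0.01, 0.1, 0.3, 0.8):
    print(f"noise intensity {sigma}: {spike_count(sigma)} spikes")
```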
Thermal interpretation of infrared dynamics in de Sitter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rigopoulos, Gerasimos, E-mail: gerasimos.rigopoulos@ncl.ac.uk
The infrared dynamics of a light, minimally coupled scalar field in de Sitter spacetime with Ricci curvature R = 12H², averaged over horizon sized regions of physical volume V_H = (4π/3)(1/H)³, can be interpreted as Brownian motion in a medium with de Sitter temperature T_DS = ħH/2π. We demonstrate this by directly deriving the effective action of scalar field fluctuations with wavelengths larger than the de Sitter curvature radius and generalizing Starobinsky's seminal results on stochastic inflation. The effective action describes stochastic dynamics, and the fluctuating force drives the field to an equilibrium characterized by a thermal Gibbs distribution at temperature T_DS, which corresponds to a de Sitter invariant state. Hence, approach towards this state can be interpreted as thermalization. We show that the stochastic kinetic energy of the coarse-grained description corresponds to the norm of ∂_μφ and takes a well defined value per horizon volume, ½⟨(∇φ)²⟩ = −½ T_DS/V_H. This approach allows for the non-perturbative computation of the de Sitter invariant stress energy tensor ⟨T_μν⟩ for an arbitrary scalar potential.
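For orientation, the standard Starobinsky stochastic-inflation relations that this thermal reading generalizes can be written compactly. These are textbook results, stated with ħ = 1 except where ħ is shown explicitly; note that the Gibbs exponent 8π²V/3H⁴ is exactly V_H V(φ)/T_DS for the quantities quoted in the abstract.

```latex
% Overdamped Langevin equation for the coarse-grained field (Starobinsky):
\dot{\phi} = -\frac{V'(\phi)}{3H} + \xi(t), \qquad
\langle \xi(t)\,\xi(t')\rangle = \frac{H^{3}}{4\pi^{2}}\,\delta(t-t').

% Its equilibrium distribution is a Gibbs weight at the de Sitter temperature:
P_{\mathrm{eq}}(\phi) \;\propto\; \exp\!\left[-\frac{8\pi^{2}V(\phi)}{3H^{4}}\right]
 \;=\; \exp\!\left[-\frac{V_{H}\,V(\phi)}{T_{\mathrm{DS}}}\right],
\qquad T_{\mathrm{DS}} = \frac{\hbar H}{2\pi}, \quad V_{H}=\frac{4\pi}{3H^{3}}.
```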
AGN jet-driven stochastic cold accretion in cluster cores
NASA Astrophysics Data System (ADS)
Prasad, Deovrat; Sharma, Prateek; Babul, Arif
2017-10-01
Several arguments suggest that stochastic condensation of cold gas and its accretion on to the central supermassive black hole (SMBH) is essential for active galactic nuclei (AGNs) feedback to work in the most massive galaxies that lie at the centres of galaxy clusters. Our 3-D hydrodynamic AGN jet-ICM (intracluster medium) simulations, looking at the detailed angular momentum distribution of cold gas and its time variability for the first time, show that the angular momentum of the cold gas crossing ≲1 kpc is essentially isotropic. With almost equal mass in clockwise and counterclockwise orientations, we expect a cancellation of the angular momentum on roughly the dynamical time. This means that a compact accretion flow with a short viscous time ought to form, through which enough accretion power can be channeled into jet mechanical energy sufficiently quickly to prevent a cooling flow. The inherent stochasticity, expected in feedback cycles driven by cold gas condensation, gives rise to a large variation in the cold gas mass at the centres of galaxy clusters, for similar cluster and SMBH masses, in agreement with the observations. Such correlations are expected to be much tighter for the smoother hot/Bondi accretion. The weak correlation between cavity power and Bondi power obtained from our simulations also matches observations.
Improved Modeling of Finite-Rate Turbulent Combustion Processes in Research Combustors
NASA Technical Reports Server (NTRS)
VanOverbeke, Thomas J.
1998-01-01
The objective of this thesis is to further develop and test a stochastic model of turbulent combustion in recirculating flows. There is a requirement to increase the accuracy of multi-dimensional combustion predictions. As turbulence affects reaction rates, this interaction must be more accurately evaluated. In this work a more physically correct way of handling the effect of turbulence on combustion is further developed and tested. As turbulence involves randomness, stochastic modeling is used. Averaged values such as temperature and species concentration are found by integrating the probability density function (pdf) over the range of the scalar. The model in this work does not assume the pdf type, but solves for the evolution of the pdf using the Monte Carlo solution technique. The model is further developed by including a more robust reaction solver, by using accurate thermodynamics, and by more accurate transport elements. The stochastic method is used with the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE). The SIMPLE method is used to solve for velocity, pressure, turbulent kinetic energy and dissipation. The pdf solver solves for temperature and species concentration. Thus, the method is partially familiar to combustor engineers. The method is compared to benchmark experimental data and baseline calculations. The baseline method was tested on isothermal flows, evaporating sprays and combusting sprays. Pdf and baseline predictions were performed for three diffusion flames and one premixed flame. The pdf method predicted lower combustion rates than the baseline method, in agreement with the data, except for the premixed flame. The baseline and stochastic predictions bounded the experimental data for the premixed flame. The use of a continuous mixing model or a relax-to-mean mixing model had little effect on the prediction of average temperature. Two grids were used in a hydrogen diffusion flame simulation. Grid density did not affect the predictions except for peak temperature and tangential velocity. The hybrid pdf method did take longer and required more memory, but has a theoretical basis for extension to many reaction steps, which cannot be said of current turbulent combustion models.
Retention performance of green roofs in representative climates worldwide
NASA Astrophysics Data System (ADS)
Viola, F.; Hellies, M.; Deidda, R.
2017-10-01
The ongoing process of global urbanization contributes to an increase in stormwater runoff from impervious surfaces, threatening also water quality. Green roofs have been proved to be innovative stormwater management measures to partially restore natural states, enhancing interception, infiltration and evapotranspiration fluxes. The amount of water that is retained within green roofs depends not only on their depth, but also on the climate, which drives the stochastic soil moisture dynamic. In this context, a simple tool for assessing performance of green roofs worldwide in terms of retained water is still missing and highly desirable for practical assessments. The aim of this work is to explore retention performance of green roofs as a function of their depth and in different climate regimes. Two soil depths are investigated, one representing the intensive configuration and another representing the extensive one. The role of the climate in driving water retention has been represented by rainfall and potential evapotranspiration dynamics. A simple conceptual weather generator has been implemented and used for stochastic simulation of daily rainfall and potential evapotranspiration. Stochastic forcing is used as an input of a simple conceptual hydrological model for estimating long-term water partitioning between rainfall, runoff and actual evapotranspiration. Coupling the stochastic weather generator with the conceptual hydrological model, we assessed the amount of rainfall diverted into evapotranspiration for different combinations of annual rainfall and potential evapotranspiration in five representative climatic regimes. Results quantified the capabilities of green roofs in retaining rainfall and consequently in reducing discharges into sewer systems at an annual time scale. The role of substrate depth has been recognized to be crucial in determining green roofs retention performance, which in general increase from extensive to intensive settings. Looking at the role of climatic conditions, namely annual rainfall, potential evapotranspiration and their seasonality cycles, we found that they drive green roofs retention performance, which are greatest when rainfall and temperature are in phase. Finally, we provide design charts for a first approximation of possible hydrological benefits deriving from the implementation of intensive or extensive green roofs in different world areas. As an example, 25 big cities have been indicated as benchmark case studies.
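A minimal sketch of the coupling described here: a daily stochastic rainfall/PET generator feeding a single-bucket retention model, contrasted for a thin (extensive) and a thick (intensive) substrate. Wet-day probability, mean rainfall depth, the PET cycle, and the two storage capacities are illustrative assumptions, not the paper's calibrated values or climates.

```python
import numpy as np

rng = np.random.default_rng(42)
days = 365 * 20
t = np.arange(days)

# Stochastic weather generator (assumed parameters): Bernoulli wet days with
# exponential depths, and a sinusoidal seasonal cycle of potential evapotranspiration.
p_wet, mean_depth = 0.3, 8.0                                  # mm/day
rain = (rng.random(days) < p_wet) * rng.exponential(mean_depth, days)
pet = 2.5 + 2.0 * np.sin(2 * np.pi * t / 365.0)               # mm/day seasonal cycle (assumed)

def retention(rain, pet, s_max):
    """Single-bucket water balance: storage fills with rain, loses ET, spills as runoff."""
    s, et_tot = 0.0, 0.0
    for r, e in zip(rain, pet):
        s += r
        et = min(s, e * s / s_max)       # moisture-limited evapotranspiration
        s -= et; et_tot += et
        if s > s_max:                    # excess leaves as runoff
            s = s_max
    return et_tot / rain.sum()           # fraction of rainfall retained (evapotranspired)

for s_max, label in ((50.0, "extensive (thin)"), (150.0, "intensive (thick)")):
    print(f"{label:18s} retention fraction: {retention(rain, pet, s_max):.2f}")
```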
About the discrete-continuous nature of a hematopoiesis model for Chronic Myeloid Leukemia.
Gaudiano, Marcos E; Lenaerts, Tom; Pacheco, Jorge M
2016-12-01
Blood of mammals is composed of a variety of cells suspended in a fluid medium known as plasma. Hematopoiesis is the biological process of birth, replication and differentiation of blood cells. Despite being essentially a stochastic phenomenon involving a huge number of discrete entities, blood formation naturally has an associated continuous dynamics, because the cellular populations can, on average, easily be described by (e.g.) differential equations. This deterministic dynamics by no means contemplates some important stochastic aspects related to abnormal hematopoiesis that are especially significant for studying certain blood cancer diseases. For instance, by mere stochastic competition against the normal cells, leukemic cells sometimes do not reach the population threshold needed to kill the organism. Of course, a purely discrete model able to follow the stochastic paths of billions of cells is computationally impossible. In order to avoid this difficulty, we seek a trade-off between the computationally feasible and the biologically realistic, deriving an equation able to size conveniently both the discrete and continuous parts of a model for hematopoiesis in terrestrial mammals, in the context of Chronic Myeloid Leukemia. Assuming the cancer originates from a single stem cell inside the bone marrow, we also deduce a theoretical formula for the probability of non-diagnosis as a function of the mammal's average adult mass. In addition, the cellular dynamics analysis in this work may shed light on understanding Peto's paradox, which is shown here to be an emergent property of the discrete-continuous nature of the system. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Mulavara, Ajitkumar; Fiedler, Matthew; Kofman, Igor; Peters, Brian; Wood, Scott; Serrador, Jorge; Cohen, Helen; Reschke, Millard; Bloomberg, Jacob
2010-01-01
Stochastic resonance (SR) is a mechanism by which noise can assist and enhance the response of neural systems to relevant sensory signals. Application of imperceptible SR noise coupled with sensory input through the proprioceptive, visual, or vestibular sensory systems has been shown to improve motor function. Specifically, studies have shown that vestibular electrical stimulation by imperceptible stochastic noise, when applied to normal young and elderly subjects, significantly improved their ocular stabilization reflexes in response to whole-body tilt as well as balance performance during postural disturbances. The goal of this study was to optimize the characteristics of the stochastic vestibular signals for balance performance during standing on an unstable surface. Subjects performed a standardized balance task of standing on a block of 10 cm thick medium density foam with their eyes closed for a total of 40 seconds. Stochastic electrical stimulation was applied to the vestibular system through electrodes placed over the mastoid process behind the ears during the last 20 seconds of the test period. A custom built constant current stimulator with subject isolation delivered the stimulus. Stimulation signals were generated with frequencies in the bandwidths of 1-2 Hz and 0.01-30 Hz. Amplitudes of the signals were varied in the range of 0 to ±700 microamperes, with the RMS of the signal increased by 30 microamperes for each 100 microampere increase in the current range. Balance performance was measured using a force plate under the foam block and inertial motion sensors placed on the torso and head segments. Preliminary results indicate that balance performance improved in the range of 10-25% compared to no-stimulation conditions. Subjects improved their performance consistently across the blocks of stimulation. Further, the signal amplitude at which performance was maximized differed between the two frequency ranges. Optimizing the frequency and amplitude characteristics of the stochastic noise signals to maximize balance performance will have a significant impact on its development as a unique system to aid recovery of function in astronauts after long duration space flight or for people with balance disorders.
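A minimal sketch of how band-limited stochastic stimulation waveforms like those described could be generated and scaled. The sampling rate is an assumption, and the RMS-per-range scaling rule is a simplified reading of the abstract, so treat the numbers as illustrative rather than the study's exact stimulator settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def stochastic_stim(band, amplitude_uA, duration_s=20.0, fs=1000.0, seed=0):
    """White Gaussian noise band-pass filtered to `band` (Hz), scaled to an assumed RMS
    of 30 uA per 100 uA of current range, then clipped to +/- amplitude_uA."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(int(duration_s * fs))
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    x = sosfiltfilt(sos, noise)
    target_rms = 30.0 * (amplitude_uA / 100.0)      # assumed scaling rule
    x *= target_rms / np.sqrt(np.mean(x**2))
    return np.clip(x, -amplitude_uA, amplitude_uA)

for band in ((1.0, 2.0), (0.01, 30.0)):
    sig = stochastic_stim(band, amplitude_uA=700.0)
    print("band", band, "Hz, RMS [uA]:", round(float(np.sqrt(np.mean(sig**2))), 1))
```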
Validation of a Low-Thrust Mission Design Tool Using Operational Navigation Software
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Knittel, Jeremy M.; Williams, Ken; Stanbridge, Dale; Ellison, Donald H.
2017-01-01
Design of flight trajectories for missions employing solar electric propulsion requires a suitably high-fidelity design tool. In this work, the Evolutionary Mission Trajectory Generator (EMTG) is presented as a medium-high fidelity design tool that is suitable for mission proposals. EMTG is validated against the high-heritage deep-space navigation tool MIRAGE, demonstrating both the accuracy of EMTG's model and an operational mission design and navigation procedure using both tools. The validation is performed using a benchmark mission to the Jupiter Trojans.
Gabran, S R I; Saad, J H; Salama, M M A; Mansour, R R
2009-01-01
This paper demonstrates the electromagnetic modeling and simulation of an implanted Medtronic deep brain stimulation (DBS) electrode using finite difference time domain (FDTD). The model is developed using Empire XCcel and represents the electrode surrounded with brain tissue assuming homogenous and isotropic medium. The model is created to study the parameters influencing the electric field distribution within the tissue in order to provide reference and benchmarking data for DBS and intra-cortical electrode development.
Benchmarking variable-density flow in saturated and unsaturated porous media
NASA Astrophysics Data System (ADS)
Guevara Morel, Carlos Roberto; Cremer, Clemens; Graf, Thomas
2015-04-01
In natural environments, fluid density and viscosity can be affected by spatial and temporal variations of solute concentration and/or temperature. These variations can occur, for example, due to salt water intrusion in coastal aquifers, leachate infiltration from waste disposal sites and upconing of saline water from deep aquifers. As a consequence, potentially unstable situations may exist in which a dense fluid overlies a less dense fluid. This situation can produce instabilities that manifest as dense plume fingers that move vertically downwards, counterbalanced by vertical upwards flow of the less dense fluid. The resulting free convection increases solute transport rates over large distances and times relative to constant-density flow. Therefore, the understanding of free convection is relevant for the protection of freshwater aquifer systems. The results from a laboratory experiment of saturated and unsaturated variable-density flow and solute transport (Simmons et al., Transport in Porous Media, 2002) are used as the physical basis to define a mathematical benchmark. The HydroGeoSphere code coupled with PEST is used to estimate the optimal parameter set capable of reproducing the physical model. A grid convergence analysis (in space and time) is also undertaken in order to obtain adequate spatial and temporal discretizations. The new mathematical benchmark is useful for model comparison and testing of variable-density variably saturated flow in porous media.
NASA Astrophysics Data System (ADS)
Arias, E.; Florez, E.; Pérez-Torres, J. F.
2017-06-01
A new algorithm for the determination of equilibrium structures suitable for metal nanoclusters is proposed. The algorithm performs a stochastic search of the minima associated with the nuclear potential energy function restricted to a sphere (similar to the Thomson problem), in order to guess configurations of the nuclear positions. Subsequently, the guessed configurations are further optimized driven by the total energy function using the conventional gradient descent method. This methodology is equivalent to using the valence shell electron pair repulsion model in guessing initial configurations in the traditional molecular quantum chemistry. The framework is illustrated in several clusters of increasing complexity: Cu7, Cu9, and Cu11 as benchmark systems, and Cu38 and Ni9 as novel systems. New equilibrium structures for Cu9, Cu11, Cu38, and Ni9 are reported.
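A minimal sketch of the guessing stage described here: random starting configurations on the unit sphere relaxed by projected gradient descent of a Coulomb-like repulsion (the Thomson problem), which would then seed a full total-energy optimization. The pairwise 1/r repulsion stands in for whatever restricted nuclear potential the authors actually minimize, so the energy and step size are illustrative assumptions.

```python
import numpy as np

def thomson_guess(n_atoms, steps=2000, lr=0.01, seed=0):
    """Random start plus projected gradient descent of sum_{i<j} 1/r_ij on the unit sphere."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_atoms, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    for _ in range(steps):
        diff = x[:, None, :] - x[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)
        grad = -(diff / dist[..., None] ** 3).sum(axis=1)   # gradient of the repulsion energy
        x -= lr * grad
        x /= np.linalg.norm(x, axis=1, keepdims=True)       # project back onto the sphere
        energy = (1.0 / dist[np.triu_indices(n_atoms, 1)]).sum()
    return x, energy

guess, e = thomson_guess(9)     # e.g. a starting configuration for a 9-atom cluster
print("Thomson-like repulsion energy of the guess:", round(float(e), 4))
```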
Hyperspectral Image Classification With Markov Random Fields and a Convolutional Neural Network
NASA Astrophysics Data System (ADS)
Cao, Xiangyong; Zhou, Feng; Xu, Lin; Meng, Deyu; Xu, Zongben; Paisley, John
2018-05-01
This paper presents a new supervised classification algorithm for remotely sensed hyperspectral images (HSI) which integrates spectral and spatial information in a unified Bayesian framework. First, we formulate the HSI classification problem from a Bayesian perspective. Then, we adopt a convolutional neural network (CNN) to learn the posterior class distributions using a patch-wise training strategy to better use the spatial information. Next, spatial information is further considered by placing a spatial smoothness prior on the labels. Finally, we iteratively update the CNN parameters using stochastic gradient descent (SGD) and update the class labels of all pixel vectors using an alpha-expansion min-cut-based algorithm. Compared with other state-of-the-art methods, the proposed classification method achieves better performance on one synthetic dataset and two benchmark HSI datasets in a number of experimental settings.
Undoing measurement-induced dephasing in circuit QED
NASA Astrophysics Data System (ADS)
Frisk Kockum, A.; Tornberg, L.; Johansson, G.
2012-05-01
We analyze the backaction of homodyne detection and photodetection on superconducting qubits in circuit quantum electrodynamics. Although both measurement schemes give rise to backaction in the form of stochastic phase rotations, which leads to dephasing, we show that this can be perfectly undone provided that the measurement signal is fully accounted for. This result improves on an earlier one [Phys. Rev. A 82, 012329 (2010)], showing that the method suggested can be made to realize a perfect two-qubit parity measurement. We propose a benchmarking experiment on a single qubit to demonstrate the method using homodyne detection. By analyzing the limited measurement efficiency of the detector and bandwidth of the amplifier, we show that the parameter values necessary to see the effect are within the limits of existing technology.
Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thanh, Vo Hong, E-mail: vo@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; Department of Mathematics, University of Trento, Trento
We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The computation for selecting next reaction firings by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is very computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving the exactness.
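To illustrate the core rejection idea for a time-dependent rate without integrating it, the sketch below samples firing times of a single reaction by thinning: candidates are proposed from a constant upper bound on the propensity and accepted with probability a(t)/a_max. This is only the basic rejection step for one reaction with an assumed sinusoidal rate, not the full tRSSA machinery with propensity bounds over species populations.

```python
import numpy as np

def next_firing(t0, rate, rate_max, rng):
    """Sample the next firing time of a reaction with time-dependent propensity rate(t),
    by rejection against a constant bound rate_max >= rate(t)."""
    t = t0
    while True:
        t += rng.exponential(1.0 / rate_max)      # candidate from the bounding process
        if rng.random() < rate(t) / rate_max:     # accept without integrating rate(t)
            return t

# Toy periodically modulated propensity (assumed form), bounded above by 3.5.
rate = lambda t: 2.0 + 1.5 * np.sin(2 * np.pi * t / 10.0)
rng = np.random.default_rng(0)

t, count = 0.0, 0
while True:
    t = next_firing(t, rate, 3.5, rng)
    if t > 100.0:
        break
    count += 1
print("firings in [0, 100]:", count, "(about 200 expected for this rate)")
```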
Composite Particle Swarm Optimizer With Historical Memory for Function Optimization.
Li, Jie; Zhang, JunQi; Jiang, ChangJun; Zhou, MengChu
2015-10-01
Particle swarm optimization (PSO) algorithm is a population-based stochastic optimization technique. It is characterized by the collaborative search in which each particle is attracted toward the global best position (gbest) in the swarm and its own best position (pbest). However, all of the particles' historically promising pbests in PSO are lost except their current pbests. In order to solve this problem, this paper proposes a novel composite PSO algorithm, called historical memory-based PSO (HMPSO), which uses an estimation of distribution algorithm to estimate and preserve the distribution information of particles' historical promising pbests. Each particle has three candidate positions, which are generated from the historical memory, the particle's current pbest, and the swarm's gbest. Then the best candidate position is adopted. Experiments on 28 CEC2013 benchmark functions demonstrate the superiority of HMPSO over other algorithms.
Li, Desheng
2014-01-01
This paper proposes a novel variant of cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm with two mechanisms to reduce the search space and avoid the stagnation, called CQPSO-DVSA-LFD. One mechanism is called Dynamic Varying Search Area (DVSA), which takes charge of limiting the ranges of particles' activity into a reduced area. On the other hand, in order to escape the local optima, Lévy flights are used to generate the stochastic disturbance in the movement of particles. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted to compare the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than other variants of PSO on both benchmark test functions and the combinatorial optimization issue, that is, the job-shop scheduling problem.
Arias, E; Florez, E; Pérez-Torres, J F
2017-06-28
A new algorithm for the determination of equilibrium structures suitable for metal nanoclusters is proposed. The algorithm performs a stochastic search of the minima associated with the nuclear potential energy function restricted to a sphere (similar to the Thomson problem), in order to guess configurations of the nuclear positions. Subsequently, the guessed configurations are further optimized driven by the total energy function using the conventional gradient descent method. This methodology is equivalent to using the valence shell electron pair repulsion model in guessing initial configurations in the traditional molecular quantum chemistry. The framework is illustrated in several clusters of increasing complexity: Cu7, Cu9, and Cu11 as benchmark systems, and Cu38 and Ni9 as novel systems. New equilibrium structures for Cu9, Cu11, Cu38, and Ni9 are reported.
Mechanism of spiral formation in heterogeneous discretized excitable media.
Kinoshita, Shu-ichi; Iwamoto, Mayuko; Tateishi, Keita; Suematsu, Nobuhiko J; Ueyama, Daishin
2013-06-01
Spiral waves on excitable media strongly influence the functions of living systems in both a positive and negative way. The spiral formation mechanism has thus been one of the major themes in the field of reaction-diffusion systems. Although the widely believed origin of spiral waves is the interaction of traveling waves, the heterogeneity of an excitable medium has recently been suggested as a probable cause. We suggest one possible origin of spiral waves using a Belousov-Zhabotinsky reaction and a discretized FitzHugh-Nagumo model. The heterogeneity of the reaction field is shown to stochastically generate unidirectional sites, which can induce spiral waves. Furthermore, we found that the spiral wave vanished with only a small reduction in the excitability of the reaction field. These results reveal a gentle approach for controlling the appearance of a spiral wave on an excitable medium.
Discrete-event system simulation on small and medium enterprises productivity improvement
NASA Astrophysics Data System (ADS)
Sulistio, J.; Hidayah, N. A.
2017-12-01
Small and medium industries in Indonesia are currently developing. The problem faced by SMEs is the difficulty of meeting the growing demand coming into the company. Therefore, SMEs need an analysis and evaluation of their production process in order to meet all orders. The purpose of this research is to increase the productivity of the SME production floor by applying discrete-event system simulation. This method is preferred because it can solve complex problems arising from the dynamic and stochastic nature of the system. To increase the credibility of the simulation, the model was validated by comparing the averages of two trials, the variances of two trials, and a chi-square test. Afterwards, the Bonferroni method was applied to develop several alternatives. The article concludes that the productivity of the SME production floor increased by up to 50% by adding capacity to the dyeing and drying machines.
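A minimal sketch of the kind of two-stage stochastic production simulation such a study relies on: orders arrive randomly, are dyed on the first available dyeing machine, then dried on the first available drying machine, and throughput is compared between a baseline and an added-capacity alternative. Machine counts, arrival rates, and service-time distributions are illustrative assumptions, not the SME's data.

```python
import numpy as np

def simulate(n_orders, dye_machines, dry_machines, seed=0):
    """FIFO two-stage flow (dyeing then drying); returns throughput in orders per day."""
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(0.5, n_orders))   # hours between orders (assumed)
    dye_time = rng.uniform(1.0, 2.0, n_orders)              # dyeing hours (assumed)
    dry_time = rng.uniform(2.0, 3.0, n_orders)              # drying hours (assumed)
    dye_free = np.zeros(dye_machines)
    dry_free = np.zeros(dry_machines)
    finish = np.zeros(n_orders)
    for i in range(n_orders):
        m = int(np.argmin(dye_free))                        # earliest available dyeing machine
        start_dye = max(arrivals[i], dye_free[m])
        dye_free[m] = start_dye + dye_time[i]
        d = int(np.argmin(dry_free))                        # then earliest drying machine
        start_dry = max(dye_free[m], dry_free[d])
        dry_free[d] = finish[i] = start_dry + dry_time[i]
    hours = finish[-1] - arrivals[0]
    return n_orders / (hours / 24.0)

base = simulate(2000, dye_machines=2, dry_machines=2)
plus = simulate(2000, dye_machines=3, dry_machines=3)       # added-capacity alternative
print(f"baseline: {base:.1f} orders/day, with extra machines: {plus:.1f} orders/day")
```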
An optical channel modeling of a single mode fiber
NASA Astrophysics Data System (ADS)
Nabavi, Neda; Liu, Peng; Hall, Trevor James
2018-05-01
The evaluation of the optical channel model that accurately describes the single-mode fibre as a coherent transmission medium is reviewed through analytical, numerical and experimental analysis. We used numerical modelling of the optical transmission medium and experimental measurements to determine the polarization drift as a function of time for a fixed length of fibre. The probability distribution of the birefringence vector, which is associated with the 'Poole' equation, was derived. The theory and experimental evidence disclosed in the literature in the context of polarization mode dispersion (Stokes and Jones formulations and solutions for key statistics by integration of stochastic differential equations) has been investigated. Besides the in-depth definition of the single-mode fibre-optic channel, the modelling of an ensemble of fibres, each with a different instance of environmental perturbation, has been analysed.
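A minimal sketch of the kind of stochastic model discussed: the output Stokes vector rotating on the Poincaré sphere as the local birefringence vector performs a random walk along the fibre, integrated with a simple Euler scheme. The birefringence magnitude, its random-walk strength, the step size, and the launched state are assumptions, not fitted fibre parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
n_steps, dz = 20000, 0.01          # 200 m of fibre in 1 cm steps (assumed)
beta = np.array([1.0, 0.0, 0.0])   # local birefringence vector in Stokes space, rad/m (assumed)
S = np.array([0.0, 0.0, 1.0])      # launched state of polarization (circular)
sigma = 0.05                       # random-walk strength of the birefringence (assumed)

for _ in range(n_steps):
    beta = beta + sigma * np.sqrt(dz) * rng.standard_normal(3)   # stochastic birefringence drift
    S = S + np.cross(beta, S) * dz                               # dS/dz = beta x S
    S /= np.linalg.norm(S)                                       # keep S on the Poincare sphere

print("output Stokes vector:", np.round(S, 3))
```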
Stochastic description of geometric phase for polarized waves in random media
NASA Astrophysics Data System (ADS)
Boulanger, Jérémie; Le Bihan, Nicolas; Rossetto, Vincent
2013-01-01
We present a stochastic description of multiple scattering of polarized waves in the regime of forward scattering. In this regime, if the source is polarized, polarization survives along a few transport mean free paths, making it possible to measure an outgoing polarization distribution. We consider thin scattering media illuminated by a polarized source and compute the probability distribution function of the polarization on the exit surface. We solve the direct problem using compound Poisson processes on the rotation group SO(3) and non-commutative harmonic analysis. We obtain an exact expression for the polarization distribution which generalizes previous works and design an algorithm solving the inverse problem of estimating the scattering properties of the medium from the measured polarization distribution. This technique applies to thin disordered layers, spatially fluctuating media and multiple scattering systems and is based on the polarization but not on the signal amplitude. We suggest that it can be used as a non-invasive testing method.
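For illustration, a hedged forward-simulation sketch of a compound Poisson process on SO(3): each path undergoes a Poisson number of small random rotations. The small-angle Gaussian rotation kernel and the parameter values are assumptions for the sketch, not the single-scattering model derived in the paper:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def sample_exit_directions(n_paths=10000, mean_events=3.0, kappa=20.0, seed=0):
    """Forward-simulate a compound Poisson process on SO(3): each path suffers a Poisson
    number of scattering events, each modelled here as a small random rotation whose
    angular width is set by the concentration parameter kappa (an assumed kernel)."""
    rng = np.random.default_rng(seed)
    dirs = np.zeros((n_paths, 3))
    for i in range(n_paths):
        v = np.array([0.0, 0.0, 1.0])                  # incident direction
        for _ in range(rng.poisson(mean_events)):
            axis = rng.standard_normal(3)
            axis /= np.linalg.norm(axis)
            angle = abs(rng.normal(scale=1.0 / np.sqrt(kappa)))
            v = R.from_rotvec(angle * axis).apply(v)   # compose one small rotation
        dirs[i] = v
    return dirs                                        # empirical exit distribution
```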
Hybrid regulatory models: a statistically tractable approach to model regulatory network dynamics.
Ocone, Andrea; Millar, Andrew J; Sanguinetti, Guido
2013-04-01
Computational modelling of the dynamics of gene regulatory networks is a central task of systems biology. For networks of small/medium scale, the dominant paradigm is represented by systems of coupled non-linear ordinary differential equations (ODEs). ODEs afford great mechanistic detail and flexibility, but calibrating these models to data is often an extremely difficult statistical problem. Here, we develop a general statistical inference framework for stochastic transcription-translation networks. We use a coarse-grained approach, which represents the system as a network of stochastic (binary) promoter and (continuous) protein variables. We derive an exact inference algorithm and an efficient variational approximation that allows scalable inference and learning of the model parameters. We demonstrate the power of the approach on two biological case studies, showing that the method allows a high degree of flexibility and is capable of testable novel biological predictions. http://homepages.inf.ed.ac.uk/gsanguin/software.html. Supplementary data are available at Bioinformatics online.
Simulating immiscible multi-phase flow and wetting with 3D stochastic rotation dynamics (SRD)
NASA Astrophysics Data System (ADS)
Hiller, Thomas; Sanchez de La Lama, Marta; Herminghaus, Stephan; Brinkmann, Martin
2013-11-01
We use a variant of the mesoscopic particle method stochastic rotation dynamics (SRD) to simulate immiscible multi-phase flow on the pore and sub-pore scale in three dimensions. As an extension to the multi-color SRD method, first proposed by Inoue et al., we present an implementation that accounts for complex wettability on heterogeneous surfaces. In order to demonstrate the versatility of this algorithm, we consider immiscible two-phase flow through a model porous medium (disordered packing of spherical beads) where the substrate exhibits different spatial wetting patterns. We show that these patterns have a significant effect on the interface dynamics. Furthermore, the implementation of angular momentum conservation into the SRD algorithm allows us to extend the applicability of SRD to micro-fluidic systems. It is now possible to study, e.g., the internal flow behaviour of a droplet depending on the driving velocity of the surrounding bulk fluid, or the splitting of droplets by an obstacle.
Monte Carlo chord length sampling for d-dimensional Markov binary mixtures
NASA Astrophysics Data System (ADS)
Larmier, Coline; Lam, Adam; Brantley, Patrick; Malvagi, Fausto; Palmer, Todd; Zoia, Andrea
2018-01-01
The Chord Length Sampling (CLS) algorithm is a powerful Monte Carlo method that models the effects of stochastic media on particle transport by generating on-the-fly the material interfaces seen by the random walkers during their trajectories. This annealed disorder approach, which formally consists of solving the approximate Levermore-Pomraning equations for linear particle transport, enables a considerable speed-up with respect to transport in quenched disorder, where ensemble-averaging of the Boltzmann equation with respect to all possible realizations is needed. However, CLS intrinsically neglects the correlations induced by the spatial disorder, so that the accuracy of the solutions obtained by using this algorithm must be carefully verified with respect to reference solutions based on quenched disorder realizations. When the disorder is described by Markov mixing statistics, such comparisons have been attempted so far only for one-dimensional geometries, of the rod or slab type. In this work we extend these results to Markov media in two-dimensional (extruded) and three-dimensional geometries, by revisiting the classical set of benchmark configurations originally proposed by Adams, Larsen and Pomraning [1] and extended by Brantley [2]. In particular, we examine the discrepancies between CLS and reference solutions for scalar particle flux and transmission/reflection coefficients as a function of the material properties of the benchmark specifications and of the system dimensionality.
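The on-the-fly interface generation at the heart of CLS can be sketched in a few lines for rod geometry; the cross sections, scattering probabilities and mean chord lengths below are illustrative placeholders, not the benchmark specifications:

```python
import numpy as np

def cls_rod_transmission(L=10.0, sigma_t=(1.0, 0.1), c=(0.5, 0.5),
                         mean_chord=(0.5, 1.5), n_hist=100_000, seed=0):
    """Rod-geometry Chord Length Sampling sketch for a binary Markov mixture: the distance
    to the next material interface is drawn on the fly from an exponential with the mean
    chord length of the current material, so no explicit medium realization is built."""
    rng = np.random.default_rng(seed)
    p0 = mean_chord[0] / (mean_chord[0] + mean_chord[1])   # volume fraction of material 0
    transmitted = 0
    for _ in range(n_hist):
        x, mu = 0.0, 1.0                                   # enter at the left face, moving right
        mat = 0 if rng.random() < p0 else 1
        chord = rng.exponential(mean_chord[mat])           # distance to the next interface
        while True:
            d_coll = rng.exponential(1.0 / sigma_t[mat])   # distance to the next collision
            step = min(d_coll, chord)
            x += mu * step
            if x >= L:
                transmitted += 1
                break
            if x <= 0.0:
                break                                      # leaked back out of the left face
            if chord <= d_coll:                            # reached an interface: switch material
                mat = 1 - mat
                chord = rng.exponential(mean_chord[mat])
            elif rng.random() < c[mat]:                    # collision: scatter (+/-1 in rod geometry)
                mu = 1.0 if rng.random() < 0.5 else -1.0
                chord = rng.exponential(mean_chord[mat])   # memoryless chord restart after scattering
            else:
                break                                      # collision: absorbed
    return transmitted / n_hist
```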
Monte Carlo chord length sampling for d-dimensional Markov binary mixtures
Larmier, Coline; Lam, Adam; Brantley, Patrick; ...
2017-09-27
The Chord Length Sampling (CLS) algorithm is a powerful Monte Carlo method that models the effects of stochastic media on particle transport by generating on-the-fly the material interfaces seen by the random walkers during their trajectories. This annealed disorder approach, which formally consists of solving the approximate Levermore–Pomraning equations for linear particle transport, enables a considerable speed-up with respect to transport in quenched disorder, where ensemble-averaging of the Boltzmann equation with respect to all possible realizations is needed. However, CLS intrinsically neglects the correlations induced by the spatial disorder, so that the accuracy of the solutions obtained by using this algorithm must be carefully verified with respect to reference solutions based on quenched disorder realizations. When the disorder is described by Markov mixing statistics, such comparisons have been attempted so far only for one-dimensional geometries, of the rod or slab type. In this work we extend these results to Markov media in two-dimensional (extruded) and three-dimensional geometries, by revisiting the classical set of benchmark configurations originally proposed by Adams, Larsen and Pomraning and extended by Brantley. In particular, we examine the discrepancies between CLS and reference solutions for scalar particle flux and transmission/reflection coefficients as a function of the material properties of the benchmark specifications and of the system dimensionality.
Reservoir optimisation using El Niño information. Case study of Daule Peripa (Ecuador)
NASA Astrophysics Data System (ADS)
Gelati, Emiliano; Madsen, Henrik; Rosbjerg, Dan
2010-05-01
The optimisation of water resources systems requires the ability to produce runoff scenarios that are consistent with available climatic information. We approach stochastic runoff modelling with a Markov-modulated autoregressive model with exogenous input, which belongs to the class of Markov-switching models. The model assumes runoff parameterisation to be conditioned on a hidden climatic state following a Markov chain, whose state transition probabilities depend on climatic information. This approach allows stochastic modeling of non-stationary runoff, as runoff anomalies are described by a mixture of autoregressive models with exogenous input, each one corresponding to a climate state. We calibrate the model on the inflows of the Daule Peripa reservoir located in western Ecuador, where the occurrence of El Niño leads to anomalously heavy rainfall caused by positive sea surface temperature anomalies along the coast. El Niño - Southern Oscillation (ENSO) information is used to condition the runoff parameterisation. Inflow predictions are realistic, especially at the occurrence of El Niño events. The Daule Peripa reservoir serves a hydropower plant and a downstream water supply facility. Using historical ENSO records, synthetic monthly inflow scenarios are generated for the period 1950-2007. These scenarios are used as input to perform stochastic optimisation of the reservoir rule curves with a multi-objective Genetic Algorithm (MOGA). The optimised rule curves are assumed to be the reservoir base policy. ENSO standard indices are currently forecasted at monthly time scale with nine-month lead time. These forecasts are used to perform stochastic optimisation of reservoir releases at each monthly time step according to the following procedure: (i) nine-month inflow forecast scenarios are generated using ENSO forecasts; (ii) a MOGA is set up to optimise the upcoming nine monthly releases; (iii) the optimisation is carried out by simulating the releases on the inflow forecasts, and by applying the base policy on a subsequent synthetic inflow scenario in order to account for long-term costs; (iv) the optimised release for the first month is implemented; (v) the state of the system is updated and (i), (ii), (iii), and (iv) are iterated for the following time step. The results highlight the advantages of using a climate-driven stochastic model to produce inflow scenarios and forecasts for reservoir optimisation, showing potential improvements with respect to the current management. Dynamic programming was used to find the best possible release time series given the inflow observations, in order to benchmark any possible operational improvement.
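A schematic of the receding-horizon loop (steps i-v) is sketched below; `inflow_forecasts`, `base_policy` and `simulate_cost` are hypothetical stand-ins for the ENSO-conditioned scenario generator, the rule-curve policy and the reservoir simulator, and plain random search replaces the multi-objective genetic algorithm of the study:

```python
import numpy as np

def rolling_horizon(inflow_forecasts, base_policy, simulate_cost, horizon=9, seed=0):
    """Skeleton of the receding-horizon release optimisation: at each month an ensemble of
    horizon-long inflow scenarios is drawn, a release plan is searched over that horizon,
    and only the first month's release is implemented before the loop advances."""
    rng = np.random.default_rng(seed)
    implemented = []
    for scenarios in inflow_forecasts:                      # step (i): forecast ensemble for this month
        best_plan, best_cost = None, np.inf
        for _ in range(500):                                # steps (ii)-(iii): search over release plans
            plan = rng.uniform(0.0, 1.0, size=horizon)      # releases as fractions of capacity
            cost = np.mean([simulate_cost(plan, s, base_policy) for s in scenarios])
            if cost < best_cost:
                best_plan, best_cost = plan, cost
        implemented.append(best_plan[0])                    # step (iv): apply only the first month
        # step (v): the system state is updated inside simulate_cost at the next iteration
    return implemented
```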
Medium term hurricane catastrophe models: a validation experiment
NASA Astrophysics Data System (ADS)
Bonazzi, Alessandro; Turner, Jessica; Dobbin, Alison; Wilson, Paul; Mitas, Christos; Bellone, Enrica
2013-04-01
Climate variability is a major source of uncertainty for the insurance industry underwriting hurricane risk. Catastrophe models provide their users with a stochastic set of events that expands the scope of the historical catalogue by including synthetic events that are likely to happen in a defined time-frame. The use of these catastrophe models is widespread in the insurance industry but it is only in recent years that climate variability has been explicitly accounted for. In the insurance parlance "medium term catastrophe model" refers to products that provide an adjusted view of risk that is meant to represent hurricane activity on a 1 to 5 year horizon, as opposed to long term models that integrate across the climate variability of the longest available time series of observations. In this presentation we discuss how a simple reinsurance program can be used to assess the value of medium term catastrophe models. We elaborate on similar concepts as discussed in "Potential Economic Value of Seasonal Hurricane Forecasts" by Emanuel et al. (2012, WCAS) and provide an example based on 24 years of historical data of the Chicago Mercantile Hurricane Index (CHI), an insured loss proxy. Profit and loss volatility of a hypothetical primary insurer are used to score medium term models versus their long term counterpart. Results show that medium term catastrophe models could help a hypothetical primary insurer to improve their financial resiliency to varying climate conditions.
NASA Astrophysics Data System (ADS)
Fauzi, Rizky Hanif; Liquiddanu, Eko; Suletra, I. Wayan
2018-02-01
Batik printing is made by stamping wax onto the cloth as well as by conventional batik techniques, followed by the dyeing process used in batik making in general. One of the areas supporting the batik industry in the Surakarta residency (Karisidenan Surakarta) is Kliwonan Village, Masaran District, Sragen. Masaran is known as a batik centre that originated with batik workers from Masaran employed in the Laweyan area of Solo; they considered it more economical to produce batik in their home village of Masaran, Sragen, because carrying out production from upstream to downstream in Solo was not feasible. SME X is a batik SME in Kliwonan Village, Masaran, Sragen, that produces batik printing with nationwide sales coverage. One of the keys to SME X's sales success is its participation in various national and international exhibitions, which has raised its profile. SME Y and SME Z are also SMEs in Kliwonan Village, Masaran, Sragen, producing batik printing. Observations revealed several problems that must be addressed in SME Y and SME Z: production running behind schedule, poor maintenance of equipment, inconsistent batik workmanship procedures, weak supervision of operators, and low market awareness of SME Y and SME Z products. The purpose of this research is to improve the primary activities in the SME Y and SME Z value chains for batik printing products by benchmarking against SME X, which has stronger competence.
NASA Astrophysics Data System (ADS)
Zhou, Zhenhuan; Li, Yuejie; Fan, Junhai; Rong, Dalun; Sui, Guohao; Xu, Chenghui
2018-05-01
A new Hamiltonian-based approach is presented for finding exact solutions for transverse vibrations of double-nanobeam systems embedded in an elastic medium. The continuum model is established within the framework of the symplectic methodology and the nonlocal Euler-Bernoulli and Timoshenko beam theories. The symplectic eigenfunctions are obtained after expressing the governing equations in Hamiltonian form. Exact frequency equations, vibration modes and displacement amplitudes are obtained by using the symplectic eigenfunctions and end conditions. Comparisons with previously published work are presented to illustrate the accuracy and reliability of the proposed method. The comprehensive results for arbitrary boundary conditions could serve as benchmark results for verifying numerically obtained solutions. In addition, a study on the difference between the nonlocal beam and the nonlocal plate is also included.
Klemperer, William
2011-01-01
The discovery of polar polyatomic molecules in higher-density regions of the interstellar medium by means of their rotational emission detected by radioastronomy has changed our conception of the universe from essentially atomic to highly molecular. We discuss models for molecule formation, emphasizing the general lack of thermodynamic equilibrium. Detailed chemical kinetics is needed to understand molecule formation as well as destruction. Ion molecule reactions appear to be an important class for the generally low temperatures of the interstellar medium. The need for the intrinsically high-quality factor of rotational transitions to definitively pin down molecular emitters has been well established by radioastronomy. The observation of abundant molecular ions both positive and, as recently observed, negative provides benchmarks for chemical kinetic schemes. Of considerable importance in guiding our understanding of astronomical chemistry is the fact that the larger molecules (with more than five atoms) are all organic.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meglinskii, I V
2001-12-31
The reflection spectra of a multilayer random medium - the human skin - strongly scattering and absorbing light are numerically simulated. The propagation of light in the medium and the absorption spectra are simulated by the stochastic Monte Carlo method, which combines schemes for calculations of real photon trajectories and the statistical weight method. The model takes into account the inhomogeneous spatial distribution of blood vessels, water, and melanin, the degree of blood oxygenation, and the hematocrit index. The attenuation of the incident radiation caused by reflection and refraction at Fresnel boundaries of layers inside the medium is also considered. The simulated reflection spectra are compared with the experimental reflection spectra of the human skin. It is shown that a set of parameters that was used to describe the optical properties of skin layers and their possible variations, despite being far from complete, is nevertheless sufficient for the simulation of the reflection spectra of the human skin and their quantitative analysis. (laser applications and other topics in quantum electronics)
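A toy weighted Monte Carlo walk through a layered slab, illustrating only the statistical-weight idea: it ignores Fresnel boundaries, uses isotropic scattering, draws each free path with the optical properties of the layer where the step starts, and the layer values are placeholders rather than actual skin data:

```python
import numpy as np

def diffuse_reflectance(layers, n_photons=20000, w_min=1e-4, seed=0):
    """Minimal weighted Monte Carlo in a layered slab: each layer is (thickness, mu_a, mu_s);
    the photon weight is reduced by the single-scattering albedo at every interaction and
    the weight escaping through the top surface is tallied as diffuse reflectance."""
    rng = np.random.default_rng(seed)
    bounds = np.cumsum([0.0] + [t for t, _, _ in layers])
    refl = 0.0
    for _ in range(n_photons):
        z, mu, w = 0.0, 1.0, 1.0                     # depth, direction cosine, statistical weight
        while w > w_min:
            i = np.searchsorted(bounds, z, side='right') - 1
            i = min(max(i, 0), len(layers) - 1)
            _, mu_a, mu_s = layers[i]
            mu_t = mu_a + mu_s
            z += mu * rng.exponential(1.0 / mu_t)    # free path in the current layer's properties
            if z <= 0.0:
                refl += w                            # escaped through the top surface
                break
            if z >= bounds[-1]:
                break                                # transmitted out of the bottom
            w *= mu_s / mu_t                         # deposit the absorbed fraction of the weight
            mu = 2.0 * rng.random() - 1.0            # isotropic re-direction (simplification)
    return refl / n_photons

# Example (placeholder optical properties, not skin data): two layers of (thickness, mu_a, mu_s)
R_d = diffuse_reflectance([(0.01, 0.2, 10.0), (0.10, 0.1, 5.0)])
```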
Maggi, Claudio; Paoluzzi, Matteo; Angelani, Luca; Di Leonardo, Roberto
2017-12-14
We investigate experimentally and numerically the stochastic dynamics and the time-dependent response of colloids subject to a small external perturbation in a dense bath of motile E. coli bacteria. The external field is a magnetic field acting on a superparamagnetic microbead suspended in an active medium. The measured linear response reveals an instantaneous friction kernel despite the complexity of the bacterial bath. By comparing the mean squared displacement and the response function we detect a clear violation of the fluctuation dissipation theorem.
Kinematic dynamo, supersymmetry breaking, and chaos
NASA Astrophysics Data System (ADS)
Ovchinnikov, Igor V.; Enßlin, Torsten A.
2016-04-01
The kinematic dynamo (KD) describes the growth of magnetic fields generated by the flow of a conducting medium in the limit of vanishing backaction of the fields onto the flow. The KD is therefore an important model system for understanding astrophysical magnetism. Here, the mathematical correspondence between the KD and a specific stochastic differential equation (SDE) viewed from the perspective of the supersymmetric theory of stochastics (STS) is discussed. The STS is a novel, approximation-free framework to investigate SDEs. The correspondence reported here permits insights from the STS to be applied to the theory of KD and vice versa. It was previously known that the fast KD in the idealistic limit of no magnetic diffusion requires chaotic flows. The KD-STS correspondence shows that this is also true for the diffusive KD. From the STS perspective, the KD possesses a topological supersymmetry, and the dynamo effect can be viewed as its spontaneous breakdown. This supersymmetry breaking can be regarded as the stochastic generalization of the concept of dynamical chaos. As this supersymmetry breaking happens in both the diffusive and the nondiffusive cases, the necessity of the underlying SDE being chaotic is given in either case. The observed exponentially growing and oscillating KD modes prove physically that dynamical spectra of the STS evolution operator that break the topological supersymmetry exist with both real and complex ground state eigenvalues. Finally, we comment on the nonexistence of dynamos for scalar quantities.
Two-fluid dusty shocks: simple benchmarking problems and applications to protoplanetary discs
NASA Astrophysics Data System (ADS)
Lehmann, Andrew; Wardle, Mark
2018-05-01
The key role that dust plays in the interstellar medium has motivated the development of numerical codes designed to study the coupled evolution of dust and gas in systems such as turbulent molecular clouds and protoplanetary discs. Drift between dust and gas has proven to be important as well as numerically challenging. We provide simple benchmarking problems for dusty gas codes by numerically solving the two-fluid dust-gas equations for steady, plane-parallel shock waves. The two distinct shock solutions to these equations allow a numerical code to test different forms of drag between the two fluids, the strength of that drag and the dust to gas ratio. We also provide an astrophysical application of J-type dust-gas shocks to studying the structure of accretion shocks on to protoplanetary discs. We find that two-fluid effects are most important for grains larger than 1 μm, and that the peak dust temperature within an accretion shock provides a signature of the dust-to-gas ratio of the infalling material.
NASA Astrophysics Data System (ADS)
Song, X.; Jordan, T. H.
2016-12-01
Body-wave and normal-mode observations have revealed an inner-core structure that is radially layered, axially anisotropic, and hemispherically asymmetric. Previous theoretical studies have examined the consistency of these features with the elasticity of iron crystals thought to dominate inner-core composition, but a fully consistent model has been elusive. Here we compare the seismic observations with effective-medium models derived from ab initio calculations of the elasticity tensors for hcp-Fe and bcc-Fe. Our estimates are based on Jordan's (GJI, 2015) effective medium theory, which is derived from a self-consistent, second-order Born approximation. The theory provides closed-form expressions for the effective elastic parameters of 3D anisotropic, heterogeneous media in which the local anisotropy is a constant hexagonal stiffness tensor C stochastically oriented about a constant symmetry axis ŝ, and the statistics of the small-scale heterogeneities are transversely isotropic in the plane perpendicular to ŝ. The stochastic model is then described by a dimensionless "aspect ratio of the heterogeneity", 0 ≤ η < ∞, and a dimensionless "orientation ratio of the anisotropy", 0 ≤ ξ < ∞. The latter determines the degree to which the axis of C is aligned with ŝ. We compute the loci of models with ŝ oriented along the Earth's rotational axis (ŝ = north) by varying ξ and η for various ab initio estimates of C. We show that many widely used estimates of C are inconsistent with most published normal-mode models of inner-core anisotropy. In particular, if the P-wave fast axis aligns with the rotational axis, which is required to satisfy the body-wave observations, then these hcp-Fe models predict that the fast polarization of the S waves is in the plane perpendicular to ŝ, which disagrees with most normal-mode models. We have attempted to resolve this discrepancy by examining alternative hcp-Fe models, including radially anisotropic distributions of stochastic anisotropy and heterogeneity (i.e., where ŝ = r̂), as well as bcc-Fe models. Our calculations constrain the form of C needed to satisfy the seismological inferences.
Li, Desheng
2014-01-01
This paper proposes a novel variant of cooperative quantum-behaved particle swarm optimization (CQPSO) algorithm with two mechanisms to reduce the search space and avoid the stagnation, called CQPSO-DVSA-LFD. One mechanism is called Dynamic Varying Search Area (DVSA), which takes charge of limiting the ranges of particles' activity into a reduced area. On the other hand, in order to escape the local optima, Lévy flights are used to generate the stochastic disturbance in the movement of particles. To test the performance of CQPSO-DVSA-LFD, numerical experiments are conducted to compare the proposed algorithm with different variants of PSO. According to the experimental results, the proposed method performs better than other variants of PSO on both benchmark test functions and the combinatorial optimization issue, that is, the job-shop scheduling problem. PMID:24851085
NASA Astrophysics Data System (ADS)
Yelkenci Köse, Simge; Demir, Leyla; Tunalı, Semra; Türsel Eliiyi, Deniz
2015-02-01
In manufacturing systems, optimal buffer allocation has a considerable impact on capacity improvement. This study presents a simulation optimization procedure to solve the buffer allocation problem in a heat exchanger production plant so as to improve the capacity of the system. For optimization, three metaheuristic-based search algorithms, i.e. a binary-genetic algorithm (B-GA), a binary-simulated annealing algorithm (B-SA) and a binary-tabu search algorithm (B-TS), are proposed. These algorithms are integrated with the simulation model of the production line. The simulation model, which captures the stochastic and dynamic nature of the production line, is used as an evaluation function for the proposed metaheuristics. The experimental study with benchmark problem instances from the literature and the real-life problem show that the proposed B-TS algorithm outperforms B-GA and B-SA in terms of solution quality.
Implementing Bayesian networks with embedded stochastic MRAM
NASA Astrophysics Data System (ADS)
Faria, Rafatul; Camsari, Kerem Y.; Datta, Supriyo
2018-04-01
Magnetic tunnel junctions (MTJs) with low-barrier magnets have been used to implement random number generators (RNGs), and it has recently been shown that such an MTJ connected to the drain of a conventional transistor provides a three-terminal tunable RNG, or p-bit. In this letter we show how this p-bit can be used to build a p-circuit that emulates a Bayesian network (BN), such that the correlations in real-world variables can be obtained from electrical measurements on the corresponding circuit nodes. The p-circuit design proceeds in two steps: the BN is first translated into a behavioral model, called Probabilistic Spin Logic (PSL), defined by dimensionless biasing (h) and interconnection (J) coefficients, which are then translated into electronic circuit elements. As a benchmark example, we mimic a family tree of three generations and show that the genetic relatedness calculated from a SPICE-compatible circuit simulator matches well-known results.
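A behavioral sketch of the PSL layer in Python, assuming the standard p-bit update m_i = sgn(tanh(I_i) - u) with u uniform in (-1, 1); the three-node chain and its (h, J) values are illustrative, not the coefficients used in the paper:

```python
import numpy as np

def psl_sample(h, J, order, n_samples=10000, seed=0):
    """Behavioral p-bit (PSL) model: each binary node m_i in {-1, +1} is updated as
    m_i = sign(tanh(I_i) - u), u ~ Uniform(-1, 1), with input I_i = h_i + sum_j J_ij m_j.
    Updating parents before children (the `order` argument) mimics a directed Bayesian network."""
    rng = np.random.default_rng(seed)
    n = len(h)
    m = rng.choice([-1.0, 1.0], size=n)
    samples = np.empty((n_samples, n))
    for s in range(n_samples):
        for i in order:                                  # sequential, parent-first updates
            I = h[i] + J[i] @ m
            m[i] = 1.0 if np.tanh(I) > rng.uniform(-1.0, 1.0) else -1.0
        samples[s] = m
    return samples

# Example: a three-node chain (grandparent -> parent -> child); coefficients are illustrative
h = np.zeros(3)
J = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],     # parent correlated with grandparent
              [0.0, 1.0, 0.0]])    # child correlated with parent
corr = np.corrcoef(psl_sample(h, J, order=[0, 1, 2]).T)   # pairwise correlations (relatedness proxy)
```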
Comparison of deterministic and stochastic methods for time-dependent Wigner simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Sihong, E-mail: sihong@math.pku.edu.cn; Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg
2015-11-01
Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study about its numerical accuracy has been performed. To this end, this paper presents the first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, which is a highly accurate deterministic method and utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in detail. In particular, this allows us to depict a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve a satisfactory accuracy.
Turbulent dissipation challenge: a community-driven effort
NASA Astrophysics Data System (ADS)
Parashar, Tulasi N.; Salem, Chadi; Wicks, Robert T.; Karimabadi, H.; Gary, S. Peter; Matthaeus, William H.
2015-10-01
Many naturally occurring and man-made plasmas are collisionless and turbulent. It is not yet well understood how the energy in fields and fluid motions is transferred into the thermal degrees of freedom of constituent particles in such systems. The debate at present primarily concerns proton heating. Multiple possible heating mechanisms have been proposed over the past few decades, including cyclotron damping, Landau damping, heating at intermittent structures and stochastic heating. Recently, a community-driven effort was proposed (Parashar & Salem, 2013, arXiv:1303.0204) to bring the community together and understand the relative contributions of these processes under given conditions. In this paper, we propose the first step of this challenge: a set of problems and diagnostics for benchmarking and comparing different types of 2.5D simulations. These comparisons will provide insights into the strengths and limitations of different types of numerical simulations and will help guide subsequent stages of the challenge.
NASA Astrophysics Data System (ADS)
Guala, M.; Liu, M.
2017-12-01
The kinematics of sediment particles is investigated by non-intrusive imaging methods to provide a statistical description of bedload transport in conditions near the threshold of motion. In particular, we focus on the cyclic transition between motion and rest regimes to quantify the waiting time statistics inferred to be responsible for anomalous diffusion, and so far elusive. Despite obvious limitations in the spatio-temporal domain of the observations, we are able to identify the probability distributions of the particle step time and length, velocity, acceleration, waiting time, and thus distinguish which quantities exhibit well converged mean values, based on the thickness of their respective tails. The experimental results shown here for four different transport conditions highlight the importance of the waiting time distribution and represent a benchmark dataset for the stochastic modeling of bedload transport.
Hu, Cong; Li, Zhi; Zhou, Tian; Zhu, Aijun; Xu, Chuanpei
2016-01-01
We propose a new meta-heuristic algorithm named Levy flights multi-verse optimizer (LFMVO), which incorporates Levy flights into multi-verse optimizer (MVO) algorithm to solve numerical and engineering optimization problems. The Original MVO easily falls into stagnation when wormholes stochastically re-span a number of universes (solutions) around the best universe achieved over the course of iterations. Since Levy flights are superior in exploring unknown, large-scale search space, they are integrated into the previous best universe to force MVO out of stagnation. We test this method on three sets of 23 well-known benchmark test functions and an NP complete problem of test scheduling for Network-on-Chip (NoC). Experimental results prove that the proposed LFMVO is more competitive than its peers in both the quality of the resulting solutions and convergence speed.
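The Lévy-flight ingredient can be sketched with Mantegna's algorithm for heavy-tailed step lengths; the perturbation of the best universe below illustrates the escape mechanism in a generic way and is not the published LFMVO update rule:

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    """Mantegna's algorithm for Levy-stable step lengths, the usual way Levy flights are
    injected into metaheuristics such as the LFMVO variant described above."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def levy_perturb_best(best, lb, ub, scale=0.01, rng=None):
    """Perturb the best universe with a Levy flight to escape stagnation (a sketch of the
    mechanism, with an assumed step scale, not the exact published update)."""
    rng = rng or np.random.default_rng()
    cand = best + scale * levy_step(best.size, rng=rng) * (ub - lb)
    return np.clip(cand, lb, ub)   # keep the candidate inside the search bounds
```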
DTN routing in body sensor networks with dynamic postural partitioning.
Quwaider, Muhannad; Biswas, Subir
2010-11-01
This paper presents novel store-and-forward packet routing algorithms for Wireless Body Area Networks (WBANs) with frequent postural partitioning. A prototype WBAN has been constructed for experimentally characterizing on-body topology disconnections in the presence of ultra-short-range radio links, unpredictable RF attenuation, and human postural mobility. On-body DTN routing protocols are then developed using a stochastic link cost formulation, capturing multi-scale topological localities in human postural movements. Performance of the proposed protocols is evaluated experimentally and via simulation, and is compared with a number of existing single-copy DTN routing protocols and an on-body packet flooding mechanism that serves as a performance benchmark with a delay lower bound. It is shown that via multi-scale modeling of the spatio-temporal locality of on-body link disconnection patterns, the proposed algorithms can provide better routing performance compared to a number of existing probabilistic, opportunistic, and utility-based DTN routing protocols in the literature.
NASA Astrophysics Data System (ADS)
Galliano, Frédéric
2018-05-01
This article presents a new dust spectral energy distribution (SED) model, named HerBIE, aimed at eliminating the noise-induced correlations and large scatter obtained when performing least-squares fits. The originality of this code is to apply the hierarchical Bayesian approach to full dust models, including realistic optical properties, stochastic heating, and the mixing of physical conditions in the observed regions. We test the performances of our model by applying it to synthetic observations. We explore the impact on the recovered parameters of several effects: signal-to-noise ratio, SED shape, sample size, the presence of intrinsic correlations, the wavelength coverage, and the use of different SED model components. We show that this method is very efficient: the recovered parameters are consistently distributed around their true values. We do not find any clear bias, even for the most degenerate parameters, or with extreme signal-to-noise ratios.
Noise and the statistical mechanics of distributed transport in a colony of interacting agents
NASA Astrophysics Data System (ADS)
Katifori, Eleni; Graewer, Johannes; Ronellenfitsch, Henrik; Mazza, Marco G.
Inspired by the process of liquid food distribution between individuals in an ant colony, in this work we consider the statistical mechanics of resource dissemination between interacting agents with finite carrying capacity. The agents move inside a confined space (nest), pick up the food at the entrance of the nest and share it with other agents that they encounter. We calculate analytically and via a series of simulations the global food intake rate for the whole colony as well as observables describing how uniformly the food is distributed within the nest. Our model and predictions provide a useful benchmark to assess which strategies can lead to efficient food distribution within the nest and also to what level the observed food uptake rates and efficiency in food distribution are due to stochastic fluctuations or specific food exchange strategies by an actual ant colony.
Quadratic integrand double-hybrid made spin-component-scaled
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brémond, Éric, E-mail: eric.bremond@iit.it; Savarese, Marika; Sancho-García, Juan C.
2016-03-28
We propose two analytical expressions aiming to rationalize the spin-component-scaled (SCS) and spin-opposite-scaled (SOS) schemes for double-hybrid exchange-correlation density-functionals. Their performances are extensively tested within the framework of the nonempirical quadratic integrand double-hybrid (QIDH) model on energetic properties included in the very large GMTKN30 benchmark database, and on structural properties of semirigid medium-sized organic compounds. The SOS variant is revealed as a less computationally demanding alternative to reach the accuracy of the original QIDH model without losing any theoretical background.
Macroinvertebrate community assembly in pools created during peatland restoration.
Brown, Lee E; Ramchunder, Sorain J; Beadle, Jeannie M; Holden, Joseph
2016-11-01
Many degraded ecosystems are subject to restoration attempts, providing new opportunities to unravel the processes of ecological community assembly. Restoration of previously drained northern peatlands, primarily to promote peat and carbon accumulation, has created hundreds of thousands of new open water pools. We assessed the potential benefits of this wetland restoration for aquatic biodiversity, and how communities reassemble, by comparing pool ecosystems in regions of the UK Pennines on intact (never drained) versus restored (blocked drainage-ditches) peatland. We also evaluated the conceptual idea that comparing reference ecosystems in terms of their compositional similarity to null assemblages (and thus the relative importance of stochastic versus deterministic assembly) can guide evaluations of restoration success better than analyses of community composition or diversity. Community composition data highlighted some differences in the macroinvertebrate composition of restored pools compared to undisturbed peatland pools, which could be used to suggest that alternative end-points to restoration were influenced by stochastic processes. However, widely used diversity metrics indicated no differences between undisturbed and restored pools. Novel evaluations of restoration using null models confirmed the similarity of deterministic assembly processes from the national species pool across all pools. Stochastic elements were important drivers of between-pool differences at the regional-scale but the scale of these effects was also similar across most of the pools studied. The amalgamation of assembly theory into ecosystem restoration monitoring allows us to conclude with more certainty that restoration has been successful from an ecological perspective in these systems. Evaluation of these UK findings compared to those from peatlands across Europe and North America further suggests that restoring peatland pools delivers significant benefits for aquatic fauna by providing extensive new habitat that is largely equivalent to natural pools. More generally, we suggest that assembly theory could provide new benchmarks for planning and evaluating ecological restoration success. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
A continuous stochastic model for non-equilibrium dense gases
NASA Astrophysics Data System (ADS)
Sadr, M.; Gorji, M. H.
2017-12-01
While accurate simulations of dense gas flows far from the equilibrium can be achieved by direct simulation adapted to the Enskog equation, the significant computational demand required for collisions appears as a major constraint. In order to cope with that, an efficient yet accurate solution algorithm based on the Fokker-Planck approximation of the Enskog equation is devised in this paper; the approximation is very much associated with the Fokker-Planck model derived from the Boltzmann equation by Jenny et al. ["A solution algorithm for the fluid dynamic equations based on a stochastic model for molecular motion," J. Comput. Phys. 229, 1077-1098 (2010)] and Gorji et al. ["Fokker-Planck model for computational studies of monatomic rarefied gas flows," J. Fluid Mech. 680, 574-601 (2011)]. The idea behind these Fokker-Planck descriptions is to project the dynamics of discrete collisions implied by the molecular encounters into a set of continuous Markovian processes subject to the drift and diffusion. Thereby, the evolution of particles representing the governing stochastic process becomes independent from each other and thus very efficient numerical schemes can be constructed. By close inspection of the Enskog operator, it is observed that the dense gas effects contribute further to the advection of molecular quantities. That motivates a modelling approach where the dense gas corrections can be cast in the extra advection of particles. Therefore, the corresponding Fokker-Planck approximation is derived such that the evolution in the physical space accounts for the dense effects present in the pressure, stress tensor, and heat fluxes. Hence the consistency between the devised Fokker-Planck approximation and the Enskog operator is shown for the velocity moments up to the heat fluxes. For validation studies, a homogeneous gas inside a box besides Fourier, Couette, and lid-driven cavity flow setups is considered. The results based on the Fokker-Planck model are compared with respect to benchmark simulations, where good agreement is found for the flow field along with the transport properties.
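An illustrative one-step particle update for such a Fokker-Planck model, written as an exact Ornstein-Uhlenbeck move in velocity space; the `dense_drift` argument is only a placeholder for the extra advection attributed to the Enskog (dense-gas) corrections, and the parameterization is a generic sketch rather than the model of the paper:

```python
import numpy as np

def fp_particle_step(x, v, u_mean, kT_over_m, tau, dt, rng, dense_drift=0.0):
    """One Ornstein-Uhlenbeck step of a Fokker-Planck particle model: velocities relax towards
    the local mean flow u_mean on a time scale tau and diffuse in velocity space, keeping the
    Maxwellian variance kT/m; positions are advected with the (possibly augmented) velocity."""
    a = np.exp(-dt / tau)
    sigma = np.sqrt(kT_over_m * (1.0 - a ** 2))            # exact OU noise amplitude over dt
    v_new = u_mean + a * (v - u_mean) + sigma * rng.standard_normal(np.shape(v))
    x_new = x + 0.5 * (v + v_new) * dt + dense_drift * dt  # trapezoidal advection + extra drift
    return x_new, v_new
```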
Chong, Meng Nan; Sidhu, Jatinder; Aryal, Rupak; Tang, Janet; Gernjak, Wolfgang; Escher, Beate; Toze, Simon
2013-08-01
Stormwater is one of the last major untapped urban water resources that can be exploited as an alternative water source in Australia. The information in the current Australian Guidelines for Water Recycling relating to stormwater harvesting and reuse only emphasises on a limited number of stormwater quality parameters. In order to supply stormwater as a source for higher value end-uses, a more comprehensive assessment on the potential public health risks has to be undertaken. Owing to the stochastic variations in rainfall, catchment hydrology and also the types of non-point pollution sources that can provide contaminants relating to different anthropogenic activities and catchment land uses, the characterisation of public health risks in stormwater is complex, tedious and not always possible through the conventional detection and analytical methods. In this study, a holistic approach was undertaken to assess the potential public health risks in urban stormwater samples from a medium-density residential catchment. A combined chemical-toxicological assessment was used to characterise the potential health risks arising from chemical contaminants, while a combination of standard culture methods and quantitative polymerase chain reaction (qPCR) methods was used for detection and quantification of faecal indicator bacteria (FIB) and pathogens in urban stormwater. Results showed that the concentration of chemical contaminants and associated toxicity were relatively low when benchmarked against other alternative water sources such as recycled wastewater. However, the concentrations of heavy metals particularly cadmium and lead have exceeded the Australian guideline values, indicating potential public health risks. Also, high numbers of FIB were detected in urban stormwater samples obtained from wet weather events. In addition, qPCR detection of human-related pathogens suggested there are frequent sewage ingressions into the urban stormwater runoff during wet weather events. Further water quality monitoring study will be conducted at different contrasting urban catchments in order to undertake a more comprehensive public health risk assessment for urban stormwater.
Benchmarking homogenization algorithms for monthly data
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2012-01-01
The COST (European Cooperation in Science and Technology) Action ES0601: advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training the users on homogenization software was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that automatic algorithms can perform as well as manual ones.
Benchmarking monthly homogenization algorithms
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Training was found to be very important. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.
Enhanced three-dimensional stochastic adjustment for combined volcano geodetic networks
NASA Astrophysics Data System (ADS)
Del Potro, R.; Muller, C.
2009-12-01
Volcano geodesy is unquestionably a necessary technique in studies of physical volcanology and for eruption early warning systems. However, as every volcano geodesist knows, obtaining measurements of the required resolution using traditional campaigns and techniques is time consuming and requires considerable manpower. Moreover, most volcano geodetic networks worldwide use a combination of data from traditional techniques: levelling, electronic distance measurements (EDM), triangulation and Global Navigation Satellite Systems (GNSS), but, in most cases, these data are surveyed, analysed and adjusted independently. This then leaves it to the authors’ criteria to decide which technique renders the most realistic results in each case. Herein we present a way of solving the problem of inter-methodology data integration in a cost-effective manner, following a methodology whereby all the geodetic data of a redundant, combined network (e.g. surveyed by GNSS, levelling, distance, angular data, INSAR, extensometers, etc.) are adjusted stochastically within a single three-dimensional reference frame. The adjustment methodology is based on the least mean square method and links the data with its geometrical component, providing combined, precise, three-dimensional displacement vectors relative to external reference points as well as stochastically quantified, benchmark-specific uncertainty ellipsoids. Three steps in the adjustment allow identifying, and hence dismissing, flagrant measurement errors (antenna height, atmospheric effects, etc.), checking the consistency of external reference points, and performing a final adjustment of the data. Moreover, since the statistical indicators can be obtained from expected uncertainties in the measurements of the different geodetic techniques used (i.e. independent of the measured data), it is possible to run a priori simulations of a geodetic network in order to constrain its resolution, and reduce logistics, before the network is even built. In this work we present a first effort to apply this technique to a new volcano geodetic network on Arenal volcano in Costa Rica, using triangulation, EDM and GNSS data from four campaigns. An a priori simulation, later confirmed by field measurements, of the movement detection capacity of different benchmarks within the network shows how the network design is optimised to detect smaller displacements at the points where these are expected. Data from the four campaigns also prove the repeatability and consistency of the statistical indicators. A preliminary interpretation of the geodetic data relative to Arenal’s volcanic activity could indicate a correlation of displacement velocity and direction with the location and thickness of the recent lava flow field. This then suggests that a deflation caused by the weight of the lava field could be obscuring the effects of possible deep magmatic sources. Although this study is specific to Arenal volcano and its regional tectonic setting, we suggest that the cost-effective, high-quality results presented here prove the methodology’s potential to be incorporated into the design and analysis of volcano geodetic networks worldwide.
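A minimal weighted least-squares adjustment in the spirit described above; the design matrix, misclosure vector and a priori standard deviations are generic placeholders, and the covariance of the estimated corrections is what benchmark-specific uncertainty ellipsoids would be drawn from:

```python
import numpy as np

def adjust_network(A, l, sigma):
    """Weighted least-squares adjustment of a redundant network: A links the observations to
    the 3D coordinate corrections x, l is the misclosure vector, sigma the a priori standard
    deviation of each observation. Assumes the network is redundant (more rows than unknowns)."""
    W = np.diag(1.0 / np.asarray(sigma) ** 2)          # weight matrix from a priori uncertainties
    N = A.T @ W @ A                                    # normal equations
    x = np.linalg.solve(N, A.T @ W @ l)
    v = A @ x - l                                      # residuals, useful for outlier screening
    dof = A.shape[0] - A.shape[1]
    s0_sq = (v.T @ W @ v) / dof                        # a posteriori variance factor
    cov_x = s0_sq * np.linalg.inv(N)                   # 3x3 diagonal blocks give per-point ellipsoids
    return x, cov_x
```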
Effect of particle size distribution on permeability in the randomly packed porous media
NASA Astrophysics Data System (ADS)
Markicevic, Bojan
2017-11-01
The question of how porous-medium heterogeneity influences permeability is still unresolved, with both increases and decreases in the permeability value reported. A numerical procedure is used to generate a randomly packed porous material consisting of spherical particles. Six different particle size distributions are used, including mono-, bi- and tri-disperse particles, as well as uniform, normal and log-normal particle size distributions, with the maximum to minimum particle size ratio ranging from three to eight for the different distributions. In all six cases, the average particle size is kept the same. For all media generated, the stochastic homogeneity is checked from the distribution of the three coordinates of the particle centers, where uniform distributions of the x-, y- and z-positions are found. The medium surface area remains essentially constant except for the bi-modal distribution, in which the medium area decreases, while no changes in the porosity are observed (around 0.36). The fluid flow is solved in such a domain and, after checking for the pressure axial linearity, the permeability is calculated from the Darcy law. The permeability comparison reveals that the permeability of the mono-disperse medium is the smallest, and the permeability of all poly-disperse samples is less than ten percent higher. For bi-modal particles, the permeability is about a quarter higher compared to the other media, which can be explained by the volumetric contribution of larger particles and the larger passages available for fluid flow.
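The final permeability estimate follows directly from the Darcy law once the flow solution has converged; a one-line helper with generic symbols for the simulation outputs (volumetric flow rate, viscosity, sample length, cross-sectional area, pressure drop):

```python
def darcy_permeability(Q, mu, L, A, dP):
    """Permeability from the Darcy law, k = Q * mu * L / (A * dP); all arguments are
    placeholders for quantities extracted from the pore-scale flow solution."""
    return Q * mu * L / (A * dP)
```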
Stochastic model search with binary outcomes for genome-wide association studies.
Russu, Alberto; Malovini, Alberto; Puca, Annibale A; Bellazzi, Riccardo
2012-06-01
The spread of case-control genome-wide association studies (GWASs) has stimulated the development of new variable selection methods and predictive models. We introduce a novel Bayesian model search algorithm, Binary Outcome Stochastic Search (BOSS), which addresses the model selection problem when the number of predictors far exceeds the number of binary responses. Our method is based on a latent variable model that links the observed outcomes to the underlying genetic variables. A Markov Chain Monte Carlo approach is used for model search and to evaluate the posterior probability of each predictor. BOSS is compared with three established methods (stepwise regression, logistic lasso, and elastic net) in a simulated benchmark. Two real case studies are also investigated: a GWAS on the genetic bases of longevity, and the type 2 diabetes study from the Wellcome Trust Case Control Consortium. Simulations show that BOSS achieves higher precisions than the reference methods while preserving good recall rates. In both experimental studies, BOSS successfully detects genetic polymorphisms previously reported to be associated with the analyzed phenotypes. BOSS outperforms the other methods in terms of F-measure on simulated data. In the two real studies, BOSS successfully detects biologically relevant features, some of which are missed by univariate analysis and the three reference techniques. The proposed algorithm is an advance in the methodology for model selection with a large number of features. Our simulated and experimental results showed that BOSS proves effective in detecting relevant markers while providing a parsimonious model.
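A stripped-down stochastic search over predictor-inclusion indicators in the spirit of BOSS; a BIC-type least-squares score on the 0/1 outcome stands in for the paper's latent-variable likelihood, and a simple Metropolis rule replaces its full MCMC machinery:

```python
import numpy as np

def stochastic_model_search(X, y, n_iter=5000, penalty=None, seed=0):
    """Propose models by flipping one inclusion indicator at a time, accept by a Metropolis
    rule on a BIC-like score, and report how often each predictor is included (a crude proxy
    for its posterior inclusion probability)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    penalty = penalty if penalty is not None else np.log(n)

    def score(mask):
        k = int(mask.sum())
        if k == 0:
            rss = np.sum((y - y.mean()) ** 2)
        else:
            beta, *_ = np.linalg.lstsq(X[:, mask], y - y.mean(), rcond=None)
            rss = np.sum(((y - y.mean()) - X[:, mask] @ beta) ** 2)
        return n * np.log(rss / n) + penalty * k          # lower is better

    mask = np.zeros(p, dtype=bool)
    current = score(mask)
    visits = np.zeros(p)
    for _ in range(n_iter):
        j = rng.integers(p)
        cand = mask.copy()
        cand[j] = not cand[j]
        s = score(cand)
        if s < current or rng.random() < np.exp((current - s) / 2.0):   # ad hoc acceptance temperature
            mask, current = cand, s
        visits += mask
    return visits / n_iter                                # inclusion frequency of each predictor
```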
A robust and efficient stepwise regression method for building sparse polynomial chaos expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be; Raisee, Mehrdad; Ghorbaniasl, Ghader
2017-03-01
Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimensions. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools relevant to probabilistic methods. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well-established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method by repeating the analyses using random experimental designs.
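A greedy forward-selection sketch for sparse PC regression; the correlation-based ranking and the simple stopping rule below are stand-ins for the probabilistic selection criterion developed in the paper:

```python
import numpy as np

def stepwise_sparse_pc(Psi, y, max_terms=20, tol=1e-3):
    """Greedy forward selection of polynomial-chaos terms: columns of Psi are the candidate
    basis polynomials evaluated at the experimental design, y the model responses. At each
    pass the column most correlated with the residual is added, and selection stops when the
    residual norm no longer drops by a meaningful fraction of ||y||."""
    active = []
    resid = y.copy()
    coef = np.array([])
    for _ in range(max_terms):
        scores = np.abs(Psi.T @ resid) / np.linalg.norm(Psi, axis=0)   # correlation with residual
        scores[active] = -np.inf                                        # do not re-select active terms
        j = int(np.argmax(scores))
        trial = active + [j]
        trial_coef, *_ = np.linalg.lstsq(Psi[:, trial], y, rcond=None)
        new_resid = y - Psi[:, trial] @ trial_coef
        if np.linalg.norm(resid) - np.linalg.norm(new_resid) < tol * np.linalg.norm(y):
            break
        active, resid, coef = trial, new_resid, trial_coef
    return active, coef
```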
Informed consent in human research: what to say and how to say it.
Reiman, Robert E
2013-02-01
To ensure that the possibility of harm to human research subjects is minimized, clinical trials and other research protocols are subject to oversight by Institutional Review Boards (IRBs). IRBs require that subjects be fully informed about the real or potential risks of participation in a research study. The use of radiological examinations in research protocols subjects the participants to exposure to ionizing radiation, which in theory carries a risk of stochastic effects such as radiation-induced cancer, and in practice may lead to deterministic effects such as skin injury. Because IRB members and clinical study coordinators may have little knowledge of radiation effects or how best to communicate the risk to the research subjects, they will consult with institutional Radiation Safety Committees and radiation protection professionals regarding how to integrate radiation risk information into the informed consent process. Elements of radiation informed consent include: (1) comparison of the radiation dose to some benchmark that enables the study subjects to make a value judgment regarding the acceptability of the risk; (2) a quantitative expression of the absolute risk of stochastic effects; (3) an expression of uncertainty in the risk; and (4) understandability. Standardized risk statement templates may be created for specific radiological examinations. These standardized risk statements may be deployed as paper forms or electronically in the form of internet-based applications. The technical nature of creating useful radiation risk statements represents an opportunity for radiation protection professionals to participate productively in the clinical research process.
The infinite medium Green's function for neutron transport in plane geometry 40 years later
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.
1993-01-01
In 1953, the first of what was supposed to be two volumes on neutron transport theory was published. The monograph, entitled "Introduction to the Theory of Neutron Diffusion" by Case et al., appeared as a Los Alamos National Laboratory report and was to be followed by a second volume, which never appeared as intended because of the death of Placzek. Instead, Case and Zweifel collaborated on the now classic work entitled Linear Transport Theory, in which the underlying mathematical theory of linear transport was presented. The initial monograph, however, represented the coming of age of neutron transport theory, which had its roots in radiative transfer and kinetic theory. In addition, it provided the first benchmark results along with the mathematical development for several fundamental neutron transport problems. In particular, one-dimensional infinite medium Green's functions for the monoenergetic transport equation in plane and spherical geometries were considered complete with numerical results to be used as standards to guide code development for applications. Unfortunately, because of the limited computational resources of the day, some numerical results were incorrect. Also, only conventional mathematics and numerical methods were used because the transport theorists of the day were just becoming acquainted with more modern mathematical approaches. In this paper, Green's function solution is revisited in light of modern numerical benchmarking methods with an emphasis on evaluation rather than theoretical results. The primary motivation for considering the Green's function at this time is its emerging use in solving finite and heterogeneous media transport problems.
NASA Astrophysics Data System (ADS)
Puchkov, V. A.
2016-09-01
Aspect-sensitive scattering of multi-frequency probe signals by artificial, magnetic-field-aligned density irregularities (with transverse size ∼1-10 m) generated in the ionosphere by powerful radio waves is considered. Fluctuations of received signals depending on the stochastic properties of the irregularities are calculated. It is shown that in the case of HF probe waves two mechanisms may contribute to the scattered signal fluctuations. The first is the propagation of probe waves through the ionospheric plasma as a randomly inhomogeneous medium. The second is the non-stationary stochastic behavior of the irregularities that satisfy the Bragg conditions for the scattering geometry and therefore constitute centers of scattering. In the probe-wave frequency band of the order of 10-100 MHz the second mechanism dominates, which provides an opportunity to recover some properties of the artificial irregularities from the received signals. The correlation function of backscattered probe waves with close frequencies is calculated, and it is shown that the detailed spatial distribution of irregularities along the scattering vector can be found experimentally from observations of this correlation function.
Evaluating the morphological completeness of a training image.
Gao, Mingliang; Teng, Qizhi; He, Xiaohai; Feng, Junxi; Han, Xue
2017-05-01
Understanding the three-dimensional (3D) stochastic structure of a porous medium is helpful for studying its physical properties. A 3D stochastic structure can be reconstructed from a two-dimensional (2D) training image (TI) using mathematical modeling. In order to predict what specific morphology belonging to a TI can be reconstructed at the 3D orthogonal slices by the method of 3D reconstruction, this paper begins by introducing the concept of orthogonal chords. After analyzing the relationship among TI morphology, orthogonal chords, and the 3D morphology of orthogonal slices, a theory for evaluating the morphological completeness of a TI is proposed for the cases of three orthogonal slices and of two orthogonal slices. The proposed theory is evaluated using four TIs of porous media that represent typical but distinct morphological types. The significance of this theoretical evaluation lies in two aspects: It allows special morphologies, for which the attributes of a TI can be reconstructed at a special orthogonal slice of a 3D structure, to be located and quantified, and it can guide the selection of an appropriate reconstruction method for a special TI.
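The abstract does not define "orthogonal chords" operationally; one plausible reading, used purely for illustration here, is the distribution of run lengths of one phase along the two orthogonal axes of a 2D binary training image. The sketch below computes such chord-length statistics on a toy binary image; the image and the interpretation are assumptions, not the paper's definition.

```python
import numpy as np

rng = np.random.default_rng(2)
ti = (rng.random((128, 128)) < 0.35).astype(np.uint8)   # toy binary TI (1 = pore phase)

def chord_lengths(img, axis):
    """Lengths of consecutive runs of 1s along the given axis (row- or column-wise chords)."""
    lengths = []
    lines = img if axis == 1 else img.T
    for line in lines:
        padded = np.concatenate(([0], line, [0]))        # pad so every run has a start/end
        diff = np.diff(padded)
        starts = np.flatnonzero(diff == 1)
        ends = np.flatnonzero(diff == -1)
        lengths.extend(ends - starts)
    return np.asarray(lengths)

cx = chord_lengths(ti, axis=1)   # chords along x (rows)
cy = chord_lengths(ti, axis=0)   # chords along y (columns)
print("mean chord length  x: %.2f px   y: %.2f px" % (cx.mean(), cy.mean()))
```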
Recurrent noise-induced phase singularities in drifting patterns.
Clerc, M G; Coulibaly, S; del Campo, F; Garcia-Nustes, M A; Louvergneaux, E; Wilson, M
2015-11-01
We show that the key ingredients for creating recurrent traveling spatial phase defects in drifting patterns are a noise-sustained structure regime together with the vicinity of a phase transition, that is, a spatial region where the control parameter lies close to the threshold for pattern formation. They both generate specific favorable initial conditions for local spatial gradients, phase, and/or amplitude. Predictions from the stochastic convective Ginzburg-Landau equation with real coefficients agree quite well with experiments carried out on a Kerr medium submitted to shifted optical feedback that evidence noise-induced traveling phase slips and vortex phase-singularities.
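For readers unfamiliar with the model equation named above, the following is a minimal Euler-Maruyama sketch of a 1D stochastic convective Ginzburg-Landau equation with real coefficients and additive noise. The specific form, boundary conditions, and all parameter values are illustrative assumptions and are not those of the cited experiments.

```python
import numpy as np

rng = np.random.default_rng(3)

# Grid and parameters (illustrative values only).
N, dx = 256, 100.0 / 256
dt, steps = 0.01, 20000
mu, c, D = 0.1, 1.0, 1e-3        # control parameter, drift velocity, noise strength

A = 1e-3 * rng.standard_normal(N)

def lap(u):   # periodic Laplacian
    return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

def grad(u):  # periodic centered gradient
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

for _ in range(steps):
    # dA/dt = mu*A + A_xx - A^3 - c*A_x + sqrt(2D)*xi(x,t), Euler-Maruyama step
    noise = np.sqrt(2 * D * dt / dx) * rng.standard_normal(N)
    A = A + dt * (mu * A + lap(A) - A**3 - c * grad(A)) + noise

print("pattern amplitude (rms):", np.sqrt(np.mean(A**2)))
```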
IGMtransmission: Transmission curve computation
NASA Astrophysics Data System (ADS)
Harrison, Christopher M.; Meiksin, Avery; Stock, David
2015-04-01
IGMtransmission is a Java graphical user interface that implements Monte Carlo simulations to compute the corrections to colors of high-redshift galaxies due to intergalactic attenuation based on current models of the Intergalactic Medium. The effects of absorption due to neutral hydrogen are considered, with particular attention to the stochastic effects of Lyman Limit Systems. Attenuation curves are produced, as well as colors for a wide range of filter responses and model galaxy spectra. Photometric filters are included for the Hubble Space Telescope, the Keck telescope, the Mt. Palomar 200-inch, the SUBARU telescope and UKIRT; alternative filter response curves and spectra may be readily uploaded.
Mass loss from inhomogeneous hot star winds. I. Resonance line formation in 2D models
NASA Astrophysics Data System (ADS)
Sundqvist, J. O.; Puls, J.; Feldmeier, A.
2010-01-01
Context. The mass-loss rate is a key parameter of hot, massive stars. Small-scale inhomogeneities (clumping) in the winds of these stars are conventionally included in spectral analyses by assuming optically thin clumps, a void inter-clump medium, and a smooth velocity field. To reconcile investigations of different diagnostics (in particular, unsaturated UV resonance lines vs. Hα/radio emission) within such models, a highly clumped wind with very low mass-loss rates needs to be invoked, where the resonance lines seem to indicate rates an order of magnitude (or even more) lower than previously accepted values. If found to be realistic, this would challenge the radiative line-driven wind theory and have dramatic consequences for the evolution of massive stars. Aims: We investigate basic properties of the formation of resonance lines in small-scale inhomogeneous hot star winds with non-monotonic velocity fields. Methods: We study inhomogeneous wind structures by means of 2D stochastic and pseudo-2D radiation-hydrodynamic wind models, constructed by assembling 1D snapshots in radially independent slices. A Monte-Carlo radiative transfer code, which treats the resonance line formation in an axially symmetric spherical wind (without resorting to the Sobolev approximation), is presented and used to produce synthetic line spectra. Results: The optically thin clumping limit is only valid for very weak lines. The detailed density structure, the inter-clump medium, and the non-monotonic velocity field are all important for the line formation. We confirm previous findings that radiation-hydrodynamic wind models reproduce observed characteristics of strong lines (e.g., the black troughs) without applying the highly supersonic “microturbulence” needed in smooth models. For intermediate strong lines, the velocity spans of the clumps are of central importance. Current radiation-hydrodynamic models predict spans that are too large to reproduce observed profiles unless a very low mass-loss rate is invoked. By simulating lower spans in 2D stochastic models, the profile strengths become drastically reduced, and are consistent with higher mass-loss rates. To simultaneously meet the constraints from strong lines, the inter-clump medium must be non-void. A first comparison to the observed Phosphorus V doublet in the O6 supergiant λ Cep confirms that line profiles calculated from a stochastic 2D model reproduce observations with a mass-loss rate approximately ten times higher than that derived from the same lines but assuming optically thin clumping. Tentatively this may resolve discrepancies between theoretical predictions, evolutionary constraints, and recent derived mass-loss rates, and suggests a re-investigation of the clump structure predicted by current radiation-hydrodynamic models.
Propagation and scattering of vector light beam in turbid scattering medium
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Milione, Giovanni; Meglinski, Igor; Alfano, Robert R.
2014-03-01
Due to their high sensitivity to subtle alterations in medium morphology, vector light beams have recently gained much attention in the area of photonics. This has led to the development of new non-invasive optical techniques for tissue diagnostics. Conceptual design of particular experimental systems requires careful selection of various technical parameters, including beam structure, polarization, coherence, and wavelength of the incident optical radiation, as well as an estimation of how the spatial and temporal structural alterations in biological tissues can be distinguished by variations of these parameters. Therefore, an accurate, realistic description of vector light beam propagation within tissue-like media is required. To simulate and mimic the propagation of vector light beams within turbid scattering media, the stochastic Monte Carlo (MC) technique has been used. In the current report we present the developed MC model and the results of simulations of different vector light beams propagating in turbid tissue-like scattering media. The developed MC model takes into account the coherent properties of light, the influence of reflection and refraction at the medium boundary, the helicity flip of vortices, and their mutual interference. Finally, similar to the concept of the higher-order Poincaré sphere (HOPS), to link the spatial distribution of the intensity of the backscattered vector light beam and its state of polarization on the medium surface, we introduce the color-coded HOPS.
NASA Astrophysics Data System (ADS)
Dube, Timothy; Mutanga, Onisimo
2015-03-01
Aboveground biomass estimation is critical in understanding forest contribution to regional carbon cycles. Despite the successful application of high spatial and spectral resolution sensors in aboveground biomass (AGB) estimation, there are challenges related to high acquisition costs, small area coverage, multicollinearity and limited availability. These challenges hamper the successful regional scale AGB quantification. The aim of this study was to assess the utility of the newly-launched medium-resolution multispectral Landsat 8 Operational Land Imager (OLI) dataset with a large swath width, in quantifying AGB in a forest plantation. We applied different sets of spectral analysis (test I: spectral bands; test II: spectral vegetation indices and test III: spectral bands + spectral vegetation indices) in testing the utility of Landsat 8 OLI using two non-parametric algorithms: stochastic gradient boosting and the random forest ensembles. The results of the study show that the medium-resolution multispectral Landsat 8 OLI dataset provides better AGB estimates for Eucalyptus dunii, Eucalyptus grandis and Pinus taeda especially when using the extracted spectral information together with the derived spectral vegetation indices. We also noted that incorporating the optimal subset of the most important selected medium-resolution multispectral Landsat 8 OLI bands improved AGB accuracies. We compared medium-resolution multispectral Landsat 8 OLI AGB estimates with Landsat 7 ETM + estimates and the latter yielded lower estimation accuracies. Overall, this study demonstrates the invaluable potential and strength of applying the relatively affordable and readily available newly-launched medium-resolution Landsat 8 OLI dataset, with a large swath width (185-km) in precisely estimating AGB. This strength of the Landsat OLI dataset is crucial especially in sub-Saharan Africa where high-resolution remote sensing data availability remains a challenge.
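To make the modelling step concrete, the sketch below applies the two non-parametric regressions named in the abstract, stochastic gradient boosting and random forests, to band plus vegetation-index features. The synthetic reflectances and biomass values stand in for actual Landsat 8 OLI extractions and field-measured AGB, and the hyperparameters are generic defaults, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Stand-in for per-plot spectral extractions: 6 reflective bands + NDVI + simple ratio.
n = 300
bands = rng.uniform(0.02, 0.45, size=(n, 6))            # toy surface reflectances
ndvi = (bands[:, 4] - bands[:, 3]) / (bands[:, 4] + bands[:, 3])
sr = bands[:, 4] / bands[:, 3]
X = np.column_stack([bands, ndvi, sr])                  # test III: bands + indices
agb = 120 * ndvi + 15 * sr + rng.normal(0, 8, n)        # synthetic AGB (t/ha)

# subsample < 1 makes the boosting "stochastic" gradient boosting.
sgb = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, subsample=0.7)
rf = RandomForestRegressor(n_estimators=300)

for name, model in [("stochastic gradient boosting", sgb), ("random forest", rf)]:
    r2 = cross_val_score(model, X, agb, cv=5, scoring="r2")
    print(f"{name}: mean cross-validated R^2 = {r2.mean():.2f}")
```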
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, B.C.J.; Sha, W.T.; Doria, M.L.
1980-11-01
The governing equations, i.e., conservation equations for mass, momentum, and energy, are solved as a boundary-value problem in space and an initial-value problem in time. BODYFIT-1FE code uses the technique of boundary-fitted coordinate systems where all the physical boundaries are transformed to be coincident with constant coordinate lines in the transformed space. By using this technique, one can prescribe boundary conditions accurately without interpolation. The transformed governing equations in terms of the boundary-fitted coordinates are then solved by using implicit cell-by-cell procedure with a choice of either central or upwind convective derivatives. It is a true benchmark rod-bundle code without invoking any assumptions in the case of laminar flow. However, for turbulent flow, some empiricism must be employed due to the closure problem of turbulence modeling. The detailed velocity and temperature distributions calculated from the code can be used to benchmark and calibrate empirical coefficients employed in subchannel codes and porous-medium analyses.
Assessment of the Accuracy of the Bethe-Salpeter (BSE/GW) Oscillator Strengths.
Jacquemin, Denis; Duchemin, Ivan; Blondel, Aymeric; Blase, Xavier
2016-08-09
Aiming to assess the accuracy of the oscillator strengths determined at the BSE/GW level, we performed benchmark calculations using three complementary sets of molecules. In the first, we considered ∼80 states in Thiel's set of compounds and compared the BSE/GW oscillator strengths to recently determined ADC(3/2) and CC3 reference values. The second set includes the oscillator strengths of the low-lying states of 80 medium to large dyes for which we have determined CC2/aug-cc-pVTZ values. The third set contains 30 anthraquinones for which experimental oscillator strengths are available. We find that BSE/GW accurately reproduces the trends for all series with excellent correlation coefficients to the benchmark data and generally very small errors. Indeed, for Thiel's sets, the BSE/GW values are more accurate (using CC3 references) than both CC2 and ADC(3/2) values on both absolute and relative scales. For all three sets, BSE/GW errors also tend to be nicely spread with almost equal numbers of positive and negative deviations as compared to reference values.
McKenzie, J.M.; Voss, C.I.; Siegel, D.I.
2007-01-01
In northern peatlands, subsurface ice formation is an important process that can control heat transport, groundwater flow, and biological activity. Temperature was measured over one and a half years in a vertical profile in the Red Lake Bog, Minnesota. To successfully simulate the transport of heat within the peat profile, the U.S. Geological Survey's SUTRA computer code was modified. The modified code simulates fully saturated, coupled porewater-energy transport, with freezing and melting porewater, and includes proportional heat capacity and thermal conductivity of water and ice, decreasing matrix permeability due to ice formation, and latent heat. The model is verified by correctly simulating the Lunardini analytical solution for ice formation in a porous medium with a mixed ice-water zone. The modified SUTRA model correctly simulates the temperature and ice distributions in the peat bog. Two possible benchmark problems for groundwater and energy transport with ice formation and melting are proposed that may be used by other researchers for code comparison. © 2006 Elsevier Ltd. All rights reserved.
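The thermal part of the modification described above (proportional mixing of water and ice heat capacity and conductivity, plus latent heat over a freezing interval) can be illustrated with a small "apparent heat capacity" sketch. The linear freezing function between 0 and -1 °C and all property values below are assumptions for illustration, not the actual SUTRA implementation, and the permeability reduction is not shown.

```python
import numpy as np

# Bulk properties (illustrative values for a saturated peat-like porous medium).
porosity = 0.8
rho_w, c_w, k_w = 1000.0, 4182.0, 0.6      # water: density, specific heat, conductivity
rho_i, c_i, k_i = 917.0, 2108.0, 2.2       # ice
rho_s, c_s, k_s = 1500.0, 800.0, 0.25      # solid matrix
L_f = 334000.0                             # latent heat of fusion (J/kg)
T_f, dT = 0.0, 1.0                         # freezing starts at 0 C, complete at -1 C (assumed)

def ice_fraction(T):
    """Fraction of pore water frozen; linear freezing function (an assumption)."""
    return np.clip((T_f - T) / dT, 0.0, 1.0)

def effective_properties(T):
    w_i = ice_fraction(T)
    # volume-weighted (proportional) heat capacity and thermal conductivity
    C = ((1 - porosity) * rho_s * c_s
         + porosity * ((1 - w_i) * rho_w * c_w + w_i * rho_i * c_i))
    k = (1 - porosity) * k_s + porosity * ((1 - w_i) * k_w + w_i * k_i)
    # apparent heat capacity adds latent heat released over the freezing interval
    if 0.0 < w_i < 1.0:
        C += porosity * rho_i * L_f / dT
    return C, k

for T in [2.0, -0.5, -5.0]:
    C, k = effective_properties(T)
    print(f"T = {T:5.1f} C   volumetric heat capacity = {C:12.0f} J/m3/K   k = {k:.2f} W/m/K")
```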
Cohen, D; Stamnes, S; Tanikawa, T; Sommersten, E R; Stamnes, J J; Lotsberg, J K; Stamnes, K
2013-04-22
A comparison is presented of two different methods for polarized radiative transfer in coupled media consisting of two adjacent slabs with different refractive indices, each slab being a stratified medium with no change in optical properties except in the direction of stratification. One of the methods is based on solving the integro-differential radiative transfer equation for the two coupled slabs using the discrete ordinate approximation. The other method is based on probabilistic and statistical concepts and simulates the propagation of polarized light using the Monte Carlo approach. The emphasis is on non-Rayleigh scattering for particles in the Mie regime. Comparisons with benchmark results available for a slab with constant refractive index show that both methods reproduce these benchmark results when the refractive index is set to be the same in the two slabs. Computed results for test cases with coupling (different refractive indices in the two slabs) show that the two methods produce essentially identical results for identical input in terms of absorption and scattering coefficients and scattering phase matrices.
NASA Astrophysics Data System (ADS)
Sun, Yujia; Zhang, Xiaobing; Howell, John R.
2017-06-01
This work investigates the performance of the DOM, FVM, P1, SP3 and P3 methods for 2D combined natural convection and radiation heat transfer for an absorbing, emitting medium. The Monte Carlo method is used to solve the RTE coupled with the energy equation, and its results are used as benchmark solutions. Effects of the Rayleigh number, Planck number and optical thickness are considered, all covering several orders of magnitude. Temperature distributions, heat transfer rate and computational performance in terms of accuracy and computing time are presented and analyzed.
Neutron radiative capture cross section of 63,65Cu between 0.4 and 7.5 MeV
NASA Astrophysics Data System (ADS)
Newsome, I.; Bhike, M.; Krishichayan, Tornow, W.
2018-04-01
Natural copper is commonly used as cooling and shielding medium in detector arrangements designed to search for neutrinoless double-β decay. Neutron-induced background reactions on copper could potentially produce signals that are indistinguishable from the signals of interest. The present work focuses on radiative neutron capture experiments on 63,65Cu in the 0.4 to 7.5 MeV neutron energy range. The new data provide evaluations and model calculations with benchmark data needed to extend their applicability in predicting background rates in neutrinoless double-β decay experiments.
BioPreDyn-bench: a suite of benchmark problems for dynamic modelling in systems biology.
Villaverde, Alejandro F; Henriques, David; Smallbone, Kieran; Bongard, Sophia; Schmid, Joachim; Cicin-Sain, Damjan; Crombach, Anton; Saez-Rodriguez, Julio; Mauch, Klaus; Balsa-Canto, Eva; Mendes, Pedro; Jaeger, Johannes; Banga, Julio R
2015-02-20
Dynamic modelling is one of the cornerstones of systems biology. Many research efforts are currently being invested in the development and exploitation of large-scale kinetic models. The associated problems of parameter estimation (model calibration) and optimal experimental design are particularly challenging. The community has already developed many methods and software packages which aim to facilitate these tasks. However, there is a lack of suitable benchmark problems which allow a fair and systematic evaluation and comparison of these contributions. Here we present BioPreDyn-bench, a set of challenging parameter estimation problems which aspire to serve as reference test cases in this area. This set comprises six problems including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The level of description includes metabolism, transcription, signal transduction, and development. For each problem we provide (i) a basic description and formulation, (ii) implementations ready-to-run in several formats, (iii) computational results obtained with specific solvers, (iv) a basic analysis and interpretation. This suite of benchmark problems can be readily used to evaluate and compare parameter estimation methods. Further, it can also be used to build test problems for sensitivity and identifiability analysis, model reduction and optimal experimental design methods. The suite, including codes and documentation, can be freely downloaded from the BioPreDyn-bench website, https://sites.google.com/site/biopredynbenchmarks/ .
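The benchmark suite itself ships full models and solvers; the sketch below only illustrates the generic task it targets, calibrating kinetic parameters of an ODE model against noisy time-course data. The two-species model, the noise level, and the solver choices are invented for illustration and are unrelated to the BioPreDyn-bench problems.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

def rhs(t, y, k1, k2):
    # Toy two-species kinetic model: S -> P at rate k1*S, P degraded at rate k2*P.
    s, p = y
    return [-k1 * s, k1 * s - k2 * p]

t_obs = np.linspace(0, 10, 20)
true = (0.8, 0.3)
sol = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_obs, args=true)
data = sol.y + rng.normal(0, 0.02, sol.y.shape)        # noisy "measurements"

def residuals(theta):
    sim = solve_ivp(rhs, (0, 10), [1.0, 0.0], t_eval=t_obs, args=tuple(theta))
    return (sim.y - data).ravel()

fit = least_squares(residuals, x0=[0.2, 0.1], bounds=(0, 5))
print("true parameters:", true, "  estimated:", np.round(fit.x, 3))
```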
Economic Risk of Bee Pollination in Maine Wild Blueberry, Vaccinium angustifolium.
Asare, Eric; Hoshide, Aaron K; Drummond, Francis A; Criner, George K; Chen, Xuan
2017-10-01
Recent pollinator declines highlight the importance of evaluating economic risk of agricultural systems heavily dependent on rented honey bees or native pollinators. Our study analyzed variability of native bees and honey bees, and the risks these pose to profitability of Maine's wild blueberry industry. We used cross-sectional data from organic, low-, medium-, and high-input wild blueberry producers in 1993, 1997-1998, 2005-2007, and from 2011 to 2015 (n = 162 fields). Data included native and honey bee densities (count/m2/min) and honey bee stocking densities (hives/ha). Blueberry fruit set, yield, and honey bee hive stocking density models were estimated. Fruit set is impacted about 1.6 times more by native bees than honey bees on a per bee basis. Fruit set significantly explained blueberry yield. Honey bee stocking density in fields predicted honey bee foraging densities. These three models were used in enterprise budgets for all four systems from on-farm surveys of 23 conventional and 12 organic producers (2012-2013). These budgets formed the basis of Monte Carlo simulations of production and profit. Stochastic dominance of net farm income (NFI) cumulative distribution functions revealed that if organic yields are high enough (2,345 kg/ha), organic systems are economically preferable to conventional systems. However, if organic yields are lower (724 kg/ha), it is riskier with higher variability of crop yield and NFI. Although medium-input systems are stochastically dominant with lower NFI variability compared with other conventional systems, the high-input system breaks even with the low-input system if honey bee hive rental prices triple in the future. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America.
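The final analysis step described above, Monte Carlo simulation of net farm income followed by a stochastic dominance comparison, can be sketched as below. All yields, prices, cost figures, and distributional assumptions are placeholders and do not reflect the study's budget data; only the first-order dominance check on empirical CDFs is shown.

```python
import numpy as np

rng = np.random.default_rng(6)
n_sim = 10000

def simulate_nfi(yield_mean, yield_sd, price_mean, price_sd, cost):
    """Net farm income per hectare = stochastic yield * stochastic price - fixed cost."""
    y = np.maximum(rng.normal(yield_mean, yield_sd, n_sim), 0.0)   # kg/ha
    p = np.maximum(rng.normal(price_mean, price_sd, n_sim), 0.0)   # $/kg
    return y * p - cost

# Placeholder budgets for two hypothetical systems (not the paper's values).
nfi_a = simulate_nfi(2345, 400, 4.0, 0.6, 6000)
nfi_b = simulate_nfi(3000, 500, 1.8, 0.3, 3500)

# First-order stochastic dominance: A dominates B if CDF_A(x) <= CDF_B(x) for all x.
grid = np.linspace(min(nfi_a.min(), nfi_b.min()), max(nfi_a.max(), nfi_b.max()), 200)
cdf_a = np.searchsorted(np.sort(nfi_a), grid) / n_sim
cdf_b = np.searchsorted(np.sort(nfi_b), grid) / n_sim
print("mean NFI  A: %.0f   B: %.0f" % (nfi_a.mean(), nfi_b.mean()))
print("A first-order dominates B:", bool(np.all(cdf_a <= cdf_b)))
print("B first-order dominates A:", bool(np.all(cdf_b <= cdf_a)))
```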
Zagmutt, Francisco J; Sempier, Stephen H; Hanson, Terril R
2013-10-01
Emerging diseases (ED) can have devastating effects on agriculture. Consequently, agricultural insurance for ED can develop if basic insurability criteria are met, including the capability to estimate the severity of ED outbreaks with associated uncertainty. The U.S. farm-raised channel catfish (Ictalurus punctatus) industry was used to evaluate the feasibility of using a disease spread simulation modeling framework to estimate the potential losses from new ED for agricultural insurance purposes. Two stochastic models were used to simulate the spread of ED between and within channel catfish ponds in Mississippi (MS) under high, medium, and low disease impact scenarios. The mean (95% prediction interval (PI)) proportion of ponds infected within disease-impacted farms was 7.6% (3.8%, 22.8%), 24.5% (3.8%, 72.0%), and 45.6% (4.0%, 92.3%), and the mean (95% PI) proportion of fish mortalities in ponds affected by the disease was 9.8% (1.4%, 26.7%), 49.2% (4.7%, 60.7%), and 88.3% (85.9%, 90.5%) for the low, medium, and high impact scenarios, respectively. The farm-level mortality losses from an ED were up to 40.3% of the total farm inventory and can be used for insurance premium rate development. Disease spread modeling provides a systematic way to organize the current knowledge on the ED perils and, ultimately, use this information to help develop actuarially sound agricultural insurance policies and premiums. However, the estimates obtained will include a large amount of uncertainty driven by the stochastic nature of disease outbreaks, by the uncertainty in the frequency of future ED occurrences, and by the often sparse data available from past outbreaks. © 2013 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Berner, J.; Coleman, D.; Palmer, T.
2015-12-01
Stochastic parameterizations have been used for more than a decade in atmospheric models to represent the variability of unresolved sub-grid processes. They have a beneficial effect on the spread and mean state of medium- and extended-range forecasts (Buizza et al. 1999, Palmer et al. 2009). There is also increasing evidence that stochastic parameterization of unresolved processes could be beneficial for the climate of an atmospheric model through noise enhanced variability, noise-induced drift (Berner et al. 2008), and by enabling the climate simulator to explore other flow regimes (Christensen et al. 2015; Dawson and Palmer 2015). We present results showing the impact of including the Stochastically Perturbed Parameterization Tendencies scheme (SPPT) in coupled runs of the National Center for Atmospheric Research (NCAR) Community Atmosphere Model, version 4 (CAM4) with historical forcing. The SPPT scheme accounts for uncertainty in the CAM physical parameterization schemes, including the convection scheme, by perturbing the parametrised temperature, moisture and wind tendencies with a multiplicative noise term. SPPT results in a large improvement in the variability of the CAM4 modeled climate. In particular, SPPT results in a significant improvement to the representation of the El Nino-Southern Oscillation in CAM4, improving the power spectrum, as well as both the inter- and intra-annual variability of tropical pacific sea surface temperatures. References: Berner, J., Doblas-Reyes, F. J., Palmer, T. N., Shutts, G. J., & Weisheimer, A., 2008. Phil. Trans. R. Soc A, 366, 2559-2577 Buizza, R., Miller, M. and Palmer, T. N., 1999. Q.J.R. Meteorol. Soc., 125, 2887-2908. Christensen, H. M., I. M. Moroz & T. N. Palmer, 2015. Clim. Dynam., doi: 10.1007/s00382-014-2239-9 Dawson, A. and T. N. Palmer, 2015. Clim. Dynam., doi: 10.1007/s00382-014-2238-x Palmer, T.N., R. Buizza, F. Doblas-Reyes, et al., 2009, ECMWF technical memorandum 598.
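For orientation, the sketch below shows the basic SPPT idea in one dimension: the net parameterized tendency is multiplied by (1 + r), where r is a spatially smooth, temporally correlated (AR(1)) random pattern. The pattern statistics, the toy tendency, and the bounding of r are illustrative assumptions, not the CAM4 or ECMWF settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(7)

nx, nt = 128, 200
dt, tau, sigma = 900.0, 6 * 3600.0, 0.5      # time step (s), decorrelation time, pattern std
phi = np.exp(-dt / tau)                      # AR(1) coefficient

def smooth_noise():
    """Spatially smooth Gaussian noise field with unit variance (toy spectral pattern)."""
    f = gaussian_filter1d(rng.standard_normal(nx), sigma=8, mode="wrap")
    return f / f.std()

r = sigma * smooth_noise()
T = 280.0 + 5.0 * np.sin(np.linspace(0, 2 * np.pi, nx))   # toy temperature field

for _ in range(nt):
    dT_param = -0.001 * (T - 280.0)                       # toy parameterized tendency (K/s)
    # SPPT: evolve the pattern and perturb the tendency multiplicatively.
    r = phi * r + np.sqrt(1 - phi**2) * sigma * smooth_noise()
    T = T + dt * (1.0 + np.clip(r, -0.9, 0.9)) * dT_param

print("perturbed-run mean/std of T:", T.mean().round(2), T.std().round(2))
```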
NASA Astrophysics Data System (ADS)
Brewster, J.; Oware, E. K.
2017-12-01
Groundwater hosted in fractured rocks constitutes almost 65% of the principal aquifers in the US. The exploitation and contaminant management of fractured aquifers require fracture flow and transport modeling, which in turn requires a detailed understanding of the structure of the aquifer. The widely used equivalent porous medium approach to modeling fractured aquifer systems is inadequate to accurately predict fracture transport processes due to the averaging of the sharp lithological contrast between the matrix and the fractures. The potential of geophysical imaging (GI) to estimate spatially continuous subsurface profiles in a minimally invasive fashion is well proven. Conventional deterministic GI strategies, however, produce geologically unrealistic, smoothed-out results due to commonly enforced smoothing constraints. Stochastic GI of fractured aquifers is becoming increasingly appealing due to its ability to recover realistic fracture features while providing multiple likely realizations that enable uncertainty assessment. Generating prior spatial features consistent with the expected target structures is crucial in stochastic imaging. We propose to utilize eigenvalue ratios to resolve the elongated fracture features expected in a fractured aquifer system. Eigenvalues capture the major and minor directions of variability in a region, which can be employed to evaluate shape descriptors, such as eccentricity (elongation) and orientation of features in the region. Eccentricity ranges from zero to one, representing a circularly shaped feature to a line feature, respectively. Here, we apply eigenvalue ratios to define a joint objective parameter consisting of eccentricity (shape) and direction terms to guide the generation of prior fracture-like features in some predefined principal directions for stochastic GI. Preliminary unconditional, synthetic experiments reveal the potential of the algorithm to simulate prior fracture-like features. We illustrate the strategy with a 2D cross-borehole electrical resistivity tomography (ERT) survey in a fractured aquifer at the UB Environmental Geophysics Imaging Site, with tomograms validated against gamma and caliper logs obtained from the two ERT wells.
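The eigenvalue-based shape descriptors named in the abstract can be computed directly from the coordinate covariance of a binary feature, as in the minimal sketch below. The joint objective used to steer the stochastic imaging is not specified in the abstract, so only the eccentricity and orientation descriptors are reproduced here, on an invented fracture-like feature.

```python
import numpy as np

def shape_descriptors(mask):
    """Eccentricity (0 = circular, -> 1 = line-like) and orientation of a binary feature."""
    ys, xs = np.nonzero(mask)
    coords = np.column_stack([xs, ys]).astype(float)
    cov = np.cov(coords, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    lam_min, lam_max = evals
    ecc = np.sqrt(1.0 - lam_min / lam_max)      # eigenvalue-ratio eccentricity
    major = evecs[:, 1]                         # eigenvector of the largest eigenvalue
    angle = np.degrees(np.arctan2(major[1], major[0]))
    return ecc, angle

# Toy elongated "fracture-like" feature: a thin tilted line in a 100x100 grid.
mask = np.zeros((100, 100), dtype=bool)
for i in range(80):
    mask[20 + i // 2, 10 + i] = True            # slope ~0.5, i.e. about 27 degrees
ecc, angle = shape_descriptors(mask)
print(f"eccentricity = {ecc:.2f}, orientation = {angle:.1f} deg")
```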
Agent-Based Computational Modeling of Cell Culture ...
Quantitative characterization of cellular dose in vitro is needed for alignment of doses in vitro and in vivo. We used the agent-based software, CompuCell3D (CC3D), to provide a stochastic description of cell growth in culture. The model was configured so that isolated cells assumed a “fried egg shape” but became increasingly cuboidal with increasing confluency. The surface area presented by each cell to the overlying medium varies from cell-to-cell and is a determinant of diffusional flux of toxicant from the medium into the cell. Thus, dose varies among cells for a given concentration of toxicant in the medium. Computer code describing diffusion of H2O2 from medium into each cell and clearance of H2O2 was calibrated against H2O2 time-course data (25, 50, or 75 uM H2O2 for 60 min) obtained with the Amplex Red assay for the medium and the H2O2-sensitive fluorescent reporter, HyPer, for cytosol. Cellular H2O2 concentrations peaked at about 5 min and were near baseline by 10 min. The model predicted a skewed distribution of surface areas, with between cell variation usually 2 fold or less. Predicted variability in cellular dose was in rough agreement with the variation in the HyPer data. These results are preliminary, as the model was not calibrated to the morphology of a specific cell type. Future work will involve morphology model calibration against human bronchial epithelial (BEAS-2B) cells. Our results show, however, the potential of agent-based modeling
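The dose concept described above, medium-to-cell flux proportional to each cell's exposed surface area plus first-order intracellular clearance, is sketched below for a population of cells with a skewed area distribution. The rate constants, volumes, and the constant-medium assumption are illustrative, not the calibrated CompuCell3D values, so the output is only meant to show how surface-area variability translates into dose variability.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(8)

n_cells = 200
area = rng.lognormal(mean=np.log(300.0), sigma=0.3, size=n_cells)   # um^2, skewed (assumed)
perm = 1e-3        # effective membrane permeability (um/s), assumed
vol = 2000.0       # cell volume (um^3), assumed
k_clear = 1.0      # intracellular clearance rate (1/s), assumed
c_medium = 50.0    # medium H2O2 (uM), held constant over the exposure (simplification)

def rhs(t, c):
    # Per cell: dC_cell/dt = (P*A/V)*(C_medium - C_cell) - k_clear*C_cell
    return perm * area / vol * (c_medium - c) - k_clear * c

sol = solve_ivp(rhs, (0, 600), np.zeros(n_cells), t_eval=[300, 600])
c_end = sol.y[:, -1]
print("cellular H2O2 at 10 min: mean %.4f uM, fold range %.1f"
      % (c_end.mean(), c_end.max() / c_end.min()))
```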
Stochastic analysis of three-dimensional flow in a bounded domain
Naff, R.L.; Vecchia, A.V.
1986-01-01
A commonly accepted first-order approximation of the equation for steady state flow in a fully saturated spatially random medium has the form of Poisson's equation. This form allows for the advantageous use of Green's functions to solve for the random output (hydraulic heads) in terms of a convolution over the random input (the logarithm of hydraulic conductivity). A solution for steady state three-dimensional flow in an aquifer bounded above and below is presented; consideration of these boundaries is made possible by use of Green's functions to solve Poisson's equation. Within the bounded domain the medium hydraulic conductivity is assumed to be a second-order stationary random process as represented by a simple three-dimensional covariance function. Upper and lower boundaries are taken to be no-flow boundaries; the mean flow vector lies entirely in the horizontal dimensions. The resulting hydraulic head covariance function exhibits nonstationary effects resulting from the imposition of boundary conditions. Comparisons are made with existing infinite domain solutions.
Multi-scale dynamics and relaxation of a tethered membrane in a solvent by Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Pandey, Ras; Anderson, Kelly; Farmer, Barry
2006-03-01
A tethered membrane modeled by a flexible sheet dissipates entropy as it wrinkles and crumples. Nodes of a coarse-grained membrane are connected via multiple pathways for dynamical modes to propagate. We consider a sheet with nodes connected by fluctuating bonds on a cubic lattice. The empty lattice sites constitute an effective solvent medium via node-solvent interaction. Each node executes its stochastic motion with the Metropolis algorithm subject to bond fluctuations, excluded volume constraints, and interaction energy. Dynamics and conformation of the sheet are examined at a low and a high temperature with attractive and repulsive node-node interactions, for contrast, in an attractive solvent medium. Variations of the mean square displacement of the center node of the sheet and that of its center of mass with the time steps are examined in detail; these show different power-law motion from the short- to the long-time regime. Relaxation of the gyration radius and scaling of its asymptotic value with the molecular weight are examined.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, S.; Gezari, S.; Heinis, S.
2015-03-20
We present a novel method for the light-curve characterization of Pan-STARRS1 Medium Deep Survey (PS1 MDS) extragalactic sources into stochastic variables (SVs) and burst-like (BL) transients, using multi-band image-differencing time-series data. We select detections in difference images associated with galaxy hosts using a star/galaxy catalog extracted from the deep PS1 MDS stacked images, and adopt a maximum a posteriori formulation to model their difference-flux time-series in four Pan-STARRS1 photometric bands g_P1, r_P1, i_P1, and z_P1. We use three deterministic light-curve models to fit BL transients: a Gaussian, a Gamma distribution, and an analytic supernova (SN) model, and one stochastic light-curve model, the Ornstein-Uhlenbeck process, in order to fit variability that is characteristic of active galactic nuclei (AGNs). We assess the quality of fit of the models band-wise and source-wise, using their estimated leave-out-one cross-validation likelihoods and corrected Akaike information criteria. We then apply a K-means clustering algorithm on these statistics, to determine the source classification in each band. The final source classification is derived as a combination of the individual filter classifications, resulting in two measures of classification quality, from the averages across the photometric filters of (1) the classifications determined from the closest K-means cluster centers, and (2) the square distances from the clustering centers in the K-means clustering spaces. For a verification set of AGNs and SNe, we show that SV and BL occupy distinct regions in the plane constituted by these measures. We use our clustering method to characterize 4361 extragalactic image difference detected sources, in the first 2.5 yr of the PS1 MDS, into 1529 BL, and 2262 SV, with a purity of 95.00% for AGNs, and 90.97% for SN based on our verification sets. We combine our light-curve classifications with their nuclear or off-nuclear host galaxy offsets, to define a robust photometric sample of 1233 AGNs and 812 SNe. With these two samples, we characterize their variability and host galaxy properties, and identify simple photometric priors that would enable their real-time identification in future wide-field synoptic surveys.
Qi, Aiming; Holland, Robert A; Taylor, Gail; Richter, Goetz M
2018-09-01
To optimise trade-offs provided by future changes in grassland use intensity, spatially and temporally explicit estimates of respective grassland productivities are required at the systems level. Here, we benchmark the potential national availability of grassland biomass, identify optimal strategies for its management, and investigate the relative importance of intensification over reversion (prioritising productivity versus environmental ecosystem services). Process-conservative meta-models for different grasslands were used to calculate the baseline dry matter yields (DMY; 1961-1990) at 1 km2 resolution for the whole UK. The effects of climate change, rising atmospheric [CO2] and technological progress on baseline DMYs were used to estimate future grassland productivities (up to 2050) for low and medium CO2 emission scenarios of UKCP09. UK benchmark productivities of 12.5, 8.7 and 2.8 t/ha on temporary, permanent and rough-grazing grassland, respectively, accounted for productivity gains by 2010. By 2050, productivities under the medium emission scenario are predicted to increase to 15.5 and 9.8 t/ha on temporary and permanent grassland, respectively, but not on rough grassland. Based on surveyed grassland distributions for Great Britain in 2010, the annual availability of grassland biomass is likely to rise from 64 to 72 million tonnes by 2050. Assuming optimal N application could close existing productivity gaps of ca. 40%, a range of management options could deliver an additional 21 × 10^6 tonnes of biomass available for bioenergy. Scenarios of changes in grassland use intensity demonstrated considerable scope for maintaining or further increasing grassland production and sparing some grassland for the provision of environmental ecosystem services. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Stochastic model search with binary outcomes for genome-wide association studies
Malovini, Alberto; Puca, Annibale A; Bellazzi, Riccardo
2012-01-01
Objective: The spread of case–control genome-wide association studies (GWASs) has stimulated the development of new variable selection methods and predictive models. We introduce a novel Bayesian model search algorithm, Binary Outcome Stochastic Search (BOSS), which addresses the model selection problem when the number of predictors far exceeds the number of binary responses. Materials and methods: Our method is based on a latent variable model that links the observed outcomes to the underlying genetic variables. A Markov Chain Monte Carlo approach is used for model search and to evaluate the posterior probability of each predictor. Results: BOSS is compared with three established methods (stepwise regression, logistic lasso, and elastic net) in a simulated benchmark. Two real case studies are also investigated: a GWAS on the genetic bases of longevity, and the type 2 diabetes study from the Wellcome Trust Case Control Consortium. Simulations show that BOSS achieves higher precisions than the reference methods while preserving good recall rates. In both experimental studies, BOSS successfully detects genetic polymorphisms previously reported to be associated with the analyzed phenotypes. Discussion: BOSS outperforms the other methods in terms of F-measure on simulated data. In the two real studies, BOSS successfully detects biologically relevant features, some of which are missed by univariate analysis and the three reference techniques. Conclusion: The proposed algorithm is an advance in the methodology for model selection with a large number of features. Our simulated and experimental results showed that BOSS proves effective in detecting relevant markers while providing a parsimonious model. PMID:22534080
Liquid Structures and Physical Properties -- Ground Based Studies for ISS Experiments
NASA Technical Reports Server (NTRS)
Kelton, K. F.; Bendert, J. C.; Mauro, N. A.
2012-01-01
Studies of electrostatically-levitated supercooled liquids have demonstrated strong short- and medium-range ordering in transition metal and alloy liquids, which can influence phase transitions like crystal nucleation and the glass transition. The structure is also related to the liquid properties. Planned ISS experiments will allow a deeper investigation of these results as well as the first investigations of a new type of coupling in crystal nucleation in primary crystallizing liquids, resulting from a linking of the stochastic processes of diffusion with interfacial-attachment. A brief description of the techniques used for ground-based studies and some results relevant to planned ISS investigations are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prinja, A. K.
The Karhunen-Loève stochastic spectral expansion of a random binary mixture of immiscible fluids in planar geometry is used to explore asymptotic limits of radiation transport in such mixtures. Under appropriate scalings of mixing parameters - correlation length, volume fraction, and material cross sections - and employing multiple-scale expansion of the angular flux, previously established atomic mix and diffusion limits are reproduced. When applied to highly contrasting material properties in the small correlation length limit, the methodology yields a nonstandard reflective medium transport equation that merits further investigation. Finally, a hybrid closure is proposed that produces both small and large correlation length limits of the closure condition for the material-averaged equations.
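As background, a discrete Karhunen-Loève expansion can be constructed numerically from the eigen-decomposition of the covariance matrix of the mixing statistics, as in the minimal sketch below for a 1D field with exponential covariance. The thresholding of the Gaussian field into a binary material indicator is an illustrative post-processing step, not the construction used in the paper, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

# 1D slab discretized into n points; exponential covariance with correlation length lc.
n, L, lc = 200, 10.0, 0.5
x = np.linspace(0, L, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / lc)

# Discrete KL expansion = eigen-decomposition of the covariance matrix.
evals, evecs = np.linalg.eigh(C)
idx = np.argsort(evals)[::-1]
evals, evecs = evals[idx], evecs[:, idx]
m = np.searchsorted(np.cumsum(evals) / evals.sum(), 0.95) + 1   # modes for 95% variance

def sample_field():
    """One realization of the truncated KL expansion."""
    xi = rng.standard_normal(m)
    return evecs[:, :m] @ (np.sqrt(evals[:m]) * xi)

g = sample_field()
volume_fraction = 0.3
material = (g < np.quantile(g, volume_fraction)).astype(int)    # 1 = minority material
print(f"{m} KL modes retained; realized volume fraction = {material.mean():.2f}")
```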
Complex groundwater flow systems as traveling agent models
Padilla, Pablo; Escolero, Oscar; González, Tomas; Morales-Casique, Eric; Osorio-Olvera, Luis
2014-01-01
Analyzing field data from pumping tests, we show that as with many other natural phenomena, groundwater flow exhibits complex dynamics described by 1/f power spectrum. This result is theoretically studied within an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity, as the result of the superposition of an infinite number of stochastic processes. Even more, we show that this implies that non-Kolmogorovian probability is needed for its study, and provide a set of new partial differential equations for groundwater flow. PMID:25337455
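The spectral diagnostic used above, a 1/f power spectrum, can be estimated as sketched below: build a periodogram and fit the log-log spectral slope. The synthetic series, generated here as a superposition of relaxation processes with a wide spread of time constants (one common heuristic route to 1/f-like noise, echoing the superposition argument in the abstract), stands in for the pumping-test records.

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic 1/f-like series: sum of AR(1) (Ornstein-Uhlenbeck-type) processes
# with log-spaced relaxation times.
n, dt = 2**14, 1.0
series = np.zeros(n)
for tau in np.logspace(0.5, 3.5, 30):
    a = np.exp(-dt / tau)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = a * x[i - 1] + np.sqrt(1 - a**2) * rng.standard_normal()
    series += x

# Periodogram and log-log fit of the spectral slope in an intermediate band.
freqs = np.fft.rfftfreq(n, dt)[1:]
power = np.abs(np.fft.rfft(series - series.mean()))[1:] ** 2
band = (freqs > 1e-3) & (freqs < 2e-2)
slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(power[band]), 1)
print("fitted spectral exponent (S(f) ~ f^slope):", round(slope, 2))
```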
The Calderón problem with corrupted data
NASA Astrophysics Data System (ADS)
Caro, Pedro; Garcia, Andoni
2017-08-01
We consider the inverse Calderón problem consisting of determining the conductivity inside a medium by electrical measurements on its surface. Ideally, these measurements determine the Dirichlet-to-Neumann map and, therefore, one usually assumes the data to be given by such a map. This situation corresponds to having access to infinite-precision measurements, which is totally unrealistic. In this paper, we study the Calderón problem assuming the data to contain measurement errors and provide formulas to reconstruct the conductivity and its normal derivative on the surface. Additionally, we state the rate of convergence of the method. Our approach is theoretical and has a stochastic flavour.
Discontinuous finite element method for vector radiative transfer
NASA Astrophysics Data System (ADS)
Wang, Cun-Hai; Yi, Hong-Liang; Tan, He-Ping
2017-03-01
The discontinuous finite element method (DFEM) is applied to solve the vector radiative transfer in participating media. The derivation in a discrete form of the vector radiation governing equations is presented, in which the angular space is discretized by the discrete-ordinates approach with a local refined modification, and the spatial domain is discretized into finite non-overlapped discontinuous elements. The elements in the whole solution domain are connected by modelling the boundary numerical flux between adjacent elements, which makes the DFEM numerically stable for solving radiative transfer equations. Several various problems of vector radiative transfer are tested to verify the performance of the developed DFEM, including vector radiative transfer in a one-dimensional parallel slab containing a Mie/Rayleigh/strong forward scattering medium and a two-dimensional square medium. The fact that DFEM results agree very well with the benchmark solutions in published references shows that the developed DFEM in this paper is accurate and effective for solving vector radiative transfer problems.
NASA Astrophysics Data System (ADS)
Giama, E.; Papadopoulos, A. M.
2018-01-01
The reduction of carbon emissions has become a top priority in the decision-making process for governments and companies, the strict European legislation framework being a major driving force behind this effort. On the other hand, many companies face difficulties in estimating their footprint and in linking the results derived from environmental evaluation processes with an integrated energy management strategy, which will eventually lead to energy-efficient and cost-effective solutions. The paper highlights the need of companies to establish integrated environmental management practices, with tools such as carbon footprint analysis to monitor the energy performance of production processes. Concepts and methods are analysed, and selected indicators are presented by means of benchmarking, monitoring and reporting the results in order to be used effectively from the companies. The study is based on data from more than 90 Greek small and medium enterprises, followed by a comprehensive discussion of cost-effective and realistic energy-saving measures.
NASA Astrophysics Data System (ADS)
Mishchenko, Michael I.; Yang, Ping
2018-01-01
In this paper we make practical use of the recently developed first-principles approach to electromagnetic scattering by particles immersed in an unbounded absorbing host medium. Specifically, we introduce an actual computational tool for the calculation of pertinent far-field optical observables in the context of the classical Lorenz-Mie theory. The paper summarizes the relevant theoretical formalism, explains various aspects of the corresponding numerical algorithm, specifies the input and output parameters of a FORTRAN program available at https://www.giss.nasa.gov/staff/mmishchenko/Lorenz-Mie.html, and tabulates benchmark results useful for testing purposes. This public-domain FORTRAN program enables one to solve the following two important problems: (i) simulate theoretically the reading of a remote well-collimated radiometer measuring electromagnetic scattering by an individual spherical particle or a small random group of spherical particles; and (ii) compute the single-scattering parameters that enter the vector radiative transfer equation derived directly from the Maxwell equations.
NASA Astrophysics Data System (ADS)
Wu, F.; Wu, T.-H.; Li, X.-Y.
2018-03-01
This article aims to present a systematic indentation theory on a half-space of multi-ferroic composite medium with transverse isotropy. The effect of sliding friction between the indenter and substrate is taken into account. The cylindrical flat-ended indenter is assumed to be electrically/magnetically conducting or insulating, which leads to four sets of mixed boundary-value problems. The indentation forces in the normal and tangential directions are related to the Coulomb friction law. For each case, the integral equations governing the contact behavior are developed by means of the generalized method of potential theory, and the corresponding coupling field is obtained in terms of elementary functions. The effect of sliding on the contact behavior is investigated. Finite element method (FEM) in the context of magneto-electro-elasticity is developed to discuss the validity of the analytical solutions. The obtained analytical solutions may serve as benchmarks to various simplified analyses and numerical codes and as a guide for future experimental studies.
Compton scattering collision module for OSIRIS
NASA Astrophysics Data System (ADS)
Del Gaudio, Fabrizio; Grismayer, Thomas; Fonseca, Ricardo; Silva, Luís
2017-10-01
Compton scattering plays a fundamental role in a variety of different astrophysical environments, such as at the gaps of pulsars and the stagnation surface of black holes. In these scenarios, Compton scattering is coupled with self-consistent mechanisms such as pair cascades. We present the implementation of a novel module, embedded in the self-consistent framework of the PIC code OSIRIS 4.0, capable of simulating Compton scattering from first principles and that is fully integrated with the self-consistent plasma dynamics. The algorithm accounts for the stochastic nature of Compton scattering, reproducing without approximations the exchange of energy between photons and unbound charged species. We present benchmarks of the code against the analytical results of Blumenthal et al. and the numerical solution of the linear Kompaneets equation, and good agreement is found between the simulations and the theoretical models. This work is supported by the European Research Council Grant (ERC-2015-AdG 695088) and the Fundação para a Ciência e a Tecnologia (Bolsa de Investigação PD/BD/114323/2016).
Stochastic-master-equation analysis of optimized three-qubit nondemolition parity measurements
NASA Astrophysics Data System (ADS)
Tornberg, L.; Barzanjeh, Sh.; DiVincenzo, David P.
2014-03-01
We analyze a direct parity measurement of the state of three superconducting qubits in circuit quantum electrodynamics. The parity is inferred from a homodyne measurement of the reflected and transmitted microwave radiation, and the measurement is direct in the sense that the parity is measured without the need for any quantum circuit operations or for ancilla qubits. Qubits are coupled to two resonant-cavity modes, allowing the steady state of the emitted radiation to satisfy the necessary conditions to act as a pointer state for the parity. However, the transient dynamics violates these conditions, and we analyze this detrimental effect and show that it can be overcome in the limit of a weak measurement signal. Our analysis shows that, with a moderate degree of postselection, it is possible to achieve postmeasurement states with fidelity of order 95%. We believe that this type of measurement could serve as a benchmark for future error correction protocols in a scalable architecture.
Trophallaxis-inspired model for distributed transport between randomly interacting agents
NASA Astrophysics Data System (ADS)
Gräwer, Johannes; Ronellenfitsch, Henrik; Mazza, Marco G.; Katifori, Eleni
2017-08-01
Trophallaxis, the regurgitation and mouth to mouth transfer of liquid food between members of eusocial insect societies, is an important process that allows the fast and efficient dissemination of food in the colony. Trophallactic systems are typically treated as a network of agent interactions. This approach, though valuable, does not easily lend itself to analytic predictions. In this work we consider a simple trophallactic system of randomly interacting agents with finite carrying capacity, and calculate analytically and via a series of simulations the global food intake rate for the whole colony as well as observables describing how uniformly the food is distributed within the nest. Our model and predictions provide a useful benchmark to assess to what level the observed food uptake rates and efficiency in food distribution is due to stochastic effects or specific trophallactic strategies by the ant colony. Our work also serves as a stepping stone to describing the collective properties of more complex trophallactic systems, such as those including division of labor between foragers and workers.
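A minimal simulation of a randomly interacting, finite-capacity trophallactic system of the kind described above is sketched below. The sharing rule (the fuller agent passes half of what the receiver can still accept), the single refilled forager, and all parameter values are assumptions made for illustration, not the paper's interaction rule, but the two tracked observables (colony fill fraction and spread of loads) mirror the quantities discussed.

```python
import numpy as np

rng = np.random.default_rng(11)

n_agents, capacity, steps = 100, 1.0, 20000
load = np.zeros(n_agents)
uptake_history, spread_history = [], []

for t in range(steps):
    load[0] = capacity                  # agent 0 acts as the forager, refilled at the source
    i, j = rng.choice(n_agents, size=2, replace=False)
    donor, receiver = (i, j) if load[i] >= load[j] else (j, i)
    transfer = 0.5 * min(load[donor], capacity - load[receiver])   # assumed sharing rule
    load[donor] -= transfer
    load[receiver] += transfer
    uptake_history.append(load[1:].mean() / capacity)   # colony fill fraction (excl. forager)
    spread_history.append(load[1:].std())               # how uniformly food is distributed

print("final colony fill fraction: %.2f" % uptake_history[-1])
print("final spread (std of loads): %.3f" % spread_history[-1])
```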
A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.
Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang
2017-01-01
Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection for noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and considers the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements of fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.
Fundamental limits in 3D landmark localization.
Rohr, Karl
2005-01-01
This work analyses the accuracy of estimating the location of 3D landmarks and characteristic image structures. Based on nonlinear estimation theory, we study the minimal stochastic errors of the position estimate caused by noisy data. Given analytic models of the image intensities, we derive closed-form expressions for the Cramér-Rao bound for different 3D structures such as 3D edges, 3D ridges, 3D lines, and 3D blobs. It turns out that the precision of localization depends on the noise level, the size of the region-of-interest, the width of the intensity transitions, as well as on other parameters describing the considered image structure. The derived lower bounds can serve as benchmarks and the performance of existing algorithms can be compared with them. To give an impression of the achievable accuracy, numeric examples are presented. Moreover, by experimental investigations we demonstrate that the derived lower bounds can be achieved by fitting parametric intensity models directly to the image data.
A novel global Harmony Search method based on Ant Colony Optimisation algorithm
NASA Astrophysics Data System (ADS)
Fouad, Allouani; Boukhetala, Djamel; Boudjema, Fares; Zenger, Kai; Gao, Xiao-Zhi
2016-03-01
The Global-best Harmony Search (GHS) is a recently developed stochastic optimisation algorithm that hybridises the Harmony Search (HS) method with the concept of swarm intelligence from particle swarm optimisation (PSO) to enhance its performance. In this article, a new optimisation algorithm called GHSACO is developed by incorporating the GHS with the Ant Colony Optimisation algorithm (ACO). Our method introduces a novel improvisation process, which is different from that of the GHS in the following aspects. (i) A modified harmony memory (HM) representation and conception. (ii) The use of a global random switching mechanism to govern the choice between the ACO and GHS. (iii) An additional memory consideration selection rule using the ACO random proportional transition rule with a pheromone trail update mechanism. The proposed GHSACO algorithm has been applied to various benchmark functions and constrained optimisation problems. Simulation results demonstrate that it can find significantly better solutions when compared with the original HS and some of its variants.
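For orientation, here is a minimal sketch of the baseline Harmony Search improvisation loop that GHSACO builds on; the global random switch to an ACO-style rule and the pheromone update are only indicated in comments, and all hyperparameters and the test function are illustrative assumptions, not taken from the article.

```python
# Minimal Harmony Search sketch; the GHSACO-specific parts are not implemented.
import numpy as np

def harmony_search(f, bounds, hm_size=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    hm = rng.uniform(lo, hi, size=(hm_size, dim))            # harmony memory
    fit = np.array([f(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for d in range(dim):
            # GHSACO would apply a global random switch here, routing the choice
            # either to the HS rule below or to an ACO random proportional
            # transition rule with pheromone trails; only plain HS is sketched.
            if rng.random() < hmcr:                           # memory consideration
                new[d] = hm[rng.integers(hm_size), d]
                if rng.random() < par:                        # pitch adjustment
                    new[d] += rng.uniform(-bw, bw) * (hi[d] - lo[d])
            else:                                             # random selection
                new[d] = rng.uniform(lo[d], hi[d])
        new = np.clip(new, lo, hi)
        worst = np.argmax(fit)
        if f(new) < fit[worst]:                               # replace worst harmony
            hm[worst], fit[worst] = new, f(new)
    best = np.argmin(fit)
    return hm[best], fit[best]

sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = harmony_search(sphere, bounds=[(-5, 5)] * 5)
print(x_best, f_best)
```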
Investment risk in bioenergy crops
Skevas, Theodoros; Swinton, Scott M.; Tanner, Sophia; ...
2015-11-18
Here, perennial, cellulosic bioenergy crops represent a risky investment. The potential for adoption of these crops depends not only on mean net returns, but also on the associated probability distributions and on the risk preferences of farmers. Using 6-year observed crop yield data from highly productive and marginally productive sites in the southern Great Lakes region and assuming risk neutrality, we calculate expected breakeven biomass yields and prices compared to corn (Zea mays L.) as a benchmark. Next we develop Monte Carlo budget simulations based on stochastic crop prices and yields. The crop yield simulations decompose yield risk into three components: crop establishment survival, time to maturity, and mature yield variability. Results reveal that corn with harvest of grain and 38% of stover (as cellulosic bioenergy feedstock) is both the most profitable and the least risky investment option. It dominates all perennial systems considered across a wide range of farmer risk preferences. Although not currently attractive for profit-oriented farmers who are risk neutral or risk averse, perennial bioenergy crops …
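A toy version of such a Monte Carlo budget comparison is sketched below; the distributions, costs, and prices are invented for illustration and are not the study's data or its fitted yield-risk components.

```python
# Toy Monte Carlo budget sketch (invented numbers, not the study's data):
# draw stochastic yields and prices, compare net returns of a perennial biomass
# crop against a corn benchmark, and estimate an approximate breakeven price.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Corn benchmark: stochastic yield (Mg/ha) and price ($/Mg), fixed cost ($/ha).
corn_yield = rng.normal(10.0, 1.5, n)
corn_price = rng.lognormal(np.log(160), 0.15, n)
corn_net = corn_yield * corn_price - 900.0

# Perennial biomass crop: establishment survival, mature yield, stochastic price.
survives = rng.random(n) < 0.85
mature_yield = rng.normal(12.0, 3.0, n).clip(min=0) * survives
biomass_price = rng.lognormal(np.log(60), 0.20, n)
biomass_net = mature_yield * biomass_price - 450.0

print("P(biomass beats corn):", np.mean(biomass_net > corn_net))
# Breakeven biomass price: price at which expected biomass net return equals corn's.
breakeven = (corn_net.mean() + 450.0) / mature_yield.mean()
print(f"approx. breakeven biomass price: ${breakeven:.0f}/Mg")
```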
Optimization of High-Dimensional Functions through Hypercube Evaluation
Abiyev, Rahib H.; Tunay, Mustafa
2015-01-01
A novel learning algorithm for solving global numerical optimization problems is proposed. The proposed learning algorithm is an intense stochastic search method based on evaluation and optimization of a hypercube and is called the hypercube optimization (HO) algorithm. The HO algorithm comprises the initialization and evaluation process, the displacement-shrink process, and the search space process. The initialization and evaluation process initializes an initial solution and evaluates the solutions in a given hypercube. The displacement-shrink process determines displacement and evaluates objective functions using new points, and the search space process determines the next hypercube using certain rules and evaluates the new solutions. The algorithms for these processes have been designed and presented in the paper. The designed HO algorithm is tested on specific benchmark functions. Simulations of the HO algorithm have been performed for optimization of functions of 1000, 5000, or even 10,000 dimensions. The comparative simulation results with other approaches demonstrate that the proposed algorithm is a potential candidate for optimization of both low- and high-dimensional functions. PMID:26339237
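The following is a hedged sketch of the general shrinking-hypercube idea (sample inside a hypercube, recenter it on the best point, shrink it); it is not the published HO algorithm, and every constant, including the shrink factor and sample count, is an illustrative assumption.

```python
# Hedged sketch of a shrinking-hypercube random search (not the published HO algorithm).
import numpy as np

def hypercube_opt(f, dim, lo=-5.0, hi=5.0, n_points=200, shrink=0.9,
                  iters=300, rng=None):
    rng = rng or np.random.default_rng(1)
    center = rng.uniform(lo, hi, dim)
    half_width = (hi - lo) / 2.0
    best_x, best_f = center, f(center)
    for _ in range(iters):
        pts = rng.uniform(center - half_width, center + half_width, (n_points, dim))
        vals = np.apply_along_axis(f, 1, pts)
        i = np.argmin(vals)
        if vals[i] < best_f:                 # displacement: recenter on the new best
            best_x, best_f = pts[i], vals[i]
            center = best_x
        half_width *= shrink                 # shrink the search hypercube
    return best_x, best_f

rosenbrock = lambda x: float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2))
x, fx = hypercube_opt(rosenbrock, dim=10)
print(fx)
```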
NASA Astrophysics Data System (ADS)
Oladyshkin, Sergey; Class, Holger; Helmig, Rainer; Nowak, Wolfgang
2010-05-01
CO2 storage in geological formations is currently being discussed intensively as a technology for mitigating CO2 emissions. However, any large-scale application requires a thorough analysis of the potential risks. Current numerical simulation models are too expensive for probabilistic risk analysis and for stochastic approaches based on brute-force repeated simulation. Even single deterministic simulations may require parallel high-performance computing. The multiphase flow processes involved are too non-linear for quasi-linear error propagation and other simplified stochastic tools. As an alternative approach, we propose a massive stochastic model reduction based on the probabilistic collocation method. The model response is projected onto an orthogonal basis of higher-order polynomials to approximate dependence on uncertain parameters (porosity, permeability etc.) and design parameters (injection rate, depth etc.). This allows for a non-linear propagation of model uncertainty affecting the predicted risk, ensures fast computation and provides a powerful tool for combining design variables and uncertain variables into one approach based on an integrative response surface. Thus, the design task of finding optimal injection regimes explicitly includes uncertainty, which leads to robust designs of the non-linear system that minimize failure probability and provide valuable support for risk-informed management decisions. We validate our proposed stochastic approach by Monte Carlo simulation using a common 3D benchmark problem (Class et al. Computational Geosciences 13, 2009). A reasonable compromise between computational efforts and precision was reached already with second-order polynomials. In our case study, the proposed approach yields a significant computational speedup by a factor of 100 compared to Monte Carlo simulation. We demonstrate that, due to the non-linearity of the flow and transport processes during CO2 injection, including uncertainty in the analysis leads to a systematic and significant shift of predicted leakage rates towards higher values compared with deterministic simulations, affecting both risk estimates and the design of injection scenarios. This implies that neglecting uncertainty can be a strong simplification for modeling CO2 injection, and the consequences can be stronger than when neglecting several physical phenomena (e.g. phase transition, convective mixing, capillary forces etc.). The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart. Keywords: polynomial chaos; CO2 storage; multiphase flow; porous media; risk assessment; uncertainty; integrative response surfaces
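The general surrogate idea can be illustrated with a minimal sketch: fit a second-order polynomial response surface to a handful of "expensive" model runs, then propagate parameter uncertainty by cheap Monte Carlo on the surrogate. The stand-in model, the least-squares fit (rather than true probabilistic collocation on quadrature nodes), and all parameter ranges below are assumptions, not the authors' CO2 simulator or setup.

```python
# Minimal response-surface sketch (toy stand-in model, least-squares fit).
import numpy as np
from itertools import combinations_with_replacement

def expensive_model(porosity, log_perm):          # toy stand-in, not a CO2 simulator
    return np.exp(0.5 * log_perm) / (porosity + 0.05)

def poly2_features(X):
    cols = [np.ones(len(X))]
    for i in range(X.shape[1]):
        cols.append(X[:, i])
    for i, j in combinations_with_replacement(range(X.shape[1]), 2):
        cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

rng = np.random.default_rng(3)
# A few "expensive" runs at sampled parameter values (porosity, log-permeability).
X_train = np.column_stack([rng.uniform(0.1, 0.3, 15), rng.normal(-13.0, 0.5, 15)])
y_train = expensive_model(X_train[:, 0], X_train[:, 1])
coef, *_ = np.linalg.lstsq(poly2_features(X_train), y_train, rcond=None)

# Cheap Monte Carlo on the second-order surrogate.
X_mc = np.column_stack([rng.uniform(0.1, 0.3, 100_000), rng.normal(-13.0, 0.5, 100_000)])
y_mc = poly2_features(X_mc) @ coef
print("surrogate mean and 95th percentile:", y_mc.mean(), np.percentile(y_mc, 95))
```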
Mak, Chi H; Pham, Phuong; Afif, Samir A; Goodman, Myron F
2015-09-01
Enzymes that rely on random walk to search for substrate targets in a heterogeneously dispersed medium can leave behind complex spatial profiles of their catalyzed conversions. The catalytic signatures of these random-walk enzymes are the result of two coupled stochastic processes: scanning and catalysis. Here we develop analytical models to understand the conversion profiles produced by these enzymes, comparing an intrusive model, in which scanning and catalysis are tightly coupled, against a loosely coupled passive model. Diagrammatic theory and path-integral solutions of these models revealed clearly distinct predictions. Comparison to experimental data from catalyzed deaminations deposited on single-stranded DNA by the enzyme activation-induced deoxycytidine deaminase (AID) demonstrates that catalysis and diffusion are strongly intertwined, where the chemical conversions give rise to new stochastic trajectories that were absent if the substrate DNA was homogeneous. The C→U deamination profiles in both analytical predictions and experiments exhibit a strong contextual dependence, where the conversion rate of each target site is strongly contingent on the identities of other surrounding targets, with the intrusive model showing an excellent fit to the data. These methods can be applied to deduce sequence-dependent catalytic signatures of other DNA modification enzymes, with potential applications to cancer, gene regulation, and epigenetics.
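An illustrative lattice simulation of the scanning-plus-catalysis picture is sketched below; it is not the paper's diagrammatic or path-integral model. The "intrusive" variant here simply lets each catalytic event perturb the trajectory (the enzyme rebinds at a random position), while the "passive" variant leaves the walk unchanged; all parameters are invented.

```python
# Illustrative lattice sketch of a random-walk enzyme converting target sites.
import numpy as np

def conversion_profile(targets, n_steps=20_000, p_cat=0.05, intrusive=False, seed=0):
    rng = np.random.default_rng(seed)
    length = len(targets)
    converted = np.zeros(length, dtype=bool)
    pos = rng.integers(length)
    for _ in range(n_steps):
        pos = (pos + rng.choice((-1, 1))) % length        # unbiased 1D scan
        if targets[pos] and not converted[pos] and rng.random() < p_cat:
            converted[pos] = True
            if intrusive:                                  # catalysis re-routes the walk
                pos = rng.integers(length)
    return converted

rng = np.random.default_rng(1)
targets = rng.random(400) < 0.1                            # ~10% of sites are targets
for mode in (False, True):
    frac = conversion_profile(targets, intrusive=mode).sum() / targets.sum()
    print(("intrusive" if mode else "passive "), f"fraction converted: {frac:.2f}")
```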
Nonequilibrium forces between atoms and dielectrics mediated by a quantum field
NASA Astrophysics Data System (ADS)
Behunin, Ryan O.; Hu, Bei-Lok
2011-07-01
In this paper we give a first principles microphysics derivation of the nonequilibrium forces between an atom, treated as a three-dimensional harmonic oscillator, and a bulk dielectric medium modeled as a continuous lattice of oscillators coupled to a reservoir. We assume no direct interaction between the atom and the medium but there exist mutual influences transmitted via a common electromagnetic field. By employing concepts and techniques of open quantum systems we introduce coarse-graining to the physical variables—the medium, the quantum field, and the atom’s internal degrees of freedom, in that order—to extract their averaged effects from the lowest tier progressively to the top tier. The first tier of coarse-graining provides the averaged effect of the medium upon the field, quantified by a complex permittivity (in the frequency domain) describing the response of the dielectric to the field in addition to its back action on the field through a stochastic forcing term. The last tier of coarse-graining over the atom’s internal degrees of freedom results in an equation of motion for the atom’s center of mass from which we can derive the force on the atom. Our nonequilibrium formulation provides a fully dynamical description of the atom’s motion including back-action effects from all other relevant variables concerned. In the long-time limit we recover the known results for the atom-dielectric force when the combined system is in equilibrium or in a nonequilibrium stationary state.
On the efficiency of FES cycling: a framework and systematic review.
Hunt, K J; Fang, J; Saengsuwan, J; Grob, M; Laubacher, M
2012-01-01
Research and development in the art of cycling using functional electrical stimulation (FES) of the paralysed leg muscles has been going on for around thirty years. A range of physiological benefits has been observed in clinical studies but an outstanding problem with FES-cycling is that efficiency and power output are very low. The present work had the following aims: (i) to provide a tutorial introduction to a novel framework and methods of estimation of metabolic efficiency using example data sets, and to propose benchmark measures for evaluating FES-cycling performance; (ii) to systematically review the literature pertaining specifically to the metabolic efficiency of FES-cycling, to analyse the observations and possible explanations for the low efficiency, and to pose hypotheses for future studies which aim to improve performance. We recommend the following as benchmark measures for assessment of the performance of FES-cycling: (i) total work efficiency, delta efficiency and stimulation cost; (ii) we recommend, further, that these benchmark measures be complemented by mechanical measures of maximum power output, sustainable steady-state power output and endurance. Performance assessments should be carried out at a well-defined operating point, i.e. under conditions of well controlled work rate and cadence, because these variables have a strong effect on energy expenditure. Future work should focus on the two main factors which affect FES-cycling performance, namely: (i) unfavourable biomechanics, i.e. crude recruitment of muscle groups, non-optimal timing of muscle activation, and lack of synergistic and antagonistic joint control; (ii) non-physiological recruitment of muscle fibres, i.e. mixed recruitment of fibres of different type and deterministic constant-frequency stimulation. We hypothesise that the following areas may bring better FES-cycling performance: (i) study of alternative stimulation strategies for muscle activation including irregular stimulation patterns (e.g. doublets, triplets, stochastic patterns) and variable frequency stimulation trains, where it appears that increasing frequency over time may be profitable; (ii) study of better timing parameters for the stimulated muscle groups, and addition of more muscle groups: this path may be approached using EMG studies and constrained numerical optimisation employing dynamic models; (iii) development of optimal stimulation protocols for muscle reconditioning and FES-cycle training.
Oduro-Appiah, Kwaku; Scheinberg, Anne; Mensah, Anthony; Afful, Abraham; Boadu, Henry Kofi; de Vries, Nanne
2017-11-01
This article assesses the performance of the city of Accra, Ghana, in municipal solid waste management as defined by the integrated sustainable waste management framework. The article reports on a participatory process to socialise the Wasteaware benchmark indicators and apply them to an upgraded set of data and information. The process has engaged 24 key stakeholders for 9 months, to diagram the flow of materials and benchmark three physical components and three governance aspects of the city's municipal solid waste management system. The results indicate that Accra is well below some other lower middle-income cities regarding sustainable modernisation of solid waste services. Collection coverage and capture of 75% and 53%, respectively, are a disappointing result, despite (or perhaps because of) 20 years of formal private sector involvement in service delivery. A total of 62% of municipal solid waste continues to be disposed of in controlled landfills and the reported recycling rate of 5% indicates both a lack of good measurement and a lack of interest in diverting waste from disposal. Drains, illegal dumps and beaches are choked with discarded bottles and plastic packaging. The quality of collection, disposal and recycling score between low and medium on the Wasteaware indicators, and the scores for user inclusivity, financial sustainability and local institutional coherence are low. The analysis suggests that waste and recycling would improve through greater provider inclusivity, especially the recognition and integration of the informal sector, and interventions that respond to user needs for more inclusive decision-making.
On oscillating flows in randomly heterogeneous porous media.
Trefry, M G; McLaughlin, D; Metcalfe, G; Lester, D; Ord, A; Regenauer-Lieb, K; Hobbs, B E
2010-01-13
The emergence of structure in reactive geofluid systems is of current interest. In geofluid systems, the fluids are supported by a porous medium whose physical and chemical properties may vary in space and time, sometimes sharply, and which may also evolve in reaction with the local fluids. Geofluids may also experience pressure and temperature conditions within the porous medium that drive their momentum relations beyond the normal Darcy regime. Furthermore, natural geofluid systems may experience forcings that are periodic in nature, or at least episodic. The combination of transient forcing, near-critical fluid dynamics and heterogeneous porous media yields a rich array of emergent geofluid phenomena that are only now beginning to be understood. One of the barriers to forward analysis in these geofluid systems is the problem of data scarcity. It is most often the case that fluid properties are reasonably well known, but that data on porous medium properties are measured with much less precision and spatial density. It is common to seek to perform an estimation of the porous medium properties by an inverse approach, that is, by expressing porous medium properties in terms of observed fluid characteristics. In this paper, we move toward such an inversion for the case of a generalized geofluid momentum equation in the context of time-periodic boundary conditions. We show that the generalized momentum equation results in frequency-domain responses that are governed by a second-order equation which is amenable to numerical solution. A stochastic perturbation approach demonstrates that frequency-domain responses of the fluids migrating in heterogeneous domains have spatial spectral densities that can be expressed in terms of the spectral densities of porous media properties. This journal is © 2010 The Royal Society
Spatially Controlled Relay Beamforming
NASA Astrophysics Data System (ADS)
Kalogerias, Dionysios
This thesis is about fusion of optimal stochastic motion control and physical layer communications. Distributed, networked communication systems, such as relay beamforming networks (e.g., Amplify & Forward (AF)), are typically designed without explicitly considering how the positions of the respective nodes might affect the quality of the communication. Optimum placement of network nodes, which could potentially improve the quality of the communication, is not typically considered. However, in most practical settings in physical layer communications, such as relay beamforming, the Channel State Information (CSI) observed by each node, per channel use, although it might be (modeled as) random, is both spatially and temporally correlated. It is, therefore, reasonable to ask if and how the performance of the system could be improved by (predictively) controlling the positions of the network nodes (e.g., the relays), based on causal side (CSI) information, and exploiting the spatiotemporal dependencies of the wireless medium. In this work, we address this problem in the context of AF relay beamforming networks. This novel, cyber-physical system approach to relay beamforming is termed "Spatially Controlled Relay Beamforming". First, we discuss wireless channel modeling in a rigorous, Bayesian framework. Experimentally accurate and, at the same time, technically precise channel modeling is absolutely essential for designing and analyzing spatially controlled communication systems. In this work, we are interested in two distinct spatiotemporal statistical models for describing the behavior of the log-scale magnitude of the wireless channel: 1. Stationary Gaussian Fields: In this case, the channel is assumed to evolve as a stationary, Gaussian stochastic field in continuous space and discrete time (say, for instance, time slots). Under such assumptions, spatial and temporal statistical interactions are determined by a set of time- and space-invariant parameters, which completely determine the mean and covariance of the underlying Gaussian measure. This model is relatively simple to describe, and can be sufficiently characterized, at least for our purposes, both statistically and topologically. Additionally, the model is rather versatile and there is existing experimental evidence supporting its practical applicability. Our contributions are summarized in properly formulating the whole spatiotemporal model in a completely rigorous mathematical setting, under a convenient measure theoretic framework. Such a framework greatly facilitates formulation of meaningful stochastic control problems, where the wireless channel field (or a function of it) can be regarded as a stochastic optimization surface. 2. Conditionally Gaussian Fields, when conditioned on a Markovian channel state: This is a completely novel approach to wireless channel modeling. In this approach, the communication medium is assumed to behave as a partially observable (or hidden) system, where a hidden, global, temporally varying underlying stochastic process, called the channel state, affects the spatial interactions of the actual channel magnitude, evaluated at any set of locations in the plane. More specifically, we assume that, conditioned on the channel state, the wireless channel constitutes an observable, conditionally Gaussian stochastic process. The channel state evolves in time according to a known, possibly non-stationary, non-Gaussian, low-dimensional Markov kernel.
Recognizing the intractability of general nonlinear state estimation, we advocate the use of grid-based approximate nonlinear filters as an effective and robust means for recursive tracking of the channel state. We also propose a sequential spatiotemporal predictor for tracking the channel gains at any point in time and space, providing real-time sequential estimates for the respective channel gain map. In this context, our contributions are manifold. Beyond the introduction of the layered channel model described above, this line of research has resulted in a number of general, asymptotic convergence results, advancing the theory of grid-based approximate nonlinear stochastic filtering. In particular, sufficient conditions ensuring asymptotic optimality are relaxed, and, at the same time, the mode of convergence is strengthened. Although the need for such results arose from an attempt to theoretically characterize the performance of the proposed approximate methods for statistical inference, in regard to the proposed channel modeling approach, they turn out to be of fundamental importance in the areas of nonlinear estimation and stochastic control. The experimental validation of the proposed channel model, as well as the related parameter estimation problem, termed "Markovian Channel Profiling (MCP)", fundamentally important for any practical deployment, are the subject of current, ongoing research. Second, adopting the first of the two aforementioned channel modeling approaches, we consider the spatially controlled relay beamforming problem for an AF network with a single source, a single destination, and multiple relay nodes, controlled at will. (Abstract shortened by ProQuest.)
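The generic construction behind a grid-based (point-mass) filter can be sketched in a few lines: discretize the hidden state, propagate the belief through the Markov transition kernel, and reweight by the observation likelihood. The toy AR(1)-like kernel, grid, and Gaussian likelihood below are assumptions for illustration; they are not the thesis's channel model or its convergence results.

```python
# Minimal grid-based (point-mass) filter for a hidden Markov "channel state".
import numpy as np

grid = np.linspace(-3.0, 3.0, 101)                       # discretized hidden state
# Transition kernel: discretized drift toward 0 with Gaussian innovations.
diff = grid[None, :] - 0.9 * grid[:, None]
P = np.exp(-0.5 * (diff / 0.3) ** 2)
P /= P.sum(axis=1, keepdims=True)

def grid_filter_step(belief, obs, obs_sigma=0.5):
    predicted = belief @ P                               # Chapman-Kolmogorov prediction
    likelihood = np.exp(-0.5 * ((obs - grid) / obs_sigma) ** 2)
    posterior = predicted * likelihood                   # Bayes update on the grid
    return posterior / posterior.sum()

rng = np.random.default_rng(0)
true_state, belief = 1.0, np.full(len(grid), 1.0 / len(grid))
for _ in range(50):
    true_state = 0.9 * true_state + rng.normal(0, 0.3)
    obs = true_state + rng.normal(0, 0.5)
    belief = grid_filter_step(belief, obs)
print("true:", round(true_state, 3), "filter mean:", round(float(belief @ grid), 3))
```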
Transonic Flutter Suppression Control Law Design, Analysis and Wind-Tunnel Results
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1999-01-01
The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective of investigating the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. This paper presents the flutter suppression control law design process, numerical nonlinear simulation, and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using classical and minimax techniques are described. A unified general formulation and solution for the minimax approach, based on steady-state differential game theory, is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws, when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and in a heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.
Sabin-to-Mahoney Transition Model of Quasispecies Replication
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-05-31
Qspp is an agent-based stochastic simulation model of the Poliovirus Sabin-to-Mahoney transition. This code simulates a cell-to-cell model of Poliovirus replication. The model tracks genotypes (virus genomes) as they are replicated in cells, and as the cells burst and release particles into the medium of a culture dish. An inoculum is then taken from the pool of virions and is used to inoculate cells on a new dish. This process repeats. The Sabin genotype comprises the initial inoculum. Nucleotide positions that match the Sabin1 (vaccine strain) and Mahoney (wild type) genotypes, as well as the neurovirulent phenotype (from the literature), are enumerated as constants.
Transport behaviors of locally fractional coupled Brownian motors with fluctuating interactions
NASA Astrophysics Data System (ADS)
Wang, Huiqi; Ni, Feixiang; Lin, Lifeng; Lv, Wangyong; Zhu, Hongqiang
2018-09-01
In complex viscoelastic media, coupled systems commonly absorb and desorb surrounding Brownian particles at random. The conventional method is to model a variable-mass system driven by both multiplicative and additive noises. In this paper, an improved mathematical model is created based on generalized Langevin equations (GLE) to characterize the random interaction with a locally fluctuating number of coupled particles in elastically coupled fractional Brownian motors (FBM). By numerical simulations, the effect of fluctuating interactions on collective transport behaviors is investigated, and some abnormal phenomena, such as cooperative behaviors, stochastic resonance (SR) and anomalous transport, are observed in the regime of sub-diffusion.
Lightning-Discharge Initiation as a Noise-Induced Kinetic Transition
NASA Astrophysics Data System (ADS)
Iudin, D. I.
2017-10-01
The electric fields observed in thunderclouds have peak values one order of magnitude smaller than the electric strength of air. This fact makes lightning-discharge initiation one of the most intriguing problems of thunderstorm electricity. In this work, the lightning initiation in a thundercloud is considered as a noise-induced kinetic transition. The stochastic electric field of the charged hydrometeors is the noise source. The considered kinetic transition has some features which distinguish it from other lightning-initiation mechanisms. First, the dynamic realization of this transition, which is due to interaction of the electron and ion components, extends over a time significantly exceeding the spark-discharge development time. In this case, the fast attachment of electrons generated by supercritical bursts of the electric field of hydrometeors is balanced over long time intervals by electron-release processes when the negative ions are destroyed. Second, an important role in the transition kinetics is played by the stochastic drift of electrons and ions caused by the small-scale fluctuations of the field of charged hydrometeors. From the formal mathematical viewpoint, this stochastic drift is indistinguishable from scalar-impurity advection in a turbulent flow. In this work, it is shown that the efficiency of "advective mixing" is several orders of magnitude greater than that of ordinary diffusion. Third, the considered transition leads to a sharp increase in the conductivity in exponentially rare compact regions of space against the background of vanishingly small variations in the average conductivity of the medium. In turn, the spots with increased conductivity are polarized in the mean field, followed by streamer initiation and discharge contraction.
An approach to forecasting health expenditures, with application to the U.S. Medicare system.
Lee, Ronald; Miller, Timothy
2002-10-01
To quantify uncertainty in forecasts of health expenditures. Stochastic time series models are estimated for historical variations in fertility, mortality, and health spending per capita in the United States, and used to generate stochastic simulations of the growth of Medicare expenditures. Individual health spending is modeled to depend on the number of years until death. A simple accounting model is developed for forecasting health expenditures, using the U.S. Medicare system as an example. Medicare expenditures are projected to rise from 2.2 percent of GDP (gross domestic product) to about 8 percent of GDP by 2075. This increase is due in equal measure to increasing health spending per beneficiary and to population aging. The traditional projection method constructs high, medium, and low scenarios to assess uncertainty, an approach that has many problems. Using stochastic forecasting, we find a 95 percent probability that Medicare spending in 2075 will fall between 4 percent and 18 percent of GDP, indicating a wide band of uncertainty. Although there is substantial uncertainty about future mortality decline, it contributed little to uncertainty about future Medicare spending, since lower mortality both raises the number of elderly, tending to raise spending, and is associated with improved health of the elderly, tending to reduce spending. Uncertainty about fertility, by contrast, leads to great uncertainty about the future size of the labor force, and therefore adds importantly to uncertainty about the health-share of GDP. In the shorter term, the major source of uncertainty is health spending per capita. History is a valuable guide for quantifying our uncertainty about future health expenditures. The probabilistic model we present has several advantages over the high-low scenario approach to forecasting. It indicates great uncertainty about future Medicare expenditures relative to GDP.
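The flavor of such a stochastic projection can be conveyed with a toy simulation: let the excess of per-beneficiary spending growth over GDP growth follow a noisy process, simulate many paths, and report a probability interval for the final spending share. All numbers below are invented, and the paper's actual model additionally tracks fertility, mortality, and time-until-death effects.

```python
# Toy stochastic projection of a health-spending share of GDP (invented numbers).
import numpy as np

rng = np.random.default_rng(7)
n_sims, n_years = 5_000, 75
share = np.full(n_sims, 0.022)                      # starting share of GDP (2.2%)
for _ in range(n_years):
    excess = rng.normal(0.01, 0.015, n_sims)        # spending growth minus GDP growth
    share *= (1.0 + excess)
share *= 2.0                                        # crude deterministic aging factor
lo, hi = np.percentile(share, [2.5, 97.5])
print(f"median {np.median(share):.1%}, 95% interval [{lo:.1%}, {hi:.1%}] of GDP")
```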
Application of Stochastic and Deterministic Approaches to Modeling Interstellar Chemistry
NASA Astrophysics Data System (ADS)
Pei, Yezhe
This work is about simulations of interstellar chemistry using the deterministic rate equation (RE) method and the stochastic moment equation (ME) method. Our interest is the primordial, metal-poor interstellar medium (ISM), in which the so-called “Population II” stars could have formed during the “Epoch of Reionization” in the early universe. We build a gas phase model using the RE scheme to describe the ionization-powered interstellar chemistry. We demonstrate that OH replaces CO as the most abundant metal-bearing molecule in such interstellar clouds of the early universe. Grain surface reactions play an important role in the studies of astrochemistry. But the lack of an accurate yet effective simulation method still presents a challenge, especially for large, practical gas-grain systems. We develop a hybrid scheme of moment equations and rate equations (HMR) for large gas-grain networks to model astrochemical reactions in the interstellar clouds. Specifically, we have used a large chemical gas-grain model, with stochastic moment equations to treat the surface chemistry and deterministic rate equations to treat the gas phase chemistry, to simulate astrochemical systems such as the ISM in the Milky Way, the Large Magellanic Cloud (LMC) and Small Magellanic Cloud (SMC). We compare the results to those of pure rate equations and modified rate equations and present a discussion about how moment equations improve our theoretical modeling and how the abundances of the assorted species are changed by varying metallicity. We also model the observed composition of H2O, CO and CO2 ices toward Young Stellar Objects in the LMC and show that the HMR method gives a better match to the observation than the pure RE method.
Stochastic Reconnection for Large Magnetic Prandtl Numbers
NASA Astrophysics Data System (ADS)
Jafari, Amir; Vishniac, Ethan T.; Kowal, Grzegorz; Lazarian, Alex
2018-06-01
We consider stochastic magnetic reconnection in high-β plasmas with large magnetic Prandtl numbers, Pr_m > 1. For large Pr_m, field line stochasticity is suppressed at very small scales, impeding diffusion. In addition, viscosity suppresses very small-scale differential motions and therefore also the local reconnection. Here we consider the effect of high magnetic Prandtl numbers on the global reconnection rate in a turbulent medium and provide a diffusion equation for the magnetic field lines considering both resistive and viscous dissipation. We find that the width of the outflow region is unaffected unless Pr_m is exponentially larger than the Reynolds number Re. The ejection velocity of matter from the reconnection region is also unaffected by viscosity unless Re ∼ 1. By these criteria the reconnection rate in typical astrophysical systems is almost independent of viscosity. This remains true for reconnection in quiet environments where current sheet instabilities drive reconnection. However, if Pr_m > 1, viscosity can suppress small-scale reconnection events near and below the Kolmogorov or viscous damping scale. This will produce a threshold for the suppression of large-scale reconnection by viscosity when Pr_m > √(Re). In any case, for Pr_m > 1 this leads to a flattening of the magnetic fluctuation power spectrum, so that its spectral index is ∼ -4/3 for length scales between the viscous dissipation scale and eddies larger by roughly Pr_m^(3/2). Current numerical simulations are insensitive to this effect. We suggest that the dependence of reconnection on viscosity in these simulations may be due to insufficient resolution for the turbulent inertial range rather than a guide to the large Re limit.
NASA Astrophysics Data System (ADS)
Vervatis, Vassilios; De Mey, Pierre; Ayoub, Nadia; Kailas, Marios; Sofianos, Sarantis
2017-04-01
The project entitled Stochastic Coastal/Regional Uncertainty Modelling (SCRUM) aims at strengthening CMEMS in the areas of ocean uncertainty quantification, ensemble consistency verification and ensemble data assimilation. The project has been initiated by the University of Athens and LEGOS/CNRS research teams, in the framework of CMEMS Service Evolution. The work is based on stochastic modelling of ocean physics and biogeochemistry in the Bay of Biscay, on an identical sub-grid configuration of the IBI-MFC system in its latest CMEMS operational version V2. In a first step, we use a perturbed tendencies scheme to generate ensembles describing uncertainties in the open ocean and on the shelf, focusing on upper ocean processes. In a second step, we introduce two methodologies (i.e. rank histograms and array modes) aimed at checking the consistency of the above ensembles with respect to TAC data and arrays. Preliminary results highlight that wind uncertainties dominate all other atmosphere-ocean sources of model errors. The ensemble spread in medium-range ensembles is approximately 0.01 m for SSH and 0.15 °C for SST, though these values vary depending on season and cross-shelf region. Ecosystem model uncertainties emerging from perturbations in physics appear to be moderately larger than those emerging from perturbations of the concentrations of the biogeochemical compartments, resulting in a total chlorophyll spread of about 0.01 mg m-3. First consistency results show that the model ensemble and the pseudo-ensemble of OSTIA (L4) observation SSTs appear to exhibit nonzero joint probabilities with each other since error vicinities overlap. Rank histograms show that the model ensemble is initially under-dispersive, though results improve in the context of seasonal-range ensembles.
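A rank histogram of the kind mentioned here can be computed with a few lines: for each observation, count how many ensemble members fall below it and histogram the resulting ranks; a flat histogram indicates a statistically consistent ensemble, while a U-shape indicates under-dispersion. The sketch below uses synthetic numbers standing in for the SST ensemble and OSTIA observations, not the project's data.

```python
# Minimal rank-histogram sketch for ensemble consistency checking.
import numpy as np

def rank_histogram(ensemble, observations):
    """ensemble: (n_members, n_cases); observations: (n_cases,)."""
    n_members = ensemble.shape[0]
    ranks = np.sum(ensemble < observations[None, :], axis=0)   # rank of obs per case
    return np.bincount(ranks, minlength=n_members + 1)

rng = np.random.default_rng(0)
truth = rng.normal(size=500)
# Deliberately under-dispersive ensemble: spread smaller than the obs error.
ensemble = truth[None, :] + rng.normal(scale=0.5, size=(20, 500))
obs = truth + rng.normal(scale=1.0, size=500)
print(rank_histogram(ensemble, obs))   # expect piled-up counts at the extreme ranks
```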
Tools used by the insurance industry to assess risk from hydroclimatic extremes
NASA Astrophysics Data System (ADS)
Higgs, Stephanie; McMullan, Caroline
2016-04-01
Probabilistic catastrophe models are widely used within the insurance industry to assess and price the risk of natural hazards to individual residences through to portfolios of millions of properties. Over the relatively short period that catastrophe models have been available (almost 30 years), the insurance industry has built up a financial resilience to key natural hazards in certain areas (e.g. US tropical cyclone, European extra-tropical cyclone and flood). However, due to the rapidly expanding global population and increase in wealth, together with uncertainties in the behaviour of meteorological phenomena introduced by climate change, the domain in which natural hazards impact society is growing. As a result, the insurance industry faces new challenges in assessing the risk and uncertainty from natural hazards. As a catastrophe modelling company, AIR Worldwide has a toolbox of options available to help the insurance industry assess extreme climatic events and their associated uncertainty. Here we discuss several of these tools: from helping analysts understand how uncertainty is inherently built into probabilistic catastrophe models, to understanding alternative stochastic catalogs for tropical cyclones based on climate conditioning. These range from the use of stochastic extreme disaster events, such as those provided through AIR's catalogs or through the Lloyd's of London marketplace (RDSs), which provide useful benchmarks for the loss exceedance probability and tail-at-risk metrics output by catastrophe models, to the visualisation of 1000+ year event footprints and hazard intensity maps. Ultimately, the increased transparency of catastrophe models and the flexibility of a software platform that allows for customisation of modelled and non-modelled risks will drive a greater understanding of extreme hydroclimatic events within the insurance industry.
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-11-02
We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above-mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.
Size Evolution and Stochastic Models: Explaining Ostracod Size through Probabilistic Distributions
NASA Astrophysics Data System (ADS)
Krawczyk, M.; Decker, S.; Heim, N. A.; Payne, J.
2014-12-01
The biovolume of animals has functioned as an important benchmark for measuring evolution throughout geologic time. In our project, we examined the observed average body size of ostracods over time in order to understand the mechanism of size evolution in these marine organisms. The body size of ostracods has varied since the beginning of the Ordovician, when the first true ostracods appeared. We created a stochastic branching model to generate possible evolutionary trees of ostracod size. Using stratigraphic ranges for ostracods compiled from over 750 genera in the Treatise on Invertebrate Paleontology, we calculated overall speciation and extinction rates for our model. At each timestep in our model, new lineages can evolve or existing lineages can become extinct. Newly evolved lineages are assigned sizes based on their parent genera. We parameterized our model to generate neutral and directional changes in ostracod size to compare with the observed data. New sizes were chosen via a normal distribution, and the neutral model selected new size differentials centered on zero, allowing for an equal chance of larger or smaller ostracods at each speciation. Conversely, the directional model centered the distribution on a negative value, giving a larger chance of smaller ostracods. Our data strongly suggest that the overall direction of ostracod evolution has been following a model that directionally pushes mean ostracod size down, shying away from a neutral model. Our model was able to match the magnitude of size decrease. Our models showed a constant linear decrease, while the actual data had a much more rapid initial rate followed by a constant size. The nuance of the observed trends ultimately suggests a more complex mechanism of size evolution. In conclusion, probabilistic methods can provide valuable insight into possible evolutionary mechanisms determining size evolution in ostracods.
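A minimal branching sketch of this kind of experiment is shown below; it is not the authors' calibrated model, and the speciation/extinction probabilities, drift values, and standard deviation are illustrative assumptions used only to contrast a neutral with a directional size drift.

```python
# Minimal stochastic branching sketch comparing neutral vs. directional size drift.
import numpy as np

def simulate_sizes(drift, n_steps=200, p_spec=0.1, p_ext=0.08, sd=0.05, seed=0):
    rng = np.random.default_rng(seed)
    sizes = [0.0]                                   # log biovolume of founding lineage
    means = []
    for _ in range(n_steps):
        new = []
        for s in sizes:
            if rng.random() < p_ext:                # lineage goes extinct
                continue
            new.append(s)
            if rng.random() < p_spec:               # speciation: offspring gets new size
                new.append(s + rng.normal(drift, sd))
        sizes = new if new else [0.0]               # re-seed if the clade dies out
        means.append(np.mean(sizes))
    return np.array(means)

print("neutral     final mean log-size:", simulate_sizes(drift=0.0)[-1])
print("directional final mean log-size:", simulate_sizes(drift=-0.05)[-1])
```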
NASA Astrophysics Data System (ADS)
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-09-01
Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively costless auxiliary simulator, we can effectively fill in the missing spatial data at the required times on the fly by a statistical learning technique, multi-level Gaussian process regression; this has been demonstrated in previous work [1]. Building on that work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, which detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in exascale simulations.
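The gap-filling idea can be illustrated with a single-level Gaussian process regression: sparse surviving samples plus a coarse auxiliary field (used as a prior mean) reconstruct the missing fine-grid values. This is only a sketch under those assumptions, using scikit-learn; it is not the paper's multi-level scheme or its Diffusion Maps projective integration.

```python
# Single-level GP sketch of filling in "missing" fine-grid data from a coarse field.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

x = np.linspace(0.0, 1.0, 200)[:, None]
truth = np.sin(6 * np.pi * x[:, 0]) + 0.3 * x[:, 0]
coarse = np.interp(x[:, 0], x[::20, 0], truth[::20])        # cheap auxiliary field

rng = np.random.default_rng(0)
keep = rng.random(len(x)) < 0.15                             # data surviving the "failure"
gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(1e-3), normalize_y=True)
gp.fit(x[keep], (truth - coarse)[keep])                      # learn the fine-scale residual
recon = coarse + gp.predict(x)
print("RMS reconstruction error:", float(np.sqrt(np.mean((recon - truth) ** 2))))
```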
NASA Astrophysics Data System (ADS)
Li, Hechao
Accurate knowledge of the complex microstructure of a heterogeneous material is crucial for establishing quantitative structure-property relations and for predicting and optimizing its performance. X-ray tomography has provided a non-destructive means for microstructure characterization in both 3D and 4D (i.e., structural evolution over time). Traditional reconstruction algorithms such as the filtered back-projection (FBP) method or algebraic reconstruction techniques (ART) require a huge number of tomographic projections and a segmentation process before conducting microstructural quantification. This can be quite time-consuming and computationally intensive. In this thesis, a novel procedure is first presented that allows one to directly extract key structural information in the form of spatial correlation functions from limited x-ray tomography data. The key component of the procedure is the computation of a "probability map", which provides the probability of an arbitrary point in the material system belonging to a specific phase. The correlation functions of interest are then readily computed from the probability map. Using effective medium theory, accurate predictions of physical properties (e.g., elastic moduli) can be obtained. Secondly, a stochastic optimization procedure that enables one to accurately reconstruct material microstructure from a small number of x-ray tomographic projections (e.g., 20 - 40) is presented. Moreover, a stochastic procedure for multi-modal data fusion is proposed, where both X-ray projections and correlation functions computed from limited 2D optical images are fused to accurately reconstruct complex heterogeneous materials in 3D. This multi-modal reconstruction algorithm is proved to be able to integrate the complementary data to perform an excellent optimization procedure, which indicates its high efficiency in using limited structural information. Finally, the accuracy of the stochastic reconstruction procedure using limited X-ray projection data is ascertained by analyzing the microstructural degeneracy and the roughness of the energy landscape associated with different numbers of projections. Ground-state degeneracy of a microstructure is found to decrease with an increasing number of projections, which indicates a higher probability that the reconstructed configurations match the actual microstructure. The roughness of the energy landscape can also provide information about the complexity and convergence behavior of the reconstruction for given microstructures and projection numbers.
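One of the descriptors involved, the two-point correlation function S2(r) of a two-phase medium, can be computed efficiently by FFT autocorrelation of the indicator function, as sketched below on a synthetic microstructure; this is only an illustration of the descriptor, not the thesis's probability-map or reconstruction code.

```python
# Two-point correlation S2 of a binary two-phase image via FFT autocorrelation.
import numpy as np
from scipy.ndimage import gaussian_filter

def two_point_correlation(phase):
    """phase: 2D boolean array, True where the phase of interest is present."""
    f = np.fft.fftn(phase.astype(float))
    s2 = np.fft.ifftn(f * np.conj(f)).real / phase.size    # periodic autocorrelation
    return np.fft.fftshift(s2)                              # zero lag at the center

rng = np.random.default_rng(0)
smooth = gaussian_filter(rng.normal(size=(256, 256)), sigma=4)   # synthetic microstructure
img = smooth > np.percentile(smooth, 70)                         # ~30% volume fraction phase
s2 = two_point_correlation(img)
cy, cx = s2.shape[0] // 2, s2.shape[1] // 2
print("S2 at r=0 (volume fraction):", round(float(s2[cy, cx]), 3))   # ~0.30
print("S2 at large r (-> phi^2)   :", round(float(s2[cy, 0]), 3))    # ~0.09
```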
Electron distribution functions in electric field environments
NASA Technical Reports Server (NTRS)
Rudolph, Terence H.
1991-01-01
The amount of current carried by an electric discharge in its early stages of growth is strongly dependent on its geometrical shape. Discharges with a large number of branches, each funnelling current to a common stem, tend to carry more current than those with fewer branches. The fractal character of typical discharges was simulated using stochastic models based on solutions of the Laplace equation. Extension of these models requires the use of electron distribution functions to describe the behavior of electrons in the undisturbed medium ahead of the discharge. These electrons, interacting with the electric field, determine the propagation of branches in the discharge and the way in which further branching occurs. The first phase in the extension of the referenced models, the calculation of simple electron distribution functions in an air/electric field medium, is discussed. Two techniques are investigated: (1) the solution of the Boltzmann equation in homogeneous, steady-state environments, and (2) the use of Monte Carlo simulations. Distribution functions calculated from both techniques are illustrated. Advantages and disadvantages of each technique are discussed.
NASA Astrophysics Data System (ADS)
Marin, Alvaro; Lhuissier, Henri; Rossi, Massimiliano; Volk, Andreas; Kähler, Christian J.
2016-11-01
A group of objects passing through a constriction might eventually get stuck. This occurs no matter what type of object is considered: sand in an hourglass, particles in a fluid through a porous medium, or people leaving a room in panic. The case of particles in a fluid affects porous media, filters and membranes, which become unusable when clogged. Certainly the adherence of the particles to the walls and to each other is an important parameter in such systems, but even without adherence the clogging probability is far from negligible. Focusing on these low-adherence regimes, we use microfluidic devices with a bottleneck of square cross-section through which we force dilute polystyrene particle suspensions with diameters comparable to the bottleneck size and down to one tenth its size. Under such low-friction conditions we show experimental evidence of a strong transition at a critical particle-to-neck ratio, just as occurs in dry granular systems. We describe such a transition analytically by modeling the arch formation as a purely stochastic process, which yields a good agreement with the experimental data. Deutsche Forschungsgemeinschaft KA1808/22-1.
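The flavor of a purely stochastic arch-formation model can be conveyed with a toy calculation: if each particle passage independently triggers an arch with some probability, the number of particles escaping before clogging is geometrically distributed. The sigmoidal dependence of that probability on the particle-to-neck ratio below is an invented placeholder, not the paper's fitted model.

```python
# Toy clogging model: geometric waiting time to the first arch-forming passage.
import numpy as np

def p_arch(diameter_ratio, k=12.0, d_crit=0.28):
    """Per-passage clogging probability; steep (sigmoidal) rise near a critical ratio."""
    return 1.0 / (1.0 + np.exp(-k * (diameter_ratio - d_crit) / d_crit))

def particles_before_clog(diameter_ratio, rng):
    return rng.geometric(p_arch(diameter_ratio))    # mean = 1 / p_arch

rng = np.random.default_rng(0)
for ratio in (0.15, 0.25, 0.35):
    runs = [particles_before_clog(ratio, rng) for _ in range(2000)]
    print(f"d/D = {ratio:.2f}: mean particles before clogging = {np.mean(runs):.0f}")
```

The sharp drop in the mean number of escaped particles as the ratio crosses the critical value mimics the kind of transition described in the abstract.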
Supermassive Black Hole Binary Candidates from the Pan-STARRS1 Medium Deep Survey
NASA Astrophysics Data System (ADS)
Liu, Tingting; Gezari, Suvi
2018-01-01
Supermassive black hole binaries (SMBHBs) should be a common product of the hierarchical growth of galaxies, and they are expected gravitational wave sources at nano-Hz frequencies. We have performed a systematic search in the Pan-STARRS1 Medium Deep Survey for periodically varying quasars, which are predicted manifestations of SMBHBs, and identified 26 candidates that are periodically varying on the timescale of ~300-1000 days over the 4-year baseline of MDS. We continue to monitor them with the Discovery Channel Telescope and the LCO network telescopes and thus are able to extend the baseline to 3-8 cycles and rule out false positive signals due to stochastic, normal quasar variability. From our imaging campaign, five candidates show persistent periodic variability and remain strong SMBHB candidates for follow-up observations. We calculate the cumulative number rate of SMBHBs and compare with previous work. We also compare the gravitational wave strain amplitudes of the candidates with the capability of pulsar timing arrays and discuss the future capabilities to detect periodic quasar and SMBHB candidates with the Large Synoptic Survey Telescope.
NASA Astrophysics Data System (ADS)
Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.
2014-12-01
As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a useful and nontrivial benchmarking problem.
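For readers who want to experiment with the formulation, the minimal Python sketch below simulates one realization of the stochastic lake dynamics commonly used in this benchmark; the recursion and parameter values (b, q, the lognormal inflow moments, the benefit coefficient, discount factor, and the eutrophication threshold) follow values typically quoted in the lake-problem literature but should be treated as illustrative rather than as this study's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_lake(a, b=0.42, q=2.0, mu=0.03, sigma=np.sqrt(1e-5), n_years=100):
    """One stochastic realization of lake phosphorus X_t under a loading policy `a`.
    The natural inflow eps_t is lognormal with mean mu and std sigma."""
    X = np.zeros(n_years + 1)
    log_mean = np.log(mu**2 / np.sqrt(mu**2 + sigma**2))
    log_sigma = np.sqrt(np.log(1.0 + sigma**2 / mu**2))
    eps = rng.lognormal(mean=log_mean, sigma=log_sigma, size=n_years)
    for t in range(n_years):
        # recycling term X^q/(1+X^q) minus natural removal b*X, plus loading and noise
        X[t + 1] = X[t] + a[t] + X[t]**q / (1 + X[t]**q) - b * X[t] + eps[t]
    return X

a = np.full(100, 0.05)          # a constant-loading policy, for illustration
X = simulate_lake(a)
critical = 0.5                  # hypothetical eutrophication threshold
print("reliability:", np.mean(X[1:] < critical))
print("discounted benefit:", np.sum(0.4 * a * 0.98**np.arange(100)))
```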
Extracting the sovereigns’ CDS market hierarchy: A correlation-filtering approach
NASA Astrophysics Data System (ADS)
León, Carlos; Leiton, Karen; Pérez, Jhonatan
2014-12-01
This paper employs correlation-into-distance mapping techniques and a minimal spanning tree-based correlation-filtering methodology on 36 sovereign CDS spread time-series in order to identify the sovereigns’ informational hierarchy. The resulting hierarchy (i) concurs with sovereigns’ eigenvector centrality; (ii) confirms the importance of geographical and credit rating clustering; (iii) identifies Russia, Turkey and Brazil as regional benchmarks; (iv) reveals the idiosyncratic nature of Japan and United States; (v) confirms that a small set of common factors affects the system; (vi) suggests that lower-medium grade rated sovereigns are the most influential, but also the most prone to contagion; and (vii) suggests the existence of a “Latin American common factor”.
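A minimal sketch of the correlation-filtering step described above, assuming the standard mapping from correlations to distances, d_ij = sqrt(2(1 - rho_ij)), and using SciPy's minimum spanning tree routine; the toy input data are random and merely stand in for the 36 sovereign CDS spread series.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_from_series(spread_changes):
    """Correlation filtering: map pairwise correlations to distances
    d_ij = sqrt(2 * (1 - rho_ij)) and keep only the minimal spanning tree."""
    rho = np.corrcoef(spread_changes)        # rows = sovereigns, columns = time
    d = np.sqrt(2.0 * (1.0 - rho))
    np.fill_diagonal(d, 0.0)                 # no self-loops
    return minimum_spanning_tree(d).toarray()

# Toy input: 36 sovereigns x 500 observations of CDS spread log-changes.
x = np.random.default_rng(2).standard_normal((36, 500))
print(np.count_nonzero(mst_from_series(x)))  # an MST on 36 nodes retains 35 edges
```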
NASA Astrophysics Data System (ADS)
Wang, Yu; Wang, Min; Jiang, Jingfeng
2017-02-01
Shear wave elastography is increasingly being used to non-invasively stage liver fibrosis by measuring shear wave speed (SWS). This study quantitatively investigates intrinsic variations among SWS measurements obtained from heterogeneous media such as fibrotic livers. More specifically, it aims to demonstrate that intrinsic variations in SWS measurements, in general, follow a non-Gaussian distribution and are related to the heterogeneous nature of the medium being measured. Using the principle of maximum entropy (ME), our primary objective is to derive a probability density function (PDF) of the SWS distribution in conjunction with a lossless stochastic tissue model. Our secondary objective is to evaluate the performance of the proposed PDF using Monte Carlo (MC)-simulated shear wave (SW) data against three other commonly used PDFs. Based on statistical evaluation criteria, initial results showed that the derived PDF fits better to MC-simulated SWS data than the other three PDFs. It was also found that SW fronts stabilized after a short (compared with the SW wavelength) travel distance in lossless media. Furthermore, in lossless media, the distance required to stabilize the SW propagation was not correlated to the SW wavelength at the low frequencies investigated (i.e. 50, 100 and 150 Hz). Examination of the MC simulation data suggests that elastic (shear) wave scattering became more pronounced when the volume fraction of hard inclusions increased from 10 to 30%. In conclusion, using the principle of ME, we theoretically demonstrated for the first time that SWS measurements in this model follow a non-Gaussian distribution. Preliminary data indicated that the proposed PDF can quantitatively represent intrinsic variations in SWS measurements simulated using a two-phase random medium model. The advantages of the proposed PDF are its physically meaningful parameters and solid theoretical basis.
Jiménez, Juan J; Decaëns, Thibaud; Lavelle, Patrick; Rossi, Jean-Pierre
2014-12-05
Studying the drivers and determinants of species, population and community spatial patterns is central to ecology. The observed structure of community assemblages is the result of deterministic abiotic (environmental constraints) and biotic factors (positive and negative species interactions), as well as stochastic colonization events (historical contingency). We analyzed the role of multi-scale spatial component of soil environmental variability in structuring earthworm assemblages in a gallery forest from the Colombian "Llanos". We aimed to disentangle the spatial scales at which species assemblages are structured and determine whether these scales matched those expressed by soil environmental variables. We also tested the hypothesis of the "single tree effect" by exploring the spatial relationships between root-related variables and soil nutrient and physical variables in structuring earthworm assemblages. Multivariate ordination techniques and spatially explicit tools were used, namely cross-correlograms, Principal Coordinates of Neighbor Matrices (PCNM) and variation partitioning analyses. The relationship between the spatial organization of earthworm assemblages and soil environmental parameters revealed explicitly multi-scale responses. The soil environmental variables that explained nested population structures across the multi-spatial scale gradient differed for earthworms and assemblages at the very-fine- (<10 m) to medium-scale (10-20 m). The root traits were correlated with areas of high soil nutrient contents at a depth of 0-5 cm. Information on the scales of PCNM variables was obtained using variogram modeling. Based on the size of the plot, the PCNM variables were arbitrarily allocated to medium (>30 m), fine (10-20 m) and very fine scales (<10 m). Variation partitioning analysis revealed that the soil environmental variability explained from less than 1% to as much as 48% of the observed earthworm spatial variation. A large proportion of the spatial variation did not depend on the soil environmental variability for certain species. This finding could indicate the influence of contagious biotic interactions, stochastic factors, or unmeasured relevant soil environmental variables.
Anderson, Roy; Farrell, Sam; Turner, Hugo; Walson, Judd; Donnelly, Christl A; Truscott, James
2017-02-17
A method is outlined for the use of an individual-based stochastic model of parasite transmission dynamics to assess different designs for a cluster randomized trial in which mass drug administration (MDA) is employed in attempts to eliminate the transmission of soil-transmitted helminths (STH) in defined geographic locations. The hypothesis to be tested is: Can MDA alone interrupt the transmission of STH species in defined settings? Clustering is at a village level and the choice of clusters of villages is stratified by transmission intensity (low, medium and high) and parasite species mix (either Ascaris, Trichuris or hookworm dominant). The methodological approach first uses an age-structured deterministic model to predict the MDA coverage required for treating pre-school aged children (Pre-SAC), school aged children (SAC) and adults (Adults) to eliminate transmission (crossing the breakpoint in transmission created by sexual mating in dioecious helminths) with 3 rounds of annual MDA. Stochastic individual-based models are then used to calculate the positive and negative predictive values (PPV and NPV, respectively, for observing elimination or the bounce back of infection) for a defined prevalence of infection 2 years post the cessation of MDA. For the arm only involving the treatment of Pre-SAC and SAC, the failure rate is predicted to be very high (particularly for hookworm-infected villages) unless transmission intensity is very low (R_0, or the effective reproductive number R, just above unity in value). The calculations are designed to consider various trial arms and stratifications; namely, community-based treatment and Pre-SAC and SAC only treatment (the two arms of the trial), different STH transmission settings of low, medium and high, and different STH species mixes. Results are considered in the light of the complications introduced by the choice of statistic to define success or failure, varying adherence to treatment, migration and parameter uncertainty.
Jet evolution in a dense medium: event-by-event fluctuations and multi-particle correlations
NASA Astrophysics Data System (ADS)
Escobedo, Miguel A.; Iancu, Edmond
2017-11-01
We study the gluon distribution produced via successive medium-induced branchings by an energetic jet propagating through a weakly-coupled quark-gluon plasma. We show that under suitable approximations, the jet evolution is a Markovian stochastic process, which is exactly solvable. For this process, we construct exact analytic solutions for all the n-point correlation functions describing the gluon distribution in the space of energy [M. A. Escobedo, E. Iancu, Event-by-event fluctuations in the medium-induced jet evolution, JHEP 05 (2016) 008, arXiv:1601.03629, doi:10.1007/JHEP05(2016)008; M. A. Escobedo, E. Iancu, Multi-particle correlations and KNO scaling in the medium-induced jet evolution, JHEP 12 (2016) 104, arXiv:1609.06104, doi:10.1007/JHEP12(2016)104]. Using these results, we study the event-by-event distribution of the energy lost by the jet at large angles and of the multiplicities of the soft particles which carry this energy. We find that the event-by-event fluctuations are huge: the standard deviation in the energy loss is parametrically as large as its mean value [Escobedo and Iancu, JHEP 05 (2016) 008]. This has important consequences for the phenomenology of di-jet asymmetry in Pb+Pb collisions at the LHC: it implies that the fluctuations in the branching process can contribute to the measured asymmetry on an equal footing with the geometry of the di-jet event (i.e. as the difference between the in-medium path lengths of the two jets). We compute the higher moments of the multiplicity distribution and identify a remarkable regularity known as Koba-Nielsen-Olesen (KNO) scaling [Escobedo and Iancu, JHEP 12 (2016) 104].
CRPropa 3.1—a low energy extension based on stochastic differential equations
NASA Astrophysics Data System (ADS)
Merten, Lukas; Becker Tjus, Julia; Fichtner, Horst; Eichmann, Björn; Sigl, Günter
2017-06-01
The propagation of charged cosmic rays through the Galactic environment influences all aspects of the observation at Earth. Energy spectrum, composition and arrival directions are changed due to deflections in magnetic fields and interactions with the interstellar medium. Today the transport is simulated with different simulation methods either based on the solution of a transport equation (multi-particle picture) or a solution of an equation of motion (single-particle picture). We developed a new module for the publicly available propagation software CRPropa 3.1, in which we implemented an algorithm to solve the transport equation using stochastic differential equations. This technique allows us to use a diffusion tensor which is anisotropic with respect to an arbitrary magnetic background field. The source code of CRPropa is written in C++ with python steering via SWIG, which makes it easy to use and computationally fast. In this paper, we present the new low-energy propagation code together with validation procedures that are developed to prove the accuracy of the new implementation. Furthermore, we show first examples of the cosmic ray density evolution, which depends strongly on the ratio of the parallel κ∥ and perpendicular κ⊥ diffusion coefficients. This dependency is systematically examined, as well as the influence of the particle rigidity on the diffusion process.
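A minimal illustration of the single-particle SDE approach (not CRPropa's implementation): Euler-Maruyama integration of pure anisotropic diffusion in a frame aligned with a homogeneous background field, with parallel and perpendicular diffusion coefficients chosen arbitrarily for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

def diffuse(kappa_par, kappa_perp, dt, n_steps, n_particles):
    """Euler-Maruyama sketch of anisotropic spatial diffusion. The z-axis is assumed
    aligned with a homogeneous background magnetic field; CRPropa itself handles
    arbitrary field geometries via the full diffusion tensor."""
    kappa = np.array([kappa_perp, kappa_perp, kappa_par])   # diagonal tensor (cm^2/s)
    x = np.zeros((n_particles, 3))
    for _ in range(n_steps):
        # dx_i = sqrt(2 * kappa_i * dt) * dW_i
        x += np.sqrt(2.0 * kappa * dt) * rng.standard_normal((n_particles, 3))
    return x

x = diffuse(kappa_par=1e28, kappa_perp=1e27, dt=1e3, n_steps=1000, n_particles=5000)
print("variance ratio z/x:", x[:, 2].var() / x[:, 0].var())  # ~ kappa_par / kappa_perp
```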
Light, John M; Jason, Leonard A; Stevens, Edward B; Callahan, Sarah; Stone, Ariel
2016-03-01
The complex system conception of group social dynamics often involves not only changing individual characteristics, but also changing within-group relationships. Recent advances in stochastic dynamic network modeling allow these interdependencies to be modeled from data. This methodology is discussed within a context of other mathematical and statistical approaches that have been or could be applied to study the temporal evolution of relationships and behaviors within small- to medium-sized groups. An example model is presented, based on a pilot study of five Oxford House recovery homes, sober living environments for individuals following release from acute substance abuse treatment. This model demonstrates how dynamic network modeling can be applied to such systems, examines and discusses several options for pooling, and shows how results are interpreted in line with complex system concepts. Results suggest that this approach (a) is a credible modeling framework for studying group dynamics even with limited data, (b) improves upon the most common alternatives, and (c) is especially well-suited to complex system conceptions. Continuing improvements in stochastic models and associated software may finally lead to mainstream use of these techniques for the study of group dynamics, a shift already occurring in related fields of behavioral science.
A framework for stochastic simulation of distribution practices for hotel reservations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halkos, George E.; Tsilika, Kyriaki D.
The focus of this study is primarily on the Greek hotel industry. The objective is to design and develop a framework for stochastic simulation of reservation requests, reservation arrivals, cancellations and hotel occupancy with a planning horizon of a tourist season. In the Greek hospitality industry there have been two competing policies for the reservation planning process up to 2003: reservations coming directly from customers and a reservations management relying on tour operator(s). Recently the Internet, along with other emerging technologies, has offered the potential to disrupt enduring distribution arrangements. The focus of the study is on the choice of distribution intermediaries. We present an empirical model for the hotel reservation planning process that makes use of a symbolic simulation, the Monte Carlo method, as requests for reservations, cancellations, and arrival rates are all sources of uncertainty. We consider as a case study the problem of determining the optimal booking strategy for a medium-sized hotel in Skiathos Island, Greece. Probability distributions and parameter estimates result from the historical data available and from suggestions made in the relevant literature. The results of this study may assist hotel managers to define distribution strategies for hotel rooms and evaluate the performance of the reservations management system.
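A minimal Monte Carlo sketch of the kind of reservation-planning simulation described above; the channel split, Poisson request rates, cancellation probability and stay-over fraction are hypothetical placeholders, not the calibrated values used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_season(capacity=80, n_days=180, direct_rate=1.2, tour_rate=2.0,
                    p_cancel=0.08, p_tour_day=0.5):
    """Monte Carlo sketch of one tourist season. Daily reservation requests arrive
    from two channels (direct customers and a tour operator); a fraction of requests
    is cancelled before arrival; occupancy is capped by capacity and a fixed fraction
    of guests stays over to the next day. All rates are hypothetical."""
    occupancy = np.zeros(n_days)
    for day in range(n_days):
        direct = rng.poisson(direct_rate)
        tour = rng.poisson(tour_rate) if rng.random() < p_tour_day else 0
        arrivals = rng.binomial(direct + tour, 1.0 - p_cancel)  # survivors of cancellation
        stayovers = 0.7 * occupancy[day - 1] if day else 0.0
        occupancy[day] = min(capacity, arrivals + stayovers)
    return occupancy.mean() / capacity

rates = [simulate_season() for _ in range(500)]
print("mean seasonal occupancy:", round(float(np.mean(rates)), 3))
```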
NASA Astrophysics Data System (ADS)
Macian-Sorribes, Hector; Pulido-Velazquez, Manuel
2016-04-01
This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems by coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria, via a joint expert-technician framework consisting of a series of meetings, workshops and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules and fuzzy regression procedures are used for forecasting future inflows. A stochastic optimization algorithm is then used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach presented has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows foreseen during that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules by embedding them into the two FRB systems previously created. As a benchmark, historical records are used to develop alternative operating rules. A fuzzy linear regression procedure was employed to foresee future inflows depending on present and past hydrological and meteorological variables actually used by the reservoir managers to define likely inflow scenarios. A Decision Support System (DSS) was created coupling the FRB systems and the inflow prediction scheme in order to give the user a set of possible optimal releases in response to the reservoir states at the beginning of the irrigation season and the fuzzy inflow projections made using hydrological and meteorological information. The results show that the DSS created using the optimal FRB operating policies is able to increase the amount of water allocated to the users by 20 to 50 Mm3 per irrigation season with respect to the current policies. Consequently, the mechanism used to define optimal operating rules and transform them into a DSS is able to increase the water deliveries in the Jucar River Basin, combining expert criteria and optimization algorithms in an efficient way. This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) and FEDER funds. It also has received funding from the European Union's Horizon 2020 research and innovation programme under the IMPREX project (grant agreement no: 641.811).
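As a toy illustration of the fuzzy rule-based representation (not the actual Jucar FRB systems), the sketch below evaluates a Sugeno-style rule base with two normalized inputs, initial storage and foreseen seasonal inflow, and returns a delivery fraction; the membership functions and rule consequents are invented for illustration.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def seasonal_release(storage, inflow_forecast):
    """Sugeno-style fuzzy rule base: inputs normalized to [0, 1], consequents give
    the fraction of demand to deliver. Rules and numbers are illustrative only."""
    low_s, high_s = tri(storage, -0.2, 0.0, 0.6), tri(storage, 0.4, 1.0, 1.2)
    low_q, high_q = tri(inflow_forecast, -0.2, 0.0, 0.6), tri(inflow_forecast, 0.4, 1.0, 1.2)
    # Rule firing strengths (AND = min) and consequents (delivery fraction):
    rules = [(min(low_s, low_q), 0.5), (min(low_s, high_q), 0.8),
             (min(high_s, low_q), 0.9), (min(high_s, high_q), 1.0)]
    total = sum(w for w, _ in rules)
    return sum(w * c for w, c in rules) / (total + 1e-12)   # weighted-average defuzzification

print(seasonal_release(storage=0.3, inflow_forecast=0.7))
```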
Sub-Doppler Rovibrational Spectroscopy of the H_3^+ Cation and Isotopologues
NASA Astrophysics Data System (ADS)
Markus, Charles R.; McCollum, Jefferson E.; Dieter, Thomas S.; Kocheril, Philip A.; McCall, Benjamin J.
2017-06-01
Molecular ions play a central role in the chemistry of the interstellar medium (ISM) and act as benchmarks for state-of-the-art ab initio theory. The molecular ion H_3^+ initiates a chain of ion-neutral reactions which drives chemistry in the ISM, and observing it either directly or indirectly through its isotopologues is valuable for understanding interstellar chemistry. Improving the accuracy of laboratory measurements will assist future astronomical observations. H_3^+ is also one of a few systems whose rovibrational transitions can be predicted to spectroscopic accuracy (<1 cm^{-1}), and with careful treatment of adiabatic, nonadiabatic, and quantum electrodynamic corrections to the potential energy surface, predictions of low-lying rovibrational states can rival the uncertainty of experimental measurements. New experimental data will be needed to benchmark future treatment of these corrections. Previously we have reported 26 transitions within the fundamental band of H_3^+ with MHz-level uncertainties. With recent improvements to our overall sensitivity, we have expanded this survey to include additional transitions within the fundamental band and the first hot band. These new data will ultimately be used to predict ground state rovibrational energy levels through combination differences, which will act as benchmarks for ab initio theory and predict forbidden rotational transitions of H_3^+. We will also discuss progress in measuring rovibrational transitions of the isotopologues H_2D^+ and D_2H^+, which will be used to assist future THz astronomical observations. J. N. Hodges, A. J. Perry, P. A. Jenkins II, B. M. Siller, and B. J. McCall, J. Chem. Phys. (2013), 139, 164201. A. J. Perry, J. N. Hodges, C. R. Markus, G. S. Kocheril, and B. J. McCall, J. Mol. Spectrosc. (2015), 317, 71-73. A. J. Perry, C. R. Markus, J. N. Hodges, G. S. Kocheril, and B. J. McCall, 71st International Symposium on Molecular Spectroscopy (2016), MH03. C. R. Markus, A. J. Perry, J. N. Hodges, and B. J. McCall, Opt. Express (2017), 25, 3709-3721.
A generalised porous medium approach to study thermo-fluid dynamics in human eyes.
Mauro, Alessandro; Massarotti, Nicola; Salahudeen, Mohamed; Romano, Mario R; Romano, Vito; Nithiarasu, Perumal
2018-03-22
The present work describes the application of the generalised porous medium model to study heat and fluid flow in healthy and glaucomatous eyes of different subject specimens, considering the presence of ocular cavities and porous tissues. The 2D computational model, implemented into the open-source software OpenFOAM, has been verified against benchmark data for mixed convection in domains partially filled with a porous medium. The verified model has been employed to simulate the thermo-fluid dynamic phenomena occurring in the anterior section of four patient-specific human eyes, considering the presence of the anterior chamber (AC), trabecular meshwork (TM), Schlemm's canal (SC), and collector channels (CC). The computational domains of the eye are extracted from tomographic images. The dependence of TM porosity and permeability on intraocular pressure (IOP) has been analysed in detail, and the differences between healthy and glaucomatous eye conditions have been highlighted, proving that the different physiological conditions of patients have a significant influence on the thermo-fluid dynamic phenomena. The influence of different eye positions (supine and standing) on thermo-fluid dynamic variables has also been investigated: results are presented in terms of velocity, pressure, temperature, friction coefficient and local Nusselt number. The results clearly indicate that porosity and permeability of TM are two important parameters that affect eye pressure distribution. Graphical abstract: Velocity contours and vectors for healthy eyes (top) and glaucomatous eyes (bottom) in the standing position.
NASA Astrophysics Data System (ADS)
Chang, Anteng; Li, Huajun; Wang, Shuqing; Du, Junfeng
2017-08-01
Both wave-frequency (WF) and low-frequency (LF) components of mooring tension are in principle non-Gaussian due to nonlinearities in the dynamic system. This paper conducts a comprehensive investigation of applicable probability density functions (PDFs) of mooring tension amplitudes used to assess mooring-line fatigue damage via the spectral method. Short-term statistical characteristics of mooring-line tension responses are firstly investigated, in which the discrepancy arising from Gaussian approximation is revealed by comparing kurtosis and skewness coefficients. Several distribution functions based on present analytical spectral methods are selected to express the statistical distribution of the mooring-line tension amplitudes. Results indicate that the Gamma-type distribution and a linear combination of Dirlik and Tovo-Benasciutti formulas are suitable for separate WF and LF mooring tension components. A novel parametric method based on nonlinear transformations and stochastic optimization is then proposed to increase the effectiveness of mooring-line fatigue assessment due to non-Gaussian bimodal tension responses. Using time domain simulation as a benchmark, its accuracy is further validated using a numerical case study of a moored semi-submersible platform.
Representational Distance Learning for Deep Neural Networks
McClure, Patrick; Kriegeskorte, Nikolaus
2016-01-01
Deep neural networks (DNNs) provide useful models of visual representational transformations. We present a method that enables a DNN (student) to learn from the internal representational spaces of a reference model (teacher), which could be another DNN or, in the future, a biological brain. Representational spaces of the student and the teacher are characterized by representational distance matrices (RDMs). We propose representational distance learning (RDL), a stochastic gradient descent method that drives the RDMs of the student to approximate the RDMs of the teacher. We demonstrate that RDL is competitive with other transfer learning techniques for two publicly available benchmark computer vision datasets (MNIST and CIFAR-100), while allowing for architectural differences between student and teacher. By pulling the student's RDMs toward those of the teacher, RDL significantly improved visual classification performance when compared to baseline networks that did not use transfer learning. In the future, RDL may enable combined supervised training of deep neural networks using task constraints (e.g., images and category labels) and constraints from brain-activity measurements, so as to build models that replicate the internal representational spaces of biological brains. PMID:28082889
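A minimal numpy sketch of the RDL objective: build representational distance matrices for a student layer and a teacher layer over the same stimuli and measure their squared mismatch. The correlation-distance measure and the plain-numpy setting (no gradients, no task loss) are simplifying assumptions made here for illustration; in RDL this mismatch term is added to the task loss and minimized by stochastic gradient descent.

```python
import numpy as np

def rdm(activations):
    """Representational distance matrix: pairwise (1 - Pearson correlation)
    between the response patterns evoked by each stimulus (rows)."""
    return 1.0 - np.corrcoef(activations)

def rdl_loss(student_act, teacher_act):
    """Squared-error mismatch between student and teacher RDMs over the same stimuli.
    Each stimulus pair is counted once (upper triangle)."""
    ds, dt = rdm(student_act), rdm(teacher_act)
    iu = np.triu_indices_from(ds, k=1)
    return np.mean((ds[iu] - dt[iu]) ** 2)

rng = np.random.default_rng(5)
student = rng.standard_normal((32, 100))   # 32 stimuli x 100 student units
teacher = rng.standard_normal((32, 512))   # same stimuli, wider teacher layer
print(rdl_loss(student, teacher))
```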
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, C.; Potts, I.; Reeks, M. W., E-mail: mike.reeks@ncl.ac.uk
We present a simple stochastic quadrant model for calculating the transport and deposition of heavy particles in a fully developed turbulent boundary layer based on the statistics of wall-normal fluid velocity fluctuations obtained from a fully developed channel flow. Individual particles are tracked through the boundary layer via their interactions with a succession of random eddies found in each of the quadrants of the fluid Reynolds shear stress domain in a homogeneous Markov chain process. In this way, we are able to account directly for the influence of ejection and sweeping events as others have done, but without resorting to the use of adjustable parameters. Deposition rates for a wide range of heavy particles predicted by the model compare well with benchmark experimental measurements. In addition, deposition rates are compared with those obtained from continuous random walk models and Langevin equation based ejection and sweep models, which noticeably give significantly lower deposition rates. Various statistics related to the particle near-wall behavior are also presented. Finally, we consider the limitations of using the model to calculate deposition in more complex flows where the near-wall turbulence may be significantly different.
Escalated convergent artificial bee colony
NASA Astrophysics Data System (ADS)
Jadon, Shimpi Singh; Bansal, Jagdish Chand; Tiwari, Ritu
2016-03-01
Artificial bee colony (ABC) optimisation algorithm is a recent, fast and easy-to-implement population-based metaheuristic for optimisation. ABC has proved to be a rival to some popular swarm intelligence-based algorithms such as particle swarm optimisation, firefly algorithm and ant colony optimisation. The solution search equation of ABC is influenced by a random quantity which helps its search process in exploration at the cost of exploitation. In order to obtain fast convergent behaviour of ABC while maintaining its exploitation capability, in this paper basic ABC is modified in two ways. First, to improve exploitation capability, two local search strategies, namely classical unidimensional local search and Lévy flight random walk-based local search, are incorporated with ABC. Furthermore, a new solution search strategy, namely stochastic diffusion scout search, is proposed and incorporated into the scout bee phase to give an abandoned solution more chances to improve itself. Efficiency of the proposed algorithm is tested on 20 benchmark test functions of different complexities and characteristics. Results are very promising and prove it to be a competitive algorithm in the field of swarm intelligence-based algorithms.
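For reference, the standard ABC employed-bee search equation that the paper builds on, v_ij = x_ij + phi * (x_ij - x_kj) with phi ~ U(-1, 1) followed by greedy selection, can be sketched as follows on a simple sphere test function; the proposed local-search and stochastic-diffusion scout modifications are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

def sphere(x):
    return float(np.sum(x**2))

def abc_employed_phase(X, f, fitness):
    """One employed-bee pass of basic ABC: for each food source i, perturb a single
    dimension j toward/away from a random partner k and keep the better solution."""
    n, dim = X.shape
    for i in range(n):
        k = rng.choice([s for s in range(n) if s != i])
        j = rng.integers(dim)
        v = X[i].copy()
        v[j] += rng.uniform(-1.0, 1.0) * (X[i, j] - X[k, j])   # v_ij = x_ij + phi*(x_ij - x_kj)
        if f(v) < fitness[i]:                                   # greedy selection (minimization)
            X[i], fitness[i] = v, f(v)
    return X, fitness

X = rng.uniform(-5, 5, size=(20, 10))
fit = np.array([sphere(x) for x in X])
for _ in range(200):
    X, fit = abc_employed_phase(X, sphere, fit)
print("best value:", fit.min())
```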
A high resolution InSAR topographic reconstruction research in urban area based on TerraSAR-X data
NASA Astrophysics Data System (ADS)
Qu, Feifei; Qin, Zhang; Zhao, Chaoying; Zhu, Wu
2011-10-01
Aiming at the problems of difficult phase unwrapping and phase noise in InSAR DEM reconstruction, especially for high-resolution TerraSAR-X data, this paper improves the height reconstruction algorithm using a "remove-restore" approach based on an external coarse DEM and multi-interferogram processing, and proposes a height calibration method based on CR+GPS data. Several measures have been taken for urban high-resolution DEM reconstruction with TerraSAR data. SAR interferometric pairs with long spatial and short temporal baselines are selected for DEM generation. An external low-resolution, low-accuracy DEM is applied within the "remove-restore" concept to ease the phase unwrapping. Stochastic errors, including atmospheric effects and phase noise, are suppressed by weighted averaging of the DEM phases. Six TerraSAR-X scenes are used to create a twelve-meter-resolution DEM over Xian, China with the newly proposed method. The heights at discrete GPS benchmarks are used to calibrate the result, and an RMS of 3.29 meters is achieved by comparison with a 1:50000 DEM.
Theoretical limits of localizing 3-D landmarks and features.
Rohr, Karl
2007-09-01
In this paper, we analyze the accuracy of estimating the location of 3-D landmarks and characteristic image structures. Based on nonlinear estimation theory, we study the minimal stochastic errors of the position estimate caused by noisy data. Given analytic models of the image intensities, we derive closed-form expressions of the Cramér-Rao bound for different 3-D structures such as 3-D edges, 3-D ridges, 3-D lines, 3-D boxes, and 3-D blobs. It turns out that the precision of localization depends on the noise level, the size of the region-of-interest, the image contrast, the width of the intensity transitions, as well as on other parameters describing the considered image structure. The derived lower bounds can serve as benchmarks and the performance of existing algorithms can be compared with them. To give an impression of the achievable accuracy, numeric examples are presented. Moreover, by experimental investigations, we demonstrate that the derived lower bounds can be achieved by fitting parametric intensity models directly to the image data.
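The bound underlying this analysis can be summarized in its standard form for a parametric intensity model observed in additive white Gaussian noise; the paper's contribution is the closed-form evaluation of this expression for specific 3-D structures, which is not reproduced here.

```latex
% Cramér-Rao bound for estimating the position parameters \mathbf{x}_0 of a
% parametric intensity model g(\mathbf{x};\mathbf{x}_0) observed in additive
% white Gaussian noise of variance \sigma_n^2 over a region-of-interest R
% (standard form; closed-form specializations for 3-D edges, ridges, lines,
% boxes, and blobs are derived in the paper).
\operatorname{Cov}(\hat{\mathbf{x}}_0) \;\succeq\; \mathbf{F}^{-1},
\qquad
\mathbf{F} \;=\; \frac{1}{\sigma_n^2}
\int_{R} \nabla_{\mathbf{x}_0}\, g(\mathbf{x};\mathbf{x}_0)\,
         \nabla_{\mathbf{x}_0}\, g(\mathbf{x};\mathbf{x}_0)^{\mathsf T}\, d\mathbf{x}
```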
Determining the nuclear data uncertainty on MONK10 and WIMS10 criticality calculations
NASA Astrophysics Data System (ADS)
Ware, Tim; Dobson, Geoff; Hanlon, David; Hiles, Richard; Mason, Robert; Perry, Ray
2017-09-01
The ANSWERS Software Service is developing a number of techniques to better understand and quantify uncertainty on calculations of the neutron multiplication factor, k-effective, in nuclear fuel and other systems containing fissile material. The uncertainty on the calculated k-effective arises from a number of sources, including nuclear data uncertainties, manufacturing tolerances, modelling approximations and, for Monte Carlo simulation, stochastic uncertainty. For determining the uncertainties due to nuclear data, a set of application libraries have been generated for use with the MONK10 Monte Carlo and the WIMS10 deterministic criticality and reactor physics codes. This paper overviews the generation of these nuclear data libraries by Latin hypercube sampling of JEFF-3.1.2 evaluated data based upon a library of covariance data taken from JEFF, ENDF/B, JENDL and TENDL evaluations. Criticality calculations have been performed with MONK10 and WIMS10 using these sampled libraries for a number of benchmark models of fissile systems. Results are presented which show the uncertainty on k-effective for these systems arising from the uncertainty on the input nuclear data.
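A minimal sketch of the sampling step described above: a Latin hypercube sample on the unit hypercube, mapped through Gaussian marginals and a covariance matrix to perturb a vector of nominal nuclear data. The three-parameter covariance here is a stand-in; the actual libraries sample the full JEFF-3.1.2 covariance data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def latin_hypercube(n_samples, n_params):
    """Latin hypercube sample on the unit hypercube: each parameter's range is cut
    into n_samples equal strata, one point is drawn per stratum, and the stratum
    order is shuffled independently for each parameter."""
    strata = np.tile(np.arange(n_samples), (n_params, 1))   # (n_params, n_samples)
    strata = rng.permuted(strata, axis=1).T                  # (n_samples, n_params)
    return (strata + rng.random((n_samples, n_params))) / n_samples

# Perturb nominal cross sections with correlated Gaussian uncertainties described
# by a purely illustrative covariance matrix.
nominal = np.array([1.0, 2.5, 0.3])
cov = np.diag([0.02, 0.05, 0.01]) ** 2
L = np.linalg.cholesky(cov)
samples = nominal + norm.ppf(latin_hypercube(100, 3)) @ L.T
print(samples.mean(axis=0))   # close to the nominal values
```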
NASA Astrophysics Data System (ADS)
Andrianova, Olga; Lomakov, Gleb; Manturov, Gennady
2017-09-01
The neutron transmission experiments are one of the main sources of information about the neutron cross section resonance structure and the self-shielding effect. Such data for niobium and silicon nuclides in the energy range 7 keV to 3 MeV can be obtained from low-resolution transmission measurements performed earlier in Russia (with samples of 0.027 to 0.871 atom/barn for niobium and 0.076 to 1.803 atom/barn for silicon). A significant calculation-to-experiment discrepancy in the energy ranges 100 to 600 keV and 300 to 800 keV for niobium and silicon, respectively, obtained using the evaluated nuclear data library ROSFOND, was found. The EVPAR code was used for estimating the average resonance parameters in the energy range 7 to 600 keV for niobium. For silicon, a stochastic optimization method was used to modify the resolved resonance parameters in the energy range 300 to 800 keV. The improved ROSFOND evaluated nuclear data files were tested in calculations of ICSBEP integral benchmark experiments.
NASA Astrophysics Data System (ADS)
Afshar, Ali
An evaluation of Lagrangian-based, discrete-phase models for multi-component liquid sprays encountered in the combustors of gas turbine engines is considered. In particular, the spray modeling capabilities of the commercial software, ANSYS Fluent, was evaluated. Spray modeling was performed for various cold flow validation cases. These validation cases include a liquid jet in a cross-flow, an airblast atomizer, and a high shear fuel nozzle. Droplet properties including velocity and diameter were investigated and compared with previous experimental and numerical results. Different primary and secondary breakup models were evaluated in this thesis. The secondary breakup models investigated include the Taylor analogy breakup (TAB) model, the wave model, the Kelvin-Helmholtz Rayleigh-Taylor model (KHRT), and the Stochastic secondary droplet (SSD) approach. The modeling of fuel sprays requires a proper treatment for the turbulence. Reynolds-averaged Navier-Stokes (RANS), large eddy simulation (LES), hybrid RANS/LES, and dynamic LES (DLES) were also considered for the turbulent flows involving sprays. The spray and turbulence models were evaluated using the available benchmark experimental data.
Adaptive hidden Markov model with anomaly states for price manipulation detection.
Cao, Yi; Li, Yuhua; Coleman, Sonya; Belatreche, Ammar; McGinnity, Thomas Martin
2015-02-01
Price manipulation refers to the activities of those traders who use carefully designed trading behaviors to manually push up or down the underlying equity prices for making profits. With increasing volumes and frequency of trading, price manipulation can be extremely damaging to the proper functioning and integrity of capital markets. The existing literature focuses on either empirical studies of market abuse cases or analysis of particular manipulation types based on certain assumptions. Effective approaches for analyzing and detecting price manipulation in real time are yet to be developed. This paper proposes a novel approach, called adaptive hidden Markov model with anomaly states (AHMMAS), for modeling and detecting price manipulation activities. Together with wavelet transformations and gradients as the feature extraction methods, the AHMMAS model caters to price manipulation detection and basic manipulation type recognition. The evaluation experiments, conducted on tick data for seven stocks from NASDAQ and the London Stock Exchange and on 10 stock price series simulated by stochastic differential equations, show that the proposed AHMMAS model can effectively detect price manipulation patterns and outperforms the selected benchmark models.
Statistical methods for convergence detection of multi-objective evolutionary algorithms.
Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J
2009-01-01
In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other hand. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs and on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, needing fewer function evaluations while preserving good approximation quality at the same time.
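A minimal sketch of an online stopping rule in the spirit of the second method: track a unary performance indicator per generation and stop when its recent variance falls below a threshold or its trend stagnates. The window length and thresholds are illustrative, and the `moea_step` and `hypervolume` names in the usage comment are assumed to be provided elsewhere.

```python
import numpy as np

def should_stop(indicator_history, window=20, var_tol=1e-6, slope_tol=1e-5):
    """Online convergence detection sketch: given per-generation values of a unary
    performance indicator (e.g. dominated hypervolume), stop when, over the last
    `window` generations, either the variance falls below var_tol or the fitted
    linear trend has stagnated."""
    if len(indicator_history) < window:
        return False
    recent = np.asarray(indicator_history[-window:])
    slope = np.polyfit(np.arange(window), recent, 1)[0]
    return recent.var() < var_tol or abs(slope) < slope_tol

# Usage inside an MOEA loop (moea_step and hypervolume are placeholders):
# history = []
# for gen in range(max_generations):
#     population = moea_step(population)
#     history.append(hypervolume(population, reference_point))
#     if should_stop(history):
#         break
```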
Network embedding-based representation learning for single cell RNA-seq data.
Li, Xiangyu; Chen, Weizheng; Chen, Yang; Zhang, Xuegong; Gu, Jin; Zhang, Michael Q
2017-11-02
Single cell RNA-seq (scRNA-seq) techniques can reveal valuable insights into cell-to-cell heterogeneities. Projection of high-dimensional data into a low-dimensional subspace is a powerful strategy in general for mining such big data. However, scRNA-seq suffers from higher noise and lower coverage than traditional bulk RNA-seq, hence bringing in new computational difficulties. One major challenge is how to deal with the frequent drop-out events. These events, usually caused by the stochastic burst effect in gene transcription and the technical failure of RNA transcript capture, often cause traditional dimension reduction methods to work inefficiently. To overcome this problem, we have developed a novel Single Cell Representation Learning (SCRL) method based on network embedding. This method can efficiently implement data-driven non-linear projection and incorporate prior biological knowledge (such as pathway information) to learn more meaningful low-dimensional representations for both cells and genes. Benchmark results show that SCRL outperforms other dimensional reduction methods on several recent scRNA-seq datasets. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Astrophysics Data System (ADS)
Stam, Samantha; Alberts, Jonathan; Gardel, Margaret; Munro, Edwin
2013-03-01
The interactions of bipolar myosin II filaments with actin arrays are a predominant means of generating forces in numerous physiological processes including muscle contraction and cell migration. However, how the spatiotemporal regulation of these forces depends on motor mechanochemistry, bipolar filament size, and local actin mechanics is unknown. Here, we simulate myosin II motors with an agent-based model in which the motors have been benchmarked against experimental measurements. Force generation occurs in two distinct regimes characterized either by stable tension maintenance or by stochastic buildup and release; transitions between these regimes occur by changes to duty ratio and myosin filament size. The time required for building force to stall scales inversely with the stiffness of a network and the actin gliding speed of a motor. Finally, myosin motors are predicted to contract a network toward stiffer regions, which is consistent with experimental observations. Our representation of myosin motors can be used to understand how their mechanical and biochemical properties influence their observed behavior in a variety of in vitro and in vivo contexts.
NASA Astrophysics Data System (ADS)
Musharbash, Eleonora; Nobile, Fabio
2018-02-01
In this paper we propose a method for the strong imposition of random Dirichlet boundary conditions in the Dynamical Low Rank (DLR) approximation of parabolic PDEs and, in particular, incompressible Navier Stokes equations. We show that the DLR variational principle can be set in the constrained manifold of all rank-S random fields with a prescribed value on the boundary, expressed in low rank format with rank smaller than S. We characterize the tangent space to the constrained manifold by means of a Dual Dynamically Orthogonal (Dual DO) formulation, in which the stochastic modes are kept orthonormal and the deterministic modes satisfy suitable boundary conditions, consistent with the original problem. The Dual DO formulation is also convenient to include the incompressibility constraint when dealing with incompressible Navier Stokes equations. We show the performance of the proposed Dual DO approximation on two numerical test cases: the classical benchmark of a laminar flow around a cylinder with random inflow velocity, and a biomedical application for simulating blood flow in a realistic carotid artery reconstructed from MRI data with random inflow conditions coming from Doppler measurements.
Benchmarking reference services: step by step.
Buchanan, H S; Marshall, J G
1996-01-01
This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.
Prevention of occupational injuries: Evidence for effective good practices in foundries.
Porru, Stefano; Calza, Stefano; Arici, Cecilia
2017-02-01
Occupational injuries are a relevant research and practical issue. However, intervention studies evaluating the effectiveness of workplace injury prevention programs are seldom performed. The effectiveness of a multifaceted intervention aimed at reducing occupational injury rates (incidence/employment-based=IR, frequency/hours-based=FR, severity=SR) was evaluated between 2008 and 2013 in 29 Italian foundries (22 ferrous; 7 non-ferrous; 3,460 male blue collar workers/year) of varying sizes. Each foundry established an internal multidisciplinary prevention team for risk assessment, monitoring and prevention of occupational injuries, involving employers, occupational physicians, safety personnel, workers' representatives, supervisors. Targets of intervention were workers, equipment, organization, workplace, job tasks. An interrupted time series (ITS) design was applied. 4,604 occupational injuries and 83,156 lost workdays were registered between 2003 and 2013. Statistical analysis showed, after intervention, a reduction of all injury rates (-26% IR, -15% FR, -18% SR) in ferrous foundries and of SR (-4%) in non-ferrous foundries. A significant (p=0.021) 'step-effect' was shown for IR in ferrous foundries, independent of secular trends (p<0.001). Sector-specific benchmarks for all injury rates were developed separately for ferrous and non-ferrous foundries. Strengths of the study were: ITS design, according to standardized quality criteria (i.e., at least three data points before and three data points after intervention; clearly defined intervention point); pragmatic approach, with good external validity; promotion of effective good practices. Main limitations were the non-randomized nature and a medium length post-intervention period. In conclusion, a multifaceted, pragmatic and accountable intervention is effective in reducing the burden of occupational injuries in small-, medium- and large-sized foundries. Practical Applications: The study poses the basis for feasible good practice guidelines to be implemented to prevent occupational injuries, by means of sector-specific numerical benchmarks, with potentially relevant impacts on workers, companies, occupational health professionals and society at large. Copyright © 2016 National Safety Council and Elsevier Ltd. All rights reserved.
Radiative transport equation for the Mittag-Leffler path length distribution
NASA Astrophysics Data System (ADS)
Liemert, André; Kienle, Alwin
2017-05-01
In this paper, we consider the radiative transport equation for infinitely extended scattering media that are characterized by the Mittag-Leffler path length distribution p(ℓ) = -∂_ℓ E_α(-σ_t ℓ^α), which is a generalization of the usually assumed Lambert-Beer law p(ℓ) = σ_t exp(-σ_t ℓ). In this context, we derive the infinite-space Green's function of the underlying fractional transport equation for the spherically symmetric medium as well as for the one-dimensional string. Moreover, simple analytical solutions are presented for the prediction of the radiation field in the single-scattering approximation. The resulting equations are compared with Monte Carlo simulations in the steady-state and time domains showing, within the stochastic nature of the simulations, an excellent agreement.
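For readers who want to evaluate the path-length statistics numerically, a small sketch follows: the survival probability P(L > ℓ) = E_α(-σ_t ℓ^α) is computed from the Mittag-Leffler series, which is adequate only for moderate arguments; α → 1 recovers the exponential Lambert-Beer case.

```python
import numpy as np
from math import gamma

def mittag_leffler(z, alpha, n_terms=80):
    """Series evaluation E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1).
    Adequate only for moderate |z|; dedicated algorithms are needed for large arguments."""
    k = np.arange(n_terms)
    return float(np.sum(z**k / np.array([gamma(alpha * kk + 1.0) for kk in k])))

# Survival probability P(L > l) = E_alpha(-sigma_t * l**alpha) for the
# Mittag-Leffler path length distribution, compared with the exponential case.
sigma_t, alpha = 1.0, 0.8
for l in (0.1, 0.5, 1.0, 2.0):
    print(l, mittag_leffler(-sigma_t * l**alpha, alpha), np.exp(-sigma_t * l))
```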
Transonic Flutter Suppression Control Law Design, Analysis and Wind Tunnel Results
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1999-01-01
The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on the steady state differential game theory is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.
Transonic Flutter Suppression Control Law Design, Analysis and Wind-Tunnel Results
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1999-01-01
The benchmark active controls technology and wind tunnel test program at NASA Langley Research Center was started with the objective to investigate the nonlinear, unsteady aerodynamics and active flutter suppression of wings in transonic flow. The paper will present the flutter suppression control law design process, numerical nonlinear simulation and wind tunnel test results for the NACA 0012 benchmark active control wing model. The flutter suppression control law design processes using (1) classical, (2) linear quadratic Gaussian (LQG), and (3) minimax techniques are described. A unified general formulation and solution for the LQG and minimax approaches, based on the steady state differential game theory is presented. Design considerations for improving the control law robustness and digital implementation are outlined. It was shown that simple control laws when properly designed based on physical principles, can suppress flutter with limited control power even in the presence of transonic shocks and flow separation. In wind tunnel tests in air and heavy gas medium, the closed-loop flutter dynamic pressure was increased to the tunnel upper limit of 200 psf. The control law robustness and performance predictions were verified in highly nonlinear flow conditions, gain and phase perturbations, and spoiler deployment. A non-design plunge instability condition was also successfully suppressed.
Dee, C R; Rankin, J A; Burns, C A
1998-07-01
Journal usage studies, which are useful for budget management and for evaluating collection performance relative to library use, have generally described a single library or subject discipline. The Southern Chapter/Medical Library Association (SC/MLA) study has examined journal usage at the aggregate data level with the long-term goal of developing hospital library benchmarks for journal use. Thirty-six SC/MLA hospital libraries, categorized for the study by size as small, medium, or large, reported current journal title use centrally for a one-year period following standardized data collection procedures. Institutional and aggregate data were analyzed for the average annual frequency of use, average costs per use and non-use, and average percent of non-used titles. Permutation F-type tests were used to measure difference among the three hospital groups. Averages were reported for each data set analysis. Statistical tests indicated no significant differences between the hospital groups, suggesting that benchmarks can be derived applying to all types of hospital libraries. The unanticipated lack of commonality among heavily used titles pointed to a need for uniquely tailored collections. Although the small sample size precluded definitive results, the study's findings constituted a baseline of data that can be compared against future studies.
Dee, C R; Rankin, J A; Burns, C A
1998-01-01
BACKGROUND: Journal usage studies, which are useful for budget management and for evaluating collection performance relative to library use, have generally described a single library or subject discipline. The Southern Chapter/Medical Library Association (SC/MLA) study has examined journal usage at the aggregate data level with the long-term goal of developing hospital library benchmarks for journal use. METHODS: Thirty-six SC/MLA hospital libraries, categorized for the study by size as small, medium, or large, reported current journal title use centrally for a one-year period following standardized data collection procedures. Institutional and aggregate data were analyzed for the average annual frequency of use, average costs per use and non-use, and average percent of non-used titles. Permutation F-type tests were used to measure difference among the three hospital groups. RESULTS: Averages were reported for each data set analysis. Statistical tests indicated no significant differences between the hospital groups, suggesting that benchmarks can be derived applying to all types of hospital libraries. The unanticipated lack of commonality among heavily used titles pointed to a need for uniquely tailored collections. CONCLUSION: Although the small sample size precluded definitive results, the study's findings constituted a baseline of data that can be compared against future studies. PMID:9681164
NASA Astrophysics Data System (ADS)
Schneider, E. A.; Deinert, M. R.; Cady, K. B.
2006-10-01
The balance of isotopes in a nuclear reactor core is key to understanding the overall performance of a given fuel cycle. This balance is in turn most strongly affected by the time and energy-dependent neutron flux. While many large and involved computer packages exist for determining this spectrum, a simplified approach amenable to rapid computation is missing from the literature. We present such a model, which accepts as inputs the fuel element/moderator geometry and composition, reactor geometry, fuel residence time and target burnup and we compare it to OECD/NEA benchmarks for homogeneous MOX and UOX LWR cores. Collision probability approximations to the neutron transport equation are used to decouple the spatial and energy variables. The lethargy dependent neutron flux, governed by coupled integral equations for the fuel and moderator/coolant regions is treated by multigroup thermalization methods, and the transport of neutrons through space is modeled by fuel to moderator transport and escape probabilities. Reactivity control is achieved through use of a burnable poison or adjustable control medium. The model calculates the buildup of 24 actinides, as well as fission products, along with the lethargy dependent neutron flux and the results of several simulations are compared with benchmarked standards.
Workplace road safety risk management: An investigation into Australian practices.
Warmerdam, Amanda; Newnam, Sharon; Sheppard, Dianne; Griffin, Mark; Stevenson, Mark
2017-01-01
In Australia, more than 30% of the traffic volume can be attributed to work-related vehicles. Although work-related driver safety has been given increasing attention in the scientific literature, it is uncertain how well this knowledge has been translated into practice in industry. It is also unclear how current practice in industry can inform scientific knowledge. The aim of the research was to use a benchmarking tool developed by the National Road Safety Partnership Program to assess industry maturity in relation to risk management practices. A total of 83 managers from a range of small, medium and large organisations were recruited through the Victorian Work Authority. Semi-structured interviews aimed at eliciting information on current organisational practices, as well as policy and procedures around work-related driving were conducted and the data mapped onto the benchmarking tool. Overall, the results demonstrated varying levels of maturity of risk management practices across organisations, highlighting the need to build accountability within organisations, improve communication practices, improve journey management, reduce vehicle-related risk, improve driver competency through an effective workplace road safety management program and review organisational incident and infringement management. The findings of the study have important implications for industry and highlight the need to review current risk management practices. Copyright © 2016 Elsevier Ltd. All rights reserved.
Dynamically orthogonal field equations for stochastic flows and particle dynamics
2011-02-01
where uncertainty ‘lives’ as well as a system of Stochastic Differential Equations that defines how the uncertainty evolves in the time varying stochastic ... stochastic dynamical component that are both time and space dependent, we derive a system of field equations consisting of a Partial Differential Equation ... a system of Stochastic Differential Equations that defines how the stochasticity evolves in the time varying stochastic subspace. These new
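For context, the dynamically orthogonal representation referred to in this fragment expands the stochastic field into a mean plus a small number of time-evolving modes with stochastic coefficients, conventionally written as

\[
u(x,t;\omega) \;=\; \bar{u}(x,t) \;+\; \sum_{i=1}^{s} Y_i(t;\omega)\,u_i(x,t),
\]

where the modes \(u_i(x,t)\) satisfy partial differential (field) equations and the coefficients \(Y_i(t;\omega)\) satisfy stochastic differential equations. This is the standard form of the dynamically orthogonal decomposition and is given here as a reader's aid rather than as a quotation from the report.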
Fiore, Andrew M; Swan, James W
2018-01-28
Brownian Dynamics simulations are an important tool for modeling the dynamics of soft matter. However, accurate and rapid computations of the hydrodynamic interactions between suspended, microscopic components in a soft material are a significant computational challenge. Here, we present a new method for Brownian dynamics simulations of suspended colloidal scale particles such as colloids, polymers, surfactants, and proteins subject to a particular and important class of hydrodynamic constraints. The total computational cost of the algorithm is practically linear with the number of particles modeled and can be further optimized when the characteristic mass fractal dimension of the suspended particles is known. Specifically, we consider the so-called "stresslet" constraint for which suspended particles resist local deformation. This acts to produce a symmetric force dipole in the fluid and imparts rigidity to the particles. The presented method is an extension of the recently reported positively split formulation for Ewald summation of the Rotne-Prager-Yamakawa mobility tensor to higher order terms in the hydrodynamic scattering series accounting for force dipoles [A. M. Fiore et al., J. Chem. Phys. 146(12), 124116 (2017)]. The hydrodynamic mobility tensor, which is proportional to the covariance of particle Brownian displacements, is constructed as an Ewald sum in a novel way which guarantees that the real-space and wave-space contributions to the sum are independently symmetric and positive-definite for all possible particle configurations. This property of the Ewald sum is leveraged to rapidly sample the Brownian displacements from a superposition of statistically independent processes with the wave-space and real-space contributions as respective covariances. The cost of computing the Brownian displacements in this way is comparable to the cost of computing the deterministic displacements. The addition of a stresslet constraint to the over-damped particle equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard sphere dispersions are illustrated and used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue is used for benchmarking and made freely available in the supplementary material.
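The central sampling idea, splitting the mobility into independently positive-definite real-space and wave-space parts and drawing a Gaussian displacement from each, can be sketched as follows. This is a toy dense-matrix illustration with hypothetical variable names; the paper's implementation relies on iterative and FFT-based matrix square roots rather than Cholesky factors to reach near-linear cost.

import numpy as np

def brownian_displacements(M_real, M_wave, kT, dt, rng):
    """Sample Brownian displacements when the mobility splits as
    M = M_real + M_wave with both parts positive-definite.
    Each part is sampled independently; the sum of the two Gaussian draws
    has covariance 2*kT*dt*M, as required by fluctuation-dissipation."""
    n = M_real.shape[0]
    L_r = np.linalg.cholesky(M_real)   # dense factorization, illustration only
    L_w = np.linalg.cholesky(M_wave)
    xi_r = rng.standard_normal(n)
    xi_w = rng.standard_normal(n)
    return np.sqrt(2.0 * kT * dt) * (L_r @ xi_r + L_w @ xi_w)

Because the two contributions are statistically independent, their covariances add, which is why sampling each part separately reproduces the full mobility without ever forming its square root.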
Xenon-induced power oscillations in a generic small modular reactor
NASA Astrophysics Data System (ADS)
Kitcher, Evans Damenortey
As world demand for energy continues to grow at unprecedented rates, the world energy portfolio of the future will inevitably include a nuclear energy contribution. It has been suggested that the Small Modular Reactor (SMR) could play a significant role in the spread of civilian nuclear technology to nations previously without nuclear energy. As part of the design process, the SMR design must be assessed for the threat to operations posed by xenon-induced power oscillations. In this research, a generic SMR design was analyzed with respect to just such a threat. In order to do so, a multi-physics coupling routine was developed with MCNP/MCNPX as the neutronics solver. Thermal hydraulic assessments were performed using a single-channel analysis tool developed in Python. Fuel and coolant temperature profiles were implemented in the form of temperature-dependent fuel cross sections generated using the SIGACE code and reactor core coolant densities. The Power Axial Offset (PAO) and Xenon Axial Offset (XAO) parameters were chosen to quantify any oscillatory behavior observed. The methodology was benchmarked against published results of startup tests performed at a four-loop PWR in Korea. The developed benchmark model replicated the pertinent features of the reactor within ten percent of the literature values. The results of the benchmark demonstrated that the developed methodology captured the desired phenomena accurately. Subsequently, a high-fidelity SMR core model was developed and assessed. Results of the analysis revealed an inherently stable SMR design at beginning of core life and end of core life under full-power and half-power conditions. The effects of axial discretization, stochastic noise, and convergence of the Monte Carlo tallies on the calculated PAO and XAO parameters were investigated. All were found to be quite small, and the inherently stable nature of the core design with respect to xenon-induced power oscillations was confirmed. Finally, a preliminary investigation into excess reactivity control options for the SMR design was conducted, confirming the generally held notion that existing PWR control mechanisms can be used in iPWR SMRs with similar effectiveness. Given the desire to operate the SMR with boron-free coolant, erbium oxide integral burnable absorber fuel rods were identified as a possible replacement for soluble boron in the coolant that retains its dispersed-absorber effect.
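The dissertation's exact definitions are not quoted in the abstract, but the axial-offset metrics it names are conventionally built from the power (or xenon concentration) integrated over the top and bottom halves of the core, for example

\[
\mathrm{PAO} \;=\; \frac{P_{\mathrm{top}} - P_{\mathrm{bottom}}}{P_{\mathrm{top}} + P_{\mathrm{bottom}}},
\qquad
\mathrm{XAO} \;=\; \frac{X_{\mathrm{top}} - X_{\mathrm{bottom}}}{X_{\mathrm{top}} + X_{\mathrm{bottom}}},
\]

so that a stable core shows these quantities decaying toward a steady value after a perturbation, whereas a xenon-unstable core shows oscillations of growing amplitude.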
On the Impact of a Quadratic Acceleration Term in the Analysis of Position Time Series
NASA Astrophysics Data System (ADS)
Bogusz, Janusz; Klos, Anna; Bos, Machiel Simon; Hunegnaw, Addisu; Teferle, Felix Norman
2016-04-01
The analysis of Global Navigation Satellite System (GNSS) position time series generally assumes that each of the coordinate component series is described by the sum of a linear rate (velocity) and various periodic terms. The residuals, the deviations between the fitted model and the observations, are then a measure of the epoch-to-epoch scatter and have been used for the analysis of the stochastic character (noise) of the time series. Often the parameters of interest in GNSS position time series are the velocities and their associated uncertainties, which have to be determined with the highest reliability. It is clear that not all GNSS position time series follow this simple linear behaviour. Therefore, we have added an acceleration term in the form of a quadratic polynomial function to the model in order to better describe the non-linear motion in the position time series. This non-linear motion could be a response to purely geophysical processes, for example, elastic rebound of the Earth's crust due to ice mass loss in Greenland, artefacts due to deficiencies in bias mitigation models, for example, of the GNSS satellite and receiver antenna phase centres, or any combination thereof. In this study we have simulated 20 time series, each 23 years long, with different stochastic characteristics such as white, flicker or random walk noise. The noise amplitude was assumed to be 1 mm/y^(-κ/4), where κ is the spectral index of the noise. Then, we added the deterministic part consisting of a linear trend of 20 mm/y (that represents the averaged horizontal velocity) and accelerations ranging from minus 0.6 to plus 0.6 mm/y^2. For all these data we estimated the noise parameters with Maximum Likelihood Estimation (MLE) using the Hector software package without taking the non-linear term into account. In this way we set a benchmark against which to investigate how the noise properties and velocity uncertainty may be affected by any un-modelled, non-linear term. The velocities and their uncertainties versus the accelerations for different types of noise are determined. Furthermore, we have selected 40 globally distributed stations that have a clear non-linear behaviour from two different International GNSS Service (IGS) analysis centers: JPL (Jet Propulsion Laboratory) and BLT (British Isles continuous GNSS Facility and University of Luxembourg Tide Gauge Benchmark Monitoring (TIGA) Analysis Center). We obtained maximum accelerations of -1.8±1.2 mm/y^2 and -4.5±3.3 mm/y^2 for the horizontal and vertical components, respectively. The noise analysis tests have shown that the addition of the non-linear term has significantly whitened the power spectra of the position time series, i.e. shifted the spectral index from flicker towards white noise.
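A hedged sketch of the functional model described here, with generic symbols rather than the authors' notation, is

\[
x(t) \;=\; x_0 \;+\; v\,t \;+\; \tfrac{1}{2}\,a\,t^{2} \;+\; \sum_{k}\bigl[A_k\sin(2\pi f_k t)+B_k\cos(2\pi f_k t)\bigr] \;+\; \varepsilon(t),
\]

where \(v\) is the secular velocity, \(a\) the acceleration term under test, the sum collects the periodic (for example annual and semi-annual) signals, and \(\varepsilon(t)\) is the coloured-noise residual whose spectral index and amplitude are estimated by MLE. Omitting the \(\tfrac{1}{2}a t^{2}\) term when it is present in the data pushes signal into \(\varepsilon(t)\), which is the mechanism behind the spectral-index shift reported above.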
Upscaling: Effective Medium Theory, Numerical Methods and the Fractal Dream
NASA Astrophysics Data System (ADS)
Guéguen, Y.; Ravalec, M. Le; Ricard, L.
2006-06-01
Upscaling is a major issue regarding mechanical and transport properties of rocks. This paper examines three issues relative to upscaling. The first one is a brief overview of Effective Medium Theory (EMT), which is a key tool to predict average rock properties at a macroscopic scale in the case of a statistically homogeneous medium. EMT is of particular interest in the calculation of elastic properties. As discussed in this paper, EMT can thus provide a possible way to perform upscaling, although it is by no means the only one, and in particular it is irrelevant if the medium does not adhere to statistical homogeneity. This last circumstance is examined in part two of the paper. We focus on the example of constructing a hydrocarbon reservoir model. Such a construction is a required step in the process of making reasonable predictions for oil production. Taking into account rock permeability, lithological units and various structural discontinuities at different scales is part of this construction. The result is that stochastic reservoir models are built that rely on various numerical upscaling methods. These methods are reviewed. They provide techniques which make it possible to deal with upscaling on a general basis. Finally, a last case in which upscaling is trivial is considered in the third part of the paper. This is the fractal case. Fractal models have become popular precisely because they are free of the assumption of statistical homogeneity and yet do not involve numerical methods. It is suggested that using a physical criterion as a means to discriminate whether fractality is a dream or reality would be more satisfactory than relying on a limited data set alone.
Limitations of Community College Benchmarking and Benchmarks
ERIC Educational Resources Information Center
Bers, Trudy H.
2006-01-01
This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.
Computational Nuclear Physics and Post Hartree-Fock Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lietz, Justin; Sam, Novario; Hjorth-Jensen, M.
We present a computational approach to infinite nuclear matter employing Hartree-Fock theory, many-body perturbation theory and coupled cluster theory. These lectures are closely linked with those of chapters 9, 10 and 11 and serve as input for the correlation functions employed in Monte Carlo calculations in chapter 9, the in-medium similarity renormalization group theory of dense fermionic systems of chapter 10 and the Green's function approach in chapter 11. We provide extensive code examples and benchmark calculations, thereby allowing the reader to start writing her/his own codes. We start with an object-oriented serial code and end with discussions on strategies for porting the code to present and planned high-performance computing facilities.
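As one small, hedged example of the post-Hartree-Fock machinery such lectures cover (textbook material, not a quotation from the chapter), the second-order many-body perturbation correction to the energy takes the standard form

\[
\Delta E^{(2)} \;=\; \frac{1}{4}\sum_{ij}\sum_{ab}
\frac{\bigl|\langle ij\,\|\,ab\rangle\bigr|^{2}}
{\varepsilon_i+\varepsilon_j-\varepsilon_a-\varepsilon_b},
\]

where \(i,j\) run over hole (occupied) states, \(a,b\) over particle (unoccupied) states, \(\langle ij\|ab\rangle\) are antisymmetrized two-body matrix elements, and \(\varepsilon_p\) are Hartree-Fock single-particle energies. Coupled cluster theory then resums selected classes of such terms to all orders.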
Benchmarking specialty hospitals, a scoping review on theory and practice.
Wind, A; van Harten, W H
2017-04-04
Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into pathway benchmarking, institutional benchmarking, articles on benchmarking methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was just described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model described the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design, and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking to improve quality in specialty hospitals, robust and structured designs are needed, including a follow-up to check whether the benchmark study has led to improvements.
Barillot, Romain; Louarn, Gaëtan; Escobar-Gutiérrez, Abraham J; Huynh, Pierre; Combes, Didier
2011-10-01
Most studies dealing with light partitioning in intercropping systems have used statistical models based on the turbid medium approach, thus assuming homogeneous canopies. However, these models could not be directly validated although spatial heterogeneities could arise in such canopies. The aim of the present study was to assess the ability of the turbid medium approach to accurately estimate light partitioning within grass-legume mixed canopies. Three contrasted mixtures of wheat-pea, tall fescue-alfalfa and tall fescue-clover were sown according to various patterns and densities. Three-dimensional plant mock-ups were derived from magnetic digitizations carried out at different stages of development. The benchmarks for light interception efficiency (LIE) estimates were provided by the combination of a light projective model and plant mock-ups, which also provided the inputs of a turbid medium model (SIRASCA), i.e. leaf area index and inclination. SIRASCA was set to gradually account for vertical heterogeneity of the foliage, i.e. the canopy was described as one, two or ten horizontal layers of leaves. Mixtures exhibited various and heterogeneous profiles of foliar distribution, leaf inclination and component species height. Nevertheless, most of the LIE was satisfactorily predicted by SIRASCA. Biased estimations were, however, observed for (1) grass species and (2) tall fescue-alfalfa mixtures grown at high density. Most of the discrepancies were due to vertical heterogeneities and were corrected by increasing the vertical description of canopies although, in practice, this would require time-consuming measurements. The turbid medium analogy could be successfully used in a wide range of canopies. However, a more detailed description of the canopy is required for mixtures exhibiting vertical stratifications and inter-/intra-species foliage overlapping. Architectural models remain a relevant tool for studying light partitioning in intercropping systems that exhibit strong vertical heterogeneities. Moreover, these models offer the possibility to integrate the effects of microclimate variations on plant growth.
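In its simplest homogeneous-layer form, the turbid-medium approach evaluated here is a Beer-Lambert law; a hedged sketch for a mixed layer (not necessarily SIRASCA's exact formulation) is

\[
\mathrm{LIE} \;=\; 1-\exp\!\Bigl(-\sum_{j} k_j\,\mathrm{LAI}_j\Bigr),
\qquad
\mathrm{LIE}_j \;=\; \frac{k_j\,\mathrm{LAI}_j}{\sum_{i} k_i\,\mathrm{LAI}_i}\,\mathrm{LIE},
\]

where \(k_j\) is the extinction coefficient of species \(j\), set by its leaf inclination and the direction of the incident light, and \(\mathrm{LAI}_j\) is its leaf area index. Describing the canopy as two or ten horizontal layers, as done in the study, amounts to applying these expressions layer by layer to the light transmitted through the layers above, which is why a finer vertical description corrects the bias for vertically stratified mixtures.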
Gantri, M.
2014-01-01
The present paper gives a new computational framework within which radiative transfer in a varying refractive index biological tissue can be studied. In our previous works, Legendre transform was used as an innovative view to handle the angular derivative terms in the case of uniform refractive index spherical medium. In biomedical optics, our analysis can be considered as a forward problem solution in a diffuse optical tomography imaging scheme. We consider a rectangular biological tissue-like domain with spatially varying refractive index submitted to a near infrared continuous light source. Interaction of radiation with the biological material into the medium is handled by a radiative transfer model. In the studied situation, the model displays two angular redistribution terms that are treated with Legendre integral transform. The model is used to study a possible detection of abnormalities in a general biological tissue. The effect of the embedded nonhomogeneous objects on the transmitted signal is studied. Particularly, detection of targets of localized heterogeneous inclusions within the tissue is discussed. Results show that models accounting for variation of refractive index can yield useful predictions about the target and the location of abnormal inclusions within the tissue. PMID:25013454
NASA Astrophysics Data System (ADS)
Avendaño, Carlos G.; Reyes, Arturo
2017-03-01
We theoretically study the dispersion relation for axially propagating electromagnetic waves throughout a one-dimensional helical structure whose pitch and dielectric and magnetic properties are spatial random functions with specific statistical characteristics. In the system of coordinates rotating with the helix, by using a matrix formalism, we write the set of differential equations that governs the expected value of the electromagnetic field amplitudes and we obtain the corresponding dispersion relation. We show that the dispersion relation depends strongly on the noise intensity introduced in the system and the autocorrelation length. When the autocorrelation length increases at fixed fluctuation and when the fluctuation augments at fixed autocorrelation length, the band gap widens and the attenuation coefficient of electromagnetic waves propagating in the random medium gets larger. By virtue of the degeneracy in the imaginary part of the eigenvalues associated with the propagating modes, the random medium acts as a filter for circularly polarized electromagnetic waves, in which only the propagating backward circularly polarized wave can propagate with no attenuation. Our results are valid for any kind of dielectric and magnetic structures which possess a helical-like symmetry such as cholesteric and chiral smectic-C liquid crystals, structurally chiral materials, and stressed cholesteric elastomers.
On the mass function of stars growing in a flocculent medium
NASA Astrophysics Data System (ADS)
Maschberger, Th.
2013-12-01
Stars form in regions of very inhomogeneous densities and may have chaotic orbital motions. This leads to a time variation of the accretion rate, which will spread the masses over some mass range. We investigate the mass distribution functions that arise from fluctuating accretion rates in non-linear accretion, ṁ ∝ m^α. The distribution functions evolve in time and develop a power-law tail attached to a lognormal body, like in numerical simulations of star formation. Small fluctuations may be modelled by a Gaussian and develop a power-law tail ∝ m^(-α) at the high-mass side for α > 1 and at the low-mass side for α < 1. Large fluctuations require that their distribution is strictly positive, for example, lognormal. For positive fluctuations the mass distribution function always develops the power-law tail at the high-mass side, independent of whether α is larger or smaller than unity. Furthermore, we discuss Bondi-Hoyle accretion in a supersonically turbulent medium, the range of parameters for which non-linear stochastic growth could shape the stellar initial mass function, as well as the effects of a distribution of initial masses and growth times.
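A minimal Monte Carlo sketch of the growth process studied here, non-linear accretion ṁ ∝ m^α with a Gaussian-fluctuating rate, could look like the following; the parameter values are illustrative and are not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n_stars, n_steps, dt = 100_000, 400, 0.01
alpha, mu, sigma = 2.0, 1.0, 0.5      # accretion exponent and rate parameters (illustrative)
m = np.full(n_stars, 0.1)             # common seed mass for all objects

for _ in range(n_steps):
    dW = rng.standard_normal(n_stars) * np.sqrt(dt)   # Gaussian rate fluctuations
    m += m**alpha * (mu * dt + sigma * dW)            # Euler-Maruyama step of dm = m^alpha (mu dt + sigma dW)
    m = np.maximum(m, 1e-6)                           # crude guard against unphysical negative masses

# With alpha > 1 the histogram of m develops a power-law tail on the high-mass
# side of a roughly lognormal body, the behaviour described in the abstract.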
Numerical simulation of bubble plumes and an analysis of their seismic attributes
NASA Astrophysics Data System (ADS)
Li, Canping; Gou, Limin; You, Jiachun
2017-04-01
To study the seismic response characteristics of bubble plumes, a model of a plume water body is built in this article using an acoustic velocity model for bubble-laden water and stochastic medium theory, based on an analysis of both the acoustic characteristics of a bubble-containing water body and the actual features of a plume. The finite difference method is used for forward modelling, and the single-shot seismic record exhibits the characteristics of the scattered wave field generated by a plume. A meaningful conclusion is obtained by extracting seismic attributes from the pre-stack shot gather record of a plume: the values of the amplitude-related seismic attributes increase greatly as the bubble content goes up, whereas changes in bubble radius do not cause the seismic attributes to change, primarily because the bubble content has a strong impact on the plume's acoustic velocity while the bubble radius has a weak impact on it. This conclusion provides a theoretical reference for identifying hydrate plumes using seismic methods and contributes to further study of hydrate decomposition and migration, as well as of the distribution of methane bubbles in seawater.
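The abstract does not spell out the velocity model used; a common low-frequency choice for bubbly water is the Wood mixture law, which is given here as a hedged illustration and already explains why velocity is controlled by gas content rather than bubble radius:

\[
\frac{1}{\rho_m c_m^{2}} \;=\; \frac{\beta}{\rho_g c_g^{2}} \;+\; \frac{1-\beta}{\rho_w c_w^{2}},
\qquad
\rho_m \;=\; \beta\rho_g + (1-\beta)\rho_w,
\]

where \(\beta\) is the bubble volume fraction and the subscripts \(g\), \(w\) and \(m\) denote gas, water and mixture, respectively. Bubble radius enters only through resonance and dispersion effects that are neglected in this quasi-static limit, consistent with the weak radius sensitivity reported above.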
Cardiac cell: a biological laser?
Chorvat, D; Chorvatova, A
2008-04-01
We present a new concept of cardiac cells based on an analogy with lasers, practical implementations of quantum resonators. In this concept, each cardiac cell comprises a network of independent nodes, characterised by a set of discrete energy levels and certain transition probabilities between them. Interaction between the nodes is given by threshold-limited energy transfer, leading to quantum-like behaviour of the whole network. We propose that in cardiomyocytes, during each excitation-contraction coupling cycle, stochastic calcium release and the unitary properties of ionic channels constitute an analogue to laser active medium prone to "population inversion" and "spontaneous emission" phenomena. This medium, when powered by an incoming threshold-reaching voltage discharge in the form of an action potential, responds to the calcium influx through L-type calcium channels by stimulated emission of Ca2+ ions in a coherent, synchronised and amplified release process known as calcium-induced calcium release. In parallel, phosphorylation-stimulated molecular amplification in protein cascades adds tuneable features to the cells. In this framework, the heart can be viewed as a coherent network of synchronously firing cardiomyocytes behaving as pulsed laser-like amplifiers, coupled to pulse-generating pacemaker master-oscillators. The concept brings a new viewpoint on cardiac diseases as possible alterations of "cell lasing" properties.
Ellis, Judith
2006-07-01
The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. The Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being effectively used by some frontline staff. However, use is inconsistent, with the value of the tool kit, or the support clinical practice benchmarking requires to be effective, not always recognized or provided by National Health Service managers, who are absorbed with the use of quantitative benchmarking approaches and measurability of comparative performance data. This review of the published benchmarking literature was obtained through an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature and moving through to benchmarking activity in health services, and included not only published examples of benchmarking approaches and models but also consideration of web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used, remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative and specifically performance benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and then applied to the health service (Bullivant 1998). The literature is also in the main descriptive in its support of the effectiveness of benchmarking activity, and although this does not seem to have restricted its popularity in quantitative activity, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach which needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.
Quantum stochastic calculus associated with quadratic quantum noises
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Un Cig, E-mail: uncigji@chungbuk.ac.kr; Sinha, Kalyan B., E-mail: kbs-jaya@yahoo.co.in
2016-02-15
We first study a class of fundamental quantum stochastic processes induced by the generators of a six dimensional non-solvable Lie †-algebra consisting of all linear combinations of the generalized Gross Laplacian and its adjoint, annihilation operator, creation operator, conservation, and time, and then we study the quantum stochastic integrals associated with the class of fundamental quantum stochastic processes, and the quantum Itô formula is revisited. The existence and uniqueness of solution of a quantum stochastic differential equation is proved. The unitarity conditions of solutions of quantum stochastic differential equations associated with the fundamental processes are examined. The quantum stochastic calculus extends the Hudson-Parthasarathy quantum stochastic calculus.
Stochastic models for inferring genetic regulation from microarray gene expression data.
Tian, Tianhai
2010-03-01
Microarray expression profiles are inherently noisy and many different sources of variation exist in microarray experiments. It is still a significant challenge to develop stochastic models to realize noise in microarray expression profiles, which has profound influence on the reverse engineering of genetic regulation. Using the target genes of the tumour suppressor gene p53 as the test problem, we developed stochastic differential equation models and established the relationship between the noise strength of stochastic models and parameters of an error model for describing the distribution of the microarray measurements. Numerical results indicate that the simulated variance from stochastic models with a stochastic degradation process can be represented by a monomial in terms of the hybridization intensity and the order of the monomial depends on the type of stochastic process. The developed stochastic models with multiple stochastic processes generated simulations whose variance is consistent with the prediction of the error model. This work also established a general method to develop stochastic models from experimental information. 2009 Elsevier Ireland Ltd. All rights reserved.
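A hedged, generic example of the kind of model described (not necessarily the authors' exact equations) is an Itô stochastic differential equation in which the degradation step carries multiplicative noise,

\[
\mathrm{d}x(t) \;=\; \bigl(k_{\mathrm{syn}} - k_{\mathrm{deg}}\,x(t)\bigr)\,\mathrm{d}t \;+\; \sigma\,x(t)\,\mathrm{d}W(t),
\]

where \(x\) is the expression level, \(k_{\mathrm{syn}}\) and \(k_{\mathrm{deg}}\) are synthesis and degradation rates, and \(W\) is a Wiener process. For sufficiently weak noise the stationary variance of such a process scales as the square of the mean expression level, one concrete instance of the monomial dependence of variance on hybridization intensity reported in the abstract.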
CRPropa 3.1—a low energy extension based on stochastic differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merten, Lukas; Tjus, Julia Becker; Eichmann, Björn
The propagation of charged cosmic rays through the Galactic environment influences all aspects of the observation at Earth. Energy spectrum, composition and arrival directions are changed due to deflections in magnetic fields and interactions with the interstellar medium. Today the transport is simulated with different simulation methods either based on the solution of a transport equation (multi-particle picture) or a solution of an equation of motion (single-particle picture). We developed a new module for the publicly available propagation software CRPropa 3.1, where we implemented an algorithm to solve the transport equation using stochastic differential equations. This technique allows us to use a diffusion tensor which is anisotropic with respect to an arbitrary magnetic background field. The source code of CRPropa is written in C++ with python steering via SWIG which makes it easy to use and computationally fast. In this paper, we present the new low-energy propagation code together with validation procedures that are developed to prove the accuracy of the new implementation. Furthermore, we show first examples of the cosmic ray density evolution, which depends strongly on the ratio of the parallel (κ∥) and perpendicular (κ⊥) diffusion coefficients. This dependency is systematically examined, as well as the influence of the particle rigidity on the diffusion process.
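The underlying technique, propagating pseudo-particles with a stochastic differential equation that is equivalent to the diffusive transport equation, can be sketched as follows. This is a generic Euler-Maruyama step about a uniform background field and is not CRPropa's API; the function and variable names are hypothetical.

import numpy as np

def propagate(x0, kappa_par, kappa_perp, b_hat, dt, n_steps, rng):
    """Pseudo-particle trajectory for anisotropic spatial diffusion about a
    uniform unit background-field direction b_hat. With a constant diffusion
    tensor the SDE has no drift term, so each step is a pure Wiener increment
    scaled by sqrt(2*kappa) along and across the field. Illustrative only."""
    b_hat = np.asarray(b_hat, dtype=float)
    # build an orthonormal triad (b_hat, e1, e2)
    e1 = np.cross(b_hat, [0.0, 0.0, 1.0])
    if np.linalg.norm(e1) < 1e-12:
        e1 = np.cross(b_hat, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(b_hat, e1)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        w = rng.standard_normal(3) * np.sqrt(dt)      # Wiener increments
        x += (np.sqrt(2.0 * kappa_par) * w[0] * b_hat
              + np.sqrt(2.0 * kappa_perp) * (w[1] * e1 + w[2] * e2))
    return x

Averaging many such trajectories recovers the solution of the diffusion equation, and the ratio kappa_par/kappa_perp directly controls how elongated the resulting density distribution becomes along the field.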
NASA Technical Reports Server (NTRS)
Bell, Michael A.
1999-01-01
Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure have been built which allow short-duration benchmarking studies that yield readily implementable results gleaned from world-class partners. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.
Richard V. Field, Jr.; Emery, John M.; Grigoriu, Mircea Dan
2015-05-19
The stochastic collocation (SC) and stochastic Galerkin (SG) methods are two well-established and successful approaches for solving general stochastic problems. A recently developed method based on stochastic reduced order models (SROMs) can also be used. Herein we provide a comparison of the three methods for some numerical examples; our evaluation only holds for the examples considered in the paper. The purpose of the comparisons is not to criticize the SC or SG methods, which have proven very useful for a broad range of applications, nor is it to provide overall ratings of these methods as compared to the SROM method. Furthermore, our objectives are to present the SROM method as an alternative approach to solving stochastic problems and provide information on the computational effort required by the implementation of each method, while simultaneously assessing their performance for a collection of specific problems.
A New Control Paradigm for Stochastic Differential Equations
NASA Astrophysics Data System (ADS)
Schmid, Matthias J. A.
This study presents a novel comprehensive approach to the control of dynamic systems under uncertainty governed by stochastic differential equations (SDEs). Large Deviations (LD) techniques are employed to arrive at a control law for a large class of nonlinear systems minimizing sample path deviations. Thereby, a paradigm shift is suggested from point-in-time to sample path statistics on function spaces. A suitable formal control framework which leverages embedded Freidlin-Wentzell theory is proposed and described in detail. This includes the precise definition of the control objective and comprises an accurate discussion of the adaptation of the Freidlin-Wentzell theorem to the particular situation. The new control design is enabled by the transformation of an ill-posed control objective into a well-conditioned sequential optimization problem. A direct numerical solution process is presented using quadratic programming, but the emphasis is on the development of a closed-form expression reflecting the asymptotic deviation probability of a particular nominal path. This is identified as the key factor in the success of the new paradigm. An approach employing the second variation and the differential curvature of the effective action is suggested for small deviation channels leading to the Jacobi field of the rate function and the subsequently introduced Jacobi field performance measure. This closed-form solution is utilized in combination with the supplied parametrization of the objective space. For the first time, this allows for an LD based control design applicable to a large class of nonlinear systems. Thus, Minimum Large Deviations (MLD) control is effectively established in a comprehensive structured framework. The construction of the new paradigm is completed by an optimality proof for the Jacobi field performance measure, an interpretive discussion, and a suggestion for efficient implementation. The potential of the new approach is exhibited by its extension to scalar systems subject to state-dependent noise and to systems of higher order. The suggested control paradigm is further advanced when a sequential application of MLD control is considered. This technique yields a nominal path corresponding to the minimum total deviation probability on the entire time domain. It is demonstrated that this sequential optimization concept can be unified in a single objective function which is revealed to be the Jacobi field performance index on the entire domain subject to an endpoint deviation. The emerging closed-form term replaces the previously required nested optimization and, thus, results in a highly efficient application-ready control design. This effectively substantiates Minimum Path Deviation (MPD) control. The proposed control paradigm allows the specific problem of stochastic cost control to be addressed as a special case. This new technique is employed within this study for the stochastic cost problem giving rise to Cost Constrained MPD (CCMPD) as well as to Minimum Quadratic Cost Deviation (MQCD) control. An exemplary treatment of a generic scalar nonlinear system subject to quadratic costs is performed for MQCD control to demonstrate the elementary expandability of the new control paradigm. This work concludes with a numerical evaluation of both MPD and CCMPD control for three exemplary benchmark problems. Numerical issues associated with the simulation of SDEs are briefly discussed and illustrated. The numerical examples furnish proof of the successful design. 
This study is complemented by a thorough review of statistical control methods, stochastic processes, Large Deviations techniques and the Freidlin-Wentzell theory, providing a comprehensive, self-contained account. The presentation of the mathematical tools and concepts is of a unique character, specifically addressing an engineering audience.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, E.; Wang, L.; Gonder, J.
2013-10-01
Battery electric vehicles possess great potential for decreasing lifecycle costs in medium-duty applications, a market segment currently dominated by internal combustion technology. Characterized by frequent repetition of similar routes and daily return to a central depot, medium-duty vocations are well positioned to leverage the low operating costs of battery electric vehicles. Unfortunately, the range limitation of commercially available battery electric vehicles acts as a barrier to widespread adoption. This paper describes the National Renewable Energy Laboratory's collaboration with the U.S. Department of Energy and industry partners to analyze the use of small hydrogen fuel-cell stacks to extend the range of battery electric vehicles as a means of improving utility, and presumably, increasing market adoption. This analysis employs real-world vocational data and near-term economic assumptions to (1) identify optimal component configurations for minimizing lifecycle costs, (2) benchmark economic performance relative to both battery electric and conventional powertrains, and (3) understand how the optimal design and its competitiveness change with respect to duty cycle and economic climate. It is found that small fuel-cell power units provide extended range at significantly lower capital and lifecycle costs than additional battery capacity alone. And while fuel-cell range-extended vehicles are not deemed economically competitive with conventional vehicles given present-day economic conditions, this paper identifies potential future scenarios where cost equivalency is achieved.
Fully Flexible Docking of Medium Sized Ligand Libraries with RosettaLigand
DeLuca, Samuel; Khar, Karen; Meiler, Jens
2015-01-01
RosettaLigand has been successfully used to predict binding poses in protein-small molecule complexes. However, the RosettaLigand docking protocol is comparatively slow in identifying an initial starting pose for the small molecule (ligand) making it unfeasible for use in virtual High Throughput Screening (vHTS). To overcome this limitation, we developed a new sampling approach for placing the ligand in the protein binding site during the initial ‘low-resolution’ docking step. It combines the translational and rotational adjustments to the ligand pose in a single transformation step. The new algorithm is both more accurate and more time-efficient. The docking success rate is improved by 10–15% in a benchmark set of 43 protein/ligand complexes, reducing the number of models that typically need to be generated from 1000 to 150. The average time to generate a model is reduced from 50 seconds to 10 seconds. As a result we observe an effective 30-fold speed increase, making RosettaLigand appropriate for docking medium sized ligand libraries. We demonstrate that this improved initial placement of the ligand is critical for successful prediction of an accurate binding position in the ‘high-resolution’ full atom refinement step. PMID:26207742
Laner, David; Rechberger, Helmut
2009-02-01
Waste prevention is a principal means of achieving the goals of waste management and a key element for developing sustainable economies. Small and medium-sized enterprises (SMEs) contribute substantially to environmental degradation, often not even being aware of their environmental effects. Therefore, several initiatives have been launched in Austria aimed at supporting waste prevention measures at the level of SMEs. To promote the most efficient projects, these have to be evaluated with respect to their contribution to the goals of waste management. It is the aim of this paper to develop a methodology for evaluating waste prevention measures in SMEs based on their goal orientation. First, conceptual problems of defining and delineating waste prevention activities are briefly discussed. Then an approach to evaluate waste prevention activities with respect to their environmental performance is presented, and benchmarks that allow for an efficient use of the available funds are developed. Finally, the evaluation method is applied to a number of former projects and the calculated results are analysed with respect to shortcomings and limitations of the model. It is found that the developed methodology can provide a tool for a more objective and comprehensible evaluation of waste prevention measures.
NASA Astrophysics Data System (ADS)
Marinacci, Federico; Pakmor, Rüdiger; Springel, Volker; Simpson, Christine M.
2014-08-01
We analyse the properties of the circumgalactic medium and the metal content of the stars comprising the central galaxy in eight hydrodynamical `zoom-in' simulations of disc galaxy formation. We use these properties as a benchmark for our model of galaxy formation physics implemented in the moving-mesh code AREPO, which succeeds in forming quite realistic late-type spirals in the set of `Aquarius' initial conditions of Milky-Way-sized haloes. Galactic winds significantly influence the morphology of the circumgalactic medium and induce bipolar features in the distribution of heavy elements. They also affect the thermodynamic properties of the circumgalactic gas by supplying an energy input that sustains its radiative losses. Although a significant fraction of the heavy elements are transferred from the central galaxy to the halo, and even beyond the virial radius, enough metals are retained by stars to yield a peak in their metallicity distributions at about Z⊙. All our default runs overestimate the stellar [O/Fe] ratio, an effect that we demonstrate can be rectified by an increase of the adopted Type Ia supernova rate. Nevertheless, the models have difficulty in producing stellar metallicity gradients of the same strength as observed in the Milky Way.
An Approach to Forecasting Health Expenditures, with Application to the U.S. Medicare System
Lee, Ronald; Miller, Timothy
2002-01-01
Objective To quantify uncertainty in forecasts of health expenditures. Study Design Stochastic time series models are estimated for historical variations in fertility, mortality, and health spending per capita in the United States, and used to generate stochastic simulations of the growth of Medicare expenditures. Individual health spending is modeled to depend on the number of years until death. Data Sources/Study Setting A simple accounting model is developed for forecasting health expenditures, using the U.S. Medicare system as an example. Principal Findings Medicare expenditures are projected to rise from 2.2 percent of GDP (gross domestic product) to about 8 percent of GDP by 2075. This increase is due in equal measure to increasing health spending per beneficiary and to population aging. The traditional projection method constructs high, medium, and low scenarios to assess uncertainty, an approach that has many problems. Using stochastic forecasting, we find a 95 percent probability that Medicare spending in 2075 will fall between 4 percent and 18 percent of GDP, indicating a wide band of uncertainty. Although there is substantial uncertainty about future mortality decline, it contributed little to uncertainty about future Medicare spending, since lower mortality both raises the number of elderly, tending to raise spending, and is associated with improved health of the elderly, tending to reduce spending. Uncertainty about fertility, by contrast, leads to great uncertainty about the future size of the labor force, and therefore adds importantly to uncertainty about the health-share of GDP. In the shorter term, the major source of uncertainty is health spending per capita. Conclusions History is a valuable guide for quantifying our uncertainty about future health expenditures. The probabilistic model we present has several advantages over the high–low scenario approach to forecasting. It indicates great uncertainty about future Medicare expenditures relative to GDP. PMID:12479501
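A deliberately simplified sketch of the probabilistic-projection idea follows; the parameters are purely illustrative, and the paper's richer model separately treats fertility, mortality, time-until-death spending profiles and GDP rather than the single composite growth rate used here.

import numpy as np

rng = np.random.default_rng(1)
n_sims, horizon = 10_000, 75
share0 = 0.022                       # starting Medicare share of GDP, as quoted in the abstract
g_mean, g_sd = 0.018, 0.012          # illustrative mean and volatility of annual excess growth of that share

# Each simulation draws a path of stochastic excess-growth rates for the
# spending share and compounds them; percentiles of the terminal share give
# a probability interval analogous in spirit to the band discussed above.
growth = rng.normal(g_mean, g_sd, size=(n_sims, horizon))
share_paths = share0 * np.exp(np.cumsum(growth, axis=1))
low, median, high = np.percentile(share_paths[:, -1], [2.5, 50.0, 97.5])

The width of the resulting interval depends on how persistent the shocks are assumed to be, which is why stochastic time-series modelling of the drivers, rather than fixed high/medium/low scenarios, is the point of the approach.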
Extracting Independent Local Oscillatory Geophysical Signals by Geodetic Tropospheric Delay
NASA Technical Reports Server (NTRS)
Botai, O. J.; Combrinck, L.; Sivakumar, V.; Schuh, H.; Bohm, J.
2010-01-01
Zenith Tropospheric Delay (ZTD) due to water vapor derived from space geodetic techniques and numerical weather prediction simulated-reanalysis data exhibits non-linear and non-stationary properties akin to those in the crucial geophysical signals of interest to the research community. These time series, once decomposed into additive (and stochastic) components, have information about the long-term global change (the trend) and other interpretable (quasi-) periodic components such as seasonal cycles and noise. Such stochastic component(s) could be a function that exhibits at most one extremum within a data span or a monotonic function within a certain temporal span. In this contribution, we examine the use of combined Ensemble Empirical Mode Decomposition (EEMD) and Independent Component Analysis (ICA), the EEMD-ICA algorithm, to extract the independent local oscillatory stochastic components in the tropospheric delay derived from the European Centre for Medium-Range Weather Forecasts (ECMWF) over six geodetic sites (HartRAO, Hobart26, Wettzell, Gilcreek, Westford, and Tsukub32). The proposed methodology allows independent geophysical processes to be extracted and assessed. Analysis of the quality index of the Independent Components (ICs) derived for each cluster of local oscillatory components (also called the Intrinsic Mode Functions (IMFs)) for all the geodetic stations considered in the study demonstrates that they are strongly site dependent. Such strong dependency seems to suggest that the localized geophysical signals embedded in the ZTD over the geodetic sites are not correlated. Further, from the viewpoint of non-linear dynamical systems, four geophysical signals, namely the Quasi-Biennial Oscillation (QBO) index derived from the NCEP/NCAR reanalysis, the Southern Oscillation Index (SOI) anomaly from NCEP, the SIDC monthly Sun Spot Number (SSN), and the Length of Day (LoD), are linked to the extracted signal components from ZTD. Results from the synchronization analysis show that ZTD and the geophysical signals exhibit a subtle, site-dependent phase synchronization index.
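A generic sketch of the EEMD-ICA idea in Python is given below; it assumes the PyEMD (EMD-signal) and scikit-learn packages, which are assumptions of this illustration rather than the tools named by the authors, and it is not their implementation.

import numpy as np
from PyEMD import EEMD                       # assumed package providing ensemble EMD
from sklearn.decomposition import FastICA    # assumed ICA implementation

def eemd_ica(ztd, n_components=4):
    """Decompose a ZTD series into Intrinsic Mode Functions with EEMD, then
    extract statistically independent oscillatory components from the IMF
    cluster with FastICA. Generic sketch of the EEMD-ICA pipeline."""
    signal = np.asarray(ztd, dtype=float)
    imfs = EEMD(trials=100).eemd(signal)            # shape: (n_imfs, n_samples)
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    sources = ica.fit_transform(imfs.T)             # rows: time, columns: independent components
    return imfs, sources

The independent components returned by such a pipeline are the site-dependent local oscillatory signals that the study then compares against indices such as the QBO, SOI, SSN and LoD.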
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W. II; Tsao, C.L.
1996-06-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. The report also updates benchmark values where appropriate, adds new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
Unification Theory of Optimal Life Histories and Linear Demographic Models in Internal Stochasticity
Oizumi, Ryo
2014-01-01
Life history of organisms is exposed to uncertainty generated by internal and external stochasticities. Internal stochasticity is generated by the randomness in each individual life history, such as randomness in food intake, genetic character and size growth rate, whereas external stochasticity is due to the environment. For instance, it is known that the external stochasticity tends to affect population growth rate negatively. It has been shown in a recent theoretical study using path-integral formulation in structured linear demographic models that internal stochasticity can affect population growth rate positively or negatively. However, internal stochasticity has not been the main subject of researches. Taking account of effect of internal stochasticity on the population growth rate, the fittest organism has the optimal control of life history affected by the stochasticity in the habitat. The study of this control is known as the optimal life schedule problems. In order to analyze the optimal control under internal stochasticity, we need to make use of “Stochastic Control Theory” in the optimal life schedule problem. There is, however, no such kind of theory unifying optimal life history and internal stochasticity. This study focuses on an extension of optimal life schedule problems to unify control theory of internal stochasticity into linear demographic models. First, we show the relationship between the general age-states linear demographic models and the stochastic control theory via several mathematical formulations, such as path–integral, integral equation, and transition matrix. Secondly, we apply our theory to a two-resource utilization model for two different breeding systems: semelparity and iteroparity. Finally, we show that the diversity of resources is important for species in a case. Our study shows that this unification theory can address risk hedges of life history in general age-states linear demographic models. PMID:24945258
Benchmarking in emergency health systems.
Kennedy, Marcus P; Allen, Jacqueline; Allen, Greg
2002-12-01
This paper discusses the role of benchmarking as a component of quality management. It describes the historical background of benchmarking, its competitive origin and the requirement in today's health environment for a more collaborative approach. The classical 'functional and generic' types of benchmarking are discussed with a suggestion to adopt a different terminology that describes the purpose and practicalities of benchmarking. Benchmarking is not without risks. The consequence of inappropriate focus and the need for a balanced overview of process is explored. The competition that is intrinsic to benchmarking is questioned and the negative impact it may have on improvement strategies in poorly performing organizations is recognized. The difficulty in achieving cross-organizational validity in benchmarking is emphasized, as is the need to scrutinize benchmarking measures. The cost effectiveness of benchmarking projects is questioned and the concept of 'best value, best practice' in an environment of fixed resources is examined.
NASA Technical Reports Server (NTRS)
Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)
1993-01-01
A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Qichun; Zhou, Jinglin; Wang, Hong
In this paper, stochastic coupling attenuation is investigated for a class of multi-variable bilinear stochastic systems, and a novel output feedback m-block backstepping controller with a linear estimator is designed, where gradient descent optimization is used to tune the design parameters of the controller. It has been shown that the trajectories of the closed-loop stochastic systems are bounded in probability and that the stochastic coupling of the system outputs can be effectively attenuated by the proposed control algorithm. Moreover, the stability of the stochastic systems is analyzed and the effectiveness of the proposed method has been demonstrated using a simulated example.
Benchmarking and Performance Measurement.
ERIC Educational Resources Information Center
Town, J. Stephen
This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.
2015-05-01
This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such benchmark data sets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W., II
1993-01-01
One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
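The two-tier screening logic described above can be sketched as a simple decision rule; the benchmark values in the example are placeholders, not values from the report.

```python
def screen_contaminant(concentration, lower_benchmark, upper_benchmark):
    """Classify an ambient concentration against lower/upper screening benchmarks."""
    if concentration >= upper_benchmark:
        return "clearly of concern; remedial action likely needed"
    if concentration >= lower_benchmark:
        return "of concern unless data are unreliable or the comparison is inappropriate"
    return "not of concern, provided the ambient data are adequate"

# Hypothetical example: ambient copper at 12 ug/L against placeholder benchmarks.
print(screen_contaminant(12.0, lower_benchmark=9.0, upper_benchmark=13.0))
```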
The KMAT: Benchmarking Knowledge Management.
ERIC Educational Resources Information Center
de Jager, Martha
Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…
Optimal Control for Stochastic Delay Evolution Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Qingxin, E-mail: mqx@hutc.zj.cn; Shen, Yang, E-mail: skyshen87@gmail.com
2016-08-15
In this paper, we investigate a class of infinite-dimensional optimal control problems, where the state equation is given by a stochastic delay evolution equation with random coefficients, and the corresponding adjoint equation is given by an anticipated backward stochastic evolution equation. We first prove the continuous dependence theorems for stochastic delay evolution equations and anticipated backward stochastic evolution equations, and show the existence and uniqueness of solutions to anticipated backward stochastic evolution equations. Then we establish necessary and sufficient conditions for optimality of the control problem in the form of Pontryagin’s maximum principles. To illustrate the theoretical results, we apply stochastic maximum principles to study two examples, an infinite-dimensional linear-quadratic control problem with delay and an optimal control of a Dirichlet problem for a stochastic partial differential equation with delay. Further applications of the two examples to a Cauchy problem for a controlled linear stochastic partial differential equation and an optimal harvesting problem are also considered.
Stochastic Community Assembly: Does It Matter in Microbial Ecology?
Zhou, Jizhong; Ning, Daliang
2017-12-01
Understanding the mechanisms controlling community diversity, functions, succession, and biogeography is a central, but poorly understood, topic in ecology, particularly in microbial ecology. Although stochastic processes are believed to play nonnegligible roles in shaping community structure, their importance relative to deterministic processes is hotly debated. The importance of ecological stochasticity in shaping microbial community structure is far less appreciated. Some of the main reasons for such heavy debates are the difficulty in defining stochasticity and the diverse methods used for delineating stochasticity. Here, we provide a critical review and synthesis of data from the most recent studies on stochastic community assembly in microbial ecology. We then describe both stochastic and deterministic components embedded in various ecological processes, including selection, dispersal, diversification, and drift. We also describe different approaches for inferring stochasticity from observational diversity patterns and highlight experimental approaches for delineating ecological stochasticity in microbial communities. In addition, we highlight research challenges, gaps, and future directions for microbial community assembly research. Copyright © 2017 American Society for Microbiology.
Benchmarking Undedicated Cloud Computing Providers for Analysis of Genomic Datasets
Yazar, Seyhan; Gooden, George E. C.; Mackey, David A.; Hewitt, Alex W.
2014-01-01
A major bottleneck in biological discovery is now emerging at the computational level. Cloud computing offers a dynamic means whereby small and medium-sized laboratories can rapidly adjust their computational capacity. We benchmarked two established cloud computing services, Amazon Web Services Elastic MapReduce (EMR) on Amazon EC2 instances and Google Compute Engine (GCE), using publicly available genomic datasets (E.coli CC102 strain and a Han Chinese male genome) and a standard bioinformatic pipeline on a Hadoop-based platform. Wall-clock time for complete assembly differed by 52.9% (95% CI: 27.5–78.2) for E.coli and 53.5% (95% CI: 34.4–72.6) for human genome, with GCE being more efficient than EMR. The cost of running this experiment on EMR and GCE differed significantly, with the costs on EMR being 257.3% (95% CI: 211.5–303.1) and 173.9% (95% CI: 134.6–213.1) more expensive for E.coli and human assemblies respectively. Thus, GCE was found to outperform EMR both in terms of cost and wall-clock time. Our findings confirm that cloud computing is an efficient and potentially cost-effective alternative for analysis of large genomic datasets. In addition to releasing our cost-effectiveness comparison, we present available ready-to-use scripts for establishing Hadoop instances with Ganglia monitoring on EC2 or GCE. PMID:25247298
Farzandipour, Mehrdad; Meidani, Zahra
2014-06-01
Websites, as one of the initial steps towards e-government adoption, facilitate the delivery of online and customer-oriented services. In this study we investigated the role of the websites of medical universities in providing educational and research services, following the e-government maturity model in Iranian universities. This descriptive and cross-sectional study was conducted through content analysis and benchmarking of the websites in 2012. The research population included all 37 medical university websites. Delivery of educational and research services through these university websites, covering the information, interaction, transaction, and integration stages, was investigated using a checklist. The data were then analyzed by means of descriptive statistics using SPSS software. The level of educational and research services provided by the websites of type I and type II medical universities was evaluated as medium, at 1.99 and 1.89, respectively. All the universities gained a mean score of 1 out of 3 in terms of integration of educational and research services. Results of the study indicated that Iranian universities have passed the information and interaction stages, but they have not made much progress in the transaction and integration stages. The failure to adopt e-government in Iranian medical universities, in which limiting factors such as users' e-literacy, access to the internet, and ICT infrastructure are not as crucial as in other organizations, suggests that e-government realization goes beyond technical challenges.
NASA Technical Reports Server (NTRS)
Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.
1991-01-01
A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification-all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
Comment on ‘egs_brachy: a versatile and fast Monte Carlo code for brachytherapy’
NASA Astrophysics Data System (ADS)
Yegin, Gultekin
2018-02-01
In a recent paper, Chamberland et al (2016 Phys. Med. Biol. 61 8214) develop a new Monte Carlo code called egs_brachy for brachytherapy treatments. It is based on EGSnrc, and written in the C++ programming language. In order to benchmark the egs_brachy code, the authors use it in various test case scenarios in which complex geometry conditions exist. Another EGSnrc based brachytherapy dose calculation engine, BrachyDose, is used for dose comparisons. The authors fail to prove that egs_brachy can produce reasonable dose values for brachytherapy sources in a given medium. The dose comparisons in the paper are erroneous and misleading. egs_brachy should not be used in any further research studies unless and until all the potential bugs are fixed in the code.
NASA Astrophysics Data System (ADS)
Garland, N. A.; Boyle, G. J.; Cocks, D. G.; White, R. D.
2018-02-01
This study reviews the neutral density dependence of electron transport in gases and liquids and develops a method to determine the nonlinear medium density dependence of electron transport coefficients and scattering rates required for modeling transport in the vicinity of gas-liquid interfaces. The method has its foundations in Blanc’s law for gas-mixtures and adapts the theory of Garland et al (2017 Plasma Sources Sci. Technol. 26) to extract electron transport data across the gas-liquid transition region using known data from the gas and liquid phases only. The method is systematically benchmarked against multi-term Boltzmann equation solutions for Percus-Yevick model liquids. Application to atomic liquids highlights the utility and accuracy of the derived method.
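Blanc's law, which the method above generalizes to nonlinear density dependence, combines pure-phase transport coefficients through mole-fraction-weighted reciprocals, 1/K_mix = sum_i x_i / K_i. A minimal sketch with invented values follows (illustrative only, not data from the paper).

```python
def blanc_mobility(fractions, mobilities):
    """Blanc's law: 1/K_mix = sum_i x_i / K_i for a mixture of components."""
    if abs(sum(fractions) - 1.0) > 1e-9:
        raise ValueError("mole fractions must sum to 1")
    return 1.0 / sum(x / k for x, k in zip(fractions, mobilities))

# Hypothetical two-component transition region: 30% "gas-like", 70% "liquid-like".
print(blanc_mobility([0.3, 0.7], [2.5, 0.8]))  # reduced mobilities, arbitrary units
```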
Eco-efficiency of solid waste management in Welsh SMEs
NASA Astrophysics Data System (ADS)
Sarkis, Joseph; Dijkshoorn, Jeroen
2005-11-01
This paper provides an efficiency analysis of solid waste management practices of manufacturing companies in Wales. We apply data envelopment analysis (DEA) to a data set compiled during the National Waste Survey Wales 2003. We explore the relative performance of small and medium-sized manufacturing enterprises (SMEs; 10-250 employees) in Wales. We determine the technical and scale environmental and economic efficiencies of these organizations. Our evaluation focuses on empirical data collected from companies in a wide diversity of manufacturing industries throughout Wales. We find significant differences in industry and size efficiencies. We also find correlations among environmental and economic efficiencies. These variations show that improvements can be made using benchmarks from similar and different-sized industries. Further investigation of possible reasons for these differences is recommended.
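A minimal DEA sketch, assuming a standard input-oriented CCR (constant returns to scale) model rather than the exact formulation used in the paper: each decision-making unit's efficiency is the optimum of a small linear program. The SME inputs and outputs below are invented.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o. X: (m inputs x n DMUs), Y: (s outputs x n DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]            # minimise theta; variables are [theta, lambda_1..n]
    A_in = np.c_[-X[:, [o]], X]            # sum_j lambda_j * x_ij <= theta * x_io
    A_out = np.c_[np.zeros((s, 1)), -Y]    # sum_j lambda_j * y_rj >= y_ro
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Hypothetical SMEs: inputs = [waste generated, disposal cost], output = [turnover].
X = np.array([[5.0, 8.0, 3.0], [2.0, 4.0, 1.5]])
Y = np.array([[10.0, 12.0, 6.0]])
for o in range(X.shape[1]):
    print(f"SME {o}: efficiency = {dea_ccr_efficiency(X, Y, o):.2f}")
```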
Liu, Meng; Wang, Ke
2010-12-07
This is a continuation of our paper [Liu, M., Wang, K., 2010. Persistence and extinction of a stochastic single-species model under regime switching in a polluted environment, J. Theor. Biol. 264, 934-944]. Taking both white noise and colored noise into account, a stochastic single-species model under regime switching in a polluted environment is studied. Sufficient conditions for extinction, stochastic nonpersistence in the mean, stochastic weak persistence and stochastic permanence are established. The threshold between stochastic weak persistence and extinction is obtained. The results show that a different type of noise has a different effect on the survival results. Copyright © 2010 Elsevier Ltd. All rights reserved.
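A minimal sketch of the kind of model studied (not the authors' exact system): a logistic single-species SDE whose growth rate switches between two regimes according to a continuous-time Markov chain (colored noise), driven additionally by white noise. All parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

r = {0: 0.6, 1: -0.2}        # hypothetical growth rate per regime (regime 1 = polluted)
b, sigma = 0.05, 0.15         # intraspecific competition, white-noise intensity
q01, q10 = 0.5, 0.8           # Markov-chain switching rates between the regimes

dt, T = 0.01, 50.0
n = int(T / dt)
x, state = 1.0, 0
xs = np.empty(n)
for i in range(n):
    # regime switching (colored noise): jump with probability rate * dt
    if state == 0 and rng.random() < q01 * dt:
        state = 1
    elif state == 1 and rng.random() < q10 * dt:
        state = 0
    # Euler-Maruyama step for dx = x*(r(state) - b*x) dt + sigma*x dW
    dW = np.sqrt(dt) * rng.standard_normal()
    x += x * (r[state] - b * x) * dt + sigma * x * dW
    x = max(x, 0.0)
    xs[i] = x

print(f"time-averaged population size: {xs.mean():.2f}")
```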
42 CFR 440.335 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...
42 CFR 440.335 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...
Hadjilouka, Agni; Mantzourani, Kyriaki-Sofia; Katsarou, Anastasia; Cavaiuolo, Marina; Ferrante, Antonio; Paramithiotis, Spiros; Mataragas, Marios; Drosinos, Eleftherios H
2015-02-01
The aims of the present study were to determine the prevalence and levels of Listeria monocytogenes and Escherichia coli O157:H7 in rocket and cucumber samples by deterministic (estimation of a single value) and stochastic (estimation of a range of values) approaches. In parallel, the chromogenic media commonly used for the recovery of these microorganisms were evaluated and compared, and the efficiency of an enzyme-linked immunosorbent assay (ELISA)-based protocol was validated. L. monocytogenes and E. coli O157:H7 were detected and enumerated using agar Listeria according to Ottaviani and Agosti plus RAPID' L. mono medium and Fluorocult plus sorbitol MacConkey medium with cefixime and tellurite in parallel, respectively. Identity was confirmed with biochemical and molecular tests and the ELISA. Performance indices of the media and the prevalence of both pathogens were estimated using Bayesian inference. In rocket, prevalence of both L. monocytogenes and E. coli O157:H7 was estimated at 7% (7 of 100 samples). In cucumber, prevalence was 6% (6 of 100 samples) and 3% (3 of 100 samples) for L. monocytogenes and E. coli O157:H7, respectively. The levels derived from the presence-absence data using Bayesian modeling were estimated at 0.12 CFU/25 g (0.06 to 0.20) and 0.09 CFU/25 g (0.04 to 0.170) for L. monocytogenes in rocket and cucumber samples, respectively. The corresponding values for E. coli O157:H7 were 0.59 CFU/25 g (0.43 to 0.78) and 1.78 CFU/25 g (1.38 to 2.24), respectively. The sensitivity and specificity of the culture media differed for rocket and cucumber samples. The ELISA technique had a high level of cross-reactivity. Parallel testing with at least two culture media was required to achieve a reliable result for L. monocytogenes or E. coli O157:H7 prevalence in rocket and cucumber samples.
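The Bayesian prevalence estimates reported above can be illustrated with a minimal conjugate sketch (not the authors' full model, which also accounts for test sensitivity and specificity): with a uniform Beta(1, 1) prior, detecting k positives in n samples yields a Beta(1 + k, 1 + n - k) posterior for prevalence.

```python
from scipy import stats

k, n = 7, 100                       # e.g. 7 positive rocket samples out of 100
posterior = stats.beta(1 + k, 1 + n - k)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"posterior mean prevalence: {posterior.mean():.3f}")
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```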
Organic contaminants in Great Lakes tributaries: Prevalence and potential aquatic toxicity
Baldwin, Austin K.; Corsi, Steven R.; De Cicco, Laura A.; Lenaker, Peter L.; Lutz, Michelle A; Sullivan, Daniel J.; Richards, Kevin D.
2016-01-01
Organic compounds used in agriculture, industry, and households make their way into surface waters through runoff, leaking septic-conveyance systems, regulated and unregulated discharges, and combined sewer overflows, among other sources. Concentrations of these organic waste compounds (OWCs) in some Great Lakes tributaries indicate a high potential for adverse impacts on aquatic organisms. During 2010–13, 709 water samples were collected at 57 tributaries, together representing approximately 41% of the total inflow to the lakes. Samples were collected during runoff and low-flow conditions and analyzed for 69 OWCs, including herbicides, insecticides, polycyclic aromatic hydrocarbons, plasticizers, antioxidants, detergent metabolites, fire retardants, non-prescription human drugs, flavors/fragrances, and dyes. Urban-related land cover characteristics were the most important explanatory variables of concentrations of many OWCs. Compared to samples from nonurban watersheds (< 15% urban land cover) samples from urban watersheds (> 15% urban land cover) had nearly four times the number of detected compounds and four times the total sample concentration, on average. Concentration differences between runoff and low-flow conditions were not observed, but seasonal differences were observed in atrazine, metolachlor, DEET, and HHCB concentrations. Water quality benchmarks for individual OWCs were exceeded at 20 sites, and at 7 sites benchmarks were exceeded by a factor of 10 or more. The compounds with the most frequent water quality benchmark exceedances were the PAHs benzo[a]pyrene, pyrene, fluoranthene, and anthracene, the detergent metabolite 4-nonylphenol, and the herbicide atrazine. Computed estradiol equivalency quotients (EEQs) using only nonsteroidal endocrine-active compounds indicated medium to high risk of estrogenic effects (intersex or vitellogenin induction) at 10 sites. EEQs at 3 sites were comparable to values reported in effluent. This multifaceted study is the largest, most comprehensive assessment of the occurrence and potential effects of OWCs in the Great Lakes Basin to date.
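Estradiol equivalency quotients of the kind reported above are commonly computed as a potency-weighted sum of measured concentrations, EEQ = sum_i C_i * RP_i, where RP_i is a compound's estrogenic potency relative to 17beta-estradiol. The concentrations and relative potencies below are hypothetical, not values from this study.

```python
# Hypothetical concentrations (ng/L) and relative estrogenic potencies.
concentration = {"4-nonylphenol": 250.0, "bisphenol A": 40.0}
relative_potency = {"4-nonylphenol": 2e-5, "bisphenol A": 6e-6}

eeq = sum(conc * relative_potency[name] for name, conc in concentration.items())
print(f"EEQ = {eeq:.4f} ng/L estradiol equivalents")
```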
Maximum principle for a stochastic delayed system involving terminal state constraints.
Wen, Jiaqiang; Shi, Yufeng
2017-01-01
We investigate a stochastic optimal control problem where the controlled system is depicted as a stochastic differential delayed equation; however, at the terminal time, the state is constrained in a convex set. We firstly introduce an equivalent backward delayed system depicted as a time-delayed backward stochastic differential equation. Then a stochastic maximum principle is obtained by virtue of Ekeland's variational principle. Finally, applications to a state constrained stochastic delayed linear-quadratic control model and a production-consumption choice problem are studied to illustrate the main obtained result.
42 CFR 440.330 - Benchmark health benefits coverage.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...
Momentum Maps and Stochastic Clebsch Action Principles
NASA Astrophysics Data System (ADS)
Cruzeiro, Ana Bela; Holm, Darryl D.; Ratiu, Tudor S.
2018-01-01
We derive stochastic differential equations whose solutions follow the flow of a stochastic nonlinear Lie algebra operation on a configuration manifold. For this purpose, we develop a stochastic Clebsch action principle, in which the noise couples to the phase space variables through a momentum map. This special coupling simplifies the structure of the resulting stochastic Hamilton equations for the momentum map. In particular, these stochastic Hamilton equations collectivize for Hamiltonians that depend only on the momentum map variable. The Stratonovich equations are derived from the Clebsch variational principle and then converted into Itô form. In comparing the Stratonovich and Itô forms of the stochastic dynamical equations governing the components of the momentum map, we find that the Itô contraction term turns out to be a double Poisson bracket. Finally, we present the stochastic Hamiltonian formulation of the collectivized momentum map dynamics and derive the corresponding Kolmogorov forward and backward equations.
Dynamics of non-holonomic systems with stochastic transport
NASA Astrophysics Data System (ADS)
Holm, D. D.; Putkaradze, V.
2018-01-01
This paper formulates a variational approach for treating observational uncertainty and/or computational model errors as stochastic transport in dynamical systems governed by action principles under non-holonomic constraints. For this purpose, we derive, analyse and numerically study the example of an unbalanced spherical ball rolling under gravity along a stochastic path. Our approach uses the Hamilton-Pontryagin variational principle, constrained by a stochastic rolling condition, which we show is equivalent to the corresponding stochastic Lagrange-d'Alembert principle. In the example of the rolling ball, the stochasticity represents uncertainty in the observation and/or error in the computational simulation of the angular velocity of rolling. The influence of the stochasticity on the deterministically conserved quantities is investigated both analytically and numerically. Our approach applies to a wide variety of stochastic, non-holonomically constrained systems, because it preserves the mathematical properties inherited from the variational principle.
Green Power Grids: How Energy from Renewable Sources Affects Networks and Markets
Mureddu, Mario; Caldarelli, Guido; Chessa, Alessandro; Scala, Antonio; Damiano, Alfonso
2015-01-01
The increasing attention to environmental issues is forcing the implementation of novel energy models based on renewable sources. This is fundamentally changing the configuration of energy management and is introducing new problems that are only partly understood. In particular, renewable energies introduce fluctuations which cause an increased request for conventional energy sources to balance energy requests at short notice. In order to develop an effective usage of low-carbon sources, such fluctuations must be understood and tamed. In this paper we present a microscopic model for the description and for the forecast of short time fluctuations related to renewable sources in order to estimate their effects on the electricity market. To account for the inter-dependencies in the energy market and the physical power dispatch network, we use a statistical mechanics approach to sample stochastic perturbations in the power system and an agent based approach for the prediction of the market players’ behavior. Our model is data-driven; it builds on one-day-ahead real market transactions in order to train agents’ behaviour and allows us to deduce the market share of different energy sources. We benchmarked our approach on the Italian market, finding a good accordance with real data. PMID:26335705
A differential memristive synapse circuit for on-line learning in neuromorphic computing systems
NASA Astrophysics Data System (ADS)
Nair, Manu V.; Muller, Lorenz K.; Indiveri, Giacomo
2017-12-01
Spike-based learning with memristive devices in neuromorphic computing architectures typically uses learning circuits that require overlapping pulses from pre- and post-synaptic nodes. This imposes severe constraints on the length of the pulses transmitted in the network, and on the network’s throughput. Furthermore, most of these circuits do not decouple the currents flowing through memristive devices from the one stimulating the target neuron. This can be a problem when using devices with high conductance values, because of the resulting large currents. In this paper, we propose a novel circuit that decouples the current produced by the memristive device from the one used to stimulate the post-synaptic neuron, by using a novel differential scheme based on the Gilbert normalizer circuit. We show how this circuit is useful for reducing the effect of variability in the memristive devices, and how it is ideally suited for spike-based learning mechanisms that do not require overlapping pre- and post-synaptic pulses. We demonstrate the features of the proposed synapse circuit with SPICE simulations, and validate its learning properties with high-level behavioral network simulations which use a stochastic gradient descent learning rule in two benchmark classification tasks.
Bahadori, Amir A; Sato, Tatsuhiko; Slaba, Tony C; Shavers, Mark R; Semones, Edward J; Van Baalen, Mary; Bolch, Wesley E
2013-10-21
NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.
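The effective dose used above as a risk measure is a tissue-weighted sum of organ dose equivalents, E = sum_T w_T * H_T. The sketch below uses placeholder organ doses and weighting factors; they are not ICRP values or results from this study.

```python
# Placeholder organ dose equivalents (mSv) and tissue weighting factors (must sum to 1).
organ_dose_equivalent = {"lung": 1.2, "stomach": 1.0, "remainder": 0.9}
tissue_weight = {"lung": 0.12, "stomach": 0.12, "remainder": 0.76}

effective_dose = sum(tissue_weight[t] * h for t, h in organ_dose_equivalent.items())
print(f"effective dose: {effective_dose:.3f} mSv")
```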
Simulations of magnetic nanoparticle Brownian motion
Reeves, Daniel B.; Weaver, John B.
2012-01-01
Magnetic nanoparticles are useful in many medical applications because they interact with biology on a cellular level thus allowing microenvironmental investigation. An enhanced understanding of the dynamics of magnetic particles may lead to advances in imaging directly in magnetic particle imaging or through enhanced MRI contrast and is essential for nanoparticle sensing as in magnetic spectroscopy of Brownian motion. Moreover, therapeutic techniques like hyperthermia require information about particle dynamics for effective, safe, and reliable use in the clinic. To that end, we have developed and validated a stochastic dynamical model of rotating Brownian nanoparticles from a Langevin equation approach. With no field, the relaxation time toward equilibrium matches Einstein's model of Brownian motion. In a static field, the equilibrium magnetization agrees with the Langevin function. For high frequency or low amplitude driving fields, behavior characteristic of the linearized Debye approximation is reproduced. In a higher field regime where magnetic saturation occurs, the magnetization and its harmonics compare well with the effective field model. On another level, the model has been benchmarked against experimental results, successfully demonstrating that harmonics of the magnetization carry enough information to infer environmental parameters like viscosity and temperature. PMID:23319830
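The static-field check mentioned above compares equilibrium magnetization with the Langevin function L(xi) = coth(xi) - 1/xi, where xi = mu*B/(k_B*T). A minimal sketch with hypothetical particle parameters:

```python
import numpy as np

def langevin(xi):
    """Normalised equilibrium magnetization of non-interacting magnetic moments."""
    return 1.0 / np.tanh(xi) - 1.0 / xi

kB = 1.380649e-23          # J/K
mu = 2.0e-19               # hypothetical particle magnetic moment, A*m^2
T = 300.0                  # K
for B in (0.001, 0.005, 0.02):   # applied field, tesla
    xi = mu * B / (kB * T)
    print(f"B = {B:.3f} T -> M/Ms = {langevin(xi):.3f}")
```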
Nicholson, Bethany; Siirola, John D.; Watson, Jean-Paul; ...
2017-12-20
We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.
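A minimal pyomo.dae sketch (assuming Pyomo is installed and an NLP solver such as IPOPT is available for the solve step): declare a differential equation on a continuous domain and let the framework discretize it automatically.

```python
from pyomo.environ import (ConcreteModel, Var, Constraint, Objective,
                           TransformationFactory, SolverFactory)
from pyomo.dae import ContinuousSet, DerivativeVar

m = ConcreteModel()
m.t = ContinuousSet(bounds=(0, 1))
m.x = Var(m.t)
m.dxdt = DerivativeVar(m.x, wrt=m.t)

# Simple ODE dx/dt = -x with initial condition x(0) = 1.
def _ode(m, t):
    return m.dxdt[t] == -m.x[t]
m.ode = Constraint(m.t, rule=_ode)
m.x[0].fix(1.0)
m.obj = Objective(expr=0)   # feasibility problem

# Transform the abstract model into a finite-dimensional algebraic problem.
TransformationFactory('dae.finite_difference').apply_to(m, nfe=20, scheme='BACKWARD')
SolverFactory('ipopt').solve(m)
print(f"x(1) ~ {m.x[1].value:.4f}  (exact e^-1 ~ 0.3679)")
```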
Molecular simulation of small Knudsen number flows
NASA Astrophysics Data System (ADS)
Fei, Fei; Fan, Jing
2012-11-01
The direct simulation Monte Carlo (DSMC) method is a powerful particle-based method for modeling gas flows. It works well for relatively large Knudsen (Kn) numbers, typically larger than 0.01, but quickly becomes computationally intensive as Kn decreases due to its time step and cell size limitations. An alternative approach was proposed to relax or remove these limitations, based on replacing pairwise collisions with a stochastic model corresponding to the Fokker-Planck equation [J. Comput. Phys., 229, 1077 (2010); J. Fluid Mech., 680, 574 (2011)]. Like the DSMC method, however, that approach suffers from statistical noise. To solve the problem, a diffusion-based information preservation (D-IP) method has been developed. The main idea is to track the motion of a simulated molecule from the diffusive standpoint, and to obtain the flow velocity and temperature by sampling and averaging the IP quantities. To validate the idea and the corresponding model, several benchmark problems with Kn ~ 10^-3 to 10^-4 have been investigated. It is shown that the IP calculations are not only accurate but also efficient, because they make it possible to use a time step and cell size over an order of magnitude larger than the mean collision time and mean free path, respectively.
Constant pressure and temperature discrete-time Langevin molecular dynamics
NASA Astrophysics Data System (ADS)
Grønbech-Jensen, Niels; Farago, Oded
2014-11-01
We present a new and improved method for simultaneous control of temperature and pressure in molecular dynamics simulations with periodic boundary conditions. The thermostat-barostat equations are built on our previously developed stochastic thermostat, which has been shown to provide correct statistical configurational sampling for any time step that yields stable trajectories. Here, we extend the method and develop a set of discrete-time equations of motion for both particle dynamics and system volume in order to seek pressure control that is insensitive to the choice of the numerical time step. The resulting method is simple, practical, and efficient. The method is demonstrated through direct numerical simulations of two characteristic model systems—a one-dimensional particle chain for which exact statistical results can be obtained and used as benchmarks, and a three-dimensional system of Lennard-Jones interacting particles simulated in both solid and liquid phases. The results, which are compared against the method of Kolb and Dünweg [J. Chem. Phys. 111, 4453 (1999)], show that the new method behaves according to the objective, namely that acquired statistical averages and fluctuations of configurational measures are accurate and robust against the chosen time step applied to the simulation.
A Distribution-Free Multi-Factorial Profiler for Harvesting Information from High-Density Screenings
Besseris, George J.
2013-01-01
Data screening is an indispensable phase in initiating the scientific discovery process. Fractional factorial designs offer quick and economical options for engineering highly-dense structured datasets. Maximum information content is harvested when a selected fractional factorial scheme is driven to saturation while data gathering is suppressed to no replication. A novel multi-factorial profiler is presented that allows screening of saturated-unreplicated designs by decomposing the examined response to its constituent contributions. Partial effects are sliced off systematically from the investigated response to form individual contrasts using simple robust measures. By isolating each time the disturbance attributed solely to a single controlling factor, the Wilcoxon-Mann-Whitney rank stochastics are employed to assign significance. We demonstrate that the proposed profiler possesses its own self-checking mechanism for detecting a potential influence due to fluctuations attributed to the remaining unexplainable error. Main benefits of the method are: 1) easy to grasp, 2) well-explained test-power properties, 3) distribution-free, 4) sparsity-free, 5) calibration-free, 6) simulation-free, 7) easy to implement, and 8) expanded usability to any type and size of multi-factorial screening designs. The method is elucidated with a benchmarked profiling effort for a water filtration process. PMID:24009744
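The rank-based significance assignment described above can be illustrated with a generic sketch: compare the responses attributed to a factor's low and high settings using the Wilcoxon-Mann-Whitney test. The data below are invented, and this is not the authors' full profiler.

```python
from scipy.stats import mannwhitneyu

# Hypothetical contrast: filtration responses at a factor's low vs high settings.
low_setting = [7.1, 6.8, 7.4, 7.0]
high_setting = [8.9, 9.2, 8.7, 9.5]

stat, p_value = mannwhitneyu(low_setting, high_setting, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")   # a small p suggests the factor is active
```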
Quantum neural network-based EEG filtering for a brain-computer interface.
Gandhi, Vaibhav; Prasad, Girijesh; Coyle, Damien; Behera, Laxmidhar; McGinnity, Thomas Martin
2014-02-01
A novel neural information processing architecture inspired by quantum mechanics and incorporating the well-known Schrodinger wave equation is proposed in this paper. The proposed architecture referred to as recurrent quantum neural network (RQNN) can characterize a nonstationary stochastic signal as time-varying wave packets. A robust unsupervised learning algorithm enables the RQNN to effectively capture the statistical behavior of the input signal and facilitates the estimation of signal embedded in noise with unknown characteristics. The results from a number of benchmark tests show that simple signals such as dc, staircase dc, and sinusoidal signals embedded within high noise can be accurately filtered and particle swarm optimization can be employed to select model parameters. The RQNN filtering procedure is applied in a two-class motor imagery-based brain-computer interface where the objective was to filter electroencephalogram (EEG) signals before feature extraction and classification to increase signal separability. A two-step inner-outer fivefold cross-validation approach is utilized to select the algorithm parameters subject-specifically for nine subjects. It is shown that the subject-specific RQNN EEG filtering significantly improves brain-computer interface performance compared to using only the raw EEG or Savitzky-Golay filtered EEG across multiple sessions.
Girsanov reweighting for path ensembles and Markov state models
NASA Astrophysics Data System (ADS)
Donati, L.; Hartmann, C.; Keller, B. G.
2017-06-01
The sensitivity of molecular dynamics on changes in the potential energy function plays an important role in understanding the dynamics and function of complex molecules. We present a method to obtain path ensemble averages of a perturbed dynamics from a set of paths generated by a reference dynamics. It is based on the concept of path probability measure and the Girsanov theorem, a result from stochastic analysis to estimate a change of measure of a path ensemble. Since Markov state models (MSMs) of the molecular dynamics can be formulated as a combined phase-space and path ensemble average, the method can be extended to reweight MSMs by combining it with a reweighting of the Boltzmann distribution. We demonstrate how to efficiently implement the Girsanov reweighting in a molecular dynamics simulation program by calculating parts of the reweighting factor "on the fly" during the simulation, and we benchmark the method on test systems ranging from a two-dimensional diffusion process and an artificial many-body system to alanine dipeptide and valine dipeptide in implicit and explicit water. The method can be used to study the sensitivity of molecular dynamics on external perturbations as well as to reweight trajectories generated by enhanced sampling schemes to the original dynamics.
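A rough one-dimensional sketch of the reweighting idea, assuming overdamped Langevin dynamics with constant diffusion D and unit mobility (not the paper's full MSM machinery): paths generated under a reference potential V are reweighted to a perturbed potential V + U using the Girsanov weight accumulated from the noise increments of the reference trajectory. The potentials and parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

D, dt, n_steps, n_paths = 1.0, 0.01, 200, 20000
dU = 0.5                      # derivative of the perturbation U(x) = 0.5*x (constant extra force)

x = np.zeros(n_paths)         # reference dynamics: dx = -x dt + sqrt(2D) dW  (V = x^2/2)
log_w = np.zeros(n_paths)     # log Girsanov weight accumulated along each path
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    # drift change (b' - b) = -dU; weight = exp( int (b'-b)/sigma dW - 1/2 int ((b'-b)/sigma)^2 dt )
    log_w += (-dU) / np.sqrt(2 * D) * dW - 0.5 * (dU ** 2) / (2 * D) * dt
    x += -x * dt + np.sqrt(2 * D) * dW

reweighted_mean = np.average(x, weights=np.exp(log_w))
print(f"reweighted <x(T)>: {reweighted_mean:.3f}")
print(f"target (perturbed OU mean): {-dU * (1 - np.exp(-n_steps * dt)):.3f}")
```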
Modeling of transport phenomena in tokamak plasmas with neural networks
Meneghini, Orso; Luna, Christopher J.; Smith, Sterling P.; ...
2014-06-23
A new transport model that uses neural networks (NNs) to yield electron and ion heat flux profiles has been developed. Given a set of local dimensionless plasma parameters similar to the ones that the highest fidelity models use, the NN model is able to efficiently and accurately predict the ion and electron heat transport profiles. As a benchmark, a NN was built, trained, and tested on data from the 2012 and 2013 DIII-D experimental campaigns. It is found that the NN can capture the experimental behavior over the majority of the plasma radius and across a broad range of plasma regimes. Although each radial location is calculated independently from the others, the heat flux profiles are smooth, suggesting that the solution found by the NN is a smooth function of the local input parameters. This result supports the evidence of a well-defined, non-stochastic relationship between the input parameters and the experimentally measured transport fluxes. Finally, the numerical efficiency of this method, requiring only a few CPU-μs per data point, makes it ideal for scenario development simulations and real-time plasma control.
Efficient Online Learning Algorithms Based on LSTM Neural Networks.
Ergen, Tolga; Kozat, Suleyman Serdar
2017-09-13
We investigate online nonlinear regression and introduce novel regression structures based on the long short term memory (LSTM) networks. For the introduced structures, we also provide highly efficient and effective online training methods. To train these novel LSTM-based structures, we put the underlying architecture in a state space form and introduce highly efficient and effective particle filtering (PF)-based updates. We also provide stochastic gradient descent and extended Kalman filter-based updates. Our PF-based training method guarantees convergence to the optimal parameter estimation in the mean square error sense provided that we have a sufficient number of particles and satisfy certain technical conditions. More importantly, we achieve this performance with a computational complexity in the order of the first-order gradient-based methods by controlling the number of particles. Since our approach is generic, we also introduce a gated recurrent unit (GRU)-based approach by directly replacing the LSTM architecture with the GRU architecture, where we demonstrate the superiority of our LSTM-based approach in the sequential prediction task via different real life data sets. In addition, the experimental results illustrate significant performance improvements achieved by the introduced algorithms with respect to the conventional methods over several different benchmark real life data sets.
Impact of uncertainties in free stream conditions on the aerodynamics of a rectangular cylinder
NASA Astrophysics Data System (ADS)
Mariotti, Alessandro; Shoeibi Omrani, Pejman; Witteveen, Jeroen; Salvetti, Maria Vittoria
2015-11-01
The BARC benchmark deals with the flow around a rectangular cylinder with chord-to-depth ratio equal to 5. This flow configuration is of practical interest for civil and industrial structures, and it is characterized by massively separated flow and unsteadiness. In a recent review of BARC results, significant dispersion was observed in both experimental and numerical predictions of some flow quantities, which are extremely sensitive to various uncertainties that may be present in experiments and simulations. Besides modeling and numerical errors, in simulations it is difficult to exactly reproduce the experimental conditions because of uncertainties in the set-up parameters, which sometimes cannot be exactly controlled or characterized. Probabilistic methods and URANS simulations are used to investigate the impact of the uncertainties in the following set-up parameters: the angle of incidence, the free stream longitudinal turbulence intensity and the turbulence length scale. Stochastic collocation is employed to perform the probabilistic propagation of the uncertainty. The discretization and modeling errors are estimated by repeating the same analysis for different grids and turbulence models. The results obtained for different assumed PDFs of the set-up parameters are also compared.
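A minimal stochastic-collocation sketch for a single normally distributed input (not the BARC set-up itself): propagate the uncertainty through a model by evaluating it at Gauss-Hermite nodes and combining the results with the quadrature weights. The response function and uncertainty level are invented.

```python
import numpy as np

def collocation_mean_var(model, mu, sigma, n_nodes=5):
    """Mean and variance of model(X) for X ~ N(mu, sigma^2) via Gauss-Hermite collocation."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    samples = model(mu + np.sqrt(2.0) * sigma * nodes)
    mean = np.sum(weights * samples) / np.sqrt(np.pi)
    second = np.sum(weights * samples ** 2) / np.sqrt(np.pi)
    return mean, second - mean ** 2

# Hypothetical response: a force coefficient as a smooth function of angle of incidence (deg).
model = lambda alpha: 1.0 + 0.05 * alpha + 0.02 * alpha ** 2
mean, var = collocation_mean_var(model, mu=0.0, sigma=0.5)
print(f"mean = {mean:.4f}, std = {np.sqrt(var):.4f}")
```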
Time-ordered product expansions for computational stochastic system biology.
Mjolsness, Eric
2013-06-01
The time-ordered product framework of quantum field theory can also be used to understand salient phenomena in stochastic biochemical networks. It is used here to derive Gillespie's stochastic simulation algorithm (SSA) for chemical reaction networks; consequently, the SSA can be interpreted in terms of Feynman diagrams. It is also used here to derive other, more general simulation and parameter-learning algorithms including simulation algorithms for networks of stochastic reaction-like processes operating on parameterized objects, and also hybrid stochastic reaction/differential equation models in which systems of ordinary differential equations evolve the parameters of objects that can also undergo stochastic reactions. Thus, the time-ordered product expansion can be used systematically to derive simulation and parameter-fitting algorithms for stochastic systems.
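For reference, a minimal implementation of the Gillespie SSA derived in the paper, applied to a hypothetical birth-death network (0 -> X at rate k1, X -> 0 at rate k2*x); the rate constants are invented.

```python
import numpy as np

def gillespie_birth_death(k1, k2, x0, t_max, rng):
    """Exact SSA trajectory for 0 -> X (rate k1) and X -> 0 (rate k2 * x)."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_max:
        a1, a2 = k1, k2 * x          # reaction propensities
        a0 = a1 + a2
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)           # waiting time to the next reaction
        if rng.random() < a1 / a0:               # choose which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

rng = np.random.default_rng(3)
times, states = gillespie_birth_death(k1=10.0, k2=0.1, x0=0, t_max=200.0, rng=rng)
print(f"late-time mean copy number: {states[len(states)//2:].mean():.1f}  (expected ~ k1/k2 = 100)")
```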
Liao, Hehuan; Krometis, Leigh-Anne H; Kline, Karen
2016-05-01
Within the United States, elevated levels of fecal indicator bacteria (FIB) remain the leading cause of surface water-quality impairments requiring formal remediation plans under the federal Clean Water Act's Total Maximum Daily Load (TMDL) program. The sufficiency of compliance with numerical FIB criteria as the targeted endpoint of TMDL remediation plans may be questionable given poor correlations between FIB and pathogenic microorganisms and varying degrees of risk associated with exposure to different fecal pollution sources (e.g. human vs animal). The present study linked a watershed-scale FIB fate and transport model with a dose-response model to continuously predict human health risks via quantitative microbial risk assessment (QMRA), for comparison to regulatory benchmarks. This process permitted comparison of risks associated with different fecal pollution sources in an impaired urban watershed in order to identify remediation priorities. Results indicate that total human illness risks were consistently higher than the regulatory benchmark of 36 illnesses/1000 people for the study watershed, even when the predicted FIB levels were in compliance with the Escherichia coli geometric mean standard of 126CFU/100mL. Sanitary sewer overflows were associated with the greatest risk of illness. This is of particular concern, given increasing indications that sewer leakage is ubiquitous in urban areas, yet not typically fully accounted for during TMDL development. Uncertainty analysis suggested the accuracy of risk estimates would be improved by more detailed knowledge of site-specific pathogen presence and densities. While previous applications of the QMRA process to impaired waterways have mostly focused on single storm events or hypothetical situations, the continuous modeling framework presented in this study could be integrated into long-term water quality management planning, especially the United States' TMDL program, providing greater clarity to watershed stakeholders and decision-makers. Copyright © 2016 Elsevier B.V. All rights reserved.
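A minimal QMRA sketch with hypothetical parameter values (not the study's calibrated dose-response model): convert an ingested pathogen dose into an illness probability with an exponential dose-response model and compare it against the 36-illnesses-per-1000 benchmark.

```python
import numpy as np

def illness_risk(dose, r=0.005, p_ill_given_inf=0.5):
    """Exponential dose-response: P(inf) = 1 - exp(-r*dose); illness = P(inf) * P(ill | inf)."""
    return (1.0 - np.exp(-r * dose)) * p_ill_given_inf

benchmark = 36.0 / 1000.0                      # illnesses per 1000 recreators
doses = np.array([1.0, 5.0, 20.0, 100.0])      # hypothetical ingested pathogen doses per event
for d, risk in zip(doses, illness_risk(doses)):
    flag = "exceeds" if risk > benchmark else "meets"
    print(f"dose {d:6.1f}: illness risk {risk:.4f} ({flag} the 0.036 benchmark)")
```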
Variational principles for stochastic fluid dynamics
Holm, Darryl D.
2015-01-01
This paper derives stochastic partial differential equations (SPDEs) for fluid dynamics from a stochastic variational principle (SVP). The paper proceeds by taking variations in the SVP to derive stochastic Stratonovich fluid equations; writing their Itô representation; and then investigating the properties of these stochastic fluid models in comparison with each other, and with the corresponding deterministic fluid models. The circulation properties of the stochastic Stratonovich fluid equations are found to closely mimic those of the deterministic ideal fluid models. As with deterministic ideal flows, motion along the stochastic Stratonovich paths also preserves the helicity of the vortex field lines in incompressible stochastic flows. However, these Stratonovich properties are not apparent in the equivalent Itô representation, because they are disguised by the quadratic covariation drift term arising in the Stratonovich to Itô transformation. This term is a geometric generalization of the quadratic covariation drift term already found for scalar densities in Stratonovich's famous 1966 paper. The paper also derives motion equations for two examples of stochastic geophysical fluid dynamics; namely, the Euler–Boussinesq and quasi-geostrophic approximations. PMID:27547083
Universal fuzzy integral sliding-mode controllers for stochastic nonlinear systems.
Gao, Qing; Liu, Lu; Feng, Gang; Wang, Yong
2014-12-01
In this paper, the universal integral sliding-mode controller problem for the general stochastic nonlinear systems modeled by Itô type stochastic differential equations is investigated. One of the main contributions is that a novel dynamic integral sliding mode control (DISMC) scheme is developed for stochastic nonlinear systems based on their stochastic T-S fuzzy approximation models. The key advantage of the proposed DISMC scheme is that two very restrictive assumptions in most existing ISMC approaches to stochastic fuzzy systems have been removed. Based on the stochastic Lyapunov theory, it is shown that the closed-loop control system trajectories are kept on the integral sliding surface almost surely since the initial time, and moreover, the stochastic stability of the sliding motion can be guaranteed in terms of linear matrix inequalities. Another main contribution is that the results of universal fuzzy integral sliding-mode controllers for two classes of stochastic nonlinear systems, along with constructive procedures to obtain the universal fuzzy integral sliding-mode controllers, are provided, respectively. Simulation results from an inverted pendulum example are presented to illustrate the advantages and effectiveness of the proposed approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W. II; Mabrey, J.B.
1994-07-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.
NASA Technical Reports Server (NTRS)
Kushner, H. J.
1972-01-01
The field of stochastic stability is surveyed, with emphasis on the invariance theorems and their potential application to systems with randomly varying coefficients. Some of the basic ideas are reviewed, which underlie the stochastic Liapunov function approach to stochastic stability. The invariance theorems are discussed in detail.
Liu, Meng; Wang, Ke
2010-06-07
A new single-species model disturbed by both white noise and colored noise in a polluted environment is developed and analyzed. Sufficient criteria for extinction, stochastic nonpersistence in the mean, stochastic weak persistence in the mean, stochastic strong persistence in the mean and stochastic permanence of the species are established. The threshold between stochastic weak persistence in the mean and extinction is obtained. The results show that both white and colored environmental noise have a significant effect on the survival of the species. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
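To make the simulation of such noise-perturbed population models concrete, the sketch below integrates a stochastic logistic equation with white noise on the per-capita growth rate by the Euler–Maruyama method; the model form and every parameter value are hypothetical stand-ins, not the paper's white-plus-colored-noise system.

    import numpy as np

    # Hypothetical stochastic logistic model:
    #   dX = X*(r - k*X) dt + sigma*X dW   (white noise on the per-capita growth rate)
    r, k, sigma = 0.5, 0.01, 0.3
    x0, T, dt = 10.0, 100.0, 0.01
    n = int(T / dt)

    rng = np.random.default_rng(0)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))                      # Brownian increment
        drift = x[i] * (r - k * x[i]) * dt
        x[i + 1] = max(x[i] + drift + sigma * x[i] * dw, 0.0)  # keep the population non-negative

    print("final population size:", x[-1])

Sample paths of this kind are what the persistence and extinction criteria in the abstract classify, depending on the relative sizes of the growth rate and the noise intensity.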
Stochastic differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sobczyk, K.
1990-01-01
This book provides a unified treatment of both regular (or random) and Itô stochastic differential equations. It focuses on solution methods, including some developed only recently. Applications are discussed; in particular, insight is given into both the mathematical structure and the most efficient solution methods (analytical as well as numerical). Starting from basic notions and results of the theory of stochastic processes and stochastic calculus (including Itô's stochastic integral), many principal mathematical problems and results related to stochastic differential equations are expounded here for the first time. Applications treated include those relating to road vehicles, earthquake excitations and offshore structures.
Raising Quality and Achievement. A College Guide to Benchmarking.
ERIC Educational Resources Information Center
Owen, Jane
This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…
Benchmarking in Education: Tech Prep, a Case in Point. IEE Brief Number 8.
ERIC Educational Resources Information Center
Inger, Morton
Benchmarking is a process by which organizations compare their practices, processes, and outcomes to standards of excellence in a systematic way. The benchmarking process entails the following essential steps: determining what to benchmark and establishing internal baseline data; identifying the benchmark; determining how that standard has been…
Benchmarks: The Development of a New Approach to Student Evaluation.
ERIC Educational Resources Information Center
Larter, Sylvia
The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…
NASA Astrophysics Data System (ADS)
Wang, Qingyun; Zhang, Honghui; Chen, Guanrong
2012-12-01
We study the effects of a heterogeneous neuron and of information transmission delay on stochastic resonance in scale-free neuronal networks. For this purpose, we introduce heterogeneity into the neuron with the highest degree. It is shown that, in the absence of delay, an intermediate noise level can optimally assist spike firings of collective neurons so as to achieve stochastic resonance on scale-free neuronal networks for small and intermediate values of αh, the parameter controlling the heterogeneity. Maxima of the stochastic resonance measure are enhanced as αh increases, which implies that heterogeneity can improve stochastic resonance. However, once αh exceeds a certain large value, no obvious stochastic resonance can be observed. If information transmission delay is introduced into the neuronal networks, stochastic resonance is dramatically affected. In particular, a suitably tuned information transmission delay can induce multiple stochastic resonance, manifested as well-expressed maxima in the stochastic resonance measure appearing at every multiple of one half of the subthreshold stimulus period. Furthermore, stochastic resonance at odd multiples of one half of the subthreshold stimulus period is subharmonic, as opposed to the case of even multiples. More interestingly, multiple stochastic resonance can also be improved by a suitably heterogeneous neuron. The presented results provide insight into the effects of neuronal heterogeneity and information transmission delay in realistic neuronal networks.
Ultimate open pit stochastic optimization
NASA Astrophysics Data System (ADS)
Marcotte, Denis; Caron, Josiane
2013-02-01
Classical open pit optimization (maximum closure problem) is performed on block estimates, without directly considering the uncertainty of the block grades. We propose an alternative approach of stochastic optimization. The stochastic optimization takes as the optimal pit the one computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than those of the classical or simulated pits. The main factor controlling the relative gain of stochastic optimization compared to the classical approach and the simulated pit is shown to be the information level as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
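The difference between optimizing on expected grades and on expected profits can be seen in a toy calculation: block profit is a nonlinear (truncated) function of grade, so the average profit over conditional simulations generally differs from the profit of the average grade. The sketch below uses purely hypothetical grade simulations and cost figures.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical conditional simulations of one block's grade
    grades = rng.lognormal(mean=0.0, sigma=0.8, size=1000)

    price, cost_treat, cost_mine = 40.0, 25.0, 10.0      # illustrative economics
    def profit(g):
        # Mine the block either way; send it to treatment only when treating pays.
        return np.maximum(price * g - cost_treat, 0.0) - cost_mine

    profit_of_mean_grade = profit(grades.mean())          # input to the classical pit
    mean_of_profits = profit(grades).mean()               # input to the stochastic pit

    print(profit_of_mean_grade, mean_of_profits)          # the two block values differ

Because the treat-or-not decision makes profit convex in grade, the expected profit is at least the profit of the expected grade, which is one way to see why the stochastic pit tends to include more material.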
NASA Astrophysics Data System (ADS)
Fiore, Andrew M.; Swan, James W.
2018-01-01
Brownian Dynamics simulations are an important tool for modeling the dynamics of soft matter. However, accurate and rapid computations of the hydrodynamic interactions between suspended, microscopic components in a soft material are a significant computational challenge. Here, we present a new method for Brownian dynamics simulations of suspended colloidal scale particles such as colloids, polymers, surfactants, and proteins subject to a particular and important class of hydrodynamic constraints. The total computational cost of the algorithm is practically linear with the number of particles modeled and can be further optimized when the characteristic mass fractal dimension of the suspended particles is known. Specifically, we consider the so-called "stresslet" constraint for which suspended particles resist local deformation. This acts to produce a symmetric force dipole in the fluid and imparts rigidity to the particles. The presented method is an extension of the recently reported positively split formulation for Ewald summation of the Rotne-Prager-Yamakawa mobility tensor to higher order terms in the hydrodynamic scattering series accounting for force dipoles [A. M. Fiore et al., J. Chem. Phys. 146(12), 124116 (2017)]. The hydrodynamic mobility tensor, which is proportional to the covariance of particle Brownian displacements, is constructed as an Ewald sum in a novel way which guarantees that the real-space and wave-space contributions to the sum are independently symmetric and positive-definite for all possible particle configurations. This property of the Ewald sum is leveraged to rapidly sample the Brownian displacements from a superposition of statistically independent processes with the wave-space and real-space contributions as respective covariances. The cost of computing the Brownian displacements in this way is comparable to the cost of computing the deterministic displacements. The addition of a stresslet constraint to the over-damped particle equations of motion leads to a stochastic differential algebraic equation (SDAE) of index 1, which is integrated forward in time using a mid-point integration scheme that implicitly produces stochastic displacements consistent with the fluctuation-dissipation theorem for the constrained system. Calculations for hard sphere dispersions are illustrated and used to explore the performance of the algorithm. An open source, high-performance implementation on graphics processing units capable of dynamic simulations of millions of particles and integrated with the software package HOOMD-blue is used for benchmarking and made freely available in the supplementary material (ftp://ftp.aip.org/epaps/journ_chem_phys/E-JCPSA6-148-012805)
Stochastic effects in a seasonally forced epidemic model
NASA Astrophysics Data System (ADS)
Rozhnova, G.; Nunes, A.
2010-10-01
The interplay of seasonality, the system’s nonlinearities and intrinsic stochasticity, is studied for a seasonally forced susceptible-exposed-infective-recovered stochastic model. The model is explored in the parameter region that corresponds to childhood infectious diseases such as measles. The power spectrum of the stochastic fluctuations around the attractors of the deterministic system that describes the model in the thermodynamic limit is computed analytically and validated by stochastic simulations for large system sizes. Size effects are studied through additional simulations. Other effects such as switching between coexisting attractors induced by stochasticity often mentioned in the literature as playing an important role in the dynamics of childhood infectious diseases are also investigated. The main conclusion is that stochastic amplification, rather than these effects, is the key ingredient to understand the observed incidence patterns.
HS06 Benchmark for an ARM Server
NASA Astrophysics Data System (ADS)
Kluth, Stefan
2014-06-01
We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as its operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find a significant influence of the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.
The relationship between stochastic and deterministic quasi-steady state approximations.
Kim, Jae Kyoung; Josić, Krešimir; Bennett, Matthew R
2015-11-23
The quasi steady-state approximation (QSSA) is frequently used to reduce deterministic models of biochemical networks. The resulting equations provide a simplified description of the network in terms of non-elementary reaction functions (e.g. Hill functions). Such deterministic reductions are frequently a basis for heuristic stochastic models in which non-elementary reaction functions are used to define reaction propensities. Despite their popularity, it remains unclear when such stochastic reductions are valid. It is frequently assumed that the stochastic reduction can be trusted whenever its deterministic counterpart is accurate. However, a number of recent examples show that this is not necessarily the case. Here we explain the origin of these discrepancies, and demonstrate a clear relationship between the accuracy of the deterministic and the stochastic QSSA for examples widely used in biological systems. With an analysis of a two-state promoter model, and numerical simulations for a variety of other models, we find that the stochastic QSSA is accurate whenever its deterministic counterpart provides an accurate approximation over a range of initial conditions which cover the likely fluctuations from the quasi steady-state (QSS). We conjecture that this relationship provides a simple and computationally inexpensive way to test the accuracy of reduced stochastic models using deterministic simulations. The stochastic QSSA is one of the most popular multi-scale stochastic simulation methods. While the use of QSSA, and the resulting non-elementary functions has been justified in the deterministic case, it is not clear when their stochastic counterparts are accurate. In this study, we show how the accuracy of the stochastic QSSA can be tested using their deterministic counterparts providing a concrete method to test when non-elementary rate functions can be used in stochastic simulations.
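A minimal sketch of the heuristic reduction discussed above is a Gillespie simulation in which a non-elementary Hill function, obtained from a deterministic QSSA, is reused directly as a reaction propensity; the two-reaction model and its parameters are hypothetical.

    import numpy as np

    rng = np.random.default_rng(2)

    # Reduced model: protein P is produced at a Hill-function rate (self-repression)
    # and degraded linearly; the Hill propensity is the non-elementary rate function.
    beta, K, nH, gamma = 20.0, 10.0, 2.0, 0.1      # hypothetical parameters
    P, t, t_end = 0, 0.0, 500.0

    while t < t_end:
        a1 = beta * K**nH / (K**nH + P**nH)        # production propensity (Hill function)
        a2 = gamma * P                             # degradation propensity
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)             # time to the next reaction
        if rng.random() < a1 / a0:
            P += 1
        else:
            P -= 1

    print("copy number near t =", t_end, ":", P)

Whether propensities of this non-elementary form can be trusted is exactly the question the abstract addresses: the criterion proposed is that the corresponding deterministic QSSA should be accurate over the range of states visited by the fluctuations.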
PMLB: a large benchmark suite for machine learning evaluation and comparison.
Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H
2017-01-01
The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
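Assuming the project's accompanying pmlb Python package and its fetch_data and classification_dataset_names helpers (as described in the resource's documentation; an internet connection is needed to download datasets), a typical evaluation loop might look like this sketch.

    # A minimal sketch; `pip install pmlb scikit-learn` is assumed.
    from pmlb import fetch_data, classification_dataset_names
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    print(len(classification_dataset_names), "classification benchmarks available")

    df = fetch_data("iris")                       # returns a pandas DataFrame
    X, y = df.drop(columns="target"), df["target"]
    score = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print("5-fold CV accuracy on iris:", round(score, 3))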
The General Concept of Benchmarking and Its Application in Higher Education in Europe
ERIC Educational Resources Information Center
Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna
2009-01-01
The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…
A nanostructured surface increases friction exponentially at the solid-gas interface.
Phani, Arindam; Putkaradze, Vakhtang; Hawk, John E; Prashanthi, Kovur; Thundat, Thomas
2016-09-06
According to Stokes' law, a moving solid surface experiences viscous drag that is linearly related to its velocity and the viscosity of the medium. The viscous interactions result in dissipation that is known to scale as the square root of the kinematic viscosity times the density of the gas. We observed that when an oscillating surface is modified with nanostructures, the experimentally measured dissipation shows an exponential dependence on kinematic viscosity. The surface nanostructures alter solid-gas interplay greatly, amplifying the dissipation response exponentially for even minute variations in viscosity. A nanostructured resonator thus allows discrimination within an otherwise narrow range of gaseous viscosities, making dissipation an ideal parameter for the analysis of gaseous media. We attribute the observed exponential enhancement to the stochastic nature of the interactions of many coupled nanostructures with the gas medium.
A nanostructured surface increases friction exponentially at the solid-gas interface
NASA Astrophysics Data System (ADS)
Phani, Arindam; Putkaradze, Vakhtang; Hawk, John E.; Prashanthi, Kovur; Thundat, Thomas
2016-09-01
According to Stokes’ law, a moving solid surface experiences viscous drag that is linearly related to its velocity and the viscosity of the medium. The viscous interactions result in dissipation that is known to scale as the square root of the kinematic viscosity times the density of the gas. We observed that when an oscillating surface is modified with nanostructures, the experimentally measured dissipation shows an exponential dependence on kinematic viscosity. The surface nanostructures alter solid-gas interplay greatly, amplifying the dissipation response exponentially for even minute variations in viscosity. A nanostructured resonator thus allows discrimination within an otherwise narrow range of gaseous viscosities, making dissipation an ideal parameter for the analysis of gaseous media. We attribute the observed exponential enhancement to the stochastic nature of the interactions of many coupled nanostructures with the gas medium.
Influence of nonlinear interactions on the development of instability in hydrodynamic wave systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romanova, N. N.; Chkhetiani, O. G., E-mail: ochkheti@mx.iki.rssi.ru, E-mail: ochkheti@gmail.ru; Yakushkin, I. G.
2016-05-15
The problem of the development of shear instability in a three-layer medium simulating the flow of a stratified incompressible fluid is considered. The hydrodynamic equations are solved by expanding the Hamiltonian in a small parameter. The equations for three interacting waves, one of which is unstable, have been derived and solved numerically. The three-wave interaction is shown to stabilize the instability. Various regimes of the system’s dynamics, including the stochastic ones dependent on one of the invariants in the problem, can arise in this case. It is pointed out that the instability development scenario considered differs from the previously considered scenario of a different type, where the three-wave interaction does not stabilize the instability. The interaction of wave packets is considered briefly.
Inverse random source scattering for the Helmholtz equation in inhomogeneous media
NASA Astrophysics Data System (ADS)
Li, Ming; Chen, Chuchu; Li, Peijun
2018-01-01
This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
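The regularized block Kaczmarz solver itself is not spelled out in the abstract; as a rough illustration of the idea, the sketch below applies a plain row-action Kaczmarz iteration with an under-relaxation parameter (acting as a mild regularizer) to a discretized, noisy linear system. The test matrix, noise level and relaxation value are hypothetical.

    import numpy as np

    def kaczmarz(A, b, sweeps=50, relax=0.5):
        """Row-action Kaczmarz: x <- x + relax*(b_i - a_i.x)/||a_i||^2 * a_i."""
        m, n = A.shape
        x = np.zeros(n)
        row_norms = (A * A).sum(axis=1)
        for _ in range(sweeps):
            for i in range(m):
                if row_norms[i] > 0.0:
                    x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        return x

    rng = np.random.default_rng(3)
    A = rng.normal(size=(80, 40)) @ np.diag(np.linspace(1.0, 1e-2, 40))  # mildly ill-conditioned
    x_true = rng.normal(size=40)
    b = A @ x_true + 1e-3 * rng.normal(size=80)                          # noisy data
    x_rec = kaczmarz(A, b, sweeps=100, relax=0.3)
    print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))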
Stochastic Multi-Timescale Power System Operations With Variable Wind Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hongyu; Krad, Ibrahim; Florita, Anthony
This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated together such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models to maintain the computational tractability of the proposed models. Comparative case studies with deterministic approaches are conducted in low wind and high wind penetration scenarios to highlight the advantages of the proposed methodology, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.
Stochastic Parametrisations and Regime Behaviour of Atmospheric Models
NASA Astrophysics Data System (ADS)
Arnold, Hannah; Moroz, Irene; Palmer, Tim
2013-04-01
The presence of regimes is a characteristic of non-linear, chaotic systems (Lorenz, 2006). In the atmosphere, regimes emerge as familiar circulation patterns such as the El-Nino Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO) and Scandinavian Blocking events. In recent years there has been much interest in the problem of identifying and studying atmospheric regimes (Solomon et al, 2007). In particular, how do these regimes respond to an external forcing such as anthropogenic greenhouse gas emissions? The importance of regimes in observed trends over the past 50-100 years indicates that in order to predict anthropogenic climate change, our climate models must be able to represent accurately natural circulation regimes, their statistics and variability. It is well established that representing model uncertainty as well as initial condition uncertainty is important for reliable weather forecasts (Palmer, 2001). In particular, stochastic parametrisation schemes have been shown to improve the skill of weather forecast models (e.g. Berner et al., 2009; Frenkel et al., 2012; Palmer et al., 2009). It is possible that including stochastic physics as a representation of model uncertainty could also be beneficial in climate modelling, enabling the simulator to explore larger regions of the climate attractor including other flow regimes. An alternative representation of model uncertainty is a perturbed parameter scheme, whereby physical parameters in subgrid parametrisation schemes are perturbed about their optimal value. Perturbing parameters gives a greater control over the ensemble than multi-model or multiparametrisation ensembles, and has been used as a representation of model uncertainty in climate prediction (Stainforth et al., 2005; Rougier et al., 2009). We investigate the effect of including representations of model uncertainty on the regime behaviour of a simulator. A simple chaotic model of the atmosphere, the Lorenz '96 system, is used to study the predictability of regime changes (Lorenz 1996, 2006). Three types of models are considered: a deterministic parametrisation scheme, stochastic parametrisation schemes with additive or multiplicative noise, and a perturbed parameter ensemble. Each forecasting scheme was tested on its ability to reproduce the attractor of the full system, defined in a reduced space based on EOF decomposition. None of the forecast models accurately capture the less common regime, though a significant improvement is observed over the deterministic parametrisation when a temporally correlated stochastic parametrisation is used. The attractor for the perturbed parameter ensemble improves on that forecast by the deterministic or white additive schemes, showing a distinct peak in the attractor corresponding to the less common regime. However, the 40 constituent members of the perturbed parameter ensemble each differ greatly from the true attractor, with many only showing one dominant regime with very rare transitions. These results indicate that perturbed parameter ensembles must be carefully analysed as individual members may have very different characteristics to the ensemble mean and to the true system being modelled. On the other hand, the stochastic parametrisation schemes tested performed well, improving the simulated climate, and motivating the development of a stochastic earth-system simulator for use in climate prediction. J. Berner, G. J. Shutts, M. Leutbecher, and T. N. Palmer. 
A spectral stochastic kinetic energy backscatter scheme and its impact on flow dependent predictability in the ECMWF ensemble prediction system. J. Atmos. Sci., 66(3):603-626, 2009. Y. Frenkel, A. J. Majda, and B. Khouider. Using the stochastic multicloud model to improve tropical convective parametrisation: A paradigm example. J. Atmos. Sci., 69(3):1080-1105, 2012. E. N. Lorenz. Predictability: a problem partly solved. In Proceedings, Seminar on Predictability, 4-8 September 1995, volume 1, pages 1-18, Shinfield Park, Reading, 1996. ECMWF. E. N. Lorenz. Regimes in simple systems. J. Atmos. Sci., 63(8):2056-2073, 2006. T. N Palmer. A nonlinear dynamical perspective on model error: A proposal for non-local stochastic-dynamic parametrisation in weather and climate prediction models. Q. J. Roy. Meteor. Soc., 127(572):279-304, 2001. T. N. Palmer, R. Buizza, F. Doblas-Reyes, T. Jung, M. Leutbecher, G. J. Shutts, M. Steinheimer, and A. Weisheimer. Stochastic parametrization and model uncertainty. Technical Report 598, European Centre for Medium-Range Weather Forecasts, 2009. J. Rougier, D. M. H. Sexton, J. M. Murphy, and D. Stainforth. Analyzing the climate sensitivity of the HadSM3 climate model using ensembles from different but related experiments. J. Climate, 22:3540-3557, 2009. S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K. B. Averyt, Tignor M., and H. L. Miller. Climate models and their evaluation. In Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge, United Kingdom and New York, NY, USA, 2007. Cambridge University Press. D. A Stainforth, T. Aina, C. Christensen, M. Collins, N. Faull, D. J. Frame, J. A. Kettleborough, S. Knight, A. Martin, J. M. Murphy, C. Piani, D. Sexton, L. A. Smith, R. A Spicer, A. J. Thorpe, and M. R Allen. Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature, 433(7024):403-406, 2005.
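For orientation, a minimal sketch of the single-scale Lorenz '96 system with a generic additive AR(1) stochastic term standing in for the unresolved tendencies is given below; it is not any of the specific parametrisation schemes compared in the study, and all parameter values are hypothetical.

    import numpy as np

    K, F, dt, nsteps = 40, 8.0, 0.001, 50000
    phi, sigma_e = 0.95, 0.3                 # AR(1) parameters of the additive noise

    def l96_tendency(x):
        # dX_k/dt = (X_{k+1} - X_{k-2}) X_{k-1} - X_k + F, with cyclic indices
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

    rng = np.random.default_rng(4)
    x = F + 0.01 * rng.normal(size=K)        # start near the unstable fixed point
    e = np.zeros(K)                          # additive stochastic parametrisation term
    for _ in range(nsteps):
        e = phi * e + sigma_e * np.sqrt(1.0 - phi**2) * rng.normal(size=K)
        x = x + dt * (l96_tendency(x) + e)   # forward Euler, for brevity only

    print("climatological mean and std:", x.mean(), x.std())

Attractor statistics such as the EOF-based projections mentioned above would be accumulated over many such integrations rather than read off a single end state.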
Benchmarking reference services: an introduction.
Marshall, J G; Buchanan, H S
1995-01-01
Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.
Stochastic computing with biomolecular automata
Adar, Rivka; Benenson, Yaakov; Linshiz, Gregory; Rosner, Amit; Tishby, Naftali; Shapiro, Ehud
2004-01-01
Stochastic computing has a broad range of applications, yet electronic computers realize its basic step, stochastic choice between alternative computation paths, in a cumbersome way. Biomolecular computers use a different computational paradigm and hence afford novel designs. We constructed a stochastic molecular automaton in which stochastic choice is realized by means of competition between alternative biochemical pathways, and choice probabilities are programmed by the relative molar concentrations of the software molecules coding for the alternatives. Programmable and autonomous stochastic molecular automata have been shown to perform direct analysis of disease-related molecular indicators in vitro and may have the potential to provide in situ medical diagnosis and cure. PMID:15215499
Stochastic Galerkin methods for the steady-state Navier–Stokes equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sousedík, Bedřich, E-mail: sousedik@umbc.edu; Elman, Howard C., E-mail: elman@cs.umd.edu
2016-07-01
We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.
Analysis of a novel stochastic SIRS epidemic model with two different saturated incidence rates
NASA Astrophysics Data System (ADS)
Chang, Zhengbo; Meng, Xinzhu; Lu, Xiao
2017-04-01
This paper presents a stochastic SIRS epidemic model with two different nonlinear incidence rates and a double-epidemic asymmetrical hypothesis, and develops a mathematical method to obtain the threshold of the stochastic epidemic model. We first investigate the boundedness and extinction of the stochastic system. Furthermore, we use Itô's formula, the comparison theorem and some new inequality techniques for stochastic differential systems to discuss the persistence in mean of the two diseases in three cases. The results indicate that stochastic fluctuations can suppress disease outbreaks. Finally, numerical simulations with different noise disturbance coefficients are carried out to illustrate the theoretical results obtained.
Stochastic Galerkin methods for the steady-state Navier–Stokes equations
Sousedík, Bedřich; Elman, Howard C.
2016-04-12
We study the steady-state Navier–Stokes equations in the context of stochastic finite element discretizations. Specifically, we assume that the viscosity is a random field given in the form of a generalized polynomial chaos expansion. For the resulting stochastic problem, we formulate the model and linearization schemes using Picard and Newton iterations in the framework of the stochastic Galerkin method, and we explore properties of the resulting stochastic solutions. We also propose a preconditioner for solving the linear systems of equations arising at each step of the stochastic (Galerkin) nonlinear iteration and demonstrate its effectiveness for solving a set of benchmark problems.
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
NASA Astrophysics Data System (ADS)
Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing
2014-09-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail, the infective state persists and the endemic equilibrium is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infectives disappear and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending it to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. When the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
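A single-group caricature of the deterministic/stochastic comparison described above can be obtained by integrating an SIR model with demography alongside an Euler–Maruyama discretization of the same model with multiplicative noise on transmission; the reduction to one group and all rate values below are hypothetical, not the paper's multi-group MSIR system.

    import numpy as np

    beta, gamma, mu = 0.3, 0.1, 0.01           # hypothetical transmission, recovery, birth/death rates
    R0 = beta / (gamma + mu)                   # basic reproduction number of the deterministic model
    sigma = 0.05                               # intensity of the noise on transmission

    dt, n = 0.1, 5000
    S, I = 0.99, 0.01                          # stochastic state (fractions of the population)
    Sd, Id = S, I                              # deterministic copy
    rng = np.random.default_rng(5)
    for _ in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))
        Sd, Id = (Sd + (mu - mu * Sd - beta * Sd * Id) * dt,
                  Id + (beta * Sd * Id - (gamma + mu) * Id) * dt)
        inc = beta * S * I
        S, I = (max(S + (mu - mu * S - inc) * dt - sigma * S * I * dw, 0.0),
                max(I + (inc - (gamma + mu) * I) * dt + sigma * S * I * dw, 0.0))

    print("R0 =", round(R0, 2), "| deterministic I:", round(Id, 4), "| stochastic I:", round(I, 4))

With these illustrative rates R0 is above 1, so both trajectories settle near the endemic level; increasing sigma shows how noise perturbs that equilibrium.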
Life-space foam: A medium for motivational and cognitive dynamics
NASA Astrophysics Data System (ADS)
Ivancevic, Vladimir; Aidman, Eugene
2007-08-01
General stochastic dynamics, developed in a framework of Feynman path integrals, have been applied to Lewinian field-theoretic psychodynamics [K. Lewin, Field Theory in Social Science, University of Chicago Press, Chicago, 1951; K. Lewin, Resolving Social Conflicts, and, Field Theory in Social Science, American Psychological Association, Washington, 1997; M. Gold, A Kurt Lewin Reader, the Complete Social Scientist, American Psychological Association, Washington, 1999], resulting in the development of a new concept of life-space foam (LSF) as a natural medium for motivational and cognitive psychodynamics. According to LSF formalisms, the classic Lewinian life space can be macroscopically represented as a smooth manifold with steady force fields and behavioral paths, while at the microscopic level it is more realistically represented as a collection of wildly fluctuating force fields, (loco)motion paths and local geometries (and topologies with holes). A set of least-action principles is used to model the smoothness of global, macro-level LSF paths, fields and geometry. To model the corresponding local, micro-level LSF structures, an adaptive path integral is used, defining a multi-phase and multi-path (multi-field and multi-geometry) transition process from intention to goal-driven action. Application examples of this new approach include (but are not limited to) information processing, motivational fatigue, learning, memory and decision making.
NASA Astrophysics Data System (ADS)
Lau, Chun Sing
This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising the accuracy. The multi-stage scheme further allows the approximate results to systematically converge to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread options. Since the final formula is in closed form, all the hedging parameters can also be derived in closed form. Numerical examples demonstrate that the pricing and hedging errors are in general less than 1% relative to the benchmark prices obtained by numerical integration or Monte Carlo simulation. By exploiting an explicit relationship between the option price and the underlying probability distribution, we further derive an approximate distribution function for the general basket-spread variable. It can be used to approximate the transition probability distribution of any linear combination of correlated GBMs. Finally, an implicit perturbation is applied to reduce the pricing errors by factors of up to 100. When compared against the existing methods, the basket-spread option formula coupled with the implicit perturbation turns out to be one of the most robust and accurate approximation methods.
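The multi-asset generalization is specific to the thesis, but the two-asset Kirk (1995) approximation it builds on is standard and compact; a sketch follows (no dividends assumed, hypothetical market data).

    from math import log, sqrt, exp
    from statistics import NormalDist

    N = NormalDist().cdf

    def kirk_spread_call(S1, S2, K, T, r, sig1, sig2, rho):
        """Kirk (1995) approximation for a European call on S1 - S2 with strike K."""
        F1, F2 = S1 * exp(r * T), S2 * exp(r * T)        # forwards, no dividends assumed
        w = F2 / (F2 + K)
        sig = sqrt(sig1**2 - 2.0 * rho * sig1 * sig2 * w + (sig2 * w)**2)
        d1 = (log(F1 / (F2 + K)) + 0.5 * sig**2 * T) / (sig * sqrt(T))
        d2 = d1 - sig * sqrt(T)
        return exp(-r * T) * (F1 * N(d1) - (F2 + K) * N(d2))

    # Hypothetical market data
    print(kirk_spread_call(S1=110.0, S2=100.0, K=5.0, T=1.0, r=0.05,
                           sig1=0.30, sig2=0.25, rho=0.6))

The basket-spread formula in the thesis replaces S1 and S2 by weighted combinations of assets while keeping this Black–Scholes-like structure, which is why its hedging parameters remain available in closed form.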
Stochasticity in materials structure, properties, and processing—A review
NASA Astrophysics Data System (ADS)
Hull, Robert; Keblinski, Pawel; Lewis, Dan; Maniatty, Antoinette; Meunier, Vincent; Oberai, Assad A.; Picu, Catalin R.; Samuel, Johnson; Shephard, Mark S.; Tomozawa, Minoru; Vashishth, Deepak; Zhang, Shengbai
2018-03-01
We review the concept of stochasticity—i.e., unpredictable or uncontrolled fluctuations in structure, chemistry, or kinetic processes—in materials. We first define six broad classes of stochasticity: equilibrium (thermodynamic) fluctuations; structural/compositional fluctuations; kinetic fluctuations; frustration and degeneracy; imprecision in measurements; and stochasticity in modeling and simulation. In this review, we focus on the first four classes that are inherent to materials phenomena. We next develop a mathematical framework for describing materials stochasticity and then show how it can be broadly applied to these four materials-related stochastic classes. In subsequent sections, we describe structural and compositional fluctuations at small length scales that modify material properties and behavior at larger length scales; systems with engineered fluctuations, concentrating primarily on composite materials; systems in which stochasticity is developed through nucleation and kinetic phenomena; and configurations in which constraints in a given system prevent it from attaining its ground state and cause it to attain several, equally likely (degenerate) states. We next describe how stochasticity in these processes results in variations in physical properties and how these variations are then accentuated by—or amplify—stochasticity in processing and manufacturing procedures. In summary, the origins of materials stochasticity, the degree to which it can be predicted and/or controlled, and the possibility of using stochastic descriptions of materials structure, properties, and processing as a new degree of freedom in materials design are described.
Taking the Battle Upstream: Towards a Benchmarking Role for NATO
2012-09-01
Document front matter recovered from this report includes list-of-figures entries such as "Figure 8. World Bank Benchmarking Work on Quality of Governance" and truncated passages on a benchmarking theory for the public sector and on a McKinsey comparison across the Ministries of Defense with which it works.
ERIC Educational Resources Information Center
Kent State Univ., OH. Ohio Literacy Resource Center.
This document is intended to show the relationship between Ohio's Standards and Competencies, Equipped for the Future's (EFF's) Standards and Components of Performance, and Ohio's Revised Benchmarks. The document is divided into three parts, with Part 1 covering mathematics instruction, Part 2 covering reading instruction, and Part 3 covering…
NASA Astrophysics Data System (ADS)
Miller, Steven
1998-03-01
A generic stochastic method is presented that rapidly evaluates numerical bulk flux solutions to the one-dimensional integrodifferential radiative transport equation for coherent irradiance of optically anisotropic suspensions of nonspheroidal bioparticles, such as blood. As Fermat rays or geodesics enter the suspension, they evolve into a bundle of random paths or trajectories due to scattering by the suspended bioparticles. Overall, this can be interpreted as a bundle of Markov trajectories traced out by a "gas" of Brownian-like point photons being scattered and absorbed by the homogeneous distribution of uncorrelated cells in suspension. By considering the cumulative vectorial intersections of a statistical bundle of random trajectories through sets of interior data planes in the space containing the medium, the effective equivalent information content and behavior of the (generally unknown) analytical flux solutions of the radiative transfer equation rapidly emerge. The fluxes match the analytical diffuse flux solutions in the diffusion limit, which verifies the accuracy of the algorithm. The method is not constrained by the diffusion limit and gives correct solutions for conditions where diffuse solutions are not viable. Unlike conventional Monte Carlo and numerical techniques adapted from neutron transport or nuclear reactor problems that compute scalar quantities, this vectorial technique is fast, easily implemented, adaptable, and viable for a wide class of biophotonic scenarios. By comparison, other analytical or numerical techniques generally become unwieldy, lack viability, or are more difficult to utilize and adapt. Illustrative calculations are presented for blood media at monochromatic wavelengths in the visible spectrum.
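The vectorial geodesic-bundle method itself is not reproduced here, but the picture of photon trajectories as random walks can be illustrated with a conventional weighted Monte Carlo sketch for a homogeneous slab with isotropic scattering; the optical coefficients and slab depth are hypothetical, and real blood would require a strongly anisotropic phase function.

    import numpy as np

    rng = np.random.default_rng(6)

    mu_s, mu_a, L = 10.0, 0.1, 1.0          # scattering/absorption coefficients (1/mm), slab depth (mm)
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    n_photons = 5000

    transmitted = reflected = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0            # depth, direction cosine, statistical weight
        while True:
            z += uz * rng.exponential(1.0 / mu_t)   # free path to the next interaction
            if z < 0.0:
                reflected += w
                break
            if z > L:
                transmitted += w
                break
            w *= albedo                     # deposit the absorbed fraction of the weight
            if w < 1e-3:                    # Russian roulette termination of weak photons
                if rng.random() > 0.1:
                    break
                w /= 0.1
            uz = 2.0 * rng.random() - 1.0   # isotropic scattering: draw a new direction cosine

    print("diffuse transmittance ≈", transmitted / n_photons,
          "  reflectance ≈", reflected / n_photons)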
Lohrasebi, A; Mohamadi, S; Fadaie, S; Rafii-Tabar, H
2012-07-01
We model the dynamics of the F(0) component of the F(0)F(1)-ATPase mitochondrion-based nano-motor operating in a stochastically-fluctuating medium that represents the intracellular environment. The stochastic dynamics are modeled via a Langevin equation of motion in which fluctuations are treated as white noise. We have investigated the influence of an applied alternating electric field on the rotary motion of the F(0) rotor in such an environment. The exposure to the field induces a temperature rise in the mitochondrion's membrane, within which the F(0) is embedded. The external field also induces an electric potential that promotes a change in the mitochondrion's transmembrane potential (TMP). Both the induced temperature rise and the change in TMP contribute to a change in the dynamics of the F(0). We have found that for external fields in the radio frequency (RF) range, normally present in the environment and encountered by biological systems, the contribution of the induced thermal effects, relative to that of the induced TMP, to the dynamics of the F(0) is more significant. The changes in the dynamics of the F(0) part affect the frequency of the rotary motion of the F(0)F(1)-ATPase protein motor which, in turn, affects the production rate of the ATP molecules. Copyright © 2011 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Simulations of sooting turbulent jet flames using a hybrid flamelet/stochastic Eulerian field method
NASA Astrophysics Data System (ADS)
Consalvi, Jean-Louis; Nmira, Fatiha; Burot, Daria
2016-03-01
The stochastic Eulerian field method is applied to simulate 12 turbulent C1-C3 hydrocarbon jet diffusion flames covering a wide range of Reynolds numbers and fuel sooting propensities. The joint scalar probability density function (PDF) is a function of the mixture fraction, enthalpy defect, scalar dissipation rate and representative soot properties. Soot production is modelled by a semi-empirical acetylene/benzene-based soot model. Spectral gas and soot radiation is modelled using a wide-band correlated-k model. Emission turbulent radiation interactions (TRIs) are taken into account by means of the PDF method, whereas absorption TRIs are modelled using the optically thin fluctuation approximation. Model predictions are found to be in reasonable agreement with experimental data in terms of flame structure, soot quantities and radiative loss. Mean soot volume fractions are predicted within a factor of two of the experiments whereas radiant fractions and peaks of wall radiative fluxes are within 20%. The study also aims to assess approximate radiative models, namely the optically thin approximation (OTA) and grey medium approximation. These approximations affect significantly the radiative loss and should be avoided if accurate predictions of the radiative flux are desired. At atmospheric pressure, the relative errors that they produced on the peaks of temperature and soot volume fraction are within both experimental and model uncertainties. However, these discrepancies are found to increase with pressure, suggesting that spectral models describing properly the self-absorption should be considered at over-atmospheric pressure.
Lawson, L G; Bruun, J; Coelli, T; Agger, J F; Lund, M
2004-01-01
Relationships of various reproductive disorders and milk production performance of Danish dairy farms were investigated. A stochastic frontier production function was estimated using data collected in 1998 from 514 Danish dairy farms. Measures of farm-level milk production efficiency relative to this production frontier were obtained, and relationships between milk production efficiency and the incidence risk of reproductive disorders were examined. There were moderate positive relationships between milk production efficiency and retained placenta, induction of estrus, uterine infections, ovarian cysts, and induction of birth. Inclusion of reproductive management variables showed that these moderate relationships disappeared, but directions of coefficients for almost all those variables remained the same. Dystocia showed a weak negative correlation with milk production efficiency. Farms that were mainly managed by young farmers had the highest average efficiency scores. The estimated milk losses due to inefficiency averaged 1142, 488, and 256 kg of energy-corrected milk per cow, respectively, for low-, medium-, and high-efficiency herds. It is concluded that the availability of younger cows, which enabled farmers to replace cows with reproductive disorders, contributed to high cow productivity in efficient farms. Thus, a high replacement rate more than compensates for the possible negative effect of reproductive disorders. The use of frontier production and efficiency/inefficiency functions to analyze herd data may enable dairy advisors to identify inefficient herds and to simulate the effect of alternative management procedures on the individual herd's efficiency.
Numerical Model for Cosmic Rays Species Production and Propagation in the Galaxy
NASA Technical Reports Server (NTRS)
Farahat, Ashraf; Zhang, Ming; Rassoul, Hamid; Connell, J. J.
2005-01-01
In recent years, considerable progress has been made in studying the propagation and origin of cosmic rays, as new and more accurate data have become available. Many models developed to study cosmic ray interactions and propagation have shown flexibility in representing various astrophysical conditions and good agreement with observational data. However, some astrophysical problems cannot be addressed using these models, such as the stochastic nature of the cosmic ray sources and the small-scale structures and inhomogeneities in the interstellar gas that can affect radioactive secondary abundances in cosmic rays. We have developed a new model and a corresponding computer code that can address some of these limitations. The model depends on the expansion of the backward stochastic solution of the general diffusion transport equation (Zhang 1999), starting from an observer position, to solve a group of diffusion transport equations, each of which represents a particular element or isotope of cosmic ray nuclei. In this paper we focus on key abundance ratios such as B/C, sub-Fe/Fe, (10)Be/(9)Be, (26)Al/(27)Al, (36)Cl/(37)Cl and (54)Mn/(55)Mn, which all have well established cross sections, to evaluate our model. The effect of inhomogeneity in the interstellar medium is investigated. The contribution of certain cosmic ray nuclei to the production of other nuclei is addressed. The contribution of various galactic locations to the production of the cosmic ray nuclei observed in the solar system is also investigated.