Instabilities in Molecular Dynamics Integrators used in Hybrid Monte Carlo Simulations
B. Joo; UKQCD Collaboration
2001-10-11
We discuss an instability in the leapfrog integration algorithm, widely used in current Hybrid Monte Carlo (HMC) simulations of lattice QCD. We demonstrate the instability in the simple harmonic oscillator (SHO) system, where it is manifest, and in HMC simulations of lattice QCD with dynamical Wilson-Clover fermions, and discuss the implications for future simulations of lattice QCD.
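The SHO instability described above is easy to reproduce. Below is a minimal Python sketch (illustrative, not taken from the paper): kick-drift-kick leapfrog for H = p^2/2 + omega^2 q^2/2 is stable only for omega*dt < 2, and the energy diverges as soon as the step size crosses that threshold.

```python
def leapfrog(q, p, h, n, omega=1.0):
    """Kick-drift-kick leapfrog for the SHO, H = p^2/2 + omega^2 q^2/2."""
    for _ in range(n):
        p -= 0.5 * h * omega ** 2 * q   # half kick
        q += h * p                      # drift
        p -= 0.5 * h * omega ** 2 * q   # half kick
    return q, p

def energy(q, p, omega=1.0):
    return 0.5 * p * p + 0.5 * (omega * q) ** 2

qs, ps = leapfrog(1.0, 0.0, 1.9, 1000)  # omega*dt = 1.9 < 2: stable
qu, pu = leapfrog(1.0, 0.0, 2.1, 50)    # omega*dt = 2.1 > 2: unstable
stable_E, unstable_E = energy(qs, ps), energy(qu, pu)
```

With omega*dt = 1.9 the energy stays bounded for arbitrarily many steps; at omega*dt = 2.1 it grows exponentially, which is the mechanism behind the collapse of the HMC acceptance rate that the paper studies.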
A Novel Multiple-Time Scale Integrator for the Hybrid Monte Carlo Algorithm
Waseem Kamleh
2011-01-04
Hybrid Monte Carlo simulations that implement the fermion action using multiple terms are commonly used. By the nature of their formulation they involve multiple integration time scales in the evolution of the system through simulation time. These different scales are usually dealt with by the Sexton-Weingarten nested leapfrog integrator. In this scheme the choice of time scales is somewhat restricted, as each time step must be an exact multiple of the next smallest scale in the sequence. A novel generalisation of the nested leapfrog integrator is introduced which allows for far greater flexibility in the choice of time scales, as each scale need only be an exact multiple of the smallest step size.
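The baseline scheme being generalised can be sketched as follows: an illustrative Sexton-Weingarten nested leapfrog for a toy Hamiltonian with one stiff and one soft harmonic force (the forces and frequencies are invented for the example, standing in for the multiple fermion terms). The slow force is evaluated only on the outer scale, whose step is an exact multiple of the inner one.

```python
def nested_leapfrog(q, p, h, n_outer, n_inner, f_slow, f_fast):
    """Sexton-Weingarten nested leapfrog: the slow force is kicked on the
    outer scale h, the fast force is integrated on the inner scale h/n_inner."""
    for _ in range(n_outer):
        p += 0.5 * h * f_slow(q)            # outer half kick (slow force)
        hi = h / n_inner
        for _ in range(n_inner):            # inner leapfrog, fast force only
            p += 0.5 * hi * f_fast(q)
            q += hi * p
            p += 0.5 * hi * f_fast(q)
        p += 0.5 * h * f_slow(q)            # outer half kick
    return q, p

W_FAST, W_SLOW = 10.0, 1.0                  # stiff and soft harmonic forces
f_fast = lambda q: -W_FAST ** 2 * q
f_slow = lambda q: -W_SLOW ** 2 * q

def H(q, p):
    return 0.5 * p * p + 0.5 * (W_FAST ** 2 + W_SLOW ** 2) * q * q

q1, p1 = nested_leapfrog(1.0, 0.0, 0.1, 200, 10, f_slow, f_fast)
drift = abs(H(q1, p1) - H(1.0, 0.0)) / H(1.0, 0.0)
```

Energy is well conserved even though the (expensive, in QCD) slow force is evaluated ten times less often than the fast one; the paper's generalisation relaxes the requirement that each scale divide the next exactly.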
Extra Chance Generalized Hybrid Monte Carlo
NASA Astrophysics Data System (ADS)
Campos, Cédric M.; Sanz-Serna, J. M.
2015-01-01
We study a method, Extra Chance Generalized Hybrid Monte Carlo, to avoid rejections in the Hybrid Monte Carlo method and related algorithms. In the spirit of delayed rejection, whenever a rejection would occur, extra work is done to find a fresh proposal that, hopefully, may be accepted. We present experiments that clearly indicate that the additional work per sample carried out in the extra chance approach pays off in terms of the quality of the samples generated.
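For context, the rejections that the extra-chance approach targets come from energy errors in the discretised dynamics. The sketch below is plain HMC for a standard normal target (the baseline the method extends, not the authors' extra-chance algorithm): the rejection rate is negligible at small step size and grows as the step is pushed up, which is exactly the regime where delayed-rejection extra work pays off.

```python
import math, random

random.seed(1)

def leapfrog(q, p, h, n):
    # target U(q) = q^2/2 (standard normal), so the force is -q
    for _ in range(n):
        p -= 0.5 * h * q
        q += h * p
        p -= 0.5 * h * q
    return q, p

def rejection_rate(h, n_md=10, iters=5000):
    q, rejects = 0.0, 0
    for _ in range(iters):
        p = random.gauss(0.0, 1.0)                   # momentum refresh
        H0 = 0.5 * p * p + 0.5 * q * q
        q1, p1 = leapfrog(q, p, h, n_md)
        H1 = 0.5 * p1 * p1 + 0.5 * q1 * q1
        if random.random() < math.exp(min(0.0, H0 - H1)):
            q = q1                                   # accept
        else:
            rejects += 1                             # reject: keep old q
    return rejects / iters

r_small, r_large = rejection_rate(0.1), rejection_rate(1.2)
```

Each rejection discards an entire molecular dynamics trajectory; the extra-chance idea spends further trajectories trying to rescue exactly these cases.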
Monte Carlo integration on GPU
J. Kanzaki
2010-10-11
We use a graphics processing unit (GPU) for fast computations of Monte Carlo integrations. Two widely used Monte Carlo integration programs, VEGAS and BASES, are parallelized on GPU. By using $W^{+}$ plus multi-gluon production processes at LHC, we test integrated cross sections and execution time for programs in FORTRAN and C on CPU and those on GPU. Integrated results agree with each other within statistical errors. Programs on the GPU run about 50 times faster than those in C, and more than 60 times faster than the original FORTRAN programs.
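Independent of the GPU aspect, the core of both VEGAS and BASES is Monte Carlo estimation of an integral with a statistical error bar. A minimal sketch (a one-dimensional toy integrand, not the parallelised programs):

```python
import math, random

random.seed(42)

def mc_integrate(f, n):
    """Plain Monte Carlo estimate of the integral of f over [0, 1],
    returned together with its one-sigma statistical error."""
    vals = [f(random.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, math.sqrt(var / n)

est, err = mc_integrate(lambda x: math.sin(math.pi * x), 100_000)
# exact value is 2/pi ~ 0.6366; est should agree within a few err
```

Because each sample is independent, the loop parallelises trivially, which is what makes the GPU port in the paper effective.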
A separable shadow Hamiltonian hybrid Monte Carlo method
NASA Astrophysics Data System (ADS)
Sweet, Christopher R.; Hampton, Scott S.; Skeel, Robert D.; Izaguirre, Jesús A.
2009-11-01
Hybrid Monte Carlo (HMC) is a rigorous sampling method that uses molecular dynamics (MD) as a global Monte Carlo move. The acceptance rate of HMC decays exponentially with system size. The shadow hybrid Monte Carlo (SHMC) was previously introduced to reduce this performance degradation by sampling instead from the shadow Hamiltonian defined for MD when using a symplectic integrator. SHMC's performance is limited by the need to generate momenta for the MD step from a nonseparable shadow Hamiltonian. We introduce the separable shadow Hamiltonian hybrid Monte Carlo (S2HMC) method based on a formulation of the leapfrog/Verlet integrator that corresponds to a separable shadow Hamiltonian, which allows efficient generation of momenta. S2HMC gives the acceptance rate of a fourth order integrator at the cost of a second-order integrator. Through numerical experiments we show that S2HMC consistently gives a speedup greater than two over HMC for systems with more than 4000 atoms for the same variance. By comparison, SHMC gave a maximum speedup of only 1.6 over HMC. S2HMC has the additional advantage of not requiring any user parameters beyond those of HMC. S2HMC is available in the program PROTOMOL 2.1. A Python version, adequate for didactic purposes, is also in MDL (http://mdlab.sourceforge.net/s2hmc).
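The shadow-Hamiltonian picture can be seen numerically: a symplectic integrator such as leapfrog/Verlet exactly conserves a nearby modified Hamiltonian, so the error in the true energy stays bounded and shrinks as O(h^2) for a second-order method. A small sketch (an assumed SHO test system, not from the paper):

```python
def max_energy_error(h, T=10.0):
    """Worst-case energy error of kick-drift-kick leapfrog on the SHO over time T."""
    q, p = 1.0, 0.0
    E0 = 0.5 * (p * p + q * q)
    worst = 0.0
    for _ in range(int(round(T / h))):
        p -= 0.5 * h * q
        q += h * p
        p -= 0.5 * h * q
        worst = max(worst, abs(0.5 * (p * p + q * q) - E0))
    return worst

e1, e2 = max_energy_error(0.1), max_energy_error(0.05)
ratio = e1 / e2   # second-order integrator: halving h cuts the error ~4x
```

Sampling from the shadow Hamiltonian, as SHMC and S2HMC do, turns this bounded discrepancy from a source of rejections into a reweighting, which is why the acceptance behaves like that of a higher-order integrator.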
Monte Carlo Integration with Subtraction
Rudy Arthur; A. D. Kennedy
2012-09-04
This paper investigates a class of algorithms for numerical integration of a function in d dimensions over a compact domain by Monte Carlo methods. We construct a histogram approximation to the function using a partition of the integration domain into a set of bins specified by some parameters. We then consider two adaptations: the first is to subtract the histogram approximation, whose integral we may easily evaluate explicitly, from the function and integrate the difference using Monte Carlo; the second is to modify the bin parameters in order to make the variance of the Monte Carlo estimate of the integral the same for all bins. This allows us to use Student's t-test as a trigger for rebinning, which we claim is more stable than the chi-squared test that is commonly used for this purpose. We provide a program that we have used to study the algorithm for the case where the histogram is represented as a product of one-dimensional histograms. We discuss the assumptions and approximations made, as well as giving a pedagogical discussion of the myriad ways in which the results of any such Monte Carlo integration program can be misleading.
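The first adaptation, subtracting an exactly integrable histogram approximation and Monte Carlo integrating only the remainder, is easy to sketch in one dimension (the integrand and bin count here are invented for illustration):

```python
import random

random.seed(0)

def f(x):
    return x * x * x          # smooth test integrand on [0, 1]; exact integral 1/4

NBINS = 16
# histogram approximation: each bin takes the value of f at its midpoint
g_vals = [f((i + 0.5) / NBINS) for i in range(NBINS)]
g_integral = sum(g_vals) / NBINS          # integral of the histogram, known exactly

def g(x):
    return g_vals[min(int(x * NBINS), NBINS - 1)]

def estimate(n, subtract):
    total = 0.0
    for _ in range(n):
        x = random.random()
        total += (f(x) - g(x)) if subtract else f(x)
    mc = total / n
    return (mc + g_integral) if subtract else mc

plain = estimate(20000, False)
subtracted = estimate(20000, True)
```

The residual f - g has a much smaller range than f itself, so the subtracted estimator reaches the same accuracy with far fewer samples; the paper's bin-parameter adaptation then equalises the per-bin variances.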
Multiple-time-stepping generalized hybrid Monte Carlo methods
NASA Astrophysics Data System (ADS)
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC additionally uses a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy, and sampling efficiency. This suggests that placing the MTS approach in the framework of hybrid Monte Carlo, and using the natural stochasticity offered by generalized hybrid Monte Carlo, improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
An Exact Local Hybrid Monte Carlo Algorithm for Gauge Theories
A. D. Kennedy; K. M. Bitar
1993-11-16
We introduce a new Monte Carlo method for pure gauge theories. It is not intended for use with dynamical fermions. It belongs to the class of Local Hybrid Monte Carlo (LHMC) algorithms, which make use of the locality of the action by updating individual sites or links by following a classical mechanics trajectory in fictitious time. We choose to update a one-parameter subgroup of the gauge field on each link of the lattice, and the classical trajectory can be found in closed form in terms of elliptic functions for this case. We show that this gives an overrelaxation algorithm with a tunable parameter which, unlike some previous methods, does not require the numerical integration of the equations of motion.
Tunneling hybrid Monte-Carlo algorithm
Golterman, Maarten (Department of Physics and Astronomy, San Francisco State University, San Francisco, California); Shamir, Yigal (Raymond and Beverly Sackler School of Physics and Astronomy, Tel-Aviv University, Ramat Aviv, Israel)
2007-11-01
The Hermitian Wilson kernel used in the construction of the domain-wall and overlap Dirac operators has exceptionally small eigenvalues that make it expensive to reach high-quality chiral symmetry for domain-wall fermions, or high precision in the case of the overlap operator. An efficient way of suppressing such eigenmodes consists of including a positive power of the determinant of the Wilson kernel in the Boltzmann weight, but doing this also suppresses tunneling between topological sectors. Here we propose a modification of the hybrid Monte-Carlo algorithm which aims to restore tunneling between topological sectors by excluding the lowest eigenmodes of the Wilson kernel from the molecular-dynamics evolution, and correcting for this at the accept/reject step. We discuss the implications of this modification for the acceptance rate.
Hybrid optofluidic integration.
Parks, Joshua W; Cai, Hong; Zempoaltecatl, Lynnell; Yuzvinsky, Thomas D; Leake, Kaelyn; Hawkins, Aaron R; Schmidt, Holger
2013-10-21
Complete integration of microfluidic and optical functions in a single lab-on-chip device is one goal of optofluidics. Here, we demonstrate the hybrid integration of a PDMS-based fluid handling layer with a silicon-based optical detection layer in a single optofluidic system. The optical layer consists of a liquid-core antiresonant reflecting optical waveguide (ARROW) chip that is capable of single particle detection and interfacing with optical fiber. Integrated devices are reconfigurable and able to sustain high pressures despite the small dimensions of the liquid-core waveguide channels. We show the combination of salient sample preparation capabilities (particle mixing, distribution, and filtering) with single particle fluorescence detection. Specifically, we demonstrate fluorescent labelling of λ-DNA, followed by flow-based single-molecule detection on a single device. This points the way towards amplification-free detection of nucleic acids with low-complexity biological sample preparation on a chip. PMID:23969694
Monte Carlo Simulation of the Strength of Hybrid Composites
Hiroshi Fukuda; Tsu-Wei Chou
1982-01-01
This paper first deals with the stress concentration factors for a general fiber breakage model. The knowledge of stress redistribution at fiber fracture is then used for a Monte Carlo simulation of composite strength. The theoretical analysis has predicted the multiple fracture pattern of the low elongation fibers and the progressive nature of failure of hybrid composites. The enhanced ultimate…
Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J. (Computational Physics and Methods Group, Los Alamos National Laboratory, Los Alamos, NM)
2012-08-15
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations in optically thick media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many smaller Monte Carlo steps, thus improving the efficiency of the simulation. In this paper, we present an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold, as optical thickness is typically a decreasing function of frequency. Above this threshold we employ standard Monte Carlo, which results in a hybrid transport-diffusion scheme. With a set of frequency-dependent test problems, we confirm the accuracy and increased efficiency of our new DDMC method.
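The key DDMC idea, replacing many small Monte Carlo steps by one step of a discretised diffusion equation, rests on the two having the same mean and variance. A toy sketch (a plain random walk standing in for optically thick transport, not the frequency-dependent method of the paper):

```python
import math, random

random.seed(7)

def many_small_steps(k, s):
    """k tiny Monte Carlo steps of size +/- s (the expensive analog picture)."""
    return sum(random.choice((-s, s)) for _ in range(k))

def one_diffusion_step(k, s):
    """One diffusion-equation jump with the same mean (0) and variance (k*s^2)."""
    return random.gauss(0.0, s * math.sqrt(k))

N = 20000
var_small = sum(many_small_steps(100, 0.1) ** 2 for _ in range(N)) / N
var_diff = sum(one_diffusion_step(100, 0.1) ** 2 for _ in range(N)) / N
```

Both displacement distributions have variance k*s^2 = 1 here, but the diffusion step costs one random number instead of a hundred, which is the source of the DDMC efficiency gain in optically thick cells.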
The hybrid Monte Carlo Algorithm and the chiral transition
Gupta, R.
1987-01-01
In this talk the author describes tests of the Hybrid Monte Carlo Algorithm for QCD done in collaboration with Greg Kilcup and Stephen Sharpe. We find that the acceptance in the global Metropolis step for staggered fermions can be tuned and kept large without having to make the step-size prohibitively small. We present results for the finite temperature transition on 4^4 and 4 x 6^3 lattices using this algorithm.
Testing trivializing maps in the Hybrid Monte Carlo algorithm
Georg P. Engel; Stefan Schaefer
2011-02-09
We test a recent proposal to use approximate trivializing maps in a field theory to speed up Hybrid Monte Carlo simulations. Simulating the CP^{N-1} model, we find a small improvement with the leading order transformation, which is however compensated by the additional computational overhead. The scaling of the algorithm towards the continuum is not changed. In particular, the effect of the topological modes on the autocorrelation times is studied.
A Primer in Monte Carlo Integration Using Mathcad
ERIC Educational Resources Information Center
Hoyer, Chad E.; Kegerreis, Jeb S.
2013-01-01
The essentials of Monte Carlo integration are presented for use in an upper-level physical chemistry setting. A Mathcad document that aids in the dissemination and utilization of this information is described and is available in the Supporting Information. A brief outline of Monte Carlo integration is given, along with ideas and pedagogy for…
Quantum photonics hybrid integration platform
Murray, Eoin; Ellis, David P.; Meany, Thomas; Floether, Frederik F.; Lee, James P.; Griffiths, Jonathan P.; Jones, Geb A. C.; Farrer, Ian; Ritchie, David A.; Bennett, Anthony J.; Shields, Andrew J.
2015-01-01
Fundamental to integrated photonic quantum computing is an on-chip method for routing and modulating quantum light emission. We demonstrate a hybrid integration platform consisting of arbitrarily designed waveguide circuits and single photon sources. InAs quantum dots (QD) embedded in GaAs are bonded to an SiON waveguide chip such that the QD emission is coupled to the waveguide mode. The waveguides are SiON core embedded in a SiO2 cladding. A tuneable Mach Zehnder modulates the emission between two output ports and can act as a path-encoded qubit preparation device. The single photon nature of the emission was verified by an on-chip Hanbury Brown and Twiss measurement.
Accelerating Staggered Fermion Dynamics with the Rational Hybrid Monte Carlo (RHMC) Algorithm
M. A. Clark; A. D. Kennedy
2007-10-18
Improved staggered fermion formulations are a popular choice for lattice QCD calculations. Historically, the algorithm used for such calculations has been the inexact R algorithm, which has systematic errors that only vanish as the square of the integration step-size. We describe how the exact Rational Hybrid Monte Carlo (RHMC) algorithm may be used in this context, and show that for parameters corresponding to current state-of-the-art computations it leads to a factor of approximately seven decrease in cost as well as having no step-size errors.
A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation
Li, Z.; Wang, K.
2012-07-01
Full core calculations are very useful and important in reactor physics analysis, especially in computing full core power distributions, optimizing refueling strategies and analyzing the depletion of fuels. To reduce the computing time and accelerate the convergence, a method named Response Matrix Monte Carlo (RMMC), based on analog Monte Carlo simulation, was used to calculate fixed source neutron transport problems in repeated structures. To make more accurate calculations, we put forward the RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of calculations but also simulate more complex geometries than repeated structures alone. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC and RMMC+MC methods can efficiently reduce the computing time and the variances of the calculations. Finally, future research directions are discussed at the end of this paper to make the RMMC and RMMC+MC methods more powerful. (authors)
Cost of the Generalised Hybrid Monte Carlo Algorithm for Free Field Theory
A. D. Kennedy; Brian Pendleton
2000-08-21
We study analytically the computational cost of the Generalised Hybrid Monte Carlo (GHMC) algorithm for free field theory. We calculate the Metropolis acceptance probability for leapfrog and higher-order discretisations of the Molecular Dynamics (MD) equations of motion. We show how to calculate autocorrelation functions of arbitrary polynomial operators, and use these to optimise the GHMC momentum mixing angle, the trajectory length, and the integration stepsize for the special cases of linear and quadratic operators. We show that long trajectories are optimal for GHMC, and that standard HMC is more efficient than algorithms based on Second Order Langevin Monte Carlo (L2MC), sometimes known as Kramers Equation. We show that contrary to naive expectations HMC and L2MC have the same volume dependence, but their dynamical critical exponents are z = 1 and z = 3/2 respectively.
2D hybrid meshes for direct simulation Monte Carlo solvers
NASA Astrophysics Data System (ADS)
Sengil, N.; Sengil, U.
2013-02-01
The efficiency of the direct simulation Monte Carlo (DSMC) method decreases considerably if the gas is not rarefied. In order to extend the application range of the DSMC method towards non-rarefied gas regimes, its computational efficiency should be increased further. One of the most time-consuming parts of the DSMC method is determining which DSMC molecules are in close proximity. If this information is calculated quickly, the efficiency of the DSMC method increases. Although some meshless methods have been proposed, mostly structured or unstructured meshes are used to obtain this information. The simplest DSMC solvers are limited to structured meshes. In such solvers, molecule indexing according to position can be handled very quickly using simple arithmetic operations, but structured meshes are geometry dependent. Complicated geometries require the use of unstructured meshes, in which case DSMC molecules are traced cell by cell. Different cell-by-cell tracing techniques exist, but they require complicated trigonometric operations or search algorithms, both of which are computationally expensive. In this study, a hybrid mesh structure is proposed. Hybrid meshes are both less dependent on the geometry, like unstructured meshes, and computationally efficient, like structured meshes.
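The arithmetic indexing that makes structured meshes fast is a one-line computation, in contrast to the tracing or searching needed on unstructured meshes. A minimal 2D sketch (uniform mesh assumed; the parameter names are invented for the example):

```python
def cell_index(x, y, xmin, ymin, dx, dy, nx):
    """O(1) arithmetic cell lookup on a uniform structured mesh:
    molecules in the same cell are collision candidates."""
    i = int((x - xmin) / dx)   # column
    j = int((y - ymin) / dy)   # row
    return j * nx + i          # flattened cell id

idx = cell_index(0.35, 0.72, 0.0, 0.0, 0.1, 0.1, 10)   # cell (i=3, j=7)
```

A hybrid mesh keeps this constant-time lookup over most of the domain and falls back to unstructured cells only where the geometry demands it.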
ITER Neutronics Modeling Using Hybrid Monte Carlo/Deterministic and CAD-Based Monte Carlo Methods
Ibrahim, A.; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Sawan, M.; Wilson, P.; Wagner, John C; Heltemes, Thad
2011-01-01
The immense size and complex geometry of the ITER experimental fusion reactor require the development of special techniques that can accurately and efficiently perform neutronics simulations with minimal human effort. This paper shows the effect of the hybrid Monte Carlo (MC)/deterministic techniques - Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) - in enhancing the efficiency of the neutronics modeling of ITER and demonstrates the applicability of coupling these methods with computer-aided-design-based MC. Three quantities were calculated in this analysis: the total nuclear heating in the inboard leg of the toroidal field coils (TFCs), the prompt dose outside the biological shield, and the total neutron and gamma fluxes over a mesh tally covering the entire reactor. The use of FW-CADIS in estimating the nuclear heating in the inboard TFCs resulted in a factor of ~ 275 increase in the MC figure of merit (FOM) compared with analog MC and a factor of ~ 9 compared with the traditional methods of variance reduction. By providing a factor of ~ 21 000 increase in the MC FOM, the radiation dose calculation showed how the CADIS method can be effectively used in the simulation of problems that are practically impossible using analog MC. The total flux calculation demonstrated the ability of FW-CADIS to simultaneously enhance the MC statistical precision throughout the entire ITER geometry. Collectively, these calculations demonstrate the ability of the hybrid techniques to accurately model very challenging shielding problems in reasonable execution times.
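CADIS-style variance reduction is at heart importance sampling: bias the random sampling towards the region that contributes to the tally and correct with statistical weights. A toy analogue (a rare tail probability standing in for a deep-penetration dose, with an invented biased distribution; not the actual CADIS adjoint machinery):

```python
import math, random

random.seed(3)
N = 20000
# "dose" stand-in: P(X > 8) for X ~ Exp(1); exact value is exp(-8) ~ 3.35e-4

# analog Monte Carlo: almost no samples ever reach the shielded region
analog = sum(1 for _ in range(N) if random.expovariate(1.0) > 8.0) / N

# importance sampling: draw from Exp(rate 1/4), i.e. mean 4, and reweight
total = 0.0
for _ in range(N):
    x = random.expovariate(0.25)
    if x > 8.0:
        total += math.exp(-x) / (0.25 * math.exp(-0.25 * x))   # weight p/q
is_est = total / N
```

At equal sample count the weighted estimator has a far smaller variance, which is the same figure-of-merit gain (if far more modest in scale) that FW-CADIS delivers for the ITER tallies.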
Multiple molecular dynamics time-scales in Hybrid Monte Carlo fermion simulations
Mike Peardon; James Sexton; for the TrinLat Collaboration
2002-09-03
A scheme for separating the high- and low-frequency molecular dynamics modes in Hybrid Monte Carlo (HMC) simulations of gauge theories with dynamical fermions is presented. The algorithm is tested in the Schwinger model with Wilson fermions.
Brownian Processes for Monte Carlo Integration on Compact Lie Groups
Said, S.; Manton, Jonathan
This paper proposes Brownian processes for the evaluation of integrals of smooth functions defined on compact Lie groups. The approach is based on the ergodic property of Brownian processes in compact Lie groups, for which the paper provides an elementary proof.
Linearly scalable hybrid Monte Carlo method for conformational sampling of large biomolecules
Izaguirre, Jesús A.; Hampton, Scott S.
Large biomolecules exhibit multiple time scales that both limit the small time step of molecular dynamics (MD) and require long simulations; a linearly scalable hybrid Monte Carlo method for conformational sampling combines MD and Monte Carlo, aiming to improve on the performance of either method separately.
Recent advances in PLC hybrid integration technology
NASA Astrophysics Data System (ADS)
Ogawa, Ikuo; Kitagawa, Takeshi
2003-07-01
Opto-electronic hybrid integration using a silica-based planar lightwave circuit (PLC) platform is an attractive way to realize the various kinds of opto-electronic components required for future photonic networks. This paper briefly introduces the concept and basic techniques used for PLC hybrid integration, and describes recent advances in this field. We also report on several high-performance optical devices that we recently developed using this technology.
Path Integral Monte Carlo Simulations of Hot Dense Hydrogen
Militzer, Burkhard
PIMC has been applied to study the equilibrium properties of hot, dense hydrogen in the temperature … thermodynamic properties. The modifications are particularly significant at low temperature and high density.
Hybrid Parallel Computation of Integration in GRACE
Yuasa, Fukuko; Ishikawa, Tadashi; Kawabata, Setsuya; Perret-Gallix, Denis; Itakura, Kazuhiro; Hotta, Yukihiko; Okuda, Motoi
2000-01-01
With the integrated software package GRACE, it is possible to generate Feynman diagrams, calculate the total cross section and generate physics events automatically. We outline the hybrid method of parallel computation of the multi-dimensional integration of GRACE. We used MPI (Message Passing Interface) as the parallel library and, to improve the performance, we embedded a mechanism for dynamic load balancing. The reduction of the practical execution time was studied.
Path integral Monte Carlo and the electron gas
NASA Astrophysics Data System (ADS)
Brown, Ethan W.
Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite-temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite-temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas at a large swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation.
As a first step, we utilize the variational principle inherent in the path integral Monte Carlo method to optimize the nodal surface. By using a ansatz resembling a free particle density matrix, we make a unique connection between a nodal effective mass and the traditional effective mass of many-body quantum theory. We then propose and test several alternate nodal ansatzes and apply them to single atomic systems. Finally, we propose a method to tackle the sign problem head on, by leveraging the relatively simple structure of permutation space. Using this method, we find we can perform exact simulations this of the electron gas and 3He that were previously impossible.
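The path-integral sampling described above can be illustrated with a minimal sketch (not the thesis's production method): a single particle in a 1D harmonic oscillator, with the imaginary-time path discretized into slices and updated by single-bead Metropolis moves under the primitive action. All parameter values here are illustrative choices, not taken from the source.

```python
import math, random

def pimc_sho(beta=10.0, n_slices=32, n_sweeps=4000, step=0.5, seed=1):
    """Naive path-integral Monte Carlo for a 1D harmonic oscillator
    (hbar = m = omega = 1): sample closed imaginary-time paths with
    single-bead Metropolis moves and estimate <x^2>."""
    random.seed(seed)
    dt = beta / n_slices
    path = [0.0] * n_slices

    def local_action(i, x):
        # Part of the primitive action touching bead i (periodic in time).
        xp, xn = path[(i - 1) % n_slices], path[(i + 1) % n_slices]
        kinetic = ((x - xp) ** 2 + (xn - x) ** 2) / (2.0 * dt)
        return kinetic + dt * 0.5 * x * x   # potential V(x) = x^2 / 2

    x2_sum, n_meas = 0.0, 0
    for sweep in range(n_sweeps):
        for i in range(n_slices):
            x_old, x_new = path[i], path[i] + random.uniform(-step, step)
            if random.random() < math.exp(local_action(i, x_old) - local_action(i, x_new)):
                path[i] = x_new
        if sweep > n_sweeps // 2:           # discard first half as burn-in
            x2_sum += sum(x * x for x in path) / n_slices
            n_meas += 1
    return x2_sum / n_meas

# At beta = 10 the exact continuum answer is <x^2> ~ 0.5; the discretized
# estimate should land in that neighbourhood.
print(pimc_sho())
```

For fermions the same machinery acquires alternating-sign permutation weights, which is exactly the sign problem the thesis addresses; this bosonic/distinguishable-particle sketch sidesteps it.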
The S_N/Monte Carlo response matrix hybrid method
Filippone, W.L.; Alcouffe, R.E.
1987-01-01
A hybrid method has been developed to iteratively couple S_N and Monte Carlo regions of the same problem. This technique avoids many of the restrictions and limitations of previous attempts at coupling and results in a general and relatively efficient method. We demonstrate the method with some simple examples.
A Concise Force Calculation for Hybrid Monte Carlo with Improved Actions
Karthik, Nikhil
2014-01-01
We present a concise way to calculate force for Hybrid Monte Carlo with improved actions using the fact that changes in thin and smeared link matrices lie in their respective tangent vector spaces. Since hypercubic smearing schemes are very memory intensive, we also present a memory optimized implementation of them.
A hybrid multiscale kinetic Monte Carlo method for simulation of copper electrodeposition
A hybrid multiscale kinetic Monte Carlo method for speeding up the simulation of copper electrodeposition is presented. The fast diffusion events are simulated deterministically with a heterogeneous diffusion model which considers site-blocking effects.
M. A. Clark; A. D. Kennedy
2006-08-22
There has been much recent progress in the understanding and reduction of the computational cost of the Hybrid Monte Carlo algorithm for Lattice QCD as the quark mass parameter is reduced. In this letter we present a new solution to this problem, where we represent the fermionic determinant using n pseudofermion fields, each with an …
A Hybrid Monte Carlo Method for Surface Growth Simulations
G. Russo; P. Smereka
2003-01-01
We introduce an algorithm for treating growth on surfaces which combines important features of continuum methods (such as the level-set method) and Kinetic Monte Carlo (KMC) simulations. We treat the motion of adatoms in continuum theory, but attach them to islands one atom at a time. The technique is borrowed from the Dielectric Breakdown Model. Our method allows us to …
Monte Carlo Integration Using Spatial Structure of Markov Random Field
NASA Astrophysics Data System (ADS)
Yasuda, Muneki
2015-03-01
Monte Carlo integration (MCI) techniques are important in various fields. In this study, a new MCI technique for Markov random fields (MRFs) is proposed. MCI consists of two successive parts: the first involves sampling using a technique such as the Markov chain Monte Carlo method, and the second involves an averaging operation using the obtained sample points. In the averaging operation, a simple sample averaging technique is often employed. The method proposed in this paper improves the averaging operation by addressing the spatial structure of the MRF and is mathematically guaranteed to statistically outperform standard MCI using the simple sample averaging operation. Moreover, the proposed method can be systematically improved, and it is verified numerically using planar Ising models. In the latter part of this paper, the proposed method is applied to the inverse Ising problem, and we observe that it outperforms maximum pseudo-likelihood estimation.
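The idea of improving the averaging step by exploiting the MRF's spatial structure can be sketched, in the spirit of Rao-Blackwellization (this is an illustration of the general principle, not Yasuda's exact estimator): instead of averaging a raw spin value, average its conditional expectation given its neighbours, which has the same mean but lower variance. Model parameters and sizes below are arbitrary illustrative choices.

```python
import math, random

def ising_estimates(L=8, beta=0.3, h=0.3, sweeps=3000, seed=7):
    """Metropolis sampling of a 2D Ising model with external field h.
    Compare the naive sample average of one spin with a conditional-mean
    estimator tanh(beta * (sum of neighbours + h)), which targets the
    same expectation with lower variance."""
    random.seed(seed)
    s = [[1] * L for _ in range(L)]

    def nbr_sum(i, j):
        return (s[(i - 1) % L][j] + s[(i + 1) % L][j] +
                s[i][(j - 1) % L] + s[i][(j + 1) % L])

    naive, improved, n = 0.0, 0.0, 0
    for sweep in range(sweeps):
        for i in range(L):
            for j in range(L):
                # Energy change for flipping spin (i, j).
                dE = 2.0 * s[i][j] * (nbr_sum(i, j) + h)
                if dE <= 0 or random.random() < math.exp(-beta * dE):
                    s[i][j] = -s[i][j]
        if sweep > sweeps // 2:             # discard burn-in
            naive += s[0][0]
            improved += math.tanh(beta * (nbr_sum(0, 0) + h))
            n += 1
    return naive / n, improved / n

m_naive, m_improved = ising_estimates()
# Both estimators target the same local magnetisation and should agree.
print(m_naive, m_improved)
```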
Quasi-Monte Carlo integration over ? for migration ? inversion
NASA Astrophysics Data System (ADS)
de Hoop, Maarten V.; Spencer, Carl
1996-06-01
In this paper, we analyse the discretization of the generalized Radon transform/amplitude versus scattering angle (GRT/AVA) migration-inversion formula by means of quasi-Monte Carlo methods. These methods are efficient, in the sense that they require only sparsely sampled measurements, and accurate, as we show by theory and examples. Another feature of Monte Carlo methods is their ability to effectively suppress coherent noise associated with undesired wave phenomena in the inversion procedure. As examples, we carried out the associated integrations over 0266-5611/12/3/004/img3 and 0266-5611/12/3/004/img4, and consistently found that quasi-random sequences achieve a prescribed accuracy with significantly fewer nodes.
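The quasi-Monte Carlo principle used above, replacing pseudorandom integration nodes with a low-discrepancy sequence so a prescribed accuracy is reached with fewer nodes, can be sketched with a Halton sequence on a toy 2D integral (a generic illustration, not the GRT/AVA integrand of the paper):

```python
def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integrate(f, n):
    """Quasi-Monte Carlo estimate of a 2D integral over the unit square,
    using Halton points (coprime bases 2 and 3 for the two dimensions)."""
    return sum(f(halton(k, 2), halton(k, 3)) for k in range(1, n + 1)) / n

# Example: the integral of x*y over [0,1]^2 is exactly 1/4.
est = qmc_integrate(lambda x, y: x * y, 2000)
print(est)
```

For smooth integrands the quasi-random error decays roughly like (log n)^d / n, compared with the n^(-1/2) of pseudorandom sampling, which is the source of the node savings reported in the abstract.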
Hybrid manufacturing: integrating direct write and stereolithography.
Davis, Donald W.; Inamdar, Asim (University of Texas at El Paso, El Paso, TX); Lopes, Amit (University of Texas at El Paso, El Paso, TX); Chavez, Bart D.; Gallegos, Phillip L.; Palmer, Jeremy Andrew (University of Texas at El Paso, El Paso, TX); Wicker, Ryan B. (University of Texas at El Paso); Medina, Francisco (University of Texas at El Paso, El Paso, TX); Hennessey, Robert E. (University of Texas at El Paso, El Paso, TX)
2005-07-01
A commercial stereolithography (SL) machine was modified to integrate fluid dispensing or direct-write (DW) technology with SL in an integrated manufacturing environment for automated and efficient hybrid manufacturing of complex electrical devices, combining three-dimensional (3D) electrical circuitry with SL-manufactured parts. The modified SL system operates similarly to a commercially available machine, although build interrupts were used to stop and start the SL build while depositing fluid using the DW system. An additional linear encoder was attached to the SL platform z-stage and used to maintain accurate part registration during the SL and DW build processes. Individual STL files were required as part of the manufacturing process plan. The DW system employed a three-axis translation mechanism that was integrated with the commercial SL machine. Registration between the SL part, SL laser and the DW nozzle was maintained through the use of 0.025-inch diameter cylindrical reference holes manufactured in the part during SL. After depositing conductive ink using DW, the SL laser was commanded to trace the profile until the ink was cured. The current system allows for easy exchange between SL and DW in order to manufacture fully functional 3D electrical circuits and structures in a semi-automated environment. To demonstrate the manufacturing capabilities, the hybrid SL/DW setup was used to make a simple multi-layer SL part with embedded circuitry. This hybrid system is not intended to function as a commercial system; it is intended for experimental demonstration only. This hybrid SL/DW system has the potential for manufacturing fully functional electromechanical devices that are more compact, less expensive, and more reliable than their conventional predecessors, and work is ongoing in order to fully automate the current system.
Yang, Yang; Longini, Ira M.; Halloran, M. Elizabeth; Obenchain, Valerie
2012-01-01
In epidemics of infectious diseases such as influenza, an individual may have one of four possible final states: prior immune, escaped from infection, infected with symptoms, and infected asymptomatically. The exact state is often not observed. In addition, the unobserved transmission times of asymptomatic infections further complicate analysis. Under the assumption of missing at random, data-augmentation techniques can be used to integrate out such uncertainties. We adapt an importance-sampling-based Monte Carlo EM (MCEM) algorithm to the setting of an infectious disease transmitted in close contact groups. Assuming the independence between close contact groups, we propose a hybrid EM-MCEM algorithm that applies the MCEM or the traditional EM algorithms to each close contact group depending on the dimension of missing data in that group, and discuss the variance estimation for this practice. In addition, we propose a bootstrap approach to assess the total Monte Carlo error and factor that error into the variance estimation. The proposed methods are evaluated using simulation studies. We use the hybrid EM-MCEM algorithm to analyze two influenza epidemics in the late 1970s to assess the effects of age and pre-season antibody levels on the transmissibility and pathogenicity of the viruses. PMID:22506893
Finite element model updating using the shadow hybrid Monte Carlo technique
NASA Astrophysics Data System (ADS)
Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.
2015-02-01
Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques to deal with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form. This is the case in FEM updating. In such cases sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). Hybrid Monte Carlo (HMC) offers a very important MCMC approach to dealing with higher-dimensional complex problems. HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm. The SHMC algorithm is a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and compared to the application of the HMC algorithm on the same structures.
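The HMC mechanics referred to in this abstract (momentum refresh, leapfrog MD trajectory, Metropolis accept/reject on the Hamiltonian) can be shown in a minimal sketch for a one-dimensional standard normal target, where U(q) = q^2/2 and grad U = q. This is generic textbook HMC, not the SHMC variant or the FEM-updating posterior; parameter values are illustrative.

```python
import math, random

def hmc_gaussian(n_samples=3000, eps=0.2, n_leap=10, seed=3):
    """Minimal Hybrid Monte Carlo sampler for a standard normal target:
    leapfrog integration of Hamiltonian dynamics plus a Metropolis test."""
    random.seed(seed)
    q, samples = 0.0, []
    for _ in range(n_samples):
        p = random.gauss(0.0, 1.0)              # refresh momentum
        q_new, p_new = q, p
        p_new -= 0.5 * eps * q_new              # initial half kick
        for _ in range(n_leap - 1):
            q_new += eps * p_new                # drift
            p_new -= eps * q_new                # full kick
        q_new += eps * p_new
        p_new -= 0.5 * eps * q_new              # final half kick
        h_old = 0.5 * (q * q + p * p)
        h_new = 0.5 * (q_new * q_new + p_new * p_new)
        if random.random() < math.exp(h_old - h_new):   # Metropolis test
            q = q_new                            # accept; else keep old q
        samples.append(q)
    return samples

s = hmc_gaussian()
print(sum(s) / len(s), sum(x * x for x in s) / len(s))
```

The acceptance probability depends on the leapfrog energy error, which grows with the step size and dimension; SHMC's modified (shadow) Hamiltonian is designed precisely to keep that error small.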
Hybrid Monte Carlo with Wilson Dirac operator on the Fermi GPU
Abhijit Chakrabarty; Pushan Majumdar
2012-07-10
In this article we present our implementation of a Hybrid Monte Carlo algorithm for Lattice Gauge Theory using two degenerate flavours of Wilson-Dirac fermions on a Fermi GPU. We find that using registers instead of global memory speeds up the code by almost an order of magnitude. To map the array variables to scalars, so that the compiler puts them in registers, we use code generators. Our final program is more than 10 times faster than a generic single-CPU implementation.
Balint Joo; Brian Pendleton; Anthony D. Kennedy; Alan C. Irving; James C. Sexton; Stephen M. Pickles; Stephen P. Booth; UKQCD Collaboration
2000-05-26
We investigate instability and reversibility within Hybrid Monte Carlo simulations using a non-perturbatively improved Wilson action. We demonstrate the onset of instability as tolerance parameters and molecular dynamics step sizes are varied. We compare these findings with theoretical expectations and present limits on simulation parameters within which a stable and reversible algorithm is obtained for physically relevant simulations. Results of optimisation experiments with respect to tolerance parameters are also presented.
Somasundaram, E.; Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97332-5902 (United States)
2013-07-01
In this paper, we present the work that has been done to implement variance reduction techniques in Tortilla, a three-dimensional, multigroup Monte Carlo code that works within the framework of the commercial deterministic code Attila. This project aims to develop an integrated hybrid code that seamlessly takes advantage of the deterministic and Monte Carlo methods for deep-shielding radiation detection problems. Tortilla takes advantage of Attila's features for generating the geometric mesh, cross section library and source definitions. Tortilla can also read importance functions (like the adjoint scalar flux) generated from deterministic calculations performed in Attila and use them to employ variance reduction schemes in the Monte Carlo simulation. The variance reduction techniques that are implemented in Tortilla are based on the CADIS (Consistent Adjoint Driven Importance Sampling) method and the LIFT (Local Importance Function Transform) method. These methods make use of the results from an adjoint deterministic calculation to bias the particle transport using techniques like source biasing, survival biasing, transport biasing and weight windows. The results obtained so far and the challenges faced in implementing the variance reduction techniques are reported here.
Optimal implementation of the Shadow Hybrid Monte Carlo method.
Izaguirre, Jesús A.
The shadow Hamiltonian Ĥ can have a significant separation from the original Hamiltonian H, as seen when a symplectic numerical integrator propagates the equations of motion derived from a Hamiltonian. The Hamiltonian used for sampling is H̃ = max{H, Ĥ − c}; in the original work [5] the c parameter …
Hydrogen molecule ion: Path integral Monte Carlo approach
Kylänpää, I; Rantala, T T
2007-01-01
The path integral Monte Carlo approach is used to study the coupled quantum dynamics of the electron and nuclei in the hydrogen molecule ion. The coupling effects are demonstrated by comparing differences between adiabatic Born-Oppenheimer and non-adiabatic simulations, and by inspecting projections of the full three-body dynamics onto the adiabatic Born-Oppenheimer approximation. Coupling of electron and nuclear quantum dynamics is clearly seen. The nuclear pair correlation function is found to broaden by 0.040 a_0 and the average bond length is larger by 0.056 a_0. Also, a non-adiabatic correction to the binding energy is found. The electronic distribution is affected less, and therefore we could say that the adiabatic approximation is better for the electron than for the nuclei.
A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems
Keady, K P; Brantley, P
2010-03-04
Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. 
Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model) for deep penetration problems such as examined in this paper. In this research, we investigate the application of a variant of the hybrid Monte Carlo-deterministic method proposed by Cooper and Larsen to global deep penetration problems involving binary stochastic media. To our knowledge, hybrid Monte Carlo-deterministic methods have not previously been applied to problems involving a stochastic medium. We investigate two approaches for computing the approximate deterministic estimate of the forward scalar flux distribution used to automatically generate the weight windows. The first approach uses the atomic mix approximation to the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. The second approach uses the Levermore-Pomraning model for the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. In both cases, we use Monte Carlo Algorithm B with weight windows automatically generated from the approximate forward scalar flux distribution to obtain the solution of the transport problem.
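The implicit capture (survival biasing) technique named in this abstract can be shown in a minimal sketch: a 1D monoenergetic slab-transmission problem in which, at each collision, the particle's weight is multiplied by the scattering ratio instead of the particle being killed by analog absorption, with Russian roulette terminating low-weight histories. The slab thickness, scattering ratio, and roulette parameters are illustrative choices, not taken from the source.

```python
import math, random

def transmission(n_hist, implicit_capture, seed):
    """Estimate transmission through a 1D slab of thickness 3 mean free
    paths with scattering ratio c = 0.5 and isotropic scattering, using
    either analog absorption or implicit capture (survival biasing)."""
    random.seed(seed)
    c, thickness, tally = 0.5, 3.0, 0.0
    for _ in range(n_hist):
        x, mu, w = 0.0, 1.0, 1.0            # left face, moving right, weight 1
        while True:
            x += mu * -math.log(random.random())   # distance to next collision
            if x >= thickness:
                tally += w                  # transmitted: score the weight
                break
            if x < 0.0:
                break                       # leaked out the left face
            if implicit_capture:
                w *= c                      # absorb a fraction of the weight
                if w < 0.01:                # Russian roulette on small weights
                    if random.random() < 0.1:
                        w /= 0.1            # survivor keeps unbiased weight
                    else:
                        break
            elif random.random() >= c:
                break                       # analog absorption kills history
            mu = random.uniform(-1.0, 1.0)  # isotropic scatter
    return tally / n_hist

analog = transmission(40000, False, 11)
implicit = transmission(40000, True, 12)
print(analog, implicit)    # the two unbiased estimates should agree
```

Weight windows, as generated automatically in the hybrid method above, go one step further by splitting or rouletting particles so their weights stay inside position-dependent bounds derived from the deterministic flux estimate.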
CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC
NASA Astrophysics Data System (ADS)
Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin
2014-06-01
The Monte Carlo (MC) method has distinct advantages for simulating complicated nuclear systems and is envisioned as a routine method for nuclear design and analysis in the future. High fidelity simulation with the MC method coupled with multi-physical phenomenon simulation has a significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges to current MC methods and codes prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for integrated simulation of nuclear systems developed by the FDS Team, China, making use of hybrid MC-deterministic methods and advanced computer technologies. The design aim, architecture and main methodology of SuperMC are presented in this paper. SuperMC2.1, the latest version for neutron, photon and coupled neutron and photon transport calculation, has been developed and validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still evolving toward a general and routine tool for nuclear systems.
Hybrid Monte Carlo/Molecular Dynamics Simulation of a Refractory Metal High Entropy Alloy
NASA Astrophysics Data System (ADS)
Widom, Michael; Huhn, W. P.; Maiti, S.; Steurer, W.
2014-01-01
The high entropy alloy containing refractory metals Mo-Nb-Ta-W has a body-centered cubic structure, which is not surprising given the complete mutual solubility in BCC solid solutions of all pairs of the constituent elements. However, first principles total energy calculations for the binaries reveal a set of distinct energy minimizing structures implying the likelihood of chemically ordered low-temperature phases. We apply a hybrid Monte Carlo and molecular dynamics method to evaluate the temperature-dependent chemical order. Monte Carlo species swaps allow for equilibration of the structure that cannot be achieved by conventional molecular dynamics. At 300 K (27 °C), a cesium-chloride ordering emerges between mixed (Nb,Ta) sites and mixed (Mo,W) sites. This order is lost at elevated temperatures.
Hybrid method for modeling epitaxial growth: Kinetic Monte Carlo plus molecular dynamics
NASA Astrophysics Data System (ADS)
Zoontjens, P.; Schulze, T. P.; Hendy, S. C.
2007-12-01
We propose a concurrently coupled hybrid molecular dynamics (MD) and kinetic Monte Carlo (KMC) algorithm to simulate the motion of grain boundaries between fcc and hcp islands during epitaxial growth on an fcc (111) surface. The method combines MD and KMC in an adaptive spatial domain decomposition, so that near the grain boundary atoms are treated using MD, but away from the boundary atoms are simulated by KMC. The method allows the grain boundary to interact with structures that form on spatial scales significantly larger than that of the MD domain but with a negligible increase in computational cost.
NASA Astrophysics Data System (ADS)
Townson, Reid W.; Zavgorodni, Sergei
2014-12-01
In GPU-based Monte Carlo simulations for radiotherapy dose calculation, source modelling from a phase-space source can be an efficiency bottleneck. Previously, this has been addressed using phase-space-let (PSL) sources, which provided significant efficiency enhancement. We propose that additional speed-up can be achieved through the use of a hybrid primary photon point source model combined with a secondary PSL source. A novel phase-space derived and histogram-based implementation of this model has been integrated into gDPM v3.0. Additionally, a simple method for approximately deriving target photon source characteristics from a phase-space that does not contain inheritable particle history variables (LATCH) has been demonstrated to succeed in selecting over 99% of the true target photons with only ~0.3% contamination (for a Varian 21EX 18 MV machine). The hybrid source model was tested using an array of open fields for various Varian 21EX and TrueBeam energies, and all cases achieved greater than 97% chi-test agreement (the mean was 99%) above the 2% isodose with 1%/1 mm criteria. The root mean square deviations (RMSDs) were less than 1%, with a mean of 0.5%, and the source generation time was 4-5 times faster. A seven-field intensity modulated radiation therapy patient treatment achieved 95% chi-test agreement above the 10% isodose with 1%/1 mm criteria, 99.8% for 2%/2 mm, a RMSD of 0.8%, and a source generation speed-up factor of 2.5. Presented as part of the International Workshop on Monte Carlo Techniques in Medical Physics
Streamlining resummed QCD calculations using Monte Carlo integration
David Farhi; Ilya Feige; Marat Freytsis; Matthew D. Schwartz
2015-07-22
Some of the most arduous and error-prone aspects of precision resummed calculations are related to the partonic hard process, having nothing to do with the resummation. In particular, interfacing to parton-distribution functions, combining various channels, and performing the phase space integration can be limiting factors in completing calculations. Conveniently, however, most of these tasks are already automated in many Monte Carlo programs, such as MadGraph, Alpgen or Sherpa. In this paper, we show how such programs can be used to produce distributions of partonic kinematics with associated color structures representing the hard factor in a resummed distribution. These distributions can then be used to weight convolutions of jet, soft and beam functions producing a complete resummed calculation. In fact, only around 1000 unweighted events are necessary to produce precise distributions. A number of examples and checks are provided, including $e^+e^-$ two- and four-jet event shapes, $n$-jettiness and jet-mass related observables at hadron colliders. Attached code can be used to modify MadGraph to export the relevant leading-order hard functions and color structures for arbitrary processes.
Using hybrid implicit Monte Carlo diffusion to simulate gray radiation hydrodynamics
NASA Astrophysics Data System (ADS)
Cleveland, Mathew A.; Gentile, Nick
2015-06-01
This work describes how to couple a hybrid Implicit Monte Carlo Diffusion (HIMCD) method with a Lagrangian hydrodynamics code to evaluate the coupled radiation hydrodynamics equations. This HIMCD method dynamically applies Implicit Monte Carlo Diffusion (IMD) [1] to regions of a problem that are opaque and diffusive while applying standard Implicit Monte Carlo (IMC) [2] to regions where the diffusion approximation is invalid. We show that this method significantly improves the computational efficiency as compared to a standard IMC/Hydrodynamics solver, when optically thick diffusive material is present, while maintaining accuracy. Two test cases are used to demonstrate the accuracy and performance of HIMCD as compared to IMC and IMD. The first is the Lowrie semi-analytic diffusive shock [3]. The second is a simple test case where the source radiation streams through optically thin material and heats a thick diffusive region of material causing it to rapidly expand. We found that HIMCD proves to be accurate, robust, and computationally efficient for these test problems.
Optofluidic hybrid platform with integrated solid core waveguides
NASA Astrophysics Data System (ADS)
Testa, G.; Persichetti, G.; Sarro, P. M.; Bernini, R.
2014-03-01
An optofluidic hybrid platform based on hybrid liquid core ARROW waveguides has been fabricated and tested. The solid core hybrid ARROW was integrated in a self-aligned optical configuration with the ARROW optofluidic channel for improved collection efficiency. The platform was fabricated using a modular approach. The microfluidic system was realized entirely in PDMS using a layered structure, while the optical part was realized by developing a hybrid silicon/PDMS solution. The performance of the system has been tested by carrying out fluorescence measurements on Cy5 water solutions, obtaining an LOD of 2.5 nM.
Chen, Yunjie; Roux, Benoît
2014-09-21
Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced. 
Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct equilibrium probability distribution with this prescription. PMID:25240345
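The role of the momentum-reversal prescription can be illustrated with a toy generalized hybrid MD/MC chain for a standard normal target: partial momentum refresh, one leapfrog step as the deterministic "dynamics" proposal, a Metropolis test, and momentum reversal on rejection. This is generic textbook GHMC with the one-end reversal, not the neMD-MC scheme or the symmetric two-ends prescription of the paper; all parameters are illustrative.

```python
import math, random

def ghmc_gaussian(n_steps=20000, eps=0.3, phi=0.5, seed=5):
    """Toy generalized hybrid MD/MC chain for a standard normal target
    (U(q) = q^2/2): partial momentum refresh, a one-step leapfrog
    proposal, a Metropolis test, and momentum reversal on rejection."""
    random.seed(seed)
    q, p, out = 0.0, random.gauss(0.0, 1.0), []
    for _ in range(n_steps):
        # Partial refresh keeps some memory of the previous momentum
        # while preserving its N(0, 1) marginal distribution.
        p = phi * p + math.sqrt(1.0 - phi * phi) * random.gauss(0.0, 1.0)
        # One leapfrog step.
        p_half = p - 0.5 * eps * q
        q_new = q + eps * p_half
        p_new = p_half - 0.5 * eps * q_new
        h_old = 0.5 * (q * q + p * p)
        h_new = 0.5 * (q_new * q_new + p_new * p_new)
        if random.random() < math.exp(h_old - h_new):
            q, p = q_new, p_new            # accept the proposal
        else:
            p = -p                         # reject: reverse the momentum
        out.append(q)
    return out

xs = ghmc_gaussian()
print(sum(xs) / len(xs), sum(x * x for x in xs) / len(xs))
```

The momentum flip on rejection is what makes the chain satisfy detailed balance here; the paper's point is that in more complex non-equilibrium settings the placement of such reversals strongly affects how quickly the chain explores configuration space.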
Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling
Peplow, Douglas E.; Miller, Thomas Martin; Patton, Bruce W; Wagner, John C
2013-01-01
The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
Stanford University
2.1 A brief history. Monte Carlo methods originated at the Los Alamos National Laboratory in the early years after World War II, on the first electronic computers, and have since proved useful in computer graphics. Good references on Monte Carlo methods include Kalos & Whitlock [1986, 1987] and Kuipers & Niederreiter [1974].
Monte Carlo integration to optimize geometry in gravitational experiments
Winkler, L.I.; Goldblum, C.E.
1992-07-01
The torsion balance has been the experimental apparatus of choice for centuries, both in precision measurements of the Newtonian gravitational constant and in searches for weak anomalous interactions outside of gravity. If the form of the interaction is modeled, it is often possible to optimize the interacting bodies so that the apparatus has the greatest sensitivity to the interaction under study. Other researchers have applied this strategy in the case of the gravitational interaction between cylinders, and between a cylinder and sphere. Whereas their work focused on developing an analytical expression for the force between the masses, we present here a numerical method, Monte Carlo integration, which is general enough to aid in the design of bodies interacting under arbitrary potentials and with any desired geometric shape (as long as an accurate absolute value of the force is not needed). This numerical method is used to compute the gravitational torsion constant produced between an external hollow cylinder and sphere, and demonstrates the behavior studied previously through analysis. However, the main purpose for which we have used this numerical technique is in the design of interacting bodies used in a torsion-pendulum search for interactions that depend on net intrinsic spin. We demonstrate how the method may be used to determine the optimum aspect ratio (l/r) of the polarized test masses, as well as the most sensitive orientation of the masses. Two different interactions are considered: the dipole-dipole interaction between two polarized bodies, and the monopole-dipole interaction between a polarized and unpolarized body. In the case of the monopole-dipole interaction, we also show how the numerical method can indicate which orientation between test bodies is most susceptible to a false signal caused by gravity.
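The core idea, averaging the pairwise interaction over points sampled uniformly in each body, can be sketched for a case with a known answer: two uniform spheres, which by the shell theorem attract exactly like point masses. This is an illustrative reconstruction, not the authors' code; units are chosen so G = m1 = m2 = 1.

```python
import numpy as np

def sample_in_sphere(rng, n, center, radius):
    """Uniform points inside a sphere via rejection sampling in the bounding cube."""
    pts = np.empty((0, 3))
    while len(pts) < n:
        cand = rng.uniform(-radius, radius, size=(2 * n, 3))
        cand = cand[np.sum(cand**2, axis=1) <= radius**2]
        pts = np.vstack([pts, cand])
    return pts[:n] + center

def mc_gravity(rng, n=200_000, d=5.0):
    """MC estimate of the axial attraction between two unit spheres d apart."""
    p1 = sample_in_sphere(rng, n, np.array([0.0, 0.0, 0.0]), 1.0)
    p2 = sample_in_sphere(rng, n, np.array([d, 0.0, 0.0]), 1.0)
    r = p2 - p1
    dist = np.linalg.norm(r, axis=1)
    # Axial force component per unit mass pair, averaged over sampled pairs.
    return np.mean(r[:, 0] / dist**3)

rng = np.random.default_rng(0)
est = mc_gravity(rng)   # shell theorem gives the exact value 1 / 5**2 = 0.04
```

Replacing the spheres by cylinders, or the 1/r² kernel by a spin-dependent potential, changes only the sampler and the integrand, which is the generality the abstract emphasizes.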
Integration of the Lippmann-Schwinger equation with the Monte Carlo method
Salomon, M.
1983-12-01
A Monte Carlo method to integrate the Lippmann-Schwinger equation for elastic scattering is presented. Advantages and limitations of this algorithm are compared with traditional methods of computation.
Quasi-Monte Carlo integration over ? for migration ? inversion
Maarten V. de Hoop; Carl Spencer
1996-01-01
In this paper, we analyse the discretization of the generalized Radon transform/amplitude versus scattering angles (GRT/AVA) migration - inversion formula by means of quasi-Monte Carlo methods. These methods are efficient, in the sense that they require sparsely sampled measurements only, and accurate, which we have shown by theory and examples. Another feature of Monte Carlo methods is their ability to
Wagner, John C [ORNL; Mosher, Scott W [ORNL; Evans, Thomas M [ORNL; Peplow, Douglas E. [ORNL; Turner, John A [ORNL
2011-01-01
This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. 
The hybrid method development is based on an extension of the FW-CADIS method, which attempts to achieve uniform statistical uncertainty throughout a designated problem space. The MC DD development is being implemented in conjunction with the Denovo deterministic radiation transport package to have direct access to the 3-D, massively parallel discrete-ordinates solver (to support the hybrid method) and the associated parallel routines and structure. This paper describes the hybrid method, its implementation, and initial testing results for a realistic 2-D quarter core pressurized-water reactor model and also describes the MC DD algorithm and its implementation.
Lazy skip-lists: An algorithm for fast hybridization-expansion quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Sémon, P.; Yee, Chuck-Hou; Haule, Kristjan; Tremblay, A.-M. S.
2014-08-01
The solution of a generalized impurity model lies at the heart of electronic structure calculations with dynamical mean field theory. In the strongly correlated regime, the method of choice for solving the impurity model is the hybridization-expansion continuous-time quantum Monte Carlo (CT-HYB). Enhancements to the CT-HYB algorithm are critical for bringing new physical regimes within reach of current computational power. Taking advantage of the fact that the bottleneck in the algorithm is a product of hundreds of matrices, we present optimizations based on the introduction and combination of two concepts of more general applicability: (a) skip lists and (b) fast rejection of proposed configurations based on matrix bounds. Considering two very different test cases with d electrons, we find speedups of ˜25 up to ˜500 compared to the direct evaluation of the matrix product. Even larger speedups are likely with f electron systems and with clusters of correlated atoms.
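The payoff of caching partial matrix products, the idea the skip lists implement, can be shown in a stripped-down form. The sketch below uses a simple prefix/suffix cache rather than a full skip list (a two-level stand-in for the paper's bookkeeping): replacing one factor in a chain of n matrices then costs two multiplications instead of n.

```python
import numpy as np

class ProductCache:
    """Cache prefix and suffix products of a chain M[0] @ M[1] @ ... @ M[n-1]."""

    def __init__(self, mats):
        self.mats = list(mats)
        n, d = len(self.mats), self.mats[0].shape[0]
        eye = np.eye(d)
        self.prefix = [eye]            # prefix[i] = M[0] @ ... @ M[i-1]
        for m in self.mats:
            self.prefix.append(self.prefix[-1] @ m)
        self.suffix = [eye]            # suffix[k] = M[n-k] @ ... @ M[n-1]
        for m in reversed(self.mats):
            self.suffix.append(m @ self.suffix[-1])

    def product(self):
        return self.prefix[-1]

    def replace(self, i, new_m):
        """Product after replacing factor i, using only two multiplications."""
        n = len(self.mats)
        return self.prefix[i] @ new_m @ self.suffix[n - 1 - i]

rng = np.random.default_rng(2)
mats = [rng.standard_normal((3, 3)) for _ in range(6)]
pc = ProductCache(mats)
```

The fast-rejection idea of the paper adds a second layer on top of this: cheap bounds on the modified product decide whether a proposed update is worth evaluating exactly at all.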
Dynamical overlap fermion simulations with a preconditioned Hybrid Monte Carlo force
Jan Volkholz; Wolfgang Bietenholz; Stanislav Shcheredin
2006-09-29
We present simulation results for the 2-flavour Schwinger model with dynamical Ginsparg-Wilson fermions. Our Dirac operator is constructed by inserting an approximately chiral hypercube operator into the overlap formula, which yields the overlap hypercube operator. Due to the similarity with the hypercubic kernel, a low polynomial of this kernel can be used as a numerically cheap way to evaluate the fermionic part of the Hybrid Monte Carlo force. We verify algorithmic requirements like area conservation and reversibility, and we discuss the viability of this approach in view of the acceptance rate. Next we confirm a high level of locality for this formulation. Finally we evaluate the chiral condensate at light fermion masses, based on the density of low lying Dirac eigenvalues in different topological sectors. The results represent one of the first measurements with dynamical overlap fermions, and they agree very well with analytic predictions at weak coupling.
Hybrid Quantum-Classical Monte-Carlo Study of a Molecule-Based Magnet
Henelius, Jarl P; Fishman, Randy Scott
2008-01-01
Using a Monte Carlo (MC) method, we study an effective model for the Fe(II)Fe(III) bimetallic oxalates. Within a hybrid quantum-classical MC algorithm, the Heisenberg S = 2 and S′ = 5/2 spins on the Fe(II) and Fe(III) sites are updated using a quantum MC loop while the Ising-like orbital angular momenta on the Fe(II) sites are updated using a single-spin classical MC flip. The effective field acting on the orbital angular momenta depends on the quantum state of the system. We find that the mean-field phase diagram for the model is surprisingly robust with respect to fluctuations. In particular, the region displaying two compensation points shifts and shrinks but remains finite.
Toward large-scale Hybrid Monte Carlo simulations of the Hubbard model on graphics processing units
NASA Astrophysics Data System (ADS)
Wendt, Kyle A.; Drut, Joaquín E.; Lähde, Timo A.
2011-08-01
One of the most efficient non-perturbative methods for the calculation of thermal properties of quantum systems is the Hybrid Monte Carlo algorithm, as evidenced by its use in large-scale lattice quantum chromodynamics calculations. The performance of this algorithm is determined by the speed at which the fermion operator is applied to a given vector, as it is the central operation in the preconditioned conjugate gradient iteration. We study a simple implementation of these operations for the fermion matrix of the Hubbard model in d+1 spacetime dimensions, and report a performance comparison between a 2.66 GHz Intel Xeon E5430 CPU and an NVIDIA Tesla C1060 GPU using double-precision arithmetic. We find speedup factors ranging between 30 and 350 for d=1, and in excess of 40 for d=3. We argue that such speedups are of considerable impact for large-scale simulational studies of quantum many-body systems.
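The central operation the abstract describes, applying a sparse fermion matrix to a vector without ever storing the matrix, can be sketched for a 1D nearest-neighbour hopping form. This is an illustrative operator with periodic boundaries and made-up parameters, not the exact Hubbard fermion matrix of the paper; the point is that shifts replace a dense matrix-vector product.

```python
import numpy as np

def apply_fermion_operator(x, t=1.0, mu=0.5):
    """Apply M = (2 + mu) I - t (S_+ + S_-) to a vector, where S_± are
    periodic shift operators, using np.roll instead of a stored matrix."""
    return (2.0 + mu) * x - t * (np.roll(x, 1) + np.roll(x, -1))

x = np.random.default_rng(9).standard_normal(8)
y = apply_fermion_operator(x)
```

On a GPU the same stencil structure is what makes the operation bandwidth-bound and highly parallel, which is the source of the reported speedups.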
A Deterministic-Monte Carlo Hybrid Method for Time-Dependent Neutron Transport Problems
Justin Pounders; Farzad Rahnema
2001-10-01
A new deterministic-Monte Carlo hybrid solution technique is derived for the time-dependent transport equation. This new approach is based on dividing the time domain into a number of coarse intervals and expanding the transport solution in a series of polynomials within each interval. The solutions within each interval can be represented in terms of arbitrary source terms by using precomputed response functions. In the current work, the time-dependent response function computations are performed using the Monte Carlo method, while the global time-step march is performed deterministically. This work extends previous work by coupling the time-dependent expansions to space- and angle-dependent expansions to fully characterize the 1D transport response/solution. More generally, this approach represents an incremental extension of the steady-state coarse-mesh transport method that is based on global-local decompositions of large neutron transport problems. A homogeneous slab problem is discussed as an example of the new developments.
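The precompute-then-march structure can be illustrated on a deliberately trivial stand-in: a scalar decay equation dx/dt = -lam*x, whose per-interval response exp(-lam*dt) plays the role that the Monte Carlo-computed response functions play in the paper. The response is computed once "offline" and the global time march then uses only that stored response.

```python
import numpy as np

lam, dt, nsteps = 0.7, 0.25, 16
response = np.exp(-lam * dt)   # per-interval response, precomputed once
                               # (in the paper this would come from an MC solve)

x = 1.0
for _ in range(nsteps):        # deterministic global time-step march
    x *= response

exact = np.exp(-lam * dt * nsteps)   # closed-form solution for comparison
```

For transport, the scalar response becomes a response matrix mapping incoming sources and initial conditions of a coarse interval to its outgoing state, but the marching loop keeps this same shape.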
Ibrahim, Ahmad M; Wilson, P.; Sawan, M.; Mosher, Scott W; Peplow, Douglas E.; Grove, Robert E
2013-01-01
Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class super computer.
Townson, Reid W; Zavgorodni, Sergei
2014-12-21
In GPU-based Monte Carlo simulations for radiotherapy dose calculation, source modelling from a phase-space source can be an efficiency bottleneck. Previously, this has been addressed using phase-space-let (PSL) sources, which provided significant efficiency enhancement. We propose that additional speed-up can be achieved through the use of a hybrid primary photon point source model combined with a secondary PSL source. A novel phase-space derived and histogram-based implementation of this model has been integrated into gDPM v3.0. Additionally, a simple method for approximately deriving target photon source characteristics from a phase-space that does not contain inheritable particle history variables (LATCH) has been demonstrated to succeed in selecting over 99% of the true target photons with only ~0.3% contamination (for a Varian 21EX 18 MV machine). The hybrid source model was tested using an array of open fields for various Varian 21EX and TrueBeam energies, and all cases achieved greater than 97% chi-test agreement (the mean was 99%) above the 2% isodose with 1%/1 mm criteria. The root mean square deviations (RMSDs) were less than 1%, with a mean of 0.5%, and the source generation time was 4-5 times faster. A seven-field intensity modulated radiation therapy patient treatment achieved 95% chi-test agreement above the 10% isodose with 1%/1 mm criteria, 99.8% for 2%/2 mm, a RMSD of 0.8%, and a source generation speed-up factor of 2.5. PMID:25426972
Exploiting symmetries for exponential error reduction in path integral Monte Carlo
Michele Della Morte; Leonardo Giusti
2009-01-01
The path integral of a quantum system with an exact symmetry can be written as a sum of functional integrals each giving the contribution from quantum states with definite symmetry properties. We propose a strategy to compute each of them, normalized to the one with vacuum quantum numbers, by a Monte Carlo procedure whose cost increases power-like with the time
Real-time hybrid simulation using the convolution integral method
NASA Astrophysics Data System (ADS)
Kim, Sung Jig; Christenson, Richard E.; Wojtkiewicz, Steven F.; Johnson, Erik A.
2011-02-01
This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results.
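The convolution integral at the heart of the CI method, x(t) = ∫ h(t-τ) f(τ) dτ, discretizes into a sum that can be evaluated very cheaply. The sketch below is a generic illustration, not the paper's implementation: a damped single-degree-of-freedom impulse response (made-up parameters) is convolved with a force history, once via np.convolve and once by the direct sum, to show they agree.

```python
import numpy as np

def sdof_impulse_response(t, wn=2.0 * np.pi, zeta=0.05):
    """Unit-impulse displacement response of a damped SDOF oscillator (unit mass)."""
    wd = wn * np.sqrt(1.0 - zeta**2)
    return np.exp(-zeta * wn * t) * np.sin(wd * t) / wd

dt = 0.002
t = np.arange(0.0, 4.0, dt)
h = sdof_impulse_response(t)
f = np.sin(3.0 * t)                     # external force history

# Discretized convolution integral x(t) = sum_j h(t - tau_j) f(tau_j) dt.
x_fast = np.convolve(h, f)[: len(t)] * dt

# Direct evaluation of the same sum, as a check.
x_direct = np.array([np.sum(h[k::-1] * f[: k + 1]) for k in range(len(t))]) * dt
```

Because each new output sample only appends one term per stored history value, the numerical substructure can be evaluated within a fixed real-time budget regardless of model complexity, which is the property the paper exploits.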
Intelligent control: analytical integration of hybrid system
Mumin Song; Tzyh-Jong Tarn; Ning Xi
1998-01-01
This paper presents a novel approach for solving the grand challenging problem in intelligent control, i.e. the analytical and robust integration of low-level system sensing and simple control with high-level system behavior and perception. The proposed max-plus algebra model combined with event-based planning and control provides an advanced mechanism to efficiently integrate task scheduling, sensing, planning and real-time execution so
High-order path-integral Monte Carlo methods for solving quantum dot problems
NASA Astrophysics Data System (ADS)
Chin, Siu A.
2015-03-01
The conventional second-order path-integral Monte Carlo method is plagued with the sign problem in solving many-fermion systems. This is due to the large number of antisymmetric free-fermion propagators that are needed to extract the ground state wave function at large imaginary time. In this work we show that optimized fourth-order path-integral Monte Carlo methods, which use no more than five free-fermion propagators, can yield accurate quantum dot energies for up to 20 polarized electrons with the use of the Hamiltonian energy estimator.
Nasif, Hesham; Neyama, Atsushi
2003-02-26
This paper presents results of an uncertainty and sensitivity analysis for the performance of the different barriers of high level radioactive waste repositories. SUA is a tool to perform the uncertainty and sensitivity analysis on the output of the Wavelet Integrated Repository System (WIRS) model, which is developed to solve a system of nonlinear partial differential equations arising from the model formulation of radionuclide transport through the repository. SUA performs sensitivity analysis (SA) and uncertainty analysis (UA) on a sample output from Monte Carlo simulation. The sample is generated by WIRS and contains the output values of the maximum release rate in the form of time series and the values of the input variables for a set of different simulations (runs), which are realized by varying the model input parameters. The Monte Carlo sample is generated with SUA as a pure random sample or using the Latin Hypercube sampling technique. Tchebycheff and Kolmogorov confidence bounds are computed on the maximum release rate for UA, and effective non-parametric statistics are used to rank the influence of the model input parameters for SA. Based on the results, we point out parameters that have primary influences on the performance of the engineered barrier system of a repository. The parameters found to be key contributors to the release rate are the selenium and cesium distribution coefficients in both the geosphere and the major water conducting fault (MWCF), and the diffusion depth and water flow rate in the excavation-disturbed zone (EDZ).
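Latin Hypercube sampling, one of the two sample generators mentioned, stratifies each input dimension into n equal-probability bins and places exactly one sample in each bin per dimension. A minimal sketch (generic, not the SUA implementation):

```python
import numpy as np

def latin_hypercube(n, dims, rng):
    """Latin Hypercube sample: n points in [0, 1)^dims with exactly one point
    in each of the n equal-probability strata of every dimension."""
    u = rng.random((n, dims))                            # jitter within strata
    strata = np.array([rng.permutation(n) for _ in range(dims)]).T
    return (strata + u) / n

rng = np.random.default_rng(3)
sample = latin_hypercube(10, 4, rng)
```

Each column of `sample` hits every decile exactly once, which is why LHS typically needs far fewer runs than pure random sampling to cover the input space; mapping the unit-cube points through inverse CDFs then yields samples of the physical parameters.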
Gregory W. Faris; George Alexandrakis; David R. Busch; Michael S. Patterson
2001-01-01
We examine the ability to recover the optical properties of a two-layer turbid medium using multi-distance frequency domain reflectance measurements and a hybrid Monte Carlo-- diffusion model. Frequency domain measurements are performed on two-layer liquid tissue phantoms simulating skin on muscle and skin on fat. Particular care to systematic effects in the photomultiplier is required for the measurements at short
Quirk, Thomas, J., IV (University of New Mexico)
2004-08-01
The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.
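As context for the free-electron baseline being corrected here, sampling scattering angles from the Klein-Nishina distribution itself is straightforward by rejection, since the unnormalized density in cos θ is bounded by its forward value 2 for any photon energy. This sketch shows plain rejection sampling (production codes use more efficient schemes such as Kahn's method, and it omits the binding and Doppler corrections the abstract is about).

```python
import numpy as np

def klein_nishina_pdf(cos_t, k):
    """Unnormalized Klein-Nishina density in cos(theta); k = E / (m_e c^2)."""
    r = 1.0 / (1.0 + k * (1.0 - cos_t))   # E'/E, the Compton energy ratio
    return r * r * (1.0 / r + r - (1.0 - cos_t**2))

def sample_cos_theta(k, n, rng):
    """Rejection-sample scattering angles; the density never exceeds its
    value 2.0 at cos(theta) = 1 for any k >= 0, giving a valid envelope."""
    out = []
    while len(out) < n:
        c = rng.uniform(-1.0, 1.0, size=4 * n)
        keep = rng.uniform(0.0, 2.0, size=4 * n) < klein_nishina_pdf(c, k)
        out.extend(c[keep].tolist())
    return np.array(out[:n])

rng = np.random.default_rng(4)
k = 1.0                                    # photon energy of one electron mass, ~511 keV
cos_samples = sample_cos_theta(k, 100_000, rng)
```

The scattered photon energy then follows from the Compton relation E' = E / (1 + k(1 - cos θ)), and two-body kinematics fixes the electron.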
Abdel-Khalik, Hany S.; Gardner, Robin; Mattingly, John; Sood, Avneet
2014-05-20
The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches, taking place over the past few decades, has primarily focused on shielding and detection applications where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed in the order of 10-10 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
NASA Astrophysics Data System (ADS)
Bousige, Colin; Boţan, Alexandru; Ulm, Franz-Josef; Pellenq, Roland J.-M.; Coasne, Benoît
2015-03-01
We report an efficient atom-scale reconstruction method that consists of combining the Hybrid Reverse Monte Carlo algorithm (HRMC) with Molecular Dynamics (MD) in the framework of a simulated annealing technique. In the spirit of the experimentally constrained molecular relaxation technique [Biswas et al., Phys. Rev. B 69, 195207 (2004)], this modified procedure offers a refined strategy in the field of reconstruction techniques, with special interest for heterogeneous and disordered solids such as amorphous porous materials. While the HRMC method generates physical structures, thanks to the use of energy penalties, the combination with MD makes the method at least one order of magnitude faster than HRMC simulations to obtain structures of similar quality. Furthermore, in order to ensure the transferability of this technique, we provide rational arguments to select the various input parameters, such as the relative weight of the energy penalty with respect to the structure optimization. By applying the method to disordered porous carbons, we show that adsorption properties provide data to test the global texture of the reconstructed sample but are only weakly sensitive to the presence of defects. In contrast, the vibrational properties such as the phonon density of states are found to be very sensitive to the local structure of the sample.
New model for dwelling dose calculation using Monte Carlo integration.
Allam, K A
2009-02-01
A new methodology and computer model using Monte Carlo simulation for indoor dose calculation are developed. A room model of six rectangular slabs of finite thickness with a door or window in each slab was used. A point-kernel photon transport model with self-absorption correction was applied for dose calculations. New software was designed and programmed using the Pascal programming language and evaluated for a standard room design. The calculated dose due to natural radionuclides in the concrete walls differs from the average model results by 0.21% for (238)U, 12.3% for (232)Th and 13.9% for (40)K, and the variability of the specific dose rate with changing position, density, and composition of the walls was studied. The new model has more flexibility for real dose calculation of any room structure and tailings, which is not given in the published models. PMID:19287012
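The point-kernel integral over a wall slab lends itself directly to Monte Carlo integration: sample source points uniformly in the slab, evaluate the attenuated 1/(4πr²) kernel at the detector, and scale by the slab volume. This is a generic sketch, not the paper's Pascal model: the attenuation coefficient, slab dimensions and units (cm) are made up, self-absorption correction and buildup are omitted, and the kernel is left unnormalized by the source strength.

```python
import numpy as np

def point_kernel(r, mu=0.02):
    """Uncollided point-kernel: 1/(4 pi r^2) geometry times exp(-mu r)
    attenuation (illustrative mu in 1/cm; buildup factor omitted)."""
    return np.exp(-mu * r) / (4.0 * np.pi * r**2)

def slab_dose_mc(detector, slab_lo, slab_hi, n, rng):
    """MC estimate of the kernel integrated over a uniform source slab:
    V * mean(kernel) over points sampled uniformly in the slab box."""
    lo, hi = np.asarray(slab_lo, float), np.asarray(slab_hi, float)
    pts = rng.uniform(lo, hi, size=(n, 3))
    r = np.linalg.norm(pts - detector, axis=1)
    return np.prod(hi - lo) * np.mean(point_kernel(r))

rng = np.random.default_rng(5)
det = np.array([0.0, 0.0, 0.0])
# A 4 m x 3 m wall slab, 20 cm thick, 2 m from the detector (units: cm).
est = slab_dose_mc(det, (200.0, -200.0, -150.0), (220.0, 200.0, 150.0), 200_000, rng)
```

Summing such estimates over the six slabs, each with its own density, composition and nuclide concentrations, gives the room-level flexibility the abstract describes.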
Assignment 1: 3-D Monte Carlo Integration Due: Monday 1/28/11 (before class); Mult Fac = 1.0
Whaley, R. Clint
In this assignment, you will use OpenMP to parallelize 3-D Monte Carlo integration. Your program should take certain runtime flags; any other runtime flags will cause a usage message to be printed, and the program will exit.
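The serial core of such an assignment fits in a few lines; the per-sample loop is embarrassingly parallel, which is exactly what the OpenMP version would distribute. A Python sketch (not the assignment's C/OpenMP skeleton) with an integrand chosen so the exact answer is known:

```python
import numpy as np

def mc_integrate_3d(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over the unit cube
    [0,1]^3, with a one-standard-error estimate. The sample loop is
    embarrassingly parallel (vectorized here; OpenMP 'parallel for' in C)."""
    pts = rng.random((n, 3))
    vals = f(pts[:, 0], pts[:, 1], pts[:, 2])
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n)

rng = np.random.default_rng(6)
est, err = mc_integrate_3d(lambda x, y, z: x**2 + y**2 + z**2, 400_000, rng)
# The exact integral is 3 * (1/3) = 1.
```

In an OpenMP version, the accumulation of the sum and sum of squares would be done with a reduction clause, and each thread would need its own random-number stream.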
Manousakis, Efstratios
Submonolayer molecular hydrogen on graphite: A path-integral Monte Carlo study. Kwangsik Nho. We use path-integral Monte Carlo (PIMC) to simulate molecular hydrogen on graphite at submonolayer coverage with a corrugated H2-graphite interaction, i.e., we include the effects of substrate corrugations. In this case we carry out our
Path integral Monte Carlo applications to quantum fluids in confined geometries
Manousakis, Efstratios
2001
Path integral Monte Carlo is an exact simulation method for calculating thermodynamic properties of bosonic systems. Properties such as superfluidity and Bose condensation are directly related to quantum statistics. Applications to confined geometries, such as helium and hydrogen on surfaces and in droplets, are reviewed. © 2001 American Institute of Physics.
Akifumi Yafune; Masato Takebe; Hiroyasu Ogata
1998-01-01
This paper describes a use of Monte Carlo integration for population pharmacokinetics with multivariate population distribution. In the proposed approach, a multivariate lognormal distribution is assumed for a population distribution of pharmacokinetic (PK) parameters. The maximum likelihood method is employed to estimate the population means, variances, and correlation coefficients of the multivariate lognormal distribution. Instead of a first-order Taylor series
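The Monte Carlo integration step here is the marginal likelihood: average the data likelihood over draws of the PK parameters from the population distribution. The sketch below uses a conjugate normal-normal model instead of the paper's multivariate lognormal, purely so the MC estimate can be checked against the closed-form marginal; the structure of the estimator is the same.

```python
import numpy as np

def mc_marginal_likelihood(y, mu, tau, sigma, n, rng):
    """Monte Carlo estimate of the marginal likelihood
    p(y) = integral N(y | theta, sigma^2) N(theta | mu, tau^2) dtheta,
    i.e. the data likelihood averaged over population draws of theta.
    (A normal-normal stand-in for the paper's lognormal population.)"""
    theta = rng.normal(mu, tau, size=n)
    lik = np.exp(-0.5 * ((y - theta) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return lik.mean()

rng = np.random.default_rng(7)
est = mc_marginal_likelihood(y=1.2, mu=1.0, tau=0.5, sigma=0.3, n=400_000, rng=rng)
# Conjugacy gives the exact marginal N(1.2 | 1.0, 0.5**2 + 0.3**2).
```

Maximum likelihood estimation of the population means, variances and correlations then amounts to maximizing the product of such per-subject marginals over the population parameters, avoiding the first-order Taylor linearization.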
Polymer waveguide based hybrid opto-electric integration technology
NASA Astrophysics Data System (ADS)
Mao, Jinbin; Deng, Lingling; Jiang, Xiyan; Ren, Rong; Zhai, Yumeng; Wang, Jin
2014-10-01
While monolithic integration, especially based on InP, appears to be quite an expensive solution for optical devices, hybrid integration solutions using cheaper material platforms are considered powerful competitors because of the high freedom of design, yield optimization and relative cost-efficiency. Among them, the polymer planar-lightwave circuit (PLC) technology is regarded as attractive, as polymer offers the potential of fairly simple, low-cost fabrication and low-cost packaging. In our work, polymer PLCs were fabricated using the standard reactive ion etching (RIE) technique, while other active and passive devices can be integrated on the polymer PLC platform. An exemplary polymer waveguide device was a 13-channel arrayed waveguide grating (AWG) chip, where the central channel cross-talk was below -30 dB and the polarization dependent frequency shift was mitigated by inserting a half wave plate. An optical 90° hybrid was also realized with one 2×4 multi-mode interferometer (MMI). The excess insertion losses are below 4 dB for the C-band, while the transmission imbalance is below 1.2 dB. When such an optical hybrid was integrated vertically with mesa-type photodiodes, the responsivity of the individual PD was around 0.06 A/W, while the 3 dB bandwidth reached 24-27 GHz, which is sufficient for 100 Gbit/s receivers. Another example of the hybrid integration was to couple the polymer waveguides to fiber by applying fiber grooves, whose typical loss value was 0.2 dB per facet over a broad spectral range from 1200-1600 nm.
HRMC_1.1: Hybrid Reverse Monte Carlo method with silicon and carbon potentials
NASA Astrophysics Data System (ADS)
Opletal, G.; Petersen, T. C.; O'Malley, B.; Snook, I. K.; McCulloch, D. G.; Yarovsky, I.
2011-02-01
The Hybrid Reverse Monte Carlo (HRMC) code models the atomic structure of materials via the use of a combination of constraints including experimental diffraction data and an empirical energy potential. This energy constraint is in the form of either the Environment Dependent Interatomic Potential (EDIP) for carbon and silicon, or the original and modified Stillinger-Weber potentials applicable to silicon. In this version, an update is made to correct an error in the EDIP carbon energy calculation routine. New version program summary. Program title: HRMC version 1.1 Catalogue identifier: AEAO_v1_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAO_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 36 991 No. of bytes in distributed program, including test data, etc.: 907 800 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: Any computer capable of running executables produced by the g77 Fortran compiler. Operating system: Unix, Windows RAM: Depends on the type of empirical potential used, the number of atoms, and which constraints are employed. Classification: 7.7 Catalogue identifier of previous version: AEAO_v1_0 Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 777 Does the new version supersede the previous version?: Yes Nature of problem: Atomic modelling using empirical potentials and experimental data. Solution method: Monte Carlo Reasons for new version: An error in a term associated with the calculation of energies using the EDIP carbon potential, which resulted in incorrect energies. Summary of revisions: Fix to correct brackets in the two-body part of the EDIP carbon potential routine.
Additional comments: The code is not standard FORTRAN 77 but includes some additional features, and therefore generates errors when compiled using the Nag95 compiler. It does compile successfully with the GNU g77 compiler (http://www.gnu.org/software/fortran/fortran.html).
Running time: Depends on the type of empirical potential used, the number of atoms, and which constraints are employed. The test included in the distribution took 37 minutes on a DEC Alpha PC.
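As a rough sketch of the HRMC idea (not the distributed FORTRAN 77 code), the combined Metropolis test below mixes the empirical-potential energy change with the change in the chi-squared misfit to the experimental diffraction data. The relative weighting and the factor of 1/2 are assumptions for illustration; conventions differ between implementations.

```python
import math
import random

def hrmc_accept(delta_E, delta_chi2, kT, weight=1.0, rng=random.random):
    """Combined Metropolis test in the spirit of Hybrid Reverse Monte Carlo.

    A trial atomic move is accepted with probability
    min(1, exp(-delta_E/kT - weight*delta_chi2/2)), mixing the change in
    the empirical-potential energy with the change in the chi-squared
    misfit to experimental diffraction data. The weighting convention is
    an assumption; actual codes parameterize this differently.
    """
    argument = -delta_E / kT - weight * delta_chi2 / 2.0
    if argument >= 0.0:
        return True               # downhill in both constraints: always accept
    return rng() < math.exp(argument)
```

A move that lowers both the energy and the misfit is always accepted; otherwise acceptance decays exponentially with the combined penalty.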
Integrated Hybrid System Architecture for Risk Analysis
NASA Technical Reports Server (NTRS)
Moynihan, Gary P.; Fonseca, Daniel J.; Ray, Paul S.
2010-01-01
A conceptual design of an expert-system computer program, intended for use as a project-management tool, has been announced, together with the development of a prototype of the program. The program integrates schedule and risk data for the purpose of determining the schedule implications of safety risks and, conversely, the effects of schedule changes on safety. The design has been delivered to a NASA client, and it is planned to disclose the design in a conference presentation.
Multiscale Modeling of Hybrid Structural Composites with Integrated Damping Features
NASA Astrophysics Data System (ADS)
Martone, Alfonso; Giordano, Michele
2008-08-01
The aim of this work is to propose a design approach for a multifunctional hybrid composite material that integrates high damping performance while retaining the required structural properties. The hybrid composite is a three-phase composite in which a viscoelastic material is added to a conventional structural long-fiber/polymer-matrix laminate. The design addresses the problem of integrating the viscoelastic material within the laminate architecture so as to exploit its maximum damping efficiency. A key aspect is the definition of a viscoelastic multiscale model proceeding from the constituents to the lamina, and further to the hybrid laminate properties. An analytical procedure has been developed that uses the strain energy method to evaluate the specific damping capacity at all dimensional scales, and classical lamination theory was extended to include transverse shear effects. The potential of the method has been tested against experimental data from the literature. Possible configurations of hybrid laminates have been simulated in which the viscoelastic material is added as laminae or distributed as long fibers within the structural laminate.
Graphene/Si CMOS Hybrid Hall Integrated Circuits
NASA Astrophysics Data System (ADS)
Huang, Le; Xu, Huilong; Zhang, Zhiyong; Chen, Chengying; Jiang, Jianhua; Ma, Xiaomeng; Chen, Bingyan; Li, Zishen; Zhong, Hua; Peng, Lian-Mao
2014-07-01
Graphene/silicon CMOS hybrid integrated circuits (ICs) promise powerful functionality by combining the ultra-high carrier mobility of graphene with the sophisticated functions of silicon CMOS ICs. However, it is difficult to integrate these two kinds of heterogeneous devices on a single chip. In this work a low-temperature process is developed for integrating graphene devices onto silicon CMOS ICs for the first time, and a high-performance graphene/CMOS hybrid Hall IC is demonstrated. Signal amplifying/processing ICs are manufactured via commercial 0.18 μm silicon CMOS technology, and graphene Hall elements (GHEs) are fabricated on top of the passivation layer of the CMOS chip via a low-temperature micro-fabrication process. The sensitivity of the GHE on the CMOS chip is further improved by integrating it with the CMOS amplifier on the Si chip. This work not only paves the way to fabricating graphene/Si CMOS Hall ICs with much higher performance than conventional Hall ICs, but also provides a general method for the scalable integration of graphene devices with silicon CMOS ICs via a low-temperature process.
Novel integration technique for silicon/III-V hybrid laser.
Dong, Po; Hu, Ting-Chen; Liow, Tsung-Yang; Chen, Young-Kai; Xie, Chongjin; Luo, Xianshu; Lo, Guo-Qiang; Kopf, Rose; Tate, Alaric
2014-11-01
Integrated semiconductor lasers on silicon are one of the most crucial devices to enable low-cost silicon photonic integrated circuits for high-bandwidth optic communications and interconnects. While optical amplifiers and lasers are typically realized in III-V waveguide structures, it is beneficial to have an integration approach which allows flexible and efficient coupling of light between III-V gain media and silicon waveguides. In this paper, we propose and demonstrate a novel fabrication technique and associated transition structure to realize integrated lasers without the constraints of other critical processing parameters such as the starting silicon layer thicknesses. This technique employs epitaxial growth of silicon in a pre-defined trench with taper structures. We fabricate and demonstrate a long-cavity hybrid laser with a narrow linewidth of 130 kHz and an output power of 1.5 mW using the proposed technique. PMID:25401832
SU-E-T-117: Dose to Organs Outside of CT Scan Range- Monte Carlo and Hybrid Phantom Approach
Pelletier, C; Jung, J; Lee, C; Kim, J; Lee, C
2014-06-01
Purpose: Epidemiological study of second cancer risk for cancer survivors often requires the dose to normal tissues located outside the anatomy covered by radiological imaging, which is usually limited to tumor and organs at risk. We have investigated the feasibility of using whole body computational human phantoms for estimating out-of-field organ doses for patients treated by Intensity Modulated Radiation Therapy (IMRT). Methods: Identical 7-field IMRT prostate plans were performed using X-ray Voxel Monte Carlo (XVMC), a radiotherapy-specific Monte Carlo transport code, on the computed tomography (CT) images of the torso of an adult male patient (175 cm height, 66 kg weight) and an adult male hybrid computational phantom with the equivalent body size. Dose to the liver, right lung, and left lung were calculated and compared. Results: Considerable differences are seen between the doses calculated by XVMC for the patient CT and the hybrid phantom. One major contributing factor is the treatment method, deep inspiration breath hold (DIBH), used for this patient. This leads to significant differences in the organ position relative to the treatment isocenter. The transverse distances from the treatment isocenter to the inferior border of the liver, left lung, and right lung are 19.5cm, 29.5cm, and 30.0cm, respectively for the patient CT, compared with 24.3cm, 36.6cm, and 39.1cm, respectively, for the hybrid phantom. When corrected for the distance, the mean doses calculated using the hybrid phantom are within 28% of those calculated using the patient CT. Conclusion: This study showed that mean dose to the organs located in the missing CT coverage can be reconstructed by using whole body computational human phantoms within reasonable dosimetric uncertainty, however appropriate corrections may be necessary if the patient is treated with a technique that will significantly deform the size or location of the organs relative to the hybrid phantom.
Kaoru Aoki; Shigetaka Kuroda; Shigemasa Kajiwara; Hiromitsu Sato; Yoshio Yamamoto
2000-06-19
This paper presents the technical approach used to design and develop the powerplant for the Honda Insight, a new motor assist hybrid vehicle with an overall development objective of just half the fuel consumption of the current Civic over a wide range of driving conditions. Fuel consumption of 35km/L (Japanese 10-15 mode), and 3.4L/100km (98/69/EC) was realized. To achieve this, a new Integrated Motor Assist (IMA) hybrid power plant system was developed, incorporating many new technologies for packaging and integrating the motor assist system and for improving engine thermal efficiency. This was developed in combination with a new lightweight aluminum body with low aerodynamic resistance. Environmental performance goals also included the simultaneous achievement of low emissions (half the Japanese year 2000 standards, and half the EU2000 standards), high efficiency, and recyclability. Full consideration was also given to key consumer attributes, including crash safety performance, handling, and driving performance.
Biondo, Elliott D; Ibrahim, Ahmad M; Mosher, Scott W; Grove, Robert E
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES) such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis has required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Path integrals and large deviations in stochastic hybrid systems
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.; Newby, Jay M.
2014-04-01
We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.
Hybrid silicon free-space source with integrated beam steering
NASA Astrophysics Data System (ADS)
Doylend, J. K.; Heck, M. J. R.; Bovington, J. T.; Peters, J. D.; Davenport, M. L.; Coldren, L. A.; Bowers, J. E.
2013-02-01
Free-space beam steering using optical phased arrays is desirable as a means of implementing Light Detection and Ranging (LIDAR) and free-space communication links without the need for moving parts, thus alleviating vulnerabilities due to vibrations and inertial forces. Implementing such an approach in silicon photonic integrated circuits is particularly desirable in order to take advantage of established CMOS processing techniques while reducing both device size and packaging complexity. In this work we demonstrate a free-space diode laser together with beam steering implemented on-chip in a silicon photonic circuit. A waveguide phased array, surface gratings, a hybrid III-V/silicon laser and an array of hybrid III-V/silicon amplifiers were fabricated on-chip in order to achieve a fully integrated steerable free-space optical source with no external optical inputs, thus eliminating the need for fiber coupling altogether. The chip was fabricated using a modified version of the hybrid silicon process developed at UCSB, with modifications to incorporate diodes within the waveguide layer as well as within the III-V gain layer. Beam steering across a 12° field of view with +/-0.3° accuracy and a 1.8°x0.6° beam width was achieved, with background peaks suppressed 7 dB relative to the main lobe within the field of view for arbitrarily chosen beam directions.
High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems
NASA Astrophysics Data System (ADS)
Chin, Siu A.
2015-03-01
In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued with the sign problem. This is due to the large number of anti-symmetric free fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than five free-fermion propagators, in conjunction with the use of the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are directly built-in and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund NPRP GRANT #5-674-1-114.
Dornheim, Tobias; Groth, Simon; Filinov, Alexey; Bonitz, Michael
2015-01-01
The uniform electron gas (UEG) at finite temperature is of high current interest due to its key relevance for many applications including dense plasmas and laser-excited solids. In particular, density functional theory heavily relies on accurate thermodynamic data for the UEG. Until recently, the only existing first-principle results had been obtained for $N=33$ electrons with restricted path integral Monte Carlo (RPIMC), for low to moderate density, $r_s = \overline{r}/a_B \gtrsim 1$. These data have been complemented by configuration path integral Monte Carlo (CPIMC) simulations for $r_s \leq 1$ that substantially deviate from RPIMC towards smaller $r_s$ and low temperature. In this work, we present results from an independent third method, the recently developed permutation blocking path integral Monte Carlo (PB-PIMC) approach [T. Dornheim et al., NJP 17, 073017 (2015)], which we extend to the UEG. Interestingly, PB-PIMC allows us to perform simulations over the entire density range down to...
First Results From GLAST-LAT Integrated Towers Cosmic Ray Data Taking And Monte Carlo Comparison
Brigida, M.; Caliandro, A.; Favuzzi, C.; Fusco, P.; Gargano, F.; Giordano, F.; Giglietto, N.; Loparco, F.; Marangelli, B.; Mazziotta, M.N.; Mirizzi, N.; Raino, S.; Spinelli, P.; /Bari U. /INFN, Bari
2007-02-15
GLAST Large Area Telescope (LAT) is a gamma ray telescope instrumented with silicon-strip detector planes and sheets of converter, followed by a calorimeter (CAL) and surrounded by an anticoincidence system (ACD). This instrument is sensitive to gamma rays in the energy range between 20 MeV and 300 GeV. At present, the first towers have been integrated and pre-launch data taking with cosmic ray muons is being performed. The results from the data analysis carried out during LAT integration will be discussed and a comparison with the predictions from the Monte Carlo simulation will be shown.
Hybrid integration for spatially multiplexed single-photon generation
NASA Astrophysics Data System (ADS)
Meany, Thomas; Ngah, Lutfi A.; Collins, Matthew J.; Clark, Alex S.; Williams, Robert J.; Eggleton, Benjamin J.; Steel, M. J.; Withford, Michael J.; Alibart, Olivier; Tanzilli, Sébastien
2014-02-01
We discuss the hybrid integration of multiple components for the production of telecom band single photon sources. We implement four, on-chip, waveguide channels capable of producing four spatially separated collinear pairs of single photons. Using laser inscribed waveguide circuits and point-by-point fibre Bragg gratings (FBGs), we interface, separate and filter generated photon pairs. We propose using fast switches to actively route multiple heralded single photons to a single output, producing an enhanced rate while maintaining a fixed noise level.
Exploiting symmetries for exponential error reduction in path integral Monte Carlo
Della Morte, Michele
2009-01-01
The path integral of a quantum system with an exact symmetry can be written as a sum of functional integrals each giving the contribution from quantum states with definite symmetry properties. We propose a strategy to compute each of them, normalized to the one with vacuum quantum numbers, by a Monte Carlo procedure whose cost increases power-like with the time extent of the lattice. This is achieved thanks to a multi-level integration scheme, inspired by the transfer matrix formalism, which exploits the symmetry and the locality in time of the underlying statistical system. As a result the cost of computing the lowest energy level in a given channel, its multiplicity and its matrix elements is exponentially reduced with respect to the standard path-integral Monte Carlo. We test the strategy with a one-dimensional harmonic oscillator, by computing the ratio of the parity odd over the parity even functional integrals and the two-point correlation function. The cost of the simulations scales as expected. In par...
Hybrid polymer photonic crystal fiber with integrated chalcogenide glass nanofilms
NASA Astrophysics Data System (ADS)
Markos, Christos; Kubat, Irnis; Bang, Ole
2014-08-01
The combination of chalcogenide glasses with polymer photonic crystal fibers (PCFs) is a difficult and challenging task due to their different thermo-mechanical material properties. Here we report the first experimental realization of a hybrid polymer-chalcogenide PCF with integrated As2S3 glass nanofilms at the inner surface of the air-channels of a poly-methyl-methacrylate (PMMA) PCF. The integrated high refractive index glass films introduce distinct antiresonant transmission bands in the 480-900 nm wavelength region. We demonstrate that the ultra-high Kerr nonlinearity of the chalcogenide glass makes the polymer PCF nonlinear and provides a possibility to shift the transmission band edges as much as 17 nm by changing the intensity. The proposed fabrication technique constitutes a new highway towards all-fiber nonlinear tunable devices based on polymer PCFs, which at the moment is not possible with any other fabrication method.
Hybrid Silicon Photonic Integration using Quantum Well Intermixing
NASA Astrophysics Data System (ADS)
Jain, Siddharth R.
With the push for faster data transfer across all domains of telecommunication, optical interconnects are transitioning into shorter range applications such as in data centers and personal computing. Silicon photonics, with its economic advantages of leveraging well-established silicon manufacturing facilities, is considered the most promising approach to further scale down the cost and size of optical interconnects for chip-to-chip communication. Intrinsic properties of silicon however limit its ability to generate and modulate light, both of which are key to realizing on-chip optical data transfer. The hybrid silicon approach directly addresses this problem by using molecularly bonded III-V epitaxial layers on silicon for optical gain and absorption. This technology includes direct transfer of III-V wafer to a pre-patterned silicon-on-insulator wafer. Several discrete devices for light generation, modulation, amplification and detection have already been demonstrated on this platform. As in the case of electronics, multiple photonic elements can be integrated on a single chip to improve performance and functionality. However, scalable photonic integration requires the ability to control the bandgap for individual devices along with design changes to simplify fabrication. In the research presented here, quantum well intermixing is used as a technique to define multiple bandgaps for integration on the hybrid silicon platform. Implantation enhanced disordering is used to generate four bandgaps spread over 120+ nm. By combining these selectively intermixed III-V layers with pre-defined gratings and waveguides on silicon, we fabricate distributed feedback, distributed Bragg reflector, Fabry-Perot and mode-locked lasers along with photodetectors, electro-absorption modulators and other test structures, all on a single chip. We demonstrate a broadband laser source with continuous-wave operational lasers over a 200 nm bandwidth. 
Some of these lasers are integrated with modulators with a 3-dB bandwidth above 25 GHz, thus demonstrating coarse wavelength division multiplexing transmitter on silicon.
Better HMC integrators for dynamical simulations
M. A. Clark; Balint Joo; A. D. Kennedy; P. J. Silva
2010-11-01
We show how to improve the molecular dynamics step of Hybrid Monte Carlo, both by tuning the integrator using Poisson brackets measurements and by the use of force gradient integrators. We present results for moderate lattice sizes.
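The molecular dynamics step being tuned here is the leapfrog integrator at the heart of HMC. As a minimal illustration (not the authors' lattice code), the following sketch runs leapfrog trajectories for a one-dimensional target density exp(-U(q)), assuming unit mass and a standard Gaussian momentum refresh:

```python
import math
import random

def leapfrog(q, p, grad_U, eps, n_steps):
    """Leapfrog integration of Hamiltonian dynamics for H = U(q) + p^2/2."""
    p -= 0.5 * eps * grad_U(q)            # initial momentum half-step
    for _ in range(n_steps - 1):
        q += eps * p                      # full position step
        p -= eps * grad_U(q)              # full momentum step
    q += eps * p
    p -= 0.5 * eps * grad_U(q)            # final momentum half-step
    return q, p

def hmc_step(q, U, grad_U, eps=0.1, n_steps=10):
    """One Hybrid Monte Carlo update for a 1-d target density exp(-U(q))."""
    p0 = random.gauss(0.0, 1.0)           # momentum refreshment
    q_new, p_new = leapfrog(q, p0, grad_U, eps, n_steps)
    # Metropolis test on the energy violation dH accumulated by leapfrog
    dH = (U(q_new) + 0.5 * p_new ** 2) - (U(q) + 0.5 * p0 ** 2)
    if random.random() < math.exp(min(0.0, -dH)):
        return q_new                      # accept the trajectory endpoint
    return q                              # reject: chain stays put
```

The energy violation dH controlled by the step size eps is exactly the quantity whose behaviour the Poisson-bracket tuning and force-gradient integrators in this work are designed to improve.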
Hybrid integrated optic modules for real-time signal processing
NASA Technical Reports Server (NTRS)
Tsai, C. S.
1984-01-01
The most recent progress on four relatively new hybrid integrated optic device modules in LiNbO3 waveguides and one in YIG/GGG waveguide that are currently being studied is discussed. The five hybrid modules include a time-integrating acoustooptic correlator, a channel waveguide acoustooptic frequency shifter/modulator, an electrooptic channel waveguide total internal reflection modulator/switch, an electrooptic analog-to-digital converter using a Fabry-Perot modulator array, and a noncollinear magnetooptic modulator using magnetostatic surface waves. All of these devices possess the desirable characteristics of very large bandwidth (GHz or higher), very small substrate size along the optical path (typically 1.5 cm or less), single-mode optical propagation, and low drive power requirement. The devices utilize either acoustooptic, electrooptic or magnetooptic effects in planar or channel waveguides and, therefore, act as efficient interface devices between a light wave and temporal signals. Major areas of application lie in wideband multichannel optical real-time signal processing and communications. Some of the specific applications include spectral analysis and correlation of radio frequency (RF) signals, fiber-optic sensing, optical computing and multiport switching/routing, and analog-to-digital conversion of wideband RF signals.
Sharma, Diksha; Badal, Andreu; Badano, Aldo
2012-04-21
The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty, which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features, like on-the-fly column geometry and columnar crosstalk, in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of the location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result, allows us to efficiently simulate large area detectors. PMID:22469917
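Decoupled from the actual MANTIS/fastDETECT2 kernels, the load-balancing idea can be sketched as a shared work queue from which a fast ("GPU") worker and a slow ("CPU") worker pull showers dynamically, so the faster device naturally processes more of them. The worker functions below are hypothetical placeholders for the real transport kernels.

```python
import queue
import threading

def run_hybrid(showers, process_gpu, process_cpu):
    """Sketch of dynamic CPU/GPU load balancing via a shared work queue.

    Each worker repeatedly pulls the next shower until the queue is
    empty; no static split of the workload is needed, so a faster
    worker automatically takes a larger share.
    """
    work = queue.Queue()
    for s in showers:
        work.put(s)
    results, lock = [], threading.Lock()

    def worker(process):
        while True:
            try:
                s = work.get_nowait()     # grab the next unit of work
            except queue.Empty:
                return                    # nothing left: worker exits
            r = process(s)
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker, args=(p,))
               for p in (process_gpu, process_cpu)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

In the real code the "process" calls would launch GPU kernels or CPU optical-transport routines; here they are arbitrary callables.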
Quantum Mechanical Single Molecule Partition Function from Path Integral Monte Carlo Simulations
Chempath, Shaji; Bell, Alexis T.; Predescu, Cristian
2006-10-01
An algorithm for calculating the partition function of a molecule with the path integral Monte Carlo method is presented. Staged thermodynamic perturbation with respect to a reference harmonic potential is utilized to evaluate the ratio of partition functions. Parallel tempering and a new Monte Carlo estimator for the ratio of partition functions are implemented here to achieve well converged simulations that give an accuracy of 0.04 kcal/mol in the reported free energies. The method is applied to various test systems, including a catalytic system composed of 18 atoms. Absolute free energies calculated by this method lead to corrections as large as 2.6 kcal/mol at 300 K for some of the examples presented.
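A stripped-down, classical one-dimensional analogue of the ratio-of-partition-functions estimator (the paper's scheme is staged, quantum, and path-integral based) is single-stage free-energy perturbation against a harmonic reference U_ref(x) = x^2/2:

```python
import math
import random

def partition_ratio(U_target, beta=1.0, n_samples=200_000, seed=0):
    """Single-stage thermodynamic perturbation estimate of Z_target/Z_ref.

    Uses the identity Z_target / Z_ref = < exp(-beta*(U_target - U_ref)) >_ref
    with the harmonic reference U_ref(x) = x**2 / 2, whose Boltzmann
    density is a Gaussian with variance 1/beta. This classical 1-d
    sketch only illustrates the ratio estimator, not the staged quantum
    scheme of the paper.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0 / math.sqrt(beta))   # sample the reference
        total += math.exp(-beta * (U_target(x) - 0.5 * x * x))
    return total / n_samples
```

For the stiffer harmonic target U(x) = x^2 the exact ratio is 1/sqrt(2), which the estimator reproduces; staging, as in the paper, becomes necessary when the target and reference distributions overlap poorly.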
Ng, C M
2013-10-01
The development of a population PK/PD model, an essential component for model-based drug development, is both time- and labor-intensive. A graphical-processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and identical algorithm that is designed for the single CPU (MCPEMCPU) were developed using MATLAB in a single computer equipped with dual Xeon 6-Core E5690 CPU and a NVIDIA Tesla C2070 GPU parallel computing card that contained 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data in assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimation and model computation times. Speedup factor was used to assess the relative benefit of parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation time than the MCPEMCPU and can offer more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of parallelized MCPEM algorithm developed in this study holds a great promise in serving as the core for the next-generation of modeling software for population PK/PD analysis. PMID:24002801
NASA Astrophysics Data System (ADS)
Hwang, Seok Won; Lee, Ho-Jun; Lee, Hae June
2014-12-01
Fluid models have been widely and successfully used in high-pressure plasma simulations, where the drift-diffusion and local-field approximations are valid. However, fluid models are not able to capture the non-local effects associated with the large electron energy relaxation mean free path in low-pressure plasmas. To overcome this weakness, a hybrid model coupling an electron Monte Carlo collision (EMCC) method with the fluid model is introduced to obtain precise electron energy distribution functions using pseudo-particles. Steady-state results from a one-dimensional hybrid model, which uses the EMCC method for the collisional reactions but the drift-diffusion approximation for electron transport, are compared with those of a conventional particle-in-cell (PIC) model and a fluid model for low-pressure capacitively coupled plasmas. Over a wide range of pressures, the hybrid model agrees well with the PIC simulation at a reduced calculation time, while the fluid model shows discrepancies in the plasma density and the electron temperature.
C. C. Lu; W. C. Chew
1999-01-01
We propose a hybrid integral equation approach that combines the volume integral equation (VIE) and the surface integral equation to model the mixed dielectric and conducting structures. The volume integral equation is applied to the material region and the surface integral equation (SIE) is enforced over the conducting surface. This results in a very general model as all the volume
Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration
NASA Technical Reports Server (NTRS)
Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali
2007-01-01
We discuss here the relative merits of these numbers as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive in using a random sequence is to solve real world problems, it is more desirable to compare the quality of the sequences based on their performance for these problems in terms of the quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz., the Matlab rand function, and the quasi-random generator halton, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than phi (the golden ratio) where the accuracy of the integration is concerned.
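A minimal version of this experiment treats consecutive digit blocks of pi as uniform variates and feeds them to a plain Monte Carlo estimate of the integral of x^2 over [0, 1], whose exact value is 1/3. The 200 hard-coded digits and the two-digit block size are choices made here for illustration, not the paper's setup.

```python
# First 200 decimal digits of pi after the decimal point.
PI_DIGITS = (
    "1415926535897932384626433832795028841971693993751058209749445923078164"
    "062862089986280348253421170679"
    "8214808651328230664709384460955058223172535940812848111745028410270193"
    "852110555964462294895493038196"
)

def digit_blocks_to_uniform(digits, block=2):
    """Interpret consecutive `block`-digit groups as uniforms in [0, 1)."""
    n = len(digits) // block
    return [int(digits[i * block:(i + 1) * block]) / 10 ** block
            for i in range(n)]

def mc_integrate(f, uniforms):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(u) for u in uniforms) / len(uniforms)
```

With only 100 samples the estimate is crude, but it already sits near 1/3; the paper's point is that such digit-block sequences behave like a good pseudo-random source when judged by exactly this kind of integration accuracy.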
NASA Astrophysics Data System (ADS)
El Bitar, Z.; Pino, F.; Candela, C.; Ros, D.; Pavía, J.; Rannou, F. R.; Ruibal, A.; Aguiar, P.
2014-12-01
It is well-known that in pinhole SPECT (single-photon-emission computed tomography), iterative reconstruction methods including accurate estimations of the system response matrix can lead to submillimeter spatial resolution. There are two different methods for obtaining the system response matrix: those that model the system analytically using an approach including an experimental characterization of the detector response, and those that make use of Monte Carlo simulations. Methods based on analytical approaches are faster and handle the statistical noise better than those based on Monte Carlo simulations, but they require tedious experimental measurements of the detector response. One suggested approach for avoiding an experimental characterization, circumventing the problem of statistical noise introduced by Monte Carlo simulations, is to perform an analytical computation of the system response matrix combined with a Monte Carlo characterization of the detector response. Our findings showed that this approach can achieve high spatial resolution similar to that obtained when the system response matrix computation includes an experimental characterization. Furthermore, we have shown that using simulated detector responses has the advantage of yielding a precise estimate of the shift between the point of entry of the photon beam into the detector and the point of interaction inside the detector. Considering this, it was possible to slightly improve the spatial resolution in the edge of the field of view.
Cluster analogs of binary isotopic mixtures: Path integral Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Chakravarty, Charusita
1996-05-01
The structure of quantum clusters composed of binary isotopic mixtures is studied using Fourier path integral Monte Carlo simulations. Such clusters display a purely quantum analog of the segregation phenomenon observed in classical binary clusters with the lighter isotope preferentially located on the cluster surface and the heavier isotope in the cluster interior. A parametric multistage sampling scheme is developed to equilibrate such quantum mixtures. The behavior of a single isotopic impurity in a quantum cluster is examined as a function of impurity mass, temperature and cluster size. Isotopic segregation effects in mixed para-H2/ortho-D2 clusters are shown to be striking.
NASA Technical Reports Server (NTRS)
Wolynes, Peter G.
1987-01-01
Nonadiabatic transitions are central to many areas of chemical and condensed matter physics, ranging from biological electron transfer to the optical properties of one-dimensional conductors. Here, a path integral Monte Carlo method is used to simulate such transitions, based on the observation that nonadiabatic rate coefficients are often dominated by saddle point trajectories that correspond to an imaginary time. Simple analytic theories can be used to continue these imaginary time correlation functions to determine rate coefficients. The advantages and drawbacks of this approach are discussed.
A Monte Carlo study to check the hadronic interaction models by a new EAS hybrid experiment in Tibet
Zhang, Ying; Jiang, L; Chen, D; Ding, L K; Shibata, M; Katayose, Y; Hotta, N; Ohnishi, M; Ouchi, T; Saito, T
2013-01-01
A new EAS hybrid experiment has been designed by constructing a YAC (Yangbajing Air shower Core) detector array inside the existing Tibet-III air shower array. The first step of YAC, called "YAC-I", consists of 16 plastic scintillator units (4 rows by 4 columns), each with an area of 40 cm x 50 cm, and is used to check the hadronic interaction models used in air shower simulations. A Monte Carlo study shows that YAC-I can record the high-energy electromagnetic component in the core region of air showers induced by primary particles with energies of several tens of TeV, where the primary composition is directly measured by space experiments. It may provide a direct check of the hadronic interaction models currently used in air shower simulations in the corresponding energy region. In the present paper, the method of observation and the sensitivity of the characteristics of the observed events to the different interaction models are discussed.
New One-Flavor Hybrid Monte Carlo Simulation Method for Lattice Fermions with gamma-five Hermiticity
Kenji Ogawa
2011-04-12
We propose a new method for Hybrid Monte Carlo (HMC) simulations with odd numbers of dynamical fermions on the lattice. It employs a different approach from polynomial or rational HMC. In this method, the gamma-five hermiticity of the lattice Dirac operator is crucial, and it can be applied to Wilson, domain-wall, and overlap fermions. We compare HMC simulations with two degenerate flavors and (1 + 1) degenerate flavors using optimal domain-wall fermions. The ratio of the efficiencies, (number of accepted trajectories) / (simulation time), is about 3:2. The relation between the pseudofermion actions of chirally symmetric lattice fermions in the four-dimensional (overlap) and five-dimensional (domain-wall) representations is also analyzed.
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W; Evans, Thomas M
2010-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2) to O(10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
Hybrid integration platform based on silica-on-silicon planar lightwave circuit
NASA Astrophysics Data System (ADS)
Lin, Wenhua; Sun, C. Jacob; Schmidt, Kevin M.
2007-02-01
While silica waveguide PLC products have been deployed in various systems and applications, hybrid integration of semiconductor opto-electronic devices on silica-based planar lightwave circuit (PLC) has become the mainstream platform for small form factor, low-cost and high volume integrated transceiver modules. One of the main benefits of hybrid integration is the wafer-scale process, which greatly reduces chip/module size and assembly cost. This paper reviews the development of this technology, and as an example, presents a hybrid integrated transmitter with four wavelengths on silica PLC chip for LX4 and 10GbE applications.
Integration and Operational Experience in CMS Monte Carlo Production in LCG
Caballero, J; Hernández, Jose M
2007-01-01
This note describes integration and operational aspects of the CMS Monte Carlo production in the LHC Computing Grid (LCG). In 2005 the McRunjob MC production system was ported to LCG-2 in order to make use of the distributed computing and storage resources available in LCG for CMS. The full production chain (generation, simulation, digitization with pile-up, reconstruction, injection in the data transfer system and publication for analysis) was implemented. Experience gained during the implementation and operation of the production system in LCG has been used to build ProdAgent, the new MC production system. ProdAgent also takes advantage of the new CMS event data model, event processing framework and data management services. Integration and operational experience with ProdAgent is also described in this note.
State and parameter estimation using Monte Carlo evaluation of path integrals
John C. Quinn; Henry D. I. Abarbanel
2009-12-08
Transferring information from observations of a dynamical system to estimate the fixed parameters and unobserved states of a system model can be formulated as the evaluation of a discrete-time path integral in model state space. The observations serve as a guiding potential, working with the dynamical rules of the model to direct system orbits in state space. The path integral representation permits direct numerical evaluation of the conditional mean path through the state space as well as conditional moments about this mean. Using a Monte Carlo method for selecting paths through state space, we show how these moments can be evaluated and demonstrate, in an interesting model system, the explicit influence of the transfer of information from the observations. We address the question of how many observations are required to estimate the unobserved state variables, and we examine the assumptions of Gaussianity of the underlying conditional probability.
Hybrid automated reliability predictor integrated work station (HiREL)
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.
1991-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated reliability (HiREL) workstation tool system marks another step toward the goal of producing a totally integrated computer aided design (CAD) workstation design capability. Since a reliability engineer must generally graphically represent a reliability model before he can solve it, the use of a graphical input description language increases productivity and decreases the incidence of error. The captured image displayed on a cathode ray tube (CRT) screen serves as a documented copy of the model and provides the data for automatic input to the HARP reliability model solver. The introduction of dependency gates to a fault tree notation allows the modeling of very large fault tolerant system models using a concise and visually recognizable and familiar graphical language. In addition to aiding in the validation of the reliability model, the concise graphical representation presents company management, regulatory agencies, and company customers a means of expressing a complex model that is readily understandable. The graphical postprocessor computer program HARPO (HARP Output) makes it possible for reliability engineers to quickly analyze huge amounts of reliability/availability data to observe trends due to exploratory design changes.
Comparison of symbolic and numerical integration methods for an assumed-stress hybrid shell element
NASA Technical Reports Server (NTRS)
Rengarajan, Govind; Knight, Norman F., Jr.; Aminpour, Mohammad A.
1993-01-01
Hybrid shell elements have long been regarded with reserve by commercial finite element developers despite the high degree of reliability and accuracy associated with such formulations. The fundamental reason is the inherently higher computational cost of the hybrid approach as compared to displacement-based formulations. However, a noteworthy factor in favor of hybrid elements is that numerical integration to generate element matrices can be entirely avoided by the use of symbolic integration. In this paper, the use of the symbolic computational approach is presented for an assumed-stress hybrid shell element with drilling degrees of freedom, and the significant time savings achieved are demonstrated through an example.
NASA Astrophysics Data System (ADS)
Wang, Jin; Han, Yang; Liang, Zhongcheng; Chen, Yongjin
2012-11-01
Applying coherent detection to advanced modulation formats makes it possible to compensate signal impairments electronically. A key issue for the successful deployment of coherent detection is the availability of cost-efficient and compact integrated receivers, which are composed of an optical 90° hybrid mixer and four photodiodes (PDs). In this work, three different types of optical hybrids are fabricated with polymer planar lightwave circuit (PLC) technology and hybridly integrated with four vertical backside-illuminated III-V PDs. Their performance, including insertion loss, transmission imbalance, polarization dependence and the phase deviation of the 90° hybrid, is discussed.
Chen, I.J.; Gelbard, E.M.
1988-07-01
The narrow resonance (NR) approximation has, in the past, been applied to regular lattices with fairly simple unit cells. Attempts to use the NR approximation to deal with fine details of the lattice structure, or with complicated lattice cells, have generally been based on assumptions and approximations that are rather difficult to evaluate. A benchmark method is developed in which slowing down is still treated in the NR approximation, but spatial neutron transport is handled by Monte Carlo. This benchmark method is used to evaluate older methods for analyzing the double-heterogeneity effect in fast reactors, and for computing resonance integrals in the PROTEUS lattices. New methods for treating the PROTEUS lattices are proposed.
WORM ALGORITHM PATH INTEGRAL MONTE CARLO APPLIED TO THE 3He-4He II SANDWICH SYSTEM
NASA Astrophysics Data System (ADS)
Al-Oqali, Amer; Sakhel, Asaad R.; Ghassib, Humam B.; Sakhel, Roger R.
2012-12-01
We present a numerical investigation of the thermal and structural properties of the 3He-4He sandwich system adsorbed on a graphite substrate using the worm algorithm path integral Monte Carlo (WAPIMC) method [M. Boninsegni, N. Prokof'ev and B. Svistunov, Phys. Rev. E74, 036701 (2006)]. For this purpose, we have modified a previously written WAPIMC code originally adapted for 4He on graphite, by including the second 3He-component. To describe the fermions, a temperature-dependent statistical potential has been used. This has proven very effective. The WAPIMC calculations have been conducted in the millikelvin temperature regime. However, because of the heavy computations involved, only 30, 40 and 50 mK have been considered for the time being. The pair correlations, Matsubara Green's function, structure factor, and density profiles have been explored at these temperatures.
Radiation Transport for Explosive Outflows: A Multigroup Hybrid Monte Carlo Method
NASA Astrophysics Data System (ADS)
Wollaeger, Ryan T.; van Rossum, Daniel R.; Graziani, Carlo; Couch, Sean M.; Jordan, George C., IV; Lamb, Donald Q.; Moses, Gregory A.
2013-12-01
We explore Implicit Monte Carlo (IMC) and discrete diffusion Monte Carlo (DDMC) for radiation transport in high-velocity outflows with structured opacity. The IMC method is a stochastic computational technique for nonlinear radiation transport. IMC is partially implicit in time and may suffer in efficiency when tracking MC particles through optically thick materials. DDMC accelerates IMC in diffusive domains. Abdikamalov extended IMC and DDMC to multigroup, velocity-dependent transport with the intent of modeling neutrino dynamics in core-collapse supernovae. Densmore has also formulated a multifrequency extension to the originally gray DDMC method. We rigorously formulate IMC and DDMC over a high-velocity Lagrangian grid for possible application to photon transport in the post-explosion phase of Type Ia supernovae. This formulation includes an analysis that yields an additional factor in the standard IMC-to-DDMC spatial interface condition. To our knowledge the new boundary condition is distinct from others presented in prior DDMC literature. The method is suitable for a variety of opacity distributions and may be applied to semi-relativistic radiation transport in simple fluids and geometries. Additionally, we test the code, called SuperNu, using an analytic solution having static material, as well as with a manufactured solution for moving material with structured opacities. Finally, we demonstrate with a simple source and 10 group logarithmic wavelength grid that IMC-DDMC performs better than pure IMC in terms of accuracy and speed when there are large disparities between the magnitudes of opacities in adjacent groups. We also present and test our implementation of the new boundary condition.
Fermionic path-integral Monte Carlo results for the uniform electron gas at finite temperature
NASA Astrophysics Data System (ADS)
Filinov, V. S.; Fortov, V. E.; Bonitz, M.; Moldabekov, Zh.
2015-03-01
The uniform electron gas (UEG) at finite temperature has recently attracted substantial interest due to the experimental progress in the field of warm dense matter. To explain the experimental data, accurate theoretical models for high-density plasmas are needed that depend crucially on the quality of the thermodynamic properties of the quantum degenerate nonideal electrons and of the treatment of their interaction with the positive background. Recent fixed-node path-integral Monte Carlo (RPIMC) data are believed to be the most accurate for the UEG at finite temperature, but they become questionable at high degeneracy when the Brueckner parameter rs = a/aB (the ratio of the mean interparticle distance to the Bohr radius) approaches 1. The validity range of these simulations and their predictive capabilities for the UEG are presently unknown. This is due to the unknown quality of the used fixed nodes and of the finite-size scaling from N = 33 simulated particles (per spin projection) to the macroscopic limit. To analyze these questions, we present alternative direct fermionic path integral Monte Carlo (DPIMC) simulations that are independent from RPIMC. Our simulations take into account quantum effects not only in the electron system but also in their interaction with the uniform positive background. Also, we use substantially larger particle numbers (up to three times more) and perform an extrapolation to the macroscopic limit. We observe very good agreement with RPIMC, for the polarized electron gas, up to moderate densities around rs = 4, and larger deviations for the unpolarized case, for low temperatures. For higher densities (high electron degeneracy), rs ≲ 1.5, both RPIMC and DPIMC are problematic due to the increased fermion sign problem.
Dunn, K. L.; Wilson, P. P. H. [Department of Engineering Physics, University of Wisconsin - Madison, 1500 Engineering Drive, Madison, WI 53706 (United States)
2013-07-01
A new Monte Carlo mesh tally based on a Kernel Density Estimator (KDE) approach using integrated particle tracks is presented. We first derive the KDE integral-track estimator and present a brief overview of its implementation as an alternative to the MCNP fmesh tally. To facilitate a valid quantitative comparison between these two tallies for verification purposes, there are two key issues that must be addressed. The first of these issues involves selecting a good data transfer method to convert the nodal-based KDE results into their cell-averaged equivalents (or vice versa with the cell-averaged MCNP results). The second involves choosing an appropriate resolution of the mesh, since if it is too coarse this can introduce significant errors into the reference MCNP solution. After discussing both of these issues in some detail, we present the results of a convergence analysis that shows the KDE integral-track and MCNP fmesh tallies are indeed capable of producing equivalent results for some simple 3D transport problems. In all cases considered, there was clear convergence from the KDE results to the reference MCNP results as the number of particle histories was increased. (authors)
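The kernel-density idea underlying the tally can be illustrated in one dimension (a generic KDE sketch, not the KDE integral-track estimator itself; the kernel and bandwidth here are arbitrary illustrative choices). Each sample contributes a smooth, finite-support bump rather than a count in a fixed cell, which is why KDE results are nodal rather than cell-averaged:

```python
import random

def epanechnikov(t):
    """Epanechnikov kernel, nonzero only on |t| < 1."""
    return 0.75 * (1.0 - t * t) if abs(t) < 1.0 else 0.0

def kde(points, h):
    """Return a density estimate f(x) built from samples with bandwidth h."""
    def f(x):
        return sum(epanechnikov((x - p) / h) for p in points) / (len(points) * h)
    return f

random.seed(1)
pts = [random.gauss(0.0, 1.0) for _ in range(500)]  # synthetic "scores"
f = kde(pts, h=0.4)

# A proper density estimate should integrate to ~1 (trapezoid rule on a wide grid).
xs = [-5.0 + 0.02 * i for i in range(501)]
area = sum(0.02 * 0.5 * (f(a) + f(b)) for a, b in zip(xs, xs[1:]))
print(round(area, 3))
```

Comparing such nodal estimates with cell averages requires a data transfer step, which is precisely the first verification issue the abstract raises.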
Integrating Shape and Texture in Deformable Models: from Hybrid Methods to Metamorphs
Huang, Xiaolei
A new class of deformable models, which we term "Metamorphs", is introduced. The novel formulation of the Metamorph models tightly couples shape and region information in a variational framework. Keywords: Metamorphs, deformable models, implicit
NASA Technical Reports Server (NTRS)
Park, Han G.; Cannon, Howard; Bajwa, Anupa; Mackey, Ryan; James, Mark; Maul, William
2004-01-01
This paper describes the initial integration of a hybrid reasoning system utilizing a continuous domain feature-based detector, Beacon-based Exceptions Analysis for Multimissions (BEAM), and a discrete domain model-based reasoner, Livingstone.
Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV
Miller, S.G.
1988-08-01
Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend their ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.
An Event-Driven Hybrid Molecular Dynamics and Direct Simulation Monte Carlo Algorithm
Donev, A; Garcia, A L; Alder, B J
2007-07-30
A novel algorithm is developed for the simulation of polymer chains suspended in a solvent. The polymers are represented as chains of hard spheres tethered by square wells, and they interact with the solvent particles through hard-core potentials. The algorithm uses event-driven molecular dynamics (MD) for the simulation of the polymer chain and the interactions between the chain beads and the surrounding solvent particles. The interactions between the solvent particles themselves are not treated deterministically as in event-driven algorithms; rather, the momentum and energy exchange in the solvent is determined stochastically using the Direct Simulation Monte Carlo (DSMC) method. The coupling between the solvent and the solute is consistently represented at the particle level; however, unlike full MD simulations of both the solvent and the solute, the spatial structure of the solvent is ignored. The algorithm is described in detail and applied to the study of the dynamics of a polymer chain tethered to a hard wall subjected to uniform shear. The algorithm closely reproduces full MD simulations with two orders of magnitude greater efficiency. The results do not confirm the existence of periodic (cycling) motion of the polymer chain.
Streamline Integration using MPI-Hybrid Parallelism on a Large Multi-Core Architecture
Camp, David; Garth, Christoph; Childs, Hank; Pugmire, Dave; Joy, Kenneth I.
2010-11-01
Streamline computation in a very large vector field data set represents a significant challenge due to the non-local and data-dependent nature of streamline integration. In this paper, we conduct a study of the performance characteristics of hybrid parallel programming and execution as applied to streamline integration on a large, multicore platform. With multi-core processors now prevalent in clusters and supercomputers, there is a need to understand the impact of these hybrid systems in order to make the best implementation choice. We use two MPI-based distribution approaches based on established parallelization paradigms, parallelize-over-seeds and parallelize-over-blocks, and present a novel MPI-hybrid algorithm for each approach to compute streamlines. Our findings indicate that the work sharing between cores in the proposed MPI-hybrid parallel implementation results in much improved performance and consumes less communication and I/O bandwidth than a traditional, non-hybrid distributed implementation.
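Apart from the MPI/thread scheduling the paper studies, streamline integration itself reduces to numerically integrating dx/dt = v(x) from a seed point. A minimal serial sketch with a synthetic rotational field (the field, step size, and step count are illustrative assumptions, not from the paper):

```python
import math

def velocity(x, y):
    """Synthetic solenoidal field: rigid rotation about the origin."""
    return -y, x

def advance_streamline(seed, h=0.01, steps=1000):
    """Integrate one streamline with classical 4th-order Runge-Kutta."""
    x, y = seed
    path = [(x, y)]
    for _ in range(steps):
        k1 = velocity(x, y)
        k2 = velocity(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
        k3 = velocity(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
        k4 = velocity(x + h * k3[0], y + h * k3[1])
        x += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
        y += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
        path.append((x, y))
    return path

path = advance_streamline((1.0, 0.0))
# In a rigid rotation the streamline should stay on the unit circle.
drift = max(abs(math.hypot(px, py) - 1.0) for px, py in path)
print(drift)  # tiny for RK4 at this step size
```

The parallelization question arises because each step of a streamline may need a different data block, which is what makes the distribution strategy (over seeds vs. over blocks) matter.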
Hybrid integrated photonic components based on a polymer platform
NASA Astrophysics Data System (ADS)
Eldada, Louay A.
2003-06-01
We report on a polymer-on-silicon optical bench platform that enables the hybrid integration of elemental passive and active optical functions. Planar polymer circuits are produced photolithographically, and slots are formed in them for the insertion of chips and films of a variety of materials. The polymer circuits provide interconnects, static routing elements such as couplers, taps, and multi/demultiplexers, as well as thermo-optically dynamic elements such as switches, variable optical attenuators, and tunable notch filters. Crystal-ion-sliced thin films of lithium niobate are inserted in the polymer circuit for polarization control or for electro-optic modulation. Films of yttrium iron garnet and neodymium iron boron magnets are inserted in order to magneto-optically achieve non-reciprocal operation for isolation and circulation. Indium phosphide and gallium arsenide chips are inserted for light generation, amplification, and detection, as well as wavelength conversion. The functions enabled by this multi-material platform span the range of the building blocks needed in optical circuits, while using the highest-performance material system for each function. We demonstrated complex-functionality photonic components based on this technology, including a metro ring node module and a tunable optical transmitter. The metro ring node chip includes switches, variable optical attenuators, taps, and detectors; it enables optical add/drop multiplexing, power monitoring, and automatic load balancing, and it supports shared and dedicated protection protocols in two-fiber metro ring optical networks. The tunable optical transmitter chip includes a tunable external cavity laser, an isolator, and a high-speed modulator.
Fractional volume integration in two-dimensional NMR spectra: CAKE, a Monte Carlo approach
NASA Astrophysics Data System (ADS)
Romano, Rocco; Paris, Debora; Acernese, Fausto; Barone, Fabrizio; Motta, Andrea
2008-06-01
Quantitative information from multi-dimensional NMR experiments can be obtained by peak volume integration. The standard procedure (selection of a region around the chosen peak and addition of all values) is often biased by poor peak definition because of peak overlap. Here we describe a simple method, called CAKE, for volume integration of (partially) overlapping peaks. Assuming the axial symmetry of two-dimensional NMR peaks, as it occurs in NOESY and TOCSY when Lorentz-Gauss transformation of the signals is carried out, CAKE estimates the peak volume by multiplying a volume fraction by a factor R. It represents a proportionality ratio between the total and the fractional volume, which is identified as a slice in an exposed region of the overlapping peaks. The volume fraction is obtained via Monte Carlo Hit-or-Miss technique, which proved to be the most efficient because of the small region and the limited number of points within the selected area. Tests on simulated and experimental peaks, with different degrees of overlap and signal-to-noise ratios, show that CAKE results in improved volume estimates. A main advantage of CAKE is that the volume fraction can be flexibly chosen so as to minimize the effect of overlap, frequently observed in two-dimensional spectra.
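The Hit-or-Miss step at the core of CAKE can be sketched generically: uniform points are thrown into a bounding region and the fraction landing inside the target region estimates the volume fraction. Here the region is a quarter disc rather than an NMR peak slice, and all names are illustrative; the peak-shape and R-factor machinery of CAKE is omitted.

```python
import random

def hit_or_miss_fraction(inside, n, seed=0):
    """Fraction of n uniform points in the unit square that fall inside a region."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if inside(rng.random(), rng.random()))
    return hits / n

# Area fraction of the quarter disc x^2 + y^2 < 1 inside the unit square is pi/4,
# so 4 * fraction is a Monte Carlo estimate of pi.
frac = hit_or_miss_fraction(lambda x, y: x * x + y * y < 1.0, 200_000)
print(4 * frac)
```

As the abstract notes, Hit-or-Miss is attractive when the integration region is small and irregular, where grid-based summation over a poorly defined peak boundary would be biased.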
Monte Carlo simulation of small electron fields collimated by the integrated photon MLC
NASA Astrophysics Data System (ADS)
Mihaljevic, Josip; Soukup, Martin; Dohm, Oliver; Alber, Markus
2011-02-01
In this study, a Monte Carlo (MC)-based beam model for an ELEKTA linear accelerator was established. The beam model is based on the EGSnrc Monte Carlo code, whereby electron beams with nominal energies of 10, 12 and 15 MeV were considered. For collimation of the electron beam, only the integrated photon multi-leaf-collimators (MLCs) were used. No additional secondary or tertiary add-ons like applicators, cutouts or dedicated electron MLCs were included. The source parameters of the initial electron beam were derived semi-automatically from measurements of depth-dose curves and lateral profiles in a water phantom. A routine to determine the initial electron energy spectra was developed which fits a Gaussian spectrum to the most prominent features of depth-dose curves. The comparisons of calculated and measured depth-dose curves demonstrated agreement within 1%/1 mm. The source divergence angle of initial electrons was fitted to lateral dose profiles beyond the range of electrons, where the imparted dose is mainly due to bremsstrahlung produced in the scattering foils. For accurate modelling of narrow beam segments, the influence of air density on dose calculation was studied. The air density for simulations was adjusted to local values (433 m above sea level) and compared with the standard air supplied by the ICRU data set. The results indicate that the air density is an influential parameter for dose calculations. Furthermore, the default value of the BEAMnrc parameter 'skin depth' for the boundary crossing algorithm was found to be inadequate for the modelling of small electron fields. A higher value for this parameter eliminated discrepancies in too broad dose profiles and an increased dose along the central axis. The beam model was validated with measurements, whereby an agreement mostly within 3%/3 mm was found.
Dynamical estimation of neuron and network properties II: Path integral Monte Carlo methods.
Kostuk, Mark; Toth, Bryan A; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I
2012-03-01
Hodgkin-Huxley (HH) models of neuronal membrane dynamics consist of a set of nonlinear differential equations that describe the time-varying conductance of various ion channels. Using observations of voltage alone we show how to estimate the unknown parameters and unobserved state variables of an HH model in the expected circumstance that the measurements are noisy, the model has errors, and the state of the neuron is not known when observations commence. The joint probability distribution of the observed membrane voltage and the unobserved state variables and parameters of these models is a path integral through the model state space. The solution to this integral allows estimation of the parameters and thus a characterization of many biological properties of interest, including channel complement and density, that give rise to a neuron's electrophysiological behavior. This paper describes a method for directly evaluating the path integral using a Monte Carlo numerical approach. This provides estimates not only of the expected values of model parameters but also of their posterior uncertainty. Using test data simulated from neuronal models comprising several common channels, we show that short (<50 ms) intracellular recordings from neurons stimulated with a complex time-varying current yield accurate and precise estimates of the model parameters as well as accurate predictions of the future behavior of the neuron. We also show that this method is robust to errors in model specification, supporting model development for biological preparations in which the channel expression and other biophysical properties of the neurons are not fully known. PMID:22526358
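The core idea, evaluating posterior expectations over states and parameters by Monte Carlo, can be sketched in a scalar toy problem (a hypothetical stand-in for the HH path integral; the model, prior, and sampler settings below are illustrative assumptions):

```python
import math
import random

random.seed(42)

# Toy problem: noisy observations y_i = theta + noise; estimate theta.
TRUE_THETA, SIGMA = 1.5, 0.5
data = [TRUE_THETA + random.gauss(0.0, SIGMA) for _ in range(50)]

def log_posterior(theta):
    """Flat prior, Gaussian likelihood (up to an additive constant)."""
    return -sum((y - theta) ** 2 for y in data) / (2 * SIGMA ** 2)

def metropolis(logp, x0, n, step=0.1):
    """Random-walk Metropolis sampler over the unknown quantity."""
    x, lp, chain = x0, logp(x0), []
    for _ in range(n):
        cand = x + random.gauss(0.0, step)
        lp_cand = logp(cand)
        if math.log(random.random()) < lp_cand - lp:  # accept/reject
            x, lp = cand, lp_cand
        chain.append(x)
    return chain

chain = metropolis(log_posterior, 0.0, 20_000)[5_000:]  # discard burn-in
post_mean = sum(chain) / len(chain)
print(post_mean)  # sits near the sample mean of the data
```

The paper's method does the same thing over an entire discretized voltage/gating-variable path, so the chain explores a very high-dimensional space, and the spread of the chain supplies the posterior uncertainties the abstract mentions.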
Integration of LED chip within patch antenna geometry for hybrid FSO/RF communication
Huang, Zhaoran "Rena"
J. Liao. A communication transmitter using an LED integrated within the geometry of a planar patch antenna on a shared substrate is demonstrated. An experimental FSO link is constructed with a bare-die visible LED
Monte Carlo simulation studies of lipid order parameter profiles near integral membrane proteins.
Sperotto, M M; Mouritsen, O G
1991-01-01
Monte Carlo simulation techniques have been applied to a statistical mechanical lattice model in order to study the coherence length for the spatial fluctuations of the lipid order parameter profiles around integral membrane proteins in dipalmitoyl phosphatidylcholine bilayers. The model, which provides a detailed description of the pure lipid bilayer main transition, incorporates hydrophobic matching between the lipid and protein hydrophobic thicknesses as a major contribution to the lipid-protein interactions in lipid membranes. The model is studied at low protein-to-lipid ratios. The temperature dependence of the coherence length is found to have a dramatic peak at the phase transition temperature. The dependence on protein circumference as well as hydrophobic length is determined and it is concluded that in some cases the coherence length is much longer than previously anticipated. The long coherence length provides a mechanism for indirect lipid-mediated protein-protein long-range attraction and hence plays an important role in regulating protein segregation. PMID:2009352
Zhang, Huan; Li, Tao; Wu, Guanji; Ma, Feng
2014-05-01
Coronary artery disease (CAD) is the most common type of cardiovascular disease and leading cause of mortality worldwide. Microarray technology for gene expression analysis has facilitated the identification of the molecular mechanism that underlies the pathogenesis of CAD. Previous studies have primarily used variance or regression analysis, without considering array specific factors. Thus, the aim of the present study was to investigate the mechanism of CAD using partial least squares (PLS)-based analysis, which was integrated with the Monte Carlo technique. Microarray analysis was performed with a data set of 110 CAD patients and 111 controls obtained from the Gene Expression Omnibus database. A total of 390 dysregulated genes were acquired. Significantly increased representations of dysregulated genes in Gene Ontology items, including transforming growth factor β-activated receptor activity and acyl-CoA oxidase activity, were identified. Network analysis revealed three hub genes with a degree of >10, including ESR1, ITGA4 and ARRB2. The results of the present study provide novel information on the gene expression signatures of CAD patients and offer further theoretical support for future therapeutic study. PMID:24940402
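The Monte Carlo ingredient of such an analysis can be illustrated with a label-permutation test (the expression values below are invented for illustration; the PLS regression itself is not reproduced): shuffling case/control labels builds a null distribution for a gene's group difference.

```python
import random

random.seed(5)

# Invented expression values for one gene in cases vs. controls.
cases = [2.1, 2.4, 2.2, 2.6, 2.3]
ctrls = [1.8, 1.7, 1.9, 1.6, 1.8]
obs = sum(cases) / len(cases) - sum(ctrls) / len(ctrls)

# Monte Carlo permutation test: reshuffle the labels, recompute the
# group difference, and count how often chance matches the observed one.
pool, count, TRIALS = cases + ctrls, 0, 20000
for _ in range(TRIALS):
    random.shuffle(pool)
    diff = sum(pool[:5]) / 5 - sum(pool[5:]) / 5
    if abs(diff) >= abs(obs):
        count += 1

p = count / TRIALS
print(p < 0.05)                 # gene flagged as dysregulated
```

Because the Monte Carlo null is built from the data itself, array-specific factors enter the test without distributional assumptions, which is the motivation the abstract gives for pairing PLS with this technique.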
Axel Hoefer; Oliver Buss; Maik Hennebach; Michael Schmid; Dieter Porsch
2014-11-12
MOCABA is a combination of Monte Carlo sampling and Bayesian updating algorithms for the prediction of integral functions of nuclear data, such as reactor power distributions or neutron multiplication factors. Similarly to the established Generalized Linear Least Squares (GLLS) methodology, MOCABA offers the capability to utilize integral experimental data to reduce the prior uncertainty of integral observables. The MOCABA approach, however, does not involve any series expansions and, therefore, does not suffer from the breakdown of first-order perturbation theory for large nuclear data uncertainties. This is related to the fact that, in contrast to the GLLS method, the updating mechanism within MOCABA is applied directly to the integral observables without having to "adjust" any nuclear data. A central part of MOCABA is the nuclear data Monte Carlo program NUDUNA, which performs random sampling of nuclear data evaluations according to their covariance information and converts them into libraries for transport code systems like MCNP or SCALE. What is special about MOCABA is that it can be applied to any integral function of nuclear data, and any integral measurement can be taken into account to improve the prediction of an integral observable of interest. In this paper we present two example applications of the MOCABA framework: the prediction of the neutron multiplication factor of a water-moderated PWR fuel assembly based on 21 criticality safety benchmark experiments and the prediction of the power distribution within a toy model reactor containing 100 fuel assemblies.
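The update-on-observables idea can be sketched in a few lines (all models and numbers below are hypothetical, and the real framework works with full covariance matrices over many observables and nuclear-data libraries): prior samples of an uncertain data parameter are propagated through a benchmark and a target observable, and a Gaussian linear-Bayes update conditions the target on the benchmark measurement without adjusting the data itself.

```python
import random
import statistics

random.seed(1)

# Minimal sketch of the MOCABA idea (names and numbers hypothetical):
# sample an uncertain nuclear-data parameter, propagate it through two
# integral observables (a measured benchmark and a target prediction),
# then apply a Gaussian/linear-Bayes update directly to the observables.
N = 20000
theta = [random.gauss(0.0, 1.0) for _ in range(N)]     # prior data uncertainty

bench  = [1.000 + 0.010 * t for t in theta]            # benchmark k_eff model
target = [0.950 + 0.008 * t for t in theta]            # application k_eff model

b_mean, t_mean = statistics.fmean(bench), statistics.fmean(target)
var_b  = statistics.fmean((b - b_mean) ** 2 for b in bench)
cov_tb = statistics.fmean((b - b_mean) * (t - t_mean)
                          for b, t in zip(bench, target))

b_meas, var_meas = 1.005, 0.002 ** 2                   # benchmark experiment
gain = cov_tb / (var_b + var_meas)                     # Kalman-style gain
t_post = t_mean + gain * (b_meas - b_mean)             # updated prediction

var_t = statistics.fmean((t - t_mean) ** 2 for t in target)
var_post = var_t - gain * cov_tb                       # reduced uncertainty
print(round(t_post, 4), var_post < var_t)
```

Because the update acts on the sampled observables rather than on a linearized sensitivity expansion, the sketch shares the property the abstract emphasizes: no first-order perturbation theory is involved.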
NASA Astrophysics Data System (ADS)
Blasone, R.; Madsen, H.; Rosbjerg, D.
2006-12-01
So far, limited attention has been given to uncertainty assessment (UA) of distributed and integrated hydrological models. The main reasons for this are the high computational burden required by running these models, the potentially huge number of parameters involved and the difficulties in aggregating multi-site and multi-variable objective measures. This work presents a UA application conducted on an integrated, spatially distributed hydrological model in an attempt to overcome these difficulties. The UA of the model response is assessed through a revised version of the Generalized Likelihood Uncertainty Estimation (GLUE) methodology in which the sampling method is improved by the use of an adaptive Markov Chain Monte Carlo method, the Shuffled Complex Evolution Metropolis (SCEM-UA) algorithm. The use of the SCEM-UA algorithm allows an efficient sampling of the region of the parameter space containing the best solutions. Hereby more reliable posterior distributions of model outputs and parameters can be obtained at a reduced computational cost in comparison to the use of an initial random sampling method. The MIKE SHE modeling software is employed to simulate the runoff and the groundwater responses of a Danish watershed, the Karup catchment. The calibration data consists of observations of groundwater elevation at 17 sites and of runoff measurements at the catchment outlet. An aggregation method based on a distance scale transformation is used to combine groundwater levels and runoff measurements into a single objective function that takes into account the different impact of the two types of data on the global optimized function. 
An initial sensitivity analysis is applied to reduce the dimensionality of the problem and thus the total number of model runs: parameters with limited impact on the model response are fixed at their calibrated values, while those that affect the model response most are subjected to the UA.
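The GLUE idea can be sketched with ordinary random sampling standing in for the SCEM-UA sampler (the toy recession model, observations, likelihood measure, and 10% behavioral threshold below are all invented for illustration): parameter sets are scored against observations, the behavioral ones retained, and their likelihood-weighted predictions form the uncertainty estimate.

```python
import random

random.seed(2)

obs = [2.0, 1.6, 1.3, 1.1]                      # observed runoff series

def model(k):                                   # toy recession model
    x, out = 2.5, []
    for _ in range(len(obs)):
        x *= (1.0 - k)
        out.append(x)
    return out

cands = []
for _ in range(5000):
    k = random.uniform(0.0, 0.5)                # uniform prior sample
    sim = model(k)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    like = 1.0 / (sse + 1e-9)                   # inverse-error likelihood
    cands.append((like, k, sim[-1]))

cands.sort(reverse=True)
behavioral = cands[:500]                        # top 10% kept as behavioral
wsum = sum(l for l, _, _ in behavioral)
pred = sum(l * y for l, _, y in behavioral) / wsum
print(round(pred, 2))                           # likelihood-weighted prediction
```

Replacing the uniform draw with an adaptive MCMC sampler such as SCEM-UA concentrates samples in the behavioral region, which is exactly the computational saving the abstract reports.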
Mean field simulation for Monte Carlo integration Part II : Feynman-Kac models
Del Moral , Pierre
Lecture-note fragments (with Patras and S. Rubenthaler) covering: concentration of interacting processes (Foundations and Trends in Machine Learning); interacting drifts in the McKean-Vlasov style; interacting jump particle models in the Boltzmann and Feynman-Kac style; and, for Part II, branching-selection, diffusion Monte Carlo, free evolutions, absorption, quantum Monte Carlo, and walker motions.
Hybrid plasmon photonic crystal resonance grating for integrated spectrometer biosensor.
Guo, Hong; Guo, Junpeng
2015-01-15
Using nanofabricated hybrid metal-dielectric nanohole array photonic crystal gratings, a hybrid plasmonic optical resonance spectrometer biosensor is demonstrated. The new spectrometer sensor technique measures plasmonic optical resonance from the first-order diffraction rather than via the traditional method of measuring optical resonance from transmission. The resonance spectra measured with the new spectrometer technique are compared with the spectra measured using a commercial optical spectrometer. It is shown that the new optical resonance spectrometer can be used to measure plasmonic optical resonance that otherwise cannot be measured with a regular optical spectrometer. PMID:25679856
Hybrid Body Representation for Integrated Pose Recognition, Localization and Segmentation
Zhu, Zhigang
Part-based spatial priors are represented by a "star" graphical model. This hybrid body representation connects low-level and high-level vision stages, where top-down prior knowledge and bottom-up data processing are combined. The body is described by a collection of simpler elements with specific inter-relationships. The latter one postulates a very simple
NASA Astrophysics Data System (ADS)
Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.
2014-06-01
Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.
Conceptual Integration of Hybridization by Algerian Students Intending to Teach Physical Sciences
ERIC Educational Resources Information Center
Salah, Hazzi; Dumon, Alain
2011-01-01
This work aims to assess the difficulties encountered by students of the Ecole Normale Superieure of Kouba (Algeria) intending to teach physical science in the integration of the hybridization of atomic orbitals. It is a concept that they should use in describing the formation of molecular orbitals ([sigma] and [pi]) in organic chemistry and gaps…
The energy demand of distillation-molecular sieve systems for ethanol recovery/dehydration can be significant, particularly for dilute solutions. An alternative hybrid process integrating vapor stripping (like a beer still) with vapor compression and a vapor permeation membrane s...
Lauermann, M.; Weimann, C.; Palmer, R.; Schindler, P. C. [Institute of Photonics and Quantum Electronics, Karlsruhe Institute of Technology, 76131 Karlsruhe (Germany); Koeber, S.; Freude, W., E-mail: christian.koos@kit.edu; Koos, C., E-mail: christian.koos@kit.edu [Institute of Photonics and Quantum Electronics, Karlsruhe Institute of Technology, 76131 Karlsruhe, Germany and Institute of Microstructure Technology, Karlsruhe Institute of Technology, 76344 Eggenstein-Leopoldshafen (Germany); Rembe, C. [Polytec GmbH, 76337 Waldbronn (Germany)
2014-05-27
We demonstrate a waveguide-based frequency shifter on the silicon photonic platform, enabling frequency shifts up to 10 GHz. The device is realized by silicon-organic hybrid (SOH) integration. Temporal shaping of the drive signal allows the suppression of spurious side-modes by more than 23 dB.
Heterogeneously integrated 2.0 μm CW hybrid silicon lasers at room temperature
Bowers, John
Alexander Spott et al. We experimentally demonstrate room-temperature, continuous-wave (CW), 2.0 μm wavelength lasers heterogeneously integrated on silicon. The lasers operate CW up to 35 °C and emit up to 4.2 mW of single-facet CW power at room temperature. III
O. Briat; J. M. Vinassa; C. Zardini; J. L. Aucouturier
2001-01-01
This paper deals with the design and the integration of an electromechanical storage system into an electric vehicle power train. The main goal is to demonstrate the benefit of hybridizing energy sources for heavy-duty vehicles with a highly discontinuous mission profile, such as garbage collection. This profile is characterized by a high peak-to-continuous battery power ratio. To optimize vehicle performances, the battery
Integrated Cost and Schedule using Monte Carlo Simulation of a CPM Model - 12419
Hulett, David T.; Nosbisch, Michael R.
2012-07-01
This discussion of the recommended practice (RP) 57R-09 of AACE International defines the integrated analysis of schedule and cost risk to estimate the appropriate level of cost and schedule contingency reserve on projects. The main contribution of this RP is to include the impact of schedule risk on cost risk and hence on the need for cost contingency reserves. Additional benefits include the prioritizing of the risks to cost, some of which are risks to schedule, so that risk mitigation may be conducted in a cost-effective way, scatter diagrams of time-cost pairs for developing joint targets of time and cost, and probabilistic cash flow which shows cash flow at different levels of certainty. Integrating cost and schedule risk into one analysis based on the project schedule loaded with costed resources from the cost estimate provides both: (1) more accurate cost estimates than if the schedule risk were ignored or incorporated only partially, and (2) illustrates the importance of schedule risk to cost risk when the durations of activities using labor-type (time-dependent) resources are risky. Many activities such as detailed engineering, construction or software development are mainly conducted by people who need to be paid even if their work takes longer than scheduled. Level-of-effort resources, such as the project management team, are extreme examples of time-dependent resources, since if the project duration exceeds its planned duration the cost of these resources will increase over their budgeted amount. The integrated cost-schedule risk analysis is based on: - A high quality CPM schedule with logic tight enough so that it will provide the correct dates and critical paths during simulation automatically without manual intervention. - A contingency-free estimate of project costs that is loaded on the activities of the schedule. - Resolves inconsistencies between cost estimate and schedule that often creep into those documents as project execution proceeds. 
- Good-quality risk data that are usually collected in risk interviews of the project team, management and others knowledgeable in the risk of the project. The risks from the risk register are used as the basis of the risk data in the risk driver method. The risk driver method is based on the fundamental principle that identifiable risks drive overall cost and schedule risk. - A Monte Carlo simulation software program that can simulate schedule risk, burn-rate risk and time-independent resource risk. The results include the standard histograms and cumulative distributions of possible cost and time results for the project. However, by simulating both cost and time simultaneously we can collect the cost-time pairs of results and hence show the scatter diagram ('football chart') that indicates the joint probability of finishing on time and on budget. Also, we can derive the probabilistic cash flow for comparison with the time-phased project budget. Finally, the risks to schedule completion and to cost can be prioritized, say at the P-80 level of confidence, to help focus the risk mitigation efforts. If the cost and schedule estimates including contingency reserves are not acceptable to the project stakeholders, the project team should conduct risk mitigation workshops and studies, deciding which risk mitigation actions to take, and re-run the Monte Carlo simulation to determine the possible improvement to the project's objectives. Finally, it is recommended that the contingency reserves of cost and of time, calculated at a level that represents an acceptable degree of certainty and uncertainty for the project stakeholders, be added as a resource-loaded activity to the project schedule for strategic planning purposes. The risk analysis described in this paper is correct only for the current plan, represented by the schedule. The project contingency reserves of time and cost that are the main results of this analysis apply if that plan is to be followed.
Of course project managers have the option of re-planning and re-scheduling in the face of new facts, in part by m
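The core simulation loop of such an integrated cost-schedule analysis can be sketched as follows (the activities, rates, and triangular duration distributions are invented for illustration): sampling durations and costing the labor-type and level-of-effort resources in each iteration yields the time-cost pairs behind the 'football chart' and the P-80 contingency values described above.

```python
import random

random.seed(3)

# Two serial activities staffed by time-dependent (labor) resources,
# plus a level-of-effort management cost that grows with total duration.
RATE = (5.0, 8.0)            # daily cost of each activity's crew
LOE = 2.0                    # level-of-effort cost per project day

pairs = []
for _ in range(10000):
    d1 = random.triangular(8, 15, 10)    # engineering duration, days
    d2 = random.triangular(20, 40, 25)   # construction duration, days
    days = d1 + d2                       # serial CPM path
    cost = RATE[0] * d1 + RATE[1] * d2 + LOE * days
    pairs.append((days, cost))           # one time-cost pair per iteration

durations = sorted(d for d, _ in pairs)
costs = sorted(c for _, c in pairs)
p80_time = durations[int(0.8 * len(durations))]   # P-80 schedule contingency
p80_cost = costs[int(0.8 * len(costs))]           # P-80 cost contingency
print(round(p80_time, 1), round(p80_cost, 1))
```

Because duration uncertainty feeds directly into labor and level-of-effort cost, the sampled costs are correlated with the sampled durations, which is the coupling the RP says is lost when cost risk is analyzed separately from schedule risk.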
NASA Astrophysics Data System (ADS)
Cerjanic, Alexander M.
The development of a spectral domain method of moments code for the modeling of single layer microstrip patch antennas is presented in this thesis. The mixed potential integral equation formulation of Maxwell's equations is used as the theoretical basis for the work, and is solved via the method of moments. General-purpose graphics processing units are used for the computation of the impedance matrix by incorporation of quasi-Monte Carlo integration. The development of the various components of the code, including Green's function, impedance matrix, and excitation vector modules are discussed with individual test cases for the major code modules. The integrated code was tested by modeling a suite of four coaxially probe fed circularly polarized single layer microstrip patch antennas and the computed results are compared to those obtained by measurement. Finally, a study examining the relationship between design parameters and S11 performance was undertaken using the code.
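Quasi-Monte Carlo integration of the kind used for the impedance-matrix entries can be sketched with a Halton low-discrepancy sequence (the actual mixed-potential integrands are not reproduced; a simple separable test integrand over the unit square is used, with exact value 1/4):

```python
# Quasi-Monte Carlo integration with a 2D Halton low-discrepancy sequence.
def halton(i, base):
    """i-th element of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integrate(f, n):
    total = 0.0
    for i in range(1, n + 1):
        x, y = halton(i, 2), halton(i, 3)   # coprime bases per dimension
        total += f(x, y)
    return total / n

# Test integrand f(x, y) = x*y over the unit square; exact value 0.25.
est = qmc_integrate(lambda x, y: x * y, 4096)
print(round(est, 3))
```

Deterministic low-discrepancy points typically converge faster than pseudo-random sampling for smooth integrands, and the per-point independence of the sum maps naturally onto GPU threads, which is the motivation for the pairing described above.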
Hybrid Integrated Label-Free Chemical and Biological Sensors
Mehrabani, Simin; Maker, Ashley J.; Armani, Andrea M.
2014-01-01
Label-free sensors based on electrical, mechanical and optical transduction methods have potential applications in numerous areas of society, ranging from healthcare to environmental monitoring. Initial research in the field focused on the development and optimization of various sensor platforms fabricated from a single material system, such as fiber-based optical sensors and silicon nanowire-based electrical sensors. However, more recent research efforts have explored designing sensors fabricated from multiple materials. For example, synthetic materials and/or biomaterials can also be added to the sensor to improve its response toward analytes of interest. By leveraging the properties of the different material systems, these hybrid sensing devices can have significantly improved performance over their single-material counterparts (better sensitivity, specificity, signal to noise, and/or detection limits). This review will briefly discuss some of the methods for creating these multi-material sensor platforms and the advances enabled by this design approach. PMID:24675757
El-Kady, Maher F; Ihns, Melanie; Li, Mengping; Hwang, Jee Youn; Mousavi, Mir F; Chaney, Lindsay; Lech, Andrew T; Kaner, Richard B
2015-04-01
Supercapacitors now play an important role in the progress of hybrid and electric vehicles, consumer electronics, and military and space applications. There is a growing demand in developing hybrid supercapacitor systems to overcome the energy density limitations of the current generation of carbon-based supercapacitors. Here, we demonstrate 3D high-performance hybrid supercapacitors and microsupercapacitors based on graphene and MnO2 by rationally designing the electrode microstructure and combining active materials with electrolytes that operate at high voltages. This results in hybrid electrodes with ultrahigh volumetric capacitance of over 1,100 F/cm(3). This corresponds to a specific capacitance of the constituent MnO2 of 1,145 F/g, which is close to the theoretical value of 1,380 F/g. The energy density of the full device varies between 22 and 42 Wh/l depending on the device configuration, which is superior to those of commercially available double-layer supercapacitors, pseudocapacitors, lithium-ion capacitors, and hybrid supercapacitors tested under the same conditions and is comparable to that of lead acid batteries. These hybrid supercapacitors use aqueous electrolytes and are assembled in air without the need for expensive "dry rooms" required for building today's supercapacitors. Furthermore, we demonstrate a simple technique for the fabrication of supercapacitor arrays for high-voltage applications. These arrays can be integrated with solar cells for efficient energy harvesting and storage systems. PMID:25831542
Path integral Monte Carlo approach to the U(1) lattice gauge theory in 2+1 dimensions
NASA Astrophysics Data System (ADS)
Loan, Mushtaq; Brunner, Michael; Sloggett, Clare; Hamer, Chris
2003-08-01
Path integral Monte Carlo simulations have been performed for U(1) lattice gauge theory in 2+1 dimensions on anisotropic lattices. We extract the static quark potential, the string tension and the low-lying “glueball” spectrum. The Euclidean string tension and mass gap decrease exponentially at weak coupling in excellent agreement with the predictions of Polyakov and Göpfert and Mack, but their magnitudes are five times bigger than predicted. Extrapolations are made to the extreme anisotropic or Hamiltonian limit, and comparisons are made with previous estimates obtained in the Hamiltonian formulation.
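The path integral Monte Carlo technique itself can be illustrated on the simplest system, the 1D harmonic oscillator with m = ω = 1 (a generic textbook sketch, not the paper's anisotropic U(1) lattice code): the discretized Euclidean path is updated site by site with Metropolis, and ⟨x²⟩ approaches the ground-state value 1/(2ω) = 0.5 for a long path.

```python
import math
import random

random.seed(4)

# Discretized Euclidean action for the 1D harmonic oscillator:
# S = sum_i [ (x_{i+1}-x_i)^2 / (2a) + a * x_i^2 / 2 ],  m = omega = 1.
N, A = 64, 0.25                 # time slices, lattice spacing
x = [0.0] * N

def dS(i, new):
    """Change in action when site i moves from x[i] to new."""
    xp, xm = x[(i + 1) % N], x[(i - 1) % N]   # periodic Euclidean time
    def s(xi):
        return ((xp - xi) ** 2 + (xi - xm) ** 2) / (2 * A) + A * 0.5 * xi * xi
    return s(new) - s(x[i])

meas = []
for sweep in range(6000):
    for i in range(N):                        # Metropolis sweep over sites
        new = x[i] + random.uniform(-0.5, 0.5)
        d = dS(i, new)
        if d <= 0 or random.random() < math.exp(-d):
            x[i] = new                        # accept the local update
    if sweep >= 1000:                         # thermalized measurements
        meas.append(sum(xi * xi for xi in x) / N)

x2 = sum(meas) / len(meas)
print(round(x2, 2))             # approaches 1/(2*omega) = 0.5
```

A lattice gauge simulation replaces the site variables with link variables and the oscillator action with a sum over plaquettes, but the Metropolis accept/reject structure is the same.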
Few electron devices: towards hybrid CMOS-SET integrated circuits
Adrian M. Ionescu; Michel J. Declercq; Santanu Mahapatra; Kaustav Banerjee; Jacques Gautier
2002-01-01
In this paper, CMOS evolution and their fundamental and practical limitations are briefly reviewed, and the working principles, performance, and fabrication of single-electron transistors (SETs) are addressed in detail. Some of the unique characteristics and functionality of SETs, like unrivalled integration and low power, which are complementary to the sub-20 nm CMOS, are demonstrated. Characteristics of two novel SET architectures,
Ali, Fawaz; Waller, Ed
2014-10-01
There are numerous scenarios where radioactive particulates can be displaced by external forces. For example, the detonation of a radiological dispersal device in an urban environment will result in the release of radioactive particulates that in turn can be resuspended into the breathing space by external forces such as wind flow in the vicinity of the detonation. A need exists to quantify the internal (due to inhalation) and external radiation doses that are delivered to bystanders; however, current state-of-the-art codes are unable to calculate accurately radiation doses that arise from the resuspension of radioactive particulates in complex topographies. To address this gap, a coupled computational fluid dynamics and Monte Carlo radiation transport approach has been developed. With the aid of particulate injections, the computational fluid dynamics simulation models characterize the resuspension of particulates in a complex urban geometry due to air-flow. The spatial and temporal distributions of these particulates are then used by the Monte Carlo radiation transport simulation to calculate the radiation doses delivered to various points within the simulated domain. A particular resuspension scenario has been modeled using this coupled framework, and the calculated internal (due to inhalation) and external radiation doses have been deemed reasonable. GAMBIT and FLUENT comprise the software suite used to perform the Computational Fluid Dynamics simulations, and Monte Carlo N-Particle eXtended is used to perform the Monte Carlo Radiation Transport simulations. PMID:25162421
Integration of Neuroscience and Endocrinology in Hybrid PBL Curriculum
NSDL National Science Digital Library
PhD J. Thomas Cunningham (University of Missouri-Columbia School of Medicine Dept. of Physiology)
2001-12-01
At the University of Missouri-Columbia, the medical school employs a problem-based learning curriculum that began in 1993. Since the curriculum was changed, student performance on step 1 of the United States Medical Licensing Examination has significantly increased from slightly below the national average to almost one-half a standard deviation above the national mean. In the first and second years, classes are organized in blocks that are 8 wk long, followed by 1 wk for evaluation. Initially, basic science endocrinology was taught in the fourth block of the first year with immunology and molecular biology. Student and faculty evaluations of the curriculum indicated that endocrinology did not integrate well with the rest of the material taught in that block. To address these issues, basic science endocrinology was moved into another block with neurosciences. We integrate endocrinology with neurosciences by using the hypothalamus and its role in neuroendocrinology as a springboard for endocrinology. This is accomplished by using clinical cases with clear neuroscience and endocrinology aspects such as Cushing's disease and multiple endocrine neoplasia type 1.
Integrated process of Donnan dialysis and pertraction in a multimembrane hybrid system
R. Wódzki; P. Szczepański
2001-01-01
This paper deals with the functioning of an integrated membrane system composed as follows, where CEM denotes a membrane made of cation-exchange polymer. The system combines the functions of Donnan dialysis (DD) and pertraction in the multimembrane hybrid system (MHS). The MHS consists of an agitated bulk liquid membrane and two CEMs. The DD–MHS system was designed to improve the treatment
Broadband silicon optical modulator using a graphene-integrated hybrid plasmonic waveguide.
Shin, Jin-Soo; Kim, Jin Tae
2015-09-11
Graphene is an excellent electronic and photonic material for developing electronic-photonic integrated circuits in Si-based semiconductor devices with ultra wide operational bandwidth. As an extended application, here we propose a broadband silicon optical modulator using a graphene-integrated hybrid plasmonic waveguide, and investigate the optical characteristics numerically at a wavelength of 1.55 μm. The optical device is based on the surface plasmon polariton absorption of graphene. By electrically tuning the graphene's refractive index as low as that of a noble metal, the hybrid plasmonic waveguide supports a strongly confined highly lossy hybrid long-range surface plasmon polariton strip mode, and hence light coupled from an input waveguide experiences significant power attenuation as it propagates along the waveguide. Over the entire C-band from 1.530 to 1.565 μm wavelengths, the on/off extinction ratio is larger than 13.7 dB. This modulator has the potential to play a key role in realizing graphene-Si waveguide-based integrated photonic devices. PMID:26293975
Hybrid integrated photodetector with flat-top steep-edge spectral response.
Fan, Xinye; Huang, Yongqing; Ren, Xiaomin; Duan, Xiaofeng; Hu, Fuquan; Wang, Qi; Cai, Shiwei; Zhang, Xia
2012-08-20
Hybrid integrated photodetectors with flat-top steep-edge spectral responses that consist of an Si-based multicavity Fabry-Perot (F-P) filter and an InP-based p-i-n absorption structure (with a 0.2 μm In(0.53)Ga(0.47)As absorption layer), have been designed and fabricated. The performance of the hybrid integrated photodetectors is theoretically investigated by including key factors such as the thickness of each cavity, the pairs of each reflecting mirror, and the thickness of the benzocyclobutene bonding layer. The device is fabricated by bonding an Si-based multicavity F-P filter with an InP-based p-i-n absorption structure. A hybrid integrated photodetector with a peak quantum efficiency of 55% around 1549.2 nm, a -0.5 dB band of 0.43 nm, a 25 dB band of 1.06 nm, and a 3 dB bandwidth of more than 16 GHz, is simultaneously obtained. Based on the multicavity F-P structure, this device has a good flat-top steep-edge spectral response. PMID:22907001
NASA Technical Reports Server (NTRS)
Zuffada, C.; Cwik, T.; Jamnejad, V.
1994-01-01
An efficient hybrid finite element-integral equation technique has recently being developed to model EM scattering from three-dimensional inhomogeneous penetrable bodies of arbitrary shape, and is currently being extended to treat radiation problems.
980nm-1550nm vertically integrated duplexer for hybrid erbium-doped waveguide amplifiers on glass
NASA Astrophysics Data System (ADS)
Onestas, Lydie; Nappez, Thomas; Ghibaudo, Elise; Vitrant, Guy; Broquin, Jean-Emmanuel
2009-02-01
Ion-exchanged devices on glass have been successfully used to realize passive and active integrated optic devices for sensor and telecom applications. Current research focuses on reducing chip dimensions while increasing the number of functions integrated. In this paper we show how the use of two stacked optical layers allows the realization of an efficient and compact pump duplexer for an ion-exchanged hybrid erbium-doped waveguide amplifier. Indeed, our complete theoretical study of the device shows that excess losses lower than -0.1 dB and crosstalk lower than -20 dB can be achieved.
Integrated Plasma Simulation of Lower Hybrid Current Drive in Tokamaks
NASA Astrophysics Data System (ADS)
Bonoli, P. T.; Wright, J. C.; Harvey, R. W.; Batchelor, D. B.; Berry, L. A.; Kessel, C. E.; Jardin, S. C.
2012-03-01
It has been shown in Alcator C-Mod that the onset time for sawteeth can be delayed significantly (up to 0.5 s) relative to ohmically heated plasmas, through the injection of off-axis LH current drive power [1]. We are simulating these experiments using the Integrated Plasma Simulator (IPS) [2], where the driven LH current density profiles are computed using a ray tracing component (GENRAY) and a Fokker-Planck code (CQL3D) [3] that are run in a tightly coupled time advance. The background plasma is evolved using the TSC transport code with the Porcelli sawtooth model [4]. Predictions of the driven LH current profiles will be compared with simpler ``reduced'' models for LHCD such as the LSC code, which is implemented in TSC and is also invoked within the IPS. [1] C. E. Kessel et al., Bull. Am. Phys. Soc. 53, Poster PP6.00074 (2008). [2] D. Batchelor et al., Journal of Physics: Conf. Series 125, 012039 (2008). [3] R. W. Harvey and M. G. McCoy, Proc. of the IAEA Tech. Comm. Meeting on Simulation and Modeling of Therm. Plasmas, Montreal, Canada (1992). [4] S. C. Jardin et al., J. Comp. Phys. 66, 481 (1986).
NASA Astrophysics Data System (ADS)
Urbic, T.; Holovko, M. F.
2011-10-01
An associative version of the Henderson-Abraham-Barker theory is applied to the study of the Mercedes-Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations satisfactorily reproduce the simulation data. At lower temperatures, the soft mean spherical approximation gives good agreement at low and at high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between water and the hydrophobic surface was also demonstrated and studied.
NASA Astrophysics Data System (ADS)
Schumacher, Andreas B.; Krabe, Detlef; Dieckroeger, Jens; Spott, Thorsten; Kraeker, Tobias; Martins, Evely; Zavrsnik, Miha; Schneider, Hartmut W.; Baumann, Ingo
2003-03-01
We built a 20 channel, 200 GHz, fully reconfigurable optical add-/drop multiplexer with integrated variable optical attenuators and power monitor diodes. A single planar lightwave circuit chip contains demultiplexer, switch array, attenuators and multiplexers. It also serves as an "optical motherboard" for a hybrid, flip-chip assembly containing four 10-channel photo detector arrays. A thermal management concept which considers both microscopic and macroscopic aspects of the device was developed. The final device exhibits an insertion loss of 9 dB from "in"- to "through"-port, a 1 dB bandwidth of >50 GHz and switch extinction ratios in excess of 40 dB.
O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09
Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display either from the VisIt GUI or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the calculation runs.
NASA Astrophysics Data System (ADS)
Sharma, Diksha; Badal, Andreu; Badano, Aldo
2012-04-01
The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty, which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features, such as on-the-fly column geometry and columnar crosstalk, in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of the location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by the optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport.
The new code requires much less memory than MANTIS and, as a result, allows us to efficiently simulate large area detectors.
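The dynamic allocation of optical-transport showers to CPU and GPU workers can be sketched with a shared work queue consumed by a fast and a slow worker. This is a hypothetical toy illustrating the load-balancing idea only, not the actual hybridMANTIS scheduler:

```python
import queue
import threading

def run_showers(tasks, gpu_batch=4):
    """Toy dynamic load balancer: a fast worker (standing in for the GPU,
    grabbing batches) and a slow worker (standing in for a CPU core,
    grabbing one task at a time) drain a shared thread-safe queue."""
    work = queue.Queue()
    for t in tasks:
        work.put(t)
    done = {"gpu": [], "cpu": []}  # each thread writes only its own key

    def worker(name, batch):
        while True:
            grabbed = []
            try:
                for _ in range(batch):
                    grabbed.append(work.get_nowait())
            except queue.Empty:
                pass  # keep whatever was grabbed before the queue emptied
            if not grabbed:
                return
            done[name].extend(grabbed)  # "transport" the showers

    gpu = threading.Thread(target=worker, args=("gpu", gpu_batch))
    cpu = threading.Thread(target=worker, args=("cpu", 1))
    gpu.start()
    cpu.start()
    gpu.join()
    cpu.join()
    return done

counts = run_showers(list(range(100)))
```

Every shower is processed exactly once, regardless of how the two workers race for the queue.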
Dornheim, Tobias; Filinov, Alexey; Bonitz, Michael
2015-01-01
Correlated fermions are of high interest in condensed matter (Fermi liquids, Wigner molecules), cold atomic gases and dense plasmas. Here we propose a novel approach to path integral Monte Carlo (PIMC) simulations of strongly degenerate non-ideal fermions at finite temperature by combining a fourth-order factorization of the density matrix with antisymmetric propagators, i.e., determinants, between all imaginary time slices. To efficiently run through the modified configuration space, we introduce a modification of the widely used continuous space worm algorithm, which allows for efficient sampling at arbitrary system parameters. We demonstrate how the application of determinants achieves an effective blocking of permutations with opposite signs, leading to a significant relief of the fermion sign problem. To benchmark the capability of our method regarding the simulation of degenerate fermions, we consider multiple electrons in a quantum dot and compare our results with other ab initio techniques, where ...
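The determinant trick mentioned above can be checked numerically: an N×N determinant equals the signed sum over all N! permutations, so replacing the explicit permutation sum by a determinant groups terms of opposite sign into a single O(N³) evaluation. A small sketch with a toy random matrix (not an actual imaginary-time propagator):

```python
import itertools
import numpy as np

def permutation_sum(A):
    """Brute-force sum over all N! permutations with alternating signs --
    the fermionic exchange sum that the determinant evaluates exactly."""
    n = A.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # Parity of the permutation via its inversion count.
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        prod = 1.0
        for i, p in enumerate(perm):
            prod *= A[i, p]
        total += (-1) ** inversions * prod
    return total

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))   # toy 4x4 "propagator" matrix
brute = permutation_sum(A)    # 24 signed terms
fast = np.linalg.det(A)       # same value in one shot
```

The agreement of `brute` and `fast` is exactly the sign-blocking the abstract describes.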
Kirk, B.L.
1985-12-01
The ITS (Integrated Tiger Series) Monte Carlo code package developed at Sandia National Laboratories and distributed as CCC-467/ITS by the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory (ORNL) consists of eight codes - the standard codes, TIGER, CYLTRAN, ACCEPT; the P-codes, TIGERP, CYLTRANP, ACCEPTP; and the M-codes ACCEPTM, CYLTRANM. The codes have been adapted to run on the IBM 3081, VAX 11/780, CDC-7600, and Cray 1 with the use of the update emulator UPEML. This manual should serve as a guide to a user running the codes on IBM computers having 370 architecture. The cases listed were tested on the IBM 3033, under the MVS operating system using the VS Fortran Level 1.3.1 compiler.
Update of the MCSANC Monte Carlo Integrator, v.1.20
Arbuzov, A; Bondarenko, S; Christova, P; Kalinovskaya, L; Klein, U; Kolesnikov, V; Sadykov, R; Sapronov, A; Uskov, F
2015-01-01
This article presents new features of the MCSANC v.1.20 program, a Monte Carlo tool for the calculation of next-to-leading order electroweak and QCD corrections to various Standard Model processes. The extensions concern the implementation of Drell-Yan-like processes and include a systematic treatment of the photon-induced contribution in proton-proton collisions and electroweak corrections beyond the NLO approximation. There are also technical improvements, such as calculation of the forward-backward asymmetry for the neutral current Drell-Yan process. The updated code is suitable for studies of the effects of EW and QCD radiative corrections to Drell-Yan (and several other) processes at the LHC and at forthcoming high energy proton-proton colliders.
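As a rough illustration of the forward-backward asymmetry observable, here is a minimal counting estimator, A_FB = (N_F − N_B)/(N_F + N_B) by the sign of cos θ. This is a toy on synthetic unweighted events; MCSANC itself computes A_FB from weighted NLO events:

```python
import numpy as np

def afb(cos_theta):
    """Forward-backward asymmetry from event-by-event cos(theta):
    forward events have cos(theta) > 0, backward events < 0."""
    cos_theta = np.asarray(cos_theta)
    n_f = np.count_nonzero(cos_theta > 0)
    n_b = np.count_nonzero(cos_theta < 0)
    return (n_f - n_b) / (n_f + n_b)

rng = np.random.default_rng(1)
# A symmetric toy angular distribution should give A_FB ~ 0.
sym = rng.uniform(-1.0, 1.0, 100_000)
a_sym = afb(sym)
```

With 10^5 events the statistical spread of `a_sym` is about 1/sqrt(N) ≈ 0.003.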
Performance analysis of an OTEC plant and a desalination plant using an integrated hybrid cycle
Uehara, Haruo; Miyara, Akio; Ikegami, Yasuyuki; Nakaoka, Tsutomu
1996-05-01
A performance analysis of an OTEC plant using an integrated hybrid cycle (I-H OTEC cycle) has been conducted. The I-H OTEC cycle is a combination of a closed-cycle OTEC plant and a spray flash desalination plant. In an I-H OTEC cycle, warm sea water evaporates the liquid ammonia in the OTEC evaporator and then enters the flash chamber, where part of it evaporates. The evaporated steam enters the desalination condenser and is condensed by the cold sea water that has passed through the OTEC condenser. The optimization of the I-H OTEC cycle is analyzed by the method of steepest descent. The total heat transfer area of the heat exchangers per unit net power is used as the objective function. Numerical results are reported for a 10 MW I-H OTEC cycle with plate-type heat exchangers and ammonia as the working fluid. The results are compared with those of a joint hybrid OTEC cycle (J-H OTEC cycle).
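The method of steepest descent used for the cycle optimization can be sketched generically: step opposite the gradient until it vanishes. The quadratic objective below is a made-up stand-in for illustration, not the paper's heat-transfer-area-per-net-power function:

```python
import numpy as np

def steepest_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=10_000):
    """Minimize an objective by the method of steepest descent:
    repeatedly step in the direction of the negative gradient."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break          # gradient ~ 0: (local) minimum reached
        x = x - lr * g     # step opposite the gradient
    return x

# Hypothetical convex objective f(x) = (x0-2)^2 + 3(x1+1)^2, minimum at (2, -1).
grad = lambda x: np.array([2 * (x[0] - 2), 6 * (x[1] + 1)])
x_opt = steepest_descent(grad, [0.0, 0.0])
```

In the paper the design variables would be cycle parameters (flow rates, temperatures) and the gradient of the area-per-power objective would be evaluated from the plant model.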
NASA Astrophysics Data System (ADS)
Belof, Jonathan; Dubois, Jonathan
2013-06-01
Warm dense matter (WDM), the regime of degenerate and strongly coupled Coulomb systems, is of great interest due to its importance in understanding astrophysical processes and high energy density laboratory experiments. Path Integral Monte Carlo (PIMC) presents a particularly attractive formalism for tackling outstanding questions in WDM, in that electron correlation can be calculated exactly, with the nuclear and electronic degrees of freedom on an equal footing. Here we present an efficient means of solving the Feynman path integral numerically by variational optimization of a trial density matrix, a method originally proposed for simple potentials by Feynman and Kleinert, and we show that this formalism provides an accurate description of warm dense matter with a number of unique advantages over other PIMC approaches. An exchange interaction term is derived for the variationally optimized path, as well as a numerically efficient scheme for dealing with long-range electrostatics. Finally, we present results for the pair correlation functions and thermodynamic observables of the spin-polarized electron gas, warm dense hydrogen, and all-electron warm dense carbon within the presented VPT-PIMC formalism. Lawrence Livermore National Laboratory is operated by Lawrence Livermore National Security, LLC, for the U.S. Department of Energy, National Nuclear Security Administration under Contract DE-AC52-07NA27344.
Draft of M2 Report on Integration of the Hybrid Hydride Model into INL’s MBM Framework for Review
Tikare, Veena; Weck, Philippe F.; Schultz, Peter A.; Clark, Blythe; Glazoff, Michael; Homer, Eric
2014-07-01
This report documents the development, demonstration and validation of a mesoscale, microstructural evolution model for simulation of zirconium hydride δ-ZrH1.5 precipitation in the cladding of used nuclear fuels that may occur during long-term dry storage. While the Zr-based claddings are manufactured free of any hydrogen, they absorb hydrogen during service, in the reactor by a process commonly termed ‘hydrogen pick-up’. The precipitation and growth of zirconium hydrides during dry storage is one of the most likely fuel rod integrity failure mechanisms either by embrittlement or delayed hydride cracking of the cladding (Hanson et al., 2011). While the phenomenon is well documented and identified as a potential key failure mechanism during long-term dry storage (Birk et al., 2012 and NUREG/CR-7116), the ability to actually predict the formation of hydrides is poor. The model being documented in this work is a computational capability for the prediction of hydride formation in different claddings of used nuclear fuels. This work supports the Used Fuel Disposition Research and Development Campaign in assessing the structural engineering performance of the cladding during and after long-term dry storage. This document demonstrates a basic hydride precipitation model that is built on a recently developed hybrid Potts-phase field model that combines elements of Potts-Monte Carlo and the phase-field models (Homer et al., 2013; Tikare and Schultz, 2012). The model capabilities are demonstrated along with the incorporation of the starting microstructure, thermodynamics of the Zr-H system and the hydride formation mechanism.
ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2008-04-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
Mashayekhi, S.; Razzaghi, M.; Tripak, O.
2014-01-01
A new numerical method for solving the nonlinear mixed Volterra-Fredholm integral equations is presented. This method is based upon hybrid functions approximation. The properties of hybrid functions consisting of block-pulse functions and Bernoulli polynomials are presented. The operational matrices of integration and product are given. These matrices are then utilized to reduce the nonlinear mixed Volterra-Fredholm integral equations to the solution of algebraic equations. Illustrative examples are included to demonstrate the validity and applicability of the technique. PMID:24523638
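The block-pulse part of the hybrid basis has a particularly simple operational matrix of integration, P = (h/2)(I + 2U), with U strictly upper triangular ones: integrating an expansion c^T B(t) gives approximately (c^T P) B(t). A minimal numerical check of that matrix alone (the Bernoulli-polynomial part of the hybrid basis is omitted for brevity):

```python
import numpy as np

def bpf_coeffs(f, m):
    """Block-pulse coefficients on [0, 1): the block average of f,
    approximated here by the value at each block midpoint."""
    h = 1.0 / m
    return np.array([f((i + 0.5) * h) for i in range(m)])

def bpf_integration_matrix(m):
    """Operational matrix of integration for m block-pulse functions:
    P = (h/2) (I + 2 U), U strictly upper triangular ones."""
    h = 1.0 / m
    U = np.triu(np.ones((m, m)), k=1)
    return (h / 2) * (np.eye(m) + 2 * U)

m = 64
c = bpf_coeffs(lambda t: np.cos(np.pi * t), m)   # expand f(t) = cos(pi t)
P = bpf_integration_matrix(m)
approx = c @ P                                   # coefficients of the integral
h = 1.0 / m
# Exact integral F(t) = sin(pi t)/pi, sampled at the block midpoints.
exact = np.sin(np.pi * (np.arange(m) + 0.5) * h) / np.pi
err = np.max(np.abs(approx - exact))
```

With m = 64 blocks the error is O(h²), well below 10⁻³.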
Integration of multisensor hybrid reasoners to support personal autonomy in the smart home.
Valero, Miguel Ángel; Bravo, José; Chamizo, Juan Manuel García; López-de-Ipiña, Diego
2014-01-01
The deployment of the Ambient Intelligence (AmI) paradigm requires designing and integrating user-centered smart environments to assist people in their daily life activities. This research paper details the integration and validation of multiple heterogeneous sensors with hybrid reasoners that support decision making in order to monitor personal and environmental data at a smart home in a private way. The results innovate on knowledge-based platforms, distributed sensors, connected objects, accessibility and authentication methods to promote independent living for elderly people. TALISMAN+, the AmI framework deployed, integrates four subsystems in the smart home: (i) a mobile biomedical telemonitoring platform to provide elderly patients with continuous disease management; (ii) an integration middleware that allows context capture from heterogeneous sensors to program the environment's reaction; (iii) a vision system for intelligent monitoring of daily activities in the home; and (iv) an ontologies-based integrated reasoning platform to trigger local actions and manage private information in the smart home. The framework was integrated in two real running environments, the UPM Accessible Digital Home and the MetalTIC house, and successfully validated by five experts in home care, elderly people and personal autonomy. PMID:25232910
Wendland, D; Ballenegger, V; Alastuey, A
2014-11-14
We compute two- and three-body cluster functions that describe contributions of composite entities, like hydrogen atoms, ions H(-), H2(+), and helium atoms, and also charge-charge and atom-charge interactions, to the equation of state of a hydrogen-helium mixture at low density. A cluster function has the structure of a truncated virial coefficient and behaves, at low temperatures, like a usual partition function for the composite entity. Our path integral Monte Carlo calculations use importance sampling to sample efficiently the cluster partition functions even at low temperatures where bound state contributions dominate. We also employ a new and efficient adaptive discretization scheme that allows one not only to eliminate Coulomb divergencies in discretized path integrals, but also to direct the computational effort where particles are close and thus strongly interacting. The numerical results for the two-body function agree with the analytically known quantum second virial coefficient. The three-body cluster functions are compared at low temperatures with familiar partition functions for composite entities. PMID:25399134
Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu [Department of Biomedical Engineering, University of Kentucky, Lexington, Kentucky 40506 (United States); Li, Ting [Department of Biomedical Engineering, University of Kentucky, Lexington, Kentucky 40506 (United States); State Key Laboratory for Electronic Thin Film and Integrated Device, University of Electronic Science and Technology of China, Chengdu 610054 (China); Chen, Lei; Toborek, Michal [Department of Neurosurgery, University of Kentucky, Lexington, Kentucky 40536 (United States)
2014-05-12
Conventional semi-infinite solutions for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors …)
NASA Astrophysics Data System (ADS)
Rajagopal, S.; Huntington, J. L.; Niswonger, R. G.; Reeves, M.; Pohll, G.
2012-12-01
Modeling complex hydrologic systems requires increasingly complex models to sufficiently describe the physical mechanisms observed in the domain. Streamflow in our study area is primarily driven by climate, reservoirs, and surface and groundwater interactions. Hence in this study, we are using the coupled surface and groundwater flow model, GSFLOW, to simulate streamflow in the Truckee River basin, Nevada and California. To characterize this hydrologic system, the model domain is discretized into ~10,500 grid cells of 300 m resolution, for which a priori parameter estimates were derived from observed climate, soils, geology, and well logs, supplemented by default values. Due to the high dimensionality of the problem, it is important to quantify model uncertainty from multiple sources (parameters, climate input). In the current study, we adopt a stepwise approach to calibrate the model and to quantify the uncertainty in the simulation of different hydro-meteorological fluxes. This approach is preferred firstly because of the availability of multiple observations, such as precipitation, solar radiation, snow depth and snow water equivalent, remotely sensed snow cover, and observed streamflow. Secondly, by focusing on individual modules and the parameters associated with simulating one process (e.g. solar radiation), we reduce the parameter search space, which improves the robustness of the search algorithm in identifying the global minimum. The Differential Evolution Adaptive Metropolis (DREAM) algorithm, a Markov Chain Monte Carlo (MCMC) sampler, is applied to the GSFLOW model in this stepwise approach to quantify meteorological input and parameter uncertainty. Results from this approach, posterior distributions for model parameters, and model uncertainty are presented. This analysis will not only produce a robust model, but will also help model developers understand non-linear relationships between model parameters and simulated processes.
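DREAM combines differential-evolution proposals with Metropolis acceptance across parallel chains. A heavily simplified sketch of that core idea (plain DE-MC on a toy 1-D Gaussian posterior; DREAM's adaptive crossover, subspace sampling, and outlier handling are omitted):

```python
import numpy as np

def de_mc(log_post, n_chains=8, n_iter=4000, d=1, seed=42):
    """Minimal differential-evolution MCMC: each chain proposes a jump
    along the difference of two other randomly chosen chains, accepted
    with the usual Metropolis rule."""
    rng = np.random.default_rng(seed)
    gamma = 2.38 / np.sqrt(2 * d)            # standard DE-MC jump scale
    x = rng.normal(size=(n_chains, d))       # initial chain states
    lp = np.array([log_post(xi) for xi in x])
    kept = []
    for it in range(n_iter):
        for i in range(n_chains):
            r1, r2 = rng.choice([j for j in range(n_chains) if j != i],
                                size=2, replace=False)
            prop = x[i] + gamma * (x[r1] - x[r2]) \
                   + rng.normal(scale=1e-3, size=d)  # small jitter
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp[i]:
                x[i], lp[i] = prop, lp_prop
        if it >= n_iter // 2:                # discard burn-in
            kept.append(x.copy())
    return np.concatenate(kept)

# Toy target: a standard normal posterior.
draws = de_mc(lambda t: -0.5 * float(t @ t))
mean, var = draws.mean(), draws.var()
```

For the standard-normal target the pooled draws should have mean ≈ 0 and variance ≈ 1.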
Hybrid and heterogeneous photonic integrated circuits for high-performance applications
NASA Astrophysics Data System (ADS)
Heck, Martijn J. R.
2015-02-01
Photonic integration based on silicon, silica, or indium phosphide technologies has reached a level of maturity where it has now become an integral part of telecom and datacom networks. However, although impressive levels of integration and bandwidth have been achieved, the performance of these technologies is relatively low, compared to fiber-optics and discrete bulk optics counterparts. This limits their application in more demanding fields like microwave photonics, e.g., for 4G/5G wireless communications, more advanced complex modulation formats for telecommunications, and highly energy-efficient interconnects. The invention of the ultra-low loss waveguide (ULLW) platform, by me and my co-workers at UC Santa Barbara, heralds a new range of applications for photonic integrated circuits. Fiber-like loss performance, with waveguide propagation losses < 0.1 dB/m, has been realized in waveguides with silicon nitride cores. This performance level represents an order of magnitude lower loss than silica-based waveguides, and 2 - 3 orders of magnitude lower than the silicon-on-insulator and indium phosphide PIC platforms. A combination of the silicon, ULLW, and/or indium phosphide platforms can be made using hybrid or heterogeneous integration techniques. Using "the best of both worlds" approach, improved performance can be achieved. I will discuss the opportunities that these technologies offer for various high-performance applications, such as low-noise lasers and oscillators, high-resolution radars and gyroscopes, and high-bandwidth photonic analog-to-digital converters.
Sharma, Diksha; Badano, Aldo [Division of Imaging and Applied Mathematics, Center for Devices and Radiological Health, Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993 (United States)
2013-03-15
Purpose: hybridMANTIS is a Monte Carlo package for modeling indirect x-ray imagers using columnar geometry based on a hybrid concept that maximizes the utilization of available CPU and graphics processing unit processors in a workstation. Methods: The authors compare hybridMANTIS x-ray response simulations to previously published MANTIS and experimental data for four cesium iodide scintillator screens. These screens have a variety of reflective and absorptive surfaces with different thicknesses. The authors analyze hybridMANTIS results in terms of modulation transfer function and calculate the root mean square difference and Swank factors from simulated and experimental results. Results: The comparison suggests that hybridMANTIS better matches the experimental data as compared to MANTIS, especially at high spatial frequencies and for the thicker screens. hybridMANTIS simulations are much faster than MANTIS with speed-ups up to 5260. Conclusions: hybridMANTIS is a useful tool for improved description and optimization of image acquisition stages in medical imaging systems and for modeling the forward problem in iterative reconstruction algorithms.
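The Swank factor reported above is computed from the moments of the simulated optical pulse-height distribution, I = M1²/(M0·M2). A minimal sketch on synthetic pulse heights (the numbers are invented for illustration, not taken from the paper):

```python
import numpy as np

def swank_factor(pulse_heights):
    """Swank information factor I = M1^2 / (M0 * M2) from the raw moments
    of the pulse-height distribution. I = 1 for a delta-function response;
    I < 1 when the response is broadened (e.g. by depth-dependent light
    collection in a columnar scintillator)."""
    p = np.asarray(pulse_heights, dtype=float)
    m0 = len(p)
    m1 = p.sum()
    m2 = (p ** 2).sum()
    return m1 ** 2 / (m0 * m2)

ideal = swank_factor([500.0] * 1000)       # every x ray yields 500 photons
rng = np.random.default_rng(7)
broad = swank_factor(rng.normal(500.0, 150.0, 10_000))  # broadened response
```

Broadening the pulse-height distribution strictly lowers I below 1, which is why screen thickness and surface treatment show up in the measured Swank factors.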
Evans, Mark F; Cooper, Kumarasen
2004-01-01
Human papillomaviruses (HPVs) are accepted as a necessary cause of cervical neoplasia. However, the benefits of testing simply for high-risk HPV types are limited because of their high prevalence in intraepithelial lesions of all grades, the majority of which regress if left untreated. One factor considered to be of key importance for the progression of intraepithelial lesions to invasive disease is integration of HPV into the host cell genome. Although questions remain about the prevalence of integration amongst pre-invasive lesions, sensitive in situ hybridization techniques utilizing tyramide reagents may aid determination of the significance of HPV infection by enabling routine detection of both high-risk HPV and its physical status. This will provide important data relevant not only to our understanding of the biology of HPV-associated neoplasia, but also potentially to clinical testing for HPV. PMID:14694515
Servo-integrated patterned media by hybrid directed self-assembly.
Xiao, Shuaigang; Yang, Xiaomin; Steiner, Philip; Hsu, Yautzong; Lee, Kim; Wago, Koichi; Kuo, David
2014-11-25
A hybrid directed self-assembly approach is developed to fabricate unprecedented servo-integrated bit-patterned media templates, by combining sphere-forming block copolymers with 5 teradot/in.² resolution capability, nanoimprint and optical lithography with overlay control. Nanoimprint generates prepatterns with different dimensions in the data field and servo field, respectively, and optical lithography controls the selective self-assembly process in either field. Two distinct directed self-assembly techniques, low-topography graphoepitaxy and high-topography graphoepitaxy, are elegantly integrated to create bit-patterned templates with flexible embedded servo information. Spinstand magnetic test at 1 teradot/in.² shows a low bit error rate of 10^-2.43, indicating fully functioning bit-patterned media and great potential of this approach for fabricating future ultra-high-density magnetic storage media. PMID:25380228
Fully integrated hybrid silicon free-space beam steering source with 32-channel phased array
NASA Astrophysics Data System (ADS)
Hulme, J. C.; Doylend, J. K.; Heck, M. J. R.; Peters, J. D.; Davenport, M. L.; Bovington, J. T.; Coldren, L. A.; Bowers, J. E.
2014-03-01
Free-space beam steering using optical phased arrays is a promising method for implementing free-space communication links and Light Detection and Ranging (LIDAR) without the sensitivity to inertial forces and long latencies which characterize moving parts. Implementing this approach on a silicon-based photonic integrated circuit adds the additional advantage of working with highly developed CMOS processing techniques. In this work we discuss our progress in the development of a fully integrated 32 channel PIC with a widely tunable diode laser, a waveguide phased array, an array of fast phase modulators, an array of hybrid III-V/silicon amplifiers, surface gratings, and a graded index lens (GRIN) feeding an array of photodiodes for feedback control. The PIC has been designed to provide beam steering across a 15°x5° field of view with 0.6°x0.6° beam width and background peaks suppressed 15 dB relative to the main lobe within the field of view for arbitrarily chosen beam directions. Fabrication follows the hybrid silicon process developed at UCSB with modifications to incorporate silicon diodes and a GRIN lens.
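Steering with an optical phased array amounts to programming a linear phase ramp across the emitters so the main lobe of the array factor points in the desired direction. A toy 32-channel array-factor calculation (the pitch and wavelength below are illustrative assumptions, not the paper's design values):

```python
import numpy as np

def array_factor(phases, pitch_um, wavelength_um, angles_deg):
    """Far-field array factor of a 1-D phased array of unit emitters at
    positions i*pitch with programmed phases phases[i]:
    AF(theta) = |sum_i exp(j (k x_i sin(theta) + phi_i))|."""
    angles = np.radians(angles_deg)
    k = 2 * np.pi / wavelength_um
    pos = np.arange(len(phases)) * pitch_um
    field = np.exp(1j * (k * pos[None, :] * np.sin(angles)[:, None]
                         + phases[None, :]))
    return np.abs(field.sum(axis=1))

n, pitch, lam = 32, 2.0, 1.55   # hypothetical: 32 channels, 2 um pitch, 1.55 um
target = 5.0                    # steer the main lobe to +5 degrees
# Linear phase ramp that cancels the propagation phase at the target angle.
phases = -2 * np.pi / lam * np.arange(n) * pitch * np.sin(np.radians(target))
angles = np.linspace(-15, 15, 3001)
af = array_factor(phases, pitch, lam, angles)
peak_angle = angles[np.argmax(af)]
```

At the target angle all 32 phasors add coherently, so the peak value equals the element count.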
Geng, Changran; Tang, Xiaobin; Qian, Wei; Guan, Fada; Johns, Jesse; Yu, Haiyan; Gong, Chunhui; Shu, Diyun; Chen, Da
2015-09-01
The S values for the thyroid as the radioiodine source organ to other target organs were investigated using Chinese hybrid reference phantoms and the Monte Carlo code MCNP5. Two radioiodine isotopes (125)I and (131)I uniformly distributed in the thyroid were investigated separately. We compared our S values for (131)I in Chinese phantoms with previous studies using other types of phantoms: Oak Ridge National Laboratory (ORNL) stylized phantoms, International Commission on Radiological Protection (ICRP) voxel phantoms, and University of Florida (UF) phantoms. Our results are much closer to the UF phantoms. For each specific target organ, the S value for (131)I is larger than for (125)I in both male and female phantoms. In addition, the S values and effective dose to surrounding face-to-face exposed individuals, including different genders and ages (10- and 15-year-old juniors, and adults) from an adult male radioiodine carrier were also investigated. The target organ S values and effective dose for surrounding individuals obey the inverse square law with the distance between source and target phantoms. The obtained effective dose data in Chinese phantoms are comparable to the results in a previous study using the UF phantoms. The data generated in this study can serve as the reference to make recommendations for radiation protection of the Chinese patients or nuclear workers. PMID:26344387
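The inverse-square behavior of the external-exposure S values reported above can be expressed as a one-line scaling rule. The reference value and distances below are invented for illustration, not the paper's data:

```python
def scaled_s_value(s_ref, d_ref, d):
    """Inverse-square scaling for the face-to-face exposure geometry:
    S(d) = S(d_ref) * (d_ref / d)**2. Units are whatever s_ref carries
    (hypothetical numbers used here)."""
    return s_ref * (d_ref / d) ** 2

# Doubling the source-target distance quarters the S value.
s1 = scaled_s_value(1.0e-4, 1.0, 2.0)
```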
Path integral Monte Carlo study of (H2)n@C70 (n = 1,2,3)
NASA Astrophysics Data System (ADS)
Hao, Yan; Zhang, Hong; Cheng, Xin-Lu
2015-08-01
The path integral Monte Carlo (PIMC) method is employed to study the thermal properties of C70 with one, two, and three H2 molecules confined in the cage, respectively. The interaction energies and vibrationally averaged spatial distributions at different temperatures are calculated to evaluate the stabilities of (H2)n@C70 (n = 1, 2, 3). The results show that (H2)2@C70 is more stable than H2@C70. The interaction energy changes only slowly over a large temperature range, so temperature has little effect on the stability of the system. For H2@C70 and (H2)2@C70 the interaction energies remain negative; however, when three H2 molecules are in the cage, the interaction energy rapidly increases to a positive value. This implies that at most two H2 molecules can be trapped by C70. With an increase of temperature, the peak of the spatial distribution gradually shifts away from the center of the cage, but the maximum distance from the center of an H2 molecule to the cage center remains much smaller than the average radius of C70. Project supported by the National Natural Science Foundation of China (Grant Nos. 11474207 and 11374217).
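The many-body PIMC machinery used in studies like this one is elaborate, but its core idea — Metropolis sampling of discretized imaginary-time paths weighted by e^(-S) — can be illustrated for a single particle in a 1D harmonic trap. This is our own minimal sketch (bead count, step size, and sweep counts are arbitrary choices), not the authors' code:

```python
import math
import random

def pimc_x2(beta=1.0, beads=16, sweeps=12000, step=0.6, seed=1):
    """Crude path integral Monte Carlo for one distinguishable particle
    in a 1D harmonic trap (hbar = m = omega = 1).  The imaginary-time
    path is a ring of 'beads'; Metropolis moves sample exp(-S), where S
    is the primitive (kinetic + potential) discretized action.
    Returns an estimate of <x^2>."""
    rng = random.Random(seed)
    tau = beta / beads
    x = [0.0] * beads

    def local_action(k, xk):
        # spring terms to both neighbours on the ring + potential at bead k
        xl, xr = x[k - 1], x[(k + 1) % beads]
        return ((xk - xl) ** 2 + (xr - xk) ** 2) / (2.0 * tau) \
               + tau * 0.5 * xk * xk

    total, count = 0.0, 0
    for sweep in range(sweeps):
        for k in range(beads):          # single-bead Metropolis moves
            new = x[k] + rng.uniform(-step, step)
            if rng.random() < math.exp(local_action(k, x[k]) - local_action(k, new)):
                x[k] = new
        # whole-path shift to decorrelate the path centroid quickly
        d = rng.uniform(-step, step)
        dS = tau * 0.5 * sum((xi + d) ** 2 - xi * xi for xi in x)
        if rng.random() < math.exp(-dS):
            x = [xi + d for xi in x]
        if sweep >= sweeps // 5:        # discard burn-in, then measure
            total += sum(xi * xi for xi in x) / beads
            count += 1
    return total / count
```

For β = 1 the exact value is ⟨x²⟩ = coth(1/2)/2 ≈ 1.08, and this crude estimator should land within a few percent of it (up to its statistical and Trotter errors).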
NASA Astrophysics Data System (ADS)
Ahn, Jeonghwan; Lee, Hoonkyung; Kwon, Yongkyung
2015-03-01
The existence of a stable commensurate structure in the second 4He layer on graphite has been a subject of intensive experimental and theoretical study because of its implications for the possible realization of two-dimensional supersolidity. Earlier path-integral Monte Carlo (PIMC) calculations of Pierce and Manousakis predicted a stable C4/7 commensurate structure above first-layer 4He atoms fixed at triangular lattice sites, but Corboz et al. later showed that no commensurate phase was stable when the quantum dynamics of the first-layer 4He atoms was incorporated in the PIMC calculations. On the other hand, recent heat capacity measurements of Nakamura et al. provided strong evidence for a commensurate solid in the second 4He layer over an extended density range. Motivated by this, we have performed new PIMC calculations for the second helium layer on graphite. Unlike previous PIMC calculations, where a laterally-averaged one-dimensional substrate potential was used, we here employ an anisotropic 4He-graphite potential described by a sum of 4He-C pair potentials. With this fully-corrugated substrate potential we obtain a more accurate description of the quantum dynamics of the first-layer 4He atoms and analyze its effects on the phase diagram of the second layer.
Path integral Monte Carlo study of 4He clusters doped with alkali and alkali-earth ions.
Galli, D E; Ceperley, D M; Reatto, L
2011-06-30
Path integral Monte Carlo calculations of (4)He nanodroplets doped with alkali (Na(+), K(+) and Cs(+)) and alkali-earth (Be(+) and Mg(+)) ions are presented. We study the system at T = 1 K and between 14 and 128 (4)He atoms. For all studied systems, we find that the ion is well localized at the center of the droplet, with the formation of a "snowball" of well-defined shells of localized (4)He atoms forming solid-like order in at least the first surrounding shell. The number of surrounding helium shells (two or three), the number of atoms per shell, and the degree of localization of the helium atoms are all sensitive to the type of ion. The number of (4)He atoms in the first shell varies from 12 for Na(+) to 18 for Mg(+) and depends weakly on the size of the droplet. The study of the density profile and of the angular correlations shows that the local solid-like order is more pronounced for the alkali ions, with Na(+) giving a very stable icosahedral order extending up to three shells. PMID:21568337
NASA Astrophysics Data System (ADS)
Dornheim, Tobias; Groth, Simon; Filinov, Alexey; Bonitz, Michael
2015-07-01
Correlated fermions are of high interest in condensed matter physics (Fermi liquids, Wigner molecules), cold atomic gases and dense plasmas. Here we propose a novel approach to path integral Monte Carlo (PIMC) simulations of strongly degenerate non-ideal fermions at finite temperature by combining a fourth-order factorization of the density matrix with antisymmetric propagators, i.e., determinants, between all imaginary time slices. To run efficiently through the modified configuration space, we introduce a modification of the widely used continuous-space worm algorithm, which allows for efficient sampling at arbitrary system parameters. We demonstrate how the application of determinants achieves an effective blocking of permutations with opposite signs, leading to significant relief of the fermion sign problem. To benchmark the capability of our method for the simulation of degenerate fermions, we consider multiple electrons in a quantum dot and compare our results with other ab initio techniques, where available. The present permutation-blocking PIMC approach allows us to obtain accurate results even for N = 20 electrons at low temperature and arbitrary coupling, where no other ab initio results have been reported so far.
NASA Astrophysics Data System (ADS)
Tramonto, F.; Salvestrini, P.; Nava, M.; Galli, D. E.
2015-07-01
By means of the Path Integral Monte Carlo method, we have performed a detailed microscopic study of 4He nanodroplets doped with an argon ion, Ar+, at K. We have computed density profiles, energies, and dissociation energies, and characterized the local order around the ion for nanodroplets with a number of 4He atoms ranging from 10 to 64, and also 128. We find the formation of a stable solid structure around the ion, a "snowball", consisting of three concentric shells in which the 4He atoms are placed at the vertices of Platonic solids: the first, inner shell is an icosahedron (12 atoms); the second is a dodecahedron with 20 atoms placed on the faces of the icosahedron of the first shell; the third shell is again an icosahedron, composed of 12 atoms placed on the faces of the dodecahedron of the second shell. The "magic numbers" implied by this structure, 12, 32, and 44 helium atoms, have been observed in a recent experimental study (Bartl et al., J Phys Chem A 118:8050, 2014) of these complexes; the dissociation-energy curve computed in the present work shows jumps corresponding to those found in the nanodroplet abundance distribution measured in that experiment, strengthening the agreement between theory and experiment. The same structures were predicted in Galli et al. (J Phys Chem A 115:7300, 2011) in a study regarding Na+@4He when ; a comparison between Ar+@4He and Na+@4He complexes is also presented.
NASA Astrophysics Data System (ADS)
Lindsay, A.; McCloskey, J.; Nalbant, S. S.; Simao, N.; Murphy, S.; NicBhloscaidh, M.; Steacy, S.
2013-12-01
Identifying fault sections where slip deficits have accumulated may provide a means of understanding sequences of large megathrust earthquakes. Stress accumulated during the interseismic period on locked sections of an active fault is stored as potential slip. Where this potential slip remains unreleased during earthquakes, a slip deficit can be said to have accrued. Analysis of the spatial distribution of slip during antecedent events along the fault will show where the locked plate has spent its stored slip and indicate where the potential for large events remains. The location of recent earthquakes and their distribution of slip can be estimated instrumentally. To develop the idea of long-term slip-deficit modelling it is necessary to constrain the size and distribution of slip for pre-instrumental events dating back hundreds of years, covering more than one 'seismic cycle'. This requires the exploitation of proxy sources of data. Coral microatolls, growing in the intertidal zone of the outer island arc of the Sunda trench, offer the possibility of producing high-resolution reconstructions of slip for a number of pre-instrumental earthquakes. Their growth is influenced by tectonic flexing of the continental plate beneath them, which allows them to act as long-term geodetic recorders. However, the sparse distribution of data available from coral geodesy results in an underdetermined problem with non-unique solutions. Instead of producing one definitive model satisfying the observed coral displacements, a Monte Carlo Slip Estimator based on a Genetic Algorithm (MCSE-GA), which accelerates the rate of convergence, is used to identify a suite of models consistent with the data. Successive iterations of the MCSE-GA sample different displacements at each coral location, from within the spread of associated uncertainties, producing a catalogue of models from the full range of possibilities.
The best slip distributions are weighted according to their fitness and stacked to produce a final estimate of the distribution of slip for a particular earthquake. Examination of the slip values in the stacked models allows areas of high confidence to be identified where the standard deviation is low; similarly, areas of low confidence are found where standard deviations are high. These high-resolution models can be used to reconstruct a history of slip along the fault, both identifying and quantifying slip deficits and constraining confidence in the accuracy of the modelled information. This presentation will demonstrate the ability of the MCSE-GA to produce accurate models of slip for instrumentally recorded earthquakes and show estimates of slip during paleoearthquakes along the Sunda Megathrust.
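The MCSE-GA itself is specialized to coral geodesy, but the generic genetic-algorithm loop it builds on — score a population against the observations, keep the fittest, breed and mutate the rest — can be sketched on a toy slip-inversion problem. Everything below (the Gaussian "slip" model, parameter ranges, GA settings) is our own illustrative assumption, not the authors' method:

```python
import math
import random

def misfit(params, xs, obs):
    """Sum of squared residuals between a toy slip model
    d(x) = A * exp(-((x - c) / 10)^2) and sparse observations."""
    A, c = params
    return sum((A * math.exp(-((x - c) / 10.0) ** 2) - d) ** 2
               for x, d in zip(xs, obs))

def ga_fit(xs, obs, pop_size=60, gens=120, seed=2):
    """Elitist genetic algorithm: rank by misfit, keep the top quarter,
    refill the population with crossed-over, mutated children."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0.0, 10.0), rng.uniform(0.0, 100.0))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: misfit(p, xs, obs))
        elite = pop[:pop_size // 4]
        children = []
        while len(elite) + len(children) < pop_size:
            pa, pb = rng.sample(elite, 2)                  # select parents
            mid = [(a + b) / 2.0 for a, b in zip(pa, pb)]  # crossover
            children.append(tuple(g + rng.gauss(0.0, 0.5) for g in mid))
        pop = elite + children
    return min(pop, key=lambda p: misfit(p, xs, obs))

# Synthetic 'observations' from a known model (A = 5, c = 40), no noise.
xs = [10.0, 25.0, 40.0, 55.0, 70.0]
obs = [5.0 * math.exp(-((x - 40.0) / 10.0) ** 2) for x in xs]
best = ga_fit(xs, obs)
```

A real inversion would, as the abstract describes, run this repeatedly with observations resampled from their uncertainty spread, then weight and stack the resulting suite of models.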
NSDL National Science Digital Library
David Joiner
Monte Carlo modeling refers to the solution of mathematical problems with the use of random numbers. This can include both function integration and the modeling of stochastic phenomena using random processes.
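In its simplest form — estimate the integral of f over [a, b] as the interval width times the average of f at uniformly random points — Monte Carlo integration fits in a few lines. This is our own illustrative sketch, not NSDL material:

```python
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Monte Carlo estimate of the integral of f over [a, b]:
    average f at n uniform samples and scale by the width (b - a)."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# The integral of x^2 over [0, 1] is exactly 1/3; the estimate
# converges to it at the usual O(1/sqrt(n)) Monte Carlo rate.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0)
```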
NASA Technical Reports Server (NTRS)
Combi, Michael R.
2004-01-01
In order to understand the global structure, dynamics, and physical and chemical processes occurring in the upper atmospheres, exospheres, and ionospheres of the Earth, the other planets, comets and planetary satellites, and their interactions with their outer particle and field environments, it is often necessary to address the fundamentally non-equilibrium aspects of the physical environment. These are regions where complex chemistry, energetics, and electromagnetic field influences are important. Traditional approaches, based largely on hydrodynamic or magnetohydrodynamic (MHD) formulations, are very important and highly useful. However, these methods often have limitations in rarefied physical regimes, where molecular collision rates and ion gyrofrequencies are small and where interactions with ionospheres and upper neutral atmospheres are important. At the University of Michigan we have an established base of experience and expertise in numerical simulations based on particle codes which address these physical regimes. The Principal Investigator, Dr. Michael Combi, has over 20 years of experience in the development of particle-kinetic and hybrid kinetic-hydrodynamics models and their direct use in data analysis. He has also worked on ground-based and space-based remote observations and on spacecraft instrument teams. His research has involved studies of cometary atmospheres and ionospheres and their interaction with the solar wind, the neutral gas clouds escaping from Jupiter's moon Io, the interaction of the atmospheres/ionospheres of Io and Europa with Jupiter's corotating magnetosphere, as well as Earth's ionosphere. This report describes our progress during the year.
The material contained in section 2 of this report will serve as the basis of a paper describing the method and its application to the cometary coma; that work will be continued under a research and analysis grant that supports various applications of theoretical comet models to understanding the inner comae of comets (grant NAG5-13239 from the Planetary Atmospheres program).
Automatic on-chip RNA-DNA hybridization assay with integrated phase change microvalves
NASA Astrophysics Data System (ADS)
Weng, Xuan; Jiang, Hai; Wang, Junsheng; Chen, Shu; Cao, Honghe; Li, Dongqing
2012-07-01
An RNA-DNA hybridization assay microfluidic chip integrated with electrothermally actuated phase-change microvalves for detecting pathogenic bacteria is presented in this paper. In order to realize the sequential loading and washing processes required in such an assay, gravity-based pressure-driven flow and phase-change microvalves were used in the microfluidic chip. Paraffin wax was used as the phase-change material in the valves, and thin-film heaters were used to electrothermally actuate the microvalves. Light absorption, measured by a photodetector, was used to determine the concentrations of the samples. Automatic control of the complete assay was implemented by a self-coded LabVIEW program. To examine the performance of the chip, Salmonella was used as a sample pathogen. A significant reduction in reagent/sample consumption (up to 20-fold) was achieved by this on-chip assay, compared with using a commercial test kit following the same protocol in a conventional lab. The experimental results show that quantitative detection can be obtained in approximately 26 min, and the detection limit is as low as 10^3 CFU ml^-1. This RNA-DNA hybridization assay microfluidic chip shows excellent potential for the development of a portable device for point-of-testing applications.
Wu, Chunxiao; Wyatt, Alexander W; Lapuk, Anna V; McPherson, Andrew; McConeghy, Brian J; Bell, Robert H; Anderson, Shawn; Haegert, Anne; Brahmbhatt, Sonal; Shukin, Robert; Mo, Fan; Li, Estelle; Fazli, Ladan; Hurtado-Coll, Antonio; Jones, Edward C; Butterfield, Yaron S; Hach, Faraz; Hormozdiari, Fereydoun; Hajirasouliha, Iman; Boutros, Paul C; Bristow, Robert G; Jones, Steven JM; Hirst, Martin; Marra, Marco A; Maher, Christopher A; Chinnaiyan, Arul M; Sahinalp, S Cenk; Gleave, Martin E; Volik, Stanislav V; Collins, Colin C
2013-01-01
Next-generation sequencing is making sequence-based molecular pathology and personalized oncology viable. We selected an individual initially diagnosed with conventional but aggressive prostate adenocarcinoma and sequenced the genome and transcriptome from primary and metastatic tissues collected prior to hormone therapy. The histology-pathology and copy number profiles were remarkably homogeneous, yet it was possible to propose the quadrant of the prostate tumour that likely seeded the metastatic diaspora. Despite a homogeneous cell type, our transcriptome analysis revealed signatures of both luminal and neuroendocrine cell types. Remarkably, the repertoire of expressed but apparently private gene fusions, including C15orf21:MYC, recapitulated this biology. We hypothesize that the amplification and over-expression of the stem cell gene MSI2 may have contributed to the stable hybrid cellular identity. This hybrid luminal-neuroendocrine tumour appears to represent a novel and highly aggressive case of prostate cancer with unique biological features and, conceivably, a propensity for rapid progression to castrate-resistance. Overall, this work highlights the importance of integrated analyses of genome, exome and transcriptome sequences for basic tumour biology, sequence-based molecular pathology and personalized oncology. PMID:22294438
Hybrid plasmon/dielectric waveguide for integrated silicon-on-insulator optical elements
NASA Astrophysics Data System (ADS)
Banks, Jonathan; Flammer, David; Durfee, Charles; Furtak, Tom; Collins, Reuben; Hollingsworth, Russell
2009-10-01
We present a hybrid plasmon/dielectric single-mode, single-polarization waveguide on silicon-on-insulator wafers. The structure can be fabricated using VLSI processing techniques and minimizes losses due to surface roughness and absorption in the metal. Because only a single mode and a single polarization are admitted, birefringent effects are eliminated. Both simulations and experimental verification of the modes are presented. Simulations show the waveguide can be tuned for either very long propagation lengths or sub-wavelength confinement by changing a patterned metal line width and oxide thickness, both easily done with VLSI methods. Simulations show sub-wavelength confinement modes with propagation lengths greater than 100 microns, and micron-scale confinement modes with 7 mm propagation lengths. The structure naturally forms an MOS capacitor that may be used for active device integration.
Celik, Metin
2009-03-01
The International Safety Management (ISM) Code defines a broad framework for the safe management and operation of merchant ships, maintaining high standards of safety and environmental protection. On the other hand, ISO 14001:2004 provides a generic, worldwide environmental management standard that has been utilized by several industries. Both the ISM Code and ISO 14001:2004 have the practical goal of establishing a sustainable Integrated Environmental Management System (IEMS) for shipping businesses. This paper presents a hybrid design methodology that shows how requirements from both standards can be combined into a single execution scheme. Specifically, the Analytic Hierarchy Process (AHP) and Fuzzy Axiomatic Design (FAD) are used to structure an IEMS for ship management companies. This research provides decision aid to maritime executives in order to enhance the environmental performance in the shipping industry. PMID:19038488
Electrocoagulation-integrated hybrid membrane processes for the treatment of tannery wastewater.
Keerthi; Vinduja, V; Balasubramanian, N
2013-10-01
Three different combinations of treatment techniques, i.e. electrocoagulation combined with microfiltration (EMR), membrane bioreactor (MBR) and electrocoagulation integrated with membrane bioreactor (hybrid MBR, (HMBR)), were analysed and compared for the treatment of tannery wastewater operated for 7 days under the constant trans-membrane pressure of 5 kPa. HMBR was found to be most suitable in performance as well as fouling reduction, with 94 % of chemical oxygen demand (COD) removal, 100 % chromium removal and 8 % improvement in percentage reduction in permeate flux compared to MBR with only 90 % COD removal and 67 % chromium removal. The effect of mixed liquor suspended solids on fouling was also investigated and was found to be insignificant. EMR was capable of elevating the flux but was not as efficient as HMBR and MBR in COD removal. Fouling reduction by HMBR was further confirmed by SEM-EDX and particle size analysis. PMID:23653316
Srivastava, Anurag K.; Annabathina, Bharath; Kamalasadan, Sukumar
2010-04-15
Plug-in hybrid electric vehicles (PHEVs) may be prime candidates for the next generation of vehicles, but they pose several technological and economic challenges. This article assesses current progress in PHEV technology, market trends, research needs, challenges ahead, and policy options for integrating PHEVs into the electric grid. (author)
NASA Astrophysics Data System (ADS)
Wagner, Marcus
Based on Richard P. Feynman's formulation of quantum mechanics, Path Integral Monte Carlo is a computational ab-initio method to calculate finite-temperature equilibrium properties of quantum many-body systems. As input, only fundamental physical constants and pair potentials are required. We carry out the first ab-initio particle simulations of three related physical systems. First, the bare H_2 substrate is simulated between 0.5 and 1.3 K, because a liquid H_2 film is a candidate for a new superfluid. We find evidence of quantum exchange in surface terraces up to 1 K. Second, the melting of the H_2 surface between 3 and 15 K is examined, since this is the cleanest example of quantum surface melting. Third, atomically thin superfluid ^4He films on H_2 surfaces are simulated, calculating binding energies per ^4He atom and third sound, an important experimental probe for superfluid ^4He films. For all systems we compute density profiles perpendicular and parallel to the surface and compare to experiment. We treat both H_2 molecules and ^4He atoms on the same footing, as spherical particles. For simulations of bulk/vapor interfaces and surface adsorption, a realistic representation of the macroscopic surface is crucial. Therefore, we introduce an external potential to account for arbitrarily layered substrates and long-range corrections. Two algorithms for parallel computers with independent processors are introduced: one to manage concurrent simulations of entire phase diagrams, and one to improve input/output speed for files shared by all processors.
Novel Hybrid of LS-SVM and Kalman Filter for GPS/INS Integration
NASA Astrophysics Data System (ADS)
Xu, Zhenkai; Li, Yong; Rizos, Chris; Xu, Xiaosu
Integration of Global Positioning System (GPS) and Inertial Navigation System (INS) technologies can overcome the drawbacks of the individual systems. One of the advantages is that the integrated solution can provide continuous navigation capability even during GPS outages. However, bridging GPS outages is still a challenge when Micro-Electro-Mechanical System (MEMS) inertial sensors are used. Methods currently being explored by the research community include applying vehicle motion constraints, optimal smoothers, and artificial intelligence (AI) techniques. In the AI research area, the neural network (NN) approach has been extensively utilised to date. In an NN-based integrated system, a Kalman filter (KF) estimates position, velocity and attitude errors, as well as the inertial sensor errors, to output navigation solutions while GPS signals are available. At the same time, an NN is trained to map the vehicle dynamics to the corresponding KF states, and to correct INS measurements when GPS measurements are unavailable. To achieve good performance it is critical to select samples of suitable quality, and an optimal number of them, for the NN. This is sometimes too rigorous a requirement, which limits real-world application of NN-based methods. The support vector machine (SVM) approach is based on the structural risk minimisation principle, instead of the minimised empirical error principle commonly implemented in an NN. The SVM can avoid the local minimisation and over-fitting problems of an NN, and therefore can potentially achieve a higher level of global performance. This paper focuses on the least squares support vector machine (LS-SVM), which can solve highly nonlinear and noisy black-box modelling problems. The paper explores the application of the LS-SVM to aid the GPS/INS integrated system, especially during GPS outages; it describes the principles of the LS-SVM/KF hybrid method and introduces the LS-SVM regression algorithm.
Field test data are processed to evaluate the performance of the proposed approach.
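As a concrete, hedged illustration of the LS-SVM regression algorithm mentioned above: in the standard formulation, training reduces to a single linear solve of [[0, 1ᵀ], [1, K + I/γ]] [b; α] = [0; y], after which predictions are f(x) = Σᵢ αᵢ k(x, xᵢ) + b. The sketch below is our own generic implementation of that formulation, not the authors' GPS/INS code:

```python
import math

def rbf(a, b, sigma=1.0):
    """Gaussian (RBF) kernel between two scalar inputs."""
    return math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for a dense system."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def lssvm_train(xs, ys, gamma=100.0, sigma=1.0):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] and
    return the predictor f(x) = sum_i alpha_i k(x, x_i) + b."""
    n = len(xs)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        row = [1.0] + [rbf(xs[i], xs[j], sigma) for j in range(n)]
        row[i + 1] += 1.0 / gamma        # ridge term from the squared loss
        A.append(row)
    sol = solve(A, [0.0] + list(ys))
    b, alpha = sol[0], sol[1:]
    return lambda x: b + sum(a * rbf(x, xi, sigma)
                             for a, xi in zip(alpha, xs))

# Fit a noiseless sine and predict between the training points.
xs = [0.5 * i for i in range(13)]        # 0.0 .. 6.0
ys = [math.sin(x) for x in xs]
f = lssvm_train(xs, ys)
```

In the navigation setting the inputs would be vehicle-dynamics features and the targets the KF error states, but the linear-solve structure is the same.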
Electric-drive tractability indicator integrated in hybrid electric vehicle tachometer
Tamai, Goro; Zhou, Jing; Weslati, Feisel
2014-09-02
An indicator, system and method of indicating electric drive usability in a hybrid electric vehicle. A tachometer is used that includes a display having an all-electric drive portion and a hybrid drive portion. The all-electric drive portion and the hybrid drive portion share a first boundary which indicates a minimum electric drive usability and a beginning of hybrid drive operation of the vehicle. The indicated level of electric drive usability is derived from at least one of a percent battery discharge, a percent maximum torque provided by the electric drive, and a percent electric drive to hybrid drive operating cost for the hybrid electric vehicle.
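The abstract states only that the displayed level is derived from "at least one of" three percentage measures. One plausible, entirely hypothetical mapping (ours, not the patent's) combines whichever measures are supplied and clamps the result to the gauge range:

```python
def electric_drive_usability(pct_battery_discharge=None,
                             pct_max_torque=None,
                             pct_operating_cost=None):
    """Hypothetical indicator logic: combine whichever of the three
    percentage measures are supplied (the patent says 'at least one')
    by taking their maximum, clamped to the 0-100 gauge range."""
    measures = [m for m in (pct_battery_discharge,
                            pct_max_torque,
                            pct_operating_cost) if m is not None]
    if not measures:
        raise ValueError("need at least one measure")
    return min(max(max(measures), 0.0), 100.0)

# Torque demand dominates here, so it sets the displayed level.
level = electric_drive_usability(pct_battery_discharge=30.0,
                                 pct_max_torque=55.0)
```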
SiGe Integrated Circuit/SQUID Hybrid Cryogenic Multiplexer for Superconducting Bolometer Array
NASA Astrophysics Data System (ADS)
Prêle, D.; Voisin, F.; Oger, R.; Chapron, C.; Bréelle, E.; Piat, M.
2009-12-01
The development of large superconducting bolometer (Transition Edge Sensor: TES) arrays requires ultra low noise amplification and multiplexing electronics. The use of a first transducer stage such as a SQUID (Superconducting QUantum Interference Device) allows ultimate performance in terms of noise. However, the linearization of the SQUID characteristic requires low noise amplification. Furthermore, to realize a time domain multiplexer with SQUIDs, switched biasing is also needed. We have designed an Integrated Circuit (IC) in standard BiCMOS SiGe technology for the readout and the control of a SQUID multiplexer. It includes a low noise amplifier with multiplexed inputs, switched current sources for SQUIDs, and digital circuit for the addressing with only one room temperature clock signal. We have successfully tested this integrated circuit down to 2 K. To validate the operation of a SQUID multiplexer controlled by this SiGe cryogenic IC, we have developed a 2×2 SQUID hybrid demonstrator. It consists of four commercial SQUIDs connected to a SiGe IC.
First application-close measurements applying the new hybrid integrated MEMS spectrometer
NASA Astrophysics Data System (ADS)
Grüger, Heinrich; Pügner, Tino; Knobbe, Jens; Schenk, Harald
2013-05-01
Grating spectrometers have been designed in many different configurations. Potential high-volume applications now call for extremely miniaturized and low-cost systems. By the use of integrated MEMS (micro-electro-mechanical systems) scanning grating devices, a less expensive single detector can be used in the NIR instead of the array detectors required for fixed-grating systems. The design of a hybrid integrated MEMS scanning grating spectrometer has now been drawn up. The MEMS device was fabricated in Fraunhofer IPMS's own clean-room facility. The chip is mounted on a small circuit board together with the detector and then stacked with a spacer and mirror substrate; the spectrometer is thus realized by stacking several planar substrates using sophisticated mounting technologies. The spectrometer has been designed for the 950 nm to 1900 nm spectral range and 9 nm spectral resolution, with organic-matter analysis in mind. First applications are envisaged in food quality analysis and food-processing technology. As an example of the use of a spectrometer with this performance, the grilling of a steak was analyzed; similar measurements would be possible on dairy products, vegetables or fruit. The idea is a mobile spectrometer for in situ and on-site analysis applications, in or attached to a host system providing processing, data access and input-output capabilities, regardless of whether that host is a laptop, tablet, smartphone or embedded platform.
NASA Astrophysics Data System (ADS)
Sercu, Jeannick; Fache, Niels; Libbrecht, Frank; Lagasse, Paul
1995-05-01
In this paper, a mixed potential integral equation (MPIE) formulation for hybrid microstrip-slotline multilayered circuits is presented. This integral equation is solved with the method of moments (MoM) in combination with Galerkin's method. The vector-valued rooftop functions defined over a mixed rectangular-triangular mesh are used to model the electric and magnetic currents on the microstrip and slotline structures. An efficient calculation technique for the quadruple interaction integrals between two cells in the system matrix equation is presented. Two examples of hybrid microstrip-slotline circuits are discussed. The first example compares the simulation results for a microstrip-slotline transition with measured data. The second example illustrates the use of the simulation technique in the design process of a broadband slot-coupled microstrip line transition.
ERIC Educational Resources Information Center
Rodriguez-Keyes, Elizabeth; Schneider, Dana A.
2013-01-01
This study illustrates an experience of implementing a hybrid model for teaching human behavior in the social environment in an urban university setting. Developing a hybrid model in a BSW program arose out of a desire to reach students in a different way. Designed to promote curiosity and active learning, this particular hybrid model has students…
NASA Astrophysics Data System (ADS)
Jiang, Jian; Luo, Jingshan; Zhu, Jianhui; Huang, Xintang; Liu, Jinping; Yu, Ting
2013-08-01
Controlled integration of multiple semiconducting oxides into each single unit of ordered nanotube arrays is highly desired in scientific research for the realization of more attractive applications. We herein report a diffusion-controlled solid-solid route to evolve simplex Co(CO3)0.5(OH)·0.11H2O@TiO2 core-shell nanowire arrays (NWs) into CoO-CoTiO3 integrated hybrid nanotube arrays (NTs) with preserved morphology. During the evolution procedure, the decomposition of Co(CO3)0.5(OH)·0.11H2O NWs into chains of CoCO3 nanoparticles initiates the diffusion process and promotes the interfacial solid-solid diffusion reaction even at a low temperature of 450 °C. The resulting CoO-CoTiO3 NTs possess well-defined sealed tubular geometries and a special "inner-outer" hybrid nature, which is suitable for application in Li-ion batteries (LIBs). As a proof-of-concept demonstration of the functions of such hybrid NTs in LIBs, CoO-CoTiO3 NTs are directly tested as LIB anodes, exhibiting both a high capacity (~600 mA h g^-1 still remaining after 250 continuous cycles) and a much better cycling performance (no capacity fading within 250 total cycles) than CoO NWs. Our work presents not only a diffusion route for the formation of integrated hybrid NTs but also a new concept that can be employed as a general strategy to fabricate other oxide-based hybrid NTs for energy storage devices. Electronic supplementary information (ESI) available: SEM images of Co(CO3)0.5(OH)·0.11H2O NWs, SEM/TEM images of CoO-CoTiO3 hybrid nanotubes and the calculation of the CoTiO3 theoretical capacity. See DOI: 10.1039/c3nr01786a
NASA Astrophysics Data System (ADS)
Mishechkin, Oleg Viktorovich
2003-10-01
A technological platform based on a low-temperature hybrid sol-gel method for the fabrication of optical waveguides and integrated optical components has been developed. The developed chemistry for dopant incorporation in the host network provides a range of refractive indices (1.444-1.51) critical for device optimization. A passivation method for improving the long-term stability of organic-inorganic sol-gel material is reported. The increase in waveguide loss over time due to moisture adsorption from the atmosphere is drastically suppressed by coating the material with a protective thin SiO2 film. The results indicate a long-term optical loss below 0.3 dB/cm for protected waveguides. The theory of multimode interference couplers employing the self-imaging effect is described. A novel approach for the design of high-performance MMI devices in low-contrast material is proposed. The design method is based on optimization of the refractive index contrast and the width of a multimode waveguide (the body of MMI couplers) to achieve a maximum number of constructively interfering modes, resulting in the best self-imaging. This optimization is carried out using 3D BPM simulations. The method was applied to design 1 x 4, 1 x 12, and 4 x 4 MMI couplers and led to superior performance in excess loss, power imbalance in output ports, and polarization sensitivity. Taking advantage of the inherent input-output phase relations in a 4 x 4 MMI coupler, an optical 90° hybrid is realized by incorporating a Y-junction to coherently excite two ports of the coupler. A series of MMI couplers were fabricated and characterized. The experimental results are in good agreement with the design. The measured performance of the sol-gel derived MMI components was compared to analogues fabricated by other technologies. The comparison demonstrates the superior performance of the sol-gel devices. The polarization sensitivity of all fabricated couplers is below 0.05 dB.
Advanced Hybrid Spacesuit Concept Featuring Integrated Open Loop and Closed Loop Ventilation Systems
NASA Technical Reports Server (NTRS)
Daniel, Brian A.; Fitzpatrick, Garret R.; Gohmert, Dustin M.; Ybarra, Rick M.; Dub, Mark O.
2013-01-01
A document discusses the design and prototype of an advanced spacesuit concept that integrates the capability to function seamlessly with multiple ventilation system approaches. Traditionally, spacesuits are designed to operate both dependently and independently of a host vehicle environmental control and life support system (ECLSS). Spacesuits that operate independently of vehicle-provided ECLSS services must do so with equipment self-contained within or on the spacesuit. Suits that are dependent on vehicle-provided consumables must remain physically connected to and integrated with the vehicle to operate properly. This innovation is the design and prototype of a hybrid spacesuit approach that configures the spacesuit to seamlessly interface and integrate with either type of vehicle system, while still maintaining the ability to function completely independently of the vehicle. An existing Advanced Crew Escape Suit (ACES) was utilized as the platform from which to develop the innovation. The ACES was retrofitted with selected components and one-off items to achieve the objective; components were selected to provide suit connectors, hoses/umbilicals, internal breathing system ducting/conduits, etc. The concept utilizes a low-pressure-drop, high-flow ventilation system that serves as a conduit from the vehicle supply into the suit, up through a neck seal, into the breathing helmet cavity, back down through the neck seal, out of the suit, and back to the vehicle. The concept also utilizes a modified demand-based breathing system configured to function seamlessly with the low-pressure-drop closed-loop ventilation system.
NASA Astrophysics Data System (ADS)
Griesheimer, D. P.; Gill, D. F.; Nease, B. R.; Sutton, T. M.; Stedry, M. H.; Dobreff, P. S.; Carpenter, D. C.; Trumbull, T. H.; Caro, E.; Joo, H.; Millman, D. L.
2014-06-01
MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed-source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 are provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information.
Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each tally. The tally system has been optimized to maintain a high level of efficiency, even as the number of edit regions becomes very large.
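The tally-with-phase-filter scheme described above can be sketched generically. The classes and names below are illustrative assumptions for explanation only, not MC21's actual interface:

```python
# Illustrative sketch of a generalized tally with phase filters:
# a particle contributes to a tally only if it passes every attached
# filter (region, energy, ...). Hypothetical API, not MC21's.
from dataclasses import dataclass

@dataclass
class Particle:
    region: str     # spatial cell the particle currently occupies
    energy: float   # MeV
    weight: float   # statistical weight

class Tally:
    """Accumulates weighted scores for particles passing all filters."""
    def __init__(self, *filters):
        self.filters = filters
        self.total = 0.0

    def score(self, p: Particle, contribution: float):
        if all(f(p) for f in self.filters):
            self.total += p.weight * contribution

# Example: a tally restricted to the "core" region and fast energies.
core_fast = Tally(lambda p: p.region == "core",
                  lambda p: p.energy > 0.1)

for p in [Particle("core", 1.0, 1.0),
          Particle("core", 0.01, 1.0),    # filtered out: energy too low
          Particle("shield", 2.0, 1.0)]:  # filtered out: wrong region
    core_fast.score(p, contribution=1.0)

print(core_fast.total)  # only the first particle scores: 1.0
```

The design choice mirrors the abstract: filters control *which* particles may score, so many edit regions can share one scoring pathway.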
NASA Astrophysics Data System (ADS)
Zhang, Xingyu; Hosseini, Amir; Subbaraman, Harish; Wang, Shiyi; Zhan, Qiwen; Luo, Jingdong; Jen, Alex K.; Chung, Chi-jui; Yan, Hai; Pan, Zeyu; Nelson, Robert L.; Lee, Charles Y.; Chen, Ray T.
2015-03-01
The detection and measurement of electromagnetic fields have attracted significant attention in recent years. Traditional electronic electromagnetic field sensors use large active conductive probes which perturb the field to be measured and also make the devices bulky. To address these problems, integrated photonic electromagnetic field sensors have been developed, in which an optical signal is modulated by an RF signal collected by a miniaturized antenna. In this work, we design, fabricate and characterize a compact, broadband and highly sensitive integrated photonic electromagnetic field sensor based on a silicon-organic hybrid modulator driven by a bowtie antenna. The large electro-optic (EO) coefficient of the organic polymer, the slow-light effect in the silicon slot photonic crystal waveguide (PCW), and the broadband field enhancement provided by the bowtie antenna are all combined to enhance the interaction of microwaves and optical waves, enabling a high EO modulation efficiency and thus a high sensitivity. The modulator is experimentally demonstrated with a record-high effective in-device EO modulation efficiency of r33 = 1230 pm/V. Modulation response up to 40 GHz is measured, with a 3-dB bandwidth of 11 GHz. The slot PCW has an interaction length of 300 μm, and the bowtie antenna has an area smaller than 1 cm2. The bowtie antenna in the device is experimentally demonstrated to have broadband characteristics with a central resonance frequency of 10 GHz, as well as a large beam width which enables the detection of electromagnetic waves over a large range of incident angles. The sensor is experimentally demonstrated with a minimum detectable electromagnetic power density of 8.4 mW/m2 at 8.4 GHz, corresponding to a minimum detectable electric field of 2.5 V/m and an ultra-high sensitivity of 0.000027 V/m Hz^-1/2, the highest demonstrated to date.
To the best of our knowledge, this is the first silicon-organic hybrid device and also the first PCW device used for the photonic detection of electromagnetic waves. Finally, we propose some future work, including a terahertz wave sensor based on an antenna-coupled electro-optic polymer filled plasmonic slot waveguide, as well as a fully packaged and pigtailed device.
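The quoted figures are internally consistent under a simple free-space plane-wave assumption, S = E^2 / (2 Z0) with E the peak field amplitude. A quick numerical check (this relation is my assumption; the abstract does not state which convention it uses):

```python
# Sanity check: convert the quoted minimum detectable power density
# (8.4 mW/m^2) into a peak electric field, assuming a plane wave in
# free space: S = E^2 / (2 * Z0).
import math

Z0 = 376.73   # free-space wave impedance, ohms
S = 8.4e-3    # W/m^2, minimum detectable power density from the abstract

E_peak = math.sqrt(2 * Z0 * S)
print(round(E_peak, 2))  # 2.52, close to the quoted 2.5 V/m
```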
NASA Astrophysics Data System (ADS)
Ilgüsatiroglu, Emre; Illarionov, Alexey Yu.; Ciappa, Mauro; Pfäffli, Paul; Bomholt, Lars
2014-04-01
A new Monte Carlo code is presented that includes, among other features, the definition of arbitrary geometries with sub-nanometer resolution, high-performance parallel computing capabilities, trapped charge, electric field calculation, electron tracking in an electrostatic field, and calculation of 3D dose distributions. These functionalities are efficiently implemented thanks to the coupling of the Monte Carlo simulator with a TCAD environment. The applications shown are the synthesis of SEM linescans and images that focus on the evaluation of the impact of proximity effects and self-charging on the quantitative extraction of critical dimensions in dense photoresist structures.
Jiang, Jian; Luo, Jingshan; Zhu, Jianhui; Huang, Xintang; Liu, Jinping; Yu, Ting
2013-09-01
Controlled integration of multiple semiconducting oxides into each single unit of ordered nanotube arrays is highly desired in scientific research for the realization of more attractive applications. We herein report a diffusion-controlled solid-solid route to evolve simplex Co(CO3)0.5(OH)0.11H2O@TiO2 core-shell nanowire arrays (NWs) into CoO-CoTiO3 integrated hybrid nanotube arrays (NTs) with preserved morphology. During the evolution procedure, the decomposition of Co(CO3)0.5(OH)0.11H2O NWs into chains of CoCO3 nanoparticles initiates the diffusion process and promotes the interfacial solid-solid diffusion reaction even at a low temperature of 450 °C. The resulting CoO-CoTiO3 NTs possess well-defined sealed tubular geometries and a special "inner-outer" hybrid nature, which is suitable for application in Li-ion batteries (LIBs). As a proof-of-concept demonstration of the functions of such hybrid NTs in LIBs, CoO-CoTiO3 NTs are directly tested as LIB anodes, exhibiting both a high capacity (~600 mA h g(-1) still remaining after 250 continuous cycles) and a much better cycling performance (no capacity fading within 250 total cycles) than CoO NWs. Our work presents not only a diffusion route for the formation of integrated hybrid NTs but also a new concept that can be employed as a general strategy to fabricate other oxide-based hybrid NTs for energy storage devices. PMID:23884214
Fabrication of reproducible, integration-compatible hybrid molecular/si electronics.
Yu, Xi; Lovrinčić, Robert; Kraynis, Olga; Man, Gabriel; Ely, Tal; Zohar, Arava; Toledano, Tal; Cahen, David; Vilan, Ayelet
2014-12-29
Reproducible molecular junctions can be integrated within standard CMOS technology. Metal-molecule-semiconductor junctions are fabricated by direct Si-C binding of hexadecane or methyl-styrene onto oxide-free H-Si(111) surfaces, with the lateral size of the junctions defined by an etched SiO2 well and with evaporated Pb as the top contact. The current density, J, is highly reproducible with a standard deviation in log(J) of 0.2 over a junction diameter change from 3 to 100 μm. Reproducibility over such a large range indicates that transport is truly across the molecules and does not result from artifacts like edge effects or defects in the molecular monolayer. Device fabrication is tested for two n-Si doping levels. With highly doped Si, transport is dominated by tunneling and reveals sharp conductance onsets at room temperature. Using the temperature dependence of the current across medium-doped n-Si, the molecular tunneling barrier can be separated from the Si-Schottky one, which is 0.47 eV, in agreement with the molecular-modified surface dipole and quite different from the bare Si-H junction. This indicates that Pb evaporation does not cause significant chemical changes to the molecules. The ability to manufacture reliable devices constitutes important progress toward possible future hybrid Si-based molecular electronics. PMID:25098545
T.F. Eibert; J.L. Volakis; Y.E. Erdemli
2002-03-03
Hybrid finite element (FE)-boundary integral (BI) analysis of infinite periodic arrays is extended to include planar multilayered Green's functions. In this manner, a portion of the volumetric dielectric region can be modeled via the finite element method, whereas uniform multilayered regions can be modeled using a multilayered Green's function. As such, thick uniform substrates can be modeled without loss of efficiency and accuracy. The multilayered Green's function is analytically computed in the spectral domain, and the resulting BI matrix-vector products are evaluated via the fast spectral domain algorithm (FSDA). As a result, the computational cost of the matrix-vector products is kept at O(N). Furthermore, the number of Floquet modes in the expansion is kept very small by placing the BI surfaces within the computational unit cell. Examples of frequency selective surface (FSS) arrays are analyzed with this method to demonstrate the accuracy and capability of the approach. One example involves complicated multilayered substrates above and below an inhomogeneous filter element, and the other is an optical ring-slot array on a substrate several hundred wavelengths in thickness. Comparisons with measurements are included.
NASA Astrophysics Data System (ADS)
Bonoli, P. T.; Shiraiwa, S.; Wright, J. C.; Harvey, R. W.; Batchelor, D. B.; Berry, L. A.; Chen, Jin; Poli, F.; Kessel, C. E.; Jardin, S. C.
2012-10-01
Recent upgrades to the ion cyclotron RF (ICRF) and lower hybrid RF (LHRF) components of the Integrated Plasma Simulator [1] have made it possible to simulate LH current drive in the presence of ICRF minority heating and mode conversion electron heating. The background plasma is evolved in these simulations using the TSC transport code [2]. The driven LH current density profiles are computed using advanced ray tracing (GENRAY) and Fokker Planck (CQL3D) [3] components and predictions from GENRAY/CQL3D are compared with a ``reduced'' model for LHCD (the LSC [4] code). The ICRF TORIC solver is used for minority heating with a simplified (bi-Maxwellian) model for the non-thermal ion tail. Simulation results will be presented for LHCD in the presence of ICRF heating in Alcator C-Mod. [4pt] [1] D. Batchelor et al, Journal of Physics: Conf. Series 125, 012039 (2008).[0pt] [2] S. C. Jardin et al, J. Comp. Phys. 66, 481 (1986).[0pt] [3] R. W. Harvey and M. G. McCoy, Proc. of the IAEA Tech. Comm. Meeting on Simulation and Modeling of Therm. Plasmas, Montreal, Canada (1992).[0pt] [4] D. Ignat et al, Nucl. Fus. 34, 837 (1994).[0pt] [5] M. Brambilla, Plasma Phys. and Cont. Fusion 41,1 (1999).
Analysis of transgene integration sites in transgenic pigs by fluorescence in situ hybridization.
Kuipers, H W; Langford, G A; White, D J
1997-07-01
The production of pigs transgenic for human decay accelerating factor (hDAF) as potential donors for clinical organ xenotransplantation was reported several years ago. For this purpose it is required that high levels of hDAF are expressed at relevant sites in transplantable organs. Currently, homozygous lines have been produced as well as lines from crosses between heterozygous animals from different founder lines, termed 'jigsaw' pigs. The purpose of the 'jigsaw' crosses is to combine the desirable hDAF protein expression patterns found in different founder lines. Initial selection of the 'jigsaw' pigs is based on the inheritance of the hDAF integration sites from both lines. Litters with potential homozygous transgenics and 'jigsaw' transgenics were analysed by fluorescence in situ hybridization (FISH) and slot blot analysis. Results show that both slot blot analysis and FISH are suitable to distinguish between pigs that are heterozygous and homozygous for hDAF. However, FISH has the advantage of producing results more rapidly. For the identification of 'jigsaw' pigs FISH analysis was required since slot blot analysis lacked the required accuracy. On the basis of these results, FISH analysis was made part of the routine screening programme for hDAF transgenic pigs. PMID:9232026
Cramer, S.N.; Roussin, R.W.
1981-11-01
A Monte Carlo analysis of a time-dependent neutron and secondary gamma-ray integral experiment on a thick concrete and steel shield is presented. The energy range covered in the analysis is 15-2 MeV for neutron source energies. The multigroup MORSE code was used with the VITAMIN C 171-36 neutron-gamma-ray cross-section data set. Both neutron and gamma-ray count rates and unfolded energy spectra are presented and compared, with good general agreement, with experimental results.
NASA Technical Reports Server (NTRS)
Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Bavuso, Salvatore J.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. The Hybrid Automated Reliability Predictor (HARP) tutorial provides insight into HARP modeling techniques and the interactive textual prompting input language via a step-by-step explanation and demonstration of HARP's fault occurrence/repair model and the fault/error handling models. Example applications are worked in their entirety and the HARP tabular output data are presented for each. Simple models are presented at first with each succeeding example demonstrating greater modeling power and complexity. This document is not intended to present the theoretical and mathematical basis for HARP.
Zhang, Xingyu; Lin, Xiaohui; Subbaraman, Harish; Chen, Ray T
2014-01-01
The accelerating increase in information traffic demands the expansion of optical access network systems, which requires cost reduction of optical and photonic components. The low cost, ease of fabrication, and integration capabilities of low-optical-loss polymers make them attractive for photonic applications. In addition to passive wave-guiding components, electro-optic (EO) polymers consisting of a polymeric matrix doped with organic nonlinear chromophores have enabled wide-RF-bandwidth and low-power optical modulators. Besides board-level passive and active optical components, compact on-chip modulators (a few hundred micrometers to a few millimeters) have been made possible by hybrid integration of EO polymers onto the silicon platform. This paper summarizes some of the recent progress in polymer-based optical modulators and interconnects: a highly linear, broadband directional coupler modulator for use in analog optical links, and a compact, low-power silicon/polymer hybrid slot photonic crystal waveguide modulator.
1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO
T. EVANS; ET AL
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
Not Available
1982-04-19
The Project Integration Office (PIO) was established to assist the US DOE with the direction and coordination of its multiple electric vehicle and hybrid electric vehicle research programs in order to get the maximum payoff from these research efforts. In addition, the PIO performs objective independent technical and economic studies, analyses and modeling, and maintains a technical information liaison service to facilitate information exchange between the program participants and industry. Progress in each of these activities is reported. (LCL)
Integrating Quality Matters into Hybrid Course Design: A Principles of Marketing Case Study
ERIC Educational Resources Information Center
Young, Mark R.
2014-01-01
Previous research supports the idea that the success of hybrid or online delivery modes is more a function of course design than delivery media. This article describes a case study of a hybrid Principles of Marketing course that implemented a comprehensive redesign based on design principles espoused by the Quality Matters Program, a center for…
Hybrid materials science: a promised land for the integrative design of multifunctional materials
NASA Astrophysics Data System (ADS)
Nicole, Lionel; Laberty-Robert, Christel; Rozes, Laurence; Sanchez, Clément
2014-05-01
For more than 5000 years, organic-inorganic composite materials created by men via skill and serendipity have been part of human culture and customs. The concept of ``hybrid organic-inorganic'' nanocomposites exploded in the second half of the 20th century with the expansion of the so-called ``chimie douce'' which led to many collaborations between a large set of chemists, physicists and biologists. Consequently, the scientific melting pot of these very different scientific communities created a new pluridisciplinary school of thought. Today, the tremendous effort of basic research performed in the last twenty years allows tailor-made multifunctional hybrid materials with perfect control over composition, structure and shape. Some of these hybrid materials have already entered the industrial market. Many tailor-made multiscale hybrids are increasingly impacting numerous fields of applications: optics, catalysis, energy, environment, nanomedicine, etc. In the present feature article, we emphasize several fundamental and applied aspects of the hybrid materials field: bioreplication, mesostructured thin films, Lego-like chemistry designed hybrid nanocomposites, and advanced hybrid materials for energy. Finally, a few commercial applications of hybrid materials will be presented.
Hybrid materials science: a promised land for the integrative design of multifunctional materials.
Nicole, Lionel; Laberty-Robert, Christel; Rozes, Laurence; Sanchez, Clément
2014-06-21
For more than 5000 years, organic-inorganic composite materials created by men via skill and serendipity have been part of human culture and customs. The concept of "hybrid organic-inorganic" nanocomposites exploded in the second half of the 20th century with the expansion of the so-called "chimie douce" which led to many collaborations between a large set of chemists, physicists and biologists. Consequently, the scientific melting pot of these very different scientific communities created a new pluridisciplinary school of thought. Today, the tremendous effort of basic research performed in the last twenty years allows tailor-made multifunctional hybrid materials with perfect control over composition, structure and shape. Some of these hybrid materials have already entered the industrial market. Many tailor-made multiscale hybrids are increasingly impacting numerous fields of applications: optics, catalysis, energy, environment, nanomedicine, etc. In the present feature article, we emphasize several fundamental and applied aspects of the hybrid materials field: bioreplication, mesostructured thin films, Lego-like chemistry designed hybrid nanocomposites, and advanced hybrid materials for energy. Finally, a few commercial applications of hybrid materials will be presented. PMID:24866174
Hybrid Environmental Control System Integrated Modeling Trade Study Analysis for Commercial Aviation
NASA Astrophysics Data System (ADS)
Parrilla, Javier
Current industry trends demonstrate that aircraft electrification will be part of future platforms in order to achieve higher levels of efficiency in various vehicle-level sub-systems. However, electrification requires a substantial change in aircraft design that is not suitable for re-winged or re-engined applications, as some aircraft manufacturers are opting for today. Thermal limits arise as engine cores progressively get smaller and hotter to improve overall engine efficiency, while legacy systems still demand a substantial amount of pneumatic, hydraulic and electric power extraction. The environmental control system (ECS) provides pressurization, ventilation and air conditioning in commercial aircraft, making it the main heat sink for all aircraft loads with the exception of the engine. To mitigate the architecture's thermal limits in an efficient manner, the way the ECS interacts with the engine will have to be enhanced so as to reduce the overall energy consumed and achieve an energy-optimized solution. This study examines a tradeoff analysis of an electric ECS using a fully integrated Numerical Propulsion System Simulation (NPSS) model that is capable of studying the interaction between the ECS and the engine cycle deck. It was found that a peak solution lies in a hybrid ECS that utilizes the correct balance between a traditional pneumatic and a fully electric system. This intermediate architecture offers a substantial improvement in aircraft fuel consumption due to a reduced amount of waste heat and customer bleed, in exchange for partial electrification of the air-conditioning pack, which is a viable option for re-winged applications.
Nair, Sithara S; McCullough, Eric J; Yadavalli, Vamsi K; Wynne, Kenneth J
2014-11-01
Investigating the surface characteristics of heterogeneous polymer systems is important for understanding how to better tailor surfaces and engineer specific reactions and desirable properties. Here we report on the surface properties of a blend consisting of a major component, a linear polyurethane or thermoplastic elastomer (TPU), and a minor component that is a hybrid network. The hybrid network consists of a fluorous polyoxetane soft block and a hydrolysis/condensation inorganic (HyCoin) network. Phase separation during coating formation results in surface concentration of the minor fluorous hybrid domain. The TPU is H12MDI/BD(50)-PTMO-1000, derived from bis(cyclohexylmethylene)-diisocyanate and butane diol (50 wt %) and poly(tetramethylene oxide). Surface modification results from a novel network-forming hybrid composed of poly(trifluoroethoxymethyl-methyl oxetane) diol (3F) as the fluorous moiety, end-capped with 3-isocyanatopropyltriethoxysilane, and bis(triethoxysilyl)ethane (BTESE) as a siliceous stabilizer. We use an integrated approach that combines elemental analysis of the near surface via X-ray photoelectron spectroscopy with surface mapping using atomic force microscopy that presents topographical and phase imaging along with nanomechanical properties. Overall, this versatile, high-resolution approach enabled unique insight into surface composition and morphology that led to a model of heterogeneous surfaces containing a range of constituents and properties. PMID:25268217
Buettgenback, T.H. (California Inst. of Tech., Pasadena, CA (United States). Div. of Physics)
1993-10-01
The hybrid antenna discussed here is defined as a dielectric lens-antenna, a special case of an extended hemispherical dielectric lens that is operated in the diffraction-limited regime. It is a modified version of the planar antenna-on-a-lens scheme developed by Rutledge. The dielectric lens-antenna is fed by a planar-structure antenna, which is mounted on the flat side of the dielectric lens-antenna using it as a substrate, and the combination is termed a hybrid antenna. Beam pattern and aperture efficiency measurements were made at millimeter and submillimeter wavelengths as a function of the extension of the hemispherical lens and different lens sizes. An optimum extension distance is found experimentally and numerically for which excellent beam patterns and simultaneously high aperture efficiencies can be achieved. At 115 GHz the aperture efficiency was measured to be (76 ± 6)% for a diffraction-limited beam with sidelobes below −17 dB. Results of a single hybrid antenna with an integrated Superconductor-Insulator-Superconductor (SIS) detector and a broad-band matching structure at submillimeter wavelengths are presented. The hybrid antenna is diffraction limited, space efficient in an array due to its high aperture efficiency, and is easily mass produced, thus being well suited for focal-plane heterodyne receiver arrays.
Gao, Yanbin; Liu, Shifei; Atia, Mohamed M; Noureldin, Aboelmagd
2015-01-01
This paper takes advantage of the complementary characteristics of the Global Positioning System (GPS) and Light Detection and Ranging (LiDAR) to provide periodic corrections to an Inertial Navigation System (INS) alternately in different environmental conditions. In open sky, where GPS signals are available and LiDAR measurements are sparse, GPS is integrated with INS. Meanwhile, in confined outdoor environments and indoors, where GPS is unreliable or unavailable and LiDAR measurements are rich, LiDAR replaces GPS to integrate with INS. This paper also proposes an innovative hybrid scan matching algorithm that combines the feature-based scan matching method and the Iterative Closest Point (ICP) based scan matching method. The algorithm can operate in and transition between the two modes depending on the number of matched line features over two scans, thus achieving efficiency and robustness concurrently. Two integration schemes of INS and LiDAR with the hybrid scan matching algorithm are implemented and compared. Real experiments are performed on an Unmanned Ground Vehicle (UGV) in both outdoor and indoor environments. Experimental results show that the multi-sensor integrated system can maintain sub-meter navigation accuracy over the whole trajectory. PMID:26389906
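The hybrid scan matcher's mode switch can be sketched as a simple decision rule; the threshold and function names below are illustrative assumptions, not the paper's actual implementation:

```python
# Sketch of the mode-switching rule described above: use feature-based
# scan matching when enough line features match between two consecutive
# scans, otherwise fall back to ICP. Threshold is a hypothetical value.
MIN_MATCHED_FEATURES = 4  # assumed threshold, for illustration only

def choose_mode(n_matched_line_features: int) -> str:
    """Pick the scan-matching mode for the current scan pair."""
    if n_matched_line_features >= MIN_MATCHED_FEATURES:
        return "feature-based"  # structured scenes: fast and efficient
    return "icp"                # feature-poor scenes: robust fallback

print(choose_mode(6))  # feature-based
print(choose_mode(1))  # icp
```

Keying the switch to the matched-feature count is what lets the system stay efficient indoors (many line features) while remaining robust in cluttered, feature-poor areas.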
Markov Chain Monte Carlo Methods
Robert, Christian P.
Markov Chain Monte Carlo Methods. Christian P. Robert, Université Paris Dauphine and CREST-INSEE. http://www.ceremade.dauphine.fr/~xian. 3-6 May 2005. Course outline: Monte Carlo integration, notions on Markov chains, the Metropolis-Hastings algorithm, the Gibbs sampler, and MCMC tools.
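The Metropolis-Hastings algorithm listed in the course outline above can be sketched in a few lines; this is a generic illustration (random-walk proposal, standard-normal target), not code from the course:

```python
# Minimal random-walk Metropolis-Hastings sampler. Targets a standard
# normal via its log-density (up to a constant); purely illustrative.
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)          # symmetric proposal
        log_alpha = log_target(proposal) - log_target(x)
        # Accept with probability min(1, exp(log_alpha))
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = proposal
        samples.append(x)
    return samples

# Standard normal target: log pi(x) = -x^2/2 (constant dropped)
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(samples) / len(samples)
print(abs(mean) < 0.3)  # sample mean should be near the true mean, 0
```

Because the proposal is symmetric, the Hastings correction cancels and the acceptance ratio reduces to a ratio of target densities, which is the Metropolis special case.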
NASA Astrophysics Data System (ADS)
Leclerc, Melanie R.; Côté, Patrice; Duchesne, François; Bastien, Pierre; Hernandez, Olivier; Colonna d'Istria, Pierre; Demers, Mathieu; Girard, Marc; Savard, Maxime; Lemieux, Dany; Thibault, Simon; Brousseau, Denis
2014-08-01
A polarimeter to observe exoplanets in the visible and infrared was built for the "Observatoire du Mont Mégantic" (OMM) to replace an existing instrument and reach 10^-6 precision, a factor of 100 improvement. The optical and mechanical designs are presented, with the techniques used to precisely align the optical components and rotation axes to achieve the targeted precision. A photo-elastic modulator (PEM) and a lock-in amplifier are used to measure the polarization. The typical signal is a large DC level with a very faint sinusoidal oscillation superimposed. Custom electronics was developed to measure the AC and DC amplitudes, and characterization results are presented.
Primordial black hole seeding from hybrid inflation : the direct integration approach
Giguere, Alexis
2013-01-01
We examine the notion that supermassive black holes at the centre of galaxies, such as the Milky Way, could have been seeded in the early universe by the mechanisms of hybrid inflation. Using luminosity data, we estimate ...
NASA Astrophysics Data System (ADS)
Srinivasan, P.; Priya, S.; Patel, Tarun; Gopalakrishnan, R. K.; Sharma, D. N.
2015-01-01
DD/DT fusion neutron generators are used as sources of 2.5 MeV/14.1 MeV neutrons in experimental laboratories for various applications. Detailed knowledge of the radiation dose rates around the neutron generators is essential for ensuring radiological protection of the personnel involved with their operation. This work describes the experimental and Monte Carlo studies carried out in the Purnima Neutron Generator facility of the Bhabha Atomic Research Center (BARC), Mumbai. Verification and validation of the shielding adequacy was carried out by measuring the neutron and gamma dose rates at various locations inside and outside the neutron generator hall during different operational conditions, for both 2.5-MeV and 14.1-MeV neutrons, and comparing with theoretical simulations. The calculated and experimental dose rates were found to agree with a maximum deviation of 20% at certain locations. This study has served in benchmarking the Monte Carlo simulation methods adopted for shield design of such facilities. This has also helped in augmenting the existing shield thickness to reduce the neutron and associated gamma dose rates for radiological protection of personnel during operation of the generators at higher source neutron yields up to 1 × 10¹⁰ n/s.
Riboldi, M.; Chen, G. T. Y.; Baroni, G.; Paganetti, H.; Seco, J.
2015-01-01
We have designed a simulation framework for motion studies in radiation therapy by integrating the anthropomorphic NCAT phantom into a 4D Monte Carlo dose calculation engine based on DPM. Representing an artifact-free environment, the system can be used to identify class solutions as a function of geometric and dosimetric parameters. A pilot dynamic conformal study for three lesions (~2.0 cm) in the right lung was performed (70 Gy prescription dose). Tumor motion changed as a function of tumor location, according to the anthropomorphic deformable motion model. Conformal plans were simulated with 0 to 2 cm margin for the aperture, with an additional 0.5 cm for beam penumbra. The dosimetric effects of intensity modulated radiotherapy (IMRT) vs. conformal treatments were compared in a static case. Results show that the Monte Carlo simulation framework can model tumor tracking in deformable anatomy with high accuracy, providing absolute doses for IMRT and conformal radiation therapy. A target underdosage of up to 3.67 Gy (lower lung) was highlighted in the composite dose distribution mapped at exhale. Such effects depend on tumor location and treatment margin and are affected by lung deformation and ribcage motion. In summary, the complexity in the irradiation of moving targets has been reduced to a controlled simulation environment, where several treatment options can be accurately modeled and quantified. The implemented tools will be utilized for extensive motion studies in lung/liver irradiation. PMID:19044324
Zhang, Wenli; Solanki, Manish; Müther, Nadine; Ebel, Melanie; Wang, Jichang; Sun, Chuanbo; Izsvak, Zsuzsanna; Ehrhardt, Anja
2013-01-01
Recombinant adeno-associated viral (AAV) vectors have been shown to be one of the most promising vectors for therapeutic gene delivery because they can induce efficient and long-term transduction in non-dividing cells with negligible side-effects. However, as AAV vectors mostly remain episomal, vector genomes and transgene expression are lost in dividing cells. Therefore, to stably transduce cells, we developed a novel AAV/transposase hybrid-vector. To facilitate SB-mediated transposition from the rAAV genome, we established a system in which one AAV vector contains the transposon with the gene of interest and the second vector delivers the hyperactive Sleeping Beauty (SB) transposase SB100X. Human cells were infected with the AAV-transposon vector and the transposase was provided in trans either by transient and stable plasmid transfection or by AAV vector transduction. We found that groups which received the hyperactive transposase SB100X showed significantly increased colony forming numbers indicating enhanced integration efficiencies. Furthermore, we found that transgene copy numbers in transduced cells were dose-dependent and that predominantly SB transposase-mediated transposition contributed to stabilization of the transgene. Based on a plasmid rescue strategy and a linear-amplification mediated PCR (LAM-PCR) protocol we analysed the SB100X-mediated integration profile after transposition from the AAV vector. A total of 1840 integration events were identified which revealed a close to random integration profile. In summary, we show for the first time that AAV vectors can serve as template for SB transposase mediated somatic integration. We developed the first prototype of this hybrid-vector system which with further improvements may be explored for treatment of diseases which originate from rapidly dividing cells. PMID:24116154
Priyanka Aggarwal; Zainab Syed; Naser El-Sheimy
2009-01-01
Navigation includes the integration of methodologies and systems for estimating time-varying position, velocity and attitude of moving objects. Navigation incorporating the integrated inertial navigation system (INS) and global positioning system (GPS) generally requires extensive evaluations of nonlinear equations involving double integration. Currently, integrated navigation systems are commonly implemented using the extended Kalman filter (EKF). The EKF assumes a linearized process,
Davis, Brian W; Raudsepp, Terje; Pearks Wilkerson, Alison J; Agarwala, Richa; Schäffer, Alejandro A; Houck, Marlys; Chowdhary, Bhanu P; Murphy, William J
2009-04-01
We describe the construction of a high-resolution radiation hybrid (RH) map of the domestic cat genome, which includes 2662 markers, translating to an estimated average intermarker distance of 939 kilobases (kb). Targeted marker selection utilized the recent feline 1.9x genome assembly, concentrating on regions of low marker density on feline autosomes and the X chromosome, in addition to regions flanking interspecies chromosomal breakpoints. Average gap (breakpoint) size between cat-human ordered conserved segments is less than 900 kb. The map was used for a fine-scale comparison of conserved syntenic blocks with the human and canine genomes. Corroborative fluorescence in situ hybridization (FISH) data were generated using 129 domestic cat BAC clones as probes, providing independent confirmation of the long-range correctness of the map. Cross-species hybridization of BAC probes on divergent felids from the genera Profelis (serval) and Panthera (snow leopard) provides further evidence for karyotypic conservation within felids, and demonstrates the utility of such probes for future studies of chromosome evolution within the cat family and in related carnivores. The integrated map constitutes a comprehensive framework for identifying genes controlling feline phenotypes of interest, and to aid in assembly of a higher coverage feline genome sequence. PMID:18951970
Hou, Chao; Lang, Xing-You; Han, Gao-Feng; Li, Ying-Qi; Zhao, Lei; Wen, Zi; Zhu, Yong-Fu; Zhao, Ming; Li, Jian-Chen; Lian, Jian-She; Jiang, Qing
2013-01-01
Nanoarchitectured electroactive materials can boost rates of Li insertion/extraction, showing genuine potential to increase the power output of Li-ion batteries. However, electrodes assembled with low-dimensional nanostructured transition metal oxides by conventional approaches suffer from dramatic reductions in energy capacity owing to sluggish ion and electron transport kinetics. Here we report that flexible bulk electrodes, made of a three-dimensional bicontinuous nanoporous Cu/MnO2 hybrid and seamlessly integrated with a Cu solid current collector, substantially optimize the Li storage behavior of the constituent MnO2. As a result of the unique integration of the solid/nanoporous hybrid architecture, which simultaneously enhances the electron transport of MnO2, facilitates fast ion diffusion and accommodates large volume changes on Li insertion/extraction of MnO2, the supported MnO2 exhibits a stable capacity as high as ~1100 mA h g⁻¹ for 1000 cycles, and ultrahigh charge/discharge rates. This makes the environmentally friendly and low-cost electrode a promising anode for high-performance Li-ion battery applications. PMID:24096928
INTEGRATION OF GENETIC AND RADIATION HYBRID MAPS OF THE PIG: THE SECOND GENERATION IMPRH MAPS
Technology Transfer Automated Retrieval System (TEKTRAN)
More than 4500 markers, ESTs and genes have been mapped on the IMpRH radiation hybrid panel and submitted to the IMpRH Server before 30 March 2002, whereas only 757 markers were mapped on the first generation map (Hawken et al, 1999). To take advantage of the different resolutions observed on the genetic an...
NASA Astrophysics Data System (ADS)
Uhde, Britta; Andreas Hahn, W.; Griess, Verena C.; Knoke, Thomas
2015-08-01
Multi-criteria decision analysis (MCDA) is a decision aid frequently used in the field of forest management planning. It includes the evaluation of multiple criteria such as the production of timber and non-timber forest products and tangible as well as intangible values of ecosystem services (ES). Hence, it is beneficial compared to methods that take a purely financial perspective. Accordingly, MCDA methods are increasingly popular in the wide field of sustainability assessment. Hybrid approaches aggregate MCDA and, potentially, other decision-making techniques, making use of their individual benefits and leading to a more holistic view of the actual consequences that come with certain decisions. This review provides a comprehensive overview of hybrid approaches used in forest management planning. Today, the scientific world is facing increasing challenges regarding the evaluation of ES and the trade-offs between them, for example between provisioning and regulating services. As the preferences of multiple stakeholders are essential to improve the decision process in multi-purpose forestry, participatory and hybrid approaches turn out to be of particular importance. Accordingly, hybrid methods show great potential for becoming most relevant in future decision making. Based on the review presented here, the development of models for use in planning processes should focus on participatory modeling and the consideration of uncertainty regarding available information.
Li Zhang; Chaobin Dang; Eiji Hihara
2010-01-01
This paper reports a performance analysis for a hybrid air conditioning system. In this system, the sensible heat load is primarily treated by a vapor compression heat pump; the latent heat load is treated by a liquid dehumidification system that uses a lithium chloride solution as a desiccant. In addition, by decreasing the humidity ratio of air flowing through the
Streamline Integration Using MPI-Hybrid Parallelism on a Large Multicore Architecture
While current nodes contain two to six cores, trends indicate that future supercomputers will increasingly rely on nodes with many more. To make the best use of these hybrid distributed/shared-memory systems, there is a need to understand their impact on streamline integration, a workload that is strongly problem and data dependent.
Liu, Fenglin; Kang, Wenpei; Zhao, Chenhao; Su, Yunlan; Wang, Dujin; Shen, Qiang
2011-12-14
Hollow cylindrical multi-walled hybrid nanotubes go through dynamic growth and subsequent disappearance during the biomimetic fabrication of hexagonal calcite platelets, simulating the in vivo purpose-driven self-assembly of tubular plasma-membrane calcium-ion channels for biomaterials to adapt, respond and repair. PMID:22022703
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2004-06-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user-friendliness of the software has been enhanced through increased internal error checking and improved code portability.
Arora, Bhavna; Mohanty, Binayak P; McGuire, Jennifer T
2015-04-15
Predicting and controlling the concentrations of redox-sensitive elements are primary concerns for environmental remediation of contaminated sites. These predictions are complicated by dynamic flow processes as hydrologic variability is a governing control on conservative and reactive chemical concentrations. Subsurface heterogeneity in the form of layers and lenses further complicates the flow dynamics of the system impacting chemical concentrations including redox-sensitive elements. In response to these complexities, this study investigates the role of heterogeneity and hydrologic processes in an effective parameter upscaling scheme from the column to the landfill scale. We used a Markov chain Monte Carlo (MCMC) algorithm to derive upscaling coefficients for hydrological and geochemical parameters, which were tested for variations across heterogeneous systems (layers and lenses) and interaction of flow processes based on the output uncertainty of dominant biogeochemical concentrations at the Norman Landfill site, a closed municipal landfill with prevalent organic and trace metal contamination. The results from MCMC analysis indicated that geochemical upscaling coefficients based on effective concentration ratios incorporating local heterogeneity across layered and lensed systems produced better estimates of redox-sensitive biogeochemistry at the field scale. MCMC analysis also suggested that inclusion of hydrological parameters in the upscaling scheme reduced the output uncertainty of effective mean geochemical concentrations by orders of magnitude at the Norman Landfill site. This was further confirmed by posterior density plots of the scaling coefficients that revealed unimodal characteristics when only geochemical processes were involved, but produced multimodal distributions when hydrological parameters were included. 
The multimodality again suggests the effect of heterogeneity and lithologic variability on the distribution of redox-sensitive elements at the Norman Landfill site. PMID:25644839
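The Markov chain Monte Carlo machinery used for such parameter estimation can be illustrated with a minimal random-walk Metropolis sampler. The toy posterior below (a Gaussian likelihood around hypothetical effective-concentration ratios) is a stand-in for the study's actual upscaling model; all names and numbers are illustrative assumptions:

```python
import numpy as np

def metropolis(log_post, theta0, n_iter, prop_scale=0.5, rng=None):
    """Random-walk Metropolis sampler (illustrative sketch, not the
    authors' actual MCMC code)."""
    rng = np.random.default_rng(rng)
    theta = float(theta0)
    lp = log_post(theta)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        cand = theta + prop_scale * rng.standard_normal()  # propose
        lp_cand = log_post(cand)
        if np.log(rng.uniform()) < lp_cand - lp:           # accept/reject
            theta, lp = cand, lp_cand
        chain[i] = theta
    return chain

# Toy "upscaling coefficient" posterior: Gaussian likelihood around
# hypothetical observed effective-concentration ratios.
obs = np.array([1.1, 0.9, 1.05, 0.95])
log_post = lambda t: -0.5 * np.sum((obs - t) ** 2) / 0.1**2
chain = metropolis(log_post, 0.0, 5000, rng=1)
```

After burn-in, the chain concentrates around the posterior mean of the coefficient; posterior density plots such as those discussed above would be histograms of exactly this kind of chain.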
NASA Astrophysics Data System (ADS)
Bauer, J.; Sommerer, F.; Mairani, A.; Unholtz, D.; Farook, R.; Handrack, J.; Frey, K.; Marcelos, T.; Tessonnier, T.; Ecker, S.; Ackermann, B.; Ellerbrock, M.; Debus, J.; Parodi, K.
2014-08-01
Monte Carlo (MC) simulations of beam interaction and transport in matter are increasingly considered as essential tools to support several aspects of radiation therapy. Despite the vast application of MC to photon therapy and scattered proton therapy, clinical experience in scanned ion beam therapy is still scarce. This is especially the case for ions heavier than protons, which pose additional issues like nuclear fragmentation and varying biological effectiveness. In this work, we present the evaluation of a dedicated framework which has been developed at the Heidelberg Ion Beam Therapy Center to provide automated FLUKA MC simulations of clinical patient treatments with scanned proton and carbon ion beams. Investigations on the number of transported primaries and the dimension of the geometry and scoring grids have been performed for a representative class of patient cases in order to provide recommendations on the simulation settings, showing that recommendations derived from the experience in proton therapy cannot be directly translated to the case of carbon ion beams. The MC results with the optimized settings have been compared to the calculations of the analytical treatment planning system (TPS), showing that regardless of the consistency of the two systems (in terms of beam model in water and range calculation in different materials) relevant differences can be found in dosimetric quantities and range, especially in the case of heterogeneous and deep seated treatment sites depending on the ion beam species and energies, homogeneity of the traversed tissue and size of the treated volume. The analysis of typical TPS speed-up approximations highlighted effects which deserve accurate treatment, in contrast to adequate beam model simplifications for scanned ion beam therapy. 
In terms of biological dose calculations, the investigation of the mixed field components in realistic anatomical situations confirmed the findings of previous groups so far reported only in homogenous water targets. This work can thus be useful to other centers commencing clinical experience in scanned ion beam therapy.
Fluorescent In Situ Hybridization to Detect Transgene Integration into Plant Genomes
NASA Astrophysics Data System (ADS)
Schwarzacher, Trude
Fluorescent chromosome analysis technologies have advanced our understanding of genome organization during the last 30 years and have enabled the investigation of DNA organization and structure as well as the evolution of chromosomes. Fluorescent chromosome staining allows even small chromosomes to be visualized, characterized by their composition and morphology, and counted. Aneuploidies and polyploidies can be established for species, breeding lines, and individuals, including changes occurring during hybridization or tissue culture and transformation protocols. Fluorescent in situ hybridization correlates molecular information of a DNA sequence with its physical location on chromosomes and genomes. It thus allows determination of the physical position of sequences and often is the only means to determine the abundance and distribution of DNA sequences that are difficult to map with any other molecular method or would require segregation analysis, in particular multicopy or repetitive DNA. Equally, it is often the best way to establish the incorporation of transgenes, their numbers, and physical organization along chromosomes. This chapter presents protocols for probe and chromosome preparation, fluorescent in situ hybridization, chromosome staining, and the analysis of results.
Monte Carlo sampling from the quantum state space. II
Yi-Lin Seah; Jiangwei Shang; Hui Khoon Ng; David John Nott; Berthold-Georg Englert
2015-04-27
High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the Markov-chain Monte Carlo method known as Hamiltonian Monte Carlo, or hybrid Monte Carlo, can be adapted to this context. It is applicable when an efficient parameterization of the state space is available. The resulting random walk is entirely inside the physical parameter space, and the Hamiltonian dynamics enable us to take big steps, thereby avoiding strong correlations between successive sample points while enjoying a high acceptance rate. We use examples of single and double qubit measurements for illustration.
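The adaptation described in the abstract rests on standard Hamiltonian (hybrid) Monte Carlo: leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject step. A minimal sketch of that core algorithm, here applied to a standard normal target rather than the quantum state space, could look like this (step sizes and names are our own choices, not the authors' code):

```python
import numpy as np

def hmc_sample(logp_grad, x0, n_samples, step=0.1, n_leapfrog=20, rng=None):
    """Minimal Hamiltonian Monte Carlo sampler.

    logp_grad(x) must return (log-density, gradient) at x.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    logp, grad = logp_grad(x)
    samples, accepted = [], 0
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)        # fresh momentum draw
        x_new, grad_new = x.copy(), grad
        p_new = p + 0.5 * step * grad_new       # half step in momentum
        for i in range(n_leapfrog):
            x_new = x_new + step * p_new        # full step in position
            logp_new, grad_new = logp_grad(x_new)
            if i != n_leapfrog - 1:
                p_new = p_new + step * grad_new
        p_new = p_new + 0.5 * step * grad_new   # final half step
        # Metropolis test on the total energy H = -logp + |p|^2 / 2
        dH = (logp_new - 0.5 * p_new @ p_new) - (logp - 0.5 * p @ p)
        if np.log(rng.uniform()) < dH:
            x, logp, grad = x_new, logp_new, grad_new
            accepted += 1
        samples.append(x.copy())
    return np.array(samples), accepted / n_samples

# Standard normal target: logp = -x^2/2, gradient = -x.
std_normal = lambda x: (-0.5 * float(x @ x), -x)
samples, rate = hmc_sample(std_normal, np.zeros(1), 2000, rng=0)
```

The long leapfrog trajectories take the "big steps" mentioned in the abstract, decorrelating successive samples while the Metropolis test keeps the chain exact despite integration error.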
Lujan, Paul Joseph; /UC, Berkeley /LBL, Berkeley
2009-12-01
This thesis presents a measurement of the top quark mass obtained from pp̄ collisions at √s = 1.96 TeV at the Fermilab Tevatron using the CDF II detector. The measurement uses a matrix element integration method to calculate a tt̄ likelihood, employing a Quasi-Monte Carlo integration, which enables us to take into account effects due to finite detector angular resolution and quark mass effects. We calculate a tt̄ likelihood as a 2-D function of the top pole mass m_t and Δ_JES, where Δ_JES parameterizes the uncertainty in our knowledge of the jet energy scale; it is a shift applied to all jet energies in units of the jet-dependent systematic error. By introducing Δ_JES into the likelihood, we can use the information contained in W boson decays to constrain Δ_JES and reduce the error due to this uncertainty. We use a neural network discriminant to identify events likely to be background, and apply a cut on the peak value of individual event likelihoods to reduce the effect of badly reconstructed events. This measurement uses a total of 4.3 fb⁻¹ of integrated luminosity, requiring events with a lepton, large E_T, and exactly four high-energy jets in the pseudorapidity range |η| < 2.0, of which at least one must be tagged as coming from a b quark. In total, we observe 738 events before and 630 events after applying the likelihood cut, and measure m_t = 172.6 ± 0.9 (stat.) ± 0.7 (JES) ± 1.1 (syst.) GeV/c², or m_t = 172.6 ± 1.6 (tot.) GeV/c².
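Quasi-Monte Carlo integration of the kind mentioned here replaces pseudo-random points with a low-discrepancy sequence, which typically converges faster for smooth integrands. A minimal illustration, unrelated to the thesis analysis code, estimates a simple two-dimensional integral with Halton points:

```python
import numpy as np

def halton(n, base):
    """One-dimensional van der Corput / Halton sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

# Quasi-Monte Carlo estimate of the integral of x*y over the unit
# square (exact value 0.25), using Halton points in bases 2 and 3.
n = 4096
xs, ys = halton(n, 2), halton(n, 3)
qmc_estimate = np.mean(xs * ys)
```

In a matrix-element method the integrand is far more complex, but the principle is the same: evaluate the likelihood at deterministic low-discrepancy points and average.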
Tunnacliffe, A; Parkar, M; Povey, S; Bengtsson, B O; Stanley, K; Solomon, E; Goodfellow, P
1983-01-01
The dominant selectable gene, Ecogpt, has been introduced, by the calcium phosphate precipitation technique, into normal human fibroblasts, along with the SV40 early region genes. In one transfectant clone, integration of these sequences into human chromosome 17 was demonstrated by the construction of human-mouse somatic cell hybrids, selected for by growth in medium containing mycophenolic acid and xanthine. A whole cell hybrid, made between the human transfectant and a mouse L cell, was used as donor of the Ecogpt-carrying human chromosome 17 to 'tribrids' growing in suspension, made by whole cell fusion with a mouse thymoma cell line, and to microcell hybrids made with a mouse teratocarcinoma cell line. Two tribrids contained karyotypically normal human chromosomes 17 and a small number of other human chromosomes, while a third tribrid had a portion of the long arm of chromosome 17 translocated to mouse as its only human genetic material. Two independent microcell hybrids contained a normal chromosome 17 and no other human chromosome on a mouse teratocarcinoma background. These experiments demonstrate the ability to construct human-mouse somatic cell hybrids using a dominant selection system. By applying this approach it should be possible to select for a wide range of different human chromosomes in whole cell and microcell hybrids. In particular, transfer of single human chromosomes to mouse teratocarcinoma cells will allow examination of developmentally regulated human gene sequences after differentiation of such hybrids. PMID:11892815
A. A. AZIMI; B. HOOSHYARI; N. MEHRDADI; G. H. NABI BIDHENDI
Nowadays, innovative processes with integrated growth (combined attached and suspended growth), such as the moving bed biofilm reactor (MBBR) and integrated fixed-film activated sludge (IFAS), are being used successfully for new construction and for upgrading existing wastewater treatment plants. Increasing the hydraulic capacity and improving COD and nutrient removal from the effluent are the two main targets of applying these processes.
Method for producing a hybridization of detector array and integrated circuit for readout
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (inventor); Grunthaner, Frank J. (inventor)
1993-01-01
A process is explained for fabricating a detector array in a layer of semiconductor material on one substrate and an integrated readout circuit in a layer of semiconductor material on a separate substrate, in order to select the semiconductor material for optimum performance of each structure, such as GaAs for the detector array and Si for the integrated readout circuit. The detector array layer is lifted off its substrate, laminated onto the metallized surface of the integrated circuit, etched with reticulating channels down to the surface of the integrated circuit, and provided with interconnections between the detector array pixels and the integrated readout circuit through the channels. The adhesive material for the lamination is selected to be chemically stable, to provide electrical and thermal insulation, and to provide stress release between the two structures fabricated in semiconductor materials that may have different coefficients of thermal expansion.
Cost-effective monolithic and hybrid integration for metro and long-haul applications
NASA Astrophysics Data System (ADS)
Clayton, Rick; Carter, Andy; Betty, Ian; Simmons, Timothy
2003-12-01
Today's telecommunication market is characterized by conservative business practices: tight management of costs, low-risk investing and incremental upgrades, rather than the more freewheeling approach taken a few years ago. Optimizing optical components for the current and near-term market involves substantial integration, but within particular bounds. The emphasis on evolution, in particular, has led to increased standardization of functions and so created extensive opportunities for integrated product offerings. The same standardization that enables commercially successful integrated functions also changes the competitive environment and shifts the emphasis for component development: the innovation priority moves from raw performance to delivering the most effective integrated products. This paper discusses, with specific examples from our transmitter, receiver and passives product families, our understanding of these issues, based on extensive experience in delivering high-end integrated products to the market, and the direction in which they drive optical component development.
NASA Astrophysics Data System (ADS)
Yan, T. H.; Pu, H. Y.; Chen, X. D.; Li, Q.; Xu, C.
2010-06-01
This paper presents the design, realization and control technologies of a high-performance hybrid microvibration isolator for ultra-high-precision, high-speed moving X/Y tables; the novel isolator integrates passive and active damping to achieve a high level of vibration attenuation. The passive damping is implemented using air-springs in both the vertical and horizontal directions, with parallel linear motors in both directions realizing the active damping and positioning functions. It is a true hybrid isolation system because its air-springs can also be controlled through the pneumatic loop. The isolation servo system also has fast positioning capability via feedforward compensation for the moving tables. Compared with conventional filtered-reference control algorithms, which rely on assumptions about the adaptive filter and the controlled system and estimate the disturbance from the residual signal, the feedforward compensation used here shows highly effective vibration isolation and high-precision positioning performance for the platform. The performance of the feedforward compensation is enhanced via an efficient state-estimation adaptive algorithm, the fast Kalman filter. Finally, an experimental demonstration on the prototype system verifies the effectiveness of the proposed isolator design and the adaptive control algorithm, showing substantially enhanced damping of the platform system with the moving X/Y tables.
NASA Astrophysics Data System (ADS)
Patil, Avinash J.; Li, Mei; Mann, Stephen
2013-07-01
Synthesis of functional hybrid nanoscale objects has been a core focus of the rapidly progressing field of nanomaterials science. In particular, there has been significant interest in the integration of evolutionally optimized biological systems such as proteins, DNA, virus particles and cells with functional inorganic building blocks to construct mesoscopic architectures and nanostructured materials. However, in many cases the fragile nature of the biomolecules seriously constrains their potential applications. As a consequence, there is an on-going quest for the development of novel strategies to modulate the thermal and chemical stabilities, and performance of biomolecules under adverse conditions. This feature article highlights new methods of ``inorganic molecular wrapping'' of single or multiple protein molecules, individual double-stranded DNA helices, lipid bilayer vesicles and self-assembled organic dye superstructures using inorganic building blocks to produce bio-inorganic nanoconstructs with core-shell type structures. We show that spatial isolation of the functional biological nanostructures as ``armour-plated'' enzyme molecules or polynucleotide strands not only maintains their intact structure and biochemical properties, but also enables the fabrication of novel hybrid nanomaterials for potential applications in diverse areas of bionanotechnology.
Cassereau, Didier; Nauleau, Pierre; Bendjoudi, Aniss; Minonzio, Jean-Gabriel; Laugier, Pascal; Bossy, Emmanuel; Grimal, Quentin
2014-07-01
The development of novel quantitative ultrasound (QUS) techniques to measure the hip is critically dependent on the possibility to simulate the ultrasound propagation. One specificity of hip QUS is that ultrasound propagates through a large thickness of soft tissue, which can be modeled by a homogeneous fluid in a first approach. Finite difference time domain (FDTD) algorithms have been widely used to simulate QUS measurements, but they are not adapted to simulating ultrasonic propagation over long distances in homogeneous media. In this paper, a hybrid numerical method is presented to simulate hip QUS measurements. A two-dimensional FDTD simulation in the vicinity of the bone is coupled to a semi-analytic calculation of the Rayleigh integral to compute the wave propagation between the probe and the bone. The method is used to simulate a setup dedicated to the measurement of circumferential guided waves in the cortical compartment of the femoral neck. The proposed approach is validated by comparison with a full FDTD simulation and with an experiment on a bone phantom. For a realistic QUS configuration, the computation time is estimated to be sixty times less with the hybrid method than with a full FDTD approach. PMID:23849752
Li, Liang; Chen, Zhiqiang; Cong, Wenxiang; Wang, Ge
2015-03-01
Spectral CT with photon counting detectors can significantly improve CT performance by reducing image noise and dose, increasing contrast resolution and material specificity, and enabling functional and molecular imaging with existing and emerging probes. However, with the current photon counting detector architecture it is difficult to balance the number of energy bins against the statistical noise in each energy bin. Moreover, hardware support for multiple energy bins demands a complex and expensive circuit. In this paper, we propose a new scheme, known as hybrid detectors, that combines the dynamic-threshold-based counting and integrating modes. In this scheme, an energy threshold can be dynamically changed during a spectral CT scan, which can be considered compressive sensing along the spectral dimension. By doing so, the number of energy bins can be retrospectively specified, even in a spatially varying fashion. To establish the feasibility and merits of such hybrid detectors, we develop a tensor-based PRISM algorithm to reconstruct a spectral CT image from dynamic dual-energy data, and perform experiments with simulated and real data, producing very promising results. PMID:25252279
NASA Astrophysics Data System (ADS)
Chen, Po-Chiang; Ishikawa, Fumiaki N.; Chang, Hsiao-Kang; Ryu, Koungmin; Zhou, Chongwu
2009-03-01
A novel hybrid chemical sensor array composed of individual In2O3 nanowires, SnO2 nanowires, ZnO nanowires, and single-walled carbon nanotubes with integrated micromachined hotplates for sensitive gas discrimination was demonstrated. Key features of our approach include the integration of nanowire and carbon nanotube sensors, precise control of the sensor temperature using the micromachined hotplates, and the use of principal component analysis for pattern recognition. This sensor array was exposed to important industrial gases such as hydrogen, ethanol and nitrogen dioxide at different concentrations and sensing temperatures, and an excellent selectivity was obtained to build up an interesting 'smell-print' library of these gases. Principal component analysis of the sensing results showed great discrimination of those three tested chemicals, and in-depth analysis revealed clear improvement of selectivity by the integration of carbon nanotube sensors. This nanoelectronic nose approach has great potential for detecting and discriminating between a wide variety of gases, including explosive ones and nerve agents.
NASA Astrophysics Data System (ADS)
Sima, Felix; Wu, Dong; Xu, Jian; Midorikawa, Katsumi; Sugioka, Koji
2015-03-01
We propose herein the "ship-in-a-bottle" integration of three-dimensional (3D) polymeric sinusoidal ridges inside a photosensitive glass microfluidic channel by a hybrid subtractive-additive femtosecond laser processing method. It consists of femtosecond laser assisted wet etching (FLAE) of a photosensitive Foturan glass followed by two-photon polymerization (TPP) of an SU-8 negative epoxy resin. Both the subtractive and additive processes are carried out using the same set-up, changing only the laser focusing objective. The 522 nm second harmonic of an amplified femtosecond Yb-fiber laser (FCPA µJewel D-400, IMRA America; fundamental wavelength 1045 nm, pulse width 360 fs, repetition rate 200 kHz) was employed for irradiation. The new method lowers the size limit of 3D objects created inside channels down to the dimensions of a cell and improves structural stability. Sinusoidal periodic patterns and ridges are of great use as base scaffolds for building new structures on top of them or for modulating cell migration, guidance and orientation, while the created interspaces can be exploited for microfluidic applications. The glass microchannel offers robustness and appropriate dynamic flow conditions for cellular studies, while the integrated patterns reduce the structure size to the level of cell responsiveness. Taking advantage of the ability to directly fabricate complex 3D shapes, both the glass channels and the integrated polymeric patterns enable us to spatially design 3D biochips for specific applications.
Homodyne laser Doppler vibrometer on silicon-on-insulator with integrated 90 degree optical hybrids.
Li, Yanlu; Baets, Roel
2013-06-01
A miniaturized homodyne laser Doppler vibrometer (LDV) with a compact 90° optical hybrid is experimentally demonstrated on a CMOS compatible silicon-on-insulator (SOI) platform. Optical components on this platform usually have inadequate suppression of spurious reflections, which significantly influence the performance of the LDV. Numerical compensation methods are implemented to effectively decrease the impact of these spurious reflections. With the help of these compensation methods, measurements of both super-half-wavelength and sub-half-wavelength vibrations are demonstrated. Results show that the minimal detectable velocity is around 1.2 µm/s. PMID:23736586
NASA Astrophysics Data System (ADS)
Cicak, K.; Andrews, R. W.; Yu, P.-L.; Peterson, R. W.; Purdy, T. P.; Burns, P. S.; Regal, C. A.; Lehnert, K. W.; Simmonds, R. W.
2015-03-01
Macroscopic high-stress silicon nitride membranes can be implemented as ultra-high-quality-factor mechanical resonators operating in the quantum regime with average phonon occupancy below one quantum. The mechanical motion of these resonators can be engineered to couple simultaneously both to (THz) light in free-space optical cavities and to microwave (GHz) fields in superconducting circuits. Exploiting this parametric coupling to realize quantum information transfer between these domains entails the construction of devices with challenging requirements: they must integrate the membranes with superconducting circuits operating at cryogenic temperatures, in proximity to free-space optical photons, while meeting various quantum and coupling requirements. Here we show how to construct such ``hybrid quantum devices'' by microfabricating and assembling chip-based structures that can be inserted into high-finesse optical cavities compatible with low temperatures. We include an overview of recent fabrication improvements of membranes mechanically isolated from the environment by phononic band-gap crystals.
Chandramohan, S; Ryu, Beo Deul; Kim, Hyun Kyu; Hong, Chang-Hee; Suh, Eun-Kyung
2011-03-15
This Letter reports on the fabrication of hybrid white-light-emitting diodes made of semiconductor nanocrystals (NCs) integrated on InGaN/GaN LEDs. Using core type and core/shell type CdSe NCs, the white light properties are systematically engineered for white light generation with high color rendering index (CRI). Unlike CdSe/ZnS core/shell NCs, which exhibited a unique narrowband edge emission, core type CdSe NCs offered extended broad emission toward orange/red wavelengths associated with deep trap states. Consequently, the light-emitting properties of the devices showed strong dependence on the type of NCs used, and devices with CdSe NCs offered admirable characteristics, such as Commission Internationale d'Eclairage coordinates of (0.356, 0.330) and a CRI as high as 87.4. PMID:21403688
Serge M. Gisler; Saranya Kittanakom; Daniel Fuster; Victoria Wong; Mia Bertic; Tamara Radanovic; Randy A. Hall; Heini Murer; Jurg Biber; Daniel Markovich; Orson W. Moe; Igor Stagljar
2008-01-01
PDZ-binding motifs are found in the C-terminal tails of numerous integral membrane proteins where they mediate specific protein-protein interactions by binding to PDZ-containing proteins. Conventional yeast two-hybrid screens have been used to probe protein-protein interactions of these soluble C termini. However, to date no in vivo technology has been available to study interactions between the full-length integral membrane
Guo, Jianning; Wang, Lingyun; Zhu, Jia; Zhang, Jianguo; Sheng, Deyang; Zhang, Xihui
2013-01-01
This article presents a highly integrated hybrid process for the advanced treatment of drinking water from micro-polluted raw water. A flat sheet ceramic membrane with a pore size of 50–60 nm for ultrafiltration (UF) is used to integrate coagulation and ozonation. At the same time, biological activated carbon filtration (BAC) is used to remove ammonia and organic pollutants in the raw water. A pilot study at the scale of 120 m(3)/d has been conducted in Southern China. The main parameters analyzed include turbidity, particle counts, ammonia, total organic carbon (TOC), UV254, biological dissolved organic carbon (BDOC), dissolved oxygen (DO) and trans-membrane pressure (TMP). The experiments demonstrated that the ceramic UF membrane was able to remove most of the turbidity and suspended particulate matter; the final effluent turbidity reached 0.14 NTU on average. BAC was effective in removing ammonia and organic matter. Dissolved oxygen is necessary for the biodegradation of ammonia at high concentration. The removal efficiencies reached 90% for ammonia at an initial concentration of 3.6 mg/L and 76% for TOC at an initial concentration of 3.8 mg/L. Ozonation can alter the molecular structure of organics in terms of UV254, reduce membrane fouling, and extend the operation cycle. It is believed the hybrid treatment process developed in this article can achieve high performance with less land occupation and lower cost compared with conventional processes. It is especially suitable for developing countries seeking high-quality drinking water in a cost-effective way. PMID:23705617
Yoo, S. J. Ben
Super-Long Cavity, Monolithically Integrated 1-GHz Hybrid Mode-Locked InP Laser for All...
Mode-locked lasers (MLL) provide excellent performance in terms of stable output and high optical power, making semiconductor MLLs useful for RF-Photonics and numerous other applications where low-cost electronics and high-speed
Electronic integration of fuel cell and battery system in novel hybrid vehicle
NASA Astrophysics Data System (ADS)
Fisher, Peter; Jostins, John; Hilmansen, Stuart; Kendall, Kevin
2012-12-01
The objective of this work was to integrate a lithium ion battery pack, together with its management system, into a hydrogen fuel cell drive train contained in a lightweight city car. Electronic units were designed to link the drive train components using conventional circuitry. These were built, tested and shown to perform according to the design. These circuits allowed start-up of battery management system, motor controller, fuel cell warm-up and torque monitoring. After assembling the fuel cell and battery in the vehicle, full system tests were performed. Analysis of results from vehicle demonstrations showed operation was satisfactory. The conclusion was that the electronic integration was successful, but the design needed optimisation and fine tuning. Eight vehicles were then fitted with the electronically integrated fuel cell-battery power pack. Trials were then started to test the integration more fully, with a duration of 12 months from 2011 to 2012 in the CABLED project.
NASA Astrophysics Data System (ADS)
Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann
2009-02-01
Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
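The weighted Beer-Lambert rescaling described in this abstract can be sketched numerically. The reflectance curve, layer time fractions, photon speed and absorption coefficients below are hypothetical stand-ins, not values from the study; the sketch only illustrates how a zero-absorption curve is scaled once the average fraction of time per layer is known.

```python
import numpy as np

# Hypothetical zero-absorption time-resolved reflectance curve R0(t)
# (a decaying stand-in, not the output of an actual MC simulation).
t = np.linspace(0.01e-9, 2e-9, 200)      # time bins [s]
R0 = t**-1.5 * np.exp(-1e-10 / t)        # arbitrary diffusion-like shape

v = 2.2e10                               # assumed photon speed in tissue [cm/s]
f = np.array([0.6, 0.4])                 # fraction of travel time per layer,
                                         # taken from the average classical path
mu_a = np.array([0.1, 0.3])              # absorption coefficient per layer [1/cm]

# Weighted Beer-Lambert scaling: a photon detected at time t has travelled
# a path length v*t, spending a fraction f[i] of that time in layer i.
R = R0 * np.exp(-np.sum(f * mu_a) * v * t)
```

No per-photon path lengths or collision counts are stored, matching the abstract's point: only the single zero-absorption curve and the layer time fractions are needed.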
M. A. Clark; Bálint Joó; A. D. Kennedy; P. J. Silva
2011-09-20
We show how the integrators used for the molecular dynamics step of the Hybrid Monte Carlo algorithm can be further improved. These integrators not only approximately conserve some Hamiltonian $H$ but conserve exactly a nearby shadow Hamiltonian $\\tilde{H}$. This property allows for a new tuning method of the molecular dynamics integrator and also allows for a new class of integrators (force-gradient integrators) which is expected to reduce significantly the computational cost of future large-scale gauge field ensemble generation.
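For the simple harmonic oscillator the shadow Hamiltonian of the leapfrog integrator is known in closed form, which makes the conservation property easy to check numerically: for H = p²/2 + ω²x²/2, the kick-drift-kick leapfrog map exactly conserves H̃ = p²/2 + (1 − h²ω²/4)ω²x²/2. A minimal sketch (step size and frequency are arbitrary choices):

```python
import numpy as np

def leapfrog(x, p, h, n, omega=1.0):
    """Kick-drift-kick leapfrog for H = p**2/2 + omega**2 * x**2 / 2."""
    for _ in range(n):
        p -= 0.5 * h * omega**2 * x   # half kick
        x += h * p                    # full drift
        p -= 0.5 * h * omega**2 * x   # half kick
    return x, p

h, omega = 0.3, 1.0
# Shadow Hamiltonian conserved exactly by the leapfrog map for the SHO.
shadow = lambda x, p: 0.5 * p**2 + 0.5 * (1.0 - (h * omega)**2 / 4.0) * omega**2 * x**2

x0, p0 = 1.0, 0.0
x1, p1 = leapfrog(x0, p0, h, 1000)

# The true Hamiltonian fluctuates at O(h^2), but the shadow Hamiltonian
# is conserved up to round-off over arbitrarily many steps.
assert abs(shadow(x1, p1) - shadow(x0, p0)) < 1e-9
```

Measuring the fluctuations of such a shadow Hamiltonian along a trajectory is what enables the integrator tuning described in the abstract.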
A hybrid approach for integrated healthcare cooperative purchasing and supply chain configuration.
Rego, Nazaré; Claro, João; Pinho de Sousa, Jorge
2014-12-01
This paper presents an innovative and flexible approach for recommending the number, size and composition of purchasing groups for a set of hospitals willing to cooperate, while minimising their shared supply chain costs. This approach makes the financial impact of the various cooperation alternatives transparent to the group and the individual participants, opening the way to a negotiation process concerning the allocation of cooperation costs and gains. The approach was developed around a hybrid Variable Neighbourhood Search (VNS)/Tabu Search metaheuristic, resulting in a flexible tool that can be applied to purchasing groups with different characteristics, namely different operative and market circumstances, and to supply chains with different topologies and atypical cost characteristics. Preliminary computational results show the potential of the approach in solving a broad range of problems. PMID:24370921
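A generic Variable Neighbourhood Search skeleton of the kind underlying such a metaheuristic can be sketched as follows. The grouping cost function, instance size and parameters are purely illustrative assumptions, not the paper's supply chain cost model, and the tabu component is omitted for brevity:

```python
import random

random.seed(1)

# Toy instance: assign 8 hospitals to purchasing groups (group labels).
# Stand-in cost rewards few groups but penalises very large ones.
N = 8

def cost(labels):
    sizes = [labels.count(g) for g in set(labels)]
    return 10 * len(sizes) + sum(s * s for s in sizes)

def shake(labels, k):
    """Jump to a random solution in the k-th neighbourhood."""
    labels = labels[:]
    for _ in range(k):
        labels[random.randrange(N)] = random.randrange(N)
    return labels

def local_search(labels):
    """Repeated best-improvement reassignment of single hospitals."""
    improved = True
    while improved:
        improved = False
        for i in range(N):
            for g in range(N):
                trial = labels[:]
                trial[i] = g
                if cost(trial) < cost(labels):
                    labels, improved = trial, True
    return labels

best = local_search(list(range(N)))
for _ in range(50):
    for k in (1, 2, 3):                 # widen the neighbourhood on failure
        cand = local_search(shake(best, k))
        if cost(cand) < cost(best):
            best = cand
            break
```

The `best` labelling encodes the recommended number and composition of groups for this toy cost; the real approach would plug in the shared supply chain cost of each candidate grouping.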
McGuinness, Kevin M
2012-12-01
The prescribing clinical health psychologist brings together in one individual a combination of skills to create a hybrid profession that can add value to any healthcare organization. This article addresses the high demand for mental health services and the inequitable distribution of mental health practitioners across the nation. The close link between physical and mental health and evidence that individuals in psychological distress often enter the mental health system via primary care medical clinics is offered as background to a discussion of the author's work as a commissioned officer of the U.S. Public Health Service assigned to the Chaparral Medical Center of La Clinica de Familia, Inc. near the U.S.-Mexico border. The prescribing clinical health psychologist in primary care medical settings is described as a valuable asset to the future of professional psychology. PMID:23179075
Li, Jinhong; Webster, Margaret A; Wright, Jonathan; Cocker, Jonathan M; Smith, Matthew C; Badakshi, Farah; Heslop-Harrison, Pat; Gilmartin, Philip M
2015-10-01
Heteromorphic flower development in Primula is controlled by the S locus. The S locus genes, which control anther position, pistil length and pollen size in pin and thrum flowers, have not yet been characterized. We have integrated S-linked genes, marker sequences and mutant phenotypes to create a map of the P. vulgaris S locus region that will facilitate the identification of key S locus genes. We have generated, sequenced and annotated BAC sequences spanning the S locus, and identified its chromosomal location. We have employed a combination of classical genetics and three-point crosses with molecular genetic analysis of recombinants to generate the map. We have characterized this region by Illumina sequencing and bioinformatic analysis, together with chromosome in situ hybridization. We present an integrated genetic and physical map across the P. vulgaris S locus flanked by phenotypic and DNA sequence markers. BAC contigs encompass a 1.5-Mb genomic region with 1 Mb of sequence containing 82 S-linked genes anchored to overlapping BACs. The S locus is located close to the centromere of the largest metacentric chromosome pair. These data will facilitate the identification of the genes that orchestrate heterostyly in Primula and enable evolutionary analyses of the S locus. PMID:25865367
NASA Astrophysics Data System (ADS)
Saljoghei, Arsalan; Browning, Colm; Smyth, Frank; Barry, Liam P.
2015-03-01
In this paper the transmission of OFDM based wired/wireless services for hybrid PONs using direct laser modulation is studied. To overcome the limitations imposed by direct modulation of cost effective low bandwidth laser transmitters, we make use of novel monolithically integrated Discrete Mode lasers and optical injection. The system includes a wired OFDM signal, operating at 12.5 Gb/s and three wireless signals delivering Long Term Evolution (LTE) services encoded with 16 QAM. The system's performance is evaluated for various relative power ratios of the wired/wireless signals. Additionally, the impact of Relative Intensity Noise (RIN) on such a hybrid system is studied through computer simulations.
Guan, Binbin; Scott, Ryan P; Qin, Chuan; Fontaine, Nicolas K; Su, Tiehui; Ferrari, Carlo; Cappuzzo, Mark; Klemens, Fred; Keller, Bob; Earnshaw, Mark; Yoo, S J B
2014-01-13
We demonstrate free-space space-division-multiplexing (SDM) with 15 orbital angular momentum (OAM) states using a three-dimensional (3D) photonic integrated circuit (PIC). The hybrid device consists of a silica planar lightwave circuit (PLC) coupled to a 3D waveguide circuit to multiplex/demultiplex OAM states. The low excess loss hybrid device is used in individual and two simultaneous OAM states multiplexing and demultiplexing link experiments with a 20 Gb/s, 1.67 b/s/Hz quadrature phase shift keyed (QPSK) signal, which shows error-free performance for 379,960 tested bits for all OAM states. PMID:24514976
HORIZONTAL HYBRID SOLAR LIGHT PIPE: AN INTEGRATED SYSTEM OF DAYLIGHT AND ELECTRIC LIGHT
This project will test the feasibility of an advanced energy efficient perimeter lighting system that integrates daylighting, electric lighting, and lighting controls to reduce electricity consumption. The system is designed to provide adequate illuminance levels in deep-floor...
ERIC Educational Resources Information Center
Kamruzzaman, M.
2014-01-01
This study reports an action research undertaken at Queensland University of Technology. It evaluates the effectiveness of the integration of geographic information systems (GIS) within the substantive domains of an existing land use planning course in 2011. Using student performance, learning experience survey, and questionnaire survey data, it…
NASA Astrophysics Data System (ADS)
Lindsay, Anthony; McCloskey, John; Simão, Nuno; Murphy, Shane; Bhloscaidh, Mairead Nic
2014-05-01
Identifying fault sections where slip deficits have accumulated may provide a means for understanding sequences of large megathrust earthquakes. Stress accumulated during the interseismic period on an active megathrust is stored as potential slip, referred to as slip deficit, along locked sections of the fault. Analysis of the spatial distribution of slip during antecedent events along the fault will show where the locked plate has spent its stored slip. Areas of unreleased slip indicate where the potential for large events remains. The location of recent earthquakes and their distribution of slip can be estimated from instrumentally recorded seismic and geodetic data. However, long-term slip-deficit modelling requires detailed information on the size and distribution of slip for pre-instrumental events over hundreds of years covering more than one 'seismic cycle'. This requires the exploitation of proxy sources of data. Coral microatolls, growing in the intertidal zone of the outer island arc of the Sunda trench, present the possibility of reconstructing slip for a number of pre-instrumental earthquakes. Their growth is influenced by tectonic flexing of the continental plate beneath them; they act as long term recorders of the vertical component of deformation. However, the sparse distribution of data available using coral geodesy results in an underdetermined problem with non-unique solutions. Rather than accepting any one realisation as the definite model satisfying the coral displacement data, a Monte Carlo approach identifies a suite of models consistent with the observations. Using a Genetic Algorithm to accelerate the identification of desirable models, we have developed a Monte Carlo Slip Estimator-Genetic Algorithm (MCSE-GA) which exploits the full range of uncertainty associated with the displacements. Each iteration of the MCSE-GA samples different values from within the spread of uncertainties associated with each coral displacement.
The Genetic Algorithm element of the MCSE-GA allows it to recombine the information stored in a population of randomly generated models to rapidly converge on a possible solution. These solutions are evaluated and those satisfying a threshold number of observations join an ensemble of models contributing to a final Weighted Average Model (WAM). The WAM represents a high resolution estimate of the slip distribution responsible for any given set of displacements. Analysis of the slip values of the ensemble models allows areas of high confidence to be identified where the standard deviation is low. Similarly, areas of low confidence will be found where standard deviations are high. This presentation will demonstrate the ability of the MCSE-GA to produce both accurate models of slip for a number of recent instrumentally recorded earthquakes along the Sunda Trench and estimates of slip during 1797 and 1833 paleoearthquakes that are consistent with those produced from other techniques.
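The MCSE-GA loop described above — resampling the observations within their uncertainties, evolving a population of slip models by recombination and mutation, and averaging an ensemble of acceptable models — can be sketched generically. The linear forward model, problem dimensions and GA parameters below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model (Green's functions): displacement = G @ slip.
n_patch, n_obs = 20, 12
G = rng.normal(size=(n_obs, n_patch)) * 0.1
true_slip = rng.uniform(0.0, 2.0, n_patch)
sigma = 0.05 * np.ones(n_obs)                 # coral displacement uncertainties
d_obs = G @ true_slip + rng.normal(0.0, sigma)

def misfit(m, d):
    return np.sum(((G @ m - d) / sigma) ** 2)

pop = rng.uniform(0.0, 2.0, (60, n_patch))    # random initial slip models
ensemble = []
for gen in range(400):
    # Monte Carlo element: resample the data within its uncertainty each iteration.
    d = d_obs + rng.normal(0.0, sigma)
    fit = np.array([misfit(m, d) for m in pop])
    order = np.argsort(fit)
    ensemble.append(pop[order[0]].copy())     # keep each generation's best model
    parents = pop[order[:30]]
    # Genetic Algorithm element: recombine parent models and mutate.
    a = parents[rng.integers(30, size=60)]
    b = parents[rng.integers(30, size=60)]
    mask = rng.random((60, n_patch)) < 0.5
    pop = np.clip(np.where(mask, a, b) + rng.normal(0.0, 0.05, (60, n_patch)), 0.0, None)

wam = np.mean(ensemble[-100:], axis=0)        # averaged model (WAM analogue)
spread = np.std(ensemble[-100:], axis=0)      # low spread -> higher confidence
```

Patches where `spread` is small correspond to the high-confidence regions of the slip distribution discussed above; the paper's WAM additionally weights ensemble members, which is omitted here.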
Integrating climate change criteria in reforestation projects using a hybrid decision-support system
NASA Astrophysics Data System (ADS)
Curiel-Esparza, Jorge; Gonzalez-Utrillas, Nuria; Canto-Perello, Julian; Martin-Utrillas, Manuel
2015-09-01
The selection of appropriate species in a reforestation project has always been a complex decision-making problem in which, due mostly to government policies and other stakeholders, not only economic criteria but also other environmental issues interact. Climate change has not usually been taken into account in traditional reforestation decision-making strategies and management procedures. Moreover, there is a lack of agreement on the percentage of each one of the species in reforestation planning, which is usually calculated in a discretionary way. In this context, an effective multicriteria technique has been developed in order to improve the process of selecting species for reforestation in the Mediterranean region of Spain. A hybrid Delphi-AHP methodology is proposed, which includes a consistency analysis in order to reduce random choices. As a result, this technique provides an optimal percentage distribution of the appropriate species to be used in reforestation planning. The highest weights for the subcriteria corresponded to FR (fire forest response) and PR (pests and diseases risk), because of the increasing importance of the impact of climate change on the forest. However, CB (conservation of biodiversity) was in the third position, in line with the aim of reforestation. Therefore, the most suitable species were Quercus faginea (19.75%) and Quercus ilex (19.35%), which offer a good balance between all the factors affecting the success and viability of reforestation.
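The AHP step of such a hybrid Delphi-AHP method derives criterion weights from a pairwise comparison matrix and checks the consistency of the judgements. A minimal sketch with a hypothetical 3×3 comparison of the subcriteria mentioned above (the judgement values are invented for illustration):

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three subcriteria,
# e.g. FR vs PR vs CB (the judgements here are invented).
A = np.array([[1.0, 2.0, 3.0],
              [1.0 / 2.0, 1.0, 2.0],
              [1.0 / 3.0, 1.0 / 2.0, 1.0]])

n = A.shape[0]
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # principal eigenvalue (lambda_max)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                           # priority weights, summing to 1

CI = (eigvals[k].real - n) / (n - 1)   # consistency index
RI = 0.58                              # Saaty's random index for n = 3
CR = CI / RI                           # judgements acceptable if CR < 0.1
```

The consistency ratio CR implements the "consistency analysis in order to reduce random choices" mentioned in the abstract: comparison matrices with CR ≥ 0.1 would be sent back to the Delphi panel for revision.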
Tabernacka, Agnieszka; Zborowska, Ewa; Lebkowska, Maria; Borawski, Maciej
2014-01-15
A two-stage waste air treatment system, consisting of hybrid bioreactors (modified bioscrubbers) and a biofilter, was used to treat waste air containing chlorinated ethenes - trichloroethylene (TCE) and tetrachloroethylene (PCE). The bioreactor was operated with loadings in the range 0.46-5.50 g m(-3) h(-1) for TCE and 2.16-9.02 g m(-3) h(-1) for PCE. The biofilter loadings were in the range 0.1-0.97 g m(-3) h(-1) for TCE and 0.2-2.12 g m(-3) h(-1) for PCE. Under low pollutant loadings, the efficiency of TCE elimination was 23-25% in the bioreactor and 54-70% in the biofilter. The efficiency of PCE elimination was 44-60% in the bioreactor and 50-75% in the biofilter. The best results for the bioreactor were observed one week after the pollutant loading was increased. However, the process did not stabilize: over the next seven days, contaminant removal efficiency, enzymatic activity and biomass content all diminished. PMID:24316808
Intraply Hybrid Composite Design
NASA Technical Reports Server (NTRS)
Chamis, C. C.; Sinclair, J. H.
1986-01-01
Several theoretical approaches combined in program. Intraply hybrid composites investigated theoretically and experimentally at Lewis Research Center. Theories developed during investigations and corroborated by attendant experiments used to develop computer program identified as INHYD (Intraply Hybrid Composite Design). INHYD includes several composites micromechanics theories, intraply hybrid composite theories, and integrated hygrothermomechanical theory. Equations from theories used by program as appropriate for user's specific applications.
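The micromechanics theories bundled in codes such as INHYD start from constituent properties. A generic rule-of-mixtures estimate for the longitudinal modulus of an intraply hybrid is sketched below; this is a standard textbook relation under assumed illustrative properties, not INHYD's actual equations:

```python
# Generic rule-of-mixtures sketch (not INHYD's actual formulation):
# two fibre types and matrix act in parallel along the fibre direction.
def hybrid_longitudinal_modulus(Vf1, Ef1, Vf2, Ef2, Em):
    """Longitudinal modulus of an intraply hybrid ply (same units as inputs)."""
    Vm = 1.0 - Vf1 - Vf2
    assert Vm >= 0.0, "fibre volume fractions exceed 1"
    return Vf1 * Ef1 + Vf2 * Ef2 + Vm * Em

# Illustrative values (GPa): graphite + S-glass fibres in an epoxy matrix.
E11 = hybrid_longitudinal_modulus(0.30, 230.0, 0.25, 70.0, 3.5)
print(E11)  # 88.075 GPa
```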
Hybrid routing technique for a fault-tolerant, integrated information network
NASA Technical Reports Server (NTRS)
Meredith, B. D.
1986-01-01
The evolutionary growth of the space station and the diverse activities onboard are expected to require a hierarchy of integrated, local area networks capable of supporting data, voice, and video communications. In addition, fault-tolerant network operation is necessary to protect communications between critical systems attached to the net and to relieve the valuable human resources onboard the space station of time-critical data system repair tasks. A key issue for the design of the fault-tolerant, integrated network is the development of a robust routing algorithm which dynamically selects the optimum communication paths through the net. A routing technique is described that adapts to topological changes in the network to support fault-tolerant operation and system evolvability.
NASA Astrophysics Data System (ADS)
Yang, Wei; Hall, Trevor J.
2013-12-01
The Internet is entering an era of cloud computing to provide more cost effective, eco-friendly and reliable services to consumer and business users. As a consequence, the nature of the Internet traffic has been fundamentally transformed from a pure packet-based pattern to today's predominantly flow-based pattern. Cloud computing has also brought about an unprecedented growth in the Internet traffic. In this paper, a hybrid optical switch architecture is presented to deal with the flow-based Internet traffic, aiming to offer flexible and intelligent bandwidth on demand to improve fiber capacity utilization. The hybrid optical switch is capable of integrating IP into optical networks for cloud-based traffic with predictable performance, for which the delay performance of the electronic module in the hybrid optical switch architecture is evaluated through simulation.
Fischer, J
2005-12-21
This report summarizes the results of a field verification pilot site investigation that involved the installation of a hybrid integrated active desiccant/vapor-compression rooftop heating, ventilation, and air-conditioning (HVAC) unit at an elementary school in the Atlanta Georgia area. For years, the school had experienced serious humidity and indoor air quality (IAQ) problems that had resulted in occupant complaints and microbial (mold) remediation. The outdoor air louvers of the original HVAC units had been closed in an attempt to improve humidity control within the space. The existing vapor compression variable air volume system was replaced by the integrated active desiccant rooftop (IADR) system that was described in detail in an Oak Ridge National Laboratory (ORNL) report published in 2004 (Fischer and Sand 2004). The IADR system and all space conditions have been monitored remotely for more than a year. The hybrid system was able to maintain both the space temperature and humidity as desired while delivering the outdoor air ventilation rate required by American Society of Heating, Refrigerating and Air-Conditioning Engineers Standard 62. The performance level of the IADR unit and the overall system energy efficiency was measured and found to be very high. A comprehensive IAQ investigation was completed by the Georgia Tech Research Institute before and after the system retrofit. Before-and-after data resulting from this investigation confirmed a significant improvement in IAQ, humidity control, and occupant comfort. These observations were reported by building occupants and are echoed in a letter to ORNL from the school district energy manager. The IADR system was easily retrofitted in place of the original rooftop system using a custom curb adapter. All work was completed in-house by the school's maintenance staff over one weekend. 
A subsequent cost analysis completed for the school district by the design engineer of record concluded that the IADR system being investigated was actually less expensive to install than other less-efficient options, most of which were unable to deliver the required ventilation while maintaining the desired space humidity levels.
Hybrid information privacy system: integration of chaotic neural network and RSA coding
NASA Astrophysics Data System (ADS)
Hsu, Ming-Kai; Willey, Jeff; Lee, Ting N.; Szu, Harold H.
2005-03-01
Electronic mail is used worldwide, and most messages are easily compromised by hackers. In this paper, we propose a free, fast and convenient hybrid privacy system to protect email communication. The privacy system is implemented by combining the RSA public-key algorithm with a chaotic neural network encryption process. The receiver can decrypt a received email as long as it can reproduce the specified chaotic neural network series, the so-called spatial-temporal keys. The chaotic typing and initial seed value of the chaotic neural network series, encrypted by the RSA algorithm, reproduce the spatial-temporal keys. The encrypted chaotic typing and initial seed value are hidden in a watermark mixed nonlinearly with the message media, wrapped with convolutional error-correction codes for wireless 3rd generation cellular phones. The message media can be an arbitrary image. Pattern noise has to be considered during transmission, since it could affect or change the spatial-temporal keys. Since any change or modification of the chaotic typing or initial seed value of the chaotic neural network series is unacceptable, the RSA codec system must be robust and fault-tolerant over the wireless channel. The robust and fault-tolerant properties of chaotic neural networks (CNN) were proved by a field theory of Associative Memory by Szu in 1997. The 1-D chaos-generating nodes from the logistic map having arbitrarily negative slope a = p/q, generating the N-shaped sigmoid, were first given by Szu in 1992. In this paper, we simulate the robustness and fault tolerance of CNN under additive noise and pattern noise. We also implement a private version of RSA coding and a chaos encryption process on messages.
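The core idea above, a secret seed reproducing a chaotic series that acts as the key, can be sketched with the standard logistic map (not Szu's negative-slope variant used in the paper). The seed and map parameter play the role of the spatial-temporal key; the RSA layer that protects them is omitted:

```python
# Minimal sketch: chaotic keystream from the standard logistic map.
# The seed (and parameter r) act as the secret; any receiver that can
# reproduce them regenerates the identical keystream.
def logistic_keystream(seed, r=3.99, n=16, burn_in=100):
    x = seed
    for _ in range(burn_in):              # discard the transient
        x = r * x * (1.0 - x)
    stream = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        stream.append(int(x * 256) & 0xFF)  # quantise each iterate to a byte
    return stream

def xor_cipher(data, key):
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"hello chaos"
ks = logistic_keystream(0.123456789, n=len(msg))
ct = xor_cipher(msg, ks)
assert xor_cipher(ct, ks) == msg   # same seed reproduces the keystream
```

Sensitivity to the seed is the point: a receiver with even a slightly wrong seed diverges after the burn-in and recovers nothing.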
Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT
Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah
2015-01-01
Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. Despite numerous attempts at automating ventricle segmentation and tracking in echocardiography, the problem remains challenging due to low-quality images with missing anatomical details, speckle noise and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve image contrast. In addition, it aims to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information across all overlapping images by using a combination of principal component analysis (PCA) and discrete wavelet transform (DWT). For evaluation, the results of several well-known techniques are compared with those of the proposed method, and different metrics are used to evaluate its performance. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics. PMID:26089965
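A minimal sketch of one common way to combine PCA and DWT for pixel-level fusion: a one-level Haar transform per image, PCA-derived weights for the approximation band, and a max-absolute rule for the detail bands. This is a generic illustration of the ingredients, not the paper's exact algorithm:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = (img[0::2] + img[1::2]) / 2.0   # row averages
    d = (img[0::2] - img[1::2]) / 2.0   # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.empty((2 * h, 2 * w))
    img[0::2], img[1::2] = a + d, a - d
    return img

def pca_weights(x, y):
    """Fusion weights from the principal component of the two bands."""
    C = np.cov(np.vstack([x.ravel(), y.ravel()]))
    eigvals, eigvecs = np.linalg.eigh(C)
    v = np.abs(eigvecs[:, -1])           # principal eigenvector
    return v / v.sum()

def fuse(img1, img2):
    b1, b2 = haar2d(img1), haar2d(img2)
    w1, w2 = pca_weights(b1[0], b2[0])
    LL = w1 * b1[0] + w2 * b2[0]         # PCA-weighted approximation band
    details = [np.where(np.abs(c1) >= np.abs(c2), c1, c2)   # max-abs rule
               for c1, c2 in zip(b1[1:], b2[1:])]
    return ihaar2d(LL, *details)

rng = np.random.default_rng(0)
img = rng.random((16, 16))
assert np.allclose(ihaar2d(*haar2d(img)), img)  # perfect reconstruction
fused = fuse(img, img)                          # fusing an image with itself
assert np.allclose(fused, img)                  # is a no-op, as expected
```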
Hybrid Semiconductor-Molecular Integrated Circuits for Digital Electronics: CMOL Approach
NASA Astrophysics Data System (ADS)
Strukov, Dmitri B.
This chapter describes architectures of digital circuits including memories, general-purpose, and application-specific reconfigurable Boolean logic circuits for the prospective hybrid CMOS/nanowire/nanodevice ("CMOL") technology. The basic idea of CMOL circuits is to combine the advantages of CMOS technology (including its flexibility and high fabrication yield) with those of molecular-scale nanodevices. Two-terminal nanodevices would be naturally incorporated into nanowire crossbar fabric, enabling very high function density at acceptable fabrication costs. In order to overcome the CMOS/nanodevice interface problem, in CMOL circuits the interface is provided by sharp-tipped pins that are distributed all over the circuit area, on top of the CMOS stack. We show that CMOL memories with a nano/CMOS pitch ratio close to 10 may be far superior to the densest semiconductor memories by providing, e.g., 1 Tbit/cm^2 density even for the plausible defect fraction of 2%. Even greater defect tolerance (more than 20% for 99% circuit yield) can be achieved in both types of programmable Boolean logic CMOL circuits. In such circuits, two-terminal nanodevices provide programmable diode functionality for logic circuit operation, and allow circuit mapping and reconfiguration around defective nanodevices, while the CMOS subsystem is used for signal restoration and latching. Using custom-developed design automation tools we have successfully mapped on reconfigurable general-purpose logic fabric ("CMOL FPGA") the well-known Toronto 20 benchmark circuits and estimated their performance. The results have shown that, in addition to high defect tolerance, CMOL FPGA circuits may have extremely high density (more than two orders of magnitude higher than that of a usual CMOS FPGA with the same CMOS design rules) while operating at higher speed with acceptable power consumption. 
Finally, our estimates indicate that reconfigurable application-specific ("CMOL DSP") circuits may increase the speed of low-level image processing tasks by more than two orders of magnitude as compared to the fastest CMOS DSP chips implemented with the same CMOS design rules at the same area and power consumption.
Propulsion Airframe Aeroacoustic Integration Effects for a Hybrid Wing Body Aircraft Configuration
NASA Technical Reports Server (NTRS)
Czech, Michael J.; Thomas, Russell H; Elkoby, Ronen
2012-01-01
An extensive experimental investigation was performed to study the propulsion airframe aeroacoustic effects of a high bypass ratio engine for a hybrid wing body aircraft configuration where the engine is installed above the wing. The objective was to provide an understanding of the jet noise shielding effectiveness as a function of engine gas condition and location as well as nozzle configuration. A 4.7% scale nozzle of a bypass ratio seven engine was run at characteristic cycle points under static and forward flight conditions. The effect of the pylon and its orientation on jet noise was also studied as a function of bypass ratio and cycle condition. The addition of a pylon yielded significant spectral changes, lowering jet noise by up to 4 dB at high polar angles and increasing it by 2 to 3 dB at forward angles. In order to assess jet noise shielding, a planform representation of the airframe model, also at 4.7% scale, was traversed such that the jet nozzle was positioned from downstream of the airframe model trailing edge to several diameters upstream of it. Installations at two fan diameters upstream of the wing trailing edge provided only limited shielding in the forward arc at high frequencies for both the axisymmetric and a conventional round nozzle with pylon. This was consistent with phased array measurements suggesting that the high frequency sources are predominantly located near the nozzle exit and, consequently, are amenable to shielding. The mid to low frequency sources were observed further downstream and shielding was insignificant. Chevrons were designed and used to impact the distribution of sources, with the more aggressive design showing a significant upstream migration of the sources in the mid frequency range. Furthermore, the chevrons reduced the low frequency source levels, and the typical high frequency increase due to the application of chevron nozzles was successfully shielded. 
The pylon was further modified with a technology that injects air through the shelf of the pylon which was effective in reducing low frequency noise and moving jet noise sources closer to the nozzle exit. In general, shielding effectiveness varied as a function of cycle condition with the cutback condition producing higher shielding compared to sideline power. The configuration with a more strongly immersed chevron and a pylon oriented opposite to the microphones produced the largest reduction in jet noise. In addition to the jet noise source, the shielding of a broadband point noise source was documented with up to 20 dB of noise reduction at directivity angles directly under the shielding surface.
NASA Astrophysics Data System (ADS)
Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Fielding, E. J.; Agram, P.; Manipon, G.; Stough, T. M.; Simons, M.; Rosen, P. A.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.
2013-12-01
Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR) and Continuous Global Positioning System (CGPS) are now important elements in our toolset for monitoring earthquake-generating faults, volcanic eruptions, hurricane damage, landslides, reservoir subsidence, and other natural and man-made hazards. Geodetic imaging's unique ability to capture surface deformation with high spatial and temporal resolution has revolutionized both earthquake science and volcanology. Continuous monitoring of surface deformation and surface change before, during, and after natural hazards improves decision-making from better forecasts, increased situational awareness, and more informed recovery. However, analyses of InSAR and GPS data sets are currently handcrafted following events and are not generated rapidly and reliably enough for use in operational response to natural disasters. Additionally, the sheer data volumes needed to handle a continuous stream of InSAR data sets also presents a bottleneck. It has been estimated that continuous processing of InSAR coverage of California alone over 3-years would reach PB-scale data volumes. Our Advanced Rapid Imaging and Analysis for Monitoring Hazards (ARIA-MH) science data system enables both science and decision-making communities to monitor areas of interest with derived geodetic data products via seamless data preparation, processing, discovery, and access. We will present our findings on the use of hybrid-cloud computing to improve the timely processing and delivery of geodetic data products, integrating event notifications from USGS to improve the timely processing for response, as well as providing browse results for quick looks with other tools for integrative analysis.
Monte Carlo methods on advanced computer architectures
Martin, W.R. [Univ. of Michigan, Ann Arbor, MI (United States)
1991-12-31
Monte Carlo methods describe a wide class of computational methods that utilize random numbers to perform a statistical simulation of a physical problem, which itself need not be a stochastic process. For example, Monte Carlo can be used to evaluate definite integrals, which are not stochastic processes, or may be used to simulate the transport of electrons in a space vehicle, which is a stochastic process. The name Monte Carlo came about during the Manhattan Project to describe the new mathematical methods being developed, which had some similarity to the games of chance played in the casinos of Monte Carlo. Particle transport Monte Carlo is just one application of Monte Carlo methods, and will be the subject of this review paper. Other applications of Monte Carlo, such as reliability studies, classical queueing theory, molecular structure, the study of phase transitions, or quantum chromodynamics calculations for basic research in particle physics, are not included in this review. The reference by Kalos is an introduction to general Monte Carlo methods, and references to other applications of Monte Carlo can be found in this excellent book. For the remainder of this paper, the term Monte Carlo will be synonymous with particle transport Monte Carlo, unless otherwise noted. 60 refs., 14 figs., 4 tabs.
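The definite-integral use case mentioned above is the simplest Monte Carlo application: average the integrand at uniformly random points and scale by the interval length. A minimal sketch:

```python
import random

# Monte Carlo estimate of a definite integral: here int_0^1 x^2 dx = 1/3.
def mc_integrate(f, a, b, n, seed=42):
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n   # mean of f times interval length

est = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
print(est)  # close to 1/3, with O(1/sqrt(n)) statistical error
```

The statistical error shrinks as 1/sqrt(n) regardless of dimension, which is why the same idea scales to transport problems that deterministic quadrature cannot touch.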
NASA Astrophysics Data System (ADS)
Joosten, A.; Bochud, F.; Moeckli, R.
2014-08-01
The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results, possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques including conventional open tangents, 3D conformal wedged tangents and hybrid intensity modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and all solid organs. Then, different general non-linear risk models were applied to the whole body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two specific organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), underscoring the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment of risk than the absolute risk. We found that the ratio of risk between two techniques could also vary substantially across the different approaches to risk estimation. Sometimes the ratio of risk between two techniques would range between values smaller and larger than one, which translates into inconsistent conclusions about which technique carries the higher risk. We found, however, that the hybrid IMRT technique resulted in a systematic reduction of risk compared to the other techniques investigated, even though the magnitude of this reduction varied substantially with the different approaches investigated. 
Based on the epidemiological data available, a reasonable approach to risk estimation would be to use organ-specific non-linear risk models applied to the dose distributions of organs within or near the treatment fields (lungs and contralateral breast in the case of breast radiotherapy) as the majority of radiation-induced secondary cancers are found in the beam-bordering regions.
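The organ-specific linear approach described above reduces to a dose-weighted sum of nominal risk coefficients, and a risk ratio between techniques falls out directly. The coefficients and mean doses below are purely illustrative placeholders, not ICRP/BEIR VII values or the study's dosimetry:

```python
# Hedged sketch of a linear, organ-specific risk estimate and the ratio
# between two techniques. All numbers are hypothetical illustrations.
coeff = {"lung": 0.010, "contralateral_breast": 0.008}   # risk per Gy (hypothetical)

def linear_risk(organ_doses):
    """Lifetime risk as sum over organs of mean dose x risk coefficient."""
    return sum(coeff[o] * d for o, d in organ_doses.items())

open_tangents = {"lung": 1.2, "contralateral_breast": 0.8}   # mean Gy (hypothetical)
hybrid_imrt   = {"lung": 0.9, "contralateral_breast": 0.6}

r1, r2 = linear_risk(open_tangents), linear_risk(hybrid_imrt)
ratio = r2 / r1
print(ratio)  # < 1 means the hybrid technique lowers the estimated risk
```

The paper's point is that swapping in a different model (non-linear, different organs) can move this ratio substantially, sometimes across 1.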
Design and characterization of a hybrid-integrated MEMS scanning grating spectrometer
NASA Astrophysics Data System (ADS)
Grüger, Heinrich; Knobbe, Jens; Pügner, Tino; Schenk, Harald
2013-03-01
Grating spectrometers, like the well-established Czerny-Turner design, are based on an optical layout consisting of several components: typically at least two slits, two mirrors, the grating stage and a detector are required. Much work has gone into reducing this effort: setups using only one mirror (Ebert-Fastie), replacement of the entrance slit by thin optical fibers, and integrated electronic detector arrays instead of a moving grating with an exit slit and single detector have all been applied. Reduced effort comes with performance limitations: either the optical resolution or the throughput suffers, or the use of the system is limited by the availability of detector arrays at a reasonable price. Components in micro-opto-electro-mechanical systems (MOEMS) technology, and spectroscopic systems based on them, have been developed to improve this situation. Miniaturized scanning gratings fabricated on bonded silicon-on-insulator (BSOI) wafers were used to design grating spectrometers for the near infrared requiring only single detectors. Discrete components offer flexibility but also require adjustment of two mirrors, the grating stage, the fiber mount and the detector with its slit, and optionally a second slit in the entrance area. Further development leads towards the integration of the slits into the MOEMS chip, thus reducing the adjustment effort. Flexibility may be reduced, as changes to the optical design or grating spacing would require a new chip with its own set of masks. Nevertheless, if extreme miniaturization is desired, this approach seems promising; moreover, high-volume production may become possible at a comparably low price. A new chip was developed offering the grating, two slits and a cavity for the detector chip. The optical design was adjusted to a planar arrangement of grating and slits. A detector buried in a chip cavity required a new mounting strategy. 
The other optical components were optimized and fabricated, and the system was then assembled with electronics and software adapted to the new design, including some new features such as integrated position sensors. A first test of the systems to verify the function of all components is presented. Further work will aim at improved performance, such as higher resolution and higher SNR.
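A scanning-grating spectrometer of this kind selects wavelengths via the plane grating equation m*lambda = d*(sin(alpha) + sin(beta)); tilting the MEMS grating sweeps the wavelength that lands on the fixed exit slit and single detector. The groove density and angles below are illustrative assumptions, not the actual chip parameters:

```python
import math

# Plane grating equation: m*lambda = d*(sin(alpha) + sin(beta)).
# Groove density and tilt angles are illustrative, not the device's values.
def diffracted_wavelength(alpha_deg, beta_deg, lines_per_mm=600, m=1):
    d = 1e6 / lines_per_mm                      # groove spacing in nm
    return d / m * (math.sin(math.radians(alpha_deg))
                    + math.sin(math.radians(beta_deg)))

# Scanning tilts incidence and diffraction angles together; each tilt
# steers a different near-infrared wavelength onto the exit slit.
for tilt in (15.0, 20.0, 25.0):
    print(tilt, diffracted_wavelength(tilt, tilt))
```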
LU,YUNFENG; FAN,HONGYOU; DOKE,NILESH; LOY,DOUGLAS A.; ASSINK,ROGER A.; LAVAN,DAVID A.; BRINKER,C. JEFFREY
2000-06-12
Since the discovery of surfactant-templated silica mesophases, the development of organic modification schemes to impart functionality to the pore surfaces has received much attention. Most recently, using the general class of compounds referred to as bridged silsesquioxanes, (RO)3Si-R'-Si(OR)3 (Scheme 1), three research groups have reported the formation of a new class of poly(bridged-silsesquioxane) mesophases (BSQMs) with integral organic functionality. In contrast to previous hybrid mesophases where organic ligands or molecules are situated on pore surfaces, this class of materials necessarily incorporates the organic constituents into the framework as molecularly dispersed bridging ligands. Although it is anticipated that this new mesostructural organization should result in synergistic properties derived from the molecular-scale mixing of the inorganic and organic components, few properties of BSQMs have been measured. In addition, samples prepared to date have been in the form of granular precipitates, precluding their use in applications like membranes, fluidics, and low-k dielectric films needed for all foreseeable future generations of microelectronics.
NASA Astrophysics Data System (ADS)
Jiang, Zhaoshuo; Kim, Sung Jig; Plude, Shelley; Christenson, Richard
2013-10-01
Magneto-rheological (MR) fluid dampers can be used to reduce the traffic-induced vibration in highway bridges and protect critical structural components from fatigue. Experimental verification is needed to verify the applicability of the MR dampers for this purpose. Real-time hybrid simulation (RTHS), where the MR dampers are physically tested and dynamically linked to a numerical model of the highway bridge and truck traffic, provides an efficient and effective means to experimentally examine the efficacy of MR dampers for fatigue protection of highway bridges. In this paper a complex highway bridge model with 263,178 degrees-of-freedom under truck loading is tested using the proposed convolution integral (CI) method of RTHS for a semiactive structural control strategy employing two large-scale 200 kN MR dampers. The formulation of RTHS using the CI method is first presented, followed by details of the various components in the RTHS and a description of the implementation of the CI method for this particular test. The experimental results confirm the practicability of the CI method for conducting RTHS of complex systems.
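The convolution integral idea is that the numerical substructure's response is Duhamel's integral: the discrete convolution of its unit-impulse response with the measured interface force history. A single-degree-of-freedom sketch under assumed parameters (not the bridge model of the paper):

```python
import numpy as np

# CI-style response of a numerical substructure: x(t) = int h(t - tau) f(tau) dtau,
# evaluated as a discrete convolution. SDOF parameters are illustrative.
m, c, k = 1.0, 4.0, 100.0          # mass, damping, stiffness
dt = 0.001
wn = np.sqrt(k / m)                # natural frequency
zeta = c / (2.0 * m * wn)          # damping ratio
wd = wn * np.sqrt(1.0 - zeta**2)   # damped frequency

t = np.arange(0.0, 2.0, dt)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)  # impulse response

force = np.ones_like(t)            # "measured" interface force: unit step

x = np.convolve(force, h)[: len(t)] * dt   # running Duhamel convolution
print(x[-1])                        # settles near the static value 1/k = 0.01
```

In the actual RTHS loop, each new measured force sample extends this convolution in real time, which is what lets a very large linear model run at the required rate.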
NASA Astrophysics Data System (ADS)
Zhang, Xingyu; Hosseini, Amir; Subbaraman, Harish; Luo, Jingdong; Jen, Alex K.-Y.; Chung, Chi-jui; Yan, Hai; Pan, Zeyu; Nelson, Robert L.; Chen, Ray T.
2015-03-01
Silicon-organic hybrid integrated devices have emerging applications ranging from high-speed optical interconnects to photonic electromagnetic-field sensors. Silicon slot photonic crystal waveguides (PCWs) filled with electro-optic (EO) polymers combine the slow-light effect in PCWs with the high polarizability of EO polymers, which promises the realization of high-performance optical modulators. In this paper, a broadband, power-efficient, low-dispersion, and compact optical modulator based on an EO polymer filled silicon slot PCW is presented. A small voltage-length product of Vpi x L = 0.282 V x mm is achieved, corresponding to an unprecedented record-high effective in-device EO coefficient (r33) of 1230 pm/V. Assisted by a backside gate voltage, the modulation response up to 50 GHz is observed, with a 3-dB bandwidth of 15 GHz, and the estimated energy consumption is 94.4 fJ/bit at 10 Gbit/s. Furthermore, lattice-shifted PCWs are utilized to enhance the optical bandwidth by a factor of ~10X over other modulators based on non-band-engineered PCWs and ring-resonators.
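Two back-of-the-envelope checks on figures of this kind: the voltage-length product fixes the half-wave voltage for any chosen device length, and energy per bit for a capacitively driven modulator is often estimated as C*Vpp^2/4. The device length, capacitance, and drive swing below are assumed values for illustration, not the paper's:

```python
# Illustrative arithmetic around a quoted voltage-length product.
vpi_L = 0.282            # V*mm (quoted above)
L = 0.3                  # assumed device length in mm
v_pi = vpi_L / L         # half-wave voltage for that length
print(v_pi)              # 0.94 V

# Common OOK energy-per-bit estimate E = C * Vpp^2 / 4 (assumed values).
C = 25e-15               # assumed device capacitance, farads
v_pp = 1.0               # assumed peak-to-peak drive, volts
E_bit_fJ = C * v_pp**2 / 4 * 1e15
print(E_bit_fJ)          # femtojoules per bit
```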
Li, Bin; Chen, Kan; Tian, Lianfang; Yeboah, Yao; Ou, Shanxing
2013-01-01
The segmentation and detection of various types of nodules in a computer-aided detection (CAD) system present various challenges, especially when (1) the nodule is connected to a vessel and they have very similar intensities; or (2) the nodule with ground-glass opacity (GGO) characteristic possesses typically weak edges and intensity inhomogeneity, making the boundaries difficult to define. Traditional segmentation methods may cause problems of boundary leakage and "weak" local minima. This paper deals with the above-mentioned problems. An improved detection method, which combines a fuzzy integrated active contour model (FIACM)-based segmentation method, a segmentation refinement method based on a parametric mixture model (PMM) of juxta-vascular nodules, and a knowledge-based C-SVM (cost-sensitive support vector machine) classifier, is proposed for detecting various types of pulmonary nodules in computerized tomography (CT) images. Our approach has several novel aspects: (1) in the proposed FIACM model, edge and local region information is incorporated, and the fuzzy energy is used as the motivation power for the evolution of the active contour; (2) a hybrid PMM model of juxta-vascular nodules combining appearance and geometric information is constructed for segmentation refinement of juxta-vascular nodules. Experimental results for pulmonary nodule detection show the desirable performance of the proposed method. PMID:23690876
NASA Astrophysics Data System (ADS)
Schwödiauer, Reinhard; Graz, Ingrid; Kaltenbrunner, Martin; Keplinger, Christoph; Bartu, Petr; Buchberger, Gerda; Ortwein, Christoph; Bauer, Siegfried
2008-03-01
Thin polymer foams with a closed-cell void structure can be internally charged by silent or partial discharges within the voids. The resulting material, which carries positive and negative charges on the internal void surfaces, is called a ferroelectret. Ferroelectrets behave like typical ferroelectrics, hence they provide a novel class of ferroic materials. The soft foams are strongly piezoelectric in the 3-direction, but show negligible piezoelectric response in the transverse direction. This, together with a very low pyroelectric coefficient, makes ferroelectrets highly suitable for flexible electroactive transducer elements which can be integrated in thin bendable organic electronic devices. Here we describe some fundamental characteristics of cellular ferroelectrets and present a number of promising examples for a possible combination with various functional polymer systems. Our examples focus on flexible ferroelectret field-effect transistor systems for large-area sensor skins and microphones, flexible large-array position detectors (touchpads), and stretchable large-array pressure sensors.
NASA Astrophysics Data System (ADS)
Bhattacharya, Amitabh
2013-11-01
An efficient algorithm for simulating Stokes flow around particles is presented here, in which a second-order finite difference method (FDM) is coupled to a boundary integral method (BIM). This method utilizes the strong points of FDM (i.e. a localized stencil) and BIM (i.e. an accurate representation of the particle surface). Specifically, in each iteration, the flow field away from the particles is solved on a Cartesian FDM grid, while the traction on the particle surface (given the velocity of the particle) is solved using BIM. The two schemes are coupled by matching the solution in an intermediate region between the particle and the surrounding fluid. We validate this method by solving for flow around an array of cylinders, and find good agreement with Hasimoto's (J. Fluid Mech. 1959) analytical results.
33. Monte Carlo techniques
Masci, Frank
Most Monte Carlo sampling or integration techniques assume a "random number generator" that supplies uniformly distributed values. Sampling from a general distribution makes use of the cumulative distribution function F(x) (expressing the probability that x <= a). For a discrete distribution, F(x) will have a discontinuous jump of size f(xi) at each allowed value xi.
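The passage above is the basis of inverse-transform sampling: feed a uniform deviate u through the inverse of the cumulative distribution function F. A minimal sketch for the exponential distribution, where F(x) = 1 - exp(-x) inverts in closed form:

```python
import math
import random

# Inverse-transform sampling of an exponential distribution:
# F(x) = 1 - exp(-x), so F^{-1}(u) = -ln(1 - u) for uniform u in [0, 1).
def sample_exponential(rng):
    u = rng.random()
    return -math.log(1.0 - u)

rng = random.Random(7)
samples = [sample_exponential(rng) for _ in range(200_000)]
print(sum(samples) / len(samples))  # close to 1.0, the exponential mean
```

For distributions whose F has no closed-form inverse, the same idea is applied via table lookup, or one falls back on acceptance-rejection methods.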
Mingwu Yang; Yinchao Chen; Raj Mittra
2000-01-01
In this paper, we present a hybrid algorithm that combines the finite-difference time-domain (FDTD) and finite-volume time-domain (FVTD) methods to analyze microwave integrated-circuit structures that may contain curved perfect electric conductor (PEC) surfaces. We employ the conventional nonuniform FDTD in regions where the objects are describable with a rectangular mesh, while applying the FVTD method elsewhere where we need to
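The FDTD half of such a hybrid scheme is the classic leapfrog Yee update on a rectangular mesh. A minimal 1-D sketch in normalized units (the FVTD treatment of curved PEC surfaces is not shown; grid sizes and the source are illustrative):

```python
import math

# Minimal 1-D FDTD (Yee leapfrog) sketch in normalized units:
# E and H live on staggered grids and are updated alternately.
nz, nt = 200, 300
S = 0.5                      # Courant number (stable for S <= 1 in 1-D)
ez = [0.0] * nz
hy = [0.0] * nz

for n in range(nt):
    for k in range(nz - 1):                    # H update (half step)
        hy[k] += S * (ez[k + 1] - ez[k])
    for k in range(1, nz):                     # E update (half step later)
        ez[k] += S * (hy[k] - hy[k - 1])
    ez[0] = math.exp(-((n - 30) / 10.0) ** 2)  # hard Gaussian source at k = 0

peak = max(abs(v) for v in ez)
print(peak)  # the launched pulse propagates with amplitude near 1
```

In the hybrid method, updates like these run wherever the geometry fits a rectangular mesh, with the FVTD cells exchanging field values at the interface each time step.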
Multiscale kinetic Monte Carlo algorithm for simulating epitaxial growth
Jason P. Devita; Leonard M. Sander; Peter Smereka
2005-01-01
We present a fast Monte Carlo algorithm for simulating epitaxial surface growth, based on the continuous-time Monte Carlo algorithm of Bortz, Kalos, and Lebowitz. When simulating realistic growth regimes, much computational time is consumed by the relatively fast dynamics of the adatoms. Continuum and continuum-discrete hybrid methods have been developed to address this issue; however, in many situations, the density
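The Bortz-Kalos-Lebowitz (continuous-time) step referenced above is rejection-free: pick an event with probability proportional to its rate, then advance the clock by an exponentially distributed waiting time. A minimal sketch with illustrative rates (not the paper's event catalog):

```python
import math
import random

# One rejection-free (BKL / continuous-time) kinetic Monte Carlo step.
def kmc_step(rates, rng):
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):      # linear search for the chosen event
        acc += rate
        if r < acc:
            break
    dt = -math.log(1.0 - rng.random()) / total   # exponential waiting time
    return i, dt

rng = random.Random(1)
rates = [5.0, 1.0, 0.5]     # e.g. adatom hop >> attachment >> detachment
counts = [0, 0, 0]
t = 0.0
for _ in range(50_000):
    i, dt = kmc_step(rates, rng)
    counts[i] += 1
    t += dt
print(counts, t)  # event counts track the rates; t advances without rejections
```

Because the fast adatom hops dominate the rate list, they also dominate the work, which is exactly the cost the hybrid continuum-discrete methods try to avoid.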
Creel, Scott
Hybridization & Conservation. Natural hybridization can create genetic diversity, e.g. plant species of hybrid origin and genetic exchange among micro-organisms. But hybridization due to human activity can erode the genetic integrity of existing species to the point of causing extinctions, e.g. the New Zealand grey duck (Anas
Lahdenperä, Susanne; Spangar, Anni; Lempainen, Anna-Maija; Joki, Laura; Soukka, Tero
2015-06-21
Switchable lanthanide luminescence is a binary probe technology that inherently enables a high signal modulation in separation-free detection of DNA targets. A luminescent lanthanide complex is formed only when the two probes hybridize adjacently to their target DNA. We have now further adapted this technology for the first time in the integration of a 2-plex polymerase chain reaction (PCR) amplification and hybridization-based solid-phase detection of the amplification products of the Staphylococcus aureus gyrB gene and an internal amplification control (IAC). The assay was performed in a sealed polypropylene PCR chip containing a flat-bottom reaction chamber with two immobilized capture probe spots. The surface of the reaction chamber was functionalized with NHS-PEG-azide, and alkyne-modified capture probes for each amplicon, labeled with a light-harvesting antenna ligand, were covalently attached as spots to the azide-modified reaction chamber using a copper(I)-catalyzed azide-alkyne cycloaddition. Asymmetric duplex-PCR was then performed with no template, one template or both templates present and with a europium ion carrier chelate labeled probe for each amplicon in the reaction. After amplification, europium fluorescence was measured by scanning the reaction chamber as a 10 × 10 raster with 0.6 mm resolution in time-resolved mode. With this assay we were able to co-amplify and detect the amplification products of the gyrB target from 100, 1000 and 10,000 copies of isolated S. aureus DNA together with the amplification products from the initial 5000 copies of the synthetic IAC template in the same sealed reaction chamber. The addition of 10,000 copies of isolated non-target Escherichia coli DNA in the same reaction with 5000 copies of the synthetic IAC template did not interfere with the amplification or detection of the IAC. The dynamic range of the assay for the synthetic S. aureus gyrB target was three orders of magnitude and a limit of detection of 8 pM was obtained. This proof-of-concept study shows that the switchable lanthanide luminescent probes enable separation-free array-based multiplexed detection of the amplification products in a closed-tube PCR, which can enable a higher degree of multiplexing than is currently feasible by using different spectrally separated fluorescent probes. PMID:25882638
Integrated Plasma Simulation of Lower Hybrid Current Drive Modification of Sawtooth in Alcator C-Mod
NASA Astrophysics Data System (ADS)
Bonoli, P. T.; Hubbard, A. E.; Schmidt, A. E.; Wright, J. C.; Kessel, C. E.; Batchelor, D. B.; Berry, L. A.; Harvey, R. W.
2010-11-01
Experiments were performed in Alcator C-Mod, where the onset time for sawteeth was delayed significantly (up to 0.5 s) relative to ohmically heated plasmas, through injection of off-axis LH current drive power [1]. In this poster we discuss simulations of these experiments using the Integrated Plasma Simulator (IPS) [2], through which driven current density profiles and hard x-ray spectra are computed using a ray tracing code (GENRAY) and Fokker Planck code (CQL3D) [3], that are executed repeatedly in time. The background plasma is evolved in these simulations using the TSC transport code with the Porcelli sawtooth model [4]. [4pt] [1] C. E. Kessel et al, Bull. of the Am. Phys. Soc. 53, Poster PP6.00074 (2008). [0pt] [2] D. Batchelor et al, Journal of Physics: Conf. Series 125, 012039 (2008). [0pt] [3] R. W. Harvey and M. G. McCoy, Proc. of the IAEA Tech. Comm. Mtg. on Sim. and Mod. of Therm. Plasmas, Montreal, Canada (1992). [0pt] [4] S. C. Jardin et al, Journal Comp. Phys. 66, 481 (1986).
NASA Technical Reports Server (NTRS)
2002-01-01
This image shows the rugged cratered highland region of Libya Montes. Libya Montes forms part of the rim of an ancient impact basin called Isidis. This region of the highlands is fairly dissected with valley networks. There is still debate within the scientific community as to how valley networks themselves form: surface runoff (rainfall/snowmelt) or headward erosion via groundwater sapping. The degree of dissection here in this region suggests surface runoff rather than groundwater sapping. Small dunes are also visible on the floors of some of these channels.
Note: this THEMIS visual image has not been radiometrically nor geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time. NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.
Perkins, James Michael, 1978-
2007-01-01
A new heterogeneous integration technique has been developed and demonstrated to integrate vertical cavity surface emitting lasers (VCSELs) on silicon CMOS integrated circuits for optical interconnect applications. Individual ...
Analytical Applications of Monte Carlo Techniques.
ERIC Educational Resources Information Center
Guell, Oscar A.; Holcombe, James A.
1990-01-01
Described are analytical applications of the theory of random processes, in particular solutions obtained by using statistical procedures known as Monte Carlo techniques. Supercomputer simulations, sampling, integration, ensemble, annealing, and explicit simulation are discussed. (CW)
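Among the applications listed above, Monte Carlo integration is the easiest to sketch: sample the integrand at uniform random points and report the sample mean together with its 1/√n standard-error estimate. The integrand below is an arbitrary example, not from the record:

```python
import math
import random

def mc_integrate(f, a, b, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [a, b],
    with the usual 1/sqrt(n) standard-error estimate."""
    xs = [a + (b - a) * rng.random() for _ in range(n)]
    fs = [f(x) for x in xs]
    mean = sum(fs) / n
    var = sum((v - mean) ** 2 for v in fs) / (n - 1)
    est = (b - a) * mean                  # integral estimate
    err = (b - a) * math.sqrt(var / n)    # one-sigma standard error
    return est, err

rng = random.Random(42)
est, err = mc_integrate(math.sin, 0.0, math.pi, 100_000, rng)
# exact value of the integral of sin over [0, pi] is 2
```

The error bar shrinks as 1/√n regardless of dimension, which is why the method scales to the high-dimensional integrals these analytical applications require.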
M. Mohammadi; S. H. Hosseinian; G. B. Gharehpetian
This study presents an optimized design of a microgrid (MG) in distribution systems with multiple distributed generation (DG) units under different market policies, such as pool/hybrid electricity markets. The proposed microgrid includes various energy sources, such as photovoltaic arrays and wind turbines, with energy storage devices such as battery banks. In this study, the microgrid is considered an independent power producer (IPP) company in power
NASA Astrophysics Data System (ADS)
Ghassib, Humam B.; Sakhel, Asaad R.; Obeidat, Omar; Al-Oqali, Amer; Sakhel, Roger R.
2012-01-01
We demonstrate the effectiveness of a statistical potential (SP) in the description of fermions in a worm-algorithm path-integral Monte Carlo simulation of a few 3He atoms floating on a 4He layer adsorbed on graphite. The SP in this work yields successful results, as manifested by the clusterization of 3He, and by the observation that the 3He atoms float on the surface of 4He. We display the positions of the particles in 3D coordinate space, which reveal clusterization of the 3He component. The correlation functions are also presented, which give further evidence for the clusterization.
Asymptotics of Fixed Point Distributions for Inexact Monte Carlo Algorithms
M. A. Clark; A. D. Kennedy
2007-10-18
We introduce a simple general method for finding the equilibrium distribution for a class of widely used inexact Markov Chain Monte Carlo algorithms. The explicit error due to the non-commutativity of the updating operators when numerically integrating Hamilton's equations can be derived using the Baker-Campbell-Hausdorff formula. This error is manifest in the conservation of a ``shadow'' Hamiltonian that lies close to the desired Hamiltonian. The fixed point distribution of inexact Hybrid algorithms may then be derived taking into account that the fixed point of the momentum heatbath and that of the molecular dynamics do not coincide exactly. We perform this derivation for various inexact algorithms used for lattice QCD calculations.
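The shadow-Hamiltonian picture is easiest to see for the simple harmonic oscillator, where (a standard result, not specific to this paper) the quantity exactly conserved by kick-drift-kick leapfrog with step h on H = (p² + x²)/2 is H̃ = ½[p² + (1 − h²/4)x²]; the form becomes indefinite at h ≥ 2, which is the leapfrog instability threshold discussed in the first abstract. A minimal numerical check:

```python
def leapfrog_step(x, p, h):
    """One kick-drift-kick leapfrog step for H = (p^2 + x^2)/2 (omega = 1)."""
    p -= 0.5 * h * x   # half kick
    x += h * p         # full drift
    p -= 0.5 * h * x   # half kick
    return x, p

def energy(x, p):
    return 0.5 * (p * p + x * x)

def shadow_energy(x, p, h):
    # Exactly conserved by kick-drift-kick leapfrog on the SHO;
    # indefinite (unstable) once h >= 2.
    return 0.5 * (p * p + (1.0 - 0.25 * h * h) * x * x)

h = 0.3
x, p = 1.0, 0.0
H_vals, shadow_vals = [], []
for _ in range(10_000):
    x, p = leapfrog_step(x, p, h)
    H_vals.append(energy(x, p))
    shadow_vals.append(shadow_energy(x, p, h))

H_drift = max(H_vals) - min(H_vals)                 # O(h^2) oscillation
shadow_drift = max(shadow_vals) - min(shadow_vals)  # round-off only
```

The true Hamiltonian oscillates at O(h²) while the shadow Hamiltonian is flat to machine precision, which is why inexact HMC equilibrates to a distribution governed by H̃ rather than H.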
NASA Astrophysics Data System (ADS)
Zagórski, R.; Voitsekhovitch, I.; Ivanova-Stanik, I.; Köchl, F.; Belo, P.; Fable, E.; Garcia, J.; Garzotti, L.; Hobirk, J.; Hogeweij, G. M. D.; Joffrin, E.; Litaudon, X.; Polevoi, A. R.; Telesca, G.; contributors, JET
2015-05-01
The compatibility of two operational constraints—operation above the L–H power threshold and at low power to divertor—is examined for ITER long pulse H-mode and hybrid scenarios in integrated core–scrape off layer (SOL)–divertor modelling including impurities (intrinsic Be, He, W and seeded Ne). The core thermal, particle and momentum transport is simulated with the GLF23 transport model tested in the self-consistent simulations of temperatures, density and toroidal rotation velocity in JET hybrid discharges and extrapolated to ITER. The beneficial effect of toroidal rotation velocity on fusion gain is shown. The sensitivity studies with respect to operational (separatrix and pedestal density, Ne gas puff) and unknown physics (W convective velocity and perpendicular diffusion in SOL as well as W prompt re-deposition) parameters are performed to determine their influence on the operational window and fusion gain.
Fischer, J
2005-05-06
This report summarizes the results of a research and development (R&D) program to design and optimize an active desiccant-vapor compression hybrid rooftop system. The primary objective was to combine the strengths of both technologies to produce a compact, high-performing, energy-efficient system that could accommodate any percentage of outdoor air and deliver essentially any required combination of temperature and humidity, or sensible heat ratio (SHR). In doing so, such a product would address the significant challenges imposed on the performance capabilities of conventional packaged rooftop equipment by standards 62 and 90.1 of the American Society of Heating, Refrigerating and Air-Conditioning Engineers. The body of work completed as part of this program built upon previous R&D efforts supported by the U.S. Department of Energy and summarized by the Phase 3b report ''Active Desiccant Dehumidification Module Integration with Rooftop Packaged HVAC Units'' (Fischer and Sand 2002), in addition to Fischer, Hallstrom, and Sand 2000; Fischer 2000; and Fischer and Sand 2004. All initial design objectives established for this development program were successfully achieved. The performance flexibility desired was accomplished by a down-sized active desiccant wheel that processes only a portion of the supply airflow, which is pre-conditioned by a novel vapor compression cycle. Variable-speed compressors are used to deliver the capacity control required by a system handling a high percentage of outdoor air. An integrated direct digital control system allows for control capabilities not generally offered by conventional packaged rooftop systems. A 3000-cfm prototype system was constructed and tested in the SEMCO engineering test laboratory in Columbia, MO, and was found to operate in an energy-efficient fashion relative to more conventional systems. 
Most important, the system offered the capability to independently control the supply air temperature and humidity content to provide individual sensible and latent loads required by an occupied space without over-cooling and reheating air. The product was developed using a housing construction similar to that of a conventional packaged rooftop unit. The resulting integrated active desiccant rooftop (IADR) is similar in size to a currently available conventional rooftop unit sized to provide an equivalent total cooling capacity. Unlike a conventional rooftop unit, the IADR can be operated as a dedicated outdoor air system processing 100% outdoor air, as well as a total conditioning system capable of handling any ratio of return air to outdoor air. As part of this R&D program, a detailed investigation compared the first cost and operating cost of the IADR with costs for a conventional packaged approach for an office building located in Jefferson City, MO. The results of this comparison suggest that the IADR approach, once commercialized, could be cost-competitive with existing technology--exhibiting a one-year to two-year payback period--while simultaneously offering improved humidity control, indoor air quality, and energy efficiency.
Oliveira, Rita; Godinho, Raquel; Randi, Ettore; Alves, Paulo C
2008-09-12
Cross-breeding between wild and free-ranging domestic species is one of the main conservation problems for some threatened species. The situation of wildcats (Felis silvestris silvestris) in Europe is a good example of this critical phenomenon. Extensive hybridization was described in Hungary and Scotland, contrasting with occasional interbreeding in Italy and Germany. First analyses in Portugal revealed a clear genetic differentiation between wild and domestic cats; however, four hybrids were detected. Here, we extended the approach to the Iberian Peninsula using multivariate and Bayesian analyses of multilocus genotypes for 44 Portuguese wildcats, 31 Spanish wildcats and 109 domestic cats. Globally, wild and domestic cats were significantly differentiated (FST=0.20, p<0.001) and clustered into two discrete groups. Diverse clustering methods and assignment criteria identified an additional hybrid in Portugal, bringing the total to five admixed individuals. The power of admixture analyses was assessed by simulating hybrid genotypes, which revealed that the microsatellites used were able to detect 100, 91 and 85% of first-generation hybrids, second-generation genotypes and backcrosses, respectively. These findings suggest that the true proportion of admixture can be higher than the value estimated in this study and that the improvement of genetic tools for hybrid detection is crucial for wildcat conservation. PMID:18522917
Reclaiming hybrid integrated circuits
NASA Technical Reports Server (NTRS)
Ebel, G.; Grossbard, H.
1978-01-01
The reclamation method consists of opening a very small hole in the package and shaking out trapped particles. The procedure is performed in a dry box through which an inert gas flows to ensure that no room air enters the package. An acoustic transducer monitors the sound of the vibrating particles, and an amplifier produces audio and oscilloscope output. The hole is sealed with a heated solder form.
An Improved Monte Carlo Algorithm for Elastic Electron Backscattering
Dimov, Ivan
is the initial angle). Such an equation may be transformed into an integral equation of the form Φ = KΦ + Φ₀, as in surface analysis. We are interested in the angular distribution of the backscattered electrons. The flow of electrons satisfies an integral equation, which might be solved by Monte Carlo methods.
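An integral equation of this fixed-point type can, after discretization, be written as a linear system x = Ax + b and estimated by random walks over its Neumann series (the von Neumann–Ulam scheme). A minimal sketch under stated assumptions: the 2×2 system is purely illustrative, and the simplest variant with uniform transitions and a fixed absorption probability is used.

```python
import random

def neumann_mc(A, b, i, n_walks, rng):
    """Monte Carlo estimate of component i of the solution of x = A x + b,
    valid when the Neumann series sum_k A^k b converges. Walks take uniform
    transitions with an importance-weight correction and absorb with fixed
    probability p_absorb at each step."""
    m = len(b)
    p_absorb = 0.5
    total = 0.0
    for _ in range(n_walks):
        state, weight, score = i, 1.0, 0.0
        while True:
            score += weight * b[state]          # tally the k-th series term
            if weight == 0.0 or rng.random() < p_absorb:
                break
            j = rng.randrange(m)                # uniform next state
            # correct for continuation and transition probabilities
            weight *= A[state][j] / ((1.0 - p_absorb) * (1.0 / m))
            state = j
        total += score
    return total / n_walks

A = [[0.2, 0.1], [0.0, 0.3]]  # spectral radius < 1: series converges
b = [1.0, 1.0]
rng = random.Random(7)
est = neumann_mc(A, b, 0, 200_000, rng)
# exact solution of x = A x + b here gives x[0] = 1/0.7
```

Each walk is an unbiased estimator of one component, so accuracy improves as 1/√(n_walks) without ever forming (I − A)⁻¹ explicitly.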
Alwee, Razana; Hj Shamsuddin, Siti Mariyam; Sallehuddin, Roselina
2013-01-01
Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, in real crime data, it is common that the data consist of both linear and nonlinear components. A single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) to be applied in crime rates forecasting. SVR is very robust with small training data and high-dimensional problems. Meanwhile, ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, while ARIMA is not robust enough to be applied to small data sets. Therefore, to overcome this problem, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model is able to produce more accurate forecasting results as compared to the individual models. PMID:23766729
The MC21 Monte Carlo Transport Code
Sutton TM, Donovan TJ, Trumbull TH, Dobreff PS, Caro E, Griesheimer DP, Tyburski LJ, Carpenter DC, Joo H
2007-01-09
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities.
NASA Astrophysics Data System (ADS)
Yu, Hong-Yan; Yuan, Li-Jun; Tao, Li; Wang, Bao-Jun; Chen, Wei-Xi; Liang, Song; Li, Yan-Ping; Ran, Guang-Zhao; Pan, Jiao-Qing; Wang, Wei
2014-03-01
An evanescently coupled, hybrid InGaAsP-Si laser operating at 1.55 μm, realized by selective area metal bonding (SAMB), is presented. The III-V laser, fabricated on a p-InP substrate with a semi-insulating InP:Fe buried heterostructure (BH), serves to provide optical gain. On the SOI wafer, a 3-μm-wide and 500-nm-high Si waveguide is formed, and the bonding metal (AuSn alloy) is selectively deposited in the regions 6 μm away from the Si waveguide on each side. The InGaAsP gain structure is flip-chip bonded onto the patterned SOI wafer using the SAMB method, which laterally separates the optical coupling area from the metal bonding area to avoid strong light absorption by the bonding metal. The hybrid laser runs with a maximum single-sided output power of 9 mW at room temperature. The slope efficiency of the hybrid laser is about 0.04 W/A, 4 times that of the laser before bonding, which indicates that the light confinement is improved after the bonding. The hybrid laser has achieved continuous-wave (CW) lasing at 10 °C. A near-field image of the hybrid laser is studied. As the injection current increases, the light spot markedly shifts down to the Si waveguide and covers the Si waveguide region, which demonstrates that the light generated in the III-V active region is coupled into the Si waveguide. This method allows for different III-V devices to be bonded onto any desired places on a SOI substrate. The simplicity and flexibility of the fabrication process and high yield make the hybrid laser a promising light source.
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide range of fault-tolerant system architectures; it is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.
Aquino Perez, Gildardo
2009-05-15
The primary analysis of the karyotype and ideogram construction was based on banding and Fluorescence In Situ Hybridization (FISH) for rDNA detection. FISH confirmed two locations for the NOR on telomeric regions of chromosomes 6 and 12 plus an additional less...
Maginn, E.J.; Bell, A.T.; Theodorou, D.N. (Lawrence Berkeley Lab., CA (United States) Univ. of California, Berkeley, CA (United States))
1995-02-16
The low-occupancy adsorption thermodynamics of n-alkanes ranging in length from C4 to C25 in the zeolite silicalite is predicted from molecular simulations. A bias Monte Carlo (MC) technique is described which permits these calculations to be carried out with modest computational expense. In addition, a general, systematic coarse-graining methodology is developed which enables the location and shape of chains of arbitrary length to be accurately described using a small number of degrees of freedom. By coupling this methodology with the bias Monte Carlo technique, the free energy of sorbed chains is calculated as a function of the coarse-grained configuration of chains. The results indicate that, at high temperature, n-alkanes probe all the accessible regions of the zeolite pore network, favoring high-entropy conformations that access more than one type of channel environment. As temperature decreases to room temperature, short chains continue to populate all regions of the zeolite, while chains longer than n-octane align along the straight channels in highly localized low-energy configurations. Macroscopic thermodynamic results, such as Henry's law constants and isosteric heats of adsorption, are calculated and compared to experimentally obtained values. 51 refs., 19 figs., 8 tabs.
Bayesian adaptive Markov chain Monte Carlo estimation of genetic parameters
Mathew, B; Bauer, A M; Koistinen, P; Reetz, T C; Léon, J; Sillanpää, M J
2012-01-01
Accurate and fast estimation of genetic parameters that underlie quantitative traits using mixed linear models with additive and dominance effects is of great importance in both natural and breeding populations. Here, we propose a new fast adaptive Markov chain Monte Carlo (MCMC) sampling algorithm for the estimation of genetic parameters in the linear mixed model with several random effects. In the learning phase of our algorithm, we use the hybrid Gibbs sampler to learn the covariance structure of the variance components. In the second phase of the algorithm, we use this covariance structure to formulate an effective proposal distribution for a Metropolis-Hastings algorithm, which uses a likelihood function in which the random effects have been integrated out. Compared with the hybrid Gibbs sampler, the new algorithm had better mixing properties and was approximately twice as fast to run. Our new algorithm was able to detect different modes in the posterior distribution. In addition, the posterior mode estimates from the adaptive MCMC method were close to the REML (residual maximum likelihood) estimates. Moreover, our exponential prior for inverse variance components was vague and enabled the estimated mode of the posterior variance to be practically zero, which was in agreement with the support from the likelihood (in the case of no dominance). The method performance is illustrated using simulated data sets with replicates and field data in barley. PMID:22805656
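The two-phase strategy described above — learn the posterior's covariance structure in a first phase, then use it to shape a Metropolis-Hastings proposal in a second — can be sketched generically. This is not the authors' genetics model: the correlated 2D Gaussian target, step scales, and chain lengths below are all illustrative assumptions, and the learned covariance is reduced to its diagonal for brevity.

```python
import math
import random

def mh(logpost, x0, n, step_fn, rng):
    """Generic random-walk Metropolis-Hastings with a symmetric proposal."""
    x, lp = list(x0), logpost(x0)
    chain = []
    for _ in range(n):
        prop = step_fn(x, rng)
        lp_prop = logpost(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain.append(list(x))
    return chain

# Hypothetical correlated 2D Gaussian target, standing in for a marginal
# posterior with the random effects integrated out.
def logpost(x):
    a, b = x
    return -0.5 * (a * a + (b - 0.9 * a) ** 2 / 0.19)

rng = random.Random(3)

# Phase 1 (learning): isotropic proposal; estimate per-coordinate spread
# (RMS about the known zero mean, for simplicity).
phase1 = mh(logpost, [0.0, 0.0], 20_000,
            lambda x, r: [xi + 0.5 * r.gauss(0, 1) for xi in x], rng)
sd = [(sum(v[i] ** 2 for v in phase1) / len(phase1)) ** 0.5 for i in (0, 1)]

# Phase 2: proposal scaled by the learned spread.
phase2 = mh(logpost, [0.0, 0.0], 20_000,
            lambda x, r: [x[i] + 0.7 * sd[i] * r.gauss(0, 1) for i in (0, 1)], rng)
mean = [sum(v[i] for v in phase2) / len(phase2) for i in (0, 1)]
```

A full implementation would use the learned covariance matrix (not just its diagonal), which is what gives the adaptive scheme its advantage on strongly correlated variance components.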
NASA Astrophysics Data System (ADS)
Watchareeruetai, Ukrit; Ohnishi, Noboru
We propose a color-based weed detection method specifically designed for detecting lawn weeds in winter. The proposed method exploits fuzzy logic to make inferences from color information. A genetic algorithm is adopted to search for the optimal combination of color information, fuzzy membership functions, and fuzzy rules used in the method. Experimental results show that the proposed color-based method outperforms the conventional texture-based methods when tested on a winter dataset. In addition, we propose a hybrid system that incorporates both texture-based and color-based weed detection methods. It can automatically select the better method to perform weed detection, depending on the input image. The results show that the use of the hybrid system can significantly improve weed control performance over the overall datasets.
Kim, Jaiseung, E-mail: jkim@nbi.dk [Niels Bohr Institute and Discovery Center, Blegdamsvej 17, DK-2100 Copenhagen (Denmark)
2011-04-01
We have made a Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (f_NL) using the WMAP bispectrum and power spectrum. In our analysis, we have simultaneously constrained f_NL and cosmological parameters so that the uncertainties of cosmological parameters can properly propagate into the f_NL estimation. Investigating the parameter likelihoods deduced from MCMC samples, we find slight deviation from Gaussian shape, which makes a Fisher matrix estimation less accurate. Therefore, we have estimated the confidence interval of f_NL by exploring the parameter likelihood without using the Fisher matrix. We find that the best-fit values of our analysis make a good agreement with other results, but the confidence interval is slightly different.
Wu, Weitai; Mitra, Nivedita; Yan, Elsa C Y; Zhou, Shuiqin
2010-08-24
Optical detection of glucose, high drug-loading capacity, and self-regulated drug delivery are simultaneously possible with a rationally designed multifunctional hybrid nanogel particle prepared by a colloid chemistry method. Such hybrid nanogels are made of Ag nanoparticle (NP) cores covered by a copolymer gel shell of poly(4-vinylphenylboronic acid-co-2-(dimethylamino)ethyl acrylate) [p(VPBA-DMAEA)]. The introduction of the glucose-sensitive p(VPBA-DMAEA) gel shell onto Ag NPs makes the polymer-bound Ag NPs responsive to glucose. While the small (10 +/- 3 nm) Ag cores provide fluorescence as an optical code, the responsive polymer gel shell can adapt to a surrounding medium of different glucose concentrations over a clinically relevant range (0-30 mM), convert the disruptions in homeostasis of glucose level into optical signals, and regulate release of preloaded insulin. This shows a new proof-of-concept for diabetes treatment that exploits the properties from each building block of a multifunctional nano-object. The highly versatile multifunctional hybrid nanogels could potentially be used for simultaneous optical diagnosis, self-regulated therapy, and monitoring of the response to treatment. PMID:20731458
Dong, Haiyan; Wu, Zai-Sheng; Xu, Jianguo; Ma, Ji; Zhang, Huijuan; Wang, Jie; Shen, Weiyu; Xie, Jingjing; Jia, Lee
2015-10-15
Molecular beacon (MB) is widely explored as a signaling probe in powerful biosensing systems, for example, enzyme-assisted strand displacement amplification (SDA)-based systems. The existing polymerization-based amplification system is often composed of a recognition element, primer, template and fluorescence reporter. To develop a new MB sensing system and simplify the signal amplification design, we herein attempted to propose a multifunctional integrated MB (MI-MB) for the polymerization amplification detection of target DNA via introducing a G-rich fragment into the loop of the MB, without using any exogenous auxiliary oligonucleotide probe. Utilizing only one MI-MB probe, the p53 target gene could trigger the cycles of hybridization/polymerization/displacement, resulting in amplification of the target hybridization event. Thus, the p53 gene can be detected down to 5 × 10^-10 M, with a linear response range from 5 × 10^-10 M to 4 × 10^-7 M. Using the MI-MB, we could readily discriminate the point mutation-containing p53 from the wild-type one. As a proof-of-concept study, owing to its simplicity and multifunction, including recognition, replication, amplification and signaling, the MI-MB exhibits great potential for the development of different biosensors for various biomedical applications, especially for early cancer diagnosis. PMID:25982726
Jarrell, John D; Dolly, Brandon; Morgan, Jeffrey R
2010-03-01
Metal-organic chemistry allows for molecular mixing and creation of a range of submicron phase-separated structures from normally brittle metal oxides and flexible polymers with improved bioactivity and delivery properties. In this study, we used a high throughput platform to investigate the influence of organic metal oxide doping of polydimethylsiloxane (PDMS) coatings on cellular bioactivity and controlled release of vanadium compared with titanium oxide coatings without additional PDMS. Metal-organic-derived titanium and or vanadium was doped into PDMS and used to form a coating on the bottom of cell culture microplates in the absence of added water, acids, or bases. These hybrid coatings were rapidly screened to establish how titanium and vanadium concentration influences cell proliferation, adhesion, and morphology. We demonstrate that titanium doping of PDMS can be used to improve cell proliferation and adhesion, and that vanadium doping caused a biphasic dose response in proliferation. A 28-day vanadium and titanium elution study indicated that titanium was not released, but the presence of PDMS in coatings increased delivery rates of vanadium compared with titania coatings without polymer. Hybrid coatings of titanium-doped polymers have potential for improving wound healing dynamics, soft-tissue integration of medical implants, and use as controlled delivery vehicles. PMID:19301265
Alexandr Malijevsky; Santos B. Yuste; Andres Santos
2007-07-04
The two-body interaction in dilute solutions of polymer chains in good solvents can be modeled by means of effective bounded potentials, the simplest of which being that of penetrable spheres (PSs). In this paper we construct two simple analytical theories for the structural properties of PS fluids: a low-temperature (LT) approximation, that can be seen as an extension to PSs of the well-known solution of the Percus-Yevick (PY) equation for hard spheres, and a high-temperature (HT) approximation based on the exact asymptotic behavior in the limit of infinite temperature. Monte Carlo simulations for a wide range of temperatures and densities are performed to assess the validity of both theories. It is found that, despite their simplicity, the HT and LT approximations exhibit a fair agreement with the simulation data within their respective domains of applicability, so that they complement each other. A comparison with numerical solutions of the PY and the hypernetted-chain approximations is also carried out, the latter showing a very good performance, except inside the core at low temperatures.
NASA Astrophysics Data System (ADS)
Liu, Siqi; Weng, Bo; Tang, Zi-Rong; Xu, Yi-Jun
2014-12-01
A ternary hybrid structure of one-dimensional (1D) silver nanowire-doped reduced graphene oxide (RGO) integrated with a CdS nanowire (NW) network has been fabricated via a simple electrostatic self-assembly method followed by a hydrothermal reduction process. The electrical conductivity of RGO can be significantly enhanced by opening up new conduction channels by bridging the high resistance grain-boundaries (HGBs) with 1D Ag nanowires, which results in a prolonged lifetime of photo-generated charge carriers excited from the CdS NW network, thus making Ag NW-RGO an efficient co-catalyst with the CdS NW network toward artificial photosynthesis. Electronic supplementary information (ESI) available: Experimental details, photographs of the experimental setups for photocatalytic activity testing, SEM images of Ag NWs and CdS NWs, Zeta potential, Raman spectra, DRS spectra, PL spectra and PL decay time evolution, and photocatalytic performances of samples for reduction of 4-NA and recycling test. See DOI: 10.1039/c4nr04229h
NASA Astrophysics Data System (ADS)
Erbis, Vadim; Hegger, Christian; Güth, Dirk; Maas, Jürgen
2015-04-01
Drag losses in the powertrain are a serious deficiency for any energy-efficient application, especially for hybrid electrical vehicles. A promising approach for fulfilling requirements like efficiency, wear, safety and dynamics is an innovative clutch design for power transmission based on magnetorheological fluids (MRF). MRF are smart fluids with the particular characteristic of changing their apparent viscosity significantly under the influence of a magnetic field, offering fast switching times and smooth torque control in the powertrain. In this paper, a novel clutch concept is investigated that facilitates the controlled movement of the MRF from an active torque-transmitting region into an inactive region of the shear gap. This concept enables a complete disengagement of the fluid engaging surfaces so that viscous drag torque can be eliminated. Therefore, a simulation-based design for such MRF-based clutches is used to design the required magnetic excitation systems for enabling a well-defined safety behavior by the fluid control. Based on this approach, an MRF-based clutch is developed in detail which provides a loss-reduced alternative to conventional disengagement devices in the powertrain. The presented MRF-based clutch enables an investigation of different systems in one design by changing the magnetic excitation. In particular, different possibilities for the fail-safe behavior of the MRF-based clutch are considered to ensure a well-defined condition in electrical or hybrid powertrains in case of a system failure.
NASA Astrophysics Data System (ADS)
Li, Hongbo; Wu, Zai-Sheng; Shen, Zhifa; Shen, Guoli; Yu, Ruqin
2014-01-01
An interesting discovery is reported in that G-rich hairpin-based recognition probes can self-assemble into a nano-architecture based on the integration of an intermolecular G-quadruplex structure with the sticky-end pairing effect in the presence of target DNAs. Moreover, GNPs modified with partly complementary DNAs can intensively aggregate by hybridization-based intercalation between intermolecular G-quadruplexes, indicating an inspiring assembly mechanism and a powerful colorimetric DNA detection. The proposed intermolecular G-quadruplex-integrated sticky-end pairing assembly (GISA)-based colorimetric system allows a specific and quantitative assay of p53 DNA with a linear range of more than two orders of magnitude and a detection limit of 0.2 nM, suggesting a considerably improved analytical performance. Moreover, the discrimination of single-base mismatched target DNAs can be easily conducted via visual observation. The GISA-based aggregation mechanism of GNPs differs from traditional approaches and offers a critical insight into the dependence of GNP aggregation on the structural properties of oligonucleotides, opening a good way to design colorimetric sensing probes and DNA nanostructures. Electronic supplementary information (ESI) available: Experimental section, supplementary Figures and perspectives. See DOI: 10.1039/c3nr03547f
K. Takiguchi; T. Yamada; M. Abe; S. Kaneko; H. Suzuki
2008-01-01
A frequency-domain OCDMA modulator is developed by integrating PLC encoders with LN modulators. This modulator can hold data signals in phase to carry out self-homodyne detection to reduce beat noise.
S Lindenmeier; W Heinrich; P Russer
1996-01-01
The incorporation of a priori knowledge of the electrostatic and magneto-static fields into the Finite-Integral algorithm leads to higher efficiency under the condition that the numerical effort for the static field calculations is smaller than that for the conventional full-wave Finite-Integral method. In the electro-static case, the scalar potential approach allows for a fast solution. In the magneto-static case, however,
Hybrid solar-fossil fuel power generation
Sheu, Elysia J. (Elysia Ja-Zeng)
2012-01-01
In this thesis, a literature review of hybrid solar-fossil fuel power generation is first given with an emphasis on system integration and evaluation. Hybrid systems are defined as those which use solar energy and fuel ...
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2005-09-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
NASA Astrophysics Data System (ADS)
Xia, Weiwei; Shen, Lianfeng
We propose two vertical handoff schemes for cellular network and wireless local area network (WLAN) integration: integrated service-based handoff (ISH) and integrated service-based handoff with queue capabilities (ISHQ). Compared with existing handoff schemes in integrated cellular/WLAN networks, the proposed schemes consider a more comprehensive set of system characteristics such as different features of voice and data services, dynamic information about the admitted calls, user mobility and vertical handoffs in two directions. The code division multiple access (CDMA) cellular network and IEEE 802.11e WLAN are taken into account in the proposed schemes. We model the integrated networks by using multi-dimensional Markov chains and the major performance measures are derived for voice and data services. The important system parameters such as thresholds to prioritize handoff voice calls and queue sizes are optimized. Numerical results demonstrate that the proposed ISHQ scheme can maximize the utilization of overall bandwidth resources with the best quality of service (QoS) provisioning for voice and data services.
NSDL National Science Digital Library
AMPS GK-12 Program,
At its core, the LEGO® MINDSTORMS® NXT product provides a programmable microprocessor. Students use the NXT processor to simulate an experiment involving thousands of uniformly random points placed within a unit square. Using the underlying geometry of the experimental model, as well as the geometric definition of the constant π (pi), students form an empirical ratio of areas to estimate a numerical value of π. Although typically used for numerical integration of irregular shapes, in this activity, students use a Monte Carlo simulation to estimate a common but rather complex analytical form—the numerical value of the most famous irrational number, π.
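The area-ratio estimate described in this activity can be sketched in a few lines of Python (an illustrative stand-in for the NXT program; the function name and sample count are choices made here, not part of the activity):

```python
import random

def estimate_pi(num_points: int, seed: int = 0) -> float:
    """Estimate pi from the fraction of uniform random points in the
    unit square that fall inside the inscribed quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_points):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # ratio of areas: (pi/4) / 1, so pi is about 4 * inside / total
    return 4.0 * inside / num_points

print(estimate_pi(100_000))  # close to 3.14
```

The error of this estimator shrinks as 1/√N, so thousands of points already give two or three correct digits.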
Monte Carlo approach to turbulence
P. Düben; D. Homeier; K. Jansen; D. Mesterhazy; G. Münster
2009-11-03
The behavior of the one-dimensional random-force-driven Burgers equation is investigated in the path integral formalism on a discrete space-time lattice. We show that by means of Monte Carlo methods one may evaluate observables, such as structure functions, as ensemble averages over different field realizations. The regularization of shock solutions to the zero-viscosity limit (Hopf-equation) eventually leads to constraints on lattice parameters required for the stability of the simulations. Insight into the formation of localized structures (shocks) and their dynamics is obtained.
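The central procedure here, evaluating an observable such as a structure function as an ensemble average over independent field realizations, can be illustrated generically. This is a hedged sketch only: the "field" below is white noise on a periodic 1D lattice, not a solution of the random-force-driven Burgers equation, and the function name is chosen for illustration.

```python
import random

def structure_function(num_realizations: int, n: int, r: int, seed: int = 0) -> float:
    """Estimate the second-order structure function S(r) = <(u(x+r) - u(x))^2>
    as an ensemble average over independent random field realizations on a
    periodic lattice of n sites."""
    rng = random.Random(seed)
    total = 0.0
    count = 0
    for _ in range(num_realizations):
        # one random field realization (white noise stands in for a Burgers solution)
        u = [rng.gauss(0.0, 1.0) for _ in range(n)]
        for x in range(n):
            du = u[(x + r) % n] - u[x]
            total += du * du
            count += 1
    return total / count

print(structure_function(500, 64, 3))  # near 2.0 for unit-variance white noise
```

In a real lattice simulation the inner list would instead be a field configuration sampled by a Monte Carlo update of the path-integral action.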
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.
Alavikia, Babak; Ramahi, Omar M
2011-06-01
This work presents a hybrid finite-element-boundary integral algorithm to solve the problem of scattering from a finite and infinite array of two-dimensional cavities engraved in a perfectly electric conducting screen covered with a stratified dielectric layer. The solution region is divided into interior regions containing the cavities and the region exterior to the cavities. The finite-element formulation is applied only inside the interior regions to derive a linear system of equations associated with unknown field values. Using a two-boundary formulation, the surface integral equation employing the grounded dielectric slab Green's function in the spatial domain is applied at the opening of the cavities as a boundary constraint to truncate the solution region. Placing the truncation boundary at the opening of the cavities and inside the dielectric layer results in a highly efficient solution in terms of computational resources, which makes the algorithm well suited for the optimization problems involving scattering from grating surfaces. The near fields are generated for an array of cavities with different dimensions and inhomogeneous fillings covered with dielectric layers. PMID:21643387
Alavikia, Babak; Ramahi, Omar M
2011-10-01
This work presents a hybrid finite element-boundary integral algorithm to solve the problem of scattering from a finite array of two-dimensional cavities engraved in a perfectly electric conducting screen covered with a multilayer stratified dielectric coating. The solution region is divided into interior regions containing the cavities and the region exterior to the cavities. The finite element formulation is applied only inside the interior regions to derive a linear system of equations associated with unknown field values. Using a two-boundary formulation, the surface integral equation employing a closed-form multilayer Green's function in the spatial domain is applied at the opening of the cavities as a boundary constraint to truncate the solution region. The closed-form Green's function in the spatial domain for multilayer planar coating is expressed in terms of complex images using the generalized pencil-of-function method in conjunction with a two-level sampling approach. Placing the truncation boundary at the opening of the cavities and inside the dielectric coating results in a highly efficient solution in terms of computational resources, which makes the algorithm well suited for optimization problems involving scattering from grating surfaces. The near fields are generated for an array of cavities with different dimensions and inhomogeneous fillings covered with dielectric layers. PMID:21979527
NASA Technical Reports Server (NTRS)
Sproles, Darrell W.; Bavuso, Salvatore J.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.
Sikora, M; Dohm, O; Alber, M
2007-08-01
A dedicated, efficient Monte Carlo (MC) accelerator head model for intensity modulated stereotactic radiosurgery treatment planning is needed to afford a highly accurate simulation of tiny IMRT fields. A virtual source model (VSM) of a mini multi-leaf collimator (MLC) (the Elekta Beam Modulator (EBM)) is presented, allowing efficient generation of particles even for small fields. The VSM of the EBM is based on a previously published virtual photon energy fluence model (VEF) (Fippel et al 2003 Med. Phys. 30 301) commissioned with large field measurements in air and in water. The original commissioning procedure of the VEF, based on large field measurements only, leads to inaccuracies for small fields. In order to improve the VSM, it was necessary to change the VEF model by developing (1) a method to determine the primary photon source diameter, relevant for output factor calculations, (2) a model of the influence of the flattening filter on the secondary photon spectrum and (3) a more realistic primary photon spectrum. The VSM model is used to generate the source phase space data above the mini-MLC. Later the particles are transmitted through the mini-MLC by a passive filter function which significantly speeds up the generation of the phase space data after the mini-MLC, used for calculation of the dose distribution in the patient. The improved VSM model was commissioned for 6 and 15 MV beams. The results of MC simulation are in very good agreement with measurements. Local differences of less than 2% between the MC simulation and diamond detector measurements of the output factors in water were achieved. The X, Y and Z profiles measured in water with an ion chamber (V = 0.125 cm³) and a diamond detector were used to validate the models. An overall agreement of 2%/2 mm for high dose regions and 3%/2 mm in low dose regions between measurement and MC simulation for field sizes from 0.8 × 0.8 cm² to 16 × 21 cm² was achieved.
An IMRT plan film verification was performed for two cases: 6 MV head & neck and 15 MV prostate. The simulation is in agreement with film measurements within 2%/2 mm in the high dose regions (≥0.1 Gy = 5% Dmax) and 5%/2 mm in low dose regions (<0.1 Gy). PMID:17634643
2010-01-01
Background Azalea (Rhododendron simsii hybrids) is the most important flowering pot plant produced in Belgium, being exported world-wide. In the breeding program, flower color is the main feature for selection; only in later stages are cultivation-related plant quality traits evaluated. As a result, plants with attractive flowering are kept too long in the breeding cycle. The inheritance of flower color has been well studied; information on the heritability of cultivation-related quality traits is lacking. For this purpose, QTL mapping in diverse genetic backgrounds was required, and therefore 4 mapping populations were made and analyzed. Results An integrated framework map based on four individual linkage maps in Rhododendron simsii hybrids was constructed. For genotyping, mainly dominant scored AFLP (on average 364 per population) and MYB-based markers (15) were combined with co-dominant SSR (23) and EST markers (12). Linkage groups were estimated in JoinMap. A consensus grouping for the 4 mapping populations was made and applied in each individual mapping population. Finally, 16 stable linkage groups were set for the 4 populations; the azalea chromosome number being 13. A combination of regression mapping (JoinMap) and multipoint-likelihood maximization (Carthagène) enabled the construction of 4 maps and their alignment. A large portion of loci (43%) was common to at least two populations and could therefore serve as bridging markers. The different steps taken for map optimization and integration into a reference framework map for QTL mapping are discussed. Conclusions To our knowledge, this is the first map of azalea. AFLP and SSR markers are used as a reference backbone and functional markers (EST and MYB) were added as candidate genes for QTL analysis. The alignment of the 4 maps on the basis of framework markers will in turn facilitate the alignment of QTL regions detected in each of the populations.
The approach we took is thoroughly different from the recently published integrated maps and well suited for mapping in a non-model crop. PMID:20070894
NASA Astrophysics Data System (ADS)
Zhu, B. J.; Shi, Y. L.; Sukop, M. C.
2009-04-01
Strong earthquakes can have catastrophic effects on society, and therefore the precise prediction of large earthquakes is crucial for seismic hazard reduction. The genesis and occurrence of earthquakes and their subsequent effects involve complex physical processes. Studying these processes helps us understand the mechanics of earthquakes and the future physical state of the earth. Earthquake studies focus on the nucleation of rupture, thermo- and hydro-mechanical weakening of fault zones during seismic slip, fracture propagation through branched and offset fault systems, and relations between stress, seismicity, and deformation in or near continental and subduction fault systems. Fluid-driven fracture is a fundamental geophysical phenomenon operating in planetary interiors on many scales; it plays a major role in chemical differentiation of the upper mantle and in the dynamic delayed triggering of earthquakes. Because our ability to make direct observations of the dynamics and styles of fluid-driven fracture is quite limited, our understanding of this phenomenon relies on theoretical models that use fundamental physical principles and available field data to constrain the behavior of fluid-driven cracks at depth. However, relatively little work has been done on 3D extended fluid-driven crack propagation, due mainly to present limitations of practical methods (such as CPU time and storage requirements) and of theoretical aspects (strongly singular domain integrals); a general and accurate theoretical method is therefore required. This work reports a new and accurate theoretical and numerical description of extended 3D fluid (electromagnetic and flow) driven crack propagation in saturated porous media for P- and S-waves under a fully coupled electromagnetothermoelastic field.
First, based on the viscous fluid flow reciprocal work theorem, the hybrid hypersingular integral equation (HIE) method proposed by the author was combined with the coupled extended wave time-domain HIE, the lattice Boltzmann method and the interface phase field method. The general extended 3D fluid flow velocity wave solutions are obtained by the extended wave time-domain Green's function method. The 3D extended dynamic fluid-driven crack model under fully coupled electromagnetothermoelastic P- and S-waves and flow field was established. Then, the problem is reduced to solving a set of extended hybrid HIEs coupled with nonlinear boundary domain integral equations, in which the unknown functions are the general extended flow velocity discontinuity waves. The behavior of the general extended singular stress indices around the terminating crack front is analyzed by hybrid time-domain main-part analysis. The general extended singular pore stress waves (SPSWs) and the extended dynamic stress intensity factors (DSIFs) on the fluid-driven crack surface are obtained from closed-form solutions. In addition, a numerical method for the problem is proposed, in which the extended velocity discontinuity waves are approximated by the product of time-domain density functions and polynomials. The extended DSIFs and general extended SPSWs are calculated, and the results are presented to demonstrate the applicability of the proposed method. Key words: 3D fluid-driven crack propagation mechanism; P- and S-waves; fully coupled electromagnetothermoelastic field; hypersingular integral method; lattice Boltzmann method; interface phase field method; extended dynamic stress intensity factor; general extended singular pore stress waves.
ERIC Educational Resources Information Center
Zitter, Ilya; Hoeve, Aimee
2012-01-01
This paper deals with the problematic nature of the transition between education and the workplace. A smooth transition between education and the workplace requires learners to develop an integrated knowledge base, but this is problematic as most educational programmes offer knowledge and experiences in a fragmented manner, scattered over a…
The Use of Acoustic Emission in a Test for Beam-Lead, TAB, and Hybrid Chip Capacitor Bond Integrity
GEORGE G. HARMAN
1977-01-01
The use of acoustic emission (AE) in a test for beam-lead bond and anchor integrity has been investigated. AE refers to the emission of broad-band stress waves when materials are broken, cracked, or deformed. A major problem in the present work was to develop means of nondestructively stressing the delicate, irregularly extending beam leads. The most promising of the methods
Colaprico, Antonio; Cava, Claudia; Bertoli, Gloria; Bontempi, Gianluca; Castiglioni, Isabella
2015-01-01
In this work an integrated approach was used to identify functional miRNAs regulating gene pathway cross-talk in breast cancer (BC). We first integrated gene expression profiles and biological pathway information to explore the underlying associations between genes differently expressed among normal and BC samples and pathways enriched from these genes. For each pair of pathways, a score was derived from the distribution of gene expression levels by quantifying their pathway cross-talk. Random forest classification allowed the identification of pairs of pathways with high cross-talk. We assessed miRNAs regulating the identified gene pathways by a mutual information analysis. A Fisher test was applied to demonstrate their significance in the regulated pathways. Our results suggest interesting networks of pathways, including stem cell pluripotency, coagulation, and hypoxia pathways, that could be key regulators of target genes in BC; miRNAs that control these networks could be potential biomarkers for diagnostic, prognostic, and therapeutic development in BC. This work shows that standard methods of predicting normal and tumor classes, such as differentially expressed miRNAs or transcription factors, could lose intrinsic features; instead, our approach revealed the molecules responsible for the disease.
Computationally efficient Monte Carlo EM algorithms for generalized linear mixed models
Yi-Hau Chen
2006-01-01
Maximum likelihood estimation in generalized linear mixed models usually involves intractable integrals that may be of high dimension. To reduce the dimensions of the integrals involved in computation, a reduced form of the score equation obtained by exploiting conditional independence and random effects structures can be used. Two Monte Carlo methods, one based on direct Monte Carlo integration and the
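Direct Monte Carlo integration of a random-effect integral, one of the two methods mentioned, can be sketched as follows. This is an illustrative Python sketch, not the authors' algorithm: it approximates the marginal success probability of a logistic model with a standard normal random intercept, and all names are chosen here.

```python
import math
import random

def mc_expectation(f, num_draws: int = 50_000, seed: int = 0) -> float:
    """Approximate E[f(b)] for b ~ N(0, 1) by direct Monte Carlo:
    average f over independent draws from the random-effect distribution."""
    rng = random.Random(seed)
    return sum(f(rng.gauss(0.0, 1.0)) for _ in range(num_draws)) / num_draws

# Marginal probability P(y = 1) = E[logit^-1(0.5 + b)] for a logistic model
# with fixed effect 0.5 and a N(0, 1) random intercept b; this integral has
# no closed form, which is why MC approximation is used inside MCEM.
p = mc_expectation(lambda b: 1.0 / (1.0 + math.exp(-(0.5 + b))))
print(round(p, 3))
```

Within a Monte Carlo EM iteration this average would be recomputed at each parameter update; the error decreases as 1/√N in the number of draws.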
Romand, Raymond; Ripp, Raymond; Poidevin, Laetitia; Boeglin, Marcel; Geffers, Lars; Dollé, Pascal; Poch, Olivier
2015-01-01
An in situ hybridization (ISH) study was performed on 2000 murine genes representing around 10% of the protein-coding genes present in the mouse genome using data generated by the EURExpress consortium. This study was carried out in 25 tissues of late gestation embryos (E14.5), with a special emphasis on the developing ear and on five distinct developing sensory organs, including the cochlea, the vestibular receptors, the sensory retina, the olfactory organ, and the vibrissae follicles. The results obtained from an analysis of more than 11,000 micrographs have been integrated in a newly developed knowledgebase, called ImAnno. In addition to managing the multilevel micrograph annotations performed by human experts, ImAnno provides public access to various integrated databases and tools. Thus, it facilitates the analysis of complex ISH gene expression patterns, as well as functional annotation and interaction of gene sets. It also provides direct links to human pathways and diseases. Hierarchical clustering of expression patterns in the 25 tissues revealed three main branches corresponding to tissues with common functions and/or embryonic origins. To illustrate the integrative power of ImAnno, we explored the expression, function and disease traits of the sensory epithelia of the five presumptive sensory organs. The study identified 623 genes (out of 2000) concomitantly expressed in the five embryonic epithelia, among which many (~12%) were involved in human disorders. Finally, various multilevel interaction networks were characterized, highlighting differential functional enrichments of directly or indirectly interacting genes. These analyses reveal an under-representation of "sensory" functions in the sensory gene set, suggesting that E14.5 is a pivotal stage between the developmental phase and the functional phase that will be fully reached only after birth. PMID:25706271
Scalable integration of Li5FeO4 towards robust, high-performance lithium-ion hybrid capacitors.
Park, Min-Sik; Lim, Young-Geun; Hwang, Soo Min; Kim, Jung Ho; Kim, Jeom-Soo; Dou, Shi Xue; Cho, Jaephil; Kim, Young-Jun
2014-11-01
Lithium-ion hybrid capacitors have attracted great interest due to their high specific energy relative to conventional electrical double-layer capacitors. Nevertheless, the safety issue still remains a drawback for lithium-ion capacitors in practical operational environments because of the use of metallic lithium. Herein, single-phase Li5FeO4 with an antifluorite structure that acts as an alternative lithium source (instead of metallic lithium) is employed and its potential use for lithium-ion capacitors is verified. Abundant Li(+) amounts can be extracted from Li5FeO4 incorporated in the positive electrode and efficiently doped into the negative electrode during the first electrochemical charging. After the first Li(+) extraction, Li(+) does not return to the Li5FeO4 host structure and is steadily involved in the electrochemical reactions of the negative electrode during subsequent cycling. Various electrochemical and structural analyses support its superior characteristics for use as a promising lithium source. This versatile approach can yield a sufficient Li(+)-doping efficiency of >90% and improved safety as a result of the removal of metallic lithium from the cell. PMID:25208971
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform often suffers from low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse the CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code. PMID:25110745
Aphale, Ashish; Maisuria, Krushangi; Mahapatra, Manoj K; Santiago, Angela; Singh, Prabhakar; Patra, Prabir
2015-01-01
Supercapacitors, also known as electrochemical capacitors, store energy via either Faradaic or non-Faradaic processes and have recently grown in popularity mainly because they complement, and can even replace, conventional energy storage systems in a variety of applications. Supercapacitor performance can be improved significantly by developing new nanocomposite electrodes that utilize both energy storage processes simultaneously. Here we report fabrication of freestanding hybrid electrodes by incorporating graphene and carbon nanotubes (CNT) in pyrrole monomer via its in-situ polymerization. At a scan rate of 5 mV s⁻¹, the specific capacitance of the polypyrrole-CNT-graphene (PCG) electrode film was 453 F g⁻¹, with ultrahigh energy and power density of 62.96 W h kg⁻¹ and 566.66 W kg⁻¹, respectively, as shown in the Ragone plot. A nanofibrous membrane was electrospun and effectively used as a separator in the supercapacitor. Four supercapacitors were assembled in series to demonstrate the device performance by lighting a 2.2 V LED. PMID:26395922
Quasi-Monte Carlo methods for lattice systems: A first look
NASA Astrophysics Data System (ADS)
Jansen, K.; Leovey, H.; Ammon, A.; Griewank, A.; Müller-Preussker, M.
2014-03-01
We investigate the applicability of quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this behavior for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling. Catalogue identifier: AERJ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence version 3 No. of lines in distributed program, including test data, etc.: 67759 No. of bytes in distributed program, including test data, etc.: 2165365 Distribution format: tar.gz Programming language: C and C++. Computer: PC. Operating system: Tested on GNU/Linux, should be portable to other operating systems with minimal efforts. Has the code been vectorized or parallelized?: No RAM: The memory usage directly scales with the number of samples and dimensions: Bytes used = “number of samples” × “number of dimensions” × 8 Bytes (double precision). Classification: 4.13, 11.5, 23. External routines: FFTW 3 library (http://www.fftw.org) Nature of problem: Certain physical models formulated as a quantum field theory through the Feynman path integral, such as quantum chromodynamics, require a non-perturbative treatment of the path integral. The only known approach that achieves this is the lattice regularization. In this formulation the path integral is discretized to a finite, but very high dimensional integral.
So far only Monte Carlo, and especially Markov chain Monte Carlo methods like the Metropolis or the hybrid Monte Carlo algorithm, have been used to calculate approximate solutions of the path integral. These algorithms often lead to the undesired effect of autocorrelation in the samples of observables and suffer in any case from the slow asymptotic error behavior proportional to N^(-1/2), where N is the number of samples. Solution method: This program applies the quasi-Monte Carlo approach and the reweighting technique (respectively the weighted uniform sampling method) to generate uncorrelated samples of observables of the anharmonic oscillator with an improved asymptotic error behavior. Unusual features: The application of the quasi-Monte Carlo approach is quite revolutionary in the field of lattice field theories. Running time: The running time depends directly on the number of samples N and dimensions d. On modern computers a run with up to N=2^16=65536 (including 9 replica runs) and d=100 should not take much longer than one minute.
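The N^(-1/2) versus N^(-1) error-scaling contrast described in this abstract can be illustrated with a minimal one-dimensional sketch. This is not the authors' code; the integrand, sample size, and the choice of the base-2 van der Corput sequence as the low-discrepancy point set are illustrative assumptions:

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-`base` van der Corput low-discrepancy sequence."""
    pts = np.empty(n)
    for i in range(n):
        x, f, k = 0.0, 1.0 / base, i
        while k > 0:
            x += (k % base) * f  # digit-reversal of k in the given base
            k //= base
            f /= base
        pts[i] = x
    return pts

def quadrature_error(points, f, exact):
    """Absolute error of the sample-mean estimate of the integral over [0, 1]."""
    return abs(np.mean(f(points)) - exact)

f = lambda x: x ** 2          # integral of x^2 over [0, 1] is 1/3
exact = 1.0 / 3.0
n = 4096

rng = np.random.default_rng(0)
err_mc = quadrature_error(rng.random(n), f, exact)       # ~ N^(-1/2) scaling
err_qmc = quadrature_error(van_der_corput(n), f, exact)  # ~ N^(-1) for smooth f
```

For a smooth integrand like this one, the quasi-Monte Carlo error at n = 4096 is on the order of 1/(2n), well below the typical pseudo-random error.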
Ali Naji; Paul J. Atzberger; Frank L. H. Brown
2009-05-23
We introduce a simulation strategy to consistently couple continuum biomembrane dynamics to the motion of discrete biological macromolecules residing within or on the membrane. The methodology is used to study the diffusion of integral membrane proteins that impart a curvature on the bilayer surrounding them. Such proteins exhibit a substantial reduction in diffusion coefficient relative to "flat" proteins; this effect is explained by elementary hydrodynamic considerations.
Kyoung-Youm Kim; Se Yoon Kim; Myoung Won Kim; Suntae Jung
2005-01-01
This paper reports on the development of planar-lightwave-circuit wavelength-division-multiplexing (PLC-WDM) filters based on double polynomial curve directional couplers, which will be integrated into compact and low-cost bidirectional optical transceivers for FTTx systems. Silica-based PLC-WDM filters smaller than 5 mm × 100 μm for Ethernet passive optical network (E-PON) applications were designed and fabricated to have low bidirectional crosstalk (lower than -40 dB).
Alavikia, Babak; Ramahi, Omar M
2011-12-01
This work presents a novel finite-element solution to the problem of scattering from a finite and an infinite array of cylindrical objects with arbitrary shapes and materials over perfectly conducting ground planes. The formulation is based on using the surface integral equation with Green's function of the first or second kind as a boundary constraint. The solution region is divided into interior regions containing the cylindrical objects and the region exterior to all the objects. The finite-element formulation is applied inside the interior regions to derive a linear system of equations associated with nodal field values. Using a two-boundary formulation, the surface integral equation is then applied at the truncation boundary as a boundary constraint to connect nodes on the boundaries to interior nodes. The technique presented here is highly efficient in terms of computing resources, versatile, and accurate in comparison with previously published methods. The near and far fields are generated for a finite and an infinite array of objects. While the surface integral equation in combination with the finite-element method was applied before to the problem of scattering from objects in free space, the application of the method to the important problem of scattering from objects above infinite flat ground planes is presented here for the first time, to our knowledge. PMID:22193264
Schichtel, B.A.; Malm, W.C.; Gebhart, K.A.; Barna, M.G.; Knipping, E.M. [Colorado State University, Ft. Collins, CO (United States)
2006-04-04
The Big Bend Regional Aerosol and Visibility Observational (BRAVO) study was an intensive air quality study designed to understand the causes of haze in Big Bend National Park. Daily speciated fine aerosols were measured from July through October 1999 at 37 sites located mostly in Texas. In support of BRAVO, two chemical transport models (CTMs) were used to apportion particulate sulfate at Big Bend and other sites in Texas to sources in the eastern and western United States, Texas, Mexico, and the Carbon I and II coal-fired power plants, located 225 km southeast of Big Bend in Mexico. Analysis of the CTM source attribution results and comparison to results from receptor models revealed systematic biases. To reduce the multiplicative biases, a hybrid source apportionment model, based on inverse modeling, was developed that adjusted the initial CTM source contributions so the modeled sulfate concentrations optimally fit the measured data, resulting in refined daily source contributions. The method was tested using synthetic data and successfully reduced source attribution biases. The refined sulfate source attribution results reduced the initial eastern U.S. contribution to Big Bend, averaged over the BRAVO study period, from about 40% to about 30%, while Mexico's contribution increased from 24%-32% to about 40%. The contribution from the Carbon facility increased from about 14% to over 20%. The increase in Mexico's contribution is consistent with more recent SO2 emissions estimates that indicate that the BRAVO Mexican SO2 emissions were underestimated. Source attribution results for other monitoring sites in west Texas were similar to results at Big Bend.
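In its simplest form, the bias-correction step this abstract describes is an inverse problem: find per-source scale factors so that the summed modeled contributions best fit the measurements. A minimal least-squares sketch follows; the matrix, the three-source setup, and the "true" scale factors are made up for illustration, and the actual BRAVO hybrid model is considerably more elaborate:

```python
import numpy as np

# Hypothetical inputs: rows are observation days, columns are CTM-modeled
# daily sulfate contributions from three source regions.
ctm = np.array([
    [2.0, 1.0, 0.5],
    [1.5, 2.0, 0.3],
    [0.8, 1.2, 1.0],
    [2.2, 0.5, 0.7],
])
true_scale = np.array([0.8, 1.3, 1.5])   # multiplicative biases to recover
observed = ctm @ true_scale              # synthetic "measured" concentrations

# Solve for per-source scale factors so that the scaled, summed modeled
# contributions best match the observations in the least-squares sense.
scale, *_ = np.linalg.lstsq(ctm, observed, rcond=None)

refined = ctm * scale                    # refined daily source contributions
```

Because the synthetic observations lie exactly in the column space of the model matrix, the fit recovers the scale factors exactly; with real, noisy data the recovered factors are only a best fit.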
Luo, Ye; Chamanzar, Maysamreza; Apuzzo, Aniello; Salas-Montiel, Rafael; Nguyen, Kim Ngoc; Blaize, Sylvain; Adibi, Ali
2015-02-11
The enhancement and confinement of electromagnetic radiation to the nanometer scale have improved the performance and decreased the dimensions of optical sources and detectors for several applications including spectroscopy, medical applications, and quantum information. Realization of on-chip nanofocusing devices compatible with the silicon photonics platform adds a key functionality and provides opportunities for sensing, trapping, on-chip signal processing, and communications. Here, we discuss the design, fabrication, and experimental demonstration of light nanofocusing in a hybrid plasmonic-photonic nanotaper structure. We discuss the physical mechanisms behind the operation of this device, the coupling mechanisms, and how to engineer the energy transfer from a propagating guided mode to a trapped plasmonic mode at the apex of the plasmonic nanotaper with minimal radiation loss. Optical near-field measurements and Fourier modal analysis carried out using a near-field scanning optical microscope (NSOM) show a tight nanofocusing of light in this structure to an extremely small spot of 0.00563 (λ/(2n_max))^3 confined in 3D and an input power conversion of 92%. Our experiments also verify the mode selectivity of the device (low transmission of a TM-like input mode and high transmission of a TE-like input mode). A large field concentration factor (FCF) of about 4.9 is estimated from our NSOM measurement with a radius of curvature of about 20 nm at the apex of the nanotaper. The agreement between our theory and experimental results reveals helpful insights about the operation mechanism of the device, the interplay of the modes, and the gradual power transfer to the nanotaper apex. PMID:25562706
Multiple quadrature by Monte Carlo techniques
Voss, John Dietrich
1966-01-01
Excerpts from the table of contents: Importance Sampling; Error in Evaluating 4.1 vs. Number of Points of Evaluation of Integrand; Non-Central Case; Needle and Parallel Lines Sample Space. From Chapter I (Introduction): Monte Carlo was the code name given to a method... (around 30). The results are checked against a table of known values. The table may then be extended for higher degrees of freedom by the Monte Carlo technique used. The non-central cumulative Chi-square distribution is also obtained by integration...
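The thesis's approach of checking Monte Carlo chi-square values against a table of known values can be sketched in a few lines. For k = 2 degrees of freedom the chi-square CDF has the closed form 1 - exp(-x/2), which serves as the reference here; the sample size, seed, and evaluation point are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
k, x, n = 2, 3.0, 200_000

# Monte Carlo estimate of P(chi2_k <= x): a chi-square variate with k
# degrees of freedom is a sum of k squared standard normals.
draws = rng.standard_normal((n, k))
est = np.mean(np.sum(draws ** 2, axis=1) <= x)

# Known reference value, available in closed form only for k = 2.
exact = 1.0 - np.exp(-x / 2.0)
```

With n = 200,000 samples the standard error of the estimate is below 0.001, so the Monte Carlo value should match the tabulated one to about two decimal places.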
Rosca, Florin
2012-06-15
Purpose: To present a mixed electron and photon IMRT planning technique using electron beams with an energy range of 6-22 MeV and standard hardware that minimizes integral dose to patients for targets as deep as 7.5 cm. Methods: Ten brain cases, two lung, a thyroid, an abdominal, and a parotid case were planned using two planning techniques: a photon-only IMRT (IMRT) versus a mixed modality treatment (E + IMRT) that includes an en face electron beam and a photon IMRT portion that ensures a uniform target coverage. The electron beam is delivered using a regular cutout placed in an electron cone. The electron energy was chosen to provide a good trade-off between minimizing integral dose and generating a uniform, deliverable plan. The authors chose electron energies that cover the deepest part of the PTV with the 65%-70% isodose line. The normal tissue integral dose, the dose for ring structures around the PTV, and the volumes of the 75%, 50%, and 25% isosurfaces were used to compare the dose distributions generated by the two planning techniques. Results: The normal tissue integral dose was lowered by about 20% by the E + IMRT plans compared to the photon-only IMRT ones for most studied cases. With the exception of lungs, the dose reduction associated with the E + IMRT plans was more pronounced further away from the target. The average dose ratios delivered to the 0-2 cm and the 2-4 cm ring structures for brain patients for the two planning techniques were 89.6% and 70.8%, respectively. The enhanced dose sparing away from the target for the brain patients can also be observed in the ratio of the 75%, 50%, and 25% isodose line volumes for the two techniques, which decreases from 85.5% to 72.6% and further to 65.1%, respectively. For lungs, the lateral electron beams used in the E + IMRT plans were perpendicular to the mostly anterior/posterior photon beams, generating much more conformal plans.
Conclusions: The authors proved that even using the existing electron delivery hardware, a mixed electron/photon planning technique (E + IMRT) can decrease the normal tissue integral dose compared to a photon-only IMRT plan. Different planning approaches can be enabled by the use of an electron beam directed toward organs at risk distal to the target, which are still spared due to the rapid dose fall-off of the electron beam. Examples of such cases are the lateral electron beams in the thoracic region that do not irradiate the heart and contralateral lung, electron beams pointed toward kidneys in the abdominal region, or beams treating brain lesions pointed toward the brainstem or optical apparatus. For brain, electron vertex beams can also be used without irradiating the whole body. Since radiation retreatments become more and more common, minimizing the normal tissue integral dose and the dose delivered to tissues surrounding the target, as enabled by E + IMRT type techniques, should receive more attention.
NASA Astrophysics Data System (ADS)
Tuantranont, Adisorn
2001-12-01
Micro-Electro-Mechanical Systems (MEMS) are integrated micrometer-sized devices or systems that combine electrical and mechanical components. Integrated circuit (IC) technology is used for batch fabrication of mechanical and electrical devices at scales from micrometers to millimeters. The research and development of optical beam steering devices using MEMS is presented in this thesis. Phase modulation beam steering techniques including phased array and decentered microlens beam steering are implemented. Theories of phase modulation beam steering techniques including geometrical and Fourier transform analysis are studied and applied to understand the principal operation of the devices. Design considerations, analytical/finite element analysis, device characterization, and optical testing methodologies to design, model, and characterize the fabricated devices are presented. Several novel lenslet-integrated MEM-deformable micromirror arrays, including a high optical fill factor and large-stroke micromirror array and a MEMS-controllable microlens array, are created and demonstrated as miniature beam steering systems for targeting and tracking applications, display, optical interconnects, and optical communication. Optical analysis of decentered microlenses for beam steering is studied. Optical efficiency and coupling analysis of the MEMS-controllable microlens array in optical interconnect systems are analyzed based on the Gaussian beam propagation of Vertical Cavity Surface Emitting Lasers (VCSELs). Tolerance analysis, including lateral and longitudinal misalignment when the device is implemented, is studied to determine the possible range of critical tolerances. Finally, wavefront aberrations of a steered beam are modeled and determined using optical design software to understand the influences of focus shift and aberrations on the system performance.
Monte Carlo study of Lefschetz thimble structure in one-dimensional Thirring model at finite density
Fujii, Hirotsugu; Kikukawa, Yoshio
2015-01-01
We consider the one-dimensional massive Thirring model formulated on the lattice with staggered fermions and an auxiliary compact vector (link) field, which is exactly solvable and shows a phase transition as the chemical potential for fermion number increases: a crossover at finite temperature and a first-order transition at zero temperature. We complexify its path integral on Lefschetz thimbles and examine its phase transition by hybrid Monte Carlo simulations on the single dominant thimble. We observe a discrepancy between the numerical and exact results in the crossover region for small inverse coupling $\beta$ and/or large lattice size $L$, while they are in good agreement in the lower and higher density regions. We also observe that the discrepancy persists in the continuum limit keeping the temperature finite, and that it becomes more significant toward the low-temperature limit. This numerical result is consistent with our analytical study of the model's thimble structure. These results imply...
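The hybrid Monte Carlo machinery this abstract relies on (leapfrog integration of Hamiltonian dynamics plus a Metropolis accept/reject step) can be sketched minimally for a one-dimensional Gaussian target. The target density, step size, and trajectory length are illustrative assumptions, not the Thirring-model setup studied in the paper:

```python
import numpy as np

def hmc_step(x, grad_logp, logp, eps, n_leap, rng):
    """One hybrid Monte Carlo update for a 1D target density."""
    p = rng.standard_normal()                 # refresh the conjugate momentum
    x_new, p_new = x, p
    # Leapfrog integration: half-step in p, alternating full steps, half-step in p.
    p_new += 0.5 * eps * grad_logp(x_new)
    for _ in range(n_leap - 1):
        x_new += eps * p_new
        p_new += eps * grad_logp(x_new)
    x_new += eps * p_new
    p_new += 0.5 * eps * grad_logp(x_new)
    # Metropolis accept/reject on the change in total energy H = p^2/2 - log p(x).
    dH = (0.5 * p_new**2 - logp(x_new)) - (0.5 * p**2 - logp(x))
    return x_new if np.log(rng.random()) < -dH else x

# Standard normal target: log p(x) = -x^2/2 up to a constant.
logp = lambda z: -0.5 * z * z
grad = lambda z: -z

rng = np.random.default_rng(1)
x, samples = 0.0, []
for _ in range(5000):
    x = hmc_step(x, grad, logp, eps=0.3, n_leap=10, rng=rng)
    samples.append(x)
```

The leapfrog integrator is reversible and volume-preserving, which is what makes the simple accept/reject test above sufficient for exactness; it is also the integrator whose step-size instability the first abstract in this collection analyzes.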
Dwyer, Heather E; Jasieniuk, Marie; Okada, Miki; Shapiro, Arthur M
2015-01-01
Gene flow and hybridization among species dramatically affect our understanding of the species as a biological unit, species relationships, and species adaptations. In North American Colias eurytheme and Colias eriphyle, there has been historical debate over the extent of hybridization occurring and the identity of phenotypically intermediate individuals as genetic hybrids. This study assesses the population structure of these two species to measure the extent of hybridization and the genetic identity of phenotypic intermediates as hybrids. Amplified fragment length polymorphism (AFLP) marker analysis was performed on 378 specimens collected from northern California and Nevada. Population structure was inferred using a Bayesian/Markov chain Monte Carlo method, which probabilistically assigns individuals to genetic clusters. Three genetic clusters provided the best fit for the data. C. eurytheme individuals were primarily assigned to two closely related clusters, and C. eriphyle individuals were mostly assigned to a third, more distantly related cluster. There appeared to be significant hybridization between the two species. Individuals of intermediate phenotype (putative hybrids) were found to be genetically indistinguishable from C. eriphyle, indicating that previous work based on the assumption that these intermediate forms are hybrids may warrant reconsideration. PMID:26306172
D. D. Ferrante; J. Doll; G. S. Guralnik; D. Sabo
2002-09-04
Using a common technique for approximating distributions [generalized functions], we are able to use standard Monte Carlo methods to compute QFT quantities in Minkowski spacetime, under phase transitions, or when dealing with coalescing stationary points.
Powers, J J
2011-11-28
This study focused on creating a new tristructural isotropic (TRISO) coated particle fuel performance model and demonstrating the integration of this model into an existing system of neutronics and heat transfer codes, creating a user-friendly option for including fuel performance analysis within system design optimization and system-level trade-off studies. The end product enables both a deeper understanding and better overall system performance of nuclear energy systems limited or greatly impacted by TRISO fuel performance. A thorium-fueled hybrid fusion-fission Laser Inertial Fusion Energy (LIFE) blanket design was used for illustrating the application of this new capability and demonstrated both the importance of integrating fuel performance calculations into mainstream design studies and the impact that this new integrated analysis had on system-level design decisions. A new TRISO fuel performance model named TRIUNE was developed and verified and validated during this work with a novel methodology established for simulating the actual lifetime of a TRISO particle during repeated passes through a pebble bed. In addition, integrated self-consistent calculations were performed for neutronics depletion analysis, heat transfer calculations, and then fuel performance modeling for a full parametric study that encompassed over 80 different design options that went through all three phases of analysis. Lastly, side studies were performed that included a comparison of thorium and depleted uranium (DU) LIFE blankets as well as some uncertainty quantification work to help guide future experimental work by assessing what material properties in TRISO fuel performance modeling are most in need of improvement. A recommended thorium-fueled hybrid LIFE engine design was identified with an initial fuel load of 20MT of thorium, 15% TRISO packing within the graphite fuel pebbles, and a 20cm neutron multiplier layer with beryllium pebbles in flibe molten salt coolant. 
It operated at a system power level of 2000 MWth, took about 3.5 years to reach full plateau power, and was capable of an End of Plateau burnup of 38.7 %FIMA if considering just the neutronic constraints in the system design; however, fuel performance constraints led to a maximum credible burnup of 12.1 %FIMA due to a combination of internal gas pressure and irradiation effects on the TRISO materials (especially PyC) leading to SiC pressure vessel failures. The optimal neutron spectrum for the thorium-fueled blanket options evaluated seemed to favor a hard spectrum (low but non-zero neutron multiplier thicknesses and high TRISO packing fractions) in terms of neutronic performance but the fuel performance constraints demonstrated that a significantly softer spectrum would be needed to decrease the rate of accumulation of fast neutron fluence in order to improve the maximum credible burnup the system could achieve.
Enhancements in Continuous-Energy Monte Carlo Capabilities in SCALE
Bekar, Kursat B.; Celik, Cihangir; Wiarda, Dorothea; Peplow, Douglas E.; Rearden, Bradley T.; Dunn, Michael E. (ORNL)
2013-01-01
Monte Carlo tools in SCALE are commonly used in criticality safety calculations as well as sensitivity and uncertainty analysis, depletion, and criticality alarm system analyses. Recent improvements in the continuous-energy data generated by the AMPX code system and significant advancements in the continuous-energy treatment in the KENO Monte Carlo eigenvalue codes facilitate the use of SCALE Monte Carlo codes to model geometrically complex systems with enhanced solution fidelity. The addition of continuous-energy treatment to the SCALE Monaco code, which can be used with automatic variance reduction in the hybrid MAVRIC sequence, provides significant enhancements, especially for criticality alarm system modeling. This paper describes some of the advancements in continuous-energy Monte Carlo codes within the SCALE code system.
Yoon, Ki-Hong; Oh, Su Hwan; Kim, Ki Soo; Kwon, O-Kyun; Oh, Dae Kon; Noh, Young-Ouk; Lee, Hyung-Jong
2010-03-15
We presented a hybridly integrated tunable external cavity laser with 16 channels at 0.8 nm mode spacing, operating under direct modulation at 2.5 Gbps, as a low-cost source for a WDM-PON system. The tunable laser was fabricated using a superluminescent diode (SLD) and a polymer Bragg reflector. The maximum output power and the power slope efficiency of the tunable laser were 10.3 mW and 0.132 mW/mA, respectively, at an SLD current of 100 mA and a temperature of 25 degrees C. The directly modulated tunable laser successfully provided 2.5-Gbps transmission through 20 km of standard single-mode fiber. The power penalty of the tunable laser was less than 0.8 dB for all 16 channels after the 20-km transmission. The power penalty variation was less than 1.4 dB during blue-shifted wavelength tuning. PMID:20389571
NASA Astrophysics Data System (ADS)
Li, L.; Wang, K.; Li, H.; Eibert, T. F.
2014-11-01
A hybrid higher-order finite element boundary integral (FE-BI) technique is discussed in which the higher-order FE matrix elements are computed by a fully analytical procedure and the global matrix assembly is organized by a self-identifying procedure for the local-to-global transformation. This assembly procedure applies to both the FE and the BI parts of the algorithm. The geometry is meshed into three-dimensional tetrahedral finite elements, and nearly orthogonal hierarchical basis functions are employed. The boundary conditions are implemented in a strong sense, such that the boundary values of the volume basis functions are utilized directly within the BI, either for the tangential electric and magnetic fields or for the associated equivalent surface current densities obtained by applying a cross product with the unit surface normals. The self-identifying method for the global matrix assembly automatically discerns the global order of the basis functions when generating the matrix elements. Higher-order basis functions require more unknowns per finite element, but fewer elements are needed to achieve the same accuracy. This provides considerably more flexibility for meshing and allows the mesh size to grow up to λ/3. The performance of the implemented system is evaluated in terms of computation time, accuracy, and memory consumption, and excellent precision and computation times are found for large-scale simulations.
Romero-García, Sebastian; Merget, Florian; Shen, Bin; Witzens, Jeremy
2013-01-01
We report on two edge-coupling and power splitting devices for hybrid integration of III-V lasers with sub-micrometric silicon-on-insulator (SOI) waveguides. The proposed devices relax the horizontal alignment tolerances required to achieve high coupling efficiencies and are suitable for passively aligned assembly with pick-and-place tools. Light is coupled to two on-chip single mode SOI waveguides with almost identical power coupling efficiency, but with a varying relative phase accommodating the lateral misalignment between the laser diode and the coupling devices, and is suitable for the implementation of parallel optics transmitters. Experimental characterization with both a lensed fiber and a Fabry-Pérot semiconductor laser diode has been performed. Excess insertion losses (in addition to the 3 dB splitting) taken as the worst case over both waveguides of respectively 2 dB and 3.1 dB, as well as excellent 1 dB horizontal loss misalignment ranges of respectively 2.8 µm and 3.8 µm (worst case over both i...
Diagrammatic Monte Carlo and Worm Algorithm Techniques
NASA Astrophysics Data System (ADS)
Prokof'ev, Nikolay
This chapter reviews the basic principles of Diagrammatic Monte Carlo and Worm Algorithm techniques. Diagrammatic Monte Carlo establishes generic rules for unbiased sampling of well-defined configuration spaces in which the only source of error is statistical, due to finite sampling time, whether the configuration parameters are discrete, as in the Ising model, or continuous, as in Feynman diagrams or lattice path integrals. Worm Algorithms allow one to sample efficiently configuration spaces with complex topology and non-local constraints, which cause severe problems for Monte Carlo schemes based on local updates. They achieve this goal by working with an enlarged configuration space that includes configurations violating constraints present in the original formulation.
Nagasawa, Zenzo; Kusaba, Koji; Aoki, Yosuke
2008-06-01
In empirical antibacterial therapy, regional surveillance is expected to yield important information for the determination of the class and dosage regimen of antibacterial agents to be used when dealing with infections with organisms such as Pseudomonas aeruginosa, in which strains resistant to antibacterial agents have been increasing. The minimal inhibitory concentrations (MICs) of five carbapenem antibiotics against P. aeruginosa strains isolated in the Northern Kyushu district of Japan between 2005 and 2006 were measured, and 100 strains for which carbapenem MICs were ≤0.5-32 microg/ml were selected. In this study, MIC was measured by two methods, i.e., the common serial twofold dilution method and an integrated concentration method, in which the concentration was changed, in increments of 2 microg/ml, from 2 to 16 microg/ml. The MIC(50)/MIC(90) values for imipenem, meropenem, biapenem, doripenem, and panipenem, respectively, with the former method were 8/16, 4/16, 4/16, 2/8, and 16/16 microg/ml; and the values were 6/10, 4/12, 4/10, 2/6, and 10/16 microg/ml with the latter method. The MIC data obtained with both methods were subjected to pharmacokinetic/pharmacodynamic (PK/PD) analysis with Monte Carlo simulation to calculate the probability of achieving the target of time above MIC (T>MIC) with each carbapenem. The probability of achieving 25% T>MIC (the percentage of the dosing interval during which the concentration exceeds the MIC) and 40% T>MIC against P. aeruginosa with any dosage regimen was higher with doripenem than with any other carbapenem tested. When the two sets of MIC data were subjected to PK/PD analysis, the difference between the two methods in the probability of achieving each % T>MIC was small, thus endorsing the validity of the serial twofold dilution method. PMID:18574662
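The PK/PD Monte Carlo step described above can be sketched as follows. This is a minimal illustration assuming a one-compartment IV-bolus model; the population PK parameters, dose, and MIC table below are made up for demonstration, not the carbapenem data of the study:

```python
import math
import random

def prob_target_attainment(dose_mg, tau_h, mic_values, n_trials=20_000,
                           target_pct=40.0, seed=0):
    """Monte Carlo probability of achieving a %T>MIC target.

    Hypothetical one-compartment IV-bolus model: volume of distribution V
    and clearance CL are drawn log-normally to mimic between-patient
    variability; the population means below are illustrative, not drug data.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        V = 20.0 * math.exp(rng.gauss(0.0, 0.2))    # L, hypothetical
        CL = 10.0 * math.exp(rng.gauss(0.0, 0.25))  # L/h, hypothetical
        k = CL / V                                  # elimination rate
        c0 = dose_mg / V                            # peak concentration
        mic = rng.choice(mic_values)  # draw an MIC from the isolate data
        if c0 > mic:
            # Time above MIC for exponential decay C(t) = c0 * exp(-k t),
            # capped at the dosing interval.
            t_above = min(math.log(c0 / mic) / k, tau_h)
            if 100.0 * t_above / tau_h >= target_pct:
                hits += 1
    return hits / n_trials

# Illustrative dose and MIC distribution (not the study's data).
p = prob_target_attainment(dose_mg=500, tau_h=8, mic_values=[0.5, 1, 2, 4, 8])
```

The result `p` is the fraction of simulated patient/isolate pairs that meet the 40% T>MIC target; repeating the calculation for each drug and regimen gives the comparison described in the abstract.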
Huffman, J. B.; Lindsey, L. L.; Snyder, M. K.
1981-03-10
The development of a roof spray system for passive/hybrid building cooling is described, along with progress to date in defining and evaluating the issues and constraints relevant to spray roof cooling, in the context of Butler's passive/hybrid manufactured buildings development program.
Streaming-matrix hybrid method for discrete-ordinates calculations
Clark, B.A.
1983-01-01
The streaming matrix hybrid method (SMHM) is a deterministic streaming method applied in void regions within a discrete-ordinates problem. The SMHM is used within the discrete-ordinates space-angle sweeping scheme so that it remains inside the standard discrete-ordinates inner iteration procedure. Streaming matrix equations are derived from the integral transport equation; a brief description of the implementation of the SMHM in discrete-ordinates transport codes is presented. The SMHM is applied to a cylindrical duct problem (L/D = 10) and compared with discrete-ordinates and Monte Carlo solutions. The SMHM provides more accurate results than discrete-ordinates without undue run-time or storage penalties.
Multiscale Kinetic Monte-Carlo for Simulating Epitaxial Growth
Jason P. DeVita; Leonard M. Sander; Peter Smereka
2005-01-01
We present a fast Monte-Carlo algorithm for simulating epitaxial surface growth, based on the continuous-time Monte-Carlo algorithm of Bortz, Kalos and Lebowitz. When simulating realistic growth regimes, much computational time is consumed by the relatively fast dynamics of the adatoms. Continuum and continuum-discrete hybrid methods have been developed to address this issue; however, in many situations, the density of adatoms
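The rejection-free continuous-time step of Bortz, Kalos and Lebowitz that this work builds on can be sketched as follows; the two-event toy system and its rates are hypothetical, chosen only to show the fast-adatom/slow-event disparity:

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free (BKL / continuous-time) kinetic Monte Carlo step.

    Given the rates of all possible events, select one event with
    probability proportional to its rate and return (event_index, dt),
    where dt is an exponentially distributed waiting time.
    """
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(rng.random()) / total  # time advances by Exp(total)
    return i, dt

# Toy example: fast adatom hops (rate 100) vs. a rare attachment (rate 1).
rng = random.Random(1)
counts = [0, 0]
t = 0.0
for _ in range(10_000):
    event, dt = kmc_step([100.0, 1.0], rng)
    counts[event] += 1
    t += dt
# counts[0]/counts[1] should be close to the 100:1 rate ratio.
```

Because every step executes an event, no time is wasted rejecting moves; the cost the abstract refers to is that almost all selected events are the fast adatom hops.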
Improved geometry representations for Monte Carlo radiation transport.
Martin, Matthew Ryan (Cornell University)
2004-08-01
ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.
Markov Chain Monte Carlo Methods
Robert, Christian P.
Lecture slides: Christian P. Robert, Université Paris-Dauphine, IUF & CREST, http://www.ceremade.dauphine.fr/~xian, October 16, 2013. The outline covers motivations and random variable generation.
NASA Astrophysics Data System (ADS)
Robinson, Patrick J.
Gasification has been used in industry on a relatively limited scale for many years, but it is emerging as the premier unit operation in the energy and chemical industries. The switch from expensive and insecure petroleum to solid hydrocarbon sources (coal and biomass) is occurring due to the vast amount of domestic solid resources, national security and global warming issues. Gasification (or partial oxidation) is a vital component of "clean coal" technology. Sulfur and nitrogen emissions can be reduced, overall energy efficiency is increased and carbon dioxide recovery and sequestration are facilitated. Gasification units in an electric power generation plant produce a fuel gas for driving combustion turbines. Gasification units in a chemical plant generate synthesis gas, which can be used to produce a wide spectrum of chemical products. Future plants are predicted to be hybrid power/chemical plants with gasification as the key unit operation. The coupling of an Integrated Gasification Combined Cycle (IGCC) with a methanol plant can handle swings in power demand by diverting hydrogen gas from a combustion turbine and synthesis gas from the gasifier to a methanol plant for the production of an easily-stored, hydrogen-consuming liquid product. An additional control degree of freedom is provided with this hybrid plant, fundamentally improving the controllability of the process. The idea is to base-load the gasifier and use the more responsive gas-phase units to handle disturbances. During the summer days, power demand can fluctuate up to 50% over a 12-hour period. The winter provides a different problem where spikes of power demand can go up 15% within the hour. The following dissertation develops a hybrid IGCC / methanol plant model, validates the steady-state results with a National Energy Technical Laboratory study, and tests a proposed control structure to handle these significant disturbances. 
All modeling was performed in the widely used chemical process simulators Aspen Plus and Aspen Dynamics. This dissertation first presents a simple approximate method for achieving the objective of having a gasifier model that can be exported into Aspen Dynamics. Limitations in the software dealing with solids make this a necessary task. The basic idea is to use a high molecular weight hydrocarbon that is present in the Aspen library as a pseudo fuel. For many plantwide dynamic studies, a rigorous high-fidelity dynamic model of the gasifier is not needed because its dynamics are very fast and the gasifier gas volume is a relatively small fraction of the total volume of the entire plant. The proposed approximate model captures the essential macro-scale thermal, flow, composition and pressure dynamics. This paper does not attempt to optimize the design or control of gasifiers, but merely presents an idea of how to dynamically simulate coal gasification in an approximate way. This dissertation also presents models of the downstream units of a typical IGCC. Dynamic simulations of the H2S absorption/stripping unit, Water-gas Shift (WGS) reactors, and CO2 absorption/stripping unit are essential for the development of stable and agile plantwide control structures of this hybrid power/chemical plant. Due to the high pressure of the system, hydrogen sulfide is removed by means of physical absorption. SELEXOL™ (a mixture of the dimethyl ethers of polyethylene glycol) is used to achieve a gas purity of less than 5 ppm H2S. This desulfurized synthesis gas is sent to two water gas shift reactors that convert a total of 99% of carbon monoxide to hydrogen. Physical absorption of carbon dioxide with Selexol produces a hydrogen rich stream (90 mol% H2) to be fed into combustion turbines or to a methanol plant. Steady-state economic designs and plantwide control structures are developed in this dissertation.
A steady-state economic design, control structure, and successful turndown of the methanol plant are shown in this dissertation. The plantwide control structure and interaction among units are also shown. The methanol plant was si
Hybrid organic-inorganic optoelectronic subsystems on a chip
Louay Eldada; Junichiro Fujita; Antonije Radojevic; Reinald Gerhardt; Tomoyuki Izuhara
2005-01-01
We report on hybrid organic-inorganic optoelectronic subsystems that integrate passive and active optical functions. The integration approaches involve various levels of hybridization, from splicing of pigtailed elements, to chip-to-chip attachment, to hybrid on-chip integration involving grafting and flip-chip mounting, and finally to true heteroepitaxy. The materials integrated include polymer, silica, silicon, silicon oxynitride, lithium niobate, indium phosphide, gallium arsenide, yttrium
NASA Astrophysics Data System (ADS)
Micheloni, R.; Crippa, L.; Picca, M.
In recent years, both industry and academia have increased their research effort in the hybrid memory management space, developing a wide variety of systems. It is worth mentioning that "hybrid" is a generic term and it can have different meanings depending on the context. For instance, a storage system can be hybrid because it combines HDD and SSD; an SSD can be hybrid because it combines SLC and MLC Flash memories, or it combines different non-volatile memories like NAND and ReRAM. In this chapter we look at all these different meanings.
NASA Astrophysics Data System (ADS)
Buscheck, T. A.; Chen, M.; Lu, C.; Sun, Y.; Hao, Y.; Elliot, T. R.; Celia, M. A.; Bielicki, J. M.
2012-12-01
The challenges of mitigating climate change and generating sustainable renewable energy are inseparable and can be addressed by synergistic integration of geothermal energy production with secure geologic CO2 storage (GCS). Pressure buildup can be a limiting factor for GCS and geothermal reservoir operations, due to a number of concerns, including the potential for CO2 leakage and induced seismicity, while pressure depletion can limit geothermal energy recovery. Water-use demands can also be a limiting factor for GCS and geothermal operations, particularly where water resources are already scarce. Economic optimization of geothermal-GCS involves trade-offs of various benefits and risks, along with their associated costs: (1) heat extraction per ton of delivered CO2, (2) permanent CO2 storage, (3) energy recovery per unit well (and working-fluid recirculation) costs, and (4) economic lifetime of a project. We analyze a hybrid, multi-stage approach using both formation brine and injected CO2 as working fluids to attempt to optimize the benefits of sustainable energy production and permanent CO2 storage, while conserving water resources and minimizing environmental risks. We consider a range of well-field patterns and operational schemes. Initially, the fluid production is entirely brine. After CO2 breakthrough, the fraction of CO2 in production, which is called the CO2 "cut", increases with time. Thus, brine is the predominant working fluid for early time, with the contribution of CO2 to heat extraction increasing with CO2 cut (and time). We find that smaller well spacing between CO2 injectors and producers favors earlier CO2 breakthrough and a more rapid rise in CO2 cut, which increases the contribution of recirculated CO2, thereby improving the heat extraction per ton of delivered CO2. 
On the other hand, larger well spacing increases permanent CO2 storage and energy production per unit well cost, while reducing the thermal drawdown rate, which extends the economic lifetime of a project. For the range of cases considered, we were never able to eliminate the co-production of brine; thus, brine management is likely to be important for reservoir operations, whether or not brine is considered as a candidate working fluid. Future work will address site-specific reservoir conditions and infrastructure factors, such as proximity to potential CO2 sources. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
P. K. Singh; S. K. Dasgupta
2005-01-01
It is an Old World genus, consisting of two cultivated and two wild species, besides a wild New World species. Significant heterosis has also been reported in this genus and is being commercially exploited. It is an insect-pollinated crop. Isolation is needed to prevent outcrossing by insects when hybrid seed is produced by open pollination. Hybrid seed is
NSDL National Science Digital Library
The first site takes you to the very informative essay at Motor Trend's site on its car of the year, the Toyota Prius (1). The next site is from the Union of Concerned Scientists. This great resource, called Clean Vehicles, offers all sorts of info about vehicles for the future (2). The Department of Energy's Hybrid Electric Vehicle Program page (3) offers lots of good information about the technology surrounding the cars as well as information on how you can get a tax break if you buy one. In fairness to both Honda (4) (Note: Honda also makes the Insight) and Toyota (5), these two sites take you to their webpages devoted to their two comparable hybrid cars, the Honda Civic Hybrid and the Toyota Prius. The last site takes you to a recent story on NPR about the future of hybrid technology and hybrid SUVs (6).
Meyer, C A
2015-01-01
A review of the theoretical and experimental status of hybrid hadrons is presented. The states $\\pi_1(1400)$, $\\pi_1(1600)$, and $\\pi_1(2015)$ are thoroughly reviewed, along with experimental results from GAMS, VES, Obelix, COMPASS, KEK, CLEO, Crystal Barrel, CLAS, and BNL. Theoretical lattice results on the gluelump spectrum, adiabatic potentials, heavy and light hybrids, and transition matrix elements are discussed. These are compared with bag, string, flux tube, and constituent gluon models. Strong and electromagnetic decay models are described and compared to lattice gauge theory results. We conclude that while good evidence for the existence of a light isovector exotic meson exists, its confirmation as a hybrid meson awaits discovery of its iso-partners. We also conclude that lattice gauge theory rules out a number of hybrid models and provides a reference to judge the success of others.
C. A. Meyer; E. S. Swanson
2015-03-04
A review of the theoretical and experimental status of hybrid hadrons is presented. The states $\\pi_1(1400)$, $\\pi_1(1600)$, and $\\pi_1(2015)$ are thoroughly reviewed, along with experimental results from GAMS, VES, Obelix, COMPASS, KEK, CLEO, Crystal Barrel, CLAS, and BNL. Theoretical lattice results on the gluelump spectrum, adiabatic potentials, heavy and light hybrids, and transition matrix elements are discussed. These are compared with bag, string, flux tube, and constituent gluon models. Strong and electromagnetic decay models are described and compared to lattice gauge theory results. We conclude that while good evidence for the existence of a light isovector exotic meson exists, its confirmation as a hybrid meson awaits discovery of its iso-partners. We also conclude that lattice gauge theory rules out a number of hybrid models and provides a reference to judge the success of others.
NASA Astrophysics Data System (ADS)
Meyer, C. A.; Swanson, E. S.
2015-05-01
A review of the theoretical and experimental status of hybrid hadrons is presented. The states π1(1400), π1(1600), and π1(2015) are thoroughly reviewed, along with experimental results from GAMS, VES, Obelix, COMPASS, KEK, CLEO, Crystal Barrel, CLAS, and BNL. Theoretical lattice results on the gluelump spectrum, adiabatic potentials, heavy and light hybrids, and transition matrix elements are discussed. These are compared with bag, string, flux tube, and constituent gluon models. Strong and electromagnetic decay models are described and compared to lattice gauge theory results. We conclude that while good evidence for the existence of a light isovector exotic meson exists, its confirmation as a hybrid meson awaits discovery of its iso-partners. We also conclude that lattice gauge theory rules out a number of hybrid models and provides a reference to judge the success of others.
NSDL National Science Digital Library
This animated YouTube video, created by Southwest Center for Microsystems Education (SCME), illustrates how DNA hybridization works in the context of nanofabrication. The animation and associated narration describe "DNA hybridization is when a single-stranded DNA (ssDNA) molecule bonds with a complementary ssDNA molecule from another source forming a "hybrid". This animation shows a double-stranded DNA (dsDNA) molecule dividing into two ssDNA strands. One ssDNA remains on the substrate as the "probe". A complementary ssDNA from another source (a ssDNA with a complementary base pair sequence) joins with the probe forming a 'hybrid' dsDNA molecule." A supporting learning module and activities can be downloaded from the SCME website.
Cooling/grounding mount for hybrid circuits
NASA Technical Reports Server (NTRS)
Bagstad, B.; Estrada, R.; Mandel, H.
1981-01-01
Extremely short input and output connections, adequate grounding, and efficient heat removal for hybrid integrated circuits are possible with this mounting. A rectangular clamp holds the hybrid on a printed-circuit board, in contact with a heat-conductive ground plate. The clamp is attached to the ground plane by bolts.
Marcus, Ryan C.
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Practical Markov Chain Monte Carlo
Charles J. Geyer
1992-01-01
Markov chain Monte Carlo using the Metropolis-Hastings algorithm is a general method for the simulation of stochastic processes having probability densities known up to a constant of proportionality. Despite recent advances in its theory, the practice has remained controversial. This article makes the case for basing all inference on one long run of the Markov chain and estimating the Monte
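The Metropolis-Hastings step at the heart of such simulations, for a density known only up to a constant of proportionality, can be sketched as follows. This is a minimal one-dimensional random-walk version; the Gaussian proposal, step size, and standard-normal target are illustrative choices, not Geyer's examples:

```python
import math
import random

def metropolis_hastings(log_density, x0, n_samples, step=1.0, seed=0):
    """Sample a 1-D density known only up to a constant of proportionality.

    log_density: log of the unnormalised target density.
    Uses a symmetric Gaussian random-walk proposal, so the Hastings
    correction cancels and the acceptance ratio is just a density ratio.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        x_prop = x + rng.gauss(0.0, step)
        # Accept with probability min(1, pi(x_prop) / pi(x)).
        if math.log(rng.random()) < log_density(x_prop) - log_density(x):
            x = x_prop
        samples.append(x)  # on rejection, the old state is repeated
    return samples

# Example: standard normal target; the normalising constant never appears.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=50_000)
mean = sum(chain) / len(chain)  # should be near the true mean, 0
```

Inference proceeds, as the abstract advocates, by computing averages over one long run of the chain, after allowing for its autocorrelation when assessing Monte Carlo error.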
Simulation: Monte Carlo methods
Verschelde, Jan
Lecture slides from Intro to Computer Science (MCS 260), lecture L-12, 9 February 2015, on running simulations: Monte Carlo methods and random numbers. Simulation consists in the repeated drawing of samples according to a probability distribution; we count
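The "repeated drawing of samples, then counting" idea can be illustrated with the classic Monte Carlo estimate of pi; this is a standard textbook example, not taken from the slides themselves:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by drawing uniform points in the unit square and
    counting the fraction that land inside the quarter disc."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # inside the quarter circle of radius 1
            hits += 1
    # Area of quarter disc is pi/4, so the hit fraction estimates pi/4.
    return 4.0 * hits / n_samples

print(estimate_pi(100_000))  # close to 3.14; error shrinks like 1/sqrt(n)
```

The statistical error decreases only as the square root of the sample count, which is the defining trade-off of Monte Carlo methods regardless of dimension.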
Bayesian Monte Carlo for the Global Optimization of Expensive Functions
Groot, Perry
The objective function depends on controllable variables xc and on variables xe representing environmental variables. Typically, xc needs to be optimised, whereas xe are uncontrollable. The approach uses Bayesian Monte Carlo to obtain the objective function by integrating out the environmental variables. It is compared with the deterministic case (i.e., no environmental variables), and the ALC criterion appears significantly
Bayesian internal dosimetry calculations using Markov Chain Monte Carlo.
Miller, G; Martz, H F; Little, T T; Guilmette, R
2002-01-01
A new numerical method for solving the inverse problem of internal dosimetry is described. The new method uses Markov Chain Monte Carlo and the Metropolis algorithm. Multiple intake amounts, biokinetic types, and times of intake are determined from bioassay data by integrating over the Bayesian posterior distribution. The method appears definitive, but its application requires a large amount of computing time. PMID:11926369
A Monte Carlo method for calculating orbits of comets
J. Q. Zheng; M. J. Valtonen; S. Mikkola; J. J. Matese; P. G. Whitman; H. Rickman
1994-01-01
The present work is divided into two stages: 1. By using large numbers (several millions) of accurate orbit integrations with the K-S regularization, probability distributions for changes in the orbital elements of comets during encounters with planets are evaluated. 2. These distributions are used in a Monte Carlo simulation scheme which follows the evolution of orbits under repeated close encounters.
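The two-stage scheme above, tabulating distributions of orbital-element changes and then sampling them through repeated encounters, can be sketched as follows; the kick table, units, and ejection criterion are invented for illustration, not taken from the paper:

```python
import bisect
import random

def make_sampler(values, probs):
    """Inverse-CDF sampler for a tabulated probability distribution,
    e.g. precomputed changes in a comet's orbital energy per planetary
    encounter (stage 1 of the scheme; this table is made up)."""
    cdf, acc = [], 0.0
    for p in probs:
        acc += p
        cdf.append(acc)
    def draw(rng):
        return values[bisect.bisect_left(cdf, rng.random() * acc)]
    return draw

# Hypothetical table of per-encounter kicks to z = 1/a and their weights.
draw_kick = make_sampler([-2e-4, -5e-5, 0.0, 5e-5, 2e-4],
                         [0.1, 0.25, 0.3, 0.25, 0.1])

rng = random.Random(3)
frac_zero = sum(draw_kick(rng) == 0.0 for _ in range(100_000)) / 100_000

# Stage 2: follow one comet through repeated encounters until "ejection"
# (z <= 0 means the orbit is no longer bound in this toy convention).
z, n_enc = 1e-3, 0
while z > 0 and n_enc < 1_000_000:
    z += draw_kick(rng)
    n_enc += 1
```

The point of the scheme is that the expensive regularized integrations are done once to build the table, after which millions of encounter histories can be sampled cheaply.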
Ceperley, D; Alder, B
1986-02-01
An outline of a random walk computational method for solving the Schrödinger equation for many interacting particles is given, together with a survey of results achieved so far and of applications that remain to be explored. Monte Carlo simulations can be used to calculate accurately the bulk properties of the light elements hydrogen, helium, and lithium as well as the properties of the isolated atoms and of molecules made up from these elements. It is now possible to make reliable predictions of the behavior of these substances under experimentally difficult conditions, such as high pressure, and of properties that are difficult to measure experimentally, such as the momentum distribution in superfluid helium. For chemical systems, the stochastic method has a number of advantages over the widely used variational approach to determine ground-state properties, namely fast convergence to the exact result within objectively established error bounds. PMID:17750966
Silicon-PDMS optofluidic integration
NASA Astrophysics Data System (ADS)
Testa, Genni; Persichetti, Gianluca; Sarro, Pasqualina M.; Bernini, Romeo
2015-02-01
In this work we show that an integrated hybrid silicon-PDMS antiresonant reflecting optical waveguide (H-ARROW) can be applied to the realization of optofluidic devices. The H-ARROW consists of the optofluidic channel of a conventional ARROW sealed with a thin PDMS layer. This layout simplifies the integration of the microfluidic parts that manipulate liquid samples, which can easily be fabricated in the PDMS layer. Hybrid ARROWs have been fabricated and used to design complex devices such as an integrated hybrid liquid-core optofluidic ring resonator (h-LCORR) and a hybrid optofluidic platform for fluorescence measurements.
Murphy, D
1993-01-01
This chapter describes a standard method for the hybridization of labeled DNA probes to nucleic acids bound to a nylon matrix. Filters bearing bound nucleic acids produced by Northern blotting of RNA (Chapter 39), Southern blotting of DNA (Chapter 37), and slot blotting of DNA (Chapters 35) or RNA (Chapter 40) are hybridized to labeled probes using the method described below. The advantages of this method are, first, that the use of a high concentration of SDS in the hybridization buffer ensures a low background level of nonspecific probe adherence to the membrane and, second, an extended period of filter prehybridization is not required. The inclusion of a large amount of SDS does, however, necessitate that the nucleic acids are covalently bonded to the matrix by UV light crosslinking. The inclusion of formamide (15% [v/v]) is also recommended in order to reduce the viscosity of the hybridization buffer. Formamide also has the effect of reducing the temperature of the hybridization reaction. PMID:21390694
Monte Carlo studies of field theory and quantum gravity
NASA Astrophysics Data System (ADS)
Gregory, Eric Brittain
In this dissertation I describe three main research projects in which I have participated as a graduate student. They share the common theme of using Monte Carlo computer simulation to investigate quantum field theories. I begin by giving a brief review of Monte Carlo simulation as a discrete path integral approach to a quantum theory. Two of the projects involve tests of the Monte Carlo renormalization group method, a systematic way of integrating out short distance features of a physical system in order to gain insight about its critical behavior, and hence its continuum limit. After a review of the ideas of the renormalization group, I discuss our thorough investigation of Monte Carlo renormalization of φ4 field theory on a two-dimensional square lattice. The second renormalization project overlaps with the other main thrust of my research, studying quantum gravity as the continuum limit of a sum over all possible ways of piecing together discrete simplices, or simplicial quantum gravity. I describe a unique Monte Carlo renormalization group study of scalar fields coupled to two-dimensional quantum gravity, where we were able to extract the anomalous field dimension for a case inaccessible to analytic methods. Finally I discuss a study of four-dimensional quantum gravity coupled to gauge fields and special concerns one must be aware of when measuring connected correlators in fluctuating geometry.
Semiconductor/High-Tc-Superconductor Hybrid ICs
NASA Technical Reports Server (NTRS)
Burns, Michael J.
1995-01-01
Hybrid integrated circuits (ICs) containing both Si-based semiconducting and YBa(2)Cu(3)O(7-x) superconducting circuit elements on sapphire substrates were developed; the design helps to prevent diffusion of Cu from the superconductors into the semiconductors. These hybrid ICs combine superconducting and semiconducting features unavailable in superconducting or semiconducting circuitry alone. For example, complementary metal-oxide/semiconductor (CMOS) readout and memory devices are integrated with fast-switching Josephson-junction superconducting logic devices and zero-resistance interconnections.
NASA Astrophysics Data System (ADS)
Kim, Daeil; Yun, Junyeong; Lee, Geumbee; Ha, Jeong Sook
2014-09-01
We report on the on-chip fabrication of high performance flexible micro-supercapacitor (MSC) arrays with hybrid electrodes of multi-walled carbon nanotube (MWNT)/V2O5 nanowire (NW) composites and a solid electrolyte, which could power the SnO2 NW UV sensor integrated on the same flexible substrate. The patterned MSC using hybrid electrodes of MWNT/V2O5 NW composites with 10 vol% of V2O5 NWs exhibited excellent electrochemical performance with a high volume capacitance of 80 F cm-3 at a scan rate of 10 mV s-1 in a PVA-LiCl electrolyte and good cycle performance to maintain 82% of the capacitance after 10 000 cycles at a current density of 11.6 A cm-3. The patterned MSC also showed an excellent energy density of 6.8 mW h cm-3, comparable to that of a Li-thin film battery (1-10 mW h cm-3), and a power density of 80.8 W cm-3 comparable to that of state-of-the-art MSCs. In addition, the flexible MSC array on a PET substrate showed mechanical stability over bending with a bending radius down to 1.5 mm under both compressive and tensile stress. Even after 1000 bending cycles at a bending radius of 7 mm, 94% of the initial capacitance was maintained. Furthermore, we have shown the operation of a SnO2 NW UV sensor using such a fabricated MSC array integrated into the same circuit on the PET substrate. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr04138k
Angular biasing in implicit Monte-Carlo
Zimmerman, G.B.
1994-10-20
Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two-dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller weight photons at the polar regions of the capsule where small mass zones are most sensitive to statistical noise.
Hybrid Silicon-Molecular Electronics
Qiliang Li
2008-01-01
As CMOS technology extends beyond the current technology node, the conventional MOSFET faces many challenges. Non-classical CMOS devices to extend, and fundamentally new technologies to replace, current CMOS technology are under intensive investigation to meet these challenges. The approach of hybrid silicon/molecular electronics is to provide a smooth transition technology by integrating molecular intrinsic scalability and diverse properties with the
Quantum Monte Carlo study of the protonated water dimer
Dagrada, Mario; Saitta, Antonino M; Sorella, Sandro; Mauri, Francesco
2013-01-01
We report an extensive theoretical study of the protonated water dimer (Zundel ion) by means of the highly correlated variational Monte Carlo and lattice regularized Monte Carlo approaches. This system represents the simplest model for proton transfer (PT) and a correct description of its properties is essential in order to understand the PT mechanism in more complex aqueous systems. Our Jastrow correlated AGP wave function ensures an accurate treatment of electron correlations. Exploiting the advantages of contracting the primitive basis set over atomic hybrid orbitals, we are able to limit dramatically the number of variational parameters with a systematic control on the numerical precision, crucial in order to simulate larger systems. We investigate energetics and geometrical properties of the Zundel ion as a function of the oxygen-oxygen distance, taken as reaction coordinate. In both cases, our QMC results are found in excellent agreement with the coupled cluster CCSD(T) technique, the quantum chemistry "go...
Kinetic Monte Carlo Studies of Hydrogen Abstraction from Graphite
H. M. Cuppen; L. Hornekaer
2008-07-01
We present Monte Carlo simulations on Eley-Rideal abstraction reactions of atomic hydrogen chemisorbed on graphite. The results are obtained via a hybrid approach where energy barriers derived from density functional theory calculations are used as input to Monte Carlo simulations. By comparing with experimental data, we discriminate between contributions from different Eley-Rideal mechanisms. A combination of two different mechanisms yields good quantitative and qualitative agreement between the experimentally derived and the simulated Eley-Rideal abstraction cross sections and surface configurations. These two mechanisms include a direct Eley-Rideal reaction with fast diffusing H atoms and a dimer mediated Eley-Rideal mechanism with increased cross section at low coverage. Such a dimer mediated Eley-Rideal mechanism has not previously been proposed and serves as an alternative explanation to the steering behavior often given as the cause of the coverage dependence observed in Eley-Rideal reaction cross sections.
Reliability Analysis of Electric Power Systems Using an Object-oriented Hybrid Modeling Approach
Schläpfer, Markus; Kröger, Wolfgang
2012-01-01
The ongoing evolution of the electric power systems brings about the need to cope with increasingly complex interactions of technical components and relevant actors. In order to integrate a more comprehensive spectrum of different aspects into a probabilistic reliability assessment and to include time-dependent effects, this paper proposes an object-oriented hybrid approach combining agent-based modeling techniques with classical methods such as Monte Carlo simulation. Objects represent both technical components such as generators and transmission lines and non-technical components such as grid operators. The approach allows the calculation of conventional reliability indices and the estimation of blackout frequencies. Furthermore, the influence of the time needed to remove line overloads on the overall system reliability can be assessed. The applicability of the approach is demonstrated by performing simulations on the IEEE Reliability Test System 1996 and on a model of the Swiss high-voltage grid.
Ion-specific thermodynamics of multicomponent electrolytes: A hybrid HNC/MD approach
NASA Astrophysics Data System (ADS)
Vrbka, Luboš; Lund, Mikael; Kalcher, Immanuel; Dzubiella, Joachim; Netz, Roland R.; Kunz, Werner
2009-10-01
Using effective infinite dilution ion-ion interaction potentials derived from explicit-water molecular dynamics (MD) computer simulations in the hypernetted-chain (HNC) integral equation theory we calculate the liquid structure and thermodynamic properties, namely, the activity and osmotic coefficients of various multicomponent aqueous electrolyte mixtures. The electrolyte structure expressed by the ion-ion radial distribution functions is for most ions in excellent agreement with MD and implicit solvent Monte Carlo (MC) simulation results. Calculated thermodynamic properties are also represented consistently among these three methods. Our versatile HNC/MD hybrid method allows for a quick prediction of the thermodynamics of multicomponent electrolyte solutions for a wide range of concentrations and an efficient assessment of the validity of the employed MD force-fields with possible implications in the development of thermodynamically consistent parameter sets.
Markov Chain Monte Carlo Usher's Algorithm
Bremen, Universität
Lecture slides: Markov Chain Monte Carlo for Parameter Optimization. Holger Schultheis, 18.11.2013, 27 slides. Topics: 1. Concepts; 2. Markov Chain Monte Carlo (basics, example, Metropolis and simulated annealing); 3. Usher's algorithm.
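The Metropolis step covered in these slides fits in a few lines. A minimal sketch, assuming an illustrative target (a standard normal), proposal width, and chain length; none of these choices are taken from the lecture:

```python
import math
import random

def metropolis(log_p, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + U(-step, step) and
    accept with probability min(1, p(x')/p(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        d = log_p(x_new) - log_p(x)
        if d >= 0 or rng.random() < math.exp(d):
            x = x_new  # accept; otherwise the chain stays at x
        samples.append(x)
    return samples

# Target: standard normal, log p(x) = -x^2/2 up to an additive constant.
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 50_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Simulated annealing reuses exactly this kernel with log_p divided by a temperature that is gradually lowered.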
Markov Chain Monte Carlo Usher's Algorithm
Bremen, Universität
Lecture slides: Markov Chain Monte Carlo for Parameter Optimization. Holger Schultheis, 19.11.2012, 27 slides. Topics: 1. Concepts; 2. Markov Chain Monte Carlo (basics, example, Metropolis and simulated annealing); 3. Usher's algorithm.
Optimum and efficient sampling for variational quantum Monte Carlo
Trail, John Robert; 10.1063/1.3488651
2010-01-01
Quantum mechanics for many-body systems may be reduced to the evaluation of integrals in 3N dimensions using Monte Carlo, providing the quantum Monte Carlo ab initio methods. Here we limit ourselves to expectation values for trial wavefunctions, that is, to variational quantum Monte Carlo. Almost all previous implementations employ samples distributed as the physical probability density of the trial wavefunction, and assume the Central Limit Theorem to be valid. In this paper we provide an analysis of random error in estimation and optimisation that leads naturally to new sampling strategies with improved computational and statistical properties. A rigorous lower limit to the random error is derived, and an efficient sampling strategy presented that significantly increases computational efficiency. In addition the infinite-variance, heavy-tailed random errors of optimum parameters in conventional methods are replaced with a Normal random error, strengthening the theoretical basis of optimisation. The method is ...
Vectorizing and macrotasking Monte Carlo neutral particle algorithms
Heifetz, D.B.
1987-04-01
Monte Carlo algorithms for computing neutral particle transport in plasmas have been vectorized and macrotasked. The techniques used are directly applicable to Monte Carlo calculations of neutron and photon transport, and to Monte Carlo integration schemes in general. A highly vectorized code was achieved by calculating test flight trajectories in loops over arrays of flight data, isolating the conditional branches to as few loops as possible. A number of solutions are discussed to the problem of gaps appearing in the arrays due to completed flights, which impede vectorization. A simple and effective implementation of macrotasking is achieved by dividing the calculation of the test flight profile among several processors. A tree of random numbers is used to ensure reproducible results. The additional memory required for each task may preclude using a larger number of tasks. In future machines, taking macrotasking to its limit may be possible, with each test flight, and split test flight, being a separate task.
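The vectorized style described above — whole arrays of trajectories processed at once instead of per-sample loops — can be illustrated with a small NumPy Monte Carlo quadrature. This is a hypothetical toy, not the plasma-transport code itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_integrate(f, n):
    """Vectorized Monte Carlo estimate of the integral of f over [0, 1]:
    all n sample points are drawn and evaluated as whole arrays,
    with no per-sample Python loop."""
    x = rng.random(n)                    # n sample points at once
    fx = f(x)                            # f evaluated on the whole array
    est = fx.mean()                      # Monte Carlo estimate
    err = fx.std(ddof=1) / np.sqrt(n)    # one-sigma statistical error
    return est, err

# Exact answer: integral of exp(x) on [0, 1] is e - 1 = 1.71828...
est, err = mc_integrate(np.exp, 100_000)
```

The conditional-branch isolation discussed in the abstract corresponds to replacing per-flight `if` statements with boolean mask operations on such arrays.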
Monte Carlo Methods for Tempo Tracking and Rhythm Quantization
Cemgil, A T; 10.1613/jair.1121
2011-01-01
We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization) as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov Chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulation results suggest better results with sequential methods. The methods can be applied in both online and batch scenarios such as tempo tracking and transcr...
The Monte Carlo method in quantum field theory
Colin Morningstar
2007-02-20
This series of six lectures is an introduction to using the Monte Carlo method to carry out nonperturbative studies in quantum field theories. Path integrals in quantum field theory are reviewed, and their evaluation by the Monte Carlo method with Markov-chain based importance sampling is presented. Properties of Markov chains are discussed in detail and several proofs are presented, culminating in the fundamental limit theorem for irreducible Markov chains. The example of a real scalar field theory is used to illustrate the Metropolis-Hastings method and to demonstrate the effectiveness of an action-preserving (microcanonical) local updating algorithm in reducing autocorrelations. The goal of these lectures is to provide the beginner with the basic skills needed to start carrying out Monte Carlo studies in quantum field theories, as well as to present the underlying theoretical foundations of the method.
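The autocorrelations these lectures discuss are usually quantified by the integrated autocorrelation time. A sketch of a windowed estimator, tested on a synthetic AR(1) chain as an illustrative stand-in for a field-theory Markov chain (the window constant and chain are assumptions, not from the lectures):

```python
import random

def integrated_autocorr_time(xs, c=5):
    """Estimate tau = 1 + 2 * sum_k rho(k), truncating the sum with a
    self-consistent window (stop once the lag exceeds c * tau)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    tau = 1.0
    for k in range(1, n // 2):
        # normalized autocorrelation at lag k
        rho = sum((xs[i] - mean) * (xs[i + k] - mean)
                  for i in range(n - k)) / ((n - k) * var)
        tau += 2.0 * rho
        if k >= c * tau:   # automatic truncation window
            break
    return tau

# AR(1) chain x_t = a*x_{t-1} + noise has tau = (1 + a)/(1 - a) = 3 for a = 0.5.
rng = random.Random(1)
a, x, xs = 0.5, 0.0, []
for _ in range(20_000):
    x = a * x + rng.gauss(0.0, 1.0)
    xs.append(x)
tau = integrated_autocorr_time(xs)
```

An updating algorithm that reduces autocorrelations, as in the lectures, shows up directly as a smaller tau and hence a smaller statistical error for a fixed number of sweeps.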
Zhang, Zhenbin; Sun, Liangliang; Zhu, Guijie; Yan, Xiaojing; Dovichi, Norman J
2015-06-01
A sulfonate-silica hybrid strong cation-exchange (SCX) monolith was synthesized at the proximal end of a capillary zone electrophoresis column and used for on-line solid-phase extraction (SPE) sample preconcentration. Sample was prepared in an acidic buffer and deposited onto the SCX-SPE monolith and eluted using a basic buffer. Electrophoresis was performed in an acidic buffer. This combination of buffers results in formation of a dynamic pH junction, which allows use of relatively large elution buffer volume while maintaining peak efficiency and resolution. All experiments were performed with a 50 µm ID capillary, a 1 cm long SCX-SPE monolith, a 60 cm long separation capillary, and an electrokinetically pumped nanospray interface. The volume of the capillary is 1.1 µL. By loading 21 µL of a 1×10(-7) M angiotensin II solution, an enrichment factor of 3000 compared to standard electrokinetic injection was achieved on this platform while retaining efficient electrophoretic performance (N=44,000 plates). The loading capacity of the sulfonate SCX hybrid monolith was determined to be ~15 pmol by frontal analysis with 10(-5) M angiotensin II. The system was also applied to the analysis of a 10(-4) mg/mL bovine serum albumin tryptic digest; the protein coverage was 12% and 11 peptides were identified. Finally, by loading 5.5 µL of a 10(-3) mg/mL E. coli digest, 109 proteins and 271 peptides were identified in a 20 min separation; the median separation efficiency generated by these peptides was 25,000 theoretical plates. PMID:25863379
Rare-Event Verification for Stochastic Hybrid Systems Paolo Zuliani
Clarke, Edmund M.
small -- rare events. It is well known that sampling-based (Monte Carlo) techniques, such as statistical model checking, do not perform well for estimating rare-event probabilities. The problem
Monte Carlo solution of a semi-discrete transport equation
Urbatsch, T.J.; Morel, J.E.; Gulick, J.C.
1999-09-01
The authors present the S∞ method, a hybrid neutron transport method in which Monte Carlo particles traverse discrete space. The goal of any deterministic/stochastic hybrid method is to couple selected characteristics from each of the methods in hopes of producing a better method. The S∞ method has the features of the lumped, linear-discontinuous (LLD) spatial discretization, yet it has no ray effects because of the continuous angular variable. They derive the S∞ method for the steady-state, monoenergetic transport equation in one-dimensional slab geometry with isotropic scattering and an isotropic internal source. They demonstrate the viability of the S∞ method by comparing their results favorably to analytic and deterministic results.
Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv
2014-01-01
JNI in the Android platform is often observed with low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse the CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental result shows that the HPO model outperforms the standard JNI, with lower overhead on the native side and better execution performance, since no JNI bridging code is demanded. PMID:25110745
Badapanda, Chandan
2013-01-01
The suppression subtractive hybridization (SSH) approach, a PCR-based approach which amplifies differentially expressed cDNAs (complementary DNAs) while simultaneously suppressing amplification of common cDNAs, was employed to identify immune-inducible genes in insects. This technique has been used as a suitable tool for experimental identification of novel genes in eukaryotes as well as prokaryotes, whether their genomes have been sequenced or have yet to be sequenced. In this article, I have proposed a method for in silico functional characterization of immune-inducible genes from insects. Apart from immune-inducible genes from insects, this method can be applied to the analysis of genes from other species, from bacteria to plants and animals. This article is provided with a background of the SSH-based method, taking specific examples from innate immune-inducible genes in insects, and subsequently a bioinformatics pipeline is proposed for functional characterization of newly sequenced genes. The proposed workflow presented here can also be applied to any newly sequenced species generated from Next Generation Sequencing (NGS) platforms. PMID:23519487
Hybridization process for back-illuminated silicon Geiger-mode avalanche photodiode arrays
Schuette, Daniel R.
We present a unique hybridization process that permits high-performance back-illuminated silicon Geiger-mode avalanche photodiodes (GM-APDs) to be bonded to custom CMOS readout integrated circuits (ROICs) - a hybridization ...
Jung Woon Lim; Sung Hwan Hwang; Seon Hoon Kim; Boo-Gyoun Kim; Byung Sup Rho
2007-01-01
We have developed a low cost 1.25 Gbps WDM bidirectional module using two integrated optical subassemblies which are composed of a planar lightwave circuit (PLC) platform and a silicon optical bench (SiOB) platform. The low cost module is achieved by employing a flip-chip bonding method with passive alignment using a Fabry-Perot laser diode (FP-LD) with a monitoring waveguide-photodiode (PD) on
Explicit Lie group integrators
Forest, E. (Exploratory Studies Group, Lawrence Berkeley Laboratory, University of California, Berkeley, California); Murphy, J. (National Synchrotron Light Source, Brookhaven National Laboratory, Upton, New York); Reusch, M.F. (Grumman Space Systems, Princeton, New Jersey)
1991-06-05
In this paper, we review the theory of explicit symplectic integration. We present a non-trivial application going beyond Ruth's original paper. Finally, we contrast the explicit and the implicit methods by deriving a hybrid integrator for a special one-dimensional Hamiltonian.
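The simplest explicit symplectic integrator is the leapfrog (velocity-Verlet) map for a separable Hamiltonian. A minimal sketch applied to the one-dimensional harmonic oscillator — an illustrative toy with an assumed step size, not the hybrid integrator of the paper:

```python
import math

def leapfrog(q, p, force, dt, n_steps):
    """Leapfrog/velocity-Verlet for H = p^2/2 + V(q):
    half kick, full drift, half kick per step; symplectic and
    time-reversible, so energy errors stay bounded."""
    for _ in range(n_steps):
        p += 0.5 * dt * force(q)   # half kick
        q += dt * p                # drift
        p += 0.5 * dt * force(q)   # half kick
    return q, p

# Harmonic oscillator V(q) = q^2/2, force = -q; exact solution q(t) = cos t.
q, p = leapfrog(1.0, 0.0, lambda q: -q, dt=0.01, n_steps=10_000)  # t = 100
energy = 0.5 * p * p + 0.5 * q * q   # exact value 0.5 at t = 0
exact_q = math.cos(100.0)
```

Even after 10,000 steps the energy stays within O(dt^2) of its initial value, which is the property that makes such schemes attractive for long Hamiltonian evolutions.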
Genomic Networks of Hybrid Sterility
Turner, Leslie M.; White, Michael A.; Tautz, Diethard; Payseur, Bret A.
2014-01-01
Hybrid dysfunction, a common feature of reproductive barriers between species, is often caused by negative epistasis between loci (“Dobzhansky-Muller incompatibilities”). The nature and complexity of hybrid incompatibilities remain poorly understood because identifying interacting loci that affect complex phenotypes is difficult. With subspecies in the early stages of speciation, an array of genetic tools, and detailed knowledge of reproductive biology, house mice (Mus musculus) provide a model system for dissecting hybrid incompatibilities. Male hybrids between M. musculus subspecies often show reduced fertility. Previous studies identified loci and several X chromosome-autosome interactions that contribute to sterility. To characterize the genetic basis of hybrid sterility in detail, we used a systems genetics approach, integrating mapping of gene expression traits with sterility phenotypes and QTL. We measured genome-wide testis expression in 305 male F2s from a cross between wild-derived inbred strains of M. musculus musculus and M. m. domesticus. We identified several thousand cis- and trans-acting QTL contributing to expression variation (eQTL). Many trans eQTL cluster into eleven ‘hotspots,’ seven of which co-localize with QTL for sterility phenotypes identified in the cross. The number and clustering of trans eQTL—but not cis eQTL—were substantially lower when mapping was restricted to a ‘fertile’ subset of mice, providing evidence that trans eQTL hotspots are related to sterility. Functional annotation of transcripts with eQTL provides insights into the biological processes disrupted by sterility loci and guides prioritization of candidate genes. Using a conditional mapping approach, we identified eQTL dependent on interactions between loci, revealing a complex system of epistasis. Our results illuminate established patterns, including the role of the X chromosome in hybrid sterility. 
The integrated mapping approach we employed is applicable in a broad range of organisms and we advocate for widespread adoption of a network-centered approach in speciation genetics. PMID:24586194
Proton Upset Monte Carlo Simulation
NASA Technical Reports Server (NTRS)
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Improved Evolutionary Hybrids for Flexible Ligand Docking in Autodock
Belew, R.K.; Hart, W.E.; Morris, G.M.; Rosin, C.
1999-01-27
In this paper we evaluate the design of the hybrid evolutionary algorithms (EAs) that are currently used to perform flexible ligand binding in the Autodock docking software. Hybrid EAs incorporate specialized operators that exploit domain-specific features to accelerate an EA's search. We consider hybrid EAs that use an integrated local search operator to refine individuals within each iteration of the search. We evaluate several factors that impact the efficacy of a hybrid EA, and we propose new hybrid EAs that provide more robust convergence to low-energy docking configurations than the methods currently available in Autodock.
Semi-stochastic full configuration interaction quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Holmes, Adam; Petruzielo, Frank; Khadilkar, Mihir; Changlani, Hitesh; Nightingale, M. P.; Umrigar, C. J.
2012-02-01
In the recently proposed full configuration interaction quantum Monte Carlo (FCIQMC) [1,2], the ground state is projected out stochastically, using a population of walkers each of which represents a basis state in the Hilbert space spanned by Slater determinants. The infamous fermion sign problem manifests itself in the fact that walkers of either sign can be spawned on a given determinant. We propose an improvement on this method in the form of a hybrid stochastic/deterministic technique, which we expect will improve the efficiency of the algorithm by ameliorating the sign problem. We test the method on atoms and molecules, e.g., carbon, carbon dimer, N2 molecule, and stretched N2. [4pt] [1] Fermion Monte Carlo without fixed nodes: a Game of Life, death and annihilation in Slater Determinant space. George Booth, Alex Thom, Ali Alavi. J Chem Phys 131, 050106, (2009).[0pt] [2] Survival of the fittest: Accelerating convergence in full configuration-interaction quantum Monte Carlo. Deidre Cleland, George Booth, and Ali Alavi. J Chem Phys 132, 041103 (2010).
Comparative Monte Carlo efficiency by Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Rubenstein, B. M.; Gubernatis, J. E.; Doll, J. D.
2010-09-01
We propose a modified power method for computing the subdominant eigenvalue λ2 of a matrix or continuous operator. While useful both deterministically and stochastically, we focus on defining simple Monte Carlo methods for its application. The methods presented use random walkers of mixed signs to represent the subdominant eigenfunction. Accordingly, the methods must cancel these signs properly in order to sample this eigenfunction faithfully. We present a simple procedure to solve this sign problem and then test our Monte Carlo methods by computing λ2 of various Markov chain transition matrices. As |λ2| of this matrix controls the rate at which Monte Carlo sampling relaxes to a stationary condition, its computation also enabled us to compare efficiencies of several Monte Carlo algorithms as applied to two quite different types of problems. We first computed λ2 for several one- and two-dimensional Ising models, which have a discrete phase space, and compared the relative efficiencies of the Metropolis and heat-bath algorithms as functions of temperature and applied magnetic field. Next, we computed λ2 for a model of an interacting gas trapped by a harmonic potential, which has a multidimensional continuous phase space, and studied the efficiency of the Metropolis algorithm as a function of temperature and the maximum allowable step size Δ. Based on the λ2 criterion, we found for the Ising models that small lattices appear to give an adequate picture of comparative efficiency and that the heat-bath algorithm is more efficient than the Metropolis algorithm only at low temperatures, where both algorithms are inefficient. For the harmonic trap problem, we found that the traditional rule of thumb of adjusting Δ so that the Metropolis acceptance rate is around 50% is often suboptimal. In general, as a function of temperature or Δ, λ2 for this model displayed trends defining optimal efficiency that the acceptance ratio does not. The cases studied also suggested that Monte Carlo simulations for a continuum model are likely more efficient than those for a discretized version of the model.
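A deterministic analogue of the power-method idea is easy to sketch: project out the dominant eigenmode of a row-stochastic matrix and iterate. The two-state chain below is a made-up example with known eigenvalues 1 and 0.7, not one of the models studied in the paper:

```python
def second_eigenvalue(P, pi, n_iter=200):
    """Power iteration for |lambda_2| of a row-stochastic matrix P.
    The dominant mode (eigenvalue 1, right eigenvector of ones) is
    removed by keeping the iterate orthogonal to the stationary
    distribution pi, which is the matching left eigenvector."""
    n = len(P)
    v = [1.0] + [0.0] * (n - 1)
    for _ in range(n_iter):
        # project out the dominant component: enforce sum_i pi_i v_i = 0
        c = sum(pi[i] * v[i] for i in range(n))
        v = [v[i] - c for i in range(n)]
        w = [sum(P[i][j] * v[j] for j in range(n)) for i in range(n)]
        # growth ratio of the deflated iterate estimates |lambda_2|
        lam = max(abs(x) for x in w) / max(abs(x) for x in v)
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    return lam

# Two-state chain with eigenvalues 1 and 0.7; stationary pi = (2/3, 1/3).
P = [[0.9, 0.1], [0.2, 0.8]]
lam2 = second_eigenvalue(P, [2 / 3, 1 / 3])
```

The paper's contribution is the stochastic version of this deflation, where random walkers of mixed signs play the role of the signed iterate v.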
Monte Carlo Methods and Applications for the Nuclear Shell Model
Dean, D.J.; White, J.A.
1998-08-10
The shell-model Monte Carlo (SMMC) technique transforms the traditional nuclear shell-model problem into a path-integral over auxiliary fields. We describe below the method and its applications to four physics issues: calculations of sd-pf-shell nuclei, a discussion of electron-capture rates in pf-shell nuclei, exploration of pairing correlations in unstable nuclei, and level densities in rare earth systems.
Tests for Unit Roots: A Monte Carlo Investigation
G. William Schwert
1989-01-01
Recent work by Said and Dickey (1984, 1985), Phillips (1987), and Phillips and Perron (1988) examines tests for unit roots in the autoregressive part of mixed autoregressive integrated moving average models (tests for stationarity). Monte Carlo experiments show that these unit-root tests have different finite-sample distributions from the unit-root tests developed by Fuller (1976) and Dickey and Fuller (1979, 1981)
Quasi-Monte Carlo estimation in generalized linear mixed models
Jianxin Pan; Robin Thompson
Generalized linear mixed models (GLMMs) are useful for modelling longitudinal and clustered data, but parameter estimation is very challenging because the likelihood may involve high-dimensional integrals that are analytically intractable. Gauss-Hermite quadrature (GHQ) approximation can be applied but is only suitable for low-dimensional random effects. Based on the Quasi-Monte Carlo (QMC) approximation, a heuristic approach is proposed to calculate the
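The Quasi-Monte Carlo idea referred to above — replacing pseudo-random points with a low-discrepancy sequence — can be sketched with a Halton sequence. The integrand below is an illustrative toy with a known answer, not a GLMM likelihood:

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given base."""
    f, result = 1.0, 0.0
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def halton_2d(n):
    """First n points of the 2-D Halton low-discrepancy sequence,
    built from radical inverses in the coprime bases 2 and 3."""
    return [(radical_inverse(i, 2), radical_inverse(i, 3))
            for i in range(1, n + 1)]

# Quasi-Monte Carlo estimate of the integral of x*y over [0,1]^2 (= 1/4).
pts = halton_2d(4096)
qmc_est = sum(x * y for x, y in pts) / len(pts)
```

For smooth low-dimensional integrands the QMC error decays roughly like (log N)^d / N, much faster than the N^(-1/2) of plain Monte Carlo, which is the motivation for the heuristic approach in the abstract.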
NASA Astrophysics Data System (ADS)
Show, Bijay Kumar; Mondal, Dipak Kumar; Maity, Joydeep
2014-12-01
In this research work, the dry sliding wear behavior of 6351 Al-(4 vol.% SiC + 4 vol.% Al2O3) hybrid composite was investigated at low sliding speed (1 m/s) against a hardened EN 31 disk at different loads. In general, the wear mechanism involved adhesion (along with associated subsurface cracking and delamination) and microcutting abrasion at lower load. At higher load, abrasive wear involving microcutting and microploughing along with adherent oxide formation was observed. The overall wear rate increased with increasing normal load. The massive particle clusters as well as individual reinforcement particles were found to stand tall and resist abrasive wear. Besides, at higher load, the generation of adherent nodular tribo-oxide through nucleation and epitaxial growth on existing Al2O3 particles lowered the wear rate. Accordingly, at any normal load, 6351 Al-(4 vol.% SiC + 4 vol.% Al2O3) hybrid composite exhibited superior wear resistance (lower overall wear rate) than the reported wear resistance of monolithic 6351 Al alloy.
Bayesian Phylogenetic Inference Using DNA Sequences: A Markov Chain Monte Carlo Method
Ziheng Yang; Bruce Rannala
An improved Bayesian method is presented for estimating phylogenetic trees using DNA sequence data. The birth-death process with species sampling is used to specify the prior distribution of phylogenies and ancestral speciation times, and the posterior probabilities of phylogenies are used to estimate the maximum posterior probability (MAP) tree. Monte Carlo integration is used to integrate over the ancestral
Mission Analysis, Operations, and Navigation Toolkit Environment (Monte) Version 040
NASA Technical Reports Server (NTRS)
Sunseri, Richard F.; Wu, Hsi-Cheng; Evans, Scott E.; Evans, James R.; Drain, Theodore R.; Guevara, Michelle M.
2012-01-01
Monte is a software set designed for use in mission design and spacecraft navigation operations. The system can process measurement data, design optimal trajectories and maneuvers, and do orbit determination, all in one application. For the first time, a single software set can be used for mission design and navigation operations. This eliminates problems due to different models and fidelities used in legacy mission design and navigation software. The unique features of Monte 040 include a blowdown thruster model for GRAIL (Gravity Recovery and Interior Laboratory) with associated pressure models, as well as an updated, optimal-search capability (COSMIC) that facilitated mission design for ARTEMIS. Existing legacy software lacked the capabilities necessary for these two missions. There is also a mean orbital element propagator and an osculating to mean element converter that allows long-term orbital stability analysis for the first time in compiled code. The optimized trajectory search tool COSMIC allows users to place constraints and controls on their searches without any restrictions. Constraints may be user-defined and depend on trajectory information either forward or backwards in time. In addition, a long-term orbit stability analysis tool (morbiter) existed previously as a set of scripts on top of Monte. Monte is becoming the primary tool for navigation operations, a core competency at JPL. The mission design capabilities in Monte are becoming mature enough for use in project proposals as well as post-phase A mission design. Monte has three distinct advantages over existing software. First, it is being developed in a modern paradigm: object-oriented C++ and Python. Second, the software has been developed as a toolkit, which allows users to customize their own applications and allows the development team to implement requirements quickly, efficiently, and with minimal bugs.
Finally, the software is managed in accordance with the CMMI (Capability Maturity Model Integration), where it has been appraised at maturity level 3.
NASA Astrophysics Data System (ADS)
Berger, C. E.; Anderson, E. R.; Drut, J. E.
2015-05-01
We determine the ground-state energy and Tan's contact of attractively interacting few-fermion systems in a one-dimensional harmonic trap, for a range of couplings and particle numbers. Complementing those results, we show the corresponding density profiles. The calculations were performed with a lattice Monte Carlo approach based on a nonuniform discretization of space, defined via Gauss-Hermite quadrature points and weights. This particular coordinate basis is natural for systems in harmonic traps, and can be generalized to traps of other shapes. In all cases, it yields a position-dependent coupling and a corresponding nonuniform Hubbard-Stratonovich transformation. The resulting path integral is performed with hybrid Monte Carlo as a proof of principle for calculations at finite temperature and in higher dimensions. We present results for N = 4, ..., 20 particles (although the method can be extended beyond that) to cover the range from few- to many-particle systems. This method is exact up to statistical and systematic uncertainties, which we account for, and thus also represents an ab initio calculation of this system, providing a benchmark for other methods and a prediction for ultracold-atom experiments.
C. E. Berger; E. R. Anderson; J. E. Drut
2015-06-08
We determine the ground-state energy and Tan's contact of attractively interacting few-fermion systems in a one-dimensional harmonic trap, for a range of couplings and particle numbers. Complementing those results, we show the corresponding density profiles. The calculations were performed with a new lattice Monte Carlo approach based on a non-uniform discretization of space, defined via Gauss-Hermite quadrature points and weights. This particular coordinate basis is natural for systems in harmonic traps, and can be generalized to traps of other shapes. In all cases, it yields a position-dependent coupling and a corresponding non-uniform Hubbard-Stratonovich transformation. The resulting path integral is performed with hybrid Monte Carlo as a proof of principle for calculations at finite temperature and in higher dimensions. We present results for N=4,...,20 particles (although the method can be extended beyond that) to cover the range from few- to many-particle systems. This method is also exact up to statistical and systematic uncertainties, which we account for -- and thus also represents the first ab initio calculation of this system, providing a benchmark for other methods and a prediction for ultracold-atom experiments.
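The nonuniform spatial grid described above is defined by Gauss-Hermite points and weights. The following is a minimal sketch of the underlying quadrature rule only (not the authors' lattice construction), using NumPy's `hermgauss`:

```python
# Gauss-Hermite nodes and weights, the kind of nonuniform spatial grid the
# paper builds its lattice from (this shows only the quadrature rule itself,
# not the authors' lattice construction).
import numpy as np
from numpy.polynomial.hermite import hermgauss

x, w = hermgauss(10)   # 10 nodes; exact for polynomials up to degree 19

# sum_i w_i f(x_i) approximates the integral of f(x) * exp(-x**2) over R.
# Check against known moments of the Gaussian weight:
#   integral exp(-x^2) dx = sqrt(pi),  integral x^2 exp(-x^2) dx = sqrt(pi)/2
zeroth = float(np.sum(w))
second = float(np.sum(w * x**2))
```

Because the weight exp(-x**2) mirrors the harmonic-trap ground state, these nodes cluster where the trapped density is largest, which is what makes the basis natural for this problem.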
Templating assembly of multifunctional hybrid colloidal spheres.
Yan, Xuehai; Li, Junbai; Möhwald, Helmuth
2012-05-22
3D hybrid colloidal spheres with integrated functions and collective properties are fabricated using a variety of common inorganic nano-objects as building blocks in association with polyelectrolyte encapsulation through a facile template strategy. The fabrication strategy is generally suited for design of functional colloidal spheres in a simple and controllable manner, and thus opens a new avenue for developing hybrid materials with multiple functions and collective properties. PMID:22529033
HOPSPACK: Hybrid Optimization Parallel Search Package.
Gray, Genetha A.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica
2008-12-01
In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel SearchPackage), a new software platform which facilitates combining multiple optimization routines into asingle, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The frameworkis designed such that existing optimization source code can be easily incorporated with minimalcode modification. By maintaining the integrity of each individual solver, the strengths and codesophistication of the original optimization package are retained and exploited.4
Michael H. Seymour
2010-08-17
I review the status of the general-purpose Monte Carlo event generators for the LHC, with emphasis on areas of recent physics developments. There has been great progress, especially in multi-jet simulation, but I mention some question marks that have recently arisen.
NSDL National Science Digital Library
McGath, Gary
This is the description and instructions for the Monte Carlo Estimation of Pi applet. It is a simulation of throwing darts at a figure of a circle inscribed in a square. It shows the relationship between the geometry of the figure and the statistical outcome of throwing the darts.
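The applet's experiment can be sketched in a few lines (an illustration of the same dart-throwing estimate, not the applet's own source):

```python
# Throw random darts at the unit square and count hits inside the
# inscribed quarter circle; the hit fraction approaches pi/4.
import random

def estimate_pi(n_darts, seed=12345):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_darts)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_darts

pi_hat = estimate_pi(100_000)
```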
Synchronous Parallel Kinetic Monte Carlo
Martínez, E; Marian, J; Kalos, M H
2006-12-14
A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
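For context, the standard serial kMC step that such parallel schemes generalize can be sketched as follows (a generic residence-time algorithm, not the synchronous parallel method itself; the rates are invented for illustration):

```python
# Serial residence-time (BKL) kinetic Monte Carlo step: choose an event
# with probability proportional to its rate, then advance the clock by an
# exponential waiting time with mean 1/(total rate).
import math
import random

def kmc_step(rates, rng):
    """One kMC step: returns (chosen event index, time increment)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):       # pick event i with prob rates[i]/total
        acc += rate
        if r < acc:
            break
    dt = -math.log(1.0 - rng.random()) / total   # Exp(total) waiting time
    return i, dt

rng = random.Random(3)
rates = [1.0, 2.0, 7.0]                    # total rate 10
counts, t = [0, 0, 0], 0.0
for _ in range(50_000):
    i, dt = kmc_step(rates, rng)
    counts[i] += 1
    t += dt
```

The serial bottleneck is visible here: every step consumes one global event, which is what a time-synchronous parallel formulation must decompose across domains.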
Markov Chain Monte Carlo and Gibbs Sampling
Walsh, Bruce
Appendix 3, "Markov Chain Monte Carlo and Gibbs Sampling," motivates the development of Markov chain Monte Carlo (MCMC) methods: each iteration uses the previous sample value to randomly generate the next sample value, generating a Markov chain.
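The defining update quoted above, generating each sample from the previous one, is illustrated by a minimal random-walk Metropolis sampler (a generic sketch, not code from the appendix; the target is an invented example):

```python
# Random-walk Metropolis: each new sample is generated from the previous
# one, forming a Markov chain whose stationary law is the target density.
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0, seed=7):
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    chain = []
    for _ in range(n_samples):
        prop = x + rng.uniform(-step, step)         # propose near current state
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Target: standard normal (an unnormalized log-density suffices).
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 50_000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

Note that only an unnormalized density is needed, which is the property that makes MCMC practical for Bayesian posteriors.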
Optimizing Efficiency of Perturbative Monte Carlo Method
Truong, Thanh N.
Tom J. Evans; Thanh N. Truong, 1998. Abstract: We introduce error weighting functions into the perturbative Monte Carlo method for use in QM and/or MM regions. J Comput Chem 19: 1632-1638, 1998. Keywords: Monte Carlo
Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
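The technique can be sketched briefly (in Python here rather than the paper's Visual Basic; the integrand is an invented example):

```python
# A definite integral as an expectation: for U ~ Uniform(a, b),
# integral_a^b f(x) dx = (b - a) * E[f(U)], so a sample mean of f at
# uniform draws estimates the integral.
import math
import random

def mc_integral(f, a, b, n, seed=1):
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

est = mc_integral(math.sin, 0.0, math.pi, 200_000)   # exact value is 2
```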
Kido, Kentaro; Kasahara, Kento; Yokogawa, Daisuke; Sato, Hirofumi
2015-07-01
In this study, we reported the development of a new quantum mechanics/molecular mechanics (QM/MM)-type framework to describe chemical processes in solution by combining standard molecular-orbital calculations with a three-dimensional formalism of integral equation theory for molecular liquids (the multi-center molecular Ornstein-Zernike (MC-MOZ) method). The theoretical procedure is very similar to the 3D-reference interaction site model self-consistent field (RISM-SCF) approach. Since the MC-MOZ method is highly parallelized for computation, the present approach has the potential to be one of the most efficient procedures to treat chemical processes in solution. Benchmark tests to check the validity of this approach were performed for two solute systems (water and formaldehyde) and a simple SN2 reaction (Cl⁻ + CH3Cl → ClCH3 + Cl⁻) in aqueous solution. The results for solute molecular properties and solvation structures obtained by the present approach were in reasonable agreement with those obtained by other hybrid frameworks and experiments. In particular, the results of the proposed approach are in excellent agreement with those of 3D-RISM-SCF. PMID:26156461
NASA Astrophysics Data System (ADS)
Kido, Kentaro; Kasahara, Kento; Yokogawa, Daisuke; Sato, Hirofumi
2015-07-01
In this study, we reported the development of a new quantum mechanics/molecular mechanics (QM/MM)-type framework to describe chemical processes in solution by combining standard molecular-orbital calculations with a three-dimensional formalism of integral equation theory for molecular liquids (the multi-center molecular Ornstein-Zernike (MC-MOZ) method). The theoretical procedure is very similar to the 3D-reference interaction site model self-consistent field (RISM-SCF) approach. Since the MC-MOZ method is highly parallelized for computation, the present approach has the potential to be one of the most efficient procedures to treat chemical processes in solution. Benchmark tests to check the validity of this approach were performed for two solute systems (water and formaldehyde) and a simple SN2 reaction (Cl⁻ + CH3Cl → ClCH3 + Cl⁻) in aqueous solution. The results for solute molecular properties and solvation structures obtained by the present approach were in reasonable agreement with those obtained by other hybrid frameworks and experiments. In particular, the results of the proposed approach are in excellent agreement with those of 3D-RISM-SCF.
Xiao, Fang-Xing; Hung, Sung-Fu; Tao, Hua Bing; Miao, Jianwei; Yang, Hong Bin; Liu, Bin
2014-12-21
Hierarchically ordered ZnO nanorods (NRs) decorated nanoporous-layer-covered TiO2 nanotube array (ZnO NRs/NP-TNTAs) nanocomposites have been prepared by an efficient, two-step anodization route combined with an electrochemical deposition strategy, by which monodispersed one-dimensional (1D) ZnO NRs were uniformly grown on the framework of NP-TNTAs. The crystal phases, morphologies, optical properties, photocatalytic as well as photoelectrocatalytic performances of the well-defined ZnO NRs/NP-TNTAs heterostructures were systematically explored to clarify the structure-property correlation. It was found that the ZnO NRs/NP-TNTAs heterostructure exhibits significantly enhanced photocatalytic and photoelectrocatalytic performances, along with favorable photostability toward degradation of organic pollutants under UV light irradiation, as compared to the single component counterparts. The remarkably enhanced photoactivity of ZnO NRs/NP-TNTAs heterostructure is ascribed to the intimate interfacial integration between ZnO NRs and NP-TNTAs substrate imparted by the unique spatially branched hierarchical structure, thereby contributing to the efficient transfer and separation of photogenerated electron-hole charge carriers. Moreover, the specific active species during the photocatalytic process was unambiguously determined and photocatalytic mechanism was tentatively presented. It is anticipated that our work could provide new insights for the construction of various hierarchical 1D-1D hybrid nanocomposites for extensive photocatalytic applications. PMID:25363649
Small hybrid solar power system
M. Kane; D. Larrain; D. Favrat; Y. Allani
2003-01-01
This paper introduces a novel concept of a mini-hybrid solar power plant integrating a field of solar concentrators, two superposed Organic Rankine Cycles (ORC) and a (bio-)Diesel engine. The Organic Rankine Cycles include hermetic scroll expander-generators. (Footnote: the word expander is often used to characterize units recovering the expansion energy of a gas, in particular when based on a volumetric machine.)
Efficient pseudo-random number generation for monte-carlo simulations using graphic processors
NASA Astrophysics Data System (ADS)
Mohanty, Siddhant; Mohanty, A. K.; Carminati, F.
2012-06-01
A hybrid approach based on the combination of three Tausworthe generators and one linear congruential generator for pseudo-random number generation for GPU programming, as suggested in the NVIDIA CUDA library, has been used for Monte Carlo sampling. On each GPU thread, a random seed is generated on the fly in a simple way using the quick-and-dirty algorithm, where the mod operation is not performed explicitly, relying instead on unsigned integer overflow. Using this hybrid generator, multivariate correlated sampling based on the alias technique has been carried out using both the CUDA and OpenCL languages.
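A CPU-side sketch of such a combined generator follows: three Tausworthe steps XOR-combined with one LCG step, with 32-bit unsigned overflow emulated by masking. The step parameters follow the combined Tausworthe/LCG generator popularized in GPU Gems 3; the seeds are arbitrary, and this is an illustration, not the paper's CUDA kernel.

```python
# Hybrid Tausworthe + LCG generator (32-bit arithmetic emulated by masking).
MASK32 = 0xFFFFFFFF

def taus_step(z, s1, s2, s3, m):
    b = (((z << s1) & MASK32) ^ z) >> s2
    return (((z & m) << s3) & MASK32) ^ b

def lcg_step(z, a=1664525, c=1013904223):
    return (a * z + c) & MASK32   # "quick and dirty" LCG; overflow is the mod

class HybridTaus:
    def __init__(self, z1, z2, z3, z4):
        # Tausworthe states should be seeded well above small values (>= 128)
        self.z = [z1, z2, z3, z4]

    def next_u32(self):
        z = self.z
        z[0] = taus_step(z[0], 13, 19, 12, 0xFFFFFFFE)
        z[1] = taus_step(z[1], 2, 25, 4, 0xFFFFFFF8)
        z[2] = taus_step(z[2], 3, 11, 17, 0xFFFFFFF0)
        z[3] = lcg_step(z[3])
        return z[0] ^ z[1] ^ z[2] ^ z[3]   # XOR-combine the four streams

    def next_float(self):
        return self.next_u32() * 2.3283064365386963e-10   # 1 / 2**32

rng = HybridTaus(129281, 362436069, 123456789, 521288629)
u = [rng.next_float() for _ in range(10_000)]
mean = sum(u) / len(u)
```

On a GPU each thread would hold its own four-word state, which is what makes this construction attractive for per-thread sampling.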
NASA Astrophysics Data System (ADS)
Congote, J.; Moreno, A.; Kabongo, L.; Pérez, J.-L.; San-José, R.; Ruiz, O.
2012-10-01
The visualisation of city models, including buildings, structures and volumetric information, is an important task in Computer Graphics and Urban Planning. The different formats and data sources involved in the visualisation make the development of applications a big challenge. We present a homogeneous web visualisation framework using X3DOM and MEDX3DOM for the visualisation of these urban objects. We present an integration of different declarative data sources, enabling the utilization of advanced visualisation algorithms to render the models. It has been tested with a city model composed of buildings from the Madrid University Campus, some volumetric datasets coming from Air Quality Models, and 2D wind-layer datasets. Results show that the visualisation of all the urban models can be performed in real time on the Web. An HTML5 web interface is presented to the users, enabling real-time modification of visualisation parameters.
Hybrid method of deterministic and probabilistic approaches for multigroup neutron transport problem
Lee, D.
2012-07-01
A hybrid of deterministic and probabilistic methods is proposed to solve the Boltzmann transport equation. The new method uses a deterministic method, the Method of Characteristics (MOC), for the fast and thermal neutron energy ranges and a probabilistic method, Monte Carlo (MC), for the intermediate resonance energy range. In the case of a continuous-energy problem, the hybrid method will be able to take advantage of the fast MOC calculation and the accurate resonance self-shielding treatment of the MC method. As a proof of principle, this paper presents the hybrid methodology applied to a multigroup form of the Boltzmann transport equation and confirms that the hybrid method can produce results consistent with the MC and MOC methods. (authors)
Markov Chain Monte Carlo Usher's Algorithm
Bremen, Universität
Lecture slides: Markov Chain Monte Carlo for Parameter Optimization, Holger Schultheis, 04.11.2014. Topics include Markov chain properties (irreducibility: for all states i, j there exists k such that p_ij^(k) > 0), the Metropolis algorithm and simulated annealing, and Usher's algorithm.
MARKOV CHAIN MONTE CARLO MATTHEW JOSEPH
May, J. Peter
MARKOV CHAIN MONTE CARLO MATTHEW JOSEPH Abstract. Markov chain Monte Carlo is an umbrella term for algorithms that use Markov chains to sample from a given probability distribution. This paper is a brief examination of Markov chain Monte Carlo and its usage. We begin by discussing Markov chains and the ergodicity
Monte Carlo Experiments: Design and Implementation.
ERIC Educational Resources Information Center
Paxton, Pamela; Curran, Patrick J.; Bollen, Kenneth A.; Kirby, Jim; Chen, Feinian
2001-01-01
Illustrates the design and planning of Monte Carlo simulations, presenting nine steps in planning and performing a Monte Carlo analysis from developing a theoretically derived question of interest through summarizing the results. Uses a Monte Carlo simulation to illustrate many of the relevant points. (SLD)
Target Density Normalization for Markov Chain Monte Carlo Algorithms
Allen Caldwell; Chang Liu
2014-10-28
Techniques for evaluating the normalization integral of the target density for Markov Chain Monte Carlo algorithms are described and tested numerically. It is assumed that the Markov Chain algorithm has converged to the target distribution and produced a set of samples from the density. These are used to evaluate sample mean, harmonic mean and Laplace algorithms for the calculation of the integral of the target density. A clear preference for the sample mean algorithm applied to a reduced support region is found, and guidelines are given for implementation.
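The preferred sample-mean estimator on a reduced support region can be illustrated as follows. This is a toy sketch of the idea only, not the paper's implementation: exact Gaussian draws stand in for converged MCMC output, and the target density and region are invented.

```python
# Sample-mean estimate of the normalization Z = integral f(x) dx from
# chain output restricted to a reduced support region A, using
#   Z * P(A) = integral_A f  ~=  |A| * mean of f at uniform draws in A,
#   P(A)     ~=  fraction of chain samples falling in A.
import math
import random

rng = random.Random(42)

def f(x):                       # unnormalized target; Z = sqrt(2*pi) ~ 2.5066
    return math.exp(-0.5 * x * x)

# Stand-in for converged MCMC output: exact draws from the target.
chain = [rng.gauss(0.0, 1.0) for _ in range(100_000)]

a, b = -1.0, 1.0                # reduced support region A = [a, b]
p_hat = sum(a <= x <= b for x in chain) / len(chain)

m = 100_000                     # ordinary Monte Carlo for integral_A f
i_hat = (b - a) * sum(f(rng.uniform(a, b)) for _ in range(m)) / m

z_hat = i_hat / p_hat           # estimate of the normalization integral
```

Restricting to A keeps both factors well-estimated: the chain visits A often, and f varies little over A, which is the intuition behind preferring a reduced support region.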
Monte Carlo Simulations and Generation of the SPI Response
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.
2003-01-01
In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.
Monte Carlo Simulations and Generation of the SPI Response
NASA Technical Reports Server (NTRS)
Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.
2003-01-01
In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.
HYBRIDIZATION NETWORKS Charles Semple
Semple, Charles
Hybridization includes the transfer of genetic material from one organism to another which is not its offspring. Well-known examples of hybridization are certain plant and bird species, as well as hybridizations amongst animals; a recurring problem is distinguishing hybridization from other causes of phylogenetic incongruence. Recombination is a type of hybridization that has been well
Comparative Monte Carlo Efficiency by Monte Carlo Analysis
Rubenstein, B M; Doll, J D
2010-01-01
We propose a modified power method for computing the subdominant eigenvalue $\lambda_2$ of a matrix or continuous operator. Here we focus on defining simple Monte Carlo methods for its application. The methods presented use random walkers of mixed signs to represent the subdominant eigenfunction. Accordingly, the methods must cancel these signs properly in order to sample this eigenfunction faithfully. We present a simple procedure to solve this sign problem and then test our Monte Carlo methods by computing the $\lambda_2$ of various Markov chain transition matrices. We first computed $\lambda_2$ for several one- and two-dimensional Ising models, which have a discrete phase space, and compared the relative efficiencies of the Metropolis and heat-bath algorithms as a function of temperature and applied magnetic field. Next, we computed $\lambda_2$ for a model of an interacting gas trapped by a harmonic potential, which has a multidimensional continuous phase space, and studied the efficiency of the Metropolis ...
Monte Carlo calculation of monitor unit for electron arc therapy
Chow, James C. L.; Jiang Runqing
2010-04-15
Purpose: Monitor unit (MU) calculations for electron arc therapy were carried out using Monte Carlo simulations and verified by measurements. Variations in the dwell factor (DF), source-to-surface distance (SSD), and treatment arc angle (α) were studied. Moreover, the possibility of measuring the DF, which requires gantry rotation, using a solid water rectangular, instead of cylindrical, phantom was investigated. Methods: A phase space file based on the 9 MeV electron beam with rectangular cutout (physical size = 2.6×21 cm²) attached to the block tray holder of a Varian 21 EX linear accelerator (linac) was generated using the EGSnrc-based Monte Carlo code and verified by measurement. The relative output factor (ROF), SSD offset, and DF, needed in the MU calculation, were determined using measurements and Monte Carlo simulations. An ionization chamber, a radiographic film, a solid water rectangular phantom, and a cylindrical phantom made of polystyrene were used in dosimetry measurements. Results: Percentage deviations of ROF, SSD offset, and DF between measured and Monte Carlo results were 1.2%, 0.18%, and 1.5%, respectively. It was found that the DF decreased with an increase in α, and such a decrease in DF was more significant in the α range of 0°-60° than 60°-120°. Moreover, for a fixed α, the DF increased with an increase in SSD. Comparing the DF determined using the rectangular and cylindrical phantom through measurements and Monte Carlo simulations, it was found that the DF determined by the rectangular phantom agreed well with that by the cylindrical one within ±1.2%. It shows that a simple setup of a solid water rectangular phantom was sufficient to replace the cylindrical phantom using our specific cutout to determine the DF associated with the electron arc.
Conclusions: By verifying using dosimetry measurements, Monte Carlo simulations proved to be an alternative way to perform MU calculations effectively for electron arc therapy. Since Monte Carlo simulations can generate a precalculated database of ROF, SSD offset, and DF for the MU calculation, with a reduction in human effort and linac beam-on time, it is recommended that Monte Carlo simulations be partially or completely integrated into the commissioning of electron arc therapy.
NASA Astrophysics Data System (ADS)
Arora, Vivek; Baras, John S.; Dillon, Douglas; Falk, Aaron; Suphasindhu, Narin
1995-01-01
Access to the Internet is either too slow (dial-up SLIP) or too expensive (switched 56 kbps, frame relay) for the home user or small enterprise. The Center for Satellite and Hybrid Communication Networks and Hughes Network Systems have collaborated using systems integration principles to develop a prototype of a low-cost hybrid (dial-up and satellite) network terminal which can deliver data from the Internet to the user at rates up to 160 kbps. An asymmetric TCP/IP connection is used, breaking the network link into two physical channels: a terrestrial dial-up for carrying data from the terminal into the Internet and a receive-only satellite link carrying IP packets from the Internet to the user. With a goal of supporting bandwidth-hungry Internet applications such as Mosaic, Gopher, and FTP, this system has been designed to support any Intel 80386/486 PC, any commercial TCP/IP package, any unmodified host on the Internet, and any of the routers, etc., within the Internet. The design exploits the following three observations: 1) satellites are able to offer high bandwidth connections to a large geographical area, 2) a receive-only VSAT is cheap to manufacture and easier to install than one which can also transmit, and 3) most computer users, especially those in a home environment, will want to consume much more information than they generate. IP encapsulation, or tunneling, is used to manipulate the TCP/IP protocols to route packets asymmetrically.
NASA Astrophysics Data System (ADS)
Caragiulo, P.; Dragone, A.; Markovic, B.; Herbst, R.; Nishimura, K.; Reese, B.; Herrmann, S.; Hart, P.; Blaj, G.; Segal, J.; Tomada, A.; Hasi, J.; Carini, G.; Kenney, C.; Haller, G.
2014-09-01
ePix100 is the first variant of a novel class of integrating pixel ASIC architectures optimized for the processing of signals in second-generation LINAC Coherent Light Source (LCLS) X-ray cameras. ePix100 is optimized for ultra-low-noise applications requiring high spatial resolution. ePix ASICs are based on a common platform composed of a random-access analog matrix of pixels with global shutter, fast parallel column readout, and dedicated sigma-delta analog-to-digital converters per column. The ePix100 variant has 50 μm × 50 μm pixels arranged in a 352×384 matrix, a resolution of 50 e⁻ r.m.s., and a signal range of 35 fC (100 photons at 8 keV). In its final version it will be able to sustain a frame rate of 1 kHz. A first prototype has been fabricated and characterized, and the measurement results are reported here.
Coupling kinetic Monte-Carlo and continuum models with application to epitaxial growth
Tim P. Schulze; Peter Smereka; Weinan E
2003-01-01
We present a hybrid method for simulating epitaxial growth that combines kinetic Monte-Carlo (KMC) simulations with the Burton–Cabrera–Frank model for crystal growth. This involves partitioning the computational domain into KMC regions and regions where we time-step a discretized diffusion equation. Computational speed and accuracy are discussed. We find that the method is significantly faster than KMC while accounting for stochastic
NASA Technical Reports Server (NTRS)
Platt, M. E.; Lewis, E. E.; Boehm, F.
1991-01-01
A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. This new capability is called MC-HARP, which efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also included.
Hybrid community energy systems.
Jody, B. J.; Daniels, E. J.; Karvelas, D. E.; Energy Systems
2000-01-01
The availability of efficient, economical, and reliable energy supplies can help attract industry and commercial businesses to a municipality or a region. Efficient use of energy can also improve the air quality and reduce pollution. Therefore, municipalities should explore and encourage the development and implementation of efficient energy systems. Integrated hybrid energy systems can be designed to meet the total energy requirements of large and small communities. These systems can yield significant energy and cost savings when compared with independent systems serving individual units or when compared with the conventional practice of buying power from a utility and producing thermal energy on-site. To maximize energy and cost savings, the design engineer should look beyond the conventional when designing such systems.
Han, Eun Young (Department of Radiation Oncology, University of Arkansas Medical Sciences, Little Rock, Arkansas 72205, United States); Lee, Choonsik (Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institute of Health, Bethesda, Maryland 20852, United States); Mcguire, Lynn; Brown, Tracy L. Y. (Department of Radiology, Division of Nuclear Medicine, University of Arkansas Medical Sciences, Little Rock, Arkansas 72205, United States); Bolch, Wesley E. (J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611, United States)
2013-08-15
Purpose: To calculate organ S values (mGy/Bq-s) and effective doses per time-integrated activity (mSv/Bq-s) for pediatric and adult family members exposed to an adult male or female patient treated with I-131 using a series of hybrid computational phantoms coupled with a Monte Carlo radiation transport technique. Methods: A series of pediatric and adult hybrid computational phantoms were employed in the study. Three different exposure scenarios were considered: (1) standing face-to-face exposures between an adult patient and pediatric or adult family phantoms at five different separation distances; (2) an adult female patient holding her newborn child, and (3) a 1-yr-old child standing on the lap of an adult female patient. For the adult patient model, two different thyroid-related diseases were considered: hyperthyroidism and differentiated thyroid cancer (DTC) with corresponding internal distributions of ¹³¹I. A general purpose Monte Carlo code, MCNPX v2.7, was used to perform the Monte Carlo radiation transport. Results: The S values show a strong dependency on age and organ location within the family phantoms at short distances. The S values and effective dose per time-integrated activity from the adult female patient phantom are relatively high at shorter distances and to younger family phantoms. At a distance of 1 m, effective doses per time-integrated activity are lower than those values based on the NRC (Nuclear Regulatory Commission) by a factor of 2 for both adult male and female patient phantoms. The S values to target organs from the hyperthyroid-patient source distribution strongly depend on the height of the exposed family phantom, so that their values rapidly decrease with decreasing height of the family phantom. Active marrow of the 10-yr-old phantom shows the highest S values among family phantoms for the DTC-patient source distribution.
In the exposure scenario of mother and baby, S values and effective doses per time-integrated activity to the newborn and 1-yr-old phantoms for a hyperthyroid-patient source are higher than values for a DTC-patient source. Conclusions: The authors performed realistic assessments of ¹³¹I organ S values and effective dose per time-integrated activity from adult patients treated for hyperthyroidism and DTC to family members. In addition, the authors’ studies consider Monte Carlo simulated “mother and baby/child” exposure scenarios for the first time. Based on these results, the authors reconfirm the strong conservatism underlying the point source method recommended by the US NRC. The authors recommend that various factors such as the type of the patient's disease, the age of family members, and the distance/posture between the patient and family members must be carefully considered to provide realistic dose estimates for patient-to-family exposures.
Comparative Monte Carlo efficiency by Monte Carlo analysis.
Rubenstein, B M; Gubernatis, J E; Doll, J D
2010-09-01
We propose a modified power method for computing the subdominant eigenvalue λ₂ of a matrix or continuous operator. While useful both deterministically and stochastically, we focus on defining simple Monte Carlo methods for its application. The methods presented use random walkers of mixed signs to represent the subdominant eigenfunction. Accordingly, the methods must cancel these signs properly in order to sample this eigenfunction faithfully. We present a simple procedure to solve this sign problem and then test our Monte Carlo methods by computing λ₂ of various Markov chain transition matrices. As |λ₂| of such a matrix controls the rate at which Monte Carlo sampling relaxes to a stationary condition, its computation also enabled us to compare efficiencies of several Monte Carlo algorithms as applied to two quite different types of problems. We first computed λ₂ for several one- and two-dimensional Ising models, which have a discrete phase space, and compared the relative efficiencies of the Metropolis and heat-bath algorithms as functions of temperature and applied magnetic field. Next, we computed λ₂ for a model of an interacting gas trapped by a harmonic potential, which has a multidimensional continuous phase space, and studied the efficiency of the Metropolis algorithm as a function of temperature and the maximum allowable step size Δ. Based on the λ₂ criterion, we found for the Ising models that small lattices appear to give an adequate picture of comparative efficiency and that the heat-bath algorithm is more efficient than the Metropolis algorithm only at low temperatures, where both algorithms are inefficient. For the harmonic trap problem, we found that the traditional rule of thumb of adjusting Δ so that the Metropolis acceptance rate is around 50% is often suboptimal. In general, as a function of temperature or Δ, λ₂ for this model displayed trends defining optimal efficiency that the acceptance ratio does not.
The cases studied also suggested that Monte Carlo simulations for a continuum model are likely more efficient than those for a discretized version of the model. PMID:21230207
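As a concrete (deterministic) reading of the modified power method described in the abstract above, the sketch below estimates |λ₂| of a small row-stochastic matrix by projecting out the dominant eigenpair (eigenvalue 1) and running an ordinary power iteration on the deflated operator. The two-state chain, iteration count, and function name are illustrative assumptions, not the paper's stochastic mixed-sign walker scheme.

```python
import numpy as np

def subdominant_eigenvalue(P, iters=500):
    """Estimate |lambda_2| of a row-stochastic matrix P with a deflated
    power iteration: the dominant eigenpair (eigenvalue 1, right
    eigenvector of all ones, left eigenvector pi) is projected out."""
    n = P.shape[0]
    ones = np.ones(n)
    # Stationary distribution pi (dominant left eigenvector) by power iteration.
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        pi = pi @ P
        pi /= pi.sum()
    # For any v, v - ones * (pi @ v) removes the dominant-mode component
    # (left and right eigenvectors of distinct eigenvalues are biorthogonal).
    v = np.arange(1.0, n + 1.0)
    v -= ones * (pi @ v)
    lam = 0.0
    for _ in range(iters):
        w = P @ v
        w -= ones * (pi @ w)          # re-deflate to control round-off
        lam = np.linalg.norm(w) / np.linalg.norm(v)
        v = w / np.linalg.norm(w)
    return lam

# Two-state chain with eigenvalues 1 and 0.5.
P = np.array([[0.7, 0.3],
              [0.2, 0.8]])
print(round(subdominant_eigenvalue(P), 6))  # prints 0.5
```

Because the deflated space of the two-state chain is one-dimensional, the norm ratio equals |λ₂| after a single step; for larger chains the iteration converges geometrically at rate |λ₃/λ₂|.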
Weichsel, T; Hartung, U; Kopte, T; Zschornack, G; Kreller, M; Philipp, A
2015-09-01
A metal ion source prototype has been developed: a combination of magnetron sputter technology with 2.45 GHz electron cyclotron resonance (ECR) ion source technology, a so-called magnetron ECR ion source (MECRIS). An integrated ring-shaped sputter magnetron with an Al target acts as a powerful metal atom supply in order to produce an intense current of singly charged metal ions. Preliminary experiments show that an Al⁺ ion current with a density of 167 μA/cm² is extracted from the source at an acceleration voltage of 27 kV. Spatially resolved double Langmuir probe measurements and optical emission spectroscopy were used to study the plasma states of the ion source: sputter magnetron, ECR, and MECRIS plasma. Electron density and temperature as well as Al atom density were determined as a function of microwave and sputter magnetron power. The effect of ECR heating is strongly pronounced in the center of the source. There the electron density is increased by one order of magnitude, from 6 × 10⁹ cm⁻³ to 6 × 10¹⁰ cm⁻³, and the electron temperature is enhanced from about 5 eV to 12 eV when the ECR plasma is ignited in addition to the magnetron plasma. Operating the magnetron at constant power, it was observed that its discharge current rose from 1.8 A to 4.8 A when the ECR discharge was superimposed with a microwave power of 2 kW. At the same time, the discharge voltage decreased from about 560 V to 210 V, clearly indicating a higher plasma density in the MECRIS mode. The optical emission spectrum of the MECRIS plasma is dominated by lines of excited Al atoms and shows a significant contribution of lines arising from singly ionized Al. Plasma emission photography with a CCD camera was used to corroborate the probe measurements and to identify separate plasma emission zones originating from the ECR and magnetron discharges. PMID:26429434
Zimmerman, G.B.
1997-06-24
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions, and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but its efficiency can be improved 50-fold by angularly biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
Monte Carlo modelling of TRIGA research reactor
NASA Astrophysics Data System (ADS)
El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.
2010-10-01
The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents all components of the core in detail, with essentially no physical approximation. Continuous energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3), as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code, were used. The cross-section libraries were generated using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, as well as power peaking factors were used in the validation process. Results of the calculations are analysed and discussed.
Bacteria Allocation Using Monte Carlo
NSDL National Science Digital Library
Hill, David R.
This applet, created by David Hill and Lila Roberts, uses the Monte Carlo technique to simulate a count of bacteria that are present as a result of a certain sampling process. This simulation could be modified to perform other experiments. This experiment is geared towards high school calculus students or probability courses for mathematics majors in college. Students must possess a basic understanding of probability concepts before performing this experiment. Overall, it is a nice activity for a mathematics classroom.
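A minimal sketch of the kind of counting experiment the applet performs, under the assumed model that each of N bacteria independently lands in the sampled region with probability p (a binomial model); the function and parameter names are hypothetical, not taken from the applet itself.

```python
import random

def mean_sample_count(n_bacteria=1000, sample_fraction=0.05,
                      trials=2000, seed=42):
    """Monte Carlo estimate of the average number of bacteria caught in
    one sample: each bacterium independently falls inside the sampled
    region with probability sample_fraction."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # One simulated sampling experiment: count the "hits".
        total += sum(rng.random() < sample_fraction
                     for _ in range(n_bacteria))
    return total / trials

# The Monte Carlo mean should lie close to the exact binomial
# expectation n_bacteria * sample_fraction = 50.
print(mean_sample_count())
```

Comparing the simulated mean against the closed-form expectation Np is exactly the kind of check a probability class can use to validate a Monte Carlo experiment.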
Hybrid mimics and hybrid vigor in Arabidopsis.
Wang, Li; Greaves, Ian K; Groszmann, Michael; Wu, Li Min; Dennis, Elizabeth S; Peacock, W James
2015-09-01
F1 hybrids can outperform their parents in yield and vegetative biomass, features of hybrid vigor that form the basis of the hybrid seed industry. The yield advantage of the F1 is lost in the F2 and subsequent generations. In Arabidopsis, from F2 plants that have an F1-like phenotype, we have by recurrent selection produced pure breeding F5/F6 lines, hybrid mimics, in which the characteristics of the F1 hybrid are stabilized. These hybrid mimic lines, like the F1 hybrid, have larger leaves than the parent plants, and the leaves have increased photosynthetic cell numbers and, in some lines, increased cell size, suggesting an increased supply of photosynthate. A comparison of the differentially expressed genes in the F1 hybrid with those of eight hybrid mimic lines identified metabolic pathways altered in both; these pathways include down-regulation of defense response pathways and altered abiotic response pathways. F6 hybrid mimic lines are mostly homozygous at each locus in the genome and yet retain the large F1-like phenotype. Many alleles in the F6 plants, when they are homozygous, have expression levels different from the level in the parent. We consider this altered expression to be a consequence of transregulation of genes from one parent by genes from the other parent. Transregulation could also arise from epigenetic modifications in the F1. The pure breeding hybrid mimics have been valuable in probing the mechanisms of hybrid vigor and may also prove to be useful hybrid vigor equivalents in agriculture. PMID:26283378
NASA Astrophysics Data System (ADS)
Rozov, V.; Alekseev, A.
2015-08-01
A necessity to address a wide spectrum of engineering problems in ITER determined the need for efficient tools for modeling of the magnetic environment and force interactions between the main components of the magnet system. The assessment of the operating window for the machine, determined by the electro-magnetic (EM) forces, and the check of feasibility of particular scenarios play an important role for ensuring the safety of exploitation. Such analysis-powered prevention of damages forms an element of the Machine Operations and Investment Protection strategy. The corresponding analysis is a necessary step in preparation of the commissioning, which finalizes the construction phase. It shall be supported by the development of the efficient and robust simulators and multi-physics/multi-system integration of models. The developed numerical model of interactions in the ITER magnetic system, based on the use of pre-computed influence matrices, facilitated immediate and complete assessment and systematic specification of EM loads on magnets in all foreseen operating regimes, their maximum values, envelopes and the most critical scenarios. The common principles of interaction in typical bilateral configurations have been generalized for asymmetry conditions, inspired by the plasma and by the hardware, including asymmetric plasma event and magnetic system fault cases. The specification of loads is supported by the technology of functional approximation of nodal and distributed data by continuous patterns/analytical interpolants. The global model of interactions together with the mesh-independent analytical format of output provides the source of self-consistent and transferable data on the spatial distribution of the system of forces for assessments of structural performance of the components, assemblies and supporting structures. 
The numerical model used is fully parametrized, which makes it very suitable for multi-variant and sensitivity studies (positioning, off-normal events, asymmetry, etc). The obtained results and matrices form a basis for a relatively simple and robust force processor as a specialized module of a global simulator for diagnostic, operational instrumentation, monitoring and control, as well as a scenario assessment tool. This paper gives an overview of the model, applied technique, assessed problems and obtained qualitative and quantitative results.
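The pre-computed influence-matrix idea can be illustrated with a toy force evaluator (all names and numbers below are hypothetical, not ITER data): geometry-dependent coefficients are computed once, after which the EM force on each coil for any current scenario reduces to fast linear algebra, which is what makes multi-variant and sensitivity studies cheap.

```python
import numpy as np

def coil_forces(G, currents):
    """Toy influence-matrix force model: the net force on coil i is taken
    to be F_i = I_i * sum_j G[i, j] * I_j, with G precomputed from the
    geometry alone. Scenario studies then need no field re-computation."""
    currents = np.asarray(currents, dtype=float)
    return currents * (G @ currents)

rng = np.random.default_rng(0)
G = rng.normal(size=(6, 6))        # placeholder influence coefficients
G = 0.5 * (G + G.T)                # mutual influence is symmetric
I = np.array([1.0, 2.0, -1.5, 0.5, 3.0, -2.0])
F = coil_forces(G, I)
# Forces are bilinear in the currents: doubling every current
# quadruples every force.
assert np.allclose(coil_forces(G, 2 * I), 4 * F)
```

Sweeping current scenarios (off-normal events, asymmetries) then amounts to re-evaluating a matrix-vector product per case, consistent with the "force processor" role described above.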
NAPA C: Compiling for a Hybrid RISC/FPGA Architecture
Maya Gokhale; Janice M. Stone
1998-01-01
Hybrid architectures combining conventional processors with configurable logic resources enable efficient coordination of control with datapath computation. With integration of the two components on a single device, loop control and data-dependent branching can be handled by the conventional processor, while regular datapath computation occurs on the configurable hardware. This paper describes a novel pragma-based approach to programming such hybrid devices.