
1

Graduiertenschule Hybrid Monte Carlo

Graduiertenschule Hybrid Monte Carlo, SS 2005, Heermann, Universität Heidelberg. In conventional Monte-Carlo (MC) calculations of condensed matter systems, such as an N ... probability distribution, unlike Monte-Carlo calculations. ... The Hybrid Monte-Carlo (HMC) method combines ...

Heermann, Dieter W.

2

Monte Carlo integration on GPU

We use a graphics processing unit (GPU) for fast computation of Monte Carlo integrations. Two widely used Monte Carlo integration programs, VEGAS and BASES, are parallelized on the GPU. Using $W^{+}$ plus multi-gluon production processes at the LHC, we compare integrated cross sections and execution times for programs in FORTRAN and C on the CPU and those on the GPU. Integrated results agree with each other within statistical errors. Programs on the GPU run about 50 times faster than those in C, and more than 60 times faster than the original FORTRAN programs.

J. Kanzaki

2010-10-11

3

The Rational Hybrid Monte Carlo Algorithm

The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing costs of all popular fermion formulations.

M. A. Clark

2006-10-06
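The RHMC abstract above builds on plain Hybrid Monte Carlo. As a rough illustration of the underlying HMC idea only (not the rational approximation), here is a minimal sketch sampling a 1D standard Gaussian with a leapfrog integrator and a Metropolis accept/reject step; all names and parameters are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_U(q):
    # Potential U(q) = q^2/2, so the target exp(-U) is a standard normal.
    return q

def leapfrog(q, p, eps, steps):
    # Standard leapfrog: half kick, alternating drifts/kicks, final half kick.
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(steps - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

def hmc(n_samples, eps=0.2, steps=10):
    q, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.normal()                      # refresh momentum
        H_old = 0.5 * q**2 + 0.5 * p**2
        q_new, p_new = leapfrog(q, p, eps, steps)
        H_new = 0.5 * q_new**2 + 0.5 * p_new**2
        if rng.random() < np.exp(H_old - H_new):   # Metropolis accept/reject
            q = q_new
        samples.append(q)
    return np.array(samples)

s = hmc(20_000)
print(s.mean(), s.var())   # should be close to 0 and 1
```

RHMC replaces the exact fermionic force in the molecular-dynamics step with one derived from a rational approximation; the accept/reject structure sketched here is unchanged.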

4

Non-Hermitian Polynomial Hybrid Monte Carlo

We report on a new variant of the hybrid Monte Carlo algorithm employing a polynomial approximation of the inverse of the non-Hermitian Dirac-Wilson operator. Our approximation relies on simple and stable recurrence relations of complex Chebyshev polynomials. First performance figures are presented.

Oliver Witzel

2008-09-05

5

Monte Carlo One-dimension Integration Model

NSDL National Science Digital Library

The Monte Carlo One-dimension Integration Model illustrates the Monte Carlo integration algorithm for computing the integral of a function f(x). The simulation lets you select the number of random points, automatically fit the y-axis scale to the function graph (improving the accuracy of the estimate), and show or hide the sampled points. The simulation computes the actual value of the integral using a Romberg algorithm to test the Monte Carlo approximation.

Franciscouembre

2012-02-08
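The model described above estimates an integral from random points and checks it against a quadrature value. A minimal sketch of the same idea, with the Romberg check replaced by a known analytic value (the function and sample count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

f = lambda x: np.sin(np.pi * x)   # exact integral over [0, 1] is 2/pi

def mc_integrate(f, a, b, n):
    """Sample-mean Monte Carlo estimate of the integral of f over [a, b]."""
    x = rng.uniform(a, b, n)
    vals = f(x)
    est = (b - a) * vals.mean()
    # One-sigma statistical error, shrinking like 1/sqrt(n).
    err = (b - a) * vals.std(ddof=1) / np.sqrt(n)
    return est, err

est, err = mc_integrate(f, 0.0, 1.0, 100_000)
exact = 2 / np.pi
print(f"estimate {est:.4f} +/- {err:.4f}, exact {exact:.4f}")
```

The reported error bar plays the same role as the applet's Romberg reference: an independent check on the stochastic estimate.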

6

Monte Carlo Integration Lecture 2 The Problem

Monte Carlo Integration, Lecture 2: The Problem. Let ... be a probability measure over the Borel σ-field ... and h(x) = 0 otherwise. ... When the problem appears to be intractable ... (Press et al. (1992) and references therein). For high-dimensional problems, Monte Carlo methods have ...

Liang, Faming

7

Hybrid optofluidic integration.

Complete integration of microfluidic and optical functions in a single lab-on-chip device is one goal of optofluidics. Here, we demonstrate the hybrid integration of a PDMS-based fluid handling layer with a silicon-based optical detection layer in a single optofluidic system. The optical layer consists of a liquid-core antiresonant reflecting optical waveguide (ARROW) chip that is capable of single particle detection and interfacing with optical fiber. Integrated devices are reconfigurable and able to sustain high pressures despite the small dimensions of the liquid-core waveguide channels. We show the combination of salient sample preparation capabilities (particle mixing, distribution, and filtering) with single particle fluorescence detection. Specifically, we demonstrate fluorescent labelling of λ-DNA, followed by flow-based single-molecule detection on a single device. This points the way towards amplification-free detection of nucleic acids with low-complexity biological sample preparation on a chip. PMID:23969694

Parks, Joshua W; Cai, Hong; Zempoaltecatl, Lynnell; Yuzvinsky, Thomas D; Leake, Kaelyn; Hawkins, Aaron R; Schmidt, Holger

2013-10-21

8

Hybrid optofluidic integration

Complete integration of microfluidic and optical functions in a single lab-on-chip device is one goal of optofluidics. Here, we demonstrate the hybrid integration of a PDMS-based fluid handling layer with a silicon-based optical detection layer in a single optofluidic system. The optical layer consists of a liquid-core antiresonant reflecting optical waveguide (ARROW) chip that is capable of single particle detection and interfacing with optical fiber. Integrated devices are reconfigurable and able to sustain high pressures despite the small dimensions of the liquid-core waveguide channels. We show the combination of salient sample preparation capabilities (particle mixing, distribution, and filtering) with single particle fluorescence detection. Specifically, we demonstrate fluorescent labelling of λ-DNA, followed by flow-based single-molecule detection on a single device. This points the way towards amplification-free detection of nucleic acids with low-complexity biological sample preparation on a chip. PMID:23969694

Parks, Joshua W.; Cai, Hong; Zempoaltecatl, Lynnell; Yuzvinsky, Thomas D.; Leake, Kaelyn; Hawkins, Aaron R.

2013-01-01

9

Constant pressure hybrid Molecular Dynamics-Monte Carlo simulations

New hybrid Molecular Dynamics-Monte Carlo methods are proposed to increase the efficiency of constant-pressure simulations. Two variations of the isobaric Molecular Dynamics component of the algorithms are considered. In the first, we use the extended-ensemble method of Andersen [H. C. Andersen, J. Chem. Phys. 72, 2384 (1980)]. In the second, we arrive at a new constant-pressure Monte Carlo technique based on the reversible generalization of the weak-coupling barostat.

Roland Faller; Juan J. de Pablo

2002-01-01

10

Chapter 2: Monte Carlo Integration. This chapter gives an introduction to Monte Carlo integration useful in computer graphics. Good references on Monte Carlo methods include Kalos & Whitlock [1986] for Monte Carlo applications to neutron transport problems; Lewis & Miller [1984] is a good source ...

Stanford University

11

Constant pressure hybrid Monte Carlo simulations in GROMACS.

Adaptation and implementation of the Generalized Shadow Hybrid Monte Carlo (GSHMC) method for molecular simulation at constant pressure in the NPT ensemble are discussed. The resulting method, termed NPT-GSHMC, combines the Andersen barostat with GSHMC to enable molecular simulations in the environment natural for biological applications, namely, at constant pressure and constant temperature. Generalized Hybrid Monte Carlo methods are designed to maintain constant temperature and volume, and extending their functionality to preserving pressure is not trivial. The theoretical formulation of NPT-GSHMC was previously introduced. Our main contribution is the implementation of this methodology in the GROMACS molecular simulation package and the evaluation of properties of NPT-GSHMC, such as accuracy, performance, and effectiveness for real physical systems, in comparison with well-established molecular simulation techniques. Benchmarking tests are presented and the obtained preliminary results are promising. For the first time, generalized hybrid Monte Carlo simulations at constant pressure are available within the popular open source molecular dynamics software package. PMID:25408507

Fernández-Pendás, Mario; Escribano, Bruno; Radivojević, Tijana; Akhmatskaya, Elena

2014-12-01

12

Monte Carlo Reliability Model for Microwave Monolithic Integrated Circuits

Monte Carlo Reliability Model for Microwave Monolithic Integrated Circuits. Aris Christou, Materials ... of the failure rate of each component due to interaction effects of the failed components ... the failure rates become nonconstant. The Monte Carlo technique is an appropriate methodology used to treat ...

Rubloff, Gary W.

13

Constant pressure hybrid Molecular Dynamics-Monte Carlo simulations

NASA Astrophysics Data System (ADS)

New hybrid Molecular Dynamics-Monte Carlo methods are proposed to increase the efficiency of constant-pressure simulations. Two variations of the isobaric Molecular Dynamics component of the algorithms are considered. In the first, we use the extended-ensemble method of Andersen [H. C. Andersen, J. Chem. Phys. 72, 2384 (1980)]. In the second, we arrive at a new constant-pressure Monte Carlo technique based on the reversible generalization of the weak-coupling barostat [H. J. C. Berendsen et al., J. Chem. Phys. 81, 3684 (1984)]. This latter technique turns out to be highly effective in equilibrating and maintaining a target pressure. It is superior to the extended-ensemble method, which in turn is superior to simple volume-rescaling algorithms. The efficiency of the proposed methods is demonstrated by studying two systems. The first is a simple Lennard-Jones fluid. The second is a mixture of polyethylene chains of 200 monomers.

Faller, Roland; de Pablo, Juan J.

2002-01-01

14

Bayesian Inference in Econometric Models Using Monte Carlo Integration

Methods for the systematic application of Monte Carlo integration with importance sampling to Bayesian inference are developed. Conditions under which the numerical approximation converges almost surely to the true value with the number of Monte Carlo replications, and its numerical accuracy may be assessed reliably, are given. Importance sampling densities are derived from multivariate normal or Student-t approximations to the

John Geweke

1989-01-01
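Geweke's approach evaluates posterior expectations by importance sampling from a normal (or Student-t) approximation. A minimal self-normalized importance-sampling sketch of that setup; the target and proposal below are toy choices for illustration, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_post(theta):
    # Unnormalized log-posterior: here a toy N(2, 0.5^2), up to a constant.
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

# Importance density: a wide normal N(0, 2^2) that covers the posterior.
theta = rng.normal(0.0, 2.0, 200_000)
log_q = -0.5 * (theta / 2.0) ** 2 - np.log(2.0)

log_w = log_post(theta) - log_q
w = np.exp(log_w - log_w.max())   # subtract max for numerical stability
w /= w.sum()                      # self-normalized weights

post_mean = np.sum(w * theta)     # posterior mean estimate (should be near 2)
ess = 1.0 / np.sum(w ** 2)        # effective sample size diagnostic
print(post_mean, ess)
```

The effective sample size is the kind of reliability diagnostic the abstract alludes to: when the importance density covers the posterior poorly, the weights degenerate and the ESS collapses.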

15

ITER Neutronics Modeling Using Hybrid Monte Carlo/Deterministic and CAD-Based Monte Carlo Methods

The immense size and complex geometry of the ITER experimental fusion reactor require the development of special techniques that can accurately and efficiently perform neutronics simulations with minimal human effort. This paper shows the effect of the hybrid Monte Carlo (MC)/deterministic techniques - Consistent Adjoint Driven Importance Sampling (CADIS) and Forward-Weighted CADIS (FW-CADIS) - in enhancing the efficiency of the neutronics modeling of ITER and demonstrates the applicability of coupling these methods with computer-aided-design-based MC. Three quantities were calculated in this analysis: the total nuclear heating in the inboard leg of the toroidal field coils (TFCs), the prompt dose outside the biological shield, and the total neutron and gamma fluxes over a mesh tally covering the entire reactor. The use of FW-CADIS in estimating the nuclear heating in the inboard TFCs resulted in a factor of ~ 275 increase in the MC figure of merit (FOM) compared with analog MC and a factor of ~ 9 compared with the traditional methods of variance reduction. By providing a factor of ~ 21 000 increase in the MC FOM, the radiation dose calculation showed how the CADIS method can be effectively used in the simulation of problems that are practically impossible using analog MC. The total flux calculation demonstrated the ability of FW-CADIS to simultaneously enhance the MC statistical precision throughout the entire ITER geometry. Collectively, these calculations demonstrate the ability of the hybrid techniques to accurately model very challenging shielding problems in reasonable execution times.

Ibrahim, A. [University of Wisconsin; Mosher, Scott W [ORNL; Evans, Thomas M [ORNL; Peplow, Douglas E. [ORNL; Sawan, M. [University of Wisconsin; Wilson, P. [University of Wisconsin; Wagner, John C [ORNL; Heltemes, Thad [University of Wisconsin, Madison]

2011-01-01

16

BROWNIAN PROCESSES FOR MONTE CARLO INTEGRATION ON COMPACT LIE GROUPS

S. Said, The University ... for the evaluation of integrals of smooth functions defined on compact Lie groups. The approach is based on the ergodic property of Brownian processes in compact Lie groups. The paper provides an elementary proof ...

Manton, Jonathan

17

Path Integral Monte-Carlo Calculations for Relativistic Oscillator

The problem of Relativistic Oscillator has been studied in the framework of Path Integral Monte-Carlo(PIMC) approach. Ultra-relativistic and non-relativistic limits have been discussed. We show that PIMC method can be effectively used for investigation of relativistic systems.

Alexandr Ivanov; Oleg Pavlovsky

2014-11-11

18

PLC hybrid integration technology and its application to photonic components

Silica-based planar lightwave circuit (PLC) hybrid integration is a promising way to provide highly functional photonic components. This paper is an overview of recent progress in PLC hybrid integration technology including optoelectronic semiconductor devices for the hybrid integration, various devices for wavelength-division multiplexing, and all-optical time-division multiplexing

Kuniharu Kato; Yuichi Tohmori

2000-01-01

19

Mean field simulation for Monte Carlo integration Part II : Feynman-Kac models

Mean field simulation for Monte Carlo integration, Part II: Feynman-Kac models. P. Del Moral, INRIA. Keywords: cloning, splitting, condensation, resampled Monte Carlo, enrichment, go with the winner, subset simulation. ... Mean field simulation for Monte Carlo integration. Chapman & Hall - Maths & Stats, 2012.

Del Moral , Pierre

20

Hybrid LSDA/Diffusion Quantum Monte-Carlo Method for Spin Sequences in Vertical Quantum Dots

We present a new hybrid Diffusion Quantum Monte-Carlo (DQMC)/Local Spin Density Approximation (LSDA) method to compute the electronic structure of vertical quantum dots (VQD). The exact many-body electronic configuration is computed with a realistic confining potential. Our model confirms the atomic-like model of 2D shell structures obeying Hund's rule already predicted by LSDA.

P. Matagne; T. Wilkens; J. P. Leburton; R. Martin

2002-01-01

21

A Concise Force Calculation for Hybrid Monte Carlo with Improved Actions

We present a concise way to calculate force for Hybrid Monte Carlo with improved actions using the fact that changes in thin and smeared link matrices lie in their respective tangent vector spaces. Since hypercubic smearing schemes are very memory intensive, we also present a memory optimized implementation of them.

Nikhil Karthik

2014-01-06

22

Path integral Monte Carlo on a lattice: extended states

The equilibrium properties of a single quantum particle (qp) interacting with a classical gas are investigated for a wide range of temperatures that explores the system's behavior in the classical as well as the quantum regime. Both the quantum particle and the atoms are restricted to the sites of a one-dimensional lattice. A path-integral formalism is developed within the context of the canonical ensemble in which the quantum particle is represented by a closed, variable-step random walk on the lattice. Monte Carlo methods are employed to determine the system's properties. For the case of a free particle, analytical expressions for the energy, its fluctuations, and the qp-qp correlation function are derived and compared with the Monte Carlo simulations. To test the usefulness of the path integral formalism, the Metropolis algorithm is employed to determine the equilibrium properties of the qp for a periodic interaction potential, forcing the qp to occupy extended states. We consider a striped potential in one dimension, where every other lattice site is occupied by an atom with potential $\epsilon$, and every other lattice site is empty. This potential serves as a stress test for the path integral formalism because of its rapid site-to-site variation. An analytical solution was determined in this case by utilizing Bloch's theorem due to the periodicity of the potential. Comparisons of the potential energy, the total energy, the energy fluctuations and the correlation function are made between the results of the Monte Carlo simulations and the analytical calculations.

Mark O'Callaghan; Bruce N. Miller

2014-02-08
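The abstract above applies the Metropolis algorithm on a lattice with a striped potential. As a loose classical analogue (not the path-integral formalism itself), here is a Metropolis sketch for a single particle hopping on a ring with alternating site energies, whose equilibrium occupancies can be checked against the Boltzmann factor; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

N, beta, eps = 10, 1.0, 1.0
# Striped potential: energy eps on even sites, 0 on odd sites.
V = np.array([eps if i % 2 == 0 else 0.0 for i in range(N)])

site = 0
counts = np.zeros(N)
for _ in range(200_000):
    new = (site + rng.choice([-1, 1])) % N              # propose a hop to a neighbor
    if rng.random() < np.exp(-beta * (V[new] - V[site])):  # Metropolis acceptance
        site = new
    counts[site] += 1

p = counts / counts.sum()
ratio = p[V == 0.0].mean() / p[V == eps].mean()
print(ratio, np.exp(beta * eps))   # occupancy ratio vs Boltzmann factor
```

In the paper the walker is a closed imaginary-time path rather than a single site, but the accept/reject rule driving the sampling is the same.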

23

Path Integral Monte Carlo Simulation of Hot, Dense Hydrogen

NASA Astrophysics Data System (ADS)

Path integral Monte Carlo simulations have been applied to study the hot, dense hydrogen at Mega-bar pressures corresponding to the density and temperature range of 1 < rs < 14 and 5000 K < T < 1000000 K. We determine the equation of state and study the phase diagram including the molecular, atomic and plasma regime. We discuss the effects of different fermion nodes, which we improved by developing a variational thermal density matrix [1]. Furthermore, we calculate the deuterium Hugoniot [2] and compare with shock wave experiments. [1] B. Militzer, E.L. Pollock, ``Variational Density Matrix Method for Warm Condensed Matter and Application to Dense Hydrogen'', Phys. Rev. E 61 (2000) 3470. [2] B. Militzer, D. M. Ceperley, "Path Integral Monte Carlo Calculation of the Deuterium Hugoniot", Phys. Rev. Lett. 85 (2000) 1890.

Militzer, Burkhard

2001-03-01

24

Longitudinal development of extensive air showers: Hybrid code SENECA and full Monte Carlo

NASA Astrophysics Data System (ADS)

New experiments, exploring the ultra-high energy tail of the cosmic ray spectrum with unprecedented detail, are exerting a severe pressure on extensive air shower modelling. Detailed fast codes are in need in order to extract and understand the richness of information now available. Some hybrid simulation codes have been proposed recently to this effect (e.g., the combination of the traditional Monte Carlo scheme and system of cascade equations or pre-simulated air showers). In this context, we explore the potential of SENECA, an efficient hybrid tri-dimensional simulation code, as a valid practical alternative to full Monte Carlo simulations of extensive air showers generated by ultra-high energy cosmic rays. We extensively compare hybrid method with the traditional, but time consuming, full Monte Carlo code CORSIKA which is the de facto standard in the field. The hybrid scheme of the SENECA code is based on the simulation of each particle with the traditional Monte Carlo method at two steps of the shower development: the first step predicts the large fluctuations in the very first particle interactions at high energies while the second step provides a well detailed lateral distribution simulation of the final stages of the air shower. Both Monte Carlo simulation steps are connected by a cascade equation system which reproduces correctly the hadronic and electromagnetic longitudinal profile. We study the influence of this approach on the main longitudinal characteristics of proton, iron nucleus and gamma induced air showers and compare the predictions of the well known CORSIKA code using the QGSJET hadronic interaction model.

Ortiz, Jeferson A.; Medina-Tanco, Gustavo; de Souza, Vitor

2005-06-01

25

Longitudinal development of extensive air showers: hybrid code SENECA and full Monte Carlo

New experiments, exploring the ultra-high energy tail of the cosmic ray spectrum with unprecedented detail, are exerting a severe pressure on extensive air shower modeling. Detailed fast codes are needed in order to extract and understand the richness of information now available. Some hybrid simulation codes have been proposed recently to this effect (e.g., the combination of the traditional Monte Carlo scheme and system of cascade equations or pre-simulated air showers). In this context, we explore the potential of SENECA, an efficient hybrid tridimensional simulation code, as a valid practical alternative to full Monte Carlo simulations of extensive air showers generated by ultra-high energy cosmic rays. We extensively compare the hybrid method with the traditional, but time consuming, full Monte Carlo code CORSIKA which is the de facto standard in the field. The hybrid scheme of the SENECA code is based on the simulation of each particle with the traditional Monte Carlo method at two steps of the shower development: the first step predicts the large fluctuations in the very first particle interactions at high energies while the second step provides a well detailed lateral distribution simulation of the final stages of the air shower. Both Monte Carlo simulation steps are connected by a cascade equation system which reproduces correctly the hadronic and electromagnetic longitudinal profile. We study the influence of this approach on the main longitudinal characteristics of proton-induced air showers and compare the predictions of the well known CORSIKA code using the QGSJET hadronic interaction model.

Jeferson A. Ortiz; Gustavo Medina Tanco; V. de Souza

2004-11-15

26

Hybrid Mie-MCML Monte Carlo simulation of light propagation in skin layers

NASA Astrophysics Data System (ADS)

Monte Carlo modeling of light transport in multi-layered tissues (MCML) has been used for simulating light transport in human skin layers. The Monte Carlo simulations can perform ray tracing of light on the basis of optical energy. A hybrid simulator combining MCML with Mie scattering theory (HMCS) has been developed in this study to analyze light propagating in human skin on the amplitude basis. The HMCS and MCML are compared in terms of diffused light intensity profile in the skin surface and photon fluence in the penetration for the three-layered model of skin tissue.

Kawai, Yu; Iwai, Toshiaki

2014-08-01

27

The rigorous 2-step (R2S) method uses three-dimensional Monte Carlo transport simulations to calculate the shutdown dose rate (SDDR) in fusion reactors. Accurate full-scale R2S calculations are impractical in fusion reactors because they require calculating space- and energy-dependent neutron fluxes everywhere inside the reactor. The use of global Monte Carlo variance reduction techniques was suggested for accelerating the neutron transport calculation of the R2S method. The prohibitive computational costs of these approaches, which increase with the problem size and amount of shielding materials, inhibit their use in the accurate full-scale neutronics analyses of fusion reactors. This paper describes a novel hybrid Monte Carlo/deterministic technique that uses the Consistent Adjoint Driven Importance Sampling (CADIS) methodology but focuses on multi-step shielding calculations. The Multi-Step CADIS (MS-CADIS) method speeds up the Monte Carlo neutron calculation of the R2S method using an importance function that represents the importance of the neutrons to the final SDDR. Using a simplified example, preliminary results showed that the use of MS-CADIS enhanced the efficiency of the neutron Monte Carlo simulation of an SDDR calculation by a factor of 550 compared to standard global variance reduction techniques, and that the increase over analog Monte Carlo is higher than 10,000.

Ibrahim, Ahmad M [ORNL]; Peplow, Douglas E. [ORNL]; Peterson, Joshua L [ORNL]; Grove, Robert E [ORNL]

2013-01-01

28

When Are Quasi-Monte Carlo Algorithms Efficient for High Dimensional Integrals?

Recently, quasi-Monte Carlo algorithms have been successfully used for multivariate integration of high dimension d, and were significantly more efficient than Monte Carlo algorithms. The existing theory of the worst case error bounds of quasi-Monte Carlo algorithms does not explain this phenomenon. This paper presents a partial answer to why quasi-Monte Carlo algorithms can work well for arbitrarily large d. It is

Ian H. Sloan; Henryk Wozniakowski

1998-01-01
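Quasi-Monte Carlo replaces pseudo-random points with a low-discrepancy sequence. A minimal comparison using the base-2 van der Corput sequence (a standard construction, not taken from the paper; the integrand is an illustrative choice):

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    seq = np.empty(n)
    for i in range(n):
        x, denom, k = 0.0, 1.0, i + 1
        while k > 0:
            denom *= base
            k, rem = divmod(k, base)   # reflect base-b digits about the point
            x += rem / denom
        seq[i] = x
    return seq

f = lambda x: x ** 2   # exact integral over [0, 1] is 1/3
n = 4096
rng = np.random.default_rng(4)

err_mc  = abs(f(rng.uniform(size=n)).mean() - 1 / 3)
err_qmc = abs(f(van_der_corput(n)).mean() - 1 / 3)
print(err_mc, err_qmc)   # the QMC error is typically much smaller
```

Plain MC error decays like n^(-1/2) regardless of dimension; the QMC error for smooth integrands decays nearly like n^(-1), which is the gap the abstract's worst-case theory struggles to explain for large d.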

29

Hybrid Monte Carlo with Wilson Dirac operator on the Fermi GPU

In this article we present our implementation of a Hybrid Monte Carlo algorithm for Lattice Gauge Theory using two degenerate flavours of Wilson-Dirac fermions on a Fermi GPU. We find that using registers instead of global memory speeds up the code by almost an order of magnitude. To map the array variables to scalars, so that the compiler puts them in the registers, we use code generators. Our final program is more than 10 times faster than a generic single CPU.

Abhijit Chakrabarty; Pushan Majumdar

2012-07-10

30

Hybrid Monte Carlo simulation of polymer chains. A. Irbäck, Department of Theoretical Physics. We develop the hybrid Monte Carlo method for simulations of single off-lattice polymer chains. ... (ln N) ... Monte Carlo methods are a well-established tool in the study of polymer models ...

Irbäck, Anders

31

A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems

Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. 
Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model) for deep penetration problems such as examined in this paper. In this research, we investigate the application of a variant of the hybrid Monte Carlo-deterministic method proposed by Cooper and Larsen to global deep penetration problems involving binary stochastic media. To our knowledge, hybrid Monte Carlo-deterministic methods have not previously been applied to problems involving a stochastic medium. We investigate two approaches for computing the approximate deterministic estimate of the forward scalar flux distribution used to automatically generate the weight windows. The first approach uses the atomic mix approximation to the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. The second approach uses the Levermore-Pomraning model for the binary stochastic medium transport problem and a low-order discrete ordinates angular approximation. In both cases, we use Monte Carlo Algorithm B with weight windows automatically generated from the approximate forward scalar flux distribution to obtain the solution of the transport problem.

Keady, K P; Brantley, P

2010-03-04
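The hybrid methods above rest on non-analog Monte Carlo tricks such as implicit capture (survival biasing) with Russian roulette. A minimal 1D rod-model transport sketch contrasting analog absorption with implicit capture; the geometry, cross-sections, and roulette threshold are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

SIG_T, C, WIDTH = 1.0, 0.5, 2.0   # total cross-section, scatter ratio, slab width

def transmission(n_hist, implicit=False):
    """1D 'rod-model' transport: directions are +/-1, isotropic scattering."""
    score = 0.0
    for _ in range(n_hist):
        x, mu, w = 0.0, 1.0, 1.0
        while True:
            x += mu * (-np.log(rng.random()) / SIG_T)   # sample flight distance
            if x >= WIDTH:
                score += w          # transmitted: tally the carried weight
                break
            if x < 0.0:
                break               # escaped back out of the slab
            if implicit:
                w *= C              # survival biasing: no absorption kill
                if w < 0.1:         # Russian roulette on low-weight particles
                    if rng.random() < w / 0.5:
                        w = 0.5     # survivor weight preserves the expectation
                    else:
                        break
            elif rng.random() > C:
                break               # analog absorption
            mu = 1.0 if rng.random() < 0.5 else -1.0   # rod-model scattering
    return score / n_hist

t_analog = transmission(50_000)
t_implicit = transmission(50_000, implicit=True)
print(t_analog, t_implicit)   # unbiased estimates of the same transmission
```

Both estimators target the same transmission probability; implicit capture keeps more histories alive deep in the slab, which is the effect the CADIS-style importance schemes above amplify systematically.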

32

A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport

Highlights: We introduce a variance reduction scheme for Monte Carlo (MC) transport. The primary application is atmospheric remote sensing. The technique first solves the adjoint problem using a deterministic solver. Next, the adjoint solution is used as an importance function for the MC solver. The adjoint problem is solved quickly since it ignores the volume. - Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficient are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently acceleration) is achieved in the presence of atmospheric interactions.

Bal, Guillaume, E-mail: gb2030@columbia.edu [Department of Applied Physics and Applied Mathematics, Columbia University, 200 S.W. Mudd Building, 500 W. 120th Street, New York, NY 10027 (United States); Davis, Anthony B., E-mail: Anthony.B.Davis@jpl.nasa.gov [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Mail Stop 169-237, Pasadena, CA 91109 (United States); Kavli Institute for Theoretical Physics, Kohn Hall, University of California, Santa Barbara, CA 93106-4030 (United States); Langmore, Ian, E-mail: ianlangmore@gmail.com [Department of Applied Physics and Applied Mathematics, Columbia University, 200 S.W. Mudd Building, 500 W. 120th Street, New York, NY 10027 (United States)

2011-08-20

33

MODEL DEVELOPMENT FOR INTEGRATED HYBRID ELECTRIC VEHICLE DYNAMIC STABILITY SYSTEMS

This study expanded an existing full car dynamic model (HVOSM.VD2) to enable simulation of electric, hybrid electric, and fuel cell vehicles with integrated vehicle stability systems. A prototype range extending series hybrid vehicle was constructed with independent front wheel drives. A hybrid vehicle stability assist (VSA) algorithm was developed to perform proportional control of yaw rate through left/right distribution of

Joel R. Anstrom

2003-01-01

34

Hybrid Parallel Programming Models for AMR Neutron Monte-Carlo Transport

NASA Astrophysics Data System (ADS)

This paper deals with High Performance Computing (HPC) applied to neutron transport theory on complex geometries, thanks to both an Adaptive Mesh Refinement (AMR) algorithm and a Monte-Carlo (MC) solver. Several parallelism models are presented and analyzed in this context, among them shared memory and distributed memory ones such as Domain Replication and Domain Decomposition, together with hybrid strategies. The study is illustrated by weak and strong scalability tests on complex benchmarks on several thousands of cores thanks to the petaflopic supercomputer Tera100.

Dureau, David; Poëtte, Gaël

2014-06-01

35

A Hybrid (Monte-Carlo/Deterministic) Approach for Multi-Dimensional Radiation Transport

A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficient are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently acceleration) is achieved in the presence of atmospheric interactions.

Guillaume Bal; Anthony Davis; Ian Langmore

2011-05-07

36

Hybrid integrated optics in volume holographic photopolymer

NASA Astrophysics Data System (ADS)

Traditional planar lightwave circuits fabricated from lithographically-patterned waveguides in glasses, semi-conductors or polymers cannot accommodate the wide range of materials required by typical optical devices. In addition, such waveguides are nearly always defined in the material surface and thus can support only a limited density of interconnects and suffer poor performance at waveguide crossings. Furthermore, the inflexibility of lithographic approaches - including both waveguides and "silicon-bench" methods - requires optical sub-components with unreasonable and expensive tolerances. We propose an alternative integrated optics platform based on 3D direct-write lithography into an optically addressable encapsulant. Arbitrary micro-optics are first embedded in a liquid monomer which is then cured into a semi-solid pre-polymer. It is essential that this step take place with minimal shrinkage to avoid stresses. A scanning confocal microscope then nondestructively identifies the component locations and their tolerances. The controller customizes the circuit design to accommodate these tolerances and then scans a 0.3 to 0.6 NA focus within the volume of the holographic polymer to create waveguides, lenses or other passive interconnects with one micron resolution. A final incoherent exposure cures and solidifies the polymer, finishing the process. The resulting hybrid optoelectronic circuits contain 3D routed waveguides interconnecting active and passive micro-optic devices in environmentally robust, hermetically sealed packages. A feature of particular interest is the ability to write waveguides directly off of the tips of embedded fibers, passively interfacing the circuits to fiber. We show that polymers developed for holographic data storage have the properties required for this application.

McLeod, Robert R.; Sullivan, Amy C.; Grabowski, Matthew W.; Scott, Timothy F.

2004-10-01

37

Optofluidic hybrid platform with integrated solid core waveguides

NASA Astrophysics Data System (ADS)

An optofluidic hybrid platform based on hybrid liquid core ARROW waveguides has been fabricated and tested. A solid core hybrid ARROW was integrated in a self-aligned optical configuration with the ARROW optofluidic channel for improved collection efficiency. The platform was fabricated using a modular approach. The microfluidic system was realized entirely in PDMS using a layered structure, while the optical part was realized by developing a hybrid silicon/PDMS solution. The performance of the system has been tested by carrying out fluorescence measurements on Cy5 water solutions, obtaining an LOD of 2.5 nM.

Testa, G.; Persichetti, G.; Sarro, P. M.; Bernini, R.

2014-03-01

38

Higher order and infinite Trotter-number extrapolations in path integral Monte Carlo

Improvements beyond the primitive approximation in the path integral Monte Carlo method are explored both in a model problem and in real systems. Two different strategies are studied: The Richardson extrapolation on top of the path integral Monte Carlo data and the Takahashi-Imada action. The Richardson extrapolation, mainly combined with the primitive action, always reduces the number-of-beads dependence, helps in

L. Brualla; K. Sakkos; J. Boronat; J. Casulleras

2004-01-01
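Richardson extrapolation as described above combines two estimates at different Trotter discretizations to cancel the leading error term. A minimal sketch, using a synthetic estimator with a known O(h^2) error in place of real path integral Monte Carlo data:

```python
def richardson(f, h, order=2):
    """Combine estimates at steps h and h/2 to cancel the leading
    O(h**order) error term: (2**order * f(h/2) - f(h)) / (2**order - 1)."""
    k = 2 ** order
    return (k * f(h / 2.0) - f(h)) / (k - 1)

# Synthetic "primitive action" estimator with a Trotter-like error:
# f(h) = A + B*h**2 + C*h**4, so the exact (h -> 0) value is A.
A, B, C = 1.0, 0.7, -0.3
f = lambda h: A + B * h ** 2 + C * h ** 4

plain = f(0.2)                     # direct estimate at finite step
extrapolated = richardson(f, 0.2)  # h**2 term cancelled exactly
```

With noisy Monte Carlo data the cancellation is only approximate, and the statistical errors of the two estimates add in the combination.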

39

QYMSYM: A GPU-accelerated hybrid symplectic integrator

NASA Astrophysics Data System (ADS)

QYMSYM is a GPU-accelerated second-order hybrid symplectic integrator that identifies close approaches between particles and switches from symplectic to Hermite algorithms for particles that require higher resolution integrations. This is a parallel CUDA code that runs on a graphics card, putting its many processors to work while taking advantage of fast shared memory.

Moore, Alexander; Quillen, Alice C.

2012-10-01
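A second-order symplectic integrator of the kind QYMSYM builds on is the leapfrog (velocity-Verlet) scheme, whose energy error stays bounded instead of drifting. A minimal serial sketch on a harmonic oscillator (illustrative only; the actual code is a CUDA N-body integrator):

```python
def leapfrog(q, p, force, dt, n_steps):
    """Second-order symplectic (leapfrog / velocity-Verlet) integration
    for a unit-mass particle."""
    p += 0.5 * dt * force(q)          # initial half kick
    for _ in range(n_steps - 1):
        q += dt * p                   # drift
        p += dt * force(q)            # full kick
    q += dt * p                       # final drift
    p += 0.5 * dt * force(q)          # final half kick
    return q, p

# Harmonic oscillator: H = p**2/2 + q**2/2, force = -q.
q0, p0 = 1.0, 0.0
q, p = leapfrog(q0, p0, lambda x: -x, dt=0.05, n_steps=2000)
energy_drift = abs(0.5 * (p * p + q * q) - 0.5 * (p0 * p0 + q0 * q0))
```

The switch to a Hermite integrator during close approaches trades this long-term energy behavior for local accuracy where forces change rapidly.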

40

Hybrid Online Education: Identifying Integration Models Using Adventure Learning

ERIC Educational Resources Information Center

In this paper we sought to understand how teachers chose to integrate a hybrid online education program in their classrooms, how students responded to this choice, and how students' experiences were influenced by the integration model chosen by the teachers. Data collected via classroom observations, personal interviews, and focus groups suggest…

Doering, Aaron; Veletsianos, George

2008-01-01

41

NASA Astrophysics Data System (ADS)

Hybrid schemes combining the strength of molecular dynamics (MD) and Metropolis Monte Carlo (MC) offer a promising avenue to improve the sampling efficiency of computer simulations of complex systems. A number of recently proposed hybrid methods consider new configurations generated by driving the system via a non-equilibrium MD (neMD) trajectory, which are subsequently treated as putative candidates for Metropolis MC acceptance or rejection. To obey microscopic detailed balance, it is necessary to alter the momentum of the system at the beginning and/or the end of the neMD trajectory. This strict rule then guarantees that the random walk in configurational space generated by such a hybrid neMD-MC algorithm will yield the proper equilibrium Boltzmann distribution. While a number of different constructs are possible, the most commonly used prescription has been to simply reverse the momenta of all the particles at the end of the neMD trajectory ("one-end momentum reversal"). Surprisingly, it is shown here that the choice of momentum reversal prescription can have a considerable effect on the rate of convergence of the hybrid neMD-MC algorithm, with the simple one-end momentum reversal encountering particularly acute problems. In these neMD-MC simulations, different regions of configurational space end up being essentially isolated from one another due to a very small transition rate between regions. In the worst-case scenario, it is almost as if the configurational space does not constitute a single communicating class that can be sampled efficiently by the algorithm, and extremely long neMD-MC simulations are needed to obtain proper equilibrium probability distributions. To address this issue, a novel momentum reversal prescription, symmetrized with respect to both the beginning and the end of the neMD trajectory ("symmetric two-ends momentum reversal"), is introduced.
Illustrative simulations demonstrate that the hybrid neMD-MC algorithm robustly yields a correct equilibrium probability distribution with this prescription.

Chen, Yunjie; Roux, Benoît

2014-09-01
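The hybrid neMD-MC construction discussed above reduces, in its simplest equilibrium limit, to standard hybrid Monte Carlo: draw a fresh momentum, integrate Hamiltonian dynamics, apply a momentum reversal, and accept or reject on the change in total energy. A toy sketch for a 1D Gaussian target (our own illustration; it does not reproduce the paper's nonequilibrium driving or its symmetric two-ends prescription):

```python
import math
import random

rng = random.Random(0)

def hmc_step(q, dt=0.3, n_steps=10):
    """One hybrid MD-MC step targeting a 1D standard normal, U(q) = q**2/2.
    Fresh momentum, leapfrog trajectory, momentum reversal at the end,
    then Metropolis acceptance on the change in H = U + p**2/2."""
    p = rng.gauss(0.0, 1.0)
    h_old = 0.5 * q * q + 0.5 * p * p
    q_new, p_new = q, p
    p_new -= 0.5 * dt * q_new              # half kick (force = -q)
    for _ in range(n_steps - 1):
        q_new += dt * p_new                # drift
        p_new -= dt * q_new                # full kick
    q_new += dt * p_new
    p_new -= 0.5 * dt * q_new              # final half kick
    p_new = -p_new                         # momentum reversal
    h_new = 0.5 * q_new * q_new + 0.5 * p_new * p_new
    if rng.random() < math.exp(min(0.0, h_old - h_new)):
        return q_new                       # accept
    return q                               # reject

samples, q = [], 0.0
for i in range(20000):
    q = hmc_step(q)
    if i >= 1000:                          # discard burn-in
        samples.append(q)

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

Because the momentum is resampled at every step, the reversal is irrelevant here; the paper's point is precisely that this choice stops being innocuous in the nonequilibrium setting.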

42

Using Supervised Learning to Improve Monte Carlo Integral Estimation

(importance sampling, quasi-Monte Carlo, etc.) without adding bias. An extensive set of experiments the fuel-burn metrics of future commercial aircraft and sonic boom loudness measures, and the efficiency, computational cost, significant increases in accuracy are gained.

Alonso, Juan J.

43

This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. 
The hybrid method development is based on an extension of the FW-CADIS method, which attempts to achieve uniform statistical uncertainty throughout a designated problem space. The MC DD development is being implemented in conjunction with the Denovo deterministic radiation transport package to have direct access to the 3-D, massively parallel discrete-ordinates solver (to support the hybrid method) and the associated parallel routines and structure. This paper describes the hybrid method, its implementation, and initial testing results for a realistic 2-D quarter core pressurized-water reactor model and also describes the MC DD algorithm and its implementation.

Wagner, John C [ORNL; Mosher, Scott W [ORNL; Evans, Thomas M [ORNL; Peplow, Douglas E. [ORNL; Turner, John A [ORNL

2011-01-01
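FW-CADIS steers Monte Carlo particles with a weight-window map: particles above the window are split and particles below it are rouletted, preserving the expected total weight. A minimal sketch (the survival-weight choice below is one common convention, not taken from the paper):

```python
import random

rng = random.Random(1)

def apply_weight_window(weight, w_low, w_high):
    """Return surviving particle weights after a weight window [w_low, w_high].
    Splitting conserves total weight exactly; roulette conserves it in
    expectation."""
    if weight > w_high:
        n = int(weight / w_high) + 1          # split into n equal pieces
        return [weight / n] * n
    if weight < w_low:
        w_survive = 0.5 * (w_low + w_high)    # one common survival weight
        if rng.random() < weight / w_survive:
            return [w_survive]                # survivor is promoted
        return []                             # killed by roulette
    return [weight]                           # already inside the window

# Splitting: a weight-5 particle in a [0.5, 2.0] window becomes 3 pieces.
split_total = sum(apply_weight_window(5.0, 0.5, 2.0))

# Roulette: expected surviving weight equals the incoming weight.
trials = 200000
roulette_mean = sum(sum(apply_weight_window(0.1, 0.5, 2.0))
                    for _ in range(trials)) / trials
```

The deterministic adjoint solution enters by setting w_low and w_high spatially, so that particles headed toward important regions are multiplied and unimportant ones are cheaply terminated.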

45

Hybrid Monte-Carlo simulation of interacting tight-binding model of graphene

In this work we present results of Hybrid Monte Carlo simulations of the tight-binding Hamiltonian of graphene, coupled to an instantaneous long-range two-body potential which is modeled by a Hubbard-Stratonovich auxiliary field. We present an investigation of the spontaneous breaking of the sublattice symmetry, which corresponds to a phase transition from a conducting to an insulating phase and which occurs when the effective fine-structure constant $\alpha$ of the system crosses above a certain threshold $\alpha_C$. Qualitative comparisons to earlier works on the subject (which used larger system sizes and higher statistics) are made and it is established that $\alpha_C$ is of a plausible magnitude in our simulations. Also, we discuss differences between simulations using compact and non-compact variants of the Hubbard field and present a quantitative comparison of distinct discretization schemes of the Euclidean time-like dimension in the Fermion operator.

Dominik Smith; Lorenz von Smekal

2013-11-05

46

Global Evaluation of Prompt Dose Rates in ITER Using Hybrid Monte Carlo/Deterministic Techniques

The hybrid Monte Carlo (MC)/deterministic techniques - Consistent Adjoint Driven Importance Sampling (CADIS) and Forward Weighted CADIS (FW-CADIS) - enable the full 3-D modeling of very large and complicated geometries. The ability to perform global MC calculations for nuclear parameters throughout the entire ITER reactor was demonstrated. The 2 m biological shield (bioshield) reduces the total prompt operational dose by six orders of magnitude. The divertor cryo-pump port results in a peaking factor of 120 in the prompt operational dose rate behind the bioshield. The peak values of the prompt dose rates at the back surface of the bioshield were 240 µSv/hr and 94 µSv/hr, corresponding to the regions behind the divertor cryo-pump port and the equatorial port, respectively.

Ibrahim, A. [University of Wisconsin; Sawan, M. [University of Wisconsin; Mosher, Scott W [ORNL; Evans, Thomas M [ORNL; Peplow, Douglas E. [ORNL; Wilson, P. [University of Wisconsin; Wagner, John C [ORNL

2011-01-01

47

Lazy skip-lists: An algorithm for fast hybridization-expansion quantum Monte Carlo

NASA Astrophysics Data System (ADS)

The solution of a generalized impurity model lies at the heart of electronic structure calculations with dynamical mean field theory. In the strongly correlated regime, the method of choice for solving the impurity model is the hybridization-expansion continuous-time quantum Monte Carlo (CT-HYB). Enhancements to the CT-HYB algorithm are critical for bringing new physical regimes within reach of current computational power. Taking advantage of the fact that the bottleneck in the algorithm is a product of hundreds of matrices, we present optimizations based on the introduction and combination of two concepts of more general applicability: (a) skip lists and (b) fast rejection of proposed configurations based on matrix bounds. Considering two very different test cases with d electrons, we find speedups of ~25 up to ~500 compared to the direct evaluation of the matrix product. Even larger speedups are likely with f electron systems and with clusters of correlated atoms.

Sémon, P.; Yee, Chuck-Hou; Haule, Kristjan; Tremblay, A.-M. S.

2014-08-01
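The "fast rejection based on matrix bounds" idea can be illustrated with the submultiplicativity of the Frobenius norm: the norm of a long matrix product is bounded by the product of the individual norms, which costs far less than forming the product itself. A toy sketch (pure Python, small matrices; the actual CT-HYB implementation uses tighter, determinant-oriented bounds):

```python
import random

rng = random.Random(2)

def frob(m):
    """Frobenius norm of a matrix given as a list of rows."""
    return sum(x * x for row in m for x in row) ** 0.5

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def product_norm_bound(mats):
    """Cheap upper bound on ||A1 A2 ... Ak||_F via submultiplicativity:
    the norm of the product never exceeds the product of the norms, so a
    proposal can sometimes be rejected without forming the full product."""
    bound = 1.0
    for m in mats:
        bound *= frob(m)
    return bound

mats = [[[rng.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(4)]
        for _ in range(6)]
prod = mats[0]
for m in mats[1:]:
    prod = matmul(prod, m)

bound = product_norm_bound(mats)
actual = frob(prod)
```

If even the optimistic bound on the acceptance ratio falls below the Metropolis random number already drawn, the proposal can be rejected with O(k) work instead of O(k) matrix multiplications.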

48

Monte Carlo-fluid hybrid model of the accumulation of dust particles at sheath edges in radio. The processes governing the transport of charged dust in the model are drift of partially shielded particles in plasmas used for fabrication of microelectronic components because it can contaminate the wafer

Kushner, Mark

49

A Deterministic-Monte Carlo Hybrid Method for Time-Dependent Neutron Transport Problems

A new deterministic-Monte Carlo hybrid solution technique is derived for the time-dependent transport equation. This new approach is based on dividing the time domain into a number of coarse intervals and expanding the transport solution in a series of polynomials within each interval. The solutions within each interval can be represented in terms of arbitrary source terms by using precomputed response functions. In the current work, the time-dependent response function computations are performed using the Monte Carlo method, while the global time-step march is performed deterministically. This work extends previous work by coupling the time-dependent expansions to space- and angle-dependent expansions to fully characterize the 1D transport response/solution. More generally, this approach represents an incremental extension of the steady-state coarse-mesh transport method that is based on global-local decompositions of large neutron transport problems. A homogeneous slab problem is discussed as an example of the new developments.

Justin Pounders; Farzad Rahnema

2001-10-01

50

Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class supercomputer.

Ibrahim, Ahmad M [ORNL]; Wilson, P. [University of Wisconsin]; Sawan, M. [University of Wisconsin]; Mosher, Scott W [ORNL]; Peplow, Douglas E. [ORNL]; Grove, Robert E [ORNL]

2013-01-01

51

Feasibility of a Monte Carlo-deterministic hybrid method for fast reactor analysis

A Monte Carlo and deterministic hybrid method is investigated for the analysis of fast reactors in this paper. Effective multi-group cross section data are generated using a collision estimator in MCNP5. A high-order Legendre scattering cross section data generation module was added to the MCNP5 code. Cross section data generated from MCNP5 and from TRANSX/TWODANT using the homogeneous core model were compared and then applied in the DIF3D code for fast reactor core analysis of a 300 MWe SFR TRU burner core. For this analysis, 9-group macroscopic data were used. In this paper, a hybrid MCNP5/DIF3D calculation was used to analyze the core model: the cross section data were generated using MCNP5, and the k-eff and core power distribution were calculated using the 54-triangle FDM code DIF3D. A whole-core calculation of the heterogeneous core model using MCNP5 was selected as the reference. In terms of k-eff, the 9-group MCNP5/DIF3D analysis has a discrepancy of -154 pcm from the reference solution, while the 9-group TRANSX/TWODANT/DIF3D analysis gives a -1070 pcm discrepancy. (authors)

Heo, W.; Kim, W.; Kim, Y. [Korea Advanced Institute of Science and Technology - KAIST, 291 Daehak-ro, Yuseong-gu, Daejeon, 305-701 (Korea, Republic of)]; Yun, S. [Korea Atomic Energy Research Institute - KAERI, 989-111 Daedeok-daero, Yuseong-gu, Daejeon, 305-353 (Korea, Republic of)]

2013-07-01

52

High-order Path Integral Monte Carlo methods for solving quantum dot problems

The conventional second-order Path Integral Monte Carlo method is plagued with the sign problem in solving many-fermion systems. This is due to the large number of anti-symmetric free fermion propagators that are needed to extract the ground state wave function at large imaginary time. In this work, we show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than 5 free-fermion propagators, can yield accurate quantum dot energies for up to 20 polarized electrons with the use of the Hamiltonian energy estimator.

Chin, Siu A

2014-01-01

53

The Integrated TIGER Series (ITS) is a software package that solves coupled electron-photon transport problems. ITS performs analog photon tracking for energies between 1 keV and 1 GeV. Unlike its deterministic counterpart, the Monte Carlo calculations of ITS do not require a memory-intensive meshing of phase space; however, its solutions carry statistical variations. Reducing these variations is heavily dependent on runtime. Monte Carlo simulations must therefore be both physically accurate and computationally efficient. Compton scattering is the dominant photon interaction above 100 keV and below 5-10 MeV, with higher cutoffs occurring in lighter atoms. In its current model of Compton scattering, ITS corrects the differential Klein-Nishina cross sections (which assume a stationary, free electron) with the incoherent scattering function, a function dependent on both the momentum transfer and the atomic number of the scattering medium. While this technique accounts for binding effects on the scattering angle, it excludes the Doppler broadening the Compton line undergoes because of the momentum distribution in each bound state. To correct for these effects, Ribberfors' relativistic impulse approximation (IA) will be employed to create scattering cross sections differential in both energy and angle for each element. Using the parameterizations suggested by Brusa et al., scattered photon energies and angles can be accurately sampled at high efficiency with minimal physical data. Two-body kinematics then dictates the electron's scattered direction and energy. Finally, the atomic ionization is relaxed via Auger emission or fluorescence. Future work will extend these improvements in incoherent scattering to compounds and to adjoint calculations.

Quirk, Thomas, J., IV (University of New Mexico)

2004-08-01

54

Sequential Monte Carlo filtering techniques applied to integrated navigation systems

This paper addresses the problem of integrated aircraft navigation, more specifically how to integrate inertial navigation with terrain aided positioning. This is a highly nonlinear and non-Gaussian recursive state estimation problem which requires state-of-the-art methods. We propose an algorithm based on the particle filter with particular attention to the complexity of the problem. The proposed algorithm takes

P.-J. Nordlund; F. Gustafsson

2001-01-01
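The particle filter referred to above can be sketched in a few lines: propagate particles through the motion model, weight them by the measurement likelihood, estimate, and resample. A minimal 1D bootstrap filter (the random-walk state and arctan measurement model are our own illustrative choices, not the paper's aircraft navigation model):

```python
import math
import random

rng = random.Random(3)

def particle_filter(observations, n_particles=2000, q_std=0.5, r_std=0.2):
    """Minimal bootstrap particle filter: random-walk state observed
    through a nonlinear measurement y = atan(x) + noise."""
    particles = [rng.gauss(0.0, 2.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Propagate through the motion model.
        particles = [x + rng.gauss(0.0, q_std) for x in particles]
        # Weight by the (Gaussian) measurement likelihood.
        weights = [math.exp(-0.5 * ((y - math.atan(x)) / r_std) ** 2)
                   for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        # Multinomial resampling.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# Simulate a slowly drifting true state and noisy measurements.
truth, xs, ys = 1.5, [], []
for _ in range(30):
    truth += rng.gauss(0.0, 0.1)
    xs.append(truth)
    ys.append(math.atan(truth) + rng.gauss(0.0, 0.2))

estimates = particle_filter(ys)
final_error = abs(estimates[-1] - xs[-1])
```

In the navigation application the likelihood would come from a terrain elevation database, which is what makes the problem non-Gaussian and ill-suited to Kalman-type filters.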

55

Hybrid Monte Carlo method for simulation of two-component aerosol coagulation and phase segregation.

The paper presents the development of a hybrid Monte Carlo (MC) method for the simulation of the simultaneous coagulation and phase segregation of an immiscible two-component binary aerosol. The model is intended to qualitatively model our prior studies of the synthesis of mixed metal oxides for which phase-segregated domains have been observed in molten nanodroplets. In our previous works (J. Aerosol Sci.32, 1479 (2001); Chem. Eng. Sci.56, 5763 (2001); submitted for publication) we developed sectional and monodisperse models where the internal state of the aerosol particles was described. These methods have certain limitations and it is difficult to include additional physical effects into the framework. Our new approach combines both constant volume and constant number Monte Carlo methods. Similar to our previous models, we assume that the phase segregation is kinetically controlled. The MC approach allows us to compute the mean number of enclosures (minor phase) per droplet, average enclosure volume, and the width of the enclosure size distribution. The results show that asymptotic behavior of enclosure distribution exists that is independent of initial conditions, which is very close to the continuum self-preserving distribution. Temperature is a key parameter because it allows for a significant change in the internal transport rate within each droplet. In particular, increasing the temperature significantly enhances the Brownian coagulation rate and lowers the number of enclosures per droplet. As a result, the MC results indicate that the growth of the minor phase can be moderated quite dramatically by small changes in system temperature. These results serve to illustrate the utility of this synthesis approach to the controlled growth of nanoparticles through the use of a majority matrix to slow down the encounter frequency of the minor phase and therefore its particle size. PMID:16290566

Efendiev, Y; Zachariah, M R

2002-05-01
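A constant-number Monte Carlo step of the kind combined in this hybrid method keeps the ensemble size fixed: a random pair coagulates, and a randomly chosen particle is duplicated to refill the ensemble. A minimal sketch with a size-independent kernel (illustrative only; the paper's model additionally tracks the internal phase-segregated state of each droplet):

```python
import random

rng = random.Random(4)

def constant_number_coagulation(volumes, n_events):
    """Constant-number Monte Carlo with a constant (size-independent)
    coagulation kernel: pick a random pair, merge it, then overwrite the
    second slot with a copy of a randomly chosen particle so the
    ensemble size N stays fixed."""
    n = len(volumes)
    for _ in range(n_events):
        i, j = rng.sample(range(n), 2)
        volumes[i] = volumes[i] + volumes[j]      # coalescence
        volumes[j] = volumes[rng.randrange(n)]    # refill to keep N
    return volumes

volumes = [1.0] * 1000
volumes = constant_number_coagulation(volumes, n_events=2000)
mean_volume = sum(volumes) / len(volumes)
```

Because the simulated ensemble always contains N particles, the physical number concentration is tracked implicitly through the growth of the mean particle volume.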

56

Integrated Hybrid System Architecture for Risk Analysis

NASA Technical Reports Server (NTRS)

A conceptual design has been announced of an expert-system computer program, and the development of a prototype of the program, intended for use as a project-management tool. The program integrates schedule and risk data for the purpose of determining the schedule applications of safety risks and, somewhat conversely, the effects of changes in schedules on changes on safety. It is noted that the design has been delivered to a NASA client and that it is planned to disclose the design in a conference presentation.

Moynihan, Gary P.; Fonseca, Daniel J.; Ray, Paul S.

2010-01-01

57

Monte-Carlo and Quasi-Monte-Carlo Methods for Numerical Integration

We consider the problem of numerical integration in dimension s, with s possibly large; the usual rules need a huge number of nodes as the dimension increases to obtain some accuracy, say an error bound below a prescribed tolerance; this phenomenon is called the "curse of dimensionality".

Henri Faure
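The contrast between Monte-Carlo and quasi-Monte-Carlo rules is easy to demonstrate: replace pseudo-random points with a low-discrepancy sequence such as Halton's, built from van der Corput sequences in coprime bases. A minimal sketch integrating a smooth function over the unit square:

```python
import random

rng = random.Random(5)

def van_der_corput(n, base):
    """n-th element of the van der Corput low-discrepancy sequence."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def halton(n, bases=(2, 3)):
    """n-th point of a 2D Halton sequence (coprime bases)."""
    return tuple(van_der_corput(n, b) for b in bases)

f = lambda x, y: x * y   # smooth integrand; exact integral over [0,1]^2 is 1/4
N = 4096

mc = sum(f(rng.random(), rng.random()) for _ in range(N)) / N   # pseudo-random
qmc = sum(f(*halton(i)) for i in range(1, N + 1)) / N           # low-discrepancy
```

For smooth integrands the quasi-Monte-Carlo error decays roughly like (log N)^s / N, versus the N^(-1/2) of plain Monte Carlo, though the advantage erodes as the dimension s grows.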

58

NASA Astrophysics Data System (ADS)

The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. 
Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.

Whitehead, James Joshua

59

We propose a new method to calculate ground state position dependent observables in quantum many-body systems. The method, which we call the path-integral diffusion Monte Carlo (PI-DMC) method, is essentially a combination of path-integral Monte Carlo (PIMC) and diffusion Monte Carlo (DMC) methods. The distribution resulting from a DMC simulation is further propagated in imaginary time by PIMC sampling. Tests

Bala´zs Hete´nyi; Eran Rabani; B. J. Berne

1999-01-01

61

Equation of state of the hydrogen plasma by path integral Monte Carlo simulation

The equation of state of hydrogen plasma is calculated by the restricted path integral Monte Carlo method. We have investigated the plasma from the classical weak coupling regime to the quantum strongly coupled regime. Good agreement is found with the existing theories for low electronic degeneracy. Inception of molecular formation is observed at low densities and temperatures.

Pierleoni, C.; Ceperley, D.M.; Bernu, B.; Magro, W.R. (Laboratoire de Physique Theorique des Liquides, Universite Pierre Marie Curie, 4 Place Jussieu, 75252 Paris Cedex 05 (France) Centre Europeen de Calcul Atomique et Moleculaire, Ecole Normale Superieure, 46 Allee d'Italie, 69364 Lyon Cedex 07 (France) National Center for Supercomputing Applications, Department of Physics, University of Illinois at Urbana-Champaign, 1110 West Green Street, Urbana, Illinois 61801 (United States))

1994-10-17

62

Superfluid response of 4HeN-N2O clusters probed by path integral Monte Carlo

Superfluid response and dynamical properties of 4HeN-N2O clusters (N = 6-40) are investigated. The simulations employed a newly

Le Roy, Robert J.

63

We have developed a new propagator, called the local parabolic reference (LPR), for use in the numerical evaluation of discretized Feynman path integrals by Metropolis Monte Carlo simulations. The form of the propagator is motivated by fitting a local quadratic reference potential (with positive, negative or zero curvature) to the potential energy surface of interest, and constructing the exact propagator

Cecilia E. Chao; Hans C. Andersen

1997-01-01

64

Importance Sampling and Adjoint Hybrid Methods in Monte Carlo Transport with Reflecting Boundaries

Adjoint methods form a class of importance sampling methods that are used to accelerate Monte Carlo (MC) simulations of transport equations. Ideally, adjoint methods allow for zero-variance MC estimators provided that the solution to an adjoint transport equation is known. Hybrid methods aim at (i) approximately solving the adjoint transport equation with a deterministic method; and (ii) using the solution to construct an unbiased MC sampling algorithm with low variance. The problem with this approach is that both steps can be prohibitively expensive. In this paper, we simplify steps (i) and (ii) by calculating only parts of the adjoint solution. More specifically, in a geometry with limited volume scattering and complicated reflection at the boundary, we consider the situation where the adjoint solution "neglects" volume scattering, thereby significantly reducing the degrees of freedom in steps (i) and (ii). A main application for such a geometry is in remote sensing of the environment using physics-based signal models. Volume scattering is then incorporated using an analog sampling algorithm (or, more precisely, a simple modification of analog sampling called a heuristic sampling algorithm) in order to obtain unbiased estimators. In geometries with weak volume scattering (with a domain of interest of size comparable to the transport mean free path), we numerically demonstrate significant variance reductions and speed-ups (figures of merit).

Guillaume Bal; Ian Langmore

2011-04-13
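The zero-variance idea behind adjoint importance sampling can be seen in one dimension: if samples are drawn from a density proportional to the integrand (the role the adjoint solution plays for the transport equation), every weighted sample contributes the same value. A minimal sketch under illustrative assumptions (the integrand f(x) = 2x and both samplers are hypothetical, not the paper's setup):

```python
import math
import random

def mc_estimate(f, sampler, pdf, n=20000, seed=1):
    """Importance-sampled Monte Carlo estimate of the integral of f over [0, 1]:
    returns (mean, sample variance) of the weights f(X) / p(X)."""
    rng = random.Random(seed)
    vals = [f(x) / pdf(x) for x in (sampler(rng) for _ in range(n))]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, var

# Illustrative integrand on [0, 1]: f(x) = 2x, exact integral = 1.
f = lambda x: 2.0 * x

# Analog (uniform) sampling: p(x) = 1.
analog = mc_estimate(f, lambda rng: rng.random(), lambda x: 1.0)

# "Adjoint-informed" sampling: p(x) = 2x, realized by x = sqrt(U).
# Since p is proportional to f, every weight f/p equals 1 exactly,
# so the estimator has zero variance.
adjoint = mc_estimate(f, lambda rng: math.sqrt(rng.random()), lambda x: 2.0 * x)
```

In practice the adjoint solution is only approximate, so the variance is reduced rather than eliminated, which is the trade-off the hybrid method above exploits.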

65

Thermodynamics of coupled protein adsorption and stability using hybrid Monte Carlo simulations.

A better understanding of changes in protein stability upon adsorption can improve the design of protein separation processes. In this study, we examine the coupling of the folding and the adsorption of a model protein, the B1 domain of streptococcal protein G, as a function of surface attraction using a hybrid Monte Carlo (HMC) approach with temperature replica exchange and umbrella sampling. In our HMC implementation, we are able to use a molecular dynamics (MD) time step that is an order of magnitude larger than in a traditional MD simulation protocol and observe a factor of 2 enhancement in the folding and unfolding rate. To demonstrate the convergence of our systems, we measure the travel of our order parameter, the fraction of native contacts, between folded and unfolded states throughout the length of our simulations. Thermodynamic quantities are extracted with minimum statistical variance using multistate reweighting between simulations at different temperatures and harmonic distance restraints from the surface. The resultant free energies, enthalpies, and entropies of the coupled unfolding and adsorption processes are in qualitative agreement with previous experimental and computational observations, including entropic stabilization of the adsorbed, folded state relative to the bulk on surfaces with low attraction. PMID:24716898

Zhong, Ellen D; Shirts, Michael R

2014-05-01
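The temperature replica exchange used above alongside HMC can be sketched on a toy double-well potential. The neighbour-swap acceptance min(1, exp[(beta_i - beta_j)(E_i - E_j)]) is the standard criterion; the potential, move sizes, and temperature ladder below are illustrative choices, not the paper's setup:

```python
import math
import random

def replica_exchange(betas, n_sweeps=4000, seed=2):
    """Toy parallel tempering on the 1D double well U(x) = (x^2 - 1)^2.
    Each sweep: one Metropolis move per replica, then one attempted
    configuration swap between a random pair of neighbouring temperatures."""
    rng = random.Random(seed)
    U = lambda x: (x * x - 1.0) ** 2
    xs = [1.0 for _ in betas]        # all replicas start in the right well
    swaps_accepted = 0
    visits_left = 0                  # sweeps where the coldest replica sits in the left well
    for _ in range(n_sweeps):
        for i, beta in enumerate(betas):
            trial = xs[i] + rng.uniform(-0.5, 0.5)
            if rng.random() < math.exp(min(0.0, -beta * (U(trial) - U(xs[i])))):
                xs[i] = trial
        i = rng.randrange(len(betas) - 1)          # pick a neighbouring pair
        d = (betas[i] - betas[i + 1]) * (U(xs[i]) - U(xs[i + 1]))
        if rng.random() < math.exp(min(0.0, d)):   # standard swap criterion
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
            swaps_accepted += 1
        if xs[0] < 0.0:
            visits_left += 1
    return swaps_accepted / n_sweeps, visits_left / n_sweeps

# Decreasing beta ladder: cold, intermediate, hot.
rate, frac_left = replica_exchange([5.0, 2.0, 0.5])
```

The hot replica crosses the barrier freely and feeds decorrelated configurations down the ladder, which is why the cold replica visits both wells far more often than it would alone.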

66

Nucleation free-energy barriers with Hybrid Monte-Carlo/Umbrella Sampling.

The aim of this work is to evaluate nucleation free-energy barriers using molecular dynamics (MD). More specifically, we use a combination of Hybrid Monte Carlo (HMC) and an Umbrella Sampling scheme, and compute the crystallisation barrier of NaCl from its melt. First, the convergence and performance of HMC for different time-steps and numbers of MD steps within an HMC cycle are assessed. The calculated potential energies and densities converge regardless of the chosen time-step. However, the acceptance ratio of the Metropolis step within the HMC scheme strongly depends on the time-step and affects the performance. It is shown that the acceptance ratio is close to 100% for time-steps of the order of those commonly used in molecular dynamics runs. We then explore the results obtained with a "non-Metropolised" version of HMC where the MD trajectories are always accepted (omitting the Metropolis criterion) and conclude that they are satisfactory for time-steps below 5 fs. Next, HMC is combined with Umbrella Sampling (HMC/US) to compute the nucleation free-energy for both the standard and the "non-Metropolised" HMC (using a small time-step) and in both cases find excellent agreement with the reported values. To conclude, we explore approximations to the HMC/US technique implementing HMC with isothermal-isobaric MD trajectories. The computed nucleation free-energy curve is coincident, within the statistical error, with previous calculations. PMID:25323418

Gonzalez, M A; Sanz, E; McBride, C; Abascal, J L F; Vega, C; Valeriani, C

2014-10-22
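The HMC scheme assessed above, an MD trajectory followed by a Metropolis test on the total-energy change, can be sketched for a one-dimensional Gaussian target; it also reproduces the time-step dependence of the acceptance ratio that the paper quantifies. The potential and all parameters are illustrative:

```python
import math
import random

def hmc_sample(grad_u, u, x0, dt, n_md=10, n_samples=2000, seed=3):
    """Hybrid Monte Carlo in 1D: leapfrog MD trajectory with resampled
    Gaussian momentum, then a Metropolis accept/reject on the change in
    total energy H = U(x) + p^2/2. Returns (samples, acceptance ratio)."""
    rng = random.Random(seed)
    x, accepted, samples = x0, 0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                  # fresh momentum each cycle
        h0 = u(x) + 0.5 * p * p
        xn, pn = x, p
        pn -= 0.5 * dt * grad_u(xn)              # leapfrog: half kick,
        for step in range(n_md):                 # n_md drifts with full kicks,
            xn += dt * pn
            if step < n_md - 1:
                pn -= dt * grad_u(xn)
        pn -= 0.5 * dt * grad_u(xn)              # closing half kick
        dh = u(xn) + 0.5 * pn * pn - h0
        if rng.random() < math.exp(min(0.0, -dh)):   # Metropolis step
            x = xn
            accepted += 1
        samples.append(x)
    return samples, accepted / n_samples

# Standard normal target: U(x) = x^2 / 2.
u = lambda x: 0.5 * x * x
du = lambda x: x
samples_small, acc_small = hmc_sample(du, u, 0.0, dt=0.1)  # small time-step
samples_big, acc_big = hmc_sample(du, u, 0.0, dt=1.9)      # near stability limit
```

As in the study, the sampled distribution is correct for any stable time-step, but the acceptance ratio degrades as dt grows and the leapfrog energy error increases.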

67

Neutral depletion in inductively coupled plasmas using hybrid-type direct simulation Monte Carlo

Neutral and ion transport phenomena were simulated by a hybrid-type direct simulation Monte Carlo (DSMC) method for a one-dimensional (1D) electrostatic plasma in Ar/N2 mixtures to identify the mechanism of neutral depletion. The results show that gas heating and pressure balance are the main mechanisms of neutral depletion in an inductively coupled plasma. When plasma pressure becomes comparable to neutral pressure in high density plasma sources (Te ≈ 2-5 eV, ne ≈ 10^11-10^12 cm^-3), the total pressure (neutral pressure and plasma pressure) is conserved. Therefore, the finite plasma pressure (mainly electron pressure) reduces the neutral pressure. Neutrals collide with ions that have been accelerated by the ambipolar electric field and with Franck-Condon dissociated atoms, resulting in gas heating. Significant neutral depletion (up to 90%) is found at the typical condition of inductively coupled plasma process reactors. The resulting neutral depletion enhances the plasma transport to the surrounding wall, increases the particle loss, and decreases the plasma density.

Shimada, Masashi; Tynan, George R.; Cattolica, Robert [Department of Mechanical and Aerospace Engineering, and Center for Energy Research, University of California, San Diego, 9500 Gilman Dr., La Jolla, California 92093-0417 (United States)

2008-02-01

68

Metal-catalyzed growth mechanisms of carbon nanotubes (CNTs) were studied by hybrid molecular dynamics-Monte Carlo simulations using a recently developed ReaxFF reactive force field. Using this novel approach, including relaxation effects, a CNT with definable chirality is obtained, and a step-by-step atomistic description of the nucleation process is presented. Both root and tip growth mechanisms are observed. The importance of the relaxation of the network is highlighted by the observed healing of defects. PMID:20939511

Neyts, Erik C; Shibuta, Yasushi; van Duin, Adri C T; Bogaerts, Annemie

2010-11-23

69

Multichip hybrid integration on PLC platform using passive alignment technique

A multi-chip hybrid integration technique on a planar lightwave circuit (PLC) platform achieves bonding accuracy of better than 1.0 μm and adequate bonding strength. This procedure consists of a chip-by-chip alignment step and a simultaneous solder reflowing step. In the chip-by-chip assembly step, opto-electronic chips were successively placed at their optimum positions by passive alignment while keeping the platform temperature

Y. Nakasuga; T. Hashimoto; Y. Yamada; H. Terui; M. Yanagisawa; K. Moriwaki; Y. Akahori; Y. Tohmori; K. Kato; S. Sekine; M. Horiguchi

1996-01-01

70

Novel integration technique for silicon/III-V hybrid laser.

Integrated semiconductor lasers on silicon are one of the most crucial devices to enable low-cost silicon photonic integrated circuits for high-bandwidth optic communications and interconnects. While optical amplifiers and lasers are typically realized in III-V waveguide structures, it is beneficial to have an integration approach which allows flexible and efficient coupling of light between III-V gain media and silicon waveguides. In this paper, we propose and demonstrate a novel fabrication technique and associated transition structure to realize integrated lasers without the constraints of other critical processing parameters such as the starting silicon layer thicknesses. This technique employs epitaxial growth of silicon in a pre-defined trench with taper structures. We fabricate and demonstrate a long-cavity hybrid laser with a narrow linewidth of 130 kHz and an output power of 1.5 mW using the proposed technique. PMID:25401832

Dong, Po; Hu, Ting-Chen; Liow, Tsung-Yang; Chen, Young-Kai; Xie, Chongjin; Luo, Xianshu; Lo, Guo-Qiang; Kopf, Rose; Tate, Alaric

2014-11-01

71

HRMC_1.1: Hybrid Reverse Monte Carlo method with silicon and carbon potentials

NASA Astrophysics Data System (ADS)

The Hybrid Reverse Monte Carlo (HRMC) code models the atomic structure of materials via the use of a combination of constraints including experimental diffraction data and an empirical energy potential. This energy constraint takes the form of either the Environment Dependent Interatomic Potential (EDIP) for carbon and silicon, or the original and modified Stillinger-Weber potentials applicable to silicon. In this version, an update is made to correct an error in the EDIP carbon energy calculation routine.
New version program summary
Program title: HRMC version 1.1
Catalogue identifier: AEAO_v1_1
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAO_v1_1.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 36 991
No. of bytes in distributed program, including test data, etc.: 907 800
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: Any computer capable of running executables produced by the g77 Fortran compiler.
Operating system: Unix, Windows
RAM: Depends on the type of empirical potential used, the number of atoms, and which constraints are employed.
Classification: 7.7
Catalogue identifier of previous version: AEAO_v1_0
Journal reference of previous version: Comput. Phys. Comm. 178 (2008) 777
Does the new version supersede the previous version?: Yes
Nature of problem: Atomic modelling using empirical potentials and experimental data.
Solution method: Monte Carlo
Reasons for new version: An error in a term associated with the calculation of energies using the EDIP carbon potential, which resulted in incorrect energies.
Summary of revisions: Fix to correct brackets in the two-body part of the EDIP carbon potential routine.
Additional comments: The code is not standard FORTRAN 77 but includes some additional features, and therefore generates errors when compiled using the Nag95 compiler. It does compile successfully with the GNU g77 compiler (http://www.gnu.org/software/fortran/fortran.html).
Running time: Depends on the type of empirical potential used, the number of atoms, and which constraints are employed. The test included in the distribution took 37 minutes on a DEC Alpha PC.

Opletal, G.; Petersen, T. C.; O'Malley, B.; Snook, I. K.; McCulloch, D. G.; Yarovsky, I.

2011-02-01

72

Path integrals and large deviations in stochastic hybrid systems

NASA Astrophysics Data System (ADS)

We construct a path-integral representation of solutions to a stochastic hybrid system, consisting of one or more continuous variables evolving according to a piecewise-deterministic dynamics. The differential equations for the continuous variables are coupled to a set of discrete variables that satisfy a continuous-time Markov process, which means that the differential equations are only valid between jumps in the discrete variables. Examples of stochastic hybrid systems arise in biophysical models of stochastic ion channels, motor-driven intracellular transport, gene networks, and stochastic neural networks. We use the path-integral representation to derive a large deviation action principle for a stochastic hybrid system. Minimizing the associated action functional with respect to the set of all trajectories emanating from a metastable state (assuming that such a minimization scheme exists) then determines the most probable paths of escape. Moreover, evaluating the action functional along a most probable path generates the so-called quasipotential used in the calculation of mean first passage times. We illustrate the theory by considering the optimal paths of escape from a metastable state in a bistable neural network.

Bressloff, Paul C.; Newby, Jay M.

2014-04-01
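A stochastic hybrid system of the kind described above can be simulated exactly, because the dynamics are deterministic between jumps in the discrete variable. A minimal sketch with a two-state switch driving a linear ODE (a toy ion-channel-like model; rates and coefficients are illustrative, not from the paper):

```python
import math
import random

def simulate_pdmp(t_end=200.0, k_on=1.0, k_off=1.0, a=2.0, b=1.0, seed=4):
    """Piecewise-deterministic simulation of a toy stochastic hybrid system:
    a discrete switch s in {0, 1} (jump rates k_on, k_off) drives
    dx/dt = a*s - b*x. Between jumps the ODE is solved exactly and jump
    times are drawn from the exponential distribution."""
    rng = random.Random(seed)
    t, s, x = 0.0, 0, 0.0
    time_in_on, x_integral = 0.0, 0.0
    while t < t_end:
        rate = k_on if s == 0 else k_off
        dt = min(rng.expovariate(rate), t_end - t)
        x_inf = a * s / b                       # fixed point of the current ODE
        # exact time integral of x over the interval, for the time average
        x_integral += x_inf * dt + (x - x_inf) * (1.0 - math.exp(-b * dt)) / b
        x = x_inf + (x - x_inf) * math.exp(-b * dt)   # exact ODE solution
        if s == 1:
            time_in_on += dt
        t += dt
        s = 1 - s                               # jump in the discrete variable
    return time_in_on / t_end, x_integral / t_end

p_on, x_mean = simulate_pdmp()
```

With symmetric rates the switch spends half its time on, so the time averages should approach p_on = 1/2 and E[x] = a*p_on/b = 1 for this parameter choice.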

73

NASA Astrophysics Data System (ADS)

Optimized measurements for the susceptibility, self-energy, as well as three-leg and four-leg vertex functions are introduced for the continuous-time hybridization-expansion quantum Monte Carlo solver for the impurity model in the presence of a retarded interaction. The self-energy and vertex functions are computed from impurity averages which involve time integrals over the retarded interaction. They can be evaluated efficiently within the segment representation. These quantities are computed within dynamical mean-field theory in the presence of plasmonic screening. In the antiadiabatic regime, the self-energy is strongly renormalized but retains features of the low energy scale set by the screened interaction. An explicit expression for its high-frequency behavior is provided. Across the screening driven and interaction driven metal-insulator transitions, the vertex functions are found to exhibit similar structural changes, which are hence identified as generic features of the Mott transition.

Hafermann, Hartmut

2014-06-01

74

Hybrid Clustering by Integrating Text and Citation based Graphs in Journal Database Analysis

We propose a hybrid clustering strategy by integrating heterogeneous information sources as graphs. The hybrid clustering method is extended on the basis of the modularity-based Louvain method. We introduce two

75

Higher order and infinite Trotter-number extrapolations in path integral Monte Carlo.

Improvements beyond the primitive approximation in the path integral Monte Carlo method are explored both in a model problem and in real systems. Two different strategies are studied: The Richardson extrapolation on top of the path integral Monte Carlo data and the Takahashi-Imada action. The Richardson extrapolation, mainly combined with the primitive action, always reduces the number-of-beads dependence, helps in determining the approach to the dominant power law behavior, and all without additional computational cost. The Takahashi-Imada action has been tested in two hard-core interacting quantum liquids at low temperature. The results obtained show that the fourth-order behavior near the asymptote is conserved, and that the use of this improved action reduces the computing time with respect to the primitive approximation. PMID:15260589

Brualla, L; Sakkos, K; Boronat, J; Casulleras, J

2004-07-01
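The Richardson pattern used above, combining estimates at P and 2P beads to cancel the leading 1/P^2 Trotter error at no extra simulation cost, can be illustrated on any quantity with a 1/P^2 leading error. Here a composite trapezoid rule stands in for the PIMC estimator (an analogy, not the paper's data):

```python
import math

def approx(P):
    """Stand-in for a P-bead estimate: composite trapezoid rule for the
    integral of e^x over [0, 1], whose leading error falls off as 1/P^2,
    mirroring the primitive-action Trotter error."""
    h = 1.0 / P
    s = 0.5 * (math.exp(0.0) + math.exp(1.0))
    s += sum(math.exp(i * h) for i in range(1, P))
    return s * h

def richardson(f, P):
    """Cancel the leading 1/P^2 term by combining the P and 2P estimates:
    f_R = (4 f(2P) - f(P)) / 3, leaving an O(1/P^4) residual."""
    return (4.0 * f(2 * P) - f(P)) / 3.0

exact = math.e - 1.0
err_plain = abs(approx(16) - exact)
err_rich = abs(richardson(approx, 16) - exact)
```

The extrapolated value is orders of magnitude closer to the asymptote, which is why the paper recommends Richardson extrapolation on top of primitive-action data.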

76

Higher order and infinite Trotter-number extrapolations in path integral Monte Carlo

NASA Astrophysics Data System (ADS)

Improvements beyond the primitive approximation in the path integral Monte Carlo method are explored both in a model problem and in real systems. Two different strategies are studied: The Richardson extrapolation on top of the path integral Monte Carlo data and the Takahashi-Imada action. The Richardson extrapolation, mainly combined with the primitive action, always reduces the number-of-beads dependence, helps in determining the approach to the dominant power law behavior, and all without additional computational cost. The Takahashi-Imada action has been tested in two hard-core interacting quantum liquids at low temperature. The results obtained show that the fourth-order behavior near the asymptote is conserved, and that the use of this improved action reduces the computing time with respect to the primitive approximation.

Brualla, L.; Sakkos, K.; Boronat, J.; Casulleras, J.

2004-07-01

77

Hybrid integrated PDMS microfluidics with a silica capillary.

To harness the properties of both PDMS and silica, we have demonstrated hybrid integrated PDMS microfluidic systems with fused silica capillaries. The hybrid integrated PDMS microfluidics and silica capillary (iPSC) modules exhibit a novel architecture and method for leakage free CE sample injection merely requiring a single high voltage source and one pair of electrodes. The use of the iPSC device is based on a modular approach which allows the capillary to be reused extensively whilst replacing the attached fluidic module for different experiments. Integrating fused silica capillaries with PDMS microfluidic modules allows the direct application of a wide variety of well established conventional CE protocols for separations of complex analytes. Furthermore it bears the potential for facile coupling to standard electro-spray ionization mass spectrometry (ESI-MS), letting users focus on the sample analysis rather than the development of new separation protocols. The fabrication of the iPSC module consists of a simple and quick three-step method that submerges a fused silica capillary in PDMS prepolymer. After cross linking the prepolymer and punching the inlets, the iPSC module layer can be mounted onto a microfluidic device for CE separation. PMID:20480112

Dimov, Ivan K; Riaz, Asif; Ducrée, Jens; Lee, Luke P

2010-06-01

78

BACKGROUND: To evaluate the dosimetric differences between Superposition\\/Convolution (SC) and Monte Carlo (MC) calculated dose distributions for simultaneous integrated boost (SIB) prostate cancer intensity modulated radiotherapy (IMRT) compared to experimental (film) measurements and the implications for clinical treatments. METHODS: Twenty-two prostate patients treated with an in-house SIB-IMRT protocol were selected. SC-based plans used for treatment were re-evaluated with EGS4-based MC

Nesrin Dogan; Ivaylo Mihaylov; Yan Wu; Paul J Keall; Jeffrey V Siebers; Michael P Hagan

2009-01-01

79

Sequential Monte Carlo simulation for the estimation of small reachability probabilities

The probability that a system reaches a given set within some time horizon is considered. Standard Monte Carlo methods for reachability probability estimation do not require specific assumptions on the system under

Del Moral, Pierre

80

High Voltage Dielectrophoretic and Magnetophoretic Hybrid Integrated Circuit / Microfluidic Chip

A hybrid integrated circuit (IC) / microfluidic chip is presented that independently and simultaneously traps and moves microscopic objects suspended in fluid using both electric and magnetic fields. This hybrid chip controls the location of dielectric objects, such as living cells and drops of fluid, on a 60 × 61 array of pixels that are 30 × 38 μm² in size, each of which can be individually addressed with a 50 V peak-to-peak, DC to 10 MHz radio frequency voltage. These high voltage pixels produce electric fields above the chip’s surface, resulting in strong dielectrophoresis (DEP) forces. Underneath the array of DEP pixels there is a magnetic matrix that consists of two perpendicular sets of 60 metal wires running across the chip. Each wire can be sourced with 120 mA to trap and move magnetically susceptible objects using magnetophoresis (MP). The DEP pixel array and magnetic matrix can be used simultaneously to apply forces to microscopic objects, such as living cells or lipid vesicles, that are tagged with magnetic nanoparticles. The capabilities of the hybrid IC / microfluidic chip demonstrated in this paper provide important building blocks for a platform for biological and chemical applications. PMID:20625468

Issadore, David; Franke, Thomas; Brown, Keith A.; Hunt, Thomas P.; Westervelt, Robert M.

2010-01-01

81

Hybrid polymer photonic crystal fiber with integrated chalcogenide glass nanofilms

NASA Astrophysics Data System (ADS)

The combination of chalcogenide glasses with polymer photonic crystal fibers (PCFs) is a difficult and challenging task due to their different thermo-mechanical material properties. Here we report the first experimental realization of a hybrid polymer-chalcogenide PCF with integrated As2S3 glass nanofilms at the inner surface of the air-channels of a poly-methyl-methacrylate (PMMA) PCF. The integrated high refractive index glass films introduce distinct antiresonant transmission bands in the 480-900 nm wavelength region. We demonstrate that the ultra-high Kerr nonlinearity of the chalcogenide glass makes the polymer PCF nonlinear and provides a possibility to shift the transmission band edges as much as 17 nm by changing the intensity. The proposed fabrication technique constitutes a new highway towards all-fiber nonlinear tunable devices based on polymer PCFs, which at the moment is not possible with any other fabrication method.

Markos, Christos; Kubat, Irnis; Bang, Ole

2014-08-01

82

Hybrid integrated optic modules for real-time signal processing

NASA Technical Reports Server (NTRS)

The most recent progress on four relatively new hybrid integrated optic device modules in LiNbO3 waveguides and one in YIG/GGG waveguide that are currently being studied are discussed. The five hybrid modules include a time-integrating acoustooptic correlator, a channel waveguide acoustooptic frequency shifter/modulator, an electrooptic channel waveguide total internal reflection modulator/switch, an electrooptic analog-to-digital converter using a Fabry-Perot modulator array, and a noncollinear magnetooptic modulator using magnetostatic surface waves. All of these devices possess the desirable characteristics of very large bandwidth (GHz or higher), very small substrate size along the optical path (typically 1.5 cm or less), single-mode optical propagation, and low drive power requirement. The devices utilize either acoustooptic, electrooptic or magnetooptic effects in planar or channel waveguides and, therefore, act as efficient interface devices between a light wave and temporal signals. Major areas of application lie in wideband multichannel optical real-time signal processing and communications. Some of the specific applications include spectral analysis and correlation of radio frequency (RF) signals, fiber-optic sensing, optical computing and multiport switching/routing, and analog-to-digital conversion of wideband RF signals.

Tsai, C. S.

1984-01-01

83

NASA Astrophysics Data System (ADS)

We propose a new method to calculate ground state position dependent observables in quantum many-body systems. The method, which we call the path-integral diffusion Monte Carlo (PI-DMC) method, is essentially a combination of path-integral Monte Carlo (PIMC) and diffusion Monte Carlo (DMC) methods. The distribution resulting from a DMC simulation is further propagated in imaginary time by PIMC sampling. Tests of the new method for simple cases such as the harmonic oscillator, a double well, and a set of ten coupled harmonic oscillators show that the results for several observables converge rapidly to the exact result.

Hetényi, Balázs; Rabani, Eran; Berne, B. J.

1999-04-01
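The PIMC sampling step referred to above can be sketched for the first test case, a harmonic oscillator: single-bead Metropolis moves on a discretized imaginary-time path with the primitive action. Bead number, sweep counts, and step size below are illustrative, not the paper's values:

```python
import math
import random

def pimc_x2(beta=1.0, omega=1.0, P=16, n_sweeps=20000, seed=7):
    """Primitive-action PIMC for a 1D harmonic oscillator (hbar = m = 1):
    single-bead Metropolis moves on a ring of P beads, returning <x^2>.
    Note: single-bead moves sample the path centroid slowly, hence the
    generous sweep count."""
    rng = random.Random(seed)
    tau = beta / P
    x = [0.0] * P

    def local_action(j, xj):
        # only the primitive-action terms that involve bead j
        xl, xr = x[(j - 1) % P], x[(j + 1) % P]
        spring = ((xj - xl) ** 2 + (xr - xj) ** 2) / (2.0 * tau)
        return spring + tau * 0.5 * (omega * xj) ** 2

    x2_sum, n_meas = 0.0, 0
    for sweep in range(n_sweeps):
        for j in range(P):
            trial = x[j] + rng.uniform(-0.6, 0.6)
            ds = local_action(j, trial) - local_action(j, x[j])
            if rng.random() < math.exp(min(0.0, -ds)):
                x[j] = trial
        if sweep >= n_sweeps // 5:          # discard equilibration sweeps
            x2_sum += sum(v * v for v in x) / P
            n_meas += 1
    return x2_sum / n_meas

est = pimc_x2()
exact = 0.5 / math.tanh(0.5)    # <x^2> = coth(beta*omega/2) / (2*omega)
```

For this thermal expectation value the analytic answer is known, so the sketch doubles as a convergence check of the primitive approximation.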

84

At terrestrial high latitudes, the plasma flows along "open" field lines, gradually going from a collision-dominated region into a collisionless region. Over several decades, the (fluid-like) generalized transport equations, TE, and the particle-based Monte Carlo, MC, approaches evolved as two of the most powerful simulation techniques that address this problem. In contrast to the computationally intensive Monte Carlo, the transport

J. Ji; A. R. Barakat; R. W. Schunk

2009-01-01

85

NASA Technical Reports Server (NTRS)

We have made a direct comparison between two different computer simulations of a plane, parallel, collisionless shock including particle acceleration to energies typical of those of diffuse ions observed at the earth bow shock. Despite the fact that the one-dimensional hybrid and Monte Carlo techniques employ entirely different algorithms, they give surprisingly close agreement in the overall shapes of the complete distribution functions for protons as well as heavier ions. Both methods show that energetic ions emerge smoothly from the background thermal plasma with approximately the same relative injection rate and that the fraction of the incoming plasma's energy flux that is converted into downstream enthalpy flux of the accelerated population (i.e., the acceleration efficiency) is similar in the two cases. The fraction of the downstream proton distribution made up of superthermal particles is quite large, with at least 10% of the energy flux going into protons with energies above 10 keV. In addition, an upstream precursor, produced by backstreaming energetic particles, is present in both shocks, although the Monte Carlo precursor is considerably longer than that produced in the hybrid shock. These results offer convincing evidence that, at least in these ways, the two simulations are consistent in their description of parallel shock structure and particle acceleration, and they lay the groundwork for development of shock models employing a combination of both methods.

Ellison, Donald C.; Giacalone, J.; Burgess, D.; Schwartz, S. J.

1993-01-01

86

The choice of appropriate interaction models is among the major disadvantages of conventional methods such as molecular dynamics and Monte Carlo simulations. On the other hand, the so-called reverse Monte Carlo (RMC) method, based on experimental data, can be applied without any interatomic and/or intermolecular interactions. The RMC results are accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm, which introduces an energy penalty term into the acceptance criteria. This method is referred to as the hybrid reverse Monte Carlo (HRMC) method. The idea of this paper is to test the validity of a combined Coulomb and Lennard-Jones potential model in the fluoride glass system BaMnMF7 (M = Fe, V) using the HRMC method. The results show a good agreement between experimental and calculated characteristics, as well as a meaningful improvement in partial pair distribution functions. We suggest that this model should be used in calculating the structural properties and in describing the average correlations between components of fluoride glass or a similar system. We also suggest that HRMC could be useful as a tool for testing the interaction potential models, as well as for conventional applications.

S. M. Mesli; M. Habchi; M. Kotbi; H. Xu

2013-03-25
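The HRMC acceptance rule described above, the usual RMC chi-squared test plus an energy penalty term, can be sketched on a toy 1D chain: a target mean spacing plays the role of the experimental data and a soft-sphere repulsion plays the role of the interaction potential. All constraints, weights, and parameters are illustrative, not those of the fluoride glass study:

```python
import math
import random

def hrmc(target_mean, n_atoms=10, n_steps=5000, w_chi=50.0, kT=1.0, seed=5):
    """Toy hybrid reverse Monte Carlo: atom positions on a line, a
    chi^2 "data" term on the mean spacing, and a soft-sphere energy
    penalty. Moves are accepted with exp(-(w_chi*d_chi2 + dE/kT)),
    i.e. the RMC criterion with an added energy term."""
    rng = random.Random(seed)
    xs = sorted(rng.uniform(0.0, n_atoms) for _ in range(n_atoms))

    def chi2(pos):
        spacings = [b - a for a, b in zip(pos, pos[1:])]
        mean = sum(spacings) / len(spacings)
        return (mean - target_mean) ** 2

    def energy(pos):
        # soft-sphere repulsion between neighbours only, for brevity
        return sum((0.5 / max(b - a, 1e-6)) ** 6 for a, b in zip(pos, pos[1:]))

    cost = lambda pos: w_chi * chi2(pos) + energy(pos) / kT
    c = cost(xs)
    for _ in range(n_steps):
        i = rng.randrange(n_atoms)
        trial = sorted(xs[:i] + [xs[i] + rng.uniform(-0.3, 0.3)] + xs[i + 1:])
        ct = cost(trial)
        if rng.random() < math.exp(min(0.0, c - ct)):   # combined criterion
            xs, c = trial, ct
    return xs, chi2(xs), energy(xs)

xs, final_chi2, final_e = hrmc(target_mean=1.0)
```

The data term alone would tolerate overlapping atoms (the "satellite peak" pathology in real RMC); the energy penalty keeps the chain physical while still honouring the constraint.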

87

Modeling Integrated Cellular Machinery Using Hybrid Petri-Boolean Networks

The behavior and phenotypic changes of cells are governed by a cellular circuitry that represents a set of biochemical reactions. Based on biological functions, this circuitry is divided into three types of networks, each encoding for a major biological process: signal transduction, transcription regulation, and metabolism. This division has generally made it possible to tame the computational complexity of dealing with the entire system, allowed the use of modeling techniques specific to each component, and achieved separation of the different time scales at which reactions in each of the three networks occur. Nonetheless, with this division comes loss of information and power needed to elucidate certain cellular phenomena. Within the cell, these three types of networks work in tandem, and each produces signals and/or substances that are used by the others to process information and operate normally. Therefore, computational techniques for modeling integrated cellular machinery are needed. In this work, we propose an integrated hybrid model (IHM) that combines Petri nets and Boolean networks to model integrated cellular networks. Coupled with a stochastic simulation mechanism, the model simulates the dynamics of the integrated network, and can be perturbed to generate testable hypotheses. Our model is qualitative and is mostly built upon knowledge from the literature and requires fine-tuning of very few parameters. We validated our model on two systems: the transcriptional regulation of glucose metabolism in human cells, and cellular osmoregulation in S. cerevisiae. The model produced results that are in very good agreement with experimental data, and produces valid hypotheses. The abstract nature of our model and the ease of its construction makes it a very good candidate for modeling integrated networks from qualitative data. 
The results it produces can guide the practitioner to zoom into components and interconnections and investigate them using such more detailed mathematical models. PMID:24244124

Berestovsky, Natalie; Zhou, Wanding; Nagrath, Deepak; Nakhleh, Luay

2013-01-01

88

Golden Ratio Versus Pi as Random Sequence Sources for Monte Carlo Integration

NASA Technical Reports Server (NTRS)

We discuss here the relative merits of these numbers as possible random sequence sources. The quality of these sequences is not judged directly based on the outcome of all known tests for the randomness of a sequence. Instead, it is determined implicitly by the accuracy of the Monte Carlo integration in a statistical sense. Since our main motive of using a random sequence is to solve real world problems, it is more desirable if we compare the quality of the sequences based on their performances for these problems in terms of quality/accuracy of the output. We also compare these sources against those generated by a popular pseudo-random generator, viz., the Matlab rand, and the quasi-random generator halton, both in terms of error and time complexity. Our study demonstrates that consecutive blocks of digits of each of these numbers produce a good random sequence source. It is observed that randomly chosen blocks of digits do not have any remarkable advantage over consecutive blocks for the accuracy of the Monte Carlo integration. Also, it reveals that pi is a better source of a random sequence than the golden ratio when the accuracy of the integration is concerned.

Sen, S. K.; Agarwal, Ravi P.; Shaykhian, Gholam Ali

2007-01-01
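The "consecutive blocks of digits" construction can be reproduced directly: compute the digits of pi to high precision, map consecutive k-digit blocks to [0, 1), and use them as a sample source for a Monte Carlo integral. The test integral and block length below are illustrative (the pi routine follows the arbitrary-precision recipe in the Python decimal module documentation, not anything from the paper):

```python
import random
from decimal import Decimal, getcontext

def pi_digits(n):
    """First n decimal digits of pi after the point, via the convergent
    series recipe from the Python decimal module documentation."""
    getcontext().prec = n + 10
    lasts, t, s = Decimal(0), Decimal(3), Decimal(3)
    k, na, d, da = 1, 0, 0, 24
    while s != lasts:
        lasts = s
        k, na = k + na, na + 8
        d, da = d + da, da + 32
        t = t * k / d
        s += t
    return str(s)[2:2 + n]          # strip the leading "3."

def blocks_as_uniforms(digits, k=4):
    """Consecutive k-digit blocks mapped to numbers in [0, 1)."""
    return [int(digits[i:i + k]) / 10 ** k
            for i in range(0, len(digits) - k + 1, k)]

# Monte Carlo estimate of the integral of x^2 over [0, 1] (exact value 1/3)
# from consecutive 4-digit blocks of pi.
us = blocks_as_uniforms(pi_digits(1000))
est_pi = sum(u * u for u in us) / len(us)

# Pseudo-random baseline with the same sample count.
rng = random.Random(6)
est_rand = sum(rng.random() ** 2 for _ in range(len(us))) / len(us)
```

Both estimates land within ordinary Monte Carlo error of 1/3, which is the sense in which the paper judges the digit sequences "good" sources.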

89

The computational modeling of medical imaging systems often requires obtaining a large number of simulated images with low statistical uncertainty, which translates into prohibitive computing times. We describe a novel hybrid approach for Monte Carlo simulations that maximizes utilization of CPUs and GPUs in modern workstations. We apply the method to the modeling of indirect x-ray detectors using a new and improved version of the code MANTIS, an open source software tool used for the Monte Carlo simulations of indirect x-ray imagers. We first describe a GPU implementation of the physics and geometry models in fastDETECT2 (the optical transport model) and a serial CPU version of the same code. We discuss its new features like on-the-fly column geometry and columnar crosstalk in relation to the MANTIS code, and point out areas where our model provides more flexibility for the modeling of realistic columnar structures in large area detectors. Second, we modify PENELOPE (the open source software package that handles the x-ray and electron transport in MANTIS) to allow direct output of location and energy deposited during x-ray and electron interactions occurring within the scintillator. This information is then handled by optical transport routines in fastDETECT2. A load balancer dynamically allocates optical transport showers to the GPU and CPU computing cores. Our hybridMANTIS approach achieves a significant speed-up factor of 627 when compared to MANTIS and of 35 when compared to the same code running only in a CPU instead of a GPU. Using hybridMANTIS, we successfully hide hours of optical transport time by running it in parallel with the x-ray and electron transport, thus shifting the computational bottleneck from optical to x-ray transport. The new code requires much less memory than MANTIS and, as a result, allows us to efficiently simulate large area detectors. PMID:22469917

Sharma, Diksha; Badal, Andreu; Badano, Aldo

2012-04-21

90

NASA Technical Reports Server (NTRS)

Nonadiabatic transitions are central to many areas of chemical and condensed matter physics, ranging from biological electron transfer to the optical properties of one-dimensional conductors. Here, a path integral Monte Carlo method is used to simulate such transitions, based on the observation that nonadiabatic rate coefficients are often dominated by saddle point trajectories that correspond to an imaginary time. Simple analytic theories can be used to continue these imaginary time correlation functions to determine rate coefficients. The advantages and drawbacks of this approach are discussed.

Wolynes, Peter G.

1987-01-01

91

Coulomb tunneling for fusion reactions in dense matter: Path integral Monte Carlo versus mean field

We compare Path Integral Monte Carlo calculations by Militzer and Pollock (Phys. Rev. B 71, 134303, 2005) of Coulomb tunneling in nuclear reactions in dense matter to semiclassical calculations assuming WKB Coulomb barrier penetration through the radial mean-field potential. We find very good agreement between the two approaches at temperatures higher than ~1/5 of the ion plasma temperature. We obtain a simple parameterization of the mean-field potential and of the respective reaction rates. We analyze Gamow-peak energies of reacting ions in various reaction regimes and discuss theoretical uncertainties of nuclear reaction rates, taking carbon burning in dense stellar matter as an example.

A. I. Chugunov; H. E. DeWitt; D. G. Yakovlev

2007-07-24

92

The development of a population PK/PD model, an essential component for model-based drug development, is both time- and labor-intensive. Graphical-processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of the parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and an identical algorithm designed for a single CPU (MCPEMCPU) were developed using MATLAB in a single computer equipped with dual Xeon 6-Core E5690 CPUs and a NVIDIA Tesla C2070 GPU parallel computing card that contained 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data in assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimation and model computation times. The speedup factor was used to assess the relative benefit of the parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation time than the MCPEMCPU and can offer more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of the parallelized MCPEM algorithm developed in this study holds great promise as the core for the next generation of modeling software for population PK/PD analysis. PMID:24002801

Ng, C M

2013-10-01

93

NASA Astrophysics Data System (ADS)

Fluid models have been widely and successfully used in high pressure plasma simulations where the drift–diffusion and local-field approximations are valid. However, fluid models are not able to demonstrate non-local effects related to the large electron energy relaxation mean free path in low pressure plasmas. To overcome this weakness, a hybrid model coupling the electron Monte Carlo collision (EMCC) method with a fluid model is introduced to obtain precise electron energy distribution functions using pseudo-particles. Steady state simulation results from a one-dimensional hybrid model, which includes the EMCC method for the collisional reactions but uses the drift–diffusion approximation for electron transport in a fluid model, are compared with those of a conventional particle-in-cell (PIC) model and a fluid model for low pressure capacitively coupled plasmas. Over a wide range of pressures, the hybrid model agrees well with the PIC simulation with reduced calculation time, while the fluid model shows discrepancies in the plasma density and the electron temperature.

Hwang, Seok Won; Lee, Ho-Jun; Lee, Hae June

2014-12-01

94

State and parameter estimation using Monte Carlo evaluation of path integrals

Transferring information from observations of a dynamical system to estimate the fixed parameters and unobserved states of a system model can be formulated as the evaluation of a discrete time path integral in model state space. The observations serve as a guiding potential working with the dynamical rules of the model to direct system orbits in state space. The path integral representation permits direct numerical evaluation of the conditional mean path through the state space as well as conditional moments about this mean. Using a Monte Carlo method for selecting paths through state space we show how these moments can be evaluated and demonstrate in an interesting model system the explicit influence of the role of transfer of information from the observations. We address the question of how many observations are required to estimate the unobserved state variables, and we examine the assumptions of Gaussianity of the underlying conditional probability.
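A minimal sketch of the path-sampling idea, under invented assumptions (a 1-D linear model x_{t+1} = a·x_t with Gaussian model and observation noise, single-site Metropolis moves): the discrete-time action penalizes both model error and observation mismatch, and averaging sampled paths approximates the conditional mean path. None of the parameters below correspond to the authors' model system.

```python
import math
import random

random.seed(1)

# Invented toy problem: x_{t+1} = a*x_t + dynamical noise, observed with noise.
a, T = 0.9, 20
sig_model, sig_obs = 0.1, 0.2
truth = [1.0]
for _ in range(T - 1):
    truth.append(a * truth[-1] + random.gauss(0.0, sig_model))
obs = [x + random.gauss(0.0, sig_obs) for x in truth]

def action(path):
    """Discrete-time action: model-error term plus observation term."""
    s = sum((path[t + 1] - a * path[t]) ** 2 for t in range(T - 1)) / (2 * sig_model**2)
    s += sum((path[t] - obs[t]) ** 2 for t in range(T)) / (2 * sig_obs**2)
    return s

# Metropolis sampling over paths; accumulate the conditional mean path.
path = obs[:]                  # initialize the path at the observations
mean = [0.0] * T
n_keep = 0
for sweep in range(4000):
    t = random.randrange(T)
    old, s_old = path[t], action(path)
    path[t] = old + random.gauss(0.0, 0.05)
    if random.random() >= math.exp(min(0.0, s_old - action(path))):
        path[t] = old          # reject the move
    if sweep >= 1000:          # discard burn-in sweeps
        n_keep += 1
        mean = [m + x for m, x in zip(mean, path)]
mean = [m / n_keep for m in mean]
print(mean[0], mean[-1])
```

Higher conditional moments about the mean path can be accumulated from the same sample of paths in exactly the same way.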

John C. Quinn; Henry D. I. Abarbanel

2009-12-08

95

Monte Carlo Simulations of Globular Cluster Evolution. IV. Direct Integration of Strong Interactions

We study the dynamical evolution of globular clusters containing populations of primordial binaries, using our newly updated Monte Carlo cluster evolution code with the inclusion of direct integration of binary scattering interactions. We describe the modifications we have made to the code, as well as improvements we have made to the core Monte Carlo method. We present several test calculations to verify the validity of the new code, and perform many comparisons with previous analytical and numerical work in the literature. We simulate the evolution of a large grid of models, with a wide range of initial cluster profiles, and with binary fractions ranging from 0 to 1, and compare with observations of Galactic globular clusters. We find that our code yields very good agreement with direct N-body simulations of clusters with primordial binaries, but yields some results that differ significantly from other approximate methods. Notably, the direct integration of binary interactions reduces their energy generation rate relative to the simple recipes used in Paper III, and yields smaller core radii. Our results for the structural parameters of clusters during the binary-burning phase are now in the tail of the range of parameters for observed clusters, implying that either clusters are born significantly more or less centrally concentrated than has been previously considered, or that there are additional physical processes beyond two-body relaxation and binary interactions that affect the structural characteristics of clusters.

John M. Fregeau; Frederic A. Rasio

2006-08-11

96

A Monte Carlo study to check the hadronic interaction models by a new EAS hybrid experiment in Tibet

A new EAS hybrid experiment has been designed by constructing a YAC (Yangbajing Air shower Core) detector array inside the existing Tibet-III air shower array. The first step of YAC, called "YAC-I", consists of 16 plastic scintillator units (4 rows × 4 columns), each with an area of 40 cm × 50 cm, which is used to check hadronic interaction models used in air shower simulations. A Monte Carlo study shows that YAC-I can record the high-energy electromagnetic component in the core region of air showers induced by primary particles of several tens of TeV energies, where the primary composition is directly measured by space experiments. It may provide a direct check of the hadronic interaction models currently used in air shower simulations in the corresponding energy region. In the present paper, the method of observation and the sensitivity of the characteristics of the observed events to the different interaction models are discussed.

Ying Zhang; J. Huang; L. Jiang; D. Chen; L. K. Ding; M. Shibata; Y. Katayose; N. Hotta; M. Ohnishi; T. Ouchi; T. Saito

2013-03-14

98

Design of a hybrid-integrated MEMS scanning grating spectrometer

NASA Astrophysics Data System (ADS)

Optical MEMS (micro-electro-mechanical systems) have been used very successfully over recent decades to reduce the size, weight and cost of all kinds of optical systems. Scientists at Fraunhofer IPMS invented a resonant drive for 1-D and 2-D MEMS scanning mirror devices. Besides mirrors, scanning gratings have also been realized. Now, rapidly growing new applications demand enhanced functions and further miniaturization. This task cannot be solved by simply putting more functionality into the MEMS chip, for example grating and slit structures, but by three-dimensional hybrid integration of the complete optical system into a stack of several functional substrates. Here we present the optical system design and realization strategy for a scanning grating spectrometer for the near-infrared (NIR) range. First samples will be mounted from single components by a bonder tool (Finetech Fineplacer Femto), but the option of wafer assembly will be kept open for future developments. Extremely miniaturized NIR spectrometers could serve a wide variety of applications for handheld devices, from food quality analysis to medical services or materials identification.

Pügner, Tino; Knobbe, Jens; Grüger, Heinrich; Schenk, Harald

2011-10-01

99

WORM ALGORITHM PATH INTEGRAL MONTE CARLO APPLIED TO THE 3He-4He II SANDWICH SYSTEM

NASA Astrophysics Data System (ADS)

We present a numerical investigation of the thermal and structural properties of the 3He-4He sandwich system adsorbed on a graphite substrate using the worm algorithm path integral Monte Carlo (WAPIMC) method [M. Boninsegni, N. Prokof'ev and B. Svistunov, Phys. Rev. E74, 036701 (2006)]. For this purpose, we have modified a previously written WAPIMC code originally adapted for 4He on graphite, by including the second 3He-component. To describe the fermions, a temperature-dependent statistical potential has been used. This has proven very effective. The WAPIMC calculations have been conducted in the millikelvin temperature regime. However, because of the heavy computations involved, only 30, 40 and 50 mK have been considered for the time being. The pair correlations, Matsubara Green's function, structure factor, and density profiles have been explored at these temperatures.

Al-Oqali, Amer; Sakhel, Asaad R.; Ghassib, Humam B.; Sakhel, Roger R.

2012-12-01

100

The narrow resonance (NR) approximation has, in the past, been applied to regular lattices with fairly simple unit cells. Attempts to use the NR approximation to deal with fine details of the lattice structure, or with complicated lattice cells, have generally been based on assumptions and approximations that are rather difficult to evaluate. A benchmark method is developed in which slowing down is still treated in the NR approximation, but spatial neutron transport is handled by Monte Carlo. This benchmark method is used to evaluate older methods for analyzing the double-heterogeneity effect in fast reactors, and for computing resonance integrals in the PROTEUS lattices. New methods for treating the PROTEUS lattices are proposed.

Chen, I.J.; Gelbard, E.M.

1988-07-01

101

Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
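The first acceleration technique listed, replacing a linear search with a binary version, applies wherever a transport code repeatedly locates a value in a sorted table, for instance an energy bin in a cross-section grid. A sketch of the before/after contract, with an invented energy grid standing in for the real ITS tables:

```python
import bisect

# Invented sorted energy grid (MeV); real ITS tables are much longer.
grid = [0.01, 0.1, 0.5, 1.0, 5.0, 10.0, 20.0, 50.0, 100.0]

def find_bin_linear(grid, e):
    """O(n) scan: return index i such that grid[i] <= e < grid[i+1]."""
    i = 0
    while i + 1 < len(grid) and grid[i + 1] <= e:
        i += 1
    return i

def find_bin_binary(grid, e):
    """O(log n) lookup via bisect, same contract as the linear scan."""
    return max(bisect.bisect_right(grid, e) - 1, 0)

# The two lookups must agree everywhere before the replacement is safe.
for e in (0.05, 0.7, 5.0, 99.0):
    assert find_bin_linear(grid, e) == find_bin_binary(grid, e)
print("linear and binary lookups agree")
```

The speed-up matters precisely because, as in the benchmarked ITS subroutines, the lookup sits inside the innermost particle-tracking loop.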

Smith, L.M.; Hochstedler, R.D. [Univ. of Tennessee Space Inst., Tullahoma, TN (United States). Dept. of Electrical Engineering]

1997-02-01

102

JOURNAL OF LIGHTWAVE TECHNOLOGY, VOL. 30, NO. 1, JANUARY 1, 2012. Integrated hybrid silicon platform towards realizing an integrated WDM transmitter on silicon. Results using ion implantation on silicon are presented. Index Terms--DFB laser, DFB-EAM, MZM, photonic integrated circuits, quantum well

Bowers, John

103

HiFi-WiN: Hybrid Integrated Fiber-Wireless Networking for Broadband Metropolitan Area

HiFi-WiN: Hybrid Integrated Fiber-Wireless Networking for Broadband Metropolitan Area Access. Applying frequency reuse strategies to broadband wireless networks poses new engineering challenges; an architecture that can help support wireless frequency reuse within a broadband wireless access network is hybrid

Kansas, University of

104

Soil phosphomonoesterase enzyme activities and phosphate fractions from rhizosphere soils were studied at maximum tillering and panicle initiation stages for three hybrid and two inbred high yielding varieties of rice in a pot culture net-house experiment receiving green manure (Sesbania) and farmyard manure. The P mobilization pattern under integrated nutrient management was also studied. Pusa RH21, a hybrid rice cultivar,

Subhendu Bhadraray; T. Purakayastha; P. Chhonkar; Vijay Verma

2002-01-01

105

Hybrid integrated photonic components based on a polymer platform

NASA Astrophysics Data System (ADS)

We report on a polymer-on-silicon optical bench platform that enables the hybrid integration of elemental passive and active optical functions. Planar polymer circuits are produced photolithographically, and slots are formed in them for the insertion of chips and films of a variety of materials. The polymer circuits provide interconnects, static routing elements such as couplers, taps, and multi/demultiplexers, as well as thermo-optically dynamic elements such as switches, variable optical attenuators, and tunable notch filters. Crystal-ion-sliced thin films of lithium niobate are inserted in the polymer circuit for polarization control or for electro-optic modulation. Films of yttrium iron garnet and neodymium iron boron magnets are inserted in order to magneto-optically achieve non-reciprocal operation for isolation and circulation. Indium phosphide and gallium arsenide chips are inserted for light generation, amplification, and detection, as well as wavelength conversion. The functions enabled by this multi-material platform span the range of the building blocks needed in optical circuits, while using the highest-performance material system for each function. We demonstrated complex-functionality photonic components based on this technology, including a metro ring node module and a tunable optical transmitter. The metro ring node chip includes switches, variable optical attenuators, taps, and detectors; it enables optical add/drop multiplexing, power monitoring, and automatic load balancing, and it supports shared and dedicated protection protocols in two-fiber metro ring optical networks. The tunable optical transmitter chip includes a tunable external cavity laser, an isolator, and a high-speed modulator.

Eldada, Louay A.

2003-06-01

106

MOCABA is a combination of Monte Carlo sampling and Bayesian updating algorithms for the prediction of integral functions of nuclear data, such as reactor power distributions or neutron multiplication factors. Similarly to the established Generalized Linear Least Squares (GLLS) methodology, MOCABA offers the capability to utilize integral experimental data to reduce the prior uncertainty of integral observables. The MOCABA approach, however, does not involve any series expansions and, therefore, does not suffer from the breakdown of first-order perturbation theory for large nuclear data uncertainties. This is related to the fact that, in contrast to the GLLS method, the updating mechanism within MOCABA is applied directly to the integral observables without having to "adjust" any nuclear data. A central part of MOCABA is the nuclear data Monte Carlo program NUDUNA, which performs random sampling of nuclear data evaluations according to their covariance information and converts them into libraries for transpor...

Hoefer, Axel; Hennebach, Maik; Schmid, Michael; Porsch, Dieter

2014-01-01

107

Extension of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes to 100 GeV

Version 2.1 of the Integrated Tiger Series (ITS) of electron-photon Monte Carlo codes was modified to extend its ability to model interactions up to 100 GeV. Benchmarks against experimental results conducted at 10 and 15 GeV confirm the accuracy of the extended codes. 12 refs., 2 figs., 2 tabs.

Miller, S.G.

1988-08-01

108

Nif-hybrids of Enterobacter: Selection for nif gene integration with chlorate

The nif gene group from Klebsiella can be transferred into Enterobacter cloacae by conjugation using Escherichia coli donor cells carrying the composite self-transmissible nif-plasmid pRD1. A small fraction of the hybrids obtained is stable upon prolonged passaging without selection. Their stability is due to integration of pRD1 into the chromosome. Such integration hybrids were chlorate resistant, and nitrate reductase negative,

W. Klingmüller; K. T. Shanmugam; M. Singh

1983-01-01

109

Kinetic isotope effect in malonaldehyde determined from path integral Monte Carlo simulations.

The primary H/D kinetic isotope effect on the intramolecular proton transfer in malonaldehyde is determined from quantum instanton path integral Monte Carlo simulations on a fully dimensional and validated potential energy surface for temperatures between 250 and 1500 K. Our calculations, based on thermodynamic integration with respect to the mass of the transferring particle, are significantly accelerated by the direct evaluation of the kinetic isotope effect instead of computing it as a ratio of two rate constants. At room temperature, the KIE from the present simulations is 5.2 ± 0.4. The KIE is found to vary considerably as a function of temperature and the low-T behaviour is dominated by the fact that the free energy derivative in the reactant state increases more rapidly than in the transition state. Detailed analysis of the various contributions to the quantum rate constant together with estimates for rates from conventional transition state theory and from periodic orbit theory suggest that the KIE in malonaldehyde is dominated by zero point energy effects and that tunneling plays a minor role at room temperature. PMID:24233185

Huang, Jing; Buchowiecki, Marcin; Nagy, Tibor; Vaníček, Jiří; Meuwly, Markus

2014-01-01

110

Monte Carlo simulation of small electron fields collimated by the integrated photon MLC.

In this study, a Monte Carlo (MC)-based beam model for an ELEKTA linear accelerator was established. The beam model is based on the EGSnrc Monte Carlo code, whereby electron beams with nominal energies of 10, 12 and 15 MeV were considered. For collimation of the electron beam, only the integrated photon multi-leaf collimators (MLCs) were used. No additional secondary or tertiary add-ons like applicators, cutouts or dedicated electron MLCs were included. The source parameters of the initial electron beam were derived semi-automatically from measurements of depth-dose curves and lateral profiles in a water phantom. A routine to determine the initial electron energy spectra was developed which fits a Gaussian spectrum to the most prominent features of depth-dose curves. The comparisons of calculated and measured depth-dose curves demonstrated agreement within 1%/1 mm. The source divergence angle of initial electrons was fitted to lateral dose profiles beyond the range of electrons, where the imparted dose is mainly due to bremsstrahlung produced in the scattering foils. For accurate modelling of narrow beam segments, the influence of air density on dose calculation was studied. The air density for simulations was adjusted to local values (433 m above sea level) and compared with the standard air supplied by the ICRU data set. The results indicate that the air density is an influential parameter for dose calculations. Furthermore, the default value of the BEAMnrc parameter 'skin depth' for the boundary crossing algorithm was found to be inadequate for the modelling of small electron fields. A higher value for this parameter eliminated discrepancies in too broad dose profiles and an increased dose along the central axis. The beam model was validated with measurements, whereby agreement mostly within 3%/3 mm was found. PMID:21242628

Mihaljevic, Josip; Soukup, Martin; Dohm, Oliver; Alber, Markus

2011-02-01

111

Streamline Integration using MPI-Hybrid Parallelism on a Large Multi-Core Architecture

Streamline computation in a very large vector field data set represents a significant challenge due to the non-local and data-dependent nature of streamline integration. In this paper, we conduct a study of the performance characteristics of hybrid parallel programming and execution as applied to streamline integration on a large, multi-core platform. With multi-core processors now prevalent in clusters and supercomputers, there is a need to understand the impact of these hybrid systems in order to make the best implementation choice. We use two MPI-based distribution approaches based on established parallelization paradigms, parallelize-over-seeds and parallelize-over-blocks, and present a novel MPI-hybrid algorithm for each approach to compute streamlines. Our findings indicate that the work sharing between cores in the proposed MPI-hybrid parallel implementation results in much improved performance and consumes less communication and I/O bandwidth than a traditional, non-hybrid distributed implementation.
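Whatever the MPI distribution strategy, the per-streamline kernel is ordinary numerical integration of dx/dt = v(x); classical fourth-order Runge-Kutta is a common choice. A self-contained sketch on an invented analytic rotational field (the paper's data sets and integrator details may differ):

```python
def velocity(x, y):
    """Analytic test field: rigid rotation about the origin."""
    return -y, x

def rk4_step(x, y, h):
    """One classical RK4 step of dx/dt = v(x)."""
    k1 = velocity(x, y)
    k2 = velocity(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = velocity(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = velocity(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def streamline(seed, h=0.01, steps=1000):
    """Integrate a streamline forward from a seed point."""
    pts = [seed]
    for _ in range(steps):
        pts.append(rk4_step(*pts[-1], h))
    return pts

pts = streamline((1.0, 0.0))
x, y = pts[-1]
print((x * x + y * y) ** 0.5)  # radius stays ~1 on the rotational field
```

In parallelize-over-seeds each task runs this kernel for a subset of seed points; in parallelize-over-blocks a streamline is handed off whenever it leaves a task's data block, which is what makes the communication pattern data-dependent.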

Camp, David; Garth, Christoph; Childs, Hank; Pugmire, Dave; Joy, Kenneth I.

2010-11-01

112

McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations

McSKY evaluates skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygon cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte-Carlo algorithm to evaluate transport through source shields and an integral line-beam source to describe photon transport through the atmosphere. The source energy must be between 0.02 and 100 MeV. For heavily shielded sources with energies above 20 MeV, McSKY results must be used cautiously, especially at detector locations near the source.

Shultis, J.K.; Faw, R.E.; Stedry, M.H. [Kansas State Univ., Manhattan, KS (United States). Dept. of Nuclear Engineering; Hall, W. [Kansas State Univ., Manhattan, KS (United States)

1994-07-01

113

We suggest a simple fitting formula to represent Comptonized X- and gamma-ray spectra from a hot (kT_e ≳ 10 keV), Thomson-thick (τ_T ≳ 5) hybrid thermal/nonthermal plasma in spherical geometry with homogeneous soft-photon injection throughout the Comptonizing region. We have used this formula to fit a large database of Monte-Carlo generated photon spectra, and provide correlations between the physical parameters of the plasma and the fit parameters of our analytic fit function. These allow us to construct Thomson-thick Comptonization spectra without performing computer-intensive Monte Carlo simulations of the high-Thomson-depth hybrid-plasma Comptonization problem. Our formulae can easily be used in data analysis packages such as XSPEC, thus rendering rapid χ² fitting of such spectra to real data feasible.

M. Boettcher; R. Saxena; A. W. Crider

2001-02-16

114

A mathematical simulation approach based on the general purpose Monte Carlo N-particle transport code MCNP was developed to predict the response of the XRF branch of the hybrid K-edge/K-XRF densitometer (HKED). The respective MCNP models for two different versions of HKED instruments currently in use were set up and experimentally validated. The setting up of the models involved comprehensive simulations

A. N. Berlizov; D. A. Sharikov; H. Ottmar; H. Eberle; J. Galy; K. Luetzenkirchen

2010-01-01

115

A hybrid approach to device integration on a genetic analysis platform

NASA Astrophysics Data System (ADS)

Point-of-care (POC) systems require significant component integration to implement biochemical protocols associated with molecular diagnostic assays. Hybrid platforms where discrete components are combined in a single platform are a suitable approach to integration, where combining multiple device fabrication steps on a single substrate is not possible due to incompatible or costly fabrication steps. We integrate three devices each with a specific system functionality: (i) a silicon electro-wetting-on-dielectric (EWOD) device to move and mix sample and reagent droplets in an oil phase, (ii) a polymer microfluidic chip containing channels and reservoirs and (iii) an aqueous phase glass microarray for fluorescence microarray hybridization detection. The EWOD device offers the possibility of fully integrating on-chip sample preparation using nanolitre sample and reagent volumes. A key challenge is sample transfer from the oil phase EWOD device to the aqueous phase microarray for hybridization detection. The EWOD device, waveguide performance and functionality are maintained during the integration process. An on-chip biochemical protocol for arrayed primer extension (APEX) was implemented for single nucleotide polymorphism (SNiP) analysis. The prepared sample is aspirated from the EWOD oil phase to the aqueous phase microarray for hybridization. A bench-top instrumentation system was also developed around the integrated platform to drive the EWOD electrodes, implement APEX sample heating and image the microarray after hybridization.

Brennan, Des; Jary, Dorothee; Kurg, Ants; Berik, Evgeny; Justice, John; Aherne, Margaret; Macek, Milan; Galvin, Paul

2012-10-01

116

Path integral Monte Carlo study of quantum-hard sphere solids

NASA Astrophysics Data System (ADS)

A path integral study of the fcc, hcp, and bcc quantum hard-sphere solids is presented. Ranges of densities within the interval of reduced de Broglie wavelengths 0.2 ≤ λB* ≤ 0.8 have been analyzed using Monte Carlo simulations with the Cao-Berne propagator. Energies, pressures, and structural quantities (pair radial correlation functions, centroid structure factors, and Steinhardt order parameters) have been computed. Also, applications of the Einstein crystal technique [L. M. Sesé, J. Chem. Phys. 126, 164508 (2007)] have been made to compute the free energies of the fcc and hcp solids. Some technical points related to the latter technique are discussed, and it is shown that these calculations produce consistent results with increasing sample sizes. The fluid-solid (fcc and hcp) equilibria have been studied, thus completing prior work by this author on the fluid-fcc equilibrium. Within the accuracy attained no significant differences between the relative stabilities of the fcc and hcp lattices have been detected. The bcc case stands apart from the other two lattices, as the simulations lead either to irregular lattices (two types) that keep some traces of bcc-memory, or to spontaneous transitions to hcp-like lattices. The latter transitions make manifestly clear the potential repercussions that the quantum hard-sphere behavior can have on solid-solid equilibria at low temperatures in real systems (e.g., helium).

Sesé, Luis M.

2013-07-01

117

Path integral Monte Carlo study of quantum-hard sphere solids.

A path integral study of the fcc, hcp, and bcc quantum hard-sphere solids is presented. Ranges of densities within the interval of reduced de Broglie wavelengths 0.2 ≤ λB* ≤ 0.8 have been analyzed using Monte Carlo simulations with the Cao-Berne propagator. Energies, pressures, and structural quantities (pair radial correlation functions, centroid structure factors, and Steinhardt order parameters) have been computed. Also, applications of the Einstein crystal technique [L. M. Sesé, J. Chem. Phys. 126, 164508 (2007)] have been made to compute the free energies of the fcc and hcp solids. Some technical points related to the latter technique are discussed, and it is shown that these calculations produce consistent results with increasing sample sizes. The fluid-solid (fcc and hcp) equilibria have been studied, thus completing prior work by this author on the fluid-fcc equilibrium. Within the accuracy attained no significant differences between the relative stabilities of the fcc and hcp lattices have been detected. The bcc case stands apart from the other two lattices, as the simulations lead either to irregular lattices (two types) that keep some traces of bcc-memory, or to spontaneous transitions to hcp-like lattices. The latter transitions make manifestly clear the potential repercussions that the quantum hard-sphere behavior can have on solid-solid equilibria at low temperatures in real systems (e.g., helium). PMID:23901988

Sesé, Luis M

2013-07-28

118

Path-integral Monte Carlo study of a model adsorbate with internal quantum states

NASA Astrophysics Data System (ADS)

An adsorbate in the strong-binding and small-corrugation limit is studied. The resulting two-dimensional fluid is treated in the adiabatic approximation: the translations of the particles are treated classically, whereas the internal quantum degrees of freedom are modeled by interacting two-state tunneling systems. The temperature-coverage phase diagram is obtained to a high degree of precision by combining finite-size-scaling ideas with path-integral Monte Carlo techniques. Even this simplified adsorbate model possesses a surprisingly complex phase diagram including first- and second-order transitions as well as a tricritical and a triple point. We identify gas, liquid, fluid, and square-lattice solid phases combined with a preferred internal quantum state depending on temperature and coverage. We determine the order of the transitions and localize the different phase boundaries of this many-body quantum system reliably. Mean-field approximations and low-density expansions are compared to the simulations. A possible extension of the block analysis method to determine triple points and solid coexistence densities is discussed.

Marx, D.; Nielaba, P.; Binder, K.

1993-04-01

119

MOCABA is a combination of Monte Carlo sampling and Bayesian updating algorithms for the prediction of integral functions of nuclear data, such as reactor power distributions or neutron multiplication factors. Similarly to the established Generalized Linear Least Squares (GLLS) methodology, MOCABA offers the capability to utilize integral experimental data to reduce the prior uncertainty of integral observables. The MOCABA approach, however, does not involve any series expansions and, therefore, does not suffer from the breakdown of first-order perturbation theory for large nuclear data uncertainties. This is related to the fact that, in contrast to the GLLS method, the updating mechanism within MOCABA is applied directly to the integral observables without having to "adjust" any nuclear data. A central part of MOCABA is the nuclear data Monte Carlo program NUDUNA, which performs random sampling of nuclear data evaluations according to their covariance information and converts them into libraries for transport code systems like MCNP or SCALE. What is special about MOCABA is that it can be applied to any integral function of nuclear data, and any integral measurement can be taken into account to improve the prediction of an integral observable of interest. In this paper we present two example applications of the MOCABA framework: the prediction of the neutron multiplication factor of a water-moderated PWR fuel assembly based on 21 criticality safety benchmark experiments and the prediction of the power distribution within a toy model reactor containing 100 fuel assemblies.

Axel Hoefer; Oliver Buss; Maik Hennebach; Michael Schmid; Dieter Porsch

2014-11-12
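The core idea of MOCABA — propagate sampled nuclear data through a model and weight the resulting observables by the likelihood of an integral measurement, without adjusting the data themselves — can be sketched in a few lines. The linear surrogate responses and all numbers below are invented for illustration; a real application would run a transport code such as MCNP or SCALE on each sampled library.

```python
import math
import random

def mc_bayes_update(n=20000, seed=0):
    """Toy MOCABA-style update: sample an uncertain datum from its prior,
    propagate it through two model responses (a benchmark and a target
    observable), then weight the samples by the likelihood of the benchmark
    measurement. The update acts on the observables, not on the datum."""
    random.seed(seed)
    prior_mu, prior_sd = 1.0, 0.05      # prior on the datum (assumed)
    meas, meas_sd = 1.02, 0.002         # measured benchmark k_eff (assumed)
    wsum = wsum_t = wsum_t2 = 0.0
    for _ in range(n):
        d = random.gauss(prior_mu, prior_sd)
        k_bench = 0.98 + 0.04 * d       # linear surrogate responses
        k_target = 0.95 + 0.06 * d
        w = math.exp(-0.5 * ((k_bench - meas) / meas_sd) ** 2)
        wsum += w
        wsum_t += w * k_target
        wsum_t2 += w * k_target ** 2
    mean = wsum_t / wsum
    var = wsum_t2 / wsum - mean ** 2
    return mean, math.sqrt(var)
```

The posterior standard deviation of the target observable comes out smaller than the prior one (0.06 × 0.05 = 0.003 here), which is the uncertainty reduction the abstract describes.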

120

Background To evaluate the dosimetric differences between Superposition/Convolution (SC) and Monte Carlo (MC) calculated dose distributions for simultaneous integrated boost (SIB) prostate cancer intensity modulated radiotherapy (IMRT) compared to experimental (film) measurements and the implications for clinical treatments. Methods Twenty-two prostate patients treated with an in-house SIB-IMRT protocol were selected. SC-based plans used for treatment were re-evaluated with EGS4-based MC calculations for treatment verification. Accuracy was evaluated with respect to film-based dosimetry. Comparisons used the gamma (γ)-index, distance-to-agreement (DTA), and superimposed dose distributions. The treatment plans were also compared based on dose-volume indices and the 3-D γ index for targets and critical structures. Results Flat-phantom comparisons demonstrated that the MC algorithm predicted measurements better than the SC algorithm. The average PTVprostate D98 agreement between SC and MC was 1.2% ± 1.1. For rectum, the average differences in SC and MC calculated D50 ranged from -3.6% to 3.4%. For small bowel, there were up to 30.2% ± 40.7 (range: 0.2%, 115%) differences between SC and MC calculated average D50 index. For femurs, the differences in average D50 reached up to 8.6% ± 3.6 (range: 1.2%, 14.5%). For PTVprostate and PTVnodes, the average gamma scores were >95.0%. Conclusion MC agrees better with film measurements than SC. Although, on average, SC-calculated doses agreed with MC calculations within 2% inside the targets, there were deviations of up to 5% for some patients' treatment plans. For some patients, the magnitude of such deviations might decrease the intended target dose levels that are required for the treatment protocol, placing the patients in different dose levels that do not satisfy the protocol dose requirements. PMID:19527515

Dogan, Nesrin; Mihaylov, Ivaylo; Wu, Yan; Keall, Paul J; Siebers, Jeffrey V; Hagan, Michael P

2009-01-01
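The gamma-index used for the film comparisons above combines a dose-difference and a distance-to-agreement criterion. A minimal 1-D version, with assumed 3%/3 mm criteria and global normalization, might look like this (an illustrative sketch, not a clinical implementation):

```python
import math

def gamma_index_1d(ref, evalu, dx=1.0, dd=0.03, dta=3.0):
    """Minimal 1-D gamma-index comparing an evaluated dose profile against
    a reference profile. `dd` is the relative dose-difference criterion,
    `dta` the distance-to-agreement criterion in the same units as `dx`
    (the grid spacing). Doses are normalized globally to max(ref)."""
    dmax = max(ref)
    gammas = []
    for i, r in enumerate(ref):
        best = float("inf")
        for j, e in enumerate(evalu):
            dist = (i - j) * dx                  # spatial separation
            ddiff = (e - r) / dmax               # relative dose difference
            best = min(best, (dist / dta) ** 2 + (ddiff / dd) ** 2)
        gammas.append(math.sqrt(best))
    return gammas

def pass_rate(gammas):
    """Percentage of points with gamma <= 1 (the usual passing criterion)."""
    return 100.0 * sum(g <= 1.0 for g in gammas) / len(gammas)
```

Identical profiles give gamma = 0 everywhere; small dose or positional perturbations keep gamma below 1 as long as they stay inside the combined criteria.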

121

Bead-Fourier path-integral Monte Carlo method applied to systems of identical particles

To make the path-integral Monte Carlo (PIMC) method more effective and practical in application to systems of identical particles with strong interactions, we introduce a combined bead-Fourier (BF) PIMC approach with the ordinary bead method and the Fourier PIMC method of Doll and Freeman [J. Chem. Phys. 80, 2239 (1984); 80, 5709 (1984)] being its extreme cases. Optimal choice of the number of beads and of Fourier components enables us to reproduce reliably the ground-state energy and electron density distribution in the H atom as well as the exact data for the harmonic oscillator. Applying the BF method to systems of identical particles we use the procedure of simultaneous accounting for all classes of permutations suggested in the previous work [Phys. Rev. A 48, 4075 (1993)] with subsequent symmetrization of the exchange factor in the weight function to make the sign problem milder. A procedure of random walk in the spin space enables us to obtain spin-dependent averages. We derived exact partition functions and canonical averages for a model system of N noninteracting identical particles (N = 2, 3, 4, …) with spin (fermions or bosons) in a d-dimensional harmonic field (d = 1, 2, 3) that provided a reliable test of the developed MC procedures. Simulations for N = 2, 3 reproduce well the exact dependencies. Further simulations showed how gradual switching on of the electrostatic repulsion between particles in this system results in significant weakening of the exchange effects.

Vorontsov-Velyaminov, P.N.; Nesvit, M.O.; Gorbunov, R.I. [Faculty of Physics, St. Petersburg State University, 198904, St. Petersburg (Russia)]

1997-02-01

122

Nif-hybrids of Enterobacter: selection for nif gene integration with chlorate.

The nif gene group from Klebsiella can be transferred into Enterobacter cloacae by conjugation using Escherichia coli donor cells carrying the composite self-transmissible nif-plasmid pRD1. A small fraction of the hybrids obtained is stable upon prolonged passaging without selection. Their stability is due to integration of pRD1 into the chromosome. Such integration hybrids were chlorate resistant, and nitrate reductase negative, which indicated that integration preferentially occurred within one of the genes for the production or functioning of this enzyme. Chlorate resistance could, therefore, be used to select for additional nitrate reductase-negative sublines with pRD1 in their chromosome. Such sublines have been analyzed further for the presence of nif genes, other pRD1 markers, and for stability. In all except one the complete plasmid seems to have been integrated. Some tend to revert to nitrate utilisation (chlorate sensitivity). PMID:6353161

Klingmüller, W; Shanmugam, K T; Singh, M

1983-01-01

123

NASA Astrophysics Data System (ADS)

Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate accuracy with numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length. Typically, further approximations are employed for material-boundary crossings where infinite-medium solutions become invalid. We have previously explored an alternative "condensed transport" formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes. Improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings. These assessments are made both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations.

Franke, Brian C.; Kensek, Ronald P.; Prinja, Anil K.

2014-06-01

124

Integrating DNA With Semiconductor Materials: Bio-inorganic Hybrid Devices

NASA Astrophysics Data System (ADS)

A method for the automated solid-phase synthesis of DNA on a semiconductor chip with the potential for photolithography to fabricate hybrid electronic-DNA devices was developed. The on-chip oligonucleotide synthetic quality was comparable to standard CPG supports as confirmed by HPLC and gel electrophoresis. Enzymatic manipulation of the immobilised ssDNA was possible by radiolabelling with T4 polynucleotide kinase. Spatial control, afforded by photolithography, was visualised by phosphorimaging radiolabelled dsDNA. The charge transfer properties of DNA were investigated by the association of Ru(NH3)63+ with the phosphate backbone and by intercalation with redox active methylene blue. Additionally ferrocene modified nucleosides were incorporated into oligonucleotides to act as electronic mediators for charge transfer. Initial investigations into the effect of the redox group on the nucleobase indicated their potential for use as bioelectronic building blocks for incorporation into silicon based molecular systems.

Pike, Andrew R.; Lie, Lars H.; Patole, Samson N.; Ryder, Lyndsey C.; Connolly, Bernard. A.; Horrocks, Benjamin R.; Houlton, Andrew

2002-11-01

125

CE2: towards a large scale hybrid search engine with integrated ranking support

The Web contains a large amount of documents and increasingly, also semantic data in the form of RDF triples. Many of these triples are annotations that are associated with documents. While structured query is the principal means to retrieve semantic data, keyword queries are typically used for document retrieval. Clearly, a form of hybrid search that seamlessly integrates these formalisms

Haofen Wang; Thanh Tran; Chang Liu

2008-01-01

126

High-speed 32-channel optical wavelength selector using PLC hybrid integration

We have developed a new high-speed 32-channel optical wavelength selector (OWS) with gate drivers using PLC hybrid integration. It has excellent characteristics: a low insertion loss of 2.3 dB, a low cross talk of -46 dB and the error-free optical wavelength selection of a 10-Gbit/s optical signal

F. Ebisawa; I. Ogawa; Y. Akahori; K. Takiguchi; Y. Tamura; T. Hashimoto; A. Sugita; Y. Yamada; Y. Suzaki; N. Yoshimoto; Y. Tohmori; S. Mino; T. Ito; K. Magari; Y. Kawaguchi; A. Himeno; K. Kato

1999-01-01

127

Hybrid Integrated Circuit Manufacturing Process as Controlled by Shop Information Systems

Hybrid integrated circuits (HIC's) used in Bell System transmission equipment are rapidly increasing in numbers, type, and complexity. An efficient and effective system of controlling and monitoring the HIC manufacturing process is therefore of prime importance if scheduling, inventory, product yield, and operator performance are to be accurately determined. The major sequence of operations is described of both thin- and

D. Krause; D. Locy

1980-01-01

128

Optimal process design often requires the solution of mixed integer non-linear programming problems. Optimization procedures must be robust and efficient if they are to be incorporated in automated design systems. For heat integrated separation process design, a natural hybrid evolutionary/local search method with these properties is possible. The method is based on the use of local search methods for the

E. S. Fraga

2003-01-01

129

With the increasing popularity of cloud computing, Hadoop has become a widely used open source cloud computing framework for large scale data processing. However, few studies have been done to enhance data confidentiality of Hadoop against storage servers. In this paper, we address the data confidentiality issue by integrating hybrid encryption schemes and the Hadoop distributed file system (HDFS). We

Hsiao-Ying Lin; Shiuan-Tzuo Shen; Wen-Guey Tzeng; Bao-Shuh P. Lin

2012-01-01

130

The energy demand of distillation-molecular sieve systems for ethanol recovery/dehydration can be significant, particularly for dilute solutions. An alternative hybrid process integrating vapor stripping (like a beer still) with vapor compression and a vapor permeation membrane s...

131

Conceptual Integration of Hybridization by Algerian Students Intending to Teach Physical Sciences

ERIC Educational Resources Information Center

This work aims to assess the difficulties encountered by students of the Ecole Normale Superieure of Kouba (Algeria) intending to teach physical science in the integration of the hybridization of atomic orbitals. It is a concept that they should use in describing the formation of molecular orbitals (σ and π) in organic chemistry and gaps…

Salah, Hazzi; Dumon, Alain

2011-01-01

132

In this paper we consider optimization as an approach for quickly and flexibly developing hybrid cognitive capabilities that are efficient, scalable, and can exploit knowledge to improve solution speed and quality. In this context, we focus on the Three-Weight Algorithm, which aims to solve general optimization problems. We propose novel methods by which to integrate knowledge with this

Nate Derbinsky; Jose Bento; Jonathan S. Yedidia

2013-01-01

133

This paper summarizes the development of the surface integral and finite element hybrid method for two and three dimensional fracture mechanics analysis. The fracture, which is a discontinuity in the displacement field, is modeled explicitly and efficiently by use of dislocations for two dimensional analysis and by dipoles of point forces for three dimensional applications. The boundary value problem of

W. D. Keat; B. S. Annigeri; M. P. Cleary

1988-01-01

134

Technology platform for hybrid integration of MOEMS on reconfigurable silicon micro-optical table

Technology platform for hybrid integration of MOEMS on a reconfigurable silicon micro-optical table, based on micromachining of a standard silicon wafer (baseplate) or an SOI wafer (holders). In this paper we present a technological platform to build MOEMS devices on reconfigurable silicon free

Paris-Sud XI, Université de

135

Video integration in a GNSS/INS hybridization architecture for approach and landing

Augmentation Systems (SBAS) and Aircraft Based Augmentation System (ABAS). ABAS system integrates on-board information that can for example be provided by Inertial Navigation System (INS) so as to enhance the performance of the navigation system. GNSS/INS hybridization is already performed on current commercial

Paris-Sud XI, Université de

136

Hybrid shop floor control system for Computer Integrated Manufacturing (CIM)

A shop floor can be considered as an important level at which to develop a Computer Integrated Manufacturing system (CIM). However, a shop floor is a dynamic environment where unexpected events continuously occur and impose changes to the planned activities. To deal with this problem, a shop floor should adopt an appropriate control system that is responsible for the coordination and control of

Kyung-Hyun Choi; Seok-Hee Lee

2001-01-01

137

Path Integral Monte Carlo Approach to the U(1) Lattice Gauge Theory in (2+1) Dimensions

Path Integral Monte Carlo simulations have been performed for U(1) lattice gauge theory in (2+1) dimensions on anisotropic lattices. We extract the static quark potential, the string tension and the low-lying "glueball" spectrum. The Euclidean string tension and mass gap decrease exponentially at weak coupling in excellent agreement with the predictions of Polyakov and of Göpfert and Mack, but their magnitudes are five times bigger than predicted. Extrapolations are made to the extreme anisotropic or Hamiltonian limit, and comparisons are made with previous estimates obtained in the Hamiltonian formulation.

Mushtaq Loan; Michael Brunner; Clare Sloggett; Chris Hamer

2002-09-26

138

Bayesian Estimates of Equation System Parameters: An Application of Integration by Monte Carlo

Monte Carlo (MC) is used to draw parameter values from a distribution defined on the structural parameter space of an equation system. Making use of the prior density, the likelihood, and Bayes' Theorem it is possible to estimate posterior moments of both structural and reduced form parameters. The MC method allows a rather liberal choice of prior distributions. The number

Tuen Kloek; Herman K van Dijk

1978-01-01
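The procedure this abstract describes — draw structural parameters from a prior, weight by the likelihood, and read off posterior moments of both structural and reduced-form parameters — is straightforward to sketch. The model below (normal observations, a flat prior, and a made-up reduced-form transformation) is an illustrative assumption, not the equation system of the paper:

```python
import math
import random

def posterior_moments(n=50000, seed=2):
    """Integration by Monte Carlo for Bayesian moments: sample a structural
    parameter theta from its prior, weight each draw by the likelihood, and
    accumulate weighted moments of theta and of a reduced-form parameter
    pi = 1 / (1 - theta). Model: y_i ~ N(theta, 1), flat prior on (-0.5, 0.9).
    Data and prior bounds are invented for illustration."""
    random.seed(seed)
    data = [0.3, 0.5, 0.1, 0.4, 0.2]           # assumed observations

    def loglik(theta):
        return -0.5 * sum((y - theta) ** 2 for y in data)

    sw = st = st2 = sp = 0.0
    for _ in range(n):
        theta = random.uniform(-0.5, 0.9)      # draw from the (flat) prior
        w = math.exp(loglik(theta))            # likelihood weight
        sw += w
        st += w * theta
        st2 += w * theta ** 2
        sp += w * (1.0 / (1.0 - theta))        # reduced-form parameter
    mean_t = st / sw
    sd_t = math.sqrt(st2 / sw - mean_t ** 2)
    return mean_t, sd_t, sp / sw               # posterior moments
```

Because the weights involve only the likelihood, the prior can be chosen quite liberally, which is exactly the flexibility the abstract emphasizes.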

139

Integration of Neuroscience and Endocrinology in Hybrid PBL Curriculum

NSDL National Science Digital Library

At the University of Missouri-Columbia, the medical school employs a problem-based learning curriculum that began in 1993. Since the curriculum was changed, student performance on step 1 of the United States Medical Licensing Examination has significantly increased from slightly below the national average to almost one-half a standard deviation above the national mean. In the first and second years, classes for students are organized in blocks that are 8 wk long, followed by 1 wk for evaluation. Initially, basic science endocrinology was taught in the fourth block of the first year with immunology and molecular biology. Student and faculty evaluations of the curriculum indicated that endocrinology did not integrate well with the rest of the material taught in that block. To address these issues, basic science endocrinology was moved into another block with neurosciences. We integrate endocrinology with neurosciences by using the hypothalamus and its role in neuroendocrinology as a springboard for endocrinology. This is accomplished by using clinical cases with clear neuroscience and endocrinology aspects such as Cushing's disease and multiple endocrine neoplastic syndrome type 1.

PhD J. Thomas Cunningham (University of Missouri-Columbia School of Medicine Dept. of Physiology); PhD Ronald H. Freeman (University of Missouri-Columbia School of Medicine Dept. of Physiology); Dr. Michael C. Hosokawa (University of Missouri-Columbia School of Medicine Dept. of Family and Community Medicine)

2001-12-01

140

A Programmable MicroFluidic Processor: Integrated and Hybrid Solutions

The Programmable Fluidic Processor (PFP), a device conceived of by researchers at MD Anderson Cancer Center, is a reconfigurable and programmable bio-chemical analysis system designed for handheld operation in a variety of applications. Unlike most microfluidic systems which utilize channels to control fluids, the PFP device is a droplet-based system. The device is based on dielectrophoresis; a fluid transport phenomenon that utilizes mismatched polarizability between a droplet and its medium to induce droplet mobility. In the device, sample carrying droplets are polarized by an array of electrodes, individually addressable by subsurface microelectronics. My research focused on the development of a polymer-based microfluidic injection system for injecting these droplets onto the electrode array. The first of two device generations fabricated at LLNL was designed using extensive research and modeling performed by MD Anderson and Coventor. Fabricating the first generation required several iterations and design changes in order to generate an acceptable device for testing. Difficulties in planar fabrication of the fluidic system and a narrow channel design necessitated these changes. The second generation device incorporated modifications of the previous generation and improved on deficiencies discovered during experimentation with the initial device. Extensive modeling of the injection channels and fluid storage chamber also aided in redesigning the device's microfluidic system. A micromolding technique with interlocking features enabled precise alignments and dimensional control, critical requirements for device optimization. Fabrication of a final device will be fully integrated with the polymer-based microfluidics bonded directly to the silicon-based microelectronics. The optimized design and process flow developed in the trial generations will readily transfer to this approach.

Rose, K A

2002-05-10

141

Integrated process of Donnan dialysis and pertraction in a multimembrane hybrid system

This paper deals with the functioning of an integrated membrane system, where CEM denotes a membrane made of cation-exchange polymer. The system combines the functions of Donnan dialysis (DD) and pertraction in the multimembrane hybrid system (MHS). The MHS consists of an agitated bulk liquid membrane and two CEMs. The DD–MHS system was designed to improve the treatment

R. Wódzki; P. Szczepański

2001-01-01

142

An integrated hybrid device for binary-phase-shift-keying subcarrier modulation

We propose an integrated optical\\/radio frequency (RF) hybrid device for binary-phase-shift-keying subcarrier modulation that is based on optical amplitude modulation and interference with phase delay. The device consists of two multiple-quantum-well (MQW) electroabsorption (EA) modulators branched with two multimode interference (MMI) couplers. When an RF carrier was applied to one modulator and a digital signal to the other one, the

M. Shin; J. Lim; C. Y. Park; J. Kim; J. S. Kim; K. E. Pyun; S. Hong

2000-01-01

143

There are numerous scenarios where radioactive particulates can be displaced by external forces. For example, the detonation of a radiological dispersal device in an urban environment will result in the release of radioactive particulates that in turn can be resuspended into the breathing space by external forces such as wind flow in the vicinity of the detonation. A need exists to quantify the internal (due to inhalation) and external radiation doses that are delivered to bystanders; however, current state-of-the-art codes are unable to calculate accurately radiation doses that arise from the resuspension of radioactive particulates in complex topographies. To address this gap, a coupled computational fluid dynamics and Monte Carlo radiation transport approach has been developed. With the aid of particulate injections, the computational fluid dynamics simulation models characterize the resuspension of particulates in a complex urban geometry due to air-flow. The spatial and temporal distributions of these particulates are then used by the Monte Carlo radiation transport simulation to calculate the radiation doses delivered to various points within the simulated domain. A particular resuspension scenario has been modeled using this coupled framework, and the calculated internal (due to inhalation) and external radiation doses have been deemed reasonable. GAMBIT and FLUENT comprise the software suite used to perform the Computational Fluid Dynamics simulations, and Monte Carlo N-Particle eXtended is used to perform the Monte Carlo Radiation Transport simulations. PMID:25162421

Ali, Fawaz; Waller, Ed

2014-10-01

144

A novel hybridization of the finite element (FE) and boundary integral methods is presented for an efficient and accurate numerical analysis of electromagnetic scattering and radiation problems. The proposed method derives an adaptive numerical absorbing boundary condition (ABC) for the finite element solution based on boundary integral equations. Unlike the standard finite element-boundary integral approach, the proposed method is free

Jian Liu; Jian-Ming Jin

2001-01-01

145

NASA Astrophysics Data System (ADS)

Associative version of the Henderson-Abraham-Barker theory is applied to the study of the Mercedes-Benz model of water near a hydrophobic surface. We calculated density profiles and adsorption coefficients using the Percus-Yevick and soft mean spherical associative approximations. The results are compared with Monte Carlo simulation data. It is shown that at higher temperatures both approximations satisfactorily reproduce the simulation data. For lower temperatures, the soft mean spherical approximation gives good agreement at low and at high densities, while at mid-range densities the prediction is only qualitative. The formation of a depletion layer between water and the hydrophobic surface was also demonstrated and studied.

Urbic, T.; Holovko, M. F.

2011-10-01

148

Integrating emerging topics through online team design in a hybrid communication networks course

The online team design project, applied to an emerging communication networks topic, was evaluated with a thematic analysis. Keywords: Asynchronous communication; Communication networks course; Emerging

Reisslein, Martin

149

Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry, prior to running the transport calculation, can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as it is running.

O'Brien, M J; Procassini, R J; Joy, K I

2009-03-09

150

We investigate how the fixed-node diffusion Monte Carlo energy of solids depends on single-particle orbitals used in Slater-Jastrow wave functions. We demonstrate that the dependence can be significant, in particular in the case of 3d transition-metal compounds, which we adopt as examples. We illustrate how exchange-correlation functionals with variable exact-exchange component can be exploited to reduce the fixed-node errors. On

Jindrich Kolorenc; Shuming Hu; Lubos Mitas

2010-01-01

151

McSKY: A hybrid Monte-Carlo line-beam code for shielded gamma skyshine calculations

McSKY evaluates skyshine dose from an isotropic, monoenergetic, point photon source collimated into either a vertical cone or a vertical structure with an N-sided polygon cross section. The code assumes an overhead shield of two materials, though the user can specify zero shield thickness for an unshielded calculation. The code uses a Monte-Carlo algorithm to evaluate transport through source shields

J. K. Shultis; R. E. Faw; M. H. Stedry; W. Hall

1994-01-01

152

A method to suppress backreflection noise due to facet reflection in a resonator integrated optic gyro (RIOG) is demonstrated using hybrid phase-modulation technology (HPMT). First, calculations are carried out to evaluate the effect of the backreflection. Although its amplitude has been remarkably decreased by angle polishing, residual backreflection noise is still a severe factor in RIOGs. Next, a hybrid phase-modulation method to eliminate the backreflection noise is constructed, and the frequency spectra of the photodetector outputs before and after adopting HPMT are analyzed. Theoretical analysis shows that the backreflection noise spectra will split from each other as a result of the hybrid phase modulation. In association with the pectinate-filter characteristics of digital correlation detection, the backreflection noise can be suppressed. Finally, the RIOG experimental setup is established and compared with opposite-slope triangle phase-modulation technology. HPMT has the advantage of suppressing backreflection noise, with the RIOG bias stability greatly improved from 2.34 to 0.22 deg/s (10 s integration time). PMID:23478771

Feng, Lishuang; Lei, Ming; Liu, Huilan; Zhi, Yinzhou; Wang, Junjie

2013-03-10

153

The ITS (Integrated Tiger Series) Monte Carlo code package developed at Sandia National Laboratories and distributed as CCC-467/ITS by the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory (ORNL) consists of eight codes - the standard codes, TIGER, CYLTRAN, ACCEPT; the P-codes, TIGERP, CYLTRANP, ACCEPTP; and the M-codes ACCEPTM, CYLTRANM. The codes have been adapted to run on the IBM 3081, VAX 11/780, CDC-7600, and Cray 1 with the use of the update emulator UPEML. This manual should serve as a guide to a user running the codes on IBM computers having 370 architecture. The cases listed were tested on the IBM 3033, under the MVS operating system using the VS Fortran Level 1.3.1 compiler.

Kirk, B.L.

1985-12-01

154

Performance analysis of an OTEC plant and a desalination plant using an integrated hybrid cycle

A performance analysis of an OTEC plant using an integrated hybrid cycle (I-H OTEC Cycle) has been conducted. The I-H OTEC cycle is a combination of a closed-cycle OTEC plant and a spray flash desalination plant. In an I-H OTEC cycle, warm sea water evaporates the liquid ammonia in the OTEC evaporator, then enters the flash chamber and evaporates itself. The evaporated steam enters the desalination condenser and is condensed by the cold sea water passed through the OTEC condenser. The optimization of the I-H OTEC cycle is analyzed by the method of steepest descent. The total heat transfer area of heat exchangers per net power is used as an objective function. Numerical results are reported for a 10 MW I-H OTEC cycle with plate-type heat exchangers and ammonia as working fluid. The results are compared with those of a joint hybrid OTEC cycle (J-H OTEC Cycle).

Uehara, Haruo; Miyara, Akio; Ikegami, Yasuyuki [Saga Univ. (Japan); Nakaoka, Tsutomu [National Fisheries Univ., Simonoseki, Yamaguchi (Japan). Dept. of Marine Engineering

1996-05-01
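The method of steepest descent used for the cycle optimization above is a generic minimizer. A minimal version with finite-difference gradients is sketched below; the quadratic toy objective stands in for the real one (total heat-transfer area per net power), which is an assumption for illustration:

```python
def steepest_descent(f, x0, lr=0.1, tol=1e-8, max_iter=5000, h=1e-6):
    """Generic steepest-descent minimizer. Gradients are estimated by
    forward finite differences with step h; iteration stops when the
    gradient norm drops below tol or max_iter is reached."""
    x = list(x0)
    for _ in range(max_iter):
        g = []
        for i in range(len(x)):
            xp = x[:]
            xp[i] += h
            g.append((f(xp) - f(x)) / h)       # forward-difference gradient
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break
        x = [xi - lr * gi for xi, gi in zip(x, g)]   # step downhill
    return x

# Toy stand-in objective: a quadratic bowl with its minimum at (3, 1).
toy = lambda v: (v[0] - 3.0) ** 2 + 2.0 * (v[1] - 1.0) ** 2
```

In the paper's setting, the design variables would be the cycle's operating parameters and `f` the heat-transfer area per net power; here the routine simply recovers the toy minimum.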

155

Integration of Multisensor Hybrid Reasoners to Support Personal Autonomy in the Smart Home

The deployment of the Ambient Intelligence (AmI) paradigm requires designing and integrating user-centered smart environments to assist people in their daily life activities. This research paper details an integration and validation of multiple heterogeneous sensors with hybrid reasoners that support decision making in order to monitor personal and environmental data at a smart home in a private way. The results innovate on knowledge-based platforms, distributed sensors, connected objects, accessibility and authentication methods to promote independent living for elderly people. TALISMAN+, the AmI framework deployed, integrates four subsystems in the smart home: (i) a mobile biomedical telemonitoring platform to provide elderly patients with continuous disease management; (ii) an integration middleware that allows context capture from heterogeneous sensors to program environment's reaction; (iii) a vision system for intelligent monitoring of daily activities in the home; and (iv) an ontologies-based integrated reasoning platform to trigger local actions and manage private information in the smart home. The framework was integrated in two real running environments, the UPM Accessible Digital Home and MetalTIC house, and successfully validated by five experts in home care, elderly people and personal autonomy. PMID:25232910

Valero, Miguel Angel; Bravo, Jose; Chamizo, Juan Manuel Garcia; Lopez-de-Ipina, Diego

2014-01-01

156

Sole means navigation and integrity through hybrid Loran-C and NAVSTAR GPS

NASA Technical Reports Server (NTRS)

A sole means navigation system calls not only for integrity, but also for coverage, reliability, availability and accuracy. Even though ground-monitored GPS will provide integrity, availability is still not sufficient: one satellite outage can affect a large service area for several hours per day. The same holds for differential GPS; a total satellite outage cannot be corrected for. To obtain sufficient coverage, extra measurements are needed, either in the form of extra GPS satellites (expensive) or through redundant measurements from other systems. LORAN-C is available and, hybridized with GPS, will result in a system that has the potential to satisfy the requirements for a sole means navigation system for use in the continental United States. Assumptions are made about the qualification 'sole means', mainly based on current sole means systems such as VOR/DME. In order to allow for system design that will satisfy sole means requirements, it is recommended that a definition of a sole means navigation system be established. This definition must include requirements for availability, reliability, and integrity currently not specified. In addition to the definition of a sole means navigation system, certification requirements must be established for hybrid navigation systems. This will allow for the design and production of a new generation of airborne navigation systems that will reduce overall system costs and simplify training procedures.

Vangraas, Frank

1990-01-01

157

In this paper the authors report the initial steps in the development of a Monte Carlo method for evaluation of real-time Feynman path integrals for many-particle dynamics. The approach leads to Gaussian factors. These Gaussian factors result from the use of a generalization of their new discrete distributed approximating functions (DDAFs) to continuous distributed approximating functions (CDAFs) so as to

Donald J. Kouri; Wei Zhu; Xin Ma; B. Montgomery Pettitt; David K. Hoffman

1992-01-01

158

We present pair-wise, charge-neutral model potentials for an iron-titanium-yttrium-oxygen system. These simple models are designed to provide a tractable method of simulating nanostructured ferritic alloys (NFAs) using off-lattice Monte Carlo and molecular dynamics techniques without deviating significantly from the formalism employed in existing Monte Carlo simulations. The model is fitted to diamagnetic density functional theory (DFT) calculations of the various species over a range of densities and concentrations. The resulting model potentials provide reasonable and in some cases even excellent mechanical and thermodynamic properties for the pure metals. The model replicates the qualitative trends in formation energy predicted by DFT, though the energies of formation do not agree as well for dilute systems as they do for more concentrated systems. We find that on-lattice models will consistently favor tetrahedral oxygen interstitial sites over octahedral interstitial sites, while relaxed systems typically favor octahedral sites. This emphasizes the need for the off-lattice simulations for which this potential was designed. PMID:23288578

Hammond, Karl D; Voigt, Hyon-Jee Lee; Marus, Lauren A; Juslin, Niklas; Wirth, Brian D

2013-02-01

159

NASA Astrophysics Data System (ADS)

Based on the quasiparticle model of the quark-gluon plasma (QGP), a color quantum path-integral Monte-Carlo (PIMC) method for the calculation of thermodynamic properties and, closely related to the latter, a Wigner dynamics method for the calculation of transport properties of the QGP are formulated. The QGP partition function is presented in the form of a color path integral with a new relativistic measure instead of the Gaussian one traditionally used in the Feynman-Wiener path integral. A procedure of sampling color variables according to the SU(3) group Haar measure is developed for integration over the color variable. It is shown that the PIMC method is able to reproduce the lattice QCD equation of state at zero baryon chemical potential at realistic model parameters (i.e., quasiparticle masses and coupling constant) and also yields valuable insight into the internal structure of the QGP. Our results indicate that the QGP reveals quantum liquidlike (rather than gaslike) properties up to the highest considered temperature of 525 MeV. The pair distribution functions clearly reflect the existence of gluon-gluon bound states, i.e., glueballs, at temperatures just above the phase transition, while mesonlike qq¯ bound states are not found. The calculated self-diffusion coefficient agrees well with some estimates of the heavy-quark diffusion constant available from recent lattice data and also with an analysis of heavy-quark quenching in experiments on ultrarelativistic heavy-ion collisions, but appreciably exceeds other estimates. The lattice and heavy-quark-quenching results on the heavy-quark diffusion are still rather diverse. The obtained results for the shear viscosity are in the range of those deduced from an analysis of the experimental elliptic flow in ultrarelativistic heavy-ion collisions, i.e., in terms of the viscosity-to-entropy ratio, 1/(4π) ≲ η/S < 2.5/(4π), in the temperature range from 170 to 440 MeV.

Filinov, V. S.; Ivanov, Yu. B.; Fortov, V. E.; Bonitz, M.; Levashov, P. R.

2013-03-01

160

ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

2008-04-01

161

Cascaded all-optical operations in a hybrid integrated 80-Gb/s logic circuit.

We demonstrate logic functionalities in a high-speed all-optical logic circuit based on differential Mach-Zehnder interferometers with semiconductor optical amplifiers as the nonlinear optical elements. The circuit, implemented by hybrid integration of the semiconductor optical amplifiers on a planar lightwave circuit platform fabricated in silica glass, can be flexibly configured to realize a variety of Boolean logic gates. We present both simulations and experimental demonstrations of cascaded all-optical operations for 80-Gb/s on-off keyed data. PMID:24921554

LeGrange, J D; Dinu, M; Sochor, T; Bollond, P; Kasper, A; Cabot, S; Johnson, G S; Kang, I; Grant, A; Kay, J; Jaques, J

2014-06-01

162

Solving an integrated employee timetabling and job-shop scheduling problem via hybrid branch for an integrated employee timetabling and job-shop scheduling problem. Each method we investigate uses a constraint related to scheduling jobs on the machines and the decisions related to employee timetabling are often

Paris-Sud XI, Université de

163

NASA Astrophysics Data System (ADS)

This thesis deals with a novel type of integrated dielectric waveguide which is synthesized on a planar grounded substrate by perforation of the zones adjacent to a guiding channel in the center. The resulting Substrate Integrated Image Guide (SIIG) not only allows for low-loss guidance of electromagnetic waves in a similar way as the standard image guide, but also meets the requirements of low cost and ease of integration. A first objective was the detailed analysis of the propagation properties of fundamental and higher order modes in this waveguide structure, regarding attenuation, dispersion behavior, bandwidth, leakage effects, and the impact of fabrication tolerances. For this purpose, specifically adapted techniques of analysis are presented, since established methods for the conventional image guide can not be applied to the more complex periodic SIIG. Commercial electromagnetic full-wave software is used along with a dual-line approach involving a subsequent extraction of the propagation constant from simulated S-parameters. Alternatively, the solution of the eigenmode problem of a single SIIG unit cell also performs the task. Both techniques are in good agreement and provide accurate results, which is supported by measurements on laser-fabricated prototypes. It is shown that the achievable attenuation is much lower than in the standard integrated technologies and that losses mainly depend on the chosen dielectric material. As a consequence, the SIIG also is an attractive technology for applications beyond the mmW band, i. e. in the terahertz range. Design recommendations for the geometric parameters of the SIIG are discussed and a simplified equivalent model with homogeneous dielectric regions is introduced to speed up the design of passive components. Low-loss transitions between dissimilar waveguide structures are indispensable key components for a hybrid integrated platform. 
In order to enable the connection of standard measurement equipment in the W-band (75 GHz to 110 GHz), a transition from rectangular waveguide to SIIG was developed. Another transition to either microstrip or CPW is essential to enable coplanar probe measurements and to achieve compatibility with monolithic millimeter wave integrated circuits (MMICs). Microstrip and image guide have very different requirements for the substrate thickness, for which reason efforts were concentrated on a wideband transition between the SIIG and CPW. The designed transition shows good broadband performance and minimal radiation loss. Other transitions from the SIIG to the Substrate Integrated Waveguide (SIW) are also presented in the context of substrate integrated circuits (SICs). The latter technology combines planar transmission lines and originally non-planar waveguide structures that are synthesized in planar form on a common substrate. High alignment precision is a direct consequence, which eliminates the necessity for additional tuning. As an open dielectric waveguide technology with very small transmission loss, the SIIG is particularly suitable for antennas and corresponding feed lines. The similarity of the SIIG with other dielectric waveguides and especially with the image guide suggests a knowledge transfer from known dielectric antennas. A planar SIIG rod antenna was designed and fabricated, as a derivative of the established polyrod antenna. The structural shape is simple and compact, and it provides a medium gain in the range of 10 dBi to 15 dBi. A second developed type, an SIIG traveling-wave linear array antenna, is frequency-steerable through broadside due to special radiation elements. The novel design of a slab-mode antenna forms an endfire beam by a planar lens configuration. In addition, all of those dielectric-based antennas are highly efficient. 
Being synthesized on a planar substrate, the SIIG can be combined in a hybrid way with other waveguide structures on the same substrate in so-called substrate integrated circuits (SICs). It joins the SIW and the Substrate Integrated Non-Radiative Dielectric guide (SINRD) and adds unique featu

Patrovsky, Andreas

164

A new numerical method for solving the nonlinear mixed Volterra-Fredholm integral equations is presented. This method is based upon hybrid functions approximation. The properties of hybrid functions consisting of block-pulse functions and Bernoulli polynomials are presented. The operational matrices of integration and product are given. These matrices are then utilized to reduce the nonlinear mixed Volterra-Fredholm integral equations to the solution of algebraic equations. Illustrative examples are included to demonstrate the validity and applicability of the technique. PMID:24523638

Mashayekhi, S.; Razzaghi, M.; Tripak, O.

2014-01-01
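The operational matrix of integration mentioned above can be illustrated for the block-pulse part of the hybrid basis. This is a hedged sketch: the matrix below is the standard block-pulse operational matrix of integration, not the paper's combined block-pulse/Bernoulli construction.

```python
# Block-pulse operational matrix of integration P on [0, 1] with m basis
# functions of width h = 1/m: integrating a coefficient vector reduces to a
# matrix product, with P[i][i] = h/2 and P[i][j] = h for j > i.

m = 8
h = 1.0 / m
P = [[h if j > i else (h / 2 if j == i else 0.0) for j in range(m)]
     for i in range(m)]

# Integrate f(t) = 1 (all block-pulse coefficients equal 1): the result
# should be the block-pulse coefficients of t, i.e. the mid-block values.
f = [1.0] * m
integral = [sum(f[i] * P[i][j] for i in range(m)) for j in range(m)]
print(integral)  # [h/2, 3h/2, 5h/2, ...], the mid-block values of t
```

This is how an integral equation turns into algebraic equations: once the unknown is expanded in the basis, every integration becomes multiplication by P.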

166

Monte-Carlo Analysis of a Radar-Fuze Digital Integrator.

National Technical Information Service (NTIS)

The memory time of an RC integrator (i.e., a single-pole, low-pass filter) is proportional to the product of the resistance R and the capacitance C. Thus, the memory time will tend to vary with environment and from unit to unit in production. In a radar f...

C. S. Williams, L. T. James

1975-01-01
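The abstract's premise, that the memory time is proportional to the product R·C and therefore varies with component tolerances, can be checked with a trivial calculation; the component values below are made up for illustration.

```python
# RC integrator (single-pole low-pass filter): memory time tau = R * C,
# so a +/-10% spread in both R and C gives up to ~ +/-21% spread in tau.

def tau(R, C):
    """Time constant of a single-pole RC low-pass filter, in seconds."""
    return R * C

R_nom, C_nom = 10e3, 1e-6                 # hypothetical 10 kOhm, 1 uF
t_nom = tau(R_nom, C_nom)                 # 0.01 s
t_high = tau(R_nom * 1.1, C_nom * 1.1)    # both components +10%
print(t_nom, t_high / t_nom)              # 0.01, ~1.21
```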

167

We compute two- and three-body cluster functions that describe contributions of composite entities, like hydrogen atoms, ions H(-), H2 (+), and helium atoms, and also charge-charge and atom-charge interactions, to the equation of state of a hydrogen-helium mixture at low density. A cluster function has the structure of a truncated virial coefficient and behaves, at low temperatures, like a usual partition function for the composite entity. Our path integral Monte Carlo calculations use importance sampling to sample efficiently the cluster partition functions even at low temperatures where bound state contributions dominate. We also employ a new and efficient adaptive discretization scheme that allows one not only to eliminate Coulomb divergencies in discretized path integrals, but also to direct the computational effort where particles are close and thus strongly interacting. The numerical results for the two-body function agree with the analytically known quantum second virial coefficient. The three-body cluster functions are compared at low temperatures with familiar partition functions for composite entities. PMID:25399134

Wendland, D; Ballenegger, V; Alastuey, A

2014-11-14
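Importance sampling, used above to sample the cluster partition functions efficiently at low temperature, can be shown in its simplest form. The integrand and proposal below are assumptions chosen for a compact demonstration, far simpler than the paper's path-integral setting.

```python
# Importance-sampling sketch: estimate I = integral_0^inf exp(-x^2/2) dx
# by drawing x from an Exp(1) proposal q(x) = exp(-x) and averaging the
# weighted integrand f(x)/q(x). Concentrating samples where the integrand
# is large is the same idea the paper uses for bound-state contributions.
import math
import random

random.seed(0)
N = 200_000
total = 0.0
for _ in range(N):
    x = random.expovariate(1.0)                    # sample from proposal q
    total += math.exp(-x * x / 2) / math.exp(-x)   # weight f(x)/q(x)
estimate = total / N
exact = math.sqrt(math.pi / 2)                     # known value, ~1.2533
print(estimate, exact)
```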

168

Integrated modelling of the current profile in steady-state and hybrid ITER scenarios

NASA Astrophysics Data System (ADS)

We present integrated modelling of steady-state and hybrid scenarios for ITER parameters using several predictive transport codes. These employ models for non-inductive current drive sources in conjunction with various theory-based and semi-empirical transport models. In conjunction with the simulation effort, the current drive models are being evaluated in a series of cross-code and code-experiment comparisons under ITER-relevant conditions. New benchmark evaluations of current drive from injection of neutral beams (NBCD), electron cyclotron waves (ECCD) and lower hybrid waves (LHCD) are reported. Simulations using several transport modelling codes self-consistently calculate the heating and current drive sources using ITER design parameters. Operating constraints are also taken into account, although the calculations reported here still require further refinement. The modelling addresses both the final stationary state and dynamic access to it. The simulations indicate that generation and control of internal and edge barriers to access and maintain high confinement will be a major undertaking for future simulations, as well as a challenge for the ITER steady-state and hybrid experimental programme.

Houlberg, W. A.; Gormezano, C.; Artaud, J. F.; Barbato, E.; Basiuk, V.; Becoulet, A.; Bonoli, P.; Budny, R. V.; Eriksson, L. G.; Farina, D.; Gribov, Yu.; Harvey, R. W.; Hobirk, J.; Imbeaux, F.; Kessel, C. E.; Leonov, V.; Murakami, M.; Polevoi, A.; Poli, E.; Prater, R.; St. John, H.; Volpe, F.; Westerhof, E.; Zvonkov, A.; ITPA Steady State Operation Topical Group; Confinement Database, ITPA; Modeling Topical Group

2005-11-01

169

NASA Astrophysics Data System (ADS)

This paper proposes an approach for integrating Plug-In Hybrid Electric Vehicles (PHEVs) into an existing residential photovoltaic system, to control and optimize the power consumption of the residential load. Control involves determining the source from which the residential load will be served, whereas optimization of the power flow reduces the stress on the grid. The system built to achieve this goal combines the existing residential photovoltaic system, the PHEV, a Power Conditioning Unit (PCU), and a controller. The PCU comprises two DC-DC boost converters and an inverter. This paper emphasizes developing the controller logic and its implementation in order to accommodate the flexibility and benefits of the proposed integrated system. The proposed controller logic has been simulated using MATLAB SIMULINK and further implemented using a Digital Signal Processor (DSP) microcontroller, the TMS320F28035 from Texas Instruments.

Nagarajan, Adarsh; Shireen, Wajiha

2013-06-01

170

Hybrid PSO Based Integration of Multiple Representations of Thermal Hand Vein Patterns

NASA Astrophysics Data System (ADS)

This paper outlines a novel personal authentication approach that integrates multiple feature representations of thermal hand vein patterns. In the present work, vein patterns are regarded as comprising textures. Accordingly, two types of texture features, using Gabor wavelets and fuzzy logic, are extracted from the acquired vein images. Since the two approaches have different domains of feature representation, their integration is accomplished at the decision level by combining individual decisions from Euclidean distance based classifiers. The optimal decision parameters, comprising the individual decision thresholds and one fusion rule out of the 16 rules for two features, are estimated with the help of hybrid Particle Swarm Optimization (PSO), which can optimize the decisions taken by the individual classifiers. The experimental results, obtained on a 100-user database, are promising, confirming the usefulness of the proposed authentication system.

Kumar, Amioy; Hanmandlu, Madasu; Gupta, H. M.

171

The ObjECTS: Framework for Integrated Assessment: Hybrid Modeling of Transportation

Technology is a central issue for the global climate change problem, requiring analysis tools that can examine the impact of specific technologies with a long-term, global context. This paper describes the architecture of the ObjECTS-MiniCAM integrated assessment model, which implements a long-term, global model of energy, economy, agriculture, land-use, atmosphere, and climate change in a framework that allows the flexible incorporation of explicit technology detail. We describe the implementation of a ''bottom-up'' representation of the transportation sector as an illustration of this approach, in which the resulting hybrid model is fully integrated, internally consistent and theoretically compatible with the regional and global modeling framework. The analysis of the transportation sector presented here supports and clarifies the need for a comprehensive strategy promoting advanced vehicle technologies and an economy-wide carbon policy to cost-effectively reduce carbon emissions from the transportation sector in the long-term.

Kim, Son H.; Edmonds, James A.; Lurz, Joshua; Smith, Steven J.; Wise, Marshall A.

2006-09-01

172

An Integrated Methodology for Medieval Landscape Reconstruction: The Case Study of Monte Serico

Landscape Archaeology, in addition to drawing on classical disciplines such as geography, history, and environmental and human studies, has improved the use and integration of new technological instruments alongside other documentary sources. Lidar is a Remote Sensing technique recently adopted in archaeological research. Its main application is the study of any kind of archaeological feature causing a variation in surface elevation

Maria Danese; Marilisa Biscione; Rossella Coluzzi; Rosa Lasaponara; Beniamino Murgante; Nicola Masini

2009-01-01

173

Fully integrated hybrid silicon free-space beam steering source with 32-channel phased array

NASA Astrophysics Data System (ADS)

Free-space beam steering using optical phased arrays is a promising method for implementing free-space communication links and Light Detection and Ranging (LIDAR) without the sensitivity to inertial forces and long latencies which characterize moving parts. Implementing this approach on a silicon-based photonic integrated circuit adds the additional advantage of working with highly developed CMOS processing techniques. In this work we discuss our progress in the development of a fully integrated 32 channel PIC with a widely tunable diode laser, a waveguide phased array, an array of fast phase modulators, an array of hybrid III-V/silicon amplifiers, surface gratings, and a graded index lens (GRIN) feeding an array of photodiodes for feedback control. The PIC has been designed to provide beam steering across a 15°x5° field of view with 0.6°x0.6° beam width and background peaks suppressed 15 dB relative to the main lobe within the field of view for arbitrarily chosen beam directions. Fabrication follows the hybrid silicon process developed at UCSB with modifications to incorporate silicon diodes and a GRIN lens.

Hulme, J. C.; Doylend, J. K.; Heck, M. J. R.; Peters, J. D.; Davenport, M. L.; Bovington, J. T.; Coldren, L. A.; Bowers, J. E.

2014-03-01

174

A neural network and Kalman filter hybrid approach for GPS/INS integration

It is well known that Kalman filtering is an optimal real-time data fusion method for GPS/INS integration. However, it has some limitations in terms of stability, adaptability and observability. A Kalman filter can perform optimally only when its dynamic model is correctly defined and the noise statistics for the measurement and process are completely known. It is found that the estimated Kalman filter states can be influenced by several factors that are difficult to model, including vehicle dynamic variations, filter tuning results, and environment changes. Neural networks can map input-output relationships without a priori knowledge about them; hence a properly designed neural network is capable of learning and extracting these complex relationships with enough training. This paper presents a GPS/INS integrated system that combines Kalman filtering and neural network algorithms to improve navigation solutions during GPS outages. An Extended Kalman filter estimates filter states such as INS measurement errors and position, velocity and attitude errors, and gives precise navigation solutions while GPS signals are available. At the same time, a multi-layer neural network is trained to map the vehicle dynamics to the corresponding Kalman filter states, at the same rate as the measurement update. After the output of the neural network meets a similarity threshold, it can be used to correct INS measurements when no GPS measurements are available. Selecting suitable inputs and outputs for the neural network is critical for this hybrid method. Detailed analysis reveals that some Kalman filter states are highly correlated with vehicle dynamic variations. The filter states that most heavily impact the system navigation solutions are selected as the neural network outputs. The principle of this hybrid method and the neural network design are presented. Field test data are processed to evaluate the performance of the proposed method.
Key words: hybrid, neural network, Kalman filter, navigation solution

Jianguo Jack Wang; Jinling Wang; David Sinclair; Leo Watts

2006-01-01
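The Kalman filter update cycle at the core of this approach can be sketched for a one-dimensional static state. This is a minimal illustration, not the paper's multi-state GPS/INS Extended Kalman filter; the noise variances and measurement model are assumed values.

```python
# Minimal 1-D Kalman filter: estimate a constant value from noisy
# measurements. Each step predicts (adding process noise q), computes the
# Kalman gain, and blends the prediction with the new measurement z.
import random

def kalman_1d(measurements, q=1e-5, r=0.1 ** 2, x0=0.0, p0=1.0):
    """q: process noise variance, r: measurement noise variance."""
    x, p = x0, p0
    for z in measurements:
        p = p + q               # predict (static dynamic model)
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # update with the innovation z - x
        p = (1 - k) * p         # updated error covariance
    return x, p

random.seed(1)
true_value = 5.0
zs = [true_value + random.gauss(0, 0.1) for _ in range(500)]
est, var = kalman_1d(zs)
print(est)  # close to 5.0
```

The GPS/INS case replaces the scalar state with an error-state vector and the scalar gain with a matrix, but the predict/gain/update structure is the same.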

175

Microsphere integrated gelatin-siloxane hybrid scaffolds were successfully synthesized by using a combined sol-gel processing, post-gelation soaking and freeze-drying process. A bone-like apatite layer was able to form in the Ca2+-containing porous hybrids upon soaking in a simulated body fluid (SBF) for up to 1 day. The rate of gentamicin sulfate (GS) release from the GS-loaded gelatin-siloxane hybrid microsphere became constant after

Lin Wang; Bing Yu; Li-ping Sun; Lei Ren; Qi-qing Zhang

2008-01-01

176

By means of the exact Path Integral Monte Carlo method we have performed a detailed microscopic study of 4He nanodroplets doped with an argon ion, Ar$^+$, at $T=0.5$ K. We have computed density profiles, energies and dissociation energies, and characterized the local order around the ion for nanodroplets with a number of 4He atoms ranging from 10 to 64, and also 128. We have found the formation of a stable solid structure around the ion, a "snowball", consisting of 3 concentric shells in which the 4He atoms are placed at the vertices of Platonic solids: the first, inner shell is an icosahedron (12 atoms); the second is a dodecahedron with 20 atoms placed on the faces of the icosahedron of the first shell; the third shell is again an icosahedron composed of 12 atoms placed on the faces of the dodecahedron of the second shell. The "magic numbers" implied by this structure, 12, 32 and 44 helium atoms, have been observed in a recent experimental study [Bartl et al., J. Phys. Chem. A 118, 2014] of these complexes;...

Tramonto, Filippo; Nava, Marco; Galli, Davide E

2014-01-01

177

The International Safety Management (ISM) Code defines a broad framework for the safe management and operation of merchant ships, maintaining high standards of safety and environmental protection. On the other hand, ISO 14001:2004 provides a generic, worldwide environmental management standard that has been utilized by several industries. Both the ISM Code and ISO 14001:2004 have the practical goal of establishing a sustainable Integrated Environmental Management System (IEMS) for shipping businesses. This paper presents a hybrid design methodology that shows how requirements from both standards can be combined into a single execution scheme. Specifically, the Analytic Hierarchy Process (AHP) and Fuzzy Axiomatic Design (FAD) are used to structure an IEMS for ship management companies. This research provides decision aid to maritime executives in order to enhance the environmental performance in the shipping industry. PMID:19038488

Celik, Metin

2009-03-01

178

Electrocoagulation-integrated hybrid membrane processes for the treatment of tannery wastewater.

Three different combinations of treatment techniques, i.e. electrocoagulation combined with microfiltration (EMR), a membrane bioreactor (MBR) and electrocoagulation integrated with a membrane bioreactor (hybrid MBR, HMBR), were analysed and compared for the treatment of tannery wastewater, operated for 7 days under a constant trans-membrane pressure of 5 kPa. HMBR was found to be the most suitable in performance as well as fouling reduction, with 94 % chemical oxygen demand (COD) removal, 100 % chromium removal and an 8 % improvement in the percentage reduction in permeate flux, compared to the MBR with only 90 % COD removal and 67 % chromium removal. The effect of mixed liquor suspended solids on fouling was also investigated and was found to be insignificant. EMR was capable of elevating the flux but was not as efficient as HMBR and MBR in COD removal. Fouling reduction by HMBR was further confirmed by SEM-EDX and particle size analysis. PMID:23653316

Keerthi; Vinduja, V; Balasubramanian, N

2013-10-01

179

An integrated microfluidic chip for chromosome enumeration using fluorescence in situ hybridization.

Fluorescence in situ hybridization (FISH) is a powerful technique for probing the genetic content of individual cells at the chromosomal scale. Conventional FISH techniques provide a sensitive diagnostic tool for the detection of chromosomal alterations on a cell-by-cell basis; however, the cost-per-test in terms of reagent and highly qualified labour has prevented its wide-spread utilization in clinical settings. Here, we address the inefficient use of labour with the first integrated and automated on-chip FISH implementation, one that requires only minutes of setup time from the technician. Our microfluidic chip has lowered the reagent use by 20-fold, decreased the labour time by 10-fold, and substantially reduced the amount of support equipment needed. We believe this cost-effective platform will make sensitive FISH techniques more accessible for routine clinical usage. PMID:19023479

Sieben, Vincent J; Debes-Marun, Carina S; Pilarski, Linda M; Backhouse, Christopher J

2008-12-01

180

NASA Technical Reports Server (NTRS)

In order to understand the global structure, dynamics, and physical and chemical processes occurring in the upper atmospheres, exospheres, and ionospheres of the Earth, the other planets, comets and planetary satellites, and their interactions with their outer particle and field environments, it is often necessary to address the fundamentally non-equilibrium aspects of the physical environment. These are regions where complex chemistry, energetics, and electromagnetic field influences are important. Traditional approaches are based largely on hydrodynamic or magnetohydrodynamic (MHD) formulations and are very important and highly useful. However, these methods often have limitations in rarefied physical regimes where the molecular collision rates and ion gyrofrequencies are small and where interactions with ionospheres and upper neutral atmospheres are important. At the University of Michigan we have an established base of experience and expertise in numerical simulations based on particle codes which address these physical regimes. The Principal Investigator, Dr. Michael Combi, has over 20 years of experience in the development of particle-kinetic and hybrid kinetic-hydrodynamic models and their direct use in data analysis. He has also worked on ground-based and space-based remote observations and on spacecraft instrument teams. His research has involved studies of cometary atmospheres and ionospheres and their interaction with the solar wind, the neutral gas clouds escaping from Jupiter's moon Io, the interaction of the atmospheres/ionospheres of Io and Europa with Jupiter's corotating magnetosphere, as well as Earth's ionosphere. This report describes our progress during the year.
The material contained in section 2 of this report will serve as the basis of a paper describing the method and its application to the cometary coma; this work will be continued under a research and analysis grant that supports various applications of theoretical comet models to understanding the inner comae of comets (grant NAGS-13239 from the Planetary Atmospheres program).

Combi, Michael R.

2004-01-01

181

NSDL National Science Digital Library

Monte Carlo modeling refers to the solution of mathematical problems with the use of random numbers. This can include both function integration and the modeling of stochastic phenomena using random processes.

Joiner, David; The Shodor Education Foundation, Inc.

182

According to the current control characteristics of the hybrid active power filter (HAPF), a current control model of the HAPF is designed. A fuzzy recursive integral PI control algorithm is presented and compared with the conventional PI control method. The control algorithm auto-regulates the proportional and integral parameters of the PI controller, so robustness and response speed are enhanced;

Zhong Tang; Daifa Liao

2009-01-01

183

A novel fabrication method and structure of a planar lightwave circuit (PLC) platform for hybrid integration of an optical module by passive alignment technology is presented. Precise formation of V-grooves in the PLC platform can be easily obtained by the proposed process. The passive alignment of optical elements, including optical fiber, is achieved in a one-mask process. LD modules were implemented by

Soo-Jin Park; Ki-Tae Jeong; Sang-Ho Park; Hee-Kyung Sung

2002-01-01

184

Glioblastoma, the most aggressive primary brain tumor in humans, exhibits a large degree of molecular heterogeneity. Understanding the molecular pathology of a tumor and its linkage to behavior is an important foundation for developing and evaluating approaches to clinical management. Here we integrate array-comparative genomic hybridization and array- based gene expression profiles to identify relationships between DNA copy number aberrations,

Janice M. Nigro; Anjan Misra; Ivan Smirnov; Howard Colman; Chandi Griffin; Natalie Ozburn; Mingang Chen; Edward Pan; Dimpy Koul; Burt G. Feuerstein; Kenneth D. Aldape

2005-01-01

185

First application close measurements applying the new hybrid integrated MEMS spectrometer

NASA Astrophysics Data System (ADS)

Grating spectrometers have been designed in many different configurations. Now potential high-volume applications call for extremely miniaturized and low-cost systems. By the use of integrated MEMS (micro electro mechanical systems) scanning grating devices, a less expensive single detector can be used in the NIR instead of the array detectors required for fixed-grating systems. Meanwhile the design of a hybrid integrated MEMS scanning grating spectrometer has been drawn up. The MEMS device was fabricated in Fraunhofer IPMS's own clean room facility. This chip is mounted on a small circuit board together with the detector and then stacked with spacer and mirror substrates. The spectrometer has been realized by stacking several planar substrates using sophisticated mounting technologies. The spectrometer has been designed for the 950 nm - 1900 nm spectral range and 9 nm spectral resolution with organic matter analysis in mind. First applications are considered in food quality analysis and food processing technology. As an example of the use of a spectrometer with this performance, the grilling of steak was analyzed. Similar measurements would be possible on dairy products, vegetables or fruit. The idea is a mobile spectrometer for in situ and on-site analysis applications in, or attached to, a host system providing processing, data access and input-output capabilities, regardless of whether this is a laptop, tablet, smart phone or embedded platform.

Grüger, Heinrich; Pügner, Tino; Knobbe, Jens; Schenk, Harald

2013-05-01

186

NASA Astrophysics Data System (ADS)

A mathematical simulation approach based on the general purpose Monte Carlo N-particle transport code MCNP was developed to predict the response of the XRF branch of the hybrid K-edge/K-XRF densitometer (HKED). The respective MCNP models for two different versions of HKED instruments currently in use were set up and experimentally validated. The setting up of the models involved comprehensive simulations of a bremsstrahlung photon source, the examination of different particle transport models, as well as the examination of different photon attenuation and X-ray fluorescence data libraries. The computation speed was significantly increased through the extensive use of variance reduction techniques. The models were validated through a series of benchmarking experiments performed with a representative set of uranium, plutonium and mixed U/Pu reference solutions. The models and simulation approach developed are intended for: (i) establishing a consistent mathematical calibration approach for the XRF branch of the HKED instruments, which will require minimum calibration effort and time, (ii) extending the applicability of the HKED method to non-standard samples (e.g. U/Pu mixtures with unusual element ratios) and non-standard sample matrices (e.g. HM matrices from the pyro-processing of irradiated nuclear fuel) without investing a great deal of extra calibration work, and (iii) improving the accuracy of the measurements through the modelling of special measurement effects (e.g. the secondary excitation effect, the interference with X-ray escape peaks, the inconsistent unfolding of the overlapping peaks and peak background delineation in the measured XRF spectrum), which are difficult or sometimes impossible to account for experimentally.

Berlizov, A. N.; Sharikov, D. A.; Ottmar, H.; Eberle, H.; Galy, J.; Luetzenkirchen, K.

2010-03-01

187

NASA Astrophysics Data System (ADS)

Controlled integration of multiple semiconducting oxides into each single unit of ordered nanotube arrays is highly desired in scientific research for the realization of more attractive applications. We herein report a diffusion-controlled solid-solid route to evolve simplex Co(CO3)0.5(OH)·0.11H2O@TiO2 core-shell nanowire arrays (NWs) into CoO-CoTiO3 integrated hybrid nanotube arrays (NTs) with preserved morphology. During the evolution procedure, the decomposition of Co(CO3)0.5(OH)·0.11H2O NWs into chains of CoCO3 nanoparticles initiates the diffusion process and promotes the interfacial solid-solid diffusion reaction even at a low temperature of 450 °C. The resulting CoO-CoTiO3 NTs possess well-defined sealed tubular geometries and a special "inner-outer" hybrid nature, which is suitable for application in Li-ion batteries (LIBs). As a proof-of-concept demonstration of the functions of such hybrid NTs in LIBs, CoO-CoTiO3 NTs are directly tested as LIB anodes, exhibiting both a high capacity (~600 mA h g-1 still remaining after 250 continuous cycles) and a much better cycling performance (no capacity fading within 250 total cycles) than CoO NWs. Our work presents not only a diffusion route for the formation of integrated hybrid NTs but also a new concept that can be employed as a general strategy to fabricate other oxide-based hybrid NTs for energy storage devices. Electronic supplementary information (ESI) available: SEM images of Co(CO3)0.5(OH)·0.11H2O NWs, SEM/TEM images of CoO-CoTiO3 hybrid nanotubes and the calculation of CoTiO3 theoretical capacity. See DOI: 10.1039/c3nr01786a

Jiang, Jian; Luo, Jingshan; Zhu, Jianhui; Huang, Xintang; Liu, Jinping; Yu, Ting

2013-08-01

188

NASA Astrophysics Data System (ADS)

MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10⁻⁵ eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information.
Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each tally. The tally system has been optimized to maintain a high level of efficiency, even as the number of edit regions becomes very large.
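The constructive solid geometry idea described above, spatial cells built from signed first- and second-order surfaces, can be sketched generically as follows. This is an illustrative toy under assumed conventions, not MC21's actual interface; all names (`Cell`, `sphere`, `plane_z`) are hypothetical:

```python
import math

# Each surface is a function returning a signed value; f(p) = 0 is the surface
# itself, and a "sense" of +1.0 or -1.0 selects the half-space where the sign
# of f(p) matches the sense.
def sphere(cx, cy, cz, r):
    return lambda p: (p[0] - cx)**2 + (p[1] - cy)**2 + (p[2] - cz)**2 - r * r

def plane_z(z0):
    return lambda p: p[2] - z0

class Cell:
    """A spatial cell defined as the intersection of signed half-spaces."""
    def __init__(self, halfspaces):
        self.halfspaces = halfspaces  # list of (surface_fn, sense) pairs
    def contains(self, p):
        return all(math.copysign(1.0, f(p)) == sense
                   for f, sense in self.halfspaces)

# Lower half of a unit sphere: inside the sphere (sense -1) and below z = 0 (sense -1).
cell = Cell([(sphere(0, 0, 0, 1.0), -1.0), (plane_z(0.0), -1.0)])
```

A tracking code built on this representation would test `cell.contains(p)` for each candidate cell at every particle step; hierarchical models then nest such cells inside one another.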

Griesheimer, D. P.; Gill, D. F.; Nease, B. R.; Sutton, T. M.; Stedry, M. H.; Dobreff, P. S.; Carpenter, D. C.; Trumbull, T. H.; Caro, E.; Joo, H.; Millman, D. L.

2014-06-01

189

NASA Astrophysics Data System (ADS)

Hybrid integration of prefabricated III-V laser diodes with sub-micrometric silicon photonic waveguides suffers from a tradeoff between alignment tolerance and coupling efficiency. In this work, we demonstrate integrated coupling devices that substantially alleviate this problem by means of a balanced distribution of the laser power between two on-chip single mode SOI waveguides. With the reported coupling devices, a horizontal misalignment of the laser is converted into a variation of the relative phase of the light coupled into the two waveguides, allowing the reciprocity principle to be satisfied while maintaining a high total coupling efficiency and a balanced power splitting. The relaxed alignment tolerances facilitate passive assembly of the lasers with pick-and-place tools. The balanced splitting of the power between waveguides is particularly well suited for optical interconnects with parallel transmitters. Here, the device design is discussed for both edge couplers and grating couplers relying on similar design principles. Furthermore, experimental characterization of edge-coupling structures with a lensed fiber and a Fabry-Pérot laser is presented. These devices have been fabricated with 193nm DUV optical lithography and are compatible with mainstream CMOS technology. The best edge couplers exhibit an excellent 1 dB loss horizontal misalignment range of 3.8 µm with excess insertion losses below 3.1 dB (in addition to the 3dB splitting). The back-reflection induced by the device has been assessed to be below -20 dB, and the measured relative intensity noise is better than that measured from the same laser coupled to a lensed fiber.

Romero-García, S.; Marzban, B.; Sharif Azadeh, S.; Merget, F.; Shen, B.; Witzens, J.

2014-05-01

190

Graphene is well-known as a two-dimensional sheet of carbon atoms arrayed in a honeycomb structure. It has some unique and fascinating properties, which are useful for realizing many optoelectronic devices and applications, including transistors, photodetectors, solar cells, and modulators. To enhance light-graphene interactions and take advantage of its properties, a promising approach is to combine a graphene sheet with optical waveguides, such as silicon nanophotonic wires considered in this paper. Here we report local and nonlocal optically induced transparency (OIT) effects in graphene-silicon hybrid nanophotonic integrated circuits. A low-power, continuous-wave laser is used as the pump light, and the power required for producing the OIT effect is as low as ~0.1 mW. The corresponding power density is several orders lower than that needed for the previously reported saturated absorption effect in graphene, which implies a mechanism involving light absorption by the silicon and photocarrier transport through the silicon-graphene junction. The present OIT effect enables low power, all-optical, broadband control and sensing, modulation and switching locally and nonlocally. PMID:25372937

Yu, Longhai; Zheng, Jiajiu; Xu, Yang; Dai, Daoxin; He, Sailing

2014-11-25

191

Photo-crosslinkable hybrid material with improved aging stability for integrated optics

NASA Astrophysics Data System (ADS)

In the last decade, the processing of waveguide structures on various substrates under mild conditions has been an appealing aim. The lithographic patterning of organic-inorganic hybrid materials processed by means of sol-gel technology allows the production of waveguides and other optical components. We describe the synthesis of a new, photo-patternable, organically modified material with improved aging stability. The synthesis step does not involve the widely used zirconia precursors, but it retains the same possibility of altering the refractive index by tailoring the material composition. Refractive index values varied from 1.4700 to 1.5100. Measured birefringence values meet the requirements of most integrated planar optic applications. The synthesized material is compatible with silicon, glass and plastic substrates. The material was analyzed using 29Si NMR techniques. The processed slab waveguides were characterized using the prism coupling technique at various wavelengths. The attenuation in the waveguide was determined by the cut-back method and was found to be less than 0.5 dB/cm at a wavelength of 830 nm. The morphology of the microstructures was measured using interferometry. The rms surface roughness of the slab waveguides was on the order of only 2 nm.

Kusevic, Maja; Maaninen, Arto; Hiltunen, Jussi; Hiltunen, Marianne; Tuominen, Jarkko; Karioja, Pentti

2004-08-01

192

Hybrid finite element (FE)--boundary integral (BI) analysis of infinite periodic arrays is extended to include planar multilayered Green's functions. In this manner, a portion of the volumetric dielectric region can be modeled via the finite element method whereas uniform multilayered regions can be modeled using a multilayered Green's function. As such, thick uniform substrates can be modeled without loss of efficiency and accuracy. The multilayered Green's function is analytically computed in the spectral domain and the resulting BI matrix-vector products are evaluated via the fast spectral domain algorithm (FSDA). As a result, the computational cost of the matrix-vector products is kept at O(N). Furthermore, the number of Floquet modes in the expansion is kept very small by placing the BI surfaces within the computational unit cell. Examples of frequency selective surface (FSS) arrays are analyzed with this method to demonstrate the accuracy and capability of the approach. One example involves complicated multilayered substrates above and below an inhomogeneous filter element and the other is an optical ring-slot array on a substrate several hundred wavelengths in thickness. Comparisons with measurements are included.

T.F. Eibert; J.L. Volakis; Y.E. Erdemli

2002-03-03

193

NASA Astrophysics Data System (ADS)

This work presents the integration of a common mode filter with ElectroStatic Discharge protection on a silicon/porous silicon hybrid substrate. The porous silicon fabrication was performed after the integration of active components. Thus, a fluoropolymer hard mask was used to protect the active devices during anodization and can be easily removed without damaging the porous silicon. Electrical characterization results have shown fully operational components and an increase of performance with the hybrid substrate relative to p+-type silicon. Indeed, the cutoff frequency was increased by 8.8 GHz when porous silicon was fabricated below the bump pads and the inductors. This improvement is a promising result for extending the application of RF components to future communication standards with silicon technology.

Capelle, M.; Billoué, J.; Concord, J.; Poveda, P.; Gautier, G.

2014-02-01

194

NASA Technical Reports Server (NTRS)

The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. The Hybrid Automated Reliability Predictor (HARP) tutorial provides insight into HARP modeling techniques and the interactive textual prompting input language via a step-by-step explanation and demonstration of HARP's fault occurrence/repair model and the fault/error handling models. Example applications are worked in their entirety and the HARP tabular output data are presented for each. Simple models are presented at first with each succeeding example demonstrating greater modeling power and complexity. This document is not intended to present the theoretical and mathematical basis for HARP.

Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Bavuso, Salvatore J.

1994-01-01

195

Monte Carlo neutrino oscillations

We demonstrate that the effects of matter upon neutrino propagation may be recast as the scattering of the initial neutrino wave function. Exchanging the differential Schrödinger equation for an integral equation for the scattering matrix S permits a Monte Carlo method for the computation of S that removes many of the numerical difficulties associated with direct integration techniques.
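The generic device this abstract relies on, a Monte Carlo estimate of the solution of an integral equation, can be sketched with the classic von Neumann-Ulam random-walk scheme on a scalar Fredholm equation. This is a textbook illustration under an assumed constant kernel and arbitrary parameters, not the authors' neutrino calculation:

```python
import random

def mc_neumann(g, K, lam, x0, walks=200_000, p_survive=0.5, seed=1):
    """Estimate f(x0) for f(x) = g(x) + lam * integral_0^1 K(x, y) f(y) dy
    by sampling the Neumann series with a killed random walk on [0, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        x, w = x0, 1.0
        total += w * g(x)                    # zeroth-order term of the series
        while rng.random() < p_survive:      # walk survives with probability p
            y = rng.random()                 # next point, uniform on [0, 1]
            w *= lam * K(x, y) / p_survive   # importance weight for this step
            x = y
            total += w * g(x)                # next term of the Neumann series
    return total / walks

# Constant-kernel check: f = 1 + 0.5 * integral of f  =>  f(x) = 2 everywhere.
est = mc_neumann(lambda x: 1.0, lambda x, y: 0.5, 1.0, x0=0.3)
```

Each walk contributes an unbiased sample of the Neumann series, so the estimator converges to the exact solution as the number of walks grows; the same idea extends to matrix-valued kernels such as a scattering matrix S.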

Kneller, James P. [Department of Physics, North Carolina State University, Raleigh, North Carolina 27695-8202 (United States); School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455 (United States); McLaughlin, Gail C. [Department of Physics, North Carolina State University, Raleigh, North Carolina 27695-8202 (United States)

2006-03-01

196

Monte Carlo Neutrino Oscillations

We demonstrate that the effects of matter upon neutrino propagation may be recast as the scattering of the initial neutrino wavefunction. Exchanging the differential Schrödinger equation for an integral equation for the scattering matrix S permits a Monte Carlo method for the computation of S that removes many of the numerical difficulties associated with direct integration techniques.

James P. Kneller; Gail C. McLaughlin

2005-09-29

197

NASA Astrophysics Data System (ADS)

The European BOOM project aims at the realization of high-capacity photonic routers using silicon as the base material for functional and cost-effective integration. Here we present the design, fabrication and testing of the first BOOM generation of hybrid integrated silicon photonic devices that implement key photonic routing functionalities. Ultra-fast all-optical wavelength converters and micro-ring resonator UDWDM label photodetectors are realized using either 4 µm SOI rib or SOI nanowire boards. For the realization of these devices, flip-chip compatible non-linear SOAs and evanescent PIN detectors have been designed and fabricated. These active components are integrated on the SOI boards using high precision flip-chip mounting and heterogeneous InP-to-silicon integration techniques. This type of scalable and cost-effective silicon-based component fabrication opens up the possibility for the realization of chip-scale, power efficient, Tb/s capacity photonic routers.

Stampoulidis, Leontios; Vyrsokinos, Konstantinos; Stamatiadis, Christos; Avramopoulos, Hercules; Zimmermann, Lars; Voigt, Karsten; Sheng, Zhen; Van Thourhout, Dries; Kreissl, Jochen; Mörl, Ludwig; Bolten, Jens; Wahlbrink, Thorsten; Gomez-Agis, Fausto; Tangdiongga, Eduward; Dorren, Harmen J. S.; Pagano, Annachiara; Riccardi, Emilio

2010-05-01

198

Hybrid materials science: a promised land for the integrative design of multifunctional materials.

For more than 5000 years, organic-inorganic composite materials created by men via skill and serendipity have been part of human culture and customs. The concept of "hybrid organic-inorganic" nanocomposites exploded in the second half of the 20th century with the expansion of the so-called "chimie douce" which led to many collaborations between a large set of chemists, physicists and biologists. Consequently, the scientific melting pot of these very different scientific communities created a new pluridisciplinary school of thought. Today, the tremendous effort of basic research performed in the last twenty years allows tailor-made multifunctional hybrid materials with perfect control over composition, structure and shape. Some of these hybrid materials have already entered the industrial market. Many tailor-made multiscale hybrids are increasingly impacting numerous fields of applications: optics, catalysis, energy, environment, nanomedicine, etc. In the present feature article, we emphasize several fundamental and applied aspects of the hybrid materials field: bioreplication, mesostructured thin films, Lego-like chemistry designed hybrid nanocomposites, and advanced hybrid materials for energy. Finally, a few commercial applications of hybrid materials will be presented. PMID:24866174

Nicole, Lionel; Laberty-Robert, Christel; Rozes, Laurence; Sanchez, Clément

2014-06-21

199

Novel Concepts for Integrating the Electric Drive and Auxiliary DC–DC Converter for Hybrid Vehicles

Cost, volume, and weight are three major driving forces in the automotive area. This is also true for hybrid electric vehicles, which are attracting more and more attention due to increasing fuel costs and air pollution. In hybrid vehicles, the energy distribution system causes a significant share of the volume and the costs. One part of this system is the

Hanna Plesko; Jorma Luomi; Johann W. Kolar

2008-01-01

200

Hybrid Environmental Control System Integrated Modeling Trade Study Analysis for Commercial Aviation

NASA Astrophysics Data System (ADS)

Current industry trends demonstrate aircraft electrification will be part of future platforms in order to achieve higher levels of efficiency in various vehicle level sub-systems. However electrification requires a substantial change in aircraft design that is not suitable for re-winged or re-engined applications as some aircraft manufacturers are opting for today. Thermal limits arise as engine cores progressively get smaller and hotter to improve overall engine efficiency, while legacy systems still demand a substantial amount of pneumatic, hydraulic and electric power extraction. The environmental control system (ECS) provides pressurization, ventilation and air conditioning in commercial aircraft, making it the main heat sink for all aircraft loads with the exception of the engine. To mitigate the architecture thermal limits in an efficient manner, the form in which the ECS interacts with the engine will have to be enhanced so as to reduce the overall energy consumed and achieve an energy optimized solution. This study examines a tradeoff analysis of an electric ECS by use of a fully integrated Numerical Propulsion Simulation System (NPSS) model that is capable of studying the interaction between the ECS and the engine cycle deck. It was found that the optimal solution lies in a hybrid ECS that strikes the correct balance between a traditional pneumatic and a fully electric system. This intermediate architecture offers a substantial improvement in aircraft fuel consumption due to a reduced amount of waste heat and customer bleed in exchange for partial electrification of the air-conditioning pack, which is a viable option for re-winged applications.

Parrilla, Javier

201

MÖNCH, a small pitch, integrating hybrid pixel detector for X-ray applications

NASA Astrophysics Data System (ADS)

PSI is developing several new detector families based on charge integration and analog readout (CI) to respond to the needs of X-ray free electron lasers (XFELs), where a signal of up to ~10⁴ photons impinging simultaneously on a pixel makes single photon counting detectors unusable. MÖNCH is a novel hybrid silicon pixel detector where CI is combined with a challengingly small pixel size of 25 × 25 µm². CI enables the detector to process several incoming photons simultaneously in XFEL applications. Moreover, due to the small pixel size, the charge produced by an impinging photon is often shared. In low flux experiments the analog information provided by single photons can be used either to obtain spectral information or to improve the position resolution by interpolation. Possible applications are resonant and non-resonant inelastic X-ray scattering or X-ray tomography with X-ray tubes. Two prototype ASICs were designed in UMC 110 nm technology. MÖNCH01 contains only some test cells used to assess technology performance and make basic design choices. MÖNCH02 is a fully functional, small scale prototype of 4 × 4 mm2, containing an array of 160 × 160 pixels. This array is subdivided in five blocks, each featuring a different pixel architecture. Two blocks have statically selectable preamplifier gains and target synchrotron applications. In low gain mode they should provide single photon sensitivity (at 6-12 keV) as well as a reasonable dynamic range for such a small area ( > 120 photons). In high gain they target high resolution, low flux experiments where charge sharing can be exploited to reach µm resolution. Three other architectures address possible uses at XFELs and implement automatic switching between two gains to increase the dynamic range, as well as input overvoltage control. The paper presents the MÖNCH project and first results obtained with the MÖNCH02 prototype.

Dinapoli, R.; Bergamaschi, A.; Cartier, S.; Greiffenberg, D.; Johnson, I.; Jungmann, J. H.; Mezza, D.; Mozzanica, A.; Schmitt, B.; Shi, X.; Tinti, G.

2014-05-01

202

Investigating the surface characteristics of heterogeneous polymer systems is important for understanding how to better tailor surfaces and engineer specific reactions and desirable properties. Here we report on the surface properties for a blend consisting of a major component, a linear polyurethane or thermoplastic elastomer (TPU), and a minor component that is a hybrid network. The hybrid network consists of a fluorous polyoxetane soft block and a hydrolysis/condensation inorganic (HyCoin) network. Phase separation during coating formation results in surface concentration of the minor fluorous hybrid domain. The TPU is H12MDI/BD(50)-PTMO-1000 derived from bis(cyclohexylmethylene)-diisocyanate and butane diol (50 wt %) and poly(tetramethylene oxide). Surface modification results from a novel network-forming hybrid composed of poly(trifluoroethoxymethyl-methyl oxetane) diol (3F) as the fluorous moiety end-capped with 3-isocyanatopropyltriethoxysilane and bis(triethoxysilyl)ethane (BTESE) as a siliceous stabilizer. We use an integrated approach that combines elemental analysis of the near surface via X-ray photoelectron microscopy with surface mapping using atomic force microscopy that presents topographical and phase imaging along with nanomechanical properties. Overall, this versatile, high-resolution approach enabled unique insight into surface composition and morphology that led to a model of heterogeneous surfaces containing a range of constituents and properties. PMID:25268217

Nair, Sithara S; McCullough, Eric J; Yadavalli, Vamsi K; Wynne, Kenneth J

2014-11-01

203

NASA Technical Reports Server (NTRS)

The hybrid antenna discussed here is defined as a dielectric lens-antenna as a special case of an extended hemi-spherical dielectric lens that is operated in the diffraction limited regime. It is a modified version of the planar antenna on a lens scheme developed by Rutledge. The dielectric lens-antenna is fed by a planar-structure antenna, which is mounted on the flat side of the dielectric lens-antenna using it as a substrate, and the combination is termed a hybrid antenna. Beam pattern and aperture efficiency measurements were made at millimeter and submillimeter wavelengths as a function of extension of the hemi-spherical lens and different lens sizes. An optimum extension distance is found experimentally and numerically for which excellent beam patterns and simultaneously high aperture efficiencies can be achieved. At 115 GHz the aperture efficiency was measured to be (76.4 +/- 6)% for a diffraction limited beam with sidelobes below -17 dB. Results of a single hybrid antenna with an integrated Superconductor-Insulator-Superconductor (SIS) detector and a broad-band matching structure at submillimeter wavelengths are presented. The hybrid antenna is diffraction limited, space efficient in an array due to its high aperture efficiency, and is easily mass produced, thus being well suited for focal plane heterodyne receiver arrays.

Buttgenbach, Thomas H.

1993-01-01

204

1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
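The "spatially discrete diffusion solutions" ingredient of such a hybrid can be sketched as a standard finite-difference solve of a 1-D steady diffusion problem. This is a generic illustration under arbitrary coefficients (D, q) and grid, not the EqDDMC implementation, which additionally couples Monte Carlo particles to the diffusion region:

```python
# 1-D steady diffusion -D u'' = q on (0, 1) with u(0) = u(1) = 0, solved by
# central finite differences; the analytic solution is u(x) = q x (1 - x) / (2 D).
def solve_diffusion(D, q, n):
    h = 1.0 / n
    a, b, c = -D / h**2, 2 * D / h**2, -D / h**2   # tridiagonal coefficients
    m = n - 1                                      # number of interior unknowns
    # Thomas algorithm: forward elimination...
    cp, dp = [0.0] * m, [0.0] * m
    cp[0], dp[0] = c / b, q / b
    for i in range(1, m):
        denom = b - a * cp[i - 1]
        cp[i] = c / denom
        dp[i] = (q - a * dp[i - 1]) / denom
    # ...then back substitution.
    u = [0.0] * m
    u[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u  # u[i] approximates u((i + 1) * h)

u = solve_diffusion(D=1.0, q=2.0, n=100)
mid = u[49]   # value at x = 0.5; analytic value is q / (8 D) = 0.25
```

In a hybrid scheme of the kind the abstract describes, a deterministic solve like this would handle the diffusive (optically thick) region while Monte Carlo particles handle the rest, with the two exchanging boundary information.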

T. Evans; et al.

2000-08-01

205

Solanum nigrum and S. villosum, and their sexual hybrids with S. tuberosum and S. demissum respectively, were inoculated with a complex race of Phytophthora infestans. No visible reaction was seen on S. villosum and one genotype of S. nigrum. Another genotype of S. nigrum and the hybrids showed a hypersensitive response on most inoculated leaves. In one experiment, some sporulation

I. T. Colon; R. Eijlander; D. J. Budding; M. T. Ijzendoorn; M. M. J. Pieters; J. Hoogendoorn

1992-01-01

206

Quantum Gibbs ensemble Monte Carlo

NASA Astrophysics Data System (ADS)

We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of ⁴He in two dimensions.

Fantoni, Riccardo; Moroni, Saverio

2014-09-01

207

NASA Astrophysics Data System (ADS)

A polarimeter to observe exoplanets in the visible and infrared was built for the "Observatoire du Mont Mégantic" (OMM) to replace an existing instrument and to reach 10⁻⁶ precision, a factor-of-100 improvement. The optical and mechanical designs are presented, along with the techniques used to precisely align the optical components and rotation axes to achieve the targeted precision. A photo-elastic modulator (PEM) and a lock-in amplifier are used to measure the polarization. The typical signal is a large DC level with a very faint sinusoidal oscillation superimposed on it. Custom electronics were developed to measure the AC and DC amplitudes, and characterization results are presented.
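The AC/DC measurement principle lends itself to a compact software model: correlate the sampled signal with quadrature references at the modulator frequency and average. A sketch under assumed sample rates and amplitudes; the actual instrument uses dedicated analog electronics, not this code:

```python
import math

def lockin_demodulate(samples, f_ref, f_samp):
    """Software lock-in: recover the DC level and the amplitude of the
    small sinusoidal component at the reference frequency."""
    n = len(samples)
    dc = sum(samples) / n
    # Correlate with quadrature references at the modulator frequency.
    c = sum(s * math.cos(2 * math.pi * f_ref * i / f_samp)
            for i, s in enumerate(samples))
    q = sum(s * math.sin(2 * math.pi * f_ref * i / f_samp)
            for i, s in enumerate(samples))
    ac = 2.0 * math.hypot(c, q) / n
    return dc, ac

# Synthetic detector signal: large DC with a faint modulation riding on it
# (frequencies and depth are illustrative, not the instrument's values).
f_samp, f_ref, n = 100_000.0, 1_000.0, 10_000
signal = [1.0 + 1e-3 * math.sin(2 * math.pi * f_ref * i / f_samp)
          for i in range(n)]
dc, ac = lockin_demodulate(signal, f_ref, f_samp)
```

The ratio ac/dc then estimates the fractional modulation, which is why measuring both amplitudes accurately matters at the 10⁻⁶ level.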

Leclerc, Melanie R.; Côté, Patrice; Duchesne, François; Bastien, Pierre; Hernandez, Olivier; Colonna d'Istria, Pierre; Demers, Mathieu; Girard, Marc; Savard, Maxime; Lemieux, Dany; Thibault, Simon; Brousseau, Denis

2014-08-01

208

Recombinant adeno-associated viral (AAV) vectors have been shown to be one of the most promising vectors for therapeutic gene delivery because they can induce efficient and long-term transduction in non-dividing cells with negligible side-effects. However, as AAV vectors mostly remain episomal, vector genomes and transgene expression are lost in dividing cells. Therefore, to stably transduce cells, we developed a novel AAV/transposase hybrid-vector. To facilitate SB-mediated transposition from the rAAV genome, we established a system in which one AAV vector contains the transposon with the gene of interest and the second vector delivers the hyperactive Sleeping Beauty (SB) transposase SB100X. Human cells were infected with the AAV-transposon vector and the transposase was provided in trans either by transient and stable plasmid transfection or by AAV vector transduction. We found that groups which received the hyperactive transposase SB100X showed significantly increased colony forming numbers indicating enhanced integration efficiencies. Furthermore, we found that transgene copy numbers in transduced cells were dose-dependent and that predominantly SB transposase-mediated transposition contributed to stabilization of the transgene. Based on a plasmid rescue strategy and a linear-amplification mediated PCR (LAM-PCR) protocol we analysed the SB100X-mediated integration profile after transposition from the AAV vector. A total of 1840 integration events were identified which revealed a close to random integration profile. In summary, we show for the first time that AAV vectors can serve as template for SB transposase mediated somatic integration. We developed the first prototype of this hybrid-vector system which with further improvements may be explored for treatment of diseases which originate from rapidly dividing cells. PMID:24116154

Zhang, Wenli; Solanki, Manish; Müther, Nadine; Ebel, Melanie; Wang, Jichang; Sun, Chuanbo; Izsvak, Zsuzsanna; Ehrhardt, Anja

2013-01-01

209

We describe the construction of a high-resolution radiation hybrid (RH) map of the domestic cat genome, which includes 2,662 markers, translating to an estimated average intermarker distance of 939 kilobases (Kb). Targeted marker selection utilized the recent feline 1.9x genome assembly, concentrating on regions of low marker density on feline autosomes and the X chromosome, in addition to regions flanking interspecies chromosomal breakpoints. Average gap (breakpoint) size between cat-human ordered conserved segments is less than 900 Kb. The map was used for a fine-scale comparison of conserved syntenic blocks with the human and canine genomes. Corroborative fluorescence in situ hybridization (FISH) data were generated using 129 domestic cat BAC-clones as probes, providing independent confirmation of the long-range correctness of the map. Cross-species hybridization of BAC probes on divergent felids from the genera Profelis (serval) and Panthera (snow leopard) provides further evidence for karyotypic conservation within felids, and demonstrates the utility of such probes for future studies of chromosome evolution within the cat family and in related carnivores. The integrated map constitutes a comprehensive framework for identifying genes controlling feline phenotypes of interest, and to aid in assembly of a higher coverage feline genome sequence. PMID:18951970

Davis, Brian W.; Raudsepp, Terje; Wilkerson, Alison J. Pearks; Agarwala, Richa; Schäffer, Alejandro A.; Houck, Marlys; Ryder, Oliver A.; Chowdhary, Bhanu P.; Murphy, William J.

2008-01-01

210

We describe the construction of a high-resolution radiation hybrid (RH) map of the domestic cat genome, which includes 2662 markers, translating to an estimated average intermarker distance of 939 kilobases (kb). Targeted marker selection utilized the recent feline 1.9x genome assembly, concentrating on regions of low marker density on feline autosomes and the X chromosome, in addition to regions flanking interspecies chromosomal breakpoints. Average gap (breakpoint) size between cat-human ordered conserved segments is less than 900 kb. The map was used for a fine-scale comparison of conserved syntenic blocks with the human and canine genomes. Corroborative fluorescence in situ hybridization (FISH) data were generated using 129 domestic cat BAC clones as probes, providing independent confirmation of the long-range correctness of the map. Cross-species hybridization of BAC probes on divergent felids from the genera Profelis (serval) and Panthera (snow leopard) provides further evidence for karyotypic conservation within felids, and demonstrates the utility of such probes for future studies of chromosome evolution within the cat family and in related carnivores. The integrated map constitutes a comprehensive framework for identifying genes controlling feline phenotypes of interest, and to aid in assembly of a higher coverage feline genome sequence. PMID:18951970

Davis, Brian W; Raudsepp, Terje; Pearks Wilkerson, Alison J; Agarwala, Richa; Schäffer, Alejandro A; Houck, Marlys; Chowdhary, Bhanu P; Murphy, William J

2009-04-01

211

Hollow cylindrical multi-walled hybrid nanotubes go through dynamic growth and subsequent disappearance during the biomimetic fabrication of hexagonal calcite platelets, simulating the in vivo purpose-driven self-assembly of tubular plasma-membrane calcium-ion channels for biomaterials to adapt, respond and repair. PMID:22022703

Liu, Fenglin; Kang, Wenpei; Zhao, Chenhao; Su, Yunlan; Wang, Dujin; Shen, Qiang

2011-12-14

212

Hybrid information privacy system: integration of chaotic neural network and RSA coding

Electronic mail is used worldwide, yet much of it is easily compromised by attackers. In this paper, we propose a free, fast and convenient hybrid privacy system to protect email communication. The privacy system is implemented by combining the RSA public-key algorithm with a specific chaotic neural network encryption process. The receiver can decrypt a received email as long as it can reproduce the specified
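The hybrid structure — a public-key algorithm protecting the seed of a chaotic keystream — can be sketched as follows. This is an illustrative toy, not the paper's scheme: the RSA parameters are textbook-small, and a logistic map stands in for the chaotic neural network generator:

```python
def rsa_keypair():
    # Toy parameters for illustration only; real RSA uses large random primes.
    p, q, e = 61, 53, 17
    n, phi = p * q, (p - 1) * (q - 1)
    d = pow(e, -1, phi)                        # modular inverse of e
    return (n, e), (n, d)

def logistic_keystream(x0, length, r=3.99):
    """Chaotic keystream from the logistic map (assumption: structurally
    any chaotic source plays the same role as the neural network here)."""
    x, out = x0, []
    for _ in range(length):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def encrypt(msg, pub, seed_int):
    n, e = pub
    wrapped = pow(seed_int, e, n)              # RSA-protect the chaos seed
    ks = logistic_keystream(seed_int / n, len(msg))
    return wrapped, bytes(m ^ k for m, k in zip(msg, ks))

def decrypt(wrapped, body, priv):
    n, d = priv
    seed_int = pow(wrapped, d, n)              # recover seed with private key
    ks = logistic_keystream(seed_int / n, len(body))
    return bytes(c ^ k for c, k in zip(body, ks))
```

Only the short seed pays the RSA cost; the bulk of the message is XORed against the fast chaotic keystream, which is the speed argument such hybrid schemes make.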

Ming-Kai Hsu; Jeff Willey; Ting N. Lee; Harold H. Szu

2005-01-01

213

Integrative control strategy of regenerative and hydraulic braking for hybrid electric car

With the involvement of electric braking, the braking system of an EV, HEV or FCV becomes much more complex than a conventional, purely mechanical braking system. The target in this hybrid braking system is to recover as much braking energy as possible while maintaining good braking performance for vehicle safety. For this purpose, the control of this braking system is

Liang Chu; Wanfeng Sun; Liang Yao; Yongsheng Zhang; Yang Ou; Wenruo Wei; Minghui Liu; Jun Li

2009-01-01

214

A simulation software package (ARRPA) has been developed on the Microsoft Visual Basic platform for optimization and control of a novel membrane-integrated arsenic separation plant, in the absence of any such software. The user-friendly, menu-driven software is based on a dynamic linearized mathematical model developed for the hybrid treatment scheme. The model captures the chemical kinetics in the pre-treating chemical reactor and the separation and transport phenomena involved in nanofiltration. The software has been validated through extensive experimental investigations. The agreement between the outputs of the computer simulation program and the experimental findings is excellent and consistent under varying operating conditions, reflecting the high degree of accuracy and reliability of the software. High values of the overall correlation coefficient (R² = 0.989) and Willmott d-index (0.989) are indicators of the capability of the software in analyzing performance of the plant. The software permits pre-analysis and manipulation of input data, helps in optimization, and exhibits the performance of an integrated plant visually on a graphical platform. Performance analysis of the whole system as well as of the individual units is possible using the tool. The software, the first of its kind in its domain and running in the well-known Microsoft Excel environment, is likely to be very useful in the successful design, optimization and operation of an advanced hybrid treatment plant for removal of arsenic from contaminated groundwater. PMID:24288068
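The two fit statistics quoted above are easy to restate in code using their standard definitions; the observed/predicted values below are illustrative, not plant data:

```python
def r_squared(obs, pred):
    """Coefficient of determination R^2."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def willmott_d(obs, pred):
    """Willmott index of agreement d (ranges 0..1, 1 = perfect)."""
    mean = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - mean) + abs(o - mean)) ** 2
              for o, p in zip(obs, pred))
    return 1.0 - num / den

# Illustrative observed vs. simulated values (not the plant's data)
obs = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.1, 3.9]
```

Both indices approach 1 for a good model, which is why the paper reports 0.989 for each as evidence of model fidelity.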

Chakrabortty, S; Sen, M; Pal, P

2014-03-01

215

NASA Astrophysics Data System (ADS)

Carbon nanotubes and metal oxide semiconductors have emerged as important materials for p-type and n-type thin-film transistors, respectively; however, realizing sophisticated macroelectronics operating in complementary mode has been challenging due to the difficulty in making n-type carbon nanotube transistors and p-type metal oxide transistors. Here we report a hybrid integration of p-type carbon nanotube and n-type indium-gallium-zinc-oxide thin-film transistors to achieve large-scale (>1,000 transistors for 501-stage ring oscillators) complementary macroelectronic circuits on both rigid and flexible substrates. This approach of hybrid integration allows us to combine the strength of p-type carbon nanotube and n-type indium-gallium-zinc-oxide thin-film transistors, and offers high device yield and low device variation. Based on this approach, we report the successful demonstration of various logic gates (inverter, NAND and NOR gates), ring oscillators (from 51 stages to 501 stages) and dynamic logic circuits (dynamic inverter, NAND and NOR gates).

Chen, Haitian; Cao, Yu; Zhang, Jialu; Zhou, Chongwu

2014-06-01

216

ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user-friendliness of the software has been enhanced through increased internal error checking and improved code portability.
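An input scheme of the kind described — order-independent keywords, defaults, internal error checking — can be mimicked in a few lines. The keyword names and types below are hypothetical, not actual ITS keywords:

```python
def parse_keywords(deck, schema):
    """Order-independent keyword input with defaults and error checking,
    in the spirit of the input scheme described (hypothetical keywords)."""
    opts = {key: spec["default"] for key, spec in schema.items()}
    for line in deck.strip().splitlines():
        line = line.split("!", 1)[0].strip()   # allow trailing comments
        if not line:
            continue
        key, *val = line.split()
        key = key.upper()
        if key not in schema:                  # internal error checking
            raise ValueError(f"unknown keyword: {key}")
        opts[key] = schema[key]["type"](val[0]) if val else True
    return opts

schema = {
    "ELECTRONS": {"type": int, "default": 1000},
    "CUTOFF": {"type": float, "default": 1.0},   # keV, hypothetical
    "ADJOINT": {"type": bool, "default": False},
}
deck = """
CUTOFF 10.0      ! order does not matter
ELECTRONS 50000
"""
opts = parse_keywords(deck, schema)
```

Keywords absent from the deck fall back to their defaults, which is what lets a rigorous code still accept a very short input file.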

Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

2004-06-01

217

P(VDF-TrFE) ferroelectrics: Integration in hybrid and thin-film memories

Poly(vinylidene fluoride-trifluoroethylene) P(VDF-TrFE) is a promising ferroelectric polymer with potential application for both high density hybrid and printable low-cost memory. This talk will review the structural, polarization, and electrical properties of this polymer ferroelectric, identifying the key challenges and opportunities at the system level from both processing and architectural standpoints. The unique hysteresis and time-response properties permit fabrication of large

Michael O. Thompson; Connie Lew; Johan Carlsson; Per Brahms

2008-01-01

218

A superalloy integral impeller is a vital functional component of an aeroengine, but it is very difficult to manufacture using the traditional five-axis NC machining method because of its complex free surfaces. Lamination deposition technology using high-energy beams can overcome these difficulties and can be used to directly manufacture complex superalloy double-helix integral impellers. However, subsequent re-machining should

Xiong Xinhong; Zhang Haiou; Wang Guilan; Wang Guoxian

2010-01-01

219

Monte Carlo (MC) simulations of beam interaction and transport in matter are increasingly considered as essential tools to support several aspects of radiation therapy. Despite the vast application of MC to photon therapy and scattered proton therapy, clinical experience in scanned ion beam therapy is still scarce. This is especially the case for ions heavier than protons, which pose additional issues like nuclear fragmentation and varying biological effectiveness. In this work, we present the evaluation of a dedicated framework which has been developed at the Heidelberg Ion Beam Therapy Center to provide automated FLUKA MC simulations of clinical patient treatments with scanned proton and carbon ion beams. Investigations on the number of transported primaries and the dimension of the geometry and scoring grids have been performed for a representative class of patient cases in order to provide recommendations on the simulation settings, showing that recommendations derived from the experience in proton therapy cannot be directly translated to the case of carbon ion beams. The MC results with the optimized settings have been compared to the calculations of the analytical treatment planning system (TPS), showing that regardless of the consistency of the two systems (in terms of beam model in water and range calculation in different materials) relevant differences can be found in dosimetric quantities and range, especially in the case of heterogeneous and deep seated treatment sites depending on the ion beam species and energies, homogeneity of the traversed tissue and size of the treated volume. The analysis of typical TPS speed-up approximations highlighted effects which deserve accurate treatment, in contrast to adequate beam model simplifications for scanned ion beam therapy. 
In terms of biological dose calculations, the investigation of the mixed field components in realistic anatomical situations confirmed the findings of previous groups so far reported only in homogenous water targets. This work can thus be useful to other centers commencing clinical experience in scanned ion beam therapy. PMID:25079387

Bauer, J; Sommerer, F; Mairani, A; Unholtz, D; Farook, R; Handrack, J; Frey, K; Marcelos, T; Tessonnier, T; Ecker, S; Ackermann, B; Ellerbrock, M; Debus, J; Parodi, K

2014-08-21

220

NASA Astrophysics Data System (ADS)

Monte Carlo (MC) simulations of beam interaction and transport in matter are increasingly considered as essential tools to support several aspects of radiation therapy. Despite the vast application of MC to photon therapy and scattered proton therapy, clinical experience in scanned ion beam therapy is still scarce. This is especially the case for ions heavier than protons, which pose additional issues like nuclear fragmentation and varying biological effectiveness. In this work, we present the evaluation of a dedicated framework which has been developed at the Heidelberg Ion Beam Therapy Center to provide automated FLUKA MC simulations of clinical patient treatments with scanned proton and carbon ion beams. Investigations on the number of transported primaries and the dimension of the geometry and scoring grids have been performed for a representative class of patient cases in order to provide recommendations on the simulation settings, showing that recommendations derived from the experience in proton therapy cannot be directly translated to the case of carbon ion beams. The MC results with the optimized settings have been compared to the calculations of the analytical treatment planning system (TPS), showing that regardless of the consistency of the two systems (in terms of beam model in water and range calculation in different materials) relevant differences can be found in dosimetric quantities and range, especially in the case of heterogeneous and deep seated treatment sites depending on the ion beam species and energies, homogeneity of the traversed tissue and size of the treated volume. The analysis of typical TPS speed-up approximations highlighted effects which deserve accurate treatment, in contrast to adequate beam model simplifications for scanned ion beam therapy. 
In terms of biological dose calculations, the investigation of the mixed field components in realistic anatomical situations confirmed the findings of previous groups so far reported only in homogenous water targets. This work can thus be useful to other centers commencing clinical experience in scanned ion beam therapy.

Bauer, J.; Sommerer, F.; Mairani, A.; Unholtz, D.; Farook, R.; Handrack, J.; Frey, K.; Marcelos, T.; Tessonnier, T.; Ecker, S.; Ackermann, B.; Ellerbrock, M.; Debus, J.; Parodi, K.

2014-08-01

221

A novel quantum transport simulation method based on the real-time path integral is presented. Applying the WKB approximation, the density matrix of an electron and phonon bath system is reduced to a numerically tractable form. The WKB approximation gives rise to two non-Markovian Langevin equations for the center-of-mass and relative coordinates, and the simulation reduces to solving these classical equations. Another significant assumption

K. Katayama; S. Kamohara; S. Itoh

1990-01-01

222

Path-integral Monte Carlo study of a lithium impurity in para-hydrogen: Clusters and the bulk liquid

Simulation studies using the path-integral formulation of quantum statistical mechanics are reported for single atomic lithium impurities in bulk liquid para-hydrogen and in clusters, Li(p-H2)n, with n=12, 13, 32, 33, and 34. Over the range of temperatures studied in the clusters (T=2.5–6.0 K), the lithium impurity is found to reside outside or at the surface of the clusters. Nevertheless, perturbations

Daphna Scharf; Glenn J. Martyna; Michael L. Klein

1993-01-01

223

Electronic integration of fuel cell and battery system in novel hybrid vehicle

NASA Astrophysics Data System (ADS)

The objective of this work was to integrate a lithium ion battery pack, together with its management system, into a hydrogen fuel cell drive train contained in a lightweight city car. Electronic units were designed to link the drive train components using conventional circuitry. These were built, tested and shown to perform according to the design. These circuits allowed start-up of battery management system, motor controller, fuel cell warm-up and torque monitoring. After assembling the fuel cell and battery in the vehicle, full system tests were performed. Analysis of results from vehicle demonstrations showed operation was satisfactory. The conclusion was that the electronic integration was successful, but the design needed optimisation and fine tuning. Eight vehicles were then fitted with the electronically integrated fuel cell-battery power pack. Trials were then started to test the integration more fully, with a duration of 12 months from 2011 to 2012 in the CABLED project.

Fisher, Peter; Jostins, John; Hilmansen, Stuart; Kendall, Kevin

2012-12-01

224

HORIZONTAL HYBRID SOLAR LIGHT PIPE: AN INTEGRATED SYSTEM OF DAYLIGHT AND ELECTRIC LIGHT

This project will test the feasibility of an advanced energy efficient perimeter lighting system that integrates daylighting, electric lighting, and lighting controls to reduce electricity consumption. The system is designed to provide adequate illuminance levels in deep-floor...

225

NASA Astrophysics Data System (ADS)

A novel hybrid chemical sensor array composed of individual In2O3 nanowires, SnO2 nanowires, ZnO nanowires, and single-walled carbon nanotubes with integrated micromachined hotplates for sensitive gas discrimination was demonstrated. Key features of our approach include the integration of nanowire and carbon nanotube sensors, precise control of the sensor temperature using the micromachined hotplates, and the use of principal component analysis for pattern recognition. This sensor array was exposed to important industrial gases such as hydrogen, ethanol and nitrogen dioxide at different concentrations and sensing temperatures, and an excellent selectivity was obtained to build up an interesting 'smell-print' library of these gases. Principal component analysis of the sensing results showed great discrimination of those three tested chemicals, and in-depth analysis revealed clear improvement of selectivity by the integration of carbon nanotube sensors. This nanoelectronic nose approach has great potential for detecting and discriminating between a wide variety of gases, including explosive ones and nerve agents.
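The pattern-recognition step — principal component analysis of the multi-sensor responses — can be sketched with a centred SVD. The response matrix below is hypothetical, standing in for measured "smell-print" data:

```python
import numpy as np

def principal_components(responses, k=2):
    """Project sensor-array response vectors onto their first k principal
    components (the discrimination step described above)."""
    X = np.asarray(responses, dtype=float)
    Xc = X - X.mean(axis=0)                  # centre each sensor channel
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                     # scores in the PC1/PC2 plane

# Hypothetical response matrix: rows = gas exposures, columns = sensors
# (In2O3, SnO2, ZnO nanowires, CNT); values are relative resistance changes.
toy = [[0.90, 0.80, 0.70, 0.10],   # "ethanol-like" pattern
       [0.20, 0.30, 0.20, 0.90],   # "NO2-like" pattern
       [0.88, 0.82, 0.69, 0.12],   # replicate of the first gas
       [0.21, 0.28, 0.22, 0.88]]   # replicate of the second gas
scores = principal_components(toy)
```

Replicate exposures of the same gas land close together in the PC plane while different gases separate, which is the basis of the reported discrimination.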

Chen, Po-Chiang; Ishikawa, Fumiaki N.; Chang, Hsiao-Kang; Ryu, Koungmin; Zhou, Chongwu

2009-03-01

226

Integrated switching system interface for the voice-grade services of a hybrid fiber-coax system

NASA Astrophysics Data System (ADS)

Hybrid fiber-coax (HFC) systems have been developed to efficiently deliver television and other broadband services along with voice-grade services to the subscriber's premises. The efficiency of an HFC system, from both a call processing and an economic point of view, is enhanced by providing an integrated interface into the local digital switch (LDS) for the voice- grade services. Two interfaces in the public domain that can be used to provide the integrated voice-grade interface to the LDS are known as the TR-08 Interface, defined in Bellcore's Technical Reference document TR-NWT-000008, and the TR-303 Interface, defined in Bellcore's Technical Reference document TR-NWT-000303. This paper presents a brief overview of the TR-08 and TR-303 interfaces to LDSs. The advantages of using the TR-303 Interface for an HFC system's voice-grade interface to the LDS are discussed for the operations, administration, maintenance, and provisioning (OAM&P) functions. Also discussed are the economic considerations of providing plain old telephone service (POTS) and integrated services digital network circuits in HFC systems via TR-08 and TR-303 interfaces to the LDS.

Arvidson, Wayne; Jiang, Song

1995-11-01

227

We propose new schemes for integrating the stochastic differential equations of dissipative particle dynamics (DPD) in simulations of dilute polymer solutions. The hybrid DPD models consist of hard potentials that describe the microscopic dynamics of polymers and soft potentials that describe the mesoscopic dynamics of the solvent. In particular, we develop extensions to the velocity-Verlet and Lowe's approaches - two representative DPD time-integrators - following a subcycling procedure whereby the solvent is advanced with a timestep much larger than the one employed in the polymer time-integration. The introduction of relaxation parameters allows optimization studies for accuracy while maintaining the low computational complexity of standard DPD algorithms. We demonstrate through equilibrium simulations that a 10-fold gain in efficiency can be obtained with the time-staggered algorithms without loss of accuracy compared to the non-staggered schemes. We then apply the new approach to investigate the scaling response of polymers in equilibrium as well as the dynamics of {lambda}-phage DNA molecules subjected to shear.
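The subcycling idea — advancing the solvent with a timestep much larger than the polymer's — can be sketched as a time-staggered velocity-Verlet loop. This toy uses 1-D harmonic forces rather than actual DPD pair forces, and the stiffnesses and timesteps are illustrative:

```python
def staggered_verlet(steps, dt_solvent, n_sub, k_poly=100.0, k_solv=1.0):
    """Time-staggered velocity-Verlet: the stiff 'polymer' degree of
    freedom is subcycled n_sub times per soft 'solvent' step."""
    dt_p = dt_solvent / n_sub
    xp, vp = 1.0, 0.0            # polymer coordinate (stiff spring)
    xs, vs = 1.0, 0.0            # solvent coordinate (soft spring)
    for _ in range(steps):
        # one big solvent step...
        vs += 0.5 * dt_solvent * (-k_solv * xs)
        xs += dt_solvent * vs
        vs += 0.5 * dt_solvent * (-k_solv * xs)
        # ...wrapping n_sub small polymer steps
        for _ in range(n_sub):
            vp += 0.5 * dt_p * (-k_poly * xp)
            xp += dt_p * vp
            vp += 0.5 * dt_p * (-k_poly * xp)
    return xp, vp, xs, vs
```

Each component is integrated stably at its own natural timescale; the roughly 10:1 timestep ratio mirrors the 10-fold efficiency gain reported for the staggered schemes.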

Symeonidis, Vasileios [Division of Applied Mathematics, Brown University, 182 George Street, Box F, Providence, RI 02912 (United States); Karniadakis, George Em [Division of Applied Mathematics, Brown University, 182 George Street, Box F, Providence, RI 02912 (United States)]. E-mail: gk@dam.brown.edu

2006-10-10

228

NASA Astrophysics Data System (ADS)

Synthesis of functional hybrid nanoscale objects has been a core focus of the rapidly progressing field of nanomaterials science. In particular, there has been significant interest in the integration of evolutionally optimized biological systems such as proteins, DNA, virus particles and cells with functional inorganic building blocks to construct mesoscopic architectures and nanostructured materials. However, in many cases the fragile nature of the biomolecules seriously constrains their potential applications. As a consequence, there is an on-going quest for the development of novel strategies to modulate the thermal and chemical stabilities, and performance of biomolecules under adverse conditions. This feature article highlights new methods of ``inorganic molecular wrapping'' of single or multiple protein molecules, individual double-stranded DNA helices, lipid bilayer vesicles and self-assembled organic dye superstructures using inorganic building blocks to produce bio-inorganic nanoconstructs with core-shell type structures. We show that spatial isolation of the functional biological nanostructures as ``armour-plated'' enzyme molecules or polynucleotide strands not only maintains their intact structure and biochemical properties, but also enables the fabrication of novel hybrid nanomaterials for potential applications in diverse areas of bionanotechnology.

Patil, Avinash J.; Li, Mei; Mann, Stephen

2013-07-01

229

A hybrid approach for integrated healthcare cooperative purchasing and supply chain configuration.

This paper presents an innovative and flexible approach for recommending the number, size and composition of purchasing groups, for a set of hospitals willing to cooperate, while minimising their shared supply chain costs. This approach makes the financial impact of the various cooperation alternatives transparent to the group and the individual participants, opening the way to a negotiation process concerning the allocation of the cooperation costs and gains. The approach was developed around a hybrid Variable Neighbourhood Search (VNS)/Tabu Search metaheuristic, resulting in a flexible tool that can be applied to purchasing groups with different characteristics, namely different operative and market circumstances, and to supply chains with different topologies and atypical cost characteristics. Preliminary computational results show the potential of the approach in solving a broad range of problems. PMID:24370921
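The VNS skeleton underlying such a metaheuristic is compact: shake within neighbourhood k, run a local search, then move or widen k. A sketch with an illustrative load-balancing cost standing in for the paper's supply-chain cost model; the tabu component is omitted for brevity:

```python
import random

def vns_partition(demands, n_groups, iters=200, seed=0):
    """VNS sketch: assign items (e.g. hospitals) to groups so that total
    demand is balanced. Toy cost, not the paper's supply-chain costs."""
    rng = random.Random(seed)
    assign = [i % n_groups for i in range(len(demands))]

    def cost(a):
        loads = [0.0] * n_groups
        for i, g in enumerate(a):
            loads[g] += demands[i]
        mean = sum(loads) / n_groups
        return sum((l - mean) ** 2 for l in loads)

    def local_search(a):
        improved = True
        while improved:
            improved = False
            for i in range(len(a)):
                for g in range(n_groups):
                    if g != a[i]:
                        b = a[:]
                        b[i] = g
                        if cost(b) < cost(a):
                            a, improved = b, True
        return a

    best, k = local_search(assign), 1
    for _ in range(iters):
        shaken = best[:]
        for i in rng.sample(range(len(best)), k):   # shake: k random moves
            shaken[i] = rng.randrange(n_groups)
        cand = local_search(shaken)
        if cost(cand) < cost(best):
            best, k = cand, 1                       # move and reset k
        else:
            k = min(k + 1, len(best))               # widen the neighbourhood
    return best, cost(best)
```

The shake/search/widen loop is what lets the method escape local optima of the grouping cost without enumerating every possible group composition.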

Rego, Nazaré; Claro, João; Pinho de Sousa, Jorge

2014-12-01

230

An electronic performance support system (EPSS) introduces challenges of contextualized and personalized information delivery. Recommender systems aim at delivering and suggesting relevant information according to users' preferences; thus EPSSs could take advantage of recommendation algorithms, which have the effect of guiding users through a large space of possible options. The JUMP project aims at integrating an EPSS with a

Leo Iaquinta; Anna Lisa Gentile; Pasquale Lops; Marco de Gemmis; Giovanni Semeraro

2007-01-01

231

Hybrid DC and AC-Linked Microgrids: Towards Integration of Distributed Energy Resources

This paper presents a microgrid paradigm with both DC and AC links, which may provide an effective way to integrate a heterogeneous set of small-size distributed energy resources into the existing electric power infrastructure. The collection of aggregated energy resource units at each level represents those distributed resources to the upper level as a single self-regulated entity (as a DC

Zhenhua Jiang; Xunwei Yu

2008-01-01

232

Integrating Wind Farm to the Grid Using Hybrid Multiterminal HVDC Technology

Since wind generation is one of the most mature renewable energy technologies, it will have the greatest share of the future renewable energy portfolio. Due to the special characteristics of wind generation, extensive research is required to explore the best choice for wind power integration. In light of practical project experience, this paper explores the feasibility of using

Xia Chen; Haishun Sun; Jinyu Wen; Wei-Jen Lee; Xufeng Yuan; Naihu Li; Liangzhong Yao

2011-01-01

233

Expert systems have been successfully introduced into network management for some years, but they have shown some limits in coping with the evolution of network configurations and with the poor expertise that is characteristic of this domain. We propose, in this paper, an intelligent and integrated fault management system, by studying different artificial intelligence techniques such as model-based expert systems, neural networks,

Shanliang Jiang; D. Siboni; A. A. Rhissa; G. Beuchot

1995-01-01

234

We demonstrate 240-Gb/s 64-QAM signal modulation and detection. 20-Gbaud 64-QAM signals are generated employing hybrid integration of silica PLCs and LiNbO3 phase modulators, and demodulated with a digital coherent receiver without using any pilot signals.

Akihide Sano; Takayuki Kobayashi; Koichi Ishihara; Hiroji Masuda; Shuto Yamamoto; Kunihiko Mori; Etsushi Yamazaki; Eiji Yoshida; Yutaka Miyamoto; Takashi Yamada; Hiroshi Yamazaki

2009-01-01

235

A combination of cytometric (chromosome sorting), molecular (dot blot hybridization using radioactive and/or biotinylated DNA probes) and cytogenetic (G-banding) evaluation is described which allows the rapid identification of single copy and repetitive viral integrates and their assignment to chromosome groups or even individual chromosomes.

K.-J. Hutter; H. Klefenz; Kl. Goerttler

1990-01-01

236

Monte Carlo sampling from the quantum state space. II

High-quality random samples of quantum states are needed for a variety of tasks in quantum information and quantum computation. Searching the high-dimensional quantum state space for a global maximum of an objective function with many local maxima or evaluating an integral over a region in the quantum state space are but two exemplary applications of many. These tasks can only be performed reliably and efficiently with Monte Carlo methods, which involve good samplings of the parameter space in accordance with the relevant target distribution. We show how the Markov-chain Monte Carlo method known as Hamiltonian Monte Carlo, or Hybrid Monte Carlo, can be adapted to this context. It is applicable when an efficient parameterization of the state space is available. The resulting random walk is entirely inside the physical parameter space, and the Hamiltonian dynamics enable us to take big steps, thereby avoiding strong correlations between successive sample points while enjoying a high acceptance rate. We use examples of single and double qubit measurements for illustration.
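The HMC recipe the authors adapt — fresh Gaussian momentum, a leapfrog trajectory, and a Metropolis accept/reject on the Hamiltonian — is compact enough to sketch for a 1-D toy target (a standard normal; the step size and trajectory length are illustrative, and this is not the state-space parameterization of the paper):

```python
import math
import random

def hmc(logp, grad, x0, n_samples, eps=0.2, n_leap=10, seed=42):
    """Hamiltonian (Hybrid) Monte Carlo for a 1-D target density."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)                  # resample momentum
        x_new, p_new = x, p
        p_new += 0.5 * eps * grad(x_new)         # leapfrog: half kick
        for step in range(n_leap):
            x_new += eps * p_new                 # drift
            if step != n_leap - 1:
                p_new += eps * grad(x_new)       # full kick
        p_new += 0.5 * eps * grad(x_new)         # final half kick
        h_old = -logp(x) + 0.5 * p * p           # Hamiltonian before/after
        h_new = -logp(x_new) + 0.5 * p_new * p_new
        if math.log(rng.random()) < h_old - h_new:
            x = x_new                            # accept the big step
        samples.append(x)
    return samples

# Toy target: standard normal, logp(x) = -x^2/2 up to a constant.
draws = hmc(lambda x: -0.5 * x * x, lambda x: -x, 0.0, 5000)
```

Because the leapfrog integrator nearly conserves the Hamiltonian, the acceptance rate stays high even for long trajectories, which is exactly the "big steps, weak correlations" property the abstract exploits.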

Yi-Lin Seah; Jiangwei Shang; Hui Khoon Ng; David John Nott; Berthold-Georg Englert

2014-07-29

237

Hybrid routing technique for a fault-tolerant, integrated information network

NASA Technical Reports Server (NTRS)

The evolutionary growth of the space station and the diverse activities onboard are expected to require a hierarchy of integrated, local area networks capable of supporting data, voice, and video communications. In addition, fault-tolerant network operation is necessary to protect communications between critical systems attached to the net and to relieve the valuable human resources onboard the space station of time-critical data system repair tasks. A key issue for the design of the fault-tolerant, integrated network is the development of a robust routing algorithm which dynamically selects the optimum communication paths through the net. A routing technique is described that adapts to topological changes in the network to support fault-tolerant operation and system evolvability.
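The path-selection core of such an adaptive algorithm can be modeled as shortest-path routing that recomputes around failed links. A sketch with a hypothetical four-node topology (not the space station network described in the source):

```python
import heapq

def shortest_path(graph, src, dst, failed=frozenset()):
    """Dijkstra over an adjacency dict, skipping failed links -- a minimal
    model of adapting routes to topological changes."""
    dist, prev, seen = {src: 0.0}, {}, set()
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in graph[u]:
            if (u, v) in failed or (v, u) in failed:
                continue                     # route around the failure
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None                          # network partitioned
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

# Hypothetical topology: node pairs with link costs.
net = {"A": [("B", 1), ("C", 4)],
       "B": [("A", 1), ("D", 1)],
       "C": [("A", 4), ("D", 1)],
       "D": [("B", 1), ("C", 1)]}
```

Calling the same function with a `failed` link set yields the fallback route, illustrating how fault-tolerant operation can be reduced to re-running path selection on the surviving topology.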

Meredith, B. D.

1986-01-01

238

Multi-Energy, Fast Counting Hybrid CZT Pixel Detector with Dedicated Readout Integrated Circuit

A new mixed signal front-end readout electronics integrated circuit (IC) called HILDA (Hyperspectral Imaging with Large Detector Arrays) has been developed for two-dimensional CdZnTe (CZT) pixel detector arrays. The CZT array is directly bonded on top of the IC. The CZT array and the HILDA-IC have matching geometric pixel/channel structure and dimensions, a 16×16 array of 0.5 mm × 0.5

Martin Clajus; Victoria B. Cajipe; Satoshi Hayakawa; T. O. Turner; Paul D. Willson

2006-01-01

239

Complex problems in life science research give rise to multidisciplinary collaboration, and hence, to the need for heterogeneous database integration. The tumor suppressor p53 is mutated in close to 50% of human cancers, and a small drug-like molecule with the ability to restore native function to cancerous p53 mutants is a long-held medical goal of cancer treatment. The Cancer Research

Vadim Y. Bichutskiy; Richard Colman; Rainer K. Brachmann; Richard H. Lathrop

240

Hybrid integration approach of VCSELs for miniaturized optical deflection of microparticles

NASA Astrophysics Data System (ADS)

In recent years, optical manipulation has gained increasing interest, especially in combination with microfluidics. This combination offers promising tools for a fast and cost-effective sample analysis and manipulation. The contamination-free handling of micrometer-sized particles without any mechanical contact is an attractive tool for biology and medicine. VCSELs (vertical-cavity surface-emitting lasers) are an excellent choice for the trapping lasers, offering the opportunity of parallel particle manipulation by using two-dimensional VCSEL arrays, and of miniaturization by means of integration. In this paper, we present two novel concepts for the realization of the so-called integrated optical trap, resembling a strongly miniaturized version of the typically bulky setup of an optical trap. For this purpose, AlGaAs-GaAs-based VCSEL arrays with a very small device pitch were fabricated. We show the realization of integration-ready particle manipulation devices, both with top-emitting and with bottom-emitting densely packed VCSEL arrays. The smallest pitch of 18 µm is achieved with bottom-emitting VCSEL arrays, having mesa diameters of only 16 µm.

Bergmann, Anna; Khan, Niazul Islam; Martos Calahorro, Jose Antonio; Wahl, Dietmar; Michalzik, Rainer

2012-06-01

241

Hybrid integration of an eight-channel WDM transmitter and receiver module at 980 nm

NASA Astrophysics Data System (ADS)

The inherent information bandwidth of optical fibers between the wavelengths 1.3 and 1.6 micrometers is in the terahertz range. One obvious way to exploit this bandwidth is to use wavelength-division multiplexing (WDM). The Canadian Solid State Optoelectronics Consortium (SSOC), an association of industry, university, and federal government research laboratories, has been developing the component technologies required to demonstrate the operation of an eight channel WDM system. This paper discusses the integration of the transmitter (Tx) and the receiver (Rx) modules using a thin film process on alumina substrates. The Tx module contains a fully integrated eight channel DBR laser array with two quad-laser driver circuits. The signal from the lasers is combined into a single waveguide and is then carried off-chip via a polarization maintaining optical fiber. The Rx module is made up of an integrated receiver circuit, and a series of amplifiers providing the gain required for signal and clock recovery. The receiver circuit consists of an echelle grating which disperses the eight distinct wavelengths into a bank of InGaAs metal-semiconductor-metal (MSM) detectors. Some of the performance parameters of the Tx and Rx modules are presented.

Berolo, Ezio; Coyne, W.; Hua, Heng; James, R.; Kuley, R. M.; Lisicka-Skrzek, Ewa; Millar, G.; Vineberg, Karen A.; Fallahi, Mahmoud; Barber, Richard A.; Chatenoud, F.; Wang, Weijian; Koteles, Emil S.

1995-03-01

242

Integrated thermal and energy management of plug-in hybrid electric vehicles

NASA Astrophysics Data System (ADS)

In plug-in hybrid electric vehicles (PHEVs), the engine temperature declines due to reduced engine load and extended engine off period. It is proven that the engine efficiency and emissions depend on the engine temperature. Also, temperature influences the vehicle air-conditioner and the cabin heater loads. Particularly, while the engine is cold, the power demand of the cabin heater needs to be provided by the batteries instead of the waste heat of engine coolant. The existing energy management strategies (EMS) of PHEVs focus on the improvement of fuel efficiency based on hot engine characteristics neglecting the effect of temperature on the engine performance and the vehicle power demand. This paper presents a new EMS incorporating an engine thermal management method which derives the global optimal battery charge depletion trajectories. A dynamic programming-based algorithm is developed to enforce the charge depletion boundaries, while optimizing a fuel consumption cost function by controlling the engine power. The optimal control problem formulates the cost function based on two state variables: battery charge and engine internal temperature. Simulation results demonstrate that temperature and the cabin heater/air-conditioner power demand can significantly influence the optimal solution for the EMS, and accordingly fuel efficiency and emissions of PHEVs.
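The backward-induction structure of such a dynamic program can be sketched on a toy grid with the paper's two state variables, battery charge and engine temperature. All numbers below are invented for illustration; the actual engine, battery, and thermal models are far more detailed:

```python
import itertools

# states = (battery SOC bin, engine temperature bin), control = engine power
SOC, TEMP, PWR = range(5), range(3), (0, 1, 2)
T_STEPS, DEMAND = 6, 2                      # horizon and per-step power demand

def fuel_cost(p_eng, temp):
    # a cold engine (temp = 0) is assumed to burn more fuel per unit power
    return p_eng * (1.5 - 0.25 * temp) if p_eng else 0.0

def step(soc, temp, p_eng):
    p_batt = DEMAND - p_eng                 # battery covers the remainder
    soc2 = soc - p_batt                     # discharging lowers SOC
    temp2 = max(min(temp + (1 if p_eng else -1), 2), 0)  # engine use warms it
    return (soc2, temp2) if 0 <= soc2 <= 4 else None

value = {(s, t): 0.0 for s in SOC for t in TEMP}   # terminal cost-to-go
for _ in range(T_STEPS):                            # backward induction
    prev = {}
    for s, t in itertools.product(SOC, TEMP):
        options = [fuel_cost(p, t) + value[nxt]
                   for p in PWR if (nxt := step(s, t, p)) is not None]
        prev[(s, t)] = min(options) if options else float("inf")
    value = prev
```

`value[(soc, temp)]` is then the minimum fuel needed to meet the demand over the horizon; a full battery, and the option to pre-warm the engine, lower the cost relative to an empty, cold start.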

Shams-Zahraei, Mojtaba; Kouzani, Abbas Z.; Kutter, Steffen; Bäker, Bernard

2012-10-01

243

A two-stage waste air treatment system, consisting of hybrid bioreactors (modified bioscrubbers) and a biofilter, was used to treat waste air containing chlorinated ethenes - trichloroethylene (TCE) and tetrachloroethylene (PCE). The bioreactor was operated with loadings in the range 0.46-5.50 g m^-3 h^-1 for TCE and 2.16-9.02 g m^-3 h^-1 for PCE. The biofilter loadings were in the range 0.1-0.97 g m^-3 h^-1 for TCE and 0.2-2.12 g m^-3 h^-1 for PCE. Under low pollutant loadings, the efficiency of TCE elimination was 23-25% in the bioreactor and 54-70% in the biofilter. The efficiency of PCE elimination was 44-60% in the bioreactor and 50-75% in the biofilter. The best results for the bioreactor were observed one week after the pollutant loading was increased. However, the process did not stabilize. In the next seven days contaminant removal efficiency, enzymatic activity and biomass content were all diminished. PMID:24316808

Tabernacka, Agnieszka; Zborowska, Ewa; Lebkowska, Maria; Borawski, Maciej

2014-01-15

244

We show how the integrators used for the molecular dynamics step of the Hybrid Monte Carlo algorithm can be further improved. These integrators not only approximately conserve some Hamiltonian H but conserve exactly a nearby shadow Hamiltonian H-tilde. This property allows for a new tuning method of the molecular dynamics integrator and also allows for a new class of integrators (force-gradient integrators) which is expected to reduce significantly the computational cost of future large-scale gauge field ensemble generation.
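For the simple harmonic oscillator H = (p^2 + x^2)/2, the leapfrog (velocity-Verlet) integrator conserves the quadratic shadow Hamiltonian H-tilde = (p^2 + (1 - eps^2/4) x^2)/2 exactly. This textbook special case (not code from the paper) can be checked numerically:

```python
def leapfrog(x, p, eps, n):
    """Velocity-Verlet (kick-drift-kick) steps for H = p^2/2 + x^2/2."""
    for _ in range(n):
        p -= 0.5 * eps * x
        x += eps * p
        p -= 0.5 * eps * x
    return x, p

eps = 0.2
H = lambda x, p: 0.5 * (x * x + p * p)                      # true Hamiltonian
Ht = lambda x, p: 0.5 * ((1 - eps**2 / 4) * x * x + p * p)  # shadow Hamiltonian

x, p = 1.0, 0.0
H0, S0 = H(x, p), Ht(x, p)
drift_H, drift_S = [], []
for _ in range(1000):
    x, p = leapfrog(x, p, eps, 1)
    drift_H.append(abs(H(x, p) - H0))
    drift_S.append(abs(Ht(x, p) - S0))
# H oscillates at O(eps^2) while the shadow Hamiltonian is flat to round-off
```

Monitoring the exactly conserved shadow rather than H itself is what enables the tuning method described in the abstract.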

Clark, M. A. [Harvard-Smithsonian Center for Astrophysics, Cambridge, Massachusetts 02138 (United States); Joo, Balint [Jefferson Lab, 12000 Jefferson Avenue, Newport News, Virginia 23606 (United States); Kennedy, A. D. [Tait Institute and SUPA, School of Physics and Astronomy, University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); Silva, P. J. [Centro de Fisica Computacional, Universidade de Coimbra (Portugal)

2011-10-01

245

Nanostructured titania films are of growing interest due to their application in future photovoltaic technologies. Therefore, a lot of effort has been put into the controlled fabrication and tailoring of titania nanostructures. The controlled sol-gel synthesis of titania, in particular in combination with block copolymer templates, is very promising because of its high control on the nanostructure, easy application and cheap processing possibilities. This tutorial review gives a short overview of the structural control of titania films gained by using templated sol-gel chemistry and shows how this approach is extended by the addition of further functionality to the films. Different expansions of the sol-gel templating are possible by the fabrication of gradient samples, by the addition of a homopolymer, by the combination with micro-fluidics and also by the application of novel precursors for low-temperature processing. Moreover, hierarchically structured titania films can be fabricated via the subsequent application of several sol-gel steps or via the inclusion of colloidal templates in a one-step process. Integrated function in the block copolymer used in the sol-gel synthesis allows for the fabrication of an integrated blocking layer or an integrated hole-conductor. Both approaches grant a one-step fabrication of two components of a working solar cell, which make them very promising towards a cheap solar cell production route. Looking to the complete solar cell, the top contact is also of great importance as it influences the function of the whole solar cell. Thus, the mechanisms acting in the top contact formation are also reviewed. For all these aspects, characterization techniques that allow for a structural investigation of nanostructures inside the active layers are important. Therefore, the characterization techniques that are used in real space as well as in reciprocal space are explained shortly as well. PMID:22415549

Rawolle, Monika; Niedermeier, Martin A; Kaune, Gunar; Perlich, Jan; Lellig, Philipp; Memesa, Mine; Cheng, Ya-Jun; Gutmann, Jochen S; Müller-Buschbaum, Peter

2012-08-01

246

Integrating matrix solution of the hybrid state vector equations for beam vibration

NASA Technical Reports Server (NTRS)

A simple, versatile, and efficient computational technique has been developed for dynamic analysis of linear elastic beam and rod type of structures. Moreover, the method provides a rather general solution approach for two-point boundary value problems that are described by a single independent spatial variable. For structural problems, the method is implemented by a mixed state vector formulation of the differential equations, combined with an integrating matrix solution procedure. Highly accurate solutions are easily achieved with this approach. Example solutions are given for beam vibration problems including discontinuous stiffness and mass parameters, elastic restraint boundary conditions, concentrated inertia loading, and rigid body modes.

Lehman, L. L.

1982-01-01

247

High quality proton beams from hybrid integrated laser-driven ion acceleration systems

NASA Astrophysics Data System (ADS)

We consider a hybrid acceleration scheme for protons where the laser-generated beam is selected in energy and angle and injected into a compact linac, which raises the energy from 30 to 60 MeV. The laser acceleration regime is TNSA and the energy spectrum is determined by the cutoff energy and proton temperature. The dependence of the spectrum on the target properties and the incidence angle is investigated with 2D PIC simulations. We base our work on widely available technologies and on a short-pulse laser, having in mind a facility whose cost is approximately 15 M€. Using a recent experiment as the reference, we choose the laser pulse and target so that the energy spectrum obtained from the 3D PIC simulation is close to the one observed, whose cutoff energy was estimated to be over 50 MeV. Laser-accelerated protons in the TNSA regime have a wide energy spectrum and broad divergence. In this paper we compare three transport lines, designed to perform energy selection and beam collimation. They are based on a solenoid, a quadruplet of permanent magnetic quadrupoles and a chicane. To increase the maximum available energy, which is actually seen as an upper limit due to laser properties and available targets, we propose to inject protons into a small linac for post-acceleration. The number of selected and injected protons is the highest with the solenoid and lower by one and two orders of magnitude with the quadrupoles and the chicane respectively. Even though only the solenoid reaches a final intensity at the threshold required for therapy with the highest beam quality, the other systems will very likely be used in the first experiments. Realistic start-to-end simulations, such as the ones reported here, are relevant for the design of such experiments.

Sinigardi, Stefano; Turchetti, Giorgio; Rossi, Francesco; Londrillo, Pasquale; Giove, Dario; De Martinis, Carlo; Bolton, Paul R.

2014-03-01

248

Hybrid information privacy system: integration of chaotic neural network and RSA coding

NASA Astrophysics Data System (ADS)

Electronic mail is used worldwide; much of it is easily intercepted by attackers. In this paper, we propose a free, fast and convenient hybrid privacy system to protect email communication. The privacy system is implemented by combining the private-security RSA algorithm with a specific chaos neural network encryption process. The receiver can decrypt a received email as long as it can reproduce the specified chaos neural network series, the so-called spatial-temporal keys. The chaotic typing and initial seed value of the chaos neural network series, encrypted by the RSA algorithm, can reproduce the spatial-temporal keys. The encrypted chaotic typing and initial seed value are hidden in a watermark mixed nonlinearly with the message media, wrapped with convolution error correction codes for wireless 3rd generation cellular phones. The message media can be an arbitrary image. Pattern noise has to be considered during transmission, as it could affect/change the spatial-temporal keys. Since any change/modification of the chaotic typing or initial seed value of the chaos neural network series is not acceptable, the RSA codec system must be robust and fault-tolerant over the wireless channel. The robust and fault-tolerant properties of chaos neural networks (CNN) were proved by a field theory of Associative Memory by Szu in 1997. The 1-D chaos generating nodes from the logistic map having arbitrarily negative slope a = p/q generating the N-shaped sigmoid were first given by Szu in 1992. In this paper, we simulate the robust and fault-tolerance properties of CNN under additive noise and pattern noise. We also implement a private version of RSA coding and the chaos encryption process on messages.
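A minimal illustration of the chaos-keystream idea, using the standard logistic map rather than Szu's negative-slope variant, and omitting the RSA wrapping and watermark embedding described above (a toy, NOT a vetted cipher):

```python
def chaos_keystream(seed, r=3.99, n=16):
    """Keystream bytes from logistic-map iterates (toy illustration only).
    In the scheme above, the seed itself would be RSA-encrypted."""
    x = seed
    for _ in range(100):                # discard the transient
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)  # quantize an iterate to a byte
    return bytes(out)

def xor_crypt(data, seed):
    """XOR with the chaotic keystream; the same seed decrypts."""
    ks = chaos_keystream(seed, n=len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

ciphertext = xor_crypt(b"hello, world 123", 0.4213)
```

Sensitive dependence on the seed means a receiver with even a slightly wrong seed recovers only noise, which is why the seed must survive the channel unchanged.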

Hsu, Ming-Kai; Willey, Jeff; Lee, Ting N.; Szu, Harold H.

2005-03-01

249

Silica nano-ridges connections based on a fluidic approach for hybrid integrated photonics

NASA Astrophysics Data System (ADS)

We report a practical and novel concept based on reproducible fluidic mechanisms coupled with silica nano-particles for the development of nano-optical-connections directly on organic integrated photonic chips. Silica nano-rib waveguides have been shaped with various widths ranging between 50 nm and 300 nm and lengths of about a hundred µm. An effective nano-photonic coupling mechanism has been demonstrated and a sub-wavelength propagation regime obtained between two organic rib tapers and waveguides with a perpendicular and a parallel configuration respectively. The specific silica nano-rib waveguide structures exhibit propagation losses of around 37-68 dB/mm at visible and infra-red (IR) wavelengths. Such flexible devices offer versatile fabrication control by changing the nano-particle and surfactant concentrations, respectively. Thus, they present great potential regarding future applications for shaping nano-connections and high-density network integrations between original optical segmented circuits such as plots, lines or any pre-formed photonics structures.

Bêche, B.; Jimenez, A.; Courbin, L.; Camberlein, L.; Doré, F.; Duval, D.; Artzner, F.; Gaviot, E.

2010-05-01

250

We demonstrate free-space space-division-multiplexing (SDM) with 15 orbital angular momentum (OAM) states using a three-dimensional (3D) photonic integrated circuit (PIC). The hybrid device consists of a silica planar lightwave circuit (PLC) coupled to a 3D waveguide circuit to multiplex/demultiplex OAM states. The low excess loss hybrid device is used in individual and two simultaneous OAM states multiplexing and demultiplexing link experiments with a 20 Gb/s, 1.67 b/s/Hz quadrature phase shift keyed (QPSK) signal, which shows error-free performance for 379,960 tested bits for all OAM states. PMID:24514976

Guan, Binbin; Scott, Ryan P; Qin, Chuan; Fontaine, Nicolas K; Su, Tiehui; Ferrari, Carlo; Cappuzzo, Mark; Klemens, Fred; Keller, Bob; Earnshaw, Mark; Yoo, S J B

2014-01-13

251

This report summarizes the results of a field verification pilot site investigation that involved the installation of a hybrid integrated active desiccant/vapor-compression rooftop heating, ventilation, and air-conditioning (HVAC) unit at an elementary school in the Atlanta Georgia area. For years, the school had experienced serious humidity and indoor air quality (IAQ) problems that had resulted in occupant complaints and microbial (mold) remediation. The outdoor air louvers of the original HVAC units had been closed in an attempt to improve humidity control within the space. The existing vapor compression variable air volume system was replaced by the integrated active desiccant rooftop (IADR) system that was described in detail in an Oak Ridge National Laboratory (ORNL) report published in 2004 (Fischer and Sand 2004). The IADR system and all space conditions have been monitored remotely for more than a year. The hybrid system was able to maintain both the space temperature and humidity as desired while delivering the outdoor air ventilation rate required by American Society of Heating, Refrigerating and Air-Conditioning Engineers Standard 62. The performance level of the IADR unit and the overall system energy efficiency was measured and found to be very high. A comprehensive IAQ investigation was completed by the Georgia Tech Research Institute before and after the system retrofit. Before-and-after data resulting from this investigation confirmed a significant improvement in IAQ, humidity control, and occupant comfort. These observations were reported by building occupants and are echoed in a letter to ORNL from the school district energy manager. The IADR system was easily retrofitted in place of the original rooftop system using a custom curb adapter. All work was completed in-house by the school's maintenance staff over one weekend. 
A subsequent cost analysis completed for the school district by the design engineer of record concluded that the IADR system being investigated was actually less expensive to install than other less-efficient options, most of which were unable to deliver the required ventilation while maintaining the desired space humidity levels.

Fischer, J

2005-12-21

252

A 10Gb/s transimpedance amplifier for hybrid integration of a Ge PIN waveguide photodiode

NASA Astrophysics Data System (ADS)

The presented paper describes a 10 Gbps optical receiver. The transimpedance amplifier (TIA) is realized in standard 0.35 µm SiGe BiCMOS technology. The main novelty of the presented design, investigated in the European Community project HELIOS, is the hybrid connection of the optical detector. The Germanium photodetector is directly mounted onto the receiver. A model of the relevant parasitics of the photodetector itself and the novel connection elements (micropads, metal vias and metal lines) is described. Based on this photodetector model an optical receiver circuit was optimized for maximum sensitivity at data rates in the range of 10 Gbps. The design combines a TIA and two limiting amplifier stages followed by a 50 Ω CML-style logic-level output driver. To minimize power supply noise and substrate noise, a fully differential design is used. A dummy TIA provides a symmetrical input signal reference and a control loop is used to compensate the offset levels. The TIA is built around a common-emitter stage and features a feedback resistor of 4.2 kΩ. The total transimpedance of the complete receiver chain is in the range of 275 kΩ. The value of the active feedback resistor can be reduced via an external control voltage to adapt the design to different overall gain requirements. The two limiting amplifier stages are realized as differential amplifiers with voltage followers. The output buffer is implemented with cascode differential amplifiers. The output buffer is capable of driving a differential 50 Ω output with a calculated output swing of 800 mVp-p. Simulations show an overall bandwidth of 7.2 GHz. The lower cutoff frequency is below 60 kHz. The equivalent input noise current is 408 nA. With an estimated total photodiode responsivity of 0.5 A/W this allows a sensitivity of around -23.1 dBm (BER = 10^-9). The device operates from a single 3.3 V power supply and the TIAs and the limiting amplifiers consume 32 mA.

Polzer, A.; Gaberl, W.; Swoboda, R.; Zimmermann, H.; Fedeli, J.-M.; Vivien, L.

2010-05-01

253

Propulsion Airframe Aeroacoustic Integration Effects for a Hybrid Wing Body Aircraft Configuration

NASA Technical Reports Server (NTRS)

An extensive experimental investigation was performed to study the propulsion airframe aeroacoustic effects of a high bypass ratio engine for a hybrid wing body aircraft configuration where the engine is installed above the wing. The objective was to provide an understanding of the jet noise shielding effectiveness as a function of engine gas condition and location as well as nozzle configuration. A 4.7% scale nozzle of a bypass ratio seven engine was run at characteristic cycle points under static and forward flight conditions. The effect of the pylon and its orientation on jet noise was also studied as a function of bypass ratio and cycle condition. The addition of a pylon yielded significant spectral changes lowering jet noise by up to 4 dB at high polar angles and increasing it by 2 to 3 dB at forward angles. In order to assess jet noise shielding, a planform representation of the airframe model, also at 4.7% scale, was traversed relative to the jet nozzle from downstream to several diameters upstream of the wing trailing edge. Installations at two fan diameters upstream of the wing trailing edge provided only limited shielding in the forward arc at high frequencies for both the axisymmetric and a conventional round nozzle with pylon. This was consistent with phased array measurements suggesting that the high frequency sources are predominantly located near the nozzle exit and, consequently, are amenable to shielding. The mid- to low-frequency sources were observed further downstream and shielding was insignificant. Chevrons were designed and used to impact the distribution of sources with the more aggressive design showing a significant upstream migration of the sources in the mid frequency range. Furthermore, the chevrons reduced the low frequency source levels and the typical high frequency increase due to the application of chevron nozzles was successfully shielded. 
The pylon was further modified with a technology that injects air through the shelf of the pylon which was effective in reducing low frequency noise and moving jet noise sources closer to the nozzle exit. In general, shielding effectiveness varied as a function of cycle condition with the cutback condition producing higher shielding compared to sideline power. The configuration with a more strongly immersed chevron and a pylon oriented opposite to the microphones produced the largest reduction in jet noise. In addition to the jet noise source, the shielding of a broadband point noise source was documented with up to 20 dB of noise reduction at directivity angles directly under the shielding surface.

Czech, Michael J.; Thomas, Russell H.; Elkoby, Ronen

2010-01-01

254

NASA Astrophysics Data System (ADS)

Identifying fault sections where slip deficits have accumulated may provide a means for understanding sequences of large megathrust earthquakes. Stress accumulated during the interseismic period on an active megathrust is stored as potential slip, referred to as slip deficit, along locked sections of the fault. Analysis of the spatial distribution of slip during antecedent events along the fault will show where the locked plate has spent its stored slip. Areas of unreleased slip indicate where the potential for large events remains. The location of recent earthquakes and their distribution of slip can be estimated from instrumentally recorded seismic and geodetic data. However, long-term slip-deficit modelling requires detailed information on the size and distribution of slip for pre-instrumental events over hundreds of years covering more than one 'seismic cycle'. This requires the exploitation of proxy sources of data. Coral microatolls, growing in the intertidal zone of the outer island arc of the Sunda trench, present the possibility of reconstructing slip for a number of pre-instrumental earthquakes. Their growth is influenced by tectonic flexing of the continental plate beneath them; they act as long-term recorders of the vertical component of deformation. However, the sparse distribution of data available using coral geodesy results in an underdetermined problem with non-unique solutions. Rather than accepting any one realisation as the definitive model satisfying the coral displacement data, a Monte Carlo approach identifies a suite of models consistent with the observations. Using a Genetic Algorithm to accelerate the identification of desirable models, we have developed a Monte Carlo Slip Estimator-Genetic Algorithm (MCSE-GA) which exploits the full range of uncertainty associated with the displacements. Each iteration of the MCSE-GA samples different values from within the spread of uncertainties associated with each coral displacement. 
The Genetic Algorithm element of the MCSE-GA allows it to recombine the information stored in a population of randomly generated models to rapidly converge on a possible solution. These solutions are evaluated and those satisfying a threshold number of observations join an ensemble of models contributing to a final Weighted Average Model (WAM). The WAM represents a high resolution estimate of the slip distribution responsible for any given set of displacements. Analysis of the slip values of the ensemble models allows areas of high confidence to be identified where the standard deviation is low. Similarly, areas of low confidence will be found where standard deviations are high. This presentation will demonstrate the ability of the MCSE-GA to produce both accurate models of slip for a number of recent instrumentally recorded earthquakes along the Sunda Trench and estimates of slip during 1797 and 1833 paleoearthquakes that are consistent with those produced from other techniques.
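The resample-and-evolve loop can be caricatured with a small linear toy problem (an invented Green's-function matrix and noise level stand in for the real fault-patch slip inversion with coral-derived displacement uncertainties):

```python
import numpy as np

rng = np.random.default_rng(1)

# toy linear forward model d = G @ slip; G plays the role of Green's functions
n_patch, n_obs, sigma = 6, 4, 0.05
G = rng.random((n_obs, n_patch))
d_obs = G @ rng.uniform(0, 2, n_patch)      # synthetic observed displacements

def misfit(m, d):
    return float(np.sum((G @ m - d) ** 2))

pop = rng.uniform(0, 2, (40, n_patch))      # random initial slip models
ensemble = []
for gen in range(200):
    # Monte Carlo element: resample the data within their uncertainties
    d = d_obs + rng.normal(0, sigma, n_obs)
    order = np.argsort([misfit(m, d) for m in pop])
    ensemble.extend(pop[order[:2]])         # retain the best models
    parents = pop[order[:20]]
    # genetic element: uniform crossover of parent pairs plus mutation
    idx = rng.integers(0, 20, (40, 2))
    mask = rng.random((40, n_patch)) < 0.5
    pop = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
    pop += rng.normal(0, 0.05, pop.shape)
wam = np.mean(ensemble[-100:], axis=0)      # weighted-average-style model
```

Averaging the ensemble mimics the WAM step: data-constrained slip components agree across members while poorly constrained (null-space) components spread out, which is exactly the per-patch confidence information the abstract describes.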

Lindsay, Anthony; McCloskey, John; Simão, Nuno; Murphy, Shane; Bhloscaidh, Mairead Nic

2014-05-01

255

Uniformity study of wafer-scale InP-to-silicon hybrid integration

NASA Astrophysics Data System (ADS)

In this paper we study the uniformity of wafer-scale III-V epitaxial transfer, up to 150 mm in diameter, to the Si-on-insulator substrate through O2 plasma-enhanced low-temperature (300°C) direct wafer bonding. Void-free bonding is demonstrated by scanning acoustic microscopy with sub-µm resolution. The photoluminescence (PL) map shows less than 1 nm change in average peak wavelength, and even improved peak intensity (4% better) and full width at half maximum (41% better) after 150 mm diameter epitaxial transfer. Small and uniformly distributed residual strain in all sizes of bonding, measured by high-resolution X-ray diffraction Omega-2Theta mapping, and the employment of a two-period InP-InGaAsP superlattice at the bonding interface contribute to the improvement of the PL response. Preservation of multiple-quantum-well integrity is also verified by high-resolution transmission electron microscopy.

Liang, Di; Chapman, David C.; Li, Youli; Oakley, Douglas C.; Napoleone, Tony; Juodawlkis, Paul W.; Brubaker, Chad; Mann, Carl; Bar, Hanan; Raday, Omri; Bowers, John E.

2011-04-01

256

NASA Astrophysics Data System (ADS)

The work presented herein describes the system design and performance evaluation of a miniaturized near-infrared fluorescence (NIRF) frequency-domain photon migration (FDPM) system with non-contact excitation and homodyne detection capability for small animal fluorescence tomography. The FDPM system was developed specifically for incorporation into a Siemens micro positron emission tomography/computed tomography (microPET/CT) commercial scanner for hybrid small animal imaging, but could be adapted to other systems. Operating at 100 MHz, the system noise was minimized and the associated amplitude and phase errors were characterized to be ±0.7% and ±0.3°, respectively. To demonstrate the tomographic ability, a commercial mouse-shaped phantom with 50 µM IRDye800CW and 68Ga containing inclusion was used to associate PET and NIRF tomography. Three-dimensional mesh generation and anatomical referencing was accomplished through CT. A third-order simplified spherical harmonics approximation (SP3) algorithm, for efficient prediction of light propagation in small animals, was tailored to incorporate the FDPM approach. Finally, the PET-NIRF target co-localization accuracy was analyzed in vivo with a dual-labeled imaging agent targeting orthotopic growth of human prostate cancer. The obtained results validate the integration of time-dependent fluorescence tomography system within a commercial microPET/CT scanner for multimodality small animal imaging.

Darne, Chinmay D.; Lu, Yujie; Tan, I.-Chih; Zhu, Banghe; Rasmussen, John C.; Smith, Anne M.; Yan, Shikui; Sevick-Muraca, Eva M.

2012-12-01

257

The work presented herein describes the system design and performance evaluation of a miniaturized near-infrared fluorescence (NIRF) frequency-domain photon migration (FDPM) system with non-contact excitation and homodyne detection capability for small animal fluorescence tomography. The FDPM system was developed specifically for incorporation into a Siemens micro positron emission tomography/computed tomography (microPET/CT) commercial scanner for hybrid small animal imaging, but could be adapted to other systems. Operating at 100 MHz, the system noise was minimized and the associated amplitude and phase errors were characterized to be ±0.7% and ±0.3°, respectively. To demonstrate the tomographic ability, a commercial mouse-shaped phantom with 50 µM IRDye800CW and 68Ga containing inclusion was used to associate PET and NIRF tomography. Three-dimensional mesh generation and anatomical referencing was accomplished through CT. A third-order simplified spherical harmonics approximation (SP3) algorithm, for efficient prediction of light propagation in small animals, was tailored to incorporate the FDPM approach. Finally, the PET-NIRF target co-localization accuracy was analyzed in vivo with a dual-labeled imaging agent targeting orthotopic growth of human prostate cancer. The obtained results validate the integration of time-dependent fluorescence tomography system within a commercial microPET/CT scanner for multimodality small animal imaging. PMID:23171509

Darne, Chinmay D; Lu, Yujie; Tan, I-Chih; Zhu, Banghe; Rasmussen, John C; Smith, Anne M; Yan, Shikui; Sevick-Muraca, Eva M

2012-12-21

258

The segmentation and detection of various types of nodules in a Computer-aided detection (CAD) system present various challenges, especially when (1) the nodule is connected to a vessel and they have very similar intensities; (2) the nodule with ground-glass opacity (GGO) characteristic possesses typical weak edges and intensity inhomogeneity, and hence it is difficult to define the boundaries. Traditional segmentation methods may cause problems of boundary leakage and “weak” local minima. This paper deals with the above mentioned problems. An improved detection method which combines a fuzzy integrated active contour model (FIACM)-based segmentation method, a segmentation refinement method based on Parametric Mixture Model (PMM) of juxta-vascular nodules, and a knowledge-based C-SVM (Cost-sensitive Support Vector Machines) classifier, is proposed for detecting various types of pulmonary nodules in computerized tomography (CT) images. Our approach has several novel aspects: (1) In the proposed FIACM model, edge and local region information is incorporated. The fuzzy energy is used as the motivation power for the evolution of the active contour. (2) A hybrid PMM Model of juxta-vascular nodules combining appearance and geometric information is constructed for segmentation refinement of juxta-vascular nodules. Experimental results of detection for pulmonary nodules show desirable performances of the proposed method. PMID:23690876

Li, Bin; Chen, Kan; Tian, Lianfang; Yeboah, Yao; Ou, Shanxing

2013-01-01

259

NASA Astrophysics Data System (ADS)

Magneto-rheological (MR) fluid dampers can be used to reduce the traffic induced vibration in highway bridges and protect critical structural components from fatigue. Experimental verification is needed to verify the applicability of the MR dampers for this purpose. Real-time hybrid simulation (RTHS), where the MR dampers are physically tested and dynamically linked to a numerical model of the highway bridge and truck traffic, provides an efficient and effective means to experimentally examine the efficacy of MR dampers for fatigue protection of highway bridges. In this paper a complex highway bridge model with 263,178 degrees-of-freedom under truck loading is tested using the proposed convolution integral (CI) method of RTHS for a semiactive structural control strategy employing two large-scale 200 kN MR dampers. The formation of RTHS using the CI method is first presented, followed by details of the various components in the RTHS and a description of the implementation of the CI method for this particular test. The experimental results confirm the practicability of the CI method for conducting RTHS of complex systems.

Jiang, Zhaoshuo; Jig Kim, Sung; Plude, Shelley; Christenson, Richard

2013-10-01

260

Temperature gradient focusing (TGF) is a counterflow gradient focusing technique, which utilizes a temperature gradient across a microchannel or capillary to separate analytes. With an appropriate buffer, the temperature gradient creates a gradient in both the electric field and electrophoretic velocity. Combined with a bulk counter flow, ionic species concentrate at a unique point where the total velocity sums to zero and separate from each other. Scanning TGF uses varying bulk flow so that a large number of analytes that have large differences in electrophoretic mobility can be sequentially focused and passed by a single detection point. Up to now, scanning TGF examples have been performed using a linear temperature gradient which has limitations in improving peak capacity and resolution at the same time. In this work, we develop a bilinear temperature gradient along the separation channel that improves both peak capacity and separation resolution simultaneously. The temperature profile along the channel consists of a very sharp gradient used to preconcentrate the sample followed by a shallow gradient that increases separation resolution. A specialized design is developed for the heaters to achieve the bilinear profile using both analytical and numerical modeling. The heaters are integrated onto a hybrid PDMS/glass chip fabricated using conventional sputtering and soft-lithography techniques. Separation performance is characterized by separating several different dyes and amino acids that have close electrophoretic mobilities. Experiments show a dramatic improvement in peak capacity and resolution in comparison to the standard linear temperature gradient. PMID:22404579
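The zero-velocity focusing condition described above can be sketched numerically. In this hedged illustration, the bilinear temperature profile, the linear mobility-temperature law, and all numbers are invented for demonstration (they are not parameters from the paper); an analyte focuses where the bulk counterflow exactly cancels the temperature-dependent electrophoretic velocity:

```python
def temperature(x):
    """Bilinear temperature profile along a channel x in [0, 1] (arbitrary
    units): a sharp gradient over the first 20% of the channel for
    preconcentration, then a shallow gradient for resolution."""
    if x < 0.2:
        return 80.0 - 40.0 * x / 0.2       # 80 C -> 40 C (sharp)
    return 40.0 - 10.0 * (x - 0.2) / 0.8   # 40 C -> 30 C (shallow)

def total_velocity(x, u_bulk=45.0, k=1.0):
    """Total analyte velocity: bulk counterflow minus a hypothetical
    linearly temperature-dependent electrophoretic velocity."""
    return u_bulk - k * temperature(x)

def focus_position(tol=1e-9):
    """Bisection for the unique zero-velocity point, where the analyte focuses."""
    lo, hi = 0.0, 1.0
    assert total_velocity(lo) * total_velocity(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total_velocity(lo) * total_velocity(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With these made-up numbers the focus lands inside the sharp preconcentration segment, which is the behaviour the bilinear design exploits.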

Shameli, Seyed Mostafa; Glawdel, Tomasz; Liu, Zhen; Ren, Carolyn L

2012-03-20

261

Monte Carlo methods Sequential Monte Carlo

Lecture slides: A. Doucet, "Sequential Monte Carlo", MLSS, Carcans, Sept. 2011 (85 slides). The opening slides pose the generic problem of approximating a sequence of probability distributions.
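As a hedged illustration of the sequential Monte Carlo methods this course covers — the AR(1) state-space model and every parameter below are invented for demonstration, not taken from the slides — a minimal bootstrap particle filter:

```python
import math
import random

def particle_filter(ys, n=500, phi=0.9, seed=0):
    """Bootstrap (SIR) particle filter for the toy model
    x_t = phi*x_{t-1} + N(0,1),  y_t = x_t + N(0,1).
    Returns the filtered posterior mean of x_t for each observation."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    means = []
    for y in ys:
        # Propagate each particle through the transition (proposal = prior).
        particles = [phi * p + rng.gauss(0.0, 1.0) for p in particles]
        # Importance-weight by the Gaussian observation likelihood.
        weights = [math.exp(-0.5 * (y - p) ** 2) for p in particles]
        total = sum(weights)
        means.append(sum(w * p for w, p in zip(weights, particles)) / total)
        # Multinomial resampling to combat weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n)
    return means
```

The propagate/weight/resample loop is the generic SMC recursion; only the model and the resampling scheme vary across the many variants discussed in the literature.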

Doucet, Arnaud

262

A new heterogeneous integration technique has been developed and demonstrated to integrate vertical cavity surface emitting lasers (VCSELs) on silicon CMOS integrated circuits for optical interconnect applications. Individual ...

Perkins, James Michael, 1978-

2007-01-01

263

This study presents a bi-directional multi-level power electronic interface for the grid interactions of plug-in hybrid electric vehicles (PHEVs) as well as a novel bi-directional power electronic converter for the combined operation of battery/ultracapacitor hybrid energy storage systems (ESS). The grid interface converter enables beneficial vehicle-to-grid (V2G) interactions in a high power quality and grid friendly manner; i.e., the grid interface converter ensures that all power delivered to/from the grid has unity power factor and almost zero current harmonics. The power electronic converter that provides the combined operation of the battery/ultracapacitor system reduces the size and cost of the conventional ESS hybridization topologies while reducing the stress on the battery, prolonging the battery lifetime, and increasing the overall vehicle performance and efficiency. The combination of the hybrid ESS is provided through an integrated magnetic structure that reduces the size and cost of the inductors of the ESS converters. Simulation and experimental results are included as proof of concept, presenting the different operation modes of the proposed converters.

Onar, Omer C [ORNL]

2011-01-01

264

NASA Astrophysics Data System (ADS)

In the 20+ years of Doppler observations of stars, scientists have uncovered a diverse population of extrasolar multi-planet systems. A common technique for characterizing the orbital elements of these planets is the Markov Chain Monte Carlo (MCMC), using a Keplerian model with random walk proposals and paired with the Metropolis-Hastings algorithm. For roughly two dozen planetary systems with Doppler observations, there are strong planet-planet interactions due to the system being in or near a mean-motion resonance (MMR). An N-body model is often required to accurately describe these systems. Further computational difficulties arise from exploring a high-dimensional parameter space (~7 × number of planets) that can have complex parameter correlations, particularly for systems near a MMR. To surmount these challenges, we introduce a differential evolution MCMC (DEMCMC) algorithm applied to radial velocity data while incorporating self-consistent N-body integrations. Our Radial velocity Using N-body DEMCMC (RUN DMC) algorithm improves upon the random walk proposal distribution of the traditional MCMC by using an ensemble of Markov chains to adaptively improve the proposal distribution. RUN DMC can sample more efficiently from high-dimensional parameter spaces that have strong correlations between model parameters. We describe the methodology behind the algorithm, along with results of tests for accuracy and performance. We find that most algorithm parameters have a modest effect on the rate of convergence. However, the size of the ensemble can have a strong effect on performance. We show that the optimal choice depends on the number of planets in a system, as well as the computer architecture used and the resulting extent of parallelization. While the exact choices of optimal algorithm parameters will inevitably vary due to the details of individual planetary systems (e.g., number of planets, number of observations, orbital periods, and signal-to-noise of each planet), we offer recommendations for choosing the DEMCMC's algorithmic parameters that result in excellent performance for a wide variety of planetary systems.
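The ensemble-based differential evolution proposal that RUN DMC builds on can be sketched generically. The following is a hedged, minimal DEMCMC (ter Braak-style) sampler for an arbitrary log-posterior — not the authors' N-body code; the jitter scale, chain count, and Gaussian target are illustrative assumptions:

```python
import math
import random

def demcmc(log_post, dim, n_chains=10, n_iter=2000, seed=1):
    """Differential evolution MCMC sketch: each chain proposes a jump along
    the difference of two other randomly chosen chains, scaled by gamma,
    plus a small jitter; a Metropolis rule accepts or rejects. The ensemble
    thereby adapts the proposal to parameter correlations automatically."""
    rng = random.Random(seed)
    gamma = 2.38 / math.sqrt(2.0 * dim)   # standard DE-MC scaling
    chains = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n_chains)]
    logp = [log_post(c) for c in chains]
    samples = []
    for _ in range(n_iter):
        for i in range(n_chains):
            a, b = rng.sample([j for j in range(n_chains) if j != i], 2)
            prop = [chains[i][d] + gamma * (chains[a][d] - chains[b][d])
                    + rng.gauss(0.0, 1e-4) for d in range(dim)]
            lp = log_post(prop)
            if lp >= logp[i] or rng.random() < math.exp(lp - logp[i]):
                chains[i], logp[i] = prop, lp
        samples.extend([list(c) for c in chains])
    return samples
```

Because the difference vectors are drawn from the current ensemble, proposals automatically stretch along correlated directions of the posterior, which is what makes the scheme attractive for near-resonant systems with strong parameter correlations.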

Nelson, Benjamin; Ford, Eric B.; Payne, Matthew J.

2014-01-01

265

NASA Astrophysics Data System (ADS)

The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques including conventional open tangents, 3D conformal wedged tangents and hybrid intensity modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and all solid organs. Then, different general non-linear risk models were applied to the whole body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two specific organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), underscoring the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment of risk than the absolute risk. We found that the ratio of risk between two techniques could also vary substantially depending on the approach to risk estimation. Sometimes the ratio of risk between two techniques would range between values smaller and larger than one, which translates into inconsistent conclusions about whether one technique carries a higher risk than another. We found, however, that the hybrid IMRT technique resulted in a systematic reduction of risk compared to the other techniques investigated, even though the magnitude of this reduction varied substantially with the different approaches investigated. 
Based on the epidemiological data available, a reasonable approach to risk estimation would be to use organ-specific non-linear risk models applied to the dose distributions of organs within or near the treatment fields (lungs and contralateral breast in the case of breast radiotherapy) as the majority of radiation-induced secondary cancers are found in the beam-bordering regions.

Joosten, A.; Bochud, F.; Moeckli, R.

2014-08-01

266

Bacterial Colony Hybridization (colony lift). AIM: identify which bacterial colonies on a petri dish carry the sequence of interest, using a labeled fragment as a template. Materials: Whatman #1 filter paper, trimmed to fit in the petri dish (does not need to be sterile); large Pyrex tray; clean petri dishes; Parafilm; hybridization oven set at 42°C.

Abou Elela, Sherif

267

Bioartificial liver support systems are expected to be an effective therapy as a "bridge" for liver transplantation or reversible acute liver disease. A major roadblock in the application of bioartificial livers is the need for a bioreactor that fully meets the requirements of hepatocyte culture, mass transfer and immunobarriers. In this study, we developed a three-dimensional hybrid bioreactor (3DHB) on a base of single-layer skin polyethersulfone hollow fibers by integrating with polyurethane scaffolds. The mass transfer of bilirubin and albumin from the intracapillary space to the extracapillary space of the hollow fibers was not significantly different between 3DHBs and hollow fiber bioreactors (HFBs). Cell viability staining showed that high-density hepatocytes were uniformly found in different regions of the 3DHB after 7 days of culture. Liver-specific functions of human mature hepatocytes cultured in the 3DHB, such as albumin secretion, urea production, ammonia removal rate and cytochrome P450 activity, were maintained stably and were significantly higher compared with the HFB. These results indicated that the 3DHB has good mass transfer and improves cell distribution and liver-specific functions. Meanwhile, the ammonia and unconjugated bilirubin concentrations in plasma from patients with liver failure were significantly decreased during 6 h of circulation by hepatocytes cultured in the 3DHB. Most hepatocytes in the 3DHB were viable after 6 h exposure to the patient plasma. We further demonstrated that bioartificial liver systems with 3DHB can remove toxins from and endure the deleterious effects of the patient plasma. Therefore, the 3DHB has the potential to accomplish different actions for the clinical application of bioartificial livers. PMID:23963686

Zhang, Shichang; Chen, Li; Liu, Tao; Wang, Zhengguo; Wang, Yingjie

2014-01-01

268

Error in Monte Carlo, quasi-error in Quasi-Monte Carlo

While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
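The need for a stochastic error estimate on quasi-random points can be illustrated with randomized QMC. This hedged sketch uses a 2D Halton sequence with Cranley-Patterson random shifts — a standard randomization, not the discrepancy-ensemble estimator the authors propose — so that the spread across independent shifts yields a statistically valid error bar:

```python
import random

def van_der_corput(n, base):
    """n-th element of the van der Corput low-discrepancy sequence in `base`."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

def rqmc_integrate(f, n_points=1024, n_shifts=16, seed=0):
    """Randomized QMC over the unit square: a Halton point set (bases 2 and 3)
    is given several independent random shifts modulo 1. Each shifted set
    yields an unbiased estimate; the spread of the per-shift estimates gives
    an error estimate, which plain (deterministic) QMC lacks."""
    rng = random.Random(seed)
    pts = [(van_der_corput(i, 2), van_der_corput(i, 3))
           for i in range(1, n_points + 1)]
    estimates = []
    for _ in range(n_shifts):
        sx, sy = rng.random(), rng.random()
        total = sum(f((x + sx) % 1.0, (y + sy) % 1.0) for x, y in pts)
        estimates.append(total / n_points)
    mean = sum(estimates) / n_shifts
    var = sum((e - mean) ** 2 for e in estimates) / (n_shifts * (n_shifts - 1))
    return mean, var ** 0.5   # integral estimate and its standard error
```

For a smooth integrand the per-shift spread shrinks much faster than the independent-point Monte Carlo rate, which is precisely the improvement the naive estimator fails to capture.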

R. H. Kleiss; A. Lazopoulos

2005-04-12

269

NASA Astrophysics Data System (ADS)

We integrate geologic and geophysical information to characterize the upper-plate structure of the Monte Vista fault zone (MVFZ). The MVFZ is part of the larger Foothills thrust fault system and extends 25 km from Los Altos southeast to Los Gatos along the front of the range. At the northwestern end of the MVFZ, Franciscan and Miocene sedimentary rocks are thrust over Quaternary deposits. Upper-plate rocks of the Plio-Quaternary Santa Clara Formation are folded, with Miocene rocks exposed in the core of a hanging-wall anticline. The fold axes parallel the southeast strike of the MVFZ for 5-6 km, then diverge southward from the fault zone and disappear near Saratoga, to be replaced by deeply dissected, nearly flat-lying Santa Clara Formation and/or Pleistocene fan deposits. These relations suggest that there is a structural transition within the upper plate of the MVFZ. North of this transition, the Santa Clara Formation may be as thick as 600-700 m, if its surface distribution and bedding attitudes represent uniform thickness across the axis of a prominent synclinal trough. South of the transition, water wells within the undeformed surficial deposits encountered "blue shale" at a depth of about 120 m. If the drillers' "blue shale" is Franciscan basement, this abrupt change in thickness of the young deposits should produce a 10 mGal anomaly that is not seen on a longitudinal gravity profile across the transition. Given this absence, the blue shale is probably part of the Santa Clara Formation, rather than Franciscan basement. North of the transition, the absence of local gravity lows over the thick, downfolded Santa Clara Formation suggests that the Santa Clara is not as thick as projected from the geologic relations or is denser than similar deposits elsewhere in the region. Thickening of the Plio-Quaternary deposits northeastward across the MVFZ is suggested by a magnetic high that extends basinward to the northeast. 
The gradient of the edge anomaly is only 500-1000 m wide, indicating that the source is shallow and lies within the Santa Clara Formation or younger alluvium. Intensity measurements on core from the alluvial section in Santa Clara Valley range from 0.6 to 400 mA/m and would be sufficient to produce the amplitude (10-30 nT) of the edge anomaly. Gravity modeling suggests that the fill in the Cupertino basin just northeast of the MVFZ is 3-4 km thick. The presence of a thick Cenozoic sedimentary sequence in the basin is supported by biomarkers in oil from a well in Los Gatos indicating oil generation from Miocene source rocks at a depth of 2-3 km. Integration of gravity modeling and seismic-refraction data in the Los Gatos area indicates that the Cupertino basin is 3.5 km deep if velocities > 5 km/s are attributed to Franciscan basement beneath Santa Clara Valley. Near-surface velocities measured on Franciscan basement south of Los Gatos are as low as 3-3.5 km/s. We attribute the reduction of velocity to cracks and fracturing, which affects velocity more drastically than density. By 1 km depth, velocities measured on Franciscan basement are 5-5.5 km/s probably because cracks and fractures are sealed with increasing overburden. Oil and water well data help constrain the geometry of the Franciscan rocks in the upper plate of the MVFZ as a thin flap, about 2-3 km-wide, overlying the thick Cupertino basin deposits.

Langenheim, V. E.; Catchings, R. D.; McLaughlin, R. J.; Jachens, R. C.; Wentworth, C. M.; Stanley, R. G.; Mankinen, E. A.

2002-12-01

270

NASA Astrophysics Data System (ADS)

We demonstrate the effectiveness of a statistical potential (SP) in the description of fermions in a worm-algorithm path-integral Monte Carlo simulation of a few 3He atoms floating on a 4He layer adsorbed on graphite. The SP in this work yields successful results, as manifested by the clusterization of 3He, and by the observation that the 3He atoms float on the surface of 4He. We display the positions of the particles in 3D coordinate space, which reveal clusterization of the 3He component. The correlation functions are also presented, which give further evidence for the clusterization.

Ghassib, Humam B.; Sakhel, Asaad R.; Obeidat, Omar; Al-Oqali, Amer; Sakhel, Roger R.

2012-01-01


272

Monte Carlo methods: Application to hydrogen gas and hard spheres

Quantum Monte Carlo (QMC) methods are among the most accurate for computing ground state properties of quantum systems. The two major types of QMC we use are Variational Monte Carlo (VMC), which evaluates integrals arising from the variational principle, and Diffusion Monte Carlo (DMC), which stochastically projects to the ground state from a trial wave function. These methods are applied
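The variational step mentioned above can be made concrete with a toy problem. This hedged sketch assumes the 1D harmonic oscillator H = -½ d²/dx² + ½x² with trial wavefunction ψ(x) = exp(-αx²) — a textbook example, not one of the systems (hydrogen gas, hard spheres) studied in this thesis. Metropolis sampling of |ψ|² averages the local energy E_L(x) = α + x²(½ - 2α²):

```python
import math
import random

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=0):
    """Variational Monte Carlo for H = -1/2 d^2/dx^2 + x^2/2 with trial
    wavefunction psi(x) = exp(-alpha*x^2). Metropolis sampling of |psi|^2
    averages the local energy E_L(x) = alpha + x^2*(1/2 - 2*alpha^2)."""
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis acceptance with ratio |psi(x_new)|^2 / |psi(x)|^2.
        delta = -2.0 * alpha * (x_new * x_new - x * x)
        if delta >= 0.0 or rng.random() < math.exp(delta):
            x = x_new
        e_sum += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return e_sum / n_steps
```

At α = ½ the trial function is the exact ground state, the local energy is constant, and the variance vanishes — the zero-variance property that makes VMC a useful diagnostic before running DMC.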

Mark Douglas Dewing

2001-01-01

273

NASA Astrophysics Data System (ADS)

Broadening the bandwidth of electromagnetic wave absorbers has greatly challenged material scientists. Here, we propose a two-layer hybrid absorber consisting of a non-planar metamaterial (MM) and a magnetic microwave absorbing material (MAM). The non-planar MM using magnetic MAMs instead of dielectric substrates shows good low frequency absorption and low reflection across a broad spectrum. Benefiting from this and the high frequency strong absorption of the MAM layer, the lightweight hybrid absorber exhibits 90% absorptivity over the whole 2-18 GHz range. Our result reveals a promising and flexible method to greatly extend or control the absorption bandwidth of absorbers.

Li, Wei; Wu, Tianlong; Wang, Wei; Guan, Jianguo; Zhai, Pengcheng

2014-01-01

274

Crime forecasting is an important area in the field of criminology. Linear models, such as regression and econometric models, are commonly applied in crime forecasting. However, real crime data commonly consist of both linear and nonlinear components, and a single model may not be sufficient to identify all the characteristics of the data. The purpose of this study is to introduce a hybrid model that combines support vector regression (SVR) and autoregressive integrated moving average (ARIMA) for crime rate forecasting. SVR is very robust with small training data and high-dimensional problems, while ARIMA has the ability to model several types of time series. However, the accuracy of the SVR model depends on the values of its parameters, and ARIMA is not robust when applied to small data sets. To overcome these problems, particle swarm optimization is used to estimate the parameters of the SVR and ARIMA models. The proposed hybrid model is used to forecast the property crime rates of the United States based on economic indicators. The experimental results show that the proposed hybrid model produces more accurate forecasts than the individual models. PMID:23766729
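The decomposition idea — a linear model captures the linear component while a nonlinear learner models its residuals — can be sketched with deliberately simplified stand-ins. Hedged assumptions: an AR(1) least-squares fit replaces ARIMA, a Nadaraya-Watson kernel smoother replaces SVR, and the particle swarm parameter search is omitted entirely:

```python
import math

def fit_ar1(series):
    """Least-squares AR(1) fit y_t = c + phi*y_{t-1}; a simplified
    stand-in for the ARIMA component."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    return my - phi * mx, phi

def kernel_residual_model(resid, bandwidth=0.5):
    """Nadaraya-Watson smoother of resid_t on resid_{t-1}: a minimal
    nonlinear stand-in for the SVR component."""
    pairs = list(zip(resid[:-1], resid[1:]))
    def predict(r_prev):
        ws = [math.exp(-((r_prev - a) / bandwidth) ** 2) for a, _ in pairs]
        s = sum(ws)
        return sum(w * b for w, (_, b) in zip(ws, pairs)) / s if s else 0.0
    return predict

def hybrid_forecast(series):
    """One-step-ahead forecast: linear AR(1) part plus a nonlinear
    correction predicted from the last residual."""
    c, phi = fit_ar1(series)
    resid = [b - (c + phi * a) for a, b in zip(series[:-1], series[1:])]
    return c + phi * series[-1] + kernel_residual_model(resid)(resid[-1])
```

On a purely linear series the residual model contributes nothing and the forecast reduces to the AR(1) prediction; on data with a nonlinear component, the residual model supplies the correction the linear part misses.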

Alwee, Razana; Hj Shamsuddin, Siti Mariyam; Sallehuddin, Roselina

2013-01-01

275

NASA Technical Reports Server (NTRS)

The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.

Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.

1994-01-01

276

Problem statement: While classical PID controllers are sensitive to variations in the system parameters, fuzzy controllers do not need precise information about the system variables in order to be effective. However, PID controllers are better able to control and minimize the steady-state error of the system. To enhance the controller performance, hybridization of these two

Pornjit Pratumsuwan; Siripun Thongchai; Surapun Tansriwong

277

This paper introduces a method for unit sizing of a hybrid wind/photovoltaic/fuel cell generation system for a typical domestic load that is not located near the electric grid. In this configuration, the combination of a battery, an electrolyser, and a hydrogen storage tank is used as the energy storage system. The aim of this design is finding the configuration, among a set

Hossein Kord; Ahmad Rohani

278

The algorithms and equations used in the calculation of long-range pollutant transport and dispersion are presented from the meteorological data sources through the calculation of air concentrations. The model calculation methods are a hybrid between Eulerian and Lagrangian approaches. A single pollutant particle represents the initial source. Advection and diffusion calculations are made in a Lagrangian framework. As the dispersion

Draxler

1992-01-01

279

Monte Carlo EM for Generalized Linear Mixed Models using Randomized Spherical Radial

Monte Carlo EM for Generalized Linear Mixed Models using Randomized Spherical Radial Integration. The intractable integrals arising in such models can be approximated by Monte Carlo methods; however, in practice, the Monte Carlo sample sizes required for convergence can be prohibitive for such methods. One solution is to use Monte Carlo approximation, as proposed by Wei and Tanner (1990).

Booth, James

280

Infectious pathogens often cause serious public health concerns throughout the world. There is an increasing demand for simple, rapid and sensitive approaches for multiplexed pathogen detection. In this paper we have developed a polydimethylsiloxane (PDMS)/paper/glass hybrid microfluidic system integrated with aptamer-functionalized graphene oxide (GO) nano-biosensors for simple, one-step, multiplexed pathogen detection. The paper substrate used in this hybrid microfluidic system facilitated the integration of aptamer biosensors on the microfluidic biochip, and avoided complicated surface treatment and aptamer probe immobilization in a PDMS or glass-only microfluidic system. Lactobacillus acidophilus was used as a bacterium model to develop the microfluidic platform with a detection limit of 11.0 cfu mL(-1). We have also successfully extended this method to the simultaneous detection of two infectious pathogens - Staphylococcus aureus and Salmonella enterica. This method is simple and fast. The one-step 'turn on' pathogen assay in a ready-to-use microfluidic device only takes ~10 min to complete on the biochip. Furthermore, this microfluidic device has great potential in rapid detection of a wide variety of other bacterial and viral pathogens. PMID:23929394

Zuo, Peng; Li, XiuJun; Dominguez, Delfina C; Ye, Bang-Ce

2013-10-01


282

This report summarizes the results of a research and development (R&D) program to design and optimize an active desiccant-vapor compression hybrid rooftop system. The primary objective was to combine the strengths of both technologies to produce a compact, high-performing, energy-efficient system that could accommodate any percentage of outdoor air and deliver essentially any required combination of temperature and humidity, or

2005-01-01

283

In this paper we propose a novel method for automatic detection of microaneurysms (MA) and hemorrhages (HG), grouped as red lesions. Candidate extraction is achieved by automatic seed generation (ASG), which avoids the morphological top-hat transform (MTH). For classification we tested a linear discriminant classifier (LMSE), kNN, GMM, and SVM, and propose a hybrid classifier that incorporates kNN and GMM

Sandip Pradhan; S. Balasubramanian; V. Chandrasekaran

2008-01-01

284

Optical detection of glucose, high drug loading capacity, and self-regulated drug delivery are simultaneously possible using a multifunctional hybrid nanogel particle under a rational design in a colloid chemistry method. Such hybrid nanogels are made of Ag nanoparticle (NP) cores covered by a copolymer gel shell of poly(4-vinylphenylboronic acid-co-2-(dimethylamino)ethyl acrylate) [p(VPBA-DMAEA)]. The introduction of the glucose sensitive p(VPBA-DMAEA) gel shell onto Ag NPs makes the polymer-bound Ag NPs responsive to glucose. While the small sized Ag cores (10 ± 3 nm) provide fluorescence as an optical code, the responsive polymer gel shell can adapt to a surrounding medium of different glucose concentrations over a clinically relevant range (0-30 mM), convert the disruptions in homeostasis of glucose level into optical signals, and regulate release of preloaded insulin. This shows a new proof-of-concept for diabetes treatment that exploits the properties from each building block of a multifunctional nano-object. The highly versatile multifunctional hybrid nanogels could potentially be used for simultaneous optical diagnosis, self-regulated therapy, and monitoring of the response to treatment. PMID:20731458

Wu, Weitai; Mitra, Nivedita; Yan, Elsa C Y; Zhou, Shuiqin

2010-08-24

285

This article briefly describes the development of an Associate Degree Nursing (ADN) hybrid program in the community college setting. Discussion follows for integrating the hybrid learner into the ADN program and solving some distance-learning concerns. Best practices for faculty, administration, and students are presented with discussion of the need to increase the pool of RNs by providing nontraditional learning alternatives

Marie Huffmaster Thomas; Susan Scott Baker

2008-01-01

286

Kinetic Monte Carlo with fields: diffusion in heterogeneous systems

NASA Astrophysics Data System (ADS)

It is commonly perceived that to achieve breakthrough scientific discoveries in the 21st century, an integration of world-leading experimental capabilities with theory, computational modeling and high performance computer simulations is necessary. Lying between the atomic and the macro scales, the meso scale is crucial for advancing materials research. Deterministic methods are computationally too heavy to cover the length and time scales relevant for this scale; therefore, stochastic approaches are one of the options of choice. In this talk I will describe recent progress in efficient parallelization schemes for Metropolis and kinetic Monte Carlo [1-2], and the combination of these ideas into a new hybrid Molecular Dynamics-kinetic Monte Carlo algorithm developed to study the basic mechanisms taking place in diffusion in concentrated alloys under the action of chemical and stress fields, incorporating in this way the actual driving force emerging from chemical potential gradients. Applications are shown on precipitation and segregation in nanostructured materials. Work in collaboration with E. Martinez, LANL, and with B. Sadigh, P. Erhart and A. Stukowsky, LLNL. Supported by the Center for Materials at Irradiation and Mechanical Extremes, an Energy Frontier Research Center funded by the U.S. Department of Energy (Award # 2008LANL1026) at Los Alamos National Laboratory. [1] B. Sadigh et al., to be published. [2] E. Martinez et al., J. Comp. Phys. 227 (2008) 3804-3823.
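As a hedged illustration of the kinetic Monte Carlo machinery this talk builds on — the 1D lattice, hop rates, and bias below are invented for demonstration and are not taken from the work described — a minimal residence-time (BKL) step picks an event proportionally to its rate and advances the physical clock by an exponentially distributed waiting time:

```python
import math
import random

def kmc_1d_diffusion(n_steps=1000, seed=0):
    """Residence-time (BKL) kinetic Monte Carlo: one particle on a 1D
    lattice with hypothetical biased hop rates. Each step selects an event
    with probability proportional to its rate, then advances time by an
    exponential waiting time with mean 1/(total rate)."""
    rng = random.Random(seed)
    pos, t = 0, 0.0
    rates = {+1: 1.0, -1: 0.8}     # right hops slightly favoured (a "field")
    total = sum(rates.values())
    for _ in range(n_steps):
        r = rng.random() * total   # event selection by cumulative rate
        for move, rate in rates.items():
            r -= rate
            if r <= 0.0:
                pos += move
                break
        t += -math.log(1.0 - rng.random()) / total   # exponential waiting time
    return pos, t
```

Because every step executes an event (rejection-free) and the clock advances by the correct stochastic waiting time, KMC reaches time scales far beyond molecular dynamics, which is what makes the hybrid MD-kMC combination described in the talk attractive.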

Alfredo Caro, Jose

2011-03-01

287

Improved conversion efficiency of as-grown InGaN/GaN quantum-well solar cells for hybrid integration

NASA Astrophysics Data System (ADS)

We report on the photovoltaic characteristics of solar cells based on 15 and 30 InxGa1-xN/GaN (x = 0.10 and 0.19) multiquantum wells (MQWs) grown on sapphire. Doubling the number of MQWs increases the peak external quantum efficiency by a factor of 2 for both In contents. Devices with 19% In, with a spectral cutoff at 465 nm, exhibit an open-circuit voltage of 1.7 V and a short-circuit current density of 3.00 mA/cm2 under 1 sun AM1.5G illumination, leading to a conversion efficiency of 2.00%, making them promising for hybrid integration with non-III-nitride photovoltaic devices.

Valdueza-Felip, Sirona; Mukhtarova, Anna; Grenet, Louis; Bougerol, Catherine; Durand, Christophe; Eymery, Joel; Monroy, Eva

2014-03-01

288

NASA Astrophysics Data System (ADS)

320×240 pixels GaAs Schottky barrier detector arrays were fabricated, hybridized to silicon readout circuits, and subsequently evaluated. The detector chip was based on semi-insulating LEC GaAs material. The square shaped pixel detector elements were of the Schottky barrier type and had a pitch of 38 µm. The GaAs wafers were thinned down prior to the fabrication of the ohmic back contact. After dicing, the chips were indium-bump flip-chip bonded to CMOS readout circuits based on charge integration, and finally evaluated. A bias voltage between 50 and 100 V was sufficient to operate the detector. Results on I-V characteristics, noise behaviour and response to X-ray radiation are presented. Images of various objects and slit patterns were acquired by using a standard dental imaging X-ray source. The work done was a part of the XIMAGE project financed by the European Community (Brite-Euram).

Irsigler, R.; Andersson, J.; Alverbro, J.; Borglind, J.; Fröjdh, C.; Helander, P.; Manolopoulos, S.; O'Shea, V.; Smith, K.

1999-09-01

289

NASA Astrophysics Data System (ADS)

We propose two vertical handoff schemes for cellular network and wireless local area network (WLAN) integration: integrated service-based handoff (ISH) and integrated service-based handoff with queue capabilities (ISHQ). Compared with existing handoff schemes in integrated cellular/WLAN networks, the proposed schemes consider a more comprehensive set of system characteristics such as different features of voice and data services, dynamic information about the admitted calls, user mobility and vertical handoffs in two directions. The code division multiple access (CDMA) cellular network and IEEE 802.11e WLAN are taken into account in the proposed schemes. We model the integrated networks by using multi-dimensional Markov chains and the major performance measures are derived for voice and data services. The important system parameters such as thresholds to prioritize handoff voice calls and queue sizes are optimized. Numerical results demonstrate that the proposed ISHQ scheme can maximize the utilization of overall bandwidth resources with the best quality of service (QoS) provisioning for voice and data services.

Xia, Weiwei; Shen, Lianfeng

290

NASA Astrophysics Data System (ADS)

An interesting discovery is reported: G-rich hairpin-based recognition probes can self-assemble into a nano-architecture based on the integration of an intermolecular G-quadruplex structure with the sticky-end pairing effect in the presence of target DNAs. Moreover, GNPs modified with partly complementary DNAs can intensively aggregate by hybridization-based intercalation between intermolecular G-quadruplexes, indicating a novel assembly mechanism and a powerful colorimetric DNA detection. The proposed intermolecular G-quadruplex-integrated sticky-end pairing assembly (GISA)-based colorimetric system allows a specific and quantitative assay of p53 DNA with a linear range of more than two orders of magnitude and a detection limit of 0.2 nM, a considerably improved analytical performance. Moreover, single-base mismatched target DNAs can be discriminated easily by visual observation. The GISA-based aggregation mechanism of GNPs differs from traditional approaches and offers critical insight into the dependence of GNP aggregation on the structural properties of oligonucleotides, opening a good way to design colorimetric sensing probes and DNA nanostructures. Electronic supplementary information (ESI) available: experimental section, supplementary figures and perspectives. See DOI: 10.1039/c3nr03547f

Li, Hongbo; Wu, Zai-Sheng; Shen, Zhifa; Shen, Guoli; Yu, Ruqin

2014-01-01

291

NASA Astrophysics Data System (ADS)

Ion toroidal rotation in the counter-current direction has been measured in C-Mod during lower hybrid (LH) RF power injection. Toroidal momentum input from the LH waves determines the initial increase of the counter-current ion toroidal rotation. Due to the fast build-up time of the plateau (<1 ms), the electron distribution function is assumed to be in steady state. We calculate the toroidal momentum input of the LH wave to electrons by iterating a full-wave code (TORIC-LH) with a Fokker-Planck code (CQL3D) to obtain a self-consistent steady-state electron distribution function. On the longer time scale, comparable to the transport time (~100 ms), the ion rotation changes due to the constant momentum transfer from electrons to ions and the radial flux of ion toroidal momentum by Reynolds stress and collisional viscosity. We suggest a way to evaluate the viscosity terms for the low-flow-level rotation by a modified electrostatic gyrokinetic code.

Lee, Jungpyo; Wright, John; Bonoli, Paul; Parker, Ron; Catto, Peter; Podpaly, Yuri; Rice, John; Reinke, Matt; Parra, Felix

2010-11-01

292

Shell model Monte Carlo methods

We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications, including the ground-state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed.

Koonin, S.E. [California Inst. of Tech., Pasadena, CA (United States). W.K. Kellogg Radiation Lab.; Dean, D.J. [Oak Ridge National Lab., TN (United States)

1996-10-01

293

We determine the ground-state energy and Tan's contact of attractively interacting few-fermion systems in a one-dimensional harmonic trap, for a range of couplings and particle numbers. To this end, we implement a new lattice Monte Carlo approach based on a non-uniform discretization of space, defined via Gauss-Hermite quadrature points and weights. This particular coordinate basis is natural for systems in harmonic traps, and it yields a position-dependent coupling and a corresponding non-uniform Hubbard-Stratonovich transformation. The resulting path integral is performed with hybrid Monte Carlo as a proof of principle for calculations at finite temperature and in higher dimensions.
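The non-uniform discretization described above is built from Gauss-Hermite quadrature points and weights, which evaluate integrals of the form ∫ f(x) e^(-x²) dx as a weighted sum Σᵢ wᵢ f(xᵢ). A minimal sketch of the idea, using the hardcoded 2- and 3-point rules (illustrative only, not the authors' code): an n-point rule is exact for polynomials of degree up to 2n - 1.

```python
# Gauss-Hermite quadrature: integrate f(x) against the weight exp(-x^2)
# using only a few non-uniformly spaced nodes. Nodes/weights below are
# the standard low-order rules for the physicists' Hermite weight.
import math

RULES = {
    2: ([-1 / math.sqrt(2), 1 / math.sqrt(2)],
        [math.sqrt(math.pi) / 2] * 2),
    3: ([-math.sqrt(1.5), 0.0, math.sqrt(1.5)],
        [math.sqrt(math.pi) / 6, 2 * math.sqrt(math.pi) / 3,
         math.sqrt(math.pi) / 6]),
}

def gauss_hermite(f, n):
    """Approximate \\int f(x) exp(-x^2) dx with the n-point rule."""
    nodes, weights = RULES[n]
    return sum(w * f(x) for x, w in zip(nodes, weights))

# \int x^2 exp(-x^2) dx = sqrt(pi)/2, already exact with n = 2
val2 = gauss_hermite(lambda x: x * x, 2)
# \int x^4 exp(-x^2) dx = 3*sqrt(pi)/4, exact with n = 3
val4 = gauss_hermite(lambda x: x ** 4, 3)
```

The same node sets, scaled by the trap length, give the position-dependent lattice spacing the abstract refers to.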

Berger, C E; Drut, J E

2014-01-01

294

We determine the ground-state energy and Tan's contact of attractively interacting few-fermion systems in a one-dimensional harmonic trap, for a range of couplings and particle numbers. To this end, we implement a new lattice Monte Carlo approach based on a non-uniform discretization of space, defined via Gauss-Hermite quadrature points and weights. This particular coordinate basis is natural for systems in harmonic traps, and it yields a position-dependent coupling and a corresponding non-uniform Hubbard-Stratonovich transformation. The resulting path integral is performed with hybrid Monte Carlo as a proof of principle for calculations at finite temperature and in higher dimensions.

C. E. Berger; E. R. Anderson; J. E. Drut

2014-10-29

295

This paper presents the modeling and analysis of a greenhouse-integrated power system consisting of solar photovoltaic panels, electrolyzer bank and Polymer Electrolyte Membrane (PEM) fuel cell stacks. Electric power is generated in an array of solar photovoltaic modules. Excess energy after meeting the requirements of the greenhouse during peak sunshine hours, is supplied to an electrolyzer bank to generate hydrogen

A. Ganguly; D. Misra; S. Ghosh

2010-01-01

296

ERIC Educational Resources Information Center

This paper deals with the problematic nature of the transition between education and the workplace. A smooth transition between education and the workplace requires learners to develop an integrated knowledge base, but this is problematic as most educational programmes offer knowledge and experiences in a fragmented manner, scattered over a…

Zitter, Ilya; Hoeve, Aimee

2012-01-01

297

Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

Densmore, Jeffrey D [Los Alamos National Laboratory; Kelly, Thompson G [Los Alamos National Laboratory; Urbatish, Todd J [Los Alamos National Laboratory

2010-11-17

298

NASA Technical Reports Server (NTRS)

The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide range of highly reliable fault-tolerant system architectures, and it is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.

Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell

1994-01-01

299

This work presents a hybrid finite element boundary integral algorithm to solve the problem of scattering from finite and infinite arrays of two-dimensional overfilled cavities engraved in a perfectly electric conducting flat screen. The solution region is divided into interior regions containing the cavities and their protruding portions, and the region exterior to the overfilled cavities. The finite element formulation is applied only inside the interior regions to derive a linear system of equations associated with field unknowns. Using a two-boundary formulation, the surface integral equation employing the half-space Green's function is applied on the boundary located at the interface of the protruding portions of the cavities and the half-space as a boundary constraint to truncate the solution region. Placing the truncation boundary on the protruding portions of the cavities results in a highly efficient solution in terms of computational resources, which makes the algorithm well suited for optimization problems involving scattering from grating surfaces. The near fields are generated for finite and infinite arrays of overfilled cavities with different dimensions. PMID:23201808

Alavikia, Babak; Ramahi, Omar M

2012-11-01

300

This report is composed of the lecture notes from the first half of a 32-hour graduate-level course on Monte Carlo methods offered at KAPL. These notes, prepared by two of the principal developers of KAPL's RACER Monte Carlo code, cover the fundamental theory, concepts, and practices for Monte Carlo analysis. In particular, a thorough grounding in the basic fundamentals of Monte Carlo methods is presented, including random number generation, random sampling, the Monte Carlo approach to solving transport problems, computational geometry, collision physics, tallies, and eigenvalue calculations. Furthermore, modern computational algorithms for vector and parallel approaches to Monte Carlo calculations are covered in detail, including fundamental parallel and vector concepts, the event-based algorithm, master/slave schemes, parallel scaling laws, and portability issues.
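One of the random-sampling fundamentals such a course covers is inverse-transform sampling; a standard transport example is sampling the distance to the next collision from p(s) = σ e^(-σs) by inverting its CDF. A generic illustration under that assumption (the cross-section value and sample count are arbitrary, not taken from the lecture notes):

```python
# Inverse-transform sampling of an exponential path length:
# CDF is F(s) = 1 - exp(-sigma*s), so s = -ln(1 - u)/sigma for u ~ U[0,1).
import math
import random

def sample_path_length(sigma, rng):
    """Draw one flight distance from p(s) = sigma * exp(-sigma * s)."""
    u = rng.random()                      # uniform in [0, 1)
    return -math.log(1.0 - u) / sigma     # inverse CDF of the exponential

rng = random.Random(42)
sigma = 2.0                               # hypothetical total cross section (1/cm)
samples = [sample_path_length(sigma, rng) for _ in range(200_000)]
mean_free_path = sum(samples) / len(samples)   # should approach 1/sigma = 0.5
```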

Brown, F.B.; Sutton, T.M.

1996-02-01

301

Although the clinical pathway (CP) predefines a predictable, standardized care process for a particular diagnosis or procedure, many variances may still unavoidably occur. Some key index parameters have a strong relationship with the variance-handling measures of a CP. In the real world, these problems are highly nonlinear in nature, so it is difficult to develop a comprehensive mathematical model. In this paper, a rule extraction approach based on combining a hybrid genetic double multi-group cooperative particle swarm optimization algorithm (PSO) and a discrete PSO algorithm (named HGDMCPSO/DPSO) is developed to discover the previously unknown and potentially complicated nonlinear relationship between key parameters and variance-handling measures of CP. These extracted rules can then provide abnormal-variance warnings for medical professionals. Three numerical experiments on the UCI Iris data set, the Wisconsin breast cancer data set and CP variance data from osteosarcoma preoperative chemotherapy are used to validate the proposed method. Compared with previous research, the proposed rule extraction algorithm obtains high prediction accuracy, less computing time and more stability, and is easily comprehended by users; thus it is an effective knowledge extraction tool for CP variance handling. PMID:20703636

Du, Gang; Jiang, Zhibin; Diao, Xiaodi; Yao, Yang

2012-04-01

302

Scalable Integration of Li5FeO4 towards Robust, High-Performance Lithium-Ion Hybrid Capacitors.

Lithium-ion hybrid capacitors have attracted great interest due to their high specific energy relative to conventional electrical double-layer capacitors. Nevertheless, the safety issue still remains a drawback for lithium-ion capacitors in practical operational environments because of the use of metallic lithium. Herein, single-phase Li5FeO4 with an antifluorite structure that acts as an alternative lithium source (instead of metallic lithium) is employed and its potential use for lithium-ion capacitors is verified. Abundant Li+ amounts can be extracted from Li5FeO4 incorporated in the positive electrode and efficiently doped into the negative electrode during the first electrochemical charging. After the first Li+ extraction, Li+ does not return to the Li5FeO4 host structure and is steadily involved in the electrochemical reactions of the negative electrode during subsequent cycling. Various electrochemical and structural analyses support its superior characteristics for use as a promising lithium source. This versatile approach can yield a sufficient Li+-doping efficiency of >90% and improved safety as a result of the removal of metallic lithium from the cell. PMID:25208971

Park, Min-Sik; Lim, Young-Geun; Hwang, Soo Min; Kim, Jung Ho; Kim, Jeom-Soo; Dou, Shi Xue; Cho, Jaephil; Kim, Young-Jun

2014-11-01

303

NSDL National Science Digital Library

The STP MonteCarloEstimation program estimates the area under the curve given by the square root of (1-x^2) between 0 and 1 using the Monte Carlo hit-or-miss method. STP MonteCarloEstimation is part of a suite of Open Source Physics programs that model aspects of Statistical and Thermal Physics (STP). The program is distributed as a ready-to-run (compiled) Java archive. Double-clicking the stp_MonteCarloEstimation.jar file will run the program if Java is installed on your computer.
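The hit-or-miss method the program implements can be sketched in a few lines: scatter uniform random points in the unit square and count the fraction that fall under the curve. This is a minimal illustration of the technique, not the STP program's Java source; the sample count and seed are arbitrary.

```python
# Hit-or-miss Monte Carlo: the fraction of random points in the unit
# square falling under y = sqrt(1 - x^2) estimates the area under the
# curve, a quarter of the unit circle (exact value: pi/4 ~ 0.7854).
import math
import random

def estimate_quarter_circle_area(n_samples, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x = rng.random()                  # uniform in [0, 1)
        y = rng.random()
        if y <= math.sqrt(1.0 - x * x):   # point falls under the curve
            hits += 1
    return hits / n_samples               # hit fraction approximates the area

area = estimate_quarter_circle_area(100_000)
```

The statistical error of the estimate shrinks like 1/sqrt(N), so each extra digit of accuracy costs a hundredfold more samples.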

Gould, Harvey; Tobochnik, Jan; Christian, Wolfgang; Cox, Anne

2009-01-26

304

Inverse Monte Carlo: a unified reconstruction algorithm for SPECT

Inverse Monte Carlo (IMOC) is presented as a unified reconstruction algorithm for Emission Computed Tomography (ECT) providing simultaneous compensation for scatter, attenuation, and the variation of collimator resolution with depth. The technique of inverse Monte Carlo is used to find an inverse solution to the photon transport equation (an integral equation for photon flux from a specified source) for a

Carey E. Floyd; R. E. Coleman; R. J. Jaszczak

1985-01-01

305

Quantum Monte Carlo method for attractive Coulomb potentials

Starting from an exact lower bound on the imaginary-time propagator, we present a path-integral quantum Monte Carlo method that can handle singular attractive potentials. We illustrate the basic ideas of this quantum Monte Carlo algorithm by simulating the ground state of hydrogen and helium.

J. S. Kole; H. De Raedt

2001-01-01

306

Quantum Monte Carlo method for attractive Coulomb potentials

Starting from an exact lower bound on the imaginary-time propagator, we present a path-integral quantum Monte Carlo method that can handle singular attractive potentials. We illustrate the basic ideas of this quantum Monte Carlo algorithm by simulating the ground state of hydrogen and helium.

J. S. Kole; H. de Raedt

2001-01-01

307

Auxiliary Field Quantum Monte Carlo in Continuum Systems

The auxiliary field quantum Monte Carlo method allows Monte Carlo to be performed in any basis. This is accomplished by using the Hubbard-Stratonovich transformation to transform two-body interactions into an integral over one-body interactions. In practice this method has been difficult to use because, while exact, it suffers from a phase problem more severe than the sign problem.

Luke Shulenburger

2005-01-01

308

The INTEGRAL Mass Model - TIMM

The INTEGRAL Mass Model (TIMM) was started in 1995 with the aim of creating a detailed geometrical model of the whole INTEGRAL satellite on a computer. In parallel, a comprehensive Monte Carlo simulation code (called GGOD) has been developed. Together, the mass model and the Monte Carlo code enable the in-flight operation of INTEGRAL to be simulated at the individual-event level.

C. Ferguson; E. J. Barlow; A. J. Bird; A. J. Dean; A. B. Hill; S. E. Shaw; J. B. Stephen; S. Sturner; T. V. Tikkanen; G. Weidenspointner; D. R. Willis

2003-01-01

309

10 Gb/s Reconfigurable Optical Logic Gate Using a Single Hybrid-Integrated SOA-MZI

NASA Astrophysics Data System (ADS)

A novel reconfigurable Boolean device based on a single Mach-Zehnder interferometer with semiconductor optical amplifiers is demonstrated at 10 Gb/s using intensity return-to-zero modulated signals. The experimental results show that the device can be dynamically reconfigured to operate as a logic XOR, AND, OR, and NOT gate using optical switches. By properly adjusting the input powers, an extinction ratio higher than 10 dB may be obtained. The potential of integration of this architecture makes it an interesting approach in photonic computing and optical signal processing.

Martinez, J. M.; Ramos, F.; Martí, J.

310

Monte Carlo approach to turbulence

The behavior of the one-dimensional random-force-driven Burgers equation is investigated in the path integral formalism on a discrete space-time lattice. We show that by means of Monte Carlo methods one may evaluate observables, such as structure functions, as ensemble averages over different field realizations. The regularization of shock solutions to the zero-viscosity limit (Hopf-equation) eventually leads to constraints on lattice parameters required for the stability of the simulations. Insight into the formation of localized structures (shocks) and their dynamics is obtained.

Düben, P; Jansen, K; Mesterhazy, D; Münster, G

2009-01-01

311

Monte Carlo approach to turbulence

The behavior of the one-dimensional random-force-driven Burgers equation is investigated in the path integral formalism on a discrete space-time lattice. We show that by means of Monte Carlo methods one may evaluate observables, such as structure functions, as ensemble averages over different field realizations. The regularization of shock solutions to the zero-viscosity limit (Hopf-equation) eventually leads to constraints on lattice parameters required for the stability of the simulations. Insight into the formation of localized structures (shocks) and their dynamics is obtained.

P. Düben; D. Homeier; K. Jansen; D. Mesterhazy; G. Münster

2009-11-03

312

NSDL National Science Digital Library

At its core, the LEGO® MINDSTORMS® NXT product provides a programmable microprocessor. Students use the NXT processor to simulate an experiment involving thousands of uniformly random points placed within a unit square. Using the underlying geometry of the experimental model, as well as the geometric definition of the constant π (pi), students form an empirical ratio of areas to estimate a numerical value of π. Although typically used for numerical integration of irregular shapes, in this activity students use a Monte Carlo simulation to estimate a common but rather complex analytical form: the numerical value of the most famous irrational number, π.

AMPS GK-12 Program,

313

We present a straightforward method for simultaneously enhancing the electrical conductivity, environmental stability, and photocatalytic properties of graphene films through one-step transfer of CVD graphene and integration by introducing a TiO2/graphene oxide layer. A highly durable and flexible TiO2 layer is successfully used as a supporting layer for graphene transfer instead of the commonly used PMMA. The transferred graphene/TiO2 film is directly used for measuring carrier transport and optoelectronic properties, without extra TiO2 removal and subsequent deposition steps for multifunctional integration into devices, because the thin TiO2 layer is optically transparent and electrically semiconducting. Moreover, the TiO2 layer induces charge screening by electrostatically interacting with the residual oxygen moieties on graphene, which are charge scattering centers, resulting in reduced current hysteresis. Adsorption of water and other chemical molecules onto the graphene surface is also prevented by the passivating TiO2 layer, resulting in long-term environmental stability of the graphene under high temperature and humidity. In addition, the graphene/TiO2 film shows effectively enhanced photocatalytic properties because of the increase in the transport efficiency of the photogenerated electrons due to the decrease in the injection barrier formed at the interface between the F-doped tin oxide and TiO2 layers. PMID:24578338

Jeong, Hee Jin; Kim, Ho Young; Jeong, Hyun; Han, Joong Tark; Jeong, Seung Yol; Baeg, Kang-Jun; Jeong, Mun Seok; Lee, Geon-Woong

2014-05-28

314

Computationally efficient Monte Carlo EM algorithms for generalized linear mixed models

Maximum likelihood estimation in generalized linear mixed models usually involves intractable integrals that may be of high dimension. To reduce the dimensions of the integrals involved in computation, a reduced form of the score equation obtained by exploiting conditional independence and random effects structures can be used. Two Monte Carlo methods, one based on direct Monte Carlo integration and the

Yi-Hau Chen

2006-01-01

315

Hybrid solar-fossil fuel power generation

In this thesis, a literature review of hybrid solar-fossil fuel power generation is first given with an emphasis on system integration and evaluation. Hybrid systems are defined as those which use solar energy and fuel ...

Sheu, Elysia J. (Elysia Ja-Zeng)

2012-01-01

316

Hybrid Interventions in Limb Salvage

Hybrid interventions have become an integral part of our strategy for limb salvage in patients with multilevel arterial occlusive disease. In this article, we describe the commonly used hybrid interventions and review their indications and outcomes. Iliac stenting and femoral endarterectomy are the two most frequently performed procedures in hybrid cases. Short- and long-term outcomes of hybrid interventions are at least comparable to conventional endovascular and surgical revascularization procedures. Hybrid revascularization offers the efficiency and convenience of a single-stage revascularization. PMID:23805341

Huynh, Tam T.T.; Bechara, Carlos F.

2013-01-01

317

NASA Astrophysics Data System (ADS)

This study focused on creating a new tristructural isotropic (TRISO) coated particle fuel performance model and demonstrating the integration of this model into an existing system of neutronics and heat transfer codes, creating a user-friendly option for including fuel performance analysis within system design optimization and system-level trade-off studies. The end product enables both a deeper understanding and better overall system performance of nuclear energy systems limited or greatly impacted by TRISO fuel performance. A thorium-fueled hybrid fusion-fission Laser Inertial Fusion Energy (LIFE) blanket design was used for illustrating the application of this new capability and demonstrated both the importance of integrating fuel performance calculations into mainstream design studies and the impact that this new integrated analysis had on system-level design decisions. A new TRISO fuel performance model named TRIUNE was developed and verified and validated during this work with a novel methodology established for simulating the actual lifetime of a TRISO particle during repeated passes through a pebble bed. In addition, integrated self-consistent calculations were performed for neutronics depletion analysis, heat transfer calculations, and then fuel performance modeling for a full parametric study that encompassed over 80 different design options that went through all three phases of analysis. Lastly, side studies were performed that included a comparison of thorium and depleted uranium (DU) LIFE blankets as well as some uncertainty quantification work to help guide future experimental work by assessing what material properties in TRISO fuel performance modeling are most in need of improvement. A recommended thorium-fueled hybrid LIFE engine design was identified with an initial fuel load of 20MT of thorium, 15% TRISO packing within the graphite fuel pebbles, and a 20cm neutron multiplier layer with beryllium pebbles in flibe molten salt coolant. 
It operated at a system power level of 2000 MWth, took about 3.5 years to reach full plateau power, and was capable of an End of Plateau burnup of 38.7 %FIMA if considering just the neutronic constraints in the system design; however, fuel performance constraints led to a maximum credible burnup of 12.1 %FIMA due to a combination of internal gas pressure and irradiation effects on the TRISO materials (especially PyC) leading to SiC pressure vessel failures. The optimal neutron spectrum for the thorium-fueled blanket options evaluated seemed to favor a hard spectrum (low but non-zero neutron multiplier thicknesses and high TRISO packing fractions) in terms of neutronic performance but the fuel performance constraints demonstrated that a significantly softer spectrum would be needed to decrease the rate of accumulation of fast neutron fluence in order to improve the maximum credible burnup the system could achieve.

Powers, Jeffrey J.

2011-12-01

318

NASA Astrophysics Data System (ADS)

The development of sensor technologies and the increase in user requirements have resulted in many different approaches for efficient building model generation. Three-dimensional building models are important in various applications, such as disaster management and urban planning. Despite this importance, generation of these models lacks economical and reliable techniques which take advantage of the available multi-sensory data from single and multiple platforms. Therefore, this research develops a framework for fully-automated building model generation by integrating data-driven and model-driven methods as well as exploiting the advantages of images and LiDAR datasets. The building model generation starts by employing LiDAR data for building detection and approximate boundary determination. The generated building boundaries are then integrated into a model-based image processing strategy, because LiDAR derived planes show irregular boundaries due to the nature of LiDAR point acquisition. The focus of the research is generating models for the buildings with right-angled-corners, which can be described with a collection of rectangles (e.g., L-shape, T-shape, U-shape, gable roofs, and more complex building shapes which are combinations of the aforementioned shapes), under the assumption that the majority of the buildings in urban areas belong to this category. Therefore, by applying the Minimum Bounding Rectangle (MBR) algorithm recursively, the LiDAR boundaries are decomposed into sets of rectangles for further processing. At the same time the quality of the MBRs are examined to verify that the buildings, from which the boundaries are generated, are buildings with right-angled-corners. These rectangles are preliminary model primitives. The parameters that define the model primitives are adjusted using detected edges in the imagery through the least-squares adjustment procedure, i.e., model-based image fitting. 
The level of detail in the final Digital Building Model is based on the number of recursions during the MBR processing, which in turn are determined by the LiDAR point density. The model-based image fitting refines the search space and resolves the matching ambiguities in multiple images, which results in higher quality boundaries. This research thus develops an approach which not only automates the building model generation, but also improves the accuracy of the building model itself.

Kwak, Eunju

319

An integrated Sequential Injection (SI)/Flow Injection (FI) system furnished with a miniaturized LED-based fluorometric detector is presented in this work for expedient bioaccessibility tests of orthophosphate in soils. Equipped with a conical microcolumn containing 50 mg of soil, the hybrid flow system was used for on-line dynamic leaching and real-time quantification of pools of mobilizable orthophosphate using a bi-directional syringe pump and a multiposition valve. The flexibility of the flow manifold was harnessed to explore both bi-directional and uni-directional flow extraction modes, with the added degree of freedom of on-line dilution of extracts whenever needed. Bioaccessible orthophosphate was split into three fractions: the NH4Cl fraction containing labile exchangeable phosphates, the alkaline fraction with Fe- and Al-bound phosphates, and the acidic fraction containing Ca-bound phosphates. The prevailing molybdenum blue photometric detection method is replaced by spectrofluorometric detection based on ion-pair formation between the phosphomolybdate heteropolyacid and rhodamine B, with subsequent quenching of the dye fluorescence. The dedicated optoelectronic detector was integrated in a secondary FI manifold and operated according to the fluorometric paired emitter-detector diode (FPEDD) principle, involving two light emitting diodes as fluorescence inductors and one as detector of LED-induced fluorescence. Demonstrated with the analysis of a standard reference material (SRM 2711) and a real agricultural soil, the developed FI/SI fractionation system with FPEDD detection is proven reliable against the standard molybdenum blue method (p > 0.05), and useful for investigating the leaching kinetics of orthophosphate in bioaccessibility tests through in-line recording of the extraction profiles. PMID:25435227

Fiedoruk, Marta; Cocovi-Solberg, David J; Tymecki, Lukasz; Koncki, Robert; Miró, Manuel

2015-02-01

320

The Markov Chain Monte Carlo Method

Chapter 12: The Markov Chain Monte Carlo Method: An Approach to Approximate Counting and Integration, by Mark Jerrum and Alistair Sinclair. The "Markov chain Monte Carlo" method, which originated in the Monte Carlo algorithms of statistical physics, has been applied to a variety of problems in approximate counting, enumeration and optimization.

Lee, Stephen

321

Quantum Monte Carlo Simulations of Solid 4He

Quantum Monte Carlo Simulations of Solid 4He, by P.A. Whitlock and S.A. Vitiello. Topics include Monte Carlo calculations at zero temperature; diffusion Monte Carlo; and finally, the finite-temperature path integral Monte Carlo method. A brief introduction to the technique will be given, followed by a discussion

Whitlock, Paula

322

GPU-based Monte Carlo simulation for light propagation in complex heterogeneous tissues

Excerpts from the reference list: "…animals," Curr. Opin. Biotechnol. 16(1), 79–88 (2005); 7. A. P. Gibson, J. C. Hebden, and S. R. Arridge; a Monte Carlo code for photon migration through complex heterogeneous media including the adult human head; 14. L. V. Wang and S. L. Jacques, "Hybrid model of Monte Carlo simulation and diffusion theory".

Tian, Jie

323

High-coherence semiconductor lasers based on integral high-Q resonators in hybrid Si/III-V platforms

The semiconductor laser (SCL) is the principal light source powering the worldwide optical fiber network. The ever-increasing demand for data is causing the network to migrate to phase-coherent modulation formats, which place strict requirements on the temporal coherence of the light source that can no longer be met by current SCLs. This failure can be traced directly to the canonical laser design, in which photons are both generated and stored in the same, optically lossy, III-V material. This leads to an excessive amount of noisy spontaneous emission commingling with the laser mode, thereby degrading its coherence. High losses also decrease the amount of stored optical energy in the laser cavity, magnifying the effect of each individual spontaneous emission event on the phase of the laser field. Here, we propose a new design paradigm for the SCL. The keys to this paradigm are the deliberate removal of stored optical energy from the lossy III-V material by concentrating it in a passive, low-loss material and the incorporation of a very high-Q resonator as an integral (i.e., not externally coupled) part of the laser cavity. We demonstrate an SCL with a spectral linewidth of 18 kHz in the telecom band around 1.55 μm, achieved using a single-mode silicon resonator with Q of 10^6. PMID:24516134

Santis, Christos Theodoros; Steger, Scott T.; Vilenchik, Yaakov; Vasilyev, Arseny; Yariv, Amnon

2014-01-01

324

Quasi-Monte Carlo methods for lattice systems: A first look

NASA Astrophysics Data System (ADS)

We investigate the applicability of quasi-Monte Carlo methods to Euclidean lattice systems for quantum mechanics in order to improve the asymptotic error behavior of observables for such theories. In most cases the error of an observable calculated by averaging over random observations generated from an ordinary Markov chain Monte Carlo simulation behaves like N^(-1/2), where N is the number of observations. By means of quasi-Monte Carlo methods it is possible to improve this behavior for certain problems to N^(-1), or even further if the problems are regular enough. We adapted and applied this approach to simple systems like the quantum harmonic and anharmonic oscillator and verified an improved error scaling. Catalogue identifier: AERJ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERJ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: GNU General Public Licence version 3 No. of lines in distributed program, including test data, etc.: 67759 No. of bytes in distributed program, including test data, etc.: 2165365 Distribution format: tar.gz Programming language: C and C++. Computer: PC. Operating system: Tested on GNU/Linux, should be portable to other operating systems with minimal efforts. Has the code been vectorized or parallelized?: No RAM: The memory usage directly scales with the number of samples and dimensions: Bytes used = “number of samples” × “number of dimensions” × 8 Bytes (double precision). Classification: 4.13, 11.5, 23. External routines: FFTW 3 library (http://www.fftw.org) Nature of problem: Certain physical models formulated as a quantum field theory through the Feynman path integral, such as quantum chromodynamics, require a non-perturbative treatment of the path integral. The only known approach that achieves this is the lattice regularization. In this formulation the path integral is discretized to a finite, but very high dimensional integral. 
So far only Monte Carlo, and especially Markov chain Monte Carlo methods like the Metropolis or the hybrid Monte Carlo algorithm have been used to calculate approximate solutions of the path integral. These algorithms often lead to the undesired effect of autocorrelation in the samples of observables and suffer in any case from the slow asymptotic error behavior proportional to N^(-1/2), if N is the number of samples. Solution method: This program applies the quasi-Monte Carlo approach and the reweighting technique (respectively the weighted uniform sampling method) to generate uncorrelated samples of observables of the anharmonic oscillator with an improved asymptotic error behavior. Unusual features: The application of the quasi-Monte Carlo approach is quite revolutionary in the field of lattice field theories. Running time: The running time depends directly on the number of samples N and dimensions d. On modern computers a run with up to N=2^16=65536 (including 9 replica runs) and d=100 should not take much longer than one minute.
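The error-scaling contrast described above (roughly N^(-1/2) for plain Monte Carlo versus close to N^(-1) for quasi-Monte Carlo on regular problems) can be illustrated with a small sketch. This is not the distributed program; it is a toy one-dimensional integral, the integral of x² over [0, 1] (exact value 1/3), using a scrambled Sobol sequence from SciPy as the quasi-random source.

```python
# Toy comparison of plain Monte Carlo vs. quasi-Monte Carlo integration.
# Integrand: f(x) = x^2 on [0, 1]; exact integral = 1/3.
import numpy as np
from scipy.stats import qmc

n = 4096                      # a power of two, as Sobol sequences prefer
exact = 1.0 / 3.0

# Plain Monte Carlo: pseudo-random points, error shrinks like N^(-1/2).
rng = np.random.default_rng(0)
mc_est = np.mean(rng.random(n) ** 2)

# Quasi-Monte Carlo: scrambled Sobol points, error close to N^(-1) here.
sobol = qmc.Sobol(d=1, scramble=True, seed=0)
qmc_est = np.mean(sobol.random(n) ** 2)

mc_err = abs(mc_est - exact)
qmc_err = abs(qmc_est - exact)
print(f"MC error:  {mc_err:.2e}")
print(f"QMC error: {qmc_err:.2e}")
```

For a smooth integrand like this, the quasi-Monte Carlo estimate is typically orders of magnitude closer to the exact value at the same sample count.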

Jansen, K.; Leovey, H.; Ammon, A.; Griewank, A.; Müller-Preussker, M.

2014-03-01

325

Monte Carlo methods Monte Carlo Principle and MCMC

Monte Carlo Methods: Monte Carlo Principle and MCMC. Lecture slides by A. Doucet, MLSS, Carcans, Sept. 2011 (91 slides). Overview of the lectures: 1. Monte Carlo Principles; 2. Markov

Doucet, Arnaud

326

The development of a roof spray system for passive/hybrid building cooling is described. Progress to date in defining and evaluating the issues and constraints relevant to spray roof cooling is described in the context of Butler's passive/hybrid manufactured buildings development program. (MHR)

Huffman, J. B.; Lindsey, L. L.; Snyder, M. K.

1981-03-10

327

Enhancements in Continuous-Energy Monte Carlo Capabilities in SCALE

Monte Carlo tools in SCALE are commonly used in criticality safety calculations as well as sensitivity and uncertainty analysis, depletion, and criticality alarm system analyses. Recent improvements in the continuous-energy data generated by the AMPX code system and significant advancements in the continuous-energy treatment in the KENO Monte Carlo eigenvalue codes facilitate the use of SCALE Monte Carlo codes to model geometrically complex systems with enhanced solution fidelity. The addition of continuous-energy treatment to the SCALE Monaco code, which can be used with automatic variance reduction in the hybrid MAVRIC sequence, provides significant enhancements, especially for criticality alarm system modeling. This paper describes some of the advancements in continuous-energy Monte Carlo codes within the SCALE code system.

Bekar, Kursat B [ORNL]; Celik, Cihangir [ORNL]; Wiarda, Dorothea [ORNL]; Peplow, Douglas E. [ORNL]; Rearden, Bradley T [ORNL]; Dunn, Michael E [ORNL]

2013-01-01

328

Hybrid Genetic Algorithms: A Review

Hybrid genetic algorithms have received significant interest in recent years and are being increasingly used to solve real-world problems. A genetic algorithm is able to incorporate other techniques within its framework to produce a hybrid that reaps the best from the combination. In this paper, different forms of integration between genetic algorithms and other search and optimization techniques are reviewed.

Tarek A. El-mihoub; Adrian A. Hopgood; Lars Nolle; Alan Battersby

2006-01-01

329

Hybrid materials refer to any of a class of materials in which organic and inorganic components are intimately mixed. This, however, does not mean that simple physical mixtures of organic and inorganic compounds are hybrid materials: the rider is that the mixing be at the nanometric scale. Hybrids can either be homogeneous systems of miscible organic and inorganic components or they

unknown authors

330

NASA Astrophysics Data System (ADS)

Gasification has been used in industry on a relatively limited scale for many years, but it is emerging as the premier unit operation in the energy and chemical industries. The switch from expensive and insecure petroleum to solid hydrocarbon sources (coal and biomass) is occurring due to the vast amount of domestic solid resources, national security and global warming issues. Gasification (or partial oxidation) is a vital component of "clean coal" technology. Sulfur and nitrogen emissions can be reduced, overall energy efficiency is increased and carbon dioxide recovery and sequestration are facilitated. Gasification units in an electric power generation plant produce a fuel gas for driving combustion turbines. Gasification units in a chemical plant generate synthesis gas, which can be used to produce a wide spectrum of chemical products. Future plants are predicted to be hybrid power/chemical plants with gasification as the key unit operation. The coupling of an Integrated Gasification Combined Cycle (IGCC) with a methanol plant can handle swings in power demand by diverting hydrogen gas from a combustion turbine and synthesis gas from the gasifier to a methanol plant for the production of an easily-stored, hydrogen-consuming liquid product. An additional control degree of freedom is provided with this hybrid plant, fundamentally improving the controllability of the process. The idea is to base-load the gasifier and use the more responsive gas-phase units to handle disturbances. During the summer days, power demand can fluctuate up to 50% over a 12-hour period. The winter provides a different problem where spikes of power demand can go up 15% within the hour. The following dissertation develops a hybrid IGCC / methanol plant model, validates the steady-state results with a National Energy Technical Laboratory study, and tests a proposed control structure to handle these significant disturbances. 
All modeling was performed in the widely used chemical process simulators Aspen Plus and Aspen Dynamics. This dissertation first presents a simple approximate method for achieving the objective of having a gasifier model that can be exported into Aspen Dynamics. Limitations in the software dealing with solids make this a necessary task. The basic idea is to use a high-molecular-weight hydrocarbon that is present in the Aspen library as a pseudo fuel. For many plantwide dynamic studies, a rigorous high-fidelity dynamic model of the gasifier is not needed because its dynamics are very fast and the gasifier gas volume is a relatively small fraction of the total volume of the entire plant. The proposed approximate model captures the essential macro-scale thermal, flow, composition and pressure dynamics. This paper does not attempt to optimize the design or control of gasifiers, but merely presents an idea of how to dynamically simulate coal gasification in an approximate way. This dissertation also presents models of the downstream units of a typical IGCC. Dynamic simulations of the H2S absorption/stripping unit, Water-gas Shift (WGS) reactors, and CO2 absorption/stripping unit are essential for the development of stable and agile plantwide control structures for this hybrid power/chemical plant. Due to the high pressure of the system, hydrogen sulfide is removed by means of physical absorption. SELEXOL™ (a mixture of the dimethyl ethers of polyethylene glycol) is used to achieve a gas purity of less than 5 ppm H2S. This desulfurized synthesis gas is sent to two water-gas shift reactors that convert a total of 99% of carbon monoxide to hydrogen. Physical absorption of carbon dioxide with Selexol produces a hydrogen-rich stream (90 mol% H2) to be fed into combustion turbines or to a methanol plant. Steady-state economic designs and plantwide control structures are developed in this dissertation. 
A steady-state economic design, control structure, and successful turndown of the methanol plant are shown in this dissertation. The plantwide control structure and interaction among units are also shown. The methanol plant was si

Robinson, Patrick J.

331

Monte Python: Monte Carlo code for CLASS in Python

NASA Astrophysics Data System (ADS)

Monte Python is a parameter inference code which combines the flexibility of the python language and the robustness of the cosmological code CLASS into a simple and easy to manipulate Monte Carlo Markov Chain code.

Audren, Benjamin; Lesgourgues, Julien; Benabed, Karim; Prunet, Simon

2013-07-01

332

Improved geometry representations for Monte Carlo radiation transport.

ITS (Integrated Tiger Series) permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. ITS allows designers to predict product performance in radiation environments.

Martin, Matthew Ryan (Cornell University)

2004-08-01

333

NASA Astrophysics Data System (ADS)

The challenges of mitigating climate change and generating sustainable renewable energy are inseparable and can be addressed by synergistic integration of geothermal energy production with secure geologic CO2 storage (GCS). Pressure buildup can be a limiting factor for GCS and geothermal reservoir operations, due to a number of concerns, including the potential for CO2 leakage and induced seismicity, while pressure depletion can limit geothermal energy recovery. Water-use demands can also be a limiting factor for GCS and geothermal operations, particularly where water resources are already scarce. Economic optimization of geothermal-GCS involves trade-offs of various benefits and risks, along with their associated costs: (1) heat extraction per ton of delivered CO2, (2) permanent CO2 storage, (3) energy recovery per unit well (and working-fluid recirculation) costs, and (4) economic lifetime of a project. We analyze a hybrid, multi-stage approach using both formation brine and injected CO2 as working fluids to attempt to optimize the benefits of sustainable energy production and permanent CO2 storage, while conserving water resources and minimizing environmental risks. We consider a range of well-field patterns and operational schemes. Initially, the fluid production is entirely brine. After CO2 breakthrough, the fraction of CO2 in production, which is called the CO2 "cut", increases with time. Thus, brine is the predominant working fluid for early time, with the contribution of CO2 to heat extraction increasing with CO2 cut (and time). We find that smaller well spacing between CO2 injectors and producers favors earlier CO2 breakthrough and a more rapid rise in CO2 cut, which increases the contribution of recirculated CO2, thereby improving the heat extraction per ton of delivered CO2. 
On the other hand, larger well spacing increases permanent CO2 storage and energy production per unit well cost, while reducing the thermal drawdown rate, which extends the economic lifetime of a project. For the range of cases considered, we were never able to eliminate the co-production of brine; thus, brine management is likely to be important for reservoir operations, whether or not brine is considered as a candidate working fluid. Future work will address site-specific reservoir conditions and infrastructure factors, such as proximity to potential CO2 sources. This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

Buscheck, T. A.; Chen, M.; Lu, C.; Sun, Y.; Hao, Y.; Elliot, T. R.; Celia, M. A.; Bielicki, J. M.

2012-12-01

334

Public Infrastructure for Monte Carlo Simulation : publicMCatBATAN

NASA Astrophysics Data System (ADS)

The first cluster-based public computing facility for Monte Carlo simulation in Indonesia is introduced. The system has been developed to enable the public to perform Monte Carlo simulations on a parallel computer through an integrated and user-friendly dynamic web interface. The beta version, so-called publicMC@BATAN, has been released and implemented for internal users at the National Nuclear Energy Agency (BATAN). In this paper the concept and architecture of publicMC@BATAN are presented.

Waskita, A. A.; Prasetyo, N. A.; Akbar, Z.; Handoko, L. T.

2010-06-01

335

In this chapter we discuss Monte Carlo sampling methods for solving large scale stochastic programming problems. We concentrate on the “exterior” approach where a random sample is generated outside of an optimization procedure, and then the constructed, so-called sample average approximation (SAA), problem is solved by an appropriate deterministic algorithm. We study statistical properties of the obtained SAA estimators. The

Alexander Shapiro

2003-01-01

336

ERIC Educational Resources Information Center

Monte Carlo methods are used to simulate activities in baseball such as a team's "hot streak" and a hitter's "batting slump." Student participation in such simulations is viewed as a useful method of giving pupils a better understanding of the probability concepts involved. (MP)
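A classroom simulation of the kind described above might look like the following sketch. The numbers are hypothetical (a .300 hitter, a "hot streak" defined as five hits in five consecutive at-bats, over a 500 at-bat season); they are illustrative choices, not taken from the article.

```python
# Monte Carlo estimate of how often a .300 hitter records a hot streak of
# 5 consecutive hits at some point in a 500 at-bat season.
import numpy as np

def hot_streak_prob(avg=0.300, at_bats=500, streak=5, trials=20_000, seed=0):
    rng = np.random.default_rng(seed)
    # Each row is one simulated season: True marks a hit.
    hits = rng.random((trials, at_bats)) < avg
    # Slide a window of length `streak` over each season; a window that is
    # all True is a hot streak.
    windows = np.lib.stride_tricks.sliding_window_view(hits, streak, axis=1)
    has_streak = windows.all(axis=2).any(axis=1)
    return has_streak.mean()

p = hot_streak_prob()
print(f"Estimated probability of a 5-hit streak: {p:.3f}")
```

Students can vary the batting average or streak length and watch how sharply the probability responds, which makes the underlying run-length probabilities concrete.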

Houser, Larry L.

1981-01-01

337

Experiments with Monte Carlo Othello

In this paper, we report on our experiments with using Monte Carlo simulation (specifically the UCT algorithm) as the basis for an Othello playing program. Monte Carlo methods have been used for other games in the past, most recently and notably in successful Go playing programs. We show that Monte Carlo-based players have potential for Othello, and that evolutionary algorithms

Philip Hingston; Martin Masek

2007-01-01

338

Quantum Monte Carlo Helsinki 2011

Quantum Monte Carlo, Helsinki 2011. Marius Lewerenz, MSME/CT, UMR 8208 CNRS, Université Paris-Est. From the table of contents: 1.2 What is a Monte Carlo method? 1.3 What are Monte Carlo methods good for?

Boyer, Edmond

339

Monte-Carlo Tests Diplomarbeit

Monte-Carlo Tests. Diplomarbeit (diploma thesis) by Wiebke Werft, Mathematisches Institut der Heinrich… From the table of contents: Sufficiency and completeness (Suffizienz und Vollständigkeit); 2. Monte-Carlo tests; 2.1 Formulation of the test problem; 2.2 Definition of the Monte-Carlo test.

340

Abstract: Introducing the original ideas of using Monte-Carlo simulation in computer Go, and adding new ideas to the pure Monte-Carlo approach for computer Go: progressive pruning, the all-moves-as-first heuristic, temperature, simulated annealing, and depth-2 tree search. Conclusion: with the ever-increasing power of computers, we can add more knowledge to the Monte-Carlo approach to get

B. Bouzy; B. Helmstetter; Tsan-sheng Hsu

341

Monte Carlo photon benchmark problems

Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. These are compared to both the COG Monte Carlo computer code and either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems.

Whalen, D.J.; Hollowell, D.E.; Hendricks, J.S.

1991-01-01

342

Monte Carlo photon benchmark problems

Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. These are compared to both the COG Monte Carlo computer code and either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems. 8 refs., 5 figs.

Whalen, D.J. (Brigham Young Univ., Provo, UT (USA)); Hollowell, D.E.; Hendricks, J.S. (Los Alamos National Lab., NM (USA))

1990-01-01

343

Background: The aim of this study was to evaluate the value of CA15-3 for the diagnostic integration of molecular imaging findings obtained with hybrid positron emission tomography/computed tomography (PET/CT) technology. Methods: We retrospectively selected 45 patients with a median age of 60 years (range 39–85 years) and a previous history of breast cancer (BC) who had already been treated with surgery and other treatments. Three measurements of CA15-3 were collected within 1 year before the PET/CT examination: at 6–9 months, 3–6 months, and 0–3 months before PET/CT. Prolonged clinical outcome or imaging follow-up was used to define disease relapse. An increase in tumor marker value was compared with PET/CT findings and disease relapse. Sensitivity and specificity for both tests were calculated with respect to clinical outcome. Results: Disease relapse was detected in 16 out of 45 BC patients. CA15-3 and PET/CT showed 75% sensitivity, with a specificity of 76% for CA15-3 and 79% for PET/CT. Serum CA15-3 expression levels were significantly higher in BC patients with multiple metastatic sites with hepatic involvement. Analysis of serial CA15-3 serum levels showed that an increase in CA15-3 3–6 months before PET/CT could identify BC patients at risk for relapse (AUC = 0.81). Moreover, patients receiving anti-hormonal or chemotherapy medications with negative PET/CT and positive CA15-3 relapsed after a median time of 158 days, compared to patients who were negative for both tests and who were free from disease for at least 1 year. Conclusions: Our results showed that serial increases in CA15-3 can be used to predict positive PET/CT results in BC patients during follow-up. Increased levels of CA15-3 may be considered an early warning sign in patients needing accurate molecular imaging investigations, as they are at higher risk of recurrence. In cases of elevated levels, multiple lesions or liver involvement may exist. 
Also, patients receiving chemotherapeutic or anti-hormonal treatment who have negative PET/CT scans and increased CA15-3 serum levels should be considered at risk for relapse, because the CA15-3-linked biochemical signal of the presence of a tumor can predict positive metabolic imaging. PMID:24886519

2014-01-01

344

NASA Astrophysics Data System (ADS)

Every neutrino experiment requires a Monte Carlo event generator for various purposes. Historically, each series of experiments developed its own code tuned to its needs. Modern experiments would benefit from a universal code (e.g. PYTHIA) which would allow more direct comparison between experiments. GENIE attempts to be that code. This paper compares the most commonly used codes and provides some details of GENIE.

Dytman, Steven

2011-10-01

345

ERIC Educational Resources Information Center

Online course enrollment has increased dramatically over the past few years. The authors cite the reasons for this rapid growth and the opportunities open for enhancing teaching/learning techniques such as video conferencing and hybrid class combinations. The authors outlined an example of an accelerated learning, eight-class session course…

Beckwith, E. George; Cunniff, Daniel T.

2009-01-01

346

NASA Astrophysics Data System (ADS)

This paper is concerned with the development of efficient algorithms for propagating parametric uncertainty within the context of the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) approach to the analysis of complex vibro-acoustic systems. This approach models the system as a combination of SEA subsystems and FE components; it is assumed that the FE components have fully deterministic properties, while the SEA subsystems have a high degree of randomness. The method has been recently generalised by allowing the FE components to possess parametric uncertainty, leading to two ensembles of uncertainty: a non-parametric one (SEA subsystems) and a parametric one (FE components). The SEA subsystems ensemble is dealt with analytically, while the effect of the additional FE components ensemble can be dealt with by Monte Carlo Simulations. However, this approach can be computationally intensive when applied to complex engineering systems having many uncertain parameters. Two different strategies are proposed: (i) the combination of the hybrid FE/SEA method with the First Order Reliability Method which allows the probability of the non-parametric ensemble average of a response variable exceeding a barrier to be calculated and (ii) the combination of the hybrid FE/SEA method with Laplace's method which allows the evaluation of the probability of a response variable exceeding a limit value. The proposed approaches are illustrated using two built-up plate systems with uncertain properties and the results are validated against direct integration, Monte Carlo simulations of the FE and of the hybrid FE/SEA models.

Cicirello, Alice; Langley, Robin S.

2014-03-01

347

Monte Carlo event generators are essential components of almost all experimental analyses and are also widely used by theorists and experiments to make predictions and preparations for future experiments. They are all too often used as "black boxes", without sufficient consideration of their component models or their reliability. In this set of three lectures we hope to open the box and explain the physical bases behind the models they use. We focus primarily on the general features of parton showers, hadronization and underlying event generation.

Michael H. Seymour; Marilyn Marx

2013-04-24

348

Monte Carlo fluorescence microtomography

NASA Astrophysics Data System (ADS)

Fluorescence microscopy allows real-time monitoring of optical molecular probes for disease characterization, drug development, and tissue regeneration. However, when a biological sample is thicker than 1 mm, intense scattering of light would significantly degrade the spatial resolution of fluorescence microscopy. In this paper, we develop a fluorescence microtomography technique that utilizes the Monte Carlo method to image fluorescence reporters in thick biological samples. This approach is based on an l0-regularized tomography model and provides an excellent solution. Our studies on biomimetic tissue scaffolds have demonstrated that the proposed approach is capable of localizing and quantifying the distribution of optical molecular probe accurately and reliably.

Cong, Alexander X.; Hofmann, Matthias C.; Cong, Wenxiang; Xu, Yong; Wang, Ge

2011-07-01

349

Multicanonical Multigrid Monte Carlo

To further improve the performance of Monte Carlo simulations of first-order phase transitions we propose to combine the multicanonical approach with multigrid techniques. We report tests of this proposition for the $d$-dimensional $\Phi^4$ field theory in two different situations. First, we study quantum tunneling for $d = 1$ in the continuum limit, and second, we investigate first-order phase transitions for $d = 2$ in the infinite volume limit. Compared with standard multicanonical simulations we obtain improvement factors of several, and of about one order of magnitude, respectively.

W. Janke; T. Sauer

1993-05-20

350

Exact pseudofermion action for Monte Carlo simulation of domain-wall fermion

NASA Astrophysics Data System (ADS)

We present an exact pseudofermion action for hybrid Monte Carlo simulation (HMC) of one-flavor domain-wall fermion (DWF), with the effective 4-dimensional Dirac operator equal to the optimal rational approximation of the overlap-Dirac operator with kernel H = cH_w(1 + d γ_5 H_w)^(-1), where c and d are constants. Using this exact pseudofermion action, we perform HMC of one-flavor QCD, and compare its characteristics with the widely used rational hybrid Monte Carlo algorithm (RHMC). Moreover, to demonstrate the practicality of the exact one-flavor algorithm (EOFA), we perform the first dynamical simulation of the (1 + 1)-flavors QCD with DWF.

Chen, Yu-Chih; Chiu, Ting-Wai

2014-11-01

351

Exact ground state Monte Carlo method for Bosons without importance sampling

Generally "exact" quantum Monte Carlo computations for the ground state of many bosons make use of importance sampling. The importance sampling is based either on a guiding function or on an initial variational wave function. Here we investigate the need for importance sampling in the case of path integral ground state (PIGS) Monte Carlo. PIGS is based on a discrete

M. Rossi; M. Nava; L. Reatto; D. E. Galli

2009-01-01

352

Monte Carlo and Quasi-Monte Carlo Methods 2008

Monte Carlo and Quasi-Monte Carlo Methods 2008, edited by Pierre L'Ecuyer (lecuyer@iro.umontreal.ca) and Art B. Owen (Department of Statistics, Stanford University, Sequoia Hall, Stanford, CA 94305, USA; owen@stanford.edu). ISBN 978-3-642-04106-8, DOI 10.1007/978-3-642-04107-5, e-ISBN 978

L'Ecuyer, Pierre

353

Hybrid inflation along waterfall trajectories

By performing a Bayesian Markov-chain Monte-Carlo analysis, it is shown that a large number of e-folds (more than 60) are generated classically during the waterfall after hybrid inflation in a large part of the parameter space of the model. As a result, the observable perturbation modes leave the Hubble radius during waterfall inflation. The power spectrum of adiabatic perturbations is red, possibly in agreement with CMB constraints. Particular attention has been given to studying only the regions for which quantum backreactions do not affect the classical dynamics. Implications concerning preheating and the absence of topological defects in our universe are discussed.

Clesse, Sebastien

2010-01-01

354

Monte Carlo Generation of Bohmian Trajectories

We report on a Monte Carlo method that generates one-dimensional trajectories for Bohm's formulation of quantum mechanics that doesn't involve differentiation or integration of any equations of motion. At each time, t = nδt (n = 1, 2, 3, ...), N particle positions are randomly sampled from the quantum probability density. Trajectories are built from the sorted N sampled positions at each time. These trajectories become the exact Bohm solutions in the limits N → ∞ and δt → 0. Higher-dimensional problems can be solved by this method for separable wave functions. Several examples are given, including the two-slit experiment.
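The sample-then-sort construction described in the abstract can be sketched as follows. This is an illustrative toy, not the authors' code: it assumes a free Gaussian wave packet whose probability density stays Gaussian with a spreading width σ(t), samples N positions from that density at each time step, and links equal sorted ranks across time steps to form trajectories.

```python
# Sketch of sample-sort trajectory generation for a spreading Gaussian packet.
import numpy as np

rng = np.random.default_rng(1)
N, steps, dt = 200, 50, 0.05   # particles, time steps, time increment
sigma0 = 1.0                   # initial packet width (hypothetical units)

traj = np.empty((steps, N))
for n in range(steps):
    t = n * dt
    sigma_t = sigma0 * np.sqrt(1.0 + t * t)   # assumed spreading law
    # Sample N positions from |psi|^2 (a Gaussian here), then sort them;
    # the k-th sorted sample at each time forms the k-th trajectory.
    traj[n] = np.sort(rng.normal(0.0, sigma_t, size=N))

print(traj.shape)  # (time steps, trajectories)
```

A notable property falls out of the construction: because ranks are linked, the trajectories never cross, mirroring the non-crossing property of exact Bohmian trajectories in one dimension.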

T. M. Coffey; R. E. Wyatt; W. C. Schieve

2008-07-01

355

Exact retrospective Monte Carlo computation of arithmetic average Asian options

Taking advantage of the recent literature on exact simulation algorithms (Beskos, Papaspiliopoulos and Roberts) and unbiased estimation of the expectation of certain functional integrals (Wagner, Beskos et al. and Fearnhead et al.), we apply an exact-simulation-based technique for pricing continuous arithmetic average Asian options in the Black and Scholes framework. Unlike existing Monte Carlo methods, we are no

Benjamin Jourdain; Mohamed Sbai

2007-01-01

356

Monte Carlo and Quasi-Monte Carlo algorithms for the Barker-Ferry equation with low complexity

Monte Carlo and Quasi-Monte Carlo algorithms for the Barker-Ferry equation with low complexity. The quasi-Monte Carlo (QMC) solutions obtained with quasi-random numbers (QRNs) are compared with the Monte Carlo (MC) solutions; the underlying series converges [3] and the solution can be evaluated by an MC estimator.

Whitlock, Paula

357

Hybrid Systems: From Verification to Falsification

We propose HyDICE, Hybrid DIscrete Continuous Exploration, a multi-layered approach for hybrid-system testing that integrates continuous sampling-based robot motion planning with discrete searching. The discrete search uses the discrete transitions of the hybrid system and coarse-grained decompositions of the continuous state spaces or related projections to guide the motion planner during the search for witness trajectories. Experiments presented in

Erion Plaku; Lydia E. Kavraki; Moshe Y. Vardi

2007-01-01

358

Practical Markov Chain Monte Carlo

Markov chain Monte Carlo using the Metropolis-Hastings algorithm is a general method for the simulation of stochastic processes having probability densities known up to a constant of proportionality. Despite recent advances in its theory, the practice has remained controversial. This article makes the case for basing all inference on one long run of the Markov chain and estimating the Monte
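The recipe the abstract describes, simulating a density known only up to a normalizing constant and basing inference on one long run, can be sketched with a random-walk Metropolis-Hastings chain. The Gaussian target and step size below are arbitrary illustrative choices, not from the article:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Unnormalized log-density: the normalizing constant is never needed,
    # since Metropolis-Hastings only uses ratios of the target density.
    return -0.5 * x**2

def metropolis_hastings(n_samples=50000, step=1.0, x0=0.0):
    x = x0
    chain = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + rng.normal(0.0, step)   # symmetric random-walk proposal
        # Accept with probability min(1, pi(prop) / pi(x)).
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        chain[i] = x
    return chain

# One long run; discard an initial burn-in segment before estimating.
chain = metropolis_hastings()
```

Averages over the post-burn-in segment of the single long chain are the Monte Carlo estimates the article advocates.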

Charles J. Geyer

1992-01-01

359

MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

Marcus, Ryan C. [Los Alamos National Laboratory

2012-07-25

360

Reflexive Monte-Carlo search uses the Monte-Carlo search of a given level to improve the search of the level above. We describe the application to Morpion Solitaire. For the non-touching version, reflexive Monte-Carlo search breaks the current record and establishes a new record of 78 moves.
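The level-based idea can be sketched on a toy problem: a level-1 search evaluates each candidate move with a level-0 random playout and follows the best sequence found so far. The bit-string objective and the memorization of the best playout are illustrative assumptions, not Morpion Solitaire itself:

```python
import random

random.seed(6)
N_MOVES = 10

def score(moves):
    # Toy objective: interpret the move sequence as a binary number.
    return sum(bit << (N_MOVES - 1 - i) for i, bit in enumerate(moves))

def playout(moves):
    # Level-0 search: finish the sequence with uniformly random moves.
    moves = moves + [random.randint(0, 1) for _ in range(N_MOVES - len(moves))]
    return score(moves), moves

def nested(moves, level):
    # Level-k search: evaluate each candidate move with a level-(k-1)
    # search and follow the move sequence whose completion scored best.
    best_score, best_seq = -1, None
    while len(moves) < N_MOVES:
        for move in (0, 1):
            if level == 1:
                s, seq = playout(moves + [move])
            else:
                s, seq = nested(moves + [move], level - 1)
            if s > best_score:
                best_score, best_seq = s, seq
        # Commit to the next move of the best sequence seen so far.
        moves = best_seq[:len(moves) + 1]
    return best_score, moves

rand_score, _ = playout([])
nmc_score, _ = nested([], 1)
```

On this toy objective a single level of nesting already finds the optimal sequence (score 1023), while a plain random playout typically does not; higher levels recurse the same way.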

Tristan Cazenave

361

DEVELOPMENTS ON MONTE CARLO GO

We have developed two Go programs, Olga and Oleg, using a Monte Carlo approach, simpler than Bruegmann's (Bruegmann, 1993), and based on (Abramson, 1990). We have set up experiments to assess ideas such as progressive pruning, transpositions, temperature, simulated annealing and depth-two tree search within the Monte Carlo framework. We have shown that progressive pruning alone gives better results

Bruno Bouzy; Bernard Helmstetter

362

Hybrid Monte-Carlo on Hilbert Spaces Alexandros Beskosa,

Applications include the study of conditioned diffusions [11] and the Bayesian approach to inverse problems [20]. Preprint submitted to Stochastic Processes and their Applications, July 26, 2010.

Stuart, Andrew

363

NASA Astrophysics Data System (ADS)

We report on the on-chip fabrication of high performance flexible micro-supercapacitor (MSC) arrays with hybrid electrodes of multi-walled carbon nanotube (MWNT)/V2O5 nanowire (NW) composites and a solid electrolyte, which could power the SnO2 NW UV sensor integrated on the same flexible substrate. The patterned MSC using hybrid electrodes of MWNT/V2O5 NW composites with 10 vol% of V2O5 NWs exhibited excellent electrochemical performance with a high volume capacitance of 80 F cm-3 at a scan rate of 10 mV s-1 in a PVA-LiCl electrolyte and good cycle performance to maintain 82% of the capacitance after 10 000 cycles at a current density of 11.6 A cm-3. The patterned MSC also showed an excellent energy density of 6.8 mW h cm-3, comparable to that of a Li-thin film battery (1-10 mW h cm-3), and a power density of 80.8 W cm-3 comparable to that of state-of-the-art MSCs. In addition, the flexible MSC array on a PET substrate showed mechanical stability over bending with a bending radius down to 1.5 mm under both compressive and tensile stress. Even after 1000 bending cycles at a bending radius of 7 mm, 94% of the initial capacitance was maintained. Furthermore, we have shown the operation of a SnO2 NW UV sensor using such a fabricated MSC array integrated into the same circuit on the PET substrate. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr04138k

Kim, Daeil; Yun, Junyeong; Lee, Geumbee; Ha, Jeong Sook

2014-09-01

364

We review the fundamental challenge of fermion Monte Carlo for continuous systems, the "sign problem". We seek that eigenfunction of the many-body Schrödinger equation that is antisymmetric under interchange of the coordinates of pairs of particles. We describe methods that depend upon the use of correlated dynamics for pairs of correlated walkers that carry opposite signs. There is an algorithmic symmetry between such walkers that must be broken to create a method that is both exact and as effective as for symmetric functions. In our new method, it is broken by using different "guiding" functions for walkers of opposite signs, and a geometric correlation between steps of their walks. With a specific process of cancellation of the walkers, overlaps with antisymmetric test functions are preserved. Finally, we describe the progress in treating free-fermion systems and a fermion fluid with 14 ^{3}He atoms.

Kalos, M. H.; Pederiva, F.

1998-12-01

365

We report on the on-chip fabrication of high performance flexible micro-supercapacitor (MSC) arrays with hybrid electrodes of multi-walled carbon nanotube (MWNT)/V2O5 nanowire (NW) composites and a solid electrolyte, which could power the SnO2 NW UV sensor integrated on the same flexible substrate. The patterned MSC using hybrid electrodes of MWNT/V2O5 NW composites with 10 vol% of V2O5 NWs exhibited excellent electrochemical performance with a high volume capacitance of 80 F cm(-3) at a scan rate of 10 mV s(-1) in a PVA-LiCl electrolyte and good cycle performance to maintain 82% of the capacitance after 10 000 cycles at a current density of 11.6 A cm(-3). The patterned MSC also showed an excellent energy density of 6.8 mW h cm(-3), comparable to that of a Li-thin film battery (1-10 mW h cm(-3)), and a power density of 80.8 W cm(-3) comparable to that of state-of-the-art MSCs. In addition, the flexible MSC array on a PET substrate showed mechanical stability over bending with a bending radius down to 1.5 mm under both compressive and tensile stress. Even after 1000 bending cycles at a bending radius of 7 mm, 94% of the initial capacitance was maintained. Furthermore, we have shown the operation of a SnO2 NW UV sensor using such a fabricated MSC array integrated into the same circuit on the PET substrate. PMID:25184811

Kim, Daeil; Yun, Junyeong; Lee, Geumbee; Ha, Jeong Sook

2014-10-21

366

A new method to assess Monte Carlo convergence

The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement, e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and ∫_{-∞}^{∞} f(x) dx = 1. Since f(x) is seldom known explicitly, Monte Carlo particle random walks sample f(x) implicitly. Unless there is a largest possible history score, the empirical f(x) must eventually decrease more steeply than 1/x³ for the second moment (∫_{-∞}^{∞} x² f(x) dx) to exist.
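The 1/√N error reduction that the first requirement implies can be illustrated numerically. This sketch uses a lognormal distribution as a hypothetical stand-in for the history-score PDF f(x); it has a finite mean and variance, so the CLT-based relative error should shrink like 1/√N:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical history scores: the lognormal has finite mean and variance,
# so a CLT-based confidence interval is valid for large N.
scores = rng.lognormal(mean=0.0, sigma=1.0, size=200000)

rel_errs = []
for n in (1000, 10000, 100000):
    sample = scores[:n]
    mean = sample.mean()
    # Estimated relative statistical error of the tally mean.
    rel_errs.append(sample.std(ddof=1) / (mean * np.sqrt(n)))
```

Each tenfold increase in N should shrink the relative error by roughly √10, which is the guideline-style check the abstract says practitioners rely on today.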

Forster, R.A.; Booth, T.E.; Pederson, S.P.

1993-05-01

367

The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_N) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_N regions are fully coupled in the sense that no assumption is made about geometric separation or decoupling. The fully coupled Monte Carlo/S_N technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_N calculation is to be performed. The Monte Carlo and S_N regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and group sources. The hybrid method provides a new way of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_N is well suited for by itself. The fully coupled Monte Carlo/S_N method has been implemented in the S_N code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and group sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_N code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the S_N calculations. The Monte Carlo routines have been successfully vectorized, with approximately a factor-of-five increase in speed over the nonvectorized version. The hybrid method is capable of solving forward, inhomogeneous source problems in X-Y and R-Z geometries. This capability now includes multigroup problems involving upscatter and fission in non-highly multiplying systems. 8 refs., 8 figs., 1 tab.

Baker, R.S.; Filippone, W.F. (Arizona Univ., Tucson, AZ (USA). Dept. of Nuclear and Energy Engineering); Alcouffe, R.E. (Los Alamos National Lab., NM (USA))

1991-01-01

368

Angular biasing in implicit Monte-Carlo

Calculations of indirect drive Inertial Confinement Fusion target experiments require an integrated approach in which laser irradiation and radiation transport in the hohlraum are solved simultaneously with the symmetry, implosion and burn of the fuel capsule. The Implicit Monte Carlo method has proved to be a valuable tool for the two dimensional radiation transport within the hohlraum, but the impact of statistical noise on the symmetric implosion of the small fuel capsule is difficult to overcome. We present an angular biasing technique in which an increased number of low weight photons are directed at the imploding capsule. For typical parameters this reduces the required computer time for an integrated calculation by a factor of 10. An additional factor of 5 can also be achieved by directing even smaller weight photons at the polar regions of the capsule where small mass zones are most sensitive to statistical noise.

Zimmerman, G.B.

1994-10-20

369

Parallelizing Monte Carlo with PMC

PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.

Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

1994-11-01

370

Semiconductor/High-Tc-Superconductor Hybrid ICs

NASA Technical Reports Server (NTRS)

Hybrid integrated circuits (ICs) containing both Si-based semiconducting and YBa(2)Cu(3)O(7-x) superconducting circuit elements on sapphire substrates have been developed. The structure helps to prevent diffusion of Cu from the superconductors into the semiconductors. These hybrid ICs combine superconducting and semiconducting features unavailable in superconducting or semiconducting circuitry alone. For example, complementary metal oxide/semiconductor (CMOS) readout and memory devices are integrated with fast-switching Josephson-junction superconducting logic devices and zero-resistance interconnections.

Burns, Michael J.

1995-01-01

371

Multiscale Finite-Difference-Diffusion-Monte-Carlo Method for Simulating Dendritic Solidification

We present a novel hybrid computational method to simulate accurately dendritic solidification in the low undercooling limit where the dendrite tip radius is one or more orders of magnitude smaller than the characteristic spatial scale of variation of the surrounding thermal or solutal diffusion field. The first key feature of this method is an efficient multiscale diffusion Monte Carlo (DMC)

Mathis Plapp; Alain Karma

2000-01-01

372

Three-dimensional forest light interaction model using a Monte Carlo method

A model for light interaction with forest canopies is presented, based on Monte Carlo simulation of photon transport. A hybrid representation is used to model the discontinuous nature of the forest canopy. Large scale structure is represented by geometric primitives defining shapes and positions of the tree crowns and trunks. Foliage is represented within crowns by volume-averaged parameters describing the

Peter R. J. North

1996-01-01

373

Hybrid Multicomponent Hydrogels for Tissue Engineering

Artificial ECMs that not only closely mimic the hybrid nature of the natural ECM but also provide tunable material properties and enhanced biological functions are attractive candidates for tissue engineering applications. This review summarizes recent advances in developing multicomponent hybrid hydrogels by integrating modular and heterogeneous building blocks into well-defined, multifunctional hydrogel composites. The individual building blocks can be chemically, morphologically, and functionally diverse, and the hybridization can occur at molecular level or microscopic scale. The modular nature of the designs, combined with the potential synergistic effects of the hybrid systems, has resulted in novel hydrogel matrices with robust structure and defined functions. PMID:19107720

Jia, Xinqiao; Kiick, Kristi L.

2009-01-01

374

Distributed Control and Stochastic Analysis of Hybrid Systems Supporting Safety Critical Real-Time Systems Design (HYBRIDGE). WP9: Risk assessment for a distributed control system; Sequential Monte Carlo simulation.

Del Moral , Pierre

375

This study was undertaken to develop a reliable method to enumerate and map somatically acquired, clonal, murine leukemia virus (MuLV) proviral insertions in acute myeloid leukemia (AML) cells from the BXH-2 mouse strain. This was achieved by using fluorescence in situ hybridization combined with tyramide signal amplification (FISH-TSA) and an 8.8 kilobase pair (kb) full-length ecotropic MuLV or 2.0 kb MuLV envelope (env) gene probe. Two-color FISH was utilized combining chromosome-specific probes for regions near the telomere and/or centromere and the MuLV probes. The technique reliably detected germline and somatically acquired, tumor-specific, MuLV proviruses in BXH-2 AML cell lines. It was possible to readily verify homozygous insertions at endogenous ecotropic MuLV loci, Emv1 (chromosome 5), Emv2 (chromosome 8) and a BXH-2 strain-specific locus (chromosome 11). This strategy also verified the presence of molecularly cloned proviral insertions within the mouse Nf1 gene and another locus on distal chromosome 11, as well as on chromosome 7 and chromosome 9 in BXH-2 AML cell line B117. The technique was also used to detect several new tumor-specific, proviral insertions in BXH-2 AML cell lines. PMID:10958940

Acar, H; Copeland, N G; Gilbert, D J; Jenkins, N A; Largaespada, D A

2000-08-01

376

The suppression subtractive hybridization (SSH) approach, a PCR-based approach which amplifies differentially expressed cDNAs (complementary DNAs), while simultaneously suppressing amplification of common cDNAs, was employed to identify immune-inducible genes in insects. This technique has been used as a suitable tool for experimental identification of novel genes in eukaryotes as well as prokaryotes, whether their genomes have been sequenced or have yet to be sequenced. In this article, I have proposed a method for in silico functional characterization of immune-inducible genes from insects. Apart from immune-inducible genes from insects, this method can be applied to the analysis of genes from other species, from bacteria to plants and animals. This article provides a background of the SSH-based method using specific examples from innate immune-inducible genes in insects, and subsequently a bioinformatics pipeline is proposed for functional characterization of newly sequenced genes. The proposed workflow presented here can also be applied to any newly sequenced species generated from Next Generation Sequencing (NGS) platforms. PMID:23519487

Badapanda, Chandan

2013-01-01

377

JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, with no JNI bridging code being demanded. PMID:25110745

Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

2014-01-01

378

We describe two Go programs, Olga and Oleg, developed by a Monte-Carlo approach that is simpler than Bruegmann's (1993) approach. Our method is based on Abramson (1990). We performed experiments to assess ideas on (1) progressive pruning, (2) the all-moves-as-first heuristic, (3) temperature, (4) simulated annealing, and (5) depth-two tree search within the Monte-Carlo framework. Progressive pruning

Bruno Bouzy; Bernard Helmstetter

2003-01-01

379

Monte Carlo Methods for Inference and Learning

Monte Carlo Methods for Inference and Learning. Guest lecturer: Ryan Adams, CSC 2535, http://www.cs.toronto.edu/~rpa. Overview: Monte Carlo basics; rejection and importance sampling; Markov chain Monte Carlo; Metropolis-Hastings and Gibbs sampling; slice sampling; Hamiltonian Monte Carlo.

Hinton, Geoffrey E.

380

Monte Carlo Simulation of Interacting Electron Models

Monte Carlo Simulation of Interacting Electron Models by a New Determinant Approach, by Mucheng. Discusses the calculation of determinants and the Monte Carlo simulation of Hubbard models by a new approach, using a Monte Carlo summation algorithm to evaluate the relevant diagram determinant sums.

Robinson, Robert W.

381

Application of biasing techniques to the contributon Monte Carlo method

Recently, a new Monte Carlo method, the contributon Monte Carlo method, was developed. The method is based on the theory of contributons, and uses a new recipe for estimating target responses by a volume integral over the contributon current. The analog features of the new method were discussed in previous publications. The application of some biasing methods to the new contributon scheme is examined here. A theoretical model is developed that enables an analytic prediction of the benefit to be expected when these biasing schemes are applied to both the contributon method and regular Monte Carlo. This model is verified by a variety of numerical experiments and is shown to yield satisfactory results, especially for deep-penetration problems. Other considerations regarding the efficient use of the new method are also discussed, and remarks are made as to the application of other biasing methods. 14 figures, 1 table.
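Although not the contributon scheme itself, the benefit biasing brings to deep-penetration problems can be illustrated with a generic importance-sampling sketch. The Gaussian rare-event model below is an assumption for illustration only:

```python
import numpy as np

rng = np.random.default_rng(7)

# Rare-event analogue of a deep-penetration tally: estimate
# p = P(X > 4) for X ~ N(0, 1), whose exact value is about 3.167e-5.
n = 100000

# Analog sampling: unbiased, but almost no sample ever scores.
x = rng.normal(size=n)
p_analog = (x > 4.0).mean()

# Biased sampling: draw from N(4, 1), so most samples score, and
# weight by the density ratio p(y)/q(y) = exp(8 - 4y) to stay unbiased.
y = rng.normal(4.0, 1.0, size=n)
weights = np.exp(8.0 - 4.0 * y)
p_is = ((y > 4.0) * weights).mean()
```

With the same number of histories, the weighted estimator resolves the rare tally to a few percent, while the analog estimator sees only a handful of scoring histories; this variance reduction is the generic payoff the abstract's theoretical model quantifies for contributon biasing.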

Dubi, A.; Gerstl, S.A.W.

1980-01-01

382

The Monte Carlo method in quantum field theory

This series of six lectures is an introduction to using the Monte Carlo method to carry out nonperturbative studies in quantum field theories. Path integrals in quantum field theory are reviewed, and their evaluation by the Monte Carlo method with Markov-chain based importance sampling is presented. Properties of Markov chains are discussed in detail and several proofs are presented, culminating in the fundamental limit theorem for irreducible Markov chains. The example of a real scalar field theory is used to illustrate the Metropolis-Hastings method and to demonstrate the effectiveness of an action-preserving (microcanonical) local updating algorithm in reducing autocorrelations. The goal of these lectures is to provide the beginner with the basic skills needed to start carrying out Monte Carlo studies in quantum field theories, as well as to present the underlying theoretical foundations of the method.
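The Markov-chain evaluation of a lattice path integral described above can be sketched with a free 1D scalar field and single-site Metropolis updates. The lattice size, mass, and proposal width are illustrative choices, not from the lectures:

```python
import numpy as np

rng = np.random.default_rng(3)

def local_action(phi, i, m2):
    # Only the terms of the Euclidean action containing phi[i] change
    # under a single-site update (periodic boundary conditions):
    # S = sum_i [ (phi_{i+1} - phi_i)^2 / 2 + m^2 phi_i^2 / 2 ]
    left, right = phi[i - 1], phi[(i + 1) % len(phi)]
    return 0.5 * ((phi[i] - left)**2 + (right - phi[i])**2) + 0.5 * m2 * phi[i]**2

def metropolis_sweep(phi, delta=1.0, m2=1.0):
    accepted = 0
    for i in range(len(phi)):
        old = phi[i]
        s_old = local_action(phi, i, m2)
        phi[i] = old + rng.uniform(-delta, delta)
        s_new = local_action(phi, i, m2)
        # Accept with probability min(1, exp(-(S_new - S_old))).
        if np.log(rng.uniform()) >= s_old - s_new:
            phi[i] = old        # reject: restore the old field value
        else:
            accepted += 1
    return accepted / len(phi)

phi = np.zeros(32)
rates = [metropolis_sweep(phi) for _ in range(500)]
```

Observables such as the field correlator are then averaged over the configurations visited after thermalization, exactly the importance-sampled path-integral estimate the lectures build up to.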

Colin Morningstar

2007-02-20

383

Optimum and efficient sampling for variational quantum Monte Carlo

Quantum mechanics for many-body systems may be reduced to the evaluation of integrals in 3N dimensions using Monte Carlo, providing the quantum Monte Carlo ab initio methods. Here we limit ourselves to expectation values for trial wavefunctions, that is, to variational quantum Monte Carlo. Almost all previous implementations employ samples distributed as the physical probability density of the trial wavefunction, and assume the central limit theorem to be valid. In this paper we provide an analysis of random error in estimation and optimisation that leads naturally to new sampling strategies with improved computational and statistical properties. A rigorous lower limit to the random error is derived, and an efficient sampling strategy is presented that significantly increases computational efficiency. In addition, the infinite-variance, heavy-tailed random errors of optimum parameters in conventional methods are replaced with a normal random error, strengthening the theoretical basis of optimisation. The method is ...
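The baseline the paper improves upon, sampling the physical density of the trial wavefunction and invoking the CLT, can be sketched for the 1D harmonic oscillator. The trial function and its local energy below are standard textbook choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def vmc_energy(alpha, n_samples=100000):
    # Trial wavefunction psi(x) = exp(-alpha x^2) for the 1D harmonic
    # oscillator H = -1/2 d^2/dx^2 + 1/2 x^2 (hbar = m = omega = 1).
    # |psi|^2 is a Gaussian with variance 1/(4 alpha): sample it directly.
    x = rng.normal(0.0, np.sqrt(1.0 / (4.0 * alpha)), size=n_samples)
    # Local energy E_L(x) = alpha + x^2 (1/2 - 2 alpha^2), derived
    # analytically from E_L = (H psi) / psi.
    e_local = alpha + x**2 * (0.5 - 2.0 * alpha**2)
    return e_local.mean(), e_local.std(ddof=1) / np.sqrt(n_samples)

# At the exact ground state (alpha = 1/2) the local energy is constant,
# so the estimator variance vanishes: the zero-variance principle.
e, err = vmc_energy(0.5)
```

Away from the exact ground state the local energy fluctuates, and the quality of the CLT-based error bar is precisely what the paper's analysis of heavy-tailed errors scrutinizes.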

Trail, John Robert; 10.1063/1.3488651

2010-01-01

384

The internal structure of deuterons weakly influences the motion of their center of mass (translational motion) when the separation between deuterons is much larger than the nuclear radius. The scenario can be different when the wave function of two deuterons has a formal singularity along the line connecting them. The singularity is smeared out within a core of the deuteron radius around this line and the state becomes physical. Inside the core, translational and internal nuclear coordinates (responsible for binding energy) are equally fast and become hybridized, resulting in an increase of the binding energy in that region. This reduces the Coulomb barrier and allows fusion of deuterons, which are "hot" inside the core and "cold" outside. In experiments the hybridized state can be obtained from the usual incident flux by adjusting its angular distribution. By manipulating the flux of low-energy deuterons one can achieve their fusion with non-negligible probability.

B. Ivlev

2013-12-20

385

This paper is concerned with filtering of a hybrid model with a number of linear systems coupled by a hidden switching process. The most probable trajectory approach is used to derive a finite-dimensional recursive filter. Such a scheme is applied to nonlinear systems using a piecewise-linear approximation method. Numerical examples are provided and computational experiments are reported.

Q. Zhang

386

Monte Carlo and Quasi-Monte Carlo for Statistics

Monte Carlo and Quasi-Monte Carlo for Statistics, by Art B. Owen. Abstract: This article reports on areas of statistics where Monte Carlo methods can be used, with special emphasis on areas where quasi-Monte Carlo ideas apply. The survey is aimed at exposing good problems in statistics to researchers in quasi-Monte Carlo.

Owen, Art

387

Quasi-Monte Carlo Sampling to improve the Efficiency of Monte Carlo EM

Quasi-Monte Carlo Sampling to Improve the Efficiency of Monte Carlo EM, by Wolfgang Jank (November 17, 2003). Abstract: In this paper we investigate an efficient implementation of the Monte Carlo EM algorithm based on quasi-Monte Carlo sampling. The Monte Carlo EM algorithm is a stochastic version
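The generic MC-versus-QMC trade-off underlying such schemes can be illustrated by integrating a smooth function with pseudo-random points and with a low-discrepancy Halton point set. The integrand and hand-rolled Halton construction are illustrative, not the paper's EM setting:

```python
import numpy as np

def halton(n, base):
    # Van der Corput sequence in the given base: the classic
    # low-discrepancy point set used by quasi-Monte Carlo methods.
    seq = np.zeros(n)
    for i in range(n):
        f, result, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            result += f * (k % base)
            k //= base
        seq[i] = result
    return seq

def integrate(points):
    # Estimate the integral of f(x, y) = exp(x * y) over the unit square;
    # the exact value is about 1.317902.
    x, y = points
    return np.exp(x * y).mean()

n = 4096
rng = np.random.default_rng(5)
mc = integrate(rng.uniform(size=(2, n)))                 # pseudo-random points
qmc = integrate((halton(n, 2), halton(n, 3)))            # 2D Halton points
```

For smooth integrands, the QMC error decays nearly like 1/N rather than the 1/√N of plain Monte Carlo, which is the efficiency gain the paper imports into the E-step of Monte Carlo EM.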

Jank, Wolfgang

388

NASA Astrophysics Data System (ADS)

This paper presents research activities carried out at VTT Technical Research Centre of Finland in the field of hybrid integration of optics, electronics and mechanics. Main focus area in our research is the manufacturing of electronic modules and product structures with printed electronics, film-over-molding and polymer sheet lamination technologies and the goal is in the next generation of smart systems utilizing monolithic polymer packages. The combination of manufacturing technologies such as roll-to-roll -printing, injection molding and traditional component assembly is called Printed Hybrid Systems (PHS). Several demonstrator structures have been made, which show the potential of polymer packaging technology. One demonstrator example is a laminated structure with embedded LED chips. Element thickness is only 0.3mm and the flexible stack of foils can be bent in two directions after assembly process and was shaped curved using heat and pressure. The combination of printed flexible circuit boards and injection molding has also been demonstrated with several functional modules. The demonstrators illustrate the potential of origami electronics, which can be cut and folded to 3D shapes. It shows that several manufacturing process steps can be eliminated by Printed Hybrid Systems technology. The main benefits of this combination are small size, ruggedness and conformality. The devices are ideally suited for medical applications as the sensitive electronic components are well protected inside the plastic and the structures can be cleaned easily due to the fact that they have no joints or seams that can accumulate dirt or bacteria.

Karioja, Pentti; Mäkinen, Jukka-Tapani; Keränen, Kimmo; Aikio, Janne; Alajoki, Teemu; Jaakola, Tuomo; Koponen, Matti; Keränen, Antti; Heikkinen, Mikko; Tuomikoski, Markus; Suhonen, Riikka; Hakalahti, Leena; Kopola, Pälvi; Hast, Jukka; Liedert, Ralf; Hiltunen, Jussi; Masuda, Noriyuki; Kemppainen, Antti; Rönkä, Kari; Korhonen, Raimo

2012-04-01

389

NASA Astrophysics Data System (ADS)

The visualisation of city models, including buildings, structures and volumetric information, is an important task in Computer Graphics and Urban Planning. The different formats and data sources involved in the visualisation make the development of applications a big challenge. We present a homogeneous web visualisation framework using X3DOM and MEDX3DOM for the visualisation of these urban objects. We present an integration of different declarative data sources, enabling the utilization of advanced visualisation algorithms to render the models. It has been tested with a city model composed of buildings from the Madrid University Campus, volumetric datasets coming from Air Quality Models, and 2D wind dataset layers. Results show that the visualisation of all the urban models can be performed in real time on the Web. An HTML5 web interface is presented to the users, enabling real-time modification of visualisation parameters.

Congote, J.; Moreno, A.; Kabongo, L.; Pérez, J.-L.; San-José, R.; Ruiz, O.

2012-10-01

390

Density-of-states based Monte Carlo methods for simulation of biological systems

NASA Astrophysics Data System (ADS)

We have developed density-of-states [1] based Monte Carlo techniques for simulation of biological molecules. Two such methods are discussed. The first, Configurational Temperature Density of States (CTDOS) [2], relies on computing the density of states of a peptide system from knowledge of its configurational temperature. The reciprocal of this intrinsic temperature, computed from instantaneous configurational information of the system, is integrated to arrive at the density of states. The method shows improved efficiency and accuracy over techniques that are based on histograms of random visits to distinct energy states. The second approach, Expanded Ensemble Density of States (EXEDOS), incorporates elements from both the random walk method and the expanded ensemble formalism. It is used in this work to study mechanical deformation of model peptides. Results are presented in the form of force-extension curves and the corresponding potentials of mean force. The application of this proposed technique is further generalized to other biological systems; results will be presented for ion transport through protein channels, base stacking in nucleic acids and hybridization of DNA strands. [1]. F. Wang and D. P. Landau, Phys. Rev. Lett., 86, 2050 (2001). [2]. N. Rathore, T. A. Knotts IV and J. J. de Pablo, Biophys. J., Dec. (2003).

Rathore, Nitin; Knotts, Thomas A.; de Pablo, Juan J.

2004-03-01

391

Photonic integrated circuits based on silica and polymer PLC

NASA Astrophysics Data System (ADS)

Various methods of hybrid integration of photonic circuits are discussed, focusing on merits and challenges. The material platforms discussed in this report are mainly polymer and silica. We categorize the hybridization methods using silica and polymer waveguides into two types: chip-to-chip and on-chip integration. These hybridization technologies are reviewed in light of past work, and an example of each method is discussed in detail. We also discuss the current status of our silica PLC hybrid integration technology.

Izuhara, T.; Fujita, J.; Gerhardt, R.; Sui, B.; Lin, W.; Grek, B.

2013-03-01

392

NASA Astrophysics Data System (ADS)

ePix100 is the first variant of a novel class of integrating pixel ASIC architectures optimized for the processing of signals in second generation LINAC Coherent Light Source (LCLS) X-Ray cameras. ePix100 is optimized for ultra-low-noise applications requiring high spatial resolution. ePix ASICs are based on a common platform composed of a random-access analog matrix of pixels with global shutter, fast parallel column readout, and dedicated sigma-delta analog-to-digital converters per column. The ePix100 variant has 50 µm × 50 µm pixels arranged in a 352 × 384 matrix, a resolution of 50 e- r.m.s. and a signal range of 35 fC (100 photons at 8 keV). In its final version it will be able to sustain a frame rate of 1 kHz. A first prototype has been fabricated and characterized, and the measurement results are reported here.

Caragiulo, P.; Dragone, A.; Markovic, B.; Herbst, R.; Nishimura, K.; Reese, B.; Herrmann, S.; Hart, P.; Blaj, G.; Segal, J.; Tomada, A.; Hasi, J.; Carini, G.; Kenney, C.; Haller, G.

2014-09-01

393

Hierarchically ordered ZnO nanorods (NRs) decorated nanoporous-layer-covered TiO2 nanotube array (ZnO NRs/NP-TNTAs) nanocomposites have been prepared by an efficient, two-step anodization route combined with an electrochemical deposition strategy, by which monodispersed one-dimensional (1D) ZnO NRs were uniformly grown on the framework of NP-TNTAs. The crystal phases, morphologies, optical properties, photocatalytic as well as photoelectrocatalytic performances of the well-defined ZnO NRs/NP-TNTAs heterostructures were systematically explored to clarify the structure-property correlation. It was found that the ZnO NRs/NP-TNTAs heterostructure exhibits significantly enhanced photocatalytic and photoelectrocatalytic performances, along with favorable photostability toward degradation of organic pollutants under UV light irradiation, as compared to the single component counterparts. The remarkably enhanced photoactivity of ZnO NRs/NP-TNTAs heterostructure is ascribed to the intimate interfacial integration between ZnO NRs and NP-TNTAs substrate imparted by the unique spatially branched hierarchical structure, thereby contributing to the efficient transfer and separation of photogenerated electron-hole charge carriers. Moreover, the specific active species during the photocatalytic process was unambiguously determined and photocatalytic mechanism was tentatively presented. It is anticipated that our work could provide new insights for the construction of various hierarchical 1D-1D hybrid nanocomposites for extensive photocatalytic applications. PMID:25363649

Xiao, Fang-Xing; Hung, Sung-Fu; Tao, Hua Bing; Miao, Jianwei; Yang, Hong Bin; Liu, Bin

2014-12-21

394

First principles quantum Monte Carlo

Present quantum Monte Carlo codes use statistical techniques adapted to find the amplitude of a quantum system or the associated eigenvalues. Thus, they do not use a true physical random source. It is demonstrated that, in fact, quantum probability admits a description based on a specific class of random process at least for the single particle case. Then a first principle Monte Carlo code that exactly simulates quantum dynamics can be constructed. The subtle question concerning how to map random choices in amplitude interferences is explained. Possible advantages of this code in simulating single hit experiments are discussed.

J. M. A. Figueiredo

2005-10-06

395

Proton Upset Monte Carlo Simulation

NASA Technical Reports Server (NTRS)

The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

2009-01-01

396

Monte Carlo simulations of the randomly forced Burgers equation

The behaviour of the one-dimensional random-forced Burgers equation is investigated in the path integral formalism, using a discrete space-time lattice. We show that by means of Monte Carlo methods one may evaluate observables, such as structure functions, as ensemble averages over different field realizations. The regularization of shock solutions to the zero-viscosity limit (Hopf equation) eventually leads to constraints on

P. Düben; D. Homeier; K. Jansen; D. Mesterhazy; G. Münster; C. Urbach

2008-01-01

397

Tests for Unit Roots: A Monte Carlo Investigation

Recent work by Said and Dickey (1984, 1985), Phillips (1987), and Phillips and Perron (1988) examines tests for unit roots in the autoregressive part of mixed autoregressive integrated moving average models (tests for stationarity). Monte Carlo experiments show that these unit-root tests have different finite-sample distributions from the unit-root tests developed by Fuller (1976) and Dickey and Fuller (1979, 1981)

G. William Schwert

1989-01-01

398

Hybrid Silicon AWG Lasers and Buffers

NASA Astrophysics Data System (ADS)

Silicon photonics promises the low cost integration of optical components with CMOS electronics thus enabling optical interconnects in future generation processors. The hybrid silicon platform (HSP) is one approach to make optically active components on silicon. While many optical components on the HSP have been demonstrated, few photonic integrated circuits (PICs), consisting of multiple elements, have been demonstrated. In this dissertation, two Hybrid Silicon PICs and their building blocks will be presented. The first PIC to be presented is a multiwavelength laser based on an AWG. It consists of Fabry-Perot cavities integrated with hybrid silicon amplifiers and an intracavity filter in the form of an AWG with a channel spacing of 360 GHz. Four-channel lasing operation is shown. Single-sided fiber-coupled output powers as high as 35 µW are measured. The SMSR is as high as 35 dB. Various device characteristics are compromised as the AWG was attacked during the III-V process, thus showing the need to properly protect passive components during III-V processing. The second PIC to be presented is a fully integrated optical buffer. The device consists of a hybrid silicon switch, a 1.1 m long silicon waveguide, and cascaded hybrid silicon amplifiers. The passive delay line is protected by dielectric layers to limit passive losses to 0.5 dB/cm. Noise filters in the form of saturable absorbers are integrated in the buffer to allow for a larger number of recirculations in the delay line compared to a delay without filters. Tapers are used to transition the mode from the passive region to the hybrid region with losses as low as 0.22 dB per transition and reflectivities below -35 dB. Error free operation of the hybrid silicon switch is demonstrated in all four paths. The integrated buffer failed due to low yield, showing the current limitations of the HSP.

Kurczveil, Geza

399

MontePython: Implementing Quantum Monte Carlo using Python

NASA Astrophysics Data System (ADS)

We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system to which QMC is applied, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and we describe how to implement these methods in pure C++ and in C++/Python. Furthermore, we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible. Program summary Program title: MontePython Catalogue identifier: ADZP_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZP_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 49 519 No. of bytes in distributed program, including test data, etc.: 114 484 Distribution format: tar.gz Programming language: C++, Python Computer: PC, IBM RS6000/320, HP, ALPHA Operating system: LINUX Has the code been vectorised or parallelized?: Yes, parallelized with MPI Number of processors used: 1-96 RAM: Depends on physical system to be simulated Classification: 7.6; 16.1 Nature of problem: Investigating ab initio quantum mechanical systems, specifically Bose-Einstein condensation in dilute gases of 87Rb Solution method: Quantum Monte Carlo Running time: 225 min with 20 particles (with 4800 walkers moved in 1750 time steps) on one AMD Opteron 2218 processor; a production run for, e.g., 200 particles takes around 24 hours on 32 such processors.
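A variational Monte Carlo step of the kind MontePython implements can be sketched in a few lines (this is a toy illustration, not MontePython code): Metropolis sampling of |ψ|² for the 1D harmonic oscillator with trial wavefunction ψ_α(x) = exp(−αx²/2), whose local energy is E_L = α/2 + x²(1−α²)/2.

```python
import math, random

def local_energy(x, alpha):
    # Local energy for trial psi = exp(-alpha x^2 / 2) in the 1D harmonic
    # oscillator (hbar = m = omega = 1): E_L = alpha/2 + x^2 (1 - alpha^2)/2.
    return 0.5 * alpha + 0.5 * x * x * (1.0 - alpha * alpha)

def vmc_energy(alpha, n_steps=50000, step=1.0, seed=2):
    # Metropolis sampling of |psi|^2, averaging the local energy along the walk.
    rng = random.Random(seed)
    x, e_sum = 0.0, 0.0
    for _ in range(n_steps):
        xn = x + step * (rng.random() - 0.5)
        # Accept with probability min(1, |psi(xn)/psi(x)|^2)
        if rng.random() < math.exp(-alpha * (xn * xn - x * x)):
            x = xn
        e_sum += local_energy(x, alpha)
    return e_sum / n_steps
```

For α = 1 the trial function is the exact ground state, so E_L ≡ 0.5 and the estimator has zero variance; for other α the average lies above 0.5, which is the variational principle at work.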

Nilsen, Jon Kristian

2007-11-01

400

Anomalous scaling in the random-force-driven Burgers equation: A Monte Carlo study

We present a new approach to determine numerically the statistical behavior of small-scale structures in hydrodynamic turbulence. Starting from the functional integral for the random-force-driven Burgers equation we show that Monte Carlo simulations allow for the computation of structure function scaling exponents to high precision. Given the general applicability of Monte Carlo methods, this opens up the possibility to address also other systems relevant to turbulence within this framework.

Mesterhazy, David

2011-01-01

401

Anomalous scaling in the random-force-driven Burgers' equation: a Monte Carlo study

NASA Astrophysics Data System (ADS)

We present a new approach to determine numerically the statistical behavior of small-scale structures in hydrodynamic turbulence. Starting from the functional integral representation of the random-force-driven Burgers' equation we show that Monte Carlo simulations allow us to determine the anomalous scaling of high-order moments of velocity differences. Given the general applicability of Monte Carlo methods, this opens up the possibility of also addressing other systems relevant to turbulence within this framework.

Mesterházy, D.; Jansen, K.

2011-10-01

402

Anomalous scaling in the random-force-driven Burgers' equation: a Monte Carlo study

We present a new approach to determine numerically the statistical behavior of small-scale structures in hydrodynamic turbulence. Starting from the functional integral representation of the random-force-driven Burgers' equation we show that Monte Carlo simulations allow us to determine the anomalous scaling of high-order moments of velocity differences. Given the general applicability of Monte Carlo methods, this opens up the possibility

D. Mesterházy; K. Jansen

2011-01-01

403

Anomalous scaling in the random-force-driven Burgers equation: A Monte Carlo study

We present a new approach to determine numerically the statistical behavior of small-scale structures in hydrodynamic turbulence. Starting from the functional integral representation of the random-force-driven Burgers equation we show that Monte Carlo simulations allow us to determine the anomalous scaling of high-order moments of velocity differences. Given the general applicability of Monte Carlo methods, this opens up the possibility to address also other systems relevant to turbulence within this framework.

David Mesterhazy; Karl Jansen

2011-04-07

404

Hybrid inflation along waterfall trajectories

We identify a new inflationary regime in which more than 60 e-folds are generated classically during the waterfall phase occurring after the usual hybrid inflation. By performing a Bayesian Markov-chain Monte-Carlo analysis, this scenario is shown to take place in a large part of the parameter space of the model. When this occurs, the observable perturbation modes leave the Hubble radius during waterfall inflation. The power spectrum of adiabatic perturbations is red, possibly in agreement with CMB constraints. Particular attention has been given to studying only the regions for which quantum backreactions do not affect the classical dynamics. Implications concerning preheating and the absence of topological defects in our universe are discussed.

Sebastien Clesse

2010-06-23

405

Monte Carlo calculations of nuclei

Nuclear many-body calculations have the complication of strong spin- and isospin-dependent potentials. In these lectures the author discusses the variational and Green's function Monte Carlo techniques that have been developed to address this complication, and presents a few results.

Pieper, S.C. [Argonne National Lab., IL (United States). Physics Div.

1997-10-01

406

I review the status of the general-purpose Monte Carlo event generators for the LHC, with emphasis on areas of recent physics developments. There has been great progress, especially in multi-jet simulation, but I mention some question marks that have recently arisen.

Michael H. Seymour

2010-08-17

407

NSDL National Science Digital Library

This is the description and instructions for the Monte Carlo Estimation of Pi applet. It is a simulation of throwing darts at a figure of a circle inscribed in a square. It shows the relationship between the geometry of the figure and the statistical outcome of throwing the darts.
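The dart-throwing estimate the applet demonstrates takes only a few lines; in this minimal Python sketch (sample count illustrative), the fraction of uniform points in the unit square that land inside the quarter circle approaches π/4:

```python
import math, random

def estimate_pi(n_darts, seed=0):
    # Throw darts uniformly at the unit square; the fraction of hits inside
    # the quarter circle of radius 1 estimates pi/4.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_darts):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n_darts
```

The statistical error shrinks as 1/√n, so each extra decimal digit of π costs roughly a hundred times more darts.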

Mcgath, Gary; Trunfio, Paul

1996-01-01

408

Is Monte Carlo embarrassingly parallel?

Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
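The cycle-level rendezvous described above can be illustrated with a toy cost model; in this hypothetical sketch the per-cycle wall time is the tracking work divided over p processors plus a synchronization cost assumed (for illustration only) to grow linearly with p, which is enough to reproduce a speedup that peaks and then degrades:

```python
def cycle_time(p, n_histories=1_000_000, t_hist=1e-6, t_sync=5e-4):
    # Per-cycle wall time: particle tracking divided over p processors, plus a
    # rendezvous cost (collecting the fission source and estimating k_eff).
    # The linear growth of the sync cost with p is an illustrative assumption.
    return n_histories * t_hist / p + t_sync * p

def speedup(p, **kw):
    # Speedup relative to a single processor under the same cost model.
    return cycle_time(1, **kw) / cycle_time(p, **kw)
```

With these illustrative numbers the speedup rises up to a few tens of processors and then collapses as the rendezvous cost dominates, mirroring the behavior the abstract describes.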

Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

2012-07-01

409

In this lecture I discuss jet-shape distributions and describe how, from jet evolution, one may design Monte Carlo simulations which are used in the analysis of short-distance distributions in $e^+e^-$ annihilation, lepton-hadron and hadron-hadron collisions

Giuseppe Marchesini

2005-01-24

410

Errata for Exploring Monte Carlo Methods, William L. Dunn and J. Kenneth Shultis, June 14, 2014: corrections to equations and text on pp. 78, 80, and 81, including p. 78, line 14, where "composite method" should read "composition method".

Shultis, J. Kenneth

411

Writing programs to play the classical Asian game of Go is considered one of the grand challenges of artificial intelligence. Traditional game tree search methods have failed to conquer Go because the search space is so vast and because static evaluation of board positions is extremely difficult. There has been considerable progress recently in using Monte Carlo

Peter Drake; Steve Uurtamo

2007-01-01

412

Small hybrid solar power system

This paper introduces a novel concept of a mini-hybrid solar power plant integrating a field of solar concentrators, two superposed Organic Rankine Cycles (ORC) and a (bio-)Diesel engine. The Organic Rankine Cycles include hermetic scroll expander-generators (the word expander is often used to characterize units recovering the expansion energy of a gas, in particular when based on a volumetric machine).

M. Kane; D. Larrain; D. Favrat; Y. Allani

2003-01-01

413

Hybrid Power Management Program Continued

NASA Technical Reports Server (NTRS)

Hybrid Power Management (HPM) is the innovative integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications. The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The advanced power devices include ultracapacitors and photovoltaics. HPM has extremely wide potential with applications including power-generation, transportation, biotechnology, and space power systems. It may significantly alleviate global energy concerns, improve the environment, and stimulate the economy.

Eichenberg, Dennis J.

2002-01-01

414

Monte-Carlo Exploration for Deterministic Planning

Search methods based on Monte-Carlo simulation have recently led to breakthrough performance improvements in difficult game-playing domains such as Go and General Game Playing. Monte-Carlo Random Walk (MRW) planning applies Monte-Carlo ideas to deterministic classical planning. In the forward chaining planner ARVAND, Monte-Carlo random walks are used to explore the local neighborhood of a search state for

Hootan Nakhost; Martin Müller

2009-01-01

415

Monte Carlo Simulations and Generation of the SPI Response

NASA Technical Reports Server (NTRS)

In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulation.

Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

2003-01-01

416

Monte Carlo Simulations and Generation of the SPI Response

NASA Technical Reports Server (NTRS)

In this paper we discuss the methods developed for the production of the INTEGRAL/SPI instrument response. The response files were produced using a suite of Monte Carlo simulation software developed at NASA/GSFC based on the GEANT-3 package available from CERN. The production of the INTEGRAL/SPI instrument response also required the development of a detailed computer mass model for SPI. We discuss our extensive investigations into methods to reduce both the computation time and storage requirements for the SPI response. We also discuss corrections to the simulated response based on our comparison of ground and inflight calibration data with MGEANT simulations.

Sturner, S. J.; Shrader, C. R.; Weidenspointner, G.; Teegarden, B. J.; Attie, D.; Cordier, B.; Diehl, R.; Ferguson, C.; Jean, P.; vonKienlin, A.

2003-01-01

417

Target Density Normalization for Markov Chain Monte Carlo Algorithms

Techniques for evaluating the normalization integral of the target density for Markov Chain Monte Carlo algorithms are described and tested numerically. It is assumed that the Markov Chain algorithm has converged to the target distribution and produced a set of samples from the density. These are used to evaluate sample mean, harmonic mean and Laplace algorithms for the calculation of the integral of the target density. A clear preference for the sample mean algorithm applied to a reduced support region is found, and guidelines are given for implementation.
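The preferred sample-mean estimator on a reduced support region can be sketched as follows; in this toy Python illustration the chain output is replaced by direct draws from a 1D Gaussian target, and the region, sample counts, and target are illustrative choices, not taken from the paper:

```python
import math, random

def estimate_normalization(samples, f_tilde, a, b, n_uniform=100000, seed=3):
    # Sample-mean estimator on a reduced support region R = [a, b]:
    #   P(R) = (integral of f_tilde over R) / Z  is estimated by the fraction
    #   of target samples falling in R, while the integral of f_tilde over R
    #   is estimated by uniform sampling on R. Solving for Z gives the result.
    rng = random.Random(seed)
    frac = sum(1 for x in samples if a <= x <= b) / len(samples)
    vol = b - a
    mean_f = sum(f_tilde(a + vol * rng.random()) for _ in range(n_uniform)) / n_uniform
    return vol * mean_f / frac

# Toy target: unnormalized Gaussian with Z = sqrt(2*pi); direct draws stand in
# for converged Markov chain samples in this illustration.
f = lambda x: math.exp(-0.5 * x * x)
rng = random.Random(4)
samples = [rng.gauss(0.0, 1.0) for _ in range(100000)]
Z = estimate_normalization(samples, f, -1.0, 1.0)
```

Restricting to a region well covered by the samples keeps both the occupation fraction and the uniform-sampling average well conditioned, which is the motivation for the reduced support region.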

Caldwell, Allen

2014-01-01

418

Population Monte Carlo Methods

Slides: Population Monte Carlo Methods, Christian P. Robert, Université Paris Dauphine (OFPR/CREST, May 5, 2003). Section 1, A Benchmark Example: even simple models may lead

Robert, Christian P.

419

Towards Monte Carlo Simulations on Large Nuclei (August 2014). Describes a published method of computing properties of neutron matter using variational Monte Carlo simulations; the method performs variational Monte Carlo calculations on neutron matter comprised of up

Washington at Seattle, University of - Department of Physics, Electroweak Interaction Research Group

420

HOPSPACK: Hybrid Optimization Parallel Search Package.

In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.

Gray, Genetha A.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica

2008-12-01

421

Hybrid messaging of adaptive workflow engine

To realize the adaptive workflow engine, the messaging system is the backbone. Hybrid architectures are given to adapt different MOM (message-oriented middleware) protocols. After exploring publish/subscribe and point-to-point messaging models, server-side components modeling business processes are employed to leverage underlying middleware services, and hybrid ways to integrate JMS with EJB for reliable and fault-tolerant platforms are fulfilled. Detailed

Cao WeiQi; Li JuanZi; Wang KeHong

2003-01-01

422

The Monte Carlo Newton-Raphson algorithm

It is shown that the Monte Carlo Newton-Raphson algorithm is a viable alternative to the Monte Carlo EM algorithm for finding maximum likelihood estimates based on incomplete data. Both Monte Carlo procedures require simulations from the conditional distribution of the missing data given the observed data with the aid of methods like Gibbs sampling and rejective sampling. The Newton-Raphson algorithm
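As a hypothetical illustration of the idea (not the incomplete-data algorithm of the paper, which uses Gibbs or rejective sampling), the sketch below runs Newton-Raphson on a moment equation whose expectation is estimated by simulation, using common random numbers so the Monte Carlo estimate is a smooth function of the parameter:

```python
import math, random

def mc_newton_raphson(data, n_sim=100000, iters=20, seed=5):
    # Toy illustration: fit the rate theta of an exponential by solving
    # g(theta) = mean(data) - E_theta[X] = 0, where E_theta[X] is estimated
    # by simulation with common random numbers (the same Exp(1) draws are
    # rescaled by 1/theta at every iteration), so the estimate and its
    # derivative in theta are smooth and Newton-Raphson applies directly.
    rng = random.Random(seed)
    u = [-math.log(1.0 - rng.random()) for _ in range(n_sim)]  # Exp(1) draws
    c = sum(u) / n_sim                                         # ~= 1
    xbar = sum(data) / len(data)
    theta = 1.0
    for _ in range(iters):
        g = xbar - c / theta          # simulated estimating equation
        dg = c / theta ** 2           # its derivative in theta
        theta -= g / dg               # Newton-Raphson update
    return theta
```

The iteration converges to c/mean(data), i.e. the maximum likelihood estimate 1/mean(data) up to the Monte Carlo error in c; with fresh random numbers at each iteration the estimated equation would jitter and Newton-Raphson could fail to settle.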

Anthony Y. C. Kuk; Yuk W. Cheng

1997-01-01

423

Monte Carlo Simulations of Model Nonionic Surfactants

Monte Carlo Simulations of Model Nonionic Surfactants, A.P. Chatterjee and A.Z. Panagiotopoulos. The system was studied by histogram-reweighting grand canonical Monte Carlo simulations. Two different sets of site volume fractions were obtained using lattice Monte Carlo simulations performed in the canonical (constant NVT) ensemble

424

Monte Carlo Experiments: Design and Implementation.

ERIC Educational Resources Information Center

Illustrates the design and planning of Monte Carlo simulations, presenting nine steps in planning and performing a Monte Carlo analysis from developing a theoretically derived question of interest through summarizing the results. Uses a Monte Carlo simulation to illustrate many of the relevant points. (SLD)

Paxton, Pamela; Curran, Patrick J.; Bollen, Kenneth A.; Kirby, Jim; Chen, Feinian

2001-01-01

425

A Monte Carlo Study of Titrating Polyelectrolytes

A Monte Carlo Study of Titrating Polyelectrolytes, Magnus Ullner and Bo Jönsson, Journal of Chemical Physics 104, 3048-3057 (1996). Monte Carlo simulations have been used to study three different titrating polyelectrolyte models; titration makes sampling of the polymer more difficult and biases the conformations towards more extended structures. In the Monte Carlo simulations presented here, focus

Peterson, Carsten

426

A Monte Carlo Study of Titrating Polyelectrolytes

A Monte Carlo Study of Titrating Polyelectrolytes, Magnus Ullner and Bo Jönsson, Journal of Chemical Physics 104, 3048-3057 (1996). Monte Carlo simulations have been used to study titrating polyelectrolytes; titration makes sampling of the polymer more difficult and biases the conformations towards more extended structures. In the Monte Carlo

Peterson, Carsten

427

Monte Carlo Methods in Statistics Christian Robert

Monte Carlo Methods in Statistics, Christian Robert, Université Paris Dauphine and CREST, INSEE, September 2, 2009. Monte Carlo methods are now an essential part of the statistician's toolbox, to the point... We recall in this note some of the advances made in the design of Monte Carlo techniques towards

Boyer, Edmond

428

MONTE CARLO METHOD AND SENSITIVITY ESTIMATIONS

Monte Carlo Method and Sensitivity Estimations, A. de Lataillade, S. Blanco, Y. Clergent. ...on a formal basis, and simple radiative transfer examples are used for illustration. Keywords: Monte Carlo. Submitted to Elsevier Science, 18 February 2002. Introduction: Monte Carlo methods are commonly used

Dufresne, Jean-Louis

429

Advanced Monte Carlo Methods: American Options

Advanced Monte Carlo Methods: American Options, Prof. Mike Giles (mike.giles@maths.ox.ac.uk), Oxford. A central problem for Monte Carlo methods is the accurate and efficient pricing of options with optional early exercise; working backwards in time doesn't fit well with Monte Carlo methods, which go forwards in time.

Giles, Mike

430

Abstract: Introducing the original ideas of using Monte-Carlo simulation in computer Go; adding new ideas to the pure Monte-Carlo approach for computer Go: progressive pruning, the all-moves-as-first heuristic, temperature, simulated annealing, depth-2 tree search; parallel Monte-Carlo tree search. Conclusion: with the ever-increasing power of computers, we can add more knowledge to the

Tsan-sheng Hsu

431

Monte Carlo simulations in SPET and PET

Monte Carlo methods are extensively used in Nuclear Medicine to tackle a variety of problems that are difficult to study by an experimental or analytical approach. A review of the most recent tools allowing application of Monte Carlo methods in single photon emission tomography (SPET) and positron emission tomography (PET) is presented. To help potential Monte Carlo users choose

I. Buvat; I. Castiglioni

2002-01-01

432

Advanced Monte Carlo Methods: American Options

Advanced Monte Carlo Methods: American Options, Prof. Mike Giles (mike.giles@maths.ox.ac.uk), Oxford. A central problem for Monte Carlo methods is the accurate and efficient pricing of options with optional early exercise; working backwards in time doesn't fit well with Monte Carlo methods, which go forwards in time.

Giles, Mike

433

Monte Carlo Go Using Previous Simulation Results

Research on Go using Monte Carlo methods has been a hot topic in recent years. In particular, Monte Carlo Tree Search algorithms such as UCT made great contributions to the development of computer Go. When Monte Carlo methods were used for Go, the previous simulation results were not usually stored. In this paper, we suggest a new idea of

Takuma TOYODA; Yoshiyuki KOTANI

2010-01-01

434

Thermodynamic Scaling Gibbs Ensemble Monte Carlo

Thermodynamic Scaling Gibbs Ensemble Monte Carlo: a new method for determination of phase equilibria. (E-mail for correspondence: azp2@cornell.edu.) We combine Valleau's thermodynamic scaling Monte Carlo concept with Gibbs ensemble Monte Carlo simulations. There has been significant recent progress in molecular simulation method

435

NASA Astrophysics Data System (ADS)

Starting from equations of state of nucleonic and color-superconducting quark matter and assuming a first-order phase transition between these, we construct an equation of state of stellar matter which contains three phases: a nucleonic phase, as well as two-flavor and three-flavor color-superconducting phases of quarks. Static sequences of the corresponding hybrid stars include massive members with masses of ~2 M☉ and radii in the range 13 ≲ R ≲ 16 km. We investigate the integral parameters of rapidly rotating stars and obtain evolutionary sequences that correspond to constant rest-mass stars spinning down by electromagnetic and gravitational radiation. Physically new transitional sequences are revealed that are distinguished by a phase transition from nucleonic to color-superconducting matter for some configurations located between the static and Keplerian limits. Snapshots of the internal structure of the star, displaying the growth or shrinkage of the superconducting volume as the star's spin changes, are displayed for constant rest-mass stars. We further obtain evolutionary sequences of rotating supramassive compact stars and construct pre-collapse models that can be used as initial data to simulate the collapse of color-superconducting hybrid stars to a black hole.

Ayvazyan, N. S.; Colucci, G.; Rischke, D. H.; Sedrakian, A.

2013-11-01

436

Monte Carlo calculation of monitor unit for electron arc therapy

Purpose: Monitor unit (MU) calculations for electron arc therapy were carried out using Monte Carlo simulations and verified by measurements. Variations in the dwell factor (DF), source-to-surface distance (SSD), and treatment arc angle (α) were studied. Moreover, the possibility of measuring the DF, which requires gantry rotation, using a solid water rectangular, instead of cylindrical, phantom was investigated. Methods: A phase space file based on the 9 MeV electron beam with rectangular cutout (physical size = 2.6 × 21 cm²) attached to the block tray holder of a Varian 21 EX linear accelerator (linac) was generated using the EGSnrc-based Monte Carlo code and verified by measurement. The relative output factor (ROF), SSD offset, and DF, needed in the MU calculation, were determined using measurements and Monte Carlo simulations. An ionization chamber, a radiographic film, a solid water rectangular phantom, and a cylindrical phantom made of polystyrene were used in dosimetry measurements. Results: Percentage deviations of ROF, SSD offset, and DF between measured and Monte Carlo results were 1.2%, 0.18%, and 1.5%, respectively. It was found that the DF decreased with an increase in α, and such a decrease in DF was more significant in the α range of 0°-60° than 60°-120°. Moreover, for a fixed α, the DF increased with an increase in SSD. Comparing the DF determined using the rectangular and cylindrical phantoms through measurements and Monte Carlo simulations, it was found that the DF determined by the rectangular phantom agreed well with that by the cylindrical one within ±1.2%. This shows that a simple setup of a solid water rectangular phantom was sufficient to replace the cylindrical phantom using our specific cutout to determine the DF associated with the electron arc.
Conclusions: By verification against dosimetry measurements, Monte Carlo simulation proved to be an effective alternative way to perform MU calculations for electron arc therapy. Since Monte Carlo simulations can generate a precalculated database of ROF, SSD offset, and DF for the MU calculation, with a reduction in human effort and linac beam-on time, it is recommended that Monte Carlo simulations be partially or completely integrated into the commissioning of electron arc therapy.

Chow, James C. L.; Jiang Runqing [Radiation Medicine Program, Princess Margaret Hospital, University Health Network, Toronto, Ontario M5G 2M9 (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario M5G 2M9 (Canada) and Department of Physics, Ryerson University, Toronto, Ontario M5B 2K3 (Canada); Department of Medical Physics, Grand River Regional Cancer Center, Kitchener, Ontario N2G 1G3 (Canada)

2010-04-15

437

A new method to assess Monte Carlo convergence

The central limit theorem can be applied to a Monte Carlo solution if the following two requirements are satisfied: (1) the random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these are satisfied, a confidence interval based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by knowledge of the type of Monte Carlo tally being used. The Monte Carlo practitioner has only a limited number of marginally quantifiable methods that use sampled values to assess the fulfillment of the second requirement; e.g., statistical error reduction proportional to 1/√N with error magnitude guidelines. No consideration is given to what has not yet been sampled. A new method is presented here to assess the convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores, f(x), where the random variable x is the score from one particle history and f(x) integrates to unity over (−∞, ∞).
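The standard CLT-based interval that this abstract contrasts with can be sketched in a few lines; the history-score model here (x = U² for uniform U, true mean 1/3) is purely illustrative.

```python
import math
import random

def mc_mean_with_ci(sample, n, z=1.96, seed=0):
    """Estimate E[X] from n independent draws and form a CLT-based
    confidence interval: mean +/- z * s / sqrt(n)."""
    rng = random.Random(seed)
    xs = [sample(rng) for _ in range(n)]
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)                     # z times the standard error
    return mean, (mean - half, mean + half)

# Toy history score: x = U^2 for U uniform on (0, 1); the true mean is 1/3.
mean, (lo, hi) = mc_mean_with_ci(lambda r: r.random() ** 2, 100_000)
print(mean, lo, hi)
```

The interval width shrinks like 1/√N, which is exactly the "statistical error reduction" guideline the abstract mentions; the paper's point is that this says nothing about unsampled regions of the score PDF.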

Forster, R.A.; Booth, T.E.; Pederson, S.P.

1993-01-01

438

The Rhenish stoneware from the Monte Cristi shipwreck, Dominican Republic

Figure-list excerpt: 7. Top of a Monte Cristi Bartmannkrug (RSW 6); 8. Top of a Monte Cristi Bartmannkrug (RSW 7); 9. Complete base of a Monte Cristi Bartmannkrug; 10. Handle of a Monte Cristi Bartmannkrug, reconstructed from three sherds.

Lessmann, Anne Wood

2012-06-07

439

Comparative Monte Carlo Efficiency by Monte Carlo Analysis

We propose a modified power method for computing the subdominant eigenvalue $\lambda_2$ of a matrix or continuous operator. Here we focus on defining simple Monte Carlo methods for its application. The methods presented use random walkers of mixed signs to represent the subdominant eigenfunction. Accordingly, the methods must cancel these signs properly in order to sample this eigenfunction faithfully. We present a simple procedure to solve this sign problem and then test our Monte Carlo methods by computing the $\lambda_2$ of various Markov chain transition matrices. We first computed $\lambda_2$ for several one- and two-dimensional Ising models, which have a discrete phase space, and compared the relative efficiencies of the Metropolis and heat-bath algorithms as a function of temperature and applied magnetic field. Next, we computed $\lambda_2$ for a model of an interacting gas trapped by a harmonic potential, which has a multidimensional continuous phase space, and studied the efficiency of the Metropolis ...
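The deterministic skeleton of such a method, a power iteration deflated against the leading eigenvector, can be sketched as follows. The 2-state chain and the projection trick below are illustrative assumptions; the paper's methods are Monte Carlo analogues of this iteration, with walkers of mixed signs playing the role of the vector.

```python
def mat_vec(P, v):
    return [sum(P[i][j] * v[j] for j in range(len(v))) for i in range(len(P))]

def vec_mat(v, P):
    return [sum(v[i] * P[i][j] for i in range(len(v))) for j in range(len(v))]

def subdominant_eigenvalue(P, iters=200):
    """|lambda_2| of a row-stochastic matrix P by deflated power iteration."""
    n = len(P)
    # Stage 1: stationary distribution pi (leading LEFT eigenvector, pi P = pi).
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = vec_mat(pi, P)
        s = sum(pi)
        pi = [x / s for x in pi]
    # Stage 2: power iteration deflated against the leading RIGHT eigenvector
    # (the all-ones vector), via the projection v <- v - (pi . v) * 1,
    # which is valid because pi . 1 = 1.
    v = [1.0] + [0.0] * (n - 1)
    lam = 0.0
    for _ in range(iters):
        dot = sum(p * x for p, x in zip(pi, v))
        v = [x - dot for x in v]
        w = mat_vec(P, v)
        lam = max(abs(x) for x in w) / max(abs(x) for x in v)
        v = w
    return lam

# Two-state chain: eigenvalues are 1 and 1 - 0.1 - 0.2 = 0.7.
P = [[0.9, 0.1], [0.2, 0.8]]
lam = subdominant_eigenvalue(P)
print(lam)
```

Note the iteration returns the magnitude |λ₂|; recovering the sign, and doing all of this stochastically, is precisely the sign problem the abstract addresses.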

Rubenstein, B M; Doll, J D

2010-01-01

440

Efficient pseudo-random number generation for monte-carlo simulations using graphic processors

NASA Astrophysics Data System (ADS)

A hybrid approach based on the combination of three Tausworthe generators and one linear congruential generator, as suggested in the NVIDIA CUDA library for pseudo-random number generation in GPU programming, has been used for Monte Carlo sampling. On each GPU thread, a random seed is generated on the fly in a simple way using the "quick and dirty" algorithm, where the mod operation is not performed explicitly because unsigned integer overflow supplies it. Using this hybrid generator, multivariate correlated sampling based on the alias technique has been carried out in both the CUDA and OpenCL languages.
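The three-Tausworthe-plus-LCG combination described matches the well-known HybridTaus recipe (GPU Gems 3, using L'Ecuyer's taus88 parameters); a Python port might look as follows. The seed values and class shape here are illustrative, subject only to the usual requirement that the Tausworthe states be seeded with values greater than 128. On a GPU the `& M32` masks are free, since 32-bit unsigned arithmetic overflows implicitly, which is the "mod not performed explicitly" trick the abstract mentions.

```python
M32 = 0xFFFFFFFF  # emulate 32-bit unsigned overflow explicitly in Python

def taus_step(z, s1, s2, s3, m):
    """One step of a Tausworthe (LFSR) generator."""
    b = ((((z << s1) & M32) ^ z) >> s2)
    return (((z & m) << s3) & M32) ^ b

def lcg_step(z, a=1664525, c=1013904223):
    """'Quick and dirty' LCG; mod 2^32 via masking."""
    return (a * z + c) & M32

class HybridTaus:
    def __init__(self, z1, z2, z3, z4):
        # Tausworthe states must be seeded with values > 128.
        self.z1, self.z2, self.z3, self.z4 = z1, z2, z3, z4

    def next(self):
        self.z1 = taus_step(self.z1, 13, 19, 12, 0xFFFFFFFE)
        self.z2 = taus_step(self.z2, 2, 25, 4, 0xFFFFFFF8)
        self.z3 = taus_step(self.z3, 3, 11, 17, 0xFFFFFFF0)
        self.z4 = lcg_step(self.z4)
        combined = self.z1 ^ self.z2 ^ self.z3 ^ self.z4
        return combined * 2.3283064365386963e-10  # scale by 2^-32 to [0, 1)

rng = HybridTaus(129, 130, 131, 132)
print([rng.next() for _ in range(3)])
```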

Mohanty, Siddhant; Mohanty, A. K.; Carminati, F.

2012-06-01

441

Kinematics of Multigrid Monte Carlo

We study the kinematics of multigrid Monte Carlo algorithms by means of acceptance rates for nonlocal Metropolis update proposals. An approximation formula for acceptance rates is derived. We present a comparison of different coarse-to-fine interpolation schemes in free field theory, where the formula is exact. The predictions of the approximation formula for several interacting models are well confirmed by Monte Carlo simulations. The following rule is found: for a critical model with fundamental Hamiltonian H(φ), absence of critical slowing down can only be expected if the expansion of ⟨H(φ + ψ)⟩ in terms of the shift ψ contains no relevant (mass) term. We also introduce a multigrid update procedure for nonabelian lattice gauge theory and study the acceptance rates for gauge group SU(2) in four dimensions.
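The "relevant mass term" rule can be illustrated with a toy nonlocal update: a constant shift ψ applied to every site of a 1-D periodic free field. The lattice size, shift width, and mass values below are arbitrary illustrative choices, not taken from the paper. For m² = 0 the shift is a symmetry of the periodic action (the acceptance rate is 1); a mass term makes ΔH grow with the volume and the rate drops.

```python
import math
import random

def action(phi, m2):
    """1-D free field with periodic boundaries: kinetic + mass term."""
    n = len(phi)
    kin = sum((phi[(i + 1) % n] - phi[i]) ** 2 for i in range(n)) / 2.0
    return kin + m2 * sum(x * x for x in phi) / 2.0

def shift_acceptance(n=32, m2=0.0, eps=0.5, sweeps=2000, seed=3):
    """Acceptance rate of a NONLOCAL Metropolis proposal: one random
    constant shift psi added to every site at once."""
    rng = random.Random(seed)
    phi = [rng.gauss(0, 1) for _ in range(n)]
    accepted = 0
    for _ in range(sweeps):
        psi = rng.uniform(-eps, eps)
        trial = [x + psi for x in phi]
        dH = action(trial, m2) - action(phi, m2)
        if dH <= 0 or rng.random() < math.exp(-dH):
            phi = trial
            accepted += 1
    return accepted / sweeps

print(shift_acceptance(m2=0.0), shift_acceptance(m2=0.5))
```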

M. Grabenstein; K. Pinn

1992-07-03

442

Exclusive Monte Carlo modelling of NLO DGLAP evolution

The next-to-leading order (NLO) evolution of the parton distribution functions (PDFs) in QCD is a common tool in lepton-hadron and hadron-hadron collider data analysis. The standard NLO DGLAP evolution is formulated for inclusive (integrated) PDFs and done using inclusive NLO kernels. We report here on an ongoing project, called KRKMC, in which NLO DGLAP evolution is performed for the exclusive multiparton (fully unintegrated) distributions (ePDFs) with the help of exclusive kernels. These kernels are calculated within the two-parton phase space for the non-singlet evolution, using the Curci-Furmanski-Petronzio factorization scheme. The multiparton distribution, with multiple use of the exclusive NLO kernels, is implemented in a Monte Carlo program simulating multi-gluon emission from a single quark emitter. High-statistics tests ($\sim 10^{10}$ events) show that the new scheme works well in practice and, at the inclusive (integrated) level, is equivalent to the traditional inclusive NLO DGLAP evolution. Once completed, this new technique is intended as a building block for a more precise NLO parton shower Monte Carlo, for W/Z production at the LHC and for ep scattering, as well as a starting point for other perturbative-QCD-based Monte Carlo projects.

S. Jadach; M. Skrzypek; A. Kusina; M. Slawinska

2010-01-29

443

Bilinear quantum Monte Carlo: Expectations and energy differences

We propose a bilinear sampling algorithm in Green's function Monte Carlo for expectation values of operators that do not commute with the Hamiltonian and for differences between eigenvalues of different Hamiltonians. The integral representations of the Schrödinger equations are transformed into two equations whose solution has the form Ψ_a(x) t(x,y) Ψ_b(y), where Ψ_a and Ψ_b are the wavefunctions for the two related systems and t(x,y) is a kernel chosen to couple x and y. The Monte Carlo process, with random walkers on the enlarged configuration space x ⊗ y, solves these equations by generating densities whose asymptotic form is the above bilinear distribution. With such a distribution, exact Monte Carlo estimators can be obtained for the expectation values of quantum operators and for energy differences. We present results of these methods applied to several test problems, including a model integral equation and the hydrogen atom.

Zhang, S.; Kalos, M.H. (Cornell Univ., Ithaca, NY (United States))

1993-02-01

444

Bacteria Allocation Using Monte Carlo

NSDL National Science Digital Library

This applet, created by David Hill and Lila Roberts, uses the Monte Carlo technique to simulate a count of bacteria that are present as a result of a certain sampling process. This simulation could be modified to perform other experiments. This experiment is geared towards high school calculus students or probability courses for mathematics majors in college. Students must possess a basic understanding of probability concepts before performing this experiment. Overall, it is a nice activity for a mathematics classroom.

Hill, David R.; Roberts, Lila F.

2009-11-24

445

Errata (June 14, 2014) for Exploring Monte Carlo Methods, William L. Dunn and J. Kenneth Shultis: Section 3.53, "m = 16807" should read "a = 16807"; p. 76, Example 4.4, line 8, "yi = R(1 - j)" should read "yi = R(1 - 2j)"; p. 78, "f(x)" should read "fm(x)"; p. 112, Eqs. (5.48) and (5.49), index j; p. 112, Eq. (5.49), the sum limit N should read M; p. 120, first ...

Shultis, J. Kenneth

446

NASA Technical Reports Server (NTRS)

A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing system reliability, applicable to solving very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault-error handling models. The new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also supported.

Platt, M. E.; Lewis, E. E.; Boehm, F.

1991-01-01

447

The MCLIB library: Monte Carlo simulation of neutron scattering instruments

Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated; the process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, one first selects a neutron from the source distribution, projects it through the instrument using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something, and then (if it hits the detector) tallies it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for the design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
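The select/transport/tally loop described above can be caricatured in a few lines. This 1-D slab toy, with all cross sections and geometry invented for illustration, is far simpler than MCLIB's Fortran instrument models, but it runs the same loop: sample a neutron, fly it in exponential steps, scatter or absorb at collisions, and histogram what reaches the far side.

```python
import math
import random

def simulate(n_histories, slab_thickness=2.0, sigma_t=1.0, p_absorb=0.3,
             n_bins=10, seed=42):
    """Toy 1-D slab transport: sample a neutron at x=0 moving in +x, draw
    exponential flight paths, absorb or isotropically scatter at each
    collision, and tally transmitted neutrons by total path length."""
    rng = random.Random(seed)
    hist = [0] * n_bins
    transmitted = 0
    for _ in range(n_histories):
        x, mu, path = 0.0, 1.0, 0.0      # position, direction cosine, path length
        while True:
            step = -math.log(rng.random()) / sigma_t  # exponential free path
            x += mu * step
            path += step
            if x >= slab_thickness:       # escaped through the far face: tally
                transmitted += 1
                b = min(int(path / slab_thickness), n_bins - 1)
                hist[b] += 1
                break
            if x < 0.0:                   # escaped backwards: history lost
                break
            if rng.random() < p_absorb:   # collision: absorbed
                break
            mu = rng.uniform(-1.0, 1.0)   # crude isotropic scatter (new cosine)
    return transmitted, hist

t, hist = simulate(20_000)
print(t / 20_000, hist)
```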

Seeger, P.A.

1995-09-01

448

Exploring Various Monte Carlo Simulations for Geoscience Applications

NASA Astrophysics Data System (ADS)

Computer simulations are increasingly important in geoscience research and development. At the core of stochastic or Monte Carlo simulations are the random number sequences that are assumed to be distributed with specific characteristics. Computer generated random numbers, uniformly distributed on (0, 1), can be very different depending on the selection of pseudo-random number (PRN), or chaotic random number (CRN) generators. Equidistributed quasi-random numbers (QRNs) can also be used in Monte Carlo simulations. In the evaluation of some definite integrals, the resulting error variances can even be of different orders of magnitude. Furthermore, practical techniques for variance reduction such as Importance Sampling and Stratified Sampling can be implemented to significantly improve the results. A comparative analysis of these strategies has been carried out for computational applications in planar and spatial contexts. Based on these experiments, and on examples of geodetic applications of gravimetric terrain corrections and gravity inversion, conclusions and recommendations concerning their performance and general applicability are included.
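The variance-reduction comparison mentioned above can be reproduced in miniature. This sketch compares plain and stratified sampling of the definite integral of x² over (0, 1); the integrand, sample sizes, and repetition counts are arbitrary illustrative choices, and importance sampling would be added along the same lines.

```python
import random

def plain_mc(f, n, rng):
    """Ordinary Monte Carlo estimate of the integral of f over (0, 1)."""
    return sum(f(rng.random()) for _ in range(n)) / n

def stratified_mc(f, n, rng):
    """One sample per equal-width stratum: x_i uniform in (i/n, (i+1)/n)."""
    return sum(f((i + rng.random()) / n) for i in range(n)) / n

def empirical_var(estimator, reps, seed=1):
    """Variance of an estimator over independent repetitions."""
    rng = random.Random(seed)
    vals = [estimator(rng) for _ in range(reps)]
    m = sum(vals) / reps
    return sum((v - m) ** 2 for v in vals) / (reps - 1)

f = lambda x: x * x           # the exact integral over (0, 1) is 1/3
n = 100
v_plain = empirical_var(lambda r: plain_mc(f, n, r), 500)
v_strat = empirical_var(lambda r: stratified_mc(f, n, r), 500)
print(v_plain, v_strat)
```

For a smooth integrand the stratified estimator's variance falls like 1/n³ rather than 1/n, so the two empirical variances differ by orders of magnitude, the kind of gap the abstract reports between strategies.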

Blais, R.

2010-12-01

449

Exploring pseudo- and chaotic random Monte Carlo simulations

NASA Astrophysics Data System (ADS)

Computer simulations are an increasingly important area of geoscience research and development. At the core of stochastic or Monte Carlo simulations are the random number sequences that are assumed to be distributed with specific characteristics. Computer-generated random numbers, uniformly distributed on (0, 1), can be very different depending on the selection of pseudo-random number (PRN) or chaotic random number (CRN) generators. In the evaluation of some definite integrals, the resulting error variances can even be of different orders of magnitude. Furthermore, practical techniques for variance reduction such as importance sampling and stratified sampling can be applied in most Monte Carlo simulations and significantly improve the results. A comparative analysis of these strategies has been carried out for computational applications in planar and spatial contexts. Based on these experiments, and on some practical examples of geodetic direct and inverse problems, conclusions and recommendations concerning their performance and general applicability are included.

Blais, J. A. Rod; Zhang, Zhan

2011-07-01

450

Monte Carlo simulations of fermion systems: the determinant method

Described are the details for performing Monte Carlo simulations on systems of fermions at finite temperatures by use of a technique called the Determinant Method. This method is based on a functional integral formulation of the fermion problem (Blankenbecler et al., Phys. Rev D 24, 2278 (1981)) in which the quartic fermion-fermion interactions that exist for certain models are transformed into bilinear ones by the introduction (J. Hirsch, Phys. Rev. B 28, 4059 (1983)) of Ising-like variables and an additional finite dimension. It is on the transformed problem the Monte Carlo simulations are performed. A brief summary of research on two such model problems, the spinless fermion lattice gas and the Anderson impurity problem, is also given.

Gubernatis, J.E.

1985-01-01

451

Model-Free Control Design for Hybrid Magnetic Levitation System

This study investigates three model-free control strategies for a hybrid magnetic levitation (maglev) system: a simple proportional-integral-derivative (PID) scheme, a fuzzy neural network (FNN) control, and a robust control. In general, the lumped dynamic model of a hybrid maglev system can be derived from the principle of transforming electrical energy into mechanical energy. In practice, this hybrid maglev system is inherently ...

Rong-Jong Wai; Jeng-Dao Lee; Chiung-Chou Liao

2005-01-01

452

A hybrid shop-floor control system for food manufacturing

This paper describes research in the area of hybrid control using simulation, including a suitable architecture and a test-bed developed for experimenting with hybrid manufacturing systems - that is, manufacturing systems containing both continuous and discrete processing activities. The paper builds on the RapidCIM control system developed at Penn State University and makes innovations in this work, including integrating continuous ...

MANUEL J. MORENO-LIZARANZU; RICHARD A. WYSK; JOONKI HONG; VITTALDAS V. PRABHU

2001-01-01

453

Clinical Role of Hybrid Imaging

Recent technological advances have fueled the growth in hybrid radionuclide and CT imaging of the heart. Noninvasive imaging studies are reliable means to diagnose coronary artery disease (CAD), stratify risk, and guide clinical management. Myocardial perfusion scintigraphy is a robust, widely available noninvasive modality for the evaluation of ischemia from known or suspected CAD. Cardiac CT (coronary artery calcium score and coronary CT angiography) has emerged as a clinically robust noninvasive anatomic imaging test, capable of rapidly diagnosing or excluding obstructive CAD. Both anatomic and functional modalities have strengths and weaknesses, and can complement each other by offering integrated structural and physiologic information. As we discuss below, in selected patients, hybrid imaging may facilitate more accurate diagnosis, risk stratification, and management in a “one-stop shop” setting. PMID:23467390

Hsiao, Edward M.; Ali, Bilal

2013-01-01

454

A HYBRID LANGUAGE FOR MODELING, SIMULATION AND VERIFICATION

, such as an integrated circuit manufacturing plant, a brewery, and process industry plants (van Beek et al., 2002), discrete-event systems, and hybrid systems that can dynamically change from lower to higher index, and vice versa ...

Reniers, Michel

455

Photovoltaic nanocrystal scintillators hybridized on Si solar cells

We propose and demonstrate semiconductor nanocrystal based photovoltaic scintillators integrated on Si solar cells to enhance photovoltaic device parameters, including spectral responsivity and open-circuit ...

Demir, Hilmi Volkan

456

Hybrid and multifield inflation

In this thesis I study the generation of density perturbations in two classes of inflationary models: hybrid inflation and multifield inflation with non-minimal coupling to gravity. In the case of hybrid inflation, we ...

Sfakianakis, Evangelos I

2014-01-01

457

Difficulties in Detecting Hybridization

Discusses difficulties in detecting hybridization. Comments on the proposal by T. Sang and Y. Zhong to distinguish between hybridization and lineage sorting, which are processes that can produce discordant gene trees; ...

Holder, Mark T.; Anderson, Jennifer A.; Holloway, Alisha K.

2001-01-01

458

Hybrid networks are networks that have wired as well as wireless components. Several routing protocols exist for traditional wired networks and mobile ad-hoc networks. However, there are very few routing protocols designed for hybrid networks...

Gupta, Avinash

2012-06-07

459

Monte-Carlo Go Reinforcement Learning Experiments

This paper describes experiments using reinforcement learning techniques to compute pattern urgencies used during simulations performed in a Monte-Carlo Go architecture. Currently, Monte-Carlo is a popular technique for computer Go. In a previous study, Monte-Carlo was associated with domain-dependent knowledge in the Go-playing program Indigo. In 2003, a 3x3 pattern database was built manually. This paper explores the possibility ...

Bruno Bouzy; Guillaume Chaslot

2006-01-01

460

The PHOBOS Glauber Monte Carlo

"Glauber" models are used to calculate geometric quantities in the initial state of heavy-ion collisions, such as the impact parameter, the number of participating nucleons, and the initial eccentricity. The four RHIC experiments have different methods for Glauber model calculations, leading to similar results for various geometric observables. In this document, we describe an implementation of the Monte Carlo based Glauber model calculation used by the PHOBOS experiment. The assumptions that go into the calculation are described. A user's guide is provided for running various calculations.
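A drastically simplified version of such a Monte Carlo Glauber calculation might look as follows. It uses uniform-sphere nucleon sampling instead of the Woods-Saxon profile a real implementation such as PHOBOS's employs, and all parameter values (gold-like A, radius, impact parameter, nucleon-nucleon cross section) are illustrative assumptions.

```python
import math
import random

def sample_nucleus(A, R, rng):
    """Crude nucleon positions sampled uniformly inside a sphere of radius R
    (a realistic Glauber MC samples a Woods-Saxon density instead);
    returns transverse (x, y) coordinates."""
    pts = []
    for _ in range(A):
        while True:
            x, y, z = (rng.uniform(-R, R) for _ in range(3))
            if x * x + y * y + z * z <= R * R:
                pts.append((x, y))
                break
    return pts

def n_participants(A=197, R=6.38, b=5.0, sigma_nn=4.2, seed=7):
    """Count participants at impact parameter b (fm): a nucleon participates
    if any nucleon of the other nucleus lies within d = sqrt(sigma_nn / pi)
    transversely, with sigma_nn in fm^2 (4.2 fm^2 = 42 mb)."""
    rng = random.Random(seed)
    d2 = sigma_nn / math.pi
    na = sample_nucleus(A, R, rng)
    nb = [(x + b, y) for (x, y) in sample_nucleus(A, R, rng)]  # shift by b
    hit_a = sum(any((ax - bx) ** 2 + (ay - by) ** 2 <= d2 for bx, by in nb)
                for ax, ay in na)
    hit_b = sum(any((ax - bx) ** 2 + (ay - by) ** 2 <= d2 for ax, ay in na)
                for bx, by in nb)
    return hit_a + hit_b

npart = n_participants()
print(npart)
```

Averaging such counts over many sampled nucleon configurations and impact parameters gives the event-averaged geometric quantities the document describes.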

B. Alver; M. Baker; C. Loizides; P. Steinberg

2008-05-28